Compare commits

...

799 Commits

Author SHA1 Message Date
Tupin Laurent
05dabb7e7b Fix Bionic networking restart error #3430 (#3431) 2018-10-02 03:10:52 -07:00
okamototk
66e304c41b Fixed Ubuntu 18.04's docker version(fixes #3424). (#3425) 2018-10-01 04:26:51 -07:00
LiuDui
192f7967c9 Remove excess space (#3421) 2018-10-01 00:09:45 -07:00
SataQiu
f67d82a9db fix typo: delete duplicate words (#3422) 2018-10-01 00:07:25 -07:00
Luke Seelenbinder
3cfbc1a79a Add Pod IP to Flannel manifest. (#3379) 2018-10-01 00:06:13 -07:00
rboyapat
d9f495d391 Fix the dict iteration method in the kubelet template (#3415)
* Fix the jinja expression for openstack_tenant_id

OS_PROJECT_ID is obsolete in keystone v3 and the jinja expression
doesn't set openstack_tenant_id as expected because of an
undefined env var. Fixed the expression.

* Fix the dict iteration method in the kubelet template

The kubelet template fails to render when additional node labels are
added and Python 3 is used. Update the method to be compatible with
both Python 2 and 3.

Node labels don't work.
2018-09-30 05:10:12 -07:00
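
A minimal sketch of the dict-iteration fix described above; the `node_labels` variable name is an assumption used only for illustration:

    {# hypothetical kubelet template fragment: .items() works on both Python 2 and 3, unlike .iteritems() #}
    {% for key, value in node_labels.items() %}
    {{ key }}={{ value }}
    {% endfor %}
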
SataQiu
71f6c018ce fix typo: remove repeated words(is) (#3419) 2018-09-29 21:04:43 -07:00
LiuDui
0401f4afff remove the redundant space (#3420) 2018-09-29 21:03:27 -07:00
Mikael Berthe
b4989b5a2a Fix netcheck agent/server image variable names (#3417)
According to the documentation, container images are described
by vars like `foo_image_repo` and `foo_image_tag`.
The variables netcheck_{agent,server}_{img_repo,tag} do not
follow that convention.
2018-09-29 20:44:01 -07:00
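
A sketch of that convention for the netcheck images; the repositories and tags below are assumptions, not the project defaults:

    # hypothetical group_vars entries following the foo_image_repo / foo_image_tag convention
    netcheck_agent_image_repo: "mirantis/k8s-netchecker-agent"
    netcheck_agent_image_tag: "v1.2.2"
    netcheck_server_image_repo: "mirantis/k8s-netchecker-server"
    netcheck_server_image_tag: "v1.2.2"
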
SataQiu
6f4054679e Remove the redundant space (#3418) 2018-09-29 20:31:57 -07:00
KMilhan
df7d53b9ef Fix ready so that it is not an AnsibleUnsafeText (#3404) 2018-09-28 05:07:27 -07:00
Andreas Holmsten
0a9a42b544 Change from Nova security groups to Neutron (#2910)
* Replace `openstack_compute_secgroup_v2` with `openstack_networking_secgroup_v2`

The `openstack_networking_secgroup_v2` resource allows specification of
both ingress and egress. Nova security groups define ingress rules only.

This change will also allow for more user-friendly specified security
rules, as the different security group resources have different HCL
syntax.
2018-09-28 11:35:02 +02:00
Rong Zhang
0232e755f3 Upgrade kubedns and kubednsautoscaler (#3407) 2018-09-28 01:20:08 -07:00
sangwook
0536125f75 Better fix for openstack cinder zone issue using ignore-volume-az option (#2980)
* Better fix for openstack cinder zone issue[1][2]
using ignore-volume-az option[3].
[1]: https://github.com/kubernetes-incubator/kubespray/pull/2155
[2]: https://github.com/kubernetes-incubator/kubespray/pull/2346
[3]: https://github.com/kubernetes/kubernetes/pull/53523

* Remove kube-scheduler-policy.yaml
2018-09-27 22:15:47 -07:00
Cédric de Saint Martin
53d87e53c5 All CNIs: support ANY toleration. (#3391)
Before, Nodes tainted with a NoExecute policy did not get a calico/weave Pod.
The network Pod should run on all nodes regardless of what happens on a specific node.

Also always set the Pods to be critical.
Also remove deprecated scheduler.alpha.kubernetes.io/tolerations annotations.
2018-09-27 05:28:54 -07:00
Erwan Miran
232020ef96 skip-exists is a flag of the create command, not of calicoctl (#3401) 2018-09-27 04:57:02 -07:00
Antoine Legrand
a22d74b165 Update README.md (#3402) 2018-09-27 13:32:28 +02:00
Shida Qiu
8b8e534769 remove the redundant space (#3400) 2018-09-27 03:32:26 -07:00
arzarif
6b71229d3f Resolve issues associated with Calico deployment in policy-only mode. (#3392) 2018-09-27 03:31:14 -07:00
刘旭
145e5c8943 use copy and slurp module (#3313) 2018-09-27 02:12:02 -07:00
Ryan McGuire
28315ca933 Fix README links to new inventory file paths. (#3398) 2018-09-27 01:09:51 -07:00
Victor Palma
dced082e5f fixes roles/docker/vars/ubuntu-bionic.yml points to xenial (#3395)
* fixes: #3387
2018-09-27 01:08:39 -07:00
Tupin Laurent
408faac3c9 Pip is required for vault #3376 (#3378)
* Change execution order for pip

* Remove spaces
2018-09-26 00:28:54 -07:00
Tupin Laurent
cd4a606cb1 UI is required for vault #3376 (#3377) 2018-09-26 00:27:38 -07:00
Hoat Le
c7c3effd6f Ansible var should be quoted (#3393)
to fix the following problem when quotes are not used:

PLAY [k8s-cluster:etcd:calico-rr] **********************************************
ERROR! Syntax Error while loading YAML.
  expected <block end>, but found '<scalar>'

The error appears to have been in '/tmp/vagrant-ansible/inventory/group_vars/k8s-cluster.yml': line 59, column 39, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

kube_oidc_ca_file: {{ kube_cert_dir }}/openid-ca.pem
                                      ^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes.  Always quote template expression brackets when they
start a value. For instance:

    with_items:
      - {{ foo }}

Should be written as:

    with_items:
      - "{{ foo }}"

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
2018-09-25 23:35:35 -07:00
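
Applying the advice from the error output above, the offending group_vars line becomes:

    kube_oidc_ca_file: "{{ kube_cert_dir }}/openid-ca.pem"
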
Kuldip Madnani
36898a2c39 Adding pod priority for all the components. (#3361)
* Changes to assign pod priority to kube components.

* Removed the boolean flag pod_priority_assignment

* Created new priorityclass k8s-cluster-critical

* Created new priorityclass k8s-cluster-critical

* Fixed the trailing spaces

* Fixed the trailing spaces

* Added kube version check while creating Priority Class k8s-cluster-critical

* Moved k8s-cluster-critical.yml

* Moved k8s-cluster-critical.yml to kube_config_dir
2018-09-25 07:50:22 -07:00
Wilmar den Ouden
8526c30b63 Replaces nonexisting system_namespace variable (#3389) 2018-09-25 01:39:02 -07:00
Andreas Krüger
d6ebe8c3e7 Sync manifests with kubeadm (#3383) 2018-09-24 02:17:18 -07:00
Rui Cao
02de35cfc3 Fix some typos (#3382)
Signed-off-by: Rui Cao <ruicao@alauda.io>
2018-09-23 06:33:17 -07:00
Arnaud M
7d8e21634c add CRI-O in the list of core components supported (#3381) 2018-09-22 23:03:53 -07:00
k8s-ci-robot
6b598eaacb Merge pull request #3367 from mgsergio/master
Add check that kube-master, kube-node and etcd groups are not empty.
2018-09-21 07:09:44 -07:00
Sergey Magidovich
2197330727 Add check that kube-master, kube-node and etcd groups are not empty. 2018-09-21 17:02:53 +03:00
k8s-ci-robot
1d8627eb8b Merge pull request #3370 from AnatolyRugalev/issue-3357
Added download_validate_certs option
2018-09-21 04:44:05 -07:00
Anatoly Rugalev
8f85ea89fa Added download_validate_certs option which allows disabling SSL validation for file downloads 2018-09-21 11:51:17 +02:00
k8s-ci-robot
51a5f54fc4 Merge pull request #3335 from AtzeDeVries/fix/ubuntu-xenial-resolv-conf
Fix/ubuntu xenial resolv conf
2018-09-20 23:16:11 -07:00
k8s-ci-robot
e5550b5140 Merge pull request #3369 from crandles/rm-varlibcni
remove /var/lib/cni directory in reset playbook
2018-09-20 19:32:23 -07:00
Chris Randles
a1d6078d46 remove /var/lib/cni directory 2018-09-20 15:36:25 -04:00
k8s-ci-robot
7fd87b95cf Merge pull request #3368 from woopstar/fedora_fix_1
Fix CI issue (Fedora task introduce new lookup plugin)
2018-09-20 08:16:22 -07:00
Rajitha Perera
e3d562bcdb Support for AWS cloud-config (#1465)
* Support for AWS cloud-config

* Update docs

* Fix version incompatibilities

* Do not use shorthand `default`

* Add new cloud config variable, roleArn
2018-09-20 16:31:28 +02:00
Andreas Kruger
442e6e55b6 Fix CI issue with Fedora 2018-09-20 15:45:15 +02:00
k8s-ci-robot
1f1a87bd3d Merge pull request #3366 from riverzhang/fix-error
Remove some useless files
2018-09-20 05:27:28 -07:00
rongzhang
4d1055f5d5 Remove some useless files 2018-09-20 20:24:06 +08:00
k8s-ci-robot
68acdd71f1 Merge pull request #3172 from Atoms/additional-proxy
Add additional no proxy parameter for more customization
2018-09-20 03:26:29 -07:00
k8s-ci-robot
62b1ea2b48 Merge pull request #3360 from gabibbo97/master
Support Fedora 28
2018-09-20 02:22:53 -07:00
k8s-ci-robot
9fa23ffa21 Merge pull request #3364 from SataQiu/fix-20180920
Remove duplicate persistent_volumes_enabled element in k8s-cluster.yml
2018-09-20 02:21:41 -07:00
k8s-ci-robot
1dda89dbe3 Merge pull request #3363 from woopstar/remove_efk
Remove EFK from Kubespray
2018-09-20 02:19:31 -07:00
SataQiu
2a1f77efc6 remove duplicate persistent_volumes_enabled parameter in k8s-cluster.yml 2018-09-20 17:02:26 +08:00
k8s-ci-robot
f9502e0964 Merge pull request #3362 from mirake/fix-typos
Fix some typos
2018-09-20 01:46:58 -07:00
Andreas Kruger
09b67c1ad5 Remove EFK from Kubespray 2018-09-20 10:44:17 +02:00
Rui Cao
66475f98b9 Fix some typos
Signed-off-by: Rui Cao <ruicao@alauda.io>
2018-09-20 16:27:16 +08:00
k8s-ci-robot
8512cc5cca Merge pull request #3280 from wozniakjan/openstack/openstack_cacert
Check `openstack_cacert` for empty string
2018-09-19 22:42:37 -07:00
k8s-ci-robot
3a65c66a3e Merge pull request #3355 from wwt/rr-v3
Uses etcdv3 for calico 3 rr_v4 resources
2018-09-19 22:35:02 -07:00
Giacomo Longo
492b3e525d Support Fedora 28 2018-09-19 20:11:07 +02:00
Kevin Schuck
639010b3df Uses environment vars for etcd cert paths 2018-09-19 12:32:16 -05:00
k8s-ci-robot
34d1f0bff2 Merge pull request #3351 from woopstar/kubeadm_token_basic_auth_fix
Mount basic auth or token auth dirs to support it on kubeadm deployments
2018-09-19 07:50:43 -07:00
Jan Wozniak
a330b281e8 Check openstack_cacert for empty string 2018-09-19 16:37:24 +02:00
Kevin Schuck
6f9f80acee Uses etcdv3 for calico 3 rr_v4 resources 2018-09-19 09:22:52 -05:00
k8s-ci-robot
a8a62afd74 Merge pull request #3304 from kubernetes-incubator/gpu2
Add support for GPU accelerator
2018-09-19 07:12:32 -07:00
k8s-ci-robot
7fa682bdd5 Merge pull request #3342 from okamototk/fix_path_for_kubeadm_join
Add kubelet path for kubeadm.
2018-09-19 06:17:47 -07:00
Aivars Sterns
34019291b8 Merge pull request #3143 from jbcraig/add_os_trust_id
add support for openstack trust to cloud provider config
2018-09-19 16:07:03 +03:00
Aivars Sterns
847390dd9c Merge pull request #3225 from niallmcandrew/patch-1
Fix test readme formatting
2018-09-19 16:06:21 +03:00
Antoine Legrand
08179018d4 Merge branch 'master' into gpu2 2018-09-19 15:02:51 +02:00
k8s-ci-robot
b796226869 Merge pull request #3325 from firaxis/configurable_felix_healthhost
Make Felix healthhost configurable
2018-09-19 06:02:29 -07:00
Romain GUICHARD
131d565498 fix openstack cli syntax (#3353)
* fix openstack cli syntax

* 'allowed-address' also uses a dash, not an underscore

* multiple allowed-address

multiple allowed-address entries must be passed as separate parameters
2018-09-19 14:50:38 +02:00
k8s-ci-robot
084af7b6e5 Merge pull request #3354 from mirwan/offline_env
Offline environment documentation
2018-09-19 05:36:37 -07:00
Aivars Sterns
bacd8c70e1 Merge pull request #3149 from rguichard/fix-router-id-output
fix the output of router_id with the right id
2018-09-19 15:34:03 +03:00
Erwan Miran
963c3479a9 Offline environment documentation 2018-09-19 14:18:51 +02:00
k8s-ci-robot
39c567de47 Merge pull request #3307 from kaarolch/upgrade_docs
Calico version verification before cluster upgrade begin.
2018-09-19 05:15:55 -07:00
k8s-ci-robot
da4cc74498 Merge pull request #3340 from wwt/master
Fixes Calico 3.x BGPPeer resources
2018-09-19 04:43:35 -07:00
Andreas Kruger
cac485756b Mount basic auth or token auth dirs to support it on kubeadm deployments 2018-09-19 13:21:58 +02:00
k8s-ci-robot
118a7cd4ae Merge pull request #3350 from woopstar/kubeadm_audit_fix_2
Remove audit again from Kubeadm 1.10.x. Write mounts not supported un…
2018-09-19 04:17:29 -07:00
Andreas Kruger
c058e7a5ec Remove audit again from Kubeadm 1.10.x. Write mounts not supported until 1.11 2018-09-19 13:15:14 +02:00
k8s-ci-robot
1c10c3e2ff Merge pull request #3348 from woopstar/kubelet_node_custom_flags
Add support for kubelet_node_custom_flags
2018-09-19 04:10:29 -07:00
Andreas Kruger
e0ddabc463 Add support for kubelet_node_custom_flags 2018-09-19 12:58:06 +02:00
k8s-ci-robot
13da9bf75e Merge pull request #3337 from LuckySB/groupvars-networkplugin
create separate options files for network plugins
2018-09-19 03:56:29 -07:00
k8s-ci-robot
e47eeb67ee Merge pull request #3344 from woopstar/kubeadm-minor-fix
Sync manifests from non-kubeadm to kubeadm deploy
2018-09-19 03:48:32 -07:00
k8s-ci-robot
824199fc7f Merge pull request #3347 from mirake/fix-error
Fix some typos
2018-09-19 03:43:29 -07:00
Rui Cao
c004896a40 Fix some typos
Signed-off-by: Rui Cao <ruicao@alauda.io>
2018-09-19 18:22:08 +08:00
Andreas Kruger
940d2fdbb1 Add missing enforce-node-allocatable to kubelet for kubeadm deployments 2018-09-19 11:54:34 +02:00
Andreas Kruger
1c999b2a61 Move kube_kubeadm_controller_extra_args to controllerManagerExtraArgs section. It was placed in controllerManagerExtraVolumes 2018-09-19 11:24:19 +02:00
Andreas Kruger
8e37841a2e Add audit support to v1alpha1 of Kubeadm 2018-09-19 11:01:30 +02:00
Andreas Kruger
8d1c0c469c Added missing enable-aggregator-routing option 2018-09-19 10:58:46 +02:00
Rong Zhang
8f5b0c777b Merge pull request #3345 from mirake/fix-typos
Fix some typos
2018-09-19 16:50:14 +08:00
Rui Cao
0dd82293f1 Fix some typos
Signed-off-by: Rui Cao <ruicao@alauda.io>
2018-09-19 16:47:58 +08:00
Andreas Kruger
26d7380c2e Sync manifests from non-kubeadm to kubeadm deploy 2018-09-19 10:01:45 +02:00
Takashi Okamoto
95703fb6f2 Add kubelet path for kubeadm. 2018-09-19 03:04:03 +00:00
Karol Chrapek
0121bce9e5 Instead of doc update, change the verify step 2018-09-18 22:13:15 +02:00
Sergey Bondarev
e766dd5582 move calico options from all.yml to k8s-cluster/k8s-net-calico.yml 2018-09-18 21:30:49 +03:00
Kevin Schuck
fb1678d425 Ensures BGPPeer resource names are unique 2018-09-18 10:48:30 -05:00
Alex Yakovenko
884053aaa7 Make Felix healthhost configurable 2018-09-18 15:48:29 +03:00
Sergey Bondarev
93429bc661 create separate options files for network plugins
remove plugin options from common files
2018-09-18 14:29:53 +03:00
k8s-ci-robot
3d27007750 Merge pull request #3329 from riverzhang/checksum
Keep list of k8s checksums for hyperkube and kubeadm
2018-09-18 02:42:59 -07:00
AtzeDeVries
4cbd97667d Merge remote-tracking branch 'upstream/master' into fix/ubuntu-xenial-resolv-conf 2018-09-18 09:51:46 +02:00
k8s-ci-robot
2730c90dcd Merge pull request #3320 from riverzhang/kubelet
Support dynamic kubelet config
2018-09-18 00:16:04 -07:00
rongzhang
09a1bcb30b Keep list of k8s checksums for hyperkube and kubeadm
Keep a list of checksums for kubeadm and hyperkube downloads.
Makes it easier to switch versions
2018-09-18 15:05:17 +08:00
rongzhang
77e08ba204 Support dynamic kubelet config
https://kubernetes.io/blog/2018/07/11/dynamic-kubelet-configuration/
2018-09-18 08:44:39 +08:00
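
Dynamic kubelet config works by pointing a Node at a ConfigMap that holds a KubeletConfiguration; a minimal sketch of the Node patch, with an assumed ConfigMap name:

    # hypothetical Node spec fragment enabling dynamic kubelet config
    spec:
      configSource:
        configMap:
          name: my-kubelet-config
          namespace: kube-system
          kubeletConfigKey: kubelet
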
Kevin Schuck
d3adf09bde Fixes BGPPeer resource for calico >= 3.0.0 2018-09-17 15:22:28 -05:00
k8s-ci-robot
26db1afd1a Merge pull request #3227 from mirwan/contiv121
Upgrade contiv to 1.2.1 with some enhancements
2018-09-17 08:15:23 -07:00
Erwan Miran
afa2a5f1c4 enhanced reset for contiv 2018-09-17 16:46:19 +02:00
Erwan Miran
bcaf2f9ea3 contiv 1.2.1 2018-09-17 16:45:05 +02:00
Rong Zhang
3cd38e0d4c Merge pull request #3245 from ctang1989/patch-1
terraform.tfvars.example is not correct, remove.
2018-09-17 20:41:08 +08:00
k8s-ci-robot
d16b562b18 Merge pull request #3316 from mattymo/tiller_override_fix
Fix tiller override command
2018-09-17 05:12:05 -07:00
k8s-ci-robot
0538f8a70d Merge pull request #3290 from riverzhang/fix-upgrade
Fix upgrade k8s
2018-09-17 04:26:47 -07:00
k8s-ci-robot
1a426ada3c Merge pull request #3324 from alvistack/cert-manager-v0.5.0
cert-manager: Upgrade to 0.5.0
2018-09-17 04:20:56 -07:00
k8s-ci-robot
d96e17451e Merge pull request #3326 from alvistack/weave-v2.4.1
weave: Upgrade to 2.4.1
2018-09-17 04:19:39 -07:00
Wong Hoi Sing Edison
a544e54578 weave: Upgrade to 2.4.1
Upstream Changes:

-   weave 2.4.1 (https://github.com/weaveworks/weave/releases/tag/v2.4.1)

Our Changes:

-   Templates sync with upstream manifests
2018-09-17 17:09:19 +08:00
Wong Hoi Sing Edison
f34a6699ef cert-manager: Upgrade to 0.5.0
Upstream Changes:

-   cert-manager 0.5.0 (https://github.com/jetstack/cert-manager/releases/tag/v0.5.0)

Our Changes:

-   Templates sync with upstream manifests
2018-09-17 16:58:04 +08:00
AtzeDeVries
482857611a added extra var for ubuntu 18 netplan resolv 2018-09-17 09:01:55 +02:00
AtzeDeVries
8d8bbc294a fix for resolvconf in ubuntu18 2018-09-17 09:00:55 +02:00
k8s-ci-robot
7f91f6e034 Merge pull request #3287 from Kami-no/coredns_metrics
Monitor CoreDNS over svc
2018-09-16 23:39:59 -07:00
rongzhang
84c4c7dc82 Use synchronize module 2018-09-16 20:36:44 +08:00
rongzhang
1d4aa7abcc Fix upgrade k8s 2018-09-16 10:35:12 +08:00
Matthew Mosesohn
fe35c32c62 Fix tiller override command 2018-09-15 16:35:19 +03:00
Rong Zhang
aa0da221e9 Merge pull request #2880 from hfinucane/rh7-paths
Fix #2261 by supporting Red Hat's limited PATH
2018-09-15 19:27:22 +08:00
k8s-ci-robot
f1403493df Merge pull request #3296 from rabi/fix_cilium_crio
Add volume and volumeMount for crio-socket
2018-09-15 03:23:02 -07:00
k8s-ci-robot
36901d8394 Merge pull request #3309 from ant31/fix_download_file
Fix download file
2018-09-15 03:18:23 -07:00
k8s-ci-robot
a789707027 Merge pull request #3310 from mirwan/document_psp_auditing
Document podsecuritypolicy_enabled and kubernetes_audit
2018-09-15 02:09:37 -07:00
k8s-ci-robot
e6a2e34dd1 Merge pull request #3315 from riverzhang/upgrade-kubedns
Upgrade kubedns to 1.14.11
2018-09-15 02:08:20 -07:00
rongzhang
934d92f09c Upgrade kubedns to 1.14.11 2018-09-15 15:22:38 +08:00
Antoine Legrand
016ba4cdfa update hyperkube checksum 2018-09-14 00:44:36 +02:00
k8s-ci-robot
b227e44498 Merge pull request #3265 from torvitas/fix_metallb_configuration
[bugfix] fix path to metallb configuration
2018-09-13 14:47:38 -07:00
k8s-ci-robot
5e59541faa Merge pull request #3258 from okamototk/fix_kubectl_path
absolute path for kubectl.
2018-09-13 14:38:20 -07:00
Antoine Legrand
d94b7fd57c Don't download binary if docker is selected 2018-09-13 22:06:51 +02:00
k8s-ci-robot
9964ba77ee Merge pull request #3305 from mattymo/fixup_upgrade
Fixes for upgrade mode
2018-09-13 12:57:23 -07:00
k8s-ci-robot
153661cc47 Merge pull request #3284 from mattymo/more_calico_legacy
Put back legacy support for calico ippools and bgp settings
2018-09-13 09:25:26 -07:00
Erwan Miran
166da2ffd0 Document podsecuritypolicy_enabled and kubernetes_audit 2018-09-13 18:07:15 +02:00
Matthew Mosesohn
8becd905b8 Fixes for upgrade mode
Uses correct flag for draining with a pod selector
Verifies minimum kubectl version for compatibility
2018-09-13 18:42:01 +03:00
Matthew Mosesohn
c83350e597 refactor to base on calico_version 2018-09-13 18:05:10 +03:00
Karol Chrapek
730866f431 Update upgrades.md 2018-09-13 15:58:41 +02:00
k8s-ci-robot
ffbe9e7fd8 Merge pull request #1973 from guenhter/rsync-cmd-to-synchronize
Replace the raw rsync command with the synchronize module
2018-09-13 03:12:05 -07:00
AtzeDeVries
91b02c057e Add support for GPU accelerator 2018-09-13 11:53:11 +02:00
Matthew Mosesohn
55d76ea3d8 Update install.yml 2018-09-13 12:04:53 +03:00
rabi
1df0b67ec1 Add volume and volumeMount for crio-socket
This commit fixes #3295
2018-09-13 14:34:44 +05:30
Sascha Marcel Schmidt
cd3b30d3bf fix path to configuration 2018-09-13 10:15:31 +02:00
k8s-ci-robot
53a685dbf8 Merge pull request #3262 from torvitas/fix_bin_path
[bugfix] Use bin_dir to find kubectl in contrib/metallb
2018-09-13 00:51:45 -07:00
k8s-ci-robot
218e527363 Merge pull request #3243 from mirwan/helm_binary_should_be_installed_on_all_masters
Install Helm client on all masters
2018-09-13 00:39:36 -07:00
k8s-ci-robot
27fc391f71 Merge pull request #3291 from mirwan/remove_insecure-bind-address_when_insecure_port_is_0
Remove --insecure-bind-address when insecure-port=0
2018-09-13 00:34:39 -07:00
Matthew Mosesohn
1091e82327 Update install.yml 2018-09-12 22:15:46 +03:00
k8s-ci-robot
a5cc8537f9 Merge pull request #3283 from mattymo/more_upgrade_options
Extra options for upgrade mode
2018-09-12 10:50:33 -07:00
Matthew Mosesohn
d692737a13 Extra options for upgrade mode
Optionally do not drain nodes by setting drain_nodes to false
Optionally set a labelselector to target which pods should be drained.
2018-09-12 17:05:41 +03:00
Matthew Mosesohn
cc79125d3e Update install.yml 2018-09-12 17:03:55 +03:00
k8s-ci-robot
a801e02cea Merge pull request #3261 from mattymo/etcd_ssl_dir_perms
Ensure etcd file permissions are correct when using vault
2018-09-12 01:10:26 -07:00
Zinin D.A
29c7775ea1 Monitor CoreDNS over svc 2018-09-12 10:24:15 +03:00
k8s-ci-robot
cbf099de4d Merge pull request #3285 from mirwan/fix_netchecker_sa_when_psp
Fix wrong sa name in crb when psp is enabled
2018-09-12 00:20:38 -07:00
k8s-ci-robot
c8630f46fd Merge pull request #3286 from fritchie/master
Change update strategy to RollingUpdate
2018-09-12 00:18:05 -07:00
Erwan Miran
af74d85b7d Remove --insecure-bind-address when insecure-port=0 2018-09-12 08:22:11 +02:00
Chad Swenson
b8e7b4c0cd Merge pull request #3288 from kubernetes-incubator/revert-3252-remove_insecure-bind-address_when_insecure-bind-port_is_0
Revert "Remove insecure-port and insecure-bind-address when possible"
2018-09-11 17:44:59 -05:00
Chad Swenson
97e5f28537 Revert "Remove insecure-port and insecure-bind-address when possible" 2018-09-11 17:42:12 -05:00
Frank Ritchie
f42e0a4711 Change update strategy to RollingUpdate.
When enable_network_policy is set to True with Calico 3, kubectl
apply fails with the error:

The Deployment "calico-kube-controllers" is invalid:
spec.strategy.rollingUpdate: Forbidden: may not be specified when
strategy type is 'Recreate'

See

https://github.com/kubernetes-incubator/kubespray/issues/3267

Changing the update strategy to RollingUpdate avoids this error.
2018-09-11 12:03:42 -04:00
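
The fix amounts to declaring the update strategy explicitly in the Deployment manifest; a minimal sketch:

    # calico-kube-controllers Deployment fragment (sketch)
    spec:
      strategy:
        type: RollingUpdate
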
Sascha Marcel Schmidt
6a5c828b6c fix bin_dir 2018-09-11 16:27:20 +02:00
Sascha Marcel Schmidt
97aa87612a use bin_dir 2018-09-11 16:27:17 +02:00
Matthew Mosesohn
d91f9e14e6 Put back legacy support for calico ippools and bgp settings 2018-09-11 16:40:11 +03:00
Erwan Miran
e24b1220a0 Fix wrong sa name in crb when psp is enabled 2018-09-11 15:04:55 +02:00
k8s-ci-robot
18f0531bba Merge pull request #3266 from mirwan/doc_mixed_ansible_installation
Precision on control machine mixed Ansible installation
2018-09-11 03:04:26 -07:00
k8s-ci-robot
0a720b35af Merge pull request #3270 from riverzhang/fix-registry
Add insecure_registry config to docker options
2018-09-10 04:28:52 -07:00
rongzhang
f557b54489 Add docker_ to values 2018-09-10 18:05:49 +08:00
Erwan Miran
04852ad753 Install Helm on all masters 2018-09-10 11:39:26 +02:00
k8s-ci-robot
ee4f437aa2 Merge pull request #3276 from riverzhang/1.11.3
Upgrade kubernetes to v1.11.3
2018-09-10 02:15:52 -07:00
Matthew Mosesohn
aaa9a4efac Ensure vault file permissions are correct 2018-09-10 12:04:04 +03:00
rongzhang
0140cf71c8 Upgrade kubernetes to v1.11.3 2018-09-10 15:52:49 +08:00
rongzhang
51794e4c13 Deploying k8s clusters in a private environment 2018-09-09 11:06:00 +08:00
rongzhang
b249b06036 Move docker options to kubespray-defaults 2018-09-09 10:21:18 +08:00
rongzhang
20caaf9d1f Delete gitignore file 2018-09-09 02:09:02 +08:00
rongzhang
cb133cba68 Add registry_mirrors config to docker options 2018-09-09 01:21:32 +08:00
rongzhang
c41ca22a78 Planning the configuration of docker parameters 2018-09-09 00:59:59 +08:00
rongzhang
009d2ffc6c Add insecure_registry config to docker options 2018-09-08 23:24:35 +08:00
k8s-ci-robot
baf1aba239 Merge pull request #3257 from georgejdli/feature-helm-tls-2
[helm-tls] add option to secure helm tiller with tls
2018-09-07 10:34:58 -07:00
georgejdli
b891d77679 add option to secure helm tiller with tls 2018-09-07 10:29:31 -05:00
Erwan Miran
1d2ae39cff Precision on control machine mixed Ansible installation 2018-09-07 17:26:55 +02:00
k8s-ci-robot
7bf09945f2 Merge pull request #3259 from okamototk/fix_indent
Fix indent error by yamllint.
2018-09-07 08:25:20 -07:00
k8s-ci-robot
5c2e9a5376 Merge pull request #3252 from mirwan/remove_insecure-bind-address_when_insecure-bind-port_is_0
Remove insecure-port and insecure-bind-address when possible
2018-09-07 07:41:21 -07:00
k8s-ci-robot
b3a689658b Merge pull request #3255 from mlushpenko/calico_check
Fix calico health checks
2018-09-07 07:39:20 -07:00
Takashi Okamoto
d182d4f979 absolute path for kubectl. 2018-09-07 09:33:43 -04:00
k8s-ci-robot
9c49e071d3 Merge pull request #3260 from riverzhang/discoverytimeout
Add discovery_timeout to join configuration
2018-09-07 05:20:19 -07:00
rongzhang
0f63924ed4 Add discovery_timeout to join configuration
https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1alpha2#JoinConfiguration
2018-09-07 16:28:53 +08:00
Takashi Okamoto
b2a7a27dfb Fix indent error by yamllint. 2018-09-06 16:23:00 -04:00
k8s-ci-robot
b79dd602f3 Merge pull request #3256 from torvitas/heketi-fix-storageclass
[bugfix] heketi storageclass privilege
2018-09-06 07:52:08 -07:00
Sascha Marcel Schmidt
157639e451 use privileged user 2018-09-06 16:38:11 +02:00
mlushpenko
ea2c9d8f57 Fix yaml checks 2018-09-06 16:26:57 +02:00
mlushpenko
f958b32c83 Fix calico health checks 2018-09-06 15:57:21 +02:00
k8s-ci-robot
2faa8f1e37 Merge pull request #3254 from mattymo/calico_upgrade_tweaks
Fix backward compatibility with calico 2.6
2018-09-06 06:20:52 -07:00
k8s-ci-robot
ab462d92b8 Merge pull request #3249 from mattymo/fix_missing_var_kube_proxy_nodeport
Add missing variable kube_proxy_nodeport_addresses
2018-09-06 06:18:23 -07:00
k8s-ci-robot
27905bbddf Merge pull request #3250 from mattymo/openstack_cacert
Fix openstack cacert task
2018-09-06 06:15:59 -07:00
Matthew Mosesohn
dc3e317d20 Fix backward compatibility with calico 2.6 2018-09-06 15:54:20 +03:00
k8s-ci-robot
ac87ba5c0d Merge pull request #3253 from mattymo/reduce_gce_size
Reduce instance sizes in gce
2018-09-06 05:45:23 -07:00
Matthew Mosesohn
ea918e1999 Reduce instance sizes in gce 2018-09-06 15:22:46 +03:00
Erwan Miran
a5509fc2ce Remove insecure-port and insecure-bind-address when possible 2018-09-06 13:46:09 +02:00
Matthew Mosesohn
b614a3504b Fix openstack cacert task 2018-09-06 14:06:06 +03:00
k8s-ci-robot
661d455ab4 Merge pull request #3248 from mattymo/fixupfixup
put back endif in kubelet rkt template
2018-09-06 03:51:50 -07:00
Matthew Mosesohn
cd8e469b9c Add missing variable kube_proxy_nodeport_addresses 2018-09-06 13:36:17 +03:00
Matthew Mosesohn
991b3dbe54 put back endif in kubelet rkt template 2018-09-06 13:21:22 +03:00
k8s-ci-robot
a3caeba242 Merge pull request #2931 from torvitas/master
Heketi/GlusterFS
2018-09-06 03:07:35 -07:00
k8s-ci-robot
f5251f7d27 Merge pull request #3247 from mattymo/kubelet_rkt_Fix
remove broken endifs in kubelet rkt mode
2018-09-06 02:49:35 -07:00
Matthew Mosesohn
faedfb6307 remove broken endifs in kubelet rkt mode 2018-09-06 11:59:25 +03:00
k8s-ci-robot
1940495817 Merge pull request #3246 from riverzhang/pause
Upgrade pause image to 3.1
2018-09-06 00:48:05 -07:00
rongzhang
b979fb0116 Upgrade pause image to 3.1 2018-09-06 14:15:51 +08:00
Antoine Legrand
7e140e5f3c Merge pull request #3122 from jbcraig/fix_cacert_feature
resolve issues with new cacert feature
2018-09-05 23:31:53 +02:00
k8s-ci-robot
0e5393f203 Merge pull request #3224 from riverzhang/fix-feature-gates
Fix feature-gates
2018-09-05 08:55:48 -07:00
Sascha Marcel Schmidt
df6cf9aa51 add cleanup 2018-09-05 17:18:53 +02:00
Sascha Marcel Schmidt
5cf1396cb7 removes unnecessary check 2018-09-05 17:17:49 +02:00
rongzhang
435e098751 Fix feature-gates 2018-09-05 22:55:51 +08:00
Sascha Marcel Schmidt
6ffddbff24 fix database not available heketi error 2018-09-05 16:03:32 +02:00
Sascha Marcel Schmidt
64b0ce974d use bin_dir variable 2018-09-05 16:02:55 +02:00
Sascha Marcel Schmidt
ce776f0f6a actually use heketi auth 2018-09-05 15:59:56 +02:00
Sascha Marcel Schmidt
949984601f actually use heketi auth 2018-09-05 15:58:44 +02:00
Antoine Legrand
055e80f846 Merge pull request #3244 from ant31/calico31
Reverts calico update to 3.2.0, fixes #3223
2018-09-05 11:45:22 +02:00
Antoine Legrand
15363530ae Reverts calico update to 3.2.0, fixes #3223 2018-09-05 11:44:32 +02:00
唐超
ca6c5e2a6a terraform.tfvars.example is not correct, remove. 2018-09-05 17:41:34 +08:00
k8s-ci-robot
73ddb62c58 Merge pull request #3234 from warmchang/tryUpdateNodeStatus
Fix the tryUpdateNodeStatus link
2018-09-05 00:21:33 -07:00
k8s-ci-robot
a512f68650 Merge pull request #3236 from luisyonaldo/fix-configure-calico-network-pool
Fix configure calico network pool for ipipMode = CrossSubnet
2018-09-04 23:22:33 -07:00
Antoine Legrand
769f99b369 Merge pull request #3233 from mgsergio/patch-2
Hint on how to join the slack channel README.md
2018-09-04 17:27:10 +02:00
Antoine Legrand
bf1b9649d0 Merge pull request #3235 from mirwan/docker_version_emphasis
Emphasis on docker recommended version
2018-09-04 17:11:29 +02:00
Luis Nunez
6569180654 remove capitalize filter 2018-09-04 14:56:53 +02:00
Erwan Miran
ae0ed87c0f Emphasis on docker recommended version 2018-09-04 14:34:04 +02:00
Sascha Marcel Schmidt
9cc8ef4b91 MetalLB as loadbalancer for on premise deployments (#3027)
* add metallb as loadbalancer for on premise deployments

* improve configuration

* add variables to DaemonSet
2018-09-04 15:17:23 +03:00
k8s-ci-robot
ad33f71ac2 Merge pull request #3228 from mirwan/credentials_dir
Introducing credentials_dir variable in order to be able to override it
2018-09-04 04:35:11 -07:00
William Zhang
30634b3a25 Fix the tryUpdateNodeStatus link
Signed-off-by: William Zhang <zhang.wanmin@zte.com.cn>
2018-09-04 19:17:05 +08:00
mgsergio
b31cf0284d Update README.md
It wasn't obvious to me how to join this channel.
2018-09-04 12:33:55 +03:00
k8s-ci-robot
50c6a98b15 Merge pull request #3229 from mirwan/docker_1806_ubuntu_under_bionic
Docker 18.06 for ubuntu versions before bionic
2018-09-03 11:37:13 -07:00
Antoine Legrand
e7234c9114 Merge pull request #3232 from rabi/master
Document correct var kubeadm_enabled
2018-09-03 20:34:16 +02:00
Erwan Miran
a644b7c267 Introducing credentials_dir in order to be able to override it 2018-09-03 18:04:50 +02:00
Erwan Miran
f0af7262b1 credentials directory should be ignored as inventory 2018-09-03 18:04:34 +02:00
rabi
0865bef382 Document correct var kubeadm_enabled 2018-09-03 21:14:53 +05:30
Atoms
8c9588ab59 Add additional no proxy parameter for more customization 2018-09-03 17:09:58 +03:00
Erwan Miran
c0ce875743 change edge to 18.06 for ubuntu 2018-09-03 14:11:25 +02:00
Erwan Miran
a22d28e1c1 docker 18.06 for ubuntu version before bionic 2018-09-03 14:10:51 +02:00
k8s-ci-robot
c32145057d Merge pull request #3178 from gitphill/patch-1
Add azure-container-registry-config for Azure
2018-09-03 05:06:01 -07:00
rboyapat
fbb98b0070 Fix the jinja expression for openstack_tenant_id (#3151)
OS_PROJECT_ID is obsolete in keystone v3 and the jinja expression
doesn't set openstack_tenant_id as expected because of an
undefined env var. Fixed the expression.
2018-09-03 14:59:49 +03:00
k8s-ci-robot
db11394711 Merge pull request #3200 from pablodav/feature/k8s_win_v1.11
Required support to start working on windows node support
2018-09-03 04:51:23 -07:00
k8s-ci-robot
72f6b3f836 Merge pull request #3210 from kubernetes-incubator/re-org-group_vars
Split group-variables
2018-09-03 04:40:26 -07:00
Antoine Legrand
0a08268efb Merge pull request #3226 from mattymo/always_run_helm_init
Always run helm init to allow for settings changes
2018-09-03 12:49:05 +02:00
Antoine Legrand
ccda9664e7 remove duplicated var 2018-09-03 12:09:31 +02:00
Antoine Legrand
e98ba9e839 Split group-variables 2018-09-03 12:09:31 +02:00
Matthew Mosesohn
fd57fde075 Always run helm init to allow for settings changes 2018-09-03 11:16:01 +03:00
k8s-ci-robot
6204b85a37 Merge pull request #3222 from alvistack/nginx-0.19.0
ingress-nginx: Upgrade to 0.19.0
2018-09-03 00:11:38 -07:00
Wong Hoi Sing Edison
9fc8f9a07d ingress-nginx: Upgrade to 0.19.0
Upstream Changes:

-   ingress-nginx 0.19.0 (https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.19.0)

Our Changes:

-   Sync templates with upstream changes
2018-09-03 08:00:08 +08:00
niallmcandrew
8745486fb3 Fix test readme formatting
Adds a missing vertical bar at the start of a table so it's correct markdown.
2018-09-03 08:38:21 +12:00
Pablo Estigarribia
7cbe3c2171 ensure there is pin priority for docker package to avoid upgrade of docker to incompatible version
ensure there is pin priority for docker package to avoid upgrade of docker to incompatible version

remove empty when line

ensure there is pin priority for docker package to avoid upgrade of docker to incompatible version

force kubeadm upgrade due to failure without --force flag

ensure there is pin priority for docker package to avoid upgrade of docker to incompatible version

added nodeSelector for compatibility with hybrid clusters containing Windows nodes; also fixed downloads with a missing container type

fixes in syntax and LF for newline in files

fix on yamllint check

ensure there is pin priority for docker package to avoid upgrade of docker to incompatible version

some cleanup of unnecessary lines

remove conditions for nodeselector
2018-09-02 12:47:06 -03:00
k8s-ci-robot
a47c9239e8 Merge pull request #3221 from alvistack/cephfs-provisioner-v2.1.0-k8s1.11
cephfs-provisioner: Upgrade to v2.1.0-k8s1.11
2018-09-02 04:16:17 -07:00
k8s-ci-robot
635ca1a0b8 Merge pull request #3220 from alvistack/coredns-1.2.2
coredns: Upgrade to v1.2.2
2018-09-02 04:13:53 -07:00
Wong Hoi Sing Edison
32fdfbcd5a cephfs-provisioner: Upgrade to v2.1.0-k8s1.11
Upstream Changes:

-   cephfs-provisioner v2.1.0-k8s1.11 (https://github.com/kubernetes-incubator/external-storage/releases/tag/cephfs-provisioner-v2.1.0-k8s1.11)

Our Changes:

-   Sync clusterrole and role with upstream changes
2018-09-02 11:51:28 +08:00
k8s-ci-robot
dee9324d4b Merge pull request #3219 from mlushpenko/kubeadm-ha
Fix ports for kubeadm client and master configs for ha setups
2018-09-01 20:49:22 -07:00
Wong Hoi Sing Edison
df8b27c03c coredns: Upgrade to v1.2.2
Upstream Changes:

-   coredns v1.2.2 (https://github.com/coredns/coredns/releases/tag/v1.2.2)

NOTE:

-   coredns image for 1.2.0 and 1.2.1 had been removed from https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/coredns
2018-09-02 11:37:21 +08:00
mlushpenko
8e95974930 Fix ports for kubeadm client and master configs for ha setups 2018-09-01 18:02:52 +02:00
k8s-ci-robot
64b32146ca Merge pull request #3217 from mirwan/fix_3215
Fix docker_options definition to remove newlines
2018-09-01 07:29:47 -07:00
Erwan Miran
36a7bdfac1 Fix docker_options definition to remove newlines 2018-09-01 09:55:04 +02:00
k8s-ci-robot
13dda0e36e Merge pull request #3207 from mirwan/fix_3206
Fix target hosts generation when /etc/hosts does not contain 127.0.0.1 or ::1
2018-08-31 17:50:56 -07:00
k8s-ci-robot
6e7100f283 Merge pull request #3208 from mirwan/etcd_ha_doc_n_cleaning
Add documentation about having HA for etcd
2018-08-31 08:06:05 -07:00
Erwan Miran
059cd17b47 Fix target hosts generation when /etc/hosts does not contain 127.0.0.1 or ::1 2018-08-31 16:33:18 +02:00
k8s-ci-robot
fb7b3305dc Merge pull request #3209 from mirwan/use_etcd_events_access_address
etcd_events_access_address should be used for peer_url and client_url
2018-08-31 07:26:25 -07:00
k8s-ci-robot
0e1f24e95a Merge pull request #3140 from kubernetes-incubator/preinstall-tasks_num
Add support for etcd arm64
2018-08-31 06:20:56 -07:00
Erwan Miran
81c3f2c971 etcd_events_access_address should be used for peer_url and client_url 2018-08-31 15:03:07 +02:00
Erwan Miran
82a28d6bb3 Add documentation about having HA for etcd 2018-08-31 14:40:25 +02:00
Antoine Legrand
22f9114630 update calico to 3.2.0 2018-08-31 13:45:08 +02:00
Antoine Legrand
1704d699c4 CI: switch ubuntu18 to manual job 2018-08-31 13:45:08 +02:00
Antoine Legrand
f2f0cdd0ff add arch vars for docker 2018-08-31 13:45:08 +02:00
Antoine Legrand
da06c8e5a9 etcd UNSUPPORTED for all arch 2018-08-31 13:45:08 +02:00
Antoine Legrand
2f1fe44762 update images to use arch 2018-08-31 13:45:08 +02:00
Antoine Legrand
19268ded23 Fix some arm64 errors 2018-08-31 13:45:08 +02:00
Antoine Legrand
f67933d2ac add ETCD_UNSUPPORTED_ARCH=arm64 flag 2018-08-31 13:45:08 +02:00
Antoine Legrand
247b9e83d8 etcd arch-image 2018-08-31 13:45:08 +02:00
Antoine Legrand
9c2098b8fa fix kubelet_max_pod assert 2018-08-31 13:45:08 +02:00
Antoine Legrand
48c0c8d854 Update dir list 2018-08-31 13:45:08 +02:00
k8s-ci-robot
f5f7b1626b Merge pull request #3203 from riverzhang/doc
Update readme
2018-08-31 02:57:55 -07:00
k8s-ci-robot
c87a373c53 Merge pull request #3204 from riverzhang/fix-copy-ssl-ca
Fix copy etcd-ssl-ca failed
2018-08-31 02:00:01 -07:00
rongzhang
2609ec0dc3 Fix copy etcd-ssl-ca failed 2018-08-31 15:06:03 +08:00
rongzhang
61ed9886c1 Update readme 2018-08-31 14:12:16 +08:00
k8s-ci-robot
aafd034ab8 Merge pull request #3202 from riverzhang/fix-ipvs
Fix ipvs by kubeadm v1alpha1
2018-08-30 13:26:02 -07:00
k8s-ci-robot
d14394c691 Merge pull request #3185 from mirwan/helm_install_docker_insecureport_0
Mount /root/.kube to helm container
2018-08-30 08:11:33 -07:00
rongzhang
16fc22a207 Fix ipvs by kubeadm v1alpha1 2018-08-30 23:04:57 +08:00
k8s-ci-robot
d9ea937493 Merge pull request #3187 from mirwan/kubeadm-config_syntax
Fix kubeadm-config for audit-log-path and feature-gates
2018-08-30 06:55:43 -07:00
k8s-ci-robot
a96a0ee307 Merge pull request #3198 from riverzhang/fix-kubeadm-v1alpha1
Fix kubeadm v1alpha1 configure
2018-08-30 04:11:37 -07:00
k8s-ci-robot
f48468b83b Merge pull request #3195 from mirwan/fix_psp_templates
Fix some addons when PodSecurityPolicy is enabled
2018-08-30 03:37:52 -07:00
Aivars Sterns
5b79ec8e3b Merge pull request #3199 from kubernetes-incubator/ant31-patch-2
Add mirwan as Reviewer
2018-08-30 13:24:15 +03:00
Antoine Legrand
3f4acbc5f6 Add mirwan as Reviewer 2018-08-30 11:53:50 +02:00
rongzhang
35e5adaf0a Fix kubeadm v1alpha1 configure 2018-08-30 17:44:00 +08:00
k8s-ci-robot
a268a49e1a Merge pull request #3197 from riverzhang/kubeadm-test
Enable kubeadm test
2018-08-29 22:56:46 -07:00
rongzhang
91a83a3a0f Enable kubeadm test
We need to test the kubeadm-deployed cluster; most functional changes will involve kubeadm.
2018-08-30 12:58:00 +08:00
k8s-ci-robot
a247c2c713 Merge pull request #3191 from fcgravalos/make-canal-mount-xtables-lock
canal should mount xtables.lock to share the lock with other processe…
2018-08-29 08:57:32 -07:00
k8s-ci-robot
4feb62f6bf Merge pull request #3193 from riverzhang/fix-lb-kubeadm
Fix kubeadm lb
2018-08-29 04:22:40 -07:00
Fernando Crespo Grávalos
ac4ef719cc canal should mount xtables.lock to share the lock with other processes like kube-proxy 2018-08-29 13:08:51 +02:00
Erwan Miran
ceb97e5809 Fix wrong syntax for jinja sub list extraction and addition of missing role template 2018-08-29 12:58:10 +02:00
k8s-ci-robot
3bfda55fca Merge pull request #3061 from okamototk/crio
cri-o support
2018-08-29 03:48:40 -07:00
rongzhang
9eade647e6 Fix kubeadm lb 2018-08-29 18:29:24 +08:00
k8s-ci-robot
f82a1933b0 Merge pull request #3176 from equinix-ms/master
Add option to change the Tiller Deployment namespace.
2018-08-29 03:03:40 -07:00
Robin Elfrink
bbdd1c8f06 Add option to change the Tiller Deployment namespace. 2018-08-29 11:20:41 +02:00
k8s-ci-robot
f876c89081 Merge pull request #3189 from Arslanbekov/up-dashboard-version
Up dashboard version to 1.10.0
2018-08-29 02:08:40 -07:00
Phill Garrett
1babbcca85 Fix elif azure statement 2018-08-28 15:43:03 +01:00
k8s-ci-robot
58ecd312a7 Merge pull request #3186 from mirwan/fix_etchosts_localhost_handling
Fix localhost handling when /etc/hosts contains parenthesis
2018-08-28 07:07:53 -07:00
Takashi Okamoto
c0dfa72707 Separate RedHat specific vars for cri-o. 2018-08-28 13:36:14 +00:00
Arslanbekov Denis
fe1e758856 Up dashboard version to 1.10.0 2018-08-28 14:10:19 +03:00
Phill Garrett
f325d13082 Add azure-container-registry-config for Azure
Separated out the KUBELET_CLOUDPROVIDER env var assignment when cloud_provider equals azure
Appended azure-container-registry-config parameter
2018-08-28 10:23:25 +00:00
Erwan Miran
52ab54eeea Fix missing quotes for audit-log-path and wrong placement of feature-gates 2018-08-28 09:05:57 +02:00
Takashi Okamoto
d407a590a6 container_manager variable to specify runtime. 2018-08-28 06:23:38 +00:00
Takashi Okamoto
5eb805f098 Change timeout for kubeadm 600s.
* The kubeadm timeout is too short and the run may be interrupted by a timeout.
2018-08-28 04:51:38 +00:00
Takashi Okamoto
dfdcb56784 Delete all cri-o containers when execute reset.yml. 2018-08-28 02:25:33 +00:00
Takashi Okamoto
659cccc507 Update sample. 2018-08-28 02:25:33 +00:00
Takashi Okamoto
f47c31dce5 Add cri-o document. 2018-08-28 02:25:33 +00:00
Takashi Okamoto
236f066635 kubeadm cri-o support. 2018-08-28 02:24:45 +00:00
Takashi Okamoto
5ab8a712d9 Add download_container flag to avoid docker pull when use cri-o. 2018-08-28 01:24:26 +00:00
Takashi Okamoto
cf7b9cfeef Support crio in kubelet service. 2018-08-28 01:24:26 +00:00
Takashi Okamoto
6090af29e7 Add cri-o role. 2018-08-28 01:24:26 +00:00
Takashi Okamoto
359009bb05 Download etcd and hyperkube binary. 2018-08-28 01:24:26 +00:00
Takashi Okamoto
bdbfa4d403 Add ipvs support for kubeadm 1.10 or later. 2018-08-28 01:24:26 +00:00
Takashi Okamoto
6849788ebc Fix copy ca cert and ca key for kubeadm. 2018-08-28 01:24:25 +00:00
Takashi Okamoto
ac639b2a17 Change kubeadm config to run etcd by kubeadm. 2018-08-28 01:24:25 +00:00
Takashi Okamoto
b18ed5922b Add etcd default value in kubespray-default. 2018-08-28 01:24:25 +00:00
Erwan Miran
b395bb953f Fix wrong when condition that ends up with jinja error when the content of /etc/hosts contains parenthesis 2018-08-27 21:20:57 +02:00
Erwan Miran
b652792a93 /root/.kube must be mounted in order for helm to read the kubeconfig and not fall back to localhost:8080 2018-08-27 18:17:26 +02:00
k8s-ci-robot
7efe287c74 Merge pull request #2474 from mirwan/localhost_in_etc_hosts
Localhost in hosts files should be updated (if necessary), not overridden
2018-08-27 06:25:43 -07:00
k8s-ci-robot
881b46f458 Merge pull request #3095 from mirwan/dnsmasq_template_rendering_filename
Dnsmasq manifests should not have j2 extension but templates should
2018-08-27 02:51:43 -07:00
k8s-ci-robot
d43cd9a24c Merge pull request #3104 from maxbrunet/hotfix/replace-local_actions
Use delegate_to: localhost instead of local_action
2018-08-27 02:50:42 -07:00
guenhter
fff48d24ea Replace the raw rsync command with the synchronize module 2018-08-27 10:00:21 +02:00
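
A minimal sketch of a task using the synchronize module in place of a raw rsync command; the task name and paths are assumptions for illustration:

    - name: sync downloaded files to the node
      synchronize:
        src: /tmp/releases/
        dest: "{{ bin_dir }}/"
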
k8s-ci-robot
f4feb17629 Merge pull request #2958 from elementyang/etcd-pr
change the way that getting etcd_member_name
2018-08-26 23:55:04 -07:00
Maxime Brunet
33135f2ada k8s/preinstall: Turn AND condition into a list 2018-08-25 14:33:31 -04:00
k8s-ci-robot
d6f4d10075 Merge pull request #3153 from alvistack/remove-image_tag-suffix
Remove *_image_tag suffix from ReplicaSet/Deployment
2018-08-25 04:42:19 -07:00
k8s-ci-robot
f97515352b Merge pull request #3161 from nutellinoit/kube_proxy_nodeport_addresses
--nodeport-addresses added on kube-proxy.manifest.j2 and on k8s-cluster.yml
2018-08-25 02:00:19 -07:00
k8s-ci-robot
f765ed8f1c Merge pull request #3179 from kubernetes-incubator/removeubuntu18
move ubuntu18 to CI part2
2018-08-24 10:08:57 -07:00
Antoine Legrand
84bfcbc0d8 move ubuntu18 to CI part2 2018-08-24 18:18:27 +02:00
Aivars Sterns
2c98efb781 Merge pull request #3158 from tiri/fix-glusterfs-inventory
Fix node hostname in glusterfs inventory.example
2018-08-24 16:35:34 +03:00
Aivars Sterns
f7f58bf070 Merge pull request #3173 from msimonin/fix-3164
Fix createhome directory for adduser role
2018-08-24 16:34:57 +03:00
Erwan Miran
1432e511a2 same work with less lines 2018-08-24 14:06:07 +02:00
Aivars Sterns
1ddc420e39 Merge pull request #3058 from vasrem/feature_add_etcd_quota_backend_bytes
Add ETCD_QUOTA_BACKEND_BYTES environment variable
2018-08-24 14:17:55 +03:00
Vasilis Remmas
b61eb7d7f3 Add ETCD_QUOTA_BACKEND_BYTES environment variable 2018-08-24 12:17:34 +02:00
Aivars Sterns
dd55458315 Merge pull request #3174 from kubernetes-incubator/revert-3147-etcd-cleanup
Revert "gen_certs_script: refactor using stdin (Ansible 2.4+)"
2018-08-24 12:51:45 +03:00
Aivars Sterns
1567a977c3 Revert "gen_certs_script: refactor using stdin (Ansible 2.4+)" 2018-08-24 12:35:31 +03:00
Samuele Chiocca
cb8be37f72 fix on v1alpha1 2018-08-24 11:19:06 +02:00
Samuele Chiocca
e5dd4e1e70 added on v1alpha1 2018-08-24 10:59:06 +02:00
Antoine Legrand
6d74a3db7a Merge pull request #3163 from kubernetes-incubator/fix-docker-ubuntu1804
Fix docker apt-repo for Ubuntu18
2018-08-24 00:51:59 +02:00
ant31
1da5926a94 Use xenial repo for ubuntu18 2018-08-23 22:34:44 +00:00
Antoine Legrand
4882531c29 Merge pull request #3115 from oracle/oracle_oci_controller
Cloud provider support for OCI (Oracle Cloud Infrastructure)
2018-08-23 18:22:45 +02:00
Antoine Legrand
f59b80b80b Merge pull request #3147 from ishitatsuyuki/etcd-cleanup
gen_certs_script: refactor using stdin (Ansible 2.4+)
2018-08-23 18:19:28 +02:00
Antoine Legrand
f7d0e4208e Merge pull request #3142 from riverzhang/fix-kubeadm-lb
Fix kubeadm LB configure
2018-08-23 16:40:59 +02:00
rongzhang
7b61a0eff0 Fix kubeadm LB configuration
1. joining nodes add the LB to discoveryTokenAPIServers
2. kubeadm_config_api_fqdn supports an IP address and a domain_name
2018-08-23 22:22:34 +08:00
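
A hedged sketch of the idea: joining nodes list the load balancer in discoveryTokenAPIServers; the port below is an assumption:

    # hypothetical kubeadm join configuration fragment
    discoveryTokenAPIServers:
      - "{{ kubeadm_config_api_fqdn }}:6443"
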
Aivars Sterns
23fd3461bc calico upgrade to v3 (#3086)
* calico upgrade to v3

* update calico_rr version

* add missing file

* change contents of main.yml as it was left old version

* enable network policy by default

* remove unneeded task

* Fix kubelet calico settings

* fix when statement

* switch back to node-kubeconfig.yaml
2018-08-23 17:17:18 +03:00
msimonin
e22e15afda Fix createhome directory for adduser role
A typo in the adduser role prevents the createhome
variable from being taken into account.

Fix #3164
2018-08-23 08:55:11 +02:00
Rong Zhang
f453567cce Merge pull request #3144 from riverzhang/fix-audit-log
Fix install audit failed
2018-08-23 14:41:37 +08:00
Tatsuyuki Ishi
69786b2d16 gen_certs_script: refactor using stdin (Ansible 2.4+) 2018-08-23 11:19:17 +09:00
rongzhang
5a4352657d Fix failed audit installation
1. fix the audit log not being written
2. fix Parameter not recognized
3. delete kubeadm featureGates Auditing and use apiServerExtraArgs
2018-08-23 01:47:15 +08:00
Samuele Chiocca
f13bc796d9 added nodePortAddresses on kubeadm conf v1alpha2 (not present on v1alpha1) 2018-08-22 18:43:03 +02:00
Antoine Legrand
7a2cfb8578 Merge pull request #3102 from mirwan/psp
PodSecurityPolicy admission controller support
2018-08-22 18:37:40 +02:00
Erwan Miran
a6a14e7f77 create the service account and roles even if the rbac is not enabled. it will just be ignored 2018-08-22 18:17:11 +02:00
Erwan Miran
80cfeea957 psp, roles and rbs for PodSecurityPolicy when podsecuritypolicy_enabled is true 2018-08-22 18:16:13 +02:00
ant31
2c90208486 Fix docker apt-repo for Ubuntu18 2018-08-22 15:53:14 +00:00
Antoine Legrand
4eea7f7eb9 Merge pull request #3152 from johnzheng1975/cilium_1.2.0
new cilium stable version: 1.2.0
2018-08-22 17:11:42 +02:00
Antoine Legrand
3c59657f59 Merge pull request #3165 from hadrien-toma/patch-1
Update ansible.md
2018-08-22 16:58:29 +02:00
Hadrien TOMA
6598beb804 Update ansible.md 2018-08-22 16:40:17 +02:00
Antoine Legrand
32049efbc2 Merge pull request #3162 from kubernetes-incubator/add-ubuntu1804-ci
Add ubuntu18 ci job
2018-08-22 16:27:19 +02:00
Antoine Legrand
78be27e18f Add ubuntu18 job 2018-08-22 16:02:07 +02:00
Samuele Chiocca
5d9908c2c3 --nodeport-addresses added on kube-proxy.manifest.j2
Changed author
2018-08-22 15:32:07 +02:00
Antoine Legrand
7eb4d7bb19 Merge pull request #3155 from alvistack/rbac_enabled
Always create service account even rbac_enabled = false
2018-08-22 13:26:20 +02:00
Erwan Miran
a7b0c454db Localhost in hosts files should be updated (if necessary), not overridden 2018-08-22 12:10:49 +02:00
Timo Ribbers
83e3b72220 Fix node hostname in glusterfs inventory.example
Remove duplicate hostname usage.
2018-08-22 11:03:38 +02:00
john
7e2e3ddd32 update new cilium version 1.2.0 in README.md 2018-08-22 15:29:42 +08:00
Wong Hoi Sing Edison
c3b3572025 Always create service account even rbac_enabled = false 2018-08-22 11:41:29 +08:00
Wong Hoi Sing Edison
f897596844 Remove *_image_tag suffix from ReplicaSet/Deployment 2018-08-22 11:02:56 +08:00
john
6df71956c4 new cilium stable version: 1.2.0 2018-08-22 10:52:24 +08:00
Jeff Bornemann
94df70be98 Cloud provider support for OCI (Oracle Cloud Infrastructure)
Signed-off-by: Jeff Bornemann <jeff.bornemann@oracle.com>
2018-08-21 17:36:42 -04:00
rguichard
6650bc6b25 fix the output of router_id with the right id 2018-08-21 13:21:25 +02:00
Antoine Legrand
7398858572 Merge pull request #3141 from qeqar/bad-hostname
allow '.' in hostnames for verify bad hostnames
2018-08-21 11:39:49 +02:00
Mark Eisenblaetter
0c0a2138d9 allow '.' in hostnames
we use FQDN as inventory_hostname
2018-08-21 08:24:33 +02:00
Jonathan Craig
5bf152886b add support for openstack trust to cloud provider config 2018-08-20 12:51:25 -04:00
Mark Eisenblätter
08353f291b scaling: issue etcd certs for new nodes (#3125) 2018-08-20 14:40:44 +03:00
Andreas Krüger
497db69c9f Merge pull request #3130 from riverzhang/add-control-plane
Add kubeadm controlplaneEndpoint
2018-08-20 10:43:50 +02:00
Andreas Krüger
c7de737551 Merge pull request #3133 from mirwan/auditlog_to_stdout_w_kubeadm
Audit log to stdout with kubeadm
2018-08-20 10:43:22 +02:00
Andreas Krüger
69749a5b7b Merge pull request #3132 from mirwan/custom_audit_policy
Custom audit policy
2018-08-20 10:42:38 +02:00
Andreas Krüger
b3e32c1393 Merge pull request #3094 from hedayat/master
Add --dns-loop-detect to dnsmasq used in kube-dns
2018-08-20 09:27:15 +02:00
Erwan Miran
fc38b6d0ca Ability to define custom audit policy rules 2018-08-20 07:04:56 +02:00
Erwan Miran
c34900e569 Define apiserver flags directly instead of relying on auditPolicy section in order to have the ability to redirect audit log to stdout with kubeadm 2018-08-20 07:00:53 +02:00
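
A hedged sketch of that approach: pass the audit flags through kubeadm's apiServerExtraArgs so the log can be redirected to stdout; the policy file path is an assumption:

    # hypothetical kubeadm configuration fragment
    apiServerExtraArgs:
      audit-log-path: "-"                         # "-" sends the audit log to stdout
      audit-policy-file: /etc/kubernetes/audit-policy.yaml
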
Rong Zhang
855f2a55cb Merge pull request #3135 from ishitatsuyuki/patch-1
Add bad hostname preflight check
2018-08-20 12:08:02 +08:00
Rong Zhang
ea35e6be9b Merge pull request #3139 from alvistack/cephfs-provisioner-v2.0.1-k8s1.11
cephfs-provisioner: Upgrade to v2.0.1-k8s1.11
2018-08-20 12:04:53 +08:00
Wong Hoi Sing Edison
71fdc257bc cephfs-provisioner: Upgrade to v2.0.1-k8s1.11 2018-08-20 11:55:04 +08:00
Rong Zhang
fd16f77e20 Merge pull request #3017 from seungkyua/fix_kubeadm_client_conf
Fix kubeadm client conf
2018-08-20 10:51:02 +08:00
Tatsuyuki Ishi
3eef8dc8d0 Add bad hostname preflight check
Hostname must be a valid DNS name, which is checked as https://github.com/kubernetes/apimachinery/blob/master/pkg/util/validation/validation.go#L115

The situation I encountered is that my hostname contained an underscore, which is disallowed, and the apiserver refused to start.
2018-08-20 09:09:00 +09:00
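
Such a preflight check can be sketched as an Ansible assert against the RFC 1123 rule the linked validation enforces; the task itself is a hypothetical illustration:

    - name: check that the hostname is a valid DNS-1123 name
      assert:
        that:
          - inventory_hostname is match('^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$')
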
rongzhang
59176ebbb9 Add kubeadm controlplaneEndpoint
Nginx LB(default)
Other LB by kubeadm controlplane
2018-08-20 00:57:13 +08:00
Rong Zhang
3663061b38 Merge pull request #3137 from riverzhang/packages
Fix install nss
2018-08-20 00:47:53 +08:00
rongzhang
b421d0ed5b Fix install nss 2018-08-20 00:07:31 +08:00
Rong Zhang
f7097fbe07 Merge pull request #3134 from riverzhang/image
Fix pull dns image error
2018-08-19 23:29:57 +08:00
rongzhang
35efc387c4 Fix pull dns image error 2018-08-19 22:47:17 +08:00
Rong Zhang
fb309ca446 Merge pull request #3128 from riverzhang/delete-kubeadm
Remove unused configuration
2018-08-19 10:01:33 +08:00
Antoine Legrand
c833a8872b Merge pull request #3131 from 3cky/patch-1
Fix k8s-dns-dnsmasq-nanny repo path
2018-08-19 01:31:45 +02:00
Antoine Legrand
1d4f88eea8 Fix typo in image url 2018-08-19 01:30:54 +02:00
Victor Antonovich
e9b8c8956d Fix k8s-dns-dnsmasq-nanny repo path 2018-08-19 00:01:19 +03:00
rongzhang
095ccef8bd Remove unused configuration 2018-08-19 01:23:20 +08:00
Rong Zhang
0df969ad19 Merge pull request #3117 from mirwan/audit_usecases
Audit support improvement
2018-08-19 01:13:22 +08:00
Antoine Legrand
3e5b6a5481 Merge pull request #3105 from mirwan/remove_cilium_device_at_reset_plus_move_network_to_network_plugin_roles
Move network_plugin specific reset tasks to its role directory
2018-08-17 22:27:16 +02:00
Antoine Legrand
3201f17058 Merge pull request #3119 from hoatle/improvements/ansible-ignored-patterns
add ignore_patterns to ansible.cfg
2018-08-17 22:13:16 +02:00
Antoine Legrand
c36744e96d Merge pull request #3120 from alvistack/cephfs-provisioner-v2.0.0-k8s1.11
cephfs-provisioner: Upgrade to v2.0.0-k8s1.11
2018-08-17 22:11:15 +02:00
Antoine Legrand
e51c5dc0a6 Merge pull request #3123 from mathieuherbert/until-restart-etcd
add until option for etcd backup commands
2018-08-17 22:09:08 +02:00
Antoine Legrand
d297b82e82 Merge pull request #3126 from LuckySB/etcd_restart_on_update
add etcd version to etcd environment file to trigger a reload
2018-08-17 22:05:34 +02:00
Antoine Legrand
ca649b57e6 Merge pull request #1942 from jerrypeng/patch-1
SERIOUS Bug in download main.yml
2018-08-17 18:23:05 +02:00
Antoine Legrand
2c587f9ea5 Merge pull request #2104 from xd007/multi-arch-support
add support for non-amd64 arch gcr.io images
2018-08-17 16:38:14 +02:00
Erwan Miran
98b818bbaf comply with ansible syntax consistency guideline 2018-08-17 16:37:33 +02:00
Antoine Legrand
26bf719a02 Merge branch 'master' into multi-arch-support 2018-08-17 16:35:50 +02:00
Antoine Legrand
7e37aa4aca Merge pull request #2103 from xd007/docker_aarch64_pkg
Update docker package info for aarch64
2018-08-17 16:26:56 +02:00
Sergey Bondarev
ce6854e726 add version to environment file
Trigger the reboot handler when the version is upgraded by the update script
2018-08-17 17:25:35 +03:00
Antoine Legrand
ac49bbb336 Merge pull request #2168 from xd007/docker_arm64
fix docker opts incompatible running on aarch64 Redhat/Centos
2018-08-17 16:24:07 +02:00
Antoine Legrand
b490231f59 Merge pull request #2025 from kubernetes-incubator/terraform-aws-inventory
contrib/terraform/aws: Make path to generated inventory configurable
2018-08-17 15:55:38 +02:00
Antoine Legrand
6c7eabb53b Merge pull request #2001 from b0r1sp/patch-3
Quote false and yes, otherwise they'll be transformed to 'False', 'Yes'
2018-08-17 15:52:15 +02:00
Antoine Legrand
7a0f0126f7 Merge pull request #1295 from xuhuilong/master
fix curl get calico status error ( error in tls version, centos 7.3 1611)
2018-08-17 14:29:01 +02:00
Mathieu Herbert
59d89a37cc add until option for etcd backup commands 2018-08-17 11:05:57 +02:00
Wong Hoi Sing Edison
1a07c87af7 cephfs-provisioner: Upgrade to v2.0.0-k8s1.11
Upstream Changes:

-   cephfs-provisioner v2.0.0-k8s1.11 (https://github.com/kubernetes-incubator/external-storage/releases/tag/cephfs-provisioner-v2.0.0-k8s1.11)
-   Update ClusterRole

Our Changes:

-   Fix typo in defaults/main.yml (rs -> deploy)
-   Manifests cleanup
2018-08-17 12:41:56 +08:00
Seungkyu Ahn
29894293eb Fix kubeadm client conf
Fix DiscoveryTokenCACertHashes key to discoveryTokenCACertHashes in kubeadm-client.conf
2018-08-17 04:40:08 +00:00
Jonathan Craig
4d783fff0d resolve issues with new cacert feature 2018-08-16 23:31:21 -04:00
hoatle
a7a53d1f38 add ignore_patterns to ansible.cfg
To avoid a warning message when artifacts are generated within
the inventory directory
2018-08-17 09:22:02 +07:00
Erwan Miran
7f16b46ed5 Reset tasks specific to a network_plugin moved inside its role directory + Reset tasks specific to cilium 2018-08-16 17:34:33 +02:00
Antoine Legrand
58ee5f1cc9 Merge pull request #3089 from mattymo/cloudconfig
Remove erroneous cloud-config task
2018-08-16 16:17:01 +02:00
Antoine Legrand
bc844ca96e Merge pull request #3079 from wikiselev/master
fix glusterfs ppa and glusterfs server command name errors
2018-08-16 16:15:32 +02:00
Antoine Legrand
253dc4f606 Merge pull request #3114 from woopstar/coredns-1.2.0
Update CoreDNS to 1.2.0
2018-08-16 16:14:13 +02:00
Antoine Legrand
b54ce3e66e Merge pull request #3043 from jerryrelmore/patch-3
Update openstack.md
2018-08-16 16:09:12 +02:00
Antoine Legrand
a642931422 Merge pull request #3019 from holmsten/terraform-ops-worker-groups
[contrib/terraform/openstack] Add supplementary node groups
2018-08-16 16:06:53 +02:00
Antoine Legrand
2228f0dabc Merge pull request #3116 from kubernetes-incubator/update-owners
Update OWNERS
2018-08-16 15:57:53 +02:00
Antoine Legrand
a619dfb03e Update OWNERS 2018-08-16 13:32:46 +02:00
Erwan Miran
54548d3b95 kubeadm mounts the hostpaths itself 2018-08-16 13:17:30 +02:00
Erwan Miran
58d4d65fab minor variable fix and reuse + handle auditlog redirected to stdout 2018-08-16 12:51:09 +02:00
Rong Zhang
364ab2a6b7 Merge pull request #3113 from riverzhang/support-audit
Support audit
2018-08-16 15:33:43 +08:00
Andreas Krüger
fdbb078aa9 Merge pull request #3111 from alvistack/cert-manager-0.4.1
cert-manager: Upgrade to 0.4.1
2018-08-16 09:13:46 +02:00
rongzhang
2ffc1afe40 Support audit 2018-08-16 14:38:07 +08:00
Wong Hoi Sing Edison
18612b3501 cert-manager: Upgrade to 0.4.1
Upstream Changes:

-   cert-manager 0.4.1 (https://github.com/jetstack/cert-manager/releases/tag/v0.4.1)

Our Changes:

-   Better templates sync with upstream manifests
-   Remove fancy resources requests/limits customization
2018-08-16 08:47:01 +08:00
Andreas Krüger
d635a97088 Merge pull request #3112 from alvistack/ingress-nginx-0.18.0
ingress-nginx: Upgrade to 0.18.0
2018-08-15 17:07:24 +02:00
Andreas Kruger
9da5d67728 Update CoreDNS to 1.2.0 2018-08-15 13:39:05 +02:00
Wong Hoi Sing Edison
bd413e36a3 ingress-nginx: Upgrade to 0.18.0
Upstream Changes:

-   ingress-nginx 0.18.0 (https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.18.0)
2018-08-15 11:40:42 +08:00
Chad Swenson
2c5781ace1 Merge pull request #2932 from wiremind/efk-fluentd-no-nodeselector
fluentd daemonset: do not set old nodeSelector.
2018-08-14 13:48:30 -05:00
JohnZheng
b50b3430be Disable locksmithd on CoreOS if coreos_auto_upgrade set to false (#3088)
* Disable locksmithd on CoreOS if coreos_auto_upgrade is set to false

* change when format to support multiple conditions
2018-08-14 13:42:16 -05:00
Chad Swenson
0e3518f2ca Merge pull request #2871 from fritchie/lptolerate
Local volume provisioner: tolerate NoSchedule
2018-08-14 13:39:57 -05:00
Chad Swenson
238f04c931 Merge pull request #3097 from sdemura/vagrantfile-playbook
Define custom playbook in Vagrantfile
2018-08-14 13:31:50 -05:00
Chad Swenson
3a85a2f81c Merge pull request #3080 from mirwan/netchecker_template_rendering_filename
Netchecker manifests should not have j2 extension
2018-08-14 13:24:16 -05:00
Chad Swenson
5dbfa0384e Merge pull request #3101 from chenhonggc/uninstall_old_versions_of_docker
Uninstall old versions of Docker
2018-08-14 11:32:23 -05:00
edemsea
80c87db148 Define custom playbook in Vagrantfile
This change allows the playbook used in Vagrant to be
defined by the end user.

This is useful in the case where a developer may want to use
their own playbook that imports Kubespray, but also leverage
the Kubespray Vagrantfile.
2018-08-14 12:12:07 -04:00
Rong Zhang
d0d7777d68 Merge pull request #3108 from riverzhang/upgrade-coredns
Upgrade coredns to 1.1.3
2018-08-15 00:08:09 +08:00
rongzhang
48b6128814 Upgrade coredns to 1.1.3 2018-08-15 00:05:55 +08:00
Maxime Brunet
70b28288a3 Use delegate_to: localhost instead of local_action
Allows using `ansible_become: true` (#2969)
and sets it to `false` for `localhost` with a `host_var`
2018-08-14 10:08:43 -04:00
Rong Zhang
a11e1eba9e Upgrade kubernetes to V1.11.x (#3078)
Upgrade Kubernetes to V1.11.2
The kubeadm configuration file version has been upgraded from v1alpha1 to v1alpha2
Add bootstrap kubeadm-config.yaml with external etcd
2018-08-14 15:13:44 +03:00
Chen Hong
2dfa928c90 Uninstall old versions of Docker 2018-08-14 17:48:30 +08:00
Erwan Miran
d3c0fe1fcb Templates (even without actual templating inside) should have j2 extension but should not be rendered with j2 extension 2018-08-13 09:51:26 +02:00
Rong Zhang
36e8683cf5 Merge pull request #3091 from mauromedda/master
Add the path to kubectl binary
2018-08-13 10:04:59 +08:00
Hedayat Vatankhah
c0221c2e72 Add --dns-loop-detect to dnsmasq used in kube-dns
It prevents DNS loops when the host's DNS server is a localhost DNS server,
or when the cluster's DNS server is also added as an upstream DNS server
2018-08-12 20:36:33 +04:30
mauromedda
9cef20187c Add the path to kubectl binary
The post-remove action fails during the kubectl delete node step with rc 2 (command not found): kubectl is not in the system PATH, so the full path to the binary is required
2018-08-12 10:50:50 +02:00
Anton Fayzrahmanov
95f1e4634a local-volume-provisioner: use mountPropagation HostToContainer and version bump (#3081)
* Update local-volume-provisioner-ds.yml.j2

After v1.10.2 default mountPropagation is "None"

* local_volume_provisioner version bump

v2.1.0 uses the beta nodeAffinity API by default which is available starting 1.10

* Update local-volume-provisioner-ds.yml.j2

MY_NAMESPACE env

* Update README.md

Raw block devices docs.
2018-08-10 17:14:34 +03:00
Matthew Mosesohn
581a30fdec Remove erroneous cloud-config task 2018-08-10 15:59:18 +03:00
Sascha Marcel Schmidt
19e2868484 fix path to bootstrap tear down 2018-08-10 13:42:28 +02:00
Matthew Mosesohn
8b3ce6e418 bump upgrade tests to v2.5.0 commit (#3087) 2018-08-10 13:05:05 +03:00
Andreas Krüger
d8e77600e2 Merge pull request #3066 from luisyonaldo/fix-conditional
fix bad conditional
2018-08-10 10:38:52 +02:00
Cédric de Saint Martin
e3dcd96301 kubedns & kubedns-autoscaler: Stick to master nodes. (#2909)
* kubedns & kubedns-autoscaler: Stick to master nodes.

 - Tolerate only master nodes and not any NoSchedule taint
 - Pods are on different nodes
 - Pods are required to be on a master node.

* kubedns: use soft nodeAffinity.

Prefer to be on a master node, don't require.

* coredns: Stick to (different) master nodes.

     - Pods are on different nodes
     - Pods are preferred to be on a master node.
2018-08-09 10:42:53 -05:00
Chad Swenson
001cae5894 Merge pull request #3028 from Kami-no/cilium
cilium v1.1.2
2018-08-09 10:35:29 -05:00
Erwan Miran
494ff9522b j2 extension should only be used for template filename, not target file on remote host 2018-08-09 11:29:45 +02:00
wikiselev
53aee6dc24 fix glusterfs ppa and glusterfs server command name errors 2018-08-09 10:14:14 +01:00
Luis Nuñez
fd380615a0 fix bad conditional 2018-08-09 10:20:45 +02:00
Rong Zhang
039180b2ca Merge pull request #3022 from alvistack/weave-2.4.0
weave: Upgrade to 2.4.0
2018-08-09 15:01:05 +08:00
Zinin D.A
22b89edbbc cilium v1.1.2
Update all configs to current upstream state.
Add more resources (unable to pass tests now)...
2018-08-08 22:42:50 +03:00
Rong Zhang
4650f04b37 Merge pull request #3075 from okamototk/fix_skipdownloads_condition
Fix skip_downloads condition.
2018-08-08 20:23:01 +08:00
Sascha Marcel Schmidt
9fba448053 refactor to use kube module, finally fix race condition in storage tasks 2018-08-08 14:22:50 +02:00
Takashi Okamoto
82f9652fd8 Fix skip_downloads condition. 2018-08-08 10:56:02 +00:00
Rong Zhang
94ae945bea Merge pull request #2904 from mirwan/var_lib_kubelet_should_not_be_unmounted_when_having_its_own_partition
Only subdirectories in /var/lib/kubelet should be unmounted at reset time
2018-08-08 15:00:54 +08:00
Rong Zhang
f6189885c2 Merge pull request #3037 from okamototk/fix_skipdownload
Fixed checking skip_downloads condition.
2018-08-08 14:58:22 +08:00
Rong Zhang
5c039d87aa Merge pull request #3054 from reverson/1.10-admission
Add support for admission controllers in 1.10 and above
2018-08-08 14:32:11 +08:00
Rong Zhang
08dfb7b59f Merge pull request #3073 from riverzhang/delete-istio
Remove istio support
2018-08-08 13:00:57 +08:00
Rong Zhang
4c0e723ead Merge pull request #3069 from magnuhho/master
contrib/terraform/terraform.py: fix for Ansible 2.6.2+, issue #3067
2018-08-08 11:52:07 +08:00
rongzhang
ea6af449a8 Remove istio support
Use helm install or support in future
2018-08-08 11:10:09 +08:00
Rong Zhang
f72d74f951 Merge pull request #3072 from mathieuherbert/dns-tags
Add tags for coredns and kubedns
2018-08-08 09:58:25 +08:00
Mathieu Herbert
d285565475 Add tags for coredns and kubedns 2018-08-07 20:55:38 +02:00
Robert Everson
4eadf3228e Only add admission plugins if defined 2018-08-07 11:25:03 -07:00
Robert Everson
99c5aa5a02 Use k8s default plugin list 2018-08-07 11:25:03 -07:00
Robert Everson
6ed65d762b Separate out plugins into 2 variables 2018-08-07 11:25:03 -07:00
Robert Everson
ac18f6cf8b Add support for admission controllers in 1.10 and above 2018-08-07 11:25:03 -07:00
Takashi Okamoto
1f7a42f3a4 Fixed checking skip_downloads condition. 2018-08-07 12:03:57 -04:00
Rong Zhang
e71f261935 Merge pull request #3068 from riverzhang/swap
Enable swap
2018-08-07 21:29:41 +08:00
Magnus Holm
fcfe12437c contrib/terraform/terraform.py: fix for Ansible 2.6.2+, issue #3067 2018-08-07 15:22:14 +02:00
rongzhang
b902602d16 Enable swap 2018-08-07 21:13:12 +08:00
Rong Zhang
b1ef336ffa Merge pull request #3001 from alvistack/ingress-nginx-0.17.0
ingress-nginx: Upgrade to 0.17.1
2018-08-07 20:50:53 +08:00
Simon Li
d284961d47 Change heketi-tear-down to run on nodes instead of localhost delegate_to 2018-08-07 13:52:49 +02:00
Simon Li
8ac57201a7 Prefix heketi kubectl calls with {{ bin_dir }} 2018-08-07 13:48:16 +02:00
Wong Hoi Sing Edison
538cb3b1bd weave: Upgrade to 2.4.0
Upstream Changes:

-   weave 2.4.0 (https://github.com/weaveworks/weave/releases/tag/v2.4.0)
-   Support `externalTrafficPolicy: Local` (https://github.com/weaveworks/weave/issues/2924)
-   Make the ipset list size bigger (https://github.com/weaveworks/weave/pull/3305)
-   Break out of kube rm-peers loop if nothing changes (https://github.com/weaveworks/weave/pull/3317)

Our Changes:

-   Revamp weave-net.yml.j2 with upstream changes
-   Add more variables for customization
-   Replace WEAVE_PASSWORD with k8s secret
-   Remove hard-coded seed mode support, in favor of variable customization
2018-08-07 18:34:51 +08:00
Wong Hoi Sing Edison
17e335c6a7 ingress-nginx: Upgrade to 0.17.1
Upstream Changes:

-   ingress-nginx 0.17.1 (https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.17.1)
-   Remove duplicated `securityContext` (https://github.com/kubernetes/ingress-nginx/pull/2705)
-   Remove --publish-service flag, in favor of DaemonSet + hostPort

Close #2998
Close #2999
2018-08-07 18:31:08 +08:00
Rong Zhang
280d6cac1a Merge pull request #2997 from alvistack/cert-manager-0.4.0
cert-manager: Upgrade to 0.4.0
2018-08-07 18:00:46 +08:00
Rong Zhang
c288ffc55d Merge pull request #2342 from southquist/add-ca-cert
allow for setting the cacert on openstack cloud provider
2018-08-07 17:46:01 +08:00
Rong Zhang
9075dbdd3c Merge pull request #2875 from bradbeam/movault
Adding cluster_name to api cert alt name for vault
2018-08-07 17:36:04 +08:00
Rong Zhang
16bd0d2b5d Merge pull request #2900 from drekle/configure_openstack_subnet_CIDR
Configure openstack subnet cidr
2018-08-07 17:27:01 +08:00
Rong Zhang
7850bce254 Merge pull request #2994 from DBLaci/master
dashboard_token_ttl option override possibility with default
2018-08-07 17:16:25 +08:00
Rong Zhang
3d19e03294 Merge pull request #3015 from podnov/kube_proxy_healthz_bind_address
Variablize kube_proxy_healthz_bind_address
2018-08-07 17:10:33 +08:00
Rong Zhang
496cb306bc Merge pull request #3050 from woosley/master
update .gitignore
2018-08-07 17:01:51 +08:00
Rong Zhang
b1f8bfdf7c Merge pull request #3055 from reverson/17.09-docker
Add support for docker 17.09
2018-08-07 16:57:50 +08:00
Rong Zhang
2c38e4e1ac Merge pull request #3059 from okumin/fix-glusterfs-group_vars
Fix a broken symbolic link for group_vars
2018-08-07 16:55:32 +08:00
Rong Zhang
411d07a4f6 Merge pull request #3047 from rguichard/openstack-az-support
availability zones support for OpenStack
2018-08-07 16:51:41 +08:00
Rong Zhang
7d3a6541d7 Merge pull request #3065 from freeseacher/patch-1
Service file binary place mismatch
2018-08-07 16:48:56 +08:00
Wong Hoi Sing Edison
0f400a113c cert-manager: Upgrade to 0.4.0
Upstream Changes:

-   cert-manager 0.4.0 (https://github.com/jetstack/cert-manager/releases/tag/v0.4.0)
2018-08-07 14:29:28 +08:00
Aleksey Shirokih
e8447e3d71 Service file binary place mismatch
According to cluster/binary.yml the vault binary will be placed in `{{ bin_dir }}`,
whose value is set in `inventory/sample/group_vars/all.yml`
2018-08-06 14:44:13 +03:00
Rong Zhang
f086b6824e Merge pull request #3064 from riverzhang/yamlroles
Fix yaml roles error
2018-08-05 18:51:02 +08:00
rongzhang
ac644ed049 Fix yaml roles error 2018-08-05 18:48:07 +08:00
Rong Zhang
453fea1977 Merge pull request #3034 from cornelius-keller/library_fix
fix missing libraries on newer coreos versions
2018-08-05 12:54:03 +08:00
okumin
a953f1ca8b Fix a broken symbolic link for group_vars 2018-08-04 23:49:06 +09:00
cornelius-keller
4b5cb1185f fix missing libraries on newer coreos versions 2018-08-03 15:29:05 +02:00
Robert Everson
275cdc1ce3 Add support for docker 17.09 2018-08-02 11:35:16 -07:00
woosley.xu
8d6f67e476 update .gitignore
- add vim default backup file *~
- remove duplicated *sw[pon]
2018-08-02 11:30:55 +08:00
Rong Zhang
9172150966 Merge pull request #3044 from jerryrelmore/patch-4
Clarify etcd deployment script failure mechanism
2018-08-01 22:57:14 +08:00
Rong Zhang
1f2831967e Merge pull request #3041 from woosley/master
set LC_ALL=C for growpart
2018-08-01 22:54:19 +08:00
rguichard
c19643cee2 availability zones support for OpenStack
allow masters, nodes and gluster nodes (within each group) to be scheduled
on different AZs.
2018-08-01 16:42:58 +02:00
Rong Zhang
a5c165bb13 Merge pull request #3033 from rguichard/remotes/fork/master
add openstack security group for traffic to 30000-32767/tcp on worker nodes
2018-08-01 22:34:14 +08:00
DBLaci
d43f09081e Merge pull request #1 from kubernetes-incubator/master
Follow upstream
2018-08-01 16:34:10 +02:00
Jerry Elmore
1385091768 Clarify etcd deployment script failure mechanism
Attempting to clarify the language surrounding the etcd node deployment script failure mechanism. I had this error when doing a new cluster deployment last night and, though it should have been, it wasn't immediately apparent to me what was causing the issue (since my default master node hostnames do not specify whether they are also acting as etcd replicas).
2018-07-31 15:15:49 -04:00
Jerry Elmore
e30847e231 Update openstack.md
Neutron cli is deprecated - replaced neutron cli commands with equivalent openstack cli commands.
2018-07-31 14:34:04 -04:00
woosley.xu
72074f283b set locale for growpart, part 2 2018-07-31 06:56:09 +08:00
woosley.xu
a5db3dbea9 set locale for growpart 2018-07-31 06:52:56 +08:00
Rong Zhang
a2c9331b56 Merge pull request #3031 from a14n/patch-1
Fix label of registry in README
2018-07-27 21:38:27 +08:00
rguichard
1a38a9df88 add security groups for traffic to 30000-32767/tcp
This will make NodePort services work out of the box
2018-07-27 14:57:29 +02:00
Alexandre Ardhuin
9b349a9049 Fix label of registry in README 2018-07-27 11:42:21 +02:00
Chad Swenson
329e97c4d3 Merge pull request #3018 from seungkyua/remove_double_slash
Remove double slash
2018-07-25 12:31:46 -05:00
Sascha Marcel Schmidt
2bd8fbb2dd add missing templates 2018-07-25 16:46:12 +02:00
Sascha Marcel Schmidt
205ea33b10 "fix" race condition 2018-07-25 16:42:57 +02:00
Sascha Marcel Schmidt
c42397d7db run kubectl on one of the masters 2018-07-25 16:42:30 +02:00
Seungkyu Ahn
0366600b45 Remove double slash
Even without this PR, the operation works well.
However, it is better to use a single slash rather than
a double slash in the path.
2018-07-20 07:34:33 +00:00
Evan Zeimet
6a4ce96b7d Variablize kube_proxy_healthz_bind_address
This fixes #3014
2018-07-19 14:19:09 -05:00
DBLaci
b61c64a8ea token-ttl default value is int in seconds 2018-07-19 12:15:47 +02:00
Andreas Krüger
ca62c75bdf Merge pull request #2990 from Miouge1/update-adding-node-doc
Include etcd and masters in adding node doc
2018-07-19 11:55:55 +02:00
Rong Zhang
38bd328abb Merge pull request #2995 from okamototk/fix_kubectl_path
Fixed kubectl path.
2018-07-18 22:31:38 +08:00
Takashi Okamoto
37ccf7e405 Fixed kubectl path. 2018-07-13 15:32:08 +00:00
DBLaci
cb91003cea dashboard_token_ttl option override possibility with default 2018-07-13 15:26:18 +02:00
Miouge1
4ad7b229d3 Include etcd and masters in adding node doc 2018-07-12 17:22:11 +02:00
Matthew Mosesohn
97e0de7e29 Fix vault file owner issues and k8s apiserver cert creation (#2985)
apiserver cert should be created only once
2018-07-11 14:58:02 +03:00
Rong Zhang
83d1486a67 Merge pull request #2984 from mattymo/docker_tag
add docker upgrade tag doc
2018-07-10 20:57:34 +08:00
Matthew Mosesohn
9081b3f914 add docker upgrade tag doc 2018-07-10 13:37:37 +03:00
Rong Zhang
cf445fd4fe Merge pull request #2930 from alvistack/ingress-nginx-0.16.1
ingress-nginx: Upgrade to 0.16.2
2018-07-10 14:42:37 +08:00
Aivars Sterns
72f053d9bb Merge pull request #2972 from mattymo/force_cni_cp
Force copy cni files
2018-07-10 09:40:10 +03:00
Wong Hoi Sing Edison
a0defefb3f ingress-nginx: Upgrade to 0.16.2
ingress-nginx 0.16.2 (https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.16.2)

This patch simplifies the ingress-nginx deployment by deploying on the
master by default, with customizable options; on the other hand, it removes the
additional Ansible group "kube-ingress" and its k8s node label
injection.

Reference to https://kubernetes.io/docs/concepts/services-networking/ingress/#prerequisites:

    GCE/Google Kubernetes Engine deploys an ingress controller on the master.

By changing `ingress_nginx_nodeselector` plus a custom k8s node
label, users can customize the DaemonSet deployment target.

If `ingress_nginx_nodeselector` is empty, the DaemonSet will be deployed on
every k8s node.
2018-07-10 12:26:06 +08:00
Rong Zhang
9e19159547 Merge pull request #2935 from alvistack/cert-manager-0.3.1
cert-manager: Upgrade to 0.3.2
2018-07-10 12:05:31 +08:00
Wong Hoi Sing Edison
62b1166911 cert-manager: Upgrade to 0.3.2
Upstream Changes:

-   cert-manager 0.3.2 (https://github.com/jetstack/cert-manager/releases/tag/v0.3.2)

Our Changes:

-   Remove legacy addon dir, manifests and namespace before upgrade
2018-07-10 08:48:44 +08:00
Rong Zhang
810596c6d8 Merge pull request #2974 from alvistack/cephfs-provisioner-1.1.0-k8s1.10
cephfs-provisioner: Upgrade to 1.1.0-k8s1.10
2018-07-09 13:53:07 +08:00
Rong Zhang
a488d55c2c Merge pull request #2975 from daohoangson/remove_force_disable_kube_basic_auth
Remove step that disables `kube_basic_auth`.
2018-07-08 21:18:36 +08:00
Rong Zhang
8106f1c86d Merge pull request #2977 from pennycoders/master
Fix 2976
2018-07-08 21:17:37 +08:00
Sascha Marcel Schmidt
306a6a751f wait for job to complete 2018-07-08 13:16:25 +02:00
Sascha Marcel Schmidt
318c69350e pin heketi image version 2018-07-08 13:15:54 +02:00
Alexandru Bogdan Pica
e63bc65a9d Fix 2976
Fix failure when the container attribute is not set for a download
2018-07-08 13:36:47 +03:00
Dao Hoang Son
d306c9708c Remove step that force-disables kube_basic_auth.
The referenced issue (https://github.com/kubernetes/kubeadm/issues/441) has already been fixed.
2018-07-08 16:57:43 +07:00
Wong Hoi Sing Edison
6a65345ef3 cephfs-provisioner: Upgrade to 1.1.0-k8s1.10
Upstream Changes:

-   Update CEPH_VERSION to mimic (https://github.com/kubernetes-incubator/external-storage/pull/841)

Our Changes:

-   Using image from official repo which contains the latest changes (https://quay.io/repository/external_storage/cephfs-provisioner)
2018-07-08 00:37:08 +08:00
Rong Zhang
f1e348ab95 Merge pull request #2971 from elementyang/calico-pr
change create to apply
2018-07-07 09:13:57 +08:00
Matthew Mosesohn
1a3b9dd864 Force copy cni files 2018-07-06 16:39:42 +03:00
elementyang
8fee1ab102 change create to apply 2018-07-06 19:36:19 +08:00
Matthew Mosesohn
5c617c5a8b Add tags to deploy components by --tags option (#2960)
* Add tags for cert serial tasks

This will help facilitate tag-based deployment of specific components.

* fixup kubernetes node
2018-07-06 09:12:13 +03:00
Sascha Marcel Schmidt
6d1804d8a4 also remove storage class 2018-07-05 14:19:18 +02:00
Sascha Marcel Schmidt
ee67ece641 suppress unnecessary change 2018-07-05 14:18:27 +02:00
Matthew Mosesohn
0b939a495b Improve vault etcd initialization check (#2959) 2018-07-05 12:27:45 +03:00
Rong Zhang
4d7426ec95 Fix terraform env Not effective (#2966)
Add TF_VAR_ to terraform env
2018-07-05 12:20:02 +03:00
Sascha Marcel Schmidt
f703814561 add tear down playbook 2018-07-05 02:15:05 +02:00
Sascha Marcel Schmidt
c39835628d prevent some race conditions, increase over all time limits 2018-07-05 02:14:36 +02:00
Sascha Marcel Schmidt
1253725975 add necessary chdir 2018-07-04 19:31:25 +02:00
Sascha Marcel Schmidt
f4c1d6a5d7 remove unnecessary check for existing artifact 2018-07-04 19:08:02 +02:00
Sascha Marcel Schmidt
d7abdced05 fix typo 2018-07-04 18:58:45 +02:00
Sascha Marcel Schmidt
78aeef074e add hint on how to install heketi-cli 2018-07-04 18:40:48 +02:00
Sascha Marcel Schmidt
0b7aa33bc2 add jmespath as requirement 2018-07-04 18:25:35 +02:00
elementyang
5a4f07adca change the way of getting etcd_member_name 2018-07-05 00:06:37 +08:00
Aivars Sterns
4092f96dd8 Merge pull request #2946 from Miouge1/remove-pid-predicate
CheckNodePIDPressure is not supported in v1.10
2018-07-04 18:30:19 +03:00
elementyang
effd27a5f6 change the way of getting etcd_member_name 2018-07-03 22:02:44 +08:00
Rong Zhang
fa003af8f0 Merge pull request #2954 from aioue/patch-1
Update README.md
2018-07-03 19:43:22 +08:00
Rong Zhang
77c870b7d0 Merge pull request #2951 from alvistack/cephfs-provisioner-06fddbe2
cephfs-provisioner: Upgrade to 06fddbe2
2018-07-03 19:36:42 +08:00
Rong Zhang
32a6ca4fd6 Merge pull request #2948 from qeqar/remove-node-limit
move node selection from --limit to --extra-vars=node<nodename>"
2018-07-03 18:41:57 +08:00
Tom Paine
958eca2863 Update README.md 2018-07-03 11:39:51 +02:00
Mark Eisenblaetter
af635ff3ff [remove-node] add docs for nodeselector 2018-07-03 10:38:37 +02:00
Wong Hoi Sing Edison
728024e8ff cephfs-provisioner: Upgrade to 06fddbe2
-   cephfs-provisioner 06fddbe2 (https://github.com/kubernetes-incubator/external-storage/tree/06fddbe2/ceph/cephfs)

Notable changes from upstream:

-   Added storage class parameters to specify a root path within the backing cephfs and, optionally, use deterministic directory and user names (https://github.com/kubernetes-incubator/external-storage/pull/696)
-   Support capacity (https://github.com/kubernetes-incubator/external-storage/pull/770)
-   Enable metrics server (https://github.com/kubernetes-incubator/external-storage/pull/797)

Other notable changes:

-   Clean up legacy manifests file naming
-   Remove legacy manifests, namespace and storageclass before upgrade
-   `cephfs_provisioner_monitors` simplified as string
-   Default to new deterministic naming
-   Add `reclaimPolicy` support in StorageClass

With the legacy non-deterministic naming style (where $UUID is generated randomly):

-   cephfs_provisioner_claim_root: /volumes/kubernetes
-   cephfs_provisioner_deterministic_names: false
-   Generated CephFS volume: /volumes/kubernetes/kubernetes-dynamic-pvc-$UUID
-   Generated CephFS user: kubernetes-dynamic-user-$UUID

With the new default deterministic naming style (where $NAMESPACE and $PVC are predictable; both schemes are restated in the sketch after this entry):

-   cephfs_provisioner_claim_root: /volumes
-   cephfs_provisioner_deterministic_names: true
-   Generated CephFS volume: /volumes/$NAMESPACE/$PVC
-   Generated CephFS user: k8s.$NAMESPACE.$PVC
2018-07-03 10:15:24 +08:00
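The two naming schemes above can be restated as a tiny, purely illustrative Python sketch; the namespace and PVC names are made up, while the path and user patterns are the ones listed in this commit message:

```python
# Illustrative restatement of the two CephFS naming schemes described above.
def legacy_names(uuid):
    # Legacy, non-deterministic: a random UUID ends up in both names.
    volume = f"/volumes/kubernetes/kubernetes-dynamic-pvc-{uuid}"
    user = f"kubernetes-dynamic-user-{uuid}"
    return volume, user

def deterministic_names(namespace, pvc):
    # New default: names are derived from the namespace and PVC only.
    volume = f"/volumes/{namespace}/{pvc}"
    user = f"k8s.{namespace}.{pvc}"
    return volume, user

print(deterministic_names("default", "data-claim"))
# ('/volumes/default/data-claim', 'k8s.default.data-claim')
```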
Mark Eisenblaetter
b548f6f320 move node selection from --limit to --extra-vars=node<nodename>" 2018-07-02 20:04:36 +02:00
Rong Zhang
62df6ac724 Merge pull request #2952 from scality/coredns-typo
Fix `coreos_dual` -> `coredns_dual` typo
2018-07-02 23:50:59 +08:00
Nicolas Trangez
8bcad4f5ef Fix coreos_dual -> coredns_dual typo
See: e40368ae2b
2018-07-02 17:19:35 +02:00
Rong Zhang
31e6c44b07 Merge pull request #2924 from elementyang/make-ssl-etcd-pr
fix the time of ca files are changed in make-ssl-etcd
2018-07-02 20:44:20 +08:00
Matthew Mosesohn
77c910c1c3 Fixup vault etcd check (#2938)
* Fixup vault etcd

* Update main.yml
2018-07-02 15:37:37 +03:00
Matthew Mosesohn
c20196f9a0 Remove modprobe binary from kubelet rkt deployment (#2917) 2018-07-02 15:37:24 +03:00
Rong Zhang
f6a15b1829 Merge pull request #2918 from elementyang/fix-pr
fix add etcd_events_access_address
2018-06-30 11:55:38 +08:00
elementyang
7c22def422 add etcd_events_access_address 2018-06-30 07:32:29 +08:00
Rong Zhang
87e49f0055 Merge pull request #2921 from elementyang/index-out-of-range-pr
fix template index out of range for pull images
2018-06-30 00:53:53 +08:00
Matthew Mosesohn
a36e3fbec3 Add rkt gc task (#2945) 2018-06-29 19:53:21 +03:00
Derek Lemon
4bceaf77ee Merge branch 'master' of https://github.com/kubernetes-incubator/kubespray 2018-06-29 16:40:16 +00:00
Rong Zhang
35a3597416 Merge pull request #2941 from amaya382/fix-dns-doc
Fix default value for dns_mode on the document
2018-06-29 22:24:31 +08:00
Miouge1
2a279e30b0 CheckNodePIDPressure is not supported in v1.10 2018-06-28 20:10:38 +02:00
Andreas Holmsten
b900bd6e94 [contrib/terraform/openstack] Add supplementary node groups
* Add supplementary node groups

  To add additional ansible groups to the k8s nodes, such as
  `kube-ingress` for running ingress controller pods. Empty by default.
2018-06-28 16:46:20 +02:00
southquist
c685dc493f allow for setting the cacert on openstack cloud provider 2018-06-28 16:00:13 +02:00
amaya
aacc89e4e6 Fix default value for dns_mode on the document 2018-06-28 17:08:27 +09:00
Sascha Marcel Schmidt
8e275ab2bd change order and validation of bootstrap and rest tasks as well as
volumes
2018-06-27 12:30:14 +02:00
Andreas Krüger
e24f888bc4 Merge pull request #2923 from bradbeam/vaultrkt
Adding uuidfile for rkt based vault to properly cleanup after itself
2018-06-27 11:18:39 +02:00
Sascha Marcel Schmidt
b56f465145 fix creation of heketi volumes and storage provisioning validation 2018-06-27 10:12:23 +02:00
Sascha Marcel Schmidt
74cad6b811 pin versions of container images 2018-06-27 10:11:14 +02:00
Andreas Krüger
3d2ea28c96 Merge pull request #2926 from neith00/coreos_rkt
No need to install rkt on CoreOS
2018-06-26 10:58:16 +02:00
Cédric de Saint Martin
a260412c7e fluentd daemonset: do not set arbitrary nodeSelector. 2018-06-25 15:19:56 +02:00
Sascha Marcel Schmidt
8ef0cf771f update link 2018-06-25 15:09:22 +02:00
Sascha Marcel Schmidt
9516170ce5 remove unnecessary become flag 2018-06-25 15:09:19 +02:00
Sascha Marcel Schmidt
5aefa847df add fences 2018-06-25 15:09:16 +02:00
Sascha Marcel Schmidt
831ef7ea2c add readme 2018-06-25 15:09:13 +02:00
Sascha Marcel Schmidt
9c7e30e4b4 add sample inventory 2018-06-25 15:09:03 +02:00
Sascha Marcel Schmidt
8c5bfc7718 add debian compatibility 2018-06-25 15:08:53 +02:00
Sascha Marcel Schmidt
61046a6923 move heketi playbook 2018-06-25 15:08:35 +02:00
Sascha Marcel Schmidt
9d2fabc9b9 add heketi/glusterfs as additional contributional network storage 2018-06-25 15:08:18 +02:00
neith00
a643f72d93 No need to install rkt on CoreOS 2018-06-25 09:38:24 +02:00
Aivars Sterns
73a2a18006 Merge pull request #2795 from gfkse/baremetal-override-calico-hostname
Make Calico nodename overridable on bare metal
2018-06-25 08:45:09 +03:00
Rong Zhang
2ef05fb3b7 Merge pull request #2763 from ameukam/update_efk_stack
Update efk stack
2018-06-24 19:01:32 +08:00
Rong Zhang
e06d02365e Merge pull request #2338 from southquist/template-openstack-storage-class
allow for configurable openstack storage class
2018-06-24 18:42:29 +08:00
elementyang
d6f2dbc723 fix the time of ca files are changed in make-ssl-etcd 2018-06-24 13:05:43 +08:00
Brad Beam
20dba8b388 Adding uuidfile for rkt based vault to properly cleanup after itself 2018-06-23 15:14:40 -05:00
Rong Zhang
f624ba47fb Merge pull request #2922 from riverzhang/remove-node
Add run_once to remove-node
2018-06-23 15:09:16 +08:00
rongzhang
94aa062d51 Add run_once to remove-node 2018-06-23 07:05:24 +00:00
elementyang
c0935e161b fix template index out of range for pull images 2018-06-23 05:32:44 +08:00
elementyang
70fbc01cc1 fix etcd_events_access_addresses 2018-06-23 00:04:19 +08:00
Yumo Yang
6c2f169ea2 update test-pr2 (#2911) 2018-06-22 13:22:26 +03:00
Rong Zhang
c230e617f0 Merge pull request #2891 from earlruby/fix-python-pip-version-flag-in-readme
Fix the Python and pip version flag in the README
2018-06-22 14:10:39 +08:00
Rong Zhang
1aee6ec371 Merge pull request #2903 from riverzhang/swap
Add manage swap on the worker node
2018-06-21 22:20:23 +08:00
Erwan Miran
d3fdfee211 Only subdirectories in /var/lib/kubelet should be unmounted 2018-06-21 11:50:02 +02:00
rongzhang
3232e2743e Add manage swap on the worker node 2018-06-21 08:15:01 +00:00
Andreas Krüger
cbb959151c Merge pull request #2737 from Miouge1/update-scheduler
Update kube-scheduler policy
2018-06-19 14:53:22 +02:00
Andreas Krüger
c3d8b131db Merge pull request #2801 from dvazar/bugfix/undefined__network_plugin__variable
Fixed "network_plugin" variable
2018-06-19 10:01:06 +02:00
Andreas Krüger
236d1a448d Merge pull request #2898 from kubernetes-incubator/default_true_authtoken
Enable by default the kubelet token auth
2018-06-19 09:56:32 +02:00
Andreas Krüger
cfd51b1ac7 Merge pull request #2899 from mattymo/etcd_events_var_clarity
Improve variable handling for disabling etcd events cluster
2018-06-19 09:55:56 +02:00
Matthew Mosesohn
61e97251a5 Improve variable handling for disabling etcd events cluster 2018-06-18 16:58:29 +03:00
Antoine Legrand
c192a01b20 Enable by default the kubelet token auth 2018-06-18 14:20:05 +02:00
Henry Finucane
3ad9e9c5eb Fix #2261 by supporting Red Hat's limited PATH
Red Hat has this theory that binaries in sbin are too dangerous to be on
the default path, but we need them anyway.

RH7 has /sbin and /usr/sbin as symlinks, so that is no longer important.

I'm adding it to the `PATH` instead of making the path to `modinfo`
absolute because I am worried about breaking support for other
distributions. (A rough sketch of this PATH lookup follows this entry.)
2018-06-15 12:49:22 -07:00
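As a rough illustration of the lookup this commit relies on (not the actual Ansible change), the following Python sketch shows how extending the search path, rather than hard-coding /usr/sbin/modinfo, keeps the lookup portable; the specific sbin directories appended here are assumptions:

```python
import os
import shutil

# On RHEL-family hosts, modinfo typically lives in /usr/sbin, which may not be
# on a non-root user's default PATH. Appending the sbin directories to the
# search path (instead of hard-coding an absolute path) keeps the lookup
# working on distributions that place modinfo elsewhere.
search_path = os.pathsep.join(
    [os.environ.get("PATH", ""), "/sbin", "/usr/sbin", "/usr/local/sbin"]
)
print(shutil.which("modinfo", path=search_path))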
Earl C. Ruby III
97a05ff34a Fix the Python and pip version flag in the README
The README says to check if Python and pip are installed by typing:

```
python -v && pip -v
```

Lowercase `-v` is `--verbose`, uppercase `-V` is `--version`. The
command should be:

```
python -V && pip -V
```
2018-06-15 11:10:29 -07:00
Julien Mailleret
6aaaf4a272 Limit the maximum number of revisions saved per helm release (#2894)
* Limit the maximum number of revisions saved per helm release
2018-06-15 12:50:18 +02:00
Andreas Krüger
cd64f41524 Merge pull request #2844 from chechiachang/fix-inconsistent-variable-in-task-name-and-msg
Fix inconsistent variables in task name and task message
2018-06-15 09:19:31 +02:00
Andreas Krüger
df279b1ff6 Merge pull request #2890 from drekle/bugfix/dns-domain-incorrect-for-coredns
CoreDNS uses cluster_name instead of dns_domain
2018-06-15 09:06:11 +02:00
Derek Lemon
aa859bc640 Merge pull request #2 from drekle/configure_openstack_subnet_CIDR
Configure openstack subnet cidr
2018-06-14 15:15:51 -06:00
Andreas Krüger
6ac601fd2d Merge pull request #2876 from neith00/docker_iptables
parametrized iptables options for docker daemon
2018-06-14 22:23:27 +02:00
Andreas Krüger
3a569c9dcb Merge pull request #2750 from w-leads/feature/add-vmname-to-vcp-config
Add vm_name option to vsphere cloud provider config
2018-06-14 22:22:34 +02:00
Derek Lemon
27d62941b2 Add the subnet_cidr as a required argument to the network module 2018-06-14 17:41:58 +00:00
Derek Lemon
ab345c5f69 Change was not picked up 2018-06-14 17:31:04 +00:00
Derek Lemon
a06f641b6c Configurable openstack subnet cidr 2018-06-14 16:40:32 +00:00
neith00
f2f1e7f9d1 parametrized iptables options for docker daemon 2018-06-14 12:16:16 +02:00
Rong Zhang
0686b8452e Merge pull request #2860 from alvistack/cert-manager-0.3.0
cert-manager: Upgrade to v0.3.0
2018-06-14 10:35:23 +08:00
Derek Lemon
72504d26dc Merge pull request #1 from drekle/bugfix/dns-domain-incorrect-for-coredns
appropriately use dns_domain instead of cluster_name for coredns for coredns config map
2018-06-13 14:01:00 -06:00
Derek Lemon
1e98e8444e Using dns domain instead of cluster name for coredns, incase they differ 2018-06-13 18:52:35 +00:00
Rong Zhang
f216e7339b Merge pull request #2629 from alvistack/cephfs-provisioner-namespace
Fixup #2545, cephfs-provisioner: Individual Namespace for Add-on
2018-06-13 22:42:20 +08:00
Wong Hoi Sing Edison
291dd1aca8 Fixup #2545, cephfs-provisioner: Individual Namespace for Add-on 2018-06-13 21:52:58 +08:00
Wong Hoi Sing Edison
38da0adead cert-manager: Upgrade to v0.3.0 2018-06-13 21:47:44 +08:00
Rong Zhang
81b3343796 Merge pull request #2857 from alvistack/ingress-nginx-0.15.0
ingress-nginx: Upgrade to 0.15.0
2018-06-13 21:16:17 +08:00
Rong Zhang
f2c160e7e0 Merge pull request #2872 from riverzhang/kube-proxy
Reconfigure kube-proxy to access kube-apiserver via the LB(kubeadm)
2018-06-13 17:43:34 +08:00
Brad Beam
3d819a6edd Adding cluster_name to api cert alt name for vault 2018-06-12 14:15:07 -05:00
rongzhang
20bd656975 Reconfigure kube-proxy to access kube-apiserver via the LB(kubeadm) 2018-06-12 12:53:50 +00:00
Frank Ritchie
cfe939ff08 Tolerate NoSchedule by default 2018-06-11 20:10:13 -04:00
Wong Hoi Sing Edison
9f245dd9b2 ingress-nginx: Upgrade to 0.15.0 2018-06-08 16:05:15 +08:00
Rong Zhang
cf8e9eed69 Merge pull request #2853 from pomverte/patch-1
docs(azure arm): update link azure cli login
2018-06-08 01:24:29 +08:00
Rong Zhang
10c9fe96b0 Merge pull request #2859 from riverzhang/nginx
Fix nginx-proxy HA when kubeadm enable
2018-06-08 01:10:01 +08:00
Rong Zhang
42b24616ac Merge pull request #2856 from alvistack/kubernetes-1.10.4
Upgrade Kubernetes to 1.10.4 and etcd to 3.2.18
2018-06-07 23:54:03 +08:00
rongzhang
f9ccb93825 Fix nginx-proxy HA when kubeadm enable 2018-06-07 14:27:19 +00:00
Aivars Sterns
daeea75fbb Merge pull request #2835 from oracle/bm_fix-apiserver-access-ip
roles/kubernetes/client: kubeconfig template should use access_ip
2018-06-07 11:50:57 +03:00
Wong Hoi Sing Edison
0ad0202e8f Upgrade Kubernetes to 1.10.4 and etcd to 3.2.18 2018-06-07 16:20:29 +08:00
hvle
a2a26755fe docs(azure cli): update links
install and login links
2018-06-07 07:10:33 +02:00
Brad Beam
1f02cc70f1 Merge pull request #2825 from dshuvar/dshuvar/docker-options.conf
Changed the /etc/systemd/system/docker.service.d/docker-options.conf file so that mount arguments are parsed successfully
2018-06-06 12:56:18 -05:00
Brad Beam
fe010504aa Merge pull request #2851 from bradbeam/vaultnotify
Adding wait for vault up handler in service restart
2018-06-06 12:49:03 -05:00
Brad Beam
05e3c76b1d Merge pull request #2852 from bradbeam/etcdeventsrkt
Adding missing rkt template for etcd-events
2018-06-06 12:48:31 -05:00
Brad Beam
63a458063b Adding missing rkt template for etcd-events 2018-06-06 10:43:30 -05:00
Brad Beam
a8715f9f0f Adding wait for vault up handler in service restart 2018-06-06 10:40:27 -05:00
Matthew Mosesohn
59be578842 Revert "wip pr for improved cert sync" (#2849) 2018-06-06 17:22:25 +03:00
Aivars Sterns
cb0a257349 Merge pull request #2819 from oleh-ozimok/fix-cidr-assert
Fix enough network address space assert
2018-06-06 07:32:16 +03:00
Di Xu
1081f620d2 add support for non-amd64 arch gcr.io images
Currently all the gcr.io images used in kubespray can only run on x86.
Also, gcr.io does not yet fully support multi-arch docker images.

Add extra var "image_arch" (default is amd64) to support running other
platforms, like arm64.

Change-Id: I8e1c9af533c021cb96ade291a1ce58773b40e271
2018-06-05 17:29:02 +08:00
David Chang
e1cfe83825 Fix inconsistent variables in task name and task message 2018-06-05 16:45:02 +08:00
Di Xu
6019a84fb3 Update docker package info for aarch64
The corresponding docker-engine package is missing on aarch64, so use docker instead.

Change-Id: If5df58337746a81752b5d477e0473600eaee8381
2018-06-05 16:30:28 +08:00
Di Xu
f4d762bb95 fix docker opts incompatibility when running on aarch64 RedHat/CentOS
On Aarch64, the default cgroup driver for docker is systemd
instead of cgroupfs. kubelet should be configured to use systemd
as its cgroup driver as well, to keep it consistent with docker.

Without this change, the exception below will be raised.
/usr/bin/docker-current: Error response from daemon: shim
error: docker-runc not installed on system.

Change-Id: Id496ec9eaac6580e4da2f3ef1a386c9abc2a5129
2018-06-05 16:17:16 +08:00
Aivars Sterns
69ea28e187 Merge pull request #2827 from mattymo/testpr
wip pr for improved cert sync
2018-06-04 12:43:00 +03:00
Ben Meier
2f5a9e180c kubernetes/client: kubeconfig template should use the access_ip for the chosen master node 2018-06-04 09:51:05 +01:00
Dmitry
f912a4ece5 Fix compare AnsibleUnsafeText with int (#2828) 2018-06-04 11:34:10 +03:00
Rong Zhang
d1e66f9cc8 Add label to kubelet env for kubeadm deploy cluster (#2841) 2018-06-04 11:26:47 +03:00
Aivars Sterns
1a25903583 Merge pull request #2838 from kubernetes-incubator/ant31-patch-1
Remove the HUGE gitlab logo
2018-06-02 13:19:22 +03:00
Antoine Legrand
0728a2a78a Update README.md
Remove the HUGE gitlab logo
2018-06-01 11:30:40 +02:00
Aivars Sterns
b67cf74c5e Merge pull request #2823 from scality/dashboard_in_cluster_info
Dashboard in cluster info
2018-05-31 15:48:25 +03:00
Aivars Sterns
2832a1cdcd Merge pull request #2821 from MithunMJ/patch-1
Update README.md
2018-05-31 11:43:59 +03:00
Aivars Sterns
4e0ed1ea50 Adding SECURITY_CONTACTS fixes #2816 (#2833) 2018-05-31 10:48:49 +03:00
Andreas Krüger
164122555d Merge pull request #2822 from mirwan/contiv_etcd_init_image
contiv-etcd-init image as default instead hardcoded
2018-05-31 09:35:39 +02:00
Erwan Miran
11d87ecc37 removed surnumerary definition of contiv_etcd_init_image_* (already in download role) 2018-05-31 00:02:11 +02:00
Matthew Mosesohn
7433348aae wip pr for improved cert sync 2018-05-30 12:15:11 +03:00
Erwan Miran
3673ed6262 include contiv_etcd_init_image to downloads role 2018-05-29 17:05:33 +02:00
Dmitrii Shuvar
16f860bbc2 Update docker-options.conf.j2
Changed the /etc/systemd/system/docker.service.d/docker-options.conf file so that mount arguments are parsed successfully
try to fix CI error from previous commit
2018-05-29 12:40:33 +03:00
dshuvar
d973ecf5cc fix error message: '[/etc/systemd/system/docker.service.d/docker-options.conf:3] Failed to parse mount flag , ignoring.' 2018-05-28 18:23:15 +03:00
Julien Girardin
f88cd27686 Add dashboard url as part of kubectl cluster-info output 2018-05-28 11:46:11 +02:00
Erwan Miran
2a4fc70e1c contiv-etcd-init image as default instead hardcoded 2018-05-28 11:11:18 +02:00
Mithun Arunan
c9c12129fd Update README.md
fix gitlab logo
2018-05-28 13:04:40 +05:30
Oleg Ozimok
38f7ba2584 Fix enough network address space assert 2018-05-27 18:01:17 +03:00
Bogdan Dobrelya
c4b1808983 Use relative paths for data_files in setup.cfg (#2812)
pip install doesn't work with absolute paths

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2018-05-25 11:57:03 +02:00
Aivars Sterns
f3ed740a75 Merge pull request #2793 from lpaulmp/use-env-python-header
Set widely used header to execute python scripts in different OS
2018-05-25 08:27:41 +03:00
dvazar
b3f9cae820 fixed a check unknown networks (cilium & contiv) 2018-05-22 16:43:19 +07:00
Andreas Krüger
a67bdff28c Merge pull request #2743 from mrostecki/opensuse-tumbleweed-openssl
opensuse: Fix OpenSSL package name
2018-05-22 11:21:04 +02:00
Andreas Krüger
e3c8b230a0 Merge pull request #2806 from Miouge1/no-kpm
Remove KPM support
2018-05-22 11:17:52 +02:00
Andreas Krüger
9689a28d15 Merge pull request #2805 from mvasilenko/helm_v291
Update Helm to latest version 2.9.1
2018-05-22 11:14:39 +02:00
Miouge1
095d33bc51 Remove KPM support 2018-05-21 22:28:08 +02:00
Mikhail Vasilenko
821966b319 Update Helm version to 2.9.1 2018-05-21 17:36:51 +03:00
Aivars Sterns
ab46687a8a Merge pull request #2777 from spinside/patch-2
Update README.md
2018-05-19 19:29:53 +03:00
spinside
be7278ce9d Update README.md 2018-05-19 17:11:57 +02:00
spinside
428218dbf0 Update README.md 2018-05-19 17:10:27 +02:00
spinside
d110999d31 Update README.md 2018-05-19 17:09:38 +02:00
dvazar
4b8daa22f6 Fixes #2800 2018-05-19 00:57:09 +07:00
Paul Montero
3f1887316b Set widely used header for python for different OS 2018-05-17 17:00:49 -05:00
Andreas Krüger
e60a63ea51 Merge pull request #2577 from woopstar/etcd-fix-4
Makeover of etcd- and etcd-cluster setup.
2018-05-16 20:49:54 +02:00
Andreas Krüger
a2a7bcd43d Merge pull request #2786 from cruwe/cjr-assert-maximum-pods-on-node-cidr
assert that number of pods on node does not exceed CIDR address range
2018-05-16 19:57:43 +02:00
Christopher J. Ruwe
c1bc4615fe assert that number of pods on node does not exceed CIDR address range
The number of pods on a given node is determined by the --max-pods=k
directive. When the address space is exhausted, no more pods can be
scheduled even if, from the --max-pods perspective, the node still has
capacity.

The special case that a pod is scheduled and uses the node IP in the
host network namespace is too "soft" to derive a guarantee.

Comparing kubelet_max_pods with kube_network_node_prefix, when given,
allows asserting that the pod limit fits the CIDR address space (see the sketch after this entry).
2018-05-16 11:55:46 +00:00
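A minimal sketch of the arithmetic behind that assertion, with illustrative values; the exact formula used by the role (for example whether the network and broadcast addresses are subtracted) may differ:

```python
# Illustrative check: the kubelet pod limit should fit into the per-node CIDR.
kube_network_node_prefix = 24   # per-node pod CIDR prefix length (example value)
kubelet_max_pods = 110          # kubelet --max-pods (example value)

# A /24 gives 2**(32-24) = 256 addresses; subtract 2 here for the network and
# broadcast addresses (an assumption, the actual assert may count differently).
usable_pod_ips = 2 ** (32 - kube_network_node_prefix) - 2

assert kubelet_max_pods <= usable_pod_ips, (
    f"--max-pods={kubelet_max_pods} exceeds the {usable_pod_ips} usable "
    f"addresses of a /{kube_network_node_prefix} node CIDR"
)
```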
Andreas Kruger
76dca877da Set the vars explicit 2018-05-16 13:14:13 +02:00
Aivars Sterns
38e727dbe1 Merge pull request #2744 from girikuncoro/fix-tf-aws-readme
Remove unnecessary loadbalancer_apiserver binding on terraform AWS readme
2018-05-16 14:10:38 +03:00
Aivars Sterns
eba486f229 add posibility to provide different yum repository directory (#2787) 2018-05-16 13:56:04 +03:00
Andreas Krüger
4ac79993e2 Merge pull request #2666 from AnatolyRugalev/master
Added MountFlags variable to docker options
2018-05-16 09:34:34 +02:00
Matthew Mosesohn
7c93e71801 Upgrade k8s to 1.10.2 (#2748)
* Upgrade k8s to 1.10.2

Bumped etcd version to 3.2.16 as recommended

* Add ipvs fix for v1.10

* change flannel addons test to ha
2018-05-15 16:00:29 +03:00
Andreas Krüger
1be399ab7b Merge pull request #2772 from cruwe/cjr-correct-perms-on-kubeconfig
make admin.conf -> .kube/config non-executable
2018-05-15 13:26:33 +02:00
Anatoly Rugalev
eae4fa040a Added docker_mount_flags option (fixes #2624) 2018-05-15 11:57:18 +02:00
spinside
a3c53efaf7 Update README.md 2018-05-15 10:29:41 +02:00
spinside
0f7fefd1b5 Update README.md 2018-05-15 10:27:44 +02:00
Rong Zhang
76fc786c07 Merge pull request #2782 from riverzhang/kube-dns-upgrade
Bump kube-dns to 1.14.10
2018-05-15 16:12:37 +08:00
Andreas Krüger
76a1fd37ff Merge pull request #2779 from lvthillo/patch-2
Update README.md
2018-05-15 10:04:34 +02:00
Christopher J. Ruwe
73800ef111 make certificates non-executable 2018-05-15 07:54:32 +00:00
rongzhang
742a8782dd Bump kube-dns to 1.14.10
Upgrade kube-dns to 1.14.10
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns
2018-05-15 03:29:10 +00:00
Lorenz Vanthillo
8f6c863d7b Update README.md
https://github.com/kubernetes-incubator/kubespray/issues/2764
2018-05-14 20:11:57 +02:00
Arnaud Meukam
cd7c58e8d3 correct some indentation issues in the fluentd daemonset. 2018-05-14 19:56:18 +02:00
spinside
a1de8a07d6 Update README.md
Added the requirement of pip for Vagrant users in readme.
See issue #2766
2018-05-14 16:22:38 +02:00
Daniel Mohr
476b14b06e Make Calico nodename overridable on bare metal
Signed-off-by: Daniel Mohr <daniel.mohr@supercrunch.io>
2018-05-14 14:13:51 +02:00
Christopher J. Ruwe
49d106f615 make admin.conf -> .kube/config non-executable
Almost certainly, the .kube/config file (YAML) should not be executable.
2018-05-14 09:29:48 +00:00
Andreas Krüger
63fdfae918 Merge pull request #2770 from Miouge1/notify-policy
Restart scheduler when policy changes
2018-05-14 10:57:16 +02:00
Miouge1
ad48606e4e Restart scheduler when policy changes 2018-05-14 10:09:30 +02:00
Rong Zhang
32f312f4a6 Merge pull request #2757 from qbl/master
Fix issue #2702: 'docker_bin_dir' is undefined when running ansible-playbook remove-node.yml
2018-05-14 09:54:57 +08:00
Iqbal Farabi
52ffd5dae4 Fix issue #2702: 'docker_bin_dir' is undefined when running ansible-playbook remove-node.yml 2018-05-14 07:20:45 +05:30
Arnaud Meukam
c75da43f22 add missing field in fluentd 2018-05-13 21:39:27 +02:00
Arnaud Meukam
65f14f636d remove support of other CRI runtimes than Docker in the efk stack 2018-05-13 18:37:36 +02:00
Rong Zhang
d7d85d2d3e Merge pull request #2758 from girikuncoro/fix-remove-node
Fix privilege escalation timeout for remove-node playbook
2018-05-13 21:42:10 +08:00
Arnaud Meukam
363627d9f8 serviceName added in elasticsearch. Required when a Statefulset is used 2018-05-13 14:23:37 +02:00
Rong Zhang
322b528ee0 Merge pull request #2765 from alirezaDavid/debug_docs
add svc to netchecker-service.default.svc.cluster.local
2018-05-13 12:31:38 +08:00
Alireza David
0fe5f120a3 add svc to netchecker-service.default.svc.cluster.local 2018-05-12 17:34:51 +04:30
Arnaud Meukam
7950a49e28 update fluentd deployment and configmap 2018-05-11 18:56:14 +02:00
Arnaud Meukam
698da78768 update kibana docker image 2018-05-11 18:36:50 +02:00
Arnaud Meukam
ba320e918d update elasticsearch image 2018-05-11 18:22:44 +02:00
Matthew Mosesohn
07cc981971 refactor vault role (#2733)
* Move front-proxy-client certs back to kube mount

We want the same CA for all k8s certs

* Refactor vault to use a third party module

The module adds idempotency and reduces some of the repetitive
logic in the vault role

Requires ansible-modules-hashivault on ansible node and hvac
on the vault hosts themselves

Add upgrade test scenario
Remove bootstrap-os tags from tasks

* fix upgrade issues

* improve unseal logic

* specify ca and fix etcd check

* Fix initialization check

bump machine size
2018-05-11 19:11:38 +03:00
Andreas Krüger
e23fd5ca44 Merge pull request #2762 from woopstar/fix-coreos-bootstrap-fact
Fix path for pip and python when already bootstrapped
2018-05-11 17:06:28 +02:00
woopstar
7df5edef52 Fix path for pip and python 2018-05-11 16:01:52 +02:00
Giri Kuncoro
1eaa6925b9 Fix privilege escalation timeout for remove-node playbook 2018-05-10 11:53:48 +05:30
Iqbal Farabi
86212d59ae Fix issue #2702: 'docker_bin_dir' is undefined when running ansible-playbook remove-node.yml 2018-05-10 10:10:59 +05:30
Andreas Krüger
82deb2c57f Merge pull request #2725 from desaintmartin/coreos-pip-path
coreos: explicitely set pip executable.
2018-05-09 09:47:14 +02:00
Cédric de Saint Martin
7507031cb1 CoreOS bootstrap: set bin_dir and PATH for pip. 2018-05-08 22:20:58 +02:00
Ryo Nishikawa
51a9379d3c Add vm_name option to vsphere cloud provider config 2018-05-08 12:23:58 -07:00
Andreas Krüger
d73d60c9b0 Merge pull request #2600 from maximegaillard/master
Add Openstack tenant name
2018-05-08 12:03:01 +02:00
Andreas Krüger
004b4a0436 Merge pull request #2729 from Ashon/issues/fix-python-compat
Use 'items()' for python compatibility
2018-05-08 12:02:28 +02:00
Andreas Krüger
67ce8925e4 Merge pull request #2742 from woopstar/coredns-update
Update CoreDNS to version 1.1.2
2018-05-08 12:01:42 +02:00
Giri Kuncoro
3a1f6810b7 Remove loadbalancer_apiserver binding on readme 2018-05-08 14:55:52 +05:30
Michal Rostecki
066016cd3e opensuse: Fix OpenSSL package name
OpenSSL 1.1 package in openSUSE Tumbleweed is named openssl-1_1,
not openssl-1_1_0.
2018-05-08 10:03:30 +02:00
Andreas Krüger
28d6eb6af1 Merge pull request #2644 from cp3hu/master
Fix apiserver manifest and kubelet for kube version < 1.9
2018-05-08 09:22:36 +02:00
woopstar
1a47a9b850 Update CoreDNS to version 1.1.2 2018-05-08 09:14:01 +02:00
Andreas Krüger
addd67dc63 Merge pull request #2738 from krystan/master
tiny spacing change "can be"
2018-05-04 20:58:26 +02:00
Miouge1
70e0998a70 Update kube-scheduler policy 2018-05-03 21:56:51 +02:00
Krystan Honour
988bd88468 tiny spacing change "can be" 2018-05-03 20:56:07 +01:00
Andreas Krüger
0d88972d3e Merge pull request #2732 from Towmeykaw/patch-1
Update aws.md
2018-05-03 12:45:08 +02:00
Tommy Kindmark
0e012e5987 Update aws.md
I had an issue with DNS not working because I didn't add the "kubernetes.io/cluster/$cluster_name" tag to the route table my subnets were using.
2018-05-02 22:32:41 +02:00
Chad Swenson
595e96ebf1 Merge pull request #2693 from romaindequidt/sync-certs-tasks-fix
sync certs tasks (fix #2596 #2667)
2018-05-02 12:17:23 -05:00
woopstar
4c81cd2a71 Merge branch 'master' of https://github.com/kubernetes-incubator/kubespray into etcd-fix-4 2018-05-02 14:45:58 +02:00
Andreas Kruger
32a8ea8094 Fix wrong var used 2018-05-02 12:44:05 +02:00
Andreas Kruger
c594bd7feb Do not run setup on all the nodes. 2018-05-02 10:58:38 +02:00
Andreas Krüger
223ed98828 Merge pull request #2728 from hswong3i/ingress-nginx-0.14.0
ingress-nginx: Upgrade to 0.14.0
2018-05-02 10:20:46 +02:00
Andreas Krüger
39e3df25a3 Merge pull request #2731 from girikuncoro/fix-aws-readme
Fix broken terraform aws readme
2018-05-02 09:35:59 +02:00
Giri Kuncoro
0fb017b9c1 Rename ansible user env vars 2018-05-02 14:07:54 +07:00
ashon
fb465f8b4b Use 'items()' for python compatibility 2018-05-01 16:55:50 +09:00
Wong Hoi Sing Edison
3501eb6916 ingress-nginx: Upgrade to 0.14.0 2018-05-01 15:42:07 +08:00
Maxime Gaillard
00db751646 Add Openstack tenant name 2018-05-01 09:21:37 +02:00
Pablo Moreno
df6c5b28a1 [contrib/terraform/openstack] Backward compatibility changes (#2539)
* [terraform/openstack] Restores ability to use existing public nodes and masters as bastion.

* [terraform/openstack] Uses network_id as output

* [terraform/openstack] Fixes link to inventory/local/group_vars

* [terraform/openstack] Adds supplementary master groups

* [terraform/openstack] Updates documentation avoiding manual setups for bastion (as they are not needed now).

* [terraform/openstack] Supplementary master groups in docs.

* [terraform/openstack] Fixes repeated usage of master fips instead of bastion fips

* [terraform/openstack] Missing change for network_id to subnet_id

* [terraform/openstack] Changes conditional to element( concat ) form to avoid type issues with empty lists.
2018-04-30 18:11:07 +03:00
Tomasz Majchrowski
59789ae02a ISSUE-2706: Provide consistent usage of supplementary_addresses_in_ssl_keys across vault and script mode (#2707) 2018-04-30 14:48:17 +03:00
Andreas Krüger
414e420bd2 Merge pull request #2701 from desaintmartin/netchecker-update
Update netchecker to v1.2.2.
2018-04-30 10:55:18 +02:00
Andreas Krüger
03de4c0806 Merge pull request #2695 from suzutan/add-oidc-prefix-args
Add oidc-user-prefix and oidc-group-prefix args
2018-04-30 09:17:02 +02:00
Andreas Krüger
4fb8e6d455 Merge pull request #2653 from kidk/fixed-incorrect-mem-tag
Replaced 'mem' with 'memory' in elasticsearch and kibana deployment
2018-04-30 09:14:15 +02:00
mirwan
06cdb260f6 labelvalue must be formatted to handle non string values (#2722) 2018-04-29 19:02:14 +03:00
mirwan
c3c5817af6 sysctl file should be in defaults so that it can be overridden (#2475)
* sysctl file should be in defaults so that it can be overridden

* Change sysctl_file_path to be consistent with roles/kubernetes/preinstall/defaults/main.yml
2018-04-27 18:50:58 +03:00
Markos Chandras
9168c71359 Revert "Revert "Add openSUSE support" (#2697)" (#2699)
This reverts commit 51f4e6585a.
2018-04-26 12:52:06 +03:00
Matthew Mosesohn
1a14f1ecc1 Fix vol format for local volume provisioner in rkt (#2698) 2018-04-24 20:32:08 +03:00
Cédric de Saint Martin
44cb126e7d Update netchecker to v1.2.2.
Using official image from mirantis at dockerhub.
2018-04-24 09:13:56 +02:00
Matthew Mosesohn
51f4e6585a Revert "Add openSUSE support" (#2697) 2018-04-23 14:28:24 +03:00
Suzuka Asagiri
f81e6d2ccf Add oidc-user-prefix and oidc-group-prefix args 2018-04-23 12:23:59 +09:00
Romain DEQUIDT
80dd230a65 sync certs tasks (fix #2596 #2667) 2018-04-22 10:00:31 +02:00
Aivars Sterns
d1b4ea5807 Merge pull request #2687 from noris-network/master
Document how to allow ipip traffic with calico on OpenStack
2018-04-21 10:38:21 +03:00
Aivars Sterns
f5db403c45 Merge pull request #2689 from lpaulmp/run-once-preinstall-upgrade
run_once pre_upgrade tasks which are executing in localhost
2018-04-21 10:37:10 +03:00
Paul Montero
75950344fb run_once pre_upgrade tasks which are executing in localhost 2018-04-19 11:38:13 -05:00
oz123
a49e06b54b Document how to allow ipip traffic with calico on OpenStack 2018-04-19 16:00:01 +02:00
Matthew Mosesohn
0945eb990a Make it possible to skip docker role as a var (#2686) 2018-04-19 16:47:20 +03:00
Andreas Krüger
a498cc223b Merge pull request #2673 from hswong3i/cephfs-provisioner-a71a49d4
cephfs-provisioner: Upgrade to a71a49d4
2018-04-19 11:39:10 +02:00
Andreas Krüger
ddd200bbfa Merge pull request #2604 from shravanpn7/shravan-pr
kubectl get pods from 'test' namespace as the pods were created in test ns
2018-04-19 09:27:53 +02:00
Andreas Krüger
9707aa8091 Merge pull request #2677 from woopstar/bootstrap-fix-1
Properly check need_pip, always run pip to check if needed
2018-04-19 09:23:26 +02:00
Spencer Smith
2e6a260ab1 Merge pull request #2683 from rsmitty/custom-etcd-vars
support custom env vars for etcd
2018-04-18 16:07:43 -04:00
Spencer Smith
49c6bf8fa6 support custom env vars for etcd 2018-04-18 14:03:24 -04:00
Samuel Vandamme
296b92dbd4 Replaced 'mem' with 'memory' in elasticsearch and kibana deployment 2018-04-18 11:25:29 +02:00
Andreas Krüger
b2756d148a Merge pull request #2671 from hswong3i/cert-manager-0.2.4
cert-manager: Upgrade to v0.2.4
2018-04-18 10:17:39 +02:00
woopstar
756af57787 Properly check need_pip, always run pip to check if needed
pip was always being downloaded on subsequent runs. This PR always runs the pip command and checks its rc before downloading pip

Fix in favor of #2582
2018-04-18 10:15:46 +02:00
Andreas Krüger
cb7096f2ec Merge pull request #2672 from hswong3i/ingress-nginx-0.13.0
ingress-nginx: Upgrade to 0.13.0
2018-04-18 10:10:13 +02:00
Andreas Krüger
3c4871d9b8 Merge pull request #2670 from hswong3i/weave-2.3.0
weave: Upgrade to 2.3.0
2018-04-18 10:09:38 +02:00
Aivars Sterns
f90673ac68 Merge pull request #2662 from ganeshmaharaj/vagrant-gitignore
Vagrantfile: Add vagrant inventory file in any directory to .gitignore
2018-04-17 19:16:00 +03:00
Wong Hoi Sing Edison
d435e17681 cephfs-provisioner: Upgrade to a71a49d4 2018-04-17 13:41:34 +08:00
Wong Hoi Sing Edison
23e9737b85 ingress-nginx: Upgrade to 0.13.0 2018-04-17 12:19:44 +08:00
Wong Hoi Sing Edison
54beb27eaa cert-manager: Upgrade to v0.2.4 2018-04-17 12:08:10 +08:00
Wong Hoi Sing Edison
7968437a65 Weave: Upgrade to 2.3.0 2018-04-17 08:51:24 +08:00
Andreas Krüger
693b7c5fd0 Merge pull request #2668 from Arslanbekov/kubernetes-logo
Kubernetes logo in README.md
2018-04-16 20:06:46 +02:00
Arslanbekov Denis
1bd49ff125 Add production uri 2018-04-16 17:33:24 +03:00
Arslanbekov Denis
9f460dd1bf Change uri 2018-04-16 17:32:00 +03:00
Arslanbekov Denis
2441dd6f6f Usage kubernetes-logo in README.md 2018-04-16 17:30:53 +03:00
Arslanbekov Denis
ea44ad4d75 Added img kubernetes-logo.png 2018-04-16 17:29:55 +03:00
Aivars Sterns
4b4786f75d Merge pull request #2381 from vikas027/inventory_fixes
Replaced ansible_ssh_host with ansible_host in sample inventory file and fixed usage of bastion
2018-04-16 10:06:19 +03:00
Ganesh Maharaj Mahalingam
c432697667 Vagrantfile: Add vagrant inventory file in any directory to .gitignore
Follow-on fix for #2654

Signed-off-by: Ganesh Maharaj Mahalingam <ganesh.mahalingam@intel.com>
2018-04-13 10:54:21 -07:00
Christian Phu
3535c29e59 Fix apiserver manifest for kube version < 1.9 2018-04-10 18:17:56 +02:00
Vikas Kumar
94eb18b3d9 Replaced ansible_ssh_host with ansible_host in sample inventory file as the former is deprecated since Ansible v2.0
Fixed the reference of ansible_user in kubespray-defaults role

References:
 - http://docs.ansible.com/ansible/latest/intro_inventory.html
2018-04-10 15:21:40 +10:00
Vikas Kumar
af5943f7e6 Merge branch 'master' of github.com:kubernetes-incubator/kubespray 2018-04-10 15:07:35 +10:00
Shravan Papanaidu
f26e16bf79 kubectl get pods from 'test' namespace as the pods were created in 'test' ns 2018-04-04 13:26:16 -07:00
woopstar
86e3506ae6 Etcd cluster setup makeover
The current way to set up the etcd cluster is messy and buggy.

- It checks whether the cluster is healthy before the cluster is even created.
- The unit files are started from handlers, not in the task, so you have to mess with "flush handlers".
- The join_member.yml is not used.
- etcd events cluster is not configured for kubeadm
- remove duplicate runs between running the role on etcd nodes and k8s nodes
2018-04-01 21:38:33 +02:00
Sebastian Söderqvist
ba2107ea8c is-default-class is case sensitive so we must return a lowercase string 2018-02-15 10:51:42 +01:00
southquist
3f44a33738 allow for configurable openstack storage class 2018-02-14 11:32:56 +01:00
Jan Jungnickel
8766b36144 Make path to generated inventory configurable 2017-12-04 16:41:35 +01:00
brx
2ffcfdcd25 Update main.yml 2017-11-24 20:13:38 +01:00
Boyang Jerry Peng
8d460a7300 Bug in download main.yml
I think there was a mistake here:

"{{ peer_with_calico_rr is defined and peer_with_calico_rr }} and kube_network_plugin == 'calico'"

should be

"{{ peer_with_calico_rr is defined and peer_with_calico_rr and kube_network_plugin == 'calico' }}"

this is causing calico_rr to be downloaded even if you are using something other than calico (see the sketch after this entry)
2017-11-07 17:17:19 -08:00
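A small Jinja2 demonstration of the difference between the two forms, with illustrative variable values; it requires the jinja2 package and is not the actual Ansible evaluation path, which templates `when:` expressions itself:

```python
from jinja2 import Template

# Illustrative values: calico_rr peering requested, but a non-calico plugin.
ctx = {"peer_with_calico_rr": True, "kube_network_plugin": "flannel"}

bad = Template(
    "{{ peer_with_calico_rr is defined and peer_with_calico_rr }}"
    " and kube_network_plugin == 'calico'"
).render(**ctx)
good = Template(
    "{{ peer_with_calico_rr is defined and peer_with_calico_rr"
    " and kube_network_plugin == 'calico' }}"
).render(**ctx)

print(bad)   # "True and kube_network_plugin == 'calico'" -- the plugin check is
             # left as literal text, so a naive truthiness test still passes
print(good)  # "False" -- the whole condition is evaluated inside the braces
```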
xuhuilong
71dabf9fb3 fix curl get calico status error (error in tls version): https://bugzilla.redhat.com/show_bug.cgi?id=1272504 2017-05-15 08:12:26 -04:00
476 changed files with 9205 additions and 5185 deletions

4
.gitignore vendored

@@ -1,6 +1,6 @@
.vagrant
*.retry
inventory/vagrant_ansible_inventory
**/vagrant_ansible_inventory
inventory/credentials/
inventory/group_vars/fake_hosts.yml
inventory/host_vars/
@@ -12,9 +12,9 @@ temp
*.tfstate
*.tfstate.backup
contrib/terraform/aws/credentials.tfvars
**/*.sw[pon]
/ssh-bastion.conf
**/*.sw[pon]
*~
vagrant/
# Byte-compiled / optimized / DLL files

View File

@@ -93,7 +93,7 @@ before_script:
# Check out latest tag if testing upgrade
# Uncomment when gitlab kubespray repo has tags
#- test "${UPGRADE_TEST}" != "false" && git fetch --all && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
- test "${UPGRADE_TEST}" != "false" && git checkout f7d52564aad2ff8e337634951beb4a881c0e8aa6
- test "${UPGRADE_TEST}" != "false" && git checkout 8b3ce6e418ccf48171eb5b3888ee1af84f8d71ba
# Checkout the CI vars file so it is available
- test "${UPGRADE_TEST}" != "false" && git checkout "${CI_BUILD_REF}" tests/files/${CI_JOB_NAME}.yml
# Workaround https://github.com/kubernetes-incubator/kubespray/issues/2021
@@ -240,6 +240,10 @@ before_script:
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.ubuntu18_flannel_aio_variables: &ubuntu18_flannel_aio_variables
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.ubuntu_canal_ha_variables: &ubuntu_canal_ha_variables
# stage: deploy-part1
UPGRADE_TEST: "graceful"
@@ -263,7 +267,7 @@ before_script:
.ubuntu_cilium_sep_variables: &ubuntu_cilium_sep_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.rhel7_weave_variables: &rhel7_weave_variables
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
@@ -304,6 +308,10 @@ before_script:
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.coreos_vault_upgrade_variables: &coreos_vault_upgrade_variables
# stage: deploy-part1
UPGRADE_TEST: "basic"
.ubuntu_flannel_variables: &ubuntu_flannel_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
@@ -327,6 +335,18 @@ gce_coreos-calico-aio:
only: [/^pr-.*$/]
### PR JOBS PART2
gce_ubuntu18-flannel-aio:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *ubuntu18_flannel_aio_variables
<<: *gce_variables
when: manual
except: ['triggers']
only: [/^pr-.*$/]
gce_centos7-flannel-addons:
stage: deploy-part2
<<: *job
@@ -338,6 +358,17 @@ gce_centos7-flannel-addons:
except: ['triggers']
only: [/^pr-.*$/]
gce_centos-weave-kubeadm:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos_weave_kubeadm_variables
when: on_success
except: ['triggers']
only: [/^pr-.*$/]
gce_ubuntu-weave-sep:
stage: deploy-part2
<<: *job
@@ -434,17 +465,6 @@ gce_ubuntu-canal-kubeadm-triggers:
when: on_success
only: ['triggers']
gce_centos-weave-kubeadm:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos_weave_kubeadm_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_centos-weave-kubeadm-triggers:
stage: deploy-part2
<<: *job
@@ -560,7 +580,7 @@ gce_rhel7-canal-sep:
<<: *rhel7_canal_sep_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/,]
only: ['master', /^pr-.*$/]
gce_rhel7-canal-sep-triggers:
stage: deploy-part2
@@ -638,6 +658,17 @@ gce_ubuntu-vault-sep:
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_coreos-vault-upgrade:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_vault_upgrade_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-flannel-sep:
stage: deploy-special
<<: *job

12
OWNERS
View File

@@ -1,9 +1,7 @@
# See the OWNERS file documentation:
# https://github.com/kubernetes/kubernetes/blob/master/docs/devel/owners.md
# https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md
owners:
- Smana
- ant31
- bogdando
- mattymo
- rsmitty
approvers:
- kubespray-approvers
reviewers:
- kubespray-reviewers

18
OWNERS_ALIASES Normal file
View File

@@ -0,0 +1,18 @@
aliases:
kubespray-approvers:
- ant31
- mattymo
- atoms
- chadswen
- rsmitty
- bogdando
- bradbeam
- woopstar
- riverzhang
- holser
- smana
kubespray-reviewers:
- jjungnickel
- archifleks
- chapsuk
- mirwan

View File

@@ -1,14 +1,15 @@
![Kubernetes Logo](https://s28.postimg.org/lf3q4ocpp/k8s.png)
![Kubernetes Logo](https://raw.githubusercontent.com/kubernetes-incubator/kubespray/master/docs/img/kubernetes-logo.png)
Deploy a Production Ready Kubernetes Cluster
============================================
If you have questions, join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/)
-   Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere or Baremetal**
- **High available** cluster
- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere, Oracle Cloud Infrastructure (Experimental), or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Support most popular **Linux distributions**
- Supports most popular **Linux distributions**
- **Continuous integration tests**
Quick Start
@@ -18,23 +19,44 @@ To deploy the cluster you can use :
### Ansible
# Install dependencies from ``requirements.txt``
sudo pip install -r requirements.txt
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
cp -rfp inventory/sample/* inventory/mycluster
# Update Ansible inventory file with inventory builder
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all.yml
cat inventory/mycluster/group_vars/k8s-cluster.yml
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
# Deploy Kubespray with Ansible Playbook
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml
Note: When Ansible is already installed via system packages on the control machine, other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible`, still on Ubuntu).
As a consequence, the `ansible-playbook` command will fail with:
```
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
```
probably pointing to a task that depends on a module present in requirements.txt (e.g. "unseal vault").
One way of solving this would be to uninstall the Ansible system package and install it via pip instead, but that is not always possible.
A workaround is to set the `ANSIBLE_LIBRARY` and `ANSIBLE_MODULE_UTILS` environment variables to the `ansible/modules` and `ansible/module_utils` subdirectories of the pip packages installation location (shown in the Location field of the output of `pip show [package]`) before executing `ansible-playbook`.
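A minimal sketch of that workaround, assuming the pip-installed `ansible` package is the one whose Location field you want (adjust the final command to your own inventory):
```
# Locate the directory where pip installed the requirements (the "Location:" field)
PIP_LOCATION=$(pip show ansible | awk '/^Location:/ {print $2}')

# Point the system ansible-playbook at the pip-installed modules
export ANSIBLE_LIBRARY="$PIP_LOCATION/ansible/modules"
export ANSIBLE_MODULE_UTILS="$PIP_LOCATION/ansible/module_utils"

ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml
```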
### Vagrant
# Simply running `vagrant up` (for test purposes)
For Vagrant we need to install python dependencies for provisioning tasks.
Check if Python and pip are installed:
python -V && pip -V
If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
Install the necessary requirements
sudo pip install -r requirements.txt
vagrant up
Documents
@@ -68,28 +90,37 @@ Supported Linux Distributions
- **Container Linux by CoreOS**
- **Debian** Jessie, Stretch, Wheezy
- **Ubuntu** 16.04
- **Ubuntu** 16.04, 18.04
- **CentOS/RHEL** 7
- **Fedora** 28
- **Fedora/CentOS** Atomic
- **openSUSE** Leap 42.3/Tumbleweed
Note: Upstart/SysV init based OS types are not supported.
Versions of supported components
--------------------------------
Supported Components
--------------------
- [kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.9.5
- [etcd](https://github.com/coreos/etcd/releases) v3.2.4
- [flanneld](https://github.com/coreos/flannel/releases) v0.10.0
- [calico](https://docs.projectcalico.org/v2.6/releases/) v2.6.8
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
- [cilium](https://github.com/cilium/cilium) v1.0.0-rc8
- [contiv](https://github.com/contiv/install/releases) v1.1.7
- [weave](http://weave.works/) v2.2.1
- [docker](https://www.docker.com/) v17.03 (see note)
- [rkt](https://coreos.com/rkt/docs/latest/) v1.21.0 (see Note 2)
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.11.3
- [etcd](https://github.com/coreos/etcd) v3.2.18
- [docker](https://www.docker.com/) v17.03 (see note)
- [rkt](https://github.com/rkt/rkt) v1.21.0 (see Note 2)
- [cri-o](http://cri-o.io/) v1.11.5 (experimental: see [CRI-O Note](docs/cri-o.md). Only on centos based OS)
- Network Plugin
- [calico](https://github.com/projectcalico/calico) v3.1.3
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
- [cilium](https://github.com/cilium/cilium) v1.2.0
- [contiv](https://github.com/contiv/install) v1.1.7
- [flanneld](https://github.com/coreos/flannel) v0.10.0
- [weave](https://github.com/weaveworks/weave) v2.4.1
- Application
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
- [cert-manager](https://github.com/jetstack/cert-manager) v0.5.0
- [coredns](https://github.com/coredns/coredns) v1.2.2
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.19.0
Note: kubernetes doesn't support newer docker versions. Among other things kubelet currently breaks on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
Note: kubernetes doesn't support newer docker versions ("Version 17.03 is recommended... Versions 17.06+ might work, but have not yet been tested and verified by the Kubernetes node team" cf. [Bootstrapping Clusters with kubeadm](https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-docker)). Among other things kubelet currently breaks on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
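A rough sketch of such pinning (hedged: the `docker-ce` package name is an assumption and depends on how Docker was installed on your distribution):
```
# RHEL/CentOS: lock the currently installed docker package version
yum install -y yum-plugin-versionlock
yum versionlock add docker-ce

# Debian/Ubuntu: keep apt from upgrading the docker package
apt-mark hold docker-ce
```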
Note 2: rkt support as docker alternative is limited to control plane (etcd and
kubelet). Docker is still used for Kubernetes cluster workloads and network
@@ -122,7 +153,7 @@ You can choose between 6 network plugins. (default: `calico`, except Vagrant use
- [canal](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
- [contiv](docs/contiv.md): supports vlan, vxlan, bgp and Cisco SDN networking. This plugin is able to
apply firewall policies, segregate containers in multiple networks and bridge pods onto physical networks.
@@ -152,8 +183,6 @@ Tools and projects on top of Kubespray
CI Tests
--------
![Gitlab Logo](https://s27.postimg.org/wmtaig1wz/gitlabci.png)
[![Build graphs](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/badges/master/build.svg)](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/pipelines)
CI/end-to-end tests sponsored by Google (GCE)

13
SECURITY_CONTACTS Normal file
View File

@@ -0,0 +1,13 @@
# Defined below are the security contacts for this repo.
#
# They are the contact point for the Product Security Team to reach out
# to for triaging and handling of incoming issues.
#
# The below names agree to abide by the
# [Embargo Policy](https://github.com/kubernetes/sig-release/blob/master/security-release-process-documentation/security-release-process.md#embargo-policy)
# and will be removed and replaced if they violate that agreement.
#
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
# INSTRUCTIONS AT https://kubernetes.io/security/
atoms
mattymo

9
Vagrantfile vendored
View File

@@ -18,6 +18,7 @@ SUPPORTED_OS = {
"coreos-beta" => {box: "coreos-beta", bootstrap_os: "coreos", user: "core", box_url: COREOS_URL_TEMPLATE % ["beta"]},
"ubuntu" => {box: "bento/ubuntu-16.04", bootstrap_os: "ubuntu", user: "vagrant"},
"centos" => {box: "centos/7", bootstrap_os: "centos", user: "vagrant"},
"fedora" => {box: "fedora/28-cloud-base", bootstrap_os: "fedora", user: "vagrant"},
"opensuse" => {box: "opensuse/openSUSE-42.3-x86_64", bootstrap_os: "opensuse", use: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/openSUSE-Tumbleweed-x86_64", bootstrap_os: "opensuse", use: "vagrant"},
}
@@ -44,6 +45,8 @@ $kube_node_instances_with_disks = false
$kube_node_instances_with_disks_size = "20G"
$kube_node_instances_with_disks_number = 2
$playbook = "cluster.yml"
$local_release_dir = "/vagrant/temp"
host_vars = {}
@@ -125,6 +128,10 @@ Vagrant.configure("2") do |config|
config.vm.provider :libvirt do |lv|
lv.memory = $vm_memory
# Fix kernel panic on fedora 28
if $os == "fedora"
lv.cpu_mode = "host-passthrough"
end
end
ip = "#{$subnet}.#{i+100}"
@@ -157,7 +164,7 @@ Vagrant.configure("2") do |config|
# when all the machines are up and ready.
if i == $num_instances
config.vm.provision "ansible" do |ansible|
ansible.playbook = "cluster.yml"
ansible.playbook = $playbook
if File.exist?(File.join(File.dirname($inventory), "hosts"))
ansible.inventory_path = $inventory
end

View File

@@ -13,3 +13,5 @@ callback_whitelist = profile_tasks
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
deprecation_warnings=False
inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds
[inventory]
ignore_patterns = artifacts, credentials

View File

@@ -33,11 +33,12 @@
roles:
- { role: kubespray-defaults}
- { role: kubernetes/preinstall, tags: preinstall }
- { role: docker, tags: docker }
- { role: docker, tags: docker, when: container_manager == 'docker' }
- { role: cri-o, tags: crio, when: container_manager == 'crio' }
- role: rkt
tags: rkt
when: "'rkt' in [etcd_deployment_type, kubelet_deployment_type, vault_deployment_type]"
- { role: download, tags: download, skip_downloads: false }
- { role: download, tags: download, when: "not skip_downloads" }
environment: "{{proxy_env}}"
- hosts: etcd:k8s-cluster:vault:calico-rr
@@ -51,13 +52,13 @@
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: etcd, tags: etcd, etcd_cluster_setup: true }
- { role: etcd, tags: etcd, etcd_cluster_setup: true, etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}" }
- hosts: k8s-cluster:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: etcd, tags: etcd, etcd_cluster_setup: false }
- { role: etcd, tags: etcd, etcd_cluster_setup: false, etcd_events_cluster_setup: false }
- hosts: etcd:k8s-cluster:vault:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -93,6 +94,7 @@
roles:
- { role: kubespray-defaults}
- { role: kubernetes-apps/rotate_tokens, tags: rotate_tokens, when: "secret_changed|default(false)" }
- { role: win_nodes/kubernetes_patch, tags: win_nodes, when: "kubeadm_enabled" }
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -117,7 +119,7 @@
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf }
environment: "{{proxy_env}}"
- hosts: kube-master[0]
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}

View File

@@ -9,8 +9,8 @@ Resource Group. It will not install Kubernetes itself, this has to be done in a
## Requirements
- [Install azure-cli](https://docs.microsoft.com/en-us/azure/xplat-cli-install)
- [Login with azure-cli](https://docs.microsoft.com/en-us/azure/xplat-cli-connect)
- [Install azure-cli](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
- [Login with azure-cli](https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli?view=azure-cli-latest)
- Dedicated Resource Group created in the Azure Portal or through azure-cli
## Configuration through group_vars/all

View File

@@ -1,4 +1,4 @@
#!/usr/bin/python3
#!/usr/bin/env python3
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

10
contrib/metallb/README.md Normal file
View File

@@ -0,0 +1,10 @@
# Deploy MetalLB into Kubespray/Kubernetes
```
MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type “LoadBalancer” in clusters that don't run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.
```
This playbook aims to automate [this tutorial](https://metallb.universe.tf/tutorial/layer2/tutorial). It deploys MetalLB into Kubernetes and sets up a layer 2 load balancer.
## Install
```
ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contrib/metallb/metallb.yml
```
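Once the playbook has run, a quick way to check that MetalLB hands out addresses is to expose a Deployment as a `LoadBalancer` service and watch for an external IP from the configured pool (a sketch; the `nginx` Deployment is hypothetical, and the pool assumes the default `ip_range` of `10.5.0.50-10.5.0.99`):
```
# Expose an existing Deployment (here a hypothetical "nginx") through MetalLB
kubectl expose deployment nginx --type=LoadBalancer --port=80

# EXTERNAL-IP should come from the MetalLB address pool, e.g. 10.5.0.50
kubectl get service nginx
```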

View File

@@ -0,0 +1,6 @@
---
- hosts: kube-master[0]
tags:
- "provision"
roles:
- { role: provision }

View File

@@ -0,0 +1,7 @@
---
metallb:
ip_range: "10.5.0.50-10.5.0.99"
limits:
cpu: "100m"
memory: "100Mi"
port: "7472"

View File

@@ -0,0 +1,17 @@
---
- name: "Kubernetes Apps | Lay Down MetalLB"
become: true
template: { src: "{{ item }}.j2", dest: "{{ kube_config_dir }}/{{ item }}" }
with_items: ["metallb.yml", "metallb-config.yml"]
register: "rendering"
when:
- "inventory_hostname == groups['kube-master'][0]"
- name: "Kubernetes Apps | Install and configure MetalLB"
kube:
name: "MetalLB"
kubectl: "{{bin_dir}}/kubectl"
filename: "{{ kube_config_dir }}/{{ item.item }}"
state: "{{ item.changed | ternary('latest','present') }}"
with_items: "{{ rendering.results }}"
when:
- "inventory_hostname == groups['kube-master'][0]"

View File

@@ -0,0 +1,13 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: loadbalanced
protocol: layer2
addresses:
- {{ metallb.ip_range }}

View File

@@ -0,0 +1,254 @@
apiVersion: v1
kind: Namespace
metadata:
name: metallb-system
labels:
app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: metallb-system
name: controller
labels:
app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: metallb-system
name: speaker
labels:
app: metallb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metallb-system:controller
labels:
app: metallb
rules:
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["services/status"]
verbs: ["update"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metallb-system:speaker
labels:
app: metallb
rules:
- apiGroups: [""]
resources: ["services", "endpoints", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: metallb-system
name: leader-election
labels:
app: metallb
rules:
- apiGroups: [""]
resources: ["endpoints"]
resourceNames: ["metallb-speaker"]
verbs: ["get", "update"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: metallb-system
name: config-watcher
labels:
app: metallb
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create"]
---
## Role bindings
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metallb-system:controller
labels:
app: metallb
subjects:
- kind: ServiceAccount
name: controller
namespace: metallb-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: metallb-system:controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metallb-system:speaker
labels:
app: metallb
subjects:
- kind: ServiceAccount
name: speaker
namespace: metallb-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: metallb-system:speaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
namespace: metallb-system
name: config-watcher
labels:
app: metallb
subjects:
- kind: ServiceAccount
name: controller
- kind: ServiceAccount
name: speaker
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: config-watcher
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
namespace: metallb-system
name: leader-election
labels:
app: metallb
subjects:
- kind: ServiceAccount
name: speaker
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: leader-election
---
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
namespace: metallb-system
name: speaker
labels:
app: metallb
component: speaker
spec:
selector:
matchLabels:
app: metallb
component: speaker
template:
metadata:
labels:
app: metallb
component: speaker
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ metallb.port }}"
spec:
serviceAccountName: speaker
terminationGracePeriodSeconds: 0
hostNetwork: true
containers:
- name: speaker
image: metallb/speaker:v0.6.2
imagePullPolicy: IfNotPresent
args:
- --port={{ metallb.port }}
- --config=config
env:
- name: METALLB_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
ports:
- name: monitoring
containerPort: {{ metallb.port }}
resources:
limits:
cpu: {{ metallb.limits.cpu }}
memory: {{ metallb.limits.memory }}
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- all
add:
- net_raw
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
namespace: metallb-system
name: controller
labels:
app: metallb
component: controller
spec:
revisionHistoryLimit: 3
selector:
matchLabels:
app: metallb
component: controller
template:
metadata:
labels:
app: metallb
component: controller
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ metallb.port }}"
spec:
serviceAccountName: controller
terminationGracePeriodSeconds: 0
securityContext:
runAsNonRoot: true
runAsUser: 65534 # nobody
containers:
- name: controller
image: metallb/controller:v0.6.2
imagePullPolicy: IfNotPresent
args:
- --port={{ metallb.port }}
- --config=config
ports:
- name: monitoring
containerPort: {{ metallb.port }}
resources:
limits:
cpu: {{ metallb.limits.cpu }}
memory: {{ metallb.limits.memory }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- all
readOnlyRootFilesystem: true
---

View File

@@ -1 +1 @@
../../../inventory/group_vars
../../../inventory/local/group_vars

View File

@@ -12,7 +12,7 @@
# ## As in the previous case, you can set ip to give direct communication on internal IPs
# gfs_node1 ansible_ssh_host=95.54.0.18 # disk_volume_device_1=/dev/vdc ip=10.3.0.7
# gfs_node2 ansible_ssh_host=95.54.0.19 # disk_volume_device_1=/dev/vdc ip=10.3.0.8
# gfs_node1 ansible_ssh_host=95.54.0.20 # disk_volume_device_1=/dev/vdc ip=10.3.0.9
# gfs_node3 ansible_ssh_host=95.54.0.20 # disk_volume_device_1=/dev/vdc ip=10.3.0.9
# [kube-master]
# node1

View File

@@ -2,7 +2,7 @@
# For Ubuntu.
glusterfs_default_release: ""
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.8"
glusterfs_ppa_version: "4.1"
# Gluster configuration.
gluster_mount_dir: /mnt/gluster

View File

@@ -2,7 +2,7 @@
# For Ubuntu.
glusterfs_default_release: ""
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.8"
glusterfs_ppa_version: "3.12"
# Gluster configuration.
gluster_mount_dir: /mnt/gluster

View File

@@ -1,2 +1,2 @@
---
glusterfs_daemon: glusterfs-server
glusterfs_daemon: glusterd

View File

@@ -0,0 +1,16 @@
# Deploy Heketi/Glusterfs into Kubespray/Kubernetes
This playbook aims to automate [this tutorial](https://github.com/heketi/heketi/blob/master/docs/admin/install-kubernetes.md). It deploys Heketi/GlusterFS into Kubernetes and sets up a StorageClass.
## Client Setup
Heketi provides a CLI that gives users a means to administer the deployment and configuration of GlusterFS in Kubernetes. [Download and install the heketi-cli](https://github.com/heketi/heketi/releases) on your client machine.
## Install
Copy the inventory.yml.sample over to inventory/sample/k8s_heketi_inventory.yml and change it according to your setup.
```
ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contrib/network-storage/heketi/heketi.yml
```
## Tear down
```
ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contrib/network-storage/heketi/heketi-tear-down.yml
```
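After the playbook finishes, a simple way to exercise the provisioner is to request a PersistentVolumeClaim against the `gluster` StorageClass it creates (a sketch; the claim name and size are arbitrary):
```
# Request a 1Gi volume from the gluster StorageClass set up by the playbook
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-test-claim
spec:
  storageClassName: gluster
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should become Bound once Heketi provisions the GlusterFS volume
kubectl get pvc gluster-test-claim
```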

View File

@@ -0,0 +1,9 @@
---
- hosts: kube-master[0]
roles:
- { role: tear-down }
- hosts: heketi-node
become: yes
roles:
- { role: tear-down-disks }

View File

@@ -0,0 +1,10 @@
---
- hosts: heketi-node
roles:
- { role: prepare }
- hosts: kube-master[0]
tags:
- "provision"
roles:
- { role: provision }

View File

@@ -0,0 +1,26 @@
all:
vars:
heketi_admin_key: "11elfeinhundertundelf"
heketi_user_key: "!!einseinseins"
children:
k8s-cluster:
vars:
kubelet_fail_swap_on: false
children:
kube-master:
hosts:
node1:
etcd:
hosts:
node2:
kube-node:
hosts: &kube_nodes
node1:
node2:
node3:
node4:
heketi-node:
vars:
disk_volume_device_1: "/dev/vdb"
hosts:
<<: *kube_nodes

View File

@@ -0,0 +1 @@
jmespath

View File

@@ -0,0 +1,24 @@
---
- name: "Load lvm kernel modules"
become: true
with_items:
- "dm_snapshot"
- "dm_mirror"
- "dm_thin_pool"
modprobe:
name: "{{ item }}"
state: "present"
- name: "Install glusterfs mount utils (RedHat)"
become: true
yum:
name: "glusterfs-fuse"
state: "present"
when: "ansible_os_family == 'RedHat'"
- name: "Install glusterfs mount utils (Debian)"
become: true
apt:
name: "glusterfs-client"
state: "present"
when: "ansible_os_family == 'Debian'"

View File

@@ -0,0 +1 @@
---

View File

@@ -0,0 +1,3 @@
---
- name: "stop port forwarding"
command: "killall "

View File

@@ -0,0 +1,56 @@
# Bootstrap heketi
- name: "Get state of heketi service, deployment and pods."
register: "initial_heketi_state"
changed_when: false
command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json"
- name: "Bootstrap heketi."
when:
- "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Service']\"))|length == 0"
- "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Deployment']\"))|length == 0"
- "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Pod']\"))|length == 0"
include_tasks: "bootstrap/deploy.yml"
# Prepare heketi topology
- name: "Get heketi initial pod state."
register: "initial_heketi_pod"
command: "{{ bin_dir }}/kubectl get pods --selector=deploy-heketi=pod,glusterfs=heketi-pod,name=deploy-heketi --output=json"
changed_when: false
- name: "Ensure heketi bootstrap pod is up."
assert:
that: "(initial_heketi_pod.stdout|from_json|json_query('items[*]'))|length == 1"
- set_fact:
initial_heketi_pod_name: "{{ initial_heketi_pod.stdout|from_json|json_query(\"items[*].metadata.name|[0]\") }}"
- name: "Test heketi topology."
changed_when: false
register: "heketi_topology"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
- name: "Load heketi topology."
when: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*]\")|flatten|length == 0"
include_tasks: "bootstrap/topology.yml"
# Provision heketi database volume
- name: "Prepare heketi volumes."
include_tasks: "bootstrap/volumes.yml"
# Remove bootstrap heketi
- name: "Tear down bootstrap."
include_tasks: "bootstrap/tear-down.yml"
# Prepare heketi storage
- name: "Test heketi storage."
command: "{{ bin_dir }}/kubectl get secrets,endpoints,services,jobs --output=json"
changed_when: false
register: "heketi_storage_state"
# ensure endpoints actually exist before trying to move database data to it
- name: "Create heketi storage."
include_tasks: "bootstrap/storage.yml"
vars:
secret_query: "items[?metadata.name=='heketi-storage-secret' && kind=='Secret']"
endpoints_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Endpoints']"
service_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Service']"
job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job']"
when:
- "heketi_storage_state.stdout|from_json|json_query(secret_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(endpoints_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(service_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(job_query)|length == 0"

View File

@@ -0,0 +1,24 @@
---
- name: "Kubernetes Apps | Lay Down Heketi Bootstrap"
become: true
template: { src: "heketi-bootstrap.json.j2", dest: "{{ kube_config_dir }}/heketi-bootstrap.json" }
register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi Bootstrap"
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
filename: "{{ kube_config_dir }}/heketi-bootstrap.json"
state: "{{ rendering.changed | ternary('latest', 'present') }}"
- name: "Wait for heketi bootstrap to complete."
changed_when: false
register: "initial_heketi_state"
vars:
initial_heketi_state: { stdout: "{}" }
pods_query: "items[?kind=='Pod'].status.conditions|[0][?type=='Ready'].status|[0]"
deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json"
until:
- "initial_heketi_state.stdout|from_json|json_query(pods_query) == 'True'"
- "initial_heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
retries: 60
delay: 5

View File

@@ -0,0 +1,33 @@
---
- name: "Test heketi storage."
command: "{{ bin_dir }}/kubectl get secrets,endpoints,services,jobs --output=json"
changed_when: false
register: "heketi_storage_state"
- name: "Create heketi storage."
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
filename: "{{ kube_config_dir }}/heketi-storage-bootstrap.json"
state: "present"
vars:
secret_query: "items[?metadata.name=='heketi-storage-secret' && kind=='Secret']"
endpoints_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Endpoints']"
service_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Service']"
job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job']"
when:
- "heketi_storage_state.stdout|from_json|json_query(secret_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(endpoints_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(service_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(job_query)|length == 0"
register: "heketi_storage_result"
- name: "Get state of heketi database copy job."
command: "{{ bin_dir }}/kubectl get jobs --output=json"
changed_when: false
register: "heketi_storage_state"
vars:
heketi_storage_state: { stdout: "{}" }
job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job' && status.succeeded==1]"
until:
- "heketi_storage_state.stdout|from_json|json_query(job_query)|length == 1"
retries: 60
delay: 5

View File

@@ -0,0 +1,14 @@
---
- name: "Get existing Heketi deploy resources."
command: "{{ bin_dir }}/kubectl get all --selector=\"deploy-heketi\" -o=json"
register: "heketi_resources"
changed_when: false
- name: "Delete bootstrap Heketi."
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"deploy-heketi\""
when: "heketi_resources.stdout|from_json|json_query('items[*]')|length > 0"
- name: "Ensure there is nothing left over."
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"deploy-heketi\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
retries: 60
delay: 5

View File

@@ -0,0 +1,26 @@
---
- name: "Get heketi topology."
changed_when: false
register: "heketi_topology"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
- name: "Render heketi topology template."
become: true
vars: { nodes: "{{ groups['heketi-node'] }}" }
register: "render"
template:
src: "topology.json.j2"
dest: "{{ kube_config_dir }}/topology.json"
- name: "Copy topology configuration into container."
changed_when: false
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ initial_heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology."
when: "render.changed"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
register: "load_heketi"
- name: "Get heketi topology."
changed_when: false
register: "heketi_topology"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
until: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\")|flatten|length == groups['heketi-node']|length"
retries: 60
delay: 5

View File

@@ -0,0 +1,41 @@
---
- name: "Get heketi volume ids."
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume list --json"
changed_when: false
register: "heketi_volumes"
- name: "Get heketi volumes."
changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume info {{ volume_id }} --json"
with_items: "{{ heketi_volumes.stdout|from_json|json_query(\"volumes[*]\") }}"
loop_control: { loop_var: "volume_id" }
register: "volumes_information"
- name: "Test heketi database volume."
set_fact: { heketi_database_volume_exists: true }
with_items: "{{ volumes_information.results }}"
loop_control: { loop_var: "volume_information" }
vars: { volume: "{{ volume_information.stdout|from_json }}" }
when: "volume.name == 'heketidbstorage'"
- name: "Provision database volume."
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} setup-openshift-heketi-storage"
when: "heketi_database_volume_exists is undefined"
- name: "Copy configuration from pod."
become: true
command: "{{ bin_dir }}/kubectl cp {{ initial_heketi_pod_name }}:/heketi-storage.json {{ kube_config_dir }}/heketi-storage-bootstrap.json"
- name: "Get heketi volume ids."
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume list --json"
changed_when: false
register: "heketi_volumes"
- name: "Get heketi volumes."
changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume info {{ volume_id }} --json"
with_items: "{{ heketi_volumes.stdout|from_json|json_query(\"volumes[*]\") }}"
loop_control: { loop_var: "volume_id" }
register: "volumes_information"
- name: "Test heketi database volume."
set_fact: { heketi_database_volume_created: true }
with_items: "{{ volumes_information.results }}"
loop_control: { loop_var: "volume_information" }
vars: { volume: "{{ volume_information.stdout|from_json }}" }
when: "volume.name == 'heketidbstorage'"
- name: "Ensure heketi database volume exists."
assert: { that: "heketi_database_volume_created is defined" , msg: "Heketi database volume does not exist." }

View File

@@ -0,0 +1,4 @@
---
- name: "Clean up left over jobs."
command: "{{ bin_dir }}/kubectl delete jobs,pods --selector=\"deploy-heketi\""
changed_when: false

View File

@@ -0,0 +1,38 @@
---
- name: "Kubernetes Apps | Lay Down GlusterFS Daemonset"
template: { src: "glusterfs-daemonset.json.j2", dest: "{{ kube_config_dir }}/glusterfs-daemonset.json" }
become: true
register: "rendering"
- name: "Kubernetes Apps | Install and configure GlusterFS daemonset"
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
filename: "{{ kube_config_dir }}/glusterfs-daemonset.json"
state: "{{ rendering.changed | ternary('latest', 'present') }}"
- name: "Kubernetes Apps | Label GlusterFS nodes"
include_tasks: "glusterfs/label.yml"
with_items: "{{ groups['heketi-node'] }}"
loop_control:
loop_var: "node"
- name: "Kubernetes Apps | Wait for daemonset to become available."
register: "daemonset_state"
command: "{{ bin_dir }}/kubectl get daemonset glusterfs --output=json --ignore-not-found=true"
changed_when: false
vars:
daemonset_state: { stdout: "{}" }
ready: "{{ daemonset_state.stdout|from_json|json_query(\"status.numberReady\") }}"
desired: "{{ daemonset_state.stdout|from_json|json_query(\"status.desiredNumberScheduled\") }}"
until: "ready | int >= 3"
retries: 60
delay: 5
- name: "Kubernetes Apps | Lay Down Heketi Service Account"
template: { src: "heketi-service-account.json.j2", dest: "{{ kube_config_dir }}/heketi-service-account.json" }
become: true
register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi Service Account"
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
filename: "{{ kube_config_dir }}/heketi-service-account.json"
state: "{{ rendering.changed | ternary('latest', 'present') }}"

View File

@@ -0,0 +1,11 @@
---
- register: "label_present"
command: "{{ bin_dir }}/kubectl get node --selector=storagenode=glusterfs,kubernetes.io/hostname={{ node }} --ignore-not-found=true"
changed_when: false
- name: "Assign storage label"
when: "label_present.stdout_lines|length == 0"
command: "{{ bin_dir }}/kubectl label node {{ node }} storagenode=glusterfs"
- register: "label_present"
command: "{{ bin_dir }}/kubectl get node --selector=storagenode=glusterfs,kubernetes.io/hostname={{ node }} --ignore-not-found=true"
changed_when: false
- assert: { that: "label_present|length > 0", msg: "Node {{ node }} has not been assigned with label storagenode=glusterfs." }

View File

@@ -0,0 +1,26 @@
---
- name: "Kubernetes Apps | Lay Down Heketi"
become: true
template: { src: "heketi-deployment.json.j2", dest: "{{ kube_config_dir }}/heketi-deployment.json" }
register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi"
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
filename: "{{ kube_config_dir }}/heketi-deployment.json"
state: "{{ rendering.changed | ternary('latest', 'present') }}"
- name: "Ensure heketi is up and running."
changed_when: false
register: "heketi_state"
vars:
heketi_state: { stdout: "{}" }
pods_query: "items[?kind=='Pod'].status.conditions|[0][?type=='Ready'].status|[0]"
deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
command: "{{ bin_dir }}/kubectl get deployments,pods --selector=glusterfs --output=json"
until:
- "heketi_state.stdout|from_json|json_query(pods_query) == 'True'"
- "heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
retries: 60
delay: 5
- set_fact:
heketi_pod_name: "{{ heketi_state.stdout|from_json|json_query(\"items[?kind=='Pod'].metadata.name|[0]\") }}"

View File

@@ -0,0 +1,30 @@
---
- name: "Kubernetes Apps | GlusterFS"
include_tasks: "glusterfs.yml"
- name: "Kubernetes Apps | Heketi Secrets"
include_tasks: "secret.yml"
- name: "Kubernetes Apps | Test Heketi"
register: "heketi_service_state"
command: "{{bin_dir}}/kubectl get service heketi-storage-endpoints -o=name --ignore-not-found=true"
changed_when: false
- name: "Kubernetes Apps | Bootstrap Heketi"
when: "heketi_service_state.stdout == \"\""
include_tasks: "bootstrap.yml"
- name: "Kubernetes Apps | Heketi"
include_tasks: "heketi.yml"
- name: "Kubernetes Apps | Heketi Topology"
include_tasks: "topology.yml"
- name: "Kubernetes Apps | Heketi Storage"
include_tasks: "storage.yml"
- name: "Kubernetes Apps | Storage Class"
include_tasks: "storageclass.yml"
- name: "Clean up"
include_tasks: "cleanup.yml"

View File

@@ -0,0 +1,27 @@
---
- register: "clusterrolebinding_state"
command: "{{bin_dir}}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
changed_when: false
- name: "Kubernetes Apps | Deploy cluster role binding."
when: "clusterrolebinding_state.stdout == \"\""
command: "{{bin_dir}}/kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account"
- register: "clusterrolebinding_state"
command: "{{bin_dir}}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
changed_when: false
- assert: { that: "clusterrolebinding_state.stdout != \"\"", message: "Cluster role binding is not present." }
- register: "secret_state"
command: "{{bin_dir}}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
changed_when: false
- name: "Render Heketi secret configuration."
become: true
template:
src: "heketi.json.j2"
dest: "{{ kube_config_dir }}/heketi.json"
- name: "Deploy Heketi config secret"
when: "secret_state.stdout == \"\""
command: "{{bin_dir}}/kubectl create secret generic heketi-config-secret --from-file={{ kube_config_dir }}/heketi.json"
- register: "secret_state"
command: "{{bin_dir}}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
changed_when: false
- assert: { that: "secret_state.stdout != \"\"", message: "Heketi config secret is not present." }

View File

@@ -0,0 +1,12 @@
---
- name: "Kubernetes Apps | Lay Down Heketi Storage"
become: true
vars: { nodes: "{{ groups['heketi-node'] }}" }
template: { src: "heketi-storage.json.j2", dest: "{{ kube_config_dir }}/heketi-storage.json" }
register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi Storage"
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
filename: "{{ kube_config_dir }}/heketi-storage.json"
state: "{{ rendering.changed | ternary('latest', 'present') }}"

View File

@@ -0,0 +1,25 @@
---
- name: "Test storage class."
command: "{{ bin_dir }}/kubectl get storageclass gluster --ignore-not-found=true --output=json"
register: "storageclass"
changed_when: false
- name: "Test heketi service."
command: "{{ bin_dir }}/kubectl get service heketi --ignore-not-found=true --output=json"
register: "heketi_service"
changed_when: false
- name: "Ensure heketi service is available."
assert: { that: "heketi_service.stdout != \"\"" }
- name: "Render storage class configuration."
become: true
vars:
endpoint_address: "{{ (heketi_service.stdout|from_json).spec.clusterIP }}"
template:
src: "storageclass.yml.j2"
dest: "{{ kube_config_dir }}/storageclass.yml"
register: "rendering"
- name: "Kubernetes Apps | Install and configure Storace Class"
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
filename: "{{ kube_config_dir }}/storageclass.yml"
state: "{{ rendering.changed | ternary('latest', 'present') }}"

View File

@@ -0,0 +1,25 @@
---
- name: "Get heketi topology."
register: "heketi_topology"
changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
- name: "Render heketi topology template."
become: true
vars: { nodes: "{{ groups['heketi-node'] }}" }
register: "rendering"
template:
src: "topology.json.j2"
dest: "{{ kube_config_dir }}/topology.json"
- name: "Copy topology configuration into container."
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology."
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
- name: "Get heketi topology."
register: "heketi_topology"
changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
until: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\")|flatten|length == groups['heketi-node']|length"
retries: 60
delay: 5

View File

@@ -0,0 +1,144 @@
{
"kind": "DaemonSet",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "glusterfs",
"labels": {
"glusterfs": "deployment"
},
"annotations": {
"description": "GlusterFS Daemon Set",
"tags": "glusterfs"
}
},
"spec": {
"template": {
"metadata": {
"name": "glusterfs",
"labels": {
"glusterfs-node": "daemonset"
}
},
"spec": {
"nodeSelector": {
"storagenode" : "glusterfs"
},
"hostNetwork": true,
"containers": [
{
"image": "gluster/gluster-centos:gluster4u0_centos7",
"imagePullPolicy": "IfNotPresent",
"name": "glusterfs",
"volumeMounts": [
{
"name": "glusterfs-heketi",
"mountPath": "/var/lib/heketi"
},
{
"name": "glusterfs-run",
"mountPath": "/run"
},
{
"name": "glusterfs-lvm",
"mountPath": "/run/lvm"
},
{
"name": "glusterfs-etc",
"mountPath": "/etc/glusterfs"
},
{
"name": "glusterfs-logs",
"mountPath": "/var/log/glusterfs"
},
{
"name": "glusterfs-config",
"mountPath": "/var/lib/glusterd"
},
{
"name": "glusterfs-dev",
"mountPath": "/dev"
},
{
"name": "glusterfs-cgroup",
"mountPath": "/sys/fs/cgroup"
}
],
"securityContext": {
"capabilities": {},
"privileged": true
},
"readinessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 60,
"exec": {
"command": [
"/bin/bash",
"-c",
"systemctl status glusterd.service"
]
}
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 60,
"exec": {
"command": [
"/bin/bash",
"-c",
"systemctl status glusterd.service"
]
}
}
}
],
"volumes": [
{
"name": "glusterfs-heketi",
"hostPath": {
"path": "/var/lib/heketi"
}
},
{
"name": "glusterfs-run"
},
{
"name": "glusterfs-lvm",
"hostPath": {
"path": "/run/lvm"
}
},
{
"name": "glusterfs-etc",
"hostPath": {
"path": "/etc/glusterfs"
}
},
{
"name": "glusterfs-logs",
"hostPath": {
"path": "/var/log/glusterfs"
}
},
{
"name": "glusterfs-config",
"hostPath": {
"path": "/var/lib/glusterd"
}
},
{
"name": "glusterfs-dev",
"hostPath": {
"path": "/dev"
}
},
{
"name": "glusterfs-cgroup",
"hostPath": {
"path": "/sys/fs/cgroup"
}
}
]
}
}
}
}

View File

@@ -0,0 +1,133 @@
{
"kind": "List",
"apiVersion": "v1",
"items": [
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "deploy-heketi",
"labels": {
"glusterfs": "heketi-service",
"deploy-heketi": "support"
},
"annotations": {
"description": "Exposes Heketi Service"
}
},
"spec": {
"selector": {
"name": "deploy-heketi"
},
"ports": [
{
"name": "deploy-heketi",
"port": 8080,
"targetPort": 8080
}
]
}
},
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "deploy-heketi",
"labels": {
"glusterfs": "heketi-deployment",
"deploy-heketi": "deployment"
},
"annotations": {
"description": "Defines how to deploy Heketi"
}
},
"spec": {
"replicas": 1,
"template": {
"metadata": {
"name": "deploy-heketi",
"labels": {
"name": "deploy-heketi",
"glusterfs": "heketi-pod",
"deploy-heketi": "pod"
}
},
"spec": {
"serviceAccountName": "heketi-service-account",
"containers": [
{
"image": "heketi/heketi:7",
"imagePullPolicy": "Always",
"name": "deploy-heketi",
"env": [
{
"name": "HEKETI_EXECUTOR",
"value": "kubernetes"
},
{
"name": "HEKETI_DB_PATH",
"value": "/var/lib/heketi/heketi.db"
},
{
"name": "HEKETI_FSTAB",
"value": "/var/lib/heketi/fstab"
},
{
"name": "HEKETI_SNAPSHOT_LIMIT",
"value": "14"
},
{
"name": "HEKETI_KUBE_GLUSTER_DAEMONSET",
"value": "y"
}
],
"ports": [
{
"containerPort": 8080
}
],
"volumeMounts": [
{
"name": "db",
"mountPath": "/var/lib/heketi"
},
{
"name": "config",
"mountPath": "/etc/heketi"
}
],
"readinessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 3,
"httpGet": {
"path": "/hello",
"port": 8080
}
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 30,
"httpGet": {
"path": "/hello",
"port": 8080
}
}
}
],
"volumes": [
{
"name": "db"
},
{
"name": "config",
"secret": {
"secretName": "heketi-config-secret"
}
}
]
}
}
}
}
]
}

View File

@@ -0,0 +1,159 @@
{
"kind": "List",
"apiVersion": "v1",
"items": [
{
"kind": "Secret",
"apiVersion": "v1",
"metadata": {
"name": "heketi-db-backup",
"labels": {
"glusterfs": "heketi-db",
"heketi": "db"
}
},
"data": {
},
"type": "Opaque"
},
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "heketi",
"labels": {
"glusterfs": "heketi-service",
"deploy-heketi": "support"
},
"annotations": {
"description": "Exposes Heketi Service"
}
},
"spec": {
"selector": {
"name": "heketi"
},
"ports": [
{
"name": "heketi",
"port": 8080,
"targetPort": 8080
}
]
}
},
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "heketi",
"labels": {
"glusterfs": "heketi-deployment"
},
"annotations": {
"description": "Defines how to deploy Heketi"
}
},
"spec": {
"replicas": 1,
"template": {
"metadata": {
"name": "heketi",
"labels": {
"name": "heketi",
"glusterfs": "heketi-pod"
}
},
"spec": {
"serviceAccountName": "heketi-service-account",
"containers": [
{
"image": "heketi/heketi:7",
"imagePullPolicy": "Always",
"name": "heketi",
"env": [
{
"name": "HEKETI_EXECUTOR",
"value": "kubernetes"
},
{
"name": "HEKETI_DB_PATH",
"value": "/var/lib/heketi/heketi.db"
},
{
"name": "HEKETI_FSTAB",
"value": "/var/lib/heketi/fstab"
},
{
"name": "HEKETI_SNAPSHOT_LIMIT",
"value": "14"
},
{
"name": "HEKETI_KUBE_GLUSTER_DAEMONSET",
"value": "y"
}
],
"ports": [
{
"containerPort": 8080
}
],
"volumeMounts": [
{
"mountPath": "/backupdb",
"name": "heketi-db-secret"
},
{
"name": "db",
"mountPath": "/var/lib/heketi"
},
{
"name": "config",
"mountPath": "/etc/heketi"
}
],
"readinessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 3,
"httpGet": {
"path": "/hello",
"port": 8080
}
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 30,
"httpGet": {
"path": "/hello",
"port": 8080
}
}
}
],
"volumes": [
{
"name": "db",
"glusterfs": {
"endpoints": "heketi-storage-endpoints",
"path": "heketidbstorage"
}
},
{
"name": "heketi-db-secret",
"secret": {
"secretName": "heketi-db-backup"
}
},
{
"name": "config",
"secret": {
"secretName": "heketi-config-secret"
}
}
]
}
}
}
}
]
}

View File

@@ -0,0 +1,7 @@
{
"apiVersion": "v1",
"kind": "ServiceAccount",
"metadata": {
"name": "heketi-service-account"
}
}

View File

@@ -0,0 +1,54 @@
{
"apiVersion": "v1",
"kind": "List",
"items": [
{
"kind": "Endpoints",
"apiVersion": "v1",
"metadata": {
"name": "heketi-storage-endpoints",
"creationTimestamp": null
},
"subsets": [
{% set nodeblocks = [] %}
{% for node in nodes %}
{% set nodeblock %}
{
"addresses": [
{
"ip": "{{ hostvars[node]['ansible_facts']['default_ipv4']['address'] }}"
}
],
"ports": [
{
"port": 1
}
]
}
{% endset %}
{% if nodeblocks.append(nodeblock) %}{% endif %}
{% endfor %}
{{ nodeblocks|join(',') }}
]
},
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "heketi-storage-endpoints",
"creationTimestamp": null
},
"spec": {
"ports": [
{
"port": 1,
"targetPort": 0
}
]
},
"status": {
"loadBalancer": {}
}
}
]
}

View File

@@ -0,0 +1,44 @@
{
"_port_comment": "Heketi Server Port Number",
"port": "8080",
"_use_auth": "Enable JWT authorization. Please enable for deployment",
"use_auth": true,
"_jwt": "Private keys for access",
"jwt": {
"_admin": "Admin has access to all APIs",
"admin": {
"key": "{{ heketi_admin_key }}"
},
"_user": "User only has access to /volumes endpoint",
"user": {
"key": "{{ heketi_user_key }}"
}
},
"_glusterfs_comment": "GlusterFS Configuration",
"glusterfs": {
"_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
"executor": "kubernetes",
"_db_comment": "Database file name",
"db": "/var/lib/heketi/heketi.db",
"kubeexec": {
"rebalance_on_expansion": true
},
"sshexec": {
"rebalance_on_expansion": true,
"keyfile": "/etc/heketi/private_key",
"fstab": "/etc/fstab",
"port": "22",
"user": "root",
"sudo": false
}
},
"_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
"backup_db_to_kube_secret": false
}

View File

@@ -0,0 +1,12 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: gluster
annotations:
storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
parameters:
resturl: "http://{{ endpoint_address }}:8080"
restuser: "admin"
restuserkey: "{{ heketi_admin_key }}"

View File

@@ -0,0 +1,34 @@
{
"clusters": [
{
"nodes": [
{% set nodeblocks = [] %}
{% for node in nodes %}
{% set nodeblock %}
{
"node": {
"hostnames": {
"manage": [
"{{ node }}"
],
"storage": [
"{{ hostvars[node]['ansible_facts']['default_ipv4']['address'] }}"
]
},
"zone": 1
},
"devices": [
{
"name": "{{ hostvars[node]['disk_volume_device_1'] }}",
"destroydata": false
}
]
}
{% endset %}
{% if nodeblocks.append(nodeblock) %}{% endif %}
{% endfor %}
{{ nodeblocks|join(',') }}
]
}
]
}

View File

@@ -0,0 +1,46 @@
---
- name: "Install lvm utils (RedHat)"
become: true
yum:
name: "lvm2"
state: "present"
when: "ansible_os_family == 'RedHat'"
- name: "Install lvm utils (Debian)"
become: true
apt:
name: "lvm2"
state: "present"
when: "ansible_os_family == 'Debian'"
- name: "Get volume group information."
become: true
shell: "pvs {{ disk_volume_device_1 }} --option vg_name | tail -n+2"
register: "volume_groups"
ignore_errors: true
changed_when: false
- name: "Remove volume groups."
become: true
command: "vgremove {{ volume_group }} --yes"
with_items: "{{ volume_groups.stdout_lines }}"
loop_control: { loop_var: "volume_group" }
- name: "Remove physical volume from cluster disks."
become: true
command: "pvremove {{ disk_volume_device_1 }} --yes"
ignore_errors: true
- name: "Remove lvm utils (RedHat)"
become: true
yum:
name: "lvm2"
state: "absent"
when: "ansible_os_family == 'RedHat'"
- name: "Remove lvm utils (Debian)"
become: true
apt:
name: "lvm2"
state: "absent"
when: "ansible_os_family == 'Debian'"

View File

@@ -0,0 +1,51 @@
---
- name: "Remove storage class."
command: "{{ bin_dir }}/kubectl delete storageclass gluster"
ignore_errors: true
- name: "Tear down heketi."
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\""
ignore_errors: true
- name: "Tear down heketi."
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\""
ignore_errors: true
- name: "Tear down bootstrap."
include_tasks: "../provision/tasks/bootstrap/tear-down.yml"
- name: "Ensure there is nothing left over."
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
retries: 60
delay: 5
- name: "Ensure there is nothing left over."
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
retries: 60
delay: 5
- name: "Tear down glusterfs."
command: "{{ bin_dir }}/kubectl delete daemonset.extensions/glusterfs"
ignore_errors: true
- name: "Remove heketi storage service."
command: "{{ bin_dir }}/kubectl delete service heketi-storage-endpoints"
ignore_errors: true
- name: "Remove heketi gluster role binding"
command: "{{ bin_dir }}/kubectl delete clusterrolebinding heketi-gluster-admin"
ignore_errors: true
- name: "Remove heketi config secret"
command: "{{ bin_dir }}/kubectl delete secret heketi-config-secret"
ignore_errors: true
- name: "Remove heketi db backup"
command: "{{ bin_dir }}/kubectl delete secret heketi-db-backup"
ignore_errors: true
- name: "Remove heketi service account"
command: "{{ bin_dir }}/kubectl delete serviceaccount heketi-service-account"
ignore_errors: true
- name: "Get secrets"
command: "{{ bin_dir }}/kubectl get secrets --output=\"json\""
register: "secrets"
changed_when: false
- name: "Remove heketi storage secret"
vars: { storage_query: "items[?metadata.annotations.\"kubernetes.io/service-account.name\"=='heketi-service-account'].metadata.name|[0]" }
command: "{{ bin_dir }}/kubectl delete secret {{ secrets.stdout|from_json|json_query(storage_query) }}"
when: "storage_query is defined"
ignore_errors: true

View File

@@ -17,21 +17,18 @@ This project will create:
- Export the variables for your AWS credentials or edit `credentials.tfvars`:
```
export AWS_ACCESS_KEY_ID="www"
export AWS_SECRET_ACCESS_KEY ="xxx"
export AWS_SSH_KEY_NAME="yyy"
export AWS_DEFAULT_REGION="zzz"
export TF_VAR_AWS_ACCESS_KEY_ID="www"
export TF_VAR_AWS_SECRET_ACCESS_KEY="xxx"
export TF_VAR_AWS_SSH_KEY_NAME="yyy"
export TF_VAR_AWS_DEFAULT_REGION="zzz"
```
- Rename `contrib/terraform/aws/terraform.tfvars.example` to `terraform.tfvars`
- Update `contrib/terraform/aws/terraform.tfvars` with your data. By default, the Terraform scripts use CoreOS as the base image. If you want to change this behaviour, see the note "Using other distributions than CoreOS" below.
- Allocate a new AWS Elastic IP. Use this for your `loadbalancer_apiserver_address` value (below)
- Create an AWS EC2 SSH Key
- Run with `terraform apply --var-file="credentials.tfvars"` or `terraform apply` depending on whether you exported your AWS credentials
Example:
```commandline
terraform apply -var-file=credentials.tfvars -var 'loadbalancer_apiserver_address=34.212.228.77'
terraform apply -var-file=credentials.tfvars
```
- Terraform automatically creates an Ansible Inventory file called `hosts` with the created infrastructure in the directory `inventory`
@@ -46,7 +43,7 @@ ssh -F ./ssh-bastion.conf user@$ip
Example (this one assumes you are using CoreOS)
```commandline
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_ssh_user=core -e bootstrap_os=coreos -b --become-user=root --flush-cache
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=core -e bootstrap_os=coreos -b --become-user=root --flush-cache
```
***Using other distributions than CoreOS***
If you want to use a distribution other than CoreOS, you can modify the search filters of the 'data "aws_ami" "distro"' block in variables.tf.

View File

@@ -181,7 +181,7 @@ data "template_file" "inventory" {
resource "null_resource" "inventories" {
provisioner "local-exec" {
command = "echo '${data.template_file.inventory.rendered}' > ../../../inventory/hosts"
command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
}
triggers {

View File

@@ -31,3 +31,5 @@ default_tags = {
# Env = "devtest"
# Product = "kubernetes"
}
inventory_file = "../../../inventory/hosts"

View File

@@ -103,3 +103,7 @@ variable "default_tags" {
description = "Default tags for all resources"
type = "map"
}
variable "inventory_file" {
description = "Where to store the generated inventory file"
}

View File

@@ -1 +1 @@
../../inventory/group_vars
../../inventory/local/group_vars

View File

@@ -32,7 +32,11 @@ floating IP addresses or not.
- Kubernetes worker nodes
Note that the Ansible script will report an invalid configuration if you wind up
with an even number of etcd instances since that is not a valid configuration.
with an even number of etcd instances since that is not a valid configuration. This
restriction includes standalone etcd nodes that are deployed in a cluster along with
master nodes with etcd replicas. As an example, if you have three master nodes with
etcd replicas and three standalone etcd nodes, the script will fail since there are
now six total etcd replicas.
### GlusterFS
The Terraform configuration supports provisioning of an optional GlusterFS
@@ -135,7 +139,7 @@ the one you want to use with the environment variable `OS_CLOUD`:
export OS_CLOUD=mycloud
```
##### Openrc method (deprecated)
##### Openrc method
When using classic environment variables, Terraform uses default `OS_*`
environment variables. A script suitable for your environment may be available
@@ -218,6 +222,9 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
|`number_of_bastions` | Number of bastion hosts to create. Scripts assume this is really just zero or one |
|`number_of_gfs_nodes_no_floating_ip` | Number of gluster servers to provision. |
| `gfs_volume_size_in_gb` | Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks |
|`supplementary_master_groups` | To add ansible groups to the masters, such as `kube-node` for tainting them as nodes, empty by default. |
|`supplementary_node_groups` | To add ansible groups to the nodes, such as `kube-ingress` for running ingress controller pods, empty by default. |
|`bastion_allowed_remote_ips` | List of CIDRs allowed to initiate an SSH connection, `["0.0.0.0/0"]` by default |
#### Terraform state files
@@ -253,7 +260,7 @@ $ terraform apply -var-file=cluster.tf ../../contrib/terraform/openstack
if you chose to create a bastion host, this script will create
`contrib/terraform/openstack/k8s-cluster.yml` with an ssh command for Ansible to
be able to access your machines tunneling through the bastion's IP address. If
be able to access your machines tunneling through the bastion's IP address. If
you want to manually handle the ssh tunneling to these machines, please delete
or move that file. If you want to use this, just leave it there, as ansible will
pick it up automatically.
@@ -299,11 +306,15 @@ If you have deployed and destroyed a previous iteration of your cluster, you wil
#### Bastion host
If you are not using a bastion host, but not all of your nodes have floating IPs, create a file `inventory/$CLUSTER/group_vars/no-floating.yml` with the following content. Use one of your nodes with a floating IP (this should have been output at the end of the Terraform step) and the appropriate user for that OS, or if you have another jump host, use that.
Bastion access will be determined by:
```
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -W %h:%p -q USER@MASTER_IP"'
```
- Your choice on the amount of bastion hosts (set by `number_of_bastions` terraform variable).
- The existence of nodes/masters with floating IPs (set by `number_of_k8s_masters`, `number_of_k8s_nodes`, `number_of_k8s_masters_no_etcd` terraform variables).
If you have a bastion host, your ssh traffic will be directly routed through it. This is regardless of whether you have masters/nodes with a floating IP assigned.
If you don't have a bastion host, but at least one of your masters/nodes have a floating IP, then ssh traffic will be tunneled by one of these machines.
So either a bastion host or at least one master/node with a floating IP is required.
#### Test access

View File

@@ -3,6 +3,7 @@ module "network" {
external_net = "${var.external_net}"
network_name = "${var.network_name}"
subnet_cidr = "${var.subnet_cidr}"
cluster_name = "${var.cluster_name}"
dns_nameservers = "${var.dns_nameservers}"
}
@@ -24,6 +25,7 @@ module "compute" {
source = "modules/compute"
cluster_name = "${var.cluster_name}"
az_list = "${var.az_list}"
number_of_k8s_masters = "${var.number_of_k8s_masters}"
number_of_k8s_masters_no_etcd = "${var.number_of_k8s_masters_no_etcd}"
number_of_etcd = "${var.number_of_etcd}"
@@ -48,6 +50,9 @@ module "compute" {
k8s_master_fips = "${module.ips.k8s_master_fips}"
k8s_node_fips = "${module.ips.k8s_node_fips}"
bastion_fips = "${module.ips.bastion_fips}"
bastion_allowed_remote_ips = "${var.bastion_allowed_remote_ips}"
supplementary_master_groups = "${var.supplementary_master_groups}"
supplementary_node_groups = "${var.supplementary_node_groups}"
network_id = "${module.network.router_id}"
}

View File

@@ -3,61 +3,62 @@ resource "openstack_compute_keypair_v2" "k8s" {
public_key = "${chomp(file(var.public_key_path))}"
}
resource "openstack_compute_secgroup_v2" "k8s_master" {
resource "openstack_networking_secgroup_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master"
description = "${var.cluster_name} - Kubernetes Master"
rule {
ip_protocol = "tcp"
from_port = "6443"
to_port = "6443"
cidr = "0.0.0.0/0"
}
}
resource "openstack_compute_secgroup_v2" "bastion" {
resource "openstack_networking_secgroup_rule_v2" "k8s_master" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "6443"
port_range_max = "6443"
remote_ip_prefix = "0.0.0.0/0"
security_group_id = "${openstack_networking_secgroup_v2.k8s_master.id}"
}
resource "openstack_networking_secgroup_v2" "bastion" {
name = "${var.cluster_name}-bastion"
description = "${var.cluster_name} - Bastion Server"
rule {
ip_protocol = "tcp"
from_port = "22"
to_port = "22"
cidr = "0.0.0.0/0"
}
}
resource "openstack_compute_secgroup_v2" "k8s" {
resource "openstack_networking_secgroup_rule_v2" "bastion" {
count = "${length(var.bastion_allowed_remote_ips)}"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
remote_ip_prefix = "${var.bastion_allowed_remote_ips[count.index]}"
security_group_id = "${openstack_networking_secgroup_v2.bastion.id}"
}
resource "openstack_networking_secgroup_v2" "k8s" {
name = "${var.cluster_name}-k8s"
description = "${var.cluster_name} - Kubernetes"
}
rule {
ip_protocol = "icmp"
from_port = "-1"
to_port = "-1"
cidr = "0.0.0.0/0"
}
resource "openstack_networking_secgroup_rule_v2" "k8s" {
direction = "ingress"
ethertype = "IPv4"
remote_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
security_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
}
rule {
ip_protocol = "tcp"
from_port = "1"
to_port = "65535"
self = true
}
resource "openstack_networking_secgroup_v2" "worker" {
name = "${var.cluster_name}-k8s-worker"
description = "${var.cluster_name} - Kubernetes worker nodes"
}
rule {
ip_protocol = "udp"
from_port = "1"
to_port = "65535"
self = true
}
rule {
ip_protocol = "icmp"
from_port = "-1"
to_port = "-1"
self = true
}
resource "openstack_networking_secgroup_rule_v2" "worker" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "30000"
port_range_max = "32767"
remote_ip_prefix = "0.0.0.0/0"
security_group_id = "${openstack_networking_secgroup_v2.worker.id}"
}
resource "openstack_compute_instance_v2" "bastion" {
@@ -71,8 +72,8 @@ resource "openstack_compute_instance_v2" "bastion" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
"${openstack_compute_secgroup_v2.bastion.name}",
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.bastion.name}",
"default",
]
@@ -83,7 +84,7 @@ resource "openstack_compute_instance_v2" "bastion" {
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${var.bastion_fips[0]}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${var.bastion_fips[0]}/ > contrib/terraform/group_vars/no-floating.yml"
}
}
@@ -91,6 +92,7 @@ resource "openstack_compute_instance_v2" "bastion" {
resource "openstack_compute_instance_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master-${count.index+1}"
count = "${var.number_of_k8s_masters}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@@ -99,23 +101,28 @@ resource "openstack_compute_instance_v2" "k8s_master" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.bastion.name}",
"${openstack_compute_secgroup_v2.k8s.name}",
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.bastion.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
"default",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,k8s-cluster,vault"
kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault"
depends_on = "${var.network_id}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-${count.index+1}"
count = "${var.number_of_k8s_masters_no_etcd}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@@ -124,21 +131,27 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}",
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.bastion.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,k8s-cluster,vault"
kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault"
depends_on = "${var.network_id}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "etcd" {
name = "${var.cluster_name}-etcd-${count.index+1}"
count = "${var.number_of_etcd}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_etcd}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@@ -147,7 +160,7 @@ resource "openstack_compute_instance_v2" "etcd" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}"]
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}"]
metadata = {
ssh_user = "${var.ssh_user}"
@@ -160,6 +173,7 @@ resource "openstack_compute_instance_v2" "etcd" {
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@@ -168,14 +182,14 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}",
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
"default",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,k8s-cluster,vault,no-floating"
kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
}
@@ -184,6 +198,7 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@@ -192,13 +207,13 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}",
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,k8s-cluster,vault,no-floating"
kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
}
@@ -207,6 +222,7 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.cluster_name}-k8s-node-${count.index+1}"
count = "${var.number_of_k8s_nodes}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@@ -215,22 +231,28 @@ resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
"${openstack_compute_secgroup_v2.bastion.name}",
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.bastion.name}",
"${openstack_networking_secgroup_v2.worker.name}",
"default",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster"
kubespray_groups = "kube-node,k8s-cluster,${var.supplementary_node_groups}"
depends_on = "${var.network_id}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_node_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
name = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
count = "${var.number_of_k8s_nodes_no_floating_ip}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@@ -239,13 +261,14 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.worker.name}",
"default",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,no-floating"
kubespray_groups = "kube-node,k8s-cluster,no-floating,${var.supplementary_node_groups}"
depends_on = "${var.network_id}"
}
@@ -279,6 +302,7 @@ resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image_gfs}"
flavor_id = "${var.flavor_gfs_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@@ -287,7 +311,7 @@ resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"default",
]

View File

@@ -1,5 +1,9 @@
variable "cluster_name" {}
variable "az_list" {
type = "list"
}
variable "number_of_k8s_masters" {}
variable "number_of_k8s_masters_no_etcd" {}
@@ -55,3 +59,15 @@ variable "k8s_node_fips" {
variable "bastion_fips" {
type = "list"
}
variable "bastion_allowed_remote_ips" {
type = "list"
}
variable "supplementary_master_groups" {
default = ""
}
variable "supplementary_node_groups" {
default = ""
}

View File

@@ -12,7 +12,7 @@ resource "openstack_networking_network_v2" "k8s" {
resource "openstack_networking_subnet_v2" "k8s" {
name = "${var.cluster_name}-internal-network"
network_id = "${openstack_networking_network_v2.k8s.id}"
cidr = "10.0.0.0/24"
cidr = "${var.subnet_cidr}"
ip_version = 4
dns_nameservers = "${var.dns_nameservers}"
}

View File

@@ -1,4 +1,8 @@
output "router_id" {
value = "${openstack_networking_router_v2.k8s.id}"
}
output "router_internal_port_id" {
value = "${openstack_networking_router_interface_v2.k8s.id}"
}

View File

@@ -7,3 +7,5 @@ variable "cluster_name" {}
variable "dns_nameservers" {
type = "list"
}
variable "subnet_cidr" {}

View File

@@ -41,5 +41,6 @@ number_of_k8s_nodes_no_floating_ip = 4
# networking
network_name = "<network>"
external_net = "<UUID>"
subnet_cidr = "<cidr>"
floatingip_pool = "<pool>"
bastion_allowed_remote_ips = ["0.0.0.0/0"]

View File

@@ -2,6 +2,12 @@ variable "cluster_name" {
default = "example"
}
variable "az_list" {
description = "List of Availability Zones available in your OpenStack cluster"
type = "list"
default = ["nova"]
}
variable "number_of_bastions" {
default = 1
}
@@ -97,6 +103,12 @@ variable "network_name" {
default = "internal"
}
variable "subnet_cidr" {
description = "Subnet CIDR block."
type = "string"
default = "10.0.0.0/24"
}
variable "dns_nameservers" {
description = "An array of DNS name server names used by hosts in this subnet."
type = "list"
@@ -111,3 +123,19 @@ variable "floatingip_pool" {
variable "external_net" {
description = "uuid of the external/public network"
}
variable "supplementary_master_groups" {
description = "supplementary kubespray ansible groups for masters, such kube-node"
default = ""
}
variable "supplementary_node_groups" {
description = "supplementary kubespray ansible groups for worker nodes, such as kube-ingress"
default = ""
}
variable "bastion_allowed_remote_ips" {
description = "An array of CIDRs allowed to SSH to hosts"
type = "list"
default = ["0.0.0.0/0"]
}

View File

@@ -706,6 +706,10 @@ def query_list(hosts):
for name, attrs, hostgroups in hosts:
for group in set(hostgroups):
# Ansible 2.6.2 stopped supporting empty group names: https://github.com/ansible/ansible/pull/42584/commits/d4cd474b42ed23d8f8aabb2a7f84699673852eaf
# Empty group name defaults to "all" in Ansible < 2.6.2 so we alter empty group names to "all"
if not group: group = "all"
groups[group].setdefault('hosts', [])
groups[group]['hosts'].append(name)

View File

@@ -123,7 +123,6 @@ The following tags are defined in playbooks:
| hyperkube | Manipulations with K8s hyperkube image
| k8s-pre-upgrade | Upgrading K8s cluster
| k8s-secrets | Configuring K8s certs/keys
| kpm | Installing K8s apps definitions with KPM
| kube-apiserver | Configuring static pod kube-apiserver
| kube-controller-manager | Configuring static pod kube-controller-manager
| kubectl | Installing kubectl and bash completion
@@ -159,7 +158,7 @@ And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/reso
```
ansible-playbook -i inventory/sample/hosts.ini -e dnsmasq_dns_server='' cluster.yml --tags resolvconf
```
And this prepares all container images localy (at the ansible runner node) without installing
And this prepares all container images locally (at the ansible runner node) without installing
or upgrading related stuff or trying to upload container to K8s cluster nodes:
```
ansible-playbook -i inventory/sample/hosts.ini cluster.yml \

View File

@@ -1,11 +1,11 @@
AWS
===============
To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`.
To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`. Refer to the [Kubespray Configuration](#kubespray-configuration) for customizing the provider.
Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/aws_iam/). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for Etcd do not need a role.
You would also need to tag the resources in your VPC accordingly for the aws provider to utilize them. Tag the subnets and all instances that kubernetes will be run on with key `kubernetes.io/cluster/$cluster_name` (`$cluster_name` must be a unique identifier for the cluster). Tag the subnets that must be targetted by external ELBs with the key `kubernetes.io/role/elb` and internal ELBs with the key `kubernetes.io/role/internal-elb`.
You would also need to tag the resources in your VPC accordingly for the aws provider to utilize them. Tag the subnets, route tables and all instances that kubernetes will be run on with key `kubernetes.io/cluster/$cluster_name` (`$cluster_name` must be a unique identifier for the cluster). Tag the subnets that must be targeted by external ELBs with the key `kubernetes.io/role/elb` and internal ELBs with the key `kubernetes.io/role/internal-elb`.
Make sure your VPC has both DNS Hostnames support and Private DNS enabled.
@@ -58,3 +58,21 @@ export AWS_SECRET_ACCESS_KEY="yyyyy"
export REGION="us-east-2"
```
- We will now create our cluster. There will be either one or two small changes. The first is that we will specify `-i inventory/kubespray-aws-inventory.py` as our inventory script. The other is conditional. If your AWS instances are public facing, you can set the `VPC_VISIBILITY` variable to `public` and that will result in public IP and DNS names being passed into the inventory. This causes your cluster.yml command to look like `VPC_VISIBILITY="public" ansible-playbook ... cluster.yml`
## Kubespray configuration
Declare the cloud config variables for the `aws` provider as follows. Setting these variables is optional and depends on your use case; a sketch follows the table.
Variable|Type|Comment
---|---|---
aws_zone|string|Force set the AWS zone. Recommended to leave blank.
aws_vpc|string|The AWS VPC flag enables the possibility to run the master components on a different aws account, on a different cloud provider or on-premises. If the flag is set, the KubernetesClusterTag must also be provided
aws_subnet_id|string|SubnetID enables using a specific AWS VPC subnet for ELBs
aws_route_table_id|string|RouteTableID enables using a specific RouteTable
aws_role_arn|string|RoleARN is the IAM role to assume when interacting with AWS APIs
aws_kubernetes_cluster_tag|string|KubernetesClusterTag is the legacy cluster id we'll use to identify our cluster resources
aws_kubernetes_cluster_id|string|KubernetesClusterID is the cluster id we'll use to identify our cluster resources
aws_disable_security_group_ingress|bool|The aws provider creates an inbound rule per load balancer on the node security group. However, this can run into the AWS security group rule limit of 50 if many LoadBalancers are created. This flag disables the automatic ingress creation. It requires that the user has set up a rule that allows inbound traffic on kubelet ports from the local VPC subnet (so load balancers can access it). E.g. 10.82.0.0/16 30000-32000.
aws_elb_security_group|string|Only in Kubelet version >= 1.7 : AWS has a hard limit of 500 security groups. For large clusters creating a security group for each ELB can cause the max number of security groups to be reached. If this is set instead of creating a new Security group for each ELB this security group will be used instead.
aws_disable_strict_zone_check|bool|During the instantiation of a new AWS cloud provider, the detected region is validated against a known set of regions. In a non-standard, AWS-like environment (e.g. Eucalyptus), this check may be undesirable. Setting this to true will disable the check and provide a warning that the check was skipped. Please note that this is an experimental feature and work-in-progress for the moment.
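For reference, a minimal sketch of how a few of these could land in a group vars file; the file path and every value below are illustrative placeholders, not recommendations:
```yaml
# e.g. inventory/mycluster/group_vars/all.yml -- placeholder values only
cloud_provider: aws
aws_vpc: "vpc-0123456789abcdef0"            # hypothetical VPC id
aws_subnet_id: "subnet-0123456789abcdef0"   # hypothetical subnet used for ELBs
aws_kubernetes_cluster_id: "my-cluster"
aws_disable_security_group_ingress: false
```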

View File

@@ -1,6 +1,13 @@
Calico
===========
---
**N.B. Upgrading from version 2.6.5 to 3.1.1 also upgrades the etcd store to etcdv3.**
If you create automated backups of etcdv2, please switch to creating etcdv3 backups, as Kubernetes and Calico now use etcdv3.
After migration you can check the `/tmp/calico_upgrade/` directory for items converted to etcdv3.
**PLEASE TEST upgrade before upgrading production cluster.**
---
Check if the calico-node container is running
```
@@ -14,7 +21,7 @@ The **calicoctl** command allows to check the status of the network workloads.
calicoctl node status
```
or for versions prior *v1.0.0*:
or for versions prior to *v1.0.0*:
```
calicoctl status
@@ -26,7 +33,7 @@ calicoctl status
calicoctl get ippool -o wide
```
or for versions prior *v1.0.0*:
or for versions prior to *v1.0.0*:
```
calicoctl pool show
@@ -66,7 +73,7 @@ In some cases you may want to route the pods subnet and so NAT is not needed on
For instance if you have a cluster spread across different locations and you want your pods to talk to each other no matter where they are located.
The following variables need to be set:
`peer_with_router` to enable the peering with the datacenter's border router (default value: false).
you'll need to edit the inventory and add a and a hostvar `local_as` by node.
you'll need to edit the inventory and add a hostvar `local_as` by node.
```
node1 ansible_ssh_host=95.54.0.12 local_as=xxxxxx
@@ -86,7 +93,7 @@ To do so you can deploy BGP route reflectors and peer `calico-node` with them as
recommended here:
* https://hub.docker.com/r/calico/routereflector/
* http://docs.projectcalico.org/v2.0/reference/private-cloud/l3-interconnect-fabric
* https://docs.projectcalico.org/v3.1/reference/private-cloud/l3-interconnect-fabric
You need to edit your inventory and add:
@@ -149,7 +156,7 @@ The inventory above will deploy the following topology assuming that calico's
##### Optional : Define default endpoint to host action
By default Calico blocks traffic from endpoints to the host itself by using an iptables DROP action. When using it in kubernetes the action has to be changed to RETURN (default in kubespray) or ACCEPT (see https://github.com/projectcalico/felix/issues/660 and https://github.com/projectcalico/calicoctl/issues/1389). Otherwise all network packets from pods (with hostNetwork=False) to services endpoints (with hostNetwork=True) withing the same node are dropped.
By default Calico blocks traffic from endpoints to the host itself by using an iptables DROP action. When using it in kubernetes the action has to be changed to RETURN (default in kubespray) or ACCEPT (see https://github.com/projectcalico/felix/issues/660 and https://github.com/projectcalico/calicoctl/issues/1389). Otherwise all network packets from pods (with hostNetwork=False) to services endpoints (with hostNetwork=True) within the same node are dropped.
To re-define default action please set the following variable in your inventory:
@@ -157,6 +164,15 @@ To re-define default action please set the following variable in your inventory:
calico_endpoint_to_host_action: "ACCEPT"
```
##### Optional : Define address on which Felix will respond to health requests
Since Calico 3.2.0, HealthCheck default behavior changed from listening on all interfaces to just listening on localhost.
To re-define health host please set the following variable in your inventory:
```
calico_healthhost: "0.0.0.0"
```
Cloud providers configuration
=============================
@@ -169,3 +185,12 @@ By default the felix agent(calico-node) will abort if the Kernel RPF setting is
```
calico_node_ignorelooserpf: true
```
Note that in OpenStack you must allow `ipip` traffic in your security groups,
otherwise you will experience timeouts.
To do this you must add a rule which allows it, for example:
```
neutron security-group-rule-create --protocol 4 --direction egress k8s-a0tp4t
neutron security-group-rule-create --protocol 4 --direction ingress k8s-a0tp4t
```

View File

@@ -54,16 +54,18 @@ The default configuration uses VXLAN to create an overlay. Two networks are crea
You can change the default network configuration by overriding the `contiv_networks` variable.
The default forward mode is set to routing:
The default forward mode is set to routing and the default network mode is vxlan:
```yaml
contiv_fwd_mode: routing
contiv_net_mode: vxlan
```
The following is an example of how you can use VLAN instead of VXLAN:
```yaml
contiv_fwd_mode: bridge
contiv_net_mode: vlan
contiv_vlan_interface: eth0
contiv_networks:
- name: default-net

docs/cri-o.md (new file, 31 lines)
View File

@@ -0,0 +1,31 @@
cri-o
===============
cri-o is a container runtime developed by the Kubernetes project.
Currently, only basic functionality is supported for cri-o:
* cri-o is supported on Kubernetes 1.11.1 or later.
* Helm and other features may not be supported due to their dependency on Docker.
* scale.yml and upgrade-cluster.yml are not supported.
To use cri-o instead of Docker, set the following variables:
#### all.yml
```
kubeadm_enabled: true
...
download_container: false
skip_downloads: false
```
#### k8s-cluster.yml
```
etcd_deployment_type: host
kubelet_deployment_type: host
container_manager: crio
```

View File

@@ -52,13 +52,13 @@ You can modify how Kubespray sets up DNS for your cluster with the variables ``d
## dns_mode
``dns_mode`` configures how Kubespray will set up cluster DNS. There are four modes available:
#### dnsmasq_kubedns (default)
#### dnsmasq_kubedns
This installs an additional dnsmasq DaemonSet which gives more flexibility and lifts some
limitations (e.g. number of nameservers). Kubelet is instructed to use dnsmasq instead of kubedns/skydns.
It is configured to forward all DNS queries belonging to cluster services to kubedns/skydns. All
other queries are forwarded to the nameservers found in ``upstream_dns_servers`` or ``default_resolver``
#### kubedns
#### kubedns (default)
This does not install the dnsmasq DaemonSet and instructs kubelet to directly use kubedns/skydns for
all queries.
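A minimal sketch of selecting a mode; the file path is illustrative, and the remaining modes mentioned above are set the same way:
```yaml
# e.g. inventory/sample/group_vars/k8s-cluster.yml
dns_mode: kubedns          # or dnsmasq_kubedns, per the sections above
```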

View File

@@ -40,3 +40,14 @@ The full list of available vars may be found in the download's ansible role defa
Those also allow specifying custom URLs and local repositories for binaries and container
images. See also the DNS stack docs for the related intranet configuration,
so the hosts can resolve those URLs and repos.
## Offline environment
In case your servers don't have access to the internet (for example when deploying on premises with security constraints), you'll first have to set up the appropriate proxies/caches/mirrors and/or internal repositories and registries, and then adapt the following variables to fit your environment before deploying (a sketch follows this list):
* At least `foo_image_repo` and `foo_download_url` as described before (i.e. in case of use of proxies to registries and binaries repositories, checksums and versions do not necessarily need to be changed).
NB: Regarding `foo_image_repo`, when using insecure registries/proxies, you will certainly have to append them to the `docker_insecure_registries` variable in group_vars/all/docker.yml
* Depending on the `container_manager`
* When `container_manager=docker`, `docker_foo_repo_base_url`, `docker_foo_repo_gpgkey`, `dockerproject_bar_repo_base_url` and `dockerproject_bar_repo_gpgkey` (where `foo` is the distribution and `bar` is the system package manager)
* When `container_manager=crio`, `crio_rhel_repo_base_url`
* When using Helm, `helm_stable_repo_url`
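A hedged sketch of such overrides, reusing the generic `foo` placeholder from the list above; the `foo_*` names are stand-ins for real per-component variables, and every hostname is a placeholder for your own mirror:
```yaml
# e.g. group_vars/all.yml -- placeholder names following the patterns above
foo_image_repo: "registry.internal:5000/mirrored/foo"      # per-image repository override
foo_download_url: "https://mirror.internal/binaries/foo"   # per-binary URL override
docker_insecure_registries:
  - "registry.internal:5000"
helm_stable_repo_url: "https://charts.internal/stable"     # only relevant when Helm is enabled
```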

View File

@@ -38,9 +38,9 @@ See more details in the [ansible guide](ansible.md).
Adding nodes
------------
You may want to add **worker** nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.
You may want to add worker, master or etcd nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.
- Add the new worker node to your inventory under kube-node (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Add the new worker node to your inventory in the appropriate group (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Run the ansible-playbook command, substituting `scale.yml` for `cluster.yml`:
ansible-playbook -i inventory/mycluster/hosts.ini scale.yml -b -v \
@@ -51,11 +51,26 @@ Remove nodes
You may want to remove **worker** nodes from your existing cluster. This can be done by re-running the `remove-node.yml` playbook. First, all nodes will be drained, then some Kubernetes services are stopped and some certificates deleted, and finally the kubectl command is executed to delete these nodes. This can be combined with the add-node function; it is generally helpful when doing something like autoscaling your clusters. Of course, if a node is not working, you can remove the node and install it again.
- Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Run the ansible-playbook command, substituting `remove-node.yml`:
Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
--private-key=~/.ssh/private_key
We support two ways to select the nodes:
- Use `--extra-vars "node=<nodename>,<nodename2>"` to select the node you want to delete.
```
ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
--private-key=~/.ssh/private_key
--private-key=~/.ssh/private_key \
--extra-vars "node=nodename,nodename2"
```
or
- Use `--limit nodename,nodename2` to select the node
```
ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
--private-key=~/.ssh/private_key \
--limit nodename,nodename2"
```
Connecting to Kubernetes
@@ -74,7 +89,7 @@ authentication. One could generate a kubeconfig based on one installed
kube-master hosts (needs improvement) or connect with a username and password.
By default, a user with admin rights is created, named `kube`.
The password can be viewed after deployment by looking at the file
`PATH_TO_KUBESPRAY/credentials/kube_user.creds`. This contains a randomly generated
`{{ credentials_dir }}/kube_user.creds` (`credentials_dir` is set to `{{ inventory_dir }}/credentials` by default). This contains a randomly generated
password. If you wish to set your own password, just precreate/modify this
file yourself.

View File

@@ -5,18 +5,38 @@ The following components require a highly available endpoints:
* etcd cluster,
* kube-apiserver service instances.
The latter relies on a 3rd side reverse proxies, like Nginx or HAProxy, to
The latter relies on a third-party reverse proxy, like Nginx or HAProxy, to
achieve the same goal.
Etcd
----
The `etcd_access_endpoint` fact provides an access pattern for clients. And the
`etcd_multiaccess` (defaults to `True`) group var controls that behavior.
It makes deployed components access the etcd cluster members
directly: `http://ip1:2379, http://ip2:2379,...`. This mode assumes the clients
do the load balancing and handle HA for connections.
In order to use external load balancing (L4/TCP or L7 w/ SSL Passthrough VIP), the following variables need to be overridden in group_vars:
* `etcd_access_addresses`
* `etcd_client_url`
* `etcd_cert_alt_names`
* `etcd_cert_alt_ips`
### Example of a VIP w/ FQDN
```yaml
etcd_access_addresses: https://etcd.example.com:2379
etcd_client_url: https://etcd.example.com:2379
etcd_cert_alt_names:
- "etcd.kube-system.svc.{{ dns_domain }}"
- "etcd.kube-system.svc"
- "etcd.kube-system"
- "etcd"
- "etcd.example.com" # This one needs to be added to the default etcd_cert_alt_names
```
### Example of a VIP w/o FQDN (IP only)
```yaml
etcd_access_addresses: https://2.3.7.9:2379
etcd_client_url: https://2.3.7.9:2379
etcd_cert_alt_ips:
- "2.3.7.9"
```
Kube-apiserver
--------------
@@ -83,7 +103,7 @@ loadbalancer_apiserver:
so you will need to use a different port for the VIP from the one the API is
listening on, or set the `kube_apiserver_bind_address` so that the API only
listens on a specific interface (to avoid conflict with haproxy binding the
port on the VIP adddress)
port on the VIP address)
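A minimal sketch of such a setup; the domain, address and port below are placeholders:
```yaml
## External LB example config
apiserver_loadbalancer_domain_name: "lb.example.com"
loadbalancer_apiserver:
  address: 1.2.3.4
  port: 8383   # anything other than kube_apiserver_port (6443) to avoid the bind conflict
```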
This domain name, or default "lb-apiserver.kubernetes.local", will be inserted
into the `/etc/hosts` file of all servers in the `k8s-cluster` group and wired

Binary file not shown (image, 6.8 KiB after change).

View File

@@ -15,7 +15,7 @@ By default the normal behavior looks like:
2. Kubernetes controller manager checks the statuses of Kubelets every
`--node-monitor-period`. The default value is **5s**.
3. In case the status is updated within `--node-monitor-grace-period` of time,
3. In case the status is updated within `--node-monitor-grace-period` of time,
Kubernetes controller manager considers healthy status of Kubelet. The
default value is **40s**.
@@ -33,7 +33,7 @@ Kubelet will try to make `nodeStatusUpdateRetry` post attempts. Currently
[kubelet.go](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/kubelet.go#L102).
Kubelet will try to update the status in
[tryUpdateNodeStatus](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/kubelet_node_status.go#L345)
[tryUpdateNodeStatus](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/kubelet_node_status.go#L312)
function. Kubelet uses `http.Client()` Golang method, but has no specified
timeout. Thus there may be some glitches when API Server is overloaded while
TCP connection is established.
@@ -69,7 +69,7 @@ minute which may require large etcd containers or even dedicated nodes for etcd.
> If we calculate the number of tries, the division will give 5, but in reality
> it will be from 3 to 5 with `nodeStatusUpdateRetry` attempts of each try. The
> total number of attemtps will vary from 15 to 25 due to latency of all
> total number of attempts will vary from 15 to 25 due to latency of all
> components.
## Medium Update and Average Reaction
@@ -92,7 +92,7 @@ etcd updates per minute.
Let's set `--node-status-update-frequency` to **1m**.
`--node-monitor-grace-period` will be set to **5m** and `--pod-eviction-timeout`
to **1m**. In this scenario, every kubelet will try to update the status every
minute. There will be 5 * 5 = 25 attempts before unhealty status. After 5m,
minute. There will be 5 * 5 = 25 attempts before unhealthy status. After 5m,
Kubernetes controller manager will set unhealthy status. This means that pods
will be evicted 1m after being marked unhealthy (6m in total).
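These are plain upstream Kubernetes flags, so one hedged way to carry such a profile through Kubespray is via the custom-flag variables listed in the vars reference (`kubelet_custom_flags`, `controller_mgr_custom_flags`). A sketch only, assuming those variables pass the flags through unchanged; the values mirror the 1m / 5m / 1m scenario above:
```yaml
kubelet_custom_flags:
  - "--node-status-update-frequency=1m"
controller_mgr_custom_flags:
  - "--node-monitor-grace-period=5m"
  - "--pod-eviction-timeout=1m"
```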

View File

@@ -25,13 +25,13 @@ There are related application specifc variables:
netchecker_port: 31081
agent_report_interval: 15
netcheck_namespace: default
agent_img: "quay.io/l23network/k8s-netchecker-agent:v1.0"
server_img: "quay.io/l23network/k8s-netchecker-server:v1.0"
agent_img: "mirantis/k8s-netchecker-agent:v1.2.2"
server_img: "mirantis/k8s-netchecker-server:v1.2.2"
```
Note that the application verifies DNS resolution for FQDNs comprising only the
combination of the ``netcheck_namespace.dns_domain`` vars, for example the
``netchecker-service.default.cluster.local``. If you want to deploy the application
``netchecker-service.default.svc.cluster.local``. If you want to deploy the application
to a non-default namespace, make sure as well to adjust the ``searchdomains`` var
so the resulting search domain records contain that namespace, like:
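A minimal sketch of such an adjustment, assuming the default ``cluster.local`` domain and a hypothetical ``netchecker`` namespace (both values are placeholders):
```yaml
netcheck_namespace: netchecker
searchdomains:
  - netchecker.svc.cluster.local
```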

View File

@@ -3,7 +3,7 @@ OpenStack
To deploy kubespray on [OpenStack](https://www.openstack.org/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'openstack'`.
After that make sure to source in your OpenStack credentials like you would do when using `nova-client` by using `source path/to/your/openstack-rc`.
After that make sure to source in your OpenStack credentials like you would do when using `nova-client` or `neutron-client` by using `source path/to/your/openstack-rc` or `. path/to/your/openstack-rc`.
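A minimal sketch of that first step, once the option is uncommented (file path illustrative):
```yaml
# e.g. group_vars/all.yml
cloud_provider: "openstack"
```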
The next step is to make sure the hostnames in your `inventory` file are identical to your instance names in OpenStack.
Otherwise [cinder](https://wiki.openstack.org/wiki/Cinder) won't work as expected.
@@ -12,35 +12,34 @@ Unless you are using calico you can now run the playbook.
**Additional step needed when using calico:**
Calico does not encapsulate all packages with the hosts ip addresses. Instead the packages will be routed with the PODs ip addresses directly.
Calico does not encapsulate all packets with the hosts' IP addresses. Instead the packets will be routed with the pods' IP addresses directly.
OpenStack will filter and drop all packets from IPs it does not know to prevent spoofing.
In order to make calico work on OpenStack you will need to tell OpenStack to allow calicos packages by allowing the network it uses.
In order to make Calico work on OpenStack you will need to tell OpenStack to allow Calico's packets by allowing the network it uses.
First you will need the ids of your OpenStack instances that will run kubernetes:
nova list --tenant Your-Tenant
openstack server list --project YOUR_PROJECT
+--------------------------------------+--------+----------------------------------+--------+-------------+
| ID | Name | Tenant ID | Status | Power State |
+--------------------------------------+--------+----------------------------------+--------+-------------+
| e1f48aad-df96-4bce-bf61-62ae12bf3f95 | k8s-1 | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running |
| 725cd548-6ea3-426b-baaa-e7306d3c8052 | k8s-2 | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running |
Then you can use the instance ids to find the connected [neutron](https://wiki.openstack.org/wiki/Neutron) ports:
Then you can use the instance ids to find the connected [neutron](https://wiki.openstack.org/wiki/Neutron) ports (though they are now queried through the `openstack` client):
neutron port-list -c id -c device_id
openstack port list -c id -c device_id --project YOUR_PROJECT
+--------------------------------------+--------------------------------------+
| id | device_id |
+--------------------------------------+--------------------------------------+
| 5662a4e0-e646-47f0-bf88-d80fbd2d99ef | e1f48aad-df96-4bce-bf61-62ae12bf3f95 |
| e5ae2045-a1e1-4e99-9aac-4353889449a7 | 725cd548-6ea3-426b-baaa-e7306d3c8052 |
Given the port ids on the left, you can set the `allowed_address_pairs` in neutron.
Note that you have to allow both of `kube_service_addresses` (default `10.233.0.0/18`)
and `kube_pods_subnet` (default `10.233.64.0/18`.)
Given the port ids on the left, you can set the two `allowed_address`(es) in OpenStack. Note that you have to allow both `kube_service_addresses` (default `10.233.0.0/18`) and `kube_pods_subnet` (default `10.233.64.0/18`.)
# allow kube_service_addresses and kube_pods_subnet network
neutron port-update 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed_address_pairs list=true type=dict ip_address=10.233.0.0/18 ip_address=10.233.64.0/18
neutron port-update e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed_address_pairs list=true type=dict ip_address=10.233.0.0/18 ip_address=10.233.64.0/18
openstack port set 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
openstack port set e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
Now you can finally run the playbook.

docs/proxy.md (new file, 16 lines)
View File

@@ -0,0 +1,16 @@
# Setting up Environment Proxy
If you set an http and https proxy, all nodes and the loadbalancer will be excluded from the proxy by generating the `no_proxy` variable in `roles/kubespray-defaults/defaults/main.yml`. If you have additional resources to exclude, add them to the `additional_no_proxy` variable. If you want to fully override your `no_proxy` setting, then fill in just `no_proxy`, and no node or loadbalancer addresses will be added to it.
## Set proxy for http and https
`http_proxy: "http://example.proxy.tld:port"`
`https_proxy: "http://example.proxy.tld:port"`
## Set default no_proxy (this will override default no_proxy generation)
`no_proxy: "node1,node1_ip,node2,node2_ip...additional_host"`
## Set additional addresses to default no_proxy (all cluster nodes and loadbalancer)
`additional_no_proxy: "aditional_host,"`

View File

@@ -9,7 +9,7 @@ Kubespray's roadmap
### Self deployment (pull-mode) [#320](https://github.com/kubespray/kubespray/issues/320)
- the playbook would install and configure docker/rkt and the etcd cluster
- the following data would be inserted into etcd: certs,tokens,users,inventory,group_vars.
- a "kubespray" container would be deployed (kubespray-cli, ansible-playbook, kpm)
- a "kubespray" container would be deployed (kubespray-cli, ansible-playbook)
- to be discussed, a way to provide the inventory
- **self deployment** of the node from inside a container [#321](https://github.com/kubespray/kubespray/issues/321)
@@ -24,7 +24,7 @@ Kubespray's roadmap
### Tests
- [ ] Run kubernetes e2e tests
- [ ] Test idempotency on on single OS but for all network plugins/container engines
- [ ] Test idempotency on single OS but for all network plugins/container engines
- [ ] single test on AWS per day
- [ ] test scale up cluster: +1 etcd, +1 master, +1 node
- [ ] Reorganize CI test vars into group var files

View File

@@ -35,7 +35,7 @@ ansible-playbook cluster.yml -i inventory/sample/hosts.ini -e kube_version=v1.4.
#### Graceful upgrade
Kubespray also supports cordon, drain and uncordoning of nodes when performing
Kubespray also supports cordon, drain and uncordoning of nodes when performing
a cluster upgrade. There is a separate playbook used for this purpose. It is
important to note that upgrade-cluster.yml can only be used for upgrading an
existing cluster. That means there must be at least 1 kube-master already
@@ -81,3 +81,61 @@ kubernetes-apps/rotate_tokens role, only pods in kube-system are destroyed and
recreated. All other invalidated service account tokens are cleaned up
automatically, but other pods are not deleted out of an abundance of caution
for impact to user deployed pods.
### Component-based upgrades
A deployer may want to upgrade specific components in order to minimize risk
or save time. This strategy is not covered by CI as of this writing, so it is
not guaranteed to work.
These commands are useful only for upgrading fully-deployed, healthy, existing
hosts. This will definitely not work for undeployed or partially deployed
hosts.
Upgrade docker:
```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=docker
```
Upgrade etcd:
```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=etcd
```
Upgrade vault:
```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=vault
```
Upgrade kubelet:
```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=node --skip-tags=k8s-gen-certs,k8s-gen-tokens
```
Upgrade Kubernetes master components:
```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=master
```
Upgrade network plugins:
```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=network
```
Upgrade all add-ons:
```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=apps
```
Upgrade just helm (assuming `helm_enabled` is true):
```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=helm
```

View File

@@ -126,9 +126,20 @@ node_labels:
label1_name: label1_value
label2_name: label2_value
```
* *podsecuritypolicy_enabled* - When set to `true`, enables the PodSecurityPolicy admission controller and defines two policies, `privileged` (applying to all resources in the `kube-system` namespace and to kubelet) and `restricted` (applying to all other namespaces).
Addons deployed in the kube-system namespace are handled.
* *kubernetes_audit* - When set to `true`, enables Auditing.
The auditing parameters can be tuned via the following variables (whose default values are shown below):
* `audit_log_path`: /var/log/audit/kube-apiserver-audit.log
* `audit_log_maxage`: 30
* `audit_log_maxbackups`: 1
* `audit_log_maxsize`: 100
* `audit_policy_file`: "{{ kube_config_dir }}/audit-policy/apiserver-audit-policy.yaml"
By default, the `audit_policy_file` contains [default rules](https://github.com/kubernetes-incubator/kubespray/blob/master/roles/kubernetes/master/templates/apiserver-audit-policy.yaml.j2) that can be overridden with the `audit_policy_custom_rules` variable.
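A hedged sketch combining the two toggles above with a few of the tunables; the values shown are simply the documented defaults:
```yaml
podsecuritypolicy_enabled: true
kubernetes_audit: true
audit_log_path: /var/log/audit/kube-apiserver-audit.log   # documented default
audit_log_maxage: 30                                      # documented default
audit_log_maxsize: 100                                    # documented default
```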
##### Custom flags for Kube Components
For all kube components, custom flags can be passed in. This allows for edge cases where users need changes to the default deployment that may not be applicable to all deployments. This can be done by providing a list of flags. Example:
For all kube components, custom flags can be passed in. This allows for edge cases where users need changes to the default deployment that may not be applicable to all deployments. This can be done by providing a list of flags. The `kubelet_node_custom_flags` apply kubelet settings only to nodes and not masters. Example:
```
kubelet_custom_flags:
- "--eviction-hard=memory.available<100Mi"
@@ -140,11 +151,12 @@ The possible vars are:
* *controller_mgr_custom_flags*
* *scheduler_custom_flags*
* *kubelet_custom_flags*
* *kubelet_node_custom_flags*
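For instance, a node-only variant of the earlier example might look like this; the flag value is illustrative and, per the note above, applies to nodes but not masters:
```yaml
kubelet_node_custom_flags:
  - "--eviction-hard=memory.available<100Mi"
```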
#### User accounts
By default, a user with admin rights is created, named `kube`.
The password can be viewed after deployment by looking at the file
`PATH_TO_KUBESPRAY/credentials/kube_user.creds`. This contains a randomly generated
`{{ credentials_dir }}/kube_user.creds` (`credentials_dir` is set to `{{ inventory_dir }}/credentials` by default). This contains a randomly generated
password. If you wish to set your own password, just precreate/modify this
file yourself or change `kube_api_pwd` var.

View File

@@ -9,7 +9,7 @@ Weave uses [**consensus**](https://www.weave.works/docs/net/latest/ipam/##consen
Weave encryption is supported for all communication
* To use Weave encryption, specify a strong password (if no password, no encrytion)
* To use Weave encryption, specify a strong password (if no password, no encryption)
```
# In file ./inventory/sample/group_vars/k8s-cluster.yml

View File

@@ -8,8 +8,8 @@
version: "{{ item.version }}"
state: "{{ item.state }}"
with_items:
- { state: "present", name: "docker", version: "2.7.0" }
- { state: "present", name: "docker-compose", version: "1.18.0" }
- { state: "present", name: "docker", version: "3.4.1" }
- { state: "present", name: "docker-compose", version: "1.21.2" }
- name: CephFS Provisioner | Check Go version
shell: |
@@ -35,19 +35,19 @@
- name: CephFS Provisioner | Clone repo
git:
repo: https://github.com/kubernetes-incubator/external-storage.git
dest: "~/go/src/github.com/kubernetes-incubator"
version: 92295a30
clone: no
dest: "~/go/src/github.com/kubernetes-incubator/external-storage"
version: 06fddbe2
clone: yes
update: yes
- name: CephFS Provisioner | Build image
shell: |
cd ~/go/src/github.com/kubernetes-incubator/external-storage
REGISTRY=quay.io/kubespray/ VERSION=92295a30 make ceph/cephfs
REGISTRY=quay.io/kubespray/ VERSION=06fddbe2 make ceph/cephfs
- name: CephFS Provisioner | Push image
docker_image:
name: quay.io/kubespray/cephfs-provisioner:92295a30
name: quay.io/kubespray/cephfs-provisioner:06fddbe2
push: yes
retries: 10

View File

@@ -1,137 +0,0 @@
# Valid bootstrap options (required): ubuntu, coreos, centos, none
bootstrap_os: none
#Directory where etcd data stored
etcd_data_dir: /var/lib/etcd
# Directory where the binaries will be installed
bin_dir: /usr/local/bin
## The access_ip variable is used to define how other nodes should access
## the node. This is used in flannel to allow other flannel nodes to see
## this node for example. The access_ip is really useful AWS and Google
## environments where the nodes are accessed remotely by the "public" ip,
## but don't know about that address themselves.
#access_ip: 1.1.1.1
### LOADBALANCING AND ACCESS MODES
## Enable multiaccess to configure etcd clients to access all of the etcd members directly
## as the "http://hostX:port, http://hostY:port, ..." and ignore the proxy loadbalancers.
## This may be the case if clients support and loadbalance multiple etcd servers natively.
#etcd_multiaccess: true
### ETCD: disable peer client cert authentication.
# This affects ETCD_PEER_CLIENT_CERT_AUTH variable
#etcd_peer_client_auth: true
## External LB example config
## apiserver_loadbalancer_domain_name: "elb.some.domain"
#loadbalancer_apiserver:
# address: 1.2.3.4
# port: 1234
## Internal loadbalancers for apiservers
#loadbalancer_apiserver_localhost: true
## Local loadbalancer should use this port instead, if defined.
## Defaults to kube_apiserver_port (6443)
#nginx_kube_apiserver_port: 8443
### OTHER OPTIONAL VARIABLES
## For some things, kubelet needs to load kernel modules. For example, dynamic kernel services are needed
## for mounting persistent volumes into containers. These may not be loaded by preinstall kubernetes
## processes. For example, ceph and rbd backed volumes. Set to true to allow kubelet to load kernel
## modules.
#kubelet_load_modules: false
## Internal network total size. This is the prefix of the
## entire network. Must be unused in your environment.
#kube_network_prefix: 18
## With calico it is possible to distributed routes with border routers of the datacenter.
## Warning : enabling router peering will disable calico's default behavior ('node mesh').
## The subnets of each nodes will be distributed by the datacenter router
#peer_with_router: false
## Upstream dns servers used by dnsmasq
#upstream_dns_servers:
# - 8.8.8.8
# - 8.8.4.4
## There are some changes specific to the cloud providers
## for instance we need to encapsulate packets with some network plugins
## If set the possible values are either 'gce', 'aws', 'azure', 'openstack', 'vsphere', or 'external'
## When openstack is used make sure to source in the openstack credentials
## like you would do when using nova-client before starting the playbook.
#cloud_provider:
## When azure is used, you need to also set the following variables.
## see docs/azure.md for details on how to get these values
#azure_tenant_id:
#azure_subscription_id:
#azure_aad_client_id:
#azure_aad_client_secret:
#azure_resource_group:
#azure_location:
#azure_subnet_name:
#azure_security_group_name:
#azure_vnet_name:
#azure_vnet_resource_group:
#azure_route_table_name:
## When OpenStack is used, Cinder version can be explicitly specified if autodetection fails (Fixed in 1.9: https://github.com/kubernetes/kubernetes/issues/50461)
#openstack_blockstorage_version: "v1/v2/auto (default)"
## When OpenStack is used, if LBaaSv2 is available you can enable it with the following 2 variables.
#openstack_lbaas_enabled: True
#openstack_lbaas_subnet_id: "Neutron subnet ID (not network ID) to create LBaaS VIP"
## To enable automatic floating ip provisioning, specify a subnet.
#openstack_lbaas_floating_network_id: "Neutron network ID (not subnet ID) to get floating IP from, disabled by default"
## Override default LBaaS behavior
#openstack_lbaas_use_octavia: False
#openstack_lbaas_method: "ROUND_ROBIN"
#openstack_lbaas_provider: "haproxy"
#openstack_lbaas_create_monitor: "yes"
#openstack_lbaas_monitor_delay: "1m"
#openstack_lbaas_monitor_timeout: "30s"
#openstack_lbaas_monitor_max_retries: "3"
## Uncomment to enable experimental kubeadm deployment mode
#kubeadm_enabled: false
## Set these proxy values in order to update package manager and docker daemon to use proxies
#http_proxy: ""
#https_proxy: ""
## Refer to roles/kubespray-defaults/defaults/main.yml before modifying no_proxy
#no_proxy: ""
## Uncomment this if you want to force overlay/overlay2 as docker storage driver
## Please note that overlay2 is only supported on newer kernels
#docker_storage_options: -s overlay2
# Uncomment this if you have more than 3 nameservers, then we'll only use the first 3.
#docker_dns_servers_strict: false
## Default packages to install within the cluster, f.e:
#kpm_packages:
# - name: kube-system/grafana
## Certificate Management
## This setting determines whether certs are generated via scripts or whether a
## cluster of Hashicorp's Vault is started to issue certificates (using etcd
## as a backend). Options are "script" or "vault"
#cert_management: script
# Set to true to allow pre-checks to fail and continue deployment
#ignore_assert_errors: false
## Etcd auto compaction retention for mvcc key value store in hour
#etcd_compaction_retention: 0
## Set level of detail for etcd exported metrics, specify 'extensive' to include histogram metrics.
#etcd_metrics: basic
## Etcd is restricted by default to 512M on systems under 4GB RAM, 512MB is not enough for much more than testing.
## Set this if your etcd nodes have less than 4GB but you want more RAM for etcd. Set to 0 for unrestricted RAM.
#etcd_memory_limit: "512M"
# The read-only port for the Kubelet to serve on with no authentication/authorization. Uncomment to enable.
#kube_read_only_port: 10255

View File

@@ -0,0 +1,85 @@
## Valid bootstrap options (required): ubuntu, coreos, centos, none
## If the OS is not listed here, it means it doesn't require extra/bootstrap steps.
## For example, python is not available on 'coreos' so it must be installed before
## anything else. In contrast, Debian already has all its dependencies fulfilled, so bootstrap_os should be set to `none`.
bootstrap_os: none
## Directory where etcd data is stored
etcd_data_dir: /var/lib/etcd
## Directory where the binaries will be installed
bin_dir: /usr/local/bin
## The access_ip variable is used to define how other nodes should access
## the node. This is used in flannel to allow other flannel nodes to see
## this node, for example. The access_ip is really useful in AWS and Google
## environments where the nodes are accessed remotely by the "public" ip,
## but do not know about that address themselves.
#access_ip: 1.1.1.1
## External LB example config
## apiserver_loadbalancer_domain_name: "elb.some.domain"
#loadbalancer_apiserver:
# address: 1.2.3.4
# port: 1234
## Internal loadbalancers for apiservers
#loadbalancer_apiserver_localhost: true
## Local loadbalancer should use this port instead, if defined.
## Defaults to kube_apiserver_port (6443)
#nginx_kube_apiserver_port: 8443
### OTHER OPTIONAL VARIABLES
## For some things, kubelet needs to load kernel modules. For example, dynamic kernel services are needed
## for mounting persistent volumes into containers. These may not be loaded by preinstall kubernetes
## processes. For example, ceph and rbd backed volumes. Set to true to allow kubelet to load kernel
## modules.
#kubelet_load_modules: false
## Upstream dns servers used by dnsmasq
#upstream_dns_servers:
# - 8.8.8.8
# - 8.8.4.4
## There are some changes specific to the cloud providers
## for instance we need to encapsulate packets with some network plugins
## If set the possible values are either 'gce', 'aws', 'azure', 'openstack', 'vsphere', 'oci', or 'external'
## When openstack is used make sure to source in the openstack credentials
## like you would do when using nova-client before starting the playbook.
#cloud_provider:
## Uncomment to enable experimental kubeadm deployment mode
#kubeadm_enabled: false
## Set these proxy values in order to update package manager and docker daemon to use proxies
#http_proxy: ""
#https_proxy: ""
## Refer to roles/kubespray-defaults/defaults/main.yml before modifying no_proxy
#no_proxy: ""
## Some problems may occur when downloading files over an https proxy due to an ansible bug
## https://github.com/ansible/ansible/issues/32750. Set this variable to False to disable
## SSL validation of the get_url module. Note that kubespray will still perform checksum validation.
#download_validate_certs: False
## If you need to exclude other resources from the proxy in addition to all cluster nodes, add those resources here.
#additional_no_proxy: ""
## Certificate Management
## This setting determines whether certs are generated via scripts or whether a
## cluster of Hashicorp's Vault is started to issue certificates (using etcd
## as a backend). Options are "script" or "vault"
#cert_management: script
## Set to true to allow pre-checks to fail and continue deployment
#ignore_assert_errors: false
## The read-only port for the Kubelet to serve on with no authentication/authorization. Uncomment to enable.
#kube_read_only_port: 10255
## Set to true to download and cache container images
#download_container: true
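For reference, a minimal sketch of how this new file might be filled in for a proxied OpenStack environment; every value below is a placeholder, not a project default:
```
bootstrap_os: ubuntu
cloud_provider: openstack          # remember to source the OpenStack credentials first
http_proxy: "http://proxy.example.com:3128"
https_proxy: "http://proxy.example.com:3128"
additional_no_proxy: "internal.example.com"
```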

View File

@@ -0,0 +1,14 @@
## When Azure is used, you also need to set the following variables.
## See docs/azure.md for details on how to get these values
#azure_tenant_id:
#azure_subscription_id:
#azure_aad_client_id:
#azure_aad_client_secret:
#azure_resource_group:
#azure_location:
#azure_subnet_name:
#azure_security_group_name:
#azure_vnet_name:
#azure_vnet_resource_group:
#azure_route_table_name:

View File

@@ -0,0 +1,2 @@
## Whether CoreOS should auto-upgrade; default is true
#coreos_auto_upgrade: true

View File

@@ -0,0 +1,61 @@
## Uncomment this if you want to force overlay/overlay2 as docker storage driver
## Please note that overlay2 is only supported on newer kernels
#docker_storage_options: -s overlay2
## Enable docker_container_storage_setup; it will configure the devicemapper driver on CentOS 7 or RedHat 7.
docker_container_storage_setup: false
## A disk path must be defined for docker_container_storage_setup_devs,
## otherwise docker-storage-setup will be executed incorrectly.
#docker_container_storage_setup_devs: /dev/vdb
## Uncomment this if you have more than 3 nameservers, then we'll only use the first 3.
docker_dns_servers_strict: false
# Path used to store Docker data
docker_daemon_graph: "/var/lib/docker"
## Used to set docker daemon iptables options to true
docker_iptables_enabled: "false"
# Docker log options
# Rotate container stderr/stdout logs at 50m and keep last 5
docker_log_opts: "--log-opt max-size=50m --log-opt max-file=5"
# define docker bin_dir
docker_bin_dir: "/usr/bin"
## An obvious use case is allowing insecure-registry access to self-hosted registries.
## Entries can be an IP address or a domain name,
## for example 172.19.16.11 or mirror.registry.io
#docker_insecure_registries:
# - mirror.registry.io
# - 172.19.16.11
## Add other registries, for example a China registry mirror.
#docker_registry_mirrors:
# - https://registry.docker-cn.com
# - https://mirror.aliyuncs.com
## If non-empty, this will override the default system MountFlags value.
## This option takes a mount propagation flag: shared, slave
## or private, which controls whether mounts in the file system
## namespace set up for docker will receive or propagate mounts
## and unmounts. Leave empty for system default
#docker_mount_flags:
## A string of extra options to pass to the docker daemon.
## This string should be exactly as you wish it to appear.
docker_options: >-
{%- if docker_insecure_registries is defined -%}
{{ docker_insecure_registries | map('regex_replace', '^(.*)$', '--insecure-registry=\1' ) | list | join(' ') }}
{%- endif %}
{% if docker_registry_mirrors is defined -%}
{{ docker_registry_mirrors | map('regex_replace', '^(.*)$', '--registry-mirror=\1' ) | list | join(' ') }}
{%- endif %}
--graph={{ docker_daemon_graph }} {{ docker_log_opts }}
{%- if ansible_architecture == "aarch64" and ansible_os_family == "RedHat" %}
--add-runtime docker-runc=/usr/libexec/docker/docker-runc-current
--default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd
--userland-proxy-path=/usr/libexec/docker/docker-proxy-current --signature-verification=false
{%- endif -%}
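As a worked illustration of the docker_options template above, uncommenting the two registry lists would render roughly the following daemon flags (the addresses are the same sample values used in the comments):
```
docker_insecure_registries:
  - mirror.registry.io
  - 172.19.16.11
docker_registry_mirrors:
  - https://registry.docker-cn.com
# Rendered docker_options (approximately):
#   --insecure-registry=mirror.registry.io --insecure-registry=172.19.16.11
#   --registry-mirror=https://registry.docker-cn.com
#   --graph=/var/lib/docker --log-opt max-size=50m --log-opt max-file=5
```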

View File

@@ -0,0 +1,15 @@
## When Oracle Cloud Infrastructure is used, set these variables
#oci_private_key:
#oci_region_id:
#oci_tenancy_id:
#oci_user_id:
#oci_user_fingerprint:
#oci_compartment_id:
#oci_vnc_id:
#oci_subnet1_id:
#oci_subnet2_id:
## Override these default behaviors if you wish
#oci_security_list_management: All
# If oci_use_instance_principals is true, you do not need to set the region, tenancy, user, key, passphrase, or fingerprint
#oci_use_instance_principals: false
#oci_cloud_controller_version: 0.5.0

View File

@@ -0,0 +1,16 @@
## When OpenStack is used, Cinder version can be explicitly specified if autodetection fails (Fixed in 1.9: https://github.com/kubernetes/kubernetes/issues/50461)
#openstack_blockstorage_version: "v1/v2/auto (default)"
#openstack_blockstorage_ignore_volume_az: yes
## When OpenStack is used, if LBaaSv2 is available you can enable it with the following 2 variables.
#openstack_lbaas_enabled: True
#openstack_lbaas_subnet_id: "Neutron subnet ID (not network ID) to create LBaaS VIP"
## To enable automatic floating ip provisioning, specify a subnet.
#openstack_lbaas_floating_network_id: "Neutron network ID (not subnet ID) to get floating IP from, disabled by default"
## Override default LBaaS behavior
#openstack_lbaas_use_octavia: False
#openstack_lbaas_method: "ROUND_ROBIN"
#openstack_lbaas_provider: "haproxy"
#openstack_lbaas_create_monitor: "yes"
#openstack_lbaas_monitor_delay: "1m"
#openstack_lbaas_monitor_timeout: "30s"
#openstack_lbaas_monitor_max_retries: "3"
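A hypothetical sketch of enabling LBaaSv2 with Octavia; the subnet UUID is a placeholder you would look up with the OpenStack CLI:
```
openstack_lbaas_enabled: True
openstack_lbaas_subnet_id: "2a3b4c5d-0000-0000-0000-000000000000"   # Neutron subnet ID, not network ID
openstack_lbaas_use_octavia: True
openstack_lbaas_method: "ROUND_ROBIN"
```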

View File

@@ -0,0 +1,18 @@
## Etcd auto compaction retention for mvcc key value store, in hours
#etcd_compaction_retention: 0
## Set level of detail for etcd exported metrics, specify 'extensive' to include histogram metrics.
#etcd_metrics: basic
## Etcd is restricted by default to 512M on systems under 4GB RAM; 512MB is not enough for much more than testing.
## Set this if your etcd nodes have less than 4GB but you want more RAM for etcd. Set to 0 for unrestricted RAM.
#etcd_memory_limit: "512M"
## Etcd has a default of 2G for its space quota. If you put a value in etcd_memory_limit which is less than
## etcd_quota_backend_bytes, you may encounter out of memory terminations of the etcd cluster. Please check
## etcd documentation for more information.
#etcd_quota_backend_bytes: "2G"
### ETCD: disable peer client cert authentication.
# This affects ETCD_PEER_CLIENT_CERT_AUTH variable
#etcd_peer_client_auth: true
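Following the warning above, keep the memory limit consistent with the backend quota; a sketch for etcd nodes with RAM to spare (values taken from the comments, not new defaults):
```
etcd_memory_limit: "0"            # 0 = unrestricted, per the note above
etcd_quota_backend_bytes: "2G"    # default space quota
etcd_metrics: extensive           # include histogram metrics
```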

View File

@@ -0,0 +1,51 @@
# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
dashboard_enabled: true
# Helm deployment
helm_enabled: false
# Registry deployment
registry_enabled: false
# registry_namespace: kube-system
# registry_storage_class: ""
# registry_disk_size: "10Gi"
# Local volume provisioner deployment
local_volume_provisioner_enabled: false
# local_volume_provisioner_namespace: kube-system
# local_volume_provisioner_base_dir: /mnt/disks
# local_volume_provisioner_mount_dir: /mnt/disks
# local_volume_provisioner_storage_class: local-storage
# CephFS provisioner deployment
cephfs_provisioner_enabled: false
# cephfs_provisioner_namespace: "cephfs-provisioner"
# cephfs_provisioner_cluster: ceph
# cephfs_provisioner_monitors: "172.24.0.1:6789,172.24.0.2:6789,172.24.0.3:6789"
# cephfs_provisioner_admin_id: admin
# cephfs_provisioner_secret: secret
# cephfs_provisioner_storage_class: cephfs
# cephfs_provisioner_reclaim_policy: Delete
# cephfs_provisioner_claim_root: /volumes
# cephfs_provisioner_deterministic_names: true
# Nginx ingress controller deployment
ingress_nginx_enabled: false
# ingress_nginx_host_network: false
# ingress_nginx_nodeselector:
# node-role.kubernetes.io/master: "true"
# ingress_nginx_namespace: "ingress-nginx"
# ingress_nginx_insecure_port: 80
# ingress_nginx_secure_port: 443
# ingress_nginx_configmap:
# map-hash-bucket-size: "128"
# ssl-protocols: "SSLv2"
# ingress_nginx_configmap_tcp_services:
# 9000: "default/example-go:8080"
# ingress_nginx_configmap_udp_services:
# 53: "kube-system/kube-dns:53"
# Cert manager deployment
cert_manager_enabled: false
# cert_manager_namespace: "cert-manager"
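A usage sketch that enables the nginx ingress controller on master nodes and exposes one TCP service; all keys and values mirror the commented examples above:
```
ingress_nginx_enabled: true
ingress_nginx_host_network: true
ingress_nginx_nodeselector:
  node-role.kubernetes.io/master: "true"
ingress_nginx_configmap_tcp_services:
  9000: "default/example-go:8080"
```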

View File

@@ -19,7 +19,7 @@ kube_users_dir: "{{ kube_config_dir }}/users"
kube_api_anonymous_auth: true
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.9.5
kube_version: v1.11.3
# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
@@ -34,9 +34,12 @@ kube_cert_group: kube-cert
# Cluster Loglevel configuration
kube_log_level: 2
# Directory where credentials will be stored
credentials_dir: "{{ inventory_dir }}/credentials"
# Users to create for basic auth in Kubernetes API via HTTP
# Optionally add groups for user
kube_api_pwd: "{{ lookup('password', inventory_dir + '/credentials/kube_user.creds length=15 chars=ascii_letters,digits') }}"
kube_api_pwd: "{{ lookup('password', credentials_dir + '/kube_user.creds length=15 chars=ascii_letters,digits') }}"
kube_users:
kube:
pass: "{{kube_api_pwd}}"
@@ -56,38 +59,17 @@ kube_users:
# kube_oidc_url: https:// ...
# kube_oidc_client_id: kubernetes
## Optional settings for OIDC
# kube_oidc_ca_file: {{ kube_cert_dir }}/ca.pem
# kube_oidc_ca_file: "{{ kube_cert_dir }}/ca.pem"
# kube_oidc_username_claim: sub
# kube_oidc_username_prefix: oidc:
# kube_oidc_groups_claim: groups
# kube_oidc_groups_prefix: oidc:
# Choose network plugin (cilium, calico, contiv, weave or flannel)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: calico
# weave's network password for encryption
# if null then no network encryption
# you can use --extra-vars to pass the password in command line
weave_password: EnterPasswordHere
# Weave uses consensus mode by default
# Enabling seed mode allow to dynamically add or remove hosts
# https://www.weave.works/docs/net/latest/ipam/
weave_mode_seed: false
# This two variable are automatically changed by the weave's role, do not manually change these values
# To reset values :
# weave_seed: uninitialized
# weave_peers: uninitialized
weave_seed: uninitialized
weave_peers: uninitialized
# Set the MTU of Weave (default 1376, Jumbo Frames: 8916)
weave_mtu: 1376
# Enable kubernetes network policies
enable_network_policy: false
# Kubernetes internal network for services, unused block of space.
kube_service_addresses: 10.233.0.0/18
@@ -112,6 +94,11 @@ kube_apiserver_insecure_port: 8080 # (http)
# Can be ipvs, iptables
kube_proxy_mode: iptables
# Kube-proxy nodeport address:
# CIDR to bind nodeport services to. Sets the --nodeport-addresses flag in the kube-proxy manifest.
kube_proxy_nodeport_addresses: false
# kube_proxy_nodeport_addresses_cidr: 10.0.1.0/24
## Encrypting Secret Data at Rest (experimental)
kube_encrypt_secret_data: false
@@ -135,18 +122,11 @@ skydns_server_secondary: "{{ kube_service_addresses|ipaddr('net')|ipaddr(4)|ipad
dnsmasq_dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"
# Path used to store Docker data
docker_daemon_graph: "/var/lib/docker"
## Container runtime
## docker for docker and crio for cri-o.
container_manager: docker
## A string of extra options to pass to the docker daemon.
## This string should be exactly as you wish it to appear.
## An obvious use case is allowing insecure-registry access
## to self hosted registries like so:
docker_options: "--insecure-registry={{ kube_service_addresses }} --graph={{ docker_daemon_graph }} {{ docker_log_opts }}"
docker_bin_dir: "/usr/bin"
# Settings for containerized control plane (etcd/kubelet/secrets)
## Settings for containerized control plane (etcd/kubelet/secrets)
etcd_deployment_type: docker
kubelet_deployment_type: host
vault_deployment_type: docker
@@ -155,64 +135,19 @@ helm_deployment_type: host
# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent
# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
dashboard_enabled: true
# audit log for kubernetes
kubernetes_audit: false
# Monitoring apps for k8s
efk_enabled: false
# dynamic kubelet configuration
dynamic_kubelet_configuration: false
# Helm deployment
helm_enabled: false
# define kubelet config dir for dynamic kubelet
#kubelet_config_dir:
default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
dynamic_kubelet_configuration_dir: "{{ kubelet_config_dir | default(default_kubelet_config_dir) }}"
# Istio deployment
istio_enabled: false
# Registry deployment
registry_enabled: false
# registry_namespace: "{{ system_namespace }}"
# registry_storage_class: ""
# registry_disk_size: "10Gi"
# Local volume provisioner deployment
local_volume_provisioner_enabled: false
# local_volume_provisioner_namespace: "{{ system_namespace }}"
# local_volume_provisioner_base_dir: /mnt/disks
# local_volume_provisioner_mount_dir: /mnt/disks
# local_volume_provisioner_storage_class: local-storage
# CephFS provisioner deployment
cephfs_provisioner_enabled: false
# cephfs_provisioner_namespace: "{{ system_namespace }}"
# cephfs_provisioner_cluster: ceph
# cephfs_provisioner_monitors:
# - 172.24.0.1:6789
# - 172.24.0.2:6789
# - 172.24.0.3:6789
# cephfs_provisioner_admin_id: admin
# cephfs_provisioner_secret: secret
# cephfs_provisioner_storage_class: cephfs
# Nginx ingress controller deployment
ingress_nginx_enabled: false
# ingress_nginx_host_network: false
# ingress_nginx_namespace: "ingress-nginx"
# ingress_nginx_insecure_port: 80
# ingress_nginx_secure_port: 443
# ingress_nginx_configmap:
# map-hash-bucket-size: "128"
# ssl-protocols: "SSLv2"
# ingress_nginx_configmap_tcp_services:
# 9000: "default/example-go:8080"
# ingress_nginx_configmap_udp_services:
# 53: "kube-system/kube-dns:53"
# Cert manager deployment
cert_manager_enabled: false
# cert_manager_namespace: "cert-manager"
# Add Persistent Volumes Storage Class for corresponding cloud provider ( OpenStack is only supported now )
persistent_volumes_enabled: false
# pod security policy (RBAC must be enabled either by having 'RBAC' in authorization_modes or kubeadm enabled)
podsecuritypolicy_enabled: false
# Make a copy of kubeconfig on the host that runs Ansible in {{ inventory_dir }}/artifacts
# kubeconfig_localhost: false
@@ -239,3 +174,18 @@ persistent_volumes_enabled: false
## See https://github.com/kubernetes-incubator/kubespray/issues/2141
## Set this variable to true to get rid of this issue
volume_cross_zone_attachment: false
# Add a Persistent Volumes Storage Class for the corresponding cloud provider (only OpenStack is supported for now)
persistent_volumes_enabled: false
## Container Engine Acceleration
## Enable the container acceleration feature, for example to use GPU acceleration in containers
# nvidia_accelerator_enabled: true
## Nvidia GPU driver install. The install will be done by an (init) pod running as a daemonset.
## Important: if you use Ubuntu, you should set 'docker_storage_options: -s overlay2' in all.yml.
## Array with nvidia_gpu_nodes; leave empty or commented if you don't want to install drivers.
## Labels and taints won't be set to nodes if they are not in the array.
# nvidia_gpu_nodes:
# - kube-gpu-001
# nvidia_driver_version: "384.111"
## flavor can be tesla or gtx
# nvidia_gpu_flavor: gtx
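A hypothetical sketch of enabling the GPU support described above; the node name and driver version are the placeholder values from the comments:
```
nvidia_accelerator_enabled: true
nvidia_gpu_nodes:
  - kube-gpu-001
nvidia_driver_version: "384.111"
nvidia_gpu_flavor: gtx
# On Ubuntu, also set in all.yml:  docker_storage_options: -s overlay2
```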

View File

@@ -0,0 +1,20 @@
# see roles/network_plugin/calico/defaults/main.yml
## With calico it is possible to distribute routes with border routers of the datacenter.
## Warning: enabling router peering will disable calico's default behavior ('node mesh').
## The subnets of each node will be distributed by the datacenter router
#peer_with_router: false
# Enables Internet connectivity from containers
# nat_outgoing: true
# add default ippool name
# calico_pool_name: "default-pool"
# Global as_num (/calico/bgp/v1/global/as_num)
# global_as_num: "64512"
# You can set MTU value here. If left undefined or empty, it will
# not be specified in calico CNI config, so Calico will use built-in
# defaults. The value should be a number, not a string.
# calico_mtu: 1500
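For instance, a sketch that keeps the default node mesh but pins the pool name and MTU; the values are illustrative, not required:
```
nat_outgoing: true
calico_pool_name: "default-pool"
calico_mtu: 1500
```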

Some files were not shown because too many files have changed in this diff.