Compare commits

...

338 Commits

Author SHA1 Message Date
Chad Swenson
f7d52564aa Merge pull request #2084 from riverzhang/devicemapper
Fix: cannot use the devicemapper driver
2018-01-31 20:52:22 -06:00
Spencer Smith
f7e8d1149a Merge pull request #2229 from whereismyjetpack/etcd-quorum-read
--etcd-quorum-read is deprecated in kube >= 1.9
2018-01-31 17:10:10 -05:00
Spencer Smith
bd091caaf9 Merge pull request #2200 from riverzhang/hyperkube
Upgrade to Kubernetes v1.9.2
2018-01-31 16:08:22 -05:00
Spencer Smith
b455a1bf76 Merge pull request #2212 from mattymo/missing_defaults
Add missing group var default values to kubespray-defaults
2018-01-31 16:07:53 -05:00
Spencer Smith
c0a3bcf9b3 Merge pull request #2221 from Xuxe/patch-vcp-v1.9.2
Updated vSphere cloud provider config for Kubernetes >= v1.9.2 and added resource pool deployment variable
2018-01-31 16:06:07 -05:00
Spencer Smith
5eedb5562f Merge pull request #2228 from mattymo/vault_etcd_secure
Vault should use cert auth for etcd
2018-01-31 16:05:28 -05:00
Dann Bohn
dc6c703741 --etcd-quorum-read is deprecated in kube >= 1.9 2018-01-31 15:49:52 -05:00
Matthew Mosesohn
16629d0b8e Vault should use cert auth for etcd 2018-01-31 20:37:14 +03:00
Julian Hübenthal
7f79210ed1 reworked vsphere-cloud-config template 2018-01-31 16:51:23 +01:00
Aivars Sterns
c1267004ef Merge pull request #2130 from ArchiFleKs/simplify_os_provider
Simplify and update OpenStack cloud provider
2018-01-31 12:02:02 +02:00
Julian Hübenthal
9cdd2214f9 render vsphere_resource_pool only if defined 2018-01-31 09:56:43 +01:00
Julian Hübenthal
fc29764911 fixed broken variables table 2018-01-31 09:27:45 +01:00
Julian Hübenthal
989e9174c2 Added vSphere cloud provider config update for Kubernetes >= 1.9.2 2018-01-31 09:15:46 +01:00
rong.zhang
3993e12335 Fix can not be used devicemapper driver
Fix can not be used devicemapper driver
2018-01-31 15:51:11 +08:00
Brad Beam
ac4d782937 Merge pull request #2074 from fangzhen/fix-domains-split
Make splitting system_search_domains more robust
2018-01-30 21:01:19 -06:00
rong.zhang
32d18ca992 remove trailing space 2018-01-31 09:50:41 +08:00
Matthew Mosesohn
2df4b6c5d2 Rename default_resolver to cloud_resolver (#2209)
Cloud resolvers are mandatory for hosts on GCE and OpenStack
clouds. The 8.8.8.8 alternative resolver was dropped because
there is already a default nameserver. The new var name
reflects the purpose better.

Also restart apiserver when modifying dns settings.
2018-01-31 00:26:07 +03:00
RongZhang
3846384d56 Bump kube-dns to 1.14.8 (#2204)
2018-01-30 19:23:37 +03:00
Dmitri Rubinstein
331f141f63 Fix DNS entries in etcd's openssl.conf by adding a newline. (#2208)
DNS entries generated from 'etcd_cert_alt_names' variable in etcd's
openssl.conf are not terminated by a newline.

This fixes issue #2207.
2018-01-30 16:26:58 +03:00
Matthew Mosesohn
62dd3d2a9d Add missing group var default values to kubespray-defaults 2018-01-30 16:04:00 +03:00
rong.zhang
e22c70e431 Upgrade to Kubernetes v1.9.2 2018-01-30 13:04:38 +08:00
Chad Swenson
f4fe9e3421 Merge pull request #2171 from ArchiFleKs/kubeproxy-lvs
Add lib/modules to kube-proxy to enable LVS
2018-01-29 22:58:02 -06:00
Brad Beam
da173615e4 Merge pull request #2048 from xizhibei/master
Fix: always only one container got synced after download
2018-01-29 16:01:11 -06:00
Matthew Mosesohn
dc6a17e092 Use include/import tasks (#2192)
import_tasks will consume far less memory, so it should be
used whenever it is compatible.
2018-01-29 14:37:48 +03:00
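Note: a minimal sketch of the import_tasks/include_tasks distinction the commit above relies on (illustrative only, not taken from the repository; file names are placeholders):

# Statically imported at parse time: lower memory use, evaluated before the play runs
- import_tasks: install.yml

# Dynamically included at run time: needed when the file name or the
# loop/conditional context is only known during execution
- include_tasks: "install-{{ ansible_os_family | lower }}.yml"
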
Antoine Legrand
f4180503c8 Merge pull request #2196 from Miouge1/network-size-large-deploy
Network size large deploy documentation
2018-01-26 15:26:03 +01:00
Miouge1
240d4193ae Update information about network sizes 2018-01-26 15:23:21 +01:00
Matthew Mosesohn
ac66e98ae9 Upgrade to Kubernetes v1.9.1 (#2152)
Raise drain timeout to 5m
2018-01-25 18:44:44 +03:00
Matthew Mosesohn
d2935ffed0 Optionally ignore the presence of extra calico pools (#2190) 2018-01-25 18:44:20 +03:00
Chad Swenson
c6e0fcea31 Merge pull request #1948 from sgmitchell/secured-etcd
Enable etcd secure client to prevent etcdctl access without cert and key
2018-01-25 09:35:51 -06:00
Chad Swenson
5d014d986b Merge pull request #1992 from manics/flannel-hairpin
Enable flannel hairpin mode
2018-01-24 21:20:03 -06:00
mirwan
714994cad8 iptables: flush nat table as well as filter table upon reset (#2174)
* iptables: flush nat table as well as filter table upon reset

* Indentation fix
2018-01-24 20:22:49 -06:00
Brad Beam
08fe61e058 Merge pull request #2071 from riverzhang/dashboard
Update dashboard version to v1.8.1
2018-01-24 20:10:05 -06:00
Brad Beam
0c8bed21ee Merge pull request #2019 from chadswen/disable-api-insecure-port
Support for disabling apiserver insecure port (the sequel)
2018-01-24 19:58:53 -06:00
Brad Beam
98eb845f8c Merge pull request #2173 from mirwan/hardcoded_dnsmasq-autoscaler_image
Dnsmasq autoscaler image should be a variable
2018-01-24 16:15:59 -06:00
Brad Beam
98300e3165 Merge pull request #2155 from brutus333/fix/pvc
Fix for Issue #2141
2018-01-24 16:15:33 -06:00
Matthew Mosesohn
bf1411060e Add optional manual dns_mode (#2178) 2018-01-23 14:28:42 +01:00
Virgil Chereches
a4d142368b Renamed variable from disable_volume_zone_conflict to volume_cross_zone_attachment and removed cloud provider condition; fix indentation 2018-01-23 13:14:00 +00:00
Brad Beam
eb80f9b606 Merge pull request #2154 from tdihp/proxy-conf-restart-docker
Restart docker when http-proxy.conf changed.
2018-01-22 08:39:05 -06:00
Stanislav Makar
ae47b617e3 Fix 'no such host' problem (#2148)
Fix 'no such host' problem reported by commands *kubectl logs* and *kubectl exec*
when cloud_provider is OpenStack

Closes: #2147
2018-01-22 16:08:24 +03:00
Bogdan Dobrelya
c116b8022e Update rpm spec and pbr setup configs (#2170)
* Update rpm spec and pbr setup configs

* Rename package to kubespray
* Do not break Fedora's FHS and install to /usr/share instead
* Remove the vendor tag
* Update source0 for better artifacts' names
* Fix missing files build errors
* Make version/release to auto match from git and fit PEP 440

Co-authored-by: Matthias Runge <mrunge@redhat.com>
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>

* Add package paths to roles search in ansible conf

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>

* Poke jinja2 requirements in rpm spec file

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2018-01-18 16:22:37 +01:00
Erwan Miran
5b98e15613 Merge branch 'hardcoded_dnsmasq-autoscaler_image' of github.com:mirwan/kubespray into hardcoded_dnsmasq-autoscaler_image 2018-01-18 16:04:35 +01:00
Erwan Miran
e5b4011aa4 move hardcoded dnsmasq autoscaler image to its own variable 2018-01-18 16:04:29 +01:00
Virgil Chereches
3125f93b3f Added disable_volume_zone_conflict variable 2018-01-18 10:55:23 +00:00
Spencer Smith
f19c8e8c1d Merge pull request #2132 from PhilippeChepy/flex-volumes
Add support for flex volumes plugins.
2018-01-17 15:00:45 -05:00
Dave Carley
752fba1691 Fix spelling mistakes in group_vars (#2166) 2018-01-17 18:42:27 +03:00
ArchiFleKs
637604d08f Add lib/modules to kube-proxy to enable LVS
kube-proxy is complaining of missing modules at startup. There is a plan
to also support an LVS implementation of kube-proxy in addition to
userspace and iptables
2018-01-17 16:35:53 +01:00
Erwan Miran
1a9989ade9 move hardcoded dnsmasq autoscaler image to its own variable 2018-01-16 09:11:59 +01:00
Virgil Chereches
8c45c88d15 Fix for Issue #2141 - added policy file 2018-01-12 07:15:35 +00:00
Virgil Chereches
c87bb2f239 Fix for Issue #2141 2018-01-12 07:07:02 +00:00
heping
32eeb9a0e0 Restart docker when http-proxy.conf changed. 2018-01-12 10:56:25 +08:00
rong.zhang
df21fc8643 Remove initContainer 2018-01-10 12:17:17 +08:00
Spencer Smith
ffbdf31ac4 Merge pull request #2135 from riverron/master
Updated with correct syntax to access default_tags variable.
2018-01-09 17:22:12 -05:00
Spencer Smith
ccd9cc3dce Merge pull request #2146 from abelgana/master
Manage deprecated kubelet option
2018-01-09 17:19:42 -05:00
Spencer Smith
81867402f6 Merge pull request #2145 from pslijkhuis/master
Add kubelet_custom_flags to kubelet.kubeadm.env.j2
2018-01-09 17:19:09 -05:00
Spencer Smith
4f5d61212b Merge pull request #2144 from neith00/weave-2.1.3
updated weave to 2.1.3
2018-01-09 17:18:26 -05:00
Spencer Smith
ef96123482 Merge pull request #2068 from chadswen/remove-container-retries
Retry kube container removal during upgrade
2018-01-09 15:03:50 -05:00
Spencer Smith
ee27ab0052 Merge pull request #2124 from riverzhang/patch-3
Remove blank lines
2018-01-09 14:58:49 -05:00
Spencer Smith
57f87ba083 Merge pull request #2142 from trilogy-group/hotfix/fluentd-template
fix fluentd template
2018-01-09 14:44:50 -05:00
abelgana
a9bb72c6fd require-kubeconfig is deprecated since k8s v1.8 2018-01-09 14:35:42 -05:00
abelgana
9506c2e597 require-kubeconfig is deprecated since K8s v1.8 2018-01-09 14:33:05 -05:00
Peter Slijkhuis
32884357ff Add kubelet_custom_flags to kubelet.kubeadm.env.j2 2018-01-09 14:04:36 +01:00
Bogdan Dobrelya
278ac08087 Fix HA docs API access endpoints explained (#2126)
* Fix HA docs API access endpoints explained

Follow-up commit 81347298a3
and fix the endpoint value provided in HA docs.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>

* Clarify internal LB with external LB use case

* Clarify how to use both internal and external, non-cluster aware and
  not managed with Kubespray, LB solutions.
* Clarify the requirements, like TLS/SSL termination, for such an external LB.
Unlike the 'cluster-aware' external LB config, endpoints' security must be
  managed by that non-cluster aware external LB.
* Note that masters always contact their local apiservers via https://bip:sp.
  It's highly unlikely to go down and it reduces latency that might be
  introduced when going host->lb->host. Only computes go that path.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>

* Add a note for supplementary_addresses_in_ssl_keys

Explain how to benefit from supplementary_addresses_in_ssl_keys

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2018-01-09 16:01:50 +03:00
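Note: as an illustration of how supplementary_addresses_in_ssl_keys might be used (the values below are placeholders, not taken from the docs), the variable lists extra addresses or names to embed in the apiserver certificate:

supplementary_addresses_in_ssl_keys:
  - 203.0.113.10          # e.g. an external VIP or load balancer address
  - kube-api.example.com  # e.g. a DNS name clients will use
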
neith00
88204642b7 updated weave to 2.1.3 2018-01-09 13:50:42 +01:00
Matthew Mosesohn
1401286910 Add support for cert alt names for etcd (#2139)
* Add support for cert alt names for etcd

* Update gen_certs_vault.yml
2018-01-09 14:37:34 +03:00
Lukasz Piatkowski
12eb242224 fix fluentd template 2018-01-08 13:40:47 +00:00
Ronald Rivera
8f36a02998 Merge branch 'master' of https://github.com/riverron/kubespray 2018-01-07 15:40:34 +00:00
Ronald Rivera
88f9e25f76 Updated with correct syntax to access default_tags variable. 2018-01-07 15:39:58 +00:00
Ron Rivera
dba1c13954 Updated with correct syntax to access default_tags variable. 2018-01-07 14:57:14 +00:00
Philippe Chepy
df9faa1743 Add support for flex volumes plugins. 2018-01-05 17:56:36 +01:00
ArchiFleKs
ce85bcaee7 Simplify and update OpenStack cloud provider
Simplify the number of variables necessary to "just" enable OpenStack
cloud provider. Also add the new options available in K8s 1.9.
2018-01-05 12:05:24 +01:00
rong.zhang
6ed2a60978 fix run dashboard error 2018-01-04 13:13:36 +08:00
Brad Beam
fd04c14260 Merge pull request #2127 from spiffxp/follow-cla-doc
Follow CLA doc to kubernetes/community
2018-01-03 19:19:34 -06:00
Aaron Crickenberger
10a5273f07 Follow CLA doc to kubernetes/community 2018-01-03 16:48:53 -08:00
Bogdan Dobrelya
bac3bf1a5f Fix auto-evaluated API access endpoint for bind IP (#2086)
Auto configure API access endpoint with a custom bind IP, if provided.
Fix HA docs' http URLs are https in fact, clarify the insecure vs secure
API access modes as well.

Closes: #issues/2051

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2018-01-03 17:40:21 +01:00
RongZhang
e3b684df21 Remove blank lines
2018-01-03 00:54:04 -06:00
Steve Mitchell
e45b30d033 Add etcd key and cert environment variables for use with client auth 2018-01-02 13:52:17 -05:00
Matthew Mosesohn
ad6fecefa8 Update Kubernetes to v1.9.0 (#2100)
Update checksum for kubeadm
Use v1.9.0 kubeadm params
Include hash of ca.crt for kubeadm join
Update tag for testing upgrades
Add workaround for testing upgrades
Remove scale CI scenarios because of slow inventory parsing
in ansible 2.4.x.

Change region for tests to us-central1 to
improve ansible performance
2017-12-25 08:57:45 +00:00
Jan Jungnickel
3fdb2ccf55 Revert back to using an empty var as default to exclude hostname (#2110) 2017-12-22 22:09:59 +00:00
Matthew Mosesohn
29f5b55d42 remove unwanted whitespace for kube_override_hostname (#2105) 2017-12-22 11:31:18 +00:00
rong.zhang
5aef52e8c0 fix dashboard certs secret 2017-12-22 11:17:05 +08:00
Brad Beam
336e0cbf70 Merge pull request #2102 from spiffxp/update-code-of-conduct
Update code-of-conduct.md
2017-12-20 20:00:47 -06:00
Aaron Crickenberger
3cd06b0eb4 Update code-of-conduct.md
Refer to kubernetes/community as authoritative source for code of conduct
2017-12-20 14:12:38 -05:00
Matthew Mosesohn
6bb46e3ecb Fix param names in preparation for Kubernetes v1.9.0 (#2098)
This does not update v1.9.0, but fixes two incompatibilities
when trying to deploy v1.9.0.
2017-12-20 10:48:09 +00:00
Matthew Mosesohn
127bc01857 Do not override kubelet hostname if cloud_provider is used (#2095)
Starting with Kubernetes v1.8.4, kubelet ignores the AWS cloud
provider string and uses the override hostname, which fails
Node admission checks.

Fixes #2094
2017-12-19 20:18:20 +00:00
Evan Zeimet
a6975c1850 Rename runtime docker_version (#2082)
Renaming runtime docker_version to prevent setting that
value on the command line from breaking the play run.

This fixes #2081
2017-12-19 14:47:54 +00:00
Stanislav Makar
b2cb0725ac Default OpenStack Cinder Storage Class (#2083)
Add possibility to create default OpenStack Cinder Storage Class

Closes: #1609
2017-12-19 14:47:00 +00:00
rong.zhang
b974b144a8 Add RBAC binding for Dashboard UI 2017-12-18 23:07:19 +08:00
Matthew Mosesohn
bfb25fa47b Change vault cert ttl to 8y (#2013) 2017-12-15 13:34:00 +00:00
Matthew Mosesohn
b135bcb9d9 Split download container task for delegate and non-delegate modes (#2077)
Ansible cannot seem to handle omitting delegate_to since v2.4.0.0.

Possibly related: https://github.com/ansible/ansible/issues/30760
2017-12-14 16:45:54 +00:00
rong.zhang
0771cd8599 Remove dashboard_tls_key and dashboard_tls_cert 2017-12-13 15:42:20 +08:00
Fang Zhen
91d848f98a Make splitting system_search_domains more robust
The search line in /etc/resolv.conf could have
multiple spaces or tabs between domains.
split(' ') will give wrong results in some cases;
use split() without an argument instead.

e.g.
>>> 'domain.tld	cluster.tld '.split(' ')
['domain.tld\tcluster.tld', '']
>>> 'domain.tld cluster.tld '.split()
['domain.tld', 'cluster.tld']
2017-12-13 15:39:38 +08:00
rong.zhang
40edf8c6f5 Update dashboard version to v1.8.0
Update dependencies to be compatible with Kubernetes v1.8
2017-12-13 12:50:44 +08:00
Chad Swenson
e78562830f Retry kube container removal during upgrade
As we have seen with other containers, sometimes container removal fails on the first attempt due to some Docker bugs. Retrying typically corrects the issue.
2017-12-12 12:06:41 -06:00
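Note: a minimal sketch of the retry pattern described above (illustrative only; the task name and container_name variable are placeholders):

- name: Remove kube container, retrying on transient Docker failures
  command: docker rm -f "{{ container_name }}"
  register: remove_result
  until: remove_result.rc == 0
  retries: 4
  delay: 5
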
Brad Beam
39ce1bd8be Merge pull request #2059 from bradbeam/vaultalt
Fixing alt_names for vault cert generation
2017-12-12 09:28:51 -06:00
Spencer Smith
6291881943 Merge pull request #2057 from rsmitty/master
set docker_version fact regardless of docker_dns in use
2017-12-12 10:28:14 -05:00
Brad Beam
802fd94dad Merge pull request #2054 from ArchiFleKs/os-cloud-provider-domain-fix
Fix domain id for OpenStack provider
2017-12-11 21:06:16 -06:00
Xu Zhipei
66f38a1b31 fix: always only one docker image got synced after download 2017-12-12 09:51:03 +08:00
Brad Beam
d3850a4da5 Fixing alt_names for vault cert generation 2017-12-11 17:28:18 -06:00
Spencer Smith
53a4355e60 set docker_version fact regardless of docker_dns in use 2017-12-11 17:48:11 -05:00
Spencer Smith
18a616f57c Merge pull request #2052 from ArchiFleKs/os-terraform-fix-inventory
Change OpenStack inventory to python2
2017-12-11 13:42:05 -05:00
Spencer Smith
32333eb627 Merge pull request #2035 from brutus333/fix/proxy
Added proxy_env to scale and upgrade playbooks
2017-12-11 12:43:06 -05:00
Brad Beam
19def41fdf Merge pull request #2047 from bradbeam/vaulttime
Adding retries for vault-temp to come online
2017-12-11 09:04:57 -06:00
ArchiFleKs
44b9dce134 Fix domain id for OpenStack provider
OpenStack authentication does not support using a mix of DomainID and
DomainName; only one or the other should be used.
2017-12-11 15:57:33 +01:00
Brad Beam
fa5a538fe5 Merge pull request #2050 from jbonachera/fix-vault-tls-validation
append newline char to vault generated certs
2017-12-11 08:41:34 -06:00
ArchiFleKs
5e3fd2253f Change OpenStack inventory to python2
For distributions that ship python3 as the default python, the
inventory script breaks because it is not compatible with python3.
2017-12-11 14:25:05 +01:00
Brad Beam
9643c2c1e3 Fixes to reset (#2046)
- adding additional directories to cleanup (rkt/vault)
- targeting kubespray ansible groups instead of all
2017-12-11 12:49:21 +00:00
Brad Beam
93f3614382 Fixes #2039 - changing alt_names to be string instead of list (#2043) 2017-12-11 12:48:07 +00:00
Brad Beam
cbc8a7d679 Merge pull request #1995 from b0r1sp/patch-1
Update main.yml
2017-12-10 21:45:02 -06:00
Julien BONACHERA
290bc993a5 append newline char to vault generated certs 2017-12-10 13:06:28 +01:00
Brad Beam
3694657eb6 Adding retries for vault-init to come online 2017-12-09 17:40:44 -06:00
Thomas Sarboni
79417e07ca Fix systemd service unit for docker >= 17.03 (#1844) 2017-12-08 13:12:45 +00:00
Spencer Smith
626b35e1b0 Merge pull request #2005 from riverzhang/patch-1
Delete helm home
2017-12-07 11:23:30 -05:00
Brad Beam
fed7b97dcb Merge pull request #2030 from mattymo/removerbaccheck
Remove RBAC from boolean checks
2017-12-06 23:41:13 -06:00
Spencer Smith
c4458c9d9a Merge pull request #1997 from mrbobbytables/feature-keepalived-cloud-provider
Add minimal keepalived-cloud-provider support
2017-12-06 23:28:27 -05:00
Virgil Chereches
7bae2a4547 Added proxy_env to scale and upgrade playbooks 2017-12-06 15:06:34 +00:00
riverzhang
aeb3e647d4 Remove the network device created by flannel (#2006)
* Remove the network device created by flannel

* Modify the flannel.1 device path

* remove trailing spaces
2017-12-06 14:15:39 +00:00
Kuldip Madnani
fe036cbe77 Adding changes to handle updating of the yum cache in RHEL. (#2026)
* Adding changes to handle updating of the yum cache in RHEL.

* Removed the redundant spaces
2017-12-06 09:00:41 +00:00
Matthew Mosesohn
952ec65a40 Remove RBAC from boolean checks 2017-12-06 11:57:40 +03:00
Chad Swenson
b8788421d5 Support for disabling apiserver insecure port
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled).

Rework of #1937 with kubeadm support

Also, fixed an issue in `kubeadm-migrate-certs` where the old apiserver cert was copied as the kubeadm key
2017-12-05 09:13:45 -06:00
Brad Beam
c2347db934 Merge pull request #1953 from chadswen/dashboard-refactor
Kubernetes Dashboard v1.7.1 Refactor
2017-12-05 08:50:55 -06:00
Brad Beam
27ead5d4fa Merge pull request #2003 from abelgana/master
Change altnames to alt_names
2017-12-05 08:48:32 -06:00
BenGalewsky
591ae700ce Update OpenStack Terraform: Modules, Bastions, and New Floating IP config (#1958)
* Adding bastion and private network provisioning for openstack terraform

* Remove usage of floating-ip property

* Combine openstack instances + floating ips

* Fix relating floating IPs to hosts for openstack builds

* Tighten up security groups

Allow ssh into all instances with floating IP

* Add the gluster hosts to the no-floating group

* Break terraform into modules

* Update README and var descriptions to match current config

* Remove volume property in gluster compute def

* Include cluster name in internal network and router names

* Make dns_nameservers a variable
2017-12-05 12:48:47 +00:00
Stanislav Makar
6ade7c0a8d Update k8s version to 1.8.4 (#2015)
* Update k8s version to 1.8.4

* Update main.yml
2017-12-04 16:23:04 +00:00
Jan Jungnickel
b3745f2614 contrib/terraform/aws: Tag instances and remove loadbalancer ip (#2023)
* Properly tag instances and subnets with `kubernetes.io/cluster/$cluster_name`

This is required by kubernetes to support multiple clusters in a single vpc/az

* Get rid of loadbalancer_apiserver_address as it is no longer needed
2017-12-04 14:31:46 +00:00
Jean-Marie F
ca8a9c600a Terraform - Remove the need for region specific reference data (#1962)
* Dynamically retrieve aws_bastion_ami latest reference by querying AWS rather than hard coded

* Dynamically retrieve the list of availability_zones instead of needing to have them hard coded

* Limit availability zones to the first 2, using the slice interpolation function

* Replace the need for hardcoded variable "aws_cluster_ami" by the data provided by Terraform

* Move ami choosing to vars, so people don't need to edit create infrastructure if they want another vendor image (as suggested by @atoms)

* Make name of the data block agnostic of distribution, given there are more than one distribution supported

* Add documentation about other distros being supported and what to change in which location to make these changes
2017-11-30 15:27:52 +00:00
Matthew Mosesohn
a0225507a0 Set helm deployment type to host (#2012) 2017-11-29 19:52:54 +00:00
Steven Hardy
d39a88d63f Allow setting --bind-address for apiserver hyperkube (#1985)
* Allow setting --bind-address for apiserver hyperkube

This is required if you wish to configure a loadbalancer (e.g haproxy)
running on the master nodes without choosing a different port for the
vip from that used by the API - in this case you need the API to bind to
a specific interface, then haproxy can bind the same port on the VIP:

root@overcloud-controller-0 ~]# netstat -taupen | grep 6443
tcp        0      0 192.168.24.6:6443       0.0.0.0:*               LISTEN      0          680613     134504/haproxy
tcp        0      0 192.168.24.16:6443      0.0.0.0:*               LISTEN      0          653329     131423/hyperkube
tcp        0      0 192.168.24.16:6443      192.168.24.16:58404     ESTABLISHED 0          652991     131423/hyperkube
tcp        0      0 192.168.24.16:58404     192.168.24.16:6443      ESTABLISHED 0          652986     131423/hyperkube

This can be achieved e.g via:

kube_apiserver_bind_address: 192.168.24.16

* Address code review feedback

* Update kube-apiserver.manifest.j2
2017-11-29 15:24:02 +00:00
unclejack
e5d353d0a7 contiv network support (#1914)
* Add Contiv support

Contiv is a network plugin for Kubernetes and Docker. It supports
vlan/vxlan/BGP/Cisco ACI technologies. It supports firewall policies,
multiple networks, and bridging pods onto physical networks.

* Update contiv version to 1.1.4

Update contiv version to 1.1.4 and added SVC_SUBNET in contiv-config.

* Load openvswitch module to workaround on CentOS7.4

* Set contiv cni version to 0.1.0

Correct contiv CNI version to 0.1.0.

* Use kube_apiserver_endpoint for K8S_API_SERVER

Use kube_apiserver_endpoint as K8S_API_SERVER so that contiv talks
to an available endpoint no matter if there's a loadbalancer or not.

* Make contiv use its own etcd

Before this commit, contiv used etcd proxy mode against the k8s etcd.
This works fine when the etcd hosts are co-located with the contiv etcd
proxy; however, the k8s peering certs exist only in the etcd group, so
the etcd-proxy is not able to peer with the k8s etcd in the etcd group.
In addition, the netplugin always tries to find the etcd endpoint on
localhost, which causes problems for all netplugins not running on
etcd group nodes.
This commit makes contiv use its own etcd, separate from the k8s one:
on kube-master nodes (where net-master runs) it runs in leader mode,
and on all other nodes it runs in proxy mode.

* Use cp instead of rsync to copy cni binaries

Since rsync has been removed from hyperkube, this commit changes it
to use cp instead.

* Make contiv-etcd able to run on master nodes

* Add rbac_enabled flag for contiv pods

* Add contiv into CNI network plugin lists

* migrate contiv test to tests/files

Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>

* Add required rules for contiv netplugin

* Better handling json return of fwdMode

* Make contiv etcd port configurable

* Use default var instead of templating

* roles/download/defaults/main.yml: use contiv 1.1.7

Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
2017-11-29 14:24:16 +00:00
Di Xu
de422c822d update nginx tag to use multi-arch docker image (#2009) 2017-11-29 10:39:52 +00:00
Matthew Mosesohn
4d3326b542 Raise default vault lease TTL to 10y (#2008) 2017-11-29 10:38:59 +00:00
riverzhang
1b82138142 Delete helm home
2017-11-29 13:27:09 +08:00
Christopher Randles
208ff8e350 Allow for more customization of the tiller deploy (#1946) 2017-11-28 18:33:57 +00:00
Matthew Mosesohn
ec54b36e05 add retries for calico/canal etcd commands (#2007) 2017-11-28 16:39:55 +00:00
Spencer Smith
38e8522cbf Merge pull request #1983 from tomdee/bump-flannel-ver
Bump flannel version to v0.9.1
2017-11-28 11:38:55 -05:00
Spencer Smith
52f8687397 Merge pull request #1977 from mattymo/initializers
Disable initializers feature gate if istio is not used
2017-11-28 11:37:41 -05:00
Spencer Smith
43600ffcf8 Merge pull request #1972 from chadswen/master-static-pod-flush
Additional flush for static pod master upgrade
2017-11-28 11:36:38 -05:00
Christopher Randles
938d2d9e6e update helm/tiller to v2.7.2 -- security bugfix (#1986) 2017-11-28 14:52:42 +00:00
Kevin Lefevre
9368dbe0e7 update calico to 2.6.2 (#1874)
Move RS to deployment so no need to take care of the revision history
limits :
  - Delete the old RS
  - Make Calico manifest a deployment
  - move deployments to apps/v1beta2 API since Kubernetes 1.8
2017-11-28 12:01:30 +00:00
abelgana
fe3290601a The variable altnames is used by this task.
Since the default value is changing, it needs to change here as well.
2017-11-27 06:57:16 -05:00
abelgana
e7173e1d62 Change altnames to alt_names
Hi,

Could you please check if it was a typo?

https://www.vaultproject.io/api/secret/pki/

Regards,
2017-11-25 17:29:21 -05:00
Bogdan Dobrelya
8aafe64397 Defaults for apiserver_loadbalancer_domain_name (#1993)
* Defaults for apiserver_loadbalancer_domain_name

When loadbalancer_apiserver is defined, use the
apiserver_loadbalancer_domain_name with a given default value.

Fix inconsistencies between checking whether apiserver_loadbalancer_domain_name
is defined AND using it with a default value provided at once.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>

* Define defaults for LB modes in common defaults

Adjust the defaults for apiserver_loadbalancer_domain_name and
loadbalancer_apiserver_localhost to come from a single source, which is
kubespray-defaults. Removes some confusion and simplifies the code.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-11-23 16:15:48 +00:00
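Note: a hedged sketch of how the two variables mentioned above fit together in group vars (the address, port, and domain name below are assumed placeholder values; the defaulting itself is what the commit moves into kubespray-defaults):

loadbalancer_apiserver:
  address: 203.0.113.20
  port: 8443
apiserver_loadbalancer_domain_name: "lb-apiserver.kubernetes.local"
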
Bob Killen
2140303fcc add minimal keepalived-cloud-provider support 2017-11-23 08:43:36 -05:00
brx
b80ded63ca Update main.yml
just a small spelling mistake
2017-11-21 22:37:52 +01:00
Simon Li
7be2521a31 Add flannel hairpin mode 2017-11-21 10:43:50 +00:00
Tom Denham
15b9d54a32 Bump flannel version to v0.9.1 2017-11-16 12:52:18 -07:00
Spencer Smith
bc1a4e12ad fix broken variable in ansible 2.4.1.0 and ensure tasks for calico-rr (#1982) 2017-11-16 18:44:15 +00:00
Matthew Mosesohn
67419e8d0a Run rotate_tokens role only once (#1970) 2017-11-15 18:50:23 +00:00
Chad Swenson
849aaf7435 Update to k8s 1.8.3 (#1971) 2017-11-15 17:43:22 +00:00
Chad Swenson
a89ee8c406 Add ability to use custom cert secret instead of init container provisioned self-signed certs 2017-11-15 10:05:52 -06:00
Chad Swenson
0c6f172e75 Kubernetes Dashboard v1.7.1 Refactor
This version required changing the previous access model for dashboard completely but it's a change for the better. Docs were updated.

* New login/auth options that use apiserver auth proxying by default
* Requires RBAC in `authorization_modes`
* Only serves over https
* No longer available at https://first_master:6443/ui until apiserver is updated with the https proxy URL:
* Can access from https://first_master:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login you will be prompted for credentials
* Or you can run 'kubectl proxy' from your local machine to access dashboard in your browser from: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
* It is recommended to access dashboard from behind a gateway that enforces an authentication token, details and other access options here: https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above
2017-11-15 10:05:48 -06:00
Matthew Mosesohn
a67349b076 Disable initializers feature gate if istio is not used 2017-11-15 12:56:36 +00:00
Matthew Mosesohn
f9b68a5d17 Revert "Support for disabling apiserver insecure port" (#1974) 2017-11-14 13:41:28 +00:00
chenhonggc
c7910b51a1 --peers DEPRECATED - --endpoints should be used instead (#1943) 2017-11-14 11:28:35 +00:00
Chad Swenson
1f99710b21 Additional flush for static pod master upgrade
Thought this wasn't required at first but I forgot there's no auto flush at the end of these tasks since the `kubernetes/master` role is not the end of the play.
2017-11-13 18:11:57 -06:00
Aivars Sterns
5e558c361b update weave-net to 2.0.5 version (#1877) 2017-11-13 16:11:47 +00:00
neith00
5f39efcdfd adding mount for kubelet to enable rbd mounts (#1957)
* adding mount for kubelet to enable rbd mounts

* fix conditional variable name
2017-11-13 14:04:13 +00:00
Stanislav Makar
037edf1215 Fix failed task of setting up bash completion for helm (#1968)
Closes: #1967
2017-11-13 10:15:53 +00:00
Hyunsun Moon
37125866ca Make calico_node_ignorelooserpf have an effect (#1945) 2017-11-13 09:35:13 +00:00
Günther Grill
421e73b87c Add missing exclamation mark in shebang line (#1966) 2017-11-13 09:34:21 +00:00
Maxim Krasilnikov
0d8de289dd Revert "Change deprecated vagrant ansible flag 'sudo' to 'become'" (#1960) 2017-11-12 09:20:30 +00:00
Brad Beam
00916dec38 Merge pull request #1954 from abelgana/patch-1
fix a typo
2017-11-10 11:04:57 -05:00
Brad Beam
c115e5677e Merge pull request #1828 from hzamani/patch-1
Use etcd_access_addresses for vault_etcd_url
2017-11-10 10:56:37 -05:00
abelgana
56047c1c83 fix a typo 2017-11-10 09:30:27 -05:00
Spencer Smith
09d85631dc Merge pull request #1944 from chadswen/reload-master-pods
Master component and kubelet container upgrade fixes
2017-11-08 22:23:12 -05:00
Brad Beam
f25e4dc3ed Merge pull request #1937 from chadswen/disable-api-insecure-port
Support for disabling apiserver insecure port
2017-11-08 18:13:49 -05:00
Spencer Smith
a3a7c2d24e Merge pull request #1947 from rsmitty/rkt-proxy
provide environment for rkt trust and run with etcd
2017-11-08 15:26:47 -05:00
Spencer Smith
0126168472 provide environment for rkt trust and run with etcd 2017-11-08 12:57:22 -05:00
Chad Swenson
e9f795c5ce Master component and kubelet container upgrade fixes
* Fixes an issue where apiserver and friends (controller manager, scheduler) were prevented from restarting after manifests/secrets are changed. This occurred when a replaced kubelet doesn't reconcile new master manifests, which caused old master component versions to linger during deployment. In my case this was causing upgrades from k8s 1.6/1.7 -> k8s 1.8 to fail
* Improves transitions from kubelet container to host kubelet by preventing issues where kubelet container reappeared during the deployment
2017-11-08 01:40:33 -06:00
Chad Swenson
0c7e1889e4 Support for disabling apiserver insecure port
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). It's working, but so far I have had to:

1. Make the `uri` module "Wait for apiserver up" checks use `kube_apiserver_port` (HTTPS)
2. Add apiserver client cert/key to the "Wait for apiserver up" checks
3. Update apiserver liveness probe to use HTTPS ports
4. Set `kube_api_anonymous_auth` to true to allow liveness probe to hit apiserver's /healthz over HTTPS (livenessProbes can't use client cert/key unfortunately)
5. RBAC has to be enabled. Anonymous requests are in the `system:unauthenticated` group which is granted access to /healthz by one of RBAC's default ClusterRoleBindings. An equivalent ABAC rule could allow this as well.

Changes 1 and 2 should work for everyone, but 3, 4, and 5 require new coupling of currently independent configuration settings. So I also added a new settings check.

Options:

1. The problem goes away if you have both anonymous-auth and RBAC enabled. This is how kubeadm does it. This may be the best way to go since RBAC is already on by default but anonymous auth is not.
2. Include conditional templates to set a different liveness probe for possible combinations of `kube_apiserver_insecure_port = 0`, RBAC, and `kube_api_anonymous_auth` (won't be possible to cover every case without a guaranteed authorizer for the secure port)
3. Use basic auth headers for the liveness probe (I really don't like this, it adds a new dependency on basic auth which I'd also like to leave independently configurable, and it requires encoded passwords in the apiserver manifest)

Option 1 seems like the clear winner to me, but is there a reason we wouldn't want anonymous-auth on by default? The apiserver binary defaults anonymous-auth to true, but kubespray's default was false.
2017-11-06 14:01:10 -06:00
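Note: a sketch of what "option 1" above could look like in group vars, assuming the variable names used elsewhere in kubespray (kube_apiserver_insecure_port, kube_api_anonymous_auth, authorization_modes); treat it as illustrative, not the merged configuration:

kube_apiserver_insecure_port: 0         # disable the local HTTP port entirely
kube_api_anonymous_auth: true           # lets the HTTPS liveness probe reach /healthz
authorization_modes: ["Node", "RBAC"]   # RBAC's default bindings grant system:unauthenticated access to /healthz
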
Aivars Sterns
8b2bec700a add bastion role to scale (#1882) 2017-11-06 13:51:36 +00:00
Amit Kumar Jaiswal
125267544e Fix Typo (#1935) 2017-11-06 13:51:22 +00:00
Günther Grill
0d55ed3600 Avoid that some read-only tasks cause an ansible-change (#1910) 2017-11-06 13:51:07 +00:00
Haiwei Liu
ad0cd6939a Add support for cAdvisor (#1908)
Signed-off-by: Haiwei Liu <carllhw@gmail.com>
2017-11-06 13:50:28 +00:00
Rob Hirschfeld
a1244d7bd3 update link to latest Digital Rebar integration (#1933) 2017-11-06 13:49:54 +00:00
Stanislav Makar
33adb334cd Fix openstack tenant id variable name (#1932) 2017-11-05 08:40:41 +00:00
Spencer Smith
ef87a8a1f0 Merge pull request #1916 from vtomasr5/master
Fix bad handler directory name in kubeadm role
2017-11-03 18:14:48 -04:00
Spencer Smith
5223a80ab8 Merge pull request #1925 from chadswen/proxy-fixes
Remove proxy settings from etcd and kubernetes/master roles
2017-11-03 18:13:36 -04:00
Spencer Smith
a595c84f7e Merge pull request #1928 from chadswen/flannel-rbac-fix
Flannel RBAC Fix
2017-11-03 18:12:16 -04:00
Spencer Smith
adcfcc1178 Merge pull request #1931 from chadswen/docker-update
Docker Version Update
2017-11-03 18:11:33 -04:00
Chad Swenson
b158dbcf79 Docker Version Update
Update default docker version to 17.03.1
2017-11-03 12:34:45 -05:00
Matthew Mosesohn
ab3832f3e7 Set host IP for kubelet always (#1924)
* Set host IP for kubelet always

Use ansible default IP if ip var is not set.

* Update main.yml
2017-11-03 10:19:37 +00:00
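Note: the fallback described above could be expressed roughly as follows (a sketch; kubelet_address is a hypothetical variable name used only for illustration):

kubelet_address: "{{ ip | default(ansible_default_ipv4.address) }}"
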
Kevin Lefevre
9bf415f749 update helm to v2.7.0 (#1875)
* update helm to v2.7.0

* Update main.yml
2017-11-03 07:15:00 +00:00
Günther Grill
a2bda9e5f1 Eliminate jinja2 template expression warning and rename coreos-python var (#1911)
* Change deprecated vagrant ansible flag 'sudo' to 'become'

* Emphasize that the name of the pip_pyton_modules is only considered in coreos

* Remove useless unused variable

* Fix warning when jinja2 template-delimiters used in when statement

There is no need for jinja2 template-delimiters like {{ }} or {% %}
any more. They can just be omitted as described in https://github.com/ansible/ansible/issues/22397

* Fix broken link in getting-started guide
2017-11-03 07:11:36 +00:00
Günther Grill
0195725563 Workaround ansible bug where access var via dict doesn't get real value (#1912)
* Change deprecated vagrant ansible flag 'sudo' to 'become'

* Workaround ansible bug where access var via dict doesn't get real value

When accessing a variable via it's name "{{ foo }}" its value is
retrieved. But when the variable value is retrieved via the vars-dict
"{{ vars['foo'] }}" this doesn't resolve the expression of the variable
any more due to a bug. So e.g. an expression foo="{{ 1 == 1 }}" is no
longer resolved but just returned as the string "1 == 1".

* Make file yamllint compliant
2017-11-03 07:11:14 +00:00
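Note: a minimal reproduction of the behaviour described above (assumed, not taken from the repository):

- hosts: localhost
  gather_facts: false
  vars:
    foo: "{{ 1 == 1 }}"
  tasks:
    - debug:
        msg: "{{ foo }}"           # resolves to True
    - debug:
        msg: "{{ vars['foo'] }}"   # on affected Ansible versions this prints the raw string "1 == 1"
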
Spencer Smith
ec1170bd37 only mount volumes if local_volumes_enabled is true. fix mount flags in rkt. (#1923) 2017-11-03 07:10:37 +00:00
Matthew Mosesohn
66c67dbe73 Add optional helm deployment mode for host (#1920) 2017-11-03 07:09:24 +00:00
Chad Swenson
e5d8d8234d Remove proxy settings from etcd and kubernetes/master roles
When proxy vars are set, `uri` module tasks will attempt to route traffic through the proxy. This causes the "Wait for" tasks in the `etcd` and `kubernetes/master` roles to hang, as localhost connections struggle with a proxy.

As far as I know these roles only need local/cluster networking, so a proxy doesn't apply here anyway.
2017-11-03 01:41:17 -05:00
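Note: one alternative way to express the same intent would be the uri module's own use_proxy switch, sketched below (the merged change instead removes the proxy env from these roles; the URL and variable names here are illustrative):

- name: Wait for apiserver up without routing through the proxy
  uri:
    url: "https://127.0.0.1:{{ kube_apiserver_port | default(6443) }}/healthz"
    validate_certs: no
    use_proxy: no
  register: result
  failed_when: false
  until: result.status | default(-1) == 200
  retries: 20
  delay: 5
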
Chad Swenson
16ae2c1809 Flannel RBAC Fix
Fixes a bug that can occur if `cni-flannel-rbac.yml` was written but the playbook failed before it was applied. Uses the same approach as calico.
2017-11-02 23:20:23 -05:00
Spencer Smith
5c5e879c2c Merge pull request #1904 from guenhter/master
Change deprecated vagrant ansible flag 'sudo' to 'become'
2017-11-02 12:02:32 -04:00
Spencer Smith
4771716ab2 Merge pull request #1907 from mattymo/disable_anon_auth
Block anonymous auth requests to kubelet
2017-11-02 12:01:39 -04:00
Spencer Smith
b156585739 Merge pull request #1917 from chadswen/docker-daemon-graph
Fix kubelet container with alternate Docker data paths
2017-11-02 11:58:55 -04:00
Spencer Smith
7a77b5c419 Merge pull request #1919 from mattymo/fix_rkt_local_vols
Fix local volume provisioner mount point for rkt
2017-11-02 11:32:30 -04:00
Spencer Smith
9872b594bf Merge pull request #1921 from pipo02mix/patch-2
Typo in apt-get command
2017-11-02 11:29:32 -04:00
Aivars Sterns
e6c88db0a0 change how terraform generates apiserver variables (#1922) 2017-11-02 12:26:11 +00:00
Fernando Ripoll
257280a050 Typo in apt-get command
2017-11-02 11:40:08 +01:00
Matthew Mosesohn
520103df78 Change namespace for provisioner account 2017-11-02 10:16:08 +00:00
Matthew Mosesohn
3e3787de15 Fix local volume provisioner mount point for rkt 2017-11-02 09:45:26 +00:00
Chad Swenson
0c824d5ef1 Fix kubelet container with alternate Docker data paths
Some time ago I think the hardcoded `/var/lib/docker` was required, but kubelet running in a container has been aware of the Docker path since at least as far back as k8s 1.6.

Without this change, you see a large number of errors in the kubelet logs if you installed with a non-default `docker_daemon_graph`
2017-11-01 13:25:15 -05:00
Matthew Mosesohn
c0e989b17c New addon: local_volume_provisioner (#1909) 2017-11-01 14:25:35 +00:00
Vicenç Juan Tomàs Montserrat
5218b3af82 Fix bad handler directory name in kubeadm role 2017-11-01 14:36:28 +01:00
Spencer Smith
ef0a91da27 Merge pull request #1891 from rsmitty/proxy-fixes
Improved proxy support
2017-10-31 14:32:12 -04:00
Spencer Smith
8412181746 Merge pull request #1899 from skyscooby/update_kube182
Update to Kubernetes 1.8.2
2017-10-31 14:30:56 -04:00
Spencer Smith
400ee2aa57 Merge pull request #1898 from skyscooby/update_kubedns
Update kubedns to 1.14.7 release
2017-10-31 14:30:36 -04:00
Spencer Smith
05b8466f87 Merge pull request #1890 from chadswen/apt-repo-params
Parameterize dockerproject apt repo endpoints
2017-10-31 14:29:19 -04:00
Spencer Smith
6061c691e6 Merge pull request #1902 from pipo02mix/patch-1
Typo in the apt-get command
2017-10-31 12:30:41 -04:00
guenhter
3ac967a7b6 Merge branch 'master' of https://github.com/kubernetes-incubator/kubespray 2017-10-31 15:15:39 +01:00
Spencer Smith
19962f6b6a fix indentation for master template (#1906) 2017-10-31 06:43:54 +00:00
Matthew Mosesohn
f7703dbca3 Block anonymous auth requests to kubelet 2017-10-30 19:06:54 +00:00
Spencer Smith
74a9eedb93 helm template check for http/https_proxy 2017-10-30 13:11:04 -04:00
Spencer Smith
6df104b275 don't check for no_proxy, only http/https_proxy. fix linting issues. 2017-10-30 11:42:14 -04:00
Spencer Smith
b27453d8d8 improved proxy support 2017-10-30 11:42:14 -04:00
Spencer Smith
4470ee4ccf Merge pull request #1887 from mattymo/fix_indent_apiserver
fix indentation for network policy option
2017-10-30 11:33:13 -04:00
Andrew Greenwood
df27fd1e9c Update README.md 2017-10-30 09:39:02 -04:00
guenhter
97c68810e0 Change deprecated vagrant ansible flag 'sudo' to 'become' 2017-10-30 14:37:06 +01:00
Andrew Greenwood
8a86acf75d Update kubespray-defaults kubernetes to v1.8.2 2017-10-30 09:34:32 -04:00
Fernando Ripoll
160e479f8d Typo in the apt-get command
2017-10-30 13:47:39 +01:00
abelgana
d738acf638 Update kubelet.kubeadm.env.j2 (#1901) 2017-10-30 11:33:02 +00:00
tanshanshan
84d92aa3c7 fix-bug (#1900) 2017-10-30 11:23:24 +00:00
Andrew Greenwood
dd01cabcdc Update to kubernetes 1.8.2 2017-10-29 22:13:06 -04:00
Andrew Greenwood
e196adb98c Update kubernetes 1.8.2 2017-10-29 22:09:22 -04:00
Andrew Greenwood
c383c7e2c1 Update kubedns image to latest 2017-10-29 21:58:05 -04:00
Andrew Greenwood
958bb5285d Update kubedns image to latest 2017-10-29 21:57:32 -04:00
Spencer Smith
f0317ae70b Merge pull request #1876 from ArchiFleKs/update_flannel
update flannel
2017-10-27 15:22:54 -04:00
Spencer Smith
591941bd39 Merge pull request #1884 from abelgana/master
Sysctl reload if needed after IP forward enabling
2017-10-27 15:12:08 -04:00
Spencer Smith
e90769c869 Merge pull request #1888 from chapsuk/issue_1885
Disable swap in vagrant vms
2017-10-27 15:10:16 -04:00
Chad Swenson
256bbb1a8a Parameterize apt repo endpoints
This allows overriding of apt repo endpoints when internet sources are not accessible. Additionally, switch to using the dockerproject.org gpg key url for apt instead of keyservers.net
2017-10-27 13:48:11 -05:00
mkrasilnikov
2c7c956be9 Disable swap in vagrant vms 2017-10-27 19:57:54 +03:00
Matthew Mosesohn
fe81bba08d Force kubelet certificates to be generated as lowercase (#1886)
All nodes get converted to lowercase, so certs should set
CN with lowercase as well.
2017-10-27 15:58:25 +01:00
Matthew Mosesohn
564de07963 fix indentation for network policy option 2017-10-27 14:56:22 +01:00
Aivars Sterns
84cf6fbe83 change ssh_args/bastion configuration (#1883) 2017-10-27 12:18:39 +01:00
abelgana
d9160f19c0 Sysctl reload if needed after IP forward enabling
Add reload yes to reload sysctl if the value of net.ipv4.ip_forward changes.

- name: Enable ip forwarding
  sysctl:
    sysctl_file: "{{sysctl_file_path}}"
    name: net.ipv4.ip_forward
    value: 1
    state: present
    reload: yes
  tags:
    - bootstrap-os
2017-10-26 13:06:21 -04:00
Brad Beam
ba0a03a8ba Merge pull request #1880 from mattymo/node_auth_fixes2
Move cluster roles and system namespace to new role
2017-10-26 10:02:24 -05:00
Matthew Mosesohn
b0f04d925a Update network policy setting for Kubernetes 1.8 (#1879)
It is now enabled by default in 1.8 with the api changed
to networking.k8s.io/v1 instead of extensions/v1beta1.
2017-10-26 15:35:26 +01:00
Matthew Mosesohn
7b78e68727 disable idempotency tests (#1872) 2017-10-26 15:35:12 +01:00
Matthew Mosesohn
ec53b8b66a Move cluster roles and system namespace to new role
This should be done after kubeconfig is set for admin and
before network plugins are up.
2017-10-26 14:36:05 +01:00
ArchiFleKs
6e949bf951 update flannel 2017-10-26 11:18:06 +02:00
Matthew Mosesohn
86fb669fd3 Idempotency fixes (#1838) 2017-10-25 21:19:40 +01:00
Matthew Mosesohn
7123956ecd update checksum for kubeadm (#1869) 2017-10-25 21:15:16 +01:00
Spencer Smith
46cf6b77cf Merge pull request #1857 from pmontanari/patch-1
Use same kubedns_version: 1.14.5 in downloads  and kubernetes-apps/ansible roles
2017-10-25 10:05:43 -04:00
Matthew Mosesohn
a52bc44f5a Fix broken CI jobs (#1854)
* Fix broken CI jobs

Adjust image and image_family scenarios for debian.
Checkout CI file for upgrades

* add debugging to file download

* Fix download for alternate playbooks

* Update ansible ssh args to force ssh user

* Update sync_container.yml
2017-10-25 11:45:54 +01:00
Matthew Mosesohn
acb63a57fa Only limit etcd memory on small hosts (#1860)
Also disable oom killer on etcd
2017-10-25 10:25:15 +01:00
Flavio Percoco Premoli
5b08277ce4 Access dict item's value keys using .value (#1865) 2017-10-24 20:49:36 +01:00
Chiang Fong Lee
5dc56df64e Fix ordering of kube-apiserver admission control plug-ins (#1841) 2017-10-24 17:28:07 +01:00
Matthew Mosesohn
33c4d64b62 Make ClusterRoleBinding to admit all nodes with right cert (#1861)
This is to work around #1856 which can occur when kubelet
hostname and resolvable hostname (or cloud instance name)
do not match.
2017-10-24 17:05:58 +01:00
Matthew Mosesohn
25de6825df Update Kubernetes to v1.8.1 (#1858) 2017-10-24 17:05:45 +01:00
Peter Lee
0b60201a1e fix etcd health check bug (#1480) 2017-10-24 16:10:56 +01:00
Haiwei Liu
cfea99c4ee Fix scale.yml to support kubeadm (#1863)
Signed-off-by: Haiwei Liu <carllhw@gmail.com>
2017-10-24 16:08:48 +01:00
Matthew Mosesohn
cea41a544e Use include instead of import tasks to support v2.3 (#1855)
Eventually 2.3 support will be dropped, so this is
a temporary change.
2017-10-23 13:56:03 +01:00
pmontanari
8371a060a0 Update main.yml
Match kubedns_version with roles/download/defaults/main.yml:kubedns_version: 1.14.5
2017-10-22 23:48:51 +02:00
Matthew Mosesohn
7ed140cea7 Update refs to kubernetes version to v1.8.0 (#1845) 2017-10-20 08:29:28 +01:00
Matthew Mosesohn
cb97c2184e typo fix for ci job name (#1847) 2017-10-20 08:26:42 +01:00
Matthew Mosesohn
0b4fcc83bd Fix up warnings and deprecations (#1848) 2017-10-20 08:25:57 +01:00
Matthew Mosesohn
514359e556 Improve etcd scale up (#1846)
Now adding unjoined members to existing etcd cluster
occurs one at a time so that the cluster does not
lose quorum.
2017-10-20 08:02:31 +01:00
Peter Slijkhuis
55b9d02a99 Update README.md (#1843)
Changed Ansible 2.3 to 2.4
2017-10-19 13:49:04 +01:00
Matthew Mosesohn
fc9a65be2b Refactor downloads to use download role directly (#1824)
* Refactor downloads to use download role directly

Also disable fact delegation so download delegate works across OSes.

* clean up bools and ansible_os_family conditionals
2017-10-19 09:17:11 +01:00
Jan Jungnickel
49dff97d9c Relabel controler-manager to kube-controller-manager (#1830)
Fixes #1129
2017-10-18 17:29:18 +01:00
Matthew Mosesohn
4efb0b78fa Move CI vars out of gitlab and into var files (#1808) 2017-10-18 17:28:54 +01:00
Hassan Zamani
c9fe8fde59 Use fail-swap-on flag only for kube_version >= 1.8 (#1829) 2017-10-18 16:32:38 +01:00
Simon Li
74d54946bf Add note that glusterfs is not automatically deployed (#1834) 2017-10-18 13:26:14 +01:00
Matthew Mosesohn
16462292e1 Properly skip extra SANs when not specified for kubeadm (#1831) 2017-10-18 12:04:13 +01:00
Aivars Sterns
7ef1e1ef9d update terraform, fix deprecated values add default_tags, fix ansible inventory (#1821) 2017-10-18 11:44:32 +01:00
pmontanari
20d80311f0 Update main.yml (#1822)
* Update main.yml

resolv.conf needs to be set up before updating the Yum cache, otherwise no name resolution is available (resolv.conf is empty).

* Update main.yml

Removing trailing spaces
2017-10-18 11:42:00 +01:00
Tim(Xiaoyu) Zhang
f1a1f53f72 fix slack URL (#1832) 2017-10-18 10:32:47 +01:00
Hassan Zamani
3acc42c5b3 Use etcd_access_addresses for vault_etcd_url 2017-10-17 19:27:36 +03:30
Matthew Mosesohn
c766bd077b Use batch mode for graceful docker/rkt upgrade (#1815) 2017-10-17 14:12:11 +01:00
Tennis Smith
54320c5b09 set to 3 digit version number (#1817) 2017-10-17 11:14:29 +01:00
Seungkyu Ahn
291b71ea3b Changing default value string to boolean. (#1669)
When downloading containers or files, use boolean
as a default value.
2017-10-17 11:14:12 +01:00
Rémi de Passmoilesel
356515222a Add possibility to insert more IP addresses in certificates (#1678)
* Add possibility to insert more IP addresses in certificates

* Add newline at end of files

* Move supp ip parameters to k8s-cluster group file

* Add supplementary addresses in kubeadm master role

* Improve openssl indexes
2017-10-17 11:06:07 +01:00
Aivars Sterns
688e589e0c fix #1788: lock dashboard version to 1.6.3 while 1.7.x is not working (#1805) 2017-10-17 11:04:55 +01:00
刘旭
6c98201aa4 remove kube-dns versions and images in kubernetes-apps/ansible/defaults/main.yaml (#1807) 2017-10-17 11:03:53 +01:00
Matthew Mosesohn
d4b10eb9f5 Fix path for calico get node names (#1816) 2017-10-17 10:54:48 +01:00
Jiří Stránský
728d56e74d Only write bastion ssh config when needed (#1810)
This will allow running Kubespray when the user who runs it doesn't
have write permissions to the Kubespray dir, at least when not using
bastion.
2017-10-17 10:28:45 +01:00
Matthew Mosesohn
a9f4038fcd Update roadmap (#1814) 2017-10-16 17:02:53 +01:00
neith00
77f1d4b0f1 Revert "Update roadmap" (#1809)
* Revert "Debian jessie docs (#1806)"

This reverts commit d78577c810.

* Revert "[contrib/network-storage/glusterfs] adds service for glusterfs endpoint (#1800)"

This reverts commit 5fb6b2eaf7.

* Revert "[contrib/network-storage/glusterfs] bootstrap for glusterfs nodes (#1799)"

This reverts commit 404caa111a.

* Revert "Fixed kubelet standard log environment (#1780)"

This reverts commit b838468500.

* Revert "Add support for fedora atomic host (#1779)"

This reverts commit f2235be1d3.

* Revert "Update network-plugins to use portmap plugin (#1763)"

This reverts commit 6ec45b10f1.

* Revert "Update roadmap (#1795)"

This reverts commit d9879d8026.
2017-10-16 14:09:24 +01:00
Marc Zahn
d78577c810 Debian jessie docs (#1806)
* Add Debian Jessie notes

* Add installation notes for Debian Jessie
2017-10-16 09:02:12 +01:00
Pablo Moreno
5fb6b2eaf7 [contrib/network-storage/glusterfs] adds service for glusterfs endpoint (#1800) 2017-10-16 08:48:29 +01:00
Pablo Moreno
404caa111a [contrib/network-storage/glusterfs] bootstrap for glusterfs nodes (#1799) 2017-10-16 08:23:38 +01:00
Seungkyu Ahn
b838468500 Fixed kubelet standard log environment (#1780)
Change KUBE_LOGGING to KUBE_LOGTOSTDERR when installing kubelet
as host type.
2017-10-16 08:22:54 +01:00
Jason Brooks
f2235be1d3 Add support for fedora atomic host (#1779)
* don't try to install this rpm on fedora atomic

* add docker 1.13.1 for fedora

* built-in docker unit file is sufficient, as tested on both fedora and centos atomic
2017-10-16 08:03:33 +01:00
Kevin Lefevre
6ec45b10f1 Update network-plugins to use portmap plugin (#1763)
Portmap allows using hostPort with CNI plugins. Should fix #1675
2017-10-16 07:11:38 +01:00
Matthew Mosesohn
d9879d8026 Update roadmap (#1795) 2017-10-16 07:06:06 +01:00
Matthew Mosesohn
d487b2f927 Security best practice fixes (#1783)
* Disable basic and token auth by default

* Add recommended security params

* allow basic auth to fail in tests

* Enable TLS authentication for kubelet
2017-10-15 20:41:17 +01:00
Julian Poschmann
66e5e14bac Restart kubelet on update in deployment-type host on update (#1759)
* Restart kubelet on update in deployment-type host on update

* Update install_host.yml

* Update install_host.yml

* Update install_host.yml
2017-10-15 20:22:17 +01:00
Matthew Mosesohn
7e4668859b Change file used to check kubeadm upgrade method (#1784)
* Change file used to check kubeadm upgrade method

Test for ca.crt instead of admin.conf because admin.conf
is created during normal deployment.

* more fixes for upgrade
2017-10-15 10:33:22 +01:00
Matthew Mosesohn
92d038062e Fix node authorization for cloudprovider installs (#1794)
In 1.8, the Node authorization mode should be listed first to
allow kubelet to access secrets. This seems to only impact
environments with cloudprovider enabled.
2017-10-14 11:28:46 +01:00
abelgana
2972bceb90 Change raw execution to use the yum module (#1785)
* Change raw execution to use the yum module

Changed raw execution to use the yum module provided by Ansible.

* Replace ansible_ssh_* by ansible_*

Ansible 2.0 has deprecated the “ssh” from ansible_ssh_user, ansible_ssh_host, and ansible_ssh_port to become ansible_user, ansible_host, and ansible_port. If you are using a version of Ansible prior to 2.0, you should continue using the older style variables (ansible_ssh_*). These shorter variables are ignored, without warning, in older versions of Ansible.

I am not sure about the broader impact of this change. But I have seen on the requirements the version required is ansible>=2.4.0.

http://docs.ansible.com/ansible/latest/intro_inventory.html
2017-10-14 09:52:40 +01:00
刘旭
cb0a60a0fe calico v2.5.0 should use calico/routereflector:v0.4.0 (#1792) 2017-10-14 09:51:48 +01:00
Matthew Mosesohn
3ee91e15ff Use commas in no_proxy (#1782) 2017-10-13 15:43:10 +01:00
Matthew Mosesohn
ef47a73382 Add new addon Istio (#1744)
* add istio addon

* add addons to a ci job
2017-10-13 15:42:54 +01:00
Matthew Mosesohn
dc515e5ac5 Remove kernel-upgrade role (#1798)
This role only supports Red Hat type distros and is not maintained
or used by many users. It should be removed because it creates
feature disparity between supported OSes and is not maintained.
2017-10-13 15:36:21 +01:00
Julian Poschmann
56763d4288 Persist br_netfilter module loading (#1760) 2017-10-13 10:50:29 +01:00
Maxim Krasilnikov
ad9fa73301 Remove cert_managment var definition from k8s-cluster group vars (#1790) 2017-10-13 10:21:39 +01:00
Matthew Mosesohn
10dd049912 Revert "Security fixes for etcd (#1778)" (#1786)
This reverts commit 4209f1cbfd.
2017-10-12 14:02:51 +01:00
Matthew Mosesohn
4209f1cbfd Security fixes for etcd (#1778)
* Security fixes for etcd

* Use certs when querying etcd
2017-10-12 13:32:54 +01:00
Matthew Mosesohn
ee83e874a8 Clear admin kubeconfig when rotating certs (#1772)
* Clear admin kubeconfig when rotating certs

* Update main.yml
2017-10-12 09:55:46 +01:00
Vijay Katam
27ed73e3e3 Rename dns_server, add var for selinux. (#1572)
* Rename dns_server to dnsmasq_dns_server so that it includes role prefix
as the var name is generic and conflicts when integrating with existing ansible automation.
*  Enable selinux state to be configurable with new var preinstall_selinux_state
2017-10-11 20:40:21 +01:00
Aivars Sterns
e41c0532e3 add possibility to disable fail with swap (#1773) 2017-10-11 19:49:31 +01:00
Matthew Mosesohn
eeb7274d65 Adjust memory reservation for master nodes (#1769) 2017-10-11 19:47:42 +01:00
Matthew Mosesohn
eb0dcf6063 Improve proxy (#1771)
* Set no_proxy to all local ips

* Use proxy settings on all necessary tasks
2017-10-11 19:47:27 +01:00
Matthew Mosesohn
83be0735cd Fix setting etcd client cert serial (#1775) 2017-10-11 19:47:11 +01:00
Matthew Mosesohn
fe4ba51d1a Set node IP correctly (#1770)
Fixes #1741
2017-10-11 15:28:42 +01:00
Hyunsun Moon
adf575b75e Set default value for disable_shared_pid (#1710)
PID namespace sharing is disabled only in Kubernetes 1.7.
Explicitly enabling it by default could help reduce unexpected
results when upgrading to or downgrading from 1.7.
2017-10-11 14:55:51 +01:00
Spencer Smith
e5426f74a8 Merge pull request #1762 from manics/bindir-helm
Include bin_dir when patching helm tiller with kubectl
2017-10-10 10:40:47 -04:00
Spencer Smith
f5212d3b79 Merge pull request #1752 from pmontanari/patch-1
Force synchronize to use ssh_args so it works when using bastion
2017-10-10 10:40:01 -04:00
Spencer Smith
3d09c4be75 Merge pull request #1756 from kubernetes-incubator/fix_bool_assert
Fix bool check assert
2017-10-10 10:38:53 -04:00
Spencer Smith
f2db15873d Merge pull request #1754 from ArchiFleKs/rkt-kubelet-fix
add hosts to rkt kubelet
2017-10-10 10:37:36 -04:00
ArchiFleKs
7c663de6c9 add /etc/hosts volume to rkt templates 2017-10-09 16:41:51 +02:00
Simon Li
c14bbcdbf2 Include bin_dir when patching helm tiller with kubectl 2017-10-09 15:17:52 +01:00
ant31
1be4c1935a Fix bool check assert 2017-10-06 17:02:38 +00:00
pmontanari
764b1aa5f8 Force synchronize to use ssh_args so it works when using bastion
In case ssh.config is set to use bastion, synchronize needs to use it too.
2017-10-06 00:21:54 +02:00
Spencer Smith
d13b07ba59 Merge pull request #1751 from bradbeam/calicoprometheus
Adding calico/node env vars for prometheus configuration
2017-10-05 17:29:12 -04:00
Spencer Smith
028afab908 Merge pull request #1750 from bradbeam/dnsmasq2
Followup fix for CVE-2017-14491
2017-10-05 17:28:28 -04:00
Brad Beam
55dfae2a52 Followup fix for CVE-2017-14491 2017-10-05 11:31:04 -05:00
Matthew Mosesohn
994324e19c Update gce CI (#1748)
Use image family for picking latest coreos image
Update python deps
2017-10-05 16:52:28 +01:00
Brad Beam
b81c0d869c Adding calico/node env vars for prometheus configuration 2017-10-05 08:46:01 -05:00
Matthew Mosesohn
f14f04c5ea Upgrade to kubernetes v1.8.0 (#1730)
* Upgrade to kubernetes v1.8.0

hyperkube no longer contains rsync, so now use cp

* Enable node authorization mode

* change kube-proxy cert group name
2017-10-05 10:51:21 +01:00
Aivars Sterns
9c86da1403 Normalize tags in all places to prepare for tag fixing in future (#1739) 2017-10-05 08:43:04 +01:00
Spencer Smith
cb611b5ed0 Merge pull request #1742 from mattymo/facts_as_vars
Move set_facts to kubespray-defaults defaults
2017-10-04 15:46:39 -04:00
Spencer Smith
891269ef39 Merge pull request #1743 from rsmitty/kube-client
Don't delegate cert gathering before creating admin.conf
2017-10-04 15:38:21 -04:00
Spencer Smith
ab171a1d6d don't delegate cert slurp 2017-10-04 13:06:51 -04:00
Matthew Mosesohn
a56738324a Move set_facts to kubespray-defaults defaults
These facts can be generated in defaults with a performance
boost.

Also cleaned up duplicate etcd var names.
2017-10-04 14:02:47 +01:00
Maxim Krasilnikov
da61b8e7c9 Added workaround for vagrant 1.9 and centos vm box (#1738) 2017-10-03 11:32:19 +01:00
Maxim Krasilnikov
d6d58bc938 Fixed vagrant up with flannel network, removed old config values (#1737) 2017-10-03 11:16:13 +01:00
Matthew Mosesohn
e42cb43ca5 add bootstrap for debian (#1726) 2017-10-03 08:30:45 +01:00
Brad Beam
ca541c7e4a Ensuring vault service is stopped in reset tasks (#1736) 2017-10-03 08:30:28 +01:00
Brad Beam
96e14424f0 Adding kubedns update for CVE-2017-14491 (#1735) 2017-10-03 08:30:14 +01:00
Brad Beam
47830896e8 Merge pull request #1733 from chapsuk/vagrant_mem
Increase vagrant vm's memory size
2017-10-02 15:45:37 -05:00
mkrasilnikov
5fd4b4afae Increase vagrant vm's memory size 2017-10-02 23:16:39 +03:00
Matthew Mosesohn
dae9f6d3c2 Test if tokens are expired from host instead of inside container (#1727)
* Test if tokens are expired from host instead of inside container

* Update main.yml
2017-10-02 13:14:50 +01:00
Julian Poschmann
8e1210f96e Fix cluster-network w/ prefix > 25 not possible with CNI (#1713) 2017-10-01 10:43:00 +01:00
Matthew Mosesohn
56aa683f28 Fix logic in idempotency tests in CI (#1722) 2017-10-01 10:42:33 +01:00
Brad Beam
1b9a6d7ad8 Merge pull request #1672 from manics/bastion-proxycommand-newline
Insert a newline in bastion ssh config after ProxyCommand conditional
2017-09-29 11:37:47 -05:00
Brad Beam
f591c4db56 Merge pull request #1720 from shiftky/improve_integration_doc
Improve playbook example of integration document
2017-09-29 11:34:44 -05:00
Peter Slijkhuis
371fa51e82 Make installation of EPEL optional (#1721) 2017-09-29 13:44:29 +01:00
shiftky
a927ed2da4 Improve playbook example of integration document 2017-09-29 18:00:01 +09:00
Matthew Mosesohn
a55675acf8 Enable RBAC with kubeadm always (#1711) 2017-09-29 09:18:24 +01:00
Matthew Mosesohn
25dd3d476a Fix error for azure+calico assert (#1717)
Fixes #1716
2017-09-29 08:17:18 +01:00
Simon Li
7c2b12ebd7 Insert a newline in bastion after ProxyCommand conditional 2017-09-18 16:29:12 +01:00
310 changed files with 6502 additions and 2222 deletions

.gitignore

@@ -10,6 +10,7 @@ temp
*.bak
*.tfstate
*.tfstate.backup
contrib/terraform/aws/credentials.tfvars
**/*.sw[pon]
/ssh-bastion.conf
**/*.sw[pon]


@@ -20,7 +20,6 @@ variables:
before_script:
- pip install -r tests/requirements.txt
- mkdir -p /.ssh
- cp tests/ansible.cfg .
.job: &job
tags:
@@ -40,27 +39,20 @@ before_script:
GCE_USER: travis
SSH_USER: $GCE_USER
TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
CI_TEST_VARS: "./tests/files/${CI_JOB_NAME}.yml"
CONTAINER_ENGINE: docker
PRIVATE_KEY: $GCE_PRIVATE_KEY
GS_ACCESS_KEY_ID: $GS_KEY
GS_SECRET_ACCESS_KEY: $GS_SECRET
CLOUD_MACHINE_TYPE: "g1-small"
GCE_PREEMPTIBLE: "false"
ANSIBLE_KEEP_REMOTE_FILES: "1"
ANSIBLE_CONFIG: ./tests/ansible.cfg
BOOTSTRAP_OS: none
DOWNLOAD_LOCALHOST: "false"
DOWNLOAD_RUN_ONCE: "false"
IDEMPOT_CHECK: "false"
RESET_CHECK: "false"
UPGRADE_TEST: "false"
KUBEADM_ENABLED: "false"
RESOLVCONF_MODE: docker_dns
LOG_LEVEL: "-vv"
ETCD_DEPLOYMENT: "docker"
KUBELET_DEPLOYMENT: "host"
VAULT_DEPLOYMENT: "docker"
WEAVE_CPU_LIMIT: "100m"
AUTHORIZATION_MODES: "{ 'authorization_modes': [] }"
MAGIC: "ci check this"
.gce: &gce
@@ -81,7 +73,9 @@ before_script:
- echo $GCE_CREDENTIALS > $HOME/.ssh/gce.json
- chmod 400 $HOME/.ssh/id_rsa
- ansible-playbook --version
- export PYPATH=$([ $BOOTSTRAP_OS = none ] && echo /usr/bin/python || echo /opt/bin/python)
- export PYPATH=$([[ ! "$CI_JOB_NAME" =~ "coreos" ]] && echo /usr/bin/python || echo /opt/bin/python)
- echo "CI_JOB_NAME is $CI_JOB_NAME"
- echo "PYPATH is $PYPATH"
script:
- pwd
- ls
@@ -90,48 +84,36 @@ before_script:
- >
ansible-playbook tests/cloud_playbooks/create-gce.yml -i tests/local_inventory/hosts.cfg -c local
${LOG_LEVEL}
-e cloud_image=${CLOUD_IMAGE}
-e cloud_region=${CLOUD_REGION}
-e gce_credentials_file=${HOME}/.ssh/gce.json
-e gce_project_id=${GCE_PROJECT_ID}
-e gce_service_account_email=${GCE_ACCOUNT}
-e cloud_machine_type=${CLOUD_MACHINE_TYPE}
-e inventory_path=${PWD}/inventory/inventory.ini
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-e mode=${CLUSTER_MODE}
-e test_id=${TEST_ID}
-e startup_script="'${STARTUP_SCRIPT}'"
-e preemptible=$GCE_PREEMPTIBLE
# Check out latest tag if testing upgrade
# Uncomment when gitlab kargo repo has tags
#- test "${UPGRADE_TEST}" != "false" && git fetch --all && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
- test "${UPGRADE_TEST}" != "false" && git checkout 72ae7638bcc94c66afa8620dfa4ad9a9249327ea
- test "${UPGRADE_TEST}" != "false" && git checkout ba0a03a8ba2d97a73d06242ec4bb3c7e2012e58c
# Checkout the CI vars file so it is available
- test "${UPGRADE_TEST}" != "false" && git checkout "${CI_BUILD_REF}" tests/files/${CI_JOB_NAME}.yml
# Workaround https://github.com/kubernetes-incubator/kubespray/issues/2021
- 'sh -c "echo ignore_assert_errors: true | tee -a tests/files/${CI_JOB_NAME}.yml"'
# Create cluster
- >
ansible-playbook -i inventory/inventory.ini -b --become-user=root --private-key=${HOME}/.ssh/id_rsa -u $SSH_USER
ansible-playbook
-i inventory/inventory.ini
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
-e ansible_python_interpreter=${PYPATH}
-e ansible_ssh_user=${SSH_USER}
-e bootstrap_os=${BOOTSTRAP_OS}
-e cloud_provider=gce
-e cert_management=${CERT_MGMT:-script}
-e "{deploy_netchecker: true}"
-e "{download_localhost: ${DOWNLOAD_LOCALHOST}}"
-e "{download_run_once: ${DOWNLOAD_RUN_ONCE}}"
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-e kubedns_min_replicas=1
-e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
-e local_release_dir=${PWD}/downloads
-e resolvconf_mode=${RESOLVCONF_MODE}
-e vault_deployment_type=${VAULT_DEPLOYMENT}
-e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
-e "${AUTHORIZATION_MODES}"
--limit "all:!fake_hosts"
cluster.yml
@@ -141,27 +123,17 @@ before_script:
test "${UPGRADE_TEST}" == "basic" && PLAYBOOK="cluster.yml";
test "${UPGRADE_TEST}" == "graceful" && PLAYBOOK="upgrade-cluster.yml";
git checkout "${CI_BUILD_REF}";
ansible-playbook -i inventory/inventory.ini -b --become-user=root --private-key=${HOME}/.ssh/id_rsa -u $SSH_USER
ansible-playbook
-i inventory/inventory.ini
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
-e ansible_python_interpreter=${PYPATH}
-e ansible_ssh_user=${SSH_USER}
-e bootstrap_os=${BOOTSTRAP_OS}
-e cloud_provider=gce
-e "{deploy_netchecker: true}"
-e "{download_localhost: ${DOWNLOAD_LOCALHOST}}"
-e "{download_run_once: ${DOWNLOAD_RUN_ONCE}}"
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-e kubedns_min_replicas=1
-e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
-e local_release_dir=${PWD}/downloads
-e resolvconf_mode=${RESOLVCONF_MODE}
-e vault_deployment_type=${VAULT_DEPLOYMENT}
-e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
-e "${AUTHORIZATION_MODES}"
--limit "all:!fake_hosts"
$PLAYBOOK;
fi
@@ -181,25 +153,16 @@ before_script:
## Idempotency checks 1/5 (repeat deployment)
- >
if [ "${IDEMPOT_CHECK}" = "true" ]; then
ansible-playbook -i inventory/inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS
-b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
ansible-playbook
-i inventory/inventory.ini
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-e bootstrap_os=${BOOTSTRAP_OS}
-e cloud_provider=gce
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
-e ansible_python_interpreter=${PYPATH}
-e "{deploy_netchecker: true}"
-e "{download_localhost: ${DOWNLOAD_LOCALHOST}}"
-e "{download_run_once: ${DOWNLOAD_RUN_ONCE}}"
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
-e kubedns_min_replicas=1
-e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
-e local_release_dir=${PWD}/downloads
-e resolvconf_mode=${RESOLVCONF_MODE}
-e vault_deployment_type=${VAULT_DEPLOYMENT}
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
-e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-e "${AUTHORIZATION_MODES}"
--limit "all:!fake_hosts"
cluster.yml;
fi
@@ -207,20 +170,29 @@ before_script:
## Idempotency checks 2/5 (Advanced DNS checks)
- >
if [ "${IDEMPOT_CHECK}" = "true" ]; then
ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH}
-u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root
ansible-playbook
-i inventory/inventory.ini
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
--limit "all:!fake_hosts"
tests/testcases/040_check-network-adv.yml $LOG_LEVEL;
fi
## Idempotency checks 3/5 (reset deployment)
- >
if [ "${IDEMPOT_CHECK}" = "true" AND "${RESET_CHECK}" = "true" ]; then
ansible-playbook -i inventory/inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS
-b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
ansible-playbook
-i inventory/inventory.ini
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-e bootstrap_os=${BOOTSTRAP_OS}
-e cloud_provider=gce
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
-e ansible_python_interpreter=${PYPATH}
-e reset_confirmation=yes
--limit "all:!fake_hosts"
@@ -229,33 +201,24 @@ before_script:
## Idempotency checks 4/5 (redeploy after reset)
- >
if [ "${IDEMPOT_CHECK}" = "true" AND "${RESET_CHECK}" = "true" ]; then
ansible-playbook -i inventory/inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS
-b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
ansible-playbook
-i inventory/inventory.ini
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-e bootstrap_os=${BOOTSTRAP_OS}
-e cloud_provider=gce
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
-e ansible_python_interpreter=${PYPATH}
-e "{deploy_netchecker: true}"
-e "{download_localhost: ${DOWNLOAD_LOCALHOST}}"
-e "{download_run_once: ${DOWNLOAD_RUN_ONCE}}"
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
-e kubedns_min_replicas=1
-e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
-e local_release_dir=${PWD}/downloads
-e resolvconf_mode=${RESOLVCONF_MODE}
-e vault_deployment_type=${VAULT_DEPLOYMENT}
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
-e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-e "${AUTHORIZATION_MODES}"
--limit "all:!fake_hosts"
cluster.yml;
fi
## Idempotency checks 5/5 (Advanced DNS checks)
- >
if [ "${IDEMPOT_CHECK}" = "true" AND "${RESET_CHECK}" = "true" ]; then
if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH}
-u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root
--limit "all:!fake_hosts"
@@ -265,166 +228,77 @@ before_script:
after_script:
- >
ansible-playbook -i inventory/inventory.ini tests/cloud_playbooks/delete-gce.yml -c local $LOG_LEVEL
-e mode=${CLUSTER_MODE}
-e @${CI_TEST_VARS}
-e test_id=${TEST_ID}
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-e gce_project_id=${GCE_PROJECT_ID}
-e gce_service_account_email=${GCE_ACCOUNT}
-e gce_credentials_file=${HOME}/.ssh/gce.json
-e cloud_image=${CLOUD_IMAGE}
-e inventory_path=${PWD}/inventory/inventory.ini
-e cloud_region=${CLOUD_REGION}
# Test matrix. Leave the comments for markup scripts.
.coreos_calico_aio_variables: &coreos_calico_aio_variables
# stage: deploy-gce-part1
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
KUBE_NETWORK_PLUGIN: calico
CLOUD_IMAGE: coreos-stable-1465-6-0-v20170817
CLOUD_REGION: us-west1-b
CLOUD_MACHINE_TYPE: "n1-standard-2"
CLUSTER_MODE: aio
BOOTSTRAP_OS: coreos
RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
##User-data to simply turn off coreos upgrades
STARTUP_SCRIPT: 'systemctl disable locksmithd && systemctl stop locksmithd'
MOVED_TO_GROUP_VARS: "true"
.ubuntu_canal_ha_rbac_variables: &ubuntu_canal_ha_rbac_variables
.ubuntu_canal_ha_variables: &ubuntu_canal_ha_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: canal
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
CLOUD_IMAGE: ubuntu-1604-xenial
CLOUD_REGION: europe-west1-b
CLUSTER_MODE: ha
UPGRADE_TEST: "graceful"
STARTUP_SCRIPT: ""
.centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: weave
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
CLOUD_IMAGE: centos-7
CLOUD_MACHINE_TYPE: "n1-standard-1"
CLOUD_REGION: us-central1-b
CLUSTER_MODE: ha
KUBEADM_ENABLED: "true"
UPGRADE_TEST: "graceful"
STARTUP_SCRIPT: ""
.ubuntu_canal_kubeadm_variables: &ubuntu_canal_kubeadm_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: canal
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
CLOUD_IMAGE: ubuntu-1604-xenial
CLOUD_MACHINE_TYPE: "n1-standard-1"
CLOUD_REGION: europe-west1-b
CLUSTER_MODE: ha
KUBEADM_ENABLED: "true"
STARTUP_SCRIPT: ""
MOVED_TO_GROUP_VARS: "true"
.ubuntu_contiv_sep_variables: &ubuntu_contiv_sep_variables
# stage: deploy-gce-special
MOVED_TO_GROUP_VARS: "true"
.rhel7_weave_variables: &rhel7_weave_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: weave
CLOUD_IMAGE: rhel-7
CLOUD_REGION: europe-west1-b
CLUSTER_MODE: default
STARTUP_SCRIPT: ""
MOVED_TO_GROUP_VARS: "true"
.centos7_flannel_variables: &centos7_flannel_variables
.centos7_flannel_addons_variables: &centos7_flannel_addons_variables
# stage: deploy-gce-part2
KUBE_NETWORK_PLUGIN: flannel
CLOUD_IMAGE: centos-7
CLOUD_REGION: us-west1-a
CLOUD_MACHINE_TYPE: "n1-standard-2"
CLUSTER_MODE: default
STARTUP_SCRIPT: ""
MOVED_TO_GROUP_VARS: "true"
.debian8_calico_variables: &debian8_calico_variables
# stage: deploy-gce-part2
KUBE_NETWORK_PLUGIN: calico
CLOUD_IMAGE: debian-8-kubespray
CLOUD_REGION: us-central1-b
CLUSTER_MODE: default
STARTUP_SCRIPT: ""
MOVED_TO_GROUP_VARS: "true"
.coreos_canal_variables: &coreos_canal_variables
# stage: deploy-gce-part2
KUBE_NETWORK_PLUGIN: canal
CLOUD_IMAGE: coreos-stable-1465-6-0-v20170817
CLOUD_REGION: us-east1-b
CLUSTER_MODE: default
BOOTSTRAP_OS: coreos
IDEMPOT_CHECK: "true"
RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
STARTUP_SCRIPT: 'systemctl disable locksmithd && systemctl stop locksmithd'
MOVED_TO_GROUP_VARS: "true"
.rhel7_canal_sep_variables: &rhel7_canal_sep_variables
# stage: deploy-gce-special
KUBE_NETWORK_PLUGIN: canal
CLOUD_IMAGE: rhel-7
CLOUD_REGION: us-east1-b
CLUSTER_MODE: separate
STARTUP_SCRIPT: ""
MOVED_TO_GROUP_VARS: "true"
.ubuntu_weave_sep_variables: &ubuntu_weave_sep_variables
# stage: deploy-gce-special
KUBE_NETWORK_PLUGIN: weave
CLOUD_IMAGE: ubuntu-1604-xenial
CLOUD_REGION: us-central1-b
CLUSTER_MODE: separate
IDEMPOT_CHECK: "false"
STARTUP_SCRIPT: ""
MOVED_TO_GROUP_VARS: "true"
.centos7_calico_ha_variables: &centos7_calico_ha_variables
# stage: deploy-gce-special
KUBE_NETWORK_PLUGIN: calico
DOWNLOAD_LOCALHOST: "true"
DOWNLOAD_RUN_ONCE: "true"
CLOUD_IMAGE: centos-7
CLOUD_REGION: europe-west1-b
CLUSTER_MODE: ha-scale
IDEMPOT_CHECK: "true"
STARTUP_SCRIPT: ""
MOVED_TO_GROUP_VARS: "true"
.coreos_alpha_weave_ha_variables: &coreos_alpha_weave_ha_variables
# stage: deploy-gce-special
KUBE_NETWORK_PLUGIN: weave
CLOUD_IMAGE: coreos-alpha-1506-0-0-v20170817
CLOUD_REGION: us-west1-a
CLUSTER_MODE: ha-scale
BOOTSTRAP_OS: coreos
RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
STARTUP_SCRIPT: 'systemctl disable locksmithd && systemctl stop locksmithd'
MOVED_TO_GROUP_VARS: "true"
.ubuntu_rkt_sep_variables: &ubuntu_rkt_sep_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: flannel
CLOUD_IMAGE: ubuntu-1604-xenial
CLOUD_REGION: us-central1-b
CLUSTER_MODE: separate
ETCD_DEPLOYMENT: rkt
KUBELET_DEPLOYMENT: rkt
STARTUP_SCRIPT: ""
MOVED_TO_GROUP_VARS: "true"
.ubuntu_vault_sep_variables: &ubuntu_vault_sep_variables
# stage: deploy-gce-part1
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
CLOUD_MACHINE_TYPE: "n1-standard-2"
KUBE_NETWORK_PLUGIN: canal
CERT_MGMT: vault
CLOUD_IMAGE: ubuntu-1604-xenial
CLOUD_REGION: us-central1-b
CLUSTER_MODE: separate
STARTUP_SCRIPT: ""
MOVED_TO_GROUP_VARS: "true"
.ubuntu_flannel_rbac_variables: &ubuntu_flannel_rbac_variables
.ubuntu_flannel_variables: &ubuntu_flannel_variables
# stage: deploy-gce-special
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
KUBE_NETWORK_PLUGIN: flannel
CLOUD_IMAGE: ubuntu-1604-xenial
CLOUD_REGION: europe-west1-b
CLUSTER_MODE: separate
STARTUP_SCRIPT: ""
MOVED_TO_GROUP_VARS: "true"
# Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
coreos-calico-aio:
@@ -448,24 +322,24 @@ coreos-calico-sep-triggers:
when: on_success
only: ['triggers']
centos7-flannel:
centos7-flannel-addons:
stage: deploy-gce-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos7_flannel_variables
<<: *centos7_flannel_addons_variables
when: on_success
except: ['triggers']
only: [/^pr-.*$/]
centos7-flannel-triggers:
centos7-flannel-addons-triggers:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos7_flannel_variables
<<: *centos7_flannel_addons_variables
when: on_success
only: ['triggers']
@@ -491,28 +365,28 @@ ubuntu-weave-sep-triggers:
only: ['triggers']
# More builds for PRs/merges (manual) and triggers (auto)
ubuntu-canal-ha-rbac:
ubuntu-canal-ha:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_ha_rbac_variables
<<: *ubuntu_canal_ha_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
ubuntu-canal-ha-rbac-triggers:
ubuntu-canal-ha-triggers:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_ha_rbac_variables
<<: *ubuntu_canal_ha_variables
when: on_success
only: ['triggers']
ubuntu-canal-kubeadm-rbac:
ubuntu-canal-kubeadm:
stage: deploy-gce-part1
<<: *job
<<: *gce
@@ -533,7 +407,7 @@ ubuntu-canal-kubeadm-triggers:
when: on_success
only: ['triggers']
centos-weave-kubeadm-rbac:
centos-weave-kubeadm:
stage: deploy-gce-part1
<<: *job
<<: *gce
@@ -554,6 +428,17 @@ centos-weave-kubeadm-triggers:
when: on_success
only: ['triggers']
ubuntu-contiv-sep:
stage: deploy-gce-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_contiv_sep_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
rhel7-weave:
stage: deploy-gce-part1
<<: *job
@@ -693,13 +578,13 @@ ubuntu-vault-sep:
except: ['triggers']
only: ['master', /^pr-.*$/]
ubuntu-flannel-rbac-sep:
ubuntu-flannel-sep:
stage: deploy-gce-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_flannel_rbac_variables
<<: *ubuntu_flannel_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]


@@ -2,7 +2,7 @@
## Deploy a production ready kubernetes cluster
If you have questions, join us on the [kubernetes slack](https://slack.k8s.io), channel **#kubespray**.
If you have questions, join us on the [kubernetes slack](https://kubernetes.slack.com), channel **#kubespray**.
- Can be deployed on **AWS, GCE, Azure, OpenStack or Baremetal**
- **Highly available** cluster
@@ -29,6 +29,7 @@ To deploy the cluster you can use :
* [Network plugins](#network-plugins)
* [Vagrant install](docs/vagrant.md)
* [CoreOS bootstrap](docs/coreos.md)
* [Debian Jessie setup](docs/debian.md)
* [Downloaded artifacts](docs/downloads.md)
* [Cloud providers](docs/cloud.md)
* [OpenStack](docs/openstack.md)
@@ -53,11 +54,12 @@ Versions of supported components
--------------------------------
[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.7.3 <br>
[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.9.2 <br>
[etcd](https://github.com/coreos/etcd/releases) v3.2.4 <br>
[flanneld](https://github.com/coreos/flannel/releases) v0.8.0 <br>
[calico](https://docs.projectcalico.org/v2.5/releases/) v2.5.0 <br>
[canal](https://github.com/projectcalico/canal) (given calico/flannel versions) <br>
[contiv](https://github.com/contiv/install/releases) v1.0.3 <br>
[weave](http://weave.works/) v2.0.1 <br>
[docker](https://www.docker.com/) v1.13 (see note)<br>
[rkt](https://coreos.com/rkt/docs/latest/) v1.21.0 (see Note 2)<br>
@@ -72,7 +74,7 @@ plugins can be deployed for a given single cluster.
Requirements
--------------
* **Ansible v2.3 (or newer) and python-netaddr is installed on the machine
* **Ansible v2.4 (or newer) and python-netaddr is installed on the machine
that will run Ansible commands**
* **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
* The target servers must have **access to the Internet** in order to pull docker images.
@@ -92,6 +94,9 @@ You can choose between 4 network plugins. (default: `calico`, except Vagrant use
* [**canal**](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
* [**contiv**](docs/contiv.md): supports vlan, vxlan, bgp and Cisco SDN networking. This plugin is able to
apply firewall policies, segregate containers in multiple networks and bridge pods onto physical networks.
* [**weave**](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster. <br>
(Please refer to `weave` [troubleshooting documentation](http://docs.weave.works/weave/latest_release/troubleshooting.html)).
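For reference, a minimal sketch of selecting a plugin (the group vars path follows the inventory layout used elsewhere in this changeset and is an assumption for your own setup):

```yaml
# inventory/group_vars/k8s-cluster.yml (illustrative)
kube_network_plugin: calico   # or flannel, canal, contiv, weave
```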
@@ -106,7 +111,7 @@ See also [Network checker](docs/netcheck.md).
- [Deploy a Kubernetes Cluster with Kubespray (video)](https://www.youtube.com/watch?v=N9q51JgbWu8)
## Tools and projects on top of Kubespray
- [Digital Rebar](https://github.com/digitalrebar/digitalrebar)
- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/master/doc/integrations/ansible.rst)
- [Kubespray-cli](https://github.com/kubespray/kubespray-cli)
- [Fuel-ccp-installer](https://github.com/openstack/fuel-ccp-installer)
- [Terraform Contrib](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform)

Vagrantfile

@@ -3,7 +3,7 @@
require 'fileutils'
Vagrant.require_version ">= 1.8.0"
Vagrant.require_version ">= 1.9.0"
CONFIG = File.join(File.dirname(__FILE__), "vagrant/config.rb")
@@ -21,16 +21,19 @@ SUPPORTED_OS = {
$num_instances = 3
$instance_name_prefix = "k8s"
$vm_gui = false
$vm_memory = 1536
$vm_memory = 2048
$vm_cpus = 1
$shared_folders = {}
$forwarded_ports = {}
$subnet = "172.17.8"
$os = "ubuntu"
$network_plugin = "flannel"
# The first three nodes are etcd servers
$etcd_instances = $num_instances
# The first two nodes are masters
# The first two nodes are kube masters
$kube_master_instances = $num_instances == 1 ? $num_instances : ($num_instances - 1)
# All nodes are kube nodes
$kube_node_instances = $num_instances
$local_release_dir = "/vagrant/temp"
host_vars = {}
@@ -39,9 +42,6 @@ if File.exist?(CONFIG)
require CONFIG
end
# All nodes are kube nodes
$kube_node_instances = $num_instances
$box = SUPPORTED_OS[$os][:box]
# if $inventory is not set, try to use example
$inventory = File.join(File.dirname(__FILE__), "inventory") if ! $inventory
@@ -115,16 +115,22 @@ Vagrant.configure("2") do |config|
ip = "#{$subnet}.#{i+100}"
host_vars[vm_name] = {
"ip": ip,
"flannel_interface": ip,
"flannel_backend_type": "host-gw",
"bootstrap_os": SUPPORTED_OS[$os][:bootstrap_os],
"local_release_dir" => $local_release_dir,
"download_run_once": "False",
# Override the default 'calico' with flannel.
# inventory/group_vars/k8s-cluster.yml
"kube_network_plugin": "flannel",
"bootstrap_os": SUPPORTED_OS[$os][:bootstrap_os]
"kube_network_plugin": $network_plugin
}
config.vm.network :private_network, ip: ip
# workaround for Vagrant 1.9.1 and centos vm
# https://github.com/hashicorp/vagrant/issues/8096
if Vagrant::VERSION == "1.9.1" && $os == "centos"
config.vm.provision "shell", inline: "service network restart", run: "always"
end
# Disable swap for each vm
config.vm.provision "shell", inline: "swapoff -a"
# Only execute once the Ansible provisioner,
# when all the machines are up and ready.
@@ -137,7 +143,7 @@ Vagrant.configure("2") do |config|
ansible.sudo = true
ansible.limit = "all"
ansible.host_key_checking = false
ansible.raw_arguments = ["--forks=#{$num_instances}"]
ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache"]
ansible.host_vars = host_vars
#ansible.tags = ['download']
ansible.groups = {


@@ -1,7 +1,6 @@
[ssh_connection]
pipelining=True
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100
#ssh_args = -F ./ssh-bastion.conf -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
#control_path = ~/.ssh/ansible-%%r@%%h:%%p
[defaults]
host_key_checking=False
@@ -11,4 +10,5 @@ fact_caching_connection = /tmp
stdout_callback = skippy
library = ./library
callback_whitelist = profile_tasks
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
deprecation_warnings=False


@@ -26,18 +26,20 @@
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kernel-upgrade, tags: kernel-upgrade, when: kernel_upgrade is defined and kernel_upgrade }
- { role: kubernetes/preinstall, tags: preinstall }
- { role: docker, tags: docker }
- role: rkt
tags: rkt
when: "'rkt' in [etcd_deployment_type, kubelet_deployment_type, vault_deployment_type]"
- { role: download, tags: download, skip_downloads: false }
environment: "{{proxy_env}}"
- hosts: etcd:k8s-cluster:vault
- hosts: etcd:k8s-cluster:vault:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults, when: "cert_management == 'vault'" }
- { role: vault, tags: vault, vault_bootstrap: true, when: "cert_management == 'vault'" }
environment: "{{proxy_env}}"
- hosts: etcd
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -45,29 +47,33 @@
- { role: kubespray-defaults}
- { role: etcd, tags: etcd, etcd_cluster_setup: true }
- hosts: k8s-cluster
- hosts: k8s-cluster:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: etcd, tags: etcd, etcd_cluster_setup: false }
- hosts: etcd:k8s-cluster:vault
- hosts: etcd:k8s-cluster:vault:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: vault, tags: vault, when: "cert_management == 'vault'"}
environment: "{{proxy_env}}"
- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes/node, tags: node }
environment: "{{proxy_env}}"
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes/master, tags: master }
- { role: kubernetes/client, tags: client }
- { role: kubernetes-apps/cluster_roles, tags: cluster-roles }
- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -76,14 +82,18 @@
- { role: kubernetes/kubeadm, tags: kubeadm, when: "kubeadm_enabled" }
- { role: network_plugin, tags: network }
- hosts: kube-master
- hosts: kube-master[0]
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes-apps/rotate_tokens, tags: rotate_tokens, when: "secret_changed|default(false)" }
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes-apps/network_plugin, tags: network }
- { role: kubernetes-apps/policy_controller, tags: policy-controller }
- { role: kubernetes/client, tags: client }
- hosts: calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -97,6 +107,7 @@
- { role: kubespray-defaults}
- { role: dnsmasq, when: "dns_mode == 'dnsmasq_kubedns'", tags: dnsmasq }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf }
environment: "{{proxy_env}}"
- hosts: kube-master[0]
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"


@@ -1,58 +1,3 @@
## Kubernetes Community Code of Conduct
# Kubernetes Community Code of Conduct
### Contributor Code of Conduct
As contributors and maintainers of this project, and in the interest of fostering
an open and welcoming community, we pledge to respect all people who contribute
through reporting issues, posting feature requests, updating documentation,
submitting pull requests or patches, and other activities.
We are committed to making participation in this project a harassment-free experience for
everyone, regardless of level of experience, gender, gender identity and expression,
sexual orientation, disability, personal appearance, body size, race, ethnicity, age,
religion, or nationality.
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing other's private information, such as physical or electronic addresses,
without explicit permission
* Other unethical or unprofessional conduct.
Project maintainers have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are not
aligned to this Code of Conduct. By adopting this Code of Conduct, project maintainers
commit themselves to fairly and consistently applying these principles to every aspect
of managing this project. Project maintainers who do not follow or enforce the Code of
Conduct may be permanently removed from the project team.
This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting a Kubernetes maintainer, Sarah Novotny <sarahnovotny@google.com>, and/or Dan Kohn <dan@linuxfoundation.org>.
This Code of Conduct is adapted from the Contributor Covenant
(http://contributor-covenant.org), version 1.2.0, available at
http://contributor-covenant.org/version/1/2/0/
### Kubernetes Events Code of Conduct
Kubernetes events are working conferences intended for professional networking and collaboration in the
Kubernetes community. Attendees are expected to behave according to professional standards and in accordance
with their employer's policies on appropriate workplace behavior.
While at Kubernetes events or related social networking opportunities, attendees should not engage in
discriminatory or offensive speech or actions regarding gender, sexuality, race, or religion. Speakers should
be especially aware of these concerns.
The Kubernetes team does not condone any statements by speakers contrary to these standards. The Kubernetes
team reserves the right to deny entrance and/or eject from an event (without refund) any individual found to
be engaging in discriminatory or offensive speech or actions.
Please bring any concerns to the immediate attention of Kubernetes event staff.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/code-of-conduct.md?pixel)]()
Please refer to our [Kubernetes Community Code of Conduct](https://git.k8s.io/community/code-of-conduct.md)


@@ -1,8 +1,17 @@
---
- hosts: gfs-cluster
gather_facts: false
vars:
ansible_ssh_pipelining: false
roles:
- { role: bootstrap-os, tags: bootstrap-os}
- hosts: all
gather_facts: true
- hosts: gfs-cluster
vars:
ansible_ssh_pipelining: true
roles:
- { role: glusterfs/server }
@@ -12,6 +21,5 @@
- hosts: kube-master[0]
roles:
- { role: kubernetes-pv/lib }
- { role: kubernetes-pv }


@@ -0,0 +1 @@
../../../inventory/group_vars


@@ -0,0 +1 @@
../../../../roles/bootstrap-os


@@ -4,6 +4,7 @@
with_items:
- { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
- { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}
- { file: glusterfs-kubernetes-endpoint-svc.json.j2, type: svc, dest: glusterfs-kubernetes-endpoint-svc.json}
register: gluster_pv
when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined and hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb is defined


@@ -0,0 +1,12 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "glusterfs"
},
"spec": {
"ports": [
{"port": 1}
]
}
}


@@ -1,60 +0,0 @@
%global srcname ansible_kubespray
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
Name: ansible-kubespray
Version: XXX
Release: XXX
Summary: Ansible modules for installing Kubernetes
Group: System Environment/Libraries
License: ASL 2.0
Vendor: Kubespray <smainklh@gmail.com>
Url: https://github.com/kubernetes-incubator/kubespray
Source0: https://github.com/kubernetes-incubator/kubespray/archive/%{upstream_version}.tar.gz
BuildArch: noarch
BuildRequires: git
BuildRequires: python2-devel
BuildRequires: python-setuptools
BuildRequires: python-d2to1
BuildRequires: python-pbr
Requires: ansible
Requires: python-jinja2
Requires: python-netaddr
%description
Ansible-kubespray is a set of Ansible modules and playbooks for
installing a Kubernetes cluster. If you have questions, join us
on the https://slack.k8s.io, channel '#kubespray'.
%prep
%autosetup -n %{name}-%{upstream_version} -S git
%build
%{__python2} setup.py build
%install
export PBR_VERSION=%{version}
export SKIP_PIP_INSTALL=1
%{__python2} setup.py install --skip-build --root %{buildroot}
%files
%doc README.md
%doc inventory/inventory.example
%config /etc/kubespray/ansible.cfg
%config /etc/kubespray/inventory/group_vars/all.yml
%config /etc/kubespray/inventory/group_vars/k8s-cluster.yml
%license LICENSE
%{python2_sitelib}/%{srcname}-%{version}-py%{python2_version}.egg-info
/usr/local/share/kubespray/roles/
/usr/local/share/kubespray/playbooks/
%defattr(-,root,root)
%changelog


@@ -0,0 +1,61 @@
%global srcname kubespray
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
Name: kubespray
Version: master
Release: %(git describe | sed -r 's/v(\S+-?)-(\S+)-(\S+)/\1.dev\2+\3/')
Summary: Ansible modules for installing Kubernetes
Group: System Environment/Libraries
License: ASL 2.0
Url: https://github.com/kubernetes-incubator/kubespray
Source0: https://github.com/kubernetes-incubator/kubespray/archive/%{upstream_version}.tar.gz#/%{name}-%{release}.tar.gz
BuildArch: noarch
BuildRequires: git
BuildRequires: python2
BuildRequires: python2-devel
BuildRequires: python2-setuptools
BuildRequires: python-d2to1
BuildRequires: python2-pbr
Requires: ansible
Requires: python-jinja2 >= 2.10
Requires: python-netaddr
%description
Ansible-kubespray is a set of Ansible modules and playbooks for
installing a Kubernetes cluster. If you have questions, join us
on the https://slack.k8s.io, channel '#kubespray'.
%prep
%autosetup -n %{name}-%{upstream_version} -S git
%build
export PBR_VERSION=%{release}
%{__python2} setup.py build bdist_rpm
%install
export PBR_VERSION=%{release}
export SKIP_PIP_INSTALL=1
%{__python2} setup.py install --skip-build --root %{buildroot} bdist_rpm
%files
%doc %{_docdir}/%{name}/README.md
%doc %{_docdir}/%{name}/inventory/inventory.example
%config %{_sysconfdir}/%{name}/ansible.cfg
%config %{_sysconfdir}/%{name}/inventory/group_vars/all.yml
%config %{_sysconfdir}/%{name}/inventory/group_vars/k8s-cluster.yml
%license %{_docdir}/%{name}/LICENSE
%{python2_sitelib}/%{srcname}-%{release}-py%{python2_version}.egg-info
%{_datarootdir}/%{name}/roles/
%{_datarootdir}/%{name}/playbooks/
%defattr(-,root,root)
%changelog


@@ -24,7 +24,7 @@ export AWS_DEFAULT_REGION="zzz"
```
- Rename `contrib/terraform/aws/terraform.tfvars.example` to `terraform.tfvars`
- Update `contrib/terraform/aws/terraform.tfvars` with your data
- Update `contrib/terraform/aws/terraform.tfvars` with your data. By default, the Terraform scripts use CoreOS as the base image. If you want to change this behaviour, see the note "Using other distributions than CoreOS" below.
- Allocate a new AWS Elastic IP. Use this for your `loadbalancer_apiserver_address` value (below)
- Create an AWS EC2 SSH Key
- Run with `terraform apply --var-file="credentials.tfvars"` or `terraform apply` depending if you exported your AWS credentials
@@ -36,17 +36,72 @@ terraform apply -var-file=credentials.tfvars -var 'loadbalancer_apiserver_addres
- Terraform automatically creates an Ansible Inventory file called `hosts` with the created infrastructure in the directory `inventory`
- Ansible will automatically generate an ssh config file for your bastion hosts. To make use of it, make sure you have a line in your `ansible.cfg` file that looks like the following:
- Ansible will automatically generate an ssh config file for your bastion hosts. To connect to hosts over ssh through the bastion host, use the generated `ssh-bastion.conf`; Ansible automatically detects the bastion and adjusts `ssh_args` accordingly.
```commandline
ssh_args = -F ./ssh-bastion.conf -o ControlMaster=auto -o ControlPersist=30m
ssh -F ./ssh-bastion.conf user@$ip
```
- Once the infrastructure is created, you can run the kubespray playbooks and supply inventory/hosts with the `-i` flag.
Example (this one assumes you are using CoreOS)
```commandline
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_ssh_user=core -e bootstrap_os=coreos -b --become-user=root --flush-cache
```
***Using other distributions than CoreOS***
If you want to use a distribution other than CoreOS, you can modify the search filters of the `data "aws_ami" "distro"` block in variables.tf.
For example, to use:
- Debian Jessie, replace 'data "aws_ami" "distro"' in variables.tf with
data "aws_ami" "distro" {
most_recent = true
filter {
name = "name"
values = ["debian-jessie-amd64-hvm-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["379101102735"]
}
- Ubuntu 16.04, replace 'data "aws_ami" "distro"' in variables.tf with
data "aws_ami" "distro" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"]
}
- CentOS 7, replace 'data "aws_ami" "distro"' in variables.tf with
data "aws_ami" "distro" {
most_recent = true
filter {
name = "name"
values = ["dcos-centos7-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["688023202711"]
}
**Troubleshooting**


@@ -8,6 +8,8 @@ provider "aws" {
region = "${var.AWS_DEFAULT_REGION}"
}
data "aws_availability_zones" "available" {}
/*
* Calling modules who create the initial AWS VPC / AWS ELB
* and AWS IAM Roles for Kubernetes Deployment
@@ -18,10 +20,10 @@ module "aws-vpc" {
aws_cluster_name = "${var.aws_cluster_name}"
aws_vpc_cidr_block = "${var.aws_vpc_cidr_block}"
aws_avail_zones="${var.aws_avail_zones}"
aws_avail_zones="${slice(data.aws_availability_zones.available.names,0,2)}"
aws_cidr_subnets_private="${var.aws_cidr_subnets_private}"
aws_cidr_subnets_public="${var.aws_cidr_subnets_public}"
default_tags="${var.default_tags}"
}
@@ -31,10 +33,11 @@ module "aws-elb" {
aws_cluster_name="${var.aws_cluster_name}"
aws_vpc_id="${module.aws-vpc.aws_vpc_id}"
aws_avail_zones="${var.aws_avail_zones}"
aws_avail_zones="${slice(data.aws_availability_zones.available.names,0,2)}"
aws_subnet_ids_public="${module.aws-vpc.aws_subnet_ids_public}"
aws_elb_api_port = "${var.aws_elb_api_port}"
k8s_secure_api_port = "${var.k8s_secure_api_port}"
default_tags="${var.default_tags}"
}
@@ -48,12 +51,13 @@ module "aws-iam" {
* Create Bastion Instances in AWS
*
*/
resource "aws_instance" "bastion-server" {
ami = "${var.aws_bastion_ami}"
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_bastion_size}"
count = "${length(var.aws_cidr_subnets_public)}"
associate_public_ip_address = true
availability_zone = "${element(var.aws_avail_zones,count.index)}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_public,count.index)}"
@@ -61,11 +65,11 @@ resource "aws_instance" "bastion-server" {
key_name = "${var.AWS_SSH_KEY_NAME}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-bastion-${count.index}"
Cluster = "${var.aws_cluster_name}"
Role = "bastion-${var.aws_cluster_name}-${count.index}"
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-bastion-${count.index}",
"Cluster", "${var.aws_cluster_name}",
"Role", "bastion-${var.aws_cluster_name}-${count.index}"
))}"
}
@@ -75,13 +79,13 @@ resource "aws_instance" "bastion-server" {
*/
resource "aws_instance" "k8s-master" {
ami = "${var.aws_cluster_ami}"
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_kube_master_size}"
count = "${var.aws_kube_master_num}"
availability_zone = "${element(var.aws_avail_zones,count.index)}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
@@ -92,11 +96,11 @@ resource "aws_instance" "k8s-master" {
key_name = "${var.AWS_SSH_KEY_NAME}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-master${count.index}"
Cluster = "${var.aws_cluster_name}"
Role = "master"
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-master${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "master"
))}"
}
resource "aws_elb_attachment" "attach_master_nodes" {
@@ -107,13 +111,13 @@ resource "aws_elb_attachment" "attach_master_nodes" {
resource "aws_instance" "k8s-etcd" {
ami = "${var.aws_cluster_ami}"
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_etcd_size}"
count = "${var.aws_etcd_num}"
availability_zone = "${element(var.aws_avail_zones,count.index)}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
@@ -121,23 +125,22 @@ resource "aws_instance" "k8s-etcd" {
key_name = "${var.AWS_SSH_KEY_NAME}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-etcd${count.index}"
Cluster = "${var.aws_cluster_name}"
Role = "etcd"
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-etcd${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "etcd"
))}"
}
resource "aws_instance" "k8s-worker" {
ami = "${var.aws_cluster_ami}"
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_kube_worker_size}"
count = "${var.aws_kube_worker_num}"
availability_zone = "${element(var.aws_avail_zones,count.index)}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
vpc_security_group_ids = [ "${module.aws-vpc.aws_security_group}" ]
@@ -146,11 +149,11 @@ resource "aws_instance" "k8s-worker" {
key_name = "${var.AWS_SSH_KEY_NAME}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-worker${count.index}"
Cluster = "${var.aws_cluster_name}"
Role = "worker"
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-worker${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "worker"
))}"
}
@@ -162,18 +165,16 @@ resource "aws_instance" "k8s-worker" {
*/
data "template_file" "inventory" {
template = "${file("${path.module}/templates/inventory.tpl")}"
vars {
public_ip_address_bastion = "${join("\n",formatlist("bastion ansible_ssh_host=%s" , aws_instance.bastion-server.*.public_ip))}"
connection_strings_master = "${join("\n",formatlist("%s ansible_ssh_host=%s",aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}"
connection_strings_node = "${join("\n", formatlist("%s ansible_ssh_host=%s", aws_instance.k8s-worker.*.tags.Name, aws_instance.k8s-worker.*.private_ip))}"
connection_strings_etcd = "${join("\n",formatlist("%s ansible_ssh_host=%s", aws_instance.k8s-etcd.*.tags.Name, aws_instance.k8s-etcd.*.private_ip))}"
public_ip_address_bastion = "${join("\n",formatlist("bastion ansible_host=%s" , aws_instance.bastion-server.*.public_ip))}"
connection_strings_master = "${join("\n",formatlist("%s ansible_host=%s",aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}"
connection_strings_node = "${join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-worker.*.tags.Name, aws_instance.k8s-worker.*.private_ip))}"
connection_strings_etcd = "${join("\n",formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.tags.Name, aws_instance.k8s-etcd.*.private_ip))}"
list_master = "${join("\n",aws_instance.k8s-master.*.tags.Name)}"
list_node = "${join("\n",aws_instance.k8s-worker.*.tags.Name)}"
list_etcd = "${join("\n",aws_instance.k8s-etcd.*.tags.Name)}"
elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
elb_api_port = "loadbalancer_apiserver.port=${var.aws_elb_api_port}"
loadbalancer_apiserver_address = "loadbalancer_apiserver.address=${var.loadbalancer_apiserver_address}"
}
}


@@ -2,9 +2,9 @@ resource "aws_security_group" "aws-elb" {
name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
vpc_id = "${var.aws_vpc_id}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
))}"
}
@@ -43,7 +43,7 @@ resource "aws_elb" "aws-elb-api" {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 3
target = "HTTP:8080/"
target = "TCP:${var.k8s_secure_api_port}"
interval = 30
}
@@ -52,7 +52,7 @@ resource "aws_elb" "aws-elb-api" {
connection_draining = true
connection_draining_timeout = 400
tags {
Name = "kubernetes-${var.aws_cluster_name}-elb-api"
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-elb-api"
))}"
}


@@ -26,3 +26,8 @@ variable "aws_subnet_ids_public" {
description = "IDs of Public Subnets"
type = "list"
}
variable "default_tags" {
description = "Tags for all resources"
type = "map"
}


@@ -129,10 +129,10 @@ EOF
resource "aws_iam_instance_profile" "kube-master" {
name = "kube_${var.aws_cluster_name}_master_profile"
roles = ["${aws_iam_role.kube-master.name}"]
role = "${aws_iam_role.kube-master.name}"
}
resource "aws_iam_instance_profile" "kube-worker" {
name = "kube_${var.aws_cluster_name}_node_profile"
roles = ["${aws_iam_role.kube-worker.name}"]
role = "${aws_iam_role.kube-worker.name}"
}


@@ -6,9 +6,9 @@ resource "aws_vpc" "cluster-vpc" {
enable_dns_support = true
enable_dns_hostnames = true
tags {
Name = "kubernetes-${var.aws_cluster_name}-vpc"
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-vpc"
))}"
}
@@ -18,13 +18,13 @@ resource "aws_eip" "cluster-nat-eip" {
}
resource "aws_internet_gateway" "cluster-vpc-internetgw" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-internetgw"
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-internetgw"
))}"
}
resource "aws_subnet" "cluster-vpc-subnets-public" {
@@ -33,9 +33,10 @@ resource "aws_subnet" "cluster-vpc-subnets-public" {
availability_zone = "${element(var.aws_avail_zones, count.index)}"
cidr_block = "${element(var.aws_cidr_subnets_public, count.index)}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public"
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member"
))}"
}
resource "aws_nat_gateway" "cluster-nat-gateway" {
@@ -51,9 +52,9 @@ resource "aws_subnet" "cluster-vpc-subnets-private" {
availability_zone = "${element(var.aws_avail_zones, count.index)}"
cidr_block = "${element(var.aws_cidr_subnets_private, count.index)}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
))}"
}
#Routing in VPC
@@ -66,9 +67,10 @@ resource "aws_route_table" "kubernetes-public" {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.cluster-vpc-internetgw.id}"
}
tags {
Name = "kubernetes-${var.aws_cluster_name}-routetable-public"
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-routetable-public"
))}"
}
resource "aws_route_table" "kubernetes-private" {
@@ -78,9 +80,11 @@ resource "aws_route_table" "kubernetes-private" {
cidr_block = "0.0.0.0/0"
nat_gateway_id = "${element(aws_nat_gateway.cluster-nat-gateway.*.id, count.index)}"
}
tags {
Name = "kubernetes-${var.aws_cluster_name}-routetable-private-${count.index}"
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-routetable-private-${count.index}"
))}"
}
resource "aws_route_table_association" "kubernetes-public" {
@@ -104,9 +108,9 @@ resource "aws_security_group" "kubernetes" {
name = "kubernetes-${var.aws_cluster_name}-securitygroup"
vpc_id = "${aws_vpc.cluster-vpc.id}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-securitygroup"
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-securitygroup"
))}"
}
resource "aws_security_group_rule" "allow-all-ingress" {


@@ -14,3 +14,8 @@ output "aws_security_group" {
value = ["${aws_security_group.kubernetes.*.id}"]
}
output "default_tags" {
value = "${var.default_tags}"
}


@@ -22,3 +22,8 @@ variable "aws_cidr_subnets_public" {
description = "CIDR Blocks for public subnets in Availability zones"
type = "list"
}
variable "default_tags" {
description = "Default tags for all resources"
type = "map"
}


@@ -22,3 +22,7 @@ output "aws_elb_api_fqdn" {
output "inventory" {
value = "${data.template_file.inventory.rendered}"
}
output "default_tags" {
value = "${var.default_tags}"
}


@@ -1,3 +1,4 @@
[all]
${connection_strings_master}
${connection_strings_node}
${connection_strings_etcd}
@@ -24,5 +25,3 @@ kube-master
[k8s-cluster:vars]
${elb_api_fqdn}
${elb_api_port}
${loadbalancer_apiserver_address}


@@ -5,10 +5,8 @@ aws_cluster_name = "devtest"
aws_vpc_cidr_block = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20","10.250.208.0/20"]
aws_cidr_subnets_public = ["10.250.224.0/20","10.250.240.0/20"]
aws_avail_zones = ["us-west-2a","us-west-2b"]
#Bastion Host
aws_bastion_ami = "ami-db56b9a3"
aws_bastion_size = "t2.medium"
@@ -23,10 +21,13 @@ aws_etcd_size = "t2.medium"
aws_kube_worker_num = 4
aws_kube_worker_size = "t2.medium"
aws_cluster_ami = "ami-db56b9a3"
#Settings AWS ELB
aws_elb_api_port = 6443
k8s_secure_api_port = 6443
kube_insecure_apiserver_address = "0.0.0.0"
default_tags = {
# Env = "devtest"
# Product = "kubernetes"
}


@@ -20,6 +20,21 @@ variable "aws_cluster_name" {
description = "Name of AWS Cluster"
}
data "aws_ami" "distro" {
most_recent = true
filter {
name = "name"
values = ["CoreOS-stable-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["595879546273"] #CoreOS
}
//AWS VPC Variables
@@ -27,11 +42,6 @@ variable "aws_vpc_cidr_block" {
description = "CIDR Block for VPC"
}
variable "aws_avail_zones" {
description = "Availability Zones Used"
type = "list"
}
variable "aws_cidr_subnets_private" {
description = "CIDR Blocks for private subnets in Availability Zones"
type = "list"
@@ -44,10 +54,6 @@ variable "aws_cidr_subnets_public" {
//AWS EC2 Settings
variable "aws_bastion_ami" {
description = "AMI ID for Bastion Host in chosen AWS Region"
}
variable "aws_bastion_size" {
description = "EC2 Instance Size of Bastion Host"
}
@@ -81,9 +87,6 @@ variable "aws_kube_worker_size" {
description = "Instance size of Kubernetes Worker Nodes"
}
variable "aws_cluster_ami" {
description = "AMI ID for Kubernetes Cluster"
}
/*
* AWS ELB Settings
*
@@ -96,6 +99,7 @@ variable "k8s_secure_api_port" {
description = "Secure Port of K8S API Server"
}
variable "loadbalancer_apiserver_address" {
description= "Bind Address for ELB of K8s API Server"
variable "default_tags" {
description = "Default tags for all resources"
type = "map"
}


@@ -5,65 +5,91 @@ Openstack.
## Status
This will install a Kubernetes cluster on an Openstack Cloud. It has been tested on a
OpenStack Cloud provided by [BlueBox](https://www.blueboxcloud.com/) and on OpenStack at [EMBL-EBI's](http://www.ebi.ac.uk/) [EMBASSY Cloud](http://www.embassycloud.org/). This should work on most modern installs of OpenStack that support the basic
services.
This will install a Kubernetes cluster on an Openstack Cloud. It should work on
most modern installs of OpenStack that support the basic services.
There are some assumptions made to try to ensure it will work on your OpenStack cluster.
## Approach
The Terraform configuration inspects variables found in
[variables.tf](variables.tf) to create resources in your OpenStack cluster.
There is a [python script](../terraform.py) that reads the generated `.tfstate`
file to produce a dynamic inventory, which is consumed by the main Ansible
playbook to actually install Kubernetes and stand up the cluster.
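As a rough sketch of that flow (the commands and relative paths are illustrative assumptions, run from `contrib/terraform/openstack`; the sections below give the authoritative steps):

```
# Provision the OpenStack resources described by variables.tf
$ terraform apply -var-file=my-terraform-vars.tfvars

# Run the main playbook against the dynamic inventory generated from the .tfstate
$ ansible-playbook -i ../terraform.py ../../../cluster.yml -b
```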
* floating-ips are used for access, but you can have masters and nodes without floating-ips if needed. You currently need at least one floating IP, and it must be assigned to a master. If you use more than one, at least one should still be on a master so that bastions work correctly.
* you already have a suitable OS image in glance
* you already have both an internal network and a floating-ip pool created
* you have security-groups enabled
### Networking
The configuration includes creating a private subnet with a router to the
external net. It will allocate floating-ips from a pool and assign them to the
hosts where that makes sense. You have the option of creating bastion hosts
inside the private subnet to access the nodes there.
### Kubernetes Nodes
You can create many different kubernetes topologies by setting the number of
different classes of hosts. For each class there are options for allocating
floating ip addresses or not.
- Master Nodes with etcd
- Master nodes without etcd
- Standalone etcd hosts
- Kubernetes worker nodes
Note that the Ansible script will raise an error if you end up with an even
number of etcd instances, since that is not a valid configuration.
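A hedged sketch of how a topology might be expressed in a vars file (the variable names below are assumptions; check [variables.tf](variables.tf) for the authoritative names):

```
# my-terraform-vars.tfvars (illustrative; keep the total number of etcd instances odd)
number_of_k8s_masters = 1
number_of_etcd        = 3
number_of_k8s_nodes   = 2
```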
### Gluster FS
The terraform configuration supports provisioning of an optional GlusterFS
shared file system based on a separate set of VMs. To enable this, you need to
specify
- the number of gluster hosts
- Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks
- Other properties related to provisioning the hosts
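For illustration only (these variable names are assumptions; consult [variables.tf](variables.tf) for the real ones), enabling GlusterFS could look like:

```
# my-terraform-vars.tfvars (illustrative GlusterFS settings)
number_of_gfs_nodes_no_floating_ip = 3
gfs_volume_size_in_gb              = 50
image_gfs                          = "debian-9"  # must be a Debian or RedHat based image, see the note below
```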
Even if you are using Container Linux by CoreOS for your cluster, you will still
need the GlusterFS VMs to be based on either Debian or RedHat based images;
Container Linux by CoreOS cannot serve GlusterFS, but it can connect to it through
binaries available in hyperkube v1.4.3_coreos.0 or higher.
## Requirements
- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
- [Install Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html)
- you already have a suitable OS image in glance
- you already have a floating-ip pool created
- you have security-groups enabled
- you have a pair of keys generated that can be used to secure the new hosts
## Module Architecture
The configuration is divided into three modules:
- Network
- IPs
- Compute
The main reason for splitting the configuration up in this way is to easily
accommodate situations where floating IPs are limited by a quota or if you have
any external references to the floating IP (e.g. DNS) that would otherwise have
to be updated.
You can force the use of your existing floating IPs by modifying the compute variables in
`kubespray.tf` as follows:
```
k8s_master_fips = ["151.101.129.67"]
k8s_node_fips = ["151.101.129.68"]
```
## Terraform
Terraform will be used to provision all of the OpenStack resources. It is also used to deploy and provision the software
requirements.
Terraform will be used to provision all of the OpenStack resources. It is also
used to deploy and provision the software requirements.
### Prep
#### OpenStack
Ensure your OpenStack **Identity v2** credentials are loaded in environment variables. This can be done by downloading a credentials .rc file from your OpenStack dashboard and sourcing it:
Ensure your OpenStack **Identity v2** credentials are loaded in environment
variables. This can be done by downloading a credentials .rc file from your
OpenStack dashboard and sourcing it:
```
$ source ~/.stackrc
```
> You must set the **OS_REGION_NAME** and **OS_TENANT_ID** environment variables, which are not required by the openstack CLI.
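For example (the values are placeholders; take them from your OpenStack dashboard):
```bash
# Placeholders only: region and tenant ID are specific to your OpenStack deployment
export OS_REGION_NAME="<your-region>"
export OS_TENANT_ID="<your-tenant-id>"
```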
You will need two networks before installing: an internal network and
an external (floating IP pool) network. The internal network can be shared, as
we use security groups to provide network segregation. Due to the many
differences between OpenStack installs, the Terraform configuration does not attempt to create
these for you.
By default Terraform will expect that your networks are called `internal` and
`external`. You can change this by altering the Terraform variables `network_name` and `floatingip_pool`. This can be done in a new variables file or through environment variables.
A full list of variables you can change can be found at [variables.tf](variables.tf).
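For example, to override them through environment variables (the `TF_VAR_` prefix is standard Terraform behaviour; the values here are placeholders):
```bash
# Override the defaults declared in variables.tf without editing any files
export TF_VAR_network_name="my-internal-net"
export TF_VAR_floatingip_pool="my-external-pool"
```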
All OpenStack resources will use the Terraform variable `cluster_name` (
default `example`) in their name to make it easier to track. For example the
first compute resource will be named `example-kubernetes-1`.
#### Terraform
Ensure your local ssh-agent is running and your ssh key has been added. This
step is required by the terraform provisioner:
```
$ eval $(ssh-agent -s)
$ ssh-add ~/.ssh/id_rsa
```
Ensure that you have your Openstack credentials loaded into Terraform
environment variables. Likely via a command similar to:
@@ -75,59 +101,106 @@ $ echo Setting up Terraform creds && \
export TF_VAR_auth_url=${OS_AUTH_URL}
```
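A slightly fuller sketch of that mapping might look like the following; `auth_url` and `username` are variables shown in [variables.tf](variables.tf), while `TF_VAR_password` is an assumption you should adapt to whatever your variables.tf actually declares:
```bash
# Map the sourced OS_* credentials onto Terraform input variables
echo Setting up Terraform creds && \
  export TF_VAR_username=${OS_USERNAME} && \
  export TF_VAR_password=${OS_PASSWORD} && \
  export TF_VAR_auth_url=${OS_AUTH_URL}
```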
##### Alternative: etcd inside masters
### Terraform Variables
The construction of the cluster is driven by values found in
[variables.tf](variables.tf).
If you want to provision master or node VMs that don't use floating IPs and where etcd runs on the masters, write a `my-terraform-vars.tfvars` file, for example:
The best way to set these values is to create a file in the project's root
directory called something like `my-terraform-vars.tfvars`. Many of the
variables are obvious. Here is a summary of some of the more interesting
ones:
```
number_of_k8s_masters = "1"
number_of_k8s_masters_no_floating_ip = "2"
number_of_k8s_nodes_no_floating_ip = "1"
number_of_k8s_nodes = "0"
```
This will provision one master VM with a floating IP, two additional masters without floating IPs (these will only have private IPs inside your tenancy), and one worker node, again without a floating IP.
|Variable | Description |
|---------|-------------|
|`cluster_name` | All OpenStack resources will use the Terraform variable `cluster_name` (default `example`) in their name to make it easier to track. For example the first compute resource will be named `example-kubernetes-1`. |
|`network_name` | The name to be given to the internal network that will be generated |
|`dns_nameservers`| An array of DNS name server names to be used by hosts in the internal subnet. |
|`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
|`external_net` | UUID of the external network that will be routed to |
|`flavor_k8s_master`,`flavor_k8s_node`,`flavor_etcd`, `flavor_bastion`,`flavor_gfs_node` | Flavor depends on your OpenStack installation; you can get available flavor IDs through `nova flavor-list` |
|`image`,`image_gfs` | Name of the image to use in provisioning the compute resources. Should already be loaded into glance. |
|`ssh_user`,`ssh_user_gfs` | The username to ssh into the image with. This usually depends on the image you have selected |
|`public_key_path` | Path on your local workstation to the public key file you wish to use in creating the key pairs |
|`number_of_k8s_masters`, `number_of_k8s_masters_no_floating_ip` | Number of nodes that serve as both master and etcd. These can be provisioned with or without floating IP addresses|
|`number_of_k8s_masters_no_etcd`, `number_of_k8s_masters_no_floating_ip_no_etcd` | Number of nodes that serve as just master with no etcd. These can be provisioned with or without floating IP addresses |
|`number_of_etcd` | Number of pure etcd nodes |
|`number_of_k8s_nodes`, `number_of_k8s_nodes_no_floating_ip` | Kubernetes worker nodes. These can be provisioned with or without floating ip addresses. |
|`number_of_bastions` | Number of bastion hosts to create. Scripts assume this is really just zero or one |
|`number_of_gfs_nodes_no_floating_ip` | Number of gluster servers to provision. |
| `gfs_volume_size_in_gb` | Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks |
##### Alternative: etcd on separate machines
## Initializing Terraform
Before Terraform can operate on your cluster you need to install the required
plugins. This is accomplished with the following command:
If you want to provision master or node VMs that don't use floating IPs and where **etcd is on separate nodes from the Kubernetes masters**, write a `my-terraform-vars.tfvars` file, for example:
```
number_of_etcd = "3"
number_of_k8s_masters = "0"
number_of_k8s_masters_no_etcd = "1"
number_of_k8s_masters_no_floating_ip = "0"
number_of_k8s_masters_no_floating_ip_no_etcd = "2"
number_of_k8s_nodes_no_floating_ip = "1"
number_of_k8s_nodes = "2"
flavor_k8s_node = "desired-flavor-id"
flavor_k8s_master = "desired-flavor-id"
flavor_etcd = "desired-flavor-id"
```
```bash
$ terraform init contrib/terraform/openstack
```
This will provision one master VM with a floating IP, two additional masters without floating IPs (these will only have private IPs inside your tenancy), two worker nodes with floating IPs, one worker node without a floating IP, and three VMs for etcd.
##### Alternative: add GlusterFS
Additionally, the Terraform-based installation supports provisioning of a GlusterFS shared file system on a separate set of VMs, running either Debian or RedHat based images. To enable this, add the following variables to your `my-terraform-vars.tfvars`:
```
# Flavour depends on your OpenStack installation; you can get available flavours through `nova flavor-list`
flavor_gfs_node = "af659280-5b8a-42b5-8865-a703775911da"
# This is the name of an image already available in your openstack installation.
image_gfs = "Ubuntu 15.10"
number_of_gfs_nodes_no_floating_ip = "3"
# This is the size of the non-ephemeral volumes to be attached to store the GlusterFS bricks.
gfs_volume_size_in_gb = "50"
# The user needed for the image chosen for GlusterFS.
ssh_user_gfs = "ubuntu"
```
## Provisioning Cluster with Terraform
You can apply the terraform config to your cluster with the following command
issued from the project's root directory
```bash
$ terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-terraform-vars.tfvars contrib/terraform/openstack
```
If these variables are provided, this will give rise to a new ansible group called `gfs-cluster`, for which we have added ansible roles to execute in the ansible provisioning step. If you are using Container Linux by CoreOS, these GlusterFS VMs necessarily need to be either Debian or RedHat based VMs; Container Linux by CoreOS cannot serve GlusterFS, but it can connect to it through binaries available in hyperkube v1.4.3_coreos.0 or higher.
If you chose to create a bastion host, this script will create
`contrib/terraform/openstack/k8s-cluster.yml` with an ssh command for ansible to
be able to access your machines by tunneling through the bastion's IP address. If
you want to manually handle the ssh tunneling to these machines, please delete
or move that file. If you want to use this, just leave it there, as ansible will
pick it up automatically.
# Configure Cluster variables
Edit `inventory/group_vars/all.yml`:
## Destroying Cluster with Terraform
You can destroy a config deployed to your cluster with the following command
issued from the project's root directory
```bash
$ terraform destroy -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-terraform-vars.tfvars contrib/terraform/openstack
```
## Debugging Cluster Provisioning
You can enable debugging output from Terraform by setting
`OS_DEBUG` to 1 and `TF_LOG` to `DEBUG` before running the terraform command.
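For example, for a single debug run (both variables are the ones named above):
```bash
# Verbose OpenStack provider and Terraform logs for this invocation only
OS_DEBUG=1 TF_LOG=DEBUG terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-terraform-vars.tfvars contrib/terraform/openstack
```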
# Running the Ansible Script
Ensure your local ssh-agent is running and your ssh key has been added. This
step is required by the terraform provisioner:
```
$ eval $(ssh-agent -s)
$ ssh-add ~/.ssh/id_rsa
```
Make sure you can connect to the hosts:
```
$ ansible -i contrib/terraform/openstack/hosts -m ping all
example-k8s_node-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
example-etcd-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
example-k8s-master-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
If you are deploying a system that needs bootstrapping, like Container Linux by
CoreOS, these might have a state `FAILED` due to Container Linux by CoreOS not
having python. As long as the state is not `UNREACHABLE`, this is fine.
If it fails, try to connect manually via SSH; it could be something as simple as a stale host key.
## Configure Cluster variables
Edit `inventory/group_vars/all.yml`:
- Set variable **bootstrap_os** according to the selected image
```
# Valid bootstrap options (required): ubuntu, coreos, centos, none
@@ -145,7 +218,7 @@ bin_dir: /opt/bin
```
cloud_provider: openstack
```
Edit `inventory/group_vars/k8s-cluster.yml`:
Edit `inventory/group_vars/k8s-cluster.yml`:
- Set variable **kube_network_plugin** according to the selected network plugin
```
# Choose network plugin (calico, weave or flannel)
@@ -166,63 +239,13 @@ resolvconf_mode: host_resolvconf
For calico configure OpenStack Neutron ports: [OpenStack](/docs/openstack.md)
# Provision a Kubernetes Cluster on OpenStack
If not using a tfvars file for your setup, then execute:
```
terraform apply -state=contrib/terraform/openstack/terraform.tfstate contrib/terraform/openstack
openstack_compute_secgroup_v2.k8s_master: Creating...
description: "" => "example - Kubernetes Master"
name: "" => "example-k8s-master"
rule.#: "" => "<computed>"
...
...
Apply complete! Resources: 9 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: contrib/terraform/openstack/terraform.tfstate
```
Alternatively, if you wrote your terraform variables in a file `my-terraform-vars.tfvars`, your command would look like:
```
terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-terraform-vars.tfvars contrib/terraform/openstack
```
If you choose to add masters or nodes without floating IPs (only internal IPs on your OpenStack tenancy), this script will also create a file `contrib/terraform/openstack/k8s-cluster.yml` with an ssh command for ansible to be able to access your machines by tunneling through the first floating IP used. If you want to manually handle the ssh tunneling to these machines, please delete or move that file. If you want to use this, just leave it there, as ansible will pick it up automatically.
Make sure you can connect to the hosts:
```
$ ansible -i contrib/terraform/openstack/hosts -m ping all
example-k8s_node-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
example-etcd-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
example-k8s-master-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
If you are deploying a system that needs bootstrapping, like Container Linux by CoreOS, these might have a state `FAILED` due to Container Linux by CoreOS not having python. As long as the state is not `UNREACHABLE`, this is fine.
If it fails, try to connect manually via SSH; it could be something as simple as a stale host key.
Deploy kubernetes:
## Deploy kubernetes:
```
$ ansible-playbook --become -i contrib/terraform/openstack/hosts cluster.yml
```
# Set up local kubectl
## Set up local kubectl
1. Install kubectl on your workstation:
[Install and Set Up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
2. Add route to internal IP of master node (if needed):
@@ -243,16 +266,15 @@ ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1.pem > admin.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/ca.pem > ca.pem
```
5. Edit OpenStack Neutron master's Security Group to allow TCP connections to port 6443
6. Configure kubectl:
5. Configure kubectl:
```
kubectl config set-cluster default-cluster --server=https://[master-internal-ip]:6443 \
--certificate-authority=ca.pem
--certificate-authority=ca.pem
kubectl config set-credentials default-admin \
--certificate-authority=ca.pem \
--client-key=admin-key.pem \
--client-certificate=admin.pem
--client-certificate=admin.pem
kubectl config set-context default-system --cluster=default-cluster --user=default-admin
kubectl config use-context default-system
@@ -262,19 +284,24 @@ kubectl config use-context default-system
kubectl version
```
If you are using floating ip addresses then you may get this error:
```
Unable to connect to the server: x509: certificate is valid for 10.0.0.6, 10.0.0.6, 10.233.0.1, 127.0.0.1, not 132.249.238.25
```
You can tell kubectl to ignore this condition by adding the
`--insecure-skip-tls-verify` option.
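For a quick test only (do not use this for day-to-day access):
```bash
# Skip TLS verification when the certificate does not cover the floating IP
kubectl --insecure-skip-tls-verify=true version
```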
## GlusterFS
GlusterFS is not deployed by the standard `cluster.yml` playbook; see the
[glusterfs playbook documentation](../../network-storage/glusterfs/README.md)
for instructions.
Basically, you will install GlusterFS as follows:
```bash
$ ansible-playbook --become -i contrib/terraform/openstack/hosts ./contrib/network-storage/glusterfs/glusterfs.yml
```
# What's next
[Start Hello Kubernetes Service](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/)
# Clean up
```
$ terraform destroy
Do you really want to destroy?
Terraform will delete all your managed infrastructure.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
...
...
Apply complete! Resources: 0 added, 0 changed, 12 destroyed.
```

View File

@@ -1,226 +1,55 @@
resource "openstack_networking_floatingip_v2" "k8s_master" {
count = "${var.number_of_k8s_masters + var.number_of_k8s_masters_no_etcd}"
pool = "${var.floatingip_pool}"
}
resource "openstack_networking_floatingip_v2" "k8s_node" {
count = "${var.number_of_k8s_nodes}"
pool = "${var.floatingip_pool}"
module "network" {
source = "modules/network"
external_net = "${var.external_net}"
network_name = "${var.network_name}"
cluster_name = "${var.cluster_name}"
dns_nameservers = "${var.dns_nameservers}"
}
resource "openstack_compute_keypair_v2" "k8s" {
name = "kubernetes-${var.cluster_name}"
public_key = "${file(var.public_key_path)}"
module "ips" {
source = "modules/ips"
number_of_k8s_masters = "${var.number_of_k8s_masters}"
number_of_k8s_masters_no_etcd = "${var.number_of_k8s_masters_no_etcd}"
number_of_k8s_nodes = "${var.number_of_k8s_nodes}"
floatingip_pool = "${var.floatingip_pool}"
number_of_bastions = "${var.number_of_bastions}"
external_net = "${var.external_net}"
network_name = "${var.network_name}"
router_id = "${module.network.router_id}"
}
resource "openstack_compute_secgroup_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master"
description = "${var.cluster_name} - Kubernetes Master"
module "compute" {
source = "modules/compute"
cluster_name = "${var.cluster_name}"
number_of_k8s_masters = "${var.number_of_k8s_masters}"
number_of_k8s_masters_no_etcd = "${var.number_of_k8s_masters_no_etcd}"
number_of_etcd = "${var.number_of_etcd}"
number_of_k8s_masters_no_floating_ip = "${var.number_of_k8s_masters_no_floating_ip}"
number_of_k8s_masters_no_floating_ip_no_etcd = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
number_of_k8s_nodes = "${var.number_of_k8s_nodes}"
number_of_bastions = "${var.number_of_bastions}"
number_of_k8s_nodes_no_floating_ip = "${var.number_of_k8s_nodes_no_floating_ip}"
number_of_gfs_nodes_no_floating_ip = "${var.number_of_gfs_nodes_no_floating_ip}"
gfs_volume_size_in_gb = "${var.gfs_volume_size_in_gb}"
public_key_path = "${var.public_key_path}"
image = "${var.image}"
image_gfs = "${var.image_gfs}"
ssh_user = "${var.ssh_user}"
ssh_user_gfs = "${var.ssh_user_gfs}"
flavor_k8s_master = "${var.flavor_k8s_master}"
flavor_k8s_node = "${var.flavor_k8s_node}"
flavor_etcd = "${var.flavor_etcd}"
flavor_gfs_node = "${var.flavor_gfs_node}"
network_name = "${var.network_name}"
flavor_bastion = "${var.flavor_bastion}"
k8s_master_fips = "${module.ips.k8s_master_fips}"
k8s_node_fips = "${module.ips.k8s_node_fips}"
bastion_fips = "${module.ips.bastion_fips}"
network_id = "${module.network.router_id}"
}
resource "openstack_compute_secgroup_v2" "k8s" {
name = "${var.cluster_name}-k8s"
description = "${var.cluster_name} - Kubernetes"
rule {
ip_protocol = "tcp"
from_port = "22"
to_port = "22"
cidr = "0.0.0.0/0"
}
rule {
ip_protocol = "icmp"
from_port = "-1"
to_port = "-1"
cidr = "0.0.0.0/0"
}
rule {
ip_protocol = "tcp"
from_port = "1"
to_port = "65535"
self = true
}
rule {
ip_protocol = "udp"
from_port = "1"
to_port = "65535"
self = true
}
rule {
ip_protocol = "icmp"
from_port = "-1"
to_port = "-1"
self = true
}
}
resource "openstack_compute_instance_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master-${count.index+1}"
count = "${var.number_of_k8s_masters}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}" ]
floating_ip = "${element(openstack_networking_floatingip_v2.k8s_master.*.address, count.index)}"
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,kube-node,k8s-cluster,vault"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-${count.index+1}"
count = "${var.number_of_k8s_masters_no_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}" ]
floating_ip = "${element(openstack_networking_floatingip_v2.k8s_master.*.address, count.index + var.number_of_k8s_masters)}"
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,kube-node,k8s-cluster,vault"
}
}
resource "openstack_compute_instance_v2" "etcd" {
name = "${var.cluster_name}-etcd-${count.index+1}"
count = "${var.number_of_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_etcd}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,vault,no-floating"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,kube-node,k8s-cluster,vault,no-floating"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,kube-node,k8s-cluster,vault,no-floating"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.cluster_name}-k8s-node-${count.index+1}"
count = "${var.number_of_k8s_nodes}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}" ]
floating_ip = "${element(openstack_networking_floatingip_v2.k8s_node.*.address, count.index)}"
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,vault"
}
}
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
name = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
count = "${var.number_of_k8s_nodes_no_floating_ip}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,vault,no-floating"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
}
}
resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
name = "${var.cluster_name}-gfs-nephe-vol-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}"
description = "Non-ephemeral volume for GlusterFS"
size = "${var.gfs_volume_size_in_gb}"
}
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}"
image_name = "${var.image_gfs}"
flavor_id = "${var.flavor_gfs_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user_gfs}"
kubespray_groups = "gfs-cluster,network-storage"
}
volume {
volume_id = "${element(openstack_blockstorage_volume_v2.glusterfs_volume.*.id, count.index)}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/gfs-cluster.yml"
}
}
#output "msg" {
# value = "Your hosts are ready to go!\nYour ssh hosts are: ${join(", ", openstack_networking_floatingip_v2.k8s_master.*.address )}"
#}

View File

@@ -0,0 +1,280 @@
variable user_data {
type = "string"
default = <<EOF
#cloud-config
manage_etc_hosts: localhost
package_update: true
package_upgrade: true
EOF
}
resource "openstack_compute_keypair_v2" "k8s" {
name = "kubernetes-${var.cluster_name}"
public_key = "${chomp(file(var.public_key_path))}"
}
resource "openstack_compute_secgroup_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master"
description = "${var.cluster_name} - Kubernetes Master"
rule {
ip_protocol = "tcp"
from_port = "6443"
to_port = "6443"
cidr = "0.0.0.0/0"
}
}
resource "openstack_compute_secgroup_v2" "bastion" {
name = "${var.cluster_name}-bastion"
description = "${var.cluster_name} - Bastion Server"
rule {
ip_protocol = "tcp"
from_port = "22"
to_port = "22"
cidr = "0.0.0.0/0"
}
}
resource "openstack_compute_secgroup_v2" "k8s" {
name = "${var.cluster_name}-k8s"
description = "${var.cluster_name} - Kubernetes"
rule {
ip_protocol = "icmp"
from_port = "-1"
to_port = "-1"
cidr = "0.0.0.0/0"
}
rule {
ip_protocol = "tcp"
from_port = "1"
to_port = "65535"
self = true
}
rule {
ip_protocol = "udp"
from_port = "1"
to_port = "65535"
self = true
}
rule {
ip_protocol = "icmp"
from_port = "-1"
to_port = "-1"
self = true
}
}
resource "openstack_compute_instance_v2" "bastion" {
name = "${var.cluster_name}-bastion-${count.index+1}"
count = "${var.number_of_bastions}"
image_name = "${var.image}"
flavor_id = "${var.flavor_bastion}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s.name}",
"${openstack_compute_secgroup_v2.bastion.name}",
"default" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "bastion"
depends_on = "${var.network_id}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${var.bastion_fips[0]}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
}
user_data = "${var.user_data}"
}
resource "openstack_compute_instance_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master-${count.index+1}"
count = "${var.number_of_k8s_masters}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.bastion.name}",
"${openstack_compute_secgroup_v2.k8s.name}",
"default" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,kube-node,k8s-cluster,vault"
depends_on = "${var.network_id}"
}
user_data = "${var.user_data}"
}
resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-${count.index+1}"
count = "${var.number_of_k8s_masters_no_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,kube-node,k8s-cluster,vault"
depends_on = "${var.network_id}"
}
user_data = "${var.user_data}"
}
resource "openstack_compute_instance_v2" "etcd" {
name = "${var.cluster_name}-etcd-${count.index+1}"
count = "${var.number_of_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_etcd}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,vault,no-floating"
depends_on = "${var.network_id}"
}
user_data = "${var.user_data}"
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}",
"default" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,kube-node,k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
}
user_data = "${var.user_data}"
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,kube-node,k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
}
user_data = "${var.user_data}"
}
resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.cluster_name}-k8s-node-${count.index+1}"
count = "${var.number_of_k8s_nodes}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s.name}",
"${openstack_compute_secgroup_v2.bastion.name}",
"default" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster"
depends_on = "${var.network_id}"
}
user_data = "${var.user_data}"
}
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
name = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
count = "${var.number_of_k8s_nodes_no_floating_ip}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s.name}",
"default" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,no-floating"
depends_on = "${var.network_id}"
}
user_data = "${var.user_data}"
}
resource "openstack_compute_floatingip_associate_v2" "bastion" {
count = "${var.number_of_bastions}"
floating_ip = "${var.bastion_fips[count.index]}"
instance_id = "${element(openstack_compute_instance_v2.bastion.*.id, count.index)}"
}
resource "openstack_compute_floatingip_associate_v2" "k8s_master" {
count = "${var.number_of_k8s_masters}"
instance_id = "${element(openstack_compute_instance_v2.k8s_master.*.id, count.index)}"
floating_ip = "${var.k8s_master_fips[count.index]}"
}
resource "openstack_compute_floatingip_associate_v2" "k8s_node" {
count = "${var.number_of_k8s_nodes}"
floating_ip = "${var.k8s_node_fips[count.index]}"
instance_id = "${element(openstack_compute_instance_v2.k8s_node.*.id, count.index)}"
}
resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
name = "${var.cluster_name}-glusterfs_volume-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}"
description = "Non-ephemeral volume for GlusterFS"
size = "${var.gfs_volume_size_in_gb}"
}
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}"
image_name = "${var.image_gfs}"
flavor_id = "${var.flavor_gfs_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
"default" ]
metadata = {
ssh_user = "${var.ssh_user_gfs}"
kubespray_groups = "gfs-cluster,network-storage,no-floating"
depends_on = "${var.network_id}"
}
user_data = "#cloud-config\nmanage_etc_hosts: localhost\npackage_update: true\npackage_upgrade: true"
}
resource "openstack_compute_volume_attach_v2" "glusterfs_volume" {
count = "${var.number_of_gfs_nodes_no_floating_ip}"
instance_id = "${element(openstack_compute_instance_v2.glusterfs_node_no_floating_ip.*.id, count.index)}"
volume_id = "${element(openstack_blockstorage_volume_v2.glusterfs_volume.*.id, count.index)}"
}

View File

@@ -0,0 +1,83 @@
variable "cluster_name" {
}
variable "number_of_k8s_masters" {
}
variable "number_of_k8s_masters_no_etcd" {
}
variable "number_of_etcd" {
}
variable "number_of_k8s_masters_no_floating_ip" {
}
variable "number_of_k8s_masters_no_floating_ip_no_etcd" {
}
variable "number_of_k8s_nodes" {
}
variable "number_of_k8s_nodes_no_floating_ip" {
}
variable "number_of_bastions" {
}
variable "number_of_gfs_nodes_no_floating_ip" {
}
variable "gfs_volume_size_in_gb" {
}
variable "public_key_path" {
}
variable "image" {
}
variable "image_gfs" {
}
variable "ssh_user" {
}
variable "ssh_user_gfs" {
}
variable "flavor_k8s_master" {
}
variable "flavor_k8s_node" {
}
variable "flavor_etcd" {
}
variable "flavor_gfs_node" {
}
variable "network_name" {
}
variable "flavor_bastion" {
}
variable "network_id"{
}
variable "k8s_master_fips" {
type = "list"
}
variable "k8s_node_fips" {
type = "list"
}
variable "bastion_fips" {
type = "list"
}

View File

@@ -0,0 +1,24 @@
resource "null_resource" "dummy_dependency" {
triggers {
dependency_id = "${var.router_id}"
}
}
resource "openstack_networking_floatingip_v2" "k8s_master" {
count = "${var.number_of_k8s_masters}"
pool = "${var.floatingip_pool}"
depends_on = ["null_resource.dummy_dependency"]
}
resource "openstack_networking_floatingip_v2" "k8s_node" {
count = "${var.number_of_k8s_nodes}"
pool = "${var.floatingip_pool}"
depends_on = ["null_resource.dummy_dependency"]
}
resource "openstack_networking_floatingip_v2" "bastion" {
count = "${var.number_of_bastions}"
pool = "${var.floatingip_pool}"
depends_on = ["null_resource.dummy_dependency"]
}

View File

@@ -0,0 +1,11 @@
output "k8s_master_fips" {
value = ["${openstack_networking_floatingip_v2.k8s_master.*.address}"]
}
output "k8s_node_fips" {
value = ["${openstack_networking_floatingip_v2.k8s_node.*.address}"]
}
output "bastion_fips" {
value = ["${openstack_networking_floatingip_v2.bastion.*.address}"]
}

View File

@@ -0,0 +1,26 @@
variable "number_of_k8s_masters" {
}
variable "number_of_k8s_masters_no_etcd" {
}
variable "number_of_k8s_nodes" {
}
variable "floatingip_pool" {
}
variable "number_of_bastions" {
}
variable "external_net" {
}
variable "network_name" {
}
variable "router_id"{
}

View File

@@ -0,0 +1,24 @@
resource "openstack_networking_router_v2" "k8s" {
name = "${var.cluster_name}-router"
admin_state_up = "true"
external_gateway = "${var.external_net}"
}
resource "openstack_networking_network_v2" "k8s" {
name = "${var.network_name}"
admin_state_up = "true"
}
resource "openstack_networking_subnet_v2" "k8s" {
name = "${var.cluster_name}-internal-network"
network_id = "${openstack_networking_network_v2.k8s.id}"
cidr = "10.0.0.0/24"
ip_version = 4
dns_nameservers = "${var.dns_nameservers}"
}
resource "openstack_networking_router_interface_v2" "k8s" {
router_id = "${openstack_networking_router_v2.k8s.id}"
subnet_id = "${openstack_networking_subnet_v2.k8s.id}"
}

View File

@@ -0,0 +1,7 @@
output "router_id" {
value = "${openstack_networking_router_interface_v2.k8s.id}"
}
output "network_id" {
value = "${openstack_networking_subnet_v2.k8s.id}"
}

View File

@@ -0,0 +1,13 @@
variable "external_net" {
}
variable "network_name" {
}
variable "cluster_name" {
}
variable "dns_nameservers"{
type = "list"
}

View File

@@ -2,6 +2,10 @@ variable "cluster_name" {
default = "example"
}
variable "number_of_bastions" {
default = 1
}
variable "number_of_k8s_masters" {
default = 2
}
@@ -63,19 +67,28 @@ variable "ssh_user_gfs" {
default = "ubuntu"
}
variable "flavor_bastion" {
description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
default = 3
}
variable "flavor_k8s_master" {
description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
default = 3
}
variable "flavor_k8s_node" {
description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
default = 3
}
variable "flavor_etcd" {
description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
default = 3
}
variable "flavor_gfs_node" {
description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
default = 3
}
@@ -84,11 +97,21 @@ variable "network_name" {
default = "internal"
}
variable "dns_nameservers"{
description = "An array of DNS name server names used by hosts in this subnet."
type = "list"
default = []
}
variable "floatingip_pool" {
description = "name of the floating ip pool to use"
default = "external"
}
variable "external_net" {
description = "uuid of the external/public network"
}
variable "username" {
description = "Your openstack username"
}

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env python
#!/usr/bin/env python2
#
# Copyright 2015 Cisco Systems, Inc.
#
@@ -70,6 +70,14 @@ def iterhosts(resources):
yield parser(resource, module_name)
def iterips(resources):
'''yield ip tuples of (instance_id, ip)'''
for module_name, key, resource in resources:
resource_type, name = key.split('.', 1)
if resource_type == 'openstack_compute_floatingip_associate_v2':
yield openstack_floating_ips(resource)
def parses(prefix):
def inner(func):
PARSERS[prefix] = func
@@ -298,6 +306,17 @@ def softlayer_host(resource, module_name):
return name, attrs, groups
def openstack_floating_ips(resource):
raw_attrs = resource['primary']['attributes']
attrs = {
'ip': raw_attrs['floating_ip'],
'instance_id': raw_attrs['instance_id'],
}
return attrs
def openstack_floating_ips(resource):
raw_attrs = resource['primary']['attributes']
return raw_attrs['instance_id'], raw_attrs['floating_ip']
@parses('openstack_compute_instance_v2')
@calculate_mantl_vars
@@ -343,6 +362,8 @@ def openstack_host(resource, module_name):
except (KeyError, ValueError):
attrs.update({'ansible_ssh_host': '', 'publicly_routable': False})
# Handling of floating IPs has changed: https://github.com/terraform-providers/terraform-provider-openstack/blob/master/CHANGELOG.md#010-june-21-2017
# attrs specific to Ansible
if 'metadata.ssh_user' in raw_attrs:
attrs['ansible_ssh_user'] = raw_attrs['metadata.ssh_user']
@@ -656,6 +677,19 @@ def clc_server(resource, module_name):
return name, attrs, groups
def iter_host_ips(hosts, ips):
'''Update hosts that have an entry in the floating IP list'''
for host in hosts:
host_id = host[1]['id']
if host_id in ips:
ip = ips[host_id]
host[1].update({
'access_ip_v4': ip,
'public_ipv4': ip,
'ansible_ssh_host': ip,
})
yield host
## QUERY TYPES
def query_host(hosts, target):
@@ -727,6 +761,13 @@ def main():
parser.exit()
hosts = iterhosts(iterresources(tfstates(args.root)))
# Perform a second pass on the file to pick up floating_ip entries to update the ip address of referenced hosts
ips = dict(iterips(iterresources(tfstates(args.root))))
if ips:
hosts = iter_host_ips(hosts, ips)
if args.list:
output = query_list(hosts)
if args.nometa:

View File

@@ -157,7 +157,7 @@ ansible-playbook -i inventory/inventory.ini cluster.yml --tags preinstall,dnsma
```
And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:
```
ansible-playbook -i inventory/inventory.ini -e dns_server='' cluster.yml --tags resolvconf
ansible-playbook -i inventory/inventory.ini -e dnsmasq_dns_server='' cluster.yml --tags resolvconf
```
And this prepares all container images locally (at the ansible runner node) without installing
or upgrading related stuff or trying to upload containers to K8s cluster nodes:

74
docs/contiv.md Normal file
View File

@@ -0,0 +1,74 @@
Contiv
======
Here is the [Contiv documentation](http://contiv.github.io/documents/).
## Administrate Contiv
There are two ways to manage Contiv:
* a web UI managed by the api proxy service
* a CLI named `netctl`
### Interfaces
#### The Web Interface
This UI is hosted on all kubernetes master nodes. The service is available at `https://<one of your master nodes>:10000`.
You can configure the api proxy by overriding the following variables:
```yaml
contiv_enable_api_proxy: true
contiv_api_proxy_port: 10000
contiv_generate_certificate: true
```
The default credentials to log in are: admin/admin.
#### The Command Line Interface
The second way to modify the Contiv configuration is to use the CLI. To do this, you have to connect to the server and export an environment variable to tell netctl how to connect to the cluster:
```bash
export NETMASTER=http://127.0.0.1:9999
```
The port can be changed by overriding the following variable:
```yaml
contiv_netmaster_port: 9999
```
The CLI doesn't use the authentication process needed by the web interface.
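For example, a quick way to confirm the CLI can reach the netmaster is to list the configured networks (a standard netctl subcommand, shown here only as a sanity check):
```bash
# Point netctl at the netmaster, then list the networks it knows about
export NETMASTER=http://127.0.0.1:9999
netctl network ls
```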
### Network configuration
The default configuration uses VXLAN to create an overlay. Two networks are created by default:
* `contivh1`: an infrastructure network. It allows nodes to access the pods IPs. It is mandatory in a Kubernetes environment that uses VXLAN.
* `default-net` : the default network that hosts pods.
You can change the default network configuration by overriding the `contiv_networks` variable.
The default forward mode is set to routing:
```yaml
contiv_fwd_mode: routing
```
The following is an example of how you can use VLAN instead of VXLAN:
```yaml
contiv_fwd_mode: bridge
contiv_vlan_interface: eth0
contiv_networks:
- name: default-net
subnet: "{{ kube_pods_subnet }}"
gateway: "{{ kube_pods_subnet|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
encap: vlan
pkt_tag: 10
```

38
docs/debian.md Normal file
View File

@@ -0,0 +1,38 @@
Debian Jessie
===============
Debian Jessie installation Notes:
- Add
```GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"```
to /etc/default/grub. Then update with
```
sudo update-grub
sudo update-grub2
sudo reboot
```
- Add the [backports](https://backports.debian.org/Instructions/) which contain systemd 230 and update systemd.
```apt-get -t jessie-backports install systemd```
(Necessary because the default systemd version (215) does not support the "Delegate" directive in service files)
- Add the Ansible repository and install Ansible to get a proper version
```
sudo add-apt-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible
```
- Install Jinja2 and Python-Netaddr
```sudo apt-get install python-jinja2=2.8-1~bpo8+1 python-netaddr```
Now you can continue with [Preparing your deployment](getting-started.md#starting-custom-deployment)

View File

@@ -50,7 +50,7 @@ DNS modes supported by Kubespray
You can modify how Kubespray sets up DNS for your cluster with the variables ``dns_mode`` and ``resolvconf_mode``.
## dns_mode
``dns_mode`` configures how Kubespray will setup cluster DNS. There are three modes available:
``dns_mode`` configures how Kubespray will setup cluster DNS. There are four modes available:
#### dnsmasq_kubedns (default)
This installs an additional dnsmasq DaemonSet which gives more flexibility and lifts some
@@ -62,6 +62,12 @@ other queries are forwarded to the nameservers found in ``upstream_dns_servers``
This does not install the dnsmasq DaemonSet and instructs kubelet to directly use kubedns/skydns for
all queries.
#### manual
This does not install dnsmasq or kubedns, but allows you to specify
`manual_dns_server`, which will be configured on nodes for handling Pod DNS.
Use this method if you plan to install your own DNS server in the cluster after
initial deployment.
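As a sketch, one way to select this mode is to pass the variables on the command line; the inventory path matches the examples elsewhere in these docs, and the DNS server address is a placeholder:
```bash
# dns_mode and manual_dns_server are the variables described above;
# 10.0.0.2 is a placeholder for your own DNS server
ansible-playbook -i inventory/inventory.ini cluster.yml \
  -e dns_mode=manual -e manual_dns_server=10.0.0.2
```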
#### none
This does not install dnsmasq or kubedns/skydns. This basically disables cluster DNS completely and
leaves you with a non-functional cluster.

View File

@@ -75,7 +75,7 @@ kube-apiserver via port 8080. A kubeconfig file is not necessary in this case,
because kubectl will use http://localhost:8080 to connect. The kubeconfig files
generated will point to localhost (on kube-masters) and kube-node hosts will
connect either to a localhost nginx proxy or to a loadbalancer if configured.
More details on this process are in the [HA guide](ha.md).
More details on this process are in the [HA guide](ha-mode.md).
Kubespray permits connecting to the cluster remotely on any IP of any
kube-master host on port 6443 by default. However, this requires
@@ -93,14 +93,19 @@ the Kubernetes [documentation](https://kubernetes.io/docs/tasks/access-applicati
Accessing Kubernetes Dashboard
------------------------------
If the variable `dashboard_enabled` is set (default is true), then you can
access the Kubernetes Dashboard at the following URL:
As of kubernetes-dashboard v1.7.x:
* New login options that use apiserver auth proxying of token/basic/kubeconfig by default
* Requires RBAC in authorization_modes
* Only serves over https
* No longer available at https://first_master:6443/ui until apiserver is updated with the https proxy URL
https://kube:_kube-password_@_host_:6443/ui/
If the variable `dashboard_enabled` is set (default is true), then you can access the Kubernetes Dashboard at the following URL; you will be prompted for credentials:
https://first_master:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
To see the password, refer to the section above, titled *Connecting to
Kubernetes*. The host can be any kube-master or kube-node or loadbalancer
(when enabled).
Or you can run 'kubectl proxy' from your local machine to access the dashboard in your browser at:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
It is recommended to access the dashboard from behind a gateway (like an Ingress Controller) that enforces an authentication token. Details and other access options here: https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above
Accessing Kubernetes API
------------------------

View File

@@ -27,19 +27,21 @@ non-master Kubernetes node. This is referred to as localhost loadbalancing. It
is less efficient than a dedicated load balancer because it creates extra
health checks on the Kubernetes apiserver, but is more practical for scenarios
where an external LB or virtual IP management is inconvenient. This option is
configured by the variable `loadbalancer_apiserver_localhost` (defaults to `True`).
configured by the variable `loadbalancer_apiserver_localhost` (defaults to
`True`, or `False` if an external `loadbalancer_apiserver` is defined).
You may also define the port the local internal loadbalancer uses by changing,
`nginx_kube_apiserver_port`. This defaults to the value of `kube_apiserver_port`.
It is also important to note that Kubespray will only configure kubelet and kube-proxy
on non-master nodes to use the local internal loadbalancer.
`nginx_kube_apiserver_port`. This defaults to the value of
`kube_apiserver_port`. It is also important to note that Kubespray will only
configure kubelet and kube-proxy on non-master nodes to use the local internal
loadbalancer.
If you choose to NOT use the local internal loadbalancer, you will need to configure
your own loadbalancer to achieve HA. Note that deploying a loadbalancer is up to
a user and is not covered by ansible roles in Kubespray. By default, it only configures
a non-HA endpoint, which points to the `access_ip` or IP address of the first server
node in the `kube-master` group. It can also configure clients to use endpoints
for a given loadbalancer type. The following diagram shows how traffic to the
apiserver is directed.
If you choose to NOT use the local internal loadbalancer, you will need to
configure your own loadbalancer to achieve HA. Note that deploying a
loadbalancer is up to a user and is not covered by ansible roles in Kubespray.
By default, it only configures a non-HA endpoint, which points to the
`access_ip` or IP address of the first server node in the `kube-master` group.
It can also configure clients to use endpoints for a given loadbalancer type.
The following diagram shows how traffic to the apiserver is directed.
![Image](figures/loadbalancer_localhost.png?raw=true)
@@ -66,40 +68,72 @@ listen kubernetes-apiserver-https
balance roundrobin
```
And the corresponding example global vars config:
Note: That's an example config managed elsewhere outside of Kubespray.
And the corresponding example global vars for such a "cluster-aware"
external LB with the cluster API access modes configured in Kubespray:
```
apiserver_loadbalancer_domain_name: "lb-apiserver.kubernetes.local"
apiserver_loadbalancer_domain_name: "my-apiserver-lb.example.com"
loadbalancer_apiserver:
address: <VIP>
port: 8383
```
Note: The default kubernetes apiserver configuration binds to all interfaces,
so you will need to use a different port for the VIP than the one the API is
listening on, or set `kube_apiserver_bind_address` so that the API only
listens on a specific interface (to avoid a conflict with haproxy binding the
port on the VIP address).
This domain name, or default "lb-apiserver.kubernetes.local", will be inserted
into the `/etc/hosts` file of all servers in the `k8s-cluster` group. Note that
into the `/etc/hosts` file of all servers in the `k8s-cluster` group and wired
into the generated self-signed TLS/SSL certificates as well. Note that
the HAProxy service should also be HA and requires VIP management, which
is out of scope of this doc. Specifying an external LB overrides any internal
localhost LB configuration.
is out of scope of this doc.
Note: In order to achieve HA for HAProxy instances, those must be running on
each node in the `k8s-cluster` group as well, but require no VIP, thus
no VIP management.
There is a special case for an internal and an externally configured (not with
Kubespray) LB used simultaneously. Keep in mind that the cluster is not aware
of such an external LB and you do not need to specify any configuration variables
for it.
Access endpoints are evaluated automagically, as follows:
Note: TLS/SSL termination for externally accessed API endpoints will **not**
be covered by Kubespray for that case. Make sure your external LB provides it.
Alternatively you may specify externally load balanced VIPs in the
`supplementary_addresses_in_ssl_keys` list. Then, kubespray will add them to
the generated cluster certificates as well.
| Endpoint type | kube-master | non-master |
|------------------------------|---------------|---------------------|
| Local LB (default) | http://lc:p | https://lc:nsp |
| External LB, no internal | https://lb:lp | https://lb:lp |
| No ext/int LB | http://lc:p | https://m[0].aip:sp |
Aside from that specific case, `loadbalancer_apiserver` is considered mutually
exclusive with `loadbalancer_apiserver_localhost`.
Access API endpoints are evaluated automagically, as follows:
| Endpoint type | kube-master | non-master | external |
|------------------------------|----------------|---------------------|---------------------|
| Local LB (default) | https://bip:sp | https://lc:nsp | https://m[0].aip:sp |
| Local LB + Unmanaged here LB | https://bip:sp | https://lc:nsp | https://ext |
| External LB, no internal | https://bip:sp | https://lb:lp | https://lb:lp |
| No ext/int LB | https://bip:sp | https://m[0].aip:sp | https://m[0].aip:sp |
Where:
* `m[0]` - the first node in the `kube-master` group;
* `lb` - LB FQDN, `apiserver_loadbalancer_domain_name`;
* `ext` - Externally load balanced VIP:port and FQDN, not managed by Kubespray;
* `lc` - localhost;
* `p` - insecure port, `kube_apiserver_insecure_port`
* `nsp` - nginx secure port, `nginx_kube_apiserver_port`;
* `bip` - a custom bind IP or localhost for the default bind IP '0.0.0.0';
* `nsp` - nginx secure port, `nginx_kube_apiserver_port`, defers to `sp`;
* `sp` - secure port, `kube_apiserver_port`;
* `lp` - LB port, `loadbalancer_apiserver.port`, defers to the secure port;
* `ip` - the node IP, defers to the ansible IP;
* `aip` - `access_ip`, defers to the ip.
The second and third columns represent internal cluster access modes. The last
column illustrates an example URI to access the cluster APIs externally.
Kubespray has nothing to do with it; this is informational only.
As you can see, the masters' internal API endpoints are always
contacted via the local bind IP, which is `https://bip:sp`.
**Note** that for some cases, like healthchecks of applications deployed by
Kubespray, the masters' APIs are accessed via the insecure endpoint, which
consists of the local `kube_apiserver_insecure_bind_address` and
`kube_apiserver_insecure_port`.

View File

@@ -35,7 +35,7 @@
7. Modify path to library and roles in your ansible.cfg file (role naming should be unique; you may have to rename your existing roles if they have the same names as in the kubespray project):
```
...
library = 3d/kubespray/library/
library = 3d/kubespray/library/
roles_path = 3d/kubespray/roles/
...
```
@@ -73,7 +73,7 @@ You could rename *all.yml* config to something else, i.e. *kubespray.yml* and cr
10. Now you can include kargo tasks in your existing playbooks by including the cluster.yml file:
```
- name: Include kargo tasks
include: 3d/kubespray/cluster.yml
include: 3d/kubespray/cluster.yml
```
Or you could copy separate tasks from cluster.yml into your ansible repository.
@@ -84,7 +84,7 @@ Other members of your team should use ```git submodule sync```, ```git submodule
# Contributing
If you made useful changes or fixed a bug in the existing kubespray repo, use this flow for PRs to the original kubespray repo.
0. Sign the [CNCF CLA](https://github.com/kubernetes/kubernetes/wiki/CLA-FAQ).
0. Sign the [CNCF CLA](https://git.k8s.io/community/CLA.md).
1. Change working directory to git submodule directory (3d/kubespray).

View File

@@ -34,6 +34,9 @@ For large scale deployments, consider the following configuration changes:
``kube_controller_pod_eviction_timeout`` for better Kubernetes reliability.
Check out [Kubernetes Reliability](kubernetes-reliability.md)
* Tune network prefix sizes. Those are ``kube_network_node_prefix``,
``kube_service_addresses`` and ``kube_pods_subnet``.
* Add calico-rr nodes if you are deploying with Calico or Canal. Nodes recover
from host/network interruption much quicker with calico-rr. Note that
calico-rr role must be on a host without kube-master or kube-node role (but

View File

@@ -0,0 +1,67 @@
# Local Storage Provisioner
The local storage provisioner is NOT a dynamic storage provisioner as you would
expect from a cloud provider. Instead, it simply creates PersistentVolumes for
all manually created volumes located in the directory `local_volume_base_dir`.
The default path is /mnt/disks and the rest of this doc will use that path as
an example.
## Examples to create local storage volumes
### tmpfs method:
```
for vol in vol1 vol2 vol3; do
mkdir /mnt/disks/$vol
mount -t tmpfs -o size=5G $vol /mnt/disks/$vol
done
```
The tmpfs method is not recommended for production because the mount is not
persistent and data will be deleted on reboot.
### Mount physical disks
```
mkdir /mnt/disks/ssd1
mount /dev/vdb1 /mnt/disks/ssd1
```
Physical disks are recommended for production environments because they offer
complete isolation in terms of I/O and capacity.
### File-backed sparsefile method
```
truncate /mnt/disks/disk5 --size 2G
mkfs.ext4 /mnt/disks/disk5
mkdir /mnt/disks/vol5
mount /mnt/disks/disk5 /mnt/disks/vol5
```
If you have a development environment and only one disk, this is the best way
to limit the quota of persistent volumes.
### Simple directories
```
for vol in vol6 vol7 vol8; do
mkdir /mnt/disks/$vol
done
```
This is also acceptable in a development environment, but there is no capacity
management.
## Usage notes
The volume provisioner cannot calculate volume sizes correctly, so you should
delete the daemonset pod on the relevant host after creating volumes. The pod
will be recreated and read the size correctly.
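A sketch of that step (the namespace and pod name are assumptions; check how the provisioner is actually deployed in your cluster):
```bash
# Find the provisioner pod running on the host where you added volumes ...
kubectl -n kube-system get pods -o wide | grep local-volume-provisioner
# ... and delete it so it restarts and re-reads the volume sizes
kubectl -n kube-system delete pod <provisioner-pod-on-that-host>
```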
Make sure to make any mounts persist via /etc/fstab or with systemd mounts (for
CoreOS/Container Linux). Pods with persistent volume claims will not be
able to start if the mounts become unavailable.
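For example, a mount created with Ansible's `mount` module records itself in /etc/fstab; a minimal sketch using the physical-disk example from above:
```
# Sketch only: mount the example disk and record it in /etc/fstab so it
# survives a reboot (device and path match the examples in this doc).
- name: Persist local volume mount
  mount:
    path: /mnt/disks/ssd1
    src: /dev/vdb1
    fstype: ext4
    state: mounted
```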
## Further reading
Refer to the upstream docs here: https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume

View File

@@ -2,8 +2,9 @@ Kubespray's roadmap
=================
### Kubeadm
- Propose kubeadm as an option in order to setup the kubernetes cluster.
That would probably improve deployment speed and certs management [#553](https://github.com/kubespray/kubespray/issues/553)
- Switch to kubeadm deployment as the default method after some bugs are fixed:
* Support for basic auth
* cloudprovider cloud-config mount [#484](https://github.com/kubernetes/kubeadm/issues/484)
### Self deployment (pull-mode) [#320](https://github.com/kubespray/kubespray/issues/320)
- the playbook would install and configure docker/rkt and the etcd cluster
@@ -12,60 +13,35 @@ That would probably improve deployment speed and certs management [#553](https:/
- to be discussed, a way to provide the inventory
- **self deployment** of the node from inside a container [#321](https://github.com/kubespray/kubespray/issues/321)
### Provisionning and cloud providers
### Provisioning and cloud providers
- [ ] Terraform to provision instances on **GCE, AWS, Openstack, Digital Ocean, Azure**
- [ ] On AWS autoscaling, multi AZ
- [ ] On Azure autoscaling, create loadbalancer [#297](https://github.com/kubespray/kubespray/issues/297)
- [ ] On GCE be able to create a loadbalancer automatically (IAM ?) [#280](https://github.com/kubespray/kubespray/issues/280)
- [x] **TLS boostrap** support for kubelet [#234](https://github.com/kubespray/kubespray/issues/234)
- [x] **TLS boostrap** support for kubelet (covered by kubeadm, but not in standard deployment) [#234](https://github.com/kubespray/kubespray/issues/234)
(related issues: https://github.com/kubernetes/kubernetes/pull/20439 <br>
https://github.com/kubernetes/kubernetes/issues/18112)
### Tests
- [x] Run kubernetes e2e tests
- [x] migrate to jenkins
(a test is currently a deployment on a 3 node cluste, testing k8s api, ping between 2 pods)
- [x] Full tests on GCE per day (All OS's, all network plugins)
- [x] trigger a single test per pull request
- [ ] ~~single test with the Ansible version n-1 per day~~
- [x] Test idempotency on on single OS but for all network plugins/container engines
- [ ] Run kubernetes e2e tests
- [ ] Test idempotency on on single OS but for all network plugins/container engines
- [ ] single test on AWS per day
- [x] test different architectures:
- 3 instances, 3 are members of the etcd cluster, 2 of them acting as master and node, 1 as node
- 5 instances, 3 are etcd and nodes, 2 are masters only
- 7 instances, 3 etcd only, 2 masters, 2 nodes
- [ ] test scale up cluster: +1 etcd, +1 master, +1 node
- [ ] Reorganize CI test vars into group var files
### Lifecycle
- [ ] Adopt the kubeadm tool by delegating CM tasks it is capable to accomplish well [#553](https://github.com/kubespray/kubespray/issues/553)
- [x] Drain worker node when upgrading k8s components in a worker node. [#154](https://github.com/kubespray/kubespray/issues/154)
- [ ] Drain worker node when shutting down/deleting an instance
- [ ] Upgrade granularity: select components to upgrade and skip others
### Networking
- [ ] romana.io support [#160](https://github.com/kubespray/kubespray/issues/160)
- [ ] Configure network policy for Calico. [#159](https://github.com/kubespray/kubespray/issues/159)
- [ ] Opencontrail
- [x] Canal
- [x] Cloud Provider native networking (instead of our network plugins)
### High availability
- (to be discussed) option to set a loadbalancer for the apiservers like ucarp/pacemaker/keepalived
While waiting for the issue [kubernetes/kubernetes#18174](https://github.com/kubernetes/kubernetes/issues/18174) to be fixed.
### Kubespray-cli
- Delete instances
- `kubespray vagrant` to setup a test cluster locally
- `kubespray azure` for Microsoft Azure support
- switch to Terraform instead of Ansible for provisionning
- update $HOME/.kube/config when a cluster is deployed. Optionally switch to this context
- [ ] Consolidate network_plugins and kubernetes-apps/network_plugins
### Kubespray API
- Perform all actions through an **API**
- Store inventories / configurations of multiple clusters
- make sure that state of cluster is completely saved in no more than one config file beyond hosts inventory
### Addons (with kpm)
### Addons (helm or native ansible)
Include optional deployments to init the cluster:
##### Monitoring
- Heapster / Grafana ....
@@ -85,10 +61,10 @@ Include optionals deployments to init the cluster:
- Deis Workflow
### Others
- remove nodes (adding is already supported)
- being able to choose any k8s version (almost done)
- **rkt** support [#59](https://github.com/kubespray/kubespray/issues/59)
- Review documentation (split in categories)
- remove nodes (adding is already supported)
- Organize and update documentation (split in categories)
- Refactor downloads so it all runs in the beginning of deployment
- Make bootstrapping OS more consistent
- **consul** -> if officially supported by k8s
- flex volumes options (e.g. **torrus** support) [#312](https://github.com/kubespray/kubespray/issues/312)
- Clusters federation option (aka **ubernetes**) [#329](https://github.com/kubespray/kubespray/issues/329)

View File

@@ -1,7 +1,7 @@
Vagrant Install
=================
Assuming you have Vagrant (1.8+) installed with virtualbox (it may work
Assuming you have Vagrant (1.9+) installed with virtualbox (it may work
with vmware, but is untested) you should be able to launch a 3 node
Kubernetes cluster by simply running `$ vagrant up`.<br />

View File

@@ -28,6 +28,7 @@ Some variables of note include:
* *kube_version* - Specify a given Kubernetes hyperkube version
* *searchdomains* - Array of DNS domains to search when looking up hostnames
* *nameservers* - Array of nameservers to use for DNS lookup
* *preinstall_selinux_state* - Set selinux state, permitted values are permissive and disabled.
#### Addressing variables
@@ -61,7 +62,7 @@ following default cluster paramters:
* *kube_network_node_prefix* - Subnet allocated per-node for pod IPs. Remaining
bits in kube_pods_subnet dictate how many kube-nodes can be in the cluster.
* *dns_setup* - Enables dnsmasq
* *dns_server* - Cluster IP for dnsmasq (default is 10.233.0.2)
* *dnsmasq_dns_server* - Cluster IP for dnsmasq (default is 10.233.0.2)
* *skydns_server* - Cluster IP for KubeDNS (default is 10.233.0.3)
* *cloud_provider* - Enable extra Kubelet option if operating inside GCE or
OpenStack (default is unset)
@@ -71,9 +72,12 @@ following default cluster paramters:
alpha/experimental Kubernetes features. (default is `[]`)
* *authorization_modes* - A list of [authorization mode](
https://kubernetes.io/docs/admin/authorization/#using-flags-for-your-authorization-module)
that the cluster should be configured for. Defaults to `[]` (i.e. no authorization).
Note: `RBAC` is currently in experimental phase, and do not support either calico or
vault. Upgrade from non-RBAC to RBAC is not tested.
that the cluster should be configured for. Defaults to `['Node', 'RBAC']`
(Node and RBAC authorizers).
Note: `Node` and `RBAC` are enabled by default. Previously deployed clusters can be
converted to RBAC mode. However, your apps which rely on Kubernetes API will
require a service account and cluster role bindings. You can override this
setting by setting authorization_modes to `[]`.
Note, if cloud providers have any use of the ``10.233.0.0/16``, like instances'
private addresses, make sure to pick other values for ``kube_service_addresses``
@@ -99,7 +103,8 @@ Stack](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/dns-st
* *docker_options* - Commonly used to set
``--insecure-registry=myregistry.mydomain:5000``
* *http_proxy/https_proxy/no_proxy* - Proxy variables for deploying behind a
proxy
proxy. Note that no_proxy defaults to all internal cluster IPs and hostnames
that correspond to each node.
* *kubelet_deployment_type* - Controls which platform to deploy kubelet on.
Available options are ``host``, ``rkt``, and ``docker``. ``docker`` mode
is unlikely to work on newer releases. Starting with Kubernetes v1.7

View File

@@ -24,7 +24,7 @@ hardcoded to only create a Vault role for Etcd.
This step is where the long-term Vault cluster is started and configured. Its
first task is to stop any temporary instances of Vault, to free the port for
the long-term. At the end of this task, the entire Vault cluster should be up
and read to go.
and ready to go.
Keys to the Kingdom
-------------------

View File

@@ -34,10 +34,12 @@ Then, in the same file, you need to declare your vCenter credential following th
| vsphere_datastore | TRUE | string | | | Datastore name to use |
| vsphere_working_dir | TRUE | string | | | Working directory from the view "VMs and template" in the vCenter where VM are placed |
| vsphere_scsi_controller_type | TRUE | string | buslogic, pvscsi, parallel | pvscsi | SCSI controller name. Commonly "pvscsi". |
| vsphere_vm_uuid | FALSE | string | | | VM Instance UUID of virtual machine that host K8s master. Can be retrieved from instanceUuid property in VmConfigInfo, or as vc.uuid in VMX file or in `/sys/class/dmi/id/product_serial` |
| vsphere_vm_uuid | FALSE | string | | | VM Instance UUID of virtual machine that host K8s master. Can be retrieved from instanceUuid property in VmConfigInfo, or as vc.uuid in VMX file or in `/sys/class/dmi/id/product_serial` (Optional, only used for Kubernetes <= 1.9.2) |
| vsphere_public_network | FALSE | string | | Blank | Name of the network the VMs are joined to |
| vsphere_resource_pool | FALSE | string | | Blank | Name of the Resource pool where the VMs are located (Optional, only used for Kubernetes >= 1.9.2) |
Example configuration
```yml
vsphere_vcenter_ip: "myvcenter.domain.com"
vsphere_vcenter_port: 443
@@ -48,6 +50,7 @@ vsphere_datacenter: "DATACENTER_name"
vsphere_datastore: "DATASTORE_name"
vsphere_working_dir: "Docker_hosts"
vsphere_scsi_controller_type: "pvscsi"
vsphere_resource_pool: "K8s-Pool"
```
## Deployment

View File

@@ -91,7 +91,7 @@ weave_peers: uninitialized
The first variable, `weave_seed`, contains the initial nodes of the weave network
The seconde variable, `weave_peers`, saves the IPs of all nodes joined to the weave network
The second variable, `weave_peers`, saves the IPs of all nodes joined to the weave network
These two variables are used to connect a new node to the weave network. The new node needs to know the first nodes (seed) and the list of IPs of all nodes.

View File

@@ -3,6 +3,7 @@
### * Will not upgrade etcd
### * Will not upgrade network plugins
### * Will not upgrade Docker
### * Will not pre-download containers or kubeadm
### * Currently does not support Vault deployment.
###
### In most cases, you probably want to use upgrade-cluster.yml playbook and
@@ -46,6 +47,8 @@
- { role: upgrade/pre-upgrade, tags: pre-upgrade }
- { role: kubernetes/node, tags: node }
- { role: kubernetes/master, tags: master }
- { role: kubernetes/client, tags: client }
- { role: kubernetes-apps/cluster_roles, tags: cluster-roles }
- { role: upgrade/post-upgrade, tags: post-upgrade }
#Finally handle worker upgrades, based on given batch size

View File

@@ -56,7 +56,7 @@ bin_dir: /usr/local/bin
## There are some changes specific to the cloud providers
## for instance we need to encapsulate packets with some network plugins
## If set the possible values are either 'gce', 'aws', 'azure', 'openstack', or 'vsphere'
## If set the possible values are either 'gce', 'aws', 'azure', 'openstack', 'vsphere', or 'external'
## When openstack is used make sure to source in the openstack credentials
## like you would do when using nova-client before starting the playbook.
#cloud_provider:
@@ -74,12 +74,17 @@ bin_dir: /usr/local/bin
#azure_vnet_name:
#azure_route_table_name:
## When OpenStack is used, Cinder version can be explicitly specified if autodetection fails (https://github.com/kubernetes/kubernetes/issues/50461)
## When OpenStack is used, Cinder version can be explicitly specified if autodetection fails (Fixed in 1.9: https://github.com/kubernetes/kubernetes/issues/50461)
#openstack_blockstorage_version: "v1/v2/auto (default)"
## When OpenStack is used, if LBaaSv2 is available you can enable it with the following variables.
## When OpenStack is used, if LBaaSv2 is available you can enable it with the following 2 variables.
#openstack_lbaas_enabled: True
#openstack_lbaas_subnet_id: "Neutron subnet ID (not network ID) to create LBaaS VIP"
## To enable automatic floating ip provisioning, specify a subnet.
#openstack_lbaas_floating_network_id: "Neutron network ID (not subnet ID) to get floating IP from, disabled by default"
## Override default LBaaS behavior
#openstack_lbaas_use_octavia: False
#openstack_lbaas_method: "ROUND_ROBIN"
#openstack_lbaas_provider: "haproxy"
#openstack_lbaas_create_monitor: "yes"
#openstack_lbaas_monitor_delay: "1m"
#openstack_lbaas_monitor_timeout: "30s"
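For reference, enabling the LBaaSv2 integration with the variables above might look like the following sketch; the subnet and network IDs are placeholders, not real values.
```
# Sketch only: LBaaSv2 enabled, reusing the defaults documented above.
openstack_lbaas_enabled: True
openstack_lbaas_subnet_id: "00000000-0000-0000-0000-000000000000"
openstack_lbaas_floating_network_id: "11111111-1111-1111-1111-111111111111"
openstack_lbaas_method: "ROUND_ROBIN"
openstack_lbaas_create_monitor: "yes"
openstack_lbaas_monitor_delay: "1m"
openstack_lbaas_monitor_timeout: "30s"
```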
@@ -91,9 +96,10 @@ bin_dir: /usr/local/bin
#kubeadm_token_second: "{{ lookup('password', 'credentials/kubeadm_token_second length=16 chars=ascii_lowercase,digits') }}"
#kubeadm_token: "{{ kubeadm_token_first }}.{{ kubeadm_token_second }}"
#
## Set these proxy values in order to update docker daemon to use proxies
## Set these proxy values in order to update package manager and docker daemon to use proxies
#http_proxy: ""
#https_proxy: ""
## Refer to roles/kubespray-defaults/defaults/main.yml before modifying no_proxy
#no_proxy: ""
## Uncomment this if you want to force overlay/overlay2 as docker storage driver
@@ -113,9 +119,6 @@ bin_dir: /usr/local/bin
## as a backend). Options are "script" or "vault"
#cert_management: script
## Please specify true if you want to perform a kernel upgrade
kernel_upgrade: false
# Set to true to allow pre-checks to fail and continue deployment
#ignore_assert_errors: false

View File

@@ -8,9 +8,6 @@ kube_script_dir: "{{ bin_dir }}/kubernetes-scripts"
kube_manifest_dir: "{{ kube_config_dir }}/manifests"
system_namespace: kube-system
# Logging directory (sysvinit systems)
kube_log_dir: "/var/log/kubernetes"
# This is where all the cert scripts and certs will be located
kube_cert_dir: "{{ kube_config_dir }}/ssl"
@@ -20,10 +17,10 @@ kube_token_dir: "{{ kube_config_dir }}/tokens"
# This is where to save basic auth file
kube_users_dir: "{{ kube_config_dir }}/users"
kube_api_anonymous_auth: false
kube_api_anonymous_auth: true
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.7.5
kube_version: v1.9.2
# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
@@ -50,8 +47,8 @@ kube_users:
## It is possible to activate / deactivate selected authentication methods (basic auth, static token auth)
#kube_oidc_auth: false
#kube_basic_auth: true
#kube_token_auth: true
#kube_basic_auth: false
#kube_token_auth: false
## Variables for OpenID Connect Configuration https://kubernetes.io/docs/admin/authentication/
@@ -65,7 +62,7 @@ kube_users:
# kube_oidc_groups_claim: groups
# Choose network plugin (calico, weave or flannel)
# Choose network plugin (calico, contiv, weave or flannel)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: calico
@@ -106,21 +103,26 @@ kube_network_node_prefix: 24
kube_apiserver_ip: "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
kube_apiserver_port: 6443 # (https)
kube_apiserver_insecure_port: 8080 # (http)
# Set to 0 to disable insecure port - Requires RBAC in authorization_modes and kube_api_anonymous_auth: true
#kube_apiserver_insecure_port: 0 # (disabled)
# DNS configuration.
# Kubernetes cluster name, also will be used as DNS domain
cluster_name: cluster.local
# Subdomains of DNS domain to be resolved via /etc/resolv.conf for hostnet pods
ndots: 2
# Can be dnsmasq_kubedns, kubedns or none
# Can be dnsmasq_kubedns, kubedns, manual or none
dns_mode: kubedns
# Set manual server if using a custom cluster DNS server
#manual_dns_server: 10.x.x.x
# Can be docker_dns, host_resolvconf or none
resolvconf_mode: docker_dns
# Deploy netchecker app to verify DNS resolve as an HTTP service
deploy_netchecker: false
# Ip address of the kubernetes skydns service
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"
dnsmasq_dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"
# Path used to store Docker data
@@ -137,13 +139,14 @@ docker_bin_dir: "/usr/bin"
# Settings for containerized control plane (etcd/kubelet/secrets)
etcd_deployment_type: docker
kubelet_deployment_type: host
cert_management: script
vault_deployment_type: docker
helm_deployment_type: host
# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent
# Kubernetes dashboard (available at http://first_master:6443/ui by default)
# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
dashboard_enabled: true
# Monitoring apps for k8s
@@ -152,6 +155,15 @@ efk_enabled: false
# Helm deployment
helm_enabled: false
# Istio deployment
istio_enabled: false
# Local volume provisioner deployment
local_volumes_enabled: false
# Add Persistent Volumes Storage Class for corresponding cloud provider ( OpenStack is only supported now )
persistent_volumes_enabled: false
# Make a copy of kubeconfig on the host that runs Ansible in GITDIR/artifacts
# kubeconfig_localhost: false
# Download kubectl onto the host that runs Ansible in GITDIR/artifacts
@@ -166,5 +178,14 @@ helm_enabled: false
# kubelet_cgroups_per_qos: true
# A comma separated list of levels of node allocatable enforcement to be enforced by kubelet.
# Acceptible options are 'pods', 'system-reserved', 'kube-reserved' and ''. Default is "".
# Acceptable options are 'pods', 'system-reserved', 'kube-reserved' and ''. Default is "".
# kubelet_enforce_node_allocatable: pods
## Supplementary addresses that can be added in kubernetes ssl keys.
## That can be useful for example to setup a keepalived virtual IP
# supplementary_addresses_in_ssl_keys: [10.0.0.1, 10.0.0.2, 10.0.0.3]
## Running on top of openstack vms with cinder enabled may lead to unschedulable pods due to NoVolumeZoneConflict restriction in kube-scheduler.
## See https://github.com/kubernetes-incubator/kubespray/issues/2141
## Set this variable to true to get rid of this issue
volume_cross_zone_attachment: false

View File

@@ -288,8 +288,6 @@ def main():
else:
module.fail_json(msg='Unrecognized state %s.' % state)
if result:
changed = True
module.exit_json(changed=changed,
msg='success: %s' % (' '.join(result))
)

View File

@@ -1,4 +1,4 @@
pbr>=1.6
ansible>=2.3.2
ansible>=2.4.0
netaddr
jinja2>=2.9.6

View File

@@ -1,6 +1,9 @@
---
- hosts: all
gather_facts: true
- hosts: etcd:k8s-cluster:vault:calico-rr
vars_prompt:
name: "reset_confirmation"
prompt: "Are you sure you want to reset cluster state? Type 'yes' to reset your cluster."

View File

@@ -3,13 +3,13 @@
has_bastion: "{{ 'bastion' in groups['all'] }}"
- set_fact:
bastion_ip: "{{ hostvars['bastion']['ansible_ssh_host'] }}"
bastion_ip: "{{ hostvars['bastion']['ansible_host'] }}"
when: has_bastion
# As we are actually running on localhost, the ansible_ssh_user is your local user when you try to use it directly
# To figure out the real ssh user, we delegate this task to the bastion and store the ansible_ssh_user in real_user
# To figure out the real ssh user, we delegate this task to the bastion and store the ansible_user in real_user
- set_fact:
real_user: "{{ ansible_ssh_user }}"
real_user: "{{ ansible_user }}"
delegate_to: bastion
when: has_bastion
@@ -18,3 +18,4 @@
template:
src: ssh-bastion.conf
dest: "{{ playbook_dir }}/ssh-bastion.conf"
when: has_bastion
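Since these tasks now read `ansible_host` and `ansible_user` for the host named `bastion`, its host vars might be sketched as below; the address and user are placeholders.
```
# Sketch only: host vars for the inventory host "bastion", which is what
# hostvars['bastion'] above resolves. Values are placeholders.
ansible_host: 203.0.113.10
ansible_user: ubuntu
```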

View File

@@ -16,6 +16,5 @@ Host {{ bastion_ip }}
ControlPersist 5m
Host {{ vars['hosts'] }}
ProxyCommand ssh -W %h:%p {{ real_user }}@{{ bastion_ip }} {% if ansible_ssh_private_key_file is defined %}-i {{ ansible_ssh_private_key_file }}{% endif %}
StrictHostKeyChecking no
ProxyCommand ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p {{ real_user }}@{{ bastion_ip }} {% if ansible_ssh_private_key_file is defined %}-i {{ ansible_ssh_private_key_file }}{% endif %}
{% endif %}

View File

@@ -1,5 +1,4 @@
---
pypy_version: 2.4.0
pip_python_modules:
pip_python_coreos_modules:
- httplib2
- six
- six

View File

@@ -1,4 +1,4 @@
#/bin/bash
#!/bin/bash
set -e
BINDIR="/opt/bin"

View File

@@ -15,4 +15,6 @@
when: fastestmirror.stat.exists
- name: Install packages requirements for bootstrap
raw: yum -y install libselinux-python
yum:
name: libselinux-python
state: present

View File

@@ -3,7 +3,9 @@
raw: stat /opt/bin/.bootstrapped
register: need_bootstrap
failed_when: false
tags: facts
changed_when: false
tags:
- facts
- name: Bootstrap | Run bootstrap.sh
script: bootstrap.sh
@@ -11,7 +13,8 @@
- set_fact:
ansible_python_interpreter: "/opt/bin/python"
tags: facts
tags:
- facts
- name: Bootstrap | Check if we need to install pip
shell: "{{ansible_python_interpreter}} -m pip --version"
@@ -20,7 +23,8 @@
changed_when: false
check_mode: no
when: need_bootstrap.rc != 0
tags: facts
tags:
- facts
- name: Bootstrap | Copy get-pip.py
copy:
@@ -48,4 +52,4 @@
- name: Install required python modules
pip:
name: "{{ item }}"
with_items: "{{pip_python_modules}}"
with_items: "{{pip_python_coreos_modules}}"

View File

@@ -0,0 +1,24 @@
---
# raw: cat /etc/issue.net | grep '{{ bootstrap_versions }}'
- name: Bootstrap | Check if bootstrap is needed
raw: which "{{ item }}"
register: need_bootstrap
failed_when: false
changed_when: false
with_items:
- python
- pip
- dbus-daemon
tags: facts
- name: Bootstrap | Install python 2.x, pip, and dbus
raw:
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y python-minimal python-pip dbus
when:
need_bootstrap.results | map(attribute='rc') | sort | last | bool
- set_fact:
ansible_python_interpreter: "/usr/bin/python"
tags: facts

View File

@@ -5,18 +5,22 @@
raw: which "{{ item }}"
register: need_bootstrap
failed_when: false
changed_when: false
with_items:
- python
- pip
tags: facts
- dbus-daemon
tags:
- facts
- name: Bootstrap | Install python 2.x and pip
raw:
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y python-minimal python-pip
DEBIAN_FRONTEND=noninteractive apt-get install -y python-minimal python-pip dbus
when:
"{{ need_bootstrap.results | map(attribute='rc') | sort | last | bool }}"
- set_fact:
ansible_python_interpreter: "/usr/bin/python"
tags: facts
tags:
- facts

View File

@@ -1,14 +1,17 @@
---
- include: bootstrap-ubuntu.yml
- import_tasks: bootstrap-ubuntu.yml
when: bootstrap_os == "ubuntu"
- include: bootstrap-coreos.yml
- import_tasks: bootstrap-debian.yml
when: bootstrap_os == "debian"
- import_tasks: bootstrap-coreos.yml
when: bootstrap_os == "coreos"
- include: bootstrap-centos.yml
- import_tasks: bootstrap-centos.yml
when: bootstrap_os == "centos"
- include: setup-pipelining.yml
- import_tasks: setup-pipelining.yml
- name: check if atomic host
stat:

View File

@@ -1,6 +0,0 @@
---
dependencies:
- role: download
file: "{{ downloads.dnsmasq }}"
when: dns_mode == 'dnsmasq_kubedns' and download_localhost|default(false)
tags: [download, dnsmasq]

View File

@@ -3,13 +3,15 @@
file:
path: /etc/dnsmasq.d
state: directory
tags: bootstrap-os
tags:
- bootstrap-os
- name: ensure dnsmasq.d-available directory exists
file:
path: /etc/dnsmasq.d-available
state: directory
tags: bootstrap-os
tags:
- bootstrap-os
- name: check system nameservers
shell: awk '/^nameserver/ {print $NF}' /etc/resolv.conf
@@ -100,7 +102,7 @@
- name: Check for dnsmasq port (pulling image and running container)
wait_for:
host: "{{dns_server}}"
host: "{{dnsmasq_dns_server}}"
port: 53
timeout: 180
when: inventory_hostname == groups['kube-node'][0] and groups['kube-node'][0] in ansible_play_hosts

View File

@@ -39,7 +39,7 @@ spec:
operator: Exists
containers:
- name: autoscaler
image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.1
image: "{{ dnsmasqautoscaler_image_repo }}:{{ dnsmasqautoscaler_image_tag }}"
resources:
requests:
cpu: "20m"

View File

@@ -18,6 +18,6 @@ spec:
targetPort: 53
protocol: UDP
type: ClusterIP
clusterIP: {{dns_server}}
clusterIP: {{dnsmasq_dns_server}}
selector:
k8s-app: dnsmasq

View File

@@ -1,5 +1,5 @@
---
docker_version: '1.13'
docker_version: '17.03'
docker_package_info:
pkgs:
@@ -16,3 +16,5 @@ docker_container_storage_setup: false
docker_rh_repo_base_url: 'https://yum.dockerproject.org/repo/main/centos/7'
docker_rh_repo_gpgkey: 'https://yum.dockerproject.org/gpg'
docker_apt_repo_base_url: 'https://apt.dockerproject.org/repo'
docker_apt_repo_gpgkey: 'https://apt.dockerproject.org/gpg'
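With the repository endpoints now parameterized, pointing them at an internal mirror could look roughly like the following; the mirror URLs are placeholders.
```
# Sketch only: override the repo variables above to use an internal mirror.
docker_rh_repo_base_url: 'https://mirror.example.com/docker/repo/main/centos/7'
docker_rh_repo_gpgkey: 'https://mirror.example.com/docker/gpg'
docker_apt_repo_base_url: 'https://mirror.example.com/docker/apt/repo'
docker_apt_repo_gpgkey: 'https://mirror.example.com/docker/apt/gpg'
```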

View File

@@ -3,6 +3,9 @@ docker_container_storage_setup_version: v0.6.0
docker_container_storage_setup_profile_name: kubespray
docker_container_storage_setup_storage_driver: devicemapper
docker_container_storage_setup_container_thinpool: docker-pool
#A disk path must be defined via docker_container_storage_setup_devs.
#Otherwise docker-storage-setup will run incorrectly.
#docker_container_storage_setup_devs: /dev/vdb
docker_container_storage_setup_data_size: 40%FREE
docker_container_storage_setup_min_data_size: 2G
docker_container_storage_setup_chunk_size: 512K

View File

@@ -23,7 +23,7 @@
copy:
dest: /etc/systemd/system/docker.service.d/override.conf
content: |-
### Thie file is managed by Ansible
### This file is managed by Ansible
[Service]
EnvironmentFile=-/etc/sysconfig/docker-storage
@@ -31,6 +31,12 @@
group: root
mode: 0644
#https://docs.docker.com/engine/installation/linux/docker-ce/centos/#install-using-the-repository
- name: docker-storage-setup | install lvm2
yum:
name: lvm2
state: present
- name: docker-storage-setup | install and run container-storage-setup
become: yes
script: install_container_storage_setup.sh {{ docker_container_storage_setup_version }} {{ docker_container_storage_setup_profile_name }}

View File

@@ -12,11 +12,13 @@
paths:
- ../vars
skip: true
tags: facts
tags:
- facts
- include: set_facts_dns.yml
- include_tasks: set_facts_dns.yml
when: dns_mode != 'none' and resolvconf_mode == 'docker_dns'
tags: facts
tags:
- facts
- name: check for minimum kernel version
fail:
@@ -25,13 +27,14 @@
{{ docker_kernel_min_version }} on
{{ ansible_distribution }}-{{ ansible_distribution_version }}
when: (not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]) and (ansible_kernel|version_compare(docker_kernel_min_version, "<"))
tags: facts
tags:
- facts
- name: ensure docker repository public key is installed
action: "{{ docker_repo_key_info.pkg_key }}"
args:
id: "{{item}}"
keyserver: "{{docker_repo_key_info.keyserver}}"
url: "{{docker_repo_key_info.url}}"
state: present
register: keyserver_task_result
until: keyserver_task_result|succeeded
@@ -68,15 +71,24 @@
notify: restart docker
when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or is_atomic) and (docker_package_info.pkgs|length > 0)
- name: check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns
- name: flush handlers so we can wait for docker to come up
meta: flush_handlers
- name: set fact for docker_version
command: "docker version -f '{{ '{{' }}.Client.Version{{ '}}' }}'"
register: docker_version
failed_when: docker_version.stdout|version_compare('1.12', '<')
register: installed_docker_version
changed_when: false
when: dns_mode != 'none' and resolvconf_mode == 'docker_dns'
- name: check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns
fail:
msg: "You need at least docker version >= 1.12 for resolvconf_mode=docker_dns"
when: >
dns_mode != 'none' and
resolvconf_mode == 'docker_dns' and
installed_docker_version.stdout|version_compare('1.12', '<')
- name: Set docker systemd config
include: systemd.yml
import_tasks: systemd.yml
- name: ensure docker service is started and enabled
service:

View File

@@ -6,7 +6,9 @@
{%- if dns_mode == 'kubedns' -%}
{{ [ skydns_server ] }}
{%- elif dns_mode == 'dnsmasq_kubedns' -%}
{{ [ dns_server ] }}
{{ [ dnsmasq_dns_server ] }}
{%- elif dns_mode == 'manual' -%}
{{ [ manual_dns_server ] }}
{%- endif -%}
- name: set base docker dns facts
@@ -47,7 +49,7 @@
- name: add system search domains to docker options
set_fact:
docker_dns_search_domains: "{{ docker_dns_search_domains | union(system_search_domains.stdout.split(' ')|default([])) | unique }}"
docker_dns_search_domains: "{{ docker_dns_search_domains | union(system_search_domains.stdout.split()|default([])) | unique }}"
when: system_search_domains.stdout != ""
- name: check number of nameservers
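The switch from `split(' ')` to `split()` matters when the search line contains repeated or mixed whitespace; a small illustration with invented domains and a hypothetical fact name:
```
# Illustration only: Jinja strings use Python semantics, so for
# "corp.example.com  branch.example.com" (two spaces),
# .split(' ') -> ['corp.example.com', '', 'branch.example.com']
# .split()    -> ['corp.example.com', 'branch.example.com']
- set_fact:
    demo_search_domains: "{{ 'corp.example.com  branch.example.com'.split() }}"
```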

View File

@@ -8,7 +8,8 @@
template:
src: http-proxy.conf.j2
dest: /etc/systemd/system/docker.service.d/http-proxy.conf
when: http_proxy is defined or https_proxy is defined or no_proxy is defined
notify: restart docker
when: http_proxy is defined or https_proxy is defined
- name: get systemd version
command: rpm -q --qf '%{V}\n' systemd
@@ -24,13 +25,6 @@
notify: restart docker
when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or is_atomic)
- name: Write docker.service systemd file for atomic
template:
src: docker_atomic.service.j2
dest: /etc/systemd/system/docker.service
notify: restart docker
when: is_atomic
- name: Write docker options systemd drop-in
template:
src: docker-options.conf.j2
@@ -44,4 +38,4 @@
notify: restart docker
when: dns_mode != 'none' and resolvconf_mode == 'docker_dns'
- meta: flush_handlers
- meta: flush_handlers

View File

@@ -18,7 +18,7 @@ Environment=GOTRACEBACK=crash
ExecReload=/bin/kill -s HUP $MAINPID
Delegate=yes
KillMode=process
ExecStart={{ docker_bin_dir }}/docker daemon \
ExecStart={{ docker_bin_dir }}/docker{% if installed_docker_version.stdout|version_compare('17.03', '<') %} daemon{% else %}d{% endif %} \
$DOCKER_OPTS \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \

View File

@@ -1,37 +0,0 @@
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service
[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecReload=/bin/kill -s HUP $MAINPID
Delegate=yes
KillMode=process
ExecStart=/usr/bin/dockerd-current \
--add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
--default-runtime=docker-runc \
--exec-opt native.cgroupdriver=systemd \
--userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
$DOCKER_OPTS \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \
$DOCKER_DNS_OPTIONS \
$ADD_REGISTRY \
$BLOCK_REGISTRY \
$INSECURE_REGISTRY
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=1min
Restart=on-abnormal
[Install]
WantedBy=multi-user.target

View File

@@ -1,2 +1,2 @@
[Service]
Environment={% if http_proxy %}"HTTP_PROXY={{ http_proxy }}"{% endif %} {% if https_proxy %}"HTTPS_PROXY={{ https_proxy }}"{% endif %} {% if no_proxy %}"NO_PROXY={{ no_proxy }}"{% endif %}
Environment={% if http_proxy is defined %}"HTTP_PROXY={{ http_proxy }}"{% endif %} {% if https_proxy is defined %}"HTTPS_PROXY={{ https_proxy }}"{% endif %} {% if no_proxy is defined %}"NO_PROXY={{ no_proxy }}"{% endif %}

View File

@@ -7,8 +7,9 @@ docker_versioned_pkg:
'1.11': docker-engine=1.11.2-0~{{ ansible_distribution_release|lower }}
'1.12': docker-engine=1.12.6-0~debian-{{ ansible_distribution_release|lower }}
'1.13': docker-engine=1.13.1-0~debian-{{ ansible_distribution_release|lower }}
'stable': docker-engine=17.03.0~ce-0~debian-{{ ansible_distribution_release|lower }}
'edge': docker-engine=17.03.0~ce-0~debian-{{ ansible_distribution_release|lower }}
'17.03': docker-engine=17.03.1~ce-0~debian-{{ ansible_distribution_release|lower }}
'stable': docker-engine=17.03.1~ce-0~debian-{{ ansible_distribution_release|lower }}
'edge': docker-engine=17.05.0~ce-0~debian-{{ ansible_distribution_release|lower }}
docker_package_info:
pkg_mgr: apt
@@ -18,7 +19,7 @@ docker_package_info:
docker_repo_key_info:
pkg_key: apt_key
keyserver: hkp://p80.pool.sks-keyservers.net:80
url: '{{ docker_apt_repo_gpgkey }}'
repo_keys:
- 58118E89F3A912897C070ADBF76221572C52609D
@@ -26,6 +27,6 @@ docker_repo_info:
pkg_repo: apt_repository
repos:
- >
deb https://apt.dockerproject.org/repo
deb {{ docker_apt_repo_base_url }}
{{ ansible_distribution|lower }}-{{ ansible_distribution_release|lower }}
main

View File

@@ -8,7 +8,9 @@ docker_kernel_min_version: '0'
docker_versioned_pkg:
'latest': docker
'1.11': docker-1:1.11.2
'1.12': docker-1:1.12.5
'1.12': docker-1:1.12.6
'1.13': docker-1.13.1
'17.03': docker-17.03.1
'stable': docker-ce
'edge': docker-ce-edge

View File

@@ -8,8 +8,9 @@ docker_versioned_pkg:
'1.11': docker-engine-1.11.2-1.el7.centos
'1.12': docker-engine-1.12.6-1.el7.centos
'1.13': docker-engine-1.13.1-1.el7.centos
'stable': docker-engine-17.03.0.ce-1.el7.centos
'edge': docker-engine-17.03.0.ce-1.el7.centos
'17.03': docker-engine-17.03.1.ce-1.el7.centos
'stable': docker-engine-17.03.1.ce-1.el7.centos
'edge': docker-engine-17.05.0.ce-1.el7.centos
# https://docs.docker.com/engine/installation/linux/centos/#install-from-a-package
# https://download.docker.com/linux/centos/7/x86_64/stable/Packages/

View File

@@ -7,8 +7,9 @@ docker_versioned_pkg:
'1.11': docker-engine=1.11.1-0~{{ ansible_distribution_release|lower }}
'1.12': docker-engine=1.12.6-0~ubuntu-{{ ansible_distribution_release|lower }}
'1.13': docker-engine=1.13.1-0~ubuntu-{{ ansible_distribution_release|lower }}
'stable': docker-engine=17.03.0~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
'edge': docker-engine=17.03.0~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
'17.03': docker-engine=17.03.1~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
'stable': docker-engine=17.03.1~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
'edge': docker-engine=17.05.0~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
docker_package_info:
pkg_mgr: apt
@@ -18,7 +19,7 @@ docker_package_info:
docker_repo_key_info:
pkg_key: apt_key
keyserver: hkp://p80.pool.sks-keyservers.net:80
url: '{{ docker_apt_repo_gpgkey }}'
repo_keys:
- 58118E89F3A912897C070ADBF76221572C52609D
@@ -26,6 +27,6 @@ docker_repo_info:
pkg_repo: apt_repository
repos:
- >
deb https://apt.dockerproject.org/repo
deb {{ docker_apt_repo_base_url }}
{{ ansible_distribution|lower }}-{{ ansible_distribution_release|lower }}
main

View File

@@ -1,6 +1,9 @@
---
local_release_dir: /tmp
# Used to only evaluate vars from download role
skip_downloads: false
# If this is set to true, files will only be downloaded once. This doesn't work
# on Container Linux by CoreOS unless the download_localhost is true and localhost
# is running another OS type. Default compress level is 1 (fastest).
@@ -17,27 +20,37 @@ download_localhost: False
# Always pull images if set to True. Otherwise check by the repo's tag/digest.
download_always_pull: False
# Use the first kube-master if download_localhost is not set
download_delegate: "{% if download_localhost %}localhost{% else %}{{groups['kube-master'][0]}}{% endif %}"
# Versions
kube_version: v1.7.5
# Change to kube_version after v1.8.0 release
kubeadm_version: "v1.8.0-rc.1"
kube_version: v1.9.2
kubeadm_version: "{{ kube_version }}"
etcd_version: v3.2.4
# TODO(mattymo): Move calico versions to roles/network_plugins/calico/defaults
# after migration to container download
calico_version: "v2.5.0"
calico_ctl_version: "v1.5.0"
calico_cni_version: "v1.10.0"
calico_policy_version: "v0.7.0"
weave_version: 2.0.4
flannel_version: "v0.8.0"
flannel_cni_version: "v0.2.0"
calico_version: "v2.6.2"
calico_ctl_version: "v1.6.1"
calico_cni_version: "v1.11.0"
calico_policy_version: "v1.0.0"
calico_rr_version: "v0.4.0"
flannel_version: "v0.9.1"
flannel_cni_version: "v0.3.0"
istio_version: "0.2.6"
vault_version: 0.8.1
weave_version: 2.1.3
pod_infra_version: 3.0
contiv_version: 1.1.7
# Download URLs
istioctl_download_url: "https://storage.googleapis.com/istio-release/releases/{{ istio_version }}/istioctl/istioctl-linux"
kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/amd64/kubeadm"
vault_download_url: "https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_linux_amd64.zip"
# Checksums
kubeadm_checksum: "8f6ceb26b8503bfc36a99574cf6f853be1c55405aa31669561608ad8099bf5bf"
istioctl_checksum: fd703063c540b8c0ab943f478c05ab257d88ae27224c746a27d0526ddbf7c370
kubeadm_checksum: 560b44a2b91747f4fb64ac8754fcf65db9a39a84c6b54d4e6483400ac6c674fc
vault_binary_checksum: 3c4d70ba71619a43229e65c67830e30e050eab7a81ac6b28325ff707e5914188
# Containers
etcd_image_repo: "quay.io/coreos/etcd"
@@ -52,10 +65,10 @@ calico_node_image_repo: "quay.io/calico/node"
calico_node_image_tag: "{{ calico_version }}"
calico_cni_image_repo: "quay.io/calico/cni"
calico_cni_image_tag: "{{ calico_cni_version }}"
calico_policy_image_repo: "quay.io/calico/kube-policy-controller"
calico_policy_image_repo: "quay.io/calico/kube-controllers"
calico_policy_image_tag: "{{ calico_policy_version }}"
calico_rr_image_repo: "quay.io/calico/routereflector"
calico_rr_image_tag: "v0.3.0"
calico_rr_image_tag: "{{ calico_rr_version }}"
hyperkube_image_repo: "quay.io/coreos/hyperkube"
hyperkube_image_tag: "{{ kube_version }}_coreos.0"
pod_infra_image_repo: "gcr.io/google_containers/pause-amd64"
@@ -71,20 +84,27 @@ weave_kube_image_repo: "weaveworks/weave-kube"
weave_kube_image_tag: "{{ weave_version }}"
weave_npc_image_repo: "weaveworks/weave-npc"
weave_npc_image_tag: "{{ weave_version }}"
contiv_image_repo: "contiv/netplugin"
contiv_image_tag: "{{ contiv_version }}"
contiv_auth_proxy_image_repo: "contiv/auth_proxy"
contiv_auth_proxy_image_tag: "{{ contiv_version }}"
nginx_image_repo: nginx
nginx_image_tag: 1.11.4-alpine
dnsmasq_version: 2.72
nginx_image_tag: 1.13
dnsmasq_version: 2.78
dnsmasq_image_repo: "andyshinn/dnsmasq"
dnsmasq_image_tag: "{{ dnsmasq_version }}"
kubedns_version: 1.14.2
kubedns_version: 1.14.8
kubedns_image_repo: "gcr.io/google_containers/k8s-dns-kube-dns-amd64"
kubedns_image_tag: "{{ kubedns_version }}"
dnsmasq_nanny_image_repo: "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64"
dnsmasq_nanny_image_tag: "{{ kubedns_version }}"
dnsmasq_sidecar_image_repo: "gcr.io/google_containers/k8s-dns-sidecar-amd64"
dnsmasq_sidecar_image_tag: "{{ kubedns_version }}"
kubednsautoscaler_version: 1.1.1
dnsmasqautoscaler_version: 1.1.2
dnsmasqautoscaler_image_repo: "gcr.io/google_containers/cluster-proportional-autoscaler-amd64"
dnsmasqautoscaler_image_tag: "{{ dnsmasqautoscaler_version }}"
kubednsautoscaler_version: 1.1.2
kubednsautoscaler_image_repo: "gcr.io/google_containers/cluster-proportional-autoscaler-amd64"
kubednsautoscaler_image_tag: "{{ kubednsautoscaler_version }}"
test_image_repo: busybox
@@ -99,32 +119,36 @@ kibana_version: "v4.6.1"
kibana_image_repo: "gcr.io/google_containers/kibana"
kibana_image_tag: "{{ kibana_version }}"
helm_version: "v2.2.2"
helm_version: "v2.7.2"
helm_image_repo: "lachlanevenson/k8s-helm"
helm_image_tag: "{{ helm_version }}"
tiller_version: "{{ helm_version }}"
tiller_image_repo: "gcr.io/kubernetes-helm/tiller"
tiller_image_tag: "{{ tiller_version }}"
tiller_image_tag: "{{ helm_version }}"
vault_image_repo: "vault"
vault_image_tag: "{{ vault_version }}"
downloads:
netcheck_server:
enabled: "{{ deploy_netchecker }}"
container: true
repo: "{{ netcheck_server_img_repo }}"
tag: "{{ netcheck_server_tag }}"
sha256: "{{ netcheck_server_digest_checksum|default(None) }}"
enabled: "{{ deploy_netchecker|bool }}"
netcheck_agent:
enabled: "{{ deploy_netchecker }}"
container: true
repo: "{{ netcheck_agent_img_repo }}"
tag: "{{ netcheck_agent_tag }}"
sha256: "{{ netcheck_agent_digest_checksum|default(None) }}"
enabled: "{{ deploy_netchecker|bool }}"
etcd:
enabled: true
container: true
repo: "{{ etcd_image_repo }}"
tag: "{{ etcd_image_tag }}"
sha256: "{{ etcd_digest_checksum|default(None) }}"
kubeadm:
enabled: "{{ kubeadm_enabled }}"
file: true
version: "{{ kubeadm_version }}"
dest: "kubeadm"
sha256: "{{ kubeadm_checksum }}"
@@ -133,146 +157,197 @@ downloads:
unarchive: false
owner: "root"
mode: "0755"
istioctl:
enabled: "{{ istio_enabled }}"
file: true
version: "{{ istio_version }}"
dest: "istio/istioctl"
sha256: "{{ istioctl_checksum }}"
source_url: "{{ istioctl_download_url }}"
url: "{{ istioctl_download_url }}"
unarchive: false
owner: "root"
mode: "0755"
hyperkube:
enabled: true
container: true
repo: "{{ hyperkube_image_repo }}"
tag: "{{ hyperkube_image_tag }}"
sha256: "{{ hyperkube_digest_checksum|default(None) }}"
flannel:
enabled: "{{ kube_network_plugin == 'flannel' or kube_network_plugin == 'canal' }}"
container: true
repo: "{{ flannel_image_repo }}"
tag: "{{ flannel_image_tag }}"
sha256: "{{ flannel_digest_checksum|default(None) }}"
enabled: "{{ kube_network_plugin == 'flannel' or kube_network_plugin == 'canal' }}"
flannel_cni:
enabled: "{{ kube_network_plugin == 'flannel' }}"
container: true
repo: "{{ flannel_cni_image_repo }}"
tag: "{{ flannel_cni_image_tag }}"
sha256: "{{ flannel_cni_digest_checksum|default(None) }}"
enabled: "{{ kube_network_plugin == 'flannel' }}"
calicoctl:
enabled: "{{ kube_network_plugin == 'calico' or kube_network_plugin == 'canal' }}"
container: true
repo: "{{ calicoctl_image_repo }}"
tag: "{{ calicoctl_image_tag }}"
sha256: "{{ calicoctl_digest_checksum|default(None) }}"
enabled: "{{ kube_network_plugin == 'calico' or kube_network_plugin == 'canal' }}"
calico_node:
enabled: "{{ kube_network_plugin == 'calico' or kube_network_plugin == 'canal' }}"
container: true
repo: "{{ calico_node_image_repo }}"
tag: "{{ calico_node_image_tag }}"
sha256: "{{ calico_node_digest_checksum|default(None) }}"
enabled: "{{ kube_network_plugin == 'calico' or kube_network_plugin == 'canal' }}"
calico_cni:
enabled: "{{ kube_network_plugin == 'calico' or kube_network_plugin == 'canal' }}"
container: true
repo: "{{ calico_cni_image_repo }}"
tag: "{{ calico_cni_image_tag }}"
sha256: "{{ calico_cni_digest_checksum|default(None) }}"
enabled: "{{ kube_network_plugin == 'calico' or kube_network_plugin == 'canal' }}"
calico_policy:
enabled: "{{ enable_network_policy or kube_network_plugin == 'canal' }}"
container: true
repo: "{{ calico_policy_image_repo }}"
tag: "{{ calico_policy_image_tag }}"
sha256: "{{ calico_policy_digest_checksum|default(None) }}"
enabled: "{{ kube_network_plugin == 'canal' }}"
calico_rr:
enabled: "{{ peer_with_calico_rr is defined and peer_with_calico_rr}} and kube_network_plugin == 'calico'"
container: true
repo: "{{ calico_rr_image_repo }}"
tag: "{{ calico_rr_image_tag }}"
sha256: "{{ calico_rr_digest_checksum|default(None) }}"
enabled: "{{ peer_with_calico_rr is defined and peer_with_calico_rr}} and kube_network_plugin == 'calico'"
weave_kube:
enabled: "{{ kube_network_plugin == 'weave' }}"
container: true
repo: "{{ weave_kube_image_repo }}"
tag: "{{ weave_kube_image_tag }}"
sha256: "{{ weave_kube_digest_checksum|default(None) }}"
enabled: "{{ kube_network_plugin == 'weave' }}"
weave_npc:
enabled: "{{ kube_network_plugin == 'weave' }}"
container: true
repo: "{{ weave_npc_image_repo }}"
tag: "{{ weave_npc_image_tag }}"
sha256: "{{ weave_npc_digest_checksum|default(None) }}"
enabled: "{{ kube_network_plugin == 'weave' }}"
contiv:
enabled: "{{ kube_network_plugin == 'contiv' }}"
container: true
repo: "{{ contiv_image_repo }}"
tag: "{{ contiv_image_tag }}"
sha256: "{{ contiv_digest_checksum|default(None) }}"
contiv_auth_proxy:
enabled: "{{ kube_network_plugin == 'contiv' }}"
container: true
repo: "{{ contiv_auth_proxy_image_repo }}"
tag: "{{ contiv_auth_proxy_image_tag }}"
sha256: "{{ contiv_auth_proxy_digest_checksum|default(None) }}"
pod_infra:
enabled: true
container: true
repo: "{{ pod_infra_image_repo }}"
tag: "{{ pod_infra_image_tag }}"
sha256: "{{ pod_infra_digest_checksum|default(None) }}"
install_socat:
enabled: "{{ ansible_os_family in ['CoreOS', 'Container Linux by CoreOS'] }}"
container: true
repo: "{{ install_socat_image_repo }}"
tag: "{{ install_socat_image_tag }}"
sha256: "{{ install_socat_digest_checksum|default(None) }}"
nginx:
enabled: true
container: true
repo: "{{ nginx_image_repo }}"
tag: "{{ nginx_image_tag }}"
sha256: "{{ nginx_digest_checksum|default(None) }}"
dnsmasq:
enabled: "{{ dns_mode == 'dnsmasq_kubedns' }}"
container: true
repo: "{{ dnsmasq_image_repo }}"
tag: "{{ dnsmasq_image_tag }}"
sha256: "{{ dnsmasq_digest_checksum|default(None) }}"
kubedns:
enabled: true
container: true
repo: "{{ kubedns_image_repo }}"
tag: "{{ kubedns_image_tag }}"
sha256: "{{ kubedns_digest_checksum|default(None) }}"
dnsmasq_nanny:
enabled: true
container: true
repo: "{{ dnsmasq_nanny_image_repo }}"
tag: "{{ dnsmasq_nanny_image_tag }}"
sha256: "{{ dnsmasq_nanny_digest_checksum|default(None) }}"
dnsmasq_sidecar:
enabled: true
container: true
repo: "{{ dnsmasq_sidecar_image_repo }}"
tag: "{{ dnsmasq_sidecar_image_tag }}"
sha256: "{{ dnsmasq_sidecar_digest_checksum|default(None) }}"
kubednsautoscaler:
enabled: true
container: true
repo: "{{ kubednsautoscaler_image_repo }}"
tag: "{{ kubednsautoscaler_image_tag }}"
sha256: "{{ kubednsautoscaler_digest_checksum|default(None) }}"
testbox:
enabled: true
container: true
repo: "{{ test_image_repo }}"
tag: "{{ test_image_tag }}"
sha256: "{{ testbox_digest_checksum|default(None) }}"
elasticsearch:
enabled: "{{ efk_enabled }}"
container: true
repo: "{{ elasticsearch_image_repo }}"
tag: "{{ elasticsearch_image_tag }}"
sha256: "{{ elasticsearch_digest_checksum|default(None) }}"
fluentd:
enabled: "{{ efk_enabled }}"
container: true
repo: "{{ fluentd_image_repo }}"
tag: "{{ fluentd_image_tag }}"
sha256: "{{ fluentd_digest_checksum|default(None) }}"
kibana:
enabled: "{{ efk_enabled }}"
container: true
repo: "{{ kibana_image_repo }}"
tag: "{{ kibana_image_tag }}"
sha256: "{{ kibana_digest_checksum|default(None) }}"
helm:
enabled: "{{ helm_enabled }}"
container: true
repo: "{{ helm_image_repo }}"
tag: "{{ helm_image_tag }}"
sha256: "{{ helm_digest_checksum|default(None) }}"
tiller:
enabled: "{{ helm_enabled }}"
container: true
repo: "{{ tiller_image_repo }}"
tag: "{{ tiller_image_tag }}"
sha256: "{{ tiller_digest_checksum|default(None) }}"
vault:
enabled: "{{ cert_management == 'vault' }}"
container: "{{ vault_deployment_type != 'host' }}"
file: "{{ vault_deployment_type == 'host' }}"
dest: "vault/vault_{{ vault_version }}_linux_amd64.zip"
mode: "0755"
owner: "vault"
repo: "{{ vault_image_repo }}"
sha256: "{{ vault_binary_checksum if vault_deployment_type == 'host' else vault_digest_checksum|d(none) }}"
source_url: "{{ vault_download_url }}"
tag: "{{ vault_image_tag }}"
unarchive: true
url: "{{ vault_download_url }}"
version: "{{ vault_version }}"
download:
container: "{{ file.container|default('false') }}"
repo: "{{ file.repo|default(None) }}"
tag: "{{ file.tag|default(None) }}"
enabled: "{{ file.enabled|default('true') }}"
dest: "{{ file.dest|default(None) }}"
version: "{{ file.version|default(None) }}"
sha256: "{{ file.sha256|default(None) }}"
source_url: "{{ file.source_url|default(None) }}"
url: "{{ file.url|default(None) }}"
unarchive: "{{ file.unarchive|default('false') }}"
owner: "{{ file.owner|default('kube') }}"
mode: "{{ file.mode|default(None) }}"
download_defaults:
container: false
file: false
repo: None
tag: None
enabled: false
dest: None
version: None
url: None
unarchive: false
owner: kube
mode: None
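Every key under `downloads` is merged over `download_defaults`, so a hypothetical extra image would follow the same shape; the name, repo and tag below are invented for illustration.
```
# Sketch only: a made-up additional entry, shown with its parent key for context.
downloads:
  my_sidecar:
    enabled: true
    container: true
    repo: "example.com/my/sidecar"
    tag: "v0.1.0"
    sha256: "{{ my_sidecar_digest_checksum|default(None) }}"
```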

View File

@@ -0,0 +1,2 @@
---
allow_duplicates: true

View File

@@ -0,0 +1,40 @@
---
- name: container_download | Make download decision if pull is required by tag or sha256
include_tasks: set_docker_image_facts.yml
delegate_to: "{{ download_delegate if download_run_once or omit }}"
delegate_facts: no
run_once: "{{ download_run_once }}"
when:
- download.enabled
- download.container
tags:
- facts
# FIXME(mattymo): In Ansible 2.4 omitting download delegate is broken. Move back
# to one task in the future.
- name: container_download | Download containers if pull is required or told to always pull (delegate)
command: "{{ docker_bin_dir }}/docker pull {{ pull_args }}"
register: pull_task_result
until: pull_task_result|succeeded
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
when:
- download_run_once
- download.enabled
- download.container
- pull_required|default(download_always_pull)
delegate_to: "{{ download_delegate }}"
delegate_facts: yes
run_once: yes
- name: container_download | Download containers if pull is required or told to always pull (all nodes)
command: "{{ docker_bin_dir }}/docker pull {{ pull_args }}"
register: pull_task_result
until: pull_task_result|succeeded
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
when:
- not download_run_once
- download.enabled
- download.container
- pull_required|default(download_always_pull)

View File

@@ -0,0 +1,42 @@
---
- name: file_download | Downloading...
debug:
msg:
- "URL: {{ download.url }}"
- "Dest: {{ download.dest }}"
- name: file_download | Create dest directory
file:
path: "{{local_release_dir}}/{{download.dest|dirname}}"
state: directory
recurse: yes
when:
- download.enabled
- download.file
- name: file_download | Download item
get_url:
url: "{{download.url}}"
dest: "{{local_release_dir}}/{{download.dest}}"
sha256sum: "{{download.sha256 | default(omit)}}"
owner: "{{ download.owner|default(omit) }}"
mode: "{{ download.mode|default(omit) }}"
register: get_url_result
until: "'OK' in get_url_result.msg or 'file already exists' in get_url_result.msg"
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
when:
- download.enabled
- download.file
- name: file_download | Extract archives
unarchive:
src: "{{ local_release_dir }}/{{download.dest}}"
dest: "{{ local_release_dir }}/{{download.dest|dirname}}"
owner: "{{ download.owner|default(omit) }}"
mode: "{{ download.mode|default(omit) }}"
copy: no
when:
- download.enabled
- download.file
- download.unarchive|default(False)

View File

@@ -0,0 +1,32 @@
---
- name: Register docker images info
raw: >-
{{ docker_bin_dir }}/docker images -q | xargs {{ docker_bin_dir }}/docker inspect -f "{{ '{{' }} (index .RepoTags 0) {{ '}}' }},{{ '{{' }} (index .RepoDigests 0) {{ '}}' }}" | tr '\n' ','
no_log: true
register: docker_images
failed_when: false
changed_when: false
check_mode: no
- name: container_download | Create dest directory for saved/loaded container images
file:
path: "{{local_release_dir}}/containers"
state: directory
recurse: yes
mode: 0755
owner: "{{ansible_ssh_user|default(ansible_user_id)}}"
- name: container_download | create local directory for saved/loaded container images
file:
path: "{{local_release_dir}}/containers"
state: directory
recurse: yes
delegate_to: localhost
delegate_facts: false
become: false
run_once: true
when:
- download_run_once
- download_delegate == 'localhost'
tags:
- localhost

View File

@@ -1,201 +1,24 @@
---
- name: file_download | Create dest directories
file:
path: "{{local_release_dir}}/{{download.dest|dirname}}"
state: directory
recurse: yes
- include_tasks: download_prep.yml
when:
- download.enabled|bool
- not download.container|bool
tags: bootstrap-os
- not skip_downloads|default(false)
- name: file_download | Download item
get_url:
url: "{{download.url}}"
dest: "{{local_release_dir}}/{{download.dest}}"
sha256sum: "{{download.sha256 | default(omit)}}"
owner: "{{ download.owner|default(omit) }}"
mode: "{{ download.mode|default(omit) }}"
register: get_url_result
until: "'OK' in get_url_result.msg or 'file already exists' in get_url_result.msg"
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
- name: "Download items"
include_tasks: "download_{% if download.container %}container{% else %}file{% endif %}.yml"
vars:
download: "{{ download_defaults | combine(item.value) }}"
with_dict: "{{ downloads }}"
when:
- download.enabled|bool
- not download.container|bool
- not skip_downloads|default(false)
- item.value.enabled
- name: file_download | Extract archives
unarchive:
src: "{{ local_release_dir }}/{{download.dest}}"
dest: "{{ local_release_dir }}/{{download.dest|dirname}}"
owner: "{{ download.owner|default(omit) }}"
mode: "{{ download.mode|default(omit) }}"
copy: no
- name: "Sync container"
include_tasks: sync_container.yml
vars:
download: "{{ download_defaults | combine(item.value) }}"
with_dict: "{{ downloads }}"
when:
- download.enabled|bool
- not download.container|bool
- download.unarchive|default(False)
- name: file_download | Fix permissions
file:
state: file
path: "{{local_release_dir}}/{{download.dest}}"
owner: "{{ download.owner|default(omit) }}"
mode: "{{ download.mode|default(omit) }}"
when:
- download.enabled|bool
- not download.container|bool
- (download.unarchive is not defined or download.unarchive == False)
- set_fact:
download_delegate: "{% if download_localhost|bool %}localhost{% else %}{{groups['kube-master'][0]}}{% endif %}"
run_once: true
tags: facts
- name: container_download | Create dest directory for saved/loaded container images
file:
path: "{{local_release_dir}}/containers"
state: directory
recurse: yes
mode: 0755
owner: "{{ansible_ssh_user|default(ansible_user_id)}}"
when:
- download.enabled|bool
- download.container|bool
tags: bootstrap-os
# This is required for the download_localhost delegate to work smooth with Container Linux by CoreOS cluster nodes
- name: container_download | Hack python binary path for localhost
raw: sh -c "mkdir -p /opt/bin; ln -sf /usr/bin/python /opt/bin/python"
delegate_to: localhost
when: download_delegate == 'localhost'
failed_when: false
tags: localhost
- name: container_download | create local directory for saved/loaded container images
file:
path: "{{local_release_dir}}/containers"
state: directory
recurse: yes
delegate_to: localhost
become: false
run_once: true
when:
- download_run_once|bool
- download.enabled|bool
- download.container|bool
- download_delegate == 'localhost'
tags: localhost
- name: container_download | Make download decision if pull is required by tag or sha256
include: set_docker_image_facts.yml
when:
- download.enabled|bool
- download.container|bool
delegate_to: "{{ download_delegate if download_run_once|bool or omit }}"
run_once: "{{ download_run_once|bool }}"
tags: facts
- name: container_download | Download containers if pull is required or told to always pull
command: "{{ docker_bin_dir }}/docker pull {{ pull_args }}"
register: pull_task_result
until: pull_task_result|succeeded
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
when:
- download.enabled|bool
- download.container|bool
- pull_required|bool|default(download_always_pull)
delegate_to: "{{ download_delegate if download_run_once|bool or omit }}"
run_once: "{{ download_run_once|bool }}"
- set_fact:
fname: "{{local_release_dir}}/containers/{{download.repo|regex_replace('/|\0|:', '_')}}:{{download.tag|default(download.sha256)|regex_replace('/|\0|:', '_')}}.tar"
run_once: true
tags: facts
- name: "container_download | Set default value for 'container_changed' to false"
set_fact:
container_changed: "{{pull_required|default(false)|bool}}"
- name: "container_download | Update the 'container_changed' fact"
set_fact:
container_changed: "{{ pull_required|bool|default(false) or not 'up to date' in pull_task_result.stdout }}"
when:
- download.enabled|bool
- download.container|bool
- pull_required|bool|default(download_always_pull)
run_once: "{{ download_run_once|bool }}"
tags: facts
- name: container_download | Stat saved container image
stat:
path: "{{fname}}"
register: img
changed_when: false
when:
- download.enabled|bool
- download.container|bool
- download_run_once|bool
delegate_to: "{{ download_delegate }}"
become: false
run_once: true
tags: facts
- name: container_download | save container images
shell: "{{ docker_bin_dir }}/docker save {{ pull_args }} | gzip -{{ download_compress }} > {{ fname }}"
delegate_to: "{{ download_delegate }}"
register: saved
run_once: true
when:
- (not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or download_delegate == "localhost")
- download_run_once|bool
- download.enabled|bool
- download.container|bool
- (container_changed|bool or not img.stat.exists)
- name: container_download | copy container images to ansible host
synchronize:
src: "{{ fname }}"
dest: "{{ fname }}"
mode: pull
delegate_to: localhost
become: false
when:
- not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
- inventory_hostname == groups['kube-master'][0]
- download_delegate != "localhost"
- download_run_once|bool
- download.enabled|bool
- download.container|bool
- saved.changed
- name: container_download | upload container images to nodes
synchronize:
src: "{{ fname }}"
dest: "{{ fname }}"
mode: push
delegate_to: localhost
become: false
register: get_task
until: get_task|succeeded
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
when:
- (not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] and
inventory_hostname != groups['kube-master'][0] or
download_delegate == "localhost")
- download_run_once|bool
- download.enabled|bool
- download.container|bool
tags: [upload, upgrade]
- name: container_download | load container images
shell: "{{ docker_bin_dir }}/docker load < {{ fname }}"
when:
- (not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] and
inventory_hostname != groups['kube-master'][0] or download_delegate == "localhost")
- download_run_once|bool
- download.enabled|bool
- download.container|bool
tags: [upload, upgrade]
- not skip_downloads|default(false)
- item.value.enabled
- item.value.container
- download_run_once
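For readers tracing the save/load pipeline above, the following standalone sketch (all values hypothetical, and the regex simplified to just '/' and ':' for readability) shows how the fname fact maps an image reference to the tarball path the save, synchronize, and load tasks share:

---
- hosts: localhost
  gather_facts: false
  vars:
    local_release_dir: /tmp/releases
    download:
      repo: quay.io/coreos/etcd
      tag: v3.2.4
  tasks:
    - name: Show the tarball path the tasks above would compute
      debug:
        msg: "{{ local_release_dir }}/containers/{{ download.repo | regex_replace('/|:', '_') }}:{{ download.tag | regex_replace('/|:', '_') }}.tar"
      # Renders: /tmp/releases/containers/quay.io_coreos_etcd:v3.2.4.tar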


@@ -5,7 +5,7 @@
- set_fact:
pull_args: >-
{%- if pull_by_digest|bool %}{{download.repo}}@sha256:{{download.sha256}}{%- else -%}{{download.repo}}:{{download.tag}}{%- endif -%}
{%- if pull_by_digest %}{{download.repo}}@sha256:{{download.sha256}}{%- else -%}{{download.repo}}:{{download.tag}}{%- endif -%}
- name: Register docker images info
raw: >-
@@ -15,16 +15,16 @@
failed_when: false
changed_when: false
check_mode: no
when: not download_always_pull|bool
when: not download_always_pull
- set_fact:
pull_required: >-
{%- if pull_args in docker_images.stdout.split(',') %}false{%- else -%}true{%- endif -%}
when: not download_always_pull|bool
when: not download_always_pull
- name: Check the local digest sha256 corresponds to the given image tag
assert:
that: "{{download.repo}}:{{download.tag}} in docker_images.stdout.split(',')"
when: not download_always_pull|bool and not pull_required|bool and pull_by_digest|bool
when: not download_always_pull and not pull_required and pull_by_digest
tags:
- asserts
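The hunk above only drops the |bool casts; how pull_args is built does not change. As a standalone illustration (image name and digest are made up), the two branches render like this:

---
- hosts: localhost
  gather_facts: false
  vars:
    download:
      repo: gcr.io/google_containers/hyperkube
      tag: v1.9.2
      sha256: 0123456789abcdef
  tasks:
    - name: Render pull_args by tag
      debug:
        msg: "{{ download.repo }}:{{ download.tag }}"
      # gcr.io/google_containers/hyperkube:v1.9.2
    - name: Render pull_args by digest
      debug:
        msg: "{{ download.repo }}@sha256:{{ download.sha256 }}"
      # gcr.io/google_containers/hyperkube@sha256:0123456789abcdef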


@@ -0,0 +1,125 @@
---
- name: container_download | Make download decision if pull is required by tag or sha256
include: set_docker_image_facts.yml
delegate_to: "{{ download_delegate if download_run_once else omit }}"
delegate_facts: no
run_once: "{{ download_run_once }}"
when:
- download.enabled
- download.container
tags:
- facts
- set_fact:
fname: "{{local_release_dir}}/containers/{{download.repo|regex_replace('/|\0|:', '_')}}:{{download.tag|default(download.sha256)|regex_replace('/|\0|:', '_')}}.tar"
run_once: true
when:
- download.enabled
- download.container
- download_run_once
tags:
- facts
- name: "container_download | Set default value for 'container_changed' to false"
set_fact:
container_changed: "{{pull_required|default(false)}}"
when:
- download.enabled
- download.container
- download_run_once
- name: "container_download | Update the 'container_changed' fact"
set_fact:
container_changed: "{{ pull_required|default(false) or not 'up to date' in pull_task_result.stdout }}"
when:
- download.enabled
- download.container
- download_run_once
- pull_required|default(download_always_pull)
run_once: "{{ download_run_once }}"
tags:
- facts
- name: container_download | Stat saved container image
stat:
path: "{{fname}}"
register: img
changed_when: false
delegate_to: "{{ download_delegate }}"
delegate_facts: no
become: false
run_once: true
when:
- download.enabled
- download.container
- download_run_once
tags:
- facts
- name: container_download | save container images
shell: "{{ docker_bin_dir }}/docker save {{ pull_args }} | gzip -{{ download_compress }} > {{ fname }}"
delegate_to: "{{ download_delegate }}"
delegate_facts: no
register: saved
run_once: true
when:
- download.enabled
- download.container
- download_run_once
- (ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"] or download_delegate == "localhost")
- (container_changed or not img.stat.exists)
- name: container_download | copy container images to ansible host
synchronize:
src: "{{ fname }}"
dest: "{{ fname }}"
use_ssh_args: "{{ has_bastion | default(false) }}"
mode: pull
delegate_to: localhost
delegate_facts: no
run_once: true
become: false
when:
- download.enabled
- download.container
- download_run_once
- ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
- inventory_hostname == download_delegate
- download_delegate != "localhost"
- saved.changed
- name: container_download | upload container images to nodes
synchronize:
src: "{{ fname }}"
dest: "{{ fname }}"
use_ssh_args: "{{ has_bastion | default(false) }}"
mode: push
delegate_to: localhost
delegate_facts: no
become: false
register: get_task
until: get_task|succeeded
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
when:
- download.enabled
- download.container
- download_run_once
- (ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"] and
inventory_hostname != download_delegate or
download_delegate == "localhost")
tags:
- upload
- upgrade
- name: container_download | load container images
shell: "{{ docker_bin_dir }}/docker load < {{ fname }}"
when:
- download.enabled
- download.container
- download_run_once
- (ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"] and
inventory_hostname != download_delegate or download_delegate == "localhost")
tags:
- upload
- upgrade
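To make the run-once flow in this new file concrete, here is a minimal sketch (image name, compression level, and paths are placeholders; it assumes a local Docker daemon and an already-pulled image) of the save-then-load round trip the role performs via the shell module. In the real role the tarball is also synchronized between the delegate, the Ansible host, and the other nodes in between these two steps:

---
- hosts: localhost
  gather_facts: false
  vars:
    image: quay.io/coreos/etcd:v3.2.4
    fname: /tmp/releases/containers/quay.io_coreos_etcd:v3.2.4.tar
  tasks:
    - name: Save the pulled image to a gzip-compressed tarball
      shell: "docker save {{ image }} | gzip -1 > {{ fname }}"
    - name: Load the tarball back into the local Docker daemon
      shell: "docker load < {{ fname }}"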


@@ -8,6 +8,13 @@ etcd_data_dir: "/var/lib/etcd"
etcd_config_dir: /etc/ssl/etcd
etcd_cert_dir: "{{ etcd_config_dir }}/ssl"
etcd_cert_group: root
# Note: This does not set up DNS entries. It simply adds the following DNS
# entries to the certificate
etcd_cert_alt_names:
- "etcd.{{ system_namespace }}.svc.{{ dns_domain }}"
- "etcd.{{ system_namespace }}.svc"
- "etcd.{{ system_namespace }}"
- "etcd"
etcd_script_dir: "{{ bin_dir }}/etcd-scripts"
@@ -17,7 +24,8 @@ etcd_election_timeout: "5000"
etcd_metrics: "basic"
# Limits
etcd_memory_limit: 512M
# Limit memory only if <4GB memory on host. 0=unlimited
etcd_memory_limit: "{% if ansible_memtotal_mb < 4096 %}512M{% else %}0{% endif %}"
# Uncomment to set CPU share for etcd
# etcd_cpu_limit: 300m
@@ -29,3 +37,6 @@ etcd_node_cert_hosts: "{{ groups['k8s-cluster'] | union(groups.get('calico-rr',
etcd_compaction_retention: "8"
etcd_vault_mount_path: etcd
# Force clients like etcdctl to use TLS certs (different than peer security)
etcd_secure_client: true
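The new etcd_memory_limit default is a Jinja conditional on the host's total-memory fact. A small standalone sketch (the memory value is supplied as a play var purely for illustration) shows how it evaluates:

---
- hosts: localhost
  gather_facts: false
  vars:
    ansible_memtotal_mb: 2048   # pretend this host reports 2 GB of RAM
  tasks:
    - name: Show the memory limit the default would pick
      debug:
        msg: "{% if ansible_memtotal_mb < 4096 %}512M{% else %}0{% endif %}"
      # 2048 MB prints "512M" (limited); with 8192 MB it would print "0" (unlimited)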


@@ -48,5 +48,7 @@
snapshot save {{ etcd_backup_directory }}/snapshot.db
environment:
ETCDCTL_API: 3
ETCDCTL_CERT: "{{ etcd_cert_dir }}/node-{{ inventory_hostname }}.pem"
ETCDCTL_KEY: "{{ etcd_cert_dir }}/node-{{ inventory_hostname }}-key.pem"
retries: 3
delay: "{{ retry_stagger | random + 3 }}"
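The two environment variables added above are the etcd v3 client's TLS settings; the v2-era names ETCDCTL_CERT_FILE and ETCDCTL_KEY_FILE appear later in the configure.yml hunk. A standalone sketch with placeholder paths (a CA file is usually needed as well, and the etcdctl location is an assumption):

---
- hosts: etcd
  tasks:
    - name: Take a TLS-authenticated etcd v3 snapshot
      command: "/usr/local/bin/etcdctl snapshot save /var/backups/etcd/snapshot.db"
      environment:
        ETCDCTL_API: "3"
        ETCDCTL_CACERT: "/etc/ssl/etcd/ssl/ca.pem"
        ETCDCTL_CERT: "/etc/ssl/etcd/ssl/node-{{ inventory_hostname }}.pem"
        ETCDCTL_KEY: "/etc/ssl/etcd/ssl/node-{{ inventory_hostname }}-key.pem"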


@@ -7,7 +7,7 @@
- reload etcd
- wait for etcd up
- include: backup.yml
- import_tasks: backup.yml
- name: etcd | reload systemd
command: systemctl daemon-reload
@@ -22,6 +22,8 @@
uri:
url: "https://{% if is_etcd_master %}{{ etcd_address }}{% else %}127.0.0.1{% endif %}:2379/health"
validate_certs: no
client_cert: "{{ etcd_cert_dir }}/member-{{ inventory_hostname }}.pem"
client_key: "{{ etcd_cert_dir }}/member-{{ inventory_hostname }}-key.pem"
register: result
until: result.status is defined and result.status == 200
retries: 10
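The handler above now authenticates the /health probe with a member client certificate. A stripped-down sketch of the same check as a standalone task (endpoint, cert directory, and retry values are examples only):

---
- hosts: etcd
  gather_facts: false
  vars:
    etcd_cert_dir: /etc/ssl/etcd/ssl
  tasks:
    - name: Probe etcd health with a member client certificate
      uri:
        url: "https://127.0.0.1:2379/health"
        validate_certs: no
        client_cert: "{{ etcd_cert_dir }}/member-{{ inventory_hostname }}.pem"
        client_key: "{{ etcd_cert_dir }}/member-{{ inventory_hostname }}-key.pem"
      register: result
      until: result.status is defined and result.status == 200
      retries: 10
      delay: 5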


@@ -3,8 +3,5 @@ dependencies:
- role: adduser
user: "{{ addusers.etcd }}"
when: not (ansible_os_family in ['CoreOS', 'Container Linux by CoreOS'] or is_atomic)
- role: download
file: "{{ downloads.etcd }}"
tags: download
# NOTE: Dynamic task dependency on Vault Role if cert_management == "vault"


@@ -26,7 +26,7 @@
- name: "Check_certs | Set 'gen_certs' to true"
set_fact:
gen_certs: true
when: "not '{{ item }}' in etcdcert_master.files|map(attribute='path') | list"
when: not item in etcdcert_master.files|map(attribute='path') | list
run_once: true
with_items: >-
['{{etcd_cert_dir}}/ca.pem',


@@ -1,16 +1,16 @@
---
- name: Configure | Check if member is in cluster
shell: "{{ bin_dir }}/etcdctl --no-sync --peers={{ etcd_access_addresses }} member list | grep -q {{ etcd_access_address }}"
shell: "{{ bin_dir }}/etcdctl --no-sync --endpoints={{ etcd_access_addresses }} member list | grep -q {{ etcd_access_address }}"
register: etcd_member_in_cluster
ignore_errors: true
changed_when: false
check_mode: no
when: is_etcd_master
tags: facts
- name: Configure | Add member to the cluster if it is not there
when: is_etcd_master and etcd_member_in_cluster.rc != 0 and etcd_cluster_is_healthy.rc == 0
shell: "{{ bin_dir }}/etcdctl --peers={{ etcd_access_addresses }} member add {{ etcd_member_name }} {{ etcd_peer_url }}"
tags:
- facts
environment:
ETCDCTL_CERT_FILE: "{{ etcd_cert_dir }}/node-{{ inventory_hostname }}.pem"
ETCDCTL_KEY_FILE: "{{ etcd_cert_dir }}/node-{{ inventory_hostname }}-key.pem"
- name: Install etcd launch script
template:
@@ -28,3 +28,12 @@
backup: yes
when: is_etcd_master
notify: restart etcd
- name: Configure | Join member(s) to cluster one at a time
include_tasks: join_member.yml
vars:
target_node: "{{ item }}"
loop_control:
pause: 10
with_items: "{{ groups['etcd'] }}"
when: inventory_hostname == item and etcd_member_in_cluster.rc != 0 and etcd_cluster_is_healthy.rc == 0
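The loop added above joins etcd members one at a time: every host iterates over the whole etcd group, acts only on its own entry, and the loop_control pause spaces the iterations out. A stripped-down sketch of the same pattern (join_member.yml is a hypothetical placeholder file for this sketch):

---
- hosts: etcd
  gather_facts: false
  tasks:
    - name: Run the join steps for one member at a time
      include_tasks: join_member.yml   # placeholder file for this sketch
      vars:
        target_node: "{{ item }}"
      loop_control:
        pause: 10
      with_items: "{{ groups['etcd'] }}"
      when: inventory_hostname == item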


@@ -83,7 +83,8 @@
'node-{{ node }}-key.pem',
{% endfor %}]"
my_node_certs: ['ca.pem', 'node-{{ inventory_hostname }}.pem', 'node-{{ inventory_hostname }}-key.pem']
tags: facts
tags:
- facts
- name: Gen_certs | Gather etcd master certs
shell: "tar cfz - -C {{ etcd_cert_dir }} -T /dev/stdin <<< {{ my_master_certs|join(' ') }} {{ all_node_certs|join(' ') }} | base64 --wrap=0"

Some files were not shown because too many files have changed in this diff.