Compare commits

...

94 Commits

Author SHA1 Message Date
Matthew Mosesohn
3ff5f40bdb fix graceful upgrade (#1704)
Fix system namespace creation
Only rotate tokens when necessary
2017-09-27 14:49:20 +01:00
Matthew Mosesohn
689ded0413 Enable kubeadm upgrades to any version (#1709) 2017-09-27 14:48:18 +01:00
Matthew Mosesohn
327ed157ef Verify valid settings before deploy (#1705)
Also fix yaml lint issues

Fixes #1703
2017-09-27 14:47:47 +01:00
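The kind of check this adds can be expressed as a plain Ansible assert; a minimal sketch, assuming an odd-sized etcd group is one of the settings being verified (the exact checks live in kubespray's preinstall role):

```yaml
- name: Stop if the etcd group has an even number of members
  assert:
    that: groups.etcd | length is not divisibleby 2
    msg: "etcd must run on an odd number of hosts"
  run_once: true
```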
Pablo Moreno
c819238da9 Adds support for separate etcd machines on terraform/openstack deployment (#1674) 2017-09-27 10:59:09 +01:00
tanshanshan
477afa8711 when and run_once are reduplicative (#1694) 2017-09-26 14:48:05 +01:00
Matthew Mosesohn
bd272e0b3c Upgrade to kubeadm (#1667)
* Enable upgrade to kubeadm

* fix kubedns upgrade

* try upgrade route

* use init/upgrade strategy for kubeadm and ignore kubedns svc

* Use bin_dir for kubeadm

* delete more secrets

* fix waiting for terminating pods

* Manually enforce kube-proxy for kubeadm deploy

* remove proxy. update to kubeadm 1.8.0rc1
2017-09-26 10:38:58 +01:00
Maxim Krasilnikov
1067595b5c Change used chars for kubeadm tokens (#1701) 2017-09-26 05:56:08 +01:00
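kubeadm only accepts tokens of the form `[a-z0-9]{6}.[a-z0-9]{16}`, so the generated token is restricted to lowercase letters and digits. The snippet below mirrors the group_vars example further down in this compare:

```yaml
kubeadm_token_first: "{{ lookup('password', 'credentials/kubeadm_token_first length=6 chars=ascii_lowercase,digits') }}"
kubeadm_token_second: "{{ lookup('password', 'credentials/kubeadm_token_second length=16 chars=ascii_lowercase,digits') }}"
kubeadm_token: "{{ kubeadm_token_first }}.{{ kubeadm_token_second }}"
```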
Brad Beam
14c232e3c4 Merge pull request #1663 from foxyriver/fix-shell
use command module instead of shell module
2017-09-25 13:24:45 -05:00
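The rationale: `command` runs the binary directly and avoids shell quoting pitfalls when no pipes or redirects are needed. A generic before/after sketch (task names are illustrative, not the actual tasks changed in #1663):

```yaml
# Before: spawns a shell for no reason
- name: Check kubelet version
  shell: "{{ bin_dir }}/kubelet --version"
  changed_when: false

# After: runs the binary directly
- name: Check kubelet version
  command: "{{ bin_dir }}/kubelet --version"
  changed_when: false
```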
Brad Beam
57f5fb1f4f Merge pull request #1661 from neith00/master
upgrading from weave version 2.0.1 to 2.0.4
2017-09-25 13:23:57 -05:00
Bogdan Dobrelya
bcddfb786d Merge pull request #1692 from mattymo/old-etcd-logic
drop unused etcd logic
2017-09-25 17:44:33 +02:00
Martin Uddén
20db1738fa feature: install project atomic container-storage-setup (CSS) on RedHat family (#1499)
* feature: install project atomic container-storage-setup (CSS) on RedHat family

* missing patch for this feature

* sub-role refactor

* Yamllint fix
2017-09-25 12:29:17 +01:00
Hassan Zamani
b23d81f825 Add etcd_blkio_weight var (#1690) 2017-09-25 12:20:24 +01:00
Maxim Krasilnikov
bc15ceaba1 Update var doc about users accounts (#1685) 2017-09-25 12:20:00 +01:00
Junaid Ali
6f17d0817b Updating getting-started.md (#1683)
Signed-off-by: Junaid Ali <junaidali.yahya@gmail.com>
2017-09-25 12:19:38 +01:00
Matthew Mosesohn
a1cde03b20 Correct master manifest cleanup logic (#1693)
Fixes #1666
2017-09-25 12:19:04 +01:00
Bogdan Dobrelya
cfce23950a Merge pull request #1687 from jistr/cgroup-driver-kubeadm
Set correct kubelet cgroup-driver also for kubeadm deployments
2017-09-25 11:16:40 +02:00
Deni Bertovic
64740249ab Adds tags for asserts (#1639) 2017-09-25 08:41:03 +01:00
Matthew Mosesohn
126f42de06 drop unused etcd logic
Fixes #1660
2017-09-25 07:52:55 +01:00
Matthew Mosesohn
d94e3a81eb Use api lookup for kubelet hostname when using cloudprovider (#1686)
The value cannot be determined properly via local facts, so
checking k8s api is the most reliable way to look up what hostname
is used when using a cloudprovider.
2017-09-24 09:22:15 +01:00
Jiri Stransky
70d0235770 Set correct kubelet cgroup-driver also for kubeadm deployments
This follows pull request #1677, adding the cgroup-driver
autodetection also for kubeadm way of deploying.

Info about this and the possibility to override is added to the docs.
2017-09-22 13:19:04 +02:00
foxyriver
30b5493fd6 use command module instead of shell module 2017-09-22 15:47:03 +08:00
Bogdan Dobrelya
4f6362515f Merge pull request #1677 from jistr/cgroup-driver
Allow setting cgroup driver for kubelet
2017-09-21 17:31:48 +02:00
Jiri Stransky
dbbe9419e5 Allow setting cgroup driver for kubelet
Red Hat family platforms run docker daemon with `--exec-opt
native.cgroupdriver=systemd`. When kubespray tried to start kubelet
service, it failed with:

Error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

Setting kubelet's cgroup driver to the correct value for the platform
fixes this issue. The code utilizes autodetection of docker's cgroup
driver, as different RPMs for the same distro may vary in that regard.
2017-09-21 11:58:11 +02:00
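A rough sketch of the autodetection idea, assuming `docker info` is available on the node; the actual kubespray tasks and variable plumbing may differ. The `kubelet_cgroup_driver` override documented later in this compare takes precedence over the detected value:

```yaml
- name: Detect which cgroup driver the docker daemon uses
  shell: docker info 2>/dev/null | awk '/Cgroup Driver/ {print $3}'
  register: docker_cgroup_driver_result
  changed_when: false
  check_mode: no

- name: Align kubelet with docker unless overridden
  set_fact:
    kubelet_cgroup_driver: "{{ docker_cgroup_driver_result.stdout | default('cgroupfs', true) }}"
  when: kubelet_cgroup_driver is not defined
```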
Matthew Mosesohn
188bae142b Fix wait for hosts in CI (#1679)
Also fix usage of failed_when and handling exit code.
2017-09-20 14:30:09 +01:00
Matthew Mosesohn
ef8e35e39b Create admin credential kubeconfig (#1647)
New files: /etc/kubernetes/admin.conf
           /root/.kube/config
           $GITDIR/artifacts/{kubectl,admin.conf}

Optional method to download kubectl and admin.conf if
kubeconfig_localhost is set to true (default false)
2017-09-18 13:30:57 +01:00
Matthew Mosesohn
975accbe1d just use public_ip in creating gce temporary waitfor hosts (#1646)
* just use public_ip in creating gce temporary waitfor hosts

* Update create-gce.yml
2017-09-18 13:24:57 +01:00
Brad Beam
aaa27d0a34 Adding quotes around parameters in cloud_config (#1664)
This is to help support escapes and special characters
2017-09-16 08:43:47 +01:00
Kevin Lefevre
9302ce0036 Enhanced OpenStack cloud provider (#1627)
- Enable Cinder API version for block storage
- Enable floating IP for LBaaS
2017-09-16 08:43:24 +01:00
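The corresponding knobs appear in the group_vars example later in this compare; the values below are placeholders:

```yaml
openstack_blockstorage_version: "v2"               # pin the Cinder API version if autodetection fails
openstack_lbaas_enabled: true                      # use Neutron LBaaSv2 for Services of type LoadBalancer
openstack_lbaas_subnet_id: "<neutron-subnet-id>"
openstack_lbaas_floating_network_id: "<neutron-network-id>"
```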
Matthew Mosesohn
0aab3c97a0 Add all-in-one CI mode and make coreos test aio (#1665) 2017-09-15 22:28:37 +01:00
Matthew Mosesohn
8e731337ba Enable HA deploy of kubeadm (#1658)
* Enable HA deploy of kubeadm

* raise delay to 60s for starting gce hosts
2017-09-15 22:28:15 +01:00
Matthew Mosesohn
b294db5aed fix apply for netchecker upgrade (#1659)
* fix apply for netchecker upgrade and graceful upgrade

* Speed up daemonset upgrades. Make check wait for ds upgrades.
2017-09-15 13:19:37 +01:00
Matthew Mosesohn
8d766a2ca9 Enable ssh opts in config, set 100 connection retries (#1662)
Also update to ansible 2.3.2
2017-09-15 10:19:36 +01:00
Brad Beam
f2ae16e71d Merge pull request #1651 from bradbeam/vaultnocontent
Fixing condition where vault CA already exists
2017-09-14 17:04:15 -05:00
Brad Beam
ac281476c8 Prune unnecessary certs from vault setup (#1652)
* Cleaning up cert checks for vault

* Removing all unnecessary etcd certs from each node

* Removing all unnecessary kube certs from each node
2017-09-14 12:28:11 +01:00
neith00
1b1c8d31a9 upgrading from weave version 2.0.1 to 2.0.4
This upgrade has been tested offline on a 1.7.5 cluster
2017-09-14 10:29:28 +02:00
Brad Beam
4b587aaf99 Adding ability to specify altnames for vault cert (#1640) 2017-09-14 07:19:44 +01:00
Kyle Bai
016301508e Update to Kubernetes v1.7.5 (#1649) 2017-09-14 07:18:03 +01:00
Matthew Mosesohn
6744726089 kubeadm support (#1631)
* kubeadm support

* move k8s master to a subtask
* disable k8s secrets when using kubeadm
* fix etcd cert serial var
* move simple auth users to master role
* make a kubeadm-specific env file for kubelet
* add non-ha CI job

* change ci boolean vars to json format

* fixup

* Update create-gce.yml

* Update create-gce.yml

* Update create-gce.yml
2017-09-13 19:00:51 +01:00
Brad Beam
0a89f88b89 Fixing condition where CA already exists 2017-09-13 03:40:46 +00:00
Brad Beam
69fac8ea58 Merge pull request #1634 from bradbeam/calico_cni
fix for calico cni plugin node name
2017-09-11 22:18:06 -05:00
Brad Beam
a51104e844 Merge pull request #1648 from kubernetes-incubator/mattymo-patch-1
Update getting-started.md
2017-09-11 17:55:51 -05:00
Matthew Mosesohn
943aaf84e5 Update getting-started.md 2017-09-11 12:47:04 +03:00
Seungkyu Ahn
e8bde03a50 Setting kubectl bin directory (#1635) 2017-09-09 23:54:13 +03:00
Matthew Mosesohn
75b13caf0b Fix kube-apiserver status checks when changing insecure bind addr (#1633) 2017-09-09 23:41:48 +03:00
Matthew Mosesohn
0f231f0e76 Improve method to create and wait for gce instances (#1645) 2017-09-09 23:41:31 +03:00
Matthew Mosesohn
5d99fa0940 Purge old upgrade hooks and unused tasks (#1641) 2017-09-09 23:41:20 +03:00
Matthew Mosesohn
649388188b Fix netchecker update side effect (#1644)
* Fix netchecker update side effect

kubectl apply should only be used on resources created
with kubectl apply. To work around this, we should apply
the old manifest before upgrading it.

* Update 030_check-network.yml
2017-09-09 23:38:38 +03:00
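In plain terms: `kubectl apply` relies on the `last-applied-configuration` annotation, which `kubectl create` never writes, so the old manifest is applied once to record that annotation before the new one is applied. A hedged sketch (the manifest path is illustrative):

```yaml
- name: Re-own a resource originally created with "kubectl create"
  command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/netchecker-server-deployment.yml"
  delegate_to: "{{ groups['kube-master'][0] }}"
  run_once: true
```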
Matthew Mosesohn
9fa1873a65 Add kube dashboard, enabled by default (#1643)
* Add kube dashboard, enabled by default

Also add rbac role for kube user

* Update main.yml
2017-09-09 23:38:03 +03:00
Matthew Mosesohn
f2057dd43d Refactor downloads (#1642)
* Refactor downloads

Add prefixes to tasks (file vs container)
Remove some delegates
Clean up some conditions

* Update ansible.cfg
2017-09-09 23:32:12 +03:00
Brad Beam
eeffbbb43c Updating calicocni.hostname to calicocni.nodename 2017-09-08 12:47:40 +00:00
Brad Beam
aaa0105f75 Flexing calicocni.hostname based on cloud provider 2017-09-08 12:47:40 +00:00
Matthew Mosesohn
f29a42721f Clean up debug in check apiserver test (#1638)
* Clean up debug in check apiserver test

* Change password generation for kube_user

Special characters are not allowed in known_users.csv file
2017-09-08 15:47:13 +03:00
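The generated password is therefore restricted to letters and digits via Ansible's password lookup, as in the group_vars change later in this compare:

```yaml
kube_api_pwd: "{{ lookup('password', 'credentials/kube_user length=15 chars=ascii_letters,digits') }}"
```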
Matthew Mosesohn
079d317ade Default is_atomic to false (#1637) 2017-09-08 15:00:57 +03:00
Matthew Mosesohn
6f1fd12265 Revert "Add option for fact cache expiry" (#1636)
* Revert "Add option for fact cache expiry (#1602)"

This reverts commit fb30f65951.
2017-09-08 10:19:58 +03:00
Maxim Krasilnikov
e16b57aa05 Store vault users passwords to credentials dir. Create vault and etcd roles after start vault cluster (#1632) 2017-09-07 23:30:16 +03:00
Yorgos Saslis
fb30f65951 Add option for fact cache expiry (#1602)
* Add option for fact cache expiry 

By adding the `fact_caching_timeout` we avoid really stale/invalid data ending up in the cache.
Left commented out by default for backwards compatibility, but nice to have available.

* Enabled cache-expiry by default

Set to 2 hours and modified comment to reflect change
2017-09-07 23:29:27 +03:00
Tennis Smith
a47aaae078 Add bastion host definitions (#1621)
* Add comment line and documentation for bastion host usage

* Take out unneeded sudo parm

* Remove blank lines

* revert changes

* take out disabling of strict host checking
2017-09-07 23:26:52 +03:00
Matthew Mosesohn
7117614ee5 Use a generated password for kube user (#1624)
Removed unnecessary root user
2017-09-06 20:20:25 +03:00
Chad Swenson
e26aec96b0 Consolidate kube-proxy module and sysctl loading (#1586)
This sets br_netfilter and the net.bridge.bridge-nf-call-iptables sysctl from a single play before kube-proxy is first run, instead of from the flannel and weave network_plugin roles after kube-proxy is started
2017-09-06 15:11:51 +03:00
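A minimal sketch of that single play, using stock Ansible modules (task names are illustrative, not the exact kubespray tasks):

```yaml
- name: Ensure the br_netfilter module is loaded
  modprobe:
    name: br_netfilter
    state: present
  ignore_errors: true   # functionality is built into the bridge module on older kernels

- name: Let iptables see bridged traffic before kube-proxy starts
  sysctl:
    name: net.bridge.bridge-nf-call-iptables
    value: 1
    state: present
    reload: yes
```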
Sam Powers
c60d104056 Update checksums (etcd calico calico-cni weave) to fix uploads.yml (#1584)
the uploads.yml playbook was broken with checksum mismatch errors in
various kubespray commits, for example, 3bfad5ca73
which updated the version from 3.0.6 to 3.0.17 without updating the
corresponding checksums.
2017-09-06 15:11:13 +03:00
Oliver Moser
e6ff8c92a0 Using 'hostnamectl' to set unconfigured hostname on CoreOS (#1600) 2017-09-06 15:10:52 +03:00
Maxim Krasilnikov
9bce364b3c Update auth enabled methods in group_vars example (#1625) 2017-09-06 15:10:18 +03:00
Chad Swenson
cbaa2b5773 Retry Remove all Docker containers in reset (#1623)
Due to various occasional docker bugs, removing a container will sometimes fail. This can often be mitigated by trying again.
2017-09-06 14:23:16 +03:00
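The mitigation is Ansible's `retries`/`until` loop; a hedged sketch of such a reset task:

```yaml
- name: Remove all Docker containers (retry on transient docker errors)
  shell: docker ps -aq | xargs -r docker rm -fv
  register: remove_all_containers
  until: remove_all_containers.rc == 0
  retries: 4
  delay: 5
```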
Matthieu
0453ed8235 Fix an error with Canal when RBAC are disabled (#1619)
* Fix an error with Canal when RBAC are disabled

* Update using same rbac strategy used elsewhere
2017-09-06 11:32:32 +03:00
Brad Beam
a341adb7f3 Updating CN for node certs generated by vault (#1622)
This allows the node authorization plugin to function correctly
2017-09-06 10:55:08 +03:00
Matthew Mosesohn
4c88ac69f2 Use kubectl apply instead of create/replace (#1610)
Disable checks for existing resources to speed up execution.
2017-09-06 09:36:54 +03:00
Brad Beam
85c237bc1d Merge pull request #1607 from chapsuk/vault_roles
Vault role updates
2017-09-05 11:48:41 -05:00
Tennis Smith
35d48cc88c Point apiserver address to 0.0.0.0 (#1617)
* Point apiserver address to 0.0.0.0
Added loadbalancer api server address
* Update documentation
2017-09-05 18:41:47 +03:00
mkrasilnikov
957b7115fe Remove node name from kube-proxy and admin certificates 2017-09-05 14:40:26 +03:00
Yorgos Saslis
82eedbd622 Update ansible inventory file when template changes (#1612)
This trigger ensures the inventory file is kept up-to-date. Otherwise, if the file exists and you've made changes to your terraform-managed infra without having deleted the file, it would never get updated. 

For example, consider the case where you've destroyed and re-applied the terraform resources, none of the IPs would get updated, so ansible would be trying to connect to the old ones.
2017-09-05 14:10:53 +03:00
mkrasilnikov
b930b0ef5a Place vault role credentials only to vault group hosts 2017-09-05 11:16:18 +03:00
mkrasilnikov
ad313c9d49 typo fix 2017-09-05 09:07:36 +03:00
mkrasilnikov
06035c0f4e Change vault CI CLOUD_MACHINE_TYPE to n1-standard-2 2017-09-05 09:07:36 +03:00
mkrasilnikov
e1384f6618 Using issue cert result var instead hostvars 2017-09-05 09:07:36 +03:00
mkrasilnikov
3acb86805b Rename vault_address to vault_bind_address 2017-09-05 09:07:35 +03:00
mkrasilnikov
bf0af1cd3d Vault role updates:
* use separate vault roles to generate certs with different `O` (Organization) subject fields;
  * configure vault roles to issue certificates with different `CN` (Common Name) subject fields;
  * set `CN` and `O` on the `kubernetes` and `etcd` certificates;
  * simplify the vault/defaults vars definition;
  * define vault dirs variables in kubernetes-defaults roles so shared tasks can be reused in the etcd and kubernetes/secrets roles;
  * upgrade vault to 0.8.1;
  * generate a random vault user password for each role by default;
  * fix the `serial` file name for vault certs;
  * move the vault auth request to the issue_cert tasks;
  * enable `RBAC` in vault CI;
2017-09-05 09:07:35 +03:00
ArthurMa
c77d11f1c7 Bugfix (#1616)
lost executable path
2017-09-05 08:35:14 +03:00
Matthew Mosesohn
d279d145d5 Fix non-rbac deployment of resources as a list (#1613)
* Use kubectl apply instead of create/replace

Disable checks for existing resources to speed up execution.

* Fix non-rbac deployment of resources as a list

* Fix autoscaler tolerations field

* set all kube resources to state=latest

* Update netchecker and weave
2017-09-05 08:23:12 +03:00
Matthew Mosesohn
fc7905653e Add socat for CoreOS when using host deploy kubelet (#1575) 2017-09-04 11:30:18 +03:00
Matthew Mosesohn
660282e82f Make daemonsets upgradeable (#1606)
Canal will be covered by a separate PR
2017-09-04 11:30:01 +03:00
Matthew Mosesohn
77602dbb93 Move calico to daemonset (#1605)
* Drop legacy calico logic

* add calico as a daemonset
2017-09-04 11:29:51 +03:00
Matthew Mosesohn
a3e6896a43 Add RBAC support for canal (#1604)
Refactored how rbac_enabled is set
Added RBAC to ubuntu-canal-ha CI job
Added rbac for calico policy controller
2017-09-04 11:29:40 +03:00
Dann
702ce446df Apply ClusterRoleBinding to dnsmaq when rbac_enabled (#1592)
* Add RBAC policies to dnsmasq

* fix merge conflict

* yamllint

* use .j2 extension for dnsmasq autoscaler
2017-09-03 10:53:45 +03:00
Brad Beam
8ae77e955e Adding in certificate serial numbers to manifests (#1392) 2017-09-01 09:02:23 +03:00
sgmitchell
783924e671 Change backup handler to only run v2 data backup if snap directory exists (#1594) 2017-08-31 18:23:24 +03:00
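The guard is a `stat` on the v2 data directory before invoking the backup; a hedged sketch with illustrative paths and variable names:

```yaml
- name: Check whether etcd has v2 data to back up
  stat:
    path: "{{ etcd_data_dir | default('/var/lib/etcd') }}/member/snap"
  register: etcd_snap_dir

- name: Back up etcd v2 data
  command: >-
    {{ bin_dir }}/etcdctl backup
    --data-dir {{ etcd_data_dir | default('/var/lib/etcd') }}
    --backup-dir {{ etcd_backup_dir | default('/var/backups/etcd') }}
  when: etcd_snap_dir.stat.exists
```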
Julian Poschmann
93304e5f58 Fix calico leaving service behind. (#1599) 2017-08-31 12:00:05 +03:00
Brad Beam
917373ee55 Merge pull request #1595 from bradbeam/cacerts
Fixing CA certificate locations for k8s components
2017-08-30 21:31:19 -05:00
Brad Beam
7a98ad50b4 Fixing CA certificate locations for k8s components 2017-08-30 15:30:40 -05:00
Brad Beam
982058cc19 Merge pull request #1514 from vijaykatam/docker_systemd
Configurable docker yum repos, systemd fix
2017-08-30 11:50:23 -05:00
Oliver Moser
576beaa6a6 Include /opt/bin in PATH for host deployed kubelet on CoreOS (#1591)
* Include /opt/bin in PATH for host deployed kubelet on CoreOS

* Removing conditional check for CoreOS
2017-08-30 16:50:33 +03:00
Maxim Krasilnikov
6eb22c5db2 Change single Vault pki mount to multi pki mounts paths for etcd and kube CA`s (#1552)
* Added update CA trust step for etcd and kube/secrets roles

* Added load_balancer_domain_name to certificate alt names if defined. Reset CAs on RedHat OS.

* Rename kube-cluster-ca.crt to vault-ca.crt; we need separate CAs for vault, etcd and kube.

* Vault role refactoring: remove optional cert vault auth because it was not used and did not work. Create separate CAs for vault and etcd.

* Fixed different certificate sets for vault cert_management

* Update doc/vault.md

* Fixed condition for creating the vault CA (wrong group)

* Fixed missing etcd_cert_path mount for the rkt deployment type. Distribute vault roles to all vault hosts

* Removed wrong `when` condition in the create etcd role vault tasks.
2017-08-30 16:03:22 +03:00
Vijay Katam
55ba81fee5 Add changed_when: false to rpm query 2017-08-14 12:31:44 -07:00
Vijay Katam
7ad5523113 restrict rpm query to redhat 2017-08-10 13:49:14 -07:00
Vijay Katam
5efda3eda9 Configurable docker yum repos, systemd fix
* Make yum repos used for installing docker rpms configurable
* TasksMax is only supported in systemd version >= 226
* Change to systemd file should restart docker
2017-08-09 15:49:53 -07:00
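The TasksMax point matters because the option is only understood by systemd 226 and newer; a hedged sketch of detecting the version and gating the option in the unit template (variable names are illustrative):

```yaml
- name: Determine systemd version
  shell: systemctl --version | head -n1 | awk '{print $2}'
  register: systemd_version
  changed_when: false

# docker.service.j2 (fragment):
#   [Service]
#   {% if systemd_version.stdout | int >= 226 %}
#   TasksMax=infinity
#   {% endif %}
```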
192 changed files with 2945 additions and 1576 deletions

.gitignore vendored (2 changes)
View File

@@ -22,8 +22,10 @@ __pycache__/
# Distribution / packaging
.Python
artifacts/
env/
build/
credentials/
develop-eggs/
dist/
downloads/

View File

@@ -53,6 +53,7 @@ before_script:
IDEMPOT_CHECK: "false"
RESET_CHECK: "false"
UPGRADE_TEST: "false"
KUBEADM_ENABLED: "false"
RESOLVCONF_MODE: docker_dns
LOG_LEVEL: "-vv"
ETCD_DEPLOYMENT: "docker"
@@ -115,11 +116,11 @@ before_script:
-e ansible_python_interpreter=${PYPATH}
-e ansible_ssh_user=${SSH_USER}
-e bootstrap_os=${BOOTSTRAP_OS}
-e cert_management=${CERT_MGMT:-script}
-e cloud_provider=gce
-e deploy_netchecker=true
-e download_localhost=${DOWNLOAD_LOCALHOST}
-e download_run_once=${DOWNLOAD_RUN_ONCE}
-e cert_management=${CERT_MGMT:-script}
-e "{deploy_netchecker: true}"
-e "{download_localhost: ${DOWNLOAD_LOCALHOST}}"
-e "{download_run_once: ${DOWNLOAD_RUN_ONCE}}"
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-e kubedns_min_replicas=1
@@ -127,6 +128,9 @@ before_script:
-e local_release_dir=${PWD}/downloads
-e resolvconf_mode=${RESOLVCONF_MODE}
-e vault_deployment_type=${VAULT_DEPLOYMENT}
-e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
-e "${AUTHORIZATION_MODES}"
--limit "all:!fake_hosts"
cluster.yml
@@ -144,17 +148,19 @@ before_script:
-e ansible_ssh_user=${SSH_USER}
-e bootstrap_os=${BOOTSTRAP_OS}
-e cloud_provider=gce
-e deploy_netchecker=true
-e download_localhost=${DOWNLOAD_LOCALHOST}
-e download_run_once=${DOWNLOAD_RUN_ONCE}
-e "{deploy_netchecker: true}"
-e "{download_localhost: ${DOWNLOAD_LOCALHOST}}"
-e "{download_run_once: ${DOWNLOAD_RUN_ONCE}}"
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-e kubedns_min_replicas=1
-e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
-e local_release_dir=${PWD}/downloads
-e resolvconf_mode=${RESOLVCONF_MODE}
-e vault_deployment_type=${VAULT_DEPLOYMENT}
-e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
-e "${AUTHORIZATION_MODES}"
--limit "all:!fake_hosts"
$PLAYBOOK;
@@ -162,7 +168,9 @@ before_script:
# Tests Cases
## Test Master API
- ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/010_check-apiserver.yml $LOG_LEVEL
- >
ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/010_check-apiserver.yml $LOG_LEVEL
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
## Ping the between 2 pod
- ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/030_check-network.yml $LOG_LEVEL
@@ -177,15 +185,20 @@ before_script:
-b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
--private-key=${HOME}/.ssh/id_rsa
-e bootstrap_os=${BOOTSTRAP_OS}
-e cloud_provider=gce
-e ansible_python_interpreter=${PYPATH}
-e download_localhost=${DOWNLOAD_LOCALHOST}
-e download_run_once=${DOWNLOAD_RUN_ONCE}
-e deploy_netchecker=true
-e resolvconf_mode=${RESOLVCONF_MODE}
-e local_release_dir=${PWD}/downloads
-e "{deploy_netchecker: true}"
-e "{download_localhost: ${DOWNLOAD_LOCALHOST}}"
-e "{download_run_once: ${DOWNLOAD_RUN_ONCE}}"
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
-e kubedns_min_replicas=1
-e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
-e local_release_dir=${PWD}/downloads
-e resolvconf_mode=${RESOLVCONF_MODE}
-e vault_deployment_type=${VAULT_DEPLOYMENT}
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
-e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-e "${AUTHORIZATION_MODES}"
--limit "all:!fake_hosts"
cluster.yml;
@@ -207,6 +220,7 @@ before_script:
-b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
--private-key=${HOME}/.ssh/id_rsa
-e bootstrap_os=${BOOTSTRAP_OS}
-e cloud_provider=gce
-e ansible_python_interpreter=${PYPATH}
-e reset_confirmation=yes
--limit "all:!fake_hosts"
@@ -220,15 +234,20 @@ before_script:
-b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
--private-key=${HOME}/.ssh/id_rsa
-e bootstrap_os=${BOOTSTRAP_OS}
-e cloud_provider=gce
-e ansible_python_interpreter=${PYPATH}
-e download_localhost=${DOWNLOAD_LOCALHOST}
-e download_run_once=${DOWNLOAD_RUN_ONCE}
-e deploy_netchecker=true
-e resolvconf_mode=${RESOLVCONF_MODE}
-e local_release_dir=${PWD}/downloads
-e "{deploy_netchecker: true}"
-e "{download_localhost: ${DOWNLOAD_LOCALHOST}}"
-e "{download_run_once: ${DOWNLOAD_RUN_ONCE}}"
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
-e kubedns_min_replicas=1
-e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
-e local_release_dir=${PWD}/downloads
-e resolvconf_mode=${RESOLVCONF_MODE}
-e vault_deployment_type=${VAULT_DEPLOYMENT}
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
-e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-e "${AUTHORIZATION_MODES}"
--limit "all:!fake_hosts"
cluster.yml;
@@ -257,27 +276,52 @@ before_script:
-e cloud_region=${CLOUD_REGION}
# Test matrix. Leave the comments for markup scripts.
.coreos_calico_sep_variables: &coreos_calico_sep_variables
.coreos_calico_aio_variables: &coreos_calico_aio_variables
# stage: deploy-gce-part1
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
KUBE_NETWORK_PLUGIN: calico
CLOUD_IMAGE: coreos-stable-1465-6-0-v20170817
CLOUD_REGION: us-west1-b
CLOUD_MACHINE_TYPE: "n1-standard-2"
CLUSTER_MODE: separate
CLUSTER_MODE: aio
BOOTSTRAP_OS: coreos
RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
##User-data to simply turn off coreos upgrades
STARTUP_SCRIPT: 'systemctl disable locksmithd && systemctl stop locksmithd'
.ubuntu_canal_ha_variables: &ubuntu_canal_ha_variables
.ubuntu_canal_ha_rbac_variables: &ubuntu_canal_ha_rbac_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: canal
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
CLOUD_IMAGE: ubuntu-1604-xenial
CLOUD_REGION: europe-west1-b
CLUSTER_MODE: ha
UPGRADE_TEST: "graceful"
STARTUP_SCRIPT: ""
.centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: weave
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
CLOUD_IMAGE: centos-7
CLOUD_MACHINE_TYPE: "n1-standard-1"
CLOUD_REGION: us-central1-b
CLUSTER_MODE: ha
KUBEADM_ENABLED: "true"
UPGRADE_TEST: "graceful"
STARTUP_SCRIPT: ""
.ubuntu_canal_kubeadm_variables: &ubuntu_canal_kubeadm_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: canal
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
CLOUD_IMAGE: ubuntu-1604-xenial
CLOUD_MACHINE_TYPE: "n1-standard-1"
CLOUD_REGION: europe-west1-b
CLUSTER_MODE: ha
KUBEADM_ENABLED: "true"
STARTUP_SCRIPT: ""
.rhel7_weave_variables: &rhel7_weave_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: weave
@@ -364,6 +408,8 @@ before_script:
.ubuntu_vault_sep_variables: &ubuntu_vault_sep_variables
# stage: deploy-gce-part1
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
CLOUD_MACHINE_TYPE: "n1-standard-2"
KUBE_NETWORK_PLUGIN: canal
CERT_MGMT: vault
CLOUD_IMAGE: ubuntu-1604-xenial
@@ -381,13 +427,13 @@ before_script:
STARTUP_SCRIPT: ""
# Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
coreos-calico-sep:
coreos-calico-aio:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_calico_sep_variables
<<: *coreos_calico_aio_variables
when: on_success
except: ['triggers']
only: [/^pr-.*$/]
@@ -398,7 +444,7 @@ coreos-calico-sep-triggers:
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_calico_sep_variables
<<: *coreos_calico_aio_variables
when: on_success
only: ['triggers']
@@ -445,24 +491,66 @@ ubuntu-weave-sep-triggers:
only: ['triggers']
# More builds for PRs/merges (manual) and triggers (auto)
ubuntu-canal-ha:
ubuntu-canal-ha-rbac:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_ha_variables
<<: *ubuntu_canal_ha_rbac_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
ubuntu-canal-ha-triggers:
ubuntu-canal-ha-rbac-triggers:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_ha_variables
<<: *ubuntu_canal_ha_rbac_variables
when: on_success
only: ['triggers']
ubuntu-canal-kubeadm-rbac:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_kubeadm_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
ubuntu-canal-kubeadm-triggers:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_kubeadm_variables
when: on_success
only: ['triggers']
centos-weave-kubeadm-rbac:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos_weave_kubeadm_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
centos-weave-kubeadm-triggers:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos_weave_kubeadm_variables
when: on_success
only: ['triggers']

View File

@@ -1,6 +1,7 @@
[ssh_connection]
pipelining=True
#ssh_args = -F ./ssh-bastion.conf -o ControlMaster=auto -o ControlPersist=30m
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100
#ssh_args = -F ./ssh-bastion.conf -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100
#control_path = ~/.ssh/ansible-%%r@%%h:%%p
[defaults]
host_key_checking=False

View File

@@ -62,15 +62,28 @@
roles:
- { role: kubespray-defaults}
- { role: kubernetes/node, tags: node }
- { role: network_plugin, tags: network }
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes/master, tags: master }
- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes/kubeadm, tags: kubeadm, when: "kubeadm_enabled" }
- { role: network_plugin, tags: network }
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes-apps/rotate_tokens, tags: rotate_tokens, when: "secret_changed|default(false)" }
- { role: kubernetes-apps/network_plugin, tags: network }
- { role: kubernetes-apps/policy_controller, tags: policy-controller }
- { role: kubernetes/client, tags: client }
- hosts: calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"

View File

@@ -25,16 +25,29 @@ export AWS_DEFAULT_REGION="zzz"
- Rename `contrib/terraform/aws/terraform.tfvars.example` to `terraform.tfvars`
- Update `contrib/terraform/aws/terraform.tfvars` with your data
- Allocate new AWS Elastic IPs: Depending on # of Availability Zones used (2 for each AZ)
- Allocate a new AWS Elastic IP. Use this for your `loadbalancer_apiserver_address` value (below)
- Create an AWS EC2 SSH Key
- Run with `terraform apply --var-file="credentials.tfvars"` or `terraform apply` depending on whether you exported your AWS credentials
Example:
```commandline
terraform apply -var-file=credentials.tfvars -var 'loadbalancer_apiserver_address=34.212.228.77'
```
- Terraform automatically creates an Ansible Inventory file called `hosts` with the created infrastructure in the directory `inventory`
- Ansible will automatically generate an ssh config file for your bastion hosts. To make use of it, make sure you have a line in your `ansible.cfg` file that looks like the following:
```commandline
ssh_args = -F ./ssh-bastion.conf -o ControlMaster=auto -o ControlPersist=30m
```
- Once the infrastructure is created, you can run the kubespray playbooks and supply inventory/hosts with the `-i` flag.
Example (this one assumes you are using CoreOS)
```commandline
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_ssh_user=core -e bootstrap_os=coreos -b --become-user=root --flush-cache
```
**Troubleshooting**
***Remaining AWS IAM Instance Profile***:

View File

@@ -173,9 +173,9 @@ data "template_file" "inventory" {
list_etcd = "${join("\n",aws_instance.k8s-etcd.*.tags.Name)}"
elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
elb_api_port = "loadbalancer_apiserver.port=${var.aws_elb_api_port}"
kube_insecure_apiserver_address = "kube_apiserver_insecure_bind_address: ${var.kube_insecure_apiserver_address}"
loadbalancer_apiserver_address = "loadbalancer_apiserver.address=${var.loadbalancer_apiserver_address}"
}
}
resource "null_resource" "inventories" {
@@ -183,4 +183,8 @@ resource "null_resource" "inventories" {
command = "echo '${data.template_file.inventory.rendered}' > ../../../inventory/hosts"
}
triggers {
template = "${data.template_file.inventory.rendered}"
}
}

View File

@@ -25,4 +25,4 @@ kube-master
[k8s-cluster:vars]
${elb_api_fqdn}
${elb_api_port}
${kube_insecure_apiserver_address}
${loadbalancer_apiserver_address}

View File

@@ -5,11 +5,11 @@ aws_cluster_name = "devtest"
aws_vpc_cidr_block = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20","10.250.208.0/20"]
aws_cidr_subnets_public = ["10.250.224.0/20","10.250.240.0/20"]
aws_avail_zones = ["eu-central-1a","eu-central-1b"]
aws_avail_zones = ["us-west-2a","us-west-2b"]
#Bastion Host
aws_bastion_ami = "ami-5900cc36"
aws_bastion_size = "t2.small"
aws_bastion_ami = "ami-db56b9a3"
aws_bastion_size = "t2.medium"
#Kubernetes Cluster
@@ -23,9 +23,10 @@ aws_etcd_size = "t2.medium"
aws_kube_worker_num = 4
aws_kube_worker_size = "t2.medium"
aws_cluster_ami = "ami-903df7ff"
aws_cluster_ami = "ami-db56b9a3"
#Settings AWS ELB
aws_elb_api_port = 443
k8s_secure_api_port = 443
aws_elb_api_port = 6443
k8s_secure_api_port = 6443
kube_insecure_apiserver_address = "0.0.0.0"

View File

@@ -96,6 +96,6 @@ variable "k8s_secure_api_port" {
description = "Secure Port of K8S API Server"
}
variable "kube_insecure_apiserver_address" {
description= "Bind Address for insecure Port of K8s API Server"
variable "loadbalancer_apiserver_address" {
description= "Bind Address for ELB of K8s API Server"
}

View File

@@ -11,7 +11,7 @@ services.
There are some assumptions made to try and ensure it will work on your openstack cluster.
* floating-ips are used for access, but you can have masters and nodes that don't use floating-ips if needed. You need currently at least 1 floating ip, which we would suggest is used on a master.
* floating-ips are used for access, but you can have masters and nodes that don't use floating-ips if needed. You currently need at least one floating IP, which needs to be used on a master. If using more than one, at least one should be on a master for bastions to work properly.
* you already have a suitable OS image in glance
* you already have both an internal network and a floating-ip pool created
* you have security-groups enabled
@@ -75,7 +75,9 @@ $ echo Setting up Terraform creds && \
export TF_VAR_auth_url=${OS_AUTH_URL}
```
If you want to provision master or node VMs that don't use floating ips, write on a `my-terraform-vars.tfvars` file, for example:
##### Alternative: etcd inside masters
If you want to provision master or node VMs that don't use floating ips and where etcd runs inside the masters, put the following in a `my-terraform-vars.tfvars` file, for example:
```
number_of_k8s_masters = "1"
@@ -85,6 +87,28 @@ number_of_k8s_nodes = "0"
```
This will provision one VM as master using a floating ip, two additional masters using no floating ips (these will only have private ips inside your tenancy) and one VM as node, again without a floating ip.
##### Alternative: etcd on separate machines
If you want to provision master or node VMs that don't use floating ips and where **etcd is on separate nodes from Kubernetes masters**, put the following in a `my-terraform-vars.tfvars` file, for example:
```
number_of_etcd = "3"
number_of_k8s_masters = "0"
number_of_k8s_masters_no_etcd = "1"
number_of_k8s_masters_no_floating_ip = "0"
number_of_k8s_masters_no_floating_ip_no_etcd = "2"
number_of_k8s_nodes_no_floating_ip = "1"
number_of_k8s_nodes = "2"
flavor_k8s_node = "desired-flavor-id"
flavor_k8s_master = "desired-flavor-id"
flavor_etcd = "desired-flavor-id"
```
This will provision one VM as master using a floating ip, two additional masters using no floating ips (these will only have private ips inside your tenancy), two VMs as nodes with floating ips, one VM as node without floating ip and three VMs for etcd.
##### Alternative: add GlusterFS
Additionally, now the terraform based installation supports provisioning of a GlusterFS shared file system based on a separate set of VMs, running either a Debian or RedHat based set of VMs. To enable this, you need to add to your `my-terraform-vars.tfvars` the following variables:
```

View File

@@ -1,5 +1,5 @@
resource "openstack_networking_floatingip_v2" "k8s_master" {
count = "${var.number_of_k8s_masters}"
count = "${var.number_of_k8s_masters + var.number_of_k8s_masters_no_etcd}"
pool = "${var.floatingip_pool}"
}
@@ -73,6 +73,44 @@ resource "openstack_compute_instance_v2" "k8s_master" {
}
resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-${count.index+1}"
count = "${var.number_of_k8s_masters_no_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}" ]
floating_ip = "${element(openstack_networking_floatingip_v2.k8s_master.*.address, count.index + var.number_of_k8s_masters)}"
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,kube-node,k8s-cluster,vault"
}
}
resource "openstack_compute_instance_v2" "etcd" {
name = "${var.cluster_name}-etcd-${count.index+1}"
count = "${var.number_of_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_etcd}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,vault,no-floating"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
@@ -94,6 +132,27 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,kube-node,k8s-cluster,vault,no-floating"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.cluster_name}-k8s-node-${count.index+1}"
count = "${var.number_of_k8s_nodes}"

View File

@@ -6,10 +6,22 @@ variable "number_of_k8s_masters" {
default = 2
}
variable "number_of_k8s_masters_no_etcd" {
default = 2
}
variable "number_of_etcd" {
default = 2
}
variable "number_of_k8s_masters_no_floating_ip" {
default = 2
}
variable "number_of_k8s_masters_no_floating_ip_no_etcd" {
default = 2
}
variable "number_of_k8s_nodes" {
default = 1
}
@@ -59,6 +71,10 @@ variable "flavor_k8s_node" {
default = 3
}
variable "flavor_etcd" {
default = 3
}
variable "flavor_gfs_node" {
default = 3
}

View File

@@ -28,10 +28,10 @@ an example inventory located
You can use an
[inventory generator](https://github.com/kubernetes-incubator/kubespray/blob/master/contrib/inventory_builder/inventory.py)
to create or modify an Ansible inventory. Currently, it is limited in
functionality and is only use for making a basic Kubespray cluster, but it does
support creating large clusters. It now supports
functionality and is only used for configuring a basic Kubespray cluster inventory, but it does
support creating inventory file for large clusters as well. It now supports
separated ETCD and Kubernetes master roles from node role if the size exceeds a
certain threshold. Run inventory.py help for more information.
certain threshold. Run `python3 contrib/inventory_builder/inventory.py help` for more information.
Example inventory generator usage:
@@ -57,9 +57,9 @@ ansible-playbook -i my_inventory/inventory.cfg cluster.yml -b -v \
See more details in the [ansible guide](ansible.md).
Adding nodes
--------------------------
------------
You may want to add worker nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.
You may want to add **worker** nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.
- Add the new worker node to your inventory under kube-node (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Run the ansible-playbook command, substituting `scale.yml` for `cluster.yml`:
@@ -67,3 +67,51 @@ You may want to add worker nodes to your existing cluster. This can be done by r
ansible-playbook -i my_inventory/inventory.cfg scale.yml -b -v \
--private-key=~/.ssh/private_key
```
Connecting to Kubernetes
------------------------
By default, Kubespray configures kube-master hosts with insecure access to
kube-apiserver via port 8080. A kubeconfig file is not necessary in this case,
because kubectl will use http://localhost:8080 to connect. The kubeconfig files
generated will point to localhost (on kube-masters) and kube-node hosts will
connect either to a localhost nginx proxy or to a loadbalancer if configured.
More details on this process are in the [HA guide](ha.md).
Kubespray permits connecting to the cluster remotely on any IP of any
kube-master host on port 6443 by default. However, this requires
authentication. One could generate a kubeconfig based on one of the installed
kube-master hosts (needs improvement) or connect with a username and password.
By default, a user with admin rights is created, named `kube`.
The password can be viewed after deployment by looking at the file
`PATH_TO_KUBESPRAY/credentials/kube_user`. This contains a randomly generated
password. If you wish to set your own password, just precreate/modify this
file yourself.
For more information on kubeconfig and accessing a Kubernetes cluster, refer to
the Kubernetes [documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
Accessing Kubernetes Dashboard
------------------------------
If the variable `dashboard_enabled` is set (default is true), then you can
access the Kubernetes Dashboard at the following URL:
https://kube:_kube-password_@_host_:6443/ui/
To see the password, refer to the section above, titled *Connecting to
Kubernetes*. The host can be any kube-master or kube-node or loadbalancer
(when enabled).
Accessing Kubernetes API
------------------------
The main client of Kubernetes is `kubectl`. It is installed on each kube-master
host and can optionally be configured on your ansible host by setting
`kubeconfig_localhost: true` in the configuration. If enabled, kubectl and
admin.conf will appear in the artifacts/ directory after deployment. You can
see a list of nodes by running the following commands:
cd artifacts/
./kubectl --kubeconfig admin.conf get nodes
If desired, copy kubectl to your bin dir and admin.conf to ~/.kube/config.

View File

@@ -67,3 +67,17 @@ follows:
* network_plugin (such as Calico or Weave)
* kube-apiserver, kube-scheduler, and kube-controller-manager
* Add-ons (such as KubeDNS)
#### Upgrade considerations
Kubespray supports rotating certificates used for etcd and Kubernetes
components, but some manual steps may be required. If you have a pod that
requires use of a service token and is deployed in a namespace other than
`kube-system`, you will need to manually delete the affected pods after
rotating certificates. This is because all service account tokens are dependent
on the apiserver token that is used to generate them. When the certificate
rotates, all service account tokens must be rotated as well. During the
kubernetes-apps/rotate_tokens role, only pods in kube-system are destroyed and
recreated. All other invalidated service account tokens are cleaned up
automatically, but other pods are not deleted out of an abundance of caution
for impact to user deployed pods.
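A hedged illustration of that manual step, using a hypothetical namespace `myapp`: the pods are simply deleted so their controllers recreate them with regenerated service account tokens.

```yaml
- name: Recreate pods outside kube-system after token rotation (namespace is hypothetical)
  command: "{{ bin_dir }}/kubectl --namespace myapp delete pods --all"
```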

View File

@@ -109,6 +109,9 @@ Stack](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/dns-st
dynamic kernel services are needed for mounting persistent volumes into containers. These may not be
loaded by preinstall kubernetes processes. For example, ceph and rbd backed volumes. Set this variable to
true to let kubelet load kernel modules.
* *kubelet_cgroup_driver* - Allows manual override of the
cgroup-driver option for Kubelet. By default autodetection is used
to match Docker configuration.
##### Custom flags for Kube Components
For all kube components, custom flags can be passed in. This allows for edge cases where users need changes to the default deployment that may not be applicable to all deployments. This can be done by providing a list of flags. Example:
@@ -126,5 +129,8 @@ The possible vars are:
#### User accounts
Kubespray sets up two Kubernetes accounts by default: ``root`` and ``kube``. Their
passwords default to changeme. You can set this by changing ``kube_api_pwd``.
By default, a user with admin rights is created, named `kube`.
The password can be viewed after deployment by looking at the file
`PATH_TO_KUBESPRAY/credentials/kube_user`. This contains a randomly generated
password. If you wish to set your own password, just precreate/modify this
file yourself or change `kube_api_pwd` var.

View File

@@ -26,7 +26,6 @@ first task, is to stop any temporary instances of Vault, to free the port for
the long-term. At the end of this task, the entire Vault cluster should be up
and ready to go.
Keys to the Kingdom
-------------------
@@ -50,24 +49,32 @@ Vault by default encrypts all traffic to and from the datastore backend, all
resting data, and uses TLS for its TCP listener. It is recommended that you
do not change the Vault config to disable TLS, unless you absolutely have to.
Usage
-----
To get the Vault role running, you must do two things at a minimum:
1. Assign the ``vault`` group to at least 1 node in your inventory
2. Change ``cert_management`` to be ``vault`` instead of ``script``
1. Change ``cert_management`` to be ``vault`` instead of ``script``
Nothing else is required, but customization is possible. Check
``roles/vault/defaults/main.yml`` for the different variables that can be
overridden, most common being ``vault_config``, ``vault_port``, and
``vault_deployment_type``.
Also, if you intend to use a Root or Intermediate CA generated elsewhere,
you'll need to copy the certificate and key to the hosts in the vault group
prior to running the vault role. By default, they'll be located at
``/etc/vault/ssl/ca.pem`` and ``/etc/vault/ssl/ca-key.pem``, respectively.
As a result, the Vault role will create separate Root CAs for `etcd`,
`kubernetes` and `vault`. Also, if you intend to use a Root or Intermediate CA
generated elsewhere, you'll need to copy the certificate and key to the hosts in the vault group prior to running the vault role. By default, they'll be located at:
* vault:
* ``/etc/vault/ssl/ca.pem``
* ``/etc/vault/ssl/ca-key.pem``
* etcd:
* ``/etc/ssl/etcd/ssl/ca.pem``
* ``/etc/ssl/etcd/ssl/ca-key.pem``
* kubernetes:
* ``/etc/kubernetes/ssl/ca.pem``
* ``/etc/kubernetes/ssl/ca-key.pem``
Additional Notes:
@@ -77,7 +84,6 @@ Additional Notes:
credentials are saved to ``/etc/vault/roles/<role>/``. The service will
need to read in those credentials, if they want to interact with Vault.
Potential Work
--------------
@@ -87,6 +93,3 @@ Potential Work
- Add the ability to start temp Vault with Host, Rkt, or Docker
- Add a dynamic way to change out the backend role creation during Bootstrap,
so other services can be used (such as Consul)
- Segregate Server Cert generation from Auth Cert generation (separate CAs).
This work was partially started with the `auth_cert_backend` tasks, but would
need to be further applied to all roles (particularly Etcd and Kubernetes).

View File

@@ -74,14 +74,23 @@ bin_dir: /usr/local/bin
#azure_vnet_name:
#azure_route_table_name:
## When OpenStack is used, Cinder version can be explicitly specified if autodetection fails (https://github.com/kubernetes/kubernetes/issues/50461)
#openstack_blockstorage_version: "v1/v2/auto (default)"
## When OpenStack is used, if LBaaSv2 is available you can enable it with the following variables.
#openstack_lbaas_enabled: True
#openstack_lbaas_subnet_id: "Neutron subnet ID (not network ID) to create LBaaS VIP"
#openstack_lbaas_floating_network_id: "Neutron network ID (not subnet ID) to get floating IP from, disabled by default"
#openstack_lbaas_create_monitor: "yes"
#openstack_lbaas_monitor_delay: "1m"
#openstack_lbaas_monitor_timeout: "30s"
#openstack_lbaas_monitor_max_retries: "3"
## Uncomment to enable experimental kubeadm deployment mode
#kubeadm_enabled: false
#kubeadm_token_first: "{{ lookup('password', 'credentials/kubeadm_token_first length=6 chars=ascii_lowercase,digits') }}"
#kubeadm_token_second: "{{ lookup('password', 'credentials/kubeadm_token_second length=16 chars=ascii_lowercase,digits') }}"
#kubeadm_token: "{{ kubeadm_token_first }}.{{ kubeadm_token_second }}"
#
## Set these proxy values in order to update docker daemon to use proxies
#http_proxy: ""
#https_proxy: ""
@@ -107,6 +116,9 @@ bin_dir: /usr/local/bin
## Please specify true if you want to perform a kernel upgrade
kernel_upgrade: false
# Set to true to allow pre-checks to fail and continue deployment
#ignore_assert_errors: false
## Etcd auto compaction retention for mvcc key value store in hour
#etcd_compaction_retention: 0

View File

@@ -23,7 +23,7 @@ kube_users_dir: "{{ kube_config_dir }}/users"
kube_api_anonymous_auth: false
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.7.3
kube_version: v1.7.5
# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
@@ -40,23 +40,18 @@ kube_log_level: 2
# Users to create for basic auth in Kubernetes API via HTTP
# Optionally add groups for user
kube_api_pwd: "changeme"
kube_api_pwd: "{{ lookup('password', 'credentials/kube_user length=15 chars=ascii_letters,digits') }}"
kube_users:
kube:
pass: "{{kube_api_pwd}}"
role: admin
root:
pass: "{{kube_api_pwd}}"
role: admin
# groups:
# - system:masters
groups:
- system:masters
## It is possible to activate / deactivate selected authentication methods (basic auth, static token auth)
#kube_oidc_auth: false
#kube_basic_auth: false
#kube_token_auth: false
#kube_basic_auth: true
#kube_token_auth: true
## Variables for OpenID Connect Configuration https://kubernetes.io/docs/admin/authentication/
@@ -148,12 +143,20 @@ vault_deployment_type: docker
# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent
# Kubernetes dashboard (available at http://first_master:6443/ui by default)
dashboard_enabled: true
# Monitoring apps for k8s
efk_enabled: false
# Helm deployment
helm_enabled: false
# Make a copy of kubeconfig on the host that runs Ansible in GITDIR/artifacts
# kubeconfig_localhost: false
# Download kubectl onto the host that runs Ansible in GITDIR/artifacts
# kubectl_localhost: false
# dnsmasq
# dnsmasq_upstream_dns_servers:
# - /resolvethiszone.with/10.0.4.250

View File

@@ -135,11 +135,14 @@ class KubeManager(object):
return None
return out.splitlines()
def create(self, check=True):
def create(self, check=True, force=True):
if check and self.exists():
return []
cmd = ['create']
cmd = ['apply']
if force:
cmd.append('--force')
if not self.filename:
self.module.fail_json(msg='filename required to create')
@@ -148,14 +151,11 @@ class KubeManager(object):
return self._execute(cmd)
def replace(self):
def replace(self, force=True):
if not self.force and not self.exists():
return []
cmd = ['apply']
cmd = ['replace']
if self.force:
if force:
cmd.append('--force')
if not self.filename:
@@ -270,9 +270,8 @@ def main():
manager = KubeManager(module)
state = module.params.get('state')
if state == 'present':
result = manager.create()
result = manager.create(check=False)
elif state == 'absent':
result = manager.delete()
@@ -284,11 +283,7 @@ def main():
result = manager.stop()
elif state == 'latest':
if manager.exists():
manager.force = True
result = manager.replace()
else:
result = manager.create(check=False)
else:
module.fail_json(msg='Unrecognized state %s.' % state)

View File

@@ -1,4 +1,4 @@
pbr>=1.6
ansible>=2.3.0
ansible>=2.3.2
netaddr
jinja2>=2.9.6

View File

@@ -7,7 +7,7 @@
- name: Bootstrap | Run bootstrap.sh
script: bootstrap.sh
when: (need_bootstrap | failed)
when: need_bootstrap.rc != 0
- set_fact:
ansible_python_interpreter: "/opt/bin/python"
@@ -19,31 +19,31 @@
failed_when: false
changed_when: false
check_mode: no
when: (need_bootstrap | failed)
when: need_bootstrap.rc != 0
tags: facts
- name: Bootstrap | Copy get-pip.py
copy:
src: get-pip.py
dest: ~/get-pip.py
when: (need_pip | failed)
when: need_pip != 0
- name: Bootstrap | Install pip
shell: "{{ansible_python_interpreter}} ~/get-pip.py"
when: (need_pip | failed)
when: need_pip != 0
- name: Bootstrap | Remove get-pip.py
file:
path: ~/get-pip.py
state: absent
when: (need_pip | failed)
when: need_pip != 0
- name: Bootstrap | Install pip launcher
copy:
src: runner
dest: /opt/bin/pip
mode: 0755
when: (need_pip | failed)
when: need_pip != 0
- name: Install required python modules
pip:

View File

@@ -21,9 +21,20 @@
- name: Gather nodes hostnames
setup:
gather_subset: '!all'
filter: ansible_hostname
filter: ansible_*
- name: Assign inventory name to unconfigured hostnames
- name: Assign inventory name to unconfigured hostnames (non-CoreOS)
hostname:
name: "{{inventory_hostname}}"
when: ansible_hostname == 'localhost'
when: ansible_os_family not in ['CoreOS', 'Container Linux by CoreOS']
- name: Assign inventory name to unconfigured hostnames (CoreOS only)
command: "hostnamectl set-hostname {{inventory_hostname}}"
register: hostname_changed
when: ansible_hostname == 'localhost' and ansible_os_family in ['CoreOS', 'Container Linux by CoreOS']
- name: Update hostname fact (CoreOS only)
setup:
gather_subset: '!all'
filter: ansible_hostname
when: ansible_os_family in ['CoreOS', 'Container Linux by CoreOS'] and hostname_changed.changed

View File

@@ -1,6 +1,4 @@
---
- include: pre_upgrade.yml
- name: ensure dnsmasq.d directory exists
file:
path: /etc/dnsmasq.d
@@ -56,6 +54,26 @@
dest: /etc/dnsmasq.d/01-kube-dns.conf
state: link
- name: Create dnsmasq RBAC manifests
template:
src: "{{ item }}"
dest: "{{ kube_config_dir }}/{{ item }}"
with_items:
- "dnsmasq-clusterrolebinding.yml"
- "dnsmasq-serviceaccount.yml"
when: rbac_enabled
delegate_to: "{{ groups['kube-master'][0] }}"
run_once: true
- name: Apply dnsmasq RBAC manifests
command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/{{ item }}"
with_items:
- "dnsmasq-clusterrolebinding.yml"
- "dnsmasq-serviceaccount.yml"
when: rbac_enabled
delegate_to: "{{ groups['kube-master'][0] }}"
run_once: true
- name: Create dnsmasq manifests
template:
src: "{{item.file}}"
@@ -63,7 +81,7 @@
with_items:
- {name: dnsmasq, file: dnsmasq-deploy.yml, type: deployment}
- {name: dnsmasq, file: dnsmasq-svc.yml, type: svc}
- {name: dnsmasq-autoscaler, file: dnsmasq-autoscaler.yml, type: deployment}
- {name: dnsmasq-autoscaler, file: dnsmasq-autoscaler.yml.j2, type: deployment}
register: manifests
delegate_to: "{{ groups['kube-master'][0] }}"
run_once: true
@@ -75,7 +93,7 @@
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.file}}"
state: "{{item.changed | ternary('latest','present') }}"
state: "latest"
with_items: "{{ manifests.results }}"
delegate_to: "{{ groups['kube-master'][0] }}"
run_once: true

View File

@@ -1,9 +0,0 @@
---
- name: Delete legacy dnsmasq daemonset
kube:
name: dnsmasq
namespace: "{{system_namespace}}"
kubectl: "{{bin_dir}}/kubectl"
resource: "ds"
state: absent
when: inventory_hostname == groups['kube-master'][0]

View File

@@ -31,6 +31,9 @@ spec:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
{% if rbac_enabled %}
serviceAccountName: dnsmasq
{% endif %}
tolerations:
- effect: NoSchedule
operator: Exists

View File

@@ -0,0 +1,14 @@
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: dnsmasq
namespace: "{{ system_namespace }}"
subjects:
- kind: ServiceAccount
name: dnsmasq
namespace: "{{ system_namespace}}"
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io

View File

@@ -57,7 +57,6 @@ spec:
mountPath: /etc/dnsmasq.d
- name: etcdnsmasqdavailable
mountPath: /etc/dnsmasq.d-available
volumes:
- name: etcdnsmasqd
hostPath:

View File

@@ -0,0 +1,8 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: dnsmasq
namespace: "{{ system_namespace }}"
labels:
kubernetes.io/cluster-service: "true"

View File

@@ -11,3 +11,8 @@ docker_repo_info:
repos:
docker_dns_servers_strict: yes
docker_container_storage_setup: false
docker_rh_repo_base_url: 'https://yum.dockerproject.org/repo/main/centos/7'
docker_rh_repo_gpgkey: 'https://yum.dockerproject.org/gpg'

View File

@@ -0,0 +1,15 @@
---
docker_container_storage_setup_version: v0.6.0
docker_container_storage_setup_profile_name: kubespray
docker_container_storage_setup_storage_driver: devicemapper
docker_container_storage_setup_container_thinpool: docker-pool
docker_container_storage_setup_data_size: 40%FREE
docker_container_storage_setup_min_data_size: 2G
docker_container_storage_setup_chunk_size: 512K
docker_container_storage_setup_growpart: false
docker_container_storage_setup_auto_extend_pool: yes
docker_container_storage_setup_pool_autoextend_threshold: 60
docker_container_storage_setup_pool_autoextend_percent: 20
docker_container_storage_setup_device_wait_timeout: 60
docker_container_storage_setup_wipe_signatures: false
docker_container_storage_setup_container_root_lv_size: 40%FREE

View File

@@ -0,0 +1,22 @@
#!/bin/sh
set -e
version=${1:-master}
profile_name=${2:-kubespray}
dir=`mktemp -d`
export GIT_DIR=$dir/.git
export GIT_WORK_TREE=$dir
git init
git fetch --depth 1 https://github.com/projectatomic/container-storage-setup.git $version
git merge FETCH_HEAD
make -C $dir install
rm -rf /var/lib/container-storage-setup/$profile_name $dir
set +e
/usr/bin/container-storage-setup create $profile_name /etc/sysconfig/docker-storage-setup && /usr/bin/container-storage-setup activate $profile_name
# FIXME: exit status can be 1 for both fatal and non fatal errors in current release,
# could be improved by matching error strings
exit 0
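Run outside of Ansible, the script takes the release tag and profile name as positional arguments; with the role defaults shown above, a manual invocation would look like this (requires git, make and root privileges):
sh install_container_storage_setup.sh v0.6.0 kubespray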

View File

@@ -0,0 +1,37 @@
---
- name: docker-storage-setup | install git and make
with_items: [git, make]
package:
pkg: "{{ item }}"
state: present
- name: docker-storage-setup | docker-storage-setup sysconfig template
template:
src: docker-storage-setup.j2
dest: /etc/sysconfig/docker-storage-setup
- name: docker-storage-override-directory | docker service storage-setup override dir
file:
dest: /etc/systemd/system/docker.service.d
mode: 0755
owner: root
group: root
state: directory
- name: docker-storage-override | docker service storage-setup override file
copy:
dest: /etc/systemd/system/docker.service.d/override.conf
content: |-
### This file is managed by Ansible
[Service]
EnvironmentFile=-/etc/sysconfig/docker-storage
owner: root
group: root
mode: 0644
- name: docker-storage-setup | install and run container-storage-setup
become: yes
script: install_container_storage_setup.sh {{ docker_container_storage_setup_version }} {{ docker_container_storage_setup_profile_name }}
notify: Docker | reload systemd

View File

@@ -0,0 +1,35 @@
{%if docker_container_storage_setup_storage_driver is defined%}STORAGE_DRIVER={{docker_container_storage_setup_storage_driver}}{%endif%}
{%if docker_container_storage_setup_extra_storage_options is defined%}EXTRA_STORAGE_OPTIONS={{docker_container_storage_setup_extra_storage_options}}{%endif%}
{%if docker_container_storage_setup_devs is defined%}DEVS={{docker_container_storage_setup_devs}}{%endif%}
{%if docker_container_storage_setup_container_thinpool is defined%}CONTAINER_THINPOOL={{docker_container_storage_setup_container_thinpool}}{%endif%}
{%if docker_container_storage_setup_vg is defined%}VG={{docker_container_storage_setup_vg}}{%endif%}
{%if docker_container_storage_setup_root_size is defined%}ROOT_SIZE={{docker_container_storage_setup_root_size}}{%endif%}
{%if docker_container_storage_setup_data_size is defined%}DATA_SIZE={{docker_container_storage_setup_data_size}}{%endif%}
{%if docker_container_storage_setup_min_data_size is defined%}MIN_DATA_SIZE={{docker_container_storage_setup_min_data_size}}{%endif%}
{%if docker_container_storage_setup_chunk_size is defined%}CHUNK_SIZE={{docker_container_storage_setup_chunk_size}}{%endif%}
{%if docker_container_storage_setup_growpart is defined%}GROWPART={{docker_container_storage_setup_growpart}}{%endif%}
{%if docker_container_storage_setup_auto_extend_pool is defined%}AUTO_EXTEND_POOL={{docker_container_storage_setup_auto_extend_pool}}{%endif%}
{%if docker_container_storage_setup_pool_autoextend_threshold is defined%}POOL_AUTOEXTEND_THRESHOLD={{docker_container_storage_setup_pool_autoextend_threshold}}{%endif%}
{%if docker_container_storage_setup_pool_autoextend_percent is defined%}POOL_AUTOEXTEND_PERCENT={{docker_container_storage_setup_pool_autoextend_percent}}{%endif%}
{%if docker_container_storage_setup_device_wait_timeout is defined%}DEVICE_WAIT_TIMEOUT={{docker_container_storage_setup_device_wait_timeout}}{%endif%}
{%if docker_container_storage_setup_wipe_signatures is defined%}WIPE_SIGNATURES={{docker_container_storage_setup_wipe_signatures}}{%endif%}
{%if docker_container_storage_setup_container_root_lv_name is defined%}CONTAINER_ROOT_LV_NAME={{docker_container_storage_setup_container_root_lv_name}}{%endif%}
{%if docker_container_storage_setup_container_root_lv_size is defined%}CONTAINER_ROOT_LV_SIZE={{docker_container_storage_setup_container_root_lv_size}}{%endif%}
{%if docker_container_storage_setup_container_root_lv_mount_path is defined%}CONTAINER_ROOT_LV_MOUNT_PATH={{docker_container_storage_setup_container_root_lv_mount_path}}{%endif%}
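As an illustration only, rendering this template with the role defaults listed earlier (variables left undefined emit nothing, and Jinja renders the YAML booleans as True/False) would produce roughly this /etc/sysconfig/docker-storage-setup:
STORAGE_DRIVER=devicemapper
CONTAINER_THINPOOL=docker-pool
DATA_SIZE=40%FREE
MIN_DATA_SIZE=2G
CHUNK_SIZE=512K
GROWPART=False
AUTO_EXTEND_POOL=True
POOL_AUTOEXTEND_THRESHOLD=60
POOL_AUTOEXTEND_PERCENT=20
DEVICE_WAIT_TIMEOUT=60
WIPE_SIGNATURES=False
CONTAINER_ROOT_LV_SIZE=40%FREE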

View File

@@ -0,0 +1,4 @@
---
dependencies:
- role: docker/docker-storage
when: docker_container_storage_setup and ansible_os_family == "RedHat"

View File

@@ -10,11 +10,18 @@
dest: /etc/systemd/system/docker.service.d/http-proxy.conf
when: http_proxy is defined or https_proxy is defined or no_proxy is defined
- name: get systemd version
command: rpm -q --qf '%{V}\n' systemd
register: systemd_version
when: ansible_os_family == "RedHat" and not is_atomic
changed_when: false
- name: Write docker.service systemd file
template:
src: docker.service.j2
dest: /etc/systemd/system/docker.service
register: docker_service_file
notify: restart docker
when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or is_atomic)
- name: Write docker.service systemd file for atomic

View File

@@ -24,7 +24,9 @@ ExecStart={{ docker_bin_dir }}/docker daemon \
$DOCKER_NETWORK_OPTIONS \
$DOCKER_DNS_OPTIONS \
$INSECURE_REGISTRY
{% if ansible_os_family == "RedHat" and systemd_version.stdout|int >= 226 %}
TasksMax=infinity
{% endif %}
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity

View File

@@ -1,7 +1,7 @@
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
baseurl={{ docker_rh_repo_base_url }}
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
gpgkey={{ docker_rh_repo_gpgkey }}
{% if http_proxy is defined %}proxy={{ http_proxy }}{% endif %}

View File

@@ -18,7 +18,9 @@ download_localhost: False
download_always_pull: False
# Versions
kube_version: v1.7.3
kube_version: v1.7.5
# Change to kube_version after v1.8.0 release
kubeadm_version: "v1.8.0-rc.1"
etcd_version: v3.2.4
# TODO(mattymo): Move calico versions to roles/network_plugins/calico/defaults
# after migration to container download
@@ -26,20 +28,18 @@ calico_version: "v2.5.0"
calico_ctl_version: "v1.5.0"
calico_cni_version: "v1.10.0"
calico_policy_version: "v0.7.0"
weave_version: 2.0.1
weave_version: 2.0.4
flannel_version: "v0.8.0"
flannel_cni_version: "v0.2.0"
pod_infra_version: 3.0
# Download URL's
etcd_download_url: "https://storage.googleapis.com/kargo/{{etcd_version}}_etcd"
# Download URLs
kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/amd64/kubeadm"
# Checksums
etcd_checksum: "385afd518f93e3005510b7aaa04d38ee4a39f06f5152cd33bb86d4f0c94c7485"
kubeadm_checksum: "8f6ceb26b8503bfc36a99574cf6f853be1c55405aa31669561608ad8099bf5bf"
# Containers
# Possible values: host, docker
etcd_deployment_type: "docker"
etcd_image_repo: "quay.io/coreos/etcd"
etcd_image_tag: "{{ etcd_version }}"
flannel_image_repo: "quay.io/coreos/flannel"
@@ -60,6 +60,8 @@ hyperkube_image_repo: "quay.io/coreos/hyperkube"
hyperkube_image_tag: "{{ kube_version }}_coreos.0"
pod_infra_image_repo: "gcr.io/google_containers/pause-amd64"
pod_infra_image_tag: "{{ pod_infra_version }}"
install_socat_image_repo: "xueshanf/install-socat"
install_socat_image_tag: "latest"
netcheck_version: "v1.0"
netcheck_agent_img_repo: "quay.io/l23network/k8s-netchecker-agent"
netcheck_agent_tag: "{{ netcheck_version }}"
@@ -118,18 +120,19 @@ downloads:
sha256: "{{ netcheck_agent_digest_checksum|default(None) }}"
enabled: "{{ deploy_netchecker|bool }}"
etcd:
version: "{{etcd_version}}"
dest: "etcd/etcd-{{ etcd_version }}-linux-amd64.tar.gz"
sha256: >-
{%- if etcd_deployment_type in [ 'docker', 'rkt' ] -%}{{etcd_digest_checksum|default(None)}}{%- else -%}{{etcd_checksum}}{%- endif -%}
source_url: "{{ etcd_download_url }}"
url: "{{ etcd_download_url }}"
unarchive: true
owner: "etcd"
mode: "0755"
container: "{{ etcd_deployment_type in [ 'docker', 'rkt' ] }}"
container: true
repo: "{{ etcd_image_repo }}"
tag: "{{ etcd_image_tag }}"
sha256: "{{ etcd_digest_checksum|default(None) }}"
kubeadm:
version: "{{ kubeadm_version }}"
dest: "kubeadm"
sha256: "{{ kubeadm_checksum }}"
source_url: "{{ kubeadm_download_url }}"
url: "{{ kubeadm_download_url }}"
unarchive: false
owner: "root"
mode: "0755"
hyperkube:
container: true
repo: "{{ hyperkube_image_repo }}"
@@ -194,6 +197,11 @@ downloads:
repo: "{{ pod_infra_image_repo }}"
tag: "{{ pod_infra_image_tag }}"
sha256: "{{ pod_infra_digest_checksum|default(None) }}"
install_socat:
container: true
repo: "{{ install_socat_image_repo }}"
tag: "{{ install_socat_image_tag }}"
sha256: "{{ install_socat_digest_checksum|default(None) }}"
nginx:
container: true
repo: "{{ nginx_image_repo }}"
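The new install_socat download entry is a plain image pull; with the repo and tag defaults shown above, the manual equivalent would be:
# Sketch only; the image comes from install_socat_image_repo / install_socat_image_tag
docker pull xueshanf/install-socat:latest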

View File

@@ -1,12 +1,5 @@
---
- name: downloading...
debug:
msg: "{{ download.url }}"
when:
- download.enabled|bool
- not download.container|bool
- name: Create dest directories
- name: file_download | Create dest directories
file:
path: "{{local_release_dir}}/{{download.dest|dirname}}"
state: directory
@@ -16,7 +9,7 @@
- not download.container|bool
tags: bootstrap-os
- name: Download items
- name: file_download | Download item
get_url:
url: "{{download.url}}"
dest: "{{local_release_dir}}/{{download.dest}}"
@@ -31,7 +24,7 @@
- download.enabled|bool
- not download.container|bool
- name: Extract archives
- name: file_download | Extract archives
unarchive:
src: "{{ local_release_dir }}/{{download.dest}}"
dest: "{{ local_release_dir }}/{{download.dest|dirname}}"
@@ -41,10 +34,9 @@
when:
- download.enabled|bool
- not download.container|bool
- download.unarchive is defined
- download.unarchive == True
- download.unarchive|default(False)
- name: Fix permissions
- name: file_download | Fix permissions
file:
state: file
path: "{{local_release_dir}}/{{download.dest}}"
@@ -56,10 +48,11 @@
- (download.unarchive is not defined or download.unarchive == False)
- set_fact:
download_delegate: "{% if download_localhost %}localhost{% else %}{{groups['kube-master'][0]}}{% endif %}"
download_delegate: "{% if download_localhost|bool %}localhost{% else %}{{groups['kube-master'][0]}}{% endif %}"
run_once: true
tags: facts
- name: Create dest directory for saved/loaded container images
- name: container_download | Create dest directory for saved/loaded container images
file:
path: "{{local_release_dir}}/containers"
state: directory
@@ -72,15 +65,14 @@
tags: bootstrap-os
# This is required for the download_localhost delegate to work smooth with Container Linux by CoreOS cluster nodes
- name: Hack python binary path for localhost
- name: container_download | Hack python binary path for localhost
raw: sh -c "mkdir -p /opt/bin; ln -sf /usr/bin/python /opt/bin/python"
when: download_delegate == 'localhost'
delegate_to: localhost
when: download_delegate == 'localhost'
failed_when: false
run_once: true
tags: localhost
- name: Download | create local directory for saved/loaded container images
- name: container_download | create local directory for saved/loaded container images
file:
path: "{{local_release_dir}}/containers"
state: directory
@@ -95,24 +87,16 @@
- download_delegate == 'localhost'
tags: localhost
- name: Make download decision if pull is required by tag or sha256
- name: container_download | Make download decision if pull is required by tag or sha256
include: set_docker_image_facts.yml
when:
- download.enabled|bool
- download.container|bool
delegate_to: "{{ download_delegate if download_run_once|bool else inventory_hostname }}"
delegate_to: "{{ download_delegate if download_run_once|bool or omit }}"
run_once: "{{ download_run_once|bool }}"
tags: facts
- name: pulling...
debug:
msg: "{{ pull_args }}"
when:
- download.enabled|bool
- download.container|bool
# NOTE(bogdando) this brings no docker-py deps for nodes
- name: Download containers if pull is required or told to always pull
- name: container_download | Download containers if pull is required or told to always pull
command: "{{ docker_bin_dir }}/docker pull {{ pull_args }}"
register: pull_task_result
until: pull_task_result|succeeded
@@ -122,29 +106,29 @@
- download.enabled|bool
- download.container|bool
- pull_required|bool|default(download_always_pull)
delegate_to: "{{ download_delegate if download_run_once|bool else inventory_hostname }}"
delegate_to: "{{ download_delegate if download_run_once|bool or omit }}"
run_once: "{{ download_run_once|bool }}"
- set_fact:
fname: "{{local_release_dir}}/containers/{{download.repo|regex_replace('/|\0|:', '_')}}:{{download.tag|default(download.sha256)|regex_replace('/|\0|:', '_')}}.tar"
run_once: true
tags: facts
- name: "Set default value for 'container_changed' to false"
- name: "container_download | Set default value for 'container_changed' to false"
set_fact:
container_changed: "{{pull_required|default(false)|bool}}"
- name: "Update the 'container_changed' fact"
- name: "container_download | Update the 'container_changed' fact"
set_fact:
container_changed: "{{ pull_required|bool|default(false) or not 'up to date' in pull_task_result.stdout }}"
when:
- download.enabled|bool
- download.container|bool
- pull_required|bool|default(download_always_pull)
delegate_to: "{{ download_delegate if download_run_once|bool else inventory_hostname }}"
run_once: "{{ download_run_once|bool }}"
tags: facts
- name: Stat saved container image
- name: container_download | Stat saved container image
stat:
path: "{{fname}}"
register: img
@@ -158,7 +142,7 @@
run_once: true
tags: facts
- name: Download | save container images
- name: container_download | save container images
shell: "{{ docker_bin_dir }}/docker save {{ pull_args }} | gzip -{{ download_compress }} > {{ fname }}"
delegate_to: "{{ download_delegate }}"
register: saved
@@ -170,7 +154,7 @@
- download.container|bool
- (container_changed|bool or not img.stat.exists)
- name: Download | copy container images to ansible host
- name: container_download | copy container images to ansible host
synchronize:
src: "{{ fname }}"
dest: "{{ fname }}"
@@ -186,7 +170,7 @@
- download.container|bool
- saved.changed
- name: Download | upload container images to nodes
- name: container_download | upload container images to nodes
synchronize:
src: "{{ fname }}"
dest: "{{ fname }}"
@@ -206,7 +190,7 @@
- download.container|bool
tags: [upload, upgrade]
- name: Download | load container images
- name: container_download | load container images
shell: "{{ docker_bin_dir }}/docker load < {{ fname }}"
when:
- (not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] and

View File

@@ -9,25 +9,22 @@
- name: Register docker images info
raw: >-
{{ docker_bin_dir }}/docker images -q | xargs {{ docker_bin_dir }}/docker inspect -f "{{ '{{' }} .RepoTags {{ '}}' }},{{ '{{' }} .RepoDigests {{ '}}' }}"
{{ docker_bin_dir }}/docker images -q | xargs {{ docker_bin_dir }}/docker inspect -f "{{ '{{' }} (index .RepoTags 0) {{ '}}' }},{{ '{{' }} (index .RepoDigests 0) {{ '}}' }}" | tr '\n' ','
no_log: true
register: docker_images_raw
register: docker_images
failed_when: false
changed_when: false
check_mode: no
when: not download_always_pull|bool
- set_fact:
docker_images: "{{docker_images_raw.stdout|regex_replace('\\[|\\]|\\n]','')|regex_replace('\\s',',')}}"
no_log: true
when: not download_always_pull|bool
- set_fact:
pull_required: >-
{%- if pull_args in docker_images.split(',') %}false{%- else -%}true{%- endif -%}
{%- if pull_args in docker_images.stdout.split(',') %}false{%- else -%}true{%- endif -%}
when: not download_always_pull|bool
- name: Check the local digest sha256 corresponds to the given image tag
assert:
that: "{{download.repo}}:{{download.tag}} in docker_images.split(',')"
that: "{{download.repo}}:{{download.tag}} in docker_images.stdout.split(',')"
when: not download_always_pull|bool and not pull_required|bool and pull_by_digest|bool
tags:
- asserts

View File

@@ -3,7 +3,6 @@
etcd_cluster_setup: true
etcd_backup_prefix: "/var/backups"
etcd_bin_dir: "{{ local_release_dir }}/etcd/etcd-{{ etcd_version }}-linux-amd64/"
etcd_data_dir: "/var/lib/etcd"
etcd_config_dir: /etc/ssl/etcd
@@ -23,6 +22,10 @@ etcd_memory_limit: 512M
# Uncomment to set CPU share for etcd
# etcd_cpu_limit: 300m
etcd_blkio_weight: 1000
etcd_node_cert_hosts: "{{ groups['k8s-cluster'] | union(groups.get('calico-rr', [])) }}"
etcd_compaction_retention: "8"
etcd_vault_mount_path: etcd

View File

@@ -5,6 +5,7 @@
- Refresh Time Fact
- Set Backup Directory
- Create Backup Directory
- Stat etcd v2 data directory
- Backup etcd v2 data
- Backup etcd v3 data
when: etcd_cluster_is_healthy.rc == 0
@@ -24,7 +25,13 @@
group: root
mode: 0600
- name: Stat etcd v2 data directory
stat:
path: "{{ etcd_data_dir }}/member"
register: etcd_data_dir_member
- name: Backup etcd v2 data
when: etcd_data_dir_member.stat.exists
command: >-
{{ bin_dir }}/etcdctl backup
--data-dir {{ etcd_data_dir }}

View File

@@ -115,7 +115,7 @@
# FIXME(mattymo): Use tempfile module in ansible 2.3
- name: Gen_certs | Prepare tempfile for unpacking certs
shell: mktemp /tmp/certsXXXXX.tar.gz
command: mktemp /tmp/certsXXXXX.tar.gz
register: cert_tempfile
when: inventory_hostname in groups['etcd'] and sync_certs|default(false) and
inventory_hostname != groups['etcd'][0]
@@ -161,30 +161,3 @@
owner: kube
mode: "u=rwX,g-rwx,o-rwx"
recurse: yes
- name: Gen_certs | target ca-certificate store file
set_fact:
ca_cert_path: |-
{% if ansible_os_family == "Debian" -%}
/usr/local/share/ca-certificates/etcd-ca.crt
{%- elif ansible_os_family == "RedHat" -%}
/etc/pki/ca-trust/source/anchors/etcd-ca.crt
{%- elif ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] -%}
/etc/ssl/certs/etcd-ca.pem
{%- endif %}
tags: facts
- name: Gen_certs | add CA to trusted CA dir
copy:
src: "{{ etcd_cert_dir }}/ca.pem"
dest: "{{ ca_cert_path }}"
remote_src: true
register: etcd_ca_cert
- name: Gen_certs | update ca-certificates (Debian/Ubuntu/Container Linux by CoreOS)
command: update-ca-certificates
when: etcd_ca_cert.changed and ansible_os_family in ["Debian", "CoreOS", "Container Linux by CoreOS"]
- name: Gen_certs | update ca-certificates (RedHat)
command: update-ca-trust extract
when: etcd_ca_cert.changed and ansible_os_family == "RedHat"

View File

@@ -7,51 +7,14 @@
when: inventory_hostname in etcd_node_cert_hosts
tags: etcd-secrets
- name: gen_certs_vault | Read in the local credentials
command: cat /etc/vault/roles/etcd/userpass
register: etcd_vault_creds_cat
delegate_to: "{{ groups['vault'][0] }}"
- name: gen_certs_vault | Set facts for read Vault Creds
set_fact:
etcd_vault_creds: "{{ etcd_vault_creds_cat.stdout|from_json }}"
delegate_to: "{{ groups['vault'][0] }}"
- name: gen_certs_vault | Log into Vault and obtain an token
uri:
url: "{{ hostvars[groups.vault|first]['vault_leader_url'] }}/v1/auth/userpass/login/{{ etcd_vault_creds.username }}"
headers:
Accept: application/json
Content-Type: application/json
method: POST
body_format: json
body:
password: "{{ etcd_vault_creds.password }}"
register: etcd_vault_login_result
delegate_to: "{{ groups['vault'][0] }}"
- name: gen_certs_vault | Set fact for vault_client_token
set_fact:
vault_client_token: "{{ etcd_vault_login_result.get('json', {}).get('auth', {}).get('client_token') }}"
run_once: true
- name: gen_certs_vault | Set fact for Vault API token
set_fact:
etcd_vault_headers:
Accept: application/json
Content-Type: application/json
X-Vault-Token: "{{ vault_client_token }}"
run_once: true
when: vault_client_token != ""
# Issue master certs to Etcd nodes
- include: ../../vault/tasks/shared/issue_cert.yml
vars:
issue_cert_common_name: "etcd:master:{{ item.rsplit('/', 1)[1].rsplit('.', 1)[0] }}"
issue_cert_alt_names: "{{ groups.etcd + ['localhost'] }}"
issue_cert_copy_ca: "{{ item == etcd_master_certs_needed|first }}"
issue_cert_file_group: "{{ etcd_cert_group }}"
issue_cert_file_owner: kube
issue_cert_headers: "{{ etcd_vault_headers }}"
issue_cert_hosts: "{{ groups.etcd }}"
issue_cert_ip_sans: >-
[
@@ -66,6 +29,7 @@
issue_cert_path: "{{ item }}"
issue_cert_role: etcd
issue_cert_url: "{{ hostvars[groups.vault|first]['vault_leader_url'] }}"
issue_cert_mount_path: "{{ etcd_vault_mount_path }}"
with_items: "{{ etcd_master_certs_needed|d([]) }}"
when: inventory_hostname in groups.etcd
notify: set etcd_secret_changed
@@ -73,11 +37,11 @@
# Issue node certs to everyone else
- include: ../../vault/tasks/shared/issue_cert.yml
vars:
issue_cert_common_name: "etcd:node:{{ item.rsplit('/', 1)[1].rsplit('.', 1)[0] }}"
issue_cert_alt_names: "{{ etcd_node_cert_hosts }}"
issue_cert_copy_ca: "{{ item == etcd_node_certs_needed|first }}"
issue_cert_file_group: "{{ etcd_cert_group }}"
issue_cert_file_owner: kube
issue_cert_headers: "{{ etcd_vault_headers }}"
issue_cert_hosts: "{{ etcd_node_cert_hosts }}"
issue_cert_ip_sans: >-
[
@@ -92,6 +56,7 @@
issue_cert_path: "{{ item }}"
issue_cert_role: etcd
issue_cert_url: "{{ hostvars[groups.vault|first]['vault_leader_url'] }}"
issue_cert_mount_path: "{{ etcd_vault_mount_path }}"
with_items: "{{ etcd_node_certs_needed|d([]) }}"
when: inventory_hostname in etcd_node_cert_hosts
notify: set etcd_secret_changed

View File

@@ -1,5 +1,4 @@
---
# Plan A: no docker-py deps
- name: Install | Copy etcdctl binary from docker container
command: sh -c "{{ docker_bin_dir }}/docker rm -f etcdctl-binarycopy;
{{ docker_bin_dir }}/docker create --name etcdctl-binarycopy {{ etcd_image_repo }}:{{ etcd_image_tag }} &&
@@ -11,22 +10,3 @@
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
changed_when: false
# Plan B: looks nicer, but requires docker-py on all hosts:
# - name: Install | Set up etcd-binarycopy container
# docker:
# name: etcd-binarycopy
# state: present
# image: "{{ etcd_image_repo }}:{{ etcd_image_tag }}"
# when: etcd_deployment_type == "docker"
#
# - name: Install | Copy etcdctl from etcd-binarycopy container
# command: /usr/bin/docker cp "etcd-binarycopy:{{ etcd_container_bin_dir }}etcdctl" "{{ bin_dir }}/etcdctl"
# when: etcd_deployment_type == "docker"
#
# - name: Install | Clean up etcd-binarycopy container
# docker:
# name: etcd-binarycopy
# state: absent
# image: "{{ etcd_image_repo }}:{{ etcd_image_tag }}"
# when: etcd_deployment_type == "docker"

View File

@@ -1,8 +1,4 @@
---
- include: pre_upgrade.yml
when: etcd_cluster_setup
tags: etcd-pre-upgrade
- include: check_certs.yml
when: cert_management == "script"
tags: [etcd-secrets, facts]
@@ -10,6 +6,14 @@
- include: "gen_certs_{{ cert_management }}.yml"
tags: etcd-secrets
- include: upd_ca_trust.yml
tags: etcd-secrets
- name: "Gen_certs | Get etcd certificate serials"
shell: "openssl x509 -in {{ etcd_cert_dir }}/node-{{ inventory_hostname }}.pem -noout -serial | cut -d= -f2"
register: "etcd_client_cert_serial"
when: inventory_hostname in groups['k8s-cluster']|union(groups['etcd'])|union(groups['calico-rr']|default([]))|unique|sort
- include: "install_{{ etcd_deployment_type }}.yml"
when: is_etcd_master
tags: upgrade

View File

@@ -1,60 +0,0 @@
---
- name: "Pre-upgrade | check for etcd-proxy unit file"
stat:
path: /etc/systemd/system/etcd-proxy.service
register: etcd_proxy_service_file
tags: facts
- name: "Pre-upgrade | check for etcd-proxy init script"
stat:
path: /etc/init.d/etcd-proxy
register: etcd_proxy_init_script
tags: facts
- name: "Pre-upgrade | stop etcd-proxy if service defined"
service:
name: etcd-proxy
state: stopped
when: (etcd_proxy_service_file.stat.exists|default(False) or etcd_proxy_init_script.stat.exists|default(False))
- name: "Pre-upgrade | remove etcd-proxy service definition"
file:
path: "{{ item }}"
state: absent
when: (etcd_proxy_service_file.stat.exists|default(False) or etcd_proxy_init_script.stat.exists|default(False))
with_items:
- /etc/systemd/system/etcd-proxy.service
- /etc/init.d/etcd-proxy
- name: "Pre-upgrade | find etcd-proxy container"
command: "{{ docker_bin_dir }}/docker ps -aq --filter 'name=etcd-proxy*'"
register: etcd_proxy_container
changed_when: false
failed_when: false
- name: "Pre-upgrade | remove etcd-proxy if it exists"
command: "{{ docker_bin_dir }}/docker rm -f {{item}}"
with_items: "{{etcd_proxy_container.stdout_lines}}"
- name: "Pre-upgrade | see if etcdctl is installed"
stat:
path: "{{ bin_dir }}/etcdctl"
register: etcdctl_installed
- name: "Pre-upgrade | check if member list is non-SSL"
command: "{{ bin_dir }}/etcdctl --no-sync --peers={{ etcd_access_addresses | regex_replace('https','http') }} member list"
register: etcd_member_list
retries: 10
delay: 3
until: etcd_member_list.rc != 2
run_once: true
when: etcdctl_installed.stat.exists
changed_when: false
failed_when: false
- name: "Pre-upgrade | change peer names to SSL"
shell: >-
{{ bin_dir }}/etcdctl --no-sync --peers={{ etcd_access_addresses | regex_replace('https','http') }} member list |
awk -F"[: =]" '{print "{{ bin_dir }}/etcdctl --peers={{ etcd_access_addresses | regex_replace('https','http') }} member update "$1" https:"$7":"$8}' | bash
run_once: true
when: 'etcdctl_installed.stat.exists and etcd_member_list.rc == 0 and "http://" in etcd_member_list.stdout'

View File

@@ -4,20 +4,17 @@
set_fact:
etcd_master_cert_list: >-
{{ etcd_master_cert_list|default([]) + [
"admin-" + item + ".pem",
"member-" + item + ".pem"
"admin-" + inventory_hostname + ".pem",
"member-" + inventory_hostname + ".pem"
] }}
with_items: "{{ groups.etcd }}"
run_once: true
- include: ../../vault/tasks/shared/sync_file.yml
vars:
sync_file: "{{ item }}"
sync_file_dir: "{{ etcd_cert_dir }}"
sync_file_hosts: "{{ groups.etcd }}"
sync_file_hosts: [ "{{ inventory_hostname }}" ]
sync_file_is_cert: true
with_items: "{{ etcd_master_cert_list|d([]) }}"
run_once: true
- name: sync_etcd_certs | Set facts for etcd sync_file results
set_fact:
@@ -33,8 +30,7 @@
vars:
sync_file: ca.pem
sync_file_dir: "{{ etcd_cert_dir }}"
sync_file_hosts: "{{ groups.etcd }}"
run_once: true
sync_file_hosts: [ "{{ inventory_hostname }}" ]
- name: sync_etcd_certs | Unset sync_file_results after ca.pem sync
set_fact:

View File

@@ -2,14 +2,13 @@
- name: sync_etcd_node_certs | Create list of node certs needing creation
set_fact:
etcd_node_cert_list: "{{ etcd_node_cert_list|default([]) + ['node-' + item + '.pem'] }}"
with_items: "{{ etcd_node_cert_hosts }}"
etcd_node_cert_list: "{{ etcd_node_cert_list|default([]) + ['node-' + inventory_hostname + '.pem'] }}"
- include: ../../vault/tasks/shared/sync_file.yml
vars:
sync_file: "{{ item }}"
sync_file_dir: "{{ etcd_cert_dir }}"
sync_file_hosts: "{{ etcd_node_cert_hosts }}"
sync_file_hosts: [ "{{ inventory_hostname }}" ]
sync_file_is_cert: true
with_items: "{{ etcd_node_cert_list|d([]) }}"
@@ -27,7 +26,7 @@
vars:
sync_file: ca.pem
sync_file_dir: "{{ etcd_cert_dir }}"
sync_file_hosts: "{{ etcd_node_cert_hosts }}"
sync_file_hosts: "{{ groups['etcd'] }}"
- name: sync_etcd_node_certs | Unset sync_file_results after ca.pem
set_fact:

View File

@@ -0,0 +1,27 @@
---
- name: Gen_certs | target ca-certificate store file
set_fact:
ca_cert_path: |-
{% if ansible_os_family == "Debian" -%}
/usr/local/share/ca-certificates/etcd-ca.crt
{%- elif ansible_os_family == "RedHat" -%}
/etc/pki/ca-trust/source/anchors/etcd-ca.crt
{%- elif ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] -%}
/etc/ssl/certs/etcd-ca.pem
{%- endif %}
tags: facts
- name: Gen_certs | add CA to trusted CA dir
copy:
src: "{{ etcd_cert_dir }}/ca.pem"
dest: "{{ ca_cert_path }}"
remote_src: true
register: etcd_ca_cert
- name: Gen_certs | update ca-certificates (Debian/Ubuntu/Container Linux by CoreOS)
command: update-ca-certificates
when: etcd_ca_cert.changed and ansible_os_family in ["Debian", "CoreOS", "Container Linux by CoreOS"]
- name: Gen_certs | update ca-certificates (RedHat)
command: update-ca-trust extract
when: etcd_ca_cert.changed and ansible_os_family == "RedHat"

View File

@@ -12,6 +12,9 @@
{% if etcd_cpu_limit is defined %}
--cpu-shares={{ etcd_cpu_limit|regex_replace('m', '') }} \
{% endif %}
{% if etcd_blkio_weight is defined %}
--blkio-weight={{ etcd_blkio_weight }} \
{% endif %}
--name={{ etcd_member_name | default("etcd") }} \
{{ etcd_image_repo }}:{{ etcd_image_tag }} \
{% if etcd_after_v3 %}
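For context, with the new etcd_blkio_weight default of 1000 and the etcd image settings from the download defaults, the generated start command gains roughly the following; a sketch only, docker_bin_dir is assumed to be /usr/bin and the env files, volume mounts and etcd arguments from the rest of the template are omitted:
/usr/bin/docker run --blkio-weight=1000 --name=etcd quay.io/coreos/etcd:v3.2.4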

View File

@@ -38,6 +38,17 @@ netchecker_server_memory_limit: 256M
netchecker_server_cpu_requests: 50m
netchecker_server_memory_requests: 64M
# Dashboard
dashboard_enabled: false
dashboard_image_repo: kubernetesdashboarddev/kubernetes-dashboard-amd64
dashboard_image_tag: head
# Limits for dashboard
dashboard_cpu_limit: 100m
dashboard_memory_limit: 256M
dashboard_cpu_requests: 50m
dashboard_memory_requests: 64M
# SSL
etcd_cert_dir: "/etc/ssl/etcd/ssl"
canal_cert_dir: "/etc/canal/certs"

View File

@@ -0,0 +1,20 @@
---
- name: Kubernetes Apps | Lay down dashboard template
template:
src: "{{item.file}}"
dest: "{{kube_config_dir}}/{{item.file}}"
with_items:
- {file: dashboard.yml.j2, type: deploy, name: kubernetes-dashboard}
register: manifests
when: inventory_hostname == groups['kube-master'][0]
- name: Kubernetes Apps | Start dashboard
kube:
name: "{{item.item.name}}"
namespace: "{{system_namespace}}"
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.file}}"
state: "latest"
with_items: "{{ manifests.results }}"
when: inventory_hostname == groups['kube-master'][0]

View File

@@ -1,21 +1,43 @@
---
- name: Kubernetes Apps | Wait for kube-apiserver
uri:
url: http://localhost:{{ kube_apiserver_insecure_port }}/healthz
url: "{{ kube_apiserver_insecure_endpoint }}/healthz"
register: result
until: result.status == 200
retries: 10
delay: 6
when: inventory_hostname == groups['kube-master'][0]
- name: Kubernetes Apps | Delete old kubedns resources
kube:
name: "kubedns"
namespace: "{{ system_namespace }}"
kubectl: "{{bin_dir}}/kubectl"
resource: "{{ item }}"
state: absent
with_items: ['deploy', 'svc']
tags: upgrade
- name: Kubernetes Apps | Delete kubeadm kubedns
kube:
name: "kubedns"
namespace: "{{ system_namespace }}"
kubectl: "{{bin_dir}}/kubectl"
resource: "deploy"
state: absent
when:
- kubeadm_enabled|default(false)
- kubeadm_init.changed|default(false)
- inventory_hostname == groups['kube-master'][0]
- name: Kubernetes Apps | Lay Down KubeDNS Template
template:
src: "{{item.file}}"
dest: "{{kube_config_dir}}/{{item.file}}"
with_items:
- {name: kubedns, file: kubedns-sa.yml, type: sa}
- {name: kubedns, file: kubedns-deploy.yml.j2, type: deployment}
- {name: kubedns, file: kubedns-svc.yml, type: svc}
- {name: kube-dns, file: kubedns-sa.yml, type: sa}
- {name: kube-dns, file: kubedns-deploy.yml.j2, type: deployment}
- {name: kube-dns, file: kubedns-svc.yml, type: svc}
- {name: kubedns-autoscaler, file: kubedns-autoscaler-sa.yml, type: sa}
- {name: kubedns-autoscaler, file: kubedns-autoscaler-clusterrole.yml, type: clusterrole}
- {name: kubedns-autoscaler, file: kubedns-autoscaler-clusterrolebinding.yml, type: clusterrolebinding}
@@ -51,13 +73,20 @@
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.file}}"
state: "{{item.changed | ternary('latest','present') }}"
state: "latest"
with_items: "{{ manifests.results }}"
failed_when: manifests|failed and "Error from server (AlreadyExists)" not in manifests.msg
when: dns_mode != 'none' and inventory_hostname == groups['kube-master'][0]
when:
- dns_mode != 'none'
- inventory_hostname == groups['kube-master'][0]
- not item|skipped
tags: dnsmasq
- name: Kubernetes Apps | Netchecker
include: tasks/netchecker.yml
when: deploy_netchecker
tags: netchecker
- name: Kubernetes Apps | Dashboard
include: tasks/dashboard.yml
when: dashboard_enabled
tags: dashboard

View File

@@ -1,4 +1,22 @@
---
- name: Kubernetes Apps | Check if netchecker-server manifest already exists
stat:
path: "{{ kube_config_dir }}/netchecker-server-deployment.yml.j2"
register: netchecker_server_manifest
tags: ['facts', 'upgrade']
- name: Kubernetes Apps | Apply netchecker-server manifest to update annotations
kube:
name: "netchecker-server"
namespace: "{{ netcheck_namespace }}"
filename: "{{ netchecker_server_manifest.stat.path }}"
kubectl: "{{bin_dir}}/kubectl"
resource: "deploy"
state: latest
when: inventory_hostname == groups['kube-master'][0] and netchecker_server_manifest.stat.exists
tags: upgrade
- name: Kubernetes Apps | Lay Down Netchecker Template
template:
src: "{{item.file}}"
@@ -25,18 +43,6 @@
state: absent
when: inventory_hostname == groups['kube-master'][0]
# FIXME: remove if kubernetes/features#124 is implemented
- name: Kubernetes Apps | Purge old Netchecker daemonsets
kube:
name: "{{item.item.name}}"
namespace: "{{netcheck_namespace}}"
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.file}}"
state: absent
with_items: "{{ manifests.results }}"
when: inventory_hostname == groups['kube-master'][0] and item.item.type == "ds" and item.changed
- name: Kubernetes Apps | Start Netchecker Resources
kube:
name: "{{item.item.name}}"
@@ -44,7 +50,6 @@
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.file}}"
state: "{{item.changed | ternary('latest','present') }}"
state: "latest"
with_items: "{{ manifests.results }}"
failed_when: manifests|failed and "Error from server (AlreadyExists)" not in manifests.msg
when: inventory_hostname == groups['kube-master'][0]
when: inventory_hostname == groups['kube-master'][0] and not item|skipped

View File

@@ -0,0 +1,110 @@
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy head version of the Dashboard UI compatible with
# Kubernetes 1.6 (RBAC enabled).
#
# Example usage: kubectl create -f <this_file>
{% if rbac_enabled %}
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: {{ system_namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: {{ system_namespace }}
{% endif %}
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: {{ system_namespace }}
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: {{ dashboard_image_repo }}:{{ dashboard_image_tag }}
# Image is tagged and updated with :head, so always pull it.
imagePullPolicy: Always
resources:
limits:
cpu: {{ dashboard_cpu_limit }}
memory: {{ dashboard_memory_limit }}
requests:
cpu: {{ dashboard_cpu_requests }}
memory: {{ dashboard_memory_requests }}
ports:
- containerPort: 9090
protocol: TCP
args:
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
{% if rbac_enabled %}
serviceAccountName: kubernetes-dashboard
{% endif %}
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: {{ system_namespace }}
spec:
ports:
- port: 80
targetPort: 9090
selector:
k8s-app: kubernetes-dashboard

View File

@@ -27,17 +27,13 @@ spec:
metadata:
labels:
k8s-app: kubedns-autoscaler
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
containers:
- name: autoscaler
image: "{{ kubednsautoscaler_image_repo }}:{{ kubednsautoscaler_image_tag }}"
tolerations:
- effect: NoSchedule
operator: Exists
- key: CriticalAddonsOnly
operator: Exists
containers:
- name: autoscaler
image: "{{ kubednsautoscaler_image_repo }}:{{ kubednsautoscaler_image_tag }}"
resources:
requests:
cpu: "20m"

View File

@@ -40,3 +40,7 @@ spec:
requests:
cpu: {{ netchecker_agent_cpu_requests }}
memory: {{ netchecker_agent_memory_requests }}
updateStrategy:
rollingUpdate:
maxUnavailable: 100%
type: RollingUpdate

View File

@@ -44,3 +44,7 @@ spec:
requests:
cpu: {{ netchecker_agent_cpu_requests }}
memory: {{ netchecker_agent_memory_requests }}
updateStrategy:
rollingUpdate:
maxUnavailable: 100%
type: RollingUpdate

View File

@@ -25,12 +25,14 @@ spec:
memory: {{ netchecker_server_memory_requests }}
ports:
- containerPort: 8081
hostPort: 8081
args:
- "-v=5"
- "-logtostderr"
- "-kubeproxyinit"
- "-endpoint=0.0.0.0:8081"
tolerations:
- effect: NoSchedule
operator: Exists
{% if rbac_enabled %}
serviceAccountName: netchecker-server
{% endif %}

View File

@@ -10,7 +10,7 @@
when: rbac_enabled
- name: "ElasticSearch | Create Serviceaccount and Clusterrolebinding (RBAC)"
command: "kubectl apply -f {{ kube_config_dir }}/{{ item }} -n {{ system_namespace }}"
command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/{{ item }} -n {{ system_namespace }}"
with_items:
- "efk-sa.yml"
- "efk-clusterrolebinding.yml"

View File

@@ -58,4 +58,3 @@ spec:
{% if rbac_enabled %}
serviceAccountName: efk
{% endif %}

View File

@@ -12,7 +12,7 @@
name: "kibana-logging"
namespace: "{{system_namespace}}"
resource: "deployment"
state: "{{ item | ternary('latest','present') }}"
state: "latest"
with_items: "{{ kibana_deployment_manifest.changed }}"
run_once: true
@@ -29,6 +29,6 @@
name: "kibana-logging"
namespace: "{{system_namespace}}"
resource: "svc"
state: "{{ item | ternary('latest','present') }}"
state: "latest"
with_items: "{{ kibana_service_manifest.changed }}"
run_once: true

View File

@@ -27,9 +27,8 @@
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.file}}"
state: "{{item.changed | ternary('latest','present') }}"
state: "latest"
with_items: "{{ manifests.results }}"
failed_when: manifests|failed and "Error from server (AlreadyExists)" not in manifests.msg
when: dns_mode != 'none' and inventory_hostname == groups['kube-master'][0] and rbac_enabled
- name: Helm | Install/upgrade helm

View File

@@ -0,0 +1,11 @@
---
- name: Start Calico resources
kube:
name: "{{item.item.name}}"
namespace: "{{ system_namespace }}"
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.file}}"
state: "latest"
with_items: "{{ calico_node_manifests.results }}"
when: inventory_hostname == groups['kube-master'][0] and not item|skipped

View File

@@ -1,20 +1,11 @@
---
- name: Create canal ConfigMap
run_once: true
- name: Canal | Start Resources
kube:
name: "canal-config"
kubectl: "{{bin_dir}}/kubectl"
filename: "{{kube_config_dir}}/canal-config.yaml"
resource: "configmap"
name: "{{item.item.name}}"
namespace: "{{ system_namespace }}"
- name: Start flannel and calico-node
run_once: true
kube:
name: "canal-node"
kubectl: "{{bin_dir}}/kubectl"
filename: "{{kube_config_dir}}/canal-node.yaml"
resource: "ds"
namespace: "{{system_namespace}}"
state: "{{ item | ternary('latest','present') }}"
with_items: "{{ canal_node_manifest.changed }}"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.file}}"
state: "latest"
with_items: "{{ canal_manifests.results }}"
when: inventory_hostname == groups['kube-master'][0] and not item|skipped

View File

@@ -11,7 +11,7 @@
filename: "{{ kube_config_dir }}/cni-flannel.yml"
resource: "ds"
namespace: "{{system_namespace}}"
state: "{{ item | ternary('latest','present') }}"
state: "latest"
with_items: "{{ flannel_manifest.changed }}"
when: inventory_hostname == groups['kube-master'][0]

View File

@@ -1,5 +1,8 @@
---
dependencies:
- role: kubernetes-apps/network_plugin/calico
when: kube_network_plugin == 'calico'
tags: calico
- role: kubernetes-apps/network_plugin/canal
when: kube_network_plugin == 'canal'
tags: canal

View File

@@ -1,15 +1,4 @@
---
# FIXME: remove if kubernetes/features#124 is implemented
- name: Weave | Purge old weave daemonset
kube:
name: "weave-net"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/weave-net.yml"
resource: "ds"
namespace: "{{system_namespace}}"
state: absent
when: inventory_hostname == groups['kube-master'][0] and weave_manifest.changed
- name: Weave | Start Resources
kube:
name: "weave-net"
@@ -17,8 +6,7 @@
filename: "{{ kube_config_dir }}/weave-net.yml"
resource: "ds"
namespace: "{{system_namespace}}"
state: "{{ item | ternary('latest','present') }}"
with_items: "{{ weave_manifest.changed }}"
state: "latest"
when: inventory_hostname == groups['kube-master'][0]
- name: "Weave | wait for weave to become available"

View File

@@ -8,3 +8,8 @@ calico_policy_controller_memory_requests: 64M
# SSL
calico_cert_dir: "/etc/calico/certs"
canal_cert_dir: "/etc/canal/certs"
rbac_resources:
- sa
- clusterrole
- clusterrolebinding

View File

@@ -1,22 +1,49 @@
---
- set_fact:
- name: Set cert dir
set_fact:
calico_cert_dir: "{{ canal_cert_dir }}"
when: kube_network_plugin == 'canal'
tags: [facts, canal]
- name: Write calico-policy-controller yaml
- name: Get calico-policy-controller version if running
shell: "{{ bin_dir }}/kubectl -n {{ system_namespace }} get rs calico-policy-controller -o=jsonpath='{$.spec.template.spec.containers[:1].image}' | cut -d':' -f2"
register: existing_calico_policy_version
run_once: true
failed_when: false
# FIXME(mattymo): This should not be necessary
- name: Delete calico-policy-controller if an old one is installed
kube:
name: calico-policy-controller
kubectl: "{{bin_dir}}/kubectl"
resource: rs
namespace: "{{ system_namespace }}"
state: absent
run_once: true
when:
- not "NotFound" in existing_calico_policy_version.stderr
- existing_calico_policy_version.stdout | version_compare('v0.7.0', '<')
- name: Create calico-policy-controller manifests
template:
src: calico-policy-controller.yml.j2
dest: "{{kube_config_dir}}/calico-policy-controller.yml"
when: inventory_hostname == groups['kube-master'][0]
tags: canal
src: "{{item.file}}.j2"
dest: "{{kube_config_dir}}/{{item.file}}"
with_items:
- {name: calico-policy-controller, file: calico-policy-controller.yml, type: rs}
- {name: calico-policy-controller, file: calico-policy-sa.yml, type: sa}
- {name: calico-policy-controller, file: calico-policy-cr.yml, type: clusterrole}
- {name: calico-policy-controller, file: calico-policy-crb.yml, type: clusterrolebinding}
register: calico_policy_manifests
when:
- rbac_enabled or item.type not in rbac_resources
- name: Start of Calico policy controller
kube:
name: "calico-policy-controller"
kubectl: "{{bin_dir}}/kubectl"
filename: "{{kube_config_dir}}/calico-policy-controller.yml"
name: "{{item.item.name}}"
namespace: "{{ system_namespace }}"
resource: "rs"
when: inventory_hostname == groups['kube-master'][0]
tags: canal
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.file}}"
state: "latest"
with_items: "{{ calico_policy_manifests.results }}"
when: inventory_hostname == groups['kube-master'][0] and not item|skipped

View File

@@ -21,6 +21,9 @@ spec:
k8s-app: calico-policy
spec:
hostNetwork: true
{% if rbac_enabled %}
serviceAccountName: calico-policy-controller
{% endif %}
tolerations:
- effect: NoSchedule
operator: Exists

View File

@@ -0,0 +1,17 @@
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: calico-policy-controller
namespace: {{ system_namespace }}
rules:
- apiGroups:
- ""
- extensions
resources:
- pods
- namespaces
- networkpolicies
verbs:
- watch
- list

View File

@@ -0,0 +1,13 @@
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: calico-policy-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-policy-controller
subjects:
- kind: ServiceAccount
name: calico-policy-controller
namespace: {{ system_namespace }}

View File

@@ -0,0 +1,8 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-policy-controller
namespace: {{ system_namespace }}
labels:
kubernetes.io/cluster-service: "true"

View File

@@ -0,0 +1,37 @@
---
- name: Rotate Tokens | Test if default certificate is expired
shell: >-
kubectl run -i test-rotate-tokens
--image={{ hyperkube_image_repo }}:{{ hyperkube_image_tag }}
--restart=Never --rm
kubectl get nodes
register: check_secret
failed_when: false
run_once: true
- name: Rotate Tokens | Determine if certificate is expired
set_fact:
needs_rotation: '{{ "You must be logged in" in check_secret.stderr }}'
# FIXME(mattymo): Exclude built in secrets that were automatically rotated,
# instead of filtering manually
- name: Rotate Tokens | Get all serviceaccount tokens to expire
shell: >-
{{ bin_dir }}/kubectl get secrets --all-namespaces
-o 'jsonpath={range .items[*]}{"\n"}{.metadata.namespace}{" "}{.metadata.name}{" "}{.type}{end}'
| grep kubernetes.io/service-account-token
| egrep 'default-token|kube-proxy|kube-dns|dnsmasq|netchecker|weave|calico|canal|flannel|dashboard|cluster-proportional-autoscaler|efk|tiller'
register: tokens_to_delete
run_once: true
when: needs_rotation
- name: Rotate Tokens | Delete expired tokens
command: "{{ bin_dir }}/kubectl delete secrets -n {{ item.split(' ')[0] }} {{ item.split(' ')[1] }}"
with_items: "{{ tokens_to_delete.stdout_lines }}"
run_once: true
when: needs_rotation
- name: Rotate Tokens | Delete pods in system namespace
command: "{{ bin_dir }}/kubectl delete pods -n {{ system_namespace }} --all"
run_once: true
when: needs_rotation

View File

@@ -0,0 +1,7 @@
---
kubeconfig_localhost: false
kubectl_localhost: false
artifacts_dir: "./artifacts"
kube_config_dir: "/etc/kubernetes"
kube_apiserver_port: "6443"

View File

@@ -0,0 +1,66 @@
---
- name: Set first kube master
set_fact:
first_kube_master: "{{ hostvars[groups['kube-master'][0]]['access_ip'] | default(hostvars[groups['kube-master'][0]]['ip'] | default(hostvars[groups['kube-master'][0]]['ansible_default_ipv4']['address'])) }}"
- name: Set external kube-apiserver endpoint
set_fact:
external_apiserver_endpoint: >-
{%- if loadbalancer_apiserver is defined and loadbalancer_apiserver.port is defined -%}
https://{{ apiserver_loadbalancer_domain_name|default('lb-apiserver.kubernetes.local') }}:{{ loadbalancer_apiserver.port|default(kube_apiserver_port) }}
{%- else -%}
https://{{ first_kube_master }}:{{ kube_apiserver_port }}
{%- endif -%}
tags: facts
- name: Gather certs for admin kubeconfig
slurp:
src: "{{ item }}"
delegate_to: "{{ groups['kube-master'][0] }}"
delegate_facts: no
register: admin_certs
with_items:
- "{{ kube_cert_dir }}/ca.pem"
- "{{ kube_cert_dir }}/admin-{{ inventory_hostname }}.pem"
- "{{ kube_cert_dir }}/admin-{{ inventory_hostname }}-key.pem"
when: not kubeadm_enabled|d(false)|bool
- name: Write admin kubeconfig
template:
src: admin.conf.j2
dest: "{{ kube_config_dir }}/admin.conf"
when: not kubeadm_enabled|d(false)|bool
- name: Create kube config dir
file:
path: "/root/.kube"
mode: "0700"
state: directory
- name: Copy admin kubeconfig to root user home
copy:
src: "{{ kube_config_dir }}/admin.conf"
dest: "/root/.kube/config"
remote_src: yes
mode: "0700"
backup: yes
- name: Copy admin kubeconfig to ansible host
fetch:
src: "{{ kube_config_dir }}/admin.conf"
dest: "{{ artifacts_dir }}/admin.conf"
flat: yes
validate_checksum: no
become: no
run_once: yes
when: kubeconfig_localhost|default(false)
- name: Copy kubectl binary to ansible host
fetch:
src: "{{ bin_dir }}/kubectl"
dest: "{{ artifacts_dir }}/kubectl"
flat: yes
validate_checksum: no
become: no
run_once: yes
when: kubectl_localhost|default(false)
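When kubeconfig_localhost and kubectl_localhost are enabled, the fetched artifacts can be used directly from the Ansible host; with the default artifacts_dir this looks like:
# The fetched kubectl binary may need its exec bit restored first
chmod +x ./artifacts/kubectl
./artifacts/kubectl --kubeconfig ./artifacts/admin.conf get nodes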

View File

@@ -0,0 +1,19 @@
apiVersion: v1
kind: Config
current-context: admin-{{ cluster_name }}
preferences: {}
clusters:
- cluster:
certificate-authority-data: {{ admin_certs.results[0]['content'] }}
server: {{ external_apiserver_endpoint }}
name: {{ cluster_name }}
contexts:
- context:
cluster: {{ cluster_name }}
user: admin-{{ cluster_name }}
name: admin-{{ cluster_name }}
users:
- name: admin-{{ cluster_name }}
user:
client-certificate-data: {{ admin_certs.results[1]['content'] }}
client-key-data: {{ admin_certs.results[2]['content'] }}

View File

@@ -0,0 +1,52 @@
---
- name: Set kubeadm_discovery_address
set_fact:
kubeadm_discovery_address: >-
{%- if "127.0.0.1" in kube_apiserver_endpoint or "localhost" in kube_apiserver_endpoint -%}
{{ first_kube_master }}:{{ kube_apiserver_port }}
{%- else -%}
{{ kube_apiserver_endpoint }}
{%- endif %}
when: not is_kube_master
tags: facts
- name: Check if kubelet.conf exists
stat:
path: "{{ kube_config_dir }}/kubelet.conf"
register: kubelet_conf
- name: Create kubeadm client config
template:
src: kubeadm-client.conf.j2
dest: "{{ kube_config_dir }}/kubeadm-client.conf"
backup: yes
when: not is_kube_master
register: kubeadm_client_conf
- name: Join to cluster if needed
command: "{{ bin_dir }}/kubeadm join --config {{ kube_config_dir}}/kubeadm-client.conf --skip-preflight-checks"
register: kubeadm_join
when: not is_kube_master and (kubeadm_client_conf.changed or not kubelet_conf.stat.exists)
- name: Wait for kubelet bootstrap to create config
wait_for:
path: "{{ kube_config_dir }}/kubelet.conf"
delay: 1
timeout: 60
- name: Update server field in kubelet kubeconfig
replace:
path: "{{ kube_config_dir }}/kubelet.conf"
regexp: '(\s+){{ first_kube_master }}:{{ kube_apiserver_port }}(\s+.*)?$'
replace: '\1{{ kube_apiserver_endpoint }}\2'
backup: yes
when: not is_kube_master and kubeadm_discovery_address != kube_apiserver_endpoint
# FIXME(mattymo): Reconcile kubelet kubeconfig filename for both deploy modes
- name: Symlink kubelet kubeconfig for calico/canal
file:
src: "{{ kube_config_dir }}/kubelet.conf"
dest: "{{ kube_config_dir }}/node-kubeconfig.yaml"
state: link
force: yes
when: kube_network_plugin in ['calico','canal']

View File

@@ -0,0 +1,6 @@
apiVersion: kubeadm.k8s.io/v1alpha1
kind: NodeConfiguration
caCertPath: {{ kube_config_dir }}/ssl/ca.crt
token: {{ kubeadm_token }}
discoveryTokenAPIServers:
- {{ kubeadm_discovery_address | replace("https://", "")}}
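Rendered and passed to the join task above, this amounts to roughly the following on a non-master node (bin_dir assumed to be /usr/local/bin, kube_config_dir left at its /etc/kubernetes default):
/usr/local/bin/kubeadm join --config /etc/kubernetes/kubeadm-client.conf --skip-preflight-checks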

View File

@@ -66,3 +66,7 @@ apiserver_custom_flags: []
controller_mgr_custom_flags: []
scheduler_custom_flags: []
# kubeadm settings
# Value of 0 means it never expires
kubeadm_token_ttl: 0

View File

@@ -39,8 +39,12 @@
- name: Master | wait for the apiserver to be running
uri:
url: http://localhost:{{ kube_apiserver_insecure_port }}/healthz
url: "{{ kube_apiserver_insecure_endpoint }}/healthz"
register: result
until: result.status == 200
retries: 20
delay: 6
- name: Master | set secret_changed
set_fact:
secret_changed: true

View File

@@ -0,0 +1,3 @@
---
- name: kubeadm | Purge old certs
shell: "rm -f {{ kube_cert_dir }}/*.pem"

View File

@@ -0,0 +1,12 @@
---
- name: Copy old certs to the kubeadm expected path
copy:
src: "{{ kube_cert_dir }}/{{ item.src }}"
dest: "{{ kube_cert_dir }}/{{ item.dest }}"
remote_src: yes
with_items:
- {src: apiserver.pem, dest: apiserver.crt}
- {src: apiserver-key.pem, dest: apiserver.key}
- {src: ca.pem, dest: ca.crt}
- {src: ca-key.pem, dest: ca.key}
register: kubeadm_copy_old_certs

View File

@@ -0,0 +1,161 @@
---
- name: kubeadm | Check if old apiserver cert exists on host
stat:
path: "{{ kube_cert_dir }}/apiserver.pem"
register: old_apiserver_cert
delegate_to: "{{groups['kube-master']|first}}"
run_once: true
- name: kubeadm | Check service account key
stat:
path: "{{ kube_cert_dir }}/sa.key"
register: sa_key_before
delegate_to: "{{groups['kube-master']|first}}"
run_once: true
- name: kubeadm | Check if kubeadm has already run
stat:
path: "{{ kube_config_dir }}/admin.conf"
register: admin_conf
- name: kubeadm | Delete old static pods
file:
path: "{{ kube_config_dir }}/manifests/{{item}}.manifest"
state: absent
with_items: ["kube-apiserver", "kube-controller-manager", "kube-scheduler", "kube-proxy"]
when: old_apiserver_cert.stat.exists
- name: kubeadm | Forcefully delete old static pods
shell: "docker ps -f name=k8s_{{item}} -q | xargs --no-run-if-empty docker rm -f"
with_items: ["kube-apiserver", "kube-controller-manager", "kube-scheduler"]
when: old_apiserver_cert.stat.exists
- name: kubeadm | aggregate all SANs
set_fact:
apiserver_sans: >-
kubernetes
kubernetes.default
kubernetes.default.svc
kubernetes.default.svc.{{ dns_domain }}
{{ kube_apiserver_ip }}
localhost
127.0.0.1
{{ ' '.join(groups['kube-master']) }}
{%- if loadbalancer_apiserver is defined and apiserver_loadbalancer_domain_name is defined %}
{{ apiserver_loadbalancer_domain_name }}
{%- endif %}
{%- for host in groups['kube-master'] -%}
{%- if hostvars[host]['access_ip'] is defined %}{{ hostvars[host]['access_ip'] }}{% endif %}
{{ hostvars[host]['ip'] | default(hostvars[host]['ansible_default_ipv4']['address']) }}
{%- endfor %}
tags: facts
- name: kubeadm | Copy etcd cert dir under k8s cert dir
command: "cp -TR {{ etcd_cert_dir }} {{ kube_config_dir }}/ssl/etcd"
changed_when: false
- name: kubeadm | Create kubeadm config
template:
src: kubeadm-config.yaml.j2
dest: "{{ kube_config_dir }}/kubeadm-config.yaml"
register: kubeadm_config
- name: kubeadm | Initialize first master
command: timeout -k 240s 240s {{ bin_dir }}/kubeadm init --config={{ kube_config_dir }}/kubeadm-config.yaml --skip-preflight-checks
register: kubeadm_init
# Retry is because upload config sometimes fails
retries: 3
when: inventory_hostname == groups['kube-master']|first and not admin_conf.stat.exists
failed_when: kubeadm_init.rc != 0 and "field is immutable" not in kubeadm_init.stderr
notify: Master | restart kubelet
- name: kubeadm | Upgrade first master
command: >-
timeout -k 240s 240s
{{ bin_dir }}/kubeadm
upgrade apply -y {{ kube_version }}
--config={{ kube_config_dir }}/kubeadm-config.yaml
--skip-preflight-checks
--allow-experimental-upgrades
--allow-release-candidate-upgrades
register: kubeadm_upgrade
# Retry is because upload config sometimes fails
retries: 3
when: inventory_hostname == groups['kube-master']|first and (kubeadm_config.changed and admin_conf.stat.exists)
failed_when: kubeadm_upgrade.rc != 0 and "field is immutable" not in kubeadm_upgrade.stderr
notify: Master | restart kubelet
# FIXME(mattymo): remove when https://github.com/kubernetes/kubeadm/issues/433 is fixed
- name: kubeadm | Enable kube-proxy
command: "{{ bin_dir }}/kubeadm alpha phase addon kube-proxy --config={{ kube_config_dir }}/kubeadm-config.yaml"
when: inventory_hostname == groups['kube-master']|first
changed_when: false
- name: slurp kubeadm certs
slurp:
src: "{{ item }}"
with_items:
- "{{ kube_cert_dir }}/apiserver.crt"
- "{{ kube_cert_dir }}/apiserver.key"
- "{{ kube_cert_dir }}/apiserver-kubelet-client.crt"
- "{{ kube_cert_dir }}/apiserver-kubelet-client.key"
- "{{ kube_cert_dir }}/ca.crt"
- "{{ kube_cert_dir }}/ca.key"
- "{{ kube_cert_dir }}/front-proxy-ca.crt"
- "{{ kube_cert_dir }}/front-proxy-ca.key"
- "{{ kube_cert_dir }}/front-proxy-client.crt"
- "{{ kube_cert_dir }}/front-proxy-client.key"
- "{{ kube_cert_dir }}/sa.key"
- "{{ kube_cert_dir }}/sa.pub"
register: kubeadm_certs
delegate_to: "{{ groups['kube-master']|first }}"
run_once: true
- name: kubeadm | write out kubeadm certs
copy:
dest: "{{ item.item }}"
content: "{{ item.content | b64decode }}"
owner: root
group: root
mode: 0700
no_log: true
register: copy_kubeadm_certs
with_items: "{{ kubeadm_certs.results }}"
when: inventory_hostname != groups['kube-master']|first
- name: kubeadm | Init other uninitialized masters
command: timeout -k 240s 240s {{ bin_dir }}/kubeadm init --config={{ kube_config_dir }}/kubeadm-config.yaml --skip-preflight-checks
register: kubeadm_init
when: inventory_hostname != groups['kube-master']|first and not admin_conf.stat.exists
failed_when: kubeadm_init.rc != 0 and "field is immutable" not in kubeadm_init.stderr
notify: Master | restart kubelet
- name: kubeadm | Upgrade other masters
command: >-
timeout -k 240s 240s
{{ bin_dir }}/kubeadm
upgrade apply -y {{ kube_version }}
--config={{ kube_config_dir }}/kubeadm-config.yaml
--skip-preflight-checks
--allow-experimental-upgrades
--allow-release-candidate-upgrades
register: kubeadm_upgrade
when: inventory_hostname != groups['kube-master']|first and (kubeadm_config.changed and admin_conf.stat.exists)
failed_when: kubeadm_upgrade.rc != 0 and "field is immutable" not in kubeadm_upgrade.stderr
notify: Master | restart kubelet
- name: kubeadm | Check service account key again
stat:
path: "{{ kube_cert_dir }}/sa.key"
register: sa_key_after
delegate_to: "{{groups['kube-master']|first}}"
run_once: true
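# Compare the sa.key checksum with the value captured before init/upgrade and
# notify the secret_changed handler if it differs.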
- name: kubeadm | Set secret_changed if service account key was updated
command: /bin/true
notify: Master | set secret_changed
when: sa_key_before.stat.checksum|default("") != sa_key_after.stat.checksum|default("")
- name: kubeadm | cleanup old certs if necessary
include: kubeadm-cleanup-old-certs.yml
when: old_apiserver_cert.stat.exists


@@ -2,6 +2,15 @@
- include: pre-upgrade.yml
tags: k8s-pre-upgrade
# upstream bug: https://github.com/kubernetes/kubeadm/issues/441
- name: Disable kube_basic_auth until kubeadm/441 is fixed
set_fact:
kube_basic_auth: false
when: kubeadm_enabled|bool|default(false)
- include: users-file.yml
when: kube_basic_auth|default(true)
- name: Copy kubectl from hyperkube container
command: "{{ docker_bin_dir }}/docker run --rm -v {{ bin_dir }}:/systembindir {{ hyperkube_image_repo }}:{{ hyperkube_image_tag }} /bin/cp /hyperkube /systembindir/kubectl"
register: kube_task_result
@@ -25,66 +34,10 @@
when: ansible_os_family in ["Debian","RedHat"]
tags: [kubectl, upgrade]
- name: Write kube-apiserver manifest
template:
src: manifests/kube-apiserver.manifest.j2
dest: "{{ kube_manifest_dir }}/kube-apiserver.manifest"
notify: Master | wait for the apiserver to be running
tags: kube-apiserver
- name: Include kubeadm setup if enabled
include: kubeadm-setup.yml
when: kubeadm_enabled|bool|default(false)
- meta: flush_handlers
- name: Write kube system namespace manifest
template:
src: namespace.j2
dest: "{{kube_config_dir}}/{{system_namespace}}-ns.yml"
run_once: yes
when: inventory_hostname == groups['kube-master'][0]
tags: apps
- name: Check if kube system namespace exists
command: "{{ bin_dir }}/kubectl get ns {{system_namespace}}"
register: 'kubesystem'
changed_when: False
failed_when: False
run_once: yes
tags: apps
- name: Create kube system namespace
command: "{{ bin_dir }}/kubectl create -f {{kube_config_dir}}/{{system_namespace}}-ns.yml"
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
register: create_system_ns
until: create_system_ns.rc == 0
changed_when: False
when: kubesystem|failed and inventory_hostname == groups['kube-master'][0]
tags: apps
- name: Write kube-scheduler kubeconfig
template:
src: kube-scheduler-kubeconfig.yaml.j2
dest: "{{ kube_config_dir }}/kube-scheduler-kubeconfig.yaml"
tags: kube-scheduler
- name: Write kube-scheduler manifest
template:
src: manifests/kube-scheduler.manifest.j2
dest: "{{ kube_manifest_dir }}/kube-scheduler.manifest"
notify: Master | wait for kube-scheduler
tags: kube-scheduler
- name: Write kube-controller-manager kubeconfig
template:
src: kube-controller-manager-kubeconfig.yaml.j2
dest: "{{ kube_config_dir }}/kube-controller-manager-kubeconfig.yaml"
tags: kube-controller-manager
- name: Write kube-controller-manager manifest
template:
src: manifests/kube-controller-manager.manifest.j2
dest: "{{ kube_manifest_dir }}/kube-controller-manager.manifest"
notify: Master | wait for kube-controller-manager
tags: kube-controller-manager
- include: post-upgrade.yml
tags: k8s-post-upgrade
- name: Include static pod setup if not using kubeadm
include: static-pod-setup.yml
when: not kubeadm_enabled|bool|default(false)


@@ -1,31 +0,0 @@
---
- name: "Post-upgrade | stop kubelet on all masters"
service:
name: kubelet
state: stopped
delegate_to: "{{item}}"
with_items: "{{groups['kube-master']}}"
when: needs_etcd_migration|bool
run_once: true
- name: "Post-upgrade | Pause for kubelet stop"
pause:
seconds: 10
when: needs_etcd_migration|bool
- name: "Post-upgrade | start kubelet on all masters"
service:
name: kubelet
state: started
delegate_to: "{{item}}"
with_items: "{{groups['kube-master']}}"
when: needs_etcd_migration|bool
run_once: true
- name: "Post-upgrade | etcd3 upgrade | purge etcd2 k8s data"
command: "{{ bin_dir }}/etcdctl --endpoints={{ etcd_access_addresses }} rm -r /registry"
environment:
ETCDCTL_API: 2
delegate_to: "{{groups['etcd'][0]}}"
run_once: true
when: kube_apiserver_storage_backend == "etcd3" and needs_etcd_migration|bool|default(false)


@@ -1,38 +1,4 @@
---
- name: "Pre-upgrade | check for kube-apiserver unit file"
stat:
path: /etc/systemd/system/kube-apiserver.service
register: kube_apiserver_service_file
tags: [facts, kube-apiserver]
- name: "Pre-upgrade | check for kube-apiserver init script"
stat:
path: /etc/init.d/kube-apiserver
register: kube_apiserver_init_script
tags: [facts, kube-apiserver]
- name: "Pre-upgrade | stop kube-apiserver if service defined"
service:
name: kube-apiserver
state: stopped
when: (kube_apiserver_service_file.stat.exists|default(False) or kube_apiserver_init_script.stat.exists|default(False))
tags: kube-apiserver
- name: "Pre-upgrade | remove kube-apiserver service definition"
file:
path: "{{ item }}"
state: absent
when: (kube_apiserver_service_file.stat.exists|default(False) or kube_apiserver_init_script.stat.exists|default(False))
with_items:
- /etc/systemd/system/kube-apiserver.service
- /etc/init.d/kube-apiserver
tags: kube-apiserver
- name: "Pre-upgrade | See if kube-apiserver manifest exists"
stat:
path: /etc/kubernetes/manifests/kube-apiserver.manifest
register: kube_apiserver_manifest
- name: "Pre-upgrade | etcd3 upgrade | see if old config exists"
command: "{{ bin_dir }}/etcdctl --peers={{ etcd_access_addresses }} ls /registry/minions"
environment:
@@ -47,64 +13,18 @@
kube_apiserver_storage_backend: "etcd2"
when: old_data_exists.rc == 0 and not force_etcd3|bool
- name: "Pre-upgrade | etcd3 upgrade | see if data was already migrated"
command: "{{ bin_dir }}/etcdctl --endpoints={{ etcd_access_addresses }} get --limit=1 --prefix=true /registry/minions"
environment:
ETCDCTL_API: 3
register: data_migrated
delegate_to: "{{groups['etcd'][0]}}"
when: kube_apiserver_storage_backend == "etcd3"
failed_when: false
- name: "Pre-upgrade | etcd3 upgrade | set needs_etcd_migration"
set_fact:
needs_etcd_migration: "{{ force_etcd3|default(false) and kube_apiserver_storage_backend == 'etcd3' and data_migrated.stdout_lines|length == 0 and old_data_exists.rc == 0 }}"
- name: "Pre-upgrade | Delete master manifests on all kube-masters"
- name: "Pre-upgrade | Delete master manifests"
file:
path: "/etc/kubernetes/manifests/{{item[1]}}.manifest"
path: "/etc/kubernetes/manifests/{{item}}.manifest"
state: absent
delegate_to: "{{item[0]}}"
with_nested:
- "{{groups['kube-master']}}"
with_items:
- ["kube-apiserver", "kube-controller-manager", "kube-scheduler"]
register: kube_apiserver_manifest_replaced
when: (secret_changed|default(false) or etcd_secret_changed|default(false) or needs_etcd_migration|bool) and kube_apiserver_manifest.stat.exists
when: (secret_changed|default(false) or etcd_secret_changed|default(false))
- name: "Pre-upgrade | Delete master containers forcefully on all kube-masters"
- name: "Pre-upgrade | Delete master containers forcefully"
shell: "docker ps -f name=k8s-{{item}}* -q | xargs --no-run-if-empty docker rm -f"
delegate_to: "{{item[0]}}"
with_nested:
- "{{groups['kube-master']}}"
with_items:
- ["kube-apiserver", "kube-controller-manager", "kube-scheduler"]
register: kube_apiserver_manifest_replaced
when: (secret_changed|default(false) or etcd_secret_changed|default(false) or needs_etcd_migration|bool) and kube_apiserver_manifest.stat.exists
run_once: true
- name: "Pre-upgrade | etcd3 upgrade | stop etcd"
service:
name: etcd
state: stopped
delegate_to: "{{item}}"
with_items: "{{groups['etcd']}}"
when: needs_etcd_migration|bool
run_once: true
- name: "Pre-upgrade | etcd3 upgrade | migrate data"
command: "{{ bin_dir }}/etcdctl migrate --data-dir=\"{{ etcd_data_dir }}\" --wal-dir=\"{{ etcd_data_dir }}/member/wal\""
environment:
ETCDCTL_API: 3
delegate_to: "{{item}}"
with_items: "{{groups['etcd']}}"
register: etcd_migrated
when: needs_etcd_migration|bool
run_once: true
- name: "Pre-upgrade | etcd3 upgrade | start etcd"
service:
name: etcd
state: started
delegate_to: "{{item}}"
with_items: "{{groups['etcd']}}"
when: needs_etcd_migration|bool
when: kube_apiserver_manifest_replaced.changed
run_once: true


@@ -0,0 +1,60 @@
---
- name: Write kube-apiserver manifest
template:
src: manifests/kube-apiserver.manifest.j2
dest: "{{ kube_manifest_dir }}/kube-apiserver.manifest"
notify: Master | wait for the apiserver to be running
tags: kube-apiserver
- meta: flush_handlers
- name: Write kube system namespace manifest
template:
src: namespace.j2
dest: "{{kube_config_dir}}/{{system_namespace}}-ns.yml"
when: inventory_hostname == groups['kube-master'][0]
tags: apps
- name: Check if kube system namespace exists
command: "{{ bin_dir }}/kubectl get ns {{system_namespace}}"
register: 'kubesystem'
changed_when: False
failed_when: False
when: inventory_hostname == groups['kube-master'][0]
tags: apps
- name: Create kube system namespace
command: "{{ bin_dir }}/kubectl create -f {{kube_config_dir}}/{{system_namespace}}-ns.yml"
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
register: create_system_ns
until: create_system_ns.rc == 0
changed_when: False
when: inventory_hostname == groups['kube-master'][0] and kubesystem.rc != 0
tags: apps
- name: Write kube-scheduler kubeconfig
template:
src: kube-scheduler-kubeconfig.yaml.j2
dest: "{{ kube_config_dir }}/kube-scheduler-kubeconfig.yaml"
tags: kube-scheduler
- name: Write kube-scheduler manifest
template:
src: manifests/kube-scheduler.manifest.j2
dest: "{{ kube_manifest_dir }}/kube-scheduler.manifest"
notify: Master | wait for kube-scheduler
tags: kube-scheduler
- name: Write kube-controller-manager kubeconfig
template:
src: kube-controller-manager-kubeconfig.yaml.j2
dest: "{{ kube_config_dir }}/kube-controller-manager-kubeconfig.yaml"
tags: kube-controller-manager
- name: Write kube-controller-manager manifest
template:
src: manifests/kube-controller-manager.manifest.j2
dest: "{{ kube_manifest_dir }}/kube-controller-manager.manifest"
notify: Master | wait for kube-controller-manager
tags: kube-controller-manager


@@ -0,0 +1,14 @@
---
- name: Make sure the users directory exists
file:
path: "{{ kube_users_dir }}"
state: directory
mode: o-rwx
group: "{{ kube_cert_group }}"
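# known_users.csv backs the apiserver's basic-auth-file setting (see the
# kubeadm config template); updates notify the secret_changed handler.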
- name: Populate users for basic auth in API
template:
src: known_users.csv.j2
dest: "{{ kube_users_dir }}/known_users.csv"
backup: yes
notify: Master | set secret_changed


@@ -0,0 +1,67 @@
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
advertiseAddress: {{ ip | default(ansible_default_ipv4.address) }}
bindPort: {{ kube_apiserver_port }}
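{# The etcd client certs referenced below are placed under kube_config_dir/ssl/etcd by the "Copy etcd cert dir under k8s cert dir" task in kubeadm-setup.yml #}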
etcd:
endpoints:
{% for endpoint in etcd_access_endpoint.split(',') %}
- {{ endpoint }}
{% endfor %}
caFile: {{ kube_config_dir }}/ssl/etcd/ca.pem
certFile: {{ kube_config_dir }}/ssl/etcd/node-{{ inventory_hostname }}.pem
keyFile: {{ kube_config_dir }}/ssl/etcd/node-{{ inventory_hostname }}-key.pem
networking:
dnsDomain: {{ dns_domain }}
serviceSubnet: {{ kube_service_addresses }}
podSubnet: {{ kube_pods_subnet }}
kubernetesVersion: {{ kube_version }}
cloudProvider: {{ cloud_provider|default('') }}
authorizationModes:
- Node
{% for mode in authorization_modes %}
- {{ mode }}
{% endfor %}
token: {{ kubeadm_token }}
tokenTTL: "{{ kubeadm_token_ttl }}"
selfHosted: false
apiServerExtraArgs:
insecure-bind-address: {{ kube_apiserver_insecure_bind_address }}
insecure-port: "{{ kube_apiserver_insecure_port }}"
admission-control: {{ kube_apiserver_admission_control | join(',') }}
apiserver-count: "{{ kube_apiserver_count }}"
service-node-port-range: {{ kube_apiserver_node_port_range }}
{% if kube_basic_auth|default(true) %}
basic-auth-file: {{ kube_users_dir }}/known_users.csv
{% endif %}
{% if kube_oidc_auth|default(false) and kube_oidc_url is defined and kube_oidc_client_id is defined %}
oidc-issuer-url: {{ kube_oidc_url }}
oidc-client-id: {{ kube_oidc_client_id }}
{% if kube_oidc_ca_file is defined %}
oidc-ca-file: {{ kube_oidc_ca_file }}
{% endif %}
{% if kube_oidc_username_claim is defined %}
oidc-username-claim: {{ kube_oidc_username_claim }}
{% endif %}
{% if kube_oidc_groups_claim is defined %}
oidc-groups-claim: {{ kube_oidc_groups_claim }}
{% endif %}
{% endif %}
storage-backend: {{ kube_apiserver_storage_backend }}
{% if kube_api_runtime_config is defined %}
runtime-config: {{ kube_api_runtime_config }}
{% endif %}
allow-privileged: "true"
controllerManagerExtraArgs:
node-monitor-grace-period: {{ kube_controller_node_monitor_grace_period }}
node-monitor-period: {{ kube_controller_node_monitor_period }}
pod-eviction-timeout: {{ kube_controller_pod_eviction_timeout }}
{% if kube_feature_gates %}
feature-gates: {{ kube_feature_gates|join(',') }}
{% endif %}
apiServerCertSANs:
{% for san in apiserver_sans.split(' ') | unique %}
- {{ san }}
{% endfor %}
certificatesDir: {{ kube_config_dir }}/ssl
unifiedControlPlaneImage: "{{ hyperkube_image_repo }}:{{ hyperkube_image_tag }}"


@@ -6,6 +6,9 @@ metadata:
labels:
k8s-app: kube-apiserver
kubespray: v2
annotations:
kubespray.etcd-cert/serial: "{{ etcd_client_cert_serial }}"
kubespray.apiserver-cert/serial: "{{ apiserver_cert_serial }}"
spec:
hostNetwork: true
{% if kube_version | version_compare('v1.6', '>=') %}
@@ -105,9 +108,14 @@ spec:
- mountPath: {{ kube_config_dir }}
name: kubernetes-config
readOnly: true
- mountPath: /etc/ssl/certs
- mountPath: /etc/ssl
name: ssl-certs-host
readOnly: true
{% for dir in ssl_ca_dirs %}
- mountPath: {{ dir }}
name: {{ dir | regex_replace('^/(.*)$', '\\1' ) | regex_replace('/', '-') }}
readOnly: true
{% endfor %}
- mountPath: {{ etcd_cert_dir }}
name: etcd-certs
readOnly: true
@@ -120,9 +128,14 @@ spec:
- hostPath:
path: {{ kube_config_dir }}
name: kubernetes-config
- hostPath:
path: /etc/ssl/certs/
name: ssl-certs-host
- name: ssl-certs-host
hostPath:
path: /etc/ssl
{% for dir in ssl_ca_dirs %}
- name: {{ dir | regex_replace('^/(.*)$', '\\1' ) | regex_replace('/', '-') }}
hostPath:
path: {{ dir }}
{% endfor %}
- hostPath:
path: {{ etcd_cert_dir }}
name: etcd-certs


@@ -5,6 +5,9 @@ metadata:
namespace: {{system_namespace}}
labels:
k8s-app: kube-controller
annotations:
kubespray.etcd-cert/serial: "{{ etcd_client_cert_serial }}"
kubespray.controller-manager-cert/serial: "{{ controller_manager_cert_serial }}"
spec:
hostNetwork: true
{% if kube_version | version_compare('v1.6', '>=') %}
@@ -70,9 +73,14 @@ spec:
initialDelaySeconds: 30
timeoutSeconds: 10
volumeMounts:
- mountPath: /etc/ssl/certs
- mountPath: /etc/ssl
name: ssl-certs-host
readOnly: true
{% for dir in ssl_ca_dirs %}
- mountPath: {{ dir }}
name: {{ dir | regex_replace('^/(.*)$', '\\1' ) | regex_replace('/', '-') }}
readOnly: true
{% endfor %}
- mountPath: "{{kube_config_dir}}/ssl"
name: etc-kube-ssl
readOnly: true
@@ -87,11 +95,12 @@ spec:
volumes:
- name: ssl-certs-host
hostPath:
{% if ansible_os_family == 'RedHat' %}
path: /etc/pki/tls
{% else %}
path: /usr/share/ca-certificates
{% endif %}
path: /etc/ssl
{% for dir in ssl_ca_dirs %}
- name: {{ dir | regex_replace('^/(.*)$', '\\1' ) | regex_replace('/', '-') }}
hostPath:
path: {{ dir }}
{% endfor %}
- name: etc-kube-ssl
hostPath:
path: "{{ kube_config_dir }}/ssl"


@@ -5,6 +5,8 @@ metadata:
namespace: {{ system_namespace }}
labels:
k8s-app: kube-scheduler
annotations:
kubespray.scheduler-cert/serial: "{{ scheduler_cert_serial }}"
spec:
hostNetwork: true
{% if kube_version | version_compare('v1.6', '>=') %}
@@ -45,9 +47,14 @@ spec:
initialDelaySeconds: 30
timeoutSeconds: 10
volumeMounts:
- mountPath: /etc/ssl/certs
- mountPath: /etc/ssl
name: ssl-certs-host
readOnly: true
{% for dir in ssl_ca_dirs %}
- mountPath: {{ dir }}
name: {{ dir | regex_replace('^/(.*)$', '\\1' ) | regex_replace('/', '-') }}
readOnly: true
{% endfor %}
- mountPath: "{{ kube_config_dir }}/ssl"
name: etc-kube-ssl
readOnly: true
@@ -57,11 +64,12 @@ spec:
volumes:
- name: ssl-certs-host
hostPath:
{% if ansible_os_family == 'RedHat' %}
path: /etc/pki/tls
{% else %}
path: /usr/share/ca-certificates
{% endif %}
path: /etc/ssl
{% for dir in ssl_ca_dirs %}
- name: {{ dir | regex_replace('^/(.*)$', '\\1' ) | regex_replace('/', '-') }}
hostPath:
path: {{ dir }}
{% endfor %}
- name: etc-kube-ssl
hostPath:
path: "{{ kube_config_dir }}/ssl"


@@ -6,7 +6,16 @@ dependencies:
- role: download
file: "{{ downloads.pod_infra }}"
tags: [download, kubelet]
- role: download
file: "{{ downloads.install_socat }}"
tags: [download, kubelet]
when: ansible_os_family in ['CoreOS', 'Container Linux by CoreOS']
- role: download
file: "{{ downloads.kubeadm }}"
tags: [download, kubelet, kubeadm]
when: kubeadm_enabled
- role: kubernetes/secrets
when: not kubeadm_enabled
tags: k8s-secrets
- role: download
file: "{{ downloads.nginx }}"


@@ -0,0 +1,9 @@
---
- name: look up docker cgroup driver
shell: "docker info | grep 'Cgroup Driver' | awk -F': ' '{ print $2; }'"
register: docker_cgroup_driver_result
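# standalone_kubelet is true on hosts that are masters but not nodes; the
# detected cgroup driver lets the kubelet be configured to match docker.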
- set_fact:
standalone_kubelet: >-
{%- if inventory_hostname in groups['kube-master'] and inventory_hostname not in groups['kube-node'] -%}true{%- else -%}false{%- endif -%}
kubelet_cgroup_driver_detected: "{{ docker_cgroup_driver_result.stdout }}"


@@ -13,6 +13,26 @@
]"
tags: facts
- name: Set kubelet deployment to host if kubeadm is enabled
set_fact:
kubelet_deployment_type: host
when: kubeadm_enabled
tags: kubeadm
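# Install the downloaded kubeadm binary into bin_dir; rsync -u copies only
# when the source is newer, and the next task enforces 0755 permissions.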
- name: install | Copy kubeadm binary from download dir
command: rsync -piu "{{ local_release_dir }}/kubeadm" "{{ bin_dir }}/kubeadm"
changed_when: false
when: kubeadm_enabled
tags: kubeadm
- name: install | Set kubeadm binary permissions
file:
path: "{{ bin_dir }}/kubeadm"
mode: "0755"
state: file
when: kubeadm_enabled
tags: kubeadm
- include: "install_{{ kubelet_deployment_type }}.yml"
- name: install | Write kubelet systemd init file


@@ -8,3 +8,9 @@
changed_when: false
tags: [hyperkube, upgrade]
notify: restart kubelet
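# Container Linux does not ship socat, so pull it into bin_dir via the
# install_socat image; "creates" keeps this a one-time copy.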
- name: install | Copy socat wrapper for Container Linux
command: "{{ docker_bin_dir }}/docker run --rm -v {{ bin_dir }}:/opt/bin {{ install_socat_image_repo }}:{{ install_socat_image_tag }}"
args:
creates: "{{ bin_dir }}/socat"
when: ansible_os_family in ['CoreOS', 'Container Linux by CoreOS']

Some files were not shown because too many files have changed in this diff.