Compare commits


131 Commits

Author SHA1 Message Date
Matthew Mosesohn
3ff5f40bdb fix graceful upgrade (#1704)
Fix system namespace creation
Only rotate tokens when necessary
2017-09-27 14:49:20 +01:00
Matthew Mosesohn
689ded0413 Enable kubeadm upgrades to any version (#1709) 2017-09-27 14:48:18 +01:00
Matthew Mosesohn
327ed157ef Verify valid settings before deploy (#1705)
Also fix yaml lint issues

Fixes #1703
2017-09-27 14:47:47 +01:00
Pablo Moreno
c819238da9 Adds support for separate etcd machines on terraform/openstack deployment (#1674) 2017-09-27 10:59:09 +01:00
tanshanshan
477afa8711 when and run_once are reduplicative (#1694) 2017-09-26 14:48:05 +01:00
Matthew Mosesohn
bd272e0b3c Upgrade to kubeadm (#1667)
* Enable upgrade to kubeadm

* fix kubedns upgrade

* try upgrade route

* use init/upgrade strategy for kubeadm and ignore kubedns svc

* Use bin_dir for kubeadm

* delete more secrets

* fix waiting for terminating pods

* Manually enforce kube-proxy for kubeadm deploy

* remove proxy. update to kubeadm 1.8.0rc1
2017-09-26 10:38:58 +01:00
Maxim Krasilnikov
1067595b5c Change used chars for kubeadm tokens (#1701) 2017-09-26 05:56:08 +01:00
Brad Beam
14c232e3c4 Merge pull request #1663 from foxyriver/fix-shell
use command module instead of shell module
2017-09-25 13:24:45 -05:00
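(Illustration for the shell→command change merged above; the task names and example command are hypothetical, not taken from #1663. The `command` module is preferred when no pipes, redirects or shell expansions are needed.)

```yaml
# Hypothetical before/after: replace the shell module with the command module
# for a task that uses no shell features.
- name: check kubelet version (old style, spawns /bin/sh needlessly)
  shell: /usr/local/bin/kubelet --version
  register: kubelet_version
  changed_when: false

- name: check kubelet version (preferred, runs the binary directly)
  command: /usr/local/bin/kubelet --version
  register: kubelet_version
  changed_when: false
```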
Brad Beam
57f5fb1f4f Merge pull request #1661 from neith00/master
upgrading from weave version 2.0.1 to 2.0.4
2017-09-25 13:23:57 -05:00
Bogdan Dobrelya
bcddfb786d Merge pull request #1692 from mattymo/old-etcd-logic
drop unused etcd logic
2017-09-25 17:44:33 +02:00
Martin Uddén
20db1738fa feature: install project atomic CSS on RedHat family (#1499)
* feature: install project atomic CSS on RedHat family

* missing patch for this feature

* sub-role refactor

* Yamllint fix
2017-09-25 12:29:17 +01:00
Hassan Zamani
b23d81f825 Add etcd_blkio_weight var (#1690) 2017-09-25 12:20:24 +01:00
Maxim Krasilnikov
bc15ceaba1 Update var doc about users accounts (#1685) 2017-09-25 12:20:00 +01:00
Junaid Ali
6f17d0817b Updating getting-started.md (#1683)
Signed-off-by: Junaid Ali <junaidali.yahya@gmail.com>
2017-09-25 12:19:38 +01:00
Matthew Mosesohn
a1cde03b20 Correct master manifest cleanup logic (#1693)
Fixes #1666
2017-09-25 12:19:04 +01:00
Bogdan Dobrelya
cfce23950a Merge pull request #1687 from jistr/cgroup-driver-kubeadm
Set correct kubelet cgroup-driver also for kubeadm deployments
2017-09-25 11:16:40 +02:00
Deni Bertovic
64740249ab Adds tags for asserts (#1639) 2017-09-25 08:41:03 +01:00
Matthew Mosesohn
126f42de06 drop unused etcd logic
Fixes #1660
2017-09-25 07:52:55 +01:00
Matthew Mosesohn
d94e3a81eb Use api lookup for kubelet hostname when using cloudprovider (#1686)
The value cannot be determined properly via local facts, so
checking k8s api is the most reliable way to look up what hostname
is used when using a cloudprovider.
2017-09-24 09:22:15 +01:00
Jiri Stransky
70d0235770 Set correct kubelet cgroup-driver also for kubeadm deployments
This follows pull request #1677, adding the cgroup-driver
autodetection also for kubeadm way of deploying.

Info about this and the possibility to override is added to the docs.
2017-09-22 13:19:04 +02:00
foxyriver
30b5493fd6 use command module instead of shell module 2017-09-22 15:47:03 +08:00
Bogdan Dobrelya
4f6362515f Merge pull request #1677 from jistr/cgroup-driver
Allow setting cgroup driver for kubelet
2017-09-21 17:31:48 +02:00
Jiri Stransky
dbbe9419e5 Allow setting cgroup driver for kubelet
Red Hat family platforms run docker daemon with `--exec-opt
native.cgroupdriver=systemd`. When kubespray tried to start kubelet
service, it failed with:

Error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

Setting kubelet's cgroup driver to the correct value for the platform
fixes this issue. The code utilizes autodetection of docker's cgroup
driver, as different RPMs for the same distro may vary in that regard.
2017-09-21 11:58:11 +02:00
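(A minimal sketch of the autodetection described in the commit above, assuming docker reports its driver via `docker info`; task and variable names are illustrative, not the actual kubespray tasks.)

```yaml
# Sketch: detect docker's cgroup driver so kubelet can be started with a
# matching --cgroup-driver value (systemd vs cgroupfs).
- name: detect docker cgroup driver
  shell: docker info 2>/dev/null | awk -F': ' '/Cgroup Driver/ {print $2}'
  register: docker_cgroup_driver
  changed_when: false

- name: set kubelet cgroup driver fact
  set_fact:
    kubelet_cgroup_driver: "{{ docker_cgroup_driver.stdout | default('cgroupfs', true) }}"
```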
Matthew Mosesohn
188bae142b Fix wait for hosts in CI (#1679)
Also fix usage of failed_when and handling exit code.
2017-09-20 14:30:09 +01:00
Matthew Mosesohn
ef8e35e39b Create admin credential kubeconfig (#1647)
New files: /etc/kubernetes/admin.conf
           /root/.kube/config
           $GITDIR/artifacts/{kubectl,admin.conf}

Optional method to download kubectl and admin.conf if
kubeconfig_localhost is set to true (default false)
2017-09-18 13:30:57 +01:00
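(Usage sketch for the option introduced above; `kubeconfig_localhost` is the variable named in the commit, the group_vars placement is an assumption.)

```yaml
# group_vars sketch: fetch kubectl and admin.conf into ./artifacts after deploy.
kubeconfig_localhost: true
```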
Matthew Mosesohn
975accbe1d just use public_ip in creating gce temporary waitfor hosts (#1646)
* just use public_ip in creating gce temporary waitfor hosts

* Update create-gce.yml
2017-09-18 13:24:57 +01:00
Brad Beam
aaa27d0a34 Adding quotes around parameters in cloud_config (#1664)
This is to help support escapes and special characters
2017-09-16 08:43:47 +01:00
Kevin Lefevre
9302ce0036 Enhanced OpenStack cloud provider (#1627)
- Enable Cinder API version for block storage
- Enable floating IP for LBaaS
2017-09-16 08:43:24 +01:00
Matthew Mosesohn
0aab3c97a0 Add all-in-one CI mode and make coreos test aio (#1665) 2017-09-15 22:28:37 +01:00
Matthew Mosesohn
8e731337ba Enable HA deploy of kubeadm (#1658)
* Enable HA deploy of kubeadm

* raise delay to 60s for starting gce hosts
2017-09-15 22:28:15 +01:00
Matthew Mosesohn
b294db5aed fix apply for netchecker upgrade (#1659)
* fix apply for netchecker upgrade and graceful upgrade

* Speed up daemonset upgrades. Make check wait for ds upgrades.
2017-09-15 13:19:37 +01:00
Matthew Mosesohn
8d766a2ca9 Enable ssh opts by in config, set 100 connection retries (#1662)
Also update to ansible 2.3.2
2017-09-15 10:19:36 +01:00
Brad Beam
f2ae16e71d Merge pull request #1651 from bradbeam/vaultnocontent
Fixing condition where vault CA already exists
2017-09-14 17:04:15 -05:00
Brad Beam
ac281476c8 Prune unnecessary certs from vault setup (#1652)
* Cleaning up cert checks for vault

* Removing all unnecessary etcd certs from each node

* Removing all unnecessary kube certs from each node
2017-09-14 12:28:11 +01:00
neith00
1b1c8d31a9 upgrading from weave version 2.0.1 to 2.0.4
This upgrade has been tested offline on a 1.7.5 cluster
2017-09-14 10:29:28 +02:00
Brad Beam
4b587aaf99 Adding ability to specify altnames for vault cert (#1640) 2017-09-14 07:19:44 +01:00
Kyle Bai
016301508e Update to Kubernetes v1.7.5 (#1649) 2017-09-14 07:18:03 +01:00
Matthew Mosesohn
6744726089 kubeadm support (#1631)
* kubeadm support

* move k8s master to a subtask
* disable k8s secrets when using kubeadm
* fix etcd cert serial var
* move simple auth users to master role
* make a kubeadm-specific env file for kubelet
* add non-ha CI job

* change ci boolean vars to json format

* fixup

* Update create-gce.yml

* Update create-gce.yml

* Update create-gce.yml
2017-09-13 19:00:51 +01:00
Brad Beam
0a89f88b89 Fixing condition where CA already exists 2017-09-13 03:40:46 +00:00
Brad Beam
69fac8ea58 Merge pull request #1634 from bradbeam/calico_cni
fix for calico cni plugin node name
2017-09-11 22:18:06 -05:00
Brad Beam
a51104e844 Merge pull request #1648 from kubernetes-incubator/mattymo-patch-1
Update getting-started.md
2017-09-11 17:55:51 -05:00
Matthew Mosesohn
943aaf84e5 Update getting-started.md 2017-09-11 12:47:04 +03:00
Seungkyu Ahn
e8bde03a50 Setting kubectl bin directory (#1635) 2017-09-09 23:54:13 +03:00
Matthew Mosesohn
75b13caf0b Fix kube-apiserver status checks when changing insecure bind addr (#1633) 2017-09-09 23:41:48 +03:00
Matthew Mosesohn
0f231f0e76 Improve method to create and wait for gce instances (#1645) 2017-09-09 23:41:31 +03:00
Matthew Mosesohn
5d99fa0940 Purge old upgrade hooks and unused tasks (#1641) 2017-09-09 23:41:20 +03:00
Matthew Mosesohn
649388188b Fix netchecker update side effect (#1644)
* Fix netchecker update side effect

kubectl apply should only be used on resources created
with kubectl apply. To workaround this, we should apply
the old manifest before upgrading it.

* Update 030_check-network.yml
2017-09-09 23:38:38 +03:00
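(A sketch of the workaround described above — manifest paths are placeholders: applying the previously deployed manifest first gives `kubectl apply` the last-applied-configuration annotation it needs before the updated manifest is applied.)

```yaml
# Illustrative only: prime kubectl apply with the old manifest so the
# subsequent apply of the new manifest can do a proper three-way merge.
- name: apply old netchecker manifest (creates the last-applied annotation)
  command: "{{ bin_dir }}/kubectl apply -f /etc/kubernetes/netchecker-old.yml"

- name: apply updated netchecker manifest
  command: "{{ bin_dir }}/kubectl apply -f /etc/kubernetes/netchecker.yml"
```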
Matthew Mosesohn
9fa1873a65 Add kube dashboard, enabled by default (#1643)
* Add kube dashboard, enabled by default

Also add rbac role for kube user

* Update main.yml
2017-09-09 23:38:03 +03:00
Matthew Mosesohn
f2057dd43d Refactor downloads (#1642)
* Refactor downloads

Add prefixes to tasks (file vs container)
Remove some delegates
Clean up some conditions

* Update ansible.cfg
2017-09-09 23:32:12 +03:00
Brad Beam
eeffbbb43c Updating calicocni.hostname to calicocni.nodename 2017-09-08 12:47:40 +00:00
Brad Beam
aaa0105f75 Flexing calicocni.hostname based on cloud provider 2017-09-08 12:47:40 +00:00
Matthew Mosesohn
f29a42721f Clean up debug in check apiserver test (#1638)
* Clean up debug in check apiserver test

* Change password generation for kube_user

Special characters are not allowed in known_users.csv file
2017-09-08 15:47:13 +03:00
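(One way to generate such a password in Ansible is the password lookup restricted to alphanumerics — a sketch under that assumption, not necessarily the expression used in #1638.)

```yaml
# Sketch: random password containing only characters that are safe for
# known_users.csv (letters and digits, no separators or quotes).
- name: generate kube user password
  set_fact:
    kube_api_pwd: "{{ lookup('password', '/dev/null length=15 chars=ascii_letters,digits') }}"
```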
Matthew Mosesohn
079d317ade Default is_atomic to false (#1637) 2017-09-08 15:00:57 +03:00
Matthew Mosesohn
6f1fd12265 Revert "Add option for fact cache expiry" (#1636)
* Revert "Add option for fact cache expiry (#1602)"

This reverts commit fb30f65951.
2017-09-08 10:19:58 +03:00
Maxim Krasilnikov
e16b57aa05 Store vault users passwords to credentials dir. Create vault and etcd roles after start vault cluster (#1632) 2017-09-07 23:30:16 +03:00
Yorgos Saslis
fb30f65951 Add option for fact cache expiry (#1602)
* Add option for fact cache expiry 

By adding the `fact_caching_timeout` we avoid having really stale/invalid data ending up in there. 
Leaving commented out by default, for backwards compatibility, but nice to have there.

* Enabled cache-expiry by default

Set to 2 hours and modified comment to reflect change
2017-09-07 23:29:27 +03:00
Tennis Smith
a47aaae078 Add bastion host definitions (#1621)
* Add comment line and documentation for bastion host usage

* Take out unneeded sudo parm

* Remove blank lines

* revert changes

* take out disabling of strict host checking
2017-09-07 23:26:52 +03:00
Matthew Mosesohn
7117614ee5 Use a generated password for kube user (#1624)
Removed unnecessary root user
2017-09-06 20:20:25 +03:00
Chad Swenson
e26aec96b0 Consolidate kube-proxy module and sysctl loading (#1586)
This sets the br_netfilter module and the net.bridge.bridge-nf-call-iptables sysctl from a single play before kube-proxy is first run, instead of from the flannel and weave network_plugin roles after kube-proxy is started
2017-09-06 15:11:51 +03:00
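(The consolidation described above amounts to roughly the following; task names are illustrative and the play is assumed to run before kube-proxy starts.)

```yaml
# Sketch: load br_netfilter (when available) and set the bridge sysctl once,
# before kube-proxy is started, instead of per network plugin.
- name: ensure br_netfilter module is loaded
  modprobe:
    name: br_netfilter
    state: present
  ignore_errors: true   # br_netfilter may be built into the kernel

- name: enable bridge-nf-call-iptables
  sysctl:
    name: net.bridge.bridge-nf-call-iptables
    value: 1
    state: present
    reload: yes
```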
Sam Powers
c60d104056 Update checksums (etcd calico calico-cni weave) to fix uploads.yml (#1584)
the uploads.yml playbook was broken with checksum mismatch errors in
various kubespray commits, for example, 3bfad5ca73
which updated the version from 3.0.6 to 3.0.17 without updating the
corresponding checksums.
2017-09-06 15:11:13 +03:00
Oliver Moser
e6ff8c92a0 Using 'hostnamectl' to set unconfigured hostname on CoreOS (#1600) 2017-09-06 15:10:52 +03:00
Maxim Krasilnikov
9bce364b3c Update auth enabled methods in group_vars example (#1625) 2017-09-06 15:10:18 +03:00
Chad Swenson
cbaa2b5773 Retry Remove all Docker containers in reset (#1623)
Due to various occasional docker bugs, removing a container will sometimes fail. This can often be mitigated by trying again.
2017-09-06 14:23:16 +03:00
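(A sketch of the retry pattern referred to above; the command, retry count and delay are illustrative.)

```yaml
# Sketch: retry container removal, since docker occasionally fails to remove
# a container on the first attempt.
- name: remove all docker containers
  shell: docker ps -aq | xargs -r docker rm -fv
  register: remove_all_containers
  retries: 4
  delay: 5
  until: remove_all_containers.rc == 0
```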
Matthieu
0453ed8235 Fix an error with Canal when RBAC are disabled (#1619)
* Fix an error with Canal when RBAC are disabled

* Update using same rbac strategy used elsewhere
2017-09-06 11:32:32 +03:00
Brad Beam
a341adb7f3 Updating CN for node certs generated by vault (#1622)
This allows the node authorization plugin to function correctly
2017-09-06 10:55:08 +03:00
Matthew Mosesohn
4c88ac69f2 Use kubectl apply instead of create/replace (#1610)
Disable checks for existing resources to speed up execution.
2017-09-06 09:36:54 +03:00
Brad Beam
85c237bc1d Merge pull request #1607 from chapsuk/vault_roles
Vault role updates
2017-09-05 11:48:41 -05:00
Tennis Smith
35d48cc88c Point apiserver address to 0.0.0.0 (#1617)
* Point apiserver address to 0.0.0.0
Added loadbalancer api server address
* Update documentation
2017-09-05 18:41:47 +03:00
mkrasilnikov
957b7115fe Remove node name from kube-proxy and admin certificates 2017-09-05 14:40:26 +03:00
Yorgos Saslis
82eedbd622 Update ansible inventory file when template changes (#1612)
This trigger ensures the inventory file is kept up-to-date. Otherwise, if the file exists and you've made changes to your terraform-managed infra without having deleted the file, it would never get updated. 

For example, consider the case where you've destroyed and re-applied the terraform resources, none of the IPs would get updated, so ansible would be trying to connect to the old ones.
2017-09-05 14:10:53 +03:00
mkrasilnikov
b930b0ef5a Place vault role credentials only to vault group hosts 2017-09-05 11:16:18 +03:00
mkrasilnikov
ad313c9d49 typo fix 2017-09-05 09:07:36 +03:00
mkrasilnikov
06035c0f4e Change vault CI CLOUD_MACHINE_TYPE to n1-standard-2 2017-09-05 09:07:36 +03:00
mkrasilnikov
e1384f6618 Using issue cert result var instead hostvars 2017-09-05 09:07:36 +03:00
mkrasilnikov
3acb86805b Rename vault_address to vault_bind_address 2017-09-05 09:07:35 +03:00
mkrasilnikov
bf0af1cd3d Vault role updates:
* using separate vault roles for generating certs with different `O` (Organization) subject fields;
  * configure vault roles for issuing certificates with different `CN` (Common name) subject field;
  * set `CN` and `O` to `kubernetes` and `etcd` certificates;
  * vault/defaults vars definition was simplified;
  * vault dirs variables defined in the kubernetes-defaults role for using
  shared tasks in etcd and kubernetes/secrets roles;
  * upgrade vault to 0.8.1;
  * generate random vault user password for each role by default;
  * fix `serial` file name for vault certs;
  * move vault auth request to issue_cert tasks;
  * enable `RBAC` in vault CI;
2017-09-05 09:07:35 +03:00
ArthurMa
c77d11f1c7 Bugfix (#1616)
lost executable path
2017-09-05 08:35:14 +03:00
Matthew Mosesohn
d279d145d5 Fix non-rbac deployment of resources as a list (#1613)
* Use kubectl apply instead of create/replace

Disable checks for existing resources to speed up execution.

* Fix non-rbac deployment of resources as a list

* Fix autoscaler tolerations field

* set all kube resources to state=latest

* Update netchecker and weave
2017-09-05 08:23:12 +03:00
Matthew Mosesohn
fc7905653e Add socat for CoreOS when using host deploy kubelet (#1575) 2017-09-04 11:30:18 +03:00
Matthew Mosesohn
660282e82f Make daemonsets upgradeable (#1606)
Canal will be covered by a separate PR
2017-09-04 11:30:01 +03:00
Matthew Mosesohn
77602dbb93 Move calico to daemonset (#1605)
* Drop legacy calico logic

* add calico as a daemonset
2017-09-04 11:29:51 +03:00
Matthew Mosesohn
a3e6896a43 Add RBAC support for canal (#1604)
Refactored how rbac_enabled is set
Added RBAC to ubuntu-canal-ha CI job
Added rbac for calico policy controller
2017-09-04 11:29:40 +03:00
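(The `rbac_enabled` refactor boils down to deriving one boolean from the configured authorization modes — a sketch, not the exact kubespray expression.)

```yaml
# Sketch: single flag reused by network plugins and add-ons that ship RBAC manifests.
rbac_enabled: "{{ 'RBAC' in authorization_modes }}"
```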
Dann
702ce446df Apply ClusterRoleBinding to dnsmaq when rbac_enabled (#1592)
* Add RBAC policies to dnsmasq

* fix merge conflict

* yamllint

* use .j2 extension for dnsmasq autoscaler
2017-09-03 10:53:45 +03:00
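(For reference, a ClusterRoleBinding of the kind added for dnsmasq looks roughly like this; the resource names and the bound role are illustrative, not copied from the PR.)

```yaml
# Illustrative 1.7-era manifest: bind the service account used by dnsmasq to a
# cluster role when rbac_enabled is true.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dnsmasq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: dnsmasq
    namespace: kube-system
```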
Brad Beam
8ae77e955e Adding in certificate serial numbers to manifests (#1392) 2017-09-01 09:02:23 +03:00
sgmitchell
783924e671 Change backup handler to only run v2 data backup if snap directory exists (#1594) 2017-08-31 18:23:24 +03:00
Julian Poschmann
93304e5f58 Fix calico leaving service behind. (#1599) 2017-08-31 12:00:05 +03:00
Brad Beam
917373ee55 Merge pull request #1595 from bradbeam/cacerts
Fixing CA certificate locations for k8s components
2017-08-30 21:31:19 -05:00
Brad Beam
7a98ad50b4 Fixing CA certificate locations for k8s components 2017-08-30 15:30:40 -05:00
Brad Beam
982058cc19 Merge pull request #1514 from vijaykatam/docker_systemd
Configurable docker yum repos, systemd fix
2017-08-30 11:50:23 -05:00
Oliver Moser
576beaa6a6 Include /opt/bin in PATH for host deployed kubelet on CoreOS (#1591)
* Include /opt/bin in PATH for host deployed kubelet on CoreOS

* Removing conditional check for CoreOS
2017-08-30 16:50:33 +03:00
Maxim Krasilnikov
6eb22c5db2 Change single Vault pki mount to multi pki mounts paths for etcd and kube CA`s (#1552)
* Added update CA trust step for etcd and kube/secrets roles

* Added load_balancer_domain_name to certificate alt names if defined. Reset CA's in RedHat os.

* Rename kube-cluster-ca.crt to vault-ca.crt; we need separate CAs for vault, etcd and kube.

* Vault role refactoring: remove optional cert vault auth because it was not used and did not work. Create separate CAs for vault and etcd.

* Fixed different certificate sets for vault cert_management

* Update doc/vault.md

* Fixed condition create vault CA, wrong group

* Fixed missing etcd_cert_path mount for rkt deployment type. Distribute vault roles for all vault hosts

* Removed wrong when condition in create etcd role vault tasks.
2017-08-30 16:03:22 +03:00
Brad Beam
72a0d78b3c Merge pull request #1585 from mattymo/canal_upgrade
Fix upgrade for canal and apiserver cert
2017-08-29 18:45:21 -05:00
Matthew Mosesohn
13d08af054 Fix upgrade for canal and apiserver cert
Fixes #1573
2017-08-29 22:08:30 +01:00
Brad Beam
80a7ae9845 Merge pull request #1581 from 2ffs2nns/update-calico-version
update calico version
2017-08-29 07:48:44 -05:00
Eric Hoffmann
6c30a7b2eb update calico version
update calico releases link
2017-08-28 16:23:51 -07:00
Matthew Mosesohn
76b72338da Add CNI config for rkt kubelet (#1579) 2017-08-28 21:11:01 +03:00
Chad Swenson
a39e78d42d Initial version of Flannel using CNI (#1486)
* Updates Controller Manager/Kubelet with Flannel's required configuration for CNI
* Removes old Flannel installation
* Install CNI enabled Flannel DaemonSet/ConfigMap/CNI bins and config (with portmap plugin) on host
* Uses RBAC if enabled
* Fixed an issue that could occur if br_netfilter is not a module and net.bridge.bridge-nf-call-iptables sysctl was not set
2017-08-25 10:07:50 +03:00
Brad Beam
4550dccb84 Fixing reference to vault leader url (#1569) 2017-08-24 23:21:39 +03:00
Hassan Zamani
01ce09f343 Add feature_gates var for customizing Kubernetes feature gates (#1520) 2017-08-24 23:18:38 +03:00
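(A sketch of how the new variable might be used in group_vars; the gate name is an example, not taken from the commit.)

```yaml
# Example only: pass alpha/experimental feature gates to the Kubernetes components.
kube_feature_gates:
  - "PersistentLocalVolumes=true"
```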
Brad Beam
71dca67ca2 Merge pull request #1508 from tmjd/update-calico-2-4-0
Update Calico to 2.4.1 release.
2017-08-24 14:57:29 -05:00
Hans Kristian Flaatten
327f9baccf Update supported component versions in README.md (#1555) 2017-08-24 21:36:53 +03:00
Yuki KIRII
a98b866a66 Verify if br_netfilter module exists (#1492) 2017-08-24 17:47:32 +03:00
Xavier Mehrenberger
3aabba7535 Remove discontinued option --reconcile-cidr if kube_network_plugin=="cloud" (#1568) 2017-08-24 17:01:30 +03:00
Mohamed Mehany
c22cfa255b Added private key file to ssh bastion conf (#1563)
* Added private key file to ssh bastion conf

* Used regular if condition instead of inline conditional
2017-08-24 17:00:45 +03:00
Brad Beam
af211b3d71 Merge pull request #1567 from mattymo/tolerations
Enable scheduling of critical pods and network plugins on master
2017-08-24 08:40:41 -05:00
Matthew Mosesohn
6bb3463e7c Enable scheduling of critical pods and network plugins on master
Added toleration to DNS, netchecker, fluentd, canal, and
calico policy.

Also small fixes to make yamllint pass.
2017-08-24 10:41:17 +01:00
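(The tolerations added to those manifests take roughly this form on a 1.7-era cluster — a generic sketch, not the exact kubespray templates.)

```yaml
# Sketch: let a critical pod schedule onto masters tainted with
# node-role.kubernetes.io/master=:NoSchedule.
spec:
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    - key: CriticalAddonsOnly
      operator: Exists
```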
Brad Beam
8b151d12b9 Adding yamllinter to ci steps (#1556)
* Adding yaml linter to ci check

* Minor linting fixes from yamllint

* Changing CI to install python pkgs from requirements.txt

- adding in a secondary requirements.txt for tests
- moving yamllint to tests requirements
2017-08-24 12:09:52 +03:00
Ian Lewis
ecb6dc3679 Register standalone master w/ taints (#1426)
If Kubernetes > 1.6 register standalone master nodes w/ a
node-role.kubernetes.io/master=:NoSchedule taint to allow
for more flexible scheduling rather than just marking unschedulable.
2017-08-23 16:44:11 +03:00
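(With this change the master is registered via kubelet's `--register-with-taints=node-role.kubernetes.io/master=:NoSchedule` flag rather than being marked unschedulable; the resulting node object looks roughly like the sketch below, node name illustrative.)

```yaml
# Sketch of the registered master node object (trimmed).
apiVersion: v1
kind: Node
metadata:
  name: master-1
spec:
  taints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
```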
riverzhang
49a223a17d Update elrepo-release rpm version (#1554) 2017-08-23 09:54:51 +03:00
Brad Beam
e5cfdc648c Adding ability to override max ttl (#1559)
Previously this would fail because we didn't set max ttl for vault temp
2017-08-23 09:54:01 +03:00
Erik Stidham
9f9f70aade Update Calico to 2.4.1 release.
- Switched Calico images to be pulled from quay.io
- Updated Canal too
2017-08-21 09:33:12 -05:00
Bogdan Dobrelya
e91c04f586 Merge pull request #1553 from mattymo/kubelet-deployment-doc
Add node to docs about kubelet deployment type changes
2017-08-21 11:42:23 +02:00
Matthew Mosesohn
277fa6c12d Add node to docs about kubelet deployment type changes 2017-08-21 09:13:59 +01:00
Matthew Mosesohn
ca3050ec3d Update to Kubernetes v1.7.3 (#1549)
Change kubelet deploy mode to host
Enable cri and qos per cgroup for kubelet
Update CoreOS images
Add upgrade hook for switching from kubelet deployment from docker to host.
Bump machine type for ubuntu-rkt-sep
2017-08-21 10:53:49 +03:00
Bogdan Dobrelya
1b3ced152b Merge pull request #1544 from bogdando/rpm_spec
[WIP] Support pbr builds and prepare for RPM packaging as the ansible-kubespray artifact
2017-08-21 09:13:59 +02:00
Vijay Katam
97031f9133 Make epel-release install configurable (#1497) 2017-08-20 14:03:10 +03:00
Vijay Katam
c92506e2e7 Add calico variable that enables ignoring Kernel's RPF Setting (#1493) 2017-08-20 14:01:09 +03:00
Kevin Lefevre
65a9772adf Add OpenStack LBaaS support (#1506) 2017-08-20 13:59:15 +03:00
Anton
1e07ee6cc4 etcd_compaction_retention every 8 hour (#1527) 2017-08-20 13:55:48 +03:00
Abdelsalam Abbas
01a130273f fix issues with if condition (#1537) 2017-08-20 13:55:13 +03:00
Miad Abrin
3c710219a1 Fix Some Typos in kubernetes master role (#1547)
* Fix Typo etc3 -> etcd3

* Fix typo in post-upgrade of master. stop -> start
2017-08-20 13:54:28 +03:00
Maxim Krasilnikov
2ba285a544 Fixed deploy cluster with vault cert manager (#1548)
* Added custom ips to etcd vault distributed certificates

* Added custom ips to kube-master vault distributed certificates

* Added comment about issue_cert_copy_ca var in vault/issue_cert role file

* Generate kube-proxy, controller-manager and scheduler certificates by vault

* Revert "Disable vault from CI (#1546)"

This reverts commit 781f31d2b8.

* Fixed upgrade cluster with vault cert manager

* Remove vault dir in reset playbook
2017-08-20 13:53:58 +03:00
Bogdan Dobrelya
668d02846d Align pbr config data with the spec file
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 16:04:48 +02:00
Bogdan Dobrelya
48edf1757b Adjust the rpm spec data
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 14:09:55 +02:00
Bogdan Dobrelya
db121049b3 Move the spec file
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 13:59:27 +02:00
Bogdan Dobrelya
8058cdbc0e Add pbr build configuration
Required for an RPM package builds with the contrib/ansible-kubespray.spec

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 12:56:01 +02:00
Bogdan Dobrelya
31d357284a Update gitignore to prepare for a package build
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 11:58:07 +02:00
Bogdan Dobrelya
4ee77ce026 Add an RPM spec file and customize ansible roles_path
Install roles under /usr/local/share/kubespray/roles,
playbooks - /usr/local/share/kubespray/playbooks/,
ansible.cfg and inventory group vars - into /etc/kubespray.
Ship README and an example inventory as the package docs.
Update the ansible.cfg to consume the roles from the given path,
including virtualenvs prefix, if defined.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 11:54:20 +02:00
Vijay Katam
55ba81fee5 Add changed_when: false to rpm query 2017-08-14 12:31:44 -07:00
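(A sketch of the pattern: a read-only rpm query should never report "changed", and restricting it to the RedHat family matches the follow-up commit below. The package name is illustrative.)

```yaml
# Sketch: query-only task, no change reported and no failure on a missing package.
- name: check if epel-release rpm is installed
  command: rpm -q epel-release
  register: epel_task_result
  failed_when: false
  changed_when: false
  when: ansible_os_family == "RedHat"
```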
Vijay Katam
7ad5523113 restrict rpm query to redhat 2017-08-10 13:49:14 -07:00
Vijay Katam
5efda3eda9 Configurable docker yum repos, systemd fix
* Make yum repos used for installing docker rpms configurable
* TasksMax is only supported in systemd version >= 226
* Change to systemd file should restart docker
2017-08-09 15:49:53 -07:00
269 changed files with 3833 additions and 1926 deletions

.gitignore

@@ -8,12 +8,87 @@ temp
.tox
.cache
*.bak
*.egg-info
*.pyc
*.pyo
*.tfstate
*.tfstate.backup
**/*.sw[pon]
/ssh-bastion.conf
**/*.sw[pon]
vagrant/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# Distribution / packaging
.Python
artifacts/
env/
build/
credentials/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# IPython Notebook
.ipynb_checkpoints
# pyenv
.python-version
# dotenv
.env
# virtualenv
venv/
ENV/


@@ -18,10 +18,7 @@ variables:
# us-west1-a
before_script:
- pip install ansible==2.3.0
- pip install netaddr
- pip install apache-libcloud==0.20.1
- pip install boto==2.9.0
- pip install -r tests/requirements.txt
- mkdir -p /.ssh
- cp tests/ansible.cfg .
@@ -56,10 +53,11 @@ before_script:
IDEMPOT_CHECK: "false"
RESET_CHECK: "false"
UPGRADE_TEST: "false"
KUBEADM_ENABLED: "false"
RESOLVCONF_MODE: docker_dns
LOG_LEVEL: "-vv"
ETCD_DEPLOYMENT: "docker"
KUBELET_DEPLOYMENT: "docker"
KUBELET_DEPLOYMENT: "host"
VAULT_DEPLOYMENT: "docker"
WEAVE_CPU_LIMIT: "100m"
AUTHORIZATION_MODES: "{ 'authorization_modes': [] }"
@@ -75,10 +73,7 @@ before_script:
- $HOME/.cache
before_script:
- docker info
- pip install ansible==2.3.0
- pip install netaddr
- pip install apache-libcloud==0.20.1
- pip install boto==2.9.0
- pip install -r tests/requirements.txt
- mkdir -p /.ssh
- mkdir -p $HOME/.ssh
- echo $PRIVATE_KEY | base64 -d > $HOME/.ssh/id_rsa
@@ -110,7 +105,7 @@ before_script:
# Check out latest tag if testing upgrade
# Uncomment when gitlab kargo repo has tags
#- test "${UPGRADE_TEST}" != "false" && git fetch --all && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
- test "${UPGRADE_TEST}" != "false" && git checkout acae0fe4a36bd1d3cd267e72ad01126a72d1458a
- test "${UPGRADE_TEST}" != "false" && git checkout 72ae7638bcc94c66afa8620dfa4ad9a9249327ea
# Create cluster
@@ -121,11 +116,11 @@ before_script:
-e ansible_python_interpreter=${PYPATH}
-e ansible_ssh_user=${SSH_USER}
-e bootstrap_os=${BOOTSTRAP_OS}
-e cert_management=${CERT_MGMT:-script}
-e cloud_provider=gce
-e deploy_netchecker=true
-e download_localhost=${DOWNLOAD_LOCALHOST}
-e download_run_once=${DOWNLOAD_RUN_ONCE}
-e cert_management=${CERT_MGMT:-script}
-e "{deploy_netchecker: true}"
-e "{download_localhost: ${DOWNLOAD_LOCALHOST}}"
-e "{download_run_once: ${DOWNLOAD_RUN_ONCE}}"
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-e kubedns_min_replicas=1
@@ -133,6 +128,9 @@ before_script:
-e local_release_dir=${PWD}/downloads
-e resolvconf_mode=${RESOLVCONF_MODE}
-e vault_deployment_type=${VAULT_DEPLOYMENT}
-e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
-e "${AUTHORIZATION_MODES}"
--limit "all:!fake_hosts"
cluster.yml
@@ -150,17 +148,19 @@ before_script:
-e ansible_ssh_user=${SSH_USER}
-e bootstrap_os=${BOOTSTRAP_OS}
-e cloud_provider=gce
-e deploy_netchecker=true
-e download_localhost=${DOWNLOAD_LOCALHOST}
-e download_run_once=${DOWNLOAD_RUN_ONCE}
-e "{deploy_netchecker: true}"
-e "{download_localhost: ${DOWNLOAD_LOCALHOST}}"
-e "{download_run_once: ${DOWNLOAD_RUN_ONCE}}"
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-e kubedns_min_replicas=1
-e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
-e local_release_dir=${PWD}/downloads
-e resolvconf_mode=${RESOLVCONF_MODE}
-e vault_deployment_type=${VAULT_DEPLOYMENT}
-e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
-e "${AUTHORIZATION_MODES}"
--limit "all:!fake_hosts"
$PLAYBOOK;
@@ -168,7 +168,9 @@ before_script:
# Tests Cases
## Test Master API
- ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/010_check-apiserver.yml $LOG_LEVEL
- >
ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/010_check-apiserver.yml $LOG_LEVEL
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
## Ping the between 2 pod
- ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/030_check-network.yml $LOG_LEVEL
@@ -183,15 +185,20 @@ before_script:
-b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
--private-key=${HOME}/.ssh/id_rsa
-e bootstrap_os=${BOOTSTRAP_OS}
-e cloud_provider=gce
-e ansible_python_interpreter=${PYPATH}
-e download_localhost=${DOWNLOAD_LOCALHOST}
-e download_run_once=${DOWNLOAD_RUN_ONCE}
-e deploy_netchecker=true
-e resolvconf_mode=${RESOLVCONF_MODE}
-e local_release_dir=${PWD}/downloads
-e "{deploy_netchecker: true}"
-e "{download_localhost: ${DOWNLOAD_LOCALHOST}}"
-e "{download_run_once: ${DOWNLOAD_RUN_ONCE}}"
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
-e kubedns_min_replicas=1
-e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
-e local_release_dir=${PWD}/downloads
-e resolvconf_mode=${RESOLVCONF_MODE}
-e vault_deployment_type=${VAULT_DEPLOYMENT}
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
-e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-e "${AUTHORIZATION_MODES}"
--limit "all:!fake_hosts"
cluster.yml;
@@ -213,6 +220,7 @@ before_script:
-b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
--private-key=${HOME}/.ssh/id_rsa
-e bootstrap_os=${BOOTSTRAP_OS}
-e cloud_provider=gce
-e ansible_python_interpreter=${PYPATH}
-e reset_confirmation=yes
--limit "all:!fake_hosts"
@@ -226,15 +234,20 @@ before_script:
-b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
--private-key=${HOME}/.ssh/id_rsa
-e bootstrap_os=${BOOTSTRAP_OS}
-e cloud_provider=gce
-e ansible_python_interpreter=${PYPATH}
-e download_localhost=${DOWNLOAD_LOCALHOST}
-e download_run_once=${DOWNLOAD_RUN_ONCE}
-e deploy_netchecker=true
-e resolvconf_mode=${RESOLVCONF_MODE}
-e local_release_dir=${PWD}/downloads
-e "{deploy_netchecker: true}"
-e "{download_localhost: ${DOWNLOAD_LOCALHOST}}"
-e "{download_run_once: ${DOWNLOAD_RUN_ONCE}}"
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
-e kubedns_min_replicas=1
-e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
-e local_release_dir=${PWD}/downloads
-e resolvconf_mode=${RESOLVCONF_MODE}
-e vault_deployment_type=${VAULT_DEPLOYMENT}
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
-e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-e "${AUTHORIZATION_MODES}"
--limit "all:!fake_hosts"
cluster.yml;
@@ -263,27 +276,52 @@ before_script:
-e cloud_region=${CLOUD_REGION}
# Test matrix. Leave the comments for markup scripts.
.coreos_calico_sep_variables: &coreos_calico_sep_variables
.coreos_calico_aio_variables: &coreos_calico_aio_variables
# stage: deploy-gce-part1
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
KUBE_NETWORK_PLUGIN: calico
CLOUD_IMAGE: coreos-stable
CLOUD_IMAGE: coreos-stable-1465-6-0-v20170817
CLOUD_REGION: us-west1-b
CLUSTER_MODE: separate
CLOUD_MACHINE_TYPE: "n1-standard-2"
CLUSTER_MODE: aio
BOOTSTRAP_OS: coreos
RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
##User-data to simply turn off coreos upgrades
STARTUP_SCRIPT: 'systemctl disable locksmithd && systemctl stop locksmithd'
.ubuntu_canal_ha_variables: &ubuntu_canal_ha_variables
.ubuntu_canal_ha_rbac_variables: &ubuntu_canal_ha_rbac_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: canal
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
CLOUD_IMAGE: ubuntu-1604-xenial
CLOUD_REGION: europe-west1-b
CLOUD_MACHINE_TYPE: "n1-standard-2"
CLUSTER_MODE: ha
UPGRADE_TEST: "graceful"
STARTUP_SCRIPT: ""
.centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: weave
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
CLOUD_IMAGE: centos-7
CLOUD_MACHINE_TYPE: "n1-standard-1"
CLOUD_REGION: us-central1-b
CLUSTER_MODE: ha
KUBEADM_ENABLED: "true"
UPGRADE_TEST: "graceful"
STARTUP_SCRIPT: ""
.ubuntu_canal_kubeadm_variables: &ubuntu_canal_kubeadm_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: canal
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
CLOUD_IMAGE: ubuntu-1604-xenial
CLOUD_MACHINE_TYPE: "n1-standard-1"
CLOUD_REGION: europe-west1-b
CLUSTER_MODE: ha
KUBEADM_ENABLED: "true"
STARTUP_SCRIPT: ""
.rhel7_weave_variables: &rhel7_weave_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: weave
@@ -297,6 +335,7 @@ before_script:
KUBE_NETWORK_PLUGIN: flannel
CLOUD_IMAGE: centos-7
CLOUD_REGION: us-west1-a
CLOUD_MACHINE_TYPE: "n1-standard-2"
CLUSTER_MODE: default
STARTUP_SCRIPT: ""
@@ -311,7 +350,7 @@ before_script:
.coreos_canal_variables: &coreos_canal_variables
# stage: deploy-gce-part2
KUBE_NETWORK_PLUGIN: canal
CLOUD_IMAGE: coreos-stable
CLOUD_IMAGE: coreos-stable-1465-6-0-v20170817
CLOUD_REGION: us-east1-b
CLUSTER_MODE: default
BOOTSTRAP_OS: coreos
@@ -350,7 +389,7 @@ before_script:
.coreos_alpha_weave_ha_variables: &coreos_alpha_weave_ha_variables
# stage: deploy-gce-special
KUBE_NETWORK_PLUGIN: weave
CLOUD_IMAGE: coreos-alpha-1325-0-0-v20170216
CLOUD_IMAGE: coreos-alpha-1506-0-0-v20170817
CLOUD_REGION: us-west1-a
CLUSTER_MODE: ha-scale
BOOTSTRAP_OS: coreos
@@ -367,15 +406,16 @@ before_script:
KUBELET_DEPLOYMENT: rkt
STARTUP_SCRIPT: ""
#Note(mattymo): Vault deployment is broken and needs work
#.ubuntu_vault_sep_variables: &ubuntu_vault_sep_variables
## stage: deploy-gce-part1
# KUBE_NETWORK_PLUGIN: canal
# CERT_MGMT: vault
# CLOUD_IMAGE: ubuntu-1604-xenial
# CLOUD_REGION: us-central1-b
# CLUSTER_MODE: separate
# STARTUP_SCRIPT: ""
.ubuntu_vault_sep_variables: &ubuntu_vault_sep_variables
# stage: deploy-gce-part1
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
CLOUD_MACHINE_TYPE: "n1-standard-2"
KUBE_NETWORK_PLUGIN: canal
CERT_MGMT: vault
CLOUD_IMAGE: ubuntu-1604-xenial
CLOUD_REGION: us-central1-b
CLUSTER_MODE: separate
STARTUP_SCRIPT: ""
.ubuntu_flannel_rbac_variables: &ubuntu_flannel_rbac_variables
# stage: deploy-gce-special
@@ -387,13 +427,13 @@ before_script:
STARTUP_SCRIPT: ""
# Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
coreos-calico-sep:
coreos-calico-aio:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_calico_sep_variables
<<: *coreos_calico_aio_variables
when: on_success
except: ['triggers']
only: [/^pr-.*$/]
@@ -404,7 +444,7 @@ coreos-calico-sep-triggers:
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_calico_sep_variables
<<: *coreos_calico_aio_variables
when: on_success
only: ['triggers']
@@ -451,24 +491,66 @@ ubuntu-weave-sep-triggers:
only: ['triggers']
# More builds for PRs/merges (manual) and triggers (auto)
ubuntu-canal-ha:
ubuntu-canal-ha-rbac:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_ha_variables
<<: *ubuntu_canal_ha_rbac_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
ubuntu-canal-ha-triggers:
ubuntu-canal-ha-rbac-triggers:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_ha_variables
<<: *ubuntu_canal_ha_rbac_variables
when: on_success
only: ['triggers']
ubuntu-canal-kubeadm-rbac:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_kubeadm_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
ubuntu-canal-kubeadm-triggers:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_kubeadm_variables
when: on_success
only: ['triggers']
centos-weave-kubeadm-rbac:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos_weave_kubeadm_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
centos-weave-kubeadm-triggers:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos_weave_kubeadm_variables
when: on_success
only: ['triggers']
@@ -600,17 +682,16 @@ ubuntu-rkt-sep:
except: ['triggers']
only: ['master', /^pr-.*$/]
#Note(mattymo): Vault deployment is broken (https://github.com/kubernetes-incubator/kubespray/issues/1545)
#ubuntu-vault-sep:
# stage: deploy-gce-part1
# <<: *job
# <<: *gce
# variables:
# <<: *gce_variables
# <<: *ubuntu_vault_sep_variables
# when: manual
# except: ['triggers']
# only: ['master', /^pr-.*$/]
ubuntu-vault-sep:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_vault_sep_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
ubuntu-flannel-rbac-sep:
stage: deploy-gce-special
@@ -643,6 +724,13 @@ syntax-check:
- ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root extra_playbooks/upgrade-only-k8s.yml -vvv --syntax-check
except: ['triggers', 'master']
yamllint:
<<: *job
stage: unit-tests
script:
- yamllint roles
except: ['triggers', 'master']
tox-inventory-builder:
stage: unit-tests
<<: *job

.yamllint (new file)

@@ -0,0 +1,16 @@
---
extends: default
rules:
braces:
min-spaces-inside: 0
max-spaces-inside: 1
brackets:
min-spaces-inside: 0
max-spaces-inside: 1
indentation:
spaces: 2
indent-sequences: consistent
line-length: disable
new-line-at-end-of-file: disable
truthy: disable


@@ -53,13 +53,13 @@ Versions of supported components
--------------------------------
[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.6.7 <br>
[etcd](https://github.com/coreos/etcd/releases) v3.0.17 <br>
[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.7.3 <br>
[etcd](https://github.com/coreos/etcd/releases) v3.2.4 <br>
[flanneld](https://github.com/coreos/flannel/releases) v0.8.0 <br>
[calicoctl](https://github.com/projectcalico/calico-docker/releases) v0.23.0 <br>
[calico](https://docs.projectcalico.org/v2.5/releases/) v2.5.0 <br>
[canal](https://github.com/projectcalico/canal) (given calico/flannel versions) <br>
[weave](http://weave.works/) v2.0.1 <br>
[docker](https://www.docker.com/) v1.13.1 (see note)<br>
[docker](https://www.docker.com/) v1.13 (see note)<br>
[rkt](https://coreos.com/rkt/docs/latest/) v1.21.0 (see Note 2)<br>
Note: kubernetes doesn't support newer docker versions. Among other things kubelet currently breaks on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).


@@ -1,6 +1,7 @@
[ssh_connection]
pipelining=True
#ssh_args = -F ./ssh-bastion.conf -o ControlMaster=auto -o ControlPersist=30m
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100
#ssh_args = -F ./ssh-bastion.conf -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100
#control_path = ~/.ssh/ansible-%%r@%%h:%%p
[defaults]
host_key_checking=False
@@ -10,3 +11,4 @@ fact_caching_connection = /tmp
stdout_callback = skippy
library = ./library
callback_whitelist = profile_tasks
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles


@@ -62,15 +62,28 @@
roles:
- { role: kubespray-defaults}
- { role: kubernetes/node, tags: node }
- { role: network_plugin, tags: network }
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes/master, tags: master }
- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes/kubeadm, tags: kubeadm, when: "kubeadm_enabled" }
- { role: network_plugin, tags: network }
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes-apps/rotate_tokens, tags: rotate_tokens, when: "secret_changed|default(false)" }
- { role: kubernetes-apps/network_plugin, tags: network }
- { role: kubernetes-apps/policy_controller, tags: policy-controller }
- { role: kubernetes/client, tags: client }
- hosts: calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"


@@ -9,10 +9,10 @@ if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
exit 1
fi
if [ $(az &>/dev/null) ] ; then
if az &>/dev/null; then
echo "azure cli 2.0 found, using it instead of 1.0"
./apply-rg_2.sh "$AZURE_RESOURCE_GROUP"
elif [ $(azure &>/dev/null) ] ; then
elif azure &>/dev/null; then
ansible-playbook generate-templates.yml
azure group deployment create -f ./.generated/network.json -g $AZURE_RESOURCE_GROUP


@@ -9,7 +9,7 @@ if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
exit 1
fi
if [ $(az &>/dev/null) ] ; then
if az &>/dev/null; then
echo "azure cli 2.0 found, using it instead of 1.0"
./clear-rg_2.sh "$AZURE_RESOURCE_GROUP"
else


@@ -9,9 +9,9 @@ if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
exit 1
fi
# check if azure cli 2.0 exists else use azure cli 1.0
if [ $(az &>/dev/null) ] ; then
if az &>/dev/null; then
ansible-playbook generate-inventory_2.yml -e azure_resource_group="$AZURE_RESOURCE_GROUP"
elif [ $(azure &>/dev/null) ]; then
elif azure &>/dev/null; then
ansible-playbook generate-inventory.yml -e azure_resource_group="$AZURE_RESOURCE_GROUP"
else
echo "Azure cli not found"


@@ -0,0 +1,60 @@
%global srcname ansible_kubespray
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
Name: ansible-kubespray
Version: XXX
Release: XXX
Summary: Ansible modules for installing Kubernetes
Group: System Environment/Libraries
License: ASL 2.0
Vendor: Kubespray <smainklh@gmail.com>
Url: https://github.com/kubernetes-incubator/kubespray
Source0: https://github.com/kubernetes-incubator/kubespray/archive/%{upstream_version}.tar.gz
BuildArch: noarch
BuildRequires: git
BuildRequires: python2-devel
BuildRequires: python-setuptools
BuildRequires: python-d2to1
BuildRequires: python-pbr
Requires: ansible
Requires: python-jinja2
Requires: python-netaddr
%description
Ansible-kubespray is a set of Ansible modules and playbooks for
installing a Kubernetes cluster. If you have questions, join us
on the https://slack.k8s.io, channel '#kubespray'.
%prep
%autosetup -n %{name}-%{upstream_version} -S git
%build
%{__python2} setup.py build
%install
export PBR_VERSION=%{version}
export SKIP_PIP_INSTALL=1
%{__python2} setup.py install --skip-build --root %{buildroot}
%files
%doc README.md
%doc inventory/inventory.example
%config /etc/kubespray/ansible.cfg
%config /etc/kubespray/inventory/group_vars/all.yml
%config /etc/kubespray/inventory/group_vars/k8s-cluster.yml
%license LICENSE
%{python2_sitelib}/%{srcname}-%{version}-py%{python2_version}.egg-info
/usr/local/share/kubespray/roles/
/usr/local/share/kubespray/playbooks/
%defattr(-,root,root)
%changelog


@@ -25,16 +25,29 @@ export AWS_DEFAULT_REGION="zzz"
- Rename `contrib/terraform/aws/terraform.tfvars.example` to `terraform.tfvars`
- Update `contrib/terraform/aws/terraform.tfvars` with your data
- Allocate new AWS Elastic IPs: Depending on # of Availability Zones used (2 for each AZ)
- Create an AWS EC2 SSH Key
- Allocate a new AWS Elastic IP. Use this for your `loadbalancer_apiserver_address` value (below)
- Create an AWS EC2 SSH Key
- Run with `terraform apply --var-file="credentials.tfvars"` or `terraform apply` depending if you exported your AWS credentials
Example:
```commandline
terraform apply -var-file=credentials.tfvars -var 'loadbalancer_apiserver_address=34.212.228.77'
```
- Terraform automatically creates an Ansible Inventory file called `hosts` with the created infrastructure in the directory `inventory`
- Ansible will automatically generate an ssh config file for your bastion hosts. To make use of it, make sure you have a line in your `ansible.cfg` file that looks like the following:
```commandline
ssh_args = -F ./ssh-bastion.conf -o ControlMaster=auto -o ControlPersist=30m
```
- Once the infrastructure is created, you can run the kubespray playbooks and supply inventory/hosts with the `-i` flag.
Example (this one assumes you are using CoreOS)
```commandline
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_ssh_user=core -e bootstrap_os=coreos -b --become-user=root --flush-cache
```
**Troubleshooting**
***Remaining AWS IAM Instance Profile***:


@@ -162,7 +162,7 @@ resource "aws_instance" "k8s-worker" {
*/
data "template_file" "inventory" {
template = "${file("${path.module}/templates/inventory.tpl")}"
vars {
public_ip_address_bastion = "${join("\n",formatlist("bastion ansible_ssh_host=%s" , aws_instance.bastion-server.*.public_ip))}"
connection_strings_master = "${join("\n",formatlist("%s ansible_ssh_host=%s",aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}"
@@ -173,9 +173,9 @@ data "template_file" "inventory" {
list_etcd = "${join("\n",aws_instance.k8s-etcd.*.tags.Name)}"
elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
elb_api_port = "loadbalancer_apiserver.port=${var.aws_elb_api_port}"
kube_insecure_apiserver_address = "kube_apiserver_insecure_bind_address: ${var.kube_insecure_apiserver_address}"
loadbalancer_apiserver_address = "loadbalancer_apiserver.address=${var.loadbalancer_apiserver_address}"
}
}
resource "null_resource" "inventories" {
@@ -183,4 +183,8 @@ resource "null_resource" "inventories" {
command = "echo '${data.template_file.inventory.rendered}' > ../../../inventory/hosts"
}
triggers {
template = "${data.template_file.inventory.rendered}"
}
}


@@ -25,4 +25,4 @@ kube-master
[k8s-cluster:vars]
${elb_api_fqdn}
${elb_api_port}
${kube_insecure_apiserver_address}
${loadbalancer_apiserver_address}


@@ -5,11 +5,11 @@ aws_cluster_name = "devtest"
aws_vpc_cidr_block = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20","10.250.208.0/20"]
aws_cidr_subnets_public = ["10.250.224.0/20","10.250.240.0/20"]
aws_avail_zones = ["eu-central-1a","eu-central-1b"]
aws_avail_zones = ["us-west-2a","us-west-2b"]
#Bastion Host
aws_bastion_ami = "ami-5900cc36"
aws_bastion_size = "t2.small"
aws_bastion_ami = "ami-db56b9a3"
aws_bastion_size = "t2.medium"
#Kubernetes Cluster
@@ -23,9 +23,10 @@ aws_etcd_size = "t2.medium"
aws_kube_worker_num = 4
aws_kube_worker_size = "t2.medium"
aws_cluster_ami = "ami-903df7ff"
aws_cluster_ami = "ami-db56b9a3"
#Settings AWS ELB
aws_elb_api_port = 443
k8s_secure_api_port = 443
aws_elb_api_port = 6443
k8s_secure_api_port = 6443
kube_insecure_apiserver_address = "0.0.0.0"


@@ -96,6 +96,6 @@ variable "k8s_secure_api_port" {
description = "Secure Port of K8S API Server"
}
variable "kube_insecure_apiserver_address" {
description= "Bind Address for insecure Port of K8s API Server"
variable "loadbalancer_apiserver_address" {
description= "Bind Address for ELB of K8s API Server"
}


@@ -11,7 +11,7 @@ services.
There are some assumptions made to try and ensure it will work on your openstack cluster.
* floating-ips are used for access, but you can have masters and nodes that don't use floating-ips if needed. You need currently at least 1 floating ip, which we would suggest is used on a master.
* floating-ips are used for access, but you can have masters and nodes that don't use floating-ips if needed. You need currently at least 1 floating ip, which needs to be used on a master. If using more than one, at least one should be on a master for bastions to work fine.
* you already have a suitable OS image in glance
* you already have both an internal network and a floating-ip pool created
* you have security-groups enabled
@@ -75,7 +75,9 @@ $ echo Setting up Terraform creds && \
export TF_VAR_auth_url=${OS_AUTH_URL}
```
If you want to provision master or node VMs that don't use floating ips, write on a `my-terraform-vars.tfvars` file, for example:
##### Alternative: etcd inside masters
If you want to provision master or node VMs that don't use floating ips and where etcd is inside masters, write on a `my-terraform-vars.tfvars` file, for example:
```
number_of_k8s_masters = "1"
@@ -85,6 +87,28 @@ number_of_k8s_nodes = "0"
```
This will provision one VM as master using a floating ip, two additional masters using no floating ips (these will only have private ips inside your tenancy) and one VM as node, again without a floating ip.
##### Alternative: etcd on separate machines
If you want to provision master or node VMs that don't use floating ips and where **etcd is on separate nodes from Kubernetes masters**, write on a `my-terraform-vars.tfvars` file, for example:
```
number_of_etcd = "3"
number_of_k8s_masters = "0"
number_of_k8s_masters_no_etcd = "1"
number_of_k8s_masters_no_floating_ip = "0"
number_of_k8s_masters_no_floating_ip_no_etcd = "2"
number_of_k8s_nodes_no_floating_ip = "1"
number_of_k8s_nodes = "2"
flavor_k8s_node = "desired-flavor-id"
flavor_k8s_master = "desired-flavor-id"
flavor_etcd = "desired-flavor-id"
```
This will provision one VM as master using a floating ip, two additional masters using no floating ips (these will only have private ips inside your tenancy), two VMs as nodes with floating ips, one VM as node without floating ip and three VMs for etcd.
##### Alternative: add GlusterFS
Additionally, now the terraform based installation supports provisioning of a GlusterFS shared file system based on a separate set of VMs, running either a Debian or RedHat based set of VMs. To enable this, you need to add to your `my-terraform-vars.tfvars` the following variables:
```


@@ -1,5 +1,5 @@
resource "openstack_networking_floatingip_v2" "k8s_master" {
count = "${var.number_of_k8s_masters}"
count = "${var.number_of_k8s_masters + var.number_of_k8s_masters_no_etcd}"
pool = "${var.floatingip_pool}"
}
@@ -73,6 +73,44 @@ resource "openstack_compute_instance_v2" "k8s_master" {
}
resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-${count.index+1}"
count = "${var.number_of_k8s_masters_no_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}" ]
floating_ip = "${element(openstack_networking_floatingip_v2.k8s_master.*.address, count.index + var.number_of_k8s_masters)}"
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,kube-node,k8s-cluster,vault"
}
}
resource "openstack_compute_instance_v2" "etcd" {
name = "${var.cluster_name}-etcd-${count.index+1}"
count = "${var.number_of_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_etcd}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,vault,no-floating"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
@@ -94,6 +132,27 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,kube-node,k8s-cluster,vault,no-floating"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.cluster_name}-k8s-node-${count.index+1}"
count = "${var.number_of_k8s_nodes}"


@@ -6,10 +6,22 @@ variable "number_of_k8s_masters" {
default = 2
}
variable "number_of_k8s_masters_no_etcd" {
default = 2
}
variable "number_of_etcd" {
default = 2
}
variable "number_of_k8s_masters_no_floating_ip" {
default = 2
}
variable "number_of_k8s_masters_no_floating_ip_no_etcd" {
default = 2
}
variable "number_of_k8s_nodes" {
default = 1
}
@@ -59,6 +71,10 @@ variable "flavor_k8s_node" {
default = 3
}
variable "flavor_etcd" {
default = 3
}
variable "flavor_gfs_node" {
default = 3
}


@@ -161,3 +161,11 @@ Cloud providers configuration
=============================
Please refer to the official documentation, for example [GCE configuration](http://docs.projectcalico.org/v1.5/getting-started/docker/installation/gce) requires a security rule for calico ip-ip tunnels. Note, calico is always configured with ``ipip: true`` if the cloud provider was defined.
##### Optional : Ignore kernel's RPF check setting
By default the felix agent(calico-node) will abort if the Kernel RPF setting is not 'strict'. If you want Calico to ignore the Kernel setting:
```
calico_node_ignorelooserpf: true
```


@@ -23,13 +23,6 @@ ip a show dev flannel.1
valid_lft forever preferred_lft forever
```
* Docker must be configured with a bridge ip in the flannel subnet.
```
ps aux | grep docker
root 20196 1.7 2.7 1260616 56840 ? Ssl 10:18 0:07 /usr/bin/docker daemon --bip=10.233.16.1/24 --mtu=1450
```
* Try to run a container and check its ip address
```


@@ -28,10 +28,10 @@ an example inventory located
You can use an
[inventory generator](https://github.com/kubernetes-incubator/kubespray/blob/master/contrib/inventory_builder/inventory.py)
to create or modify an Ansible inventory. Currently, it is limited in
functionality and is only use for making a basic Kubespray cluster, but it does
support creating large clusters. It now supports
functionality and is only used for configuring a basic Kubespray cluster inventory, but it does
support creating inventory file for large clusters as well. It now supports
separated ETCD and Kubernetes master roles from node role if the size exceeds a
certain threshold. Run inventory.py help for more information.
certain threshold. Run `python3 contrib/inventory_builder/inventory.py help` help for more information.
Example inventory generator usage:
@@ -57,13 +57,61 @@ ansible-playbook -i my_inventory/inventory.cfg cluster.yml -b -v \
See more details in the [ansible guide](ansible.md).
Adding nodes
--------------------------
------------
You may want to add worker nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.
You may want to add **worker** nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.
- Add the new worker node to your inventory under kube-node (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Run the ansible-playbook command, substituting `scale.yml` for `cluster.yml`:
```
ansible-playbook -i my_inventory/inventory.cfg scale.yml -b -v \
--private-key=~/.ssh/private_key
```
```
Connecting to Kubernetes
------------------------
By default, Kubespray configures kube-master hosts with insecure access to
kube-apiserver via port 8080. A kubeconfig file is not necessary in this case,
because kubectl will use http://localhost:8080 to connect. The kubeconfig files
generated will point to localhost (on kube-masters) and kube-node hosts will
connect either to a localhost nginx proxy or to a loadbalancer if configured.
More details on this process are in the [HA guide](ha.md).
Kubespray permits connecting to the cluster remotely on any IP of any
kube-master host on port 6443 by default. However, this requires
authentication. One could generate a kubeconfig based on one of the installed
kube-master hosts (needs improvement) or connect with a username and password.
By default, a user with admin rights is created, named `kube`.
The password can be viewed after deployment by looking at the file
`PATH_TO_KUBESPRAY/credentials/kube_user`. This contains a randomly generated
password. If you wish to set your own password, just precreate/modify this
file yourself.
For more information on kubeconfig and accessing a Kubernetes cluster, refer to
the Kubernetes [documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
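A minimal sketch of remote access with the generated basic-auth credentials might look like the following (the host name is a placeholder, and skipping TLS verification is only for illustration; in practice you would supply the cluster CA instead):
```
PASSWORD=$(cat PATH_TO_KUBESPRAY/credentials/kube_user)
kubectl --server=https://first-master:6443 --username=kube --password="$PASSWORD" \
    --insecure-skip-tls-verify=true get nodes
```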
Accessing Kubernetes Dashboard
------------------------------
If the variable `dashboard_enabled` is set (default is true), then you can
access the Kubernetes Dashboard at the following URL:
https://kube:_kube-password_@_host_:6443/ui/
To see the password, refer to the section above, titled *Connecting to
Kubernetes*. The host can be any kube-master or kube-node or loadbalancer
(when enabled).
Accessing Kubernetes API
------------------------
The main client of Kubernetes is `kubectl`. It is installed on each kube-master
host and can optionally be configured on your ansible host by setting
`kubeconfig_localhost: true` in the configuration. If enabled, kubectl and
admin.conf will appear in the artifacts/ directory after deployment. You can
see a list of nodes by running the following commands:
cd artifacts/
./kubectl --kubeconfig admin.conf get nodes
If desired, copy kubectl to your bin dir and admin.conf to ~/.kube/config.
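For example, that copy step could be done as follows (the destination paths are just one common choice):
```
cp artifacts/kubectl /usr/local/bin/kubectl
mkdir -p ~/.kube && cp artifacts/admin.conf ~/.kube/config
```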

View File

@@ -67,3 +67,17 @@ follows:
* network_plugin (such as Calico or Weave)
* kube-apiserver, kube-scheduler, and kube-controller-manager
* Add-ons (such as KubeDNS)
#### Upgrade considerations
Kubespray supports rotating certificates used for etcd and Kubernetes
components, but some manual steps may be required. If you have a pod that
requires use of a service token and is deployed in a namespace other than
`kube-system`, you will need to manually delete the affected pods after
rotating certificates. This is because all service account tokens are dependent
on the apiserver token that is used to generate them. When the certificate
rotates, all service account tokens must be rotated as well. During the
kubernetes-apps/rotate_tokens role, only pods in kube-system are destroyed and
recreated. All other invalidated service account tokens are cleaned up
automatically, but the affected pods are not deleted, out of an abundance of
caution regarding impact to user-deployed pods.
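As an illustration only (the namespace is hypothetical), forcing the affected pods in a non-`kube-system` namespace to be recreated with fresh tokens could be done with:
```
kubectl -n my-app-namespace delete pods --all
```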

View File

@@ -67,6 +67,8 @@ following default cluster paramters:
OpenStack (default is unset)
* *kube_hostpath_dynamic_provisioner* - Required for use of PetSets type in
Kubernetes
* *kube_feature_gates* - A list of key=value pairs that describe feature gates for
alpha/experimental Kubernetes features (default is `[]`; see the sketch after this list).
* *authorization_modes* - A list of [authorization mode](
https://kubernetes.io/docs/admin/authorization/#using-flags-for-your-authorization-module)
that the cluster should be configured for. Defaults to `[]` (i.e. no authorization).
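A minimal sketch of how these two variables could be set in group vars (the feature gate name and authorization modes are illustrative values, not recommendations):
```
kube_feature_gates:
  - "Accelerators=true"
authorization_modes: ['Node', 'RBAC']
```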
@@ -98,10 +100,18 @@ Stack](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/dns-st
``--insecure-registry=myregistry.mydomain:5000``
* *http_proxy/https_proxy/no_proxy* - Proxy variables for deploying behind a
proxy
* *kubelet_deployment_type* - Controls which platform to deploy kubelet on.
Available options are ``host``, ``rkt``, and ``docker``. ``docker`` mode
is unlikely to work on newer releases. Starting with the Kubernetes v1.7
series, this defaults to ``host``. Before v1.7, the default was Docker.
This is because of cgroup [issues](https://github.com/kubernetes/kubernetes/issues/43704).
* *kubelet_load_modules* - For some setups, kubelet needs to load kernel modules, for example
when mounting persistent volumes backed by ceph or rbd. These modules may not be loaded by the
preinstall kubernetes processes. Set this variable to true to let kubelet load kernel modules
(see the example after this list).
* *kubelet_cgroup_driver* - Allows manual override of the
cgroup-driver option for Kubelet. By default autodetection is used
to match Docker configuration.
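For instance, these kubelet-related variables could be pinned explicitly in your group vars (the values shown are examples, not recommended defaults):
```
kubelet_deployment_type: host
kubelet_load_modules: true
kubelet_cgroup_driver: systemd
```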
##### Custom flags for Kube Components
For all kube components, custom flags can be passed in. This allows for edge cases where users need changes to the default deployment that may not be applicable to all deployments. This can be done by providing a list of flags. Example:
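A sketch of such a list, assuming a per-component variable name like `kubelet_custom_flags` (check `docs/vars.md` for the exact names supported by your version):
```
kubelet_custom_flags:
  - "--eviction-hard=memory.available<100Mi"
  - "--eviction-minimum-reclaim=memory.available=500Mi"
```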
@@ -119,5 +129,8 @@ The possible vars are:
#### User accounts
Kubespray sets up two Kubernetes accounts by default: ``root`` and ``kube``. Their
passwords default to changeme. You can set this by changing ``kube_api_pwd``.
By default, a user with admin rights is created, named `kube`.
The password can be viewed after deployment by looking at the file
`PATH_TO_KUBESPRAY/credentials/kube_user`. This contains a randomly generated
password. If you wish to set your own password, just precreate/modify this
file yourself or change `kube_api_pwd` var.
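For example, to read the generated password or pin your own before deploying (the path is relative to the kubespray checkout):
```
cat credentials/kube_user
echo -n 'MyOwnSecretPassword' > credentials/kube_user
```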

View File

@@ -26,7 +26,6 @@ first task, is to stop any temporary instances of Vault, to free the port for
the long-term. At the end of this task, the entire Vault cluster should be up
and ready to go.
Keys to the Kingdom
-------------------
@@ -44,30 +43,38 @@ to authenticate to almost everything in Kubernetes and decode all private
(HTTPS) traffic on your network signed by Vault certificates.
For even greater security, you may want to remove and store elsewhere any
CA keys generated as well (e.g. /etc/vault/ssl/ca-key.pem).
CA keys generated as well (e.g. /etc/vault/ssl/ca-key.pem).
Vault by default encrypts all traffic to and from the datastore backend, all
resting data, and uses TLS for its TCP listener. It is recommended that you
do not change the Vault config to disable TLS, unless you absolutely have to.
Usage
-----
To get the Vault role running, you must do two things at a minimum:
1. Assign the ``vault`` group to at least 1 node in your inventory
2. Change ``cert_management`` to be ``vault`` instead of ``script``
1. Change ``cert_management`` to be ``vault`` instead of ``script``
Nothing else is required, but customization is possible. Check
``roles/vault/defaults/main.yml`` for the different variables that can be
overridden, most common being ``vault_config``, ``vault_port``, and
``vault_deployment_type``.
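A minimal sketch of those two steps (host names are placeholders; the group_vars file follows the layout used elsewhere in this repository):
```
# inventory.cfg
[vault]
node1
node2
node3

# inventory/group_vars/k8s-cluster.yml
cert_management: vault
```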
Also, if you intend to use a Root or Intermediate CA generated elsewhere,
you'll need to copy the certificate and key to the hosts in the vault group
prior to running the vault role. By default, they'll be located at
``/etc/vault/ssl/ca.pem`` and ``/etc/vault/ssl/ca-key.pem``, respectively.
As a result, the Vault role will create separate Root CAs for `etcd`,
`kubernetes` and `vault`. Also, if you intend to use a Root or Intermediate CA
generated elsewhere, you'll need to copy the certificate and key to the hosts in the vault group prior to running the vault role. By default, they'll be located at:
* vault:
* ``/etc/vault/ssl/ca.pem``
* ``/etc/vault/ssl/ca-key.pem``
* etcd:
* ``/etc/ssl/etcd/ssl/ca.pem``
* ``/etc/ssl/etcd/ssl/ca-key.pem``
* kubernetes:
* ``/etc/kubernetes/ssl/ca.pem``
* ``/etc/kubernetes/ssl/ca-key.pem``
Additional Notes:
@@ -77,7 +84,6 @@ Additional Notes:
credentials are saved to ``/etc/vault/roles/<role>/``. The service will
need to read in those credentials, if they want to interact with Vault.
Potential Work
--------------
@@ -87,6 +93,3 @@ Potential Work
- Add the ability to start temp Vault with Host, Rkt, or Docker
- Add a dynamic way to change out the backend role creation during Bootstrap,
so other services can be used (such as Consul)
- Segregate Server Cert generation from Auth Cert generation (separate CAs).
This work was partially started with the `auth_cert_backend` tasks, but would
need to be further applied to all roles (particularly Etcd and Kubernetes).

View File

@@ -74,6 +74,23 @@ bin_dir: /usr/local/bin
#azure_vnet_name:
#azure_route_table_name:
## When OpenStack is used, Cinder version can be explicitly specified if autodetection fails (https://github.com/kubernetes/kubernetes/issues/50461)
#openstack_blockstorage_version: "v1/v2/auto (default)"
## When OpenStack is used, if LBaaSv2 is available you can enable it with the following variables.
#openstack_lbaas_enabled: True
#openstack_lbaas_subnet_id: "Neutron subnet ID (not network ID) to create LBaaS VIP"
#openstack_lbaas_floating_network_id: "Neutron network ID (not subnet ID) to get floating IP from, disabled by default"
#openstack_lbaas_create_monitor: "yes"
#openstack_lbaas_monitor_delay: "1m"
#openstack_lbaas_monitor_timeout: "30s"
#openstack_lbaas_monitor_max_retries: "3"
## Uncomment to enable experimental kubeadm deployment mode
#kubeadm_enabled: false
#kubeadm_token_first: "{{ lookup('password', 'credentials/kubeadm_token_first length=6 chars=ascii_lowercase,digits') }}"
#kubeadm_token_second: "{{ lookup('password', 'credentials/kubeadm_token_second length=16 chars=ascii_lowercase,digits') }}"
#kubeadm_token: "{{ kubeadm_token_first }}.{{ kubeadm_token_second }}"
#
## Set these proxy values in order to update docker daemon to use proxies
#http_proxy: ""
#https_proxy: ""
@@ -99,6 +116,9 @@ bin_dir: /usr/local/bin
## Please specify true if you want to perform a kernel upgrade
kernel_upgrade: false
# Set to true to allow pre-checks to fail and continue deployment
#ignore_assert_errors: false
## Etcd auto compaction retention for mvcc key value store in hour
#etcd_compaction_retention: 0

View File

@@ -23,7 +23,7 @@ kube_users_dir: "{{ kube_config_dir }}/users"
kube_api_anonymous_auth: false
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.6.7
kube_version: v1.7.5
# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
@@ -40,23 +40,18 @@ kube_log_level: 2
# Users to create for basic auth in Kubernetes API via HTTP
# Optionally add groups for user
kube_api_pwd: "changeme"
kube_api_pwd: "{{ lookup('password', 'credentials/kube_user length=15 chars=ascii_letters,digits') }}"
kube_users:
kube:
pass: "{{kube_api_pwd}}"
role: admin
root:
pass: "{{kube_api_pwd}}"
role: admin
# groups:
# - system:masters
groups:
- system:masters
## It is possible to activate / deactivate selected authentication methods (basic auth, static token auth)
#kube_oidc_auth: false
#kube_basic_auth: false
#kube_token_auth: false
#kube_basic_auth: true
#kube_token_auth: true
## Variables for OpenID Connect Configuration https://kubernetes.io/docs/admin/authentication/
@@ -141,19 +136,27 @@ docker_bin_dir: "/usr/bin"
# Settings for containerized control plane (etcd/kubelet/secrets)
etcd_deployment_type: docker
kubelet_deployment_type: docker
kubelet_deployment_type: host
cert_management: script
vault_deployment_type: docker
# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent
# Kubernetes dashboard (available at http://first_master:6443/ui by default)
dashboard_enabled: true
# Monitoring apps for k8s
efk_enabled: false
# Helm deployment
helm_enabled: false
# Make a copy of kubeconfig on the host that runs Ansible in GITDIR/artifacts
# kubeconfig_localhost: false
# Download kubectl onto the host that runs Ansible in GITDIR/artifacts
# kubectl_localhost: false
# dnsmasq
# dnsmasq_upstream_dns_servers:
# - /resolvethiszone.with/10.0.4.250

View File

@@ -135,11 +135,14 @@ class KubeManager(object):
return None
return out.splitlines()
def create(self, check=True):
def create(self, check=True, force=True):
if check and self.exists():
return []
cmd = ['create']
cmd = ['apply']
if force:
cmd.append('--force')
if not self.filename:
self.module.fail_json(msg='filename required to create')
@@ -148,14 +151,11 @@ class KubeManager(object):
return self._execute(cmd)
def replace(self):
def replace(self, force=True):
if not self.force and not self.exists():
return []
cmd = ['apply']
cmd = ['replace']
if self.force:
if force:
cmd.append('--force')
if not self.filename:
@@ -270,9 +270,8 @@ def main():
manager = KubeManager(module)
state = module.params.get('state')
if state == 'present':
result = manager.create()
result = manager.create(check=False)
elif state == 'absent':
result = manager.delete()
@@ -284,11 +283,7 @@ def main():
result = manager.stop()
elif state == 'latest':
if manager.exists():
manager.force = True
result = manager.replace()
else:
result = manager.create(check=False)
result = manager.replace()
else:
module.fail_json(msg='Unrecognized state %s.' % state)

View File

@@ -1,3 +1,4 @@
ansible>=2.3.0
pbr>=1.6
ansible>=2.3.2
netaddr
jinja2>=2.9.6

View File

@@ -16,6 +16,6 @@ Host {{ bastion_ip }}
ControlPersist 5m
Host {{ vars['hosts'] }}
ProxyCommand ssh -W %h:%p {{ real_user }}@{{ bastion_ip }}
ProxyCommand ssh -W %h:%p {{ real_user }}@{{ bastion_ip }} {% if ansible_ssh_private_key_file is defined %}-i {{ ansible_ssh_private_key_file }}{% endif %}
StrictHostKeyChecking no
{% endif %}

View File

@@ -7,7 +7,7 @@
- name: Bootstrap | Run bootstrap.sh
script: bootstrap.sh
when: (need_bootstrap | failed)
when: need_bootstrap.rc != 0
- set_fact:
ansible_python_interpreter: "/opt/bin/python"
@@ -19,34 +19,33 @@
failed_when: false
changed_when: false
check_mode: no
when: (need_bootstrap | failed)
when: need_bootstrap.rc != 0
tags: facts
- name: Bootstrap | Copy get-pip.py
copy:
src: get-pip.py
dest: ~/get-pip.py
when: (need_pip | failed)
when: need_pip != 0
- name: Bootstrap | Install pip
shell: "{{ansible_python_interpreter}} ~/get-pip.py"
when: (need_pip | failed)
when: need_pip != 0
- name: Bootstrap | Remove get-pip.py
file:
path: ~/get-pip.py
state: absent
when: (need_pip | failed)
when: need_pip != 0
- name: Bootstrap | Install pip launcher
copy:
src: runner
dest: /opt/bin/pip
mode: 0755
when: (need_pip | failed)
when: need_pip != 0
- name: Install required python modules
pip:
name: "{{ item }}"
with_items: "{{pip_python_modules}}"

View File

@@ -21,10 +21,20 @@
- name: Gather nodes hostnames
setup:
gather_subset: '!all'
filter: ansible_hostname
filter: ansible_*
- name: Assign inventory name to unconfigured hostnames
- name: Assign inventory name to unconfigured hostnames (non-CoreOS)
hostname:
name: "{{inventory_hostname}}"
when: ansible_hostname == 'localhost'
when: ansible_os_family not in ['CoreOS', 'Container Linux by CoreOS']
- name: Assign inventory name to unconfigured hostnames (CoreOS only)
command: "hostnamectl set-hostname {{inventory_hostname}}"
register: hostname_changed
when: ansible_hostname == 'localhost' and ansible_os_family in ['CoreOS', 'Container Linux by CoreOS']
- name: Update hostname fact (CoreOS only)
setup:
gather_subset: '!all'
filter: ansible_hostname
when: ansible_os_family in ['CoreOS', 'Container Linux by CoreOS'] and hostname_changed.changed

View File

@@ -6,4 +6,3 @@
regexp: '^\w+\s+requiretty'
dest: /etc/sudoers
state: absent

View File

@@ -4,12 +4,12 @@
# Max of 4 names is allowed and no more than 256 - 17 chars total
# (a 2 is reserved for the 'default.svc.' and 'svc.')
#searchdomains:
# - foo.bar.lc
# searchdomains:
# - foo.bar.lc
# Max of 2 is allowed here (a 1 is reserved for the dns_server)
#nameservers:
# - 127.0.0.1
# nameservers:
# - 127.0.0.1
dns_forward_max: 150
cache_size: 1000

View File

@@ -1,6 +1,4 @@
---
- include: pre_upgrade.yml
- name: ensure dnsmasq.d directory exists
file:
path: /etc/dnsmasq.d
@@ -56,6 +54,26 @@
dest: /etc/dnsmasq.d/01-kube-dns.conf
state: link
- name: Create dnsmasq RBAC manifests
template:
src: "{{ item }}"
dest: "{{ kube_config_dir }}/{{ item }}"
with_items:
- "dnsmasq-clusterrolebinding.yml"
- "dnsmasq-serviceaccount.yml"
when: rbac_enabled
delegate_to: "{{ groups['kube-master'][0] }}"
run_once: true
- name: Apply dnsmasq RBAC manifests
command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/{{ item }}"
with_items:
- "dnsmasq-clusterrolebinding.yml"
- "dnsmasq-serviceaccount.yml"
when: rbac_enabled
delegate_to: "{{ groups['kube-master'][0] }}"
run_once: true
- name: Create dnsmasq manifests
template:
src: "{{item.file}}"
@@ -63,7 +81,7 @@
with_items:
- {name: dnsmasq, file: dnsmasq-deploy.yml, type: deployment}
- {name: dnsmasq, file: dnsmasq-svc.yml, type: svc}
- {name: dnsmasq-autoscaler, file: dnsmasq-autoscaler.yml, type: deployment}
- {name: dnsmasq-autoscaler, file: dnsmasq-autoscaler.yml.j2, type: deployment}
register: manifests
delegate_to: "{{ groups['kube-master'][0] }}"
run_once: true
@@ -75,7 +93,7 @@
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.file}}"
state: "{{item.changed | ternary('latest','present') }}"
state: "latest"
with_items: "{{ manifests.results }}"
delegate_to: "{{ groups['kube-master'][0] }}"
run_once: true
@@ -86,4 +104,3 @@
port: 53
timeout: 180
when: inventory_hostname == groups['kube-node'][0] and groups['kube-node'][0] in ansible_play_hosts

View File

@@ -1,9 +0,0 @@
---
- name: Delete legacy dnsmasq daemonset
kube:
name: dnsmasq
namespace: "{{system_namespace}}"
kubectl: "{{bin_dir}}/kubectl"
resource: "ds"
state: absent
when: inventory_hostname == groups['kube-master'][0]

View File

@@ -1,3 +1,4 @@
---
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -30,21 +31,26 @@ spec:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
{% if rbac_enabled %}
serviceAccountName: dnsmasq
{% endif %}
tolerations:
- effect: NoSchedule
operator: Exists
containers:
- name: autoscaler
image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.1
resources:
- name: autoscaler
image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.1
resources:
requests:
cpu: "20m"
memory: "10Mi"
command:
- /cluster-proportional-autoscaler
- --namespace=kube-system
- --configmap=dnsmasq-autoscaler
- --target=Deployment/dnsmasq
# When cluster is using large nodes(with more cores), "coresPerReplica" should dominate.
# If using small nodes, "nodesPerReplica" should dominate.
- --default-params={"linear":{"nodesPerReplica":{{ dnsmasq_nodes_per_replica }},"preventSinglePointFailure":true}}
- --logtostderr=true
- --v={{ kube_log_level }}
cpu: "20m"
memory: "10Mi"
command:
- /cluster-proportional-autoscaler
- --namespace=kube-system
- --configmap=dnsmasq-autoscaler
- --target=Deployment/dnsmasq
# When cluster is using large nodes(with more cores), "coresPerReplica" should dominate.
# If using small nodes, "nodesPerReplica" should dominate.
- --default-params={"linear":{"nodesPerReplica":{{ dnsmasq_nodes_per_replica }},"preventSinglePointFailure":true}}
- --logtostderr=true
- --v={{ kube_log_level }}

View File

@@ -0,0 +1,14 @@
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: dnsmasq
namespace: "{{ system_namespace }}"
subjects:
- kind: ServiceAccount
name: dnsmasq
namespace: "{{ system_namespace}}"
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io

View File

@@ -21,6 +21,9 @@ spec:
kubernetes.io/cluster-service: "true"
kubespray/dnsmasq-checksum: "{{ dnsmasq_stat.stat.checksum }}"
spec:
tolerations:
- effect: NoSchedule
operator: Exists
containers:
- name: dnsmasq
image: "{{ dnsmasq_image_repo }}:{{ dnsmasq_image_tag }}"
@@ -35,7 +38,6 @@ spec:
capabilities:
add:
- NET_ADMIN
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: {{ dns_cpu_limit }}
@@ -55,7 +57,6 @@ spec:
mountPath: /etc/dnsmasq.d
- name: etcdnsmasqdavailable
mountPath: /etc/dnsmasq.d-available
volumes:
- name: etcdnsmasqd
hostPath:
@@ -64,4 +65,3 @@ spec:
hostPath:
path: /etc/dnsmasq.d-available
dnsPolicy: Default # Don't use cluster DNS.

View File

@@ -0,0 +1,8 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: dnsmasq
namespace: "{{ system_namespace }}"
labels:
kubernetes.io/cluster-service: "true"

View File

@@ -1,3 +1,4 @@
---
docker_version: '1.13'
docker_package_info:
@@ -10,3 +11,8 @@ docker_repo_info:
repos:
docker_dns_servers_strict: yes
docker_container_storage_setup: false
docker_rh_repo_base_url: 'https://yum.dockerproject.org/repo/main/centos/7'
docker_rh_repo_gpgkey: 'https://yum.dockerproject.org/gpg'

View File

@@ -0,0 +1,15 @@
---
docker_container_storage_setup_version: v0.6.0
docker_container_storage_setup_profile_name: kubespray
docker_container_storage_setup_storage_driver: devicemapper
docker_container_storage_setup_container_thinpool: docker-pool
docker_container_storage_setup_data_size: 40%FREE
docker_container_storage_setup_min_data_size: 2G
docker_container_storage_setup_chunk_size: 512K
docker_container_storage_setup_growpart: false
docker_container_storage_setup_auto_extend_pool: yes
docker_container_storage_setup_pool_autoextend_threshold: 60
docker_container_storage_setup_pool_autoextend_percent: 20
docker_container_storage_setup_device_wait_timeout: 60
docker_container_storage_setup_wipe_signatures: false
docker_container_storage_setup_container_root_lv_size: 40%FREE

View File

@@ -0,0 +1,22 @@
#!/bin/sh
set -e
version=${1:-master}
profile_name=${2:-kubespray}
dir=`mktemp -d`
export GIT_DIR=$dir/.git
export GIT_WORK_TREE=$dir
git init
git fetch --depth 1 https://github.com/projectatomic/container-storage-setup.git $version
git merge FETCH_HEAD
make -C $dir install
rm -rf /var/lib/container-storage-setup/$profile_name $dir
set +e
/usr/bin/container-storage-setup create $profile_name /etc/sysconfig/docker-storage-setup && /usr/bin/container-storage-setup activate $profile_name
# FIXME: exit status can be 1 for both fatal and non fatal errors in current release,
# could be improved by matching error strings
exit 0

View File

@@ -0,0 +1,37 @@
---
- name: docker-storage-setup | install git and make
with_items: [git, make]
package:
pkg: "{{ item }}"
state: present
- name: docker-storage-setup | docker-storage-setup sysconfig template
template:
src: docker-storage-setup.j2
dest: /etc/sysconfig/docker-storage-setup
- name: docker-storage-override-directory | docker service storage-setup override dir
file:
dest: /etc/systemd/system/docker.service.d
mode: 0755
owner: root
group: root
state: directory
- name: docker-storage-override | docker service storage-setup override file
copy:
dest: /etc/systemd/system/docker.service.d/override.conf
content: |-
### This file is managed by Ansible
[Service]
EnvironmentFile=-/etc/sysconfig/docker-storage
owner: root
group: root
mode: 0644
- name: docker-storage-setup | install and run container-storage-setup
become: yes
script: install_container_storage_setup.sh {{ docker_container_storage_setup_version }} {{ docker_container_storage_setup_profile_name }}
notify: Docker | reload systemd

View File

@@ -0,0 +1,35 @@
{%if docker_container_storage_setup_storage_driver is defined%}STORAGE_DRIVER={{docker_container_storage_setup_storage_driver}}{%endif%}
{%if docker_container_storage_setup_extra_storage_options is defined%}EXTRA_STORAGE_OPTIONS={{docker_container_storage_setup_extra_storage_options}}{%endif%}
{%if docker_container_storage_setup_devs is defined%}DEVS={{docker_container_storage_setup_devs}}{%endif%}
{%if docker_container_storage_setup_container_thinpool is defined%}CONTAINER_THINPOOL={{docker_container_storage_setup_container_thinpool}}{%endif%}
{%if docker_container_storage_setup_vg is defined%}VG={{docker_container_storage_setup_vg}}{%endif%}
{%if docker_container_storage_setup_root_size is defined%}ROOT_SIZE={{docker_container_storage_setup_root_size}}{%endif%}
{%if docker_container_storage_setup_data_size is defined%}DATA_SIZE={{docker_container_storage_setup_data_size}}{%endif%}
{%if docker_container_storage_setup_min_data_size is defined%}MIN_DATA_SIZE={{docker_container_storage_setup_min_data_size}}{%endif%}
{%if docker_container_storage_setup_chunk_size is defined%}CHUNK_SIZE={{docker_container_storage_setup_chunk_size}}{%endif%}
{%if docker_container_storage_setup_growpart is defined%}GROWPART={{docker_container_storage_setup_growpart}}{%endif%}
{%if docker_container_storage_setup_auto_extend_pool is defined%}AUTO_EXTEND_POOL={{docker_container_storage_setup_auto_extend_pool}}{%endif%}
{%if docker_container_storage_setup_pool_autoextend_threshold is defined%}POOL_AUTOEXTEND_THRESHOLD={{docker_container_storage_setup_pool_autoextend_threshold}}{%endif%}
{%if docker_container_storage_setup_pool_autoextend_percent is defined%}POOL_AUTOEXTEND_PERCENT={{docker_container_storage_setup_pool_autoextend_percent}}{%endif%}
{%if docker_container_storage_setup_device_wait_timeout is defined%}DEVICE_WAIT_TIMEOUT={{docker_container_storage_setup_device_wait_timeout}}{%endif%}
{%if docker_container_storage_setup_wipe_signatures is defined%}WIPE_SIGNATURES={{docker_container_storage_setup_wipe_signatures}}{%endif%}
{%if docker_container_storage_setup_container_root_lv_name is defined%}CONTAINER_ROOT_LV_NAME={{docker_container_storage_setup_container_root_lv_name}}{%endif%}
{%if docker_container_storage_setup_container_root_lv_size is defined%}CONTAINER_ROOT_LV_SIZE={{docker_container_storage_setup_container_root_lv_size}}{%endif%}
{%if docker_container_storage_setup_container_root_lv_mount_path is defined%}CONTAINER_ROOT_LV_MOUNT_PATH={{docker_container_storage_setup_container_root_lv_mount_path}}{%endif%}

View File

@@ -8,7 +8,7 @@
- Docker | pause while Docker restarts
- Docker | wait for docker
- name : Docker | reload systemd
- name: Docker | reload systemd
shell: systemctl daemon-reload
- name: Docker | reload docker.socket

View File

@@ -0,0 +1,4 @@
---
dependencies:
- role: docker/docker-storage
when: docker_container_storage_setup and ansible_os_family == "RedHat"

View File

@@ -3,14 +3,14 @@
include_vars: "{{ item }}"
with_first_found:
- files:
- "{{ ansible_distribution|lower }}-{{ ansible_distribution_version|lower|replace('/', '_') }}.yml"
- "{{ ansible_distribution|lower }}-{{ ansible_distribution_release }}.yml"
- "{{ ansible_distribution|lower }}-{{ ansible_distribution_major_version|lower|replace('/', '_') }}.yml"
- "{{ ansible_distribution|lower }}.yml"
- "{{ ansible_os_family|lower }}.yml"
- defaults.yml
- "{{ ansible_distribution|lower }}-{{ ansible_distribution_version|lower|replace('/', '_') }}.yml"
- "{{ ansible_distribution|lower }}-{{ ansible_distribution_release }}.yml"
- "{{ ansible_distribution|lower }}-{{ ansible_distribution_major_version|lower|replace('/', '_') }}.yml"
- "{{ ansible_distribution|lower }}.yml"
- "{{ ansible_os_family|lower }}.yml"
- defaults.yml
paths:
- ../vars
- ../vars
skip: true
tags: facts

View File

@@ -48,7 +48,7 @@
- name: add system search domains to docker options
set_fact:
docker_dns_search_domains: "{{ docker_dns_search_domains | union(system_search_domains.stdout.split(' ')|default([])) | unique }}"
when: system_search_domains.stdout != ""
when: system_search_domains.stdout != ""
- name: check number of nameservers
fail:

View File

@@ -10,11 +10,18 @@
dest: /etc/systemd/system/docker.service.d/http-proxy.conf
when: http_proxy is defined or https_proxy is defined or no_proxy is defined
- name: get systemd version
command: rpm -q --qf '%{V}\n' systemd
register: systemd_version
when: ansible_os_family == "RedHat" and not is_atomic
changed_when: false
- name: Write docker.service systemd file
template:
src: docker.service.j2
dest: /etc/systemd/system/docker.service
register: docker_service_file
notify: restart docker
when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or is_atomic)
- name: Write docker.service systemd file for atomic

View File

@@ -1,3 +1,3 @@
[Service]
Environment="DOCKER_OPTS={{ docker_options | default('') }} \
--iptables={% if kube_network_plugin == 'flannel' %}true{% else %}false{% endif %}"
--iptables=false"

View File

@@ -24,7 +24,9 @@ ExecStart={{ docker_bin_dir }}/docker daemon \
$DOCKER_NETWORK_OPTIONS \
$DOCKER_DNS_OPTIONS \
$INSECURE_REGISTRY
{% if ansible_os_family == "RedHat" and systemd_version.stdout|int >= 226 %}
TasksMax=infinity
{% endif %}
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity

View File

@@ -1,7 +1,7 @@
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
baseurl={{ docker_rh_repo_base_url }}
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
gpgkey={{ docker_rh_repo_gpgkey }}
{% if http_proxy is defined %}proxy={{ http_proxy }}{% endif %}

View File

@@ -1,3 +1,4 @@
---
docker_kernel_min_version: '3.10'
# https://apt.dockerproject.org/repo/dists/debian-wheezy/main/filelist

View File

@@ -1,3 +1,4 @@
---
docker_kernel_min_version: '0'
# versioning: docker-io itself is pinned at docker 1.5

View File

@@ -1,3 +1,4 @@
---
docker_kernel_min_version: '0'
# https://docs.docker.com/engine/installation/linux/fedora/#install-from-a-package

View File

@@ -1,3 +1,4 @@
---
docker_kernel_min_version: '0'
# https://yum.dockerproject.org/repo/main/centos/7/Packages/
@@ -8,7 +9,7 @@ docker_versioned_pkg:
'1.12': docker-engine-1.12.6-1.el7.centos
'1.13': docker-engine-1.13.1-1.el7.centos
'stable': docker-engine-17.03.0.ce-1.el7.centos
'edge': docker-engine-17.03.0.ce-1.el7.centos
'edge': docker-engine-17.03.0.ce-1.el7.centos
# https://docs.docker.com/engine/installation/linux/centos/#install-from-a-package
# https://download.docker.com/linux/centos/7/x86_64/stable/Packages/

View File

@@ -18,37 +18,41 @@ download_localhost: False
download_always_pull: False
# Versions
kube_version: v1.6.7
kube_version: v1.7.5
# Change to kube_version after v1.8.0 release
kubeadm_version: "v1.8.0-rc.1"
etcd_version: v3.2.4
#TODO(mattymo): Move calico versions to roles/network_plugins/calico/defaults
# TODO(mattymo): Move calico versions to roles/network_plugins/calico/defaults
# after migration to container download
calico_version: "v1.1.3"
calico_cni_version: "v1.8.0"
calico_policy_version: "v0.5.4"
weave_version: 2.0.1
flannel_version: v0.8.0
calico_version: "v2.5.0"
calico_ctl_version: "v1.5.0"
calico_cni_version: "v1.10.0"
calico_policy_version: "v0.7.0"
weave_version: 2.0.4
flannel_version: "v0.8.0"
flannel_cni_version: "v0.2.0"
pod_infra_version: 3.0
# Download URL's
etcd_download_url: "https://storage.googleapis.com/kargo/{{etcd_version}}_etcd"
# Download URLs
kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/amd64/kubeadm"
# Checksums
etcd_checksum: "385afd518f93e3005510b7aaa04d38ee4a39f06f5152cd33bb86d4f0c94c7485"
kubeadm_checksum: "8f6ceb26b8503bfc36a99574cf6f853be1c55405aa31669561608ad8099bf5bf"
# Containers
# Possible values: host, docker
etcd_deployment_type: "docker"
etcd_image_repo: "quay.io/coreos/etcd"
etcd_image_tag: "{{ etcd_version }}"
flannel_image_repo: "quay.io/coreos/flannel"
flannel_image_tag: "{{ flannel_version }}"
calicoctl_image_repo: "calico/ctl"
calicoctl_image_tag: "{{ calico_version }}"
calico_node_image_repo: "calico/node"
flannel_cni_image_repo: "quay.io/coreos/flannel-cni"
flannel_cni_image_tag: "{{ flannel_cni_version }}"
calicoctl_image_repo: "quay.io/calico/ctl"
calicoctl_image_tag: "{{ calico_ctl_version }}"
calico_node_image_repo: "quay.io/calico/node"
calico_node_image_tag: "{{ calico_version }}"
calico_cni_image_repo: "calico/cni"
calico_cni_image_repo: "quay.io/calico/cni"
calico_cni_image_tag: "{{ calico_cni_version }}"
calico_policy_image_repo: "calico/kube-policy-controller"
calico_policy_image_repo: "quay.io/calico/kube-policy-controller"
calico_policy_image_tag: "{{ calico_policy_version }}"
calico_rr_image_repo: "quay.io/calico/routereflector"
calico_rr_image_tag: "v0.3.0"
@@ -56,6 +60,8 @@ hyperkube_image_repo: "quay.io/coreos/hyperkube"
hyperkube_image_tag: "{{ kube_version }}_coreos.0"
pod_infra_image_repo: "gcr.io/google_containers/pause-amd64"
pod_infra_image_tag: "{{ pod_infra_version }}"
install_socat_image_repo: "xueshanf/install-socat"
install_socat_image_tag: "latest"
netcheck_version: "v1.0"
netcheck_agent_img_repo: "quay.io/l23network/k8s-netchecker-agent"
netcheck_agent_tag: "{{ netcheck_version }}"
@@ -114,18 +120,19 @@ downloads:
sha256: "{{ netcheck_agent_digest_checksum|default(None) }}"
enabled: "{{ deploy_netchecker|bool }}"
etcd:
version: "{{etcd_version}}"
dest: "etcd/etcd-{{ etcd_version }}-linux-amd64.tar.gz"
sha256: >-
{%- if etcd_deployment_type in [ 'docker', 'rkt' ] -%}{{etcd_digest_checksum|default(None)}}{%- else -%}{{etcd_checksum}}{%- endif -%}
source_url: "{{ etcd_download_url }}"
url: "{{ etcd_download_url }}"
unarchive: true
owner: "etcd"
mode: "0755"
container: "{{ etcd_deployment_type in [ 'docker', 'rkt' ] }}"
container: true
repo: "{{ etcd_image_repo }}"
tag: "{{ etcd_image_tag }}"
sha256: "{{ etcd_digest_checksum|default(None) }}"
kubeadm:
version: "{{ kubeadm_version }}"
dest: "kubeadm"
sha256: "{{ kubeadm_checksum }}"
source_url: "{{ kubeadm_download_url }}"
url: "{{ kubeadm_download_url }}"
unarchive: false
owner: "root"
mode: "0755"
hyperkube:
container: true
repo: "{{ hyperkube_image_repo }}"
@@ -137,6 +144,12 @@ downloads:
tag: "{{ flannel_image_tag }}"
sha256: "{{ flannel_digest_checksum|default(None) }}"
enabled: "{{ kube_network_plugin == 'flannel' or kube_network_plugin == 'canal' }}"
flannel_cni:
container: true
repo: "{{ flannel_cni_image_repo }}"
tag: "{{ flannel_cni_image_tag }}"
sha256: "{{ flannel_cni_digest_checksum|default(None) }}"
enabled: "{{ kube_network_plugin == 'flannel' }}"
calicoctl:
container: true
repo: "{{ calicoctl_image_repo }}"
@@ -184,6 +197,11 @@ downloads:
repo: "{{ pod_infra_image_repo }}"
tag: "{{ pod_infra_image_tag }}"
sha256: "{{ pod_infra_digest_checksum|default(None) }}"
install_socat:
container: true
repo: "{{ install_socat_image_repo }}"
tag: "{{ install_socat_image_tag }}"
sha256: "{{ install_socat_digest_checksum|default(None) }}"
nginx:
container: true
repo: "{{ nginx_image_repo }}"

View File

@@ -1,12 +1,5 @@
---
- name: downloading...
debug:
msg: "{{ download.url }}"
when:
- download.enabled|bool
- not download.container|bool
- name: Create dest directories
- name: file_download | Create dest directories
file:
path: "{{local_release_dir}}/{{download.dest|dirname}}"
state: directory
@@ -16,7 +9,7 @@
- not download.container|bool
tags: bootstrap-os
- name: Download items
- name: file_download | Download item
get_url:
url: "{{download.url}}"
dest: "{{local_release_dir}}/{{download.dest}}"
@@ -31,7 +24,7 @@
- download.enabled|bool
- not download.container|bool
- name: Extract archives
- name: file_download | Extract archives
unarchive:
src: "{{ local_release_dir }}/{{download.dest}}"
dest: "{{ local_release_dir }}/{{download.dest|dirname}}"
@@ -41,10 +34,9 @@
when:
- download.enabled|bool
- not download.container|bool
- download.unarchive is defined
- download.unarchive == True
- download.unarchive|default(False)
- name: Fix permissions
- name: file_download | Fix permissions
file:
state: file
path: "{{local_release_dir}}/{{download.dest}}"
@@ -56,10 +48,11 @@
- (download.unarchive is not defined or download.unarchive == False)
- set_fact:
download_delegate: "{% if download_localhost %}localhost{% else %}{{groups['kube-master'][0]}}{% endif %}"
download_delegate: "{% if download_localhost|bool %}localhost{% else %}{{groups['kube-master'][0]}}{% endif %}"
run_once: true
tags: facts
- name: Create dest directory for saved/loaded container images
- name: container_download | Create dest directory for saved/loaded container images
file:
path: "{{local_release_dir}}/containers"
state: directory
@@ -72,15 +65,14 @@
tags: bootstrap-os
# This is required for the download_localhost delegate to work smoothly with Container Linux by CoreOS cluster nodes
- name: Hack python binary path for localhost
- name: container_download | Hack python binary path for localhost
raw: sh -c "mkdir -p /opt/bin; ln -sf /usr/bin/python /opt/bin/python"
when: download_delegate == 'localhost'
delegate_to: localhost
when: download_delegate == 'localhost'
failed_when: false
run_once: true
tags: localhost
- name: Download | create local directory for saved/loaded container images
- name: container_download | create local directory for saved/loaded container images
file:
path: "{{local_release_dir}}/containers"
state: directory
@@ -95,24 +87,16 @@
- download_delegate == 'localhost'
tags: localhost
- name: Make download decision if pull is required by tag or sha256
- name: container_download | Make download decision if pull is required by tag or sha256
include: set_docker_image_facts.yml
when:
- download.enabled|bool
- download.container|bool
delegate_to: "{{ download_delegate if download_run_once|bool else inventory_hostname }}"
delegate_to: "{{ download_delegate if download_run_once|bool or omit }}"
run_once: "{{ download_run_once|bool }}"
tags: facts
- name: pulling...
debug:
msg: "{{ pull_args }}"
when:
- download.enabled|bool
- download.container|bool
#NOTE(bogdando) this brings no docker-py deps for nodes
- name: Download containers if pull is required or told to always pull
- name: container_download | Download containers if pull is required or told to always pull
command: "{{ docker_bin_dir }}/docker pull {{ pull_args }}"
register: pull_task_result
until: pull_task_result|succeeded
@@ -122,29 +106,29 @@
- download.enabled|bool
- download.container|bool
- pull_required|bool|default(download_always_pull)
delegate_to: "{{ download_delegate if download_run_once|bool else inventory_hostname }}"
delegate_to: "{{ download_delegate if download_run_once|bool or omit }}"
run_once: "{{ download_run_once|bool }}"
- set_fact:
fname: "{{local_release_dir}}/containers/{{download.repo|regex_replace('/|\0|:', '_')}}:{{download.tag|default(download.sha256)|regex_replace('/|\0|:', '_')}}.tar"
run_once: true
tags: facts
- name: "Set default value for 'container_changed' to false"
- name: "container_download | Set default value for 'container_changed' to false"
set_fact:
container_changed: "{{pull_required|default(false)|bool}}"
- name: "Update the 'container_changed' fact"
- name: "container_download | Update the 'container_changed' fact"
set_fact:
container_changed: "{{ pull_required|bool|default(false) or not 'up to date' in pull_task_result.stdout }}"
when:
- download.enabled|bool
- download.container|bool
- pull_required|bool|default(download_always_pull)
delegate_to: "{{ download_delegate if download_run_once|bool else inventory_hostname }}"
run_once: "{{ download_run_once|bool }}"
tags: facts
- name: Stat saved container image
- name: container_download | Stat saved container image
stat:
path: "{{fname}}"
register: img
@@ -158,7 +142,7 @@
run_once: true
tags: facts
- name: Download | save container images
- name: container_download | save container images
shell: "{{ docker_bin_dir }}/docker save {{ pull_args }} | gzip -{{ download_compress }} > {{ fname }}"
delegate_to: "{{ download_delegate }}"
register: saved
@@ -170,7 +154,7 @@
- download.container|bool
- (container_changed|bool or not img.stat.exists)
- name: Download | copy container images to ansible host
- name: container_download | copy container images to ansible host
synchronize:
src: "{{ fname }}"
dest: "{{ fname }}"
@@ -186,7 +170,7 @@
- download.container|bool
- saved.changed
- name: Download | upload container images to nodes
- name: container_download | upload container images to nodes
synchronize:
src: "{{ fname }}"
dest: "{{ fname }}"
@@ -206,7 +190,7 @@
- download.container|bool
tags: [upload, upgrade]
- name: Download | load container images
- name: container_download | load container images
shell: "{{ docker_bin_dir }}/docker load < {{ fname }}"
when:
- (not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] and

View File

@@ -9,25 +9,22 @@
- name: Register docker images info
raw: >-
{{ docker_bin_dir }}/docker images -q | xargs {{ docker_bin_dir }}/docker inspect -f "{{ '{{' }} .RepoTags {{ '}}' }},{{ '{{' }} .RepoDigests {{ '}}' }}"
{{ docker_bin_dir }}/docker images -q | xargs {{ docker_bin_dir }}/docker inspect -f "{{ '{{' }} (index .RepoTags 0) {{ '}}' }},{{ '{{' }} (index .RepoDigests 0) {{ '}}' }}" | tr '\n' ','
no_log: true
register: docker_images_raw
register: docker_images
failed_when: false
changed_when: false
check_mode: no
when: not download_always_pull|bool
- set_fact:
docker_images: "{{docker_images_raw.stdout|regex_replace('\\[|\\]|\\n]','')|regex_replace('\\s',',')}}"
no_log: true
when: not download_always_pull|bool
- set_fact:
pull_required: >-
{%- if pull_args in docker_images.split(',') %}false{%- else -%}true{%- endif -%}
{%- if pull_args in docker_images.stdout.split(',') %}false{%- else -%}true{%- endif -%}
when: not download_always_pull|bool
- name: Check the local digest sha256 corresponds to the given image tag
assert:
that: "{{download.repo}}:{{download.tag}} in docker_images.split(',')"
that: "{{download.repo}}:{{download.tag}} in docker_images.stdout.split(',')"
when: not download_always_pull|bool and not pull_required|bool and pull_by_digest|bool
tags:
- asserts

View File

@@ -3,7 +3,6 @@
etcd_cluster_setup: true
etcd_backup_prefix: "/var/backups"
etcd_bin_dir: "{{ local_release_dir }}/etcd/etcd-{{ etcd_version }}-linux-amd64/"
etcd_data_dir: "/var/lib/etcd"
etcd_config_dir: /etc/ssl/etcd
@@ -21,8 +20,12 @@ etcd_metrics: "basic"
etcd_memory_limit: 512M
# Uncomment to set CPU share for etcd
#etcd_cpu_limit: 300m
# etcd_cpu_limit: 300m
etcd_blkio_weight: 1000
etcd_node_cert_hosts: "{{ groups['k8s-cluster'] | union(groups.get('calico-rr', [])) }}"
etcd_compaction_retention: "0"
etcd_compaction_retention: "8"
etcd_vault_mount_path: etcd

View File

@@ -5,6 +5,7 @@
- Refresh Time Fact
- Set Backup Directory
- Create Backup Directory
- Stat etcd v2 data directory
- Backup etcd v2 data
- Backup etcd v3 data
when: etcd_cluster_is_healthy.rc == 0
@@ -24,7 +25,13 @@
group: root
mode: 0600
- name: Stat etcd v2 data directory
stat:
path: "{{ etcd_data_dir }}/member"
register: etcd_data_dir_member
- name: Backup etcd v2 data
when: etcd_data_dir_member.stat.exists
command: >-
{{ bin_dir }}/etcdctl backup
--data-dir {{ etcd_data_dir }}
@@ -43,4 +50,3 @@
ETCDCTL_API: 3
retries: 3
delay: "{{ retry_stagger | random + 3 }}"

View File

@@ -30,4 +30,3 @@
- name: set etcd_secret_changed
set_fact:
etcd_secret_changed: true

View File

@@ -66,4 +66,3 @@
{%- set _ = certs.update({'sync': True}) -%}
{% endif %}
{{ certs.sync }}

View File

@@ -73,11 +73,10 @@
'member-{{ node }}-key.pem',
{% endfor %}]"
my_master_certs: ['ca-key.pem',
'admin-{{ inventory_hostname }}.pem',
'admin-{{ inventory_hostname }}-key.pem',
'member-{{ inventory_hostname }}.pem',
'member-{{ inventory_hostname }}-key.pem'
]
'admin-{{ inventory_hostname }}.pem',
'admin-{{ inventory_hostname }}-key.pem',
'member-{{ inventory_hostname }}.pem',
'member-{{ inventory_hostname }}-key.pem']
all_node_certs: "['ca.pem',
{% for node in (groups['k8s-cluster'] + groups['calico-rr']|default([]))|unique %}
'node-{{ node }}.pem',
@@ -111,22 +110,22 @@
sync_certs|default(false) and inventory_hostname not in groups['etcd']
notify: set etcd_secret_changed
#NOTE(mattymo): Use temporary file to copy master certs because we have a ~200k
#char limit when using shell command
#FIXME(mattymo): Use tempfile module in ansible 2.3
- name: Gen_certs | Prepare tempfile for unpacking certs
shell: mktemp /tmp/certsXXXXX.tar.gz
register: cert_tempfile
when: inventory_hostname in groups['etcd'] and sync_certs|default(false) and
inventory_hostname != groups['etcd'][0]
# NOTE(mattymo): Use temporary file to copy master certs because we have a ~200k
# char limit when using shell command
- name: Gen_certs | Write master certs to tempfile
copy:
content: "{{etcd_master_cert_data.stdout}}"
dest: "{{cert_tempfile.stdout}}"
owner: root
mode: "0600"
# FIXME(mattymo): Use tempfile module in ansible 2.3
- name: Gen_certs | Prepare tempfile for unpacking certs
command: mktemp /tmp/certsXXXXX.tar.gz
register: cert_tempfile
when: inventory_hostname in groups['etcd'] and sync_certs|default(false) and
inventory_hostname != groups['etcd'][0]
- name: Gen_certs | Write master certs to tempfile
copy:
content: "{{etcd_master_cert_data.stdout}}"
dest: "{{cert_tempfile.stdout}}"
owner: root
mode: "0600"
when: inventory_hostname in groups['etcd'] and sync_certs|default(false) and
inventory_hostname != groups['etcd'][0]
@@ -162,30 +161,3 @@
owner: kube
mode: "u=rwX,g-rwx,o-rwx"
recurse: yes
- name: Gen_certs | target ca-certificate store file
set_fact:
ca_cert_path: |-
{% if ansible_os_family == "Debian" -%}
/usr/local/share/ca-certificates/etcd-ca.crt
{%- elif ansible_os_family == "RedHat" -%}
/etc/pki/ca-trust/source/anchors/etcd-ca.crt
{%- elif ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] -%}
/etc/ssl/certs/etcd-ca.pem
{%- endif %}
tags: facts
- name: Gen_certs | add CA to trusted CA dir
copy:
src: "{{ etcd_cert_dir }}/ca.pem"
dest: "{{ ca_cert_path }}"
remote_src: true
register: etcd_ca_cert
- name: Gen_certs | update ca-certificates (Debian/Ubuntu/Container Linux by CoreOS)
command: update-ca-certificates
when: etcd_ca_cert.changed and ansible_os_family in ["Debian", "CoreOS", "Container Linux by CoreOS"]
- name: Gen_certs | update ca-certificates (RedHat)
command: update-ca-trust extract
when: etcd_ca_cert.changed and ansible_os_family == "RedHat"

View File

@@ -7,63 +7,29 @@
when: inventory_hostname in etcd_node_cert_hosts
tags: etcd-secrets
- name: gen_certs_vault | Read in the local credentials
command: cat /etc/vault/roles/etcd/userpass
register: etcd_vault_creds_cat
delegate_to: "{{ groups['vault'][0] }}"
- name: gen_certs_vault | Set facts for read Vault Creds
set_fact:
etcd_vault_creds: "{{ etcd_vault_creds_cat.stdout|from_json }}"
delegate_to: "{{ groups['vault'][0] }}"
- name: gen_certs_vault | Log into Vault and obtain an token
uri:
url: "{{ hostvars[groups.vault|first]['vault_leader_url'] }}/v1/auth/userpass/login/{{ etcd_vault_creds.username }}"
headers:
Accept: application/json
Content-Type: application/json
method: POST
body_format: json
body:
password: "{{ etcd_vault_creds.password }}"
register: etcd_vault_login_result
delegate_to: "{{ groups['vault'][0] }}"
- name: gen_certs_vault | Set fact for vault_client_token
set_fact:
vault_client_token: "{{ etcd_vault_login_result.get('json', {}).get('auth', {}).get('client_token') }}"
run_once: true
- name: gen_certs_vault | Set fact for Vault API token
set_fact:
etcd_vault_headers:
Accept: application/json
Content-Type: application/json
X-Vault-Token: "{{ vault_client_token }}"
run_once: true
when: vault_client_token != ""
# Issue master certs to Etcd nodes
- include: ../../vault/tasks/shared/issue_cert.yml
vars:
issue_cert_common_name: "etcd:master:{{ item.rsplit('/', 1)[1].rsplit('.', 1)[0] }}"
issue_cert_alt_names: "{{ groups.etcd + ['localhost'] }}"
issue_cert_copy_ca: "{{ item == etcd_master_certs_needed|first }}"
issue_cert_file_group: "{{ etcd_cert_group }}"
issue_cert_file_owner: kube
issue_cert_headers: "{{ etcd_vault_headers }}"
issue_cert_hosts: "{{ groups.etcd }}"
issue_cert_ip_sans: >-
[
{%- for host in groups.etcd -%}
"{{ hostvars[host]['ansible_default_ipv4']['address'] }}",
{%- if hostvars[host]['ip'] is defined -%}
"{{ hostvars[host]['ip'] }}",
{%- endif -%}
{%- endfor -%}
"127.0.0.1","::1"
]
issue_cert_path: "{{ item }}"
issue_cert_role: etcd
issue_cert_url: "{{ hostvars[groups.vault|first]['vault_leader_url'] }}"
issue_cert_mount_path: "{{ etcd_vault_mount_path }}"
with_items: "{{ etcd_master_certs_needed|d([]) }}"
when: inventory_hostname in groups.etcd
notify: set etcd_secret_changed
@@ -71,24 +37,26 @@
# Issue node certs to everyone else
- include: ../../vault/tasks/shared/issue_cert.yml
vars:
issue_cert_common_name: "etcd:node:{{ item.rsplit('/', 1)[1].rsplit('.', 1)[0] }}"
issue_cert_alt_names: "{{ etcd_node_cert_hosts }}"
issue_cert_copy_ca: "{{ item == etcd_node_certs_needed|first }}"
issue_cert_file_group: "{{ etcd_cert_group }}"
issue_cert_file_owner: kube
issue_cert_headers: "{{ etcd_vault_headers }}"
issue_cert_hosts: "{{ etcd_node_cert_hosts }}"
issue_cert_ip_sans: >-
[
{%- for host in etcd_node_cert_hosts -%}
"{{ hostvars[host]['ansible_default_ipv4']['address'] }}",
{%- if hostvars[host]['ip'] is defined -%}
"{{ hostvars[host]['ip'] }}",
{%- endif -%}
{%- endfor -%}
"127.0.0.1","::1"
]
issue_cert_path: "{{ item }}"
issue_cert_role: etcd
issue_cert_url: "{{ hostvars[groups.vault|first]['vault_leader_url'] }}"
issue_cert_mount_path: "{{ etcd_vault_mount_path }}"
with_items: "{{ etcd_node_certs_needed|d([]) }}"
when: inventory_hostname in etcd_node_cert_hosts
notify: set etcd_secret_changed

View File

@@ -1,5 +1,4 @@
---
#Plan A: no docker-py deps
- name: Install | Copy etcdctl binary from docker container
command: sh -c "{{ docker_bin_dir }}/docker rm -f etcdctl-binarycopy;
{{ docker_bin_dir }}/docker create --name etcdctl-binarycopy {{ etcd_image_repo }}:{{ etcd_image_tag }} &&
@@ -11,22 +10,3 @@
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
changed_when: false
#Plan B: looks nicer, but requires docker-py on all hosts:
#- name: Install | Set up etcd-binarycopy container
# docker:
# name: etcd-binarycopy
# state: present
# image: "{{ etcd_image_repo }}:{{ etcd_image_tag }}"
# when: etcd_deployment_type == "docker"
#
#- name: Install | Copy etcdctl from etcd-binarycopy container
# command: /usr/bin/docker cp "etcd-binarycopy:{{ etcd_container_bin_dir }}etcdctl" "{{ bin_dir }}/etcdctl"
# when: etcd_deployment_type == "docker"
#
#- name: Install | Clean up etcd-binarycopy container
# docker:
# name: etcd-binarycopy
# state: absent
# image: "{{ etcd_image_repo }}:{{ etcd_image_tag }}"
# when: etcd_deployment_type == "docker"

View File

@@ -1,8 +1,4 @@
---
- include: pre_upgrade.yml
when: etcd_cluster_setup
tags: etcd-pre-upgrade
- include: check_certs.yml
when: cert_management == "script"
tags: [etcd-secrets, facts]
@@ -10,6 +6,14 @@
- include: "gen_certs_{{ cert_management }}.yml"
tags: etcd-secrets
- include: upd_ca_trust.yml
tags: etcd-secrets
- name: "Gen_certs | Get etcd certificate serials"
shell: "openssl x509 -in {{ etcd_cert_dir }}/node-{{ inventory_hostname }}.pem -noout -serial | cut -d= -f2"
register: "etcd_client_cert_serial"
when: inventory_hostname in groups['k8s-cluster']|union(groups['etcd'])|union(groups['calico-rr']|default([]))|unique|sort
- include: "install_{{ etcd_deployment_type }}.yml"
when: is_etcd_master
tags: upgrade

View File

@@ -1,59 +0,0 @@
- name: "Pre-upgrade | check for etcd-proxy unit file"
stat:
path: /etc/systemd/system/etcd-proxy.service
register: etcd_proxy_service_file
tags: facts
- name: "Pre-upgrade | check for etcd-proxy init script"
stat:
path: /etc/init.d/etcd-proxy
register: etcd_proxy_init_script
tags: facts
- name: "Pre-upgrade | stop etcd-proxy if service defined"
service:
name: etcd-proxy
state: stopped
when: (etcd_proxy_service_file.stat.exists|default(False) or etcd_proxy_init_script.stat.exists|default(False))
- name: "Pre-upgrade | remove etcd-proxy service definition"
file:
path: "{{ item }}"
state: absent
when: (etcd_proxy_service_file.stat.exists|default(False) or etcd_proxy_init_script.stat.exists|default(False))
with_items:
- /etc/systemd/system/etcd-proxy.service
- /etc/init.d/etcd-proxy
- name: "Pre-upgrade | find etcd-proxy container"
command: "{{ docker_bin_dir }}/docker ps -aq --filter 'name=etcd-proxy*'"
register: etcd_proxy_container
changed_when: false
failed_when: false
- name: "Pre-upgrade | remove etcd-proxy if it exists"
command: "{{ docker_bin_dir }}/docker rm -f {{item}}"
with_items: "{{etcd_proxy_container.stdout_lines}}"
- name: "Pre-upgrade | see if etcdctl is installed"
stat:
path: "{{ bin_dir }}/etcdctl"
register: etcdctl_installed
- name: "Pre-upgrade | check if member list is non-SSL"
command: "{{ bin_dir }}/etcdctl --no-sync --peers={{ etcd_access_addresses | regex_replace('https','http') }} member list"
register: etcd_member_list
retries: 10
delay: 3
until: etcd_member_list.rc != 2
run_once: true
when: etcdctl_installed.stat.exists
changed_when: false
failed_when: false
- name: "Pre-upgrade | change peer names to SSL"
shell: >-
{{ bin_dir }}/etcdctl --no-sync --peers={{ etcd_access_addresses | regex_replace('https','http') }} member list |
awk -F"[: =]" '{print "{{ bin_dir }}/etcdctl --peers={{ etcd_access_addresses | regex_replace('https','http') }} member update "$1" https:"$7":"$8}' | bash
run_once: true
when: 'etcdctl_installed.stat.exists and etcd_member_list.rc == 0 and "http://" in etcd_member_list.stdout'

View File

@@ -1,7 +1,7 @@
---
- name: Refresh config | Create etcd config file
template:
src: etcd.env.yml
src: etcd.env.j2
dest: /etc/etcd.env
notify: restart etcd
when: is_etcd_master

View File

@@ -1,23 +1,20 @@
---
- name: sync_etcd_master_certs | Create list of master certs needing creation
set_fact:
set_fact:
etcd_master_cert_list: >-
{{ etcd_master_cert_list|default([]) + [
"admin-" + item + ".pem",
"member-" + item + ".pem"
"admin-" + inventory_hostname + ".pem",
"member-" + inventory_hostname + ".pem"
] }}
with_items: "{{ groups.etcd }}"
run_once: true
- include: ../../vault/tasks/shared/sync_file.yml
vars:
vars:
sync_file: "{{ item }}"
sync_file_dir: "{{ etcd_cert_dir }}"
sync_file_hosts: "{{ groups.etcd }}"
sync_file_hosts: [ "{{ inventory_hostname }}" ]
sync_file_is_cert: true
with_items: "{{ etcd_master_cert_list|d([]) }}"
run_once: true
- name: sync_etcd_certs | Set facts for etcd sync_file results
set_fact:
@@ -33,8 +30,7 @@
vars:
sync_file: ca.pem
sync_file_dir: "{{ etcd_cert_dir }}"
sync_file_hosts: "{{ groups.etcd }}"
run_once: true
sync_file_hosts: [ "{{ inventory_hostname }}" ]
- name: sync_etcd_certs | Unset sync_file_results after ca.pem sync
set_fact:

View File

@@ -1,15 +1,14 @@
---
- name: sync_etcd_node_certs | Create list of node certs needing creation
set_fact:
etcd_node_cert_list: "{{ etcd_node_cert_list|default([]) + ['node-' + item + '.pem'] }}"
with_items: "{{ etcd_node_cert_hosts }}"
set_fact:
etcd_node_cert_list: "{{ etcd_node_cert_list|default([]) + ['node-' + inventory_hostname + '.pem'] }}"
- include: ../../vault/tasks/shared/sync_file.yml
vars:
vars:
sync_file: "{{ item }}"
sync_file_dir: "{{ etcd_cert_dir }}"
sync_file_hosts: "{{ etcd_node_cert_hosts }}"
sync_file_hosts: [ "{{ inventory_hostname }}" ]
sync_file_is_cert: true
with_items: "{{ etcd_node_cert_list|d([]) }}"
@@ -24,10 +23,10 @@
sync_file_results: []
- include: ../../vault/tasks/shared/sync_file.yml
vars:
vars:
sync_file: ca.pem
sync_file_dir: "{{ etcd_cert_dir }}"
sync_file_hosts: "{{ etcd_node_cert_hosts }}"
sync_file_hosts: "{{ groups['etcd'] }}"
- name: sync_etcd_node_certs | Unset sync_file_results after ca.pem
set_fact:

View File

@@ -0,0 +1,27 @@
---
- name: Gen_certs | target ca-certificate store file
set_fact:
ca_cert_path: |-
{% if ansible_os_family == "Debian" -%}
/usr/local/share/ca-certificates/etcd-ca.crt
{%- elif ansible_os_family == "RedHat" -%}
/etc/pki/ca-trust/source/anchors/etcd-ca.crt
{%- elif ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] -%}
/etc/ssl/certs/etcd-ca.pem
{%- endif %}
tags: facts
- name: Gen_certs | add CA to trusted CA dir
copy:
src: "{{ etcd_cert_dir }}/ca.pem"
dest: "{{ ca_cert_path }}"
remote_src: true
register: etcd_ca_cert
- name: Gen_certs | update ca-certificates (Debian/Ubuntu/Container Linux by CoreOS)
command: update-ca-certificates
when: etcd_ca_cert.changed and ansible_os_family in ["Debian", "CoreOS", "Container Linux by CoreOS"]
- name: Gen_certs | update ca-certificates (RedHat)
command: update-ca-trust extract
when: etcd_ca_cert.changed and ansible_os_family == "RedHat"

View File

@@ -12,6 +12,9 @@
{% if etcd_cpu_limit is defined %}
--cpu-shares={{ etcd_cpu_limit|regex_replace('m', '') }} \
{% endif %}
{% if etcd_blkio_weight is defined %}
--blkio-weight={{ etcd_blkio_weight }} \
{% endif %}
--name={{ etcd_member_name | default("etcd") }} \
{{ etcd_image_repo }}:{{ etcd_image_tag }} \
{% if etcd_after_v3 %}

View File

@@ -1,9 +1,8 @@
---
elrepo_key_url: 'https://www.elrepo.org/RPM-GPG-KEY-elrepo.org'
elrepo_rpm : elrepo-release-7.0-2.el7.elrepo.noarch.rpm
elrepo_mirror : http://www.elrepo.org
elrepo_rpm: elrepo-release-7.0-3.el7.elrepo.noarch.rpm
elrepo_mirror: http://www.elrepo.org
elrepo_url : '{{elrepo_mirror}}/{{elrepo_rpm}}'
elrepo_url: '{{elrepo_mirror}}/{{elrepo_rpm}}'
elrepo_kernel_package: "kernel-lt"

View File

@@ -1,5 +1,6 @@
---
# Versions
kubedns_version : 1.14.2
kubedns_version: 1.14.2
kubednsautoscaler_version: 1.1.1
# Limits for dnsmasq/kubedns apps
@@ -37,6 +38,17 @@ netchecker_server_memory_limit: 256M
netchecker_server_cpu_requests: 50m
netchecker_server_memory_requests: 64M
# Dashboard
dashboard_enabled: false
dashboard_image_repo: kubernetesdashboarddev/kubernetes-dashboard-amd64
dashboard_image_tag: head
# Limits for dashboard
dashboard_cpu_limit: 100m
dashboard_memory_limit: 256M
dashboard_cpu_requests: 50m
dashboard_memory_requests: 64M
# SSL
etcd_cert_dir: "/etc/ssl/etcd/ssl"
canal_cert_dir: "/etc/canal/certs"
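The dashboard defaults above ship with dashboard_enabled: false. A minimal sketch of switching it on from group vars; the overrides are illustrative and only repeat values already shown in these defaults:
# group_vars/k8s-cluster.yml (illustrative)
dashboard_enabled: true
dashboard_image_repo: kubernetesdashboarddev/kubernetes-dashboard-amd64
dashboard_image_tag: head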

View File

@@ -0,0 +1,20 @@
---
- name: Kubernetes Apps | Lay down dashboard template
template:
src: "{{item.file}}"
dest: "{{kube_config_dir}}/{{item.file}}"
with_items:
- {file: dashboard.yml.j2, type: deploy, name: netchecker-agent}
register: manifests
when: inventory_hostname == groups['kube-master'][0]
- name: Kubernetes Apps | Start dashboard
kube:
name: "{{item.item.name}}"
namespace: "{{system_namespace}}"
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.file}}"
state: "latest"
with_items: "{{ manifests.results }}"
when: inventory_hostname == groups['kube-master'][0]

View File

@@ -1,25 +1,47 @@
---
- name: Kubernetes Apps | Wait for kube-apiserver
uri:
url: http://localhost:{{ kube_apiserver_insecure_port }}/healthz
url: "{{ kube_apiserver_insecure_endpoint }}/healthz"
register: result
until: result.status == 200
retries: 10
delay: 6
when: inventory_hostname == groups['kube-master'][0]
- name: Kubernetes Apps | Delete old kubedns resources
kube:
name: "kubedns"
namespace: "{{ system_namespace }}"
kubectl: "{{bin_dir}}/kubectl"
resource: "{{ item }}"
state: absent
with_items: ['deploy', 'svc']
tags: upgrade
- name: Kubernetes Apps | Delete kubeadm kubedns
kube:
name: "kubedns"
namespace: "{{ system_namespace }}"
kubectl: "{{bin_dir}}/kubectl"
resource: "deploy"
state: absent
when:
- kubeadm_enabled|default(false)
- kubeadm_init.changed|default(false)
- inventory_hostname == groups['kube-master'][0]
- name: Kubernetes Apps | Lay Down KubeDNS Template
template:
src: "{{item.file}}"
dest: "{{kube_config_dir}}/{{item.file}}"
with_items:
- {name: kubedns, file: kubedns-sa.yml, type: sa}
- {name: kubedns, file: kubedns-deploy.yml, type: deployment}
- {name: kubedns, file: kubedns-svc.yml, type: svc}
- {name: kube-dns, file: kubedns-sa.yml, type: sa}
- {name: kube-dns, file: kubedns-deploy.yml.j2, type: deployment}
- {name: kube-dns, file: kubedns-svc.yml, type: svc}
- {name: kubedns-autoscaler, file: kubedns-autoscaler-sa.yml, type: sa}
- {name: kubedns-autoscaler, file: kubedns-autoscaler-clusterrole.yml, type: clusterrole}
- {name: kubedns-autoscaler, file: kubedns-autoscaler-clusterrolebinding.yml, type: clusterrolebinding}
- {name: kubedns-autoscaler, file: kubedns-autoscaler.yml, type: deployment}
- {name: kubedns-autoscaler, file: kubedns-autoscaler.yml.j2, type: deployment}
register: manifests
when:
- dns_mode != 'none' and inventory_hostname == groups['kube-master'][0]
@@ -51,13 +73,20 @@
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.file}}"
state: "{{item.changed | ternary('latest','present') }}"
state: "latest"
with_items: "{{ manifests.results }}"
failed_when: manifests|failed and "Error from server (AlreadyExists)" not in manifests.msg
when: dns_mode != 'none' and inventory_hostname == groups['kube-master'][0]
when:
- dns_mode != 'none'
- inventory_hostname == groups['kube-master'][0]
- not item|skipped
tags: dnsmasq
- name: Kubernetes Apps | Netchecker
include: tasks/netchecker.yml
when: deploy_netchecker
tags: netchecker
- name: Kubernetes Apps | Dashboard
include: tasks/dashboard.yml
when: dashboard_enabled
tags: dashboard
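The healthz check above now reads kube_apiserver_insecure_endpoint instead of hard-coding the localhost URL. Assuming the endpoint is still composed from the insecure port used previously, its definition would look roughly like this (an assumption for illustration, not a quote from the role defaults):
# assumed shape of the variable, for illustration only
kube_apiserver_insecure_endpoint: "http://localhost:{{ kube_apiserver_insecure_port }}"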

View File

@@ -1,3 +1,22 @@
---
- name: Kubernetes Apps | Check if netchecker-server manifest already exists
stat:
path: "{{ kube_config_dir }}/netchecker-server-deployment.yml.j2"
register: netchecker_server_manifest
tags: ['facts', 'upgrade']
- name: Kubernetes Apps | Apply netchecker-server manifest to update annotations
kube:
name: "netchecker-server"
namespace: "{{ netcheck_namespace }}"
filename: "{{ netchecker_server_manifest.stat.path }}"
kubectl: "{{bin_dir}}/kubectl"
resource: "deploy"
state: latest
when: inventory_hostname == groups['kube-master'][0] and netchecker_server_manifest.stat.exists
tags: upgrade
- name: Kubernetes Apps | Lay Down Netchecker Template
template:
src: "{{item.file}}"
@@ -24,18 +43,6 @@
state: absent
when: inventory_hostname == groups['kube-master'][0]
#FIXME: remove if kubernetes/features#124 is implemented
- name: Kubernetes Apps | Purge old Netchecker daemonsets
kube:
name: "{{item.item.name}}"
namespace: "{{netcheck_namespace}}"
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.file}}"
state: absent
with_items: "{{ manifests.results }}"
when: inventory_hostname == groups['kube-master'][0] and item.item.type == "ds" and item.changed
- name: Kubernetes Apps | Start Netchecker Resources
kube:
name: "{{item.item.name}}"
@@ -43,7 +50,6 @@
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.file}}"
state: "{{item.changed | ternary('latest','present') }}"
state: "latest"
with_items: "{{ manifests.results }}"
failed_when: manifests|failed and "Error from server (AlreadyExists)" not in manifests.msg
when: inventory_hostname == groups['kube-master'][0]
when: inventory_hostname == groups['kube-master'][0] and not item|skipped
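These netchecker tasks only run when deploy_netchecker is truthy and only on the first master. A minimal sketch of enabling the check from group vars; the netcheck_namespace value is an assumption for illustration:
# group_vars/k8s-cluster.yml (illustrative)
deploy_netchecker: true
netcheck_namespace: default   # assumed value; the tasks above only require that the variable is set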

View File

@@ -0,0 +1,110 @@
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy head version of the Dashboard UI compatible with
# Kubernetes 1.6 (RBAC enabled).
#
# Example usage: kubectl create -f <this_file>
{% if rbac_enabled %}
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: {{ system_namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: {{ system_namespace }}
{% endif %}
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: {{ system_namespace }}
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: {{ dashboard_image_repo }}:{{ dashboard_image_tag }}
# Image is tagged and updated with :head, so always pull it.
imagePullPolicy: Always
resources:
limits:
cpu: {{ dashboard_cpu_limit }}
memory: {{ dashboard_memory_limit }}
requests:
cpu: {{ dashboard_cpu_requests }}
memory: {{ dashboard_memory_requests }}
ports:
- containerPort: 9090
protocol: TCP
args:
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
{% if rbac_enabled %}
serviceAccountName: kubernetes-dashboard
{% endif %}
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: {{ system_namespace }}
spec:
ports:
- port: 80
targetPort: 9090
selector:
k8s-app: kubernetes-dashboard
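The Service above declares no explicit type, so it is a ClusterIP service reachable only from inside the cluster. A sketch of an alternative NodePort exposure, purely illustrative and not part of this template:
# illustrative alternative, not part of the shipped manifest
kind: Service
apiVersion: v1
metadata:
  name: kubernetes-dashboard
  namespace: kube-system   # assumed; the template renders {{ system_namespace }}
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 9090
      nodePort: 30090   # example value inside the default 30000-32767 NodePort range
  selector:
    k8s-app: kubernetes-dashboard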

View File

@@ -1,3 +1,4 @@
---
# Copyright 2016 The Kubernetes Authors. All rights reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");

View File

@@ -1,3 +1,4 @@
---
# Copyright 2016 The Kubernetes Authors. All rights reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");

View File

@@ -1,3 +1,4 @@
---
# Copyright 2016 The Kubernetes Authors. All rights reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");

View File

@@ -1,3 +1,4 @@
---
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -26,26 +27,26 @@ spec:
metadata:
labels:
k8s-app: kubedns-autoscaler
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
tolerations:
- effect: NoSchedule
operator: Exists
containers:
- name: autoscaler
image: "{{ kubednsautoscaler_image_repo }}:{{ kubednsautoscaler_image_tag }}"
resources:
requests:
cpu: "20m"
memory: "10Mi"
requests:
cpu: "20m"
memory: "10Mi"
command:
- /cluster-proportional-autoscaler
- --namespace={{ system_namespace }}
- --configmap=kubedns-autoscaler
# Should keep target in sync with cluster/addons/dns/kubedns-controller.yaml.base
- --target=Deployment/kube-dns
- --default-params={"linear":{"nodesPerReplica":{{ kubedns_nodes_per_replica }},"min":{{ kubedns_min_replicas }}}}
- --logtostderr=true
- --v=2
- /cluster-proportional-autoscaler
- --namespace={{ system_namespace }}
- --configmap=kubedns-autoscaler
# Should keep target in sync with cluster/addons/dns/kubedns-controller.yaml.base
- --target=Deployment/kube-dns
- --default-params={"linear":{"nodesPerReplica":{{ kubedns_nodes_per_replica }},"min":{{ kubedns_min_replicas }}}}
- --logtostderr=true
- --v=2
{% if rbac_enabled %}
serviceAccountName: cluster-proportional-autoscaler
{% endif %}
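The --default-params line above is driven by two variables referenced in the template. A minimal sketch of overriding them from group vars; the numbers are example values, not the role defaults:
# group_vars/k8s-cluster.yml (illustrative)
kubedns_nodes_per_replica: 10   # add one kube-dns replica per this many nodes
kubedns_min_replicas: 2         # never scale below this many replicas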

View File

@@ -1,3 +1,4 @@
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
@@ -29,6 +30,8 @@ spec:
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
- effect: NoSchedule
operator: Exists
volumes:
- name: kube-dns-config
configMap:

View File

@@ -1,3 +1,4 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:

View File

@@ -1,3 +1,4 @@
---
apiVersion: v1
kind: Service
metadata:
@@ -19,4 +20,3 @@ spec:
- name: dns-tcp
port: 53
protocol: TCP

View File

@@ -12,6 +12,9 @@ spec:
labels:
app: netchecker-agent
spec:
tolerations:
- effect: NoSchedule
operator: Exists
containers:
- name: netchecker-agent
image: "{{ agent_img }}"
@@ -37,3 +40,7 @@ spec:
requests:
cpu: {{ netchecker_agent_cpu_requests }}
memory: {{ netchecker_agent_memory_requests }}
updateStrategy:
rollingUpdate:
maxUnavailable: 100%
type: RollingUpdate

View File

@@ -16,6 +16,9 @@ spec:
{% if kube_version | version_compare('v1.6', '>=') %}
dnsPolicy: ClusterFirstWithHostNet
{% endif %}
tolerations:
- effect: NoSchedule
operator: Exists
containers:
- name: netchecker-agent
image: "{{ agent_img }}"
@@ -41,3 +44,7 @@ spec:
requests:
cpu: {{ netchecker_agent_cpu_requests }}
memory: {{ netchecker_agent_memory_requests }}
updateStrategy:
rollingUpdate:
maxUnavailable: 100%
type: RollingUpdate

View File

@@ -25,12 +25,14 @@ spec:
memory: {{ netchecker_server_memory_requests }}
ports:
- containerPort: 8081
hostPort: 8081
args:
- "-v=5"
- "-logtostderr"
- "-kubeproxyinit"
- "-endpoint=0.0.0.0:8081"
tolerations:
- effect: NoSchedule
operator: Exists
{% if rbac_enabled %}
serviceAccountName: netchecker-server
{% endif %}

View File

@@ -1,5 +1,5 @@
---
elasticsearch_cpu_limit: 1000m
elasticsearch_cpu_limit: 1000m
elasticsearch_mem_limit: 0M
elasticsearch_cpu_requests: 100m
elasticsearch_mem_requests: 0M

View File

@@ -1,3 +1,4 @@
---
dependencies:
- role: download
file: "{{ downloads.elasticsearch }}"

View File

@@ -10,7 +10,7 @@
when: rbac_enabled
- name: "ElasticSearch | Create Serviceaccount and Clusterrolebinding (RBAC)"
command: "kubectl apply -f {{ kube_config_dir }}/{{ item }} -n {{ system_namespace }}"
command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/{{ item }} -n {{ system_namespace }}"
with_items:
- "efk-sa.yml"
- "efk-clusterrolebinding.yml"
@@ -38,4 +38,3 @@
command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/elasticsearch-service.yaml -n {{ system_namespace }}"
run_once: true
when: es_service_manifest.changed

View File

@@ -1,3 +1,4 @@
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:

View File

@@ -1,3 +1,4 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:

View File

@@ -1,5 +1,5 @@
---
fluentd_cpu_limit: 0m
fluentd_cpu_limit: 0m
fluentd_mem_limit: 200Mi
fluentd_cpu_requests: 100m
fluentd_mem_requests: 200Mi

View File

@@ -1,3 +1,4 @@
---
dependencies:
- role: download
file: "{{ downloads.fluentd }}"

View File

@@ -20,4 +20,3 @@
command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/fluentd-ds.yaml -n {{ system_namespace }}"
run_once: true
when: fluentd_ds_manifest.changed

View File

@@ -17,6 +17,9 @@ spec:
kubernetes.io/cluster-service: "true"
version: "v{{ fluentd_version }}"
spec:
tolerations:
- effect: NoSchedule
operator: Exists
containers:
- name: fluentd-es
image: "{{ fluentd_image_repo }}:{{ fluentd_image_tag }}"
@@ -55,4 +58,3 @@ spec:
{% if rbac_enabled %}
serviceAccountName: efk
{% endif %}

View File

@@ -1,5 +1,5 @@
---
kibana_cpu_limit: 100m
kibana_cpu_limit: 100m
kibana_mem_limit: 0M
kibana_cpu_requests: 100m
kibana_mem_requests: 0M

Some files were not shown because too many files have changed in this diff.