Compare commits

...

105 Commits

Author SHA1 Message Date
Brad Beam
ba0a03a8ba Merge pull request #1880 from mattymo/node_auth_fixes2
Move cluster roles and system namespace to new role
2017-10-26 10:02:24 -05:00
Matthew Mosesohn
b0f04d925a Update network policy setting for Kubernetes 1.8 (#1879)
It is now enabled by default in 1.8, with the API changed
to networking.k8s.io/v1 from extensions/v1beta1.
2017-10-26 15:35:26 +01:00
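For context, a minimal manifest under the new API looks like this (an illustrative sketch, not taken from this changeset; the namespace and name are made up):

    # Default-deny ingress for every pod in a namespace, using the
    # networking.k8s.io/v1 API that replaces extensions/v1beta1 in 1.8.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: demo        # hypothetical namespace
    spec:
      podSelector: {}        # empty selector matches all pods
      policyTypes:
        - Ingress            # no ingress rules given, so all ingress is denied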
Matthew Mosesohn
7b78e68727 disable idempotency tests (#1872) 2017-10-26 15:35:12 +01:00
Matthew Mosesohn
ec53b8b66a Move cluster roles and system namespace to new role
This should be done after kubeconfig is set for admin and
before network plugins are up.
2017-10-26 14:36:05 +01:00
Matthew Mosesohn
86fb669fd3 Idempotency fixes (#1838) 2017-10-25 21:19:40 +01:00
Matthew Mosesohn
7123956ecd update checksum for kubeadm (#1869) 2017-10-25 21:15:16 +01:00
Spencer Smith
46cf6b77cf Merge pull request #1857 from pmontanari/patch-1
Use same kubedns_version: 1.14.5 in downloads and kubernetes-apps/ansible roles
2017-10-25 10:05:43 -04:00
Matthew Mosesohn
a52bc44f5a Fix broken CI jobs (#1854)
* Fix broken CI jobs

Adjust image and image_family scenarios for debian.
Checkout CI file for upgrades

* add debugging to file download

* Fix download for alternate playbooks

* Update ansible ssh args to force ssh user

* Update sync_container.yml
2017-10-25 11:45:54 +01:00
Matthew Mosesohn
acb63a57fa Only limit etcd memory on small hosts (#1860)
Also disable oom killer on etcd
2017-10-25 10:25:15 +01:00
Flavio Percoco Premoli
5b08277ce4 Access dict item's value keys using .value (#1865) 2017-10-24 20:49:36 +01:00
Chiang Fong Lee
5dc56df64e Fix ordering of kube-apiserver admission control plug-ins (#1841) 2017-10-24 17:28:07 +01:00
Matthew Mosesohn
33c4d64b62 Make ClusterRoleBinding to admit all nodes with right cert (#1861)
This is to work around #1856 which can occur when kubelet
hostname and resolvable hostname (or cloud instance name)
do not match.
2017-10-24 17:05:58 +01:00
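A sketch of such a binding, assuming the workaround grants the node role to the system:nodes certificate group (names are illustrative, not the exact manifest from #1861):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubelet-node-binding      # hypothetical name
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:node
    subjects:
      # Any kubelet whose client cert puts it in the system:nodes group
      # is admitted, even if its hostname does not resolve cleanly.
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: system:nodes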
Matthew Mosesohn
25de6825df Update Kubernetes to v1.8.1 (#1858) 2017-10-24 17:05:45 +01:00
Peter Lee
0b60201a1e fix etcd health check bug (#1480) 2017-10-24 16:10:56 +01:00
Haiwei Liu
cfea99c4ee Fix scale.yml to support kubeadm (#1863)
Signed-off-by: Haiwei Liu <carllhw@gmail.com>
2017-10-24 16:08:48 +01:00
Matthew Mosesohn
cea41a544e Use include instead of import tasks to support v2.3 (#1855)
Eventually 2.3 support will be dropped, so this is
a temporary change.
2017-10-23 13:56:03 +01:00
pmontanari
8371a060a0 Update main.yml
Match kubedns_version with roles/download/defaults/main.yml:kubedns_version: 1.14.5
2017-10-22 23:48:51 +02:00
Matthew Mosesohn
7ed140cea7 Update refs to kubernetes version to v1.8.0 (#1845) 2017-10-20 08:29:28 +01:00
Matthew Mosesohn
cb97c2184e typo fix for ci job name (#1847) 2017-10-20 08:26:42 +01:00
Matthew Mosesohn
0b4fcc83bd Fix up warnings and deprecations (#1848) 2017-10-20 08:25:57 +01:00
Matthew Mosesohn
514359e556 Improve etcd scale up (#1846)
Now adding unjoined members to an existing etcd cluster
occurs one at a time so that the cluster does not
lose quorum.
2017-10-20 08:02:31 +01:00
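A minimal sketch of the one-at-a-time join pattern (task layout and variable names are assumptions, not the literal Kubespray tasks):

    - name: Join unjoined etcd members one at a time
      hosts: etcd
      serial: 1                        # one host per batch preserves quorum
      tasks:
        - name: Register the new member with the existing cluster
          command: >-
            etcdctl member add {{ inventory_hostname }}
            --peer-urls=https://{{ ip }}:2380
          delegate_to: "{{ groups['etcd'][0] }}"
          when: inventory_hostname not in etcd_joined_members  # hypothetical fact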
Peter Slijkhuis
55b9d02a99 Update README.md (#1843)
Changed Ansible 2.3 to 2.4
2017-10-19 13:49:04 +01:00
Matthew Mosesohn
fc9a65be2b Refactor downloads to use download role directly (#1824)
* Refactor downloads to use download role directly

Also disable fact delegation so download delegate works across OSes.

* clean up bools and ansible_os_family conditionals
2017-10-19 09:17:11 +01:00
Jan Jungnickel
49dff97d9c Relabel controller-manager to kube-controller-manager (#1830)
Fixes #1129
2017-10-18 17:29:18 +01:00
Matthew Mosesohn
4efb0b78fa Move CI vars out of gitlab and into var files (#1808) 2017-10-18 17:28:54 +01:00
Hassan Zamani
c9fe8fde59 Use fail-swap-on flag only for kube_version >= 1.8 (#1829) 2017-10-18 16:32:38 +01:00
Simon Li
74d54946bf Add note that glusterfs is not automatically deployed (#1834) 2017-10-18 13:26:14 +01:00
Matthew Mosesohn
16462292e1 Properly skip extra SANs when not specified for kubeadm (#1831) 2017-10-18 12:04:13 +01:00
Aivars Sterns
7ef1e1ef9d update terraform, fix deprecated values add default_tags, fix ansible inventory (#1821) 2017-10-18 11:44:32 +01:00
pmontanari
20d80311f0 Update main.yml (#1822)
* Update main.yml

resolv.conf needs to be set up before updating the Yum cache, otherwise no name resolution is available (resolv.conf empty).

* Update main.yml

Removing trailing spaces
2017-10-18 11:42:00 +01:00
Tim(Xiaoyu) Zhang
f1a1f53f72 fix slack URL (#1832) 2017-10-18 10:32:47 +01:00
Matthew Mosesohn
c766bd077b Use batch mode for graceful docker/rkt upgrade (#1815) 2017-10-17 14:12:11 +01:00
Tennis Smith
54320c5b09 set to 3 digit version number (#1817) 2017-10-17 11:14:29 +01:00
Seungkyu Ahn
291b71ea3b Changing default value string to boolean. (#1669)
When downloading containers or files, use a boolean
as the default value.
2017-10-17 11:14:12 +01:00
Rémi de Passmoilesel
356515222a Add possibility to insert more IP addresses in certificates (#1678)
* Add possibility to insert more IP addresses in certificates

* Add newline at end of files

* Move supplementary IP parameters to k8s-cluster group file

* Add supplementary addresses in kubeadm master role

* Improve openssl indexes
2017-10-17 11:06:07 +01:00
Aivars Sterns
688e589e0c fix #1788: lock dashboard to version 1.6.3 while 1.7.x is not working (#1805) 2017-10-17 11:04:55 +01:00
刘旭
6c98201aa4 remove kube-dns versions and images in kubernetes-apps/ansible/defaults/main.yaml (#1807) 2017-10-17 11:03:53 +01:00
Matthew Mosesohn
d4b10eb9f5 Fix path for calico get node names (#1816) 2017-10-17 10:54:48 +01:00
Jiří Stránský
728d56e74d Only write bastion ssh config when needed (#1810)
This will allow running Kubespray when the user who runs it doesn't
have write permissions to the Kubespray dir, at least when not using
bastion.
2017-10-17 10:28:45 +01:00
Matthew Mosesohn
a9f4038fcd Update roadmap (#1814) 2017-10-16 17:02:53 +01:00
neith00
77f1d4b0f1 Revert "Update roadmap" (#1809)
* Revert "Debian jessie docs (#1806)"

This reverts commit d78577c810.

* Revert "[contrib/network-storage/glusterfs] adds service for glusterfs endpoint (#1800)"

This reverts commit 5fb6b2eaf7.

* Revert "[contrib/network-storage/glusterfs] bootstrap for glusterfs nodes (#1799)"

This reverts commit 404caa111a.

* Revert "Fixed kubelet standard log environment (#1780)"

This reverts commit b838468500.

* Revert "Add support for fedora atomic host (#1779)"

This reverts commit f2235be1d3.

* Revert "Update network-plugins to use portmap plugin (#1763)"

This reverts commit 6ec45b10f1.

* Revert "Update roadmap (#1795)"

This reverts commit d9879d8026.
2017-10-16 14:09:24 +01:00
Marc Zahn
d78577c810 Debian jessie docs (#1806)
* Add Debian Jessie notes

* Add installation notes for Debian Jessie
2017-10-16 09:02:12 +01:00
Pablo Moreno
5fb6b2eaf7 [contrib/network-storage/glusterfs] adds service for glusterfs endpoint (#1800) 2017-10-16 08:48:29 +01:00
Pablo Moreno
404caa111a [contrib/network-storage/glusterfs] bootstrap for glusterfs nodes (#1799) 2017-10-16 08:23:38 +01:00
Seungkyu Ahn
b838468500 Fixed kubelet standard log environment (#1780)
Change KUBE_LOGGING to KUBE_LOGTOSTDERR when installing kubelet
as host type.
2017-10-16 08:22:54 +01:00
Jason Brooks
f2235be1d3 Add support for fedora atomic host (#1779)
* don't try to install this rpm on fedora atomic

* add docker 1.13.1 for fedora

* built-in docker unit file is sufficient, as tested on both fedora and centos atomic
2017-10-16 08:03:33 +01:00
Kevin Lefevre
6ec45b10f1 Update network-plugins to use portmap plugin (#1763)
Portmap allows using hostPort with CNI plugins. Should fix #1675
2017-10-16 07:11:38 +01:00
Matthew Mosesohn
d9879d8026 Update roadmap (#1795) 2017-10-16 07:06:06 +01:00
Matthew Mosesohn
d487b2f927 Security best practice fixes (#1783)
* Disable basic and token auth by default

* Add recommended security params

* allow basic auth to fail in tests

* Enable TLS authentication for kubelet
2017-10-15 20:41:17 +01:00
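A sketch of the kind of toggles involved (variable names are illustrative assumptions, not necessarily Kubespray's exact defaults):

    # group_vars sketch: tighten API server and kubelet authentication
    kube_basic_auth: false                         # basic auth off by default
    kube_token_auth: false                         # static token auth off by default
    kubelet_authentication_token_webhook: true     # kubelet verifies bearer tokens
    kubelet_authorization_mode_webhook: true       # kubelet defers authz to the API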
Julian Poschmann
66e5e14bac Restart kubelet on update when deployment type is host (#1759)
* Restart kubelet on update when deployment type is host

* Update install_host.yml

* Update install_host.yml

* Update install_host.yml
2017-10-15 20:22:17 +01:00
Matthew Mosesohn
7e4668859b Change file used to check kubeadm upgrade method (#1784)
* Change file used to check kubeadm upgrade method

Test for ca.crt instead of admin.conf because admin.conf
is created during normal deployment.

* more fixes for upgrade
2017-10-15 10:33:22 +01:00
Matthew Mosesohn
92d038062e Fix node authorization for cloudprovider installs (#1794)
In 1.8, the Node authorization mode should be listed first to
allow kubelet to access secrets. This seems to only impact
environments with cloudprovider enabled.
2017-10-14 11:28:46 +01:00
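Per the commit message, Node must come before RBAC in the authorizer list; a sketch of the resulting flag (illustrative rendering, not the exact Kubespray template):

    # kube-apiserver authorization order: the Node authorizer handles
    # kubelet requests (e.g. reading secrets for its pods) before RBAC.
    authorization_modes:
      - Node
      - RBAC
    # rendered as: --authorization-mode=Node,RBAC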
abelgana
2972bceb90 Change raw execution to use yum module (#1785)
* Change raw execution to use yum module

Changed raw execution to use the yum module provided by Ansible.

* Replace ansible_ssh_* by ansible_*

Ansible 2.0 deprecated the “ssh” part of ansible_ssh_user, ansible_ssh_host, and ansible_ssh_port; they became ansible_user, ansible_host, and ansible_port. If you are using a version of Ansible prior to 2.0, you should continue using the older style variables (ansible_ssh_*), as the shorter variables are ignored, without warning, in older versions of Ansible.

I am not sure about the broader impact of this change, but the requirements already specify ansible>=2.4.0.

http://docs.ansible.com/ansible/latest/intro_inventory.html
2017-10-14 09:52:40 +01:00
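A sketch of both changes described above (illustrative task and inventory lines, not the literal diff; the package name is an assumption):

    - name: Install a package with the yum module instead of a raw call
      yum:
        name: libselinux-python    # assumed example package
        state: present

    # Inventory rename, old vs. new style:
    #   node1 ansible_ssh_host=10.0.0.1 ansible_ssh_user=admin
    #   node1 ansible_host=10.0.0.1 ansible_user=admin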
刘旭
cb0a60a0fe calico v2.5.0 should use calico/routereflector:v0.4.0 (#1792) 2017-10-14 09:51:48 +01:00
Matthew Mosesohn
3ee91e15ff Use commas in no_proxy (#1782) 2017-10-13 15:43:10 +01:00
Matthew Mosesohn
ef47a73382 Add new addon Istio (#1744)
* add istio addon

* add addons to a ci job
2017-10-13 15:42:54 +01:00
Matthew Mosesohn
dc515e5ac5 Remove kernel-upgrade role (#1798)
This role only supports Red Hat type distros and is not
maintained or used by many users. It should be removed because
it creates feature disparity between supported OSes.
2017-10-13 15:36:21 +01:00
Julian Poschmann
56763d4288 Persist br_netfilter module loading (#1760) 2017-10-13 10:50:29 +01:00
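A minimal sketch of loading a kernel module now and persisting it across reboots (file path follows the systemd modules-load.d convention; task names are made up):

    - name: Load br_netfilter immediately
      modprobe:
        name: br_netfilter
        state: present

    - name: Load br_netfilter on boot
      copy:
        content: br_netfilter
        dest: /etc/modules-load.d/br_netfilter.conf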
Maxim Krasilnikov
ad9fa73301 Remove cert_management var definition from k8s-cluster group vars (#1790) 2017-10-13 10:21:39 +01:00
Matthew Mosesohn
10dd049912 Revert "Security fixes for etcd (#1778)" (#1786)
This reverts commit 4209f1cbfd.
2017-10-12 14:02:51 +01:00
Matthew Mosesohn
4209f1cbfd Security fixes for etcd (#1778)
* Security fixes for etcd

* Use certs when querying etcd
2017-10-12 13:32:54 +01:00
Matthew Mosesohn
ee83e874a8 Clear admin kubeconfig when rotating certs (#1772)
* Clear admin kubeconfig when rotating certs

* Update main.yml
2017-10-12 09:55:46 +01:00
Vijay Katam
27ed73e3e3 Rename dns_server, add var for selinux. (#1572)
* Rename dns_server to dnsmasq_dns_server so that it includes the role prefix,
as the generic var name conflicts when integrating with existing ansible automation.
* Enable selinux state to be configurable with new var preinstall_selinux_state
2017-10-11 20:40:21 +01:00
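A group_vars sketch of the renamed and new variables (values are examples only):

    dnsmasq_dns_server: 10.233.0.2         # was dns_server; now carries the role prefix
    preinstall_selinux_state: permissive   # new knob; e.g. enforcing, permissive, disabled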
Aivars Sterns
e41c0532e3 add possibility to disable fail with swap (#1773) 2017-10-11 19:49:31 +01:00
Matthew Mosesohn
eeb7274d65 Adjust memory reservation for master nodes (#1769) 2017-10-11 19:47:42 +01:00
Matthew Mosesohn
eb0dcf6063 Improve proxy (#1771)
* Set no_proxy to all local ips

* Use proxy settings on all necessary tasks
2017-10-11 19:47:27 +01:00
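Together with the earlier "Use commas in no_proxy" fix (#1782), proxy settings end up looking roughly like this (example values, not Kubespray defaults):

    http_proxy: "http://proxy.example.com:3128"
    https_proxy: "http://proxy.example.com:3128"
    # Comma-separated, covering all local IPs so node-to-node
    # traffic bypasses the proxy:
    no_proxy: "localhost,127.0.0.1,10.233.0.0/18,10.0.0.10,10.0.0.11"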
Matthew Mosesohn
83be0735cd Fix setting etcd client cert serial (#1775) 2017-10-11 19:47:11 +01:00
Matthew Mosesohn
fe4ba51d1a Set node IP correctly (#1770)
Fixes #1741
2017-10-11 15:28:42 +01:00
Hyunsun Moon
adf575b75e Set default value for disable_shared_pid (#1710)
PID namespace sharing is disabled only in Kubernetes 1.7.
Explicitly enabling it by default could help reduce unexpected
results when upgrading to or downgrading from 1.7.
2017-10-11 14:55:51 +01:00
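A one-line sketch of pinning the behavior explicitly instead of inheriting whatever the kubelet/docker default is (variable name taken from the commit title; the value shown is only an example):

    disable_shared_pid: true   # pin one behavior on both sides of a 1.7 up/downgrade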
Spencer Smith
e5426f74a8 Merge pull request #1762 from manics/bindir-helm
Include bin_dir when patching helm tiller with kubectl
2017-10-10 10:40:47 -04:00
Spencer Smith
f5212d3b79 Merge pull request #1752 from pmontanari/patch-1
Force synchronize to use ssh_args so it works when using bastion
2017-10-10 10:40:01 -04:00
Spencer Smith
3d09c4be75 Merge pull request #1756 from kubernetes-incubator/fix_bool_assert
Fix bool check assert
2017-10-10 10:38:53 -04:00
Spencer Smith
f2db15873d Merge pull request #1754 from ArchiFleKs/rkt-kubelet-fix
add hosts to rkt kubelet
2017-10-10 10:37:36 -04:00
ArchiFleKs
7c663de6c9 add /etc/hosts volume to rkt templates 2017-10-09 16:41:51 +02:00
Simon Li
c14bbcdbf2 Include bin_dir when patching helm tiller with kubectl 2017-10-09 15:17:52 +01:00
ant31
1be4c1935a Fix bool check assert 2017-10-06 17:02:38 +00:00
pmontanari
764b1aa5f8 Force synchronize to use ssh_args so it works when using bastion
If the ssh config is set to use a bastion, synchronize needs to use it too.
2017-10-06 00:21:54 +02:00
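A sketch of the fix's effect on a synchronize task (illustrative task; use_ssh_args is the stock synchronize option that re-applies ssh_args from ansible.cfg, e.g. -F ssh-bastion.conf):

    - name: Copy files through the bastion
      synchronize:
        src: "{{ local_release_dir }}/"
        dest: "{{ bin_dir }}/"
        use_ssh_args: yes    # without this, the bastion ProxyCommand is ignored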
Spencer Smith
d13b07ba59 Merge pull request #1751 from bradbeam/calicoprometheus
Adding calico/node env vars for prometheus configuration
2017-10-05 17:29:12 -04:00
Spencer Smith
028afab908 Merge pull request #1750 from bradbeam/dnsmasq2
Followup fix for CVE-2017-14491
2017-10-05 17:28:28 -04:00
Brad Beam
55dfae2a52 Followup fix for CVE-2017-14491 2017-10-05 11:31:04 -05:00
Matthew Mosesohn
994324e19c Update gce CI (#1748)
Use image family for picking latest coreos image
Update python deps
2017-10-05 16:52:28 +01:00
Brad Beam
b81c0d869c Adding calico/node env vars for prometheus configuration 2017-10-05 08:46:01 -05:00
Matthew Mosesohn
f14f04c5ea Upgrade to kubernetes v1.8.0 (#1730)
* Upgrade to kubernetes v1.8.0

hyperkube no longer contains rsync, so use cp instead

* Enable node authorization mode

* change kube-proxy cert group name
2017-10-05 10:51:21 +01:00
Aivars Sterns
9c86da1403 Normalize tags in all places to prepare for tag fixing in future (#1739) 2017-10-05 08:43:04 +01:00
Spencer Smith
cb611b5ed0 Merge pull request #1742 from mattymo/facts_as_vars
Move set_facts to kubespray-defaults defaults
2017-10-04 15:46:39 -04:00
Spencer Smith
891269ef39 Merge pull request #1743 from rsmitty/kube-client
Don't delegate cert gathering before creating admin.conf
2017-10-04 15:38:21 -04:00
Spencer Smith
ab171a1d6d don't delegate cert slurp 2017-10-04 13:06:51 -04:00
Matthew Mosesohn
a56738324a Move set_facts to kubespray-defaults defaults
These facts can be generated in defaults with a performance
boost.

Also cleaned up duplicate etcd var names.
2017-10-04 14:02:47 +01:00
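A sketch of the pattern (illustrative variable; real names differ): a value that used to be produced by a set_fact task on every host can live in roles/kubespray-defaults/defaults/main.yml as a templated default, which is only evaluated when referenced:

    # defaults/main.yml - no per-host set_fact task needed
    etcd_access_addresses: "{% for host in groups['etcd'] %}https://{{ hostvars[host]['ip'] }}:2379{% if not loop.last %},{% endif %}{% endfor %}"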
Maxim Krasilnikov
da61b8e7c9 Added workaround for vagrant 1.9 and centos vm box (#1738) 2017-10-03 11:32:19 +01:00
Maxim Krasilnikov
d6d58bc938 Fixed vagrant up with flannel network, removed old config values (#1737) 2017-10-03 11:16:13 +01:00
Matthew Mosesohn
e42cb43ca5 add bootstrap for debian (#1726) 2017-10-03 08:30:45 +01:00
Brad Beam
ca541c7e4a Ensuring vault service is stopped in reset tasks (#1736) 2017-10-03 08:30:28 +01:00
Brad Beam
96e14424f0 Adding kubedns update for CVE-2017-14491 (#1735) 2017-10-03 08:30:14 +01:00
Brad Beam
47830896e8 Merge pull request #1733 from chapsuk/vagrant_mem
Increase vagrant vm's memory size
2017-10-02 15:45:37 -05:00
mkrasilnikov
5fd4b4afae Increase vagrant vm's memory size 2017-10-02 23:16:39 +03:00
Matthew Mosesohn
dae9f6d3c2 Test if tokens are expired from host instead of inside container (#1727)
* Test if tokens are expired from host instead of inside container

* Update main.yml
2017-10-02 13:14:50 +01:00
Julian Poschmann
8e1210f96e Fix cluster-network w/ prefix > 25 not possible with CNI (#1713) 2017-10-01 10:43:00 +01:00
Matthew Mosesohn
56aa683f28 Fix logic in idempotency tests in CI (#1722) 2017-10-01 10:42:33 +01:00
Brad Beam
1b9a6d7ad8 Merge pull request #1672 from manics/bastion-proxycommand-newline
Insert a newline in bastion ssh config after ProxyCommand conditional
2017-09-29 11:37:47 -05:00
Brad Beam
f591c4db56 Merge pull request #1720 from shiftky/improve_integration_doc
Improve playbook example of integration document
2017-09-29 11:34:44 -05:00
Peter Slijkhuis
371fa51e82 Make installation of EPEL optional (#1721) 2017-09-29 13:44:29 +01:00
shiftky
a927ed2da4 Improve playbook example of integration document 2017-09-29 18:00:01 +09:00
Matthew Mosesohn
a55675acf8 Enable RBAC with kubeadm always (#1711) 2017-09-29 09:18:24 +01:00
Matthew Mosesohn
25dd3d476a Fix error for azure+calico assert (#1717)
Fixes #1716
2017-09-29 08:17:18 +01:00
Simon Li
7c2b12ebd7 Insert a newline in bastion ssh config after ProxyCommand conditional 2017-09-18 16:29:12 +01:00
187 changed files with 3261 additions and 1279 deletions

.gitignore

@@ -10,6 +10,7 @@ temp
 *.bak
 *.tfstate
 *.tfstate.backup
+contrib/terraform/aws/credentials.tfvars
 **/*.sw[pon]
 /ssh-bastion.conf
 **/*.sw[pon]

.gitlab-ci.yml

@@ -20,7 +20,6 @@ variables:
 before_script:
   - pip install -r tests/requirements.txt
   - mkdir -p /.ssh
-  - cp tests/ansible.cfg .

 .job: &job
   tags:

@@ -40,27 +39,20 @@ before_script:
   GCE_USER: travis
   SSH_USER: $GCE_USER
   TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
+  CI_TEST_VARS: "./tests/files/${CI_JOB_NAME}.yml"
   CONTAINER_ENGINE: docker
   PRIVATE_KEY: $GCE_PRIVATE_KEY
   GS_ACCESS_KEY_ID: $GS_KEY
   GS_SECRET_ACCESS_KEY: $GS_SECRET
   CLOUD_MACHINE_TYPE: "g1-small"
+  GCE_PREEMPTIBLE: "false"
   ANSIBLE_KEEP_REMOTE_FILES: "1"
   ANSIBLE_CONFIG: ./tests/ansible.cfg
-  BOOTSTRAP_OS: none
-  DOWNLOAD_LOCALHOST: "false"
-  DOWNLOAD_RUN_ONCE: "false"
   IDEMPOT_CHECK: "false"
   RESET_CHECK: "false"
   UPGRADE_TEST: "false"
   KUBEADM_ENABLED: "false"
-  RESOLVCONF_MODE: docker_dns
   LOG_LEVEL: "-vv"
-  ETCD_DEPLOYMENT: "docker"
-  KUBELET_DEPLOYMENT: "host"
-  VAULT_DEPLOYMENT: "docker"
-  WEAVE_CPU_LIMIT: "100m"
-  AUTHORIZATION_MODES: "{ 'authorization_modes': [] }"
   MAGIC: "ci check this"

 .gce: &gce

@@ -81,7 +73,9 @@ before_script:
   - echo $GCE_CREDENTIALS > $HOME/.ssh/gce.json
   - chmod 400 $HOME/.ssh/id_rsa
   - ansible-playbook --version
-  - export PYPATH=$([ $BOOTSTRAP_OS = none ] && echo /usr/bin/python || echo /opt/bin/python)
+  - export PYPATH=$([[ ! "$CI_JOB_NAME" =~ "coreos" ]] && echo /usr/bin/python || echo /opt/bin/python)
+  - echo "CI_JOB_NAME is $CI_JOB_NAME"
+  - echo "PYPATH is $PYPATH"
 script:
   - pwd
   - ls
@@ -90,48 +84,34 @@ before_script:
   - >
     ansible-playbook tests/cloud_playbooks/create-gce.yml -i tests/local_inventory/hosts.cfg -c local
     ${LOG_LEVEL}
-    -e cloud_image=${CLOUD_IMAGE}
-    -e cloud_region=${CLOUD_REGION}
     -e gce_credentials_file=${HOME}/.ssh/gce.json
     -e gce_project_id=${GCE_PROJECT_ID}
     -e gce_service_account_email=${GCE_ACCOUNT}
-    -e cloud_machine_type=${CLOUD_MACHINE_TYPE}
     -e inventory_path=${PWD}/inventory/inventory.ini
-    -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-    -e mode=${CLUSTER_MODE}
     -e test_id=${TEST_ID}
-    -e startup_script="'${STARTUP_SCRIPT}'"
+    -e preemptible=$GCE_PREEMPTIBLE

   # Check out latest tag if testing upgrade
   # Uncomment when gitlab kargo repo has tags
   #- test "${UPGRADE_TEST}" != "false" && git fetch --all && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
   - test "${UPGRADE_TEST}" != "false" && git checkout 72ae7638bcc94c66afa8620dfa4ad9a9249327ea
+  # Checkout the CI vars file so it is available
+  - test "${UPGRADE_TEST}" != "false" && git checkout "${CI_BUILD_REF}" tests/files/${CI_JOB_NAME}.yml

   # Create cluster
   - >
-    ansible-playbook -i inventory/inventory.ini -b --become-user=root --private-key=${HOME}/.ssh/id_rsa -u $SSH_USER
+    ansible-playbook
+    -i inventory/inventory.ini
+    -b --become-user=root
+    --private-key=${HOME}/.ssh/id_rsa
+    -u $SSH_USER
     ${SSH_ARGS}
     ${LOG_LEVEL}
+    -e @${CI_TEST_VARS}
     -e ansible_python_interpreter=${PYPATH}
     -e ansible_ssh_user=${SSH_USER}
-    -e bootstrap_os=${BOOTSTRAP_OS}
-    -e cloud_provider=gce
-    -e cert_management=${CERT_MGMT:-script}
-    -e "{deploy_netchecker: true}"
-    -e "{download_localhost: ${DOWNLOAD_LOCALHOST}}"
-    -e "{download_run_once: ${DOWNLOAD_RUN_ONCE}}"
-    -e etcd_deployment_type=${ETCD_DEPLOYMENT}
-    -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-    -e kubedns_min_replicas=1
-    -e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
     -e local_release_dir=${PWD}/downloads
-    -e resolvconf_mode=${RESOLVCONF_MODE}
-    -e vault_deployment_type=${VAULT_DEPLOYMENT}
-    -e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-    -e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-    -e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
-    -e "${AUTHORIZATION_MODES}"
     --limit "all:!fake_hosts"
     cluster.yml
@@ -141,27 +121,17 @@ before_script:
       test "${UPGRADE_TEST}" == "basic" && PLAYBOOK="cluster.yml";
       test "${UPGRADE_TEST}" == "graceful" && PLAYBOOK="upgrade-cluster.yml";
      git checkout "${CI_BUILD_REF}";
-      ansible-playbook -i inventory/inventory.ini -b --become-user=root --private-key=${HOME}/.ssh/id_rsa -u $SSH_USER
+      ansible-playbook
+      -i inventory/inventory.ini
+      -b --become-user=root
+      --private-key=${HOME}/.ssh/id_rsa
+      -u $SSH_USER
       ${SSH_ARGS}
       ${LOG_LEVEL}
+      -e @${CI_TEST_VARS}
       -e ansible_python_interpreter=${PYPATH}
       -e ansible_ssh_user=${SSH_USER}
-      -e bootstrap_os=${BOOTSTRAP_OS}
-      -e cloud_provider=gce
-      -e "{deploy_netchecker: true}"
-      -e "{download_localhost: ${DOWNLOAD_LOCALHOST}}"
-      -e "{download_run_once: ${DOWNLOAD_RUN_ONCE}}"
-      -e etcd_deployment_type=${ETCD_DEPLOYMENT}
-      -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-      -e kubedns_min_replicas=1
-      -e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
       -e local_release_dir=${PWD}/downloads
-      -e resolvconf_mode=${RESOLVCONF_MODE}
-      -e vault_deployment_type=${VAULT_DEPLOYMENT}
-      -e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-      -e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-      -e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
-      -e "${AUTHORIZATION_MODES}"
       --limit "all:!fake_hosts"
       $PLAYBOOK;
     fi
@@ -181,25 +151,16 @@ before_script:
   ## Idempotency checks 1/5 (repeat deployment)
   - >
     if [ "${IDEMPOT_CHECK}" = "true" ]; then
-      ansible-playbook -i inventory/inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS
-      -b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
+      ansible-playbook
+      -i inventory/inventory.ini
+      -b --become-user=root
       --private-key=${HOME}/.ssh/id_rsa
-      -e bootstrap_os=${BOOTSTRAP_OS}
-      -e cloud_provider=gce
+      -u $SSH_USER
+      ${SSH_ARGS}
+      ${LOG_LEVEL}
+      -e @${CI_TEST_VARS}
       -e ansible_python_interpreter=${PYPATH}
-      -e "{deploy_netchecker: true}"
-      -e "{download_localhost: ${DOWNLOAD_LOCALHOST}}"
-      -e "{download_run_once: ${DOWNLOAD_RUN_ONCE}}"
-      -e etcd_deployment_type=${ETCD_DEPLOYMENT}
-      -e kubedns_min_replicas=1
-      -e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
       -e local_release_dir=${PWD}/downloads
-      -e resolvconf_mode=${RESOLVCONF_MODE}
-      -e vault_deployment_type=${VAULT_DEPLOYMENT}
-      -e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
-      -e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-      -e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-      -e "${AUTHORIZATION_MODES}"
       --limit "all:!fake_hosts"
       cluster.yml;
     fi
@@ -207,20 +168,29 @@ before_script:
   ## Idempotency checks 2/5 (Advanced DNS checks)
   - >
     if [ "${IDEMPOT_CHECK}" = "true" ]; then
-      ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH}
-      -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root
+      ansible-playbook
+      -i inventory/inventory.ini
+      -b --become-user=root
+      --private-key=${HOME}/.ssh/id_rsa
+      -u $SSH_USER
+      ${SSH_ARGS}
+      ${LOG_LEVEL}
+      -e @${CI_TEST_VARS}
       --limit "all:!fake_hosts"
       tests/testcases/040_check-network-adv.yml $LOG_LEVEL;
     fi

   ## Idempotency checks 3/5 (reset deployment)
   - >
-    if [ "${IDEMPOT_CHECK}" = "true" AND "${RESET_CHECK}" = "true" ]; then
-      ansible-playbook -i inventory/inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS
-      -b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
+    if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
+      ansible-playbook
+      -i inventory/inventory.ini
+      -b --become-user=root
       --private-key=${HOME}/.ssh/id_rsa
-      -e bootstrap_os=${BOOTSTRAP_OS}
-      -e cloud_provider=gce
+      -u $SSH_USER
+      ${SSH_ARGS}
+      ${LOG_LEVEL}
+      -e @${CI_TEST_VARS}
       -e ansible_python_interpreter=${PYPATH}
       -e reset_confirmation=yes
       --limit "all:!fake_hosts"
@@ -229,33 +199,24 @@ before_script:
   ## Idempotency checks 4/5 (redeploy after reset)
   - >
-    if [ "${IDEMPOT_CHECK}" = "true" AND "${RESET_CHECK}" = "true" ]; then
-      ansible-playbook -i inventory/inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS
-      -b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
+    if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
+      ansible-playbook
+      -i inventory/inventory.ini
+      -b --become-user=root
       --private-key=${HOME}/.ssh/id_rsa
-      -e bootstrap_os=${BOOTSTRAP_OS}
-      -e cloud_provider=gce
+      -u $SSH_USER
+      ${SSH_ARGS}
+      ${LOG_LEVEL}
+      -e @${CI_TEST_VARS}
       -e ansible_python_interpreter=${PYPATH}
-      -e "{deploy_netchecker: true}"
-      -e "{download_localhost: ${DOWNLOAD_LOCALHOST}}"
-      -e "{download_run_once: ${DOWNLOAD_RUN_ONCE}}"
-      -e etcd_deployment_type=${ETCD_DEPLOYMENT}
-      -e kubedns_min_replicas=1
-      -e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
       -e local_release_dir=${PWD}/downloads
-      -e resolvconf_mode=${RESOLVCONF_MODE}
-      -e vault_deployment_type=${VAULT_DEPLOYMENT}
-      -e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
-      -e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-      -e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-      -e "${AUTHORIZATION_MODES}"
       --limit "all:!fake_hosts"
       cluster.yml;
     fi

   ## Idempotency checks 5/5 (Advanced DNS checks)
   - >
-    if [ "${IDEMPOT_CHECK}" = "true" AND "${RESET_CHECK}" = "true" ]; then
+    if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
       ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH}
       -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root
       --limit "all:!fake_hosts"
@@ -265,166 +226,73 @@ before_script:
 after_script:
   - >
     ansible-playbook -i inventory/inventory.ini tests/cloud_playbooks/delete-gce.yml -c local $LOG_LEVEL
-    -e mode=${CLUSTER_MODE}
+    -e @${CI_TEST_VARS}
     -e test_id=${TEST_ID}
-    -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
     -e gce_project_id=${GCE_PROJECT_ID}
     -e gce_service_account_email=${GCE_ACCOUNT}
     -e gce_credentials_file=${HOME}/.ssh/gce.json
-    -e cloud_image=${CLOUD_IMAGE}
     -e inventory_path=${PWD}/inventory/inventory.ini
-    -e cloud_region=${CLOUD_REGION}

 # Test matrix. Leave the comments for markup scripts.
 .coreos_calico_aio_variables: &coreos_calico_aio_variables
 # stage: deploy-gce-part1
-  AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
-  KUBE_NETWORK_PLUGIN: calico
-  CLOUD_IMAGE: coreos-stable-1465-6-0-v20170817
-  CLOUD_REGION: us-west1-b
-  CLOUD_MACHINE_TYPE: "n1-standard-2"
-  CLUSTER_MODE: aio
-  BOOTSTRAP_OS: coreos
-  RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
-  ##User-data to simply turn off coreos upgrades
-  STARTUP_SCRIPT: 'systemctl disable locksmithd && systemctl stop locksmithd'
+  MOVED_TO_GROUP_VARS: "true"

-.ubuntu_canal_ha_rbac_variables: &ubuntu_canal_ha_rbac_variables
+.ubuntu_canal_ha_variables: &ubuntu_canal_ha_variables
 # stage: deploy-gce-part1
-  KUBE_NETWORK_PLUGIN: canal
-  AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
-  CLOUD_IMAGE: ubuntu-1604-xenial
-  CLOUD_REGION: europe-west1-b
-  CLUSTER_MODE: ha
   UPGRADE_TEST: "graceful"
-  STARTUP_SCRIPT: ""

 .centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables
 # stage: deploy-gce-part1
-  KUBE_NETWORK_PLUGIN: weave
-  AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
-  CLOUD_IMAGE: centos-7
-  CLOUD_MACHINE_TYPE: "n1-standard-1"
-  CLOUD_REGION: us-central1-b
-  CLUSTER_MODE: ha
-  KUBEADM_ENABLED: "true"
   UPGRADE_TEST: "graceful"
-  STARTUP_SCRIPT: ""

 .ubuntu_canal_kubeadm_variables: &ubuntu_canal_kubeadm_variables
 # stage: deploy-gce-part1
-  KUBE_NETWORK_PLUGIN: canal
-  AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
-  CLOUD_IMAGE: ubuntu-1604-xenial
-  CLOUD_MACHINE_TYPE: "n1-standard-1"
-  CLOUD_REGION: europe-west1-b
-  CLUSTER_MODE: ha
-  KUBEADM_ENABLED: "true"
-  STARTUP_SCRIPT: ""
+  MOVED_TO_GROUP_VARS: "true"

 .rhel7_weave_variables: &rhel7_weave_variables
 # stage: deploy-gce-part1
-  KUBE_NETWORK_PLUGIN: weave
-  CLOUD_IMAGE: rhel-7
-  CLOUD_REGION: europe-west1-b
-  CLUSTER_MODE: default
-  STARTUP_SCRIPT: ""
+  MOVED_TO_GROUP_VARS: "true"

-.centos7_flannel_variables: &centos7_flannel_variables
+.centos7_flannel_addons_variables: &centos7_flannel_addons_variables
 # stage: deploy-gce-part2
-  KUBE_NETWORK_PLUGIN: flannel
-  CLOUD_IMAGE: centos-7
-  CLOUD_REGION: us-west1-a
-  CLOUD_MACHINE_TYPE: "n1-standard-2"
-  CLUSTER_MODE: default
-  STARTUP_SCRIPT: ""
+  MOVED_TO_GROUP_VARS: "true"

 .debian8_calico_variables: &debian8_calico_variables
 # stage: deploy-gce-part2
-  KUBE_NETWORK_PLUGIN: calico
-  CLOUD_IMAGE: debian-8-kubespray
-  CLOUD_REGION: us-central1-b
-  CLUSTER_MODE: default
-  STARTUP_SCRIPT: ""
+  MOVED_TO_GROUP_VARS: "true"

 .coreos_canal_variables: &coreos_canal_variables
 # stage: deploy-gce-part2
-  KUBE_NETWORK_PLUGIN: canal
-  CLOUD_IMAGE: coreos-stable-1465-6-0-v20170817
-  CLOUD_REGION: us-east1-b
-  CLUSTER_MODE: default
-  BOOTSTRAP_OS: coreos
-  IDEMPOT_CHECK: "true"
-  RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
-  STARTUP_SCRIPT: 'systemctl disable locksmithd && systemctl stop locksmithd'
+  MOVED_TO_GROUP_VARS: "true"

 .rhel7_canal_sep_variables: &rhel7_canal_sep_variables
 # stage: deploy-gce-special
-  KUBE_NETWORK_PLUGIN: canal
-  CLOUD_IMAGE: rhel-7
-  CLOUD_REGION: us-east1-b
-  CLUSTER_MODE: separate
-  STARTUP_SCRIPT: ""
+  MOVED_TO_GROUP_VARS: "true"

 .ubuntu_weave_sep_variables: &ubuntu_weave_sep_variables
 # stage: deploy-gce-special
-  KUBE_NETWORK_PLUGIN: weave
-  CLOUD_IMAGE: ubuntu-1604-xenial
-  CLOUD_REGION: us-central1-b
-  CLUSTER_MODE: separate
-  IDEMPOT_CHECK: "false"
-  STARTUP_SCRIPT: ""
+  MOVED_TO_GROUP_VARS: "true"

 .centos7_calico_ha_variables: &centos7_calico_ha_variables
 # stage: deploy-gce-special
-  KUBE_NETWORK_PLUGIN: calico
-  DOWNLOAD_LOCALHOST: "true"
-  DOWNLOAD_RUN_ONCE: "true"
-  CLOUD_IMAGE: centos-7
-  CLOUD_REGION: europe-west1-b
-  CLUSTER_MODE: ha-scale
-  IDEMPOT_CHECK: "true"
-  STARTUP_SCRIPT: ""
+  MOVED_TO_GROUP_VARS: "true"

 .coreos_alpha_weave_ha_variables: &coreos_alpha_weave_ha_variables
 # stage: deploy-gce-special
-  KUBE_NETWORK_PLUGIN: weave
-  CLOUD_IMAGE: coreos-alpha-1506-0-0-v20170817
-  CLOUD_REGION: us-west1-a
-  CLUSTER_MODE: ha-scale
-  BOOTSTRAP_OS: coreos
-  RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
-  STARTUP_SCRIPT: 'systemctl disable locksmithd && systemctl stop locksmithd'
+  MOVED_TO_GROUP_VARS: "true"

 .ubuntu_rkt_sep_variables: &ubuntu_rkt_sep_variables
 # stage: deploy-gce-part1
-  KUBE_NETWORK_PLUGIN: flannel
-  CLOUD_IMAGE: ubuntu-1604-xenial
-  CLOUD_REGION: us-central1-b
-  CLUSTER_MODE: separate
-  ETCD_DEPLOYMENT: rkt
-  KUBELET_DEPLOYMENT: rkt
-  STARTUP_SCRIPT: ""
+  MOVED_TO_GROUP_VARS: "true"

 .ubuntu_vault_sep_variables: &ubuntu_vault_sep_variables
 # stage: deploy-gce-part1
-  AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
-  CLOUD_MACHINE_TYPE: "n1-standard-2"
-  KUBE_NETWORK_PLUGIN: canal
-  CERT_MGMT: vault
-  CLOUD_IMAGE: ubuntu-1604-xenial
-  CLOUD_REGION: us-central1-b
-  CLUSTER_MODE: separate
-  STARTUP_SCRIPT: ""
+  MOVED_TO_GROUP_VARS: "true"

-.ubuntu_flannel_rbac_variables: &ubuntu_flannel_rbac_variables
+.ubuntu_flannel_variables: &ubuntu_flannel_variables
 # stage: deploy-gce-special
-  AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
-  KUBE_NETWORK_PLUGIN: flannel
-  CLOUD_IMAGE: ubuntu-1604-xenial
-  CLOUD_REGION: europe-west1-b
-  CLUSTER_MODE: separate
-  STARTUP_SCRIPT: ""
+  MOVED_TO_GROUP_VARS: "true"

 # Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
 coreos-calico-aio:
@@ -448,24 +316,24 @@ coreos-calico-sep-triggers:
   when: on_success
   only: ['triggers']

-centos7-flannel:
+centos7-flannel-addons:
   stage: deploy-gce-part2
   <<: *job
   <<: *gce
   variables:
     <<: *gce_variables
-    <<: *centos7_flannel_variables
+    <<: *centos7_flannel_addons_variables
   when: on_success
   except: ['triggers']
   only: [/^pr-.*$/]

-centos7-flannel-triggers:
+centos7-flannel-addons-triggers:
   stage: deploy-gce-part1
   <<: *job
   <<: *gce
   variables:
     <<: *gce_variables
-    <<: *centos7_flannel_variables
+    <<: *centos7_flannel_addons_variables
   when: on_success
   only: ['triggers']

@@ -491,28 +359,28 @@ ubuntu-weave-sep-triggers:
   only: ['triggers']

 # More builds for PRs/merges (manual) and triggers (auto)
-ubuntu-canal-ha-rbac:
+ubuntu-canal-ha:
   stage: deploy-gce-part1
   <<: *job
   <<: *gce
   variables:
     <<: *gce_variables
-    <<: *ubuntu_canal_ha_rbac_variables
+    <<: *ubuntu_canal_ha_variables
   when: manual
   except: ['triggers']
   only: ['master', /^pr-.*$/]

-ubuntu-canal-ha-rbac-triggers:
+ubuntu-canal-ha-triggers:
   stage: deploy-gce-part1
   <<: *job
   <<: *gce
   variables:
     <<: *gce_variables
-    <<: *ubuntu_canal_ha_rbac_variables
+    <<: *ubuntu_canal_ha_variables
   when: on_success
   only: ['triggers']

-ubuntu-canal-kubeadm-rbac:
+ubuntu-canal-kubeadm:
   stage: deploy-gce-part1
   <<: *job
   <<: *gce

@@ -533,7 +401,7 @@ ubuntu-canal-kubeadm-triggers:
   when: on_success
   only: ['triggers']

-centos-weave-kubeadm-rbac:
+centos-weave-kubeadm:
   stage: deploy-gce-part1
   <<: *job
   <<: *gce

@@ -693,13 +561,13 @@ ubuntu-vault-sep:
   except: ['triggers']
   only: ['master', /^pr-.*$/]

-ubuntu-flannel-rbac-sep:
+ubuntu-flannel-sep:
   stage: deploy-gce-special
   <<: *job
   <<: *gce
   variables:
     <<: *gce_variables
-    <<: *ubuntu_flannel_rbac_variables
+    <<: *ubuntu_flannel_variables
   when: manual
   except: ['triggers']
   only: ['master', /^pr-.*$/]

README.md

@@ -2,7 +2,7 @@

 ## Deploy a production ready kubernetes cluster

-If you have questions, join us on the [kubernetes slack](https://slack.k8s.io), channel **#kubespray**.
+If you have questions, join us on the [kubernetes slack](https://kubernetes.slack.com), channel **#kubespray**.

 - Can be deployed on **AWS, GCE, Azure, OpenStack or Baremetal**
 - **High available** cluster

@@ -29,6 +29,7 @@ To deploy the cluster you can use :
 * [Network plugins](#network-plugins)
 * [Vagrant install](docs/vagrant.md)
 * [CoreOS bootstrap](docs/coreos.md)
+* [Debian Jessie setup](docs/debian.md)
 * [Downloaded artifacts](docs/downloads.md)
 * [Cloud providers](docs/cloud.md)
 * [OpenStack](docs/openstack.md)

@@ -53,7 +54,7 @@ Versions of supported components
 --------------------------------
-[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.7.3 <br>
+[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.8.1 <br>
 [etcd](https://github.com/coreos/etcd/releases) v3.2.4 <br>
 [flanneld](https://github.com/coreos/flannel/releases) v0.8.0 <br>
 [calico](https://docs.projectcalico.org/v2.5/releases/) v2.5.0 <br>

@@ -72,7 +73,7 @@ plugins can be deployed for a given single cluster.
 Requirements
 --------------
-* **Ansible v2.3 (or newer) and python-netaddr is installed on the machine
+* **Ansible v2.4 (or newer) and python-netaddr is installed on the machine
   that will run Ansible commands**
 * **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
 * The target servers must have **access to the Internet** in order to pull docker images.

Vagrantfile

@@ -3,7 +3,7 @@
 require 'fileutils'

-Vagrant.require_version ">= 1.8.0"
+Vagrant.require_version ">= 1.9.0"

 CONFIG = File.join(File.dirname(__FILE__), "vagrant/config.rb")

@@ -21,16 +21,19 @@ SUPPORTED_OS = {
 $num_instances = 3
 $instance_name_prefix = "k8s"
 $vm_gui = false
-$vm_memory = 1536
+$vm_memory = 2048
 $vm_cpus = 1
 $shared_folders = {}
 $forwarded_ports = {}
 $subnet = "172.17.8"
 $os = "ubuntu"
+$network_plugin = "flannel"
 # The first three nodes are etcd servers
 $etcd_instances = $num_instances
-# The first two nodes are masters
+# The first two nodes are kube masters
 $kube_master_instances = $num_instances == 1 ? $num_instances : ($num_instances - 1)
+# All nodes are kube nodes
+$kube_node_instances = $num_instances
 $local_release_dir = "/vagrant/temp"

 host_vars = {}

@@ -39,9 +42,6 @@ if File.exist?(CONFIG)
   require CONFIG
 end

-# All nodes are kube nodes
-$kube_node_instances = $num_instances
-
 $box = SUPPORTED_OS[$os][:box]
 # if $inventory is not set, try to use example
 $inventory = File.join(File.dirname(__FILE__), "inventory") if ! $inventory

@@ -115,17 +115,20 @@ Vagrant.configure("2") do |config|
       ip = "#{$subnet}.#{i+100}"
       host_vars[vm_name] = {
         "ip": ip,
-        "flannel_interface": ip,
-        "flannel_backend_type": "host-gw",
+        "bootstrap_os": SUPPORTED_OS[$os][:bootstrap_os],
         "local_release_dir" => $local_release_dir,
         "download_run_once": "False",
-        # Override the default 'calico' with flannel.
-        # inventory/group_vars/k8s-cluster.yml
-        "kube_network_plugin": "flannel",
-        "bootstrap_os": SUPPORTED_OS[$os][:bootstrap_os]
+        "kube_network_plugin": $network_plugin
       }

       config.vm.network :private_network, ip: ip

+      # workaround for Vagrant 1.9.1 and centos vm
+      # https://github.com/hashicorp/vagrant/issues/8096
+      if Vagrant::VERSION == "1.9.1" && $os == "centos"
+        config.vm.provision "shell", inline: "service network restart", run: "always"
+      end
+
       # Only execute once the Ansible provisioner,
       # when all the machines are up and ready.
       if i == $num_instances

@@ -137,7 +140,7 @@ Vagrant.configure("2") do |config|
         ansible.sudo = true
         ansible.limit = "all"
         ansible.host_key_checking = false
-        ansible.raw_arguments = ["--forks=#{$num_instances}"]
+        ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache"]
         ansible.host_vars = host_vars
         #ansible.tags = ['download']
         ansible.groups = {

@@ -1,7 +1,7 @@
 [ssh_connection]
 pipelining=True
-ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100
-#ssh_args = -F ./ssh-bastion.conf -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100
+ansible_ssh_common_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100
+#ansible_ssh_common_args = -F {{ inventory_dir|quote }}/ssh-bastion.conf -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100
 #control_path = ~/.ssh/ansible-%%r@%%h:%%p
 [defaults]
 host_key_checking=False

cluster.yml

@@ -26,12 +26,12 @@
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
     - { role: kubespray-defaults}
-    - { role: kernel-upgrade, tags: kernel-upgrade, when: kernel_upgrade is defined and kernel_upgrade }
     - { role: kubernetes/preinstall, tags: preinstall }
     - { role: docker, tags: docker }
     - role: rkt
       tags: rkt
       when: "'rkt' in [etcd_deployment_type, kubelet_deployment_type, vault_deployment_type]"
+    - { role: download, tags: download, skip_downloads: false }

 - hosts: etcd:k8s-cluster:vault
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"

@@ -68,6 +68,8 @@
   roles:
     - { role: kubespray-defaults}
     - { role: kubernetes/master, tags: master }
+    - { role: kubernetes/client, tags: client }
+    - { role: kubernetes-apps/cluster_roles, tags: cluster-roles }

 - hosts: k8s-cluster
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"

@@ -83,7 +85,6 @@
     - { role: kubernetes-apps/rotate_tokens, tags: rotate_tokens, when: "secret_changed|default(false)" }
     - { role: kubernetes-apps/network_plugin, tags: network }
     - { role: kubernetes-apps/policy_controller, tags: policy-controller }
-    - { role: kubernetes/client, tags: client }

 - hosts: calico-rr
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"

@@ -1,8 +1,17 @@
 ---
+- hosts: gfs-cluster
+  gather_facts: false
+  vars:
+    ansible_ssh_pipelining: false
+  roles:
+    - { role: bootstrap-os, tags: bootstrap-os}
+
 - hosts: all
   gather_facts: true

 - hosts: gfs-cluster
+  vars:
+    ansible_ssh_pipelining: true
   roles:
     - { role: glusterfs/server }

@@ -12,6 +21,5 @@
 - hosts: kube-master[0]
   roles:
-    - { role: kubernetes-pv/lib }
     - { role: kubernetes-pv }

@@ -0,0 +1 @@
+../../../inventory/group_vars

@@ -0,0 +1 @@
+../../../../roles/bootstrap-os

@@ -4,6 +4,7 @@
   with_items:
     - { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
     - { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}
+    - { file: glusterfs-kubernetes-endpoint-svc.json.j2, type: svc, dest: glusterfs-kubernetes-endpoint-svc.json}
   register: gluster_pv
   when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined and hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb is defined

@@ -0,0 +1,12 @@
+{
+  "kind": "Service",
+  "apiVersion": "v1",
+  "metadata": {
+    "name": "glusterfs"
+  },
+  "spec": {
+    "ports": [
+      {"port": 1}
+    ]
+  }
+}

@@ -19,9 +19,9 @@ module "aws-vpc" {
   aws_cluster_name = "${var.aws_cluster_name}"
   aws_vpc_cidr_block = "${var.aws_vpc_cidr_block}"
   aws_avail_zones="${var.aws_avail_zones}"
   aws_cidr_subnets_private="${var.aws_cidr_subnets_private}"
   aws_cidr_subnets_public="${var.aws_cidr_subnets_public}"
+  default_tags="${var.default_tags}"
 }

@@ -35,6 +35,7 @@ module "aws-elb" {
   aws_subnet_ids_public="${module.aws-vpc.aws_subnet_ids_public}"
   aws_elb_api_port = "${var.aws_elb_api_port}"
   k8s_secure_api_port = "${var.k8s_secure_api_port}"
+  default_tags="${var.default_tags}"
 }

@@ -61,11 +62,11 @@ resource "aws_instance" "bastion-server" {
   key_name = "${var.AWS_SSH_KEY_NAME}"

-  tags {
-    Name = "kubernetes-${var.aws_cluster_name}-bastion-${count.index}"
-    Cluster = "${var.aws_cluster_name}"
-    Role = "bastion-${var.aws_cluster_name}-${count.index}"
-  }
+  tags = "${merge(var.default_tags, map(
+    "Name", "kubernetes-${var.aws_cluster_name}-bastion-${count.index}",
+    "Cluster", "${var.aws_cluster_name}",
+    "Role", "bastion-${var.aws_cluster_name}-${count.index}"
+  ))}"
 }

@@ -92,11 +93,11 @@ resource "aws_instance" "k8s-master" {
   key_name = "${var.AWS_SSH_KEY_NAME}"

-  tags {
-    Name = "kubernetes-${var.aws_cluster_name}-master${count.index}"
-    Cluster = "${var.aws_cluster_name}"
-    Role = "master"
-  }
+  tags = "${merge(var.default_tags, map(
+    "Name", "kubernetes-${var.aws_cluster_name}-master${count.index}",
+    "Cluster", "${var.aws_cluster_name}",
+    "Role", "master"
+  ))}"
 }

 resource "aws_elb_attachment" "attach_master_nodes" {

@@ -121,12 +122,11 @@ resource "aws_instance" "k8s-etcd" {
   key_name = "${var.AWS_SSH_KEY_NAME}"

-  tags {
-    Name = "kubernetes-${var.aws_cluster_name}-etcd${count.index}"
-    Cluster = "${var.aws_cluster_name}"
-    Role = "etcd"
-  }
+  tags = "${merge(var.default_tags, map(
+    "Name", "kubernetes-${var.aws_cluster_name}-etcd${count.index}",
+    "Cluster", "${var.aws_cluster_name}",
+    "Role", "etcd"
+  ))}"
 }

@@ -146,11 +146,11 @@ resource "aws_instance" "k8s-worker" {
   key_name = "${var.AWS_SSH_KEY_NAME}"

-  tags {
-    Name = "kubernetes-${var.aws_cluster_name}-worker${count.index}"
-    Cluster = "${var.aws_cluster_name}"
-    Role = "worker"
-  }
+  tags = "${merge(var.default_tags, map(
+    "Name", "kubernetes-${var.aws_cluster_name}-worker${count.index}",
+    "Cluster", "${var.aws_cluster_name}",
+    "Role", "worker"
+  ))}"
 }

@@ -164,10 +164,10 @@ data "template_file" "inventory" {
   template = "${file("${path.module}/templates/inventory.tpl")}"

   vars {
-    public_ip_address_bastion = "${join("\n",formatlist("bastion ansible_ssh_host=%s" , aws_instance.bastion-server.*.public_ip))}"
-    connection_strings_master = "${join("\n",formatlist("%s ansible_ssh_host=%s",aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}"
-    connection_strings_node = "${join("\n", formatlist("%s ansible_ssh_host=%s", aws_instance.k8s-worker.*.tags.Name, aws_instance.k8s-worker.*.private_ip))}"
-    connection_strings_etcd = "${join("\n",formatlist("%s ansible_ssh_host=%s", aws_instance.k8s-etcd.*.tags.Name, aws_instance.k8s-etcd.*.private_ip))}"
+    public_ip_address_bastion = "${join("\n",formatlist("bastion ansible_host=%s" , aws_instance.bastion-server.*.public_ip))}"
+    connection_strings_master = "${join("\n",formatlist("%s ansible_host=%s",aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}"
+    connection_strings_node = "${join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-worker.*.tags.Name, aws_instance.k8s-worker.*.private_ip))}"
+    connection_strings_etcd = "${join("\n",formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.tags.Name, aws_instance.k8s-etcd.*.private_ip))}"
     list_master = "${join("\n",aws_instance.k8s-master.*.tags.Name)}"
     list_node = "${join("\n",aws_instance.k8s-worker.*.tags.Name)}"
     list_etcd = "${join("\n",aws_instance.k8s-etcd.*.tags.Name)}"

@@ -2,9 +2,9 @@ resource "aws_security_group" "aws-elb" {
   name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
   vpc_id = "${var.aws_vpc_id}"

-  tags {
-    Name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
-  }
+  tags = "${merge(var.default_tags, map(
+    "Name", "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
+  ))}"
 }

@@ -52,7 +52,7 @@ resource "aws_elb" "aws-elb-api" {
   connection_draining = true
   connection_draining_timeout = 400

-  tags {
-    Name = "kubernetes-${var.aws_cluster_name}-elb-api"
-  }
+  tags = "${merge(var.default_tags, map(
+    "Name", "kubernetes-${var.aws_cluster_name}-elb-api"
+  ))}"
 }

@@ -26,3 +26,8 @@ variable "aws_subnet_ids_public" {
   description = "IDs of Public Subnets"
   type = "list"
 }
+
+variable "default_tags" {
+  description = "Tags for all resources"
+  type = "map"
+}

@@ -129,10 +129,10 @@ EOF

 resource "aws_iam_instance_profile" "kube-master" {
   name = "kube_${var.aws_cluster_name}_master_profile"
-  roles = ["${aws_iam_role.kube-master.name}"]
+  role = "${aws_iam_role.kube-master.name}"
 }

 resource "aws_iam_instance_profile" "kube-worker" {
   name = "kube_${var.aws_cluster_name}_node_profile"
-  roles = ["${aws_iam_role.kube-worker.name}"]
+  role = "${aws_iam_role.kube-worker.name}"
 }

@@ -6,9 +6,9 @@ resource "aws_vpc" "cluster-vpc" {
   enable_dns_support = true
   enable_dns_hostnames = true

-  tags {
-    Name = "kubernetes-${var.aws_cluster_name}-vpc"
-  }
+  tags = "${merge(var.default_tags, map(
+    "Name", "kubernetes-${var.aws_cluster_name}-vpc"
+  ))}"
 }

@@ -18,13 +18,13 @@ resource "aws_eip" "cluster-nat-eip" {
 }

 resource "aws_internet_gateway" "cluster-vpc-internetgw" {
   vpc_id = "${aws_vpc.cluster-vpc.id}"
-  tags {
-    Name = "kubernetes-${var.aws_cluster_name}-internetgw"
-  }
+
+  tags = "${merge(var.default_tags, map(
+    "Name", "kubernetes-${var.aws_cluster_name}-internetgw"
+  ))}"
 }

 resource "aws_subnet" "cluster-vpc-subnets-public" {

@@ -33,9 +33,9 @@ resource "aws_subnet" "cluster-vpc-subnets-public" {
   availability_zone = "${element(var.aws_avail_zones, count.index)}"
   cidr_block = "${element(var.aws_cidr_subnets_public, count.index)}"

-  tags {
-    Name = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public"
-  }
+  tags = "${merge(var.default_tags, map(
+    "Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public"
+  ))}"
 }

 resource "aws_nat_gateway" "cluster-nat-gateway" {

@@ -51,9 +51,9 @@ resource "aws_subnet" "cluster-vpc-subnets-private" {
   availability_zone = "${element(var.aws_avail_zones, count.index)}"
   cidr_block = "${element(var.aws_cidr_subnets_private, count.index)}"

-  tags {
-    Name = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
-  }
+  tags = "${merge(var.default_tags, map(
+    "Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
+  ))}"
 }

 #Routing in VPC

@@ -66,9 +66,10 @@ resource "aws_route_table" "kubernetes-public" {
     cidr_block = "0.0.0.0/0"
     gateway_id = "${aws_internet_gateway.cluster-vpc-internetgw.id}"
   }
-  tags {
-    Name = "kubernetes-${var.aws_cluster_name}-routetable-public"
-  }
+
+  tags = "${merge(var.default_tags, map(
+    "Name", "kubernetes-${var.aws_cluster_name}-routetable-public"
+  ))}"
 }

 resource "aws_route_table" "kubernetes-private" {

@@ -78,9 +79,11 @@ resource "aws_route_table" "kubernetes-private" {
     cidr_block = "0.0.0.0/0"
     nat_gateway_id = "${element(aws_nat_gateway.cluster-nat-gateway.*.id, count.index)}"
   }
-  tags {
-    Name = "kubernetes-${var.aws_cluster_name}-routetable-private-${count.index}"
-  }
+
+  tags = "${merge(var.default_tags, map(
+    "Name", "kubernetes-${var.aws_cluster_name}-routetable-private-${count.index}"
+  ))}"
 }

 resource "aws_route_table_association" "kubernetes-public" {

@@ -104,9 +107,9 @@ resource "aws_security_group" "kubernetes" {
   name = "kubernetes-${var.aws_cluster_name}-securitygroup"
   vpc_id = "${aws_vpc.cluster-vpc.id}"

-  tags {
-    Name = "kubernetes-${var.aws_cluster_name}-securitygroup"
-  }
+  tags = "${merge(var.default_tags, map(
+    "Name", "kubernetes-${var.aws_cluster_name}-securitygroup"
+  ))}"
 }

 resource "aws_security_group_rule" "allow-all-ingress" {

View File

@@ -14,3 +14,8 @@ output "aws_security_group" {
   value = ["${aws_security_group.kubernetes.*.id}"]
 }
+
+output "default_tags" {
+  value = "${default_tags}"
+}

View File

@@ -22,3 +22,8 @@ variable "aws_cidr_subnets_public" {
   description = "CIDR Blocks for public subnets in Availability zones"
   type = "list"
 }
+
+variable "default_tags" {
+  description = "Default tags for all resources"
+  type = "map"
+}

View File

@@ -22,3 +22,7 @@ output "aws_elb_api_fqdn" {
 output "inventory" {
   value = "${data.template_file.inventory.rendered}"
 }
+
+output "default_tags" {
+  value = "${default_tags}"
+}

View File

@@ -1,3 +1,4 @@
+[all]
 ${connection_strings_master}
 ${connection_strings_node}
 ${connection_strings_etcd}

View File

@@ -30,3 +30,8 @@ aws_cluster_ami = "ami-db56b9a3"
 aws_elb_api_port = 6443
 k8s_secure_api_port = 6443
 kube_insecure_apiserver_address = "0.0.0.0"
+
+default_tags = {
+#  Env = "devtest"
+#  Product = "kubernetes"
+}

View File

@@ -99,3 +99,8 @@ variable "k8s_secure_api_port" {
 variable "loadbalancer_apiserver_address" {
   description = "Bind Address for ELB of K8s API Server"
 }
+
+variable "default_tags" {
+  description = "Default tags for all resources"
+  type = "map"
+}

View File

@@ -125,6 +125,8 @@ ssh_user_gfs = "ubuntu"
 If these variables are provided, a new Ansible group called `gfs-cluster` is created, for which we have added Ansible roles to execute during the provisioning step. If you are using Container Linux by CoreOS, these GlusterFS VMs need to be either Debian- or RedHat-based; Container Linux by CoreOS cannot serve GlusterFS itself, but can connect to it through binaries available in hyperkube v1.4.3_coreos.0 or higher.
+
+GlusterFS is not deployed by the standard `cluster.yml` playbook, see the [glusterfs playbook documentation](../../network-storage/glusterfs/README.md) for instructions.
 # Configure Cluster variables
 Edit `inventory/group_vars/all.yml`:

View File

@@ -157,7 +157,7 @@ ansible-playbook -i inventory/inventory.ini cluster.yml --tags preinstall,dnsma
 ```
 And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:
 ```
-ansible-playbook -i inventory/inventory.ini -e dns_server='' cluster.yml --tags resolvconf
+ansible-playbook -i inventory/inventory.ini -e dnsmasq_dns_server='' cluster.yml --tags resolvconf
 ```
 And this prepares all container images locally (on the Ansible runner node) without installing
 or upgrading anything, and without trying to upload containers to the K8s cluster nodes:
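For completeness, the download-only run referred to above is usually driven by the `download` tag while skipping the upload work; the exact tag set may vary by release:
```
ansible-playbook -i inventory/inventory.ini cluster.yml --tags download --skip-tags upload,upgrade
```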

docs/debian.md Normal file
View File

@@ -0,0 +1,38 @@
Debian Jessie
===============

Debian Jessie installation notes:

- Add
  ```GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"```
  to /etc/default/grub. Then update with
  ```
  sudo update-grub
  sudo update-grub2
  sudo reboot
  ```
- Add the [backports](https://backports.debian.org/Instructions/), which contain systemd 230, and update systemd:
  ```apt-get -t jessie-backports install systemd```
  (Necessary because the default systemd version (215) does not support the "Delegate" directive in service files.)
- Add the Ansible repository and install Ansible to get a recent enough version:
  ```
  sudo add-apt-repository ppa:ansible/ansible
  sudo apt-get update
  sudo apt-get install ansible
  ```
- Install Jinja2 and python-netaddr:
  ```sudo apt-get install python-jinja2=2.8-1~bpo8+1 python-netaddr```

Now you can continue with [Preparing your deployment](getting-started.md#starting-custom-deployment)
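Before continuing, it may be worth verifying that the backported packages are actually in use (expected versions per the notes above):
```
systemctl --version   # first line should report 230 or newer
ansible --version     # should report 2.4.x
python -c 'import jinja2, netaddr; print(jinja2.__version__)'
```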

View File

@@ -93,7 +93,8 @@ the Kubernetes [documentation](https://kubernetes.io/docs/tasks/access-applicati
 Accessing Kubernetes Dashboard
 ------------------------------
-If the variable `dashboard_enabled` is set (default is true), then you can
+If the variable `dashboard_enabled` is set (default is true), as well as
+`kube_basic_auth` (default is false), then you can
 access the Kubernetes Dashboard at the following URL:
 https://kube:_kube-password_@_host_:6443/ui/
@@ -102,6 +103,9 @@ To see the password, refer to the section above, titled *Connecting to
 Kubernetes*. The host can be any kube-master or kube-node or loadbalancer
 (when enabled).
+
+To access the Dashboard with basic auth disabled, follow the instructions here:
+https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#command-line-proxy
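The linked procedure amounts to starting a local authenticated proxy to the API server; a minimal sketch (the dashboard path shown is version-dependent):
```
kubectl proxy
# then browse to http://localhost:8001/ui/
```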
 Accessing Kubernetes API
 ------------------------

View File

@@ -35,7 +35,7 @@
 7. Modify the library and roles paths in your ansible.cfg file (role naming should be unique; you may have to rename your existing roles if they have the same names as in the kubespray project):
 ```
 ...
 library = 3d/kubespray/library/
 roles_path = 3d/kubespray/roles/
 ...
 ```
@@ -73,7 +73,7 @@ You could rename *all.yml* config to something else, i.e. *kubespray.yml* and cr
 10. Now you can include kargo tasks in your existing playbooks by including the cluster.yml file:
 ```
 - name: Include kargo tasks
   include: 3d/kubespray/cluster.yml
 ```
 Or you could copy separate tasks from cluster.yml into your Ansible repository.

View File

@@ -2,8 +2,9 @@ Kubespray's roadmap
 =================
 ### Kubeadm
-- Propose kubeadm as an option in order to set up the kubernetes cluster.
-That would probably improve deployment speed and certs management [#553](https://github.com/kubespray/kubespray/issues/553)
+- Switch to kubeadm deployment as the default method after some bugs are fixed:
+  * Support for basic auth
+  * cloudprovider cloud-config mount [#484](https://github.com/kubernetes/kubeadm/issues/484)
 ### Self deployment (pull-mode) [#320](https://github.com/kubespray/kubespray/issues/320)
 - the playbook would install and configure docker/rkt and the etcd cluster
@@ -12,60 +13,35 @@ That would probably improve deployment speed and certs management [#553](https:/
 - to be discussed, a way to provide the inventory
 - **self deployment** of the node from inside a container [#321](https://github.com/kubespray/kubespray/issues/321)
-### Provisionning and cloud providers
+### Provisioning and cloud providers
 - [ ] Terraform to provision instances on **GCE, AWS, Openstack, Digital Ocean, Azure**
 - [ ] On AWS autoscaling, multi AZ
 - [ ] On Azure autoscaling, create loadbalancer [#297](https://github.com/kubespray/kubespray/issues/297)
 - [ ] On GCE be able to create a loadbalancer automatically (IAM ?) [#280](https://github.com/kubespray/kubespray/issues/280)
-- [x] **TLS bootstrap** support for kubelet [#234](https://github.com/kubespray/kubespray/issues/234)
+- [x] **TLS bootstrap** support for kubelet (covered by kubeadm, but not in standard deployment) [#234](https://github.com/kubespray/kubespray/issues/234)
   (related issues: https://github.com/kubernetes/kubernetes/pull/20439 <br>
   https://github.com/kubernetes/kubernetes/issues/18112)
 ### Tests
-- [x] Run kubernetes e2e tests
-- [x] migrate to jenkins
-  (a test is currently a deployment on a 3 node cluster, testing the k8s api, ping between 2 pods)
-- [x] Full tests on GCE per day (all OSes, all network plugins)
-- [x] trigger a single test per pull request
-- [ ] ~~single test with the Ansible version n-1 per day~~
-- [x] Test idempotency on a single OS but for all network plugins/container engines
+- [ ] Run kubernetes e2e tests
+- [ ] Test idempotency on a single OS but for all network plugins/container engines
 - [ ] single test on AWS per day
-- [x] test different architectures:
-  - 3 instances, 3 are members of the etcd cluster, 2 of them acting as master and node, 1 as node
-  - 5 instances, 3 are etcd and nodes, 2 are masters only
-  - 7 instances, 3 etcd only, 2 masters, 2 nodes
 - [ ] test scale up cluster: +1 etcd, +1 master, +1 node
+- [ ] Reorganize CI test vars into group var files
 ### Lifecycle
-- [ ] Adopt the kubeadm tool by delegating CM tasks it is capable to accomplish well [#553](https://github.com/kubespray/kubespray/issues/553)
-- [x] Drain worker node when upgrading k8s components in a worker node. [#154](https://github.com/kubespray/kubespray/issues/154)
+- [ ] Drain worker node when shutting down/deleting an instance
 - [ ] Upgrade granularity: select components to upgrade and skip others
 ### Networking
-- [ ] romana.io support [#160](https://github.com/kubespray/kubespray/issues/160)
-- [ ] Configure network policy for Calico. [#159](https://github.com/kubespray/kubespray/issues/159)
 - [ ] Opencontrail
-- [x] Canal
-- [x] Cloud Provider native networking (instead of our network plugins)
+- [ ] Consolidate network_plugins and kubernetes-apps/network_plugins
-### High availability
-- (to be discussed) option to set a loadbalancer for the apiservers like ucarp/pacemaker/keepalived
-  while waiting for the issue [kubernetes/kubernetes#18174](https://github.com/kubernetes/kubernetes/issues/18174) to be fixed.
-### Kubespray-cli
-- Delete instances
-- `kubespray vagrant` to setup a test cluster locally
-- `kubespray azure` for Microsoft Azure support
-- switch to Terraform instead of Ansible for provisioning
-- update $HOME/.kube/config when a cluster is deployed. Optionally switch to this context
 ### Kubespray API
 - Perform all actions through an **API**
 - Store inventories / configurations of multiple clusters
 - make sure that the state of the cluster is completely saved in no more than one config file beyond the hosts inventory
-### Addons (with kpm)
+### Addons (helm or native ansible)
 Include optionals deployments to init the cluster:
 ##### Monitoring
 - Heapster / Grafana ....
@@ -85,10 +61,10 @@ Include optionals deployments to init the cluster:
 - Deis Workflow
 ### Others
 - remove nodes (adding is already supported)
-- being able to choose any k8s version (almost done)
-- **rkt** support [#59](https://github.com/kubespray/kubespray/issues/59)
-- Review documentation (split in categories)
+- Organize and update documentation (split in categories)
+- Refactor downloads so it all runs in the beginning of deployment
+- Make bootstrapping OS more consistent
 - **consul** -> if officially supported by k8s
 - flex volumes options (e.g. **torrus** support) [#312](https://github.com/kubespray/kubespray/issues/312)
 - Clusters federation option (aka **ubernetes**) [#329](https://github.com/kubespray/kubespray/issues/329)

View File

@@ -1,7 +1,7 @@
 Vagrant Install
 =================
-Assuming you have Vagrant (1.8+) installed with virtualbox (it may work
+Assuming you have Vagrant (1.9+) installed with virtualbox (it may work
 with vmware, but is untested) you should be able to launch a 3 node
 Kubernetes cluster by simply running `$ vagrant up`.<br />

View File

@@ -28,6 +28,7 @@ Some variables of note include:
 * *kube_version* - Specify a given Kubernetes hyperkube version
 * *searchdomains* - Array of DNS domains to search when looking up hostnames
 * *nameservers* - Array of nameservers to use for DNS lookup
+* *preinstall_selinux_state* - Set selinux state, permitted values are permissive and disabled.
 #### Addressing variables
@@ -61,7 +62,7 @@ following default cluster parameters:
 * *kube_network_node_prefix* - Subnet allocated per-node for pod IPs. Remaining
   bits in kube_pods_subnet dictate how many kube-nodes can be in the cluster.
 * *dns_setup* - Enables dnsmasq
-* *dns_server* - Cluster IP for dnsmasq (default is 10.233.0.2)
+* *dnsmasq_dns_server* - Cluster IP for dnsmasq (default is 10.233.0.2)
 * *skydns_server* - Cluster IP for KubeDNS (default is 10.233.0.3)
 * *cloud_provider* - Enable extra Kubelet option if operating inside GCE or
   OpenStack (default is unset)
@@ -71,9 +72,12 @@ following default cluster parameters:
   alpha/experimental Kubernetes features. (default is `[]`)
 * *authorization_modes* - A list of [authorization modes](
   https://kubernetes.io/docs/admin/authorization/#using-flags-for-your-authorization-module)
-  that the cluster should be configured for. Defaults to `[]` (i.e. no authorization).
-  Note: `RBAC` is currently in experimental phase, and does not support either calico or
-  vault. Upgrade from non-RBAC to RBAC is not tested.
+  that the cluster should be configured for. Defaults to `['Node', 'RBAC']`
+  (Node and RBAC authorizers).
+  Note: `Node` and `RBAC` are enabled by default. Previously deployed clusters can be
+  converted to RBAC mode. However, your apps which rely on the Kubernetes API will
+  require a service account and cluster role bindings. You can override this
+  setting by setting authorization_modes to `[]`.
 Note, if cloud providers have any use of the ``10.233.0.0/16``, like instances'
 private addresses, make sure to pick other values for ``kube_service_addresses``
@@ -99,7 +103,8 @@ Stack](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/dns-st
 * *docker_options* - Commonly used to set
   ``--insecure-registry=myregistry.mydomain:5000``
 * *http_proxy/https_proxy/no_proxy* - Proxy variables for deploying behind a
-  proxy
+  proxy. Note that no_proxy defaults to all internal cluster IPs and hostnames
+  that correspond to each node.
 * *kubelet_deployment_type* - Controls which platform to deploy kubelet on.
   Available options are ``host``, ``rkt``, and ``docker``. ``docker`` mode
   is unlikely to work on newer releases. Starting with Kubernetes v1.7
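Pulling the renamed and re-defaulted variables above together, a minimal group_vars override might look like this (all values illustrative):
```
kube_version: v1.8.1
dnsmasq_dns_server: 10.233.0.2          # renamed from dns_server
authorization_modes: ['Node', 'RBAC']   # new default; set [] to disable authorization
preinstall_selinux_state: permissive
```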

View File

@@ -3,6 +3,7 @@
 ### * Will not upgrade etcd
 ### * Will not upgrade network plugins
 ### * Will not upgrade Docker
+### * Will not pre-download containers or kubeadm
 ### * Currently does not support Vault deployment.
 ###
 ### In most cases, you probably want to use the upgrade-cluster.yml playbook and
@@ -46,6 +47,8 @@
   - { role: upgrade/pre-upgrade, tags: pre-upgrade }
   - { role: kubernetes/node, tags: node }
   - { role: kubernetes/master, tags: master }
+  - { role: kubernetes/client, tags: client }
+  - { role: kubernetes-apps/cluster_roles, tags: cluster-roles }
   - { role: upgrade/post-upgrade, tags: post-upgrade }
 #Finally handle worker upgrades, based on given batch size

View File

@@ -91,9 +91,10 @@ bin_dir: /usr/local/bin
 #kubeadm_token_second: "{{ lookup('password', 'credentials/kubeadm_token_second length=16 chars=ascii_lowercase,digits') }}"
 #kubeadm_token: "{{ kubeadm_token_first }}.{{ kubeadm_token_second }}"
 #
-## Set these proxy values in order to update docker daemon to use proxies
+## Set these proxy values in order to update package manager and docker daemon to use proxies
 #http_proxy: ""
 #https_proxy: ""
+## Refer to roles/kubespray-defaults/defaults/main.yml before modifying no_proxy
 #no_proxy: ""
 ## Uncomment this if you want to force overlay/overlay2 as docker storage driver
@@ -113,9 +114,6 @@ bin_dir: /usr/local/bin
 ## as a backend). Options are "script" or "vault"
 #cert_management: script
-## Please specify true if you want to perform a kernel upgrade
-kernel_upgrade: false
 # Set to true to allow pre-checks to fail and continue deployment
 #ignore_assert_errors: false

View File

@@ -23,7 +23,7 @@ kube_users_dir: "{{ kube_config_dir }}/users"
 kube_api_anonymous_auth: false
 ## Change this to use another Kubernetes version, e.g. a current beta release
-kube_version: v1.7.5
+kube_version: v1.8.1
 # Where the binaries will be downloaded.
 # Note: ensure that you've enough disk space (about 1G)
@@ -50,8 +50,8 @@ kube_users:
 ## It is possible to activate / deactivate selected authentication methods (basic auth, static token auth)
 #kube_oidc_auth: false
-#kube_basic_auth: true
-#kube_token_auth: true
+#kube_basic_auth: false
+#kube_token_auth: false
 ## Variables for OpenID Connect Configuration https://kubernetes.io/docs/admin/authentication/
@@ -120,7 +120,7 @@ resolvconf_mode: docker_dns
 deploy_netchecker: false
 # IP address of the kubernetes skydns service
 skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
-dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"
+dnsmasq_dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"
 dns_domain: "{{ cluster_name }}"
 # Path used to store Docker data
@@ -137,7 +137,6 @@ docker_bin_dir: "/usr/bin"
 # Settings for containerized control plane (etcd/kubelet/secrets)
 etcd_deployment_type: docker
 kubelet_deployment_type: host
-cert_management: script
 vault_deployment_type: docker
 # K8s image pull policy (imagePullPolicy)
@@ -152,6 +151,9 @@ efk_enabled: false
 # Helm deployment
 helm_enabled: false
+
+# Istio deployment
+istio_enabled: false
 # Make a copy of kubeconfig on the host that runs Ansible in GITDIR/artifacts
 # kubeconfig_localhost: false
 # Download kubectl onto the host that runs Ansible in GITDIR/artifacts
@@ -168,3 +170,7 @@ helm_enabled: false
 # A comma-separated list of levels of node allocatable enforcement to be enforced by kubelet.
 # Acceptable options are 'pods', 'system-reserved', 'kube-reserved' and ''. Default is "".
 # kubelet_enforce_node_allocatable: pods
+
+## Supplementary addresses that can be added in kubernetes ssl keys.
+## That can be useful, for example, to set up a keepalived virtual IP
+# supplementary_addresses_in_ssl_keys: [10.0.0.1, 10.0.0.2, 10.0.0.3]

View File

@@ -288,8 +288,6 @@ def main():
     else:
         module.fail_json(msg='Unrecognized state %s.' % state)
-    if result:
-        changed = True
     module.exit_json(changed=changed,
                      msg='success: %s' % (' '.join(result))
                      )

View File

@@ -1,4 +1,4 @@
 pbr>=1.6
-ansible>=2.3.2
+ansible>=2.4.0
 netaddr
 jinja2>=2.9.6

View File

@@ -3,13 +3,13 @@
     has_bastion: "{{ 'bastion' in groups['all'] }}"
 - set_fact:
-    bastion_ip: "{{ hostvars['bastion']['ansible_ssh_host'] }}"
+    bastion_ip: "{{ hostvars['bastion']['ansible_host'] }}"
   when: has_bastion
-# As we are actually running on localhost, the ansible_ssh_user is your local user when you try to use it directly
-# To figure out the real ssh user, we delegate this task to the bastion and store the ansible_ssh_user in real_user
+# As we are actually running on localhost, the ansible_user is your local user when you try to use it directly
+# To figure out the real ssh user, we delegate this task to the bastion and store the ansible_user in real_user
 - set_fact:
-    real_user: "{{ ansible_ssh_user }}"
+    real_user: "{{ ansible_user }}"
   delegate_to: bastion
   when: has_bastion
@@ -18,3 +18,4 @@
   template:
     src: ssh-bastion.conf
     dest: "{{ playbook_dir }}/ssh-bastion.conf"
+  when: has_bastion

View File

@@ -17,5 +17,6 @@ Host {{ bastion_ip }}
 Host {{ vars['hosts'] }}
   ProxyCommand ssh -W %h:%p {{ real_user }}@{{ bastion_ip }} {% if ansible_ssh_private_key_file is defined %}-i {{ ansible_ssh_private_key_file }}{% endif %}
   StrictHostKeyChecking no
 {% endif %}
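The rendered file is an ordinary ssh client config, so it can also be used by hand when debugging connectivity through the bastion (host address and user illustrative):
```
ssh -F ssh-bastion.conf ubuntu@10.0.0.5
```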

View File

@@ -15,4 +15,6 @@
   when: fastestmirror.stat.exists
 - name: Install packages requirements for bootstrap
-  raw: yum -y install libselinux-python
+  yum:
+    name: libselinux-python
+    state: present
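Switching from `raw` to the `yum` module makes the task idempotent and check-mode aware; this can be confirmed with an ad-hoc call (inventory group name illustrative):
```
ansible kube-node -b -m yum -a "name=libselinux-python state=present"
# a second run reports "ok" rather than "changed"
```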

View File

@@ -3,7 +3,8 @@
   raw: stat /opt/bin/.bootstrapped
   register: need_bootstrap
   failed_when: false
-  tags: facts
+  tags:
+    - facts
 - name: Bootstrap | Run bootstrap.sh
   script: bootstrap.sh
@@ -11,7 +12,8 @@
 - set_fact:
     ansible_python_interpreter: "/opt/bin/python"
-  tags: facts
+  tags:
+    - facts
 - name: Bootstrap | Check if we need to install pip
   shell: "{{ansible_python_interpreter}} -m pip --version"
@@ -20,7 +22,8 @@
   changed_when: false
   check_mode: no
   when: need_bootstrap.rc != 0
-  tags: facts
+  tags:
+    - facts
 - name: Bootstrap | Copy get-pip.py
   copy:

View File

@@ -0,0 +1,23 @@
---
# raw: cat /etc/issue.net | grep '{{ bootstrap_versions }}'
- name: Bootstrap | Check if bootstrap is needed
raw: which "{{ item }}"
register: need_bootstrap
failed_when: false
with_items:
- python
- pip
- dbus-daemon
tags: facts
- name: Bootstrap | Install python 2.x, pip, and dbus
raw:
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y python-minimal python-pip dbus
when:
"{{ need_bootstrap.results | map(attribute='rc') | sort | last | bool }}"
- set_fact:
ansible_python_interpreter: "/usr/bin/python"
tags: facts
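The `when` gate works because `raw: which ...` returns rc 0 for each binary found and non-zero otherwise; mapping the registered results to their `rc` values, sorting, and taking the last entry yields a truthy value as soon as any of the three binaries is missing. An illustration of the filter chain (hypothetical registered results):
```
# need_bootstrap.results -> rc values [0, 1, 0]
# [0, 1, 0] | sort | last | bool  ->  1 | bool  ->  True, so the install task runs
```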

View File

@@ -8,15 +8,18 @@
   with_items:
     - python
     - pip
-  tags: facts
+    - dbus-daemon
+  tags:
+    - facts
 - name: Bootstrap | Install python 2.x and pip
   raw:
     apt-get update && \
-    DEBIAN_FRONTEND=noninteractive apt-get install -y python-minimal python-pip
+    DEBIAN_FRONTEND=noninteractive apt-get install -y python-minimal python-pip dbus
   when:
     "{{ need_bootstrap.results | map(attribute='rc') | sort | last | bool }}"
 - set_fact:
     ansible_python_interpreter: "/usr/bin/python"
-  tags: facts
+  tags:
+    - facts

View File

@@ -2,6 +2,9 @@
 - include: bootstrap-ubuntu.yml
   when: bootstrap_os == "ubuntu"
+
+- include: bootstrap-debian.yml
+  when: bootstrap_os == "debian"
 - include: bootstrap-coreos.yml
   when: bootstrap_os == "coreos"
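Which bootstrap file runs is selected per host through the `bootstrap_os` variable, typically set in the inventory (hostname and address illustrative):
```
node1 ansible_host=10.0.0.10 bootstrap_os=debian
```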

View File

@@ -1,6 +0,0 @@
---
dependencies:
- role: download
file: "{{ downloads.dnsmasq }}"
when: dns_mode == 'dnsmasq_kubedns' and download_localhost|default(false)
tags: [download, dnsmasq]

View File

@@ -3,13 +3,15 @@
   file:
     path: /etc/dnsmasq.d
     state: directory
-  tags: bootstrap-os
+  tags:
+    - bootstrap-os
 - name: ensure dnsmasq.d-available directory exists
   file:
     path: /etc/dnsmasq.d-available
     state: directory
-  tags: bootstrap-os
+  tags:
+    - bootstrap-os
 - name: check system nameservers
   shell: awk '/^nameserver/ {print $NF}' /etc/resolv.conf
@@ -100,7 +102,7 @@
 - name: Check for dnsmasq port (pulling image and running container)
   wait_for:
-    host: "{{dns_server}}"
+    host: "{{dnsmasq_dns_server}}"
     port: 53
     timeout: 180
   when: inventory_hostname == groups['kube-node'][0] and groups['kube-node'][0] in ansible_play_hosts

View File

@@ -18,6 +18,6 @@ spec:
     targetPort: 53
     protocol: UDP
   type: ClusterIP
-  clusterIP: {{dns_server}}
+  clusterIP: {{dnsmasq_dns_server}}
   selector:
     k8s-app: dnsmasq

View File

@@ -12,11 +12,13 @@
       paths:
         - ../vars
       skip: true
-  tags: facts
+  tags:
+    - facts
 - include: set_facts_dns.yml
   when: dns_mode != 'none' and resolvconf_mode == 'docker_dns'
-  tags: facts
+  tags:
+    - facts
 - name: check for minimum kernel version
   fail:
@@ -25,7 +27,8 @@
     {{ docker_kernel_min_version }} on
     {{ ansible_distribution }}-{{ ansible_distribution_version }}
   when: (not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]) and (ansible_kernel|version_compare(docker_kernel_min_version, "<"))
-  tags: facts
+  tags:
+    - facts
 - name: ensure docker repository public key is installed
   action: "{{ docker_repo_key_info.pkg_key }}"
@@ -37,6 +40,7 @@
   until: keyserver_task_result|succeeded
   retries: 4
   delay: "{{ retry_stagger | random + 3 }}"
+  environment: "{{ proxy_env }}"
   with_items: "{{ docker_repo_key_info.repo_keys }}"
   when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or is_atomic)
@@ -64,6 +68,7 @@
   until: docker_task_result|succeeded
   retries: 4
   delay: "{{ retry_stagger | random + 3 }}"
+  environment: "{{ proxy_env }}"
   with_items: "{{ docker_package_info.pkgs }}"
   notify: restart docker
   when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or is_atomic) and (docker_package_info.pkgs|length > 0)

View File

@@ -6,7 +6,7 @@
     {%- if dns_mode == 'kubedns' -%}
       {{ [ skydns_server ] }}
     {%- elif dns_mode == 'dnsmasq_kubedns' -%}
-      {{ [ dns_server ] }}
+      {{ [ dnsmasq_dns_server ] }}
     {%- endif -%}
 - name: set base docker dns facts

View File

@@ -8,7 +8,7 @@
   template:
     src: http-proxy.conf.j2
    dest: /etc/systemd/system/docker.service.d/http-proxy.conf
-  when: http_proxy is defined or https_proxy is defined or no_proxy is defined
+  when: http_proxy is defined or https_proxy is defined
 - name: get systemd version
   command: rpm -q --qf '%{V}\n' systemd
@@ -24,13 +24,6 @@
   notify: restart docker
   when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or is_atomic)
-- name: Write docker.service systemd file for atomic
-  template:
-    src: docker_atomic.service.j2
-    dest: /etc/systemd/system/docker.service
-  notify: restart docker
-  when: is_atomic
 - name: Write docker options systemd drop-in
   template:
     src: docker-options.conf.j2

View File

@@ -1,37 +0,0 @@
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service
[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecReload=/bin/kill -s HUP $MAINPID
Delegate=yes
KillMode=process
ExecStart=/usr/bin/dockerd-current \
--add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
--default-runtime=docker-runc \
--exec-opt native.cgroupdriver=systemd \
--userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
$DOCKER_OPTS \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \
$DOCKER_DNS_OPTIONS \
$ADD_REGISTRY \
$BLOCK_REGISTRY \
$INSECURE_REGISTRY
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=1min
Restart=on-abnormal
[Install]
WantedBy=multi-user.target

View File

@@ -1,2 +1,2 @@
 [Service]
-Environment={% if http_proxy %}"HTTP_PROXY={{ http_proxy }}"{% endif %} {% if https_proxy %}"HTTPS_PROXY={{ https_proxy }}"{% endif %} {% if no_proxy %}"NO_PROXY={{ no_proxy }}"{% endif %}
+Environment={% if http_proxy is defined %}"HTTP_PROXY={{ http_proxy }}"{% endif %} {% if https_proxy is defined %}"HTTPS_PROXY={{ https_proxy }}"{% endif %} {% if no_proxy is defined %}"NO_PROXY={{ no_proxy }}"{% endif %}
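With the `is defined` guards, a host that only sets `http_proxy` now renders a clean drop-in instead of failing on the undefined variables; roughly (proxy URL illustrative):
```
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
```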

View File

@@ -9,6 +9,7 @@ docker_versioned_pkg:
   'latest': docker
   '1.11': docker-1:1.11.2
   '1.12': docker-1:1.12.5
+  '1.13': docker-1.13.1
   'stable': docker-ce
   'edge': docker-ce-edge

View File

@@ -1,6 +1,9 @@
 ---
 local_release_dir: /tmp
+
+# Used to only evaluate vars from download role
+skip_downloads: false
 # if this is set to true will only download files once. Doesn't work
 # on Container Linux by CoreOS unless the download_localhost is true and localhost
 # is running another OS type. Default compress level is 1 (fastest).
@@ -17,10 +20,12 @@ download_localhost: False
 # Always pull images if set to True. Otherwise check by the repo's tag/digest.
 download_always_pull: False
+
+# Use the first kube-master if download_localhost is not set
+download_delegate: "{% if download_localhost %}localhost{% else %}{{groups['kube-master'][0]}}{% endif %}"
 # Versions
-kube_version: v1.7.5
-# Change to kube_version after v1.8.0 release
-kubeadm_version: "v1.8.0-rc.1"
+kube_version: v1.8.1
+kubeadm_version: "{{ kube_version }}"
 etcd_version: v3.2.4
 # TODO(mattymo): Move calico versions to roles/network_plugins/calico/defaults
 # after migration to container download
@@ -28,6 +33,7 @@ calico_version: "v2.5.0"
 calico_ctl_version: "v1.5.0"
 calico_cni_version: "v1.10.0"
 calico_policy_version: "v0.7.0"
+calico_rr_version: "v0.4.0"
 weave_version: 2.0.4
 flannel_version: "v0.8.0"
 flannel_cni_version: "v0.2.0"
@@ -37,7 +43,19 @@ pod_infra_version: 3.0
 kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/amd64/kubeadm"
 # Checksums
-kubeadm_checksum: "8f6ceb26b8503bfc36a99574cf6f853be1c55405aa31669561608ad8099bf5bf"
+kubeadm_checksum: "93246027cc225b4fd7ec57bf1f562dbc78f2ed9f2b77a1468976c266a104cf4d"
+
+istio_version: "0.2.6"
+istioctl_download_url: "https://storage.googleapis.com/istio-release/releases/{{ istio_version }}/istioctl/istioctl-linux"
+istioctl_checksum: fd703063c540b8c0ab943f478c05ab257d88ae27224c746a27d0526ddbf7c370
+
+vault_version: 0.8.1
+vault_binary_checksum: 3c4d70ba71619a43229e65c67830e30e050eab7a81ac6b28325ff707e5914188
+vault_download_url: "https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_linux_amd64.zip"
+vault_image_repo: "vault"
+vault_image_tag: "{{ vault_version }}"
 # Containers
 etcd_image_repo: "quay.io/coreos/etcd"
@@ -55,7 +73,7 @@ calico_cni_image_tag: "{{ calico_cni_version }}"
 calico_policy_image_repo: "quay.io/calico/kube-policy-controller"
 calico_policy_image_tag: "{{ calico_policy_version }}"
 calico_rr_image_repo: "quay.io/calico/routereflector"
-calico_rr_image_tag: "v0.3.0"
+calico_rr_image_tag: "{{ calico_rr_version }}"
 hyperkube_image_repo: "quay.io/coreos/hyperkube"
 hyperkube_image_tag: "{{ kube_version }}_coreos.0"
 pod_infra_image_repo: "gcr.io/google_containers/pause-amd64"
@@ -74,10 +92,10 @@ weave_npc_image_tag: "{{ weave_version }}"
 nginx_image_repo: nginx
 nginx_image_tag: 1.11.4-alpine
-dnsmasq_version: 2.72
+dnsmasq_version: 2.78
 dnsmasq_image_repo: "andyshinn/dnsmasq"
 dnsmasq_image_tag: "{{ dnsmasq_version }}"
-kubedns_version: 1.14.2
+kubedns_version: 1.14.5
 kubedns_image_repo: "gcr.io/google_containers/k8s-dns-kube-dns-amd64"
 kubedns_image_tag: "{{ kubedns_version }}"
 dnsmasq_nanny_image_repo: "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64"
@@ -108,23 +126,26 @@ tiller_image_tag: "{{ tiller_version }}"
 downloads:
   netcheck_server:
+    enabled: "{{ deploy_netchecker }}"
     container: true
     repo: "{{ netcheck_server_img_repo }}"
     tag: "{{ netcheck_server_tag }}"
     sha256: "{{ netcheck_server_digest_checksum|default(None) }}"
-    enabled: "{{ deploy_netchecker|bool }}"
   netcheck_agent:
+    enabled: "{{ deploy_netchecker }}"
     container: true
     repo: "{{ netcheck_agent_img_repo }}"
     tag: "{{ netcheck_agent_tag }}"
     sha256: "{{ netcheck_agent_digest_checksum|default(None) }}"
-    enabled: "{{ deploy_netchecker|bool }}"
   etcd:
+    enabled: true
     container: true
     repo: "{{ etcd_image_repo }}"
     tag: "{{ etcd_image_tag }}"
     sha256: "{{ etcd_digest_checksum|default(None) }}"
   kubeadm:
+    enabled: "{{ kubeadm_enabled }}"
+    file: true
     version: "{{ kubeadm_version }}"
     dest: "kubeadm"
    sha256: "{{ kubeadm_checksum }}"
@@ -133,146 +154,185 @@ downloads:
     unarchive: false
     owner: "root"
     mode: "0755"
+  istioctl:
+    enabled: "{{ istio_enabled }}"
+    file: true
+    version: "{{ istio_version }}"
+    dest: "istio/istioctl"
+    sha256: "{{ istioctl_checksum }}"
+    source_url: "{{ istioctl_download_url }}"
+    url: "{{ istioctl_download_url }}"
+    unarchive: false
+    owner: "root"
+    mode: "0755"
   hyperkube:
+    enabled: true
     container: true
     repo: "{{ hyperkube_image_repo }}"
     tag: "{{ hyperkube_image_tag }}"
     sha256: "{{ hyperkube_digest_checksum|default(None) }}"
   flannel:
+    enabled: "{{ kube_network_plugin == 'flannel' or kube_network_plugin == 'canal' }}"
     container: true
     repo: "{{ flannel_image_repo }}"
     tag: "{{ flannel_image_tag }}"
     sha256: "{{ flannel_digest_checksum|default(None) }}"
-    enabled: "{{ kube_network_plugin == 'flannel' or kube_network_plugin == 'canal' }}"
   flannel_cni:
+    enabled: "{{ kube_network_plugin == 'flannel' }}"
     container: true
     repo: "{{ flannel_cni_image_repo }}"
     tag: "{{ flannel_cni_image_tag }}"
     sha256: "{{ flannel_cni_digest_checksum|default(None) }}"
-    enabled: "{{ kube_network_plugin == 'flannel' }}"
   calicoctl:
+    enabled: "{{ kube_network_plugin == 'calico' or kube_network_plugin == 'canal' }}"
     container: true
     repo: "{{ calicoctl_image_repo }}"
     tag: "{{ calicoctl_image_tag }}"
     sha256: "{{ calicoctl_digest_checksum|default(None) }}"
-    enabled: "{{ kube_network_plugin == 'calico' or kube_network_plugin == 'canal' }}"
   calico_node:
+    enabled: "{{ kube_network_plugin == 'calico' or kube_network_plugin == 'canal' }}"
     container: true
     repo: "{{ calico_node_image_repo }}"
     tag: "{{ calico_node_image_tag }}"
     sha256: "{{ calico_node_digest_checksum|default(None) }}"
-    enabled: "{{ kube_network_plugin == 'calico' or kube_network_plugin == 'canal' }}"
   calico_cni:
+    enabled: "{{ kube_network_plugin == 'calico' or kube_network_plugin == 'canal' }}"
     container: true
     repo: "{{ calico_cni_image_repo }}"
     tag: "{{ calico_cni_image_tag }}"
     sha256: "{{ calico_cni_digest_checksum|default(None) }}"
-    enabled: "{{ kube_network_plugin == 'calico' or kube_network_plugin == 'canal' }}"
   calico_policy:
+    enabled: "{{ enable_network_policy or kube_network_plugin == 'canal' }}"
     container: true
     repo: "{{ calico_policy_image_repo }}"
     tag: "{{ calico_policy_image_tag }}"
     sha256: "{{ calico_policy_digest_checksum|default(None) }}"
-    enabled: "{{ kube_network_plugin == 'canal' }}"
   calico_rr:
+    enabled: "{{ peer_with_calico_rr is defined and peer_with_calico_rr}} and kube_network_plugin == 'calico'"
     container: true
     repo: "{{ calico_rr_image_repo }}"
     tag: "{{ calico_rr_image_tag }}"
     sha256: "{{ calico_rr_digest_checksum|default(None) }}"
-    enabled: "{{ peer_with_calico_rr is defined and peer_with_calico_rr}} and kube_network_plugin == 'calico'"
   weave_kube:
+    enabled: "{{ kube_network_plugin == 'weave' }}"
     container: true
     repo: "{{ weave_kube_image_repo }}"
     tag: "{{ weave_kube_image_tag }}"
     sha256: "{{ weave_kube_digest_checksum|default(None) }}"
-    enabled: "{{ kube_network_plugin == 'weave' }}"
   weave_npc:
+    enabled: "{{ kube_network_plugin == 'weave' }}"
     container: true
     repo: "{{ weave_npc_image_repo }}"
     tag: "{{ weave_npc_image_tag }}"
     sha256: "{{ weave_npc_digest_checksum|default(None) }}"
-    enabled: "{{ kube_network_plugin == 'weave' }}"
   pod_infra:
+    enabled: true
     container: true
     repo: "{{ pod_infra_image_repo }}"
     tag: "{{ pod_infra_image_tag }}"
     sha256: "{{ pod_infra_digest_checksum|default(None) }}"
   install_socat:
+    enabled: "{{ ansible_os_family in ['CoreOS', 'Container Linux by CoreOS'] }}"
     container: true
     repo: "{{ install_socat_image_repo }}"
     tag: "{{ install_socat_image_tag }}"
     sha256: "{{ install_socat_digest_checksum|default(None) }}"
   nginx:
+    enabled: true
     container: true
     repo: "{{ nginx_image_repo }}"
     tag: "{{ nginx_image_tag }}"
     sha256: "{{ nginx_digest_checksum|default(None) }}"
   dnsmasq:
+    enabled: "{{ dns_mode == 'dnsmasq_kubedns' }}"
    container: true
     repo: "{{ dnsmasq_image_repo }}"
     tag: "{{ dnsmasq_image_tag }}"
     sha256: "{{ dnsmasq_digest_checksum|default(None) }}"
   kubedns:
+    enabled: true
     container: true
     repo: "{{ kubedns_image_repo }}"
     tag: "{{ kubedns_image_tag }}"
     sha256: "{{ kubedns_digest_checksum|default(None) }}"
   dnsmasq_nanny:
+    enabled: true
     container: true
     repo: "{{ dnsmasq_nanny_image_repo }}"
     tag: "{{ dnsmasq_nanny_image_tag }}"
     sha256: "{{ dnsmasq_nanny_digest_checksum|default(None) }}"
   dnsmasq_sidecar:
+    enabled: true
     container: true
     repo: "{{ dnsmasq_sidecar_image_repo }}"
     tag: "{{ dnsmasq_sidecar_image_tag }}"
     sha256: "{{ dnsmasq_sidecar_digest_checksum|default(None) }}"
   kubednsautoscaler:
+    enabled: true
     container: true
     repo: "{{ kubednsautoscaler_image_repo }}"
     tag: "{{ kubednsautoscaler_image_tag }}"
     sha256: "{{ kubednsautoscaler_digest_checksum|default(None) }}"
   testbox:
+    enabled: true
     container: true
     repo: "{{ test_image_repo }}"
     tag: "{{ test_image_tag }}"
     sha256: "{{ testbox_digest_checksum|default(None) }}"
   elasticsearch:
+    enabled: "{{ efk_enabled }}"
     container: true
     repo: "{{ elasticsearch_image_repo }}"
     tag: "{{ elasticsearch_image_tag }}"
     sha256: "{{ elasticsearch_digest_checksum|default(None) }}"
   fluentd:
+    enabled: "{{ efk_enabled }}"
     container: true
     repo: "{{ fluentd_image_repo }}"
     tag: "{{ fluentd_image_tag }}"
     sha256: "{{ fluentd_digest_checksum|default(None) }}"
   kibana:
+    enabled: "{{ efk_enabled }}"
     container: true
     repo: "{{ kibana_image_repo }}"
     tag: "{{ kibana_image_tag }}"
     sha256: "{{ kibana_digest_checksum|default(None) }}"
   helm:
+    enabled: "{{ helm_enabled }}"
     container: true
     repo: "{{ helm_image_repo }}"
     tag: "{{ helm_image_tag }}"
     sha256: "{{ helm_digest_checksum|default(None) }}"
   tiller:
+    enabled: "{{ helm_enabled }}"
     container: true
     repo: "{{ tiller_image_repo }}"
     tag: "{{ tiller_image_tag }}"
     sha256: "{{ tiller_digest_checksum|default(None) }}"
+  vault:
+    enabled: "{{ cert_management == 'vault' }}"
+    container: "{{ vault_deployment_type != 'host' }}"
+    file: "{{ vault_deployment_type == 'host' }}"
+    dest: "vault/vault_{{ vault_version }}_linux_amd64.zip"
+    mode: "0755"
+    owner: "vault"
+    repo: "{{ vault_image_repo }}"
+    sha256: "{{ vault_binary_checksum if vault_deployment_type == 'host' else vault_digest_checksum|d(none) }}"
+    source_url: "{{ vault_download_url }}"
+    tag: "{{ vault_image_tag }}"
+    unarchive: true
+    url: "{{ vault_download_url }}"
+    version: "{{ vault_version }}"
-download:
-  container: "{{ file.container|default('false') }}"
-  repo: "{{ file.repo|default(None) }}"
-  tag: "{{ file.tag|default(None) }}"
-  enabled: "{{ file.enabled|default('true') }}"
-  dest: "{{ file.dest|default(None) }}"
-  version: "{{ file.version|default(None) }}"
-  sha256: "{{ file.sha256|default(None) }}"
-  source_url: "{{ file.source_url|default(None) }}"
-  url: "{{ file.url|default(None) }}"
-  unarchive: "{{ file.unarchive|default('false') }}"
-  owner: "{{ file.owner|default('kube') }}"
-  mode: "{{ file.mode|default(None) }}"
+download_defaults:
+  container: false
+  file: false
+  repo: None
+  tag: None
+  enabled: false
+  dest: None
+  version: None
+  url: None
+  unarchive: false
+  owner: kube
+  mode: None
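Each entry in `downloads` is now merged over this static `download_defaults` map at include time (see the refactored download tasks later in this diff), so entries only declare the keys they override. A sketch of the merge semantics using Ansible's `combine` filter (values illustrative):
```
- debug:
    msg: "{{ download_defaults | combine({'container': true, 'repo': 'nginx', 'enabled': true}) }}"
  # -> the defaults with container/repo/enabled replaced; every other key keeps its default value
```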

View File

@@ -0,0 +1,2 @@
---
allow_duplicates: true

View File

@@ -0,0 +1,26 @@
---
- name: container_download | Make download decision if pull is required by tag or sha256
include: set_docker_image_facts.yml
delegate_to: "{{ download_delegate if download_run_once or omit }}"
delegate_facts: no
run_once: "{{ download_run_once }}"
when:
- download.enabled
- download.container
tags:
- facts
- name: container_download | Download containers if pull is required or told to always pull
command: "{{ docker_bin_dir }}/docker pull {{ pull_args }}"
register: pull_task_result
until: pull_task_result|succeeded
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
environment: "{{ proxy_env }}"
when:
- download.enabled
- download.container
- pull_required|default(download_always_pull)
delegate_to: "{{ download_delegate if download_run_once or omit }}"
delegate_facts: no
run_once: "{{ download_run_once }}"

View File

@@ -0,0 +1,43 @@
---
- name: file_download | Downloading...
debug:
msg:
- "URL: {{ download.url }}"
- "Dest: {{ download.dest }}"
- name: file_download | Create dest directory
file:
path: "{{local_release_dir}}/{{download.dest|dirname}}"
state: directory
recurse: yes
when:
- download.enabled
- download.file
- name: file_download | Download item
get_url:
url: "{{download.url}}"
dest: "{{local_release_dir}}/{{download.dest}}"
sha256sum: "{{download.sha256 | default(omit)}}"
owner: "{{ download.owner|default(omit) }}"
mode: "{{ download.mode|default(omit) }}"
register: get_url_result
until: "'OK' in get_url_result.msg or 'file already exists' in get_url_result.msg"
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
environment: "{{ proxy_env }}"
when:
- download.enabled
- download.file
- name: file_download | Extract archives
unarchive:
src: "{{ local_release_dir }}/{{download.dest}}"
dest: "{{ local_release_dir }}/{{download.dest|dirname}}"
owner: "{{ download.owner|default(omit) }}"
mode: "{{ download.mode|default(omit) }}"
copy: no
when:
- download.enabled
- download.file
- download.unarchive|default(False)
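
sha256sum is the legacy spelling of get_url's checksum argument. If one were modernizing this task on Ansible 2.x, the equivalent would use the combined checksum form; a sketch, not part of this change:

    - name: file_download | Download item
      get_url:
        url: "{{ download.url }}"
        dest: "{{ local_release_dir }}/{{ download.dest }}"
        checksum: "sha256:{{ download.sha256 }}"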


@@ -0,0 +1,32 @@
---
- name: Register docker images info
raw: >-
{{ docker_bin_dir }}/docker images -q | xargs {{ docker_bin_dir }}/docker inspect -f "{{ '{{' }} (index .RepoTags 0) {{ '}}' }},{{ '{{' }} (index .RepoDigests 0) {{ '}}' }}" | tr '\n' ','
no_log: true
register: docker_images
failed_when: false
changed_when: false
check_mode: no
- name: container_download | Create dest directory for saved/loaded container images
file:
path: "{{local_release_dir}}/containers"
state: directory
recurse: yes
mode: 0755
owner: "{{ansible_ssh_user|default(ansible_user_id)}}"
- name: container_download | create local directory for saved/loaded container images
file:
path: "{{local_release_dir}}/containers"
state: directory
recurse: yes
delegate_to: localhost
delegate_facts: false
become: false
run_once: true
when:
- download_run_once
- download_delegate == 'localhost'
tags:
- localhost
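
The {{ '{{' }} / {{ '}}' }} escapes keep Jinja from consuming the Go-template braces that docker inspect expects. With docker_bin_dir set to /usr/bin (illustrative), the remote shell receives roughly:

    /usr/bin/docker images -q | xargs /usr/bin/docker inspect -f "{{ (index .RepoTags 0) }},{{ (index .RepoDigests 0) }}" | tr '\n' ','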


@@ -1,201 +1,24 @@
 ---
-- name: file_download | Create dest directories
-  file:
-    path: "{{local_release_dir}}/{{download.dest|dirname}}"
-    state: directory
-    recurse: yes
-  when:
-    - download.enabled|bool
-    - not download.container|bool
-  tags: bootstrap-os
-- name: file_download | Download item
-  get_url:
-    url: "{{download.url}}"
-    dest: "{{local_release_dir}}/{{download.dest}}"
-    sha256sum: "{{download.sha256 | default(omit)}}"
-    owner: "{{ download.owner|default(omit) }}"
-    mode: "{{ download.mode|default(omit) }}"
-  register: get_url_result
-  until: "'OK' in get_url_result.msg or 'file already exists' in get_url_result.msg"
-  retries: 4
-  delay: "{{ retry_stagger | random + 3 }}"
-  when:
-    - download.enabled|bool
-    - not download.container|bool
-- name: file_download | Extract archives
-  unarchive:
-    src: "{{ local_release_dir }}/{{download.dest}}"
-    dest: "{{ local_release_dir }}/{{download.dest|dirname}}"
-    owner: "{{ download.owner|default(omit) }}"
-    mode: "{{ download.mode|default(omit) }}"
-    copy: no
-  when:
-    - download.enabled|bool
-    - not download.container|bool
-    - download.unarchive|default(False)
-    - download_run_once
-- name: file_download | Fix permissions
-  file:
-    state: file
-    path: "{{local_release_dir}}/{{download.dest}}"
-    owner: "{{ download.owner|default(omit) }}"
-    mode: "{{ download.mode|default(omit) }}"
-  when:
-    - download.enabled|bool
-    - not download.container|bool
-    - (download.unarchive is not defined or download.unarchive == False)
-- set_fact:
-    download_delegate: "{% if download_localhost|bool %}localhost{% else %}{{groups['kube-master'][0]}}{% endif %}"
-  run_once: true
-  tags: facts
-- name: container_download | Create dest directory for saved/loaded container images
-  file:
-    path: "{{local_release_dir}}/containers"
-    state: directory
-    recurse: yes
-    mode: 0755
-    owner: "{{ansible_ssh_user|default(ansible_user_id)}}"
-  when:
-    - download.enabled|bool
-    - download.container|bool
-  tags: bootstrap-os
-# This is required for the download_localhost delegate to work smooth with Container Linux by CoreOS cluster nodes
-- name: container_download | Hack python binary path for localhost
-  raw: sh -c "mkdir -p /opt/bin; ln -sf /usr/bin/python /opt/bin/python"
-  delegate_to: localhost
-  when: download_delegate == 'localhost'
-  failed_when: false
-  tags: localhost
-- name: container_download | create local directory for saved/loaded container images
-  file:
-    path: "{{local_release_dir}}/containers"
-    state: directory
-    recurse: yes
-  delegate_to: localhost
-  become: false
-  run_once: true
-  when:
-    - download_run_once|bool
-    - download.enabled|bool
-    - download.container|bool
-    - download_delegate == 'localhost'
-  tags: localhost
-- name: container_download | Make download decision if pull is required by tag or sha256
-  include: set_docker_image_facts.yml
-  when:
-    - download.enabled|bool
-    - download.container|bool
-  delegate_to: "{{ download_delegate if download_run_once|bool or omit }}"
-  run_once: "{{ download_run_once|bool }}"
-  tags: facts
-- name: container_download | Download containers if pull is required or told to always pull
-  command: "{{ docker_bin_dir }}/docker pull {{ pull_args }}"
-  register: pull_task_result
-  until: pull_task_result|succeeded
-  retries: 4
-  delay: "{{ retry_stagger | random + 3 }}"
-  when:
-    - download.enabled|bool
-    - download.container|bool
-    - pull_required|bool|default(download_always_pull)
-  delegate_to: "{{ download_delegate if download_run_once|bool or omit }}"
-  run_once: "{{ download_run_once|bool }}"
-- set_fact:
-    fname: "{{local_release_dir}}/containers/{{download.repo|regex_replace('/|\0|:', '_')}}:{{download.tag|default(download.sha256)|regex_replace('/|\0|:', '_')}}.tar"
-  run_once: true
-  tags: facts
-- name: "container_download | Set default value for 'container_changed' to false"
-  set_fact:
-    container_changed: "{{pull_required|default(false)|bool}}"
-- name: "container_download | Update the 'container_changed' fact"
-  set_fact:
-    container_changed: "{{ pull_required|bool|default(false) or not 'up to date' in pull_task_result.stdout }}"
-  when:
-    - download.enabled|bool
-    - download.container|bool
-    - pull_required|bool|default(download_always_pull)
-  run_once: "{{ download_run_once|bool }}"
-  tags: facts
-- name: container_download | Stat saved container image
-  stat:
-    path: "{{fname}}"
-  register: img
-  changed_when: false
-  when:
-    - download.enabled|bool
-    - download.container|bool
-    - download_run_once|bool
-  delegate_to: "{{ download_delegate }}"
-  become: false
-  run_once: true
-  tags: facts
-- name: container_download | save container images
-  shell: "{{ docker_bin_dir }}/docker save {{ pull_args }} | gzip -{{ download_compress }} > {{ fname }}"
-  delegate_to: "{{ download_delegate }}"
-  register: saved
-  run_once: true
-  when:
-    - (not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or download_delegate == "localhost")
-    - download_run_once|bool
-    - download.enabled|bool
-    - download.container|bool
-    - (container_changed|bool or not img.stat.exists)
-- name: container_download | copy container images to ansible host
-  synchronize:
-    src: "{{ fname }}"
-    dest: "{{ fname }}"
-    mode: pull
-  delegate_to: localhost
-  become: false
-  when:
-    - not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
-    - inventory_hostname == groups['kube-master'][0]
-    - download_delegate != "localhost"
-    - download_run_once|bool
-    - download.enabled|bool
-    - download.container|bool
-    - saved.changed
-- name: container_download | upload container images to nodes
-  synchronize:
-    src: "{{ fname }}"
-    dest: "{{ fname }}"
-    mode: push
-  delegate_to: localhost
-  become: false
-  register: get_task
-  until: get_task|succeeded
-  retries: 4
-  delay: "{{ retry_stagger | random + 3 }}"
-  when:
-    - (not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] and
-       inventory_hostname != groups['kube-master'][0] or
-       download_delegate == "localhost")
-    - download_run_once|bool
-    - download.enabled|bool
-    - download.container|bool
-  tags: [upload, upgrade]
-- name: container_download | load container images
-  shell: "{{ docker_bin_dir }}/docker load < {{ fname }}"
-  when:
-    - (not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] and
-       inventory_hostname != groups['kube-master'][0] or download_delegate == "localhost")
-    - download_run_once|bool
-    - download.enabled|bool
-    - download.container|bool
-  tags: [upload, upgrade]
+- include: download_prep.yml
+  when:
+    - not skip_downloads|default(false)
+
+- name: "Download items"
+  include: "download_{% if download.container %}container{% else %}file{% endif %}.yml"
+  vars:
+    download: "{{ download_defaults | combine(item.value) }}"
+  with_dict: "{{ downloads }}"
+  when:
+    - not skip_downloads|default(false)
+    - item.value.enabled
+
+- name: "Sync container"
+  include: sync_container.yml
+  vars:
+    download: "{{ download_defaults | combine(item.value) }}"
+  with_dict: "{{ downloads }}"
+  when:
+    - not skip_downloads|default(false)
+    - item.value.enabled
+    - item.value.container
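
The refactored entry point drives everything from one dict: each downloads entry is overlaid on download_defaults via the combine filter, so an item only declares the keys it cares about. A hedged sketch (entry fields illustrative; the netcheck_server name appears elsewhere in this diff):

    downloads:
      netcheck_server:
        enabled: "{{ deploy_netchecker }}"
        container: true
        repo: "{{ netcheck_server_img_repo }}"
        tag: "{{ netcheck_server_tag }}"
    # in "Download items": download == download_defaults | combine(item.value),
    # so unset keys (file, dest, mode, owner, ...) fall back to the defaults.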


@@ -5,7 +5,7 @@
 - set_fact:
     pull_args: >-
-      {%- if pull_by_digest|bool %}{{download.repo}}@sha256:{{download.sha256}}{%- else -%}{{download.repo}}:{{download.tag}}{%- endif -%}
+      {%- if pull_by_digest %}{{download.repo}}@sha256:{{download.sha256}}{%- else -%}{{download.repo}}:{{download.tag}}{%- endif -%}
 - name: Register docker images info
   raw: >-
@@ -15,16 +15,16 @@
   failed_when: false
   changed_when: false
   check_mode: no
-  when: not download_always_pull|bool
+  when: not download_always_pull
 - set_fact:
     pull_required: >-
       {%- if pull_args in docker_images.stdout.split(',') %}false{%- else -%}true{%- endif -%}
-  when: not download_always_pull|bool
+  when: not download_always_pull
 - name: Check the local digest sha256 corresponds to the given image tag
   assert:
     that: "{{download.repo}}:{{download.tag}} in docker_images.stdout.split(',')"
-  when: not download_always_pull|bool and not pull_required|bool and pull_by_digest|bool
+  when: not download_always_pull and not pull_required and pull_by_digest
   tags:
   - asserts
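
For reference, pull_args collapses to one of two spellings (repository, tag, and digest illustrative):

    quay.io/coreos/etcd@sha256:<digest>   # pull_by_digest true
    quay.io/coreos/etcd:v3.2.4            # pull_by_digest false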


@@ -0,0 +1,114 @@
---
- set_fact:
fname: "{{local_release_dir}}/containers/{{download.repo|regex_replace('/|\0|:', '_')}}:{{download.tag|default(download.sha256)|regex_replace('/|\0|:', '_')}}.tar"
run_once: true
when:
- download.enabled
- download.container
- download_run_once
tags:
- facts
- name: "container_download | Set default value for 'container_changed' to false"
set_fact:
container_changed: "{{pull_required|default(false)}}"
when:
- download.enabled
- download.container
- download_run_once
- name: "container_download | Update the 'container_changed' fact"
set_fact:
container_changed: "{{ pull_required|default(false) or not 'up to date' in pull_task_result.stdout }}"
when:
- download.enabled
- download.container
- download_run_once
- pull_required|default(download_always_pull)
run_once: "{{ download_run_once }}"
tags:
- facts
- name: container_download | Stat saved container image
stat:
path: "{{fname}}"
register: img
changed_when: false
delegate_to: "{{ download_delegate }}"
delegate_facts: no
become: false
run_once: true
when:
- download.enabled
- download.container
- download_run_once
tags:
- facts
- name: container_download | save container images
shell: "{{ docker_bin_dir }}/docker save {{ pull_args }} | gzip -{{ download_compress }} > {{ fname }}"
delegate_to: "{{ download_delegate }}"
delegate_facts: no
register: saved
run_once: true
when:
- download.enabled
- download.container
- download_run_once
- (ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"] or download_delegate == "localhost")
- (container_changed or not img.stat.exists)
- name: container_download | copy container images to ansible host
synchronize:
src: "{{ fname }}"
dest: "{{ fname }}"
use_ssh_args: "{{ has_bastion | default(false) }}"
mode: pull
delegate_to: localhost
delegate_facts: no
run_once: true
become: false
when:
- download.enabled
- download.container
- download_run_once
- ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
- inventory_hostname == download_delegate
- download_delegate != "localhost"
- saved.changed
- name: container_download | upload container images to nodes
synchronize:
src: "{{ fname }}"
dest: "{{ fname }}"
use_ssh_args: "{{ has_bastion | default(false) }}"
mode: push
delegate_to: localhost
delegate_facts: no
become: false
register: get_task
until: get_task|succeeded
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
when:
- download.enabled
- download.container
- download_run_once
- (ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"] and
inventory_hostname != download_delegate or
download_delegate == "localhost")
tags:
- upload
- upgrade
- name: container_download | load container images
shell: "{{ docker_bin_dir }}/docker load < {{ fname }}"
when:
- download.enabled
- download.container
- download_run_once
- (ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"] and
inventory_hostname != download_delegate or download_delegate == "localhost")
tags:
- upload
- upgrade
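
Stripped of Ansible, the save/synchronize/load chain is roughly this shell flow (paths and image illustrative; the compression level comes from download_compress):

    # on the download delegate
    docker save quay.io/coreos/etcd:v3.2.4 | gzip -1 > /tmp/releases/containers/etcd.tar
    # synchronize mode=pull copies delegate -> ansible host; mode=push copies ansible host -> each node
    # on each node
    docker load < /tmp/releases/containers/etcd.tar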


@@ -17,7 +17,8 @@ etcd_election_timeout: "5000"
 etcd_metrics: "basic"
 # Limits
-etcd_memory_limit: 512M
+# Limit memory only if <4GB memory on host. 0=unlimited
+etcd_memory_limit: "{% if ansible_memtotal_mb < 4096 %}512M{% else %}0{% endif %}"
 # Uncomment to set CPU share for etcd
 # etcd_cpu_limit: 300m
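
The new default derives the limit from gathered facts, e.g. (values illustrative):

    # ansible_memtotal_mb: 2048  -> etcd_memory_limit: "512M"
    # ansible_memtotal_mb: 16384 -> etcd_memory_limit: "0"   (unlimited)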


@@ -3,8 +3,5 @@ dependencies:
 - role: adduser
   user: "{{ addusers.etcd }}"
   when: not (ansible_os_family in ['CoreOS', 'Container Linux by CoreOS'] or is_atomic)
-- role: download
-  file: "{{ downloads.etcd }}"
-  tags: download
 # NOTE: Dynamic task dependency on Vault Role if cert_management == "vault"


@@ -26,7 +26,7 @@
 - name: "Check_certs | Set 'gen_certs' to true"
   set_fact:
     gen_certs: true
-  when: "not '{{ item }}' in etcdcert_master.files|map(attribute='path') | list"
+  when: not item in etcdcert_master.files|map(attribute='path') | list
   run_once: true
   with_items: >-
     ['{{etcd_cert_dir}}/ca.pem',


@@ -6,11 +6,8 @@
   changed_when: false
   check_mode: no
   when: is_etcd_master
-  tags: facts
+  tags:
+    - facts
-- name: Configure | Add member to the cluster if it is not there
-  when: is_etcd_master and etcd_member_in_cluster.rc != 0 and etcd_cluster_is_healthy.rc == 0
-  shell: "{{ bin_dir }}/etcdctl --peers={{ etcd_access_addresses }} member add {{ etcd_member_name }} {{ etcd_peer_url }}"
 - name: Install etcd launch script
   template:
@@ -28,3 +25,12 @@
     backup: yes
   when: is_etcd_master
   notify: restart etcd
+
+- name: Configure | Join member(s) to cluster one at a time
+  include: join_member.yml
+  vars:
+    target_node: "{{ item }}"
+  loop_control:
+    pause: 10
+  with_items: "{{ groups['etcd'] }}"
+  when: inventory_hostname == item and etcd_member_in_cluster.rc != 0 and etcd_cluster_is_healthy.rc == 0


@@ -83,7 +83,8 @@
      'node-{{ node }}-key.pem',
      {% endfor %}]"
     my_node_certs: ['ca.pem', 'node-{{ inventory_hostname }}.pem', 'node-{{ inventory_hostname }}-key.pem']
-  tags: facts
+  tags:
+    - facts
 - name: Gen_certs | Gather etcd master certs
   shell: "tar cfz - -C {{ etcd_cert_dir }} -T /dev/stdin <<< {{ my_master_certs|join(' ') }} {{ all_node_certs|join(' ') }} | base64 --wrap=0"


@@ -1,11 +1,13 @@
 ---
 - include: sync_etcd_master_certs.yml
   when: inventory_hostname in groups.etcd
-  tags: etcd-secrets
+  tags:
+    - etcd-secrets
 - include: sync_etcd_node_certs.yml
   when: inventory_hostname in etcd_node_cert_hosts
-  tags: etcd-secrets
+  tags:
+    - etcd-secrets
 # Issue master certs to Etcd nodes
 - include: ../../vault/tasks/shared/issue_cert.yml


@@ -2,7 +2,7 @@
 - name: Install | Copy etcdctl binary from docker container
   command: sh -c "{{ docker_bin_dir }}/docker rm -f etcdctl-binarycopy;
            {{ docker_bin_dir }}/docker create --name etcdctl-binarycopy {{ etcd_image_repo }}:{{ etcd_image_tag }} &&
-           {{ docker_bin_dir }}/docker cp etcdctl-binarycopy:{{ etcd_container_bin_dir }}etcdctl {{ bin_dir }}/etcdctl &&
+           {{ docker_bin_dir }}/docker cp etcdctl-binarycopy:/usr/local/bin/etcdctl {{ bin_dir }}/etcdctl &&
            {{ docker_bin_dir }}/docker rm -f etcdctl-binarycopy"
   when: etcd_deployment_type == "docker"
   register: etcd_task_result


@@ -18,7 +18,7 @@
            --mount=volume=bin-dir,target=/host/bin
            {{ etcd_image_repo }}:{{ etcd_image_tag }}
            --name=etcdctl-binarycopy
-           --exec=/bin/cp -- {{ etcd_container_bin_dir }}/etcdctl /host/bin/etcdctl
+           --exec=/bin/cp -- /usr/local/bin/etcdctl /host/bin/etcdctl
   register: etcd_task_result
   until: etcd_task_result.rc == 0
   retries: 4


@@ -0,0 +1,41 @@
---
- name: Join Member | Add member to cluster
shell: "{{ bin_dir }}/etcdctl --peers={{ etcd_access_addresses }} member add {{ etcd_member_name }} {{ etcd_peer_url }}"
register: member_add_result
until: member_add_result.rc == 0
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
when: target_node == inventory_hostname
- include: refresh_config.yml
vars:
etcd_peer_addresses: >-
{% for host in groups['etcd'] -%}
{%- if hostvars[host]['etcd_member_in_cluster'].rc == 0 -%}
{{ "etcd"+loop.index|string }}=https://{{ hostvars[host].access_ip | default(hostvars[host].ip | default(hostvars[host].ansible_default_ipv4['address'])) }}:2380,
{%- endif -%}
{%- if loop.last -%}
{{ etcd_member_name }}={{ etcd_peer_url }}
{%- endif -%}
{%- endfor -%}
when: target_node == inventory_hostname
- name: Join Member | reload systemd
command: systemctl daemon-reload
when: target_node == inventory_hostname
- name: Join Member | Ensure etcd is running
service:
name: etcd
state: started
enabled: yes
when: target_node == inventory_hostname
- name: Join Member | Ensure member is in cluster
shell: "{{ bin_dir }}/etcdctl --no-sync --peers={{ etcd_access_addresses }} member list | grep -q {{ etcd_access_address }}"
register: etcd_member_in_cluster
changed_when: false
check_mode: no
tags:
- facts
when: target_node == inventory_hostname
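
For a joiner named etcd2 entering a cluster where only etcd1 is a member so far, the etcd_peer_addresses template above renders to something like (names and addresses illustrative):

    etcd1=https://10.0.0.1:2380,etcd2=https://10.0.0.2:2380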


@@ -1,22 +1,33 @@
 ---
 - include: check_certs.yml
   when: cert_management == "script"
-  tags: [etcd-secrets, facts]
+  tags:
+    - etcd-secrets
+    - facts
 - include: "gen_certs_{{ cert_management }}.yml"
-  tags: etcd-secrets
+  tags:
+    - etcd-secrets
 - include: upd_ca_trust.yml
-  tags: etcd-secrets
+  tags:
+    - etcd-secrets
 - name: "Gen_certs | Get etcd certificate serials"
   shell: "openssl x509 -in {{ etcd_cert_dir }}/node-{{ inventory_hostname }}.pem -noout -serial | cut -d= -f2"
-  register: "etcd_client_cert_serial"
+  register: "etcd_client_cert_serial_result"
+  changed_when: false
   when: inventory_hostname in groups['k8s-cluster']|union(groups['etcd'])|union(groups['calico-rr']|default([]))|unique|sort
+- name: Set etcd_client_cert_serial
+  set_fact:
+    etcd_client_cert_serial: "{{ etcd_client_cert_serial_result.stdout }}"
+  when: inventory_hostname in groups['k8s-cluster']|union(groups['etcd'])|union(groups['calico-rr']|default([]))|unique|sort
 - include: "install_{{ etcd_deployment_type }}.yml"
   when: is_etcd_master
-  tags: upgrade
+  tags:
+    - upgrade
 - include: set_cluster_health.yml
   when: is_etcd_master and etcd_cluster_setup


@@ -6,4 +6,5 @@
   changed_when: false
   check_mode: no
   when: is_etcd_master
-  tags: facts
+  tags:
+    - facts


@@ -9,7 +9,8 @@
     {%- elif ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] -%}
     /etc/ssl/certs/etcd-ca.pem
     {%- endif %}
-  tags: facts
+  tags:
+    - facts
 - name: Gen_certs | add CA to trusted CA dir
   copy:


@@ -6,7 +6,7 @@ After=docker.service
 [Service]
 User=root
 PermissionsStartOnly=true
-EnvironmentFile=/etc/etcd.env
+EnvironmentFile=-/etc/etcd.env
 ExecStart={{ bin_dir }}/etcd
 ExecStartPre=-{{ docker_bin_dir }}/docker rm -f {{ etcd_member_name | default("etcd") }}
 ExecStop={{ docker_bin_dir }}/docker stop {{ etcd_member_name | default("etcd") }}


@@ -11,6 +11,8 @@ LimitNOFILE=40000
 ExecStart=/usr/bin/rkt run \
     --uuid-file-save=/var/run/etcd.uuid \
+    --volume hosts,kind=host,source=/etc/hosts,readOnly=true \
+    --mount volume=hosts,target=/etc/hosts \
     --volume=etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true \
     --mount=volume=etc-ssl-certs,target=/etc/ssl/certs \
    --volume=etcd-cert-dir,kind=host,source={{ etcd_cert_dir }},readOnly=true \


@@ -9,6 +9,7 @@
 {% if etcd_memory_limit is defined %}
   --memory={{ etcd_memory_limit|regex_replace('Mi', 'M') }} \
 {% endif %}
+  --oom-kill-disable \
 {% if etcd_cpu_limit is defined %}
   --cpu-shares={{ etcd_cpu_limit|regex_replace('m', '') }} \
 {% endif %}
@@ -17,7 +18,5 @@
 {% endif %}
   --name={{ etcd_member_name | default("etcd") }} \
   {{ etcd_image_repo }}:{{ etcd_image_tag }} \
-{% if etcd_after_v3 %}
-  {{ etcd_container_bin_dir }}etcd \
-{% endif %}
+  /usr/local/bin/etcd \
 "$@"

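Rendered, the added flag pairs the memory cap with an OOM-killer exemption, so the kernel will not kill etcd when it brushes the limit; the launch line becomes roughly (values illustrative):

    docker run ... --memory=512M --oom-kill-disable ... quay.io/coreos/etcd:v3.2.4 /usr/local/bin/etcd "$@"
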

@@ -1,8 +0,0 @@
---
elrepo_key_url: 'https://www.elrepo.org/RPM-GPG-KEY-elrepo.org'
elrepo_rpm: elrepo-release-7.0-3.el7.elrepo.noarch.rpm
elrepo_mirror: http://www.elrepo.org
elrepo_url: '{{elrepo_mirror}}/{{elrepo_rpm}}'
elrepo_kernel_package: "kernel-lt"


@@ -1,33 +0,0 @@
---
- name: install ELRepo key
rpm_key:
state: present
key: '{{ elrepo_key_url }}'
- name: install elrepo repository
yum:
name: '{{elrepo_url}}'
state: present
- name: upgrade kernel
yum:
name: "{{elrepo_kernel_package}}"
state: present
enablerepo: elrepo-kernel
register: upgrade
- name: change default grub entry
lineinfile:
dest: '/etc/default/grub'
regexp: '^GRUB_DEFAULT=.*'
line: 'GRUB_DEFAULT=0'
when: upgrade.changed
register: grub_entry
- name: re-generate grub-config
command: grub2-mkconfig -o /boot/grub2/grub.cfg
when: upgrade.changed and grub_entry.changed
- include: reboot.yml
when: upgrade.changed


@@ -1,5 +0,0 @@
---
- include: centos-7.yml
when: ansible_distribution in ["CentOS","RedHat"] and
ansible_distribution_major_version >= 7 and not is_atomic


@@ -1,40 +0,0 @@
---
# Rebooting the machine gets more complicated once we want to support bastion hosts. A simple wait_for task would not
# work, as we cannot reach the hosts directly (except the bastion). In case a bastion is used, we first wait for it to
# come back. After it is back, we check for all the other hosts by delegating to the bastion.
- name: Rebooting server
shell: nohup bash -c "sleep 5 && shutdown -r now 'Reboot required for updated kernel'" &
- name: Wait for some seconds
pause:
seconds: 10
- set_fact:
is_bastion: "{{ inventory_hostname == 'bastion' }}"
wait_for_delegate: "localhost"
- set_fact:
wait_for_delegate: "{{hostvars['bastion']['ansible_ssh_host']}}"
when: "'bastion' in groups['all']"
- name: wait for bastion to come back
wait_for:
host: "{{ ansible_ssh_host }}"
port: 22
delay: 10
timeout: 300
become: false
delegate_to: localhost
when: is_bastion
- name: waiting for server to come back (using bastion if necessary)
wait_for:
host: "{{ ansible_ssh_host }}"
port: 22
delay: 10
timeout: 300
become: false
delegate_to: "{{ wait_for_delegate }}"
when: not is_bastion


@@ -1,6 +1,6 @@
 ---
 # Versions
-kubedns_version: 1.14.2
+kubedns_version: 1.14.5
 kubednsautoscaler_version: 1.1.1
 # Limits for dnsmasq/kubedns apps
@@ -40,8 +40,8 @@ netchecker_server_memory_requests: 64M
 # Dashboard
 dashboard_enabled: false
-dashboard_image_repo: kubernetesdashboarddev/kubernetes-dashboard-amd64
-dashboard_image_tag: head
+dashboard_image_repo: gcr.io/google_containers/kubernetes-dashboard-amd64
+dashboard_image_tag: v1.6.3
 # Limits for dashboard
 dashboard_cpu_limit: 100m


@@ -5,7 +5,7 @@
   register: result
   until: result.status == 200
   retries: 10
-  delay: 6
+  delay: 2
   when: inventory_hostname == groups['kube-master'][0]
 - name: Kubernetes Apps | Delete old kubedns resources
@@ -16,7 +16,8 @@
     resource: "{{ item }}"
     state: absent
   with_items: ['deploy', 'svc']
-  tags: upgrade
+  tags:
+    - upgrade
 - name: Kubernetes Apps | Delete kubeadm kubedns
   kube:
@@ -46,7 +47,8 @@
   when:
     - dns_mode != 'none' and inventory_hostname == groups['kube-master'][0]
     - rbac_enabled or item.type not in rbac_resources
-  tags: dnsmasq
+  tags:
+    - dnsmasq
 # see https://github.com/kubernetes/kubernetes/issues/45084, only needed for "old" kube-dns
 - name: Kubernetes Apps | Patch system:kube-dns ClusterRole
@@ -64,7 +66,8 @@
   when:
     - dns_mode != 'none' and inventory_hostname == groups['kube-master'][0]
     - rbac_enabled and kubedns_version|version_compare("1.11.0", "<", strict=True)
-  tags: dnsmasq
+  tags:
+    - dnsmasq
 - name: Kubernetes Apps | Start Resources
   kube:
@@ -79,14 +82,17 @@
     - dns_mode != 'none'
     - inventory_hostname == groups['kube-master'][0]
    - not item|skipped
-  tags: dnsmasq
+  tags:
+    - dnsmasq
 - name: Kubernetes Apps | Netchecker
   include: tasks/netchecker.yml
   when: deploy_netchecker
-  tags: netchecker
+  tags:
+    - netchecker
 - name: Kubernetes Apps | Dashboard
   include: tasks/dashboard.yml
   when: dashboard_enabled
-  tags: dashboard
+  tags:
+    - dashboard


@@ -4,7 +4,9 @@
   stat:
     path: "{{ kube_config_dir }}/netchecker-server-deployment.yml.j2"
   register: netchecker_server_manifest
-  tags: ['facts', 'upgrade']
+  tags:
+    - facts
+    - upgrade
 - name: Kubernetes Apps | Apply netchecker-server manifest to update annotations
   kube:
@@ -15,7 +17,8 @@
     resource: "deploy"
     state: latest
   when: inventory_hostname == groups['kube-master'][0] and netchecker_server_manifest.stat.exists
-  tags: upgrade
+  tags:
+    - upgrade
 - name: Kubernetes Apps | Lay Down Netchecker Template
   template:


@@ -0,0 +1,56 @@
---
- name: Kubernetes Apps | Wait for kube-apiserver
uri:
url: "{{ kube_apiserver_insecure_endpoint }}/healthz"
register: result
until: result.status == 200
retries: 10
delay: 6
when: inventory_hostname == groups['kube-master'][0]
- name: Kubernetes Apps | Add ClusterRoleBinding to admit nodes
template:
src: "node-crb.yml.j2"
dest: "{{ kube_config_dir }}/node-crb.yml"
register: node_crb_manifest
when: rbac_enabled
- name: Apply workaround to allow all nodes with cert O=system:nodes to register
kube:
name: "system:node"
kubectl: "{{bin_dir}}/kubectl"
resource: "clusterrolebinding"
filename: "{{ kube_config_dir }}/node-crb.yml"
state: latest
when:
- rbac_enabled
- node_crb_manifest.changed
# This is not a cluster role, but should be run after kubeconfig is set on master
- name: Write kube system namespace manifest
template:
src: namespace.j2
dest: "{{kube_config_dir}}/{{system_namespace}}-ns.yml"
when: inventory_hostname == groups['kube-master'][0]
tags:
- apps
- name: Check if kube system namespace exists
command: "{{ bin_dir }}/kubectl get ns {{system_namespace}}"
register: 'kubesystem'
changed_when: False
failed_when: False
when: inventory_hostname == groups['kube-master'][0]
tags:
- apps
- name: Create kube system namespace
command: "{{ bin_dir }}/kubectl create -f {{kube_config_dir}}/{{system_namespace}}-ns.yml"
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
register: create_system_ns
until: create_system_ns.rc == 0
changed_when: False
when: inventory_hostname == groups['kube-master'][0] and kubesystem.rc != 0
tags:
- apps


@@ -0,0 +1,17 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:nodes
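
This binding hands ClusterRole system:node to the whole system:nodes group, so any kubelet presenting a certificate with O=system:nodes is admitted even when its hostname and certificate name disagree. To inspect the result once applied (sketch):

    kubectl get clusterrolebinding system:node -o yaml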


@@ -1,7 +1,4 @@
 ---
-dependencies:
-  - role: download
-    file: "{{ downloads.elasticsearch }}"
 # TODO: bradbeam add in curator
 # https://github.com/Skillshare/kubernetes-efk/blob/master/configs/elasticsearch.yml#L94
 # - role: download


@@ -1,4 +0,0 @@
---
dependencies:
- role: download
file: "{{ downloads.fluentd }}"


@@ -1,4 +0,0 @@
---
dependencies:
- role: download
file: "{{ downloads.kibana }}"


@@ -1,6 +0,0 @@
---
dependencies:
- role: download
file: "{{ downloads.helm }}"
- role: download
file: "{{ downloads.tiller }}"


@@ -36,7 +36,7 @@
   when: helm_container.changed
 - name: Helm | Patch tiller deployment for RBAC
-  command: kubectl patch deployment tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}' -n {{ system_namespace }}
+  command: "{{bin_dir}}/kubectl patch deployment tiller-deploy -p '{\"spec\":{\"template\":{\"spec\":{\"serviceAccount\":\"tiller\"}}}}' -n {{ system_namespace }}"
   when: rbac_enabled
 - name: Helm | Set up bash completion


@@ -0,0 +1,32 @@
---
istio_enabled: false
istio_namespace: istio-system
istio_version: "0.2.6"
istioctl_download_url: "https://storage.googleapis.com/istio-release/releases/{{ istio_version }}/istioctl/istioctl-linux"
istioctl_checksum: fd703063c540b8c0ab943f478c05ab257d88ae27224c746a27d0526ddbf7c370
istio_proxy_image_repo: docker.io/istio/proxy
istio_proxy_image_tag: "{{ istio_version }}"
istio_proxy_init_image_repo: docker.io/istio/proxy_init
istio_proxy_init_image_tag: "{{ istio_version }}"
istio_ca_image_repo: docker.io/istio/istio-ca
istio_ca_image_tag: "{{ istio_version }}"
istio_mixer_image_repo: docker.io/istio/mixer
istio_mixer_image_tag: "{{ istio_version }}"
istio_pilot_image_repo: docker.io/istio/pilot
istio_pilot_image_tag: "{{ istio_version }}"
istio_proxy_debug_image_repo: docker.io/istio/proxy_debug
istio_proxy_debug_image_tag: "{{ istio_version }}"
istio_sidecar_initializer_image_repo: docker.io/istio/sidecar_initializer
istio_sidecar_initializer_image_tag: "{{ istio_version }}"
istio_statsd_image_repo: prom/statsd-exporter
istio_statsd_image_tag: latest


@@ -0,0 +1,45 @@
---
- name: istio | Create addon dir
file:
path: "{{ kube_config_dir }}/addons/istio"
owner: root
group: root
mode: 0755
recurse: yes
- name: istio | Lay out manifests
template:
src: "{{item.file}}.j2"
dest: "{{kube_config_dir}}/addons/istio/{{item.file}}"
with_items:
- {name: istio-mixer, file: istio.yml, type: deployment }
- {name: istio-initializer, file: istio-initializer.yml, type: deployment }
register: manifests
when: inventory_hostname == groups['kube-master'][0]
- name: istio | Copy istioctl binary from download dir
command: rsync -piu "{{ local_release_dir }}/istio/istioctl" "{{ bin_dir }}/istioctl"
changed_when: false
- name: istio | Set up bash completion
shell: "{{ bin_dir }}/istioctl completion >/etc/bash_completion.d/istioctl.sh"
when: ansible_os_family in ["Debian","RedHat"]
- name: istio | Set bash completion file
file:
path: /etc/bash_completion.d/istioctl.sh
owner: root
group: root
mode: 0755
when: ansible_os_family in ["Debian","RedHat"]
- name: istio | apply manifests
kube:
name: "{{item.item.name}}"
namespace: "{{ istio_namespace }}"
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/addons/istio/{{item.item.file}}"
state: "latest"
with_items: "{{ manifests.results }}"
when: inventory_hostname == groups['kube-master'][0]


@@ -0,0 +1,84 @@
# GENERATED FILE. Use with Kubernetes 1.7+
# TO UPDATE, modify files in install/kubernetes/templates and run install/updateVersion.sh
################################
# Istio initializer
################################
apiVersion: v1
kind: ConfigMap
metadata:
name: istio-inject
namespace: {{ istio_namespace }}
data:
config: |-
policy: "enabled"
namespaces: [""] # everything, aka v1.NamespaceAll, aka cluster-wide
initializerName: "sidecar.initializer.istio.io"
params:
initImage: {{ istio_proxy_init_image_repo }}:{{ istio_proxy_init_image_tag }}
proxyImage: {{ istio_proxy_image_repo }}:{{ istio_proxy_image_tag }}
verbosity: 2
version: 0.2.6
meshConfigMapName: istio
imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-initializer-service-account
namespace: {{ istio_namespace }}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: istio-initializer
namespace: {{ istio_namespace }}
annotations:
sidecar.istio.io/inject: "false"
initializers:
pending: []
labels:
istio: istio-initializer
spec:
replicas: 1
template:
metadata:
name: istio-initializer
labels:
istio: initializer
annotations:
sidecar.istio.io/inject: "false"
spec:
serviceAccountName: istio-initializer-service-account
containers:
- name: initializer
image: {{ istio_sidecar_initializer_image_repo }}:{{ istio_sidecar_initializer_image_tag }}
imagePullPolicy: IfNotPresent
args:
- --port=8083
- --namespace={{ istio_namespace }}
- -v=2
volumeMounts:
- name: config-volume
mountPath: /etc/istio/config
volumes:
- name: config-volume
configMap:
name: istio
---
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
name: istio-sidecar
initializers:
- name: sidecar.initializer.istio.io
rules:
- apiGroups:
- "*"
apiVersions:
- "*"
resources:
- deployments
- statefulsets
- jobs
- daemonsets
---

File diff suppressed because it is too large


@@ -1,18 +1,27 @@
 ---
 dependencies:
-  - role: download
-    file: "{{ downloads.netcheck_server }}"
-    when: deploy_netchecker
-    tags: [download, netchecker]
-  - role: download
-    file: "{{ downloads.netcheck_agent }}"
-    when: deploy_netchecker
-    tags: [download, netchecker]
-  - {role: kubernetes-apps/ansible, tags: apps}
-  - {role: kubernetes-apps/kpm, tags: [apps, kpm]}
+  - role: kubernetes-apps/ansible
+    tags:
+      - apps
+  - role: kubernetes-apps/kpm
+    tags:
+      - apps
+      - kpm
   - role: kubernetes-apps/efk
     when: efk_enabled
-    tags: [ apps, efk ]
+    tags:
+      - apps
+      - efk
   - role: kubernetes-apps/helm
     when: helm_enabled
-    tags: [ apps, helm ]
+    tags:
+      - apps
+      - helm
+  - role: kubernetes-apps/istio
+    when: istio_enabled
+    tags:
+      - apps
+      - istio

@@ -2,13 +2,20 @@
 dependencies:
   - role: kubernetes-apps/network_plugin/calico
     when: kube_network_plugin == 'calico'
-    tags: calico
+    tags:
+      - calico
   - role: kubernetes-apps/network_plugin/canal
     when: kube_network_plugin == 'canal'
-    tags: canal
+    tags:
+      - canal
   - role: kubernetes-apps/network_plugin/flannel
     when: kube_network_plugin == 'flannel'
-    tags: flannel
+    tags:
+      - flannel
   - role: kubernetes-apps/network_plugin/weave
     when: kube_network_plugin == 'weave'
-    tags: weave
+    tags:
+      - weave


@@ -3,12 +3,15 @@
   set_fact:
     calico_cert_dir: "{{ canal_cert_dir }}"
   when: kube_network_plugin == 'canal'
-  tags: [facts, canal]
+  tags:
+    - facts
+    - canal
 - name: Get calico-policy-controller version if running
   shell: "{{ bin_dir }}/kubectl -n {{ system_namespace }} get rs calico-policy-controller -o=jsonpath='{$.spec.template.spec.containers[:1].image}' | cut -d':' -f2"
   register: existing_calico_policy_version
   run_once: true
+  changed_when: false
   failed_when: false
 # FIXME(mattymo): This should not be necessary


@@ -40,7 +40,7 @@ spec:
             memory: {{ calico_policy_controller_memory_requests }}
         env:
           - name: ETCD_ENDPOINTS
-            value: "{{ etcd_access_endpoint }}"
+            value: "{{ etcd_access_addresses }}"
           - name: ETCD_CA_CERT_FILE
             value: "{{ calico_cert_dir }}/ca_cert.crt"
           - name: ETCD_CERT_FILE
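
etcd_access_addresses expands to the full comma-separated client endpoint list rather than a single endpoint, giving the controller a failover path, e.g. (addresses illustrative):

    ETCD_ENDPOINTS=https://10.0.0.1:2379,https://10.0.0.2:2379,https://10.0.0.3:2379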


@@ -1,14 +1,14 @@
 ---
 dependencies:
-  - role: download
-    file: "{{ downloads.calico_policy }}"
-    when: enable_network_policy and
-          kube_network_plugin in ['calico', 'canal']
-    tags: [download, canal, policy-controller]
   - role: policy_controller/calico
-    when: kube_network_plugin == 'calico' and
-          enable_network_policy
-    tags: policy-controller
+    when:
+      - kube_network_plugin == 'calico'
+      - enable_network_policy
+    tags:
+      - policy-controller
   - role: policy_controller/calico
-    when: kube_network_plugin == 'canal'
-    tags: policy-controller
+    when:
+      - kube_network_plugin == 'canal'
+    tags:
+      - policy-controller


@@ -1,17 +1,28 @@
 ---
-- name: Rotate Tokens | Test if default certificate is expired
-  shell: >-
-    kubectl run -i test-rotate-tokens
-    --image={{ hyperkube_image_repo }}:{{ hyperkube_image_tag }}
-    --restart=Never --rm
-    kubectl get nodes
-  register: check_secret
-  failed_when: false
-  run_once: true
+- name: Rotate Tokens | Get default token name
+  shell: "{{ bin_dir }}/kubectl get secrets -o custom-columns=name:{.metadata.name} --no-headers | grep -m1 default-token"
+  register: default_token
+
+- name: Rotate Tokens | Get default token data
+  command: "{{ bin_dir }}/kubectl get secrets {{ default_token.stdout }} -ojson"
+  register: default_token_data
+  run_once: true
+
+- name: Rotate Tokens | Test if default certificate is expired
+  uri:
+    url: https://{{ kube_apiserver_ip }}/api/v1/nodes
+    method: GET
+    return_content: no
+    validate_certs: no
+    headers:
+      Authorization: "Bearer {{ (default_token_data.stdout|from_json)['data']['token']|b64decode }}"
+  register: check_secret
+  run_once: true
+  failed_when: false
 - name: Rotate Tokens | Determine if certificate is expired
   set_fact:
-    needs_rotation: '{{ "You must be logged in" in check_secret.stderr }}'
+    needs_rotation: '{{ check_secret.status not in [200, 403] }}'
 # FIXME(mattymo): Exclude built in secrets that were automatically rotated,
 # instead of filtering manually
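
The rewritten probe counts 200 and 403 as a live token (403 merely means RBAC denied the list) and anything else, typically 401, as expired. Outside Ansible the same check looks roughly like this (secret name and endpoint illustrative):

    TOKEN=$(kubectl get secret default-token-xxxxx -o jsonpath='{.data.token}' | base64 -d)
    curl -sk -o /dev/null -w '%{http_code}\n' -H "Authorization: Bearer $TOKEN" https://10.233.0.1/api/v1/nodes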


@@ -11,13 +11,12 @@
     {%- else -%}
     https://{{ first_kube_master }}:{{ kube_apiserver_port }}
     {%- endif -%}
-  tags: facts
+  tags:
+    - facts
 - name: Gather certs for admin kubeconfig
   slurp:
     src: "{{ item }}"
-  delegate_to: "{{ groups['kube-master'][0] }}"
-  delegate_facts: no
   register: admin_certs
   with_items:
     - "{{ kube_cert_dir }}/ca.pem"
@@ -29,6 +28,9 @@
   template:
     src: admin.conf.j2
     dest: "{{ kube_config_dir }}/admin.conf"
+    owner: root
+    group: "{{ kube_cert_group }}"
+    mode: 0640
   when: not kubeadm_enabled|d(false)|bool
 - name: Create kube config dir
@@ -51,7 +53,6 @@
     dest: "{{ artifacts_dir }}/admin.conf"
     flat: yes
     validate_checksum: no
-  become: no
   run_once: yes
   when: kubeconfig_localhost|default(false)

Some files were not shown because too many files have changed in this diff