Compare commits


37 Commits

Author SHA1 Message Date
Brad Beam
72a0d78b3c Merge pull request #1585 from mattymo/canal_upgrade
Fix upgrade for canal and apiserver cert
2017-08-29 18:45:21 -05:00
Matthew Mosesohn
13d08af054 Fix upgrade for canal and apiserver cert
Fixes #1573
2017-08-29 22:08:30 +01:00
Brad Beam
80a7ae9845 Merge pull request #1581 from 2ffs2nns/update-calico-version
update calico version
2017-08-29 07:48:44 -05:00
Eric Hoffmann
6c30a7b2eb update calico version
update calico releases link
2017-08-28 16:23:51 -07:00
Matthew Mosesohn
76b72338da Add CNI config for rkt kubelet (#1579) 2017-08-28 21:11:01 +03:00
Chad Swenson
a39e78d42d Initial version of Flannel using CNI (#1486)
* Updates Controller Manager/Kubelet with Flannel's required configuration for CNI
* Removes old Flannel installation
* Install CNI enabled Flannel DaemonSet/ConfigMap/CNI bins and config (with portmap plugin) on host
* Uses RBAC if enabled
* Fixed an issue that could occur if br_netfilter is not a module and net.bridge.bridge-nf-call-iptables sysctl was not set
2017-08-25 10:07:50 +03:00
Brad Beam
4550dccb84 Fixing reference to vault leader url (#1569) 2017-08-24 23:21:39 +03:00
Hassan Zamani
01ce09f343 Add feature_gates var for customizing Kubernetes feature gates (#1520) 2017-08-24 23:18:38 +03:00
Brad Beam
71dca67ca2 Merge pull request #1508 from tmjd/update-calico-2-4-0
Update Calico to 2.4.1 release.
2017-08-24 14:57:29 -05:00
Hans Kristian Flaatten
327f9baccf Update supported component versions in README.md (#1555) 2017-08-24 21:36:53 +03:00
Yuki KIRII
a98b866a66 Verify if br_netfilter module exists (#1492) 2017-08-24 17:47:32 +03:00
Xavier Mehrenberger
3aabba7535 Remove discontinued option --reconcile-cidr if kube_network_plugin=="cloud" (#1568) 2017-08-24 17:01:30 +03:00
Mohamed Mehany
c22cfa255b Added private key file to ssh bastion conf (#1563)
* Added private key file to ssh bastion conf

* Used regular if condition insted of inline conditional
2017-08-24 17:00:45 +03:00
Brad Beam
af211b3d71 Merge pull request #1567 from mattymo/tolerations
Enable scheduling of critical pods and network plugins on master
2017-08-24 08:40:41 -05:00
Matthew Mosesohn
6bb3463e7c Enable scheduling of critical pods and network plugins on master
Added toleration to DNS, netchecker, fluentd, canal, and
calico policy.

Also small fixes to make yamllint pass.
2017-08-24 10:41:17 +01:00
Brad Beam
8b151d12b9 Adding yamllinter to ci steps (#1556)
* Adding yaml linter to ci check

* Minor linting fixes from yamllint

* Changing CI to install python pkgs from requirements.txt

- adding in a secondary requirements.txt for tests
- moving yamllint to tests requirements
2017-08-24 12:09:52 +03:00
Ian Lewis
ecb6dc3679 Register standalone master w/ taints (#1426)
If Kubernetes > 1.6 register standalone master nodes w/ a
node-role.kubernetes.io/master=:NoSchedule taint to allow
for more flexible scheduling rather than just marking unschedulable.
2017-08-23 16:44:11 +03:00
riverzhang
49a223a17d Update elrepo-release rpm version (#1554) 2017-08-23 09:54:51 +03:00
Brad Beam
e5cfdc648c Adding ability to override max ttl (#1559)
Prior this would fail because we didnt set max ttl for vault temp
2017-08-23 09:54:01 +03:00
Erik Stidham
9f9f70aade Update Calico to 2.4.1 release.
- Switched Calico images to be pulled from quay.io
- Updated Canal too
2017-08-21 09:33:12 -05:00
Bogdan Dobrelya
e91c04f586 Merge pull request #1553 from mattymo/kubelet-deployment-doc
Add node to docs about kubelet deployment type changes
2017-08-21 11:42:23 +02:00
Matthew Mosesohn
277fa6c12d Add node to docs about kubelet deployment type changes 2017-08-21 09:13:59 +01:00
Matthew Mosesohn
ca3050ec3d Update to Kubernetes v1.7.3 (#1549)
Change kubelet deploy mode to host
Enable cri and qos per cgroup for kubelet
Update CoreOS images
Add upgrade hook for switching from kubelet deployment from docker to host.
Bump machine type for ubuntu-rkt-sep
2017-08-21 10:53:49 +03:00
Bogdan Dobrelya
1b3ced152b Merge pull request #1544 from bogdando/rpm_spec
[WIP] Support pbr builds and prepare for RPM packaging as the ansible-kubespray artifact
2017-08-21 09:13:59 +02:00
Vijay Katam
97031f9133 Make epel-release install configurable (#1497) 2017-08-20 14:03:10 +03:00
Vijay Katam
c92506e2e7 Add calico variable that enables ignoring Kernel's RPF Setting (#1493) 2017-08-20 14:01:09 +03:00
Kevin Lefevre
65a9772adf Add OpenStack LBaaS support (#1506) 2017-08-20 13:59:15 +03:00
Anton
1e07ee6cc4 etcd_compaction_retention every 8 hour (#1527) 2017-08-20 13:55:48 +03:00
Abdelsalam Abbas
01a130273f fix issues with if condition (#1537) 2017-08-20 13:55:13 +03:00
Miad Abrin
3c710219a1 Fix Some Typos in kubernetes master role (#1547)
* Fix Typo etc3 -> etcd3

* Fix typo in post-upgrade of master. stop -> start
2017-08-20 13:54:28 +03:00
Maxim Krasilnikov
2ba285a544 Fixed deploy cluster with vault cert manager (#1548)
* Added custom ips to etcd vault distributed certificates

* Added custom ips to kube-master vault distributed certificates

* Added comment about issue_cert_copy_ca var in vault/issue_cert role file

* Generate kube-proxy, controller-manager and scheduler certificates by vault

* Revert "Disable vault from CI (#1546)"

This reverts commit 781f31d2b8.

* Fixed upgrade cluster with vault cert manager

* Remove vault dir in reset playbook
2017-08-20 13:53:58 +03:00
Bogdan Dobrelya
668d02846d Align pbr config data with the spec file
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 16:04:48 +02:00
Bogdan Dobrelya
48edf1757b Adjust the rpm spec data
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 14:09:55 +02:00
Bogdan Dobrelya
db121049b3 Move the spec file
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 13:59:27 +02:00
Bogdan Dobrelya
8058cdbc0e Add pbr build configuration
Required for an RPM package builds with the contrib/ansible-kubespray.spec

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 12:56:01 +02:00
Bogdan Dobrelya
31d357284a Update gitignore to prepare for a package build
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 11:58:07 +02:00
Bogdan Dobrelya
4ee77ce026 Add an RPM spec file and customize ansible roles_path
Install roles under /usr/local/share/kubespray/roles,
playbooks - /usr/local/share/kubespray/playbooks/,
ansible.cfg and inventory group vars - into /etc/kubespray.
Ship README and an example inventory as the package docs.
Update the ansible.cfg to consume the roles from the given path,
including virtualenvs prefix, if defined.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 11:54:20 +02:00
155 changed files with 1006 additions and 468 deletions

.gitignore

@@ -8,12 +8,85 @@ temp
 .tox
 .cache
 *.bak
-*.egg-info
-*.pyc
-*.pyo
 *.tfstate
 *.tfstate.backup
 **/*.sw[pon]
 /ssh-bastion.conf
 **/*.sw[pon]
 vagrant/
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+# Distribution / packaging
+.Python
+env/
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+parts/
+sdist/
+var/
+*.egg-info/
+.installed.cfg
+*.egg
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*,cover
+.hypothesis/
+# Translations
+*.mo
+*.pot
+# Django stuff:
+*.log
+local_settings.py
+# Flask stuff:
+instance/
+.webassets-cache
+# Scrapy stuff:
+.scrapy
+# Sphinx documentation
+docs/_build/
+# PyBuilder
+target/
+# IPython Notebook
+.ipynb_checkpoints
+# pyenv
+.python-version
+# dotenv
+.env
+# virtualenv
+venv/
+ENV/


@@ -18,10 +18,7 @@ variables:
# us-west1-a
 before_script:
-- pip install ansible==2.3.0
-- pip install netaddr
-- pip install apache-libcloud==0.20.1
-- pip install boto==2.9.0
+- pip install -r tests/requirements.txt
 - mkdir -p /.ssh
 - cp tests/ansible.cfg .
@@ -59,7 +56,7 @@ before_script:
 RESOLVCONF_MODE: docker_dns
 LOG_LEVEL: "-vv"
 ETCD_DEPLOYMENT: "docker"
-KUBELET_DEPLOYMENT: "docker"
+KUBELET_DEPLOYMENT: "host"
 VAULT_DEPLOYMENT: "docker"
 WEAVE_CPU_LIMIT: "100m"
 AUTHORIZATION_MODES: "{ 'authorization_modes': [] }"
@@ -75,10 +72,7 @@ before_script:
 - $HOME/.cache
 before_script:
 - docker info
-- pip install ansible==2.3.0
-- pip install netaddr
-- pip install apache-libcloud==0.20.1
-- pip install boto==2.9.0
+- pip install -r tests/requirements.txt
 - mkdir -p /.ssh
 - mkdir -p $HOME/.ssh
 - echo $PRIVATE_KEY | base64 -d > $HOME/.ssh/id_rsa
@@ -110,7 +104,7 @@ before_script:
 # Check out latest tag if testing upgrade
 # Uncomment when gitlab kargo repo has tags
 #- test "${UPGRADE_TEST}" != "false" && git fetch --all && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
-- test "${UPGRADE_TEST}" != "false" && git checkout acae0fe4a36bd1d3cd267e72ad01126a72d1458a
+- test "${UPGRADE_TEST}" != "false" && git checkout 72ae7638bcc94c66afa8620dfa4ad9a9249327ea
 # Create cluster
@@ -266,8 +260,9 @@ before_script:
 .coreos_calico_sep_variables: &coreos_calico_sep_variables
 # stage: deploy-gce-part1
 KUBE_NETWORK_PLUGIN: calico
-CLOUD_IMAGE: coreos-stable
+CLOUD_IMAGE: coreos-stable-1465-6-0-v20170817
 CLOUD_REGION: us-west1-b
+CLOUD_MACHINE_TYPE: "n1-standard-2"
 CLUSTER_MODE: separate
 BOOTSTRAP_OS: coreos
 RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
@@ -279,7 +274,6 @@ before_script:
 KUBE_NETWORK_PLUGIN: canal
 CLOUD_IMAGE: ubuntu-1604-xenial
 CLOUD_REGION: europe-west1-b
-CLOUD_MACHINE_TYPE: "n1-standard-2"
 CLUSTER_MODE: ha
 UPGRADE_TEST: "graceful"
 STARTUP_SCRIPT: ""
@@ -297,6 +291,7 @@ before_script:
 KUBE_NETWORK_PLUGIN: flannel
 CLOUD_IMAGE: centos-7
 CLOUD_REGION: us-west1-a
+CLOUD_MACHINE_TYPE: "n1-standard-2"
 CLUSTER_MODE: default
 STARTUP_SCRIPT: ""
@@ -311,7 +306,7 @@ before_script:
 .coreos_canal_variables: &coreos_canal_variables
 # stage: deploy-gce-part2
 KUBE_NETWORK_PLUGIN: canal
-CLOUD_IMAGE: coreos-stable
+CLOUD_IMAGE: coreos-stable-1465-6-0-v20170817
 CLOUD_REGION: us-east1-b
 CLUSTER_MODE: default
 BOOTSTRAP_OS: coreos
@@ -350,7 +345,7 @@ before_script:
 .coreos_alpha_weave_ha_variables: &coreos_alpha_weave_ha_variables
 # stage: deploy-gce-special
 KUBE_NETWORK_PLUGIN: weave
-CLOUD_IMAGE: coreos-alpha-1325-0-0-v20170216
+CLOUD_IMAGE: coreos-alpha-1506-0-0-v20170817
 CLOUD_REGION: us-west1-a
 CLUSTER_MODE: ha-scale
 BOOTSTRAP_OS: coreos
@@ -367,15 +362,14 @@ before_script:
 KUBELET_DEPLOYMENT: rkt
 STARTUP_SCRIPT: ""
-#Note(mattymo): Vault deployment is broken and needs work
-#.ubuntu_vault_sep_variables: &ubuntu_vault_sep_variables
-## stage: deploy-gce-part1
-# KUBE_NETWORK_PLUGIN: canal
-# CERT_MGMT: vault
-# CLOUD_IMAGE: ubuntu-1604-xenial
-# CLOUD_REGION: us-central1-b
-# CLUSTER_MODE: separate
-# STARTUP_SCRIPT: ""
+.ubuntu_vault_sep_variables: &ubuntu_vault_sep_variables
+# stage: deploy-gce-part1
+KUBE_NETWORK_PLUGIN: canal
+CERT_MGMT: vault
+CLOUD_IMAGE: ubuntu-1604-xenial
+CLOUD_REGION: us-central1-b
+CLUSTER_MODE: separate
+STARTUP_SCRIPT: ""
 .ubuntu_flannel_rbac_variables: &ubuntu_flannel_rbac_variables
 # stage: deploy-gce-special
@@ -600,17 +594,16 @@ ubuntu-rkt-sep:
 except: ['triggers']
 only: ['master', /^pr-.*$/]
-#Note(mattymo): Vault deployment is broken (https://github.com/kubernetes-incubator/kubespray/issues/1545)
-#ubuntu-vault-sep:
-# stage: deploy-gce-part1
-# <<: *job
-# <<: *gce
-# variables:
-# <<: *gce_variables
-# <<: *ubuntu_vault_sep_variables
-# when: manual
-# except: ['triggers']
-# only: ['master', /^pr-.*$/]
+ubuntu-vault-sep:
+stage: deploy-gce-part1
+<<: *job
+<<: *gce
+variables:
+<<: *gce_variables
+<<: *ubuntu_vault_sep_variables
+when: manual
+except: ['triggers']
+only: ['master', /^pr-.*$/]
 ubuntu-flannel-rbac-sep:
 stage: deploy-gce-special
@@ -643,6 +636,13 @@ syntax-check:
 - ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root extra_playbooks/upgrade-only-k8s.yml -vvv --syntax-check
 except: ['triggers', 'master']
+yamllint:
+<<: *job
+stage: unit-tests
+script:
+- yamllint roles
+except: ['triggers', 'master']
 tox-inventory-builder:
 stage: unit-tests
 <<: *job

.yamllint (new file)

@@ -0,0 +1,16 @@
---
extends: default
rules:
braces:
min-spaces-inside: 0
max-spaces-inside: 1
brackets:
min-spaces-inside: 0
max-spaces-inside: 1
indentation:
spaces: 2
indent-sequences: consistent
line-length: disable
new-line-at-end-of-file: disable
truthy: disable


@@ -53,13 +53,13 @@ Versions of supported components
 --------------------------------
-[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.6.7 <br>
+[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.7.3 <br>
-[etcd](https://github.com/coreos/etcd/releases) v3.0.17 <br>
+[etcd](https://github.com/coreos/etcd/releases) v3.2.4 <br>
 [flanneld](https://github.com/coreos/flannel/releases) v0.8.0 <br>
-[calicoctl](https://github.com/projectcalico/calico-docker/releases) v0.23.0 <br>
+[calico](https://docs.projectcalico.org/v2.5/releases/) v2.5.0 <br>
 [canal](https://github.com/projectcalico/canal) (given calico/flannel versions) <br>
 [weave](http://weave.works/) v2.0.1 <br>
-[docker](https://www.docker.com/) v1.13.1 (see note)<br>
+[docker](https://www.docker.com/) v1.13 (see note)<br>
 [rkt](https://coreos.com/rkt/docs/latest/) v1.21.0 (see Note 2)<br>
 Note: kubernetes doesn't support newer docker versions. Among other things kubelet currently breaks on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).


@@ -10,3 +10,4 @@ fact_caching_connection = /tmp
 stdout_callback = skippy
 library = ./library
 callback_whitelist = profile_tasks
+roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles
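The `roles_path` added above gives Ansible a colon-separated search list: directories are walked left to right and the first one containing the requested role wins. A minimal shell sketch of that resolution order (the directories here are temporary stand-ins, not the real install paths):

```shell
# Sketch of first-match resolution over a colon-separated search path,
# mirroring how a roles_path lookup behaves. Illustrative only.
find_role() {
  role="$1"; search_path="$2"
  old_ifs="$IFS"; IFS=':'
  for dir in $search_path; do
    if [ -d "$dir/$role" ]; then
      IFS="$old_ifs"
      echo "$dir/$role"
      return 0
    fi
  done
  IFS="$old_ifs"
  return 1
}

tmp="$(mktemp -d)"
mkdir -p "$tmp/roles" "$tmp/share/kubespray/roles/docker"
# "docker" is missing from the first entry, so the packaged copy is found.
find_role docker "$tmp/roles:$tmp/share/kubespray/roles"
```

This is why a role checked out under the working tree's `roles/` shadows the copy installed under `/usr/local/share/kubespray/roles`.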


@@ -9,10 +9,10 @@ if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
 exit 1
 fi
-if [ $(az &>/dev/null) ] ; then
+if az &>/dev/null; then
 echo "azure cli 2.0 found, using it instead of 1.0"
 ./apply-rg_2.sh "$AZURE_RESOURCE_GROUP"
-elif [ $(azure &>/dev/null) ] ; then
+elif azure &>/dev/null; then
 ansible-playbook generate-templates.yml
 azure group deployment create -f ./.generated/network.json -g $AZURE_RESOURCE_GROUP
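The fix in this hunk is worth spelling out: `[ $(az &>/dev/null) ]` tests the command's *captured output*, which is always empty because it was redirected away, so the branch never runs even when `az` exists. Testing the exit status directly is the correct pattern. A self-contained demonstration, with a `fake_az` function standing in for the real `az` binary:

```shell
# Why `if [ $(cmd >/dev/null 2>&1) ]` is always false: the command
# substitution captures stdout, which is empty here, so `[ ]` sees no
# arguments and fails regardless of cmd's exit status.
fake_az() { echo "azure-cli 2.0"; return 0; }

if [ $(fake_az >/dev/null 2>&1) ]; then
  broken="reached"
else
  broken="not reached"   # taken even though fake_az succeeded
fi

if fake_az >/dev/null 2>&1; then
  fixed="reached"        # taken: fake_az exited with status 0
else
  fixed="not reached"
fi

echo "bracket-substitution test: $broken; exit-status test: $fixed"
# prints: bracket-substitution test: not reached; exit-status test: reached
```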


@@ -9,7 +9,7 @@ if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
 exit 1
 fi
-if [ $(az &>/dev/null) ] ; then
+if az &>/dev/null; then
 echo "azure cli 2.0 found, using it instead of 1.0"
 ./clear-rg_2.sh "$AZURE_RESOURCE_GROUP"
 else


@@ -9,9 +9,9 @@ if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
 exit 1
 fi
 # check if azure cli 2.0 exists else use azure cli 1.0
-if [ $(az &>/dev/null) ] ; then
+if az &>/dev/null; then
 ansible-playbook generate-inventory_2.yml -e azure_resource_group="$AZURE_RESOURCE_GROUP"
-elif [ $(azure &>/dev/null) ]; then
+elif azure &>/dev/null; then
 ansible-playbook generate-inventory.yml -e azure_resource_group="$AZURE_RESOURCE_GROUP"
 else
 echo "Azure cli not found"


@@ -0,0 +1,60 @@
%global srcname ansible_kubespray
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
Name: ansible-kubespray
Version: XXX
Release: XXX
Summary: Ansible modules for installing Kubernetes
Group: System Environment/Libraries
License: ASL 2.0
Vendor: Kubespray <smainklh@gmail.com>
Url: https://github.com/kubernetes-incubator/kubespray
Source0: https://github.com/kubernetes-incubator/kubespray/archive/%{upstream_version}.tar.gz
BuildArch: noarch
BuildRequires: git
BuildRequires: python2-devel
BuildRequires: python-setuptools
BuildRequires: python-d2to1
BuildRequires: python-pbr
Requires: ansible
Requires: python-jinja2
Requires: python-netaddr
%description
Ansible-kubespray is a set of Ansible modules and playbooks for
installing a Kubernetes cluster. If you have questions, join us
on the https://slack.k8s.io, channel '#kubespray'.
%prep
%autosetup -n %{name}-%{upstream_version} -S git
%build
%{__python2} setup.py build
%install
export PBR_VERSION=%{version}
export SKIP_PIP_INSTALL=1
%{__python2} setup.py install --skip-build --root %{buildroot}
%files
%doc README.md
%doc inventory/inventory.example
%config /etc/kubespray/ansible.cfg
%config /etc/kubespray/inventory/group_vars/all.yml
%config /etc/kubespray/inventory/group_vars/k8s-cluster.yml
%license LICENSE
%{python2_sitelib}/%{srcname}-%{version}-py%{python2_version}.egg-info
/usr/local/share/kubespray/roles/
/usr/local/share/kubespray/playbooks/
%defattr(-,root,root)
%changelog


@@ -161,3 +161,11 @@ Cloud providers configuration
 =============================
 Please refer to the official documentation, for example [GCE configuration](http://docs.projectcalico.org/v1.5/getting-started/docker/installation/gce) requires a security rule for calico ip-ip tunnels. Note, calico is always configured with ``ipip: true`` if the cloud provider was defined.
+##### Optional : Ignore kernel's RPF check setting
+By default the felix agent(calico-node) will abort if the Kernel RPF setting is not 'strict'. If you want Calico to ignore the Kernel setting:
+```
+calico_node_ignorelooserpf: true
+```
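For context on the setting the new doc section describes: the kernel's reverse-path-filter mode is exposed per interface under `/proc/sys/net/ipv4/conf/*/rp_filter`, and its three values are what felix checks at startup. A small illustrative sketch (the value-to-name mapping is standard Linux behavior; the check itself is a stand-in, not felix's actual code):

```shell
# rp_filter values: 0 = disabled, 1 = strict, 2 = loose. Felix refuses
# to start unless it reads strict (1), or unless
# calico_node_ignorelooserpf is set as documented above.
rpf_mode() {
  case "$1" in
    0) echo "disabled" ;;
    1) echo "strict" ;;
    2) echo "loose" ;;
    *) echo "unknown" ;;
  esac
}

# On a real node you would feed in the live value, e.g.:
#   rpf_mode "$(cat /proc/sys/net/ipv4/conf/all/rp_filter)"
rpf_mode 1   # prints "strict" - felix starts normally
rpf_mode 2   # prints "loose"  - felix aborts unless told to ignore it
```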


@@ -23,13 +23,6 @@ ip a show dev flannel.1
 valid_lft forever preferred_lft forever
 ```
-* Docker must be configured with a bridge ip in the flannel subnet.
-```
-ps aux | grep docker
-root 20196 1.7 2.7 1260616 56840 ? Ssl 10:18 0:07 /usr/bin/docker daemon --bip=10.233.16.1/24 --mtu=1450
-```
 * Try to run a container and check its ip address
 ```


@@ -67,6 +67,8 @@ following default cluster paramters:
 OpenStack (default is unset)
 * *kube_hostpath_dynamic_provisioner* - Required for use of PetSets type in
 Kubernetes
+* *kube_feature_gates* - A list of key=value pairs that describe feature gates for
+alpha/experimental Kubernetes features. (defaults is `[]`)
 * *authorization_modes* - A list of [authorization mode](
 https://kubernetes.io/docs/admin/authorization/#using-flags-for-your-authorization-module)
 that the cluster should be configured for. Defaults to `[]` (i.e. no authorization).
@@ -98,6 +100,11 @@ Stack](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/dns-st
 ``--insecure-registry=myregistry.mydomain:5000``
 * *http_proxy/https_proxy/no_proxy* - Proxy variables for deploying behind a
 proxy
+* *kubelet_deployment_type* - Controls which platform to deploy kubelet on.
+Available options are ``host``, ``rkt``, and ``docker``. ``docker`` mode
+is unlikely to work on newer releases. Starting with Kubernetes v1.7
+series, this now defaults to ``host``. Before v1.7, the default was Docker.
+This is because of cgroup [issues](https://github.com/kubernetes/kubernetes/issues/43704).
 * *kubelet_load_modules* - For some things, kubelet needs to load kernel modules. For example,
 dynamic kernel services are needed for mounting persistent volumes into containers. These may not be
 loaded by preinstall kubernetes processes. For example, ceph and rbd backed volumes. Set this variable to


@@ -74,6 +74,14 @@ bin_dir: /usr/local/bin
 #azure_vnet_name:
 #azure_route_table_name:
+## When OpenStack is used, if LBaaSv2 is available you can enable it with the following variables.
+#openstack_lbaas_enabled: True
+#openstack_lbaas_subnet_id: "Neutron subnet ID (not network ID) to create LBaaS VIP"
+#openstack_lbaas_create_monitor: "yes"
+#openstack_lbaas_monitor_delay: "1m"
+#openstack_lbaas_monitor_timeout: "30s"
+#openstack_lbaas_monitor_max_retries: "3"
 ## Set these proxy values in order to update docker daemon to use proxies
 #http_proxy: ""
 #https_proxy: ""


@@ -23,7 +23,7 @@ kube_users_dir: "{{ kube_config_dir }}/users"
 kube_api_anonymous_auth: false
 ## Change this to use another Kubernetes version, e.g. a current beta release
-kube_version: v1.6.7
+kube_version: v1.7.3
 # Where the binaries will be downloaded.
 # Note: ensure that you've enough disk space (about 1G)
@@ -141,7 +141,7 @@ docker_bin_dir: "/usr/bin"
 # Settings for containerized control plane (etcd/kubelet/secrets)
 etcd_deployment_type: docker
-kubelet_deployment_type: docker
+kubelet_deployment_type: host
 cert_management: script
 vault_deployment_type: docker


@@ -1,3 +1,4 @@
+pbr>=1.6
 ansible>=2.3.0
 netaddr
 jinja2>=2.9.6


@@ -16,6 +16,6 @@ Host {{ bastion_ip }}
 ControlPersist 5m
 Host {{ vars['hosts'] }}
-ProxyCommand ssh -W %h:%p {{ real_user }}@{{ bastion_ip }}
+ProxyCommand ssh -W %h:%p {{ real_user }}@{{ bastion_ip }} {% if ansible_ssh_private_key_file is defined %}-i {{ ansible_ssh_private_key_file }}{% endif %}
 StrictHostKeyChecking no
 {% endif %}
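The template change above appends `-i <key>` to the generated ProxyCommand only when `ansible_ssh_private_key_file` is defined. A minimal shell sketch of the same conditional rendering (variable names mirror the Jinja template; the values are placeholders):

```shell
# Build the bastion ProxyCommand line, appending the identity-file flag
# only when a private key path is supplied. Illustrative sketch.
proxy_command() {
  real_user="$1"; bastion_ip="$2"; key_file="$3"
  line="ProxyCommand ssh -W %h:%p ${real_user}@${bastion_ip}"
  if [ -n "$key_file" ]; then
    line="$line -i $key_file"
  fi
  echo "$line"
}

proxy_command ubuntu 203.0.113.10 ""
# prints: ProxyCommand ssh -W %h:%p ubuntu@203.0.113.10
proxy_command ubuntu 203.0.113.10 /home/ubuntu/.ssh/id_rsa
# prints: ProxyCommand ssh -W %h:%p ubuntu@203.0.113.10 -i /home/ubuntu/.ssh/id_rsa
```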


@@ -49,4 +49,3 @@
 pip:
 name: "{{ item }}"
 with_items: "{{pip_python_modules}}"


@@ -27,4 +27,3 @@
 hostname:
 name: "{{inventory_hostname}}"
 when: ansible_hostname == 'localhost'


@@ -6,4 +6,3 @@
 regexp: '^\w+\s+requiretty'
 dest: /etc/sudoers
 state: absent


@@ -4,12 +4,12 @@
 # Max of 4 names is allowed and no more than 256 - 17 chars total
 # (a 2 is reserved for the 'default.svc.' and'svc.')
-#searchdomains:
+# searchdomains:
 # - foo.bar.lc
 # Max of 2 is allowed here (a 1 is reserved for the dns_server)
-#nameservers:
+# nameservers:
 # - 127.0.0.1
 dns_forward_max: 150
 cache_size: 1000


@@ -86,4 +86,3 @@
 port: 53
 timeout: 180
 when: inventory_hostname == groups['kube-node'][0] and groups['kube-node'][0] in ansible_play_hosts


@@ -1,3 +1,4 @@
+---
 # Copyright 2016 The Kubernetes Authors.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
@@ -30,21 +31,23 @@ spec:
 scheduler.alpha.kubernetes.io/critical-pod: ''
 scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
 spec:
+tolerations:
+- effect: NoSchedule
+operator: Exists
 containers:
 - name: autoscaler
 image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.1
 resources:
 requests:
 cpu: "20m"
 memory: "10Mi"
 command:
 - /cluster-proportional-autoscaler
 - --namespace=kube-system
 - --configmap=dnsmasq-autoscaler
 - --target=Deployment/dnsmasq
 # When cluster is using large nodes(with more cores), "coresPerReplica" should dominate.
 # If using small nodes, "nodesPerReplica" should dominate.
 - --default-params={"linear":{"nodesPerReplica":{{ dnsmasq_nodes_per_replica }},"preventSinglePointFailure":true}}
 - --logtostderr=true
 - --v={{ kube_log_level }}


@@ -21,6 +21,9 @@ spec:
 kubernetes.io/cluster-service: "true"
 kubespray/dnsmasq-checksum: "{{ dnsmasq_stat.stat.checksum }}"
 spec:
+tolerations:
+- effect: NoSchedule
+operator: Exists
 containers:
 - name: dnsmasq
 image: "{{ dnsmasq_image_repo }}:{{ dnsmasq_image_tag }}"
@@ -35,7 +38,6 @@ spec:
 capabilities:
 add:
 - NET_ADMIN
-imagePullPolicy: IfNotPresent
 resources:
 limits:
 cpu: {{ dns_cpu_limit }}
@@ -64,4 +66,3 @@ spec:
 hostPath:
 path: /etc/dnsmasq.d-available
 dnsPolicy: Default # Don't use cluster DNS.


@@ -1,3 +1,4 @@
+---
 docker_version: '1.13'
 docker_package_info:


@@ -8,7 +8,7 @@
 - Docker | pause while Docker restarts
 - Docker | wait for docker
-- name : Docker | reload systemd
+- name: Docker | reload systemd
 shell: systemctl daemon-reload
 - name: Docker | reload docker.socket


@@ -3,14 +3,14 @@
 include_vars: "{{ item }}"
 with_first_found:
 - files:
 - "{{ ansible_distribution|lower }}-{{ ansible_distribution_version|lower|replace('/', '_') }}.yml"
 - "{{ ansible_distribution|lower }}-{{ ansible_distribution_release }}.yml"
 - "{{ ansible_distribution|lower }}-{{ ansible_distribution_major_version|lower|replace('/', '_') }}.yml"
 - "{{ ansible_distribution|lower }}.yml"
 - "{{ ansible_os_family|lower }}.yml"
 - defaults.yml
 paths:
 - ../vars
 skip: true
 tags: facts


@@ -48,7 +48,7 @@
 - name: add system search domains to docker options
 set_fact:
 docker_dns_search_domains: "{{ docker_dns_search_domains | union(system_search_domains.stdout.split(' ')|default([])) | unique }}"
 when: system_search_domains.stdout != ""
 - name: check number of nameservers
 fail:


@@ -1,3 +1,3 @@
 [Service]
 Environment="DOCKER_OPTS={{ docker_options | default('') }} \
---iptables={% if kube_network_plugin == 'flannel' %}true{% else %}false{% endif %}"
+--iptables=false"


@@ -1,3 +1,4 @@
+---
 docker_kernel_min_version: '3.10'
 # https://apt.dockerproject.org/repo/dists/debian-wheezy/main/filelist


@@ -1,3 +1,4 @@
+---
 docker_kernel_min_version: '0'
 # versioning: docker-io itself is pinned at docker 1.5


@@ -1,3 +1,4 @@
+---
 docker_kernel_min_version: '0'
 # https://docs.docker.com/engine/installation/linux/fedora/#install-from-a-package


@@ -1,3 +1,4 @@
+---
 docker_kernel_min_version: '0'
 # https://yum.dockerproject.org/repo/main/centos/7/Packages/
@@ -8,7 +9,7 @@ docker_versioned_pkg:
 '1.12': docker-engine-1.12.6-1.el7.centos
 '1.13': docker-engine-1.13.1-1.el7.centos
 'stable': docker-engine-17.03.0.ce-1.el7.centos
 'edge': docker-engine-17.03.0.ce-1.el7.centos
 # https://docs.docker.com/engine/installation/linux/centos/#install-from-a-package
 # https://download.docker.com/linux/centos/7/x86_64/stable/Packages/


@@ -18,15 +18,17 @@ download_localhost: False
 download_always_pull: False
 # Versions
-kube_version: v1.6.7
+kube_version: v1.7.3
 etcd_version: v3.2.4
-#TODO(mattymo): Move calico versions to roles/network_plugins/calico/defaults
+# TODO(mattymo): Move calico versions to roles/network_plugins/calico/defaults
 # after migration to container download
-calico_version: "v1.1.3"
-calico_cni_version: "v1.8.0"
-calico_policy_version: "v0.5.4"
+calico_version: "v2.5.0"
+calico_ctl_version: "v1.5.0"
+calico_cni_version: "v1.10.0"
+calico_policy_version: "v0.7.0"
 weave_version: 2.0.1
-flannel_version: v0.8.0
+flannel_version: "v0.8.0"
+flannel_cni_version: "v0.2.0"
 pod_infra_version: 3.0
 # Download URL's
@@ -42,13 +44,15 @@ etcd_image_repo: "quay.io/coreos/etcd"
 etcd_image_tag: "{{ etcd_version }}"
 flannel_image_repo: "quay.io/coreos/flannel"
 flannel_image_tag: "{{ flannel_version }}"
-calicoctl_image_repo: "calico/ctl"
-calicoctl_image_tag: "{{ calico_version }}"
-calico_node_image_repo: "calico/node"
+flannel_cni_image_repo: "quay.io/coreos/flannel-cni"
+flannel_cni_image_tag: "{{ flannel_cni_version }}"
+calicoctl_image_repo: "quay.io/calico/ctl"
+calicoctl_image_tag: "{{ calico_ctl_version }}"
+calico_node_image_repo: "quay.io/calico/node"
 calico_node_image_tag: "{{ calico_version }}"
-calico_cni_image_repo: "calico/cni"
+calico_cni_image_repo: "quay.io/calico/cni"
 calico_cni_image_tag: "{{ calico_cni_version }}"
-calico_policy_image_repo: "calico/kube-policy-controller"
+calico_policy_image_repo: "quay.io/calico/kube-policy-controller"
 calico_policy_image_tag: "{{ calico_policy_version }}"
 calico_rr_image_repo: "quay.io/calico/routereflector"
 calico_rr_image_tag: "v0.3.0"
@@ -137,6 +141,12 @@ downloads:
 tag: "{{ flannel_image_tag }}"
 sha256: "{{ flannel_digest_checksum|default(None) }}"
 enabled: "{{ kube_network_plugin == 'flannel' or kube_network_plugin == 'canal' }}"
+flannel_cni:
+container: true
+repo: "{{ flannel_cni_image_repo }}"
+tag: "{{ flannel_cni_image_tag }}"
+sha256: "{{ flannel_cni_digest_checksum|default(None) }}"
+enabled: "{{ kube_network_plugin == 'flannel' }}"
 calicoctl:
 container: true
 repo: "{{ calicoctl_image_repo }}"
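With the image repos above now pinned to `quay.io`, clusters pulling through a private mirror can override these defaults in inventory group vars. A hedged sketch (variable names come from the defaults in the diff; the `registry.example.local` host is purely illustrative):

```yaml
# group_vars/k8s-cluster.yml -- hypothetical override for an internal mirror.
# Only the registry prefix changes; tags still come from the *_version vars.
calico_node_image_repo: "registry.example.local/calico/node"
calico_cni_image_repo: "registry.example.local/calico/cni"
flannel_cni_image_repo: "registry.example.local/coreos/flannel-cni"
flannel_cni_version: "v0.2.0"
```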


@@ -111,7 +111,7 @@
 - download.enabled|bool
 - download.container|bool
-#NOTE(bogdando) this brings no docker-py deps for nodes
+# NOTE(bogdando) this brings no docker-py deps for nodes
 - name: Download containers if pull is required or told to always pull
 command: "{{ docker_bin_dir }}/docker pull {{ pull_args }}"
 register: pull_task_result


@@ -21,8 +21,8 @@ etcd_metrics: "basic"
 etcd_memory_limit: 512M
 # Uncomment to set CPU share for etcd
-#etcd_cpu_limit: 300m
+# etcd_cpu_limit: 300m
 etcd_node_cert_hosts: "{{ groups['k8s-cluster'] | union(groups.get('calico-rr', [])) }}"
-etcd_compaction_retention: "0"
+etcd_compaction_retention: "8"
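The change above turns on periodic etcd compaction (8 hours of retained key history instead of none). These knobs can be tuned per cluster in inventory vars; a sketch using the variable names from the defaults above (the values are examples, not recommendations):

```yaml
# Hypothetical inventory override for etcd tuning.
etcd_memory_limit: "1024M"      # raise the etcd container memory cap
etcd_compaction_retention: "1"  # keep only 1 hour of key history
etcd_metrics: "extensive"       # expose detailed etcd metrics
```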


@@ -43,4 +43,3 @@
 ETCDCTL_API: 3
 retries: 3
 delay: "{{ retry_stagger | random + 3 }}"


@@ -30,4 +30,3 @@
 - name: set etcd_secret_changed
 set_fact:
 etcd_secret_changed: true


@@ -66,4 +66,3 @@
 {%- set _ = certs.update({'sync': True}) -%}
 {% endif %}
 {{ certs.sync }}


@@ -73,11 +73,10 @@
 'member-{{ node }}-key.pem',
 {% endfor %}]"
 my_master_certs: ['ca-key.pem',
 'admin-{{ inventory_hostname }}.pem',
 'admin-{{ inventory_hostname }}-key.pem',
 'member-{{ inventory_hostname }}.pem',
-'member-{{ inventory_hostname }}-key.pem'
-]
+'member-{{ inventory_hostname }}-key.pem']
 all_node_certs: "['ca.pem',
 {% for node in (groups['k8s-cluster'] + groups['calico-rr']|default([]))|unique %}
 'node-{{ node }}.pem',
@@ -111,22 +110,22 @@
 sync_certs|default(false) and inventory_hostname not in groups['etcd']
 notify: set etcd_secret_changed
-#NOTE(mattymo): Use temporary file to copy master certs because we have a ~200k
-#char limit when using shell command
-#FIXME(mattymo): Use tempfile module in ansible 2.3
+# NOTE(mattymo): Use temporary file to copy master certs because we have a ~200k
+# char limit when using shell command
+# FIXME(mattymo): Use tempfile module in ansible 2.3
 - name: Gen_certs | Prepare tempfile for unpacking certs
 shell: mktemp /tmp/certsXXXXX.tar.gz
 register: cert_tempfile
 when: inventory_hostname in groups['etcd'] and sync_certs|default(false) and
 inventory_hostname != groups['etcd'][0]
 - name: Gen_certs | Write master certs to tempfile
 copy:
 content: "{{etcd_master_cert_data.stdout}}"
 dest: "{{cert_tempfile.stdout}}"
 owner: root
 mode: "0600"
 when: inventory_hostname in groups['etcd'] and sync_certs|default(false) and
 inventory_hostname != groups['etcd'][0]


@@ -7,7 +7,6 @@
 when: inventory_hostname in etcd_node_cert_hosts
 tags: etcd-secrets
 - name: gen_certs_vault | Read in the local credentials
 command: cat /etc/vault/roles/etcd/userpass
 register: etcd_vault_creds_cat
@@ -33,15 +32,15 @@
 - name: gen_certs_vault | Set fact for vault_client_token
 set_fact:
 vault_client_token: "{{ etcd_vault_login_result.get('json', {}).get('auth', {}).get('client_token') }}"
 run_once: true
 - name: gen_certs_vault | Set fact for Vault API token
 set_fact:
 etcd_vault_headers:
 Accept: application/json
 Content-Type: application/json
 X-Vault-Token: "{{ vault_client_token }}"
 run_once: true
 when: vault_client_token != ""
@@ -58,6 +57,9 @@
 [
 {%- for host in groups.etcd -%}
 "{{ hostvars[host]['ansible_default_ipv4']['address'] }}",
+{%- if hostvars[host]['ip'] is defined -%}
+"{{ hostvars[host]['ip'] }}",
+{%- endif -%}
 {%- endfor -%}
 "127.0.0.1","::1"
 ]
@@ -81,6 +83,9 @@
 [
 {%- for host in etcd_node_cert_hosts -%}
 "{{ hostvars[host]['ansible_default_ipv4']['address'] }}",
+{%- if hostvars[host]['ip'] is defined -%}
+"{{ hostvars[host]['ip'] }}",
+{%- endif -%}
 {%- endfor -%}
 "127.0.0.1","::1"
 ]
@@ -90,5 +95,3 @@
 with_items: "{{ etcd_node_certs_needed|d([]) }}"
 when: inventory_hostname in etcd_node_cert_hosts
 notify: set etcd_secret_changed
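The two hunks above append each host's `ip` hostvar, when defined, to the certificate hosts list, so Vault-issued etcd certs stay valid for hosts reached on a secondary address. A sketch of where that hostvar would come from in a YAML inventory (hostname and addresses are illustrative only):

```yaml
# Hypothetical YAML inventory fragment; the `ip` hostvar set here is what
# the Jinja loops above add to the etcd certificate SAN list.
all:
  children:
    etcd:
      hosts:
        node1:
          ansible_host: 203.0.113.10   # address Ansible connects to
          ip: 10.0.0.10                # internal address, included in certs
```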


@@ -1,5 +1,5 @@
 ---
-#Plan A: no docker-py deps
+# Plan A: no docker-py deps
 - name: Install | Copy etcdctl binary from docker container
 command: sh -c "{{ docker_bin_dir }}/docker rm -f etcdctl-binarycopy;
 {{ docker_bin_dir }}/docker create --name etcdctl-binarycopy {{ etcd_image_repo }}:{{ etcd_image_tag }} &&
@@ -12,21 +12,21 @@
 delay: "{{ retry_stagger | random + 3 }}"
 changed_when: false
 #Plan B: looks nicer, but requires docker-py on all hosts:
-#- name: Install | Set up etcd-binarycopy container
+# - name: Install | Set up etcd-binarycopy container
 # docker:
 # name: etcd-binarycopy
 # state: present
 # image: "{{ etcd_image_repo }}:{{ etcd_image_tag }}"
 # when: etcd_deployment_type == "docker"
 #
-#- name: Install | Copy etcdctl from etcd-binarycopy container
+# - name: Install | Copy etcdctl from etcd-binarycopy container
 # command: /usr/bin/docker cp "etcd-binarycopy:{{ etcd_container_bin_dir }}etcdctl" "{{ bin_dir }}/etcdctl"
 # when: etcd_deployment_type == "docker"
 #
-#- name: Install | Clean up etcd-binarycopy container
+# - name: Install | Clean up etcd-binarycopy container
 # docker:
 # name: etcd-binarycopy
 # state: absent
 # image: "{{ etcd_image_repo }}:{{ etcd_image_tag }}"
 # when: etcd_deployment_type == "docker"


@@ -1,3 +1,4 @@
+---
 - name: "Pre-upgrade | check for etcd-proxy unit file"
 stat:
 path: /etc/systemd/system/etcd-proxy.service


@@ -1,7 +1,7 @@
 ---
 - name: Refresh config | Create etcd config file
 template:
-src: etcd.env.yml
+src: etcd.env.j2
 dest: /etc/etcd.env
 notify: restart etcd
 when: is_etcd_master


@@ -1,7 +1,7 @@
 ---
 - name: sync_etcd_master_certs | Create list of master certs needing creation
 set_fact:
 etcd_master_cert_list: >-
 {{ etcd_master_cert_list|default([]) + [
 "admin-" + item + ".pem",
@@ -11,7 +11,7 @@
 run_once: true
 - include: ../../vault/tasks/shared/sync_file.yml
 vars:
 sync_file: "{{ item }}"
 sync_file_dir: "{{ etcd_cert_dir }}"
 sync_file_hosts: "{{ groups.etcd }}"


@@ -1,12 +1,12 @@
 ---
 - name: sync_etcd_node_certs | Create list of node certs needing creation
 set_fact:
 etcd_node_cert_list: "{{ etcd_node_cert_list|default([]) + ['node-' + item + '.pem'] }}"
 with_items: "{{ etcd_node_cert_hosts }}"
 - include: ../../vault/tasks/shared/sync_file.yml
 vars:
 sync_file: "{{ item }}"
 sync_file_dir: "{{ etcd_cert_dir }}"
 sync_file_hosts: "{{ etcd_node_cert_hosts }}"
@@ -24,7 +24,7 @@
 sync_file_results: []
 - include: ../../vault/tasks/shared/sync_file.yml
 vars:
 sync_file: ca.pem
 sync_file_dir: "{{ etcd_cert_dir }}"
 sync_file_hosts: "{{ etcd_node_cert_hosts }}"


@@ -1,9 +1,8 @@
 ---
 elrepo_key_url: 'https://www.elrepo.org/RPM-GPG-KEY-elrepo.org'
-elrepo_rpm : elrepo-release-7.0-2.el7.elrepo.noarch.rpm
-elrepo_mirror : http://www.elrepo.org
-elrepo_url : '{{elrepo_mirror}}/{{elrepo_rpm}}'
+elrepo_rpm: elrepo-release-7.0-3.el7.elrepo.noarch.rpm
+elrepo_mirror: http://www.elrepo.org
+elrepo_url: '{{elrepo_mirror}}/{{elrepo_rpm}}'
 elrepo_kernel_package: "kernel-lt"


@@ -1,5 +1,6 @@
+---
 # Versions
-kubedns_version : 1.14.2
+kubedns_version: 1.14.2
 kubednsautoscaler_version: 1.1.1
 # Limits for dnsmasq/kubedns apps


@@ -14,12 +14,12 @@
 dest: "{{kube_config_dir}}/{{item.file}}"
 with_items:
 - {name: kubedns, file: kubedns-sa.yml, type: sa}
-- {name: kubedns, file: kubedns-deploy.yml, type: deployment}
+- {name: kubedns, file: kubedns-deploy.yml.j2, type: deployment}
 - {name: kubedns, file: kubedns-svc.yml, type: svc}
 - {name: kubedns-autoscaler, file: kubedns-autoscaler-sa.yml, type: sa}
 - {name: kubedns-autoscaler, file: kubedns-autoscaler-clusterrole.yml, type: clusterrole}
 - {name: kubedns-autoscaler, file: kubedns-autoscaler-clusterrolebinding.yml, type: clusterrolebinding}
-- {name: kubedns-autoscaler, file: kubedns-autoscaler.yml, type: deployment}
+- {name: kubedns-autoscaler, file: kubedns-autoscaler.yml.j2, type: deployment}
 register: manifests
 when:
 - dns_mode != 'none' and inventory_hostname == groups['kube-master'][0]


@@ -1,3 +1,4 @@
+---
 - name: Kubernetes Apps | Lay Down Netchecker Template
 template:
 src: "{{item.file}}"
@@ -24,7 +25,7 @@
 state: absent
 when: inventory_hostname == groups['kube-master'][0]
-#FIXME: remove if kubernetes/features#124 is implemented
+# FIXME: remove if kubernetes/features#124 is implemented
 - name: Kubernetes Apps | Purge old Netchecker daemonsets
 kube:
 name: "{{item.item.name}}"


@@ -1,3 +1,4 @@
+---
 # Copyright 2016 The Kubernetes Authors. All rights reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,3 +1,4 @@
+---
 # Copyright 2016 The Kubernetes Authors. All rights reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,3 +1,4 @@
+---
 # Copyright 2016 The Kubernetes Authors. All rights reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,3 +1,4 @@
+---
 # Copyright 2016 The Kubernetes Authors.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
@@ -28,24 +29,28 @@ spec:
 k8s-app: kubedns-autoscaler
 annotations:
 scheduler.alpha.kubernetes.io/critical-pod: ''
-scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
 spec:
 containers:
 - name: autoscaler
 image: "{{ kubednsautoscaler_image_repo }}:{{ kubednsautoscaler_image_tag }}"
+tolerations:
+- effect: NoSchedule
+operator: Exists
+- effect: CriticalAddonsOnly
+operator: exists
 resources:
 requests:
 cpu: "20m"
 memory: "10Mi"
 command:
 - /cluster-proportional-autoscaler
 - --namespace={{ system_namespace }}
 - --configmap=kubedns-autoscaler
 # Should keep target in sync with cluster/addons/dns/kubedns-controller.yaml.base
 - --target=Deployment/kube-dns
 - --default-params={"linear":{"nodesPerReplica":{{ kubedns_nodes_per_replica }},"min":{{ kubedns_min_replicas }}}}
 - --logtostderr=true
 - --v=2
 {% if rbac_enabled %}
 serviceAccountName: cluster-proportional-autoscaler
 {% endif %}


@@ -1,3 +1,4 @@
+---
 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
@@ -29,6 +30,8 @@ spec:
 tolerations:
 - key: "CriticalAddonsOnly"
 operator: "Exists"
+- effect: NoSchedule
+operator: Exists
 volumes:
 - name: kube-dns-config
 configMap:


@@ -1,3 +1,4 @@
+---
 apiVersion: v1
 kind: ServiceAccount
 metadata:


@@ -1,3 +1,4 @@
+---
 apiVersion: v1
 kind: Service
 metadata:
@@ -19,4 +20,3 @@ spec:
 - name: dns-tcp
 port: 53
 protocol: TCP


@@ -12,6 +12,9 @@ spec:
 labels:
 app: netchecker-agent
 spec:
+tolerations:
+- effect: NoSchedule
+operator: Exists
 containers:
 - name: netchecker-agent
 image: "{{ agent_img }}"


@@ -16,6 +16,9 @@ spec:
 {% if kube_version | version_compare('v1.6', '>=') %}
 dnsPolicy: ClusterFirstWithHostNet
 {% endif %}
+tolerations:
+- effect: NoSchedule
+operator: Exists
 containers:
 - name: netchecker-agent
 image: "{{ agent_img }}"


@@ -1,5 +1,5 @@
 ---
 elasticsearch_cpu_limit: 1000m
 elasticsearch_mem_limit: 0M
 elasticsearch_cpu_requests: 100m
 elasticsearch_mem_requests: 0M


@@ -1,3 +1,4 @@
+---
 dependencies:
 - role: download
 file: "{{ downloads.elasticsearch }}"


@@ -38,4 +38,3 @@
 command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/elasticsearch-service.yaml -n {{ system_namespace }}"
 run_once: true
 when: es_service_manifest.changed


@@ -1,3 +1,4 @@
+---
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1beta1
 metadata:


@@ -1,3 +1,4 @@
+---
 apiVersion: v1
 kind: ServiceAccount
 metadata:


@@ -1,5 +1,5 @@
 ---
 fluentd_cpu_limit: 0m
 fluentd_mem_limit: 200Mi
 fluentd_cpu_requests: 100m
 fluentd_mem_requests: 200Mi


@@ -1,3 +1,4 @@
+---
 dependencies:
 - role: download
 file: "{{ downloads.fluentd }}"


@@ -20,4 +20,3 @@
 command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/fluentd-ds.yaml -n {{ system_namespace }}"
 run_once: true
 when: fluentd_ds_manifest.changed


@@ -17,6 +17,9 @@ spec:
 kubernetes.io/cluster-service: "true"
 version: "v{{ fluentd_version }}"
 spec:
+tolerations:
+- effect: NoSchedule
+operator: Exists
 containers:
 - name: fluentd-es
 image: "{{ fluentd_image_repo }}:{{ fluentd_image_tag }}"


@@ -1,5 +1,5 @@
 ---
 kibana_cpu_limit: 100m
 kibana_mem_limit: 0M
 kibana_cpu_requests: 100m
 kibana_mem_requests: 0M


@@ -1,3 +1,4 @@
+---
 dependencies:
 - role: download
 file: "{{ downloads.kibana }}"


@@ -1,6 +1,6 @@
 ---
 - name: "Kibana | Write Kibana deployment"
 template:
 src: kibana-deployment.yml.j2
 dest: "{{ kube_config_dir }}/kibana-deployment.yaml"
 register: kibana_deployment_manifest
@@ -17,7 +17,7 @@
 run_once: true
 - name: "Kibana | Write Kibana service "
 template:
 src: kibana-service.yml.j2
 dest: "{{ kube_config_dir }}/kibana-service.yaml"
 register: kibana_service_manifest


@@ -1,3 +1,4 @@
+---
 dependencies:
 - role: kubernetes-apps/efk/elasticsearch
 - role: kubernetes-apps/efk/fluentd


@@ -1,3 +1,4 @@
+---
 helm_enabled: false
 # specify a dir and attach it to helm for HELM_HOME.


@@ -1,3 +1,4 @@
+---
 dependencies:
 - role: download
 file: "{{ downloads.helm }}"


@@ -1,3 +1,4 @@
+---
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1beta1
 metadata:


@@ -1,3 +1,4 @@
+---
 apiVersion: v1
 kind: ServiceAccount
 metadata:


@@ -1,3 +1,4 @@
+---
 dependencies:
 - role: download
 file: "{{ downloads.netcheck_server }}"


@@ -1,3 +1,4 @@
+---
 - name: Create canal ConfigMap
 run_once: true
 kube:
@@ -7,18 +8,6 @@
 resource: "configmap"
 namespace: "{{system_namespace}}"
-#FIXME: remove if kubernetes/features#124 is implemented
-- name: Purge old flannel and canal-node
-run_once: true
-kube:
-name: "canal-node"
-kubectl: "{{ bin_dir }}/kubectl"
-filename: "{{ kube_config_dir }}/canal-node.yaml"
-resource: "ds"
-namespace: "{{system_namespace}}"
-state: absent
-when: inventory_hostname == groups['kube-master'][0] and canal_node_manifest.changed
 - name: Start flannel and calico-node
 run_once: true
 kube:
@@ -29,4 +18,3 @@
 namespace: "{{system_namespace}}"
 state: "{{ item | ternary('latest','present') }}"
 with_items: "{{ canal_node_manifest.changed }}"


@@ -0,0 +1,22 @@
---
- name: "Flannel | Create ServiceAccount ClusterRole and ClusterRoleBinding"
command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/cni-flannel-rbac.yml"
run_once: true
when: rbac_enabled and flannel_rbac_manifest.changed
- name: Flannel | Start Resources
kube:
name: "kube-flannel"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/cni-flannel.yml"
resource: "ds"
namespace: "{{system_namespace}}"
state: "{{ item | ternary('latest','present') }}"
with_items: "{{ flannel_manifest.changed }}"
when: inventory_hostname == groups['kube-master'][0]
- name: Flannel | Wait for flannel subnet.env file presence
wait_for:
path: /run/flannel/subnet.env
delay: 5
timeout: 600


@@ -1,8 +1,11 @@
 ---
 dependencies:
 - role: kubernetes-apps/network_plugin/canal
 when: kube_network_plugin == 'canal'
 tags: canal
+- role: kubernetes-apps/network_plugin/flannel
+when: kube_network_plugin == 'flannel'
+tags: flannel
 - role: kubernetes-apps/network_plugin/weave
 when: kube_network_plugin == 'weave'
 tags: weave


@@ -1,4 +1,5 @@
-#FIXME: remove if kubernetes/features#124 is implemented
+---
+# FIXME: remove if kubernetes/features#124 is implemented
 - name: Weave | Purge old weave daemonset
 kube:
 name: "weave-net"
@@ -9,7 +10,6 @@
 state: absent
 when: inventory_hostname == groups['kube-master'][0] and weave_manifest.changed
 - name: Weave | Start Resources
 kube:
 name: "weave-net"
@@ -21,7 +21,6 @@
 with_items: "{{ weave_manifest.changed }}"
 when: inventory_hostname == groups['kube-master'][0]
 - name: "Weave | wait for weave to become available"
 uri:
 url: http://127.0.0.1:6784/status


@@ -1,3 +1,4 @@
+---
 # Limits for calico apps
 calico_policy_controller_cpu_limit: 100m
 calico_policy_controller_memory_limit: 256M


@@ -1,3 +1,4 @@
+---
 - set_fact:
 calico_cert_dir: "{{ canal_cert_dir }}"
 when: kube_network_plugin == 'canal'


@@ -21,6 +21,9 @@ spec:
 k8s-app: calico-policy
 spec:
 hostNetwork: true
+tolerations:
+- effect: NoSchedule
+operator: Exists
 containers:
 - name: calico-policy-controller
 image: {{ calico_policy_image_repo }}:{{ calico_policy_image_tag }}


@@ -1,3 +1,4 @@
---
# An experimental dev/test only dynamic volumes provisioner, # An experimental dev/test only dynamic volumes provisioner,
# for PetSets. Works for kube>=v1.3 only. # for PetSets. Works for kube>=v1.3 only.
kube_hostpath_dynamic_provisioner: "false" kube_hostpath_dynamic_provisioner: "false"
@@ -52,14 +53,14 @@ kube_oidc_auth: false
 ## Variables for OpenID Connect Configuration https://kubernetes.io/docs/admin/authentication/
 ## To use OpenID you have to deploy additional an OpenID Provider (e.g Dex, Keycloak, ...)
-#kube_oidc_url: https:// ...
+# kube_oidc_url: https:// ...
 # kube_oidc_client_id: kubernetes
 ## Optional settings for OIDC
 # kube_oidc_ca_file: {{ kube_cert_dir }}/ca.pem
 # kube_oidc_username_claim: sub
 # kube_oidc_groups_claim: groups
-##Variables for custom flags
+## Variables for custom flags
 apiserver_custom_flags: []
 controller_mgr_custom_flags: []
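
Note: the custom-flags variables accept either a single string or a list of flags. A minimal sketch of how a user might set them in inventory (the flag values below are illustrative, not project defaults):

```yaml
# group_vars/k8s-cluster.yml (illustrative values)
apiserver_custom_flags:
  - "--runtime-config=batch/v2alpha1"
controller_mgr_custom_flags: "--node-monitor-grace-period=40s"
```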

View File

@@ -88,4 +88,3 @@
 - include: post-upgrade.yml
   tags: k8s-post-upgrade
-

View File

@@ -13,7 +13,7 @@
     seconds: 10
   when: needs_etcd_migration|bool
-- name: "Post-upgrade | stop kubelet on all masters"
+- name: "Post-upgrade | start kubelet on all masters"
   service:
     name: kubelet
     state: started

View File

@@ -42,7 +42,7 @@
   when: kube_apiserver_storage_backend == "etcd3"
   failed_when: false
-- name: "Pre-upgrade | etcd3 upgrade | use etcd2 unless forced to etc3"
+- name: "Pre-upgrade | etcd3 upgrade | use etcd2 unless forced to etcd3"
   set_fact:
     kube_apiserver_storage_backend: "etcd2"
   when: old_data_exists.rc == 0 and not force_etcd3|bool

View File

@@ -84,6 +84,9 @@ spec:
 {% if authorization_modes %}
    - --authorization-mode={{ authorization_modes|join(',') }}
 {% endif %}
+{% if kube_feature_gates %}
+   - --feature-gates={{ kube_feature_gates|join(',') }}
+{% endif %}
 {% if apiserver_custom_flags is string %}
    - {{ apiserver_custom_flags }}
 {% else %}
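
The new `kube_feature_gates` list is joined with commas into a single `--feature-gates` flag on each component. A hedged example of setting it in inventory (the gate names below are illustrative):

```yaml
# group_vars/k8s-cluster.yml (illustrative gate names)
kube_feature_gates:
  - "Accelerators=true"
  - "TaintBasedEvictions=false"
```

This would render as `--feature-gates=Accelerators=true,TaintBasedEvictions=false`.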

View File

@@ -45,9 +45,15 @@ spec:
    - --cloud-provider={{cloud_provider}}
 {% endif %}
 {% if kube_network_plugin is defined and kube_network_plugin == 'cloud' %}
-   - --allocate-node-cidrs=true
    - --configure-cloud-routes=true
+{% endif %}
+{% if kube_network_plugin is defined and kube_network_plugin in ["cloud", "flannel"] %}
+   - --allocate-node-cidrs=true
    - --cluster-cidr={{ kube_pods_subnet }}
+   - --service-cluster-ip-range={{ kube_service_addresses }}
+{% endif %}
+{% if kube_feature_gates %}
+   - --feature-gates={{ kube_feature_gates|join(',') }}
 {% endif %}
 {% if controller_mgr_custom_flags is string %}
    - {{ controller_mgr_custom_flags }}

View File

@@ -27,6 +27,9 @@ spec:
    - --leader-elect=true
    - --kubeconfig={{ kube_config_dir }}/kube-scheduler-kubeconfig.yaml
    - --v={{ kube_log_level }}
+{% if kube_feature_gates %}
+   - --feature-gates={{ kube_feature_gates|join(',') }}
+{% endif %}
 {% if scheduler_custom_flags is string %}
    - {{ scheduler_custom_flags }}
 {% else %}

View File

@@ -1,5 +1,6 @@
+---
 # Valid options: docker (default), rkt, or host
-kubelet_deployment_type: docker
+kubelet_deployment_type: host
 # change to 0.0.0.0 to enable insecure access from anywhere (not recommended)
 kube_apiserver_insecure_bind_address: 127.0.0.1
@@ -15,8 +16,8 @@ kube_proxy_masquerade_all: false
 # These options reflect limitations of running kubelet in a container.
 # Modify at your own risk
-kubelet_enable_cri: false
-kubelet_cgroups_per_qos: false
+kubelet_enable_cri: true
+kubelet_cgroups_per_qos: true
 # Set to empty to avoid cgroup creation
 kubelet_enforce_node_allocatable: "\"\""
@@ -49,7 +50,7 @@ kube_apiserver_node_port_range: "30000-32767"
 kubelet_load_modules: false
-##Support custom flags to be passed to kubelet
+## Support custom flags to be passed to kubelet
 kubelet_custom_flags: []
 # This setting is used for rkt based kubelet for deploying hyperkube
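
Like the control-plane flag variables, `kubelet_custom_flags` accepts a string or a list; each entry is appended verbatim to `KUBELET_ARGS`. A sketch of a user-supplied value (the eviction threshold is illustrative):

```yaml
# group_vars/k8s-cluster.yml (illustrative value)
kubelet_custom_flags:
  - "--eviction-hard=memory.available<100Mi"
```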

View File

@@ -21,4 +21,3 @@
     dest: "/etc/systemd/system/kubelet.service"
     backup: "yes"
   notify: restart kubelet
-

View File

@@ -20,8 +20,8 @@
     path: /var/lib/kubelet
 - name: Create kubelet service systemd directory
   file:
     path: /etc/systemd/system/kubelet.service.d
     state: directory
 - name: Write kubelet proxy drop-in
@@ -30,4 +30,3 @@
     dest: /etc/systemd/system/kubelet.service.d/http-proxy.conf
   when: http_proxy is defined or https_proxy is defined or no_proxy is defined
   notify: restart kubelet
-

View File

@@ -4,3 +4,8 @@
   args:
     creates: "/var/lib/cni"
   failed_when: false
+
+- name: "Pre-upgrade | ensure kubelet container is stopped if using host deployment"
+  command: docker stop kubelet
+  failed_when: false
+  when: kubelet_deployment_type == "host"

View File

@@ -36,8 +36,14 @@ KUBELET_HOSTNAME="--hostname-override={{ kube_override_hostname }}"
 {% set kubelet_args_kubeconfig %}--kubeconfig={{ kube_config_dir}}/node-kubeconfig.yaml --require-kubeconfig{% endset %}
 {% if standalone_kubelet|bool %}
 {# We are on a master-only host. Make the master unschedulable in this case. #}
+{% if kube_version | version_compare('v1.6', '>=') %}
+{# Set taints on the master so that it's unschedulable by default. Use node-role.kubernetes.io/master taint like kubeadm. #}
+{% set kubelet_args_kubeconfig %}{{ kubelet_args_kubeconfig }} --register-with-taints=node-role.kubernetes.io/master=:NoSchedule{% endset %}
+{% else %}
+{# --register-with-taints was added in 1.6 so just register unschedulable if Kubernetes < 1.6 #}
 {% set kubelet_args_kubeconfig %}{{ kubelet_args_kubeconfig }} --register-schedulable=false{% endset %}
 {% endif %}
+{% endif %}
 {# Kubelet node labels #}
 {% if inventory_hostname in groups['kube-master'] %}
@@ -49,14 +55,13 @@ KUBELET_HOSTNAME="--hostname-override={{ kube_override_hostname }}"
 {% set node_labels %}--node-labels=node-role.kubernetes.io/node=true{% endset %}
 {% endif %}
-KUBELET_ARGS="{{ kubelet_args_base }} {{ kubelet_args_dns }} {{ kubelet_args_kubeconfig }} {{ node_labels }} {% if kubelet_custom_flags is string %} {{kubelet_custom_flags}} {% else %}{% for flag in kubelet_custom_flags %} {{flag}} {% endfor %}{% endif %}"
+KUBELET_ARGS="{{ kubelet_args_base }} {{ kubelet_args_dns }} {{ kubelet_args_kubeconfig }} {{ node_labels }} {% if kube_feature_gates %} --feature-gates={{ kube_feature_gates|join(',') }} {% endif %} {% if kubelet_custom_flags is string %} {{kubelet_custom_flags}} {% else %}{% for flag in kubelet_custom_flags %} {{flag}} {% endfor %}{% endif %}"
-{% if kube_network_plugin is defined and kube_network_plugin in ["calico", "weave", "canal"] %}
+{% if kube_network_plugin is defined and kube_network_plugin in ["calico", "canal", "flannel", "weave"] %}
 KUBELET_NETWORK_PLUGIN="--network-plugin=cni --network-plugin-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
 {% elif kube_network_plugin is defined and kube_network_plugin == "weave" %}
 DOCKER_SOCKET="--docker-endpoint=unix:/var/run/weave/weave.sock"
 {% elif kube_network_plugin is defined and kube_network_plugin == "cloud" %}
-# Please note that --reconcile-cidr is deprecated and a no-op in Kubernetes 1.5 but still required in 1.4
-KUBELET_NETWORK_PLUGIN="--hairpin-mode=promiscuous-bridge --network-plugin=kubenet --reconcile-cidr=true"
+KUBELET_NETWORK_PLUGIN="--hairpin-mode=promiscuous-bridge --network-plugin=kubenet"
 {% endif %}
 # Should this cluster be allowed to run privileged docker containers
 KUBE_ALLOW_PRIV="--allow-privileged=true"
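
With masters now registered with the `node-role.kubernetes.io/master=:NoSchedule` taint on kube >= 1.6 (instead of `--register-schedulable=false`), any workload that must still run on masters needs a matching toleration in its pod spec, along the lines of the one added to the calico-policy-controller above. A minimal sketch:

```yaml
# pod spec fragment (sketch): tolerate the master NoSchedule taint
tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Exists"
  effect: "NoSchedule"
```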

View File

@@ -32,7 +32,7 @@ ExecStart=/usr/bin/rkt run \
         --volume var-lib-docker,kind=host,source={{ docker_daemon_graph }},readOnly=false \
         --volume var-lib-kubelet,kind=host,source=/var/lib/kubelet,readOnly=false,recursive=true \
         --volume var-log,kind=host,source=/var/log \
-{% if kube_network_plugin in ["calico", "weave", "canal"] %}
+{% if kube_network_plugin in ["calico", "weave", "canal", "flannel"] %}
         --volume etc-cni,kind=host,source=/etc/cni,readOnly=true \
         --volume opt-cni,kind=host,source=/opt/cni,readOnly=true \
         --volume var-lib-cni,kind=host,source=/var/lib/cni,readOnly=false \

View File

@@ -1,3 +1,4 @@
+---
 - name: Preinstall | restart network
   command: /bin/true
   notify:

View File

@@ -48,5 +48,3 @@
   fail:
     msg: "azure_route_table_name is missing"
   when: azure_route_table_name is not defined or azure_route_table_name == ""
-
-

View File

@@ -1,6 +1,6 @@
 ---
 - include: pre-upgrade.yml
   tags: [upgrade, bootstrap-os]
 - name: Force binaries directory for Container Linux by CoreOS
   set_fact:
@@ -27,14 +27,14 @@
   include_vars: "{{ item }}"
   with_first_found:
     - files:
        - "{{ ansible_distribution|lower }}-{{ ansible_distribution_version|lower|replace('/', '_') }}.yml"
        - "{{ ansible_distribution|lower }}-{{ ansible_distribution_release }}.yml"
        - "{{ ansible_distribution|lower }}-{{ ansible_distribution_major_version|lower|replace('/', '_') }}.yml"
        - "{{ ansible_distribution|lower }}.yml"
        - "{{ ansible_os_family|lower }}.yml"
        - defaults.yml
      paths:
        - ../vars
      skip: true
   tags: facts
@@ -85,7 +85,7 @@
     - "/etc/cni/net.d"
     - "/opt/cni/bin"
   when:
-    - kube_network_plugin in ["calico", "weave", "canal"]
+    - kube_network_plugin in ["calico", "weave", "canal", "flannel"]
     - inventory_hostname in groups['k8s-cluster']
   tags: [network, calico, weave, canal, bootstrap-os]
@@ -128,6 +128,7 @@
   when:
     - ansible_distribution in ["CentOS","RedHat"]
     - not is_atomic
+    - epel_rpm_download_url != ''
   register: epel_task_result
   until: epel_task_result|succeeded
   retries: 4
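
The added `epel_rpm_download_url != ''` condition gives users an escape hatch: clearing the variable skips EPEL installation on CentOS/RedHat hosts entirely. A sketch of the override:

```yaml
# inventory group_vars (sketch): opt out of EPEL installation
epel_rpm_download_url: ''
```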

Some files were not shown because too many files have changed in this diff.