Compare commits

...

51 Commits

Author SHA1 Message Date
Bogdan Dobrelya
5fd2b151b9 Merge pull request #874 from bogdando/fix
Fix docs formatting
2017-01-09 17:57:05 +01:00
Bogdan Dobrelya
3c107ef4dc Fix docs formatting
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-09 17:53:05 +01:00
Bogdan Dobrelya
a5f93d6013 Merge pull request #862 from bogdando/docs
Update docs
2017-01-09 17:43:36 +01:00
Matthew Mosesohn
38338e848d Merge pull request #860 from adidenko/fix-calico-rr-certs
Fix etcd cert generation for calico-rr role
2017-01-09 18:34:02 +03:00
Bogdan Dobrelya
e9518072a8 Update docs
Link docs to README, update README with recent info.
Update comparisons, add kubeadm vs kargo.
Better describe variables precedence UX impact.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-09 16:32:55 +01:00
Bogdan Dobrelya
10dbd0afbd Merge pull request #871 from mattymo/fix_system_search_domains
Fix docker dns host scenario with no search domains
2017-01-09 15:52:12 +01:00
Matthew Mosesohn
1dce56e2f8 Fix docker dns host scenario with no search domains
Fixes scenario where docker-dns.conf tries to create an empty
search entry
2017-01-09 16:36:44 +03:00
Bogdan Dobrelya
1f0b2eac12 Merge pull request #815 from adidenko/calico-1.0.0
Set latest stable versions for Calico images
2017-01-09 13:57:41 +01:00
Aleksandr Didenko
d9539e0f27 Fix etcd cert generation for calico-rr role
"etcd_node_cert_data" variable is undefinded for "calico-rr" role.
This patch adds "calico-rr" nodes to task where "etcd_node_cert_data"
variable is registered.
2017-01-09 12:06:25 +01:00
Aleksandr Didenko
0909368339 Set latest stable versions for Calico images
Change version for calico images to v1.0.0. Also bump versions for
CNI and policy controller.

Also removing images repo and tag duplication from netchecker role
2017-01-09 12:05:49 +01:00
Bogdan Dobrelya
091b634ea1 Merge pull request #799 from kubernetes-incubator/docker_dns
Implement "dockerd --dns-xxx" based dns mode
2017-01-09 11:38:02 +01:00
Bogdan Dobrelya
d18804b0bb Merge pull request #865 from rsmitty/coreos-family-vars
remove assertion for family not being CoreOS
2017-01-09 10:36:13 +01:00
Alexander Block
a8b5b856d1 Only use default resolver in dnsmasq when we are using host_resolvconf mode 2017-01-06 10:21:07 +01:00
Alexander Block
1d2a18b355 Introduce dns_mode and resolvconf_mode and implement docker_dns mode
Also update reset.yml to do more dns/network related cleanup.
2017-01-05 23:38:51 +01:00
Spencer Smith
4a59340182 remove assertion for family not being CoreOS 2017-01-05 13:36:25 -05:00
Spencer Smith
aa33613b98 Merge pull request #863 from bogdando/coreos_facts
[WIP] Better fix for different CoreOS os family facts
2017-01-05 13:22:35 -05:00
Bogdan Dobrelya
96372c15e2 Merge pull request #864 from bogdando/nopreemtible
Non preempt GCE instances for CI
2017-01-05 17:22:20 +01:00
Bogdan Dobrelya
f365b32c60 Non preempt GCE instances for CI
Revert preemptible GCE instances for CI, as they fail too often with
UNREACHABLE. We could return to them later, once we have figured out how
to mitigate preempted instances with automated CI retries.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-05 17:08:57 +01:00
Bogdan Dobrelya
5af2c42bde Better fix for different CoreOS os family facts
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-05 16:32:08 +01:00
Bogdan Dobrelya
c0400e9db5 Merge pull request #861 from bogdando/rename_coreos
Rename CoreOS fact
2017-01-05 14:53:06 +01:00
Bogdan Dobrelya
f7447837c5 Rename CoreOS fact
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-05 14:02:29 +01:00
Bogdan Dobrelya
a4dbee3e38 Merge pull request #859 from bogdando/minor_rkt
Minor fix to rkt version in group vars
2017-01-05 12:14:01 +01:00
Bogdan Dobrelya
fb7899aa06 Minor fix to rkt version in group vars
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-05 11:11:03 +01:00
Bogdan Dobrelya
6d54d9f49a Merge pull request #784 from bradbeam/rkt
rkt support for control plane ( etcd + kubelet )
2017-01-05 10:34:49 +01:00
Bogdan Dobrelya
6546869c42 Merge branch 'master' into rkt 2017-01-05 10:34:18 +01:00
Bogdan Dobrelya
aa79a02f9c Merge pull request #854 from bogdando/pipeline
Fix pipeline premoderation/unit-tests
2017-01-04 18:00:48 +01:00
Bogdan Dobrelya
447febcdd6 Fix pipeline premoderation/unit-tests
Do not run unit-tests for master merges.
Fix the permissive "null" user.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-04 17:52:27 +01:00
Smaine Kahlouch
61732847b6 Merge pull request #853 from bogdando/premoderated_builds
Do not auto-trigger gitlab CI pipeline on PRs
2017-01-04 15:20:50 +01:00
Bogdan Dobrelya
fcd9d97f10 Do not auto-trigger gitlab CI pipeline on PRs
For security and resources utilization reasons, do not auto-start CI
for opened/updated PRs.

A member of the kubernetes-incubator github org has first to approve
that the PR is reasonable to test by putting the "ci check this" into
the PR's comments.

If approved that way, the CI pipeline starts as usual. Only the first step
of the pipeline is premoderated; the rest follow each other on
success.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-04 13:52:31 +01:00
Bogdan Dobrelya
b6b5d52f78 Merge pull request #852 from intelsdi-x/issue_template
Issue template proposal
2017-01-04 13:28:39 +01:00
Brad Beam
4b6f29d5e1 Adding kubelet in rkt 2017-01-03 14:49:48 -06:00
Bogdan Dobrelya
f5d5230034 Merge pull request #843 from bogdando/fix_certs_k8s_apps
Fix cert paths for flannel/calico policy apps
2017-01-03 17:53:07 +01:00
Brad Beam
8dc19374cc Allowing etcd to run via rkt 2017-01-03 10:10:38 -06:00
Brad Beam
a8f2af0503 Adding initial rkt support 2017-01-03 10:08:43 -06:00
Bogdan Dobrelya
d8a2941e9e Fix cert paths for flannel/calico policy apps
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-03 16:12:54 +01:00
Michał Żyłowski
55b6d0bbdd GITHUB: Added issue template file 2017-01-03 16:09:35 +01:00
Bogdan Dobrelya
a3c044b657 Merge pull request #848 from kubernetes-incubator/upgrade_docker_1_12
Upgrade docker version and do some cleanups for unsupported distros/docker versions
2017-01-03 15:39:57 +01:00
Bogdan Dobrelya
4a2abc1a46 Merge pull request #845 from bogdando/docs
Comment cloud providers private networks use cases
2017-01-03 10:50:39 +01:00
Bogdan Dobrelya
410c78f2e5 Merge pull request #849 from intelsdi-x/ansible_version
README: changed minimal ansible version
2017-01-03 10:35:20 +01:00
Michał Żyłowski
3b5830a1cf README: changed minimal ansible version 2017-01-02 20:37:58 +01:00
Alexander Block
ab7df10a7d Upgrade docker version and do some cleanups for unsupported distros/docker versions 2017-01-02 18:05:50 +01:00
Bogdan Dobrelya
93663e987c Merge pull request #847 from bogdando/bug_769
Fix etc hosts for cluster nodes
2017-01-02 17:47:23 +01:00
Bogdan Dobrelya
6114266b84 Merge pull request #846 from bogdando/drop_sysv
Drop non systemd OS types support
2017-01-02 16:51:51 +01:00
Bogdan Dobrelya
97f96a6376 Fix etc hosts for cluster nodes
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-02 13:20:51 +01:00
Bogdan Dobrelya
58062be2a3 Drop non systemd OS types support
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-02 12:14:03 +01:00
Bogdan Dobrelya
5ec4efe88e Merge pull request #814 from swizzlr/patch-1
Add section describing Kargo vs Kops
2016-12-30 13:58:28 +01:00
Bogdan Dobrelya
e02aae71a1 Merge pull request #841 from mattymo/bug832
Fix etcd cert generation to support large deployments
2016-12-30 13:15:20 +01:00
Matthew Mosesohn
1f9f885379 Fix etcd cert generation to support large deployments
Due to bash max args limits, we should pass all node filenames and
base64-encoded tar data through stdin/stdout instead.

Fixes #832
2016-12-30 12:55:26 +03:00
Thomas Catterall
80509673d2 Update README.md 2016-12-29 19:41:34 +00:00
Thomas Catterall
b902110d75 Create comparisons.md 2016-12-29 19:41:11 +00:00
Thomas Catterall
53affb9bc0 Update README.md 2016-12-22 22:46:23 +00:00
91 changed files with 831 additions and 1240 deletions
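The etcd certificate fix above (commit 1f9f885379) relies on streaming data through stdin/stdout instead of argument lists, which sidesteps the kernel's ARG_MAX limit that large inventories were hitting. Purely as an illustration of that general pattern (not the actual Kargo task; host name and paths are placeholders):

```
#!/bin/sh
# Hypothetical sketch: copy many generated certs off a node without ever
# putting file names or file contents into a command's argument list.
HOST=master-1                    # placeholder
CERT_DIR=/etc/ssl/etcd/ssl       # placeholder
mkdir -p ./certs
# tar+base64 writes to stdout on the remote side; the pipe carries the data,
# so no argument-length limit applies no matter how many node certs exist.
ssh "$HOST" "tar -C '$CERT_DIR' -cz . | base64" | base64 -d | tar -xz -C ./certs
```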

.github/ISSUE_TEMPLATE.md vendored Normal file
View File

@@ -0,0 +1,47 @@
<!-- Thanks for filing an issue! Before hitting the button, please answer these questions.-->
**Is this a BUG REPORT or FEATURE REQUEST?** (choose one):
<!--
If this is a BUG REPORT, please:
- Fill in as much of the template below as you can. If you leave out
information, we can't help you as well.
If this is a FEATURE REQUEST, please:
- Describe *in detail* the feature/behavior/change you'd like to see.
In both cases, be ready for followup questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
-->
**Environment**:
- **Cloud provider or hardware configuration:**
- **OS (`printf "$(uname -srm)\n$(cat /etc/os-release)\n"`):**
- **Version of Ansible** (`ansible --version`):
**Kargo version (commit) (`git rev-parse --short HEAD`):**
**Network plugin used**:
**Copy of your inventory file:**
**Command used to invoke ansible**:
**Output of ansible run**:
<!-- We recommend using snippets services like https://gist.github.com/ etc. -->
**Anything else do we need to know**:
<!-- By running scripts/collect-info.yaml you can get a lot of useful informations.
Script can be started by:
ansible-playbook -i <inventory_file_path> -u <ssh_user> -e ansible_ssh_user=<ssh_user> -b --become-user=root -e dir=`pwd` scripts/collect-info.yaml
(If you using CoreOS remember to add '-e ansible_python_interpreter=/opt/bin/python').
After running this command you can find logs in `pwd`/logs.tar.gz. You can even upload somewhere entire file and paste link here.-->

View File

@@ -48,8 +48,12 @@ before_script:
 GS_SECRET_ACCESS_KEY: $GS_SECRET
 ANSIBLE_KEEP_REMOTE_FILES: "1"
 BOOTSTRAP_OS: none
+RESOLVCONF_MODE: docker_dns
 LOG_LEVEL: "-vv"
+ETCD_DEPLOYMENT: "docker"
+KUBELET_DEPLOYMENT: "docker"
+MAGIC: "ci check this"
 .gce: &gce
 <<: *job
 <<: *docker_service
@@ -101,7 +105,10 @@ before_script:
 -e download_run_once=true
 -e download_localhost=true
 -e deploy_netchecker=true
+-e resolvconf_mode=${RESOLVCONF_MODE}
 -e local_release_dir=${PWD}/downloads
+-e etcd_deployment_type=${ETCD_DEPLOYMENT}
+-e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
 cluster.yml
@@ -136,6 +143,7 @@ before_script:
 CLOUD_REGION: us-west1-b
 CLUSTER_MODE: separated
 BOOTSTRAP_OS: coreos
+RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
 .debian8_canal_ha_variables: &debian8_canal_ha_variables
 # stage: deploy-gce-part1
@@ -172,6 +180,7 @@ before_script:
 CLOUD_REGION: us-east1-b
 CLUSTER_MODE: default
 BOOTSTRAP_OS: coreos
+RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
 .rhel7_canal_sep_variables: &rhel7_canal_sep_variables
 # stage: deploy-gce-special
@@ -202,7 +211,16 @@ before_script:
 CLUSTER_MODE: ha
 BOOTSTRAP_OS: coreos
-# Builds for PRs only (auto) and triggers (auto)
+.ubuntu_rkt_sep_variables: &ubuntu_rkt_sep_variables
+# stage: deploy-gce-part1
+KUBE_NETWORK_PLUGIN: flannel
+CLOUD_IMAGE: ubuntu-1604-xenial
+CLOUD_REGION: us-central1-b
+CLUSTER_MODE: separated
+ETCD_DEPLOYMENT: rkt
+KUBELET_DEPLOYMENT: rkt
+# Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
 coreos-calico-sep:
 stage: deploy-gce-part1
 <<: *job
@@ -405,12 +423,27 @@ coreos-alpha-weave-ha:
 except: ['triggers']
 only: ['master', /^pr-.*$/]
+ubuntu-rkt-sep:
+stage: deploy-gce-part1
+<<: *job
+<<: *gce
+variables:
+<<: *gce_variables
+<<: *ubuntu_rkt_sep_variables
+when: manual
+except: ['triggers']
+only: ['master', /^pr-.*$/]
+# Premoderated with manual actions
 syntax-check:
 <<: *job
 stage: unit-tests
+before_script:
+- apt-get -y install jq
 script:
 - ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root cluster.yml -vvv --syntax-check
-except: ['triggers']
+- /bin/sh scripts/premoderator.sh
+except: ['triggers', 'master']
 tox-inventory-builder:
 stage: unit-tests
@@ -419,4 +452,4 @@ tox-inventory-builder:
 - pip install tox
 - cd contrib/inventory_builder && tox
 when: manual
-except: ['triggers']
+except: ['triggers', 'master']
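The `MAGIC: "ci check this"` variable, the `jq` install, and the `scripts/premoderator.sh` call added above implement the premoderation flow described in commit fcd9d97f10. The script itself is not part of this diff; purely to illustrate the idea, a gate like it could search the PR's comments for the magic phrase (hypothetical sketch, not the real script):

```
#!/bin/sh
# Hypothetical premoderation gate (NOT the actual scripts/premoderator.sh).
# Assumes a branch named like "pr-123" and the MAGIC variable set in CI.
set -e
PR_NUM=$(echo "$CI_BUILD_REF_NAME" | sed 's/^pr-//')
# Fail unless some comment on the PR contains the magic phrase
# (org-membership checks omitted in this sketch).
curl -s "https://api.github.com/repos/kubernetes-incubator/kargo/issues/${PR_NUM}/comments" \
  | jq -r '.[].body' | grep -qF "${MAGIC:-ci check this}"
```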

View File

@@ -1,4 +1,4 @@
-![Kubespray Logo](http://s9.postimg.org/md5dyjl67/kubespray_logoandkubespray_small.png)
+![Kubernetes Logo](https://s28.postimg.org/lf3q4ocpp/k8s.png)
 ##Deploy a production ready kubernetes cluster
@@ -14,75 +14,88 @@ If you have questions, join us on the [kubernetes slack](https://slack.k8s.io),
 To deploy the cluster you can use :
 [**kargo-cli**](https://github.com/kubespray/kargo-cli) <br>
-**Ansible** usual commands <br>
+**Ansible** usual commands and [**inventory builder**](https://github.com/kubernetes-incubator/kargo/blob/master/contrib/inventory_builder/inventory.py) <br>
 **vagrant** by simply running `vagrant up` (for tests purposes) <br>
 * [Requirements](#requirements)
+* [Kargo vs ...](docs/comparisons.md)
 * [Getting started](docs/getting-started.md)
+* [Ansible inventory and tags](docs/ansible.md)
+* [Deployment data variables](docs/vars.md)
+* [DNS stack](docs/dns-stack.md)
+* [HA mode](docs/ha-mode.md)
+* [Network plugins](#network-plugins)
 * [Vagrant install](docs/vagrant.md)
 * [CoreOS bootstrap](docs/coreos.md)
-* [Ansible variables](docs/ansible.md)
+* [Downloaded artifacts](docs/downloads.md)
 * [Cloud providers](docs/cloud.md)
 * [OpenStack](docs/openstack.md)
 * [AWS](docs/aws.md)
 * [Azure](docs/azure.md)
-* [Network plugins](#network-plugins)
+* [Large deployments](docs/large-deployments.md)
+* [Upgrades basics](docs/upgrades.md)
 * [Roadmap](docs/roadmap.md)
 Supported Linux distributions
 ===============
-* **CoreOS**
-* **Debian** Wheezy, Jessie
-* **Ubuntu** 14.10, 15.04, 15.10, 16.04
-* **Fedora** 23
+* **Container Linux by CoreOS**
+* **Debian** Jessie
+* **Ubuntu** 16.04
 * **CentOS/RHEL** 7
-Versions
---------------
-[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.4.6 <br>
+Note: Upstart/SysV init based OS types are not supported.
+Versions of supported components
+--------------------------------
+[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.5.1 <br>
 [etcd](https://github.com/coreos/etcd/releases) v3.0.6 <br>
 [flanneld](https://github.com/coreos/flannel/releases) v0.6.2 <br>
-[calicoctl](https://github.com/projectcalico/calico-docker/releases) v0.22.0 <br>
+[calicoctl](https://github.com/projectcalico/calico-docker/releases) v0.23.0 <br>
+[canal](https://github.com/projectcalico/canal) (given calico/flannel versions) <br>
 [weave](http://weave.works/) v1.6.1 <br>
-[docker](https://www.docker.com/) v1.10.3 <br>
+[docker](https://www.docker.com/) v1.12.5 <br>
+[rkt](https://coreos.com/rkt/docs/latest/) v1.21.0 <br>
+Note: rkt support as docker alternative is limited to control plane (etcd and
+kubelet). Docker is still used for Kubernetes cluster workloads and network
+plugins' related OS services. Also note, only one of the supported network
+plugins can be deployed for a given single cluster.
 Requirements
 --------------
 * The target servers must have **access to the Internet** in order to pull docker images.
 * The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
-in order to avoid any issue during deployment you should disable your firewall
+in order to avoid any issue during deployment you should disable your firewall.
+* The target servers are configured to allow **IPv4 forwarding**.
 * **Copy your ssh keys** to all the servers part of your inventory.
-* **Ansible v2.x and python-netaddr**
+* **Ansible v2.2 (or newer) and python-netaddr**
 ## Network plugins
-You can choose between 3 network plugins. (default: `flannel` with vxlan backend)
+You can choose between 4 network plugins. (default: `flannel` with vxlan backend)
 * [**flannel**](docs/flannel.md): gre/vxlan (layer 2) networking.
 * [**calico**](docs/calico.md): bgp (layer 3) networking.
+* [**canal**](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
 * **weave**: Weave is a lightweight container overlay network that doesn't require an external K/V database cluster. <br>
-(Please refer to `weave` [troubleshooting documentation](http://docs.weave.works/weave/latest_release/troubleshooting.html))
+(Please refer to `weave` [troubleshooting documentation](http://docs.weave.works/weave/latest_release/troubleshooting.html)).
-The choice is defined with the variable `kube_network_plugin`
+The choice is defined with the variable `kube_network_plugin`. There is also an
+option to leverage built-in cloud provider networking instead.
+See also [Network checker](docs/netcheck.md).
 ## CI Tests
-[![Build Status](https://travis-ci.org/kubernetes-incubator/kargo.svg)](https://travis-ci.org/kubernetes-incubator/kargo) </br>
-### Google Compute Engine
-| Calico | Flannel | Weave |
-------------- | ------------- | ------------- | ------------- |
-Ubuntu Xenial |[![Build Status](https://ci.kubespray.io/job/kargo-gce-xenial-calico/badge/icon)](https://ci.kubespray.io/job/kargo-gce-xenial-calico/)|[![Build Status](https://ci.kubespray.io/job/kargo-gce-xenial-flannel/badge/icon)](https://ci.kubespray.io/job/kargo-gce-xenial-flannel/)|[![Build Status](https://ci.kubespray.io/job/kargo-gce-xenial-weave/badge/icon)](https://ci.kubespray.io/job/kargo-gce-xenial-weave)|
-CentOS 7 |[![Build Status](https://ci.kubespray.io/job/kargo-gce-centos7-calico/badge/icon)](https://ci.kubespray.io/job/kargo-gce-centos7-calico/)|[![Build Status](https://ci.kubespray.io/job/kargo-gce-centos7-flannel/badge/icon)](https://ci.kubespray.io/job/kargo-gce-centos7-flannel/)|[![Build Status](https://ci.kubespray.io/job/kargo-gce-centos7-weave/badge/icon)](https://ci.kubespray.io/job/kargo-gce-centos7-weave/)|
-CoreOS (stable) |[![Build Status](https://ci.kubespray.io/job/kargo-gce-coreos-calico/badge/icon)](https://ci.kubespray.io/job/kargo-gce-coreos-calico/)|[![Build Status](https://ci.kubespray.io/job/kargo-gce-coreos-flannel/badge/icon)](https://ci.kubespray.io/job/kargo-gce-coreos-flannel/)|[![Build Status](https://ci.kubespray.io/job/kargo-gce-coreos-weave/badge/icon)](https://ci.kubespray.io/job/kargo-gce-coreos-weave/)|
-CI tests sponsored by Google (GCE), and [teuto.net](https://teuto.net/) for OpenStack.
+![Gitlab Logo](https://s27.postimg.org/wmtaig1wz/gitlabci.png)
+[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-incubator__kargo/badges/master/build.svg)](https://gitlab.com/kargo-ci/kubernetes-incubator__kargo/pipelines) </br>
+CI/end-to-end tests sponsored by Google (GCE), and [teuto.net](https://teuto.net/) for OpenStack.
+See the [test matrix](docs/test_cases.md) for details.

View File

@@ -28,6 +28,7 @@
 roles:
 - { role: kubernetes/preinstall, tags: preinstall }
 - { role: docker, tags: docker }
+- { role: rkt, tags: rkt, when: "'rkt' in [ etcd_deployment_type, kubelet_deployment_type ]" }
 - hosts: etcd:!k8s-cluster
 any_errors_fatal: true
@@ -56,8 +57,8 @@
 - hosts: k8s-cluster
 any_errors_fatal: true
 roles:
-- { role: dnsmasq, tags: dnsmasq }
+- { role: dnsmasq, when: "dns_mode == 'dnsmasq_kubedns'", tags: dnsmasq }
-- { role: kubernetes/preinstall, tags: resolvconf }
+- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf }
 - hosts: kube-master[0]
 any_errors_fatal: true
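With the conditional `rkt` role above and the `etcd_deployment_type`/`kubelet_deployment_type` defaults introduced elsewhere in this change, a deployment that runs the control plane under rkt could be invoked roughly as follows (the inventory path is illustrative):

```
# Sketch: switch etcd and kubelet from the default "docker" deployment to rkt.
ansible-playbook -i inventory/inventory.cfg -b cluster.yml \
  -e etcd_deployment_type=rkt \
  -e kubelet_deployment_type=rkt
```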

View File

@@ -36,7 +36,7 @@ Ensure your OpenStack credentials are loaded in environment variables. This can
 $ source ~/.stackrc
 ```
 You will need two networks before installing, an internal network and
 an external (floating IP Pool) network. The internet network can be shared as
 we use security groups to provide network segregation. Due to the many
 differences between OpenStack installs the Terraform does not attempt to create
@@ -97,7 +97,7 @@ gfs_volume_size_in_gb = "50"
 ssh_user_gfs = "ubuntu"
 ```
-If these variables are provided, this will give rise to a new ansible group called `gfs-cluster`, for which we have added ansible roles to execute in the ansible provisioning step. If you are using CoreOS, these GlusterFS VM necessarily need to be either Debian or RedHat based VMs, CoreOS cannot serve GlusterFS, but can connect to it through binaries available on hyperkube v1.4.3_coreos.0 or higher.
+If these variables are provided, this will give rise to a new ansible group called `gfs-cluster`, for which we have added ansible roles to execute in the ansible provisioning step. If you are using Container Linux by CoreOS, these GlusterFS VM necessarily need to be either Debian or RedHat based VMs, Container Linux by CoreOS cannot serve GlusterFS, but can connect to it through binaries available on hyperkube v1.4.3_coreos.0 or higher.
 # Provision a Kubernetes Cluster on OpenStack
@@ -133,20 +133,20 @@ Make sure you can connect to the hosts:
 ```
 $ ansible -i contrib/terraform/openstack/hosts -m ping all
 example-k8s_node-1 | SUCCESS => {
 "changed": false,
 "ping": "pong"
 }
 example-etcd-1 | SUCCESS => {
 "changed": false,
 "ping": "pong"
 }
 example-k8s-master-1 | SUCCESS => {
 "changed": false,
 "ping": "pong"
 }
 ```
-if you are deploying a system that needs bootstrapping, like CoreOS, these might have a state `FAILED` due to CoreOS not having python. As long as the state is not `UNREACHABLE`, this is fine.
+if you are deploying a system that needs bootstrapping, like Container Linux by CoreOS, these might have a state `FAILED` due to Container Linux by CoreOS not having python. As long as the state is not `UNREACHABLE`, this is fine.
 if it fails try to connect manually via SSH ... it could be somthing as simple as a stale host key.

View File

@@ -10,7 +10,7 @@ local_release_dir: "/tmp/releases"
 # Random shifts for retrying failed ops like pushing/downloading
 retry_stagger: 5
-# Uncomment this line for CoreOS only.
+# Uncomment this line for Container Linux by CoreOS only.
 # Directory where python binary is installed
 # ansible_python_interpreter: "/opt/bin/python"
@@ -105,21 +105,21 @@ kube_apiserver_insecure_port: 8080 # (http)
 # into appropriate IP addresses. It's highly advisable to run such DNS server,
 # as it greatly simplifies configuration of your applications - you can use
 # service names instead of magic environment variables.
+# You still must manually configure all your containers to use this DNS server,
+# Kubernetes won't do this for you (yet).
-# Do not install additional dnsmasq
-skip_dnsmasq: false
-# Upstream dns servers used by dnsmasq
+# Can be dnsmasq_kubedns, kubedns or none
+dns_mode: dnsmasq_kubedns
+# Can be docker_dns, host_resolvconf or none
+resolvconf_mode: docker_dns
+## Upstream dns servers used by dnsmasq
 #upstream_dns_servers:
 # - 8.8.8.8
 # - 8.8.4.4
-#
-# # Use dns server : https://github.com/ansibl8s/k8s-skydns/blob/master/skydns-README.md
-dns_setup: true
 dns_domain: "{{ cluster_name }}"
-#
-# # Ip address of the kubernetes skydns service
+# Ip address of the kubernetes skydns service
 skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
 dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"

View File

@@ -45,9 +45,37 @@ kube-master
 etcd
 ```
-Group vars
---------------
-The main variables to change are located in the directory ```inventory/group_vars/all.yml```.
+Group vars and overriding variables precedence
+----------------------------------------------
+The group variables to control main deployment options are located in the directory ``inventory/group_vars``.
+There are also role vars for docker, rkt, kubernetes preinstall and master roles.
+According to the [ansible docs](http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
+those cannot be overriden from the group vars. In order to override, one should use
+the `-e ` runtime flags (most simple way) or other layers described in the docs.
+Kargo uses only a few layers to override things (or expect them to
+be overriden for roles):
+Layer | Comment
+------|--------
+**role defaults** | provides best UX to override things for Kargo deployments
+inventory vars | Unused
+**inventory group_vars** | Expects users to use ``all.yml``,``k8s-cluster.yml`` etc. to override things
+inventory host_vars | Unused
+playbook group_vars | Unuses
+playbook host_vars | Unused
+**host facts** | Kargo overrides for internal roles' logic, like state flags
+play vars | Unused
+play vars_prompt | Unused
+play vars_files | Unused
+registered vars | Unused
+set_facts | Kargo overrides those, for some places
+**role and include vars** | Provides bad UX to override things! Use extra vars to enforce
+block vars (only for tasks in block) | Kargo overrides for internal roles' logic
+task vars (only for the task) | Unused for roles, but only for helper scripts
+**extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml``
 Ansible tags
 ------------
@@ -132,5 +160,5 @@ bastion host.
 bastion ansible_ssh_host=x.x.x.x
 ```
 For more information about Ansible and bastion hosts, read
 [Running Ansible Through an SSH Bastion Host](http://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)
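As the precedence table added in this file notes, role and include vars cannot be overridden from group_vars, so extra vars are the reliable escape hatch. A small illustration (the file name and values are hypothetical):

```
# Sketch: collect overrides in a file and pass it as extra vars,
# which always win over group_vars and role/include vars.
cat > my-overrides.yml <<'EOF'
kube_network_plugin: calico
dns_mode: kubedns
EOF
ansible-playbook -i inventory/inventory.cfg -b cluster.yml -e @my-overrides.yml
```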

docs/comparisons.md Normal file
View File

@@ -0,0 +1,25 @@
Kargo vs [Kops](https://github.com/kubernetes/kops)
---------------
Kargo runs on bare metal and most clouds, using Ansible as its substrate for
provisioning and orchestration. Kops performs the provisioning and orchestration
itself, and as such is less flexible in deployment platforms. For people with
familiarity with Ansible, existing Ansible deployments or the desire to run a
Kubernetes cluster across multiple platforms, Kargo is a good choice. Kops,
however, is more tightly integrated with the unique features of the clouds it
supports so it could be a better choice if you know that you will only be using
one platform for the foreseeable future.
Kargo vs [Kubeadm](https://github.com/kubernetes/kubeadm)
------------------
Kubeadm provides domain Knowledge of Kubernetes clusters' life cycle
management, including self-hosted layouts, dynamic discovery services and so
on. Had it belong to the new [operators world](https://coreos.com/blog/introducing-operators.html),
it would've likely been named a "Kubernetes cluster operator". Kargo however,
does generic configuration management tasks from the "OS operators" ansible
world, plus some initial K8s clustering (with networking plugins included) and
control plane bootstrapping. Kargo [strives](https://github.com/kubernetes-incubator/kargo/issues/553)
to adopt kubeadm as a tool in order to consume life cycle management domain
knowledge from it and offload generic OS configuration things from it, which
hopefully benefits both sides.

View File

@@ -9,47 +9,14 @@ to serve as an authoritative DNS server for a given ``dns_domain`` and its
 Other nodes in the inventory, like external storage nodes or a separate etcd cluster
 node group, considered non-cluster and left up to the user to configure DNS resolve.
-Note, custom ``ndots`` values affect only the dnsmasq daemon set (explained below).
-While the kubedns has the ``ndots=5`` hardcoded, which is not recommended due to
-[DNS performance reasons](https://github.com/kubernetes/kubernetes/issues/14051).
-You can use config maps for the kubedns app to workaround the issue, which is
-yet in the Kargo scope.
-Additional search (sub)domains may be defined in the ``searchdomains``
-and ``ndots`` vars. And additional recursive DNS resolvers in the `` upstream_dns_servers``,
-``nameservers`` vars. Intranet/cloud provider DNS resolvers should be specified
-in the first place, followed by external resolvers, for example:
-```
-skip_dnsmasq: true
-nameservers: [8.8.8.8]
-upstream_dns_servers: [172.18.32.6]
-```
-or
-```
-skip_dnsmasq: false
-upstream_dns_servers: [172.18.32.6, 172.18.32.7, 8.8.8.8, 8.8.8.4]
-```
-The vars are explained below. For the early cluster deployment stage, when there
-is yet K8s cluster and apps exist, a user may expect local repos to be
-accessible via authoritative intranet resolvers. For that case, if none custom vars
-was specified, the default resolver is set to either the cloud provider default
-or `8.8.8.8`. And domain is set to the default ``dns_domain`` value as well.
-Later, the nameservers will be reconfigured to the DNS service IP that Kargo
-configures for K8s cluster.
-Also note, existing records will be purged from the `/etc/resolv.conf`,
-including resolvconf's base/head/cloud-init config files and those that come from dhclient.
-This is required for hostnet pods networking and for [kubelet to not exceed search domains
-limits](https://github.com/kubernetes/kubernetes/issues/9229).
-Instead, new domain, search, nameserver records and options will be defined from the
-aforementioned vars:
-* Superseded via dhclient's DNS update hook.
-* Generated via cloud-init (CoreOS only).
-* Statically defined in the `/etc/resolv.conf`, if none of above is applicable.
-* Resolvconf's head/base files are disabled from populating anything into the
-`/etc/resolv.conf`.
+DNS variables
+=============
+There are several global variables which can be used to modify DNS settings:
+#### ndots
+ndots value to be used in ``/etc/resolv.conf``
 It is important to note that multiple search domains combined with high ``ndots``
 values lead to poor performance of DNS stack, so please choose it wisely.
@@ -58,48 +25,97 @@ replies for [bogus internal FQDNS](https://github.com/kubernetes/kubernetes/issu
 before it even hits the kubedns app. This enables dnsmasq to serve as a
 protective, but still recursive resolver in front of kubedns.
-DNS configuration details
--------------------------
-Here is an approximate picture of how DNS things working and
-being configured by Kargo ansible playbooks:
-![Image](figures/dns.jpeg?raw=true)
-Note that an additional dnsmasq daemon set is installed by Kargo
-by default. Kubelet will configure DNS base of all pods to use the
-given dnsmasq cluster IP, which is defined via the ``dns_server`` var.
-The dnsmasq forwards requests for a given cluster ``dns_domain`` to
-Kubedns's SkyDns service. The SkyDns server is configured to be an
-authoritative DNS server for the given cluser domain (and its subdomains
-up to ``ndots:5`` depth). Note: you should scale its replication controller
-up, if SkyDns chokes. These two layered DNS forwarders provide HA for the
-DNS cluster IP endpoint, which is a critical moving part for Kubernetes apps.
-Nameservers are as well configured in the hosts' ``/etc/resolv.conf`` files,
-as the given DNS cluster IP merged with ``nameservers`` values. While the
-DNS cluster IP merged with the ``upstream_dns_servers`` defines additional
-nameservers for the aforementioned nsmasq daemon set running on all hosts.
-This mitigates existing Linux limitation of max 3 nameservers in the
-``/etc/resolv.conf`` and also brings an additional caching layer for the
-clustered DNS services.
-You can skip the dnsmasq daemon set install steps by setting the
-``skip_dnsmasq: true``. This may be the case, if you're fine with
-the nameservers limitation. Sadly, there is no way to work around the
-search domain limitations of a 256 chars and 6 domains. Thus, you can
-use the ``searchdomains`` var to define no more than a three custom domains.
-Remaining three slots are reserved for K8s cluster default subdomains.
-When dnsmasq skipped, Kargo redefines the DNS cluster IP to point directly
-to SkyDns cluster IP ``skydns_server`` and configures Kubelet's
-``--dns_cluster`` to use that IP as well. While this greatly simplifies
-things, it comes by the price of limited nameservers though. As you know now,
-the DNS cluster IP takes a slot in the ``/etc/resolv.conf``, thus you can
-specify no more than a two nameservers for infra and/or external use.
-Those may be specified either in ``nameservers`` or ``upstream_dns_servers``
-and will be merged together with the ``skydns_server`` IP into the hots'
-``/etc/resolv.conf``.
+#### searchdomains
+Custom search domains to be added in addition to the cluster search domains (``default.svc.{{ dns_domain }}, svc.{{ dns_domain }}``).
+Most Linux systems limit the total number of search domains to 6 and the total length of all search domains
+to 256 characters. Depending on the length of ``dns_domain``, you're limitted to less then the total limit.
+Please note that ``resolvconf_mode: docker_dns`` will automatically add your systems search domains as
+additional search domains. Please take this into the accounts for the limits.
+#### nameservers
+This variable is only used by ``resolvconf_mode: host_resolvconf``. These nameservers are added to the hosts
+``/etc/resolv.conf`` *after* ``upstream_dns_servers`` and thus serve as backup nameservers. If this variable
+is not set, a default resolver is chosen (depending on cloud provider or 8.8.8.8 when no cloud provider is specified).
+#### upstream_dns_servers
+DNS servers to be added *after* the cluster DNS. Used by all ``resolvconf_mode`` modes. These serve as backup
+DNS servers in early cluster deployment when no cluster DNS is available yet. These are also added as upstream
+DNS servers used by ``dnsmasq`` (when deployed with ``dns_mode: dnsmasq_kubedns``).
+DNS modes supported by kargo
+============================
+You can modify how kargo sets up DNS for your cluster with the variables ``dns_mode`` and ``resolvconf_mode``.
+## dns_mode
+``dns_mode`` configures how kargo will setup cluster DNS. There are three modes available:
+#### dnsmasq_kubedns (default)
+This installs an additional dnsmasq DaemonSet which gives more flexibility and lifts some
+limitations (e.g. number of nameservers). Kubelet is instructed to use dnsmasq instead of kubedns/skydns.
+It is configured to forward all DNS queries belonging to cluster services to kubedns/skydns. All
+other queries are forwardet to the nameservers found in ``upstream_dns_servers`` or ``default_resolver``
+#### kubedns
+This does not install the dnsmasq DaemonSet and instructs kubelet to directly use kubedns/skydns for
+all queries.
+#### none
+This does not install any of dnsmasq and kubedns/skydns. This basically disables cluster DNS completely and
+leaves you with a non functional cluster.
+## resolvconf_mode
+``resolvconf_mode`` configures how kargo will setup DNS for ``hostNetwork: true`` PODs and non-k8s containers.
+There are three modes available:
+#### docker_dns (default)
+This sets up the docker daemon with additional --dns/--dns-search/--dns-opt flags.
+The following nameservers are added to the docker daemon (in the same order as listed here):
+* cluster nameserver (depends on dns_mode)
+* content of optional upstream_dns_servers variable
+* host system nameservers (read from hosts /etc/resolv.conf)
+The following search domains are added to the docker daemon (in the same order as listed here):
+* cluster domains (``default.svc.{{ dns_domain }}``, ``svc.{{ dns_domain }}``)
+* content of optional searchdomains variable
+* host system search domains (read from hosts /etc/resolv.conf)
+The following dns options are added to the docker daemon
+* ndots:{{ ndots }}
+* timeout:2
+* attempts:2
+For normal PODs, k8s will ignore these options and setup its own DNS settings for the PODs, taking
+the --cluster_dns (either dnsmasq or kubedns, depending on dns_mode) kubelet option into account.
+For ``hostNetwork: true`` PODs however, k8s will let docker setup DNS settings. Docker containers which
+are not started/managed by k8s will also use these docker options.
+The host system name servers are added to ensure name resolution is also working while cluster DNS is not
+running yet. This is especially important in early stages of cluster deployment. In this early stage,
+DNS queries to the cluster DNS will timeout after a few seconds, resulting in the system nameserver being
+used as a backup nameserver. After cluster DNS is running, all queries will be answered by the cluster DNS
+servers, which in turn will forward queries to the system nameserver if required.
+#### host_resolvconf
+This activates the classic kargo behaviour that modifies the hosts ``/etc/resolv.conf`` file and dhclient
+configuration to point to the cluster dns server (either dnsmasq or kubedns, depending on dns_mode).
+As cluster DNS is not available on early deployment stage, this mode is split into 2 stages. In the first
+stage (``dns_early: true``), ``/etc/resolv.conf`` is configured to use the DNS servers found in ``upstream_dns_servers``
+and ``nameservers``. Later, ``/etc/resolv.conf`` is reconfigured to use the cluster DNS server first, leaving
+the other nameservers as backups.
+Also note, existing records will be purged from the `/etc/resolv.conf`,
+including resolvconf's base/head/cloud-init config files and those that come from dhclient.
+#### none
+Does nothing regarding ``/etc/resolv.conf``. This leaves you with a cluster that works as expected in most cases.
+The only exception is that ``hostNetwork: true`` PODs and non-k8s managed containers will not be able to resolve
+cluster service names.
 Limitations
 -----------
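To make the two new knobs concrete, here is a hedged example of switching away from the defaults via extra vars (the inventory path and upstream resolver addresses are placeholders):

```
# Sketch: kubedns without the dnsmasq DaemonSet, classic resolv.conf handling,
# and two upstream resolvers; values are illustrative only.
ansible-playbook -i inventory/inventory.cfg -b cluster.yml \
  -e dns_mode=kubedns \
  -e resolvconf_mode=host_resolvconf \
  -e '{"upstream_dns_servers": ["172.18.32.6", "8.8.8.8"]}'
```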

Binary file not shown (654 KiB before).

View File

@@ -14,9 +14,6 @@ kube_script_dir: "{{ bin_dir }}/kubernetes-scripts"
 kube_manifest_dir: "{{ kube_config_dir }}/manifests"
 system_namespace: kube-system
-# Logging directory (sysvinit systems)
-kube_log_dir: "/var/log/kubernetes"
 # This is where all the cert scripts and certs will be located
 kube_cert_dir: "{{ kube_config_dir }}/ssl"
@@ -136,21 +133,21 @@ kube_apiserver_insecure_port: 8080 # (http)
 # into appropriate IP addresses. It's highly advisable to run such DNS server,
 # as it greatly simplifies configuration of your applications - you can use
 # service names instead of magic environment variables.
+# You still must manually configure all your containers to use this DNS server,
+# Kubernetes won't do this for you (yet).
-# Do not install additional dnsmasq
-skip_dnsmasq: false
-# Upstream dns servers used by dnsmasq
+# Can be dnsmasq_kubedns, kubedns or none
+dns_mode: dnsmasq_kubedns
+# Can be docker_dns, host_resolvconf or none
+resolvconf_mode: docker_dns
+## Upstream dns servers used by dnsmasq
 #upstream_dns_servers:
 # - 8.8.8.8
 # - 8.8.4.4
-#
-# # Use dns server : https://github.com/ansibl8s/k8s-skydns/blob/master/skydns-README.md
-dns_setup: true
 dns_domain: "{{ cluster_name }}"
-#
-# # Ip address of the kubernetes skydns service
+# Ip address of the kubernetes skydns service
 skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
 dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"
@@ -200,3 +197,8 @@ k8s_image_pull_policy: IfNotPresent
 # default packages to install within the cluster
 kpm_packages: []
 # - name: kube-system/grafana
+# Settings for containerized control plane (etcd/kubelet)
+rkt_version: 1.21.0
+etcd_deployment_type: docker
+kubelet_deployment_type: docker

View File

@@ -18,9 +18,6 @@ dnsmasq_version: 2.72
 dnsmasq_image_repo: "andyshinn/dnsmasq"
 dnsmasq_image_tag: "{{ dnsmasq_version }}"
-# Skip dnsmasq setup
-skip_dnsmasq: false
 # Limits for dnsmasq/kubedns apps
 dns_cpu_limit: 100m
 dns_memory_limit: 170Mi

View File

@@ -2,5 +2,5 @@
 dependencies:
 - role: download
 file: "{{ downloads.dnsmasq }}"
-when: not skip_dnsmasq|default(false) and download_localhost|default(false)
+when: dns_mode == 'dnsmasq_kubedns' and download_localhost|default(false)
 tags: [download, dnsmasq]

View File

@@ -15,15 +15,17 @@ local=/{{ bogus_domains }}
 {% for srv in upstream_dns_servers %}
 server={{ srv }}
 {% endfor %}
-{% else %}
+no-resolv
+{% elif resolvconf_mode == 'host_resolvconf' %}
+{# The default resolver is only needed when the hosts resolv.conf was modified by us. If it was not modified, we can rely on dnsmasq to reuse the systems resolv.conf #}
 server={{ default_resolver }}
+no-resolv
 {% endif %}
 {% if kube_log_level == '4' %}
 log-queries
 {% endif %}
 bogus-priv
-no-resolv
 no-negcache
 cache-size=1000
 max-cache-ttl=10

View File

@@ -1,4 +1,4 @@
-docker_version: '1.10'
+docker_version: '1.12'
 docker_package_info:
 pkgs:

View File

@@ -10,13 +10,12 @@
 - name : Docker | reload systemd
 shell: systemctl daemon-reload
-when: ansible_service_mgr == "systemd"
 - name: Docker | reload docker.socket
 service:
 name: docker.socket
 state: restarted
-when: ansible_os_family == 'CoreOS'
+when: ansible_os_family in ['CoreOS', 'Container Linux by CoreOS']
 - name: Docker | reload docker
 service:

View File

@@ -14,13 +14,17 @@
 skip: true
 tags: facts
+- include: set_facts_dns.yml
+when: dns_mode != 'none' and resolvconf_mode == 'docker_dns'
+tags: facts
 - name: check for minimum kernel version
 fail:
 msg: >
 docker requires a minimum kernel version of
 {{ docker_kernel_min_version }} on
 {{ ansible_distribution }}-{{ ansible_distribution_version }}
-when: (ansible_os_family != "CoreOS") and (ansible_kernel|version_compare(docker_kernel_min_version, "<"))
+when: (not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]) and (ansible_kernel|version_compare(docker_kernel_min_version, "<"))
 tags: facts
 - name: ensure docker repository public key is installed
@@ -34,7 +38,7 @@
 retries: 4
 delay: "{{ retry_stagger | random + 3 }}"
 with_items: "{{ docker_repo_key_info.repo_keys }}"
-when: ansible_os_family != "CoreOS"
+when: not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
 - name: ensure docker repository is enabled
 action: "{{ docker_repo_info.pkg_repo }}"
@@ -42,14 +46,13 @@
 repo: "{{item}}"
 state: present
 with_items: "{{ docker_repo_info.repos }}"
-when: (ansible_os_family != "CoreOS") and (docker_repo_info.repos|length > 0)
+when: (not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]) and (docker_repo_info.repos|length > 0)
 - name: Configure docker repository on RedHat/CentOS
 copy:
 src: "rh_docker.repo"
 dest: "/etc/yum.repos.d/docker.repo"
-when: ansible_distribution in ["CentOS","RedHat"] and
-ansible_distribution_major_version >= 7
+when: ansible_distribution in ["CentOS","RedHat"]
 - name: ensure docker packages are installed
 action: "{{ docker_package_info.pkg_mgr }}"
@@ -62,15 +65,17 @@
 retries: 4
 delay: "{{ retry_stagger | random + 3 }}"
 with_items: "{{ docker_package_info.pkgs }}"
-when: (ansible_os_family != "CoreOS") and (docker_package_info.pkgs|length > 0)
+when: (not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]) and (docker_package_info.pkgs|length > 0)
-- name: Set docker upstart and sysvinit config
-include: non-systemd.yml
-when: ansible_service_mgr in ["sysvinit","upstart"]
+- name: check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns
+shell: docker version -f "{{ '{{' }}.Client.Version{{ '}}' }}"
+register: docker_version
+failed_when: docker_version.stdout|version_compare('1.12', '<')
+changed_when: false
+when: dns_mode != 'none' and resolvconf_mode == 'docker_dns'
 - name: Set docker systemd config
 include: systemd.yml
-when: ansible_service_mgr == "systemd"
 - name: ensure docker service is started and enabled
 service:
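The version-check task added above fails the play early when docker is older than 1.12 while `resolvconf_mode: docker_dns` is requested. The same probe can be run by hand on a node, for example:

```
# Sketch: print the client version the task inspects; 1.12+ is required
# for the --dns/--dns-search/--dns-opt daemon flags used by docker_dns mode.
docker version -f '{{.Client.Version}}'
```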

View File

@@ -1,66 +0,0 @@
---
# This uses lineinfile instead of templates for idempotency in files that may be modified by different roles
- name: Set docker options config file path
set_fact:
docker_options_file: >-
{%- if ansible_os_family == "Debian" -%}/etc/default/docker{%- elif ansible_os_family == "RedHat" -%}/etc/sysconfig/docker{%- endif -%}
tags: facts
- name: Set docker options config variable name
set_fact:
docker_options_name: >-
{%- if ansible_os_family == "Debian" -%}DOCKER_OPTS{%- elif ansible_os_family == "RedHat" -%}other_args{%- endif -%}
tags: facts
- name: Set docker options config value to be written
set_fact:
docker_options_value: '"{{ docker_options }} $DOCKER_NETWORK_OPTIONS $DOCKER_STORAGE_OPTIONS $INSECURE_REGISTRY"'
tags: facts
- name: Set docker options config line to be written
set_fact:
docker_options_line: "{{ docker_options_name }}={{ docker_options_value }}"
tags: facts
- name: Set docker proxy lines to be written
set_fact:
docker_proxy_lines:
- { name: "HTTP_PROXY", value: '"{{ http_proxy }}"' }
- { name: "HTTPS_PROXY", value: '"{{ https_proxy }}"' }
- { name: "NO_PROXY", value: '"{{ no_proxy }}"' }
tags: facts
- name: Remove docker daemon proxy config lines that don't match desired lines
lineinfile:
dest: "{{ docker_options_file }}"
regexp: "^{{ item.name }}=(?!{{ item.value|regex_escape() }})"
state: absent
with_items: "{{ docker_proxy_lines|default([]) }}"
when: item.value is defined and (item.value | trim != '')
- name: Write docker daemon proxy config lines
lineinfile:
dest: "{{ docker_options_file }}"
line: "{{ item.name }}={{ item.value }}"
owner: root
group: root
mode: 0644
with_items: "{{ docker_proxy_lines|default([]) }}"
when: item.value is defined and (item.value | trim != '')
- name: Remove docker daemon options lines that don't match desired line
lineinfile:
dest: "{{ docker_options_file }}"
regexp: "^(DOCKER_OPTS|OPTIONS|other_args)=(?!{{ docker_options_value|regex_escape() }})"
state: absent
- name: Write docker daemon options line
lineinfile:
dest: "{{ docker_options_file }}"
line: "{{ docker_options_line }}"
owner: root
group: root
mode: 0644
notify: restart docker
- meta: flush_handlers

View File

@@ -0,0 +1,61 @@
---
- name: set dns server for docker
set_fact:
docker_dns_servers: |-
{%- if dns_mode == 'kubedns' -%}
{{ [ skydns_server ] }}
{%- elif dns_mode == 'dnsmasq_kubedns' -%}
{{ [ dns_server ] }}
{%- endif -%}
- name: set base docker dns facts
set_fact:
docker_dns_search_domains:
- 'default.svc.{{ dns_domain }}'
- 'svc.{{ dns_domain }}'
docker_dns_options:
- ndots:{{ ndots }}
- timeout:2
- attempts:2
- name: add upstream dns servers (only when dnsmasq is not used)
set_fact:
docker_dns_servers: "{{ docker_dns_servers + upstream_dns_servers|default([]) }}"
when: dns_mode == 'kubedns'
- name: add global searchdomains
set_fact:
docker_dns_search_domains: "{{ docker_dns_search_domains + searchdomains|default([]) }}"
- name: check system nameservers
shell: grep "^nameserver" /etc/resolv.conf | sed 's/^nameserver\s*//'
changed_when: False
register: system_nameservers
- name: check system search domains
shell: grep "^search" /etc/resolv.conf | sed 's/^search\s*//'
changed_when: False
register: system_search_domains
- name: add system nameservers to docker options
set_fact:
docker_dns_servers: "{{ docker_dns_servers | union(system_nameservers.stdout_lines) | unique }}"
when: system_nameservers.stdout != ""
- name: add system search domains to docker options
set_fact:
docker_dns_search_domains: "{{ docker_dns_search_domains | union(system_search_domains.stdout.split(' ')|default([])) | unique }}"
when: system_search_domains.stdout != ""
- name: check number of nameservers
fail: msg="Too many nameservers"
when: docker_dns_servers|length > 3
- name: check number of search domains
fail: msg="Too many search domains"
when: docker_dns_search_domains|length > 6
- name: check length of search domains
fail: msg="Search domains exceeded limit of 256 characters"
when: docker_dns_search_domains|join(' ')|length > 256
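The three `fail` tasks at the end mirror the classic glibc resolver limits (at most 3 nameservers, 6 search domains, and 256 characters of search list). For a rough manual pre-check of what a host already consumes before the cluster entries are merged in, something like this could be used (a sketch, assuming a conventional /etc/resolv.conf):

```
# Sketch: inspect existing resolv.conf entries that will be merged with the
# cluster DNS servers and search domains by the tasks above.
grep -c '^nameserver' /etc/resolv.conf                     # merged total must stay <= 3
grep '^search' /etc/resolv.conf | cut -d' ' -f2- | wc -w   # merged total must stay <= 6
grep '^search' /etc/resolv.conf | cut -d' ' -f2- | wc -c   # merged total must stay <= 256 chars
```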

View File

@@ -13,7 +13,7 @@
 src: docker.service.j2
 dest: /etc/systemd/system/docker.service
 register: docker_service_file
-when: ansible_os_family != "CoreOS"
+when: not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
 - name: Write docker options systemd drop-in
 template:
@@ -21,4 +21,11 @@
 dest: "/etc/systemd/system/docker.service.d/docker-options.conf"
 notify: restart docker
+- name: Write docker dns systemd drop-in
+template:
+src: docker-dns.conf.j2
+dest: "/etc/systemd/system/docker.service.d/docker-dns.conf"
+notify: restart docker
+when: dns_mode != 'none' and resolvconf_mode == 'docker_dns'
 - meta: flush_handlers

View File

@@ -0,0 +1,6 @@
[Service]
Environment="DOCKER_DNS_OPTIONS=\
{% for d in docker_dns_servers %}--dns {{ d }} {% endfor %} \
{% for d in docker_dns_search_domains %}--dns-search {{ d }} {% endfor %} \
{% for o in docker_dns_options %}--dns-opt {{ o }} {% endfor %} \
"

View File

@@ -22,6 +22,7 @@ ExecStart={{ docker_bin_dir }}/docker daemon \
 $DOCKER_OPTS \
 $DOCKER_STORAGE_OPTIONS \
 $DOCKER_NETWORK_OPTIONS \
+$DOCKER_DNS_OPTIONS \
 $INSECURE_REGISTRY
 TasksMax=infinity
 LimitNOFILE=1048576

View File

@@ -1,16 +0,0 @@
docker_kernel_min_version: '2.6.32-431'
# versioning: docker-io itself is pinned at docker 1.5
docker_package_info:
pkg_mgr: yum
pkgs:
- name: docker-io
docker_repo_key_info:
pkg_key: ''
repo_keys: []
docker_repo_info:
pkg_repo: ''
repos: []

View File

@@ -1,10 +1,8 @@
-docker_kernel_min_version: '3.2'
+docker_kernel_min_version: '3.10'
 # https://apt.dockerproject.org/repo/dists/debian-wheezy/main/filelist
 docker_versioned_pkg:
 'latest': docker-engine
-'1.9': docker-engine=1.9.1-0~{{ ansible_distribution_release|lower }}
-'1.10': docker-engine=1.10.3-0~{{ ansible_distribution_release|lower }}
 '1.11': docker-engine=1.11.2-0~{{ ansible_distribution_release|lower }}
 '1.12': docker-engine=1.12.5-0~debian-{{ ansible_distribution_release|lower }}

View File

@@ -2,8 +2,6 @@ docker_kernel_min_version: '0'
 docker_versioned_pkg:
 'latest': docker
-'1.9': docker-1:1.9.1
-'1.10': docker-1:1.10.1
 '1.11': docker-1:1.11.2
 '1.12': docker-1:1.12.5

View File

@@ -1,29 +0,0 @@
---
docker_version: '1.11'
docker_kernel_min_version: '3.2'
# https://apt.dockerproject.org/repo/dists/ubuntu-xenial/main/filelist
docker_versioned_pkg:
'latest': docker-engine
'1.11': docker-engine=1.11.1-0~{{ ansible_distribution_release|lower }}
'1.12': docker-engine=1.12.5-0~ubuntu-{{ ansible_distribution_release|lower }}
docker_package_info:
pkg_mgr: apt
pkgs:
- name: "{{ docker_versioned_pkg[docker_version | string] }}"
force: yes
docker_repo_key_info:
pkg_key: apt_key
keyserver: hkp://p80.pool.sks-keyservers.net:80
repo_keys:
- 58118E89F3A912897C070ADBF76221572C52609D
docker_repo_info:
pkg_repo: apt_repository
repos:
- >
deb https://apt.dockerproject.org/repo
{{ ansible_distribution|lower }}-{{ ansible_distribution_release|lower }}
main


@@ -1,11 +1,10 @@
 ---
-docker_kernel_min_version: '3.2'
-# https://apt.dockerproject.org/repo/dists/ubuntu-trusty/main/filelist
+docker_version: '1.12'
+docker_kernel_min_version: '3.10'
+# https://apt.dockerproject.org/repo/dists/ubuntu-xenial/main/filelist
 docker_versioned_pkg:
 'latest': docker-engine
-'1.9': docker-engine=1.9.0-0~{{ ansible_distribution_release|lower }}
-'1.10': docker-engine=1.10.3-0~{{ ansible_distribution_release|lower }}
 '1.11': docker-engine=1.11.1-0~{{ ansible_distribution_release|lower }}
 '1.12': docker-engine=1.12.5-0~ubuntu-{{ ansible_distribution_release|lower }}


@@ -2,14 +2,14 @@
 local_release_dir: /tmp
 # if this is set to true will only download files once. Doesn't work
-# on CoreOS unless the download_localhost is true and localhost
+# on Container Linux by CoreOS unless the download_localhost is true and localhost
 # is running another OS type. Default compress level is 9 (best).
 download_run_once: False
 download_compress: 9
 # if this is set to true, uses the localhost for download_run_once mode
 # (requires docker and sudo to access docker). You may want this option for
-# local caching of docker images or for CoreOS cluster nodes.
+# local caching of docker images or for Container Linux by CoreOS cluster nodes.
 # Otherwise, uses the first node in the kube-master group to store images
 # in the download_run_once mode.
 download_localhost: False
@@ -21,8 +21,8 @@ download_always_pull: False
 etcd_version: v3.0.6
 #TODO(mattymo): Move calico versions to roles/network_plugins/calico/defaults
 # after migration to container download
-calico_version: v1.0.0-beta
+calico_version: "v1.0.0"
-calico_cni_version: v1.4.2
+calico_cni_version: "v1.5.5"
 weave_version: v1.6.1
 flannel_version: v0.6.2
 pod_infra_version: 3.0
@@ -43,15 +43,13 @@ etcd_image_tag: "{{ etcd_version }}"
 flannel_image_repo: "quay.io/coreos/flannel"
 flannel_image_tag: "{{ flannel_version }}"
 calicoctl_image_repo: "calico/ctl"
-# TODO(apanchenko): v1.0.0-beta can't execute `node run` from Docker container
-# for details see https://github.com/projectcalico/calico-containers/issues/1291
-calicoctl_image_tag: "v1.0.0-rc3"
+calicoctl_image_tag: "{{ calico_version }}"
 calico_node_image_repo: "calico/node"
 calico_node_image_tag: "{{ calico_version }}"
 calico_cni_image_repo: "calico/cni"
 calico_cni_image_tag: "{{ calico_cni_version }}"
 calico_policy_image_repo: "calico/kube-policy-controller"
-calico_policy_image_tag: latest
+calico_policy_image_tag: "v0.5.1"
 # TODO(adidenko): switch to "calico/routereflector" when
 # https://github.com/projectcalico/calico-bird/pull/27 is merged
 calico_rr_image_repo: "quay.io/l23network/routereflector"
@@ -115,13 +113,13 @@ downloads:
 version: "{{etcd_version}}"
 dest: "etcd/etcd-{{ etcd_version }}-linux-amd64.tar.gz"
 sha256: >-
-{%- if etcd_deployment_type == 'docker' -%}{{etcd_digest_checksum|default(None)}}{%- else -%}{{etcd_checksum}}{%- endif -%}
+{%- if etcd_deployment_type in [ 'docker', 'rkt' ] -%}{{etcd_digest_checksum|default(None)}}{%- else -%}{{etcd_checksum}}{%- endif -%}
 source_url: "{{ etcd_download_url }}"
 url: "{{ etcd_download_url }}"
 unarchive: true
 owner: "etcd"
 mode: "0755"
-container: "{{ etcd_deployment_type == 'docker' }}"
+container: "{{ etcd_deployment_type in [ 'docker', 'rkt' ] }}"
 repo: "{{ etcd_image_repo }}"
 tag: "{{ etcd_image_tag }}"
 hyperkube:


@@ -48,7 +48,7 @@
when: "{{ download.enabled|bool and download.container|bool }}" when: "{{ download.enabled|bool and download.container|bool }}"
tags: bootstrap-os tags: bootstrap-os
# This is required for the download_localhost delegate to work smooth with CoreOS cluster nodes # This is required for the download_localhost delegate to work smooth with Container Linux by CoreOS cluster nodes
- name: Hack python binary path for localhost - name: Hack python binary path for localhost
raw: sh -c "mkdir -p /opt/bin; ln -sf /usr/bin/python /opt/bin/python" raw: sh -c "mkdir -p /opt/bin; ln -sf /usr/bin/python /opt/bin/python"
when: "{{ download_delegate == 'localhost' }}" when: "{{ download_delegate == 'localhost' }}"
@@ -119,7 +119,7 @@
delegate_to: "{{ download_delegate }}" delegate_to: "{{ download_delegate }}"
register: saved register: saved
run_once: true run_once: true
when: (ansible_os_family != "CoreOS" or download_delegate == "localhost") and download_run_once|bool and download.enabled|bool and download.container|bool and (container_changed|bool or not img.stat.exists) when: (not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or download_delegate == "localhost") and download_run_once|bool and download.enabled|bool and download.container|bool and (container_changed|bool or not img.stat.exists)
- name: Download | copy container images to ansible host - name: Download | copy container images to ansible host
synchronize: synchronize:
@@ -128,7 +128,7 @@
mode: pull mode: pull
delegate_to: localhost delegate_to: localhost
become: false become: false
when: ansible_os_family != "CoreOS" and inventory_hostname == groups['kube-master'][0] and download_delegate != "localhost" and download_run_once|bool and download.enabled|bool and download.container|bool and saved.changed when: not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] and inventory_hostname == groups['kube-master'][0] and download_delegate != "localhost" and download_run_once|bool and download.enabled|bool and download.container|bool and saved.changed
- name: Download | upload container images to nodes - name: Download | upload container images to nodes
synchronize: synchronize:
@@ -141,10 +141,10 @@
until: get_task|success until: get_task|success
retries: 4 retries: 4
delay: "{{ retry_stagger | random + 3 }}" delay: "{{ retry_stagger | random + 3 }}"
when: (ansible_os_family != "CoreOS" and inventory_hostname != groups['kube-master'][0] or download_delegate == "localhost") and download_run_once|bool and download.enabled|bool and download.container|bool when: (not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] and inventory_hostname != groups['kube-master'][0] or download_delegate == "localhost") and download_run_once|bool and download.enabled|bool and download.container|bool
tags: [upload, upgrade] tags: [upload, upgrade]
- name: Download | load container images - name: Download | load container images
shell: "{{ docker_bin_dir }}/docker load < {{ fname }}" shell: "{{ docker_bin_dir }}/docker load < {{ fname }}"
when: (ansible_os_family != "CoreOS" and inventory_hostname != groups['kube-master'][0] or download_delegate == "localhost") and download_run_once|bool and download.enabled|bool and download.container|bool when: (not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] and inventory_hostname != groups['kube-master'][0] or download_delegate == "localhost") and download_run_once|bool and download.enabled|bool and download.container|bool
tags: [upload, upgrade] tags: [upload, upgrade]


@@ -8,7 +8,6 @@
 - name: etcd | reload systemd
 command: systemctl daemon-reload
-when: ansible_service_mgr == "systemd"
 - name: reload etcd
 service:


@@ -2,7 +2,7 @@
 dependencies:
 - role: adduser
 user: "{{ addusers.etcd }}"
-when: ansible_os_family != 'CoreOS'
+when: not ansible_os_family in ['CoreOS', 'Container Linux by CoreOS']
 - role: download
 file: "{{ downloads.etcd }}"
 tags: download


@@ -16,14 +16,5 @@
src: "etcd-{{ etcd_deployment_type }}.service.j2" src: "etcd-{{ etcd_deployment_type }}.service.j2"
dest: /etc/systemd/system/etcd.service dest: /etc/systemd/system/etcd.service
backup: yes backup: yes
when: ansible_service_mgr == "systemd" and is_etcd_master when: is_etcd_master
notify: restart etcd
- name: Configure | Write etcd initd script
template:
src: "deb-etcd-{{ etcd_deployment_type }}.initd.j2"
dest: /etc/init.d/etcd
owner: root
mode: 0755
when: ansible_service_mgr in ["sysvinit","upstart"] and ansible_os_family == "Debian" and is_etcd_master
notify: restart etcd notify: restart etcd


@@ -40,7 +40,7 @@
 {{ m }}
 {% endif %}
 {% endfor %}"
-- HOSTS: "{% for h in groups['k8s-cluster'] %}
+- HOSTS: "{% for h in (groups['k8s-cluster'] + groups['calico-rr']|default([]))|unique %}
 {% if hostvars[h].sync_certs|default(false) %}
 {{ h }}
 {% endif %}
@@ -65,7 +65,7 @@
 'member-{{ inventory_hostname }}-key.pem'
 ]
 all_node_certs: "['ca.pem',
-{% for node in groups['k8s-cluster'] %}
+{% for node in (groups['k8s-cluster'] + groups['calico-rr']|default([]))|unique %}
 'node-{{ node }}.pem',
 'node-{{ node }}-key.pem',
 {% endfor %}]"
@@ -73,7 +73,9 @@
 tags: facts
 - name: Gen_certs | Gather etcd master certs
-shell: "tar cfz - -C {{ etcd_cert_dir }} {{ my_master_certs|join(' ') }} {{ all_node_certs|join(' ') }}| base64 --wrap=0"
+shell: "tar cfz - -C {{ etcd_cert_dir }} -T /dev/stdin <<< {{ my_master_certs|join(' ') }} {{ all_node_certs|join(' ') }} | base64 --wrap=0"
+args:
+executable: /bin/bash
 register: etcd_master_cert_data
 delegate_to: "{{groups['etcd'][0]}}"
 when: inventory_hostname in groups['etcd'] and sync_certs|default(false) and
@@ -81,21 +83,28 @@
 notify: set etcd_secret_changed
 - name: Gen_certs | Gather etcd node certs
-shell: "tar cfz - -C {{ etcd_cert_dir }} {{ my_node_certs|join(' ') }} | base64 --wrap=0"
+shell: "tar cfz - -C {{ etcd_cert_dir }} -T /dev/stdin <<< {{ my_node_certs|join(' ') }} | base64 --wrap=0"
+args:
+executable: /bin/bash
 register: etcd_node_cert_data
 delegate_to: "{{groups['etcd'][0]}}"
-when: inventory_hostname in groups['k8s-cluster'] and sync_certs|default(false) and
-inventory_hostname not in groups['etcd']
+when: (('calico-rr' in groups and inventory_hostname in groups['calico-rr']) or
+inventory_hostname in groups['k8s-cluster']) and
+sync_certs|default(false) and inventory_hostname not in groups['etcd']
 notify: set etcd_secret_changed
 - name: Gen_certs | Copy certs on masters
-shell: "echo '{{etcd_master_cert_data.stdout|quote}}' | base64 -d | tar xz -C {{ etcd_cert_dir }}"
+shell: "base64 -d <<< '{{etcd_master_cert_data.stdout|quote}}' | tar xz -C {{ etcd_cert_dir }}"
+args:
+executable: /bin/bash
 changed_when: false
 when: inventory_hostname in groups['etcd'] and sync_certs|default(false) and
 inventory_hostname != groups['etcd'][0]
 - name: Gen_certs | Copy certs on nodes
-shell: "echo '{{etcd_node_cert_data.stdout|quote}}' | base64 -d | tar xz -C {{ etcd_cert_dir }}"
+shell: "base64 -d <<< '{{etcd_node_cert_data.stdout|quote}}' | tar xz -C {{ etcd_cert_dir }}"
+args:
+executable: /bin/bash
 changed_when: false
 when: sync_certs|default(false) and
 inventory_hostname not in groups['etcd']
@@ -121,7 +130,7 @@
 /usr/local/share/ca-certificates/etcd-ca.crt
 {%- elif ansible_os_family == "RedHat" -%}
 /etc/pki/ca-trust/source/anchors/etcd-ca.crt
-{%- elif ansible_os_family == "CoreOS" -%}
+{%- elif ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] -%}
 /etc/ssl/certs/etcd-ca.pem
 {%- endif %}
 tags: facts
@@ -133,9 +142,9 @@
 remote_src: true
 register: etcd_ca_cert
-- name: Gen_certs | update ca-certificates (Debian/Ubuntu/CoreOS)
+- name: Gen_certs | update ca-certificates (Debian/Ubuntu/Container Linux by CoreOS)
 command: update-ca-certificates
-when: etcd_ca_cert.changed and ansible_os_family in ["Debian", "CoreOS"]
+when: etcd_ca_cert.changed and ansible_os_family in ["Debian", "Container Linux by CoreOS"]
 - name: Gen_certs | update ca-certificates (RedHat)
 command: update-ca-trust extract
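
Stripped of the Jinja templating, the changed cert-sync pipelines are plain bash, which is why the explicit /bin/bash executable was added alongside them. A sketch with invented certificate names:

# on the first etcd host: pack the listed certs into a single base64 blob
certs="ca.pem node-node2.pem node-node2-key.pem"   # example names only
blob=$(tar cfz - -C /etc/ssl/etcd/ssl -T /dev/stdin <<< "$certs" | base64 --wrap=0)
# on the receiving host: decode the blob and unpack it into the cert dir
base64 -d <<< "$blob" | tar xz -C /etc/ssl/etcd/ssl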


@@ -1,17 +1,6 @@
 ---
-- name: Install | Copy etcd binary from downloaddir
-command: rsync -piu "{{ etcd_bin_dir }}/etcd" "{{ bin_dir }}/etcd"
-when: etcd_deployment_type == "host"
-register: etcd_copy
-changed_when: false
-- name: Install | Copy etcdctl binary from downloaddir
-command: rsync -piu "{{ etcd_bin_dir }}/etcdctl" "{{ bin_dir }}/etcdctl"
-when: etcd_deployment_type == "host"
-changed_when: false
 #Plan A: no docker-py deps
-- name: Install | Copy etcdctl binary from container
+- name: Install | Copy etcdctl binary from docker container
 command: sh -c "{{ docker_bin_dir }}/docker rm -f etcdctl-binarycopy;
 {{ docker_bin_dir }}/docker create --name etcdctl-binarycopy {{ etcd_image_repo }}:{{ etcd_image_tag }} &&
 {{ docker_bin_dir }}/docker cp etcdctl-binarycopy:{{ etcd_container_bin_dir }}etcdctl {{ bin_dir }}/etcdctl &&


@@ -0,0 +1,9 @@
---
- name: Install | Copy etcd binary from downloaddir
command: rsync -piu "{{ etcd_bin_dir }}/etcd" "{{ bin_dir }}/etcd"
register: etcd_copy
changed_when: false
- name: Install | Copy etcdctl binary from downloaddir
command: rsync -piu "{{ etcd_bin_dir }}/etcdctl" "{{ bin_dir }}/etcdctl"
changed_when: false


@@ -0,0 +1,26 @@
---
- name: Trust etcd container
command: >-
/usr/bin/rkt trust
--skip-fingerprint-review
--root
https://quay.io/aci-signing-key
register: etcd_rkt_trust_result
until: etcd_rkt_trust_result.rc == 0
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
changed_when: false
- name: Install | Copy etcdctl binary from rkt container
command: >-
/usr/bin/rkt run
--volume=bin-dir,kind=host,source={{ bin_dir}},readOnly=false
--mount=volume=bin-dir,target=/host/bin
{{ etcd_image_repo }}:{{ etcd_image_tag }}
--name=etcdctl-binarycopy
--exec=/bin/cp -- {{ etcd_container_bin_dir }}/etcdctl /host/bin/etcdctl
register: etcd_task_result
until: etcd_task_result.rc == 0
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
changed_when: false


@@ -5,7 +5,7 @@
 tags: [etcd-secrets, facts]
 - include: gen_certs.yml
 tags: etcd-secrets
-- include: install.yml
+- include: "install_{{ etcd_deployment_type }}.yml"
 when: is_etcd_master
 tags: upgrade
 - include: set_cluster_health.yml


@@ -1,120 +0,0 @@
#!/bin/sh
set -a
### BEGIN INIT INFO
# Provides: etcd
# Required-Start: $local_fs $network $syslog
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: etcd distributed k/v store
# Description:
# etcd is a distributed, consistent key-value store for shared configuration and service discovery
### END INIT INFO
PATH=/sbin:/usr/sbin:/bin/:/usr/bin
DESC="etcd k/v store"
NAME=etcd
DAEMON={{ docker_bin_dir }}/docker
DAEMON_EXEC=`basename $DAEMON`
DAEMON_ARGS="run --restart=on-failure:5 --env-file=/etc/etcd.env \
--net=host \
-v /etc/ssl/certs:/etc/ssl/certs:ro \
-v /var/lib/etcd:/var/lib/etcd:rw \
-v {{ etcd_cert_dir }}:{{ etcd_cert_dir }}:ro \
--name={{ etcd_member_name | default("etcd") }} \
{{ etcd_image_repo }}:{{ etcd_image_tag }} \
{% if etcd_after_v3 %}
{{ etcd_container_bin_dir }}etcd
{% endif %}"
SCRIPTNAME=/etc/init.d/$NAME
DAEMON_USER=root
STOP_SCHEDULE="${STOP_SCHEDULE:-QUIT/5/TERM/5/KILL/5}"
PID=/var/run/etcd.pid
# Exit if the binary is not present
[ -x "$DAEMON" ] || exit 0
# Define LSB log_* functions.
# Depend on lsb-base (>= 3.2-14) to ensure that this file is present
# and status_of_proc is working.
. /lib/lsb/init-functions
do_status()
{
status_of_proc -p $PID "$DAEMON" "$NAME" && exit 0 || exit $?
}
# Function that starts the daemon/service
#
do_start()
{
{{ docker_bin_dir }}/docker rm -f {{ etcd_member_name | default("etcd") }} &>/dev/null || true
sleep 1
start-stop-daemon --background --start --quiet --make-pidfile --pidfile $PID --user $DAEMON_USER --exec $DAEMON -- \
$DAEMON_ARGS \
|| return 2
}
#
# Function that stops the daemon/service
#
do_stop()
{
start-stop-daemon --stop --quiet --retry=$STOP_SCHEDULE --pidfile $PID --name $DAEMON_EXEC
RETVAL="$?"
sleep 1
return "$RETVAL"
}
case "$1" in
start)
log_daemon_msg "Starting $DESC" "$NAME"
do_start
case "$?" in
0|1) log_end_msg 0 || exit 0 ;;
2) log_end_msg 1 || exit 1 ;;
esac
;;
stop)
log_daemon_msg "Stopping $DESC" "$NAME"
if do_stop; then
log_end_msg 0
else
log_failure_msg "Can't stop etcd"
log_end_msg 1
fi
;;
status)
if do_status; then
log_end_msg 0
else
log_failure_msg "etcd is not running"
log_end_msg 1
fi
;;
restart|force-reload)
log_daemon_msg "Restarting $DESC" "$NAME"
if do_stop; then
if do_start; then
log_end_msg 0
exit 0
else
rc="$?"
fi
else
rc="$?"
fi
log_failure_msg "Can't restart etcd"
log_end_msg ${rc}
;;
*)
echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload}" >&2
exit 3
;;
esac


@@ -1,109 +0,0 @@
#!/bin/sh
set -a
### BEGIN INIT INFO
# Provides: etcd
# Required-Start: $local_fs $network $syslog
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: etcd distributed k/v store
# Description:
# etcd is a distributed, consistent key-value store for shared configuration and service discovery
### END INIT INFO
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="etcd k/v store"
NAME=etcd
DAEMON={{ bin_dir }}/etcd
SCRIPTNAME=/etc/init.d/$NAME
DAEMON_USER=etcd
STOP_SCHEDULE="${STOP_SCHEDULE:-QUIT/5/TERM/5/KILL/5}"
PID=/var/run/etcd.pid
# Exit if the binary is not present
[ -x "$DAEMON" ] || exit 0
# Read configuration variable file if it is present
[ -f /etc/etcd.env ] && . /etc/etcd.env
# Define LSB log_* functions.
# Depend on lsb-base (>= 3.2-14) to ensure that this file is present
# and status_of_proc is working.
. /lib/lsb/init-functions
do_status()
{
status_of_proc -p $PID "$DAEMON" "$NAME" && exit 0 || exit $?
}
# Function that starts the daemon/service
#
do_start()
{
start-stop-daemon --background --start --quiet --make-pidfile --pidfile $PID --user $DAEMON_USER --exec $DAEMON -- \
$DAEMON_ARGS \
|| return 2
}
#
# Function that stops the daemon/service
#
do_stop()
{
start-stop-daemon --stop --quiet --retry=$STOP_SCHEDULE --pidfile $PID --name $NAME
RETVAL="$?"
sleep 1
return "$RETVAL"
}
case "$1" in
start)
log_daemon_msg "Starting $DESC" "$NAME"
do_start
case "$?" in
0|1) log_end_msg 0 || exit 0 ;;
2) log_end_msg 1 || exit 1 ;;
esac
;;
stop)
log_daemon_msg "Stopping $DESC" "$NAME"
if do_stop; then
log_end_msg 0
else
log_failure_msg "Can't stop etcd"
log_end_msg 1
fi
;;
status)
if do_status; then
log_end_msg 0
else
log_failure_msg "etcd is not running"
log_end_msg 1
fi
;;
restart|force-reload)
log_daemon_msg "Restarting $DESC" "$NAME"
if do_stop; then
if do_start; then
log_end_msg 0
exit 0
else
rc="$?"
fi
else
rc="$?"
fi
log_failure_msg "Can't restart etcd"
log_end_msg ${rc}
;;
*)
echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload}" >&2
exit 3
;;
esac


@@ -0,0 +1,29 @@
[Unit]
Description=etcd rkt wrapper
Documentation=https://github.com/coreos/etcd
Wants=network.target
[Service]
Restart=on-failure
RestartSec=10s
TimeoutStartSec=0
LimitNOFILE=40000
ExecStart=/usr/bin/rkt run \
--uuid-file-save=/var/run/etcd.uuid \
--volume=etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true \
--mount=volume=etc-ssl-certs,target=/etc/ssl/certs \
--volume=etcd-cert-dir,kind=host,source={{ etcd_cert_dir }},readOnly=true \
--mount=volume=etcd-cert-dir,target={{ etcd_cert_dir }} \
--volume=var-lib-etcd,kind=host,source=/var/lib/etcd,readOnly=false \
--mount=volume=var-lib-etcd,target=/var/lib/etcd \
--set-env-file=/etc/etcd.env \
--stage1-from-dir=stage1-fly.aci \
{{ etcd_image_repo }}:{{ etcd_image_tag }} \
--name={{ etcd_member_name | default("etcd") }}
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/etcd.uuid
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/etcd.uuid
[Install]
WantedBy=multi-user.target


@@ -17,8 +17,6 @@ kubednsmasq_image_repo: "gcr.io/google_containers/kube-dnsmasq-amd64"
 kubednsmasq_image_tag: "{{ kubednsmasq_version }}"
 exechealthz_image_repo: "gcr.io/google_containers/exechealthz-amd64"
 exechealthz_image_tag: "{{ exechealthz_version }}"
-calico_policy_image_repo: "calico/kube-policy-controller"
-calico_policy_image_tag: latest
 # Limits for calico apps
 calico_policy_controller_cpu_limit: 100m
@@ -31,9 +29,9 @@ deploy_netchecker: false
 netchecker_port: 31081
 agent_report_interval: 15
 netcheck_namespace: default
-agent_img: "quay.io/l23network/mcp-netchecker-agent:v0.1"
+agent_img: "{{ netcheck_agent_img_repo }}:{{ netcheck_tag }}"
-server_img: "quay.io/l23network/mcp-netchecker-server:v0.1"
+server_img: "{{ netcheck_server_img_repo }}:{{ netcheck_tag }}"
-kubectl_image: "gcr.io/google_containers/kubectl:v0.18.0-120-gaeb4ac55ad12b1-dirty"
+kubectl_image: "{{ netcheck_kubectl_img_repo }}:{{ netcheck_kubectl_tag }}"
 # Limits for netchecker apps
 netchecker_agent_cpu_limit: 30m
@@ -51,3 +49,5 @@ netchecker_kubectl_memory_requests: 64M
 # SSL
 etcd_cert_dir: "/etc/ssl/etcd/ssl"
+calico_cert_dir: "/etc/calico/certs"
+canal_cert_dir: "/etc/canal/certs"


@@ -1,8 +1,13 @@
---
- set_fact:
calico_cert_dir: "{{ canal_cert_dir }}"
when: kube_network_plugin == 'canal'
tags: facts
- name: Write calico-policy-controller yaml
template: src=calico-policy-controller.yml.j2 dest={{kube_config_dir}}/calico-policy-controller.yml
when: inventory_hostname == groups['kube-master'][0]
- name: Start of Calico policy controller
kube:
name: "calico-policy-controller"


@@ -12,7 +12,7 @@
 - {file: kubedns-rc.yml, type: rc}
 - {file: kubedns-svc.yml, type: svc}
 register: manifests
-when: inventory_hostname == groups['kube-master'][0]
+when: dns_mode != 'none' and inventory_hostname == groups['kube-master'][0]
 tags: dnsmasq
 - name: Kubernetes Apps | Start Resources
@@ -24,7 +24,7 @@
 filename: "{{kube_config_dir}}/{{item.item.file}}"
 state: "{{item.changed | ternary('latest','present') }}"
 with_items: "{{ manifests.results }}"
-when: inventory_hostname == groups['kube-master'][0]
+when: dns_mode != 'none' and inventory_hostname == groups['kube-master'][0]
 tags: dnsmasq
 - include: tasks/calico-policy-controller.yml


@@ -36,11 +36,11 @@ spec:
 - name: ETCD_ENDPOINTS
 value: "{{ etcd_access_endpoint }}"
 - name: ETCD_CA_CERT_FILE
-value: "{{ etcd_cert_dir }}/ca.pem"
+value: "{{ calico_cert_dir }}/ca_cert.crt"
 - name: ETCD_CERT_FILE
-value: "{{ etcd_cert_dir }}/node-{{ inventory_hostname }}.pem"
+value: "{{ calico_cert_dir }}/cert.crt"
 - name: ETCD_KEY_FILE
-value: "{{ etcd_cert_dir }}/node-{{ inventory_hostname }}-key.pem"
+value: "{{ calico_cert_dir }}/key.pem"
 # Location of the Kubernetes API - this shouldn't need to be
 # changed so long as it is used in conjunction with
 # CONFIGURE_ETC_HOSTS="true".
@@ -53,10 +53,10 @@ spec:
 - name: CONFIGURE_ETC_HOSTS
 value: "true"
 volumeMounts:
-- mountPath: {{ etcd_cert_dir }}
+- mountPath: {{ calico_cert_dir }}
 name: etcd-certs
 readOnly: true
 volumes:
 - hostPath:
-path: {{ etcd_cert_dir }}
+path: {{ calico_cert_dir }}
 name: etcd-certs


@@ -1,3 +1,20 @@
dependencies:
- role: download
file: "{{ downloads.calico_policy }}"
when: ( enable_network_policy is defined and enable_network_policy == True ) or
( kube_network_plugin == 'canal' )
tags: [download, network, canal]
- role: download
file: "{{ downloads.netcheck_server }}"
when: deploy_netchecker
tags: [download, netchecker]
- role: download
file: "{{ downloads.netcheck_agent }}"
when: deploy_netchecker
tags: [download, netchecker]
- role: download
file: "{{ downloads.netcheck_kubectl }}"
when: deploy_netchecker
tags: [download, netchecker]
- {role: kubernetes-apps/ansible, tags: apps}
- {role: kubernetes-apps/kpm, tags: [apps, kpm]}


@@ -15,7 +15,6 @@
 - name: Master | reload systemd
 command: systemctl daemon-reload
-when: ansible_service_mgr == "systemd"
 - name: Master | reload kubelet
 service:


@@ -7,7 +7,6 @@
 - name: Kubelet | reload systemd
 command: systemctl daemon-reload
-when: ansible_service_mgr == "systemd"
 - name: Kubelet | reload kubelet
 service:


@@ -1,19 +1,31 @@
---
- name: Trust kubelet container
command: >-
/usr/bin/rkt trust
--skip-fingerprint-review
--root
{{ item }}
register: kubelet_rkt_trust_result
until: kubelet_rkt_trust_result.rc == 0
with_items:
- "https://quay.io/aci-signing-key"
- "https://coreos.com/dist/pubkeys/aci-pubkeys.gpg"
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
changed_when: false
when: kubelet_deployment_type == "rkt"
- name: create kubelet working directory
file:
state: directory
path: /var/lib/kubelet
when: kubelet_deployment_type == "rkt"
 - name: install | Write kubelet systemd init file
-template: src=kubelet.service.j2 dest=/etc/systemd/system/kubelet.service backup=yes
+template: "src=kubelet.{{ kubelet_deployment_type }}.service.j2 dest=/etc/systemd/system/kubelet.service backup=yes"
-when: ansible_service_mgr == "systemd"
-notify: restart kubelet
-- name: install | Write kubelet initd script
-template: src=deb-kubelet.initd.j2 dest=/etc/init.d/kubelet owner=root mode=0755 backup=yes
-when: ansible_service_mgr in ["sysvinit","upstart"] and ansible_os_family == "Debian"
-notify: restart kubelet
-- name: install | Write kubelet initd script
-template: src=rh-kubelet.initd.j2 dest=/etc/init.d/kubelet owner=root mode=0755 backup=yes
-when: ansible_service_mgr in ["sysvinit","upstart"] and ansible_os_family == "RedHat"
 notify: restart kubelet
 - name: install | Install kubelet launch script
 template: src=kubelet-container.j2 dest="{{ bin_dir }}/kubelet" owner=kube mode=0755 backup=yes
 notify: restart kubelet
+when: kubelet_deployment_type == "docker"


@@ -1,121 +0,0 @@
#!/bin/bash
#
### BEGIN INIT INFO
# Provides: kubelet
# Required-Start: $local_fs $network $syslog
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: The Kubernetes node container manager
# Description:
# The Kubernetes container manager maintains docker state against a state file.
### END INIT INFO
# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="The Kubernetes container manager"
NAME=kubelet
DAEMON={{ bin_dir }}/kubelet
DAEMON_ARGS=""
DAEMON_LOG_FILE=/var/log/$NAME.log
PIDFILE=/var/run/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME
DAEMON_USER=root
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
# Read configuration variable file if it is present
[ -r {{kube_config_dir}}/$NAME.env ] && . {{kube_config_dir}}/$NAME.env
# Define LSB log_* functions.
# Depend on lsb-base (>= 3.2-14) to ensure that this file is present
# and status_of_proc is working.
. /lib/lsb/init-functions
#
# Function that starts the daemon/service
#
do_start()
{
{{ docker_bin_dir }}/docker rm -f kubelet &>/dev/null || true
sleep 1
# Return
# 0 if daemon has been started
# 1 if daemon was already running
# 2 if daemon could not be started
start-stop-daemon --start --quiet --background --no-close \
--make-pidfile --pidfile $PIDFILE \
--exec $DAEMON -c $DAEMON_USER --test > /dev/null \
|| return 1
start-stop-daemon --start --quiet --background --no-close \
--make-pidfile --pidfile $PIDFILE \
--exec $DAEMON -c $DAEMON_USER -- \
$DAEMON_ARGS >> $DAEMON_LOG_FILE 2>&1 \
|| return 2
}
#
# Function that stops the daemon/service
#
do_stop()
{
# Return
# 0 if daemon has been stopped
# 1 if daemon was already stopped
# 2 if daemon could not be stopped
# other if a failure occurred
start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE --name $NAME
RETVAL="$?"
[ "$RETVAL" = 2 ] && return 2
# Many daemons don't delete their pidfiles when they exit.
rm -f $PIDFILE
return "$RETVAL"
}
case "$1" in
start)
log_daemon_msg "Starting $DESC" "$NAME"
do_start
case "$?" in
0|1) log_end_msg 0 || exit 0 ;;
2) log_end_msg 1 || exit 1 ;;
esac
;;
stop)
log_daemon_msg "Stopping $DESC" "$NAME"
do_stop
case "$?" in
0|1) log_end_msg 0 ;;
2) exit 1 ;;
esac
;;
status)
status_of_proc -p $PIDFILE "$DAEMON" "$NAME" && exit 0 || exit $?
;;
restart|force-reload)
log_daemon_msg "Restarting $DESC" "$NAME"
do_stop
case "$?" in
0|1)
do_start
case "$?" in
0) log_end_msg 0 ;;
1) log_end_msg 1 ;; # Old process is still running
*) log_end_msg 1 ;; # Failed to start
esac
;;
*)
# Failed to stop
log_end_msg 1
;;
esac
;;
*)
echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload}" >&2
exit 3
;;
esac


@@ -1,10 +1,5 @@
{% if ansible_service_mgr in ["sysvinit","upstart"] %}
# Logging directory
KUBE_LOGGING="--log-dir={{ kube_log_dir }} --logtostderr=true"
{% else %}
 # logging to stderr means we get it in the systemd journal
 KUBE_LOGGING="--logtostderr=true"
-{% endif %}
 KUBE_LOG_LEVEL="--v={{ kube_log_level }}"
 # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
 KUBELET_ADDRESS="--address={{ ip | default("0.0.0.0") }}"
@@ -17,9 +12,9 @@ KUBELET_HOSTNAME="--hostname-override={{ ansible_hostname }}"
 {% set kubelet_args_base %}--pod-manifest-path={{ kube_manifest_dir }} --pod-infra-container-image={{ pod_infra_image_repo }}:{{ pod_infra_image_tag }}{% endset %}
 {# DNS settings for kubelet #}
-{% if dns_setup|bool and skip_dnsmasq|bool %}
+{% if dns_mode == 'kubedns' %}
 {% set kubelet_args_cluster_dns %}--cluster_dns={{ skydns_server }}{% endset %}
-{% elif dns_setup|bool %}
+{% elif dns_mode == 'dnsmasq_kubedns' %}
 {% set kubelet_args_cluster_dns %}--cluster_dns={{ dns_server }}{% endset %}
 {% else %}
 {% set kubelet_args_cluster_dns %}{% endset %}
@@ -51,8 +46,3 @@ KUBELET_CLOUDPROVIDER="--cloud-provider={{ cloud_provider }}"
 {% else %}
 KUBELET_CLOUDPROVIDER=""
 {% endif %}
{% if ansible_service_mgr in ["sysvinit","upstart"] %}
DAEMON_ARGS="$KUBE_LOGGING $KUBE_LOG_LEVEL $KUBE_ALLOW_PRIV $KUBELET_API_SERVER $KUBELET_ADDRESS \
$KUBELET_HOSTNAME $KUBELET_ARGS $DOCKER_SOCKET $KUBELET_ARGS $KUBELET_NETWORK_PLUGIN \
$KUBELET_CLOUDPROVIDER"
{% endif %}
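
The practical effect of the dns_mode switch on the kubelet command line, in terms of the template's own variables:

# dns_mode == 'kubedns'         -> --cluster_dns={{ skydns_server }}
# dns_mode == 'dnsmasq_kubedns' -> --cluster_dns={{ dns_server }}
# any other value               -> no --cluster_dns flag is set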


@@ -0,0 +1,58 @@
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
{% if kube_network_plugin is defined and kube_network_plugin == "calico" %}
After=calico-node.service
Wants=network.target calico-node.service
{% else %}
Wants=network.target
{% endif %}
[Service]
Restart=on-failure
RestartSec=10s
TimeoutStartSec=0
LimitNOFILE=40000
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet.uuid
ExecStartPre=-/bin/mkdir -p /var/lib/kubelet
EnvironmentFile={{kube_config_dir}}/kubelet.env
# stage1-fly mounts /proc /sys /dev so no need to duplicate the mounts
ExecStart=/usr/bin/rkt run \
--volume var-log,kind=host,source=/var/log \
--volume dns,kind=host,source=/etc/resolv.conf \
--volume etc-kubernetes,kind=host,source={{ kube_config_dir }},readOnly=false \
--volume etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true \
--volume usr-share-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true \
--volume var-lib-docker,kind=host,source={{ docker_daemon_graph }},readOnly=false \
--volume var-lib-kubelet,kind=host,source=/var/lib/kubelet,readOnly=false \
--volume run,kind=host,source=/run,readOnly=false \
--mount volume=var-log,target=/var/log \
--mount volume=dns,target=/etc/resolv.conf \
--mount volume=etc-kubernetes,target={{ kube_config_dir }} \
--mount volume=etc-ssl-certs,target=/etc/ssl/certs \
--mount volume=usr-share-certs,target=/usr/share/ca-certificates \
--mount volume=var-lib-docker,target=/var/lib/docker \
--mount volume=var-lib-kubelet,target=/var/lib/kubelet \
--mount volume=run,target=/run \
--stage1-from-dir=stage1-fly.aci \
{{ hyperkube_image_repo }}:{{ hyperkube_image_tag }} \
--uuid-file-save=/var/run/kubelet.uuid \
--debug --exec=/kubelet -- \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_ARGS \
$DOCKER_SOCKET \
$KUBELET_REGISTER_NODE \
$KUBELET_NETWORK_PLUGIN
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet.uuid
[Install]
WantedBy=multi-user.target


@@ -1,129 +0,0 @@
#!/bin/bash
#
# /etc/rc.d/init.d/kubelet
#
# chkconfig: 2345 95 95
# description: Daemon for kubelet (kubernetes.io)
### BEGIN INIT INFO
# Provides: kubelet
# Required-Start: $local_fs $network $syslog cgconfig
# Required-Stop:
# Should-Start:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: start and stop kubelet
# Description:
# The Kubernetes container manager maintains docker state against a state file.
### END INIT INFO
# Source function library.
. /etc/rc.d/init.d/functions
prog="kubelet"
exec="{{ bin_dir }}/$prog"
pidfile="/var/run/$prog.pid"
lockfile="/var/lock/subsys/$prog"
logfile="/var/log/$prog"
[ -e {{kube_config_dir}}/$prog.env ] && . {{kube_config_dir}}/$prog.env
start() {
if [ ! -x $exec ]; then
if [ ! -e $exec ]; then
echo "Docker executable $exec not found"
else
echo "You do not have permission to execute the Docker executable $exec"
fi
exit 5
fi
check_for_cleanup
if ! [ -f $pidfile ]; then
printf "Starting $prog:\t"
echo "\n$(date)\n" >> $logfile
$exec $DAEMON_ARGS &>> $logfile &
pid=$!
echo $pid >> $pidfile
touch $lockfile
success
echo
else
failure
echo
printf "$pidfile still exists...\n"
exit 7
fi
}
stop() {
echo -n $"Stopping $prog: "
killproc -p $pidfile -d 300 $prog
retval=$?
echo
[ $retval -eq 0 ] && rm -f $lockfile
return $retval
}
restart() {
stop
start
}
reload() {
restart
}
force_reload() {
restart
}
rh_status() {
status -p $pidfile $prog
}
rh_status_q() {
rh_status >/dev/null 2>&1
}
check_for_cleanup() {
if [ -f ${pidfile} ]; then
/bin/ps -fp $(cat ${pidfile}) > /dev/null || rm ${pidfile}
fi
}
case "$1" in
start)
rh_status_q && exit 0
$1
;;
stop)
rh_status_q || exit 0
$1
;;
restart)
$1
;;
reload)
rh_status_q || exit 7
$1
;;
force-reload)
force_reload
;;
status)
rh_status
;;
condrestart|try-restart)
rh_status_q || exit 0
restart
;;
*)
echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
exit 2
esac
exit $?


@@ -29,6 +29,6 @@ openstack_tenant_id: "{{ lookup('env','OS_TENANT_ID') }}"
 # All clients access each node individually, instead of using a load balancer.
 etcd_multiaccess: true
-# CoreOS cloud init config file to define /etc/resolv.conf content
+# Container Linux by CoreOS cloud init config file to define /etc/resolv.conf content
 # for hostnet pods and infra needs
 resolveconf_cloud_init_conf: /etc/resolveconf_cloud_init.conf


@@ -3,7 +3,7 @@
 notify:
 - Preinstall | reload network
 - Preinstall | reload kubelet
-when: ansible_os_family != "CoreOS"
+when: not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
 # FIXME(bogdando) https://github.com/projectcalico/felix/issues/1185
 - name: Preinstall | reload network
@@ -15,18 +15,18 @@
 networking
 {%- endif %}
 state: restarted
-when: ansible_os_family != "CoreOS" and kube_network_plugin not in ['canal', 'calico']
+when: not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] and kube_network_plugin not in ['canal', 'calico']
-- name: Preinstall | update resolvconf for CoreOS
+- name: Preinstall | update resolvconf for Container Linux by CoreOS
 command: /bin/true
 notify:
 - Preinstall | apply resolvconf cloud-init
 - Preinstall | reload kubelet
-when: ansible_os_family == "CoreOS"
+when: ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
 - name: Preinstall | apply resolvconf cloud-init
 command: /usr/bin/coreos-cloudinit --from-file {{ resolveconf_cloud_init_conf }}
-when: ansible_os_family == "CoreOS"
+when: ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
 - name: Preinstall | reload kubelet
 service:


@@ -3,7 +3,7 @@
 blockinfile:
 dest: /etc/hosts
 block: |-
-{% for item in groups['all'] -%}{{ hostvars[item]['access_ip'] | default(hostvars[item]['ip'] | default(hostvars[item].ansible_default_ipv4.address)) }}{% if (item != hostvars[item]['ansible_hostname']) %} {{ hostvars[item]['ansible_hostname'] }} {{ hostvars[item]['ansible_hostname'] }}.{{ dns_domain }}{% endif %} {{ item }} {{ item }}.{{ dns_domain }}
+{% for item in (groups['k8s-cluster'] + groups['etcd'] + groups['calico-rr']|default([]))|unique -%}{{ hostvars[item]['access_ip'] | default(hostvars[item]['ip'] | default(hostvars[item]['ansible_default_ipv4']['address'])) }}{% if (item != hostvars[item]['ansible_hostname']) %} {{ hostvars[item]['ansible_hostname'] }} {{ hostvars[item]['ansible_hostname'] }}.{{ dns_domain }}{% endif %} {{ item }} {{ item }}.{{ dns_domain }}
 {% endfor %}
 state: present
 create: yes
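
For a small inventory the managed block rendered into /etc/hosts would look roughly like this (addresses, hostnames and the cluster.local domain are invented for illustration):

10.0.0.10 node1 node1.cluster.local
10.0.0.11 node2 node2.cluster.local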


@@ -1,8 +1,11 @@
 ---
+- include: pre-upgrade.yml
+tags: [upgrade, bootstrap-os]
-- name: Force binaries directory for CoreOS
+- name: Force binaries directory for Container Linux by CoreOS
 set_fact:
 bin_dir: "/opt/bin"
-when: ansible_os_family == "CoreOS"
+when: ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
 tags: facts
 - name: check bin dir exists
@@ -59,14 +62,6 @@
 when: "{{ inventory_hostname in groups['k8s-cluster'] }}"
 tags: [kubelet, bootstrap-os, master, node]
-- name: Create kubernetes logs directory
-file:
-path: "{{ kube_log_dir }}"
-state: directory
-owner: kube
-when: ansible_service_mgr in ["sysvinit","upstart"] and "{{ inventory_hostname in groups['k8s-cluster'] }}"
-tags: [bootstrap-os, master, node]
 - name: check cloud_provider value
 fail:
 msg: "If set the 'cloud_provider' var must be set either to 'generic', 'gce', 'aws', 'azure' or 'openstack'"
@@ -122,8 +117,7 @@
 - name: Install epel-release on RedHat/CentOS
 shell: rpm -qa | grep epel-release || rpm -ivh {{ epel_rpm_download_url }}
-when: ansible_distribution in ["CentOS","RedHat"] and
-ansible_distribution_major_version >= 7
+when: ansible_distribution in ["CentOS","RedHat"]
 changed_when: False
 tags: bootstrap-os
@@ -137,7 +131,7 @@
 retries: 4
 delay: "{{ retry_stagger | random + 3 }}"
 with_items: "{{required_pkgs | default([]) | union(common_required_pkgs|default([]))}}"
-when: ansible_os_family != "CoreOS"
+when: not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
 tags: bootstrap-os
 - name: Disable IPv6 DNS lookup
@@ -146,7 +140,7 @@
 line: "precedence ::ffff:0:0/96 100"
 state: present
 backup: yes
-when: disable_ipv6_dns and ansible_os_family != "CoreOS"
+when: disable_ipv6_dns and not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
 tags: bootstrap-os
 # Todo : selinux configuration
@@ -178,8 +172,9 @@
 tags: [bootstrap-os, etchosts]
 - include: resolvconf.yml
+when: dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'
 tags: [bootstrap-os, resolvconf]
 - name: Check if we are running inside a Azure VM
 stat: path=/var/lib/waagent/
 register: azure_check
@@ -187,7 +182,6 @@
 - include: growpart-azure-centos-7.yml
 when: azure_check.stat.exists and
-ansible_distribution in ["CentOS","RedHat"] and
-ansible_distribution_major_version >= 7
+ansible_distribution in ["CentOS","RedHat"]
 tags: bootstrap-os


@@ -0,0 +1,4 @@
---
- name: Stop if non systemd OS type
assert:
that: ansible_service_mgr == "systemd"


@@ -1,7 +1,7 @@
 ---
 - name: create temporary resolveconf cloud init file
 command: cp -f /etc/resolv.conf "{{ resolvconffile }}"
-when: ansible_os_family == "CoreOS"
+when: ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
 - name: Remove search/domain/nameserver options
 lineinfile:
@@ -48,7 +48,7 @@
 - name: get temporary resolveconf cloud init file content
 command: cat {{ resolvconffile }}
 register: cloud_config
-when: ansible_os_family == "CoreOS"
+when: ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
 - name: persist resolvconf cloud init file
 template:
@@ -56,9 +56,9 @@
 src: resolvconf.j2
 owner: root
 mode: 0644
-notify: Preinstall | update resolvconf for CoreOS
+notify: Preinstall | update resolvconf for Container Linux by CoreOS
-when: ansible_os_family == "CoreOS"
+when: ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
 - include: dhclient-hooks.yml
-when: ansible_os_family != "CoreOS"
+when: not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
 tags: [bootstrap-os, resolvconf]


@@ -35,11 +35,11 @@
 {%- if resolvconf|bool -%}/etc/resolvconf/resolv.conf.d/base{%- endif -%}
 head: >-
 {%- if resolvconf|bool -%}/etc/resolvconf/resolv.conf.d/head{%- endif -%}
-when: ansible_os_family != "CoreOS"
+when: not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
-- name: target temporary resolvconf cloud init file (CoreOS)
+- name: target temporary resolvconf cloud init file (Container Linux by CoreOS)
 set_fact: resolvconffile=/tmp/resolveconf_cloud_init_conf
-when: ansible_os_family == "CoreOS"
+when: ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
 - name: target dhclient conf/hook files for Red Hat family
 set_fact:
@@ -67,7 +67,7 @@
 - name: pick dnsmasq cluster IP or default resolver
 set_fact:
 dnsmasq_server: |-
-{%- if skip_dnsmasq|bool and not dns_early|bool -%}
+{%- if dns_mode == 'kubedns' and not dns_early|bool -%}
 {{ [ skydns_server ] + upstream_dns_servers|default([]) }}
 {%- elif dns_early|bool -%}
 {{ upstream_dns_servers|default([]) }}


@@ -74,7 +74,7 @@
 /usr/local/share/ca-certificates/kube-ca.crt
 {%- elif ansible_os_family == "RedHat" -%}
 /etc/pki/ca-trust/source/anchors/kube-ca.crt
-{%- elif ansible_os_family == "CoreOS" -%}
+{%- elif ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] -%}
 /etc/ssl/certs/kube-ca.pem
 {%- endif %}
 tags: facts
@@ -86,9 +86,9 @@
 remote_src: true
 register: kube_ca_cert
-- name: Gen_certs | update ca-certificates (Debian/Ubuntu/CoreOS)
+- name: Gen_certs | update ca-certificates (Debian/Ubuntu/Container Linux by CoreOS)
 command: update-ca-certificates
-when: kube_ca_cert.changed and ansible_os_family in ["Debian", "CoreOS"]
+when: kube_ca_cert.changed and ansible_os_family in ["Debian", "Container Linux by CoreOS"]
 - name: Gen_certs | update ca-certificates (RedHat)
 command: update-ca-trust extract


@@ -21,10 +21,6 @@ global_as_num: "64512"
 # calico_mtu: 1500
 # Limits for apps
-calico_rr_memory_limit: 1000M
-calico_rr_cpu_limit: 300m
-calico_rr_memory_requests: 500M
-calico_rr_cpu_requests: 150m
 calico_node_memory_limit: 500M
 calico_node_cpu_limit: 300m
 calico_node_memory_requests: 256M


@@ -7,7 +7,6 @@
 - name : Calico | reload systemd
 shell: systemctl daemon-reload
-when: ansible_service_mgr == "systemd"
 - name: Calico | reload calico-node
 service:


@@ -5,3 +5,9 @@ global_as_num: "64512"
calico_cert_dir: /etc/calico/certs
etcd_cert_dir: /etc/ssl/etcd/ssl
# Limits for apps
calico_rr_memory_limit: 1000M
calico_rr_cpu_limit: 300m
calico_rr_memory_requests: 500M
calico_rr_cpu_requests: 150m


@@ -7,7 +7,6 @@
 - name : Calico-rr | reload systemd
 shell: systemctl daemon-reload
-when: ansible_service_mgr == "systemd"
 - name: Calico-rr | reload calico-rr
 service:


@@ -1,6 +1,6 @@
 dependencies:
 - role: etcd
 - role: docker
-when: ansible_os_family != "CoreOS"
+when: not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
 - role: download
 file: "{{ downloads.calico_rr }}"


@@ -36,12 +36,10 @@
 - name: Calico-rr | Write calico-rr.env for systemd init file
 template: src=calico-rr.env.j2 dest=/etc/calico/calico-rr.env
-when: ansible_service_mgr == "systemd"
 notify: restart calico-rr
 - name: Calico-rr | Write calico-rr systemd init file
 template: src=calico-rr.service.j2 dest=/etc/systemd/system/calico-rr.service
-when: ansible_service_mgr == "systemd"
 notify: restart calico-rr
 - name: Calico-rr | Configure route reflector


@@ -162,33 +162,19 @@
 run_once: true
 when: legacy_calicoctl
-- name: Calico | Write /etc/network-environment
-template: src=network-environment.j2 dest=/etc/network-environment
-when: ansible_service_mgr in ["sysvinit","upstart"]
 - name: Calico (old) | Write calico-node systemd init file
 template: src=calico-node.service.legacy.j2 dest=/etc/systemd/system/calico-node.service
-when: ansible_service_mgr == "systemd" and legacy_calicoctl
+when: legacy_calicoctl
 notify: restart calico-node
 - name: Calico | Write calico.env for systemd init file
 template: src=calico.env.j2 dest=/etc/calico/calico.env
-when: ansible_service_mgr == "systemd" and not legacy_calicoctl
+when: not legacy_calicoctl
 notify: restart calico-node
 - name: Calico | Write calico-node systemd init file
 template: src=calico-node.service.j2 dest=/etc/systemd/system/calico-node.service
-when: ansible_service_mgr == "systemd" and not legacy_calicoctl
+when: not legacy_calicoctl
-notify: restart calico-node
-- name: Calico | Write calico-node initd script
-template: src=deb-calico.initd.j2 dest=/etc/init.d/calico-node owner=root mode=0755
-when: ansible_service_mgr in ["sysvinit","upstart"] and ansible_os_family == "Debian"
-notify: restart calico-node
-- name: Calico | Write calico-node initd script
-template: src=rh-calico.initd.j2 dest=/etc/init.d/calico-node owner=root mode=0755
-when: ansible_service_mgr in ["sysvinit","upstart"] and ansible_os_family == "RedHat"
 notify: restart calico-node
 - name: Calico | Restart calico-node if secrets changed


@@ -2,13 +2,13 @@
 {{ docker_bin_dir }}/docker run -i --privileged --rm \
   --net=host --pid=host \
   -e ETCD_ENDPOINTS={{ etcd_access_endpoint }} \
-  -e ETCD_CA_CERT_FILE=/etc/calico/certs/ca_cert.crt \
+  -e ETCD_CA_CERT_FILE={{ calico_cert_dir }}/ca_cert.crt \
-  -e ETCD_CERT_FILE=/etc/calico/certs/cert.crt \
+  -e ETCD_CERT_FILE={{ calico_cert_dir }}/cert.crt \
-  -e ETCD_KEY_FILE=/etc/calico/certs/key.pem \
+  -e ETCD_KEY_FILE={{ calico_cert_dir }}/key.pem \
   -v {{ docker_bin_dir }}/docker:{{ docker_bin_dir }}/docker \
   -v /var/run/docker.sock:/var/run/docker.sock \
   -v /var/run/calico:/var/run/calico \
-  -v /etc/calico/certs:/etc/calico/certs:ro \
+  -v {{ calico_cert_dir }}:{{ calico_cert_dir }}:ro \
   --memory={{ calicoctl_memory_limit|regex_replace('Mi', 'M') }} --cpu-shares={{ calicoctl_cpu_limit|regex_replace('m', '') }} \
   {{ calicoctl_image_repo }}:{{ calicoctl_image_tag}} \
   $@
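The wrapper above now takes every certificate path from a single calico_cert_dir variable. A minimal sketch of pinning that variable in group_vars, assuming the role keeps the previous hard-coded location as its default (the value below is illustrative):

calico_cert_dir: /etc/calico/certs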


@@ -1,124 +0,0 @@
#!/bin/bash
#
### BEGIN INIT INFO
# Provides: calico-node
# Required-Start: $local_fs $network $syslog
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Calico docker container
# Description:
# Runs calico as a docker container
### END INIT INFO
set -a
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="Calico-node Docker"
NAME=calico-node
DAEMON={{ bin_dir }}/calicoctl
DAEMON_ARGS=""
DOCKER=$(which docker)
SCRIPTNAME=/etc/init.d/$NAME
DAEMON_USER=root
# Exit if the binary is not present
[ -x "$DAEMON" ] || exit 0
# Exit if the docker package is not installed
[ -x "$DOCKER" ] || exit 0
# Read configuration variable file if it is present
[ -r /etc/network-environment ] && . /etc/network-environment
# Define LSB log_* functions.
# Depend on lsb-base (>= 3.2-14) to ensure that this file is present
# and status_of_proc is working.
. /lib/lsb/init-functions
do_status()
{
if [ $($DOCKER ps --format "{{.Image}}" | grep -cw 'calico/node') -eq 1 ]; then
return 0
else
return 1
fi
}
# Function that starts the daemon/service
#
do_start()
{
do_status
retval=$?
if [ $retval -ne 0 ]; then
{% if legacy_calicoctl %}
${DAEMON} node --ip=${DEFAULT_IPV4} >>/dev/null && return 0 || return 2
{% else %}
${DAEMON} node run --ip=${DEFAULT_IPV4} >>/dev/null && return 0 || return 2
{% endif %}
else
return 1
fi
}
#
# Function that stops the daemon/service
#
do_stop()
{
{% if legacy_calicoctl %}
${DAEMON} node stop >> /dev/null || ${DAEMON} node stop --force >> /dev/null
{% else %}
echo "Current version of ${DAEMON} doesn't support 'node stop' command!"
return 1
{% endif %}
}
case "$1" in
start)
log_daemon_msg "Starting $DESC" "$NAME"
do_start
case "$?" in
0|1) log_end_msg 0 || exit 0 ;;
2) log_end_msg 1 || exit 1 ;;
esac
;;
stop)
log_daemon_msg "Stopping $DESC" "$NAME"
if do_stop; then
log_end_msg 0
else
log_failure_msg "Can't stop calico-node"
log_end_msg 1
fi
;;
status)
if do_status; then
log_end_msg 0
else
log_failure_msg "Calico-node is not running"
log_end_msg 1
fi
;;
restart|force-reload)
log_daemon_msg "Restarting $DESC" "$NAME"
if do_stop; then
if do_start; then
log_end_msg 0
exit 0
else
rc="$?"
fi
else
rc="$?"
fi
log_failure_msg "Can't restart Calico-node"
log_end_msg ${rc}
;;
*)
echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload}" >&2
exit 3
;;
esac


@@ -1,12 +0,0 @@
# This host's IPv4 address (the source IP address used to reach other nodes
# in the Kubernetes cluster).
DEFAULT_IPV4={{ip | default(ansible_default_ipv4.address) }}
# The Kubernetes master IP
KUBERNETES_MASTER={{ kube_apiserver_endpoint }}
# IP and port of etcd instance used by Calico
ETCD_ENDPOINTS={{ etcd_access_endpoint }}
ETCD_CA_CERT_FILE=/etc/calico/certs/ca_cert.crt
ETCD_CERT_FILE=/etc/calico/certs/cert.crt
ETCD_KEY_FILE=/etc/calico/certs/key.pem


@@ -1,140 +0,0 @@
#!/bin/bash
#
# /etc/rc.d/init.d/calico-node
#
# chkconfig: 2345 95 95
# description: Daemon for calico-node (http://www.projectcalico.org/)
set -a
### BEGIN INIT INFO
# Provides: calico-node
# Required-Start: $local_fs $network $syslog cgconfig
# Required-Stop:
# Should-Start:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: start and stop calico-node
# Description:
# Manage calico-docker container
### END INIT INFO
# Source function library.
. /etc/rc.d/init.d/functions
prog="calicoctl"
exec="{{ bin_dir }}/$prog"
dockerexec="$(which docker)"
logfile="/var/log/$prog"
[ -e /etc/network-environment ] && for i in $(cat /etc/network-environment | egrep '(^$|^#)'); do export $i; done
do_status()
{
if [ $($dockerexec ps --format "{{.Image}}" | grep -cw 'calico/node') -ne 1 ]; then
return 1
fi
}
do_start() {
if [ ! -x $exec ]; then
if [ ! -e $exec ]; then
echo "calico-node executable $exec not found"
else
echo "You do not have permission to execute the calico-node executable $exec"
fi
exit 5
fi
[ -x "$dockerexec" ] || exit 0
do_status
retval=$?
if [ $retval -ne 0 ]; then
printf "Starting $prog:\t"
echo "\n$(date)\n" >> $logfile
{% if legacy_calicoctl %}
$exec node --ip=${DEFAULT_IPV4} &>>$logfile
{% else %}
$exec node run --ip=${DEFAULT_IPV4} &>>$logfile
{% endif %}
success
echo
else
echo -n "calico-node's already running"
success
exit 0
fi
}
do_stop() {
echo -n $"Stopping $prog: "
{% if legacy_calicoctl %}
$exec node stop >> /dev/null || $exec node stop --force >> /dev/null
{% else %}
echo "Current version of ${exec} doesn't support 'node stop' command!"
return 1
{% endif %}
retval=$?
echo
return $retval
}
restart() {
do_stop
do_start
}
reload() {
restart
}
force_reload() {
restart
}
case "$1" in
start)
do_start
case "$?" in
0|1) success || exit 0 ;;
2) failure || exit 1 ;;
esac
;;
stop)
echo -n "Stopping $DESC" "$NAME"
if do_stop; then
success
echo
else
echo -n "Can't stop calico-node"
failure
echo
fi
;;
restart)
$1
;;
reload)
$1
;;
force-reload)
force_reload
;;
status)
if do_status; then
echo -n "Calico-node is running"
success
echo
else
echo -n "Calico-node is not running"
failure
echo
fi
;;
*)
echo $"Usage: $0 {start|stop|status|restart|reload|force-reload}"
exit 2
esac
exit $?


@@ -16,3 +16,6 @@ flannel_memory_limit: 500M
 flannel_cpu_limit: 300m
 flannel_memory_requests: 256M
 flannel_cpu_requests: 150m
+flannel_cert_dir: /etc/flannel/certs
+etcd_cert_dir: /etc/ssl/etcd/ssl


@@ -15,13 +15,12 @@
 - name : Flannel | reload systemd
   shell: systemctl daemon-reload
-  when: ansible_service_mgr == "systemd"
 - name: Flannel | reload docker.socket
   service:
     name: docker.socket
     state: restarted
-  when: ansible_os_family == 'CoreOS'
+  when: ansible_os_family in ['CoreOS', 'Container Linux by CoreOS']
 - name: Flannel | reload docker
   service:


@@ -7,6 +7,25 @@
delegate_to: "{{groups['etcd'][0]}}" delegate_to: "{{groups['etcd'][0]}}"
run_once: true run_once: true
- name: Flannel | Create flannel certs directory
file:
dest: "{{ flannel_cert_dir }}"
state: directory
mode: 0750
owner: root
group: root
- name: Flannel | Link etcd certificates for flanneld
file:
src: "{{ etcd_cert_dir }}/{{ item.s }}"
dest: "{{ flannel_cert_dir }}/{{ item.d }}"
state: hard
force: yes
with_items:
- {s: "ca.pem", d: "ca_cert.crt"}
- {s: "node-{{ inventory_hostname }}.pem", d: "cert.crt"}
- {s: "node-{{ inventory_hostname }}-key.pem", d: "key.pem"}
- name: Flannel | Create flannel pod manifest - name: Flannel | Create flannel pod manifest
template: template:
src: flannel-pod.yml src: flannel-pod.yml
@@ -51,31 +70,11 @@
     docker_network_options: '"--bip={{ flannel_subnet }} --mtu={{ flannel_mtu }}"'
   tags: facts
-- name: Flannel | Remove non-systemd docker daemon network options that don't match desired line
-  lineinfile:
-    dest: "{{ docker_options_file }}"
-    regexp: "^DOCKER_NETWORK_OPTIONS=(?!{{ docker_network_options|regex_escape() }})"
-    state: absent
-  when: ansible_service_mgr in ["sysvinit","upstart"]
-- name: Flannel | Set non-systemd docker daemon network options
-  lineinfile:
-    dest: "{{ docker_options_file }}"
-    line: DOCKER_NETWORK_OPTIONS={{ docker_network_options }}
-    insertbefore: ^{{ docker_options_name }}=
-    owner: root
-    group: root
-    mode: 0644
-  notify:
-    - Flannel | restart docker
-  when: ansible_service_mgr in ["sysvinit","upstart"]
 - name: Flannel | Ensure path for docker network systemd drop-in
   file:
     path: "/etc/systemd/system/docker.service.d"
     state: directory
     owner: root
-  when: ansible_service_mgr == "systemd"
 - name: Flannel | Create docker network systemd drop-in
   template:
@@ -83,6 +82,3 @@
dest: "/etc/systemd/system/docker.service.d/flannel-options.conf" dest: "/etc/systemd/system/docker.service.d/flannel-options.conf"
notify: notify:
- Flannel | restart docker - Flannel | restart docker
when: ansible_service_mgr == "systemd"
- meta: flush_handlers
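For reference, with an inventory hostname of node1 (an illustrative value) and the defaults shown earlier, the hard-link task above would leave roughly this layout under flannel_cert_dir:

# /etc/flannel/certs/ca_cert.crt -> hard link to /etc/ssl/etcd/ssl/ca.pem
# /etc/flannel/certs/cert.crt    -> hard link to /etc/ssl/etcd/ssl/node-node1.pem
# /etc/flannel/certs/key.pem     -> hard link to /etc/ssl/etcd/ssl/node-node1-key.pem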


@@ -1,5 +1,5 @@
 [Service]
-{% if ansible_os_family == "CoreOS" %}
+{% if ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] %}
 Environment="DOCKER_OPT_BIP=--bip={{ flannel_subnet }} --mtu={{ flannel_mtu }}"
 {% else %}
 Environment="DOCKER_NETWORK_OPTIONS=--bip={{ flannel_subnet }} --mtu={{ flannel_mtu }}"


@@ -14,7 +14,7 @@
path: "/run/flannel" path: "/run/flannel"
- name: "etcd-certs" - name: "etcd-certs"
hostPath: hostPath:
path: "{{ etcd_cert_dir }}" path: "{{ flannel_cert_dir }}"
containers: containers:
- name: "flannel-container" - name: "flannel-container"
image: "{{ flannel_image_repo }}:{{ flannel_image_tag }}" image: "{{ flannel_image_repo }}:{{ flannel_image_tag }}"
@@ -29,7 +29,7 @@
     command:
     - "/bin/sh"
     - "-c"
-    - "/opt/bin/flanneld -etcd-endpoints {{ etcd_access_endpoint }} -etcd-prefix /{{ cluster_name }}/network -etcd-cafile {{ etcd_cert_dir }}/ca.pem -etcd-certfile {{ etcd_cert_dir }}/node-{{ inventory_hostname }}.pem -etcd-keyfile {{ etcd_cert_dir }}/node-{{ inventory_hostname }}-key.pem {% if flannel_interface is defined %}-iface {{ flannel_interface }}{% endif %} {% if flannel_public_ip is defined %}-public-ip {{ flannel_public_ip }}{% endif %}"
+    - "/opt/bin/flanneld -etcd-endpoints {{ etcd_access_endpoint }} -etcd-prefix /{{ cluster_name }}/network -etcd-cafile {{ flannel_cert_dir }}/ca_cert.crt -etcd-certfile {{ flannel_cert_dir }}/cert.crt -etcd-keyfile {{ flannel_cert_dir }}/key.pem {% if flannel_interface is defined %}-iface {{ flannel_interface }}{% endif %} {% if flannel_public_ip is defined %}-public-ip {{ flannel_public_ip }}{% endif %}"
     ports:
     - hostPort: 10253
       containerPort: 10253
@@ -37,7 +37,7 @@
- name: "subnetenv" - name: "subnetenv"
mountPath: "/run/flannel" mountPath: "/run/flannel"
- name: "etcd-certs" - name: "etcd-certs"
mountPath: "{{ etcd_cert_dir }}" mountPath: "{{ flannel_cert_dir }}"
readOnly: true readOnly: true
securityContext: securityContext:
privileged: true privileged: true


@@ -7,7 +7,6 @@
 - name : Weave | reload systemd
   shell: systemctl daemon-reload
-  when: ansible_service_mgr == "systemd"
 - name: restart weaveproxy
   command: /bin/true


@@ -31,17 +31,14 @@
 - name: Weave | Write weave systemd init file
   template: src=weave.service.j2 dest=/etc/systemd/system/weave.service
-  when: ansible_service_mgr == "systemd"
   notify: restart weave
 - name: Weave | Write weaveproxy systemd init file
   template: src=weaveproxy.service.j2 dest=/etc/systemd/system/weaveproxy.service
-  when: ansible_service_mgr == "systemd"
   notify: restart weaveproxy
 - name: Weave | Write weaveexpose systemd init file
   template: src=weaveexpose.service.j2 dest=/etc/systemd/system/weaveexpose.service
-  when: ansible_service_mgr == "systemd"
   notify: restart weaveexpose
 - meta: flush_handlers


@@ -1,7 +1,7 @@
 ---
 - name: reset | stop services
-  service: name={{item}} state=stopped
+  service: name={{ item }} state=stopped
   with_items:
     - kubelet
     - etcd
@@ -16,13 +16,26 @@
     - etcd
   register: services_removed
+- name: reset | remove docker dropins
+  file:
+    path: "/etc/systemd/system/docker.service.d/{{ item }}"
+    state: absent
+  with_items:
+    - docker-dns.conf
+    - docker-options.conf
+  register: docker_dropins_removed
 - name: reset | systemctl daemon-reload
   command: systemctl daemon-reload
-  when: ansible_service_mgr == "systemd" and services_removed.changed
+  when: services_removed.changed or docker_dropins_removed.changed
 - name: reset | remove all containers
   shell: "{{ docker_bin_dir }}/docker ps -aq | xargs -r docker rm -fv"
+- name: reset | restart docker if needed
+  service: name=docker state=restarted
+  when: docker_dropins_removed.changed
 - name: reset | gather mounted kubelet dirs
   shell: mount | grep /var/lib/kubelet | awk '{print $3}' | tac
   register: mounted_dirs
@@ -42,6 +55,40 @@
     - /etc/cni
     - /etc/nginx
     - /etc/dnsmasq.d
+    - /etc/dnsmasq.conf
+    - /etc/dnsmasq.d-available
     - /etc/etcd.env
     - /etc/calico
     - /opt/cni
+    - /etc/dhcp/dhclient.d/zdnsupdate.sh
+    - /etc/dhcp/dhclient-exit-hooks.d/zdnsupdate
+    - "{{ bin_dir }}/kubelet"
+- name: reset | remove dns settings from dhclient.conf
+  blockinfile:
+    dest: "{{ item }}"
+    state: absent
+    follow: yes
+    marker: "# Ansible entries {mark}"
+  failed_when: false
+  with_items:
+    - /etc/dhclient.conf
+    - /etc/dhcp/dhclient.conf
+- name: reset | remove host entries from /etc/hosts
+  blockinfile:
+    dest: "/etc/hosts"
+    state: absent
+    follow: yes
+    marker: "# Ansible inventory hosts {mark}"
+- name: reset | Restart network
+  service:
+    name: >-
+      {% if ansible_os_family == "RedHat" -%}
+      network
+      {%- elif ansible_os_family == "Debian" -%}
+      networking
+      {%- endif %}
+    state: restarted
+  when: ansible_os_family != "CoreOS"
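The inline Jinja block above picks the init script name per distribution family. An equivalent, somewhat more compact formulation would be a dict lookup; this is only an illustrative sketch, not part of the change:

- name: reset | Restart network
  service:
    # Map the OS family straight to its network service name.
    name: "{{ {'RedHat': 'network', 'Debian': 'networking'}[ansible_os_family] }}"
    state: restarted
  when: ansible_os_family in ['RedHat', 'Debian']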


@@ -0,0 +1,6 @@
---
rkt_version: 1.12.0
rkt_pkg_version: "{{ rkt_version }}-1"
rkt_download_src: https://github.com/coreos/rkt
rkt_download_url: "{{ rkt_download_src }}/releases/download/v{{ rkt_version }}"


@@ -0,0 +1,35 @@
---
- name: gather os specific variables for rkt
  include_vars: "{{ item }}"
  with_first_found:
    - files:
        - "{{ ansible_distribution|lower }}-{{ ansible_distribution_version|lower|replace('/', '_') }}.yml"
        - "{{ ansible_distribution|lower }}-{{ ansible_distribution_release }}.yml"
        - "{{ ansible_distribution|lower }}-{{ ansible_distribution_major_version|lower|replace('/', '_') }}.yml"
        - "{{ ansible_distribution|lower }}.yml"
        - "{{ ansible_os_family|lower }}.yml"
        - defaults.yml
      paths:
        - ../vars
      skip: true
  tags: facts

- name: install rkt pkg on ubuntu
  apt:
    deb: "{{ rkt_download_url }}/{{ rkt_pkg_name }}"
    state: present
  register: rkt_task_result
  until: rkt_task_result|success
  retries: 4
  delay: "{{ retry_stagger | random + 3 }}"
  when: ansible_os_family == "Debian"

- name: install rkt pkg on centos
  yum:
    pkg: "{{ rkt_download_url }}/{{ rkt_pkg_name }}"
    state: present
  register: rkt_task_result
  until: rkt_task_result|success
  retries: 4
  delay: "{{ retry_stagger | random + 3 }}"
  when: ansible_os_family == "RedHat"
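As a worked example of the with_first_found lookup above, on an Ubuntu 16.04 host (an illustrative case) the probe order under ../vars would be:

# ubuntu-16.04.yml
# ubuntu-xenial.yml
# ubuntu-16.yml
# ubuntu.yml
# debian.yml        (ansible_os_family|lower)
# defaults.yml
# The first file of this list that actually exists is loaded.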

roles/rkt/tasks/main.yml

@@ -0,0 +1,4 @@
---
- name: Install rkt
  include: install.yml


@@ -0,0 +1,2 @@
---
rkt_pkg_name: "rkt_{{ rkt_pkg_version }}_amd64.deb"


@@ -0,0 +1,2 @@
---
rkt_pkg_name: "rkt-{{ rkt_pkg_version }}.x86_64.rpm"


@@ -0,0 +1,2 @@
---
rkt_pkg_name: "rkt-{{ rkt_pkg_version }}.x86_64.rpm"
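Substituting the role defaults shown earlier (rkt_version 1.12.0, hence rkt_pkg_version 1.12.0-1) into {{ rkt_download_url }}/{{ rkt_pkg_name }} gives, as a worked example:

# Debian: https://github.com/coreos/rkt/releases/download/v1.12.0/rkt_1.12.0-1_amd64.deb
# RedHat: https://github.com/coreos/rkt/releases/download/v1.12.0/rkt-1.12.0-1.x86_64.rpm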

scripts/premoderator.sh

@@ -0,0 +1,15 @@
#!/bin/sh -eux -o pipefail
# A naive premoderation script to allow Gitlab CI pipeline on a specific PRs' comment
# Exits with 0, if the pipeline is good to go
CURL_ARGS="-fs --connect-timeout 5 --max-time 5 --retry-max-time 20 --retry 4 --retry-delay 5"
MAGIC="${MAGIC:-ci check this}"
# Get PR number from CI_BUILD_REF_NAME
issue=$(echo ${CI_BUILD_REF_NAME} | perl -ne '/^pr-(\d+)-\S+$/ && print $1')
# Get the user name from the PR comments with the wanted magic incantation casted
user=$(curl ${CURL_ARGS} "https://api.github.com/repos/kubernetes-incubator/kargo/issues/${issue}/comments" \
| jq -M "map(select(.body | contains (\"$MAGIC\"))) | .[0] .user.login" | tr -d '"')
# Check for the required user group membership to allow (exit 0) or decline (exit >0) the pipeline
[ "$user" != "null" ] || exit 1
curl ${CURL_ARGS} "https://api.github.com/orgs/kubernetes-incubator/members/${user}"
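For a branch named pr-874-fix (an illustrative value of CI_BUILD_REF_NAME), the perl expression extracts issue number 874; the script then requires a PR comment containing the magic phrase "ci check this" whose author is a member of the kubernetes-incubator org. A minimal sketch of wiring the script into .gitlab-ci.yml; the job name, stage, and branch pattern are illustrative, not taken from this repository:

premoderator:
  stage: moderator
  script:
    - sh scripts/premoderator.sh
  only:
    - /^pr-.*$/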


@@ -24,7 +24,7 @@
     instance_names: "{{instance_names}}"
     machine_type: "{{ cloud_machine_type }}"
     image: "{{ cloud_image }}"
-    preemptible: yes
+    preemptible: no
     service_account_email: "{{ gce_service_account_email }}"
     pem_file: "{{ gce_pem_file | default(omit)}}"
     credentials_file: "{{gce_credentials_file | default(omit)}}"


@@ -7,14 +7,14 @@
   tasks:
-    - name: Force binaries directory for CoreOS
+    - name: Force binaries directory for Container Linux by CoreOS
       set_fact:
         bin_dir: "/opt/bin"
-      when: ansible_os_family == "CoreOS"
+      when: ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
     - set_fact:
         bin_dir: "/usr/local/bin"
-      when: ansible_os_family != "CoreOS"
+      when: not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
     - name: Run a replica controller composed of 2 pods
       shell: "{{bin_dir}}/kubectl run test --image={{test_image_repo}}:{{test_image_tag}} --replicas=2 --command -- tail -f /dev/null"


@@ -3,14 +3,14 @@
   tasks:
-    - name: Force binaries directory for CoreOS
+    - name: Force binaries directory for Container Linux by CoreOS
       set_fact:
         bin_dir: "/opt/bin"
-      when: ansible_os_family == "CoreOS"
+      when: ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
     - set_fact:
         bin_dir: "/usr/local/bin"
-      when: ansible_os_family != "CoreOS"
+      when: not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
     - name: Get pod names
       shell: "{{bin_dir}}/kubectl get pods -o json"


@@ -12,14 +12,14 @@
     netchecker_port: 31081
   tasks:
-    - name: Force binaries directory for CoreOS
+    - name: Force binaries directory for Container Linux by CoreOS
       set_fact:
         bin_dir: "/opt/bin"
-      when: ansible_os_family == "CoreOS"
+      when: ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
     - set_fact:
         bin_dir: "/usr/local/bin"
-      when: ansible_os_family != "CoreOS"
+      when: not ansible_os_family in ["CoreOS", "Container Linux by CoreOS"]
     - name: Wait for netchecker server
       shell: "{{ bin_dir }}/kubectl get pods --namespace {{netcheck_namespace}} | grep ^netchecker-server"