Merge branch 'master' into multi-arch-support

Author: Antoine Legrand
Date: 2018-08-17 16:35:50 +02:00
Committed by: GitHub
191 changed files with 2050 additions and 2634 deletions

.gitignore (2 changes)

@@ -12,9 +12,9 @@ temp
 *.tfstate
 *.tfstate.backup
 contrib/terraform/aws/credentials.tfvars
-**/*.sw[pon]
 /ssh-bastion.conf
+**/*.sw[pon]
+*~
 vagrant/
 # Byte-compiled / optimized / DLL files


@@ -93,7 +93,7 @@ before_script:
   # Check out latest tag if testing upgrade
   # Uncomment when gitlab kubespray repo has tags
   #- test "${UPGRADE_TEST}" != "false" && git fetch --all && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
-  - test "${UPGRADE_TEST}" != "false" && git checkout f7d52564aad2ff8e337634951beb4a881c0e8aa6
+  - test "${UPGRADE_TEST}" != "false" && git checkout 8b3ce6e418ccf48171eb5b3888ee1af84f8d71ba
   # Checkout the CI vars file so it is available
   - test "${UPGRADE_TEST}" != "false" && git checkout "${CI_BUILD_REF}" tests/files/${CI_JOB_NAME}.yml
   # Workaround https://github.com/kubernetes-incubator/kubespray/issues/2021

OWNERS (12 changes)

@@ -1,9 +1,7 @@
 # See the OWNERS file documentation:
-# https://github.com/kubernetes/kubernetes/blob/master/docs/devel/owners.md
-owners:
-  - Smana
-  - ant31
-  - bogdando
-  - mattymo
-  - rsmitty
+# https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md
+approvers:
+  - kubespray-approvers
+reviewers:
+  - kubespray-reviewers

OWNERS_ALIASES (new file, 17 lines)

@@ -0,0 +1,17 @@
+aliases:
+  kubespray-approvers:
+    - ant31
+    - mattymo
+    - atoms
+    - chadswen
+    - rsmitty
+    - bogdando
+    - bradbeam
+    - woopstar
+    - riverzhang
+    - holser
+    - smana
+  kubespray-reviewers:
+    - jjungnickel
+    - archifleks
+    - chapsuk


@@ -6,9 +6,9 @@ Deploy a Production Ready Kubernetes Cluster
 If you have questions, join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
 - Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere or Baremetal**
-- **High available** cluster
+- **Highly available** cluster
 - **Composable** (Choice of the network plugin for instance)
-- Support most popular **Linux distributions**
+- Supports most popular **Linux distributions**
 - **Continuous integration tests**
 Quick Start
@@ -17,6 +17,7 @@ Quick Start
 To deploy the cluster you can use :
 ### Ansible
 # Install dependencies from ``requirements.txt``
 sudo pip install -r requirements.txt
@@ -36,19 +37,16 @@ To deploy the cluster you can use :
 ### Vagrant
-For Vagrant we need to install python dependencies for provisioning tasks.\
+For Vagrant we need to install python dependencies for provisioning tasks.
 Check if Python and pip are installed:
-```sh
-python -v && pip -v
-```
+python -V && pip -V
-If this returns the version of the software, you're good to go. If not, download and install Python from here https://www.python.org/downloads/source/
+If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
 Install the necessary requirements
-```sh
 sudo pip install -r requirements.txt
 vagrant up
-```
 Documents
 ---------
@@ -88,19 +86,25 @@ Supported Linux Distributions
 Note: Upstart/SysV init based OS types are not supported.
-Versions of supported components
---------------------------------
-- [kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.10.2
-- [etcd](https://github.com/coreos/etcd/releases) v3.2.16
-- [flanneld](https://github.com/coreos/flannel/releases) v0.10.0
-- [calico](https://docs.projectcalico.org/v2.6/releases/) v2.6.8
-- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
-- [cilium](https://github.com/cilium/cilium) v1.0.0-rc8
-- [contiv](https://github.com/contiv/install/releases) v1.1.7
-- [weave](http://weave.works/) v2.3.0
-- [docker](https://www.docker.com/) v17.03 (see note)
-- [rkt](https://coreos.com/rkt/docs/latest/) v1.21.0 (see Note 2)
+Supported Components
+--------------------
+- Core
+  - [kubernetes](https://github.com/kubernetes/kubernetes) v1.11.2
+  - [etcd](https://github.com/coreos/etcd) v3.2.18
+  - [docker](https://www.docker.com/) v17.03 (see note)
+  - [rkt](https://github.com/rkt/rkt) v1.21.0 (see Note 2)
+- Network Plugin
+  - [calico](https://github.com/projectcalico/calico) v2.6.8
+  - [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
+  - [cilium](https://github.com/cilium/cilium) v1.1.2
+  - [contiv](https://github.com/contiv/install) v1.1.7
+  - [flanneld](https://github.com/coreos/flannel) v0.10.0
+  - [weave](https://github.com/weaveworks/weave) v2.4.0
+- Application
+  - [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v1.1.0-k8s1.10
+  - [cert-manager](https://github.com/jetstack/cert-manager) v0.4.1
+  - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.18.0
 Note: kubernetes doesn't support newer docker versions. Among other things, kubelet currently breaks on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the yum versionlock plugin or apt pinning.

Vagrantfile (4 changes)

@@ -44,6 +44,8 @@ $kube_node_instances_with_disks = false
 $kube_node_instances_with_disks_size = "20G"
 $kube_node_instances_with_disks_number = 2
+$playbook = "cluster.yml"
+
 $local_release_dir = "/vagrant/temp"
 host_vars = {}
@@ -157,7 +159,7 @@ Vagrant.configure("2") do |config|
   # when all the machines are up and ready.
   if i == $num_instances
     config.vm.provision "ansible" do |ansible|
-      ansible.playbook = "cluster.yml"
+      ansible.playbook = $playbook
       if File.exist?(File.join(File.dirname($inventory), "hosts"))
         ansible.inventory_path = $inventory
       end


@@ -37,7 +37,7 @@
     - role: rkt
       tags: rkt
      when: "'rkt' in [etcd_deployment_type, kubelet_deployment_type, vault_deployment_type]"
-    - { role: download, tags: download, skip_downloads: false }
+    - { role: download, tags: download, when: "not skip_downloads" }
   environment: "{{proxy_env}}"
 - hosts: etcd:k8s-cluster:vault:calico-rr
@@ -51,7 +51,7 @@
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
     - { role: kubespray-defaults}
-    - { role: etcd, tags: etcd, etcd_cluster_setup: true, etcd_events_cluster_setup: true }
+    - { role: etcd, tags: etcd, etcd_cluster_setup: true, etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}" }
 - hosts: k8s-cluster:calico-rr
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"


@@ -9,8 +9,8 @@ Resource Group. It will not install Kubernetes itself, this has to be done in a
 ## Requirements
-- [Install azure-cli](https://docs.microsoft.com/en-us/azure/xplat-cli-install)
-- [Login with azure-cli](https://docs.microsoft.com/en-us/azure/xplat-cli-connect)
+- [Install azure-cli](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
+- [Login with azure-cli](https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli?view=azure-cli-latest)
 - Dedicated Resource Group created in the Azure Portal or through azure-cli
 ## Configuration through group_vars/all


@@ -1 +1 @@
-../../../inventory/group_vars
+../../../inventory/local/group_vars


@@ -2,7 +2,7 @@
 # For Ubuntu.
 glusterfs_default_release: ""
 glusterfs_ppa_use: yes
-glusterfs_ppa_version: "3.8"
+glusterfs_ppa_version: "4.1"
 # Gluster configuration.
 gluster_mount_dir: /mnt/gluster


@@ -2,7 +2,7 @@
 # For Ubuntu.
 glusterfs_default_release: ""
 glusterfs_ppa_use: yes
-glusterfs_ppa_version: "3.8"
+glusterfs_ppa_version: "3.12"
 # Gluster configuration.
 gluster_mount_dir: /mnt/gluster


@@ -1,2 +1,2 @@
 ---
-glusterfs_daemon: glusterfs-server
+glusterfs_daemon: glusterd


@@ -17,10 +17,10 @@ This project will create:
 - Export the variables for your AWS credentials or edit `credentials.tfvars`:
 ```
-export AWS_ACCESS_KEY_ID="www"
-export AWS_SECRET_ACCESS_KEY="xxx"
-export AWS_SSH_KEY_NAME="yyy"
-export AWS_DEFAULT_REGION="zzz"
+export TF_VAR_AWS_ACCESS_KEY_ID="www"
+export TF_VAR_AWS_SECRET_ACCESS_KEY="xxx"
+export TF_VAR_AWS_SSH_KEY_NAME="yyy"
+export TF_VAR_AWS_DEFAULT_REGION="zzz"
 ```
 - Rename `contrib/terraform/aws/terraform.tfvars.example` to `terraform.tfvars`
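The point of the `TF_VAR_` rename above is that Terraform only reads environment variables carrying that prefix; a bare `AWS_*` variable is invisible to it as an input variable. A minimal sketch of that lookup, in Python for illustration (the function name and sample values are hypothetical, the prefix convention is Terraform's):

```python
# Sketch: how Terraform resolves an input variable from its environment.
# Only TF_VAR_<name> is consulted; an unprefixed variable is ignored.
def resolve_tf_var(name, env):
    """Return the value Terraform would see for input variable `name`."""
    return env.get("TF_VAR_" + name)

env = {
    "TF_VAR_AWS_SSH_KEY_NAME": "yyy",
    "AWS_DEFAULT_REGION": "zzz",  # no TF_VAR_ prefix: ignored by Terraform
}
print(resolve_tf_var("AWS_SSH_KEY_NAME", env))    # yyy
print(resolve_tf_var("AWS_DEFAULT_REGION", env))  # None
```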


@@ -181,7 +181,7 @@ data "template_file" "inventory" {
 resource "null_resource" "inventories" {
   provisioner "local-exec" {
-    command = "echo '${data.template_file.inventory.rendered}' > ../../../inventory/hosts"
+    command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
   }
   triggers {


@@ -31,3 +31,5 @@ default_tags = {
 #  Env = "devtest"
 #  Product = "kubernetes"
 }
+
+inventory_file = "../../../inventory/hosts"


@@ -103,3 +103,7 @@ variable "default_tags" {
   description = "Default tags for all resources"
   type = "map"
 }
+
+variable "inventory_file" {
+  description = "Where to store the generated inventory file"
+}


@@ -32,7 +32,11 @@ floating IP addresses or not.
 - Kubernetes worker nodes
 Note that the Ansible script will report an invalid configuration if you wind up
-with an even number of etcd instances since that is not a valid configuration.
+with an even number of etcd instances since that is not a valid configuration. This
+restriction includes standalone etcd nodes that are deployed in a cluster along with
+master nodes with etcd replicas. As an example, if you have three master nodes with
+etcd replicas and three standalone etcd nodes, the script will fail since there are
+now six total etcd replicas.
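The arithmetic behind that restriction can be sketched with a small check (a hypothetical helper, not Kubespray's actual validation code): every etcd member counts toward the total, whether it runs on a master or standalone, and quorum needs an odd count.

```python
# Sketch of the quorum rule: total etcd members must be odd.
def etcd_cluster_valid(masters_with_etcd, standalone_etcd):
    total = masters_with_etcd + standalone_etcd
    return total > 0 and total % 2 == 1

print(etcd_cluster_valid(3, 0))  # True  -- three replicas
print(etcd_cluster_valid(3, 3))  # False -- six total, the example above
```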
 ### GlusterFS
The Terraform configuration supports provisioning of an optional GlusterFS The Terraform configuration supports provisioning of an optional GlusterFS
@@ -219,6 +223,7 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
 |`number_of_gfs_nodes_no_floating_ip` | Number of gluster servers to provision. |
 |`gfs_volume_size_in_gb` | Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks |
 |`supplementary_master_groups` | To add ansible groups to the masters, such as `kube-node` for tainting them as nodes, empty by default. |
+|`supplementary_node_groups` | To add ansible groups to the nodes, such as `kube-ingress` for running ingress controller pods, empty by default. |
#### Terraform state files #### Terraform state files


@@ -3,6 +3,7 @@ module "network" {
   external_net = "${var.external_net}"
   network_name = "${var.network_name}"
+  subnet_cidr = "${var.subnet_cidr}"
   cluster_name = "${var.cluster_name}"
   dns_nameservers = "${var.dns_nameservers}"
 }
@@ -24,6 +25,7 @@ module "compute" {
   source = "modules/compute"
   cluster_name = "${var.cluster_name}"
+  az_list = "${var.az_list}"
   number_of_k8s_masters = "${var.number_of_k8s_masters}"
   number_of_k8s_masters_no_etcd = "${var.number_of_k8s_masters_no_etcd}"
   number_of_etcd = "${var.number_of_etcd}"
@@ -49,6 +51,7 @@ module "compute" {
   k8s_node_fips = "${module.ips.k8s_node_fips}"
   bastion_fips = "${module.ips.bastion_fips}"
   supplementary_master_groups = "${var.supplementary_master_groups}"
+  supplementary_node_groups = "${var.supplementary_node_groups}"
   network_id = "${module.network.router_id}"
 }


@@ -59,6 +59,17 @@ resource "openstack_compute_secgroup_v2" "k8s" {
     self = true
   }
 }
+
+resource "openstack_compute_secgroup_v2" "worker" {
+  name = "${var.cluster_name}-k8s-worker"
+  description = "${var.cluster_name} - Kubernetes worker nodes"
+
+  rule {
+    ip_protocol = "tcp"
+    from_port = "30000"
+    to_port = "32767"
+    cidr = "0.0.0.0/0"
+  }
+}
 resource "openstack_compute_instance_v2" "bastion" {
   name = "${var.cluster_name}-bastion-${count.index+1}"
@@ -91,6 +102,7 @@ resource "openstack_compute_instance_v2" "bastion" {
 resource "openstack_compute_instance_v2" "k8s_master" {
   name = "${var.cluster_name}-k8s-master-${count.index+1}"
   count = "${var.number_of_k8s_masters}"
+  availability_zone = "${element(var.az_list, count.index)}"
   image_name = "${var.image}"
   flavor_id = "${var.flavor_k8s_master}"
   key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@@ -120,6 +132,7 @@ resource "openstack_compute_instance_v2" "k8s_master" {
 resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
   name = "${var.cluster_name}-k8s-master-ne-${count.index+1}"
   count = "${var.number_of_k8s_masters_no_etcd}"
+  availability_zone = "${element(var.az_list, count.index)}"
   image_name = "${var.image}"
   flavor_id = "${var.flavor_k8s_master}"
   key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@@ -148,6 +161,7 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
 resource "openstack_compute_instance_v2" "etcd" {
   name = "${var.cluster_name}-etcd-${count.index+1}"
   count = "${var.number_of_etcd}"
+  availability_zone = "${element(var.az_list, count.index)}"
   image_name = "${var.image}"
   flavor_id = "${var.flavor_etcd}"
   key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@@ -169,6 +183,7 @@ resource "openstack_compute_instance_v2" "etcd" {
 resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
   name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
   count = "${var.number_of_k8s_masters_no_floating_ip}"
+  availability_zone = "${element(var.az_list, count.index)}"
   image_name = "${var.image}"
   flavor_id = "${var.flavor_k8s_master}"
   key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@@ -193,6 +208,7 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
 resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
   name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
   count = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
+  availability_zone = "${element(var.az_list, count.index)}"
   image_name = "${var.image}"
   flavor_id = "${var.flavor_k8s_master}"
   key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@@ -216,6 +232,7 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
 resource "openstack_compute_instance_v2" "k8s_node" {
   name = "${var.cluster_name}-k8s-node-${count.index+1}"
   count = "${var.number_of_k8s_nodes}"
+  availability_zone = "${element(var.az_list, count.index)}"
   image_name = "${var.image}"
   flavor_id = "${var.flavor_k8s_node}"
   key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@@ -226,12 +243,13 @@ resource "openstack_compute_instance_v2" "k8s_node" {
   security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
                      "${openstack_compute_secgroup_v2.bastion.name}",
+                     "${openstack_compute_secgroup_v2.worker.name}",
                      "default",
                     ]
   metadata = {
     ssh_user = "${var.ssh_user}"
-    kubespray_groups = "kube-node,k8s-cluster"
+    kubespray_groups = "kube-node,k8s-cluster,${var.supplementary_node_groups}"
     depends_on = "${var.network_id}"
   }
@@ -244,6 +262,7 @@ resource "openstack_compute_instance_v2" "k8s_node" {
 resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
   name = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
   count = "${var.number_of_k8s_nodes_no_floating_ip}"
+  availability_zone = "${element(var.az_list, count.index)}"
   image_name = "${var.image}"
   flavor_id = "${var.flavor_k8s_node}"
   key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@@ -253,12 +272,13 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
   }
   security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
+                     "${openstack_compute_secgroup_v2.worker.name}",
                      "default",
                     ]
   metadata = {
     ssh_user = "${var.ssh_user}"
-    kubespray_groups = "kube-node,k8s-cluster,no-floating"
+    kubespray_groups = "kube-node,k8s-cluster,no-floating,${var.supplementary_node_groups}"
     depends_on = "${var.network_id}"
   }
@@ -292,6 +312,7 @@ resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
 resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
   name = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
   count = "${var.number_of_gfs_nodes_no_floating_ip}"
+  availability_zone = "${element(var.az_list, count.index)}"
   image_name = "${var.image_gfs}"
   flavor_id = "${var.flavor_gfs_node}"
   key_pair = "${openstack_compute_keypair_v2.k8s.name}"
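The `element(var.az_list, count.index)` expression added to each instance resource wraps the index around the list, so instances are spread across the configured availability zones round-robin. The same logic in Python (the AZ names are illustrative):

```python
# Mimic Terraform's element(list, index): index wraps modulo the list length.
def element(az_list, index):
    return az_list[index % len(az_list)]

az_list = ["az1", "az2"]
# Four nodes land alternately in az1 and az2.
print([element(az_list, i) for i in range(4)])  # ['az1', 'az2', 'az1', 'az2']
```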


@@ -1,5 +1,9 @@
 variable "cluster_name" {}
+
+variable "az_list" {
+  type = "list"
+}
 variable "number_of_k8s_masters" {}
 variable "number_of_k8s_masters_no_etcd" {}
@@ -59,3 +63,7 @@ variable "bastion_fips" {
 variable "supplementary_master_groups" {
   default = ""
 }
+
+variable "supplementary_node_groups" {
+  default = ""
+}


@@ -12,7 +12,7 @@ resource "openstack_networking_network_v2" "k8s" {
 resource "openstack_networking_subnet_v2" "k8s" {
   name = "${var.cluster_name}-internal-network"
   network_id = "${openstack_networking_network_v2.k8s.id}"
-  cidr = "10.0.0.0/24"
+  cidr = "${var.subnet_cidr}"
   ip_version = 4
   dns_nameservers = "${var.dns_nameservers}"
 }


@@ -7,3 +7,5 @@ variable "cluster_name" {}
 variable "dns_nameservers" {
   type = "list"
 }
+
+variable "subnet_cidr" {}


@@ -41,5 +41,6 @@ number_of_k8s_nodes_no_floating_ip = 4
 # networking
 network_name = "<network>"
 external_net = "<UUID>"
+subnet_cidr = "<cidr>"
 floatingip_pool = "<pool>"


@@ -2,6 +2,12 @@ variable "cluster_name" {
   default = "example"
 }
+
+variable "az_list" {
+  description = "List of Availability Zones available in your OpenStack cluster"
+  type = "list"
+  default = ["nova"]
+}
 variable "number_of_bastions" {
   default = 1
 }
@@ -97,6 +103,12 @@ variable "network_name" {
   default = "internal"
 }
+
+variable "subnet_cidr" {
+  description = "Subnet CIDR block."
+  type = "string"
+  default = "10.0.0.0/24"
+}
 variable "dns_nameservers" {
   description = "An array of DNS name server names used by hosts in this subnet."
   type = "list"
@@ -116,3 +128,8 @@ variable "supplementary_master_groups" {
   description = "supplementary kubespray ansible groups for masters, such kube-node"
   default = ""
 }
+
+variable "supplementary_node_groups" {
+  description = "supplementary kubespray ansible groups for worker nodes, such as kube-ingress"
+  default = ""
+}


@@ -706,6 +706,10 @@ def query_list(hosts):
     for name, attrs, hostgroups in hosts:
         for group in set(hostgroups):
+            # Ansible 2.6.2 stopped supporting empty group names: https://github.com/ansible/ansible/pull/42584/commits/d4cd474b42ed23d8f8aabb2a7f84699673852eaf
+            # Empty group name defaults to "all" in Ansible < 2.6.2, so we alter empty group names to "all"
+            if not group:
+                group = "all"
             groups[group].setdefault('hosts', [])
             groups[group]['hosts'].append(name)
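The fix above can be exercised in isolation. This sketch mirrors the `query_list` grouping loop from the diff (the host tuples are made-up sample data): any empty group name is mapped to `"all"`, which is what Ansible < 2.6.2 did implicitly and Ansible >= 2.6.2 rejects outright.

```python
from collections import defaultdict

def query_list(hosts):
    """Group hosts by hostgroup, mapping empty group names to "all"."""
    groups = defaultdict(dict)
    for name, attrs, hostgroups in hosts:
        for group in set(hostgroups):
            if not group:
                group = "all"
            groups[group].setdefault('hosts', [])
            groups[group]['hosts'].append(name)
    return groups

hosts = [("node1", {}, ["kube-node", ""]), ("node2", {}, [""])]
groups = query_list(hosts)
print(sorted(groups["all"]["hosts"]))  # ['node1', 'node2']
print(groups["kube-node"]["hosts"])    # ['node1']
```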


@@ -52,13 +52,13 @@ You can modify how Kubespray sets up DNS for your cluster with the variables ``d
 ## dns_mode
 ``dns_mode`` configures how Kubespray will setup cluster DNS. There are four modes available:
-#### dnsmasq_kubedns (default)
+#### dnsmasq_kubedns
 This installs an additional dnsmasq DaemonSet which gives more flexibility and lifts some
 limitations (e.g. number of nameservers). Kubelet is instructed to use dnsmasq instead of kubedns/skydns.
 It is configured to forward all DNS queries belonging to cluster services to kubedns/skydns. All
 other queries are forwarded to the nameservers found in ``upstream_dns_servers`` or ``default_resolver``.
-#### kubedns
+#### kubedns (default)
 This does not install the dnsmasq DaemonSet and instructs kubelet to directly use kubedns/skydns for
 all queries.


@@ -38,9 +38,9 @@ See more details in the [ansible guide](ansible.md).
 Adding nodes
 ------------
-You may want to add **worker** nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.
-- Add the new worker node to your inventory under kube-node (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
+You may want to add worker, master or etcd nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.
+- Add the new worker node to your inventory in the appropriate group (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
 - Run the ansible-playbook command, substituting `scale.yml` for `cluster.yml`:
     ansible-playbook -i inventory/mycluster/hosts.ini scale.yml -b -v \
@@ -51,11 +51,26 @@ Remove nodes
-You may want to remove **worker** nodes from your existing cluster. This can be done by re-running the `remove-node.yml` playbook. First, all nodes will be drained; then some kubernetes services are stopped and some certificates deleted; finally, the kubectl command is executed to delete these nodes. This can be combined with the add-node function. This is generally helpful when doing something like autoscaling your clusters. Of course, if a node is not working, you can remove the node and install it again.
-- Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
-- Run the ansible-playbook command, substituting `remove-node.yml`:
+You may want to remove **worker** nodes from your existing cluster. This can be done by re-running the `remove-node.yml` playbook. First, all nodes will be drained; then some kubernetes services are stopped and some certificates deleted; finally, the kubectl command is executed to delete these nodes. This can be combined with the add-node function. This is generally helpful when doing something like autoscaling your clusters. Of course, if a node is not working, you can remove the node and install it again.
+Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
 
 ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
   --private-key=~/.ssh/private_key
+
+We support two ways to select the nodes:
+
+- Use `--extra-vars "node=<nodename>,<nodename2>"` to select the node you want to delete.
+```
+ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
+  --private-key=~/.ssh/private_key \
+  --extra-vars "node=nodename,nodename2"
+```
+or
+- Use `--limit nodename,nodename2` to select the node.
+```
+ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
+  --private-key=~/.ssh/private_key \
+  --limit nodename,nodename2
+```
 Connecting to Kubernetes
Connecting to Kubernetes Connecting to Kubernetes


@@ -3,7 +3,7 @@ OpenStack
To deploy kubespray on [OpenStack](https://www.openstack.org/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'openstack'`.
After that make sure to source in your OpenStack credentials like you would do when using `nova-client` or `neutron-client` by using `source path/to/your/openstack-rc` or `. path/to/your/openstack-rc`.
The next step is to make sure the hostnames in your `inventory` file are identical to your instance names in OpenStack.
Otherwise [cinder](https://wiki.openstack.org/wiki/Cinder) won't work as expected.
@@ -12,35 +12,34 @@ Unless you are using calico you can now run the playbook.
**Additional step needed when using calico:**
Calico does not encapsulate all packets with the hosts' ip addresses. Instead the packets will be routed with the pods' ip addresses directly.
OpenStack will filter and drop all packets from ips it does not know to prevent spoofing.
In order to make calico work on OpenStack you will need to tell OpenStack to allow calico's packets by allowing the networks it uses.
First you will need the ids of your OpenStack instances that will run kubernetes:

openstack server list --project YOUR_PROJECT
+--------------------------------------+--------+----------------------------------+--------+-------------+
| ID                                   | Name   | Tenant ID                        | Status | Power State |
+--------------------------------------+--------+----------------------------------+--------+-------------+
| e1f48aad-df96-4bce-bf61-62ae12bf3f95 | k8s-1  | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running     |
| 725cd548-6ea3-426b-baaa-e7306d3c8052 | k8s-2  | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running     |

Then you can use the instance ids to find the connected [neutron](https://wiki.openstack.org/wiki/Neutron) ports (now managed through the unified `openstack` client):

openstack port list -c id -c device_id --project YOUR_PROJECT
+--------------------------------------+--------------------------------------+
| id                                   | device_id                            |
+--------------------------------------+--------------------------------------+
| 5662a4e0-e646-47f0-bf88-d80fbd2d99ef | e1f48aad-df96-4bce-bf61-62ae12bf3f95 |
| e5ae2045-a1e1-4e99-9aac-4353889449a7 | 725cd548-6ea3-426b-baaa-e7306d3c8052 |

Given the port ids on the left, you can set the allowed addresses in OpenStack. Note that you have to allow both `kube_service_addresses` (default `10.233.0.0/18`) and `kube_pods_subnet` (default `10.233.64.0/18`).

# allow kube_service_addresses and kube_pods_subnet network
openstack port set 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
openstack port set e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18

Now you can finally run the playbook.
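Before running the playbook, you can sanity-check the two CIDRs being whitelisted with Python's standard `ipaddress` module. This is only a convenience sketch; the subnet values are the Kubespray defaults quoted above, and the pod IP is an illustrative example:

```python
import ipaddress

# Kubespray defaults mentioned above
kube_service_addresses = ipaddress.ip_network("10.233.0.0/18")
kube_pods_subnet = ipaddress.ip_network("10.233.64.0/18")

# Both ranges go into the port's allowed addresses; they must not overlap,
# or service and pod routes would conflict.
assert not kube_service_addresses.overlaps(kube_pods_subnet)

# An illustrative pod IP: it should fall in the pod subnet, not the service range
pod_ip = ipaddress.ip_address("10.233.70.12")
print(pod_ip in kube_pods_subnet)        # True
print(pod_ip in kube_service_addresses)  # False
```

If either check fails for custom subnet values, fix the CIDRs before updating the ports.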
View File
@@ -81,3 +81,61 @@ kubernetes-apps/rotate_tokens role, only pods in kube-system are destroyed and
recreated. All other invalidated service account tokens are cleaned up
automatically, but other pods are not deleted out of an abundance of caution
for impact to user deployed pods.
### Component-based upgrades
A deployer may want to upgrade specific components in order to minimize risk
or save time. This strategy is not covered by CI as of this writing, so it is
not guaranteed to work.
These commands are useful only for upgrading fully-deployed, healthy, existing
hosts. They will not work for undeployed or partially deployed hosts.
Upgrade docker:
```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=docker
```
Upgrade etcd:
```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=etcd
```
Upgrade vault:
```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=vault
```
Upgrade kubelet:
```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=node --skip-tags=k8s-gen-certs,k8s-gen-tokens
```
Upgrade Kubernetes master components:
```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=master
```
Upgrade network plugins:
```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=network
```
Upgrade all add-ons:
```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=apps
```
Upgrade just helm (assuming `helm_enabled` is true):
```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=helm
```
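The tag-to-component mapping used by the commands above can be collected into a small helper. This is only a convenience sketch: the function name is made up and the inventory path is the example used above, while the tags are the ones listed in the commands:

```python
# Tags for each component, as used by the ansible-playbook commands above.
COMPONENT_TAGS = {
    "docker": "--tags=docker",
    "etcd": "--tags=etcd",
    "vault": "--tags=vault",
    # kubelet upgrades must skip certificate/token regeneration
    "kubelet": "--tags=node --skip-tags=k8s-gen-certs,k8s-gen-tokens",
    "master": "--tags=master",
    "network": "--tags=network",
    "apps": "--tags=apps",
    "helm": "--tags=helm",
}

def upgrade_command(component, inventory="inventory/sample/hosts.ini"):
    """Build the ansible-playbook invocation for a single component."""
    return "ansible-playbook -b -i %s cluster.yml %s" % (
        inventory, COMPONENT_TAGS[component])

print(upgrade_command("etcd"))
# ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=etcd
```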
View File
@@ -8,8 +8,8 @@
    version: "{{ item.version }}"
    state: "{{ item.state }}"
  with_items:
    - { state: "present", name: "docker", version: "3.4.1" }
    - { state: "present", name: "docker-compose", version: "1.21.2" }

- name: CephFS Provisioner | Check Go version
  shell: |
@@ -35,19 +35,19 @@
- name: CephFS Provisioner | Clone repo
  git:
    repo: https://github.com/kubernetes-incubator/external-storage.git
    dest: "~/go/src/github.com/kubernetes-incubator/external-storage"
    version: 06fddbe2
    clone: yes
    update: yes

- name: CephFS Provisioner | Build image
  shell: |
    cd ~/go/src/github.com/kubernetes-incubator/external-storage
    REGISTRY=quay.io/kubespray/ VERSION=06fddbe2 make ceph/cephfs

- name: CephFS Provisioner | Push image
  docker_image:
    name: quay.io/kubespray/cephfs-provisioner:06fddbe2
    push: yes
  retries: 10
View File
@@ -131,3 +131,6 @@ bin_dir: /usr/local/bin
# The read-only port for the Kubelet to serve on with no authentication/authorization. Uncomment to enable.
#kube_read_only_port: 10255
# Does coreos need auto upgrade, default is true
#coreos_auto_upgrade: true
View File
@@ -19,7 +19,7 @@ kube_users_dir: "{{ kube_config_dir }}/users"
kube_api_anonymous_auth: true

## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.11.2

# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
@@ -67,25 +67,21 @@ kube_users:
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: calico

# Weave deployment
# weave_password: ~
# weave_checkpoint_disable: false
# weave_conn_limit: 100
# weave_hairpin_mode: true
# weave_ipalloc_range: {{ kube_pods_subnet }}
# weave_expect_npc: {{ enable_network_policy }}
# weave_kube_peers: ~
# weave_ipalloc_init: ~
# weave_expose_ip: ~
# weave_metrics_addr: ~
# weave_status_addr: ~
# weave_mtu: 1376
# weave_no_masq_local: true
# weave_extra_args: ~

# Enable kubernetes network policies
enable_network_policy: false
@@ -140,12 +136,21 @@ dns_domain: "{{ cluster_name }}"
# Path used to store Docker data
docker_daemon_graph: "/var/lib/docker"

## Used to set docker daemon iptables options to true
#docker_iptables_enabled: "true"

## A string of extra options to pass to the docker daemon.
## This string should be exactly as you wish it to appear.
## An obvious use case is allowing insecure-registry access
## to self hosted registries like so:
docker_options: >
  --insecure-registry={{ kube_service_addresses }} --graph={{ docker_daemon_graph }} {{ docker_log_opts }}
  {% if ansible_architecture == "aarch64" and ansible_os_family == "RedHat" %}
  --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current
  --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd
  --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --signature-verification=false
  {% endif %}
docker_bin_dir: "/usr/bin"

## If non-empty will override default system MountFlags value.
@@ -164,6 +169,9 @@ helm_deployment_type: host
# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent

# audit log for kubernetes
kubernetes_audit: false

# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
dashboard_enabled: true
@@ -174,9 +182,6 @@ efk_enabled: false
# Helm deployment
helm_enabled: false

# Registry deployment
registry_enabled: false
# registry_namespace: "{{ system_namespace }}"
@@ -192,19 +197,21 @@ local_volume_provisioner_enabled: false
# CephFS provisioner deployment
cephfs_provisioner_enabled: false
# cephfs_provisioner_namespace: "cephfs-provisioner"
# cephfs_provisioner_cluster: ceph
# cephfs_provisioner_monitors: "172.24.0.1:6789,172.24.0.2:6789,172.24.0.3:6789"
# cephfs_provisioner_admin_id: admin
# cephfs_provisioner_secret: secret
# cephfs_provisioner_storage_class: cephfs
# cephfs_provisioner_reclaim_policy: Delete
# cephfs_provisioner_claim_root: /volumes
# cephfs_provisioner_deterministic_names: true

# Nginx ingress controller deployment
ingress_nginx_enabled: false
# ingress_nginx_host_network: false
# ingress_nginx_nodeselector:
#   node-role.kubernetes.io/master: "true"
# ingress_nginx_namespace: "ingress-nginx"
# ingress_nginx_insecure_port: 80
# ingress_nginx_secure_port: 443
View File
@@ -26,11 +26,6 @@
# node5
# node6
# [kube-ingress]
# node2
# node3
# [k8s-cluster:children]
# kube-master
# kube-node
# kube-ingress
View File
@@ -1,199 +0,0 @@
#!/usr/bin/env python
DOCUMENTATION = '''
---
module: hashivault_pki_issue
version_added: "0.1"
short_description: Hashicorp Vault PKI issue module
description:
- Module to issue PKI certs from Hashicorp Vault.
options:
url:
description:
- url for vault
default: to environment variable VAULT_ADDR
ca_cert:
description:
- "path to a PEM-encoded CA cert file to use to verify the Vault server TLS certificate"
default: to environment variable VAULT_CACERT
ca_path:
description:
- "path to a directory of PEM-encoded CA cert files to verify the Vault server TLS certificate : if ca_cert is specified, its value will take precedence"
default: to environment variable VAULT_CAPATH
client_cert:
description:
- "path to a PEM-encoded client certificate for TLS authentication to the Vault server"
default: to environment variable VAULT_CLIENT_CERT
client_key:
description:
- "path to an unencrypted PEM-encoded private key matching the client certificate"
default: to environment variable VAULT_CLIENT_KEY
verify:
description:
- "if set, do not verify presented TLS certificate before communicating with Vault server : setting this variable is not recommended except during testing"
default: to environment variable VAULT_SKIP_VERIFY
authtype:
description:
- "authentication type to use: token, userpass, github, ldap, approle"
default: token
token:
description:
- token for vault
default: to environment variable VAULT_TOKEN
username:
description:
- username to login to vault.
default: to environment variable VAULT_USER
password:
description:
- password to login to vault.
default: to environment variable VAULT_PASSWORD
secret:
description:
- secret to read.
data:
description:
- Keys and values to write.
update:
description:
- Update rather than overwrite.
default: False
min_ttl:
description:
- Issue new cert if existing cert has lower TTL expressed in hours or a percentage. Examples: 70800h, 50%
force:
description:
- Force issue of new cert
'''
EXAMPLES = '''
---
- hosts: localhost
tasks:
- hashivault_write:
secret: giant
data:
foo: foe
fie: fum
'''
def main():
argspec = hashivault_argspec()
argspec['secret'] = dict(required=True, type='str')
argspec['update'] = dict(required=False, default=False, type='bool')
argspec['data'] = dict(required=False, default={}, type='dict')
module = hashivault_init(argspec, supports_check_mode=True)
result = hashivault_write(module)
if result.get('failed'):
module.fail_json(**result)
else:
module.exit_json(**result)
def _convert_to_seconds(original_value):
try:
value = str(original_value)
seconds = 0
if 'h' in value:
ray = value.split('h')
seconds = int(ray.pop(0)) * 3600
value = ''.join(ray)
if 'm' in value:
ray = value.split('m')
seconds += int(ray.pop(0)) * 60
value = ''.join(ray)
if value:
ray = value.split('s')
seconds += int(ray.pop(0))
return seconds
except Exception:
pass
return original_value
def hashivault_needs_refresh(old_data, min_ttl):
    print("Checking refresh")
    print(old_data)  # print_r is not a Python builtin; print is the intended call
    return False
# if sorted(old_data.keys()) != sorted(new_data.keys()):
# return True
# for key in old_data:
# old_value = old_data[key]
# new_value = new_data[key]
# if old_value == new_value:
# continue
# if key != 'ttl' and key != 'max_ttl':
# return True
# old_value = _convert_to_seconds(old_value)
# new_value = _convert_to_seconds(new_value)
# if old_value != new_value:
# return True
# return False
#
def hashivault_changed(old_data, new_data):
if sorted(old_data.keys()) != sorted(new_data.keys()):
return True
for key in old_data:
old_value = old_data[key]
new_value = new_data[key]
if old_value == new_value:
continue
if key != 'ttl' and key != 'max_ttl':
return True
old_value = _convert_to_seconds(old_value)
new_value = _convert_to_seconds(new_value)
if old_value != new_value:
return True
return False
from ansible.module_utils.hashivault import *
@hashiwrapper
def hashivault_write(module):
result = {"changed": False, "rc": 0}
params = module.params
client = hashivault_auth_client(params)
secret = params.get('secret')
force = params.get('force', False)
min_ttl = params.get('min_ttl', "100%")
returned_data = None
if secret.startswith('/'):
secret = secret.lstrip('/')
#else:
# secret = ('secret/%s' % secret)
data = params.get('data')
with warnings.catch_warnings():
warnings.simplefilter("ignore")
changed = True
write_data = data
if params.get('update') or module.check_mode:
# Do not move this read outside of the update
read_data = client.read(secret) or {}
read_data = read_data.get('data', {})
write_data = dict(read_data)
write_data.update(data)
result['write_data'] = write_data
result['read_data'] = read_data
changed = hashivault_changed(read_data, write_data)
if not changed:
changed = hashivault_needs_refresh(read_data, min_ttl)
if changed:
if not module.check_mode:
returned_data = client.write((secret), **write_data)
if returned_data:
result['data'] = returned_data
result['msg'] = "Secret %s written" % secret
result['changed'] = changed
return result
if __name__ == '__main__':
main()
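For reference, the `min_ttl` values documented above (`70800h`, `50%`) are handled by `_convert_to_seconds` in the module above (which this commit removes): parseable durations become seconds, anything else — such as a percentage — is returned unchanged. A standalone sketch of that behaviour:

```python
def convert_to_seconds(original_value):
    """Parse '2h30m15s'-style durations into seconds; return the input
    unchanged when it cannot be parsed (e.g. a percentage like '50%')."""
    try:
        value = str(original_value)
        seconds = 0
        if 'h' in value:
            head, _, value = value.partition('h')
            seconds = int(head) * 3600
        if 'm' in value:
            head, _, value = value.partition('m')
            seconds += int(head) * 60
        if value:
            seconds += int(value.rstrip('s'))
        return seconds
    except Exception:
        return original_value

print(convert_to_seconds("70800h"))  # 254880000
print(convert_to_seconds("2h30m"))   # 9000
print(convert_to_seconds("50%"))     # 50% (unparseable, returned as-is)
```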
View File
@@ -5,7 +5,7 @@
    ansible_ssh_pipelining: true
  gather_facts: true

- hosts: "{{ node | default('etcd:k8s-cluster:vault:calico-rr') }}"
  vars_prompt:
    name: "delete_nodes_confirmation"
    prompt: "Are you sure you want to delete nodes state? Type 'yes' to delete nodes."
@@ -22,7 +22,7 @@
  roles:
    - { role: remove-node/pre-remove, tags: pre-remove }

- hosts: "{{ node | default('kube-node') }}"
  roles:
    - { role: kubespray-defaults }
    - { role: reset, tags: reset }
View File
@@ -4,3 +4,6 @@ pip_python_coreos_modules:
  - six

override_system_hostname: true
coreos_auto_upgrade: true
View File
@@ -18,7 +18,11 @@ mv -n pypy-$PYPY_VERSION-linux64 pypy
## library fixup
mkdir -p pypy/lib
if [ -f /lib64/libncurses.so.5.9 ]; then
  ln -snf /lib64/libncurses.so.5.9 $BINDIR/pypy/lib/libtinfo.so.5
elif [ -f /lib64/libncurses.so.6.1 ]; then
  ln -snf /lib64/libncurses.so.6.1 $BINDIR/pypy/lib/libtinfo.so.5
fi

cat > $BINDIR/python <<EOF
#!/bin/bash
View File
@@ -62,3 +62,8 @@
  with_items: "{{pip_python_coreos_modules}}"
  environment:
    PATH: "{{ ansible_env.PATH }}:{{ bin_dir }}"
- name: Bootstrap | Disable auto-upgrade
shell: "systemctl stop locksmithd.service && systemctl mask --now locksmithd.service"
when:
- not coreos_auto_upgrade
View File
@@ -17,7 +17,7 @@ dockerproject_repo_key_info:
dockerproject_repo_info:
  repos:

docker_dns_servers_strict: true

docker_container_storage_setup: false
@@ -40,3 +40,6 @@ dockerproject_rh_repo_base_url: 'https://yum.dockerproject.org/repo/main/centos/
dockerproject_rh_repo_gpgkey: 'https://yum.dockerproject.org/gpg'
dockerproject_apt_repo_base_url: 'https://apt.dockerproject.org/repo'
dockerproject_apt_repo_gpgkey: 'https://apt.dockerproject.org/gpg'
# Used to set docker daemon iptables options
docker_iptables_enabled: "false"
View File
@@ -9,10 +9,10 @@ docker_container_storage_setup_container_thinpool: docker-pool
docker_container_storage_setup_data_size: 40%FREE
docker_container_storage_setup_min_data_size: 2G
docker_container_storage_setup_chunk_size: 512K
docker_container_storage_setup_growpart: "false"
docker_container_storage_setup_auto_extend_pool: "yes"
docker_container_storage_setup_pool_autoextend_threshold: 60
docker_container_storage_setup_pool_autoextend_percent: 20
docker_container_storage_setup_device_wait_timeout: 60
docker_container_storage_setup_wipe_signatures: "false"
docker_container_storage_setup_container_root_lv_size: 40%FREE
View File
@@ -7,6 +7,7 @@
        - "{{ ansible_distribution|lower }}-{{ ansible_distribution_release }}.yml"
        - "{{ ansible_distribution|lower }}-{{ ansible_distribution_major_version|lower|replace('/', '_') }}.yml"
        - "{{ ansible_distribution|lower }}.yml"
        - "{{ ansible_os_family|lower }}-{{ ansible_architecture }}.yml"
        - "{{ ansible_os_family|lower }}.yml"
        - defaults.yml
      paths:
View File
@@ -6,6 +6,7 @@
  with_items:
    - docker
    - docker-engine
    - docker.io
  when:
    - ansible_os_family == 'Debian'
    - (docker_versioned_pkg[docker_version | string] | search('docker-ce'))
@@ -19,6 +20,12 @@
    - docker-common
    - docker-engine
    - docker-selinux
    - docker-client
    - docker-client-latest
    - docker-latest
    - docker-latest-logrotate
    - docker-logrotate
    - docker-engine-selinux
  when:
    - ansible_os_family == 'RedHat'
    - (docker_versioned_pkg[docker_version | string] | search('docker-ce'))
View File
@@ -26,7 +26,7 @@
- name: add upstream dns servers (only when dnsmasq is not used)
  set_fact:
    docker_dns_servers: "{{ docker_dns_servers + upstream_dns_servers|default([]) }}"
  when: dns_mode in ['kubedns', 'coredns', 'coredns_dual']

- name: add global searchdomains
  set_fact:
@@ -56,7 +56,7 @@
- name: check number of nameservers
  fail:
    msg: "Too many nameservers. You can relax this check by setting docker_dns_servers_strict=false in all.yml; only the first 3 will then be used."
  when: docker_dns_servers|length > 3 and docker_dns_servers_strict|bool

- name: rtrim number of nameservers to 3
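The strict-check-then-trim behaviour of these two tasks can be sketched in plain Python (the function name is made up; `strict` mirrors `docker_dns_servers_strict`):

```python
def resolve_docker_dns(servers, strict=True, limit=3):
    """Mimic the tasks above: fail when strict and there are too many
    nameservers; otherwise keep only the first `limit` entries."""
    if len(servers) > limit:
        if strict:
            raise ValueError(
                "Too many nameservers; set docker_dns_servers_strict=false "
                "to use only the first %d." % limit)
        return servers[:limit]
    return servers

print(resolve_docker_dns(["1.1.1.1", "8.8.8.8"]))              # kept as-is
print(resolve_docker_dns(["a", "b", "c", "d"], strict=False))  # trimmed to 3
```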
View File
@@ -1,6 +1,5 @@
[Service]
Environment="DOCKER_OPTS={{ docker_options|default('') }} --iptables={{ docker_iptables_enabled | default('false') }}"
{% if docker_mount_flags is defined and docker_mount_flags != "" %}
MountFlags={{ docker_mount_flags }}
{% endif %}
View File
@@ -9,6 +9,7 @@ docker_versioned_pkg:
  '1.12': docker-engine=1.12.6-0~debian-{{ ansible_distribution_release|lower }}
  '1.13': docker-engine=1.13.1-0~debian-{{ ansible_distribution_release|lower }}
  '17.03': docker-ce=17.03.2~ce-0~debian-{{ ansible_distribution_release|lower }}
  '17.09': docker-ce=17.09.0~ce-0~debian-{{ ansible_distribution_release|lower }}
  'stable': docker-ce=17.03.2~ce-0~debian-{{ ansible_distribution_release|lower }}
  'edge': docker-ce=17.12.1~ce-0~debian-{{ ansible_distribution_release|lower }}
View File
@@ -0,0 +1,28 @@
---
docker_kernel_min_version: '0'

# override defaults, missing 17.03 for aarch64
docker_version: '1.13'

# http://mirror.centos.org/altarch/7/extras/aarch64/Packages/
# or do 'yum --showduplicates list docker'
docker_versioned_pkg:
  'latest': docker
  '1.12': docker-1.12.6-48.git0fdc778.el7
  '1.13': docker-1.13.1-63.git94f4240.el7

# https://docs.docker.com/engine/installation/linux/centos/#install-from-a-package
# http://mirror.centos.org/altarch/7/extras/aarch64/Packages/
docker_package_info:
  pkg_mgr: yum
  pkgs:
    - name: "{{ docker_versioned_pkg[docker_version | string] }}"

docker_repo_key_info:
  pkg_key: ''
  repo_keys: []

docker_repo_info:
  pkg_repo: ''
  repos: []
View File
@@ -11,6 +11,7 @@ docker_versioned_pkg:
  '1.12': docker-engine-1.12.6-1.el7.centos
  '1.13': docker-engine-1.13.1-1.el7.centos
  '17.03': docker-ce-17.03.2.ce-1.el7.centos
  '17.09': docker-ce-17.09.0.ce-1.el7.centos
  'stable': docker-ce-17.03.2.ce-1.el7.centos
  'edge': docker-ce-17.12.1.ce-1.el7.centos
View File
@@ -8,6 +8,7 @@ docker_versioned_pkg:
  '1.12': docker-engine=1.12.6-0~ubuntu-{{ ansible_distribution_release|lower }}
  '1.13': docker-engine=1.13.1-0~ubuntu-{{ ansible_distribution_release|lower }}
  '17.03': docker-ce=17.03.2~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
  '17.09': docker-ce=17.09.0~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
  'stable': docker-ce=17.03.2~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
  'edge': docker-ce=17.12.1~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
View File
@@ -27,9 +27,9 @@ download_delegate: "{% if download_localhost %}localhost{% else %}{{groups['kube
image_arch: amd64

# Versions
kube_version: v1.11.2
kubeadm_version: "{{ kube_version }}"
etcd_version: v3.2.18
# TODO(mattymo): Move calico versions to roles/network_plugins/calico/defaults
# after migration to container download
calico_version: "v2.6.8"
@@ -39,21 +39,18 @@ calico_policy_version: "v1.0.3"
calico_rr_version: "v0.4.2"
flannel_version: "v0.10.0"
flannel_cni_version: "v0.3.0"
vault_version: 0.10.1
weave_version: "2.4.0"
pod_infra_version: 3.0
contiv_version: 1.1.7
cilium_version: "v1.1.2"

# Download URLs
kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
vault_download_url: "https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_linux_{{ image_arch }}.zip"

# Checksums
kubeadm_checksum: 6b17720a65b8ff46efe92a5544f149c39a221910d89939838d75581d4e6924c0
vault_binary_checksum: 3c4d70ba71619a43229e65c67830e30e050eab7a81ac6b28325ff707e5914188

# Containers
@@ -73,22 +70,6 @@ calico_policy_image_repo: "quay.io/calico/kube-controllers"
calico_policy_image_tag: "{{ calico_policy_version }}"
calico_rr_image_repo: "quay.io/calico/routereflector"
calico_rr_image_tag: "{{ calico_rr_version }}"
hyperkube_image_repo: "gcr.io/google-containers/hyperkube-{{ image_arch }}"
hyperkube_image_tag: "{{ kube_version }}"
pod_infra_image_repo: "gcr.io/google_containers/pause-{{ image_arch }}"
@@ -120,7 +101,7 @@ dnsmasq_image_tag: "{{ dnsmasq_version }}"
 kubedns_version: 1.14.10
 kubedns_image_repo: "gcr.io/google_containers/k8s-dns-kube-dns-{{ image_arch }}"
 kubedns_image_tag: "{{ kubedns_version }}"
-coredns_version: 1.1.2
+coredns_version: 1.2.0
 coredns_image_repo: "docker.io/coredns/coredns"
 coredns_image_tag: "{{ coredns_version }}"
 dnsmasq_nanny_image_repo: "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-{{ image_arch }}"
@@ -135,14 +116,14 @@ kubednsautoscaler_image_repo: "gcr.io/google_containers/cluster-proportional-aut
 kubednsautoscaler_image_tag: "{{ kubednsautoscaler_version }}"
 test_image_repo: busybox
 test_image_tag: latest
-elasticsearch_version: "v2.4.1"
-elasticsearch_image_repo: "gcr.io/google_containers/elasticsearch"
+elasticsearch_version: "v5.6.4"
+elasticsearch_image_repo: "k8s.gcr.io/elasticsearch"
 elasticsearch_image_tag: "{{ elasticsearch_version }}"
-fluentd_version: "1.22"
-fluentd_image_repo: "gcr.io/google_containers/fluentd-elasticsearch"
+fluentd_version: "v2.0.4"
+fluentd_image_repo: "k8s.gcr.io/fluentd-elasticsearch"
 fluentd_image_tag: "{{ fluentd_version }}"
-kibana_version: "v4.6.1"
-kibana_image_repo: "gcr.io/google_containers/kibana"
+kibana_version: "5.6.4"
+kibana_image_repo: "docker.elastic.co/kibana/kibana"
 kibana_image_tag: "{{ kibana_version }}"
 helm_version: "v2.9.1"
 helm_image_repo: "lachlanevenson/k8s-helm"
@@ -156,18 +137,16 @@ registry_image_tag: "2.6"
 registry_proxy_image_repo: "gcr.io/google_containers/kube-registry-proxy"
 registry_proxy_image_tag: "0.4"
 local_volume_provisioner_image_repo: "quay.io/external_storage/local-volume-provisioner"
-local_volume_provisioner_image_tag: "v2.0.0"
-cephfs_provisioner_image_repo: "quay.io/kubespray/cephfs-provisioner"
-cephfs_provisioner_image_tag: "a71a49d4"
+local_volume_provisioner_image_tag: "v2.1.0"
+cephfs_provisioner_image_repo: "quay.io/external_storage/cephfs-provisioner"
+cephfs_provisioner_image_tag: "v1.1.0-k8s1.10"
 ingress_nginx_controller_image_repo: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller"
-ingress_nginx_controller_image_tag: "0.14.0"
+ingress_nginx_controller_image_tag: "0.18.0"
 ingress_nginx_default_backend_image_repo: "gcr.io/google_containers/defaultbackend"
 ingress_nginx_default_backend_image_tag: "1.4"
-cert_manager_version: "v0.2.4"
+cert_manager_version: "v0.4.1"
 cert_manager_controller_image_repo: "quay.io/jetstack/cert-manager-controller"
 cert_manager_controller_image_tag: "{{ cert_manager_version }}"
-cert_manager_ingress_shim_image_repo: "quay.io/jetstack/cert-manager-ingress-shim"
-cert_manager_ingress_shim_image_tag: "{{ cert_manager_version }}"

 downloads:
   netcheck_server:
@@ -207,83 +186,6 @@ downloads:
     mode: "0755"
     groups:
       - k8s-cluster
-  istioctl:
-    enabled: "{{ istio_enabled }}"
-    file: true
-    version: "{{ istio_version }}"
-    dest: "istio/istioctl"
-    sha256: "{{ istioctl_checksum }}"
-    source_url: "{{ istioctl_download_url }}"
-    url: "{{ istioctl_download_url }}"
-    unarchive: false
-    owner: "root"
-    mode: "0755"
-    groups:
-      - kube-master
-  istio_proxy:
-    enabled: "{{ istio_enabled }}"
-    container: true
-    repo: "{{ istio_proxy_image_repo }}"
-    tag: "{{ istio_proxy_image_tag }}"
-    sha256: "{{ istio_proxy_digest_checksum|default(None) }}"
-    groups:
-      - kube-node
-  istio_proxy_init:
-    enabled: "{{ istio_enabled }}"
-    container: true
-    repo: "{{ istio_proxy_init_image_repo }}"
-    tag: "{{ istio_proxy_init_image_tag }}"
-    sha256: "{{ istio_proxy_init_digest_checksum|default(None) }}"
-    groups:
-      - kube-node
-  istio_ca:
-    enabled: "{{ istio_enabled }}"
-    container: true
-    repo: "{{ istio_ca_image_repo }}"
-    tag: "{{ istio_ca_image_tag }}"
-    sha256: "{{ istio_ca_digest_checksum|default(None) }}"
-    groups:
-      - kube-node
-  istio_mixer:
-    enabled: "{{ istio_enabled }}"
-    container: true
-    repo: "{{ istio_mixer_image_repo }}"
-    tag: "{{ istio_mixer_image_tag }}"
-    sha256: "{{ istio_mixer_digest_checksum|default(None) }}"
-    groups:
-      - kube-node
-  istio_pilot:
-    enabled: "{{ istio_enabled }}"
-    container: true
-    repo: "{{ istio_pilot_image_repo }}"
-    tag: "{{ istio_pilot_image_tag }}"
-    sha256: "{{ istio_pilot_digest_checksum|default(None) }}"
-    groups:
-      - kube-node
-  istio_proxy_debug:
-    enabled: "{{ istio_enabled }}"
-    container: true
-    repo: "{{ istio_proxy_debug_image_repo }}"
-    tag: "{{ istio_proxy_debug_image_tag }}"
-    sha256: "{{ istio_proxy_debug_digest_checksum|default(None) }}"
-    groups:
-      - kube-node
-  istio_sidecar_initializer:
-    enabled: "{{ istio_enabled }}"
-    container: true
-    repo: "{{ istio_sidecar_initializer_image_repo }}"
-    tag: "{{ istio_sidecar_initializer_image_tag }}"
-    sha256: "{{ istio_sidecar_initializer_digest_checksum|default(None) }}"
-    groups:
-      - kube-node
-  istio_statsd:
-    enabled: "{{ istio_enabled }}"
-    container: true
-    repo: "{{ istio_statsd_image_repo }}"
-    tag: "{{ istio_statsd_image_tag }}"
-    sha256: "{{ istio_statsd_digest_checksum|default(None) }}"
-    groups:
-      - kube-node
   hyperkube:
     enabled: true
     container: true
@@ -569,7 +471,7 @@ downloads:
     tag: "{{ ingress_nginx_controller_image_tag }}"
     sha256: "{{ ingress_nginx_controller_digest_checksum|default(None) }}"
     groups:
-      - kube-ingress
+      - kube-node
   ingress_nginx_default_backend:
     enabled: "{{ ingress_nginx_enabled }}"
     container: true
@@ -577,7 +479,7 @@ downloads:
     tag: "{{ ingress_nginx_default_backend_image_tag }}"
     sha256: "{{ ingress_nginx_default_backend_digest_checksum|default(None) }}"
     groups:
-      - kube-ingress
+      - kube-node
   cert_manager_controller:
     enabled: "{{ cert_manager_enabled }}"
     container: true
@@ -586,14 +488,6 @@ downloads:
     sha256: "{{ cert_manager_controller_digest_checksum|default(None) }}"
     groups:
       - kube-node
-  cert_manager_ingress_shim:
-    enabled: "{{ cert_manager_enabled }}"
-    container: true
-    repo: "{{ cert_manager_ingress_shim_image_repo }}"
-    tag: "{{ cert_manager_ingress_shim_image_tag }}"
-    sha256: "{{ cert_manager_ingress_shim_digest_checksum|default(None) }}"
-    groups:
-      - kube-node

 download_defaults:
   container: false
@@ -20,6 +20,6 @@
   when:
     - not skip_downloads|default(false)
     - item.value.enabled
-    - item.value.container
+    - "{{ item.value.container | default(False) }}"
    - download_run_once
    - group_names | intersect(download.groups) | length
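The `when:` hunk above gates container pre-pulls; the fix is that `item.value.container | default(False)` no longer errors out when a download entry has no `container` key (e.g. file downloads). As a rough Python sketch of the combined condition (hypothetical helper name, not from the repo):

```python
# Hypothetical sketch of the task's `when:` conditions, not Ansible code.
def want_container_download(item, skip_downloads, download_run_once, group_names):
    v = item["value"]
    return (
        not skip_downloads
        and bool(v.get("enabled"))
        and bool(v.get("container", False))  # `| default(False)`: missing key is no longer fatal
        and download_run_once
        and bool(set(group_names) & set(v.get("groups", [])))  # intersect | length
    )

container_item = {"value": {"enabled": True, "container": True, "groups": ["kube-node"]}}
file_item = {"value": {"enabled": True, "groups": ["kube-node"]}}  # no `container` key
print(want_container_download(container_item, False, True, ["kube-node", "k8s-cluster"]))
print(want_container_download(file_item, False, True, ["kube-node"]))
```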
@@ -9,7 +9,7 @@
 - name: Register docker images info
   raw: >-
-    {{ docker_bin_dir }}/docker images -q | xargs {{ docker_bin_dir }}/docker inspect -f "{{ '{{' }} (index .RepoTags 0) {{ '}}' }},{{ '{{' }} (index .RepoDigests 0) {{ '}}' }}" | tr '\n' ','
+    {{ docker_bin_dir }}/docker images -q | xargs {{ docker_bin_dir }}/docker inspect -f "{{ '{{' }} if .RepoTags {{ '}}' }}{{ '{{' }} (index .RepoTags 0) {{ '}}' }}{{ '{{' }} end {{ '}}' }}{{ '{{' }} if .RepoDigests {{ '}}' }},{{ '{{' }} (index .RepoDigests 0) {{ '}}' }}{{ '{{' }} end {{ '}}' }}" | tr '\n' ','
   no_log: true
   register: docker_images
   failed_when: false
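The new Go template wraps each `index` in an `if` guard: images pulled by digest have an empty `RepoTags`, and never-pushed local images have an empty `RepoDigests`, so indexing element 0 unconditionally fails on them. A hypothetical Python sketch (not part of the playbook) of what the template now emits per image:

```python
# Mimics the guarded Go template passed to `docker inspect -f`.
# Hypothetical helper, for illustration only.
def describe_image(repo_tags, repo_digests):
    """Return 'tag,digest', emitting each part only when it exists."""
    out = ""
    if repo_tags:       # {{ if .RepoTags }}{{ (index .RepoTags 0) }}{{ end }}
        out += repo_tags[0]
    if repo_digests:    # {{ if .RepoDigests }},{{ (index .RepoDigests 0) }}{{ end }}
        out += "," + repo_digests[0]
    return out

images = [
    (["busybox:latest"], ["busybox@sha256:abcd"]),
    ([], ["gcr.io/pause@sha256:ef01"]),   # pulled by digest: no RepoTags
    (["local/build:dev"], []),            # never pushed: no RepoDigests
]
for tags, digests in images:
    print(describe_image(tags, digests))
```

The old template would have raised a template error on the last two entries instead of degrading gracefully.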
@@ -3,6 +3,9 @@
 etcd_cluster_setup: true
 etcd_events_cluster_setup: false

+# Set to true to separate k8s events to a different etcd cluster
+etcd_events_cluster_enabled: false
+
 etcd_backup_prefix: "/var/backups"
 etcd_data_dir: "/var/lib/etcd"
 etcd_events_data_dir: "/var/lib/etcd-events"
@@ -95,4 +95,9 @@ if [ -n "$HOSTS" ]; then
 fi

 # Install certs
+if [ -e "$SSLDIR/ca-key.pem" ]; then
+  # No pass existing CA
+  rm -f ca.pem ca-key.pem
+fi
 mv *.pem ${SSLDIR}/
@@ -62,5 +62,3 @@
   with_items: "{{ etcd_node_certs_needed|d([]) }}"
   when: inventory_hostname in etcd_node_cert_hosts
   notify: set etcd_secret_changed
-
-- fail:
@@ -19,11 +19,17 @@
     register: "etcd_client_cert_serial_result"
     changed_when: false
   when: inventory_hostname in groups['k8s-cluster']|union(groups['etcd'])|union(groups['calico-rr']|default([]))|unique|sort
+  tags:
+    - master
+    - network

 - name: Set etcd_client_cert_serial
   set_fact:
     etcd_client_cert_serial: "{{ etcd_client_cert_serial_result.stdout }}"
   when: inventory_hostname in groups['k8s-cluster']|union(groups['etcd'])|union(groups['calico-rr']|default([]))|unique|sort
+  tags:
+    - master
+    - network

 - include_tasks: "install_{{ etcd_deployment_type }}.yml"
   when: is_etcd_master
@@ -8,13 +8,15 @@
       "member-" + inventory_hostname + ".pem"
       ] }}

-#- include_tasks: ../../vault/tasks/shared/sync_file.yml
-#  vars:
-#    sync_file: "{{ item }}"
-#    sync_file_dir: "{{ etcd_cert_dir }}"
-#    sync_file_hosts: [ "{{ inventory_hostname }}" ]
-#    sync_file_is_cert: true
-#  with_items: "{{ etcd_master_cert_list|d([]) }}"
+- include_tasks: ../../vault/tasks/shared/sync_file.yml
+  vars:
+    sync_file: "{{ item }}"
+    sync_file_dir: "{{ etcd_cert_dir }}"
+    sync_file_hosts: [ "{{ inventory_hostname }}" ]
+    sync_file_owner: kube
+    sync_file_group: root
+    sync_file_is_cert: true
+  with_items: "{{ etcd_master_cert_list|d([]) }}"

 - name: sync_etcd_certs | Set facts for etcd sync_file results
   set_fact:
@@ -22,16 +24,16 @@
   with_items: "{{ sync_file_results|d([]) }}"
   when: item.no_srcs|bool

-#- name: sync_etcd_certs | Unset sync_file_results after etcd certs sync
-#  set_fact:
-#    sync_file_results: []
-#
-#- include_tasks: ../../vault/tasks/shared/sync_file.yml
-#  vars:
-#    sync_file: ca.pem
-#    sync_file_dir: "{{ etcd_cert_dir }}"
-#    sync_file_hosts: [ "{{ inventory_hostname }}" ]
-#
-#- name: sync_etcd_certs | Unset sync_file_results after ca.pem sync
-#  set_fact:
-#    sync_file_results: []
+- name: sync_etcd_certs | Unset sync_file_results after etcd certs sync
+  set_fact:
+    sync_file_results: []
+
+- include_tasks: ../../vault/tasks/shared/sync_file.yml
+  vars:
+    sync_file: ca.pem
+    sync_file_dir: "{{ etcd_cert_dir }}"
+    sync_file_hosts: [ "{{ inventory_hostname }}" ]
+
+- name: sync_etcd_certs | Unset sync_file_results after ca.pem sync
+  set_fact:
+    sync_file_results: []
@@ -4,30 +4,30 @@
   set_fact:
     etcd_node_cert_list: "{{ etcd_node_cert_list|default([]) + ['node-' + inventory_hostname + '.pem'] }}"

-#- include_tasks: ../../vault/tasks/shared/sync_file.yml
-#  vars:
-#    sync_file: "{{ item }}"
-#    sync_file_dir: "{{ etcd_cert_dir }}"
-#    sync_file_hosts: [ "{{ inventory_hostname }}" ]
-#    sync_file_is_cert: true
-#  with_items: "{{ etcd_node_cert_list|d([]) }}"
-#
+- include_tasks: ../../vault/tasks/shared/sync_file.yml
+  vars:
+    sync_file: "{{ item }}"
+    sync_file_dir: "{{ etcd_cert_dir }}"
+    sync_file_hosts: [ "{{ inventory_hostname }}" ]
+    sync_file_is_cert: true
+  with_items: "{{ etcd_node_cert_list|d([]) }}"
+
 - name: sync_etcd_node_certs | Set facts for etcd sync_file results
   set_fact:
     etcd_node_certs_needed: "{{ etcd_node_certs_needed|default([]) + [item.path] }}"
   with_items: "{{ sync_file_results|d([]) }}"
   when: item.no_srcs|bool

-#- name: sync_etcd_node_certs | Unset sync_file_results after etcd node certs
-#  set_fact:
-#    sync_file_results: []
-#
-#- include_tasks: ../../vault/tasks/shared/sync_file.yml
-#  vars:
-#    sync_file: ca.pem
-#    sync_file_dir: "{{ etcd_cert_dir }}"
-#    sync_file_hosts: "{{ groups['etcd'] }}"
-#
-#- name: sync_etcd_node_certs | Unset sync_file_results after ca.pem
-#  set_fact:
-#    sync_file_results: []
+- name: sync_etcd_node_certs | Unset sync_file_results after etcd node certs
+  set_fact:
+    sync_file_results: []
+
+- include_tasks: ../../vault/tasks/shared/sync_file.yml
+  vars:
+    sync_file: ca.pem
+    sync_file_dir: "{{ etcd_cert_dir }}"
+    sync_file_hosts: "{{ groups['etcd'] }}"
+
+- name: sync_etcd_node_certs | Unset sync_file_results after ca.pem
+  set_fact:
+    sync_file_results: []
@@ -0,0 +1,31 @@
+[Unit]
+Description=etcd events rkt wrapper
+Documentation=https://github.com/coreos/etcd
+Wants=network.target
+
+[Service]
+Restart=on-failure
+RestartSec=10s
+TimeoutStartSec=0
+LimitNOFILE=40000
+
+ExecStart=/usr/bin/rkt run \
+  --uuid-file-save=/var/run/etcd-events.uuid \
+  --volume hosts,kind=host,source=/etc/hosts,readOnly=true \
+  --mount volume=hosts,target=/etc/hosts \
+  --volume=etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true \
+  --mount=volume=etc-ssl-certs,target=/etc/ssl/certs \
+  --volume=etcd-cert-dir,kind=host,source={{ etcd_cert_dir }},readOnly=true \
+  --mount=volume=etcd-cert-dir,target={{ etcd_cert_dir }} \
+  --volume=etcd-data-dir,kind=host,source={{ etcd_events_data_dir }},readOnly=false \
+  --mount=volume=etcd-data-dir,target={{ etcd_events_data_dir }} \
+  --set-env-file=/etc/etcd-events.env \
+  --stage1-from-dir=stage1-fly.aci \
+  {{ etcd_image_repo }}:{{ etcd_image_tag }} \
+  --name={{ etcd_member_name | default("etcd-events") }}
+
+ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/etcd-events.uuid
+ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/etcd-events.uuid
+
+[Install]
+WantedBy=multi-user.target
@@ -60,6 +60,9 @@ dashboard_certs_secret_name: kubernetes-dashboard-certs
 dashboard_tls_key_file: dashboard.key
 dashboard_tls_cert_file: dashboard.crt

+# Override dashboard default settings
+dashboard_token_ttl: 900
+
 # SSL
 etcd_cert_dir: "/etc/ssl/etcd/ssl"
 canal_cert_dir: "/etc/canal/certs"
@@ -19,6 +19,7 @@
     - rbac_enabled or item.type not in rbac_resources
   tags:
     - dnsmasq
+    - kubedns

 # see https://github.com/kubernetes/kubernetes/issues/45084, only needed for "old" kube-dns
 - name: Kubernetes Apps | Patch system:kube-dns ClusterRole
@@ -39,3 +40,4 @@
     - rbac_enabled and kubedns_version|version_compare("1.11.0", "<", strict=True)
   tags:
     - dnsmasq
+    - kubedns
@@ -17,6 +17,9 @@
     - inventory_hostname == groups['kube-master'][0]
   tags:
     - upgrade
+    - dnsmasq
+    - coredns
+    - kubedns

 - name: Kubernetes Apps | CoreDNS
   import_tasks: "tasks/coredns.yml"
@@ -56,6 +59,8 @@
   delay: 5
   tags:
     - dnsmasq
+    - coredns
+    - kubedns

 - name: Kubernetes Apps | Netchecker
   import_tasks: tasks/netchecker.yml
@@ -2,7 +2,7 @@
 - name: Kubernetes Apps | Check if netchecker-server manifest already exists
   stat:
-    path: "{{ kube_config_dir }}/netchecker-server-deployment.yml.j2"
+    path: "{{ kube_config_dir }}/netchecker-server-deployment.yml"
   register: netchecker_server_manifest
   tags:
     - facts
@@ -22,16 +22,16 @@
 - name: Kubernetes Apps | Lay Down Netchecker Template
   template:
-    src: "{{item.file}}"
+    src: "{{item.file}}.j2"
     dest: "{{kube_config_dir}}/{{item.file}}"
   with_items:
-    - {file: netchecker-agent-ds.yml.j2, type: ds, name: netchecker-agent}
-    - {file: netchecker-agent-hostnet-ds.yml.j2, type: ds, name: netchecker-agent-hostnet}
-    - {file: netchecker-server-sa.yml.j2, type: sa, name: netchecker-server}
-    - {file: netchecker-server-clusterrole.yml.j2, type: clusterrole, name: netchecker-server}
-    - {file: netchecker-server-clusterrolebinding.yml.j2, type: clusterrolebinding, name: netchecker-server}
-    - {file: netchecker-server-deployment.yml.j2, type: deployment, name: netchecker-server}
-    - {file: netchecker-server-svc.yml.j2, type: svc, name: netchecker-service}
+    - {file: netchecker-agent-ds.yml, type: ds, name: netchecker-agent}
+    - {file: netchecker-agent-hostnet-ds.yml, type: ds, name: netchecker-agent-hostnet}
+    - {file: netchecker-server-sa.yml, type: sa, name: netchecker-server}
+    - {file: netchecker-server-clusterrole.yml, type: clusterrole, name: netchecker-server}
+    - {file: netchecker-server-clusterrolebinding.yml, type: clusterrolebinding, name: netchecker-server}
+    - {file: netchecker-server-deployment.yml, type: deployment, name: netchecker-server}
+    - {file: netchecker-server-svc.yml, type: svc, name: netchecker-service}
   register: manifests
   when:
     - inventory_hostname == groups['kube-master'][0]
@@ -11,7 +11,7 @@ data:
     .:53 {
         errors
         health
-        kubernetes {{ cluster_name }} in-addr.arpa ip6.arpa {
+        kubernetes {{ dns_domain }} in-addr.arpa ip6.arpa {
           pods insecure
           upstream /etc/resolv.conf
           fallthrough in-addr.arpa ip6.arpa
@@ -34,6 +34,22 @@ spec:
         effect: NoSchedule
       - key: "CriticalAddonsOnly"
         operator: "Exists"
+      affinity:
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - topologyKey: "kubernetes.io/hostname"
+            labelSelector:
+              matchLabels:
+                k8s-app: coredns{{ coredns_ordinal_suffix | default('') }}
+        nodeAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+          - weight: 100
+            preference:
+              matchExpressions:
+              - key: node-role.kubernetes.io/master
+                operator: In
+                values:
+                - "true"
       containers:
       - name: coredns
         image: "{{ coredns_image_repo }}:{{ coredns_image_tag }}"
@@ -166,6 +166,7 @@ spec:
         # If not specified, Dashboard will attempt to auto discover the API server and connect
         # to it. Uncomment only if the default does not work.
         # - --apiserver-host=http://my-address:port
+        - --token-ttl={{ dashboard_token_ttl }}
         volumeMounts:
         - name: kubernetes-dashboard-certs
           mountPath: /certs
@@ -30,7 +30,24 @@ spec:
     spec:
       tolerations:
       - effect: NoSchedule
-        operator: Exists
+        operator: Equal
+        key: node-role.kubernetes.io/master
+      affinity:
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - topologyKey: "kubernetes.io/hostname"
+            labelSelector:
+              matchLabels:
+                k8s-app: kubedns-autoscaler
+        nodeAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+          - weight: 100
+            preference:
+              matchExpressions:
+              - key: node-role.kubernetes.io/master
+                operator: In
+                values:
+                - "true"
       containers:
       - name: autoscaler
         image: "{{ kubednsautoscaler_image_repo }}:{{ kubednsautoscaler_image_tag }}"
@@ -30,8 +30,25 @@ spec:
       tolerations:
       - key: "CriticalAddonsOnly"
         operator: "Exists"
-      - effect: NoSchedule
-        operator: Exists
+      - effect: "NoSchedule"
+        operator: "Equal"
+        key: "node-role.kubernetes.io/master"
+      affinity:
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - topologyKey: "kubernetes.io/hostname"
+            labelSelector:
+              matchLabels:
+                k8s-app: kube-dns
+        nodeAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+          - weight: 100
+            preference:
+              matchExpressions:
+              - key: node-role.kubernetes.io/master
+                operator: In
+                values:
+                - "true"
       volumes:
       - name: kube-dns-config
         configMap:
@@ -1,9 +1,12 @@
 ---
 kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1beta1
+apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: efk
   namespace: kube-system
+  labels:
+    kubernetes.io/cluster-service: "true"
+    addonmanager.kubernetes.io/mode: Reconcile
 subjects:
   - kind: ServiceAccount
     name: efk
@@ -6,3 +6,4 @@ metadata:
   namespace: kube-system
   labels:
     kubernetes.io/cluster-service: "true"
+    addonmanager.kubernetes.io/mode: Reconcile
@@ -1,15 +1,17 @@
 ---
-# https://raw.githubusercontent.com/kubernetes/kubernetes/v1.5.2/cluster/addons/fluentd-elasticsearch/es-controller.yaml
-apiVersion: extensions/v1beta1
-kind: Deployment
+# https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.2/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml
+apiVersion: apps/v1
+kind: StatefulSet
 metadata:
-  name: elasticsearch-logging-v1
+  name: elasticsearch-logging
   namespace: kube-system
   labels:
     k8s-app: elasticsearch-logging
     version: "{{ elasticsearch_image_tag }}"
     kubernetes.io/cluster-service: "true"
+    addonmanager.kubernetes.io/mode: Reconcile
 spec:
+  serviceName: elasticsearch-logging
   replicas: 2
   selector:
     matchLabels:
@@ -53,4 +55,10 @@ spec:
 {% if rbac_enabled %}
       serviceAccountName: efk
 {% endif %}
+      initContainers:
+      - image: alpine:3.6
+        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
+        name: elasticsearch-logging-init
+        securityContext:
+          privileged: true
@@ -1,7 +1,7 @@
 ---
 fluentd_cpu_limit: 0m
-fluentd_mem_limit: 200Mi
+fluentd_mem_limit: 500Mi
 fluentd_cpu_requests: 100m
 fluentd_mem_requests: 200Mi
-fluentd_config_dir: /etc/kubernetes/fluentd
-fluentd_config_file: fluentd.conf
+fluentd_config_dir: /etc/fluent/config.d
+# fluentd_config_file: fluentd.conf
@@ -1,10 +1,19 @@
+---
+# https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.10/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: fluentd-config
   namespace: "kube-system"
+  labels:
+    addonmanager.kubernetes.io/mode: Reconcile
 data:
-  {{ fluentd_config_file }}: |
+  system.conf: |-
+    <system>
+      root_dir /tmp/fluentd-buffers/
+    </system>
+  containers.input.conf: |-
     # This configuration file for Fluentd / td-agent is used
     # to watch changes to Docker log files. The kubelet creates symlinks that
     # capture the pod name, namespace, container name & Docker container ID
@@ -18,7 +27,6 @@ data:
     # See https://github.com/uken/fluent-plugin-elasticsearch &
    # https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter for
    # more information about the plugins.
-    # Maintainer: Jimmi Dyson <jimmidyson@gmail.com>
    #
    # Example
    # =======
@@ -99,63 +107,87 @@ data:
     # This makes it easier for users to search for logs by pod name or by
     # the name of the Kubernetes container regardless of how many times the
     # Kubernetes pod has been restarted (resulting in a several Docker container IDs).
-    #
-    # TODO: Propagate the labels associated with a container along with its logs
-    # so users can query logs using labels as well as or instead of the pod name
-    # and container name. This is simply done via configuration of the Kubernetes
-    # fluentd plugin but requires secrets to be enabled in the fluent pod. This is a
-    # problem yet to be solved as secrets are not usable in static pods which the fluentd
-    # pod must be until a per-node controller is available in Kubernetes.
-    # Prevent fluentd from handling records containing its own logs. Otherwise
-    # it can lead to an infinite loop, when error in sending one message generates
-    # another message which also fails to be sent and so on.
-    <match fluent.**>
-      type null
-    </match>
-    # Example:
+    # Json Log Example:
     # {"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\n","stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}
+    # CRI Log Example:
+    # 2016-02-17T00:04:05.931087621Z stdout F [info:2016-02-16T16:04:05.930-08:00] Some log text here
     <source>
-      type tail
+      @id fluentd-containers.log
+      @type tail
       path /var/log/containers/*.log
       pos_file /var/log/es-containers.log.pos
       time_format %Y-%m-%dT%H:%M:%S.%NZ
-      tag kubernetes.*
-      format json
+      tag raw.kubernetes.*
       read_from_head true
+      <parse>
+        @type multi_format
+        <pattern>
+          format json
+          time_key time
+          time_format %Y-%m-%dT%H:%M:%S.%NZ
+        </pattern>
+        <pattern>
+          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
+          time_format %Y-%m-%dT%H:%M:%S.%N%:z
+        </pattern>
+      </parse>
     </source>
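The second `<pattern>` above is the non-JSON fallback for CRI-formatted container logs (`timestamp stream flags message`). As a hypothetical illustration (not part of the manifest), Fluentd's Ruby regex `(?<name>...)` groups map to Python's `(?P<name>...)` syntax, and the pattern splits a CRI line like this:

```python
import re

# Python translation of the Fluentd fallback pattern for CRI log lines.
CRI_LINE = re.compile(r"^(?P<time>.+) (?P<stream>stdout|stderr) [^ ]* (?P<log>.*)$")

record = CRI_LINE.match(
    "2016-02-17T00:04:05.931087621Z stdout F [info] Some log text here"
)
assert record is not None
# time -> "2016-02-17T00:04:05.931087621Z", stream -> "stdout",
# the "F" (full-line) tag is discarded by [^ ]*, log -> the message text
print(record.group("time"), record.group("stream"), record.group("log"))
```

Lines that match the first (JSON) pattern never reach this one; `multi_format` tries patterns in order.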
# Detect exceptions in the log output and forward them as one log entry.
<match raw.kubernetes.**>
@id raw.kubernetes
@type detect_exceptions
remove_tag_prefix raw
message log
stream stream
multiline_flush_interval 5
max_bytes 500000
max_lines 1000
</match>
system.input.conf: |-
# Example: # Example:
# 2015-12-21 23:17:22,066 [salt.state ][INFO ] Completed state [net.ipv4.ip_forward] at time 23:17:22.066081 # 2015-12-21 23:17:22,066 [salt.state ][INFO ] Completed state [net.ipv4.ip_forward] at time 23:17:22.066081
<source> <source>
type tail @id minion
@type tail
format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/ format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
time_format %Y-%m-%d %H:%M:%S time_format %Y-%m-%d %H:%M:%S
path /var/log/salt/minion path /var/log/salt/minion
pos_file /var/log/es-salt.pos pos_file /var/log/salt.pos
tag salt tag salt
</source> </source>
# Example: # Example:
# Dec 21 23:17:22 gke-foo-1-1-4b5cbd14-node-4eoj startupscript: Finished running startup script /var/run/google.startup.script # Dec 21 23:17:22 gke-foo-1-1-4b5cbd14-node-4eoj startupscript: Finished running startup script /var/run/google.startup.script
<source> <source>
type tail @id startupscript.log
@type tail
format syslog format syslog
path /var/log/startupscript.log path /var/log/startupscript.log
pos_file /var/log/es-startupscript.log.pos pos_file /var/log/es-startupscript.log.pos
tag startupscript tag startupscript
</source> </source>
# Examples: # Examples:
# time="2016-02-04T06:51:03.053580605Z" level=info msg="GET /containers/json" # time="2016-02-04T06:51:03.053580605Z" level=info msg="GET /containers/json"
# time="2016-02-04T07:53:57.505612354Z" level=error msg="HTTP Error" err="No such image: -f" statusCode=404 # time="2016-02-04T07:53:57.505612354Z" level=error msg="HTTP Error" err="No such image: -f" statusCode=404
# TODO(random-liu): Remove this after cri container runtime rolls out.
<source> <source>
type tail @id docker.log
@type tail
format /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=($<status_code>\d+))?/ format /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=($<status_code>\d+))?/
path /var/log/docker.log path /var/log/docker.log
pos_file /var/log/es-docker.log.pos pos_file /var/log/es-docker.log.pos
tag docker tag docker
</source> </source>
 # Example:
 # 2016/02/04 06:52:38 filePurge: successfully removed file /var/etcd/data/member/wal/00000000000006d0-00000000010a23d1.wal
 <source>
-  type tail
+  @id etcd.log
+  @type tail
   # Not parsing this, because it doesn't have anything particularly useful to
   # parse out of it (like severities).
   format none
@@ -163,13 +195,16 @@ data:
   pos_file /var/log/es-etcd.log.pos
   tag etcd
 </source>
 # Multi-line parsing is required for all the kube logs because very large log
 # statements, such as those that include entire object bodies, get split into
 # multiple lines by glog.
 # Example:
 # I0204 07:32:30.020537 3368 server.go:1048] POST /stats/container/: (13.972191ms) 200 [[Go-http-client/1.1] 10.244.1.3:40537]
 <source>
-  type tail
+  @id kubelet.log
+  @type tail
   format multiline
   multiline_flush_interval 5s
   format_firstline /^\w\d{4}/
@@ -179,10 +214,12 @@ data:
   pos_file /var/log/es-kubelet.log.pos
   tag kubelet
 </source>
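For all of the glog-based sources here, `format_firstline /^\w\d{4}/` decides where a new record starts: a line beginning with a level letter plus a four-digit date (e.g. `I0204`) opens a new record, and anything else is folded into the previous one. A quick illustration of that predicate (the sample lines are hypothetical):

```python
import re

# glog record start: a level letter (I/W/E/F) followed by an MMDD date
FIRSTLINE = re.compile(r"^\w\d{4}")

lines = [
    "I0204 07:32:30.020537 3368 server.go:1048] POST /stats/container/: ...",
    "    spilled object body continues here",  # continuation, folded into the record above
    "W0204 06:49:18.239674 7 reflector.go:245] watch ended",
]
starts = [bool(FIRSTLINE.match(l)) for l in lines]
print(starts)  # [True, False, True]
```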
 # Example:
 # I1118 21:26:53.975789 6 proxier.go:1096] Port "nodePort for kube-system/default-http-backend:http" (:31429/tcp) was open before and is still needed
 <source>
-  type tail
+  @id kube-proxy.log
+  @type tail
   format multiline
   multiline_flush_interval 5s
   format_firstline /^\w\d{4}/
@@ -192,10 +229,12 @@ data:
   pos_file /var/log/es-kube-proxy.log.pos
   tag kube-proxy
 </source>
 # Example:
 # I0204 07:00:19.604280 5 handlers.go:131] GET /api/v1/nodes: (1.624207ms) 200 [[kube-controller-manager/v1.1.3 (linux/amd64) kubernetes/6a81b50] 127.0.0.1:38266]
 <source>
-  type tail
+  @id kube-apiserver.log
+  @type tail
   format multiline
   multiline_flush_interval 5s
   format_firstline /^\w\d{4}/
@@ -205,10 +244,12 @@ data:
   pos_file /var/log/es-kube-apiserver.log.pos
   tag kube-apiserver
 </source>
 # Example:
 # I0204 06:55:31.872680 5 servicecontroller.go:277] LB already exists and doesn't need update for service kube-system/kube-ui
 <source>
-  type tail
+  @id kube-controller-manager.log
+  @type tail
   format multiline
   multiline_flush_interval 5s
   format_firstline /^\w\d{4}/
@@ -218,10 +259,12 @@ data:
   pos_file /var/log/es-kube-controller-manager.log.pos
   tag kube-controller-manager
 </source>
 # Example:
 # W0204 06:49:18.239674 7 reflector.go:245] pkg/scheduler/factory/factory.go:193: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [2578313/2577886]) [2579312]
 <source>
-  type tail
+  @id kube-scheduler.log
+  @type tail
   format multiline
   multiline_flush_interval 5s
   format_firstline /^\w\d{4}/
@@ -231,10 +274,12 @@ data:
   pos_file /var/log/es-kube-scheduler.log.pos
   tag kube-scheduler
 </source>
 # Example:
 # I1104 10:36:20.242766 5 rescheduler.go:73] Running Rescheduler
 <source>
-  type tail
+  @id rescheduler.log
+  @type tail
   format multiline
   multiline_flush_interval 5s
   format_firstline /^\w\d{4}/
@@ -244,10 +289,12 @@ data:
   pos_file /var/log/es-rescheduler.log.pos
   tag rescheduler
 </source>
 # Example:
 # I0603 15:31:05.793605 6 cluster_manager.go:230] Reading config from path /etc/gce.conf
 <source>
-  type tail
+  @id glbc.log
+  @type tail
   format multiline
   multiline_flush_interval 5s
   format_firstline /^\w\d{4}/
@@ -257,10 +304,12 @@ data:
   pos_file /var/log/es-glbc.log.pos
   tag glbc
 </source>
 # Example:
 # I0603 15:31:05.793605 6 cluster_manager.go:230] Reading config from path /etc/gce.conf
 <source>
-  type tail
+  @id cluster-autoscaler.log
+  @type tail
   format multiline
   multiline_flush_interval 5s
   format_firstline /^\w\d{4}/
@@ -270,59 +319,123 @@ data:
   pos_file /var/log/es-cluster-autoscaler.log.pos
   tag cluster-autoscaler
 </source>
+# Logs from systemd-journal for interesting services.
+# TODO(random-liu): Remove this after cri container runtime rolls out.
+<source>
+  @id journald-docker
+  @type systemd
+  filters [{ "_SYSTEMD_UNIT": "docker.service" }]
+  <storage>
+    @type local
+    persistent true
+  </storage>
+  read_from_head true
+  tag docker
+</source>
+# <source>
+#   @id journald-container-runtime
+#   @type systemd
+#   filters [{ "_SYSTEMD_UNIT": "{% raw %}{{ container_runtime }} {% endraw %}.service" }]
+#   <storage>
+#     @type local
+#     persistent true
+#   </storage>
+#   read_from_head true
+#   tag container-runtime
+# </source>
+<source>
+  @id journald-kubelet
+  @type systemd
+  filters [{ "_SYSTEMD_UNIT": "kubelet.service" }]
+  <storage>
+    @type local
+    persistent true
+  </storage>
+  read_from_head true
+  tag kubelet
+</source>
+<source>
+  @id journald-node-problem-detector
+  @type systemd
+  filters [{ "_SYSTEMD_UNIT": "node-problem-detector.service" }]
+  <storage>
+    @type local
+    persistent true
+  </storage>
+  read_from_head true
+  tag node-problem-detector
+</source>
+forward.input.conf: |-
+  # Takes the messages sent over TCP
+  <source>
+    @type forward
+  </source>
+monitoring.conf: |-
+  # Prometheus Exporter Plugin
+  # input plugin that exports metrics
+  <source>
+    @type prometheus
+  </source>
+  <source>
+    @type monitor_agent
+  </source>
+  # input plugin that collects metrics from MonitorAgent
+  <source>
+    @type prometheus_monitor
+    <labels>
+      host ${hostname}
+    </labels>
+  </source>
+  # input plugin that collects metrics for output plugin
+  <source>
+    @type prometheus_output_monitor
+    <labels>
+      host ${hostname}
+    </labels>
+  </source>
+  # input plugin that collects metrics for in_tail plugin
+  <source>
+    @type prometheus_tail_monitor
+    <labels>
+      host ${hostname}
+    </labels>
+  </source>
+output.conf: |-
+  # Enriches records with Kubernetes metadata
   <filter kubernetes.**>
-    type kubernetes_metadata
+    @type kubernetes_metadata
   </filter>
-  ## Prometheus Exporter Plugin
-  ## input plugin that exports metrics
-  #<source>
-  #  type prometheus
-  #</source>
-  #<source>
-  #  type monitor_agent
-  #</source>
-  #<source>
-  #  type forward
-  #</source>
-  ## input plugin that collects metrics from MonitorAgent
-  #<source>
-  #  @type prometheus_monitor
-  #  <labels>
-  #    host ${hostname}
-  #  </labels>
-  #</source>
-  ## input plugin that collects metrics for output plugin
-  #<source>
-  #  @type prometheus_output_monitor
-  #  <labels>
-  #    host ${hostname}
-  #  </labels>
-  #</source>
-  ## input plugin that collects metrics for in_tail plugin
-  #<source>
-  #  @type prometheus_tail_monitor
-  #  <labels>
-  #    host ${hostname}
-  #  </labels>
-  #</source>
   <match **>
-    type elasticsearch
-    user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
-    password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
-    log_level info
+    @id elasticsearch
+    @type elasticsearch
+    @log_level info
     include_tag_key true
     host elasticsearch-logging
     port 9200
     logstash_format true
-    # Set the chunk limit the same as for fluentd-gcp.
-    buffer_chunk_limit 2M
-    # Cap buffer memory usage to 2MiB/chunk * 32 chunks = 64 MiB
-    buffer_queue_limit 32
+    <buffer>
+      @type file
+      path /var/log/fluentd-buffers/kubernetes.system.buffer
+      flush_mode interval
+      retry_type exponential_backoff
+      flush_thread_count 2
      flush_interval 5s
-    # Never wait longer than 5 minutes between retries.
-    max_retry_wait 30
-    # Disable the limit on the number of retries (retry forever).
-    disable_retry_limit
-    # Use multiple threads for processing.
-    num_threads 8
+      retry_forever
+      retry_max_interval 30
+      chunk_limit_size 2M
+      queue_limit_length 8
+      overflow_action block
+    </buffer>
   </match>
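Worst-case buffer usage is bounded in both variants: the old in-memory buffer capped at `buffer_chunk_limit × buffer_queue_limit` (2 MiB × 32 = 64 MiB), while the new file buffer caps on-disk backlog at `chunk_limit_size × queue_limit_length` (2 MiB × 8 = 16 MiB) and blocks the input on overflow. The arithmetic, as a quick check:

```python
MIB = 1024 * 1024

# old in-memory buffer: buffer_chunk_limit 2M, buffer_queue_limit 32
old_cap = 2 * MIB * 32
# new file buffer: chunk_limit_size 2M, queue_limit_length 8
new_cap = 2 * MIB * 8

print(old_cap // MIB, new_cap // MIB)  # 64 16
```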
@@ -1,32 +1,42 @@
 ---
-# https://raw.githubusercontent.com/kubernetes/kubernetes/v1.5.2/cluster/addons/fluentd-elasticsearch/es-controller.yaml
+# https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.2/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: DaemonSet
 metadata:
-  name: "fluentd-es-v{{ fluentd_version }}"
+  name: "fluentd-es-{{ fluentd_version }}"
   namespace: "kube-system"
   labels:
     k8s-app: fluentd-es
+    version: "{{ fluentd_version }}"
     kubernetes.io/cluster-service: "true"
-    version: "v{{ fluentd_version }}"
+    addonmanager.kubernetes.io/mode: Reconcile
 spec:
+  selector:
+    matchLabels:
+      k8s-app: fluentd-es
+      version: "{{ fluentd_version }}"
   template:
     metadata:
       labels:
         k8s-app: fluentd-es
         kubernetes.io/cluster-service: "true"
-        version: "v{{ fluentd_version }}"
+        version: "{{ fluentd_version }}"
+      # This annotation ensures that fluentd does not get evicted if the node
+      # supports critical pod annotation based priority scheme.
+      # Note that this does not guarantee admission on the nodes (#40573).
+      annotations:
+        scheduler.alpha.kubernetes.io/critical-pod: ''
     spec:
-      tolerations:
-      - effect: NoSchedule
-        operator: Exists
+      priorityClassName: system-node-critical
+{% if rbac_enabled %}
+      serviceAccountName: efk
+{% endif %}
       containers:
       - name: fluentd-es
         image: "{{ fluentd_image_repo }}:{{ fluentd_image_tag }}"
-        command:
-        - '/bin/sh'
-        - '-c'
-        - '/usr/sbin/td-agent -c {{ fluentd_config_dir }}/{{ fluentd_config_file}} 2>&1 >> /var/log/fluentd.log'
+        env:
+        - name: FLUENTD_ARGS
+          value: "--no-supervisor -q"
         resources:
           limits:
 {% if fluentd_cpu_limit is defined and fluentd_cpu_limit != "0m" %}
@@ -39,22 +49,19 @@ spec:
         volumeMounts:
         - name: varlog
           mountPath: /var/log
-        - name: dockercontainers
+        - name: varlibdockercontainers
           mountPath: "{{ docker_daemon_graph }}/containers"
           readOnly: true
-        - name: config
+        - name: config-volume
           mountPath: "{{ fluentd_config_dir }}"
       terminationGracePeriodSeconds: 30
       volumes:
       - name: varlog
        hostPath:
           path: /var/log
-      - name: dockercontainers
+      - name: varlibdockercontainers
         hostPath:
           path: {{ docker_daemon_graph }}/containers
-      - name: config
+      - name: config-volume
         configMap:
           name: fluentd-config
-{% if rbac_enabled %}
-      serviceAccountName: efk
-{% endif %}
@@ -4,3 +4,4 @@ kibana_mem_limit: 0M
 kibana_cpu_requests: 100m
 kibana_mem_requests: 0M
 kibana_service_port: 5601
+kibana_base_url: "/api/v1/namespaces/kube-system/services/kibana-logging/proxy"
@@ -1,6 +1,6 @@
 ---
-# https://raw.githubusercontent.com/kubernetes/kubernetes/v1.5.2/cluster/addons/fluentd-kibana/kibana-controller.yaml
+# https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.10/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kibana-logging
@@ -36,10 +36,12 @@ spec:
         env:
         - name: "ELASTICSEARCH_URL"
           value: "http://elasticsearch-logging:{{ elasticsearch_service_port }}"
-{% if kibana_base_url is defined and kibana_base_url != "" %}
-        - name: "KIBANA_BASE_URL"
+        - name: "SERVER_BASEPATH"
           value: "{{ kibana_base_url }}"
-{% endif %}
+        - name: XPACK_MONITORING_ENABLED
+          value: "false"
+        - name: XPACK_SECURITY_ENABLED
+          value: "false"
         ports:
         - containerPort: 5601
           name: ui
@@ -1,7 +1,10 @@
 ---
-cephfs_provisioner_namespace: "kube-system"
+cephfs_provisioner_namespace: "cephfs-provisioner"
 cephfs_provisioner_cluster: ceph
-cephfs_provisioner_monitors: []
+cephfs_provisioner_monitors: ~
 cephfs_provisioner_admin_id: admin
 cephfs_provisioner_secret: secret
 cephfs_provisioner_storage_class: cephfs
+cephfs_provisioner_reclaim_policy: Delete
+cephfs_provisioner_claim_root: /volumes
+cephfs_provisioner_deterministic_names: true
@@ -1,5 +1,32 @@
 ---
+- name: CephFS Provisioner | Remove legacy addon dir and manifests
+  file:
+    path: "{{ kube_config_dir }}/addons/cephfs_provisioner"
+    state: absent
+  when:
+    - inventory_hostname == groups['kube-master'][0]
+  tags:
+    - upgrade
+- name: CephFS Provisioner | Remove legacy namespace
+  shell: |
+    {{ bin_dir }}/kubectl delete namespace {{ cephfs_provisioner_namespace }}
+  ignore_errors: yes
+  when:
+    - inventory_hostname == groups['kube-master'][0]
+  tags:
+    - upgrade
+- name: CephFS Provisioner | Remove legacy storageclass
+  shell: |
+    {{ bin_dir }}/kubectl delete storageclass {{ cephfs_provisioner_storage_class }}
+  ignore_errors: yes
+  when:
+    - inventory_hostname == groups['kube-master'][0]
+  tags:
+    - upgrade
 - name: CephFS Provisioner | Create addon dir
   file:
     path: "{{ kube_config_dir }}/addons/cephfs_provisioner"
@@ -7,22 +34,24 @@
     owner: root
     group: root
     mode: 0755
+  when:
+    - inventory_hostname == groups['kube-master'][0]
 - name: CephFS Provisioner | Create manifests
   template:
     src: "{{ item.file }}.j2"
     dest: "{{ kube_config_dir }}/addons/cephfs_provisioner/{{ item.file }}"
   with_items:
-    - { name: cephfs-provisioner-ns, file: cephfs-provisioner-ns.yml, type: ns }
-    - { name: cephfs-provisioner-sa, file: cephfs-provisioner-sa.yml, type: sa }
-    - { name: cephfs-provisioner-role, file: cephfs-provisioner-role.yml, type: role }
-    - { name: cephfs-provisioner-rolebinding, file: cephfs-provisioner-rolebinding.yml, type: rolebinding }
-    - { name: cephfs-provisioner-clusterrole, file: cephfs-provisioner-clusterrole.yml, type: clusterrole }
-    - { name: cephfs-provisioner-clusterrolebinding, file: cephfs-provisioner-clusterrolebinding.yml, type: clusterrolebinding }
-    - { name: cephfs-provisioner-rs, file: cephfs-provisioner-rs.yml, type: rs }
-    - { name: cephfs-provisioner-secret, file: cephfs-provisioner-secret.yml, type: secret }
-    - { name: cephfs-provisioner-sc, file: cephfs-provisioner-sc.yml, type: sc }
+    - { name: 00-namespace, file: 00-namespace.yml, type: ns }
+    - { name: secret-cephfs-provisioner, file: secret-cephfs-provisioner.yml, type: secret }
+    - { name: sa-cephfs-provisioner, file: sa-cephfs-provisioner.yml, type: sa }
+    - { name: clusterrole-cephfs-provisioner, file: clusterrole-cephfs-provisioner.yml, type: clusterrole }
+    - { name: clusterrolebinding-cephfs-provisioner, file: clusterrolebinding-cephfs-provisioner.yml, type: clusterrolebinding }
+    - { name: role-cephfs-provisioner, file: role-cephfs-provisioner.yml, type: role }
+    - { name: rolebinding-cephfs-provisioner, file: rolebinding-cephfs-provisioner.yml, type: rolebinding }
+    - { name: deploy-cephfs-provisioner, file: deploy-cephfs-provisioner.yml, type: rs }
+    - { name: sc-cephfs-provisioner, file: sc-cephfs-provisioner.yml, type: sc }
-  register: cephfs_manifests
+  register: cephfs_provisioner_manifests
   when: inventory_hostname == groups['kube-master'][0]
 - name: CephFS Provisioner | Apply manifests
@@ -33,5 +62,5 @@
     resource: "{{ item.item.type }}"
     filename: "{{ kube_config_dir }}/addons/cephfs_provisioner/{{ item.item.file }}"
     state: "latest"
-  with_items: "{{ cephfs_manifests.results }}"
+  with_items: "{{ cephfs_provisioner_manifests.results }}"
   when: inventory_hostname == groups['kube-master'][0]
@@ -1,6 +1,6 @@
 ---
 apiVersion: apps/v1
-kind: ReplicaSet
+kind: Deployment
 metadata:
   name: cephfs-provisioner-v{{ cephfs_provisioner_image_tag }}
   namespace: {{ cephfs_provisioner_namespace }}
@@ -4,9 +4,12 @@ kind: StorageClass
 metadata:
   name: {{ cephfs_provisioner_storage_class }}
 provisioner: ceph.com/cephfs
+reclaimPolicy: {{ cephfs_provisioner_reclaim_policy }}
 parameters:
   cluster: {{ cephfs_provisioner_cluster }}
-  monitors: {{ cephfs_provisioner_monitors | join(',') }}
+  monitors: {{ cephfs_provisioner_monitors }}
   adminId: {{ cephfs_provisioner_admin_id }}
-  adminSecretName: cephfs-provisioner-{{ cephfs_provisioner_admin_id }}-secret
+  adminSecretName: cephfs-provisioner
   adminSecretNamespace: {{ cephfs_provisioner_namespace }}
+  claimRoot: {{ cephfs_provisioner_claim_root }}
+  deterministicNames: "{{ cephfs_provisioner_deterministic_names | bool | lower }}"
@@ -2,7 +2,7 @@
 kind: Secret
 apiVersion: v1
 metadata:
-  name: cephfs-provisioner-{{ cephfs_provisioner_admin_id }}-secret
+  name: cephfs-provisioner
   namespace: {{ cephfs_provisioner_namespace }}
 type: Opaque
 data:
@@ -46,18 +46,20 @@ to limit the quota of persistent volumes.
 ### Simple directories
-``` bash
-for vol in vol6 vol7 vol8; do
-  mkdir /mnt/disks/$vol
-done
-```
-This is also acceptable in a development environment, but there is no capacity
+In a development environment using `mount --bind` works also, but there is no capacity
 management.
+### Block volumeMode PVs
+Create a symbolic link under discovery directory to the block device on the node. To use
+raw block devices in pods BlockVolume feature gate must be enabled.
 Usage notes
 -----------
+Beta PV.NodeAffinity field is used by default. If running against an older K8s
+version, the useAlphaAPI flag must be set in the configMap.
 The volume provisioner cannot calculate volume sizes correctly, so you should
 delete the daemonset pod on the relevant host after creating volumes. The pod
 will be recreated and read the size correctly.
@@ -19,6 +19,9 @@ spec:
         version: {{ local_volume_provisioner_image_tag }}
     spec:
       serviceAccountName: local-volume-provisioner
+      tolerations:
+        - effect: NoSchedule
+          operator: Exists
       containers:
         - name: provisioner
           image: {{ local_volume_provisioner_image_repo }}:{{ local_volume_provisioner_image_tag }}
@@ -30,12 +33,17 @@ spec:
               valueFrom:
                 fieldRef:
                   fieldPath: spec.nodeName
+            - name: MY_NAMESPACE
+              valueFrom:
+                fieldRef:
+                  fieldPath: metadata.namespace
           volumeMounts:
            - name: local-volume-provisioner
              mountPath: /etc/provisioner/config
              readOnly: true
            - name: local-volume-provisioner-hostpath-mnt-disks
              mountPath: {{ local_volume_provisioner_mount_dir }}
+             mountPropagation: "HostToContainer"
       volumes:
        - name: local-volume-provisioner
          configMap:
@@ -18,3 +18,6 @@ helm_skip_refresh: false
 # Override values for the Tiller Deployment manifest.
 # tiller_override: "key1=val1,key2=val2"
+
+# Limit the maximum number of revisions saved per release. Use 0 for no limit.
+# tiller_max_history: 0
@@ -34,6 +34,7 @@
     {% if rbac_enabled %} --service-account=tiller{% endif %}
     {% if tiller_node_selectors is defined %} --node-selectors {{ tiller_node_selectors }}{% endif %}
     {% if tiller_override is defined %} --override {{ tiller_override }}{% endif %}
+    {% if tiller_max_history is defined %} --history-max={{ tiller_max_history }}{% endif %}
   when: (helm_container is defined and helm_container.changed) or (helm_task_result is defined and helm_task_result.changed)
 - name: Helm | Set up bash completion
@@ -1,6 +1,2 @@
 ---
 cert_manager_namespace: "cert-manager"
-cert_manager_cpu_requests: 10m
-cert_manager_cpu_limits: 30m
-cert_manager_memory_requests: 32Mi
-cert_manager_memory_limits: 200Mi
@@ -1,5 +1,23 @@
 ---
+- name: Cert Manager | Remove legacy addon dir and manifests
+  file:
+    path: "{{ kube_config_dir }}/addons/cert_manager"
+    state: absent
+  when:
+    - inventory_hostname == groups['kube-master'][0]
+  tags:
+    - upgrade
+- name: Cert Manager | Remove legacy namespace
+  shell: |
+    {{ bin_dir }}/kubectl delete namespace {{ cert_manager_namespace }}
+  ignore_errors: yes
+  when:
+    - inventory_hostname == groups['kube-master'][0]
+  tags:
+    - upgrade
 - name: Cert Manager | Create addon dir
   file:
     path: "{{ kube_config_dir }}/addons/cert_manager"
@@ -7,20 +25,22 @@
     owner: root
     group: root
     mode: 0755
+  when:
+    - inventory_hostname == groups['kube-master'][0]
 - name: Cert Manager | Create manifests
   template:
     src: "{{ item.file }}.j2"
     dest: "{{ kube_config_dir }}/addons/cert_manager/{{ item.file }}"
   with_items:
-    - { name: cert-manager-ns, file: cert-manager-ns.yml, type: ns }
-    - { name: cert-manager-sa, file: cert-manager-sa.yml, type: sa }
-    - { name: cert-manager-clusterrole, file: cert-manager-clusterrole.yml, type: clusterrole }
-    - { name: cert-manager-clusterrolebinding, file: cert-manager-clusterrolebinding.yml, type: clusterrolebinding }
-    - { name: cert-manager-issuer-crd, file: cert-manager-issuer-crd.yml, type: crd }
-    - { name: cert-manager-clusterissuer-crd, file: cert-manager-clusterissuer-crd.yml, type: crd }
-    - { name: cert-manager-certificate-crd, file: cert-manager-certificate-crd.yml, type: crd }
-    - { name: cert-manager-deploy, file: cert-manager-deploy.yml, type: deploy }
+    - { name: 00-namespace, file: 00-namespace.yml, type: ns }
+    - { name: sa-cert-manager, file: sa-cert-manager.yml, type: sa }
+    - { name: crd-certificate, file: crd-certificate.yml, type: crd }
+    - { name: crd-clusterissuer, file: crd-clusterissuer.yml, type: crd }
+    - { name: crd-issuer, file: crd-issuer.yml, type: crd }
+    - { name: clusterrole-cert-manager, file: clusterrole-cert-manager.yml, type: clusterrole }
+    - { name: clusterrolebinding-cert-manager, file: clusterrolebinding-cert-manager.yml, type: clusterrolebinding }
+    - { name: deploy-cert-manager, file: deploy-cert-manager.yml, type: deploy }
   register: cert_manager_manifests
   when:
     - inventory_hostname == groups['kube-master'][0]
@@ -5,7 +5,7 @@ metadata:
   name: cert-manager
   labels:
     app: cert-manager
-    chart: cert-manager-0.2.8
+    chart: cert-manager-v0.4.1
     release: cert-manager
     heritage: Tiller
 rules:
@@ -5,7 +5,7 @@ metadata:
   name: cert-manager
   labels:
     app: cert-manager
-    chart: cert-manager-0.2.8
+    chart: cert-manager-v0.4.1
     release: cert-manager
     heritage: Tiller
 roleRef:
@@ -5,7 +5,7 @@ metadata:
   name: certificates.certmanager.k8s.io
   labels:
     app: cert-manager
-    chart: cert-manager-0.2.8
+    chart: cert-manager-v0.4.1
     release: cert-manager
     heritage: Tiller
 spec:
@@ -5,7 +5,7 @@ metadata:
   name: clusterissuers.certmanager.k8s.io
   labels:
     app: cert-manager
-    chart: cert-manager-0.2.8
+    chart: cert-manager-v0.4.1
     release: cert-manager
     heritage: Tiller
 spec:
@@ -5,7 +5,7 @@ metadata:
   name: issuers.certmanager.k8s.io
   labels:
     app: cert-manager
-    chart: cert-manager-0.2.8
+    chart: cert-manager-v0.4.1
     release: cert-manager
     heritage: Tiller
 spec:
@@ -6,15 +6,19 @@ metadata:
   namespace: {{ cert_manager_namespace }}
   labels:
     app: cert-manager
-    chart: cert-manager-0.2.8
+    chart: cert-manager-v0.4.1
     release: cert-manager
     heritage: Tiller
 spec:
   replicas: 1
+  selector:
+    matchLabels:
+      app: cert-manager
+      release: cert-manager
   template:
     metadata:
       labels:
-        k8s-app: cert-manager
+        app: cert-manager
         release: cert-manager
       annotations:
     spec:
@@ -25,6 +29,7 @@ spec:
         imagePullPolicy: {{ k8s_image_pull_policy }}
         args:
         - --cluster-resource-namespace=$(POD_NAMESPACE)
+        - --leader-election-namespace=$(POD_NAMESPACE)
         env:
         - name: POD_NAMESPACE
           valueFrom:
@@ -32,20 +37,5 @@ spec:
             fieldPath: metadata.namespace
         resources:
           requests:
-            cpu: {{ cert_manager_cpu_requests }}
-            memory: {{ cert_manager_memory_requests }}
+            cpu: 10m
+            memory: 32Mi
-          limits:
-            cpu: {{ cert_manager_cpu_limits }}
-            memory: {{ cert_manager_memory_limits }}
-      - name: ingress-shim
-        image: {{ cert_manager_ingress_shim_image_repo }}:{{ cert_manager_ingress_shim_image_tag }}
-        imagePullPolicy: {{ k8s_image_pull_policy }}
-        resources:
-          requests:
-            cpu: {{ cert_manager_cpu_requests }}
-            memory: {{ cert_manager_memory_requests }}
-          limits:
-            cpu: {{ cert_manager_cpu_limits }}
-            memory: {{ cert_manager_memory_limits }}
@@ -6,6 +6,6 @@ metadata:
   namespace: {{ cert_manager_namespace }}
   labels:
     app: cert-manager
-    chart: cert-manager-0.2.8
+    chart: cert-manager-v0.4.1
     release: cert-manager
     heritage: Tiller