Compare commits


18 Commits

Author SHA1 Message Date
Cristian Calin
6ff35d0c67 CI: upgrade vagrant to 2.2.19 (#8264) (#8267) 2021-12-03 05:20:27 -08:00
Hyojun Jeon
69c21e1c35 Add vxlanEnabled spec in FelixConfiguration (#8240) 2021-11-29 01:49:23 -08:00
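For reference, `vxlanEnabled` is a field of Calico's FelixConfiguration spec. A minimal sketch of the rendered resource (values here are illustrative, not Kubespray's defaults):

```yaml
apiVersion: crd.projectcalico.org/v1
kind: FelixConfiguration
metadata:
  name: default
spec:
  # Tell Felix to program VXLAN encapsulation on this cluster
  vxlanEnabled: true
```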
Iago Santos
f4dae74117 Fix kubespray flatcar ansible_os_family and ansible_distribution (#8181)
Closes https://github.com/kubernetes-sigs/kubespray/issues/8028

Signed-off-by: Iago Santos <iago.santos.pardo@adfinis.com>
2021-11-19 07:58:51 -08:00
Kenichi Omichi
2b7247f842 [2.17] Fix-CI: python was upgraded in CI to 3.10 (#8210)
* Fix-CI: python was upgraded in CI to 3.10 and pathlib is now included in the Python base, making this dependency break the CI (#8153)

* Upgrade ruamel.yaml.clib to work with Python 3.10 (#8034)

ruamel.yaml.clib did not build with the upcoming Python 3.10.

Cf. https://sourceforge.net/p/ruamel-yaml-clib/tickets/5/

ruamel.yaml.clib==0.2.4 fixes the issue. It does not work
with Python 3.7 (cf https://sourceforge.net/p/ruamel-yaml-clib/tickets/6/)
but currently Kubespray requires Python >= 3.9.

Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>
Co-authored-by: Olivier Lemasle <olivier.lemasle@apalia.net>
2021-11-18 23:48:52 -08:00
Kenichi Omichi
eeeca4a1d0 [2.17] Update kubernetes version to 1.21.6 (#8142) 2021-11-02 01:32:58 -07:00
Sébastien Masset
7e296b1523 Fixed default DNS min replica for single node clusters (#8109) 2021-10-26 23:59:25 -07:00
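The gist of a fix like this: the DNS autoscaler's minimum replica count must be capped by the node count, otherwise the second CoreDNS replica stays Pending on a single-node cluster. A hypothetical sketch of such a guard (the variable name `dns_min_replicas` is an assumption, not necessarily the exact Kubespray variable):

```yaml
# Never ask for more minimum DNS replicas than there are nodes;
# a single-node cluster then gets 1 replica instead of a stuck 2.
dns_min_replicas: "{{ [2, groups['k8s_cluster'] | length] | min }}"
```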
Utku Özdemir
488fbd8a37 Implement drain fallback with --disable-eviction to ignore PDBs (#8102)
Signed-off-by: Utku Ozdemir <uoz@protonmail.com>
2021-10-21 06:14:09 -07:00
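`kubectl drain --disable-eviction` bypasses the eviction API, and with it any PodDisruptionBudgets, by deleting pods directly. A hedged sketch of the fallback pattern as Ansible tasks (task names and options are illustrative, not the exact upgrade-role code):

```yaml
- name: Drain node, respecting PodDisruptionBudgets
  command: >-
    kubectl drain {{ inventory_hostname }}
    --ignore-daemonsets --delete-emptydir-data --timeout 120s
  register: drain_result
  failed_when: false

- name: Fall back to a drain that ignores PodDisruptionBudgets
  command: >-
    kubectl drain {{ inventory_hostname }}
    --ignore-daemonsets --delete-emptydir-data --disable-eviction
  when: drain_result.rc != 0
```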
Cristian Calin
f7242d39b9 Calico: increase calico node probe timeouts and allow tuning (#7981) (#8103) 2021-10-21 05:06:10 -07:00
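Tunables like these usually surface as role defaults feeding the calico-node probe definitions; an illustrative sketch (variable names and values are assumptions):

```yaml
# Sketch of probe tunables for calico-node; names/values are assumptions.
calico_node_livenessprobe_timeout: 10
calico_node_readinessprobe_timeout: 10
```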
Mathieu Parent
87fee0cccf [2.17] Fix containerd failed to start if apparmor is not installed (#8042)
* Ensure apparmor is installed (#8011)

Kubespray deployment failed when using the containerd backend on nodes where AppArmor was not installed or had been removed. This PR ensures AppArmor is installed by adding it to the required_pkgs var.

(cherry picked from commit 4bace2491d)

* Ensure apparmor is installed (#8036)

Kubespray deployment failed when using the containerd backend on nodes where AppArmor was not installed or had been removed. This PR ensures AppArmor is installed by adding it to the required_pkgs var.

(cherry picked from commit af04906b51)

Co-authored-by: rtsp <git@rtsp.us>
2021-10-01 10:00:24 -07:00
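The change itself is small: make sure `apparmor` is part of the packages installed during preinstall. A minimal sketch of the idea (the surrounding entries are illustrative):

```yaml
# Preinstall package list (sketch): AppArmor must be present, or
# containerd fails to start on hosts where it was removed.
required_pkgs:
  - apparmor
  - apt-transport-https
  - curl
```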
Kenichi Omichi
45018ac077 Check if openstack application credentials are empty since they always exist (#8021) (#8038)
Co-authored-by: Hugo Blom <bl0m1@users.noreply.github.com>
2021-09-30 08:02:08 -07:00
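The point of the check: those variables are always defined (possibly as empty strings), so `is defined` tests never fail and a length test is needed instead. A hedged Jinja sketch (task and variable names are assumptions based on the commit title):

```yaml
- name: Decide whether OpenStack application credentials are configured
  set_fact:
    openstack_use_app_creds: >-
      {{ external_openstack_application_credential_id | default('') | length > 0 }}
```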
Kenichi Omichi
9fafe9849b Add proxy for subscription-manager (#8012) (#8039)
If a proxy is used, it must be configured before running the
"subscription-manager status" command.
This adds that step.
2021-09-30 02:20:08 -07:00
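A hedged sketch of such a step as an Ansible task (`subscription-manager config` accepts `--server.proxy_hostname` and `--server.proxy_port`; the variable names here are illustrative):

```yaml
- name: Configure proxy for subscription-manager before the status check
  command: >-
    subscription-manager config
    --server.proxy_hostname={{ http_proxy | urlsplit('hostname') }}
    --server.proxy_port={{ http_proxy | urlsplit('port') }}
  when: http_proxy is defined
```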
Kenichi Omichi
3b2b618cd2 check if 'plugins' key exists in calico_cni_config object (#7717) (#8040)
* check if 'plugins' key exists in calico_cni_config object

* fix whitespace linting error

* fixed when list indentation

Co-authored-by: David Louks <2402775+dlouks@users.noreply.github.com>
2021-09-30 02:12:07 -07:00
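The guard reduces to a key-existence test on the parsed CNI config before indexing into it; a minimal sketch (the task around it is illustrative):

```yaml
- name: Read plugins from the existing calico CNI config, if present
  set_fact:
    calico_cni_plugins: "{{ calico_cni_config.plugins }}"
  when:
    - calico_cni_config is defined
    - "'plugins' in calico_cni_config"
```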
Kenichi Omichi
bf1bb5984b Use kube_config_dir for kubeconfig (#7996) (#8037)
The path of the kubeconfig should be configurable, and its default value
is /etc/kubernetes/admin.conf. Most references to the file are configurable,
but some were not. This makes those configurable.
2021-09-30 02:08:08 -07:00
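Illustration of the intent: build the admin kubeconfig path from `kube_config_dir` instead of hard-coding it (a sketch; the real templates may phrase it differently):

```yaml
kube_config_dir: /etc/kubernetes
# Derive the path instead of writing /etc/kubernetes/admin.conf literally:
kubeconfig_path: "{{ kube_config_dir }}/admin.conf"
```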
Kenichi Omichi
04a8a19ce6 Issue 8004: Fix typha prometheus (#8005) (#8035)
The typha prometheus settings were in the `volumeMounts` section of the
spec and not in the `env` section. This was causing the deployment to
fail because it was looking for a volumeMount.

```
failed: [controller-001.a2.da.dev.logdna.net] (item=calico-typha.yml) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "checksum": "598ac79530749e8e2110793b53fc49ac208e7130", "dest": "/etc/kubernetes/calico-typha.yml", "diff": [], "failed": false, "gid": 0, "group": "root", "invocation": {"module_args": {"_original_basename": "calico-typha.yml.j2", "attributes": null, "backup": false, "checksum": "598ac79530749e8e2110793b53fc49ac208e7130", "content": null, "delimiter": null, "dest": "/etc/kubernetes/calico-typha.yml", "directory_mode": null, "follow": false, "force": true, "group": null, "local_follow": null, "mode": null, "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/home/core/.ansible/tmp/ansible-tmp-1632349768.56-75434-32452975679246/source", "unsafe_writes": null, "validate": null}}, "item": {"file": "calico-typha.yml", "name": "calico", "type": "typha"}, "md5sum": "53c00ac7f562cf9ecbbfd27899ea066d", "mode": "0644", "owner": "root", "size": 5378, "src": "/home/core/.ansible/tmp/ansible-tmp-1632349768.56-75434-32452975679246/source", "state": "file", "uid": 0}, "msg": "error running kubectl (/opt/bin/kubectl --namespace=kube-system apply --force --filename=/etc/kubernetes/calico-typha.yml) command (rc=1), out='service/calico-typha unchanged\n', err='error: error validating \"/etc/kubernetes/calico-typha.yml\": error validating data: [ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[2]): unknown field \"value\" in io.k8s.api.core.v1.VolumeMount, ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[2]): missing required field \"mountPath\" in io.k8s.api.core.v1.VolumeMount, ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[3]): unknown field \"value\" in io.k8s.api.core.v1.VolumeMount, ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[3]): missing required field \"mountPath\" in io.k8s.api.core.v1.VolumeMount]; if you choose to ignore these errors, turn validation off with --validate=false\n'"}
```

Co-authored-by: Eric Lake <ericlake@gmail.com>
2021-09-29 10:22:49 -07:00
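The fix amounts to declaring the settings where Typha reads them, as container environment variables; a sketch of the corrected fragment (env names follow Typha's `TYPHA_PROMETHEUSMETRICS*` convention, the port value is illustrative):

```yaml
containers:
  - name: calico-typha
    env:
      # These belong under env, not volumeMounts
      - name: TYPHA_PROMETHEUSMETRICSENABLED
        value: "true"
      - name: TYPHA_PROMETHEUSMETRICSPORT
        value: "9093"
```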
Kenichi Omichi
ae1fb69382 Fix cilium operator metrics activation (#8000) (#8033)
This is a cherry-pick of 598f178054

Co-authored-by: Léopold Jacquot <leopold.jacquot@infomaniak.com>
2021-09-29 01:32:49 -07:00
Kenichi Omichi
dfee7a8ec5 Fix k8s-certs-renew cp path (#7992) (#8032)
This is a cherry-pick of 2211504790

Signed-off-by: Wang Zhen <lazybetrayer@gmail.com>

Co-authored-by: Wang Zhen <lazybetrayer@gmail.com>
2021-09-29 01:28:48 -07:00
Kenichi Omichi
bd4407199c Add metrics_server_resizer option (#8018) (#8031)
The addon-resizer container can reduce the CPU and memory resource limits
of the metrics-server container in the pod, and that caused it to be
OOMKilled.
In addition, the original metrics-server manifest doesn't contain
the addon-resizer container, as shown in [1].
So this adds a metrics_server_resizer option to control the addon-resizer
container deployment; the default value is false to keep the deployment
stable for most environments.

This is a cherry-pick of 8d3961edbe

[1]: 527679e5e8/manifests/base/deployment.yaml
2021-09-28 11:15:16 -07:00
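Usage boils down to one group var; a sketch with the default stated by the commit message (the `metrics_server_enabled` line is an assumption added for context):

```yaml
metrics_server_enabled: true
# Opt in to the addon-resizer sidecar only where it is known to behave:
metrics_server_resizer: false
```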
Kenichi Omichi
6cfa3bbb22 Remove allowPrivilegeEscalation from metrics-server (#8014) (#8025)
"allowPrivilegeEscalation: false" blocks deploying metrics-server
on CentOS7. In addition, the original metrics-server manifest doesn't
contain it as [1]. This removes it.

[1]: 527679e5e8/manifests/base/deployment.yaml
2021-09-27 23:54:43 -07:00
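For context, the field sits in the container's securityContext; a sketch of the manifest change (the other fields shown are illustrative):

```yaml
securityContext:
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  # removed by this change (broke deployment on CentOS 7):
  # allowPrivilegeEscalation: false
```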
1303 changed files with 43855 additions and 42427 deletions


@@ -7,6 +7,18 @@ skip_list:
# These rules are intentionally skipped:
#
# [E204]: "Lines should be no longer than 160 chars"
# This could be re-enabled with a major rewrite in the future.
# For now, there's not enough value gain from strictly limiting line length.
# (Disabled in May 2019)
- '204'
# [E701]: "meta/main.yml should contain relevant info"
# Roles in Kubespray are not intended to be used/imported by Ansible Galaxy.
# While it can be useful to have these metadata available, they are also available in the existing documentation.
# (Disabled in May 2019)
- '701'
# [role-name] "meta/main.yml" Role name role-name does not match ``^+$`` pattern
# Meta roles in Kubespray don't need proper names
# (Disabled in June 2021)
@@ -16,24 +28,3 @@ skip_list:
# In Kubespray we use variables that use camelCase to match their k8s counterparts
# (Disabled in June 2021)
- 'var-naming'
# [fqcn-builtins]
# Roles in kubespray don't need fully qualified collection names
# (Disabled in Feb 2023)
- 'fqcn-builtins'
# We use template in names
- 'name[template]'
# No changed-when on commands
# (Disabled in June 2023 after ansible upgrade; FIXME)
- 'no-changed-when'
# Disable run-once check with free strategy
# (Disabled in June 2023 after ansible upgrade; FIXME)
- 'run-once[task]'
exclude_paths:
# Generated files
- tests/files/custom_cni/cilium.yaml
- venv
- .github


@@ -1,8 +0,0 @@
# This file contains ignores rule violations for ansible-lint
inventory/sample/group_vars/k8s_cluster/k8s-cluster.yml jinja[spacing]
roles/kubernetes/control-plane/defaults/main/kube-proxy.yml jinja[spacing]
roles/kubernetes/control-plane/defaults/main/main.yml jinja[spacing]
roles/kubernetes/kubeadm/defaults/main.yml jinja[spacing]
roles/kubernetes/node/defaults/main.yml jinja[spacing]
roles/kubernetes/preinstall/defaults/main.yml jinja[spacing]
roles/kubespray-defaults/defaults/main/main.yml jinja[spacing]

.gitattributes vendored

@@ -1 +0,0 @@
docs/_sidebar.md linguist-generated=true

.github/ISSUE_TEMPLATE/bug-report.md vendored Normal file

@@ -0,0 +1,44 @@
---
name: Bug Report
about: Report a bug encountered while operating Kubernetes
labels: kind/bug
---
<!--
Please, be ready for followup questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
-->
**Environment**:
- **Cloud provider or hardware configuration:**
- **OS (`printf "$(uname -srm)\n$(cat /etc/os-release)\n"`):**
- **Version of Ansible** (`ansible --version`):
- **Version of Python** (`python --version`):
**Kubespray version (commit) (`git rev-parse --short HEAD`):**
**Network plugin used**:
**Full inventory with variables (`ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"`):**
<!-- We recommend using snippets services like https://gist.github.com/ etc. -->
**Command used to invoke ansible**:
**Output of ansible run**:
<!-- We recommend using snippets services like https://gist.github.com/ etc. -->
**Anything else do we need to know**:
<!-- By running scripts/collect-info.yaml you can get a lot of useful information.
The script can be started by:
ansible-playbook -i <inventory_file_path> -u <ssh_user> -e ansible_ssh_user=<ssh_user> -b --become-user=root -e dir=`pwd` scripts/collect-info.yaml
(If you are using CoreOS, remember to add '-e ansible_python_interpreter=/opt/bin/python').
After running this command you can find logs in `pwd`/logs.tar.gz. You can even upload the entire file somewhere and paste the link here.-->


@@ -1,124 +0,0 @@
---
name: Bug Report
description: Report a bug encountered while using Kubespray
labels: kind/bug
body:
- type: markdown
attributes:
value: |
Please, be ready for followup questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
- type: textarea
id: problem
attributes:
label: What happened?
description: |
Please provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner.
validations:
required: true
- type: textarea
id: expected
attributes:
label: What did you expect to happen?
validations:
required: true
- type: textarea
id: repro
attributes:
label: How can we reproduce it (as minimally and precisely as possible)?
validations:
required: true
- type: markdown
attributes:
value: '### Environment'
- type: textarea
id: os
attributes:
label: OS
placeholder: 'printf "$(uname -srm)\n$(cat /etc/os-release)\n"'
validations:
required: true
- type: textarea
id: ansible_version
attributes:
label: Version of Ansible
placeholder: 'ansible --version'
validations:
required: true
- type: input
id: python_version
attributes:
label: Version of Python
placeholder: 'python --version'
validations:
required: true
- type: input
id: kubespray_version
attributes:
label: Version of Kubespray (commit)
placeholder: 'git rev-parse --short HEAD'
validations:
required: true
- type: dropdown
id: network_plugin
attributes:
label: Network plugin used
options:
- calico
- cilium
- cni
- custom_cni
- flannel
- kube-ovn
- kube-router
- macvlan
- meta
- multus
- ovn4nfv
- weave
validations:
required: true
- type: textarea
id: inventory
attributes:
label: Full inventory with variables
placeholder: 'ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"'
description: We recommend using snippets services like https://gist.github.com/ etc.
validations:
required: true
- type: input
id: ansible_command
attributes:
label: Command used to invoke ansible
validations:
required: true
- type: textarea
id: ansible_output
attributes:
label: Output of ansible run
description: We recommend using snippets services like https://gist.github.com/ etc.
validations:
required: true
- type: textarea
id: anything_else
attributes:
label: Anything else we need to know
description: |
By running scripts/collect-info.yaml you can get a lot of useful information.
The script can be started by:
ansible-playbook -i <inventory_file_path> -u <ssh_user> -e ansible_ssh_user=<ssh_user> -b --become-user=root -e dir=`pwd` scripts/collect-info.yaml
(If you are using CoreOS, remember to add '-e ansible_python_interpreter=/opt/bin/python').
After running this command you can find logs in `pwd`/logs.tar.gz. You can even upload the entire file somewhere and paste the link here


@@ -1,5 +0,0 @@
---
contact_links:
- name: Support Request
url: https://kubernetes.slack.com/channels/kubespray
about: Support request or question relating to Kubernetes

.github/ISSUE_TEMPLATE/enhancement.md vendored Normal file

@@ -0,0 +1,11 @@
---
name: Enhancement Request
about: Suggest an enhancement to the Kubespray project
labels: kind/feature
---
<!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
**Why is this needed**:


@@ -1,20 +0,0 @@
---
name: Enhancement Request
description: Suggest an enhancement to the Kubespray project
labels: kind/feature
body:
- type: markdown
attributes:
value: Please only use this template for submitting enhancement requests
- type: textarea
id: what
attributes:
label: What would you like to be added
validations:
required: true
- type: textarea
id: why
attributes:
label: Why is this needed
validations:
required: true

.github/ISSUE_TEMPLATE/failing-test.md vendored Normal file

@@ -0,0 +1,20 @@
---
name: Failing Test
about: Report test failures in Kubespray CI jobs
labels: kind/failing-test
---
<!-- Please only use this template for submitting reports about failing tests in Kubespray CI jobs -->
**Which jobs are failing**:
**Which test(s) are failing**:
**Since when has it been failing**:
**Testgrid link**:
**Reason for failure**:
**Anything else we need to know**:


@@ -1,41 +0,0 @@
---
name: Failing Test
description: Report test failures in Kubespray CI jobs
labels: kind/failing-test
body:
- type: markdown
attributes:
value: Please only use this template for submitting reports about failing tests in Kubespray CI jobs
- type: textarea
id: failing_jobs
attributes:
label: Which jobs are failing ?
validations:
required: true
- type: textarea
id: failing_tests
attributes:
label: Which tests are failing ?
validations:
required: true
- type: input
id: since_when
attributes:
label: Since when has it been failing ?
validations:
required: true
- type: textarea
id: failure_reason
attributes:
label: Reason for failure
description: If you don't know and have no guess, just put "Unknown"
validations:
required: true
- type: textarea
id: anything_else
attributes:
label: Anything else we need to know

.github/ISSUE_TEMPLATE/support.md vendored Normal file

@@ -0,0 +1,18 @@
---
name: Support Request
about: Support request or question relating to Kubespray
labels: kind/support
---
<!--
STOP -- PLEASE READ!
GitHub is not the right place for support requests.
If you're looking for help, check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubespray) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
You can also post your question on the [Kubernetes Slack](http://slack.k8s.io/) or the [Discuss Kubernetes](https://discuss.kubernetes.io/) forum.
If the matter is security related, please disclose it privately via https://kubernetes.io/security/.
-->


@@ -1,7 +0,0 @@
version: 2
updates:
- package-ecosystem: "pip"
directory: "/"
schedule:
interval: "weekly"
labels: [ "dependencies" ]

.gitignore vendored

@@ -3,21 +3,14 @@
**/vagrant_ansible_inventory
*.iml
temp
contrib/offline/container-images
contrib/offline/container-images.tar.gz
contrib/offline/offline-files
contrib/offline/offline-files.tar.gz
.idea
.vscode
.tox
.cache
*.bak
*.tfstate
*.tfstate*backup
*.lock.hcl
*.tfstate.backup
.terraform/
contrib/terraform/aws/credentials.tfvars
.terraform.lock.hcl
/ssh-bastion.conf
**/*.sw[pon]
*~
@@ -106,17 +99,3 @@ target/
# virtualenv
venv/
ENV/
# molecule
roles/**/molecule/**/__pycache__/
# macOS
.DS_Store
# Temp location used by our scripts
scripts/tmp/
tmp.md
# Ansible collection files
kubernetes_sigs-kubespray*tar.gz
ansible_collections


@@ -1,6 +1,5 @@
---
stages:
- build
- unit-tests
- deploy-part1
- moderator
@@ -9,15 +8,14 @@ stages:
- deploy-special
variables:
KUBESPRAY_VERSION: v2.25.0
KUBESPRAY_VERSION: v2.16.0
FAILFASTCI_NAMESPACE: 'kargo-ci'
GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
ANSIBLE_FORCE_COLOR: "true"
MAGIC: "ci check this"
TEST_ID: "$CI_PIPELINE_ID-$CI_JOB_ID"
TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
CI_TEST_VARS: "./tests/files/${CI_JOB_NAME}.yml"
CI_TEST_REGISTRY_MIRROR: "./tests/common/_docker_hub_registry_mirror.yml"
CI_TEST_SETTING: "./tests/common/_kubespray_test_settings.yml"
GS_ACCESS_KEY_ID: $GS_KEY
GS_SECRET_ACCESS_KEY: $GS_SECRET
CONTAINER_ENGINE: docker
@@ -28,23 +26,25 @@ variables:
ANSIBLE_INVENTORY: ./inventory/sample/${CI_JOB_NAME}-${BUILD_NUMBER}.ini
IDEMPOT_CHECK: "false"
RESET_CHECK: "false"
REMOVE_NODE_CHECK: "false"
UPGRADE_TEST: "false"
MITOGEN_ENABLE: "false"
ANSIBLE_LOG_LEVEL: "-vv"
RECOVER_CONTROL_PLANE_TEST: "false"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]"
TERRAFORM_VERSION: 1.3.7
PIPELINE_IMAGE: "$CI_REGISTRY_IMAGE/pipeline:${CI_PIPELINE_ID}-${CI_COMMIT_SHORT_SHA}"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube_control_plane[1:]"
TERRAFORM_14_VERSION: 0.14.11
TERRAFORM_15_VERSION: 0.15.5
before_script:
- ./tests/scripts/rebase.sh
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- python -m pip uninstall -y ansible
- python -m pip install -r tests/requirements.txt
- mkdir -p /.ssh
.job: &job
tags:
- packet
image: $PIPELINE_IMAGE
image: quay.io/kubespray/kubespray:$KUBESPRAY_VERSION
artifacts:
when: always
paths:
@@ -53,7 +53,6 @@ before_script:
.testcases: &testcases
<<: *job
retry: 1
interruptible: true
before_script:
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- ./tests/scripts/rebase.sh
@@ -75,10 +74,8 @@ ci-authorized:
only: []
include:
- .gitlab-ci/build.yml
- .gitlab-ci/lint.yml
- .gitlab-ci/shellcheck.yml
- .gitlab-ci/terraform.yml
- .gitlab-ci/packet.yml
- .gitlab-ci/vagrant.yml
- .gitlab-ci/molecule.yml


@@ -1,40 +0,0 @@
---
.build:
stage: build
image:
name: moby/buildkit:rootless
entrypoint: [""]
variables:
BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
before_script:
- mkdir ~/.docker
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json
pipeline image:
extends: .build
script:
- |
buildctl-daemonless.sh build \
--frontend=dockerfile.v0 \
--local context=. \
--local dockerfile=. \
--opt filename=./pipeline.Dockerfile \
--output type=image,name=$PIPELINE_IMAGE,push=true \
--import-cache type=registry,ref=$CI_REGISTRY_IMAGE/pipeline:cache
rules:
- if: '$CI_COMMIT_REF_NAME != $CI_DEFAULT_BRANCH'
pipeline image and build cache:
extends: .build
script:
- |
buildctl-daemonless.sh build \
--frontend=dockerfile.v0 \
--local context=. \
--local dockerfile=. \
--opt filename=./pipeline.Dockerfile \
--output type=image,name=$PIPELINE_IMAGE,push=true \
--import-cache type=registry,ref=$CI_REGISTRY_IMAGE/pipeline:cache \
--export-cache type=registry,ref=$CI_REGISTRY_IMAGE/pipeline:cache,mode=max
rules:
- if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'


@@ -14,7 +14,7 @@ vagrant-validate:
stage: unit-tests
tags: [light]
variables:
VAGRANT_VERSION: 2.3.7
VAGRANT_VERSION: 2.2.19
script:
- ./tests/scripts/vagrant-validate.sh
except: ['triggers', 'master']
@@ -23,16 +23,9 @@ ansible-lint:
extends: .job
stage: unit-tests
tags: [light]
script:
- ansible-lint -v
except: ['triggers', 'master']
jinja-syntax-check:
extends: .job
stage: unit-tests
tags: [light]
script:
- "find -name '*.j2' -exec tests/scripts/check-templates.py {} +"
# lint every yml/yaml file that looks like it contains Ansible plays
script: |-
grep -Rl '^- hosts: \|^ hosts: ' --include \*.yml --include \*.yaml . | xargs -P 4 -n 25 ansible-lint -v
except: ['triggers', 'master']
syntax-check:
@@ -47,34 +40,21 @@ syntax-check:
ANSIBLE_VERBOSITY: "3"
script:
- ansible-playbook --syntax-check cluster.yml
- ansible-playbook --syntax-check playbooks/cluster.yml
- ansible-playbook --syntax-check upgrade-cluster.yml
- ansible-playbook --syntax-check playbooks/upgrade_cluster.yml
- ansible-playbook --syntax-check reset.yml
- ansible-playbook --syntax-check playbooks/reset.yml
- ansible-playbook --syntax-check extra_playbooks/upgrade-only-k8s.yml
except: ['triggers', 'master']
collection-build-install-sanity-check:
extends: .job
stage: unit-tests
tags: [light]
variables:
ANSIBLE_COLLECTIONS_PATH: "./ansible_collections"
script:
- ansible-galaxy collection build
- ansible-galaxy collection install kubernetes_sigs-kubespray-$(grep "^version:" galaxy.yml | awk '{print $2}').tar.gz
- ansible-galaxy collection list $(egrep -i '(name:\s+|namespace:\s+)' galaxy.yml | awk '{print $2}' | tr '\n' '.' | sed 's|\.$||g') | grep "^kubernetes_sigs.kubespray"
- test -f ansible_collections/kubernetes_sigs/kubespray/playbooks/cluster.yml
- test -f ansible_collections/kubernetes_sigs/kubespray/playbooks/reset.yml
except: ['triggers', 'master']
tox-inventory-builder:
stage: unit-tests
tags: [light]
extends: .job
before_script:
- ./tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible
- python -m pip install -r tests/requirements.txt
script:
- pip3 install tox
- cd contrib/inventory_builder && tox
@@ -89,35 +69,6 @@ markdownlint:
script:
- markdownlint $(find . -name '*.md' | grep -vF './.git') --ignore docs/_sidebar.md --ignore contrib/dind/README.md
generate-sidebar:
extends: .job
stage: unit-tests
tags: [light]
script:
- scripts/gen_docs_sidebar.sh
- git diff --exit-code
check-readme-versions:
stage: unit-tests
tags: [light]
image: python:3
script:
- tests/scripts/check_readme_versions.sh
check-galaxy-version:
stage: unit-tests
tags: [light]
image: python:3
script:
- tests/scripts/check_galaxy_version.sh
check-typo:
stage: unit-tests
tags: [light]
image: python:3
script:
- tests/scripts/check_typo.sh
ci-matrix:
stage: unit-tests
tags: [light]


@@ -1,83 +0,0 @@
---
.molecule:
tags: [c3.small.x86]
only: [/^pr-.*$/]
except: ['triggers']
image: $PIPELINE_IMAGE
services: []
stage: deploy-part1
before_script:
- tests/scripts/rebase.sh
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/molecule_run.sh
after_script:
- chronic ./tests/scripts/molecule_logs.sh
artifacts:
when: always
paths:
- molecule_logs/
# CI template for periodic CI jobs
# Enabled when PERIODIC_CI_ENABLED var is set
.molecule_periodic:
only:
variables:
- $PERIODIC_CI_ENABLED
allow_failure: true
extends: .molecule
molecule_full:
extends: .molecule_periodic
molecule_no_container_engines:
extends: .molecule
script:
- ./tests/scripts/molecule_run.sh -e container-engine
when: on_success
molecule_docker:
extends: .molecule
script:
- ./tests/scripts/molecule_run.sh -i container-engine/cri-dockerd
when: on_success
molecule_containerd:
extends: .molecule
script:
- ./tests/scripts/molecule_run.sh -i container-engine/containerd
when: on_success
molecule_cri-o:
extends: .molecule
stage: deploy-part2
script:
- ./tests/scripts/molecule_run.sh -i container-engine/cri-o
allow_failure: true
when: on_success
# Stage 3 container engines don't get as much attention so allow them to fail
molecule_kata:
extends: .molecule
stage: deploy-part3
script:
- ./tests/scripts/molecule_run.sh -i container-engine/kata-containers
when: manual
# FIXME: this test is broken (perma-failing)
molecule_gvisor:
extends: .molecule
stage: deploy-part3
script:
- ./tests/scripts/molecule_run.sh -i container-engine/gvisor
when: manual
# FIXME: this test is broken (perma-failing)
molecule_youki:
extends: .molecule
stage: deploy-part3
script:
- ./tests/scripts/molecule_run.sh -i container-engine/youki
when: manual
# FIXME: this test is broken (perma-failing)


@@ -2,7 +2,6 @@
.packet:
extends: .testcases
variables:
ANSIBLE_TIMEOUT: "120"
CI_PLATFORM: packet
SSH_USER: kubespray
tags:
@@ -23,81 +22,59 @@
allow_failure: true
extends: .packet
packet_cleanup_old:
stage: deploy-part1
extends: .packet_periodic
script:
- cd tests
- make cleanup-packet
after_script: []
# The ubuntu20-calico-all-in-one jobs are meant as early stages to prevent running the full CI if something is horribly broken
packet_ubuntu24-calico-all-in-one:
packet_ubuntu18-calico-aio:
stage: deploy-part1
extends: .packet_pr
when: on_success
# Future AIO job
packet_ubuntu20-calico-aio:
stage: deploy-part1
extends: .packet_pr
when: on_success
variables:
RESET_CHECK: "true"
# ### PR JOBS PART2
packet_ubuntu20-all-in-one-docker:
packet_centos7-flannel-containerd-addons-ha:
extends: .packet_pr
stage: deploy-part2
extends: .packet_pr
when: on_success
variables:
MITOGEN_ENABLE: "true"
packet_ubuntu20-calico-all-in-one:
stage: deploy-part1
extends: .packet_pr
when: on_success
packet_ubuntu20-calico-all-in-one-hardening:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu22-all-in-one-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu22-calico-all-in-one:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu24-all-in-one-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu24-calico-etcd-datastore:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_centos7-flannel-addons-ha:
packet_centos8-crio:
extends: .packet_pr
stage: deploy-part2
when: on_success
packet_almalinux8-crio:
extends: .packet_pr
stage: deploy-part2
when: on_success
allow_failure: true
packet_ubuntu20-crio:
packet_ubuntu18-crio:
extends: .packet_pr
stage: deploy-part2
when: manual
variables:
MITOGEN_ENABLE: "true"
packet_fedora37-crio:
extends: .packet_pr
packet_ubuntu16-canal-kubeadm-ha:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_ubuntu16-canal-sep:
stage: deploy-special
extends: .packet_pr
when: manual
packet_ubuntu20-flannel-ha:
packet_ubuntu16-flannel-ha:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_ubuntu16-kube-router-sep:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_ubuntu16-kube-router-svc-proxy:
stage: deploy-part2
extends: .packet_pr
when: manual
@@ -107,41 +84,18 @@ packet_debian10-cilium-svc-proxy:
extends: .packet_periodic
when: on_success
packet_debian10-calico:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_debian10-docker:
packet_debian10-containerd:
stage: deploy-part2
extends: .packet_pr
when: on_success
variables:
MITOGEN_ENABLE: "true"
packet_debian11-calico:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_debian11-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_debian12-calico:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_debian12-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_debian12-cilium:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_centos7-calico-ha-once-localhost:
stage: deploy-part2
extends: .packet_pr
@@ -152,78 +106,54 @@ packet_centos7-calico-ha-once-localhost:
services:
- docker:19.03.9-dind
packet_almalinux8-kube-ovn:
packet_centos8-kube-ovn:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_centos8-calico:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_almalinux8-calico:
packet_fedora34-weave:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_rockylinux8-calico:
packet_opensuse-canal:
stage: deploy-part2
extends: .packet_pr
extends: .packet_periodic
when: on_success
packet_rockylinux9-calico:
packet_ubuntu18-ovn4nfv:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_rockylinux9-cilium:
stage: deploy-part2
extends: .packet_pr
when: on_success
variables:
RESET_CHECK: "true"
packet_almalinux8-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_amazon-linux-2-all-in-one:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_fedora38-docker-weave:
stage: deploy-part2
extends: .packet_pr
when: on_success
allow_failure: true
packet_opensuse-docker-cilium:
stage: deploy-part2
extends: .packet_pr
extends: .packet_periodic
when: on_success
# ### MANUAL JOBS
packet_ubuntu20-docker-weave-sep:
packet_ubuntu16-weave-sep:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_ubuntu20-cilium-sep:
packet_ubuntu18-cilium-sep:
stage: deploy-special
extends: .packet_pr
when: manual
packet_ubuntu20-flannel-ha-once:
packet_ubuntu18-flannel-containerd-ha:
stage: deploy-part2
extends: .packet_pr
when: manual
# Calico HA eBPF
packet_almalinux8-calico-ha-ebpf:
packet_ubuntu18-flannel-containerd-ha-once:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_debian10-macvlan:
packet_debian9-macvlan:
stage: deploy-part2
extends: .packet_pr
when: manual
@@ -233,52 +163,40 @@ packet_centos7-calico-ha:
extends: .packet_pr
when: manual
packet_centos7-kube-router:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_centos7-multus-calico:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_fedora38-docker-calico:
packet_oracle7-canal-ha:
stage: deploy-part2
extends: .packet_periodic
when: on_success
variables:
RESET_CHECK: "true"
extends: .packet_pr
when: manual
packet_fedora37-calico-selinux:
packet_fedora33-calico:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_fedora37-calico-swap-selinux:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_almalinux8-calico-nodelocaldns-secondary:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_fedora38-kube-ovn:
packet_fedora34-calico-selinux:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_debian11-custom-cni:
packet_amazon-linux-2-aio:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_debian11-kubelet-csr-approver:
packet_fedora34-kube-ovn-containerd:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_debian12-custom-cni-helm:
stage: deploy-part2
extends: .packet_pr
when: manual
extends: .packet_periodic
when: on_success
# ### PR JOBS PART3
# Long jobs (45min+)
@@ -289,59 +207,44 @@ packet_centos7-weave-upgrade-ha:
when: on_success
variables:
UPGRADE_TEST: basic
packet_ubuntu20-calico-etcd-kubeadm-upgrade-ha:
stage: deploy-part3
extends: .packet_periodic
when: on_success
variables:
UPGRADE_TEST: basic
MITOGEN_ENABLE: "false"
# Calico HA Wireguard
packet_ubuntu20-calico-ha-wireguard:
stage: deploy-part2
extends: .packet_pr
when: manual
variables:
MITOGEN_ENABLE: "true"
packet_debian11-calico-upgrade:
packet_debian9-calico-upgrade:
stage: deploy-part3
extends: .packet_pr
when: on_success
variables:
UPGRADE_TEST: graceful
MITOGEN_ENABLE: "false"
packet_almalinux8-calico-remove-node:
stage: deploy-part3
extends: .packet_pr
when: on_success
variables:
REMOVE_NODE_CHECK: "true"
REMOVE_NODE_NAME: "instance-3"
packet_ubuntu20-calico-etcd-kubeadm:
stage: deploy-part3
extends: .packet_pr
when: on_success
packet_debian11-calico-upgrade-once:
packet_debian9-calico-upgrade-once:
stage: deploy-part3
extends: .packet_periodic
when: on_success
variables:
UPGRADE_TEST: graceful
MITOGEN_ENABLE: "false"
packet_ubuntu20-calico-ha-recover:
packet_ubuntu18-calico-ha-recover:
stage: deploy-part3
extends: .packet_periodic
when: on_success
variables:
RECOVER_CONTROL_PLANE_TEST: "true"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube_control_plane[1:]"
packet_ubuntu20-calico-ha-recover-noquorum:
packet_ubuntu18-calico-ha-recover-noquorum:
stage: deploy-part3
extends: .packet_periodic
when: on_success
variables:
RECOVER_CONTROL_PLANE_TEST: "true"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[1:]:kube_control_plane[1:]"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[1:],kube_control_plane[1:]"


@@ -11,6 +11,6 @@ shellcheck:
- cp shellcheck-"${SHELLCHECK_VERSION}"/shellcheck /usr/bin/
- shellcheck --version
script:
# Run shellcheck for all *.sh
- find . -name '*.sh' -not -path './.git/*' | xargs shellcheck --severity error
# Run shellcheck for all *.sh except contrib/
- find . -name '*.sh' -not -path './contrib/*' -not -path './.git/*' | xargs shellcheck --severity error
except: ['triggers', 'master']


@@ -53,72 +53,115 @@
# Cleanup regardless of exit code
- chronic ./tests/scripts/testcases_cleanup.sh
tf-validate-openstack:
tf-0.15.x-validate-openstack:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
TF_VERSION: $TERRAFORM_15_VERSION
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-equinix:
tf-0.15.x-validate-packet:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: equinix
TF_VERSION: $TERRAFORM_15_VERSION
PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-aws:
tf-0.15.x-validate-aws:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
TF_VERSION: $TERRAFORM_15_VERSION
PROVIDER: aws
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-exoscale:
tf-0.15.x-validate-exoscale:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
TF_VERSION: $TERRAFORM_15_VERSION
PROVIDER: exoscale
tf-validate-hetzner:
tf-0.15.x-validate-vsphere:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: hetzner
tf-validate-vsphere:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
TF_VERSION: $TERRAFORM_15_VERSION
PROVIDER: vsphere
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-upcloud:
tf-0.15.x-validate-upcloud:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
TF_VERSION: $TERRAFORM_15_VERSION
PROVIDER: upcloud
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-nifcloud:
tf-0.14.x-validate-openstack:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: nifcloud
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
# tf-packet-ubuntu20-default:
tf-0.14.x-validate-packet:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.14.x-validate-aws:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: aws
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.14.x-validate-exoscale:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: exoscale
tf-0.14.x-validate-vsphere:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: vsphere
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.14.x-validate-upcloud:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: upcloud
CLUSTER: $CI_COMMIT_REF_NAME
# tf-packet-ubuntu16-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: $TERRAFORM_VERSION
# TF_VERSION: $TERRAFORM_14_VERSION
# PROVIDER: packet
# CLUSTER: $CI_COMMIT_REF_NAME
# TF_VAR_number_of_k8s_masters: "1"
# TF_VAR_number_of_k8s_nodes: "1"
# TF_VAR_plan_k8s_masters: t1.small.x86
# TF_VAR_plan_k8s_nodes: t1.small.x86
# TF_VAR_metro: am
# TF_VAR_facility: ewr1
# TF_VAR_public_key_path: ""
# TF_VAR_operating_system: ubuntu_20_04
# TF_VAR_operating_system: ubuntu_16_04
#
# tf-packet-ubuntu18-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: $TERRAFORM_14_VERSION
# PROVIDER: packet
# CLUSTER: $CI_COMMIT_REF_NAME
# TF_VAR_number_of_k8s_masters: "1"
# TF_VAR_number_of_k8s_nodes: "1"
# TF_VAR_plan_k8s_masters: t1.small.x86
# TF_VAR_plan_k8s_nodes: t1.small.x86
# TF_VAR_facility: ams1
# TF_VAR_public_key_path: ""
# TF_VAR_operating_system: ubuntu_18_04
.ovh_variables: &ovh_variables
OS_AUTH_URL: https://auth.cloud.ovh.net/v3
@@ -144,6 +187,10 @@ tf-validate-nifcloud:
OS_INTERFACE: public
OS_IDENTITY_API_VERSION: "3"
TF_VAR_router_id: "ab95917c-41fb-4881-b507-3a6dfe9403df"
# Since ELASTX is in Stockholm, Mitogen helps with latency
MITOGEN_ENABLE: "false"
# Mitogen doesn't support interpreter discovery yet
ANSIBLE_PYTHON_INTERPRETER: "/usr/bin/python3"
tf-elastx_cleanup:
stage: unit-tests
@@ -156,14 +203,14 @@ tf-elastx_cleanup:
script:
- ./scripts/openstack-cleanup/main.py
tf-elastx_ubuntu20-calico:
tf-elastx_ubuntu18-calico:
extends: .terraform_apply
stage: deploy-part3
when: on_success
allow_failure: true
variables:
<<: *elastx_variables
TF_VERSION: $TERRAFORM_VERSION
TF_VERSION: $TERRAFORM_15_VERSION
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
ANSIBLE_TIMEOUT: "60"
@@ -186,7 +233,7 @@ tf-elastx_ubuntu20-calico:
TF_VAR_az_list_node: '["sto1"]'
TF_VAR_flavor_k8s_master: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
TF_VAR_flavor_k8s_node: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
TF_VAR_image: ubuntu-20.04-server-latest
TF_VAR_image: ubuntu-18.04-server-latest
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
# OVH voucher expired, commenting job until things are sorted out
@@ -203,13 +250,13 @@ tf-elastx_ubuntu20-calico:
# script:
# - ./scripts/openstack-cleanup/main.py
# tf-ovh_ubuntu20-calico:
# tf-ovh_ubuntu18-calico:
# extends: .terraform_apply
# when: on_success
# environment: ovh
# variables:
# <<: *ovh_variables
# TF_VERSION: $TERRAFORM_VERSION
# TF_VERSION: $TERRAFORM_14_VERSION
# PROVIDER: openstack
# CLUSTER: $CI_COMMIT_REF_NAME
# ANSIBLE_TIMEOUT: "60"
@@ -229,5 +276,5 @@ tf-elastx_ubuntu20-calico:
# TF_VAR_network_name: "Ext-Net"
# TF_VAR_flavor_k8s_master: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
# TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
# TF_VAR_image: "Ubuntu 20.04"
# TF_VAR_image: "Ubuntu 18.04"
# TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'


@@ -1,5 +1,22 @@
---
molecule_tests:
tags: [c3.small.x86]
only: [/^pr-.*$/]
except: ['triggers']
image: quay.io/kubespray/vagrant:$KUBESPRAY_VERSION
services: []
stage: deploy-part1
before_script:
- tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/molecule_run.sh
.vagrant:
extends: .testcases
variables:
@@ -10,22 +27,30 @@
tags: [c3.small.x86]
only: [/^pr-.*$/]
except: ['triggers']
image: $PIPELINE_IMAGE
image: quay.io/kubespray/vagrant:$KUBESPRAY_VERSION
services: []
before_script:
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/testcases_run.sh
after_script:
- chronic ./tests/scripts/testcases_cleanup.sh
vagrant_ubuntu20-calico-dual-stack:
vagrant_ubuntu18-calico-dual-stack:
stage: deploy-part2
extends: .vagrant
when: manual
# FIXME: this test is broken (perma-failing)
when: on_success
vagrant_ubuntu20-weave-medium:
vagrant_ubuntu18-flannel:
stage: deploy-part2
extends: .vagrant
when: on_success
vagrant_ubuntu18-weave-medium:
stage: deploy-part2
extends: .vagrant
when: manual
@@ -34,31 +59,3 @@ vagrant_ubuntu20-flannel:
stage: deploy-part2
extends: .vagrant
when: on_success
allow_failure: false
vagrant_ubuntu20-flannel-collection:
stage: deploy-part2
extends: .vagrant
when: on_success
vagrant_ubuntu20-kube-router-sep:
stage: deploy-part2
extends: .vagrant
when: manual
# Service proxy test fails connectivity testing
vagrant_ubuntu20-kube-router-svc-proxy:
stage: deploy-part2
extends: .vagrant
when: manual
vagrant_fedora37-kube-router:
stage: deploy-part2
extends: .vagrant
when: manual
# FIXME: this test is broken (perma-failing)
vagrant_centos7-kube-router:
stage: deploy-part2
extends: .vagrant
when: manual


@@ -1,3 +1,2 @@
---
MD013: false
MD029: false


@@ -1,85 +0,0 @@
---
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.4.0
hooks:
- id: check-added-large-files
- id: check-case-conflict
- id: check-executables-have-shebangs
- id: check-xml
- id: check-merge-conflict
- id: detect-private-key
- id: end-of-file-fixer
- id: forbid-new-submodules
- id: requirements-txt-fixer
- id: trailing-whitespace
- repo: https://github.com/adrienverge/yamllint.git
rev: v1.27.1
hooks:
- id: yamllint
args: [--strict]
- repo: https://github.com/markdownlint/markdownlint
rev: v0.11.0
hooks:
- id: markdownlint
args: [-r, "~MD013,~MD029"]
exclude: "^.git"
- repo: https://github.com/jumanjihouse/pre-commit-hooks
rev: 3.0.0
hooks:
- id: shellcheck
args: [--severity, "error"]
exclude: "^.git"
files: "\\.sh$"
- repo: local
hooks:
- id: ansible-lint
name: ansible-lint
entry: ansible-lint -v
language: python
pass_filenames: false
additional_dependencies:
- .[community]
- id: ansible-syntax-check
name: ansible-syntax-check
entry: env ANSIBLE_INVENTORY=inventory/local-tests.cfg ANSIBLE_REMOTE_USER=root ANSIBLE_BECOME="true" ANSIBLE_BECOME_USER=root ANSIBLE_VERBOSITY="3" ansible-playbook --syntax-check
language: python
files: "^cluster.yml|^upgrade-cluster.yml|^reset.yml|^extra_playbooks/upgrade-only-k8s.yml"
- id: tox-inventory-builder
name: tox-inventory-builder
entry: bash -c "cd contrib/inventory_builder && tox"
language: python
pass_filenames: false
- id: check-readme-versions
name: check-readme-versions
entry: tests/scripts/check_readme_versions.sh
language: script
pass_filenames: false
- id: generate-docs-sidebar
name: generate-docs-sidebar
entry: scripts/gen_docs_sidebar.sh
language: script
pass_filenames: false
- id: ci-matrix
name: ci-matrix
entry: tests/scripts/md-table/test.sh
language: script
pass_filenames: false
- id: jinja-syntax-check
name: jinja-syntax-check
entry: tests/scripts/check-templates.py
language: python
types:
- jinja
additional_dependencies:
- Jinja2


@@ -3,9 +3,6 @@ extends: default
ignore: |
.git/
.github/
# Generated file
tests/files/custom_cni/cilium.yaml
rules:
braces:


@@ -1 +0,0 @@
# See our release notes on [GitHub](https://github.com/kubernetes-sigs/kubespray/releases)

CNAME

@@ -1 +1 @@
kubespray.io
kubespray.io


@@ -6,23 +6,11 @@
It is recommended to use filters to manage GitHub email notifications, see [examples for setting filters to Kubernetes Github notifications](https://github.com/kubernetes/community/blob/master/communication/best-practices.md#examples-for-setting-filters-to-kubernetes-github-notifications)
To install development dependencies you can set up a python virtual env with the necessary dependencies:
```ShellSession
virtualenv venv
source venv/bin/activate
pip install -r tests/requirements.txt
ansible-galaxy install -r tests/requirements.yml
```
To install development dependencies you can use `pip install -r tests/requirements.txt`
#### Linting
Kubespray uses [pre-commit](https://pre-commit.com) hook configuration to run several linters, please install this tool and use it to run validation tests before submitting a PR.
```ShellSession
pre-commit install
pre-commit run -a # To run pre-commit hook on all files in the repository, even if they were not modified
```
Kubespray uses `yamllint` and `ansible-lint`. To run them locally use `yamllint .` and `ansible-lint`
#### Molecule
@@ -39,9 +27,5 @@ Vagrant with VirtualBox or libvirt driver helps you to quickly spin test cluster
1. Submit an issue describing your proposed change to the repo in question.
2. The [repo owners](OWNERS) will respond to your issue promptly.
3. Fork the desired repo, develop and test your code changes.
4. Install [pre-commit](https://pre-commit.com) and install it in your development repo.
5. Address any pre-commit validation failures.
6. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
7. Submit a pull request.
8. Work with the reviewers on their suggestions.
9. Ensure to rebase to the HEAD of your target branch and squash un-necessary commits (<https://blog.carbonfive.com/always-squash-and-rebase-your-git-commits/>) before final merger of your contribution.
4. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
5. Submit a pull request.


@@ -1,52 +1,33 @@
# syntax=docker/dockerfile:1
# Use immutable image tags rather than mutable tags (like ubuntu:18.04)
FROM ubuntu:bionic-20200807
# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a445a7ca37
RUN apt update -y \
&& apt install -y \
libssl-dev python3-dev sshpass apt-transport-https jq moreutils \
ca-certificates curl gnupg2 software-properties-common python3-pip rsync git \
&& rm -rf /var/lib/apt/lists/*
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
&& add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" \
&& apt update -y && apt-get install --no-install-recommends -y docker-ce \
&& rm -rf /var/lib/apt/lists/*
# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
# (and potentially other packages)
# See: https://github.com/pypa/pip/issues/10219
ENV LANG=C.UTF-8 \
DEBIAN_FRONTEND=noninteractive \
PYTHONDONTWRITEBYTECODE=1
ENV LANG=C.UTF-8
WORKDIR /kubespray
COPY . .
RUN /usr/bin/python3 -m pip install --no-cache-dir pip -U \
&& /usr/bin/python3 -m pip install --no-cache-dir -r tests/requirements.txt \
&& python3 -m pip install --no-cache-dir -r requirements.txt \
&& update-alternatives --install /usr/bin/python python /usr/bin/python3 1
# hadolint ignore=DL3008
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt-get update -q \
&& apt-get install -yq --no-install-recommends \
curl \
python3 \
python3-pip \
sshpass \
vim \
rsync \
openssh-client \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/log/*
RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
--mount=type=cache,sharing=locked,id=pipcache,mode=0777,target=/root/.cache/pip \
pip install --no-compile --no-cache-dir -r requirements.txt \
&& find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN --mount=type=bind,source=roles/kubespray-defaults/defaults/main/main.yml,target=roles/kubespray-defaults/defaults/main/main.yml \
KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main/main.yml) \
OS_ARCHITECTURE=$(dpkg --print-architecture) \
&& curl -L "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
&& echo "$(curl -L "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl
COPY *.yml ./
COPY *.cfg ./
COPY roles ./roles
COPY contrib ./contrib
COPY inventory ./inventory
COPY library ./library
COPY extra_playbooks ./extra_playbooks
COPY playbooks ./playbooks
COPY plugins ./plugins
RUN KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
&& curl -LO https://storage.googleapis.com/kubernetes-release/release/$KUBE_VERSION/bin/linux/amd64/kubectl \
&& chmod a+x kubectl \
&& mv kubectl /usr/local/bin/kubectl


@@ -187,7 +187,7 @@
identification within third-party archives.
Copyright 2016 Kubespray
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at


@@ -1,7 +1,5 @@
mitogen:
@echo Mitogen support is deprecated.
@echo Please run the following command manually:
@echo ansible-playbook -c local mitogen.yml -vv
ansible-playbook -c local mitogen.yml -vv
clean:
rm -rf dist/
rm *.retry

OWNERS

@@ -5,4 +5,4 @@ approvers:
reviewers:
- kubespray-reviewers
emeritus_approvers:
- kubespray-emeritus_approvers
- kubespray-emeritus_approvers


@@ -1,24 +1,18 @@
aliases:
kubespray-approvers:
- cristicalin
- floryut
- liupeng0518
- mzaian
- oomichi
- yankay
kubespray-reviewers:
- cyclinder
- erikjiang
- mrfreezeex
- mzaian
- vannten
- yankay
kubespray-emeritus_approvers:
- ant31
- atoms
- chadswen
- luckysb
- mattymo
- chadswen
- mirwan
- miouge1
- riverzhang
- woopstar
- luckysb
- floryut
kubespray-reviewers:
- holmsten
- bozzo
- eppo
- oomichi
kubespray-emeritus_approvers:
- riverzhang
- atoms
- ant31

README.md

@@ -5,7 +5,7 @@
If you have questions, check the documentation at [kubespray.io](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/)
- Can be deployed on **[AWS](docs/cloud_providers/aws.md), GCE, [Azure](docs/cloud_providers/azure.md), [OpenStack](docs/cloud_providers/openstack.md), [vSphere](docs/cloud_providers/vsphere.md), [Equinix Metal](docs/cloud_providers/equinix-metal.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- Can be deployed on **[AWS](docs/aws.md), GCE, [Azure](docs/azure.md), [OpenStack](docs/openstack.md), [vSphere](docs/vsphere.md), [Equinix Metal](docs/equinix-metal.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports most popular **Linux distributions**
@@ -13,16 +13,16 @@ You can get your invite [here](http://slack.k8s.io/)
## Quick Start
Below are several ways to use Kubespray to deploy a Kubernetes cluster.
To deploy the cluster you can use :
### Ansible
#### Usage
Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
then run the following steps:
```ShellSession
# Install dependencies from ``requirements.txt``
sudo pip3 install -r requirements.txt
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
@@ -34,13 +34,6 @@ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inv
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
# Clean up old Kubernetes cluster with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example cleaning up SSL keys in /etc/,
# uninstalling old packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
# And be mind it will remove the current kubernetes cluster (if it's running)!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root reset.yml
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
@@ -48,172 +41,139 @@ ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```
Note: When Ansible is already installed via system packages on the control node,
Python packages installed via `sudo pip install -r requirements.txt` will go to
a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on
Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on
Ubuntu). As a consequence, the `ansible-playbook` command will fail with:
Note: When Ansible is already installed via system packages on the control machine, other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on Ubuntu).
As a consequence, the `ansible-playbook` command will fail with:
```raw
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
```
This likely indicates that a task depends on a module present in ``requirements.txt``.
probably pointing to a task depending on a module present in requirements.txt.
One way of addressing this is to uninstall the system Ansible package then
reinstall Ansible via ``pip``, but this is not always possible and one must
take care regarding package versions.
A workaround consists of setting the `ANSIBLE_LIBRARY`
and `ANSIBLE_MODULE_UTILS` environment variables respectively to
the `ansible/modules` and `ansible/module_utils` subdirectories of the ``pip``
installation location, which is the ``Location`` shown by running
`pip show [package]` before executing `ansible-playbook`.
One way of solving this would be to uninstall the Ansible package and then install it via pip, but it is not always possible.
A workaround consists of setting `ANSIBLE_LIBRARY` and `ANSIBLE_MODULE_UTILS` environment variables respectively to the `ansible/modules` and `ansible/module_utils` subdirectories of pip packages installation location, which can be found in the Location field of the output of `pip show [package]` before executing `ansible-playbook`.
A simple way to ensure you get all the correct version of Ansible is to use
the [pre-built docker image from Quay](https://quay.io/repository/kubespray/kubespray?tab=tags).
You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mounts/)
to access the inventory and SSH key in the container, like this:
A simple way to ensure you get all the correct version of Ansible is to use the [pre-built docker image from Quay](https://quay.io/repository/kubespray/kubespray?tab=tags).
You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mounts/) to get the inventory and ssh key into the container, like this:
```ShellSession
git checkout v2.25.0
docker pull quay.io/kubespray/kubespray:v2.25.0
docker pull quay.io/kubespray/kubespray:v2.16.0
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.25.0 bash
quay.io/kubespray/kubespray:v2.16.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```
#### Collection
See [here](docs/ansible/ansible_collection.md) if you wish to use this repository as an Ansible collection
### Vagrant
For Vagrant we need to install Python dependencies for provisioning tasks.
Check that ``Python`` and ``pip`` are installed:
For Vagrant we need to install python dependencies for provisioning tasks.
Check if Python and pip are installed:
```ShellSession
python -V && pip -V
```
If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
then run the following step:
Install the necessary requirements
```ShellSession
sudo pip install -r requirements.txt
vagrant up
```
## Documents
- [Requirements](#requirements)
- [Kubespray vs ...](docs/getting_started/comparisons.md)
- [Getting started](docs/getting_started/getting-started.md)
- [Setting up your first cluster](docs/getting_started/setting-up-your-first-cluster.md)
- [Ansible inventory and tags](docs/ansible/ansible.md)
- [Integration with existing ansible repo](docs/operations/integration.md)
- [Deployment data variables](docs/ansible/vars.md)
- [DNS stack](docs/advanced/dns-stack.md)
- [HA mode](docs/operations/ha-mode.md)
- [Network plugins](#network-plugins)
- [Vagrant install](docs/developers/vagrant.md)
- [Flatcar Container Linux bootstrap](docs/operating_systems/flatcar.md)
- [Fedora CoreOS bootstrap](docs/operating_systems/fcos.md)
- [openSUSE setup](docs/operating_systems/opensuse.md)
- [Downloaded artifacts](docs/advanced/downloads.md)
- [Cloud providers](docs/cloud_providers/cloud.md)
- [OpenStack](docs/cloud_providers/openstack.md)
- [AWS](docs/cloud_providers/aws.md)
- [Azure](docs/cloud_providers/azure.md)
- [vSphere](docs/cloud_providers/vsphere.md)
- [Equinix Metal](docs/cloud_providers/equinix-metal.md)
- [Large deployments](docs/operations/large-deployments.md)
- [Adding/replacing a node](docs/operations/nodes.md)
- [Upgrades basics](docs/operations/upgrades.md)
- [Air-Gap installation](docs/operations/offline-environment.md)
- [NTP](docs/advanced/ntp.md)
- [Hardening](docs/operations/hardening.md)
- [Mirror](docs/operations/mirror.md)
- [Roadmap](docs/roadmap/roadmap.md)
## Supported Linux Distributions
- **Flatcar Container Linux by Kinvolk**
- **Debian** Bookworm, Bullseye, Buster
- **Ubuntu** 20.04, 22.04, 24.04
- **CentOS/RHEL** 7, [8, 9](docs/operating_systems/centos.md#centos-8)
- **Fedora** 37, 38
- **Fedora CoreOS** (see [fcos Note](docs/operating_systems/fcos.md))
- **openSUSE** Leap 15.x/Tumbleweed
- **Oracle Linux** 7, [8, 9](docs/operating_systems/centos.md#centos-8)
- **Alma Linux** [8, 9](docs/operating_systems/centos.md#centos-8)
- **Rocky Linux** [8, 9](docs/operating_systems/centos.md#centos-8)
- **Kylin Linux Advanced Server V10** (experimental: see [kylin linux notes](docs/operating_systems/kylinlinux.md))
- **Amazon Linux 2** (experimental: see [amazon linux notes](docs/operating_systems/amazonlinux.md))
- **UOS Linux** (experimental: see [uos linux notes](docs/operating_systems/uoslinux.md))
- **openEuler** (experimental: see [openEuler notes](docs/operating_systems/openeuler.md))
Note: Upstart/SysV init based OS types are not supported.
## Supported Components
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.29.5
- [etcd](https://github.com/etcd-io/etcd) v3.5.12
- [docker](https://www.docker.com/) v26.1
- [containerd](https://containerd.io/) v1.7.16
- [cri-o](http://cri-o.io/) v1.29.1 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) v1.2.0
- [calico](https://github.com/projectcalico/calico) v3.27.3
- [cilium](https://github.com/cilium/cilium) v1.15.4
- [flannel](https://github.com/flannel-io/flannel) v0.22.0
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.11.5
- [kube-router](https://github.com/cloudnativelabs/kube-router) v2.0.0
- [multus](https://github.com/k8snetworkplumbingwg/multus-cni) v3.8
- [weave](https://github.com/weaveworks/weave) v2.8.1
- [kube-vip](https://github.com/kube-vip/kube-vip) v0.8.0
- Application
- [cert-manager](https://github.com/jetstack/cert-manager) v1.13.2
- [coredns](https://github.com/coredns/coredns) v1.11.1
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.10.1
- [krew](https://github.com/kubernetes-sigs/krew) v0.4.4
- [argocd](https://argoproj.github.io/) v2.11.0
- [helm](https://helm.sh/) v3.14.2
- [metallb](https://metallb.universe.tf/) v0.13.9
- [registry](https://github.com/distribution/distribution) v2.8.1
- Storage Plugin
- [aws-ebs-csi-plugin](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) v0.5.0
- [azure-csi-plugin](https://github.com/kubernetes-sigs/azuredisk-csi-driver) v1.10.0
- [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) v1.29.0
- [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) v1.9.2
- [local-path-provisioner](https://github.com/rancher/local-path-provisioner) v0.0.24
- [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) v2.5.0
- [node-feature-discovery](https://github.com/kubernetes-sigs/node-feature-discovery) v0.14.2
## Container Runtime Notes
- The available docker versions are 18.09, 19.03 and 20.10. The recommended docker version is 20.10. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the yum versionlock plugin or apt pinning (see the sketch after this list).
- The cri-o version should be aligned with the respective kubernetes version (i.e. kube_version=1.20.x, crio_version=1.20)
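As a sketch of such pinning (the `docker-ce`/`containerd.io` package names are assumptions that depend on how docker was installed):
```ShellSession
# Debian/Ubuntu: hold the docker packages at their current version
sudo apt-mark hold docker-ce docker-ce-cli containerd.io
# RHEL/CentOS: lock the installed version with the versionlock plugin
sudo yum install -y yum-plugin-versionlock
sudo yum versionlock add docker-ce docker-ce-cli containerd.io
```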
## Requirements
- **Minimum required version of Kubernetes is v1.28**
- **Ansible v2.14+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/operations/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**.
- If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
- The **firewalls are not managed**: you'll need to implement your own rules the way you used to. In order to avoid any issue during deployment, you should disable your firewall.
- If kubespray is run from non-root user account, correct privilege escalation method
should be configured in the target servers. Then the `ansible_become` flag
or command parameters `--become or -b` should be specified.
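As a quick sketch of the last two points (inventory path and user are illustrative assumptions):
```ShellSession
# Verify/enable IPv4 forwarding on each target server
sysctl net.ipv4.ip_forward                 # should report 1
sudo sysctl -w net.ipv4.ip_forward=1
# Run kubespray from a non-root account with privilege escalation
ansible-playbook -i inventory/sample/inventory.ini --become --become-user=root cluster.yml
```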
Hardware:
These limits are safeguarded by Kubespray. Actual requirements for your workload can differ. For a sizing guide go to the [Building Large Clusters](https://kubernetes.io/docs/setup/cluster-large/#size-of-master-and-master-components) guide.
- Master
- Memory: 1500 MB
@@ -222,43 +182,46 @@ These limits are safeguarded by Kubespray. Actual requirements for your workload
## Network Plugins
You can choose among ten network plugins. (default: `calico`, except Vagrant uses `flannel`)
- [flannel](docs/CNI/flannel.md): gre/vxlan (layer 2) networking.
- [Calico](https://docs.tigera.io/calico/latest/about/) is a networking and network policy provider. Calico supports a flexible set of networking options
designed to give you the most efficient networking across a range of situations, including non-overlay
and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts,
pods, and (if using Istio and Envoy) applications at the service mesh layer.
- [canal](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
- [weave](docs/CNI/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
(Please refer to `weave` [troubleshooting documentation](https://www.weave.works/docs/net/latest/troubleshooting/)).
- [kube-ovn](docs/CNI/kube-ovn.md): Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.
- [kube-router](docs/CNI/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational
simplicity and high performance: it uses IPVS to provide Kube Services Proxy (if setup to replace kube-proxy),
iptables for network policies, and BGP for pods L3 networking (with optionally BGP peering with out-of-cluster BGP peers).
It can also optionally advertise routes to Kubernetes cluster Pods CIDRs, ClusterIPs, ExternalIPs and LoadBalancerIPs.
- [macvlan](docs/CNI/macvlan.md): Macvlan is a Linux network driver. Pods have their own unique MAC and IP address, connected directly to the physical (layer 2) network.
- [multus](docs/CNI/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.
- [custom_cni](roles/network-plugin/custom_cni/): You can specify some manifests that will be applied to the clusters to bring your own CNI and use ones not supported by Kubespray.
See `tests/files/custom_cni/README.md` and `tests/files/custom_cni/values.yaml` for an example with a CNI provided by a Helm Chart.
The network plugin to use is defined by the variable `kube_network_plugin`. There is also an
option to leverage built-in cloud provider networking instead.
See also [Network checker](docs/advanced/netcheck.md).
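A minimal sketch of switching plugins for a single run (the variable can also be set permanently in the inventory group vars):
```ShellSession
ansible-playbook -i inventory/sample/inventory.ini -e kube_network_plugin=cilium cluster.yml
```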
## Ingress Plugins
- [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.
- [metallb](docs/ingress/metallb.md): the MetalLB bare-metal service LoadBalancer provider.
## Community docs and resources
@@ -271,12 +234,11 @@ See also [Network checker](docs/advanced/netcheck.md).
- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/v4/doc/integrations/ansible.rst)
- [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)
- [Kubean](https://github.com/kubean-io/kubean)
## CI Tests
[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/pipeline.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/-/pipelines)
CI/end-to-end tests sponsored by: [CNCF](https://cncf.io), [Equinix Metal](https://metal.equinix.com/), [OVHcloud](https://www.ovhcloud.com/), [ELASTX](https://elastx.se/).
See the [test matrix](docs/developers/test_cases.md) for details.
See the [test matrix](docs/test_cases.md) for details.

@@ -2,20 +2,17 @@
The Kubespray Project is released on an as-needed basis. The process is as follows:
1. An issue is proposing a new release with a changelog since the last release. Please see [a good sample issue](https://github.com/kubernetes-sigs/kubespray/issues/8325)
1. At least one of the [approvers](OWNERS_ALIASES) must approve this release
1. (Only for major releases) The `kube_version_min_required` variable is set to `n-1`
1. (Only for major releases) Remove hashes for [EOL versions](https://github.com/kubernetes/website/blob/main/content/en/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
1. Create the release note with [Kubernetes Release Notes Generator](https://github.com/kubernetes/release/blob/master/cmd/release-notes/README.md). See the following `Release note creation` section for the details.
1. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
1. (Only for major releases) An approver creates a release branch in the form `release-X.Y`
1. (For major releases) On the `master` branch: bump the version in `galaxy.yml` to the next expected major release (X.y.0 with y = Y + 1), make a Pull Request.
1. (For minor releases) On the `release-X.Y` branch: bump the version in `galaxy.yml` to the next expected minor release (X.Y.z with z = Z + 1), make a Pull Request (see the sketch after this list).
1. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) container images are built and tagged. See the following `Container image creation` section for the details.
1. (Only for major releases) The `KUBESPRAY_VERSION` in `.gitlab-ci.yml` is upgraded to the version we just released # TODO clarify this, this variable is for testing upgrades.
1. The release issue is closed
1. An announcement email is sent to `dev@kubernetes.io` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
1. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
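As a sketch, the `galaxy.yml` version bumps described above are one-line edits; e.g. (version numbers purely illustrative, `version:` field per the standard collection layout):
```shell
# On master, after releasing vX.Y.0, point galaxy.yml at the next expected major release
sed -i 's/^version: .*/version: X.y.0/' galaxy.yml
```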
## Major/minor releases and milestones
@@ -49,37 +46,3 @@ The Kubespray Project is released on an as-needed basis. The process is as follo
then Kubespray v2.1.0 may be bound to only minor changes to `kube_version`, like v1.5.1
and *any* changes to other components, like etcd v4, or calico 1.2.3.
And Kubespray v3.x.x shall be bound to `kube_version: 2.x.x` respectively.
## Release note creation
You can create a release note with:
```shell
export GITHUB_TOKEN=<your-github-token>
export ORG=kubernetes-sigs
export REPO=kubespray
release-notes --start-sha <The start commit-id> --end-sha <The end commit-id> --dependencies=false --output=/tmp/kubespray-release-note --required-author=""
```
If the release note file (/tmp/kubespray-release-note) contains "### Uncategorized" pull requests, those pull requests don't have a valid kind label (`kind/feature`, etc.).
It is necessary to put a valid label on each pull request and run the above release-notes command again to get a better release note.
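If needed, a missing kind label can also be applied from the command line; a sketch using the GitHub CLI (the PR number is a placeholder):
```shell
gh pr edit <pr-number> --add-label kind/feature
```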
## Container image creation
The container image `quay.io/kubespray/kubespray:vX.Y.Z` can be created from Dockerfile of the kubespray root directory:
```shell
cd kubespray/
nerdctl build -t quay.io/kubespray/kubespray:vX.Y.Z .
nerdctl push quay.io/kubespray/kubespray:vX.Y.Z
```
The container image `quay.io/kubespray/vagrant:vX.Y.Z` can be created from build.sh of test-infra/vagrant-docker/:
```shell
cd kubespray/test-infra/vagrant-docker/
./build vX.Y.Z
```
Please note that the above operation requires the permission to push container images into quay.io/kubespray/.
If you don't have the permission, please ask for it on the #kubespray-dev channel.

@@ -9,7 +9,5 @@
#
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
# INSTRUCTIONS AT https://kubernetes.io/security/
atoms
mattymo
floryut
oomichi
cristicalin

Vagrantfile
@@ -19,27 +19,21 @@ SUPPORTED_OS = {
"flatcar-beta" => {box: "flatcar-beta", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["beta"]},
"flatcar-alpha" => {box: "flatcar-alpha", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["alpha"]},
"flatcar-edge" => {box: "flatcar-edge", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["edge"]},
"ubuntu1604" => {box: "generic/ubuntu1604", user: "vagrant"},
"ubuntu1804" => {box: "generic/ubuntu1804", user: "vagrant"},
"ubuntu2004" => {box: "generic/ubuntu2004", user: "vagrant"},
"ubuntu2204" => {box: "generic/ubuntu2204", user: "vagrant"},
"ubuntu2404" => {box: "bento/ubuntu-24.04", user: "vagrant"},
"centos" => {box: "centos/7", user: "vagrant"},
"centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
"centos8" => {box: "centos/8", user: "vagrant"},
"centos8-bento" => {box: "bento/centos-8", user: "vagrant"},
"almalinux8" => {box: "almalinux/8", user: "vagrant"},
"almalinux8-bento" => {box: "bento/almalinux-8", user: "vagrant"},
"rockylinux8" => {box: "rockylinux/8", user: "vagrant"},
"rockylinux9" => {box: "rockylinux/9", user: "vagrant"},
"fedora37" => {box: "fedora/37-cloud-base", user: "vagrant"},
"fedora38" => {box: "fedora/38-cloud-base", user: "vagrant"},
"opensuse" => {box: "opensuse/Leap-15.4.x86_64", user: "vagrant"},
"fedora33" => {box: "fedora/33-cloud-base", user: "vagrant"},
"fedora34" => {box: "fedora/34-cloud-base", user: "vagrant"},
"opensuse" => {box: "bento/opensuse-leap-15.2", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"},
"oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
"oraclelinux8" => {box: "generic/oracle8", user: "vagrant"},
"rhel7" => {box: "generic/rhel7", user: "vagrant"},
"rhel8" => {box: "generic/rhel8", user: "vagrant"},
"debian11" => {box: "debian/bullseye64", user: "vagrant"},
"debian12" => {box: "debian/bookworm64", user: "vagrant"},
}
if File.exist?(CONFIG)
@@ -56,16 +50,16 @@ $shared_folders ||= {}
$forwarded_ports ||= {}
$subnet ||= "172.18.8"
$subnet_ipv6 ||= "fd3c:b398:0698:0756"
$os ||= "ubuntu2004"
$os ||= "ubuntu1804"
$network_plugin ||= "flannel"
# Setting multi_networking to true will install Multus: https://github.com/k8snetworkplumbingwg/multus-cni
$multi_networking ||= "False"
$download_run_once ||= "True"
$download_force_cache ||= "False"
$download_force_cache ||= "True"
# The first three nodes are etcd servers
$etcd_instances ||= [$num_instances, 3].min
# The first two nodes are kube masters
$kube_master_instances ||= [$num_instances, 2].min
# All nodes are kube nodes
$kube_node_instances ||= $num_instances
# The following only works when using the libvirt provider
@@ -74,27 +68,14 @@ $kube_node_instances_with_disks_size ||= "20G"
$kube_node_instances_with_disks_number ||= 2
$override_disk_size ||= false
$disk_size ||= "20GB"
$local_path_provisioner_enabled ||= "False"
$local_path_provisioner_claim_root ||= "/opt/local-path-provisioner/"
$libvirt_nested ||= false
# boolean or string (e.g. "-vvv")
$ansible_verbosity ||= false
$ansible_tags ||= ENV['VAGRANT_ANSIBLE_TAGS'] || ""
$vagrant_dir ||= File.join(File.dirname(__FILE__), ".vagrant")
$playbook ||= "cluster.yml"
$extra_vars ||= {}
host_vars = {}
# throw error if os is not supported
if ! SUPPORTED_OS.key?($os)
puts "Unsupported OS: #{$os}"
puts "Supported OS are: #{SUPPORTED_OS.keys.join(', ')}"
exit 1
end
$box = SUPPORTED_OS[$os][:box]
# if $inventory is not set, try to use example
$inventory = "inventory/sample" if ! $inventory
@@ -103,7 +84,7 @@ $inventory = File.absolute_path($inventory, File.dirname(__FILE__))
# if $inventory has a hosts.ini file use it, otherwise copy over
# vars etc to where vagrant expects dynamic inventory to be
if ! File.exist?(File.join(File.dirname($inventory), "hosts.ini"))
$vagrant_ansible = File.join(File.absolute_path($vagrant_dir), "provisioners", "ansible")
FileUtils.mkdir_p($vagrant_ansible) if ! File.exist?($vagrant_ansible)
$vagrant_inventory = File.join($vagrant_ansible,"inventory")
FileUtils.rm_f($vagrant_inventory)
@@ -186,15 +167,7 @@ Vagrant.configure("2") do |config|
# always make /dev/sd{a/b/c} so that CI can ensure that
# virtualbox and libvirt will have the same devices to use for OSDs
(1..$kube_node_instances_with_disks_number).each do |d|
lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "scsi"
end
end
node.vm.provider :virtualbox do |vb|
# always make /dev/sd{a/b/c} so that CI can ensure that
# virtualbox and libvirt will have the same devices to use for OSDs
(1..$kube_node_instances_with_disks_number).each do |d|
vb.customize ['createhd', '--filename', "disk-#{i}-#{driverletters[d]}-#{DISK_UUID}.disk", '--size', $kube_node_instances_with_disks_size] # 10GB disk
vb.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', d, '--device', 0, '--type', 'hdd', '--medium', "disk-#{i}-#{driverletters[d]}-#{DISK_UUID}.disk", '--nonrotational', 'on', '--mtype', 'normal']
lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "ide"
end
end
end
@@ -222,8 +195,7 @@ Vagrant.configure("2") do |config|
end
ip = "#{$subnet}.#{i+100}"
node.vm.network :private_network,
:ip => ip,
:libvirt__guest_ipv6 => 'yes',
:libvirt__ipv6_address => "#{$subnet_ipv6}::#{i+100}",
:libvirt__ipv6_prefix => "64",
@@ -233,29 +205,14 @@ Vagrant.configure("2") do |config|
# Disable swap for each vm
node.vm.provision "shell", inline: "swapoff -a"
# ubuntu2004 and ubuntu2204 have IPv6 explicitly disabled. This undoes that.
if ["ubuntu2004", "ubuntu2204"].include? $os
node.vm.provision "shell", inline: "rm -f /etc/modprobe.d/local.conf"
node.vm.provision "shell", inline: "sed -i '/net.ipv6.conf.all.disable_ipv6/d' /etc/sysctl.d/99-sysctl.conf /etc/sysctl.conf"
end
# Hack for fedora37/38 to get the IP address of the second interface
if ["fedora37", "fedora38"].include? $os
config.vm.provision "shell", inline: <<-SHELL
nmcli conn modify 'Wired connection 2' ipv4.addresses $(cat /etc/sysconfig/network-scripts/ifcfg-eth1 | grep IPADDR | cut -d "=" -f2)
nmcli conn modify 'Wired connection 2' ipv4.method manual
service NetworkManager restart
SHELL
end
# Rockylinux boxes needs UEFI
if ["rockylinux8", "rockylinux9"].include? $os
config.vm.provider "libvirt" do |domain|
domain.loader = "/usr/share/OVMF/x64/OVMF_CODE.fd"
end
end
# Disable firewalld on oraclelinux/redhat vms
if ["oraclelinux","oraclelinux8","rhel7","rhel8","rockylinux8"].include? $os
if ["oraclelinux","oraclelinux8","rhel7","rhel8"].include? $os
node.vm.provision "shell", inline: "systemctl stop firewalld; systemctl disable firewalld"
end
@@ -277,17 +234,13 @@ Vagrant.configure("2") do |config|
"kubectl_localhost": "True",
"local_path_provisioner_enabled": "#{$local_path_provisioner_enabled}",
"local_path_provisioner_claim_root": "#{$local_path_provisioner_claim_root}",
"ansible_ssh_user": SUPPORTED_OS[$os][:user],
"unsafe_show_logs": "True"
"ansible_ssh_user": SUPPORTED_OS[$os][:user]
}
# Only execute the Ansible provisioner once, when all the machines are up and ready.
# And limit the action to gathering facts, the full playbook is going to be ran by testcases_run.sh
if i == $num_instances
node.vm.provision "ansible" do |ansible|
ansible.playbook = $playbook
ansible.compatibility_mode = "2.0"
ansible.verbose = $ansible_verbosity
$ansible_inventory_path = File.join( $inventory, "hosts.ini")
if File.exist?($ansible_inventory_path)
ansible.inventory_path = $ansible_inventory_path
@@ -297,10 +250,7 @@ Vagrant.configure("2") do |config|
ansible.host_key_checking = false
ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
ansible.host_vars = host_vars
ansible.extra_vars = $extra_vars
if $ansible_tags != ""
ansible.tags = [$ansible_tags]
end
ansible.groups = {
"etcd" => ["#{$instance_name_prefix}-[1:#{$etcd_instances}]"],
"kube_control_plane" => ["#{$instance_name_prefix}-[1:#{$kube_master_instances}]"],

@@ -3,6 +3,7 @@ pipelining=True
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
#control_path = ~/.ssh/ansible-%%r@%%h:%%p
[defaults]
strategy_plugins = plugins/mitogen/ansible_mitogen/plugins/strategy
# https://github.com/ansible/ansible/issues/56930 (to ignore group names with - and .)
force_valid_group_names = ignore
@@ -10,11 +11,11 @@ host_key_checking=False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp
fact_caching_timeout = 86400
stdout_callback = default
display_skipped_hosts = no
library = ./library
callbacks_enabled = profile_tasks,ara_default
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
deprecation_warnings=False
inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds, .gpg

@@ -1,27 +1,38 @@
---
- name: Check Ansible version
hosts: all
gather_facts: false
become: no
run_once: true
vars:
minimal_ansible_version: 2.16.4
maximal_ansible_version: 2.17.0
ansible_connection: local
tags: always
tasks:
- name: "Check {{ minimal_ansible_version }} <= Ansible version < {{ maximal_ansible_version }}"
assert:
msg: "Ansible must be between {{ minimal_ansible_version }} and {{ maximal_ansible_version }} exclusive - you have {{ ansible_version.string }}"
msg: "Ansible must be between {{ minimal_ansible_version }} and {{ maximal_ansible_version }}"
that:
- ansible_version.string is version(minimal_ansible_version, ">=")
- ansible_version.string is version(maximal_ansible_version, "<")
tags:
- check
- name: "Check Ansible version > {{ minimal_ansible_version_2_10 }} when using ansible 2.10"
assert:
msg: "When using Ansible 2.10, the minimum supported version is {{ minimal_ansible_version_2_10 }}"
that:
- ansible_version.string is version(minimal_ansible_version_2_10, ">=")
- ansible_version.string is version(maximal_ansible_version, "<")
when:
- ansible_version.string is version('2.10.0', ">=")
tags:
- check
- name: "Check that python netaddr is installed"
assert:
msg: "Python netaddr is not present"
that: "'127.0.0.1' | ansible.utils.ipaddr"
that: "'127.0.0.1' | ipaddr"
tags:
- check

@@ -1,3 +1,127 @@
---
- name: Install Kubernetes
ansible.builtin.import_playbook: playbooks/cluster.yml
- name: Check ansible version
import_playbook: ansible_version.yml
- name: Ensure compatibility with old groups
import_playbook: legacy_groups.yml
- hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
- hosts: k8s_cluster:etcd
strategy: linear
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
gather_facts: false
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: bootstrap-os, tags: bootstrap-os}
- name: Gather facts
tags: always
import_playbook: facts.yml
- hosts: k8s_cluster:etcd
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes/preinstall, tags: preinstall }
- { role: "container-engine", tags: "container-engine", when: deploy_container_engine|default(true) }
- { role: download, tags: download, when: "not skip_downloads" }
- hosts: etcd
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: true
etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
when: not etcd_kubeadm_enabled | default(false)
- hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: false
etcd_events_cluster_setup: false
when: not etcd_kubeadm_enabled | default(false)
- hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes/node, tags: node }
- hosts: kube_control_plane
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes/control-plane, tags: master }
- { role: kubernetes/client, tags: client }
- { role: kubernetes-apps/cluster_roles, tags: cluster-roles }
- hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes/kubeadm, tags: kubeadm}
- { role: kubernetes/node-label, tags: node-label }
- { role: network_plugin, tags: network }
- hosts: calico_rr
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: network_plugin/calico/rr, tags: ['network', 'calico_rr'] }
- hosts: kube_control_plane[0]
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"] }
- hosts: kube_control_plane
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes-apps/external_cloud_controller, tags: external-cloud-controller }
- { role: kubernetes-apps/network_plugin, tags: network }
- { role: kubernetes-apps/policy_controller, tags: policy-controller }
- { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
- { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
- { role: kubernetes-apps, tags: apps }
- hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }

@@ -39,7 +39,7 @@ class SearchEC2Tags(object):
hosts[group] = []
tag_key = "kubespray-role"
tag_value = ["*"+group+"*"]
region = os.environ['AWS_REGION']
ec2 = boto3.resource('ec2', region)
filters = [{'Name': 'tag:'+tag_key, 'Values': tag_value}, {'Name': 'instance-state-name', 'Values': ['running']}]
@@ -67,11 +67,6 @@ class SearchEC2Tags(object):
if node_labels_tag:
ansible_host['node_labels'] = dict([ label.strip().split('=') for label in node_labels_tag[0]['Value'].split(',') ])
##Set when instance actually has node_taints
node_taints_tag = list(filter(lambda t: t['Key'] == 'kubespray-node-taints', instance.tags))
if node_taints_tag:
ansible_host['node_taints'] = list([ taint.strip() for taint in node_taints_tag[0]['Value'].split(',') ])
hosts[group].append(dns_name)
hosts['_meta']['hostvars'][dns_name] = ansible_host

@@ -1 +1 @@
boto3 # Apache-2.0
@@ -1,2 +1,2 @@
.generated
/inventory
@@ -47,10 +47,6 @@ If you need to delete all resources from a resource group, simply call:
**WARNING** this really deletes everything from your resource group, including everything that was later created by you!
## Installing Ansible and the dependencies
Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
## Generating an inventory for kubespray
After you have applied the templates, you can generate an inventory with this call:
@@ -63,5 +59,6 @@ It will create the file ./inventory which can then be used with kubespray, e.g.:
```shell
cd kubespray-root-dir
sudo pip3 install -r requirements.txt
ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all/all.yml" cluster.yml
```

@@ -1,6 +1,5 @@
---
- name: Generate Azure inventory
hosts: localhost
gather_facts: False
roles:
- generate-inventory

@@ -1,6 +1,5 @@
---
- name: Generate Azure inventory
hosts: localhost
gather_facts: False
roles:
- generate-inventory_2

@@ -1,6 +1,5 @@
---
- name: Generate Azure templates
hosts: localhost
gather_facts: False
roles:
- generate-templates

@@ -1,6 +1,6 @@
---
- name: Query Azure VMs
command: azure vm list-ip-address --json {{ azure_resource_group }}
register: vm_list_cmd

@@ -1,14 +1,14 @@
---
- name: Query Azure VMs IPs
command: az vm list-ip-addresses -o json --resource-group {{ azure_resource_group }}
register: vm_ip_list_cmd
- name: Query Azure VMs Roles
command: az vm list -o json --resource-group {{ azure_resource_group }}
register: vm_list_cmd
- name: Query Azure Load Balancer Public IP
command: az network public-ip show -o json -g {{ azure_resource_group }} -n kubernetes-api-pubip
register: lb_pubip_cmd

@@ -31,3 +31,4 @@
[k8s_cluster:children]
kube_node
kube_control_plane

@@ -24,14 +24,14 @@ bastionIPAddressName: bastion-pubip
disablePasswordAuthentication: true
sshKeyPath: "/home/{{ admin_username }}/.ssh/authorized_keys"
sshKeyPath: "/home/{{admin_username}}/.ssh/authorized_keys"
imageReference:
publisher: "OpenLogic"
offer: "CentOS"
sku: "7.5"
version: "latest"
imageReferenceJson: "{{ imageReference | to_json }}"
imageReferenceJson: "{{imageReference|to_json}}"
storageAccountName: "sa{{ nameSuffix | replace('-', '') }}"
storageAccountName: "sa{{nameSuffix | replace('-', '')}}"
storageAccountType: "{{ azure_storage_account_type | default('Standard_LRS') }}"

@@ -103,4 +103,4 @@
}
{% endif %}
]
}
@@ -5,4 +5,4 @@
"variables": {},
"resources": [],
"outputs": {}
}
@@ -16,4 +16,4 @@
}
}
]
}
@@ -1,11 +1,9 @@
---
- name: Create nodes as docker containers
hosts: localhost
gather_facts: False
roles:
- { role: dind-host }
- name: Customize each node containers
hosts: containers
roles:
- { role: dind-cluster }

@@ -1,9 +1,9 @@
---
- name: Set_fact distro_setup
set_fact:
distro_setup: "{{ distro_settings[node_distro] }}"
- name: Set_fact other distro settings
set_fact:
distro_user: "{{ distro_setup['user'] }}"
distro_ssh_service: "{{ distro_setup['ssh_service'] }}"
@@ -43,7 +43,7 @@
package:
name: "{{ item }}"
state: present
with_items: "{{ distro_extra_packages + ['rsyslog', 'openssh-server'] }}"
with_items: "{{ distro_extra_packages }} + [ 'rsyslog', 'openssh-server' ]"
- name: Start needed services
service:
@@ -66,8 +66,8 @@
dest: "/etc/sudoers.d/{{ distro_user }}"
mode: 0640
- name: "Add my pubkey to {{ distro_user }} user authorized keys"
ansible.posix.authorized_key:
- name: Add my pubkey to "{{ distro_user }}" user authorized keys
authorized_key:
user: "{{ distro_user }}"
state: present
key: "{{ lookup('file', lookup('env', 'HOME') + '/.ssh/id_rsa.pub') }}"
key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/id_rsa.pub') }}"

View File

@@ -1,9 +1,9 @@
---
- name: Set_fact distro_setup
set_fact:
distro_setup: "{{ distro_settings[node_distro] }}"
- name: Set_fact other distro settings
set_fact:
distro_image: "{{ distro_setup['image'] }}"
distro_init: "{{ distro_setup['init'] }}"
@@ -13,7 +13,7 @@
distro_agetty_svc: "{{ distro_setup['agetty_svc'] }}"
- name: Create dind node containers from "containers" inventory section
community.docker.docker_container:
image: "{{ distro_image }}"
name: "{{ item }}"
state: started
@@ -53,7 +53,7 @@
{{ distro_raw_setup_done }} && echo SKIPPED && exit 0
until [ "$(readlink /proc/1/exe)" = "{{ distro_pid1_exe }}" ] ; do sleep 1; done
{{ distro_raw_setup }}
delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
with_items: "{{ containers.results }}"
register: result
changed_when: result.stdout.find("SKIPPED") < 0
@@ -63,25 +63,26 @@
until test -S /var/run/dbus/system_bus_socket; do sleep 1; done
systemctl disable {{ distro_agetty_svc }}
systemctl stop {{ distro_agetty_svc }}
delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
with_items: "{{ containers.results }}"
changed_when: false
# Running systemd-machine-id-setup doesn't create a unique id for each node container on Debian,
# handle manually
- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave)
raw: |
echo {{ item | hash('sha1') }} > /etc/machine-id.new
mv -b /etc/machine-id.new /etc/machine-id
cmp /etc/machine-id /etc/machine-id~ || true
systemctl daemon-reload
delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
with_items: "{{ containers.results }}"
- name: Early hack image install to adapt for DIND
# noqa 302 - this task uses the raw module intentionally
raw: |
rm -fv /usr/bin/udevadm /usr/sbin/udevadm
delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
with_items: "{{ containers.results }}"
register: result
changed_when: result.stdout.find("removed") >= 0

@@ -17,7 +17,7 @@ pass_or_fail() {
test_distro() {
local distro=${1:?};shift
local extra="${*:-}"
local prefix="${distro[${extra}]}"
local prefix="$distro[${extra}]}"
ansible-playbook -i hosts dind-cluster.yaml -e node_distro=$distro
pass_or_fail "$prefix: dind-nodes" || return 1
(cd ../..
@@ -71,15 +71,15 @@ for spec in ${SPECS}; do
echo "Loading file=${spec} ..."
. ${spec} || continue
: ${DISTROS:?} || continue
echo "DISTROS:" "${DISTROS[@]}"
echo "DISTROS=${DISTROS[@]}"
echo "EXTRAS->"
printf " %s\n" "${EXTRAS[@]}"
let n=1
for distro in "${DISTROS[@]}"; do
for extra in "${EXTRAS[@]:-NULL}"; do
# Magic value to let this for run once:
[[ ${extra} == NULL ]] && unset extra
docker rm -f "${NODES[@]}"
printf -v file_out "%s/%s-%02d.out" ${OUTPUT_DIR} ${spec} $((n++))
{
info "${distro}[${extra}] START: file_out=${file_out}"

@@ -83,15 +83,11 @@ class KubesprayInventory(object):
self.config_file = config_file
self.yaml_config = {}
loadPreviousConfig = False
printHostnames = False
# See whether there are any commands to process
if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
if changed_hosts[0] == "add":
loadPreviousConfig = True
changed_hosts = changed_hosts[1:]
elif changed_hosts[0] == "print_hostnames":
loadPreviousConfig = True
printHostnames = True
else:
self.parse_command(changed_hosts[0], changed_hosts[1:])
sys.exit(0)
@@ -109,10 +105,6 @@ class KubesprayInventory(object):
print(e)
sys.exit(1)
if printHostnames:
self.print_hostnames()
sys.exit(0)
self.ensure_required_groups(ROLES)
if changed_hosts:

@@ -1,3 +1,3 @@
configparser>=3.3.0
ipaddress
ruamel.yaml>=0.15.88
@@ -1,3 +1,3 @@
hacking>=0.10.2
mock>=1.3.0
pytest>=2.8.0
@@ -13,7 +13,6 @@
# under the License.
import inventory
from io import StringIO
import unittest
from unittest import mock
@@ -27,28 +26,6 @@ if path not in sys.path:
import inventory # noqa
class TestInventoryPrintHostnames(unittest.TestCase):
@mock.patch('ruamel.yaml.YAML.load')
def test_print_hostnames(self, load_mock):
mock_io = mock.mock_open(read_data='')
load_mock.return_value = OrderedDict({'all': {'hosts': {
'node1': {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'},
'node2': {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'}}}})
with mock.patch('builtins.open', mock_io):
with self.assertRaises(SystemExit) as cm:
with mock.patch('sys.stdout', new_callable=StringIO) as stdout:
inventory.KubesprayInventory(
changed_hosts=["print_hostnames"],
config_file="file")
self.assertEqual("node1 node2\n", stdout.getvalue())
self.assertEqual(cm.exception.code, 0)
class TestInventory(unittest.TestCase):
@mock.patch('inventory.sys')
def setUp(self, sys_mock):

@@ -1,27 +1,21 @@
[tox]
minversion = 1.6
skipsdist = True
envlist = pep8
[testenv]
allowlist_externals = py.test
usedevelop = True
deps =
-r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
setenv = VIRTUAL_ENV={envdir}
passenv =
http_proxy
HTTP_PROXY
https_proxy
HTTPS_PROXY
no_proxy
NO_PROXY
commands = pytest -vv #{posargs:./tests}
[testenv:pep8]
usedevelop = False
allowlist_externals = bash
commands =
bash -c "find {toxinidir}/* -type f -name '*.py' -print0 | xargs -0 flake8"

@@ -1,2 +1,3 @@
#k8s_deployment_user: kubespray
#k8s_deployment_user_pkey_path: /tmp/ssh_rsa

@@ -1,9 +1,8 @@
---
- name: Prepare Hypervisor to later install kubespray VMs
hosts: localhost
gather_facts: False
become: yes
vars:
bootstrap_os: none
roles:
- { role: kvm-setup }
@@ -22,9 +22,9 @@
- ntp
when: ansible_os_family == "Debian"
- name: Create deployment user if required
include_tasks: user.yml
when: k8s_deployment_user is defined
- name: Set proper sysctl values
import_tasks: sysctl.yml
@@ -1,6 +1,6 @@
---
- name: Load br_netfilter module
community.general.modprobe:
name: br_netfilter
state: present
register: br_netfilter
@@ -25,19 +25,19 @@
- name: Enable net.ipv4.ip_forward in sysctl
ansible.posix.sysctl:
name: net.ipv4.ip_forward
value: 1
sysctl_file: "{{ sysctl_file_path }}"
state: present
reload: yes
- name: Set bridge-nf-call-{arptables,iptables} to 0
ansible.posix.sysctl:
name: "{{ item }}"
state: present
value: 0
sysctl_file: "{{ sysctl_file_path }}"
reload: yes
with_items:
- net.bridge.bridge-nf-call-arptables

@@ -1,29 +1,24 @@
---
- name: Bootstrap hosts
hosts: gfs-cluster
gather_facts: false
vars:
ansible_ssh_pipelining: false
roles:
- { role: bootstrap-os, tags: bootstrap-os}
- name: Gather facts
hosts: all
gather_facts: true
- name: Install glusterfs server
hosts: gfs-cluster
vars:
ansible_ssh_pipelining: true
roles:
- { role: glusterfs/server }
- name: Install glusterfs servers
hosts: k8s_cluster
roles:
- { role: glusterfs/client }
- name: Configure Kubernetes to use glusterfs
hosts: kube_control_plane[0]
roles:
- { role: kubernetes-pv }

@@ -41,3 +41,4 @@
# [network-storage:children]
# gfs-cluster

@@ -14,16 +14,12 @@ This role performs basic installation and setup of Gluster, but it does not conf
Available variables are listed below, along with default values (see `defaults/main.yml`):
```yaml
glusterfs_default_release: ""
```
glusterfs_default_release: ""
You can specify a `default_release` for apt on Debian/Ubuntu by overriding this variable. This is helpful if you need a different package or version for the main GlusterFS packages (e.g. GlusterFS 3.5.x instead of 3.2.x with the `wheezy-backports` default release on Debian Wheezy).
```yaml
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.5"
```
For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/) for more info.
@@ -33,11 +29,9 @@ None.
## Example Playbook
```yaml
- hosts: server
roles:
- geerlingguy.glusterfs
```
For a real-world use example, read through [Simple GlusterFS Setup with Ansible](http://www.jeffgeerling.com/blog/simple-glusterfs-setup-ansible), a blog post by this role's author, which is included in Chapter 8 of [Ansible for DevOps](https://www.ansiblefordevops.com/).

@@ -6,12 +6,12 @@ galaxy_info:
description: GlusterFS installation for Linux.
company: "Midwestern Mac, LLC"
license: "license (BSD, MIT)"
min_ansible_version: "2.0"
platforms:
- name: EL
versions:
- "6"
- "7"
- name: Ubuntu
versions:
- precise

@@ -3,19 +3,14 @@
# hyperkube and needs to be installed as part of the system.
# Setup/install tasks.
- name: Setup RedHat distros for glusterfs
include_tasks: setup-RedHat.yml
when: ansible_os_family == 'RedHat' and groups['gfs-cluster'] is defined
- name: Setup Debian distros for glusterfs
include_tasks: setup-Debian.yml
when: ansible_os_family == 'Debian' and groups['gfs-cluster'] is defined
- name: Ensure Gluster mount directories exist.
file:
path: "{{ item }}"
state: directory
mode: 0775
file: "path={{ item }} state=directory mode=0775"
with_items:
- "{{ gluster_mount_dir }}"
when: ansible_os_family in ["Debian","RedHat"] and groups['gfs-cluster'] is defined

@@ -7,7 +7,7 @@
register: glusterfs_ppa_added
when: glusterfs_ppa_use
- name: Ensure GlusterFS client will reinstall if the PPA was just added. # noqa no-handler
apt:
name: "{{ item }}"
state: absent

@@ -1,14 +1,10 @@
---
- name: Install Prerequisites
package:
name: "{{ item }}"
state: present
with_items:
- "centos-release-gluster{{ glusterfs_default_release }}"
- name: Install Packages
package:
name: "{{ item }}"
state: present
with_items:
- glusterfs-client

@@ -6,12 +6,12 @@ galaxy_info:
description: GlusterFS installation for Linux.
company: "Midwestern Mac, LLC"
license: "license (BSD, MIT)"
min_ansible_version: "2.0"
platforms:
- name: EL
versions:
- "6"
- "7"
- name: Ubuntu
versions:
- precise

@@ -4,97 +4,78 @@
include_vars: "{{ ansible_os_family }}.yml"
# Install xfs package
- name: Install xfs Debian
apt:
name: xfsprogs
state: present
when: ansible_os_family == "Debian"
- name: Install xfs RedHat
package:
name: xfsprogs
state: present
when: ansible_os_family == "RedHat"
# Format external volumes in xfs
- name: Format volumes in xfs
community.general.filesystem:
fstype: xfs
dev: "{{ disk_volume_device_1 }}"
filesystem: "fstype=xfs dev={{ disk_volume_device_1 }}"
# Mount external volumes
- name: Mounting new xfs filesystem
ansible.posix.mount:
name: "{{ gluster_volume_node_mount_dir }}"
src: "{{ disk_volume_device_1 }}"
fstype: xfs
state: mounted
# Setup/install tasks.
- name: Setup RedHat distros for glusterfs
include_tasks: setup-RedHat.yml
when: ansible_os_family == 'RedHat'
- name: Setup Debian distros for glusterfs
include_tasks: setup-Debian.yml
when: ansible_os_family == 'Debian'
- name: Ensure GlusterFS is started and enabled at boot.
service:
name: "{{ glusterfs_daemon }}"
state: started
enabled: yes
service: "name={{ glusterfs_daemon }} state=started enabled=yes"
- name: Ensure Gluster brick and mount directories exist.
file:
path: "{{ item }}"
state: directory
mode: 0775
file: "path={{ item }} state=directory mode=0775"
with_items:
- "{{ gluster_brick_dir }}"
- "{{ gluster_mount_dir }}"
- name: Configure Gluster volume with replicas
gluster.gluster.gluster_volume:
state: present
name: "{{ gluster_brick_name }}"
brick: "{{ gluster_brick_dir }}"
replicas: "{{ groups['gfs-cluster'] | length }}"
cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip'] | default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
host: "{{ inventory_hostname }}"
force: yes
run_once: true
when: groups['gfs-cluster'] | length > 1
- name: Configure Gluster volume without replicas
gluster.gluster.gluster_volume:
state: present
name: "{{ gluster_brick_name }}"
brick: "{{ gluster_brick_dir }}"
cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip'] | default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
host: "{{ inventory_hostname }}"
force: yes
run_once: true
when: groups['gfs-cluster'] | length <= 1
- name: Mount glusterfs to retrieve disk size
ansible.posix.mount:
name: "{{ gluster_mount_dir }}"
src: "{{ ip | default(ansible_default_ipv4['address']) }}:/gluster"
src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
fstype: glusterfs
opts: "defaults,_netdev"
state: mounted
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Get Gluster disk size
setup:
filter: ansible_mounts
register: mounts_data
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Set Gluster disk size to variable
set_fact:
gluster_disk_size_gb: "{{ (mounts_data.ansible_facts.ansible_mounts | selectattr('mount', 'equalto', gluster_mount_dir) | map(attribute='size_total') | first | int / (1024 * 1024 * 1024)) | int }}"
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Create file on GlusterFS
@@ -105,9 +86,9 @@
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Unmount glusterfs
ansible.posix.mount:
name: "{{ gluster_mount_dir }}"
fstype: glusterfs
src: "{{ ip | default(ansible_default_ipv4['address']) }}:/gluster"
src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
state: unmounted
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

View File
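
For reference, the `cluster` expression in the two volume tasks above joins every gfs-cluster host into one comma-separated address list, preferring an explicit `ip` hostvar and falling back to the detected default IPv4 address. A minimal sketch of how it renders, assuming a hypothetical two-node inventory:

# Hypothetical inventory (hosts and addresses are illustrative):
#   [gfs-cluster]
#   gfs0 ip=10.0.0.10
#   gfs1                # no explicit ip; ansible_default_ipv4.address is 10.0.0.11
#
# The Jinja loop then renders:
#   cluster: "10.0.0.10,10.0.0.11"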

@@ -7,7 +7,7 @@
  register: glusterfs_ppa_added
  when: glusterfs_ppa_use

- name: Ensure GlusterFS will reinstall if the PPA was just added. # noqa no-handler
  apt:
    name: "{{ item }}"
    state: absent

View File

@@ -1,15 +1,11 @@
---
- name: Install Prerequisites
  package:
    name: "{{ item }}"
    state: present
  with_items:
    - "centos-release-gluster{{ glusterfs_default_release }}"

- name: Install Packages
  package:
    name: "{{ item }}"
    state: present
  with_items:
    - glusterfs-server
    - glusterfs-client

View File

@@ -3,7 +3,6 @@
  template:
    src: "{{ item.file }}"
    dest: "{{ kube_config_dir }}/{{ item.dest }}"
    mode: 0644
  with_items:
    - { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
    - { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}
@@ -18,6 +17,6 @@
    kubectl: "{{ bin_dir }}/kubectl"
    resource: "{{ item.item.type }}"
    filename: "{{ kube_config_dir }}/{{ item.item.dest }}"
    state: "{{ item.changed | ternary('latest', 'present') }}"
  with_items: "{{ gluster_pv.results }}"
  when: inventory_hostname == groups['kube_control_plane'][0] and groups['gfs-cluster'] is defined

View File

@@ -1,11 +1,9 @@
---
- name: Tear down heketi
  hosts: kube_control_plane[0]
  roles:
    - { role: tear-down }

- name: Teardown disks in heketi
  hosts: heketi-node
  become: yes
  roles:
    - { role: tear-down-disks }

View File

@@ -1,11 +1,9 @@
---
- name: Prepare heketi install
  hosts: heketi-node
  roles:
    - { role: prepare }

- name: Provision heketi
  hosts: kube_control_plane[0]
  tags:
    - "provision"
  roles:
View File

@@ -2,13 +2,6 @@ all:
  vars:
    heketi_admin_key: "11elfeinhundertundelf"
    heketi_user_key: "!!einseinseins"
    glusterfs_daemonset:
      readiness_probe:
        timeout_seconds: 3
        initial_delay_seconds: 3
      liveness_probe:
        timeout_seconds: 3
        initial_delay_seconds: 10
  children:
    k8s_cluster:
      vars:

View File
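
The `glusterfs_daemonset` probe settings introduced in this sample inventory feed the GlusterFS DaemonSet template shown later in this diff. A sketch of tuning them per cluster, assuming the same variable layout (override values here are illustrative):

all:
  vars:
    glusterfs_daemonset:
      readiness_probe:
        timeout_seconds: 5        # illustrative override; the sample default is 3
        initial_delay_seconds: 3
      liveness_probe:
        timeout_seconds: 5        # illustrative override; the sample default is 3
        initial_delay_seconds: 10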

@@ -5,7 +5,7 @@
    - "dm_snapshot"
    - "dm_mirror"
    - "dm_thin_pool"
  community.general.modprobe:
    name: "{{ item }}"
    state: "present"

View File

@@ -1,3 +1,3 @@
---
- name: "Stop port forwarding"
  command: "killall "

View File

@@ -7,9 +7,9 @@
- name: "Bootstrap heketi."
when:
- "(initial_heketi_state.stdout | from_json | json_query(\"items[?kind=='Service']\")) | length == 0"
- "(initial_heketi_state.stdout | from_json | json_query(\"items[?kind=='Deployment']\")) | length == 0"
- "(initial_heketi_state.stdout | from_json | json_query(\"items[?kind=='Pod']\")) | length == 0"
- "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Service']\"))|length == 0"
- "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Deployment']\"))|length == 0"
- "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Pod']\"))|length == 0"
include_tasks: "bootstrap/deploy.yml"
# Prepare heketi topology
@@ -20,11 +20,11 @@
- name: "Ensure heketi bootstrap pod is up."
assert:
that: "(initial_heketi_pod.stdout | from_json | json_query('items[*]')) | length == 1"
that: "(initial_heketi_pod.stdout|from_json|json_query('items[*]'))|length == 1"
- name: Store the initial heketi pod name
set_fact:
initial_heketi_pod_name: "{{ initial_heketi_pod.stdout | from_json | json_query(\"items[*].metadata.name | [0]\") }}"
initial_heketi_pod_name: "{{ initial_heketi_pod.stdout|from_json|json_query(\"items[*].metadata.name|[0]\") }}"
- name: "Test heketi topology."
changed_when: false
@@ -32,7 +32,7 @@
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
- name: "Load heketi topology."
when: "heketi_topology.stdout | from_json | json_query(\"clusters[*].nodes[*]\") | flatten | length == 0"
when: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*]\")|flatten|length == 0"
include_tasks: "bootstrap/topology.yml"
# Provision heketi database volume
@@ -58,7 +58,7 @@
    service_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Service']"
    job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job']"
  when:
    - "heketi_storage_state.stdout | from_json | json_query(secret_query) | length == 0"
    - "heketi_storage_state.stdout | from_json | json_query(endpoints_query) | length == 0"
    - "heketi_storage_state.stdout | from_json | json_query(service_query) | length == 0"
    - "heketi_storage_state.stdout | from_json | json_query(job_query) | length == 0"

View File
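
The guards above all follow one pattern: capture kubectl's JSON output, parse it with `from_json`, filter it with `json_query`, and only run the creation tasks when the filter matches nothing. A minimal sketch of the same guard pattern, with illustrative task, variable, and file names:

- name: "Check for existing deploy-heketi resources."   # illustrative
  command: "{{ bin_dir }}/kubectl get deployments --selector=deploy-heketi --output=json"
  register: state
  changed_when: false

- name: "Deploy only when nothing matched."              # illustrative
  include_tasks: "deploy.yml"
  when: "(state.stdout | from_json | json_query('items[*]')) | length == 0"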

@@ -17,11 +17,11 @@
  register: "initial_heketi_state"
  vars:
    initial_heketi_state: { stdout: "{}" }
    pods_query: "items[?kind=='Pod'].status.conditions | [0][?type=='Ready'].status | [0]"
    deployments_query: "items[?kind=='Deployment'].status.conditions | [0][?type=='Available'].status | [0]"
  command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json"
  until:
    - "initial_heketi_state.stdout | from_json | json_query(pods_query) == 'True'"
    - "initial_heketi_state.stdout | from_json | json_query(deployments_query) == 'True'"
  retries: 60
  delay: 5

View File
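
The task above also shows the polling idiom used throughout these roles: `until` re-evaluates the parsed JSON after each attempt, bounded by `retries` and `delay`, so 60 retries at 5 seconds gives roughly a 5-minute ceiling. A condensed sketch under the same assumptions (task and variable names are illustrative):

- name: "Wait for the bootstrap pod to report Ready."    # illustrative
  command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json"
  register: pod_state
  changed_when: false
  vars:
    ready_query: "items[?kind=='Pod'].status.conditions | [0][?type=='Ready'].status | [0]"
  until: "pod_state.stdout | from_json | json_query(ready_query) == 'True'"
  retries: 60   # up to 60 attempts...
  delay: 5      # ...5 seconds apart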

@@ -15,10 +15,10 @@
    service_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Service']"
    job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job']"
  when:
    - "heketi_storage_state.stdout | from_json | json_query(secret_query) | length == 0"
    - "heketi_storage_state.stdout | from_json | json_query(endpoints_query) | length == 0"
    - "heketi_storage_state.stdout | from_json | json_query(service_query) | length == 0"
    - "heketi_storage_state.stdout | from_json | json_query(job_query) | length == 0"
  register: "heketi_storage_result"

- name: "Get state of heketi database copy job."
  command: "{{ bin_dir }}/kubectl get jobs --output=json"
@@ -28,6 +28,6 @@
    heketi_storage_state: { stdout: "{}" }
    job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job' && status.succeeded==1]"
  until:
    - "heketi_storage_state.stdout | from_json | json_query(job_query) | length == 1"
  retries: 60
  delay: 5

View File

@@ -5,10 +5,10 @@
  changed_when: false

- name: "Delete bootstrap Heketi."
  command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"deploy-heketi\""
  when: "heketi_resources.stdout | from_json | json_query('items[*]') | length > 0"

- name: "Ensure there is nothing left over."
  command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"deploy-heketi\" -o=json"
  register: "heketi_result"
  until: "heketi_result.stdout | from_json | json_query('items[*]') | length == 0"
  retries: 60
  delay: 5

View File

@@ -14,7 +14,7 @@
- name: "Copy topology configuration into container."
changed_when: false
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ initial_heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology." # noqa no-handler
- name: "Load heketi topology." # noqa 503
when: "render.changed"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
register: "load_heketi"
@@ -22,6 +22,6 @@
changed_when: false
register: "heketi_topology"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
until: "heketi_topology.stdout | from_json | json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\") | flatten | length == groups['heketi-node'] | length"
until: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\")|flatten|length == groups['heketi-node']|length"
retries: 60
delay: 5

View File

@@ -6,19 +6,19 @@
- name: "Get heketi volumes."
changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume info {{ volume_id }} --json"
with_items: "{{ heketi_volumes.stdout | from_json | json_query(\"volumes[*]\") }}"
with_items: "{{ heketi_volumes.stdout|from_json|json_query(\"volumes[*]\") }}"
loop_control: { loop_var: "volume_id" }
register: "volumes_information"
- name: "Test heketi database volume."
set_fact: { heketi_database_volume_exists: true }
with_items: "{{ volumes_information.results }}"
loop_control: { loop_var: "volume_information" }
vars: { volume: "{{ volume_information.stdout | from_json }}" }
vars: { volume: "{{ volume_information.stdout|from_json }}" }
when: "volume.name == 'heketidbstorage'"
- name: "Provision database volume."
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} setup-openshift-heketi-storage"
when: "heketi_database_volume_exists is undefined"
- name: "Copy configuration from pod."
- name: "Copy configuration from pod." # noqa 301
become: true
command: "{{ bin_dir }}/kubectl cp {{ initial_heketi_pod_name }}:/heketi-storage.json {{ kube_config_dir }}/heketi-storage-bootstrap.json"
- name: "Get heketi volume ids."
@@ -28,14 +28,14 @@
- name: "Get heketi volumes."
changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume info {{ volume_id }} --json"
with_items: "{{ heketi_volumes.stdout | from_json | json_query(\"volumes[*]\") }}"
with_items: "{{ heketi_volumes.stdout|from_json|json_query(\"volumes[*]\") }}"
loop_control: { loop_var: "volume_id" }
register: "volumes_information"
- name: "Test heketi database volume."
set_fact: { heketi_database_volume_created: true }
with_items: "{{ volumes_information.results }}"
loop_control: { loop_var: "volume_information" }
vars: { volume: "{{ volume_information.stdout | from_json }}" }
vars: { volume: "{{ volume_information.stdout|from_json }}" }
when: "volume.name == 'heketidbstorage'"
- name: "Ensure heketi database volume exists."
assert: { that: "heketi_database_volume_created is defined", msg: "Heketi database volume does not exist." }

View File

@@ -23,8 +23,8 @@
  changed_when: false
  vars:
    daemonset_state: { stdout: "{}" }
    ready: "{{ daemonset_state.stdout | from_json | json_query(\"status.numberReady\") }}"
    desired: "{{ daemonset_state.stdout | from_json | json_query(\"status.desiredNumberScheduled\") }}"
  until: "ready | int >= 3"
  retries: 60
  delay: 5

View File

@@ -5,7 +5,7 @@
  changed_when: false

- name: "Assign storage label"
  when: "label_present.stdout_lines | length == 0"
  command: "{{ bin_dir }}/kubectl label node {{ node }} storagenode=glusterfs"

- name: Get storage nodes again
@@ -15,5 +15,5 @@
- name: Ensure the label has been set
  assert:
    that: "label_present | length > 0"
    msg: "Node {{ node }} has not been assigned with label storagenode=glusterfs."

View File

@@ -24,11 +24,11 @@
    deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
  command: "{{ bin_dir }}/kubectl get deployments,pods --selector=glusterfs --output=json"
  until:
    - "heketi_state.stdout | from_json | json_query(pods_query) == 'True'"
    - "heketi_state.stdout | from_json | json_query(deployments_query) == 'True'"
  retries: 60
  delay: 5

- name: Set the Heketi pod name
  set_fact:
    heketi_pod_name: "{{ heketi_state.stdout | from_json | json_query(\"items[?kind=='Pod'].metadata.name|[0]\") }}"

View File
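
The `json_query` above extracts the first Pod name from the kubectl listing. A worked example with abridged, illustrative data:

# Given stdout like:
#   {"items": [{"kind": "Deployment", "metadata": {"name": "heketi"}},
#              {"kind": "Pod", "metadata": {"name": "heketi-b8c48-xm59q"}}]}
# items[?kind=='Pod'] keeps only the Pod entries, .metadata.name projects
# their names, and |[0] takes the first match, so:
#   heketi_pod_name == "heketi-b8c48-xm59q"     (pod name is illustrative)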

@@ -5,7 +5,7 @@
  changed_when: false

- name: "Kubernetes Apps | Deploy cluster role binding."
  when: "clusterrolebinding_state.stdout | length == 0"
  command: "{{ bin_dir }}/kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account"

- name: Get clusterrolebindings again
@@ -31,7 +31,7 @@
    mode: 0644

- name: "Deploy Heketi config secret"
  when: "secret_state.stdout | length == 0"
  command: "{{ bin_dir }}/kubectl create secret generic heketi-config-secret --from-file={{ kube_config_dir }}/heketi.json"

- name: Get the heketi-config-secret secret again
@@ -41,5 +41,5 @@
- name: Make sure the heketi-config-secret secret exists now
  assert:
    that: "secret_state.stdout | length > 0"
    msg: "Heketi config secret is not present."

View File
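
Both conditions above implement create-if-absent: the create command runs only when the preceding lookup produced no output (`length == 0`), and the follow-up assert then verifies the resource exists (`length > 0`). A minimal sketch of the same check-then-create sequence; the kubectl flags used to make the lookup tolerate a missing resource are an assumption, not taken from this diff:

- name: "Look for the cluster role binding."            # illustrative
  command: "{{ bin_dir }}/kubectl get clusterrolebinding heketi-gluster-admin --no-headers --ignore-not-found"
  register: clusterrolebinding_state
  changed_when: false

- name: "Create it only when the lookup came back empty."
  command: "{{ bin_dir }}/kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account"
  when: "clusterrolebinding_state.stdout | length == 0"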

@@ -12,7 +12,7 @@
- name: "Render storage class configuration."
become: true
vars:
endpoint_address: "{{ (heketi_service.stdout | from_json).spec.clusterIP }}"
endpoint_address: "{{ (heketi_service.stdout|from_json).spec.clusterIP }}"
template:
src: "storageclass.yml.j2"
dest: "{{ kube_config_dir }}/storageclass.yml"

View File

@@ -11,16 +11,16 @@
src: "topology.json.j2"
dest: "{{ kube_config_dir }}/topology.json"
mode: 0644
- name: "Copy topology configuration into container." # noqa no-handler
- name: "Copy topology configuration into container." # noqa 503
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology." # noqa no-handler
- name: "Load heketi topology." # noqa 503
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
- name: "Get heketi topology."
register: "heketi_topology"
changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
until: "heketi_topology.stdout | from_json | json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\") | flatten | length == groups['heketi-node'] | length"
until: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\")|flatten|length == groups['heketi-node']|length"
retries: 60
delay: 5

View File

@@ -73,8 +73,8 @@
"privileged": true
},
"readinessProbe": {
"timeoutSeconds": {{ glusterfs_daemonset.readiness_probe.timeout_seconds }},
"initialDelaySeconds": {{ glusterfs_daemonset.readiness_probe.initial_delay_seconds }},
"timeoutSeconds": 3,
"initialDelaySeconds": 3,
"exec": {
"command": [
"/bin/bash",
@@ -84,8 +84,8 @@
}
},
"livenessProbe": {
"timeoutSeconds": {{ glusterfs_daemonset.liveness_probe.timeout_seconds }},
"initialDelaySeconds": {{ glusterfs_daemonset.liveness_probe.initial_delay_seconds }},
"timeoutSeconds": 3,
"initialDelaySeconds": 10,
"exec": {
"command": [
"/bin/bash",

View File
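
With the sample inventory defaults shown earlier (readiness 3/3, liveness 3/10), the templated probes render the same values that were previously hardcoded, while leaving them tunable per cluster:

# Rendered readinessProbe fragment under the sample defaults:
#   "timeoutSeconds": 3,
#   "initialDelaySeconds": 3,
# Rendered livenessProbe fragment:
#   "timeoutSeconds": 3,
#   "initialDelaySeconds": 10,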

@@ -22,7 +22,7 @@
  ignore_errors: true # noqa ignore-errors
  changed_when: false

- name: "Remove volume groups."
  environment:
    PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
  become: true
@@ -30,7 +30,7 @@
  with_items: "{{ volume_groups.stdout_lines }}"
  loop_control: { loop_var: "volume_group" }

- name: "Remove physical volume from cluster disks."
  environment:
    PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
  become: true

View File

@@ -1,43 +1,43 @@
---
- name: Remove storage class.
  command: "{{ bin_dir }}/kubectl delete storageclass gluster"
  ignore_errors: true # noqa ignore-errors

- name: Tear down heketi.
  command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\""
  ignore_errors: true # noqa ignore-errors

- name: Tear down heketi.
  command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\""
  ignore_errors: true # noqa ignore-errors

- name: Tear down bootstrap.
  include_tasks: "../../provision/tasks/bootstrap/tear-down.yml"

- name: Ensure there is nothing left over.
  command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\" -o=json"
  register: "heketi_result"
  until: "heketi_result.stdout | from_json | json_query('items[*]') | length == 0"
  retries: 60
  delay: 5

- name: Ensure there is nothing left over.
  command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\" -o=json"
  register: "heketi_result"
  until: "heketi_result.stdout | from_json | json_query('items[*]') | length == 0"
  retries: 60
  delay: 5

- name: Tear down glusterfs.
  command: "{{ bin_dir }}/kubectl delete daemonset.extensions/glusterfs"
  ignore_errors: true # noqa ignore-errors

- name: Remove heketi storage service.
  command: "{{ bin_dir }}/kubectl delete service heketi-storage-endpoints"
  ignore_errors: true # noqa ignore-errors

- name: Remove heketi gluster role binding
  command: "{{ bin_dir }}/kubectl delete clusterrolebinding heketi-gluster-admin"
  ignore_errors: true # noqa ignore-errors

- name: Remove heketi config secret
  command: "{{ bin_dir }}/kubectl delete secret heketi-config-secret"
  ignore_errors: true # noqa ignore-errors

- name: Remove heketi db backup
  command: "{{ bin_dir }}/kubectl delete secret heketi-db-backup"
  ignore_errors: true # noqa ignore-errors

- name: Remove heketi service account
  command: "{{ bin_dir }}/kubectl delete serviceaccount heketi-service-account"
  ignore_errors: true # noqa ignore-errors

- name: Get secrets
@@ -46,6 +46,6 @@
  changed_when: false

- name: Remove heketi storage secret
  vars: { storage_query: "items[?metadata.annotations.\"kubernetes.io/service-account.name\"=='heketi-service-account'].metadata.name|[0]" }
  command: "{{ bin_dir }}/kubectl delete secret {{ secrets.stdout | from_json | json_query(storage_query) }}"
  when: "storage_query is defined"
  ignore_errors: true # noqa ignore-errors

Some files were not shown because too many files have changed in this diff.