Compare commits

..

17 Commits

Author SHA1 Message Date
Cristian Calin
e7508d7d21 [sysctl] set fs.may_detach_mounts=1 even when CRIs don't set it themselves (#8635) (#8642) 2022-03-22 05:31:44 -07:00
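The sysctl named in that fix can be expressed as a plain Ansible task; a minimal sketch of the equivalent setting (illustrative only, not the actual kubespray task):

```yaml
# Hypothetical standalone task mirroring what the commit enforces:
# allow mounts to be detached even when the CRI does not set this itself.
- name: Ensure fs.may_detach_mounts is set
  sysctl:
    name: fs.may_detach_mounts
    value: "1"
    sysctl_file: "/etc/sysctl.d/99-sysctl.conf"
    state: present
    reload: true
```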
Cristian Calin
59c05d3713 [container image] use focal (ubuntu 20.04) base image for our docker builds (#8631) (#8633) 2022-03-21 01:03:09 -07:00
Calin Cristian Andrei
ae1f8d8578 [kubernetes] make 1.22.8 the new default 2022-03-18 11:26:41 -07:00
Calin Cristian Andrei
aafdcc1b68 [backport-2.18] update kubernetes hashes for 1.23, 1.22 and 1.21 2022-03-18 11:26:41 -07:00
Takuya Murakami
019bcbc893 Update config.toml.j2 (#8340) (#8602)
* Update config.toml.j2

I think the code from this commit does not work as intended.

Example registry address: a.com:5000

The insecure registry must be configured as http://a.com:5000,

but this code adds the insecure registry as a.com:5000 (without http://).

If there is no http:// prefix, containerd accesses the registry over https even when insecure_skip_verify = true.

The solution is to edit the code accordingly.

* Update config.toml.j2

* Update containerd.yml

* Update containerd.yml

* Update containerd.yml

* Update config.toml.j2

(cherry picked from commit dda557ed23)

Co-authored-by: Choi Yongbeom <59861163+mircyb@users.noreply.github.com>
2022-03-09 06:22:13 -08:00
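The point of the fix above: containerd treats a mirror endpoint without a scheme as https, so the insecure endpoint must carry an explicit `http://`. A minimal inventory sketch of the intended outcome, assuming the `containerd_insecure_registries` variable shape of the release-2.18 containerd role:

```yaml
# Hypothetical group_vars entry; the registry address is taken from the
# commit message above. The explicit http:// scheme is what the fix
# ensures ends up in the rendered config.toml.
containerd_insecure_registries:
  "a.com:5000": "http://a.com:5000"
```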
Takuya Murakami
0c43883e5c [PATCH] nerdctl insecure registry config (#8339) (#8601)
Backport #8339 to 2.18-release
Cherry-pick 24f1402a14

Co-authored-by: Choi Yongbeom <59861163+mircyb@users.noreply.github.com>
2022-03-08 14:32:22 -08:00
Takuya Murakami
92d6c2d9a8 feat(offline): Improve generate_list.sh to generate offline file list using ansible (#8537) (#8538) (#8606)
Use a Jinja2 template and Ansible to expand variables.
2022-03-07 05:32:55 -08:00
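A rough sketch of the technique that commit describes, rendering a Jinja2 file list with Ansible so that version variables expand into concrete download URLs (the play and file names here are illustrative, not the actual contrib/offline contents):

```yaml
# Hypothetical localhost play: expand {{ kube_version }}, download URLs, etc.
# from kubespray defaults into a plain list of files to mirror offline.
- hosts: localhost
  gather_facts: false
  roles:
    - kubespray-defaults      # provides version and download URL variables
  tasks:
    - name: Generate offline file list from a Jinja2 template
      template:
        src: files.list.template
        dest: files.list
```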
Kenichi Omichi
411902e9ff Update quay.io/kubespray/vagrant (#8605)
quay.io/kubespray/vagrant image is used for molecule_tests.
The tag was v2.17.1 on the release-2.18 branch, but that image
contains vagrant 2.2.15, which has a bug related to virtual machine creation.
That caused kubespray CI failures.
This updates the image to use a newer vagrant.
2022-03-06 00:12:52 -08:00
Kenichi Omichi
c4a2745523 Move containerd_version to defaults/main.yml (#8379) (#8513)
All container image versions were defined in download/defaults/main.yml
except containerd's.
That inconsistency meant the offline script (generate_list.sh) could not
output the URL of the containerd image.
This moves the definition into the appropriate file.
In addition, this adds host_os to generate_list.sh for downloading
krew from a valid URL.
2022-02-13 09:55:50 -08:00
Kenichi Omichi
d1609e3111 CI: Replace CentOS 8 with AlmaLinux 8 before CentOS 8 EOL end of 2021 (#8297) (#8514)
Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>
2022-02-07 23:42:53 -08:00
Cristian Calin
6abffe9c37 [2.18] update kubernetes hashes and make 1.22.6 the default (#8467)
* [kubernetes] add hashes for 1.23.2, 1.22.6, 1.21.9 and 1.20.15

* [kubernetes] make 1.22.6 the default version
2022-01-25 05:20:30 -08:00
Boris Barnier
a5cd98f6cf Fix kata_containers_binary_checksums for arm64 (#8460) 2022-01-24 00:19:57 -08:00
Mathieu Parent
38d85cfafd Document image_command_tool and image_command_tool_on_localhost (#8409)
Signed-off-by: Mathieu Parent <mathieu.parent@insee.fr>
(cherry picked from commit 43d128362f)
2022-01-17 02:25:30 -08:00
Mathieu Parent
7fffe6730c Allow to choose container manager commands (#8380)
This allows working around #8375 by using image_command_tool=crictl
when containerd_registries is used for containerd.

Also changes image_info_command_on_localhost for docker to return digests.

(cherry picked from commit cfd9873bbc)

The cherry-pick was adapted because nerdctl_extra_flags is not in
the release-2.18 branch (#8339).
2022-01-17 02:25:30 -08:00
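The workaround named above is a one-line inventory override; a minimal sketch (the file placement is illustrative):

```yaml
# e.g. inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
# Use crictl instead of nerdctl for image handling when
# containerd_registries is configured (workaround for #8375).
image_command_tool: crictl
```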
Mathieu Parent
0b99ea69a9 Add missing example offline nerdctl_download_url (#8373)
(cherry picked from commit c11e4ba9a7)
2022-01-17 02:25:30 -08:00
Mathieu Parent
1928f946be Avoid yanked ruamel.yaml.clib version (#8372)
See https://pypi.org/project/ruamel.yaml.clib/#history

Signed-off-by: Mathieu Parent <math.parent@gmail.com>
(cherry picked from commit 7ae00947f5)
2022-01-17 02:25:30 -08:00
rtsp
8a3c78e8b4 [2.18] Fix container engine still installed on dedicated etcd (#8404)
* Fix container engine still being installed on dedicated etcd nodes even if `etcd_deployment_type: host` (#8386)

(cherry picked from commit aa4a3d7)
2022-01-11 00:31:16 -08:00
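The deployment mode that fix targets is selected with a single variable; a minimal sketch (the file placement is illustrative):

```yaml
# e.g. inventory/mycluster/group_vars/all/etcd.yml
# Run etcd directly on the host; with the fix above, dedicated etcd
# nodes no longer get a container engine installed.
etcd_deployment_type: host
```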
838 changed files with 21925 additions and 24252 deletions

.ansible-lint

@@ -24,17 +24,7 @@ skip_list:
   # (Disabled in June 2021)
   - 'role-name'
+  - 'experimental'
   # [var-naming] "defaults/main.yml" File defines variable 'apiVersion' that violates variable naming standards
   # In Kubespray we use variables that use camelCase to match their k8s counterparts
   # (Disabled in June 2021)
   - 'var-naming'
-  - 'var-spacing'
-
-  # [fqcn-builtins]
-  # Roles in kubespray don't need fully qualified collection names
-  # (Disabled in Feb 2023)
-  - 'fqcn-builtins'
-
-exclude_paths:
-  # Generated files
-  - tests/files/custom_cni/cilium.yaml

.gitignore

@@ -3,19 +3,14 @@
 **/vagrant_ansible_inventory
 *.iml
 temp
-contrib/offline/offline-files
-contrib/offline/offline-files.tar.gz
 .idea
-.vscode
 .tox
 .cache
 *.bak
 *.tfstate
 *.tfstate.backup
-*.lock.hcl
 .terraform/
 contrib/terraform/aws/credentials.tfvars
-.terraform.lock.hcl
 /ssh-bastion.conf
 **/*.sw[pon]
 *~
@@ -107,14 +102,10 @@ ENV/
 # molecule
 roles/**/molecule/**/__pycache__/
-roles/**/molecule/**/*.conf
 # macOS
 .DS_Store
 # Temp location used by our scripts
 scripts/tmp/
-tmp.md
-
-# Ansible collection files
-kubernetes_sigs-kubespray*tar.gz
-ansible_collections

.gitlab-ci.yml

@@ -1,6 +1,5 @@
 ---
 stages:
-  - build
   - unit-tests
   - deploy-part1
   - moderator
@@ -9,12 +8,12 @@ stages:
   - deploy-special

 variables:
-  KUBESPRAY_VERSION: v2.21.0
+  KUBESPRAY_VERSION: v2.17.1
   FAILFASTCI_NAMESPACE: 'kargo-ci'
   GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
   ANSIBLE_FORCE_COLOR: "true"
   MAGIC: "ci check this"
-  TEST_ID: "$CI_PIPELINE_ID-$CI_JOB_ID"
+  TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
   CI_TEST_VARS: "./tests/files/${CI_JOB_NAME}.yml"
   CI_TEST_REGISTRY_MIRROR: "./tests/common/_docker_hub_registry_mirror.yml"
   CI_TEST_SETTING: "./tests/common/_kubespray_test_settings.yml"
@@ -28,15 +27,13 @@ variables:
   ANSIBLE_INVENTORY: ./inventory/sample/${CI_JOB_NAME}-${BUILD_NUMBER}.ini
   IDEMPOT_CHECK: "false"
   RESET_CHECK: "false"
-  REMOVE_NODE_CHECK: "false"
   UPGRADE_TEST: "false"
   MITOGEN_ENABLE: "false"
   ANSIBLE_LOG_LEVEL: "-vv"
   RECOVER_CONTROL_PLANE_TEST: "false"
   RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube_control_plane[1:]"
-  TERRAFORM_VERSION: 1.3.7
-  ANSIBLE_MAJOR_VERSION: "2.11"
-  PIPELINE_IMAGE: "$CI_REGISTRY_IMAGE/pipeline:${CI_PIPELINE_ID}-${CI_COMMIT_SHORT_SHA}"
+  TERRAFORM_VERSION: 1.0.8
+  ANSIBLE_MAJOR_VERSION: "2.10"

 before_script:
   - ./tests/scripts/rebase.sh
@@ -48,7 +45,7 @@ before_script:
 .job: &job
   tags:
     - packet
-  image: $PIPELINE_IMAGE
+  image: quay.io/kubespray/kubespray:$KUBESPRAY_VERSION
   artifacts:
     when: always
     paths:
@@ -78,10 +75,8 @@ ci-authorized:
   only: []

 include:
-  - .gitlab-ci/build.yml
   - .gitlab-ci/lint.yml
   - .gitlab-ci/shellcheck.yml
   - .gitlab-ci/terraform.yml
   - .gitlab-ci/packet.yml
   - .gitlab-ci/vagrant.yml
-  - .gitlab-ci/molecule.yml

.gitlab-ci/build.yml

@@ -1,40 +0,0 @@
----
-.build:
-  stage: build
-  image:
-    name: moby/buildkit:rootless
-    entrypoint: [""]
-  variables:
-    BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
-  before_script:
-    - mkdir ~/.docker
-    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json
-
-pipeline image:
-  extends: .build
-  script:
-    - |
-      buildctl-daemonless.sh build \
-        --frontend=dockerfile.v0 \
-        --local context=. \
-        --local dockerfile=. \
-        --opt filename=./pipeline.Dockerfile \
-        --output type=image,name=$PIPELINE_IMAGE,push=true \
-        --import-cache type=registry,ref=$CI_REGISTRY_IMAGE/pipeline:cache
-  rules:
-    - if: '$CI_COMMIT_REF_NAME != $CI_DEFAULT_BRANCH'
-
-pipeline image and build cache:
-  extends: .build
-  script:
-    - |
-      buildctl-daemonless.sh build \
-        --frontend=dockerfile.v0 \
-        --local context=. \
-        --local dockerfile=. \
-        --opt filename=./pipeline.Dockerfile \
-        --output type=image,name=$PIPELINE_IMAGE,push=true \
-        --import-cache type=registry,ref=$CI_REGISTRY_IMAGE/pipeline:cache \
-        --export-cache type=registry,ref=$CI_REGISTRY_IMAGE/pipeline:cache,mode=max
-  rules:
-    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'

.gitlab-ci/lint.yml

@@ -14,7 +14,7 @@ vagrant-validate:
   stage: unit-tests
   tags: [light]
   variables:
-    VAGRANT_VERSION: 2.3.4
+    VAGRANT_VERSION: 2.2.19
   script:
     - ./tests/scripts/vagrant-validate.sh
   except: ['triggers', 'master']
@@ -23,8 +23,9 @@ ansible-lint:
   extends: .job
   stage: unit-tests
   tags: [light]
-  script:
-    - ansible-lint -v
+  # lint every yml/yaml file that looks like it contains Ansible plays
+  script: |-
+    grep -Rl '^- hosts: \|^  hosts: ' --include \*.yml --include \*.yaml . | xargs -P 4 -n 25 ansible-lint -v
   except: ['triggers', 'master']

 syntax-check:
@@ -39,28 +40,11 @@ syntax-check:
     ANSIBLE_VERBOSITY: "3"
   script:
     - ansible-playbook --syntax-check cluster.yml
-    - ansible-playbook --syntax-check playbooks/cluster.yml
     - ansible-playbook --syntax-check upgrade-cluster.yml
-    - ansible-playbook --syntax-check playbooks/upgrade_cluster.yml
     - ansible-playbook --syntax-check reset.yml
-    - ansible-playbook --syntax-check playbooks/reset.yml
     - ansible-playbook --syntax-check extra_playbooks/upgrade-only-k8s.yml
   except: ['triggers', 'master']

-collection-build-install-sanity-check:
-  extends: .job
-  stage: unit-tests
-  tags: [light]
-  variables:
-    ANSIBLE_COLLECTIONS_PATH: "./ansible_collections"
-  script:
-    - ansible-galaxy collection build
-    - ansible-galaxy collection install kubernetes_sigs-kubespray-$(grep "^version:" galaxy.yml | awk '{print $2}').tar.gz
-    - ansible-galaxy collection list $(egrep -i '(name:\s+|namespace:\s+)' galaxy.yml | awk '{print $2}' | tr '\n' '.' | sed 's|\.$||g') | grep "^kubernetes_sigs.kubespray"
-    - test -f ansible_collections/kubernetes_sigs/kubespray/playbooks/cluster.yml
-    - test -f ansible_collections/kubernetes_sigs/kubespray/playbooks/reset.yml
-  except: ['triggers', 'master']
-
 tox-inventory-builder:
   stage: unit-tests
   tags: [light]
@@ -69,7 +53,7 @@ tox-inventory-builder:
     - ./tests/scripts/rebase.sh
     - apt-get update && apt-get install -y python3-pip
     - update-alternatives --install /usr/bin/python python /usr/bin/python3 10
-    - python -m pip uninstall -y ansible ansible-base ansible-core
+    - python -m pip uninstall -y ansible
     - python -m pip install -r tests/requirements.txt
   script:
     - pip3 install tox
@@ -85,27 +69,6 @@ markdownlint:
   script:
     - markdownlint $(find . -name '*.md' | grep -vF './.git') --ignore docs/_sidebar.md --ignore contrib/dind/README.md

-check-readme-versions:
-  stage: unit-tests
-  tags: [light]
-  image: python:3
-  script:
-    - tests/scripts/check_readme_versions.sh
-
-check-galaxy-version:
-  stage: unit-tests
-  tags: [light]
-  image: python:3
-  script:
-    - tests/scripts/check_galaxy_version.sh
-
-check-typo:
-  stage: unit-tests
-  tags: [light]
-  image: python:3
-  script:
-    - tests/scripts/check_typo.sh
-
 ci-matrix:
   stage: unit-tests
   tags: [light]

.gitlab-ci/molecule.yml

@@ -1,86 +0,0 @@
----
-.molecule:
-  tags: [c3.small.x86]
-  only: [/^pr-.*$/]
-  except: ['triggers']
-  image: $PIPELINE_IMAGE
-  services: []
-  stage: deploy-part1
-  before_script:
-    - tests/scripts/rebase.sh
-    - apt-get update && apt-get install -y python3-pip
-    - update-alternatives --install /usr/bin/python python /usr/bin/python3 10
-    - python -m pip uninstall -y ansible ansible-base ansible-core
-    - python -m pip install -r tests/requirements.txt
-    - ./tests/scripts/vagrant_clean.sh
-  script:
-    - ./tests/scripts/molecule_run.sh
-  after_script:
-    - chronic ./tests/scripts/molecule_logs.sh
-  artifacts:
-    when: always
-    paths:
-      - molecule_logs/
-
-# CI template for periodic CI jobs
-# Enabled when PERIODIC_CI_ENABLED var is set
-.molecule_periodic:
-  only:
-    variables:
-      - $PERIODIC_CI_ENABLED
-  allow_failure: true
-  extends: .molecule
-
-molecule_full:
-  extends: .molecule_periodic
-
-molecule_no_container_engines:
-  extends: .molecule
-  script:
-    - ./tests/scripts/molecule_run.sh -e container-engine
-  when: on_success
-
-molecule_docker:
-  extends: .molecule
-  script:
-    - ./tests/scripts/molecule_run.sh -i container-engine/cri-dockerd
-  when: on_success
-
-molecule_containerd:
-  extends: .molecule
-  script:
-    - ./tests/scripts/molecule_run.sh -i container-engine/containerd
-  when: on_success
-
-molecule_cri-o:
-  extends: .molecule
-  stage: deploy-part2
-  script:
-    - ./tests/scripts/molecule_run.sh -i container-engine/cri-o
-  when: on_success
-
-# Stage 3 container engines don't get as much attention so allow them to fail
-molecule_kata:
-  extends: .molecule
-  stage: deploy-part3
-  allow_failure: true
-  script:
-    - ./tests/scripts/molecule_run.sh -i container-engine/kata-containers
-  when: on_success
-
-molecule_gvisor:
-  extends: .molecule
-  stage: deploy-part3
-  allow_failure: true
-  script:
-    - ./tests/scripts/molecule_run.sh -i container-engine/gvisor
-  when: on_success
-
-molecule_youki:
-  extends: .molecule
-  stage: deploy-part3
-  allow_failure: true
-  script:
-    - ./tests/scripts/molecule_run.sh -i container-engine/youki
-  when: on_success

.gitlab-ci/packet.yml

@@ -31,9 +31,18 @@ packet_ubuntu20-calico-aio:
   variables:
     RESET_CHECK: "true"

+# Exericse ansible variants
+packet_ubuntu20-calico-aio-ansible-2_9:
+  stage: deploy-part1
+  extends: .packet_pr
+  when: on_success
+  variables:
+    ANSIBLE_MAJOR_VERSION: "2.9"
+    RESET_CHECK: "true"
+
 packet_ubuntu20-calico-aio-ansible-2_11:
   stage: deploy-part1
-  extends: .packet_periodic
+  extends: .packet_pr
   when: on_success
   variables:
     ANSIBLE_MAJOR_VERSION: "2.11"
@@ -51,26 +60,11 @@ packet_ubuntu20-aio-docker:
   extends: .packet_pr
   when: on_success

-packet_ubuntu20-calico-aio-hardening:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
 packet_ubuntu18-calico-aio:
   stage: deploy-part2
   extends: .packet_pr
   when: on_success

-packet_ubuntu22-aio-docker:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
-packet_ubuntu22-calico-aio:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
 packet_centos7-flannel-addons-ha:
   extends: .packet_pr
   stage: deploy-part2
@@ -91,11 +85,31 @@ packet_fedora35-crio:
   stage: deploy-part2
   when: manual

+packet_ubuntu16-canal-ha:
+  stage: deploy-part2
+  extends: .packet_periodic
+  when: on_success
+
+packet_ubuntu16-canal-sep:
+  stage: deploy-special
+  extends: .packet_pr
+  when: manual
+
 packet_ubuntu16-flannel-ha:
   stage: deploy-part2
   extends: .packet_pr
   when: manual

+packet_ubuntu16-kube-router-sep:
+  stage: deploy-part2
+  extends: .packet_pr
+  when: manual
+
+packet_ubuntu16-kube-router-svc-proxy:
+  stage: deploy-part2
+  extends: .packet_pr
+  when: manual
+
 packet_debian10-cilium-svc-proxy:
   stage: deploy-part2
   extends: .packet_periodic
@@ -141,33 +155,26 @@ packet_almalinux8-calico:
   extends: .packet_pr
   when: on_success

-packet_rockylinux8-calico:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
-packet_rockylinux9-calico:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
-packet_rockylinux9-cilium:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-  variables:
-    RESET_CHECK: "true"
-
 packet_almalinux8-docker:
   stage: deploy-part2
   extends: .packet_pr
   when: on_success

-packet_fedora36-docker-weave:
+packet_fedora34-docker-weave:
   stage: deploy-part2
   extends: .packet_pr
   when: on_success

-packet_fedora35-kube-router:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
+packet_opensuse-canal:
+  stage: deploy-part2
+  extends: .packet_periodic
+  when: on_success
+
 packet_opensuse-docker-cilium:
   stage: deploy-part2
   extends: .packet_pr
@@ -201,7 +208,7 @@ packet_almalinux8-calico-ha-ebpf:
   extends: .packet_pr
   when: manual

-packet_debian10-macvlan:
+packet_debian9-macvlan:
   stage: deploy-part2
   extends: .packet_pr
   when: manual
@@ -211,19 +218,29 @@ packet_centos7-calico-ha:
   extends: .packet_pr
   when: manual

+packet_centos7-kube-router:
+  stage: deploy-part2
+  extends: .packet_pr
+  when: manual
+
 packet_centos7-multus-calico:
   stage: deploy-part2
   extends: .packet_pr
   when: manual

-packet_fedora36-docker-calico:
+packet_oracle7-canal-ha:
+  stage: deploy-part2
+  extends: .packet_pr
+  when: manual
+
+packet_fedora35-docker-calico:
   stage: deploy-part2
   extends: .packet_periodic
   when: on_success
   variables:
     RESET_CHECK: "true"

-packet_fedora35-calico-selinux:
+packet_fedora34-calico-selinux:
   stage: deploy-part2
   extends: .packet_periodic
   when: on_success
@@ -243,32 +260,15 @@ packet_almalinux8-calico-nodelocaldns-secondary:
   extends: .packet_pr
   when: manual

-packet_fedora36-kube-ovn:
+packet_fedora34-kube-ovn:
   stage: deploy-part2
   extends: .packet_periodic
   when: on_success

-packet_debian11-custom-cni:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-
-packet_debian11-kubelet-csr-approver:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-
 # ### PR JOBS PART3
 # Long jobs (45min+)

-packet_centos7-weave-upgrade-ha:
-  stage: deploy-part3
-  extends: .packet_periodic
-  when: on_success
-  variables:
-    UPGRADE_TEST: basic
-
-packet_ubuntu20-calico-etcd-kubeadm-upgrade-ha:
+packet_centos7-docker-weave-upgrade-ha:
   stage: deploy-part3
   extends: .packet_periodic
   when: on_success
@@ -281,27 +281,14 @@ packet_ubuntu20-calico-ha-wireguard:
   extends: .packet_pr
   when: manual

-packet_debian11-calico-upgrade:
+packet_debian10-calico-upgrade:
   stage: deploy-part3
   extends: .packet_pr
   when: on_success
   variables:
     UPGRADE_TEST: graceful

-packet_almalinux8-calico-remove-node:
-  stage: deploy-part3
-  extends: .packet_pr
-  when: on_success
-  variables:
-    REMOVE_NODE_CHECK: "true"
-    REMOVE_NODE_NAME: "instance-3"
-
-packet_ubuntu20-calico-etcd-kubeadm:
-  stage: deploy-part3
-  extends: .packet_pr
-  when: on_success
-
-packet_debian11-calico-upgrade-once:
+packet_debian10-calico-upgrade-once:
   stage: deploy-part3
   extends: .packet_periodic
   when: on_success

.gitlab-ci/shellcheck.yml

@@ -11,6 +11,6 @@ shellcheck:
     - cp shellcheck-"${SHELLCHECK_VERSION}"/shellcheck /usr/bin/
     - shellcheck --version
   script:
-    # Run shellcheck for all *.sh
-    - find . -name '*.sh' -not -path './.git/*' | xargs shellcheck --severity error
+    # Run shellcheck for all *.sh except contrib/
+    - find . -name '*.sh' -not -path './contrib/*' -not -path './.git/*' | xargs shellcheck --severity error
   except: ['triggers', 'master']

.gitlab-ci/terraform.yml

@@ -60,11 +60,11 @@ tf-validate-openstack:
     PROVIDER: openstack
     CLUSTER: $CI_COMMIT_REF_NAME

-tf-validate-equinix:
+tf-validate-packet:
   extends: .terraform_validate
   variables:
     TF_VERSION: $TERRAFORM_VERSION
-    PROVIDER: equinix
+    PROVIDER: packet
     CLUSTER: $CI_COMMIT_REF_NAME

 tf-validate-aws:
@@ -80,12 +80,6 @@ tf-validate-exoscale:
     TF_VERSION: $TERRAFORM_VERSION
     PROVIDER: exoscale

-tf-validate-hetzner:
-  extends: .terraform_validate
-  variables:
-    TF_VERSION: $TERRAFORM_VERSION
-    PROVIDER: hetzner
-
 tf-validate-vsphere:
   extends: .terraform_validate
   variables:
@@ -110,7 +104,7 @@ tf-validate-upcloud:
 #     TF_VAR_number_of_k8s_nodes: "1"
 #     TF_VAR_plan_k8s_masters: t1.small.x86
 #     TF_VAR_plan_k8s_nodes: t1.small.x86
-#     TF_VAR_metro: ny
+#     TF_VAR_facility: ewr1
 #     TF_VAR_public_key_path: ""
 #     TF_VAR_operating_system: ubuntu_16_04
 #
@@ -124,7 +118,7 @@ tf-validate-upcloud:
 #     TF_VAR_number_of_k8s_nodes: "1"
 #     TF_VAR_plan_k8s_masters: t1.small.x86
 #     TF_VAR_plan_k8s_nodes: t1.small.x86
-#     TF_VAR_metro: am
+#     TF_VAR_facility: ams1
 #     TF_VAR_public_key_path: ""
 #     TF_VAR_operating_system: ubuntu_18_04

.gitlab-ci/vagrant.yml

@@ -1,5 +1,28 @@
 ---

+molecule_tests:
+  tags: [c3.small.x86]
+  only: [/^pr-.*$/]
+  except: ['triggers']
+  image: quay.io/kubespray/vagrant:v2.18.0
+  services: []
+  stage: deploy-part1
+  before_script:
+    - tests/scripts/rebase.sh
+    - apt-get update && apt-get install -y python3-pip
+    - update-alternatives --install /usr/bin/python python /usr/bin/python3 10
+    - python -m pip uninstall -y ansible
+    - python -m pip install -r tests/requirements.txt
+    - ./tests/scripts/vagrant_clean.sh
+  script:
+    - ./tests/scripts/molecule_run.sh
+  after_script:
+    - chronic ./tests/scripts/molecule_logs.sh
+  artifacts:
+    when: always
+    paths:
+      - molecule_logs/
+
 .vagrant:
   extends: .testcases
   variables:
@@ -10,12 +33,12 @@
   tags: [c3.small.x86]
   only: [/^pr-.*$/]
   except: ['triggers']
-  image: $PIPELINE_IMAGE
+  image: quay.io/kubespray/vagrant:v2.18.0
   services: []
   before_script:
     - apt-get update && apt-get install -y python3-pip
     - update-alternatives --install /usr/bin/python python /usr/bin/python3 10
-    - python -m pip uninstall -y ansible ansible-base ansible-core
+    - python -m pip uninstall -y ansible
     - python -m pip install -r tests/requirements.txt
     - ./tests/scripts/vagrant_clean.sh
   script:
@@ -43,30 +66,3 @@ vagrant_ubuntu20-flannel:
   stage: deploy-part2
   extends: .vagrant
   when: on_success
-  allow_failure: false
-
-vagrant_ubuntu20-flannel-collection:
-  stage: deploy-part2
-  extends: .vagrant
-  when: on_success
-
-vagrant_ubuntu16-kube-router-sep:
-  stage: deploy-part2
-  extends: .vagrant
-  when: manual
-
-# Service proxy test fails connectivity testing
-vagrant_ubuntu16-kube-router-svc-proxy:
-  stage: deploy-part2
-  extends: .vagrant
-  when: manual
-
-vagrant_fedora35-kube-router:
-  stage: deploy-part2
-  extends: .vagrant
-  when: on_success
-
-vagrant_centos7-kube-router:
-  stage: deploy-part2
-  extends: .vagrant
-  when: manual

.markdownlint.yaml

@@ -1,3 +1,2 @@
 ---
 MD013: false
-MD029: false

.pre-commit-config.yaml

@@ -1,71 +0,0 @@
----
-repos:
-  - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v3.4.0
-    hooks:
-      - id: check-added-large-files
-      - id: check-case-conflict
-      - id: check-executables-have-shebangs
-      - id: check-xml
-      - id: check-merge-conflict
-      - id: detect-private-key
-      - id: end-of-file-fixer
-      - id: forbid-new-submodules
-      - id: requirements-txt-fixer
-      - id: trailing-whitespace
-
-  - repo: https://github.com/adrienverge/yamllint.git
-    rev: v1.27.1
-    hooks:
-      - id: yamllint
-        args: [--strict]
-
-  - repo: https://github.com/markdownlint/markdownlint
-    rev: v0.11.0
-    hooks:
-      - id: markdownlint
-        args: [ -r, "~MD013,~MD029" ]
-        exclude: "^.git"
-
-  - repo: https://github.com/jumanjihouse/pre-commit-hooks
-    rev: 3.0.0
-    hooks:
-      - id: shellcheck
-        args: [ --severity, "error" ]
-        exclude: "^.git"
-        files: "\\.sh$"
-
-  - repo: local
-    hooks:
-      - id: ansible-lint
-        name: ansible-lint
-        entry: ansible-lint -v
-        language: python
-        pass_filenames: false
-        additional_dependencies:
-          - .[community]
-
-      - id: ansible-syntax-check
-        name: ansible-syntax-check
-        entry: env ANSIBLE_INVENTORY=inventory/local-tests.cfg ANSIBLE_REMOTE_USER=root ANSIBLE_BECOME="true" ANSIBLE_BECOME_USER=root ANSIBLE_VERBOSITY="3" ansible-playbook --syntax-check
-        language: python
-        files: "^cluster.yml|^upgrade-cluster.yml|^reset.yml|^extra_playbooks/upgrade-only-k8s.yml"
-
-      - id: tox-inventory-builder
-        name: tox-inventory-builder
-        entry: bash -c "cd contrib/inventory_builder && tox"
-        language: python
-        pass_filenames: false
-
-      - id: check-readme-versions
-        name: check-readme-versions
-        entry: tests/scripts/check_readme_versions.sh
-        language: script
-        pass_filenames: false
-
-      - id: ci-matrix
-        name: ci-matrix
-        entry: tests/scripts/md-table/test.sh
-        language: script
-        pass_filenames: false

.yamllint

@@ -3,8 +3,6 @@ extends: default

 ignore: |
   .git/
-  # Generated file
-  tests/files/custom_cni/cilium.yaml

 rules:
   braces:
CNAME

@@ -1 +1 @@
-kubespray.io
+kubespray.io

CONTRIBUTING.md

@@ -16,12 +16,7 @@ pip install -r tests/requirements.txt

 #### Linting

-Kubespray uses [pre-commit](https://pre-commit.com) hook configuration to run several linters, please install this tool and use it to run validation tests before submitting a PR.
+Kubespray uses `yamllint` and `ansible-lint`. To run them locally use `yamllint .` and `ansible-lint`. It is a good idea to add call these tools as part of your pre-commit hook and avoid a lot of back end forth on fixing linting issues (<https://support.gitkraken.com/working-with-repositories/githooksexample/>).

-```ShellSession
-pre-commit install
-pre-commit run -a # To run pre-commit hook on all files in the repository, even if they were not modified
-```

 #### Molecule
@@ -38,9 +33,7 @@ Vagrant with VirtualBox or libvirt driver helps you to quickly spin test cluster
 1. Submit an issue describing your proposed change to the repo in question.
 2. The [repo owners](OWNERS) will respond to your issue promptly.
 3. Fork the desired repo, develop and test your code changes.
-4. Install [pre-commit](https://pre-commit.com) and install it in your development repo.
-5. Addess any pre-commit validation failures.
-6. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
-7. Submit a pull request.
-8. Work with the reviewers on their suggestions.
-9. Ensure to rebase to the HEAD of your target branch and squash un-necessary commits (<https://blog.carbonfive.com/always-squash-and-rebase-your-git-commits/>) before final merger of your contribution.
+4. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
+5. Submit a pull request.
+6. Work with the reviewers on their suggestions.
+7. Ensure to rebase to the HEAD of your target branch and squash un-necessary commits (<https://blog.carbonfive.com/always-squash-and-rebase-your-git-commits/>) before final merger of your contribution.

Dockerfile

@@ -1,44 +1,33 @@
-# Use imutable image tags rather than mutable tags (like ubuntu:22.04)
-FROM ubuntu:jammy-20230308
+# Use imutable image tags rather than mutable tags (like ubuntu:20.04)
+FROM ubuntu:focal-20220316
+
+RUN apt update -y \
+    && apt install -y \
+    libssl-dev python3-dev sshpass apt-transport-https jq moreutils \
+    ca-certificates curl gnupg2 software-properties-common python3-pip unzip rsync git \
+    && rm -rf /var/lib/apt/lists/*
+
+RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
+    && add-apt-repository \
+    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
+    $(lsb_release -cs) \
+    stable" \
+    && apt update -y && apt-get install --no-install-recommends -y docker-ce \
+    && rm -rf /var/lib/apt/lists/*

 # Some tools like yamllint need this
 # Pip needs this as well at the moment to install ansible
 # (and potentially other packages)
 # See: https://github.com/pypa/pip/issues/10219
-ENV LANG=C.UTF-8 \
-    DEBIAN_FRONTEND=noninteractive \
-    PYTHONDONTWRITEBYTECODE=1
+ENV LANG=C.UTF-8

 WORKDIR /kubespray
-COPY *.yml ./
-COPY *.cfg ./
-COPY roles ./roles
-COPY contrib ./contrib
-COPY inventory ./inventory
-COPY library ./library
-COPY extra_playbooks ./extra_playbooks
-COPY playbooks ./playbooks
-COPY plugins ./plugins
+COPY . .

-RUN apt update -q \
-    && apt install -yq --no-install-recommends \
-       curl \
-       python3 \
-       python3-pip \
-       sshpass \
-       vim \
-       rsync \
-       openssh-client \
-    && pip install --no-compile --no-cache-dir \
-       ansible==5.7.1 \
-       ansible-core==2.12.5 \
-       cryptography==3.4.8 \
-       jinja2==3.1.2 \
-       netaddr==0.8.0 \
-       jmespath==1.0.1 \
-       MarkupSafe==2.1.2 \
-       ruamel.yaml==0.17.21 \
-    && KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
-    && curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
-    && echo $(curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
-    && chmod a+x /usr/local/bin/kubectl \
-    && rm -rf /var/lib/apt/lists/* /var/log/* \
-    && find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;
+RUN /usr/bin/python3 -m pip install --no-cache-dir pip -U \
+    && /usr/bin/python3 -m pip install --no-cache-dir -r tests/requirements.txt \
+    && python3 -m pip install --no-cache-dir -r requirements.txt \
+    && update-alternatives --install /usr/bin/python python /usr/bin/python3 1
+
+RUN KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
+    && curl -LO https://storage.googleapis.com/kubernetes-release/release/$KUBE_VERSION/bin/linux/amd64/kubectl \
+    && chmod a+x kubectl \
+    && mv kubectl /usr/local/bin/kubectl

LICENSE

@@ -187,7 +187,7 @@
    identification within third-party archives.

 Copyright 2016 Kubespray

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at

OWNERS

@@ -5,4 +5,4 @@ approvers:
 reviewers:
   - kubespray-reviewers
 emeritus_approvers:
   - kubespray-emeritus_approvers

OWNERS_ALIASES

@@ -4,13 +4,10 @@ aliases:
     - chadswen
     - mirwan
     - miouge1
+    - woopstar
     - luckysb
     - floryut
     - oomichi
-    - cristicalin
-    - liupeng0518
-    - yankay
-    - mzaian
   kubespray-reviewers:
     - holmsten
     - bozzo
@@ -18,13 +15,7 @@ aliases:
     - oomichi
     - jayonlau
     - cristicalin
-    - liupeng0518
-    - yankay
-    - cyclinder
-    - mzaian
-    - mrfreezeex
   kubespray-emeritus_approvers:
     - riverzhang
     - atoms
     - ant31
-    - woopstar

README.md

@@ -13,16 +13,16 @@ You can get your invite [here](http://slack.k8s.io/)

 ## Quick Start

-Below are several ways to use Kubespray to deploy a Kubernetes cluster.
+To deploy the cluster you can use :

 ### Ansible

 #### Usage

-Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
-then run the following steps:
-
 ```ShellSession
+# Install dependencies from ``requirements.txt``
+sudo pip3 install -r requirements.txt
+
 # Copy ``inventory/sample`` as ``inventory/mycluster``
 cp -rfp inventory/sample inventory/mycluster
@@ -34,13 +34,6 @@ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inv
 cat inventory/mycluster/group_vars/all/all.yml
 cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

-# Clean up old Kubernete cluster with Ansible Playbook - run the playbook as root
-# The option `--become` is required, as for example cleaning up SSL keys in /etc/,
-# uninstalling old packages and interacting with various systemd daemons.
-# Without --become the playbook will fail to run!
-# And be mind it will remove the current kubernetes cluster (if it's running)!
-ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root reset.yml
-
 # Deploy Kubespray with Ansible Playbook - run the playbook as root
 # The option `--become` is required, as for example writing SSL keys in /etc/,
 # installing packages and interacting with various systemd daemons.
@@ -48,61 +41,44 @@ ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root
 ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
 ```

-Note: When Ansible is already installed via system packages on the control node,
-Python packages installed via `sudo pip install -r requirements.txt` will go to
-a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on
-Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on
-Ubuntu). As a consequence, the `ansible-playbook` command will fail with:
+Note: When Ansible is already installed via system packages on the control machine, other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on Ubuntu).
+As a consequence, `ansible-playbook` command will fail with:

 ```raw
 ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
 ```

-This likely indicates that a task depends on a module present in ``requirements.txt``.
-One way of addressing this is to uninstall the system Ansible package then
-reinstall Ansible via ``pip``, but this not always possible and one must
-take care regarding package versions.
-A workaround consists of setting the `ANSIBLE_LIBRARY`
-and `ANSIBLE_MODULE_UTILS` environment variables respectively to
-the `ansible/modules` and `ansible/module_utils` subdirectories of the ``pip``
-installation location, which is the ``Location`` shown by running
-`pip show [package]` before executing `ansible-playbook`.
+probably pointing on a task depending on a module present in requirements.txt.
+
+One way of solving this would be to uninstall the Ansible package and then, to install it via pip but it is not always possible.
+A workaround consists of setting `ANSIBLE_LIBRARY` and `ANSIBLE_MODULE_UTILS` environment variables respectively to the `ansible/modules` and `ansible/module_utils` subdirectories of pip packages installation location, which can be found in the Location field of the output of `pip show [package]` before executing `ansible-playbook`.

-A simple way to ensure you get all the correct version of Ansible is to use
-the [pre-built docker image from Quay](https://quay.io/repository/kubespray/kubespray?tab=tags).
-You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mounts/)
-to access the inventory and SSH key in the container, like this:
+A simple way to ensure you get all the correct version of Ansible is to use the [pre-built docker image from Quay](https://quay.io/repository/kubespray/kubespray?tab=tags).
+You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mounts/) to get the inventory and ssh key into the container, like this:

 ```ShellSession
-git checkout v2.22.0
-docker pull quay.io/kubespray/kubespray:v2.22.0
+docker pull quay.io/kubespray/kubespray:v2.17.1
 docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
   --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
-  quay.io/kubespray/kubespray:v2.22.0 bash
+  quay.io/kubespray/kubespray:v2.17.1 bash
 # Inside the container you may now run the kubespray playbooks:
 ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
 ```

-#### Collection
-
-See [here](docs/ansible_collection.md) if you wish to use this repository as an Ansible collection
-
 ### Vagrant

-For Vagrant we need to install Python dependencies for provisioning tasks.
-Check that ``Python`` and ``pip`` are installed:
+For Vagrant we need to install python dependencies for provisioning tasks.
+Check if Python and pip are installed:

 ```ShellSession
 python -V && pip -V
 ```

 If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
+Install the necessary requirements

 ```ShellSession
+sudo pip install -r requirements.txt
 vagrant up
 ```
@@ -134,87 +110,69 @@ vagrant up
 - [Adding/replacing a node](docs/nodes.md)
 - [Upgrades basics](docs/upgrades.md)
 - [Air-Gap installation](docs/offline-environment.md)
-- [NTP](docs/ntp.md)
-- [Hardening](docs/hardening.md)
-- [Mirror](docs/mirror.md)
 - [Roadmap](docs/roadmap.md)

 ## Supported Linux Distributions

 - **Flatcar Container Linux by Kinvolk**
-- **Debian** Bullseye, Buster
+- **Debian** Bullseye, Buster, Jessie, Stretch
-- **Ubuntu** 16.04, 18.04, 20.04, 22.04
+- **Ubuntu** 16.04, 18.04, 20.04
-- **CentOS/RHEL** 7, [8, 9](docs/centos.md#centos-8)
+- **CentOS/RHEL** 7, [8](docs/centos8.md)
-- **Fedora** 35, 36
+- **Fedora** 34, 35
 - **Fedora CoreOS** (see [fcos Note](docs/fcos.md))
 - **openSUSE** Leap 15.x/Tumbleweed
-- **Oracle Linux** 7, [8, 9](docs/centos.md#centos-8)
+- **Oracle Linux** 7, [8](docs/centos8.md)
-- **Alma Linux** [8, 9](docs/centos.md#centos-8)
+- **Alma Linux** [8](docs/centos8.md)
-- **Rocky Linux** [8, 9](docs/centos.md#centos-8)
+- **Rocky Linux** [8](docs/centos8.md)
-- **Kylin Linux Advanced Server V10** (experimental: see [kylin linux notes](docs/kylinlinux.md))
 - **Amazon Linux 2** (experimental: see [amazon linux notes](docs/amazonlinux.md))
-- **UOS Linux** (experimental: see [uos linux notes](docs/uoslinux.md))
-- **openEuler** (experimental: see [openEuler notes](docs/openeuler.md))

 Note: Upstart/SysV init based OS types are not supported.

 ## Supported Components

 - Core
-  - [kubernetes](https://github.com/kubernetes/kubernetes) v1.26.5
+  - [kubernetes](https://github.com/kubernetes/kubernetes) v1.22.8
-  - [etcd](https://github.com/etcd-io/etcd) v3.5.6
+  - [etcd](https://github.com/coreos/etcd) v3.5.0
   - [docker](https://www.docker.com/) v20.10 (see note)
-  - [containerd](https://containerd.io/) v1.7.1
+  - [containerd](https://containerd.io/) v1.5.8
-  - [cri-o](http://cri-o.io/) v1.24 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
+  - [cri-o](http://cri-o.io/) v1.22 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
 - Network Plugin
-  - [cni-plugins](https://github.com/containernetworking/plugins) v1.2.0
+  - [cni-plugins](https://github.com/containernetworking/plugins) v1.0.1
-  - [calico](https://github.com/projectcalico/calico) v3.25.1
+  - [calico](https://github.com/projectcalico/calico) v3.20.3
+  - [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
-  - [cilium](https://github.com/cilium/cilium) v1.13.0
+  - [cilium](https://github.com/cilium/cilium) v1.9.11
-  - [flannel](https://github.com/flannel-io/flannel) v0.21.4
+  - [flanneld](https://github.com/flannel-io/flannel) v0.15.1
-  - [kube-ovn](https://github.com/alauda/kube-ovn) v1.10.7
+  - [kube-ovn](https://github.com/alauda/kube-ovn) v1.8.1
-  - [kube-router](https://github.com/cloudnativelabs/kube-router) v1.5.1
+  - [kube-router](https://github.com/cloudnativelabs/kube-router) v1.3.2
-  - [multus](https://github.com/k8snetworkplumbingwg/multus-cni) v3.8
+  - [multus](https://github.com/intel/multus-cni) v3.8
   - [weave](https://github.com/weaveworks/weave) v2.8.1
-  - [kube-vip](https://github.com/kube-vip/kube-vip) v0.5.12
 - Application
-  - [cert-manager](https://github.com/jetstack/cert-manager) v1.11.1
-  - [coredns](https://github.com/coredns/coredns) v1.9.3
-  - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.7.1
-  - [krew](https://github.com/kubernetes-sigs/krew) v0.4.3
-  - [argocd](https://argoproj.github.io/) v2.7.2
-  - [helm](https://helm.sh/) v3.12.0
-  - [metallb](https://metallb.universe.tf/) v0.13.9
-  - [registry](https://github.com/distribution/distribution) v2.8.1
-- Storage Plugin
   - [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
   - [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
-  - [aws-ebs-csi-plugin](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) v0.5.0
+  - [cert-manager](https://github.com/jetstack/cert-manager) v1.5.4
-  - [azure-csi-plugin](https://github.com/kubernetes-sigs/azuredisk-csi-driver) v1.10.0
+  - [coredns](https://github.com/coredns/coredns) v1.8.0
-  - [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) v1.22.0
+  - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.0.4
-  - [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) v1.4.0
-  - [local-path-provisioner](https://github.com/rancher/local-path-provisioner) v0.0.23
-  - [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) v2.5.0

 ## Container Runtime Notes

-- Supported Docker versions are 18.09, 19.03 and 20.10. The *recommended* Docker version is 20.10. `Kubelet` might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. the YUM ``versionlock`` plugin or ``apt pin``).
+- The list of available docker version is 18.09, 19.03 and 20.10. The recommended docker version is 20.10. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
 - The cri-o version should be aligned with the respective kubernetes version (i.e. kube_version=1.20.x, crio_version=1.20)

 ## Requirements

-- **Minimum required version of Kubernetes is v1.24**
+- **Minimum required version of Kubernetes is v1.20**
-- **Ansible v2.11+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
+- **Ansible v2.9.x, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands, Ansible 2.10.x is experimentally supported for now**
 - The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
 - The target servers are configured to allow **IPv4 forwarding**.
 - If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
 - The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
   in order to avoid any issue during deployment you should disable your firewall.
-- If kubespray is run from non-root user account, correct privilege escalation method
+- If kubespray is ran from non-root user account, correct privilege escalation method
   should be configured in the target servers. Then the `ansible_become` flag
   or command parameters `--become or -b` should be specified.

 Hardware:
-These limits are safeguarded by Kubespray. Actual requirements for your workload can differ. For a sizing guide go to the [Building Large Clusters](https://kubernetes.io/docs/setup/cluster-large/#size-of-master-and-master-components) guide.
+These limits are safe guarded by Kubespray. Actual requirements for your workload can differ. For a sizing guide go to the [Building Large Clusters](https://kubernetes.io/docs/setup/cluster-large/#size-of-master-and-master-components) guide.

 - Master
   - Memory: 1500 MB
@@ -223,7 +181,7 @@ These limits are safeguarded by Kubespray. Actual requirements for your workload

 ## Network Plugins

-You can choose among ten network plugins. (default: `calico`, except Vagrant uses `flannel`)
+You can choose between 10 network plugins. (default: `calico`, except Vagrant uses `flannel`)

 - [flannel](docs/flannel.md): gre/vxlan (layer 2) networking.
@@ -232,6 +190,8 @@ You can choose among ten network plugins. (default: `calico`, except Vagrant use
   and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts,
   pods, and (if using Istio and Envoy) applications at the service mesh layer.

+- [canal](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
+
 - [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.

 - [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
@@ -248,10 +208,7 @@ You can choose among ten network plugins. (default: `calico`, except Vagrant use
 - [multus](docs/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.

-- [custom_cni](roles/network-plugin/custom_cni/) : You can specify some manifests that will be applied to the clusters to bring you own CNI and use non-supported ones by Kubespray.
-  See `tests/files/custom_cni/README.md` and `tests/files/custom_cni/values.yaml`for an example with a CNI provided by a Helm Chart.
-
-The network plugin to use is defined by the variable `kube_network_plugin`. There is also an
+The choice is defined with the variable `kube_network_plugin`. There is also an
 option to leverage built-in cloud provider networking instead.
 See also [Network checker](docs/netcheck.md).
@@ -272,11 +229,10 @@ See also [Network checker](docs/netcheck.md).
 - [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/v4/doc/integrations/ansible.rst)
 - [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)
-- [Kubean](https://github.com/kubean-io/kubean)

 ## CI Tests

-[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/pipeline.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/-/pipelines)
+[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/pipeline.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)

 CI/end-to-end tests sponsored by: [CNCF](https://cncf.io), [Equinix Metal](https://metal.equinix.com/), [OVHcloud](https://www.ovhcloud.com/), [ELASTX](https://elastx.se/).

RELEASE.md

@@ -2,18 +2,17 @@
The Kubespray Project is released on an as-needed basis. The process is as follows: The Kubespray Project is released on an as-needed basis. The process is as follows:
1. An issue is proposing a new release with a changelog since the last release. Please see [a good sample issue](https://github.com/kubernetes-sigs/kubespray/issues/8325) 1. An issue is proposing a new release with a changelog since the last release
2. At least one of the [approvers](OWNERS_ALIASES) must approve this release 2. At least one of the [approvers](OWNERS_ALIASES) must approve this release
3. The `kube_version_min_required` variable is set to `n-1` 3. The `kube_version_min_required` variable is set to `n-1`
4. Remove hashes for [EOL versions](https://github.com/kubernetes/website/blob/main/content/en/releases/patch-releases.md) of kubernetes from `*_checksums` variables. 4. Remove hashes for [EOL versions](https://github.com/kubernetes/sig-release/blob/master/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
5. Create the release note with [Kubernetes Release Notes Generator](https://github.com/kubernetes/release/blob/master/cmd/release-notes/README.md). See the following `Release note creation` section for the details. 5. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
6. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes 6. An approver creates a release branch in the form `release-X.Y`
7. An approver creates a release branch in the form `release-X.Y` 7. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) docker images are built and tagged
8. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) container images are built and tagged. See the following `Container image creation` section for the details. 8. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
9. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml` 9. The release issue is closed
10. The release issue is closed 10. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
11. An announcement email is sent to `dev@kubernetes.io` with the subject `[ANNOUNCE] Kubespray $VERSION is released` 11. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
12. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
## Major/minor releases and milestones
@@ -47,37 +46,3 @@ The Kubespray Project is released on an as-needed basis. The process is as follo
then Kubespray v2.1.0 may be bound to only minor changes to `kube_version`, like v1.5.1
and *any* changes to other components, like etcd v4, or calico 1.2.3.
And Kubespray v3.x.x shall be bound to `kube_version: 2.x.x` respectively.
## Release note creation
You can create a release note with:
```shell
export GITHUB_TOKEN=<your-github-token>
export ORG=kubernetes-sigs
export REPO=kubespray
release-notes --start-sha <The start commit-id> --end-sha <The end commit-id> --dependencies=false --output=/tmp/kubespray-release-note --required-author=""
```
If the release note file (/tmp/kubespray-release-note) contains "### Uncategorized" pull requests, those pull requests don't have a valid kind label (`kind/feature`, etc.).
It is necessary to put a valid label on each such pull request and run the above release-notes command again to get a complete release note.
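The `release-notes` tool itself comes from the kubernetes/release repository linked above; a minimal install sketch, assuming the standard Go module path (check the linked README if it has moved):
```shell
# Install the Kubernetes release-notes generator (assumed module path).
go install k8s.io/release/cmd/release-notes@latest
```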
## Container image creation
The container image `quay.io/kubespray/kubespray:vX.Y.Z` can be created from Dockerfile of the kubespray root directory:
```shell
cd kubespray/
nerdctl build -t quay.io/kubespray/kubespray:vX.Y.Z .
nerdctl push quay.io/kubespray/kubespray:vX.Y.Z
```
The container image `quay.io/kubespray/vagrant:vX.Y.Z` can be created from build.sh of test-infra/vagrant-docker/:
```shell
cd kubespray/test-infra/vagrant-docker/
./build vX.Y.Z
```
Please note that the above operations require permission to push container images into quay.io/kubespray/.
If you don't have that permission, please ask for it on the #kubespray-dev channel.
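Pushing requires an authenticated session against quay.io; a minimal sketch, assuming an account with write access to quay.io/kubespray/ already exists:
```shell
# Log in to quay.io before pushing (prompts for credentials).
nerdctl login quay.io
# Optionally verify the pushed tag is pullable afterwards.
nerdctl pull quay.io/kubespray/kubespray:vX.Y.Z
```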

View File

@@ -9,7 +9,5 @@
#
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
# INSTRUCTIONS AT https://kubernetes.io/security/
atoms
mattymo
floryut
oomichi
cristicalin

Vagrantfile
View File

@@ -10,7 +10,6 @@ Vagrant.require_version ">= 2.0.0"
CONFIG = File.join(File.dirname(__FILE__), ENV['KUBESPRAY_VAGRANT_CONFIG'] || 'vagrant/config.rb')
FLATCAR_URL_TEMPLATE = "https://%s.release.flatcar-linux.net/amd64-usr/current/flatcar_production_vagrant.json"
FEDORA35_MIRROR = "https://download.fedoraproject.org/pub/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-Vagrant-35-1.2.x86_64.vagrant-libvirt.box"
# Uniq disk UUID for libvirt
DISK_UUID = Time.now.utc.to_i
@@ -29,10 +28,9 @@ SUPPORTED_OS = {
"centos8-bento" => {box: "bento/centos-8", user: "vagrant"}, "centos8-bento" => {box: "bento/centos-8", user: "vagrant"},
"almalinux8" => {box: "almalinux/8", user: "vagrant"}, "almalinux8" => {box: "almalinux/8", user: "vagrant"},
"almalinux8-bento" => {box: "bento/almalinux-8", user: "vagrant"}, "almalinux8-bento" => {box: "bento/almalinux-8", user: "vagrant"},
"rockylinux8" => {box: "generic/rocky8", user: "vagrant"}, "fedora34" => {box: "fedora/34-cloud-base", user: "vagrant"},
"fedora35" => {box: "fedora/35-cloud-base", user: "vagrant", box_url: FEDORA35_MIRROR}, "fedora35" => {box: "fedora/35-cloud-base", user: "vagrant"},
"fedora36" => {box: "fedora/36-cloud-base", user: "vagrant"}, "opensuse" => {box: "bento/opensuse-leap-15.2", user: "vagrant"},
"opensuse" => {box: "opensuse/Leap-15.4.x86_64", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"}, "opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"},
"oraclelinux" => {box: "generic/oracle7", user: "vagrant"}, "oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
"oraclelinux8" => {box: "generic/oracle8", user: "vagrant"}, "oraclelinux8" => {box: "generic/oracle8", user: "vagrant"},
@@ -56,14 +54,14 @@ $subnet ||= "172.18.8"
$subnet_ipv6 ||= "fd3c:b398:0698:0756"
$os ||= "ubuntu1804"
$network_plugin ||= "flannel"
# Setting multi_networking to true will install Multus: https://github.com/k8snetworkplumbingwg/multus-cni # Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
$multi_networking ||= "False" $multi_networking ||= false
$download_run_once ||= "True"
$download_force_cache ||= "False"
# The first three nodes are etcd servers
$etcd_instances ||= [$num_instances, 3].min $etcd_instances ||= $num_instances
# The first two nodes are kube masters
$kube_master_instances ||= [$num_instances, 2].min $kube_master_instances ||= $num_instances == 1 ? $num_instances : ($num_instances - 1)
# All nodes are kube nodes
$kube_node_instances ||= $num_instances
# The following only works when using the libvirt provider
@@ -72,24 +70,14 @@ $kube_node_instances_with_disks_size ||= "20G"
$kube_node_instances_with_disks_number ||= 2
$override_disk_size ||= false
$disk_size ||= "20GB"
$local_path_provisioner_enabled ||= "False" $local_path_provisioner_enabled ||= false
$local_path_provisioner_claim_root ||= "/opt/local-path-provisioner/"
$libvirt_nested ||= false
# boolean or string (e.g. "-vvv")
$ansible_verbosity ||= false
$ansible_tags ||= ENV['VAGRANT_ANSIBLE_TAGS'] || ""
$playbook ||= "cluster.yml"
host_vars = {}
# throw error if os is not supported
if ! SUPPORTED_OS.key?($os)
puts "Unsupported OS: #{$os}"
puts "Supported OS are: #{SUPPORTED_OS.keys.join(', ')}"
exit 1
end
$box = SUPPORTED_OS[$os][:box]
# if $inventory is not set, try to use example
$inventory = "inventory/sample" if ! $inventory
@@ -181,7 +169,7 @@ Vagrant.configure("2") do |config|
# always make /dev/sd{a/b/c} so that CI can ensure that
# virtualbox and libvirt will have the same devices to use for OSDs
(1..$kube_node_instances_with_disks_number).each do |d|
lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "scsi" lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "ide"
end
end
end
@@ -252,11 +240,9 @@ Vagrant.configure("2") do |config|
}
# Only execute the Ansible provisioner once, when all the machines are up and ready.
# And limit the action to gathering facts, the full playbook is going to be ran by testcases_run.sh
if i == $num_instances
node.vm.provision "ansible" do |ansible|
ansible.playbook = $playbook
ansible.verbose = $ansible_verbosity
$ansible_inventory_path = File.join( $inventory, "hosts.ini")
if File.exist?($ansible_inventory_path)
ansible.inventory_path = $ansible_inventory_path
@@ -266,9 +252,7 @@ Vagrant.configure("2") do |config|
ansible.host_key_checking = false
ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
ansible.host_vars = host_vars
if $ansible_tags != "" #ansible.tags = ['download']
ansible.tags = [$ansible_tags]
end
ansible.groups = {
"etcd" => ["#{$instance_name_prefix}-[1:#{$etcd_instances}]"],
"kube_control_plane" => ["#{$instance_name_prefix}-[1:#{$kube_master_instances}]"],

View File

@@ -1,6 +1,6 @@
[ssh_connection]
pipelining=True
ansible_ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
#control_path = ~/.ssh/ansible-%%r@%%h:%%p
[defaults]
# https://github.com/ansible/ansible/issues/56930 (to ignore group names with - and .)
@@ -10,11 +10,11 @@ host_key_checking=False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp
fact_caching_timeout = 86400 fact_caching_timeout = 7200
stdout_callback = default
display_skipped_hosts = no
library = ./library
callbacks_enabled = profile_tasks,ara_default callback_whitelist = profile_tasks
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
deprecation_warnings=False
inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds, .gpg
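To see which of these settings actually override Ansible's defaults, a quick check using standard `ansible-config` (not Kubespray-specific):
```shell
# Dump only the settings this ansible.cfg changes from the defaults.
ANSIBLE_CONFIG=ansible.cfg ansible-config dump --only-changed
```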

View File

@@ -3,20 +3,32 @@
gather_facts: false
become: no
vars:
minimal_ansible_version: 2.11.0 minimal_ansible_version: 2.9.0
maximal_ansible_version: 2.13.0 minimal_ansible_version_2_10: 2.10.11
maximal_ansible_version: 2.12.0
ansible_connection: local
tags: always
tasks:
- name: "Check {{ minimal_ansible_version }} <= Ansible version < {{ maximal_ansible_version }}"
assert:
msg: "Ansible must be between {{ minimal_ansible_version }} and {{ maximal_ansible_version }} exclusive" msg: "Ansible must be between {{ minimal_ansible_version }} and {{ maximal_ansible_version }}"
that:
- ansible_version.string is version(minimal_ansible_version, ">=")
- ansible_version.string is version(maximal_ansible_version, "<")
tags:
- check
- name: "Check Ansible version > {{ minimal_ansible_version_2_10 }} when using ansible 2.10"
assert:
msg: "When using Ansible 2.10, the minimum supported version is {{ minimal_ansible_version_2_10 }}"
that:
- ansible_version.string is version(minimal_ansible_version_2_10, ">=")
- ansible_version.string is version(maximal_ansible_version, "<")
when:
- ansible_version.string is version('2.10.0', ">=")
tags:
- check
- name: "Check that python netaddr is installed" - name: "Check that python netaddr is installed"
assert: assert:
msg: "Python netaddr is not present" msg: "Python netaddr is not present"

View File

@@ -1,3 +1,127 @@
---
- name: Install Kubernetes - name: Check ansible version
ansible.builtin.import_playbook: playbooks/cluster.yml import_playbook: ansible_version.yml
- name: Ensure compatibility with old groups
import_playbook: legacy_groups.yml
- hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
- hosts: k8s_cluster:etcd
strategy: linear
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
gather_facts: false
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: bootstrap-os, tags: bootstrap-os}
- name: Gather facts
tags: always
import_playbook: facts.yml
- hosts: k8s_cluster:etcd
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes/preinstall, tags: preinstall }
- { role: "container-engine", tags: "container-engine", when: deploy_container_engine }
- { role: download, tags: download, when: "not skip_downloads" }
- hosts: etcd
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: true
etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
when: not etcd_kubeadm_enabled| default(false)
- hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: false
etcd_events_cluster_setup: false
when: not etcd_kubeadm_enabled| default(false)
- hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes/node, tags: node }
- hosts: kube_control_plane
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes/control-plane, tags: master }
- { role: kubernetes/client, tags: client }
- { role: kubernetes-apps/cluster_roles, tags: cluster-roles }
- hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes/kubeadm, tags: kubeadm}
- { role: kubernetes/node-label, tags: node-label }
- { role: network_plugin, tags: network }
- hosts: calico_rr
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: network_plugin/calico/rr, tags: ['network', 'calico_rr'] }
- hosts: kube_control_plane[0]
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"] }
- hosts: kube_control_plane
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes-apps/external_cloud_controller, tags: external-cloud-controller }
- { role: kubernetes-apps/network_plugin, tags: network }
- { role: kubernetes-apps/policy_controller, tags: policy-controller }
- { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
- { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
- { role: kubernetes-apps, tags: apps }
- hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }
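Either variant of the playbook is launched the same way; a typical invocation, assuming a `hosts.ini` inventory like the Vagrantfile above references:
```shell
# Run the full cluster playbook with privilege escalation.
ansible-playbook -i inventory/sample/hosts.ini --become --become-user=root cluster.yml
```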

View File

@@ -1 +1 @@
boto3 # Apache-2.0

View File

@@ -1,2 +1,2 @@
.generated
/inventory

View File

@@ -47,10 +47,6 @@ If you need to delete all resources from a resource group, simply call:
**WARNING** this really deletes everything from your resource group, including everything that was later created by you!
## Installing Ansible and the dependencies
Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
## Generating an inventory for kubespray
After you have applied the templates, you can generate an inventory with this call:
@@ -63,5 +59,6 @@ It will create the file ./inventory which can then be used with kubespray, e.g.:
```shell
cd kubespray-root-dir
sudo pip3 install -r requirements.txt
ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all/all.yml" cluster.yml
```

View File

@@ -31,3 +31,4 @@
[k8s_cluster:children]
kube_node
kube_control_plane

View File

@@ -27,4 +27,4 @@
}
}
]
}

View File

@@ -103,4 +103,4 @@
}
{% endif %}
]
}

View File

@@ -5,4 +5,4 @@
"variables": {}, "variables": {},
"resources": [], "resources": [],
"outputs": {} "outputs": {}
} }

View File

@@ -16,4 +16,4 @@
}
}
]
}

View File

@@ -43,7 +43,7 @@
package:
name: "{{ item }}"
state: present
with_items: "{{ distro_extra_packages + [ 'rsyslog', 'openssh-server' ] }}" with_items: "{{ distro_extra_packages }} + [ 'rsyslog', 'openssh-server' ]"
- name: Start needed services
service:

View File

@@ -17,7 +17,7 @@ pass_or_fail() {
test_distro() {
local distro=${1:?};shift
local extra="${*:-}"
local prefix="${distro[${extra}]}" local prefix="$distro[${extra}]}"
ansible-playbook -i hosts dind-cluster.yaml -e node_distro=$distro
pass_or_fail "$prefix: dind-nodes" || return 1
(cd ../..
@@ -71,15 +71,15 @@ for spec in ${SPECS}; do
echo "Loading file=${spec} ..." echo "Loading file=${spec} ..."
. ${spec} || continue . ${spec} || continue
: ${DISTROS:?} || continue : ${DISTROS:?} || continue
echo "DISTROS:" "${DISTROS[@]}" echo "DISTROS=${DISTROS[@]}"
echo "EXTRAS->" echo "EXTRAS->"
printf " %s\n" "${EXTRAS[@]}" printf " %s\n" "${EXTRAS[@]}"
let n=1 let n=1
for distro in "${DISTROS[@]}"; do for distro in ${DISTROS[@]}; do
for extra in "${EXTRAS[@]:-NULL}"; do
# Magic value to let this for run once:
[[ ${extra} == NULL ]] && unset extra
docker rm -f "${NODES[@]}" docker rm -f ${NODES[@]}
printf -v file_out "%s/%s-%02d.out" ${OUTPUT_DIR} ${spec} $((n++))
{
info "${distro}[${extra}] START: file_out=${file_out}"

View File

@@ -83,15 +83,11 @@ class KubesprayInventory(object):
self.config_file = config_file
self.yaml_config = {}
loadPreviousConfig = False
printHostnames = False
# See whether there are any commands to process
if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
if changed_hosts[0] == "add":
loadPreviousConfig = True
changed_hosts = changed_hosts[1:]
elif changed_hosts[0] == "print_hostnames":
loadPreviousConfig = True
printHostnames = True
else:
self.parse_command(changed_hosts[0], changed_hosts[1:])
sys.exit(0)
@@ -109,10 +105,6 @@ class KubesprayInventory(object):
print(e)
sys.exit(1)
if printHostnames:
self.print_hostnames()
sys.exit(0)
self.ensure_required_groups(ROLES)
if changed_hosts:
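The `print_hostnames` command added on the left can be exercised like any other inventory-builder command; a sketch, assuming the `CONFIG_FILE` environment variable the script reads and a hypothetical existing inventory path:
```shell
# Print the hostnames stored in an existing inventory file.
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py print_hostnames
```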

View File

@@ -1,3 +1,3 @@
configparser>=3.3.0 configparser>=3.3.0
ipaddress
ruamel.yaml>=0.15.88
ipaddress

View File

@@ -1,3 +1,3 @@
hacking>=0.10.2
mock>=1.3.0
pytest>=2.8.0
mock>=1.3.0

View File

@@ -13,7 +13,6 @@
# under the License.
import inventory
from io import StringIO
import unittest
from unittest import mock
@@ -27,28 +26,6 @@ if path not in sys.path:
import inventory # noqa
class TestInventoryPrintHostnames(unittest.TestCase):
@mock.patch('ruamel.yaml.YAML.load')
def test_print_hostnames(self, load_mock):
mock_io = mock.mock_open(read_data='')
load_mock.return_value = OrderedDict({'all': {'hosts': {
'node1': {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'},
'node2': {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'}}}})
with mock.patch('builtins.open', mock_io):
with self.assertRaises(SystemExit) as cm:
with mock.patch('sys.stdout', new_callable=StringIO) as stdout:
inventory.KubesprayInventory(
changed_hosts=["print_hostnames"],
config_file="file")
self.assertEqual("node1 node2\n", stdout.getvalue())
self.assertEqual(cm.exception.code, 0)
class TestInventory(unittest.TestCase):
@mock.patch('inventory.sys')
def setUp(self, sys_mock):

View File

@@ -1,2 +1,3 @@
#k8s_deployment_user: kubespray
#k8s_deployment_user_pkey_path: /tmp/ssh_rsa

View File

@@ -28,7 +28,7 @@
sysctl:
name: net.ipv4.ip_forward
value: 1
sysctl_file: "{{ sysctl_file_path }}" sysctl_file: /etc/sysctl.d/ipv4-ip_forward.conf
state: present
reload: yes
@@ -37,7 +37,7 @@
name: "{{ item }}" name: "{{ item }}"
state: present state: present
value: 0 value: 0
sysctl_file: "{{ sysctl_file_path }}" sysctl_file: /etc/sysctl.d/bridge-nf-call.conf
reload: yes
with_items:
- net.bridge.bridge-nf-call-arptables

View File

@@ -41,3 +41,4 @@
# [network-storage:children]
# gfs-cluster

View File

@@ -14,16 +14,12 @@ This role performs basic installation and setup of Gluster, but it does not conf
Available variables are listed below, along with default values (see `defaults/main.yml`):
```yaml glusterfs_default_release: ""
glusterfs_default_release: ""
```
You can specify a `default_release` for apt on Debian/Ubuntu by overriding this variable. This is helpful if you need a different package or version for the main GlusterFS packages (e.g. GlusterFS 3.5.x instead of 3.2.x with the `wheezy-backports` default release on Debian Wheezy).
```yaml glusterfs_ppa_use: yes
glusterfs_ppa_use: yes glusterfs_ppa_version: "3.5"
glusterfs_ppa_version: "3.5"
```
For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/) for more info.
@@ -33,11 +29,9 @@ None.
## Example Playbook
```yaml
- hosts: server
roles:
- geerlingguy.glusterfs
```
For a real-world use example, read through [Simple GlusterFS Setup with Ansible](http://www.jeffgeerling.com/blog/simple-glusterfs-setup-ansible), a blog post by this role's author, which is included in Chapter 8 of [Ansible for DevOps](https://www.ansiblefordevops.com/).
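The role referenced in the example playbook is published on Ansible Galaxy, so installing it is a single call (standard Galaxy usage, not specific to this repository):
```shell
# Fetch the geerlingguy.glusterfs role from Ansible Galaxy.
ansible-galaxy install geerlingguy.glusterfs
```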

View File

@@ -3,7 +3,6 @@
template:
src: "{{ item.file }}"
dest: "{{ kube_config_dir }}/{{ item.dest }}"
mode: 0644
with_items:
- { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
- { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}

View File

@@ -9,8 +9,7 @@ This script has two features:
(2) Deploy local container registry and register the container images to the registry.
Step(1) should be done online site as a preparation, then we bring the gotten images
to the target offline environment. if images are from a private registry, to the target offline environment.
you need to set `PRIVATE_REGISTRY` environment variable.
Then we will run step(2) for registering the images to local registry.
Step(1) can be operated with:
@@ -45,21 +44,3 @@ temp
In some cases you may want to update some component version, you can declare version variables in ansible inventory file or group_vars,
then run `./generate_list.sh -i [inventory_file]` to update file.list and images.list.
## manage-offline-files.sh
This script will download all files according to `temp/files.list` and run an nginx container to provide offline file downloads.
Step(1) generate `files.list`
```shell
./generate_list.sh
```
Step(2) download files and run nginx container
```shell
./manage-offline-files.sh
```
When the nginx container is running, it can be accessed through <http://127.0.0.1:8080/>.
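Once the container is up you can sanity-check the file server from the same host; a minimal sketch (the exact paths depend on what `files.list` contained, so the download path below is illustrative only):
```shell
# List the mirrored files through the nginx autoindex.
curl http://127.0.0.1:8080/
# Fetch one mirrored artifact (hypothetical path).
wget http://127.0.0.1:8080/storage.googleapis.com/kubernetes-release/release/v1.22.8/bin/linux/amd64/kubectl
```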

View File

@@ -19,9 +19,6 @@ sed -n '/^downloads:/,/download_defaults:/p' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed 'N;s#\n# #g' | tr ' ' ':' | sed 's/\"//g' > ${TEMP_DIR}/images.list.template
# add kube-* images to images list template
# Those container images are downloaded by kubeadm, then roles/download/defaults/main.yml
# doesn't contain those images. That is reason why here needs to put those images into the
# list separately.
KUBE_IMAGES="kube-apiserver kube-controller-manager kube-scheduler kube-proxy"
for i in $KUBE_IMAGES; do
echo "{{ kube_image_repo }}/$i:{{ kube_version }}" >> ${TEMP_DIR}/images.list.template

View File

@@ -1,6 +1,6 @@
---
- hosts: localhost
become: no become: false
roles:
# Just load default variables from roles.
@@ -10,10 +10,11 @@
when: false
tasks:
# Generate files.list and images.list files from templates. - name: Generate files.list and images.list files from templates
- template: template:
src: ./contrib/offline/temp/{{ item }}.list.template
dest: ./contrib/offline/temp/{{ item }}.list
mode: 0644
with_items:
- files
- images

View File

@@ -15,7 +15,7 @@ function create_container_image_tar() {
IMAGES=$(kubectl describe pods --all-namespaces | grep " Image:" | awk '{print $2}' | sort | uniq)
# NOTE: etcd and pause cannot be seen as pods.
# The pause image is used for --pod-infra-container-image option of kubelet.
EXT_IMAGES=$(kubectl cluster-info dump | egrep "quay.io/coreos/etcd:|registry.k8s.io/pause:" | sed s@\"@@g) EXT_IMAGES=$(kubectl cluster-info dump | egrep "quay.io/coreos/etcd:|k8s.gcr.io/pause:" | sed s@\"@@g)
IMAGES="${IMAGES} ${EXT_IMAGES}" IMAGES="${IMAGES} ${EXT_IMAGES}"
rm -f ${IMAGE_TAR_FILE} rm -f ${IMAGE_TAR_FILE}
@@ -46,16 +46,15 @@ function create_container_image_tar() {
# NOTE: Here removes the following repo parts from each image
# so that these parts will be replaced with Kubespray.
# - kube_image_repo: "registry.k8s.io" # - kube_image_repo: "k8s.gcr.io"
# - gcr_image_repo: "gcr.io"
# - docker_image_repo: "docker.io"
# - quay_image_repo: "quay.io"
FIRST_PART=$(echo ${image} | awk -F"/" '{print $1}')
if [ "${FIRST_PART}" = "registry.k8s.io" ] || if [ "${FIRST_PART}" = "k8s.gcr.io" ] ||
[ "${FIRST_PART}" = "gcr.io" ] || [ "${FIRST_PART}" = "gcr.io" ] ||
[ "${FIRST_PART}" = "docker.io" ] || [ "${FIRST_PART}" = "docker.io" ] ||
[ "${FIRST_PART}" = "quay.io" ] || [ "${FIRST_PART}" = "quay.io" ]; then
[ "${FIRST_PART}" = "${PRIVATE_REGISTRY}" ]; then
image=$(echo ${image} | sed s@"${FIRST_PART}/"@@)
fi
echo "${FILE_NAME} ${image}" >> ${IMAGE_LIST}
@@ -153,8 +152,7 @@ else
echo "(2) Deploy local container registry and register the container images to the registry." echo "(2) Deploy local container registry and register the container images to the registry."
echo "" echo ""
echo "Step(1) should be done online site as a preparation, then we bring" echo "Step(1) should be done online site as a preparation, then we bring"
echo "the gotten images to the target offline environment. if images are from" echo "the gotten images to the target offline environment."
echo "a private registry, you need to set PRIVATE_REGISTRY environment variable."
echo "Then we will run step(2) for registering the images to local registry." echo "Then we will run step(2) for registering the images to local registry."
echo "" echo ""
echo "${IMAGE_TAR_FILE} is created to contain your container images." echo "${IMAGE_TAR_FILE} is created to contain your container images."

View File

@@ -1,44 +0,0 @@
#!/bin/bash
CURRENT_DIR=$( dirname "$(readlink -f "$0")" )
OFFLINE_FILES_DIR_NAME="offline-files"
OFFLINE_FILES_DIR="${CURRENT_DIR}/${OFFLINE_FILES_DIR_NAME}"
OFFLINE_FILES_ARCHIVE="${CURRENT_DIR}/offline-files.tar.gz"
FILES_LIST=${FILES_LIST:-"${CURRENT_DIR}/temp/files.list"}
NGINX_PORT=8080
# download files
if [ ! -f "${FILES_LIST}" ]; then
echo "${FILES_LIST} should exist, run ./generate_list.sh first."
exit 1
fi
rm -rf "${OFFLINE_FILES_DIR}"
rm "${OFFLINE_FILES_ARCHIVE}"
mkdir "${OFFLINE_FILES_DIR}"
wget -x -P "${OFFLINE_FILES_DIR}" -i "${FILES_LIST}"
tar -czvf "${OFFLINE_FILES_ARCHIVE}" "${OFFLINE_FILES_DIR_NAME}"
[ -n "$NO_HTTP_SERVER" ] && echo "skip to run nginx" && exit 0
# run nginx container server
if command -v nerdctl 1>/dev/null 2>&1; then
runtime="nerdctl"
elif command -v podman 1>/dev/null 2>&1; then
runtime="podman"
elif command -v docker 1>/dev/null 2>&1; then
runtime="docker"
else
echo "No supported container runtime found"
exit 1
fi
sudo "${runtime}" container inspect nginx >/dev/null 2>&1
if [ $? -ne 0 ]; then
sudo "${runtime}" run \
--restart=always -d -p ${NGINX_PORT}:80 \
--volume "${OFFLINE_FILES_DIR}:/usr/share/nginx/html/download" \
--volume "$(pwd)"/nginx.conf:/etc/nginx/nginx.conf \
--name nginx nginx:alpine
fi

View File

@@ -1,39 +0,0 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
include /etc/nginx/default.d/*.conf;
location / {
root /usr/share/nginx/html/download;
autoindex on;
autoindex_exact_size off;
autoindex_localtime on;
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}

View File

@@ -36,7 +36,8 @@ terraform apply -var-file=credentials.tfvars
```
- Terraform automatically creates an Ansible Inventory file called `hosts` with the created infrastructure in the directory `inventory`
- Ansible will automatically generate an ssh config file for your bastion hosts. To connect to hosts with ssh using bastion host use generated `ssh-bastion.conf`. Ansible automatically detects bastion and changes `ssh_args` - Ansible will automatically generate an ssh config file for your bastion hosts. To connect to hosts with ssh using bastion host use generated ssh-bastion.conf.
Ansible automatically detects bastion and changes ssh_args
```commandline
ssh -F ./ssh-bastion.conf user@$ip

View File

@@ -20,20 +20,20 @@ module "aws-vpc" {
aws_cluster_name = var.aws_cluster_name
aws_vpc_cidr_block = var.aws_vpc_cidr_block
aws_avail_zones = data.aws_availability_zones.available.names aws_avail_zones = slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names))
aws_cidr_subnets_private = var.aws_cidr_subnets_private
aws_cidr_subnets_public = var.aws_cidr_subnets_public
default_tags = var.default_tags
}
module "aws-nlb" { module "aws-elb" {
source = "./modules/nlb" source = "./modules/elb"
aws_cluster_name = var.aws_cluster_name
aws_vpc_id = module.aws-vpc.aws_vpc_id
aws_avail_zones = data.aws_availability_zones.available.names aws_avail_zones = slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names))
aws_subnet_ids_public = module.aws-vpc.aws_subnet_ids_public
aws_nlb_api_port = var.aws_nlb_api_port aws_elb_api_port = var.aws_elb_api_port
k8s_secure_api_port = var.k8s_secure_api_port
default_tags = var.default_tags
}
@@ -54,6 +54,7 @@ resource "aws_instance" "bastion-server" {
instance_type = var.aws_bastion_size
count = var.aws_bastion_num
associate_public_ip_address = true
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names)), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_public, count.index)
vpc_security_group_ids = module.aws-vpc.aws_security_group
@@ -78,7 +79,8 @@ resource "aws_instance" "k8s-master" {
count = var.aws_kube_master_num
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index) availability_zone = element(slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names)), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = module.aws-vpc.aws_security_group
@@ -96,10 +98,10 @@ resource "aws_instance" "k8s-master" {
}))
}
resource "aws_lb_target_group_attachment" "tg-attach_master_nodes" { resource "aws_elb_attachment" "attach_master_nodes" {
count = var.aws_kube_master_num
target_group_arn = module.aws-nlb.aws_nlb_api_tg_arn elb = module.aws-elb.aws_elb_api_id
target_id = element(aws_instance.k8s-master.*.private_ip, count.index) instance = element(aws_instance.k8s-master.*.id, count.index)
}
resource "aws_instance" "k8s-etcd" {
@@ -108,7 +110,8 @@ resource "aws_instance" "k8s-etcd" {
count = var.aws_etcd_num
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index) availability_zone = element(slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names)), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = module.aws-vpc.aws_security_group
@@ -131,7 +134,8 @@ resource "aws_instance" "k8s-worker" {
count = var.aws_kube_worker_num
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index) availability_zone = element(slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names)), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = module.aws-vpc.aws_security_group
@@ -164,7 +168,7 @@ data "template_file" "inventory" {
list_node = join("\n", aws_instance.k8s-worker.*.private_dns)
connection_strings_etcd = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.private_dns, aws_instance.k8s-etcd.*.private_ip))
list_etcd = join("\n", ((var.aws_etcd_num > 0) ? (aws_instance.k8s-etcd.*.private_dns) : (aws_instance.k8s-master.*.private_dns)))
nlb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-nlb.aws_nlb_api_fqdn}\"" elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
}
}

View File

@@ -0,0 +1,57 @@
resource "aws_security_group" "aws-elb" {
name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
vpc_id = var.aws_vpc_id
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
}))
}
resource "aws_security_group_rule" "aws-allow-api-access" {
type = "ingress"
from_port = var.aws_elb_api_port
to_port = var.k8s_secure_api_port
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = aws_security_group.aws-elb.id
}
resource "aws_security_group_rule" "aws-allow-api-egress" {
type = "egress"
from_port = 0
to_port = 65535
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = aws_security_group.aws-elb.id
}
# Create a new AWS ELB for K8S API
resource "aws_elb" "aws-elb-api" {
name = "kubernetes-elb-${var.aws_cluster_name}"
subnets = var.aws_subnet_ids_public
security_groups = [aws_security_group.aws-elb.id]
listener {
instance_port = var.k8s_secure_api_port
instance_protocol = "tcp"
lb_port = var.aws_elb_api_port
lb_protocol = "tcp"
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 3
target = "HTTPS:${var.k8s_secure_api_port}/healthz"
interval = 30
}
cross_zone_load_balancing = true
idle_timeout = 400
connection_draining = true
connection_draining_timeout = 400
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-elb-api"
}))
}

View File

@@ -0,0 +1,7 @@
output "aws_elb_api_id" {
value = aws_elb.aws-elb-api.id
}
output "aws_elb_api_fqdn" {
value = aws_elb.aws-elb-api.dns_name
}

View File

@@ -6,8 +6,8 @@ variable "aws_vpc_id" {
description = "AWS VPC ID" description = "AWS VPC ID"
} }
variable "aws_nlb_api_port" { variable "aws_elb_api_port" {
description = "Port for AWS NLB" description = "Port for AWS ELB"
}
variable "k8s_secure_api_port" {

View File

@@ -1,41 +0,0 @@
# Create a new AWS NLB for K8S API
resource "aws_lb" "aws-nlb-api" {
name = "kubernetes-nlb-${var.aws_cluster_name}"
load_balancer_type = "network"
subnets = length(var.aws_subnet_ids_public) <= length(var.aws_avail_zones) ? var.aws_subnet_ids_public : slice(var.aws_subnet_ids_public, 0, length(var.aws_avail_zones))
idle_timeout = 400
enable_cross_zone_load_balancing = true
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-nlb-api"
}))
}
# Create a new AWS NLB Instance Target Group
resource "aws_lb_target_group" "aws-nlb-api-tg" {
name = "kubernetes-nlb-tg-${var.aws_cluster_name}"
port = var.k8s_secure_api_port
protocol = "TCP"
target_type = "ip"
vpc_id = var.aws_vpc_id
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
interval = 30
protocol = "HTTPS"
path = "/healthz"
}
}
# Create a new AWS NLB Listener listen to target group
resource "aws_lb_listener" "aws-nlb-api-listener" {
load_balancer_arn = aws_lb.aws-nlb-api.arn
port = var.aws_nlb_api_port
protocol = "TCP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.aws-nlb-api-tg.arn
}
}

View File

@@ -1,11 +0,0 @@
output "aws_nlb_api_id" {
value = aws_lb.aws-nlb-api.id
}
output "aws_nlb_api_fqdn" {
value = aws_lb.aws-nlb-api.dns_name
}
output "aws_nlb_api_tg_arn" {
value = aws_lb_target_group.aws-nlb-api-tg.arn
}

View File

@@ -25,14 +25,13 @@ resource "aws_internet_gateway" "cluster-vpc-internetgw" {
resource "aws_subnet" "cluster-vpc-subnets-public" { resource "aws_subnet" "cluster-vpc-subnets-public" {
vpc_id = aws_vpc.cluster-vpc.id vpc_id = aws_vpc.cluster-vpc.id
count = length(var.aws_cidr_subnets_public) count = length(var.aws_avail_zones)
availability_zone = element(var.aws_avail_zones, count.index % length(var.aws_avail_zones)) availability_zone = element(var.aws_avail_zones, count.index)
cidr_block = element(var.aws_cidr_subnets_public, count.index)
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public"
"kubernetes.io/cluster/${var.aws_cluster_name}" = "shared" "kubernetes.io/cluster/${var.aws_cluster_name}" = "member"
"kubernetes.io/role/elb" = "1"
}))
}
@@ -44,14 +43,12 @@ resource "aws_nat_gateway" "cluster-nat-gateway" {
resource "aws_subnet" "cluster-vpc-subnets-private" {
vpc_id = aws_vpc.cluster-vpc.id
count = length(var.aws_cidr_subnets_private) count = length(var.aws_avail_zones)
availability_zone = element(var.aws_avail_zones, count.index % length(var.aws_avail_zones)) availability_zone = element(var.aws_avail_zones, count.index)
cidr_block = element(var.aws_cidr_subnets_private, count.index)
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
"kubernetes.io/cluster/${var.aws_cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}))
}

View File

@@ -14,8 +14,8 @@ output "etcd" {
value = join("\n", ((var.aws_etcd_num > 0) ? (aws_instance.k8s-etcd.*.private_ip) : (aws_instance.k8s-master.*.private_ip)))
}
output "aws_nlb_api_fqdn" { output "aws_elb_api_fqdn" {
value = "${module.aws-nlb.aws_nlb_api_fqdn}:${var.aws_nlb_api_port}" value = "${module.aws-elb.aws_elb_api_fqdn}:${var.aws_elb_api_port}"
}
output "inventory" {

View File

@@ -33,9 +33,9 @@ aws_kube_worker_size = "t2.medium"
aws_kube_worker_disk_size = 50
#Settings AWS NLB #Settings AWS ELB
aws_nlb_api_port = 6443 aws_elb_api_port = 6443
k8s_secure_api_port = 6443

View File

@@ -24,4 +24,4 @@ kube_control_plane
calico_rr
[k8s_cluster:vars]
${nlb_api_fqdn} ${elb_api_fqdn}

View File

@@ -32,7 +32,7 @@ aws_kube_worker_size = "t3.medium"
aws_kube_worker_disk_size = 50
#Settings AWS ELB
aws_nlb_api_port = 6443 aws_elb_api_port = 6443
k8s_secure_api_port = 6443
default_tags = {

View File

@@ -25,7 +25,7 @@ aws_kube_worker_size = "t3.medium"
aws_kube_worker_disk_size = 50
#Settings AWS ELB
aws_nlb_api_port = 6443 aws_elb_api_port = 6443
k8s_secure_api_port = 6443
default_tags = { }

View File

@@ -104,11 +104,11 @@ variable "aws_kube_worker_size" {
}
/*
* AWS NLB Settings * AWS ELB Settings
*
*/
variable "aws_nlb_api_port" { variable "aws_elb_api_port" {
description = "Port for AWS NLB" description = "Port for AWS ELB"
}
variable "k8s_secure_api_port" {

View File

@@ -1,15 +0,0 @@
output "k8s_masters" {
value = equinix_metal_device.k8s_master.*.access_public_ipv4
}
output "k8s_masters_no_etc" {
value = equinix_metal_device.k8s_master_no_etcd.*.access_public_ipv4
}
output "k8s_etcds" {
value = equinix_metal_device.k8s_etcd.*.access_public_ipv4
}
output "k8s_nodes" {
value = equinix_metal_device.k8s_node.*.access_public_ipv4
}

View File

@@ -1,17 +0,0 @@
terraform {
required_version = ">= 1.0.0"
provider_meta "equinix" {
module_name = "kubespray"
}
required_providers {
equinix = {
source = "equinix/equinix"
version = "~> 1.14"
}
}
}
# Configure the Equinix Metal Provider
provider "equinix" {
}

View File

@@ -31,7 +31,9 @@ The setup looks like following
## Requirements
* Terraform 0.13.0 or newer (0.12 also works if you modify the provider block to include version and remove all `versions.tf` files) * Terraform 0.13.0 or newer
*0.12 also works if you modify the provider block to include version and remove all `versions.tf` files*
## Quickstart

View File

@@ -3,8 +3,8 @@ provider "exoscale" {}
module "kubernetes" { module "kubernetes" {
source = "./modules/kubernetes-cluster" source = "./modules/kubernetes-cluster"
prefix = var.prefix prefix = var.prefix
zone = var.zone
machines = var.machines
ssh_public_keys = var.ssh_public_keys

View File

@@ -74,28 +74,14 @@ ansible-playbook -i contrib/terraform/gcs/inventory.ini cluster.yml -b -v
* `ssh_whitelist`: List of IP ranges (CIDR) that will be allowed to ssh to the nodes
* `api_server_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the API server
* `nodeport_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the kubernetes nodes on port 30000-32767 (kubernetes nodeports)
* `ingress_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to ingress on ports 80 and 443
* `extra_ingress_firewalls`: Additional ingress firewall rules. Key will be used as the name of the rule
* `source_ranges`: List of IP ranges (CIDR). Example: `["8.8.8.8"]`
* `protocol`: Protocol. Example `"tcp"`
* `ports`: List of ports, as string. Example `["53"]`
* `target_tags`: List of target tag (either the machine name or `control-plane` or `worker`). Example: `["control-plane", "worker-0"]`
### Optional
* `prefix`: Prefix to use for all resources, required to be unique for all clusters in the same project *(Defaults to `default`)*
* `master_sa_email`: Service account email to use for the control plane nodes *(Defaults to `""`, auto generate one)* * `master_sa_email`: Service account email to use for the master nodes *(Defaults to `""`, auto generate one)*
* `master_sa_scopes`: Service account email to use for the control plane nodes *(Defaults to `["https://www.googleapis.com/auth/cloud-platform"]`)* * `master_sa_scopes`: Service account email to use for the master nodes *(Defaults to `["https://www.googleapis.com/auth/cloud-platform"]`)*
* `master_preemptible`: Enable [preemptible](https://cloud.google.com/compute/docs/instances/preemptible)
for the control plane nodes *(Defaults to `false`)*
* `master_additional_disk_type`: [Disk type](https://cloud.google.com/compute/docs/disks/#disk-types)
for extra disks added on the control plane nodes *(Defaults to `"pd-ssd"`)*
* `worker_sa_email`: Service account email to use for the worker nodes *(Defaults to `""`, auto generate one)*
* `worker_sa_scopes`: Service account email to use for the worker nodes *(Defaults to `["https://www.googleapis.com/auth/cloud-platform"]`)*
* `worker_preemptible`: Enable [preemptible](https://cloud.google.com/compute/docs/instances/preemptible)
for the worker nodes *(Defaults to `false`)*
* `worker_additional_disk_type`: [Disk type](https://cloud.google.com/compute/docs/disks/#disk-types)
for extra disks added on the worker nodes *(Defaults to `"pd-ssd"`)*
An example variables file can be found `tfvars.json`
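Whichever variables you set, they are passed to Terraform in the usual way; a sketch assuming the values live in `tfvars.json`:
```shell
# Initialize providers, then review and apply the plan with the variable file.
terraform init
terraform plan -var-file=tfvars.json
terraform apply -var-file=tfvars.json
```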

View File

@@ -1,16 +1,8 @@
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "~> 4.0"
}
}
}
provider "google" { provider "google" {
credentials = file(var.keyfile_location) credentials = file(var.keyfile_location)
region = var.region region = var.region
project = var.gcp_project_id project = var.gcp_project_id
version = "~> 3.48"
}
module "kubernetes" {
@@ -21,19 +13,12 @@ module "kubernetes" {
machines = var.machines
ssh_pub_key = var.ssh_pub_key
master_sa_email = var.master_sa_email
master_sa_scopes = var.master_sa_scopes
master_preemptible = var.master_preemptible worker_sa_email = var.worker_sa_email
master_additional_disk_type = var.master_additional_disk_type worker_sa_scopes = var.worker_sa_scopes
worker_sa_email = var.worker_sa_email
worker_sa_scopes = var.worker_sa_scopes
worker_preemptible = var.worker_preemptible
worker_additional_disk_type = var.worker_additional_disk_type
ssh_whitelist = var.ssh_whitelist
api_server_whitelist = var.api_server_whitelist
nodeport_whitelist = var.nodeport_whitelist
ingress_whitelist = var.ingress_whitelist
extra_ingress_firewalls = var.extra_ingress_firewalls
}

View File

@@ -5,8 +5,6 @@
resource "google_compute_network" "main" { resource "google_compute_network" "main" {
name = "${var.prefix}-network" name = "${var.prefix}-network"
auto_create_subnetworks = false
}
resource "google_compute_subnetwork" "main" {
@@ -22,8 +20,6 @@ resource "google_compute_firewall" "deny_all" {
priority = 1000
source_ranges = ["0.0.0.0/0"]
deny {
protocol = "all"
}
@@ -43,8 +39,6 @@ resource "google_compute_firewall" "allow_internal" {
}
resource "google_compute_firewall" "ssh" {
count = length(var.ssh_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-ssh-firewall" name = "${var.prefix}-ssh-firewall"
network = google_compute_network.main.name network = google_compute_network.main.name
@@ -59,8 +53,6 @@ resource "google_compute_firewall" "ssh" {
}
resource "google_compute_firewall" "api_server" {
count = length(var.api_server_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-api-server-firewall" name = "${var.prefix}-api-server-firewall"
network = google_compute_network.main.name network = google_compute_network.main.name
@@ -75,8 +67,6 @@ resource "google_compute_firewall" "api_server" {
}
resource "google_compute_firewall" "nodeport" {
count = length(var.nodeport_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-nodeport-firewall" name = "${var.prefix}-nodeport-firewall"
network = google_compute_network.main.name network = google_compute_network.main.name
@@ -91,15 +81,11 @@ resource "google_compute_firewall" "nodeport" {
}
resource "google_compute_firewall" "ingress_http" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-http-ingress-firewall" name = "${var.prefix}-http-ingress-firewall"
network = google_compute_network.main.name network = google_compute_network.main.name
priority = 100 priority = 100
source_ranges = var.ingress_whitelist
allow { allow {
protocol = "tcp" protocol = "tcp"
ports = ["80"] ports = ["80"]
@@ -107,15 +93,11 @@ resource "google_compute_firewall" "ingress_http" {
} }
resource "google_compute_firewall" "ingress_https" { resource "google_compute_firewall" "ingress_https" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-https-ingress-firewall" name = "${var.prefix}-https-ingress-firewall"
network = google_compute_network.main.name network = google_compute_network.main.name
priority = 100 priority = 100
source_ranges = var.ingress_whitelist
allow { allow {
protocol = "tcp" protocol = "tcp"
ports = ["443"] ports = ["443"]
@@ -191,7 +173,7 @@ resource "google_compute_disk" "master" {
   }
   name = "${var.prefix}-${each.key}"
-  type = var.master_additional_disk_type
+  type = "pd-ssd"
   zone = each.value.machine.zone
   size = each.value.disk_size
@@ -219,7 +201,7 @@ resource "google_compute_instance" "master" {
   machine_type = each.value.size
   zone         = each.value.zone
-  tags = ["control-plane", "master", each.key]
+  tags = ["master"]
   boot_disk {
     initialize_params {
@@ -247,28 +229,19 @@ resource "google_compute_instance" "master" {
   # Since we use google_compute_attached_disk we need to ignore this
   lifecycle {
-    ignore_changes = [attached_disk]
-  }
-  scheduling {
-    preemptible       = var.master_preemptible
-    automatic_restart = !var.master_preemptible
+    ignore_changes = ["attached_disk"]
   }
 }
 resource "google_compute_forwarding_rule" "master_lb" {
-  count      = length(var.api_server_whitelist) > 0 ? 1 : 0
   name       = "${var.prefix}-master-lb-forward-rule"
   port_range = "6443"
-  target     = google_compute_target_pool.master_lb[count.index].id
+  target     = google_compute_target_pool.master_lb.id
 }
 resource "google_compute_target_pool" "master_lb" {
-  count     = length(var.api_server_whitelist) > 0 ? 1 : 0
   name      = "${var.prefix}-master-lb-pool"
   instances = local.master_target_list
 }
@@ -285,7 +258,7 @@ resource "google_compute_disk" "worker" {
   }
   name = "${var.prefix}-${each.key}"
-  type = var.worker_additional_disk_type
+  type = "pd-ssd"
   zone = each.value.machine.zone
   size = each.value.disk_size
@@ -325,7 +298,7 @@ resource "google_compute_instance" "worker" {
   machine_type = each.value.size
   zone         = each.value.zone
-  tags = ["worker", each.key]
+  tags = ["worker"]
   boot_disk {
     initialize_params {
@@ -353,69 +326,35 @@ resource "google_compute_instance" "worker" {
   # Since we use google_compute_attached_disk we need to ignore this
   lifecycle {
-    ignore_changes = [attached_disk]
-  }
-  scheduling {
-    preemptible       = var.worker_preemptible
-    automatic_restart = !var.worker_preemptible
+    ignore_changes = ["attached_disk"]
   }
 }
 resource "google_compute_address" "worker_lb" {
-  count        = length(var.ingress_whitelist) > 0 ? 1 : 0
   name         = "${var.prefix}-worker-lb-address"
   address_type = "EXTERNAL"
   region       = var.region
 }
 resource "google_compute_forwarding_rule" "worker_http_lb" {
-  count      = length(var.ingress_whitelist) > 0 ? 1 : 0
   name       = "${var.prefix}-worker-http-lb-forward-rule"
-  ip_address = google_compute_address.worker_lb[count.index].address
+  ip_address = google_compute_address.worker_lb.address
   port_range = "80"
-  target     = google_compute_target_pool.worker_lb[count.index].id
+  target     = google_compute_target_pool.worker_lb.id
 }
 resource "google_compute_forwarding_rule" "worker_https_lb" {
-  count      = length(var.ingress_whitelist) > 0 ? 1 : 0
   name       = "${var.prefix}-worker-https-lb-forward-rule"
-  ip_address = google_compute_address.worker_lb[count.index].address
+  ip_address = google_compute_address.worker_lb.address
   port_range = "443"
-  target     = google_compute_target_pool.worker_lb[count.index].id
+  target     = google_compute_target_pool.worker_lb.id
 }
 resource "google_compute_target_pool" "worker_lb" {
-  count     = length(var.ingress_whitelist) > 0 ? 1 : 0
   name      = "${var.prefix}-worker-lb-pool"
   instances = local.worker_target_list
 }
-resource "google_compute_firewall" "extra_ingress_firewall" {
-  for_each = {
-    for name, firewall in var.extra_ingress_firewalls :
-    name => firewall
-  }
-  name          = "${var.prefix}-${each.key}-ingress"
-  network       = google_compute_network.main.name
-  priority      = 100
-  source_ranges = each.value.source_ranges
-  target_tags   = each.value.target_tags
-  allow {
-    protocol = each.value.protocol
-    ports    = each.value.ports
-  }
-}

View File

@@ -19,9 +19,9 @@ output "worker_ip_addresses" {
 }
 output "ingress_controller_lb_ip_address" {
-  value = length(var.ingress_whitelist) > 0 ? google_compute_address.worker_lb.0.address : ""
+  value = google_compute_address.worker_lb.address
 }
 output "control_plane_lb_ip_address" {
-  value = length(var.api_server_whitelist) > 0 ? google_compute_forwarding_rule.master_lb.0.ip_address : ""
+  value = google_compute_forwarding_rule.master_lb.ip_address
 }

View File

@@ -14,7 +14,7 @@ variable "machines" {
   }))
   boot_disk = object({
     image_name = string
     size       = number
   })
 }))
 }
@@ -27,14 +27,6 @@ variable "master_sa_scopes" {
   type = list(string)
 }
-variable "master_preemptible" {
-  type = bool
-}
-variable "master_additional_disk_type" {
-  type = string
-}
 variable "worker_sa_email" {
   type = string
 }
@@ -43,14 +35,6 @@ variable "worker_sa_scopes" {
   type = list(string)
 }
-variable "worker_preemptible" {
-  type = bool
-}
-variable "worker_additional_disk_type" {
-  type = string
-}
 variable "ssh_pub_key" {}
 variable "ssh_whitelist" {
@@ -65,22 +49,6 @@ variable "nodeport_whitelist" {
   type = list(string)
 }
-variable "ingress_whitelist" {
-  type    = list(string)
-  default = ["0.0.0.0/0"]
-}
 variable "private_network_cidr" {
   default = "10.0.10.0/24"
 }
-variable "extra_ingress_firewalls" {
-  type = map(object({
-    source_ranges = set(string)
-    protocol      = string
-    ports         = list(string)
-    target_tags   = set(string)
-  }))
-  default = {}
-}
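For reference, a tfvars entry matching the `extra_ingress_firewalls` schema removed above might look like this sketch (the rule name, CIDR, and port are made-up examples):

```hcl
# Hypothetical extra ingress rule; every concrete value is an example only.
extra_ingress_firewalls = {
  "prometheus-scrape" = {
    source_ranges = ["10.0.0.0/8"] # CIDRs allowed through the firewall
    protocol      = "tcp"
    ports         = ["9090"]
    target_tags   = ["worker"]     # applied to instances carrying this network tag
  }
}
```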

View File

@@ -16,9 +16,6 @@
   "nodeport_whitelist": [
     "1.2.3.4/32"
   ],
-  "ingress_whitelist": [
-    "0.0.0.0/0"
-  ],
   "machines": {
     "master-0": {
@@ -27,7 +24,7 @@
       "zone": "us-central1-a",
       "additional_disks": {},
       "boot_disk": {
-        "image_name": "ubuntu-os-cloud/ubuntu-2004-focal-v20220118",
+        "image_name": "ubuntu-os-cloud/ubuntu-1804-bionic-v20201116",
         "size": 50
       }
     },
@@ -41,7 +38,7 @@
       }
     },
     "boot_disk": {
-      "image_name": "ubuntu-os-cloud/ubuntu-2004-focal-v20220118",
+      "image_name": "ubuntu-os-cloud/ubuntu-1804-bionic-v20201116",
       "size": 50
     }
   },
@@ -55,7 +52,7 @@
       }
     },
     "boot_disk": {
-      "image_name": "ubuntu-os-cloud/ubuntu-2004-focal-v20220118",
+      "image_name": "ubuntu-os-cloud/ubuntu-1804-bionic-v20201116",
       "size": 50
     }
   }

View File

@@ -44,16 +44,6 @@ variable "master_sa_scopes" {
   default = ["https://www.googleapis.com/auth/cloud-platform"]
 }
-variable "master_preemptible" {
-  type    = bool
-  default = false
-}
-variable "master_additional_disk_type" {
-  type    = string
-  default = "pd-ssd"
-}
 variable "worker_sa_email" {
   type    = string
   default = ""
@@ -64,16 +54,6 @@ variable "worker_sa_scopes" {
   default = ["https://www.googleapis.com/auth/cloud-platform"]
 }
-variable "worker_preemptible" {
-  type    = bool
-  default = false
-}
-variable "worker_additional_disk_type" {
-  type    = string
-  default = "pd-ssd"
-}
 variable ssh_pub_key {
   description = "Path to public SSH key file which is injected into the VMs."
   type        = string
@@ -90,19 +70,3 @@ variable api_server_whitelist {
 variable nodeport_whitelist {
   type = list(string)
 }
-variable "ingress_whitelist" {
-  type    = list(string)
-  default = ["0.0.0.0/0"]
-}
-variable "extra_ingress_firewalls" {
-  type = map(object({
-    source_ranges = set(string)
-    protocol      = string
-    ports         = list(string)
-    target_tags   = set(string)
-  }))
-  default = {}
-}

View File

@@ -56,24 +56,11 @@ cd inventory/$CLUSTER
 Edit `default.tfvars` to match your requirements.
-Flatcar Container Linux instead of the basic Hetzner Images.
-```bash
-cd ../../contrib/terraform/hetzner
-```
-Edit `main.tf` and reactivate the module `source = "./modules/kubernetes-cluster-flatcar"` and
-comment out the `#source = "./modules/kubernetes-cluster"`.
-Activate `ssh_private_key_path = var.ssh_private_key_path`. The VM boots into
-Rescue-Mode with the selected image of the `var.machines` but installs Flatcar instead.
 Run Terraform to create the infrastructure.
 ```bash
-cd ./kubespray
-terraform -chdir=./contrib/terraform/hetzner/ init
-terraform -chdir=./contrib/terraform/hetzner/ apply --var-file=../../../inventory/$CLUSTER/default.tfvars
+terraform init ../../contrib/terraform/hetzner
+terraform apply --var-file default.tfvars ../../contrib/terraform/hetzner/
 ```
 You should now have an inventory file named `inventory.ini` that you can use with kubespray.
@@ -110,7 +97,6 @@ terraform destroy --var-file default.tfvars ../../contrib/terraform/hetzner
 * `prefix`: Prefix to add to all resources, if set to "" don't set any prefix
 * `ssh_public_keys`: List of public SSH keys to install on all machines
 * `zone`: The zone where to run the cluster
-* `network_zone`: the network zone where the cluster is running
 * `machines`: Machines to provision. Key of this object will be used as the name of the machine
   * `node_type`: The role of this node *(master|worker)*
   * `size`: Size of the VM
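For orientation, the Flatcar switch described in the removed README lines above boils down to flipping two lines in `main.tf`; a sketch of the edited module block (all other arguments unchanged):

```hcl
module "kubernetes" {
  # default module, commented out when installing Flatcar:
  # source = "./modules/kubernetes-cluster"
  source = "./modules/kubernetes-cluster-flatcar"

  # only needed by the Flatcar module (SSH into the rescue system):
  ssh_private_key_path = var.ssh_private_key_path

  # ... remaining arguments (prefix, zone, machines, whitelists) stay as-is
}
```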

View File

@@ -1,6 +1,6 @@
 prefix = "default"
 zone = "hel1"
-network_zone = "eu-central"
 inventory_file = "inventory.ini"
 ssh_public_keys = [
@@ -9,23 +9,21 @@ ssh_public_keys = [
   "ssh-rsa I-did-not-read-the-docs 2",
 ]
-ssh_private_key_path = "~/.ssh/id_rsa"
 machines = {
   "master-0" : {
     "node_type" : "master",
     "size" : "cx21",
-    "image" : "ubuntu-22.04",
+    "image" : "ubuntu-20.04",
   },
   "worker-0" : {
     "node_type" : "worker",
     "size" : "cx21",
-    "image" : "ubuntu-22.04",
+    "image" : "ubuntu-20.04",
   },
   "worker-1" : {
     "node_type" : "worker",
     "size" : "cx21",
-    "image" : "ubuntu-22.04",
+    "image" : "ubuntu-20.04",
   }
 }

View File

@@ -2,7 +2,6 @@ provider "hcloud" {}
 module "kubernetes" {
   source = "./modules/kubernetes-cluster"
-  # source = "./modules/kubernetes-cluster-flatcar"
   prefix = var.prefix
@@ -10,11 +9,7 @@ module "kubernetes" {
   machines = var.machines
-  #only for flatcar
-  #ssh_private_key_path = var.ssh_private_key_path
   ssh_public_keys = var.ssh_public_keys
-  network_zone = var.network_zone
   ssh_whitelist        = var.ssh_whitelist
   api_server_whitelist = var.api_server_whitelist
@@ -26,32 +21,31 @@ module "kubernetes" {
 # Generate ansible inventory
 #
-locals {
-  inventory = templatefile(
-    "${path.module}/templates/inventory.tpl",
-    {
+data "template_file" "inventory" {
+  template = file("${path.module}/templates/inventory.tpl")
+  vars = {
     connection_strings_master = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s etcd_member_name=etcd%d",
       keys(module.kubernetes.master_ip_addresses),
       values(module.kubernetes.master_ip_addresses).*.public_ip,
      values(module.kubernetes.master_ip_addresses).*.private_ip,
       range(1, length(module.kubernetes.master_ip_addresses) + 1)))
     connection_strings_worker = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s",
       keys(module.kubernetes.worker_ip_addresses),
       values(module.kubernetes.worker_ip_addresses).*.public_ip,
       values(module.kubernetes.worker_ip_addresses).*.private_ip))
-      list_master = join("\n", keys(module.kubernetes.master_ip_addresses))
-      list_worker = join("\n", keys(module.kubernetes.worker_ip_addresses))
-      network_id  = module.kubernetes.network_id
+    list_master = join("\n", keys(module.kubernetes.master_ip_addresses))
+    list_worker = join("\n", keys(module.kubernetes.worker_ip_addresses))
   }
-  )
 }
 resource "null_resource" "inventories" {
   provisioner "local-exec" {
-    command = "echo '${local.inventory}' > ${var.inventory_file}"
+    command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
   }
   triggers = {
-    template = local.inventory
+    template = data.template_file.inventory.rendered
   }
 }

View File

@@ -1,144 +0,0 @@
resource "hcloud_network" "kubernetes" {
name = "${var.prefix}-network"
ip_range = var.private_network_cidr
}
resource "hcloud_network_subnet" "kubernetes" {
type = "cloud"
network_id = hcloud_network.kubernetes.id
network_zone = var.network_zone
ip_range = var.private_subnet_cidr
}
resource "hcloud_ssh_key" "first" {
name = var.prefix
public_key = var.ssh_public_keys.0
}
resource "hcloud_server" "machine" {
for_each = {
for name, machine in var.machines :
name => machine
}
name = "${var.prefix}-${each.key}"
ssh_keys = [hcloud_ssh_key.first.id]
# boot into rescue OS
rescue = "linux64"
# dummy value for the OS because Flatcar is not available
image = each.value.image
server_type = each.value.size
location = var.zone
connection {
host = self.ipv4_address
timeout = "5m"
private_key = file(var.ssh_private_key_path)
}
firewall_ids = each.value.node_type == "master" ? [hcloud_firewall.master.id] : [hcloud_firewall.worker.id]
provisioner "file" {
content = data.ct_config.machine-ignitions[each.key].rendered
destination = "/root/ignition.json"
}
provisioner "remote-exec" {
inline = [
"set -ex",
"apt update",
"apt install -y gawk",
"curl -fsSLO --retry-delay 1 --retry 60 --retry-connrefused --retry-max-time 60 --connect-timeout 20 https://raw.githubusercontent.com/flatcar/init/flatcar-master/bin/flatcar-install",
"chmod +x flatcar-install",
"./flatcar-install -s -i /root/ignition.json -C stable",
"shutdown -r +1",
]
}
# optional:
provisioner "remote-exec" {
connection {
host = self.ipv4_address
private_key = file(var.ssh_private_key_path)
timeout = "3m"
user = var.user_flatcar
}
inline = [
"sudo hostnamectl set-hostname ${self.name}",
]
}
}
resource "hcloud_server_network" "machine" {
for_each = {
for name, machine in var.machines :
name => hcloud_server.machine[name]
}
server_id = each.value.id
subnet_id = hcloud_network_subnet.kubernetes.id
}
data "ct_config" "machine-ignitions" {
for_each = {
for name, machine in var.machines :
name => machine
}
strict = false
content = templatefile(
"${path.module}/templates/machine.yaml.tmpl",
{
ssh_keys = jsonencode(var.ssh_public_keys)
user_flatcar = var.user_flatcar
name = each.key
}
)
}
resource "hcloud_firewall" "master" {
name = "${var.prefix}-master-firewall"
rule {
direction = "in"
protocol = "tcp"
port = "22"
source_ips = var.ssh_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "6443"
source_ips = var.api_server_whitelist
}
}
resource "hcloud_firewall" "worker" {
name = "${var.prefix}-worker-firewall"
rule {
direction = "in"
protocol = "tcp"
port = "22"
source_ips = var.ssh_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "80"
source_ips = var.ingress_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "443"
source_ips = var.ingress_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "30000-32767"
source_ips = var.nodeport_whitelist
}
}

View File

@@ -1,29 +0,0 @@
output "master_ip_addresses" {
value = {
for name, machine in var.machines :
name => {
"private_ip" = hcloud_server_network.machine[name].ip
"public_ip" = hcloud_server.machine[name].ipv4_address
}
if machine.node_type == "master"
}
}
output "worker_ip_addresses" {
value = {
for name, machine in var.machines :
name => {
"private_ip" = hcloud_server_network.machine[name].ip
"public_ip" = hcloud_server.machine[name].ipv4_address
}
if machine.node_type == "worker"
}
}
output "cluster_private_network_cidr" {
value = var.private_subnet_cidr
}
output "network_id" {
value = hcloud_network.kubernetes.id
}

View File

@@ -1,19 +0,0 @@
variant: flatcar
version: 1.0.0
passwd:
users:
- name: ${user_flatcar}
ssh_authorized_keys: ${ssh_keys}
storage:
files:
- path: /home/core/works
filesystem: root
mode: 0755
contents:
inline: |
#!/bin/bash
set -euo pipefail
hostname="$(hostname)"
echo My name is ${name} and the hostname is $${hostname}

View File

@@ -1,60 +0,0 @@
variable "zone" {
type = string
default = "fsn1"
}
variable "prefix" {
default = "k8s"
}
variable "user_flatcar" {
type = string
default = "core"
}
variable "machines" {
type = map(object({
node_type = string
size = string
image = string
}))
}
variable "ssh_public_keys" {
type = list(string)
}
variable "ssh_private_key_path" {
type = string
default = "~/.ssh/id_rsa"
}
variable "ssh_whitelist" {
type = list(string)
}
variable "api_server_whitelist" {
type = list(string)
}
variable "nodeport_whitelist" {
type = list(string)
}
variable "ingress_whitelist" {
type = list(string)
}
variable "private_network_cidr" {
default = "10.0.0.0/16"
}
variable "private_subnet_cidr" {
default = "10.0.10.0/24"
}
variable "network_zone" {
default = "eu-central"
}

View File

@@ -1,14 +0,0 @@
terraform {
required_providers {
hcloud = {
source = "hetznercloud/hcloud"
}
ct = {
source = "poseidon/ct"
version = "0.11.0"
}
null = {
source = "hashicorp/null"
}
}
}

View File

@@ -6,7 +6,7 @@ resource "hcloud_network" "kubernetes" {
 resource "hcloud_network_subnet" "kubernetes" {
   type         = "cloud"
   network_id   = hcloud_network.kubernetes.id
-  network_zone = var.network_zone
+  network_zone = "eu-central"
   ip_range     = var.private_subnet_cidr
 }
@@ -75,17 +75,17 @@ resource "hcloud_firewall" "master" {
   name = "${var.prefix}-master-firewall"
   rule {
     direction  = "in"
     protocol   = "tcp"
     port       = "22"
     source_ips = var.ssh_whitelist
   }
   rule {
     direction  = "in"
     protocol   = "tcp"
     port       = "6443"
     source_ips = var.api_server_whitelist
   }
 }
@@ -93,30 +93,30 @@ resource "hcloud_firewall" "worker" {
   name = "${var.prefix}-worker-firewall"
   rule {
     direction  = "in"
     protocol   = "tcp"
     port       = "22"
     source_ips = var.ssh_whitelist
   }
   rule {
     direction  = "in"
     protocol   = "tcp"
     port       = "80"
     source_ips = var.ingress_whitelist
   }
   rule {
     direction  = "in"
     protocol   = "tcp"
     port       = "443"
     source_ips = var.ingress_whitelist
   }
   rule {
     direction  = "in"
     protocol   = "tcp"
     port       = "30000-32767"
     source_ips = var.nodeport_whitelist
   }
 }

View File

@@ -21,7 +21,3 @@ output "worker_ip_addresses" {
 output "cluster_private_network_cidr" {
   value = var.private_subnet_cidr
 }
-output "network_id" {
-  value = hcloud_network.kubernetes.id
-}

View File

@@ -14,3 +14,4 @@ ssh_authorized_keys:
 %{ for ssh_public_key in ssh_public_keys ~}
   - ${ssh_public_key}
 %{ endfor ~}

View File

@@ -39,6 +39,3 @@ variable "private_network_cidr" {
 variable "private_subnet_cidr" {
   default = "10.0.10.0/24"
 }
-variable "network_zone" {
-  default = "eu-central"
-}

View File

@@ -1,8 +1,8 @@
 terraform {
   required_providers {
     hcloud = {
       source  = "hetznercloud/hcloud"
-      version = "1.38.2"
+      version = "1.31.1"
     }
   }
   required_version = ">= 0.14"

View File

@@ -1,46 +0,0 @@
prefix = "default"
zone = "hel1"
network_zone = "eu-central"
inventory_file = "inventory.ini"
ssh_public_keys = [
# Put your public SSH key here
"ssh-rsa I-did-not-read-the-docs",
"ssh-rsa I-did-not-read-the-docs 2",
]
ssh_private_key_path = "~/.ssh/id_rsa"
machines = {
"master-0" : {
"node_type" : "master",
"size" : "cx21",
"image" : "ubuntu-22.04",
},
"worker-0" : {
"node_type" : "worker",
"size" : "cx21",
"image" : "ubuntu-22.04",
},
"worker-1" : {
"node_type" : "worker",
"size" : "cx21",
"image" : "ubuntu-22.04",
}
}
nodeport_whitelist = [
"0.0.0.0/0"
]
ingress_whitelist = [
"0.0.0.0/0"
]
ssh_whitelist = [
"0.0.0.0/0"
]
api_server_whitelist = [
"0.0.0.0/0"
]

View File

@@ -1 +0,0 @@
../../../../inventory/sample/group_vars

View File

@@ -2,18 +2,15 @@
 ${connection_strings_master}
 ${connection_strings_worker}
-[kube_control_plane]
+[kube-master]
 ${list_master}
 [etcd]
 ${list_master}
-[kube_node]
+[kube-node]
 ${list_worker}
-[k8s_cluster:children]
+[k8s-cluster:children]
 kube-master
 kube-node
-[k8s_cluster:vars]
-network_id=${network_id}

View File

@@ -1,10 +1,6 @@
 variable "zone" {
   description = "The zone where to run the cluster"
 }
-variable "network_zone" {
-  description = "The network zone where the cluster is running"
-  default     = "eu-central"
-}
 variable "prefix" {
   description = "Prefix for resource names"
@@ -25,12 +21,6 @@ variable "ssh_public_keys" {
   type = list(string)
 }
-variable "ssh_private_key_path" {
-  description = "Private SSH key which connect to the VMs."
-  type        = string
-  default     = "~/.ssh/id_rsa"
-}
 variable "ssh_whitelist" {
   description = "List of IP ranges (CIDR) to whitelist for ssh"
   type        = list(string)

View File

@@ -2,11 +2,14 @@ terraform {
   required_providers {
     hcloud = {
       source  = "hetznercloud/hcloud"
-      version = "1.38.2"
+      version = "1.31.1"
     }
     null = {
       source = "hashicorp/null"
     }
+    template = {
+      source = "hashicorp/template"
+    }
   }
   required_version = ">= 0.14"
 }

View File

@@ -17,10 +17,9 @@ most modern installs of OpenStack that support the basic services.
 - [ELASTX](https://elastx.se/)
 - [EnterCloudSuite](https://www.entercloudsuite.com/)
 - [FugaCloud](https://fuga.cloud/)
-- [Open Telekom Cloud](https://cloud.telekom.de/)
+- [Open Telekom Cloud](https://cloud.telekom.de/) : requires to set the variable `wait_for_floatingip = "true"` in your cluster.tfvars
 - [OVH](https://www.ovh.com/)
 - [Rackspace](https://www.rackspace.com/)
-- [Safespring](https://www.safespring.com)
 - [Ultimum](https://ultimum.io/)
 - [VexxHost](https://vexxhost.com/)
 - [Zetta](https://www.zetta.io/)
@@ -88,7 +87,7 @@ binaries available on hyperkube v1.4.3_coreos.0 or higher.
 ## Requirements
-- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html) 0.14 or later
+- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html) 0.12 or later
 - [Install Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html)
 - you already have a suitable OS image in Glance
 - you already have a floating IP pool created
@@ -248,7 +247,6 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
 |`cluster_name` | All OpenStack resources will use the Terraform variable`cluster_name` (default`example`) in their name to make it easier to track. For example the first compute resource will be named`example-kubernetes-1`. |
 |`az_list` | List of Availability Zones available in your OpenStack cluster. |
 |`network_name` | The name to be given to the internal network that will be generated |
-|`use_existing_network`| Use an existing network with the name of `network_name`. `false` by default |
 |`network_dns_domain` | (Optional) The dns_domain for the internal network that will be generated |
 |`dns_nameservers`| An array of DNS name server names to be used by hosts in the internal subnet. |
 |`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
@@ -270,10 +268,10 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
 |`supplementary_node_groups` | To add ansible groups to the nodes, such as `kube_ingress` for running ingress controller pods, empty by default. |
 |`bastion_allowed_remote_ips` | List of CIDR allowed to initiate a SSH connection, `["0.0.0.0/0"]` by default |
 |`master_allowed_remote_ips` | List of CIDR blocks allowed to initiate an API connection, `["0.0.0.0/0"]` by default |
-|`bastion_allowed_ports` | List of ports to open on bastion node, `[]` by default |
 |`k8s_allowed_remote_ips` | List of CIDR allowed to initiate a SSH connection, empty by default |
 |`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |
 |`master_allowed_ports` | List of ports to open on master nodes, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "0.0.0.0/0"}]`, empty by default |
+|`wait_for_floatingip` | Let Terraform poll the instance until the floating IP has been associated, `false` by default. |
 |`node_root_volume_size_in_gb` | Size of the root volume for nodes, 0 to use ephemeral storage |
 |`master_root_volume_size_in_gb` | Size of the root volume for masters, 0 to use ephemeral storage |
 |`master_volume_type` | Volume type of the root volume for control_plane, 'Default' by default |
@@ -284,41 +282,15 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
 |`master_server_group_policy` | Enable and use openstack nova servergroups for masters with set policy, default: "" (disabled) |
 |`node_server_group_policy` | Enable and use openstack nova servergroups for nodes with set policy, default: "" (disabled) |
 |`etcd_server_group_policy` | Enable and use openstack nova servergroups for etcd with set policy, default: "" (disabled) |
-|`additional_server_groups` | Extra server groups to create. Set "policy" to the policy for the group, expected format is `{"new-server-group" = {"policy" = "anti-affinity"}}`, default: {} (to not create any extra groups) |
 |`use_access_ip` | If 1, nodes with floating IPs will transmit internal cluster traffic via floating IPs; if 0 private IPs will be used instead. Default value is 1. |
-|`port_security_enabled` | Allow to disable port security by setting this to `false`. `true` by default |
-|`force_null_port_security` | Set `null` instead of `true` or `false` for `port_security`. `false` by default |
 |`k8s_nodes` | Map containing worker node definition, see explanation below |
-|`k8s_masters` | Map containing master node definition, see explanation for k8s_nodes and `sample-inventory/cluster.tfvars` |
 ##### k8s_nodes
-Allows a custom definition of worker nodes giving the operator full control over individual node flavor and availability zone placement.
-To enable the use of this mode set the `number_of_k8s_nodes` and `number_of_k8s_nodes_no_floating_ip` variables to 0.
-Then define your desired worker node configuration using the `k8s_nodes` variable.
-The `az`, `flavor` and `floating_ip` parameters are mandatory.
-The optional parameter `extra_groups` (a comma-delimited string) can be used to define extra inventory group memberships for specific nodes.
+Allows a custom definition of worker nodes giving the operator full control over individual node flavor and
+availability zone placement. To enable the use of this mode set the `number_of_k8s_nodes` and
+`number_of_k8s_nodes_no_floating_ip` variables to 0. Then define your desired worker node configuration
+using the `k8s_nodes` variable.
-```yaml
-k8s_nodes:
-  node-name:
-    az: string                     # Name of the AZ
-    flavor: string                 # Flavor ID to use
-    floating_ip: bool              # If floating IPs should be created or not
-    extra_groups: string           # (optional) Additional groups to add for kubespray, defaults to no groups
-    image_id: string               # (optional) Image ID to use, defaults to var.image_id or var.image
-    root_volume_size_in_gb: number # (optional) Size of the block storage to use as root disk, defaults to var.node_root_volume_size_in_gb or to use volume from flavor otherwise
-    volume_type: string            # (optional) Volume type to use, defaults to var.node_volume_type
-    network_id: string             # (optional) Use this network_id for the node, defaults to either var.network_id or ID of var.network_name
-    server_group: string           # (optional) Server group to add this node to. If set, this has to be one specified in additional_server_groups, defaults to use the server group specified in node_server_group_policy
-    cloudinit:                     # (optional) Options for cloud-init
-      extra_partitions:            # List of extra partitions (other than the root partition) to setup during creation
-        volume_path: string        # Path to the volume to create partition for (e.g. /dev/vda )
-        partition_path: string     # Path to the partition (e.g. /dev/vda2 )
-        mount_path: string         # Path to where the partition should be mounted
-        partition_start: string    # Where the partition should start (e.g. 10GB ). Note, if you set the partition_start to 0 there will be no space left for the root partition
-        partition_end: string      # Where the partition should end (e.g. 10GB or -1 for end of volume)
-```
 For example:
@@ -338,7 +310,6 @@ k8s_nodes = {
     "az"          = "sto3"
     "flavor"      = "83d8b44a-26a0-4f02-a981-079446926445"
     "floating_ip" = true
-    "extra_groups" = "calico_rr"
   }
 }
 ```
@@ -440,39 +411,18 @@ plugins. This is accomplished as follows:
 ```ShellSession
 cd inventory/$CLUSTER
-terraform -chdir="../../contrib/terraform/openstack" init
+terraform init ../../contrib/terraform/openstack
 ```
 This should finish fairly quickly telling you Terraform has successfully initialized and loaded necessary modules.
-### Customizing with cloud-init
-You can apply cloud-init based customization for the openstack instances before provisioning your cluster.
-One common template is used for all instances. Adjust the file shown below:
-`contrib/terraform/openstack/modules/compute/templates/cloudinit.yaml.tmpl`
-For example, to enable openstack novnc access and ansible_user=root SSH access:
-```ShellSession
-#cloud-config
-## in some cases novnc console access is required
-## it requires ssh password to be set
-ssh_pwauth: yes
-chpasswd:
-  list: |
-    root:secret
-  expire: False
-## in some cases direct root ssh access via ssh key is required
-disable_root: false
-```
 ### Provisioning cluster
 You can apply the Terraform configuration to your cluster with the following command
 issued from your cluster's inventory directory (`inventory/$CLUSTER`):
 ```ShellSession
-terraform -chdir="../../contrib/terraform/openstack" apply -var-file=cluster.tfvars
+terraform apply -var-file=cluster.tfvars ../../contrib/terraform/openstack
 ```
 if you chose to create a bastion host, this script will create
@@ -487,7 +437,7 @@ pick it up automatically.
 You can destroy your new cluster with the following command issued from the cluster's inventory directory:
 ```ShellSession
-terraform -chdir="../../contrib/terraform/openstack" destroy -var-file=cluster.tfvars
+terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/openstack
 ```
 If you've started the Ansible run, it may also be a good idea to do some manual cleanup:

View File

@@ -1,15 +1,14 @@
 module "network" {
   source = "./modules/network"
   external_net       = var.external_net
   network_name       = var.network_name
   subnet_cidr        = var.subnet_cidr
   cluster_name       = var.cluster_name
   dns_nameservers    = var.dns_nameservers
   network_dns_domain = var.network_dns_domain
   use_neutron        = var.use_neutron
-  port_security_enabled = var.port_security_enabled
-  router_id             = var.router_id
+  router_id          = var.router_id
 }
 module "ips" {
@@ -24,7 +23,6 @@ module "ips" {
   network_name = var.network_name
   router_id    = module.network.router_id
   k8s_nodes    = var.k8s_nodes
-  k8s_masters  = var.k8s_masters
   k8s_master_fips = var.k8s_master_fips
   bastion_fips = var.bastion_fips
   router_internal_port_id = module.network.router_internal_port_id
@@ -45,7 +43,6 @@ module "compute" {
   number_of_bastions = var.number_of_bastions
   number_of_k8s_nodes_no_floating_ip = var.number_of_k8s_nodes_no_floating_ip
   number_of_gfs_nodes_no_floating_ip = var.number_of_gfs_nodes_no_floating_ip
-  k8s_masters = var.k8s_masters
   k8s_nodes   = var.k8s_nodes
   bastion_root_volume_size_in_gb = var.bastion_root_volume_size_in_gb
   etcd_root_volume_size_in_gb    = var.etcd_root_volume_size_in_gb
@@ -72,7 +69,6 @@ module "compute" {
   flavor_bastion = var.flavor_bastion
   k8s_master_fips = module.ips.k8s_master_fips
   k8s_master_no_etcd_fips = module.ips.k8s_master_no_etcd_fips
-  k8s_masters_fips = module.ips.k8s_masters_fips
   k8s_node_fips = module.ips.k8s_node_fips
   k8s_nodes_fips = module.ips.k8s_nodes_fips
   bastion_fips = module.ips.bastion_fips
@@ -84,7 +80,7 @@ module "compute" {
   supplementary_node_groups = var.supplementary_node_groups
   master_allowed_ports = var.master_allowed_ports
   worker_allowed_ports = var.worker_allowed_ports
-  bastion_allowed_ports = var.bastion_allowed_ports
+  wait_for_floatingip = var.wait_for_floatingip
   use_access_ip = var.use_access_ip
   master_server_group_policy = var.master_server_group_policy
   node_server_group_policy = var.node_server_group_policy
@@ -92,17 +88,8 @@ module "compute" {
   extra_sec_groups = var.extra_sec_groups
   extra_sec_groups_name = var.extra_sec_groups_name
   group_vars_path = var.group_vars_path
-  port_security_enabled = var.port_security_enabled
-  force_null_port_security = var.force_null_port_security
-  network_router_id = module.network.router_id
-  network_id = module.network.network_id
-  use_existing_network = var.use_existing_network
-  private_subnet_id = module.network.subnet_id
-  additional_server_groups = var.additional_server_groups
-  depends_on = [
-    module.network.subnet_id
-  ]
+  network_id = module.network.router_id
 }
 output "private_subnet_id" {
@@ -118,7 +105,7 @@ output "router_id" {
 }
 output "k8s_master_fips" {
-  value = var.number_of_k8s_masters + var.number_of_k8s_masters_no_etcd > 0 ? concat(module.ips.k8s_master_fips, module.ips.k8s_master_no_etcd_fips) : [for key, value in module.ips.k8s_masters_fips : value.address]
+  value = concat(module.ips.k8s_master_fips, module.ips.k8s_master_no_etcd_fips)
 }
 output "k8s_node_fips" {
View File

@@ -15,21 +15,6 @@ data "openstack_images_image_v2" "image_master" {
   name = var.image_master == "" ? var.image : var.image_master
 }
-data "cloudinit_config" "cloudinit" {
-  part {
-    content_type = "text/cloud-config"
-    content = templatefile("${path.module}/templates/cloudinit.yaml.tmpl", {
-      # template_file doesn't support lists
-      extra_partitions = ""
-    })
-  }
-}
-data "openstack_networking_network_v2" "k8s_network" {
-  count = var.use_existing_network ? 1 : 0
-  name  = var.network_name
-}
 resource "openstack_compute_keypair_v2" "k8s" {
   name       = "kubernetes-${var.cluster_name}"
   public_key = chomp(file(var.public_key_path))
@@ -88,17 +73,6 @@ resource "openstack_networking_secgroup_rule_v2" "bastion" {
   security_group_id = openstack_networking_secgroup_v2.bastion[0].id
 }
-resource "openstack_networking_secgroup_rule_v2" "k8s_bastion_ports" {
-  count             = length(var.bastion_allowed_ports)
-  direction         = "ingress"
-  ethertype         = "IPv4"
-  protocol          = lookup(var.bastion_allowed_ports[count.index], "protocol", "tcp")
-  port_range_min    = lookup(var.bastion_allowed_ports[count.index], "port_range_min")
-  port_range_max    = lookup(var.bastion_allowed_ports[count.index], "port_range_max")
-  remote_ip_prefix  = lookup(var.bastion_allowed_ports[count.index], "remote_ip_prefix", "0.0.0.0/0")
-  security_group_id = openstack_networking_secgroup_v2.bastion[0].id
-}
 resource "openstack_networking_secgroup_v2" "k8s" {
   name        = "${var.cluster_name}-k8s"
   description = "${var.cluster_name} - Kubernetes"
@@ -173,34 +147,19 @@ resource "openstack_compute_servergroup_v2" "k8s_etcd" {
   policies = [var.etcd_server_group_policy]
 }
-resource "openstack_compute_servergroup_v2" "k8s_node_additional" {
-  for_each = var.additional_server_groups
-  name     = "k8s-${each.key}-srvgrp"
-  policies = [each.value.policy]
-}
 locals {
   # master groups
   master_sec_groups = compact([
-    openstack_networking_secgroup_v2.k8s_master.id,
-    openstack_networking_secgroup_v2.k8s.id,
-    var.extra_sec_groups ? openstack_networking_secgroup_v2.k8s_master_extra[0].id : "",
+    openstack_networking_secgroup_v2.k8s_master.name,
+    openstack_networking_secgroup_v2.k8s.name,
+    var.extra_sec_groups ? openstack_networking_secgroup_v2.k8s_master_extra[0].name : "",
   ])
   # worker groups
   worker_sec_groups = compact([
-    openstack_networking_secgroup_v2.k8s.id,
-    openstack_networking_secgroup_v2.worker.id,
-    var.extra_sec_groups ? openstack_networking_secgroup_v2.worker_extra[0].id : "",
+    openstack_networking_secgroup_v2.k8s.name,
+    openstack_networking_secgroup_v2.worker.name,
+    var.extra_sec_groups ? openstack_networking_secgroup_v2.worker_extra[0].name : "",
   ])
-  # bastion groups
-  bastion_sec_groups = compact(concat([
-    openstack_networking_secgroup_v2.k8s.id,
-    openstack_networking_secgroup_v2.bastion[0].id,
-  ]))
-  # etcd groups
-  etcd_sec_groups = compact([openstack_networking_secgroup_v2.k8s.id])
-  # glusterfs groups
-  gfs_sec_groups = compact([openstack_networking_secgroup_v2.k8s.id])
   # Image uuid
   image_to_use_node = var.image_uuid != "" ? var.image_uuid : data.openstack_images_image_v2.vm_image[0].id
@@ -208,49 +167,6 @@ locals {
   image_to_use_gfs = var.image_gfs_uuid != "" ? var.image_gfs_uuid : var.image_uuid != "" ? var.image_uuid : data.openstack_images_image_v2.gfs_image[0].id
   # image_master uuidimage_gfs_uuid
   image_to_use_master = var.image_master_uuid != "" ? var.image_master_uuid : var.image_uuid != "" ? var.image_uuid : data.openstack_images_image_v2.image_master[0].id
-  k8s_nodes_settings = {
-    for name, node in var.k8s_nodes :
-    name => {
-      "use_local_disk" = (node.root_volume_size_in_gb != null ? node.root_volume_size_in_gb : var.node_root_volume_size_in_gb) == 0,
-      "image_id"       = node.image_id != null ? node.image_id : local.image_to_use_node,
-      "volume_size"    = node.root_volume_size_in_gb != null ? node.root_volume_size_in_gb : var.node_root_volume_size_in_gb,
-      "volume_type"    = node.volume_type != null ? node.volume_type : var.node_volume_type,
-      "network_id"     = node.network_id != null ? node.network_id : (var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id)
-      "server_group"   = node.server_group != null ? [openstack_compute_servergroup_v2.k8s_node_additional[node.server_group].id] : (var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0].id] : [])
-    }
-  }
-  k8s_masters_settings = {
-    for name, node in var.k8s_masters :
-    name => {
-      "use_local_disk" = (node.root_volume_size_in_gb != null ? node.root_volume_size_in_gb : var.master_root_volume_size_in_gb) == 0,
-      "image_id"       = node.image_id != null ? node.image_id : local.image_to_use_master,
-      "volume_size"    = node.root_volume_size_in_gb != null ? node.root_volume_size_in_gb : var.master_root_volume_size_in_gb,
-      "volume_type"    = node.volume_type != null ? node.volume_type : var.master_volume_type,
-      "network_id"     = node.network_id != null ? node.network_id : (var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id)
-    }
-  }
-}
-resource "openstack_networking_port_v2" "bastion_port" {
-  count                 = var.number_of_bastions
-  name                  = "${var.cluster_name}-bastion-${count.index + 1}"
-  network_id            = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
-  admin_state_up        = "true"
-  port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
-  security_group_ids    = var.port_security_enabled ? local.bastion_sec_groups : null
-  no_security_groups    = var.port_security_enabled ? null : false
-  dynamic "fixed_ip" {
-    for_each = var.private_subnet_id == "" ? [] : [true]
-    content {
-      subnet_id = var.private_subnet_id
-    }
-  }
-  depends_on = [
-    var.network_router_id
-  ]
 }
 resource "openstack_compute_instance_v2" "bastion" {
@@ -259,7 +175,6 @@ resource "openstack_compute_instance_v2" "bastion" {
   image_id  = var.bastion_root_volume_size_in_gb == 0 ? local.image_to_use_node : null
   flavor_id = var.flavor_bastion
   key_pair  = openstack_compute_keypair_v2.k8s.name
-  user_data = data.cloudinit_config.cloudinit.rendered
   dynamic "block_device" {
     for_each = var.bastion_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : []
@@ -274,41 +189,25 @@ resource "openstack_compute_instance_v2" "bastion" {
   }
   network {
-    port = element(openstack_networking_port_v2.bastion_port.*.id, count.index)
+    name = var.network_name
   }
+  security_groups = [openstack_networking_secgroup_v2.k8s.name,
+    element(openstack_networking_secgroup_v2.bastion.*.name, count.index),
+  ]
   metadata = {
     ssh_user         = var.ssh_user
     kubespray_groups = "bastion"
-    depends_on       = var.network_router_id
+    depends_on       = var.network_id
     use_access_ip    = var.use_access_ip
   }
   provisioner "local-exec" {
-    command = "sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${var.bastion_fips[0]}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml"
+    command = "sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${var.bastion_fips[0]}/ > ${var.group_vars_path}/no_floating.yml"
   }
 }
-resource "openstack_networking_port_v2" "k8s_master_port" {
-  count                 = var.number_of_k8s_masters
-  name                  = "${var.cluster_name}-k8s-master-${count.index + 1}"
-  network_id            = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
-  admin_state_up        = "true"
-  port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
-  security_group_ids    = var.port_security_enabled ? local.master_sec_groups : null
-  no_security_groups    = var.port_security_enabled ? null : false
-  dynamic "fixed_ip" {
-    for_each = var.private_subnet_id == "" ? [] : [true]
-    content {
-      subnet_id = var.private_subnet_id
-    }
-  }
-  depends_on = [
-    var.network_router_id
-  ]
-}
 resource "openstack_compute_instance_v2" "k8s_master" {
   name  = "${var.cluster_name}-k8s-master-${count.index + 1}"
   count = var.number_of_k8s_masters
@@ -316,7 +215,6 @@ resource "openstack_compute_instance_v2" "k8s_master" {
   image_id  = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
   flavor_id = var.flavor_k8s_master
   key_pair  = openstack_compute_keypair_v2.k8s.name
-  user_data = data.cloudinit_config.cloudinit.rendered
   dynamic "block_device" {
@@ -333,9 +231,11 @@ resource "openstack_compute_instance_v2" "k8s_master" {
   }
   network {
-    port = element(openstack_networking_port_v2.k8s_master_port.*.id, count.index)
+    name = var.network_name
   }
+  security_groups = local.master_sec_groups
   dynamic "scheduler_hints" {
     for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
     content {
@@ -346,99 +246,15 @@ resource "openstack_compute_instance_v2" "k8s_master" {
metadata = { metadata = {
ssh_user = var.ssh_user ssh_user = var.ssh_user
kubespray_groups = "etcd,kube_control_plane,${var.supplementary_master_groups},k8s_cluster" kubespray_groups = "etcd,kube_control_plane,${var.supplementary_master_groups},k8s_cluster"
depends_on = var.network_router_id depends_on = var.network_id
use_access_ip = var.use_access_ip use_access_ip = var.use_access_ip
} }
provisioner "local-exec" { provisioner "local-exec" {
command = "sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_master_fips), 0)}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml" command = "sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > ${var.group_vars_path}/no_floating.yml"
} }
} }
resource "openstack_networking_port_v2" "k8s_masters_port" {
for_each = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 && var.number_of_k8s_masters_no_floating_ip == 0 && var.number_of_k8s_masters_no_floating_ip_no_etcd == 0 ? var.k8s_masters : {}
name = "${var.cluster_name}-k8s-${each.key}"
network_id = local.k8s_masters_settings[each.key].network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.master_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
dynamic "fixed_ip" {
for_each = var.private_subnet_id == "" ? [] : [true]
content {
subnet_id = var.private_subnet_id
}
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_masters" {
for_each = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 && var.number_of_k8s_masters_no_floating_ip == 0 && var.number_of_k8s_masters_no_floating_ip_no_etcd == 0 ? var.k8s_masters : {}
name = "${var.cluster_name}-k8s-${each.key}"
availability_zone = each.value.az
image_id = local.k8s_masters_settings[each.key].use_local_disk ? local.k8s_masters_settings[each.key].image_id : null
flavor_id = each.value.flavor
key_pair = openstack_compute_keypair_v2.k8s.name
dynamic "block_device" {
for_each = !local.k8s_masters_settings[each.key].use_local_disk ? [local.k8s_masters_settings[each.key].image_id] : []
content {
uuid = block_device.value
source_type = "image"
volume_size = local.k8s_masters_settings[each.key].volume_size
volume_type = local.k8s_masters_settings[each.key].volume_type
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
}
network {
port = openstack_networking_port_v2.k8s_masters_port[each.key].id
}
dynamic "scheduler_hints" {
for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_master[0].id
}
}
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "%{if each.value.etcd == true}etcd,%{endif}kube_control_plane,${var.supplementary_master_groups},k8s_cluster%{if each.value.floating_ip == false},no_floating%{endif}"
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "%{if each.value.floating_ip}sed s/USER/${var.ssh_user}/ ${path.module}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, [for key, value in var.k8s_masters_fips : value.address]), 0)}/ > ${var.group_vars_path}/no_floating.yml%{else}true%{endif}"
}
}
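The kubespray_groups value in this resource is built with Terraform string-template directives: %{if ...}...%{endif} splices group names in or out per entry in the k8s_masters map. A self-contained illustration of the syntax; the variable and output names are made up for the example:

variable "etcd" {
  type    = bool
  default = true
}

variable "floating_ip" {
  type    = bool
  default = false
}

output "example_groups" {
  # "etcd,kube_control_plane,k8s_cluster,no_floating" with these defaults
  value = "%{if var.etcd}etcd,%{endif}kube_control_plane,k8s_cluster%{if !var.floating_ip},no_floating%{endif}"
}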
resource "openstack_networking_port_v2" "k8s_master_no_etcd_port" {
count = var.number_of_k8s_masters_no_etcd
name = "${var.cluster_name}-k8s-master-ne-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.master_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
dynamic "fixed_ip" {
for_each = var.private_subnet_id == "" ? [] : [true]
content {
subnet_id = var.private_subnet_id
}
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_master_no_etcd" { resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-${count.index + 1}" name = "${var.cluster_name}-k8s-master-ne-${count.index + 1}"
count = var.number_of_k8s_masters_no_etcd count = var.number_of_k8s_masters_no_etcd
@@ -446,7 +262,6 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
image_id = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null image_id = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
flavor_id = var.flavor_k8s_master flavor_id = var.flavor_k8s_master
key_pair = openstack_compute_keypair_v2.k8s.name key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.cloudinit_config.cloudinit.rendered
dynamic "block_device" { dynamic "block_device" {
@@ -463,9 +278,11 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
} }
network { network {
port = element(openstack_networking_port_v2.k8s_master_no_etcd_port.*.id, count.index) name = var.network_name
} }
security_groups = local.master_sec_groups
dynamic "scheduler_hints" { dynamic "scheduler_hints" {
for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : [] for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content { content {
@@ -476,35 +293,15 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
metadata = { metadata = {
ssh_user = var.ssh_user ssh_user = var.ssh_user
kubespray_groups = "kube_control_plane,${var.supplementary_master_groups},k8s_cluster" kubespray_groups = "kube_control_plane,${var.supplementary_master_groups},k8s_cluster"
depends_on = var.network_router_id depends_on = var.network_id
use_access_ip = var.use_access_ip use_access_ip = var.use_access_ip
} }
provisioner "local-exec" { provisioner "local-exec" {
command = "sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_master_fips), 0)}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml" command = "sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > ${var.group_vars_path}/no_floating.yml"
} }
} }
resource "openstack_networking_port_v2" "etcd_port" {
count = var.number_of_etcd
name = "${var.cluster_name}-etcd-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.etcd_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
dynamic "fixed_ip" {
for_each = var.private_subnet_id == "" ? [] : [true]
content {
subnet_id = var.private_subnet_id
}
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "etcd" { resource "openstack_compute_instance_v2" "etcd" {
name = "${var.cluster_name}-etcd-${count.index + 1}" name = "${var.cluster_name}-etcd-${count.index + 1}"
count = var.number_of_etcd count = var.number_of_etcd
@@ -512,7 +309,6 @@ resource "openstack_compute_instance_v2" "etcd" {
image_id = var.etcd_root_volume_size_in_gb == 0 ? local.image_to_use_master : null image_id = var.etcd_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
flavor_id = var.flavor_etcd flavor_id = var.flavor_etcd
key_pair = openstack_compute_keypair_v2.k8s.name key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.cloudinit_config.cloudinit.rendered
dynamic "block_device" { dynamic "block_device" {
for_each = var.etcd_root_volume_size_in_gb > 0 ? [local.image_to_use_master] : [] for_each = var.etcd_root_volume_size_in_gb > 0 ? [local.image_to_use_master] : []
@@ -527,11 +323,13 @@ resource "openstack_compute_instance_v2" "etcd" {
} }
network { network {
port = element(openstack_networking_port_v2.etcd_port.*.id, count.index) name = var.network_name
} }
security_groups = [openstack_networking_secgroup_v2.k8s.name]
dynamic "scheduler_hints" { dynamic "scheduler_hints" {
for_each = var.etcd_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_etcd[0]] : [] for_each = var.etcd_server_group_policy ? [openstack_compute_servergroup_v2.k8s_etcd[0]] : []
content { content {
group = openstack_compute_servergroup_v2.k8s_etcd[0].id group = openstack_compute_servergroup_v2.k8s_etcd[0].id
} }
@@ -540,31 +338,11 @@ resource "openstack_compute_instance_v2" "etcd" {
metadata = { metadata = {
ssh_user = var.ssh_user ssh_user = var.ssh_user
kubespray_groups = "etcd,no_floating" kubespray_groups = "etcd,no_floating"
depends_on = var.network_router_id depends_on = var.network_id
use_access_ip = var.use_access_ip use_access_ip = var.use_access_ip
} }
} }
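Both columns gate the scheduler hint the same way: a dynamic block iterates over a one-element list when a server-group policy is set and over an empty list otherwise, so the hint disappears entirely when unused. A condensed sketch; the example_* labels are not from the module and required instance arguments such as the image are omitted:

# Sketch: optional server group plus a conditional scheduler hint.
resource "openstack_compute_servergroup_v2" "example_etcd_group" {
  count    = var.etcd_server_group_policy != "" ? 1 : 0
  name     = "example-etcd-srvgrp"
  policies = [var.etcd_server_group_policy]
}

resource "openstack_compute_instance_v2" "example_etcd" {
  name      = "example-etcd"
  flavor_id = var.flavor_etcd
  dynamic "scheduler_hints" {
    # one iteration when a policy is set, zero otherwise
    for_each = var.etcd_server_group_policy != "" ? [openstack_compute_servergroup_v2.example_etcd_group[0]] : []
    content {
      group = scheduler_hints.value.id
    }
  }
}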
resource "openstack_networking_port_v2" "k8s_master_no_floating_ip_port" {
count = var.number_of_k8s_masters_no_floating_ip
name = "${var.cluster_name}-k8s-master-nf-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.master_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
dynamic "fixed_ip" {
for_each = var.private_subnet_id == "" ? [] : [true]
content {
subnet_id = var.private_subnet_id
}
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" { resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
name = "${var.cluster_name}-k8s-master-nf-${count.index + 1}" name = "${var.cluster_name}-k8s-master-nf-${count.index + 1}"
count = var.number_of_k8s_masters_no_floating_ip count = var.number_of_k8s_masters_no_floating_ip
@@ -587,9 +365,11 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
} }
network { network {
port = element(openstack_networking_port_v2.k8s_master_no_floating_ip_port.*.id, count.index) name = var.network_name
} }
security_groups = local.master_sec_groups
dynamic "scheduler_hints" { dynamic "scheduler_hints" {
for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : [] for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content { content {
@@ -600,31 +380,11 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
metadata = { metadata = {
ssh_user = var.ssh_user ssh_user = var.ssh_user
kubespray_groups = "etcd,kube_control_plane,${var.supplementary_master_groups},k8s_cluster,no_floating" kubespray_groups = "etcd,kube_control_plane,${var.supplementary_master_groups},k8s_cluster,no_floating"
depends_on = var.network_router_id depends_on = var.network_id
use_access_ip = var.use_access_ip use_access_ip = var.use_access_ip
} }
} }
resource "openstack_networking_port_v2" "k8s_master_no_floating_ip_no_etcd_port" {
count = var.number_of_k8s_masters_no_floating_ip_no_etcd
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.master_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
dynamic "fixed_ip" {
for_each = var.private_subnet_id == "" ? [] : [true]
content {
subnet_id = var.private_subnet_id
}
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" { resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index + 1}" name = "${var.cluster_name}-k8s-master-ne-nf-${count.index + 1}"
count = var.number_of_k8s_masters_no_floating_ip_no_etcd count = var.number_of_k8s_masters_no_floating_ip_no_etcd
@@ -632,7 +392,6 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
image_id = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null image_id = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
flavor_id = var.flavor_k8s_master flavor_id = var.flavor_k8s_master
key_pair = openstack_compute_keypair_v2.k8s.name key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.cloudinit_config.cloudinit.rendered
dynamic "block_device" { dynamic "block_device" {
for_each = var.master_root_volume_size_in_gb > 0 ? [local.image_to_use_master] : [] for_each = var.master_root_volume_size_in_gb > 0 ? [local.image_to_use_master] : []
@@ -648,9 +407,11 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
} }
network { network {
port = element(openstack_networking_port_v2.k8s_master_no_floating_ip_no_etcd_port.*.id, count.index) name = var.network_name
} }
security_groups = local.master_sec_groups
dynamic "scheduler_hints" { dynamic "scheduler_hints" {
for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : [] for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content { content {
@@ -661,31 +422,11 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
metadata = { metadata = {
ssh_user = var.ssh_user ssh_user = var.ssh_user
kubespray_groups = "kube_control_plane,${var.supplementary_master_groups},k8s_cluster,no_floating" kubespray_groups = "kube_control_plane,${var.supplementary_master_groups},k8s_cluster,no_floating"
depends_on = var.network_router_id depends_on = var.network_id
use_access_ip = var.use_access_ip use_access_ip = var.use_access_ip
} }
} }
resource "openstack_networking_port_v2" "k8s_node_port" {
count = var.number_of_k8s_nodes
name = "${var.cluster_name}-k8s-node-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.worker_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
dynamic "fixed_ip" {
for_each = var.private_subnet_id == "" ? [] : [true]
content {
subnet_id = var.private_subnet_id
}
}
depends_on = [
var.network_router_id
]
}
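All of the removed port resources share one idiom worth spelling out: when port security is disabled, security_group_ids must be null rather than an empty list, with no_security_groups taking over, and force_null_port_security nulls out port_security_enabled entirely for clouds whose Neutron lacks the port-security extension. A sketch of just that triple, under the same variable names as the diff:

# Sketch: mutually exclusive port-security attributes.
resource "openstack_networking_port_v2" "example" {
  name       = "example"
  network_id = var.network_id

  # omit the attribute on clouds without the port-security extension
  port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled

  # groups only while port security is on; otherwise explicitly none
  security_group_ids = var.port_security_enabled ? local.worker_sec_groups : null
  no_security_groups = var.port_security_enabled ? null : false
}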
resource "openstack_compute_instance_v2" "k8s_node" { resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.cluster_name}-k8s-node-${count.index + 1}" name = "${var.cluster_name}-k8s-node-${count.index + 1}"
count = var.number_of_k8s_nodes count = var.number_of_k8s_nodes
@@ -693,7 +434,6 @@ resource "openstack_compute_instance_v2" "k8s_node" {
image_id = var.node_root_volume_size_in_gb == 0 ? local.image_to_use_node : null image_id = var.node_root_volume_size_in_gb == 0 ? local.image_to_use_node : null
flavor_id = var.flavor_k8s_node flavor_id = var.flavor_k8s_node
key_pair = openstack_compute_keypair_v2.k8s.name key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.cloudinit_config.cloudinit.rendered
dynamic "block_device" { dynamic "block_device" {
for_each = var.node_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : [] for_each = var.node_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : []
@@ -709,9 +449,10 @@ resource "openstack_compute_instance_v2" "k8s_node" {
} }
network { network {
port = element(openstack_networking_port_v2.k8s_node_port.*.id, count.index) name = var.network_name
} }
security_groups = local.worker_sec_groups
dynamic "scheduler_hints" { dynamic "scheduler_hints" {
for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0]] : [] for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
@@ -723,35 +464,15 @@ resource "openstack_compute_instance_v2" "k8s_node" {
metadata = { metadata = {
ssh_user = var.ssh_user ssh_user = var.ssh_user
kubespray_groups = "kube_node,k8s_cluster,${var.supplementary_node_groups}" kubespray_groups = "kube_node,k8s_cluster,${var.supplementary_node_groups}"
depends_on = var.network_router_id depends_on = var.network_id
use_access_ip = var.use_access_ip use_access_ip = var.use_access_ip
} }
provisioner "local-exec" { provisioner "local-exec" {
command = "sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_node_fips), 0)}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml" command = "sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_node_fips), 0)}/ > ${var.group_vars_path}/no_floating.yml"
} }
} }
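The BASTION_ADDRESS substitution in both columns relies on element(concat(var.bastion_fips, var.k8s_node_fips), 0): bastion floating IPs are concatenated ahead of node floating IPs and the first element wins, so the bastion address is used when one exists and the first node floating IP is the fallback. A minimal illustration with example values:

# Sketch: prefer the bastion fip, fall back to the first node fip.
locals {
  example_bastion_fips = []                 # no bastion deployed
  example_node_fips    = ["203.0.113.10"]   # RFC 5737 documentation address
  example_bastion_addr = element(concat(local.example_bastion_fips, local.example_node_fips), 0)
  # => "203.0.113.10"; with a bastion fip present, that would be chosen instead
}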
resource "openstack_networking_port_v2" "k8s_node_no_floating_ip_port" {
count = var.number_of_k8s_nodes_no_floating_ip
name = "${var.cluster_name}-k8s-node-nf-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.worker_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
dynamic "fixed_ip" {
for_each = var.private_subnet_id == "" ? [] : [true]
content {
subnet_id = var.private_subnet_id
}
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" { resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
name = "${var.cluster_name}-k8s-node-nf-${count.index + 1}" name = "${var.cluster_name}-k8s-node-nf-${count.index + 1}"
count = var.number_of_k8s_nodes_no_floating_ip count = var.number_of_k8s_nodes_no_floating_ip
@@ -759,7 +480,6 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
image_id = var.node_root_volume_size_in_gb == 0 ? local.image_to_use_node : null image_id = var.node_root_volume_size_in_gb == 0 ? local.image_to_use_node : null
flavor_id = var.flavor_k8s_node flavor_id = var.flavor_k8s_node
key_pair = openstack_compute_keypair_v2.k8s.name key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.cloudinit_config.cloudinit.rendered
dynamic "block_device" { dynamic "block_device" {
for_each = var.node_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : [] for_each = var.node_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : []
@@ -775,62 +495,41 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
} }
network { network {
port = element(openstack_networking_port_v2.k8s_node_no_floating_ip_port.*.id, count.index) name = var.network_name
} }
security_groups = local.worker_sec_groups
dynamic "scheduler_hints" { dynamic "scheduler_hints" {
for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0].id] : [] for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content { content {
group = scheduler_hints.value group = openstack_compute_servergroup_v2.k8s_node[0].id
} }
} }
metadata = { metadata = {
ssh_user = var.ssh_user ssh_user = var.ssh_user
kubespray_groups = "kube_node,k8s_cluster,no_floating,${var.supplementary_node_groups}" kubespray_groups = "kube_node,k8s_cluster,no_floating,${var.supplementary_node_groups}"
depends_on = var.network_router_id depends_on = var.network_id
use_access_ip = var.use_access_ip use_access_ip = var.use_access_ip
} }
} }
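Note the second change inside this resource: the left column puts the server-group id itself into the for_each list and reads it back as scheduler_hints.value, while the right column repeats the full resource address inside content. Both produce the same hint; the iterator form avoids restating the index. The left-column style, shown in isolation as a fragment that would sit inside the instance resource (not runnable on its own):

  dynamic "scheduler_hints" {
    for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0].id] : []
    content {
      group = scheduler_hints.value # the id placed in the list above
    }
  }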
resource "openstack_networking_port_v2" "k8s_nodes_port" {
for_each = var.number_of_k8s_nodes == 0 && var.number_of_k8s_nodes_no_floating_ip == 0 ? var.k8s_nodes : {}
name = "${var.cluster_name}-k8s-node-${each.key}"
network_id = local.k8s_nodes_settings[each.key].network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.worker_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
dynamic "fixed_ip" {
for_each = var.private_subnet_id == "" ? [] : [true]
content {
subnet_id = var.private_subnet_id
}
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_nodes" { resource "openstack_compute_instance_v2" "k8s_nodes" {
for_each = var.number_of_k8s_nodes == 0 && var.number_of_k8s_nodes_no_floating_ip == 0 ? var.k8s_nodes : {} for_each = var.number_of_k8s_nodes == 0 && var.number_of_k8s_nodes_no_floating_ip == 0 ? var.k8s_nodes : {}
name = "${var.cluster_name}-k8s-node-${each.key}" name = "${var.cluster_name}-k8s-node-${each.key}"
availability_zone = each.value.az availability_zone = each.value.az
image_id = local.k8s_nodes_settings[each.key].use_local_disk ? local.k8s_nodes_settings[each.key].image_id : null image_id = var.node_root_volume_size_in_gb == 0 ? local.image_to_use_node : null
flavor_id = each.value.flavor flavor_id = each.value.flavor
key_pair = openstack_compute_keypair_v2.k8s.name key_pair = openstack_compute_keypair_v2.k8s.name
user_data = each.value.cloudinit != null ? templatefile("${path.module}/templates/cloudinit.yaml.tmpl", {
extra_partitions = each.value.cloudinit.extra_partitions
}) : data.cloudinit_config.cloudinit.rendered
dynamic "block_device" { dynamic "block_device" {
for_each = !local.k8s_nodes_settings[each.key].use_local_disk ? [local.k8s_nodes_settings[each.key].image_id] : [] for_each = var.node_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : []
content { content {
uuid = block_device.value uuid = local.image_to_use_node
source_type = "image" source_type = "image"
volume_size = local.k8s_nodes_settings[each.key].volume_size volume_size = var.node_root_volume_size_in_gb
volume_type = local.k8s_nodes_settings[each.key].volume_type volume_type = var.node_volume_type
boot_index = 0 boot_index = 0
destination_type = "volume" destination_type = "volume"
delete_on_termination = true delete_on_termination = true
@@ -838,48 +537,30 @@ resource "openstack_compute_instance_v2" "k8s_nodes" {
} }
network { network {
port = openstack_networking_port_v2.k8s_nodes_port[each.key].id name = var.network_name
} }
security_groups = local.worker_sec_groups
dynamic "scheduler_hints" { dynamic "scheduler_hints" {
for_each = local.k8s_nodes_settings[each.key].server_group for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content { content {
group = scheduler_hints.value group = openstack_compute_servergroup_v2.k8s_node[0].id
} }
} }
metadata = { metadata = {
ssh_user = var.ssh_user ssh_user = var.ssh_user
kubespray_groups = "kube_node,k8s_cluster,%{if each.value.floating_ip == false}no_floating,%{endif}${var.supplementary_node_groups}${each.value.extra_groups != null ? ",${each.value.extra_groups}" : ""}" kubespray_groups = "kube_node,k8s_cluster,%{if each.value.floating_ip == false}no_floating,%{endif}${var.supplementary_node_groups}"
depends_on = var.network_router_id depends_on = var.network_id
use_access_ip = var.use_access_ip use_access_ip = var.use_access_ip
} }
provisioner "local-exec" { provisioner "local-exec" {
command = "%{if each.value.floating_ip}sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${element(concat(var.bastion_fips, [for key, value in var.k8s_nodes_fips : value.address]), 0)}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml%{else}true%{endif}" command = "%{if each.value.floating_ip}sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, [for key, value in var.k8s_nodes_fips : value.address]), 0)}/ > ${var.group_vars_path}/no_floating.yml%{else}true%{endif}"
} }
} }
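The removed user_data expression renders per-node cloud-init: when a node entry defines cloudinit.extra_partitions, the module-local template is used, otherwise the shared cloudinit_config data source. A sketch of the call together with the object shape the template appears to expect; the field names are inferred from the template's own references (volume_path, partition_start, partition_end, partition_path, mount_path) and are not authoritative:

# Sketch: per-node cloud-init via templatefile().
locals {
  example_extra_partitions = [{
    volume_path     = "/dev/vdb"   # device to repartition
    partition_path  = "/dev/vdb1"  # resulting partition
    partition_start = "0%"
    partition_end   = "100%"
    mount_path      = "/var/lib/example"
  }]

  example_user_data = templatefile("${path.module}/templates/cloudinit.yaml.tmpl", {
    extra_partitions = local.example_extra_partitions
  })
}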
resource "openstack_networking_port_v2" "glusterfs_node_no_floating_ip_port" {
count = var.number_of_gfs_nodes_no_floating_ip
name = "${var.cluster_name}-gfs-node-nf-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.gfs_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
dynamic "fixed_ip" {
for_each = var.private_subnet_id == "" ? [] : [true]
content {
subnet_id = var.private_subnet_id
}
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" { resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.cluster_name}-gfs-node-nf-${count.index + 1}" name = "${var.cluster_name}-gfs-node-nf-${count.index + 1}"
count = var.number_of_gfs_nodes_no_floating_ip count = var.number_of_gfs_nodes_no_floating_ip
@@ -901,9 +582,11 @@ resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
} }
network { network {
port = element(openstack_networking_port_v2.glusterfs_node_no_floating_ip_port.*.id, count.index) name = var.network_name
} }
security_groups = [openstack_networking_secgroup_v2.k8s.name]
dynamic "scheduler_hints" { dynamic "scheduler_hints" {
for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0]] : [] for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content { content {
@@ -914,46 +597,44 @@ resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
metadata = { metadata = {
ssh_user = var.ssh_user_gfs ssh_user = var.ssh_user_gfs
kubespray_groups = "gfs-cluster,network-storage,no_floating" kubespray_groups = "gfs-cluster,network-storage,no_floating"
depends_on = var.network_router_id depends_on = var.network_id
use_access_ip = var.use_access_ip use_access_ip = var.use_access_ip
} }
} }
resource "openstack_networking_floatingip_associate_v2" "bastion" { resource "openstack_compute_floatingip_associate_v2" "bastion" {
count = var.number_of_bastions count = var.number_of_bastions
floating_ip = var.bastion_fips[count.index] floating_ip = var.bastion_fips[count.index]
port_id = element(openstack_networking_port_v2.bastion_port.*.id, count.index) instance_id = element(openstack_compute_instance_v2.bastion.*.id, count.index)
wait_until_associated = var.wait_for_floatingip
} }
resource "openstack_networking_floatingip_associate_v2" "k8s_master" { resource "openstack_compute_floatingip_associate_v2" "k8s_master" {
count = var.number_of_k8s_masters count = var.number_of_k8s_masters
instance_id = element(openstack_compute_instance_v2.k8s_master.*.id, count.index)
floating_ip = var.k8s_master_fips[count.index] floating_ip = var.k8s_master_fips[count.index]
port_id = element(openstack_networking_port_v2.k8s_master_port.*.id, count.index) wait_until_associated = var.wait_for_floatingip
} }
resource "openstack_networking_floatingip_associate_v2" "k8s_masters" { resource "openstack_compute_floatingip_associate_v2" "k8s_master_no_etcd" {
for_each = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 && var.number_of_k8s_masters_no_floating_ip == 0 && var.number_of_k8s_masters_no_floating_ip_no_etcd == 0 ? { for key, value in var.k8s_masters : key => value if value.floating_ip } : {} count = var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters_no_etcd : 0
floating_ip = var.k8s_masters_fips[each.key].address instance_id = element(openstack_compute_instance_v2.k8s_master_no_etcd.*.id, count.index)
port_id = openstack_networking_port_v2.k8s_masters_port[each.key].id floating_ip = var.k8s_master_no_etcd_fips[count.index]
} }
resource "openstack_networking_floatingip_associate_v2" "k8s_master_no_etcd" { resource "openstack_compute_floatingip_associate_v2" "k8s_node" {
count = var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters_no_etcd : 0
floating_ip = var.k8s_master_no_etcd_fips[count.index]
port_id = element(openstack_networking_port_v2.k8s_master_no_etcd_port.*.id, count.index)
}
resource "openstack_networking_floatingip_associate_v2" "k8s_node" {
count = var.node_root_volume_size_in_gb == 0 ? var.number_of_k8s_nodes : 0 count = var.node_root_volume_size_in_gb == 0 ? var.number_of_k8s_nodes : 0
floating_ip = var.k8s_node_fips[count.index] floating_ip = var.k8s_node_fips[count.index]
port_id = element(openstack_networking_port_v2.k8s_node_port.*.id, count.index) instance_id = element(openstack_compute_instance_v2.k8s_node[*].id, count.index)
wait_until_associated = var.wait_for_floatingip
} }
resource "openstack_networking_floatingip_associate_v2" "k8s_nodes" { resource "openstack_compute_floatingip_associate_v2" "k8s_nodes" {
for_each = var.number_of_k8s_nodes == 0 && var.number_of_k8s_nodes_no_floating_ip == 0 ? { for key, value in var.k8s_nodes : key => value if value.floating_ip } : {} for_each = var.number_of_k8s_nodes == 0 && var.number_of_k8s_nodes_no_floating_ip == 0 ? { for key, value in var.k8s_nodes : key => value if value.floating_ip } : {}
floating_ip = var.k8s_nodes_fips[each.key].address floating_ip = var.k8s_nodes_fips[each.key].address
port_id = openstack_networking_port_v2.k8s_nodes_port[each.key].id instance_id = openstack_compute_instance_v2.k8s_nodes[each.key].id
wait_until_associated = var.wait_for_floatingip
} }
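This last block swaps the association resource family wholesale. The right column's openstack_compute_floatingip_associate_v2 binds a floating IP to an instance id through the Nova API and supports wait_until_associated; the left column's openstack_networking_floatingip_associate_v2 binds it to a Neutron port id instead, matching the port-first pattern above. Side-by-side sketch with illustrative labels:

# Sketch: the two association styles seen in this diff.

# Left column: Neutron association against a pre-created port.
resource "openstack_networking_floatingip_associate_v2" "example_via_port" {
  floating_ip = var.example_fip
  port_id     = openstack_networking_port_v2.example_port.id
}

# Right column: Nova association against the instance itself.
resource "openstack_compute_floatingip_associate_v2" "example_via_instance" {
  floating_ip           = var.example_fip
  instance_id           = openstack_compute_instance_v2.example_master.id
  wait_until_associated = var.wait_for_floatingip
}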
resource "openstack_blockstorage_volume_v2" "glusterfs_volume" { resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {

View File

@@ -1,39 +0,0 @@
%{~ if length(extra_partitions) > 0 }
#cloud-config
bootcmd:
%{~ for idx, partition in extra_partitions }
- [ cloud-init-per, once, move-second-header, sgdisk, --move-second-header, ${partition.volume_path} ]
- [ cloud-init-per, once, create-part-${idx}, parted, --script, ${partition.volume_path}, 'mkpart extended ext4 ${partition.partition_start} ${partition.partition_end}' ]
- [ cloud-init-per, once, create-fs-part-${idx}, mkfs.ext4, ${partition.partition_path} ]
%{~ endfor }
runcmd:
%{~ for idx, partition in extra_partitions }
- mkdir -p ${partition.mount_path}
- chown nobody:nogroup ${partition.mount_path}
- mount ${partition.partition_path} ${partition.mount_path}
%{~ endfor }
mounts:
%{~ for idx, partition in extra_partitions }
- [ ${partition.partition_path}, ${partition.mount_path} ]
%{~ endfor }
%{~ else ~}
# yamllint disable rule:comments
#cloud-config
## in some cases novnc console access is required
## it requires ssh password to be set
#ssh_pwauth: yes
#chpasswd:
# list: |
# root:secret
# expire: False
## in some cases direct root ssh access via ssh key is required
#disable_root: false
## in some cases additional CA certs are required
#ca-certs:
# trusted: |
# -----BEGIN CERTIFICATE-----
%{~ endif }
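The deleted template leans on %{~ ... } strip markers: the leading ~ swallows the whitespace, including the newline, in front of each directive, so lines that hold only control flow leave no blank lines in the rendered cloud-init document. A small self-contained example of the same directive syntax (the output name is made up):

output "example_bootcmd" {
  # renders one bootcmd entry per device, with no blank lines left
  # behind by the for/endfor lines themselves
  value = <<-EOT
    #cloud-config
    bootcmd:
    %{~ for device in ["/dev/vdb", "/dev/vdc"] }
    - [ sgdisk, --move-second-header, ${device} ]
    %{~ endfor }
  EOT
}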

Some files were not shown because too many files have changed in this diff.