Compare commits


16 Commits

Author SHA1 Message Date
Max Gautier
12a65c45e9 Refactor check_galaxy + fix version (#10729) (#10891)
* Remove checks for docs using exact tags

Instead, use more generic documentation for installing kubespray as a
collection from git.

* Check that we upgraded galaxy.yml to next version

This is only intended to check for human error. The version in galaxy.yml
should be the next one (which is not the same depending on whether we're on
master or a release branch).

* Set collection version to KUBESPRAY_NEXT_VERSION
2024-02-06 04:07:46 -08:00
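As a rough illustration of the check described in this commit, here is a hypothetical galaxy.yml excerpt; the values are examples only, not the actual contents of the file at this commit.

```yaml
# Hypothetical galaxy.yml excerpt (example values only): the check compares the
# `version` field against KUBESPRAY_NEXT_VERSION and fails if it was not bumped,
# catching the human error of forgetting to update it before a release.
namespace: kubernetes_sigs
name: kubespray
version: 2.22.2   # must already point at the next release (master or release branch)
```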
Max Gautier
1f4ca14029 Add checksums for patch versions (runc, containerd) (#10878)
Make runc 1.1.12 and containerd 1.7.13 default
2024-02-05 09:16:32 -08:00
Max Gautier
e14ab338bf kubernetes: add hashes for 1.24.15, 1.24.16, 1.24.17 (#10823)
Make kubernetes 1.26.13 default
2024-01-22 18:29:35 +01:00
bo.jiang
d8a8fb03a2 Fix hardcoded pod infra version
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2024-01-22 17:23:42 +01:00
Max Gautier
a0c142f2e7 Use calico_pool_blocksize from cluster when existing (#10516)
The blockSize attribute of Calico IPPool resources cannot be changed
once set [1]. Consequently, we use the one currently defined when
configuring an existing IPPool, avoiding upgrade errors caused by trying to
change it.

In particular, this can be useful when the calico_pool_blocksize default
changes in kubespray, which would otherwise force users to add an
explicit setting to their inventories.

[1]: https://docs.tigera.io/calico/latest/reference/resources/ippool#spec
2024-01-22 17:13:13 +01:00
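For context, a sketch of the Calico IPPool resource referred to above; the CIDR and blockSize values are illustrative defaults, not taken from this diff.

```yaml
# Illustrative Calico IPPool (example values): spec.blockSize is immutable once
# the pool exists, so kubespray reuses the blockSize of the existing pool rather
# than re-applying calico_pool_blocksize and triggering an update error.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-pool
spec:
  cidr: 10.233.64.0/18
  blockSize: 26          # cannot be changed after creation
  ipipMode: Always
  natOutgoing: true
```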
Kay Yan
774d824d0b bump vagrant 2.3.7 (#10789) 2024-01-11 14:59:56 +01:00
Max Gautier
336b323954 fix(multus): loop_control template error when item is None (#10347) (#10726)
Co-authored-by: Nicolas Goudry <nicolas-goudry@users.noreply.github.com>
2023-12-18 12:03:49 +01:00
Max Gautier
7dcbe415f8 [2.22] Add hashes for kubernetes 1.26.11, 1.26.10 (#10704)
* [kubernetes] Add hashes for kubernetes 1.26.11, 1.26.10

Make kubernetes 1.26.11 default

* Workaround for yaml/pyyaml#601

* Convert exoscale tf provider to new version (#10646)

This is untested. It passes terraform validate, to unbreak the CI.

* Update 0040-verify-settings.yml (#10699)

remove embedded template

* Use supported version of fedora in CI (#10108)

* tests: replace fedora35 with fedora37

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* tests: replace fedora36 with fedora38

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* docs: update fedora version in docs

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* molecule: upgrade fedora version

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* tests: upgrade fedora images for vagrant and kubevirt

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* vagrant: workaround to fix private network ip address in fedora

Fedora stopped supporting the sysconfig network scripts, so we added the
workaround from
https://github.com/hashicorp/vagrant/issues/12762#issuecomment-1535957837
to fix it.

* networkmanager: do not configure dns if using systemd-resolved

We should not configure DNS in NetworkManager when it points to
systemd-resolved. systemd-resolved infers the upstream DNS servers from
NetworkManager, so setting NetworkManager's DNS to 127.0.0.53 would prevent
systemd-resolved from getting the correct network DNS servers.

Thus, in this case we simply don't set this setting (see the Ansible sketch
after this commit entry).

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* image-builder: update centos7 image

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* gitlab-ci: mark fedora packet jobs as allow failure

Fedora networking is still broken on Packet, let's mark it as allow
failure for now.

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

---------

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

---------

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
Co-authored-by: piwinkler <9642809+piwinkler@users.noreply.github.com>
Co-authored-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-12-12 07:27:35 +01:00
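The networkmanager change in the squashed commit above can be sketched as an Ansible task like the following; the file path and variable names are assumptions for illustration, not the role's actual implementation.

```yaml
# Hypothetical sketch (not the actual role task): only write NetworkManager's
# global DNS servers when systemd-resolved is not the active resolver; otherwise
# NetworkManager would hand 127.0.0.53 back to systemd-resolved and break lookups.
- name: Configure global DNS in NetworkManager
  community.general.ini_file:
    path: /etc/NetworkManager/conf.d/dns.conf        # assumed path
    section: global-dns-domain-*
    option: servers
    value: "{{ upstream_dns_servers | join(',') }}"  # assumed variable
    mode: "0644"
  when: not (systemd_resolved_in_use | default(false))  # assumed variable
```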
Boris Barnier
d65e4e61a7 Add hashes for kubernetes version 1.25.14
Signed-off-by: Boris Barnier <bozzo@users.noreply.github.com>
2023-09-19 11:45:53 +02:00
Boris Barnier
f2a364e375 Add hashes for kubernetes version 1.26.6, 1.26.7, 1.26.8 & 1.26.9
Signed-off-by: Boris Barnier <bozzo@users.noreply.github.com>
2023-09-19 11:45:53 +02:00
Kay Yan
2cf23e3104 Don't search filesystem mounts in docker build step (#10131) (#10194)
Limit find cmd to /usr/ where __pycache__ files are located

Co-authored-by: Mateusz Mojsiejuk <m@fishface.se>
2023-06-06 10:16:12 -07:00
Kay Yan
200e0d54a2 Rebase changes from upstream (#10128) (#10186)
Co-authored-by: Daniel Strufe <2900921+dabeck@users.noreply.github.com>
2023-06-04 23:40:43 -07:00
Kay Yan
fc28bfc336 Fix metrics-server for k8s 1.26 (#10183) (#10187)
Co-authored-by: Mohamed Omar Zaian <mohamedzaian@gmail.com>
2023-06-04 21:46:42 -07:00
Kay Yan
733ac8ffa9 fix-dockerfile (#10181) 2023-06-02 02:34:53 -07:00
Alexander
70450a4882 update README for v2.22.0 (#10180) 2023-06-02 01:00:53 -07:00
Jeroen Rijken
71349c9a17 [2.22] MetalLB backport (#10164)
* Update MetalLB deployment, wait for resource.

Signed-off-by: Jeroen Rijken <jeroen.rijken@xs4all.nl>

* yml to yaml, add basic test for metallb

Signed-off-by: Jeroen Rijken <jeroen.rijken@xs4all.nl>

---------

Signed-off-by: Jeroen Rijken <jeroen.rijken@xs4all.nl>
2023-06-01 19:57:45 -07:00
527 changed files with 5252 additions and 8185 deletions

View File

@@ -7,32 +7,34 @@ skip_list:
# These rules are intentionally skipped:
#
# [E204]: "Lines should be no longer than 160 chars"
# This could be re-enabled with a major rewrite in the future.
# For now, there's not enough value gain from strictly limiting line length.
# (Disabled in May 2019)
- '204'
# [E701]: "meta/main.yml should contain relevant info"
# Roles in Kubespray are not intended to be used/imported by Ansible Galaxy.
# While it can be useful to have these metadata available, they are also available in the existing documentation.
# (Disabled in May 2019)
- '701'
# [role-name] "meta/main.yml" Role name role-name does not match ``^+$`` pattern
# Meta roles in Kubespray don't need proper names
# (Disabled in June 2021)
- 'role-name'
- 'experimental'
# [var-naming] "defaults/main.yml" File defines variable 'apiVersion' that violates variable naming standards
# In Kubespray we use variables that use camelCase to match their k8s counterparts
# (Disabled in June 2021)
- 'var-naming'
- 'var-spacing'
# [fqcn-builtins]
# Roles in kubespray don't need fully qualified collection names
# (Disabled in Feb 2023)
- 'fqcn-builtins'
# We use template in names
- 'name[template]'
# No changed-when on commands
# (Disabled in June 2023 after ansible upgrade; FIXME)
- 'no-changed-when'
# Disable run-once check with free strategy
# (Disabled in June 2023 after ansible upgrade; FIXME)
- 'run-once[task]'
exclude_paths:
# Generated files
- tests/files/custom_cni/cilium.yaml
- venv

View File

@@ -1,8 +0,0 @@
# This file contains ignores rule violations for ansible-lint
inventory/sample/group_vars/k8s_cluster/k8s-cluster.yml jinja[spacing]
roles/kubernetes/control-plane/defaults/main/kube-proxy.yml jinja[spacing]
roles/kubernetes/control-plane/defaults/main/main.yml jinja[spacing]
roles/kubernetes/kubeadm/defaults/main.yml jinja[spacing]
roles/kubernetes/node/defaults/main.yml jinja[spacing]
roles/kubernetes/preinstall/defaults/main.yml jinja[spacing]
roles/kubespray-defaults/defaults/main.yaml jinja[spacing]

2
.gitignore vendored
View File

@@ -11,7 +11,7 @@ contrib/offline/offline-files.tar.gz
.cache
*.bak
*.tfstate
*.tfstate*backup
*.tfstate.backup
*.lock.hcl
.terraform/
contrib/terraform/aws/credentials.tfvars

View File

@@ -9,7 +9,7 @@ stages:
- deploy-special
variables:
KUBESPRAY_VERSION: v2.22.1
KUBESPRAY_VERSION: v2.21.0
FAILFASTCI_NAMESPACE: 'kargo-ci'
GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
ANSIBLE_FORCE_COLOR: "true"
@@ -33,12 +33,16 @@ variables:
MITOGEN_ENABLE: "false"
ANSIBLE_LOG_LEVEL: "-vv"
RECOVER_CONTROL_PLANE_TEST: "false"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube_control_plane[1:]"
TERRAFORM_VERSION: 1.3.7
ANSIBLE_MAJOR_VERSION: "2.11"
PIPELINE_IMAGE: "$CI_REGISTRY_IMAGE/pipeline:${CI_PIPELINE_ID}-${CI_COMMIT_SHORT_SHA}"
before_script:
- ./tests/scripts/rebase.sh
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- python -m pip uninstall -y ansible ansible-base ansible-core
- PIP_CONSTRAINT=tests/constraints.txt python -m pip install -r tests/requirements-${ANSIBLE_MAJOR_VERSION}.txt
- mkdir -p /.ssh
.job: &job
@@ -53,7 +57,6 @@ before_script:
.testcases: &testcases
<<: *job
retry: 1
interruptible: true
before_script:
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- ./tests/scripts/rebase.sh

View File

@@ -67,6 +67,10 @@ tox-inventory-builder:
extends: .job
before_script:
- ./tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible ansible-base ansible-core
- PIP_CONSTRAINT=tests/constraints.txt python -m pip install -r tests/requirements.txt
script:
- pip3 install tox
- cd contrib/inventory_builder && tox

View File

@@ -9,6 +9,10 @@
stage: deploy-part1
before_script:
- tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible ansible-base ansible-core
- PIP_CONSTRAINT=tests/constraints.txt python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/molecule_run.sh
@@ -54,7 +58,6 @@ molecule_cri-o:
stage: deploy-part2
script:
- ./tests/scripts/molecule_run.sh -i container-engine/cri-o
allow_failure: true
when: on_success
# Stage 3 container engines don't get as much attention so allow them to fail

View File

@@ -23,45 +23,50 @@
allow_failure: true
extends: .packet
packet_cleanup_old:
stage: deploy-part1
extends: .packet_periodic
script:
- cd tests
- make cleanup-packet
after_script: []
# The ubuntu20-calico-all-in-one jobs are meant as early stages to prevent running the full CI if something is horribly broken
packet_ubuntu20-calico-all-in-one:
# The ubuntu20-calico-aio jobs are meant as early stages to prevent running the full CI if something is horribly broken
packet_ubuntu20-calico-aio:
stage: deploy-part1
extends: .packet_pr
when: on_success
variables:
RESET_CHECK: "true"
packet_ubuntu20-calico-aio-ansible-2_11:
stage: deploy-part1
extends: .packet_periodic
when: on_success
variables:
ANSIBLE_MAJOR_VERSION: "2.11"
RESET_CHECK: "true"
# ### PR JOBS PART2
packet_ubuntu20-all-in-one-docker:
packet_ubuntu18-aio-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu20-calico-all-in-one-hardening:
packet_ubuntu20-aio-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu22-all-in-one-docker:
packet_ubuntu20-calico-aio-hardening:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu22-calico-all-in-one:
packet_ubuntu18-calico-aio:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu22-calico-etcd-datastore:
packet_ubuntu22-aio-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu22-calico-aio:
stage: deploy-part2
extends: .packet_pr
when: on_success
@@ -75,9 +80,8 @@ packet_almalinux8-crio:
extends: .packet_pr
stage: deploy-part2
when: on_success
allow_failure: true
packet_ubuntu20-crio:
packet_ubuntu18-crio:
extends: .packet_pr
stage: deploy-part2
when: manual
@@ -87,7 +91,7 @@ packet_fedora37-crio:
stage: deploy-part2
when: manual
packet_ubuntu20-flannel-ha:
packet_ubuntu16-flannel-ha:
stage: deploy-part2
extends: .packet_pr
when: manual
@@ -117,21 +121,6 @@ packet_debian11-docker:
extends: .packet_pr
when: on_success
packet_debian12-calico:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_debian12-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_debian12-cilium:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_centos7-calico-ha-once-localhost:
stage: deploy-part2
extends: .packet_pr
@@ -144,7 +133,7 @@ packet_centos7-calico-ha-once-localhost:
packet_almalinux8-kube-ovn:
stage: deploy-part2
extends: .packet_pr
extends: .packet_periodic
when: on_success
packet_almalinux8-calico:
@@ -187,17 +176,22 @@ packet_opensuse-docker-cilium:
# ### MANUAL JOBS
packet_ubuntu20-docker-weave-sep:
packet_ubuntu16-docker-weave-sep:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_ubuntu20-cilium-sep:
packet_ubuntu18-cilium-sep:
stage: deploy-special
extends: .packet_pr
when: manual
packet_ubuntu20-flannel-ha-once:
packet_ubuntu18-flannel-ha:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_ubuntu18-flannel-ha-once:
stage: deploy-part2
extends: .packet_pr
when: manual
@@ -240,7 +234,7 @@ packet_fedora37-calico-swap-selinux:
extends: .packet_pr
when: manual
packet_amazon-linux-2-all-in-one:
packet_amazon-linux-2-aio:
stage: deploy-part2
extends: .packet_pr
when: manual
@@ -315,18 +309,18 @@ packet_debian11-calico-upgrade-once:
variables:
UPGRADE_TEST: graceful
packet_ubuntu20-calico-ha-recover:
packet_ubuntu18-calico-ha-recover:
stage: deploy-part3
extends: .packet_periodic
when: on_success
variables:
RECOVER_CONTROL_PLANE_TEST: "true"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube_control_plane[1:]"
packet_ubuntu20-calico-ha-recover-noquorum:
packet_ubuntu18-calico-ha-recover-noquorum:
stage: deploy-part3
extends: .packet_periodic
when: on_success
variables:
RECOVER_CONTROL_PLANE_TEST: "true"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[1:]:kube_control_plane[1:]"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[1:],kube_control_plane[1:]"

View File

@@ -100,13 +100,21 @@ tf-validate-upcloud:
PROVIDER: upcloud
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-nifcloud:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: nifcloud
# tf-packet-ubuntu20-default:
# tf-packet-ubuntu16-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: $TERRAFORM_VERSION
# PROVIDER: packet
# CLUSTER: $CI_COMMIT_REF_NAME
# TF_VAR_number_of_k8s_masters: "1"
# TF_VAR_number_of_k8s_nodes: "1"
# TF_VAR_plan_k8s_masters: t1.small.x86
# TF_VAR_plan_k8s_nodes: t1.small.x86
# TF_VAR_metro: ny
# TF_VAR_public_key_path: ""
# TF_VAR_operating_system: ubuntu_16_04
#
# tf-packet-ubuntu18-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: $TERRAFORM_VERSION
@@ -118,7 +126,7 @@ tf-validate-nifcloud:
# TF_VAR_plan_k8s_nodes: t1.small.x86
# TF_VAR_metro: am
# TF_VAR_public_key_path: ""
# TF_VAR_operating_system: ubuntu_20_04
# TF_VAR_operating_system: ubuntu_18_04
.ovh_variables: &ovh_variables
OS_AUTH_URL: https://auth.cloud.ovh.net/v3
@@ -156,7 +164,7 @@ tf-elastx_cleanup:
script:
- ./scripts/openstack-cleanup/main.py
tf-elastx_ubuntu20-calico:
tf-elastx_ubuntu18-calico:
extends: .terraform_apply
stage: deploy-part3
when: on_success
@@ -186,7 +194,7 @@ tf-elastx_ubuntu20-calico:
TF_VAR_az_list_node: '["sto1"]'
TF_VAR_flavor_k8s_master: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
TF_VAR_flavor_k8s_node: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
TF_VAR_image: ubuntu-20.04-server-latest
TF_VAR_image: ubuntu-18.04-server-latest
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
# OVH voucher expired, commenting job until things are sorted out
@@ -203,7 +211,7 @@ tf-elastx_ubuntu20-calico:
# script:
# - ./scripts/openstack-cleanup/main.py
# tf-ovh_ubuntu20-calico:
# tf-ovh_ubuntu18-calico:
# extends: .terraform_apply
# when: on_success
# environment: ovh
@@ -229,5 +237,5 @@ tf-elastx_ubuntu20-calico:
# TF_VAR_network_name: "Ext-Net"
# TF_VAR_flavor_k8s_master: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
# TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
# TF_VAR_image: "Ubuntu 20.04"
# TF_VAR_image: "Ubuntu 18.04"
# TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'

View File

@@ -13,6 +13,10 @@
image: $PIPELINE_IMAGE
services: []
before_script:
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible ansible-base ansible-core
- PIP_CONSTRAINT=tests/constraints.txt python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/testcases_run.sh
@@ -20,12 +24,17 @@
- chronic ./tests/scripts/testcases_cleanup.sh
allow_failure: true
vagrant_ubuntu20-calico-dual-stack:
vagrant_ubuntu18-calico-dual-stack:
stage: deploy-part2
extends: .vagrant
when: on_success
vagrant_ubuntu20-weave-medium:
vagrant_ubuntu18-flannel:
stage: deploy-part2
extends: .vagrant
when: on_success
vagrant_ubuntu18-weave-medium:
stage: deploy-part2
extends: .vagrant
when: manual
@@ -41,13 +50,13 @@ vagrant_ubuntu20-flannel-collection:
extends: .vagrant
when: on_success
vagrant_ubuntu20-kube-router-sep:
vagrant_ubuntu16-kube-router-sep:
stage: deploy-part2
extends: .vagrant
when: manual
# Service proxy test fails connectivity testing
vagrant_ubuntu20-kube-router-svc-proxy:
vagrant_ubuntu16-kube-router-svc-proxy:
stage: deploy-part2
extends: .vagrant
when: manual

View File

@@ -1 +0,0 @@
# See our release notes on [GitHub](https://github.com/kubernetes-sigs/kubespray/releases)

View File

@@ -12,7 +12,6 @@ To install development dependencies you can set up a python virtual env with the
virtualenv venv
source venv/bin/activate
pip install -r tests/requirements.txt
ansible-galaxy install -r tests/requirements.yml
```
#### Linting

View File

@@ -28,15 +28,14 @@ RUN apt update -q \
rsync \
openssh-client \
&& pip install --no-compile --no-cache-dir \
ansible==7.6.0 \
ansible-core==2.14.6 \
cryptography==41.0.1 \
ansible==5.7.1 \
ansible-core==2.12.5 \
cryptography==3.4.8 \
jinja2==3.1.2 \
netaddr==0.8.0 \
jmespath==1.0.1 \
MarkupSafe==2.1.3 \
MarkupSafe==2.1.2 \
ruamel.yaml==0.17.21 \
passlib==1.7.4 \
&& KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
&& curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \

View File

@@ -23,7 +23,6 @@ aliases:
- cyclinder
- mzaian
- mrfreezeex
- erikjiang
kubespray-emeritus_approvers:
- riverzhang
- atoms

View File

@@ -34,7 +34,7 @@ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inv
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
# Clean up old Kubernetes cluster with Ansible Playbook - run the playbook as root
# Clean up old Kubernete cluster with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example cleaning up SSL keys in /etc/,
# uninstalling old packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
@@ -75,11 +75,11 @@ You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mou
to access the inventory and SSH key in the container, like this:
```ShellSession
git checkout v2.22.1
docker pull quay.io/kubespray/kubespray:v2.22.1
git checkout v2.22.2
docker pull quay.io/kubespray/kubespray:v2.22.2
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.22.1 bash
quay.io/kubespray/kubespray:v2.22.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```
@@ -142,8 +142,8 @@ vagrant up
## Supported Linux Distributions
- **Flatcar Container Linux by Kinvolk**
- **Debian** Bookworm, Bullseye, Buster
- **Ubuntu** 20.04, 22.04
- **Debian** Bullseye, Buster
- **Ubuntu** 16.04, 18.04, 20.04, 22.04
- **CentOS/RHEL** 7, [8, 9](docs/centos.md#centos-8)
- **Fedora** 37, 38
- **Fedora CoreOS** (see [fcos Note](docs/fcos.md))
@@ -161,28 +161,28 @@ Note: Upstart/SysV init based OS types are not supported.
## Supported Components
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.27.9
- [etcd](https://github.com/etcd-io/etcd) v3.5.10
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.26.13
- [etcd](https://github.com/etcd-io/etcd) v3.5.6
- [docker](https://www.docker.com/) v20.10 (see note)
- [containerd](https://containerd.io/) v1.7.5
- [cri-o](http://cri-o.io/) v1.27 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
- [containerd](https://containerd.io/) v1.7.13
- [cri-o](http://cri-o.io/) v1.24 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) v1.2.0
- [calico](https://github.com/projectcalico/calico) v3.25.2
- [cilium](https://github.com/cilium/cilium) v1.13.4
- [flannel](https://github.com/flannel-io/flannel) v0.22.0
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.11.5
- [calico](https://github.com/projectcalico/calico) v3.25.1
- [cilium](https://github.com/cilium/cilium) v1.13.0
- [flannel](https://github.com/flannel-io/flannel) v0.21.4
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.10.7
- [kube-router](https://github.com/cloudnativelabs/kube-router) v1.5.1
- [multus](https://github.com/k8snetworkplumbingwg/multus-cni) v3.8
- [weave](https://github.com/weaveworks/weave) v2.8.1
- [kube-vip](https://github.com/kube-vip/kube-vip) v0.5.12
- Application
- [cert-manager](https://github.com/jetstack/cert-manager) v1.11.1
- [coredns](https://github.com/coredns/coredns) v1.10.1
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.8.1
- [krew](https://github.com/kubernetes-sigs/krew) v0.4.4
- [argocd](https://argoproj.github.io/) v2.8.0
- [helm](https://helm.sh/) v3.12.3
- [coredns](https://github.com/coredns/coredns) v1.9.3
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.7.1
- [krew](https://github.com/kubernetes-sigs/krew) v0.4.3
- [argocd](https://argoproj.github.io/) v2.7.2
- [helm](https://helm.sh/) v3.12.0
- [metallb](https://metallb.universe.tf/) v0.13.9
- [registry](https://github.com/distribution/distribution) v2.8.1
- Storage Plugin
@@ -191,19 +191,19 @@ Note: Upstart/SysV init based OS types are not supported.
- [aws-ebs-csi-plugin](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) v0.5.0
- [azure-csi-plugin](https://github.com/kubernetes-sigs/azuredisk-csi-driver) v1.10.0
- [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) v1.22.0
- [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) v1.9.2
- [local-path-provisioner](https://github.com/rancher/local-path-provisioner) v0.0.24
- [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) v1.4.0
- [local-path-provisioner](https://github.com/rancher/local-path-provisioner) v0.0.23
- [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) v2.5.0
## Container Runtime Notes
- Supported Docker versions are 18.09, 19.03, 20.10, 23.0 and 24.0. The *recommended* Docker version is 20.10 (except on Debian bookworm which without supporting for 20.10 and below any more). `Kubelet` might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. the YUM ``versionlock`` plugin or ``apt pin``).
- Supported Docker versions are 18.09, 19.03 and 20.10. The *recommended* Docker version is 20.10. `Kubelet` might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. the YUM ``versionlock`` plugin or ``apt pin``).
- The cri-o version should be aligned with the respective kubernetes version (i.e. kube_version=1.20.x, crio_version=1.20)
## Requirements
- **Minimum required version of Kubernetes is v1.25**
- **Ansible v2.14+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
- **Minimum required version of Kubernetes is v1.24**
- **Ansible v2.11+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**.
- If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
@@ -227,7 +227,7 @@ You can choose among ten network plugins. (default: `calico`, except Vagrant use
- [flannel](docs/flannel.md): gre/vxlan (layer 2) networking.
- [Calico](https://docs.tigera.io/calico/latest/about/) is a networking and network policy provider. Calico supports a flexible set of networking options
- [Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options
designed to give you the most efficient networking across a range of situations, including non-overlay
and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts,
pods, and (if using Istio and Envoy) applications at the service mesh layer.

11
Vagrantfile vendored
View File

@@ -19,8 +19,9 @@ SUPPORTED_OS = {
"flatcar-beta" => {box: "flatcar-beta", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["beta"]},
"flatcar-alpha" => {box: "flatcar-alpha", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["alpha"]},
"flatcar-edge" => {box: "flatcar-edge", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["edge"]},
"ubuntu1604" => {box: "generic/ubuntu1604", user: "vagrant"},
"ubuntu1804" => {box: "generic/ubuntu1804", user: "vagrant"},
"ubuntu2004" => {box: "generic/ubuntu2004", user: "vagrant"},
"ubuntu2204" => {box: "generic/ubuntu2204", user: "vagrant"},
"centos" => {box: "centos/7", user: "vagrant"},
"centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
"centos8" => {box: "centos/8", user: "vagrant"},
@@ -52,7 +53,7 @@ $shared_folders ||= {}
$forwarded_ports ||= {}
$subnet ||= "172.18.8"
$subnet_ipv6 ||= "fd3c:b398:0698:0756"
$os ||= "ubuntu2004"
$os ||= "ubuntu1804"
$network_plugin ||= "flannel"
# Setting multi_networking to true will install Multus: https://github.com/k8snetworkplumbingwg/multus-cni
$multi_networking ||= "False"
@@ -218,8 +219,8 @@ Vagrant.configure("2") do |config|
# Disable swap for each vm
node.vm.provision "shell", inline: "swapoff -a"
# ubuntu2004 and ubuntu2204 have IPv6 explicitly disabled. This undoes that.
if ["ubuntu2004", "ubuntu2204"].include? $os
# ubuntu1804 and ubuntu2004 have IPv6 explicitly disabled. This undoes that.
if ["ubuntu1804", "ubuntu2004"].include? $os
node.vm.provision "shell", inline: "rm -f /etc/modprobe.d/local.conf"
node.vm.provision "shell", inline: "sed -i '/net.ipv6.conf.all.disable_ipv6/d' /etc/sysctl.d/99-sysctl.conf /etc/sysctl.conf"
end
@@ -233,7 +234,7 @@ Vagrant.configure("2") do |config|
end
# Disable firewalld on oraclelinux/redhat vms
if ["oraclelinux","oraclelinux8","rhel7","rhel8","rockylinux8"].include? $os
if ["oraclelinux","oraclelinux8","rhel7","rhel8"].include? $os
node.vm.provision "shell", inline: "systemctl stop firewalld; systemctl disable firewalld"
end

View File

@@ -39,7 +39,7 @@ class SearchEC2Tags(object):
hosts[group] = []
tag_key = "kubespray-role"
tag_value = ["*"+group+"*"]
region = os.environ['AWS_REGION']
region = os.environ['REGION']
ec2 = boto3.resource('ec2', region)
filters = [{'Name': 'tag:'+tag_key, 'Values': tag_value}, {'Name': 'instance-state-name', 'Values': ['running']}]
@@ -67,11 +67,6 @@ class SearchEC2Tags(object):
if node_labels_tag:
ansible_host['node_labels'] = dict([ label.strip().split('=') for label in node_labels_tag[0]['Value'].split(',') ])
##Set when instance actually has node_taints
node_taints_tag = list(filter(lambda t: t['Key'] == 'kubespray-node-taints', instance.tags))
if node_taints_tag:
ansible_host['node_taints'] = list([ taint.strip() for taint in node_taints_tag[0]['Value'].split(',') ])
hosts[group].append(dns_name)
hosts['_meta']['hostvars'][dns_name] = ansible_host

View File

@@ -1,6 +1,5 @@
---
- name: Generate Azure inventory
hosts: localhost
- hosts: localhost
gather_facts: False
roles:
- generate-inventory

View File

@@ -1,6 +1,5 @@
---
- name: Generate Azure inventory
hosts: localhost
- hosts: localhost
gather_facts: False
roles:
- generate-inventory_2

View File

@@ -1,6 +1,5 @@
---
- name: Generate Azure templates
hosts: localhost
- hosts: localhost
gather_facts: False
roles:
- generate-templates

View File

@@ -1,6 +1,6 @@
---
- name: Query Azure VMs
- name: Query Azure VMs # noqa 301
command: azure vm list-ip-address --json {{ azure_resource_group }}
register: vm_list_cmd

View File

@@ -1,14 +1,14 @@
---
- name: Query Azure VMs IPs
- name: Query Azure VMs IPs # noqa 301
command: az vm list-ip-addresses -o json --resource-group {{ azure_resource_group }}
register: vm_ip_list_cmd
- name: Query Azure VMs Roles
- name: Query Azure VMs Roles # noqa 301
command: az vm list -o json --resource-group {{ azure_resource_group }}
register: vm_list_cmd
- name: Query Azure Load Balancer Public IP
- name: Query Azure Load Balancer Public IP # noqa 301
command: az network public-ip show -o json -g {{ azure_resource_group }} -n kubernetes-api-pubip
register: lb_pubip_cmd

View File

@@ -1,11 +1,9 @@
---
- name: Create nodes as docker containers
hosts: localhost
- hosts: localhost
gather_facts: False
roles:
- { role: dind-host }
- name: Customize each node containers
hosts: containers
- hosts: containers
roles:
- { role: dind-cluster }

View File

@@ -1,9 +1,9 @@
---
- name: Set_fact distro_setup
- name: set_fact distro_setup
set_fact:
distro_setup: "{{ distro_settings[node_distro] }}"
- name: Set_fact other distro settings
- name: set_fact other distro settings
set_fact:
distro_user: "{{ distro_setup['user'] }}"
distro_ssh_service: "{{ distro_setup['ssh_service'] }}"
@@ -66,8 +66,8 @@
dest: "/etc/sudoers.d/{{ distro_user }}"
mode: 0640
- name: "Add my pubkey to {{ distro_user }} user authorized keys"
ansible.posix.authorized_key:
- name: Add my pubkey to "{{ distro_user }}" user authorized keys
authorized_key:
user: "{{ distro_user }}"
state: present
key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/id_rsa.pub') }}"

View File

@@ -1,9 +1,9 @@
---
- name: Set_fact distro_setup
- name: set_fact distro_setup
set_fact:
distro_setup: "{{ distro_settings[node_distro] }}"
- name: Set_fact other distro settings
- name: set_fact other distro settings
set_fact:
distro_image: "{{ distro_setup['image'] }}"
distro_init: "{{ distro_setup['init'] }}"
@@ -13,7 +13,7 @@
distro_agetty_svc: "{{ distro_setup['agetty_svc'] }}"
- name: Create dind node containers from "containers" inventory section
community.docker.docker_container:
docker_container:
image: "{{ distro_image }}"
name: "{{ item }}"
state: started
@@ -69,7 +69,7 @@
# Running systemd-machine-id-setup doesn't create a unique id for each node container on Debian,
# handle manually
- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave)
- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave) # noqa 301
raw: |
echo {{ item | hash('sha1') }} > /etc/machine-id.new
mv -b /etc/machine-id.new /etc/machine-id
@@ -79,6 +79,7 @@
with_items: "{{ containers.results }}"
- name: Early hack image install to adapt for DIND
# noqa 302 - this task uses the raw module intentionally
raw: |
rm -fv /usr/bin/udevadm /usr/sbin/udevadm
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"

View File

@@ -1,27 +1,21 @@
[tox]
minversion = 1.6
skipsdist = True
envlist = pep8
envlist = pep8, py33
[testenv]
allowlist_externals = py.test
whitelist_externals = py.test
usedevelop = True
deps =
-r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
setenv = VIRTUAL_ENV={envdir}
passenv =
http_proxy
HTTP_PROXY
https_proxy
HTTPS_PROXY
no_proxy
NO_PROXY
passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
commands = pytest -vv #{posargs:./tests}
[testenv:pep8]
usedevelop = False
allowlist_externals = bash
whitelist_externals = bash
commands =
bash -c "find {toxinidir}/* -type f -name '*.py' -print0 | xargs -0 flake8"

View File

@@ -1,9 +1,8 @@
---
- name: Prepare Hypervisor to later install kubespray VMs
hosts: localhost
- hosts: localhost
gather_facts: False
become: yes
vars:
bootstrap_os: none
- bootstrap_os: none
roles:
- { role: kvm-setup }
- kvm-setup

View File

@@ -22,9 +22,9 @@
- ntp
when: ansible_os_family == "Debian"
- name: Create deployment user if required
include_tasks: user.yml
# Create deployment user if required
- include: user.yml
when: k8s_deployment_user is defined
- name: Set proper sysctl values
import_tasks: sysctl.yml
# Set proper sysctl values
- include: sysctl.yml

View File

@@ -1,6 +1,6 @@
---
- name: Load br_netfilter module
community.general.modprobe:
modprobe:
name: br_netfilter
state: present
register: br_netfilter
@@ -25,7 +25,7 @@
- name: Enable net.ipv4.ip_forward in sysctl
ansible.posix.sysctl:
sysctl:
name: net.ipv4.ip_forward
value: 1
sysctl_file: "{{ sysctl_file_path }}"
@@ -33,7 +33,7 @@
reload: yes
- name: Set bridge-nf-call-{arptables,iptables} to 0
ansible.posix.sysctl:
sysctl:
name: "{{ item }}"
state: present
value: 0

View File

@@ -1,9 +1,8 @@
---
- name: Check ansible version
import_playbook: kubernetes_sigs.kubespray.ansible_version
import_playbook: ansible_version.yml
- name: Install mitogen
hosts: localhost
- hosts: localhost
strategy: linear
vars:
mitogen_version: 0.3.2
@@ -20,25 +19,24 @@
- "{{ playbook_dir }}/plugins/mitogen"
- "{{ playbook_dir }}/dist"
- name: Download mitogen release
- name: download mitogen release
get_url:
url: "{{ mitogen_url }}"
dest: "{{ playbook_dir }}/dist/mitogen_{{ mitogen_version }}.tar.gz"
validate_certs: true
mode: 0644
- name: Extract archive
- name: extract archive
unarchive:
src: "{{ playbook_dir }}/dist/mitogen_{{ mitogen_version }}.tar.gz"
dest: "{{ playbook_dir }}/dist/"
- name: Copy plugin
ansible.posix.synchronize:
- name: copy plugin
synchronize:
src: "{{ playbook_dir }}/dist/mitogen-{{ mitogen_version }}/"
dest: "{{ playbook_dir }}/plugins/mitogen"
- name: Add strategy to ansible.cfg
community.general.ini_file:
- name: add strategy to ansible.cfg
ini_file:
path: ansible.cfg
mode: 0644
section: "{{ item.section | d('defaults') }}"

View File

@@ -1,29 +1,24 @@
---
- name: Bootstrap hosts
hosts: gfs-cluster
- hosts: gfs-cluster
gather_facts: false
vars:
ansible_ssh_pipelining: false
roles:
- { role: bootstrap-os, tags: bootstrap-os}
- name: Gather facts
hosts: all
- hosts: all
gather_facts: true
- name: Install glusterfs server
hosts: gfs-cluster
- hosts: gfs-cluster
vars:
ansible_ssh_pipelining: true
roles:
- { role: glusterfs/server }
- name: Install glusterfs servers
hosts: k8s_cluster
- hosts: k8s_cluster
roles:
- { role: glusterfs/client }
- name: Configure Kubernetes to use glusterfs
hosts: kube_control_plane[0]
- hosts: kube_control_plane[0]
roles:
- { role: kubernetes-pv }

View File

@@ -6,12 +6,12 @@ galaxy_info:
description: GlusterFS installation for Linux.
company: "Midwestern Mac, LLC"
license: "license (BSD, MIT)"
min_ansible_version: "2.0"
min_ansible_version: 2.0
platforms:
- name: EL
versions:
- "6"
- "7"
- 6
- 7
- name: Ubuntu
versions:
- precise

View File

@@ -3,19 +3,14 @@
# hyperkube and needs to be installed as part of the system.
# Setup/install tasks.
- name: Setup RedHat distros for glusterfs
include_tasks: setup-RedHat.yml
- include: setup-RedHat.yml
when: ansible_os_family == 'RedHat' and groups['gfs-cluster'] is defined
- name: Setup Debian distros for glusterfs
include_tasks: setup-Debian.yml
- include: setup-Debian.yml
when: ansible_os_family == 'Debian' and groups['gfs-cluster'] is defined
- name: Ensure Gluster mount directories exist.
file:
path: "{{ item }}"
state: directory
mode: 0775
file: "path={{ item }} state=directory mode=0775"
with_items:
- "{{ gluster_mount_dir }}"
when: ansible_os_family in ["Debian","RedHat"] and groups['gfs-cluster'] is defined

View File

@@ -7,7 +7,7 @@
register: glusterfs_ppa_added
when: glusterfs_ppa_use
- name: Ensure GlusterFS client will reinstall if the PPA was just added. # noqa no-handler
- name: Ensure GlusterFS client will reinstall if the PPA was just added. # noqa 503
apt:
name: "{{ item }}"
state: absent

View File

@@ -1,14 +1,10 @@
---
- name: Install Prerequisites
package:
name: "{{ item }}"
state: present
package: name={{ item }} state=present
with_items:
- "centos-release-gluster{{ glusterfs_default_release }}"
- name: Install Packages
package:
name: "{{ item }}"
state: present
package: name={{ item }} state=present
with_items:
- glusterfs-client

View File

@@ -6,12 +6,12 @@ galaxy_info:
description: GlusterFS installation for Linux.
company: "Midwestern Mac, LLC"
license: "license (BSD, MIT)"
min_ansible_version: "2.0"
min_ansible_version: 2.0
platforms:
- name: EL
versions:
- "6"
- "7"
- 6
- 7
- name: Ubuntu
versions:
- precise

View File

@@ -4,58 +4,40 @@
include_vars: "{{ ansible_os_family }}.yml"
# Install xfs package
- name: Install xfs Debian
apt:
name: xfsprogs
state: present
- name: install xfs Debian
apt: name=xfsprogs state=present
when: ansible_os_family == "Debian"
- name: Install xfs RedHat
package:
name: xfsprogs
state: present
- name: install xfs RedHat
package: name=xfsprogs state=present
when: ansible_os_family == "RedHat"
# Format external volumes in xfs
- name: Format volumes in xfs
community.general.filesystem:
fstype: xfs
dev: "{{ disk_volume_device_1 }}"
filesystem: "fstype=xfs dev={{ disk_volume_device_1 }}"
# Mount external volumes
- name: Mounting new xfs filesystem
ansible.posix.mount:
name: "{{ gluster_volume_node_mount_dir }}"
src: "{{ disk_volume_device_1 }}"
fstype: xfs
state: mounted
- name: mounting new xfs filesystem
mount: "name={{ gluster_volume_node_mount_dir }} src={{ disk_volume_device_1 }} fstype=xfs state=mounted"
# Setup/install tasks.
- name: Setup RedHat distros for glusterfs
include_tasks: setup-RedHat.yml
- include: setup-RedHat.yml
when: ansible_os_family == 'RedHat'
- name: Setup Debian distros for glusterfs
include_tasks: setup-Debian.yml
- include: setup-Debian.yml
when: ansible_os_family == 'Debian'
- name: Ensure GlusterFS is started and enabled at boot.
service:
name: "{{ glusterfs_daemon }}"
state: started
enabled: yes
service: "name={{ glusterfs_daemon }} state=started enabled=yes"
- name: Ensure Gluster brick and mount directories exist.
file:
path: "{{ item }}"
state: directory
mode: 0775
file: "path={{ item }} state=directory mode=0775"
with_items:
- "{{ gluster_brick_dir }}"
- "{{ gluster_mount_dir }}"
- name: Configure Gluster volume with replicas
gluster.gluster.gluster_volume:
gluster_volume:
state: present
name: "{{ gluster_brick_name }}"
brick: "{{ gluster_brick_dir }}"
@@ -67,7 +49,7 @@
when: groups['gfs-cluster']|length > 1
- name: Configure Gluster volume without replicas
gluster.gluster.gluster_volume:
gluster_volume:
state: present
name: "{{ gluster_brick_name }}"
brick: "{{ gluster_brick_dir }}"
@@ -78,7 +60,7 @@
when: groups['gfs-cluster']|length <= 1
- name: Mount glusterfs to retrieve disk size
ansible.posix.mount:
mount:
name: "{{ gluster_mount_dir }}"
src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
fstype: glusterfs
@@ -87,8 +69,7 @@
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Get Gluster disk size
setup:
filter: ansible_mounts
setup: filter=ansible_mounts
register: mounts_data
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
@@ -105,7 +86,7 @@
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Unmount glusterfs
ansible.posix.mount:
mount:
name: "{{ gluster_mount_dir }}"
fstype: glusterfs
src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"

View File

@@ -7,7 +7,7 @@
register: glusterfs_ppa_added
when: glusterfs_ppa_use
- name: Ensure GlusterFS will reinstall if the PPA was just added. # noqa no-handler
- name: Ensure GlusterFS will reinstall if the PPA was just added. # noqa 503
apt:
name: "{{ item }}"
state: absent

View File

@@ -1,15 +1,11 @@
---
- name: Install Prerequisites
package:
name: "{{ item }}"
state: present
package: name={{ item }} state=present
with_items:
- "centos-release-gluster{{ glusterfs_default_release }}"
- name: Install Packages
package:
name: "{{ item }}"
state: present
package: name={{ item }} state=present
with_items:
- glusterfs-server
- glusterfs-client

View File

@@ -1,11 +1,9 @@
---
- name: Tear down heketi
hosts: kube_control_plane[0]
- hosts: kube_control_plane[0]
roles:
- { role: tear-down }
- name: Teardown disks in heketi
hosts: heketi-node
- hosts: heketi-node
become: yes
roles:
- { role: tear-down-disks }

View File

@@ -1,11 +1,9 @@
---
- name: Prepare heketi install
hosts: heketi-node
- hosts: heketi-node
roles:
- { role: prepare }
- name: Provision heketi
hosts: kube_control_plane[0]
- hosts: kube_control_plane[0]
tags:
- "provision"
roles:

View File

@@ -5,7 +5,7 @@
- "dm_snapshot"
- "dm_mirror"
- "dm_thin_pool"
community.general.modprobe:
modprobe:
name: "{{ item }}"
state: "present"

View File

@@ -1,3 +1,3 @@
---
- name: "Stop port forwarding"
- name: "stop port forwarding"
command: "killall "

View File

@@ -6,7 +6,7 @@
- name: "Delete bootstrap Heketi."
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"deploy-heketi\""
when: "heketi_resources.stdout|from_json|json_query('items[*]')|length > 0"
- name: "Ensure there is nothing left over."
- name: "Ensure there is nothing left over." # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"deploy-heketi\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"

View File

@@ -14,7 +14,7 @@
- name: "Copy topology configuration into container."
changed_when: false
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ initial_heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology." # noqa no-handler
- name: "Load heketi topology." # noqa 503
when: "render.changed"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
register: "load_heketi"

View File

@@ -18,7 +18,7 @@
- name: "Provision database volume."
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} setup-openshift-heketi-storage"
when: "heketi_database_volume_exists is undefined"
- name: "Copy configuration from pod."
- name: "Copy configuration from pod." # noqa 301
become: true
command: "{{ bin_dir }}/kubectl cp {{ initial_heketi_pod_name }}:/heketi-storage.json {{ kube_config_dir }}/heketi-storage-bootstrap.json"
- name: "Get heketi volume ids."

View File

@@ -11,10 +11,10 @@
src: "topology.json.j2"
dest: "{{ kube_config_dir }}/topology.json"
mode: 0644
- name: "Copy topology configuration into container." # noqa no-handler
- name: "Copy topology configuration into container." # noqa 503
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology." # noqa no-handler
- name: "Load heketi topology." # noqa 503
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
- name: "Get heketi topology."

View File

@@ -22,7 +22,7 @@
ignore_errors: true # noqa ignore-errors
changed_when: false
- name: "Remove volume groups."
- name: "Remove volume groups." # noqa 301
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true
@@ -30,7 +30,7 @@
with_items: "{{ volume_groups.stdout_lines }}"
loop_control: { loop_var: "volume_group" }
- name: "Remove physical volume from cluster disks."
- name: "Remove physical volume from cluster disks." # noqa 301
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true

View File

@@ -1,43 +1,43 @@
---
- name: Remove storage class.
- name: Remove storage class. # noqa 301
command: "{{ bin_dir }}/kubectl delete storageclass gluster"
ignore_errors: true # noqa ignore-errors
- name: Tear down heketi.
- name: Tear down heketi. # noqa 301
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\""
ignore_errors: true # noqa ignore-errors
- name: Tear down heketi.
- name: Tear down heketi. # noqa 301
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\""
ignore_errors: true # noqa ignore-errors
- name: Tear down bootstrap.
include_tasks: "../../provision/tasks/bootstrap/tear-down.yml"
- name: Ensure there is nothing left over.
- name: Ensure there is nothing left over. # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
retries: 60
delay: 5
- name: Ensure there is nothing left over.
- name: Ensure there is nothing left over. # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
retries: 60
delay: 5
- name: Tear down glusterfs.
- name: Tear down glusterfs. # noqa 301
command: "{{ bin_dir }}/kubectl delete daemonset.extensions/glusterfs"
ignore_errors: true # noqa ignore-errors
- name: Remove heketi storage service.
- name: Remove heketi storage service. # noqa 301
command: "{{ bin_dir }}/kubectl delete service heketi-storage-endpoints"
ignore_errors: true # noqa ignore-errors
- name: Remove heketi gluster role binding
- name: Remove heketi gluster role binding # noqa 301
command: "{{ bin_dir }}/kubectl delete clusterrolebinding heketi-gluster-admin"
ignore_errors: true # noqa ignore-errors
- name: Remove heketi config secret
- name: Remove heketi config secret # noqa 301
command: "{{ bin_dir }}/kubectl delete secret heketi-config-secret"
ignore_errors: true # noqa ignore-errors
- name: Remove heketi db backup
- name: Remove heketi db backup # noqa 301
command: "{{ bin_dir }}/kubectl delete secret heketi-db-backup"
ignore_errors: true # noqa ignore-errors
- name: Remove heketi service account
- name: Remove heketi service account # noqa 301
command: "{{ bin_dir }}/kubectl delete serviceaccount heketi-service-account"
ignore_errors: true # noqa ignore-errors
- name: Get secrets

View File

@@ -27,7 +27,7 @@ manage-offline-container-images.sh register
## generate_list.sh
This script generates the list of downloaded files and the list of container images by `roles/download/defaults/main/main.yml` file.
This script generates the list of downloaded files and the list of container images by `roles/download/defaults/main.yml` file.
Run this script will execute `generate_list.yml` playbook in kubespray root directory and generate four files,
all downloaded files url in files.list, all container images in images.list, jinja2 templates in *.template.

View File

@@ -5,7 +5,7 @@ CURRENT_DIR=$(cd $(dirname $0); pwd)
TEMP_DIR="${CURRENT_DIR}/temp"
REPO_ROOT_DIR="${CURRENT_DIR%/contrib/offline}"
: ${DOWNLOAD_YML:="roles/download/defaults/main/main.yml"}
: ${DOWNLOAD_YML:="roles/download/defaults/main.yml"}
mkdir -p ${TEMP_DIR}
@@ -19,7 +19,7 @@ sed -n '/^downloads:/,/download_defaults:/p' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed 'N;s#\n# #g' | tr ' ' ':' | sed 's/\"//g' > ${TEMP_DIR}/images.list.template
# add kube-* images to images list template
# Those container images are downloaded by kubeadm, then roles/download/defaults/main/main.yml
# Those container images are downloaded by kubeadm, then roles/download/defaults/main.yml
# doesn't contain those images. That is reason why here needs to put those images into the
# list separately.
KUBE_IMAGES="kube-apiserver kube-controller-manager kube-scheduler kube-proxy"

View File

@@ -1,6 +1,5 @@
---
- name: Collect container images for offline deployment
hosts: localhost
- hosts: localhost
become: no
roles:
@@ -12,11 +11,9 @@
tasks:
# Generate files.list and images.list files from templates.
- name: Collect container images for offline deployment
template:
- template:
src: ./contrib/offline/temp/{{ item }}.list.template
dest: ./contrib/offline/temp/{{ item }}.list
mode: 0644
with_items:
- files
- images

View File

@@ -39,6 +39,6 @@ if [ $? -ne 0 ]; then
sudo "${runtime}" run \
--restart=always -d -p ${NGINX_PORT}:80 \
--volume "${OFFLINE_FILES_DIR}:/usr/share/nginx/html/download" \
--volume "${CURRENT_DIR}"/nginx.conf:/etc/nginx/nginx.conf \
--volume "$(pwd)"/nginx.conf:/etc/nginx/nginx.conf \
--name nginx nginx:alpine
fi

View File

@@ -1,5 +1,4 @@
---
- name: Disable firewalld/ufw
hosts: all
- hosts: all
roles:
- { role: prepare }

View File

@@ -1,8 +1,5 @@
---
- name: Disable firewalld and ufw
when:
- disable_service_firewall is defined and disable_service_firewall
block:
- block:
- name: List services
service_facts:
@@ -12,7 +9,7 @@
state: stopped
enabled: no
when:
"'firewalld.service' in services and services['firewalld.service'].status != 'not-found'"
"'firewalld.service' in services"
- name: Disable service ufw
systemd:
@@ -20,4 +17,7 @@
state: stopped
enabled: no
when:
"'ufw.service' in services and services['ufw.service'].status != 'not-found'"
"'ufw.service' in services"
when:
- disable_service_firewall is defined and disable_service_firewall

View File

@@ -1,5 +0,0 @@
*.tfstate*
.terraform.lock.hcl
.terraform
sample-inventory/inventory.ini

View File

@@ -1,137 +0,0 @@
# Kubernetes on NIFCLOUD with Terraform
Provision a Kubernetes cluster on [NIFCLOUD](https://pfs.nifcloud.com/) using Terraform and Kubespray
## Overview
The setup looks like following
```text
Kubernetes cluster
+----------------------------+
+---------------+ | +--------------------+ |
| | | | +--------------------+ |
| API server LB +---------> | | | |
| | | | | Control Plane/etcd | |
+---------------+ | | | node(s) | |
| +-+ | |
| +--------------------+ |
| ^ |
| | |
| v |
| +--------------------+ |
| | +--------------------+ |
| | | | |
| | | Worker | |
| | | node(s) | |
| +-+ | |
| +--------------------+ |
+----------------------------+
```
## Requirements
* Terraform 1.3.7
## Quickstart
### Export Variables
* Your NIFCLOUD credentials:
```bash
export NIFCLOUD_ACCESS_KEY_ID=<YOUR ACCESS KEY>
export NIFCLOUD_SECRET_ACCESS_KEY=<YOUR SECRET ACCESS KEY>
```
* The SSH KEY used to connect to the instance:
* FYI: [Cloud Help(SSH Key)](https://pfs.nifcloud.com/help/ssh.htm)
```bash
export TF_VAR_SSHKEY_NAME=<YOUR SSHKEY NAME>
```
* The IP address to connect to bastion server:
```bash
export TF_VAR_working_instance_ip=$(curl ifconfig.me)
```
### Create The Infrastructure
* Run terraform:
```bash
terraform init
terraform apply -var-file ./sample-inventory/cluster.tfvars
```
### Setup The Kubernetes
* Generate cluster configuration file:
```bash
./generate-inventory.sh > sample-inventory/inventory.ini
* Export Variables:
```bash
BASTION_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.bastion_info | to_entries[].value.public_ip')
API_LB_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.control_plane_lb')
CP01_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.control_plane_info | to_entries[0].value.private_ip')
export ANSIBLE_SSH_ARGS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand=\"ssh root@${BASTION_IP} -W %h:%p\""
```
* Set ssh-agent"
```bash
eval `ssh-agent`
ssh-add <THE PATH TO YOUR SSH KEY>
```
* Run cluster.yml playbook:
```bash
cd ./../../../
ansible-playbook -i contrib/terraform/nifcloud/inventory/inventory.ini cluster.yml
```
### Connecting to Kubernetes
* [Install kubectl](https://kubernetes.io/docs/tasks/tools/) on the localhost
* Fetching kubeconfig file:
```bash
mkdir -p ~/.kube
scp -o ProxyCommand="ssh root@${BASTION_IP} -W %h:%p" root@${CP01_IP}:/etc/kubernetes/admin.conf ~/.kube/config
```
* Rewrite /etc/hosts
```bash
sudo echo "${API_LB_IP} lb-apiserver.kubernetes.local" >> /etc/hosts
```
* Run kubectl
```bash
kubectl get node
```
## Variables
* `region`: Region where to run the cluster
* `az`: Availability zone where to run the cluster
* `private_ip_bn`: Private ip address of bastion server
* `private_network_cidr`: Subnet of private network
* `instances_cp`: Machine to provision as Control Plane. Key of this object will be used as part of the machine' name
* `private_ip`: private ip address of machine
* `instances_wk`: Machine to provision as Worker Node. Key of this object will be used as part of the machine' name
* `private_ip`: private ip address of machine
* `instance_key_name`: The key name of the Key Pair to use for the instance
* `instance_type_bn`: The instance type of bastion server
* `instance_type_wk`: The instance type of worker node
* `instance_type_cp`: The instance type of control plane
* `image_name`: OS image used for the instance
* `working_instance_ip`: The IP address to connect to bastion server
* `accounting_type`: Accounting type. (1: monthly, 2: pay per use)

View File

@@ -1,64 +0,0 @@
#!/bin/bash
#
# Generates a inventory file based on the terraform output.
# After provisioning a cluster, simply run this command and supply the terraform state file
# Default state file is terraform.tfstate
#
set -e
TF_OUT=$(terraform output -json)
CONTROL_PLANES=$(jq -r '.kubernetes_cluster.value.control_plane_info | to_entries[]' <(echo "${TF_OUT}"))
WORKERS=$(jq -r '.kubernetes_cluster.value.worker_info | to_entries[]' <(echo "${TF_OUT}"))
mapfile -t CONTROL_PLANE_NAMES < <(jq -r '.key' <(echo "${CONTROL_PLANES}"))
mapfile -t WORKER_NAMES < <(jq -r '.key' <(echo "${WORKERS}"))
API_LB=$(jq -r '.kubernetes_cluster.value.control_plane_lb' <(echo "${TF_OUT}"))
echo "[all]"
# Generate control plane hosts
i=1
for name in "${CONTROL_PLANE_NAMES[@]}"; do
private_ip=$(jq -r '. | select( .key=='"\"${name}\""' ) | .value.private_ip' <(echo "${CONTROL_PLANES}"))
echo "${name} ansible_user=root ansible_host=${private_ip} access_ip=${private_ip} ip=${private_ip} etcd_member_name=etcd${i}"
i=$(( i + 1 ))
done
# Generate worker hosts
for name in "${WORKER_NAMES[@]}"; do
private_ip=$(jq -r '. | select( .key=='"\"${name}\""' ) | .value.private_ip' <(echo "${WORKERS}"))
echo "${name} ansible_user=root ansible_host=${private_ip} access_ip=${private_ip} ip=${private_ip}"
done
API_LB=$(jq -r '.kubernetes_cluster.value.control_plane_lb' <(echo "${TF_OUT}"))
echo ""
echo "[all:vars]"
echo "upstream_dns_servers=['8.8.8.8','8.8.4.4']"
echo "loadbalancer_apiserver={'address':'${API_LB}','port':'6443'}"
echo ""
echo "[kube_control_plane]"
for name in "${CONTROL_PLANE_NAMES[@]}"; do
echo "${name}"
done
echo ""
echo "[etcd]"
for name in "${CONTROL_PLANE_NAMES[@]}"; do
echo "${name}"
done
echo ""
echo "[kube_node]"
for name in "${WORKER_NAMES[@]}"; do
echo "${name}"
done
echo ""
echo "[k8s_cluster:children]"
echo "kube_control_plane"
echo "kube_node"

View File

@@ -1,36 +0,0 @@
provider "nifcloud" {
region = var.region
}
module "kubernetes_cluster" {
source = "./modules/kubernetes-cluster"
availability_zone = var.az
prefix = "dev"
private_network_cidr = var.private_network_cidr
instance_key_name = var.instance_key_name
instances_cp = var.instances_cp
instances_wk = var.instances_wk
image_name = var.image_name
instance_type_bn = var.instance_type_bn
instance_type_cp = var.instance_type_cp
instance_type_wk = var.instance_type_wk
private_ip_bn = var.private_ip_bn
additional_lb_filter = [var.working_instance_ip]
}
resource "nifcloud_security_group_rule" "ssh_from_bastion" {
security_group_names = [
module.kubernetes_cluster.security_group_name.bastion
]
type = "IN"
from_port = 22
to_port = 22
protocol = "TCP"
cidr_ip = var.working_instance_ip
}

View File

@@ -1,301 +0,0 @@
#################################################
##
## Local variables
##
locals {
# e.g. east-11 is 11
az_num = reverse(split("-", var.availability_zone))[0]
# e.g. east-11 is e11
az_short_name = "${substr(reverse(split("-", var.availability_zone))[1], 0, 1)}${local.az_num}"
# Port used by the protocol
port_ssh = 22
port_kubectl = 6443
port_kubelet = 10250
# calico: https://docs.tigera.io/calico/latest/getting-started/kubernetes/requirements#network-requirements
port_bgp = 179
port_vxlan = 4789
port_etcd = 2379
}
#################################################
##
## General
##
# data
data "nifcloud_image" "this" {
image_name = var.image_name
}
# private lan
resource "nifcloud_private_lan" "this" {
private_lan_name = "${var.prefix}lan"
availability_zone = var.availability_zone
cidr_block = var.private_network_cidr
accounting_type = var.accounting_type
}
#################################################
##
## Bastion
##
resource "nifcloud_security_group" "bn" {
group_name = "${var.prefix}bn"
description = "${var.prefix} bastion"
availability_zone = var.availability_zone
}
resource "nifcloud_instance" "bn" {
instance_id = "${local.az_short_name}${var.prefix}bn01"
security_group = nifcloud_security_group.bn.group_name
instance_type = var.instance_type_bn
user_data = templatefile("${path.module}/templates/userdata.tftpl", {
private_ip_address = var.private_ip_bn
ssh_port = local.port_ssh
hostname = "${local.az_short_name}${var.prefix}bn01"
})
availability_zone = var.availability_zone
accounting_type = var.accounting_type
image_id = data.nifcloud_image.this.image_id
key_name = var.instance_key_name
network_interface {
network_id = "net-COMMON_GLOBAL"
}
network_interface {
network_id = nifcloud_private_lan.this.network_id
ip_address = "static"
}
# The image_id changes when the OS image type is demoted from standard to public.
lifecycle {
ignore_changes = [
image_id,
user_data,
]
}
}
#################################################
##
## Control Plane
##
resource "nifcloud_security_group" "cp" {
group_name = "${var.prefix}cp"
description = "${var.prefix} control plane"
availability_zone = var.availability_zone
}
resource "nifcloud_instance" "cp" {
for_each = var.instances_cp
instance_id = "${local.az_short_name}${var.prefix}${each.key}"
security_group = nifcloud_security_group.cp.group_name
instance_type = var.instance_type_cp
user_data = templatefile("${path.module}/templates/userdata.tftpl", {
private_ip_address = each.value.private_ip
ssh_port = local.port_ssh
hostname = "${local.az_short_name}${var.prefix}${each.key}"
})
availability_zone = var.availability_zone
accounting_type = var.accounting_type
image_id = data.nifcloud_image.this.image_id
key_name = var.instance_key_name
network_interface {
network_id = "net-COMMON_GLOBAL"
}
network_interface {
network_id = nifcloud_private_lan.this.network_id
ip_address = "static"
}
# The image_id changes when the OS image type is demoted from standard to public.
lifecycle {
ignore_changes = [
image_id,
user_data,
]
}
}
resource "nifcloud_load_balancer" "this" {
load_balancer_name = "${local.az_short_name}${var.prefix}cp"
accounting_type = var.accounting_type
balancing_type = 1 // Round-Robin
load_balancer_port = local.port_kubectl
instance_port = local.port_kubectl
instances = [for v in nifcloud_instance.cp : v.instance_id]
filter = concat(
[for k, v in nifcloud_instance.cp : v.public_ip],
[for k, v in nifcloud_instance.wk : v.public_ip],
var.additional_lb_filter,
)
filter_type = 1 // Allow
}
#################################################
##
## Worker
##
resource "nifcloud_security_group" "wk" {
group_name = "${var.prefix}wk"
description = "${var.prefix} worker"
availability_zone = var.availability_zone
}
resource "nifcloud_instance" "wk" {
for_each = var.instances_wk
instance_id = "${local.az_short_name}${var.prefix}${each.key}"
security_group = nifcloud_security_group.wk.group_name
instance_type = var.instance_type_wk
user_data = templatefile("${path.module}/templates/userdata.tftpl", {
private_ip_address = each.value.private_ip
ssh_port = local.port_ssh
hostname = "${local.az_short_name}${var.prefix}${each.key}"
})
availability_zone = var.availability_zone
accounting_type = var.accounting_type
image_id = data.nifcloud_image.this.image_id
key_name = var.instance_key_name
network_interface {
network_id = "net-COMMON_GLOBAL"
}
network_interface {
network_id = nifcloud_private_lan.this.network_id
ip_address = "static"
}
# The image_id changes when the OS image type is demoted from standard to public.
lifecycle {
ignore_changes = [
image_id,
user_data,
]
}
}
#################################################
##
## Security Group Rule: Kubernetes
##
# ssh
resource "nifcloud_security_group_rule" "ssh_from_bastion" {
security_group_names = [
nifcloud_security_group.wk.group_name,
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_ssh
to_port = local.port_ssh
protocol = "TCP"
source_security_group_name = nifcloud_security_group.bn.group_name
}
# kubectl
resource "nifcloud_security_group_rule" "kubectl_from_worker" {
security_group_names = [
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_kubectl
to_port = local.port_kubectl
protocol = "TCP"
source_security_group_name = nifcloud_security_group.wk.group_name
}
# kubelet
resource "nifcloud_security_group_rule" "kubelet_from_worker" {
security_group_names = [
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_kubelet
to_port = local.port_kubelet
protocol = "TCP"
source_security_group_name = nifcloud_security_group.wk.group_name
}
resource "nifcloud_security_group_rule" "kubelet_from_control_plane" {
security_group_names = [
nifcloud_security_group.wk.group_name,
]
type = "IN"
from_port = local.port_kubelet
to_port = local.port_kubelet
protocol = "TCP"
source_security_group_name = nifcloud_security_group.cp.group_name
}
#################################################
##
## Security Group Rule: calico
##
# vxlan
resource "nifcloud_security_group_rule" "vxlan_from_control_plane" {
security_group_names = [
nifcloud_security_group.wk.group_name,
]
type = "IN"
from_port = local.port_vxlan
to_port = local.port_vxlan
protocol = "UDP"
source_security_group_name = nifcloud_security_group.cp.group_name
}
resource "nifcloud_security_group_rule" "vxlan_from_worker" {
security_group_names = [
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_vxlan
to_port = local.port_vxlan
protocol = "UDP"
source_security_group_name = nifcloud_security_group.wk.group_name
}
# bgp
resource "nifcloud_security_group_rule" "bgp_from_control_plane" {
security_group_names = [
nifcloud_security_group.wk.group_name,
]
type = "IN"
from_port = local.port_bgp
to_port = local.port_bgp
protocol = "TCP"
source_security_group_name = nifcloud_security_group.cp.group_name
}
resource "nifcloud_security_group_rule" "bgp_from_worker" {
security_group_names = [
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_bgp
to_port = local.port_bgp
protocol = "TCP"
source_security_group_name = nifcloud_security_group.wk.group_name
}
# etcd
resource "nifcloud_security_group_rule" "etcd_from_worker" {
security_group_names = [
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_etcd
to_port = local.port_etcd
protocol = "TCP"
source_security_group_name = nifcloud_security_group.wk.group_name
}

View File

@@ -1,48 +0,0 @@
output "control_plane_lb" {
description = "The DNS name of LB for control plane"
value = nifcloud_load_balancer.this.dns_name
}
output "security_group_name" {
description = "The security group used in the cluster"
value = {
bastion = nifcloud_security_group.bn.group_name,
control_plane = nifcloud_security_group.cp.group_name,
worker = nifcloud_security_group.wk.group_name,
}
}
output "private_network_id" {
description = "The private network used in the cluster"
value = nifcloud_private_lan.this.id
}
output "bastion_info" {
description = "The basion information in cluster"
value = { (nifcloud_instance.bn.instance_id) : {
instance_id = nifcloud_instance.bn.instance_id,
unique_id = nifcloud_instance.bn.unique_id,
private_ip = nifcloud_instance.bn.private_ip,
public_ip = nifcloud_instance.bn.public_ip,
} }
}
output "worker_info" {
description = "The worker information in cluster"
value = { for v in nifcloud_instance.wk : v.instance_id => {
instance_id = v.instance_id,
unique_id = v.unique_id,
private_ip = v.private_ip,
public_ip = v.public_ip,
} }
}
output "control_plane_info" {
description = "The control plane information in cluster"
value = { for v in nifcloud_instance.cp : v.instance_id => {
instance_id = v.instance_id,
unique_id = v.unique_id,
private_ip = v.private_ip,
public_ip = v.public_ip,
} }
}

View File

@@ -1,45 +0,0 @@
#!/bin/bash
#################################################
##
## IP Address
##
configure_private_ip_address () {
cat << EOS > /etc/netplan/01-netcfg.yaml
network:
version: 2
renderer: networkd
ethernets:
ens192:
dhcp4: yes
dhcp6: yes
dhcp-identifier: mac
ens224:
dhcp4: no
dhcp6: no
addresses: [${private_ip_address}]
EOS
netplan apply
}
configure_private_ip_address
#################################################
##
## SSH
##
configure_ssh_port () {
sed -i 's/^#*Port [0-9]*/Port ${ssh_port}/' /etc/ssh/sshd_config
}
configure_ssh_port
#################################################
##
## Hostname
##
hostnamectl set-hostname ${hostname}
#################################################
##
## Disable swap files generated by systemd-gpt-auto-generator
##
systemctl mask "dev-sda3.swap"

View File

@@ -1,9 +0,0 @@
terraform {
required_version = ">=1.3.7"
required_providers {
nifcloud = {
source = "nifcloud/nifcloud"
version = ">= 1.8.0, < 2.0.0"
}
}
}

View File

@@ -1,81 +0,0 @@
variable "availability_zone" {
description = "The availability zone"
type = string
}
variable "prefix" {
description = "The prefix for the entire cluster"
type = string
validation {
condition = length(var.prefix) <= 5
error_message = "Must be a less than 5 character long."
}
}
variable "private_network_cidr" {
description = "The subnet of private network"
type = string
validation {
condition = can(cidrnetmask(var.private_network_cidr))
error_message = "Must be a valid IPv4 CIDR block address."
}
}
variable "private_ip_bn" {
description = "Private IP of bastion server"
type = string
}
variable "instances_cp" {
type = map(object({
private_ip = string
}))
}
variable "instances_wk" {
type = map(object({
private_ip = string
}))
}
variable "instance_key_name" {
description = "The key name of the Key Pair to use for the instance"
type = string
}
variable "instance_type_bn" {
description = "The instance type of bastion server"
type = string
}
variable "instance_type_wk" {
description = "The instance type of worker"
type = string
}
variable "instance_type_cp" {
description = "The instance type of control plane"
type = string
}
variable "image_name" {
description = "The name of image"
type = string
}
variable "additional_lb_filter" {
description = "Additional LB filter"
type = list(string)
}
variable "accounting_type" {
type = string
default = "1"
validation {
condition = anytrue([
var.accounting_type == "1", // Monthly
var.accounting_type == "2", // Pay per use
])
error_message = "Must be a 1 or 2."
}
}

View File

@@ -1,3 +0,0 @@
output "kubernetes_cluster" {
value = module.kubernetes_cluster
}

View File

@@ -1,22 +0,0 @@
region = "jp-west-1"
az = "west-11"
instance_key_name = "deployerkey"
instance_type_bn = "e-medium"
instance_type_cp = "e-medium"
instance_type_wk = "e-medium"
private_network_cidr = "192.168.30.0/24"
instances_cp = {
"cp01" : { private_ip : "192.168.30.11/24" }
"cp02" : { private_ip : "192.168.30.12/24" }
"cp03" : { private_ip : "192.168.30.13/24" }
}
instances_wk = {
"wk01" : { private_ip : "192.168.30.21/24" }
"wk02" : { private_ip : "192.168.30.22/24" }
}
private_ip_bn = "192.168.30.10/24"
image_name = "Ubuntu Server 22.04 LTS"

View File

@@ -1 +0,0 @@
../../../../inventory/sample/group_vars

View File

@@ -1,9 +0,0 @@
terraform {
required_version = ">=1.3.7"
required_providers {
nifcloud = {
source = "nifcloud/nifcloud"
version = "1.8.0"
}
}
}

View File

@@ -1,77 +0,0 @@
variable "region" {
description = "The region"
type = string
}
variable "az" {
description = "The availability zone"
type = string
}
variable "private_ip_bn" {
description = "Private IP of bastion server"
type = string
}
variable "private_network_cidr" {
description = "The subnet of private network"
type = string
validation {
condition = can(cidrnetmask(var.private_network_cidr))
error_message = "Must be a valid IPv4 CIDR block address."
}
}
variable "instances_cp" {
type = map(object({
private_ip = string
}))
}
variable "instances_wk" {
type = map(object({
private_ip = string
}))
}
variable "instance_key_name" {
description = "The key name of the Key Pair to use for the instance"
type = string
}
variable "instance_type_bn" {
description = "The instance type of bastion server"
type = string
}
variable "instance_type_wk" {
description = "The instance type of worker"
type = string
}
variable "instance_type_cp" {
description = "The instance type of control plane"
type = string
}
variable "image_name" {
description = "The name of image"
type = string
}
variable "working_instance_ip" {
description = "The IP address to connect to bastion server."
type = string
}
variable "accounting_type" {
type = string
default = "2"
validation {
condition = anytrue([
var.accounting_type == "1", // Monthly
var.accounting_type == "2", // Pay per use
])
error_message = "Must be a 1 or 2."
}
}

View File

@@ -10,29 +10,21 @@ It is recommended to deploy the ansible version used by kubespray into a python
```ShellSession
VENVDIR=kubespray-venv
KUBESPRAYDIR=kubespray
python3 -m venv $VENVDIR
ANSIBLE_VERSION=2.12
virtualenv --python=$(which python3) $VENVDIR
source $VENVDIR/bin/activate
cd $KUBESPRAYDIR
pip install -U -r requirements.txt
pip install -U -r requirements-$ANSIBLE_VERSION.txt
```
In case you have a similar message when installing the requirements:
```ShellSession
ERROR: Could not find a version that satisfies the requirement ansible==7.6.0 (from -r requirements.txt (line 1)) (from versions: [...], 6.7.0)
ERROR: No matching distribution found for ansible==7.6.0 (from -r requirements.txt (line 1))
```
It means that the version of Python you are running is not compatible with the version of Ansible that Kubespray supports.
If the latest version supported according to pip is 6.7.0 it means you are running Python 3.8 or lower while you need at least Python 3.9 (see the table below).
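Before installing the requirements, a quick way to check what your control host is actually running (a sketch; any equivalent check works):
```ShellSession
python3 --version
pip --version   # also reports which Python interpreter pip is bound to
```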
### Ansible Python Compatibility
Based on the table below and the available python version for your ansible host you should choose the appropriate ansible version to use with kubespray.
| Ansible Version | Python Version |
|-----------------|----------------|
| 2.14 | 3.9-3.11 |
| 2.11 | 2.7,3.5-3.9 |
| 2.12 | 3.8-3.10 |
## Inventory

View File

@@ -15,7 +15,7 @@ Kubespray can be installed as an [Ansible collection](https://docs.ansible.com/a
collections:
- name: https://github.com/kubernetes-sigs/kubespray
type: git
version: master # use the appropriate tag or branch for the version you need
version: v2.22.2
```
2. Install your collection
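A sketch of the install step, assuming the `requirements.yml` above is in the current directory:
```ShellSession
ansible-galaxy collection install -r requirements.yml
```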

View File

@@ -58,23 +58,11 @@ Guide:
```ShellSession
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="yyyyy"
export AWS_REGION="us-east-2"
export REGION="us-east-2"
```
- We will now create our cluster. There will be either one or two small changes. The first is that we will specify `-i inventory/kubespray-aws-inventory.py` as our inventory script. The other is conditional. If your AWS instances are public facing, you can set the `VPC_VISIBILITY` variable to `public` and that will result in public IP and DNS names being passed into the inventory. This causes your cluster.yml command to look like `VPC_VISIBILITY="public" ansible-playbook ... cluster.yml`
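For illustration, the resulting command might look like the following (a sketch only; the SSH user, key path and flags are placeholders to adapt to your environment):
```ShellSession
VPC_VISIBILITY="public" ansible-playbook -i inventory/kubespray-aws-inventory.py \
  -u ubuntu --private-key=~/.ssh/id_rsa -b cluster.yml
```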
**Optional** Using labels and taints
To add labels to your kubernetes node, add the following tag to your instance:
- Key: `kubespray-node-labels`
- Value: `node-role.kubernetes.io/ingress=`
To add taints to your kubernetes node, add the following tag to your instance:
- Key: `kubespray-node-taints`
- Value: `node-role.kubernetes.io/ingress=:NoSchedule`
## Kubespray configuration
Declare the cloud config variables for the `aws` provider as follows. Setting these variables is optional and depends on your use case.

View File

@@ -80,15 +80,10 @@ docker_registry_mirrors:
containerd_grpc_max_recv_message_size: 16777216
containerd_grpc_max_send_message_size: 16777216
containerd_registries_mirrors:
- prefix: docker.io
mirrors:
- host: https://mirror.gcr.io
capabilities: ["pull", "resolve"]
skip_verify: false
- host: https://registry-1.docker.io
capabilities: ["pull", "resolve"]
skip_verify: false
containerd_registries:
"docker.io":
- "https://mirror.gcr.io"
- "https://registry-1.docker.io"
containerd_max_container_log_line_size: -1

View File

@@ -11,13 +11,14 @@ amazon | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
centos7 | :white_check_mark: | :x: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: |
debian10 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: |
debian11 | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
debian12 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora37 | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: |
fedora38 | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | :x: |
opensuse | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :white_check_mark: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: |
ubuntu16 | :x: | :x: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: |
ubuntu18 | :white_check_mark: | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: |
ubuntu20 | :white_check_mark: | :x: | :x: | :white_check_mark: | :x: | :x: | :x: | :x: |
ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
## crio
@@ -29,13 +30,14 @@ amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
centos7 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian10 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian12 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora37 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora38 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
opensuse | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu16 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu18 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu22 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
## docker
@@ -47,11 +49,12 @@ amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
centos7 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian10 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian12 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora37 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora38 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :white_check_mark: |
opensuse | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :white_check_mark: |
ubuntu16 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :white_check_mark: |
ubuntu18 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |

View File

@@ -99,4 +99,4 @@ For the moment, only Cinder v3 is supported by the CSI Driver.
## More info
For further information about the Cinder CSI Driver, you can refer to this page: [Cloud Provider OpenStack](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md).
For further information about the Cinder CSI Driver, you can refer to this page: [Cloud Provider OpenStack](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md).

View File

@@ -24,20 +24,15 @@ etcd_deployment_type: host
Example: define registry mirror for docker hub
```yaml
containerd_registries_mirrors:
- prefix: docker.io
mirrors:
- host: https://mirror.gcr.io
capabilities: ["pull", "resolve"]
skip_verify: false
- host: https://registry-1.docker.io
capabilities: ["pull", "resolve"]
skip_verify: false
containerd_registries:
"docker.io":
- "https://mirror.gcr.io"
- "https://registry-1.docker.io"
```
`containerd_registries_mirrors` is ignored for pulling images when `image_command_tool=nerdctl`
`containerd_registries` is ignored for pulling images when `image_command_tool=nerdctl`
(the default for `container_manager=containerd`). Use `crictl` instead, it supports
`containerd_registries_mirrors` but lacks proper multi-arch support (see
`containerd_registries` but lacks proper multi-arch support (see
[#8375](https://github.com/kubernetes-sigs/kubespray/issues/8375)):
```yaml
@@ -108,35 +103,13 @@ containerd_runc_runtime:
Config insecure-registry access to self hosted registries.
```yaml
containerd_registries_mirrors:
- prefix: test.registry.io
mirrors:
- host: http://test.registry.io
capabilities: ["pull", "resolve"]
skip_verify: true
- prefix: 172.19.16.11:5000
mirrors:
- host: http://172.19.16.11:5000
capabilities: ["pull", "resolve"]
skip_verify: true
- prefix: repo:5000
mirrors:
- host: http://repo:5000
capabilities: ["pull", "resolve"]
skip_verify: true
containerd_insecure_registries:
"test.registry.io": "http://test.registry.io"
"172.19.16.11:5000": "http://172.19.16.11:5000"
"repo:5000": "http://repo:5000"
```
[containerd]: https://containerd.io/
[RuntimeClass]: https://kubernetes.io/docs/concepts/containers/runtime-class/
[runtime classes in containerd]: https://github.com/containerd/containerd/blob/main/docs/cri/config.md#runtime-classes
[runtime-spec]: https://github.com/opencontainers/runtime-spec
### Optional : NRI
[Node Resource Interface](https://github.com/containerd/nri) (NRI) is disabled by default for containerd. If you
are using containerd version v1.7.0 or above, then you can enable it with the
following configuration:
```yaml
nri_enabled: true
```

View File

@@ -62,13 +62,3 @@ The `allowed_annotations` configures `crio.conf` accordingly.
The `crio_remap_enable` configures the `/etc/subuid` and `/etc/subgid` files to add an entry for the **containers** user.
By default, 16M uids and gids are reserved for user namespaces (256 pods * 65536 uids/gids) at the end of the uid/gid space.
## Optional : NRI
[Node Resource Interface](https://github.com/containerd/nri) (NRI) is disabled by default for CRI-O. If you
are using CRI-O version v1.26.0 or above, then you can enable it with the
following configuration:
```yaml
nri_enabled: true
```

View File

@@ -1,6 +1,6 @@
# K8s DNS stack by Kubespray
For K8s cluster nodes, Kubespray configures a [Kubernetes DNS](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/)
For K8s cluster nodes, Kubespray configures a [Kubernetes DNS](https://kubernetes.io/docs/admin/dns/)
[cluster add-on](https://releases.k8s.io/master/cluster/addons/README.md)
to serve as an authoritative DNS server for a given ``dns_domain`` and its
``svc, default.svc`` default subdomains (a total of ``ndots: 5`` max levels).
@@ -143,11 +143,6 @@ coredns_default_zone_cache_block: |
}
```
### systemd_resolved_disable_stub_listener
Whether or not to set `DNSStubListener=no` when using systemd-resolved. Defaults to `true` on Flatcar.
You might need to set it to `true` if CoreDNS fails to start with `address already in use` errors.
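If CoreDNS does fail with `address already in use`, a quick way to see what is already listening on port 53 on the node (a sketch; any equivalent check works):
```ShellSession
sudo ss -ltunp | grep ':53'
```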
## DNS modes supported by Kubespray
You can modify how Kubespray sets up DNS for your cluster with the variables ``dns_mode`` and ``resolvconf_mode``.

View File

@@ -44,9 +44,3 @@ kubeEtcd:
service:
enabled: false
```
To fully override metrics exposition urls, define it in the inventory with:
```yaml
etcd_listen_metrics_urls: "http://0.0.0.0:2381"
```

View File

@@ -26,8 +26,7 @@ You may also define the port the local internal loadbalancer uses by changing,
`loadbalancer_apiserver_port`. This defaults to the value of
`kube_apiserver_port`. It is also important to note that Kubespray will only
configure kubelet and kube-proxy on non-master nodes to use the local internal
loadbalancer. If you wish to control the name of the loadbalancer container,
you can set the variable `loadbalancer_apiserver_pod_name`.
loadbalancer.
If you choose to NOT use the local internal loadbalancer, you will need to
use the [kube-vip](kube-vip.md) ansible role or configure your own loadbalancer to achieve HA. By default, it only configures a non-HA endpoint, which points to the

View File

@@ -118,7 +118,7 @@ Let's take a deep look to the resultant **kubernetes** configuration:
* The `enable-admission-plugins` list does not include the `PodSecurityPolicy` admission plugin, because it was removed from **kubernetes** `v1.25`. Instead we set the newer `PodSecurity` admission (for more details, please take a look here: <https://kubernetes.io/docs/concepts/security/pod-security-admission/>). We also set the `EventRateLimit` plugin, providing additional configuration files (that are automatically created under the hood and mounted inside the `kube-apiserver` container) to make it work.
* The `encryption-provider-config` provides encryption at rest: the `kube-apiserver` encrypts data before it is stored in `etcd`, so the data is unreadable directly from `etcd` (in case an attacker manages to access it).
* The `rotateCertificates` in `KubeletConfiguration` is set to `true` along with `serverTLSBootstrap`. This can be used as an alternative to the `tlsCertFile` and `tlsPrivateKeyFile` parameters, and the kubelet then generates its certificates by itself. By default the CSRs are approved automatically via [kubelet-csr-approver](https://github.com/postfinance/kubelet-csr-approver). You can customize the approval configuration by modifying the Helm values via `kubelet_csr_approver_values`.
See <https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/> for more information on the subject.
See <https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/> for more information on the subject.
* If you are installing **kubernetes** on an AppArmor-based OS (e.g. Debian/Ubuntu) you can enable the `AppArmor` feature gate by uncommenting the lines with the comment `# AppArmor-based OS` on top.
* The `kubelet_systemd_hardening` variable, together with `kubelet_secure_addresses`, sets up a minimal firewall on the system. To better understand how these variables work, here's an explanatory image:
![kubelet hardening](img/kubelet-hardening.png)

View File

@@ -2,7 +2,7 @@
**NOTE:** The current image version is `v1.1.6`. Please file any issues you find and note the version used.
The AWS ALB Ingress Controller satisfies Kubernetes [ingress resources](https://kubernetes.io/docs/concepts/services-networking/ingress/) by provisioning [Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html).
The AWS ALB Ingress Controller satisfies Kubernetes [ingress resources](https://kubernetes.io/docs/user-guide/ingress) by provisioning [Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html).
This project was originated by [Ticketmaster](https://github.com/ticketmaster) and [CoreOS](https://github.com/coreos) as part of Ticketmaster's move to AWS and CoreOS Tectonic. Learn more about Ticketmaster's Kubernetes initiative from Justin Dean's video at [Tectonic Summit](https://www.youtube.com/watch?v=wqXVKneP0Hg).
@@ -10,20 +10,20 @@ This project was donated to Kubernetes SIG-AWS to allow AWS, CoreOS, Ticketmaste
## Documentation
Checkout our [Live Docs](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v1.1/#aws-alb-ingress-controller)!
Checkout our [Live Docs](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/)!
## Getting started
To get started with the controller, see our [walkthrough](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v1.1/guide/walkthrough/echoserver/).
To get started with the controller, see our [walkthrough](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/walkthrough/echoserver/).
## Setup
- See [controller setup](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v1.1/guide/controller/setup/) on how to install ALB ingress controller
- See [external-dns setup](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v1.1/guide/external-dns/setup/) for how to setup the external-dns to manage route 53 records.
- See [controller setup](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/controller/setup/) on how to install ALB ingress controller
- See [external-dns setup](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/external-dns/setup/) for how to setup the external-dns to manage route 53 records.
## Building
For details on building this project, see our [building guide](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v1.1/BUILDING/).
For details on building this project, see our [building guide](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/BUILDING/).
## Community, discussion, contribution, and support

View File

@@ -113,7 +113,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/mast
This example creates an ELB with just two listeners, one in port 80 and another in port 443
![Listeners](https://github.com/kubernetes/ingress-nginx/blob/main/docs/images/elb-l7-listener.png)
![Listeners](https://github.com/kubernetes/ingress-nginx/raw/master/docs/images/elb-l7-listener.png)
##### ELB Idle Timeouts

View File

@@ -5,7 +5,7 @@ You can use it quickly & easily deploy ceph RBD storage that works almost
anywhere.
It works just like in-tree dynamic provisioner. For more information on how
dynamic provisioning works, see [the docs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
dynamic provisioning works, see [the docs](http://kubernetes.io/docs/user-guide/persistent-volumes/)
or [this blog post](http://blog.kubernetes.io/2016/10/dynamic-provisioning-and-storage-in-kubernetes.html).
## Development

View File

@@ -2,7 +2,7 @@
Distributed systems such as Kubernetes are designed to be resilient to failures. More details about Kubernetes High-Availability (HA) may be found at
[Building High-Availability Clusters](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/)
[Building High-Availability Clusters](https://kubernetes.io/docs/admin/high-availability/)
To keep the view simple, most parts of HA are skipped here and only the Kubelet<->Controller Manager communication is described.

View File

@@ -90,9 +90,6 @@ In all hosts, restart nginx-proxy pod. This pod is a local proxy for the apiserv
```sh
# run in every host
docker ps | grep k8s_nginx-proxy_nginx-proxy | awk '{print $1}' | xargs docker restart
# or with containerd
crictl ps | grep nginx-proxy | awk '{print $1}' | xargs crictl stop
```
### 3) Remove old control plane nodes

View File

@@ -33,7 +33,6 @@ kube_image_repo: "{{ registry_host }}"
gcr_image_repo: "{{ registry_host }}"
docker_image_repo: "{{ registry_host }}"
quay_image_repo: "{{ registry_host }}"
github_image_repo: "{{ registry_host }}"
kubeadm_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubeadm"
kubectl_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubectl"
@@ -51,12 +50,8 @@ containerd_download_url: "{{ files_repo }}/containerd-{{ containerd_version }}-l
runc_download_url: "{{ files_repo }}/runc.{{ image_arch }}"
nerdctl_download_url: "{{ files_repo }}/nerdctl-{{ nerdctl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
# Insecure registries for containerd
containerd_registries_mirrors:
- prefix: "{{ registry_addr }}"
mirrors:
- host: "{{ registry_host }}"
capabilities: ["pull", "resolve"]
skip_verify: true
containerd_insecure_registries:
"{{ registry_addr }}""{{ registry_host }}"
# CentOS/Redhat/AlmaLinux/Rocky Linux
## Docker / Containerd
@@ -95,7 +90,7 @@ If you use the settings like the one above, you'll need to define in your invent
* `registry_host`: Container image registry. If you _don't_ use the same repository path for the container images as
the ones defined
in [Download's role defaults](https://github.com/kubernetes-sigs/kubespray/blob/master/roles/download/defaults/main/main.yml)
in [Download's role defaults](https://github.com/kubernetes-sigs/kubespray/blob/master/roles/download/defaults/main.yml)
, you need to override the `*_image_repo` for these container images. If you want to make your life easier, use the
same repository path and you won't have to override anything else.
* `registry_addr`: Container image registry, but containing only [domain or ip]:[port].

View File

@@ -7,12 +7,6 @@ If you set http and https proxy, all nodes and loadbalancer will be excluded fro
`http_proxy:"http://example.proxy.tld:port"`
`https_proxy:"http://example.proxy.tld:port"`
## Set custom CA
The CA must already be present on each target node
`https_proxy_cert_file: /path/to/host/custom/ca.crt`
## Set default no_proxy (this will override default no_proxy generation)
`no_proxy: "node1,node1_ip,node2,node2_ip...additional_host"`

View File

@@ -1,6 +1,6 @@
# Node Layouts
There are six node layout types: `default`, `separate`, `ha`, `scale`, `all-in-one`, and `multinode`.
There are four node layout types: `default`, `separate`, `ha`, and `scale`.
`default` is a non-HA two nodes setup with one separate `kube_node`
and the `etcd` group merged with the `kube_control_plane`.
@@ -16,10 +16,6 @@ in the Ansible inventory. This helps test TLS certificate generation at scale
to prevent regressions and profile certain long-running tasks. These nodes are
never actually deployed, but certificates are generated for them.
`all-in-one` layout uses a single node with `kube_control_plane`, `etcd` and `kube_node` merged.
`multinode` layout consists of two separate `kube_node` nodes and a single merged `etcd` + `kube_control_plane` node.
Note: the canal network plugin deploys flannel as well as the calico policy controller.
## Test cases

View File

@@ -403,16 +403,3 @@ Please note that **migrating container engines is not officially supported by Ku
As of Kubespray 2.18.0, containerd is already the default container engine. If you have the chance, it is advisable and safer to reset and redeploy the entire cluster with a new container engine.
* [Migrating from Docker to Containerd](upgrades/migrate_docker2containerd.md)
## System upgrade
If you want to upgrade the APT or YUM packages while the nodes are cordoned, you can use:
```ShellSession
ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e system_upgrade=true
```
Nodes will be rebooted when there are package upgrades (`system_upgrade_reboot: on-upgrade`).
This can be changed to `always` or `never`.
Note: Downloads will happen twice unless `system_upgrade_reboot` is `never`.

View File

@@ -120,7 +120,7 @@ following default cluster parameters:
alpha/experimental Kubeadm features. (default is `[]`)
* *authorization_modes* - A list of [authorization mode](
https://kubernetes.io/docs/reference/access-authn-authz/authorization/#using-flags-for-your-authorization-module)
https://kubernetes.io/docs/admin/authorization/#using-flags-for-your-authorization-module)
that the cluster should be configured for. Defaults to `['Node', 'RBAC']`
(Node and RBAC authorizers).
Note: `Node` and `RBAC` are enabled by default. Previously deployed clusters can be
@@ -216,12 +216,6 @@ Stack](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.m
* *kubelet_make_iptables_util_chains* - If `true`, the kubelet ensures a set of `iptables` rules is present on the host.
* *kubelet_cpu_manager_policy* - If set to `static`, allows pods with certain resource characteristics to be granted increased CPU affinity and exclusivity on the node. It should be set together with `kube_reserved` or `system-reserved`; enable this with the following guide: [Control CPU Management Policies on the Node](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/)
* *kubelet_topology_manager_policy* - Control the behavior of the allocation of CPU and Memory from different [NUMA](https://en.wikipedia.org/wiki/Non-uniform_memory_access) Nodes. Enable this with the following guide: [Control Topology Management Policies on a node](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager).
* *kubelet_topology_manager_scope* - The Topology Manager can deal with the alignment of resources in a couple of distinct scopes: `container` and `pod`. See [Topology Manager Scopes](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/#topology-manager-scopes).
* *kubelet_systemd_hardening* - If `true`, provides kubelet systemd service with security features for isolation.
**N.B.** To enable this feature, ensure you are using **`cgroup v2`** on your system. Check with the command: `sudo ls -l /sys/fs/cgroup/*.slice`. If the directory does not exist, enable it with the following guide: [enable cgroup v2](https://rootlesscontaine.rs/getting-started/common/cgroup2/#enabling-cgroup-v2).

View File

@@ -1,6 +1,5 @@
---
- name: Remove old cloud provider config
hosts: kube_node:kube_control_plane
- hosts: kube_node:kube_control_plane
tasks:
- name: Remove old cloud provider config
file:
@@ -8,8 +7,7 @@
state: absent
with_items:
- /etc/kubernetes/cloud_config
- name: Migrate intree Cinder PV
hosts: kube_control_plane[0]
- hosts: kube_control_plane[0]
tasks:
- name: Include kubespray-default variables
include_vars: ../roles/kubespray-defaults/defaults/main.yaml
@@ -18,13 +16,13 @@
src: get_cinder_pvs.sh
dest: /tmp
mode: u+rwx
- name: Get PVs provisioned by in-tree cloud provider
- name: Get PVs provisioned by in-tree cloud provider # noqa 301
command: /tmp/get_cinder_pvs.sh
register: pvs
- name: Remove get_cinder_pvs.sh
file:
path: /tmp/get_cinder_pvs.sh
state: absent
- name: Rewrite the "pv.kubernetes.io/provisioned-by" annotation
- name: Rewrite the "pv.kubernetes.io/provisioned-by" annotation # noqa 301
command: "{{ bin_dir }}/kubectl annotate --overwrite pv {{ item }} pv.kubernetes.io/provisioned-by=cinder.csi.openstack.org"
loop: "{{ pvs.stdout_lines | list }}"

View File

@@ -10,15 +10,13 @@
### In most cases, you probably want to use upgrade-cluster.yml playbook and
### not this one.
- name: Setup ssh config to use the bastion
hosts: localhost
- hosts: localhost
gather_facts: False
roles:
- { role: kubespray-defaults}
- { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
- name: Bootstrap hosts OS for Ansible
hosts: k8s_cluster:etcd:calico_rr
- hosts: k8s_cluster:etcd:calico_rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
gather_facts: false
vars:
@@ -29,8 +27,7 @@
- { role: kubespray-defaults}
- { role: bootstrap-os, tags: bootstrap-os}
- name: Preinstall
hosts: k8s_cluster:etcd:calico_rr
- hosts: k8s_cluster:etcd:calico_rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}

View File

@@ -1,6 +1,5 @@
---
- name: Wait for cloud-init to finish
hosts: all
- hosts: all
tasks:
- name: Wait for cloud-init to finish
command: cloud-init status --wait

View File

@@ -2,12 +2,13 @@
namespace: kubernetes_sigs
description: Deploy a production ready Kubernetes cluster
name: kubespray
version: 2.23.2
version: 2.22.2
readme: README.md
authors:
- luksi1
tags:
- infrastructure
- kubernetes
- kubespray
repository: https://github.com/kubernetes-sigs/kubespray
build_ignore:
- .github

View File

@@ -48,14 +48,13 @@ loadbalancer_apiserver_healthcheck_port: 8081
# cloud_provider:
## When cloud_provider is set to 'external', you can set the cloud controller to deploy
## Supported cloud controllers are: 'openstack', 'vsphere', 'huaweicloud' and 'hcloud'
## Supported cloud controllers are: 'openstack', 'vsphere' and 'hcloud'
## When openstack or vsphere are used make sure to source in the required fields
# external_cloud_provider:
## Set these proxy values in order to update package manager and docker daemon to use proxies and custom CA for https_proxy if needed
## Set these proxy values in order to update package manager and docker daemon to use proxies
# http_proxy: ""
# https_proxy: ""
# https_proxy_cert_file: ""
## Refer to roles/kubespray-defaults/defaults/main.yml before modifying no_proxy
# no_proxy: ""

View File

@@ -30,13 +30,17 @@
# containerd_metrics_grpc_histogram: false
# Registries defined within containerd.
# containerd_registries_mirrors:
# - prefix: docker.io
# mirrors:
# - host: https://registry-1.docker.io
# capabilities: ["pull", "resolve"]
# skip_verify: false
## An obvious use case is allowing insecure-registry access to self hosted registries.
## Can be ipaddress and domain_name.
## example define mirror.registry.io or 172.19.16.11:5000
## set "name": "url". insecure url must be started http://
## Port number is also needed if the default HTTPS port is not used.
# containerd_insecure_registries:
# "localhost": "http://127.0.0.1"
# "172.19.16.11:5000": "http://172.19.16.11:5000"
# containerd_registries:
# "docker.io": "https://registry-1.docker.io"
# containerd_max_container_log_line_size: -1

View File

@@ -3,7 +3,6 @@
# hcloud_api_token: ""
# token_secret_name: hcloud
# with_networks: false # Use the hcloud controller-manager with networks support https://github.com/hetznercloud/hcloud-cloud-controller-manager#networks-support
# network_name: # network name/ID: If you manage the network yourself it might still be required to let the CCM know about private networks
# service_account_name: cloud-controller-manager
#
# controller_image_tag: "latest"
@@ -13,10 +12,3 @@
# ## arg1: "value1"
# ## arg2: "value2"
# controller_extra_args: {}
#
# load_balancers_location: # mutually exclusive with load_balancers_network_zone
# load_balancers_network_zone:
# load_balancers_disable_private_ingress: # set to true if using IPVS based plugins https://github.com/hetznercloud/hcloud-cloud-controller-manager/blob/main/docs/load_balancers.md#sample-service-with-networks
# load_balancers_use_private_ip: # set to true if using private networks
# load_balancers_enabled:
# network_routes_enabled:

View File

@@ -1,17 +0,0 @@
## Values for the external Huawei Cloud Controller
# external_huaweicloud_lbaas_subnet_id: "Neutron subnet ID to create LBaaS VIP"
# external_huaweicloud_lbaas_network_id: "Neutron network ID to create LBaaS VIP"
## Credentials to authenticate against Keystone API
## All of them are required Per default these values will be
## read from the environment.
# external_huaweicloud_auth_url: "{{ lookup('env','OS_AUTH_URL') }}"
# external_huaweicloud_access_key: "{{ lookup('env','OS_ACCESS_KEY') }}"
# external_huaweicloud_secret_key: "{{ lookup('env','OS_SECRET_KEY') }}"
# external_huaweicloud_region: "{{ lookup('env','OS_REGION_NAME') }}"
# external_huaweicloud_project_id: "{{ lookup('env','OS_TENANT_ID')| default(lookup('env','OS_PROJECT_ID'),true) }}"
# external_huaweicloud_cloud: "{{ lookup('env','OS_CLOUD') }}"
## The repo and tag of the external Huawei Cloud Controller image
# external_huawei_cloud_controller_image_repo: "swr.ap-southeast-1.myhuaweicloud.com"
# external_huawei_cloud_controller_image_tag: "v0.26.3"

View File

@@ -33,12 +33,16 @@
# [Optional] Calico: If using Calico network plugin
# calicoctl_download_url: "{{ files_repo }}/github.com/projectcalico/calico/releases/download/{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
# calicoctl_alternate_download_url: "{{ files_repo }}/github.com/projectcalico/calicoctl/releases/download/{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
# [Optional] Calico with kdd: If using Calico network plugin with kdd datastore
# calico_crds_download_url: "{{ files_repo }}/github.com/projectcalico/calico/archive/{{ calico_version }}.tar.gz"
# [Optional] Cilium: If using Cilium network plugin
# ciliumcli_download_url: "{{ files_repo }}/github.com/cilium/cilium-cli/releases/download/{{ cilium_cli_version }}/cilium-linux-{{ image_arch }}.tar.gz"
# [Optional] Flannel: If using Flannel network plugin
# flannel_cni_download_url: "{{ files_repo }}/kubernetes/flannel/{{ flannel_cni_version }}/flannel-{{ image_arch }}"
# [Optional] helm: only if you set helm_enabled: true
# helm_download_url: "{{ files_repo }}/get.helm.sh/helm-{{ helm_version }}-linux-{{ image_arch }}.tar.gz"

View File

@@ -29,7 +29,7 @@ local_path_provisioner_enabled: false
# local_path_provisioner_claim_root: /opt/local-path-provisioner/
# local_path_provisioner_debug: false
# local_path_provisioner_image_repo: "rancher/local-path-provisioner"
# local_path_provisioner_image_tag: "v0.0.24"
# local_path_provisioner_image_tag: "v0.0.23"
# local_path_provisioner_helper_image_repo: "busybox"
# local_path_provisioner_helper_image_tag: "latest"
@@ -125,8 +125,6 @@ ingress_publish_status_address: ""
# - --default-ssl-certificate=default/foo-tls
# ingress_nginx_termination_grace_period_seconds: 300
# ingress_nginx_class: nginx
# ingress_nginx_without_class: true
# ingress_nginx_default: false
# ALB ingress controller deployment
ingress_alb_enabled: false
@@ -169,10 +167,6 @@ cert_manager_enabled: false
# - "1.1.1.1"
# - "8.8.8.8"
# cert_manager_controller_extra_args:
# - "--dns01-recursive-nameservers-only=true"
# - "--dns01-recursive-nameservers=1.1.1.1:53,8.8.8.8:53"
# MetalLB deployment
metallb_enabled: false
metallb_speaker_enabled: "{{ metallb_enabled }}"
@@ -244,7 +238,7 @@ metallb_speaker_enabled: "{{ metallb_enabled }}"
# - pool2
argocd_enabled: false
# argocd_version: v2.8.0
# argocd_version: v2.7.2
# argocd_namespace: argocd
# Default password:
# - https://argo-cd.readthedocs.io/en/stable/getting_started/#4-login-using-the-cli

Some files were not shown because too many files have changed in this diff.