Mirror of https://github.com/kubernetes-sigs/kubespray.git
Synced 2025-12-14 13:54:37 +03:00

Compare commits: 156 commits
Commits included in this comparison:

64447e745e, 78eb74c252, 669589f761, b7a83531e7, a9e29a9eb2, a0a2f40295,
7b7c9f509e, beb2660aa8, 3f78bf9298, 06a2a3ed6c, eb40523388, 50fbfa2a9a,
747d8bb4c2, e90cae9344, bb67d9524d, a306f15a74, 8c09c3fda2, a656b7ed9a,
2e8b72e278, ddf5c6ee12, eda7ea5695, 08c0b34270, 1a86b4cb6d, aea150e5dc,
ee2dd4fd28, c3b674526d, 565eab901b, c3315ac742, da9b34d1b0, 243ca5d08f,
29ea790c30, ae780e6a9b, 471326f458, 7395c27932, d435edefc4, eb73f1d27d,
9a31f3285a, 45a070f1ba, ccb742c7ab, cb848fa7cb, 8abf49ae13, 8f2390a120,
81a3f81aa1, 0fb404c775, 51069223f5, 17b51240c9, 306103ed05, eb628efbc4,
2c3ea84e6f, 85f15900a4, af1f318852, b31afe235f, a9321aaf86, d2944d2813,
fe02d21d23, 5160e7e20b, c440106eff, a1c47b1b20, 93724ed29c, 75fecf1542,
0d7bdc6cca, c87d70b04b, fa7a504fa5, 612cfdceb1, 70bb19dd23, 94d3f65f09,
cf3ac625da, c2e3071a33, 21e8b96e22, 3acacc6150, d583d331b5, b321ca3e64,
6b1188e3dc, 0d4f57aa22, bc5b38a771, f46910eac3, adb8ff14b9, 7ba85710ad,
cbd3a83a06, eb015c0362, 17681a7e31, cca7615456, a4b15690b8, 32743868c7,
7d221be408, 2d75077d4a, 802da0bcb0, 6305dd39e9, b3f6d05131, 8ebeb88e57,
c9d685833b, f3332af3f2, 870065517f, 267a8c6025, edff3f8afd, cdc8d17d0b,
8f0e553e11, 5f9a7b9d49, af7bc17c9a, e2b62ba154, 5da421c178, becb6267fb,
34754ccb38, dcd0edce40, 7a0030b145, fa9e41047e, f5f1f9478c, 6a70f02662,
3bc0dfb354, 418df29ff0, 1f47d5b74f, e52d70885e, 3f1409d87d, 0b2e5b2f82,
228efcba0e, 401ea552c2, 8cce6df80a, 3e522a9f59, ae45de3584, 513b6dd6ad,
e65050d3f4, 4a8a47d438, b2d8ec68a4, d3101d65aa, abaddb4c9b, acb86c23f9,
bea5034ddf, 5194d8306e, 4846f33136, de8d1f1a3b, ddd7aa844c, 1fd31ccc28,
6f520eacf7, a0eb7c0d5c, 94322ef72e, c6ab6406c2, 2c132dccba, 7919a47165,
7b2586943b, f964b3438d, 09f3caedaa, fe4b1f6dee, bc5e33791f, d669b93c4f,
a81c6d5448, 6b34e3ef08, dbdc4d4123, c24c279df7, 0f243d751f, 31f6d38cd2,
c31bb9aca7, 748b0b294d, af8210dfea, 493969588e, 293573c665, 5ffdb7355a
```diff
@@ -5,4 +5,4 @@ roles/kubernetes/control-plane/defaults/main/main.yml jinja[spacing]
 roles/kubernetes/kubeadm/defaults/main.yml jinja[spacing]
 roles/kubernetes/node/defaults/main.yml jinja[spacing]
 roles/kubernetes/preinstall/defaults/main.yml jinja[spacing]
-roles/kubespray-defaults/defaults/main.yaml jinja[spacing]
+roles/kubespray-defaults/defaults/main/main.yml jinja[spacing]
```
.github/ISSUE_TEMPLATE/bug-report.md (vendored, 44 lines removed)

```diff
@@ -1,44 +0,0 @@
----
-name: Bug Report
-about: Report a bug encountered while operating Kubernetes
-labels: kind/bug
-
----
-<!--
-Please, be ready for followup questions, and please respond in a timely
-manner. If we can't reproduce a bug or think a feature already exists, we
-might close your issue. If we're wrong, PLEASE feel free to reopen it and
-explain why.
--->
-
-**Environment**:
-- **Cloud provider or hardware configuration:**
-
-- **OS (`printf "$(uname -srm)\n$(cat /etc/os-release)\n"`):**
-
-- **Version of Ansible** (`ansible --version`):
-
-- **Version of Python** (`python --version`):
-
-
-**Kubespray version (commit) (`git rev-parse --short HEAD`):**
-
-
-**Network plugin used**:
-
-
-**Full inventory with variables (`ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"`):**
-<!-- We recommend using snippets services like https://gist.github.com/ etc. -->
-
-**Command used to invoke ansible**:
-
-
-**Output of ansible run**:
-<!-- We recommend using snippets services like https://gist.github.com/ etc. -->
-
-**Anything else do we need to know**:
-<!-- By running scripts/collect-info.yaml you can get a lot of useful informations.
-Script can be started by:
-ansible-playbook -i <inventory_file_path> -u <ssh_user> -e ansible_ssh_user=<ssh_user> -b --become-user=root -e dir=`pwd` scripts/collect-info.yaml
-(If you using CoreOS remember to add '-e ansible_python_interpreter=/opt/bin/python').
-After running this command you can find logs in `pwd`/logs.tar.gz. You can even upload somewhere entire file and paste link here.-->
```
.github/ISSUE_TEMPLATE/bug-report.yaml (vendored, new file, 117 lines)

```diff
@@ -0,0 +1,117 @@
+---
+name: Bug Report
+description: Report a bug encountered while using Kubespray
+labels: kind/bug
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Please, be ready for followup questions, and please respond in a timely
+        manner. If we can't reproduce a bug or think a feature already exists, we
+        might close your issue. If we're wrong, PLEASE feel free to reopen it and
+        explain why.
+  - type: textarea
+    id: problem
+    attributes:
+      label: What happened?
+      description: |
+        Please provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner.
+    validations:
+      required: true
+  - type: textarea
+    id: expected
+    attributes:
+      label: What did you expect to happen?
+    validations:
+      required: true
+
+  - type: textarea
+    id: repro
+    attributes:
+      label: How can we reproduce it (as minimally and precisely as possible)?
+    validations:
+      required: true
+
+  - type: markdown
+    attributes:
+      value: '### Environment'
+
+  - type: textarea
+    id: os
+    attributes:
+      label: OS
+      placeholder: 'printf "$(uname -srm)\n$(cat /etc/os-release)\n"'
+    validations:
+      required: true
+
+  - type: textarea
+    id: ansible_version
+    attributes:
+      label: Version of Ansible
+      placeholder: 'ansible --version'
+    validations:
+      required: true
+
+  - type: input
+    id: python_version
+    attributes:
+      label: Version of Python
+      placeholder: 'python --version'
+    validations:
+      required: true
+
+  - type: input
+    id: kubespray_version
+    attributes:
+      label: Version of Kubespray (commit)
+      placeholder: 'git rev-parse --short HEAD'
+    validations:
+      required: true
+
+  - type: dropdown
+    id: network_plugin
+    attributes:
+      label: Network plugin used
+      options:
+        - calico
+        - cilium
+        - cni
+        - custom_cni
+        - flannel
+        - kube-ovn
+        - kube-router
+        - macvlan
+        - meta
+        - multus
+        - ovn4nfv
+        - weave
+    validations:
+      required: true
+
+  - type: textarea
+    id: inventory
+    attributes:
+      label: Full inventory with variables
+      placeholder: 'ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"'
+
+  - type: input
+    id: ansible_command
+    attributes:
+      label: Command used to invoke ansible
+
+  - type: textarea
+    id: ansible_output
+    attributes:
+      label: Output of ansible run
+      description: We recommend using snippets services like https://gist.github.com/ etc.
+
+  - type: textarea
+    id: anything_else
+    attributes:
+      label: Anything else we need to know
+      description: |
+        By running scripts/collect-info.yaml you can get a lot of useful informations.
+        Script can be started by:
+        ansible-playbook -i <inventory_file_path> -u <ssh_user> -e ansible_ssh_user=<ssh_user> -b --become-user=root -e dir=`pwd` scripts/collect-info.yaml
+        (If you using CoreOS remember to add '-e ansible_python_interpreter=/opt/bin/python').
+        After running this command you can find logs in `pwd`/logs.tar.gz. You can even upload somewhere entire file and paste link here
```
.github/ISSUE_TEMPLATE/config.yml (vendored, new file, 5 lines)

```diff
@@ -0,0 +1,5 @@
+---
+contact_links:
+  - name: Support Request
+    url: https://kubernetes.slack.com/channels/kubespray
+    about: Support request or question relating to Kubernetes
```
.github/ISSUE_TEMPLATE/enhancement.md (vendored, 11 lines removed)

```diff
@@ -1,11 +0,0 @@
----
-name: Enhancement Request
-about: Suggest an enhancement to the Kubespray project
-labels: kind/feature
-
----
-<!-- Please only use this template for submitting enhancement requests -->
-
-**What would you like to be added**:
-
-**Why is this needed**:
```
.github/ISSUE_TEMPLATE/enhancement.yaml (vendored, new file, 20 lines)

```diff
@@ -0,0 +1,20 @@
+---
+name: Enhancement Request
+description: Suggest an enhancement to the Kubespray project
+labels: kind/feature
+body:
+  - type: markdown
+    attributes:
+      value: Please only use this template for submitting enhancement requests
+  - type: textarea
+    id: what
+    attributes:
+      label: What would you like to be added
+    validations:
+      required: true
+  - type: textarea
+    id: why
+    attributes:
+      label: Why is this needed
+    validations:
+      required: true
```
.github/ISSUE_TEMPLATE/failing-test.md (vendored, 20 lines removed)

```diff
@@ -1,20 +0,0 @@
----
-name: Failing Test
-about: Report test failures in Kubespray CI jobs
-labels: kind/failing-test
-
----
-
-<!-- Please only use this template for submitting reports about failing tests in Kubespray CI jobs -->
-
-**Which jobs are failing**:
-
-**Which test(s) are failing**:
-
-**Since when has it been failing**:
-
-**Testgrid link**:
-
-**Reason for failure**:
-
-**Anything else we need to know**:
```
.github/ISSUE_TEMPLATE/failing-test.yaml (vendored, new file, 41 lines)

```diff
@@ -0,0 +1,41 @@
+---
+name: Failing Test
+description: Report test failures in Kubespray CI jobs
+labels: kind/failing-test
+body:
+  - type: markdown
+    attributes:
+      value: Please only use this template for submitting reports about failing tests in Kubespray CI jobs
+  - type: textarea
+    id: failing_jobs
+    attributes:
+      label: Which jobs are failing ?
+    validations:
+      required: true
+
+  - type: textarea
+    id: failing_tests
+    attributes:
+      label: Which tests are failing ?
+    validations:
+      required: true
+
+  - type: input
+    id: since_when
+    attributes:
+      label: Since when has it been failing ?
+    validations:
+      required: true
+
+  - type: textarea
+    id: failure_reason
+    attributes:
+      label: Reason for failure
+      description: If you don't know and have no guess, just put "Unknown"
+    validations:
+      required: true
+
+  - type: textarea
+    id: anything_else
+    attributes:
+      label: Anything else we need to know
```
.github/ISSUE_TEMPLATE/support.md (vendored, 18 lines removed)

```diff
@@ -1,18 +0,0 @@
----
-name: Support Request
-about: Support request or question relating to Kubespray
-labels: kind/support
-
----
-
-<!--
-STOP -- PLEASE READ!
-
-GitHub is not the right place for support requests.
-
-If you're looking for help, check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubespray) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
-
-You can also post your question on the [Kubernetes Slack](http://slack.k8s.io/) or the [Discuss Kubernetes](https://discuss.kubernetes.io/) forum.
-
-If the matter is security related, please disclose it privately via https://kubernetes.io/security/.
--->
```
```diff
@@ -9,7 +9,7 @@ stages:
   - deploy-special
 
 variables:
-  KUBESPRAY_VERSION: v2.22.1
+  KUBESPRAY_VERSION: v2.23.2
   FAILFASTCI_NAMESPACE: 'kargo-ci'
   GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
   ANSIBLE_FORCE_COLOR: "true"
```
```diff
@@ -14,7 +14,7 @@ vagrant-validate:
   stage: unit-tests
   tags: [light]
   variables:
-    VAGRANT_VERSION: 2.3.4
+    VAGRANT_VERSION: 2.3.7
   script:
     - ./tests/scripts/vagrant-validate.sh
   except: ['triggers', 'master']
```
```diff
@@ -27,6 +27,14 @@ ansible-lint:
     - ansible-lint -v
   except: ['triggers', 'master']
 
+jinja-syntax-check:
+  extends: .job
+  stage: unit-tests
+  tags: [light]
+  script:
+    - "find -name '*.j2' -exec tests/scripts/check-templates.py {} +"
+  except: ['triggers', 'master']
+
 syntax-check:
   extends: .job
   stage: unit-tests
```
```diff
@@ -31,8 +31,8 @@ packet_cleanup_old:
     - make cleanup-packet
   after_script: []
 
-# The ubuntu20-calico-aio jobs are meant as early stages to prevent running the full CI if something is horribly broken
-packet_ubuntu20-calico-aio:
+# The ubuntu20-calico-all-in-one jobs are meant as early stages to prevent running the full CI if something is horribly broken
+packet_ubuntu20-calico-all-in-one:
   stage: deploy-part1
   extends: .packet_pr
   when: on_success
```
```diff
@@ -41,22 +41,27 @@ packet_ubuntu20-calico-aio:
 
 # ### PR JOBS PART2
 
-packet_ubuntu20-aio-docker:
+packet_ubuntu20-all-in-one-docker:
   stage: deploy-part2
   extends: .packet_pr
   when: on_success
 
-packet_ubuntu20-calico-aio-hardening:
+packet_ubuntu20-calico-all-in-one-hardening:
   stage: deploy-part2
   extends: .packet_pr
   when: on_success
 
-packet_ubuntu22-aio-docker:
+packet_ubuntu22-all-in-one-docker:
   stage: deploy-part2
   extends: .packet_pr
   when: on_success
 
-packet_ubuntu22-calico-aio:
+packet_ubuntu22-calico-all-in-one:
+  stage: deploy-part2
+  extends: .packet_pr
+  when: on_success
+
+packet_ubuntu22-calico-etcd-datastore:
   stage: deploy-part2
   extends: .packet_pr
   when: on_success
```
```diff
@@ -235,7 +240,7 @@ packet_fedora37-calico-swap-selinux:
   extends: .packet_pr
   when: manual
 
-packet_amazon-linux-2-aio:
+packet_amazon-linux-2-all-in-one:
   stage: deploy-part2
   extends: .packet_pr
   when: manual
```
```diff
@@ -260,6 +265,11 @@ packet_debian11-kubelet-csr-approver:
   extends: .packet_pr
   when: manual
 
+packet_debian12-custom-cni-helm:
+  stage: deploy-part2
+  extends: .packet_pr
+  when: manual
+
 # ### PR JOBS PART3
 # Long jobs (45min+)
 
```
```diff
@@ -69,3 +69,12 @@ repos:
       entry: tests/scripts/md-table/test.sh
       language: script
      pass_filenames: false
+
+    - id: jinja-syntax-check
+      name: jinja-syntax-check
+      entry: tests/scripts/check-templates.py
+      language: python
+      types:
+        - jinja
+      additional_dependencies:
+        - Jinja2
```
Dockerfile (61 lines changed)

```diff
@@ -1,5 +1,8 @@
+# syntax=docker/dockerfile:1
+
 # Use imutable image tags rather than mutable tags (like ubuntu:22.04)
-FROM ubuntu:jammy-20230308
+FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a445a7ca37
 
 # Some tools like yamllint need this
 # Pip needs this as well at the moment to install ansible
 # (and potentially other packages)
@@ -7,7 +10,37 @@ FROM ubuntu:jammy-20230308
 ENV LANG=C.UTF-8 \
     DEBIAN_FRONTEND=noninteractive \
     PYTHONDONTWRITEBYTECODE=1
 
 WORKDIR /kubespray
 
+# hadolint ignore=DL3008
+RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
+    apt-get update -q \
+    && apt-get install -yq --no-install-recommends \
+       curl \
+       python3 \
+       python3-pip \
+       sshpass \
+       vim \
+       rsync \
+       openssh-client \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/* /var/log/*
+
+RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
+    --mount=type=cache,sharing=locked,id=pipcache,mode=0777,target=/root/.cache/pip \
+    pip install --no-compile --no-cache-dir -r requirements.txt \
+    && find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;
+
+SHELL ["/bin/bash", "-o", "pipefail", "-c"]
+
+RUN --mount=type=bind,source=roles/kubespray-defaults/defaults/main/main.yml,target=roles/kubespray-defaults/defaults/main/main.yml \
+    KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main/main.yml) \
+    OS_ARCHITECTURE=$(dpkg --print-architecture) \
+    && curl -L "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
+    && echo "$(curl -L "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
+    && chmod a+x /usr/local/bin/kubectl
+
 COPY *.yml ./
 COPY *.cfg ./
 COPY roles ./roles
@@ -17,29 +50,3 @@ COPY library ./library
 COPY extra_playbooks ./extra_playbooks
 COPY playbooks ./playbooks
 COPY plugins ./plugins
-
-RUN apt update -q \
-    && apt install -yq --no-install-recommends \
-       curl \
-       python3 \
-       python3-pip \
-       sshpass \
-       vim \
-       rsync \
-       openssh-client \
-    && pip install --no-compile --no-cache-dir \
-       ansible==7.6.0 \
-       ansible-core==2.14.6 \
-       cryptography==41.0.1 \
-       jinja2==3.1.2 \
-       netaddr==0.8.0 \
-       jmespath==1.0.1 \
-       MarkupSafe==2.1.3 \
-       ruamel.yaml==0.17.21 \
-       passlib==1.7.4 \
-    && KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
-    && curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
-    && echo $(curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
-    && chmod a+x /usr/local/bin/kubectl \
-    && rm -rf /var/lib/apt/lists/* /var/log/* \
-    && find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;
```
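The kubectl install step in the Dockerfile leans on two shell idioms: pulling a version string out of a YAML file with `sed -n 's/.../p'`, and verifying a download by feeding an `echo "<digest> <path>"` line to `sha256sum --check`. A minimal standalone sketch of both (the file paths and contents below are invented for illustration):

```shell
# Extract the value of the "kube_version:" key: sed -n suppresses default
# printing, and the trailing "p" prints only lines where the substitution
# (stripping the key prefix) actually matched.
printf 'kube_owner: kube\nkube_version: v1.28.6\n' > /tmp/defaults.yml
KUBE_VERSION=$(sed -n 's/^kube_version: //p' /tmp/defaults.yml)
echo "kube_version is ${KUBE_VERSION}"

# Verify a file against a SHA-256 digest with the same
# 'echo "<digest> <path>" | sha256sum --check' pattern used for kubectl.
printf 'pretend this is a kubectl binary\n' > /tmp/kubectl
DIGEST=$(sha256sum /tmp/kubectl | awk '{print $1}')
echo "${DIGEST} /tmp/kubectl" | sha256sum --check
```

Because `sha256sum --check` exits non-zero on a mismatch, a corrupted download fails the Docker build instead of silently producing a broken image.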
```diff
@@ -24,6 +24,7 @@ aliases:
     - mzaian
     - mrfreezeex
     - erikjiang
+    - vannten
   kubespray-emeritus_approvers:
     - riverzhang
     - atoms
```
README.md (26 lines changed)

````diff
@@ -75,11 +75,11 @@ You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mou
 to access the inventory and SSH key in the container, like this:
 
 ```ShellSession
-git checkout v2.22.1
-docker pull quay.io/kubespray/kubespray:v2.22.1
+git checkout v2.23.2
+docker pull quay.io/kubespray/kubespray:v2.23.2
 docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
   --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
-  quay.io/kubespray/kubespray:v2.22.1 bash
+  quay.io/kubespray/kubespray:v2.23.2 bash
 # Inside the container you may now run the kubespray playbooks:
 ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
 ```
@@ -161,28 +161,28 @@ Note: Upstart/SysV init based OS types are not supported.
 ## Supported Components
 
 - Core
-  - [kubernetes](https://github.com/kubernetes/kubernetes) v1.27.5
-  - [etcd](https://github.com/etcd-io/etcd) v3.5.7
+  - [kubernetes](https://github.com/kubernetes/kubernetes) v1.28.6
+  - [etcd](https://github.com/etcd-io/etcd) v3.5.10
   - [docker](https://www.docker.com/) v20.10 (see note)
-  - [containerd](https://containerd.io/) v1.7.5
+  - [containerd](https://containerd.io/) v1.7.11
   - [cri-o](http://cri-o.io/) v1.27 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
 - Network Plugin
   - [cni-plugins](https://github.com/containernetworking/plugins) v1.2.0
-  - [calico](https://github.com/projectcalico/calico) v3.25.2
+  - [calico](https://github.com/projectcalico/calico) v3.26.4
   - [cilium](https://github.com/cilium/cilium) v1.13.4
   - [flannel](https://github.com/flannel-io/flannel) v0.22.0
   - [kube-ovn](https://github.com/alauda/kube-ovn) v1.11.5
-  - [kube-router](https://github.com/cloudnativelabs/kube-router) v1.5.1
+  - [kube-router](https://github.com/cloudnativelabs/kube-router) v2.0.0
   - [multus](https://github.com/k8snetworkplumbingwg/multus-cni) v3.8
   - [weave](https://github.com/weaveworks/weave) v2.8.1
   - [kube-vip](https://github.com/kube-vip/kube-vip) v0.5.12
 - Application
-  - [cert-manager](https://github.com/jetstack/cert-manager) v1.11.1
+  - [cert-manager](https://github.com/jetstack/cert-manager) v1.13.2
   - [coredns](https://github.com/coredns/coredns) v1.10.1
-  - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.8.1
+  - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.9.4
   - [krew](https://github.com/kubernetes-sigs/krew) v0.4.4
-  - [argocd](https://argoproj.github.io/) v2.8.0
-  - [helm](https://helm.sh/) v3.12.3
+  - [argocd](https://argoproj.github.io/) v2.8.4
+  - [helm](https://helm.sh/) v3.13.1
   - [metallb](https://metallb.universe.tf/) v0.13.9
   - [registry](https://github.com/distribution/distribution) v2.8.1
 - Storage Plugin
@@ -202,7 +202,7 @@ Note: Upstart/SysV init based OS types are not supported.
 
 ## Requirements
 
-- **Minimum required version of Kubernetes is v1.25**
+- **Minimum required version of Kubernetes is v1.26**
 - **Ansible v2.14+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
 - The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
 - The target servers are configured to allow **IPv4 forwarding**.
````
Vagrantfile (vendored, 1 line added)

```diff
@@ -263,6 +263,7 @@ Vagrant.configure("2") do |config|
       if i == $num_instances
         node.vm.provision "ansible" do |ansible|
           ansible.playbook = $playbook
+          ansible.compatibility_mode = "2.0"
           ansible.verbose = $ansible_verbosity
           $ansible_inventory_path = File.join( $inventory, "hosts.ini")
           if File.exist?($ansible_inventory_path)
```
```diff
@@ -5,7 +5,9 @@
 Container image collecting script for offline deployment
 
 This script has two features:
+
 (1) Get container images from an environment which is deployed online.
+
 (2) Deploy local container registry and register the container images to the registry.
 
 Step(1) should be done online site as a preparation, then we bring the gotten images
@@ -27,7 +29,7 @@ manage-offline-container-images.sh register
 
 ## generate_list.sh
 
-This script generates the list of downloaded files and the list of container images by `roles/download/defaults/main/main.yml` file.
+This script generates the list of downloaded files and the list of container images by `roles/kubespray-defaults/defaults/main/download.yml` file.
 
 Run this script will execute `generate_list.yml` playbook in kubespray root directory and generate four files,
 all downloaded files url in files.list, all container images in images.list, jinja2 templates in *.template.
```
```diff
@@ -5,7 +5,7 @@ CURRENT_DIR=$(cd $(dirname $0); pwd)
 TEMP_DIR="${CURRENT_DIR}/temp"
 REPO_ROOT_DIR="${CURRENT_DIR%/contrib/offline}"
 
-: ${DOWNLOAD_YML:="roles/download/defaults/main/main.yml"}
+: ${DOWNLOAD_YML:="roles/kubespray-defaults/defaults/main/download.yml"}
 
 mkdir -p ${TEMP_DIR}
 
```
@@ -19,7 +19,7 @@ sed -n '/^downloads:/,/download_defaults:/p' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
 | sed 'N;s#\n# #g' | tr ' ' ':' | sed 's/\"//g' > ${TEMP_DIR}/images.list.template

 # add kube-* images to images list template
-# Those container images are downloaded by kubeadm, then roles/download/defaults/main/main.yml
+# Those container images are downloaded by kubeadm, then roles/kubespray-defaults/defaults/main/download.yml
 # doesn't contain those images. That is reason why here needs to put those images into the
 # list separately.
 KUBE_IMAGES="kube-apiserver kube-controller-manager kube-scheduler kube-proxy"
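The `sed` range selection used by generate_list.sh above can be tried in isolation; a minimal sketch against a toy downloads file (the sample YAML content below is hypothetical, not the real `roles/kubespray-defaults/defaults/main/download.yml`):

```shell
#!/bin/sh
# Toy stand-in for the downloads YAML (sample data only).
cat > /tmp/download-sample.yml <<'EOF'
other_setting: true
downloads:
  etcd:
    url: "https://example.com/etcd.tar.gz"
download_defaults:
  enabled: true
EOF

# Same range the script uses: print from the line matching '^downloads:'
# through the line matching 'download_defaults:' (inclusive).
sed -n '/^downloads:/,/download_defaults:/p' /tmp/download-sample.yml
```

Everything before the `downloads:` key is dropped, which is why the script can then template the remaining entries into files.list and images.list.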
@@ -23,8 +23,8 @@ function create_container_image_tar() {
 mkdir ${IMAGE_DIR}
 cd ${IMAGE_DIR}

-sudo docker pull registry:latest
-sudo docker save -o registry-latest.tar registry:latest
+sudo ${runtime} pull registry:latest
+sudo ${runtime} save -o registry-latest.tar registry:latest

 for image in ${IMAGES}
 do
@@ -32,7 +32,7 @@ function create_container_image_tar() {
 set +e
 for step in $(seq 1 ${RETRY_COUNT})
 do
-sudo docker pull ${image}
+sudo ${runtime} pull ${image}
 if [ $? -eq 0 ]; then
 break
 fi
@@ -42,7 +42,7 @@ function create_container_image_tar() {
 fi
 done
 set -e
-sudo docker save -o ${FILE_NAME} ${image}
+sudo ${runtime} save -o ${FILE_NAME} ${image}

 # NOTE: Here removes the following repo parts from each image
 # so that these parts will be replaced with Kubespray.
@@ -95,16 +95,16 @@ function register_container_images() {
 sed -i s@"HOSTNAME"@"${LOCALHOST_NAME}"@ ${TEMP_DIR}/registries.conf
 sudo cp ${TEMP_DIR}/registries.conf /etc/containers/registries.conf
 else
-echo "docker package(docker-ce, etc.) should be installed"
+echo "runtime package(docker-ce, podman, nerdctl, etc.) should be installed"
 exit 1
 fi

 tar -zxvf ${IMAGE_TAR_FILE}
-sudo docker load -i ${IMAGE_DIR}/registry-latest.tar
+sudo ${runtime} load -i ${IMAGE_DIR}/registry-latest.tar
 set +e
-sudo docker container inspect registry >/dev/null 2>&1
+sudo ${runtime} container inspect registry >/dev/null 2>&1
 if [ $? -ne 0 ]; then
-sudo docker run --restart=always -d -p 5000:5000 --name registry registry:latest
+sudo ${runtime} run --restart=always -d -p 5000:5000 --name registry registry:latest
 fi
 set -e
@@ -112,8 +112,8 @@ function register_container_images() {
 file_name=$(echo ${line} | awk '{print $1}')
 raw_image=$(echo ${line} | awk '{print $2}')
 new_image="${LOCALHOST_NAME}:5000/${raw_image}"
-org_image=$(sudo docker load -i ${IMAGE_DIR}/${file_name} | head -n1 | awk '{print $3}')
-image_id=$(sudo docker image inspect ${org_image} | grep "\"Id\":" | awk -F: '{print $3}'| sed s/'\",'//)
+org_image=$(sudo ${runtime} load -i ${IMAGE_DIR}/${file_name} | head -n1 | awk '{print $3}')
+image_id=$(sudo ${runtime} image inspect ${org_image} | grep "\"Id\":" | awk -F: '{print $3}'| sed s/'\",'//)
 if [ -z "${file_name}" ]; then
 echo "Failed to get file_name for line ${line}"
 exit 1
@@ -130,9 +130,9 @@ function register_container_images() {
 echo "Failed to get image_id for file ${file_name}"
 exit 1
 fi
-sudo docker load -i ${IMAGE_DIR}/${file_name}
-sudo docker tag ${image_id} ${new_image}
-sudo docker push ${new_image}
+sudo ${runtime} load -i ${IMAGE_DIR}/${file_name}
+sudo ${runtime} tag ${image_id} ${new_image}
+sudo ${runtime} push ${new_image}
 done <<< "$(cat ${IMAGE_LIST})"

 echo "Succeeded to register container images to local registry."
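The load/tag/push loop above derives the re-registered name by plain string concatenation: `${LOCALHOST_NAME}:5000/` prepended to the original reference. A minimal sketch of that naming rule (the registry host below is a hypothetical example):

```shell
#!/bin/sh
# Mirror of the script's rule: new_image="${LOCALHOST_NAME}:5000/${raw_image}".
LOCALHOST_NAME="registry.example.local"   # hypothetical local registry host

local_image_name() {
  raw_image="$1"
  printf '%s:5000/%s\n' "${LOCALHOST_NAME}" "${raw_image}"
}

local_image_name "quay.io/coreos/etcd:v3.5.9"
# -> registry.example.local:5000/quay.io/coreos/etcd:v3.5.9
```

Keeping the full original path (including `quay.io/...`) under the local registry is what lets Kubespray later rewrite only the repo variables (`gcr_image_repo`, `quay_image_repo`, ...) to point at the mirror.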
@@ -143,6 +143,18 @@ function register_container_images() {
 echo "- quay_image_repo"
 }

+# get runtime command
+if command -v nerdctl 1>/dev/null 2>&1; then
+runtime="nerdctl"
+elif command -v podman 1>/dev/null 2>&1; then
+runtime="podman"
+elif command -v docker 1>/dev/null 2>&1; then
+runtime="docker"
+else
+echo "No supported container runtime found"
+exit 1
+fi
+
 if [ "${OPTION}" == "create" ]; then
 create_container_image_tar
 elif [ "${OPTION}" == "register" ]; then
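The runtime detection added above probes `command -v` in a fixed preference order. The same logic can be sketched as a reusable function (the probe order mirrors the script; the function name is an illustration):

```shell
#!/bin/sh
# Return the first command from the argument list that exists on PATH,
# mirroring the script's preference order: nerdctl, then podman, then docker.
detect_runtime() {
  for candidate in "$@"; do
    if command -v "${candidate}" 1>/dev/null 2>&1; then
      echo "${candidate}"
      return 0
    fi
  done
  echo "No supported container runtime found" >&2
  return 1
}

# On a host with none of the three installed this fails loudly instead of
# silently assuming docker.
runtime=$(detect_runtime nerdctl podman docker) || runtime=""
```

Every later `sudo ${runtime} pull/save/load/tag/push` call then works unchanged regardless of which CLI is installed, since all three expose Docker-compatible subcommands for these operations.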
@@ -38,7 +38,7 @@ sudo "${runtime}" container inspect nginx >/dev/null 2>&1
 if [ $? -ne 0 ]; then
 sudo "${runtime}" run \
 --restart=always -d -p ${NGINX_PORT}:80 \
---volume "${OFFLINE_FILES_DIR}:/usr/share/nginx/html/download" \
+--volume "${OFFLINE_FILES_DIR}":/usr/share/nginx/html/download \
 --volume "${CURRENT_DIR}"/nginx.conf:/etc/nginx/nginx.conf \
 --name nginx nginx:alpine
 fi
@@ -50,70 +50,32 @@ Example (this one assumes you are using Ubuntu)
 ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=ubuntu -b --become-user=root --flush-cache
 ```

-***Using other distrib than Ubuntu***
-If you want to use another distribution than Ubuntu 18.04 (Bionic) LTS, you can modify the search filters of the 'data "aws_ami" "distro"' in variables.tf.
-
-For example, to use:
-
-- Debian Jessie, replace 'data "aws_ami" "distro"' in variables.tf with
-
-```ini
-data "aws_ami" "distro" {
-most_recent = true
-
-filter {
-name = "name"
-values = ["debian-jessie-amd64-hvm-*"]
-}
-
-filter {
-name = "virtualization-type"
-values = ["hvm"]
-}
-
-owners = ["379101102735"]
-}
-```
-
-- Ubuntu 16.04, replace 'data "aws_ami" "distro"' in variables.tf with
-
-```ini
-data "aws_ami" "distro" {
-most_recent = true
-
-filter {
-name = "name"
-values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*"]
-}
-
-filter {
-name = "virtualization-type"
-values = ["hvm"]
-}
-
-owners = ["099720109477"]
-}
-```
-
-- Centos 7, replace 'data "aws_ami" "distro"' in variables.tf with
-
-```ini
-data "aws_ami" "distro" {
-most_recent = true
-
-filter {
-name = "name"
-values = ["dcos-centos7-*"]
-}
-
-filter {
-name = "virtualization-type"
-values = ["hvm"]
-}
-
-owners = ["688023202711"]
-}
-```
+## Using other distrib than Ubuntu
+
+To leverage a Linux distribution other than Ubuntu 18.04 (Bionic) LTS for your Terraform configurations, you can adjust the AMI search filters within the 'data "aws_ami" "distro"' block by utilizing variables in your `terraform.tfvars` file. This approach ensures a flexible configuration that adapts to various Linux distributions without directly modifying the core Terraform files.
+
+### Example Usages
+
+- **Debian Jessie**: To configure the usage of Debian Jessie, insert the subsequent lines into your `terraform.tfvars`:
+
+```hcl
+ami_name_pattern = "debian-jessie-amd64-hvm-*"
+ami_owners = ["379101102735"]
+```
+
+- **Ubuntu 16.04**: To utilize Ubuntu 16.04 instead, apply the following configuration in your `terraform.tfvars`:
+
+```hcl
+ami_name_pattern = "ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*"
+ami_owners = ["099720109477"]
+```
+
+- **Centos 7**: For employing Centos 7, incorporate these lines into your `terraform.tfvars`:
+
+```hcl
+ami_name_pattern = "dcos-centos7-*"
+ami_owners = ["688023202711"]
+```

 ## Connecting to Kubernetes
@@ -20,20 +20,38 @@ variable "aws_cluster_name" {
 description = "Name of AWS Cluster"
 }

+variable "ami_name_pattern" {
+description = "The name pattern to use for AMI lookup"
+type = string
+default = "debian-10-amd64-*"
+}
+
+variable "ami_virtualization_type" {
+description = "The virtualization type to use for AMI lookup"
+type = string
+default = "hvm"
+}
+
+variable "ami_owners" {
+description = "The owners to use for AMI lookup"
+type = list(string)
+default = ["136693071363"]
+}
+
 data "aws_ami" "distro" {
 most_recent = true

 filter {
 name = "name"
-values = ["debian-10-amd64-*"]
+values = [var.ami_name_pattern]
 }

 filter {
 name = "virtualization-type"
-values = ["hvm"]
+values = [var.ami_virtualization_type]
 }

-owners = ["136693071363"] # Debian-10
+owners = var.ami_owners
 }

 //AWS VPC Variables
@@ -7,7 +7,7 @@ terraform {
 required_providers {
 equinix = {
 source = "equinix/equinix"
-version = "~> 1.14"
+version = "1.24.0"
 }
 }
 }
@@ -12,7 +12,7 @@ ssh_public_keys = [
 machines = {
 "master-0" : {
 "node_type" : "master",
-"size" : "Medium",
+"size" : "standard.medium",
 "boot_disk" : {
 "image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
 "root_partition_size" : 50,
@@ -22,7 +22,7 @@ machines = {
 },
 "worker-0" : {
 "node_type" : "worker",
-"size" : "Large",
+"size" : "standard.large",
 "boot_disk" : {
 "image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
 "root_partition_size" : 50,
@@ -32,7 +32,7 @@ machines = {
 },
 "worker-1" : {
 "node_type" : "worker",
-"size" : "Large",
+"size" : "standard.large",
 "boot_disk" : {
 "image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
 "root_partition_size" : 50,
@@ -42,7 +42,7 @@ machines = {
 },
 "worker-2" : {
 "node_type" : "worker",
-"size" : "Large",
+"size" : "standard.large",
 "boot_disk" : {
 "image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
 "root_partition_size" : 50,
@@ -1,29 +1,25 @@
-data "exoscale_compute_template" "os_image" {
+data "exoscale_template" "os_image" {
 for_each = var.machines

 zone = var.zone
 name = each.value.boot_disk.image_name
 }

-data "exoscale_compute" "master_nodes" {
-for_each = exoscale_compute.master
+data "exoscale_compute_instance" "master_nodes" {
+for_each = exoscale_compute_instance.master

 id = each.value.id
-# Since private IP address is not assigned until the nics are created we need this
-depends_on = [exoscale_nic.master_private_network_nic]
+zone = var.zone
 }

-data "exoscale_compute" "worker_nodes" {
-for_each = exoscale_compute.worker
+data "exoscale_compute_instance" "worker_nodes" {
+for_each = exoscale_compute_instance.worker

 id = each.value.id
-# Since private IP address is not assigned until the nics are created we need this
-depends_on = [exoscale_nic.worker_private_network_nic]
+zone = var.zone
 }

-resource "exoscale_network" "private_network" {
+resource "exoscale_private_network" "private_network" {
 zone = var.zone
 name = "${var.prefix}-network"
@@ -34,25 +30,29 @@ resource "exoscale_network" "private_network" {
 netmask = cidrnetmask(var.private_network_cidr)
 }

-resource "exoscale_compute" "master" {
+resource "exoscale_compute_instance" "master" {
 for_each = {
 for name, machine in var.machines :
 name => machine
 if machine.node_type == "master"
 }

-display_name = "${var.prefix}-${each.key}"
-template_id = data.exoscale_compute_template.os_image[each.key].id
-size = each.value.size
+name = "${var.prefix}-${each.key}"
+template_id = data.exoscale_template.os_image[each.key].id
+type = each.value.size
 disk_size = each.value.boot_disk.root_partition_size + each.value.boot_disk.node_local_partition_size + each.value.boot_disk.ceph_partition_size
 state = "Running"
 zone = var.zone
-security_groups = [exoscale_security_group.master_sg.name]
+security_group_ids = [exoscale_security_group.master_sg.id]
+network_interface {
+network_id = exoscale_private_network.private_network.id
+}
+elastic_ip_ids = [exoscale_elastic_ip.control_plane_lb.id]

 user_data = templatefile(
 "${path.module}/templates/cloud-init.tmpl",
 {
-eip_ip_address = exoscale_ipaddress.ingress_controller_lb.ip_address
+eip_ip_address = exoscale_elastic_ip.ingress_controller_lb.ip_address
 node_local_partition_size = each.value.boot_disk.node_local_partition_size
 ceph_partition_size = each.value.boot_disk.ceph_partition_size
 root_partition_size = each.value.boot_disk.root_partition_size
@@ -62,25 +62,29 @@ resource "exoscale_compute" "master" {
 )
 }

-resource "exoscale_compute" "worker" {
+resource "exoscale_compute_instance" "worker" {
 for_each = {
 for name, machine in var.machines :
 name => machine
 if machine.node_type == "worker"
 }

-display_name = "${var.prefix}-${each.key}"
-template_id = data.exoscale_compute_template.os_image[each.key].id
-size = each.value.size
+name = "${var.prefix}-${each.key}"
+template_id = data.exoscale_template.os_image[each.key].id
+type = each.value.size
 disk_size = each.value.boot_disk.root_partition_size + each.value.boot_disk.node_local_partition_size + each.value.boot_disk.ceph_partition_size
 state = "Running"
 zone = var.zone
-security_groups = [exoscale_security_group.worker_sg.name]
+security_group_ids = [exoscale_security_group.worker_sg.id]
+network_interface {
+network_id = exoscale_private_network.private_network.id
+}
+elastic_ip_ids = [exoscale_elastic_ip.ingress_controller_lb.id]

 user_data = templatefile(
 "${path.module}/templates/cloud-init.tmpl",
 {
-eip_ip_address = exoscale_ipaddress.ingress_controller_lb.ip_address
+eip_ip_address = exoscale_elastic_ip.ingress_controller_lb.ip_address
 node_local_partition_size = each.value.boot_disk.node_local_partition_size
 ceph_partition_size = each.value.boot_disk.ceph_partition_size
 root_partition_size = each.value.boot_disk.root_partition_size
@@ -90,41 +94,33 @@ resource "exoscale_compute" "worker" {
 )
 }

-resource "exoscale_nic" "master_private_network_nic" {
-for_each = exoscale_compute.master
-
-compute_id = each.value.id
-network_id = exoscale_network.private_network.id
-}
-
-resource "exoscale_nic" "worker_private_network_nic" {
-for_each = exoscale_compute.worker
-
-compute_id = each.value.id
-network_id = exoscale_network.private_network.id
-}
-
 resource "exoscale_security_group" "master_sg" {
 name = "${var.prefix}-master-sg"
 description = "Security group for Kubernetes masters"
 }

-resource "exoscale_security_group_rules" "master_sg_rules" {
+resource "exoscale_security_group_rule" "master_sg_rule_ssh" {
 security_group_id = exoscale_security_group.master_sg.id

+for_each = toset(var.ssh_whitelist)
 # SSH
-ingress {
-protocol = "TCP"
-cidr_list = var.ssh_whitelist
-ports = ["22"]
-}
+type = "INGRESS"
+start_port = 22
+end_port = 22
+protocol = "TCP"
+cidr = each.value
+}
+
+resource "exoscale_security_group_rule" "master_sg_rule_k8s_api" {
+security_group_id = exoscale_security_group.master_sg.id
+
+for_each = toset(var.api_server_whitelist)
 # Kubernetes API
-ingress {
-protocol = "TCP"
-cidr_list = var.api_server_whitelist
-ports = ["6443"]
-}
+type = "INGRESS"
+start_port = 6443
+end_port = 6443
+protocol = "TCP"
+cidr = each.value
 }

 resource "exoscale_security_group" "worker_sg" {
@@ -132,62 +128,64 @@ resource "exoscale_security_group" "worker_sg" {
 description = "security group for kubernetes worker nodes"
 }

-resource "exoscale_security_group_rules" "worker_sg_rules" {
+resource "exoscale_security_group_rule" "worker_sg_rule_ssh" {
 security_group_id = exoscale_security_group.worker_sg.id

 # SSH
-ingress {
-protocol = "TCP"
-cidr_list = var.ssh_whitelist
-ports = ["22"]
-}
+for_each = toset(var.ssh_whitelist)
+type = "INGRESS"
+start_port = 22
+end_port = 22
+protocol = "TCP"
+cidr = each.value
+}
+
+resource "exoscale_security_group_rule" "worker_sg_rule_http" {
+security_group_id = exoscale_security_group.worker_sg.id

 # HTTP(S)
-ingress {
-protocol = "TCP"
-cidr_list = ["0.0.0.0/0"]
-ports = ["80", "443"]
-}
+for_each = toset(["80", "443"])
+type = "INGRESS"
+start_port = each.value
+end_port = each.value
+protocol = "TCP"
+cidr = "0.0.0.0/0"
+}

-# Kubernetes Nodeport
-ingress {
-protocol = "TCP"
-cidr_list = var.nodeport_whitelist
-ports = ["30000-32767"]
+resource "exoscale_security_group_rule" "worker_sg_rule_nodeport" {
+security_group_id = exoscale_security_group.worker_sg.id
+
+# Kubernetes Nodeport
+for_each = toset(var.nodeport_whitelist)
+type = "INGRESS"
+start_port = 30000
+end_port = 32767
+protocol = "TCP"
+cidr = each.value
+}
+
+resource "exoscale_elastic_ip" "ingress_controller_lb" {
+zone = var.zone
+healthcheck {
+mode = "http"
+port = 80
+uri = "/healthz"
+interval = 10
+timeout = 2
+strikes_ok = 2
+strikes_fail = 3
 }
 }

-resource "exoscale_ipaddress" "ingress_controller_lb" {
+resource "exoscale_elastic_ip" "control_plane_lb" {
 zone = var.zone
-healthcheck_mode = "http"
-healthcheck_port = 80
-healthcheck_path = "/healthz"
-healthcheck_interval = 10
-healthcheck_timeout = 2
-healthcheck_strikes_ok = 2
-healthcheck_strikes_fail = 3
+healthcheck {
+mode = "tcp"
+port = 6443
+interval = 10
+timeout = 2
+strikes_ok = 2
+strikes_fail = 3
 }
-
-resource "exoscale_secondary_ipaddress" "ingress_controller_lb" {
-for_each = exoscale_compute.worker
-
-compute_id = each.value.id
-ip_address = exoscale_ipaddress.ingress_controller_lb.ip_address
-}
-
-resource "exoscale_ipaddress" "control_plane_lb" {
-zone = var.zone
-healthcheck_mode = "tcp"
-healthcheck_port = 6443
-healthcheck_interval = 10
-healthcheck_timeout = 2
-healthcheck_strikes_ok = 2
-healthcheck_strikes_fail = 3
-}
-
-resource "exoscale_secondary_ipaddress" "control_plane_lb" {
-for_each = exoscale_compute.master
-
-compute_id = each.value.id
-ip_address = exoscale_ipaddress.control_plane_lb.ip_address
 }
@@ -1,19 +1,19 @@
 output "master_ip_addresses" {
 value = {
-for key, instance in exoscale_compute.master :
+for key, instance in exoscale_compute_instance.master :
 instance.name => {
-"private_ip" = contains(keys(data.exoscale_compute.master_nodes), key) ? data.exoscale_compute.master_nodes[key].private_network_ip_addresses[0] : ""
-"public_ip" = exoscale_compute.master[key].ip_address
+"private_ip" = contains(keys(data.exoscale_compute_instance.master_nodes), key) ? data.exoscale_compute_instance.master_nodes[key].private_network_ip_addresses[0] : ""
+"public_ip" = exoscale_compute_instance.master[key].ip_address
 }
 }
 }

 output "worker_ip_addresses" {
 value = {
-for key, instance in exoscale_compute.worker :
+for key, instance in exoscale_compute_instance.worker :
 instance.name => {
-"private_ip" = contains(keys(data.exoscale_compute.worker_nodes), key) ? data.exoscale_compute.worker_nodes[key].private_network_ip_addresses[0] : ""
-"public_ip" = exoscale_compute.worker[key].ip_address
+"private_ip" = contains(keys(data.exoscale_compute_instance.worker_nodes), key) ? data.exoscale_compute_instance.worker_nodes[key].private_network_ip_addresses[0] : ""
+"public_ip" = exoscale_compute_instance.worker[key].ip_address
 }
 }
 }
@@ -23,9 +23,9 @@ output "cluster_private_network_cidr" {
 }

 output "ingress_controller_lb_ip_address" {
-value = exoscale_ipaddress.ingress_controller_lb.ip_address
+value = exoscale_elastic_ip.ingress_controller_lb.ip_address
 }

 output "control_plane_lb_ip_address" {
-value = exoscale_ipaddress.control_plane_lb.ip_address
+value = exoscale_elastic_ip.control_plane_lb.ip_address
 }
@@ -1,7 +1,7 @@
 terraform {
 required_providers {
 exoscale = {
 source = "exoscale/exoscale"
 version = ">= 0.21"
 }
 }
 }
2 contrib/terraform/openstack/.gitignore vendored
@@ -1,5 +1,5 @@
 .terraform
 *.tfvars
-!sample-inventory\/cluster.tfvars
+!sample-inventory/cluster.tfvars
 *.tfstate
 *.tfstate.backup
@@ -318,6 +318,7 @@ k8s_nodes:
 mount_path: string # Path to where the partition should be mounted
 partition_start: string # Where the partition should start (e.g. 10GB ). Note, if you set the partition_start to 0 there will be no space left for the root partition
 partition_end: string # Where the partition should end (e.g. 10GB or -1 for end of volume)
+netplan_critical_dhcp_interface: string # Name of interface to set the dhcp flag critical = true, to circumvent [this issue](https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1776013).
 ```

 For example:
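The new `netplan_critical_dhcp_interface` option ultimately renders a netplan drop-in via cloud-init. A sketch of the file it produces, assuming a hypothetical interface name `ens3` as the option's value:

```shell
#!/bin/sh
# Render the drop-in the cloud-init template writes for a critical DHCP
# interface. "ens3" is a placeholder for netplan_critical_dhcp_interface.
IFACE="ens3"

cat > /tmp/90-critical-dhcp.yaml <<EOF
network:
  version: 2
  ethernets:
    ${IFACE}:
      dhcp4: true
      critical: true
EOF

cat /tmp/90-critical-dhcp.yaml
```

Marking the interface `critical: true` tells systemd-networkd to hold on to its DHCP configuration rather than releasing it on daemon restart, which is the workaround for the linked Launchpad bug.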
@@ -19,8 +19,8 @@ data "cloudinit_config" "cloudinit" {
 part {
 content_type = "text/cloud-config"
 content = templatefile("${path.module}/templates/cloudinit.yaml.tmpl", {
-# template_file doesn't support lists
-extra_partitions = ""
+extra_partitions = [],
+netplan_critical_dhcp_interface = ""
 })
 }
 }
@@ -821,7 +821,8 @@ resource "openstack_compute_instance_v2" "k8s_nodes" {
 flavor_id = each.value.flavor
 key_pair = openstack_compute_keypair_v2.k8s.name
 user_data = each.value.cloudinit != null ? templatefile("${path.module}/templates/cloudinit.yaml.tmpl", {
-extra_partitions = each.value.cloudinit.extra_partitions
+extra_partitions = each.value.cloudinit.extra_partitions,
+netplan_critical_dhcp_interface = each.value.cloudinit.netplan_critical_dhcp_interface,
 }) : data.cloudinit_config.cloudinit.rendered

 dynamic "block_device" {
@@ -1,4 +1,4 @@
-%{~ if length(extra_partitions) > 0 }
+%{~ if length(extra_partitions) > 0 || netplan_critical_dhcp_interface != "" }
 #cloud-config
 bootcmd:
 %{~ for idx, partition in extra_partitions }
@@ -8,11 +8,26 @@ bootcmd:
 %{~ endfor }

 runcmd:
+%{~ if netplan_critical_dhcp_interface != "" }
+  - netplan apply
+%{~ endif }
 %{~ for idx, partition in extra_partitions }
   - mkdir -p ${partition.mount_path}
   - chown nobody:nogroup ${partition.mount_path}
   - mount ${partition.partition_path} ${partition.mount_path}
-%{~ endfor }
+%{~ endfor ~}

+%{~ if netplan_critical_dhcp_interface != "" }
+write_files:
+  - path: /etc/netplan/90-critical-dhcp.yaml
+    content: |
+      network:
+        version: 2
+        ethernets:
+          ${ netplan_critical_dhcp_interface }:
+            dhcp4: true
+            critical: true
+%{~ endif }
+
 mounts:
 %{~ for idx, partition in extra_partitions }
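Rendered with `netplan_critical_dhcp_interface = "ens3"` and a single extra partition mounted at `/data` (both hypothetical values), the template above produces approximately this cloud-config:

```yaml
#cloud-config
runcmd:
  - netplan apply
  - mkdir -p /data
  - chown nobody:nogroup /data
  - mount /dev/sdb1 /data

write_files:
  - path: /etc/netplan/90-critical-dhcp.yaml
    content: |
      network:
        version: 2
        ethernets:
          ens3:              # value of netplan_critical_dhcp_interface
            dhcp4: true
            critical: true
```

`critical: true` marks the interface as essential, so networkd keeps its DHCP configuration rather than releasing the lease, and `netplan apply` in `runcmd` picks up the file written by `write_files` earlier in the boot.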
@@ -142,13 +142,14 @@ variable "k8s_nodes" {
     additional_server_groups = optional(list(string))
     server_group             = optional(string)
     cloudinit = optional(object({
-      extra_partitions = list(object({
+      extra_partitions = optional(list(object({
         volume_path     = string
         partition_path  = string
         partition_start = string
         partition_end   = string
         mount_path      = string
-      }))
+      })), [])
+      netplan_critical_dhcp_interface = optional(string, "")
     }))
   }))
 }
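With this schema, `extra_partitions` defaults to `[]` and `netplan_critical_dhcp_interface` to `""`, so either can be set independently. A minimal node entry using only the new field might look like this (node name, flavor, and interface name are placeholders; other required node attributes are elided):

```hcl
k8s_nodes = {
  "node-1" = {
    # ...other required node attributes elided...
    flavor = "<flavor-id>"
    cloudinit = {
      netplan_critical_dhcp_interface = "ens3"
    }
  }
}
```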
@@ -140,4 +140,4 @@ terraform destroy --var-file cluster-settings.tfvars \
 * `backend_servers`: List of servers that traffic to the port should be forwarded to.
 * `server_groups`: Group servers together
   * `servers`: The servers that should be included in the group.
-  * `anti_affinity`: If anti-affinity should be enabled, try to spread the VMs out on separate nodes.
+  * `anti_affinity_policy`: Defines whether the server group is an anti-affinity group. The value can be "strict", "yes" or "no". "strict" does not allow servers in the same group to be placed on the same compute host; "yes" is a best-effort policy that tries to place servers on different hosts, but this is not guaranteed.
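As a sketch, a group definition using the renamed key might look like this (group and server names are illustrative):

```hcl
server_groups = {
  "control-plane" = {
    servers              = ["control-plane-0", "control-plane-1"]
    anti_affinity_policy = "strict"
  }
}
```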
@@ -18,7 +18,7 @@ ssh_public_keys = [

 # check list of available plan https://developers.upcloud.com/1.3/7-plans/
 machines = {
-  "master-0" : {
+  "control-plane-0" : {
     "node_type" : "master",
     # plan to use instead of custom cpu/mem
     "plan" : null,
@@ -133,9 +133,9 @@ loadbalancers = {
 server_groups = {
   # "control-plane" = {
   #   servers = [
-  #     "master-0"
+  #     "control-plane-0"
   #   ]
-  #   anti_affinity = true
+  #   anti_affinity_policy = "strict"
   # },
   # "workers" = {
   #   servers = [
@@ -143,6 +143,6 @@ server_groups = {
   #     "worker-1",
   #     "worker-2"
   #   ]
-  #   anti_affinity = true
+  #   anti_affinity_policy = "yes"
   # }
 }
@@ -3,7 +3,7 @@ locals {
   disks = flatten([
     for node_name, machine in var.machines : [
       for disk_name, disk in machine.additional_disks : {
         disk      = disk
         disk_name = disk_name
         node_name = node_name
       }
@@ -13,8 +13,8 @@ locals {
   lb_backend_servers = flatten([
     for lb_name, loadbalancer in var.loadbalancers : [
       for backend_server in loadbalancer.backend_servers : {
         port        = loadbalancer.target_port
         lb_name     = lb_name
         server_name = backend_server
       }
     ]
@@ -22,7 +22,7 @@ locals {

   # If prefix is set, all resources will be prefixed with "${var.prefix}-"
   # Else don't prefix with anything
-  resource-prefix = "%{ if var.prefix != ""}${var.prefix}-%{ endif }"
+  resource-prefix = "%{if var.prefix != ""}${var.prefix}-%{endif}"
 }

 resource "upcloud_network" "private" {
@@ -38,7 +38,7 @@ resource "upcloud_network" "private" {

 resource "upcloud_storage" "additional_disks" {
   for_each = {
-    for disk in local.disks: "${disk.node_name}_${disk.disk_name}" => disk.disk
+    for disk in local.disks : "${disk.node_name}_${disk.disk_name}" => disk.disk
   }

   size = each.value.size
@@ -61,8 +61,8 @@ resource "upcloud_server" "master" {
   zone = var.zone

   template {
     storage = var.template_name
     size    = each.value.disk_size
   }

   # Public network interface
@@ -81,14 +81,14 @@ resource "upcloud_server" "master" {
     ignore_changes = [storage_devices]
   }

   firewall = var.firewall_enabled

   dynamic "storage_devices" {
     for_each = {
       for disk_key_name, disk in upcloud_storage.additional_disks :
       disk_key_name => disk
       # Only add the disk if it matches the node name in the start of its name
       if length(regexall("^${each.key}_.+", disk_key_name)) > 0
     }

     content {
@@ -138,14 +138,14 @@ resource "upcloud_server" "worker" {
     ignore_changes = [storage_devices]
   }

   firewall = var.firewall_enabled

   dynamic "storage_devices" {
     for_each = {
       for disk_key_name, disk in upcloud_storage.additional_disks :
       disk_key_name => disk
       # Only add the disk if it matches the node name in the start of its name
       if length(regexall("^${each.key}_.+", disk_key_name)) > 0
     }

     content {
@@ -162,10 +162,10 @@ resource "upcloud_server" "worker" {
 }

 resource "upcloud_firewall_rules" "master" {
   for_each  = upcloud_server.master
   server_id = each.value.id

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.master_allowed_remote_ips

     content {
@@ -181,7 +181,7 @@ resource "upcloud_firewall_rules" "master" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = length(var.master_allowed_remote_ips) > 0 ? [1] : []

     content {
@@ -197,7 +197,7 @@ resource "upcloud_firewall_rules" "master" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.k8s_allowed_remote_ips

     content {
@@ -213,7 +213,7 @@ resource "upcloud_firewall_rules" "master" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = length(var.k8s_allowed_remote_ips) > 0 ? [1] : []

     content {
@@ -229,7 +229,7 @@ resource "upcloud_firewall_rules" "master" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.master_allowed_ports

     content {
@@ -245,97 +245,97 @@ resource "upcloud_firewall_rules" "master" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []

     content {
       action               = "accept"
       comment              = "UpCloud DNS"
       source_port_end      = "53"
       source_port_start    = "53"
       direction            = "in"
       family               = "IPv4"
       protocol             = firewall_rule.value
       source_address_end   = "94.237.40.9"
       source_address_start = "94.237.40.9"
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []

     content {
       action               = "accept"
       comment              = "UpCloud DNS"
       source_port_end      = "53"
       source_port_start    = "53"
       direction            = "in"
       family               = "IPv4"
       protocol             = firewall_rule.value
       source_address_end   = "94.237.127.9"
       source_address_start = "94.237.127.9"
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []

     content {
       action               = "accept"
       comment              = "UpCloud DNS"
       source_port_end      = "53"
       source_port_start    = "53"
       direction            = "in"
       family               = "IPv6"
       protocol             = firewall_rule.value
       source_address_end   = "2a04:3540:53::1"
       source_address_start = "2a04:3540:53::1"
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []

     content {
       action               = "accept"
       comment              = "UpCloud DNS"
       source_port_end      = "53"
       source_port_start    = "53"
       direction            = "in"
       family               = "IPv6"
       protocol             = firewall_rule.value
       source_address_end   = "2a04:3544:53::1"
       source_address_start = "2a04:3544:53::1"
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["udp"] : []

     content {
       action               = "accept"
       comment              = "NTP Port"
       source_port_end      = "123"
       source_port_start    = "123"
       direction            = "in"
       family               = "IPv4"
       protocol             = firewall_rule.value
       source_address_end   = "255.255.255.255"
       source_address_start = "0.0.0.0"
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["udp"] : []

     content {
       action            = "accept"
       comment           = "NTP Port"
       source_port_end   = "123"
       source_port_start = "123"
       direction         = "in"
       family            = "IPv6"
       protocol          = firewall_rule.value
     }
   }

@@ -351,10 +351,10 @@ resource "upcloud_firewall_rules" "master" {
 }

 resource "upcloud_firewall_rules" "k8s" {
   for_each  = upcloud_server.worker
   server_id = each.value.id

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.k8s_allowed_remote_ips

     content {
@@ -370,7 +370,7 @@ resource "upcloud_firewall_rules" "k8s" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = length(var.k8s_allowed_remote_ips) > 0 ? [1] : []

     content {
@@ -386,7 +386,7 @@ resource "upcloud_firewall_rules" "k8s" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.worker_allowed_ports

     content {
@@ -402,97 +402,97 @@ resource "upcloud_firewall_rules" "k8s" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []

     content {
       action               = "accept"
       comment              = "UpCloud DNS"
       source_port_end      = "53"
       source_port_start    = "53"
       direction            = "in"
       family               = "IPv4"
       protocol             = firewall_rule.value
       source_address_end   = "94.237.40.9"
       source_address_start = "94.237.40.9"
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []

     content {
       action               = "accept"
       comment              = "UpCloud DNS"
       source_port_end      = "53"
       source_port_start    = "53"
       direction            = "in"
       family               = "IPv4"
       protocol             = firewall_rule.value
       source_address_end   = "94.237.127.9"
       source_address_start = "94.237.127.9"
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []

     content {
       action               = "accept"
       comment              = "UpCloud DNS"
       source_port_end      = "53"
       source_port_start    = "53"
       direction            = "in"
       family               = "IPv6"
       protocol             = firewall_rule.value
       source_address_end   = "2a04:3540:53::1"
       source_address_start = "2a04:3540:53::1"
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []

     content {
       action               = "accept"
       comment              = "UpCloud DNS"
       source_port_end      = "53"
       source_port_start    = "53"
       direction            = "in"
       family               = "IPv6"
       protocol             = firewall_rule.value
       source_address_end   = "2a04:3544:53::1"
       source_address_start = "2a04:3544:53::1"
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["udp"] : []

     content {
       action               = "accept"
       comment              = "NTP Port"
       source_port_end      = "123"
       source_port_start    = "123"
       direction            = "in"
       family               = "IPv4"
       protocol             = firewall_rule.value
       source_address_end   = "255.255.255.255"
       source_address_start = "0.0.0.0"
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["udp"] : []

     content {
       action            = "accept"
       comment           = "NTP Port"
       source_port_end   = "123"
       source_port_start = "123"
       direction         = "in"
       family            = "IPv6"
       protocol          = firewall_rule.value
     }
   }

@@ -535,9 +535,9 @@ resource "upcloud_loadbalancer_frontend" "lb_frontend" {

 resource "upcloud_loadbalancer_static_backend_member" "lb_backend_member" {
   for_each = {
-    for be_server in local.lb_backend_servers:
+    for be_server in local.lb_backend_servers :
     "${be_server.server_name}-lb-backend-${be_server.lb_name}" => be_server
     if var.loadbalancer_enabled
   }

   backend = upcloud_loadbalancer_backend.lb_backend[each.value.lb_name].id
@@ -550,9 +550,9 @@ resource "upcloud_loadbalancer_static_backend_member" "lb_backend_member" {
 }

 resource "upcloud_server_group" "server_groups" {
   for_each = var.server_groups
   title    = each.key
-  anti_affinity = each.value.anti_affinity
+  anti_affinity_policy = each.value.anti_affinity_policy
   labels   = {}
   members  = [for server in each.value.servers : merge(upcloud_server.master, upcloud_server.worker)[server].id]
 }
@@ -3,8 +3,8 @@ output "master_ip" {
   value = {
     for instance in upcloud_server.master :
     instance.hostname => {
-      "public_ip": instance.network_interface[0].ip_address
-      "private_ip": instance.network_interface[1].ip_address
+      "public_ip" : instance.network_interface[0].ip_address
+      "private_ip" : instance.network_interface[1].ip_address
     }
   }
 }
@@ -13,8 +13,8 @@ output "worker_ip" {
   value = {
     for instance in upcloud_server.worker :
     instance.hostname => {
-      "public_ip": instance.network_interface[0].ip_address
-      "private_ip": instance.network_interface[1].ip_address
+      "public_ip" : instance.network_interface[0].ip_address
+      "private_ip" : instance.network_interface[1].ip_address
     }
   }
 }
@@ -15,11 +15,11 @@ variable "private_network_cidr" {}
 variable "machines" {
   description = "Cluster machines"
   type = map(object({
     node_type = string
     plan      = string
     cpu       = string
     mem       = string
     disk_size = number
     additional_disks = map(object({
       size = number
       tier = string
@@ -99,7 +99,7 @@ variable "server_groups" {
   description = "Server groups"

   type = map(object({
-    anti_affinity = bool
+    anti_affinity_policy = string
     servers = list(string)
   }))
 }
@@ -2,8 +2,8 @@
 terraform {
   required_providers {
     upcloud = {
       source  = "UpCloudLtd/upcloud"
-      version = "~>2.7.1"
+      version = "~>2.12.0"
     }
   }
   required_version = ">= 0.13"
@@ -18,7 +18,7 @@ ssh_public_keys = [

 # check list of available plan https://developers.upcloud.com/1.3/7-plans/
 machines = {
-  "master-0" : {
+  "control-plane-0" : {
     "node_type" : "master",
     # plan to use instead of custom cpu/mem
     "plan" : null,
@@ -28,7 +28,7 @@ machines = {
     "mem" : "4096"
     # The size of the storage in GB
     "disk_size" : 250
-    "additional_disks": {}
+    "additional_disks" : {}
   },
   "worker-0" : {
     "node_type" : "worker",
@@ -40,7 +40,7 @@ machines = {
     "mem" : "4096"
     # The size of the storage in GB
     "disk_size" : 250
-    "additional_disks": {
+    "additional_disks" : {
       # "some-disk-name-1": {
       #   "size": 100,
       #   "tier": "maxiops",
@@ -61,7 +61,7 @@ machines = {
     "mem" : "4096"
     # The size of the storage in GB
     "disk_size" : 250
-    "additional_disks": {
+    "additional_disks" : {
      # "some-disk-name-1": {
      #   "size": 100,
      #   "tier": "maxiops",
@@ -82,7 +82,7 @@ machines = {
     "mem" : "4096"
     # The size of the storage in GB
     "disk_size" : 250
-    "additional_disks": {
+    "additional_disks" : {
      # "some-disk-name-1": {
      #   "size": 100,
      #   "tier": "maxiops",
@@ -118,7 +118,7 @@ master_allowed_ports = []
 worker_allowed_ports = []

 loadbalancer_enabled = false
 loadbalancer_plan    = "development"
 loadbalancers = {
   # "http" : {
   #   "port" : 80,
@@ -134,9 +134,9 @@ loadbalancers = {
 server_groups = {
   # "control-plane" = {
   #   servers = [
-  #     "master-0"
+  #     "control-plane-0"
   #   ]
-  #   anti_affinity = true
+  #   anti_affinity_policy = "strict"
   # },
   # "workers" = {
   #   servers = [
@@ -144,6 +144,6 @@ server_groups = {
   #     "worker-1",
   #     "worker-2"
   #   ]
-  #   anti_affinity = true
+  #   anti_affinity_policy = "yes"
   # }
 }
@@ -136,8 +136,8 @@ variable "server_groups" {
   description = "Server groups"

   type = map(object({
-    anti_affinity = bool
+    anti_affinity_policy = string
     servers = list(string)
   }))

   default = {}
@@ -3,7 +3,7 @@ terraform {
   required_providers {
     upcloud = {
       source  = "UpCloudLtd/upcloud"
-      version = "~>2.7.1"
+      version = "~>2.12.0"
     }
   }
   required_version = ">= 0.13"
@@ -13,6 +13,7 @@
 * CNI
   * [Calico](docs/calico.md)
   * [Flannel](docs/flannel.md)
+  * [Cilium](docs/cilium.md)
   * [Kube Router](docs/kube-router.md)
   * [Kube OVN](docs/kube-ovn.md)
   * [Weave](docs/weave.md)
@@ -29,7 +30,6 @@
 * [Equinix Metal](/docs/equinix-metal.md)
 * [vSphere](/docs/vsphere.md)
 * [Operating Systems](docs/bootstrap-os.md)
-  * [Debian](docs/debian.md)
   * [Flatcar Container Linux](docs/flatcar.md)
   * [Fedora CoreOS](docs/fcos.md)
   * [OpenSUSE](docs/opensuse.md)
@@ -32,7 +32,7 @@ Based on the table below and the available python version for your ansible host

 | Ansible Version | Python Version |
 |-----------------|----------------|
-| 2.14            | 3.9-3.11       |
+| >= 2.15.5       | 3.9-3.11       |

 ## Inventory

@@ -15,7 +15,7 @@ Kubespray can be installed as an [Ansible collection](https://docs.ansible.com/a
 collections:
   - name: https://github.com/kubernetes-sigs/kubespray
     type: git
-    version: v2.22.1
+    version: master # use the appropriate tag or branch for the version you need
 ```

 2. Install your collection
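In practice you would pin a release tag rather than `master`; a complete `requirements.yml` would look roughly like this (the tag shown is only an example):

```yaml
collections:
  - name: https://github.com/kubernetes-sigs/kubespray
    type: git
    version: v2.23.0  # pin to the release tag you intend to deploy
```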
@@ -7,10 +7,6 @@ Kubespray supports multiple ansible versions but only the default (5.x) gets wid

 ## CentOS 8

-CentOS 8 / Oracle Linux 8,9 / AlmaLinux 8,9 / Rocky Linux 8,9 ship only with iptables-nft (ie without iptables-legacy similar to RHEL8)
-The only tested configuration for now is using Calico CNI
-You need to add `calico_iptables_backend: "NFT"` to your configuration.
-
 If you have containers that are using iptables in the host network namespace (`hostNetwork=true`),
 you need to ensure they are using iptables-nft.
 An example how k8s do the autodetection can be found [in this PR](https://github.com/kubernetes/kubernetes/pull/82966)
@@ -11,7 +11,7 @@ amazon | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
 centos7 | :white_check_mark: | :x: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: |
 debian10 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: |
 debian11 | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
-debian12 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
+debian12 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
 fedora37 | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: |
 fedora38 | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | :x: |
 opensuse | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
@@ -44,6 +44,8 @@ containerd_registries_mirrors:
image_command_tool: crictl
```

The `containerd_registries` and `containerd_insecure_registries` configs are deprecated.

### Containerd Runtimes

Containerd supports multiple runtime configurations that can be used with
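For reference, the replacement for the deprecated variables is the `containerd_registries_mirrors` structure; a minimal sketch mirroring Docker Hub (values are illustrative, not defaults):

```yaml
containerd_registries_mirrors:
  - prefix: docker.io
    mirrors:
      - host: https://registry-1.docker.io
        capabilities: ["pull", "resolve"]
        skip_verify: false
```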
@@ -130,3 +132,13 @@ containerd_registries_mirrors:
[RuntimeClass]: https://kubernetes.io/docs/concepts/containers/runtime-class/
[runtime classes in containerd]: https://github.com/containerd/containerd/blob/main/docs/cri/config.md#runtime-classes
[runtime-spec]: https://github.com/opencontainers/runtime-spec

### Optional : NRI

[Node Resource Interface](https://github.com/containerd/nri) (NRI) is disabled by default for containerd. If you
are using containerd version v1.7.0 or above, then you can enable it with the
following configuration:

```yaml
nri_enabled: true
```
@@ -42,6 +42,22 @@ crio_registries:

[CRI-O]: https://cri-o.io/

The following is a method to enable insecure registries.

```yaml
crio_insecure_registries:
  - 10.0.0.2:5000
```

You can configure authentication for these registries after `crio_insecure_registries`.

```yaml
crio_registry_auth:
  - registry: 10.0.0.2:5000
    username: user
    password: pass
```

## Note about user namespaces

CRI-O has support for user namespaces. This feature is optional and can be enabled by setting the following two variables.
@@ -62,3 +78,13 @@ The `allowed_annotations` configures `crio.conf` accordingly.

The `crio_remap_enable` configures the `/etc/subuid` and `/etc/subgid` files to add an entry for the **containers** user.
By default, 16M uids and gids are reserved for user namespaces (256 pods * 65536 uids/gids) at the end of the uid/gid space.

## Optional : NRI

[Node Resource Interface](https://github.com/containerd/nri) (NRI) is disabled by default for CRI-O. If you
are using CRI-O version v1.26.0 or above, then you can enable it with the
following configuration:

```yaml
nri_enabled: true
```
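The "16M uids and gids" figure above is just the product of the per-pod range and the pod count; as a quick sanity check:

```python
# 256 pods, each with a full 16-bit uid/gid range (65536 ids),
# are reserved at the end of the uid/gid space.
uids_per_pod = 65536
pods = 256
reserved = uids_per_pod * pods
print(reserved)  # 16777216, i.e. 16M
```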
@@ -1,41 +0,0 @@
# Debian Jessie

Debian Jessie installation Notes:

- Add

  ```ini
  GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
  ```

  to `/etc/default/grub`. Then update with

  ```ShellSession
  sudo update-grub
  sudo update-grub2
  sudo reboot
  ```

- Add the [backports](https://backports.debian.org/Instructions/) which contain Systemd 2.30 and update Systemd.

  ```ShellSession
  apt-get -t jessie-backports install systemd
  ```

  (Necessary because the default Systemd version (2.15) does not support the "Delegate" directive in service files)

- Add the Ansible repository and install Ansible to get a proper version

  ```ShellSession
  sudo add-apt-repository ppa:ansible/ansible
  sudo apt-get update
  sudo apt-get install ansible
  ```

- Install Jinja2 and Python-Netaddr

  ```ShellSession
  sudo apt-get install python-jinja2=2.8-1~bpo8+1 python-netaddr
  ```

Now you can continue with [Preparing your deployment](getting-started.md#starting-custom-deployment)
@@ -97,3 +97,9 @@ Adding extra options to pass to the docker daemon:
## This string should be exactly as you wish it to appear.
docker_options: ""
```

For Debian-based distributions, set the path to store the GPG key to avoid using the default one used by the `apt_key` module (e.g. /etc/apt/trusted.gpg):

```yaml
docker_repo_key_keyring: /etc/apt/trusted.gpg.d/docker.gpg
```
@@ -54,6 +54,11 @@ kube_apiserver_enable_admission_plugins:
  - PodNodeSelector
  - PodSecurity
kube_apiserver_admission_control_config_file: true
# Creates config file for PodNodeSelector
# kube_apiserver_admission_plugins_needs_configuration: [PodNodeSelector]
# Define the default node selector, by default all the workloads will be scheduled on nodes
# with label network=srv1
# kube_apiserver_admission_plugins_podnodeselector_default_node_selector: "network=srv1"
# EventRateLimit plugin configuration
kube_apiserver_admission_event_rate_limits:
  limit_1:
@@ -115,7 +120,7 @@ kube_pod_security_default_enforce: restricted
Let's take a deeper look at the resulting **kubernetes** configuration:

* The `anonymous-auth` (on `kube-apiserver`) is set to `true` by default. This is fine, because it is considered safe if you enable `RBAC` for the `authorization-mode`.
* The `enable-admission-plugins` includes `PodSecurity` (for more details, please take a look here: <https://kubernetes.io/docs/concepts/security/pod-security-admission/>). We also set the `EventRateLimit` plugin, providing additional configuration files (that are automatically created under the hood and mounted inside the `kube-apiserver` container) to make it work.
* The `encryption-provider-config` provides encryption at rest. This means that the `kube-apiserver` encrypts data before it is stored in `etcd`, so the data is completely unreadable from `etcd` (in case an attacker is able to exploit it).
* The `rotateCertificates` in `KubeletConfiguration` is set to `true` along with `serverTLSBootstrap`. This can be used as an alternative to the `tlsCertFile` and `tlsPrivateKeyFile` parameters, and additionally it automatically generates certificates by itself. By default the CSRs are approved automatically via [kubelet-csr-approver](https://github.com/postfinance/kubelet-csr-approver). You can customize the approval configuration by modifying Helm values via `kubelet_csr_approver_values`.
See <https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/> for more information on the subject.
@@ -1,6 +1,6 @@
# AWS ALB Ingress Controller

**NOTE:** The current image version is `v1.1.9`. Please file any issues you find and note the version used.

The AWS ALB Ingress Controller satisfies Kubernetes [ingress resources](https://kubernetes.io/docs/concepts/services-networking/ingress/) by provisioning [Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html).
@@ -70,3 +70,9 @@ If using [control plane load-balancing](https://kube-vip.io/docs/about/architect
```yaml
kube_vip_lb_enable: true
```

In addition, the [load-balancing method](https://kube-vip.io/docs/installation/flags/#environment-variables) can be changed:

```yaml
kube_vip_lb_fwdmethod: masquerade
```
@@ -29,10 +29,6 @@ metallb_config:
  nodeselector:
    kubernetes.io/os: linux
  tolerations:
    - key: "node-role.kubernetes.io/control-plane"
      operator: "Equal"
      value: ""
@@ -73,7 +69,6 @@ metallb_config:
    primary:
      ip_range:
        - 192.0.1.0-192.0.1.254

    pool1:
      ip_range:
@@ -82,8 +77,8 @@ metallb_config:

    pool2:
      ip_range:
        - 192.0.3.0/24
      avoid_buggy_ips: true  # When set to true, .0 and .255 addresses will be avoided.
```

## Layer2 Mode
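As a side note on `avoid_buggy_ips`: for a `/24` pool such as `192.0.3.0/24`, skipping the `.0` and `.255` addresses leaves 254 assignable IPs, which matches what Python's `ipaddress` module reports for host addresses:

```python
import ipaddress

pool = ipaddress.ip_network("192.0.3.0/24")
# hosts() excludes the network (.0) and broadcast (.255) addresses,
# the same two addresses avoid_buggy_ips skips in a /24 pool.
usable = list(pool.hosts())
print(len(usable))            # 254
print(usable[0], usable[-1])  # 192.0.3.1 192.0.3.254
```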
@@ -95,7 +95,7 @@ If you use the settings like the one above, you'll need to define in your invent

* `registry_host`: Container image registry. If you _don't_ use the same repository path for the container images as
  the ones defined
  in [kubespray-defaults's role defaults](https://github.com/kubernetes-sigs/kubespray/blob/master/roles/kubespray-defaults/defaults/main/download.yml),
  you need to override the `*_image_repo` for these container images. If you want to make your life easier, use the
  same repository path and you won't have to override anything else.
* `registry_addr`: Container image registry, but only the [domain or ip]:[port] part.
@@ -29,10 +29,6 @@ If the RHEL 7/8 hosts are already registered to a valid Red Hat support subscrip

## RHEL 8

If you have containers that are using iptables in the host network namespace (`hostNetwork=true`),
you need to ensure they are using iptables-nft.
An example of how k8s does the autodetection can be found [in this PR](https://github.com/kubernetes/kubernetes/pull/82966)
@@ -1,6 +1,6 @@
# Node Layouts

There are six node layout types: `default`, `separate`, `ha`, `scale`, `all-in-one`, and `node-etcd-client`.

`default` is a non-HA two-node setup with one separate `kube_node`
and the `etcd` group merged with the `kube_control_plane`.
@@ -16,6 +16,11 @@ in the Ansible inventory. This helps test TLS certificate generation at scale
to prevent regressions and profile certain long-running tasks. These nodes are
never actually deployed, but certificates are generated for them.

The `all-in-one` layout uses a single node with `kube_control_plane`, `etcd` and `kube_node` merged.

The `node-etcd-client` layout consists of a 4-node cluster: all of them in `kube_node`, the first 3 in `etcd`, and only one in `kube_control_plane`.
This is necessary to test setups requiring that nodes are etcd clients (use of cilium as `network_plugin`, for instance).

Note, the canal network plugin deploys flannel as well, plus the calico policy controller.

## Test cases
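For illustration, the `all-in-one` layout corresponds to an inventory where a single host appears in all three groups (the hostname is hypothetical):

```ini
[kube_control_plane]
node1

[etcd]
node1

[kube_node]
node1
```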
@@ -186,6 +186,8 @@ Stack](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.m
* *containerd_additional_runtimes* - Sets the additional Containerd runtimes used by the Kubernetes CRI plugin.
  [Default config](https://github.com/kubernetes-sigs/kubespray/blob/master/roles/container-engine/containerd/defaults/main.yml) can be overridden in inventory vars.

* *crio_criu_support_enabled* - When set to `true`, enables container checkpoint/restore in CRI-O. [CRIU](https://criu.org/Installation) must be installed on the host when dumping/restoring checkpoints. It is also recommended to enable the `ContainerCheckpoint` feature gate so that the kubelet exposes a higher-level API to simplify the operations (**Note**: this is still experimental, just for container analytics so far). You can follow the [documentation](https://kubernetes.io/blog/2022/12/05/forensic-container-checkpointing-alpha/).

* *http_proxy/https_proxy/no_proxy/no_proxy_exclude_workers/additional_no_proxy* - Proxy variables for deploying behind a
  proxy. Note that no_proxy defaults to all internal cluster IPs and hostnames
  that correspond to each node.
@@ -252,8 +254,6 @@ node_taints:
  - "node.example.com/external=true:NoSchedule"
```

* *kubernetes_audit* - When set to `true`, enables Auditing.
  The auditing parameters can be tuned via the following variables (whose default values are shown below):
  * `audit_log_path`: /var/log/audit/kube-apiserver-audit.log
@@ -271,6 +271,7 @@ node_taints:
  * `audit_webhook_mode`: batch
  * `audit_webhook_batch_max_size`: 100
  * `audit_webhook_batch_max_wait`: 1s

* *kubectl_alias* - Bash alias for kubectl, to interact with the Kubernetes cluster more easily.

### Custom flags for Kube Components
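A common setting for the alias variable mentioned above (the value shown is a conventional choice, not a default):

```yaml
kubectl_alias: k
```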
@@ -12,7 +12,7 @@
  hosts: kube_control_plane[0]
  tasks:
    - name: Include kubespray-default variables
      include_vars: ../roles/kubespray-defaults/defaults/main/main.yml
    - name: Copy get_cinder_pvs.sh to master
      copy:
        src: get_cinder_pvs.sh
@@ -2,13 +2,15 @@
namespace: kubernetes_sigs
description: Deploy a production ready Kubernetes cluster
name: kubespray
version: 2.24.0
readme: README.md
authors:
  - luksi1
tags:
  - infrastructure
repository: https://github.com/kubernetes-sigs/kubespray
dependencies:
  ansible.utils: '>=2.5.0'
build_ignore:
  - .github
  - '*.tar.gz'
@@ -57,7 +57,7 @@ loadbalancer_apiserver_healthcheck_port: 8081
# https_proxy: ""
# https_proxy_cert_file: ""

## Refer to roles/kubespray-defaults/defaults/main/main.yml before modifying no_proxy
# no_proxy: ""

## Some problems may occur when downloading files over https proxy due to ansible bug
@@ -1,5 +1,8 @@
# Registries defined within cri-o.
# crio_insecure_registries:
#   - 10.0.0.2:5000

# Auth config for the registries
# crio_registry_auth:
#   - registry: 10.0.0.2:5000
#     username: user
@@ -24,3 +24,12 @@
### ETCD: disable peer client cert authentication.
# This affects ETCD_PEER_CLIENT_CERT_AUTH variable
# etcd_peer_client_auth: true

## Enable distributed tracing
## To enable this experimental feature, set etcd_experimental_enable_distributed_tracing: true, along with
## etcd_experimental_distributed_tracing_sample_rate to choose how many samples to collect per million spans;
## the default sampling rate is 0 https://etcd.io/docs/v3.5/op-guide/monitoring/#distributed-tracing
# etcd_experimental_enable_distributed_tracing: false
# etcd_experimental_distributed_tracing_sample_rate: 100
# etcd_experimental_distributed_tracing_address: "localhost:4317"
# etcd_experimental_distributed_tracing_service_name: etcd
@@ -103,10 +103,6 @@ ingress_publish_status_address: ""
# ingress_nginx_nodeselector:
#   kubernetes.io/os: "linux"
# ingress_nginx_tolerations:
#   - key: "node-role.kubernetes.io/control-plane"
#     operator: "Equal"
#     value: ""
@@ -140,8 +136,6 @@ ingress_alb_enabled: false
cert_manager_enabled: false
# cert_manager_namespace: "cert-manager"
# cert_manager_tolerations:
#   - key: node-role.kubernetes.io/control-plane
#     effect: NoSchedule
# cert_manager_affinity:
@@ -176,33 +170,27 @@ cert_manager_enabled: false
# MetalLB deployment
metallb_enabled: false
metallb_speaker_enabled: "{{ metallb_enabled }}"
# metallb_version: v0.13.9
# metallb_protocol: "layer2"
# metallb_port: "7472"
# metallb_memberlist_port: "7946"
# metallb_config:
#   speaker:
#     nodeselector:
#       kubernetes.io/os: "linux"
#     tolerations:
#       - key: "node-role.kubernetes.io/control-plane"
#         operator: "Equal"
#         value: ""
#         effect: "NoSchedule"
#   controller:
#     nodeselector:
#       kubernetes.io/os: "linux"
#     tolerations:
#       - key: "node-role.kubernetes.io/control-plane"
#         operator: "Equal"
#         value: ""
#         effect: "NoSchedule"
#   address_pools:
#     primary:
#       ip_range:
@@ -244,7 +232,7 @@ metallb_speaker_enabled: "{{ metallb_enabled }}"
#       - pool2

argocd_enabled: false
# argocd_version: v2.8.4
# argocd_namespace: argocd
# Default password:
#   - https://argo-cd.readthedocs.io/en/stable/getting_started/#4-login-using-the-cli
@@ -259,3 +247,14 @@ argocd_enabled: false
# The plugin manager for kubectl
krew_enabled: false
krew_root_dir: "/usr/local/krew"

# Kube VIP
kube_vip_enabled: false
# kube_vip_arp_enabled: true
# kube_vip_controlplane_enabled: true
# kube_vip_address: 192.168.56.120
# loadbalancer_apiserver:
#   address: "{{ kube_vip_address }}"
#   port: 6443
# kube_vip_interface: eth0
# kube_vip_services_enabled: false
@@ -17,7 +17,7 @@ kube_token_dir: "{{ kube_config_dir }}/tokens"
kube_api_anonymous_auth: true

## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.28.6

# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
@@ -243,15 +243,6 @@ kubernetes_audit: false
# kubelet_config_dir:
default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"

# Make a copy of kubeconfig on the host that runs Ansible in {{ inventory_dir }}/artifacts
# kubeconfig_localhost: false
# Use ansible_host as external api ip when copying over kubeconfig.
@@ -0,0 +1,51 @@
---
# custom_cni network plugin configuration
# There are two deployment options to choose from, select one

## OPTION 1 - Static manifest files
## With this option, the referred manifest files will be deployed
## as if the `kubectl apply -f` method was used with them.
#
## List of Kubernetes resource manifest files
## See tests/files/custom_cni/README.md for example
# custom_cni_manifests: []

## OPTION 1 EXAMPLE - Cilium static manifests in Kubespray tree
# custom_cni_manifests:
#   - "{{ playbook_dir }}/../tests/files/custom_cni/cilium.yaml"

## OPTION 2 - Helm chart application
## This allows the CNI backend to be deployed to a Kubespray cluster
## as a common Helm application.
#
## Helm release name - how the local instance of the deployed chart will be named
# custom_cni_chart_release_name: ""
#
## Kubernetes namespace to deploy into
# custom_cni_chart_namespace: "kube-system"
#
## Helm repository name - how the local record of the Helm repository will be named
# custom_cni_chart_repository_name: ""
#
## Helm repository URL
# custom_cni_chart_repository_url: ""
#
## Helm chart reference - path to the chart in the repository
# custom_cni_chart_ref: ""
#
## Helm chart version
# custom_cni_chart_version: ""
#
## Custom Helm values to be used for deployment
# custom_cni_chart_values: {}

## OPTION 2 EXAMPLE - Cilium deployed from official public Helm chart
# custom_cni_chart_namespace: kube-system
# custom_cni_chart_release_name: cilium
# custom_cni_chart_repository_name: cilium
# custom_cni_chart_repository_url: https://helm.cilium.io
# custom_cni_chart_ref: cilium/cilium
# custom_cni_chart_version: 1.14.3
# custom_cni_chart_values:
#   cluster:
#     name: "cilium-demo"
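Under the example values shown for Option 2, the deployment behaves roughly like running the corresponding Helm commands by hand (a sketch, not the literal implementation):

```ShellSession
helm repo add cilium https://helm.cilium.io
helm upgrade --install cilium cilium/cilium \
  --namespace kube-system \
  --version 1.14.3 \
  --set cluster.name=cilium-demo
```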
@@ -1,4 +1,10 @@
# See roles/network_plugin/kube-router/defaults/main.yml

# Kube-router version
# Defaults to v2
# kube_router_version: "v2.0.0"
# Uncomment to use v1 (deprecated)
# kube_router_version: "v1.6.0"

# Enables Pod Networking -- Advertises and learns the routes to Pods via iBGP
# kube_router_run_router: true
@@ -19,6 +25,9 @@
|
|||||||
# Add LoadBalancer IP of service status as set by the LB provider to the RIB so that it gets advertised to the BGP peers.
|
# Add LoadBalancer IP of service status as set by the LB provider to the RIB so that it gets advertised to the BGP peers.
|
||||||
# kube_router_advertise_loadbalancer_ip: false
|
# kube_router_advertise_loadbalancer_ip: false
|
||||||
|
|
||||||
|
# Enables BGP graceful restarts
|
||||||
|
# kube_router_bgp_graceful_restart: true
|
||||||
|
|
||||||
# Adjust manifest of kube-router daemonset template with DSR needed changes
|
# Adjust manifest of kube-router daemonset template with DSR needed changes
|
||||||
# kube_router_enable_dsr: false
|
# kube_router_enable_dsr: false
|
||||||
|
|
||||||
|
|||||||
@@ -1,2 +1,2 @@
 ---
-requires_ansible: '>=2.14.0'
+requires_ansible: '>=2.15.5'
@@ -4,7 +4,7 @@ FROM ubuntu:jammy-20230308
 # Pip needs this as well at the moment to install ansible
 # (and potentially other packages)
 # See: https://github.com/pypa/pip/issues/10219
-ENV VAGRANT_VERSION=2.3.4 \
+ENV VAGRANT_VERSION=2.3.7 \
     VAGRANT_DEFAULT_PROVIDER=libvirt \
     VAGRANT_ANSIBLE_TAGS=facts \
     LANG=C.UTF-8 \

@@ -40,11 +40,11 @@ WORKDIR /kubespray
 
 RUN --mount=type=bind,target=./requirements.txt,src=./requirements.txt \
     --mount=type=bind,target=./tests/requirements.txt,src=./tests/requirements.txt \
-    --mount=type=bind,target=./roles/kubespray-defaults/defaults/main.yaml,src=./roles/kubespray-defaults/defaults/main.yaml \
+    --mount=type=bind,target=./roles/kubespray-defaults/defaults/main/main.yml,src=./roles/kubespray-defaults/defaults/main/main.yml \
     update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
     && pip install --no-compile --no-cache-dir pip -U \
     && pip install --no-compile --no-cache-dir -r tests/requirements.txt \
-    && KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
+    && KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main/main.yml) \
     && curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
     && echo $(curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
     && chmod a+x /usr/local/bin/kubectl \
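The Dockerfile hunk above moves the kubespray defaults file to `roles/kubespray-defaults/defaults/main/main.yml` and keeps extracting the Kubernetes version with `sed -n 's/^kube_version: //p'`. A minimal Python sketch of that same extraction (the sample YAML content below is hypothetical, not taken from the real defaults file):

```python
import re

def extract_kube_version(defaults_text: str) -> str:
    """Mimic `sed -n 's/^kube_version: //p'`: return whatever follows
    the `kube_version: ` prefix on the first matching line."""
    for line in defaults_text.splitlines():
        m = re.match(r"^kube_version: (.*)$", line)
        if m:
            return m.group(1)
    raise ValueError("kube_version not found")

# Hypothetical fragment of the defaults file
sample = "kube_api_anonymous_auth: true\nkube_version: v1.28.2\n"
print(extract_kube_version(sample))  # -> v1.28.2
```

Because `sed -n ... p` only prints substituted lines, any other key in the file is ignored, which is what the `for` loop with `re.match` reproduces.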
@@ -4,8 +4,8 @@
   gather_facts: false
   become: no
   vars:
-    minimal_ansible_version: 2.14.0
-    maximal_ansible_version: 2.15.0
+    minimal_ansible_version: 2.15.5  # 2.15 versions before 2.15.5 are known to be buggy for kubespray
+    maximal_ansible_version: 2.17.0
     ansible_connection: local
   tags: always
   tasks:
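The hunk above raises the supported Ansible window to at least 2.15.5 and below 2.17. A small Python sketch of that bounds check (the exclusive upper bound is an assumption here; the check task itself is not shown in this diff):

```python
def version_tuple(v: str) -> tuple:
    # "2.15.5" -> (2, 15, 5), so tuples compare numerically per component
    return tuple(int(p) for p in v.split("."))

def ansible_version_supported(version: str,
                              minimal: str = "2.15.5",
                              maximal: str = "2.17.0") -> bool:
    """Bounds from the diff: supported if minimal <= version < maximal."""
    return version_tuple(minimal) <= version_tuple(version) < version_tuple(maximal)

print(ansible_version_supported("2.16.3"))  # True
print(ansible_version_supported("2.15.4"))  # False: 2.15 before 2.15.5 is known buggy
print(ansible_version_supported("2.17.0"))  # False
```

Tuple comparison avoids the classic string-compare pitfall where "2.9" sorts after "2.15".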
@@ -1,5 +1,8 @@
 ---
-# This is an inventory compatibility playbook to ensure we keep compatibility with old style group names
+- name: Check ansible version
+  import_playbook: ansible_version.yml
+
+# These are inventory compatibility tasks to ensure we keep compatibility with old style group names
 
 - name: Add kube-master nodes to kube_control_plane
   hosts: kube-master

@@ -45,3 +48,11 @@
 - name: Add nodes to no-floating group
   group_by:
     key: 'no_floating'
+
+- name: Install bastion ssh config
+  hosts: bastion[0]
+  gather_facts: False
+  environment: "{{ proxy_disable_env }}"
+  roles:
+    - { role: kubespray-defaults }
+    - { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
@@ -1,20 +1,8 @@
 ---
-- name: Check ansible version
-  import_playbook: ansible_version.yml
-
-- name: Ensure compatibility with old groups
-  import_playbook: legacy_groups.yml
-
-- name: Install bastion ssh config
-  hosts: bastion[0]
-  gather_facts: False
-  environment: "{{ proxy_disable_env }}"
-  roles:
-    - { role: kubespray-defaults }
-    - { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
+- name: Common tasks for every playbooks
+  import_playbook: boilerplate.yml
 
 - name: Gather facts
-  tags: always
   import_playbook: facts.yml
 
 - name: Prepare for etcd install

@@ -29,35 +17,7 @@
     - { role: download, tags: download, when: "not skip_downloads" }
 
 - name: Install etcd
-  hosts: etcd:kube_control_plane
-  gather_facts: False
-  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
-  environment: "{{ proxy_disable_env }}"
-  roles:
-    - { role: kubespray-defaults }
-    - role: etcd
-      tags: etcd
-      vars:
-        etcd_cluster_setup: true
-        etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
-      when: etcd_deployment_type != "kubeadm"
-
-- name: Install etcd certs on nodes if required
-  hosts: k8s_cluster
-  gather_facts: False
-  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
-  environment: "{{ proxy_disable_env }}"
-  roles:
-    - { role: kubespray-defaults }
-    - role: etcd
-      tags: etcd
-      vars:
-        etcd_cluster_setup: false
-        etcd_events_cluster_setup: false
-      when:
-        - etcd_deployment_type != "kubeadm"
-        - kube_network_plugin in ["calico", "flannel", "canal", "cilium"] or cilium_deploy_additionally | default(false) | bool
-        - kube_network_plugin != "calico" or calico_datastore == "etcd"
+  import_playbook: install_etcd.yml
 
 - name: Install Kubernetes nodes
   hosts: k8s_cluster
playbooks/install_etcd.yml (new file, 29 lines)
@@ -0,0 +1,29 @@
+---
+- name: Add worker nodes to the etcd play if needed
+  hosts: kube_node
+  roles:
+    - { role: kubespray-defaults }
+  tasks:
+    - name: Check if nodes needs etcd client certs (depends on network_plugin)
+      group_by:
+        key: "_kubespray_needs_etcd"
+      when:
+        - kube_network_plugin in ["flannel", "canal", "cilium"] or
+          (cilium_deploy_additionally | default(false)) or
+          (kube_network_plugin == "calico" and calico_datastore == "etcd")
+        - etcd_deployment_type != "kubeadm"
+      tags: etcd
+
+- name: Install etcd
+  hosts: etcd:kube_control_plane:_kubespray_needs_etcd
+  gather_facts: False
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  environment: "{{ proxy_disable_env }}"
+  roles:
+    - { role: kubespray-defaults }
+    - role: etcd
+      tags: etcd
+      vars:
+        etcd_cluster_setup: true
+        etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
+      when: etcd_deployment_type != "kubeadm"
@@ -1,17 +1,6 @@
 ---
-- name: Check ansible version
-  import_playbook: ansible_version.yml
-
-- name: Ensure compatibility with old groups
-  import_playbook: legacy_groups.yml
-
-- name: Install bastion ssh config
-  hosts: bastion[0]
-  gather_facts: False
-  environment: "{{ proxy_disable_env }}"
-  roles:
-    - { role: kubespray-defaults}
-    - { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
+- name: Common tasks for every playbooks
+  import_playbook: boilerplate.yml
 
 - name: Recover etcd
   hosts: etcd[0]
@@ -1,17 +1,6 @@
 ---
-- name: Check ansible version
-  import_playbook: ansible_version.yml
-
-- name: Ensure compatibility with old groups
-  import_playbook: legacy_groups.yml
-
-- name: Install bastion ssh config
-  hosts: bastion[0]
-  gather_facts: False
-  environment: "{{ proxy_disable_env }}"
-  roles:
-    - { role: kubespray-defaults }
-    - { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
+- name: Common tasks for every playbooks
+  import_playbook: boilerplate.yml
 
 - name: Confirm node removal
   hosts: "{{ node | default('etcd:k8s_cluster:calico_rr') }}"
@@ -1,17 +1,6 @@
 ---
-- name: Check ansible version
-  import_playbook: ansible_version.yml
-
-- name: Ensure compatibility with old groups
-  import_playbook: legacy_groups.yml
-
-- name: Install bastion ssh config
-  hosts: bastion[0]
-  gather_facts: False
-  environment: "{{ proxy_disable_env }}"
-  roles:
-    - { role: kubespray-defaults}
-    - { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
+- name: Common tasks for every playbooks
+  import_playbook: boilerplate.yml
 
 - name: Gather facts
   import_playbook: facts.yml
@@ -1,20 +1,8 @@
 ---
-- name: Check ansible version
-  import_playbook: ansible_version.yml
-
-- name: Ensure compatibility with old groups
-  import_playbook: legacy_groups.yml
-
-- name: Install bastion ssh config
-  hosts: bastion[0]
-  gather_facts: False
-  environment: "{{ proxy_disable_env }}"
-  roles:
-    - { role: kubespray-defaults }
-    - { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
+- name: Common tasks for every playbooks
+  import_playbook: boilerplate.yml
 
 - name: Gather facts
-  tags: always
   import_playbook: facts.yml
 
 - name: Generate the etcd certificates beforehand
@@ -1,20 +1,8 @@
 ---
-- name: Check ansible version
-  import_playbook: ansible_version.yml
-
-- name: Ensure compatibility with old groups
-  import_playbook: legacy_groups.yml
-
-- name: Install bastion ssh config
-  hosts: bastion[0]
-  gather_facts: False
-  environment: "{{ proxy_disable_env }}"
-  roles:
-    - { role: kubespray-defaults }
-    - { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
+- name: Common tasks for every playbooks
+  import_playbook: boilerplate.yml
 
 - name: Gather facts
-  tags: always
   import_playbook: facts.yml
 
 - name: Download images to ansible host cache via first kube_control_plane node

@@ -48,35 +36,7 @@
     - { role: container-engine, tags: "container-engine", when: deploy_container_engine }
 
 - name: Install etcd
-  hosts: etcd:kube_control_plane
-  gather_facts: False
-  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
-  environment: "{{ proxy_disable_env }}"
-  roles:
-    - { role: kubespray-defaults }
-    - role: etcd
-      tags: etcd
-      vars:
-        etcd_cluster_setup: true
-        etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
-      when: etcd_deployment_type != "kubeadm"
-
-- name: Install etcd certs on nodes if required
-  hosts: k8s_cluster
-  gather_facts: False
-  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
-  environment: "{{ proxy_disable_env }}"
-  roles:
-    - { role: kubespray-defaults }
-    - role: etcd
-      tags: etcd
-      vars:
-        etcd_cluster_setup: false
-        etcd_events_cluster_setup: false
-      when:
-        - etcd_deployment_type != "kubeadm"
-        - kube_network_plugin in ["calico", "flannel", "canal", "cilium"] or cilium_deploy_additionally | default(false) | bool
-        - kube_network_plugin != "calico" or calico_datastore == "etcd"
+  import_playbook: install_etcd.yml
 
 - name: Handle upgrades to master components first to maintain backwards compat.
   gather_facts: False
@@ -1,9 +1,9 @@
-ansible==7.6.0
-cryptography==41.0.1
+ansible==8.5.0
+cryptography==41.0.4
 jinja2==3.1.2
 jmespath==1.0.1
 MarkupSafe==2.1.3
-netaddr==0.8.0
+netaddr==0.9.0
 pbr==5.11.1
-ruamel.yaml==0.17.31
-ruamel.yaml.clib==0.2.7
+ruamel.yaml==0.17.35
+ruamel.yaml.clib==0.2.8
@@ -1,43 +0,0 @@
-import os
-from pathlib import Path
-
-import testinfra.utils.ansible_runner
-import yaml
-from ansible.cli.playbook import PlaybookCLI
-from ansible.playbook import Playbook
-
-testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
-    os.environ["MOLECULE_INVENTORY_FILE"]
-).get_hosts("all")
-
-
-def read_playbook(playbook):
-    cli_args = [os.path.realpath(playbook), testinfra_hosts]
-    cli = PlaybookCLI(cli_args)
-    cli.parse()
-    loader, inventory, variable_manager = cli._play_prereqs()
-
-    pb = Playbook.load(cli.args[0], variable_manager, loader)
-
-    for play in pb.get_plays():
-        yield variable_manager.get_vars(play)
-
-
-def get_playbook():
-    playbooks_path = Path(__file__).parent.parent
-    with open(os.path.join(playbooks_path, "molecule.yml"), "r") as yamlfile:
-        data = yaml.load(yamlfile, Loader=yaml.FullLoader)
-        if "playbooks" in data["provisioner"].keys():
-            if "converge" in data["provisioner"]["playbooks"].keys():
-                return data["provisioner"]["playbooks"]["converge"]
-        else:
-            return os.path.join(playbooks_path, "converge.yml")
-
-
-def test_user(host):
-    for vars in read_playbook(get_playbook()):
-        assert host.user(vars["user"]["name"]).exists
-        if "group" in vars["user"].keys():
-            assert host.group(vars["user"]["group"]).exists
-        else:
-            assert host.group(vars["user"]["name"]).exists
@@ -1,40 +0,0 @@
-import os
-from pathlib import Path
-
-import testinfra.utils.ansible_runner
-import yaml
-from ansible.cli.playbook import PlaybookCLI
-from ansible.playbook import Playbook
-
-testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
-    os.environ["MOLECULE_INVENTORY_FILE"]
-).get_hosts("all")
-
-
-def read_playbook(playbook):
-    cli_args = [os.path.realpath(playbook), testinfra_hosts]
-    cli = PlaybookCLI(cli_args)
-    cli.parse()
-    loader, inventory, variable_manager = cli._play_prereqs()
-
-    pb = Playbook.load(cli.args[0], variable_manager, loader)
-
-    for play in pb.get_plays():
-        yield variable_manager.get_vars(play)
-
-
-def get_playbook():
-    playbooks_path = Path(__file__).parent.parent
-    with open(os.path.join(playbooks_path, "molecule.yml"), "r") as yamlfile:
-        data = yaml.load(yamlfile, Loader=yaml.FullLoader)
-        if "playbooks" in data["provisioner"].keys():
-            if "converge" in data["provisioner"]["playbooks"].keys():
-                return data["provisioner"]["playbooks"]["converge"]
-        else:
-            return os.path.join(playbooks_path, "converge.yml")
-
-
-def test_ssh_config(host):
-    for vars in read_playbook(get_playbook()):
-        assert host.file(vars["ssh_bastion_confing__name"]).exists
-        assert host.file(vars["ssh_bastion_confing__name"]).is_file
@@ -18,6 +18,7 @@ containerd_runc_runtime:
   base_runtime_spec: cri-base.json
   options:
     systemdCgroup: "{{ containerd_use_systemd_cgroup | ternary('true', 'false') }}"
+    binaryName: "{{ bin_dir }}/runc"
 
 containerd_additional_runtimes: []
 # Example for Kata Containers as additional runtime:

@@ -47,9 +48,6 @@ containerd_metrics_address: ""
 
 containerd_metrics_grpc_histogram: false
 
-containerd_registries:
-  "docker.io": "https://registry-1.docker.io"
-
 containerd_registries_mirrors:
   - prefix: docker.io
     mirrors:

@@ -104,3 +102,6 @@ containerd_supported_distributions:
   - "UnionTech"
   - "UniontechOS"
   - "openEuler"
+
+# Enable container device interface
+enable_cdi: false
@@ -1,10 +1,4 @@
 ---
-- name: Restart containerd
-  command: /bin/true
-  notify:
-    - Containerd | restart containerd
-    - Containerd | wait for containerd
-
 - name: Containerd | restart containerd
   systemd:
     name: containerd

@@ -12,6 +6,7 @@
     enabled: yes
     daemon-reload: yes
     masked: no
+  listen: Restart containerd
 
 - name: Containerd | wait for containerd
   command: "{{ containerd_bin_dir }}/ctr images ls -q"

@@ -19,3 +14,4 @@
   retries: 8
   delay: 4
   until: containerd_ready.rc == 0
+  listen: Restart containerd
@@ -61,6 +61,9 @@
     src: containerd.service.j2
     dest: /etc/systemd/system/containerd.service
     mode: 0644
+    validate: "sh -c '[ -f /usr/bin/systemd/system/factory-reset.target ] || exit 0 && systemd-analyze verify %s:containerd.service'"
+    # FIXME: check that systemd version >= 250 (factory-reset.target was introduced in that release)
+    # Remove once we drop support for systemd < 250
   notify: Restart containerd
 
 - name: Containerd | Ensure containerd directories exist
@@ -20,6 +20,10 @@ oom_score = {{ containerd_oom_score }}
 max_container_log_line_size = {{ containerd_max_container_log_line_size }}
 enable_unprivileged_ports = {{ containerd_enable_unprivileged_ports | default(false) | lower }}
 enable_unprivileged_icmp = {{ containerd_enable_unprivileged_icmp | default(false) | lower }}
+{% if enable_cdi %}
+enable_cdi = true
+cdi_spec_dirs = ["/etc/cdi", "/var/run/cdi"]
+{% endif %}
 [plugins."io.containerd.grpc.v1.cri".containerd]
 default_runtime_name = "{{ containerd_default_runtime | default('runc') }}"
 snapshotter = "{{ containerd_snapshotter | default('overlayfs') }}"

@@ -35,7 +39,11 @@ oom_score = {{ containerd_oom_score }}
 
 [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.{{ runtime.name }}.options]
 {% for key, value in runtime.options.items() %}
+{% if value | string != "true" and value | string != "false" %}
+{{ key }} = "{{ value }}"
+{% else %}
 {{ key }} = {{ value }}
+{% endif %}
 {% endfor %}
 {% endfor %}
 {% if kata_containers_enabled %}

@@ -78,6 +86,11 @@ oom_score = {{ containerd_oom_score }}
 {% endif %}
 {% endfor %}
 
+{% if nri_enabled and containerd_version is version('1.7.0', '>=') %}
+[plugins."io.containerd.nri.v1.nri"]
+disable = false
+{% endif %}
+
 {% if containerd_extra_args is defined %}
 {{ containerd_extra_args }}
 {% endif %}
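The second hunk of the config.toml template above quotes runtime option values unless they render as the strings "true" or "false", since TOML requires strings to be quoted while booleans must stay bare. A small Python sketch of that quoting rule (function name is illustrative, not from the repo):

```python
def render_option(key: str, value) -> str:
    """Replicate the template's rule: emit boolean-looking values bare,
    quote everything else so it is valid TOML."""
    if str(value) not in ("true", "false"):
        return f'{key} = "{value}"'
    return f"{key} = {value}"

print(render_option("SystemdCgroup", "true"))        # SystemdCgroup = true
print(render_option("BinaryName", "/usr/bin/runc"))  # BinaryName = "/usr/bin/runc"
```

Without the quoting branch, a string value such as a runtime binary path would be emitted unquoted and produce an invalid TOML file.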
@@ -2,7 +2,6 @@ server = "https://{{ item.prefix }}"
 {% for mirror in item.mirrors %}
 [host."{{ mirror.host }}"]
 capabilities = ["{{ ([ mirror.capabilities ] | flatten ) | join('","') }}"]
-{% if mirror.skip_verify is defined %}
 skip_verify = {{ mirror.skip_verify | default('false') | string | lower }}
-{% endif %}
+override_path = {{ mirror.override_path | default('false') | string | lower }}
 {% endfor %}
|
|||||||
---
|
---
|
||||||
- name: Restart and enable cri-dockerd
|
|
||||||
command: /bin/true
|
|
||||||
notify:
|
|
||||||
- Cri-dockerd | reload systemd
|
|
||||||
- Cri-dockerd | restart docker.service
|
|
||||||
- Cri-dockerd | reload cri-dockerd.socket
|
|
||||||
- Cri-dockerd | reload cri-dockerd.service
|
|
||||||
- Cri-dockerd | enable cri-dockerd service
|
|
||||||
|
|
||||||
- name: Cri-dockerd | reload systemd
|
- name: Cri-dockerd | reload systemd
|
||||||
systemd:
|
systemd:
|
||||||
name: cri-dockerd
|
name: cri-dockerd
|
||||||
daemon_reload: true
|
daemon_reload: true
|
||||||
masked: no
|
masked: no
|
||||||
|
listen: Restart and enable cri-dockerd
|
||||||
|
|
||||||
- name: Cri-dockerd | restart docker.service
|
- name: Cri-dockerd | restart docker.service
|
||||||
service:
|
service:
|
||||||
name: docker.service
|
name: docker.service
|
||||||
state: restarted
|
state: restarted
|
||||||
|
listen: Restart and enable cri-dockerd
|
||||||
|
|
||||||
- name: Cri-dockerd | reload cri-dockerd.socket
|
- name: Cri-dockerd | reload cri-dockerd.socket
|
||||||
service:
|
service:
|
||||||
name: cri-dockerd.socket
|
name: cri-dockerd.socket
|
||||||
state: restarted
|
state: restarted
|
||||||
|
listen: Restart and enable cri-dockerd
|
||||||
|
|
||||||
- name: Cri-dockerd | reload cri-dockerd.service
|
- name: Cri-dockerd | reload cri-dockerd.service
|
||||||
service:
|
service:
|
||||||
name: cri-dockerd.service
|
name: cri-dockerd.service
|
||||||
state: restarted
|
state: restarted
|
||||||
|
listen: Restart and enable cri-dockerd
|
||||||
|
|
||||||
- name: Cri-dockerd | enable cri-dockerd service
|
- name: Cri-dockerd | enable cri-dockerd service
|
||||||
service:
|
service:
|
||||||
name: cri-dockerd.service
|
name: cri-dockerd.service
|
||||||
enabled: yes
|
enabled: yes
|
||||||
|
listen: Restart and enable cri-dockerd
|
||||||
|
|||||||
@@ -18,6 +18,9 @@
     src: "{{ item }}.j2"
     dest: "/etc/systemd/system/{{ item }}"
     mode: 0644
+    validate: "sh -c '[ -f /usr/bin/systemd/system/factory-reset.target ] || exit 0 && systemd-analyze verify %s:{{ item }}'"
+    # FIXME: check that systemd version >= 250 (factory-reset.target was introduced in that release)
+    # Remove once we drop support for systemd < 250
   with_items:
     - cri-dockerd.service
    - cri-dockerd.socket
@@ -69,10 +69,6 @@ youki_runtime:
   type: oci
   root: /run/youki
 
-# TODO(cristicalin): remove this after 2.21
-crio_download_base: "download.opensuse.org/repositories/devel:kubic:libcontainers:stable"
-crio_download_crio: "http://{{ crio_download_base }}:/cri-o:/"
-
 # Reserve 16M uids and gids for user namespaces (256 pods * 65536 uids/gids)
 # at the end of the uid/gid space
 crio_remap_enable: false

@@ -97,3 +93,6 @@ crio_man_files:
   8:
     - crio
     - crio-status
+
+# If set to true, it will enable the CRIU support in cri-o
+crio_criu_support_enabled: false
@@ -1,16 +1,12 @@
 ---
-- name: Restart crio
-  command: /bin/true
-  notify:
-    - CRI-O | reload systemd
-    - CRI-O | reload crio
-
 - name: CRI-O | reload systemd
   systemd:
     daemon_reload: true
+  listen: Restart crio
 
 - name: CRI-O | reload crio
   service:
     name: crio
     state: restarted
     enabled: yes
+  listen: Restart crio
@@ -1,120 +0,0 @@
----
-# TODO(cristicalin): drop this file after 2.21
-- name: CRI-O kubic repo name for debian os family
-  set_fact:
-    crio_kubic_debian_repo_name: "{{ ((ansible_distribution == 'Ubuntu') | ternary('x', '')) ~ ansible_distribution ~ '_' ~ ansible_distribution_version }}"
-  when: ansible_os_family == "Debian"
-
-- name: Remove legacy CRI-O kubic apt repo key
-  apt_key:
-    url: "https://{{ crio_download_base }}/{{ crio_kubic_debian_repo_name }}/Release.key"
-    state: absent
-  environment: "{{ proxy_env }}"
-  when: crio_kubic_debian_repo_name is defined
-
-- name: Remove legacy CRI-O kubic apt repo
-  apt_repository:
-    repo: "deb http://{{ crio_download_base }}/{{ crio_kubic_debian_repo_name }}/ /"
-    state: absent
-    filename: devel-kubic-libcontainers-stable
-  when: crio_kubic_debian_repo_name is defined
-
-- name: Remove legacy CRI-O kubic cri-o apt repo
-  apt_repository:
-    repo: "deb {{ crio_download_crio }}{{ crio_version }}/{{ crio_kubic_debian_repo_name }}/ /"
-    state: absent
-    filename: devel-kubic-libcontainers-stable-cri-o
-  when: crio_kubic_debian_repo_name is defined
-
-- name: Remove legacy CRI-O kubic yum repo
-  yum_repository:
-    name: devel_kubic_libcontainers_stable
-    description: Stable Releases of Upstream github.com/containers packages (CentOS_$releasever)
-    baseurl: http://{{ crio_download_base }}/CentOS_{{ ansible_distribution_major_version }}/
-    state: absent
-  when:
-    - ansible_os_family == "RedHat"
-    - ansible_distribution not in ["Amazon", "Fedora"]
-
-- name: Remove legacy CRI-O kubic yum repo
-  yum_repository:
-    name: "devel_kubic_libcontainers_stable_cri-o_{{ crio_version }}"
-    description: "CRI-O {{ crio_version }} (CentOS_$releasever)"
-    baseurl: "{{ crio_download_crio }}{{ crio_version }}/CentOS_{{ ansible_distribution_major_version }}/"
-    state: absent
-  when:
-    - ansible_os_family == "RedHat"
-    - ansible_distribution not in ["Amazon", "Fedora"]
-
-- name: Remove legacy CRI-O kubic yum repo
-  yum_repository:
-    name: devel_kubic_libcontainers_stable
-    description: Stable Releases of Upstream github.com/containers packages
-    baseurl: http://{{ crio_download_base }}/Fedora_{{ ansible_distribution_major_version }}/
-    state: absent
-  when:
-    - ansible_distribution in ["Fedora"]
-    - not is_ostree
-
-- name: Remove legacy CRI-O kubic yum repo
-  yum_repository:
-    name: "devel_kubic_libcontainers_stable_cri-o_{{ crio_version }}"
-    description: "CRI-O {{ crio_version }}"
-    baseurl: "{{ crio_download_crio }}{{ crio_version }}/Fedora_{{ ansible_distribution_major_version }}/"
-    state: absent
-  when:
-    - ansible_distribution in ["Fedora"]
-    - not is_ostree
-
-- name: Remove legacy CRI-O kubic yum repo
-  yum_repository:
-    name: devel_kubic_libcontainers_stable
-    description: Stable Releases of Upstream github.com/containers packages
-    baseurl: http://{{ crio_download_base }}/CentOS_7/
-    state: absent
-  when: ansible_distribution in ["Amazon"]
-
-- name: Remove legacy CRI-O kubic yum repo
-  yum_repository:
-    name: "devel_kubic_libcontainers_stable_cri-o_{{ crio_version }}"
-    description: "CRI-O {{ crio_version }}"
-    baseurl: "{{ crio_download_crio }}{{ crio_version }}/CentOS_7/"
-    state: absent
-  when: ansible_distribution in ["Amazon"]
-
-- name: Disable modular repos for CRI-O
-  community.general.ini_file:
-    path: "/etc/yum.repos.d/{{ item.repo }}.repo"
-    section: "{{ item.section }}"
-    option: enabled
-    value: 0
-    mode: 0644
-  become: true
-  when: is_ostree
-  loop:
-    - repo: "fedora-updates-modular"
-      section: "updates-modular"
-    - repo: "fedora-modular"
-      section: "fedora-modular"
-
-# Disable any older module version if we enabled them before
-- name: Disable CRI-O ex module
|
|
||||||
command: "rpm-ostree ex module disable cri-o:{{ item }}"
|
|
||||||
become: true
|
|
||||||
when:
|
|
||||||
- is_ostree
|
|
||||||
- ostree_version is defined and ostree_version.stdout is version('2021.9', '>=')
|
|
||||||
with_items:
|
|
||||||
- 1.22
|
|
||||||
- 1.23
|
|
||||||
- 1.24
|
|
||||||
|
|
||||||
- name: Cri-o | remove installed packages
|
|
||||||
package:
|
|
||||||
name: "{{ item }}"
|
|
||||||
state: absent
|
|
||||||
when: not is_ostree
|
|
||||||
with_items:
|
|
||||||
- cri-o
|
|
||||||
- cri-o-runc
|
|
||||||
- oci-systemd-hook
|
|
||||||
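With the cleanup tasks above deleted, kubespray itself no longer removes the legacy Kubic repositories. On long-lived hosts that still carry them, an equivalent one-off cleanup could be sketched as below; the `crio_download_base` and `crio_kubic_debian_repo_name` variables are the same ones referenced in the removed tasks, and the `k8s_cluster` group name is a hypothetical placeholder.

```yaml
# One-off sketch (not part of the maintained role): drop leftover kubic sources.
- hosts: k8s_cluster   # hypothetical group name
  become: true
  tasks:
    - name: Remove leftover kubic apt repo
      ansible.builtin.apt_repository:
        repo: "deb http://{{ crio_download_base }}/{{ crio_kubic_debian_repo_name }}/ /"
        state: absent
        filename: devel-kubic-libcontainers-stable
      when: ansible_os_family == "Debian"

    - name: Remove leftover kubic yum repo
      ansible.builtin.yum_repository:
        name: devel_kubic_libcontainers_stable
        state: absent
      when: ansible_os_family == "RedHat"
```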
@@ -27,9 +27,6 @@
   import_tasks: "setup-amazon.yaml"
   when: ansible_distribution in ["Amazon"]
 
-- name: Cri-o | clean up reglacy repos
-  import_tasks: "cleanup.yaml"
-
 - name: Cri-o | build a list of crio runtimes with Katacontainers runtimes
   set_fact:
     crio_runtimes: "{{ crio_runtimes + kata_runtimes }}"
@@ -17,7 +17,7 @@
 - name: CRI-O | Remove cri-o apt repo
   apt_repository:
     repo: "deb {{ crio_download_crio }}{{ crio_version }}/{{ crio_kubic_debian_repo_name }}/ /"
-    state: present
+    state: absent
     filename: devel-kubic-libcontainers-stable-cri-o
   when: crio_kubic_debian_repo_name is defined
   tags:
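The one-word fix above matters: with `state: present` the task kept re-adding the dead kubic source; with `state: absent`, `apt_repository` removes the sources file it manages, which for `filename: devel-kubic-libcontainers-stable-cri-o` is the matching `.list` file under `/etc/apt/sources.list.d`. A hedged verification sketch:

```yaml
# Sketch: assert the kubic cri-o source file is gone after the removal task runs.
- name: Stat kubic cri-o sources file
  ansible.builtin.stat:
    path: /etc/apt/sources.list.d/devel-kubic-libcontainers-stable-cri-o.list
  register: kubic_crio_src

- name: Fail if the legacy source survived
  ansible.builtin.assert:
    that: not kubic_crio_src.stat.exists
```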
@@ -273,6 +273,11 @@ pinns_path = ""
 pinns_path = "{{ bin_dir }}/pinns"
 {% endif %}
 
+{% if crio_criu_support_enabled %}
+# Enable CRIU integration, requires that the criu binary is available in $PATH.
+enable_criu_support = true
+{% endif %}
+
 # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
 # The runtime to use is picked based on the runtime_handler provided by the CRI.
 # If no runtime_handler is provided, the runtime will be picked based on the level
@@ -376,3 +381,9 @@ enable_metrics = {{ crio_enable_metrics | bool | lower }}
 
 # The port on which the metrics server will listen.
 metrics_port = {{ crio_metrics_port }}
+
+{% if nri_enabled and crio_version is version('v1.26.0', operator='>=') %}
+[crio.nri]
+
+enable_nri=true
+{% endif %}
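Both new template blocks are gated on inventory variables. Assuming the variable names used in the template above, enabling them is a two-line group_vars sketch; note that the NRI block is additionally guarded by the CRI-O version test shown in the template.

```yaml
# group_vars sketch; both flags stay off unless set in the inventory.
crio_criu_support_enabled: true   # renders enable_criu_support = true
nri_enabled: true                 # renders [crio.nri] only when crio_version >= v1.26.0
```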
@@ -5,6 +5,9 @@ docker_cli_version: "{{ docker_version }}"
 docker_package_info:
   pkgs:
 
+# Path where to store repo key
+# docker_repo_key_keyring: /etc/apt/trusted.gpg.d/docker.gpg
+
 docker_repo_key_info:
   repo_keys:
 
@@ -1,28 +1,25 @@
 ---
-- name: Restart docker
-  command: /bin/true
-  notify:
-    - Docker | reload systemd
-    - Docker | reload docker.socket
-    - Docker | reload docker
-    - Docker | wait for docker
-
 - name: Docker | reload systemd
   systemd:
     name: docker
     daemon_reload: true
     masked: no
+  listen: Restart docker
 
 - name: Docker | reload docker.socket
   service:
     name: docker.socket
     state: restarted
   when: ansible_os_family in ['Flatcar', 'Flatcar Container Linux by Kinvolk'] or is_fedora_coreos
+  listen: Restart docker
 
 - name: Docker | reload docker
   service:
     name: docker
     state: restarted
+  listen: Restart docker
 
 - name: Docker | wait for docker
   command: "{{ docker_bin_dir }}/docker images"
@@ -30,3 +27,4 @@
   retries: 20
   delay: 1
   until: docker_ready.rc == 0
+  listen: Restart docker
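The refactor above drops the `command: /bin/true` pseudo-handler, which existed only to fan one notification out through four `notify` entries, and replaces it with Ansible's native `listen` keyword: every handler subscribing to the `Restart docker` topic fires from a single notification, in the order the handlers are defined. A minimal sketch of the pattern, with hypothetical group, file, and task names:

```yaml
- hosts: docker_hosts   # hypothetical group
  tasks:
    - name: Deploy docker daemon config
      ansible.builtin.template:
        src: daemon.json.j2          # hypothetical template
        dest: /etc/docker/daemon.json
        mode: "0644"
      notify: Restart docker         # one notification, many handlers

  handlers:
    - name: Docker | reload systemd
      ansible.builtin.systemd:
        name: docker
        daemon_reload: true
      listen: Restart docker

    - name: Docker | reload docker
      ansible.builtin.service:
        name: docker
        state: restarted
      listen: Restart docker
```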
@@ -57,6 +57,7 @@
   apt_key:
     id: "{{ item }}"
     url: "{{ docker_repo_key_info.url }}"
+    keyring: "{{ docker_repo_key_keyring|default(omit) }}"
     state: present
   register: keyserver_task_result
   until: keyserver_task_result is succeeded
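Because of `default(omit)`, the new `keyring` parameter is only passed to `apt_key` when the variable is actually set, so existing inventories keep the previous behavior. Opting in is one line in group_vars, matching the commented default in the role's defaults:

```yaml
# group_vars sketch: store Docker's repo key in its own keyring file.
docker_repo_key_keyring: /etc/apt/trusted.gpg.d/docker.gpg
```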
@@ -7,3 +7,4 @@ kata_containers_qemu_default_memory: "{{ ansible_memtotal_mb }}"
 kata_containers_qemu_debug: 'false'
 kata_containers_qemu_sandbox_cgroup_only: 'true'
 kata_containers_qemu_enable_mem_prealloc: 'false'
+kata_containers_virtio_fs_cache: 'always'
@@ -1,11 +1,12 @@
 # Copyright (c) 2017-2019 Intel Corporation
+# Copyright (c) 2021 Adobe Inc.
 #
 # SPDX-License-Identifier: Apache-2.0
 #
 
 # XXX: WARNING: this file is auto-generated.
 # XXX:
-# XXX: Source file: "cli/config/configuration-qemu.toml.in"
+# XXX: Source file: "config/configuration-qemu.toml.in"
 # XXX: Project:
 # XXX:   Name: Kata Containers
 # XXX:   Type: kata
@@ -18,20 +19,46 @@ kernel = "/opt/kata/share/kata-containers/vmlinux.container"
 kernel = "/opt/kata/share/kata-containers/vmlinuz.container"
 {% endif %}
 image = "/opt/kata/share/kata-containers/kata-containers.img"
+# initrd = "/opt/kata/share/kata-containers/kata-containers-initrd.img"
 machine_type = "q35"
 
+# rootfs filesystem type:
+#   - ext4 (default)
+#   - xfs
+#   - erofs
+rootfs_type="ext4"
+
 # Enable confidential guest support.
 # Toggling that setting may trigger different hardware features, ranging
 # from memory encryption to both memory and CPU-state encryption and integrity.
 # The Kata Containers runtime dynamically detects the available feature set and
-# aims at enabling the largest possible one.
+# aims at enabling the largest possible one, returning an error if none is
+# available, or none is supported by the hypervisor.
+#
+# Known limitations:
+# * Does not work by design:
+#   - CPU Hotplug
+#   - Memory Hotplug
+#   - NVDIMM devices
+#
 # Default false
 # confidential_guest = true
 
+# Choose AMD SEV-SNP confidential guests
+# In case of using confidential guests on AMD hardware that supports both SEV
+# and SEV-SNP, the following enables SEV-SNP guests. SEV guests are default.
+# Default false
+# sev_snp_guest = true
+
+# Enable running QEMU VMM as a non-root user.
+# By default QEMU VMM run as root. When this is set to true, QEMU VMM process runs as
+# a non-root random user. See documentation for the limitations of this mode.
+# rootless = true
+
 # List of valid annotation names for the hypervisor
 # Each member of the list is a regular expression, which is the base name
 # of the annotation, e.g. "path" for io.katacontainers.config.hypervisor.path"
-enable_annotations = []
+enable_annotations = ["enable_iommu"]
 
 # List of valid annotations values for the hypervisor
 # Each member of the list is a path pattern as described by glob(3).
@@ -55,11 +82,25 @@ kernel_params = ""
 # If you want that qemu uses the default firmware leave this option empty
 firmware = ""
 
+# Path to the firmware volume.
+# firmware TDVF or OVMF can be split into FIRMWARE_VARS.fd (UEFI variables
+# as configuration) and FIRMWARE_CODE.fd (UEFI program image). UEFI variables
+# can be customized per each user while UEFI code is kept same.
+firmware_volume = ""
+
 # Machine accelerators
 # comma-separated list of machine accelerators to pass to the hypervisor.
 # For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
 machine_accelerators=""
 
+# Qemu seccomp sandbox feature
+# comma-separated list of seccomp sandbox features to control the syscall access.
+# For example, `seccompsandbox= "on,obsolete=deny,spawn=deny,resourcecontrol=deny"`
+# Note: "elevateprivileges=deny" doesn't work with daemonize option, so it's removed from the seccomp sandbox
+# Another note: enabling this feature may reduce performance, you may enable
+# /proc/sys/net/core/bpf_jit_enable to reduce the impact. see https://man7.org/linux/man-pages/man8/bpfc.8.html
+#seccompsandbox="on,obsolete=deny,spawn=deny,resourcecontrol=deny"
+
 # CPU features
 # comma-separated list of cpu features to pass to the cpu
 # For example, `cpu_features = "pmu=off,vmx=off"
@@ -110,6 +151,12 @@ default_memory = {{ kata_containers_qemu_default_memory }}
 # This is will determine the times that memory will be hotadded to sandbox/VM.
 #memory_slots = 10
 
+# Default maximum memory in MiB per SB / VM
+# unspecified or == 0 --> will be set to the actual amount of physical RAM
+# > 0 <= amount of physical RAM --> will be set to the specified number
+# > amount of physical RAM --> will be set to the actual amount of physical RAM
+default_maxmemory = 0
+
 # The size in MiB will be plused to max memory of hypervisor.
 # It is the memory address space for the NVDIMM devie.
 # If set block storage driver (block_device_driver) to "nvdimm",
@@ -128,12 +175,13 @@ default_memory = {{ kata_containers_qemu_default_memory }}
 # root file system is backed by a block device, the block device is passed
 # directly to the hypervisor for performance reasons.
 # This flag prevents the block device from being passed to the hypervisor,
-# 9pfs is used instead to pass the rootfs.
+# virtio-fs is used instead to pass the rootfs.
 disable_block_device_use = false
 
 # Shared file system type:
 #   - virtio-fs (default)
 #   - virtio-9p
+#   - virtio-fs-nydus
 {% if kata_containers_version is version('2.2.0', '>=') %}
 shared_fs = "virtio-fs"
 {% else %}
@@ -141,27 +189,39 @@ shared_fs = "virtio-9p"
 {% endif %}
 
 # Path to vhost-user-fs daemon.
+{% if kata_containers_version is version('2.5.0', '>=') %}
+virtio_fs_daemon = "/opt/kata/libexec/virtiofsd"
+{% else %}
 virtio_fs_daemon = "/opt/kata/libexec/kata-qemu/virtiofsd"
+{% endif %}
 
 # List of valid annotations values for the virtiofs daemon
 # The default if not set is empty (all annotations rejected.)
-# Your distribution recommends: ["/opt/kata/libexec/kata-qemu/virtiofsd"]
-valid_virtio_fs_daemon_paths = ["/opt/kata/libexec/kata-qemu/virtiofsd"]
+# Your distribution recommends: ["/opt/kata/libexec/virtiofsd"]
+valid_virtio_fs_daemon_paths = [
+  "/opt/kata/libexec/virtiofsd",
+  "/opt/kata/libexec/kata-qemu/virtiofsd",
+]
 
 # Default size of DAX cache in MiB
 virtio_fs_cache_size = 0
 
+# Default size of virtqueues
+virtio_fs_queue_size = 1024
+
 # Extra args for virtiofsd daemon
 #
 # Format example:
-#   ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
+#   ["--arg1=xxx", "--arg2=yyy"]
+# Examples:
+#   Set virtiofsd log level to debug : ["--log-level=debug"]
 #
 # see `virtiofsd -h` for possible options.
-virtio_fs_extra_args = ["--thread-pool-size=1"]
+virtio_fs_extra_args = ["--thread-pool-size=1", "--announce-submounts"]
 
 # Cache mode:
 #
-#  - none
+#  - never
 #    Metadata, data, and pathname lookup are not cached in guest. They are
 #    always fetched from host and any changes are immediately pushed to host.
 #
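The template above selects the virtiofsd path with Ansible's Jinja2 `version` test. The same test can be reused in a task; a sketch with a hypothetical fact name, using only paths that appear in the template:

```yaml
- name: Select virtiofsd path for the installed Kata release   # sketch
  ansible.builtin.set_fact:
    virtiofsd_path: >-
      {{ '/opt/kata/libexec/virtiofsd'
         if kata_containers_version is version('2.5.0', '>=')
         else '/opt/kata/libexec/kata-qemu/virtiofsd' }}
```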
@@ -172,13 +232,27 @@ virtio_fs_extra_args = ["--thread-pool-size=1"]
 #
 #  - always
 #    Metadata, data, and pathname lookup are cached in guest and never expire.
-virtio_fs_cache = "always"
+virtio_fs_cache = "{{ kata_containers_virtio_fs_cache }}"
 
 # Block storage driver to be used for the hypervisor in case the container
 # rootfs is backed by a block device. This is virtio-scsi, virtio-blk
 # or nvdimm.
 block_device_driver = "virtio-scsi"
 
+# aio is the I/O mechanism used by qemu
+# Options:
+#
+#   - threads
+#     Pthread based disk I/O.
+#
+#   - native
+#     Native Linux I/O.
+#
+#   - io_uring
+#     Linux io_uring API. This provides the fastest I/O operations on Linux, requires kernel>5.1 and
+#     qemu >=5.0.
+block_device_aio = "io_uring"
+
 # Specifies cache-related options will be set to block devices or not.
 # Default false
 #block_device_cache_set = true
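Since `virtio_fs_cache` is now rendered from the `kata_containers_virtio_fs_cache` variable (defaulting to `'always'`), the cache mode can be tuned per cluster without editing the template; `'never'` trades guest-side caching for strict host coherence, per the mode descriptions in the template above.

```yaml
# group_vars sketch: weaken virtio-fs caching for shared-write workloads.
kata_containers_virtio_fs_cache: 'never'
```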
@@ -242,6 +316,11 @@ vhost_user_store_path = "/var/run/kata-containers/vhost-user"
 # Your distribution recommends: ["/var/run/kata-containers/vhost-user"]
 valid_vhost_user_store_paths = ["/var/run/kata-containers/vhost-user"]
 
+# The timeout for reconnecting on non-server spdk sockets when the remote end goes away.
+# qemu will delay this many seconds and then attempt to reconnect.
+# Zero disables reconnecting, and the default is zero.
+vhost_user_reconnect_timeout_sec = 0
+
 # Enable file based guest memory support. The default is an empty string which
 # will disable this feature. In the case of virtio-fs, this is enabled
 # automatically and '/dev/shm' is used as the backing folder.
@@ -253,17 +332,12 @@ valid_vhost_user_store_paths = ["/var/run/kata-containers/vhost-user"]
 # Your distribution recommends: [""]
 valid_file_mem_backends = [""]
 
-# Enable swap of vm memory. Default false.
-# The behaviour is undefined if mem_prealloc is also set to true
-#enable_swap = true
-
 # -pflash can add image file to VM. The arguments of it should be in format
 # of ["/path/to/flash0.img", "/path/to/flash1.img"]
 pflashes = []
 
 # This option changes the default hypervisor and kernel parameters
-# to enable debug output where available. This extra output is added
-# to the proxy logs, but only when proxy debug is also enabled.
+# to enable debug output where available. And Debug also enables the hmp socket.
 #
 # Default false
 enable_debug = {{ kata_containers_qemu_debug }}
@@ -278,21 +352,18 @@ enable_debug = {{ kata_containers_qemu_debug }}
 # used for 9p packet payload.
 #msize_9p = 8192
 
-# If true and vsocks are supported, use vsocks to communicate directly
-# with the agent and no proxy is started, otherwise use unix
-# sockets and start a proxy to communicate with the agent.
-# Default false
-#use_vsock = true
-
 # If false and nvdimm is supported, use nvdimm device to plug guest image.
 # Otherwise virtio-block device is used.
+#
+# nvdimm is not supported when `confidential_guest = true`.
+#
 # Default is false
 #disable_image_nvdimm = true
 
 # VFIO devices are hotplugged on a bridge by default.
 # Enable hotplugging on root bus. This may be required for devices with
 # a large PCI bar, as this is a current limitation with hotplugging on
-# a bridge. This value is valid for "pc" machine type.
+# a bridge.
 # Default false
 #hotplug_vfio_on_root_bus = true
 
@@ -329,15 +400,15 @@ valid_entropy_sources = ["/dev/urandom","/dev/random",""]
 # the OCI spec passed to the runtime.
 #
 # You can create a rootfs with hooks by customizing the osbuilder scripts:
-# https://github.com/kata-containers/osbuilder
+# https://github.com/kata-containers/kata-containers/tree/main/tools/osbuilder
 #
 # Hooks must be stored in a subdirectory of guest_hook_path according to their
-# hook type, i.e. "guest_hook_path/{prestart,postart,poststop}".
+# hook type, i.e. "guest_hook_path/{prestart,poststart,poststop}".
 # The agent will scan these directories for executable files and add them, in
 # lexicographical order, to the lifecycle of the guest container.
 # Hooks are executed in the runtime namespace of the guest. See the official documentation:
 # https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
-# Warnings will be logged if any error is encountered will scanning for hooks,
+# Warnings will be logged if any error is encountered while scanning for hooks,
 # but it will not abort container execution.
 #guest_hook_path = "/usr/share/oci/hooks"
 #
@@ -382,6 +453,19 @@ valid_entropy_sources = ["/dev/urandom","/dev/random",""]
 # be default_memory.
 #enable_guest_swap = true
 
+# use legacy serial for guest console if available and implemented for architecture. Default false
+#use_legacy_serial = true
+
+# disable applying SELinux on the VMM process (default false)
+disable_selinux=false
+
+# disable applying SELinux on the container process
+# If set to false, the type `container_t` is applied to the container process by default.
+# Note: To enable guest SELinux, the guest rootfs must be CentOS that is created and built
+# with `SELINUX=yes`.
+# (default: true)
+disable_guest_selinux=true
+
 [factory]
 # VM templating support. Once enabled, new VMs are created from template
 # using vm cloning. They will share the same initial kernel, initramfs and
@@ -425,31 +509,6 @@ valid_entropy_sources = ["/dev/urandom","/dev/random",""]
 # Default /var/run/kata-containers/cache.sock
 #vm_cache_endpoint = "/var/run/kata-containers/cache.sock"
 
-[proxy.kata]
-path = "/opt/kata/libexec/kata-containers/kata-proxy"
-
-# If enabled, proxy messages will be sent to the system log
-# (default: disabled)
-enable_debug = {{ kata_containers_qemu_debug }}
-
-[shim.kata]
-path = "/opt/kata/libexec/kata-containers/kata-shim"
-
-# If enabled, shim messages will be sent to the system log
-# (default: disabled)
-enable_debug = {{ kata_containers_qemu_debug }}
-
-# If enabled, the shim will create opentracing.io traces and spans.
-# (See https://www.jaegertracing.io/docs/getting-started).
-#
-# Note: By default, the shim runs in a separate network namespace. Therefore,
-# to allow it to send trace details to the Jaeger agent running on the host,
-# it is necessary to set 'disable_new_netns=true' so that it runs in the host
-# network namespace.
-#
-# (default: disabled)
-#enable_tracing = true
-
 [agent.kata]
 # If enabled, make the agent display debug-level messages.
 # (default: disabled)
@@ -457,24 +516,17 @@ enable_debug = {{ kata_containers_qemu_debug }}
 
 # Enable agent tracing.
 #
-# If enabled, the default trace mode is "dynamic" and the
-# default trace type is "isolated". The trace mode and type are set
-# explicitly with the `trace_type=` and `trace_mode=` options.
+# If enabled, the agent will generate OpenTelemetry trace spans.
 #
 # Notes:
 #
-# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
-#   setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
-#   will NOT activate agent tracing.
-#
-# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
-#   full details.
+# - If the runtime also has tracing enabled, the agent spans will be
+#   associated with the appropriate runtime parent span.
+# - If enabled, the runtime will wait for the container to shutdown,
+#   increasing the container shutdown time slightly.
 #
 # (default: disabled)
 #enable_tracing = true
-#
-#trace_mode = "dynamic"
-#trace_type = "isolated"
 
 # Comma separated list of kernel modules and their parameters.
 # These modules will be loaded in the guest kernel using modprobe(8).
@@ -500,21 +552,6 @@ kernel_modules=[]
 # (default: 30)
 #dial_timeout = 30
 
-[netmon]
-# If enabled, the network monitoring process gets started when the
-# sandbox is created. This allows for the detection of some additional
-# network being added to the existing network namespace, after the
-# sandbox has been created.
-# (default: disabled)
-#enable_netmon = true
-
-# Specify the path to the netmon binary.
-path = "/opt/kata/libexec/kata-containers/kata-netmon"
-
-# If enabled, netmon messages will be sent to the system log
-# (default: disabled)
-enable_debug = {{ kata_containers_qemu_debug }}
-
 [runtime]
 # If enabled, the runtime will log additional debug messages to the
 # system log
@@ -546,6 +583,19 @@ internetworking_model="tcfilter"
 # (default: true)
 disable_guest_seccomp=true
 
+# vCPUs pinning settings
+# if enabled, each vCPU thread will be scheduled to a fixed CPU
+# qualified condition: num(vCPU threads) == num(CPUs in sandbox's CPUSet)
+# enable_vcpus_pinning = false
+
+# Apply a custom SELinux security policy to the container process inside the VM.
+# This is used when you want to apply a type other than the default `container_t`,
+# so general users should not uncomment and apply it.
+# (format: "user:role:type")
+# Note: You cannot specify MCS policy with the label because the sensitivity levels and
+# categories are determined automatically by high-level container runtimes such as containerd.
+#guest_selinux_label="system_u:system_r:container_t"
+
 # If enabled, the runtime will create opentracing.io traces and spans.
 # (See https://www.jaegertracing.io/docs/getting-started).
 # (default: disabled)
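The added `guest_selinux_label` comment pins down the accepted shape: exactly the three `user:role:type` fields, with no MCS level appended, since containerd or CRI-O assign sensitivity levels and categories themselves. A minimal sketch of that validation rule (illustrative only, not Kata's actual parser):

```python
# Sketch: accept only "user:role:type" labels; reject a fourth (MCS) field,
# since high-level runtimes assign levels/categories automatically.
def valid_guest_selinux_label(label):
    fields = label.split(":")
    return len(fields) == 3 and all(fields)

print(valid_guest_selinux_label("system_u:system_r:container_t"))        # True
print(valid_guest_selinux_label("system_u:system_r:container_t:s0:c1"))  # False
```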
@@ -563,11 +613,9 @@ disable_guest_seccomp=true
 
 # If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
 # This option may have some potential impacts to your host. It should only be used when you know what you're doing.
-# `disable_new_netns` conflicts with `enable_netmon`
 # `disable_new_netns` conflicts with `internetworking_model=tcfilter` and `internetworking_model=macvtap`. It works only
 # with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
 # (like OVS) directly.
-# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
 # (default: false)
 #disable_new_netns = true
 
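The constraint described above reduces to a simple rule: `disable_new_netns` is only valid together with `internetworking_model=none`, and conflicts with `tcfilter` and `macvtap`. A minimal consistency check expressing that rule (a sketch, not code from Kata or Kubespray):

```python
# Sketch: disable_new_netns only works with internetworking_model=none;
# it conflicts with tcfilter and macvtap.
def netns_config_ok(disable_new_netns, internetworking_model):
    if not disable_new_netns:
        return True
    return internetworking_model == "none"

print(netns_config_ok(True, "tcfilter"))   # False
print(netns_config_ok(True, "none"))       # True
print(netns_config_ok(False, "tcfilter"))  # True
```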
@@ -576,15 +624,49 @@ disable_guest_seccomp=true
 # The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
 # The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
 # The sandbox cgroup is constrained if there is no container type annotation.
-# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
+# See: https://pkg.go.dev/github.com/kata-containers/kata-containers/src/runtime/virtcontainers#ContainerType
 sandbox_cgroup_only={{ kata_containers_qemu_sandbox_cgroup_only }}
 
+# If enabled, the runtime will attempt to determine appropriate sandbox size (memory, CPU) before booting the virtual machine. In
+# this case, the runtime will not dynamically update the amount of memory and CPU in the virtual machine. This is generally helpful
+# when a hardware architecture or hypervisor solutions is utilized which does not support CPU and/or memory hotplug.
+# Compatibility for determining appropriate sandbox (VM) size:
+# - When running with pods, sandbox sizing information will only be available if using Kubernetes >= 1.23 and containerd >= 1.6. CRI-O
+# does not yet support sandbox sizing annotations.
+# - When running single containers using a tool like ctr, container sizing information will be available.
+static_sandbox_resource_mgmt=false
+
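The idea behind `static_sandbox_resource_mgmt` is that the VM is sized once, up front, from the workload's aggregate resource requests instead of hot-plugging CPU and memory later. The helper below is purely illustrative (the function, field names, and defaults are assumptions, not Kata's implementation):

```python
# Illustrative sketch: size a sandbox VM up front from per-container
# requests, falling back to defaults when requests are absent.
def sandbox_size(containers, default_vcpus=1, default_memory_mib=2048):
    vcpus = sum(c.get("cpu", 0) for c in containers)
    mem = sum(c.get("memory_mib", 0) for c in containers)
    return max(vcpus, default_vcpus), max(mem, default_memory_mib)

print(sandbox_size([{"cpu": 2, "memory_mib": 1024}, {"cpu": 1, "memory_mib": 2048}]))
# (3, 3072)
```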
 # If specified, sandbox_bind_mounts identifieds host paths to be mounted (ro) into the sandboxes shared path.
 # This is only valid if filesystem sharing is utilized. The provided path(s) will be bindmounted into the shared fs directory.
 # If defaults are utilized, these mounts should be available in the guest at `/run/kata-containers/shared/containers/sandbox-mounts`
 # These will not be exposed to the container workloads, and are only provided for potential guest services.
 sandbox_bind_mounts=[]
 
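Per the comment above, configured host paths surface read-only under the guest's `sandbox-mounts` directory when defaults are used. A sketch of that host-to-guest path mapping (the per-mount naming under the shared directory is an assumption here, not confirmed by this diff):

```python
# Sketch: map sandbox_bind_mounts host paths to their expected guest
# location under the default shared directory (assumed naming scheme).
import os

GUEST_SANDBOX_MOUNTS = "/run/kata-containers/shared/containers/sandbox-mounts"

def guest_mount_paths(sandbox_bind_mounts):
    return {p: os.path.join(GUEST_SANDBOX_MOUNTS, os.path.basename(p))
            for p in sandbox_bind_mounts}

print(guest_mount_paths(["/opt/certs"]))
# {'/opt/certs': '/run/kata-containers/shared/containers/sandbox-mounts/certs'}
```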
+# VFIO Mode
+# Determines how VFIO devices should be be presented to the container.
+# Options:
+#
+# - vfio
+# Matches behaviour of OCI runtimes (e.g. runc) as much as
+# possible. VFIO devices will appear in the container as VFIO
+# character devices under /dev/vfio. The exact names may differ
+# from the host (they need to match the VM's IOMMU group numbers
+# rather than the host's)
+#
+# - guest-kernel
+# This is a Kata-specific behaviour that's useful in certain cases.
+# The VFIO device is managed by whatever driver in the VM kernel
+# claims it. This means it will appear as one or more device nodes
+# or network interfaces depending on the nature of the device.
+# Using this mode requires specially built workloads that know how
+# to locate the relevant device interfaces within the VM.
+#
+vfio_mode="guest-kernel"
+
+# If enabled, the runtime will not create Kubernetes emptyDir mounts on the guest filesystem. Instead, emptyDir mounts will
+# be created on the host and shared via virtio-fs. This is potentially slower, but allows sharing of files from host to guest.
+disable_guest_empty_dir=false
+
 # Enabled experimental feature list, format: ["a", "b"].
 # Experimental features are features not stable enough for production,
 # they may break compatibility, and are prepared for a big version bump.