Mirror of https://github.com/kubernetes-sigs/kubespray.git (synced 2025-12-14 13:54:37 +03:00)
Compare commits
4 Commits
| Author | SHA1 | Date |
|---|---|---|
|  | 4167807f17 |  |
|  | 2ac1c7562f |  |
|  | 2d6e31d281 |  |
|  | 0a19d1bf01 |  |
@@ -1,16 +0,0 @@
----
-parseable: true
-skip_list:
-  # see https://docs.ansible.com/ansible-lint/rules/default_rules.html for a list of all default rules
-  # The following rules throw errors.
-  # These either still need to be corrected in the repository and the rules re-enabled or they are skipped on purpose.
-  - '204'
-  - '206'
-  - '301'
-  - '305'
-  - '306'
-  - '404'
-  - '502'
-  - '503'
-  - '504'
-  - '701'
@@ -1,11 +1,16 @@
----
-name: Bug Report
-about: Report a bug encountered while operating Kubernetes
-labels: kind/bug
-
----
+<!-- Thanks for filing an issue! Before hitting the button, please answer these questions.-->
+
+**Is this a BUG REPORT or FEATURE REQUEST?** (choose one):
+
 <!--
-Please, be ready for followup questions, and please respond in a timely
+If this is a BUG REPORT, please:
+  - Fill in as much of the template below as you can. If you leave out
+    information, we can't help you as well.
+
+If this is a FEATURE REQUEST, please:
+  - Describe *in detail* the feature/behavior/change you'd like to see.
+
+In both cases, be ready for followup questions, and please respond in a timely
 manner. If we can't reproduce a bug or think a feature already exists, we
 might close your issue. If we're wrong, PLEASE feel free to reopen it and
 explain why.
.github/ISSUE_TEMPLATE/enhancement.md (vendored): 11 changed lines
@@ -1,11 +0,0 @@
----
-name: Enhancement Request
-about: Suggest an enhancement to the Kubespray project
-labels: kind/feature
-
----
-<!-- Please only use this template for submitting enhancement requests -->
-
-**What would you like to be added**:
-
-**Why is this needed**:
.github/ISSUE_TEMPLATE/failing-test.md (vendored): 20 changed lines
@@ -1,20 +0,0 @@
----
-name: Failing Test
-about: Report test failures in Kubespray CI jobs
-labels: kind/failing-test
-
----
-
-<!-- Please only use this template for submitting reports about failing tests in Kubespray CI jobs -->
-
-**Which jobs are failing**:
-
-**Which test(s) are failing**:
-
-**Since when has it been failing**:
-
-**Testgrid link**:
-
-**Reason for failure**:
-
-**Anything else we need to know**:
.github/ISSUE_TEMPLATE/support.md (vendored): 18 changed lines
@@ -1,18 +0,0 @@
----
-name: Support Request
-about: Support request or question relating to Kubespray
-labels: triage/support
-
----
-
-<!--
-STOP -- PLEASE READ!
-
-GitHub is not the right place for support requests.
-
-If you're looking for help, check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubespray) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
-
-You can also post your question on the [Kubernetes Slack](http://slack.k8s.io/) or the [Discuss Kubernetes](https://discuss.kubernetes.io/) forum.
-
-If the matter is security related, please disclose it privately via https://kubernetes.io/security/.
--->
.github/PULL_REQUEST_TEMPLATE.md (vendored): 44 changed lines
@@ -1,44 +0,0 @@
-<!-- Thanks for sending a pull request! Here are some tips for you:
-
-1. If this is your first time, please read our contributor guidelines: https://git.k8s.io/community/contributors/guide#your-first-contribution and developer guide https://git.k8s.io/community/contributors/devel/development.md#development-guide
-2. Please label this pull request according to what type of issue you are addressing, especially if this is a release targeted pull request. For reference on required PR/issue labels, read here:
-https://git.k8s.io/community/contributors/devel/release.md#issue-kind-label
-3. Ensure you have added or ran the appropriate tests for your PR: https://git.k8s.io/community/contributors/devel/testing.md
-4. If you want *faster* PR reviews, read how: https://git.k8s.io/community/contributors/guide/pull-requests.md#best-practices-for-faster-reviews
-5. Follow the instructions for writing a release note: https://git.k8s.io/community/contributors/guide/release-notes.md
-6. If the PR is unfinished, see how to mark it: https://git.k8s.io/community/contributors/guide/pull-requests.md#marking-unfinished-pull-requests
--->
-
-**What type of PR is this?**
-> Uncomment only one ` /kind <>` line, hit enter to put that in a new line, and remove leading whitespaces from that line:
->
-> /kind api-change
-> /kind bug
-> /kind cleanup
-> /kind design
-> /kind documentation
-> /kind failing-test
-> /kind feature
-> /kind flake
-
-**What this PR does / why we need it**:
-
-**Which issue(s) this PR fixes**:
-<!--
-*Automatically closes linked issue when PR is merged.
-Usage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`.
-_If PR is about `failing-tests or flakes`, please post the related issues/tests in a comment and do not use `Fixes`_*
--->
-Fixes #
-
-**Special notes for your reviewer**:
-
-**Does this PR introduce a user-facing change?**:
-<!--
-If no, just write "NONE" in the release-note block below.
-If yes, a release note is required:
-Enter your extended release note in the block below. If the PR requires additional action from users switching to the new release, include the string "action required".
--->
-```release-note
-
-```
.gitignore (vendored): 11 changed lines
@@ -1,6 +1,9 @@
 .vagrant
 *.retry
 **/vagrant_ansible_inventory
+inventory/credentials/
+inventory/group_vars/fake_hosts.yml
+inventory/host_vars/
 temp
 .idea
 .tox
@@ -8,19 +11,12 @@ temp
 *.bak
 *.tfstate
 *.tfstate.backup
-.terraform/
 contrib/terraform/aws/credentials.tfvars
 /ssh-bastion.conf
 **/*.sw[pon]
 *~
 vagrant/
-
-# Ansible inventory
-inventory/*
-!inventory/local
-!inventory/sample
-inventory/*/artifacts/
 
 # Byte-compiled / optimized / DLL files
 __pycache__/
 *.py[cod]
@@ -28,6 +24,7 @@ __pycache__/
 
 # Distribution / packaging
 .Python
+inventory/*/artifacts/
 env/
 build/
 credentials/
.gitlab-ci.yml: 731 changed lines
@@ -1,16 +1,14 @@
----
 stages:
   - unit-tests
   - moderator
   - deploy-part1
   - deploy-part2
-  - deploy-gce
   - deploy-special
 
 variables:
   FAILFASTCI_NAMESPACE: 'kargo-ci'
   GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
   # DOCKER_HOST: tcp://localhost:2375
   ANSIBLE_FORCE_COLOR: "true"
   MAGIC: "ci check this"
   TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
@@ -26,45 +24,726 @@ variables:
   IDEMPOT_CHECK: "false"
   RESET_CHECK: "false"
   UPGRADE_TEST: "false"
+  KUBEADM_ENABLED: "false"
   LOG_LEVEL: "-vv"
 
+  # asia-east1-a
+  # asia-northeast1-a
+  # europe-west1-b
+  # us-central1-a
+  # us-east1-b
+  # us-west1-a
+
 before_script:
-  - ./tests/scripts/rebase.sh
   - /usr/bin/python -m pip install -r tests/requirements.txt
   - mkdir -p /.ssh
 
 .job: &job
   tags:
-    - packet
-  variables:
-    KUBESPRAY_VERSION: v2.9.0
-  image: quay.io/kubespray/kubespray:$KUBESPRAY_VERSION
+    - kubernetes
+    - docker
+  image: quay.io/kubespray/kubespray:v2.7
+
+.docker_service: &docker_service
+  services:
+    - docker:dind
+
+.create_cluster: &create_cluster
+  <<: *job
+  <<: *docker_service
+
+.gce_variables: &gce_variables
+  GCE_USER: travis
+  SSH_USER: $GCE_USER
+  CLOUD_MACHINE_TYPE: "g1-small"
+  CI_PLATFORM: "gce"
+  PRIVATE_KEY: $GCE_PRIVATE_KEY
+
+.do_variables: &do_variables
+  PRIVATE_KEY: $DO_PRIVATE_KEY
+  CI_PLATFORM: "do"
+  SSH_USER: root
 
 .testcases: &testcases
   <<: *job
-  services:
-    - docker:dind
+  <<: *docker_service
+  cache:
+    key: "$CI_BUILD_REF_NAME"
+    paths:
+      - downloads/
+      - $HOME/.cache
   before_script:
-    - ./tests/scripts/rebase.sh
-    - ./tests/scripts/testcases_prepare.sh
+    - docker info
+    - /usr/bin/python -m pip install -r requirements.txt
+    - /usr/bin/python -m pip install -r tests/requirements.txt
+    - mkdir -p /.ssh
+    - mkdir -p $HOME/.ssh
+    - ansible-playbook --version
+    - export PYPATH=$([[ ! "$CI_JOB_NAME" =~ "coreos" ]] && echo /usr/bin/python || echo /opt/bin/python)
+    - echo "CI_JOB_NAME is $CI_JOB_NAME"
+    - echo "PYPATH is $PYPATH"
   script:
-    - ./tests/scripts/testcases_run.sh
-  after_script:
-    - ./tests/scripts/testcases_cleanup.sh
+    - pwd
+    - ls
+    - echo ${PWD}
+    - echo "${STARTUP_SCRIPT}"
+    - cd tests && make create-${CI_PLATFORM} -s ; cd -
+
+    # Check out latest tag if testing upgrade
+    # Uncomment when gitlab kubespray repo has tags
+    #- test "${UPGRADE_TEST}" != "false" && git fetch --all && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
+    - test "${UPGRADE_TEST}" != "false" && git checkout 53d87e53c5899d4ea2904ab7e3883708dd6363d3
+    # Checkout the CI vars file so it is available
+    - test "${UPGRADE_TEST}" != "false" && git checkout "${CI_BUILD_REF}" tests/files/${CI_JOB_NAME}.yml
+    # Workaround https://github.com/kubernetes-sigs/kubespray/issues/2021
+    - 'sh -c "echo ignore_assert_errors: true | tee -a tests/files/${CI_JOB_NAME}.yml"'
+
+    # Create cluster
+    - >
+      ansible-playbook
+      -i ${ANSIBLE_INVENTORY}
+      -b --become-user=root
+      --private-key=${HOME}/.ssh/id_rsa
+      -u $SSH_USER
+      ${SSH_ARGS}
+      ${LOG_LEVEL}
+      -e @${CI_TEST_VARS}
+      -e ansible_ssh_user=${SSH_USER}
+      -e local_release_dir=${PWD}/downloads
+      --limit "all:!fake_hosts"
+      cluster.yml
+
+    # Repeat deployment if testing upgrade
+    - >
+      if [ "${UPGRADE_TEST}" != "false" ]; then
+      test "${UPGRADE_TEST}" == "basic" && PLAYBOOK="cluster.yml";
+      test "${UPGRADE_TEST}" == "graceful" && PLAYBOOK="upgrade-cluster.yml";
+      git checkout "${CI_BUILD_REF}";
+      ansible-playbook -i ${ANSIBLE_INVENTORY} -b --become-user=root --private-key=${HOME}/.ssh/id_rsa -u $SSH_USER ${SSH_ARGS} ${LOG_LEVEL} -e @${CI_TEST_VARS} -e ansible_ssh_user=${SSH_USER} -e local_release_dir=${PWD}/downloads --limit "all:!fake_hosts" $PLAYBOOK;
+      fi
+
+    # Tests Cases
+    ## Test Master API
+    - >
+      ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/010_check-apiserver.yml $LOG_LEVEL
+      -e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
+
+    ## Ping the between 2 pod
+    - ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/030_check-network.yml $LOG_LEVEL
+
+    ## Advanced DNS checks
+    - ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/040_check-network-adv.yml $LOG_LEVEL
+
+    ## Idempotency checks 1/5 (repeat deployment)
+    - >
+      if [ "${IDEMPOT_CHECK}" = "true" ]; then
+      ansible-playbook -i ${ANSIBLE_INVENTORY} -b --become-user=root --private-key=${HOME}/.ssh/id_rsa -u $SSH_USER ${SSH_ARGS} ${LOG_LEVEL} -e @${CI_TEST_VARS} -e ansible_python_interpreter=${PYPATH} -e local_release_dir=${PWD}/downloads --limit "all:!fake_hosts" cluster.yml;
+      fi
+
+    [ ... Idempotency checks 2/5 (Advanced DNS checks), 3/5 (reset deployment via reset.yml with -e reset_confirmation=yes), 4/5 (redeploy after reset via cluster.yml) and 5/5 (Advanced DNS checks) repeat the same conditional ansible-playbook pattern, guarded by IDEMPOT_CHECK and RESET_CHECK ... ]
+
+  after_script:
+    - cd tests && make delete-${CI_PLATFORM} -s ; cd -
+
+.gce: &gce
+  <<: *testcases
+  variables:
+    <<: *gce_variables
+
+.do: &do
+  variables:
+    <<: *do_variables
+  <<: *testcases
+
+# Test matrix. Leave the comments for markup scripts.
+.coreos_calico_aio_variables: &coreos_calico_aio_variables
+  # stage: deploy-part1
+  MOVED_TO_GROUP_VARS: "true"
+
+  [ ... further test-matrix variable anchors of the same shape (a commented stage plus MOVED_TO_GROUP_VARS: "true"): ubuntu18_flannel_aio, ubuntu_canal_kubeadm, ubuntu_canal_ha, ubuntu_contiv_sep, coreos_cilium, ubuntu_cilium_sep, rhel7_weave, centos7_flannel_addons, debian9_calico, coreos_canal, rhel7_canal_sep, ubuntu_weave_sep, centos7_calico_ha, centos7_kube_router, coreos_alpha_weave_ha, coreos_kube_router, ubuntu_rkt_sep, ubuntu_flannel, ubuntu_kube_router, opensuse_canal; the centos_weave_kubeadm and centos7_multus_calico anchors instead set UPGRADE_TEST: "graceful" ... ]
+# Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
+### PR JOBS PART1
+
+gce_ubuntu18-flannel-aio:
+  stage: deploy-part1
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *ubuntu18_flannel_aio_variables
+    <<: *gce_variables
+  when: on_success
+  except: ['triggers']
+  only: [/^pr-.*$/]
+
+### PR JOBS PART2
+
+gce_coreos-calico-aio:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *coreos_calico_aio_variables
+    <<: *gce_variables
+  when: on_success
+  except: ['triggers']
+  only: [/^pr-.*$/]
+
+  [ ... the remaining jobs in this block are declared the same way (<<: *job, <<: *gce or <<: *do, plus the matching *_variables anchors), differing only in stage, when and only/except:
+      PR jobs (when: on_success, only: [/^pr-.*$/]): gce_centos7-flannel-addons, gce_centos-weave-kubeadm-sep, gce_ubuntu-flannel-ha;
+      ### MANUAL JOBS: gce_ubuntu-weave-sep (when: manual);
+      trigger jobs (when: on_success, only: ['triggers']): gce_coreos-calico-sep-triggers, gce_ubuntu-canal-ha-triggers, gce_centos7-flannel-addons-triggers, gce_ubuntu-weave-sep-triggers, gce_ubuntu-canal-kubeadm-triggers, gce_centos-weave-kubeadm-triggers, gce_rhel7-weave-triggers, gce_debian9-calico-triggers, gce_coreos-canal-triggers, gce_rhel7-canal-sep-triggers, gce_centos7-calico-ha-triggers;
+      # More builds for PRs/merges (when: manual, except: ['triggers'], only: ['master', /^pr-.*$/]): do_ubuntu-canal-ha, gce_ubuntu-canal-ha, gce_ubuntu-canal-kubeadm, gce_ubuntu-contiv-sep, gce_coreos-cilium, gce_ubuntu-cilium-sep, gce_rhel7-weave, gce_debian9-calico-upgrade, gce_coreos-canal, gce_rhel7-canal-sep, gce_centos7-calico-ha, gce_centos7-kube-router, gce_centos7-multus-calico, gce_opensuse-canal, gce_coreos-alpha-weave-ha (# no triggers yet https://github.com/kubernetes-incubator/kargo/issues/613), gce_coreos-kube-router, gce_ubuntu-rkt-sep, gce_ubuntu-kube-router-sep ... ]
-# For failfast, at least 1 job must be defined in .gitlab-ci.yml
-
 # Premoderated with manual actions
 ci-authorized:
-  extends: .job
+  <<: *job
   stage: moderator
+  before_script:
+    - apt-get -y install jq
   script:
     - /bin/sh scripts/premoderator.sh
   except: ['triggers', 'master']
 
-include:
-  - .gitlab-ci/lint.yml
-  - .gitlab-ci/shellcheck.yml
-  - .gitlab-ci/gce.yml
-  - .gitlab-ci/digital-ocean.yml
-  - .gitlab-ci/terraform.yml
-  - .gitlab-ci/packet.yml
+syntax-check:
+  <<: *job
+  stage: unit-tests
+  script:
+    - ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root cluster.yml -vvv --syntax-check
+    - ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root upgrade-cluster.yml -vvv --syntax-check
+    - ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root reset.yml -vvv --syntax-check
+    - ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root extra_playbooks/upgrade-only-k8s.yml -vvv --syntax-check
+  except: ['triggers', 'master']
+
+yamllint:
+  <<: *job
+  stage: unit-tests
+  script:
+    - yamllint roles
+  except: ['triggers', 'master']
+
+tox-inventory-builder:
+  stage: unit-tests
+  <<: *job
+  script:
+    - pip install tox
+    - cd contrib/inventory_builder && tox
+  when: manual
+  except: ['triggers', 'master']
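Both versions of `.gitlab-ci.yml` shown in the hunks above reuse job settings in two different ways: YAML anchors with merge keys (`.job: &job`, `<<: *job`), which only work inside a single file, and `include:` together with `extends:`, which also reaches into the per-provider files under `.gitlab-ci/`. The sketch below illustrates the difference; the job names are made up for illustration and are not taken from the repository.

```yaml
# Anchor/merge-key style: everything lives in one file.
.job: &job
  image: quay.io/kubespray/kubespray:v2.9.0
  tags:
    - packet

lint-anchored:
  <<: *job              # copies image and tags from the hidden .job template
  stage: unit-tests
  script:
    - yamllint roles

# Include/extends style: the main file pulls in job definitions,
# and included files reuse .job with extends (anchors do not cross file boundaries).
include:
  - .gitlab-ci/lint.yml

lint-extended:          # would live in .gitlab-ci/lint.yml
  extends: .job
  stage: unit-tests
  script:
    - yamllint --strict .
```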
@@ -1,19 +0,0 @@
----
-.do_variables: &do_variables
-  PRIVATE_KEY: $DO_PRIVATE_KEY
-  CI_PLATFORM: "do"
-  SSH_USER: root
-
-.do: &do
-  extends: .testcases
-  tags:
-    - do
-
-do_ubuntu-canal-ha:
-  stage: deploy-part2
-  extends: .do
-  variables:
-    <<: *do_variables
-  when: manual
-  except: ['triggers']
-  only: ['master', /^pr-.*$/]
@@ -1,264 +0,0 @@
----
-.gce_variables: &gce_variables
-  GCE_USER: travis
-  SSH_USER: $GCE_USER
-  CLOUD_MACHINE_TYPE: "g1-small"
-  CI_PLATFORM: "gce"
-  PRIVATE_KEY: $GCE_PRIVATE_KEY
-
-.cache: &cache
-  cache:
-    key: "$CI_BUILD_REF_NAME"
-    paths:
-      - downloads/
-      - $HOME/.cache
-
-.gce: &gce
-  extends: .testcases
-  <<: *cache
-  variables:
-    <<: *gce_variables
-  tags:
-    - gce
-
-.centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables
-  # stage: deploy-part1
-  UPGRADE_TEST: "graceful"
-
-.centos7_multus_calico_variables: &centos7_multus_calico_variables
-  # stage: deploy-gce
-  UPGRADE_TEST: "graceful"
-
-# Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
-### PR JOBS PART1
-
-gce_ubuntu18-flannel-aio:
-  stage: deploy-part1
-  <<: *gce
-  when: manual
-  except: ['triggers']
-  only: [/^pr-.*$/]
-
-### PR JOBS PART2
-
-gce_coreos-calico-aio:
-  stage: deploy-gce
-  <<: *gce
-  when: on_success
-  except: ['triggers']
-  only: [/^pr-.*$/]
-
-  [ ... the rest of the file declares the same short job stanzas (<<: *gce or extends: .gce, a stage, a when condition and only/except rules) for: gce_centos7-flannel-addons; ### MANUAL JOBS gce_centos-weave-kubeadm-sep (extends: .gce with *centos_weave_kubeadm_variables), gce_ubuntu-weave-sep; the trigger jobs gce_coreos-calico-sep-triggers, gce_ubuntu-canal-ha-triggers, gce_centos7-flannel-addons-triggers, gce_ubuntu-weave-sep-triggers, gce_ubuntu-canal-kubeadm-triggers, gce_centos-weave-kubeadm-triggers, gce_rhel7-weave-triggers, gce_debian9-calico-triggers, gce_coreos-canal-triggers, gce_rhel7-canal-sep-triggers, gce_centos7-calico-ha-triggers (when: on_success, only: ['triggers']); and the mostly manual jobs gce_ubuntu-canal-ha, gce_ubuntu-canal-kubeadm, gce_ubuntu-flannel-ha, gce_ubuntu-contiv-sep, gce_coreos-cilium, gce_ubuntu18-cilium-sep, gce_rhel7-weave, gce_debian9-calico-upgrade, gce_coreos-canal, gce_rhel7-canal-sep, gce_centos7-calico-ha, gce_centos7-kube-router, gce_centos7-multus-calico (extends: .gce with *centos7_multus_calico_variables), gce_opensuse-canal, gce_coreos-alpha-weave-ha (# no triggers yet https://github.com/kubernetes-incubator/kargo/issues/613), gce_coreos-kube-router, gce_ubuntu-kube-router-sep (except: ['triggers'], only: ['master', /^pr-.*$/]) ... ]
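In the hunk above, `UPGRADE_TEST: "graceful"` (set by the `centos_weave_kubeadm` and `centos7_multus_calico` variable anchors) is the switch that makes the shared `.testcases` logic deploy a pinned older revision first and then re-run the deployment with `upgrade-cluster.yml` from the current ref. Below is a hedged sketch of a job opting into that flow; the job name is hypothetical and not part of the repository.

```yaml
gce_example-upgrade:
  stage: deploy-part2
  extends: .gce                 # inherits the cache, GCE variables and .testcases behaviour
  variables:
    UPGRADE_TEST: "graceful"    # anything other than "false" enables the upgrade path
  when: manual
  except: ['triggers']
  only: ['master', /^pr-.*$/]
```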
@@ -1,40 +0,0 @@
----
-yamllint:
-  extends: .job
-  stage: unit-tests
-  script:
-    - yamllint --strict .
-  except: ['triggers', 'master']
-
-ansible-lint:
-  extends: .job
-  stage: unit-tests
-  # lint every yml/yaml file that looks like it contains Ansible plays
-  script: |-
-    grep -Rl '^- hosts: \|^ hosts: ' --include \*.yml --include \*.yaml . | xargs -P 4 -n 25 ansible-lint -v
-  except: ['triggers', 'master']
-
-syntax-check:
-  extends: .job
-  stage: unit-tests
-  variables:
-    ANSIBLE_INVENTORY: inventory/local-tests.cfg
-    ANSIBLE_REMOTE_USER: root
-    ANSIBLE_BECOME: "true"
-    ANSIBLE_BECOME_USER: root
-    ANSIBLE_VERBOSITY: "3"
-  script:
-    - ansible-playbook --syntax-check cluster.yml
-    - ansible-playbook --syntax-check upgrade-cluster.yml
-    - ansible-playbook --syntax-check reset.yml
-    - ansible-playbook --syntax-check extra_playbooks/upgrade-only-k8s.yml
-  except: ['triggers', 'master']
-
-tox-inventory-builder:
-  stage: unit-tests
-  extends: .job
-  script:
-    - pip install tox
-    - cd contrib/inventory_builder && tox
-  when: manual
-  except: ['triggers', 'master']
@@ -1,123 +0,0 @@
----
-.packet_variables: &packet_variables
-  CI_PLATFORM: "packet"
-  SSH_USER: "kubespray"
-
-.packet: &packet
-  extends: .testcases
-  variables:
-    <<: *packet_variables
-  tags:
-    - packet
-
-.test-upgrade: &test-upgrade
-  variables:
-    UPGRADE_TEST: "graceful"
-
-packet_ubuntu18-calico-aio:
-  stage: deploy-part1
-  <<: *packet
-  when: on_success
-  except: ['triggers']
-  only: ['master', /^pr-.*$/]
-
-# ### PR JOBS PART2
-
-packet_centos7-flannel-addons:
-  stage: deploy-part2
-  <<: *packet
-  when: on_success
-  except: ['triggers']
-  only: [/^pr-.*$/]
-
-# ### MANUAL JOBS
-
-packet_centos-weave-kubeadm-sep:
-  stage: deploy-part2
-  <<: *packet
-  when: on_success
-  only: ['triggers']
-
-packet_ubuntu-weave-sep:
-  stage: deploy-part2
-  <<: *packet
-  when: manual
-  only: ['triggers']
-
-# # More builds for PRs/merges (manual) and triggers (auto)
-
-  [ ... the remaining jobs follow the same stanza shape (<<: *packet, a stage, when, except: ['triggers'], only: ['master', /^pr-.*$/]): packet_ubuntu-canal-ha (deploy-special, manual), packet_ubuntu-canal-kubeadm (deploy-part2, on_success), packet_ubuntu-flannel-ha (deploy-part2, on_success, no only rule), packet_ubuntu-contiv-sep (deploy-special, manual), packet_ubuntu18-cilium-sep (deploy-special, manual), packet_debian9-calico-upgrade (deploy-part2, on_success), packet_centos7-calico-ha (deploy-part2, on_success), packet_centos7-kube-router (deploy-special, manual), packet_centos7-multus-calico (deploy-part2, manual), packet_opensuse-canal (deploy-part2, manual), packet_ubuntu-kube-router-sep (deploy-special, manual) ... ]
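The `.test-upgrade: &test-upgrade` anchor above is defined but not attached to any job in this hunk. Merging it into a job is the Packet-side equivalent of setting `UPGRADE_TEST` directly; a small sketch follows, with a hypothetical job name that does not exist in the repository.

```yaml
packet_example-upgrade:
  stage: deploy-part2
  <<: *packet                  # platform settings plus the shared .testcases behaviour
  variables:
    <<: *packet_variables
    UPGRADE_TEST: "graceful"   # same effect as merging in the .test-upgrade variables
  when: manual
  except: ['triggers']
  only: ['master', /^pr-.*$/]
```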
@@ -1,15 +0,0 @@
----
-shellcheck:
-  extends: .job
-  stage: unit-tests
-  variables:
-    SHELLCHECK_VERSION: v0.6.0
-  before_script:
-    - ./tests/scripts/rebase.sh
-    - curl --silent "https://storage.googleapis.com/shellcheck/shellcheck-"${SHELLCHECK_VERSION}".linux.x86_64.tar.xz" | tar -xJv
-    - cp shellcheck-"${SHELLCHECK_VERSION}"/shellcheck /usr/bin/
-    - shellcheck --version
-  script:
-    # Run shellcheck for all *.sh except contrib/
-    - find . -name '*.sh' -not -path './contrib/*' | xargs shellcheck --severity error
-  except: ['triggers', 'master']
@@ -1,133 +0,0 @@
----
-# Tests for contrib/terraform/
-.terraform_install:
-  extends: .job
-  before_script:
-    - ./tests/scripts/rebase.sh
-    # Set Ansible config
-    - cp ansible.cfg ~/.ansible.cfg
-    # Install Terraform
-    - apt-get install -y unzip
-    - curl https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip > /tmp/terraform.zip
-    - unzip /tmp/terraform.zip && mv ./terraform /usr/local/bin/ && terraform --version
-    # Prepare inventory
-    - cp -LRp contrib/terraform/$PROVIDER/sample-inventory inventory/$CLUSTER
-    - cd inventory/$CLUSTER
-    - ln -s ../../contrib/terraform/$PROVIDER/hosts
-    - terraform init ../../contrib/terraform/$PROVIDER
-    # Copy SSH keypair
-    - mkdir -p ~/.ssh
-    - echo "$PACKET_PRIVATE_KEY" | base64 -d > ~/.ssh/id_rsa
-    - chmod 400 ~/.ssh/id_rsa
-    - echo "$PACKET_PUBLIC_KEY" | base64 -d > ~/.ssh/id_rsa.pub
-  only: ['master', /^pr-.*$/]
-
-.terraform_validate:
-  extends: .terraform_install
-  stage: unit-tests
-  script:
-    - terraform validate -var-file=cluster.tf ../../contrib/terraform/$PROVIDER
-    - terraform fmt -check -diff ../../contrib/terraform/$PROVIDER
-
-.terraform_apply:
-  extends: .terraform_install
-  stage: deploy-part2
-  when: manual
-  variables:
-    ANSIBLE_INVENTORY_UNPARSED_FAILED: "true"
-  script:
-    - terraform apply -auto-approve ../../contrib/terraform/$PROVIDER
-    - ansible-playbook -i hosts ../../cluster.yml --become
-  after_script:
-    # Cleanup regardless of exit code
-    - cd inventory/$CLUSTER
-    - terraform destroy -auto-approve ../../contrib/terraform/$PROVIDER
-
-tf-validate-openstack:
-  extends: .terraform_validate
-  variables:
-    TF_VERSION: 0.11.11
-    PROVIDER: openstack
-    CLUSTER: $CI_COMMIT_REF_NAME
-
-tf-validate-packet:
-  extends: .terraform_validate
-  variables:
-    TF_VERSION: 0.11.11
-    PROVIDER: packet
-    CLUSTER: $CI_COMMIT_REF_NAME
-
-tf-validate-aws:
-  extends: .terraform_validate
-  variables:
-    TF_VERSION: 0.11.11
-    PROVIDER: aws
-    CLUSTER: $CI_COMMIT_REF_NAME
-
-tf-packet-ubuntu16-default:
-  extends: .terraform_apply
-  variables:
-    TF_VERSION: 0.11.11
-    PROVIDER: packet
-    CLUSTER: $CI_COMMIT_REF_NAME
-    TF_VAR_cluster_name: $CI_COMMIT_REF_SLUG
-    TF_VAR_number_of_k8s_masters: "1"
-    TF_VAR_number_of_k8s_nodes: "1"
-    TF_VAR_plan_k8s_masters: t1.small.x86
-    TF_VAR_plan_k8s_nodes: t1.small.x86
-    TF_VAR_facility: ewr1
-    TF_VAR_public_key_path: ""
-    TF_VAR_operating_system: ubuntu_16_04
-
-tf-packet-ubuntu18-default:
-  extends: .terraform_apply
-  variables:
-    [ ... same variables as tf-packet-ubuntu16-default, with TF_VAR_facility: ams1 and TF_VAR_operating_system: ubuntu_18_04 ... ]
-
-.ovh_variables: &ovh_variables
-  OS_AUTH_URL: https://auth.cloud.ovh.net/v3
-  OS_PROJECT_ID: 8d3cd5d737d74227ace462dee0b903fe
-  OS_PROJECT_NAME: "9361447987648822"
-  OS_USER_DOMAIN_NAME: Default
-  OS_PROJECT_DOMAIN_ID: default
-  OS_USERNAME: 8XuhBMfkKVrk
-  OS_REGION_NAME: UK1
-  OS_INTERFACE: public
-  OS_IDENTITY_API_VERSION: "3"
-
-tf-apply-ovh:
-  extends: .terraform_apply
-  variables:
-    <<: *ovh_variables
-    TF_VERSION: 0.11.11
-    PROVIDER: openstack
-    CLUSTER: $CI_COMMIT_REF_NAME
-    ANSIBLE_TIMEOUT: "60"
-    TF_VAR_cluster_name: $CI_COMMIT_REF_SLUG
-    TF_VAR_number_of_k8s_masters: "0"
-    TF_VAR_number_of_k8s_masters_no_floating_ip: "1"
-    TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
-    TF_VAR_number_of_etcd: "0"
-    TF_VAR_number_of_k8s_nodes: "0"
-    TF_VAR_number_of_k8s_nodes_no_floating_ip: "1"
-    TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
-    TF_VAR_number_of_bastions: "0"
-    TF_VAR_number_of_k8s_masters_no_etcd: "0"
-    TF_VAR_use_neutron: "0"
-    TF_VAR_floatingip_pool: "Ext-Net"
-    TF_VAR_external_net: "6011fbc9-4cbf-46a4-8452-6890a340b60b"
-    TF_VAR_network_name: "Ext-Net"
-    TF_VAR_flavor_k8s_master: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
-    TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
-    TF_VAR_image: "Ubuntu 18.04"
-    TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
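All of the terraform jobs above hang off two hidden templates: `.terraform_validate` (runs `terraform validate` and `terraform fmt` at the unit-tests stage) and `.terraform_apply` (provisions, runs `cluster.yml`, then destroys). Covering an additional `contrib/terraform/<provider>` directory mostly means setting three variables; the sketch below uses a made-up provider name purely for illustration.

```yaml
tf-validate-example:
  extends: .terraform_validate
  variables:
    TF_VERSION: 0.11.11
    PROVIDER: example            # assumes contrib/terraform/example/ exists
    CLUSTER: $CI_COMMIT_REF_NAME
```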
@@ -7,5 +7,4 @@
 1. Submit an issue describing your proposed change to the repo in question.
 2. The [repo owners](OWNERS) will respond to your issue promptly.
 3. Fork the desired repo, develop and test your code changes.
-4. Sign the CNCF CLA (https://git.k8s.io/community/CLA.md#the-contributor-license-agreement)
-5. Submit a pull request.
+4. Submit a pull request.
Dockerfile: 18 changed lines
@@ -1,18 +1,16 @@
-FROM ubuntu:18.04
+FROM ubuntu:16.04
 
 RUN mkdir /kubespray
 WORKDIR /kubespray
 RUN apt update -y && \
     apt install -y \
-    libssl-dev python-dev sshpass apt-transport-https jq \
+    libssl-dev python-dev sshpass apt-transport-https \
     ca-certificates curl gnupg2 software-properties-common python-pip
 RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
     add-apt-repository \
     "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
     $(lsb_release -cs) \
     stable" \
     && apt update -y && apt-get install docker-ce -y
 COPY . .
 RUN /usr/bin/python -m pip install pip -U && /usr/bin/python -m pip install -r tests/requirements.txt && python -m pip install -r requirements.txt
-RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.5/bin/linux/amd64/kubectl \
-    && chmod a+x kubectl && cp kubectl /usr/local/bin/kubectl
OWNERS: 3 changed lines
@@ -1,4 +1,5 @@
-# See the OWNERS docs at https://go.k8s.io/owners
+# See the OWNERS file documentation:
+# https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md
 
 approvers:
   - kubespray-approvers
@@ -11,10 +11,8 @@ aliases:
     - riverzhang
     - holser
    - smana
-    - verwilst
  kubespray-reviewers:
    - jjungnickel
    - archifleks
    - chapsuk
    - mirwan
-    - miouge1
README.md: 52 changed lines
@@ -3,10 +3,10 @@
 Deploy a Production Ready Kubernetes Cluster
 ============================================
 
-If you have questions, check the [documentation](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
+If you have questions, join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
 You can get your invite [here](http://slack.k8s.io/)
 
-- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
+- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere, Oracle Cloud Infrastructure (Experimental), or Baremetal**
 - **Highly available** cluster
 - **Composable** (Choice of the network plugin for instance)
 - Supports most popular **Linux distributions**
@@ -17,8 +17,15 @@ Quick Start
 
 To deploy the cluster you can use :
 
+### Current release
+2.8.2
+
 ### Ansible
 
+#### Ansible version
+
+Ansible v2.7.0 is failing and/or produce unexpected results due to [ansible/ansible/issues/46600](https://github.com/ansible/ansible/issues/46600)
+
 #### Usage
 
     # Install dependencies from ``requirements.txt``
@@ -29,7 +36,7 @@ To deploy the cluster you can use :
 
     # Update Ansible inventory file with inventory builder
     declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
-    CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
+    CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
 
     # Review and change parameters under ``inventory/mycluster/group_vars``
     cat inventory/mycluster/group_vars/all/all.yml
@@ -39,7 +46,7 @@ To deploy the cluster you can use :
|
|||||||
# The option `-b` is required, as for example writing SSL keys in /etc/,
|
# The option `-b` is required, as for example writing SSL keys in /etc/,
|
||||||
# installing packages and interacting with various systemd daemons.
|
# installing packages and interacting with various systemd daemons.
|
||||||
# Without -b the playbook will fail to run!
|
# Without -b the playbook will fail to run!
|
||||||
ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml
|
ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml
|
||||||
|
|
||||||
Note: When Ansible is already installed via system packages on the control machine, other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on Ubuntu).
|
Note: When Ansible is already installed via system packages on the control machine, other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on Ubuntu).
|
||||||
As a consequence, `ansible-playbook` command will fail with:
|
As a consequence, `ansible-playbook` command will fail with:
|
||||||
@@ -86,7 +93,6 @@ Documents
|
|||||||
- [AWS](docs/aws.md)
|
- [AWS](docs/aws.md)
|
||||||
- [Azure](docs/azure.md)
|
- [Azure](docs/azure.md)
|
||||||
- [vSphere](docs/vsphere.md)
|
- [vSphere](docs/vsphere.md)
|
||||||
- [Packet Host](docs/packet.md)
|
|
||||||
- [Large deployments](docs/large-deployments.md)
|
- [Large deployments](docs/large-deployments.md)
|
||||||
- [Upgrades basics](docs/upgrades.md)
|
- [Upgrades basics](docs/upgrades.md)
|
||||||
- [Roadmap](docs/roadmap.md)
|
- [Roadmap](docs/roadmap.md)
|
||||||
@@ -108,32 +114,37 @@ Supported Components
|
|||||||
--------------------
|
--------------------
|
||||||
|
|
||||||
- Core
|
- Core
|
||||||
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.14.1
|
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.12.5
|
||||||
- [etcd](https://github.com/coreos/etcd) v3.2.26
|
- [etcd](https://github.com/coreos/etcd) v3.2.24
|
||||||
- [docker](https://www.docker.com/) v18.06 (see note)
|
- [docker](https://www.docker.com/) v18.06 (see note)
|
||||||
|
- [rkt](https://github.com/rkt/rkt) v1.21.0 (see Note 2)
|
||||||
- [cri-o](http://cri-o.io/) v1.11.5 (experimental: see [CRI-O Note](docs/cri-o.md). Only on centos based OS)
|
- [cri-o](http://cri-o.io/) v1.11.5 (experimental: see [CRI-O Note](docs/cri-o.md). Only on centos based OS)
|
||||||
- Network Plugin
|
- Network Plugin
|
||||||
- [calico](https://github.com/projectcalico/calico) v3.4.0
|
- [calico](https://github.com/projectcalico/calico) v3.1.3
|
||||||
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
|
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
|
||||||
- [cilium](https://github.com/cilium/cilium) v1.3.0
|
- [cilium](https://github.com/cilium/cilium) v1.3.0
|
||||||
- [contiv](https://github.com/contiv/install) v1.2.1
|
- [contiv](https://github.com/contiv/install) v1.2.1
|
||||||
- [flanneld](https://github.com/coreos/flannel) v0.11.0
|
- [flanneld](https://github.com/coreos/flannel) v0.10.0
|
||||||
- [kube-router](https://github.com/cloudnativelabs/kube-router) v0.2.5
|
- [kube-router](https://github.com/cloudnativelabs/kube-router) v0.2.1
|
||||||
- [multus](https://github.com/intel/multus-cni) v3.1.autoconf
|
- [multus](https://github.com/intel/multus-cni) v3.1.autoconf
|
||||||
- [weave](https://github.com/weaveworks/weave) v2.5.1
|
- [weave](https://github.com/weaveworks/weave) v2.5.0
|
||||||
- Application
|
- Application
|
||||||
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
|
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
|
||||||
- [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
|
|
||||||
- [cert-manager](https://github.com/jetstack/cert-manager) v0.5.2
|
- [cert-manager](https://github.com/jetstack/cert-manager) v0.5.2
|
||||||
- [coredns](https://github.com/coredns/coredns) v1.5.0
|
- [coredns](https://github.com/coredns/coredns) v1.2.6
|
||||||
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.21.0
|
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.21.0
|
||||||
|
|
||||||
Note: The list of validated [docker versions](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md) was updated to 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06. kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
|
Note: The list of validated [docker versions](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md) was updated to 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
|
||||||
|
|
||||||
|
Note 2: rkt support as docker alternative is limited to control plane (etcd and
|
||||||
|
kubelet). Docker is still used for Kubernetes cluster workloads and network
|
||||||
|
plugins' related OS services. Also note, only one of the supported network
|
||||||
|
plugins can be deployed for a given single cluster.
|
||||||
|
|
||||||
Requirements
|
Requirements
|
||||||
------------
|
------------
|
||||||
|
|
||||||
- **Ansible v2.7.8 (or newer) and python-netaddr is installed on the machine
|
- **Ansible v2.5 (or newer) and python-netaddr is installed on the machine
|
||||||
that will run Ansible commands**
|
that will run Ansible commands**
|
||||||
- **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
|
- **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
|
||||||
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md#offline-environment))
|
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md#offline-environment))
|
||||||
@@ -145,14 +156,6 @@ Requirements
|
|||||||
should be configured in the target servers. Then the `ansible_become` flag
|
should be configured in the target servers. Then the `ansible_become` flag
|
||||||
or command parameters `--become or -b` should be specified.
|
or command parameters `--become or -b` should be specified.
|
||||||
|
|
||||||
Hardware:
|
|
||||||
These limits are safe guarded by Kubespray. Actual requirements for your workload can differ. For a sizing guide go to the [Building Large Clusters](https://kubernetes.io/docs/setup/cluster-large/#size-of-master-and-master-components) guide.
|
|
||||||
|
|
||||||
- Master
|
|
||||||
- Memory: 1500 MB
|
|
||||||
- Node
|
|
||||||
- Memory: 1024 MB
|
|
||||||
|
|
||||||
Network Plugins
|
Network Plugins
|
||||||
---------------
|
---------------
|
||||||
|
|
||||||
@@ -195,12 +198,13 @@ Tools and projects on top of Kubespray
|
|||||||
--------------------------------------
|
--------------------------------------
|
||||||
|
|
||||||
- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/master/doc/integrations/ansible.rst)
|
- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/master/doc/integrations/ansible.rst)
|
||||||
|
- [Fuel-ccp-installer](https://github.com/openstack/fuel-ccp-installer)
|
||||||
- [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)
|
- [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)
|
||||||
|
|
||||||
CI Tests
|
CI Tests
|
||||||
--------
|
--------
|
||||||
|
|
||||||
[](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)
|
[](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/pipelines)
|
||||||
|
|
||||||
CI/end-to-end tests sponsored by Google (GCE)
|
CI/end-to-end tests sponsored by Google (GCE)
|
||||||
See the [test matrix](docs/test_cases.md) for details.
|
See the [test matrix](docs/test_cases.md) for details.
|
||||||
|
|||||||

Vagrantfile (vendored, 29 changes)
@@ -23,7 +23,7 @@ SUPPORTED_OS = {
 "centos" => {box: "centos/7", user: "vagrant"},
 "centos-bento" => {box: "bento/centos-7.5", user: "vagrant"},
 "fedora" => {box: "fedora/28-cloud-base", user: "vagrant"},
-"opensuse" => {box: "opensuse/openSUSE-15.0-x86_64", user: "vagrant"},
+"opensuse" => {box: "opensuse/openSUSE-42.3-x86_64", user: "vagrant"},
 "opensuse-tumbleweed" => {box: "opensuse/openSUSE-Tumbleweed-x86_64", user: "vagrant"},
 }
 
@@ -50,10 +50,6 @@ $kube_node_instances = $num_instances
 $kube_node_instances_with_disks = false
 $kube_node_instances_with_disks_size = "20G"
 $kube_node_instances_with_disks_number = 2
-$override_disk_size = false
-$disk_size = "20GB"
-$local_path_provisioner_enabled = false
-$local_path_provisioner_claim_root = "/opt/local-path-provisioner/"
 
 $playbook = "cluster.yml"
 
@@ -101,13 +97,6 @@ Vagrant.configure("2") do |config|
 # always use Vagrants insecure key
 config.ssh.insert_key = false
 
-if ($override_disk_size)
-unless Vagrant.has_plugin?("vagrant-disksize")
-system "vagrant plugin install vagrant-disksize"
-end
-config.disksize.size = $disk_size
-end
-
 (1..$num_instances).each do |i|
 config.vm.define vm_name = "%s-%01d" % [$instance_name_prefix, i] do |node|
 
@@ -131,7 +120,6 @@ Vagrant.configure("2") do |config|
 vb.cpus = $vm_cpus
 vb.gui = $vm_gui
 vb.linked_clone = true
-vb.customize ["modifyvm", :id, "--vram", "8"] # ubuntu defaults to 256 MB which is a waste of precious RAM
 end
 
 node.vm.provider :libvirt do |lv|
@@ -177,29 +165,24 @@ Vagrant.configure("2") do |config|
 
 host_vars[vm_name] = {
 "ip": ip,
-"flannel_interface": "eth1",
 "kube_network_plugin": $network_plugin,
 "kube_network_plugin_multus": $multi_networking,
 "docker_keepcache": "1",
-"download_run_once": "False",
+"download_run_once": "True",
-"download_localhost": "False",
+"download_localhost": "False"
-"local_path_provisioner_enabled": "#{$local_path_provisioner_enabled}",
-"local_path_provisioner_claim_root": "#{$local_path_provisioner_claim_root}",
-"ansible_ssh_user": SUPPORTED_OS[$os][:user]
 }
 
 # Only execute the Ansible provisioner once, when all the machines are up and ready.
 if i == $num_instances
 node.vm.provision "ansible" do |ansible|
 ansible.playbook = $playbook
-$ansible_inventory_path = File.join( $inventory, "hosts.ini")
+if File.exist?(File.join( $inventory, "hosts.ini"))
-if File.exist?($ansible_inventory_path)
+ansible.inventory_path = $inventory
-ansible.inventory_path = $ansible_inventory_path
 end
 ansible.become = true
 ansible.limit = "all"
 ansible.host_key_checking = false
-ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
+ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "--ask-become-pass"]
 ansible.host_vars = host_vars
 #ansible.tags = ['download']
 ansible.groups = {
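
The provisioner wiring above only runs Ansible once all machines are up, and it picks up an existing hosts.ini under the configured $inventory directory. A minimal workflow sketch under those assumptions:

# place a hosts.ini under the $inventory directory, then bring the VMs up
vagrant up --provider=virtualbox
# with --ask-become-pass in raw_arguments, Ansible prompts for the sudo password during provisioning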

@@ -1,2 +0,0 @@
----
-theme: jekyll-theme-slate

cluster.yml (50 changes)
@@ -1,18 +1,32 @@
 ---
 - hosts: localhost
 gather_facts: false
-become: no
 tasks:
-- name: "Check ansible version >=2.7.8"
+- name: "Check ansible version !=2.7.0"
 assert:
-msg: "Ansible must be v2.7.8 or higher"
+msg: "Ansible V2.7.0 can't be used until: https://github.com/ansible/ansible/issues/46600 is fixed"
 that:
-- ansible_version.string is version("2.7.8", ">=")
+- ansible_version.string is version("2.7.0", "!=")
+- ansible_version.string is version("2.5.0", ">=")
 tags:
 - check
 vars:
 ansible_connection: local
 
+- hosts: localhost
+gather_facts: false
+tasks:
+- name: deploy warning for non kubeadm
+debug:
+msg: "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
+when: not kubeadm_enabled and not skip_non_kubeadm_warning
+
+- name: deploy cluster for non kubeadm
+pause:
+prompt: "Are you sure you want to deploy cluster using the deprecated non-kubeadm mode."
+echo: no
+when: not kubeadm_enabled and not skip_non_kubeadm_warning
+
 - hosts: bastion[0]
 gather_facts: False
 roles:
@@ -22,6 +36,10 @@
 - hosts: k8s-cluster:etcd:calico-rr
 any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
 gather_facts: false
+vars:
+# Need to disable pipelining for bootstrap-os as some systems have requiretty in sudoers set, which makes pipelining
+# fail. bootstrap-os fixes this on these systems, so in later plays it can be enabled.
+ansible_ssh_pipelining: false
 roles:
 - { role: kubespray-defaults}
 - { role: bootstrap-os, tags: bootstrap-os}
@@ -30,14 +48,13 @@
 any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
 vars:
 ansible_ssh_pipelining: true
-gather_facts: false
+gather_facts: true
 pre_tasks:
 - name: gather facts from all instances
 setup:
 delegate_to: "{{item}}"
-delegate_facts: true
+delegate_facts: True
 with_items: "{{ groups['k8s-cluster'] + groups['etcd'] + groups['calico-rr']|default([]) }}"
-run_once: true
 
 - hosts: k8s-cluster:etcd:calico-rr
 any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -79,7 +96,7 @@
 any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
 roles:
 - { role: kubespray-defaults}
-- { role: kubernetes/kubeadm, tags: kubeadm}
+- { role: kubernetes/kubeadm, tags: kubeadm, when: "kubeadm_enabled" }
 - { role: network_plugin, tags: network }
 
 - hosts: kube-master[0]
@@ -87,7 +104,7 @@
 roles:
 - { role: kubespray-defaults}
 - { role: kubernetes-apps/rotate_tokens, tags: rotate_tokens, when: "secret_changed|default(false)" }
-- { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"]}
+- { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"], when: "kubeadm_enabled" }
 
 - hosts: kube-master
 any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -104,15 +121,16 @@
 - { role: kubespray-defaults}
 - { role: network_plugin/calico/rr, tags: network }
 
+- hosts: k8s-cluster
+any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+roles:
+- { role: kubespray-defaults}
+- { role: dnsmasq, when: "dns_mode == 'dnsmasq_kubedns'", tags: dnsmasq }
+- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }
+environment: "{{proxy_env}}"
+
 - hosts: kube-master
 any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
 roles:
 - { role: kubespray-defaults}
 - { role: kubernetes-apps, tags: apps }
-environment: "{{proxy_env}}"
-
-- hosts: k8s-cluster
-any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
-roles:
-- { role: kubespray-defaults}
-- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }
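
The two localhost plays added above gate the deprecated non-kubeadm path behind the kubeadm_enabled and skip_non_kubeadm_warning variables shown in the diff. A rough usage sketch, where the inventory path is only an example:

# kubeadm deployments skip the deprecation notice and the pause entirely
ansible-playbook -i inventory/mycluster/hosts.ini -b -e kubeadm_enabled=true cluster.yml
# non-kubeadm deployments print the notice and pause for confirmation unless the warning is skipped
ansible-playbook -i inventory/mycluster/hosts.ini -b -e kubeadm_enabled=false -e skip_non_kubeadm_warning=true cluster.yml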

@@ -1,4 +1,3 @@
----
 apiVersion: "2015-06-15"
 
 virtualNetworkName: "{{ azure_virtual_network_name | default('KubeVNET') }}"
@@ -35,3 +34,4 @@ imageReferenceJson: "{{imageReference|to_json}}"
 
 storageAccountName: "sa{{nameSuffix | replace('-', '')}}"
 storageAccountType: "{{ azure_storage_account_type | default('Standard_LRS') }}"
+

@@ -1,4 +1,3 @@
----
 - set_fact:
 base_dir: "{{playbook_dir}}/.generated/"
 

@@ -1,3 +1,2 @@
----
 # See distro.yaml for supported node_distro images
 node_distro: debian

@@ -1,4 +1,3 @@
----
 distro_settings:
 debian: &DEBIAN
 image: "debian:9.5"

@@ -1,7 +1,7 @@
----
 # kubespray-dind.yaml: minimal kubespray ansible playbook usable for DIND
 # See contrib/dind/README.md
 kube_api_anonymous_auth: true
+kubeadm_enabled: true
 
 kubelet_fail_swap_on: false
 

@@ -1,4 +1,3 @@
----
 - name: set_fact distro_setup
 set_fact:
 distro_setup: "{{ distro_settings[node_distro] }}"
@@ -34,7 +33,7 @@
 # Delete docs
 path-exclude=/usr/share/doc/*
 path-include=/usr/share/doc/*/copyright
 dest: /etc/dpkg/dpkg.cfg.d/01_nodoc
 when:
 - ansible_os_family == 'Debian'
 
@@ -56,7 +55,7 @@
 user:
 name: "{{ distro_user }}"
 uid: 1000
-# groups: sudo
+#groups: sudo
 append: yes
 
 - name: Allow password-less sudo to "{{ distro_user }}"

@@ -1,4 +1,3 @@
----
 - name: set_fact distro_setup
 set_fact:
 distro_setup: "{{ distro_settings[node_distro] }}"
@@ -19,7 +18,7 @@
 state: started
 hostname: "{{ item }}"
 command: "{{ distro_init }}"
-# recreate: yes
+#recreate: yes
 privileged: true
 tmpfs:
 - /sys/module/nf_conntrack/parameters
@@ -79,7 +78,6 @@
 with_items: "{{ containers.results }}"
 
 - name: Early hack image install to adapt for DIND
-# noqa 302 - this task uses the raw module intentionally
 raw: |
 rm -fv /usr/bin/udevadm /usr/sbin/udevadm
 delegate_to: "{{ item._ansible_item_label|default(item.item) }}"

@@ -1,6 +1,8 @@
 DISTROS=(debian centos)
 NETCHECKER_HOST=${NODES[0]}
 EXTRAS=(
-'kube_network_plugin=kube-router {"kube_router_run_service_proxy":false}'
+'kube_network_plugin=kube-router {"kubeadm_enabled":true,"kube_router_run_service_proxy":false}'
-'kube_network_plugin=kube-router {"kube_router_run_service_proxy":true}'
+'kube_network_plugin=kube-router {"kubeadm_enabled":true,"kube_router_run_service_proxy":true}'
+'kube_network_plugin=kube-router {"kubeadm_enabled":false,"kube_router_run_service_proxy":false}'
+'kube_network_plugin=kube-router {"kubeadm_enabled":false,"kube_router_run_service_proxy":true}'
 )

@@ -1,8 +1,8 @@
 DISTROS=(debian centos)
 EXTRAS=(
-'kube_network_plugin=calico {}'
+'kube_network_plugin=calico {"kubeadm_enabled":true}'
-'kube_network_plugin=canal {}'
+'kube_network_plugin=canal {"kubeadm_enabled":true}'
-'kube_network_plugin=cilium {}'
+'kube_network_plugin=cilium {"kubeadm_enabled":true}'
-'kube_network_plugin=flannel {}'
+'kube_network_plugin=flannel {"kubeadm_enabled":true}'
-'kube_network_plugin=weave {}'
+'kube_network_plugin=weave {"kubeadm_enabled":true}'
 )

@@ -17,9 +17,6 @@
 #
 # Advanced usage:
 # Add another host after initial creation: inventory.py 10.10.1.5
-# Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
-# Add hosts with different ip and access ip:
-# inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.1.3
 # Delete a host: inventory.py -10.10.1.3
 # Delete a host by id: inventory.py -node1
 #
@@ -34,21 +31,21 @@
 # ip: X.X.X.X
 
 from collections import OrderedDict
-from ipaddress import ip_address
+try:
-from ruamel.yaml import YAML
+import configparser
+except ImportError:
+import ConfigParser as configparser
 
 import os
 import re
 import sys
 
-ROLES = ['all', 'kube-master', 'kube-node', 'etcd', 'k8s-cluster',
+ROLES = ['all', 'kube-master', 'kube-node', 'etcd', 'k8s-cluster:children',
-'calico-rr']
+'calico-rr', 'vault']
 PROTECTED_NAMES = ROLES
 AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'load']
 _boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
 '0': False, 'no': False, 'false': False, 'off': False}
-yaml = YAML()
-yaml.Representer.add_representer(OrderedDict, yaml.Representer.represent_dict)
 
 
 def get_var_as_bool(name, default):
@@ -57,8 +54,7 @@ def get_var_as_bool(name, default):
 
 # Configurable as shell vars start
 
-CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory/sample/hosts.yaml")
+CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory/sample/hosts.ini")
 # Reconfigures cluster distribution at scale
 SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 50))
 MASSIVE_SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 200))
@@ -72,14 +68,11 @@ HOST_PREFIX = os.environ.get("HOST_PREFIX", "node")
 class KubesprayInventory(object):
 
 def __init__(self, changed_hosts=None, config_file=None):
+self.config = configparser.ConfigParser(allow_no_value=True,
+delimiters=('\t', ' '))
 self.config_file = config_file
-self.yaml_config = {}
 if self.config_file:
-try:
+self.config.read(self.config_file)
-self.hosts_file = open(config_file, 'r')
-self.yaml_config = yaml.load(self.hosts_file)
-except FileNotFoundError:
-pass
 
 if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
 self.parse_command(changed_hosts[0], changed_hosts[1:])
@@ -88,20 +81,18 @@ class KubesprayInventory(object):
 self.ensure_required_groups(ROLES)
 
 if changed_hosts:
-changed_hosts = self.range2ips(changed_hosts)
 self.hosts = self.build_hostnames(changed_hosts)
 self.purge_invalid_hosts(self.hosts.keys(), PROTECTED_NAMES)
 self.set_all(self.hosts)
 self.set_k8s_cluster()
-etcd_hosts_count = 3 if len(self.hosts.keys()) >= 3 else 1
+self.set_etcd(list(self.hosts.keys())[:3])
-self.set_etcd(list(self.hosts.keys())[:etcd_hosts_count])
 if len(self.hosts) >= SCALE_THRESHOLD:
-self.set_kube_master(list(self.hosts.keys())[etcd_hosts_count:5])
+self.set_kube_master(list(self.hosts.keys())[3:5])
 else:
 self.set_kube_master(list(self.hosts.keys())[:2])
 self.set_kube_node(self.hosts.keys())
 if len(self.hosts) >= SCALE_THRESHOLD:
-self.set_calico_rr(list(self.hosts.keys())[:etcd_hosts_count])
+self.set_calico_rr(list(self.hosts.keys())[:3])
 else: # Show help if no options
 self.show_help()
 sys.exit(0)
@@ -110,9 +101,8 @@ class KubesprayInventory(object):
 
 def write_config(self, config_file):
 if config_file:
-with open(self.config_file, 'w') as f:
+with open(config_file, 'w') as f:
-yaml.dump(self.yaml_config, f)
+self.config.write(f)
-
 else:
 print("WARNING: Unable to save config. Make sure you set "
 "CONFIG_FILE env var.")
@@ -122,29 +112,28 @@ class KubesprayInventory(object):
 print("DEBUG: {0}".format(msg))
 
 def get_ip_from_opts(self, optstring):
-if 'ip' in optstring:
+opts = optstring.split(' ')
-return optstring['ip']
+for opt in opts:
-else:
+if '=' not in opt:
-raise ValueError("IP parameter not found in options")
+continue
+k, v = opt.split('=')
+if k == "ip":
+return v
+raise ValueError("IP parameter not found in options")
 
 def ensure_required_groups(self, groups):
 for group in groups:
-if group == 'all':
+try:
 self.debug("Adding group {0}".format(group))
-if group not in self.yaml_config:
+self.config.add_section(group)
-all_dict = OrderedDict([('hosts', OrderedDict({})),
+except configparser.DuplicateSectionError:
-('children', OrderedDict({}))])
+pass
-self.yaml_config = {'all': all_dict}
-else:
-self.debug("Adding group {0}".format(group))
-if group not in self.yaml_config['all']['children']:
-self.yaml_config['all']['children'][group] = {'hosts': {}}
 
 def get_host_id(self, host):
 '''Returns integer host ID (without padding) from a given hostname.'''
 try:
 short_hostname = host.split('.')[0]
-return int(re.findall("\\d+$", short_hostname)[-1])
+return int(re.findall("\d+$", short_hostname)[-1])
 except IndexError:
 raise ValueError("Host name must end in an integer")
 
@@ -152,12 +141,12 @@ class KubesprayInventory(object):
 existing_hosts = OrderedDict()
 highest_host_id = 0
 try:
-for host in self.yaml_config['all']['hosts']:
+for host, opts in self.config.items('all'):
-existing_hosts[host] = self.yaml_config['all']['hosts'][host]
+existing_hosts[host] = opts
 host_id = self.get_host_id(host)
 if host_id > highest_host_id:
 highest_host_id = host_id
-except Exception:
+except configparser.NoSectionError:
 pass
 
 # FIXME(mattymo): Fix condition where delete then add reuses highest id
@@ -174,53 +163,22 @@ class KubesprayInventory(object):
 self.debug("Marked {0} for deletion.".format(realhost))
 self.delete_host_by_ip(all_hosts, realhost)
 elif host[0].isdigit():
-if ',' in host:
-ip, access_ip = host.split(',')
-else:
-ip = host
-access_ip = host
 if self.exists_hostname(all_hosts, host):
 self.debug("Skipping existing host {0}.".format(host))
 continue
-elif self.exists_ip(all_hosts, ip):
+elif self.exists_ip(all_hosts, host):
-self.debug("Skipping existing host {0}.".format(ip))
+self.debug("Skipping existing host {0}.".format(host))
 continue
 
 next_host = "{0}{1}".format(HOST_PREFIX, next_host_id)
 next_host_id += 1
-all_hosts[next_host] = {'ansible_host': access_ip,
+all_hosts[next_host] = "ansible_host={0} ip={1}".format(
-'ip': ip,
+host, host)
-'access_ip': access_ip}
 elif host[0].isalpha():
 raise Exception("Adding hosts by hostname is not supported.")
 
 return all_hosts
 
-def range2ips(self, hosts):
-reworked_hosts = []
-
-def ips(start_address, end_address):
-try:
-# Python 3.x
-start = int(ip_address(start_address))
-end = int(ip_address(end_address))
-except:
-# Python 2.7
-start = int(ip_address(unicode(start_address)))
-end = int(ip_address(unicode(end_address)))
-return [ip_address(ip).exploded for ip in range(start, end + 1)]
-
-for host in hosts:
-if '-' in host and not host.startswith('-'):
-start, end = host.strip().split('-')
-try:
-reworked_hosts.extend(ips(start, end))
-except ValueError:
-raise Exception("Range of ip_addresses isn't valid")
-else:
-reworked_hosts.append(host)
-return reworked_hosts
-
 def exists_hostname(self, existing_hosts, hostname):
 return hostname in existing_hosts.keys()
 
@@ -238,34 +196,16 @@ class KubesprayInventory(object):
 raise ValueError("Unable to find host by IP: {0}".format(ip))
 
 def purge_invalid_hosts(self, hostnames, protected_names=[]):
-for role in self.yaml_config['all']['children']:
+for role in self.config.sections():
-if role != 'k8s-cluster' and self.yaml_config['all']['children'][role]['hosts']: # noqa
+for host, _ in self.config.items(role):
-all_hosts = self.yaml_config['all']['children'][role]['hosts'].copy() # noqa
-for host in all_hosts.keys():
-if host not in hostnames and host not in protected_names:
-self.debug(
-"Host {0} removed from role {1}".format(host, role)) # noqa
-del self.yaml_config['all']['children'][role]['hosts'][host] # noqa
-# purge from all
-if self.yaml_config['all']['hosts']:
-all_hosts = self.yaml_config['all']['hosts'].copy()
-for host in all_hosts.keys():
 if host not in hostnames and host not in protected_names:
-self.debug("Host {0} removed from role all".format(host))
+self.debug("Host {0} removed from role {1}".format(host,
-del self.yaml_config['all']['hosts'][host]
+role))
+self.config.remove_option(role, host)
 
 def add_host_to_group(self, group, host, opts=""):
 self.debug("adding host {0} to group {1}".format(host, group))
-if group == 'all':
+self.config.set(group, host, opts)
-if self.yaml_config['all']['hosts'] is None:
-self.yaml_config['all']['hosts'] = {host: None}
-self.yaml_config['all']['hosts'][host] = opts
-elif group != 'k8s-cluster:children':
-if self.yaml_config['all']['children'][group]['hosts'] is None:
-self.yaml_config['all']['children'][group]['hosts'] = {
-host: None}
-else:
-self.yaml_config['all']['children'][group]['hosts'][host] = None # noqa
 
 def set_kube_master(self, hosts):
 for host in hosts:
@@ -276,31 +216,31 @@ class KubesprayInventory(object):
 self.add_host_to_group('all', host, opts)
 
 def set_k8s_cluster(self):
-k8s_cluster = {'children': {'kube-master': None, 'kube-node': None}}
+self.add_host_to_group('k8s-cluster:children', 'kube-node')
-self.yaml_config['all']['children']['k8s-cluster'] = k8s_cluster
+self.add_host_to_group('k8s-cluster:children', 'kube-master')
 
 def set_calico_rr(self, hosts):
 for host in hosts:
-if host in self.yaml_config['all']['children']['kube-master']:
+if host in self.config.items('kube-master'):
 self.debug("Not adding {0} to calico-rr group because it "
 "conflicts with kube-master group".format(host))
 continue
-if host in self.yaml_config['all']['children']['kube-node']:
+if host in self.config.items('kube-node'):
 self.debug("Not adding {0} to calico-rr group because it "
 "conflicts with kube-node group".format(host))
 continue
 self.add_host_to_group('calico-rr', host)
 
 def set_kube_node(self, hosts):
 for host in hosts:
-if len(self.yaml_config['all']['hosts']) >= SCALE_THRESHOLD:
+if len(self.config['all']) >= SCALE_THRESHOLD:
-if host in self.yaml_config['all']['children']['etcd']['hosts']: # noqa
+if self.config.has_option('etcd', host):
 self.debug("Not adding {0} to kube-node group because of "
 "scale deployment and host is in etcd "
 "group.".format(host))
 continue
-if len(self.yaml_config['all']['hosts']) >= MASSIVE_SCALE_THRESHOLD: # noqa
+if len(self.config['all']) >= MASSIVE_SCALE_THRESHOLD:
-if host in self.yaml_config['all']['children']['kube-master']['hosts']: # noqa
+if self.config.has_option('kube-master', host):
 self.debug("Not adding {0} to kube-node group because of "
 "scale deployment and host is in kube-master "
 "group.".format(host))
@@ -310,31 +250,42 @@ class KubesprayInventory(object):
 def set_etcd(self, hosts):
 for host in hosts:
 self.add_host_to_group('etcd', host)
+self.add_host_to_group('vault', host)
 
 def load_file(self, files=None):
-'''Directly loads JSON to inventory.'''
+'''Directly loads JSON, or YAML file to inventory.'''
 
 if not files:
 raise Exception("No input file specified.")
 
 import json
+import yaml
 
 for filename in list(files):
-# Try JSON
+# Try JSON, then YAML
 try:
 with open(filename, 'r') as f:
 data = json.load(f)
 except ValueError:
-raise Exception("Cannot read %s as JSON, or CSV", filename)
+try:
+with open(filename, 'r') as f:
+data = yaml.load(f)
+print("yaml")
+except ValueError:
+raise Exception("Cannot read %s as JSON, YAML, or CSV",
+filename)
 
 self.ensure_required_groups(ROLES)
 self.set_k8s_cluster()
 for group, hosts in data.items():
 self.ensure_required_groups([group])
 for host, opts in hosts.items():
-optstring = {'ansible_host': opts['ip'],
+optstring = "ansible_host={0} ip={0}".format(opts['ip'])
-'ip': opts['ip'],
+for key, val in opts.items():
-'access_ip': opts['ip']}
+if key == "ip":
+continue
+optstring += " {0}={1}".format(key, val)
+
 self.add_host_to_group('all', host, optstring)
 self.add_host_to_group(group, host)
 self.write_config(self.config_file)
@@ -362,26 +313,24 @@ print_ips - Write a space-delimited list of IPs from "all" group
 
 Advanced usage:
 Add another host after initial creation: inventory.py 10.10.1.5
-Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
-Add hosts with different ip and access ip: inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.10.3
 Delete a host: inventory.py -10.10.1.3
 Delete a host by id: inventory.py -node1
 
 Configurable env vars:
 DEBUG Enable debug printing. Default: True
-CONFIG_FILE File to write config to Default: ./inventory/sample/hosts.yaml
+CONFIG_FILE File to write config to Default: ./inventory/sample/hosts.ini
 HOST_PREFIX Host prefix for generated hosts. Default: node
 SCALE_THRESHOLD Separate ETCD role if # of nodes >= 50
 MASSIVE_SCALE_THRESHOLD Separate K8s master and ETCD if # of nodes >= 200
-''' # noqa
+'''
 print(help_text)
 
 def print_config(self):
-yaml.dump(self.yaml_config, sys.stdout)
+self.config.write(sys.stdout)
 
 def print_ips(self):
 ips = []
-for host, opts in self.yaml_config['all']['hosts'].items():
+for host, opts in self.config.items('all'):
 ips.append(self.get_ip_from_opts(opts))
 print(' '.join(ips))
 
@@ -391,6 +340,5 @@ def main(argv=None):
 argv = sys.argv[1:]
 KubesprayInventory(argv, CONFIG_FILE)
 
-
 if __name__ == "__main__":
 sys.exit(main())

@@ -1,3 +1 @@
 configparser>=3.3.0
-ruamel.yaml>=0.15.88
-ipaddress
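
A hedged usage sketch for the INI-backed inventory builder shown above; the inventory path is only an example, while the CONFIG_FILE variable, its default, and the print_cfg/print_ips commands come from the script itself:

# write an INI inventory for three nodes (CONFIG_FILE defaults to ./inventory/sample/hosts.ini when unset)
CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py 10.10.1.3 10.10.1.4 10.10.1.5
# print the generated config or just the node IPs
CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py print_cfg
CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py print_ips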

@@ -34,9 +34,7 @@ class TestInventory(unittest.TestCase):
 self.inv = inventory.KubesprayInventory()
 
 def test_get_ip_from_opts(self):
-optstring = {'ansible_host': '10.90.3.2',
+optstring = "ansible_host=10.90.3.2 ip=10.90.3.2"
-'ip': '10.90.3.2',
-'access_ip': '10.90.3.2'}
 expected = "10.90.3.2"
 result = self.inv.get_ip_from_opts(optstring)
 self.assertEqual(expected, result)
@@ -50,7 +48,7 @@ class TestInventory(unittest.TestCase):
 groups = ['group1', 'group2']
 self.inv.ensure_required_groups(groups)
 for group in groups:
-self.assertTrue(group in self.inv.yaml_config['all']['children'])
+self.assertTrue(group in self.inv.config.sections())
 
 def test_get_host_id(self):
 hostnames = ['node99', 'no99de01', '01node01', 'node1.domain',
@@ -69,49 +67,35 @@ class TestInventory(unittest.TestCase):
 def test_build_hostnames_add_one(self):
 changed_hosts = ['10.90.0.2']
 expected = OrderedDict([('node1',
-{'ansible_host': '10.90.0.2',
+'ansible_host=10.90.0.2 ip=10.90.0.2')])
-'ip': '10.90.0.2',
-'access_ip': '10.90.0.2'})])
 result = self.inv.build_hostnames(changed_hosts)
 self.assertEqual(expected, result)
 
 def test_build_hostnames_add_duplicate(self):
 changed_hosts = ['10.90.0.2']
 expected = OrderedDict([('node1',
-{'ansible_host': '10.90.0.2',
+'ansible_host=10.90.0.2 ip=10.90.0.2')])
-'ip': '10.90.0.2',
+self.inv.config['all'] = expected
-'access_ip': '10.90.0.2'})])
-self.inv.yaml_config['all']['hosts'] = expected
 result = self.inv.build_hostnames(changed_hosts)
 self.assertEqual(expected, result)
 
 def test_build_hostnames_add_two(self):
 changed_hosts = ['10.90.0.2', '10.90.0.3']
 expected = OrderedDict([
-('node1', {'ansible_host': '10.90.0.2',
+('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
-'ip': '10.90.0.2',
+('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
-'access_ip': '10.90.0.2'}),
+self.inv.config['all'] = OrderedDict()
-('node2', {'ansible_host': '10.90.0.3',
-'ip': '10.90.0.3',
-'access_ip': '10.90.0.3'})])
-self.inv.yaml_config['all']['hosts'] = OrderedDict()
 result = self.inv.build_hostnames(changed_hosts)
 self.assertEqual(expected, result)
 
 def test_build_hostnames_delete_first(self):
 changed_hosts = ['-10.90.0.2']
 existing_hosts = OrderedDict([
-('node1', {'ansible_host': '10.90.0.2',
+('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
-'ip': '10.90.0.2',
+('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
-'access_ip': '10.90.0.2'}),
+self.inv.config['all'] = existing_hosts
-('node2', {'ansible_host': '10.90.0.3',
-'ip': '10.90.0.3',
-'access_ip': '10.90.0.3'})])
-self.inv.yaml_config['all']['hosts'] = existing_hosts
 expected = OrderedDict([
-('node2', {'ansible_host': '10.90.0.3',
+('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
-'ip': '10.90.0.3',
-'access_ip': '10.90.0.3'})])
 result = self.inv.build_hostnames(changed_hosts)
 self.assertEqual(expected, result)
 
@@ -119,12 +103,8 @@ class TestInventory(unittest.TestCase):
 hostname = 'node1'
 expected = True
 existing_hosts = OrderedDict([
-('node1', {'ansible_host': '10.90.0.2',
+('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
-'ip': '10.90.0.2',
+('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
-'access_ip': '10.90.0.2'}),
-('node2', {'ansible_host': '10.90.0.3',
-'ip': '10.90.0.3',
-'access_ip': '10.90.0.3'})])
 result = self.inv.exists_hostname(existing_hosts, hostname)
 self.assertEqual(expected, result)
 
@@ -132,12 +112,8 @@ class TestInventory(unittest.TestCase):
 hostname = 'node99'
 expected = False
 existing_hosts = OrderedDict([
-('node1', {'ansible_host': '10.90.0.2',
+('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
-'ip': '10.90.0.2',
+('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
-'access_ip': '10.90.0.2'}),
-('node2', {'ansible_host': '10.90.0.3',
-'ip': '10.90.0.3',
-'access_ip': '10.90.0.3'})])
 result = self.inv.exists_hostname(existing_hosts, hostname)
 self.assertEqual(expected, result)
 
@@ -145,12 +121,8 @@ class TestInventory(unittest.TestCase):
 ip = '10.90.0.2'
 expected = True
 existing_hosts = OrderedDict([
-('node1', {'ansible_host': '10.90.0.2',
+('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
-'ip': '10.90.0.2',
+('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
-'access_ip': '10.90.0.2'}),
-('node2', {'ansible_host': '10.90.0.3',
-'ip': '10.90.0.3',
-'access_ip': '10.90.0.3'})])
 result = self.inv.exists_ip(existing_hosts, ip)
 self.assertEqual(expected, result)
 
@@ -158,40 +130,26 @@ class TestInventory(unittest.TestCase):
 ip = '10.90.0.200'
 expected = False
 existing_hosts = OrderedDict([
-('node1', {'ansible_host': '10.90.0.2',
+('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
-'ip': '10.90.0.2',
+('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
-'access_ip': '10.90.0.2'}),
-('node2', {'ansible_host': '10.90.0.3',
-'ip': '10.90.0.3',
-'access_ip': '10.90.0.3'})])
 result = self.inv.exists_ip(existing_hosts, ip)
 self.assertEqual(expected, result)
 
 def test_delete_host_by_ip_positive(self):
 ip = '10.90.0.2'
 expected = OrderedDict([
-('node2', {'ansible_host': '10.90.0.3',
+('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
-'ip': '10.90.0.3',
-'access_ip': '10.90.0.3'})])
 existing_hosts = OrderedDict([
-('node1', {'ansible_host': '10.90.0.2',
+('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
-'ip': '10.90.0.2',
+('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
-'access_ip': '10.90.0.2'}),
-('node2', {'ansible_host': '10.90.0.3',
-'ip': '10.90.0.3',
-'access_ip': '10.90.0.3'})])
 self.inv.delete_host_by_ip(existing_hosts, ip)
 self.assertEqual(expected, existing_hosts)
 
 def test_delete_host_by_ip_negative(self):
 ip = '10.90.0.200'
 existing_hosts = OrderedDict([
-('node1', {'ansible_host': '10.90.0.2',
+('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
-'ip': '10.90.0.2',
+('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
-'access_ip': '10.90.0.2'}),
-('node2', {'ansible_host': '10.90.0.3',
-'ip': '10.90.0.3',
-'access_ip': '10.90.0.3'})])
 self.assertRaisesRegexp(ValueError, "Unable to find host",
 self.inv.delete_host_by_ip, existing_hosts, ip)
 
@@ -199,71 +157,59 @@ class TestInventory(unittest.TestCase):
 proper_hostnames = ['node1', 'node2']
 bad_host = 'doesnotbelong2'
 existing_hosts = OrderedDict([
-('node1', {'ansible_host': '10.90.0.2',
+('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
-'ip': '10.90.0.2',
+('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3'),
-'access_ip': '10.90.0.2'}),
+('doesnotbelong2', 'whateveropts=ilike')])
-('node2', {'ansible_host': '10.90.0.3',
+self.inv.config['all'] = existing_hosts
-'ip': '10.90.0.3',
-'access_ip': '10.90.0.3'}),
-('doesnotbelong2', {'whateveropts=ilike'})])
-self.inv.yaml_config['all']['hosts'] = existing_hosts
 self.inv.purge_invalid_hosts(proper_hostnames)
-self.assertTrue(
+self.assertTrue(bad_host not in self.inv.config['all'].keys())
-bad_host not in self.inv.yaml_config['all']['hosts'].keys())
 
 def test_add_host_to_group(self):
 group = 'etcd'
 host = 'node1'
-opts = {'ip': '10.90.0.2'}
+opts = 'ip=10.90.0.2'
 
 self.inv.add_host_to_group(group, host, opts)
-self.assertEqual(
+self.assertEqual(self.inv.config[group].get(host), opts)
-self.inv.yaml_config['all']['children'][group]['hosts'].get(host),
-None)
 
 def test_set_kube_master(self):
 group = 'kube-master'
 host = 'node1'
 
 self.inv.set_kube_master([host])
-self.assertTrue(
+self.assertTrue(host in self.inv.config[group])
-host in self.inv.yaml_config['all']['children'][group]['hosts'])
 
 def test_set_all(self):
+group = 'all'
 hosts = OrderedDict([
 ('node1', 'opt1'),
 ('node2', 'opt2')])
 
 self.inv.set_all(hosts)
 for host, opt in hosts.items():
 self.assertEqual(
|
self.assertEqual(self.inv.config[group].get(host), opt)
|
||||||
self.inv.yaml_config['all']['hosts'].get(host), opt)
|
|
||||||
|
|
||||||
def test_set_k8s_cluster(self):
|
def test_set_k8s_cluster(self):
|
||||||
group = 'k8s-cluster'
|
group = 'k8s-cluster:children'
|
||||||
expected_hosts = ['kube-node', 'kube-master']
|
expected_hosts = ['kube-node', 'kube-master']
|
||||||
|
|
||||||
self.inv.set_k8s_cluster()
|
self.inv.set_k8s_cluster()
|
||||||
for host in expected_hosts:
|
for host in expected_hosts:
|
||||||
self.assertTrue(
|
self.assertTrue(host in self.inv.config[group])
|
||||||
host in
|
|
||||||
self.inv.yaml_config['all']['children'][group]['children'])
|
|
||||||
|
|
||||||
def test_set_kube_node(self):
|
def test_set_kube_node(self):
|
||||||
group = 'kube-node'
|
group = 'kube-node'
|
||||||
host = 'node1'
|
host = 'node1'
|
||||||
|
|
||||||
self.inv.set_kube_node([host])
|
self.inv.set_kube_node([host])
|
||||||
self.assertTrue(
|
self.assertTrue(host in self.inv.config[group])
|
||||||
host in self.inv.yaml_config['all']['children'][group]['hosts'])
|
|
||||||
|
|
||||||
def test_set_etcd(self):
|
def test_set_etcd(self):
|
||||||
group = 'etcd'
|
group = 'etcd'
|
||||||
host = 'node1'
|
host = 'node1'
|
||||||
|
|
||||||
self.inv.set_etcd([host])
|
self.inv.set_etcd([host])
|
||||||
self.assertTrue(
|
self.assertTrue(host in self.inv.config[group])
|
||||||
host in self.inv.yaml_config['all']['children'][group]['hosts'])
|
|
||||||
|
|
||||||
def test_scale_scenario_one(self):
|
def test_scale_scenario_one(self):
|
||||||
num_nodes = 50
|
num_nodes = 50
|
||||||
@@ -273,13 +219,11 @@ class TestInventory(unittest.TestCase):
|
|||||||
hosts["node" + str(hostid)] = ""
|
hosts["node" + str(hostid)] = ""
|
||||||
|
|
||||||
self.inv.set_all(hosts)
|
self.inv.set_all(hosts)
|
||||||
self.inv.set_etcd(list(hosts.keys())[0:3])
|
self.inv.set_etcd(hosts.keys()[0:3])
|
||||||
self.inv.set_kube_master(list(hosts.keys())[0:2])
|
self.inv.set_kube_master(hosts.keys()[0:2])
|
||||||
self.inv.set_kube_node(hosts.keys())
|
self.inv.set_kube_node(hosts.keys())
|
||||||
for h in range(3):
|
for h in range(3):
|
||||||
self.assertFalse(
|
self.assertFalse(hosts.keys()[h] in self.inv.config['kube-node'])
|
||||||
list(hosts.keys())[h] in
|
|
||||||
self.inv.yaml_config['all']['children']['kube-node']['hosts'])
|
|
||||||
|
|
||||||
def test_scale_scenario_two(self):
|
def test_scale_scenario_two(self):
|
||||||
num_nodes = 500
|
num_nodes = 500
|
||||||
@@ -289,57 +233,8 @@ class TestInventory(unittest.TestCase):
|
|||||||
hosts["node" + str(hostid)] = ""
|
hosts["node" + str(hostid)] = ""
|
||||||
|
|
||||||
self.inv.set_all(hosts)
|
self.inv.set_all(hosts)
|
||||||
self.inv.set_etcd(list(hosts.keys())[0:3])
|
self.inv.set_etcd(hosts.keys()[0:3])
|
||||||
self.inv.set_kube_master(list(hosts.keys())[3:5])
|
self.inv.set_kube_master(hosts.keys()[3:5])
|
||||||
self.inv.set_kube_node(hosts.keys())
|
self.inv.set_kube_node(hosts.keys())
|
||||||
for h in range(5):
|
for h in range(5):
|
||||||
self.assertFalse(
|
self.assertFalse(hosts.keys()[h] in self.inv.config['kube-node'])
|
||||||
list(hosts.keys())[h] in
|
|
||||||
self.inv.yaml_config['all']['children']['kube-node']['hosts'])
|
|
||||||
|
|
||||||
def test_range2ips_range(self):
|
|
||||||
changed_hosts = ['10.90.0.2', '10.90.0.4-10.90.0.6', '10.90.0.8']
|
|
||||||
expected = ['10.90.0.2',
|
|
||||||
'10.90.0.4',
|
|
||||||
'10.90.0.5',
|
|
||||||
'10.90.0.6',
|
|
||||||
'10.90.0.8']
|
|
||||||
result = self.inv.range2ips(changed_hosts)
|
|
||||||
self.assertEqual(expected, result)
|
|
||||||
|
|
||||||
def test_range2ips_incorrect_range(self):
|
|
||||||
host_range = ['10.90.0.4-a.9b.c.e']
|
|
||||||
self.assertRaisesRegexp(Exception, "Range of ip_addresses isn't valid",
|
|
||||||
self.inv.range2ips, host_range)
|
|
||||||
|
|
||||||
def test_build_hostnames_different_ips_add_one(self):
|
|
||||||
changed_hosts = ['10.90.0.2,192.168.0.2']
|
|
||||||
expected = OrderedDict([('node1',
|
|
||||||
{'ansible_host': '192.168.0.2',
|
|
||||||
'ip': '10.90.0.2',
|
|
||||||
'access_ip': '192.168.0.2'})])
|
|
||||||
result = self.inv.build_hostnames(changed_hosts)
|
|
||||||
self.assertEqual(expected, result)
|
|
||||||
|
|
||||||
def test_build_hostnames_different_ips_add_duplicate(self):
|
|
||||||
changed_hosts = ['10.90.0.2,192.168.0.2']
|
|
||||||
expected = OrderedDict([('node1',
|
|
||||||
{'ansible_host': '192.168.0.2',
|
|
||||||
'ip': '10.90.0.2',
|
|
||||||
'access_ip': '192.168.0.2'})])
|
|
||||||
self.inv.yaml_config['all']['hosts'] = expected
|
|
||||||
result = self.inv.build_hostnames(changed_hosts)
|
|
||||||
self.assertEqual(expected, result)
|
|
||||||
|
|
||||||
def test_build_hostnames_different_ips_add_two(self):
|
|
||||||
changed_hosts = ['10.90.0.2,192.168.0.2', '10.90.0.3,192.168.0.3']
|
|
||||||
expected = OrderedDict([
|
|
||||||
('node1', {'ansible_host': '192.168.0.2',
|
|
||||||
'ip': '10.90.0.2',
|
|
||||||
'access_ip': '192.168.0.2'}),
|
|
||||||
('node2', {'ansible_host': '192.168.0.3',
|
|
||||||
'ip': '10.90.0.3',
|
|
||||||
'access_ip': '192.168.0.3'})])
|
|
||||||
self.inv.yaml_config['all']['hosts'] = OrderedDict()
|
|
||||||
result = self.inv.build_hostnames(changed_hosts)
|
|
||||||
self.assertEqual(expected, result)
|
|
||||||
|
|||||||
@@ -1,9 +1,15 @@
 ---

+- name: Upgrade all packages to the latest version (yum)
+  yum:
+    name: '*'
+    state: latest
+  when: ansible_os_family == "RedHat"
+
 - name: Install required packages
   yum:
     name: "{{ item }}"
-    state: present
+    state: latest
   with_items:
     - bind-utils
     - ntp

@@ -15,13 +21,23 @@
     update_cache: yes
     cache_valid_time: 3600
     name: "{{ item }}"
-    state: present
+    state: latest
     install_recommends: no
   with_items:
     - dnsutils
     - ntp
   when: ansible_os_family == "Debian"

+- name: Upgrade all packages to the latest version (apt)
+  shell: apt-get -o \
+      Dpkg::Options::=--force-confdef -o \
+      Dpkg::Options::=--force-confold -q -y \
+      dist-upgrade
+  environment:
+    DEBIAN_FRONTEND: noninteractive
+  when: ansible_os_family == "Debian"
+
 # Create deployment user if required
 - include: user.yml
   when: k8s_deployment_user is defined
@@ -6,5 +6,5 @@ This playbook aims to automate [this](https://metallb.universe.tf/tutorial/layer

 ## Install
 ```
-ansible-playbook --ask-become -i inventory/sample/hosts.ini contrib/metallb/metallb.yml
+ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contrib/metallb/metallb.yml
 ```
@@ -1 +0,0 @@
-../../library
@@ -5,4 +5,3 @@ metallb:
     cpu: "100m"
     memory: "100Mi"
   port: "7472"
-  version: v0.7.3
@@ -12,7 +12,6 @@
     kubectl: "{{bin_dir}}/kubectl"
     filename: "{{ kube_config_dir }}/{{ item.item }}"
     state: "{{ item.changed | ternary('latest','present') }}"
-  become: true
  with_items: "{{ rendering.results }}"
  when:
    - "inventory_hostname == groups['kube-master'][0]"
@@ -53,6 +53,22 @@ rules:
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
+metadata:
+  namespace: metallb-system
+  name: leader-election
+  labels:
+    app: metallb
+rules:
+- apiGroups: [""]
+  resources: ["endpoints"]
+  resourceNames: ["metallb-speaker"]
+  verbs: ["get", "update"]
+- apiGroups: [""]
+  resources: ["endpoints"]
+  verbs: ["create"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
 metadata:
   namespace: metallb-system
   name: config-watcher

@@ -115,6 +131,21 @@ roleRef:
   kind: Role
   name: config-watcher
 ---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  namespace: metallb-system
+  name: leader-election
+  labels:
+    app: metallb
+subjects:
+- kind: ServiceAccount
+  name: speaker
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: leader-election
+---
 apiVersion: apps/v1beta2
 kind: DaemonSet
 metadata:

@@ -142,7 +173,7 @@ spec:
       hostNetwork: true
       containers:
       - name: speaker
-        image: metallb/speaker:{{ metallb.version }}
+        image: metallb/speaker:v0.6.2
         imagePullPolicy: IfNotPresent
         args:
         - --port={{ metallb.port }}

@@ -199,7 +230,7 @@ spec:
         runAsUser: 65534 # nobody
       containers:
      - name: controller
-        image: metallb/controller:{{ metallb.version }}
+        image: metallb/controller:v0.6.2
         imagePullPolicy: IfNotPresent
         args:
         - --port={{ metallb.port }}

@@ -219,3 +250,5 @@ spec:
           readOnlyRootFilesystem: true

 ---
+
+
@@ -4,7 +4,7 @@
   vars:
     ansible_ssh_pipelining: false
   roles:
     - { role: bootstrap-os, tags: bootstrap-os}

 - hosts: all
   gather_facts: true

@@ -22,3 +22,4 @@
 - hosts: kube-master[0]
   roles:
     - { role: kubernetes-pv }
+
@@ -22,9 +22,9 @@ galaxy_info:
       - wheezy
       - jessie
  galaxy_tags:
    - system
    - networking
    - cloud
    - clustering
    - files
    - sharing
@@ -12,5 +12,5 @@
 - name: Ensure Gluster mount directories exist.
   file: "path={{ item }} state=directory mode=0775"
   with_items:
     - "{{ gluster_mount_dir }}"
   when: ansible_os_family in ["Debian","RedHat"] and groups['gfs-cluster'] is defined
@@ -22,9 +22,9 @@ galaxy_info:
       - wheezy
       - jessie
  galaxy_tags:
    - system
    - networking
    - cloud
    - clustering
    - files
    - sharing
@@ -33,24 +33,24 @@
 - name: Ensure Gluster brick and mount directories exist.
   file: "path={{ item }} state=directory mode=0775"
   with_items:
     - "{{ gluster_brick_dir }}"
     - "{{ gluster_mount_dir }}"

 - name: Configure Gluster volume.
   gluster_volume:
     state: present
     name: "{{ gluster_brick_name }}"
     brick: "{{ gluster_brick_dir }}"
     replicas: "{{ groups['gfs-cluster'] | length }}"
     cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
     host: "{{ inventory_hostname }}"
     force: yes
   run_once: true

 - name: Mount glusterfs to retrieve disk size
   mount:
     name: "{{ gluster_mount_dir }}"
     src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
     fstype: glusterfs
     opts: "defaults,_netdev"
     state: mounted

@@ -63,13 +63,13 @@

 - name: Set Gluster disk size to variable
   set_fact:
     gluster_disk_size_gb: "{{ (mounts_data.ansible_facts.ansible_mounts | selectattr('mount', 'equalto', gluster_mount_dir) | map(attribute='size_total') | first | int / (1024*1024*1024)) | int }}"
   when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

 - name: Create file on GlusterFS
   template:
     dest: "{{ gluster_mount_dir }}/.test-file.txt"
     src: test-file.txt
   when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

 - name: Unmount glusterfs

@@ -79,3 +79,4 @@
     src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
     state: unmounted
   when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
+
@@ -2,9 +2,9 @@
 - name: Kubernetes Apps | Lay Down k8s GlusterFS Endpoint and PV
   template: src={{item.file}} dest={{kube_config_dir}}/{{item.dest}}
   with_items:
     - { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
     - { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}
     - { file: glusterfs-kubernetes-endpoint-svc.json.j2, type: svc, dest: glusterfs-kubernetes-endpoint-svc.json}
   register: gluster_pv
   when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined and hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb is defined
@@ -1,3 +1,2 @@
----
 dependencies:
   - {role: kubernetes-pv/ansible, tags: apps}
@@ -2,23 +2,23 @@
 - name: "Load lvm kernel modules"
   become: true
   with_items:
     - "dm_snapshot"
     - "dm_mirror"
     - "dm_thin_pool"
   modprobe:
     name: "{{ item }}"
     state: "present"

 - name: "Install glusterfs mount utils (RedHat)"
   become: true
   yum:
     name: "glusterfs-fuse"
     state: "present"
   when: "ansible_os_family == 'RedHat'"

 - name: "Install glusterfs mount utils (Debian)"
   become: true
   apt:
     name: "glusterfs-client"
     state: "present"
   when: "ansible_os_family == 'Debian'"
@@ -1,4 +1,3 @@
----
 # Bootstrap heketi
 - name: "Get state of heketi service, deployment and pods."
   register: "initial_heketi_state"
@@ -18,7 +18,7 @@
     deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
   command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json"
   until:
     - "initial_heketi_state.stdout|from_json|json_query(pods_query) == 'True'"
     - "initial_heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
   retries: 60
   delay: 5
@@ -38,4 +38,4 @@
   vars: { volume: "{{ volume_information.stdout|from_json }}" }
   when: "volume.name == 'heketidbstorage'"
 - name: "Ensure heketi database volume exists."
-  assert: { that: "heketi_database_volume_created is defined", msg: "Heketi database volume does not exist." }
+  assert: { that: "heketi_database_volume_created is defined" , msg: "Heketi database volume does not exist." }
@@ -18,8 +18,8 @@
     deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
   command: "{{ bin_dir }}/kubectl get deployments,pods --selector=glusterfs --output=json"
   until:
     - "heketi_state.stdout|from_json|json_query(pods_query) == 'True'"
     - "heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
   retries: 60
   delay: 5
 - set_fact:
@@ -8,9 +8,7 @@
 - register: "clusterrolebinding_state"
   command: "{{bin_dir}}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
   changed_when: false
-- assert:
-    that: "clusterrolebinding_state.stdout != \"\""
-    msg: "Cluster role binding is not present."
+- assert: { that: "clusterrolebinding_state.stdout != \"\"", message: "Cluster role binding is not present." }

 - register: "secret_state"
   command: "{{bin_dir}}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"

@@ -26,6 +24,4 @@
 - register: "secret_state"
   command: "{{bin_dir}}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
   changed_when: false
-- assert:
-    that: "secret_state.stdout != \"\""
-    msg: "Heketi config secret is not present."
+- assert: { that: "secret_state.stdout != \"\"", message: "Heketi config secret is not present." }
@@ -8,7 +8,7 @@
   register: "heketi_service"
   changed_when: false
 - name: "Ensure heketi service is available."
   assert: { that: "heketi_service.stdout != \"\"" }
 - name: "Render storage class configuration."
   become: true
   vars:
@@ -69,7 +69,7 @@
           },
           "readinessProbe": {
             "timeoutSeconds": 3,
-            "initialDelaySeconds": 3,
+            "initialDelaySeconds": 60,
             "exec": {
               "command": [
                 "/bin/bash",

@@ -80,7 +80,7 @@
           },
           "livenessProbe": {
             "timeoutSeconds": 3,
-            "initialDelaySeconds": 10,
+            "initialDelaySeconds": 60,
             "exec": {
               "command": [
                 "/bin/bash",

@@ -106,7 +106,7 @@
           },
           "livenessProbe": {
             "timeoutSeconds": 3,
-            "initialDelaySeconds": 10,
+            "initialDelaySeconds": 30,
             "httpGet": {
               "path": "/hello",
               "port": 8080

@@ -122,7 +122,7 @@
           },
           "livenessProbe": {
             "timeoutSeconds": 3,
-            "initialDelaySeconds": 10,
+            "initialDelaySeconds": 30,
             "httpGet": {
               "path": "/hello",
               "port": 8080
@@ -2,15 +2,15 @@
 - name: "Install lvm utils (RedHat)"
   become: true
   yum:
     name: "lvm2"
     state: "present"
   when: "ansible_os_family == 'RedHat'"

 - name: "Install lvm utils (Debian)"
   become: true
   apt:
     name: "lvm2"
     state: "present"
   when: "ansible_os_family == 'Debian'"

 - name: "Get volume group information."

@@ -34,13 +34,13 @@
 - name: "Remove lvm utils (RedHat)"
   become: true
   yum:
     name: "lvm2"
     state: "absent"
   when: "ansible_os_family == 'RedHat'"

 - name: "Remove lvm utils (Debian)"
   become: true
   apt:
     name: "lvm2"
     state: "absent"
   when: "ansible_os_family == 'Debian'"
@@ -1,5 +0,0 @@
-# See the OWNERS docs at https://go.k8s.io/owners
-
-approvers:
-  - holmsten
-  - miouge1
@@ -1,11 +1,11 @@
 terraform {
   required_version = ">= 0.8.7"
 }

 provider "aws" {
   access_key = "${var.AWS_ACCESS_KEY_ID}"
   secret_key = "${var.AWS_SECRET_ACCESS_KEY}"
   region = "${var.AWS_DEFAULT_REGION}"
 }

 data "aws_availability_zones" "available" {}

@@ -18,30 +18,33 @@ data "aws_availability_zones" "available" {}
 module "aws-vpc" {
   source = "modules/vpc"

   aws_cluster_name = "${var.aws_cluster_name}"
   aws_vpc_cidr_block = "${var.aws_vpc_cidr_block}"
-  aws_avail_zones = "${slice(data.aws_availability_zones.available.names,0,2)}"
-  aws_cidr_subnets_private = "${var.aws_cidr_subnets_private}"
-  aws_cidr_subnets_public = "${var.aws_cidr_subnets_public}"
-  default_tags = "${var.default_tags}"
+  aws_avail_zones="${slice(data.aws_availability_zones.available.names,0,2)}"
+  aws_cidr_subnets_private="${var.aws_cidr_subnets_private}"
+  aws_cidr_subnets_public="${var.aws_cidr_subnets_public}"
+  default_tags="${var.default_tags}"

 }


 module "aws-elb" {
   source = "modules/elb"

-  aws_cluster_name = "${var.aws_cluster_name}"
-  aws_vpc_id = "${module.aws-vpc.aws_vpc_id}"
-  aws_avail_zones = "${slice(data.aws_availability_zones.available.names,0,2)}"
-  aws_subnet_ids_public = "${module.aws-vpc.aws_subnet_ids_public}"
+  aws_cluster_name="${var.aws_cluster_name}"
+  aws_vpc_id="${module.aws-vpc.aws_vpc_id}"
+  aws_avail_zones="${slice(data.aws_availability_zones.available.names,0,2)}"
+  aws_subnet_ids_public="${module.aws-vpc.aws_subnet_ids_public}"
   aws_elb_api_port = "${var.aws_elb_api_port}"
   k8s_secure_api_port = "${var.k8s_secure_api_port}"
-  default_tags = "${var.default_tags}"
+  default_tags="${var.default_tags}"

 }

 module "aws-iam" {
   source = "modules/iam"

-  aws_cluster_name = "${var.aws_cluster_name}"
+  aws_cluster_name="${var.aws_cluster_name}"
 }

 /*

@@ -50,44 +53,50 @@ module "aws-iam" {
 */

 resource "aws_instance" "bastion-server" {
   ami = "${data.aws_ami.distro.id}"
   instance_type = "${var.aws_bastion_size}"
   count = "${length(var.aws_cidr_subnets_public)}"
   associate_public_ip_address = true
   availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
   subnet_id = "${element(module.aws-vpc.aws_subnet_ids_public,count.index)}"

-  vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
+  vpc_security_group_ids = [ "${module.aws-vpc.aws_security_group}" ]

   key_name = "${var.AWS_SSH_KEY_NAME}"

   tags = "${merge(var.default_tags, map(
     "Name", "kubernetes-${var.aws_cluster_name}-bastion-${count.index}",
     "Cluster", "${var.aws_cluster_name}",
     "Role", "bastion-${var.aws_cluster_name}-${count.index}"
   ))}"
 }


 /*
 * Create K8s Master and worker nodes and etcd instances
 *
 */

 resource "aws_instance" "k8s-master" {
   ami = "${data.aws_ami.distro.id}"
   instance_type = "${var.aws_kube_master_size}"

   count = "${var.aws_kube_master_num}"

   availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
   subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"

-  vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
+  vpc_security_group_ids = [ "${module.aws-vpc.aws_security_group}" ]

   iam_instance_profile = "${module.aws-iam.kube-master-profile}"
   key_name = "${var.AWS_SSH_KEY_NAME}"

   tags = "${merge(var.default_tags, map(
     "Name", "kubernetes-${var.aws_cluster_name}-master${count.index}",
     "kubernetes.io/cluster/${var.aws_cluster_name}", "member",
     "Role", "master"

@@ -95,77 +104,88 @@ resource "aws_instance" "k8s-master" {
 }

 resource "aws_elb_attachment" "attach_master_nodes" {
   count = "${var.aws_kube_master_num}"
   elb = "${module.aws-elb.aws_elb_api_id}"
   instance = "${element(aws_instance.k8s-master.*.id,count.index)}"
 }


 resource "aws_instance" "k8s-etcd" {
   ami = "${data.aws_ami.distro.id}"
   instance_type = "${var.aws_etcd_size}"

   count = "${var.aws_etcd_num}"

   availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
   subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"

-  vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
+  vpc_security_group_ids = [ "${module.aws-vpc.aws_security_group}" ]

   key_name = "${var.AWS_SSH_KEY_NAME}"

   tags = "${merge(var.default_tags, map(
     "Name", "kubernetes-${var.aws_cluster_name}-etcd${count.index}",
     "kubernetes.io/cluster/${var.aws_cluster_name}", "member",
     "Role", "etcd"
   ))}"

 }


 resource "aws_instance" "k8s-worker" {
   ami = "${data.aws_ami.distro.id}"
   instance_type = "${var.aws_kube_worker_size}"

   count = "${var.aws_kube_worker_num}"

   availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
   subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"

-  vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
+  vpc_security_group_ids = [ "${module.aws-vpc.aws_security_group}" ]

   iam_instance_profile = "${module.aws-iam.kube-worker-profile}"
   key_name = "${var.AWS_SSH_KEY_NAME}"

   tags = "${merge(var.default_tags, map(
     "Name", "kubernetes-${var.aws_cluster_name}-worker${count.index}",
     "kubernetes.io/cluster/${var.aws_cluster_name}", "member",
     "Role", "worker"
   ))}"

 }


 /*
 * Create Kubespray Inventory File
 *
 */
 data "template_file" "inventory" {
   template = "${file("${path.module}/templates/inventory.tpl")}"

   vars {
     public_ip_address_bastion = "${join("\n",formatlist("bastion ansible_host=%s" , aws_instance.bastion-server.*.public_ip))}"
     connection_strings_master = "${join("\n",formatlist("%s ansible_host=%s",aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}"
     connection_strings_node = "${join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-worker.*.tags.Name, aws_instance.k8s-worker.*.private_ip))}"
     connection_strings_etcd = "${join("\n",formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.tags.Name, aws_instance.k8s-etcd.*.private_ip))}"
     list_master = "${join("\n",aws_instance.k8s-master.*.tags.Name)}"
     list_node = "${join("\n",aws_instance.k8s-worker.*.tags.Name)}"
     list_etcd = "${join("\n",aws_instance.k8s-etcd.*.tags.Name)}"
     elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
   }
 }

 resource "null_resource" "inventories" {
   provisioner "local-exec" {
     command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
   }

   triggers {
     template = "${data.template_file.inventory.rendered}"
   }

 }
@@ -1,54 +1,55 @@
 resource "aws_security_group" "aws-elb" {
   name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
   vpc_id = "${var.aws_vpc_id}"

   tags = "${merge(var.default_tags, map(
     "Name", "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
   ))}"
 }


 resource "aws_security_group_rule" "aws-allow-api-access" {
   type = "ingress"
   from_port = "${var.aws_elb_api_port}"
   to_port = "${var.k8s_secure_api_port}"
   protocol = "TCP"
   cidr_blocks = ["0.0.0.0/0"]
   security_group_id = "${aws_security_group.aws-elb.id}"
 }

 resource "aws_security_group_rule" "aws-allow-api-egress" {
   type = "egress"
   from_port = 0
   to_port = 65535
   protocol = "TCP"
   cidr_blocks = ["0.0.0.0/0"]
   security_group_id = "${aws_security_group.aws-elb.id}"
 }

 # Create a new AWS ELB for K8S API
 resource "aws_elb" "aws-elb-api" {
   name = "kubernetes-elb-${var.aws_cluster_name}"
   subnets = ["${var.aws_subnet_ids_public}"]
   security_groups = ["${aws_security_group.aws-elb.id}"]

   listener {
     instance_port = "${var.k8s_secure_api_port}"
     instance_protocol = "tcp"
     lb_port = "${var.aws_elb_api_port}"
     lb_protocol = "tcp"
   }

   health_check {
     healthy_threshold = 2
     unhealthy_threshold = 2
     timeout = 3
     target = "TCP:${var.k8s_secure_api_port}"
     interval = 30
   }

   cross_zone_load_balancing = true
   idle_timeout = 400
   connection_draining = true
   connection_draining_timeout = 400

   tags = "${merge(var.default_tags, map(
@@ -1,7 +1,7 @@
 output "aws_elb_api_id" {
   value = "${aws_elb.aws-elb-api.id}"
 }

 output "aws_elb_api_fqdn" {
   value = "${aws_elb.aws-elb-api.dns_name}"
 }
@@ -1,30 +1,33 @@
 variable "aws_cluster_name" {
   description = "Name of Cluster"
 }

 variable "aws_vpc_id" {
   description = "AWS VPC ID"
 }

 variable "aws_elb_api_port" {
   description = "Port for AWS ELB"
 }

 variable "k8s_secure_api_port" {
   description = "Secure Port of K8S API Server"
 }



 variable "aws_avail_zones" {
   description = "Availability Zones Used"
   type = "list"
 }


 variable "aws_subnet_ids_public" {
   description = "IDs of Public Subnets"
   type = "list"
 }

 variable "default_tags" {
   description = "Tags for all resources"
   type = "map"
 }
@@ -1,9 +1,8 @@
 #Add AWS Roles for Kubernetes

 resource "aws_iam_role" "kube-master" {
   name = "kubernetes-${var.aws_cluster_name}-master"
   assume_role_policy = <<EOF
 {
   "Version": "2012-10-17",
   "Statement": [

@@ -20,9 +19,8 @@ EOF
 }

 resource "aws_iam_role" "kube-worker" {
   name = "kubernetes-${var.aws_cluster_name}-node"
   assume_role_policy = <<EOF
 {
   "Version": "2012-10-17",
   "Statement": [

@@ -41,10 +39,9 @@ EOF
 #Add AWS Policies for Kubernetes

 resource "aws_iam_role_policy" "kube-master" {
   name = "kubernetes-${var.aws_cluster_name}-master"
   role = "${aws_iam_role.kube-master.id}"
   policy = <<EOF
 {
   "Version": "2012-10-17",
   "Statement": [

@@ -76,10 +73,9 @@ EOF
 }

 resource "aws_iam_role_policy" "kube-worker" {
   name = "kubernetes-${var.aws_cluster_name}-node"
   role = "${aws_iam_role.kube-worker.id}"
   policy = <<EOF
 {
   "Version": "2012-10-17",
   "Statement": [

@@ -128,14 +124,15 @@ resource "aws_iam_role_policy" "kube-worker" {
 EOF
 }


 #Create AWS Instance Profiles

 resource "aws_iam_instance_profile" "kube-master" {
   name = "kube_${var.aws_cluster_name}_master_profile"
   role = "${aws_iam_role.kube-master.name}"
 }

 resource "aws_iam_instance_profile" "kube-worker" {
   name = "kube_${var.aws_cluster_name}_node_profile"
   role = "${aws_iam_role.kube-worker.name}"
 }
@@ -1,7 +1,7 @@
 output "kube-master-profile" {
   value = "${aws_iam_instance_profile.kube-master.name }"
 }

 output "kube-worker-profile" {
   value = "${aws_iam_instance_profile.kube-worker.name }"
 }
@@ -1,3 +1,3 @@
 variable "aws_cluster_name" {
   description = "Name of Cluster"
 }
@@ -1,53 +1,58 @@
|
|||||||
|
|
||||||
resource "aws_vpc" "cluster-vpc" {
|
resource "aws_vpc" "cluster-vpc" {
|
||||||
cidr_block = "${var.aws_vpc_cidr_block}"
|
cidr_block = "${var.aws_vpc_cidr_block}"
|
||||||
|
|
||||||
#DNS Related Entries
|
#DNS Related Entries
|
||||||
enable_dns_support = true
|
enable_dns_support = true
|
||||||
enable_dns_hostnames = true
|
enable_dns_hostnames = true
|
||||||
|
|
||||||
tags = "${merge(var.default_tags, map(
|
tags = "${merge(var.default_tags, map(
|
||||||
"Name", "kubernetes-${var.aws_cluster_name}-vpc"
|
"Name", "kubernetes-${var.aws_cluster_name}-vpc"
|
||||||
))}"
|
))}"
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
resource "aws_eip" "cluster-nat-eip" {
|
resource "aws_eip" "cluster-nat-eip" {
|
||||||
count = "${length(var.aws_cidr_subnets_public)}"
|
count = "${length(var.aws_cidr_subnets_public)}"
|
||||||
vpc = true
|
vpc = true
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
resource "aws_internet_gateway" "cluster-vpc-internetgw" {
|
resource "aws_internet_gateway" "cluster-vpc-internetgw" {
|
||||||
vpc_id = "${aws_vpc.cluster-vpc.id}"
|
vpc_id = "${aws_vpc.cluster-vpc.id}"
|
||||||
|
|
||||||
|
|
||||||
tags = "${merge(var.default_tags, map(
|
tags = "${merge(var.default_tags, map(
|
||||||
"Name", "kubernetes-${var.aws_cluster_name}-internetgw"
|
"Name", "kubernetes-${var.aws_cluster_name}-internetgw"
|
||||||
))}"
|
))}"
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "aws_subnet" "cluster-vpc-subnets-public" {
|
resource "aws_subnet" "cluster-vpc-subnets-public" {
|
||||||
vpc_id = "${aws_vpc.cluster-vpc.id}"
|
vpc_id = "${aws_vpc.cluster-vpc.id}"
|
||||||
count = "${length(var.aws_avail_zones)}"
|
count="${length(var.aws_avail_zones)}"
|
||||||
availability_zone = "${element(var.aws_avail_zones, count.index)}"
|
availability_zone = "${element(var.aws_avail_zones, count.index)}"
|
||||||
cidr_block = "${element(var.aws_cidr_subnets_public, count.index)}"
|
cidr_block = "${element(var.aws_cidr_subnets_public, count.index)}"
|
||||||
|
|
||||||
tags = "${merge(var.default_tags, map(
|
tags = "${merge(var.default_tags, map(
|
||||||
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public",
|
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public",
|
||||||
"kubernetes.io/cluster/${var.aws_cluster_name}", "member"
|
"kubernetes.io/cluster/${var.aws_cluster_name}", "member"
|
||||||
))}"
|
))}"
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "aws_nat_gateway" "cluster-nat-gateway" {
|
resource "aws_nat_gateway" "cluster-nat-gateway" {
|
||||||
count = "${length(var.aws_cidr_subnets_public)}"
|
count = "${length(var.aws_cidr_subnets_public)}"
|
||||||
allocation_id = "${element(aws_eip.cluster-nat-eip.*.id, count.index)}"
|
allocation_id = "${element(aws_eip.cluster-nat-eip.*.id, count.index)}"
|
||||||
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-public.*.id, count.index)}"
|
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-public.*.id, count.index)}"
|
||||||
|
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "aws_subnet" "cluster-vpc-subnets-private" {
|
resource "aws_subnet" "cluster-vpc-subnets-private" {
|
||||||
vpc_id = "${aws_vpc.cluster-vpc.id}"
|
vpc_id = "${aws_vpc.cluster-vpc.id}"
|
||||||
count = "${length(var.aws_avail_zones)}"
|
count="${length(var.aws_avail_zones)}"
|
||||||
availability_zone = "${element(var.aws_avail_zones, count.index)}"
|
availability_zone = "${element(var.aws_avail_zones, count.index)}"
|
||||||
cidr_block = "${element(var.aws_cidr_subnets_private, count.index)}"
|
cidr_block = "${element(var.aws_cidr_subnets_private, count.index)}"
|
||||||
|
|
||||||
tags = "${merge(var.default_tags, map(
|
tags = "${merge(var.default_tags, map(
|
||||||
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
|
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
|
||||||
))}"
|
))}"
|
||||||
}
|
}
|
||||||
@@ -57,78 +62,81 @@ resource "aws_subnet" "cluster-vpc-subnets-private" {
#TODO: Do we need two routing tables for each subnet for redundancy or is one enough?

resource "aws_route_table" "kubernetes-public" {
  vpc_id = "${aws_vpc.cluster-vpc.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.cluster-vpc-internetgw.id}"
  }

  tags = "${merge(var.default_tags, map(
    "Name", "kubernetes-${var.aws_cluster_name}-routetable-public"
  ))}"
}

resource "aws_route_table" "kubernetes-private" {
  count = "${length(var.aws_cidr_subnets_private)}"
  vpc_id = "${aws_vpc.cluster-vpc.id}"

  route {
    cidr_block = "0.0.0.0/0"
    nat_gateway_id = "${element(aws_nat_gateway.cluster-nat-gateway.*.id, count.index)}"
  }

  tags = "${merge(var.default_tags, map(
    "Name", "kubernetes-${var.aws_cluster_name}-routetable-private-${count.index}"
  ))}"
}

resource "aws_route_table_association" "kubernetes-public" {
  count = "${length(var.aws_cidr_subnets_public)}"
  subnet_id = "${element(aws_subnet.cluster-vpc-subnets-public.*.id,count.index)}"
  route_table_id = "${aws_route_table.kubernetes-public.id}"
}

resource "aws_route_table_association" "kubernetes-private" {
  count = "${length(var.aws_cidr_subnets_private)}"
  subnet_id = "${element(aws_subnet.cluster-vpc-subnets-private.*.id,count.index)}"
  route_table_id = "${element(aws_route_table.kubernetes-private.*.id,count.index)}"
}

#Kubernetes Security Groups

resource "aws_security_group" "kubernetes" {
  name = "kubernetes-${var.aws_cluster_name}-securitygroup"
  vpc_id = "${aws_vpc.cluster-vpc.id}"

  tags = "${merge(var.default_tags, map(
    "Name", "kubernetes-${var.aws_cluster_name}-securitygroup"
  ))}"
}

resource "aws_security_group_rule" "allow-all-ingress" {
  type = "ingress"
  from_port = 0
  to_port = 65535
  protocol = "-1"
  cidr_blocks = ["${var.aws_vpc_cidr_block}"]
  security_group_id = "${aws_security_group.kubernetes.id}"
}

resource "aws_security_group_rule" "allow-all-egress" {
  type = "egress"
  from_port = 0
  to_port = 65535
  protocol = "-1"
  cidr_blocks = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.kubernetes.id}"
}

resource "aws_security_group_rule" "allow-ssh-connections" {
  type = "ingress"
  from_port = 22
  to_port = 22
  protocol = "TCP"
  cidr_blocks = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.kubernetes.id}"
}
@@ -1,19 +1,21 @@
output "aws_vpc_id" {
  value = "${aws_vpc.cluster-vpc.id}"
}

output "aws_subnet_ids_private" {
  value = ["${aws_subnet.cluster-vpc-subnets-private.*.id}"]
}

output "aws_subnet_ids_public" {
  value = ["${aws_subnet.cluster-vpc-subnets-public.*.id}"]
}

output "aws_security_group" {
  value = ["${aws_security_group.kubernetes.*.id}"]
}

output "default_tags" {
  value = "${var.default_tags}"
}
@@ -1,27 +1,29 @@
variable "aws_vpc_cidr_block" {
  description = "CIDR Blocks for AWS VPC"
}

variable "aws_cluster_name" {
  description = "Name of Cluster"
}

variable "aws_avail_zones" {
  description = "AWS Availability Zones Used"
  type = "list"
}

variable "aws_cidr_subnets_private" {
  description = "CIDR Blocks for private subnets in Availability zones"
  type = "list"
}

variable "aws_cidr_subnets_public" {
  description = "CIDR Blocks for public subnets in Availability zones"
  type = "list"
}

variable "default_tags" {
  description = "Default tags for all resources"
  type = "map"
}
@@ -1,27 +1,28 @@
output "bastion_ip" {
  value = "${join("\n", aws_instance.bastion-server.*.public_ip)}"
}

output "masters" {
  value = "${join("\n", aws_instance.k8s-master.*.private_ip)}"
}

output "workers" {
  value = "${join("\n", aws_instance.k8s-worker.*.private_ip)}"
}

output "etcd" {
  value = "${join("\n", aws_instance.k8s-etcd.*.private_ip)}"
}

output "aws_elb_api_fqdn" {
  value = "${module.aws-elb.aws_elb_api_fqdn}:${var.aws_elb_api_port}"
}

output "inventory" {
  value = "${data.template_file.inventory.rendered}"
}

output "default_tags" {
  value = "${var.default_tags}"
}
@@ -1,53 +0,0 @@
-#Global Vars
-aws_cluster_name = "devtest"
-
-#VPC Vars
-aws_vpc_cidr_block = "10.250.192.0/18"
-aws_cidr_subnets_private = ["10.250.192.0/20", "10.250.208.0/20"]
-aws_cidr_subnets_public = ["10.250.224.0/20", "10.250.240.0/20"]
-
-#Bastion Host
-aws_bastion_size = "t2.medium"
-
-#Kubernetes Cluster
-aws_kube_master_num = 3
-aws_kube_master_size = "t2.medium"
-aws_etcd_num = 3
-aws_etcd_size = "t2.medium"
-aws_kube_worker_num = 4
-aws_kube_worker_size = "t2.medium"
-
-#Settings AWS ELB
-aws_elb_api_port = 6443
-k8s_secure_api_port = 6443
-kube_insecure_apiserver_address = "0.0.0.0"
-
-default_tags = {
-# Env = "devtest" # Product = "kubernetes"
-}
-
-inventory_file = "../../../inventory/hosts"
-
-## Credentials
-#AWS Access Key
-AWS_ACCESS_KEY_ID = ""
-#AWS Secret Key
-AWS_SECRET_ACCESS_KEY = ""
-#EC2 SSH Key Name
-AWS_SSH_KEY_NAME = ""
-#AWS Region
-AWS_DEFAULT_REGION = "eu-central-1"
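With this sample tfvars file removed, the same Terraform variables can still be supplied on the command line. A minimal sketch, not part of the change itself; the variable names come from the removed file above, the values are placeholders:

```ShellSession
$ terraform plan \
    -var 'AWS_ACCESS_KEY_ID=<access key id>' \
    -var 'AWS_SECRET_ACCESS_KEY=<secret key>' \
    -var 'AWS_SSH_KEY_NAME=<ec2 keypair name>' \
    -var 'AWS_DEFAULT_REGION=eu-central-1' \
    contrib/terraform/aws
```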
@@ -1 +0,0 @@
-../../../../inventory/sample/group_vars
@@ -44,18 +44,18 @@ variable "aws_vpc_cidr_block" {

variable "aws_cidr_subnets_private" {
  description = "CIDR Blocks for private subnets in Availability Zones"
  type = "list"
}

variable "aws_cidr_subnets_public" {
  description = "CIDR Blocks for public subnets in Availability Zones"
  type = "list"
}

//AWS EC2 Settings

variable "aws_bastion_size" {
  description = "EC2 Instance Size of Bastion Host"
}

/*
@@ -64,27 +64,27 @@ variable "aws_bastion_size" {
* AWS Availability Zones without an remainder.
*/
variable "aws_kube_master_num" {
  description = "Number of Kubernetes Master Nodes"
}

variable "aws_kube_master_size" {
  description = "Instance size of Kube Master Nodes"
}

variable "aws_etcd_num" {
  description = "Number of etcd Nodes"
}

variable "aws_etcd_size" {
  description = "Instance size of etcd Nodes"
}

variable "aws_kube_worker_num" {
  description = "Number of Kubernetes Worker Nodes"
}

variable "aws_kube_worker_size" {
  description = "Instance size of Kubernetes Worker Nodes"
}

/*
@@ -92,16 +92,16 @@ variable "aws_kube_worker_size" {
*
*/
variable "aws_elb_api_port" {
  description = "Port for AWS ELB"
}

variable "k8s_secure_api_port" {
  description = "Secure Port of K8S API Server"
}

variable "default_tags" {
  description = "Default tags for all resources"
  type = "map"
}

variable "inventory_file" {
@@ -10,7 +10,7 @@ most modern installs of OpenStack that support the basic services.
### Known compatible public clouds
- [Auro](https://auro.io/)
-- [Betacloud](https://www.betacloud.io/)
+- [BetaCloud](https://www.betacloud.io/)
- [CityCloud](https://www.citycloud.com/)
- [DreamHost](https://www.dreamhost.com/cloud/computing/)
- [ELASTX](https://elastx.se/)
@@ -51,8 +51,8 @@ floating IP addresses or not.
Note that the Ansible script will report an invalid configuration if you wind up
with an even number of etcd instances since that is not a valid configuration. This
restriction includes standalone etcd nodes that are deployed in a cluster along with
master nodes with etcd replicas. As an example, if you have three master nodes with
etcd replicas and three standalone etcd nodes, the script will fail since there are
now six total etcd replicas.

### GlusterFS
@@ -109,7 +109,6 @@ Create an inventory directory for your cluster by copying the existing sample an
$ cp -LRp contrib/terraform/openstack/sample-inventory inventory/$CLUSTER
$ cd inventory/$CLUSTER
$ ln -s ../../contrib/terraform/openstack/hosts
-$ ln -s ../../contrib
```

This will be the base for subsequent Terraform commands.
@@ -117,8 +116,8 @@ This will be the base for subsequent Terraform commands.
#### OpenStack access and credentials

No provider variables are hardcoded inside `variables.tf` because Terraform
supports various authentication methods for OpenStack: the older script and
environment method (using `openrc`) as well as a newer declarative method, and
different OpenStack environments may support Identity API version 2 or 3.

These are examples and may vary depending on your OpenStack cloud provider,
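As a minimal sketch of the older script-and-environment method, assuming you have downloaded your provider's RC file (the filename below is only an example):

```ShellSession
$ source ./my-project-openrc.sh    # exports OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, ...
$ openstack token issue            # quick check that the credentials work
```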
@@ -229,7 +228,7 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
|`dns_nameservers`| An array of DNS name server names to be used by hosts in the internal subnet. |
|`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
|`external_net` | UUID of the external network that will be routed to |
-|`flavor_k8s_master`,`flavor_k8s_node`,`flavor_etcd`, `flavor_bastion`,`flavor_gfs_node` | Flavor depends on your openstack installation, you can get available flavor IDs through `openstack flavor list` |
+|`flavor_k8s_master`,`flavor_k8s_node`,`flavor_etcd`, `flavor_bastion`,`flavor_gfs_node` | Flavor depends on your openstack installation, you can get available flavor IDs through `nova flavor-list` |
|`image`,`image_gfs` | Name of the image to use in provisioning the compute resources. Should already be loaded into glance. |
|`ssh_user`,`ssh_user_gfs` | The username to ssh into the image with. This usually depends on the image you have selected |
|`public_key_path` | Path on your local workstation to the public key file you wish to use in creating the key pairs |
@@ -243,8 +242,6 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
|`supplementary_master_groups` | To add ansible groups to the masters, such as `kube-node` for tainting them as nodes, empty by default. |
|`supplementary_node_groups` | To add ansible groups to the nodes, such as `kube-ingress` for running ingress controller pods, empty by default. |
|`bastion_allowed_remote_ips` | List of CIDR allowed to initiate a SSH connection, `["0.0.0.0/0"]` by default |
-|`master_allowed_remote_ips` | List of CIDR blocks allowed to initiate an API connection, `["0.0.0.0/0"]` by default |
-|`k8s_allowed_remote_ips` | List of CIDR allowed to initiate a SSH connection, empty by default |
|`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |

#### Terraform state files
@@ -361,7 +358,7 @@ If it fails try to connect manually via SSH. It could be something as simple as

### Configure cluster variables

-Edit `inventory/$CLUSTER/group_vars/all/all.yml`:
+Edit `inventory/$CLUSTER/group_vars/all.yml`:
- **bin_dir**:
```
# Directory where the binaries will be installed
@@ -374,7 +371,7 @@ bin_dir: /opt/bin
```
cloud_provider: openstack
```
-Edit `inventory/$CLUSTER/group_vars/k8s-cluster/k8s-cluster.yml`:
+Edit `inventory/$CLUSTER/group_vars/k8s-cluster.yml`:
- Set variable **kube_network_plugin** to your desired networking plugin.
- **flannel** works out-of-the-box
- **calico** requires [configuring OpenStack Neutron ports](/docs/openstack.md) to allow service and pod subnets
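A quick, purely illustrative way to confirm both edits landed in the expected files before running the playbook:

```ShellSession
$ grep -rE 'cloud_provider|kube_network_plugin' inventory/$CLUSTER/group_vars/
```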
@@ -418,8 +415,8 @@ ssh [os-user]@[master-ip] sudo ls /etc/kubernetes/ssl/
```
4. Get `admin`'s certificates and keys:
```
-ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-kube-master-1-key.pem > admin-key.pem
+ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1-key.pem > admin-key.pem
-ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-kube-master-1.pem > admin.pem
+ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1.pem > admin.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/ca.pem > ca.pem
```
5. Configure kubectl:
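The diff cuts the kubectl step short here. As a rough sketch of what step 5 typically looks like with the files fetched above (the cluster name and API address are placeholders, not values taken from this change):

```ShellSession
$ kubectl config set-cluster my-cluster --server=https://[master-ip]:6443 \
    --certificate-authority=ca.pem
$ kubectl config set-credentials admin --client-certificate=admin.pem \
    --client-key=admin-key.pem
$ kubectl config set-context my-cluster --cluster=my-cluster --user=admin
$ kubectl config use-context my-cluster
$ kubectl get nodes
```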
@@ -1,7 +1,3 @@
-provider "openstack" {
-  version = "~> 1.17"
-}
-
module "network" {
  source = "modules/network"
@@ -53,13 +49,9 @@ module "compute" {
  network_name = "${var.network_name}"
  flavor_bastion = "${var.flavor_bastion}"
  k8s_master_fips = "${module.ips.k8s_master_fips}"
-  k8s_master_no_etcd_fips = "${module.ips.k8s_master_no_etcd_fips}"
  k8s_node_fips = "${module.ips.k8s_node_fips}"
  bastion_fips = "${module.ips.bastion_fips}"
  bastion_allowed_remote_ips = "${var.bastion_allowed_remote_ips}"
-  master_allowed_remote_ips = "${var.master_allowed_remote_ips}"
-  k8s_allowed_remote_ips = "${var.k8s_allowed_remote_ips}"
-  k8s_allowed_egress_ips = "${var.k8s_allowed_egress_ips}"
  supplementary_master_groups = "${var.supplementary_master_groups}"
  supplementary_node_groups = "${var.supplementary_node_groups}"
  worker_allowed_ports = "${var.worker_allowed_ports}"
@@ -80,7 +72,7 @@ output "router_id" {
}

output "k8s_master_fips" {
-  value = "${concat(module.ips.k8s_master_fips, module.ips.k8s_master_no_etcd_fips)}"
+  value = "${module.ips.k8s_master_fips}"
}

output "k8s_node_fips" {
@@ -4,86 +4,61 @@ resource "openstack_compute_keypair_v2" "k8s" {
}

resource "openstack_networking_secgroup_v2" "k8s_master" {
  name = "${var.cluster_name}-k8s-master"
  description = "${var.cluster_name} - Kubernetes Master"
-  delete_default_rules = true
}

resource "openstack_networking_secgroup_rule_v2" "k8s_master" {
-  count = "${length(var.master_allowed_remote_ips)}"
  direction = "ingress"
  ethertype = "IPv4"
  protocol = "tcp"
  port_range_min = "6443"
  port_range_max = "6443"
-  remote_ip_prefix = "${var.master_allowed_remote_ips[count.index]}"
+  remote_ip_prefix = "0.0.0.0/0"
  security_group_id = "${openstack_networking_secgroup_v2.k8s_master.id}"
}

resource "openstack_networking_secgroup_v2" "bastion" {
  name = "${var.cluster_name}-bastion"
-  count = "${var.number_of_bastions ? 1 : 0}"
  description = "${var.cluster_name} - Bastion Server"
-  delete_default_rules = true
}

resource "openstack_networking_secgroup_rule_v2" "bastion" {
-  count = "${var.number_of_bastions ? length(var.bastion_allowed_remote_ips) : 0}"
+  count = "${length(var.bastion_allowed_remote_ips)}"
  direction = "ingress"
  ethertype = "IPv4"
  protocol = "tcp"
  port_range_min = "22"
  port_range_max = "22"
  remote_ip_prefix = "${var.bastion_allowed_remote_ips[count.index]}"
  security_group_id = "${openstack_networking_secgroup_v2.bastion.id}"
}

resource "openstack_networking_secgroup_v2" "k8s" {
  name = "${var.cluster_name}-k8s"
  description = "${var.cluster_name} - Kubernetes"
-  delete_default_rules = true
}

resource "openstack_networking_secgroup_rule_v2" "k8s" {
  direction = "ingress"
  ethertype = "IPv4"
  remote_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
  security_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
}

-resource "openstack_networking_secgroup_rule_v2" "k8s_allowed_remote_ips" {
-  count = "${length(var.k8s_allowed_remote_ips)}"
-  direction = "ingress"
-  ethertype = "IPv4"
-  protocol = "tcp"
-  port_range_min = "22"
-  port_range_max = "22"
-  remote_ip_prefix = "${var.k8s_allowed_remote_ips[count.index]}"
-  security_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
-}
-
-resource "openstack_networking_secgroup_rule_v2" "egress" {
-  count = "${length(var.k8s_allowed_egress_ips)}"
-  direction = "egress"
-  ethertype = "IPv4"
-  remote_ip_prefix = "${var.k8s_allowed_egress_ips[count.index]}"
-  security_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
-}

resource "openstack_networking_secgroup_v2" "worker" {
  name = "${var.cluster_name}-k8s-worker"
  description = "${var.cluster_name} - Kubernetes worker nodes"
-  delete_default_rules = true
}

resource "openstack_networking_secgroup_rule_v2" "worker" {
  count = "${length(var.worker_allowed_ports)}"
  direction = "ingress"
  ethertype = "IPv4"
  protocol = "${lookup(var.worker_allowed_ports[count.index], "protocol", "tcp")}"
  port_range_min = "${lookup(var.worker_allowed_ports[count.index], "port_range_min")}"
  port_range_max = "${lookup(var.worker_allowed_ports[count.index], "port_range_max")}"
  remote_ip_prefix = "${lookup(var.worker_allowed_ports[count.index], "remote_ip_prefix", "0.0.0.0/0")}"
  security_group_id = "${openstack_networking_secgroup_v2.worker.id}"
}
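After an apply, the resulting groups can be inspected with the OpenStack CLI; a small illustrative check (the cluster name is a placeholder):

```ShellSession
$ openstack security group list | grep my-cluster
$ openstack security group rule list my-cluster-k8s-master
```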
@@ -100,6 +75,7 @@ resource "openstack_compute_instance_v2" "bastion" {
  security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
    "${openstack_networking_secgroup_v2.bastion.name}",
+    "default",
  ]

  metadata = {
@@ -111,22 +87,25 @@ resource "openstack_compute_instance_v2" "bastion" {
  provisioner "local-exec" {
    command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${var.bastion_fips[0]}/ > contrib/terraform/group_vars/no-floating.yml"
  }
}

resource "openstack_compute_instance_v2" "k8s_master" {
  name = "${var.cluster_name}-k8s-master-${count.index+1}"
  count = "${var.number_of_k8s_masters}"
  availability_zone = "${element(var.az_list, count.index)}"
  image_name = "${var.image}"
  flavor_id = "${var.flavor_k8s_master}"
  key_pair = "${openstack_compute_keypair_v2.k8s.name}"

  network {
    name = "${var.network_name}"
  }

  security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
+    "${openstack_networking_secgroup_v2.bastion.name}",
    "${openstack_networking_secgroup_v2.k8s.name}",
+    "default",
  ]

  metadata = {
@@ -138,21 +117,23 @@ resource "openstack_compute_instance_v2" "k8s_master" {
  provisioner "local-exec" {
    command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml"
  }
}

resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
  name = "${var.cluster_name}-k8s-master-ne-${count.index+1}"
  count = "${var.number_of_k8s_masters_no_etcd}"
  availability_zone = "${element(var.az_list, count.index)}"
  image_name = "${var.image}"
  flavor_id = "${var.flavor_k8s_master}"
  key_pair = "${openstack_compute_keypair_v2.k8s.name}"

  network {
    name = "${var.network_name}"
  }

  security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
+    "${openstack_networking_secgroup_v2.bastion.name}",
    "${openstack_networking_secgroup_v2.k8s.name}",
  ]
@@ -165,15 +146,16 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
  provisioner "local-exec" {
    command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml"
  }
}

resource "openstack_compute_instance_v2" "etcd" {
  name = "${var.cluster_name}-etcd-${count.index+1}"
  count = "${var.number_of_etcd}"
  availability_zone = "${element(var.az_list, count.index)}"
  image_name = "${var.image}"
  flavor_id = "${var.flavor_etcd}"
  key_pair = "${openstack_compute_keypair_v2.k8s.name}"

  network {
    name = "${var.network_name}"
@@ -186,15 +168,16 @@ resource "openstack_compute_instance_v2" "etcd" {
    kubespray_groups = "etcd,vault,no-floating"
    depends_on = "${var.network_id}"
  }
}

resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
  name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
  count = "${var.number_of_k8s_masters_no_floating_ip}"
  availability_zone = "${element(var.az_list, count.index)}"
  image_name = "${var.image}"
  flavor_id = "${var.flavor_k8s_master}"
  key_pair = "${openstack_compute_keypair_v2.k8s.name}"

  network {
    name = "${var.network_name}"
@@ -202,6 +185,7 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
  security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
    "${openstack_networking_secgroup_v2.k8s.name}",
+    "default",
  ]

  metadata = {
@@ -209,15 +193,16 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
    kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
    depends_on = "${var.network_id}"
  }
}

resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
  name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
  count = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
  availability_zone = "${element(var.az_list, count.index)}"
  image_name = "${var.image}"
  flavor_id = "${var.flavor_k8s_master}"
  key_pair = "${openstack_compute_keypair_v2.k8s.name}"

  network {
    name = "${var.network_name}"
@@ -232,22 +217,25 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
    kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
    depends_on = "${var.network_id}"
  }
}

resource "openstack_compute_instance_v2" "k8s_node" {
  name = "${var.cluster_name}-k8s-node-${count.index+1}"
  count = "${var.number_of_k8s_nodes}"
  availability_zone = "${element(var.az_list, count.index)}"
  image_name = "${var.image}"
  flavor_id = "${var.flavor_k8s_node}"
  key_pair = "${openstack_compute_keypair_v2.k8s.name}"

  network {
    name = "${var.network_name}"
  }

  security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
+    "${openstack_networking_secgroup_v2.bastion.name}",
    "${openstack_networking_secgroup_v2.worker.name}",
+    "default",
  ]

  metadata = {
@@ -259,15 +247,16 @@ resource "openstack_compute_instance_v2" "k8s_node" {
  provisioner "local-exec" {
    command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_node_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml"
  }
}

resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
  name = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
  count = "${var.number_of_k8s_nodes_no_floating_ip}"
  availability_zone = "${element(var.az_list, count.index)}"
  image_name = "${var.image}"
  flavor_id = "${var.flavor_k8s_node}"
  key_pair = "${openstack_compute_keypair_v2.k8s.name}"

  network {
    name = "${var.network_name}"
@@ -275,6 +264,7 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
  security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
    "${openstack_networking_secgroup_v2.worker.name}",
+    "default",
  ]

  metadata = {
@@ -282,6 +272,7 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
    kubespray_groups = "kube-node,k8s-cluster,no-floating,${var.supplementary_node_groups}"
    depends_on = "${var.network_id}"
  }
}

resource "openstack_compute_floatingip_associate_v2" "bastion" {
@@ -296,12 +287,6 @@ resource "openstack_compute_floatingip_associate_v2" "k8s_master" {
  floating_ip = "${var.k8s_master_fips[count.index]}"
}

-resource "openstack_compute_floatingip_associate_v2" "k8s_master_no_etcd" {
-  count = "${var.number_of_k8s_masters_no_etcd}"
-  instance_id = "${element(openstack_compute_instance_v2.k8s_master_no_etcd.*.id, count.index)}"
-  floating_ip = "${var.k8s_master_no_etcd_fips[count.index]}"
-}
-
resource "openstack_compute_floatingip_associate_v2" "k8s_node" {
  count = "${var.number_of_k8s_nodes}"
  floating_ip = "${var.k8s_node_fips[count.index]}"
@@ -316,24 +301,27 @@ resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
}

resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
  name = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
  count = "${var.number_of_gfs_nodes_no_floating_ip}"
  availability_zone = "${element(var.az_list, count.index)}"
  image_name = "${var.image_gfs}"
  flavor_id = "${var.flavor_gfs_node}"
  key_pair = "${openstack_compute_keypair_v2.k8s.name}"

  network {
    name = "${var.network_name}"
  }

-  security_groups = ["${openstack_networking_secgroup_v2.k8s.name}"]
+  security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
+    "default",
+  ]

  metadata = {
    ssh_user = "${var.ssh_user_gfs}"
    kubespray_groups = "gfs-cluster,network-storage,no-floating"
    depends_on = "${var.network_id}"
  }
}

resource "openstack_compute_volume_attach_v2" "glusterfs_volume" {
@@ -54,10 +54,6 @@ variable "k8s_master_fips" {
  type = "list"
}

-variable "k8s_master_no_etcd_fips" {
-  type = "list"
-}
-
variable "k8s_node_fips" {
  type = "list"
}
@@ -70,18 +66,6 @@ variable "bastion_allowed_remote_ips" {
  type = "list"
}

-variable "master_allowed_remote_ips" {
-  type = "list"
-}
-
-variable "k8s_allowed_remote_ips" {
-  type = "list"
-}
-
-variable "k8s_allowed_egress_ips" {
-  type = "list"
-}
-
variable "supplementary_master_groups" {
  default = ""
}
@@ -10,12 +10,6 @@ resource "openstack_networking_floatingip_v2" "k8s_master" {
  depends_on = ["null_resource.dummy_dependency"]
}

-resource "openstack_networking_floatingip_v2" "k8s_master_no_etcd" {
-  count = "${var.number_of_k8s_masters_no_etcd}"
-  pool = "${var.floatingip_pool}"
-  depends_on = ["null_resource.dummy_dependency"]
-}
-
resource "openstack_networking_floatingip_v2" "k8s_node" {
  count = "${var.number_of_k8s_nodes}"
  pool = "${var.floatingip_pool}"
@@ -2,10 +2,6 @@ output "k8s_master_fips" {
  value = ["${openstack_networking_floatingip_v2.k8s_master.*.address}"]
}

-output "k8s_master_no_etcd_fips" {
-  value = ["${openstack_networking_floatingip_v2.k8s_master_no_etcd.*.address}"]
-}
-
output "k8s_node_fips" {
  value = ["${openstack_networking_floatingip_v2.k8s_node.*.address}"]
}
@@ -4,6 +4,7 @@ output "router_id" {
output "router_internal_port_id" {
  value = "${element(concat(openstack_networking_router_interface_v2.k8s.*.id, list("")), 0)}"
}

output "subnet_id" {
@@ -6,13 +6,11 @@ public_key_path = "~/.ssh/id_rsa.pub"
# image to use for bastion, masters, standalone etcd instances, and nodes
image = "<image name>"

# user on the node (ex. core on Container Linux, ubuntu on Ubuntu, etc.)
ssh_user = "<cloud-provisioned user>"

# 0|1 bastion nodes
number_of_bastions = 0

#flavor_bastion = "<UUID>"

# standalone etcds
@@ -20,20 +18,14 @@ number_of_etcd = 0
# masters
number_of_k8s_masters = 1

number_of_k8s_masters_no_etcd = 0

number_of_k8s_masters_no_floating_ip = 0

number_of_k8s_masters_no_floating_ip_no_etcd = 0

flavor_k8s_master = "<UUID>"

# nodes
number_of_k8s_nodes = 2

number_of_k8s_nodes_no_floating_ip = 4

#flavor_k8s_node = "<UUID>"

# GlusterFS
@@ -48,11 +40,7 @@ number_of_k8s_nodes_no_floating_ip = 4
# networking
network_name = "<network>"

external_net = "<UUID>"

subnet_cidr = "<cidr>"

floatingip_pool = "<pool>"

bastion_allowed_remote_ips = ["0.0.0.0/0"]
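Once the placeholder values above are filled in, the cluster is normally provisioned from the inventory directory. A sketch of the usual invocation, assuming the inventory layout created earlier (the relative paths may differ in your checkout):

```ShellSession
$ cd inventory/$CLUSTER
$ terraform init ../../contrib/terraform/openstack
$ terraform apply -var-file=cluster.tfvars ../../contrib/terraform/openstack
```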
@@ -4,8 +4,8 @@ variable "cluster_name" {
variable "az_list" {
  description = "List of Availability Zones available in your OpenStack cluster"
  type = "list"
  default = ["nova"]
}

variable "number_of_bastions" {
@@ -74,27 +74,27 @@ variable "ssh_user_gfs" {
}

variable "flavor_bastion" {
-  description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
+  description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
  default = 3
}

variable "flavor_k8s_master" {
-  description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
+  description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
  default = 3
}

variable "flavor_k8s_node" {
-  description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
+  description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
  default = 3
}

variable "flavor_etcd" {
-  description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
+  description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
  default = 3
}

variable "flavor_gfs_node" {
-  description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
+  description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
  default = 3
}
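Either CLI spelling returns the same information; a short illustrative lookup of a flavor ID (the output shown is only an example and will vary per cloud):

```ShellSession
$ openstack flavor list
+----+-----------+------+------+-----------+-------+-----------+
| ID | Name      |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+------+------+-----------+-------+-----------+
| 3  | m1.medium | 4096 |   40 |         0 |     2 | True      |
+----+-----------+------+------+-----------+-------+-----------+
```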
@@ -110,8 +110,8 @@ variable "use_neutron" {
variable "subnet_cidr" {
  description = "Subnet CIDR block."
  type = "string"
  default = "10.0.0.0/24"
}

variable "dns_nameservers" {
@@ -131,47 +131,28 @@ variable "external_net" {
variable "supplementary_master_groups" {
  description = "supplementary kubespray ansible groups for masters, such kube-node"
  default = ""
}

variable "supplementary_node_groups" {
  description = "supplementary kubespray ansible groups for worker nodes, such as kube-ingress"
  default = ""
}

variable "bastion_allowed_remote_ips" {
  description = "An array of CIDRs allowed to SSH to hosts"
  type = "list"
  default = ["0.0.0.0/0"]
}
-
-variable "master_allowed_remote_ips" {
-  description = "An array of CIDRs allowed to access API of masters"
-  type = "list"
-  default = ["0.0.0.0/0"]
-}
-
-variable "k8s_allowed_remote_ips" {
-  description = "An array of CIDRs allowed to SSH to hosts"
-  type = "list"
-  default = []
-}
-
-variable "k8s_allowed_egress_ips" {
-  description = "An array of CIDRs allowed for egress traffic"
-  type = "list"
-  default = ["0.0.0.0/0"]
-}

variable "worker_allowed_ports" {
  type = "list"

  default = [
    {
      "protocol" = "tcp"
      "port_range_min" = 30000
      "port_range_max" = 32767
      "remote_ip_prefix" = "0.0.0.0/0"
-    },
+    }
  ]
}
@@ -1,231 +0,0 @@
# Kubernetes on Packet with Terraform

Provision a Kubernetes cluster with [Terraform](https://www.terraform.io) on
[Packet](https://www.packet.com).

## Status

This will install a Kubernetes cluster on Packet bare metal. It should work in all locations and on most server types.

## Approach
The terraform configuration inspects variables found in
[variables.tf](variables.tf) to create resources in your Packet project.
There is a [python script](../terraform.py) that reads the generated `.tfstate`
file to generate a dynamic inventory that is consumed by [cluster.yml](../../../cluster.yml)
to actually install Kubernetes with Kubespray.

### Kubernetes Nodes
You can create many different kubernetes topologies by setting the number of
different classes of hosts.
- Master nodes with etcd: `number_of_k8s_masters` variable
- Master nodes without etcd: `number_of_k8s_masters_no_etcd` variable
- Standalone etcd hosts: `number_of_etcd` variable
- Kubernetes worker nodes: `number_of_k8s_nodes` variable

Note that the Ansible script will report an invalid configuration if you wind up
with an *even number* of etcd instances, since that is not a valid configuration. This
restriction includes standalone etcd nodes that are deployed in a cluster along with
master nodes with etcd replicas. As an example, if you have three master nodes with
etcd replicas and three standalone etcd nodes, the script will fail since there are
now six total etcd replicas.

## Requirements

- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
- Install dependencies: `sudo pip install -r requirements.txt`
- Account with Packet Host
- An SSH key pair

## SSH Key Setup

An SSH keypair is required so Ansible can access the newly provisioned nodes (bare metal Packet hosts). By default, the public SSH key defined in cluster.tf will be installed in authorized_keys on the newly provisioned nodes (~/.ssh/id_rsa.pub). Terraform will upload this public key and then it will be distributed out to all the nodes. If you have already set this public key in Packet (i.e. via the portal), then set the public keyfile name in cluster.tf to blank to prevent the duplicate key from being uploaded, which will cause an error.

If you don't already have a keypair generated (~/.ssh/id_rsa and ~/.ssh/id_rsa.pub), then a new keypair can be generated with the command:

```ShellSession
ssh-keygen -f ~/.ssh/id_rsa
```

## Terraform
Terraform will be used to provision all of the Packet resources with base software as appropriate.

### Configuration

#### Inventory files

Create an inventory directory for your cluster by copying the existing sample and linking the `hosts` script (used to build the inventory based on Terraform state):

```ShellSession
$ cp -LRp contrib/terraform/packet/sample-inventory inventory/$CLUSTER
$ cd inventory/$CLUSTER
$ ln -s ../../contrib/terraform/packet/hosts
```

This will be the base for subsequent Terraform commands.

#### Packet API access

Your Packet API key must be available in the `PACKET_AUTH_TOKEN` environment variable.
This key is typically stored outside of the code repo since it is considered secret.
If someone gets this key, they can start up and shut down hosts in your project!

For more information on how to generate an API key or find your project ID, please see:
https://support.packet.com/kb/articles/api-integrations

The Packet Project ID associated with the key will be set later in cluster.tf.

For more information about the API, please see:
https://www.packet.com/developers/api/

Example:
```ShellSession
$ export PACKET_AUTH_TOKEN="Example-API-Token"
```

Note that to deploy several clusters within the same project you need to use [terraform workspace](https://www.terraform.io/docs/state/workspaces.html#using-workspaces).
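For example, a dedicated workspace can be created and selected for each cluster before running any other Terraform commands (a minimal sketch, reusing `$CLUSTER` as the workspace name):

```ShellSession
$ terraform workspace new $CLUSTER
$ terraform workspace select $CLUSTER
```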
#### Cluster variables
The construction of the cluster is driven by values found in
[variables.tf](variables.tf).

For your cluster, edit `inventory/$CLUSTER/cluster.tf`.

The `cluster_name` is used to set a tag on each server deployed as part of this cluster.
This helps when identifying which hosts are associated with each cluster.

While the defaults in variables.tf will successfully deploy a cluster, it is recommended to set the following values:

* cluster_name = the name of the inventory directory created above as $CLUSTER
* packet_project_id = the Packet Project ID associated with the Packet API token above

#### Enable localhost access
Kubespray will pull down a Kubernetes configuration file to access this cluster by enabling
`kubeconfig_localhost: true` in the Kubespray configuration.

Edit `inventory/$CLUSTER/group_vars/k8s-cluster/k8s-cluster.yml`, comment back in the following line, and change it from `false` to `true`:
`\# kubeconfig_localhost: false`
becomes:
`kubeconfig_localhost: true`
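One way to make that change non-interactively is with `sed` (an illustrative sketch, assuming GNU sed and the file layout above):

```ShellSession
$ sed -i 's|^# kubeconfig_localhost: false|kubeconfig_localhost: true|' inventory/$CLUSTER/group_vars/k8s-cluster/k8s-cluster.yml
```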
Once the Kubespray playbooks are run, a Kubernetes configuration file will be written to the local host at `inventory/$CLUSTER/artifacts/admin.conf`.

#### Terraform state files

In the cluster's inventory folder, the following files might be created (either by Terraform
or manually); to prevent you from pushing them accidentally, they are listed in a
`.gitignore` file in the `terraform/packet` directory:

* `.terraform`
* `.tfvars`
* `.tfstate`
* `.tfstate.backup`

You can still add them manually if you want to.

### Initialization

Before Terraform can operate on your cluster you need to install the required
plugins. This is accomplished as follows:

```ShellSession
$ cd inventory/$CLUSTER
$ terraform init ../../contrib/terraform/packet
```

This should finish fairly quickly, telling you Terraform has successfully initialized and loaded the necessary modules.

### Provisioning cluster
You can apply the Terraform configuration to your cluster with the following commands,
issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession
$ terraform apply -var-file=cluster.tf ../../contrib/terraform/packet
$ export ANSIBLE_HOST_KEY_CHECKING=False
$ ansible-playbook -i hosts ../../cluster.yml
```

### Destroying cluster
You can destroy your new cluster with the following command issued from the cluster's inventory directory:

```ShellSession
$ terraform destroy -var-file=cluster.tf ../../contrib/terraform/packet
```

If you've started the Ansible run, it may also be a good idea to do some manual cleanup:

* remove SSH keys from the destroyed cluster from your `~/.ssh/known_hosts` file
* clean up any temporary cache files: `rm /tmp/$CLUSTER-*`

### Debugging
You can enable debugging output from Terraform by setting `TF_LOG` to `DEBUG` before running the Terraform command.
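For example (a sketch reusing the apply command shown earlier; any other Terraform command works the same way):

```ShellSession
$ export TF_LOG=DEBUG
$ terraform apply -var-file=cluster.tf ../../contrib/terraform/packet
```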
## Ansible

### Node access

#### SSH

Ensure your local ssh-agent is running and your ssh key has been added. This
step is required by the terraform provisioner:

```
$ eval $(ssh-agent -s)
$ ssh-add ~/.ssh/id_rsa
```

If you have deployed and destroyed a previous iteration of your cluster, you will need to clear out any stale keys from your SSH "known hosts" file (`~/.ssh/known_hosts`).
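A stale entry can be removed per host with `ssh-keygen -R` (a sketch; the address is a placeholder for one of the destroyed hosts):

```ShellSession
$ ssh-keygen -R 203.0.113.10
```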
#### Test access

Make sure you can connect to the hosts. Note that Container Linux by CoreOS will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.

```
$ ansible -i inventory/$CLUSTER/hosts -m ping all
example-k8s_node-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
example-etcd-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
example-k8s-master-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```

If it fails, try to connect manually via SSH. It could be something as simple as a stale host key.

### Deploy Kubernetes

```
$ ansible-playbook --become -i inventory/$CLUSTER/hosts cluster.yml
```

This will take some time as there are many tasks to run.

## Kubernetes

### Set up kubectl

* [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on the localhost.

* Verify that Kubectl runs correctly
```
kubectl version
```

* Verify that the Kubernetes configuration file has been copied over
```
cat inventory/$CLUSTER/artifacts/admin.conf
```

* Verify that all the nodes are running correctly.
```
kubectl version
kubectl --kubeconfig=inventory/$CLUSTER/artifacts/admin.conf get nodes
```

## What's next

Try out your new Kubernetes cluster with the [Hello Kubernetes service](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/).
@@ -1 +0,0 @@
../terraform.py
@@ -1,62 +0,0 @@
# Configure the Packet Provider
provider "packet" {
  version = "~> 2.0"
}

resource "packet_ssh_key" "k8s" {
  count      = "${var.public_key_path != "" ? 1 : 0}"
  name       = "kubernetes-${var.cluster_name}"
  public_key = "${chomp(file(var.public_key_path))}"
}

resource "packet_device" "k8s_master" {
  depends_on = ["packet_ssh_key.k8s"]

  count            = "${var.number_of_k8s_masters}"
  hostname         = "${var.cluster_name}-k8s-master-${count.index+1}"
  plan             = "${var.plan_k8s_masters}"
  facilities       = ["${var.facility}"]
  operating_system = "${var.operating_system}"
  billing_cycle    = "${var.billing_cycle}"
  project_id       = "${var.packet_project_id}"
  tags             = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-master", "etcd", "kube-node"]
}

resource "packet_device" "k8s_master_no_etcd" {
  depends_on = ["packet_ssh_key.k8s"]

  count            = "${var.number_of_k8s_masters_no_etcd}"
  hostname         = "${var.cluster_name}-k8s-master-${count.index+1}"
  plan             = "${var.plan_k8s_masters_no_etcd}"
  facilities       = ["${var.facility}"]
  operating_system = "${var.operating_system}"
  billing_cycle    = "${var.billing_cycle}"
  project_id       = "${var.packet_project_id}"
  tags             = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-master"]
}

resource "packet_device" "k8s_etcd" {
  depends_on = ["packet_ssh_key.k8s"]

  count            = "${var.number_of_etcd}"
  hostname         = "${var.cluster_name}-etcd-${count.index+1}"
  plan             = "${var.plan_etcd}"
  facilities       = ["${var.facility}"]
  operating_system = "${var.operating_system}"
  billing_cycle    = "${var.billing_cycle}"
  project_id       = "${var.packet_project_id}"
  tags             = ["cluster-${var.cluster_name}", "etcd"]
}

resource "packet_device" "k8s_node" {
  depends_on = ["packet_ssh_key.k8s"]

  count            = "${var.number_of_k8s_nodes}"
  hostname         = "${var.cluster_name}-k8s-node-${count.index+1}"
  plan             = "${var.plan_k8s_nodes}"
  facilities       = ["${var.facility}"]
  operating_system = "${var.operating_system}"
  billing_cycle    = "${var.billing_cycle}"
  project_id       = "${var.packet_project_id}"
  tags             = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-node"]
}
@@ -1,15 +0,0 @@
output "k8s_masters" {
  value = "${packet_device.k8s_master.*.access_public_ipv4}"
}

output "k8s_masters_no_etc" {
  value = "${packet_device.k8s_master_no_etcd.*.access_public_ipv4}"
}

output "k8s_etcds" {
  value = "${packet_device.k8s_etcd.*.access_public_ipv4}"
}

output "k8s_nodes" {
  value = "${packet_device.k8s_node.*.access_public_ipv4}"
}
@@ -1,32 +0,0 @@
# your Kubernetes cluster name here
cluster_name = "mycluster"

# Your Packet project ID. See https://support.packet.com/kb/articles/api-integrations
packet_project_id = "Example-API-Token"

# The public SSH key to be uploaded into authorized_keys in bare metal Packet nodes provisioned
# leave this value blank if the public key is already setup in the Packet project
# Terraform will complain if the public key is setup in Packet
public_key_path = "~/.ssh/id_rsa.pub"

# cluster location
facility = "ewr1"

# standalone etcds
number_of_etcd = 0

plan_etcd = "t1.small.x86"

# masters
number_of_k8s_masters = 1

number_of_k8s_masters_no_etcd = 0

plan_k8s_masters = "t1.small.x86"

plan_k8s_masters_no_etcd = "t1.small.x86"

# nodes
number_of_k8s_nodes = 2

plan_k8s_nodes = "t1.small.x86"
@@ -1 +0,0 @@
../../../../inventory/sample/group_vars
@@ -1,56 +0,0 @@
variable "cluster_name" {
  default = "kubespray"
}

variable "packet_project_id" {
  description = "Your Packet project ID. See https://support.packet.com/kb/articles/api-integrations"
}

variable "operating_system" {
  default = "ubuntu_16_04"
}

variable "public_key_path" {
  description = "The path of the ssh pub key"
  default     = "~/.ssh/id_rsa.pub"
}

variable "billing_cycle" {
  default = "hourly"
}

variable "facility" {
  default = "dfw2"
}

variable "plan_k8s_masters" {
  default = "c2.medium.x86"
}

variable "plan_k8s_masters_no_etcd" {
  default = "c2.medium.x86"
}

variable "plan_etcd" {
  default = "c2.medium.x86"
}

variable "plan_k8s_nodes" {
  default = "c2.medium.x86"
}

variable "number_of_k8s_masters" {
  default = 0
}

variable "number_of_k8s_masters_no_etcd" {
  default = 0
}

variable "number_of_etcd" {
  default = 0
}

variable "number_of_k8s_nodes" {
  default = 0
}
@@ -149,46 +149,163 @@ def parse_bool(string_form):
     raise ValueError('could not convert %r to a bool' % string_form)


-@parses('packet_device')
-def packet_device(resource, tfvars=None):
+@parses('triton_machine')
+@calculate_mantl_vars
+def triton_machine(resource, module_name):
     raw_attrs = resource['primary']['attributes']
-    name = raw_attrs['hostname']
+    name = raw_attrs.get('name')
     groups = []

     attrs = {
         'id': raw_attrs['id'],
-        'facilities': parse_list(raw_attrs, 'facilities'),
-        'hostname': raw_attrs['hostname'],
-        'operating_system': raw_attrs['operating_system'],
-        'locked': parse_bool(raw_attrs['locked']),
-        'tags': parse_list(raw_attrs, 'tags'),
-        'plan': raw_attrs['plan'],
-        'project_id': raw_attrs['project_id'],
+        'dataset': raw_attrs['dataset'],
+        'disk': raw_attrs['disk'],
+        'firewall_enabled': parse_bool(raw_attrs['firewall_enabled']),
+        'image': raw_attrs['image'],
+        'ips': parse_list(raw_attrs, 'ips'),
+        'memory': raw_attrs['memory'],
+        'name': raw_attrs['name'],
+        'networks': parse_list(raw_attrs, 'networks'),
+        'package': raw_attrs['package'],
+        'primary_ip': raw_attrs['primaryip'],
+        'root_authorized_keys': raw_attrs['root_authorized_keys'],
         'state': raw_attrs['state'],
+        'tags': parse_dict(raw_attrs, 'tags'),
+        'type': raw_attrs['type'],
+        'user_data': raw_attrs['user_data'],
+        'user_script': raw_attrs['user_script'],

         # ansible
-        'ansible_ssh_host': raw_attrs['network.0.address'],
-        'ansible_ssh_user': 'root',  # it's always "root" on Packet
+        'ansible_ssh_host': raw_attrs['primaryip'],
+        'ansible_ssh_port': 22,
+        'ansible_ssh_user': 'root',  # it's "root" on Triton by default

         # generic
-        'ipv4_address': raw_attrs['network.0.address'],
-        'public_ipv4': raw_attrs['network.0.address'],
-        'ipv6_address': raw_attrs['network.1.address'],
-        'public_ipv6': raw_attrs['network.1.address'],
-        'private_ipv4': raw_attrs['network.2.address'],
-        'provider': 'packet',
+        'public_ipv4': raw_attrs['primaryip'],
+        'provider': 'triton',
     }

-    # add groups based on attrs
-    groups.append('packet_operating_system=' + attrs['operating_system'])
-    groups.append('packet_locked=%s' % attrs['locked'])
-    groups.append('packet_state=' + attrs['state'])
-    groups.append('packet_plan=' + attrs['plan'])
+    # private IPv4
+    for ip in attrs['ips']:
+        if ip.startswith('10') or ip.startswith('192.168'):  # private IPs
+            attrs['private_ipv4'] = ip
+            break

-    # groups specific to kubespray
-    groups = groups + attrs['tags']
+    if 'private_ipv4' not in attrs:
+        attrs['private_ipv4'] = attrs['public_ipv4']
+
+    # attrs specific to Mantl
+    attrs.update({
+        'consul_dc': _clean_dc(attrs['tags'].get('dc', 'none')),
+        'role': attrs['tags'].get('role', 'none'),
+        'ansible_python_interpreter': attrs['tags'].get('python_bin', 'python')
+    })
+
+    # add groups based on attrs
+    groups.append('triton_image=' + attrs['image'])
+    groups.append('triton_package=' + attrs['package'])
+    groups.append('triton_state=' + attrs['state'])
+    groups.append('triton_firewall_enabled=%s' % attrs['firewall_enabled'])
+    groups.extend('triton_tags_%s=%s' % item
+                  for item in attrs['tags'].items())
+    groups.extend('triton_network=' + network
+                  for network in attrs['networks'])
+
+    # groups specific to Mantl
+    groups.append('role=' + attrs['role'])
+    groups.append('dc=' + attrs['consul_dc'])

     return name, attrs, groups


+@parses('digitalocean_droplet')
+@calculate_mantl_vars
+def digitalocean_host(resource, tfvars=None):
+    raw_attrs = resource['primary']['attributes']
+    name = raw_attrs['name']
+    groups = []
+
+    attrs = {
+        'id': raw_attrs['id'],
+        'image': raw_attrs['image'],
+        'ipv4_address': raw_attrs['ipv4_address'],
+        'locked': parse_bool(raw_attrs['locked']),
+        'metadata': json.loads(raw_attrs.get('user_data', '{}')),
+        'region': raw_attrs['region'],
+        'size': raw_attrs['size'],
+        'ssh_keys': parse_list(raw_attrs, 'ssh_keys'),
+        'status': raw_attrs['status'],
+        # ansible
+        'ansible_ssh_host': raw_attrs['ipv4_address'],
+        'ansible_ssh_port': 22,
+        'ansible_ssh_user': 'root',  # it's always "root" on DO
+        # generic
+        'public_ipv4': raw_attrs['ipv4_address'],
+        'private_ipv4': raw_attrs.get('ipv4_address_private',
+                                      raw_attrs['ipv4_address']),
+        'provider': 'digitalocean',
+    }
+
+    # attrs specific to Mantl
+    attrs.update({
+        'consul_dc': _clean_dc(attrs['metadata'].get('dc', attrs['region'])),
+        'role': attrs['metadata'].get('role', 'none'),
+        'ansible_python_interpreter': attrs['metadata'].get('python_bin', 'python')
+    })
+
+    # add groups based on attrs
+    groups.append('do_image=' + attrs['image'])
+    groups.append('do_locked=%s' % attrs['locked'])
+    groups.append('do_region=' + attrs['region'])
+    groups.append('do_size=' + attrs['size'])
+    groups.append('do_status=' + attrs['status'])
+    groups.extend('do_metadata_%s=%s' % item
+                  for item in attrs['metadata'].items())
+
+    # groups specific to Mantl
+    groups.append('role=' + attrs['role'])
+    groups.append('dc=' + attrs['consul_dc'])
+
+    return name, attrs, groups
+
+
+@parses('softlayer_virtualserver')
+@calculate_mantl_vars
+def softlayer_host(resource, module_name):
+    raw_attrs = resource['primary']['attributes']
+    name = raw_attrs['name']
+    groups = []
+
+    attrs = {
+        'id': raw_attrs['id'],
+        'image': raw_attrs['image'],
+        'ipv4_address': raw_attrs['ipv4_address'],
+        'metadata': json.loads(raw_attrs.get('user_data', '{}')),
+        'region': raw_attrs['region'],
+        'ram': raw_attrs['ram'],
+        'cpu': raw_attrs['cpu'],
+        'ssh_keys': parse_list(raw_attrs, 'ssh_keys'),
+        'public_ipv4': raw_attrs['ipv4_address'],
+        'private_ipv4': raw_attrs['ipv4_address_private'],
+        'ansible_ssh_host': raw_attrs['ipv4_address'],
+        'ansible_ssh_port': 22,
+        'ansible_ssh_user': 'root',
+        'provider': 'softlayer',
+    }
+
+    # attrs specific to Mantl
+    attrs.update({
+        'consul_dc': _clean_dc(attrs['metadata'].get('dc', attrs['region'])),
+        'role': attrs['metadata'].get('role', 'none'),
+        'ansible_python_interpreter': attrs['metadata'].get('python_bin', 'python')
+    })
+
+    # groups specific to Mantl
+    groups.append('role=' + attrs['role'])
+    groups.append('dc=' + attrs['consul_dc'])
+
+    return name, attrs, groups
+
+
 def openstack_floating_ips(resource):
     raw_attrs = resource['primary']['attributes']
     attrs = {
@@ -286,6 +403,281 @@ def openstack_host(resource, module_name):
     return name, attrs, groups


+@parses('aws_instance')
+@calculate_mantl_vars
+def aws_host(resource, module_name):
+    name = resource['primary']['attributes']['tags.Name']
+    raw_attrs = resource['primary']['attributes']
+
+    groups = []
+
+    attrs = {
+        'ami': raw_attrs['ami'],
+        'availability_zone': raw_attrs['availability_zone'],
+        'ebs_block_device': parse_attr_list(raw_attrs, 'ebs_block_device'),
+        'ebs_optimized': parse_bool(raw_attrs['ebs_optimized']),
+        'ephemeral_block_device': parse_attr_list(raw_attrs,
+                                                  'ephemeral_block_device'),
+        'id': raw_attrs['id'],
+        'key_name': raw_attrs['key_name'],
+        'private': parse_dict(raw_attrs, 'private',
+                              sep='_'),
+        'public': parse_dict(raw_attrs, 'public',
+                             sep='_'),
+        'root_block_device': parse_attr_list(raw_attrs, 'root_block_device'),
+        'security_groups': parse_list(raw_attrs, 'security_groups'),
+        'subnet': parse_dict(raw_attrs, 'subnet',
+                             sep='_'),
+        'tags': parse_dict(raw_attrs, 'tags'),
+        'tenancy': raw_attrs['tenancy'],
+        'vpc_security_group_ids': parse_list(raw_attrs,
+                                             'vpc_security_group_ids'),
+        # ansible-specific
+        'ansible_ssh_port': 22,
+        'ansible_ssh_host': raw_attrs['public_ip'],
+        # generic
+        'public_ipv4': raw_attrs['public_ip'],
+        'private_ipv4': raw_attrs['private_ip'],
+        'provider': 'aws',
+    }
+
+    # attrs specific to Ansible
+    if 'tags.sshUser' in raw_attrs:
+        attrs['ansible_ssh_user'] = raw_attrs['tags.sshUser']
+    if 'tags.sshPrivateIp' in raw_attrs:
+        attrs['ansible_ssh_host'] = raw_attrs['private_ip']
+
+    # attrs specific to Mantl
+    attrs.update({
+        'consul_dc': _clean_dc(attrs['tags'].get('dc', module_name)),
+        'role': attrs['tags'].get('role', 'none'),
+        'ansible_python_interpreter': attrs['tags'].get('python_bin', 'python')
+    })
+
+    # groups specific to Mantl
+    groups.extend(['aws_ami=' + attrs['ami'],
+                   'aws_az=' + attrs['availability_zone'],
+                   'aws_key_name=' + attrs['key_name'],
+                   'aws_tenancy=' + attrs['tenancy']])
+    groups.extend('aws_tag_%s=%s' % item for item in attrs['tags'].items())
+    groups.extend('aws_vpc_security_group=' + group
+                  for group in attrs['vpc_security_group_ids'])
+    groups.extend('aws_subnet_%s=%s' % subnet
+                  for subnet in attrs['subnet'].items())
+
+    # groups specific to Mantl
+    groups.append('role=' + attrs['role'])
+    groups.append('dc=' + attrs['consul_dc'])
+
+    return name, attrs, groups
+
+
+@parses('google_compute_instance')
+@calculate_mantl_vars
+def gce_host(resource, module_name):
+    name = resource['primary']['id']
+    raw_attrs = resource['primary']['attributes']
+    groups = []
+
+    # network interfaces
+    interfaces = parse_attr_list(raw_attrs, 'network_interface')
+    for interface in interfaces:
+        interface['access_config'] = parse_attr_list(interface,
+                                                     'access_config')
+        for key in interface.keys():
+            if '.' in key:
+                del interface[key]
+
+    # general attrs
+    attrs = {
+        'can_ip_forward': raw_attrs['can_ip_forward'] == 'true',
+        'disks': parse_attr_list(raw_attrs, 'disk'),
+        'machine_type': raw_attrs['machine_type'],
+        'metadata': parse_dict(raw_attrs, 'metadata'),
+        'network': parse_attr_list(raw_attrs, 'network'),
+        'network_interface': interfaces,
+        'self_link': raw_attrs['self_link'],
+        'service_account': parse_attr_list(raw_attrs, 'service_account'),
+        'tags': parse_list(raw_attrs, 'tags'),
+        'zone': raw_attrs['zone'],
+        # ansible
+        'ansible_ssh_port': 22,
+        'provider': 'gce',
+    }
+
+    # attrs specific to Ansible
+    if 'metadata.ssh_user' in raw_attrs:
+        attrs['ansible_ssh_user'] = raw_attrs['metadata.ssh_user']
+
+    # attrs specific to Mantl
+    attrs.update({
+        'consul_dc': _clean_dc(attrs['metadata'].get('dc', module_name)),
+        'role': attrs['metadata'].get('role', 'none'),
+        'ansible_python_interpreter': attrs['metadata'].get('python_bin', 'python')
+    })
+
+    try:
+        attrs.update({
+            'ansible_ssh_host': interfaces[0]['access_config'][0]['nat_ip'] or interfaces[0]['access_config'][0]['assigned_nat_ip'],
+            'public_ipv4': interfaces[0]['access_config'][0]['nat_ip'] or interfaces[0]['access_config'][0]['assigned_nat_ip'],
+            'private_ipv4': interfaces[0]['address'],
+            'publicly_routable': True,
+        })
+    except (KeyError, ValueError):
+        attrs.update({'ansible_ssh_host': '', 'publicly_routable': False})
+
+    # add groups based on attrs
+    groups.extend('gce_image=' + disk['image'] for disk in attrs['disks'])
+    groups.append('gce_machine_type=' + attrs['machine_type'])
+    groups.extend('gce_metadata_%s=%s' % (key, value)
+                  for (key, value) in attrs['metadata'].items()
+                  if key not in set(['sshKeys']))
+    groups.extend('gce_tag=' + tag for tag in attrs['tags'])
+    groups.append('gce_zone=' + attrs['zone'])
+
+    if attrs['can_ip_forward']:
+        groups.append('gce_ip_forward')
+    if attrs['publicly_routable']:
+        groups.append('gce_publicly_routable')
+
+    # groups specific to Mantl
+    groups.append('role=' + attrs['metadata'].get('role', 'none'))
+    groups.append('dc=' + attrs['consul_dc'])
+
+    return name, attrs, groups
+
+
+@parses('vsphere_virtual_machine')
+@calculate_mantl_vars
+def vsphere_host(resource, module_name):
+    raw_attrs = resource['primary']['attributes']
+    network_attrs = parse_dict(raw_attrs, 'network_interface')
+    network = parse_dict(network_attrs, '0')
+    ip_address = network.get('ipv4_address', network['ip_address'])
+    name = raw_attrs['name']
+    groups = []
+
+    attrs = {
+        'id': raw_attrs['id'],
+        'ip_address': ip_address,
+        'private_ipv4': ip_address,
+        'public_ipv4': ip_address,
+        'metadata': parse_dict(raw_attrs, 'custom_configuration_parameters'),
+        'ansible_ssh_port': 22,
+        'provider': 'vsphere',
+    }
+
+    try:
+        attrs.update({
+            'ansible_ssh_host': ip_address,
+        })
+    except (KeyError, ValueError):
+        attrs.update({'ansible_ssh_host': '', })
+
+    attrs.update({
+        'consul_dc': _clean_dc(attrs['metadata'].get('consul_dc', module_name)),
+        'role': attrs['metadata'].get('role', 'none'),
+        'ansible_python_interpreter': attrs['metadata'].get('python_bin', 'python')
+    })
+
+    # attrs specific to Ansible
+    if 'ssh_user' in attrs['metadata']:
+        attrs['ansible_ssh_user'] = attrs['metadata']['ssh_user']
+
+    groups.append('role=' + attrs['role'])
+    groups.append('dc=' + attrs['consul_dc'])
+
+    return name, attrs, groups
+
+
+@parses('azure_instance')
+@calculate_mantl_vars
+def azure_host(resource, module_name):
+    name = resource['primary']['attributes']['name']
+    raw_attrs = resource['primary']['attributes']
+
+    groups = []
+
+    attrs = {
+        'automatic_updates': raw_attrs['automatic_updates'],
+        'description': raw_attrs['description'],
+        'hosted_service_name': raw_attrs['hosted_service_name'],
+        'id': raw_attrs['id'],
+        'image': raw_attrs['image'],
+        'ip_address': raw_attrs['ip_address'],
+        'location': raw_attrs['location'],
+        'name': raw_attrs['name'],
+        'reverse_dns': raw_attrs['reverse_dns'],
+        'security_group': raw_attrs['security_group'],
+        'size': raw_attrs['size'],
+        'ssh_key_thumbprint': raw_attrs['ssh_key_thumbprint'],
+        'subnet': raw_attrs['subnet'],
+        'username': raw_attrs['username'],
+        'vip_address': raw_attrs['vip_address'],
+        'virtual_network': raw_attrs['virtual_network'],
+        'endpoint': parse_attr_list(raw_attrs, 'endpoint'),
+        # ansible
+        'ansible_ssh_port': 22,
+        'ansible_ssh_user': raw_attrs['username'],
+        'ansible_ssh_host': raw_attrs['vip_address'],
+    }
+
+    # attrs specific to mantl
+    attrs.update({
+        'consul_dc': attrs['location'].lower().replace(" ", "-"),
+        'role': attrs['description']
+    })
+
+    # groups specific to mantl
+    groups.extend(['azure_image=' + attrs['image'],
+                   'azure_location=' + attrs['location'].lower().replace(" ", "-"),
+                   'azure_username=' + attrs['username'],
+                   'azure_security_group=' + attrs['security_group']])
+
+    # groups specific to mantl
+    groups.append('role=' + attrs['role'])
+    groups.append('dc=' + attrs['consul_dc'])
+
+    return name, attrs, groups
+
+
+@parses('clc_server')
+@calculate_mantl_vars
+def clc_server(resource, module_name):
+    raw_attrs = resource['primary']['attributes']
+    name = raw_attrs.get('id')
+    groups = []
+    md = parse_dict(raw_attrs, 'metadata')
+    attrs = {
+        'metadata': md,
+        'ansible_ssh_port': md.get('ssh_port', 22),
+        'ansible_ssh_user': md.get('ssh_user', 'root'),
+        'provider': 'clc',
+        'publicly_routable': False,
+    }
+
+    try:
+        attrs.update({
+            'public_ipv4': raw_attrs['public_ip_address'],
+            'private_ipv4': raw_attrs['private_ip_address'],
+            'ansible_ssh_host': raw_attrs['public_ip_address'],
+            'publicly_routable': True,
+        })
+    except (KeyError, ValueError):
+        attrs.update({
+            'ansible_ssh_host': raw_attrs['private_ip_address'],
+            'private_ipv4': raw_attrs['private_ip_address'],
+        })
+
+    attrs.update({
+        'consul_dc': _clean_dc(attrs['metadata'].get('dc', module_name)),
+        'role': attrs['metadata'].get('role', 'none'),
+    })
+
+    groups.append('role=' + attrs['role'])
+    groups.append('dc=' + attrs['consul_dc'])
+    return name, attrs, groups
+
+
 def iter_host_ips(hosts, ips):
     '''Update hosts that have an entry in the floating IP list'''
     for host in hosts:
@@ -1,4 +1,3 @@
----
 vault_deployment_type: docker
 vault_binary_checksum: 3c4d70ba71619a43229e65c67830e30e050eab7a81ac6b28325ff707e5914188
 vault_version: 0.10.1
@@ -1,7 +1,7 @@
 ---
 # Stop temporary Vault if it's running (can linger if playbook fails out)
 - name: stop vault-temp container
-  shell: docker stop {{ vault_temp_container_name }}
+  shell: docker stop {{ vault_temp_container_name }} || rkt stop {{ vault_temp_container_name }}
   failed_when: false
   register: vault_temp_stop
   changed_when: vault_temp_stop is succeeded
@@ -5,19 +5,17 @@
     set_fact:
       sync_file_dir: "{{ sync_file_path | dirname }}"
      sync_file: "{{ sync_file_path | basename }}"
-  when:
-    - sync_file_path is defined
-    - sync_file_path
+  when: sync_file_path is defined and sync_file_path != ''

 - name: "sync_file | Set fact for sync_file_path when undefined"
   set_fact:
     sync_file_path: "{{ (sync_file_dir, sync_file)|join('/') }}"
-  when: sync_file_path is not defined or not sync_file_path
+  when: sync_file_path is not defined or sync_file_path == ''

 - name: "sync_file | Set fact for key path name"
   set_fact:
     sync_file_key_path: "{{ sync_file_path.rsplit('.', 1)|first + '-key.' + sync_file_path.rsplit('.', 1)|last }}"
-  when: sync_file_key_path is not defined or not sync_file_key_path
+  when: sync_file_key_path is not defined or sync_file_key_path == ''

 - name: "sync_file | Check if {{sync_file_path}} file exists"
   stat:
@@ -48,17 +46,17 @@
 - name: "sync_file | Remove sync sources with files that do not match sync_file_srcs|first"
   set_fact:
     _: "{% if inventory_hostname in sync_file_srcs %}{{ sync_file_srcs.remove(inventory_hostname) }}{% endif %}"
-  when:
-    - sync_file_srcs|d([])|length > 1
-    - inventory_hostname != sync_file_srcs|first
+  when: >-
+    sync_file_srcs|d([])|length > 1 and
+    inventory_hostname != sync_file_srcs|first

 - name: "sync_file | Remove sync sources with keys that do not match sync_file_srcs|first"
   set_fact:
     _: "{% if inventory_hostname in sync_file_srcs %}{{ sync_file_srcs.remove(inventory_hostname) }}{% endif %}"
-  when:
-    - sync_file_is_cert|d()
-    - sync_file_key_srcs|d([])|length > 1
-    - inventory_hostname != sync_file_key_srcs|first
+  when: >-
+    sync_file_is_cert|d() and
+    sync_file_key_srcs|d([])|length > 1 and
+    inventory_hostname != sync_file_key_srcs|first

 - name: "sync_file | Consolidate file and key sources"
   set_fact:
contrib/vault/roles/vault/templates/rkt.service.j2 (new file, 45 lines)
@@ -0,0 +1,45 @@
[Unit]
Description=hashicorp vault on rkt
Documentation=https://github.com/hashicorp/vault
Wants=network.target

[Service]
User=root
Restart=on-failure
RestartSec=10s
TimeoutStartSec=5
LimitNOFILE=40000
# Container has the following internal mount points:
# /vault/file/   # File backend storage location
# /vault/logs/   # Log files
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/vault.uuid

ExecStart=/usr/bin/rkt run \
        --insecure-options=image \
        --volume hosts,kind=host,source=/etc/hosts,readOnly=true \
        --mount volume=hosts,target=/etc/hosts \
        --volume=volume-vault-file,kind=host,source=/var/lib/vault \
        --volume=volume-vault-logs,kind=host,source={{ vault_log_dir }} \
        --volume=vault-cert-dir,kind=host,source={{ vault_cert_dir }} \
        --mount=volume=vault-cert-dir,target={{ vault_cert_dir }} \
        --volume=vault-conf-dir,kind=host,source={{ vault_config_dir }} \
        --mount=volume=vault-conf-dir,target={{ vault_config_dir }} \
        --volume=vault-secrets-dir,kind=host,source={{ vault_secrets_dir }} \
        --mount=volume=vault-secrets-dir,target={{ vault_secrets_dir }} \
        --volume=vault-roles-dir,kind=host,source={{ vault_roles_dir }} \
        --mount=volume=vault-roles-dir,target={{ vault_roles_dir }} \
        --volume=etcd-cert-dir,kind=host,source={{ etcd_cert_dir }} \
        --mount=volume=etcd-cert-dir,target={{ etcd_cert_dir }} \
        docker://{{ vault_image_repo }}:{{ vault_image_tag }} \
        --uuid-file-save=/var/run/vault.uuid \
        --name={{ vault_container_name }} \
        --net=host \
        --caps-retain=CAP_IPC_LOCK \
        --exec vault -- \
        server \
        --config={{ vault_config_dir }}/config.json

ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/vault.uuid

[Install]
WantedBy=multi-user.target
@@ -93,6 +93,6 @@ Potential Work
 - Change the Vault role to not run certain tasks when ``root_token`` and
   ``unseal_keys`` are not present. Alternatively, allow user input for these
   values when missing.
-- Add the ability to start temp Vault with Host or Docker
+- Add the ability to start temp Vault with Host, Rkt, or Docker
 - Add a dynamic way to change out the backend role creation during Bootstrap,
   so other services can be used (such as Consul)
@@ -1,40 +0,0 @@
* [Readme](/)
* [Comparisons](/docs/comparisons.md)
* [Getting started](/docs/getting-started.md)
* [Ansible](docs/ansible.md)
* [Variables](/docs/vars.md)
* [Ansible](/docs/ansible.md)
* Operations
* [Integration](docs/integration.md)
* [Upgrades](/docs/upgrades.md)
* [HA Mode](docs/ha-mode.md)
* [Large deployments](docs/large-deployments.md)
* CNI
* [Calico](docs/calico.md)
* [Contiv](docs/contiv.md)
* [Flannel](docs/flannel.md)
* [Kube Router](docs/kube-router.md)
* [Weave](docs/weave.md)
* [Multus](docs/multus.md)
* [Cloud providers](docs/cloud.md)
* [AWS](docs/aws.md)
* [Azure](docs/azure.md)
* [OpenStack](/docs/openstack.md)
* [Packet](/docs/packet.md)
* [vSphere](/docs/vsphere.md)
* Operating Systems
* [Atomic](docs/atomic.md)
* [Debian](docs/debian.md)
* [Coreos](docs/coreos.md)
* [OpenSUSE](docs/opensuse.md)
* Advanced
* [Proxy](/docs/proxy.md)
* [Downloads](docs/downloads.md)
* [CRI-O](docs/cri-o.md)
* [Netcheck](docs/netcheck.md)
* [DNS Stack](docs/dns-stack.md)
* [Kubernetes reliability](docs/kubernetes-reliability.md)
* Developers
* [Test cases](docs/test_cases.md)
* [Vagrant](docs/vagrant.md)
* [Roadmap](docs/roadmap.md)
@@ -35,12 +35,12 @@ Below is a complete inventory example:
 ```
 ## Configure 'ip' variable to bind kubernetes services on a
 ## different ip than the default iface
-node1 ansible_host=95.54.0.12 ip=10.3.0.1
-node2 ansible_host=95.54.0.13 ip=10.3.0.2
-node3 ansible_host=95.54.0.14 ip=10.3.0.3
-node4 ansible_host=95.54.0.15 ip=10.3.0.4
-node5 ansible_host=95.54.0.16 ip=10.3.0.5
-node6 ansible_host=95.54.0.17 ip=10.3.0.6
+node1 ansible_ssh_host=95.54.0.12 ip=10.3.0.1
+node2 ansible_ssh_host=95.54.0.13 ip=10.3.0.2
+node3 ansible_ssh_host=95.54.0.14 ip=10.3.0.3
+node4 ansible_ssh_host=95.54.0.15 ip=10.3.0.4
+node5 ansible_ssh_host=95.54.0.16 ip=10.3.0.5
+node6 ansible_ssh_host=95.54.0.17 ip=10.3.0.6

 [kube-master]
 node1
@@ -70,7 +70,7 @@ The group variables to control main deployment options are located in the direct
 Optional variables are located in the `inventory/sample/group_vars/all.yml`.
 Mandatory variables that are common for at least one role (or a node group) can be found in the
 `inventory/sample/group_vars/k8s-cluster.yml`.
-There are also role vars for docker, kubernetes preinstall and master roles.
+There are also role vars for docker, rkt, kubernetes preinstall and master roles.
 According to the [ansible docs](http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
 those cannot be overridden from the group vars. In order to override, one should use
 the `-e ` runtime flags (most simple way) or other layers described in the docs.
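For example, a role default could be overridden at run time like this (an illustrative sketch; the variable and value are placeholders, not a recommendation):

```
ansible-playbook -i inventory/sample/hosts.ini cluster.yml -e kube_version=v1.12.5
```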
@@ -110,6 +110,7 @@ The following tags are defined in playbooks:
 | calico | Network plugin Calico
 | canal | Network plugin Canal
 | cloud-provider | Cloud-provider related tasks
+| dnsmasq | Configuring DNS stack for hosts and K8s apps
 | docker | Configuring docker for hosts
 | download | Fetching container images to a delegate host
 | etcd | Configuring etcd cluster
@@ -151,11 +152,11 @@ Example command to filter and apply only DNS configuration tasks and skip
 everything else related to host OS configuration and downloading images of containers:

 ```
-ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os
+ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,dnsmasq,facts --skip-tags=download,bootstrap-os
 ```
 And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:
 ```
-ansible-playbook -i inventory/sample/hosts.ini -e dns_mode='none' cluster.yml --tags resolvconf
+ansible-playbook -i inventory/sample/hosts.ini -e dnsmasq_dns_server='' cluster.yml --tags resolvconf
 ```
 And this prepares all container images locally (at the ansible runner node) without installing
 or upgrading related stuff or trying to upload container to K8s cluster nodes:
@@ -175,8 +176,7 @@ simply add a line to your inventory, where you have to replace x.x.x.x with the
 bastion host.

 ```
-[bastion]
-bastion ansible_host=x.x.x.x
+bastion ansible_ssh_host=x.x.x.x
 ```

 For more information about Ansible and bastion hosts, read
docs/arch.md (16 lines)
@@ -1,16 +0,0 @@
## Architecture compatibility

The following table shows the impact of the CPU architecture on compatible features:
- amd64: Cluster using only x86/amd64 CPUs
- arm64: Cluster using only arm64 CPUs
- amd64 + arm64: Cluster with a mix of x86/amd64 and arm64 CPUs

| kube_network_plugin | amd64 | arm64 | amd64 + arm64 |
| ------------------- | ----- | ----- | ------------- |
| Calico              | Y     | Y     | Y             |
| Weave               | Y     | Y     | Y             |
| Flannel             | Y     | N     | N             |
| Canal               | Y     | N     | N             |
| Cilium              | Y     | N     | N             |
| Contiv              | Y     | N     | N             |
| kube-router         | Y     | N     | N             |
@@ -51,25 +51,6 @@ This is the AppId from the last command

 azure\_aad\_client\_id must be set to the AppId, azure\_aad\_client\_secret is your chosen secret.

-#### azure\_loadbalancer\_sku
-Sku of Load Balancer and Public IP. Candidate values are: basic and standard.
-
-#### azure\_exclude\_master\_from\_standard\_lb
-azure\_exclude\_master\_from\_standard\_lb excludes master nodes from the `standard` load balancer.
-
-#### azure\_disable\_outbound\_snat
-azure\_disable\_outbound\_snat disables the outbound SNAT for public load balancer rules. It should only be set when azure\_loadbalancer\_sku is `standard`.
-
-#### azure\_primary\_availability\_set\_name
-(Optional) The name of the availability set that should be used as the load balancer backend. If this is set, the Azure
-cloudprovider will only add nodes from that availability set to the load balancer backend pool. If this is not set, and
-multiple agent pools (availability sets) are used, then the cloudprovider will try to add all nodes to a single backend
-pool, which is forbidden. In other words, if you use multiple agent pools (availability sets), you MUST set this field.
-
-#### azure\_use\_instance\_metadata
-Use instance metadata service where possible.
-
 ## Provisioning Azure with Resource Group Templates

 You'll find Resource Group Templates and scripts to provision the required infrastructure to Azure in [*contrib/azurerm*](../contrib/azurerm/README.md)
Some files were not shown because too many files have changed in this diff.