Mirror of https://github.com/kubernetes-sigs/kubespray.git, synced 2025-12-14 22:04:43 +03:00

Compare commits: 15 commits, v2.29.1 ... release-2.
| Author | SHA1 | Date |
|---|---|---|
| | d3f6079991 | |
| | 7aa8b82512 | |
| | ec974e16fa | |
| | 6f97687d19 | |
| | 447605ca0e | |
| | 3901480bc1 | |
| | c42cb8f9b2 | |
| | 5c28bb0679 | |
| | 6d53229986 | |
| | 1e57d2e21a | |
| | ea41fc5e74 | |
| | 4167807f17 | |
| | 2ac1c7562f | |
| | 2d6e31d281 | |
| | 0a19d1bf01 | |
@@ -1,47 +0,0 @@
---
parseable: true
skip_list:
  # see https://docs.ansible.com/ansible-lint/rules/default_rules.html for a list of all default rules

  # DO NOT add any other rules to this skip_list, instead use local `# noqa` with a comment explaining WHY it is necessary

  # These rules are intentionally skipped:
  #
  # [role-name] "meta/main.yml" Role name role-name does not match ``^+$`` pattern
  # Meta roles in Kubespray don't need proper names
  # (Disabled in June 2021)
  - 'role-name'

  # [var-naming]
  # In Kubespray we use variables that use camelCase to match their k8s counterparts
  # (Disabled in June 2021)
  - 'var-naming[pattern]'
  # Variable names from within roles in kubespray don't need the role name as a prefix
  - 'var-naming[no-role-prefix]'

  # [fqcn-builtins]
  # Roles in kubespray don't need fully qualified collection names
  # (Disabled in Feb 2023)
  - 'fqcn-builtins'

  # We use template in names
  - 'name[template]'

  # No changed-when on commands
  # (Disabled in June 2023 after ansible upgrade; FIXME)
  - 'no-changed-when'

  # Disable run-once check with free strategy
  # (Disabled in June 2023 after ansible upgrade; FIXME)
  - 'run-once[task]'
exclude_paths:
  # Generated files
  - tests/files/custom_cni/cilium.yaml
  - venv
  - .github
  - .ansible
  - .cache
  - .gitlab-ci.yml
  - .gitlab-ci
mock_modules:
  - gluster.gluster.gluster_volume
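As the skip_list comment above insists, new violations should be silenced locally with `# noqa` plus a reason, not added to the global skip list. A minimal sketch of what such a local suppression looks like in a task file (hypothetical task, not taken from the repo):

```yaml
# Hypothetical task showing a local ansible-lint suppression with a reason,
# as the DO-NOT-add-to-skip_list comment above requires.
- name: Ensure br_netfilter kernel module is loaded
  command: modprobe br_netfilter  # noqa no-changed-when - modprobe gives no reliable signal for changed_when
```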
@@ -1,8 +0,0 @@
# This file contains ignored rule violations for ansible-lint
inventory/sample/group_vars/k8s_cluster/k8s-cluster.yml jinja[spacing]
roles/kubernetes/control-plane/defaults/main/kube-proxy.yml jinja[spacing]
roles/kubernetes/control-plane/defaults/main/main.yml jinja[spacing]
roles/kubernetes/kubeadm/defaults/main.yml jinja[spacing]
roles/kubernetes/node/defaults/main.yml jinja[spacing]
roles/kubernetes/preinstall/defaults/main.yml jinja[spacing]
roles/kubespray-defaults/defaults/main/main.yml jinja[spacing]
@@ -1,15 +0,0 @@
root = true

[*.{yaml,yml,yml.j2,yaml.j2}]
indent_style = space
indent_size = 2
trim_trailing_whitespace = true
insert_final_newline = true
charset = utf-8

[{Dockerfile}]
indent_style = space
indent_size = 2
trim_trailing_whitespace = true
insert_final_newline = true
charset = utf-8
.gitattributes (vendored) — 1 line
@@ -1 +0,0 @@
docs/_sidebar.md linguist-generated=true
.github/ISSUE_TEMPLATE.md (vendored, new file) — 47 lines
@@ -0,0 +1,47 @@
<!-- Thanks for filing an issue! Before hitting the button, please answer these questions.-->

**Is this a BUG REPORT or FEATURE REQUEST?** (choose one):

<!--
If this is a BUG REPORT, please:
  - Fill in as much of the template below as you can. If you leave out
    information, we may not be able to help you as well.

If this is a FEATURE REQUEST, please:
  - Describe *in detail* the feature/behavior/change you'd like to see.

In both cases, be ready for followup questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
-->

**Environment**:
- **Cloud provider or hardware configuration:**

- **OS (`printf "$(uname -srm)\n$(cat /etc/os-release)\n"`):**

- **Version of Ansible** (`ansible --version`):

**Kubespray version (commit) (`git rev-parse --short HEAD`):**

**Network plugin used**:

**Copy of your inventory file:**

**Command used to invoke ansible**:

**Output of ansible run**:
<!-- We recommend using snippets services like https://gist.github.com/ etc. -->

**Anything else we need to know**:
<!-- By running scripts/collect-info.yaml you can get a lot of useful information.
The script can be started by:
ansible-playbook -i <inventory_file_path> -u <ssh_user> -e ansible_ssh_user=<ssh_user> -b --become-user=root -e dir=`pwd` scripts/collect-info.yaml
(If you are using CoreOS remember to add '-e ansible_python_interpreter=/opt/bin/python').
After running this command you can find logs in `pwd`/logs.tar.gz. You can upload the entire file somewhere and paste the link here.-->
.github/ISSUE_TEMPLATE/bug-report.yaml (vendored) — 147 lines
@@ -1,147 +0,0 @@
---
name: Bug Report
description: Report a bug encountered while using Kubespray
labels: kind/bug
body:
  - type: markdown
    attributes:
      value: |
        Please, be ready for followup questions, and please respond in a timely
        manner. If we can't reproduce a bug or think a feature already exists, we
        might close your issue. If we're wrong, PLEASE feel free to reopen it and
        explain why.
  - type: textarea
    id: problem
    attributes:
      label: What happened?
      description: |
        Please provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner.
    validations:
      required: true
  - type: textarea
    id: expected
    attributes:
      label: What did you expect to happen?
    validations:
      required: true

  - type: textarea
    id: repro
    attributes:
      label: How can we reproduce it (as minimally and precisely as possible)?
    validations:
      required: true

  - type: markdown
    attributes:
      value: '### Environment'

  - type: dropdown
    id: os
    attributes:
      label: OS
      options:
        - 'RHEL 9'
        - 'RHEL 8'
        - 'Fedora 40'
        - 'Ubuntu 24'
        - 'Ubuntu 22'
        - 'Ubuntu 20'
        - 'Debian 12'
        - 'Debian 11'
        - 'Flatcar Container Linux'
        - 'openSUSE Leap'
        - 'openSUSE Tumbleweed'
        - 'Oracle Linux 9'
        - 'Oracle Linux 8'
        - 'AlmaLinux 9'
        - 'AlmaLinux 8'
        - 'Rocky Linux 9'
        - 'Rocky Linux 8'
        - 'Amazon Linux 2'
        - 'Kylin Linux Advanced Server V10'
        - 'UOS Linux 20'
        - 'openEuler 24'
        - 'openEuler 22'
        - 'openEuler 20'
        - 'Other|Unsupported'
    validations:
      required: true

  - type: textarea
    id: ansible_version
    attributes:
      label: Version of Ansible
      placeholder: 'ansible --version'
    validations:
      required: true

  - type: input
    id: python_version
    attributes:
      label: Version of Python
      placeholder: 'python --version'
    validations:
      required: true

  - type: input
    id: kubespray_version
    attributes:
      label: Version of Kubespray (commit)
      placeholder: 'git rev-parse --short HEAD'
    validations:
      required: true

  - type: dropdown
    id: network_plugin
    attributes:
      label: Network plugin used
      options:
        - calico
        - cilium
        - cni
        - custom_cni
        - flannel
        - kube-ovn
        - kube-router
        - macvlan
        - meta
        - multus
        - ovn4nfv
    validations:
      required: true

  - type: textarea
    id: inventory
    attributes:
      label: Full inventory with variables
      placeholder: 'ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"'
      description: We recommend using snippets services like https://gist.github.com/ etc.
    validations:
      required: true

  - type: input
    id: ansible_command
    attributes:
      label: Command used to invoke ansible
    validations:
      required: true

  - type: textarea
    id: ansible_output
    attributes:
      label: Output of ansible run
      description: We recommend using snippets services like https://gist.github.com/ etc.
    validations:
      required: true

  - type: textarea
    id: anything_else
    attributes:
      label: Anything else we need to know
      description: |
        By running scripts/collect-info.yaml you can get a lot of useful information.
        The script can be started by:
        ansible-playbook -i <inventory_file_path> -u <ssh_user> -e ansible_ssh_user=<ssh_user> -b --become-user=root -e dir=`pwd` scripts/collect-info.yaml
        (If you are using CoreOS remember to add '-e ansible_python_interpreter=/opt/bin/python').
        After running this command you can find logs in `pwd`/logs.tar.gz. You can upload the entire file somewhere and paste the link here
.github/ISSUE_TEMPLATE/config.yml (vendored) — 6 lines
@@ -1,6 +0,0 @@
---
blank_issues_enabled: false
contact_links:
  - name: Support Request
    url: https://kubernetes.slack.com/channels/kubespray
    about: Support request or question relating to Kubernetes
.github/ISSUE_TEMPLATE/enhancement.yaml (vendored) — 20 lines
@@ -1,20 +0,0 @@
---
name: Enhancement Request
description: Suggest an enhancement to the Kubespray project
labels: kind/feature
body:
  - type: markdown
    attributes:
      value: Please only use this template for submitting enhancement requests
  - type: textarea
    id: what
    attributes:
      label: What would you like to be added
    validations:
      required: true
  - type: textarea
    id: why
    attributes:
      label: Why is this needed
    validations:
      required: true
.github/ISSUE_TEMPLATE/failing-test.yaml (vendored) — 41 lines
@@ -1,41 +0,0 @@
---
name: Failing Test
description: Report test failures in Kubespray CI jobs
labels: kind/failing-test
body:
  - type: markdown
    attributes:
      value: Please only use this template for submitting reports about failing tests in Kubespray CI jobs
  - type: textarea
    id: failing_jobs
    attributes:
      label: Which jobs are failing?
    validations:
      required: true

  - type: textarea
    id: failing_tests
    attributes:
      label: Which tests are failing?
    validations:
      required: true

  - type: input
    id: since_when
    attributes:
      label: Since when has it been failing?
    validations:
      required: true

  - type: textarea
    id: failure_reason
    attributes:
      label: Reason for failure
      description: If you don't know and have no guess, just put "Unknown"
    validations:
      required: true

  - type: textarea
    id: anything_else
    attributes:
      label: Anything else we need to know
.github/PULL_REQUEST_TEMPLATE.md (vendored) — 44 lines
@@ -1,44 +0,0 @@
<!-- Thanks for sending a pull request! Here are some tips for you:

1. If this is your first time, please read our contributor guidelines: https://git.k8s.io/community/contributors/guide/first-contribution.md and developer guide https://git.k8s.io/community/contributors/devel/development.md
2. Please label this pull request according to what type of issue you are addressing, especially if this is a release targeted pull request. For reference on required PR/issue labels, read here:
https://git.k8s.io/community/contributors/devel/sig-release/release.md#issuepr-kind-label
3. Ensure you have added or run the appropriate tests for your PR: https://git.k8s.io/community/contributors/devel/sig-testing/testing.md
4. If you want *faster* PR reviews, read how: https://git.k8s.io/community/contributors/guide/pull-requests.md#best-practices-for-faster-reviews
5. Follow the instructions for writing a release note: https://git.k8s.io/community/contributors/guide/release-notes.md
6. If the PR is unfinished, see how to mark it: https://git.k8s.io/community/contributors/guide/pull-requests.md#marking-unfinished-pull-requests
-->

**What type of PR is this?**
> Uncomment only one ` /kind <>` line, hit enter to put that in a new line, and remove leading whitespace from that line:
>
> /kind api-change
> /kind bug
> /kind cleanup
> /kind design
> /kind documentation
> /kind failing-test
> /kind feature
> /kind flake

**What this PR does / why we need it**:

**Which issue(s) this PR fixes**:
<!--
*Automatically closes linked issue when PR is merged.
Usage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`.
_If PR is about `failing-tests or flakes`, please post the related issues/tests in a comment and do not use `Fixes`_*
-->
Fixes #

**Special notes for your reviewer**:

**Does this PR introduce a user-facing change?**:
<!--
If no, just write "NONE" in the release-note block below.
If yes, a release note is required:
Enter your extended release note in the block below. If the PR requires additional action from users switching to the new release, include the string "action required".
-->
```release-note

```
.github/dependabot.yml (vendored) — 21 lines
@@ -1,21 +0,0 @@
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
    labels:
      - dependencies
      - release-note-none
    groups:
      molecule:
        patterns:
          - molecule
          - molecule-plugins*
  - package-ecosystem: "github-actions"
    directory: "/"
    labels:
      - release-note-none
      - ci-short
    schedule:
      interval: "weekly"
.github/workflows/auto-label-os.yml (vendored) — 32 lines
@@ -1,32 +0,0 @@
name: Issue labeler
on:
  issues:
    types: [opened]

permissions:
  contents: read

jobs:
  label-component:
    runs-on: ubuntu-latest
    permissions:
      issues: write

    steps:
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8

      - name: Parse issue form
        uses: stefanbuck/github-issue-parser@2ea9b35a8c584529ed00891a8f7e41dc46d0441e
        id: issue-parser
        with:
          template-path: .github/ISSUE_TEMPLATE/bug-report.yaml

      - name: Set labels based on OS field
        uses: redhat-plumbers-in-action/advanced-issue-labeler@e38e6809c5420d038eed380d49ee9a6ca7c92dbf
        with:
          issue-form: ${{ steps.issue-parser.outputs.jsonString }}
          section: os
          block-list: |
            None
            Other
          token: ${{ secrets.GITHUB_TOKEN }}
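`github-issue-parser` turns the submitted form answers into a JSON object keyed by the template's field ids, which `advanced-issue-labeler` then reads via `section: os`. A hedged sketch of the data `steps.issue-parser.outputs.jsonString` might carry for the bug-report template above, shown here as YAML with illustrative values (the exact keys and serialization are an assumption, not captured output):

```yaml
# Assumed shape of the parsed issue form (field ids from bug-report.yaml above):
problem: "kubelet fails to start after upgrade"
os: "Ubuntu 24"
network_plugin: "calico"
```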
@@ -1,55 +0,0 @@
name: Upgrade Kubespray components with new patch versions - all branches

on:
  schedule:
    - cron: '22 2 * * *' # every day, 02:22 UTC
  workflow_dispatch:

permissions: {}
jobs:
  get-releases-branches:
    if: github.repository == 'kubernetes-sigs/kubespray'
    runs-on: ubuntu-latest
    outputs:
      branches: ${{ steps.get-branches.outputs.data }}
    steps:
      - uses: octokit/graphql-action@8ad880e4d437783ea2ab17010324de1075228110
        id: get-branches
        with:
          query: |
            query get_release_branches($owner:String!, $name:String!) {
              repository(owner:$owner, name:$name) {
                refs(refPrefix: "refs/heads/",
                     first: 1, # TODO increment once we have release branch with the new checksums format
                     query: "release-",
                     orderBy: {
                       field: ALPHABETICAL,
                       direction: DESC
                     }) {
                  nodes {
                    name
                  }
                }
              }
            }
          variables: |
            owner: ${{ github.repository_owner }}
            name: ${{ github.event.repository.name }}
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  update-versions:
    needs: get-releases-branches
    strategy:
      fail-fast: false
      matrix:
        branch:
          - name: ${{ github.event.repository.default_branch }}
          - ${{ fromJSON(needs.get-releases-branches.outputs.branches).repository.refs.nodes }}
    uses: ./.github/workflows/upgrade-patch-versions.yml
    permissions:
      contents: write
      pull-requests: write
    name: Update patch updates on ${{ matrix.branch.name }}
    with:
      branch: ${{ matrix.branch.name }}
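The `update-versions` matrix above splices the GraphQL result into the branch list via `fromJSON`. Assuming the query returned a single `release-2.29` ref, the effective matrix would expand roughly like this (an illustration of the mechanism, not actual workflow output):

```yaml
# Assumed expansion after fromJSON splices in the refs nodes list:
matrix:
  branch:
    - name: master        # the default-branch entry
    - name: release-2.29  # hypothetical node returned by the GraphQL query
```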
.github/workflows/upgrade-patch-versions.yml (vendored) — 44 lines
@@ -1,44 +0,0 @@
on:
  workflow_call:
    inputs:
      branch:
        description: Which branch to update with new patch versions
        default: master
        required: true
        type: string

jobs:
  update-patch-versions:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
        with:
          ref: ${{ inputs.branch }}
      - uses: actions/setup-python@v6
        with:
          python-version: '3.13'
          cache: 'pip'
      - run: pip install scripts/component_hash_update pre-commit
      - run: update-hashes
        env:
          API_KEY: ${{ secrets.GITHUB_TOKEN }}
      - uses: actions/cache@v4
        with:
          key: pre-commit-hook-propagate
          path: |
            ~/.cache/pre-commit
      - run: pre-commit run --all-files propagate-ansible-variables
        continue-on-error: true
      - uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e
        with:
          commit-message: Patch versions updates
          title: Patch versions updates - ${{ inputs.branch }}
          labels: bot
          branch: component_hash_update/${{ inputs.branch }}
          sign-commits: true
          body: |
            /kind feature

            ```release-note
            NONE
            ```
.gitignore (vendored) — 36 lines
@@ -1,34 +1,21 @@
 .vagrant
 *.retry
 **/vagrant_ansible_inventory
-*.iml
+inventory/credentials/
+inventory/group_vars/fake_hosts.yml
+inventory/host_vars/
 temp
-contrib/offline/container-images
-contrib/offline/container-images.tar.gz
-contrib/offline/offline-files
-contrib/offline/offline-files.tar.gz
 .idea
-.vscode
 .tox
 .cache
 *.bak
 *.tfstate
-*.tfstate*backup
+*.tfstate.backup
-*.lock.hcl
-.terraform/
 contrib/terraform/aws/credentials.tfvars
-.terraform.lock.hcl
 /ssh-bastion.conf
 **/*.sw[pon]
 *~
 vagrant/
-plugins/mitogen
-
-# Ansible inventory
-inventory/*
-!inventory/local
-!inventory/sample
-inventory/*/artifacts/
 
 # Byte-compiled / optimized / DLL files
 __pycache__/
@@ -37,6 +24,7 @@ __pycache__/
 
 # Distribution / packaging
 .Python
+inventory/*/artifacts/
 env/
 build/
 credentials/
@@ -106,17 +94,3 @@ target/
 # virtualenv
 venv/
 ENV/
-
-# molecule
-roles/**/molecule/**/__pycache__/
-
-# macOS
-.DS_Store
-
-# Temp location used by our scripts
-scripts/tmp/
-tmp.md
-
-# Ansible collection files
-kubernetes_sigs-kubespray*tar.gz
-ansible_collections
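The `inventory/*` / `!inventory/local` / `!inventory/sample` trio removed in the first hunk relies on gitignore negation: a broad ignore followed by re-include rules, which only take effect when they come after the pattern they override. A minimal generic sketch of the idiom (not tied to this repo):

```gitignore
# Ignore everything one level under inventory/ ...
inventory/*
# ...then re-include specific checked-in trees; order matters, as
# negations must follow the broad pattern they carve exceptions out of.
!inventory/local
!inventory/sample
```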
.gitlab-ci.yml — 769 lines
@@ -1,66 +1,749 @@
----
 stages:
-  - build # build docker image used in most other jobs
-  - test # unit tests
-  - deploy-part1 # kubespray runs - common setup
-  - deploy-extended # kubespray runs - rarer or costlier (to test) setups
+  - unit-tests
+  - moderator
+  - deploy-part1
+  - deploy-part2
+  - deploy-special
 
 variables:
   FAILFASTCI_NAMESPACE: 'kargo-ci'
   GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
-  GIT_CONFIG_COUNT: 2
-  GIT_CONFIG_KEY_0: user.email
-  GIT_CONFIG_VALUE_0: "ci@kubespray.io"
-  GIT_CONFIG_KEY_1: user.name
-  GIT_CONFIG_VALUE_1: "Kubespray CI"
+  # DOCKER_HOST: tcp://localhost:2375
   ANSIBLE_FORCE_COLOR: "true"
   MAGIC: "ci check this"
+  TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
+  CI_TEST_VARS: "./tests/files/${CI_JOB_NAME}.yml"
   GS_ACCESS_KEY_ID: $GS_KEY
   GS_SECRET_ACCESS_KEY: $GS_SECRET
   CONTAINER_ENGINE: docker
+  SSH_USER: root
   GCE_PREEMPTIBLE: "false"
   ANSIBLE_KEEP_REMOTE_FILES: "1"
   ANSIBLE_CONFIG: ./tests/ansible.cfg
-  ANSIBLE_REMOTE_USER: kubespray
-  ANSIBLE_PRIVATE_KEY_FILE: /tmp/id_rsa
-  ANSIBLE_INVENTORY: /tmp/inventory
-  ANSIBLE_STDOUT_CALLBACK: "default"
+  ANSIBLE_INVENTORY: ./inventory/sample/${CI_JOB_NAME}-${BUILD_NUMBER}.ini
+  IDEMPOT_CHECK: "false"
   RESET_CHECK: "false"
-  REMOVE_NODE_CHECK: "false"
   UPGRADE_TEST: "false"
-  MITOGEN_ENABLE: "false"
-  ANSIBLE_VERBOSITY: 2
-  RECOVER_CONTROL_PLANE_TEST: "false"
-  RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]"
-  OPENTOFU_VERSION: v1.9.1
-  PIPELINE_IMAGE: "$CI_REGISTRY_IMAGE/pipeline:${CI_PIPELINE_ID}-${CI_COMMIT_SHORT_SHA}"
+  KUBEADM_ENABLED: "false"
+  LOG_LEVEL: "-vv"
+  # asia-east1-a
+  # asia-northeast1-a
+  # europe-west1-b
+  # us-central1-a
+  # us-east1-b
+  # us-west1-a
 
 before_script:
-  - ./tests/scripts/rebase.sh
-  - mkdir -p cluster-dump $ANSIBLE_INVENTORY
+  - /usr/bin/python -m pip install -r tests/requirements.txt
+  - mkdir -p /.ssh
 
 .job: &job
   tags:
-    - ffci
-  image: $PIPELINE_IMAGE
-  artifacts:
-    when: always
-    paths:
-      - cluster-dump/
-  needs:
-    - pipeline-image
+    - kubernetes
+    - docker
+  image: quay.io/kubespray/kubespray:v2.7
+
+.docker_service: &docker_service
+  services:
+    - docker:dind
+
+.create_cluster: &create_cluster
+  <<: *job
+  <<: *docker_service
+
+.gce_variables: &gce_variables
+  GCE_USER: travis
+  SSH_USER: $GCE_USER
+  CLOUD_MACHINE_TYPE: "g1-small"
+  CI_PLATFORM: "gce"
+  PRIVATE_KEY: $GCE_PRIVATE_KEY
+
+.do_variables: &do_variables
+  PRIVATE_KEY: $DO_PRIVATE_KEY
+  CI_PLATFORM: "do"
+  SSH_USER: root
+
+
+.testcases: &testcases
+  <<: *job
+  <<: *docker_service
+  cache:
+    key: "$CI_BUILD_REF_NAME"
+    paths:
+      - downloads/
+      - $HOME/.cache
+  before_script:
+    - docker info
+    - /usr/bin/python -m pip install -r requirements.txt
+    - /usr/bin/python -m pip install -r tests/requirements.txt
+    - mkdir -p /.ssh
+    - mkdir -p $HOME/.ssh
+    - ansible-playbook --version
+    - export PYPATH=$([[ ! "$CI_JOB_NAME" =~ "coreos" ]] && echo /usr/bin/python || echo /opt/bin/python)
+    - echo "CI_JOB_NAME is $CI_JOB_NAME"
+    - echo "PYPATH is $PYPATH"
+  script:
+    - pwd
+    - ls
+    - echo ${PWD}
+    - echo "${STARTUP_SCRIPT}"
+    - cd tests && make create-${CI_PLATFORM} -s ; cd -
 
-.job-moderated:
-  extends: .job
-  needs:
-    - pipeline-image
-    - pre-commit # lint
-    - vagrant-validate # lint
+    # Check out latest tag if testing upgrade
+    # Uncomment when gitlab kubespray repo has tags
+    #- test "${UPGRADE_TEST}" != "false" && git fetch --all && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
+    - test "${UPGRADE_TEST}" != "false" && git checkout 53d87e53c5899d4ea2904ab7e3883708dd6363d3
+    # Checkout the CI vars file so it is available
+    - test "${UPGRADE_TEST}" != "false" && git checkout "${CI_BUILD_REF}" tests/files/${CI_JOB_NAME}.yml
+    # Workaround https://github.com/kubernetes-sigs/kubespray/issues/2021
+    - 'sh -c "echo ignore_assert_errors: true | tee -a tests/files/${CI_JOB_NAME}.yml"'
 
-include:
-  - .gitlab-ci/build.yml
-  - .gitlab-ci/lint.yml
-  - .gitlab-ci/terraform.yml
-  - .gitlab-ci/kubevirt.yml
-  - .gitlab-ci/vagrant.yml
-  - .gitlab-ci/molecule.yml
+    # Create cluster
+    - >
+      ansible-playbook
+      -i ${ANSIBLE_INVENTORY}
+      -b --become-user=root
+      --private-key=${HOME}/.ssh/id_rsa
+      -u $SSH_USER
+      ${SSH_ARGS}
+      ${LOG_LEVEL}
+      -e @${CI_TEST_VARS}
+      -e ansible_ssh_user=${SSH_USER}
+      -e local_release_dir=${PWD}/downloads
+      --limit "all:!fake_hosts"
+      cluster.yml
+
+    # Repeat deployment if testing upgrade
+    - >
+      if [ "${UPGRADE_TEST}" != "false" ]; then
+      test "${UPGRADE_TEST}" == "basic" && PLAYBOOK="cluster.yml";
+      test "${UPGRADE_TEST}" == "graceful" && PLAYBOOK="upgrade-cluster.yml";
+      git checkout "${CI_BUILD_REF}";
+      ansible-playbook
+      -i ${ANSIBLE_INVENTORY}
+      -b --become-user=root
+      --private-key=${HOME}/.ssh/id_rsa
+      -u $SSH_USER
+      ${SSH_ARGS}
+      ${LOG_LEVEL}
+      -e @${CI_TEST_VARS}
+      -e ansible_ssh_user=${SSH_USER}
+      -e local_release_dir=${PWD}/downloads
+      --limit "all:!fake_hosts"
+      $PLAYBOOK;
+      fi
+
+    # Tests Cases
+    ## Test Master API
+    - >
+      ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/010_check-apiserver.yml $LOG_LEVEL
+      -e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
+
+    ## Ping between 2 pods
+    - ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/030_check-network.yml $LOG_LEVEL
+
+    ## Advanced DNS checks
+    - ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/040_check-network-adv.yml $LOG_LEVEL
+
+    ## Idempotency checks 1/5 (repeat deployment)
+    - >
+      if [ "${IDEMPOT_CHECK}" = "true" ]; then
+      ansible-playbook
+      -i ${ANSIBLE_INVENTORY}
+      -b --become-user=root
+      --private-key=${HOME}/.ssh/id_rsa
+      -u $SSH_USER
+      ${SSH_ARGS}
+      ${LOG_LEVEL}
+      -e @${CI_TEST_VARS}
+      -e ansible_python_interpreter=${PYPATH}
+      -e local_release_dir=${PWD}/downloads
+      --limit "all:!fake_hosts"
+      cluster.yml;
+      fi
+
+    ## Idempotency checks 2/5 (Advanced DNS checks)
+    - >
+      if [ "${IDEMPOT_CHECK}" = "true" ]; then
+      ansible-playbook
+      -i ${ANSIBLE_INVENTORY}
+      -b --become-user=root
+      --private-key=${HOME}/.ssh/id_rsa
+      -u $SSH_USER
+      ${SSH_ARGS}
+      ${LOG_LEVEL}
+      -e @${CI_TEST_VARS}
+      --limit "all:!fake_hosts"
+      tests/testcases/040_check-network-adv.yml $LOG_LEVEL;
+      fi
+
+    ## Idempotency checks 3/5 (reset deployment)
+    - >
+      if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
+      ansible-playbook
+      -i ${ANSIBLE_INVENTORY}
+      -b --become-user=root
+      --private-key=${HOME}/.ssh/id_rsa
+      -u $SSH_USER
+      ${SSH_ARGS}
+      ${LOG_LEVEL}
+      -e @${CI_TEST_VARS}
+      -e ansible_python_interpreter=${PYPATH}
+      -e reset_confirmation=yes
+      --limit "all:!fake_hosts"
+      reset.yml;
+      fi
+
+    ## Idempotency checks 4/5 (redeploy after reset)
+    - >
+      if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
+      ansible-playbook
+      -i ${ANSIBLE_INVENTORY}
+      -b --become-user=root
+      --private-key=${HOME}/.ssh/id_rsa
+      -u $SSH_USER
+      ${SSH_ARGS}
+      ${LOG_LEVEL}
+      -e @${CI_TEST_VARS}
+      -e ansible_python_interpreter=${PYPATH}
+      -e local_release_dir=${PWD}/downloads
+      --limit "all:!fake_hosts"
+      cluster.yml;
+      fi
+
+    ## Idempotency checks 5/5 (Advanced DNS checks)
+    - >
+      if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
+      ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH}
+      -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root
+      --limit "all:!fake_hosts"
+      tests/testcases/040_check-network-adv.yml $LOG_LEVEL;
+      fi
+
+  after_script:
+    - cd tests && make delete-${CI_PLATFORM} -s ; cd -
+
+.gce: &gce
+  <<: *testcases
+  variables:
+    <<: *gce_variables
+
+.do: &do
+  variables:
+    <<: *do_variables
+  <<: *testcases
+
+# Test matrix. Leave the comments for markup scripts.
+.coreos_calico_aio_variables: &coreos_calico_aio_variables
+  # stage: deploy-part1
+  MOVED_TO_GROUP_VARS: "true"
+
+.ubuntu18_flannel_aio_variables: &ubuntu18_flannel_aio_variables
+  # stage: deploy-part1
+  MOVED_TO_GROUP_VARS: "true"
+
+.centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables
+  # stage: deploy-part1
+  UPGRADE_TEST: "graceful"
+
+.ubuntu_canal_kubeadm_variables: &ubuntu_canal_kubeadm_variables
+  # stage: deploy-part1
+  MOVED_TO_GROUP_VARS: "true"
+
+.ubuntu_canal_ha_variables: &ubuntu_canal_ha_variables
+  # stage: deploy-special
+  MOVED_TO_GROUP_VARS: "true"
+
+.ubuntu_contiv_sep_variables: &ubuntu_contiv_sep_variables
+  # stage: deploy-special
+  MOVED_TO_GROUP_VARS: "true"
+
+.coreos_cilium_variables: &coreos_cilium_variables
+  # stage: deploy-special
+  MOVED_TO_GROUP_VARS: "true"
+
+.ubuntu_cilium_sep_variables: &ubuntu_cilium_sep_variables
+  # stage: deploy-special
+  MOVED_TO_GROUP_VARS: "true"
+
+.rhel7_weave_variables: &rhel7_weave_variables
+  # stage: deploy-part1
+  MOVED_TO_GROUP_VARS: "true"
+
+.centos7_flannel_addons_variables: &centos7_flannel_addons_variables
+  # stage: deploy-part2
+  MOVED_TO_GROUP_VARS: "true"
+
+.debian9_calico_variables: &debian9_calico_variables
+  # stage: deploy-part2
+  MOVED_TO_GROUP_VARS: "true"
+
+.coreos_canal_variables: &coreos_canal_variables
+  # stage: deploy-part2
+  MOVED_TO_GROUP_VARS: "true"
+
+.rhel7_canal_sep_variables: &rhel7_canal_sep_variables
+  # stage: deploy-special
+  MOVED_TO_GROUP_VARS: "true"
+
+.ubuntu_weave_sep_variables: &ubuntu_weave_sep_variables
+  # stage: deploy-special
+  MOVED_TO_GROUP_VARS: "true"
+
+.centos7_calico_ha_variables: &centos7_calico_ha_variables
+  # stage: deploy-special
+  MOVED_TO_GROUP_VARS: "true"
+
+.centos7_kube_router_variables: &centos7_kube_router_variables
+  # stage: deploy-special
+  MOVED_TO_GROUP_VARS: "true"
+
+.centos7_multus_calico_variables: &centos7_multus_calico_variables
+  # stage: deploy-part2
+  UPGRADE_TEST: "graceful"
+
+.coreos_alpha_weave_ha_variables: &coreos_alpha_weave_ha_variables
+  # stage: deploy-special
+  MOVED_TO_GROUP_VARS: "true"
+
+.coreos_kube_router_variables: &coreos_kube_router_variables
+  # stage: deploy-special
+  MOVED_TO_GROUP_VARS: "true"
+
+.ubuntu_rkt_sep_variables: &ubuntu_rkt_sep_variables
+  # stage: deploy-part1
+  MOVED_TO_GROUP_VARS: "true"
+
+.ubuntu_flannel_variables: &ubuntu_flannel_variables
+  # stage: deploy-part2
+  MOVED_TO_GROUP_VARS: "true"
+
+.ubuntu_kube_router_variables: &ubuntu_kube_router_variables
+  # stage: deploy-special
+  MOVED_TO_GROUP_VARS: "true"
+
+.opensuse_canal_variables: &opensuse_canal_variables
+  # stage: deploy-part2
+  MOVED_TO_GROUP_VARS: "true"
+
+
+# Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
+### PR JOBS PART1
+
+gce_ubuntu18-flannel-aio:
+  stage: deploy-part1
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *ubuntu18_flannel_aio_variables
+    <<: *gce_variables
+  when: on_success
+  except: ['triggers']
+  only: [/^pr-.*$/]
+
+### PR JOBS PART2
+
+gce_coreos-calico-aio:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *coreos_calico_aio_variables
+    <<: *gce_variables
+  when: on_success
+  except: ['triggers']
+  only: [/^pr-.*$/]
+
+gce_centos7-flannel-addons:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *centos7_flannel_addons_variables
+  when: on_success
+  except: ['triggers']
+  only: [/^pr-.*$/]
+
+gce_centos-weave-kubeadm-sep:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *centos_weave_kubeadm_variables
+  when: on_success
+  except: ['triggers']
+  only: [/^pr-.*$/]
+
+gce_ubuntu-flannel-ha:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *ubuntu_flannel_variables
+  when: on_success
+  except: ['triggers']
+  only: [/^pr-.*$/]
+
+### MANUAL JOBS
+
+gce_ubuntu-weave-sep:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *ubuntu_weave_sep_variables
+  when: manual
+  except: ['triggers']
+  only: [/^pr-.*$/]
+
+gce_coreos-calico-sep-triggers:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *coreos_calico_aio_variables
+  when: on_success
+  only: ['triggers']
+
+gce_ubuntu-canal-ha-triggers:
+  stage: deploy-special
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *ubuntu_canal_ha_variables
+  when: on_success
+  only: ['triggers']
+
+gce_centos7-flannel-addons-triggers:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *centos7_flannel_addons_variables
+  when: on_success
+  only: ['triggers']
+
+
+gce_ubuntu-weave-sep-triggers:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *ubuntu_weave_sep_variables
+  when: on_success
+  only: ['triggers']
+
+# More builds for PRs/merges (manual) and triggers (auto)
+do_ubuntu-canal-ha:
+  stage: deploy-part2
+  <<: *job
+  <<: *do
+  variables:
+    <<: *do_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_ubuntu-canal-ha:
+  stage: deploy-special
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *ubuntu_canal_ha_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_ubuntu-canal-kubeadm:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *ubuntu_canal_kubeadm_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_ubuntu-canal-kubeadm-triggers:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *ubuntu_canal_kubeadm_variables
+  when: on_success
+  only: ['triggers']
+
+gce_centos-weave-kubeadm-triggers:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *centos_weave_kubeadm_variables
+  when: on_success
+  only: ['triggers']
+
+gce_ubuntu-contiv-sep:
+  stage: deploy-special
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *ubuntu_contiv_sep_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_coreos-cilium:
+  stage: deploy-special
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *coreos_cilium_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_ubuntu-cilium-sep:
+  stage: deploy-special
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *ubuntu_cilium_sep_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_rhel7-weave:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *rhel7_weave_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_rhel7-weave-triggers:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *rhel7_weave_variables
+  when: on_success
+  only: ['triggers']
+
+gce_debian9-calico-upgrade:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *debian9_calico_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_debian9-calico-triggers:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *debian9_calico_variables
+  when: on_success
+  only: ['triggers']
+
+gce_coreos-canal:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *coreos_canal_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_coreos-canal-triggers:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *coreos_canal_variables
+  when: on_success
+  only: ['triggers']
+
+gce_rhel7-canal-sep:
+  stage: deploy-special
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *rhel7_canal_sep_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_rhel7-canal-sep-triggers:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *rhel7_canal_sep_variables
+  when: on_success
+  only: ['triggers']
+
+gce_centos7-calico-ha:
+  stage: deploy-special
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *centos7_calico_ha_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_centos7-calico-ha-triggers:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *centos7_calico_ha_variables
+  when: on_success
+  only: ['triggers']
+
+gce_centos7-kube-router:
+  stage: deploy-special
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *centos7_kube_router_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_centos7-multus-calico:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *centos7_multus_calico_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_opensuse-canal:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *opensuse_canal_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+# no triggers yet https://github.com/kubernetes-incubator/kargo/issues/613
+gce_coreos-alpha-weave-ha:
+  stage: deploy-special
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *coreos_alpha_weave_ha_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_coreos-kube-router:
+  stage: deploy-special
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *coreos_kube_router_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_ubuntu-rkt-sep:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *ubuntu_rkt_sep_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_ubuntu-kube-router-sep:
+  stage: deploy-special
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *ubuntu_kube_router_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+# Premoderated with manual actions
+ci-authorized:
+  <<: *job
+  stage: moderator
+  before_script:
+    - apt-get -y install jq
+  script:
+    - /bin/sh scripts/premoderator.sh
+  except: ['triggers', 'master']
+
+syntax-check:
+  <<: *job
+  stage: unit-tests
+  script:
+    - ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root cluster.yml -vvv --syntax-check
+    - ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root upgrade-cluster.yml -vvv --syntax-check
+    - ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root reset.yml -vvv --syntax-check
+    - ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root extra_playbooks/upgrade-only-k8s.yml -vvv --syntax-check
+  except: ['triggers', 'master']
+
+yamllint:
+  <<: *job
+  stage: unit-tests
+  script:
+    - yamllint roles
+  except: ['triggers', 'master']
+
+tox-inventory-builder:
+  stage: unit-tests
+  <<: *job
+  script:
+    - pip install tox
+    - cd contrib/inventory_builder && tox
+  when: manual
+  except: ['triggers', 'master']
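The added (release-2) side of this hunk is built almost entirely from YAML anchors and merge keys (`&name`, `<<: *name`), which GitLab resolves before evaluating the pipeline. A minimal sketch of how one of those jobs flattens out (generic YAML merge-key semantics, with a hypothetical job name):

```yaml
.job: &job              # anchor: a reusable mapping
  tags: [kubernetes, docker]
  image: quay.io/kubespray/kubespray:v2.7

gce_example:            # hypothetical job name, for illustration only
  <<: *job              # merge key: copies the anchored mapping's keys here
  stage: deploy-part1   # keys defined locally win over merged keys
# After resolution, gce_example carries tags, image, and stage as one mapping.
```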
@@ -1,30 +0,0 @@
---
pipeline-image:
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - image-cache
  tags:
    - ffci
  stage: build
  image: moby/buildkit:rootless
  variables:
    BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
    CACHE_IMAGE: $CI_REGISTRY_IMAGE/pipeline:cache
  # TODO: remove the override
  # currently rebase.sh depends on bash (not available in the kaniko image)
  # once we have a simpler rebase (which should be easy if the target branch ref is available as a variable)
  # we'll be able to rebase here as well, hopefully
  before_script:
    - mkdir -p ~/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > ~/.docker/config.json
  script:
    - |
      buildctl-daemonless.sh build \
        --frontend dockerfile.v0 \
        --local context=$CI_PROJECT_DIR \
        --local dockerfile=$CI_PROJECT_DIR \
        --opt filename=pipeline.Dockerfile \
        --export-cache type=registry,ref=$CACHE_IMAGE \
        --import-cache type=registry,ref=$CACHE_IMAGE \
        --output type=image,name=$PIPELINE_IMAGE,push=true
@@ -1,153 +0,0 @@
---
.kubevirt:
  extends: .job-moderated
  interruptible: true
  script:
    - ansible-playbook tests/cloud_playbooks/create-kubevirt.yml
      -e @"tests/files/${TESTCASE}.yml"
    - ./tests/scripts/testcases_run.sh
  variables:
    ANSIBLE_TIMEOUT: "120"
  tags:
    - ffci
  needs:
    - pipeline-image

# TODO: generate testcase matrices from the files in tests/files/
# this is needed to avoid the need for PR rebasing when a job was added or removed in the target branch
# (currently, a removed job in the target branch breaks the tests, because the
# pipeline definition is parsed by gitlab before the rebase.sh script)
# CI template for PRs
pr:
  stage: deploy-part1
  rules:
    - if: $PR_LABELS =~ /.*ci-short.*/
      when: manual
      allow_failure: true
    - if: $CI_COMMIT_BRANCH =~ /^pr-.*$/
      when: on_success
    - if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
      when: on_success
    - when: manual
      allow_failure: true
  extends: .kubevirt
  parallel:
    matrix:
      - TESTCASE:
          - almalinux9-crio
          - almalinux9-kube-ovn
          - debian11-calico-collection
          - debian11-macvlan
          - debian12-cilium
          - debian13-cilium
          - fedora39-kube-router
          - openeuler24-calico
          - rockylinux9-cilium
          - ubuntu22-calico-all-in-one
          - ubuntu22-calico-all-in-one-upgrade
          - ubuntu24-calico-etcd-datastore
          - ubuntu24-calico-all-in-one-hardening
          - ubuntu24-cilium-sep
          - ubuntu24-flannel-collection
          - ubuntu24-kube-router-sep
          - ubuntu24-kube-router-svc-proxy
          - ubuntu24-ha-separate-etcd
          - flatcar4081-calico
          - fedora40-flannel-crio-collection-scale

# The ubuntu24-calico-all-in-one jobs are meant as early stages to prevent running the full CI if something is horribly broken
ubuntu24-calico-all-in-one:
  stage: deploy-part1
  extends: .kubevirt
  variables:
    TESTCASE: ubuntu24-calico-all-in-one
  rules:
    - if: $CI_COMMIT_BRANCH =~ /^pr-.*$/
      when: on_success
    - if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
      when: on_success
    - when: manual
      allow_failure: true

pr_full:
  extends: .kubevirt
  stage: deploy-extended
  rules:
    - if: $PR_LABELS =~ /.*ci-full.*/
      when: on_success
    - if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
      when: on_success
    # Else run as manual
    - when: manual
      allow_failure: true
  parallel:
    matrix:
      - TESTCASE:
          - almalinux9-calico-ha-ebpf
          - almalinux9-calico-nodelocaldns-secondary
          - debian11-custom-cni
          - debian11-kubelet-csr-approver
          - debian12-custom-cni-helm
          - fedora39-calico-swap-selinux
          - fedora39-crio
          - ubuntu24-calico-ha-wireguard
          - ubuntu24-flannel-ha
          - ubuntu24-flannel-ha-once

# Need an update of the container image to use schema v2
# update: quay.io/kubespray/vm-amazon-linux-2:latest
manual:
  extends: pr_full
  parallel:
    matrix:
      - TESTCASE:
          - amazon-linux-2-all-in-one
  rules:
    - when: manual
      allow_failure: true

pr_extended:
  extends: .kubevirt
  stage: deploy-extended
  rules:
    - if: $PR_LABELS =~ /.*(ci-extended|ci-full).*/
      when: on_success
    - if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
      when: on_success
    - when: manual
      allow_failure: true
  parallel:
    matrix:
      - TESTCASE:
          - almalinux9-calico
          - almalinux9-calico-remove-node
          - almalinux9-docker
          - debian11-docker
          - debian12-calico
          - debian12-docker
          - debian13-calico
          - rockylinux9-calico
          - ubuntu22-all-in-one-docker
          - ubuntu24-all-in-one-docker
          - ubuntu24-calico-all-in-one
          - ubuntu24-calico-etcd-kubeadm
          - ubuntu24-flannel

# TODO: migrate to pr-full, fix the broken ones
periodic:
  allow_failure: true
  extends: .kubevirt
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
      when: on_success
  parallel:
    matrix:
      - TESTCASE:
          - debian11-calico-upgrade
          - debian11-calico-upgrade-once
          - debian12-cilium-svc-proxy
          - fedora39-calico-selinux
          - fedora40-docker-calico
          - ubuntu24-calico-etcd-kubeadm-upgrade-ha
          - ubuntu24-calico-ha-recover
          - ubuntu24-calico-ha-recover-noquorum
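Each of the `parallel: matrix` blocks above fans a single job definition out into one concurrent job per `TESTCASE` value, with the value exposed to the job as an environment variable. A minimal sketch of the mechanism (hypothetical job name, trimmed to two values):

```yaml
smoke:                                        # hypothetical job name, for illustration
  script: ./tests/scripts/testcases_run.sh    # runs once per generated job, with $TESTCASE set
  parallel:
    matrix:
      - TESTCASE:   # yields "smoke: [ubuntu24-calico-all-in-one]" and "smoke: [debian12-cilium]"
          - ubuntu24-calico-all-in-one
          - debian12-cilium
```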
@@ -1,26 +0,0 @@
---
pre-commit:
  stage: test
  tags:
    - ffci
  image: 'ghcr.io/pre-commit-ci/runner-image@sha256:fe01a6ec51b298412990b88627c3973b1146c7304f930f469bafa29ba60bcde9'
  variables:
    PRE_COMMIT_HOME: ${CI_PROJECT_DIR}/.cache/pre-commit
    ANSIBLE_STDOUT_CALLBACK: default
  script:
    - pre-commit run --all-files --show-diff-on-failure
  cache:
    key: pre-commit-2
    paths:
      - ${PRE_COMMIT_HOME}
    when: 'always'
  needs: []

vagrant-validate:
  extends: .job
  stage: test
  tags: [ffci]
  variables:
    VAGRANT_VERSION: 2.3.7
  script:
    - ./tests/scripts/vagrant-validate.sh
@@ -1,55 +0,0 @@
---
.molecule:
  tags: [ffci]
  rules: # run on ci-short as well
    - if: $CI_COMMIT_BRANCH =~ /^pr-.*$/
      when: on_success
    - if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
      when: on_success
    - when: manual
      allow_failure: true
  stage: deploy-part1
  image: $PIPELINE_IMAGE
  needs:
    - pipeline-image
  script:
    - ./tests/scripts/molecule_run.sh
  after_script:
    - rm -fr molecule_logs
    - mkdir -p molecule_logs
    - find ~/.cache/molecule/ \( -name '*.out' -o -name '*.err' \) -type f | xargs tar -uf molecule_logs/molecule.tar
    - gzip molecule_logs/molecule.tar
  artifacts:
    when: always
    paths:
      - molecule_logs/

molecule:
  extends: .molecule
  script:
    - ./tests/scripts/molecule_run.sh -i $ROLE
  parallel:
    matrix:
      - ROLE:
          - container-engine/cri-dockerd
          - container-engine/containerd
          - container-engine/cri-o
          - container-engine/gvisor
          - container-engine/youki
          - adduser
          - bastion-ssh-config
          - bootstrap_os

molecule_full:
  allow_failure: true
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
      when: on_success
    - when: manual
      allow_failure: true
  extends: molecule
  parallel:
    matrix:
      - ROLE:
          # FIXME : tests below are perma-failing
          - container-engine/kata-containers
@@ -1,120 +0,0 @@
---
# Tests for contrib/terraform/
.terraform_install:
  extends: .job
  needs:
    - pipeline-image
  variables:
    TF_VAR_public_key_path: "${ANSIBLE_PRIVATE_KEY_FILE}.pub"
    TF_VAR_ssh_private_key_path: $ANSIBLE_PRIVATE_KEY_FILE
    CLUSTER: $CI_COMMIT_REF_NAME
    TERRAFORM_STATE_ROOT: $CI_PROJECT_DIR
  stage: deploy-part1
  before_script:
    - ./tests/scripts/rebase.sh
    - mkdir -p cluster-dump $ANSIBLE_INVENTORY
    - ./tests/scripts/opentofu_install.sh
    - cp contrib/terraform/$PROVIDER/sample-inventory/cluster.tfvars .
    - ln -rs -t $ANSIBLE_INVENTORY contrib/terraform/$PROVIDER/hosts
    - tofu -chdir="contrib/terraform/$PROVIDER" init

terraform_validate:
  extends: .terraform_install
  tags: [ffci]
  only: ['master', /^pr-.*$/]
  script:
    - tofu -chdir="contrib/terraform/$PROVIDER" validate
    - tofu -chdir="contrib/terraform/$PROVIDER" fmt -check -diff
  stage: test
  needs:
    - pipeline-image
  parallel:
    matrix:
      - PROVIDER:
          - openstack
          - aws
          - exoscale
          - hetzner
          - vsphere
          - upcloud
          - nifcloud

.terraform_apply:
  extends: .terraform_install
  tags: [ffci]
  stage: deploy-extended
  when: manual
  only: [/^pr-.*$/]
  variables:
    ANSIBLE_INVENTORY_UNPARSED_FAILED: "true"
    ANSIBLE_REMOTE_USER: ubuntu # the openstack terraform module does not handle custom user correctly
    ANSIBLE_SSH_RETRIES: 15
    TF_VAR_ssh_user: $ANSIBLE_REMOTE_USER
    TF_VAR_cluster_name: $CI_JOB_ID
  script:
    # Set Ansible config
    - cp ansible.cfg ~/.ansible.cfg
    - ssh-keygen -N '' -f $ANSIBLE_PRIVATE_KEY_FILE -t rsa
    - mkdir -p contrib/terraform/$PROVIDER/group_vars
    # Random subnet to avoid routing conflicts
    - export TF_VAR_subnet_cidr="10.$(( $RANDOM % 256 )).$(( $RANDOM % 256 )).0/24"
    - tofu -chdir="contrib/terraform/$PROVIDER" apply -auto-approve -parallelism=1
    - tests/scripts/testcases_run.sh
  after_script:
    # Cleanup regardless of exit code
    - tofu -chdir="contrib/terraform/$PROVIDER" destroy -auto-approve

# Elastx is generously donating resources for Kubespray on Openstack CI
# Contacts: @gix @bl0m1
.elastx_variables: &elastx_variables
  OS_AUTH_URL: https://ops.elastx.cloud:5000
  OS_PROJECT_ID: 564c6b461c6b44b1bb19cdb9c2d928e4
  OS_PROJECT_NAME: kubespray_ci
  OS_USER_DOMAIN_NAME: Default
  OS_PROJECT_DOMAIN_ID: default
  OS_USERNAME: kubespray@root314.com
  OS_REGION_NAME: se-sto
  OS_INTERFACE: public
  OS_IDENTITY_API_VERSION: "3"
  TF_VAR_router_id: "ab95917c-41fb-4881-b507-3a6dfe9403df"

tf-elastx_cleanup:
  tags: [ffci]
  image: python
  variables:
    <<: *elastx_variables
  before_script:
    - pip install -r scripts/openstack-cleanup/requirements.txt
  script:
    - ./scripts/openstack-cleanup/main.py
  allow_failure: true

tf-elastx_ubuntu20-calico:
  extends: .terraform_apply
  stage: deploy-part1
  when: on_success
  allow_failure: true
  variables:
    <<: *elastx_variables
    PROVIDER: openstack
    ANSIBLE_TIMEOUT: "60"
    TF_VAR_number_of_k8s_masters: "1"
    TF_VAR_number_of_k8s_masters_no_floating_ip: "0"
    TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
    TF_VAR_number_of_etcd: "0"
    TF_VAR_number_of_k8s_nodes: "1"
    TF_VAR_number_of_k8s_nodes_no_floating_ip: "0"
    TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
    TF_VAR_number_of_bastions: "0"
    TF_VAR_number_of_k8s_masters_no_etcd: "0"
    TF_VAR_floatingip_pool: "elx-public1"
    TF_VAR_dns_nameservers: '["1.1.1.1", "8.8.8.8", "8.8.4.4"]'
    TF_VAR_use_access_ip: "0"
    TF_VAR_external_net: "600b8501-78cb-4155-9c9f-23dfcba88828"
    TF_VAR_network_name: "ci-$CI_JOB_ID"
    TF_VAR_az_list: '["sto1"]'
    TF_VAR_az_list_node: '["sto1"]'
    TF_VAR_flavor_k8s_master: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
    TF_VAR_flavor_k8s_node: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
    TF_VAR_image: ubuntu-20.04-server-latest
    TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
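Taken together, the `.terraform_apply` job is simply OpenTofu driven non-interactively around a Kubespray test run. A rough local equivalent of the same flow, using only the commands that appear in the job above (cloud credentials are assumed to be exported already):

```ShellSession
export PROVIDER=openstack   # any provider directory under contrib/terraform/
tofu -chdir="contrib/terraform/$PROVIDER" init
tofu -chdir="contrib/terraform/$PROVIDER" apply -auto-approve -parallelism=1
# ...run the Kubespray playbooks against the generated inventory...
tofu -chdir="contrib/terraform/$PROVIDER" destroy -auto-approve
```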
@@ -1,49 +0,0 @@
---
vagrant:
  extends: .job-moderated
  variables:
    CI_PLATFORM: "vagrant"
    SSH_USER: "vagrant"
    VAGRANT_DEFAULT_PROVIDER: "libvirt"
    KUBESPRAY_VAGRANT_CONFIG: tests/files/${TESTCASE}.rb
    DOCKER_NAME: vagrant
    VAGRANT_ANSIBLE_TAGS: facts
    VAGRANT_HOME: "$CI_PROJECT_DIR/.vagrant.d"
    PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
  tags: [ffci-vm-large]
  image: quay.io/kubespray/vm-kubespray-ci:v13
  services: []
  before_script:
    - echo $USER
    - python3 -m venv citest
    - source citest/bin/activate
    - vagrant plugin expunge --reinstall --force --no-tty
    - vagrant plugin install vagrant-libvirt
    - pip install --no-compile --no-cache-dir pip -U
    - pip install --no-compile --no-cache-dir -r $CI_PROJECT_DIR/requirements.txt
    - pip install --no-compile --no-cache-dir -r $CI_PROJECT_DIR/tests/requirements.txt
    - ./tests/scripts/vagrant_clean.sh
  script:
    - vagrant up
    - ./tests/scripts/testcases_run.sh
  after_script:
    - vagrant destroy -f
  cache:
    key: $CI_JOB_NAME_SLUG
    paths:
      - .vagrant.d/boxes
      - .cache/pip
    policy: pull-push # TODO: change to "pull" when not on main
  stage: deploy-extended
  rules:
    - if: $PR_LABELS =~ /.*(ci-extended|ci-full).*/
      when: on_success
    - if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
      when: on_success
    - when: manual
      allow_failure: true
  parallel:
    matrix:
      - TESTCASE:
          - ubuntu24-calico-dual-stack
          - ubuntu24-calico-ipv6only-stack
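The `vagrant` job above selects its testcase through `KUBESPRAY_VAGRANT_CONFIG`. A minimal sketch of reproducing a single matrix entry locally, assuming Vagrant and the libvirt provider are already installed (testcase name taken from the matrix above):

```ShellSession
export VAGRANT_DEFAULT_PROVIDER=libvirt
export KUBESPRAY_VAGRANT_CONFIG=tests/files/ubuntu24-calico-dual-stack.rb
vagrant up
vagrant destroy -f
```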
@@ -1,4 +0,0 @@
all
exclude_rule 'MD013'
exclude_rule 'MD029'
rule 'MD007', :indent => 2
@@ -1,110 +0,0 @@
---
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v6.0.0
    hooks:
      - id: check-added-large-files
      - id: check-case-conflict
      - id: check-executables-have-shebangs
      - id: check-xml
      - id: check-merge-conflict
      - id: detect-private-key
      - id: end-of-file-fixer
      - id: forbid-new-submodules
      - id: requirements-txt-fixer
      - id: trailing-whitespace

  - repo: https://github.com/adrienverge/yamllint.git
    rev: v1.37.1
    hooks:
      - id: yamllint
        args: [--strict]

  - repo: https://github.com/shellcheck-py/shellcheck-py
    rev: v0.11.0.1
    hooks:
      - id: shellcheck
        args: ["--severity=error"]
        exclude: "^.git"
        files: "\\.sh$"

  - repo: https://github.com/ansible/ansible-lint
    rev: v25.11.0
    hooks:
      - id: ansible-lint
        additional_dependencies:
          - jmespath==1.0.1
          - netaddr==1.3.0
          - distlib

  - repo: https://github.com/golangci/misspell
    rev: v0.7.0
    hooks:
      - id: misspell
        exclude: "OWNERS_ALIASES$"

  - repo: local
    hooks:
      - id: collection-build-install
        name: Build and install kubernetes-sigs.kubespray Ansible collection
        language: python
        additional_dependencies:
          - ansible-core>=2.16.4
          - distlib
        entry: tests/scripts/collection-build-install.sh
        pass_filenames: false

      - id: generate-docs-sidebar
        name: generate-docs-sidebar
        entry: scripts/gen_docs_sidebar.sh
        language: script
        pass_filenames: false

      - id: ci-matrix
        name: ci-matrix
        entry: tests/scripts/md-table/main.py
        language: python
        pass_filenames: false
        additional_dependencies:
          - jinja2
          - pathlib
          - pyaml

      - id: check-galaxy-version
        name: Verify correct version for galaxy.yml
        entry: scripts/galaxy_version.py
        language: python
        pass_filenames: false
        additional_dependencies:
          - ruamel.yaml

      - id: jinja-syntax-check
        name: jinja-syntax-check
        entry: tests/scripts/check-templates.py
        language: python
        types:
          - jinja
        additional_dependencies:
          - jinja2

      - id: propagate-ansible-variables
        name: Update static files referencing default kubespray values
        language: python
        additional_dependencies:
          - ansible-core>=2.16.4
        entry: scripts/propagate_ansible_variables.yml
        pass_filenames: false

      - id: check-checksums-sorted
        name: Check that our checksums are correctly sorted by version
        entry: scripts/assert-sorted-checksums.yml
        language: python
        pass_filenames: false
        additional_dependencies:
          - ansible

  - repo: https://github.com/markdownlint/markdownlint
    rev: v0.12.0
    hooks:
      - id: markdownlint
        exclude: "^.github|(^docs/_sidebar\\.md$)"
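Every hook in the removed config is addressable by its `id`, so individual linters can be run in isolation rather than through the whole suite; a small sketch, assuming pre-commit is installed:

```ShellSession
pre-commit run yamllint --all-files
pre-commit run ansible-lint --all-files
pre-commit run ci-matrix --all-files   # local hooks work the same way
```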
14
.yamllint
@@ -1,12 +1,6 @@
 ---
 extends: default

-ignore: |
-  .git/
-  .github/
-  # Generated file
-  tests/files/custom_cni/cilium.yaml
-# https://ansible.readthedocs.io/projects/lint/rules/yaml/
 rules:
   braces:
     min-spaces-inside: 0
@@ -14,15 +8,9 @@ rules:
   brackets:
     min-spaces-inside: 0
     max-spaces-inside: 1
-  comments:
-    min-spaces-from-content: 1
-  # https://github.com/adrienverge/yamllint/issues/384
-  comments-indentation: false
   indentation:
     spaces: 2
     indent-sequences: consistent
   line-length: disable
   new-line-at-end-of-file: disable
-  octal-values:
-    forbid-implicit-octal: true # yamllint defaults to false
-    forbid-explicit-octal: true # yamllint defaults to false
+  truthy: disable
@@ -1 +0,0 @@
# See our release notes on [GitHub](https://github.com/kubernetes-sigs/kubespray/releases)
@@ -2,46 +2,9 @@

 ## How to become a contributor and submit your own code

-### Environment setup
-
-It is recommended to use filter to manage the GitHub email notification, see [examples for setting filters to Kubernetes Github notifications](https://github.com/kubernetes/community/blob/master/communication/best-practices.md#examples-for-setting-filters-to-kubernetes-github-notifications)
-
-To install development dependencies you can set up a python virtual env with the necessary dependencies:
-
-```ShellSession
-virtualenv venv
-source venv/bin/activate
-pip install -r tests/requirements.txt
-ansible-galaxy install -r tests/requirements.yml
-```
-
-#### Linting
-
-Kubespray uses [pre-commit](https://pre-commit.com) hook configuration to run several linters, please install this tool and use it to run validation tests before submitting a PR.
-
-```ShellSession
-pre-commit install
-pre-commit run -a # To run pre-commit hook on all files in the repository, even if they were not modified
-```
-
-#### Molecule
-
-[molecule](https://github.com/ansible-community/molecule) is designed to help the development and testing of Ansible roles. In Kubespray you can run it all for all roles with `./tests/scripts/molecule_run.sh` or for a specific role (that you are working with) with `molecule test` from the role directory (`cd roles/my-role`).
-
-When developing or debugging a role it can be useful to run `molecule create` and `molecule converge` separately. Then you can use `molecule login` to SSH into the test environment.
-
-#### Vagrant
-
-Vagrant with VirtualBox or libvirt driver helps you to quickly spin test clusters to test things end to end. See [README.md#vagrant](README.md)
-
 ### Contributing A Patch

 1. Submit an issue describing your proposed change to the repo in question.
 2. The [repo owners](OWNERS) will respond to your issue promptly.
 3. Fork the desired repo, develop and test your code changes.
-4. Install [pre-commit](https://pre-commit.com) and install it in your development repo.
-5. Address any pre-commit validation failures.
-6. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
-7. Submit a pull request.
-8. Work with the reviewers on their suggestions.
-9. Ensure to rebase to the HEAD of your target branch and squash un-necessary commits (<https://blog.carbonfive.com/always-squash-and-rebase-your-git-commits/>) before final merger of your contribution.
+4. Submit a pull request.
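The removed Molecule section describes splitting `molecule test` into its phases while iterating on a role. A minimal sketch of that loop (the role path is only an example; any role with a `molecule/` directory works):

```ShellSession
cd roles/container-engine/containerd
molecule create    # provision the test instance(s)
molecule converge  # apply the role to them
molecule login     # SSH into the test environment to inspect the result
molecule destroy   # tear the instances down when done
```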
62
Dockerfile
@@ -1,50 +1,16 @@
-# syntax=docker/dockerfile:1
-
-# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
-FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a445a7ca37
-
-# Some tools like yamllint need this
-# Pip needs this as well at the moment to install ansible
-# (and potentially other packages)
-# See: https://github.com/pypa/pip/issues/10219
-ENV LANG=C.UTF-8 \
-    DEBIAN_FRONTEND=noninteractive \
-    PYTHONDONTWRITEBYTECODE=1
-
+FROM ubuntu:16.04
+
+RUN mkdir /kubespray
 WORKDIR /kubespray
-
-# hadolint ignore=DL3008
-RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
-    apt-get update -q \
-    && apt-get install -yq --no-install-recommends \
-       curl \
-       python3 \
-       python3-pip \
-       sshpass \
-       vim \
-       rsync \
-       openssh-client \
-    && apt-get clean \
-    && rm -rf /var/lib/apt/lists/* /var/log/*
-
-RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
-    --mount=type=cache,sharing=locked,id=pipcache,mode=0777,target=/root/.cache/pip \
-    pip install --no-compile --no-cache-dir -r requirements.txt \
-    && find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;
-
-SHELL ["/bin/bash", "-o", "pipefail", "-c"]
-
-RUN OS_ARCHITECTURE=$(dpkg --print-architecture) \
-    && curl -L "https://dl.k8s.io/release/v1.33.7/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
-    && echo "$(curl -L "https://dl.k8s.io/release/v1.33.7/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
-    && chmod a+x /usr/local/bin/kubectl
-
-COPY *.yml ./
-COPY *.cfg ./
-COPY roles ./roles
-COPY contrib ./contrib
-COPY inventory ./inventory
-COPY library ./library
-COPY extra_playbooks ./extra_playbooks
-COPY playbooks ./playbooks
-COPY plugins ./plugins
+RUN apt update -y && \
+    apt install -y \
+    libssl-dev python-dev sshpass apt-transport-https \
+    ca-certificates curl gnupg2 software-properties-common python-pip
+RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
+    add-apt-repository \
+    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
+    $(lsb_release -cs) \
+    stable" \
+    && apt update -y && apt-get install docker-ce -y
+COPY . .
+RUN /usr/bin/python -m pip install pip -U && /usr/bin/python -m pip install -r tests/requirements.txt && python -m pip install -r requirements.txt
2
LICENSE
@@ -187,7 +187,7 @@
       identification within third-party archives.

    Copyright 2016 Kubespray

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at
5
Makefile
Normal file
@@ -0,0 +1,5 @@
+mitogen:
+	ansible-playbook -c local mitogen.yaml -vv
+clean:
+	rm -rf dist/
+	rm *.retry
5
OWNERS
@@ -1,8 +1,7 @@
-# See the OWNERS docs at https://go.k8s.io/owners
+# See the OWNERS file documentation:
+# https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md

 approvers:
   - kubespray-approvers
 reviewers:
   - kubespray-reviewers
-emeritus_approvers:
-  - kubespray-emeritus_approvers
@@ -1,26 +1,18 @@
 aliases:
   kubespray-approvers:
     - ant31
-    - mzaian
-    - tico88612
-    - vannten
-    - yankay
-  kubespray-reviewers:
-    - cyclinder
-    - erikjiang
-    - mzaian
-    - tico88612
-    - vannten
-    - yankay
-  kubespray-emeritus_approvers:
+    - mattymo
     - atoms
     - chadswen
-    - cristicalin
-    - floryut
-    - liupeng0518
-    - luckysb
-    - mattymo
-    - miouge1
-    - oomichi
-    - riverzhang
+    - rsmitty
+    - bogdando
+    - bradbeam
     - woopstar
+    - riverzhang
+    - holser
+    - smana
+  kubespray-reviewers:
+    - jjungnickel
+    - archifleks
+    - chapsuk
+    - mirwan
324
README.md
@@ -1,228 +1,210 @@
-# Deploy a Production Ready Kubernetes Cluster
-
 ![Kubernetes Logo](https://raw.githubusercontent.com/kubernetes-sigs/kubespray/master/docs/img/kubernetes-logo.png)

-If you have questions, check the documentation at [kubespray.io](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
+Deploy a Production Ready Kubernetes Cluster
+============================================
+
+If you have questions, join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
 You can get your invite [here](http://slack.k8s.io/)

-- Can be deployed on **[AWS](docs/cloud_providers/aws.md), GCE, [Azure](docs/cloud_providers/azure.md), [OpenStack](docs/cloud_controllers/openstack.md), [vSphere](docs/cloud_controllers/vsphere.md), [Equinix Metal](docs/cloud_providers/equinix-metal.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
+- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere, Oracle Cloud Infrastructure (Experimental), or Baremetal**
 - **Highly available** cluster
 - **Composable** (Choice of the network plugin for instance)
 - Supports most popular **Linux distributions**
 - **Continuous integration tests**

-## Quick Start
+Quick Start
+-----------

-Below are several ways to use Kubespray to deploy a Kubernetes cluster.
+To deploy the cluster you can use :

-### Docker
-
-Ensure you have installed Docker then
-
-```ShellSession
-docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
-  --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
-  quay.io/kubespray/kubespray:v2.29.0 bash
-# Inside the container you may now run the kubespray playbooks:
-ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
-```
+### Current release
+2.8.2

 ### Ansible

+#### Ansible version
+
+Ansible v2.7.0 is failing and/or produce unexpected results due to [ansible/ansible/issues/46600](https://github.com/ansible/ansible/issues/46600)
+
 #### Usage

-See [Getting started](/docs/getting_started/getting-started.md)
-
-#### Collection
-
-See [here](docs/ansible/ansible_collection.md) if you wish to use this repository as an Ansible collection
+# Install dependencies from ``requirements.txt``
+sudo pip install -r requirements.txt
+
+# Copy ``inventory/sample`` as ``inventory/mycluster``
+cp -rfp inventory/sample inventory/mycluster
+
+# Update Ansible inventory file with inventory builder
+declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
+CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
+
+# Review and change parameters under ``inventory/mycluster/group_vars``
+cat inventory/mycluster/group_vars/all/all.yml
+cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
+
+# Deploy Kubespray with Ansible Playbook - run the playbook as root
+# The option `-b` is required, as for example writing SSL keys in /etc/,
+# installing packages and interacting with various systemd daemons.
+# Without -b the playbook will fail to run!
+ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml
+
+Note: When Ansible is already installed via system packages on the control machine, other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on Ubuntu).
+As a consequence, `ansible-playbook` command will fail with:
+```
+ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
+```
+probably pointing on a task depending on a module present in requirements.txt (i.e. "unseal vault").
+
+One way of solving this would be to uninstall the Ansible package and then, to install it via pip but it is not always possible.
+A workaround consists of setting `ANSIBLE_LIBRARY` and `ANSIBLE_MODULE_UTILS` environment variables respectively to the `ansible/modules` and `ansible/module_utils` subdirectories of pip packages installation location, which can be found in the Location field of the output of `pip show [package]` before executing `ansible-playbook`.

 ### Vagrant

-For Vagrant we need to install Python dependencies for provisioning tasks.
-Check that ``Python`` and ``pip`` are installed:
+For Vagrant we need to install python dependencies for provisioning tasks.
+Check if Python and pip are installed:

-```ShellSession
-python -V && pip -V
-```
+python -V && pip -V

 If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
+Install the necessary requirements

-Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
-then run the following step:
-
-```ShellSession
-vagrant up
-```
+sudo pip install -r requirements.txt
+vagrant up

-## Documents
+Documents
+---------

-- [Requirements](#requirements)
-- [Kubespray vs ...](docs/getting_started/comparisons.md)
-- [Getting started](docs/getting_started/getting-started.md)
-- [Setting up your first cluster](docs/getting_started/setting-up-your-first-cluster.md)
-- [Ansible inventory and tags](docs/ansible/ansible.md)
-- [Integration with existing ansible repo](docs/operations/integration.md)
-- [Deployment data variables](docs/ansible/vars.md)
-- [DNS stack](docs/advanced/dns-stack.md)
-- [HA mode](docs/operations/ha-mode.md)
-- [Network plugins](#network-plugins)
-- [Vagrant install](docs/developers/vagrant.md)
-- [Flatcar Container Linux bootstrap](docs/operating_systems/flatcar.md)
-- [Fedora CoreOS bootstrap](docs/operating_systems/fcos.md)
-- [openSUSE setup](docs/operating_systems/opensuse.md)
-- [Downloaded artifacts](docs/advanced/downloads.md)
-- [Equinix Metal](docs/cloud_providers/equinix-metal.md)
-- [OpenStack](docs/cloud_controllers/openstack.md)
-- [vSphere](docs/cloud_controllers/vsphere.md)
-- [Large deployments](docs/operations/large-deployments.md)
-- [Adding/replacing a node](docs/operations/nodes.md)
-- [Upgrades basics](docs/operations/upgrades.md)
-- [Air-Gap installation](docs/operations/offline-environment.md)
-- [NTP](docs/advanced/ntp.md)
-- [Hardening](docs/operations/hardening.md)
-- [Mirror](docs/operations/mirror.md)
-- [Roadmap](docs/roadmap/roadmap.md)
+- [Requirements](#requirements)
+- [Kubespray vs ...](docs/comparisons.md)
+- [Getting started](docs/getting-started.md)
+- [Ansible inventory and tags](docs/ansible.md)
+- [Integration with existing ansible repo](docs/integration.md)
+- [Deployment data variables](docs/vars.md)
+- [DNS stack](docs/dns-stack.md)
+- [HA mode](docs/ha-mode.md)
+- [Network plugins](#network-plugins)
+- [Vagrant install](docs/vagrant.md)
+- [CoreOS bootstrap](docs/coreos.md)
+- [Debian Jessie setup](docs/debian.md)
+- [openSUSE setup](docs/opensuse.md)
+- [Downloaded artifacts](docs/downloads.md)
+- [Cloud providers](docs/cloud.md)
+- [OpenStack](docs/openstack.md)
+- [AWS](docs/aws.md)
+- [Azure](docs/azure.md)
+- [vSphere](docs/vsphere.md)
+- [Large deployments](docs/large-deployments.md)
+- [Upgrades basics](docs/upgrades.md)
+- [Roadmap](docs/roadmap.md)

-## Supported Linux Distributions
+Supported Linux Distributions
+-----------------------------

-- **Flatcar Container Linux by Kinvolk**
-- **Debian** Bookworm, Bullseye, Trixie
-- **Ubuntu** 22.04, 24.04
-- **CentOS/RHEL** [8, 9](docs/operating_systems/rhel.md#rhel-8)
-- **Fedora** 39, 40
-- **Fedora CoreOS** (see [fcos Note](docs/operating_systems/fcos.md))
-- **openSUSE** Leap 15.x/Tumbleweed
-- **Oracle Linux** [8, 9](docs/operating_systems/rhel.md#rhel-8)
-- **Alma Linux** [8, 9](docs/operating_systems/rhel.md#rhel-8)
-- **Rocky Linux** [8, 9](docs/operating_systems/rhel.md#rhel-8)
-- **Kylin Linux Advanced Server V10** (experimental: see [kylin linux notes](docs/operating_systems/kylinlinux.md))
-- **Amazon Linux 2** (experimental: see [amazon linux notes](docs/operating_systems/amazonlinux.md))
-- **UOS Linux** (experimental: see [uos linux notes](docs/operating_systems/uoslinux.md))
-- **openEuler** (experimental: see [openEuler notes](docs/operating_systems/openeuler.md))
-
-Note:
-
-- Upstart/SysV init based OS types are not supported.
-- [Kernel requirements](docs/operations/kernel-requirements.md) (please read if the OS kernel version is < 4.19).
+- **Container Linux by CoreOS**
+- **Debian** Buster, Jessie, Stretch, Wheezy
+- **Ubuntu** 16.04, 18.04
+- **CentOS/RHEL** 7
+- **Fedora** 28
+- **Fedora/CentOS** Atomic
+- **openSUSE** Leap 42.3/Tumbleweed
+
+Note: Upstart/SysV init based OS types are not supported.

-## Supported Components
+Supported Components
+--------------------

-<!-- BEGIN ANSIBLE MANAGED BLOCK -->
-
-- Core
-  - [kubernetes](https://github.com/kubernetes/kubernetes) 1.33.7
-  - [etcd](https://github.com/etcd-io/etcd) 3.5.25
-  - [docker](https://www.docker.com/) 28.3
-  - [containerd](https://containerd.io/) 2.1.5
-  - [cri-o](http://cri-o.io/) 1.33.7 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
-- Network Plugin
-  - [cni-plugins](https://github.com/containernetworking/plugins) 1.8.0
-  - [calico](https://github.com/projectcalico/calico) 3.30.5
-  - [cilium](https://github.com/cilium/cilium) 1.18.4
-  - [flannel](https://github.com/flannel-io/flannel) 0.27.3
-  - [kube-ovn](https://github.com/alauda/kube-ovn) 1.12.21
-  - [kube-router](https://github.com/cloudnativelabs/kube-router) 2.1.1
-  - [multus](https://github.com/k8snetworkplumbingwg/multus-cni) 4.2.2
-  - [kube-vip](https://github.com/kube-vip/kube-vip) 0.8.0
-- Application
-  - [cert-manager](https://github.com/jetstack/cert-manager) 1.15.3
-  - [coredns](https://github.com/coredns/coredns) 1.12.0
-  - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) 1.13.3
-  - [argocd](https://argoproj.github.io/) 2.14.5
-  - [helm](https://helm.sh/) 3.18.4
-  - [metallb](https://metallb.universe.tf/) 0.13.9
-  - [registry](https://github.com/distribution/distribution) 2.8.1
-- Storage Plugin
-  - [aws-ebs-csi-plugin](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) 0.5.0
-  - [azure-csi-plugin](https://github.com/kubernetes-sigs/azuredisk-csi-driver) 1.10.0
-  - [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) 1.30.0
-  - [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) 1.9.2
-  - [local-path-provisioner](https://github.com/rancher/local-path-provisioner) 0.0.32
-  - [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) 2.5.0
-  - [node-feature-discovery](https://github.com/kubernetes-sigs/node-feature-discovery) 0.16.4
-
-<!-- END ANSIBLE MANAGED BLOCK -->
-
-## Container Runtime Notes
-
-- The cri-o version should be aligned with the respective kubernetes version (i.e. kube_version=1.20.x, crio_version=1.20)
-
-## Requirements
-
-- **Minimum required version of Kubernetes is v1.30**
-- **Ansible v2.14+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
-- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/operations/offline-environment.md))
-- The target servers are configured to allow **IPv4 forwarding**.
-- If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
-- The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
+- Core
+  - [kubernetes](https://github.com/kubernetes/kubernetes) v1.12.7
+  - [etcd](https://github.com/coreos/etcd) v3.2.24
+  - [docker](https://www.docker.com/) v18.06 (see note)
+  - [rkt](https://github.com/rkt/rkt) v1.21.0 (see Note 2)
+  - [cri-o](http://cri-o.io/) v1.11.5 (experimental: see [CRI-O Note](docs/cri-o.md). Only on centos based OS)
+- Network Plugin
+  - [calico](https://github.com/projectcalico/calico) v3.1.3
+  - [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
+  - [cilium](https://github.com/cilium/cilium) v1.3.0
+  - [contiv](https://github.com/contiv/install) v1.2.1
+  - [flanneld](https://github.com/coreos/flannel) v0.10.0
+  - [kube-router](https://github.com/cloudnativelabs/kube-router) v0.2.1
+  - [multus](https://github.com/intel/multus-cni) v3.1.autoconf
+  - [weave](https://github.com/weaveworks/weave) v2.5.0
+- Application
+  - [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
+  - [cert-manager](https://github.com/jetstack/cert-manager) v0.5.2
+  - [coredns](https://github.com/coredns/coredns) v1.2.6
+  - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.21.0
+
+Note: The list of validated [docker versions](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md) was updated to 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
+
+Note 2: rkt support as docker alternative is limited to control plane (etcd and
+kubelet). Docker is still used for Kubernetes cluster workloads and network
+plugins' related OS services. Also note, only one of the supported network
+plugins can be deployed for a given single cluster.
+
+Requirements
+------------
+
+- **Ansible v2.5 (or newer) and python-netaddr is installed on the machine
+  that will run Ansible commands**
+- **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
+- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md#offline-environment))
+- The target servers are configured to allow **IPv4 forwarding**.
+- **Your ssh key must be copied** to all the servers part of your inventory.
+- The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
   in order to avoid any issue during deployment you should disable your firewall.
-- If kubespray is run from non-root user account, correct privilege escalation method
+- If kubespray is ran from non-root user account, correct privilege escalation method
   should be configured in the target servers. Then the `ansible_become` flag
   or command parameters `--become or -b` should be specified.
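The privilege-escalation requirement kept by both README versions boils down to one flag or one inventory variable; a brief sketch (inventory path as in the quick-start above):

```ShellSession
ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml
# alternatively, set `ansible_become: true` in the inventory group_vars instead of passing -b
```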

-Hardware:
-These limits are safeguarded by Kubespray. Actual requirements for your workload can differ. For a sizing guide go to the [Building Large Clusters](https://kubernetes.io/docs/setup/cluster-large/#size-of-master-and-master-components) guide.
-
-- Control Plane
-  - Memory: 2 GB
-- Worker Node
-  - Memory: 1 GB
-
-## Network Plugins
-
-You can choose among ten network plugins. (default: `calico`, except Vagrant uses `flannel`)
-
-- [flannel](docs/CNI/flannel.md): gre/vxlan (layer 2) networking.
-
-- [Calico](https://docs.tigera.io/calico/latest/about/) is a networking and network policy provider. Calico supports a flexible set of networking options
-  designed to give you the most efficient networking across a range of situations, including non-overlay
-  and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts,
-  pods, and (if using Istio and Envoy) applications at the service mesh layer.
-
-- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
-
-- [kube-ovn](docs/CNI/kube-ovn.md): Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.
-
-- [kube-router](docs/CNI/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational
+Network Plugins
+---------------
+
+You can choose between 6 network plugins. (default: `calico`, except Vagrant uses `flannel`)
+
+- [flannel](docs/flannel.md): gre/vxlan (layer 2) networking.
+
+- [calico](docs/calico.md): bgp (layer 3) networking.
+
+- [canal](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
+
+- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
+
+- [contiv](docs/contiv.md): supports vlan, vxlan, bgp and Cisco SDN networking. This plugin is able to
+  apply firewall policies, segregate containers in multiple network and bridging pods onto physical networks.
+
+- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
+  (Please refer to `weave` [troubleshooting documentation](http://docs.weave.works/weave/latest_release/troubleshooting.html)).
+
+- [kube-router](docs/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational
   simplicity and high performance: it uses IPVS to provide Kube Services Proxy (if setup to replace kube-proxy),
   iptables for network policies, and BGP for ods L3 networking (with optionally BGP peering with out-of-cluster BGP peers).
   It can also optionally advertise routes to Kubernetes cluster Pods CIDRs, ClusterIPs, ExternalIPs and LoadBalancerIPs.

-- [macvlan](docs/CNI/macvlan.md): Macvlan is a Linux network driver. Pods have their own unique Mac and Ip address, connected directly the physical (layer 2) network.
-
-- [multus](docs/CNI/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.
-
-- [custom_cni](roles/network-plugin/custom_cni/) : You can specify some manifests that will be applied to the clusters to bring you own CNI and use non-supported ones by Kubespray.
-  See `tests/files/custom_cni/README.md` and `tests/files/custom_cni/values.yaml`for an example with a CNI provided by a Helm Chart.
-
-The network plugin to use is defined by the variable `kube_network_plugin`. There is also an
+- [multus](docs/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.
+
+The choice is defined with the variable `kube_network_plugin`. There is also an
 option to leverage built-in cloud provider networking instead.
-See also [Network checker](docs/advanced/netcheck.md).
+See also [Network checker](docs/netcheck.md).

-## Ingress Plugins
-
-- [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.
-
-- [metallb](docs/ingress/metallb.md): the MetalLB bare-metal service LoadBalancer provider.
-
-## Community docs and resources
-
-- [kubernetes.io/docs/setup/production-environment/tools/kubespray/](https://kubernetes.io/docs/setup/production-environment/tools/kubespray/)
-- [kubespray, monitoring and logging](https://github.com/gregbkr/kubernetes-kargo-logging-monitoring) by @gregbkr
-- [Deploy Kubernetes w/ Ansible & Terraform](https://rsmitty.github.io/Terraform-Ansible-Kubernetes/) by @rsmitty
-- [Deploy a Kubernetes Cluster with Kubespray (video)](https://www.youtube.com/watch?v=CJ5G4GpqDy0)
-
-## Tools and projects on top of Kubespray
-
-- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/v4/doc/integrations/ansible.rst)
-- [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)
-- [Kubean](https://github.com/kubean-io/kubean)
-
-## CI Tests
-
-[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/pipeline.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/-/pipelines)
-
-CI/end-to-end tests sponsored by: [CNCF](https://cncf.io), [Equinix Metal](https://metal.equinix.com/), [OVHcloud](https://www.ovhcloud.com/), [ELASTX](https://elastx.se/).
-
-See the [test matrix](docs/developers/test_cases.md) for details.
+Community docs and resources
+----------------------------
+
+- [kubernetes.io/docs/getting-started-guides/kubespray/](https://kubernetes.io/docs/getting-started-guides/kubespray/)
+- [kubespray, monitoring and logging](https://github.com/gregbkr/kubernetes-kargo-logging-monitoring) by @gregbkr
+- [Deploy Kubernetes w/ Ansible & Terraform](https://rsmitty.github.io/Terraform-Ansible-Kubernetes/) by @rsmitty
+- [Deploy a Kubernetes Cluster with Kubespray (video)](https://www.youtube.com/watch?v=N9q51JgbWu8)
+
+Tools and projects on top of Kubespray
+--------------------------------------
+
+- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/master/doc/integrations/ansible.rst)
+- [Fuel-ccp-installer](https://github.com/openstack/fuel-ccp-installer)
+- [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)
+
+CI Tests
+--------
+
+[![Build graphs](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/badges/master/pipeline.svg)](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/pipelines)
+
+CI/end-to-end tests sponsored by Google (GCE)
+See the [test matrix](docs/test_cases.md) for details.
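Both README versions note that the network plugin is selected with `kube_network_plugin`. Concretely, that is a single group variable; a minimal sketch following the sample inventory layout (the value must be one of the supported plugins listed above):

```yaml
# inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
kube_network_plugin: calico
```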
87
RELEASE.md
87
RELEASE.md
@@ -2,84 +2,39 @@
|
|||||||
|
|
||||||
The Kubespray Project is released on an as-needed basis. The process is as follows:
|
The Kubespray Project is released on an as-needed basis. The process is as follows:
|
||||||
|
|
||||||
1. An issue is proposing a new release with a changelog since the last release. Please see [a good sample issue](https://github.com/kubernetes-sigs/kubespray/issues/8325)
|
1. An issue is proposing a new release with a changelog since the last release
|
||||||
1. At least one of the [approvers](OWNERS_ALIASES) must approve this release
|
2. At least one of the [OWNERS](OWNERS) must LGTM this release
|
||||||
1. (Only for major releases) The `kube_version_min_required` variable is set to `n-1`
|
3. An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION`
|
||||||
1. (Only for major releases) Remove hashes for [EOL versions](https://github.com/kubernetes/website/blob/main/content/en/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
|
4. The release issue is closed
|
||||||
1. Create the release note with [Kubernetes Release Notes Generator](https://github.com/kubernetes/release/blob/master/cmd/release-notes/README.md). See the following `Release note creation` section for the details.
|
5. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
|
||||||
1. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
|
|
||||||
1. (Only for major releases) An approver creates a release branch in the form `release-X.Y`
|
|
||||||
1. (For major releases) On the `master` branch: bump the version in `galaxy.yml` to the next expected major release (X.y.0 with y = Y + 1), make a Pull Request.
|
|
||||||
1. (For minor releases) On the `release-X.Y` branch: bump the version in `galaxy.yml` to the next expected minor release (X.Y.z with z = Z + 1), make a Pull Request.
|
|
||||||
1. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) container images are built and tagged. See the following `Container image creation` section for the details.
|
|
||||||
1. The release issue is closed
|
|
||||||
1. An announcement email is sent to `dev@kubernetes.io` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
|
|
||||||
1. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
|
|
||||||
1. Create/Update Issue for upgradeing kubernetes and [k8s-conformance](https://github.com/cncf/k8s-conformance)
|
|
||||||
|
|
||||||
## Major/minor releases and milestones
|
## Major/minor releases, merge freezes and milestones
|
||||||
|
|
||||||
* For major releases (vX.Y) Kubespray maintains one branch (`release-X.Y`). Minor releases (vX.Y.Z) are available only as tags.
|
* Kubespray does not maintain stable branches for releases. Releases are tags, not
|
||||||
|
branches, and there are no backports. Therefore, there is no need for merge
|
||||||
|
freezes as well.
|
||||||
|
|
||||||
* Security patches and bugs might be backported.
|
* Fixes for major releases (vX.x.0) and minor releases (vX.Y.x) are delivered
|
||||||
|
|
||||||
* Fixes for major releases (vX.Y) and minor releases (vX.Y.Z) are delivered
|
|
||||||
via maintenance releases (vX.Y.Z) and assigned to the corresponding open
|
via maintenance releases (vX.Y.Z) and assigned to the corresponding open
|
||||||
[GitHub milestone](https://github.com/kubernetes-sigs/kubespray/milestones).
|
milestone (vX.Y). That milestone remains open for the major/minor releases
|
||||||
That milestone remains open for the major/minor releases support lifetime,
|
support lifetime, which ends once the milestone closed. Then only a next major
|
||||||
which ends once the milestone is closed. Then only a next major or minor release
|
or minor release can be done.
|
||||||
can be done.
|
|
||||||
|
|
||||||
* Kubespray major and minor releases are bound to the given `kube_version` major/minor
|
* Kubespray major and minor releases are bound to the given ``kube_version`` major/minor
|
||||||
version numbers and other components' arbitrary versions, like etcd or network plugins.
|
version numbers and other components' arbitrary versions, like etcd or network plugins.
|
||||||
Older or newer component versions are not supported and not tested for the given
|
Older or newer versions are not supported and not tested for the given release.
|
||||||
release (even if included in the checksum variables, like `kubeadm_checksums`).
|
|
||||||
|
|
||||||
* There is no unstable releases and no APIs, thus Kubespray doesn't follow
|
* There is no unstable releases and no APIs, thus Kubespray doesn't follow
|
||||||
[semver](https://semver.org/). Every version describes only a stable release.
|
[semver](http://semver.org/). Every version describes only a stable release.
|
||||||
Breaking changes, if any introduced by changed defaults or non-contrib ansible roles'
|
Breaking changes, if any introduced by changed defaults or non-contrib ansible roles'
|
||||||
playbooks, shall be described in the release notes. Other breaking changes, if any in
|
playbooks, shall be described in the release notes. Other breaking changes, if any in
|
||||||
the contributed addons or bound versions of Kubernetes and other components, are
|
the contributed addons or bound versions of Kubernetes and other components, are
|
||||||
considered out of Kubespray scope and are up to the components' teams to deal with and
|
considered out of Kubespray scope and are up to the components' teams to deal with and
|
||||||
document.
|
document.
|
||||||
|
|
||||||
* Minor releases can change components' versions, but not the major `kube_version`.
|
* Minor releases can change components' versions, but not the major ``kube_version``.
|
||||||
Greater `kube_version` requires a new major or minor release. For example, if Kubespray v2.0.0
|
Greater ``kube_version`` requires a new major or minor release. For example, if Kubespray v2.0.0
|
||||||
is bound to `kube_version: 1.4.x`, `calico_version: 0.22.0`, `etcd_version: 3.0.6`,
|
is bound to ``kube_version: 1.4.x``, ``calico_version: 0.22.0``, ``etcd_version: v3.0.6``,
|
||||||
then Kubespray v2.1.0 may be bound to only minor changes to `kube_version`, like v1.5.1
|
then Kubespray v2.1.0 may be bound to only minor changes to ``kube_version``, like v1.5.1
|
||||||
and *any* changes to other components, like etcd v4, or calico 1.2.3.
|
and *any* changes to other components, like etcd v4, or calico 1.2.3.
|
||||||
And Kubespray v3.x.x shall be bound to `kube_version: 2.x.x` respectively.
|
And Kubespray v3.x.x shall be bound to ``kube_version: 2.x.x`` respectively.
|
||||||
|
|
||||||
-## Release note creation
-
-You can create a release note with:
-
-```shell
-export GITHUB_TOKEN=<your-github-token>
-export ORG=kubernetes-sigs
-export REPO=kubespray
-release-notes --start-sha <The start commit-id> --end-sha <The end commit-id> --dependencies=false --output=/tmp/kubespray-release-note --required-author=""
-```
-
-If the release note file (/tmp/kubespray-release-note) contains "### Uncategorized" pull requests, those pull requests don't have a valid kind label (`kind/feature`, etc.).
-It is necessary to put a valid label on each pull request and run the above release-notes command again to get a better release note.
-
-## Container image creation
-
-The container image `quay.io/kubespray/kubespray:vX.Y.Z` can be created from the Dockerfile in the kubespray root directory:
-
-```shell
-cd kubespray/
-nerdctl build -t quay.io/kubespray/kubespray:vX.Y.Z .
-nerdctl push quay.io/kubespray/kubespray:vX.Y.Z
-```
-
-The container image `quay.io/kubespray/vagrant:vX.Y.Z` can be created from build.sh in test-infra/vagrant-docker/:
-
-```shell
-cd kubespray/test-infra/vagrant-docker/
-./build vX.Y.Z
-```
-
-Please note that the above operations require permission to push container images into quay.io/kubespray/.
-If you don't have the permission, please ask for it on the #kubespray-dev channel.
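As a hedged illustration of the labeling loop described above: the `release-notes` binary comes from the kubernetes/release project, and a missing kind label can be added with the GitHub CLI. The PR number `1234` and the `@latest` version pin below are placeholders, not values taken from this page.

```shell
# Sketch: install the release-notes tool used above (from kubernetes/release)
go install k8s.io/release/cmd/release-notes@latest

# Sketch: give an uncategorized PR a kind label, then re-run release-notes.
# "1234" is a hypothetical PR number; pick a label matching the change.
gh pr edit 1234 --repo kubernetes-sigs/kubespray --add-label kind/feature
```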
@@ -1,15 +1,13 @@
 # Defined below are the security contacts for this repo.
 #
-# They are the contact point for the Product Security Committee to reach out
+# They are the contact point for the Product Security Team to reach out
 # to for triaging and handling of incoming issues.
 #
 # The below names agree to abide by the
-# [Embargo Policy](https://git.k8s.io/security/private-distributors-list.md#embargo-policy)
+# [Embargo Policy](https://github.com/kubernetes/sig-release/blob/master/security-release-process-documentation/security-release-process.md#embargo-policy)
 # and will be removed and replaced if they violate that agreement.
 #
 # DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
 # INSTRUCTIONS AT https://kubernetes.io/security/
-floryut
-ant31
-VannTen
-yankay
+atoms
+mattymo

302 Vagrantfile vendored
@@ -1,147 +1,84 @@
 # -*- mode: ruby -*-
 # # vi: set ft=ruby :

-# For help on using kubespray with vagrant, check out docs/developers/vagrant.md
+# For help on using kubespray with vagrant, check out docs/vagrant.md

 require 'fileutils'
-require 'ipaddr'
-require 'socket'

 Vagrant.require_version ">= 2.0.0"

-CONFIG = File.join(File.dirname(__FILE__), ENV['KUBESPRAY_VAGRANT_CONFIG'] || 'vagrant/config.rb')
+CONFIG = File.join(File.dirname(__FILE__), "vagrant/config.rb")

-FLATCAR_URL_TEMPLATE = "https://%s.release.flatcar-linux.net/amd64-usr/current/flatcar_production_vagrant.json"
+COREOS_URL_TEMPLATE = "https://storage.googleapis.com/%s.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json"

 # Uniq disk UUID for libvirt
 DISK_UUID = Time.now.utc.to_i

 SUPPORTED_OS = {
-  "flatcar-stable" => {box: "flatcar-stable", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["stable"]},
-  "flatcar-beta" => {box: "flatcar-beta", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["beta"]},
-  "flatcar-alpha" => {box: "flatcar-alpha", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["alpha"]},
-  "flatcar-edge" => {box: "flatcar-edge", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["edge"]},
-  "ubuntu2004" => {box: "generic/ubuntu2004", user: "vagrant"},
-  "ubuntu2204" => {box: "generic/ubuntu2204", user: "vagrant"},
-  "ubuntu2404" => {box: "bento/ubuntu-24.04", user: "vagrant"},
-  "centos8" => {box: "centos/8", user: "vagrant"},
-  "centos8-bento" => {box: "bento/centos-8", user: "vagrant"},
-  "almalinux8" => {box: "almalinux/8", user: "vagrant"},
-  "almalinux8-bento" => {box: "bento/almalinux-8", user: "vagrant"},
-  "almalinux9" => {box: "almalinux/9", user: "vagrant"},
-  "rockylinux8" => {box: "rockylinux/8", user: "vagrant"},
-  "rockylinux9" => {box: "rockylinux/9", user: "vagrant"},
-  "fedora39" => {box: "fedora/39-cloud-base", user: "vagrant"},
-  "fedora40" => {box: "fedora/40-cloud-base", user: "vagrant"},
-  "fedora39-arm64" => {box: "bento/fedora-39-arm64", user: "vagrant"},
-  "fedora40-arm64" => {box: "bento/fedora-40", user: "vagrant"},
-  "opensuse" => {box: "opensuse/Leap-15.6.x86_64", user: "vagrant"},
-  "opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"},
-  "oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
-  "oraclelinux8" => {box: "generic/oracle8", user: "vagrant"},
-  "rhel8" => {box: "generic/rhel8", user: "vagrant"},
-  "debian11" => {box: "debian/bullseye64", user: "vagrant"},
-  "debian12" => {box: "debian/bookworm64", user: "vagrant"},
+  "coreos-stable" => {box: "coreos-stable", user: "core", box_url: COREOS_URL_TEMPLATE % ["stable"]},
+  "coreos-alpha" => {box: "coreos-alpha", user: "core", box_url: COREOS_URL_TEMPLATE % ["alpha"]},
+  "coreos-beta" => {box: "coreos-beta", user: "core", box_url: COREOS_URL_TEMPLATE % ["beta"]},
+  "ubuntu1604" => {box: "generic/ubuntu1604", user: "vagrant"},
+  "ubuntu1804" => {box: "generic/ubuntu1804", user: "vagrant"},
+  "centos" => {box: "centos/7", user: "vagrant"},
+  "centos-bento" => {box: "bento/centos-7.5", user: "vagrant"},
+  "fedora" => {box: "fedora/28-cloud-base", user: "vagrant"},
+  "opensuse" => {box: "opensuse/openSUSE-42.3-x86_64", user: "vagrant"},
+  "opensuse-tumbleweed" => {box: "opensuse/openSUSE-Tumbleweed-x86_64", user: "vagrant"},
 }

+# Defaults for config options defined in CONFIG
+$num_instances = 3
+$instance_name_prefix = "k8s"
+$vm_gui = false
+$vm_memory = 2048
+$vm_cpus = 1
+$shared_folders = {}
+$forwarded_ports = {}
+$subnet = "172.17.8"
+$os = "ubuntu1804"
+$network_plugin = "flannel"
+# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
+$multi_networking = false
+# The first three nodes are etcd servers
+$etcd_instances = $num_instances
+# The first two nodes are kube masters
+$kube_master_instances = $num_instances == 1 ? $num_instances : ($num_instances - 1)
+# All nodes are kube nodes
+$kube_node_instances = $num_instances
+# The following only works when using the libvirt provider
+$kube_node_instances_with_disks = false
+$kube_node_instances_with_disks_size = "20G"
+$kube_node_instances_with_disks_number = 2
+
+$playbook = "cluster.yml"
+
+host_vars = {}

 if File.exist?(CONFIG)
   require CONFIG
 end

-# Defaults for config options defined in CONFIG
-$num_instances ||= 3
-$instance_name_prefix ||= "k8s"
-$vm_gui ||= false
-$vm_memory ||= 2048
-$vm_cpus ||= 2
-$shared_folders ||= {}
-$forwarded_ports ||= {}
-$subnet ||= "172.18.8"
-$subnet_ipv6 ||= "fd3c:b398:0698:0756"
-$os ||= "ubuntu2004"
-$network_plugin ||= "flannel"
-$inventories ||= []
-# Setting multi_networking to true will install Multus: https://github.com/k8snetworkplumbingwg/multus-cni
-$multi_networking ||= "False"
-$download_run_once ||= "True"
-$download_force_cache ||= "False"
-# Modify those to have separate groups (for instance, to test separate etcd:)
-# first_control_plane = 1
-# first_etcd = 4
-# control_plane_instances = 3
-# etcd_instances = 3
-$first_node ||= 1
-$first_control_plane ||= 1
-$first_etcd ||= 1
+$box = SUPPORTED_OS[$os][:box]
+# if $inventory is not set, try to use example
+$inventory = "inventory/sample" if ! $inventory
+$inventory = File.absolute_path($inventory, File.dirname(__FILE__))

-# The first three nodes are etcd servers
-$etcd_instances ||= [$num_instances, 3].min
-# The first two nodes are kube masters
-$control_plane_instances ||= [$num_instances, 2].min
-# All nodes are kube nodes
-$kube_node_instances ||= $num_instances - $first_node + 1
+# if $inventory has a hosts.ini file use it, otherwise copy over
+# vars etc to where vagrant expects dynamic inventory to be
+if ! File.exist?(File.join(File.dirname($inventory), "hosts.ini"))
+  $vagrant_ansible = File.join(File.dirname(__FILE__), ".vagrant", "provisioners", "ansible")
+  FileUtils.mkdir_p($vagrant_ansible) if ! File.exist?($vagrant_ansible)
+  if ! File.exist?(File.join($vagrant_ansible,"inventory"))
+    FileUtils.ln_s($inventory, File.join($vagrant_ansible,"inventory"))
-
-# The following only works when using the libvirt provider
-$kube_node_instances_with_disks ||= false
-$kube_node_instances_with_disks_size ||= "20G"
-$kube_node_instances_with_disks_number ||= 2
-$override_disk_size ||= false
-$disk_size ||= "20GB"
-$local_path_provisioner_enabled ||= "False"
-$local_path_provisioner_claim_root ||= "/opt/local-path-provisioner/"
-$libvirt_nested ||= false
-# boolean or string (e.g. "-vvv")
-$ansible_verbosity ||= false
-$ansible_tags ||= ENV['VAGRANT_ANSIBLE_TAGS'] || ""
-
-$vagrant_dir ||= File.join(File.dirname(__FILE__), ".vagrant")
-
-$playbook ||= "cluster.yml"
-$extra_vars ||= {}
-
-host_vars = {}
-
-def collect_networks(subnet, subnet_ipv6)
-  Socket.getifaddrs.filter_map do |iface|
-    next unless iface&.netmask&.ip_address && iface.addr
-
-    is_ipv6 = iface.addr.ipv6?
-    ip = IPAddr.new(iface.addr.ip_address.split('%').first)
-    ip_test = is_ipv6 ? IPAddr.new("#{subnet_ipv6}::0") : IPAddr.new("#{subnet}.0")
-
-    prefix = IPAddr.new(iface.netmask.ip_address).to_i.to_s(2).count('1')
-    network = ip.mask(prefix)
-
-    [IPAddr.new("#{network}/#{prefix}"), ip_test]
   end
 end

-def subnet_in_use?(network_ips)
-  network_ips.any? { |net, test_ip| net.include?(test_ip) && test_ip != net }
-end
-
-network_ips = collect_networks($subnet, $subnet_ipv6)
-
-if subnet_in_use?(network_ips)
-  puts "Invalid subnet provided, subnet is already in use: #{$subnet}.0"
-  puts "Subnets in use: #{network_ips.inspect}"
-  exit 1
-end
-
-# throw error if os is not supported
-if ! SUPPORTED_OS.key?($os)
-  puts "Unsupported OS: #{$os}"
-  puts "Supported OS are: #{SUPPORTED_OS.keys.join(', ')}"
-  exit 1
-end
-
-$box = SUPPORTED_OS[$os][:box]
-
 if Vagrant.has_plugin?("vagrant-proxyconf")
   $no_proxy = ENV['NO_PROXY'] || ENV['no_proxy'] || "127.0.0.1,localhost"
   (1..$num_instances).each do |i|
     $no_proxy += ",#{$subnet}.#{i+100}"
   end
 end

 Vagrant.configure("2") do |config|

@@ -160,13 +97,6 @@ Vagrant.configure("2") do |config|
   # always use Vagrants insecure key
   config.ssh.insert_key = false

-  if ($override_disk_size)
-    unless Vagrant.has_plugin?("vagrant-disksize")
-      system "vagrant plugin install vagrant-disksize"
-    end
-    config.disksize.size = $disk_size
-  end
-
   (1..$num_instances).each do |i|
     config.vm.define vm_name = "%s-%01d" % [$instance_name_prefix, i] do |node|

@@ -190,13 +120,9 @@ Vagrant.configure("2") do |config|
       vb.cpus = $vm_cpus
       vb.gui = $vm_gui
       vb.linked_clone = true
-      vb.customize ["modifyvm", :id, "--vram", "8"] # ubuntu defaults to 256 MB which is a waste of precious RAM
-      vb.customize ["modifyvm", :id, "--audio", "none"]
     end

     node.vm.provider :libvirt do |lv|
-      lv.nested = $libvirt_nested
-      lv.cpu_mode = "host-model"
       lv.memory = $vm_memory
       lv.cpus = $vm_cpus
       lv.default_prefix = 'kubespray'

@@ -213,15 +139,7 @@ Vagrant.configure("2") do |config|
       # always make /dev/sd{a/b/c} so that CI can ensure that
       # virtualbox and libvirt will have the same devices to use for OSDs
       (1..$kube_node_instances_with_disks_number).each do |d|
-        lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "scsi"
+        lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "ide"
-      end
-    end
-    node.vm.provider :virtualbox do |vb|
-      # always make /dev/sd{a/b/c} so that CI can ensure that
-      # virtualbox and libvirt will have the same devices to use for OSDs
-      (1..$kube_node_instances_with_disks_number).each do |d|
-        vb.customize ['createhd', '--filename', "disk-#{i}-#{driverletters[d]}-#{DISK_UUID}.disk", '--size', $kube_node_instances_with_disks_size] # 10GB disk
-        vb.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', d, '--device', 0, '--type', 'hdd', '--medium', "disk-#{i}-#{driverletters[d]}-#{DISK_UUID}.disk", '--nonrotational', 'on', '--mtype', 'normal']
       end
     end
   end

@@ -234,112 +152,44 @@ Vagrant.configure("2") do |config|
       node.vm.network "forwarded_port", guest: guest, host: host, auto_correct: true
     end

-    if ["rhel8"].include? $os
-      # Vagrant synced_folder rsync options cannot be used for RHEL boxes as Rsync package cannot
-      # be installed until the host is registered with a valid Red Hat support subscription
-      node.vm.synced_folder ".", "/vagrant", disabled: false
-      $shared_folders.each do |src, dst|
-        node.vm.synced_folder src, dst
-      end
-    else
-      node.vm.synced_folder ".", "/vagrant", disabled: false, type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z'] , rsync__exclude: ['.git','venv']
-      $shared_folders.each do |src, dst|
-        node.vm.synced_folder src, dst, type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z']
-      end
+    node.vm.synced_folder ".", "/vagrant", disabled: false, type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z'] , rsync__exclude: ['.git','venv']
+    $shared_folders.each do |src, dst|
+      node.vm.synced_folder src, dst, type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z']
     end

     ip = "#{$subnet}.#{i+100}"
-    ip6 = "#{$subnet_ipv6}::#{i+100}"
-    node.vm.network :private_network,
-      :ip => ip,
-      :libvirt__guest_ipv6 => 'yes',
-      :libvirt__ipv6_address => ip6,
-      :libvirt__ipv6_prefix => "64",
-      :libvirt__forward_mode => "none",
-      :libvirt__dhcp_enabled => false
+    node.vm.network :private_network, ip: ip

-    # libvirt__ipv6_address does not work as intended, the address is obtained with the desired prefix, but auto-generated(like fd3c:b398:698:756:5054:ff:fe48:c61e/64)
-    # add default route for detect ansible_default_ipv6
-    # TODO: fix libvirt__ipv6 or use $subnet in shell
-    config.vm.provision "shell", inline: "ip -6 r a fd3c:b398:698:756::/64 dev eth1;ip -6 r add default via fd3c:b398:0698:0756::1 dev eth1 || true"

     # Disable swap for each vm
     node.vm.provision "shell", inline: "swapoff -a"

-    # ubuntu2004 and ubuntu2204 have IPv6 explicitly disabled. This undoes that.
-    if ["ubuntu2004", "ubuntu2204"].include? $os
-      node.vm.provision "shell", inline: "rm -f /etc/modprobe.d/local.conf"
-      node.vm.provision "shell", inline: "sed -i '/net.ipv6.conf.all.disable_ipv6/d' /etc/sysctl.d/99-sysctl.conf /etc/sysctl.conf"
-    end
-    # Hack for fedora39/40 to get the IP address of the second interface
-    if ["fedora39", "fedora40", "fedora39-arm64", "fedora40-arm64"].include? $os
-      config.vm.provision "shell", inline: <<-SHELL
-        nmcli conn modify 'Wired connection 2' ipv4.addresses $(cat /etc/sysconfig/network-scripts/ifcfg-eth1 | grep IPADDR | cut -d "=" -f2)/24
-        nmcli conn modify 'Wired connection 2' ipv4.method manual
-        service NetworkManager restart
-      SHELL
-    end

-    # Rockylinux boxes needs UEFI
-    if ["rockylinux8", "rockylinux9"].include? $os
-      config.vm.provider "libvirt" do |domain|
-        domain.loader = "/usr/share/OVMF/x64/OVMF_CODE.fd"
-      end
-    end

-    # Disable firewalld on oraclelinux/redhat vms
-    if ["oraclelinux","oraclelinux8", "rhel8","rockylinux8"].include? $os
-      node.vm.provision "shell", inline: "systemctl stop firewalld; systemctl disable firewalld"
-    end

     host_vars[vm_name] = {
       "ip": ip,
-      "flannel_interface": "eth1",
       "kube_network_plugin": $network_plugin,
       "kube_network_plugin_multus": $multi_networking,
-      "download_run_once": $download_run_once,
-      "download_localhost": "False",
-      "download_cache_dir": ENV['HOME'] + "/kubespray_cache",
-      # Make kubespray cache even when download_run_once is false
-      "download_force_cache": $download_force_cache,
-      # Keeping the cache on the nodes can improve provisioning speed while debugging kubespray
-      "download_keep_remote_cache": "False",
-      "docker_rpm_keepcache": "1",
-      # These two settings will put kubectl and admin.config in $inventory/artifacts
-      "kubeconfig_localhost": "True",
-      "kubectl_localhost": "True",
-      "local_path_provisioner_enabled": "#{$local_path_provisioner_enabled}",
-      "local_path_provisioner_claim_root": "#{$local_path_provisioner_claim_root}",
-      "ansible_ssh_user": SUPPORTED_OS[$os][:user],
-      "ansible_ssh_private_key_file": File.join(Dir.home, ".vagrant.d", "insecure_private_key"),
-      "unsafe_show_logs": "True"
+      "docker_keepcache": "1",
+      "download_run_once": "True",
+      "download_localhost": "False"
     }

     # Only execute the Ansible provisioner once, when all the machines are up and ready.
-    # And limit the action to gathering facts, the full playbook is going to be ran by testcases_run.sh
     if i == $num_instances
       node.vm.provision "ansible" do |ansible|
         ansible.playbook = $playbook
-        ansible.compatibility_mode = "2.0"
-        ansible.verbose = $ansible_verbosity
-        ansible.become = true
-        ansible.limit = "all,localhost"
-        ansible.host_key_checking = false
-        ansible.raw_arguments = ["--forks=#{$num_instances}",
-          "--flush-cache",
-          "-e ansible_become_pass=vagrant"] +
-          $inventories.map {|inv| ["-i", inv]}.flatten
-        ansible.host_vars = host_vars
-        ansible.extra_vars = $extra_vars
-        if $ansible_tags != ""
-          ansible.tags = [$ansible_tags]
+        if File.exist?(File.join( $inventory, "hosts.ini"))
+          ansible.inventory_path = $inventory
         end
+        ansible.become = true
+        ansible.limit = "all"
+        ansible.host_key_checking = false
+        ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "--ask-become-pass"]
+        ansible.host_vars = host_vars
+        #ansible.tags = ['download']
         ansible.groups = {
-          "etcd" => ["#{$instance_name_prefix}-[#{$first_etcd}:#{$etcd_instances + $first_etcd - 1}]"],
-          "kube_control_plane" => ["#{$instance_name_prefix}-[#{$first_control_plane}:#{$control_plane_instances + $first_control_plane - 1}]"],
-          "kube_node" => ["#{$instance_name_prefix}-[#{$first_node}:#{$kube_node_instances + $first_node - 1}]"],
-          "k8s_cluster:children" => ["kube_control_plane", "kube_node"],
+          "etcd" => ["#{$instance_name_prefix}-[1:#{$etcd_instances}]"],
+          "kube-master" => ["#{$instance_name_prefix}-[1:#{$kube_master_instances}]"],
+          "kube-node" => ["#{$instance_name_prefix}-[1:#{$kube_node_instances}]"],
+          "k8s-cluster:children" => ["kube-master", "kube-node"],
         }
       end
     end
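Both versions of the Vagrantfile above `require` overrides from `vagrant/config.rb` before falling back to defaults (the newer one also honors `KUBESPRAY_VAGRANT_CONFIG`), so a local config file is the supported way to change instance count, OS, or network plugin. A minimal sketch; the values below are illustrative, not taken from this diff:

```shell
# Sketch: create a local override file that the Vagrantfile requires,
# then bring the cluster up. Values are examples, not project defaults.
mkdir -p vagrant
cat > vagrant/config.rb <<'EOF'
$num_instances = 3
$os = "ubuntu2004"
$network_plugin = "flannel"
$vm_memory = 4096
EOF
vagrant up --provider=libvirt
```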
@@ -1,2 +0,0 @@
----
-theme: jekyll-theme-slate
12 ansible.cfg
@@ -3,21 +3,17 @@ pipelining=True
 ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
 #control_path = ~/.ssh/ansible-%%r@%%h:%%p
 [defaults]
-# https://github.com/ansible/ansible/issues/56930 (to ignore group names with - and .)
-force_valid_group_names = ignore
+strategy_plugins = plugins/mitogen/ansible_mitogen/plugins/strategy

 host_key_checking=False
 gathering = smart
 fact_caching = jsonfile
 fact_caching_connection = /tmp
-fact_caching_timeout = 86400
-timeout = 300
-stdout_callback = default
-display_skipped_hosts = no
+stdout_callback = skippy
 library = ./library
-callbacks_enabled = profile_tasks
+callback_whitelist = profile_tasks
 roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
 deprecation_warnings=False
-inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds, .gpg
+inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds
 [inventory]
 ignore_patterns = artifacts, credentials
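One way to see which of these ansible.cfg settings are actually in effect for a given checkout is `ansible-config`, which reports only values that differ from Ansible's built-in defaults. A quick sketch:

```shell
# Sketch: show effective, non-default settings from the ansible.cfg in use
ANSIBLE_CONFIG=./ansible.cfg ansible-config dump --only-changed
```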
137 cluster.yml
@@ -1,3 +1,136 @@
 ---
-- name: Install Kubernetes
-  ansible.builtin.import_playbook: playbooks/cluster.yml
+- hosts: localhost
+  gather_facts: false
+  tasks:
+    - name: "Check ansible version !=2.7.0"
+      assert:
+        msg: "Ansible V2.7.0 can't be used until: https://github.com/ansible/ansible/issues/46600 is fixed"
+        that:
+          - ansible_version.string is version("2.7.0", "!=")
+          - ansible_version.string is version("2.5.0", ">=")
+      tags:
+        - check
+  vars:
+    ansible_connection: local
+
+- hosts: localhost
+  gather_facts: false
+  tasks:
+    - name: deploy warning for non kubeadm
+      debug:
+        msg: "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
+      when: not kubeadm_enabled and not skip_non_kubeadm_warning
+
+    - name: deploy cluster for non kubeadm
+      pause:
+        prompt: "Are you sure you want to deploy cluster using the deprecated non-kubeadm mode."
+        echo: no
+      when: not kubeadm_enabled and not skip_non_kubeadm_warning
+
+- hosts: bastion[0]
+  gather_facts: False
+  roles:
+    - { role: kubespray-defaults}
+    - { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
+
+- hosts: k8s-cluster:etcd:calico-rr
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  gather_facts: false
+  vars:
+    # Need to disable pipelining for bootstrap-os as some systems have requiretty in sudoers set, which makes pipelining
+    # fail. bootstrap-os fixes this on these systems, so in later plays it can be enabled.
+    ansible_ssh_pipelining: false
+  roles:
+    - { role: kubespray-defaults}
+    - { role: bootstrap-os, tags: bootstrap-os}
+
+- hosts: k8s-cluster:etcd:calico-rr
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  vars:
+    ansible_ssh_pipelining: true
+  gather_facts: true
+  pre_tasks:
+    - name: gather facts from all instances
+      setup:
+      delegate_to: "{{item}}"
+      delegate_facts: True
+      with_items: "{{ groups['k8s-cluster'] + groups['etcd'] + groups['calico-rr']|default([]) }}"
+
+- hosts: k8s-cluster:etcd:calico-rr
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  roles:
+    - { role: kubespray-defaults}
+    - { role: kubernetes/preinstall, tags: preinstall }
+    - { role: "container-engine", tags: "container-engine", when: deploy_container_engine|default(true) }
+    - { role: download, tags: download, when: "not skip_downloads" }
+  environment: "{{proxy_env}}"
+
+- hosts: etcd
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  roles:
+    - { role: kubespray-defaults}
+    - { role: etcd, tags: etcd, etcd_cluster_setup: true, etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}" }
+
+- hosts: k8s-cluster:calico-rr
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  roles:
+    - { role: kubespray-defaults}
+    - { role: etcd, tags: etcd, etcd_cluster_setup: false, etcd_events_cluster_setup: false }
+
+- hosts: k8s-cluster
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  roles:
+    - { role: kubespray-defaults}
+    - { role: kubernetes/node, tags: node }
+  environment: "{{proxy_env}}"
+
+- hosts: kube-master
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  roles:
+    - { role: kubespray-defaults}
+    - { role: kubernetes/master, tags: master }
+    - { role: kubernetes/client, tags: client }
+    - { role: kubernetes-apps/cluster_roles, tags: cluster-roles }
+
+- hosts: k8s-cluster
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  roles:
+    - { role: kubespray-defaults}
+    - { role: kubernetes/kubeadm, tags: kubeadm, when: "kubeadm_enabled" }
+    - { role: network_plugin, tags: network }
+
+- hosts: kube-master[0]
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  roles:
+    - { role: kubespray-defaults}
+    - { role: kubernetes-apps/rotate_tokens, tags: rotate_tokens, when: "secret_changed|default(false)" }
+    - { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"], when: "kubeadm_enabled" }
+
+- hosts: kube-master
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  roles:
+    - { role: kubespray-defaults}
+    - { role: kubernetes-apps/network_plugin, tags: network }
+    - { role: kubernetes-apps/policy_controller, tags: policy-controller }
+    - { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
+    - { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
+
+- hosts: calico-rr
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  roles:
+    - { role: kubespray-defaults}
+    - { role: network_plugin/calico/rr, tags: network }
+
+- hosts: k8s-cluster
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  roles:
+    - { role: kubespray-defaults}
+    - { role: dnsmasq, when: "dns_mode == 'dnsmasq_kubedns'", tags: dnsmasq }
+    - { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }
+  environment: "{{proxy_env}}"
+
+- hosts: kube-master
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  roles:
+    - { role: kubespray-defaults}
+    - { role: kubernetes-apps, tags: apps }
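In either form, cluster.yml remains the entry point and is invoked the same way. A sketch of a typical run; the inventory path is a placeholder for your own inventory:

```shell
# Sketch: run the cluster playbook against an inventory (path is a placeholder)
ansible-playbook -i inventory/sample/hosts.ini --become --become-user=root cluster.yml
```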
@@ -35,18 +35,15 @@ class SearchEC2Tags(object):
     hosts['_meta'] = { 'hostvars': {} }

     ##Search ec2 three times to find nodes of each group type. Relies on kubespray-role key/value.
-    for group in ["kube_control_plane", "kube_node", "etcd"]:
+    for group in ["kube-master", "kube-node", "etcd"]:
       hosts[group] = []
       tag_key = "kubespray-role"
       tag_value = ["*"+group+"*"]
-      region = os.environ['AWS_REGION']
+      region = os.environ['REGION']

       ec2 = boto3.resource('ec2', region)
-      filters = [{'Name': 'tag:'+tag_key, 'Values': tag_value}, {'Name': 'instance-state-name', 'Values': ['running']}]
-      cluster_name = os.getenv('CLUSTER_NAME')
-      if cluster_name:
-        filters.append({'Name': 'tag-key', 'Values': ['kubernetes.io/cluster/'+cluster_name]})
-      instances = ec2.instances.filter(Filters=filters)
+      instances = ec2.instances.filter(Filters=[{'Name': 'tag:'+tag_key, 'Values': tag_value}, {'Name': 'instance-state-name', 'Values': ['running']}])
       for instance in instances:

         ##Suppose default vpc_visibility is private
@@ -67,15 +64,10 @@ class SearchEC2Tags(object):
         if node_labels_tag:
           ansible_host['node_labels'] = dict([ label.strip().split('=') for label in node_labels_tag[0]['Value'].split(',') ])

-        ##Set when instance actually has node_taints
-        node_taints_tag = list(filter(lambda t: t['Key'] == 'kubespray-node-taints', instance.tags))
-        if node_taints_tag:
-          ansible_host['node_taints'] = list([ taint.strip() for taint in node_taints_tag[0]['Value'].split(',') ])

         hosts[group].append(dns_name)
         hosts['_meta']['hostvars'][dns_name] = ansible_host

-    hosts['k8s_cluster'] = {'children':['kube_control_plane', 'kube_node']}
+    hosts['k8s-cluster'] = {'children':['kube-master', 'kube-node']}
     print(json.dumps(hosts, sort_keys=True, indent=2))

 SearchEC2Tags()
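A hedged usage sketch for this dynamic inventory script: the newer variant reads `AWS_REGION` (plus an optional `CLUSTER_NAME` tag filter), while the older one reads `REGION`. The region and cluster name below are placeholders.

```shell
# Sketch: query EC2 for kubespray nodes and feed the result to Ansible.
# us-east-1 and "mycluster" are placeholder values.
export AWS_REGION=us-east-1
export CLUSTER_NAME=mycluster
./kubespray-aws-inventory.py --list    # dynamic-inventory scripts are queried with --list
ansible-playbook -i contrib/aws_inventory/kubespray-aws-inventory.py cluster.yml --become
```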
@@ -1 +0,0 @@
-boto3 # Apache-2.0
2 contrib/azurerm/.gitignore vendored
@@ -1,2 +1,2 @@
 .generated
 /inventory
@@ -15,23 +15,22 @@ Resource Group. It will not install Kubernetes itself, this has to be done in a

 ## Configuration through group_vars/all

-You have to modify at least two variables in group_vars/all. The one is the **cluster_name** variable, it must be globally
-unique due to some restrictions in Azure. The other one is the **ssh_public_keys** variable, it must be your ssh public
-key to access your azure virtual machines. Most other variables should be self explanatory if you have some basic Kubernetes
+You have to modify at least one variable in group_vars/all, which is the **cluster_name** variable. It must be globally
+unique due to some restrictions in Azure. Most other variables should be self explanatory if you have some basic Kubernetes
 experience.

 ## Bastion host

 You can enable the use of a Bastion Host by changing **use_bastion** in group_vars/all to **true**. The generated
 templates will then include an additional bastion VM which can then be used to connect to the masters and nodes. The option
 also removes all public IPs from all other VMs.

 ## Generating and applying

 To generate and apply the templates, call:

 ```shell
-./apply-rg.sh <resource_group_name>
+$ ./apply-rg.sh <resource_group_name>
 ```

 If you change something in the configuration (e.g. number of nodes) later, you can call this again and Azure will
@@ -42,26 +41,24 @@ take care about creating/modifying whatever is needed.
 If you need to delete all resources from a resource group, simply call:

 ```shell
-./clear-rg.sh <resource_group_name>
+$ ./clear-rg.sh <resource_group_name>
 ```

 **WARNING** this really deletes everything from your resource group, including everything that was later created by you!

-## Installing Ansible and the dependencies
-
-Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
-
 ## Generating an inventory for kubespray

 After you have applied the templates, you can generate an inventory with this call:

 ```shell
-./generate-inventory.sh <resource_group_name>
+$ ./generate-inventory.sh <resource_group_name>
 ```

 It will create the file ./inventory which can then be used with kubespray, e.g.:

 ```shell
-cd kubespray-root-dir
-ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all/all.yml" cluster.yml
+$ cd kubespray-root-dir
+$ ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all.yml" cluster.yml
 ```
@@ -9,11 +9,18 @@ if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
 exit 1
 fi

-ansible-playbook generate-templates.yml
-
-az deployment group create --template-file ./.generated/network.json -g $AZURE_RESOURCE_GROUP
-az deployment group create --template-file ./.generated/storage.json -g $AZURE_RESOURCE_GROUP
-az deployment group create --template-file ./.generated/availability-sets.json -g $AZURE_RESOURCE_GROUP
-az deployment group create --template-file ./.generated/bastion.json -g $AZURE_RESOURCE_GROUP
-az deployment group create --template-file ./.generated/masters.json -g $AZURE_RESOURCE_GROUP
-az deployment group create --template-file ./.generated/minions.json -g $AZURE_RESOURCE_GROUP
+if az &>/dev/null; then
+  echo "azure cli 2.0 found, using it instead of 1.0"
+  ./apply-rg_2.sh "$AZURE_RESOURCE_GROUP"
+elif azure &>/dev/null; then
+  ansible-playbook generate-templates.yml
+
+  azure group deployment create -f ./.generated/network.json -g $AZURE_RESOURCE_GROUP
+  azure group deployment create -f ./.generated/storage.json -g $AZURE_RESOURCE_GROUP
+  azure group deployment create -f ./.generated/availability-sets.json -g $AZURE_RESOURCE_GROUP
+  azure group deployment create -f ./.generated/bastion.json -g $AZURE_RESOURCE_GROUP
+  azure group deployment create -f ./.generated/masters.json -g $AZURE_RESOURCE_GROUP
+  azure group deployment create -f ./.generated/minions.json -g $AZURE_RESOURCE_GROUP
+else
+  echo "Azure cli not found"
+fi
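A side note on the detection pattern: `if az &>/dev/null` detects the CLI by actually executing it. A common, slightly cheaper alternative is to test for the binary on PATH; a sketch of that variant (not what the script above does):

```shell
# Sketch: detect which Azure CLI generation is installed without running it
if command -v az >/dev/null 2>&1; then
  echo "azure cli 2.0 (az) found"
elif command -v azure >/dev/null 2>&1; then
  echo "classic azure cli 1.0 found"
else
  echo "Azure cli not found" >&2
fi
```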
19 contrib/azurerm/apply-rg_2.sh Executable file
@@ -0,0 +1,19 @@
+#!/usr/bin/env bash
+
+set -e
+
+AZURE_RESOURCE_GROUP="$1"
+
+if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
+  echo "AZURE_RESOURCE_GROUP is missing"
+  exit 1
+fi
+
+ansible-playbook generate-templates.yml
+
+az group deployment create --template-file ./.generated/network.json -g $AZURE_RESOURCE_GROUP
+az group deployment create --template-file ./.generated/storage.json -g $AZURE_RESOURCE_GROUP
+az group deployment create --template-file ./.generated/availability-sets.json -g $AZURE_RESOURCE_GROUP
+az group deployment create --template-file ./.generated/bastion.json -g $AZURE_RESOURCE_GROUP
+az group deployment create --template-file ./.generated/masters.json -g $AZURE_RESOURCE_GROUP
+az group deployment create --template-file ./.generated/minions.json -g $AZURE_RESOURCE_GROUP
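A note on the `az` spelling above: `az group deployment create` was the early Azure CLI 2.0 form and has since been superseded by `az deployment group create`, which is the form the newer apply-rg.sh in this diff uses. The modern equivalent of the first call, as a sketch:

```shell
# Sketch: current CLI spelling of the same deployment call
az deployment group create --template-file ./.generated/network.json -g "$AZURE_RESOURCE_GROUP"
```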
@@ -9,6 +9,10 @@ if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
 exit 1
 fi

-ansible-playbook generate-templates.yml
-
-az group deployment create -g "$AZURE_RESOURCE_GROUP" --template-file ./.generated/clear-rg.json --mode Complete
+if az &>/dev/null; then
+  echo "azure cli 2.0 found, using it instead of 1.0"
+  ./clear-rg_2.sh "$AZURE_RESOURCE_GROUP"
+else
+  ansible-playbook generate-templates.yml
+  azure group deployment create -g "$AZURE_RESOURCE_GROUP" -f ./.generated/clear-rg.json -m Complete
+fi
14 contrib/azurerm/clear-rg_2.sh Executable file
@@ -0,0 +1,14 @@
+#!/usr/bin/env bash
+
+set -e
+
+AZURE_RESOURCE_GROUP="$1"
+
+if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
+  echo "AZURE_RESOURCE_GROUP is missing"
+  exit 1
+fi
+
+ansible-playbook generate-templates.yml
+
+az group deployment create -g "$AZURE_RESOURCE_GROUP" --template-file ./.generated/clear-rg.json --mode Complete
@@ -1,6 +1,5 @@
 ---
-- name: Generate Azure inventory
-  hosts: localhost
-  gather_facts: false
+- hosts: localhost
+  gather_facts: False
   roles:
     - generate-inventory

@@ -1,6 +1,5 @@
 ---
-- name: Generate Azure inventory
-  hosts: localhost
-  gather_facts: false
+- hosts: localhost
+  gather_facts: False
   roles:
     - generate-inventory_2

@@ -1,6 +1,5 @@
 ---
-- name: Generate Azure templates
-  hosts: localhost
-  gather_facts: false
+- hosts: localhost
+  gather_facts: False
   roles:
     - generate-templates
@@ -7,7 +7,7 @@ cluster_name: example
 # node that can be used to access the masters and minions
 use_bastion: false

-# Set this to a preferred name that will be used as the first part of the dns name for your bastotion host. For example: k8s-bastion.<azureregion>.cloudapp.azure.com.
+# Set this to a prefered name that will be used as the first part of the dns name for your bastotion host. For example: k8s-bastion.<azureregion>.cloudapp.azure.com.
 # This is convenient when exceptions have to be configured on a firewall to allow ssh to the given bastion host.
 # bastion_domain_prefix: k8s-bastion
@@ -4,12 +4,8 @@
 command: azure vm list-ip-address --json {{ azure_resource_group }}
 register: vm_list_cmd

-- name: Set vm_list
-  set_fact:
+- set_fact:
     vm_list: "{{ vm_list_cmd.stdout }}"

 - name: Generate inventory
-  template:
-    src: inventory.j2
-    dest: "{{ playbook_dir }}/inventory"
-    mode: "0644"
+  template: src=inventory.j2 dest="{{playbook_dir}}/inventory"
@@ -7,9 +7,9 @@
 {% endif %}
 {% endfor %}

-[kube_control_plane]
+[kube-master]
 {% for vm in vm_list %}
-{% if 'kube_control_plane' in vm.tags.roles %}
+{% if 'kube-master' in vm.tags.roles %}
 {{ vm.name }}
 {% endif %}
 {% endfor %}
@@ -21,13 +21,13 @@
 {% endif %}
 {% endfor %}

-[kube_node]
+[kube-node]
 {% for vm in vm_list %}
-{% if 'kube_node' in vm.tags.roles %}
+{% if 'kube-node' in vm.tags.roles %}
 {{ vm.name }}
 {% endif %}
 {% endfor %}

-[k8s_cluster:children]
-kube_node
-kube_control_plane
+[k8s-cluster:children]
+kube-node
+kube-master
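For orientation, here is roughly what the older template renders for a hypothetical two-VM resource group; the VM names are invented for illustration:

```shell
# Sketch: inspect the inventory rendered by the template above for
# hypothetical VMs "master-0" and "minion-0" (names are illustrative)
cat ./inventory
# [kube-master]
# master-0
#
# [kube-node]
# minion-0
#
# [k8s-cluster:children]
# kube-node
# kube-master
```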
@@ -8,24 +8,9 @@
 command: az vm list -o json --resource-group {{ azure_resource_group }}
 register: vm_list_cmd

-- name: Query Azure Load Balancer Public IP
-  command: az network public-ip show -o json -g {{ azure_resource_group }} -n kubernetes-api-pubip
-  register: lb_pubip_cmd
-
-- name: Set VM IP, roles lists and load balancer public IP
-  set_fact:
+- set_fact:
     vm_ip_list: "{{ vm_ip_list_cmd.stdout }}"
     vm_roles_list: "{{ vm_list_cmd.stdout }}"
-    lb_pubip: "{{ lb_pubip_cmd.stdout }}"

 - name: Generate inventory
-  template:
-    src: inventory.j2
-    dest: "{{ playbook_dir }}/inventory"
-    mode: "0644"
-
-- name: Generate Load Balancer variables
-  template:
-    src: loadbalancer_vars.j2
-    dest: "{{ playbook_dir }}/loadbalancer_vars.yml"
-    mode: "0644"
+  template: src=inventory.j2 dest="{{playbook_dir}}/inventory"
@@ -7,9 +7,9 @@
 {% endif %}
 {% endfor %}

-[kube_control_plane]
+[kube-master]
 {% for vm in vm_roles_list %}
-{% if 'kube_control_plane' in vm.tags.roles %}
+{% if 'kube-master' in vm.tags.roles %}
 {{ vm.name }}
 {% endif %}
 {% endfor %}
@@ -21,13 +21,14 @@
 {% endif %}
 {% endfor %}

-[kube_node]
+[kube-node]
 {% for vm in vm_roles_list %}
-{% if 'kube_node' in vm.tags.roles %}
+{% if 'kube-node' in vm.tags.roles %}
 {{ vm.name }}
 {% endif %}
 {% endfor %}

-[k8s_cluster:children]
-kube_node
-kube_control_plane
+[k8s-cluster:children]
+kube-node
+kube-master
@@ -1,8 +0,0 @@
-## External LB example config
-apiserver_loadbalancer_domain_name: {{ lb_pubip.dnsSettings.fqdn }}
-loadbalancer_apiserver:
-  address: {{ lb_pubip.ipAddress }}
-  port: 6443
-
-## Internal loadbalancers for apiservers
-loadbalancer_apiserver_localhost: false
@@ -1,4 +1,3 @@
----
 apiVersion: "2015-06-15"

 virtualNetworkName: "{{ azure_virtual_network_name | default('KubeVNET') }}"
@@ -24,14 +23,15 @@ bastionIPAddressName: bastion-pubip

 disablePasswordAuthentication: true

-sshKeyPath: "/home/{{ admin_username }}/.ssh/authorized_keys"
+sshKeyPath: "/home/{{admin_username}}/.ssh/authorized_keys"

 imageReference:
   publisher: "OpenLogic"
   offer: "CentOS"
-  sku: "7.5"
+  sku: "7.2"
   version: "latest"
-imageReferenceJson: "{{ imageReference | to_json }}"
+imageReferenceJson: "{{imageReference|to_json}}"

-storageAccountName: "sa{{ nameSuffix | replace('-', '') }}"
+storageAccountName: "sa{{nameSuffix | replace('-', '')}}"
 storageAccountType: "{{ azure_storage_account_type | default('Standard_LRS') }}"
@@ -1,20 +1,9 @@
----
-- name: Set base_dir
-  set_fact:
-    base_dir: "{{ playbook_dir }}/.generated/"
+- set_fact:
+    base_dir: "{{playbook_dir}}/.generated/"

-- name: Create base_dir
-  file:
-    path: "{{ base_dir }}"
-    state: directory
-    recurse: true
-    mode: "0755"
+- file: path={{base_dir}} state=directory recurse=true

-- name: Store json files in base_dir
-  template:
-    src: "{{ item }}"
-    dest: "{{ base_dir }}/{{ item }}"
-    mode: "0644"
+- template: src={{item}} dest="{{base_dir}}/{{item}}"
   with_items:
     - network.json
     - storage.json
@@ -27,4 +27,4 @@
 }
 }
 ]
 }
@@ -103,4 +103,4 @@
 }
 {% endif %}
 ]
 }
@@ -5,4 +5,4 @@
 "variables": {},
 "resources": [],
 "outputs": {}
 }
@@ -144,7 +144,7 @@
 "[concat('Microsoft.Network/networkInterfaces/', 'master-{{i}}-nic')]"
 ],
 "tags": {
-  "roles": "kube_control_plane,etcd"
+  "roles": "kube-master,etcd"
 },
 "apiVersion": "{{apiVersion}}",
 "properties": {
@@ -61,7 +61,7 @@
 "[concat('Microsoft.Network/networkInterfaces/', 'minion-{{i}}-nic')]"
 ],
 "tags": {
-  "roles": "kube_node"
+  "roles": "kube-node"
 },
 "apiVersion": "{{apiVersion}}",
 "properties": {
@@ -112,4 +112,4 @@
 } {% if not loop.last %},{% endif %}
 {% endfor %}
 ]
 }
@@ -16,4 +16,4 @@
 }
 }
 ]
 }
176
contrib/dind/README.md
Normal file
176
contrib/dind/README.md
Normal file
@@ -0,0 +1,176 @@
|
|||||||
|
# Kubespray DIND experimental setup
|
||||||
|
|
||||||
|
This ansible playbook creates local docker containers
|
||||||
|
to serve as Kubernetes "nodes", which in turn will run
|
||||||
|
"normal" Kubernetes docker containers, a mode usually
|
||||||
|
called DIND (Docker-IN-Docker).
|
||||||
|
|
||||||
|
The playbook has two roles:
|
||||||
|
- dind-host: creates the "nodes" as containers in localhost, with
|
||||||
|
appropriate settings for DIND (privileged, volume mapping for dind
|
||||||
|
storage, etc).
|
||||||
|
- dind-cluster: customizes each node container to have required
|
||||||
|
system packages installed, and some utils (swapoff, lsattr)
|
||||||
|
symlinked to /bin/true to ease mimicking a real node.
|
||||||
|
|
||||||
|
This playbook has been test with Ubuntu 16.04 as host and ubuntu:16.04
|
||||||
|
as docker images (note that dind-cluster has specific customization
|
||||||
|
for these images).
|
||||||
|
|
||||||
|
The playbook also creates a `/tmp/kubespray.dind.inventory_builder.sh`
|
||||||
|
helper (wraps up running `contrib/inventory_builder/inventory.py` with
|
||||||
|
node containers IPs and prefix).
|
||||||
|
|
||||||
|
## Deploying
|
||||||
|
|
||||||
|
See below for a complete successful run:
|
||||||
|
|
||||||
|
1. Create the node containers
|
||||||
|
|
||||||
|
~~~~
|
||||||
|
# From the kubespray root dir
|
||||||
|
cd contrib/dind
|
||||||
|
pip install -r requirements.txt
|
||||||
|
|
||||||
|
ansible-playbook -i hosts dind-cluster.yaml
|
||||||
|
|
||||||
|
# Back to kubespray root
|
||||||
|
cd ../..
|
||||||
|
~~~~
|
||||||
|
|
||||||
|
NOTE: if the playbook run fails with something like below error
|
||||||
|
message, you may need to specifically set `ansible_python_interpreter`,
|
||||||
|
see `./hosts` file for an example expanded localhost entry.
|
||||||
|
|
||||||
|
~~~
|
||||||
|
failed: [localhost] (item=kube-node1) => {"changed": false, "item": "kube-node1", "msg": "Failed to import docker or docker-py - No module named requests.exceptions. Try `pip install docker` or `pip install docker-py` (Python 2.6)"}
|
||||||
|
~~~
|
||||||
|
|
||||||
|
2. Customize kubespray-dind.yaml
|
||||||
|
|
||||||
|
Note that there's coupling between above created node containers
|
||||||
|
and `kubespray-dind.yaml` settings, in particular regarding selected `node_distro`
|
||||||
|
(as set in `group_vars/all/all.yaml`), and docker settings.

~~~
$EDITOR contrib/dind/kubespray-dind.yaml
~~~

3. Prepare the inventory and run the playbook

~~~
INVENTORY_DIR=inventory/local-dind
mkdir -p ${INVENTORY_DIR}
rm -f ${INVENTORY_DIR}/hosts.ini
CONFIG_FILE=${INVENTORY_DIR}/hosts.ini /tmp/kubespray.dind.inventory_builder.sh

ansible-playbook --become -e ansible_ssh_user=debian -i ${INVENTORY_DIR}/hosts.ini cluster.yml --extra-vars @contrib/dind/kubespray-dind.yaml
~~~

NOTE: You can also test other distros without editing any files by
passing `--extra-vars` on the command line as shown below,
replacing `DISTRO` with one of `debian`, `ubuntu`, `centos`, `fedora`:

~~~
cd contrib/dind
ansible-playbook -i hosts dind-cluster.yaml --extra-vars node_distro=DISTRO

cd ../..
CONFIG_FILE=inventory/local-dind/hosts.ini /tmp/kubespray.dind.inventory_builder.sh
ansible-playbook --become -e ansible_ssh_user=DISTRO -i inventory/local-dind/hosts.ini cluster.yml --extra-vars @contrib/dind/kubespray-dind.yaml --extra-vars bootstrap_os=DISTRO
~~~

## Resulting deployment

See below for an idea of what a completed deployment looks like,
from the host where you ran the kubespray playbooks.

### node_distro: debian

Running from an Ubuntu Xenial host:

~~~
$ uname -a
Linux ip-xx-xx-xx-xx 4.4.0-1069-aws #79-Ubuntu SMP Mon Sep 24 15:01:41 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

$ docker ps
CONTAINER ID  IMAGE       COMMAND                 CREATED         STATUS         PORTS  NAMES
1835dd183b75  debian:9.5  "sh -c 'apt-get -qy …"  43 minutes ago  Up 43 minutes         kube-node5
30b0af8d2924  debian:9.5  "sh -c 'apt-get -qy …"  43 minutes ago  Up 43 minutes         kube-node4
3e0d1510c62f  debian:9.5  "sh -c 'apt-get -qy …"  43 minutes ago  Up 43 minutes         kube-node3
738993566f94  debian:9.5  "sh -c 'apt-get -qy …"  44 minutes ago  Up 44 minutes         kube-node2
c581ef662ed2  debian:9.5  "sh -c 'apt-get -qy …"  44 minutes ago  Up 44 minutes         kube-node1

$ docker exec kube-node1 kubectl get node
NAME         STATUS  ROLES        AGE  VERSION
kube-node1   Ready   master,node  18m  v1.12.1
kube-node2   Ready   master,node  17m  v1.12.1
kube-node3   Ready   node         17m  v1.12.1
kube-node4   Ready   node         17m  v1.12.1
kube-node5   Ready   node         17m  v1.12.1

$ docker exec kube-node1 kubectl get pod --all-namespaces
NAMESPACE     NAME                                    READY  STATUS   RESTARTS  AGE
default       netchecker-agent-67489                  1/1    Running  0         2m51s
default       netchecker-agent-6qq6s                  1/1    Running  0         2m51s
default       netchecker-agent-fsw92                  1/1    Running  0         2m51s
default       netchecker-agent-fw6tl                  1/1    Running  0         2m51s
default       netchecker-agent-hostnet-8f2zb          1/1    Running  0         3m
default       netchecker-agent-hostnet-gq7ml          1/1    Running  0         3m
default       netchecker-agent-hostnet-jfkgv          1/1    Running  0         3m
default       netchecker-agent-hostnet-kwfwx          1/1    Running  0         3m
default       netchecker-agent-hostnet-r46nm          1/1    Running  0         3m
default       netchecker-agent-lxdrn                  1/1    Running  0         2m51s
default       netchecker-server-864bd4c897-9vstl      1/1    Running  0         2m40s
default       sh-68fcc6db45-qf55h                     1/1    Running  1         12m
kube-system   coredns-7598f59475-6vknq                1/1    Running  0         14m
kube-system   coredns-7598f59475-l5q5x                1/1    Running  0         14m
kube-system   kube-apiserver-kube-node1               1/1    Running  0         17m
kube-system   kube-apiserver-kube-node2               1/1    Running  0         18m
kube-system   kube-controller-manager-kube-node1      1/1    Running  0         18m
kube-system   kube-controller-manager-kube-node2      1/1    Running  0         18m
kube-system   kube-proxy-5xx9d                        1/1    Running  0         17m
kube-system   kube-proxy-cdqq4                        1/1    Running  0         17m
kube-system   kube-proxy-n64ls                        1/1    Running  0         17m
kube-system   kube-proxy-pswmj                        1/1    Running  0         18m
kube-system   kube-proxy-x89qw                        1/1    Running  0         18m
kube-system   kube-scheduler-kube-node1               1/1    Running  4         17m
kube-system   kube-scheduler-kube-node2               1/1    Running  4         18m
kube-system   kubernetes-dashboard-5db4d9f45f-548rl   1/1    Running  0         14m
kube-system   nginx-proxy-kube-node3                  1/1    Running  4         17m
kube-system   nginx-proxy-kube-node4                  1/1    Running  4         17m
kube-system   nginx-proxy-kube-node5                  1/1    Running  4         17m
kube-system   weave-net-42bfr                         2/2    Running  0         16m
kube-system   weave-net-6gt8m                         2/2    Running  0         16m
kube-system   weave-net-88nnc                         2/2    Running  0         16m
kube-system   weave-net-shckr                         2/2    Running  0         16m
kube-system   weave-net-xr46t                         2/2    Running  0         16m

$ docker exec kube-node1 curl -s http://localhost:31081/api/v1/connectivity_check
{"Message":"All 10 pods successfully reported back to the server","Absent":null,"Outdated":null}
~~~

## Using ./run-test-distros.sh

You can use `./run-test-distros.sh` to run a set of tests via DIND;
this excerpt from the script gives an idea of how it works:

~~~
# The SPEC file(s) must have two arrays as e.g.
# DISTROS=(debian centos)
# EXTRAS=(
#   'kube_network_plugin=calico'
#   'kube_network_plugin=flannel'
#   'kube_network_plugin=weave'
# )
# that will be tested in a "combinatory" way (e.g. from the above there'll
# be 6 test runs), creating a sequenced <spec_filename>-nn.out with each output.
#
# Each $EXTRAS element will be whitespace split, and passed as --extra-vars
# to the main kubespray ansible-playbook run.
~~~

See e.g. `test-some_distros-most_CNIs.env` and
`test-some_distros-kube_router_combo.env` in particular for richer
sets of CNI-specific `--extra-vars` combos.
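For illustration, a minimal SPEC file of your own (the file name here is hypothetical, not part of the repo) could look like this; with two distros and one `EXTRAS` entry it would produce two test runs:

~~~
# my-spec.env (hypothetical example)
DISTROS=(debian centos)
EXTRAS=(
  'kube_network_plugin=flannel'
)
~~~

Running `./run-test-distros.sh my-spec.env` would then leave `my-spec.env-01.out` and `my-spec.env-02.out` under the default `./out` directory.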
contrib/dind/dind-cluster.yaml (Normal file, 9 lines)
@@ -0,0 +1,9 @@
---
- hosts: localhost
  gather_facts: False
  roles:
    - { role: dind-host }

- hosts: containers
  roles:
    - { role: dind-cluster }
contrib/dind/group_vars/all/all.yaml (Normal file, 2 lines)
@@ -0,0 +1,2 @@
# See distro.yaml for supported node_distro images
node_distro: debian
contrib/dind/group_vars/all/distro.yaml (Normal file, 40 lines)
@@ -0,0 +1,40 @@
distro_settings:
  debian: &DEBIAN
    image: "debian:9.5"
    user: "debian"
    pid1_exe: /lib/systemd/systemd
    init: |
      sh -c "apt-get -qy update && apt-get -qy install systemd-sysv dbus && exec /sbin/init"
    raw_setup: apt-get -qy update && apt-get -qy install dbus python sudo iproute2
    raw_setup_done: test -x /usr/bin/sudo
    agetty_svc: getty@*
    ssh_service: ssh
    extra_packages: []
  ubuntu:
    <<: *DEBIAN
    image: "ubuntu:16.04"
    user: "ubuntu"
    init: |
      /sbin/init
  centos: &CENTOS
    image: "centos:7"
    user: "centos"
    pid1_exe: /usr/lib/systemd/systemd
    init: |
      /sbin/init
    raw_setup: yum -qy install policycoreutils dbus python sudo iproute iptables
    raw_setup_done: test -x /usr/bin/sudo
    agetty_svc: getty@* serial-getty@*
    ssh_service: sshd
    extra_packages: []
  fedora:
    <<: *CENTOS
    image: "fedora:latest"
    user: "fedora"
    raw_setup: yum -qy install policycoreutils dbus python sudo iproute iptables; mkdir -p /etc/modules-load.d
    extra_packages:
      - hostname
      - procps
      - findutils
      - kmod
      - iputils
contrib/dind/hosts (Normal file, 15 lines)
@@ -0,0 +1,15 @@
[local]
# If you created a virtualenv for ansible, you may need to specify running the
# python binary from there instead:
#localhost ansible_connection=local ansible_python_interpreter=/home/user/kubespray/.venv/bin/python
localhost ansible_connection=local

[containers]
kube-node1
kube-node2
kube-node3
kube-node4
kube-node5

[containers:vars]
ansible_connection=docker
contrib/dind/kubespray-dind.yaml (Normal file, 22 lines)
@@ -0,0 +1,22 @@
# kubespray-dind.yaml: minimal kubespray ansible playbook usable for DIND
# See contrib/dind/README.md
kube_api_anonymous_auth: true
kubeadm_enabled: true

kubelet_fail_swap_on: false

# Docker nodes need to have been created with the same "node_distro: debian"
# at contrib/dind/group_vars/all/all.yaml
bootstrap_os: debian

docker_version: latest

docker_storage_options: -s overlay2 --storage-opt overlay2.override_kernel_check=true -g /dind/docker

dns_mode: coredns

deploy_netchecker: True
netcheck_agent_image_repo: quay.io/l23network/k8s-netchecker-agent
netcheck_server_image_repo: quay.io/l23network/k8s-netchecker-server
netcheck_agent_image_tag: v1.0
netcheck_server_image_tag: v1.0
contrib/dind/requirements.txt (Normal file, 1 line)
@@ -0,0 +1 @@
docker
contrib/dind/roles/dind-cluster/tasks/main.yaml (Normal file, 70 lines)
@@ -0,0 +1,70 @@
- name: set_fact distro_setup
  set_fact:
    distro_setup: "{{ distro_settings[node_distro] }}"

- name: set_fact other distro settings
  set_fact:
    distro_user: "{{ distro_setup['user'] }}"
    distro_ssh_service: "{{ distro_setup['ssh_service'] }}"
    distro_extra_packages: "{{ distro_setup['extra_packages'] }}"

- name: Null-ify some linux tools to ease DIND
  file:
    src: "/bin/true"
    dest: "{{ item }}"
    state: link
    force: yes
  with_items:
    # DIND box may have swap enabled, don't bother
    - /sbin/swapoff
    # /etc/hosts handling would fail on trying to copy file attributes on edit,
    # void it by successfully returning nil output
    - /usr/bin/lsattr
    # disable selinux-isms, especially needed if running on a non-SELinux host
    - /usr/sbin/semodule

- name: Void installing dpkg docs and man pages on Debian based distros
  copy:
    content: |
      # Delete locales
      path-exclude=/usr/share/locale/*
      # Delete man pages
      path-exclude=/usr/share/man/*
      # Delete docs
      path-exclude=/usr/share/doc/*
      path-include=/usr/share/doc/*/copyright
    dest: /etc/dpkg/dpkg.cfg.d/01_nodoc
  when:
    - ansible_os_family == 'Debian'

- name: Install system packages to better match a full-fledged node
  package:
    name: "{{ item }}"
    state: present
  with_items: "{{ distro_extra_packages }} + [ 'rsyslog', 'openssh-server' ]"

- name: Start needed services
  service:
    name: "{{ item }}"
    state: started
  with_items:
    - rsyslog
    - "{{ distro_ssh_service }}"

- name: Create distro user "{{ distro_user }}"
  user:
    name: "{{ distro_user }}"
    uid: 1000
    # groups: sudo
    append: yes

- name: Allow password-less sudo to "{{ distro_user }}"
  copy:
    content: "{{ distro_user }} ALL=(ALL) NOPASSWD:ALL"
    dest: "/etc/sudoers.d/{{ distro_user }}"

- name: Add my pubkey to "{{ distro_user }}" user authorized keys
  authorized_key:
    user: "{{ distro_user }}"
    state: present
    key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/id_rsa.pub') }}"
contrib/dind/roles/dind-host/tasks/main.yaml (Normal file, 86 lines)
@@ -0,0 +1,86 @@
- name: set_fact distro_setup
  set_fact:
    distro_setup: "{{ distro_settings[node_distro] }}"

- name: set_fact other distro settings
  set_fact:
    distro_image: "{{ distro_setup['image'] }}"
    distro_init: "{{ distro_setup['init'] }}"
    distro_pid1_exe: "{{ distro_setup['pid1_exe'] }}"
    distro_raw_setup: "{{ distro_setup['raw_setup'] }}"
    distro_raw_setup_done: "{{ distro_setup['raw_setup_done'] }}"
    distro_agetty_svc: "{{ distro_setup['agetty_svc'] }}"

- name: Create dind node containers from "containers" inventory section
  docker_container:
    image: "{{ distro_image }}"
    name: "{{ item }}"
    state: started
    hostname: "{{ item }}"
    command: "{{ distro_init }}"
    # recreate: yes
    privileged: true
    tmpfs:
      - /sys/module/nf_conntrack/parameters
    volumes:
      - /boot:/boot
      - /lib/modules:/lib/modules
      - "{{ item }}:/dind/docker"
  register: containers
  with_items: "{{ groups.containers }}"
  tags:
    - addresses

- name: Gather list of containers IPs
  set_fact:
    addresses: "{{ containers.results | map(attribute='ansible_facts') | map(attribute='docker_container') | map(attribute='NetworkSettings') | map(attribute='IPAddress') | list }}"
  tags:
    - addresses

- name: Create inventory_builder helper already set with the list of node containers' IPs
  template:
    src: inventory_builder.sh.j2
    dest: /tmp/kubespray.dind.inventory_builder.sh
    mode: 0755
  tags:
    - addresses

- name: Install needed packages into node containers via raw, need to wait for possible systemd packages to finish installing
  raw: |
    # agetty processes churn a lot of cpu time failing on nonexistent ttys; STOP them early here, to reap them in the task below
    pkill -STOP agetty || true
    {{ distro_raw_setup_done }} && echo SKIPPED && exit 0
    until [ "$(readlink /proc/1/exe)" = "{{ distro_pid1_exe }}" ] ; do sleep 1; done
    {{ distro_raw_setup }}
  delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
  with_items: "{{ containers.results }}"
  register: result
  changed_when: result.stdout.find("SKIPPED") < 0

- name: Remove gettys from node containers
  raw: |
    until test -S /var/run/dbus/system_bus_socket; do sleep 1; done
    systemctl disable {{ distro_agetty_svc }}
    systemctl stop {{ distro_agetty_svc }}
  delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
  with_items: "{{ containers.results }}"
  changed_when: false

# Running systemd-machine-id-setup doesn't create a unique id for each node container on Debian,
# handle manually
- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave)
  raw: |
    echo {{ item | hash('sha1') }} > /etc/machine-id.new
    mv -b /etc/machine-id.new /etc/machine-id
    cmp /etc/machine-id /etc/machine-id~ || true
    systemctl daemon-reload
  delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
  with_items: "{{ containers.results }}"

- name: Early hack image install to adapt for DIND
  raw: |
    rm -fv /usr/bin/udevadm /usr/sbin/udevadm
  delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
  with_items: "{{ containers.results }}"
  register: result
  changed_when: result.stdout.find("removed") >= 0
contrib/dind/roles/dind-host/templates/inventory_builder.sh.j2 (Normal file, 3 lines)
@@ -0,0 +1,3 @@
#!/bin/bash
# NOTE: if you change HOST_PREFIX, you also need to edit the ./hosts [containers] section
HOST_PREFIX=kube-node python3 contrib/inventory_builder/inventory.py {% for ip in addresses %} {{ ip }} {% endfor %}
contrib/dind/run-test-distros.sh (Executable file, 93 lines)
@@ -0,0 +1,93 @@
#!/bin/bash
# Q&D test'em all: creates full DIND kubespray deploys
# for each distro, verifying it via netchecker.

info() {
  local msg="$*"
  local date="$(date -Isec)"
  echo "INFO: [$date] $msg"
}
pass_or_fail() {
  local rc="$?"
  local msg="$*"
  local date="$(date -Isec)"
  [ $rc -eq 0 ] && echo "PASS: [$date] $msg" || echo "FAIL: [$date] $msg"
  return $rc
}
test_distro() {
  local distro=${1:?};shift
  local extra="${*:-}"
  local prefix="${distro}[${extra}]"
  ansible-playbook -i hosts dind-cluster.yaml -e node_distro=$distro
  pass_or_fail "$prefix: dind-nodes" || return 1
  (cd ../..
   INVENTORY_DIR=inventory/local-dind
   mkdir -p ${INVENTORY_DIR}
   rm -f ${INVENTORY_DIR}/hosts.ini
   CONFIG_FILE=${INVENTORY_DIR}/hosts.ini /tmp/kubespray.dind.inventory_builder.sh
   # expand $extra with -e in front of each word
   extra_args=""; for extra_arg in $extra; do extra_args="$extra_args -e $extra_arg"; done
   ansible-playbook --become -e ansible_ssh_user=$distro -i \
       ${INVENTORY_DIR}/hosts.ini cluster.yml \
       -e @contrib/dind/kubespray-dind.yaml -e bootstrap_os=$distro ${extra_args}
   pass_or_fail "$prefix: kubespray"
  ) || return 1
  local node0=${NODES[0]}
  docker exec ${node0} kubectl get pod --all-namespaces
  pass_or_fail "$prefix: kube-api" || return 1
  let retries=60
  while ((retries--)); do
    # Some CNIs may set NodePort on the "main" node interface address (thus no localhost NodePort)
    # e.g. kube-router: https://github.com/cloudnativelabs/kube-router/pull/217
    docker exec ${node0} curl -m2 -s http://${NETCHECKER_HOST:?}:31081/api/v1/connectivity_check | grep successfully && break
    sleep 2
  done
  [ $retries -ge 0 ]
  pass_or_fail "$prefix: netcheck" || return 1
}

NODES=($(egrep ^kube-node hosts))
NETCHECKER_HOST=localhost

: ${OUTPUT_DIR:=./out}
mkdir -p ${OUTPUT_DIR}

# The SPEC file(s) must have two arrays as e.g.
# DISTROS=(debian centos)
# EXTRAS=(
#   'kube_network_plugin=calico'
#   'kube_network_plugin=flannel'
#   'kube_network_plugin=weave'
# )
# that will be tested in a "combinatory" way (e.g. from the above there'll
# be 6 test runs), creating a sequenced <spec_filename>-nn.out with each output.
#
# Each $EXTRAS element will be whitespace split, and passed as --extra-vars
# to the main kubespray ansible-playbook run.

SPECS=${*:?Missing SPEC files, e.g. test-most_distros-some_CNIs.env}
for spec in ${SPECS}; do
  unset DISTROS EXTRAS
  echo "Loading file=${spec} ..."
  . ${spec} || continue
  : ${DISTROS:?} || continue
  echo "DISTROS=${DISTROS[@]}"
  echo "EXTRAS->"
  printf "  %s\n" "${EXTRAS[@]}"
  let n=1
  for distro in ${DISTROS[@]}; do
    for extra in "${EXTRAS[@]:-NULL}"; do
      # Magic value to let this loop run once:
      [[ ${extra} == NULL ]] && unset extra
      docker rm -f ${NODES[@]}
      printf -v file_out "%s/%s-%02d.out" ${OUTPUT_DIR} ${spec} $((n++))
      {
        info "${distro}[${extra}] START: file_out=${file_out}"
        time test_distro ${distro} ${extra}
      } |& tee ${file_out}
      # sleeping for the sake of the human, to inspect the run if they want
      sleep 2m
    done
  done
done
egrep -H '^(....:|real)' $(ls -tr ${OUTPUT_DIR}/*.out)
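For orientation, a typical invocation of the script above might look like the following (a sketch; `test-most_distros-some_CNIs.env` is one of the spec files added below):

```
cd contrib/dind
./run-test-distros.sh test-most_distros-some_CNIs.env
# per-run logs land in ./out/<spec>-NN.out; the final egrep summarizes PASS/FAIL lines and timings
```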
contrib/dind/test-most_distros-some_CNIs.env (Normal file, 11 lines)
@@ -0,0 +1,11 @@
# Test spec file: used from ./run-test-distros.sh, will run
# each distro in $DISTROS overloading main kubespray ansible-playbook run
# Get all DISTROS from distro.yaml (shame no yaml parsing, but nuff anyway)
# DISTROS="${*:-$(egrep -o '^  \w+' group_vars/all/distro.yaml|paste -s)}"
DISTROS=(debian ubuntu centos fedora)

# Each line below will be added as --extra-vars to main playbook run
EXTRAS=(
  'kube_network_plugin=calico'
  'kube_network_plugin=weave'
)
contrib/dind/test-some_distros-kube_router_combo.env (Normal file, 8 lines)
@@ -0,0 +1,8 @@
DISTROS=(debian centos)
NETCHECKER_HOST=${NODES[0]}
EXTRAS=(
  'kube_network_plugin=kube-router {"kubeadm_enabled":true,"kube_router_run_service_proxy":false}'
  'kube_network_plugin=kube-router {"kubeadm_enabled":true,"kube_router_run_service_proxy":true}'
  'kube_network_plugin=kube-router {"kubeadm_enabled":false,"kube_router_run_service_proxy":false}'
  'kube_network_plugin=kube-router {"kubeadm_enabled":false,"kube_router_run_service_proxy":true}'
)
contrib/dind/test-some_distros-most_CNIs.env (Normal file, 8 lines)
@@ -0,0 +1,8 @@
DISTROS=(debian centos)
EXTRAS=(
  'kube_network_plugin=calico {"kubeadm_enabled":true}'
  'kube_network_plugin=canal {"kubeadm_enabled":true}'
  'kube_network_plugin=cilium {"kubeadm_enabled":true}'
  'kube_network_plugin=flannel {"kubeadm_enabled":true}'
  'kube_network_plugin=weave {"kubeadm_enabled":true}'
)
contrib/inventory_builder/inventory.py (Normal file, 344 lines)
@@ -0,0 +1,344 @@
#!/usr/bin/env python3
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Usage: inventory.py ip1 [ip2 ...]
# Examples: inventory.py 10.10.1.3 10.10.1.4 10.10.1.5
#
# Advanced usage:
# Add another host after initial creation: inventory.py 10.10.1.5
# Delete a host: inventory.py -10.10.1.3
# Delete a host by id: inventory.py -node1
#
# Load a YAML or JSON file with inventory data: inventory.py load hosts.yaml
# YAML file should be in the following format:
#    group1:
#      host1:
#        ip: X.X.X.X
#        var: val
#    group2:
#      host2:
#        ip: X.X.X.X

from collections import OrderedDict
try:
    import configparser
except ImportError:
    import ConfigParser as configparser

import os
import re
import sys

ROLES = ['all', 'kube-master', 'kube-node', 'etcd', 'k8s-cluster:children',
         'calico-rr', 'vault']
PROTECTED_NAMES = ROLES
AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'load']
_boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
                   '0': False, 'no': False, 'false': False, 'off': False}


def get_var_as_bool(name, default):
    value = os.environ.get(name, '')
    return _boolean_states.get(value.lower(), default)

# Configurable as shell vars start

CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory/sample/hosts.ini")
# Reconfigures cluster distribution at scale
SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 50))
MASSIVE_SCALE_THRESHOLD = int(os.environ.get("MASSIVE_SCALE_THRESHOLD", 200))

DEBUG = get_var_as_bool("DEBUG", True)
HOST_PREFIX = os.environ.get("HOST_PREFIX", "node")

# Configurable as shell vars end


class KubesprayInventory(object):

    def __init__(self, changed_hosts=None, config_file=None):
        self.config = configparser.ConfigParser(allow_no_value=True,
                                                delimiters=('\t', ' '))
        self.config_file = config_file
        if self.config_file:
            self.config.read(self.config_file)

        if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
            self.parse_command(changed_hosts[0], changed_hosts[1:])
            sys.exit(0)

        self.ensure_required_groups(ROLES)

        if changed_hosts:
            self.hosts = self.build_hostnames(changed_hosts)
            self.purge_invalid_hosts(self.hosts.keys(), PROTECTED_NAMES)
            self.set_all(self.hosts)
            self.set_k8s_cluster()
            self.set_etcd(list(self.hosts.keys())[:3])
            if len(self.hosts) >= SCALE_THRESHOLD:
                self.set_kube_master(list(self.hosts.keys())[3:5])
            else:
                self.set_kube_master(list(self.hosts.keys())[:2])
            self.set_kube_node(self.hosts.keys())
            if len(self.hosts) >= SCALE_THRESHOLD:
                self.set_calico_rr(list(self.hosts.keys())[:3])
        else:  # Show help if no options
            self.show_help()
            sys.exit(0)

        self.write_config(self.config_file)

    def write_config(self, config_file):
        if config_file:
            with open(config_file, 'w') as f:
                self.config.write(f)
        else:
            print("WARNING: Unable to save config. Make sure you set "
                  "CONFIG_FILE env var.")

    def debug(self, msg):
        if DEBUG:
            print("DEBUG: {0}".format(msg))

    def get_ip_from_opts(self, optstring):
        opts = optstring.split(' ')
        for opt in opts:
            if '=' not in opt:
                continue
            k, v = opt.split('=')
            if k == "ip":
                return v
        raise ValueError("IP parameter not found in options")

    def ensure_required_groups(self, groups):
        for group in groups:
            try:
                self.debug("Adding group {0}".format(group))
                self.config.add_section(group)
            except configparser.DuplicateSectionError:
                pass

    def get_host_id(self, host):
        '''Returns integer host ID (without padding) from a given hostname.'''
        try:
            short_hostname = host.split('.')[0]
            return int(re.findall(r"\d+$", short_hostname)[-1])
        except IndexError:
            raise ValueError("Host name must end in an integer")

    def build_hostnames(self, changed_hosts):
        existing_hosts = OrderedDict()
        highest_host_id = 0
        try:
            for host, opts in self.config.items('all'):
                existing_hosts[host] = opts
                host_id = self.get_host_id(host)
                if host_id > highest_host_id:
                    highest_host_id = host_id
        except configparser.NoSectionError:
            pass

        # FIXME(mattymo): Fix condition where delete then add reuses highest id
        next_host_id = highest_host_id + 1

        all_hosts = existing_hosts.copy()
        for host in changed_hosts:
            if host[0] == "-":
                realhost = host[1:]
                if self.exists_hostname(all_hosts, realhost):
                    self.debug("Marked {0} for deletion.".format(realhost))
                    all_hosts.pop(realhost)
                elif self.exists_ip(all_hosts, realhost):
                    self.debug("Marked {0} for deletion.".format(realhost))
                    self.delete_host_by_ip(all_hosts, realhost)
            elif host[0].isdigit():
                if self.exists_hostname(all_hosts, host):
                    self.debug("Skipping existing host {0}.".format(host))
                    continue
                elif self.exists_ip(all_hosts, host):
                    self.debug("Skipping existing host {0}.".format(host))
                    continue

                next_host = "{0}{1}".format(HOST_PREFIX, next_host_id)
                next_host_id += 1
                all_hosts[next_host] = "ansible_host={0} ip={1}".format(
                    host, host)
            elif host[0].isalpha():
                raise Exception("Adding hosts by hostname is not supported.")

        return all_hosts

    def exists_hostname(self, existing_hosts, hostname):
        return hostname in existing_hosts.keys()

    def exists_ip(self, existing_hosts, ip):
        for host_opts in existing_hosts.values():
            if ip == self.get_ip_from_opts(host_opts):
                return True
        return False

    def delete_host_by_ip(self, existing_hosts, ip):
        for hostname, host_opts in existing_hosts.items():
            if ip == self.get_ip_from_opts(host_opts):
                del existing_hosts[hostname]
                return
        raise ValueError("Unable to find host by IP: {0}".format(ip))

    def purge_invalid_hosts(self, hostnames, protected_names=[]):
        for role in self.config.sections():
            for host, _ in self.config.items(role):
                if host not in hostnames and host not in protected_names:
                    self.debug("Host {0} removed from role {1}".format(host,
                               role))
                    self.config.remove_option(role, host)

    def add_host_to_group(self, group, host, opts=""):
        self.debug("adding host {0} to group {1}".format(host, group))
        self.config.set(group, host, opts)

    def set_kube_master(self, hosts):
        for host in hosts:
            self.add_host_to_group('kube-master', host)

    def set_all(self, hosts):
        for host, opts in hosts.items():
            self.add_host_to_group('all', host, opts)

    def set_k8s_cluster(self):
        self.add_host_to_group('k8s-cluster:children', 'kube-node')
        self.add_host_to_group('k8s-cluster:children', 'kube-master')

    def set_calico_rr(self, hosts):
        for host in hosts:
            if host in self.config.items('kube-master'):
                self.debug("Not adding {0} to calico-rr group because it "
                           "conflicts with kube-master group".format(host))
                continue
            if host in self.config.items('kube-node'):
                self.debug("Not adding {0} to calico-rr group because it "
                           "conflicts with kube-node group".format(host))
                continue
            self.add_host_to_group('calico-rr', host)

    def set_kube_node(self, hosts):
        for host in hosts:
            if len(self.config['all']) >= SCALE_THRESHOLD:
                if self.config.has_option('etcd', host):
                    self.debug("Not adding {0} to kube-node group because of "
                               "scale deployment and host is in etcd "
                               "group.".format(host))
                    continue
            if len(self.config['all']) >= MASSIVE_SCALE_THRESHOLD:
                if self.config.has_option('kube-master', host):
                    self.debug("Not adding {0} to kube-node group because of "
                               "scale deployment and host is in kube-master "
                               "group.".format(host))
                    continue
            self.add_host_to_group('kube-node', host)

    def set_etcd(self, hosts):
        for host in hosts:
            self.add_host_to_group('etcd', host)
            self.add_host_to_group('vault', host)

    def load_file(self, files=None):
        '''Directly loads JSON, or YAML file to inventory.'''

        if not files:
            raise Exception("No input file specified.")

        import json
        import yaml

        for filename in list(files):
            # Try JSON, then YAML
            try:
                with open(filename, 'r') as f:
                    data = json.load(f)
            except ValueError:
                try:
                    with open(filename, 'r') as f:
                        data = yaml.load(f)
                    print("yaml")
                except ValueError:
                    raise Exception("Cannot read %s as JSON, YAML, or CSV",
                                    filename)

            self.ensure_required_groups(ROLES)
            self.set_k8s_cluster()
            for group, hosts in data.items():
                self.ensure_required_groups([group])
                for host, opts in hosts.items():
                    optstring = "ansible_host={0} ip={0}".format(opts['ip'])
                    for key, val in opts.items():
                        if key == "ip":
                            continue
                        optstring += " {0}={1}".format(key, val)

                    self.add_host_to_group('all', host, optstring)
                    self.add_host_to_group(group, host)
            self.write_config(self.config_file)

    def parse_command(self, command, args=None):
        if command == 'help':
            self.show_help()
        elif command == 'print_cfg':
            self.print_config()
        elif command == 'print_ips':
            self.print_ips()
        elif command == 'load':
            self.load_file(args)
        else:
            raise Exception("Invalid command specified.")

    def show_help(self):
        help_text = '''Usage: inventory.py ip1 [ip2 ...]
Examples: inventory.py 10.10.1.3 10.10.1.4 10.10.1.5

Available commands:
help - Display this message
print_cfg - Write inventory file to stdout
print_ips - Write a space-delimited list of IPs from "all" group

Advanced usage:
Add another host after initial creation: inventory.py 10.10.1.5
Delete a host: inventory.py -10.10.1.3
Delete a host by id: inventory.py -node1

Configurable env vars:
DEBUG                   Enable debug printing. Default: True
CONFIG_FILE             File to write config to. Default: ./inventory/sample/hosts.ini
HOST_PREFIX             Host prefix for generated hosts. Default: node
SCALE_THRESHOLD         Separate ETCD role if # of nodes >= 50
MASSIVE_SCALE_THRESHOLD Separate K8s master and ETCD if # of nodes >= 200
'''
        print(help_text)

    def print_config(self):
        self.config.write(sys.stdout)

    def print_ips(self):
        ips = []
        for host, opts in self.config.items('all'):
            ips.append(self.get_ip_from_opts(opts))
        print(' '.join(ips))


def main(argv=None):
    if not argv:
        argv = sys.argv[1:]
    KubesprayInventory(argv, CONFIG_FILE)

if __name__ == "__main__":
    sys.exit(main())
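Based on the usage notes at the top of the script, a typical invocation could look like this (the IPs and inventory path are placeholders):

```
CONFIG_FILE=inventory/local-dind/hosts.ini HOST_PREFIX=kube-node \
  python3 contrib/inventory_builder/inventory.py 10.90.3.2 10.90.3.3 10.90.3.4
```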
contrib/inventory_builder/requirements.txt (Normal file, 1 line)
@@ -0,0 +1 @@
configparser>=3.3.0
contrib/inventory_builder/setup.cfg (Normal file, 3 lines)
@@ -0,0 +1,3 @@
[metadata]
name = kubespray-inventory-builder
version = 0.1
contrib/inventory_builder/setup.py (Normal file, 29 lines)
@@ -0,0 +1,29 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools

# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=[],
    pbr=False)
contrib/inventory_builder/test-requirements.txt (Normal file, 3 lines)
@@ -0,0 +1,3 @@
hacking>=0.10.2
pytest>=2.8.0
mock>=1.3.0
contrib/inventory_builder/tests/test_inventory.py (Normal file, 240 lines)
@@ -0,0 +1,240 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import unittest

from collections import OrderedDict
import sys

path = "./contrib/inventory_builder/"
if path not in sys.path:
    sys.path.append(path)

import inventory


class TestInventory(unittest.TestCase):
    @mock.patch('inventory.sys')
    def setUp(self, sys_mock):
        sys_mock.exit = mock.Mock()
        super(TestInventory, self).setUp()
        self.data = ['10.90.3.2', '10.90.3.3', '10.90.3.4']
        self.inv = inventory.KubesprayInventory()

    def test_get_ip_from_opts(self):
        optstring = "ansible_host=10.90.3.2 ip=10.90.3.2"
        expected = "10.90.3.2"
        result = self.inv.get_ip_from_opts(optstring)
        self.assertEqual(expected, result)

    def test_get_ip_from_opts_invalid(self):
        optstring = "notanaddr=value something random!chars:D"
        self.assertRaisesRegexp(ValueError, "IP parameter not found",
                                self.inv.get_ip_from_opts, optstring)

    def test_ensure_required_groups(self):
        groups = ['group1', 'group2']
        self.inv.ensure_required_groups(groups)
        for group in groups:
            self.assertTrue(group in self.inv.config.sections())

    def test_get_host_id(self):
        hostnames = ['node99', 'no99de01', '01node01', 'node1.domain',
                     'node3.xyz123.aaa']
        expected = [99, 1, 1, 1, 3]
        for hostname, expected in zip(hostnames, expected):
            result = self.inv.get_host_id(hostname)
            self.assertEqual(expected, result)

    def test_get_host_id_invalid(self):
        bad_hostnames = ['node', 'no99de', '01node', 'node.111111']
        for hostname in bad_hostnames:
            self.assertRaisesRegexp(ValueError, "Host name must end in an",
                                    self.inv.get_host_id, hostname)

    def test_build_hostnames_add_one(self):
        changed_hosts = ['10.90.0.2']
        expected = OrderedDict([('node1',
                                 'ansible_host=10.90.0.2 ip=10.90.0.2')])
        result = self.inv.build_hostnames(changed_hosts)
        self.assertEqual(expected, result)

    def test_build_hostnames_add_duplicate(self):
        changed_hosts = ['10.90.0.2']
        expected = OrderedDict([('node1',
                                 'ansible_host=10.90.0.2 ip=10.90.0.2')])
        self.inv.config['all'] = expected
        result = self.inv.build_hostnames(changed_hosts)
        self.assertEqual(expected, result)

    def test_build_hostnames_add_two(self):
        changed_hosts = ['10.90.0.2', '10.90.0.3']
        expected = OrderedDict([
            ('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
            ('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
        self.inv.config['all'] = OrderedDict()
        result = self.inv.build_hostnames(changed_hosts)
        self.assertEqual(expected, result)

    def test_build_hostnames_delete_first(self):
        changed_hosts = ['-10.90.0.2']
        existing_hosts = OrderedDict([
            ('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
            ('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
        self.inv.config['all'] = existing_hosts
        expected = OrderedDict([
            ('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
        result = self.inv.build_hostnames(changed_hosts)
        self.assertEqual(expected, result)

    def test_exists_hostname_positive(self):
        hostname = 'node1'
        expected = True
        existing_hosts = OrderedDict([
            ('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
            ('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
        result = self.inv.exists_hostname(existing_hosts, hostname)
        self.assertEqual(expected, result)

    def test_exists_hostname_negative(self):
        hostname = 'node99'
        expected = False
        existing_hosts = OrderedDict([
            ('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
            ('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
        result = self.inv.exists_hostname(existing_hosts, hostname)
        self.assertEqual(expected, result)

    def test_exists_ip_positive(self):
        ip = '10.90.0.2'
        expected = True
        existing_hosts = OrderedDict([
            ('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
            ('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
        result = self.inv.exists_ip(existing_hosts, ip)
        self.assertEqual(expected, result)

    def test_exists_ip_negative(self):
        ip = '10.90.0.200'
        expected = False
        existing_hosts = OrderedDict([
            ('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
            ('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
        result = self.inv.exists_ip(existing_hosts, ip)
        self.assertEqual(expected, result)

    def test_delete_host_by_ip_positive(self):
        ip = '10.90.0.2'
        expected = OrderedDict([
            ('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
        existing_hosts = OrderedDict([
            ('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
            ('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
        self.inv.delete_host_by_ip(existing_hosts, ip)
        self.assertEqual(expected, existing_hosts)

    def test_delete_host_by_ip_negative(self):
        ip = '10.90.0.200'
        existing_hosts = OrderedDict([
            ('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
            ('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
        self.assertRaisesRegexp(ValueError, "Unable to find host",
                                self.inv.delete_host_by_ip, existing_hosts, ip)

    def test_purge_invalid_hosts(self):
        proper_hostnames = ['node1', 'node2']
        bad_host = 'doesnotbelong2'
        existing_hosts = OrderedDict([
            ('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
            ('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3'),
            ('doesnotbelong2', 'whateveropts=ilike')])
        self.inv.config['all'] = existing_hosts
        self.inv.purge_invalid_hosts(proper_hostnames)
        self.assertTrue(bad_host not in self.inv.config['all'].keys())

    def test_add_host_to_group(self):
        group = 'etcd'
        host = 'node1'
        opts = 'ip=10.90.0.2'

        self.inv.add_host_to_group(group, host, opts)
        self.assertEqual(self.inv.config[group].get(host), opts)

    def test_set_kube_master(self):
        group = 'kube-master'
        host = 'node1'

        self.inv.set_kube_master([host])
        self.assertTrue(host in self.inv.config[group])

    def test_set_all(self):
        group = 'all'
        hosts = OrderedDict([
            ('node1', 'opt1'),
            ('node2', 'opt2')])

        self.inv.set_all(hosts)
        for host, opt in hosts.items():
            self.assertEqual(self.inv.config[group].get(host), opt)

    def test_set_k8s_cluster(self):
        group = 'k8s-cluster:children'
        expected_hosts = ['kube-node', 'kube-master']

        self.inv.set_k8s_cluster()
        for host in expected_hosts:
            self.assertTrue(host in self.inv.config[group])

    def test_set_kube_node(self):
        group = 'kube-node'
        host = 'node1'

        self.inv.set_kube_node([host])
        self.assertTrue(host in self.inv.config[group])

    def test_set_etcd(self):
        group = 'etcd'
        host = 'node1'

        self.inv.set_etcd([host])
        self.assertTrue(host in self.inv.config[group])

    def test_scale_scenario_one(self):
        num_nodes = 50
        hosts = OrderedDict()

        for hostid in range(1, num_nodes+1):
            hosts["node" + str(hostid)] = ""

        self.inv.set_all(hosts)
        self.inv.set_etcd(hosts.keys()[0:3])
        self.inv.set_kube_master(hosts.keys()[0:2])
        self.inv.set_kube_node(hosts.keys())
        for h in range(3):
            self.assertFalse(hosts.keys()[h] in self.inv.config['kube-node'])

    def test_scale_scenario_two(self):
        num_nodes = 500
        hosts = OrderedDict()

        for hostid in range(1, num_nodes+1):
            hosts["node" + str(hostid)] = ""

        self.inv.set_all(hosts)
        self.inv.set_etcd(hosts.keys()[0:3])
        self.inv.set_kube_master(hosts.keys()[3:5])
        self.inv.set_kube_node(hosts.keys())
        for h in range(5):
            self.assertFalse(hosts.keys()[h] in self.inv.config['kube-node'])
contrib/inventory_builder/tox.ini (Normal file, 28 lines)
@@ -0,0 +1,28 @@
[tox]
minversion = 1.6
skipsdist = True
envlist = pep8, py27

[testenv]
whitelist_externals = py.test
usedevelop = True
deps =
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/test-requirements.txt
setenv = VIRTUAL_ENV={envdir}
passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
commands = pytest -vv #{posargs:./tests}

[testenv:pep8]
usedevelop = False
whitelist_externals = bash
commands =
    bash -c "find {toxinidir}/* -type f -name '*.py' -print0 | xargs -0 flake8"

[testenv:venv]
commands = {posargs}

[flake8]
show-source = true
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg
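With the configuration above, the test suite and style checks can be driven via tox (a sketch, assuming tox is installed):

```
cd contrib/inventory_builder
tox -e pep8   # flake8 style checks only
tox           # full envlist: pep8 and py27
```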
contrib/kvm-setup/README.md (Normal file, 11 lines)
@@ -0,0 +1,11 @@
# Kubespray on KVM Virtual Machines hypervisor preparation

A simple playbook to ensure your system has the right settings to enable Kubespray
deployment on VMs.

This playbook does not create Virtual Machines, nor does it run Kubespray itself.

### User creation

If you want to create a user for running the Kubespray deployment, you should specify
both `k8s_deployment_user` and `k8s_deployment_user_pkey_path`.
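As a rough sketch (not spelled out in the README itself), assuming the playbook is run on the hypervisor and the commented defaults from `group_vars/all` below, the two variables could be supplied on the command line:

```
ansible-playbook contrib/kvm-setup/kvm-setup.yml \
  -e k8s_deployment_user=kubespray \
  -e k8s_deployment_user_pkey_path=/tmp/ssh_rsa
```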
contrib/kvm-setup/group_vars/all (Normal file, 3 lines)
@@ -0,0 +1,3 @@
#k8s_deployment_user: kubespray
#k8s_deployment_user_pkey_path: /tmp/ssh_rsa

contrib/kvm-setup/kvm-setup.yml (Normal file, 8 lines)
@@ -0,0 +1,8 @@
---
- hosts: localhost
  gather_facts: False
  become: yes
  vars:
    - bootstrap_os: none
  roles:
    - kvm-setup
contrib/kvm-setup/roles/kvm-setup/tasks/main.yml (Normal file, 46 lines)
@@ -0,0 +1,46 @@
---

- name: Upgrade all packages to the latest version (yum)
  yum:
    name: '*'
    state: latest
  when: ansible_os_family == "RedHat"

- name: Install required packages
  yum:
    name: "{{ item }}"
    state: latest
  with_items:
    - bind-utils
    - ntp
  when: ansible_os_family == "RedHat"

- name: Install required packages
  apt:
    upgrade: yes
    update_cache: yes
    cache_valid_time: 3600
    name: "{{ item }}"
    state: latest
    install_recommends: no
  with_items:
    - dnsutils
    - ntp
  when: ansible_os_family == "Debian"

- name: Upgrade all packages to the latest version (apt)
  shell: apt-get -o \
           Dpkg::Options::=--force-confdef -o \
           Dpkg::Options::=--force-confold -q -y \
           dist-upgrade
  environment:
    DEBIAN_FRONTEND: noninteractive
  when: ansible_os_family == "Debian"


# Create deployment user if required
- include: user.yml
  when: k8s_deployment_user is defined

# Set proper sysctl values
- include: sysctl.yml
contrib/kvm-setup/roles/kvm-setup/tasks/sysctl.yml (Normal file, 46 lines)
@@ -0,0 +1,46 @@
---
- name: Load br_netfilter module
  modprobe:
    name: br_netfilter
    state: present
  register: br_netfilter

- name: Add br_netfilter into /etc/modules
  lineinfile:
    dest: /etc/modules
    state: present
    line: 'br_netfilter'
  when: br_netfilter is defined and ansible_os_family == 'Debian'

- name: Add br_netfilter into /etc/modules-load.d/kubespray.conf
  copy:
    dest: /etc/modules-load.d/kubespray.conf
    content: |-
      ### This file is managed by Ansible
      br-netfilter
    owner: root
    group: root
    mode: 0644
  when: br_netfilter is defined


- name: Enable net.ipv4.ip_forward in sysctl
  sysctl:
    name: net.ipv4.ip_forward
    value: 1
    sysctl_file: /etc/sysctl.d/ipv4-ip_forward.conf
    state: present
    reload: yes

- name: Set bridge-nf-call-{arptables,iptables} to 0
  sysctl:
    name: "{{ item }}"
    state: present
    value: 0
    sysctl_file: /etc/sysctl.d/bridge-nf-call.conf
    reload: yes
  with_items:
    - net.bridge.bridge-nf-call-arptables
    - net.bridge.bridge-nf-call-ip6tables
    - net.bridge.bridge-nf-call-iptables
  when: br_netfilter is defined
contrib/kvm-setup/roles/kvm-setup/tasks/user.yml (Normal file, 46 lines)
@@ -0,0 +1,46 @@
---
- name: Create user {{ k8s_deployment_user }}
  user:
    name: "{{ k8s_deployment_user }}"
    groups: adm
    shell: /bin/bash

- name: Ensure that .ssh exists
  file:
    path: "/home/{{ k8s_deployment_user }}/.ssh"
    state: directory
    owner: "{{ k8s_deployment_user }}"
    group: "{{ k8s_deployment_user }}"

- name: Configure sudo for deployment user
  copy:
    content: |
      %{{ k8s_deployment_user }} ALL=(ALL) NOPASSWD: ALL
    dest: "/etc/sudoers.d/55-k8s-deployment"
    owner: root
    group: root
    mode: 0644

- name: Write private SSH key
  copy:
    src: "{{ k8s_deployment_user_pkey_path }}"
    dest: "/home/{{ k8s_deployment_user }}/.ssh/id_rsa"
    mode: 0400
    owner: "{{ k8s_deployment_user }}"
    group: "{{ k8s_deployment_user }}"
  when: k8s_deployment_user_pkey_path is defined

- name: Write public SSH key
  shell: "ssh-keygen -y -f /home/{{ k8s_deployment_user }}/.ssh/id_rsa \
          > /home/{{ k8s_deployment_user }}/.ssh/authorized_keys"
  args:
    creates: "/home/{{ k8s_deployment_user }}/.ssh/authorized_keys"
  when: k8s_deployment_user_pkey_path is defined

- name: Fix ssh-pub-key permissions
  file:
    path: "/home/{{ k8s_deployment_user }}/.ssh/authorized_keys"
    mode: 0600
    owner: "{{ k8s_deployment_user }}"
    group: "{{ k8s_deployment_user }}"
  when: k8s_deployment_user_pkey_path is defined
contrib/metallb/README.md (Normal file, 10 lines)
@@ -0,0 +1,10 @@
# Deploy MetalLB into Kubespray/Kubernetes
```
MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type "LoadBalancer" in clusters that don't run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.
```
This playbook aims to automate [this MetalLB layer 2 tutorial](https://metallb.universe.tf/tutorial/layer2/). It deploys MetalLB into kubernetes and sets up a layer 2 load balancer.

## Install
```
ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contrib/metallb/metallb.yml
```
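After the playbook runs, a quick way to sanity-check the deployment with plain kubectl (not part of the playbook itself; the namespace and configmap names come from the manifests below) would be:

```
kubectl get pods -n metallb-system
kubectl get configmap config -n metallb-system -o yaml
```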
contrib/metallb/metallb.yml (Normal file, 6 lines)
@@ -0,0 +1,6 @@
---
- hosts: kube-master[0]
  tags:
    - "provision"
  roles:
    - { role: provision }
contrib/metallb/roles/provision/defaults/main.yml (Normal file, 7 lines)
@@ -0,0 +1,7 @@
---
metallb:
  ip_range: "10.5.0.50-10.5.0.99"
  limits:
    cpu: "100m"
    memory: "100Mi"
  port: "7472"
contrib/metallb/roles/provision/tasks/main.yml (Normal file, 17 lines)
@@ -0,0 +1,17 @@
---
- name: "Kubernetes Apps | Lay Down MetalLB"
  become: true
  template: { src: "{{ item }}.j2", dest: "{{ kube_config_dir }}/{{ item }}" }
  with_items: ["metallb.yml", "metallb-config.yml"]
  register: "rendering"
  when:
    - "inventory_hostname == groups['kube-master'][0]"
- name: "Kubernetes Apps | Install and configure MetalLB"
  kube:
    name: "MetalLB"
    kubectl: "{{ bin_dir }}/kubectl"
    filename: "{{ kube_config_dir }}/{{ item.item }}"
    state: "{{ item.changed | ternary('latest','present') }}"
  with_items: "{{ rendering.results }}"
  when:
    - "inventory_hostname == groups['kube-master'][0]"
contrib/metallb/roles/provision/templates/metallb-config.yml.j2 (Normal file, 13 lines)
@@ -0,0 +1,13 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: loadbalanced
        protocol: layer2
        addresses:
          - {{ metallb.ip_range }}
254
contrib/metallb/roles/provision/templates/metallb.yml.j2
Normal file
254
contrib/metallb/roles/provision/templates/metallb.yml.j2
Normal file
@@ -0,0 +1,254 @@
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
  labels:
    app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: metallb-system
  name: controller
  labels:
    app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: metallb-system
  name: speaker
  labels:
    app: metallb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metallb-system:controller
  labels:
    app: metallb
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
  resources: ["services/status"]
  verbs: ["update"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metallb-system:speaker
  labels:
    app: metallb
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: metallb-system
  name: leader-election
  labels:
    app: metallb
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  resourceNames: ["metallb-speaker"]
  verbs: ["get", "update"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: metallb-system
  name: config-watcher
  labels:
    app: metallb
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create"]
---
## Role bindings
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metallb-system:controller
  labels:
    app: metallb
subjects:
- kind: ServiceAccount
  name: controller
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metallb-system:speaker
  labels:
    app: metallb
subjects:
- kind: ServiceAccount
  name: speaker
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:speaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: metallb-system
  name: config-watcher
  labels:
    app: metallb
subjects:
- kind: ServiceAccount
  name: controller
- kind: ServiceAccount
  name: speaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: config-watcher
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: metallb-system
  name: leader-election
  labels:
    app: metallb
subjects:
- kind: ServiceAccount
  name: speaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: leader-election
---
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  namespace: metallb-system
  name: speaker
  labels:
    app: metallb
    component: speaker
spec:
  selector:
    matchLabels:
      app: metallb
      component: speaker
  template:
    metadata:
      labels:
        app: metallb
        component: speaker
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "{{ metallb.port }}"
    spec:
      serviceAccountName: speaker
      terminationGracePeriodSeconds: 0
      hostNetwork: true
      containers:
      - name: speaker
        image: metallb/speaker:v0.6.2
        imagePullPolicy: IfNotPresent
        args:
        - --port={{ metallb.port }}
        - --config=config
        env:
        - name: METALLB_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        ports:
        - name: monitoring
          containerPort: {{ metallb.port }}
        resources:
          limits:
            cpu: {{ metallb.limits.cpu }}
            memory: {{ metallb.limits.memory }}
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - all
            add:
            - net_raw
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  namespace: metallb-system
  name: controller
  labels:
    app: metallb
    component: controller
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: metallb
      component: controller
  template:
    metadata:
      labels:
        app: metallb
        component: controller
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "{{ metallb.port }}"
    spec:
      serviceAccountName: controller
      terminationGracePeriodSeconds: 0
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534  # nobody
      containers:
      - name: controller
        image: metallb/controller:v0.6.2
        imagePullPolicy: IfNotPresent
        args:
        - --port={{ metallb.port }}
        - --config=config
        ports:
        - name: monitoring
          containerPort: {{ metallb.port }}
        resources:
          limits:
            cpu: {{ metallb.limits.cpu }}
            memory: {{ metallb.limits.memory }}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - all
          readOnlyRootFilesystem: true
---
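
After the playbook applies these manifests, a quick sanity check is to confirm that the controller Deployment and the speaker DaemonSet are up (assuming `kubectl` is configured for the cluster):

```
kubectl -n metallb-system get deployment controller
kubectl -n metallb-system get daemonset speaker
kubectl -n metallb-system get pods
```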
92 contrib/network-storage/glusterfs/README.md Normal file
@@ -0,0 +1,92 @@
# Deploying a Kubespray Kubernetes Cluster with GlusterFS

You can either deploy using Ansible on its own, supplying your own inventory file, or use Terraform to create the VMs and then provide a dynamic inventory to Ansible. The following two sections are self-contained; you don't need to go through one to use the other. If you want to provision with Terraform, you can skip the **Using an Ansible inventory** section, and if you want to provision with a pre-built Ansible inventory, you can skip the **Using Terraform and Ansible** section.

## Using an Ansible inventory

In the same directory as this README you should find a file named `inventory.example` which contains an example setup. Please note that, in addition to the Kubernetes nodes/masters, we define a set of machines for GlusterFS and add them to the group `[gfs-cluster]`, which in turn is added to the larger `[network-storage]` group as a child group, as sketched below.
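
A hypothetical minimal layout of those groups (host name and IP are placeholders; `inventory.example` has the authoritative version):

```
[gfs-cluster]
gfs-node-1 ansible_ssh_host=192.168.0.150 ip=192.168.0.150

[network-storage:children]
gfs-cluster
```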

Change that file to reflect your local setup (adding or removing machines and setting the appropriate IP addresses), and save it as `inventory/sample/k8s_gfs_inventory`. Make sure that the settings in `inventory/sample/group_vars/all.yml` make sense for your deployment. Then change to the kubespray root folder and execute (assuming the machines all run Ubuntu):

```
ansible-playbook -b --become-user=root -i inventory/sample/k8s_gfs_inventory --user=ubuntu ./cluster.yml
```

This will provision your Kubernetes cluster. Then, to provision and configure the GlusterFS cluster, from the same directory execute:

```
ansible-playbook -b --become-user=root -i inventory/sample/k8s_gfs_inventory --user=ubuntu ./contrib/network-storage/glusterfs/glusterfs.yml
```

If your machines are not using Ubuntu, change `--user=ubuntu` to the correct user. Alternatively, if your Kubernetes machines use one OS and your GlusterFS machines another, you can instead set the `ansible_ssh_user=<correct-user>` variable in the inventory file that you just created, for each machine/VM:

```
k8s-master-1 ansible_ssh_host=192.168.0.147 ip=192.168.0.147 ansible_ssh_user=core
k8s-master-node-1 ansible_ssh_host=192.168.0.148 ip=192.168.0.148 ansible_ssh_user=core
k8s-master-node-2 ansible_ssh_host=192.168.0.146 ip=192.168.0.146 ansible_ssh_user=core
```

## Using Terraform and Ansible

The first step is to fill in a `my-kubespray-gluster-cluster.tfvars` file with the specification desired for your cluster. An example with all required variables looks like:

```
cluster_name = "cluster1"
number_of_k8s_masters = "1"
number_of_k8s_masters_no_floating_ip = "2"
number_of_k8s_nodes_no_floating_ip = "0"
number_of_k8s_nodes = "0"
public_key_path = "~/.ssh/my-desired-key.pub"
image = "Ubuntu 16.04"
ssh_user = "ubuntu"
flavor_k8s_node = "node-flavor-id-in-your-openstack"
flavor_k8s_master = "master-flavor-id-in-your-openstack"
network_name = "k8s-network"
floatingip_pool = "net_external"

# GlusterFS variables
flavor_gfs_node = "gluster-flavor-id-in-your-openstack"
image_gfs = "Ubuntu 16.04"
number_of_gfs_nodes_no_floating_ip = "3"
gfs_volume_size_in_gb = "50"
ssh_user_gfs = "ubuntu"
```

As explained in the general Terraform/OpenStack guide, you need to source your OpenStack credentials file, add your SSH key to the ssh-agent, and set up environment variables for Terraform:

```
$ source ~/.stackrc
$ eval $(ssh-agent -s)
$ ssh-add ~/.ssh/my-desired-key
$ echo Setting up Terraform creds && \
  export TF_VAR_username=${OS_USERNAME} && \
  export TF_VAR_password=${OS_PASSWORD} && \
  export TF_VAR_tenant=${OS_TENANT_NAME} && \
  export TF_VAR_auth_url=${OS_AUTH_URL}
```

Then, from the kubespray directory (the root of the Git checkout), issue the following Terraform command to create the VMs for the cluster:

```
terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kubespray-gluster-cluster.tfvars contrib/terraform/openstack
```

This will create both your Kubernetes and GlusterFS VMs. Make sure that the Ansible file `contrib/terraform/openstack/group_vars/all.yml` includes any Ansible variables that you want to set (for instance, the machine type used for bootstrapping), as sketched below.
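
For example, a minimal `group_vars/all.yml` might set something like the following (the variable names here are illustrative and may differ across kubespray versions; check which ones your checkout actually consumes):

```
# Hypothetical example; adjust names and values to your environment
bootstrap_os: ubuntu
ansible_user: ubuntu
```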

Then, provision your Kubernetes (kubespray) cluster with the following Ansible call:

```
ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./cluster.yml
```

Finally, provision the GlusterFS nodes and add the persistent volume setup for GlusterFS in Kubernetes through the following Ansible call:

```
ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./contrib/network-storage/glusterfs/glusterfs.yml
```
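
Once that finishes, you should be able to confirm that the GlusterFS-backed volume objects are registered in Kubernetes (a quick check, assuming `kubectl` is configured against the new cluster):

```
kubectl get pv
kubectl get endpoints
```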

If you need to destroy the cluster, you can run:

```
terraform destroy -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kubespray-gluster-cluster.tfvars contrib/terraform/openstack
```
Some files were not shown because too many files have changed in this diff.