* Refactor control plane upgrades with reconfiguration support
Adds revised support for:
- The previously removed `--config` argument for `kubeadm upgrade apply`
- Changes to `ClusterConfiguration` as part of the `upgrade-cluster.yml` playbook lifecycle
- kubeadm-config `v1beta4` `UpgradeConfiguration` for the `kubeadm upgrade apply` command: [UpgradeConfiguration v1beta4](https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/#kubeadm-k8s-io-v1beta4-UpgradeConfiguration).
* Add kubeadm upgrade node support
Per discussion:
- Use `kubeadm upgrade node` on secondary control plane upgrades
- Add support for UpgradeConfiguration.node in kubeadm-config.v1beta4
- Remove redundant `allowRCUpgrades` config
- Revert from `block` for first and secondary control plane back to unblocked tasks since they no longer share much code and it's more readable this way
* Add kubelet and kube-proxy reconfiguration to upgrades
* Fix task to use `kubeadm init phase etcd local`
* Rebase with changes from "Adapt checksums and versions to new hashes updater" PR
* Add `imagePullPolicy` and `imagePullSerial` to kubeadm-config v1beta4 `InitConfiguration.nodeRegistration`
(cherry picked from commit b551fe083d)
[WARNING][1] kube-controllers/runconfig.go 193: unable to list KubeControllersConfiguration(default) error=connection is unauthorized: kubecontrollersconfigurations.crd.projectcalico.org "default" is forbidden: User "system:serviceaccount:kube-system:calico-kube-controllers" cannot list resource "kubecontrollersconfigurations" in API group "crd.projectcalico.org" at the cluster scope
Co-authored-by: darkobas <marko@datafund.io>
This avoids spurious failure with 'localhost'.
It should also be more correct: the inventory can contain uncached hosts
which are not in `k8s_cluster` and are therefore not Kubespray's
business.
(We still use hostvars for uncached hosts, because it's easier to select
on 'ansible_default_ipv4' that way, and it does not change the end result.)
We use a lot of facts where variables are enough, and format too early,
which prevents reusing the variables in different contexts.
- Move set_fact variables to the vars directory, remove unnecessary
intermediate variables, and render them at usage sites, so we only do
logic on native Ansible/Jinja lists.
- Use defaults/ rather than default filters for several variables.
There is no test with IDEMPOT_CHECK=true since commit 7b78e6872 (disable
idempotency tests (#1872), 2017-10-26)
Remove the related infra from our CI scripts.
This displays the result of most tasks in a human-readable way, and
should be far more readable than what we have now, which is frequently a
bunch of unreadable JSON.
+ some small fixes (using `delegate_to` instead of `when:
<control_plane_host>`)
- special casing should be in Kubespray, not in the test. It makes no
sense to do something in tests which won't be done in actual usage.
- We don't actually test CoreOS at all in the CI.
This is a followup to 2ba28a338 (Revert "Wait for available API token in
a new namespace (#7045)", 2024-10-25).
While checking for the serviceaccount token is not effective, there is
still a race when creating a Pod directly, because the ServiceAccount
itself might not be created yet.
More details at https://github.com/kubernetes/kubernetes/issues/66689.
This causes very frequent flakes in our CI with spurious failures.
Use a Deployment instead; it takes care of creating the Pods and
retrying, and it also lets us use kubectl rollout status instead of
manually checking for the Pods.
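The waiting step then reduces to something like this sketch (deployment name, namespace and timeout are illustrative):

```yaml
- name: Wait for the test Deployment to be available
  command: >-
    {{ bin_dir }}/kubectl rollout status deployment/test
    --namespace test --timeout=120s
  changed_when: false
```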
To reproduce this commit, run in bash:
for file in $(ls tests/files/); do
  if ! grep -Rq "${file%.*}" .gitlab-ci/; then
    rm "tests/files/${file}"
  fi
done
This also means that our CI matrix was not accurate.
This is needed for shutdown ordering: while at startup it's not a
problem that containerd starts before dbus (the dbus socket already
exists), it needs to shut down before dbus to do its cleanup (asking
systemd via dbus to clean up cgroups).
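A minimal sketch of the ordering this implies, as a systemd drop-in managed from Ansible (the file path and unit names are the conventional ones, not taken from this change):

```yaml
- name: Order containerd after dbus so it stops first at shutdown
  copy:
    dest: /etc/systemd/system/containerd.service.d/after-dbus.conf
    mode: "0644"
    content: |
      [Unit]
      # After= ordering is reversed at shutdown: containerd is stopped
      # while the dbus socket is still available for cgroup cleanup.
      After=dbus.service
```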
This is expected to be used in the command module this way:

  command:
    cmd: "{{ kubectl_apply_stdin }}"
    stdin: <... rendered manifests ...>  # rendered with the 'template' lookup plugin in most cases
The advantages over the kube plugin module integrated in kubespray
(which this should replace eventually):
- way easier to modify to take advantage of new features (server-side
apply for instance)
- no need for separate template tasks + checking the result (which can
introduce problems if the first playbook run encounters an error).
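A minimal sketch of the pattern (the variable and template names are placeholders, not the actual repository values):

```yaml
- name: Apply rendered manifests through kubectl stdin
  command:
    cmd: "{{ bin_dir }}/kubectl apply -f -"
    stdin: "{{ lookup('template', 'some-manifest.yml.j2') }}"
```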
Those files haven't been touched in roughly 5 years, and pip install on
Kubespray errors out.
The 'Requires:' are outdated, which suggests that no one is using this.
8ff4ad2d8 (preinstall: simplify OS packages selection, 2024-11-04)
removed all usages of ansible.utils.validate (not that many), so the
dependency is no longer necessary.
config_path was introduced in containerd 1.5.0, and registry.mirrors is
deprecated.
There is no reason to keep the old alternative, so just always use
config_path, and consequently remove the option.
Our README is currently pretty cluttered:
- Part of the README duplicates docs/getting_started/getting-started.md
-> Remove duplicates and extract useful info into the getting-started.md
- General info on Ansible environment troubleshooting
-> remove most of it as it's not specific to Kubespray, move to
docs/ansible/ansible.md
-> split the inventory-related parts of ansible.md into their own file.
This should host documentation on how to manage Kubespray inventories in
the future.
ansible.md:
- remove the list of "Unused" variables, as:
1. It's not accurate
2. What matters is where users should put their variables
contrib/dind uses inventory_builder, which is removed. It overlaps with
the function of kind (Kubernetes in Docker) and has not seen changes
(apart from linting-driven ones) for a long time.
It also does not seem to work (the provisioning playbook crashes).
This only really helps with the easiest part of building your inventory
(listing the hosts), as you still need to edit your group vars and
similar.
The opaqueness of the script does not really help our users understand
their own inventory.
Furthermore, there is not really a reason that something which is common
to all the Ansible ecosystem should be done in a special way for
Kubespray.
* Add vars for configuring cilium IP load balancer pools and bgp peer policies
* Cilium 1.16+ Support - Add vars for configuring cilium bgpv2 api & handle cilium_kube_proxy_replacement unsupported values
When using

  dns_upstream_forward_extra_opts:
    prefer_udp: ""  # the option has no value, so use an empty string to just put the key
This is rendered in the dns configmap as ($ for end-of-line)
...
prefer_udp $
...
Note the trailing space.
This triggers https://github.com/kubernetes/kubernetes/issues/36222,
which makes the configmap hardly readable when editing them manually or
simply putting them in a yaml file for inspection.
Trim the concatenation of option + value to get rid of any trailing
space.
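A hedged sketch of the fix inside the Corefile template (the surrounding forward block is paraphrased, not copied from the repository):

```
forward . {{ upstream_dns_servers | join(' ') }} {
{% for option, value in dns_upstream_forward_extra_opts.items() %}
  {{ (option ~ ' ' ~ value) | trim }}
{% endfor %}
}
```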
* Make Helm's 'atomic' parameter configurable from role variables
* Configure Helm with 'atomic' and 'wait' set to false for generic CNI to prevent kubelet-csr-approver installation failures
We use shell scripts and conf files in some roles (notably, certificates
provisioning), so we need to include them in order for the collection to
work when using the configurations depending on those roles.
* kubeadm: do not ignore preflight errors blindly
The "ignoring all errors" seems to date back to the inception of the
kubeadm support (it was --skip-preflight-check before).
This can mask real errors and prevent users from seeing them.
Do not ignore any errors by default and make the set of ignored errors
configurable.
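A hedged example of the resulting knob (the variable name follows the commit description and may differ from the final implementation):

```yaml
kubeadm_ignore_preflight_errors:
  - DirAvailable--etc-kubernetes-manifests
  - Mem
```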
* download/kubeadm: remove redundant task
The mode is already set by the previous `copy` task.
* Validate kubeadm configs
This should help to fail early when we have invalid kubeadm configs (from
a kubespray bug or a misconfiguration).
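Presumably along the lines of this sketch (paths are illustrative):

```yaml
- name: Validate kubeadm configuration
  command: "{{ bin_dir }}/kubeadm config validate --config {{ kube_config_dir }}/kubeadm-config.yaml"
  changed_when: false
```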
* kubeadm-upgrade: remove unnecessary bool cast
* Convert kubeadm join discovery timeout to v1beta4 config
* CI: Ignore kubeadm:Mem errors on some setups.
The new CI does not define k8s_cluster group, so it relies on
kubernetes-sigs/kubespray#11559.
This does not work for upgrade testing (which uses the previous release).
We can revert this commit after 2.27.0.
We should not roll back our test setup during upgrade tests.
The only reason to do that would be for incompatible changes in the test
inventory, and we already check out master for those (${CI_JOB_NAME}.yml).
Also do some cleanup by removing unnecessary intermediary variables.
VirtualMachineInstance resources sometimes temporarily lose their
IP (at least as far as the kubevirt controllers can see).
See https://github.com/kubevirt/kubevirt/issues/12698 for the upstream
bug.
This does not seem to affect actual connections (if it did, our current
CI would not work).
However, our CI executes multiple playbooks, in particular:
1. The provisioning playbook (which checks that the IPs have been
provisioned by querying the K8S API)
2. Kubespray itself
If any of the VirtualMachineInstances loses its IP after 1 has checked
for it and before 2 starts, the dynamic inventory (which is invoked
when the playbook is launched by ansible-playbook) will not have an IP
for that host, and will try to use the name for ssh, which of course
will not work.
Instead, when we have a valid state during provisioning (all IPs
present), use it to construct a static inventory which is used for the
rest of the CI run.
This allows a single source of truth for the virtual machines in a
kubevirt ci-run.
`etcd_member_name` should be correctly handled in kubespray-defaults for
testing the recovery cases.
Not constraining the inventory to .ini allows us to use dynamic
inventory, which is needed for simplifying kubevirt jobs inventory.
Also reduces the scope of the ANSIBLE_INVENTORY variable.
VMI in Kubevirt are the abstraction below VirtualMachine.
- We don't really need the extra abstraction of VirtualMachine objects
- Convert waiting for the VMs' IP addresses to use kubernetes.core.k8s_info
rather than a shell pipeline
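A hedged sketch of what the k8s_info-based wait could look like (namespace and retry values are illustrative):

```yaml
- name: Wait for VMI IP addresses
  kubernetes.core.k8s_info:
    api_version: kubevirt.io/v1
    kind: VirtualMachineInstance
    namespace: "{{ test_namespace }}"
  register: vmis
  until: >-
    vmis.resources | length > 0 and
    vmis.resources | rejectattr('status.interfaces.0.ipAddress', 'defined') | length == 0
  retries: 30
  delay: 10
```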
We're still getting bug reports circumventing the bug report template
and omitting information.
Blank issues (i.e., not using the form templates) can still be created
using the gh CLI, the API, etc. This only disables the possibility in
the Web UI.
- Lookup was not returning a list, making the difference filter spit out
garbage -> query always returns a list
- hostvars is a dictionary, so convert to a list before selectattr, and
map back to only get the keys
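A hedged illustration of both fixes (group and variable names are made up):

```yaml
# query() always returns a list, so difference() behaves:
ungrouped_hosts: "{{ query('inventory_hostnames', 'all') | difference(groups['k8s_cluster']) }}"
# hostvars is a dictionary: convert to items, filter, then map back to the keys:
hosts_with_default_ipv4: >-
  {{ hostvars | dict2items
     | selectattr('value.ansible_default_ipv4', 'defined')
     | map(attribute='key') | list }}
```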
Currently there is not much difference between the files; if there are more changes in the future,
please use different files to distinguish them (you can use the kubeadm_config_api_version variable).
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
Currently there is not much difference between the files; if there are more changes in the future,
please use different files to distinguish them (you can use the kubeadm_config_api_version variable).
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
Remove kubeadm api version condition.
Currently there is not much difference between the files; if there are more changes in the future,
please use different files to distinguish them (you can use the kubeadm_config_api_version variable).
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
I added the kubeadm_config_api_version variable in the previous commit,
and removed the kubeadm api version condition.
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
v1beta4 has changed a lot in this file (e.g. ExtraArgs etc.), so it was implemented in separate files.
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
Since a2019c1c2 (Add a JSON schema describing the packages install
structure, 2024-04-25), we use a custom structure to select which
packages should be installed on a particular host OS.
This has proven too rigid in practice, and the query is pretty
complicated.
Replace this by simply using an array of jinja conditions for the
packages, which should be easier to understand for everyone and more
flexible.
Also remove the associated schema and validation which are no longer
needed.
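A hedged sketch of the resulting structure (package names and conditions are illustrative):

```yaml
pkgs:
  bash-completion: []                  # no condition: always installed
  nfs-common:
    - "ansible_os_family == 'Debian'"  # all conditions must hold
  nfs-utils:
    - "ansible_os_family == 'RedHat'"
```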
* etcd: throttle restart for availability
During upgrade, etcd members are restarted all at once.
This can impact the availability of the etcd cluster and subsequently of
the Kubernetes cluster.
Limit the concurrent restart so that the etcd cluster can keep quorum.
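A hedged sketch of such a throttled restart (`throttle` is one way to cap concurrency; the real implementation may differ):

```yaml
- name: Restart etcd
  throttle: 1   # at most one member restarting at a time keeps quorum
  service:
    name: etcd
    state: restarted
```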
* Simplify etcd handlers
* Feat: add external OCI cloud controller manager template & variable
Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>
* Feat: add external OCI cloud controller manager workflow
Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>
* Feat: migrate external OCI CCM config check from OCI cloud provider
Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>
* cloud_controller: oracle: simpler asserts
Make the assert checks for the Oracle Cloud Infrastructure external
cloud controller more compact, and hence more readable.
This allows putting them back in the main tasks, for less back and
forth when reading the code.
---------
Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>
Co-authored-by: Max Gautier <mg@max.gautier.name>
This reverts commit 275c54e810.
Static tokens are no longer created automatically for service accounts in
Kubernetes. Instead, they are dynamically injected into pods using a
projected volume.
Thus there is no longer a need to check for this (it didn't work anyway,
since the describe output actually contains <none> when there are no
tokens:
{
"attempts": 1,
"changed": false,
"cmd": "set -o pipefail && /usr/local/bin/kubectl describe serviceaccounts default --namespace test | grep Tokens | awk '{print $2}'",
"delta": "0:00:00.075633",
"end": "2024-10-19 14:25:04.858871",
"msg": "",
"rc": 0,
"start": "2024-10-19 14:25:04.783238",
"stderr": "",
"stderr_lines": [],
"stdout": "<none>",
"stdout_lines": [
"<none>"
]
}
)
This leverages the Kubernetes GC to delete kubevirt VMs, by using
ownerReferences with the CI pod running the playbook as the owner.
This concretely means that the control plane in our CI cluster will
delete the kubevirt VMs associated with a particular CI job as soon as
that job's pod is deleted, which usually happens when the job terminates
(barring errors, which will be addressed in the cluster directly).
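A hedged sketch of the injected metadata (the pod name and uid would come from the CI runner environment; field names follow the ownerReferences API):

```yaml
metadata:
  ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: "{{ ci_pod_name }}"   # illustrative variable
      uid: "{{ ci_pod_uid }}"     # illustrative variable
```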
Upgrade to kubevirt.io/v1 for the VirtualMachine manifests, since the
alpha version is deprecated.
Before adding these changes, `ansible_facts.services["containerd.service"]` would not be defined, and the check that triggers the container stop and delete behaviors would fail.
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
Simplify registry mirror rendering in config.toml.
The map filter can extract the host list from mirrors so we can
just unique them and render them without needing to construct vars
for it.
For the registry mirror tls section, we can first extract the mirrors
from the dict, then filter on only the ones having skip_verify defined,
and then filter on the ones having it true (as the dict might not
have skip_verify defined, which would cause undefined variable errors).
This will speed up and simplify the templating.
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
Dropping the ansible dependencies for ansible-lint will allow us to
catch missing collection dependencies in galaxy.yml. For collections
needed for contrib/ or tests/ (i.e. not part of core kubespray
dependencies), we can just configure ansible-lint to mock them.
This means it won't check the mocked modules' parameters, but for those
areas of the code base it's an acceptable trade-off.
The fallback_ips tasks are essentially serializing the gathering of one
fact on all the hosts, which can have dramatic performance implications
on large clusters (several minutes).
This is essentially a reversal of 35f248dff0
Being able to run without refreshing the facts cache is not worth it.
We keep fallback_ip for now, simply changing the access to a normal
hostvars variable instead of a custom dictionary.
Using the hosts directive at the play level prevents those tasks from
being run when using --limit if the group in question is not part of
the limit (e.g. running scale.yml on new worker nodes only).
Instead, run on all hosts, and for each group, partition between that
group and '_' (a generic group name which is not used; using an empty
string as the group is not supported by ansible.builtin.group_by).
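The partition pattern described above, sketched for one group (the group name is illustrative):

```yaml
- name: Evaluate group memberships regardless of --limit
  hosts: all
  tasks:
    - name: Partition between kube_control_plane and an unused group
      group_by:
        key: "{{ 'kube_control_plane' if 'kube_control_plane' in group_names else '_' }}"
```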
Reported-by: asteppat <asteppat@cisco.com>
* Update cluster-role for cilium to prevent errors in agent startup
ciliumloadbalancerippools permissions exist in the cilium helm chart for version 1.13.0
https://github.com/cilium/cilium/blob/v1.13.0/install/kubernetes/cilium/templates/cilium-agent/clusterrole.yaml#L71
The agent also needs permissions to read/watch secrets for bgp auth secrets when using CiliumBGPPeeringPolicy with a secret.
* Remove list/watch permissions for secrets
* Remove secrets from list/watch permissions
The old repository for these has been deleted, leaving the previous
configuration impossible to deploy; even currently running clusters
fail after a restart, as the DaemonSet has ImagePullPolicy: Always. More
details can be found here: kubernetes-sigs/vsphere-csi-driver#3053
As of writing, only CSI driver versions 3.1.2 to 3.3.1 are available in
this registry. This "officially" supports Kubernetes 1.26 to 1.30. Since
older drivers are not available, I have removed some feature-gating for
those unavailable versions while I was at it. For the cloud provider,
the `latest` image is now missing, and only 1.28.0 to 1.31.0 are
available. I've set the latest of these as the new default.
I also updated the documented default versions, as they were all out of
date and not aligned with actual code defaults.
Node-to-apiserver authentication relies by default on certificates and
bootstrap tokens, so there should be no need to generate tokens for
every node, even when enabling static token auth.
Testing for group membership with group names makes Kubespray more
tolerant towards the structure of the inventory.
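For instance (group name illustrative):

```yaml
# Tolerant: evaluates to false when the group is absent from the inventory
when: "'kube_control_plane' in group_names"
# Fragile: raises an error if the group is undefined
# when: inventory_hostname in groups['kube_control_plane']
```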
Where 'inventory_hostname in groups["some_group"]' would fail if
"some_group" is not defined, '"some_group" in group_names' would not.
- Use proper syntax highlighting for config.rb examples
- Consistent shell style ($ as prompt)
- Use only one way to do things
- Remove OS specific details
The current way to handle a custom inventory in vagrant is a bit
hackish: it copies files around and can break Vagrantfile parsing in
corner-case scenarios (removing vagrant inventories, or the inventory
being copied into the vagrant inventory).
Instead, simply pass additional inventories to the ansible-playbook
command lines as raw arguments with `-i`.
This also makes supporting multiple inventories trivial, so we add a
new `$inventories` variable for that purpose.
Specifying one directory for kubeadm patches is not ideal:
1. It does not allow working with multiple inventories easily
2. No ansible templating of the patch
3. Ansible path searching can sometimes be confusing
Instead, provide the patch directly in a variable, and add some quality
of life to handle component targeting and patch ordering more
explicitly (`target` and `type` are translated to the kubeadm scheme,
which is based on the file name).
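A hedged sketch of the variable-based interface (field names follow the description; kubeadm itself encodes target and patch type in the file name, e.g. kube-apiserver+strategic.yaml):

```yaml
kubeadm_patches:
  - target: kube-apiserver
    type: strategic
    patch:
      metadata:
        annotations:
          example.com/owner: infra   # illustrative patch content
```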
kubernetes/control-plane and kubernetes/kubeadm roles both push kubeadm
patches in the same way.
Extract that code and make it a dependency of both.
This is safe because it's only configuration for kubeadm, which only
takes effect when kubeadm is run.
* Update multus to v4.1.0 and clarify cilium compatibility
* Fix: bug introduced by #10934 where the template would break if multus was defined
* Set priorityClassName to system-node-critical for multus pods
Allow the script to be called with a list of components, to only
download new version checksums for those.
By default, we get new version checksums for all components supported
by the script.
runc upstream does not provide one hash file per asset in their
releases, but one file with all the hashes.
To handle this (and/or any arbitrary format from upstreams), add a
dictionary mapping the name of the download to a lambda function which
transforms the file provided by upstream into a dictionary of hashes,
keyed by architecture.
The script is currently limited to one hardcoded URL for kubernetes
related binaries, and a fixed set of architectures.
The solution is three-fold:
1. Use a URL template dictionary for each download -> this allows easily
adding support for new downloads.
2. Source the architectures to search from the existing data
3. Enumerate the existing versions in the data and start searching from
the last one until no newer version is found (newer in the version
order sense, irrespective of actual age)
Remove the system|kube_master_<resource>_reserved variables.
Those variables are unnecessary: users can simply set the generic
variables in group_vars if they wish to differentiate control plane
nodes from other nodes.
Set conservative defaults for ephemeral-storage and pids for both kube
and system reserved resources.
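A hedged example of the group_vars replacement (these are the generic per-resource variables; values are illustrative):

```yaml
# inventory/mycluster/group_vars/kube_control_plane.yml
kube_cpu_reserved: 300m
kube_memory_reserved: 512Mi
system_cpu_reserved: 500m
system_memory_reserved: 1Gi
```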
Working symlinks are dependent on git configuration (when using the
playbook as a git repository, which is common), precisely `git config
core.symlinks`.
While this is enabled by default, some company policies will disable it.
Instead, use import_tasks which should avoid that class of bugs.
* Simplify docker systemd unit
systemd handles missing units by ignoring the dependency, so we don't
need to template them.
* Remove RHEL 7/CentOS 7 support
- remove ref in kubespray roles
- move CI from centos 7 to 8
- remove docs related to centos7
* Remove container-storage-setup
Only used for RHEL 7 and CentOS 7
* Fix: fix testcases_run.sh for upgrade tests
Need to git checkout ${CI_COMMIT_SHA} before running upgrade playbook (revert #11173 partially)
* feat: add CI job to test upgrade
Add a packet_ubuntu22-calico-all-in-one-upgrade job
* make calico api server manifest backward compatible with versions older than 3.27.3
Add 3.28.1 checksums
Add 3.28.0 checksums
Change default version to 3.27.3
* change default calico version to 3.28.1
* Set mount type to DirectoryOrCreate for hostPath needed by Calico
Fixes https://github.com/kubernetes-sigs/kubespray/issues/10947
This patch aims to be minimal and intentionally:
- does not change the generation logic for `supersede_domain` and `supersede_search`
- does not change how `nameserverentries` (for NetworkManager) is built
It seems like `nameserverentries` in the "Generate nameservers for resolvconf, including cluster DNS"
task is built the same way as `dhclient_supersede_nameserver_entries_list`.
However, `nameserverentries` in the "Generate nameservers for resolvconf, not including cluster DNS"
task (below) is built differently for some reason. It includes `configured_nameservers` as well.
Due to these differences, I have refrained from reusing the same building logic
(`dhclient_supersede_nameserver_entries_list`) for both.
If the `configured_nameservers` addition can be removed or made to apply
to dhclient as well, we could potentially build a single list and then
generate the `nameserverentries` and `supersede_nameserver` strings from it.
* CI: reduce VM resources requests to improve scheduling
* CI: Reduce default jobs; add labels (ci-full/extended) to run more tests
* CI: use jobs dependencies instead of stages
* precommit one-job
* CI: Use Kubevirt VM to run Molecule and Vagrant jobs
The inventory file generated by Terraform produces the following warnings:
```
[WARNING]: * Failed to parse <PATH>/kubespray/contrib/terraform/hetzner/inventory.ini with ini plugin:
<PATH>/kubespray/contrib/terraform/hetzner/inventory.ini:21: Section [k8s_cluster:children] includes undefined group: kube-master
...
[WARNING]: Could not match supplied host pattern, ignoring: kube-master
PLAY [Add kube-master nodes to kube_control_plane] ********************************************************************************************************
skipping: no hosts matched
[WARNING]: Could not match supplied host pattern, ignoring: kube-node
PLAY [Add kube-node nodes to kube_node] *******************************************************************************************************************
skipping: no hosts matched
```
Use a style file as recommended by upstream. This makes for only one
source of truth.
Conserve previous upstream default for MD007 (upstream default changed
here https://github.com/markdownlint/markdownlint/pull/373)
Use gitlab dynamic child pipelines feature to have one source of truth
for the pre-commit jobs, the pre-commit config file.
Use one cache per pre-commit. This should reduce the "fetching cache"
time steps in gitlab-ci, since each job will have a separate cache with
only its hook installed.
We have inconsistent sets of options passed to the playbooks during our
CI runs.
Don't run ansible-playbook directly, instead factorize the execution in
a bash function using all the common flags.
Also remove various ENABLE_* variables and instead directly test for the
relevant conditions at execution time, as this makes it more obvious and
does not force one to go back and forth in the script.
With CentOS, kubespray currently produces the following warning:
[WARNING]: TASK: bootstrap-os : Enable Oracle Linux repo: The loop variable
'item' is already in use. You should set the `loop_var` value in the
`loop_control` option for the task to something else to avoid variable
collisions and unexpected behavior.
This could bite us in nasty ways, so fix it.
c58497cde (Refactor bootstrap-os (#10983), 2024-03-27) refactored the
bootstrap-os include but didn't adapt the amazon linux tasks to the
actual ID of amazon linux ('amzn').
Re-enable the CI so we can avoid that kind of breakage.
If the node.projectcalico.org resource already exists, patch the node
to set asNumber instead of applying the whole resource, to prevent
removing existing fields fed by the calico-node pods.
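A hedged sketch of the patch-instead-of-apply approach (the AS variable and paths are illustrative):

```yaml
- name: Set asNumber on the existing Calico node
  command: >-
    {{ bin_dir }}/calicoctl patch node {{ inventory_hostname }}
    --patch '{"spec": {"bgp": {"asNumber": {{ local_as }}}}}'
```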
✅ Closes: 11096
When using the kube_node_instances_with_disks* variables, there was no
configuration block using them to provision disks with the VirtualBox
provider.
This commit fixes that.
Some package requirements depend on inventory variables
(`kube_proxy_mode` in that case, but it could apply to others).
As the case seems pretty rare, instead of adding complexity to pkgs, we
add an escape hatch to use jinja conditions.
That should be revisited if we find ourselves shoehorning lots of logic
in this later on.
Uses the logic introduced in the previous patch to convert all
kubernetes/preinstall/vars/* os specific files to the `pkgs`
dictionary.
Some niceties for devs:
- always validate the `pkgs` variable to catch mistakes in CI.
- ensure that `pkgs` is always sorted. This makes it easier to find the
packages you're looking for.
Adds infrastructure to install OS packages depending not only on OS
(family, versions, etc) but on groups.
All the information related to a particular package should reside in
the `pkgs` dictionary, which takes inspiration from the `downloads`
dictionary structure.
Since the structure we're setting in place for installing packages has
some complexity, add a JSON schema to avoid frustrating errors when
modifying it (adding/removing package installs).
This reverts commit 4b0a134bc9.
The mentioned PR breaks scale.yml. This goes back to the status quo
until a proper fix can be provided, at which point we'll reapply the
PR.
Upgrade Snapshot controller installed for all supported Kubernetes
versions to v7.0.2. Also update the manifests used to deploy the
Snapshot controller.
* feat: add user facing variable with default
* feat: remove rolebinding to anonymous users after init and upgrade
* feat: use file discovery for secondary control plane nodes
* feat: use file discovery for nodes
* fix: do not fail if rolebinding does not exist
* docs: add warning about kube_api_anonymous_auth
* style: improve readability of delegate_to parameter
* refactor: rename discovery kubeconfig file
* test: enable new variable in hardening and upgrade test cases
* docs: add option to config parameters
* test: multiple instances and upgrade
* Move fedora ansible python install to bootstrap-os
* /bin/dir is set in bootstrap-os
* Removing ansible_os_family workarounds
Support for these distributions was merged in Ansible, no need to
override it ourselves now.
https://github.com/ansible/ansible/pull/69324 openEuler
https://github.com/ansible/ansible/pull/77275/ UnionTech OS Server 20
https://github.com/ansible/ansible/pull/78232/ Kylin
* Don't unconditionally set VARIANT_ID=coreos in os-release
WTF, this is so wrong.
Furthermore, is_fedora_coreos is already handled in bootstrap-os
* Handle Clearlinux generically
Followup of 4eec302e86 (since we're using
package module anyway, let's get rid of the custom task)
* Remove leftover files for Coreos
Coreos was replaced by flatcar in 058438a25 but the file was copied
instead of moved.
* Remove workarounds for resolved ansible issues
* bootstrap: Use first_found to include per distro
Using ID and VARIANT_ID directly with first_found allows for fewer
manual includes.
Distro "families" are simply handled by symlinks.
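A hedged sketch of the include pattern (the file layout and the os_release variable are assumptions):

```yaml
- name: Include tasks for the detected distribution
  include_tasks: "{{ item }}"
  with_first_found:
    - "{{ os_release.VARIANT_ID | default('') }}.yml"
    - "{{ os_release.ID }}.yml"
    - generic.yml
```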
* bootstrap: don't set ansible_python_interpreter
- Allows users to override the chosen python_interpreter with group_vars
easily (group_vars have lesser precedence than facts)
- Allows us to use vars at the task scope to use a virtual env
Ansible python discovery has improved, so those workarounds should not
be necessary anymore.
Special workaround for Flatcar, due to upstream ansible not willing to
support it.
* upgrade ansible version
Needed for with_first_found to work correctly:
https://github.com/ansible/ansible/issues/70772 fixed in 2.16
* Remove unused google cloud cloud_playbook
* Fix dpkg_selection on non-existing packages
Needed since ansible-core>2.16, see:
f10d11bcdc
* scripts: ignore download_hash download failures
Binary names on github releases often change and this script might break
because of that. This commit allows ignoring these failures, as a means
to be able to run the script anyway.
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
* scripts: use sha256sums for crio as well
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
* scripts: add ppc64le support for crio
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
---------
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
The new version brings the following improvements:
- remove having to resort to python to limit tags (it is slower than
the sh equivalent, as python has a somewhat significant startup time).
- Introduce a concept of min version so that it can only get Kubernetes
version supported by Kubespray.
- Fix an issue with kata changing their file scheme (the arch
specifically)
- Now download sha256/sha256sum files if provided rather than
downloading the full file and computing the hash
- A few minor style tweaks
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
Fixes bug for retrieving images with tags containing image digests.
Script now gets images from jobs and cronjobs as well.
New env variable DESTINATION_REGISTRY to push to another registry
instead of local registry.
New env variable IMAGES_FROM_FILE to pull images listed in a file
instead of getting images from a running k8s environment.
New env variable REGISTRY_PORT to override port (default is 5000).
Under the original code, leader election failed for ingress controllers
as a result of a mismatch between the election-id in the controller
config and the resourceName in the relevant rule of the 'ingress-nginx'
role. This appeared in the controller logs.
To fix the issue, a command-line option was added to container
execution (--election-id=...).
Now, the election-id agrees with the resourceName provided in
the role-ingress-nginx.yml file. A comment in that file was
changed to reflect the new logic.
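A hedged sketch of the container args after the change (the election id value is illustrative; it must match the resourceName in role-ingress-nginx.yml):

```yaml
args:
  - /nginx-ingress-controller
  - --election-id=ingress-controller-leader
```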
Co-authored-by: Vasilis Samoladas <vsam@softnet.tuc.gr>
Co-authored-by: Mohamed Omar Zaian <mohamedzaian@gmail.com>
This should avoid permission problems when the user creating the
directory and the user creating the content are different (when
container images are saved by root, for instance, because the user
can't use the container runtime).
* Refactor of kubeadm images listing
Instead of setting multiple facts, we directly create the dict we need
from the kubeadm output.
* Remove useless 'default' filters in roles/download
* Only download kubeadm images where needed
The current way of waiting for state is badly implemented.
When changing the deployment version, which is executed with upgrade_cluster in the previous Ansible task ("Kubernetes Apps | Install and configure MetalLB"), the next task ("Kubernetes Apps | Wait for MetalLB controller to be running") may fail with an error.
* [kubernetes] Make kubernetes 1.29.1 default
* [cri-o]: support cri-o 1.29
Use "crio status" instead of "crio-status" for cri-o >=1.29.0
* Remove GAed feature gate SecCompDefault
The SecCompDefault feature gate was removed in k8s 1.29
https://github.com/kubernetes/kubernetes/pull/121246
* Update upgrades.md
modify env serial to have real rolling upgrades
* Update upgrades.md
change section for serial
* Update docs/upgrades.md
Co-authored-by: Kundan Kumar <kundan.kumar@india.nec.com>
---------
Co-authored-by: Kundan Kumar <kundan.kumar@india.nec.com>
Handle all old dns domains:
- for nodelocaldns: in the same server block as the current dns_domain
- for coredns: suffix rewrite of each of the old dns domains to the
current one
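For coredns, the suffix rewrite presumably looks like this per old domain (CoreDNS rewrite syntax; the loop variable is illustrative):

```
{% for old_domain in old_dns_domains %}
rewrite name suffix {{ old_domain }} {{ dns_domain }} answer auto
{% endfor %}
```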
The old version of the script downloaded all binaries and generated file checksums locally.
This was a slow process since all binaries of all architectures needed to be downloaded.
The new version simply downloads the .sha256 files containing the binary checksum in text
form which saves a lot of traffic and time.
* Enable configuring mountOptions, reclaimPolicy and volumeBindingMode for cinder-csi StorageClasses
* Check if class.mount_options is defined at all, before generating the option list
In the case where vagrant is not invoked directly from the repository,
but from another location, and the Vagrantfile is "included" into
another, we need to be able to specify the location of the vagrant
directory; as of now it's hardcoded relative to the Vagrantfile
location. This commit fixes that.
* ignore_unreachable for etcd dir cleanup
ignore_errors only ignores errors that occur within the "file" module.
However, when the target node is offline, the playbook will still fail
at this task with the node in "unreachable" state. Setting
"ignore_unreachable: true" allows the playbook to bypass offline nodes
and move on to proceed with recovery tasks on the remaining online nodes.
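The resulting pattern, sketched (the path variable is illustrative):

```yaml
- name: Clean up etcd data directory
  file:
    path: "{{ etcd_data_dir }}"
    state: absent
  ignore_errors: true       # module failures
  ignore_unreachable: true  # offline members no longer abort the recovery
```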
* Re-arrange control plane recovery runbook steps
* Remove suggestion to manually update IP addresses
The suggestion was added in 48a182844c 4 years ago. But a new task
added 2 years ago, in ee0f1e9d58, automatically updates the API
server args with the updated etcd node IP addresses. This suggestion
is no longer needed.
If you have questions, check the documentation at [kubespray.io](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/)
- Can be deployed on **[AWS](docs/aws.md), GCE, [Azure](docs/azure.md), [OpenStack](docs/openstack.md), [vSphere](docs/vsphere.md), [Equinix Metal](docs/equinix-metal.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- Can be deployed on **[AWS](docs/cloud_providers/aws.md), GCE, [Azure](docs/cloud_providers/azure.md), [OpenStack](docs/cloud_controllers/openstack.md), [vSphere](docs/cloud_controllers/vsphere.md), [Equinix Metal](docs/cloud_providers/equinix-metal.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports most popular **Linux distributions**
@@ -19,74 +19,11 @@ Below are several ways to use Kubespray to deploy a Kubernetes cluster.
#### Usage
Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
then run the following steps:
```ShellSession
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
# Update Ansible inventory file with inventory builder
See [Getting started](/docs/getting_started/getting-started.md)
#### Collection
See [here](docs/ansible_collection.md) if you wish to use this repository as an Ansible collection
See [here](docs/ansible/ansible_collection.md) if you wish to use this repository as an Ansible collection
### Vagrant
@@ -99,7 +36,7 @@ python -V && pip -V
If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
then run the following step:
```ShellSession
@@ -109,80 +46,76 @@ vagrant up
## Documents
- [Requirements](#requirements)
- [Kubespray vs ...](docs/comparisons.md)
- [Getting started](docs/getting-started.md)
- [Setting up your first cluster](docs/setting-up-your-first-cluster.md)
- [Ansible inventory and tags](docs/ansible.md)
- [Integration with existing ansible repo](docs/integration.md)
- [Deployment data variables](docs/vars.md)
- [DNS stack](docs/dns-stack.md)
- [HA mode](docs/ha-mode.md)
- [Kubespray vs ...](docs/getting_started/comparisons.md)
- Supported Docker versions are 18.09, 19.03, 20.10, 23.0 and 24.0. The *recommended* Docker version is 20.10 (except on Debian Bookworm, which no longer supports 20.10 and below). `Kubelet` might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the YUM ``versionlock`` plugin or ``apt pin``.
- The cri-o version should be aligned with the respective kubernetes version (i.e. kube_version=1.20.x, crio_version=1.20)
## Requirements
- **Minimum required version of Kubernetes is v1.26**
- **Minimum required version of Kubernetes is v1.29**
- **Ansible v2.14+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/operations/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**.
- If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
- The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
@@ -225,7 +158,7 @@ These limits are safeguarded by Kubespray. Actual requirements for your workload
You can choose among ten network plugins. (default: `calico`, except Vagrant uses `flannel`)
- [Calico](https://docs.tigera.io/calico/latest/about/) is a networking and network policy provider. Calico supports a flexible set of networking options
designed to give you the most efficient networking across a range of situations, including non-overlay
@@ -234,32 +167,32 @@ You can choose among ten network plugins. (default: `calico`, except Vagrant use
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
- [weave](docs/CNI/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
(Please refer to `weave` [troubleshooting documentation](https://www.weave.works/docs/net/latest/troubleshooting/)).
- [kube-ovn](docs/kube-ovn.md): Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.
- [kube-ovn](docs/CNI/kube-ovn.md): Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.
- [kube-router](docs/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational
- [kube-router](docs/CNI/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational
simplicity and high performance: it uses IPVS to provide Kube Services Proxy (if setup to replace kube-proxy),
iptables for network policies, and BGP for pods L3 networking (with optionally BGP peering with out-of-cluster BGP peers).
It can also optionally advertise routes to Kubernetes cluster Pods CIDRs, ClusterIPs, ExternalIPs and LoadBalancerIPs.
- [macvlan](docs/macvlan.md): Macvlan is a Linux network driver. Pods have their own unique Mac and Ip address, connected directly the physical (layer 2) network.
- [macvlan](docs/CNI/macvlan.md): Macvlan is a Linux network driver. Pods have their own unique Mac and Ip address, connected directly the physical (layer 2) network.
- [multus](docs/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.
- [multus](docs/CNI/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.
- [custom_cni](roles/network-plugin/custom_cni/): You can specify some manifests that will be applied to the clusters to bring your own CNI and use ones not supported by Kubespray.
See `tests/files/custom_cni/README.md` and `tests/files/custom_cni/values.yaml` for an example with a CNI provided by a Helm Chart.
The network plugin to use is defined by the variable `kube_network_plugin`. There is also an
option to leverage built-in cloud provider networking instead.
See also [Network checker](docs/netcheck.md).
See also [Network checker](docs/advanced/netcheck.md).
## Ingress Plugins
- [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.
- [metallb](docs/metallb.md): the MetalLB bare-metal service LoadBalancer provider.
- [metallb](docs/ingress/metallb.md): the MetalLB bare-metal service LoadBalancer provider.
## Community docs and resources
@@ -280,4 +213,4 @@ See also [Network checker](docs/netcheck.md).
The Kubespray Project is released on an as-needed basis. The process is as follows:
1. An issue is proposing a new release with a changelog since the last release. Please see [a good sample issue](https://github.com/kubernetes-sigs/kubespray/issues/8325)
2. At least one of the [approvers](OWNERS_ALIASES) must approve this release
3. The `kube_version_min_required` variable is set to `n-1`
4. Remove hashes for [EOL versions](https://github.com/kubernetes/website/blob/main/content/en/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
5. Create the release note with [Kubernetes Release Notes Generator](https://github.com/kubernetes/release/blob/master/cmd/release-notes/README.md). See the following `Release note creation` section for the details.
6. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
7. An approver creates a release branch in the form `release-X.Y`
8. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) container images are built and tagged. See the following `Container image creation` section for the details.
9. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
10. The release issue is closed
11. An announcement email is sent to `dev@kubernetes.io` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
12. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
1. At least one of the [approvers](OWNERS_ALIASES) must approve this release
1. (Only for major releases) The `kube_version_min_required` variable is set to `n-1`
1. (Only for major releases) Remove hashes for [EOL versions](https://github.com/kubernetes/website/blob/main/content/en/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
1. Create the release note with [Kubernetes Release Notes Generator](https://github.com/kubernetes/release/blob/master/cmd/release-notes/README.md). See the following `Release note creation` section for the details.
1. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
1. (Only for major releases) An approver creates a release branch in the form `release-X.Y`
1. (For major releases) On the `master` branch: bump the version in `galaxy.yml` to the next expected major release (X.y.0 with y = Y + 1), make a Pull Request.
1. (For minor releases) On the `release-X.Y` branch: bump the version in `galaxy.yml` to the next expected minor release (X.Y.z with z = Z + 1), make a Pull Request.
1. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) container images are built and tagged. See the following `Container image creation` section for the details.
1. (Only for major releases) The `KUBESPRAY_VERSION` in `.gitlab-ci.yml` is upgraded to the version we just released # TODO clarify this, this variable is for testing upgrades.
1. The release issue is closed
1. An announcement email is sent to `dev@kubernetes.io` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
1. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
1. Create/Update the issue for upgrading kubernetes and [k8s-conformance](https://github.com/cncf/k8s-conformance)
You can specify a `default_release` for apt on Debian/Ubuntu by overriding this variable. This is helpful if you need a different package or version for the main GlusterFS packages (e.g. GlusterFS 3.5.x instead of 3.2.x with the `wheezy-backports` default release on Debian Wheezy).
- name: Ensure Gluster brick and mount directories exist.
  file:
    path: "{{ item }}"
    state: directory
    mode: 0775
    mode: "0775"
  with_items:
    - "{{ gluster_brick_dir }}"
    - "{{ gluster_mount_dir }}"
@@ -62,7 +62,7 @@
  replicas: "{{ groups['gfs-cluster'] | length }}"
  cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip'] | default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
  host: "{{ inventory_hostname }}"
  force: yes
  force: true
  run_once: true
  when: groups['gfs-cluster'] | length > 1
@@ -73,7 +73,7 @@
  brick: "{{ gluster_brick_dir }}"
  cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip'] | default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
  host: "{{ inventory_hostname }}"
  force: yes
  force: true
  run_once: true
  when: groups['gfs-cluster'] | length <= 1
@@ -101,7 +101,7 @@
  template:
    dest: "{{ gluster_mount_dir }}/.test-file.txt"
    src: test-file.txt
    mode: 0644
    mode: "0644"
  when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
@@ -24,6 +24,7 @@ most modern installs of OpenStack that support the basic services.
- [Ultimum](https://ultimum.io/)
- [VexxHost](https://vexxhost.com/)
- [Zetta](https://www.zetta.io/)
- [Cloudify](https://www.cloudify.ro/en)
## Approach
@@ -59,17 +60,17 @@ You can create many different kubernetes topologies by setting the number of
different classes of hosts. For each class there are options for allocating
floating IP addresses or not.
- Master nodes with etcd
- Master nodes without etcd
- Control plane nodes with etcd
- Control plane nodes without etcd
- Standalone etcd hosts
- Kubernetes worker nodes
Note that the Ansible script will report an invalid configuration if you wind up
with an even number of etcd instances since that is not a valid configuration. This
restriction includes standalone etcd nodes that are deployed in a cluster along with
master nodes with etcd replicas. As an example, if you have three master nodes with
etcd replicas and three standalone etcd nodes, the script will fail since there are
now six total etcd replicas.
control plane nodes with etcd replicas. As an example, if you have three control plane
nodes with etcd replicas and three standalone etcd nodes, the script will fail since
there are now six total etcd replicas.
### GlusterFS shared file system
@@ -97,9 +98,10 @@ binaries available on hyperkube v1.4.3_coreos.0 or higher.
## Module Architecture
The configuration is divided into three modules:
The configuration is divided into four modules:
- Network
- Loadbalancer
- IPs
- Compute
@@ -256,7 +258,8 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`bastion_fips` | A list of floating IPs that you have already pre-allocated; they will be attached to bastion node instead of creating new random floating IPs. |
|`external_net` | UUID of the external network that will be routed to |
|`flavor_k8s_master`,`flavor_k8s_node`,`flavor_etcd`, `flavor_bastion`,`flavor_gfs_node` | Flavor depends on your openstack installation, you can get available flavor IDs through `openstack flavor list` |
|`image`,`image_gfs` | Name of the image to use in provisioning the compute resources. Should already be loaded into glance. |
|`image`,`image_gfs`, `image_master` | Name of the image to use in provisioning the compute resources. Should already be loaded into glance. |
|`image_uuid`,`image_gfs_uuid`, `image_master_uuid` | UUID of the image to use in provisioning the compute resources. Should already be loaded into glance. |
|`ssh_user`,`ssh_user_gfs` | The username to ssh into the image with. This usually depends on the image you have selected |
|`public_key_path` | Path on your local workstation to the public key file you wish to use in creating the key pairs |
|`number_of_k8s_masters`, `number_of_k8s_masters_no_floating_ip` | Number of nodes that serve as both master and etcd. These can be provisioned with or without floating IP addresses|
@@ -269,11 +272,18 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`supplementary_master_groups` | To add ansible groups to the masters, such as `kube_node` for tainting them as nodes, empty by default. |
|`supplementary_node_groups` | To add ansible groups to the nodes, such as `kube_ingress` for running ingress controller pods, empty by default. |
|`bastion_allowed_remote_ips` | List of CIDR allowed to initiate a SSH connection, `["0.0.0.0/0"]` by default |
|`bastion_allowed_remote_ipv6_ips` | List of IPv6 CIDR allowed to initiate a SSH connection, `["::/0"]` by default |
|`master_allowed_remote_ips` | List of CIDR blocks allowed to initiate an API connection, `["0.0.0.0/0"]` by default |
|`master_allowed_remote_ipv6_ips` | List of IPv6 CIDR blocks allowed to initiate an API connection, `["::/0"]` by default |
|`bastion_allowed_ports` | List of ports to open on bastion node, `[]` by default |
|`bastion_allowed_ports_ipv6` | List of ports to open on bastion node for IPv6 CIDR blocks, `[]` by default |
|`k8s_allowed_remote_ips` | List of CIDR allowed to initiate a SSH connection, empty by default |
|`k8s_allowed_remote_ips_ipv6` | List of IPv6 CIDR allowed to initiate a SSH connection, empty by default |
|`k8s_allowed_egress_ipv6_ips` | List of IPv6 CIDRs allowed for egress traffic, `["::/0"]` by default |
|`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |
|`worker_allowed_ports_ipv6` | List of ports to open on worker nodes for IPv6 CIDR blocks, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "::/0"}]` by default |
|`master_allowed_ports` | List of ports to open on master nodes, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "0.0.0.0/0"}]`, empty by default |
|`master_allowed_ports_ipv6` | List of ports to open on master nodes for IPv6 CIDR blocks, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "::/0"}]`, empty by default |
|`node_root_volume_size_in_gb` | Size of the root volume for nodes, 0 to use ephemeral storage |
|`master_root_volume_size_in_gb` | Size of the root volume for masters, 0 to use ephemeral storage |
|`master_volume_type` | Volume type of the root volume for control_plane, 'Default' by default |
@@ -290,6 +300,10 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`force_null_port_security` | Set `null` instead of `true` or `false` for `port_security`. `false` by default |
|`k8s_masters` | Map containing master node definition, see explanation for k8s_nodes and `sample-inventory/cluster.tfvars` |
|`k8s_master_loadbalancer_enabled` | Enable and use an Octavia load balancer for the K8s master nodes |
|`k8s_master_loadbalancer_listener_port` | Define via which port the K8s Api should be exposed. `6443` by default |
|`k8s_master_loadbalancer_server_port` | Define via which port the K8S api is available on the master nodes. `6443` by default |
|`k8s_master_loadbalancer_public_ip` | Specify if an existing floating IP should be used for the load balancer. A new floating IP is assigned by default |
##### k8s_nodes
@@ -304,7 +318,8 @@ k8s_nodes:
  node-name:
    az: string                     # Name of the AZ
    flavor: string                 # Flavor ID to use
    floating_ip: bool              # If floating IPs should be created or not
    floating_ip: bool              # If floating IPs should be used or not
    reserved_floating_ip: string   # If floating_ip is true use existing floating IP, if reserved_floating_ip is an empty string and floating_ip is true, a new floating IP will be created
    extra_groups: string           # (optional) Additional groups to add for kubespray, defaults to no groups
    image_id: string               # (optional) Image ID to use, defaults to var.image_id or var.image
    root_volume_size_in_gb: number # (optional) Size of the block storage to use as root disk, defaults to var.node_root_volume_size_in_gb or to use volume from flavor otherwise