Working symlinks are dependent on git configuration (when using the playbook as
a git repository, which is common), specifically `git config core.symlinks`.
While this is enabled by default, some company policies disable it.
Instead, use import_tasks, which avoids that class of bugs.
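A minimal sketch of the pattern, assuming a hypothetical shared task file (the path is illustrative, not the actual one used in the playbook):
```yaml
# Instead of relying on a symlinked task file (which breaks when core.symlinks
# is false), include the shared tasks explicitly.
- name: Run shared bootstrap tasks
  import_tasks: ../shared/bootstrap-common.yml
```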
* Simplify docker systemd unit
systemd handles missing units by ignoring the dependency, so we don't need
to template them.
* Remove RHEL 7/CentOS 7 support
- remove references in kubespray roles
- move CI from centos 7 to 8
- remove docs related to centos7
* Remove container-storage-setup
Only used for RHEL 7 and CentOS 7
* Fix: fix testcases_run.sh for upgrade tests
Need to git checkout ${CI_COMMIT_SHA} before running the upgrade playbook (partially reverts #11173)
* feat: add CI job to test upgrade
Add a packet_ubuntu22-calico-all-in-one-upgrade job
* make calico api server manifest backward compatible with versions older than 3.27.3
Add 3.28.1 checksums
Add 3.28.0 checksums
Change default version to 3.27.3
* change default calico version to 3.28.1
* Set mount type to DirectoryOrCreate for hostPath needed by Calico
Fixes https://github.com/kubernetes-sigs/kubespray/issues/10947
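A hedged sketch of the resulting volume definition (the volume name and path are illustrative; `type: DirectoryOrCreate` is the point of the change):
```yaml
# hostPath volume for Calico: DirectoryOrCreate makes kubelet create the
# directory if it is missing instead of failing the pod.
volumes:
  - name: cni-net-dir
    hostPath:
      path: /etc/cni/net.d
      type: DirectoryOrCreate
```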
This patch aims to be minimal and intentionally:
- does not change the generation logic for `supersede_domain` and `supersede_search`
- does not change how `nameserverentries` (for NetworkManager) is built
It seems like `nameserverentries` in the "Generate nameservers for resolvconf, including cluster DNS"
task is built the same way as `dhclient_supersede_nameserver_entries_list`.
However, `nameserverentries` in the "Generate nameservers for resolvconf, not including cluster DNS"
task (below) is built differently for some reason. It includes `configured_nameservers` as well.
Due to these differences, I have refrained from reusing the same building logic
(`dhclient_supersede_nameserver_entries_list`) for both.
If the `configured_nameservers` addition can be removed or made to apply
to dhclient as well, we could potentially build a single list and then
generate the `nameserverentries` and `supersede_nameserver` strings from it.
* CI: reduce VM resources requests to improve scheduling
* CI: Reduce default jobs; add labels (ci-full/extended) to run more tests
* CI: use jobs dependencies instead of stages
* precommit one-job
* CI: Use Kubevirt VM to run Molecule and Vagrant jobs
The inventory file generated by Terraform produces the following warnings:
```
[WARNING]: * Failed to parse <PATH>/kubespray/contrib/terraform/hetzner/inventory.ini with ini plugin:
<PATH>/kubespray/contrib/terraform/hetzner/inventory.ini:21: Section [k8s_cluster:children] includes undefined group: kube-master
...
[WARNING]: Could not match supplied host pattern, ignoring: kube-master
PLAY [Add kube-master nodes to kube_control_plane] ********************************************************************************************************
skipping: no hosts matched
[WARNING]: Could not match supplied host pattern, ignoring: kube-node
PLAY [Add kube-node nodes to kube_node] *******************************************************************************************************************
skipping: no hosts matched
```
Use a style file, as recommended by upstream. This makes for only one
source of truth.
Keep the previous upstream default for MD007 (the upstream default changed
in https://github.com/markdownlint/markdownlint/pull/373).
Use the GitLab dynamic child pipelines feature to have one source of truth
for the pre-commit jobs: the pre-commit config file.
Use one cache per pre-commit hook. This should reduce the "fetching cache"
time in gitlab-ci, since each job will have a separate cache with
only its hook installed.
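A rough sketch of the dynamic child pipeline mechanism, assuming a hypothetical generator script; job names, stages and paths below are illustrative:
```yaml
# .gitlab-ci.yml fragment: generate one job per pre-commit hook, then trigger
# the generated configuration as a child pipeline.
generate-precommit-jobs:
  stage: build
  script:
    # hypothetical helper reading .pre-commit-config.yaml and emitting one job per hook
    - ./scripts/gen-precommit-jobs.sh > precommit-pipeline.yml
  artifacts:
    paths:
      - precommit-pipeline.yml

precommit:
  stage: test
  trigger:
    include:
      - artifact: precommit-pipeline.yml
        job: generate-precommit-jobs
    strategy: depend
```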
We have inconsistent sets of options passed to the playbooks during our
CI runs.
Don't run ansible-playbook directly; instead, factor the execution out into
a bash function using all the common flags.
Also remove the various ENABLE_* variables and instead test directly for the
relevant conditions at execution time, as this is more obvious and
does not force one to go back and forth in the script.
With CentOS, kubespray currently produces the following warning:
[WARNING]: TASK: bootstrap-os : Enable Oracle Linux repo: The loop variable
'item' is already in use. You should set the `loop_var` value in the
`loop_control` option for the task to something else to avoid variable
collisions and unexpected behavior.
This could bite us in nasty ways, so fix it.
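A minimal sketch of the fix, with illustrative module parameters and variable names (the actual task differs):
```yaml
# Rename the inner loop variable so it does not collide with the outer `item`.
- name: Enable Oracle Linux repo
  ini_file:
    path: "/etc/yum.repos.d/{{ ol_repo.file }}"
    section: "{{ ol_repo.section }}"
    option: enabled
    value: "1"
    mode: "0644"
  loop: "{{ oracle_repos }}"
  loop_control:
    loop_var: ol_repo
```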
c58497cde (Refactor bootstrap-os (#10983), 2024-03-27) refactored the
bootstrap-os include but didn't adapt the Amazon Linux tasks to the
actual ID of Amazon Linux ('amzn').
Re-enable the CI so we can avoid that kind of breakage.
If a node.projectcalico.org resource already exists, patch the node to set
asNumber instead of applying the full resource, to prevent removal of
existing fields fed by the calico-node pods.
✅ Closes: 11096
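A hedged sketch of the patch approach (the wrapper path and variable names are assumptions, not the exact task):
```yaml
# Patch only the BGP AS number on an existing node resource instead of
# re-applying the whole manifest, so fields managed by calico-node are preserved.
- name: Set asNumber on existing Calico node
  command: >-
    {{ bin_dir }}/calicoctl.sh patch node {{ inventory_hostname }}
    --patch '{"spec": {"bgp": {"asNumber": {{ local_as }}}}}'
```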
When using the kube_node_instancers_with_disks* variables, there was
no configuration block using them to provision disks with the
VirtualBox provider.
This commit fixes that.
Some package requirements depend on inventory variables
(`kube_proxy_mode` in this case, but it could apply to others).
As the case seems pretty rare, instead of adding complexity to pkgs, we
add an escape hatch to use jinja conditions.
That should be revisited if we find ourselves shoehorning lots of logic
into it later on.
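As a rough illustration of the escape hatch, a hypothetical entry in the `pkgs` dictionary; the key names below are assumptions about the structure, not its exact schema:
```yaml
# A package whose installation depends on an inventory variable,
# expressed as a jinja condition evaluated at runtime.
pkgs:
  ipvsadm:
    groups: ['k8s_cluster']
    when: "kube_proxy_mode == 'ipvs'"
```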
Uses the logic introduced in the previous patch to convert all
kubernetes/preinstall/vars/* OS-specific files to the `pkgs`
dictionary.
Some niceties for devs:
- always validate the `pkgs` variable to catch mistakes in CI.
- ensure that `pkgs` is always sorted. This makes it easier to find the
packages you're looking for.
Adds infrastructure to install OS packages depending not only on the OS
(family, version, etc.) but also on groups.
All the information related to a particular package should reside in
the `pkgs` dictionary, which takes inspiration from the `downloads`
dictionary structure.
Since the structure we're setting in place for installing packages has
some complexity, add a JSON schema to avoid frustrating errors when
modifying it (adding/removing packages).
This reverts commit 4b0a134bc9.
The mentioned PR breaks scale.yml. This goes back to the status quo
until a proper fix can be provided, at which point we'll reapply the
PR.
Upgrade Snapshot controller installed for all supported Kubernetes
versions to v7.0.2. Also update the manifests used to deploy the
Snapshot controller.
* feat: add user facing variable with default
* feat: remove rolebinding to anonymous users after init and upgrade
* feat: use file discovery for secondary control plane nodes (see the sketch after this list)
* feat: use file discovery for nodes
* fix: do not fail if rolebinding does not exist
* docs: add warning about kube_api_anonymous_auth
* style: improve readability of delegate_to parameter
* refactor: rename discovery kubeconfig file
* test: enable new variable in hardening and upgrade test cases
* docs: add option to config parameters
* test: multiple instances and upgrade
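For the file-discovery items above, a hedged sketch of what file-based discovery looks like at the kubeadm level; the kubeconfig path is an assumption:
```yaml
# JoinConfiguration using a discovery kubeconfig file instead of a bootstrap token.
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  file:
    kubeConfigPath: /etc/kubernetes/discovery.kubeconfig
```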
* Move fedora ansible python install to bootstrap-os
* /bin/dir is set in bootstrap-os
* Removing ansible_os_family workarounds
Support for these distributions was merged in Ansible, no need to
override it ourselves now.
https://github.com/ansible/ansible/pull/69324 openEuler
https://github.com/ansible/ansible/pull/77275/ UnionTech OS Server 20
https://github.com/ansible/ansible/pull/78232/ Kylin
* Don't unconditionally set VARIANT_ID=coreos in os-release
This is simply wrong.
Furthermore, is_fedora_coreos is already handled in bootstrap-os
* Handle Clearlinux generically
Follow-up of 4eec302e86 (since we're using the
package module anyway, let's get rid of the custom task)
* Remove leftover files for CoreOS
CoreOS was replaced by Flatcar in 058438a25 but the file was copied
instead of moved.
* Remove workarounds for resolved ansible issues
* bootstrap: Use first_found to include per distro
Using ID and VARIANT_ID directly with first_found allows for fewer manual
includes.
Distro "families" are simply handled by symlinks.
* bootstrap: don't set ansible_python_interpreter
- Allows users to easily override the chosen python interpreter with group_vars
(group_vars have lower precedence than facts)
- Allows us to use vars at the task scope to use a virtualenv
Ansible python discovery has improved, so those workarounds should not
be necessary anymore.
A special workaround remains for Flatcar, since upstream ansible is not willing to
support it.
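A hedged illustration of what this enables; the variable names and paths are assumptions:
```yaml
# Users can now set ansible_python_interpreter in group_vars (facts no longer
# shadow it), and a single task can point at a virtualenv just for its own scope:
- name: Query nodes using the virtualenv's python
  kubernetes.core.k8s_info:
    kind: Node
  vars:
    ansible_python_interpreter: "{{ venv_dir }}/bin/python"
```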
* upgrade ansible version
Needed for with_first_found to work correctly:
https://github.com/ansible/ansible/issues/70772 fixed in 2.16
* Remove unused google cloud cloud_playbook
* Fix dpkg_selection on non-existing packages
Needed since ansible-core>2.16, see:
f10d11bcdc
* scripts: ignore download_hash download failures
Binary names on github releases often change and this script might break
because of that, this commit allow to ignore these failures as a mean to
be able to run the script anyway.
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
* scripts: use sha256sums for crio as well
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
* scripts: add ppc64le support for crio
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
---------
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
The new version brings the following improvements:
- remove having to resort to python to limit tags (it is slower than
the sh equivalent, as python has a somewhat significant startup time).
- Introduce a concept of minimum version so that it only gets Kubernetes
versions supported by Kubespray.
- Fix an issue with kata changing their file scheme (the arch
specifically)
- Now download sha256/sha256sum files if provided rather than
downloading the full file and computing the hash
- A few minor style tweaks
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
Fixes a bug when retrieving images with tags containing image digests.
The script now gets images from jobs and cronjobs as well.
New env variable DESTINATION_REGISTRY to push to another registry
instead of the local registry.
New env variable IMAGES_FROM_FILE to pull images listed in a file
instead of getting images from a running k8s environment.
New env variable REGISTRY_PORT to override port (default is 5000).
Under the original code, leader election failed for ingress controllers
as a result of a mismatch between the election-id in the controller config
and the resourceName in the relevant rule of the 'ingress-nginx' role.
This appeared in the controller logs.
To fix the issue, a command-line option was added to container
execution (--election-id=...).
Now, the election-id agrees with the resourceName provided in
the role-ingress-nginx.yml file. A comment in that file was
changed to reflect the new logic.
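A hedged sketch of the resulting container arguments; the exact id value is an assumption and only needs to match the resourceName in role-ingress-nginx.yml:
```yaml
# ingress-nginx controller container args: the election id must agree with the
# Role's resourceName used for leader-election leases.
args:
  - /nginx-ingress-controller
  - --election-id=ingress-controller-leader
  - --controller-class=k8s.io/ingress-nginx
```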
Co-authored-by: Vasilis Samoladas <vsam@softnet.tuc.gr>
Co-authored-by: Mohamed Omar Zaian <mohamedzaian@gmail.com>
This should avoid permission problems when the user creating the
directory and the user creating the content are different (for instance,
when container images are saved by root because the user
can't use the container runtime).
* Refactor of kubeadm images listing
Instead of setting multiple facts, we directly create the dict we need from
the kubeadm output.
* Remove useless 'default' filters in roles/download
* Only download kubeadm images where needed
The current wait method is fragile.
When the deployment version changes (which is done by upgrade_cluster in the previous Ansible task "Kubernetes Apps | Install and configure MetalLB"), the next task "Kubernetes Apps | Wait for MetalLB controller to be running" may fail with an error.
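One way to express a more robust wait, as a hedged illustration rather than the exact task used:
```yaml
# Wait on the rollout of the MetalLB controller Deployment instead of sampling
# pod state at a single point in time.
- name: Kubernetes Apps | Wait for MetalLB controller to be running
  command: >-
    {{ bin_dir }}/kubectl rollout status -n metallb-system
    deployment/controller --timeout=5m
  changed_when: false
```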
* [kubernetes] Make kubernetes 1.29.1 default
* [cri-o]: support cri-o 1.29
Use "crio status" instead of "crio-status" for cri-o >=1.29.0
* Remove GAed feature gate SeccompDefault
The SeccompDefault feature gate was removed in k8s 1.29
https://github.com/kubernetes/kubernetes/pull/121246
* Update upgrades.md
modify env serial to have real rolling upgrades
* Update upgrades.md
change section for serial
* Update docs/upgrades.md
Co-authored-by: Kundan Kumar <kundan.kumar@india.nec.com>
---------
Co-authored-by: Kundan Kumar <kundan.kumar@india.nec.com>
Handle all old dns domains:
- for nodelocaldns: in the same server block as the current dns_domain
- for coredns: suffix rewrite of each of the old dns domains to the
current one
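A hedged sketch of what the coredns side looks like; the domains are illustrative and the Corefile fragment is simplified:
```yaml
# Fragment of the coredns ConfigMap: rewrite queries for an old dns domain to
# the current one, and rewrite answers back so clients see the name they asked for.
data:
  Corefile: |
    .:53 {
        rewrite name suffix old-cluster.local cluster.local answer auto
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
        }
        forward . /etc/resolv.conf
    }
```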
The old version of the script downloaded all binaries and generated file checksums locally.
This was a slow process since all binaries of all architectures needed to be downloaded.
The new version simply downloads the .sha256 files containing the binary checksum in text
form which saves a lot of traffic and time.
* Enable configuring mountOptions, reclaimPolicy and volumeBindingMode for cinder-csi StorageClasses
* Check if class.mount_options is defined at all, before generating the option list
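A hedged example of what a rendered StorageClass could look like; the name, reclaim policy, binding mode and mount options are illustrative values:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-csi-retain
provisioner: cinder.csi.openstack.org
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
mountOptions:
  - noatime
```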
In the case where vagrant is not invoked directly from the repository
but from another location, and the Vagrantfile is "included" into
another, we need to be able to specify the location of the vagrant
directory; as of now it's hardcoded relative to the Vagrantfile
location. This commit fixes that.
* ignore_unreachable for etcd dir cleanup
ignore_errors only ignores errors that occur within the "file" module. However,
when the target node is offline, the playbook will still fail at this task
with the node in an "unreachable" state. Setting "ignore_unreachable: true" allows
the playbook to bypass offline nodes and move on to recovery
tasks on the remaining online nodes.
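A minimal sketch of the combination (the path is illustrative):
```yaml
# ignore_errors alone does not help when the host itself is unreachable;
# ignore_unreachable lets the recovery play continue on the remaining nodes.
- name: Delete old etcd data directory
  file:
    path: /var/lib/etcd
    state: absent
  ignore_errors: true
  ignore_unreachable: true
```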
* Re-arrange control plane recovery runbook steps
* Remove suggestion to manually update IP addresses
The suggestion was added in 48a182844c 4
years ago. But a new task added 2 years ago, in
ee0f1e9d58, automatically updates the API
server arg with the updated etcd node IP addresses. This suggestion is no
longer needed.
If you have questions, check the documentation at [kubespray.io](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/)
- Can be deployed on **[AWS](docs/aws.md), GCE, [Azure](docs/azure.md), [OpenStack](docs/openstack.md), [vSphere](docs/vsphere.md), [Equinix Metal](docs/equinix-metal.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- Can be deployed on **[AWS](docs/cloud_providers/aws.md), GCE, [Azure](docs/cloud_providers/azure.md), [OpenStack](docs/cloud_providers/openstack.md), [vSphere](docs/cloud_providers/vsphere.md), [Equinix Metal](docs/cloud_providers/equinix-metal.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports most popular **Linux distributions**
@@ -19,7 +19,7 @@ Below are several ways to use Kubespray to deploy a Kubernetes cluster.
#### Usage
Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
then run the following steps:
```ShellSession
@@ -75,18 +75,18 @@ You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mou
to access the inventory and SSH key in the container, like this:
```ShellSession
git checkout v2.24.2
docker pull quay.io/kubespray/kubespray:v2.24.2
git checkout v2.25.0
docker pull quay.io/kubespray/kubespray:v2.25.0
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
See [here](docs/ansible_collection.md) if you wish to use this repository as an Ansible collection
See [here](docs/ansible/ansible_collection.md) if you wish to use this repository as an Ansible collection
### Vagrant
@@ -99,7 +99,7 @@ python -V && pip -V
If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
then run the following step:
```ShellSession
@@ -109,80 +109,79 @@ vagrant up
## Documents
- [Requirements](#requirements)
- [Kubespray vs ...](docs/comparisons.md)
- [Getting started](docs/getting-started.md)
- [Setting up your first cluster](docs/setting-up-your-first-cluster.md)
- [Ansible inventory and tags](docs/ansible.md)
- [Integration with existing ansible repo](docs/integration.md)
- [Deployment data variables](docs/vars.md)
- [DNS stack](docs/dns-stack.md)
- [HA mode](docs/ha-mode.md)
- [Kubespray vs ...](docs/getting_started/comparisons.md)
- Supported Docker versions are 18.09, 19.03, 20.10, 23.0 and 24.0. The *recommended* Docker version is 20.10 (except on Debian bookworm, which no longer supports 20.10 and below). `Kubelet` might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the YUM ``versionlock`` plugin or ``apt pin``.
- The cri-o version should be aligned with the respective kubernetes version (i.e. kube_version=1.20.x, crio_version=1.20)
## Requirements
- **Minimum required version of Kubernetes is v1.26**
- **Minimum required version of Kubernetes is v1.28**
- **Ansible v2.14+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/operations/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**.
- If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
- The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
@@ -225,7 +224,7 @@ These limits are safeguarded by Kubespray. Actual requirements for your workload
You can choose among ten network plugins. (default: `calico`, except Vagrant uses `flannel`)
- [Calico](https://docs.tigera.io/calico/latest/about/) is a networking and network policy provider. Calico supports a flexible set of networking options
designed to give you the most efficient networking across a range of situations, including non-overlay
@@ -234,32 +233,32 @@ You can choose among ten network plugins. (default: `calico`, except Vagrant use
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
- [weave](docs/CNI/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
(Please refer to `weave` [troubleshooting documentation](https://www.weave.works/docs/net/latest/troubleshooting/)).
- [kube-ovn](docs/kube-ovn.md): Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.
- [kube-ovn](docs/CNI/kube-ovn.md): Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.
- [kube-router](docs/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational
- [kube-router](docs/CNI/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational
simplicity and high performance: it uses IPVS to provide Kube Services Proxy (if setup to replace kube-proxy),
iptables for network policies, and BGP for pods L3 networking (optionally with BGP peering with out-of-cluster BGP peers).
It can also optionally advertise routes to Kubernetes cluster Pods CIDRs, ClusterIPs, ExternalIPs and LoadBalancerIPs.
- [macvlan](docs/macvlan.md): Macvlan is a Linux network driver. Pods have their own unique MAC and IP address, connected directly to the physical (layer 2) network.
- [macvlan](docs/CNI/macvlan.md): Macvlan is a Linux network driver. Pods have their own unique MAC and IP address, connected directly to the physical (layer 2) network.
- [multus](docs/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.
- [multus](docs/CNI/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.
- [custom_cni](roles/network-plugin/custom_cni/) : You can specify some manifests that will be applied to the clusters to bring your own CNI and use ones not supported by Kubespray.
See `tests/files/custom_cni/README.md` and `tests/files/custom_cni/values.yaml` for an example with a CNI provided by a Helm Chart.
The network plugin to use is defined by the variable `kube_network_plugin`. There is also an
option to leverage built-in cloud provider networking instead.
See also [Network checker](docs/netcheck.md).
See also [Network checker](docs/advanced/netcheck.md).
## Ingress Plugins
- [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.
- [metallb](docs/metallb.md): the MetalLB bare-metal service LoadBalancer provider.
- [metallb](docs/ingress/metallb.md): the MetalLB bare-metal service LoadBalancer provider.
## Community docs and resources
@@ -280,4 +279,4 @@ See also [Network checker](docs/netcheck.md).
The Kubespray Project is released on an as-needed basis. The process is as follows:
1. An issue is proposing a new release with a changelog since the last release. Please see [a good sample issue](https://github.com/kubernetes-sigs/kubespray/issues/8325)
2. At least one of the [approvers](OWNERS_ALIASES) must approve this release
3. The `kube_version_min_required` variable is set to `n-1`
4. Remove hashes for [EOL versions](https://github.com/kubernetes/website/blob/main/content/en/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
5. Create the release note with [Kubernetes Release Notes Generator](https://github.com/kubernetes/release/blob/master/cmd/release-notes/README.md). See the following `Release note creation` section for the details.
6. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
7. An approver creates a release branch in the form `release-X.Y`
8. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) container images are built and tagged. See the following `Container image creation` section for the details.
9. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
10. The release issue is closed
11. An announcement email is sent to `dev@kubernetes.io` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
12. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
1. At least one of the [approvers](OWNERS_ALIASES) must approve this release
1. (Only for major releases) The `kube_version_min_required` variable is set to `n-1`
1. (Only for major releases) Remove hashes for [EOL versions](https://github.com/kubernetes/website/blob/main/content/en/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
1. Create the release note with [Kubernetes Release Notes Generator](https://github.com/kubernetes/release/blob/master/cmd/release-notes/README.md). See the following `Release note creation` section for the details.
1. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
1. (Only for major releases) An approver creates a release branch in the form `release-X.Y`
1. (For major releases) On the `master` branch: bump the version in `galaxy.yml` to the next expected major release (X.y.0 with y = Y + 1), make a Pull Request.
1. (For minor releases) On the `release-X.Y` branch: bump the version in `galaxy.yml` to the next expected minor release (X.Y.z with z = Z + 1), make a Pull Request.
1. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) container images are built and tagged. See the following `Container image creation` section for the details.
1. (Only for major releases) The `KUBESPRAY_VERSION` in `.gitlab-ci.yml` is upgraded to the version we just released # TODO clarify this, this variable is for testing upgrades.
1. The release issue is closed
1. An announcement email is sent to `dev@kubernetes.io` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
1. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
1. Create/Update the issue for upgrading kubernetes and [k8s-conformance](https://github.com/cncf/k8s-conformance)
You can specify a `default_release` for apt on Debian/Ubuntu by overriding this variable. This is helpful if you need a different package or version for the main GlusterFS packages (e.g. GlusterFS 3.5.x instead of 3.2.x with the `wheezy-backports` default release on Debian Wheezy).
- name: Ensure Gluster brick and mount directories exist.
file:
path: "{{ item }}"
state: directory
mode: 0775
mode: "0775"
with_items:
- "{{ gluster_brick_dir }}"
- "{{ gluster_mount_dir }}"
@@ -62,7 +62,7 @@
replicas:"{{ groups['gfs-cluster'] | length }}"
cluster:"{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip'] | default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
host:"{{ inventory_hostname }}"
force:yes
force:true
run_once:true
when:groups['gfs-cluster'] | length > 1
@@ -73,7 +73,7 @@
brick:"{{ gluster_brick_dir }}"
cluster:"{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip'] | default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
host:"{{ inventory_hostname }}"
force:yes
force:true
run_once:true
when:groups['gfs-cluster'] | length <= 1
@@ -101,7 +101,7 @@
template:
dest:"{{ gluster_mount_dir }}/.test-file.txt"
src:test-file.txt
mode:0644
mode:"0644"
when:groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
@@ -24,6 +24,7 @@ most modern installs of OpenStack that support the basic services.
- [Ultimum](https://ultimum.io/)
- [VexxHost](https://vexxhost.com/)
- [Zetta](https://www.zetta.io/)
- [Cloudify](https://www.cloudify.ro/en)
## Approach
@@ -97,9 +98,10 @@ binaries available on hyperkube v1.4.3_coreos.0 or higher.
## Module Architecture
The configuration is divided into three modules:
The configuration is divided into four modules:
- Network
- Loadbalancer
- IPs
- Compute
@@ -269,11 +271,18 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`supplementary_master_groups` | To add ansible groups to the masters, such as `kube_node` for tainting them as nodes, empty by default. |
|`supplementary_node_groups` | To add ansible groups to the nodes, such as `kube_ingress` for running ingress controller pods, empty by default. |
|`bastion_allowed_remote_ips` | List of CIDR allowed to initiate a SSH connection, `["0.0.0.0/0"]` by default |
|`bastion_allowed_remote_ipv6_ips` | List of IPv6 CIDR allowed to initiate a SSH connection, `["::/0"]` by default |
|`master_allowed_remote_ips` | List of CIDR blocks allowed to initiate an API connection, `["0.0.0.0/0"]` by default |
|`master_allowed_remote_ipv6_ips` | List of IPv6 CIDR blocks allowed to initiate an API connection, `["::/0"]` by default |
|`bastion_allowed_ports` | List of ports to open on bastion node, `[]` by default |
|`bastion_allowed_ports_ipv6` | List of ports to open on bastion node for IPv6 CIDR blocks, `[]` by default |
|`k8s_allowed_remote_ips` | List of CIDR allowed to initiate a SSH connection, empty by default |
|`k8s_allowed_remote_ips_ipv6` | List of IPv6 CIDR allowed to initiate a SSH connection, empty by default |
|`k8s_allowed_egress_ipv6_ips` | List of IPv6 CIDRs allowed for egress traffic, `["::/0"]` by default |
|`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |
|`worker_allowed_ports_ipv6` | List of ports to open on worker nodes for IPv6 CIDR blocks, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "::/0"}]` by default |
|`master_allowed_ports` | List of ports to open on master nodes, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "0.0.0.0/0"}]`, empty by default |
|`master_allowed_ports_ipv6` | List of ports to open on master nodes for IPv6 CIDR blocks, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "::/0"}]`, empty by default |
|`node_root_volume_size_in_gb` | Size of the root volume for nodes, 0 to use ephemeral storage |
|`master_root_volume_size_in_gb` | Size of the root volume for masters, 0 to use ephemeral storage |
|`master_volume_type` | Volume type of the root volume for control_plane, 'Default' by default |
@@ -290,6 +299,10 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`force_null_port_security` | Set `null` instead of `true` or `false` for `port_security`. `false` by default |
|`k8s_masters` | Map containing master node definition, see explanation for k8s_nodes and `sample-inventory/cluster.tfvars` |
| `k8s_master_loadbalancer_enabled`| Enable and use an Octavia load balancer for the K8s master nodes |
| `k8s_master_loadbalancer_listener_port` | Define via which port the K8s API should be exposed. `6443` by default |
| `k8s_master_loadbalancer_server_port` | Define via which port the K8s API is available on the masters. `6443` by default |
| `k8s_master_loadbalancer_public_ip` | Specify if an existing floating IP should be used for the load balancer. A new floating IP is assigned by default |
Calico supports two types of encapsulation: [VXLAN and IP in IP](https://docs.projectcalico.org/v3.11/networking/vxlan-ipip). VXLAN is the more mature implementation and enabled by default, please check your environment if you need *IP in IP* encapsulation.
@@ -235,7 +243,7 @@ If you are running your cluster with the default calico settings and are upgradi
* perform a manual migration to vxlan before upgrading kubespray (see migrating from IP in IP to VXLAN below)
* pin the pre-2.19 settings in your ansible inventory (see IP in IP mode settings below)
**Note:** VXLAN in IPv6 is only supported when the kernel is >= 3.12. So if your kernel version is < 3.12, please don't set `calico_vxlan_mode_ipv6: vxlanAlways`. More details in [#Issue 6877](https://github.com/projectcalico/calico/issues/6877).
**Note:** VXLAN in IPv6 is only supported when the kernel is >= 3.12. So if your kernel version is < 3.12, please don't set `calico_vxlan_mode_ipv6: Always`. More details in [#Issue 6877](https://github.com/projectcalico/calico/issues/6877).
### IP in IP mode
@@ -374,7 +382,7 @@ To clean up any ipvs leftovers:
Calico node, typha and kube-controllers need to be able to talk to the kubernetes API. Please reference the [Enabling eBPF Calico Docs](https://docs.projectcalico.org/maintenance/ebpf/enabling-bpf) for guidelines on how to do this.
Kubespray sets up the `kubernetes-services-endpoint` configmap based on the contents of the `loadbalancer_apiserver` inventory variable documented in [HA Mode](/docs/ha-mode.md).
Kubespray sets up the `kubernetes-services-endpoint` configmap based on the contents of the `loadbalancer_apiserver` inventory variable documented in [HA Mode](/docs/operations/ha-mode.md).
If no external loadbalancer is used, Calico eBPF can also use the localhost loadbalancer option. We are able to do so only if you use the same port for the localhost apiserver loadbalancer and the kube-apiserver. In this case Calico Automatic Host Endpoints need to be enabled to allow services like `coredns` and `metrics-server` to communicate with the kubernetes host endpoint. See [this blog post](https://www.projectcalico.org/securing-kubernetes-nodes-with-calico-automatic-host-endpoints/) on enabling automatic host endpoints.
For further information, make sure to check the official [Cilium documentation.](https://docs.cilium.io/en/stable/gettingstarted/encryption-wireguard/)
For further information, make sure to check the official [Cilium documentation.](https://docs.cilium.io/en/stable/security/network/encryption-wireguard/)
To enable Wireguard encryption, you just need to set two variables.