Merge pull request #11748 from VannTen/cleanup/remove_inventory_builder
Remove inventory_builder and re-organize docs
@@ -34,91 +34,22 @@ Based on the table below and the available python version for your ansible host
| Ansible Version | Python Version |
|-----------------|----------------|
| >= 2.16.4       | 3.10-3.12      |
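Before going further, it may help to confirm what your control host actually provides; a quick, read-only check:

```ShellSession
# Confirm the interpreter and Ansible versions available on the control host
python3 --version
ansible --version
```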
## Inventory

## Customize Ansible vars
The inventory is composed of 3 groups:

* **kube_node**: list of Kubernetes nodes where the pods will run.
* **kube_control_plane**: list of servers where the Kubernetes control plane components (apiserver, scheduler, controller) will run.
* **etcd**: list of servers composing the etcd cluster. You should have at least 3 servers for failover purposes.

When _kube_node_ contains _etcd_, your etcd cluster is also schedulable for Kubernetes workloads.
If you want a standalone etcd cluster, make sure those groups do not intersect.
If you want a server to act as both control plane and node, it must be defined
in both groups _kube_control_plane_ and _kube_node_. If you want a standalone and
unschedulable control plane, the server must be defined only in _kube_control_plane_ and
not in _kube_node_.
There are also two special groups:

* **calico_rr**: explained for [advanced Calico networking cases](/docs/CNI/calico.md)
* **bastion**: configure a bastion host if your nodes are not directly reachable

Lastly, the **k8s_cluster** group is dynamically defined as the union of **kube_node**, **kube_control_plane** and **calico_rr**.
It is used internally and for defining cluster-wide variables (`<inventory>/group_vars/k8s_cluster/*.yml`).
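For illustration, a minimal sketch of adding such a cluster-wide variables file (the `inventory/mycluster` path and the `my-overrides.yml` file name are assumptions; `kube_network_plugin` and `kube_service_addresses` are variables from the sample group_vars, shown with their sample defaults):

```ShellSession
# Sketch: start from the sample inventory, then add cluster-wide variables
# in a separate file (every *.yml under group_vars/k8s_cluster/ is loaded)
cp -r inventory/sample inventory/mycluster
cat > inventory/mycluster/group_vars/k8s_cluster/my-overrides.yml <<EOF
kube_network_plugin: calico
kube_service_addresses: 10.233.0.0/18
EOF
```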
Below is a complete inventory example:
```ini
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_host=95.54.0.12 ip=10.3.0.1
node2 ansible_host=95.54.0.13 ip=10.3.0.2
node3 ansible_host=95.54.0.14 ip=10.3.0.3
node4 ansible_host=95.54.0.15 ip=10.3.0.4
node5 ansible_host=95.54.0.16 ip=10.3.0.5
node6 ansible_host=95.54.0.17 ip=10.3.0.6

[kube_control_plane]
node1
node2

[etcd]
node1
node2
node3

[kube_node]
node2
node3
node4
node5
node6
```
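With an inventory like the one above in place, a typical invocation looks like the sketch below (the `inventory/mycluster/hosts.ini` path is an assumption; `--become` is needed because most tasks require root):

```ShellSession
# Run the cluster playbook against your inventory (path is illustrative)
ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml
```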
## Group vars and overriding variables precedence
The group variables that control the main deployment options are located in the directory `inventory/sample/group_vars`.
Optional variables are located in `inventory/sample/group_vars/all.yml`.
Mandatory variables that are common to at least one role (or a node group) can be found in
`inventory/sample/group_vars/k8s_cluster.yml`.
There are also role vars for the docker, kubernetes preinstall and control plane roles.
According to the [ansible docs](https://docs.ansible.com/ansible/latest/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
those cannot be overridden from the group vars. To override them, use
the `-e` runtime flag (the simplest way) or one of the other layers described in the docs.
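For example, a sketch of both forms of the `-e` flag (the inventory path and the override file name are illustrative):

```ShellSession
# Override a single variable at runtime
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -e kube_network_plugin=cilium
# Or load a whole file of overrides
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -e @my-overrides.yml
```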
Kubespray uses only a few layers to override things (or expects them to
be overridden for roles):
Kubespray expects users to use one of the following variable sources for settings and customization:
| Layer                                   | Comment                                                                       |
|-----------------------------------------|-------------------------------------------------------------------------------|
| **role defaults**                       | provides best UX to override things for Kubespray deployments                |
| inventory vars                          | Unused                                                                        |
| **inventory group_vars**                | Expects users to use `all.yml`, `k8s_cluster.yml` etc. to override things     |
| inventory host_vars                     | Unused                                                                        |
| playbook group_vars                     | Unused                                                                        |
| playbook host_vars                      | Unused                                                                        |
| **host facts**                          | Kubespray overrides for internal roles' logic, like state flags               |
| play vars                               | Unused                                                                        |
| play vars_prompt                        | Unused                                                                        |
| play vars_files                         | Unused                                                                        |
| registered vars                         | Unused                                                                        |
| set_facts                               | Kubespray overrides those in some places                                      |
| **role and include vars**               | Provides bad UX to override things! Use extra vars to enforce                 |
| block vars (only for tasks in block)    | Kubespray overrides for internal roles' logic                                 |
| task vars (only for the task)           | Unused for roles, only used for helper scripts                                |
| inventory vars                          |                                                                               |
| - **inventory group_vars**              | most used                                                                     |
| - inventory host_vars                   | host-specific var overrides; group_vars is usually more practical             |
| **extra vars** (always win precedence)  | override with `ansible-playbook -e @foo.yml`                                  |
> [!IMPORTANT]
> Extra vars are best used to override Kubespray internal variables, for instance role vars (files under `roles/*/vars/`).
> Those vars are usually **not expected** (by Kubespray developers) to be modified by end users, and are not part of the Kubespray
> interface. Thus they can change, disappear, or break things unexpectedly.
## Ansible tags

The following tags are defined in playbooks:
@@ -257,42 +188,32 @@ ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
--tags download --skip-tags upload,upgrade
```
Note: use `--tags` and `--skip-tags` wisely and only if you're 100% sure what you're doing.
## Bastion host
If you prefer to not make your nodes publicly accessible (nodes with private IPs only),
you can use a so-called _bastion_ host to connect to your nodes. To specify and use a bastion,
simply add a line to your inventory, replacing x.x.x.x with the public IP of the
bastion host.

```ShellSession
[bastion]
bastion ansible_host=x.x.x.x
```

For more information about Ansible and bastion hosts, read
[Running Ansible Through an SSH Bastion Host](https://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)
Note: use `--tags` and `--skip-tags` wisely and only if you're 100% sure what you're doing.
## Mitogen

Mitogen support is deprecated; please see the [mitogen related docs](/docs/advanced/mitogen.md) for usage and the reasons for deprecation.
## Beyond ansible 2.9

## Troubleshooting Ansible issues
The Ansible project has decided, in order to ease their maintenance burden, to split into
two projects, which are now joined under the Ansible umbrella.

Ansible-base (the 2.10.x branch) will contain just the ansible language implementation, while
ansible modules that were previously bundled into a single repository will be part of the
ansible 3.x package. Please see [this blog post](https://blog.while-true-do.io/ansible-release-3-0-0/)
that explains in detail the need and the evolution plan.

**Note:** this change means that ansible virtual envs cannot be upgraded with `pip install -U`.
You first need to uninstall your old ansible (pre-2.10) version and install the new one.
Having the wrong version of ansible, ansible collections or python dependencies can cause issues.
In particular, Kubespray ships custom modules which Ansible needs to find, for which you should set [ANSIBLE_LIBRARY](https://docs.ansible.com/ansible/latest/dev_guide/developing_locally.html#adding-a-module-or-plugin-outside-of-a-collection):
```ShellSession
pip uninstall ansible ansible-base ansible-core
cd kubespray/
pip install -U .
export ANSIBLE_LIBRARY=<kubespray_dir>/library
```
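A quick way to sanity-check the result: `ansible --version` prints the configured module search path, which should now include Kubespray's `library/` directory.

```ShellSession
# The module search path should list <kubespray_dir>/library
ansible --version | grep -i "module search path"
```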
A simple way to ensure you get all the correct versions of Ansible is to use
the [pre-built docker image from Quay](https://quay.io/repository/kubespray/kubespray?tab=tags).
You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mounts/)
to access the inventory and SSH key in the container, like this:
```ShellSession
git checkout v2.26.0
docker pull quay.io/kubespray/kubespray:v2.26.0
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
  --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
  quay.io/kubespray/kubespray:v2.26.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```
docs/ansible/inventory.md (new file, 71 lines)
@@ -0,0 +1,71 @@
# Inventory
The inventory is composed of 3 groups:

* **kube_node**: list of Kubernetes nodes where the pods will run.
* **kube_control_plane**: list of servers where the Kubernetes control plane components (apiserver, scheduler, controller) will run.
* **etcd**: list of servers composing the etcd cluster. You should have at least 3 servers for failover purposes.

When _kube_node_ contains _etcd_, your etcd cluster is also schedulable for Kubernetes workloads.
If you want a standalone etcd cluster, make sure those groups do not intersect.
If you want a server to act as both control plane and node, it must be defined
in both groups _kube_control_plane_ and _kube_node_. If you want a standalone and
unschedulable control plane, the server must be defined only in _kube_control_plane_ and
not in _kube_node_.
There are also two special groups:

* **calico_rr**: explained for [advanced Calico networking cases](/docs/CNI/calico.md)
* **bastion**: configure a bastion host if your nodes are not directly reachable

Lastly, the **k8s_cluster** group is dynamically defined as the union of **kube_node**, **kube_control_plane** and **calico_rr**.
It is used internally and for defining cluster-wide variables (`<inventory>/group_vars/k8s_cluster/*.yml`).

Below is a complete inventory example:
```ini
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_host=95.54.0.12 ip=10.3.0.1
node2 ansible_host=95.54.0.13 ip=10.3.0.2
node3 ansible_host=95.54.0.14 ip=10.3.0.3
node4 ansible_host=95.54.0.15 ip=10.3.0.4
node5 ansible_host=95.54.0.16 ip=10.3.0.5
node6 ansible_host=95.54.0.17 ip=10.3.0.6

[kube_control_plane]
node1
node2

[etcd]
node1
node2
node3

[kube_node]
node2
node3
node4
node5
node6
```
## Inventory customization

See [Customize Ansible vars](/docs/ansible/ansible.md#customize-ansible-vars)
and the [Ansible documentation on group_vars](https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html#assigning-a-variable-to-many-machines-group-variables).
## Bastion host
If you prefer to not make your nodes publicly accessible (nodes with private IPs only),
you can use a so-called _bastion_ host to connect to your nodes. To specify and use a bastion,
simply add a line to your inventory, replacing x.x.x.x with the public IP of the
bastion host.

```ShellSession
[bastion]
bastion ansible_host=x.x.x.x
```

For more information about Ansible and bastion hosts, read
[Running Ansible Through an SSH Bastion Host](https://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)
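Under the hood, connecting through a bastion boils down to standard SSH proxying, which Kubespray wires up for you once the `bastion` group is defined; a generic sketch of the equivalent manual hop (addresses and user are illustrative):

```ShellSession
# Manual equivalent: hop through the bastion to reach a private node
ssh -o ProxyJump=ubuntu@x.x.x.x ubuntu@10.3.0.1
```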