Clean-up references to inventory_builder in docs

Max Gautier
2024-11-26 14:54:12 +01:00
parent 56e41f0647
commit 69ca324192
5 changed files with 32 additions and 116 deletions

View File

@@ -52,14 +52,6 @@ repos:
   - repo: local
     hooks:
-      - id: tox-inventory-builder
-        name: tox-inventory-builder
-        entry: bash -c "cd contrib/inventory_builder && tox"
-        language: python
-        pass_filenames: false
-        additional_dependencies:
-          - tox==4.15.0
       - id: check-readme-versions
         name: check-readme-versions
         entry: tests/scripts/check_readme_versions.sh
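
The hunk above only removes the tox-inventory-builder hook; the neighbouring hooks stay as they are. A quick way to check that the trimmed config still runs cleanly, assuming `pre-commit` is installed locally:

```ShellSession
# Runs every hook remaining in the pre-commit config against the full tree;
# a dangling hook id or broken entry would fail here.
pre-commit run --all-files
```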

View File

@@ -26,9 +26,7 @@ then run the following steps:
 # Copy ``inventory/sample`` as ``inventory/mycluster``
 cp -rfp inventory/sample inventory/mycluster
 
-# Update Ansible inventory file with inventory builder
-declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
-CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
+# Update Ansible inventory file with the IPs of your nodes
 
 # Review and change parameters under ``inventory/mycluster/group_vars``
 cat inventory/mycluster/group_vars/all/all.yml
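
The replacement comment leaves the actual edit to the reader. A minimal sketch of the manual step, reusing the group layout from the old docs example and the IPs from the deleted `IPS` array; node names, groups and addresses are illustrative, not part of the commit:

```ShellSession
# Illustrative manual equivalent of the removed inventory builder call:
# declare each node with its ssh address (ansible_host) and internal ip.
cat >> inventory/mycluster/inventory.ini <<'EOF'
node1 ansible_host=10.10.1.3 ip=10.10.1.3
node2 ansible_host=10.10.1.4 ip=10.10.1.4
node3 ansible_host=10.10.1.5 ip=10.10.1.5

[kube_control_plane]
node1

[etcd]
node1
node2
node3

[kube_node]
node2
node3
EOF
```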

View File

@@ -34,91 +34,23 @@ Based on the table below and the available python version for your ansible host
 |-----------------|----------------|
 | >= 2.16.4       | 3.10-3.12      |
 
-## Inventory
+## Customize Ansible vars
 
-The inventory is composed of 3 groups:
+Kubespray expects users to use one of the following variable sources for settings and customization:
 
-* **kube_node**: list of kubernetes nodes where the pods will run.
-* **kube_control_plane**: list of servers where kubernetes control plane components (apiserver, scheduler, controller) will run.
-* **etcd**: list of servers to compose the etcd cluster. You should have at least 3 servers for failover purposes.
-
-When _kube_node_ contains _etcd_, your etcd cluster is also schedulable for Kubernetes workloads.
-If you want a standalone etcd cluster, make sure those groups do not intersect.
-If you want a server to act both as control plane and node, it must be defined
-in both groups _kube_control_plane_ and _kube_node_. If you want a standalone and
-unschedulable control plane, the server must be defined only in _kube_control_plane_ and
-not in _kube_node_.
-
-There are also two special groups:
-
-* **calico_rr**: explained for [advanced Calico networking cases](/docs/CNI/calico.md)
-* **bastion**: configure a bastion host if your nodes are not directly reachable
-
-Lastly, the **k8s_cluster** group is dynamically defined as the union of **kube_node**, **kube_control_plane** and **calico_rr**.
-It is used internally and for the purpose of defining whole-cluster variables (`<inventory>/group_vars/k8s_cluster/*.yml`).
-
-Below is a complete inventory example:
-
-```ini
-## Configure 'ip' variable to bind kubernetes services on a
-## different ip than the default iface
-node1 ansible_host=95.54.0.12 ip=10.3.0.1
-node2 ansible_host=95.54.0.13 ip=10.3.0.2
-node3 ansible_host=95.54.0.14 ip=10.3.0.3
-node4 ansible_host=95.54.0.15 ip=10.3.0.4
-node5 ansible_host=95.54.0.16 ip=10.3.0.5
-node6 ansible_host=95.54.0.17 ip=10.3.0.6
-
-[kube_control_plane]
-node1
-node2
-
-[etcd]
-node1
-node2
-node3
-
-[kube_node]
-node2
-node3
-node4
-node5
-node6
-```
-
-## Group vars and overriding variables precedence
-
-The group variables controlling the main deployment options are located in the directory ``inventory/sample/group_vars``.
-Optional variables are located in `inventory/sample/group_vars/all.yml`.
-Mandatory variables that are common to at least one role (or a node group) can be found in
-`inventory/sample/group_vars/k8s_cluster.yml`.
-There are also role vars for the docker, kubernetes preinstall and control plane roles.
-According to the [ansible docs](https://docs.ansible.com/ansible/latest/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
-those cannot be overridden from the group vars. In order to override them, one should use
-the `-e` runtime flag (the simplest way) or the other layers described in the docs.
-
-Kubespray uses only a few layers to override things (or expects them to
-be overridden for roles):
 
 | Layer                                   | Comment                                                                       |
 |-----------------------------------------|-------------------------------------------------------------------------------|
-| **role defaults**                       | provides best UX to override things for Kubespray deployments                 |
-| inventory vars                          | Unused                                                                        |
-| **inventory group_vars**                | Expects users to use ``all.yml``, ``k8s_cluster.yml`` etc. to override things |
-| inventory host_vars                     | Unused                                                                        |
-| playbook group_vars                     | Unused                                                                        |
-| playbook host_vars                      | Unused                                                                        |
-| **host facts**                          | Kubespray overrides for internal roles' logic, like state flags               |
-| play vars                               | Unused                                                                        |
-| play vars_prompt                        | Unused                                                                        |
-| play vars_files                         | Unused                                                                        |
-| registered vars                         | Unused                                                                        |
-| set_facts                               | Kubespray overrides those in some places                                      |
-| **role and include vars**               | Provides bad UX to override things! Use extra vars to enforce                 |
-| block vars (only for tasks in block)    | Kubespray overrides for internal roles' logic                                 |
-| task vars (only for the task)           | Unused for roles, used only by helper scripts                                 |
+| inventory vars                          |                                                                               |
+| - **inventory group_vars**              | most used                                                                     |
+| - inventory host_vars                   | host specific vars overrides; group_vars is usually more practical            |
 | **extra vars** (always win precedence)  | override with ``ansible-playbook -e @foo.yml``                                |
+
+> [!IMPORTANT]
+> Extra vars are best used to override Kubespray internal variables, for instance `roles/<role>/vars/`.
+> Those vars are usually **not expected** (by Kubespray developers) to be modified by end users, and are not part of the
+> Kubespray interface. Thus they can change, disappear, or break stuff unexpectedly.
 
 ## Ansible tags
 
 The following tags are defined in playbooks:
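
The surviving extra-vars row is the documented escape hatch for anything not reachable through group_vars. A minimal sketch of that flow; the file name `override.yml` is illustrative, and `kubeconfig_localhost` is simply a commonly toggled Kubespray variable, not one named by this commit:

```ShellSession
# Extra vars always win precedence, so values here beat any group_vars.
cat > override.yml <<'EOF'
kubeconfig_localhost: true
EOF
ansible-playbook -i inventory/mycluster/ -e @override.yml cluster.yml
```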

View File

@@ -6,29 +6,25 @@ Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. There is
 an example inventory located
 [here](https://github.com/kubernetes-sigs/kubespray/blob/master/inventory/sample/inventory.ini).
 
-You can use an
-[inventory generator](https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/inventory_builder/inventory.py)
-to create or modify an Ansible inventory. Currently, it is limited in
-functionality and is only used for configuring a basic Kubespray cluster inventory, but it does
-support creating inventory files for large clusters as well. It now supports
-separated ETCD and Kubernetes control plane roles from the node role if the size exceeds a
-certain threshold. Run `python3 contrib/inventory_builder/inventory.py help` for more information.
-
-Example inventory generator usage:
+## Building your own inventory
+
+Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. See the
+[example inventory](/inventory/sample/inventory.ini)
+and [Ansible documentation on building your inventory](https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html),
+and [details on the inventory structure expected by Kubespray](/docs/ansible/inventory.md).
 
 ```ShellSession
 cp -r inventory/sample inventory/mycluster
-declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
-CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
+<your-favorite-editor> inventory/mycluster/inventory.ini
+
+# Review and change parameters under ``inventory/mycluster/group_vars``
+<your-favorite-editor> inventory/mycluster/group_vars/all.yml  # for every node, including etcd
+<your-favorite-editor> inventory/mycluster/group_vars/k8s_cluster.yml  # for every node in the cluster (not etcd when it's separate)
+<your-favorite-editor> inventory/mycluster/group_vars/kube_control_plane.yml  # for the control plane
+<your-favorite-editor> inventory/mycluster/group_vars/kube_node.yml  # for worker nodes
 ```
-
-Then use `inventory/mycluster/hosts.yml` as inventory file.
-
-## Starting custom deployment
-
-Once you have an inventory, you may want to customize deployment data vars
-and start the deployment:
 
 **IMPORTANT**: Edit my\_inventory/groups\_vars/\*.yaml to override data vars:
 
 ```ShellSession
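
The deleted closing line ("Then use `inventory/mycluster/hosts.yml` as inventory file.") gets no one-line replacement in this hunk; the later hunks in this commit pass the inventory directory to `ansible-playbook` instead. A sketch, assuming the `inventory/mycluster` layout built above:

```ShellSession
# The directory form picks up inventory.ini together with group_vars/.
ansible-playbook -i inventory/mycluster/ -b cluster.yml
```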

View File

@@ -212,17 +212,15 @@ Copy ``inventory/sample`` as ``inventory/mycluster``:
 cp -rfp inventory/sample inventory/mycluster
 ```
 
-Update Ansible inventory file with inventory builder:
+Update the sample Ansible inventory file with the IPs given by gcloud:
 
 ```ShellSession
-declare -a IPS=($(gcloud compute instances list --filter="tags.items=kubernetes-the-kubespray-way" --format="value(EXTERNAL_IP)" | tr '\n' ' '))
-CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
+gcloud compute instances list --filter="tags.items=kubernetes-the-kubespray-way"
 ```
 
-Open the generated `inventory/mycluster/hosts.yaml` file and adjust it so
-that controller-0, controller-1 and controller-2 are control plane nodes and
-worker-0, worker-1 and worker-2 are worker nodes. Also update the `ip` to the respective local VPC IP and
-remove the `access_ip`.
+Open the `inventory/mycluster/inventory.ini` file and edit it so
+that controller-0, controller-1 and controller-2 are in the `kube_control_plane` group and
+worker-0, worker-1 and worker-2 are in the `kube_node` group. Set the `ip` variable to the
+respective local VPC IP for each node.
 
 The main configuration for the cluster is stored in
 `inventory/mycluster/group_vars/k8s_cluster/k8s_cluster.yml`. In this file we
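
Since the builder no longer turns the gcloud output into inventory entries, a hedged sketch of how that output might be formatted for hand-editing; the `--format` fields follow the removed `value(EXTERNAL_IP)` usage, and the output shape is illustrative only:

```ShellSession
# Illustrative: print "<name> ansible_host=<external-ip> ip=<internal-ip>"
# lines ready to paste into inventory/mycluster/inventory.ini.
gcloud compute instances list \
  --filter="tags.items=kubernetes-the-kubespray-way" \
  --format="value(name,EXTERNAL_IP,INTERNAL_IP)" |
while read -r name ext int; do
  echo "${name} ansible_host=${ext} ip=${int}"
done
```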
@@ -242,7 +240,7 @@ the kubernetes cluster, just change the 'false' to 'true' for
 Now we will deploy the configuration:
 
 ```ShellSession
-ansible-playbook -i inventory/mycluster/hosts.yaml -u $USERNAME -b -v --private-key=~/.ssh/id_rsa cluster.yml
+ansible-playbook -i inventory/mycluster/ -u $USERNAME -b -v --private-key=~/.ssh/id_rsa cluster.yml
 ```
 
 Ansible will now execute the playbook; this can take up to 20 minutes.
@@ -596,7 +594,7 @@ If you want to keep the VMs and just remove the cluster state, you can simply
 run another Ansible playbook:
 
 ```ShellSession
-ansible-playbook -i inventory/mycluster/hosts.yaml -u $USERNAME -b -v --private-key=~/.ssh/id_rsa reset.yml
+ansible-playbook -i inventory/mycluster/ -u $USERNAME -b -v --private-key=~/.ssh/id_rsa reset.yml
 ```
 
 Resetting the cluster to the VMs' original state usually takes about a couple