Clean-up references to inventory_builder in docs

Max Gautier
2024-11-26 14:54:12 +01:00
parent 56e41f0647
commit 69ca324192
5 changed files with 32 additions and 116 deletions


@@ -6,29 +6,25 @@ Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. There is
an example inventory located
[here](https://github.com/kubernetes-sigs/kubespray/blob/master/inventory/sample/inventory.ini).
You can use an
[inventory generator](https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/inventory_builder/inventory.py)
to create or modify an Ansible inventory. It is currently limited in
functionality and is only used for configuring a basic Kubespray cluster inventory, but it
supports creating inventory files for large clusters as well. It can also
separate the etcd and Kubernetes control plane roles from the node role once the cluster
size exceeds a certain threshold. Run `python3 contrib/inventory_builder/inventory.py help` for more information.
## Building your own inventory
Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. See the
[example inventory](/inventory/sample/inventory.ini),
the [Ansible documentation on building your inventory](https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html),
and the [details on the inventory structure expected by Kubespray](/docs/ansible/inventory.md).
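For orientation, a filled-in INI inventory for a small cluster might look like the sketch below. The host names and addresses are placeholders, and the file is assumed to live at `inventory/mycluster/inventory.ini` (created by copying the sample, as in the usage example that follows):

```ShellSession
cat inventory/mycluster/inventory.ini
# illustrative layout: one control plane node (also running etcd) and one worker
[kube_control_plane]
node1 ansible_host=10.10.1.3 ip=10.10.1.3

[etcd:children]
kube_control_plane

[kube_node]
node2 ansible_host=10.10.1.4 ip=10.10.1.4
```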
Example inventory generator usage:
```ShellSession
cp -r inventory/sample inventory/mycluster
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
<your-favorite-editor> inventory/mycluster/inventory.ini
# Review and change parameters under ``inventory/mycluster/group_vars``
<your-favorite-editor> inventory/mycluster/group_vars/all.yml # for every node, including etcd
<your-favorite-editor> inventory/mycluster/group_vars/k8s_cluster.yml # for every node in the cluster (not etcd when it's separate)
<your-favorite-editor> inventory/mycluster/group_vars/kube_control_plane.yml # for the control plane
<your-favorite-editor> inventory/mycluster/group_vars/kube_node.yml # for worker nodes
```
Then use `inventory/mycluster/hosts.yml` as your inventory file.
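To double-check the result, you can ask Ansible to print back the group structure it sees; `ansible-inventory --graph` is stock Ansible, and the tree below is abbreviated and illustrative:

```ShellSession
ansible-inventory -i inventory/mycluster/hosts.yml --graph
@all:
  |--@kube_control_plane:
  |  |--node1
  |--@kube_node:
  |  |--node2
```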
## Starting custom deployment
Once you have an inventory, you may want to customize deployment data vars
and start the deployment:
**IMPORTANT**: Edit my\_inventory/group\_vars/\*.yaml to override data vars:
```ShellSession


@@ -212,17 +212,15 @@ Copy ``inventory/sample`` as ``inventory/mycluster``:
cp -rfp inventory/sample inventory/mycluster
```
Update Ansible inventory file with inventory builder:
Update the sample Ansible inventory file with the IPs given by gcloud:
```ShellSession
declare -a IPS=($(gcloud compute instances list --filter="tags.items=kubernetes-the-kubespray-way" --format="value(EXTERNAL_IP)" | tr '\n' ' '))
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
gcloud compute instances list --filter="tags.items=kubernetes-the-kubespray-way"
```
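If you only need specific columns (for example, to paste addresses into the inventory), the same command takes a `--format` option; the instance names and IPs below are illustrative:

```ShellSession
gcloud compute instances list \
  --filter="tags.items=kubernetes-the-kubespray-way" \
  --format="value(name,EXTERNAL_IP)"
controller-0  203.0.113.10
controller-1  203.0.113.11
worker-0      203.0.113.20
```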
Open the generated `inventory/mycluster/hosts.yaml` file and adjust it so
that controller-0, controller-1 and controller-2 are control plane nodes and
worker-0, worker-1 and worker-2 are worker nodes. Also update the `ip` to the respective local VPC IP and
remove the `access_ip`.
Open the `inventory/mycluster/inventory.ini` file and edit it so
that controller-0, controller-1 and controller-2 are in the `kube_control_plane` group and
worker-0, worker-1 and worker-2 are in the `kube_node` group. Set the `ip` variable of each node to its local VPC IP.
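The resulting group layout might look like the following sketch (the VPC addresses are placeholders; use the ones from your own subnet):

```ShellSession
cat inventory/mycluster/inventory.ini
# connection addresses (e.g. ansible_host=<external IP>) omitted for brevity
[kube_control_plane]
controller-0 ip=10.240.0.10
controller-1 ip=10.240.0.11
controller-2 ip=10.240.0.12

[etcd:children]
kube_control_plane

[kube_node]
worker-0 ip=10.240.0.20
worker-1 ip=10.240.0.21
worker-2 ip=10.240.0.22
```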
The main configuration for the cluster is stored in
`inventory/mycluster/group_vars/k8s_cluster/k8s_cluster.yml`. In this file we
@@ -242,7 +240,7 @@ the kubernetes cluster, just change the 'false' to 'true' for
Now we will deploy the configuration:
```ShellSession
ansible-playbook -i inventory/mycluster/hosts.yaml -u $USERNAME -b -v --private-key=~/.ssh/id_rsa cluster.yml
ansible-playbook -i inventory/mycluster/ -u $USERNAME -b -v --private-key=~/.ssh/id_rsa cluster.yml
```
Ansible will now execute the playbook; this can take up to 20 minutes.
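When the run completes, Ansible prints its usual `PLAY RECAP`, in which every host should report `failed=0`. The task counts below are invented for illustration:

```ShellSession
PLAY RECAP *********************************************************************
controller-0 : ok=748  changed=147  unreachable=0  failed=0  skipped=1264  rescued=0  ignored=8
worker-0     : ok=514  changed=101  unreachable=0  failed=0  skipped=780   rescued=0  ignored=1
```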
@@ -596,7 +594,7 @@ If you want to keep the VMs and just remove the cluster state, you can simply
run another Ansible playbook:
```ShellSession
ansible-playbook -i inventory/mycluster/hosts.yaml -u $USERNAME -b -v --private-key=~/.ssh/id_rsa reset.yml
ansible-playbook -i inventory/mycluster/ -u $USERNAME -b -v --private-key=~/.ssh/id_rsa reset.yml
```
Resetting the cluster to the VMs' original state usually takes about a couple of minutes.