Clean-up references to inventory_builder in docs
@@ -52,14 +52,6 @@ repos:
  - repo: local
    hooks:
      - id: tox-inventory-builder
        name: tox-inventory-builder
        entry: bash -c "cd contrib/inventory_builder && tox"
        language: python
        pass_filenames: false
        additional_dependencies:
          - tox==4.15.0

      - id: check-readme-versions
        name: check-readme-versions
        entry: tests/scripts/check_readme_versions.sh

@@ -26,9 +26,7 @@ then run the following steps:
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster

# Update Ansible inventory file with inventory builder
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# Update Ansible inventory file with the ip of your nodes

# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml
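
Since the inventory builder step goes away, the node addresses are written into the inventory by hand. A minimal sketch of what the edited host entries in `inventory/mycluster/hosts.yaml` could look like, reusing the 10.10.1.x addresses from the removed example as placeholders:

```yaml
# Hand-written host entries (sketch); group membership such as
# kube_control_plane, kube_node and etcd is assigned further down in the file.
all:
  hosts:
    node1:
      ansible_host: 10.10.1.3   # address Ansible connects to
      ip: 10.10.1.3             # address Kubernetes components bind to
    node2:
      ansible_host: 10.10.1.4
      ip: 10.10.1.4
    node3:
      ansible_host: 10.10.1.5
      ip: 10.10.1.5
```
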
@@ -34,91 +34,23 @@ Based on the table below and the available python version for your ansible host
|-----------------|----------------|
| >= 2.16.4       | 3.10-3.12      |

## Inventory

The inventory is composed of 3 groups:
## Customize Ansible vars

* **kube_node**: list of Kubernetes nodes where the pods will run.
* **kube_control_plane**: list of servers where the Kubernetes control plane components (apiserver, scheduler, controller) will run.
* **etcd**: list of servers composing the etcd cluster. You should have at least 3 servers for failover purposes.

When _kube_node_ contains _etcd_, the etcd cluster is also schedulable for Kubernetes workloads.
If you want a standalone etcd cluster, make sure those groups do not intersect.
If you want a server to act as both control plane and node, it must be defined in
both the _kube_control_plane_ and _kube_node_ groups. If you want a standalone and
unschedulable control plane, define the server only in _kube_control_plane_ and
not in _kube_node_.

There are also two special groups:

* **calico_rr**: explained for [advanced Calico networking cases](/docs/CNI/calico.md)
* **bastion**: configure a bastion host if your nodes are not directly reachable

Lastly, **k8s_cluster** is dynamically defined as the union of **kube_node**, **kube_control_plane** and **calico_rr**.
It is used internally and for defining whole-cluster variables (`<inventory>/group_vars/k8s_cluster/*.yml`).

Below is a complete inventory example:

```ini
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_host=95.54.0.12 ip=10.3.0.1
node2 ansible_host=95.54.0.13 ip=10.3.0.2
node3 ansible_host=95.54.0.14 ip=10.3.0.3
node4 ansible_host=95.54.0.15 ip=10.3.0.4
node5 ansible_host=95.54.0.16 ip=10.3.0.5
node6 ansible_host=95.54.0.17 ip=10.3.0.6

[kube_control_plane]
node1
node2

[etcd]
node1
node2
node3

[kube_node]
node2
node3
node4
node5
node6
```
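
The same cluster can also be described in the YAML inventory format (the `hosts.yaml` layout used elsewhere in these docs). A shortened, hand-written sketch covering the first three nodes, assuming the standard Ansible YAML inventory structure:

```yaml
all:
  hosts:
    node1: {ansible_host: 95.54.0.12, ip: 10.3.0.1}
    node2: {ansible_host: 95.54.0.13, ip: 10.3.0.2}
    node3: {ansible_host: 95.54.0.14, ip: 10.3.0.3}
  children:
    kube_control_plane:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    kube_node:
      hosts:
        node2:
        node3:
    # k8s_cluster is declared as the union of the groups above
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
```
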
## Group vars and overriding variables precedence

The group variables that control the main deployment options are located in the directory ``inventory/sample/group_vars``.
Optional variables are located in `inventory/sample/group_vars/all.yml`.
Mandatory variables that are common to at least one role (or a node group) can be found in
`inventory/sample/group_vars/k8s_cluster.yml`.
There are also role vars for the docker, kubernetes preinstall and control plane roles.
According to the [ansible docs](https://docs.ansible.com/ansible/latest/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
those cannot be overridden from the group vars. In order to override them, use
the `-e` runtime flag (the simplest way) or one of the other layers described in the docs.
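
For example, cluster-wide options can be overridden by editing the group_vars copied into your own inventory. A small sketch, assuming commonly used variables such as `kube_network_plugin` and `cluster_name` (check the sample group_vars files for the authoritative names and defaults):

```yaml
# inventory/mycluster/group_vars/k8s_cluster.yml (sketch)
kube_network_plugin: calico     # which CNI plugin to deploy
cluster_name: cluster.local     # internal DNS domain of the cluster
```
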
Kubespray uses only a few layers to override things (or expects them to
be overridden for roles):
Kubespray expects users to use one of the following variable sources for settings and customization:

| Layer                                   | Comment                                                                       |
|-----------------------------------------|-------------------------------------------------------------------------------|
| **role defaults**                       | provides best UX to override things for Kubespray deployments                |
| inventory vars                          | Unused                                                                        |
| **inventory group_vars**                | Expects users to use ``all.yml``, ``k8s_cluster.yml`` etc. to override things |
| inventory host_vars                     | Unused                                                                        |
| playbook group_vars                     | Unused                                                                        |
| playbook host_vars                      | Unused                                                                        |
| **host facts**                          | Kubespray overrides for internal roles' logic, like state flags              |
| play vars                               | Unused                                                                        |
| play vars_prompt                        | Unused                                                                        |
| play vars_files                         | Unused                                                                        |
| registered vars                         | Unused                                                                        |
| set_facts                               | Kubespray overrides those in some places                                     |
| **role and include vars**               | Provides bad UX to override things! Use extra vars to enforce                |
| block vars (only for tasks in block)    | Kubespray overrides for internal roles' logic                                |
| task vars (only for the task)           | Unused for roles, only for helper scripts                                    |
| inventory vars                          |                                                                               |
| - **inventory group_vars**              | most used                                                                     |
| - inventory host_vars                   | host-specific vars overrides; group_vars is usually more practical           |
| **extra vars** (always win precedence)  | override with ``ansible-playbook -e @foo.yml``                               |

> [!IMPORTANT]
> Extra vars are best used to override Kubespray internal variables, for instance, those defined under `roles/*/vars/`.
> Those vars are usually **not expected** (by Kubespray developers) to be modified by end users, and are not part of the Kubespray
> interface. Thus they can change, disappear, or break things unexpectedly.
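
As a concrete illustration of the extra-vars layer from the table above, overrides can be collected in a file and passed with `ansible-playbook -e @foo.yml`. The file name and the variable below are arbitrary examples, not part of the Kubespray interface:

```yaml
# foo.yml -- applied with: ansible-playbook -i inventory/mycluster/ -e @foo.yml cluster.yml
# Extra vars take precedence over every other layer, including role and include vars.
some_internal_role_var: overridden_value   # hypothetical variable name
```
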
## Ansible tags

The following tags are defined in playbooks:
@@ -6,29 +6,25 @@ Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. There is
an example inventory located
[here](https://github.com/kubernetes-sigs/kubespray/blob/master/inventory/sample/inventory.ini).

You can use an
[inventory generator](https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/inventory_builder/inventory.py)
to create or modify an Ansible inventory. Currently, it is limited in
functionality and is only used for configuring a basic Kubespray cluster inventory, but it does
support creating inventory files for large clusters as well. It can
separate the etcd and Kubernetes control plane roles from the node role when the cluster size exceeds a
certain threshold. Run `python3 contrib/inventory_builder/inventory.py help` for more information.
## Building your own inventory

Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. See the
[example inventory](/inventory/sample/inventory.ini)
and [Ansible documentation on building your inventory](https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html),
and [details on the inventory structure expected by Kubespray](/docs/ansible/inventory.md).

Example inventory generator usage:

```ShellSession
cp -r inventory/sample inventory/mycluster
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

<your-favorite-editor> inventory/mycluster/inventory.ini

# Review and change parameters under ``inventory/mycluster/group_vars``
<your-favorite-editor> inventory/mycluster/group_vars/all.yml # for every node, including etcd
<your-favorite-editor> inventory/mycluster/group_vars/k8s_cluster.yml # for every node in the cluster (not etcd when it is separate)
<your-favorite-editor> inventory/mycluster/group_vars/kube_control_plane.yml # for the control plane
<your-favorite-editor> inventory/mycluster/group_vars/kube_node.yml # for worker nodes
```

Then use `inventory/mycluster/hosts.yml` as the inventory file.

## Starting custom deployment

Once you have an inventory, you may want to customize deployment data vars
and start the deployment:

**IMPORTANT**: Edit my\_inventory/group\_vars/\*.yaml to override data vars:

```ShellSession
@@ -212,17 +212,15 @@ Copy ``inventory/sample`` as ``inventory/mycluster``:
cp -rfp inventory/sample inventory/mycluster
```

Update Ansible inventory file with inventory builder:
Update the sample Ansible inventory file with the IPs given by gcloud:

```ShellSession
declare -a IPS=($(gcloud compute instances list --filter="tags.items=kubernetes-the-kubespray-way" --format="value(EXTERNAL_IP)" | tr '\n' ' '))
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
gcloud compute instances list --filter="tags.items=kubernetes-the-kubespray-way"
```

Open the generated `inventory/mycluster/hosts.yaml` file and adjust it so
that controller-0, controller-1 and controller-2 are control plane nodes and
worker-0, worker-1 and worker-2 are worker nodes. Also update the `ip` to the respective local VPC IP and
remove the `access_ip`.
Open the `inventory/mycluster/inventory.ini` file and edit it so
that controller-0, controller-1 and controller-2 are in the `kube_control_plane` group and
worker-0, worker-1 and worker-2 are in the `kube_node` group. Set the `ip` of each node to its respective local VPC IP.
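
For orientation, the relevant grouping in the edited inventory might look like the sketch below (shown here in the YAML inventory format; the addresses are placeholders to be replaced with the values reported by gcloud):

```yaml
all:
  hosts:
    controller-0: {ansible_host: <controller-0-external-ip>, ip: <controller-0-vpc-ip>}
    worker-0: {ansible_host: <worker-0-external-ip>, ip: <worker-0-vpc-ip>}
    # controller-1/2 and worker-1/2 follow the same pattern
  children:
    kube_control_plane:
      hosts:
        controller-0:
        controller-1:
        controller-2:
    kube_node:
      hosts:
        worker-0:
        worker-1:
        worker-2:
```
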
The main configuration for the cluster is stored in
`inventory/mycluster/group_vars/k8s_cluster/k8s_cluster.yml`. In this file we
@@ -242,7 +240,7 @@ the kubernetes cluster, just change the 'false' to 'true' for
Now we will deploy the configuration:

```ShellSession
ansible-playbook -i inventory/mycluster/hosts.yaml -u $USERNAME -b -v --private-key=~/.ssh/id_rsa cluster.yml
ansible-playbook -i inventory/mycluster/ -u $USERNAME -b -v --private-key=~/.ssh/id_rsa cluster.yml
```

Ansible will now execute the playbook; this can take up to 20 minutes.
@@ -596,7 +594,7 @@ If you want to keep the VMs and just remove the cluster state, you can simply
run another Ansible playbook:

```ShellSession
ansible-playbook -i inventory/mycluster/hosts.yaml -u $USERNAME -b -v --private-key=~/.ssh/id_rsa reset.yml
ansible-playbook -i inventory/mycluster/ -u $USERNAME -b -v --private-key=~/.ssh/id_rsa reset.yml
```

Resetting the cluster to the VMs original state usually takes about a couple