modify doc structure and update existing doc-links as preparation for new doc generation script

Payback159
2024-05-15 19:32:51 +02:00
parent 0b464b5239
commit 4dbfd42f1d
82 changed files with 70 additions and 70 deletions

View File

@@ -382,7 +382,7 @@ To clean up any ipvs leftovers:
Calico node, typha and kube-controllers need to be able to talk to the kubernetes API. Please reference the [Enabling eBPF Calico Docs](https://docs.projectcalico.org/maintenance/ebpf/enabling-bpf) for guidelines on how to do this.
-Kubespray sets up the `kubernetes-services-endpoint` configmap based on the contents of the `loadbalancer_apiserver` inventory variable documented in [HA Mode](/docs/ha-mode.md).
+Kubespray sets up the `kubernetes-services-endpoint` configmap based on the contents of the `loadbalancer_apiserver` inventory variable documented in [HA Mode](/docs/operations/ha-mode.md).
If no external loadbalancer is used, Calico eBPF can also use the localhost loadbalancer option. This works only if the localhost apiserver loadbalancer and the kube-apiserver use the same port. In this case, Calico Automatic Host Endpoints need to be enabled to allow services like `coredns` and `metrics-server` to communicate with the kubernetes host endpoint. See [this blog post](https://www.projectcalico.org/securing-kubernetes-nodes-with-calico-automatic-host-endpoints/) on enabling automatic host endpoints.
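For orientation, the rendered ConfigMap has roughly this shape. This is a sketch only: the host and port values stand in for whatever `loadbalancer_apiserver` is set to in the inventory, and the namespace depends on how Calico was installed.

```yml
# Sketch of the kubernetes-services-endpoint ConfigMap (placeholder values).
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubernetes-services-endpoint
  namespace: kube-system   # assumed; operator-based installs use tigera-operator
data:
  KUBERNETES_SERVICE_HOST: "192.168.0.10"   # loadbalancer_apiserver.address
  KUBERNETES_SERVICE_PORT: "6443"           # loadbalancer_apiserver.port
```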

View File

@@ -99,7 +99,7 @@ cilium_operator_extra_volume_mounts:
## Choose Cilium version
```yml
-cilium_version: v1.15.4
+cilium_version: v1.12.1
```
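Where this override lives is not shown in the hunk; a common placement, assuming the sample-inventory layout, is sketched below.

```yml
# Assumed path: inventory/mycluster/group_vars/k8s_cluster/k8s-net-cilium.yml
cilium_version: "v1.x.y"   # pick a release supported by your Kubespray version
```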
## Add variable to config

View File

@@ -59,7 +59,7 @@ not _kube_node_.
There are also two special groups:
-* **calico_rr** : explained for [advanced Calico networking cases](/docs/calico.md)
+* **calico_rr** : explained for [advanced Calico networking cases](/docs/CNI/calico.md)
* **bastion** : configure a bastion host if your nodes are not directly reachable
Below is a complete inventory example:
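The example itself is truncated by the diff context; a minimal sketch of its shape (hostnames and IPs are placeholders) is:

```yml
# Minimal hosts.yml sketch using the groups described above.
all:
  hosts:
    node1:
      ansible_host: 203.0.113.1
    node2:
      ansible_host: 203.0.113.2
    node3:
      ansible_host: 203.0.113.3
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node2:
        node3:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
```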
@@ -285,7 +285,7 @@ For more information about Ansible and bastion hosts, read
## Mitogen
-Mitogen support is deprecated, please see [mitogen related docs](/docs/mitogen.md) for usage and reasons for deprecation.
+Mitogen support is deprecated, please see [mitogen related docs](/docs/advanced/mitogen.md) for usage and reasons for deprecation.
## Beyond ansible 2.9

View File

@@ -46,11 +46,11 @@ Some variables of note include:
* *loadbalancer_apiserver* - If defined, all hosts will connect to this
address instead of localhost (for kube_control_plane hosts) or
kube_control_plane[0] (for kube_node hosts). See more details in the
-[HA guide](/docs/ha-mode.md).
+[HA guide](/docs/operations/ha-mode.md).
* *loadbalancer_apiserver_localhost* - makes all hosts connect to
the apiserver's internally load-balanced endpoint. Mutually exclusive
with `loadbalancer_apiserver`. See more details in the
-[HA guide](/docs/ha-mode.md).
+[HA guide](/docs/operations/ha-mode.md).
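A sketch of the external variant in group vars follows; the address and port are placeholders, not part of this diff.

```yml
# Sketch: declaring an external apiserver loadbalancer (placeholder values).
loadbalancer_apiserver:
  address: 192.168.0.10
  port: 8383
# The internal alternative, mutually exclusive with the block above:
# loadbalancer_apiserver_localhost: true
```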
## Cluster variables

View File

@@ -54,7 +54,7 @@ cd kubespray
## Install Ansible
-Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
+Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
## Cluster Definition

View File

@@ -54,7 +54,7 @@ external_vsphere_kubernetes_cluster_id: "kubernetes-cluster-id"
vsphere_csi_enabled: true
```
-For a more fine-grained CSI setup, refer to the [vsphere-csi](/docs/vsphere-csi.md) documentation.
+For a more fine-grained CSI setup, refer to the [vsphere-csi](/docs/CSI/vsphere-csi.md) documentation.
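For context, this excerpt sits among the other `external_vsphere_*` variables. The sketch below uses placeholder values; the authoritative list is in the linked vsphere-csi doc.

```yml
# Sketch: surrounding vSphere CSI variables (all values are placeholders).
external_vsphere_vcenter_ip: "vcenter.example.com"
external_vsphere_vcenter_port: "443"
external_vsphere_user: "administrator@vsphere.local"
external_vsphere_password: "changeme"
external_vsphere_datacenter: "DATACENTER_name"
external_vsphere_kubernetes_cluster_id: "kubernetes-cluster-id"
vsphere_csi_enabled: true
```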
### Deployment

View File

@@ -25,7 +25,7 @@ Note, the canal network plugin deploys flannel as well plus calico policy contro
## Test cases
-The [CI Matrix](/docs/ci.md) displays OS, Network Plugin and Container Manager tested.
+The [CI Matrix](/docs/developers/ci.md) displays OS, Network Plugin and Container Manager tested.
All tests are broken down into 3 "stages" ("Stage" means a build step of the build pipeline) as follows:

View File

@@ -52,7 +52,7 @@ speed, the variable 'download_run_once' is set. This will make kubespray
download all files and containers just once and then redistribute them to
the other nodes; as a bonus, it also caches all downloads locally and re-uses
them on the next provisioning run. For more information on download settings
-see [download documentation](/docs/downloads.md).
+see [download documentation](/docs/advanced/downloads.md).
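A sketch of the settings involved (illustrative values; defaults and the full set live in the linked downloads doc):

```yml
# Sketch: run downloads once on a delegate and push to the other nodes.
download_run_once: true
# Optionally fetch on the Ansible control host instead of a cluster node:
download_localhost: false
# Keep the local cache between provisioning runs:
download_keep_remote_cache: true
```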
## Example use of Vagrant

View File

@@ -36,7 +36,7 @@ ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml -b -v \
--private-key=~/.ssh/private_key
```
-See more details in the [ansible guide](/docs/ansible.md).
+See more details in the [ansible guide](/docs/ansible/ansible.md).
### Adding nodes
@@ -81,7 +81,7 @@ kube-apiserver via port 8080. A kubeconfig file is not necessary in this case,
because kubectl will use <http://localhost:8080> to connect. The kubeconfig files
generated will point to localhost (on kube_control_planes) and kube_node hosts will
connect either to a localhost nginx proxy or to a loadbalancer if configured.
-More details on this process are in the [HA guide](/docs/ha-mode.md).
+More details on this process are in the [HA guide](/docs/operations/ha-mode.md).
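As a sketch, remote access amounts to repointing the `server:` line of a copied kubeconfig at a control-plane address; the IP below is a placeholder, since the generated file uses localhost.

```yml
# Sketch: kubeconfig fragment repointed at a control-plane node.
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://203.0.113.10:6443   # placeholder control-plane address
  name: cluster.local
```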
Kubespray permits connecting to the cluster remotely on any IP of any
kube_control_plane host on port 6443 by default. However, this requires
@@ -140,5 +140,5 @@ If desired, copy admin.conf to ~/.kube/config.
## Setting up your first cluster
-[Setting up your first cluster](/docs/setting-up-your-first-cluster.md) is an
+[Setting up your first cluster](/docs/getting_started/setting-up-your-first-cluster.md) is an
applied step-by-step guide for setting up your first cluster with Kubespray.

View File

@@ -9,7 +9,7 @@ For a large scaled deployments, consider the following configuration changes:
* Override containers' `foo_image_repo` vars to point to an intranet registry.
* Override the ``download_run_once: true`` and/or ``download_localhost: true``.
-See [Downloading binaries and containers](/docs/downloads.md) for details.
+See [Downloading binaries and containers](/docs/advanced/downloads.md) for details.
* Adjust the `retry_stagger` global var as appropriate. It should provide sane
load on a delegate (the first K8s control plane node) then retrying failed
@@ -32,7 +32,7 @@ For a large scaled deployments, consider the following configuration changes:
``kube_controller_node_monitor_period``,
``kube_apiserver_pod_eviction_not_ready_timeout_seconds`` &
``kube_apiserver_pod_eviction_unreachable_timeout_seconds`` for better Kubernetes reliability.
-Check out [Kubernetes Reliability](/docs/kubernetes-reliability.md)
+Check out [Kubernetes Reliability](/docs/advanced/kubernetes-reliability.md)
* Tune network prefix sizes. Those are ``kube_network_node_prefix``,
``kube_service_addresses`` and ``kube_pods_subnet``.
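For orientation, the usual defaults these tunables start from look like this (a sketch; actual defaults may vary by release, and the values are not large-cluster recommendations):

```yml
# Sketch: typical default values for the tunables above.
retry_stagger: 5
kube_network_node_prefix: 24          # a /24 of pod addresses per node
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18
```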
@@ -41,7 +41,7 @@ For a large scaled deployments, consider the following configuration changes:
from host/network interruption much quicker with calico_rr.
* Check out the
-[Inventory](/docs/getting-started.md#building-your-own-inventory)
+[Inventory](/docs/getting_started/getting-started.md#building-your-own-inventory)
section of the Getting started guide for tips on creating a large scale
Ansible inventory.