Docs: migrate to cloud_controllers
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
docs/cloud_controllers/openstack.md (new file, 133 lines)
# OpenStack

## Known compatible public clouds

Kubespray has been tested on a number of OpenStack public clouds, including (in alphabetical order):

- [Auro](https://auro.io/)
- [Betacloud](https://www.betacloud.io/)
- [CityCloud](https://www.citycloud.com/)
- [DreamHost](https://www.dreamhost.com/cloud/computing/)
- [ELASTX](https://elastx.se/)
- [EnterCloudSuite](https://www.entercloudsuite.com/)
- [FugaCloud](https://fuga.cloud/)
- [Infomaniak](https://infomaniak.com)
- [Open Telekom Cloud](https://cloud.telekom.de/): requires setting the variable `wait_for_floatingip = "true"` in your cluster.tfvars
- [OVHcloud](https://www.ovhcloud.com/)
- [Rackspace](https://www.rackspace.com/)
- [Ultimum](https://ultimum.io/)
- [VexxHost](https://vexxhost.com/)
- [Zetta](https://www.zetta.io/)
## The OpenStack cloud provider

In Kubespray, the OpenStack cloud provider is configured to use Octavia by default.

- Enable the external OpenStack cloud provider in `group_vars/all/all.yml`:

```yaml
cloud_provider: external
external_cloud_provider: openstack
```

- Enable Cinder CSI in `group_vars/all/openstack.yml`:

```yaml
cinder_csi_enabled: true
```
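Once the driver is deployed, volumes can be provisioned through a StorageClass that references the Cinder CSI provisioner. A minimal sketch, where the class name is a placeholder (check whether your deployment already ships a suitable StorageClass before adding one):

```yaml
# Hypothetical StorageClass consuming the Cinder CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-csi
provisioner: cinder.csi.openstack.org
```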
- Enable topology support (optional). If your OpenStack provider uses custom availability zone names, you can override the default "nova" zone by setting the variable `cinder_topology_zones` (see the sketch below):

```yaml
cinder_topology: true
```
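For example, a minimal sketch overriding the zone list, assuming `cinder_topology_zones` accepts a list of zone names (the names shown are placeholders):

```yaml
cinder_topology: true
# Placeholder zone names -- replace with your provider's actual availability zones
cinder_topology_zones:
  - zone-a
  - zone-b
```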
- Enabling `cinder_csi_ignore_volume_az: true` ignores the volume AZ and schedules on any of the available node AZs.

```yaml
cinder_csi_ignore_volume_az: true
```

- If you are using OpenStack load balancer(s), replace `openstack_lbaas_subnet_id` with the new `external_openstack_lbaas_subnet_id` (see the sketch below). **Note**: the new cloud provider uses Octavia instead of Neutron LBaaS by default!
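A minimal before/after sketch of that rename, with a placeholder subnet ID:

```yaml
# Old variable used by the in-tree provider (remove it):
# openstack_lbaas_subnet_id: "<your-subnet-id>"
# New variable used by the external provider:
external_openstack_lbaas_subnet_id: "<your-subnet-id>"
```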
- If your OpenStack VMs have multiple NICs (see [kubernetes/cloud-provider-openstack#407](https://github.com/kubernetes/cloud-provider-openstack/issues/407) and [#6083](https://github.com/kubernetes-sigs/kubespray/issues/6083) for details), you should override the default OpenStack networking configuration:

```yaml
external_openstack_network_ipv6_disabled: false
external_openstack_network_internal_networks: []
external_openstack_network_public_networks: []
```

- You can override the default OpenStack metadata configuration (see [#6338](https://github.com/kubernetes-sigs/kubespray/issues/6338) for details):

```yaml
external_openstack_metadata_search_order: "configDrive,metadataService"
```

- Available variables for configuring LBaaS:

```yaml
external_openstack_lbaas_enabled: true
external_openstack_lbaas_floating_network_id: "Neutron network ID to get floating IP from"
external_openstack_lbaas_floating_subnet_id: "Neutron subnet ID to get floating IP from"
external_openstack_lbaas_method: ROUND_ROBIN
external_openstack_lbaas_provider: amphora
external_openstack_lbaas_subnet_id: "Neutron subnet ID to create LBaaS VIP"
external_openstack_lbaas_network_id: "Neutron network ID to create LBaaS VIP"
external_openstack_lbaas_manage_security_groups: false
external_openstack_lbaas_create_monitor: false
external_openstack_lbaas_monitor_delay: 5
external_openstack_lbaas_monitor_max_retries: 1
external_openstack_lbaas_monitor_timeout: 3
external_openstack_lbaas_internal_lb: false
```

- Run `source path/to/your/openstack-rc` to read your OpenStack credentials, such as `OS_AUTH_URL`, `OS_USERNAME`, and `OS_PASSWORD`. These variables are used by the external cloud provider to access OpenStack.

- Run the `cluster.yml` playbook (see the sketch after this list)
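A minimal deployment sketch, assuming a placeholder inventory path of `inventory/mycluster/hosts.ini`:

```bash
# Load OpenStack credentials (OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, ...)
source path/to/your/openstack-rc
# Run the Kubespray playbook against your inventory
ansible-playbook -i inventory/mycluster/hosts.ini -b -v cluster.yml
```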
## Additional step needed when using calico or kube-router

Being L3 CNIs, Calico and kube-router do not encapsulate packets with the hosts' IP addresses; instead, packets are routed directly with the pods' IP addresses.

To prevent spoofing, OpenStack filters and drops all packets from IPs it does not know.

To make L3 CNIs work on OpenStack, you need to tell OpenStack to accept the pods' packets by allowing the networks they use.

First you will need the IDs of the OpenStack instances that will run Kubernetes:

```bash
openstack server list --project YOUR_PROJECT
+--------------------------------------+--------+----------------------------------+--------+-------------+
| ID                                   | Name   | Tenant ID                        | Status | Power State |
+--------------------------------------+--------+----------------------------------+--------+-------------+
| e1f48aad-df96-4bce-bf61-62ae12bf3f95 | k8s-1  | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running     |
| 725cd548-6ea3-426b-baaa-e7306d3c8052 | k8s-2  | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running     |
```

Then you can use the instance IDs to find the connected [Neutron](https://wiki.openstack.org/wiki/Neutron) ports (though they are now configured through the `openstack` CLI):

```bash
openstack port list -c id -c device_id --project YOUR_PROJECT
+--------------------------------------+--------------------------------------+
| id                                   | device_id                            |
+--------------------------------------+--------------------------------------+
| 5662a4e0-e646-47f0-bf88-d80fbd2d99ef | e1f48aad-df96-4bce-bf61-62ae12bf3f95 |
| e5ae2045-a1e1-4e99-9aac-4353889449a7 | 725cd548-6ea3-426b-baaa-e7306d3c8052 |
```

Given the port IDs on the left, you can set the two `allowed-address`(es) in OpenStack. Note that you have to allow both `kube_service_addresses` (default `10.233.0.0/18`) and `kube_pods_subnet` (default `10.233.64.0/18`).

```bash
# allow kube_service_addresses and kube_pods_subnet network
openstack port set 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
openstack port set e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
```
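To double-check the result, you can inspect the allowed address pairs on one of the ports (a hedged sketch; output formatting depends on your client version):

```bash
openstack port show 5662a4e0-e646-47f0-bf88-d80fbd2d99ef -c allowed_address_pairs
```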
If all the VMs in the tenant belong to the Kubespray deployment, you can apply the change above to every port in one sweep:

```bash
openstack port list --device-owner=compute:nova -c ID -f value | xargs -tI@ openstack port set @ --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
```

Now you can finally run the playbook.
docs/cloud_controllers/vsphere.md (new file, 134 lines)
# vSphere

Kubespray can be deployed with vSphere as the cloud provider. This feature supports:

- Volumes
- Persistent Volumes
- Storage Classes and provisioning of volumes
- vSphere Storage Policy Based Management for Containers orchestrated by Kubernetes

## Out-of-tree vSphere cloud provider

### Prerequisites

You first need to configure your vSphere environment by following the [official documentation](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/kubernetes-on-vsphere-with-kubeadm.md#prerequisites).

After this step you should have:

- vSphere upgraded to 6.7 U3 or later
- VM hardware upgraded to version 15 or higher
- UUID activated for each VM where Kubernetes will be deployed (see the sketch after this list)
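A minimal sketch of activating the disk UUID on one VM with the `govc` CLI; the VM inventory path is a placeholder, and the same setting can also be applied through the vSphere UI:

```ShellSession
govc vm.change -vm "/DC1/vm/k8s-node-1" -e disk.EnableUUID=TRUE
```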
### Kubespray configuration

First, in `inventory/sample/group_vars/all/all.yml`, set `cloud_provider` to `external` and `external_cloud_provider` to `vsphere`.

```yml
cloud_provider: "external"
external_cloud_provider: "vsphere"
```

Then, in `inventory/sample/group_vars/all/vsphere.yml`, declare your vCenter credentials and enable the vSphere CSI following the description below.

| Variable                               | Required | Type    | Choices         | Default                 | Comment                                                                                                              |
|----------------------------------------|----------|---------|-----------------|-------------------------|----------------------------------------------------------------------------------------------------------------------|
| external_vsphere_vcenter_ip            | TRUE     | string  |                 |                         | IP/URL of the vCenter                                                                                                |
| external_vsphere_vcenter_port          | TRUE     | string  |                 | "443"                   | Port of the vCenter API                                                                                              |
| external_vsphere_insecure              | TRUE     | string  | "true", "false" | "true"                  | Set to "true" if the host above uses a self-signed cert                                                              |
| external_vsphere_user                  | TRUE     | string  |                 |                         | User name for vCenter with required privileges (can also be specified with the `VSPHERE_USER` environment variable) |
| external_vsphere_password              | TRUE     | string  |                 |                         | Password for vCenter (can also be specified with the `VSPHERE_PASSWORD` environment variable)                        |
| external_vsphere_datacenter            | TRUE     | string  |                 |                         | Datacenter name to use                                                                                               |
| external_vsphere_kubernetes_cluster_id | TRUE     | string  |                 | "kubernetes-cluster-id" | Kubernetes cluster ID to use                                                                                         |
| vsphere_csi_enabled                    | TRUE     | boolean |                 | false                   | Enable vSphere CSI                                                                                                   |

Example configuration:

```yml
external_vsphere_vcenter_ip: "myvcenter.domain.com"
external_vsphere_vcenter_port: "443"
external_vsphere_insecure: "true"
external_vsphere_user: "administrator@vsphere.local"
external_vsphere_password: "K8s_admin"
external_vsphere_datacenter: "DATACENTER_name"
external_vsphere_kubernetes_cluster_id: "kubernetes-cluster-id"
vsphere_csi_enabled: true
```
For a more fine-grained CSI setup, refer to the [vsphere-csi](/docs/CSI/vsphere-csi.md) documentation.

### Deployment

Once the configuration is set, you can execute the playbook again to apply the new configuration:

```ShellSession
cd kubespray
ansible-playbook -i inventory/sample/hosts.ini -b -v cluster.yml
```
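After the run, you can check that the vSphere cloud controller and CSI pods came up (a hedged check; exact pod names depend on the release):

```ShellSession
kubectl -n kube-system get pods | grep -i vsphere
```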
You'll find some useful examples [here](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/kubernetes-on-vsphere-with-kubeadm.md#sample-manifests-to-test-csi-driver-functionality) to test your configuration.

## In-tree vSphere cloud provider ([deprecated](https://cloud-provider-vsphere.sigs.k8s.io/concepts/in_tree_vs_out_of_tree.html))

### Prerequisites (deprecated)

You first need to configure your vSphere environment by following the [official documentation](https://kubernetes.io/docs/getting-started-guides/vsphere/#vsphere-cloud-provider).

After this step you should have:

- UUID activated for each VM where Kubernetes will be deployed
- A vSphere account with required privileges

If you intend to leverage the [zone and region node labeling](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domain-beta-kubernetes-io-region), create a tag category for both the zone and region in vCenter. The tags can then be applied at the host, cluster, datacenter, or folder level, and the cloud provider will walk the hierarchy to extract and apply the labels to the Kubernetes nodes.
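As an illustration, a hedged sketch of creating and attaching such tags with the `govc` CLI; category, tag, and object names are placeholders:

```ShellSession
# Create tag categories for region and zone
govc tags.category.create k8s-region
govc tags.category.create k8s-zone
# Create the tags themselves
govc tags.create -c k8s-region region-a
govc tags.create -c k8s-zone zone-1
# Attach them to the datacenter and a cluster
govc tags.attach region-a /DC1
govc tags.attach zone-1 /DC1/host/Cluster-1
```

The category names then go into `vsphere_zone_category` and `vsphere_region_category` (see the table below).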
### Kubespray configuration (deprecated)

First, set `cloud_provider` to `vsphere` in `inventory/sample/group_vars/all.yml`.

```yml
cloud_provider: vsphere
```

Then, in the same file, declare your vCenter credentials following the description below.

| Variable                      | Required | Type    | Choices                    | Default | Comment                                                                                                                                                                                                                                                |
|-------------------------------|----------|---------|----------------------------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| vsphere_vcenter_ip            | TRUE     | string  |                            |         | IP/URL of the vCenter                                                                                                                                                                                                                                  |
| vsphere_vcenter_port          | TRUE     | integer |                            |         | Port of the vCenter API. Commonly 443                                                                                                                                                                                                                  |
| vsphere_insecure              | TRUE     | integer | 1, 0                       |         | Set to 1 if the host above uses a self-signed cert                                                                                                                                                                                                     |
| vsphere_user                  | TRUE     | string  |                            |         | User name for vCenter with required privileges                                                                                                                                                                                                         |
| vsphere_password              | TRUE     | string  |                            |         | Password for vCenter                                                                                                                                                                                                                                   |
| vsphere_datacenter            | TRUE     | string  |                            |         | Datacenter name to use                                                                                                                                                                                                                                 |
| vsphere_datastore             | TRUE     | string  |                            |         | Datastore name to use                                                                                                                                                                                                                                  |
| vsphere_working_dir           | TRUE     | string  |                            |         | Working directory from the "VMs and Templates" view in vCenter where the VMs are placed                                                                                                                                                               |
| vsphere_scsi_controller_type  | TRUE     | string  | buslogic, pvscsi, parallel | pvscsi  | SCSI controller name. Commonly "pvscsi"                                                                                                                                                                                                                |
| vsphere_vm_uuid               | FALSE    | string  |                            |         | VM instance UUID of the virtual machine that hosts the K8s master. Can be retrieved from the instanceUuid property in VmConfigInfo, as vc.uuid in the VMX file, or in `/sys/class/dmi/id/product_serial` (optional, only used for Kubernetes <= 1.9.2) |
| vsphere_public_network        | FALSE    | string  |                            | Blank   | Name of the network the VMs are joined to                                                                                                                                                                                                              |
| vsphere_resource_pool         | FALSE    | string  |                            | Blank   | Name of the resource pool where the VMs are located (optional, only used for Kubernetes >= 1.9.2)                                                                                                                                                     |
| vsphere_zone_category         | FALSE    | string  |                            |         | Name of the tag category used to set the `failure-domain.beta.kubernetes.io/zone` label on nodes (optional, only used for Kubernetes >= 1.12.0)                                                                                                       |
| vsphere_region_category       | FALSE    | string  |                            |         | Name of the tag category used to set the `failure-domain.beta.kubernetes.io/region` label on nodes (optional, only used for Kubernetes >= 1.12.0)                                                                                                     |

Example configuration:

```yml
vsphere_vcenter_ip: "myvcenter.domain.com"
vsphere_vcenter_port: 443
vsphere_insecure: 1
vsphere_user: "k8s@vsphere.local"
vsphere_password: "K8s_admin"
vsphere_datacenter: "DATACENTER_name"
vsphere_datastore: "DATASTORE_name"
vsphere_working_dir: "Docker_hosts"
vsphere_scsi_controller_type: "pvscsi"
vsphere_resource_pool: "K8s-Pool"
```

### Deployment (deprecated)

Once the configuration is set, you can execute the playbook again to apply the new configuration:

```ShellSession
cd kubespray
ansible-playbook -i inventory/sample/hosts.ini -b -v cluster.yml
```

You'll find some useful examples [here](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere) to test your configuration.