modify doc structure and update existing doc-links as preparation for new doc generation script

Payback159
2024-05-15 19:32:51 +02:00
parent 0b464b5239
commit 4dbfd42f1d
82 changed files with 70 additions and 70 deletions


@@ -0,0 +1,93 @@
# AWS
To deploy Kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`. Refer to the [Kubespray Configuration](#kubespray-configuration) for customizing the provider.
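For example:
```yaml
# group_vars/all.yml
cloud_provider: 'aws'
```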
Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/aws_iam/). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for etcd do not need a role.
You will also need to tag the resources in your VPC so that the AWS provider can use them. Tag the subnets, route tables, and all instances that Kubernetes will run on with the key `kubernetes.io/cluster/$cluster_name` (`$cluster_name` must be a unique identifier for the cluster). Tag the subnets that must be targeted by external ELBs with the key `kubernetes.io/role/elb` and internal ELBs with the key `kubernetes.io/role/internal-elb`.
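For illustration, this tagging can be done with the AWS CLI; the resource IDs and cluster name below are placeholders and the tag values follow common conventions:
```ShellSession
# Tag a subnet, a route table and an instance for the cluster (placeholder IDs and name)
aws ec2 create-tags --resources subnet-0123456789abcdef0 rtb-0123456789abcdef0 i-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=owned
# Mark a subnet as a target for external ELBs
aws ec2 create-tags --resources subnet-0123456789abcdef0 --tags Key=kubernetes.io/role/elb,Value=1
```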
Make sure your VPC has both DNS Hostnames support and Private DNS enabled.
The next step is to make sure the hostnames in your `inventory` file are identical to your internal hostnames in AWS. This may look something like `ip-111-222-333-444.us-west-2.compute.internal`. You can then specify how Ansible connects to these instances with `ansible_ssh_host` and `ansible_ssh_user`.
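A static inventory entry might then look like this (hostname taken from the example above; the SSH user is a placeholder):
```ini
[kube_control_plane]
ip-111-222-333-444.us-west-2.compute.internal ansible_ssh_host=111.222.333.444 ansible_ssh_user=ubuntu
```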
You can now create your cluster!
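For example (inventory path and remote user are placeholders):
```ShellSession
ansible-playbook -i inventory/mycluster/hosts.ini -u ubuntu --become cluster.yml
```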
## Dynamic Inventory
There is also a dynamic inventory script for AWS that can be used if desired. However, be aware that it makes certain assumptions about how you'll create your inventory. It also does not handle all use cases and groups that we may use as part of more advanced deployments. Additions welcome.
This will produce an inventory that is passed into Ansible that looks like the following:
```json
{
"_meta": {
"hostvars": {
"ip-172-31-3-xxx.us-east-2.compute.internal": {
"ansible_ssh_host": "172.31.3.xxx"
},
"ip-172-31-8-xxx.us-east-2.compute.internal": {
"ansible_ssh_host": "172.31.8.xxx"
}
}
},
"etcd": [
"ip-172-31-3-xxx.us-east-2.compute.internal"
],
"k8s_cluster": {
"children": [
"kube_control_plane",
"kube_node"
]
},
"kube_control_plane": [
"ip-172-31-3-xxx.us-east-2.compute.internal"
],
"kube_node": [
"ip-172-31-8-xxx.us-east-2.compute.internal"
]
}
```
Guide:
- Create instances in AWS as needed.
- Either during or after creation, add tags to the instances with a key of `kubespray-role` and a value of `kube_control_plane`, `etcd`, or `kube_node`. You can also combine roles, e.g. `kube_control_plane, etcd`.
- Copy the `kubespray-aws-inventory.py` script from `kubespray/contrib/aws_inventory` to the `kubespray/inventory` directory.
- Set the following AWS credentials and info as environment variables in your terminal:
```ShellSession
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="yyyyy"
export AWS_REGION="us-east-2"
```
- We will now create our cluster, with one or two small changes to the usual invocation. First, specify `-i inventory/kubespray-aws-inventory.py` as the inventory script. Second, and only if your AWS instances are public facing, set the `VPC_VISIBILITY` variable to `public` so that public IPs and DNS names are passed into the inventory. Your cluster.yml command then looks like `VPC_VISIBILITY="public" ansible-playbook ... cluster.yml`, as shown below.
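A full invocation might look like the following (the remote user is a placeholder):
```ShellSession
VPC_VISIBILITY="public" ansible-playbook -i inventory/kubespray-aws-inventory.py -u ubuntu --become cluster.yml
```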
**Optional** Using labels and taints
To add labels to your kubernetes node, add the following tag to your instance:
- Key: `kubespray-node-labels`
- Value: `node-role.kubernetes.io/ingress=`
To add taints to your kubernetes node, add the following tag to your instance:
- Key: `kubespray-node-taints`
- Value: `node-role.kubernetes.io/ingress=:NoSchedule`
## Kubespray configuration
Declare the cloud config variables for the `aws` provider as follows. Setting these variables is optional and depends on your use case.
| Variable | Type | Comment |
|------------------------------------|--------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| aws_zone | string | Force set the AWS zone. Recommended to leave blank. |
| aws_vpc | string | The AWS VPC flag enables the possibility to run the master components on a different AWS account, on a different cloud provider, or on-premise. If this flag is set, the KubernetesClusterTag must also be provided |
| aws_subnet_id | string | SubnetID enables using a specific subnet for ELBs |
| aws_route_table_id | string | RouteTableID enables using a specific RouteTable |
| aws_role_arn | string | RoleARN is the IAM role to assume when interacting with AWS APIs |
| aws_kubernetes_cluster_tag | string | KubernetesClusterTag is the legacy cluster ID we'll use to identify our cluster resources |
| aws_kubernetes_cluster_id | string | KubernetesClusterID is the cluster ID we'll use to identify our cluster resources |
| aws_disable_security_group_ingress | bool | The AWS provider creates an inbound rule per load balancer on the node security group. However, this can run into the AWS security group rule limit of 50 if many LoadBalancers are created. This flag disables the automatic ingress creation. It requires that the user has set up a rule that allows inbound traffic on kubelet ports from the local VPC subnet (so load balancers can access it), e.g. 10.82.0.0/16 30000-32000. |
| aws_elb_security_group | string | Only in Kubelet version >= 1.7: AWS has a hard limit of 500 security groups. For large clusters, creating a security group for each ELB can cause this limit to be reached. If this is set, this security group will be used instead of creating a new security group per ELB. |
| aws_disable_strict_zone_check | bool | During the instantiation of a new AWS cloud provider, the detected region is validated against a known set of regions. In a non-standard, AWS-like environment (e.g. Eucalyptus), this check may be undesirable. Setting this to true disables the check and emits a warning that it was skipped. Please note that this is an experimental, work-in-progress feature. |
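As an illustration, a possible declaration in your group vars could look like this (all values are placeholders; set only what your environment needs):
```yaml
aws_kubernetes_cluster_id: "my-cluster"
aws_subnet_id: "subnet-0123456789abcdef0"
aws_elb_security_group: "sg-0123456789abcdef0"
aws_disable_security_group_ingress: false
```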


@@ -0,0 +1,123 @@
# Azure
To deploy Kubernetes on [Azure](https://azure.microsoft.com) uncomment the `cloud_provider` option in `group_vars/all/all.yml` and set it to `'azure'`.
All your instances are required to run in a resource group and a routing table has to be attached to the subnet your instances are in.
Not all features are supported yet though; for the current status, have a look [here](https://github.com/Azure/AKS).
## Parameters
Before creating the instances you must first set the `azure_` variables in the `group_vars/all/all.yml` file.
All values can be retrieved using the Azure CLI tool which can be downloaded here: <https://docs.microsoft.com/en-gb/cli/azure/install-azure-cli>
After installation you have to run `az login` to get access to your account.
### azure_cloud
Azure Stack has different API endpoints, depending on the Azure Stack deployment. These need to be provided to the Azure SDK.
Possible values are: `AzureChinaCloud`, `AzureGermanCloud`, `AzurePublicCloud` and `AzureUSGovernmentCloud`.
The full list of existing settings for the AzureChinaCloud, AzureGermanCloud, AzurePublicCloud and AzureUSGovernmentCloud
is available in the source code [here](https://github.com/kubernetes-sigs/cloud-provider-azure/blob/master/docs/cloud-provider-config.md)
### azure\_tenant\_id + azure\_subscription\_id
Run `az account show` to retrieve your subscription ID and tenant ID:
`azure_tenant_id` -> Tenant ID field
`azure_subscription_id` -> ID field
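For example, both values can be read in a single call (assuming a reasonably recent Azure CLI):
```ShellSession
az account show --query "{subscriptionId:id, tenantId:tenantId}" -o table
```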
### azure\_location
The region your instances are located in, can be something like `westeurope` or `westcentralus`. A full list of region names can be retrieved via `az account list-locations`
### azure\_resource\_group
The name of the resource group your instances are in, can be retrieved via `az group list`
### azure\_vmtype
The type of the VM. Supported values are `standard` or `vmss`. If the VM is a standalone `Virtual Machine`, the value is `standard`. If the VM is part of a `Virtual Machine Scale Set`, the value is `vmss`.
### azure\_vnet\_name
The name of the virtual network your instances are in, can be retrieved via `az network vnet list`
### azure\_vnet\_resource\_group
The name of the resource group that contains the vnet.
### azure\_subnet\_name
The name of the subnet your instances are in, can be retrieved via `az network vnet subnet list --resource-group RESOURCE_GROUP --vnet-name VNET_NAME`
### azure\_security\_group\_name
The name of the network security group your instances are in, can be retrieved via `az network nsg list`
### azure\_security\_group\_resource\_group
The name of the resource group that contains the network security group. Defaults to `azure_vnet_resource_group`
### azure\_route\_table\_name
The name of the route table used with your instances.
### azure\_route\_table\_resource\_group
The name of the resource group that contains the route table. Defaults to `azure_vnet_resource_group`
### azure\_aad\_client\_id + azure\_aad\_client\_secret
These will have to be generated first:
- Create an Azure AD Application with:
```ShellSession
az ad app create --display-name kubernetes --identifier-uris http://kubernetes --homepage http://example.com --password CLIENT_SECRET
```
The display name, identifier URI, homepage, and password can be chosen freely.
Note the AppId in the output.
- Create a service principal for the application with:
```ShellSession
az ad sp create --id AppId
```
Here `AppId` is the value noted from the previous command.
- Create the role assignment with:
```ShellSession
az role assignment create --role "Owner" --assignee http://kubernetes --subscription SUBSCRIPTION_ID
```
azure\_aad\_client\_id must be set to the AppId and azure\_aad\_client\_secret to your chosen secret.
### azure\_loadbalancer\_sku
SKU of the Load Balancer and Public IP. Candidate values are `basic` and `standard`.
### azure\_exclude\_master\_from\_standard\_lb
azure\_exclude\_master\_from\_standard\_lb excludes master nodes from the `standard` load balancer.
### azure\_disable\_outbound\_snat
azure\_disable\_outbound\_snat disables the outbound SNAT for public load balancer rules. It should only be set when azure\_loadbalancer\_sku is set to `standard`.
### azure\_primary\_availability\_set\_name
(Optional) The name of the availability set that should be used as the load balancer backend. If this is set, the Azure
cloudprovider will only add nodes from that availability set to the load balancer backend pool. If this is not set, and
multiple agent pools (availability sets) are used, then the cloudprovider will try to add all nodes to a single backend
pool which is forbidden. In other words, if you use multiple agent pools (availability sets), you MUST set this field.
### azure\_use\_instance\_metadata
Use instance metadata service where possible
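Putting the parameters above together, a sketch of the `azure_` block in `group_vars/all/all.yml` could look like this (all values are placeholders):
```yaml
cloud_provider: 'azure'
azure_cloud: AzurePublicCloud
azure_tenant_id: "00000000-0000-0000-0000-000000000000"
azure_subscription_id: "00000000-0000-0000-0000-000000000000"
azure_aad_client_id: "00000000-0000-0000-0000-000000000000"
azure_aad_client_secret: "CLIENT_SECRET"
azure_location: westeurope
azure_resource_group: my-kube-rg
azure_vmtype: standard
azure_vnet_name: my-vnet
azure_vnet_resource_group: my-network-rg
azure_subnet_name: my-subnet
azure_security_group_name: my-nsg
azure_route_table_name: my-route-table
azure_loadbalancer_sku: standard
azure_use_instance_metadata: true
```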
## Provisioning Azure with Resource Group Templates
You'll find Resource Group Templates and scripts to provision the required infrastructure to Azure in [*contrib/azurerm*](../contrib/azurerm/README.md)


@@ -0,0 +1,13 @@
# Cloud providers
## Provisioning
You can deploy instances in your cloud environment in several ways. Examples include Terraform, Ansible (ec2 and gce modules), and manual creation.
## Deploy kubernetes
With the ansible-playbook command:
```ShellSession
ansible-playbook -u smana -e ansible_ssh_user=admin -e cloud_provider=[aws|gce] -b --become-user=root -i inventory/single.cfg cluster.yml
```


@@ -0,0 +1,100 @@
# Equinix Metal
Kubespray provides support for bare metal deployments using [Equinix Metal](http://metal.equinix.com).
Deploying on bare metal allows Kubernetes to run at locations where an existing public or private cloud might not exist, such
as cell towers or edge-collocated installations. The deployment mechanism used by Kubespray for Equinix Metal is similar to that used for
AWS and OpenStack clouds (notably using Terraform to deploy the infrastructure). Terraform uses the Equinix Metal provider plugin
to provision and configure hosts which are then used by the Kubespray Ansible playbooks. The Ansible inventory is generated
dynamically from the Terraform state file.
## Local Host Configuration
To perform this installation, you will need a localhost to run Terraform/Ansible (laptop, VM, etc) and an account with Equinix Metal.
In this example, we are provisioning an m1.large CentOS 7 OpenStack VM as the localhost for the Kubernetes installation.
You'll need Ansible, Git, and pip.
```bash
sudo yum install epel-release
sudo yum install ansible
sudo yum install git
sudo yum install python-pip
```
## Playbook SSH Key
An SSH key is needed by Kubespray/Ansible to run the playbooks.
This key is installed into the bare metal hosts during the Terraform deployment.
You can generate a new key or use an existing one.
```bash
ssh-keygen -f ~/.ssh/id_rsa
```
## Install Terraform
Terraform is required to deploy the bare metal infrastructure. The steps below are for installing on CentOS 7.
[More terraform installation options are available.](https://learn.hashicorp.com/terraform/getting-started/install.html)
Grab the latest version of Terraform and install it.
```bash
# Determine the latest Terraform release (requires jq) and download it
TERRAFORM_VERSION=$(curl -s https://checkpoint-api.hashicorp.com/v1/check/terraform | jq -r -M '.current_version')
curl -LO "https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip"
sudo yum install unzip
sudo unzip "terraform_${TERRAFORM_VERSION}_linux_amd64.zip" -d /usr/local/bin/
```
## Download Kubespray
Pull down Kubespray and set up any required libraries.
```bash
git clone https://github.com/kubernetes-sigs/kubespray
cd kubespray
```
## Install Ansible
Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
## Cluster Definition
In this example, a new cluster called "alpha" will be created.
```bash
cp -LRp contrib/terraform/packet/sample-inventory inventory/alpha
cd inventory/alpha/
ln -s ../../contrib/terraform/packet/hosts
```
Details about the cluster, such as the name, as well as the authentication tokens and project ID
for Equinix Metal need to be defined. To find these values see [Equinix Metal API Accounts](https://metal.equinix.com/developers/docs/accounts/).
```bash
vi cluster.tfvars
```
* cluster_name = alpha
* packet_project_id = ABCDEFGHIJKLMNOPQRSTUVWXYZ123456
* public_key_path = 12345678-90AB-CDEF-GHIJ-KLMNOPQRSTUV
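For illustration, the file could be populated like this (assuming `public_key_path` points at the public half of the SSH key generated earlier; all values are placeholders):
```bash
cat > cluster.tfvars <<'EOF'
cluster_name      = "alpha"
packet_project_id = "ABCDEFGHIJKLMNOPQRSTUVWXYZ123456"
public_key_path   = "~/.ssh/id_rsa.pub"
EOF
```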
## Deploy Bare Metal Hosts
Initializing Terraform will pull down any necessary plugins/providers.
```bash
terraform init ../../contrib/terraform/packet/
```
Run Terraform to deploy the hardware.
```bash
terraform apply -var-file=cluster.tfvars ../../contrib/terraform/packet
```
## Run Kubespray Playbooks
With the bare metal infrastructure deployed, Kubespray can now install Kubernetes and set up the cluster.
```bash
ansible-playbook --become -i inventory/alpha/hosts cluster.yml
```


@@ -0,0 +1,134 @@
# OpenStack
## Known compatible public clouds
Kubespray has been tested on a number of OpenStack Public Clouds including (in alphabetical order):
- [Auro](https://auro.io/)
- [Betacloud](https://www.betacloud.io/)
- [CityCloud](https://www.citycloud.com/)
- [DreamHost](https://www.dreamhost.com/cloud/computing/)
- [ELASTX](https://elastx.se/)
- [EnterCloudSuite](https://www.entercloudsuite.com/)
- [FugaCloud](https://fuga.cloud/)
- [Infomaniak](https://infomaniak.com)
- [Open Telekom Cloud](https://cloud.telekom.de/): requires setting the variable `wait_for_floatingip = "true"` in your cluster.tfvars
- [OVHcloud](https://www.ovhcloud.com/)
- [Rackspace](https://www.rackspace.com/)
- [Ultimum](https://ultimum.io/)
- [VexxHost](https://vexxhost.com/)
- [Zetta](https://www.zetta.io/)
## The OpenStack cloud provider
In Kubespray, the OpenStack cloud provider is configured to use Octavia by default.
- Enable the external OpenStack cloud provider in `group_vars/all/all.yml`:
```yaml
cloud_provider: external
external_cloud_provider: openstack
```
- Enable Cinder CSI in `group_vars/all/openstack.yml`:
```yaml
cinder_csi_enabled: true
```
- Enable topology support (optional). If your OpenStack provider has custom zone names, you can override the default "nova" zone by setting the variable `cinder_topology_zones`
```yaml
cinder_topology: true
```
- Enabling `cinder_csi_ignore_volume_az: true` ignores the volume AZ and schedules on any of the available node AZs.
```yaml
cinder_csi_ignore_volume_az: true
```
- If you are using OpenStack load balancer(s), replace `openstack_lbaas_subnet_id` with the new `external_openstack_lbaas_subnet_id`. **Note**: the new cloud provider uses Octavia instead of Neutron LBaaS by default!
- If you are using multi-NIC OpenStack VMs (see [kubernetes/cloud-provider-openstack#407](https://github.com/kubernetes/cloud-provider-openstack/issues/407) and [#6083](https://github.com/kubernetes-sigs/kubespray/issues/6083) for an explanation), you should override the default OpenStack networking configuration:
```yaml
external_openstack_network_ipv6_disabled: false
external_openstack_network_internal_networks: []
external_openstack_network_public_networks: []
```
- You can override the default OpenStack metadata configuration (see [#6338](https://github.com/kubernetes-sigs/kubespray/issues/6338) for explanation):
```yaml
external_openstack_metadata_search_order: "configDrive,metadataService"
```
- Available variables for configuring lbaas:
```yaml
external_openstack_lbaas_enabled: true
external_openstack_lbaas_floating_network_id: "Neutron network ID to get floating IP from"
external_openstack_lbaas_floating_subnet_id: "Neutron subnet ID to get floating IP from"
external_openstack_lbaas_method: ROUND_ROBIN
external_openstack_lbaas_provider: amphora
external_openstack_lbaas_subnet_id: "Neutron subnet ID to create LBaaS VIP"
external_openstack_lbaas_network_id: "Neutron network ID to create LBaaS VIP"
external_openstack_lbaas_manage_security_groups: false
external_openstack_lbaas_create_monitor: false
external_openstack_lbaas_monitor_delay: 5
external_openstack_lbaas_monitor_max_retries: 1
external_openstack_lbaas_monitor_timeout: 3
external_openstack_lbaas_internal_lb: false
```
- Run `source path/to/your/openstack-rc` to read your OpenStack credentials like `OS_AUTH_URL`, `OS_USERNAME`, `OS_PASSWORD`, etc. Those variables are used for accessing OpenStack from the external cloud provider.
- Run the `cluster.yml` playbook
## Additional step needed when using calico or kube-router
Being L3 CNIs, Calico and kube-router do not encapsulate all packets with the hosts' IP addresses. Instead, the packets are routed with the pods' IP addresses directly.
OpenStack will filter and drop all packets from IPs it does not know, to prevent spoofing.
In order to make L3 CNIs work on OpenStack, you need to tell OpenStack to allow the pods' packets by allowing the networks they use.
First, you will need the IDs of your OpenStack instances that will run Kubernetes:
```bash
openstack server list --project YOUR_PROJECT
+--------------------------------------+--------+----------------------------------+--------+-------------+
| ID | Name | Tenant ID | Status | Power State |
+--------------------------------------+--------+----------------------------------+--------+-------------+
| e1f48aad-df96-4bce-bf61-62ae12bf3f95 | k8s-1 | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running |
| 725cd548-6ea3-426b-baaa-e7306d3c8052 | k8s-2 | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running |
```
Then you can use the instance IDs to find the connected [neutron](https://wiki.openstack.org/wiki/Neutron) ports (though they are now managed through OpenStack):
```bash
openstack port list -c id -c device_id --project YOUR_PROJECT
+--------------------------------------+--------------------------------------+
| id | device_id |
+--------------------------------------+--------------------------------------+
| 5662a4e0-e646-47f0-bf88-d80fbd2d99ef | e1f48aad-df96-4bce-bf61-62ae12bf3f95 |
| e5ae2045-a1e1-4e99-9aac-4353889449a7 | 725cd548-6ea3-426b-baaa-e7306d3c8052 |
```
Given the port IDs on the left, you can set the two `allowed-address` pairs in OpenStack. Note that you have to allow both `kube_service_addresses` (default `10.233.0.0/18`) and `kube_pods_subnet` (default `10.233.64.0/18`).
```bash
# allow kube_service_addresses and kube_pods_subnet network
openstack port set 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
openstack port set e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
```
If all the VMs in the tenant belong to the Kubespray deployment, you can apply the above in one "sweep run" with:
```bash
openstack port list --device-owner=compute:nova -c ID -f value | xargs -tI@ openstack port set @ --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
```
Now you can finally run the playbook.
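For example (inventory path is a placeholder):
```bash
ansible-playbook --become -i inventory/mycluster/hosts cluster.yml
```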


@@ -0,0 +1,134 @@
# vSphere
Kubespray can be deployed with vSphere as the cloud provider. This feature supports:
- Volumes
- Persistent Volumes
- Storage Classes and provisioning of volumes
- vSphere Storage Policy Based Management for Containers orchestrated by Kubernetes
## Out-of-tree vSphere cloud provider
### Prerequisites
First, you need to configure your vSphere environment by following the [official documentation](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/kubernetes-on-vsphere-with-kubeadm.md#prerequisites).
After this step you should have:
- vSphere upgraded to 6.7 U3 or later
- VM hardware upgraded to version 15 or higher
- UUID activated for each VM where Kubernetes will be deployed
### Kubespray configuration
First in `inventory/sample/group_vars/all/all.yml` you must set the `cloud_provider` to `external` and `external_cloud_provider` to `vsphere`.
```yml
cloud_provider: "external"
external_cloud_provider: "vsphere"
```
Then, in `inventory/sample/group_vars/all/vsphere.yml`, you need to declare your vCenter credentials and enable the vSphere CSI, following the description below.
| Variable | Required | Type | Choices | Default | Comment |
|----------------------------------------|----------|---------|----------------------------|---------------------------|---------------------------------------------------------------------------------------------------------------------|
| external_vsphere_vcenter_ip | TRUE | string | | | IP/URL of the vCenter |
| external_vsphere_vcenter_port | TRUE | string | | "443" | Port of the vCenter API |
| external_vsphere_insecure | TRUE | string | "true", "false" | "true" | set to "true" if the host above uses a self-signed cert |
| external_vsphere_user | TRUE | string | | | User name for vCenter with required privileges (Can also be specified with the `VSPHERE_USER` environment variable) |
| external_vsphere_password | TRUE | string | | | Password for vCenter (Can also be specified with the `VSPHERE_PASSWORD` environment variable) |
| external_vsphere_datacenter | TRUE | string | | | Datacenter name to use |
| external_vsphere_kubernetes_cluster_id | TRUE | string | | "kubernetes-cluster-id" | Kubernetes cluster ID to use |
| vsphere_csi_enabled | TRUE | boolean | | false | Enable vSphere CSI |
Example configuration:
```yml
external_vsphere_vcenter_ip: "myvcenter.domain.com"
external_vsphere_vcenter_port: "443"
external_vsphere_insecure: "true"
external_vsphere_user: "administrator@vsphere.local"
external_vsphere_password: "K8s_admin"
external_vsphere_datacenter: "DATACENTER_name"
external_vsphere_kubernetes_cluster_id: "kubernetes-cluster-id"
vsphere_csi_enabled: true
```
For a more fine-grained CSI setup, refer to the [vsphere-csi](/docs/CSI/vsphere-csi.md) documentation.
### Deployment
Once the configuration is set, you can execute the playbook again to apply the new configuration:
```ShellSession
cd kubespray
ansible-playbook -i inventory/sample/hosts.ini -b -v cluster.yml
```
You'll find some useful examples [here](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/kubernetes-on-vsphere-with-kubeadm.md#sample-manifests-to-test-csi-driver-functionality) to test your configuration.
## In-tree vSphere cloud provider ([deprecated](https://cloud-provider-vsphere.sigs.k8s.io/concepts/in_tree_vs_out_of_tree.html))
### Prerequisites (deprecated)
First, you need to configure your vSphere environment by following the [official documentation](https://kubernetes.io/docs/getting-started-guides/vsphere/#vsphere-cloud-provider).
After this step you should have:
- UUID activated for each VM where Kubernetes will be deployed
- A vSphere account with required privileges
If you intend to leverage the [zone and region node labeling](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domain-beta-kubernetes-io-region), create a tag category for both the zone and region in vCenter. The tags can then be applied at the host, cluster, datacenter, or folder level, and the cloud provider will walk the hierarchy to extract and apply the labels to the Kubernetes nodes.
### Kubespray configuration (deprecated)
First you must define the cloud provider in `inventory/sample/group_vars/all.yml` and set it to `vsphere`.
```yml
cloud_provider: vsphere
```
Then, in the same file, you need to declare your vCenter credentials following the description below.
| Variable | Required | Type | Choices | Default | Comment |
|------------------------------|----------|---------|----------------------------|---------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| vsphere_vcenter_ip | TRUE | string | | | IP/URL of the vCenter |
| vsphere_vcenter_port | TRUE | integer | | | Port of the vCenter API. Commonly 443 |
| vsphere_insecure | TRUE | integer | 1, 0 | | set to 1 if the host above uses a self-signed cert |
| vsphere_user | TRUE | string | | | User name for vCenter with required privileges |
| vsphere_password | TRUE | string | | | Password for vCenter |
| vsphere_datacenter | TRUE | string | | | Datacenter name to use |
| vsphere_datastore | TRUE | string | | | Datastore name to use |
| vsphere_working_dir | TRUE | string | | | Working directory from the view "VMs and Templates" in the vCenter where VMs are placed |
| vsphere_scsi_controller_type | TRUE | string | buslogic, pvscsi, parallel | pvscsi | SCSI controller name. Commonly "pvscsi". |
| vsphere_vm_uuid | FALSE | string | | | VM Instance UUID of virtual machine that host K8s master. Can be retrieved from instanceUuid property in VmConfigInfo, or as vc.uuid in VMX file or in `/sys/class/dmi/id/product_serial` (Optional, only used for Kubernetes <= 1.9.2) |
| vsphere_public_network | FALSE | string | | Blank | Name of the network the VMs are joined to |
| vsphere_resource_pool | FALSE | string | | Blank | Name of the Resource pool where the VMs are located (Optional, only used for Kubernetes >= 1.9.2) |
| vsphere_zone_category | FALSE | string | | | Name of the tag category used to set the `failure-domain.beta.kubernetes.io/zone` label on nodes (Optional, only used for Kubernetes >= 1.12.0) |
| vsphere_region_category | FALSE | string | | | Name of the tag category used to set the `failure-domain.beta.kubernetes.io/region` label on nodes (Optional, only used for Kubernetes >= 1.12.0) |
Example configuration:
```yml
vsphere_vcenter_ip: "myvcenter.domain.com"
vsphere_vcenter_port: 443
vsphere_insecure: 1
vsphere_user: "k8s@vsphere.local"
vsphere_password: "K8s_admin"
vsphere_datacenter: "DATACENTER_name"
vsphere_datastore: "DATASTORE_name"
vsphere_working_dir: "Docker_hosts"
vsphere_scsi_controller_type: "pvscsi"
vsphere_resource_pool: "K8s-Pool"
```
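If you use the zone and region labeling described above, you would additionally set the tag category names (category names below are illustrative):
```yml
vsphere_zone_category: "k8s-zone"
vsphere_region_category: "k8s-region"
```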
### Deployment (deprecated)
Once the configuration is set, you can execute the playbook again to apply the new configuration:
```ShellSession
cd kubespray
ansible-playbook -i inventory/sample/hosts.ini -b -v cluster.yml
```
You'll find some useful examples [here](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere) to test your configuration.