# Kubernetes on NIFCLOUD with Terraform

Provision a Kubernetes cluster on NIFCLOUD using Terraform and Kubespray.
## Overview

The setup looks like the following:
```text
                          Kubernetes cluster
                    +----------------------------+
+---------------+   |  +--------------------+    |
|               |   |  | +--------------------+  |
| API server LB +----->| |                    |  |
|               |   |  | | Control Plane/etcd |  |
+---------------+   |  | | node(s)            |  |
                    |  +-+                    |  |
                    |    +--------------------+  |
                    |              ^             |
                    |              |             |
                    |              v             |
                    |    +--------------------+  |
                    |  +-+                    |  |
                    |  | |  Worker            |  |
                    |  | |  node(s)           |  |
                    |  +-+                    |  |
                    |    +--------------------+  |
                    +----------------------------+
```
## Requirements

- Terraform 1.3.7
## Quickstart

### Export Variables

- Your NIFCLOUD credentials:

  ```bash
  export NIFCLOUD_ACCESS_KEY_ID=<YOUR ACCESS KEY>
  export NIFCLOUD_SECRET_ACCESS_KEY=<YOUR SECRET ACCESS KEY>
  ```

- The SSH key used to connect to the instances:
  - FYI: Cloud Help (SSH Key)

  ```bash
  export TF_VAR_SSHKEY_NAME=<YOUR SSHKEY NAME>
  ```

- The IP address used to connect to the bastion server:

  ```bash
  export TF_VAR_working_instance_ip=$(curl ifconfig.me)
  ```
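Before moving on, it can help to confirm that all four variables are actually exported. A minimal sketch (not part of the Kubespray tooling):

```shell
# Sanity check (illustrative): report any of the variables from this
# section that are still unset or empty in the current shell.
for v in NIFCLOUD_ACCESS_KEY_ID NIFCLOUD_SECRET_ACCESS_KEY \
         TF_VAR_SSHKEY_NAME TF_VAR_working_instance_ip; do
  if [ -z "$(printenv "$v")" ]; then
    echo "missing: $v"
  fi
done
```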
### Create The Infrastructure

- Run Terraform:

  ```bash
  terraform init
  terraform apply -var-file ./sample-inventory/cluster.tfvars
  ```
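The `cluster.tfvars` file passed to `terraform apply` sets the module inputs listed under Variables below. A hypothetical sketch, where every value is a placeholder and the shipped `sample-inventory/cluster.tfvars` remains the authoritative reference:

```hcl
# Hypothetical example only - all values below are placeholders.
region            = "jp-east-1"
az                = "east-11"
instance_key_name = "deployerkey"

instance_type_bn = "e-small"
instance_type_cp = "e-medium"
instance_type_wk = "e-medium"

image_name = "Ubuntu Server 22.04 LTS"

private_network_cidr = "192.168.30.0/24"
private_ip_bn        = "192.168.30.10"

# Map keys (cp01, wk01) become part of the machine names.
instances_cp = {
  cp01 = { private_ip = "192.168.30.11" }
}
instances_wk = {
  wk01 = { private_ip = "192.168.30.21" }
}

accounting_type = "2" # 1: monthly, 2: pay per use
```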
### Set Up Kubernetes

- Generate the cluster configuration file:

  ```bash
  ./generate-inventory.sh > sample-inventory/inventory.ini
  ```
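For orientation, the generated inventory follows Kubespray's standard group layout. An illustrative shape (hostnames and addresses are hypothetical; the real file is whatever `generate-inventory.sh` emits from your terraform outputs):

```ini
; Illustrative only - placeholder hosts and addresses.
cp01 ansible_host=192.168.30.11
wk01 ansible_host=192.168.30.21

[kube_control_plane]
cp01

[etcd]
cp01

[kube_node]
wk01

[k8s_cluster:children]
kube_control_plane
kube_node
```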
- Export variables:

  ```bash
  BASTION_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.bastion_info | to_entries[].value.public_ip')
  API_LB_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.control_plane_lb')
  CP01_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.control_plane_info | to_entries[0].value.private_ip')
  export ANSIBLE_SSH_ARGS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand=\"ssh root@${BASTION_IP} -W %h:%p\""
  ```
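The `jq` filters above walk the `terraform output -json` document; `to_entries` turns each info map into an array so values can be picked without knowing the generated instance keys. Their behaviour can be checked against a mocked-up payload (the JSON shape below is an assumption for illustration, not the module's guaranteed output):

```shell
# Mocked-up `terraform output -json` payload (illustrative shape only).
cat > /tmp/tf-output.json <<'EOF'
{
  "kubernetes_cluster": {
    "value": {
      "bastion_info": { "bn01": { "public_ip": "203.0.113.10" } },
      "control_plane_lb": "198.51.100.5",
      "control_plane_info": {
        "cp01": { "private_ip": "192.168.30.11" },
        "cp02": { "private_ip": "192.168.30.12" }
      }
    }
  }
}
EOF

# jq preserves object key order, so to_entries[0] is the first machine.
jq -r '.kubernetes_cluster.value.bastion_info | to_entries[].value.public_ip' /tmp/tf-output.json
# -> 203.0.113.10
jq -r '.kubernetes_cluster.value.control_plane_info | to_entries[0].value.private_ip' /tmp/tf-output.json
# -> 192.168.30.11
```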
- Set up ssh-agent:

  ```bash
  eval `ssh-agent`
  ssh-add <THE PATH TO YOUR SSH KEY>
  ```

- Run the cluster.yml playbook:

  ```bash
  cd ./../../../
  ansible-playbook -i contrib/terraform/nifcloud/inventory/inventory.ini cluster.yml
  ```
### Connecting to Kubernetes

- Install kubectl on the localhost
- Fetch the kubeconfig file:

  ```bash
  mkdir -p ~/.kube
  scp -o ProxyCommand="ssh root@${BASTION_IP} -W %h:%p" root@${CP01_IP}:/etc/kubernetes/admin.conf ~/.kube/config
  ```

- Append the API server LB to /etc/hosts (`sudo tee -a` is used because a plain `sudo echo ... >> /etc/hosts` would perform the redirection in the unprivileged shell):

  ```bash
  echo "${API_LB_IP} lb-apiserver.kubernetes.local" | sudo tee -a /etc/hosts
  ```

- Run kubectl:

  ```bash
  kubectl get node
  ```
## Variables

- `region`: Region where to run the cluster
- `az`: Availability zone where to run the cluster
- `private_ip_bn`: Private IP address of the bastion server
- `private_network_cidr`: Subnet of the private network
- `instances_cp`: Machines to provision as control plane nodes. The key of each entry is used as part of the machine's name
  - `private_ip`: Private IP address of the machine
- `instances_wk`: Machines to provision as worker nodes. The key of each entry is used as part of the machine's name
  - `private_ip`: Private IP address of the machine
- `instance_key_name`: The key name of the key pair to use for the instances
- `instance_type_bn`: The instance type of the bastion server
- `instance_type_wk`: The instance type of the worker nodes
- `instance_type_cp`: The instance type of the control plane nodes
- `image_name`: OS image used for the instances
- `working_instance_ip`: The IP address used to connect to the bastion server
- `accounting_type`: Accounting type (1: monthly, 2: pay per use)