Compare commits

..

78 Commits

Author SHA1 Message Date
bleech1
4661e7db01 remove local lb privileged (#7437) (#7454)
Co-authored-by: Samuel Liu <liupeng0518@gmail.com>
2021-04-07 01:35:53 -07:00
Etienne Champetier
ba1d3dcddc Remove leftover nodes_to_drain
Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
2021-03-29 16:19:56 -07:00
Etienne Champetier
e7f8d5a987 Fix remove-node by removing jq usage (#7405)
Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit 36a3a78952)
2021-03-29 16:19:56 -07:00
David Louks
26183c2523 Remove ignore_errors from drain tasks and enable retries (#7151)
* Remove ignore_errors from drain tasks and enable retries

* Fix lint error by checking if stdout length is not 0, i.e. the string is not empty.

(cherry picked from commit ccd3aeebbc)
2021-03-29 16:19:56 -07:00
Etienne Champetier
0f7b9363f9 Fix k8s-certs-renew for k8s < 1.20 (#7410)
Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit 2d1597bf10)
2021-03-29 16:19:56 -07:00
Anthony Rabbito
b0b569615e Correct Jinja Syntax for etcd-unsupported-arch (#6919)
`-%` causes `etcd-unsupported-arch: arm64` to print on COL 1 instead of
COL 6.

Signed-off-by: anthr76 <hello@anthonyrabbito.com>
(cherry picked from commit edfa3e9b14)
2021-03-29 16:19:56 -07:00
Kaleb Elwert
65aa9213d4 Allow connecting to bastion via non-standard SSH port (#7396)
* Allow connecting to bastion via non-standard port

* Fix bastion connection when ansible_port is not provided

(cherry picked from commit 6fa3565dac)
2021-03-29 16:19:56 -07:00
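For context, reaching the cluster through a bastion on a non-default port comes down to declaring `ansible_port` on the bastion host. A minimal, hypothetical YAML inventory sketch (host name and address are illustrative, not taken from the patch):

```yaml
all:
  hosts:
    bastion:
      ansible_host: 203.0.113.10  # example address
      ansible_port: 2222          # the non-standard SSH port the fix must honor
```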
Kenichi Omichi
44d1f83ee9 Add cryptography installation (#7404)
To avoid ModuleNotFoundError (no module named 'setuptools_rust'),
this adds cryptography to requirements.txt.

Created by jfc-evs originally as https://github.com/kubernetes-sigs/kubespray/pull/7264

(cherry picked from commit 49abf6007a)
2021-03-29 16:19:56 -07:00
Etienne Champetier
b19d109a12 Auto renew control plane certificates (#7358)
While at it, remove force_certificate_regeneration.
This boolean only forced the renewal of the apiserver certs.
Either manually use k8s-certs-renew.sh or set auto_renew_certificates.

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit efa180392b)

Conflicts:
	roles/kubernetes/master/templates/k8s-certs-renew.service.j2
	roles/kubernetes/master/templates/k8s-certs-renew.sh.j2
	roles/kubernetes/master/templates/k8s-certs-renew.timer.j2
2021-03-23 07:29:36 -07:00
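The renewal itself is driven by the templated k8s-certs-renew systemd units named in the conflict list above. A hedged sketch of how `auto_renew_certificates` could wire them up in Ansible (task layout assumed, not copied from the patch):

```yaml
- name: Enable periodic control plane certificate renewal
  systemd:
    name: k8s-certs-renew.timer   # templated from k8s-certs-renew.timer.j2
    enabled: yes
    state: started
    daemon_reload: yes
  when: auto_renew_certificates
```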
Etienne Champetier
4e52da6a35 Set K8S default to v1.19.9
Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
2021-03-23 07:29:36 -07:00
Florian Ruynat
c1a686bf47 Update hashes for 1.20.5/1.19.9/1.18.17
(cherry picked from commit 6d3dbb43a4)
2021-03-23 07:29:36 -07:00
Florian Ruynat
eb8dd77ab6 Fix calico crds missing 3.16.9 (#7386)
(cherry picked from commit ead8a4e4de)
2021-03-23 07:29:36 -07:00
Florian Ruynat
cd46286523 Update CNI (calico, kubeovn, multus) and Helm
(cherry picked from commit 05f132c136)
2021-03-23 07:29:36 -07:00
Erwan Miran
e12850be55 Download Calico KDD CRDs (#7372)
* Download Calico KDD CRDs

* Replace kustomize with lineinfile and use ansible assemble module

* Replace find+lineinfile by sed in shell module to avoid nested loop

* add condition on sed

* use block for kdd tasks + remove supernumerary kdd manifest apply in start "Start Calico resources"

(cherry picked from commit 1c62af0c95)

Conflicts:
        roles/network_plugin/calico/tasks/install.yml
2021-03-23 07:29:36 -07:00
Florian Ruynat
5e4f3cabf1 Update nodelocaldns to 1.17.1
(cherry picked from commit 5f2c8ac38f)
2021-03-23 07:29:36 -07:00
Florian Ruynat
df00b1d69d Minor update to cilium and calico
(cherry picked from commit de46f86137)
2021-03-23 07:29:36 -07:00
Florian Ruynat
d74dcfd3b8 Update kube-ovn to 1.6.0 (#7240)
(cherry picked from commit edc4bb4a49)
2021-03-23 07:29:36 -07:00
Maciej Wereski
c1c720422f Upgrade openSUSE Leap to 15.2 (#7331)
15.1 has reached EOL on 2021-02-02.

Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
(cherry picked from commit 69d11daef6)
2021-03-23 07:29:36 -07:00
Etienne Champetier
bac71fa7cb Fixup one more missing kubespray-defaults (#7375)
"The error was: 'proxy_disable_env' is undefined\n\nThe error appears to
be in '<censored>scale.yml': line 72, column 7"

Fixes 067db686f6

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit 057e8b4358)
2021-03-23 07:29:36 -07:00
Lennart Jern
ac1aa4d591 Check for dummy kernel module (#7348)
The dummy module is needed for nodelocaldns.

(cherry picked from commit 5a54db2f3c)
2021-03-15 07:07:05 -07:00
Etienne Champetier
c22915a08c Fixup kubelet.conf to point to kubelet-client-current.pem (#7347)
c9c0c01de0 only fixes the problem for new clusters

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit 14b63ede8c)

Conflicts:
	roles/kubernetes/master/tasks/kubelet-fix-client-cert-rotation.yml
2021-03-15 07:07:05 -07:00
Maciej
0ea43289e2 ansible and jinja2 updates (#7357)
* Update ansible to v2.9.18

Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>

* Update jinja2 to v2.11.3

Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
(cherry picked from commit b07c5966a6)
2021-03-15 07:07:05 -07:00
Victor Morales
01e527abf1 Add privileged_without_host_devices support (#7343)
When privileged is enabled for a container, all the `/dev/*` block
devices from the host are mounted into the guest. The
`privileged_without_host_devices` flag prevents host devices from
being passed to privileged containers.

More information:
* https://github.com/containerd/cri/pull/1225
* 1d0f68156b

(cherry picked from commit dc5df57c26)
2021-03-15 07:07:05 -07:00
Etienne Champetier
704a054064 Delete misnamed kubeadm-version.yml
The important action in kubeadm-version.yml is the templating of the configuration,
not finding/setting the version.

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit a9c97e5253)

Conflicts:
	roles/kubernetes/master/tasks/kubeadm-version.yml
2021-03-15 07:07:05 -07:00
Etienne Champetier
8c693e8739 Always backup both certs and kubeconfig
There is no reason not to back up during upgrade

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit 53e5ef6b4e)

Conflicts:
	roles/kubernetes/master/tasks/kubeadm-backup.yml
	roles/kubernetes/master/tasks/kubeadm-certificate.yml
2021-03-15 07:07:05 -07:00
Etienne Champetier
9ecbf75cb4 Remove rotate_tokens logic
kubeadm never rotates sa.key/sa.pub, so there is no need to delete tokens/restart pods

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit 8800b5c01d)
2021-03-15 07:07:05 -07:00
Etienne Champetier
591a51aa75 Remove admin.conf removal
kubeadm has been the default for a long time now,
and it creates admin.conf, so let kubeadm handle it

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit 280036fad6)
2021-03-15 07:07:05 -07:00
Etienne Champetier
76a1697cf1 Remove useless call to 'kubeadm version'
Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit a6e1f5ece9)
2021-03-15 07:07:05 -07:00
Etienne Champetier
1216a0d52d Remove pre kubeadm cert migration tasks
apiserver.pem has not been used since ddffdb63bf

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit fedd671d68)

Conflicts:
	roles/kubernetes/master/tasks/kubeadm-cleanup-old-certs.yml
	roles/kubernetes/master/tasks/kubeadm-migrate-certs.yml
2021-03-15 07:07:05 -07:00
Du9L.com
f4d3a4a5ad kubeadm-config.v1beta2.yaml.j2: etcd log level arg (#7339)
According to [etcd's docs](https://etcd.io/docs/v3.4.0/op-guide/configuration/#--log-package-levels), argument 'log-package-levels' should not contain underscores.

(cherry picked from commit b7c22659e3)
2021-03-15 07:07:05 -07:00
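For reference, the argument lives under the etcd section of the kubeadm v1beta2 config, and per the linked etcd docs it must be spelled with dashes. A hedged fragment (log levels illustrative):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
etcd:
  local:
    extraArgs:
      # dashes, not underscores, per the etcd configuration docs
      log-package-levels: "etcdserver=WARNING,security=DEBUG"
```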
Etienne Champetier
3c8ad073cd Stop using kubeadm to update server in kubeconfigs (#7338)
Using `kubeadm init phase kubeconfig all` breaks kubelet client certificate rotation
as we are missing `kubeadm init phase kubelet-finalize all` to point to `kubelet-client-current.pem`

The kubeconfig format is stable, so let's just use lineinfile;
this will avoid future breakage.

This reverts to the logic before 6fe2248314

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit c9c0c01de0)
2021-03-15 07:07:05 -07:00
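A minimal sketch of the lineinfile approach described above (path and field layout assumed from standard kubeadm kubeconfigs, not copied from the patch):

```yaml
- name: Point kubelet.conf at the auto-rotated client certificate
  lineinfile:
    path: /etc/kubernetes/kubelet.conf
    regexp: '^    client-certificate'
    line: '    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem'
```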
Etienne Champetier
53b9388b82 Add kube-ipvs0/nodelocaldns to NetworkManager unmanaged-devices (#7315)
On CentOS 8 they seem to be ignored by default, but better to be extra safe.
This also makes it easy to exclude other network plugin interfaces.

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit e442b1d2b9)
2021-03-15 07:07:05 -07:00
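NetworkManager reads unmanaged-device patterns from keyfile snippets, so the change amounts to dropping a config fragment along these lines (file path illustrative, assumed):

```yaml
- name: Keep NetworkManager away from Kubernetes-managed interfaces
  copy:
    dest: /etc/NetworkManager/conf.d/k8s-unmanaged.conf  # illustrative path
    content: |
      [keyfile]
      unmanaged-devices=interface-name:kube-ipvs0;interface-name:nodelocaldns
```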
Etienne Champetier
f26cc9f75b Only use stat get_checksum: yes when needed (#7270)
By default the Ansible stat module computes the checksum, lists extended attributes and finds the MIME type.
To find all stat invocations that really use one of those:
git grep -F stat. | grep -vE 'stat.(islnk|exists|lnk_source|writeable)'

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit de1d9df787)

Conflicts:
	roles/etcd/tasks/check_certs.yml
2021-03-15 07:07:05 -07:00
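A sketch of the resulting pattern: disable the expensive extras whenever only existence (or a link target) is checked. Path and register name are illustrative, not from the patch:

```yaml
- name: Check certs | check if a cert already exists on node
  stat:
    path: /etc/ssl/etcd/ssl/ca.pem   # illustrative path
    get_attributes: no
    get_checksum: no
    get_mime: no
  register: etcd_member_certs
```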
stress-t
5563ed8084 Fix: added string to bool conversion for use_localhost_as_kubeapi_loadbalancer (#7324)
(cherry picked from commit 15f1b19136)
2021-03-02 08:33:19 -08:00
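The variable name comes from the commit title; the fix pattern is the usual `| bool` cast so a string value from the inventory compares correctly. A hedged sketch (the task itself is illustrative):

```yaml
- name: Use localhost as the kube-apiserver load balancer endpoint
  include_tasks: loadbalancer/localhost.yml   # illustrative task file
  when: use_localhost_as_kubeapi_loadbalancer | bool
```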
stress-t
a02e9206fe Improving PR 6473 (#7259)
(cherry picked from commit 796d3fb975)
2021-03-02 08:33:19 -08:00
Florian Ruynat
e7cc686beb Fix recover-control-plane undefined 'proxy_disable_env' variable (#7326)
(cherry picked from commit 05adeed1fa)

Conflicts:
	recover-control-plane.yml
2021-03-02 08:33:19 -08:00
wangxf
5d4fcbc5a1 fix: the filename </etc/vault> is duplicated in the reset role. (#7313)
(cherry picked from commit 154fa45422)
2021-03-02 08:33:19 -08:00
Florian Ruynat
ba348c9a00 Move centos7-crio CI job to centos8 (#7327)
(cherry picked from commit e35becebf8)
2021-03-02 08:33:19 -08:00
Kenichi Omichi
b0f2471f0e Update Ansible to v2.9.17 (#7291)
This updates Ansible version to the latest stable version 2.9.17.

(cherry picked from commit 0ddf915027)
2021-03-02 08:33:19 -08:00
Etienne Champetier
fbdc2b3e20 Fix proxy usage when *_PROXY are present in environment (#7309)
Since a790935d02 all proxy users
should be properly configured.

Now when you have *_PROXY vars in your environment it can lead to failures
if NO_PROXY is not correct, or to persistent configuration changes,
as seen with kubeadm in 1c5391dda7.

Instead of playing constant whack-a-bug, inject empty *_PROXY vars everywhere
at the play level, and override at the task level when needed.

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit 067db686f6)
2021-03-02 08:33:19 -08:00
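The commit text (and the `proxy_disable_env` variable referenced in the fix above) suggests a play-level pattern along these lines; a hedged sketch, with the dict contents assumed to be empty *_PROXY values:

```yaml
- hosts: k8s-cluster:etcd:calico-rr
  environment: "{{ proxy_disable_env }}"  # assumed: { http_proxy: "", https_proxy: "", no_proxy: "" }
  roles:
    - { role: kubespray-defaults }
```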
Etienne Champetier
557139a8cf Fix reset when using containerd (#7308)
Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit ed2b4b805e)
2021-03-02 08:33:19 -08:00
Etienne Champetier
daea9f3d21 Set Kubernetes default version to 1.19.8
Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
2021-03-02 08:33:19 -08:00
Florian Ruynat
ac23d89a1a Add hashes for Kubernetes 1.18.16/1.19.8/1.20.4
(cherry picked from commit 86ce8aac85)
2021-03-02 08:33:19 -08:00
Etienne Champetier
3292887cae Fix "api is up" check (#7295)
Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit 662a37ab4f)
2021-03-02 08:33:19 -08:00
Etienne Champetier
c7658c0256 Remove calico-upgrade leftovers (#7282)
This is dead code since 28073c76ac

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit 3749729d5a)
2021-02-22 06:01:43 -08:00
Etienne Champetier
716a66e5d3 facts.yaml: reduce the number of setup calls by ~7x (#7286)
Before this commit, we were gathering:
1 !all
7 network
7 hardware

After we are gathering:
1 !all
1 network
1 hardware

ansible_distribution_major_version is gathered by '!all'

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit fb8b075110)
2021-02-22 06:01:43 -08:00
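With `gather_subset`, one setup call per subset suffices instead of repeating full gathering in every play. A hedged sketch of the described pattern (task names illustrative):

```yaml
- name: Gather minimal facts
  setup:
    gather_subset: '!all'

- name: Gather necessary facts (network)
  setup:
    gather_subset: '!all,!min,network'

- name: Gather necessary facts (hardware)
  setup:
    gather_subset: '!all,!min,hardware'
```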
Matt Calvert
efd138e752 Ensure we gather IPv6 facts
(cherry picked from commit 366cbb3e6f)
2021-02-22 06:01:43 -08:00
Etienne Champetier
40857b9859 Ensure kubeadm doesn't use proxy (#7275)
* Move proxy_env to kubespray-defaults/defaults

There is no reason to use set_fact here

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>

* Ensure kubeadm doesn't use proxy

*_proxy variables might be present in the environment (/etc/environment, bash profile, ...)
When this is the case we end up with that proxy configuration in /etc/kubernetes/manifests/kube-*.yaml manifests

We cannot unset env variables, but kubeadm is nice enough to ignore empty vars
93d288e2a4/cmd/kubeadm/app/util/env.go (L27)

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit 1c5391dda7)
2021-02-22 06:01:43 -08:00
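Since kubeadm ignores empty proxy variables (per the linked env.go), the override can be applied directly on the task. A hedged sketch; command and paths are illustrative:

```yaml
- name: kubeadm | Initialize first master
  command: kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml  # illustrative
  environment:
    http_proxy: ""
    https_proxy: ""
    no_proxy: ""
```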
Etienne Champetier
176df83e02 Fixup cri-o metacopy mount options (#7287)
The Ubuntu 18.04 crio package ships with 'mountopt = "nodev,metacopy=on"'
even though the GA kernel is 4.15 (the HWE kernel can be more recent).

The Fedora package ships without metacopy=on.

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit 5c04bdd52b)
2021-02-22 06:01:43 -08:00
Etienne Champetier
60b405a7b7 bootstrap-os: match on os-release ID / VARIANT_ID (#7269)
This fixes deployment with CentOS 8 Streams and makes detection more reliable

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
(cherry picked from commit 95b329b64d)

Conflicts:
  roles/bootstrap-os/tasks/main.yml
2021-02-22 06:01:43 -08:00
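A hedged sketch of matching on the os-release fields (task layout and file names assumed; CentOS 8 Stream reports ID="centos" in /etc/os-release):

```yaml
- name: Fetch /etc/os-release
  raw: cat /etc/os-release
  register: os_release
  changed_when: false

- name: Bootstrap CentOS-like hosts
  include_tasks: bootstrap-centos.yml   # illustrative file name
  when: os_release.stdout is search('ID="centos"')
```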
Cristian Calin
d48a4bbc85 add containerd.io to dpkg_selection (#7273)
`containerd.io` is the companion package of `docker-ce` and is the
proper package name. This is needed to avoid apt upgrade/dist-upgrade
from breaking Kubernetes.

(cherry picked from commit 6450207713)
2021-02-22 06:01:43 -08:00
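The mechanism is an apt package hold; a minimal sketch of the described task:

```yaml
- name: Hold containerd.io so apt upgrade/dist-upgrade cannot break Kubernetes
  dpkg_selections:
    name: containerd.io
    selection: hold
```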
Takashi IIGUNI
10b08d8840 fix: Restart network doesn't work on Fedora CoreOS (#7271)
When running remove-node.yml tasks to clean up a cluster on Fedora CoreOS,
the task "reset | Restart network" failed to restart the network daemon.
Fedora CoreOS uses NetworkManager, but this task targeted the "network" service.

Signed-off-by: Takashi IIGUNI <iiguni.tks@gmail.com>
(cherry picked from commit bcaa31ae33)
2021-02-22 06:01:43 -08:00
David Louks
189ce380bd Remove deletion of coredns deployment. (#7211)
* Add unique annotation on coredns deployment and only remove existing deployment if annotation is missing.

* Ignore errors when gathering coredns deployment details to handle the case where it doesn't exist yet

* Remove run_once, delegate_to and add to when statement

(cherry picked from commit 0cc1726781)
2021-02-22 06:01:43 -08:00
Geonju Kim
5f06864582 Change the owner of /etc/crictl.yaml to root (#7254)
(cherry picked from commit 1a91792e7c)
2021-02-22 06:01:43 -08:00
Mathieu Parent
3ad248b007 Update Helm version to 3.5.2 (#7248)
Helm v3.5.2 is a security (patch) release. Users are strongly
recommended to update to this release. It fixes two security issues in
upstream dependencies and one security issue in the Helm codebase.

See https://github.com/helm/helm/releases/tag/v3.5.2

(cherry picked from commit 670c37b428)
2021-02-22 06:01:43 -08:00
petruha
754a54adfc Run containerd related tasks on OracleLinux. (#7250)
(cherry picked from commit fc8551bcba)
2021-02-22 06:01:43 -08:00
forselli-stratio
960844d87b Fix ansible calico route reflector tasks in calico role (#7224)
* Fix calico-rr tasks

* revert stdin only when it's already a string

(cherry picked from commit 88bee6c68e)
2021-02-22 06:01:43 -08:00
Sander Cornelissen
6bde4e3fb3 Ensure that when use_oracle_public_repo is set to false, the public Oracle Linux yum repos are not set (#7228)
(cherry picked from commit b70d986bfa)
2021-02-22 06:01:43 -08:00
Felix Breuer
3725c80a71 FIX: Bastion undefined variable (#7227)
Fixes the following error when using Bastion Node with the sample config.
```
fatal: [bastion]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'bastion'\n\nThe error appears to be in '/home/felix/inovex/kubespray/roles/bastion-ssh-config/tasks/main.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: set bastion host IP\n  ^ here\n"}
```

(cherry picked from commit 973628fc1b)
2021-02-22 06:01:43 -08:00
Robin Elfrink
d94f32c160 Fix unintended SIGPIPEs. (#7214)
(cherry picked from commit 91fea7c956)
2021-02-22 06:01:43 -08:00
Jorik Jonker
6b184905e6 calico: fix NetworkManager check (#7169)
The previous check for the presence of NetworkManager assumed "systemctl show
NetworkManager" would exit with a nonzero status code, which no longer seems
to be the case with recent Flatcar Container Linux.

The new check also verifies that NetworkManager is active, as
`is-active` implies presence.

Signed-off-by: Jorik Jonker <jorik@kippendief.biz>

(cherry picked from commit bba55faae8)
2021-02-22 06:01:43 -08:00
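A hedged sketch of the described check (task and file names illustrative):

```yaml
- name: Check if NetworkManager is present and active
  command: systemctl is-active --quiet NetworkManager
  register: nm_check
  failed_when: false
  changed_when: false

- name: Apply NetworkManager-specific configuration
  include_tasks: networkmanager.yml   # illustrative file name
  when: nm_check.rc == 0
```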
takmori_tech
782c3dc1c4 Update main.yml (#7175)
Fix issue #7129. Calico image tags support multiarch on quay.io.

(cherry picked from commit 2525d7aff8)
2021-02-22 06:01:43 -08:00
Florian Ruynat
f6b806e971 Update a bunch of dependencies (#7187)
(cherry picked from commit 9ef62194c3)
2021-02-22 06:01:43 -08:00
Sergey
dee0594d74 Adding other masters sequentially, not in parallel (#7166)
(cherry picked from commit b2995e4ec4)
2021-02-22 06:01:43 -08:00
Arian van Putten
f8b15a714c roles/docker: Make repokey fingerprint overrideable (#7263)
This makes the docker role work the same as the containerd role.
Being able to override this is needed when you have your own Debian
repository, e.g. when performing an air-gapped installation.
2021-02-15 20:47:05 -08:00
Ryler Hockenbury
d8ab76aa04 Update azure cloud config (#7208) (#7221)
* Allow configureable vni and port for flannel overlay

* additional options for azure cloud config
2021-01-27 03:47:40 -08:00
Rick Haan
8a5139e54c Check kube-apiserver up on all masters before upgrade (#7193) (#7217)
Only checking the Kubernetes API on the first master when upgrading is not enough.
Each master needs to be checked before its upgrade.

Signed-off-by: Rick Haan <rickhaan94@gmail.com>
2021-01-26 07:20:35 -08:00
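A hedged sketch of such a per-master health check (URL, variables and retry counts illustrative):

```yaml
- name: Check that kube-apiserver is up before upgrading this master
  uri:
    url: "https://{{ ip | default(ansible_default_ipv4.address) }}:6443/healthz"
    validate_certs: no
  register: apiserver
  until: apiserver.status == 200
  retries: 10
  delay: 6
```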
Etienne Champetier
1727b3501f containerd,docker: stop installing extras repo on CentOS/RHEL
This was introduced in 143e2272ff.
The extras repo is enabled by default on CentOS, and is not the right repo for EL8.
Instead of adding a CentOS repo to RHEL, enable the needed RHEL repos with rhsm_repository.

For RHEL 7, we need the "extras" repo for container-selinux.
For RHEL 8, we need the "appstream" repo for container-selinux, ipvsadm and socat.

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit 8f2b0772f9)
2021-01-25 23:48:34 -08:00
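A hedged sketch of the rhsm_repository approach for the RHEL 7 case (the repo id is the standard RHEL 7 extras channel; conditions are illustrative):

```yaml
- name: Enable the extras repo needed for container-selinux
  rhsm_repository:
    name: rhel-7-server-extras-rpms
  when:
    - ansible_distribution == "RedHat"
    - ansible_distribution_major_version == "7"
```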
Etienne Champetier
4ed05cf655 Calico: fixup check when ipipMode / vxlanMode is not present
calicoctl.sh get ipPool default-pool -o json
{
  "kind": "IPPool",
  "apiVersion": "projectcalico.org/v3",
  "metadata": {
    "name": "default-pool",
...
  },
  "spec": {
    "cidr": "10.233.64.0/18",
    "ipipMode": "Always",
    "natOutgoing": true,
    "blockSize": 24,
    "nodeSelector": "all()"
  }
}

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit f1576eabb1)
2021-01-25 23:48:34 -08:00
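When a pool was created without ipipMode/vxlanMode, the key is simply absent from the JSON above, so the check has to supply a default. A hedged sketch (variable names assumed, not from the patch):

```yaml
- name: Check that the existing default pool matches the configured encapsulation
  assert:
    that:
      - calico_pool.spec.ipipMode | default('Never') == calico_ipip_mode    # names assumed
      - calico_pool.spec.vxlanMode | default('Never') == calico_vxlan_mode  # names assumed
```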
Etienne Champetier
8105cd7fbe preinstall: etcd group might not exist
Fixes 8c1821228d

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit 49c4345c9a)
2021-01-25 23:48:34 -08:00
Etienne Champetier
cf84a6bd3b containerd: ensure containerd is really started and enabled
Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit a5d2137ed9)
2021-01-25 23:48:34 -08:00
Etienne Champetier
b80f612d29 containerd,docker: use apt_repository instead of action
yum_repository expects really different params, so nothing to factor here.
Ubuntu is not an ansible_os_family; the OS family for Ubuntu is Debian.
Check for ansible_pkg_mgr == apt.

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit a8e51e686e)
2021-01-25 23:48:34 -08:00
Etienne Champetier
5e06ee6ea6 containerd,docker: use apt_key instead of action
We don't need rpm_key, so nothing to factor here.
Ubuntu is not an ansible_os_family; the OS family for Ubuntu is Debian.
Check for ansible_pkg_mgr == apt.

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit a2429ef64d)
2021-01-25 23:48:34 -08:00
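This commit and the preceding apt_repository one converge on the same shape: plain apt_key/apt_repository tasks gated on the package manager rather than on OS family. A hedged sketch, with Docker URLs used as a representative example:

```yaml
- name: Add Docker apt key
  apt_key:
    url: "https://download.docker.com/linux/{{ ansible_distribution | lower }}/gpg"
    state: present
  when: ansible_pkg_mgr == 'apt'

- name: Add Docker apt repository
  apt_repository:
    repo: "deb https://download.docker.com/linux/{{ ansible_distribution | lower }} {{ ansible_distribution_release }} stable"
    state: present
  when: ansible_pkg_mgr == 'apt'
```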
Etienne Champetier
4de5a070e1 containerd: use package instead of action
Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit 1b88678cf3)
2021-01-25 23:48:34 -08:00
Etienne Champetier
b198cd23d0 docker: use package instead of action, cleanup
Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit 0e96852159)
2021-01-25 23:48:34 -08:00
Etienne Champetier
74e8f58c57 containerd: use copy to set apt pin
Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit 19a61d838f)
2021-01-25 23:48:34 -08:00
Etienne Champetier
803f89e82b preinstall: use package instead of action, use state: present
Before this commit we were upgrading base OS packages on each run

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit 4eec302e86)
2021-01-25 23:48:34 -08:00
Etienne Champetier
a652a3b3b5 docker: stop using apt force
Here is the description from the Ansible docs:
Corresponds to the --force-yes to apt-get and implies allow_unauthenticated: yes
This option will disable checking both the packages' signatures and the certificates of the web servers they are downloaded from.
This option *is not* the equivalent of passing the -f flag to apt-get on the command line
**This is a destructive operation with the potential to destroy your system, and it should almost never be used.** Please also see man apt-get for more information.

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit f3885aa589)
2021-01-25 23:48:34 -08:00
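The replacement, then, is a plain apt install pinned to the wanted version, without force. A hedged sketch (the package variable is assumed, not from the patch):

```yaml
- name: Ensure docker packages are installed
  apt:
    name: "{{ docker_package_info.pkgs }}"   # assumed variable holding pinned package names
    state: present
  when: ansible_pkg_mgr == 'apt'
```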
1316 changed files with 43417 additions and 45553 deletions


@@ -7,32 +7,14 @@ skip_list:
   # These rules are intentionally skipped:
   #
-  # [role-name] "meta/main.yml" Role name role-name does not match ``^+$`` pattern
-  # Meta roles in Kubespray don't need proper names
-  # (Disabled in June 2021)
-  - 'role-name'
-
-  # [var-naming] "defaults/main.yml" File defines variable 'apiVersion' that violates variable naming standards
-  # In Kubespray we use variables that use camelCase to match their k8s counterparts
-  # (Disabled in June 2021)
-  - 'var-naming'
-
-  # [fqcn-builtins]
-  # Roles in kubespray don't need fully qualified collection names
-  # (Disabled in Feb 2023)
-  - 'fqcn-builtins'
-
-  # We use template in names
-  - 'name[template]'
-
-  # No changed-when on commands
-  # (Disabled in June 2023 after ansible upgrade; FIXME)
-  - 'no-changed-when'
-
-  # Disable run-once check with free strategy
-  # (Disabled in June 2023 after ansible upgrade; FIXME)
-  - 'run-once[task]'
-
-exclude_paths:
-  # Generated files
-  - tests/files/custom_cni/cilium.yaml
-  - venv
+  # [E204]: "Lines should be no longer than 160 chars"
+  # This could be re-enabled with a major rewrite in the future.
+  # For now, there's not enough value gain from strictly limiting line length.
+  # (Disabled in May 2019)
+  - '204'
+
+  # [E701]: "meta/main.yml should contain relevant info"
+  # Roles in Kubespray are not intended to be used/imported by Ansible Galaxy.
+  # While it can be useful to have these metadata available, they are also available in the existing documentation.
+  # (Disabled in May 2019)
+  - '701'


@@ -1,8 +0,0 @@
-# This file contains ignores rule violations for ansible-lint
-inventory/sample/group_vars/k8s_cluster/k8s-cluster.yml jinja[spacing]
-roles/kubernetes/control-plane/defaults/main/kube-proxy.yml jinja[spacing]
-roles/kubernetes/control-plane/defaults/main/main.yml jinja[spacing]
-roles/kubernetes/kubeadm/defaults/main.yml jinja[spacing]
-roles/kubernetes/node/defaults/main.yml jinja[spacing]
-roles/kubernetes/preinstall/defaults/main.yml jinja[spacing]
-roles/kubespray-defaults/defaults/main.yaml jinja[spacing]

.gitignore

@@ -3,19 +3,14 @@
 **/vagrant_ansible_inventory
 *.iml
 temp
-contrib/offline/offline-files
-contrib/offline/offline-files.tar.gz
 .idea
-.vscode
 .tox
 .cache
 *.bak
 *.tfstate
-*.tfstate*backup
+*.tfstate.backup
-*.lock.hcl
 .terraform/
 contrib/terraform/aws/credentials.tfvars
-.terraform.lock.hcl
 /ssh-bastion.conf
 **/*.sw[pon]
 *~
@@ -104,17 +99,3 @@ target/
 # virtualenv
 venv/
 ENV/
-# molecule
-roles/**/molecule/**/__pycache__/
-# macOS
-.DS_Store
-# Temp location used by our scripts
-scripts/tmp/
-tmp.md
-# Ansible collection files
-kubernetes_sigs-kubespray*tar.gz
-ansible_collections


@@ -1,6 +1,5 @@
 ---
 stages:
-  - build
   - unit-tests
   - deploy-part1
   - moderator
@@ -9,15 +8,14 @@ stages:
   - deploy-special

 variables:
-  KUBESPRAY_VERSION: v2.22.1
+  KUBESPRAY_VERSION: v2.14.1
   FAILFASTCI_NAMESPACE: 'kargo-ci'
   GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
   ANSIBLE_FORCE_COLOR: "true"
   MAGIC: "ci check this"
-  TEST_ID: "$CI_PIPELINE_ID-$CI_JOB_ID"
+  TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
   CI_TEST_VARS: "./tests/files/${CI_JOB_NAME}.yml"
   CI_TEST_REGISTRY_MIRROR: "./tests/common/_docker_hub_registry_mirror.yml"
-  CI_TEST_SETTING: "./tests/common/_kubespray_test_settings.yml"
   GS_ACCESS_KEY_ID: $GS_KEY
   GS_SECRET_ACCESS_KEY: $GS_SECRET
   CONTAINER_ENGINE: docker
@@ -28,23 +26,22 @@ variables:
   ANSIBLE_INVENTORY: ./inventory/sample/${CI_JOB_NAME}-${BUILD_NUMBER}.ini
   IDEMPOT_CHECK: "false"
   RESET_CHECK: "false"
-  REMOVE_NODE_CHECK: "false"
   UPGRADE_TEST: "false"
   MITOGEN_ENABLE: "false"
   ANSIBLE_LOG_LEVEL: "-vv"
   RECOVER_CONTROL_PLANE_TEST: "false"
-  RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]"
-  TERRAFORM_VERSION: 1.3.7
-  PIPELINE_IMAGE: "$CI_REGISTRY_IMAGE/pipeline:${CI_PIPELINE_ID}-${CI_COMMIT_SHORT_SHA}"
+  RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube-master[1:]"

 before_script:
   - ./tests/scripts/rebase.sh
+  - update-alternatives --install /usr/bin/python python /usr/bin/python3 1
+  - python -m pip install -r tests/requirements.txt
   - mkdir -p /.ssh

 .job: &job
   tags:
     - packet
-  image: $PIPELINE_IMAGE
+  image: quay.io/kubespray/kubespray:$KUBESPRAY_VERSION
   artifacts:
     when: always
     paths:
@@ -52,8 +49,6 @@ before_script:
 .testcases: &testcases
   <<: *job
-  retry: 1
-  interruptible: true
   before_script:
     - update-alternatives --install /usr/bin/python python /usr/bin/python3 1
     - ./tests/scripts/rebase.sh
@@ -75,10 +70,8 @@ ci-authorized:
   only: []

 include:
-  - .gitlab-ci/build.yml
   - .gitlab-ci/lint.yml
   - .gitlab-ci/shellcheck.yml
   - .gitlab-ci/terraform.yml
   - .gitlab-ci/packet.yml
   - .gitlab-ci/vagrant.yml
-  - .gitlab-ci/molecule.yml


@@ -1,40 +0,0 @@
----
-.build:
-  stage: build
-  image:
-    name: moby/buildkit:rootless
-    entrypoint: [""]
-  variables:
-    BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
-  before_script:
-    - mkdir ~/.docker
-    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json
-
-pipeline image:
-  extends: .build
-  script:
-    - |
-      buildctl-daemonless.sh build \
-        --frontend=dockerfile.v0 \
-        --local context=. \
-        --local dockerfile=. \
-        --opt filename=./pipeline.Dockerfile \
-        --output type=image,name=$PIPELINE_IMAGE,push=true \
-        --import-cache type=registry,ref=$CI_REGISTRY_IMAGE/pipeline:cache
-  rules:
-    - if: '$CI_COMMIT_REF_NAME != $CI_DEFAULT_BRANCH'
-
-pipeline image and build cache:
-  extends: .build
-  script:
-    - |
-      buildctl-daemonless.sh build \
-        --frontend=dockerfile.v0 \
-        --local context=. \
-        --local dockerfile=. \
-        --opt filename=./pipeline.Dockerfile \
-        --output type=image,name=$PIPELINE_IMAGE,push=true \
-        --import-cache type=registry,ref=$CI_REGISTRY_IMAGE/pipeline:cache \
-        --export-cache type=registry,ref=$CI_REGISTRY_IMAGE/pipeline:cache,mode=max
-  rules:
-    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'


@@ -14,7 +14,7 @@ vagrant-validate:
   stage: unit-tests
   tags: [light]
   variables:
-    VAGRANT_VERSION: 2.3.7
+    VAGRANT_VERSION: 2.2.10
   script:
     - ./tests/scripts/vagrant-validate.sh
   except: ['triggers', 'master']
@@ -23,8 +23,9 @@ ansible-lint:
   extends: .job
   stage: unit-tests
   tags: [light]
-  script:
-    - ansible-lint -v
+  # lint every yml/yaml file that looks like it contains Ansible plays
+  script: |-
+    grep -Rl '^- hosts: \|^  hosts: ' --include \*.yml --include \*.yaml . | xargs -P 4 -n 25 ansible-lint -v
   except: ['triggers', 'master']
@@ -39,34 +40,20 @@ syntax-check:
     ANSIBLE_VERBOSITY: "3"
   script:
     - ansible-playbook --syntax-check cluster.yml
-    - ansible-playbook --syntax-check playbooks/cluster.yml
     - ansible-playbook --syntax-check upgrade-cluster.yml
-    - ansible-playbook --syntax-check playbooks/upgrade_cluster.yml
     - ansible-playbook --syntax-check reset.yml
-    - ansible-playbook --syntax-check playbooks/reset.yml
     - ansible-playbook --syntax-check extra_playbooks/upgrade-only-k8s.yml
   except: ['triggers', 'master']

-collection-build-install-sanity-check:
-  extends: .job
-  stage: unit-tests
-  tags: [light]
-  variables:
-    ANSIBLE_COLLECTIONS_PATH: "./ansible_collections"
-  script:
-    - ansible-galaxy collection build
-    - ansible-galaxy collection install kubernetes_sigs-kubespray-$(grep "^version:" galaxy.yml | awk '{print $2}').tar.gz
-    - ansible-galaxy collection list $(egrep -i '(name:\s+|namespace:\s+)' galaxy.yml | awk '{print $2}' | tr '\n' '.' | sed 's|\.$||g') | grep "^kubernetes_sigs.kubespray"
-    - test -f ansible_collections/kubernetes_sigs/kubespray/playbooks/cluster.yml
-    - test -f ansible_collections/kubernetes_sigs/kubespray/playbooks/reset.yml
-  except: ['triggers', 'master']
-
 tox-inventory-builder:
   stage: unit-tests
   tags: [light]
   extends: .job
   before_script:
     - ./tests/scripts/rebase.sh
+    - apt-get update && apt-get install -y python3-pip
+    - update-alternatives --install /usr/bin/python python /usr/bin/python3 10
+    - python -m pip install -r tests/requirements.txt
   script:
     - pip3 install tox
     - cd contrib/inventory_builder && tox
@@ -81,27 +68,6 @@ markdownlint:
   script:
     - markdownlint $(find . -name '*.md' | grep -vF './.git') --ignore docs/_sidebar.md --ignore contrib/dind/README.md

-check-readme-versions:
-  stage: unit-tests
-  tags: [light]
-  image: python:3
-  script:
-    - tests/scripts/check_readme_versions.sh
-
-check-galaxy-version:
-  stage: unit-tests
-  tags: [light]
-  image: python:3
-  script:
-    - tests/scripts/check_galaxy_version.sh
-
-check-typo:
-  stage: unit-tests
-  tags: [light]
-  image: python:3
-  script:
-    - tests/scripts/check_typo.sh
-
 ci-matrix:
   stage: unit-tests
   tags: [light]


@@ -1,83 +0,0 @@
----
-.molecule:
-  tags: [c3.small.x86]
-  only: [/^pr-.*$/]
-  except: ['triggers']
-  image: $PIPELINE_IMAGE
-  services: []
-  stage: deploy-part1
-  before_script:
-    - tests/scripts/rebase.sh
-    - ./tests/scripts/vagrant_clean.sh
-  script:
-    - ./tests/scripts/molecule_run.sh
-  after_script:
-    - chronic ./tests/scripts/molecule_logs.sh
-  artifacts:
-    when: always
-    paths:
-      - molecule_logs/
-
-# CI template for periodic CI jobs
-# Enabled when PERIODIC_CI_ENABLED var is set
-.molecule_periodic:
-  only:
-    variables:
-      - $PERIODIC_CI_ENABLED
-  allow_failure: true
-  extends: .molecule
-
-molecule_full:
-  extends: .molecule_periodic
-
-molecule_no_container_engines:
-  extends: .molecule
-  script:
-    - ./tests/scripts/molecule_run.sh -e container-engine
-  when: on_success
-
-molecule_docker:
-  extends: .molecule
-  script:
-    - ./tests/scripts/molecule_run.sh -i container-engine/cri-dockerd
-  when: on_success
-
-molecule_containerd:
-  extends: .molecule
-  script:
-    - ./tests/scripts/molecule_run.sh -i container-engine/containerd
-  when: on_success
-
-molecule_cri-o:
-  extends: .molecule
-  stage: deploy-part2
-  script:
-    - ./tests/scripts/molecule_run.sh -i container-engine/cri-o
-  allow_failure: true
-  when: on_success
-
-# Stage 3 container engines don't get as much attention so allow them to fail
-molecule_kata:
-  extends: .molecule
-  stage: deploy-part3
-  allow_failure: true
-  script:
-    - ./tests/scripts/molecule_run.sh -i container-engine/kata-containers
-  when: on_success
-
-molecule_gvisor:
-  extends: .molecule
-  stage: deploy-part3
-  allow_failure: true
-  script:
-    - ./tests/scripts/molecule_run.sh -i container-engine/gvisor
-  when: on_success
-
-molecule_youki:
-  extends: .molecule
-  stage: deploy-part3
-  allow_failure: true
-  script:
-    - ./tests/scripts/molecule_run.sh -i container-engine/youki
-  when: on_success


@@ -2,7 +2,6 @@
 .packet:
   extends: .testcases
   variables:
-    ANSIBLE_TIMEOUT: "120"
     CI_PLATFORM: packet
     SSH_USER: kubespray
   tags:
@@ -23,71 +22,59 @@
   allow_failure: true
   extends: .packet

-packet_cleanup_old:
-  stage: deploy-part1
-  extends: .packet_periodic
-  script:
-    - cd tests
-    - make cleanup-packet
-  after_script: []
-
-# The ubuntu20-calico-all-in-one jobs are meant as early stages to prevent running the full CI if something is horribly broken
-packet_ubuntu20-calico-all-in-one:
-  stage: deploy-part1
-  extends: .packet_pr
-  when: on_success
-  variables:
-    RESET_CHECK: "true"
+packet_ubuntu18-calico-aio:
+  stage: deploy-part1
+  extends: .packet_pr
+  when: on_success
+
+# Future AIO job
+packet_ubuntu20-calico-aio:
+  stage: deploy-part1
+  extends: .packet_pr
+  when: on_success

 # ### PR JOBS PART2

-packet_ubuntu20-all-in-one-docker:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
-packet_ubuntu20-calico-all-in-one-hardening:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
-packet_ubuntu22-all-in-one-docker:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
-packet_ubuntu22-calico-all-in-one:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
-packet_ubuntu22-calico-etcd-datastore:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
-packet_centos7-flannel-addons-ha:
-  extends: .packet_pr
-  stage: deploy-part2
-  when: on_success
-
-packet_almalinux8-crio:
-  extends: .packet_pr
-  stage: deploy-part2
-  when: on_success
-  allow_failure: true
-
-packet_ubuntu20-crio:
-  extends: .packet_pr
-  stage: deploy-part2
-  when: manual
-
-packet_fedora37-crio:
-  extends: .packet_pr
-  stage: deploy-part2
-  when: manual
-
-packet_ubuntu20-flannel-ha:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
+packet_centos7-flannel-containerd-addons-ha:
+  stage: deploy-part2
+  extends: .packet_pr
+  when: on_success
+  variables:
+    MITOGEN_ENABLE: "true"
+
+packet_centos8-crio:
+  stage: deploy-part2
+  extends: .packet_pr
+  when: on_success
+
+packet_ubuntu18-crio:
+  extends: .packet_pr
+  stage: deploy-part2
+  when: manual
+  variables:
+    MITOGEN_ENABLE: "true"
+
+packet_ubuntu16-canal-kubeadm-ha:
+  stage: deploy-part2
+  extends: .packet_periodic
+  when: on_success
+
+packet_ubuntu16-canal-sep:
+  stage: deploy-special
+  extends: .packet_pr
+  when: manual
+
+packet_ubuntu16-flannel-ha:
+  stage: deploy-part2
+  extends: .packet_pr
+  when: manual
+
+packet_ubuntu16-kube-router-sep:
+  stage: deploy-part2
+  extends: .packet_pr
+  when: manual
+
+packet_ubuntu16-kube-router-svc-proxy:
+  stage: deploy-part2
+  extends: .packet_pr
+  when: manual
@@ -97,40 +84,12 @@ packet_debian10-cilium-svc-proxy:
   extends: .packet_periodic
   when: on_success

-packet_debian10-calico:
+packet_debian10-containerd:
   stage: deploy-part2
   extends: .packet_pr
   when: on_success
+  variables:
+    MITOGEN_ENABLE: "true"

-packet_debian10-docker:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
-packet_debian11-calico:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
-packet_debian11-docker:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
-packet_debian12-calico:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
-packet_debian12-docker:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
-packet_debian12-cilium:
-  stage: deploy-part2
-  extends: .packet_periodic
-  when: on_success

 packet_centos7-calico-ha-once-localhost:
   stage: deploy-part2
@@ -142,73 +101,54 @@ packet_centos7-calico-ha-once-localhost:
   services:
     - docker:19.03.9-dind

-packet_almalinux8-kube-ovn:
-  stage: deploy-part2
-  extends: .packet_periodic
-  when: on_success
-
-packet_centos8-calico:
+packet_centos8-kube-ovn:
   stage: deploy-part2
   extends: .packet_pr
   when: on_success

-packet_almalinux8-calico:
+packet_fedora32-weave:
   stage: deploy-part2
   extends: .packet_pr
   when: on_success

-packet_rockylinux8-calico:
+packet_opensuse-canal:
   stage: deploy-part2
-  extends: .packet_pr
+  extends: .packet_periodic
   when: on_success

-packet_rockylinux9-calico:
+packet_ubuntu18-ovn4nfv:
   stage: deploy-part2
-  extends: .packet_pr
+  extends: .packet_periodic
   when: on_success

-packet_rockylinux9-cilium:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-  variables:
-    RESET_CHECK: "true"
-
-packet_almalinux8-docker:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
-packet_fedora38-docker-weave:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-  allow_failure: true
-
-packet_opensuse-docker-cilium:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual

 # ### MANUAL JOBS

-packet_ubuntu20-docker-weave-sep:
+packet_ubuntu16-weave-sep:
   stage: deploy-part2
   extends: .packet_pr
   when: manual

-packet_ubuntu20-cilium-sep:
+packet_ubuntu18-cilium-sep:
   stage: deploy-special
   extends: .packet_pr
   when: manual

-packet_ubuntu20-flannel-ha-once:
+packet_ubuntu18-flannel-containerd-ha:
   stage: deploy-part2
   extends: .packet_pr
   when: manual

-# Calico HA eBPF
-packet_almalinux8-calico-ha-ebpf:
+packet_ubuntu18-flannel-containerd-ha-once:
   stage: deploy-part2
   extends: .packet_pr
   when: manual

-packet_debian10-macvlan:
+packet_debian9-macvlan:
   stage: deploy-part2
   extends: .packet_pr
   when: manual
@@ -218,53 +158,38 @@ packet_centos7-calico-ha:
   extends: .packet_pr
   when: manual

-packet_centos7-kube-router:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-
 packet_centos7-multus-calico:
   stage: deploy-part2
   extends: .packet_pr
   when: manual

-packet_fedora38-docker-calico:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-
-packet_fedora33-calico:
+packet_oracle7-canal-ha:
   stage: deploy-part2
   extends: .packet_periodic
   when: on_success
   variables:
-    RESET_CHECK: "true"
+    MITOGEN_ENABLE: "true"

-packet_fedora37-calico-selinux:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-
-packet_fedora32-kube-ovn-containerd:
+packet_amazon-linux-2-aio:
   stage: deploy-part2
   extends: .packet_periodic
   when: on_success

-packet_fedora37-calico-swap-selinux:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-
-packet_amazon-linux-2-all-in-one:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-
-packet_almalinux8-calico-nodelocaldns-secondary:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-
-packet_fedora38-kube-ovn:
-  stage: deploy-part2
-  extends: .packet_periodic
-  when: on_success
-
-packet_debian11-custom-cni:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-
-packet_debian11-kubelet-csr-approver:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual

 # ### PR JOBS PART3
 # Long jobs (45min+)
@@ -274,59 +199,36 @@ packet_centos7-weave-upgrade-ha:
   when: on_success
   variables:
     UPGRADE_TEST: basic
+    MITOGEN_ENABLE: "false"

-packet_ubuntu20-calico-etcd-kubeadm-upgrade-ha:
-  stage: deploy-part3
-  extends: .packet_periodic
-  when: on_success
-  variables:
-    UPGRADE_TEST: basic
-
-# Calico HA Wireguard
-packet_ubuntu20-calico-ha-wireguard:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-
-packet_debian11-calico-upgrade:
+packet_debian9-calico-upgrade:
   stage: deploy-part3
   extends: .packet_pr
   when: on_success
   variables:
     UPGRADE_TEST: graceful
+    MITOGEN_ENABLE: "false"

-packet_almalinux8-calico-remove-node:
-  stage: deploy-part3
-  extends: .packet_pr
-  when: on_success
-  variables:
-    REMOVE_NODE_CHECK: "true"
-    REMOVE_NODE_NAME: "instance-3"
-
-packet_ubuntu20-calico-etcd-kubeadm:
-  stage: deploy-part3
-  extends: .packet_pr
-  when: on_success
-
-packet_debian11-calico-upgrade-once:
+packet_debian9-calico-upgrade-once:
   stage: deploy-part3
   extends: .packet_periodic
   when: on_success
   variables:
     UPGRADE_TEST: graceful
+    MITOGEN_ENABLE: "false"

-packet_ubuntu20-calico-ha-recover:
+packet_ubuntu18-calico-ha-recover:
   stage: deploy-part3
   extends: .packet_periodic
   when: on_success
   variables:
     RECOVER_CONTROL_PLANE_TEST: "true"
-    RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]"
+    RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube-master[1:]"

-packet_ubuntu20-calico-ha-recover-noquorum:
+packet_ubuntu18-calico-ha-recover-noquorum:
   stage: deploy-part3
   extends: .packet_periodic
   when: on_success
   variables:
     RECOVER_CONTROL_PLANE_TEST: "true"
-    RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[1:]:kube_control_plane[1:]"
+    RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[1:],kube-master[1:]"


@@ -11,6 +11,6 @@ shellcheck:
     - cp shellcheck-"${SHELLCHECK_VERSION}"/shellcheck /usr/bin/
     - shellcheck --version
   script:
-    # Run shellcheck for all *.sh
-    - find . -name '*.sh' -not -path './.git/*' | xargs shellcheck --severity error
+    # Run shellcheck for all *.sh except contrib/
+    - find . -name '*.sh' -not -path './contrib/*' -not -path './.git/*' | xargs shellcheck --severity error
   except: ['triggers', 'master']


@@ -12,13 +12,13 @@
   # Prepare inventory
   - cp contrib/terraform/$PROVIDER/sample-inventory/cluster.tfvars .
   - ln -s contrib/terraform/$PROVIDER/hosts
-  - terraform -chdir="contrib/terraform/$PROVIDER" init
+  - terraform init contrib/terraform/$PROVIDER
   # Copy SSH keypair
   - mkdir -p ~/.ssh
   - echo "$PACKET_PRIVATE_KEY" | base64 -d > ~/.ssh/id_rsa
   - chmod 400 ~/.ssh/id_rsa
   - echo "$PACKET_PUBLIC_KEY" | base64 -d > ~/.ssh/id_rsa.pub
-  - mkdir -p contrib/terraform/$PROVIDER/group_vars
+  - mkdir -p group_vars
   # Random subnet to avoid routing conflicts
   - export TF_VAR_subnet_cidr="10.$(( $RANDOM % 256 )).$(( $RANDOM % 256 )).0/24"
@@ -28,8 +28,8 @@
   tags: [light]
   only: ['master', /^pr-.*$/]
   script:
-    - terraform -chdir="contrib/terraform/$PROVIDER" validate
-    - terraform -chdir="contrib/terraform/$PROVIDER" fmt -check -diff
+    - terraform validate -var-file=cluster.tfvars contrib/terraform/$PROVIDER
+    - terraform fmt -check -diff contrib/terraform/$PROVIDER

 .terraform_apply:
   extends: .terraform_install
@@ -56,69 +56,93 @@
 tf-validate-openstack:
   extends: .terraform_validate
   variables:
-    TF_VERSION: $TERRAFORM_VERSION
+    TF_VERSION: 0.12.29
     PROVIDER: openstack
     CLUSTER: $CI_COMMIT_REF_NAME

-tf-validate-equinix:
+tf-validate-packet:
   extends: .terraform_validate
   variables:
-    TF_VERSION: $TERRAFORM_VERSION
-    PROVIDER: equinix
+    TF_VERSION: 0.12.29
+    PROVIDER: packet
     CLUSTER: $CI_COMMIT_REF_NAME

 tf-validate-aws:
   extends: .terraform_validate
   variables:
-    TF_VERSION: $TERRAFORM_VERSION
+    TF_VERSION: 0.12.29
     PROVIDER: aws
     CLUSTER: $CI_COMMIT_REF_NAME

-tf-validate-exoscale:
-  extends: .terraform_validate
-  variables:
-    TF_VERSION: $TERRAFORM_VERSION
-    PROVIDER: exoscale
-
-tf-validate-hetzner:
-  extends: .terraform_validate
-  variables:
-    TF_VERSION: $TERRAFORM_VERSION
-    PROVIDER: hetzner
-
-tf-validate-vsphere:
-  extends: .terraform_validate
-  variables:
-    TF_VERSION: $TERRAFORM_VERSION
-    PROVIDER: vsphere
-    CLUSTER: $CI_COMMIT_REF_NAME
-
-tf-validate-upcloud:
-  extends: .terraform_validate
-  variables:
-    TF_VERSION: $TERRAFORM_VERSION
-    PROVIDER: upcloud
-    CLUSTER: $CI_COMMIT_REF_NAME
-
-tf-validate-nifcloud:
-  extends: .terraform_validate
-  variables:
-    TF_VERSION: $TERRAFORM_VERSION
-    PROVIDER: nifcloud
+tf-0.13.x-validate-openstack:
+  extends: .terraform_validate
+  variables:
+    TF_VERSION: 0.13.5
+    PROVIDER: openstack
+    CLUSTER: $CI_COMMIT_REF_NAME
+
+tf-0.13.x-validate-packet:
+  extends: .terraform_validate
+  variables:
+    TF_VERSION: 0.13.5
+    PROVIDER: packet
+    CLUSTER: $CI_COMMIT_REF_NAME
+
+tf-0.13.x-validate-aws:
+  extends: .terraform_validate
+  variables:
+    TF_VERSION: 0.13.5
+    PROVIDER: aws
+    CLUSTER: $CI_COMMIT_REF_NAME
+
+tf-0.14.x-validate-openstack:
+  extends: .terraform_validate
+  variables:
+    TF_VERSION: 0.14.3
+    PROVIDER: openstack
+    CLUSTER: $CI_COMMIT_REF_NAME
+
+tf-0.14.x-validate-packet:
+  extends: .terraform_validate
+  variables:
+    TF_VERSION: 0.14.3
+    PROVIDER: packet
+    CLUSTER: $CI_COMMIT_REF_NAME
+
+tf-0.14.x-validate-aws:
+  extends: .terraform_validate
+  variables:
+    TF_VERSION: 0.14.3
+    PROVIDER: aws
+    CLUSTER: $CI_COMMIT_REF_NAME

-# tf-packet-ubuntu20-default:
+# tf-packet-ubuntu16-default:
 #   extends: .terraform_apply
 #   variables:
-#     TF_VERSION: $TERRAFORM_VERSION
+#     TF_VERSION: 0.12.29
 #     PROVIDER: packet
 #     CLUSTER: $CI_COMMIT_REF_NAME
 #     TF_VAR_number_of_k8s_masters: "1"
 #     TF_VAR_number_of_k8s_nodes: "1"
 #     TF_VAR_plan_k8s_masters: t1.small.x86
 #     TF_VAR_plan_k8s_nodes: t1.small.x86
-#     TF_VAR_metro: am
+#     TF_VAR_facility: ewr1
 #     TF_VAR_public_key_path: ""
-#     TF_VAR_operating_system: ubuntu_20_04
+#     TF_VAR_operating_system: ubuntu_16_04
+#
+# tf-packet-ubuntu18-default:
+#   extends: .terraform_apply
+#   variables:
+#     TF_VERSION: 0.12.29
+#     PROVIDER: packet
+#     CLUSTER: $CI_COMMIT_REF_NAME
+#     TF_VAR_number_of_k8s_masters: "1"
+#     TF_VAR_number_of_k8s_nodes: "1"
+#     TF_VAR_plan_k8s_masters: t1.small.x86
+#     TF_VAR_plan_k8s_nodes: t1.small.x86
+#     TF_VAR_facility: ams1
+#     TF_VAR_public_key_path: ""
+#     TF_VAR_operating_system: ubuntu_18_04

 .ovh_variables: &ovh_variables
   OS_AUTH_URL: https://auth.cloud.ovh.net/v3
@@ -144,6 +168,10 @@ tf-validate-nifcloud:
   OS_INTERFACE: public
   OS_IDENTITY_API_VERSION: "3"
   TF_VAR_router_id: "ab95917c-41fb-4881-b507-3a6dfe9403df"
+  # Since ELASTX is in Stockholm, Mitogen helps with latency
+  MITOGEN_ENABLE: "false"
+  # Mitogen doesn't support interpreter discovery yet
+  ANSIBLE_PYTHON_INTERPRETER: "/usr/bin/python3"

 tf-elastx_cleanup:
   stage: unit-tests
@@ -156,14 +184,13 @@ tf-elastx_cleanup:
   script:
     - ./scripts/openstack-cleanup/main.py

-tf-elastx_ubuntu20-calico:
+tf-elastx_ubuntu18-calico:
   extends: .terraform_apply
   stage: deploy-part3
   when: on_success
-  allow_failure: true
   variables:
     <<: *elastx_variables
-    TF_VERSION: $TERRAFORM_VERSION
+    TF_VERSION: 0.12.29
     PROVIDER: openstack
     CLUSTER: $CI_COMMIT_REF_NAME
     ANSIBLE_TIMEOUT: "60"
@@ -186,48 +213,47 @@ tf-elastx_ubuntu20-calico:
   TF_VAR_az_list_node: '["sto1"]'
   TF_VAR_flavor_k8s_master: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
   TF_VAR_flavor_k8s_node: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
-  TF_VAR_image: ubuntu-20.04-server-latest
+  TF_VAR_image: ubuntu-18.04-server-latest
   TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'

-# OVH voucher expired, commenting job until things are sorted out
-# tf-ovh_cleanup:
-#   stage: unit-tests
-#   tags: [light]
-#   image: python
-#   environment: ovh
-#   variables:
-#     <<: *ovh_variables
-#   before_script:
-#     - pip install -r scripts/openstack-cleanup/requirements.txt
-#   script:
-#     - ./scripts/openstack-cleanup/main.py
+tf-ovh_cleanup:
+  stage: unit-tests
+  tags: [light]
+  image: python
+  environment: ovh
+  variables:
+    <<: *ovh_variables
+  before_script:
+    - pip install -r scripts/openstack-cleanup/requirements.txt
+  script:
+    - ./scripts/openstack-cleanup/main.py

-# tf-ovh_ubuntu20-calico:
-#   extends: .terraform_apply
-#   when: on_success
-#   environment: ovh
-#   variables:
-#     <<: *ovh_variables
-#     TF_VERSION: $TERRAFORM_VERSION
-#     PROVIDER: openstack
-#     CLUSTER: $CI_COMMIT_REF_NAME
-#     ANSIBLE_TIMEOUT: "60"
-#     SSH_USER: ubuntu
-#     TF_VAR_number_of_k8s_masters: "0"
-#     TF_VAR_number_of_k8s_masters_no_floating_ip: "1"
-#     TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
-#     TF_VAR_number_of_etcd: "0"
-#     TF_VAR_number_of_k8s_nodes: "0"
-#     TF_VAR_number_of_k8s_nodes_no_floating_ip: "1"
-#     TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
-#     TF_VAR_number_of_bastions: "0"
-#     TF_VAR_number_of_k8s_masters_no_etcd: "0"
-#     TF_VAR_use_neutron: "0"
-#     TF_VAR_floatingip_pool: "Ext-Net"
-#     TF_VAR_external_net: "6011fbc9-4cbf-46a4-8452-6890a340b60b"
-#     TF_VAR_network_name: "Ext-Net"
-#     TF_VAR_flavor_k8s_master: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
-#     TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
-#     TF_VAR_image: "Ubuntu 20.04"
-#     TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
+tf-ovh_ubuntu18-calico:
+  extends: .terraform_apply
+  when: on_success
+  environment: ovh
+  variables:
+    <<: *ovh_variables
+    TF_VERSION: 0.12.29
+    PROVIDER: openstack
+    CLUSTER: $CI_COMMIT_REF_NAME
+    ANSIBLE_TIMEOUT: "60"
+    SSH_USER: ubuntu
+    TF_VAR_number_of_k8s_masters: "0"
+    TF_VAR_number_of_k8s_masters_no_floating_ip: "1"
+    TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
+    TF_VAR_number_of_etcd: "0"
+    TF_VAR_number_of_k8s_nodes: "0"
+    TF_VAR_number_of_k8s_nodes_no_floating_ip: "1"
+    TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
+    TF_VAR_number_of_bastions: "0"
+    TF_VAR_number_of_k8s_masters_no_etcd: "0"
+    TF_VAR_use_neutron: "0"
+    TF_VAR_floatingip_pool: "Ext-Net"
+    TF_VAR_external_net: "6011fbc9-4cbf-46a4-8452-6890a340b60b"
+    TF_VAR_network_name: "Ext-Net"
+    TF_VAR_flavor_k8s_master: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
+    TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
+    TF_VAR_image: "Ubuntu 18.04"
+    TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'


@@ -1,5 +1,21 @@
 ---
+molecule_tests:
+  tags: [c3.small.x86]
+  only: [/^pr-.*$/]
+  except: ['triggers']
+  image: quay.io/kubespray/vagrant:$KUBESPRAY_VERSION
+  services: []
+  stage: deploy-part1
+  before_script:
+    - tests/scripts/rebase.sh
+    - apt-get update && apt-get install -y python3-pip
+    - update-alternatives --install /usr/bin/python python /usr/bin/python3 10
+    - python -m pip install -r tests/requirements.txt
+    - ./tests/scripts/vagrant_clean.sh
+  script:
+    - ./tests/scripts/molecule_run.sh
+
 .vagrant:
   extends: .testcases
   variables:
@@ -10,22 +26,24 @@
   tags: [c3.small.x86]
   only: [/^pr-.*$/]
   except: ['triggers']
-  image: $PIPELINE_IMAGE
+  image: quay.io/kubespray/vagrant:$KUBESPRAY_VERSION
   services: []
   before_script:
+    - apt-get update && apt-get install -y python3-pip
+    - update-alternatives --install /usr/bin/python python /usr/bin/python3 10
+    - python -m pip install -r tests/requirements.txt
     - ./tests/scripts/vagrant_clean.sh
   script:
     - ./tests/scripts/testcases_run.sh
   after_script:
     - chronic ./tests/scripts/testcases_cleanup.sh
+  allow_failure: true

-vagrant_ubuntu20-calico-dual-stack:
+vagrant_ubuntu18-flannel:
   stage: deploy-part2
   extends: .vagrant
   when: on_success

-vagrant_ubuntu20-weave-medium:
+vagrant_ubuntu18-weave-medium:
   stage: deploy-part2
   extends: .vagrant
   when: manual
@@ -34,30 +52,3 @@ vagrant_ubuntu20-flannel:
   stage: deploy-part2
   extends: .vagrant
   when: on_success
-  allow_failure: false
-
-vagrant_ubuntu20-flannel-collection:
-  stage: deploy-part2
-  extends: .vagrant
-  when: on_success
-
-vagrant_ubuntu20-kube-router-sep:
-  stage: deploy-part2
-  extends: .vagrant
-  when: manual
-
-# Service proxy test fails connectivity testing
-vagrant_ubuntu20-kube-router-svc-proxy:
-  stage: deploy-part2
-  extends: .vagrant
-  when: manual
-
-vagrant_fedora37-kube-router:
-  stage: deploy-part2
-  extends: .vagrant
-  when: on_success
-
-vagrant_centos7-kube-router:
-  stage: deploy-part2
-  extends: .vagrant
-  when: manual


@@ -1,3 +1,2 @@
 ---
 MD013: false
-MD029: false


@@ -1,71 +0,0 @@
----
-repos:
-  - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v3.4.0
-    hooks:
-      - id: check-added-large-files
-      - id: check-case-conflict
-      - id: check-executables-have-shebangs
-      - id: check-xml
-      - id: check-merge-conflict
-      - id: detect-private-key
-      - id: end-of-file-fixer
-      - id: forbid-new-submodules
-      - id: requirements-txt-fixer
-      - id: trailing-whitespace
-
-  - repo: https://github.com/adrienverge/yamllint.git
-    rev: v1.27.1
-    hooks:
-      - id: yamllint
-        args: [--strict]
-
-  - repo: https://github.com/markdownlint/markdownlint
-    rev: v0.11.0
-    hooks:
-      - id: markdownlint
-        args: [ -r, "~MD013,~MD029" ]
-        exclude: "^.git"
-
-  - repo: https://github.com/jumanjihouse/pre-commit-hooks
-    rev: 3.0.0
-    hooks:
-      - id: shellcheck
-        args: [ --severity, "error" ]
-        exclude: "^.git"
-        files: "\\.sh$"
-
-  - repo: local
-    hooks:
-      - id: ansible-lint
-        name: ansible-lint
-        entry: ansible-lint -v
-        language: python
-        pass_filenames: false
-        additional_dependencies:
-          - .[community]
-
-      - id: ansible-syntax-check
-        name: ansible-syntax-check
-        entry: env ANSIBLE_INVENTORY=inventory/local-tests.cfg ANSIBLE_REMOTE_USER=root ANSIBLE_BECOME="true" ANSIBLE_BECOME_USER=root ANSIBLE_VERBOSITY="3" ansible-playbook --syntax-check
-        language: python
-        files: "^cluster.yml|^upgrade-cluster.yml|^reset.yml|^extra_playbooks/upgrade-only-k8s.yml"
-
-      - id: tox-inventory-builder
-        name: tox-inventory-builder
-        entry: bash -c "cd contrib/inventory_builder && tox"
-        language: python
-        pass_filenames: false
-
-      - id: check-readme-versions
-        name: check-readme-versions
-        entry: tests/scripts/check_readme_versions.sh
-        language: script
-        pass_filenames: false
-
-      - id: ci-matrix
-        name: ci-matrix
-        entry: tests/scripts/md-table/test.sh
-        language: script
-        pass_filenames: false


@@ -3,8 +3,6 @@ extends: default
 ignore: |
   .git/
-  # Generated file
-  tests/files/custom_cni/cilium.yaml

 rules:
   braces:


@@ -1 +0,0 @@
-# See our release notes on [GitHub](https://github.com/kubernetes-sigs/kubespray/releases)

CNAME

@@ -1 +1 @@
 kubespray.io


@@ -6,23 +6,11 @@
 It is recommended to use filter to manage the GitHub email notification, see [examples for setting filters to Kubernetes Github notifications](https://github.com/kubernetes/community/blob/master/communication/best-practices.md#examples-for-setting-filters-to-kubernetes-github-notifications)

-To install development dependencies you can set up a python virtual env with the necessary dependencies:
-
-```ShellSession
-virtualenv venv
-source venv/bin/activate
-pip install -r tests/requirements.txt
-ansible-galaxy install -r tests/requirements.yml
-```
+To install development dependencies you can use `pip install -r tests/requirements.txt`

 #### Linting

-Kubespray uses [pre-commit](https://pre-commit.com) hook configuration to run several linters, please install this tool and use it to run validation tests before submitting a PR.
-
-```ShellSession
-pre-commit install
-pre-commit run -a # To run pre-commit hook on all files in the repository, even if they were not modified
-```
+Kubespray uses `yamllint` and `ansible-lint`. To run them locally use `yamllint .` and `ansible-lint`

 #### Molecule

@@ -39,9 +27,5 @@ Vagrant with VirtualBox or libvirt driver helps you to quickly spin test cluster
 1. Submit an issue describing your proposed change to the repo in question.
 2. The [repo owners](OWNERS) will respond to your issue promptly.
 3. Fork the desired repo, develop and test your code changes.
-4. Install [pre-commit](https://pre-commit.com) and install it in your development repo.
-5. Address any pre-commit validation failures.
-6. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
-7. Submit a pull request.
-8. Work with the reviewers on their suggestions.
-9. Ensure to rebase to the HEAD of your target branch and squash un-necessary commits (<https://blog.carbonfive.com/always-squash-and-rebase-your-git-commits/>) before final merger of your contribution.
+4. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
+5. Submit a pull request.


@@ -1,45 +1,25 @@
-# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
-FROM ubuntu:jammy-20230308
-
-# Some tools like yamllint need this
-# Pip needs this as well at the moment to install ansible
-# (and potentially other packages)
-# See: https://github.com/pypa/pip/issues/10219
-ENV LANG=C.UTF-8 \
-    DEBIAN_FRONTEND=noninteractive \
-    PYTHONDONTWRITEBYTECODE=1
-
-WORKDIR /kubespray
-COPY *.yml ./
-COPY *.cfg ./
-COPY roles ./roles
-COPY contrib ./contrib
-COPY inventory ./inventory
-COPY library ./library
-COPY extra_playbooks ./extra_playbooks
-COPY playbooks ./playbooks
-COPY plugins ./plugins
-
-RUN apt update -q \
-    && apt install -yq --no-install-recommends \
-       curl \
-       python3 \
-       python3-pip \
-       sshpass \
-       vim \
-       rsync \
-       openssh-client \
-    && pip install --no-compile --no-cache-dir \
-       ansible==7.6.0 \
-       ansible-core==2.14.6 \
-       cryptography==41.0.1 \
-       jinja2==3.1.2 \
-       netaddr==0.8.0 \
-       jmespath==1.0.1 \
-       MarkupSafe==2.1.3 \
-       ruamel.yaml==0.17.21 \
-       passlib==1.7.4 \
-    && KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
-    && curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
-    && echo $(curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
-    && chmod a+x /usr/local/bin/kubectl \
-    && rm -rf /var/lib/apt/lists/* /var/log/* \
-    && find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;
+# Use immutable image tags rather than mutable tags (like ubuntu:18.04)
+FROM ubuntu:bionic-20200807
+
+ENV KUBE_VERSION=v1.19.9
+
+RUN mkdir /kubespray
+WORKDIR /kubespray
+RUN apt update -y && \
+    apt install -y \
+    libssl-dev python3-dev sshpass apt-transport-https jq moreutils \
+    ca-certificates curl gnupg2 software-properties-common python3-pip rsync
+RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
+    add-apt-repository \
+    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
+    $(lsb_release -cs) \
+    stable" \
+    && apt update -y && apt-get install docker-ce -y
+COPY . .
+RUN /usr/bin/python3 -m pip install pip -U && /usr/bin/python3 -m pip install -r tests/requirements.txt && python3 -m pip install -r requirements.txt && update-alternatives --install /usr/bin/python python /usr/bin/python3 1
+
+RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$KUBE_VERSION/bin/linux/amd64/kubectl \
+    && chmod a+x kubectl && cp kubectl /usr/local/bin/kubectl
+
+# Some tools like yamllint need this
+ENV LANG=C.UTF-8
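Either variant of this Dockerfile can be smoke-tested with a plain build; a minimal sketch, with an illustrative image tag:

```ShellSession
# Illustrative tag; run from the kubespray checkout root
docker build -t kubespray:local .
docker run --rm kubespray:local kubectl version --client
```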


@@ -187,7 +187,7 @@
    identification within third-party archives.

    Copyright 2016 Kubespray

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at


@@ -1,7 +1,5 @@
 mitogen:
-	@echo Mitogen support is deprecated.
-	@echo Please run the following command manually:
-	@echo ansible-playbook -c local mitogen.yml -vv
+	ansible-playbook -c local mitogen.yml -vv
 clean:
 	rm -rf dist/
 	rm *.retry

OWNERS

@@ -5,4 +5,4 @@ approvers:
 reviewers:
   - kubespray-reviewers
 emeritus_approvers:
   - kubespray-emeritus_approvers


@@ -4,28 +4,15 @@ aliases:
     - chadswen
     - mirwan
     - miouge1
+    - woopstar
     - luckysb
     - floryut
-    - oomichi
-    - cristicalin
-    - liupeng0518
-    - yankay
-    - mzaian
   kubespray-reviewers:
     - holmsten
     - bozzo
     - eppo
     - oomichi
-    - jayonlau
-    - cristicalin
-    - liupeng0518
-    - yankay
-    - cyclinder
-    - mzaian
-    - mrfreezeex
-    - erikjiang
   kubespray-emeritus_approvers:
     - riverzhang
     - atoms
     - ant31
-    - woopstar

README.md

@@ -5,7 +5,7 @@
 If you have questions, check the documentation at [kubespray.io](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
 You can get your invite [here](http://slack.k8s.io/)

-- Can be deployed on **[AWS](docs/aws.md), GCE, [Azure](docs/azure.md), [OpenStack](docs/openstack.md), [vSphere](docs/vsphere.md), [Equinix Metal](docs/equinix-metal.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
+- Can be deployed on **[AWS](docs/aws.md), GCE, [Azure](docs/azure.md), [OpenStack](docs/openstack.md), [vSphere](docs/vsphere.md), [Packet](docs/packet.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
 - **Highly available** cluster
 - **Composable** (Choice of the network plugin for instance)
 - Supports most popular **Linux distributions**

@@ -13,16 +13,16 @@ You can get your invite [here](http://slack.k8s.io/)

 ## Quick Start

-Below are several ways to use Kubespray to deploy a Kubernetes cluster.
+To deploy the cluster you can use :

 ### Ansible

 #### Usage

-Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
-then run the following steps:
-
 ```ShellSession
+# Install dependencies from ``requirements.txt``
+sudo pip3 install -r requirements.txt
+
 # Copy ``inventory/sample`` as ``inventory/mycluster``
 cp -rfp inventory/sample inventory/mycluster

@@ -32,14 +32,7 @@ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inv

 # Review and change parameters under ``inventory/mycluster/group_vars``
 cat inventory/mycluster/group_vars/all/all.yml
-cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
-
-# Clean up old Kubernetes cluster with Ansible Playbook - run the playbook as root
-# The option `--become` is required, as for example cleaning up SSL keys in /etc/,
-# uninstalling old packages and interacting with various systemd daemons.
-# Without --become the playbook will fail to run!
-# And be mind it will remove the current kubernetes cluster (if it's running)!
-ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root reset.yml
+cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml

 # Deploy Kubespray with Ansible Playbook - run the playbook as root
 # The option `--become` is required, as for example writing SSL keys in /etc/,
@@ -48,61 +41,32 @@ ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root
 ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
 ```
-Note: When Ansible is already installed via system packages on the control node,
-Python packages installed via `sudo pip install -r requirements.txt` will go to
-a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on
-Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on
-Ubuntu). As a consequence, the `ansible-playbook` command will fail with:
+Note: When Ansible is already installed via system packages on the control machine, other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on Ubuntu). As a consequence, `ansible-playbook` command will fail with:

 ```raw
 ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
 ```

-This likely indicates that a task depends on a module present in ``requirements.txt``.
-
-One way of addressing this is to uninstall the system Ansible package then
-reinstall Ansible via ``pip``, but this not always possible and one must
-take care regarding package versions.
-A workaround consists of setting the `ANSIBLE_LIBRARY`
-and `ANSIBLE_MODULE_UTILS` environment variables respectively to
-the `ansible/modules` and `ansible/module_utils` subdirectories of the ``pip``
-installation location, which is the ``Location`` shown by running
-`pip show [package]` before executing `ansible-playbook`.
+probably pointing on a task depending on a module present in requirements.txt (i.e. "unseal vault").
+
+One way of solving this would be to uninstall the Ansible package and then, to install it via pip but it is not always possible.
+A workaround consists of setting `ANSIBLE_LIBRARY` and `ANSIBLE_MODULE_UTILS` environment variables respectively to the `ansible/modules` and `ansible/module_utils` subdirectories of pip packages installation location, which can be found in the Location field of the output of `pip show [package]` before executing `ansible-playbook`.
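That workaround can be scripted; a minimal sketch (the `pip show` parsing is illustrative, not from the README itself):

```ShellSession
# Illustrative only: point Ansible at the pip-installed module tree
PIP_LOCATION=$(pip show ansible | sed -n 's/^Location: //p')
export ANSIBLE_LIBRARY="$PIP_LOCATION/ansible/modules"
export ANSIBLE_MODULE_UTILS="$PIP_LOCATION/ansible/module_utils"
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```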
-A simple way to ensure you get all the correct version of Ansible is to use
-the [pre-built docker image from Quay](https://quay.io/repository/kubespray/kubespray?tab=tags).
-You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mounts/)
-to access the inventory and SSH key in the container, like this:
-
-```ShellSession
-git checkout v2.22.1
-docker pull quay.io/kubespray/kubespray:v2.22.1
-docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
-  --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
-  quay.io/kubespray/kubespray:v2.22.1 bash
-# Inside the container you may now run the kubespray playbooks:
-ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
-```
-
-#### Collection
-
-See [here](docs/ansible_collection.md) if you wish to use this repository as an Ansible collection
-
 ### Vagrant

-For Vagrant we need to install Python dependencies for provisioning tasks.
-Check that ``Python`` and ``pip`` are installed:
+For Vagrant we need to install python dependencies for provisioning tasks.
+Check if Python and pip are installed:

 ```ShellSession
 python -V && pip -V
 ```

 If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>

-Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
-then run the following step:
+Install the necessary requirements

 ```ShellSession
+sudo pip install -r requirements.txt
 vagrant up
 ```
@@ -129,92 +93,69 @@ vagrant up

 - [AWS](docs/aws.md)
 - [Azure](docs/azure.md)
 - [vSphere](docs/vsphere.md)
-- [Equinix Metal](docs/equinix-metal.md)
+- [Packet Host](docs/packet.md)
 - [Large deployments](docs/large-deployments.md)
 - [Adding/replacing a node](docs/nodes.md)
 - [Upgrades basics](docs/upgrades.md)
 - [Air-Gap installation](docs/offline-environment.md)
-- [NTP](docs/ntp.md)
-- [Hardening](docs/hardening.md)
-- [Mirror](docs/mirror.md)
 - [Roadmap](docs/roadmap.md)

 ## Supported Linux Distributions

 - **Flatcar Container Linux by Kinvolk**
-- **Debian** Bookworm, Bullseye, Buster
-- **Ubuntu** 20.04, 22.04
-- **CentOS/RHEL** 7, [8, 9](docs/centos.md#centos-8)
-- **Fedora** 37, 38
-- **Fedora CoreOS** (see [fcos Note](docs/fcos.md))
+- **Debian** Buster, Jessie, Stretch, Wheezy
+- **Ubuntu** 16.04, 18.04, 20.04
+- **CentOS/RHEL** 7, 8 (experimental: see [centos 8 notes](docs/centos8.md))
+- **Fedora** 32, 33
+- **Fedora CoreOS** (experimental: see [fcos Note](docs/fcos.md))
 - **openSUSE** Leap 15.x/Tumbleweed
-- **Oracle Linux** 7, [8, 9](docs/centos.md#centos-8)
-- **Alma Linux** [8, 9](docs/centos.md#centos-8)
-- **Rocky Linux** [8, 9](docs/centos.md#centos-8)
-- **Kylin Linux Advanced Server V10** (experimental: see [kylin linux notes](docs/kylinlinux.md))
-- **Amazon Linux 2** (experimental: see [amazon linux notes](docs/amazonlinux.md))
-- **UOS Linux** (experimental: see [uos linux notes](docs/uoslinux.md))
-- **openEuler** (experimental: see [openEuler notes](docs/openeuler.md))
+- **Oracle Linux** 7, 8 (experimental: [centos 8 notes](docs/centos8.md) apply)

 Note: Upstart/SysV init based OS types are not supported.

 ## Supported Components

 - Core
-  - [kubernetes](https://github.com/kubernetes/kubernetes) v1.27.9
-  - [etcd](https://github.com/etcd-io/etcd) v3.5.10
-  - [docker](https://www.docker.com/) v20.10 (see note)
-  - [containerd](https://containerd.io/) v1.7.5
-  - [cri-o](http://cri-o.io/) v1.27 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
+  - [kubernetes](https://github.com/kubernetes/kubernetes) v1.19.9
+  - [etcd](https://github.com/coreos/etcd) v3.4.13
+  - [docker](https://www.docker.com/) v19.03 (see note)
+  - [containerd](https://containerd.io/) v1.3.9
+  - [cri-o](http://cri-o.io/) v1.19 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
 - Network Plugin
-  - [cni-plugins](https://github.com/containernetworking/plugins) v1.2.0
-  - [calico](https://github.com/projectcalico/calico) v3.25.2
-  - [cilium](https://github.com/cilium/cilium) v1.13.4
-  - [flannel](https://github.com/flannel-io/flannel) v0.22.0
-  - [kube-ovn](https://github.com/alauda/kube-ovn) v1.11.5
-  - [kube-router](https://github.com/cloudnativelabs/kube-router) v1.5.1
-  - [multus](https://github.com/k8snetworkplumbingwg/multus-cni) v3.8
-  - [weave](https://github.com/weaveworks/weave) v2.8.1
-  - [kube-vip](https://github.com/kube-vip/kube-vip) v0.5.12
+  - [cni-plugins](https://github.com/containernetworking/plugins) v0.9.0
+  - [calico](https://github.com/projectcalico/calico) v3.16.9
+  - [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
+  - [cilium](https://github.com/cilium/cilium) v1.8.8
+  - [flanneld](https://github.com/coreos/flannel) v0.13.0
+  - [kube-ovn](https://github.com/alauda/kube-ovn) v1.6.1
+  - [kube-router](https://github.com/cloudnativelabs/kube-router) v1.1.1
+  - [multus](https://github.com/intel/multus-cni) v3.7.0
+  - [ovn4nfv](https://github.com/opnfv/ovn4nfv-k8s-plugin) v1.1.0
+  - [weave](https://github.com/weaveworks/weave) v2.7.0
 - Application
-  - [cert-manager](https://github.com/jetstack/cert-manager) v1.11.1
-  - [coredns](https://github.com/coredns/coredns) v1.10.1
-  - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.8.1
-  - [krew](https://github.com/kubernetes-sigs/krew) v0.4.4
-  - [argocd](https://argoproj.github.io/) v2.8.0
-  - [helm](https://helm.sh/) v3.12.3
-  - [metallb](https://metallb.universe.tf/) v0.13.9
-  - [registry](https://github.com/distribution/distribution) v2.8.1
-- Storage Plugin
-  - [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
-  - [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
-  - [aws-ebs-csi-plugin](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) v0.5.0
-  - [azure-csi-plugin](https://github.com/kubernetes-sigs/azuredisk-csi-driver) v1.10.0
-  - [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) v1.22.0
-  - [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) v1.9.2
-  - [local-path-provisioner](https://github.com/rancher/local-path-provisioner) v0.0.24
-  - [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) v2.5.0
+  - [ambassador](https://github.com/datawire/ambassador): v1.5
+  - [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
+  - [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
+  - [cert-manager](https://github.com/jetstack/cert-manager) v0.16.1
+  - [coredns](https://github.com/coredns/coredns) v1.7.0
+  - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.41.2

-## Container Runtime Notes
-
-- Supported Docker versions are 18.09, 19.03, 20.10, 23.0 and 24.0. The *recommended* Docker version is 20.10 (except on Debian bookworm which without supporting for 20.10 and below any more). `Kubelet` might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. the YUM ``versionlock`` plugin or ``apt pin``).
-- The cri-o version should be aligned with the respective kubernetes version (i.e. kube_version=1.20.x, crio_version=1.20)
+Note: The list of available docker version is 18.09, 19.03 and 20.10. The recommended docker version is 19.03. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).

 ## Requirements

-- **Minimum required version of Kubernetes is v1.25**
-- **Ansible v2.14+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
+- **Minimum required version of Kubernetes is v1.18**
+- **Ansible v2.9.x, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands, Ansible 2.10.x is not supported for now**
 - The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
 - The target servers are configured to allow **IPv4 forwarding**.
-- If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
 - The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
   in order to avoid any issue during deployment you should disable your firewall.
-- If kubespray is run from non-root user account, correct privilege escalation method
+- If kubespray is ran from non-root user account, correct privilege escalation method
   should be configured in the target servers. Then the `ansible_become` flag
   or command parameters `--become or -b` should be specified.

 Hardware:
-These limits are safeguarded by Kubespray. Actual requirements for your workload can differ. For a sizing guide go to the [Building Large Clusters](https://kubernetes.io/docs/setup/cluster-large/#size-of-master-and-master-components) guide.
+These limits are safe guarded by Kubespray. Actual requirements for your workload can differ. For a sizing guide go to the [Building Large Clusters](https://kubernetes.io/docs/setup/cluster-large/#size-of-master-and-master-components) guide.

 - Master
   - Memory: 1500 MB
@@ -223,17 +164,21 @@ These limits are safeguarded by Kubespray. Actual requirements for your workload
 ## Network Plugins

-You can choose among ten network plugins. (default: `calico`, except Vagrant uses `flannel`)
+You can choose between 10 network plugins. (default: `calico`, except Vagrant uses `flannel`)

 - [flannel](docs/flannel.md): gre/vxlan (layer 2) networking.

-- [Calico](https://docs.tigera.io/calico/latest/about/) is a networking and network policy provider. Calico supports a flexible set of networking options
+- [Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options
   designed to give you the most efficient networking across a range of situations, including non-overlay
   and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts,
   pods, and (if using Istio and Envoy) applications at the service mesh layer.

+- [canal](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
+
 - [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.

+- [ovn4nfv](docs/ovn4nfv.md): [ovn4nfv-k8s-plugins](https://github.com/opnfv/ovn4nfv-k8s-plugin) is the network controller, OVS agent and CNI server to offer basic SFC and OVN overlay networking.
+
 - [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
   (Please refer to `weave` [troubleshooting documentation](https://www.weave.works/docs/net/latest/troubleshooting/)).

@@ -248,18 +193,15 @@ You can choose among ten network plugins. (default: `calico`, except Vagrant use
 - [multus](docs/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.

-- [custom_cni](roles/network-plugin/custom_cni/) : You can specify some manifests that will be applied to the clusters to bring you own CNI and use non-supported ones by Kubespray.
-  See `tests/files/custom_cni/README.md` and `tests/files/custom_cni/values.yaml` for an example with a CNI provided by a Helm Chart.

-The network plugin to use is defined by the variable `kube_network_plugin`. There is also an
+The choice is defined with the variable `kube_network_plugin`. There is also an
 option to leverage built-in cloud provider networking instead.
 See also [Network checker](docs/netcheck.md).

 ## Ingress Plugins

-- [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.
-- [metallb](docs/metallb.md): the MetalLB bare-metal service LoadBalancer provider.
+- [ambassador](docs/ambassador.md): the Ambassador Ingress Controller and API gateway.
+- [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.

 ## Community docs and resources

@@ -272,12 +214,11 @@ See also [Network checker](docs/netcheck.md).
 - [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/v4/doc/integrations/ansible.rst)
 - [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)
-- [Kubean](https://github.com/kubean-io/kubean)

 ## CI Tests

-[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/pipeline.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/-/pipelines)
+[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/pipeline.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)

-CI/end-to-end tests sponsored by: [CNCF](https://cncf.io), [Equinix Metal](https://metal.equinix.com/), [OVHcloud](https://www.ovhcloud.com/), [ELASTX](https://elastx.se/).
+CI/end-to-end tests sponsored by: [CNCF](https://cncf.io), [Packet](https://www.packet.com/), [OVHcloud](https://www.ovhcloud.com/), [ELASTX](https://elastx.se/).

 See the [test matrix](docs/test_cases.md) for details.


@@ -2,18 +2,17 @@
 The Kubespray Project is released on an as-needed basis. The process is as follows:

-1. An issue is proposing a new release with a changelog since the last release. Please see [a good sample issue](https://github.com/kubernetes-sigs/kubespray/issues/8325)
+1. An issue is proposing a new release with a changelog since the last release
 2. At least one of the [approvers](OWNERS_ALIASES) must approve this release
 3. The `kube_version_min_required` variable is set to `n-1`
-4. Remove hashes for [EOL versions](https://github.com/kubernetes/website/blob/main/content/en/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
-5. Create the release note with [Kubernetes Release Notes Generator](https://github.com/kubernetes/release/blob/master/cmd/release-notes/README.md). See the following `Release note creation` section for the details.
-6. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
-7. An approver creates a release branch in the form `release-X.Y`
-8. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) container images are built and tagged. See the following `Container image creation` section for the details.
-9. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
-10. The release issue is closed
-11. An announcement email is sent to `dev@kubernetes.io` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
-12. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
+4. Remove hashes for [EOL versions](https://github.com/kubernetes/sig-release/blob/master/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
+5. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
+6. An approver creates a release branch in the form `release-X.Y`
+7. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) docker images are built and tagged
+8. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
+9. The release issue is closed
+10. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
+11. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`

 ## Major/minor releases and milestones
@@ -47,37 +46,3 @@ The Kubespray Project is released on an as-needed basis. The process is as follo
 then Kubespray v2.1.0 may be bound to only minor changes to `kube_version`, like v1.5.1
 and *any* changes to other components, like etcd v4, or calico 1.2.3.
 And Kubespray v3.x.x shall be bound to `kube_version: 2.x.x` respectively.
-
-## Release note creation
-
-You can create a release note with:
-
-```shell
-export GITHUB_TOKEN=<your-github-token>
-export ORG=kubernetes-sigs
-export REPO=kubespray
-release-notes --start-sha <The start commit-id> --end-sha <The end commit-id> --dependencies=false --output=/tmp/kubespray-release-note --required-author=""
-```
-
-If the release note file(/tmp/kubespray-release-note) contains "### Uncategorized" pull requests, those pull requests don't have a valid kind label(`kind/feature`, etc.).
-It is necessary to put a valid label on each pull request and run the above release-notes command again to get a better release note
-
-## Container image creation
-
-The container image `quay.io/kubespray/kubespray:vX.Y.Z` can be created from Dockerfile of the kubespray root directory:
-
-```shell
-cd kubespray/
-nerdctl build -t quay.io/kubespray/kubespray:vX.Y.Z .
-nerdctl push quay.io/kubespray/kubespray:vX.Y.Z
-```
-
-The container image `quay.io/kubespray/vagrant:vX.Y.Z` can be created from build.sh of test-infra/vagrant-docker/:
-
-```shell
-cd kubespray/test-infra/vagrant-docker/
-./build vX.Y.Z
-```
-
-Please note that the above operation requires the permission to push container images into quay.io/kubespray/.
-If you don't have the permission, please ask it on the #kubespray-dev channel.


@@ -9,7 +9,5 @@
 #
 # DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
 # INSTRUCTIONS AT https://kubernetes.io/security/

+atoms
 mattymo
-floryut
-oomichi
-cristicalin

Vagrantfile

@@ -19,18 +19,16 @@ SUPPORTED_OS = {
   "flatcar-beta" => {box: "flatcar-beta", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["beta"]},
   "flatcar-alpha" => {box: "flatcar-alpha", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["alpha"]},
   "flatcar-edge" => {box: "flatcar-edge", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["edge"]},
+  "ubuntu1604" => {box: "generic/ubuntu1604", user: "vagrant"},
+  "ubuntu1804" => {box: "generic/ubuntu1804", user: "vagrant"},
   "ubuntu2004" => {box: "generic/ubuntu2004", user: "vagrant"},
-  "ubuntu2204" => {box: "generic/ubuntu2204", user: "vagrant"},
   "centos" => {box: "centos/7", user: "vagrant"},
   "centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
   "centos8" => {box: "centos/8", user: "vagrant"},
   "centos8-bento" => {box: "bento/centos-8", user: "vagrant"},
-  "almalinux8" => {box: "almalinux/8", user: "vagrant"},
-  "almalinux8-bento" => {box: "bento/almalinux-8", user: "vagrant"},
-  "rockylinux8" => {box: "generic/rocky8", user: "vagrant"},
-  "fedora37" => {box: "fedora/37-cloud-base", user: "vagrant"},
-  "fedora38" => {box: "fedora/38-cloud-base", user: "vagrant"},
-  "opensuse" => {box: "opensuse/Leap-15.4.x86_64", user: "vagrant"},
+  "fedora32" => {box: "fedora/32-cloud-base", user: "vagrant"},
+  "fedora33" => {box: "fedora/33-cloud-base", user: "vagrant"},
+  "opensuse" => {box: "bento/opensuse-leap-15.2", user: "vagrant"},
   "opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"},
   "oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
   "oraclelinux8" => {box: "generic/oracle8", user: "vagrant"},

@@ -51,17 +49,16 @@ $vm_cpus ||= 2
 $shared_folders ||= {}
 $forwarded_ports ||= {}
 $subnet ||= "172.18.8"
-$subnet_ipv6 ||= "fd3c:b398:0698:0756"
-$os ||= "ubuntu2004"
+$os ||= "ubuntu1804"
 $network_plugin ||= "flannel"
-# Setting multi_networking to true will install Multus: https://github.com/k8snetworkplumbingwg/multus-cni
-$multi_networking ||= "False"
+# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
+$multi_networking ||= false
 $download_run_once ||= "True"
-$download_force_cache ||= "False"
+$download_force_cache ||= "True"
 # The first three nodes are etcd servers
-$etcd_instances ||= [$num_instances, 3].min
+$etcd_instances ||= $num_instances
 # The first two nodes are kube masters
-$kube_master_instances ||= [$num_instances, 2].min
+$kube_master_instances ||= $num_instances == 1 ? $num_instances : ($num_instances - 1)
 # All nodes are kube nodes
 $kube_node_instances ||= $num_instances
 # The following only works when using the libvirt provider

@@ -70,24 +67,14 @@ $kube_node_instances_with_disks_size ||= "20G"
 $kube_node_instances_with_disks_number ||= 2
 $override_disk_size ||= false
 $disk_size ||= "20GB"
-$local_path_provisioner_enabled ||= "False"
+$local_path_provisioner_enabled ||= false
 $local_path_provisioner_claim_root ||= "/opt/local-path-provisioner/"
 $libvirt_nested ||= false
-# boolean or string (e.g. "-vvv")
-$ansible_verbosity ||= false
-$ansible_tags ||= ENV['VAGRANT_ANSIBLE_TAGS'] || ""

 $playbook ||= "cluster.yml"

 host_vars = {}

-# throw error if os is not supported
-if ! SUPPORTED_OS.key?($os)
-  puts "Unsupported OS: #{$os}"
-  puts "Supported OS are: #{SUPPORTED_OS.keys.join(', ')}"
-  exit 1
-end
-
 $box = SUPPORTED_OS[$os][:box]
 # if $inventory is not set, try to use example
 $inventory = "inventory/sample" if ! $inventory

@@ -98,9 +85,9 @@ $inventory = File.absolute_path($inventory, File.dirname(__FILE__))
 if ! File.exist?(File.join(File.dirname($inventory), "hosts.ini"))
   $vagrant_ansible = File.join(File.dirname(__FILE__), ".vagrant", "provisioners", "ansible")
   FileUtils.mkdir_p($vagrant_ansible) if ! File.exist?($vagrant_ansible)
-  $vagrant_inventory = File.join($vagrant_ansible,"inventory")
-  FileUtils.rm_f($vagrant_inventory)
-  FileUtils.ln_s($inventory, $vagrant_inventory)
+  if ! File.exist?(File.join($vagrant_ansible,"inventory"))
+    FileUtils.ln_s($inventory, File.join($vagrant_ansible,"inventory"))
+  end
 end

 if Vagrant.has_plugin?("vagrant-proxyconf")

@@ -179,7 +166,7 @@ Vagrant.configure("2") do |config|
       # always make /dev/sd{a/b/c} so that CI can ensure that
       # virtualbox and libvirt will have the same devices to use for OSDs
       (1..$kube_node_instances_with_disks_number).each do |d|
-        lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "scsi"
+        lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "ide"
       end
     end
   end

@@ -207,33 +194,13 @@ Vagrant.configure("2") do |config|
       end

       ip = "#{$subnet}.#{i+100}"
-      node.vm.network :private_network,
-        :ip => ip,
-        :libvirt__guest_ipv6 => 'yes',
-        :libvirt__ipv6_address => "#{$subnet_ipv6}::#{i+100}",
-        :libvirt__ipv6_prefix => "64",
-        :libvirt__forward_mode => "none",
-        :libvirt__dhcp_enabled => false
+      node.vm.network :private_network, ip: ip

       # Disable swap for each vm
       node.vm.provision "shell", inline: "swapoff -a"

-      # ubuntu2004 and ubuntu2204 have IPv6 explicitly disabled. This undoes that.
-      if ["ubuntu2004", "ubuntu2204"].include? $os
-        node.vm.provision "shell", inline: "rm -f /etc/modprobe.d/local.conf"
-        node.vm.provision "shell", inline: "sed -i '/net.ipv6.conf.all.disable_ipv6/d' /etc/sysctl.d/99-sysctl.conf /etc/sysctl.conf"
-      end
-
-      # Hack for fedora37/38 to get the IP address of the second interface
-      if ["fedora37", "fedora38"].include? $os
-        config.vm.provision "shell", inline: <<-SHELL
-          nmcli conn modify 'Wired connection 2' ipv4.addresses $(cat /etc/sysconfig/network-scripts/ifcfg-eth1 | grep IPADDR | cut -d "=" -f2)
-          nmcli conn modify 'Wired connection 2' ipv4.method manual
-          service NetworkManager restart
-        SHELL
-      end
-
       # Disable firewalld on oraclelinux/redhat vms
-      if ["oraclelinux","oraclelinux8","rhel7","rhel8","rockylinux8"].include? $os
+      if ["oraclelinux","oraclelinux8","rhel7","rhel8"].include? $os
         node.vm.provision "shell", inline: "systemctl stop firewalld; systemctl disable firewalld"
       end

@@ -259,12 +226,9 @@ Vagrant.configure("2") do |config|
       }

       # Only execute the Ansible provisioner once, when all the machines are up and ready.
-      # And limit the action to gathering facts, the full playbook is going to be ran by testcases_run.sh
       if i == $num_instances
         node.vm.provision "ansible" do |ansible|
           ansible.playbook = $playbook
-          ansible.compatibility_mode = "2.0"
-          ansible.verbose = $ansible_verbosity
           $ansible_inventory_path = File.join( $inventory, "hosts.ini")
           if File.exist?($ansible_inventory_path)
             ansible.inventory_path = $ansible_inventory_path

@@ -274,14 +238,12 @@ Vagrant.configure("2") do |config|
           ansible.host_key_checking = false
           ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
           ansible.host_vars = host_vars
-          if $ansible_tags != ""
-            ansible.tags = [$ansible_tags]
-          end
+          #ansible.tags = ['download']
           ansible.groups = {
             "etcd" => ["#{$instance_name_prefix}-[1:#{$etcd_instances}]"],
-            "kube_control_plane" => ["#{$instance_name_prefix}-[1:#{$kube_master_instances}]"],
-            "kube_node" => ["#{$instance_name_prefix}-[1:#{$kube_node_instances}]"],
-            "k8s_cluster:children" => ["kube_control_plane", "kube_node"],
+            "kube-master" => ["#{$instance_name_prefix}-[1:#{$kube_master_instances}]"],
+            "kube-node" => ["#{$instance_name_prefix}-[1:#{$kube_node_instances}]"],
+            "k8s-cluster:children" => ["kube-master", "kube-node"],
           }
         end
       end
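The removed (`-`) side of this Vagrantfile reads `VAGRANT_ANSIBLE_TAGS` into `$ansible_tags`; a hedged example of what that enables (the tag value is illustrative):

```ShellSession
# Only applies to the newer (`-`) Vagrantfile, which reads VAGRANT_ANSIBLE_TAGS
VAGRANT_ANSIBLE_TAGS=facts vagrant provision
```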


@@ -1,8 +1,9 @@
 [ssh_connection]
 pipelining=True
-ansible_ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
+ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
 #control_path = ~/.ssh/ansible-%%r@%%h:%%p
 [defaults]
+strategy_plugins = plugins/mitogen/ansible_mitogen/plugins/strategy
 # https://github.com/ansible/ansible/issues/56930 (to ignore group names with - and .)
 force_valid_group_names = ignore

@@ -10,11 +11,11 @@ host_key_checking=False
 gathering = smart
 fact_caching = jsonfile
 fact_caching_connection = /tmp
-fact_caching_timeout = 86400
+fact_caching_timeout = 7200
 stdout_callback = default
 display_skipped_hosts = no
 library = ./library
-callbacks_enabled = profile_tasks,ara_default
+callback_whitelist = profile_tasks
 roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
 deprecation_warnings=False
 inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds, .gpg
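To confirm which of these settings a given Ansible install actually picks up, `ansible-config` can dump the effective non-default values:

```ShellSession
# Run from the repository root so this ansible.cfg is the one picked up
ansible-config dump --only-changed
```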

ansible_version.yml (new file)

@@ -0,0 +1,17 @@
+---
+- hosts: localhost
+  gather_facts: false
+  become: no
+  vars:
+    minimal_ansible_version: 2.9.0
+    maximal_ansible_version: 2.10.0
+    ansible_connection: local
+  tasks:
+    - name: "Check {{ minimal_ansible_version }} <= Ansible version < {{ maximal_ansible_version }}"
+      assert:
+        msg: "Ansible must be between {{ minimal_ansible_version }} and {{ maximal_ansible_version }}"
+        that:
+          - ansible_version.string is version(minimal_ansible_version, ">=")
+          - ansible_version.string is version(maximal_ansible_version, "<")
+      tags:
+        - check
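Since the play targets localhost over a local connection, the version gate can be exercised on its own; a minimal sketch:

```ShellSession
# Fails on any Ansible release outside [2.9.0, 2.10.0)
ansible-playbook ansible_version.yml
```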


@@ -1,3 +1,131 @@
 ---
-- name: Install Kubernetes
-  ansible.builtin.import_playbook: playbooks/cluster.yml
+- name: Check ansible version
+  import_playbook: ansible_version.yml
+
+- hosts: bastion[0]
+  gather_facts: False
+  environment: "{{ proxy_disable_env }}"
+  roles:
+    - { role: kubespray-defaults }
+    - { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
+
+- hosts: k8s-cluster:etcd
+  strategy: linear
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  gather_facts: false
+  environment: "{{ proxy_disable_env }}"
+  roles:
+    - { role: kubespray-defaults }
+    - { role: bootstrap-os, tags: bootstrap-os}
+
+- name: Gather facts
+  tags: always
+  import_playbook: facts.yml
+
+- hosts: k8s-cluster:etcd
+  gather_facts: False
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  environment: "{{ proxy_disable_env }}"
+  roles:
+    - { role: kubespray-defaults }
+    - { role: kubernetes/preinstall, tags: preinstall }
+    - { role: "container-engine", tags: "container-engine", when: deploy_container_engine|default(true) }
+    - { role: download, tags: download, when: "not skip_downloads" }
+
+- hosts: etcd
+  gather_facts: False
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  environment: "{{ proxy_disable_env }}"
+  roles:
+    - { role: kubespray-defaults }
+    - role: etcd
+      tags: etcd
+      vars:
+        etcd_cluster_setup: true
+        etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
+      when: not etcd_kubeadm_enabled| default(false)
+
+- hosts: k8s-cluster
+  gather_facts: False
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  environment: "{{ proxy_disable_env }}"
+  roles:
+    - { role: kubespray-defaults }
+    - role: etcd
+      tags: etcd
+      vars:
+        etcd_cluster_setup: false
+        etcd_events_cluster_setup: false
+      when: not etcd_kubeadm_enabled| default(false)
+
+- hosts: k8s-cluster
+  gather_facts: False
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  environment: "{{ proxy_disable_env }}"
+  roles:
+    - { role: kubespray-defaults }
+    - { role: kubernetes/node, tags: node }
+
+- hosts: kube-master
+  gather_facts: False
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  environment: "{{ proxy_disable_env }}"
+  roles:
+    - { role: kubespray-defaults }
+    - { role: kubernetes/master, tags: master }
+    - { role: kubernetes/client, tags: client }
+    - { role: kubernetes-apps/cluster_roles, tags: cluster-roles }
+
+- hosts: k8s-cluster
+  gather_facts: False
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  environment: "{{ proxy_disable_env }}"
+  roles:
+    - { role: kubespray-defaults }
+    - { role: kubernetes/kubeadm, tags: kubeadm}
+    - { role: network_plugin, tags: network }
+    - { role: kubernetes/node-label, tags: node-label }
+
+- hosts: calico-rr
+  gather_facts: False
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  environment: "{{ proxy_disable_env }}"
+  roles:
+    - { role: kubespray-defaults }
+    - { role: network_plugin/calico/rr, tags: ['network', 'calico_rr'] }
+
+- hosts: kube-master[0]
+  gather_facts: False
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  environment: "{{ proxy_disable_env }}"
+  roles:
+    - { role: kubespray-defaults }
+    - { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"] }
+
+- hosts: kube-master
+  gather_facts: False
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  environment: "{{ proxy_disable_env }}"
+  roles:
+    - { role: kubespray-defaults }
+    - { role: kubernetes-apps/external_cloud_controller, tags: external-cloud-controller }
+    - { role: kubernetes-apps/network_plugin, tags: network }
+    - { role: kubernetes-apps/policy_controller, tags: policy-controller }
+    - { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
+    - { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
+
+- hosts: kube-master
+  gather_facts: False
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  environment: "{{ proxy_disable_env }}"
+  roles:
+    - { role: kubespray-defaults }
+    - { role: kubernetes-apps, tags: apps }
+
+- hosts: k8s-cluster
+  gather_facts: False
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  environment: "{{ proxy_disable_env }}"
+  roles:
+    - { role: kubespray-defaults }
+    - { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }
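Because every play in this older cluster.yml is tagged, subsets can be replayed in isolation; for example (tag names taken from the roles above, inventory path assumed):

```ShellSession
# Re-run only the etcd and network plays against an existing inventory
ansible-playbook -i inventory/mycluster/hosts.yaml --become --tags etcd,network cluster.yml
```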


@@ -35,11 +35,11 @@ class SearchEC2Tags(object):
     hosts['_meta'] = { 'hostvars': {} }

     ##Search ec2 three times to find nodes of each group type. Relies on kubespray-role key/value.
-    for group in ["kube_control_plane", "kube_node", "etcd"]:
+    for group in ["kube-master", "kube-node", "etcd"]:
       hosts[group] = []
       tag_key = "kubespray-role"
       tag_value = ["*"+group+"*"]
-      region = os.environ['AWS_REGION']
+      region = os.environ['REGION']
       ec2 = boto3.resource('ec2', region)

       filters = [{'Name': 'tag:'+tag_key, 'Values': tag_value}, {'Name': 'instance-state-name', 'Values': ['running']}]

@@ -67,15 +67,10 @@ class SearchEC2Tags(object):
         if node_labels_tag:
           ansible_host['node_labels'] = dict([ label.strip().split('=') for label in node_labels_tag[0]['Value'].split(',') ])

-        ##Set when instance actually has node_taints
-        node_taints_tag = list(filter(lambda t: t['Key'] == 'kubespray-node-taints', instance.tags))
-        if node_taints_tag:
-          ansible_host['node_taints'] = list([ taint.strip() for taint in node_taints_tag[0]['Value'].split(',') ])
-
         hosts[group].append(dns_name)
         hosts['_meta']['hostvars'][dns_name] = ansible_host

-    hosts['k8s_cluster'] = {'children':['kube_control_plane', 'kube_node']}
+    hosts['k8s-cluster'] = {'children':['kube-master', 'kube-node']}
     print(json.dumps(hosts, sort_keys=True, indent=2))

 SearchEC2Tags()
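A usage sketch for this dynamic inventory (the script filename and region value are assumptions; note the env var is `AWS_REGION` on the `-` side and `REGION` on the `+` side):

```ShellSession
# Hypothetical invocation; AWS credentials come from the usual environment variables
export AWS_REGION=us-east-1   # the older (+) side reads REGION instead
./kubespray-aws-inventory.py
```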


@@ -1 +1 @@
 boto3  # Apache-2.0


@@ -1,2 +1,2 @@
 .generated
 /inventory


@@ -47,10 +47,6 @@ If you need to delete all resources from a resource group, simply call:

 **WARNING** this really deletes everything from your resource group, including everything that was later created by you!

-## Installing Ansible and the dependencies
-
-Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
-
 ## Generating an inventory for kubespray

 After you have applied the templates, you can generate an inventory with this call:

@@ -63,5 +59,6 @@ It will create the file ./inventory which can then be used with kubespray, e.g.:

 ```shell
 cd kubespray-root-dir
+sudo pip3 install -r requirements.txt
 ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all/all.yml" cluster.yml
 ```


@@ -1,6 +1,5 @@
 ---
-- name: Generate Azure inventory
-  hosts: localhost
+- hosts: localhost
   gather_facts: False
   roles:
     - generate-inventory


@@ -1,6 +1,5 @@
 ---
-- name: Generate Azure inventory
-  hosts: localhost
+- hosts: localhost
   gather_facts: False
   roles:
     - generate-inventory_2


@@ -1,6 +1,5 @@
 ---
-- name: Generate Azure templates
-  hosts: localhost
+- hosts: localhost
   gather_facts: False
   roles:
     - generate-templates


@@ -1,6 +1,6 @@
 ---
-- name: Query Azure VMs
+- name: Query Azure VMs  # noqa 301
   command: azure vm list-ip-address --json {{ azure_resource_group }}
   register: vm_list_cmd

@@ -12,4 +12,3 @@
   template:
     src: inventory.j2
     dest: "{{ playbook_dir }}/inventory"
-    mode: 0644


@@ -7,9 +7,9 @@
{% endif %}
{% endfor %}
-[kube_control_plane]
+[kube-master]
{% for vm in vm_list %}
-{% if 'kube_control_plane' in vm.tags.roles %}
+{% if 'kube-master' in vm.tags.roles %}
{{ vm.name }}
{% endif %}
{% endfor %}
@@ -21,13 +21,13 @@
{% endif %}
{% endfor %}
-[kube_node]
+[kube-node]
{% for vm in vm_list %}
-{% if 'kube_node' in vm.tags.roles %}
+{% if 'kube-node' in vm.tags.roles %}
{{ vm.name }}
{% endif %}
{% endfor %}
-[k8s_cluster:children]
-kube_node
-kube_control_plane
+[k8s-cluster:children]
+kube-node
+kube-master
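This template hunk, and most of the hunks that follow, are one mechanical rename between the underscored and hyphenated inventory group names. For reference, the full mapping implied by these diffs (underscored names on the `-` side, hyphenated on the `+` side):

```python
# Inventory group rename covered by these diffs
GROUP_RENAMES = {
    'kube-master': 'kube_control_plane',
    'kube-node': 'kube_node',
    'k8s-cluster': 'k8s_cluster',
    'calico-rr': 'calico_rr',
}
```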


@@ -1,14 +1,14 @@
---
-- name: Query Azure VMs IPs
+- name: Query Azure VMs IPs # noqa 301
  command: az vm list-ip-addresses -o json --resource-group {{ azure_resource_group }}
  register: vm_ip_list_cmd
-- name: Query Azure VMs Roles
+- name: Query Azure VMs Roles # noqa 301
  command: az vm list -o json --resource-group {{ azure_resource_group }}
  register: vm_list_cmd
-- name: Query Azure Load Balancer Public IP
+- name: Query Azure Load Balancer Public IP # noqa 301
  command: az network public-ip show -o json -g {{ azure_resource_group }} -n kubernetes-api-pubip
  register: lb_pubip_cmd
@@ -22,10 +22,8 @@
  template:
    src: inventory.j2
    dest: "{{ playbook_dir }}/inventory"
-    mode: 0644
- name: Generate Load Balancer variables
  template:
    src: loadbalancer_vars.j2
    dest: "{{ playbook_dir }}/loadbalancer_vars.yml"
-    mode: 0644


@@ -7,9 +7,9 @@
{% endif %}
{% endfor %}
-[kube_control_plane]
+[kube-master]
{% for vm in vm_roles_list %}
-{% if 'kube_control_plane' in vm.tags.roles %}
+{% if 'kube-master' in vm.tags.roles %}
{{ vm.name }}
{% endif %}
{% endfor %}
@@ -21,13 +21,14 @@
{% endif %}
{% endfor %}
-[kube_node]
+[kube-node]
{% for vm in vm_roles_list %}
-{% if 'kube_node' in vm.tags.roles %}
+{% if 'kube-node' in vm.tags.roles %}
{{ vm.name }}
{% endif %}
{% endfor %}
-[k8s_cluster:children]
-kube_node
-kube_control_plane
+[k8s-cluster:children]
+kube-node
+kube-master


@@ -24,14 +24,14 @@ bastionIPAddressName: bastion-pubip
disablePasswordAuthentication: true
-sshKeyPath: "/home/{{ admin_username }}/.ssh/authorized_keys"
+sshKeyPath: "/home/{{admin_username}}/.ssh/authorized_keys"
imageReference:
  publisher: "OpenLogic"
  offer: "CentOS"
  sku: "7.5"
  version: "latest"
-imageReferenceJson: "{{ imageReference | to_json }}"
+imageReferenceJson: "{{imageReference|to_json}}"
-storageAccountName: "sa{{ nameSuffix | replace('-', '') }}"
+storageAccountName: "sa{{nameSuffix | replace('-', '')}}"
storageAccountType: "{{ azure_storage_account_type | default('Standard_LRS') }}"


@@ -8,13 +8,11 @@
    path: "{{ base_dir }}"
    state: directory
    recurse: true
-    mode: 0755
- name: Store json files in base_dir
  template:
    src: "{{ item }}"
    dest: "{{ base_dir }}/{{ item }}"
-    mode: 0644
  with_items:
    - network.json
    - storage.json


@@ -27,4 +27,4 @@
}
}
]
}


@@ -103,4 +103,4 @@
}
{% endif %}
]
}


@@ -5,4 +5,4 @@
"variables": {}, "variables": {},
"resources": [], "resources": [],
"outputs": {} "outputs": {}
} }


@@ -144,7 +144,7 @@
"[concat('Microsoft.Network/networkInterfaces/', 'master-{{i}}-nic')]" "[concat('Microsoft.Network/networkInterfaces/', 'master-{{i}}-nic')]"
], ],
"tags": { "tags": {
"roles": "kube_control_plane,etcd" "roles": "kube-master,etcd"
}, },
"apiVersion": "{{apiVersion}}", "apiVersion": "{{apiVersion}}",
"properties": { "properties": {


@@ -61,7 +61,7 @@
"[concat('Microsoft.Network/networkInterfaces/', 'minion-{{i}}-nic')]" "[concat('Microsoft.Network/networkInterfaces/', 'minion-{{i}}-nic')]"
], ],
"tags": { "tags": {
"roles": "kube_node" "roles": "kube-node"
}, },
"apiVersion": "{{apiVersion}}", "apiVersion": "{{apiVersion}}",
"properties": { "properties": {
@@ -112,4 +112,4 @@
} {% if not loop.last %},{% endif %} } {% if not loop.last %},{% endif %}
{% endfor %} {% endfor %}
] ]
} }


@@ -16,4 +16,4 @@
}
}
]
}


@@ -1,11 +1,9 @@
---
-- name: Create nodes as docker containers
-  hosts: localhost
+- hosts: localhost
  gather_facts: False
  roles:
    - { role: dind-host }
-- name: Customize each node containers
-  hosts: containers
+- hosts: containers
  roles:
    - { role: dind-cluster }


@@ -1,9 +1,9 @@
---
-- name: Set_fact distro_setup
+- name: set_fact distro_setup
  set_fact:
    distro_setup: "{{ distro_settings[node_distro] }}"
-- name: Set_fact other distro settings
+- name: set_fact other distro settings
  set_fact:
    distro_user: "{{ distro_setup['user'] }}"
    distro_ssh_service: "{{ distro_setup['ssh_service'] }}"
@@ -35,7 +35,6 @@
      path-exclude=/usr/share/doc/*
      path-include=/usr/share/doc/*/copyright
    dest: /etc/dpkg/dpkg.cfg.d/01_nodoc
-    mode: 0644
  when:
    - ansible_os_family == 'Debian'
@@ -43,7 +42,7 @@
  package:
    name: "{{ item }}"
    state: present
-  with_items: "{{ distro_extra_packages + ['rsyslog', 'openssh-server'] }}"
+  with_items: "{{ distro_extra_packages }} + [ 'rsyslog', 'openssh-server' ]"
- name: Start needed services
  service:
@@ -64,10 +63,9 @@
  copy:
    content: "{{ distro_user }} ALL=(ALL) NOPASSWD:ALL"
    dest: "/etc/sudoers.d/{{ distro_user }}"
-    mode: 0640
-- name: "Add my pubkey to {{ distro_user }} user authorized keys"
-  ansible.posix.authorized_key:
+- name: Add my pubkey to "{{ distro_user }}" user authorized keys
+  authorized_key:
    user: "{{ distro_user }}"
    state: present
-    key: "{{ lookup('file', lookup('env', 'HOME') + '/.ssh/id_rsa.pub') }}"
+    key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/id_rsa.pub') }}"


@@ -1,9 +1,9 @@
---
-- name: Set_fact distro_setup
+- name: set_fact distro_setup
  set_fact:
    distro_setup: "{{ distro_settings[node_distro] }}"
-- name: Set_fact other distro settings
+- name: set_fact other distro settings
  set_fact:
    distro_image: "{{ distro_setup['image'] }}"
    distro_init: "{{ distro_setup['init'] }}"
@@ -13,7 +13,7 @@
    distro_agetty_svc: "{{ distro_setup['agetty_svc'] }}"
- name: Create dind node containers from "containers" inventory section
-  community.docker.docker_container:
+  docker_container:
    image: "{{ distro_image }}"
    name: "{{ item }}"
    state: started
@@ -53,7 +53,7 @@
    {{ distro_raw_setup_done }} && echo SKIPPED && exit 0
    until [ "$(readlink /proc/1/exe)" = "{{ distro_pid1_exe }}" ] ; do sleep 1; done
    {{ distro_raw_setup }}
-  delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
+  delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
  with_items: "{{ containers.results }}"
  register: result
  changed_when: result.stdout.find("SKIPPED") < 0
@@ -63,25 +63,26 @@
    until test -S /var/run/dbus/system_bus_socket; do sleep 1; done
    systemctl disable {{ distro_agetty_svc }}
    systemctl stop {{ distro_agetty_svc }}
-  delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
+  delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
  with_items: "{{ containers.results }}"
  changed_when: false
# Running systemd-machine-id-setup doesn't create a unique id for each node container on Debian,
# handle manually
-- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave)
+- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave) # noqa 301
  raw: |
    echo {{ item | hash('sha1') }} > /etc/machine-id.new
    mv -b /etc/machine-id.new /etc/machine-id
    cmp /etc/machine-id /etc/machine-id~ || true
    systemctl daemon-reload
-  delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
+  delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
  with_items: "{{ containers.results }}"
- name: Early hack image install to adapt for DIND
-  # noqa 302 - this task uses the raw module intentionally
  raw: |
    rm -fv /usr/bin/udevadm /usr/sbin/udevadm
-  delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
+  delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
  with_items: "{{ containers.results }}"
  register: result
  changed_when: result.stdout.find("removed") >= 0
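The machine-id task above derives a stable, unique id per node container by hashing the container name, since cloned images would otherwise share one id and confuse mac-address seeding in some CNIs. The same derivation as a standalone Python sketch (assuming sha1 semantics matching the Jinja `hash('sha1')` filter):

```python
import hashlib

def machine_id_for(node_name: str) -> str:
    # Mirrors the Jinja expression `{{ item | hash('sha1') }}`:
    # a stable, unique 40-hex-char id per node container name.
    return hashlib.sha1(node_name.encode('utf-8')).hexdigest()

print(machine_id_for('kube-node1'))
```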


@@ -17,7 +17,7 @@ pass_or_fail() {
test_distro() {
  local distro=${1:?};shift
  local extra="${*:-}"
-  local prefix="${distro[${extra}]}"
+  local prefix="$distro[${extra}]}"
  ansible-playbook -i hosts dind-cluster.yaml -e node_distro=$distro
  pass_or_fail "$prefix: dind-nodes" || return 1
  (cd ../..
@@ -46,7 +46,7 @@ test_distro() {
  pass_or_fail "$prefix: netcheck" || return 1
}
-NODES=($(egrep ^kube_node hosts))
+NODES=($(egrep ^kube-node hosts))
NETCHECKER_HOST=localhost
: ${OUTPUT_DIR:=./out}
@@ -71,15 +71,15 @@ for spec in ${SPECS}; do
  echo "Loading file=${spec} ..."
  . ${spec} || continue
  : ${DISTROS:?} || continue
-  echo "DISTROS:" "${DISTROS[@]}"
+  echo "DISTROS=${DISTROS[@]}"
  echo "EXTRAS->"
  printf " %s\n" "${EXTRAS[@]}"
  let n=1
-  for distro in "${DISTROS[@]}"; do
+  for distro in ${DISTROS[@]}; do
    for extra in "${EXTRAS[@]:-NULL}"; do
      # Magic value to let this for run once:
      [[ ${extra} == NULL ]] && unset extra
-      docker rm -f "${NODES[@]}"
+      docker rm -f ${NODES[@]}
      printf -v file_out "%s/%s-%02d.out" ${OUTPUT_DIR} ${spec} $((n++))
      {
        info "${distro}[${extra}] START: file_out=${file_out}"
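The loop above runs every distro against every EXTRAS variant, using a NULL sentinel so the inner loop still executes once when no extras are defined. An illustrative Python re-creation of that matrix logic (the values are made up):

```python
# Illustrative re-creation of the distro/extra test matrix above.
DISTROS = ['debian', 'centos']
EXTRAS = [None]  # like the NULL magic value: run the inner loop once

n = 1
for distro in DISTROS:
    for extra in EXTRAS:
        file_out = f"./out/spec-{n:02d}.out"
        n += 1
        print(f"{distro}[{extra}] START: file_out={file_out}")
```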


@@ -44,11 +44,11 @@ import re
import subprocess
import sys
-ROLES = ['all', 'kube_control_plane', 'kube_node', 'etcd', 'k8s_cluster',
-         'calico_rr']
+ROLES = ['all', 'kube-master', 'kube-node', 'etcd', 'k8s-cluster',
+         'calico-rr']
PROTECTED_NAMES = ROLES
AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'print_hostnames',
-                     'load', 'add']
+                     'load']
_boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
                   '0': False, 'no': False, 'false': False, 'off': False}
yaml = YAML()
@@ -63,9 +63,7 @@ def get_var_as_bool(name, default):
CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory/sample/hosts.yaml")
-# Remove the reference of KUBE_MASTERS after some deprecation cycles.
-KUBE_CONTROL_HOSTS = int(os.environ.get("KUBE_CONTROL_HOSTS",
-                         os.environ.get("KUBE_MASTERS", 2)))
+KUBE_MASTERS = int(os.environ.get("KUBE_MASTERS", 2))
# Reconfigures cluster distribution at scale
SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 50))
MASSIVE_SCALE_THRESHOLD = int(os.environ.get("MASSIVE_SCALE_THRESHOLD", 200))
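The `-` side of this hunk introduces `KUBE_CONTROL_HOSTS` while still honoring the deprecated `KUBE_MASTERS` as a fallback. That deprecation pattern in isolation:

```python
import os

# New name wins; the deprecated KUBE_MASTERS is still honored as a fallback.
kube_control_hosts = int(os.environ.get("KUBE_CONTROL_HOSTS",
                                        os.environ.get("KUBE_MASTERS", 2)))
print(kube_control_hosts)
```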
@@ -82,54 +80,32 @@ class KubesprayInventory(object):
def __init__(self, changed_hosts=None, config_file=None):
    self.config_file = config_file
    self.yaml_config = {}
-    loadPreviousConfig = False
-    printHostnames = False
-    # See whether there are any commands to process
-    if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
-        if changed_hosts[0] == "add":
-            loadPreviousConfig = True
-            changed_hosts = changed_hosts[1:]
-        elif changed_hosts[0] == "print_hostnames":
-            loadPreviousConfig = True
-            printHostnames = True
-        else:
-            self.parse_command(changed_hosts[0], changed_hosts[1:])
-            sys.exit(0)
-    # If the user wants to remove a node, we need to load the config anyway
-    if changed_hosts and changed_hosts[0][0] == "-":
-        loadPreviousConfig = True
-    if self.config_file and loadPreviousConfig:  # Load previous YAML file
+    if self.config_file:
        try:
            self.hosts_file = open(config_file, 'r')
-            self.yaml_config = yaml.load(self.hosts_file)
+            self.yaml_config = yaml.load_all(self.hosts_file)
-        except OSError as e:
-            # I am assuming we are catching "cannot open file" exceptions
-            print(e)
-            sys.exit(1)
+        except OSError:
+            pass
-    if printHostnames:
-        self.print_hostnames()
+    if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
+        self.parse_command(changed_hosts[0], changed_hosts[1:])
        sys.exit(0)
    self.ensure_required_groups(ROLES)
    if changed_hosts:
        changed_hosts = self.range2ips(changed_hosts)
-        self.hosts = self.build_hostnames(changed_hosts,
-                                          loadPreviousConfig)
+        self.hosts = self.build_hostnames(changed_hosts)
        self.purge_invalid_hosts(self.hosts.keys(), PROTECTED_NAMES)
        self.set_all(self.hosts)
        self.set_k8s_cluster()
        etcd_hosts_count = 3 if len(self.hosts.keys()) >= 3 else 1
        self.set_etcd(list(self.hosts.keys())[:etcd_hosts_count])
        if len(self.hosts) >= SCALE_THRESHOLD:
-            self.set_kube_control_plane(list(self.hosts.keys())[
-                etcd_hosts_count:(etcd_hosts_count + KUBE_CONTROL_HOSTS)])
+            self.set_kube_master(list(self.hosts.keys())[
+                etcd_hosts_count:(etcd_hosts_count + KUBE_MASTERS)])
        else:
-            self.set_kube_control_plane(
-                list(self.hosts.keys())[:KUBE_CONTROL_HOSTS])
+            self.set_kube_master(list(self.hosts.keys())[:KUBE_MASTERS])
        self.set_kube_node(self.hosts.keys())
        if len(self.hosts) >= SCALE_THRESHOLD:
            self.set_calico_rr(list(self.hosts.keys())[:etcd_hosts_count])
@@ -179,29 +155,17 @@ class KubesprayInventory(object):
    except IndexError:
        raise ValueError("Host name must end in an integer")
-# Keeps already specified hosts,
-# and adds or removes the hosts provided as an argument
-def build_hostnames(self, changed_hosts, loadPreviousConfig=False):
+def build_hostnames(self, changed_hosts):
    existing_hosts = OrderedDict()
    highest_host_id = 0
-    # Load already existing hosts from the YAML
-    if loadPreviousConfig:
-        try:
-            for host in self.yaml_config['all']['hosts']:
-                # Read configuration of an existing host
-                hostConfig = self.yaml_config['all']['hosts'][host]
-                existing_hosts[host] = hostConfig
-                # If the existing host seems
-                # to have been created automatically, detect its ID
-                if host.startswith(HOST_PREFIX):
-                    host_id = self.get_host_id(host)
-                    if host_id > highest_host_id:
-                        highest_host_id = host_id
-        except Exception as e:
-            # I am assuming we are catching automatically
-            # created hosts without IDs
-            print(e)
-            sys.exit(1)
+    try:
+        for host in self.yaml_config['all']['hosts']:
+            existing_hosts[host] = self.yaml_config['all']['hosts'][host]
+            host_id = self.get_host_id(host)
+            if host_id > highest_host_id:
+                highest_host_id = host_id
+    except Exception:
+        pass
    # FIXME(mattymo): Fix condition where delete then add reuses highest id
    next_host_id = highest_host_id + 1
@@ -209,7 +173,6 @@ class KubesprayInventory(object):
    all_hosts = existing_hosts.copy()
    for host in changed_hosts:
-        # Delete the host from config the hostname/IP has a "-" prefix
        if host[0] == "-":
            realhost = host[1:]
            if self.exists_hostname(all_hosts, realhost):
@@ -218,8 +181,6 @@
            elif self.exists_ip(all_hosts, realhost):
                self.debug("Marked {0} for deletion.".format(realhost))
                self.delete_host_by_ip(all_hosts, realhost)
-        # Host/Argument starts with a digit,
-        # then we assume its an IP address
        elif host[0].isdigit():
            if ',' in host:
                ip, access_ip = host.split(',')
@@ -239,15 +200,11 @@ class KubesprayInventory(object):
                next_host = subprocess.check_output(cmd, shell=True)
                next_host = next_host.strip().decode('ascii')
            else:
-                # Generates a hostname because we have only an IP address
                next_host = "{0}{1}".format(HOST_PREFIX, next_host_id)
                next_host_id += 1
-            # Uses automatically generated node name
-            # in case we dont provide it.
            all_hosts[next_host] = {'ansible_host': access_ip,
                                    'ip': ip,
                                    'access_ip': access_ip}
-        # Host/Argument starts with a letter, then we assume its a hostname
        elif host[0].isalpha():
            if ',' in host:
                try:
@@ -266,7 +223,6 @@
                        'access_ip': access_ip}
    return all_hosts
-# Expand IP ranges into individual addresses
def range2ips(self, hosts):
    reworked_hosts = []
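`range2ips` turns dash-separated ranges such as `10.90.0.4-10.90.0.6` into individual addresses before hostnames are assigned. A standalone sketch of that expansion using the stdlib `ipaddress` module; the function and variable names here are illustrative, not the script's own:

```python
import ipaddress

def expand_ip_range(host_range: str) -> list:
    # "10.90.0.4-10.90.0.6" -> ['10.90.0.4', '10.90.0.5', '10.90.0.6']
    start_s, end_s = host_range.split('-')
    start = int(ipaddress.ip_address(start_s))
    end = int(ipaddress.ip_address(end_s))
    if end < start:
        raise Exception("Range of ip_addresses isn't valid")
    return [str(ipaddress.ip_address(i)) for i in range(start, end + 1)]

print(expand_ip_range('10.90.0.4-10.90.0.6'))
```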
@@ -310,7 +266,7 @@ class KubesprayInventory(object):
def purge_invalid_hosts(self, hostnames, protected_names=[]):
    for role in self.yaml_config['all']['children']:
-        if role != 'k8s_cluster' and self.yaml_config['all']['children'][role]['hosts']:  # noqa
+        if role != 'k8s-cluster' and self.yaml_config['all']['children'][role]['hosts']:  # noqa
            all_hosts = self.yaml_config['all']['children'][role]['hosts'].copy()  # noqa
            for host in all_hosts.keys():
                if host not in hostnames and host not in protected_names:
@@ -331,54 +287,52 @@ class KubesprayInventory(object):
        if self.yaml_config['all']['hosts'] is None:
            self.yaml_config['all']['hosts'] = {host: None}
        self.yaml_config['all']['hosts'][host] = opts
-    elif group != 'k8s_cluster:children':
+    elif group != 'k8s-cluster:children':
        if self.yaml_config['all']['children'][group]['hosts'] is None:
            self.yaml_config['all']['children'][group]['hosts'] = {
                host: None}
        else:
            self.yaml_config['all']['children'][group]['hosts'][host] = None  # noqa
-def set_kube_control_plane(self, hosts):
+def set_kube_master(self, hosts):
    for host in hosts:
-        self.add_host_to_group('kube_control_plane', host)
+        self.add_host_to_group('kube-master', host)
def set_all(self, hosts):
    for host, opts in hosts.items():
        self.add_host_to_group('all', host, opts)
def set_k8s_cluster(self):
-    k8s_cluster = {'children': {'kube_control_plane': None,
-                                'kube_node': None}}
-    self.yaml_config['all']['children']['k8s_cluster'] = k8s_cluster
+    k8s_cluster = {'children': {'kube-master': None, 'kube-node': None}}
+    self.yaml_config['all']['children']['k8s-cluster'] = k8s_cluster
def set_calico_rr(self, hosts):
    for host in hosts:
-        if host in self.yaml_config['all']['children']['kube_control_plane']:  # noqa
-            self.debug("Not adding {0} to calico_rr group because it "
-                       "conflicts with kube_control_plane "
-                       "group".format(host))
+        if host in self.yaml_config['all']['children']['kube-master']:
+            self.debug("Not adding {0} to calico-rr group because it "
+                       "conflicts with kube-master group".format(host))
            continue
-        if host in self.yaml_config['all']['children']['kube_node']:
-            self.debug("Not adding {0} to calico_rr group because it "
-                       "conflicts with kube_node group".format(host))
+        if host in self.yaml_config['all']['children']['kube-node']:
+            self.debug("Not adding {0} to calico-rr group because it "
+                       "conflicts with kube-node group".format(host))
            continue
-        self.add_host_to_group('calico_rr', host)
+        self.add_host_to_group('calico-rr', host)
def set_kube_node(self, hosts):
    for host in hosts:
        if len(self.yaml_config['all']['hosts']) >= SCALE_THRESHOLD:
            if host in self.yaml_config['all']['children']['etcd']['hosts']:  # noqa
-                self.debug("Not adding {0} to kube_node group because of "
+                self.debug("Not adding {0} to kube-node group because of "
                           "scale deployment and host is in etcd "
                           "group.".format(host))
                continue
        if len(self.yaml_config['all']['hosts']) >= MASSIVE_SCALE_THRESHOLD:  # noqa
-            if host in self.yaml_config['all']['children']['kube_control_plane']['hosts']:  # noqa
-                self.debug("Not adding {0} to kube_node group because of "
-                           "scale deployment and host is in "
-                           "kube_control_plane group.".format(host))
+            if host in self.yaml_config['all']['children']['kube-master']['hosts']:  # noqa
+                self.debug("Not adding {0} to kube-node group because of "
+                           "scale deployment and host is in kube-master "
+                           "group.".format(host))
                continue
-        self.add_host_to_group('kube_node', host)
+        self.add_host_to_group('kube-node', host)
def set_etcd(self, hosts):
    for host in hosts:
@@ -435,11 +389,9 @@ help - Display this message
print_cfg - Write inventory file to stdout
print_ips - Write a space-delimited list of IPs from "all" group
print_hostnames - Write a space-delimited list of Hostnames from "all" group
-add - Adds specified hosts into an already existing inventory
Advanced usage:
-Create new or overwrite old inventory file: inventory.py 10.10.1.5
-Add another host after initial creation: inventory.py add 10.10.1.6
+Add another host after initial creation: inventory.py 10.10.1.5
Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
Add hosts with different ip and access ip: inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.10.3
Add hosts with a specific hostname, ip, and optional access ip: first,10.0.0.1,192.168.10.1 second,10.0.0.2 last,10.0.0.3
@@ -450,9 +402,9 @@ Configurable env vars:
DEBUG                   Enable debug printing. Default: True
CONFIG_FILE             File to write config to Default: ./inventory/sample/hosts.yaml
HOST_PREFIX             Host prefix for generated hosts. Default: node
-KUBE_CONTROL_HOSTS      Set the number of kube-control-planes. Default: 2
+KUBE_MASTERS            Set the number of kube-masters. Default: 2
SCALE_THRESHOLD         Separate ETCD role if # of nodes >= 50
-MASSIVE_SCALE_THRESHOLD Separate K8s control-plane and ETCD if # of nodes >= 200
+MASSIVE_SCALE_THRESHOLD Separate K8s master and ETCD if # of nodes >= 200
''' # noqa
print(help_text)
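Tying the env vars above together, a small driver sketch that invokes the inventory builder with explicit settings; the paths and values are illustrative, and older releases read `KUBE_MASTERS` instead of `KUBE_CONTROL_HOSTS`:

```python
import os
import subprocess

# Illustrative paths/values; adjust to your checkout and cluster size.
env = dict(os.environ,
           CONFIG_FILE="./inventory/mycluster/hosts.yaml",
           KUBE_CONTROL_HOSTS="2")
subprocess.run(["python3", "contrib/inventory_builder/inventory.py",
                "10.10.1.3", "10.10.1.4", "10.10.1.5"],
               env=env, check=True)
```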
@@ -473,7 +425,6 @@ def main(argv=None):
    if not argv:
        argv = sys.argv[1:]
    KubesprayInventory(argv, CONFIG_FILE)
-    return 0
if __name__ == "__main__":


@@ -1,3 +1,3 @@
configparser>=3.3.0
-ipaddress
ruamel.yaml>=0.15.88
+ipaddress


@@ -1,3 +1,3 @@
hacking>=0.10.2
-mock>=1.3.0
pytest>=2.8.0
+mock>=1.3.0


@@ -13,9 +13,8 @@
# under the License.
import inventory
-from io import StringIO
+import mock
import unittest
-from unittest import mock
from collections import OrderedDict
import sys
@@ -27,28 +26,6 @@ if path not in sys.path:
import inventory  # noqa
class TestInventoryPrintHostnames(unittest.TestCase):
@mock.patch('ruamel.yaml.YAML.load')
def test_print_hostnames(self, load_mock):
mock_io = mock.mock_open(read_data='')
load_mock.return_value = OrderedDict({'all': {'hosts': {
'node1': {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'},
'node2': {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'}}}})
with mock.patch('builtins.open', mock_io):
with self.assertRaises(SystemExit) as cm:
with mock.patch('sys.stdout', new_callable=StringIO) as stdout:
inventory.KubesprayInventory(
changed_hosts=["print_hostnames"],
config_file="file")
self.assertEqual("node1 node2\n", stdout.getvalue())
self.assertEqual(cm.exception.code, 0)
class TestInventory(unittest.TestCase):
    @mock.patch('inventory.sys')
    def setUp(self, sys_mock):
@@ -90,14 +67,23 @@ class TestInventory(unittest.TestCase):
        self.assertRaisesRegex(ValueError, "Host name must end in an",
                               self.inv.get_host_id, hostname)
def test_build_hostnames_add_one(self):
changed_hosts = ['10.90.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
    def test_build_hostnames_add_duplicate(self):
        changed_hosts = ['10.90.0.2']
-        expected = OrderedDict([('node3',
+        expected = OrderedDict([('node1',
                                 {'ansible_host': '10.90.0.2',
                                  'ip': '10.90.0.2',
                                  'access_ip': '10.90.0.2'})])
        self.inv.yaml_config['all']['hosts'] = expected
-        result = self.inv.build_hostnames(changed_hosts, True)
+        result = self.inv.build_hostnames(changed_hosts)
        self.assertEqual(expected, result)
    def test_build_hostnames_add_two(self):
@@ -113,30 +99,6 @@ class TestInventory(unittest.TestCase):
        result = self.inv.build_hostnames(changed_hosts)
        self.assertEqual(expected, result)
def test_build_hostnames_add_three(self):
changed_hosts = ['10.90.0.2', '10.90.0.3', '10.90.0.4']
expected = OrderedDict([
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'}),
('node3', {'ansible_host': '10.90.0.4',
'ip': '10.90.0.4',
'access_ip': '10.90.0.4'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_add_one(self):
changed_hosts = ['10.90.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
    def test_build_hostnames_delete_first(self):
        changed_hosts = ['-10.90.0.2']
        existing_hosts = OrderedDict([
@@ -151,24 +113,7 @@ class TestInventory(unittest.TestCase):
            ('node2', {'ansible_host': '10.90.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '10.90.0.3'})])
-        result = self.inv.build_hostnames(changed_hosts, True)
+        result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_delete_by_hostname(self):
changed_hosts = ['-node1']
existing_hosts = OrderedDict([
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.inv.yaml_config['all']['hosts'] = existing_hosts
expected = OrderedDict([
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.build_hostnames(changed_hosts, True)
        self.assertEqual(expected, result)
    def test_exists_hostname_positive(self):
@@ -277,11 +222,11 @@ class TestInventory(unittest.TestCase):
            self.inv.yaml_config['all']['children'][group]['hosts'].get(host),
            None)
-    def test_set_kube_control_plane(self):
-        group = 'kube_control_plane'
+    def test_set_kube_master(self):
+        group = 'kube-master'
        host = 'node1'
-        self.inv.set_kube_control_plane([host])
+        self.inv.set_kube_master([host])
        self.assertIn(
            host, self.inv.yaml_config['all']['children'][group]['hosts'])
@@ -296,8 +241,8 @@ class TestInventory(unittest.TestCase):
            self.inv.yaml_config['all']['hosts'].get(host), opt)
    def test_set_k8s_cluster(self):
-        group = 'k8s_cluster'
-        expected_hosts = ['kube_node', 'kube_control_plane']
+        group = 'k8s-cluster'
+        expected_hosts = ['kube-node', 'kube-master']
        self.inv.set_k8s_cluster()
        for host in expected_hosts:
@@ -306,7 +251,7 @@ class TestInventory(unittest.TestCase):
                self.inv.yaml_config['all']['children'][group]['children'])
    def test_set_kube_node(self):
-        group = 'kube_node'
+        group = 'kube-node'
        host = 'node1'
        self.inv.set_kube_node([host])
@@ -330,12 +275,12 @@ class TestInventory(unittest.TestCase):
        self.inv.set_all(hosts)
        self.inv.set_etcd(list(hosts.keys())[0:3])
-        self.inv.set_kube_control_plane(list(hosts.keys())[0:2])
+        self.inv.set_kube_master(list(hosts.keys())[0:2])
        self.inv.set_kube_node(hosts.keys())
        for h in range(3):
            self.assertFalse(
                list(hosts.keys())[h] in
-                self.inv.yaml_config['all']['children']['kube_node']['hosts'])
+                self.inv.yaml_config['all']['children']['kube-node']['hosts'])
    def test_scale_scenario_two(self):
        num_nodes = 500
@@ -346,12 +291,12 @@ class TestInventory(unittest.TestCase):
        self.inv.set_all(hosts)
        self.inv.set_etcd(list(hosts.keys())[0:3])
-        self.inv.set_kube_control_plane(list(hosts.keys())[3:5])
+        self.inv.set_kube_master(list(hosts.keys())[3:5])
        self.inv.set_kube_node(hosts.keys())
        for h in range(5):
            self.assertFalse(
                list(hosts.keys())[h] in
-                self.inv.yaml_config['all']['children']['kube_node']['hosts'])
+                self.inv.yaml_config['all']['children']['kube-node']['hosts'])
    def test_range2ips_range(self):
        changed_hosts = ['10.90.0.2', '10.90.0.4-10.90.0.6', '10.90.0.8']
@@ -368,7 +313,7 @@ class TestInventory(unittest.TestCase):
        self.assertRaisesRegex(Exception, "Range of ip_addresses isn't valid",
                               self.inv.range2ips, host_range)
-    def test_build_hostnames_create_with_one_different_ips(self):
+    def test_build_hostnames_different_ips_add_one(self):
        changed_hosts = ['10.90.0.2,192.168.0.2']
        expected = OrderedDict([('node1',
                                 {'ansible_host': '192.168.0.2',
@@ -377,7 +322,17 @@ class TestInventory(unittest.TestCase):
        result = self.inv.build_hostnames(changed_hosts)
        self.assertEqual(expected, result)
-    def test_build_hostnames_create_with_two_different_ips(self):
+    def test_build_hostnames_different_ips_add_duplicate(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
self.inv.yaml_config['all']['hosts'] = expected
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_different_ips_add_two(self):
        changed_hosts = ['10.90.0.2,192.168.0.2', '10.90.0.3,192.168.0.3']
        expected = OrderedDict([
            ('node1', {'ansible_host': '192.168.0.2',
@@ -386,210 +341,6 @@ class TestInventory(unittest.TestCase):
            ('node2', {'ansible_host': '192.168.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '192.168.0.3'})])
-        self.inv.yaml_config['all']['hosts'] = OrderedDict()
        result = self.inv.build_hostnames(changed_hosts)
        self.assertEqual(expected, result)
def test_build_hostnames_create_with_three_different_ips(self):
changed_hosts = ['10.90.0.2,192.168.0.2',
'10.90.0.3,192.168.0.3',
'10.90.0.4,192.168.0.4']
expected = OrderedDict([
('node1', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node2', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node3', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_overwrite_one_with_different_ips(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
existing = OrderedDict([('node5',
{'ansible_host': '192.168.0.5',
'ip': '10.90.0.5',
'access_ip': '192.168.0.5'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_overwrite_three_with_different_ips(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
existing = OrderedDict([
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'}),
('node5', {'ansible_host': '192.168.0.5',
'ip': '10.90.0.5',
'access_ip': '192.168.0.5'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_different_ips_add_duplicate(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node3',
{'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
existing = expected
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
def test_build_hostnames_add_two_different_ips_into_one_existing(self):
changed_hosts = ['10.90.0.3,192.168.0.3', '10.90.0.4,192.168.0.4']
expected = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
existing = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
def test_build_hostnames_add_two_different_ips_into_two_existing(self):
changed_hosts = ['10.90.0.4,192.168.0.4', '10.90.0.5,192.168.0.5']
expected = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'}),
('node5', {'ansible_host': '192.168.0.5',
'ip': '10.90.0.5',
'access_ip': '192.168.0.5'})])
existing = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
def test_build_hostnames_add_two_different_ips_into_three_existing(self):
changed_hosts = ['10.90.0.5,192.168.0.5', '10.90.0.6,192.168.0.6']
expected = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'}),
('node5', {'ansible_host': '192.168.0.5',
'ip': '10.90.0.5',
'access_ip': '192.168.0.5'}),
('node6', {'ansible_host': '192.168.0.6',
'ip': '10.90.0.6',
'access_ip': '192.168.0.6'})])
existing = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
# Add two IP addresses into a config that has
# three already defined IP addresses. One of the IP addresses
# is a duplicate.
def test_build_hostnames_add_two_duplicate_one_overlap(self):
changed_hosts = ['10.90.0.4,192.168.0.4', '10.90.0.5,192.168.0.5']
expected = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'}),
('node5', {'ansible_host': '192.168.0.5',
'ip': '10.90.0.5',
'access_ip': '192.168.0.5'})])
existing = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
# Add two duplicate IP addresses into a config that has
# three already defined IP addresses
def test_build_hostnames_add_two_duplicate_two_overlap(self):
changed_hosts = ['10.90.0.3,192.168.0.3', '10.90.0.4,192.168.0.4']
expected = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
existing = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)


@@ -1,27 +1,21 @@
[tox]
minversion = 1.6
skipsdist = True
-envlist = pep8
+envlist = pep8, py33
[testenv]
-allowlist_externals = py.test
+whitelist_externals = py.test
usedevelop = True
deps =
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/test-requirements.txt
setenv = VIRTUAL_ENV={envdir}
-passenv =
-    http_proxy
-    HTTP_PROXY
-    https_proxy
-    HTTPS_PROXY
-    no_proxy
-    NO_PROXY
+passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
commands = pytest -vv #{posargs:./tests}
[testenv:pep8]
usedevelop = False
-allowlist_externals = bash
+whitelist_externals = bash
commands =
    bash -c "find {toxinidir}/* -type f -name '*.py' -print0 | xargs -0 flake8"


@@ -1,2 +1,3 @@
#k8s_deployment_user: kubespray
#k8s_deployment_user_pkey_path: /tmp/ssh_rsa


@@ -1,9 +1,8 @@
---
-- name: Prepare Hypervisor to later install kubespray VMs
-  hosts: localhost
+- hosts: localhost
  gather_facts: False
  become: yes
  vars:
-    bootstrap_os: none
+    - bootstrap_os: none
  roles:
-    - { role: kvm-setup }
+    - kvm-setup


@@ -1,7 +1,7 @@
---
- name: Install required packages
-  package:
+  yum:
    name: "{{ item }}"
    state: present
  with_items:
@@ -22,9 +22,9 @@
    - ntp
  when: ansible_os_family == "Debian"
-- name: Create deployment user if required
-  include_tasks: user.yml
+# Create deployment user if required
+- include: user.yml
  when: k8s_deployment_user is defined
-- name: Set proper sysctl values
-  import_tasks: sysctl.yml
+# Set proper sysctl values
+- include: sysctl.yml


@@ -1,6 +1,6 @@
---
- name: Load br_netfilter module
-  community.general.modprobe:
+  modprobe:
    name: br_netfilter
    state: present
  register: br_netfilter
@@ -25,19 +25,19 @@
- name: Enable net.ipv4.ip_forward in sysctl
-  ansible.posix.sysctl:
+  sysctl:
    name: net.ipv4.ip_forward
    value: 1
-    sysctl_file: "{{ sysctl_file_path }}"
+    sysctl_file: /etc/sysctl.d/ipv4-ip_forward.conf
    state: present
    reload: yes
- name: Set bridge-nf-call-{arptables,iptables} to 0
-  ansible.posix.sysctl:
+  sysctl:
    name: "{{ item }}"
    state: present
    value: 0
-    sysctl_file: "{{ sysctl_file_path }}"
+    sysctl_file: /etc/sysctl.d/bridge-nf-call.conf
    reload: yes
  with_items:
    - net.bridge.bridge-nf-call-arptables


@@ -11,7 +11,6 @@
    state: directory
    owner: "{{ k8s_deployment_user }}"
    group: "{{ k8s_deployment_user }}"
-    mode: 0700
- name: Configure sudo for deployment user
  copy:


@@ -1,51 +0,0 @@
---
- name: Check ansible version
import_playbook: kubernetes_sigs.kubespray.ansible_version
- name: Install mitogen
hosts: localhost
strategy: linear
vars:
mitogen_version: 0.3.2
mitogen_url: https://github.com/mitogen-hq/mitogen/archive/refs/tags/v{{ mitogen_version }}.tar.gz
ansible_connection: local
tasks:
- name: Create mitogen plugin dir
file:
path: "{{ item }}"
state: directory
mode: 0755
become: false
loop:
- "{{ playbook_dir }}/plugins/mitogen"
- "{{ playbook_dir }}/dist"
- name: Download mitogen release
get_url:
url: "{{ mitogen_url }}"
dest: "{{ playbook_dir }}/dist/mitogen_{{ mitogen_version }}.tar.gz"
validate_certs: true
mode: 0644
- name: Extract archive
unarchive:
src: "{{ playbook_dir }}/dist/mitogen_{{ mitogen_version }}.tar.gz"
dest: "{{ playbook_dir }}/dist/"
- name: Copy plugin
ansible.posix.synchronize:
src: "{{ playbook_dir }}/dist/mitogen-{{ mitogen_version }}/"
dest: "{{ playbook_dir }}/plugins/mitogen"
- name: Add strategy to ansible.cfg
community.general.ini_file:
path: ansible.cfg
mode: 0644
section: "{{ item.section | d('defaults') }}"
option: "{{ item.option }}"
value: "{{ item.value }}"
with_items:
- option: strategy
value: mitogen_linear
- option: strategy_plugins
value: plugins/mitogen/ansible_mitogen/plugins/strategy


@@ -1,29 +1,24 @@
---
-- name: Bootstrap hosts
-  hosts: gfs-cluster
+- hosts: gfs-cluster
  gather_facts: false
  vars:
    ansible_ssh_pipelining: false
  roles:
    - { role: bootstrap-os, tags: bootstrap-os}
-- name: Gather facts
-  hosts: all
+- hosts: all
  gather_facts: true
-- name: Install glusterfs server
-  hosts: gfs-cluster
+- hosts: gfs-cluster
  vars:
    ansible_ssh_pipelining: true
  roles:
    - { role: glusterfs/server }
-- name: Install glusterfs servers
-  hosts: k8s_cluster
+- hosts: k8s-cluster
  roles:
    - { role: glusterfs/client }
-- name: Configure Kubernetes to use glusterfs
-  hosts: kube_control_plane[0]
+- hosts: kube-master[0]
  roles:
    - { role: kubernetes-pv }


@@ -11,10 +11,10 @@
# ## Set disk_volume_device_1 to desired device for gluster brick, if different to /dev/vdb (default).
# ## As in the previous case, you can set ip to give direct communication on internal IPs
# gfs_node1 ansible_ssh_host=95.54.0.18 # disk_volume_device_1=/dev/vdc ip=10.3.0.7
# gfs_node2 ansible_ssh_host=95.54.0.19 # disk_volume_device_1=/dev/vdc ip=10.3.0.8
# gfs_node3 ansible_ssh_host=95.54.0.20 # disk_volume_device_1=/dev/vdc ip=10.3.0.9
-# [kube_control_plane]
+# [kube-master]
# node1
# node2
@@ -23,16 +23,16 @@
# node2
# node3
-# [kube_node]
+# [kube-node]
# node2
# node3
# node4
# node5
# node6
-# [k8s_cluster:children]
-# kube_node
-# kube_control_plane
+# [k8s-cluster:children]
+# kube-node
+# kube-master
# [gfs-cluster]
# gfs_node1
@@ -41,3 +41,4 @@
# [network-storage:children]
# gfs-cluster


@@ -8,22 +8,18 @@ Installs and configures GlusterFS on Linux.
For GlusterFS to connect between servers, TCP ports `24007`, `24008`, and `24009`/`49152`+ (that port, plus an additional incremented port for each additional server in the cluster; the latter if GlusterFS is version 3.4+), and TCP/UDP port `111` must be open. You can open these using whatever firewall you wish (this can easily be configured using the `geerlingguy.firewall` role).
-This role performs basic installation and setup of Gluster, but it does not configure or mount bricks (volumes), since that step is easier to do in a series of plays in your own playbook. Ansible 1.9+ includes the [`gluster_volume`](https://docs.ansible.com/ansible/latest/collections/gluster/gluster/gluster_volume_module.html) module to ease the management of Gluster volumes.
+This role performs basic installation and setup of Gluster, but it does not configure or mount bricks (volumes), since that step is easier to do in a series of plays in your own playbook. Ansible 1.9+ includes the [`gluster_volume`](https://docs.ansible.com/gluster_volume_module.html) module to ease the management of Gluster volumes.
## Role Variables
Available variables are listed below, along with default values (see `defaults/main.yml`):
-```yaml
-glusterfs_default_release: ""
-```
+glusterfs_default_release: ""
You can specify a `default_release` for apt on Debian/Ubuntu by overriding this variable. This is helpful if you need a different package or version for the main GlusterFS packages (e.g. GlusterFS 3.5.x instead of 3.2.x with the `wheezy-backports` default release on Debian Wheezy).
-```yaml
-glusterfs_ppa_use: yes
-glusterfs_ppa_version: "3.5"
-```
+glusterfs_ppa_use: yes
+glusterfs_ppa_version: "3.5"
For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/) for more info.
@@ -33,11 +29,9 @@ None.
## Example Playbook
-```yaml
- hosts: server
  roles:
    - geerlingguy.glusterfs
-```
For a real-world use example, read through [Simple GlusterFS Setup with Ansible](http://www.jeffgeerling.com/blog/simple-glusterfs-setup-ansible), a blog post by this role's author, which is included in Chapter 8 of [Ansible for DevOps](https://www.ansiblefordevops.com/).


@@ -6,12 +6,12 @@ galaxy_info:
  description: GlusterFS installation for Linux.
  company: "Midwestern Mac, LLC"
  license: "license (BSD, MIT)"
-  min_ansible_version: "2.0"
+  min_ansible_version: 2.0
  platforms:
    - name: EL
      versions:
-        - "6"
-        - "7"
+        - 6
+        - 7
    - name: Ubuntu
      versions:
        - precise


@@ -3,19 +3,14 @@
# hyperkube and needs to be installed as part of the system.
# Setup/install tasks.
-- name: Setup RedHat distros for glusterfs
-  include_tasks: setup-RedHat.yml
+- include: setup-RedHat.yml
  when: ansible_os_family == 'RedHat' and groups['gfs-cluster'] is defined
-- name: Setup Debian distros for glusterfs
-  include_tasks: setup-Debian.yml
+- include: setup-Debian.yml
  when: ansible_os_family == 'Debian' and groups['gfs-cluster'] is defined
- name: Ensure Gluster mount directories exist.
-  file:
-    path: "{{ item }}"
-    state: directory
-    mode: 0775
+  file: "path={{ item }} state=directory mode=0775"
  with_items:
    - "{{ gluster_mount_dir }}"
  when: ansible_os_family in ["Debian","RedHat"] and groups['gfs-cluster'] is defined


@@ -7,7 +7,7 @@
  register: glusterfs_ppa_added
  when: glusterfs_ppa_use
-- name: Ensure GlusterFS client will reinstall if the PPA was just added. # noqa no-handler
+- name: Ensure GlusterFS client will reinstall if the PPA was just added. # noqa 503
  apt:
    name: "{{ item }}"
    state: absent


@@ -1,14 +1,10 @@
 ---
 - name: Install Prerequisites
-  package:
-    name: "{{ item }}"
-    state: present
+  yum: name={{ item }} state=present
   with_items:
     - "centos-release-gluster{{ glusterfs_default_release }}"

 - name: Install Packages
-  package:
-    name: "{{ item }}"
-    state: present
+  yum: name={{ item }} state=present
   with_items:
     - glusterfs-client


@@ -6,12 +6,12 @@ galaxy_info:
   description: GlusterFS installation for Linux.
   company: "Midwestern Mac, LLC"
   license: "license (BSD, MIT)"
-  min_ansible_version: "2.0"
+  min_ansible_version: 2.0
   platforms:
     - name: EL
       versions:
-        - "6"
-        - "7"
+        - 6
+        - 7
     - name: Ubuntu
       versions:
         - precise


@@ -4,110 +4,90 @@
   include_vars: "{{ ansible_os_family }}.yml"

 # Install xfs package
-- name: Install xfs Debian
-  apt:
-    name: xfsprogs
-    state: present
+- name: install xfs Debian
+  apt: name=xfsprogs state=present
   when: ansible_os_family == "Debian"

-- name: Install xfs RedHat
-  package:
-    name: xfsprogs
-    state: present
+- name: install xfs RedHat
+  yum: name=xfsprogs state=present
   when: ansible_os_family == "RedHat"

 # Format external volumes in xfs
 - name: Format volumes in xfs
-  community.general.filesystem:
-    fstype: xfs
-    dev: "{{ disk_volume_device_1 }}"
+  filesystem: "fstype=xfs dev={{ disk_volume_device_1 }}"

 # Mount external volumes
-- name: Mounting new xfs filesystem
-  ansible.posix.mount:
-    name: "{{ gluster_volume_node_mount_dir }}"
-    src: "{{ disk_volume_device_1 }}"
-    fstype: xfs
-    state: mounted
+- name: mounting new xfs filesystem
+  mount: "name={{ gluster_volume_node_mount_dir }} src={{ disk_volume_device_1 }} fstype=xfs state=mounted"

 # Setup/install tasks.
-- name: Setup RedHat distros for glusterfs
-  include_tasks: setup-RedHat.yml
+- include: setup-RedHat.yml
   when: ansible_os_family == 'RedHat'

-- name: Setup Debian distros for glusterfs
-  include_tasks: setup-Debian.yml
+- include: setup-Debian.yml
   when: ansible_os_family == 'Debian'

 - name: Ensure GlusterFS is started and enabled at boot.
-  service:
-    name: "{{ glusterfs_daemon }}"
-    state: started
-    enabled: yes
+  service: "name={{ glusterfs_daemon }} state=started enabled=yes"

 - name: Ensure Gluster brick and mount directories exist.
-  file:
-    path: "{{ item }}"
-    state: directory
-    mode: 0775
+  file: "path={{ item }} state=directory mode=0775"
   with_items:
     - "{{ gluster_brick_dir }}"
     - "{{ gluster_mount_dir }}"

 - name: Configure Gluster volume with replicas
-  gluster.gluster.gluster_volume:
+  gluster_volume:
     state: present
     name: "{{ gluster_brick_name }}"
     brick: "{{ gluster_brick_dir }}"
     replicas: "{{ groups['gfs-cluster'] | length }}"
-    cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip'] | default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
+    cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
     host: "{{ inventory_hostname }}"
     force: yes
   run_once: true
-  when: groups['gfs-cluster'] | length > 1
+  when: groups['gfs-cluster']|length > 1

 - name: Configure Gluster volume without replicas
-  gluster.gluster.gluster_volume:
+  gluster_volume:
     state: present
     name: "{{ gluster_brick_name }}"
     brick: "{{ gluster_brick_dir }}"
-    cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip'] | default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
+    cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
     host: "{{ inventory_hostname }}"
     force: yes
   run_once: true
-  when: groups['gfs-cluster'] | length <= 1
+  when: groups['gfs-cluster']|length <= 1

 - name: Mount glusterfs to retrieve disk size
-  ansible.posix.mount:
+  mount:
     name: "{{ gluster_mount_dir }}"
-    src: "{{ ip | default(ansible_default_ipv4['address']) }}:/gluster"
+    src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
     fstype: glusterfs
     opts: "defaults,_netdev"
     state: mounted
   when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

 - name: Get Gluster disk size
-  setup:
-    filter: ansible_mounts
+  setup: filter=ansible_mounts
   register: mounts_data
   when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

 - name: Set Gluster disk size to variable
   set_fact:
-    gluster_disk_size_gb: "{{ (mounts_data.ansible_facts.ansible_mounts | selectattr('mount', 'equalto', gluster_mount_dir) | map(attribute='size_total') | first | int / (1024 * 1024 * 1024)) | int }}"
+    gluster_disk_size_gb: "{{ (mounts_data.ansible_facts.ansible_mounts | selectattr('mount', 'equalto', gluster_mount_dir) | map(attribute='size_total') | first | int / (1024*1024*1024)) | int }}"
   when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

 - name: Create file on GlusterFS
   template:
     dest: "{{ gluster_mount_dir }}/.test-file.txt"
     src: test-file.txt
-    mode: 0644
   when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

 - name: Unmount glusterfs
-  ansible.posix.mount:
+  mount:
     name: "{{ gluster_mount_dir }}"
     fstype: glusterfs
-    src: "{{ ip | default(ansible_default_ipv4['address']) }}:/gluster"
+    src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
     state: unmounted
   when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]


@@ -7,7 +7,7 @@
   register: glusterfs_ppa_added
   when: glusterfs_ppa_use

-- name: Ensure GlusterFS will reinstall if the PPA was just added. # noqa no-handler
+- name: Ensure GlusterFS will reinstall if the PPA was just added. # noqa 503
   apt:
     name: "{{ item }}"
     state: absent


@@ -1,15 +1,11 @@
 ---
 - name: Install Prerequisites
-  package:
-    name: "{{ item }}"
-    state: present
+  yum: name={{ item }} state=present
   with_items:
     - "centos-release-gluster{{ glusterfs_default_release }}"

 - name: Install Packages
-  package:
-    name: "{{ item }}"
-    state: present
+  yum: name={{ item }} state=present
   with_items:
     - glusterfs-server
     - glusterfs-client


@@ -0,0 +1,5 @@
---
- hosts: all
  roles:
    - role_under_test


@@ -3,13 +3,12 @@
   template:
     src: "{{ item.file }}"
     dest: "{{ kube_config_dir }}/{{ item.dest }}"
-    mode: 0644
   with_items:
     - { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
     - { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}
     - { file: glusterfs-kubernetes-endpoint-svc.json.j2, type: svc, dest: glusterfs-kubernetes-endpoint-svc.json}
   register: gluster_pv
-  when: inventory_hostname == groups['kube_control_plane'][0] and groups['gfs-cluster'] is defined and hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb is defined
+  when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined and hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb is defined

 - name: Kubernetes Apps | Set GlusterFS endpoint and PV
   kube:
@@ -18,6 +17,6 @@
     kubectl: "{{ bin_dir }}/kubectl"
     resource: "{{ item.item.type }}"
     filename: "{{ kube_config_dir }}/{{ item.item.dest }}"
-    state: "{{ item.changed | ternary('latest', 'present') }}"
+    state: "{{ item.changed | ternary('latest','present') }}"
   with_items: "{{ gluster_pv.results }}"
-  when: inventory_hostname == groups['kube_control_plane'][0] and groups['gfs-cluster'] is defined
+  when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined


@@ -1,11 +1,9 @@
 ---
-- name: Tear down heketi
-  hosts: kube_control_plane[0]
+- hosts: kube-master[0]
   roles:
     - { role: tear-down }

-- name: Teardown disks in heketi
-  hosts: heketi-node
+- hosts: heketi-node
   become: yes
   roles:
     - { role: tear-down-disks }


@@ -1,11 +1,9 @@
 ---
-- name: Prepare heketi install
-  hosts: heketi-node
+- hosts: heketi-node
   roles:
     - { role: prepare }

-- name: Provision heketi
-  hosts: kube_control_plane[0]
+- hosts: kube-master[0]
   tags:
     - "provision"
   roles:


@@ -2,25 +2,18 @@ all:
   vars:
     heketi_admin_key: "11elfeinhundertundelf"
     heketi_user_key: "!!einseinseins"
-    glusterfs_daemonset:
-      readiness_probe:
-        timeout_seconds: 3
-        initial_delay_seconds: 3
-      liveness_probe:
-        timeout_seconds: 3
-        initial_delay_seconds: 10
   children:
-    k8s_cluster:
+    k8s-cluster:
       vars:
         kubelet_fail_swap_on: false
       children:
-        kube_control_plane:
+        kube-master:
           hosts:
             node1:
         etcd:
           hosts:
             node2:
-        kube_node:
+        kube-node:
           hosts: &kube_nodes
             node1:
             node2:


@@ -5,13 +5,13 @@
- "dm_snapshot" - "dm_snapshot"
- "dm_mirror" - "dm_mirror"
- "dm_thin_pool" - "dm_thin_pool"
community.general.modprobe: modprobe:
name: "{{ item }}" name: "{{ item }}"
state: "present" state: "present"
- name: "Install glusterfs mount utils (RedHat)" - name: "Install glusterfs mount utils (RedHat)"
become: true become: true
package: yum:
name: "glusterfs-fuse" name: "glusterfs-fuse"
state: "present" state: "present"
when: "ansible_os_family == 'RedHat'" when: "ansible_os_family == 'RedHat'"


@@ -1,3 +1,3 @@
 ---
-- name: "Stop port forwarding"
+- name: "stop port forwarding"
   command: "killall "


@@ -7,9 +7,9 @@
- name: "Bootstrap heketi." - name: "Bootstrap heketi."
when: when:
- "(initial_heketi_state.stdout | from_json | json_query(\"items[?kind=='Service']\")) | length == 0" - "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Service']\"))|length == 0"
- "(initial_heketi_state.stdout | from_json | json_query(\"items[?kind=='Deployment']\")) | length == 0" - "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Deployment']\"))|length == 0"
- "(initial_heketi_state.stdout | from_json | json_query(\"items[?kind=='Pod']\")) | length == 0" - "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Pod']\"))|length == 0"
include_tasks: "bootstrap/deploy.yml" include_tasks: "bootstrap/deploy.yml"
# Prepare heketi topology # Prepare heketi topology
@@ -20,11 +20,11 @@
- name: "Ensure heketi bootstrap pod is up." - name: "Ensure heketi bootstrap pod is up."
assert: assert:
that: "(initial_heketi_pod.stdout | from_json | json_query('items[*]')) | length == 1" that: "(initial_heketi_pod.stdout|from_json|json_query('items[*]'))|length == 1"
- name: Store the initial heketi pod name - name: Store the initial heketi pod name
set_fact: set_fact:
initial_heketi_pod_name: "{{ initial_heketi_pod.stdout | from_json | json_query(\"items[*].metadata.name | [0]\") }}" initial_heketi_pod_name: "{{ initial_heketi_pod.stdout|from_json|json_query(\"items[*].metadata.name|[0]\") }}"
- name: "Test heketi topology." - name: "Test heketi topology."
changed_when: false changed_when: false
@@ -32,7 +32,7 @@
   command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"

 - name: "Load heketi topology."
-  when: "heketi_topology.stdout | from_json | json_query(\"clusters[*].nodes[*]\") | flatten | length == 0"
+  when: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*]\")|flatten|length == 0"
   include_tasks: "bootstrap/topology.yml"

 # Provision heketi database volume
@@ -58,7 +58,7 @@
     service_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Service']"
     job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job']"
   when:
-    - "heketi_storage_state.stdout | from_json | json_query(secret_query) | length == 0"
-    - "heketi_storage_state.stdout | from_json | json_query(endpoints_query) | length == 0"
-    - "heketi_storage_state.stdout | from_json | json_query(service_query) | length == 0"
-    - "heketi_storage_state.stdout | from_json | json_query(job_query) | length == 0"
+    - "heketi_storage_state.stdout|from_json|json_query(secret_query)|length == 0"
+    - "heketi_storage_state.stdout|from_json|json_query(endpoints_query)|length == 0"
+    - "heketi_storage_state.stdout|from_json|json_query(service_query)|length == 0"
+    - "heketi_storage_state.stdout|from_json|json_query(job_query)|length == 0"


@@ -1,10 +1,7 @@
 ---
 - name: "Kubernetes Apps | Lay Down Heketi Bootstrap"
   become: true
-  template:
-    src: "heketi-bootstrap.json.j2"
-    dest: "{{ kube_config_dir }}/heketi-bootstrap.json"
-    mode: 0640
+  template: { src: "heketi-bootstrap.json.j2", dest: "{{ kube_config_dir }}/heketi-bootstrap.json" }
   register: "rendering"

 - name: "Kubernetes Apps | Install and configure Heketi Bootstrap"
   kube:
@@ -17,11 +14,11 @@
register: "initial_heketi_state" register: "initial_heketi_state"
vars: vars:
initial_heketi_state: { stdout: "{}" } initial_heketi_state: { stdout: "{}" }
pods_query: "items[?kind=='Pod'].status.conditions | [0][?type=='Ready'].status | [0]" pods_query: "items[?kind=='Pod'].status.conditions|[0][?type=='Ready'].status|[0]"
deployments_query: "items[?kind=='Deployment'].status.conditions | [0][?type=='Available'].status | [0]" deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json" command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json"
until: until:
- "initial_heketi_state.stdout | from_json | json_query(pods_query) == 'True'" - "initial_heketi_state.stdout|from_json|json_query(pods_query) == 'True'"
- "initial_heketi_state.stdout | from_json | json_query(deployments_query) == 'True'" - "initial_heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
retries: 60 retries: 60
delay: 5 delay: 5


@@ -15,10 +15,10 @@
     service_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Service']"
     job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job']"
   when:
-    - "heketi_storage_state.stdout | from_json | json_query(secret_query) | length == 0"
-    - "heketi_storage_state.stdout | from_json | json_query(endpoints_query) | length == 0"
-    - "heketi_storage_state.stdout | from_json | json_query(service_query) | length == 0"
-    - "heketi_storage_state.stdout | from_json | json_query(job_query) | length == 0"
+    - "heketi_storage_state.stdout|from_json|json_query(secret_query)|length == 0"
+    - "heketi_storage_state.stdout|from_json|json_query(endpoints_query)|length == 0"
+    - "heketi_storage_state.stdout|from_json|json_query(service_query)|length == 0"
+    - "heketi_storage_state.stdout|from_json|json_query(job_query)|length == 0"
   register: "heketi_storage_result"

 - name: "Get state of heketi database copy job."
   command: "{{ bin_dir }}/kubectl get jobs --output=json"
@@ -28,6 +28,6 @@
     heketi_storage_state: { stdout: "{}" }
     job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job' && status.succeeded==1]"
   until:
-    - "heketi_storage_state.stdout | from_json | json_query(job_query) | length == 1"
+    - "heketi_storage_state.stdout|from_json|json_query(job_query)|length == 1"
   retries: 60
   delay: 5


@@ -5,10 +5,10 @@
   changed_when: false

 - name: "Delete bootstrap Heketi."
   command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"deploy-heketi\""
-  when: "heketi_resources.stdout | from_json | json_query('items[*]') | length > 0"
+  when: "heketi_resources.stdout|from_json|json_query('items[*]')|length > 0"

-- name: "Ensure there is nothing left over."
+- name: "Ensure there is nothing left over." # noqa 301
   command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"deploy-heketi\" -o=json"
   register: "heketi_result"
-  until: "heketi_result.stdout | from_json | json_query('items[*]') | length == 0"
+  until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
   retries: 60
   delay: 5


@@ -10,11 +10,10 @@
   template:
     src: "topology.json.j2"
     dest: "{{ kube_config_dir }}/topology.json"
-    mode: 0644

 - name: "Copy topology configuration into container."
   changed_when: false
   command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ initial_heketi_pod_name }}:/tmp/topology.json"

-- name: "Load heketi topology." # noqa no-handler
+- name: "Load heketi topology." # noqa 503
   when: "render.changed"
   command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
   register: "load_heketi"
@@ -22,6 +21,6 @@
   changed_when: false
   register: "heketi_topology"
   command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
-  until: "heketi_topology.stdout | from_json | json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\") | flatten | length == groups['heketi-node'] | length"
+  until: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\")|flatten|length == groups['heketi-node']|length"
   retries: 60
   delay: 5


@@ -6,19 +6,19 @@
- name: "Get heketi volumes." - name: "Get heketi volumes."
changed_when: false changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume info {{ volume_id }} --json" command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume info {{ volume_id }} --json"
with_items: "{{ heketi_volumes.stdout | from_json | json_query(\"volumes[*]\") }}" with_items: "{{ heketi_volumes.stdout|from_json|json_query(\"volumes[*]\") }}"
loop_control: { loop_var: "volume_id" } loop_control: { loop_var: "volume_id" }
register: "volumes_information" register: "volumes_information"
- name: "Test heketi database volume." - name: "Test heketi database volume."
set_fact: { heketi_database_volume_exists: true } set_fact: { heketi_database_volume_exists: true }
with_items: "{{ volumes_information.results }}" with_items: "{{ volumes_information.results }}"
loop_control: { loop_var: "volume_information" } loop_control: { loop_var: "volume_information" }
vars: { volume: "{{ volume_information.stdout | from_json }}" } vars: { volume: "{{ volume_information.stdout|from_json }}" }
when: "volume.name == 'heketidbstorage'" when: "volume.name == 'heketidbstorage'"
- name: "Provision database volume." - name: "Provision database volume."
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} setup-openshift-heketi-storage" command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} setup-openshift-heketi-storage"
when: "heketi_database_volume_exists is undefined" when: "heketi_database_volume_exists is undefined"
- name: "Copy configuration from pod." - name: "Copy configuration from pod." # noqa 301
become: true become: true
command: "{{ bin_dir }}/kubectl cp {{ initial_heketi_pod_name }}:/heketi-storage.json {{ kube_config_dir }}/heketi-storage-bootstrap.json" command: "{{ bin_dir }}/kubectl cp {{ initial_heketi_pod_name }}:/heketi-storage.json {{ kube_config_dir }}/heketi-storage-bootstrap.json"
- name: "Get heketi volume ids." - name: "Get heketi volume ids."
@@ -28,14 +28,14 @@
- name: "Get heketi volumes." - name: "Get heketi volumes."
changed_when: false changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume info {{ volume_id }} --json" command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume info {{ volume_id }} --json"
with_items: "{{ heketi_volumes.stdout | from_json | json_query(\"volumes[*]\") }}" with_items: "{{ heketi_volumes.stdout|from_json|json_query(\"volumes[*]\") }}"
loop_control: { loop_var: "volume_id" } loop_control: { loop_var: "volume_id" }
register: "volumes_information" register: "volumes_information"
- name: "Test heketi database volume." - name: "Test heketi database volume."
set_fact: { heketi_database_volume_created: true } set_fact: { heketi_database_volume_created: true }
with_items: "{{ volumes_information.results }}" with_items: "{{ volumes_information.results }}"
loop_control: { loop_var: "volume_information" } loop_control: { loop_var: "volume_information" }
vars: { volume: "{{ volume_information.stdout | from_json }}" } vars: { volume: "{{ volume_information.stdout|from_json }}" }
when: "volume.name == 'heketidbstorage'" when: "volume.name == 'heketidbstorage'"
- name: "Ensure heketi database volume exists." - name: "Ensure heketi database volume exists."
assert: { that: "heketi_database_volume_created is defined", msg: "Heketi database volume does not exist." } assert: { that: "heketi_database_volume_created is defined", msg: "Heketi database volume does not exist." }


@@ -1,9 +1,6 @@
 ---
 - name: "Kubernetes Apps | Lay Down GlusterFS Daemonset"
-  template:
-    src: "glusterfs-daemonset.json.j2"
-    dest: "{{ kube_config_dir }}/glusterfs-daemonset.json"
-    mode: 0644
+  template: { src: "glusterfs-daemonset.json.j2", dest: "{{ kube_config_dir }}/glusterfs-daemonset.json" }
   become: true
   register: "rendering"

 - name: "Kubernetes Apps | Install and configure GlusterFS daemonset"
@@ -23,17 +20,14 @@
   changed_when: false
   vars:
     daemonset_state: { stdout: "{}" }
-    ready: "{{ daemonset_state.stdout | from_json | json_query(\"status.numberReady\") }}"
-    desired: "{{ daemonset_state.stdout | from_json | json_query(\"status.desiredNumberScheduled\") }}"
+    ready: "{{ daemonset_state.stdout|from_json|json_query(\"status.numberReady\") }}"
+    desired: "{{ daemonset_state.stdout|from_json|json_query(\"status.desiredNumberScheduled\") }}"
   until: "ready | int >= 3"
   retries: 60
   delay: 5

 - name: "Kubernetes Apps | Lay Down Heketi Service Account"
-  template:
-    src: "heketi-service-account.json.j2"
-    dest: "{{ kube_config_dir }}/heketi-service-account.json"
-    mode: 0644
+  template: { src: "heketi-service-account.json.j2", dest: "{{ kube_config_dir }}/heketi-service-account.json" }
   become: true
   register: "rendering"

 - name: "Kubernetes Apps | Install and configure Heketi Service Account"


@@ -5,7 +5,7 @@
   changed_when: false

 - name: "Assign storage label"
-  when: "label_present.stdout_lines | length == 0"
+  when: "label_present.stdout_lines|length == 0"
   command: "{{ bin_dir }}/kubectl label node {{ node }} storagenode=glusterfs"

 - name: Get storage nodes again
@@ -15,5 +15,5 @@
 - name: Ensure the label has been set
   assert:
-    that: "label_present | length > 0"
+    that: "label_present|length > 0"
     msg: "Node {{ node }} has not been assigned with label storagenode=glusterfs."


@@ -4,7 +4,6 @@
   template:
     src: "heketi-deployment.json.j2"
     dest: "{{ kube_config_dir }}/heketi-deployment.json"
-    mode: 0644
   register: "rendering"

 - name: "Kubernetes Apps | Install and configure Heketi"
@@ -24,11 +23,11 @@
     deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
   command: "{{ bin_dir }}/kubectl get deployments,pods --selector=glusterfs --output=json"
   until:
-    - "heketi_state.stdout | from_json | json_query(pods_query) == 'True'"
-    - "heketi_state.stdout | from_json | json_query(deployments_query) == 'True'"
+    - "heketi_state.stdout|from_json|json_query(pods_query) == 'True'"
+    - "heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
   retries: 60
   delay: 5

 - name: Set the Heketi pod name
   set_fact:
-    heketi_pod_name: "{{ heketi_state.stdout | from_json | json_query(\"items[?kind=='Pod'].metadata.name|[0]\") }}"
+    heketi_pod_name: "{{ heketi_state.stdout|from_json|json_query(\"items[?kind=='Pod'].metadata.name|[0]\") }}"


@@ -5,7 +5,7 @@
   changed_when: false

 - name: "Kubernetes Apps | Deploy cluster role binding."
-  when: "clusterrolebinding_state.stdout | length == 0"
+  when: "clusterrolebinding_state.stdout == \"\""
   command: "{{ bin_dir }}/kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account"

 - name: Get clusterrolebindings again
@@ -15,7 +15,7 @@
 - name: Make sure that clusterrolebindings are present now
   assert:
-    that: "clusterrolebinding_state.stdout | length > 0"
+    that: "clusterrolebinding_state.stdout != \"\""
     msg: "Cluster role binding is not present."

 - name: Get the heketi-config-secret secret
@@ -28,10 +28,9 @@
   template:
     src: "heketi.json.j2"
     dest: "{{ kube_config_dir }}/heketi.json"
-    mode: 0644

 - name: "Deploy Heketi config secret"
-  when: "secret_state.stdout | length == 0"
+  when: "secret_state.stdout == \"\""
   command: "{{ bin_dir }}/kubectl create secret generic heketi-config-secret --from-file={{ kube_config_dir }}/heketi.json"

 - name: Get the heketi-config-secret secret again
@@ -41,5 +40,5 @@
 - name: Make sure the heketi-config-secret secret exists now
   assert:
-    that: "secret_state.stdout | length > 0"
+    that: "secret_state.stdout != \"\""
     msg: "Heketi config secret is not present."


@@ -2,10 +2,7 @@
- name: "Kubernetes Apps | Lay Down Heketi Storage" - name: "Kubernetes Apps | Lay Down Heketi Storage"
become: true become: true
vars: { nodes: "{{ groups['heketi-node'] }}" } vars: { nodes: "{{ groups['heketi-node'] }}" }
template: template: { src: "heketi-storage.json.j2", dest: "{{ kube_config_dir }}/heketi-storage.json" }
src: "heketi-storage.json.j2"
dest: "{{ kube_config_dir }}/heketi-storage.json"
mode: 0644
register: "rendering" register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi Storage" - name: "Kubernetes Apps | Install and configure Heketi Storage"
kube: kube:


@@ -12,11 +12,10 @@
- name: "Render storage class configuration." - name: "Render storage class configuration."
become: true become: true
vars: vars:
endpoint_address: "{{ (heketi_service.stdout | from_json).spec.clusterIP }}" endpoint_address: "{{ (heketi_service.stdout|from_json).spec.clusterIP }}"
template: template:
src: "storageclass.yml.j2" src: "storageclass.yml.j2"
dest: "{{ kube_config_dir }}/storageclass.yml" dest: "{{ kube_config_dir }}/storageclass.yml"
mode: 0644
register: "rendering" register: "rendering"
- name: "Kubernetes Apps | Install and configure Storace Class" - name: "Kubernetes Apps | Install and configure Storace Class"
kube: kube:


@@ -10,17 +10,16 @@
   template:
     src: "topology.json.j2"
     dest: "{{ kube_config_dir }}/topology.json"
-    mode: 0644

-- name: "Copy topology configuration into container." # noqa no-handler
+- name: "Copy topology configuration into container." # noqa 503
   when: "rendering.changed"
   command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ heketi_pod_name }}:/tmp/topology.json"

-- name: "Load heketi topology." # noqa no-handler
+- name: "Load heketi topology." # noqa 503
   when: "rendering.changed"
   command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"

 - name: "Get heketi topology."
   register: "heketi_topology"
   changed_when: false
   command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
-  until: "heketi_topology.stdout | from_json | json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\") | flatten | length == groups['heketi-node'] | length"
+  until: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\")|flatten|length == groups['heketi-node']|length"
   retries: 60
   delay: 5


@@ -73,8 +73,8 @@
"privileged": true "privileged": true
}, },
"readinessProbe": { "readinessProbe": {
"timeoutSeconds": {{ glusterfs_daemonset.readiness_probe.timeout_seconds }}, "timeoutSeconds": 3,
"initialDelaySeconds": {{ glusterfs_daemonset.readiness_probe.initial_delay_seconds }}, "initialDelaySeconds": 3,
"exec": { "exec": {
"command": [ "command": [
"/bin/bash", "/bin/bash",
@@ -84,8 +84,8 @@
           }
         },
         "livenessProbe": {
-          "timeoutSeconds": {{ glusterfs_daemonset.liveness_probe.timeout_seconds }},
-          "initialDelaySeconds": {{ glusterfs_daemonset.liveness_probe.initial_delay_seconds }},
+          "timeoutSeconds": 3,
+          "initialDelaySeconds": 10,
           "exec": {
             "command": [
               "/bin/bash",


@@ -1,7 +1,7 @@
 ---
 - name: "Install lvm utils (RedHat)"
   become: true
-  package:
+  yum:
     name: "lvm2"
     state: "present"
   when: "ansible_os_family == 'RedHat'"
@@ -19,10 +19,10 @@
   become: true
   shell: "pvs {{ disk_volume_device_1 }} --option vg_name | tail -n+2"
   register: "volume_groups"
-  ignore_errors: true # noqa ignore-errors
+  ignore_errors: true
   changed_when: false

-- name: "Remove volume groups."
+- name: "Remove volume groups." # noqa 301
   environment:
     PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
   become: true
@@ -30,16 +30,16 @@
   with_items: "{{ volume_groups.stdout_lines }}"
   loop_control: { loop_var: "volume_group" }

-- name: "Remove physical volume from cluster disks."
+- name: "Remove physical volume from cluster disks." # noqa 301
   environment:
     PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
   become: true
   command: "pvremove {{ disk_volume_device_1 }} --yes"
-  ignore_errors: true # noqa ignore-errors
+  ignore_errors: true

 - name: "Remove lvm utils (RedHat)"
   become: true
-  package:
+  yum:
     name: "lvm2"
     state: "absent"
   when: "ansible_os_family == 'RedHat' and heketi_remove_lvm"


@@ -1,51 +1,51 @@
 ---
-- name: Remove storage class.
+- name: "Remove storage class." # noqa 301
   command: "{{ bin_dir }}/kubectl delete storageclass gluster"
-  ignore_errors: true # noqa ignore-errors
+  ignore_errors: true

-- name: Tear down heketi.
+- name: "Tear down heketi." # noqa 301
   command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\""
-  ignore_errors: true # noqa ignore-errors
+  ignore_errors: true

-- name: Tear down heketi.
+- name: "Tear down heketi." # noqa 301
   command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\""
-  ignore_errors: true # noqa ignore-errors
+  ignore_errors: true

-- name: Tear down bootstrap.
+- name: "Tear down bootstrap."
   include_tasks: "../../provision/tasks/bootstrap/tear-down.yml"

-- name: Ensure there is nothing left over.
+- name: "Ensure there is nothing left over." # noqa 301
   command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\" -o=json"
   register: "heketi_result"
-  until: "heketi_result.stdout | from_json | json_query('items[*]') | length == 0"
+  until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
   retries: 60
   delay: 5

-- name: Ensure there is nothing left over.
+- name: "Ensure there is nothing left over." # noqa 301
   command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\" -o=json"
   register: "heketi_result"
-  until: "heketi_result.stdout | from_json | json_query('items[*]') | length == 0"
+  until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
   retries: 60
   delay: 5

-- name: Tear down glusterfs.
+- name: "Tear down glusterfs." # noqa 301
   command: "{{ bin_dir }}/kubectl delete daemonset.extensions/glusterfs"
-  ignore_errors: true # noqa ignore-errors
+  ignore_errors: true

-- name: Remove heketi storage service.
+- name: "Remove heketi storage service." # noqa 301
   command: "{{ bin_dir }}/kubectl delete service heketi-storage-endpoints"
-  ignore_errors: true # noqa ignore-errors
+  ignore_errors: true

-- name: Remove heketi gluster role binding
+- name: "Remove heketi gluster role binding" # noqa 301
   command: "{{ bin_dir }}/kubectl delete clusterrolebinding heketi-gluster-admin"
-  ignore_errors: true # noqa ignore-errors
+  ignore_errors: true

-- name: Remove heketi config secret
+- name: "Remove heketi config secret" # noqa 301
   command: "{{ bin_dir }}/kubectl delete secret heketi-config-secret"
-  ignore_errors: true # noqa ignore-errors
+  ignore_errors: true

-- name: Remove heketi db backup
+- name: "Remove heketi db backup" # noqa 301
   command: "{{ bin_dir }}/kubectl delete secret heketi-db-backup"
-  ignore_errors: true # noqa ignore-errors
+  ignore_errors: true

-- name: Remove heketi service account
+- name: "Remove heketi service account" # noqa 301
   command: "{{ bin_dir }}/kubectl delete serviceaccount heketi-service-account"
-  ignore_errors: true # noqa ignore-errors
+  ignore_errors: true

-- name: Get secrets
+- name: "Get secrets"
   command: "{{ bin_dir }}/kubectl get secrets --output=\"json\""
   register: "secrets"
   changed_when: false

-- name: Remove heketi storage secret
+- name: "Remove heketi storage secret"
   vars: { storage_query: "items[?metadata.annotations.\"kubernetes.io/service-account.name\"=='heketi-service-account'].metadata.name|[0]" }
-  command: "{{ bin_dir }}/kubectl delete secret {{ secrets.stdout | from_json | json_query(storage_query) }}"
+  command: "{{ bin_dir }}/kubectl delete secret {{ secrets.stdout|from_json|json_query(storage_query) }}"
   when: "storage_query is defined"
-  ignore_errors: true # noqa ignore-errors
+  ignore_errors: true


@@ -1,16 +1,11 @@
-# Offline deployment
+# Container image collecting script for offline deployment

-## manage-offline-container-images.sh
-
-Container image collecting script for offline deployment

 This script has two features:
 (1) Get container images from an environment which is deployed online.
 (2) Deploy local container registry and register the container images to the registry.

 Step(1) should be done online site as a preparation, then we bring the gotten images
-to the target offline environment. if images are from a private registry,
-you need to set `PRIVATE_REGISTRY` environment variable.
+to the target offline environment.

 Then we will run step(2) for registering the images to local registry.

 Step(1) can be operated with:
@@ -24,42 +19,3 @@ Step(2) can be operated with:
```shell
manage-offline-container-images.sh register
```
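As a sketch of the full round trip — assuming a `create` mode as the counterpart of `register` (the exact step(1) subcommand is elided above) and a purely illustrative private-registry host:

```shell
# Online host: collect the images; PRIVATE_REGISTRY is only needed when
# some images come from a private registry (hypothetical host below).
PRIVATE_REGISTRY=registry.example.com:5000 ./manage-offline-container-images.sh create

# Offline host: deploy the local registry and push the collected images.
./manage-offline-container-images.sh register
```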
## generate_list.sh
This script generates the list of downloaded files and the list of container images by `roles/download/defaults/main/main.yml` file.
Running this script will execute the `generate_list.yml` playbook in the kubespray root directory and generate four files:
all downloaded file URLs in files.list, all container images in images.list, and the Jinja2 templates in *.template.
```shell
./generate_list.sh
tree temp
temp
├── files.list
├── files.list.template
├── images.list
└── images.list.template
0 directories, 5 files
```
In some cases you may want to update some component version. You can declare version variables in the Ansible inventory file or group_vars,
then run `./generate_list.sh -i [inventory_file]` to update files.list and images.list.
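For instance, a hypothetical sketch — the variable name and paths below are illustrative, not prescribed by the script:

```shell
# Pin a component version in the inventory's group_vars, then regenerate
# the lists against that inventory.
echo 'calico_version: "v3.19.1"' >> inventory/sample/group_vars/all/offline-overrides.yml
./generate_list.sh -i inventory/sample/hosts.yaml
```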
## manage-offline-files.sh
This script will download all files according to `temp/files.list` and run nginx container to provide offline file download.
Step(1) generate `files.list`
```shell
./generate_list.sh
```
Step(2) download files and run nginx container
```shell
./manage-offline-files.sh
```
When the nginx container is running, it can be accessed through <http://127.0.0.1:8080/>.
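A quick smoke test (the file layout under the web root depends on what was downloaded, so only the root URL is assumed here):

```shell
# Confirm the nginx file server answers on the published port.
curl -I http://127.0.0.1:8080/
```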


@@ -1,33 +0,0 @@
#!/bin/bash
set -eo pipefail
CURRENT_DIR=$(cd $(dirname $0); pwd)
TEMP_DIR="${CURRENT_DIR}/temp"
REPO_ROOT_DIR="${CURRENT_DIR%/contrib/offline}"
: ${DOWNLOAD_YML:="roles/download/defaults/main/main.yml"}
mkdir -p ${TEMP_DIR}
# generate all download files url template
grep 'download_url:' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed 's/^.*_url: //g;s/\"//g' > ${TEMP_DIR}/files.list.template
# generate all images list template
sed -n '/^downloads:/,/download_defaults:/p' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed -n "s/repo: //p;s/tag: //p" | tr -d ' ' \
| sed 'N;s#\n# #g' | tr ' ' ':' | sed 's/\"//g' > ${TEMP_DIR}/images.list.template
# add kube-* images to images list template
# Those container images are downloaded by kubeadm, then roles/download/defaults/main/main.yml
# doesn't contain those images. That is reason why here needs to put those images into the
# list separately.
KUBE_IMAGES="kube-apiserver kube-controller-manager kube-scheduler kube-proxy"
for i in $KUBE_IMAGES; do
echo "{{ kube_image_repo }}/$i:{{ kube_version }}" >> ${TEMP_DIR}/images.list.template
done
# run ansible to expand templates
/bin/cp ${CURRENT_DIR}/generate_list.yml ${REPO_ROOT_DIR}
(cd ${REPO_ROOT_DIR} && ansible-playbook $* generate_list.yml && /bin/rm generate_list.yml) || exit 1

Some files were not shown because too many files have changed in this diff.