Compare commits


423 Commits

Christoffer Anselm
dcd9c9509b Add etcd role dependency on kube user to avoid etcd role failure when running scale.yml with a fresh node. (#3240) (#4479) 2019-04-30 04:01:36 -07:00
Matthew Mosesohn
15eb7db36d Fix k8s api endpoint for secondary nodes in control plane mode (#4675)
Change-Id: I1588458b54c52443ad8d0afbd266f77ac0afea67
2019-04-29 07:50:24 -07:00
Matthew Mosesohn
a5b46bfc8c Run dns_late preinstall tasks on all k8s nodes (#4672)
* Run dns_late preinstall tasks on all k8s nodes

Related issue: #4656

Change-Id: I63f8559ef1a497b7580ab084561e6603fe647834

* Fix ansible-lint

Change-Id: Ia5b33fa63dbc36d8c3e9557ef3f2ea02af2325a5

* Fix recover_control_plane lint issues

Change-Id: I16643a3193c11b6ba704e9698812cac7e4fd19a8
2019-04-29 05:12:21 -07:00
Youngchul Bang
fbba259933 ingress-nginx: enable --report-node-internal-ip-address flag (#4114)
Close #4113
2019-04-29 01:44:22 -07:00
Florent Monbillard
7b77e2d232 Remove docker-storage-setup dependency if not needed (#4077)
When docker_container_storage_setup is false,
the docker service should not depend on the docker-storage-setup service,
because it is not installed.

For example, when using overlay2 on recent RHEL 7/Centos 7 kernels,
you most likely don't need it.
2019-04-29 01:42:22 -07:00
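A minimal sketch (not part of the commit itself) of how this behaviour is driven from group vars; the variable name comes from the commit message above, the file path is illustrative only:

```
# group_vars/all/docker.yml (illustrative path)
# With overlay2 on recent RHEL 7 / CentOS 7 kernels docker-storage-setup is
# not needed, so the generated docker.service unit should not reference it.
docker_container_storage_setup: false
```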
qvicksilver
48a182844c Documentation and playbook for recovering control plane from node failure (#4146) 2019-04-29 01:40:20 -07:00
MarkusTeufelberger
9335cdcebc ansible-lint: Add exception for invocation of "rm" (#4609) 2019-04-29 01:34:20 -07:00
Andreas Krüger
38af93b60c Remove rkt support (#4671) 2019-04-29 01:14:20 -07:00
Matthew Mosesohn
741de6051c Fix nodeselectors for contiv and nginx-ingress (#4662)
* Fix nodeselectors for contiv and nginx-ingress

Change-Id: Ib3eb6bd87193c69a90ee944c9164a0b6792c79ba

* Set kube proxy mode to iptables for addons task

Change-Id: Iff71a71f672405c74b4708c71db15ddc4391a53a
2019-04-28 23:36:19 -07:00
Dmitry
b8f0de3074 Fixed etcd-servers-overrides in kubeadm config (#4668)
* kube-apiserver will fail if a comma is used as the separator
2019-04-28 23:02:20 -07:00
MarkusTeufelberger
88d919337e ansible-lint: don't compare to empty string [E602] (#4665) 2019-04-28 23:00:20 -07:00
Jiang Yi Tao
f518b90c6b associate fips for masters with no etcd (#4657) 2019-04-28 22:58:20 -07:00
Maxime Guyot
d5c33e6d6c Refactor test cases (#4655) 2019-04-28 22:56:19 -07:00
Matthew Mosesohn
338eb4ce65 Fix kubeadm upload certs with when condition (#4659)
* Fix kubeadm upload certs with when condition

Change-Id: I916dd2375b71eea2386047c7f185a2f8361f7a61

* Update kubeadm-secondary-experimental.yml
2019-04-27 01:14:20 -07:00
Matthew Mosesohn
009e208bcd Remove RHEL from packet deploy (#4661)
Change-Id: I131d77bb9d16cc0f252dd86166c29f72daa9a64a
2019-04-26 09:56:29 -07:00
Matthew Mosesohn
81e6877b02 Make cilium tests pass (#4660)
Cilium requires a newer kernel; the rhel7 and centos7 kernels are too old, so those jobs are removed.
Bumping ubuntu to ubuntu-1804

Change-Id: Ib1bffa45b8f9ed0ba500f751714372b3a3f7878b
2019-04-26 05:54:37 -07:00
Andreas Krüger
3722acee85 Fix broken metrics-server deployment not starting (#4651)
* Fix metrics-server deployment

* Make metrics server work

* Fix sample inventory
2019-04-26 00:44:26 -07:00
Maxime Guyot
a4a35f8a4f Git checkout a specific version for testing upgrades (#4653) 2019-04-25 05:24:46 -07:00
grialeyur
82119ca923 Add support for calico kubernetes datastore and typha. (#4498)
* Add support for calico kubernetes datastore and typha.

* Add typha_enabled to kubespray-defaults.
2019-04-25 05:00:48 -07:00
gitareest
6ca2019002 Fix issue with etcd arm host installation case (#4589)
Use host_architecture variable.
2019-04-25 04:58:47 -07:00
Maxime Guyot
53e3463b5a Fix GCE tests with undefined CI_PLATFORM (#4650) 2019-04-25 04:20:47 -07:00
Matthew Mosesohn
c9ed5f69d7 Prepend docker.io for all docker hub images (#4648)
Change-Id: I71dc793641bc168e40419e38f33f68f5325e77a9
2019-04-25 01:34:46 -07:00
Maxime Guyot
696d481e3b Fix dynamic inventory parsing in contrib/tf/packet (#4645) 2019-04-25 00:40:46 -07:00
Maxime Guyot
f5a83ceded Fix typo in test-infra playbook (#4644) 2019-04-24 13:34:46 -07:00
Andreas Krüger
3fe66a1298 Update downloads role to download to correct group (#4638) 2019-04-24 10:48:03 -07:00
Maxime Guyot
6af1f65d3c Fix python syntax in Terraform dynamic inventory (#4643) 2019-04-24 10:34:04 -07:00
Sergey Kolekonov
4a10dca7d4 Add an ability to provide oidc cert in base64 (#4618) 2019-04-24 09:40:01 -07:00
Matthew Mosesohn
4d57ed314d Clean up check for setting kubeadm certificate key (#4634)
Change-Id: I2c97c4753089eb3ec2e6b01b2681a8be98ecbb57
2019-04-24 07:14:12 -07:00
Andreas Krüger
86d0e12695 Add missing comma (#4636) 2019-04-24 07:10:02 -07:00
iwankgb
4e81bcc147 Fixing Vagrant cluster provisioning (#4218)
* Pass ansible_ssh_user as host_var

Co-authored-by: Damian Darczuk <damian.darczuk@intel.com>
Co-authored-by: Paweł Pałucki <pawel.palucki@intel.com>

* Create a directory before downloading container images to ansible host

Co-authored-by: Damian Darczuk <damian.darczuk@intel.com>
Co-authored-by: Paweł Pałucki <pawel.palucki@intel.com>

* Set private key usuing synchronize task options

Co-authored-by: Damian Darczuk <damian.darczuk@intel.com>
Co-authored-by: Paweł Pałucki <pawel.palucki@intel.com>
2019-04-24 05:42:05 -07:00
andreyshestakov
691baf5b14 Calico fix (#4540)
* Mark "Calico | Set global as_num" as "unchanged"

This command executes with the "--skip-exists" parameter, so it is idempotent
and should not be marked as "changed".

* trigger ci
2019-04-24 05:40:01 -07:00
Attilio Greco
6243467856 remove double check for running this task just one time (#4613) 2019-04-24 05:38:01 -07:00
Andreas Krüger
3c5a4474ac Increase ansible-lint speed (#4632) 2019-04-24 05:28:00 -07:00
Maxime Guyot
01da65252b Reduce VM size for Packet CI (#4630) 2019-04-24 04:30:04 -07:00
Andreas Krüger
f3e7615bef Switch deploy-part1 AIO job to Calico (#4628)
* Switch deploy-part1 AIO job to Calico

* Cleanup file

* Remove newline at end
2019-04-24 03:32:04 -07:00
Vincent Gramer
f47a666227 support azure loadbalancer standard sku (#4150) (#4476)
add support for the following properties in azure-credential-check.yml:
  - azure_loadbalancer_sku: Sku of Load Balancer and Public IP. Candidate values are: basic and standard.
  - azure_exclude_master_from_standard_lb: excludes master nodes from standard load balancer.
  - azure_disable_outbound_snat: disables the outbound SNAT for public load balancer rules
  - useInstanceMetadata: Use instance metadata service where possible
  - azure_primary_availability_set: (Optional) The name of the availability set that should be used as the load balancer backend
2019-04-24 02:14:01 -07:00
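An illustrative sketch of how these properties might be set in the Azure group vars; the variable names are taken from the commit message above, the values are examples only:

```
azure_loadbalancer_sku: standard             # "basic" or "standard"
azure_exclude_master_from_standard_lb: true  # keep masters out of the standard LB backend
azure_disable_outbound_snat: false           # only relevant for the standard LB
useInstanceMetadata: true                    # use the instance metadata service where possible
azure_primary_availability_set: my-avset     # optional LB backend availability set (example name)
```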
Wilmar den Ouden
b708db4cd5 Update to v1.14.1 (#4481) 2019-04-24 02:08:01 -07:00
Maxime Guyot
a3144e7e21 Test with minimum requirements (#4615) 2019-04-24 02:02:03 -07:00
Maxime Guyot
683efc5698 Move on_success test to deploy-part2 (#4627) 2019-04-24 01:42:04 -07:00
Maxime Guyot
38a3075025 Always rebase on master before running a job (#4616) 2019-04-24 01:38:01 -07:00
Matthew Mosesohn
fc072300ea Purge legacy cleanup tasks from older than 1 year (#4450)
We don't need to support upgrades from 2 year old installs,
just from the last major version.

Also changed most retried tasks to 1s delay instead of longer.
2019-04-24 00:08:05 -07:00
Chad Swenson
d25ecfe1c1 Update Docker defaults to 18.09.5 and drop deprecated (#4624)
As of kubernetes v1.14, docker 18.09 is [validated for use](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md#external-dependencies). Docker 1.11 and 1.12 were dropped.

This patch:
- Updates the default docker version to 18.09
- Updates Docker packages to the latest 18.09 patch (18.09.5)
- Removes options for Docker 1.11 and 1.12
2019-04-23 22:24:01 -07:00
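A hedged sketch of pinning the Docker release in group vars after this change; the value mirrors the new default, the exact variable layout may differ between releases:

```
# group_vars/all/docker.yml (illustrative)
docker_version: "18.09"   # resolves to the latest 18.09 patch packages (18.09.5 here)
```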
Maxime Guyot
37d98e79ec Pin Terraform provider versions (#4620) 2019-04-23 22:22:01 -07:00
MarkusTeufelberger
a65605b17a ansible-lint: Don't use bare variables (#4608)
Circumvented one false positive from ansible-lint
Moved a block of jinja magic into its own variable
2019-04-23 22:20:00 -07:00
MarkusTeufelberger
424e59805f ansible-lint: Fix commands that are also available as module (#4619) 2019-04-23 22:18:00 -07:00
Maxime Guyot
6df8111cd4 Merge 020_check and 030_check (#4623)
* Merge 020_check and 030_check

* Fix pods output and fail if test pods is not ready
2019-04-23 16:12:00 -07:00
MarkusTeufelberger
76db060afb Define and implement specs for bootstrap-os (#4455)
* Add README to bootstrap-os role

* Rework bootstrap-os once more

* Document workarounds for bugs/deficiencies in Ansible modules
* Unify and document role variables
* Remove installation of additional packages and repositories
* Merge Ubuntu and Debian tasks
* Remove pipelining setting from default playbooks
* Fix OpenSUSE not running its required tasks
2019-04-23 15:46:02 -07:00
Andreas Krüger
d588532c9b Update probe timeouts, delays etc. (#4612)
* Fix merge conflict

* Add check delay

* Add more liveness and readiness options to metrics-server
2019-04-23 14:46:02 -07:00
Matthew Mosesohn
d6d7458d68 Fix control plane setup without a hardcoded key (#4610) 2019-04-23 14:37:59 -07:00
Maxime Guyot
228b244c84 Move inline shell into script files (#4604) 2019-04-23 13:36:03 -07:00
Matthew Mosesohn
d89ecb8308 disable metrics server and fix terraform (#4617)
* disable metrics server in centos7-flannel-addons job

Change-Id: I1d87923547584896f64dda9ea8feb5581ad48cbe

* Fix tf facility->facilities syntax

Change-Id: I434bfe53f47e8e4a546890e0b62d24bde6e6d6a7

* Update Terraform CI for facilities

* Fix undefined variable error
2019-04-23 12:06:03 -07:00
Maxime Guyot
50751bb610 Revert "Optimize kube resources creation (#4572)" (#4621)
This reverts commit f8fdc0cd93.
2019-04-23 20:37:23 +03:00
Justin Chao
64f48bf84c Update ansible.md (#4599)
Ansible 2.0 has deprecated the “ssh” from ansible_ssh_host.

Updating the docs to be more aligned with the Ansible version used in the sample/inventory.ini file as well.
Also adding `[bastion]` group in the docs to avoid confusion.
2019-04-22 23:36:09 -07:00
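A hypothetical inventory fragment (YAML form) showing the non-deprecated ansible_host spelling and a bastion group; host names and addresses are examples only:

```
all:
  hosts:
    node1:
      ansible_host: 192.0.2.10   # ansible_ssh_host is the deprecated spelling
      ip: 192.0.2.10
  children:
    bastion:
      hosts:
        bastion-01:
          ansible_host: 198.51.100.5
```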
andreyshestakov
f8fdc0cd93 Optimize kube resources creation (#4572) 2019-04-22 23:34:10 -07:00
Matthew Mosesohn
09fe95bc60 Avoid creating k8s cert dir on non-k8s nodes (#4602) 2019-04-21 15:27:43 -07:00
Victor Morales
ada5941a70 Unmask Docker service in ClearLinux (#4583)
The docker service provided by the containers-basic bundle is masked
in the ClearLinux distribution. This causes errors in the following
steps. This commit ensures that the unit is not masked.
2019-04-21 07:31:43 -07:00
Maxime Guyot
88fe3403ce Add overcommitment for CPU in Packet CI playbook (#4597) 2019-04-21 02:27:44 -07:00
Maxime Guyot
04f2682ac6 Drop unused dynamic inventory functions (#4138) 2019-04-21 01:59:45 -07:00
rptaylor
873b5608cf add master_allowed_remote_ips (with terraform fmt) (#4022) 2019-04-21 01:57:44 -07:00
Maxime Guyot
12086744e0 Update docs for inventory_builder (#4581) 2019-04-20 11:09:45 -07:00
Vedran Bartonicek
33ab615072 Wait longer for node to join the cluster (#4549) 2019-04-20 07:05:40 -07:00
Maxime Guyot
f696d7abee Simplify syntax-check CI job (#4585) 2019-04-20 06:37:40 -07:00
Rabi Mishra
5a1cf19278 Install cri-tools on fedora (#4350) 2019-04-20 06:29:40 -07:00
Maxime Guyot
416e65509b Add documentation about CPU arch compatibility (#4302) 2019-04-20 06:27:40 -07:00
Maxime Guyot
4de6a78e26 Fix CI for packet_centos7-flannel-addons (#4586) 2019-04-20 06:21:40 -07:00
Maxime Guyot
026088deea Re-Add docker:dind for Packet CI (#4567) 2019-04-20 06:19:40 -07:00
Maxime Guyot
f142e671b3 Cleanup references to Travis CI (#4208)
Broken since 4efb0b7
2019-04-20 06:17:40 -07:00
Maxime Guyot
2f49b6caa8 Use yamllint --strict (#4587) 2019-04-20 06:15:41 -07:00
Maxime Guyot
50c86919dc Packet CI: Increasing the time waiting for IP to be assigned (#4584) 2019-04-20 06:13:40 -07:00
Maxime Guyot
781cc00cc4 Add a testcase to check that pods are running (#4555) 2019-04-20 06:11:40 -07:00
Matthew Mosesohn
05dc2b3a09 Use K8s 1.14 and add kubeadm experimental control plane mode (#4514)
* Use K8s 1.14 and add kubeadm experimental control plane mode

This reverts commit d39c273d96.

* Cleanup kubeadm setup run on first master

* pin kubeadm_certificate_key in test

* Remove kubelet autolabel of kube-node, add symlink for pki dir

Change-Id: Id5e74dd667c60675dbfe4193b0bc9fb44380e1ca
2019-04-19 06:01:54 -07:00
Aleksey Kasatkin
d0e628911c Add sha256 hashes for calicoctl v3.6.1 (#4580)
Hashes are added to calicoctl_binary_checksums for both amd and arm platforms.
2019-04-19 05:45:55 -07:00
Andreas Krüger
656633f784 YAMLLint everything (#4576) 2019-04-18 23:59:54 -07:00
Maxime Guyot
530e1c329d Add shellcheck CI (#4562) 2019-04-18 23:57:54 -07:00
Victor Morales
f5aec8add4 Fix runc absolute path (#4542)
The BINDIR variable defined in runc's Makefile[1] sets the
installation path to $(PREFIX)/sbin, which is used by most
Linux distributions. This change fixes the absolute path used for
non-ClearLinux distributions (CentOS, Ubuntu).

[1] https://github.com/opencontainers/runc/blob/master/Makefile#L10
2019-04-18 15:41:58 -07:00
Maxime Guyot
f92309bfd0 Fix ansible-lint for ceph package (#4568) 2019-04-18 13:45:25 -07:00
Maxime Guyot
ef10feb26f Comment loadbalancer_* settings in sample inventory (#4566) 2019-04-18 04:20:10 -07:00
Victor Morales
c6586829de Ensure /etc/bash_completion.d/ folder exists (#4543)
The Stateless ClearLinux feature[1] requires the creation of folders
under /etc. This change ensures the existence of the
/etc/bash_completion.d/ folder for the ClearLinux distribution.

[1] https://clearlinux.org/features/stateless
2019-04-18 02:24:10 -07:00
johnstudarus
b103385678 added missing sidebar link to Packet doc (#4513) 2019-04-18 02:22:10 -07:00
Maxime Guyot
848191e97a Enable working Packet CI jobs and delay GCE CI (#4559) 2019-04-18 01:50:09 -07:00
MarkusTeufelberger
04e3fb6a5a Fix ansible-lint error 103 (#4511) 2019-04-18 01:42:10 -07:00
Maxime Guyot
b218e17f44 ansible-lint: E403 Package installs should not use latest (#4500) 2019-04-18 01:34:08 -07:00
Maxime Guyot
bba6d0c613 Fix CI link (#4521) 2019-04-18 01:12:08 -07:00
Maxime Guyot
49af1f9969 Fix ansible-lint e601 in create-vms (#4561) 2019-04-17 10:46:10 -07:00
Maxime Guyot
a6dc50e7cb Add host information for canal readiness probe (#4548) 2019-04-17 10:22:02 -07:00
Maxime Guyot
f69b5f7f33 Upgrade to Ansible 2.7.8 (#4535) 2019-04-17 10:18:05 -07:00
Maxime Guyot
37eac010c8 ansible-lint: Don’t compare to literal True/False (#4499) 2019-04-17 08:42:03 -07:00
Andreas Krüger
d4b9f15c0a PHASE 2 - Enable Packet-CI in gitlab and move unit-tests and deploy-part1 (#4538)
* PHASE 2 - Enable Packet-CI in gitlab

* Add gitlab files

* Reset files back and only keep Packet

* Include packet

* Add missing Upgrade Tests

* Update GCE jobs etc

* Fix bug

* Yaml lint all gitlab files

* Remove GCE

* Test

* Test again

* Enable GCE again

* Install requirements

* Cleanup the gitlab file

* Cleanup runner tags

* Install requirements

* Test

* Test variables for gce

* Test again

* Test again

* Fix

* Update
2019-04-17 08:32:03 -07:00
Maxime Guyot
ec3daedf9e Revert "Fix for unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels (#4320)" (#4553)
This reverts commit 586ad89d50.
2019-04-17 07:58:06 -07:00
Maxime Guyot
1cf76a10db Disable usage of default security group (#4533) 2019-04-17 02:10:03 -07:00
Jugwan Eom
d83181a2be add RBD Provisioner Addon (#3667) (#3668)
Based on the CephFS Provisioner Addon, the following changes have been made:
 - Upstream v2.1.1-k8s1.11
 - Configurable Provisioner replicas
2019-04-16 23:14:02 -07:00
Andreas Krüger
b834a28891 PHASE 1 - Add Packet-CI playbook and configuration (#4537) 2019-04-16 14:49:07 -07:00
andreyshestakov
78f6f6b889 Mark "Calico | Set global as_num" as "unchanged" (#4539)
This command executes with the "--skip-exists" parameter, so it is idempotent
and should not be marked as "changed".
2019-04-16 09:31:11 -07:00
Maxime Guyot
0b02f6593b Split .gitlab-ci.yml into several files (#4519) 2019-04-16 05:35:05 -07:00
Andreas Holmsten
7f1d9ff543 [contrib/terraform/openstack] Add k8s_allowed_remote_ips variable (#4506)
* Add k8s_allowed_remote_ips variable

Useful for defining CIDRs allowed to initiate an SSH connection when
you don't want to use a bastion.

* Add TF_VAR_k8s_allowed_remote_ips variable to tf-apply-ovh
2019-04-15 07:22:08 -07:00
Matthew Mosesohn
c5fb734098 Switch calicoctl from a container to a binary (#4524) 2019-04-15 04:24:04 -07:00
Maxime Guyot
d5d3cfd3fa Sanitize the cluster_name variable (#4509) 2019-04-15 04:22:06 -07:00
Maxime Guyot
cc77a8c395 Add logo folders (#4515) 2019-04-12 11:00:47 -07:00
Matthew Mosesohn
d39c273d96 Revert "Use K8s 1.14 and add kubeadm experimental control plane mode (#4317)" (#4510)
This reverts commit 316508626d.
2019-04-11 12:52:43 -07:00
Matthew Mosesohn
316508626d Use K8s 1.14 and add kubeadm experimental control plane mode (#4317)
* Use Kubernetes 1.14 and experimental control plane support

* bump to v1.14.0
2019-04-11 05:30:13 -07:00
Maxime Guyot
46ba6a4154 ansible-lint: when lines should not include Jinja2 variables (#4496) 2019-04-11 03:06:10 -07:00
Maxime Guyot
d8cbbc414e Add a PR template (#4491) 2019-04-11 03:04:14 -07:00
Maxime Guyot
ebae491e3f Add several issue templates (#4493) 2019-04-11 03:02:13 -07:00
Maxime Guyot
6f919e5020 Add CI for Ubuntu 18.04 on Packet (#4439) 2019-04-11 00:26:10 -07:00
Andreas Krüger
4ff851b302 Enable nodelocaldns by default (#4461)
* Enable nodelocaldns by default

* Enable nodelocaldns by default

* nodelocaldns is now default

* Disable enable_nodelocaldns for the addons CI jobs

Disable enable_nodelocaldns for the addons CI jobs to make sure things still work without nodelocaldns
2019-04-11 00:24:08 -07:00
Qasim Sarfraz
3af90f8772 disable cloud-routes for non-cloud plugin (#4443) 2019-04-10 23:50:09 -07:00
MarkusTeufelberger
cb54d074b5 Fix syntax of yaml in .gitlab-ci.yml file (#4409) 2019-04-10 23:46:10 -07:00
Andreas Krüger
9032e271f1 Upgrade CoreDNS to 1.5.0 (#4494) 2019-04-10 13:40:08 -07:00
Andreas Krüger
15597aa493 Do not force TCP connections to upstreams. (#4492) 2019-04-10 12:40:09 -07:00
Sergey
3b9d13fda9 Return back bind API server node loadbalancer to 127.0.0.1 for security purposes. (#4489) 2019-04-10 12:20:08 -07:00
Andreas Krüger
5e0249ae7c Add HAProxy as internal loadbalancer (#4480) 2019-04-10 05:56:18 -07:00
Remous-Aris Koutsiamanis
27958e4247 Fix "Prevent inventory.py from configuring an even number of nodes in etcd" #4399 (#4465)
by making clusters with fewer than 3 nodes have only 1 etcd node
2019-04-10 05:52:14 -07:00
Maxime Guyot
353afa7cb0 Fix ipip: false in calico v3 (#4473) 2019-04-10 05:50:15 -07:00
Maxime Guyot
e865c50574 Fix terraform fmt on contrib/terraform/aws (#4484) 2019-04-10 04:32:14 -07:00
Neven Miculinic
a30ad1e5a5 Added generic CNI network plugin (#4322)
* Added generic CNI network plugin

* Added CNI network plugin documentation

* added necessary fix
2019-04-10 04:16:15 -07:00
Robert Neumann
586ad89d50 Fix for unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels (#4320)
* Fix the file path for all.yml and k8s-cluster.yml

* Fix --node-labels namespace error "unknown labels specified"

* Update templates and configs kubelet node-labels
2019-04-10 04:14:12 -07:00
Sidharth Anupkrishnan
6caa639243 Update CoreDNS label as specified in the kubernetes coredns repository (#3920) 2019-04-10 04:12:13 -07:00
Maxime Guyot
80f31818df Add terraform validate for contrib/terraform/aws (#4438) 2019-04-10 02:14:14 -07:00
Maxime Guyot
854cc53fa5 Add CI for contrib/terraform/openstack (#4475) 2019-04-10 02:12:16 -07:00
MarkusTeufelberger
d2a1ac3b0c Add Ansible-lint CI step (#4411)
* Add ansible-lint as gitlab-ci step

* Fix jinja2 syntax in include_tasks that breaks ansible-lint

* Use a block scalar to get around gitlab quoting/escaping rules

* Run ansible-lint in verbose mode in CI
2019-04-10 02:04:16 -07:00
Andreas Krüger
a678d1be9d Update CI to use 2.9.0 release and update Dockerfile to now use 18.04 (#4472)
* Update CI to use 2.9.0 release and update Dockerfile to now use 18.04

* Update CI to use 2.9.0 release and update Dockerfile to now use 18.04

* Update the kubectl bin
2019-04-09 05:57:06 -07:00
André R. de Miranda
097806dfe8 Added tag kube-proxy (#4272)
Signed-off-by: André R. de Miranda <andre@miranda.work>
2019-04-09 05:25:06 -07:00
Abdulaziz AlMalki
7cdf1fd388 quote values for kube_oidc_groups_prefix and kube_oidc_username_prefix to accept a colon, e.g. oidc: (#4305)
This fixes the error: error converting YAML to JSON: yaml: line 36: mapping values are not allowed in this context

Signed-off-by: Abdulaziz AlMalki <almalki.a@gmail.com>
2019-04-09 05:23:06 -07:00
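The quoting matters because an unquoted value ending in a colon is parsed as a YAML mapping key; a minimal illustration (values are examples only):

```
kube_oidc_username_prefix: "oidc:"   # quoted, parsed as a plain string
kube_oidc_groups_prefix: "oidc:"
# kube_oidc_username_prefix: oidc:   # the unquoted form fails with
#   "mapping values are not allowed in this context"
```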
Maxime Guyot
a4e65c7ceb Upgrade to Ansible >2.7.0 (#4471) 2019-04-09 04:21:07 -07:00
Karen Almog
20ebb49568 Don't create security groups for a bastion host on openstack, if doesn't exist (#4291) 2019-04-09 04:01:09 -07:00
Andreas Krüger
aa162b0d5d Update kube-router to 0.2.5 (#4469) 2019-04-09 03:37:04 -07:00
Maxime Guyot
b15f3e182d add default routing to canal and disable bird checks (#4468)
Co-Author: Paweł Skrzyński
2019-04-09 02:45:07 -07:00
Andreas Krüger
4d39c1856e Fix jinja filters (#4470) 2019-04-09 02:19:06 -07:00
Maxime Guyot
b2fa84af61 Vagrant fix password prompt (#4457) 2019-04-09 00:59:05 -07:00
Maxime Guyot
913fed0089 kubeadmn init: add 'until' to make 'retries' effective (#4464)
an 'until' clause is required or 'retries' is ignored

(see note @ https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html#do-until-loops)
2019-04-09 00:21:04 -07:00
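A minimal sketch of the retry pattern this commit relies on; without `until`, Ansible ignores `retries`. The task name and command are illustrative, not the exact task from the role:

```
- name: kubeadm | Initialize first master
  command: kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml
  register: kubeadm_init
  retries: 3
  delay: 5
  until: kubeadm_init is succeeded
```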
Maxime Guyot
80ea18bd28 Disable download_once in Vagrant to workaround rsync error (#4448) 2019-04-09 00:19:05 -07:00
Markos Chandras
12c6b5c3eb openSUSE: Use Leap 15.0 instead of 42.3 (#4442)
* Vagrantfile: Bump openSUSE to Leap 15.0

* roles: container-engine: Add 'containerd' package for openSUSE

The 'containerd' package contains the docker-containerd and
docker-containerd-shim binaries. We also need to ensure that the latest
version is installed since an older version may already be present (eg GCE
images)

* Remove docker log-opts for opensuse

* roles: bootstrap-os: Use lowercase 'o' for openSUSE

OpenSUSE is not a valid family name. The correct one is openSUSE

* roles: bootstrap-os: Update zypper cache before first installation

The zypper cache may be outdated so ensure that it's fully updated
before we try and install the bootstrap packages.
2019-04-09 00:17:05 -07:00
Maxime Guyot
35c0010876 Rename inventory/sample/hosts.ini to fix vagrant up (#4447) 2019-04-09 00:15:06 -07:00
rptaylor
f52584a715 robust handling of API server SANs (#4435)
* robust handling of API server SANs

* use apiserver_loadbalancer_domain_name if it is defined, according to PR 3977
2019-04-08 08:10:35 -07:00
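An illustrative group-vars sketch of the variables involved; both apiserver_loadbalancer_domain_name and supplementary_addresses_in_ssl_keys end up as extra SANs on the API server certificate (domain and IP are examples):

```
apiserver_loadbalancer_domain_name: api.example.com
supplementary_addresses_in_ssl_keys:
  - 203.0.113.10
```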
Erwan Miran
09bbdadcee remove nodelocaldns iface on reset (#4460) 2019-04-08 02:26:25 -07:00
Xinghong Fang
d711a0c83f [nodelocaldns] expand tolerations on the daemonset (#4451) 2019-04-08 02:24:26 -07:00
Andreas Holmsten
01cf11b961 Run terraform fmt and add step to CI (#4405)
* Run terraform fmt

* Add terraform fmt to .terraform-validate CI step

* Add tf-validate-aws CI step

* Revert "Add tf-validate-aws CI step"

This reverts commit e007225fac.
2019-04-08 02:22:24 -07:00
Eric Ross
29825e6873 Missing ruamel.yaml from requirements.txt (#4446) 2019-04-08 02:20:27 -07:00
Andreas Krüger
d18ad63e49 Update nginx to 1.15. Update manifest and performance optimize (#4458) 2019-04-08 02:02:29 -07:00
Andreas Holmsten
3da392d1cf Add OWNERS to contrib/terraform (#4441) 2019-04-08 00:36:24 -07:00
Maxime Guyot
8947614d97 Upgrade to etcd v3.2.26 (#4444) 2019-04-08 00:34:25 -07:00
Victor Morales
7e4f4a96fc Replace iteritems() to items() in Jinja2 templates (#4437)
The iteritems() dictionary method has been removed in Python 3. Using
this method in Jinja2 templates limits execution to Python 2, which
will be deprecated in 2020[1]. This change replaces that method with
the items() method, as suggested on the official website[2].

[1] https://pythonclock.org/
[2] https://docs.ansible.com/ansible/latest/user_guide/playbooks_python_version.html#dict-iteritems
2019-04-08 00:32:26 -07:00
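A small Jinja2 illustration of the substitution (the node_labels variable here is hypothetical): items() works on both Python 2 and Python 3, while iteritems() only exists on Python 2:

```
{# hypothetical template fragment #}
{% for key, value in node_labels.items() %}
  {{ key }}: "{{ value }}"
{% endfor %}
```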
MarkusTeufelberger
301a371efe Update pypy3 on CoreOS to 7.0.0 (#4456) 2019-04-08 00:28:24 -07:00
Maxime Guyot
1a6df84c7a Upgrade to Helm 2.13.1 (#4445) 2019-04-07 07:04:25 -07:00
Andreas Krüger
2d38c1e20c Update premoderator to fix Github API throttle (#4424)
* Update premoderator to fix Github API throttle

* Update premoderator script

Add exit codes and document the exit code.

* Fix indentation
2019-04-06 12:12:26 -07:00
Maxime Guyot
9155339cf0 Fix pep8 warnings (#4368) 2019-04-05 12:51:22 -07:00
rptaylor
d8a023a92c Tell git to ignore .terraform directory (#4428)
The .terraform directory is populated when modules are downloaded:
https://www.terraform.io/docs/commands/get.html
"The modules are downloaded into a local .terraform folder. This folder should not be committed to version control."
2019-04-05 01:27:18 -07:00
Maxime Guyot
8ad74404c9 Remove bash-completion (#4431) 2019-04-05 01:23:22 -07:00
Maxime Guyot
1ce2f04f47 allow Suse OS family (#4430) 2019-04-04 03:02:51 -07:00
Xavi
20b12751af add Cinder allowVolumeExpansion option (#4415) 2019-04-04 02:36:50 -07:00
Maxime Guyot
e485fab7eb Add CI for contrib/terraform/ (#4133) 2019-04-04 01:42:52 -07:00
Maxime Guyot
adca353fe9 Use docker.io for calico (#4253) 2019-04-04 01:20:49 -07:00
Andreas Krüger
7a72e567d5 Update CoreDNS to 1.4.0 (#4422)
* Update CoreDNS to 1.4.0

* Update readme to reflect CoreDNS update
2019-04-04 00:40:50 -07:00
Andreas Krüger
3c050be0b0 Update nodelocaldns cache settings (#4423) 2019-04-04 00:38:51 -07:00
Andreas Krüger
41e684eb5a Update DNS Autoscaler to 1.4.0 (#4425)
* Update DNS Autoscaler

* Update downloads too

* Fix yamllint

* Fix yamllint
2019-04-04 00:36:51 -07:00
Erwan Miran
2067417ad4 jmespath is required when re-running cluster.yml (#4426) 2019-04-04 00:34:49 -07:00
Sergey
55890e1b82 keep compatibility as it was before (#4268) 2019-04-03 01:39:42 -07:00
Sergey
1e524c68d5 remove our config if docker start failed (#4260) 2019-04-03 01:37:44 -07:00
Sergey
740d8b0a26 enable kubelet client certificate rotation (#4081)
* enable kubelet client certificate rotation

* change to variable kubelet_rotate_certificates
2019-04-03 01:35:44 -07:00
Gautam Divgi
a8dd69cf17 Fixed cleanup-docker-orphans.sh to use docker-containerd-shim and containerd-shim (#4418) 2019-04-02 09:11:21 -07:00
Matthew Mosesohn
4fe2aa6bf7 Use install_cni init container for cni copy for calico/canal (#4416) 2019-04-02 03:32:36 -07:00
Chad Swenson
5d5c9cab19 Speed up old docker package removal (#4408)
Both the `yum` and `apt` modules support a list as input; this allows us to avoid the slower `with_items` approach, which can take a long time with a large number of cluster nodes.
2019-04-01 15:08:35 -07:00
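A sketch of the faster pattern under illustrative task and package names: the whole list is handed to the module in one transaction instead of looping with with_items:

```
- name: Remove legacy docker packages
  package:
    name:
      - docker
      - docker-engine
      - docker.io
    state: absent
```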
Matthew Mosesohn
5f12b7aedf Remove kubedns and dnsmasq. Move dns_late phase after apps (#4406)
Both the kubedns and dnsmasq modes have long been unmaintained.
We should run the dns_late steps at the end because sshd
makes DNS lookups during the Ansible run and has 2s timeouts
for each failed lookup while trying to connect to coredns before
it is ready.
2019-04-01 12:32:34 -07:00
Bort Verwilst
d71590bbd0 add 1.14.0 checksum, remove 1.11.* checksums (#4401) 2019-04-01 07:16:33 -07:00
MarkusTeufelberger
9ffc65f8f3 Yamllint fixes (#4410)
* Lint everything in the repository with yamllint

* yamllint fixes: syntax fixes only

* yamllint fixes: move comments to play names

* yamllint fixes: indent comments in .gitlab-ci.yml file
2019-04-01 02:38:33 -07:00
ml
483f1d2ca0 Calico felix - Fix jinja2 boolean condition (#4348)
* Fix jinja2 boolean condition

* Convert all felix variable to booleans instead.
2019-03-29 16:07:09 -07:00
tikitavi
1babba753d adapt inventory script to python 2.7 version (#4407) 2019-03-29 06:08:13 -07:00
johnstudarus
ed18a10571 Corrected cloud name (#4316)
The correct name is Packet, not Packet Host.
2019-03-29 00:28:13 -07:00
Dmitry Chepurovskiy
0440e45d65 Fix supplementary_addresses rendering error (#4403) 2019-03-29 00:26:13 -07:00
Stefan Prietl
2fb27c8521 Use static files in KubeDNS templating task (#4379)
This commit adapts the "Lay Down KubeDNS Template" task to use the static
files moved by pull request [1]

[1] https://github.com/kubernetes-sigs/kubespray/pull/4341
2019-03-28 06:26:43 -07:00
Qasim Sarfraz
f17f4ff963 Fix bootstrap-os role, failing to create remote_tmp (#4384)
* Fix bootstrap-os role, failing to create remote_tmp

* use ansible_remote_tmp hostvar
2019-03-28 06:24:43 -07:00
Sergey
e9c34fe038 Default values for the variables dns_servers and dns_domain are set in two files: (#3999)
values from the inventory in roles/kubespray-defaults/defaults/main.yml
hardcoded values in roles/container-engine/defaults/main.yml

dns_servers is set empty in roles/container-engine/defaults/main.yml and skydns_server is not set in the docker_dns_servers variables
also set a default value for manual_dns_serve

other variables in roles/container-engine/defaults do not need to be set
2019-03-28 06:22:44 -07:00
Dmitry Chepurovskiy
669ab10c17 Added livenessProbe for local nginx apiserver proxy liveness probe (#4222)
* Added configurable local apiserver proxy liveness probe

* Enable API LB healthcheck by default

* Fix template spacing and moved healthz location to nginx http section

* Fix healthcheck listen address to allow kubelet request healthcheck
2019-03-28 06:20:46 -07:00
Qasim Sarfraz
0a3cf1a087 Fix CA cert environment variable for etcd v3 (#4381) 2019-03-28 00:18:43 -07:00
Maxime Guyot
3511b55cf5 Increase CPU flavor for CI (#4389) 2019-03-27 16:26:48 -07:00
Chad Swenson
1f01b6546c Merge pull request #4396 from verwilst/feature/k8s-1.13.5
Upgrade to k8s 1.13.5
2019-03-27 13:47:39 -05:00
Bart Verwilst
0efa3e6392 Upgrade to k8s 1.13.5 2019-03-27 11:16:21 +01:00
Matthew Mosesohn
6d7f3c4405 Reduce jinja2 filters in coredns templates (#4390) 2019-03-26 11:09:17 -07:00
Michael Vorburger ⛑️
85e0fb32e6 clarify that kubespray now supports kubeadm (fixes #4089) (#4366) 2019-03-26 03:51:19 -07:00
Etienne
d0ae316934 Use proxy_env with kubeadm phase commands (#4325) 2019-03-26 03:03:19 -07:00
Dmitry Chepurovskiy
f6d280452f Added support of bastion host for reset.yaml (#4359)
* Added support of bastion host for reset.yaml

* Empty commit to trigger CI
2019-03-26 02:59:16 -07:00
Maxime Guyot
7fb5fbac37 Use wide for netchecker debug output (#4383) 2019-03-22 19:41:06 -07:00
Matthew Mosesohn
b7fd462944 Fix support for ansible 2.7.9 (#4375) 2019-03-20 11:29:42 -07:00
Matthew Mosesohn
ec08303f82 Revert "Fix #4237: update kube cert path (#4354)" (#4369)
This reverts commit ea7a6f1cf1.

This change modified the certs dir for Kubernetes, but did not move the directories for existing clusters.
2019-03-20 05:56:57 -07:00
Maxime Guyot
e640233947 Use sample inventory file in doc (#4052) 2019-03-18 01:43:15 -07:00
Dmitry Chepurovskiy
ea7a6f1cf1 Fix #4237: update kube cert path (#4354) 2019-03-17 23:55:11 -07:00
Peter Metz
38009a215a fix(contrib/metallb): adds missing become: true in role (#4356)
On CoreOS, without this, it fails to kubectl apply MetalLB due to lack of privileges.
2019-03-17 18:15:09 -07:00
Matthew Mosesohn
150a969cf4 Forcefully delete pods when necessary (#4328)
Pods on down/unresponsive nodes can't be deleted without
--force --grace-period=0.

Fixes #4314
2019-03-14 07:45:46 -07:00
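A hedged sketch of the forced-deletion pattern described above, written as an Ansible task; the task name and the pod/namespace variables are illustrative:

```
- name: Delete pod stuck on an unresponsive node
  command: >-
    {{ bin_dir }}/kubectl delete pod {{ pod_name }}
    --namespace={{ pod_namespace }} --force --grace-period=0
  delegate_to: "{{ groups['kube-master'][0] }}"
```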
Manuel Cintron
3c4cbf133e Adding ability to override dashboard replica count (#4344) 2019-03-13 13:58:25 -07:00
Matthew Mosesohn
fd2c47b56a Move most coredns templates to static files (#4341)
* Move most coredns templates to static files

This should speed up the task slightly

* yaml lint fixes
2019-03-12 21:17:31 -07:00
tikitavi
2560c4dda3 fixing dump of ordered dictionaries in inventory script (#4343) 2019-03-13 02:57:34 +03:00
tikitavi
254a0ab69d fix inventory script (#4342)
hosts are an ordered dictionary
remove ansible_user from the inventory file
2019-03-13 01:46:46 +03:00
tikitavi
7b3e59ed0a fix inventory script (#4339)
- fix order of entries when the new yaml file is created
- fix group in case there are no hosts in it
2019-03-12 11:02:44 -07:00
tikitavi
44de04be89 update inventory builder for public and private IP per node (#4323) 2019-03-07 18:30:12 +03:00
Bort Verwilst
33024731e4 Upgrade to k8s 1.13.4 (#4319) 2019-03-06 23:16:56 -08:00
chadswilson
d469282f1c add blockSize to IPPool spec for Calico >= v3.3.0 (#4224)
* add blockSize to IPPool spec for Calico >= v3.3.0

* fix "cidr" spec in Calico IPPool resource for my PR
2019-03-06 12:42:48 -08:00
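An example Calico v3.x IPPool manifest with an explicit blockSize (values are illustrative; blockSize is only honoured by Calico >= v3.3.0, as noted above):

```
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-pool
spec:
  cidr: 10.233.64.0/18
  blockSize: 24
  ipipMode: Always
  natOutgoing: true
```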
Matthew Mosesohn
acbf3db233 Remove hard dependence on facts for all nodes (#4304)
* Remove hard dependence on facts for all nodes

* Update main.yaml

* Update main.yaml
2019-03-05 03:04:39 -08:00
Matthew Mosesohn
adf6a7121f Reenable set_facts task for dns_late (#4312) 2019-03-01 05:39:30 -08:00
tikitavi
b73f009c07 rewrite inventory script to create inventory file in YAML format (#4303)
* rewrite inventory script to create inventory file in YAML format

* minor fixes to inventory script

* change requirements for the inventory script
2019-02-28 17:28:27 +03:00
Bort Verwilst
bbfd2dc2bd Add 1.12.6, sort arm64 descending (#4308)
* Add 1.12.6, sort arm64 descending

* remove 1.10.x checksums (EOL anyways)
2019-02-28 05:55:19 -08:00
Matthew Mosesohn
4fe61968cf Set default value for local_path_provisioner_enabled in role (#4309) 2019-02-28 05:36:08 -08:00
Anupam Basak
9e8e069b23 remove kube bridge on reset (#4250) 2019-02-26 00:32:00 -08:00
Peter Metz
26ca58419f feat(external-provisioner): adds support for local-path-provisioner (#4232)
* feat(external-provisioner/local-path-provisioner): adds support for local path provisioner

Helpful for local development but also in production workloads (once the
permission model is worked out) where you have redundancy built into the
software that uses the PVCs (e.g. a database cluster with synchronous
replication)

* feat(local-path-provisioner): adds debug flag, image tag group var

* fix(local-path-provisioner): moves image repo/tag to download role

* test(gce_centos7-flannel): enables local-path-provisioner in test case

* fix(addons): add image repo/tag to commented default values

* fix(local-path-provisioner): typo in jinja template for local path provisioner

* style(local-path-provisioner): debug flag condition re-formatted

* fix(local-path-provisioner): adds missing default value for debug flag

* fix(local-path-provisioner): syntax fix for debug if condition end

* fix(local-path-provisioner): jinja template syntax: if condition white space
2019-02-25 22:45:30 -08:00
etharendil
063faaae1c recursive option for kube ansible module (#4273)
the kube ansible module can be used with recursive: true,
which will process the directory given in -f, --filename recursively
2019-02-25 22:17:23 -08:00
Maxime Guyot
131c3d4d5b Add link to Kubespray.io (#4240) 2019-02-25 21:20:14 -08:00
Christian Berendt
44ee4b507c terraform: use openstackclient instead of novaclient (#4280)
The openstackclient is the preferred CLI for OpenStack
environments and should be used instead of novaclient.
2019-02-25 20:13:16 -08:00
Maxime Guyot
c36a0226d0 Add more links to the docs (#4204) 2019-02-25 20:11:23 -08:00
hikoz
67832aada9 changed_when:false (#4189) 2019-02-25 20:09:30 -08:00
johnstudarus
74727b085b Packet docs (#4160)
* Create packet.md

* Update README.md

* Update README.md

* Update packet.md

download the latest version

* Update packet.md
2019-02-25 20:07:38 -08:00
Maxime Guyot
bb495006c8 Update MetalLB to v0.7.3 (#4194) 2019-02-25 20:05:45 -08:00
hikoz
3d25b4dfc1 30MiB for gpu-device-plugin (#4227)
* 30MiB for gpu-device-plugin

* use vars for easier configuration
2019-02-25 20:03:53 -08:00
Wong Hoi Sing Edison
1c12c19150 weave: Upgrade to 2.5.1 (#4248)
Upstream Changes:

  - weave 2.5.1 (https://github.com/weaveworks/weave/releases/tag/v2.5.1)

Our Changes:

  - Sync templates with upstream changes
2019-02-25 20:02:00 -08:00
Sebastian Poxhofer
58dc641001 added hardware requirements in README.md (#4233)
* added hardware requirements in README.md

* added hardware requirements in README.md
2019-02-25 20:00:08 -08:00
Ryler Hockenbury
88249308a0 Add labels to vsphere cloud config (#4275) 2019-02-25 19:58:15 -08:00
Gabor Lekeny
b4aaa7b908 Speed up tasks (#4278)
* fact gathering should run only once per node
* eliminate ansible version check, it is at the beginning of each
  playbook
2019-02-25 19:56:23 -08:00
Christian Berendt
c386172be7 terraform: correct the spelling of Betacloud (#4282) 2019-02-25 19:38:32 -08:00
Andrey Zhelnin
c66e9a6d62 Disable become for localhost (#4287) 2019-02-25 19:36:44 -08:00
Vasilis Remmas
81801ce23b Add master toleration flag in dashboard deployment (#4290) 2019-02-25 19:34:47 -08:00
Etienne
7dfa39483f Make container storage repository configurable (#4284) 2019-02-25 19:29:32 -08:00
Matthew Mosesohn
b07641c3f3 Move kube_proxy_remove out of set_facts and set default (#4180) 2019-02-25 00:08:06 -08:00
Matthew Mosesohn
4638acfe81 Retry applying podsecurity policies (#4279) 2019-02-24 22:50:55 -08:00
Kaoet
aadef80404 Upgrade to latest version of ubuntu-nvidia-driver-installer. (#4296)
The latest version of ubuntu-nvidia-driver-installer contains a fix for
https://github.com/GoogleCloudPlatform/container-engine-accelerators/issues/90
which causes the installer pod to crash when driver is already loaded.
2019-02-24 22:22:48 -08:00
Frank Ritchie
9805fb7a34 Add flexvolume plugin dir to kubeadm kubelet (#4168)
This was already approved in #4106 but there are CI issues
with that PR due to references to kubernetes incubator.

After upgrading to Kubespray 2.8.1 with kubeadm enabled, Rook
Ceph volume provisioning failed due to the flexvolume plugin dir not
being correct. Adding the var fixed the issue.
2019-02-20 15:02:02 -08:00
Christian Berendt
7d2ba49969 Add CNCF CLA to the contributing document (#4281) 2019-02-20 06:47:17 -08:00
Peter Metz
f81bafa07b feat(vagrant/virtualbox): adds parameter to resize vbox disks (#4231)
Useful if the default 20GB is not enough in cases where you are using
the local path provisioner of rancher for example
2019-02-20 06:37:18 -08:00
Peter Metz
94892ab3a4 fix(vagrant): sets video RAM to 8 MB, avoids large default (256) (#4230) 2019-02-20 06:35:21 -08:00
Maxime Guyot
323d788f48 Add support for --enable-skip-login in Dashboard (#4265) 2019-02-19 23:24:29 -08:00
Abdulaziz AlMalki
eafab9636f fix wrong indent of oidc-username-prefix and oidc-groups-prefix in kubeadm config template (#4263) 2019-02-19 23:22:32 -08:00
Seungkyu Ahn
107bfb259a This PS fixes the bug where workers can't join the cluster (#4276)
because etc-kubernetes-manifests is not empty.
2019-02-19 22:13:59 -08:00
Rong Zhang
d4a36aa55b Merge pull request #4027 from riverzhang/kube-proxy
Add update server field in kube-proxy kubeconfig
2019-02-20 13:41:06 +08:00
Manuel Cintron
07b2894080 Adding ability to maintain existing Encryption Secrets at Rest. (#4255)
* Adding ability to maintain existing Encryption Secrets at Rest.

If secrets_encryption.yaml is present it will not be overwritten with a new kube_encrypt_token.

This should allow it to be set ahead of a playbook run, or maintained if cluster.yml is run on the same cluster and the ansible host does not have access to the secrets.

* Setting existing kube_encrypt_token across all master nodes in case it was missing in one or more nodes.
2019-02-19 07:31:45 -08:00
Florent Monbillard
802ac377b8 Fix typo in task description (#4243) 2019-02-19 06:06:29 -08:00
Roy Lenferink
738ab4239a Updated OWNERS file pointing to docs (#4184) 2019-02-18 05:49:36 -08:00
Ted Wexler
b5a895d1ec Run 'terraform fmt' in contrib/terraform/openstack (#4242) 2019-02-17 21:04:41 -08:00
Kaoet
23685b4537 Add image tag in "pause" container of nvidia driver installer. (#4247) 2019-02-17 21:02:30 -08:00
Chad Swenson
e552be76ce Docker apt repo name fix (again) (#4246)
For some reason 18.09 packages are now prefixed with `5:` in the download.docker.com apt repos
Followup to #4236
2019-02-14 10:19:19 -08:00
Ryler Hockenbury
eea22dfd40 Fix typo with docker-ce package versions (#4236) 2019-02-14 07:32:12 -08:00
Maxime Guyot
0a722942cc Use git tag when checking out for test upgrade (#4209) 2019-02-14 05:09:56 -08:00
Kaoet
192f4c4e96 Allow customizing container image path used in NVIDIA GPU addon. (#4229) 2019-02-14 03:51:38 -08:00
hikoz
e03588f431 use swapon -s (#4216) 2019-02-14 02:35:17 -08:00
Chad Swenson
8872b2e0c6 Fix calico when kube_override_hostname is set (#4235)
This fixes an issue where the `nodename` in calico's cni config json can fall out of sync with the k8s node name used by the calico pod if `kube_override_hostname` is set
2019-02-13 16:02:48 -08:00
Florent Monbillard
061f5a313b Explicitely set etcd endpoint in kubeadm-images.yaml (#4063)
Currently, the task `container_download | download images for kubeadm config images` fetches the etcd image even though it's not required (etcd is bootstrapped by kubespray, not kubeadm).

`kubeadm-images.yaml` is only a subset of `kubeadm-config.yaml`, therefore `kubeadm config images pull` will try to get everything in this list (including etcd)

```
# kubeadm config images list --config /etc/kubernetes/kubeadm-images.yaml
k8s.gcr.io/kube-apiserver:v1.13.2
k8s.gcr.io/kube-controller-manager:v1.13.2
k8s.gcr.io/kube-scheduler:v1.13.2
k8s.gcr.io/kube-proxy:v1.13.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6
```

When using the `kubeadm-config.yaml` though, it doesn't list etcd image:

```
# kubeadm config images list --config /etc/kubernetes/kubeadm-config.yaml
k8s.gcr.io/kube-apiserver:v1.13.2
k8s.gcr.io/kube-controller-manager:v1.13.2
k8s.gcr.io/kube-scheduler:v1.13.2
k8s.gcr.io/kube-proxy:v1.13.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/coredns:1.2.6
```

This change just adds the etcd endpoints in `kubeadm-images.yaml` to give kubeadm a hint that it doesn't need the etcd image for its bootstrapping, as etcd is "external".
I confess it is an ugly hack; a better way would be to use a single `kubeadm-config.yaml` for both tasks, but they are triggered by different roles (`kubeadm-images.yaml` is used by download, `kubeadm-config.yaml` by kubernetes/master) at different steps and I didn't want to refactor too many things to prevent breakage.

This is especially useful for offline installation where a whitelist of container images is mirrored on a local private container registry. `k8s.gcr.io/etcd` and `quay.io/coreos/etcd` are two different repositories hosting the same images but using *different tags*!
* coreos/etcd:v3.2.24   
* k8s.gcr.io/etcd:3.2.24 (note the missing 'v' in the tag name)
2019-02-13 12:44:12 -08:00
Chad Swenson
2e2ed3bd35 [SECURITY] Docker patches for CVE-2019-5736 (#4223)
This updates docker 18.06 and 18.09 with the two patches released
yesterday to address the new runc exploit. Details here:
https://kubernetes.io/blog/2019/02/11/runc-and-cve-2019-5736/
2019-02-13 01:50:53 -08:00
Manuel Cintron
7697baf0da Omit does not work in the context of yum_repository proxy. The ansible documentation specifies to use _none_ to disable the global proxy setting. (#4225) 2019-02-12 16:46:32 -08:00
Sorin Sbarnea
22a5a00c49 Improve kubeadm join tasks (#4206)
Fix issue where `kubeadm join` could wait forever to join.

Fix issue where the output of `kubeadm join` was not reaching the user, making
it impossible to find the cause of the failure.

The new behaviour is to first attempt to join without bypassing the
verification checks and to display them if needed.

If this fails it still attempts to join by ignoring the checks, in
order to match the previous behavior.

A timeout of 60 seconds is allocated for joining.

Related-bug: #3973
2019-02-12 13:42:56 -08:00
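A rough sketch of the two-step join described above: attempt a normal join first, surface the error, then retry ignoring preflight checks. Task names and the exact command line are illustrative:

```
- block:
    - name: Join node to the cluster
      command: timeout -k 60s 60s kubeadm join --config /etc/kubernetes/kubeadm-client.conf
      register: kubeadm_join
  rescue:
    - name: Show the join failure before retrying
      debug:
        msg: "{{ kubeadm_join.stderr_lines }}"
    - name: Retry join, ignoring preflight errors
      command: >-
        timeout -k 60s 60s kubeadm join
        --config /etc/kubernetes/kubeadm-client.conf
        --ignore-preflight-errors=all
```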
Robert Neumann
8b289ad9e1 Fix the file path for all.yml and k8s-cluster.yml (#4210) 2019-02-11 14:55:41 -08:00
Maxime Guyot
6a33411d65 Add an option for helm init --wait (#4202) 2019-02-11 14:32:26 -08:00
hikoz
9a91ef8628 change permission after unarchive (#4191) 2019-02-11 14:21:38 -08:00
Sergey
fbce6349c4 check that kube_pods_subnet and kube_service_addresses are a valid IP network range, not a single IP address (#4188) 2019-02-11 14:12:06 -08:00
Maxime Guyot
954676b3d8 Update the admin cert paths (#4135) 2019-02-11 14:10:10 -08:00
MarkusTeufelberger
e2ad6aad5a bootstrap: rework role (#4045)
* bootstrap: rework role

* support being called from a non-root user
* run some commands in check mode
* unify spelling/task names

* bootstrap: fix wording of comments for check_mode: false

* bootstrap: remove setup-pipelining task
2019-02-11 14:04:27 -08:00
Chad Swenson
038a2eb862 Merge pull request #3949 from trogeat/patch-fix-missing-ca-cert-apiserver
kubespray: fix missing ca-certificate path in apiserver
2019-02-11 15:40:04 -06:00
Manuel Cintron
5d146e52fe If a centos or rhel node is not configured with the extras repo, installation of required packages (python-httplib2 in particular) will fail later on. (#4213) 2019-02-11 13:27:02 -08:00
Jeff Bornemann
c41c1e771f OCI Cloud Provider Update (#4186)
* OCI subnet AD 2 is not required for CCM >= 0.7.0

Reorganize OCI provider to generate configuration, rather than pull

Add pull secret option to OCI cloud provider

* Updated oci example to document new parameters
2019-02-11 12:08:53 -08:00
tikitavi
befa8a6cbd fix error with delete host in inventory.py script (#4203)
* fix error with delete host in inventory.py script

* minor fix
2019-02-11 15:57:51 +03:00
Karl
85b77f7c22 Remove Ubuntu Bionic specific vars file - breaks multi-arch (#3974) 2019-02-11 00:04:27 -08:00
Maxime Guyot
6b3f7306a4 Add support for arm64 images for hyperkube, kubeadm and cni_binary (#4176) 2019-02-09 02:08:57 -08:00
Earl C. Ruby III
ba5c0fa364 Tell Git to ignore the inventory/mycluster directory (#3900)
The inventory/mycluster directory gets created when someone follows
the instructions in README.md, but it should never be committed to
the kubespray repo. Ignore it.
2019-02-07 23:30:28 -08:00
Maxime Guyot
2a92fd2f14 Update docs/roadmap.md (#4198) 2019-02-07 07:43:35 -08:00
Maxime Guyot
7e974f1401 Fix MetalLB library (#4195) 2019-02-07 17:31:53 +03:00
Matthew Mosesohn
8373fa393a Update CNAME 2019-02-07 16:30:25 +03:00
Matthew Mosesohn
613841381d Create CNAME 2019-02-07 16:28:44 +03:00
Maxime Guyot
9e76aafc1c Publish docs with docsify (#4193)
* Add docsify website

* Add website CI
2019-02-07 04:52:08 -08:00
Matthew Mosesohn
9b5096ab10 Set theme jekyll-theme-slate 2019-02-07 15:47:50 +03:00
joakimr-axis
01d70f2c7c Update flannel version to v0.11.0 (#4190)
Change-Id: I27d670803bea82a68d5eb0e49d4677f4afdce55f
2019-02-07 04:33:01 -08:00
Chad Swenson
6878c2af4e Fix kube_hostname_override inconsistencies (#4185) 2019-02-06 22:20:11 -08:00
Bort Verwilst
db2b76a22a update k8s to 1.13.3 (#4192)
* update k8s to 1.13.3

* update README as well
2019-02-06 10:48:05 -08:00
tikitavi
263c8731f2 add to inventory.py script ability to indicate ip ranges (#4182)
* add to inventory.py script ability to indicate ip ranges

* add test for range2ip function for inventory.py script

some fixes

* add negative test for range2ip function for inventory.py script
2019-02-06 18:22:13 +03:00
peerapach
69e5deeccc Fix newline issue of priorityClassName when enable tolerations (#4164) 2019-02-04 12:59:01 -08:00
Matthew Mosesohn
2e1e27219e Refactor collect-info.yaml playbook (#4157)
Run only commands that apply to the currently deployed cluster (only get
calico info and skip weave/flannel when deploying calico, for example).

Add helm release info if helm is deployed
2019-02-04 12:46:48 -08:00
Danny Kulchinsky
226d5ed7de [Calico] Define FELIX_KUBENODEPORTRANGES when kube-proxy in ipvs mode (#4173)
* Define FELIX_KUBENODEPORTRANGES when kube-proxy in ipvs mode

* ensure kube_apiserver_node_port_range is defined
2019-02-04 12:42:40 -08:00
Earl C. Ruby III
52e0aa7a80 Install the latest filesystem creation packages (#3904)
This PR ensures that the e2fsprogs and xfsprogs packages are
installed on all Kubernetes nodes and that the packages are
the latest versions. It also ensures that the nodes can
create XFS filesystems when necessary, since not all distros
install xfsprogs by default.

e2fsprogs - ext2/ext3/ext4 file system utilities
xfsprogs - Utilities for managing the XFS filesystem
2019-02-04 12:23:33 -08:00
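A minimal sketch of the corresponding task; `state: latest` mirrors the commit's intent of keeping the tools current (the task name is illustrative):

```
- name: Install filesystem utilities
  package:
    name:
      - e2fsprogs   # ext2/ext3/ext4 utilities
      - xfsprogs    # XFS utilities
    state: latest
```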
peerapach
bd9474bafd fix kubeadm-setup when enable access_ip (#4145) 2019-02-01 20:10:34 -08:00
Sorin Sbarnea
316b73178d Add timeout to Get current version of calico cluster version (#4149)
Avoid waiting forever for this task that should be very quick.

Fixes: #4148
2019-02-01 20:09:04 -08:00
Samina Fu
58c71d8ea6 Add Setting Multi on group_vars (#4054) 2019-01-31 23:48:13 -08:00
Peter Metz
e245e935aa fix(vagrant): sets ansible.inventory_path to file not dir (#4153)
This fixes the issue where if there was a hosts.ini file present in the
inventory directory, then Vagrant would set an incorrect path as
ansible.inventory_path
2019-01-31 23:46:52 -08:00
Manuel Cintron
143e2272ff Fixing an issue where installing docker-ce-18.09 on rhel7 nodes (or potentially centos 7) without an enabled extras repo fails because container-selinux >= 2.9 is required. Checking for container-selinux upfront should obviate the need to add an extras repo if the node is able to find it from another source. (#4161) 2019-01-31 16:19:48 -08:00
Vasilis Remmas
cd7924f8c9 Add oidc prefixes to kubeadm templates (#4159) 2019-01-31 15:31:43 -08:00
Erwan Miran
7f93a5a0f5 Fix deprecation warnings (#4130)
* use not deprecated ansible_play_hosts variable

* Using tests as filters is deprecated

* Fix deprecation warning about pkg list
2019-01-31 14:57:22 -08:00
Danny Kulchinsky
1abd3cf3d7 Update calico version in README (#4143) 2019-01-31 14:52:43 -08:00
Petr Ruzicka
91e2d61cf2 Adding link to ../../contrib in README (#4097) 2019-01-31 14:44:06 -08:00
Erwan Miran
f6d60a7e89 Calico: Ability to define the default IPPool CIDR (instead of kube_pods_subnet) (#4131)
* Calico: Ability to define the default IPPool CIDR (instead of kube_pods_subnet)

* Documentation for calico_pool_cidr (and calico_advertise_cluster_ips which has been forgotten...)
2019-01-31 13:39:13 -08:00
Maxime Guyot
40f1c51ec3 Add support for Packet with Terraform (#4043)
* Add support for Packet with Terraform

Co-Author: johnstudarus <john@jhlconsulting.com>

* removed advanced features to streamline

* clarifying usage

* Update README.md

provide a better test to validate things are working OK

* Update README.md

clarifying what to set

* minor wordsmithing

* Fix admin cert path

* clarifying how to configure keys

* enabling kubeconfig_localhost

pull over the configuration file via playbooks rather than the key files individually

* Create output.tf

* Add support for node specific plans
2019-01-31 07:24:36 -08:00
Thomas Nys
68fd7e39da Set cluster DNS correctly in case of nodelocal dns cache (#3879)
* Set cluster DNS correctly in case of nodelocal dns cache

* Pass in cluster_ip based on dns mode

* Disable nodelocaldns by default

* Fix syntax error

* Fix syntax issue

* Add nodelocaldns ip to vars of node installation

* Change location of nodelocaldns_ip

* Try to remove newlines from jinja template

* Add debug for config file

* Move parameter logic outside of template

* Adapt templates after feedback

* Remove debugging
2019-01-28 23:39:27 -08:00
wangxf
a096761306 [PR-Calico]Support calico 3.4.0 (#4102)
* Support calico 3.4.0

Signed-off-by: wangxf1987 <xiaofeix.wang@gmail.com>

* Remove symlink + cni conflist template when 3.3.0+, handle Canal, addition of install-cni: sidecar(3.3.0) or initcontainer(3.4.0), KUBECONFIG_FILEPATH, calico_cert_dir, advertise cluster ips

* scheduler.alpha.kubernetes.io/critical-pod deprecated since 1.12
2019-01-28 11:03:49 -08:00
Erwan Miran
d790ec96d8 Fixup 4125: Debug agents when requests time out (#4132) 2019-01-28 10:22:43 -08:00
Erwan Miran
5e260fe23a Fixup 4094: Debug agents when nothing is return (#4125) 2019-01-28 03:33:18 -08:00
Florent Monbillard
2054a98cf7 Run kubeadm and hyperkube outside of local_release_dir (#4098)
Addressing the discussion started in #4064, this PR moves kubeadm and
hyperkube binaries to /usr/local/bin before running them on the master
nodes.

It is to address the case where local_release_dir points to /tmp
(kubespray default) and /tmp is mounted with noexec mode, preventing
any binaries from being run in that partition.

In role "node", we still move kubeadm to bin_dir only on the worker
nodes.
2019-01-28 02:00:49 -08:00
Sergey
ce8ba1f170 create artifacts_dir (#4079) 2019-01-28 01:59:15 -08:00
Danny Kulchinsky
595d6427ac [Nodelocal DNS cache] Mount host /run/xtables.lock in nodelocaldns container (#4074)
* Mount host /run/xtables.lock in nodelocaldns container

* fix typo in nodelocaldns daemonset manifest yml

* Add prometheus scrape annotation, updateStrategy and reduce termination grace period

* fix indentation

* actually fix it..

* Bump k8s-dns-node-cache tag to 1.15.1 (fixes https://github.com/kubernetes/dns/issues/282)
2019-01-28 01:57:40 -08:00
Aivars Sterns
39dc61b948 add miouge1 to reviewers (slack - maxguy) (#4108) 2019-01-28 00:42:22 -08:00
Danny Kulchinsky
96688269f8 Support both --address and --bind-address for scheduler and controller-manager (#4112) 2019-01-27 23:43:34 -08:00
Rong Zhang
55aa58ee2e Merge pull request #4025 from riverzhang/download-images
Fix kubeadm config images pull
2019-01-28 15:41:15 +08:00
Erwan Miran
556a8d68bc Set IP env var to autodetect when calico_ip_auto_method is defined (#4105) 2019-01-27 23:09:18 -08:00
rongzhang
3ed5f89cf5 Add update server field in kube-proxy kubeconfig
I know this is a bit of a hack.
If you use a cloud LB, you can use kubeadm's controlPlaneEndpoint to configure kube-proxy's server field.
But nginx-proxy is not started when kubeadm init runs.
2019-01-28 14:45:43 +08:00
rongzhang
8d0158ceeb Fix kubeadm config images pull
Supported by kubeadm v1.11
2019-01-28 14:42:55 +08:00
Peter Metz
fcd895d032 fix(vagrant): forces flannel interface as eth1 (#4070)
Without this, pods cannot communicate with each other by default (broken
networking)

Closes #2114
2019-01-26 13:38:37 -08:00
Erwan Miran
61d88b8db2 Fix random failure in debug: var=result.content|from_json (#4094)
* Fix random failure in debug: var=result.content|from_json

* netchecker agents are deployed on all k8s-cluster group members

* reducing limits/requests is not enough, switching to n1-standard-2

* gce_centos7 need more cpu
2019-01-25 08:14:22 -08:00
Chad Swenson
3e52f1a4e9 Merge pull request #4091 from doughgle/master
Introduce `calico_upgrade_url` var for Calico upgrade tool.
2019-01-23 17:39:59 -06:00
Douglas Hellinger
4479cc48fe Introduce calico_upgrade_url var for Calico upgrade tool.
So that the binary can be sourced from anywhere, not only GitHub.
2019-01-23 16:19:27 +08:00
Chad Swenson
5708914699 Merge pull request #4088 from chadswen/bootstrap-rhel-epel-fixes
Fix epel_enabled and RHEL support in bootstrap-os
2019-01-22 17:13:10 -06:00
Chad Swenson
881be9b741 Fix epel_enabled and RHEL support in bootstrap-os
Looks like `epel_enabled` was not configured for the epel install in `bootstrap-centos.yml`. Also, there were no conditionals that would trigger bootstrap for RHEL.
2019-01-22 16:40:02 -06:00
Chad Swenson
e6f1c4df7f Merge pull request #4085 from chadswen/docker-systemd-after-containerd
Fix docker 18.09.1 systemd service
2019-01-22 13:33:34 -06:00
Chad Swenson
e2592f1ce2 Fix docker 18.09.1 systemd service
The `docker-ce` 18.09.1 packaging missed an `After` dependency on containerd in the systemd service. Upstream PR: https://github.com/docker/docker-ce-packaging/pull/290
2019-01-22 11:19:54 -06:00
Matthew Mosesohn
77d31e679a fixup external kube-apiserver port (#4075) 2019-01-21 14:43:27 +03:00
Florent Monbillard
decbcdc423 Use external LB IP for external api endpoint (#4060)
* Use external LB IP for external api endpoint

Use loadbalancer_apiserver.address instead of apiserver_loadbalancer_domain_name for kudadm init --apiserver-advertise-address argument

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#options states apiserver-advertise-address needs to be a IPv4 or IPv6 address

* only use loadbalancer IP if it is defined
2019-01-21 12:27:42 +03:00
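An illustrative group-vars sketch: when loadbalancer_apiserver is defined, its address (an IP, as kubeadm requires) is now preferred over the domain name for --apiserver-advertise-address (values are examples):

```
apiserver_loadbalancer_domain_name: lb.example.com
loadbalancer_apiserver:
  address: 203.0.113.20
  port: 6443
```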
Chad Swenson
e3ffa21303 Merge pull request #4019 from chadswen/kubeadm-env
Fix PATH for kubeadm init
2019-01-18 11:27:57 -06:00
Chad Swenson
f2ecda6f0f Merge pull request #4059 from chadswen/helm-version-bump
Update helm version for security and stability fixes
2019-01-18 11:25:42 -06:00
Chad Swenson
26f6f1f62e Merge pull request #4050 from chadswen/docker-18.09.1
Bump docker 18.09 to the latest patch
2019-01-18 11:23:44 -06:00
Matthew Mosesohn
28aee0fc34 Update OWNERS_ALIASES (#4068) 2019-01-18 18:58:04 +03:00
Bort Verwilst
f97cb4e761 Add 1.12.5 checksums (#4067) 2019-01-18 07:16:43 -08:00
Chad Swenson
405198acd0 Update helm version for security and stability fixes
Helm v2.12.2 has fixes for a security vuln, and there have been several improvements since our last update.
2019-01-16 11:03:23 -06:00
Matthew Mosesohn
eecaba6b84 Generate external admin.conf with kubeadm (#4056)
* Generate external admin.conf with kubeadm

* Fix apiserver sans
2019-01-16 16:30:50 +03:00
Thomas Rogeat
83e11f9ef7 kubespray: fix missing ca-certificate path in apiserver 2019-01-16 11:48:24 +01:00
Chad Swenson
5a7ac7e5c1 Merge pull request #3984 from dannyk81/calico_xtables_lock
[calico/canal] mount host's xtables lock and enable calico locking for <v3.2.1
2019-01-15 23:13:02 -06:00
Chad Swenson
c15c933ce8 Bump docker 18.09 to the latest patch
Docker 18.09.1 is out and it includes some fixes that are quite critical for RHEL distros, details here: https://docs.docker.com/engine/release-notes/#18091
2019-01-15 13:54:58 -06:00
Chad Swenson
0697ab4b4f Merge pull request #4048 from chadswen/readonly-writable-fix
Fix kubeadm config extra volumes
2019-01-15 13:02:04 -06:00
Chad Swenson
13e3e867ac Fix kubeadm config extra volumes
I found a potential use case where `writable` could be null and therefore
not treated like a boolean, so this adds an extra default statement to
avoid negating a non-boolean value, which would lead to an undefined result. refs #4020
2019-01-15 12:35:22 -06:00
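The gist of the extra default statement, sketched as a Jinja2 expression (the variable name is illustrative):

# Hypothetical template fragment: default writable to false before negating it,
# so a null value cannot turn the negation into an undefined boolean.
readOnly: {{ not (volume.writable | default(false)) }}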
Chad Swenson
cc30220f01 Merge pull request #4044 from chadswen/lvp-cm-fix
Fix local-volume-provisioner configmap template
2019-01-15 09:08:08 -06:00
Danny Kulchinsky
257019d424 Mount host's xtables lock and enable calico locking for <v3.2.1 2019-01-14 17:16:29 -05:00
Chad Swenson
4959bfc1b3 Merge pull request #3950 from elementyang/pr-registry
fix registry_storage_class equals empty string
2019-01-14 15:45:09 -06:00
Chad Swenson
301671ae19 Merge pull request #4026 from riverzhang/bind-address
Use --bind-address instead of --address
2019-01-14 15:35:00 -06:00
Chad Swenson
1e09fd8e0f Merge pull request #3970 from woopstar/image_builder_1
Add image builder to create Docker VMs for kube-virt
2019-01-14 15:21:58 -06:00
Chad Swenson
f10f7d0e84 Merge pull request #3975 from kskewes/arm64-urls
Update kubectl and etcd download URLs for multi-arch
2019-01-14 15:04:29 -06:00
Chad Swenson
3ee5aa0d6b Fix local-volume-provisioner configmap template
Looks like the template is removing the trailing space between storage
class entries, and since CI only has one storage class we never hit this
issue. This change will prevent the yaml from printing on a single line
when multiple storage classes are defined.
2019-01-14 14:28:00 -06:00
Chad Swenson
fce8712bff Merge pull request #4033 from MarkusTeufelberger/pypy_portable
Use Pypy portable on coreos
2019-01-14 12:30:47 -06:00
Chad Swenson
2051bf2b67 Merge pull request #4028 from riverzhang/v1.13.2
Upgrade kubernetes to v1.13.2
2019-01-14 10:00:15 -06:00
Markus Teufelberger
87c9a871b9 bootstrap-os: use the systemd module to stop and mask locksmithd 2019-01-12 15:06:01 +01:00
Markus Teufelberger
5e2c14e916 bootstrap-os: simplify pip3 installation on coreos 2019-01-12 15:05:33 +01:00
Markus Teufelberger
5b5546adf1 bootstrap-os: Install pypy3 portable 2019-01-12 15:04:33 +01:00
rongzhang
0b09c8154a Upgrade kubernetes to v1.13.2 2019-01-11 14:32:42 +08:00
rongzhang
bab2e5ed0d Use --bind-address instead of --address
--address deprecated
2019-01-11 12:22:47 +08:00
Chad Swenson
7c620ade85 Merge pull request #4020 from chadswen/kubeadm-config-field-updates
Fix readOnly flag in kubeadm-config.v1beta1.yaml.j2
2019-01-10 16:30:56 -06:00
Chad Swenson
1d9c0c7d17 Fix readOnly flag in kubeadm-config.v1beta1.yaml.j2
In v1beta1 of `ClusterConfiguration` the extraVolumes `writable` field was changed to `readOnly` and its boolean value must be negated.

Also, the json field for `useHyperKubeImage` was incorrectly capitalized.
2019-01-09 20:43:35 -06:00
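For illustration, a sketch of how those fields appear in a v1beta1 ClusterConfiguration (the volume name and paths are placeholders):

# Hypothetical kubeadm v1beta1 fragment: writable from earlier config versions
# becomes readOnly with the value negated, and the hyperkube toggle is
# spelled useHyperKubeImage.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
useHyperKubeImage: true
apiServer:
  extraVolumes:
  - name: etc-pki
    hostPath: /etc/pki
    mountPath: /etc/pki
    readOnly: true   # was "writable: false" before v1beta1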
Chad Swenson
aa1d5b8970 Fix PATH for kubeadm init
Right now we're consistently getting warnings about kubelet not found in
path during `kubeadm init`. We fixed this for `kubeadm join` in #3342, and this brings the change to init
as well.
2019-01-09 18:38:02 -06:00
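A minimal sketch of the fix pattern, assuming kubespray-style variables such as bin_dir and kube_config_dir (the task name and paths are illustrative): extend PATH for the init command the same way it was done for join.

# Hypothetical task: make sure kubeadm can find kubelet during init.
- name: kubeadm | Initialize first master
  command: "{{ bin_dir }}/kubeadm init --config={{ kube_config_dir }}/kubeadm-config.yaml"
  environment:
    PATH: "{{ bin_dir }}:{{ ansible_env.PATH }}"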
Sascha Marcel Schmidt
435993891b fix assertions, use msg instead of message (#3913) 2019-01-09 11:01:47 -08:00
Chad Swenson
1d5a9464e2 Merge pull request #4009 from chadswen/lvp-fixup
Bugfixes for Local Volume Provisioner
2019-01-09 11:22:28 -06:00
Chad Swenson
e88b8f247a Merge pull request #3996 from Bobonium/issue_3586_kube_router_with_external_loadbalancer_not_working
use api server loadbalancer ip if external loadbalancer is used (fixes kube-router deployment)
2019-01-09 11:20:38 -06:00
Chad Swenson
880c9c6b48 Merge pull request #4016 from mcntrn/download_file_basic_auth
Added optional basic auth parameters
2019-01-09 11:15:05 -06:00
Manuel Cintron
7633e6d582 Added pass-through parameters to enable basic auth for downloads 2019-01-08 19:36:13 -06:00
Chad Swenson
72802e4d8d Bugfixes for Local Volume Provisioner
- Fixed an issue where storage class host directories were looped
through on more target hosts than intended
- Fixes examples in the LVP `README.md` to use nested dicts instead of a
list of dicts
2019-01-08 17:45:20 -06:00
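A rough sketch of the corrected README shape, nested dicts keyed by storage class name rather than a list of dicts (the class names, paths, and variable name are placeholders):

# Hypothetical storage class definition for the local volume provisioner.
local_volume_provisioner_storage_classes:
  fast-disks:
    host_dir: /mnt/fast-disks
    mount_dir: /mnt/fast-disks
  slow-disks:
    host_dir: /mnt/slow-disks
    mount_dir: /mnt/slow-disks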
Wilmar den Ouden
4fb8adb9e4 More dynamic local-storage-provisioner approach (#3472)
* Makes local volume provisioner more dynamic

* Correct variable name in local storage provisioner defaults

* Updates external-provisioner readme

* Updates variable naming to be more clear, more documentation, fixes sample inventory

* Variable refactor, untangled some jinja2 loops

* Corrects variable name

* No variable substitution in dict keys, replaced with anchor

* Fixes default storage_classes dict, inline docs

* Fixes spelling in inline docs

* Addresses comments in review

* Updates all the defaults

* Fix failing CI task

* Fixes external provisioner daemonset
2019-01-08 12:36:44 -08:00
Chad Swenson
5c52a830d2 Update kubernetes dashboard to latest patch (#3995) 2019-01-08 09:46:20 -08:00
Julien C
2c8d75afb7 Remove --limit option to select node to delete (#4001)
--limit doesn't work when using remove-node.yml as there is group listing with "hosts: kube-master" in the playbook. Thus, remove-node/pre-remove/post-remove tasks are skipped as they are filtered by group "hosts: kube-master"
2019-01-08 12:09:18 +01:00
Andreas Holmsten
4d5b41b8db Allow override of bind addr for controller-manager and scheduler (#3968)
* allows overriding the bind addresses for controller-manager and scheduler

Useful for Prometheus metrics monitoring

* Add bind addr override support in kubeadm/v1beta1

Adds support for override of bind addresses for controller-manager
and scheduler in kubeadm/v1beta1

* Move location of bind address vars

* Remove double declaration of schedulerExtraArgs
2019-01-07 20:41:54 -08:00
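A sketch of what the override enables, for example to expose metrics to Prometheus (the variable names are illustrative and may differ from kubespray's actual defaults):

# Hypothetical group_vars override ...
kube_controller_manager_bind_address: 0.0.0.0
kube_scheduler_bind_address: 0.0.0.0

# ... rendered into a kubeadm v1beta1 config roughly as:
controllerManager:
  extraArgs:
    bind-address: "{{ kube_controller_manager_bind_address }}"
scheduler:
  extraArgs:
    bind-address: "{{ kube_scheduler_bind_address }}"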
Bobonium
11d9c2e2c3 use api server loadbalancer ip if external loadbalancer is used - this fixes the broken kube-router deployment 2019-01-07 23:06:52 +01:00
Andreas Kruger
352fbd71e7 Merge branch 'master' of github.com:kubernetes-sigs/kubespray into image_builder_1 2019-01-04 10:18:23 +01:00
Andreas Kruger
2706633f81 Update OWNERS file 2019-01-04 10:18:05 +01:00
Andreas Kruger
50af3cf6c1 Added owners file 2019-01-04 10:16:07 +01:00
Aivars Sterns
39d7503069 Merge pull request #3959 from elementyang/pr-ingress
fix ingress nodeSelector label
2019-01-04 08:58:16 +00:00
Karl Skewes
41434ce080 Update kubectl and etcd download URLs for multi-arch 2019-01-04 21:44:57 +13:00
MarkusTeufelberger
f72ed13f3c remove os_family variable from bootstrap-os (#3962)
* remove os_family variable from bootstrap-os

* quote the conditions another time to fix the syntax error
2019-01-03 11:28:03 -08:00
Andreas Kruger
0fec370dcd Minor changes 2019-01-03 15:41:31 +01:00
Andreas Kruger
bf63569184 Add image builder to create Docker VMs for kube-virt 2019-01-03 15:34:37 +01:00
okamototk
8216e821d3 Fix kubeadm v1beta1 configuration taint (#3928)
* Use the same master node taint as kubeadm configuration v1alpha3 and earlier.
2019-01-03 03:42:23 -08:00
Andreas Krüger
13efa95ef7 Run less CI jobs on each PR (#3967) 2019-01-03 01:26:38 -08:00
Anton Patsev
e25237455c Fix mixup http/https in bootstrap-debian.yml (#3963)
* Fix mixup http/https in bootstrap-debian.yml

* Update bootstrap-debian.yml
2019-01-03 00:18:09 -08:00
Andreas Krüger
b38ed2c959 Update to Dockerfile used for releasing 2.8 and 2.8.1 (#3966) 2019-01-03 00:16:35 -08:00
Andreas Holmsten
a34139e19e (Re)add line break for supplementary addr in SANs (#3952)
The change implemented in #3908 removed line breaks for supplementary
addresses in kubeadm SANs, causing errors in the config file and
failure to bring the cluster up. This commit reimplements line breaks
between supplementary addresses.
2019-01-03 00:12:00 -08:00
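A sketch of the intended rendering, assuming a variable like supplementary_addresses_in_ssl_keys (the variable name and template structure are illustrative): each supplementary address must land on its own line in the generated kubeadm config.

# Hypothetical template fragment producing one SAN entry per line.
apiServerCertSANs:
{% for san in supplementary_addresses_in_ssl_keys %}
- {{ san }}
{% endfor %}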
Chad Swenson
80379f6cab Fix kube-proxy configuration for kubeadm (#3958)
- Creates and defaults an ansible variable for every configuration option in the `kubeproxy.config.k8s.io/v1alpha1` type spec
  - Fixes vars that were orphaned by the removal of the non-kubeadm deployment
  - Fixes previously hardcoded kubeadm values
- Introduces a `main` directory for role default files per component (requires ansible 2.6.0+)
  - Split out just `kube-proxy.yml` in this first effort
- Removes the kube-proxy server field patch task

We should continue to pull out other components from `main.yml` into their own defaults files as I did here for `defaults/main/kube-proxy.yml`. I hope for and will need others to join me in this refactoring across the project until each component config template has a matching role defaults file, with shared defaults in `kubespray-defaults` or `downloads`
2019-01-03 00:04:26 -08:00
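For illustration, the layout this introduces (a per-component file under defaults/main/, supported by Ansible 2.6.0+); the file path and variable names below are placeholders, not the exact kubespray contents:

# Hypothetical roles/.../defaults/main/kube-proxy.yml, one variable per
# kubeproxy.config.k8s.io/v1alpha1 option, for example:
kube_proxy_mode: iptables
kube_proxy_metrics_bind_address: 127.0.0.1:10249
kube_proxy_min_sync_period: 0s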
MarkusTeufelberger
d58b338bd8 Update the version of pypy used on CoreOS bootstrap-os (#3922)
* Update the version of pypy used on CoreOS bootstrap-os

* update the pip installation process on CoreOS
2019-01-02 06:17:20 -08:00
elementyang
e1e13b68b3 fix ingress nodeSelector label 2018-12-29 14:41:23 +08:00
elementyang
90ee5df413 fix registry_storage_class equals empty string 2018-12-29 14:31:47 +08:00
Rong Zhang
5834e609a6 Add scale master features (#3946)
* Add scale master features

* Add certificate management with kubeadm

* Add kubeadm kubeconfig

* Fix yaml roles error

* fix upgrade cluster failure

* force update certs and keys when you reconfigure the cluster
2018-12-27 23:27:27 -08:00
elementyang
532e97c542 fix registry_storage_class equals empty string 2018-12-28 14:23:19 +08:00
Markos Chandras
d156449819 roles: docker: Update docker service for SUSE distributions (#3924)
The containerd service and socket files have been dropped from the
openSUSE docker package so we should not require them in the docker
service anymore. This makes the docker service file look similar to
the one shipped by the openSUSE package.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2018-12-27 07:26:02 -08:00
Anton Patsev
d4bd08f82e Install python-pip from local yum repository (#3940)
Add support for installing python-pip from a local yum repository if one exists.
2018-12-27 06:30:59 -08:00
Earl C. Ruby III
3ce033995f Documented docker_version acceptable values (#3901)
Added a line documenting where to find acceptable values for the
`docker_version` setting. If you use a value that is not used as
a key value by `docker_versioned_pkg`, the container-engine/docker
playbook will throw an "Unexpected templating type error". (e.g.
If you use '18.06.1' or '18.06.1-ce', neither of which is used
as a key value of `docker_versioned_pkg`, rather than '18.06',
you'll get an error when installing on Ubuntu 18.04.)
2018-12-27 16:32:16 +03:00
Gautam Divgi
320f4d4d7f Added filters for integer conversion of kubelet_max_pods and kube_network_node_prefix (#3857) 2018-12-26 13:58:53 -08:00
Seongjin Cho
16715adfa0 Adds support for webhook token auth. (#3939)
Webhook token auth:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication

Fixes #3063.
2018-12-26 01:52:53 -08:00
Lucas Melchior
100d972cea Updated cri-o documentation (#3878) 2018-12-25 22:55:17 -08:00
Rong Zhang
ce63597e4a Merge pull request #3941 from riverzhang/gpu
Fix GPU node Scheduling
2018-12-26 13:39:10 +08:00
Anton Patsev
5f117fb65e Add support http/https proxy for bootstrap-debian (#3932) 2018-12-25 10:46:53 -08:00
WillPlatnick
72fee60c8f Update nodelocal to be in its own section (#3931) 2018-12-25 07:10:08 -08:00
rongzhang
1bb1ba2274 Fix GPU node Scheduling 2018-12-25 21:37:10 +08:00
Zefool
6ebcaab2bb controlPlaneEndpoint set up through load balancer should be possible … (#3888)
* controlPlaneEndpoint set up through a load balancer should be possible even in single-master setups

Enable load balancer for single-master setups
Fixes an issue where single-master setups are not reachable using the usual admin.conf from outside the cluster. 

controlPlaneEndpoint set up through a load balancer should be possible even in single-master setups

* add fix to other api versions

* remove obsolete check completely

* remove check, pass 2

* removes checks in client configuration

* delete 'and'
2018-12-25 00:03:32 -08:00
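The essence of the change, sketched as a kubeadm config fragment (the endpoint value is a placeholder): controlPlaneEndpoint may point at the load balancer even when there is only one master, so the generated admin.conf is usable from outside the cluster.

# Hypothetical ClusterConfiguration fragment for a single-master setup behind an LB.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
controlPlaneEndpoint: "lb.example.com:6443"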
Rong Zhang
cd42e649a7 Fix reconfigure and upgrade cluster (#3938) 2018-12-24 23:06:27 -08:00
Rong Zhang
8167e5b690 Fix kubeadm images templates (#3936)
downloading v1.12.3 kubernetes images failed
2018-12-23 06:35:06 -08:00
Bort Verwilst
de014422bf Add k8s 1.12.4 checksums (#3929) 2018-12-23 01:09:09 -08:00
Rong Zhang
2f5c0d10bb Merge pull request #3934 from riverzhang/delete-kubeamd-client
Delete unused controlPlane for join node
2018-12-23 12:07:26 +08:00
Rong Zhang
48b5ee5cd5 Merge pull request #3933 from riverzhang/fix-kubeadm-images
Fix failed image downloads during installation using CRI-O
2018-12-23 11:10:08 +08:00
rongzhang
dd4159fe65 Delete unused controlPlane for join node
it is only used when joining a master or when using the --experimental-control-plane argument
2018-12-23 00:31:01 +08:00
rongzhang
62a8961d8f Fix failed image downloads during installation using CRI-O 2018-12-23 00:20:39 +08:00
Seongjin Cho
e7b835eb4c Fix duplicate storage-backend (#3906) 2018-12-20 01:01:39 -08:00
Hedayat Vatankhah (هدایت)
fbe9e0ac1a Fix docker_options definition when docker_version is 'latest' rather than a number (#3919)
- NOTE: it assumes that the 'latest' version is newer than 17.05
2018-12-20 00:58:21 -08:00
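The fix amounts to guarding the version comparison so a non-numeric value like 'latest' is handled; a sketch under the assumption noted in the commit (the variable name and filter usage are illustrative):

# Hypothetical defaults fragment: treat 'latest' as newer than 17.05 when
# deciding which set of docker_options to render.
docker_options_new_syntax: >-
  {{ docker_version == 'latest' or docker_version is version('17.05', '>=') }}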
Rong Zhang
40feb120e4 Merge pull request #3895 from riverzhang/v1.13.1
Upgrade kubernetes to v1.13.1
2018-12-20 16:53:31 +08:00
Rong Zhang
6362211860 Add images downloader to download roles (#3914)
* Add images downloader to download roles

* Use single jinja2 templates

* add kube_version to templates
2018-12-19 05:17:58 -08:00
Rong Zhang
925a820b56 Fix skip upgrade first master (#3915) 2018-12-19 05:16:14 -08:00
Matthew Mosesohn
50b884a32d Fixup line breaks for kubeadm SANs (#3908) 2018-12-19 02:47:31 -08:00
rongzhang
890878f5db disable ubuntu18-flannel test 2018-12-19 15:14:04 +08:00
rongzhang
435ef14379 Upgrade kubernetes to v1.13.1 2018-12-19 15:13:43 +08:00
Matthew Mosesohn
3c44ffcf80 set kubespray-defaults kube_api_anonymous_auth to true (#3909) 2018-12-18 06:53:58 -08:00
Ganesh Maharaj Mahalingam
73aee004ac Enable ClearLinux as a distro in kubespray (#3855)
Signed-off-by: Ganesh Maharaj Mahalingam <ganesh.mahalingam@intel.com>
2018-12-18 01:39:25 -08:00
ihard
30a9149b52 add vars for cilium init container (#3893)
* add vars for cilium init container

* make yamllint happy

* add var cilium_init in downloads
2018-12-18 00:34:19 -08:00
Ryler Hockenbury
4a7f829ecf Reapply win_node patches (#3868) 2018-12-13 06:17:46 -08:00
Egor
dc8a8011be Load nf_conntrack module if nf_conntrack_ipv4 failed (#3764) 2018-12-12 05:33:54 -08:00
Maxim Snezhkov
5e84dabb46 Fix assertion for alone etcd nodes (#3847) 2018-12-12 05:21:54 -08:00
Ryler Hockenbury
3e8f4c1545 Use recommended defaults for dns autoscale (#3884) 2018-12-12 05:05:46 -08:00
Ganesh Maharaj Mahalingam
1a50a1a733 cri-o reset all containers and pods (#3856)
Signed-off-by: Ganesh Maharaj Mahalingam <ganesh.mahalingam@intel.com>
2018-12-12 01:59:55 -08:00
Florent Monbillard
e50647d252 dns_mode defaults to coredns (#3882)
since bad886ca9b, dns_mode is set to coredns by default instead of kubedns
2018-12-12 01:45:00 -08:00
Maxim Snezhkov
951e4675c6 Fix error with ipvs on cluster reset task (#3848) 2018-12-12 01:43:16 -08:00
Ryler Hockenbury
c04e8b57b9 Metrics server resizer addon needs to target metrics server deployment (#3867)
* Metrics server resizer addon should target metrics server deployment

* Target metrics server deployment without version
2018-12-12 00:09:09 -08:00
gdoucet
32d47c836d Adding is_atomic in centos bootstrap-os (#3873)
Adding fact is_atomic in bootstrap-centos.yml.

Fix issue: #3538
2018-12-11 02:43:21 -08:00
Maxim Snezhkov
90a7941d56 Fix disabling swap on ubuntu systems (#3864) 2018-12-11 02:42:00 -08:00
Thomas Nys
3e3ee0aeb1 Add support for running a nodelocal dns cache (#3861)
* Add support for running a nodelocal dns cache

After encountering dns issues in a cluster I was recently working on, I
noticed Kubernetes 1.13 introduced support for running a nodelocal dns
cache.

I believe this can be useful for more people.

73b548db06
https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0030-nodelocal-dns-cache.md

* Add requested changes

* Add additional requested changes + documentation

* Add requested changes after review

* Replace incorrect variable
2018-12-10 17:28:03 -08:00
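Enabling the cache comes down to a couple of inventory variables; a sketch with placeholder values (the exact variable names and the link-local IP may differ in kubespray):

# Hypothetical group_vars fragment enabling the nodelocal DNS cache.
enable_nodelocaldns: true
nodelocaldns_ip: 169.254.25.10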
Anton Patsev
7b674e0607 Add proxy to /etc/apt/apt.conf for ubuntu (#3869) 2018-12-10 02:33:45 -08:00
Julien C
593a9a262d Add metrics service to kube-dns (#3852)
Metrics port is exposed through a service for CoreDNS but not for kube-dns.
2018-12-10 01:45:00 -08:00
Zohar Mamedov
456596710e kube-router manifest DSR adjustments (#3828) 2018-12-10 00:40:39 -08:00
Đào Hoàng Sơn
01cd4cf1c6 Remove vault role from inventory_builder. (#3863)
Related to https://github.com/kubernetes-sigs/kubespray/pull/3684
2018-12-09 18:13:42 +01:00
Andrey Zhelnin
1712314fab Setting host_architecture var (#3846)
Setting host_architecture to allow the etcd upgrade to work via: ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=etcd (otherwise host_architecture is missing)
2018-12-07 05:41:30 -08:00
Egor
7da9880ff7 Move node-cidr-mask-size to ControllerManagerextraArgs (#3845) 2018-12-07 04:23:17 -08:00
Bjorn Skovlund Ryden
d42b37b77d Added RBAC rights for metrics_server. Fixes #3829 (#3843) 2018-12-07 03:11:35 -08:00
Maxime Guyot
c3e83f464f Remove mention of fuel in README (#3826) 2018-12-07 11:12:54 +01:00
Rong Zhang
1550c05a7a Add docker 18.09 support (#3844) 2018-12-07 02:02:39 -08:00
pasqualet
ea833a4cd7 Fix apiServerCertSANs in kubeadm config file (#3839) 2018-12-07 00:11:08 -08:00
Tagir
2d8e04dca7 Added v1.10.11 v1.11.5 support (#3837) 2018-12-07 00:09:51 -08:00
Andreas Krüger
d5ce5874e8 Streamline path to certs dir (#3836)
* Streamline path to certs dir

* More fixes

* Set path to etcd certs in kubernetes defaults instead
2018-12-06 23:11:53 -08:00
Rong Zhang
225f765b56 Upgrade kubernetes to v1.13.0 (#3810)
* Upgrade kubernetes to v1.13.0

* Remove all presence of scheduler.alpha.kubernetes.io/critical-pod in templates

* Fix cert dir

* Use kubespray v2.8 as baseline for gitlab
2018-12-06 12:11:48 -08:00
Andreas Krüger
ddffdb63bf Remove non-kubeadm deployment (#3811)
* Remove non-kubeadm deployment

* More cleanup

* More cleanup

* More cleanup

* More cleanup

* Fix gitlab

* Try stopping gce first before absent to make the delete process work

* More cleanup

* Fix bug with checking if kubeadm has already run

* Fix bug with checking if kubeadm has already run

* More fixes

* Fix test

* fix

* Fix gitlab checkout until kubespray 2.8 is on quay

* Fixed

* Add upgrade path from non-kubeadm to kubeadm. Revert ssl path

* Readd secret checking

* Do gitlab checks from v2.7.0 test upgrade path to 2.8.0

* fix typo

* Fix CI jobs to kubeadm again. Fix broken hyperkube path

* Fix gitlab

* Fix rotate tokens

* More fixes

* More fixes

* Fix tokens
2018-12-06 02:33:38 -08:00
Erwan Miran
0d1be39a97 Reset: Check for kube-ipvs0 presence before remove it (#3816) 2018-12-04 19:18:50 -08:00
Erwan Miran
2c1dd69891 Reset tasks specific to Calico (#3813) 2018-12-04 11:37:45 -08:00
Chad Swenson
145687a48e Reduce log spam of verbose tasks (#3806)
Added a loop_control label to a few tasks that flood our logs.
2018-12-04 10:35:44 -08:00
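For reference, the loop_control pattern used to cut the noise (the task, file, and item names are illustrative):

# Hypothetical task: print only the item name instead of the full item structure.
- name: Copy addon manifests
  copy:
    src: "{{ item.file }}"
    dest: "{{ kube_config_dir }}/{{ item.file }}"
  with_items: "{{ addon_manifests }}"
  loop_control:
    label: "{{ item.name }}"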
Rong Zhang
9051aa5296 Fix ubuntu-contiv test failed (#3808)
netchecker agent status is pending
2018-12-03 23:01:32 -08:00
542 changed files with 10895 additions and 26115 deletions

16
.ansible-lint Normal file

@@ -0,0 +1,16 @@
---
parseable: true
skip_list:
# see https://docs.ansible.com/ansible-lint/rules/default_rules.html for a list of all default rules
# The following rules throw errors.
# These either still need to be corrected in the repository and the rules re-enabled or they are skipped on purpose.
- '204'
- '206'
- '301'
- '305'
- '306'
- '404'
- '502'
- '503'
- '504'
- '701'

View File

@@ -1,16 +1,11 @@
<!-- Thanks for filing an issue! Before hitting the button, please answer these questions.-->
**Is this a BUG REPORT or FEATURE REQUEST?** (choose one):
---
name: Bug Report
about: Report a bug encountered while operating Kubernetes
labels: kind/bug
---
<!--
If this is a BUG REPORT, please:
- Fill in as much of the template below as you can. If you leave out
information, we can't help you as well.
If this is a FEATURE REQUEST, please:
- Describe *in detail* the feature/behavior/change you'd like to see.
In both cases, be ready for followup questions, and please respond in a timely
Please, be ready for followup questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.

11
.github/ISSUE_TEMPLATE/enhancement.md vendored Normal file

@@ -0,0 +1,11 @@
---
name: Enhancement Request
about: Suggest an enhancement to the Kubespray project
labels: kind/feature
---
<!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
**Why is this needed**:

20
.github/ISSUE_TEMPLATE/failing-test.md vendored Normal file

@@ -0,0 +1,20 @@
---
name: Failing Test
about: Report test failures in Kubespray CI jobs
labels: kind/failing-test
---
<!-- Please only use this template for submitting reports about failing tests in Kubespray CI jobs -->
**Which jobs are failing**:
**Which test(s) are failing**:
**Since when has it been failing**:
**Testgrid link**:
**Reason for failure**:
**Anything else we need to know**:

18
.github/ISSUE_TEMPLATE/support.md vendored Normal file

@@ -0,0 +1,18 @@
---
name: Support Request
about: Support request or question relating to Kubespray
labels: triage/support
---
<!--
STOP -- PLEASE READ!
GitHub is not the right place for support requests.
If you're looking for help, check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubespray) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
You can also post your question on the [Kubernetes Slack](http://slack.k8s.io/) or the [Discuss Kubernetes](https://discuss.kubernetes.io/) forum.
If the matter is security related, please disclose it privately via https://kubernetes.io/security/.
-->

44
.github/PULL_REQUEST_TEMPLATE.md vendored Normal file

@@ -0,0 +1,44 @@
<!-- Thanks for sending a pull request! Here are some tips for you:
1. If this is your first time, please read our contributor guidelines: https://git.k8s.io/community/contributors/guide#your-first-contribution and developer guide https://git.k8s.io/community/contributors/devel/development.md#development-guide
2. Please label this pull request according to what type of issue you are addressing, especially if this is a release targeted pull request. For reference on required PR/issue labels, read here:
https://git.k8s.io/community/contributors/devel/release.md#issue-kind-label
3. Ensure you have added or ran the appropriate tests for your PR: https://git.k8s.io/community/contributors/devel/testing.md
4. If you want *faster* PR reviews, read how: https://git.k8s.io/community/contributors/guide/pull-requests.md#best-practices-for-faster-reviews
5. Follow the instructions for writing a release note: https://git.k8s.io/community/contributors/guide/release-notes.md
6. If the PR is unfinished, see how to mark it: https://git.k8s.io/community/contributors/guide/pull-requests.md#marking-unfinished-pull-requests
-->
**What type of PR is this?**
> Uncomment only one ` /kind <>` line, hit enter to put that in a new line, and remove leading whitespaces from that line:
>
> /kind api-change
> /kind bug
> /kind cleanup
> /kind design
> /kind documentation
> /kind failing-test
> /kind feature
> /kind flake
**What this PR does / why we need it**:
**Which issue(s) this PR fixes**:
<!--
*Automatically closes linked issue when PR is merged.
Usage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`.
_If PR is about `failing-tests or flakes`, please post the related issues/tests in a comment and do not use `Fixes`_*
-->
Fixes #
**Special notes for your reviewer**:
**Does this PR introduce a user-facing change?**:
<!--
If no, just write "NONE" in the release-note block below.
If yes, a release note is required:
Enter your extended release note in the block below. If the PR requires additional action from users switching to the new release, include the string "action required".
-->
```release-note
```

11
.gitignore vendored

@@ -1,9 +1,6 @@
.vagrant
*.retry
**/vagrant_ansible_inventory
inventory/credentials/
inventory/group_vars/fake_hosts.yml
inventory/host_vars/
temp
.idea
.tox
@@ -11,12 +8,19 @@ temp
*.bak
*.tfstate
*.tfstate.backup
.terraform/
contrib/terraform/aws/credentials.tfvars
/ssh-bastion.conf
**/*.sw[pon]
*~
vagrant/
# Ansible inventory
inventory/*
!inventory/local
!inventory/sample
inventory/*/artifacts/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
@@ -24,7 +28,6 @@ __pycache__/
# Distribution / packaging
.Python
inventory/*/artifacts/
env/
build/
credentials/

View File

@@ -1,8 +1,10 @@
---
stages:
- unit-tests
- moderator
- deploy-part1
- deploy-part2
- deploy-gce
- deploy-special
variables:
@@ -24,726 +26,45 @@ variables:
IDEMPOT_CHECK: "false"
RESET_CHECK: "false"
UPGRADE_TEST: "false"
KUBEADM_ENABLED: "false"
LOG_LEVEL: "-vv"
# asia-east1-a
# asia-northeast1-a
# europe-west1-b
# us-central1-a
# us-east1-b
# us-west1-a
before_script:
- ./tests/scripts/rebase.sh
- /usr/bin/python -m pip install -r tests/requirements.txt
- mkdir -p /.ssh
.job: &job
tags:
- kubernetes
- docker
image: quay.io/kubespray/kubespray:v2.7
.docker_service: &docker_service
services:
- docker:dind
.create_cluster: &create_cluster
<<: *job
<<: *docker_service
.gce_variables: &gce_variables
GCE_USER: travis
SSH_USER: $GCE_USER
CLOUD_MACHINE_TYPE: "g1-small"
CI_PLATFORM: "gce"
PRIVATE_KEY: $GCE_PRIVATE_KEY
.do_variables: &do_variables
PRIVATE_KEY: $DO_PRIVATE_KEY
CI_PLATFORM: "do"
SSH_USER: root
- packet
variables:
KUBESPRAY_VERSION: v2.9.0
image: quay.io/kubespray/kubespray:$KUBESPRAY_VERSION
.testcases: &testcases
<<: *job
<<: *docker_service
cache:
key: "$CI_BUILD_REF_NAME"
paths:
- downloads/
- $HOME/.cache
services:
- docker:dind
before_script:
- docker info
- /usr/bin/python -m pip install -r requirements.txt
- /usr/bin/python -m pip install -r tests/requirements.txt
- mkdir -p /.ssh
- mkdir -p $HOME/.ssh
- ansible-playbook --version
- export PYPATH=$([[ ! "$CI_JOB_NAME" =~ "coreos" ]] && echo /usr/bin/python || echo /opt/bin/python)
- echo "CI_JOB_NAME is $CI_JOB_NAME"
- echo "PYPATH is $PYPATH"
- ./tests/scripts/rebase.sh
- ./tests/scripts/testcases_prepare.sh
script:
- pwd
- ls
- echo ${PWD}
- echo "${STARTUP_SCRIPT}"
- cd tests && make create-${CI_PLATFORM} -s ; cd -
# Check out latest tag if testing upgrade
# Uncomment when gitlab kubespray repo has tags
#- test "${UPGRADE_TEST}" != "false" && git fetch --all && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
- test "${UPGRADE_TEST}" != "false" && git checkout 53d87e53c5899d4ea2904ab7e3883708dd6363d3
# Checkout the CI vars file so it is available
- test "${UPGRADE_TEST}" != "false" && git checkout "${CI_BUILD_REF}" tests/files/${CI_JOB_NAME}.yml
# Workaround https://github.com/kubernetes-sigs/kubespray/issues/2021
- 'sh -c "echo ignore_assert_errors: true | tee -a tests/files/${CI_JOB_NAME}.yml"'
# Create cluster
- >
ansible-playbook
-i ${ANSIBLE_INVENTORY}
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
-e ansible_ssh_user=${SSH_USER}
-e local_release_dir=${PWD}/downloads
--limit "all:!fake_hosts"
cluster.yml
# Repeat deployment if testing upgrade
- >
if [ "${UPGRADE_TEST}" != "false" ]; then
test "${UPGRADE_TEST}" == "basic" && PLAYBOOK="cluster.yml";
test "${UPGRADE_TEST}" == "graceful" && PLAYBOOK="upgrade-cluster.yml";
git checkout "${CI_BUILD_REF}";
ansible-playbook
-i ${ANSIBLE_INVENTORY}
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
-e ansible_ssh_user=${SSH_USER}
-e local_release_dir=${PWD}/downloads
--limit "all:!fake_hosts"
$PLAYBOOK;
fi
# Tests Cases
## Test Master API
- >
ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/010_check-apiserver.yml $LOG_LEVEL
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
## Ping the between 2 pod
- ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/030_check-network.yml $LOG_LEVEL
## Advanced DNS checks
- ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/040_check-network-adv.yml $LOG_LEVEL
## Idempotency checks 1/5 (repeat deployment)
- >
if [ "${IDEMPOT_CHECK}" = "true" ]; then
ansible-playbook
-i ${ANSIBLE_INVENTORY}
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
-e ansible_python_interpreter=${PYPATH}
-e local_release_dir=${PWD}/downloads
--limit "all:!fake_hosts"
cluster.yml;
fi
## Idempotency checks 2/5 (Advanced DNS checks)
- >
if [ "${IDEMPOT_CHECK}" = "true" ]; then
ansible-playbook
-i ${ANSIBLE_INVENTORY}
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
--limit "all:!fake_hosts"
tests/testcases/040_check-network-adv.yml $LOG_LEVEL;
fi
## Idempotency checks 3/5 (reset deployment)
- >
if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
ansible-playbook
-i ${ANSIBLE_INVENTORY}
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
-e ansible_python_interpreter=${PYPATH}
-e reset_confirmation=yes
--limit "all:!fake_hosts"
reset.yml;
fi
## Idempotency checks 4/5 (redeploy after reset)
- >
if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
ansible-playbook
-i ${ANSIBLE_INVENTORY}
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
-e ansible_python_interpreter=${PYPATH}
-e local_release_dir=${PWD}/downloads
--limit "all:!fake_hosts"
cluster.yml;
fi
## Idempotency checks 5/5 (Advanced DNS checks)
- >
if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH}
-u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root
--limit "all:!fake_hosts"
tests/testcases/040_check-network-adv.yml $LOG_LEVEL;
fi
- ./tests/scripts/testcases_run.sh
after_script:
- cd tests && make delete-${CI_PLATFORM} -s ; cd -
.gce: &gce
<<: *testcases
variables:
<<: *gce_variables
.do: &do
variables:
<<: *do_variables
<<: *testcases
# Test matrix. Leave the comments for markup scripts.
.coreos_calico_aio_variables: &coreos_calico_aio_variables
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.ubuntu18_flannel_aio_variables: &ubuntu18_flannel_aio_variables
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables
# stage: deploy-part1
UPGRADE_TEST: "graceful"
.ubuntu_canal_kubeadm_variables: &ubuntu_canal_kubeadm_variables
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.ubuntu_canal_ha_variables: &ubuntu_canal_ha_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.ubuntu_contiv_sep_variables: &ubuntu_contiv_sep_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.coreos_cilium_variables: &coreos_cilium_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.ubuntu_cilium_sep_variables: &ubuntu_cilium_sep_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.rhel7_weave_variables: &rhel7_weave_variables
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.centos7_flannel_addons_variables: &centos7_flannel_addons_variables
# stage: deploy-part2
MOVED_TO_GROUP_VARS: "true"
.debian9_calico_variables: &debian9_calico_variables
# stage: deploy-part2
MOVED_TO_GROUP_VARS: "true"
.coreos_canal_variables: &coreos_canal_variables
# stage: deploy-part2
MOVED_TO_GROUP_VARS: "true"
.rhel7_canal_sep_variables: &rhel7_canal_sep_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.ubuntu_weave_sep_variables: &ubuntu_weave_sep_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.centos7_calico_ha_variables: &centos7_calico_ha_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.centos7_kube_router_variables: &centos7_kube_router_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.centos7_multus_calico_variables: &centos7_multus_calico_variables
# stage: deploy-part2
UPGRADE_TEST: "graceful"
.coreos_alpha_weave_ha_variables: &coreos_alpha_weave_ha_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.coreos_kube_router_variables: &coreos_kube_router_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.ubuntu_rkt_sep_variables: &ubuntu_rkt_sep_variables
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.ubuntu_flannel_variables: &ubuntu_flannel_variables
# stage: deploy-part2
MOVED_TO_GROUP_VARS: "true"
.ubuntu_kube_router_variables: &ubuntu_kube_router_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.opensuse_canal_variables: &opensuse_canal_variables
# stage: deploy-part2
MOVED_TO_GROUP_VARS: "true"
# Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
### PR JOBS PART1
gce_ubuntu18-flannel-aio:
stage: deploy-part1
<<: *job
<<: *gce
variables:
<<: *ubuntu18_flannel_aio_variables
<<: *gce_variables
when: on_success
except: ['triggers']
only: [/^pr-.*$/]
### PR JOBS PART2
gce_coreos-calico-aio:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *coreos_calico_aio_variables
<<: *gce_variables
when: on_success
except: ['triggers']
only: [/^pr-.*$/]
gce_centos7-flannel-addons:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos7_flannel_addons_variables
when: on_success
except: ['triggers']
only: [/^pr-.*$/]
gce_centos-weave-kubeadm-sep:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos_weave_kubeadm_variables
when: on_success
except: ['triggers']
only: [/^pr-.*$/]
gce_ubuntu-flannel-ha:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_flannel_variables
when: on_success
except: ['triggers']
only: [/^pr-.*$/]
### MANUAL JOBS
gce_ubuntu-weave-sep:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_weave_sep_variables
when: manual
except: ['triggers']
only: [/^pr-.*$/]
gce_coreos-calico-sep-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_calico_aio_variables
when: on_success
only: ['triggers']
gce_ubuntu-canal-ha-triggers:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_ha_variables
when: on_success
only: ['triggers']
gce_centos7-flannel-addons-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos7_flannel_addons_variables
when: on_success
only: ['triggers']
gce_ubuntu-weave-sep-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_weave_sep_variables
when: on_success
only: ['triggers']
# More builds for PRs/merges (manual) and triggers (auto)
do_ubuntu-canal-ha:
stage: deploy-part2
<<: *job
<<: *do
variables:
<<: *do_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-canal-ha:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_ha_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-canal-kubeadm:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_kubeadm_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-canal-kubeadm-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_kubeadm_variables
when: on_success
only: ['triggers']
gce_centos-weave-kubeadm-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos_weave_kubeadm_variables
when: on_success
only: ['triggers']
gce_ubuntu-contiv-sep:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_contiv_sep_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_coreos-cilium:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_cilium_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-cilium-sep:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_cilium_sep_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_rhel7-weave:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *rhel7_weave_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_rhel7-weave-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *rhel7_weave_variables
when: on_success
only: ['triggers']
gce_debian9-calico-upgrade:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *debian9_calico_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_debian9-calico-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *debian9_calico_variables
when: on_success
only: ['triggers']
gce_coreos-canal:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_canal_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_coreos-canal-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_canal_variables
when: on_success
only: ['triggers']
gce_rhel7-canal-sep:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *rhel7_canal_sep_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_rhel7-canal-sep-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *rhel7_canal_sep_variables
when: on_success
only: ['triggers']
gce_centos7-calico-ha:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos7_calico_ha_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_centos7-calico-ha-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos7_calico_ha_variables
when: on_success
only: ['triggers']
gce_centos7-kube-router:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos7_kube_router_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_centos7-multus-calico:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos7_multus_calico_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_opensuse-canal:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *opensuse_canal_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
# no triggers yet https://github.com/kubernetes-incubator/kargo/issues/613
gce_coreos-alpha-weave-ha:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_alpha_weave_ha_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_coreos-kube-router:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_kube_router_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-rkt-sep:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_rkt_sep_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-kube-router-sep:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_kube_router_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
- ./tests/scripts/testcases_cleanup.sh
# For failfast, at least 1 job must be defined in .gitlab-ci.yml
# Premoderated with manual actions
ci-authorized:
<<: *job
extends: .job
stage: moderator
before_script:
- apt-get -y install jq
script:
- /bin/sh scripts/premoderator.sh
except: ['triggers', 'master']
syntax-check:
<<: *job
stage: unit-tests
script:
- ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root cluster.yml -vvv --syntax-check
- ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root upgrade-cluster.yml -vvv --syntax-check
- ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root reset.yml -vvv --syntax-check
- ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root extra_playbooks/upgrade-only-k8s.yml -vvv --syntax-check
except: ['triggers', 'master']
yamllint:
<<: *job
stage: unit-tests
script:
- yamllint roles
except: ['triggers', 'master']
tox-inventory-builder:
stage: unit-tests
<<: *job
script:
- pip install tox
- cd contrib/inventory_builder && tox
when: manual
except: ['triggers', 'master']
include:
- .gitlab-ci/lint.yml
- .gitlab-ci/shellcheck.yml
- .gitlab-ci/gce.yml
- .gitlab-ci/digital-ocean.yml
- .gitlab-ci/terraform.yml
- .gitlab-ci/packet.yml

View File

@@ -0,0 +1,19 @@
---
.do_variables: &do_variables
PRIVATE_KEY: $DO_PRIVATE_KEY
CI_PLATFORM: "do"
SSH_USER: root
.do: &do
extends: .testcases
tags:
- do
do_ubuntu-canal-ha:
stage: deploy-part2
extends: .do
variables:
<<: *do_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]

264
.gitlab-ci/gce.yml Normal file

@@ -0,0 +1,264 @@
---
.gce_variables: &gce_variables
GCE_USER: travis
SSH_USER: $GCE_USER
CLOUD_MACHINE_TYPE: "g1-small"
CI_PLATFORM: "gce"
PRIVATE_KEY: $GCE_PRIVATE_KEY
.cache: &cache
cache:
key: "$CI_BUILD_REF_NAME"
paths:
- downloads/
- $HOME/.cache
.gce: &gce
extends: .testcases
<<: *cache
variables:
<<: *gce_variables
tags:
- gce
.centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables
# stage: deploy-part1
UPGRADE_TEST: "graceful"
.centos7_multus_calico_variables: &centos7_multus_calico_variables
# stage: deploy-gce
UPGRADE_TEST: "graceful"
# Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
### PR JOBS PART1
gce_ubuntu18-flannel-aio:
stage: deploy-part1
<<: *gce
when: manual
except: ['triggers']
only: [/^pr-.*$/]
### PR JOBS PART2
gce_coreos-calico-aio:
stage: deploy-gce
<<: *gce
when: on_success
except: ['triggers']
only: [/^pr-.*$/]
gce_centos7-flannel-addons:
stage: deploy-gce
<<: *gce
when: manual
except: ['triggers']
only: [/^pr-.*$/]
### MANUAL JOBS
gce_centos-weave-kubeadm-sep:
stage: deploy-gce
extends: .gce
variables:
<<: *centos_weave_kubeadm_variables
when: on_success
only: ['triggers']
gce_ubuntu-weave-sep:
stage: deploy-gce
<<: *gce
when: manual
only: ['triggers']
gce_coreos-calico-sep-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
gce_ubuntu-canal-ha-triggers:
stage: deploy-special
<<: *gce
when: on_success
only: ['triggers']
gce_centos7-flannel-addons-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
gce_ubuntu-weave-sep-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
# More builds for PRs/merges (manual) and triggers (auto)
gce_ubuntu-canal-ha:
stage: deploy-special
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-canal-kubeadm:
stage: deploy-gce
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-canal-kubeadm-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
gce_ubuntu-flannel-ha:
stage: deploy-gce
<<: *gce
when: manual
except: ['triggers']
gce_centos-weave-kubeadm-triggers:
stage: deploy-gce
extends: .gce
variables:
<<: *centos_weave_kubeadm_variables
when: on_success
only: ['triggers']
gce_ubuntu-contiv-sep:
stage: deploy-special
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_coreos-cilium:
stage: deploy-special
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu18-cilium-sep:
stage: deploy-special
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_rhel7-weave:
stage: deploy-gce
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_rhel7-weave-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
gce_debian9-calico-upgrade:
stage: deploy-gce
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_debian9-calico-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
gce_coreos-canal:
stage: deploy-gce
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_coreos-canal-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
gce_rhel7-canal-sep:
stage: deploy-special
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_rhel7-canal-sep-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
gce_centos7-calico-ha:
stage: deploy-special
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_centos7-calico-ha-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
gce_centos7-kube-router:
stage: deploy-special
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_centos7-multus-calico:
stage: deploy-gce
extends: .gce
variables:
<<: *centos7_multus_calico_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_opensuse-canal:
stage: deploy-gce
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
# no triggers yet https://github.com/kubernetes-incubator/kargo/issues/613
gce_coreos-alpha-weave-ha:
stage: deploy-special
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_coreos-kube-router:
stage: deploy-special
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-kube-router-sep:
stage: deploy-special
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]

40
.gitlab-ci/lint.yml Normal file

@@ -0,0 +1,40 @@
---
yamllint:
extends: .job
stage: unit-tests
script:
- yamllint --strict .
except: ['triggers', 'master']
ansible-lint:
extends: .job
stage: unit-tests
# lint every yml/yaml file that looks like it contains Ansible plays
script: |-
grep -Rl '^- hosts: \|^ hosts: ' --include \*.yml --include \*.yaml . | xargs -P 4 -n 25 ansible-lint -v
except: ['triggers', 'master']
syntax-check:
extends: .job
stage: unit-tests
variables:
ANSIBLE_INVENTORY: inventory/local-tests.cfg
ANSIBLE_REMOTE_USER: root
ANSIBLE_BECOME: "true"
ANSIBLE_BECOME_USER: root
ANSIBLE_VERBOSITY: "3"
script:
- ansible-playbook --syntax-check cluster.yml
- ansible-playbook --syntax-check upgrade-cluster.yml
- ansible-playbook --syntax-check reset.yml
- ansible-playbook --syntax-check extra_playbooks/upgrade-only-k8s.yml
except: ['triggers', 'master']
tox-inventory-builder:
stage: unit-tests
extends: .job
script:
- pip install tox
- cd contrib/inventory_builder && tox
when: manual
except: ['triggers', 'master']

123
.gitlab-ci/packet.yml Normal file

@@ -0,0 +1,123 @@
---
.packet_variables: &packet_variables
CI_PLATFORM: "packet"
SSH_USER: "kubespray"
.packet: &packet
extends: .testcases
variables:
<<: *packet_variables
tags:
- packet
.test-upgrade: &test-upgrade
variables:
UPGRADE_TEST: "graceful"
packet_ubuntu18-calico-aio:
stage: deploy-part1
<<: *packet
when: on_success
except: ['triggers']
only: ['master', /^pr-.*$/]
# ### PR JOBS PART2
packet_centos7-flannel-addons:
stage: deploy-part2
<<: *packet
when: on_success
except: ['triggers']
only: [/^pr-.*$/]
# ### MANUAL JOBS
packet_centos-weave-kubeadm-sep:
stage: deploy-part2
<<: *packet
when: on_success
only: ['triggers']
packet_ubuntu-weave-sep:
stage: deploy-part2
<<: *packet
when: manual
only: ['triggers']
# # More builds for PRs/merges (manual) and triggers (auto)
packet_ubuntu-canal-ha:
stage: deploy-special
<<: *packet
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_ubuntu-canal-kubeadm:
stage: deploy-part2
<<: *packet
when: on_success
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_ubuntu-flannel-ha:
stage: deploy-part2
<<: *packet
when: on_success
except: ['triggers']
packet_ubuntu-contiv-sep:
stage: deploy-special
<<: *packet
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_ubuntu18-cilium-sep:
stage: deploy-special
<<: *packet
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_debian9-calico-upgrade:
stage: deploy-part2
<<: *packet
when: on_success
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_centos7-calico-ha:
stage: deploy-part2
<<: *packet
when: on_success
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_centos7-kube-router:
stage: deploy-special
<<: *packet
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_centos7-multus-calico:
stage: deploy-part2
<<: *packet
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_opensuse-canal:
stage: deploy-part2
<<: *packet
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_ubuntu-kube-router-sep:
stage: deploy-special
<<: *packet
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]

15
.gitlab-ci/shellcheck.yml Normal file

@@ -0,0 +1,15 @@
---
shellcheck:
extends: .job
stage: unit-tests
variables:
SHELLCHECK_VERSION: v0.6.0
before_script:
- ./tests/scripts/rebase.sh
- curl --silent "https://storage.googleapis.com/shellcheck/shellcheck-"${SHELLCHECK_VERSION}".linux.x86_64.tar.xz" | tar -xJv
- cp shellcheck-"${SHELLCHECK_VERSION}"/shellcheck /usr/bin/
- shellcheck --version
script:
# Run shellcheck for all *.sh except contrib/
- find . -name '*.sh' -not -path './contrib/*' | xargs shellcheck --severity error
except: ['triggers', 'master']

133
.gitlab-ci/terraform.yml Normal file

@@ -0,0 +1,133 @@
---
# Tests for contrib/terraform/
.terraform_install:
extends: .job
before_script:
- ./tests/scripts/rebase.sh
# Set Ansible config
- cp ansible.cfg ~/.ansible.cfg
# Install Terraform
- apt-get install -y unzip
- curl https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip > /tmp/terraform.zip
- unzip /tmp/terraform.zip && mv ./terraform /usr/local/bin/ && terraform --version
# Prepare inventory
- cp -LRp contrib/terraform/$PROVIDER/sample-inventory inventory/$CLUSTER
- cd inventory/$CLUSTER
- ln -s ../../contrib/terraform/$PROVIDER/hosts
- terraform init ../../contrib/terraform/$PROVIDER
# Copy SSH keypair
- mkdir -p ~/.ssh
- echo "$PACKET_PRIVATE_KEY" | base64 -d > ~/.ssh/id_rsa
- chmod 400 ~/.ssh/id_rsa
- echo "$PACKET_PUBLIC_KEY" | base64 -d > ~/.ssh/id_rsa.pub
only: ['master', /^pr-.*$/]
.terraform_validate:
extends: .terraform_install
stage: unit-tests
script:
- terraform validate -var-file=cluster.tf ../../contrib/terraform/$PROVIDER
- terraform fmt -check -diff ../../contrib/terraform/$PROVIDER
.terraform_apply:
extends: .terraform_install
stage: deploy-part2
when: manual
variables:
ANSIBLE_INVENTORY_UNPARSED_FAILED: "true"
script:
- terraform apply -auto-approve ../../contrib/terraform/$PROVIDER
- ansible-playbook -i hosts ../../cluster.yml --become
after_script:
# Cleanup regardless of exit code
- cd inventory/$CLUSTER
- terraform destroy -auto-approve ../../contrib/terraform/$PROVIDER
tf-validate-openstack:
extends: .terraform_validate
variables:
TF_VERSION: 0.11.11
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-packet:
extends: .terraform_validate
variables:
TF_VERSION: 0.11.11
PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-aws:
extends: .terraform_validate
variables:
TF_VERSION: 0.11.11
PROVIDER: aws
CLUSTER: $CI_COMMIT_REF_NAME
tf-packet-ubuntu16-default:
extends: .terraform_apply
variables:
TF_VERSION: 0.11.11
PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME
TF_VAR_cluster_name: $CI_COMMIT_REF_SLUG
TF_VAR_number_of_k8s_masters: "1"
TF_VAR_number_of_k8s_nodes: "1"
TF_VAR_plan_k8s_masters: t1.small.x86
TF_VAR_plan_k8s_nodes: t1.small.x86
TF_VAR_facility: ewr1
TF_VAR_public_key_path: ""
TF_VAR_operating_system: ubuntu_16_04
tf-packet-ubuntu18-default:
extends: .terraform_apply
variables:
TF_VERSION: 0.11.11
PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME
TF_VAR_cluster_name: $CI_COMMIT_REF_SLUG
TF_VAR_number_of_k8s_masters: "1"
TF_VAR_number_of_k8s_nodes: "1"
TF_VAR_plan_k8s_masters: t1.small.x86
TF_VAR_plan_k8s_nodes: t1.small.x86
TF_VAR_facility: ams1
TF_VAR_public_key_path: ""
TF_VAR_operating_system: ubuntu_18_04
.ovh_variables: &ovh_variables
OS_AUTH_URL: https://auth.cloud.ovh.net/v3
OS_PROJECT_ID: 8d3cd5d737d74227ace462dee0b903fe
OS_PROJECT_NAME: "9361447987648822"
OS_USER_DOMAIN_NAME: Default
OS_PROJECT_DOMAIN_ID: default
OS_USERNAME: 8XuhBMfkKVrk
OS_REGION_NAME: UK1
OS_INTERFACE: public
OS_IDENTITY_API_VERSION: "3"
tf-apply-ovh:
extends: .terraform_apply
variables:
<<: *ovh_variables
TF_VERSION: 0.11.11
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
ANSIBLE_TIMEOUT: "60"
TF_VAR_cluster_name: $CI_COMMIT_REF_SLUG
TF_VAR_number_of_k8s_masters: "0"
TF_VAR_number_of_k8s_masters_no_floating_ip: "1"
TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
TF_VAR_number_of_etcd: "0"
TF_VAR_number_of_k8s_nodes: "0"
TF_VAR_number_of_k8s_nodes_no_floating_ip: "1"
TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
TF_VAR_number_of_bastions: "0"
TF_VAR_number_of_k8s_masters_no_etcd: "0"
TF_VAR_use_neutron: "0"
TF_VAR_floatingip_pool: "Ext-Net"
TF_VAR_external_net: "6011fbc9-4cbf-46a4-8452-6890a340b60b"
TF_VAR_network_name: "Ext-Net"
TF_VAR_flavor_k8s_master: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
TF_VAR_image: "Ubuntu 18.04"
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'

1
CNAME Normal file

@@ -0,0 +1 @@
kubespray.io

View File

@@ -7,4 +7,5 @@
1. Submit an issue describing your proposed change to the repo in question.
2. The [repo owners](OWNERS) will respond to your issue promptly.
3. Fork the desired repo, develop and test your code changes.
4. Submit a pull request.
4. Sign the CNCF CLA (https://git.k8s.io/community/CLA.md#the-contributor-license-agreement)
5. Submit a pull request.

View File

@@ -1,10 +1,10 @@
FROM ubuntu:16.04
FROM ubuntu:18.04
RUN mkdir /kubespray
WORKDIR /kubespray
RUN apt update -y && \
apt install -y \
libssl-dev python-dev sshpass apt-transport-https \
libssl-dev python-dev sshpass apt-transport-https jq \
ca-certificates curl gnupg2 software-properties-common python-pip
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository \
@@ -14,3 +14,5 @@ RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
&& apt update -y && apt-get install docker-ce -y
COPY . .
RUN /usr/bin/python -m pip install pip -U && /usr/bin/python -m pip install -r tests/requirements.txt && python -m pip install -r requirements.txt
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.5/bin/linux/amd64/kubectl \
&& chmod a+x kubectl && cp kubectl /usr/local/bin/kubectl

3
OWNERS

@@ -1,5 +1,4 @@
# See the OWNERS file documentation:
# https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- kubespray-approvers

View File

@@ -11,8 +11,10 @@ aliases:
- riverzhang
- holser
- smana
- verwilst
kubespray-reviewers:
- jjungnickel
- archifleks
- chapsuk
- mirwan
- miouge1

View File

@@ -3,10 +3,10 @@
Deploy a Production Ready Kubernetes Cluster
============================================
If you have questions, join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
If you have questions, check the [documentation](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/)
- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere, Oracle Cloud Infrastructure (Experimental), or Baremetal**
- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports most popular **Linux distributions**
@@ -19,10 +19,6 @@ To deploy the cluster you can use :
### Ansible
#### Ansible version
Ansible v2.7.0 is failing and/or produce unexpected results due to [ansible/ansible/issues/46600](https://github.com/ansible/ansible/issues/46600)
#### Usage
# Install dependencies from ``requirements.txt``
@@ -33,7 +29,7 @@ Ansible v2.7.0 is failing and/or produce unexpected results due to [ansible/ansi
# Update Ansible inventory file with inventory builder
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml
@@ -43,7 +39,7 @@ Ansible v2.7.0 is failing and/or produce unexpected results due to [ansible/ansi
# The option `-b` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without -b the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml
ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml
Note: When Ansible is already installed via system packages on the control machine, other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on Ubuntu).
As a consequence, the `ansible-playbook` command will fail with:
@@ -90,6 +86,7 @@ Documents
- [AWS](docs/aws.md)
- [Azure](docs/azure.md)
- [vSphere](docs/vsphere.md)
- [Packet Host](docs/packet.md)
- [Large deployments](docs/large-deployments.md)
- [Upgrades basics](docs/upgrades.md)
- [Roadmap](docs/roadmap.md)
@@ -111,37 +108,32 @@ Supported Components
--------------------
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.12.3
- [etcd](https://github.com/coreos/etcd) v3.2.24
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.14.1
- [etcd](https://github.com/coreos/etcd) v3.2.26
- [docker](https://www.docker.com/) v18.06 (see note)
- [rkt](https://github.com/rkt/rkt) v1.21.0 (see Note 2)
- [cri-o](http://cri-o.io/) v1.11.5 (experimental: see [CRI-O Note](docs/cri-o.md). Only on centos based OS)
- Network Plugin
- [calico](https://github.com/projectcalico/calico) v3.1.3
- [calico](https://github.com/projectcalico/calico) v3.4.0
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
- [cilium](https://github.com/cilium/cilium) v1.3.0
- [contiv](https://github.com/contiv/install) v1.2.1
- [flanneld](https://github.com/coreos/flannel) v0.10.0
- [kube-router](https://github.com/cloudnativelabs/kube-router) v0.2.1
- [flanneld](https://github.com/coreos/flannel) v0.11.0
- [kube-router](https://github.com/cloudnativelabs/kube-router) v0.2.5
- [multus](https://github.com/intel/multus-cni) v3.1.autoconf
- [weave](https://github.com/weaveworks/weave) v2.5.0
- [weave](https://github.com/weaveworks/weave) v2.5.1
- Application
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
- [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
- [cert-manager](https://github.com/jetstack/cert-manager) v0.5.2
- [coredns](https://github.com/coredns/coredns) v1.2.6
- [coredns](https://github.com/coredns/coredns) v1.5.0
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.21.0
Note: The list of validated [docker versions](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md) was updated to 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
Note 2: rkt support as docker alternative is limited to control plane (etcd and
kubelet). Docker is still used for Kubernetes cluster workloads and network
plugins' related OS services. Also note, only one of the supported network
plugins can be deployed for a given single cluster.
Note: The list of validated [docker versions](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md) was updated to 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06. kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the yum versionlock plugin or apt pinning.
Requirements
------------
- **Ansible v2.5 (or newer) and python-netaddr is installed on the machine
- **Ansible v2.7.8 (or newer) and python-netaddr are installed on the machine
that will run Ansible commands**
- **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md#offline-environment))
@@ -153,6 +145,14 @@ Requirements
should be configured in the target servers. Then the `ansible_become` flag
or command parameters `--become or -b` should be specified.
Hardware:
These limits are safeguarded by Kubespray. Actual requirements for your workload can differ. For a sizing guide, go to the [Building Large Clusters](https://kubernetes.io/docs/setup/cluster-large/#size-of-master-and-master-components) guide.
- Master
- Memory: 1500 MB
- Node
- Memory: 1024 MB
Network Plugins
---------------
@@ -195,13 +195,12 @@ Tools and projects on top of Kubespray
--------------------------------------
- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/master/doc/integrations/ansible.rst)
- [Fuel-ccp-installer](https://github.com/openstack/fuel-ccp-installer)
- [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)
CI Tests
--------
[![Build graphs](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/badges/master/build.svg)](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/pipelines)
[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/build.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)
CI/end-to-end tests sponsored by Google (GCE)
See the [test matrix](docs/test_cases.md) for details.

29
Vagrantfile vendored
View File

@@ -23,7 +23,7 @@ SUPPORTED_OS = {
"centos" => {box: "centos/7", user: "vagrant"},
"centos-bento" => {box: "bento/centos-7.5", user: "vagrant"},
"fedora" => {box: "fedora/28-cloud-base", user: "vagrant"},
"opensuse" => {box: "opensuse/openSUSE-42.3-x86_64", user: "vagrant"},
"opensuse" => {box: "opensuse/openSUSE-15.0-x86_64", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/openSUSE-Tumbleweed-x86_64", user: "vagrant"},
}
@@ -50,6 +50,10 @@ $kube_node_instances = $num_instances
$kube_node_instances_with_disks = false
$kube_node_instances_with_disks_size = "20G"
$kube_node_instances_with_disks_number = 2
$override_disk_size = false
$disk_size = "20GB"
$local_path_provisioner_enabled = false
$local_path_provisioner_claim_root = "/opt/local-path-provisioner/"
$playbook = "cluster.yml"
@@ -97,6 +101,13 @@ Vagrant.configure("2") do |config|
# always use Vagrants insecure key
config.ssh.insert_key = false
if ($override_disk_size)
unless Vagrant.has_plugin?("vagrant-disksize")
system "vagrant plugin install vagrant-disksize"
end
config.disksize.size = $disk_size
end
(1..$num_instances).each do |i|
config.vm.define vm_name = "%s-%01d" % [$instance_name_prefix, i] do |node|
@@ -120,6 +131,7 @@ Vagrant.configure("2") do |config|
vb.cpus = $vm_cpus
vb.gui = $vm_gui
vb.linked_clone = true
vb.customize ["modifyvm", :id, "--vram", "8"] # ubuntu defaults to 256 MB which is a waste of precious RAM
end
node.vm.provider :libvirt do |lv|
@@ -165,24 +177,29 @@ Vagrant.configure("2") do |config|
host_vars[vm_name] = {
"ip": ip,
"flannel_interface": "eth1",
"kube_network_plugin": $network_plugin,
"kube_network_plugin_multus": $multi_networking,
"docker_keepcache": "1",
"download_run_once": "True",
"download_localhost": "False"
"download_run_once": "False",
"download_localhost": "False",
"local_path_provisioner_enabled": "#{$local_path_provisioner_enabled}",
"local_path_provisioner_claim_root": "#{$local_path_provisioner_claim_root}",
"ansible_ssh_user": SUPPORTED_OS[$os][:user]
}
# Only execute the Ansible provisioner once, when all the machines are up and ready.
if i == $num_instances
node.vm.provision "ansible" do |ansible|
ansible.playbook = $playbook
if File.exist?(File.join( $inventory, "hosts.ini"))
ansible.inventory_path = $inventory
$ansible_inventory_path = File.join( $inventory, "hosts.ini")
if File.exist?($ansible_inventory_path)
ansible.inventory_path = $ansible_inventory_path
end
ansible.become = true
ansible.limit = "all"
ansible.host_key_checking = false
ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "--ask-become-pass"]
ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
ansible.host_vars = host_vars
#ansible.tags = ['download']
ansible.groups = {

2
_config.yml Normal file
View File

@@ -0,0 +1,2 @@
---
theme: jekyll-theme-slate

View File

@@ -1,32 +1,18 @@
---
- hosts: localhost
gather_facts: false
become: no
tasks:
- name: "Check ansible version !=2.7.0"
- name: "Check ansible version >=2.7.8"
assert:
msg: "Ansible V2.7.0 can't be used until: https://github.com/ansible/ansible/issues/46600 is fixed"
msg: "Ansible must be v2.7.8 or higher"
that:
- ansible_version.string is version("2.7.0", "!=")
- ansible_version.string is version("2.5.0", ">=")
- ansible_version.string is version("2.7.8", ">=")
tags:
- check
vars:
ansible_connection: local
- hosts: localhost
gather_facts: false
tasks:
- name: deploy warning for non kubeadm
debug:
msg: "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
when: not kubeadm_enabled and not skip_non_kubeadm_warning
- name: deploy cluster for non kubeadm
pause:
prompt: "Are you sure you want to deploy cluster using the deprecated non-kubeadm mode."
echo: no
when: not kubeadm_enabled and not skip_non_kubeadm_warning
- hosts: bastion[0]
gather_facts: False
roles:
@@ -36,10 +22,6 @@
- hosts: k8s-cluster:etcd:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
gather_facts: false
vars:
# Need to disable pipelining for bootstrap-os as some systems have requiretty in sudoers set, which makes pipelining
# fail. bootstrap-os fixes this on these systems, so in later plays it can be enabled.
ansible_ssh_pipelining: false
roles:
- { role: kubespray-defaults}
- { role: bootstrap-os, tags: bootstrap-os}
@@ -48,13 +30,14 @@
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
vars:
ansible_ssh_pipelining: true
gather_facts: true
gather_facts: false
pre_tasks:
- name: gather facts from all instances
setup:
delegate_to: "{{item}}"
delegate_facts: True
delegate_facts: true
with_items: "{{ groups['k8s-cluster'] + groups['etcd'] + groups['calico-rr']|default([]) }}"
run_once: true
- hosts: k8s-cluster:etcd:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -96,7 +79,7 @@
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes/kubeadm, tags: kubeadm, when: "kubeadm_enabled" }
- { role: kubernetes/kubeadm, tags: kubeadm}
- { role: network_plugin, tags: network }
- hosts: kube-master[0]
@@ -104,7 +87,7 @@
roles:
- { role: kubespray-defaults}
- { role: kubernetes-apps/rotate_tokens, tags: rotate_tokens, when: "secret_changed|default(false)" }
- { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"], when: "kubeadm_enabled" }
- { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"]}
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -121,16 +104,15 @@
- { role: kubespray-defaults}
- { role: network_plugin/calico/rr, tags: network }
- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: dnsmasq, when: "dns_mode == 'dnsmasq_kubedns'", tags: dnsmasq }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }
environment: "{{proxy_env}}"
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes-apps, tags: apps }
environment: "{{proxy_env}}"
- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }

View File

@@ -1,3 +1,4 @@
---
apiVersion: "2015-06-15"
virtualNetworkName: "{{ azure_virtual_network_name | default('KubeVNET') }}"
@@ -34,4 +35,3 @@ imageReferenceJson: "{{imageReference|to_json}}"
storageAccountName: "sa{{nameSuffix | replace('-', '')}}"
storageAccountType: "{{ azure_storage_account_type | default('Standard_LRS') }}"

View File

@@ -1,3 +1,4 @@
---
- set_fact:
base_dir: "{{playbook_dir}}/.generated/"

View File

@@ -1,2 +1,3 @@
---
# See distro.yaml for supported node_distro images
node_distro: debian

View File

@@ -1,3 +1,4 @@
---
distro_settings:
debian: &DEBIAN
image: "debian:9.5"

View File

@@ -1,7 +1,7 @@
---
# kubespray-dind.yaml: minimal kubespray ansible playbook usable for DIND
# See contrib/dind/README.md
kube_api_anonymous_auth: true
kubeadm_enabled: true
kubelet_fail_swap_on: false

View File

@@ -1,3 +1,4 @@
---
- name: set_fact distro_setup
set_fact:
distro_setup: "{{ distro_settings[node_distro] }}"

View File

@@ -1,3 +1,4 @@
---
- name: set_fact distro_setup
set_fact:
distro_setup: "{{ distro_settings[node_distro] }}"
@@ -78,6 +79,7 @@
with_items: "{{ containers.results }}"
- name: Early hack image install to adapt for DIND
# noqa 302 - this task uses the raw module intentionally
raw: |
rm -fv /usr/bin/udevadm /usr/sbin/udevadm
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"

View File

@@ -1,8 +1,6 @@
DISTROS=(debian centos)
NETCHECKER_HOST=${NODES[0]}
EXTRAS=(
'kube_network_plugin=kube-router {"kubeadm_enabled":true,"kube_router_run_service_proxy":false}'
'kube_network_plugin=kube-router {"kubeadm_enabled":true,"kube_router_run_service_proxy":true}'
'kube_network_plugin=kube-router {"kubeadm_enabled":false,"kube_router_run_service_proxy":false}'
'kube_network_plugin=kube-router {"kubeadm_enabled":false,"kube_router_run_service_proxy":true}'
'kube_network_plugin=kube-router {"kube_router_run_service_proxy":false}'
'kube_network_plugin=kube-router {"kube_router_run_service_proxy":true}'
)

View File

@@ -1,8 +1,8 @@
DISTROS=(debian centos)
EXTRAS=(
'kube_network_plugin=calico {"kubeadm_enabled":true}'
'kube_network_plugin=canal {"kubeadm_enabled":true}'
'kube_network_plugin=cilium {"kubeadm_enabled":true}'
'kube_network_plugin=flannel {"kubeadm_enabled":true}'
'kube_network_plugin=weave {"kubeadm_enabled":true}'
'kube_network_plugin=calico {}'
'kube_network_plugin=canal {}'
'kube_network_plugin=cilium {}'
'kube_network_plugin=flannel {}'
'kube_network_plugin=weave {}'
)

View File

@@ -17,6 +17,9 @@
#
# Advanced usage:
# Add another host after initial creation: inventory.py 10.10.1.5
# Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
# Add hosts with different ip and access ip:
# inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.1.3
# Delete a host: inventory.py -10.10.1.3
# Delete a host by id: inventory.py -node1
#
@@ -31,21 +34,21 @@
# ip: X.X.X.X
from collections import OrderedDict
try:
import configparser
except ImportError:
import ConfigParser as configparser
from ipaddress import ip_address
from ruamel.yaml import YAML
import os
import re
import sys
ROLES = ['all', 'kube-master', 'kube-node', 'etcd', 'k8s-cluster:children',
'calico-rr', 'vault']
ROLES = ['all', 'kube-master', 'kube-node', 'etcd', 'k8s-cluster',
'calico-rr']
PROTECTED_NAMES = ROLES
AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'load']
_boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
'0': False, 'no': False, 'false': False, 'off': False}
yaml = YAML()
yaml.Representer.add_representer(OrderedDict, yaml.Representer.represent_dict)
def get_var_as_bool(name, default):
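The two lines above register a representer so that ruamel.yaml's round-trip dumper emits `OrderedDict` as a plain mapping; a minimal standalone sketch using the same calls (sample data is illustrative, and it assumes the round-trip dumper does not handle `OrderedDict` out of the box):

```python
# Same representer registration as in inventory.py, shown in isolation.
import sys
from collections import OrderedDict
from ruamel.yaml import YAML

yaml = YAML()
# Teach the round-trip dumper to emit OrderedDict as an ordinary YAML map.
yaml.Representer.add_representer(OrderedDict, yaml.Representer.represent_dict)
yaml.dump(OrderedDict([("node1", None), ("node2", None)]), sys.stdout)
# expected output:
# node1:
# node2:
```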
@@ -54,7 +57,8 @@ def get_var_as_bool(name, default):
# Configurable as shell vars start
CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory/sample/hosts.ini")
CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory/sample/hosts.yaml")
# Reconfigures cluster distribution at scale
SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 50))
MASSIVE_SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 200))
@@ -68,11 +72,14 @@ HOST_PREFIX = os.environ.get("HOST_PREFIX", "node")
class KubesprayInventory(object):
def __init__(self, changed_hosts=None, config_file=None):
self.config = configparser.ConfigParser(allow_no_value=True,
delimiters=('\t', ' '))
self.config_file = config_file
self.yaml_config = {}
if self.config_file:
self.config.read(self.config_file)
try:
self.hosts_file = open(config_file, 'r')
self.yaml_config = yaml.load(self.hosts_file)
except FileNotFoundError:
pass
if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
self.parse_command(changed_hosts[0], changed_hosts[1:])
@@ -81,18 +88,20 @@ class KubesprayInventory(object):
self.ensure_required_groups(ROLES)
if changed_hosts:
changed_hosts = self.range2ips(changed_hosts)
self.hosts = self.build_hostnames(changed_hosts)
self.purge_invalid_hosts(self.hosts.keys(), PROTECTED_NAMES)
self.set_all(self.hosts)
self.set_k8s_cluster()
self.set_etcd(list(self.hosts.keys())[:3])
etcd_hosts_count = 3 if len(self.hosts.keys()) >= 3 else 1
self.set_etcd(list(self.hosts.keys())[:etcd_hosts_count])
if len(self.hosts) >= SCALE_THRESHOLD:
self.set_kube_master(list(self.hosts.keys())[3:5])
self.set_kube_master(list(self.hosts.keys())[etcd_hosts_count:5])
else:
self.set_kube_master(list(self.hosts.keys())[:2])
self.set_kube_node(self.hosts.keys())
if len(self.hosts) >= SCALE_THRESHOLD:
self.set_calico_rr(list(self.hosts.keys())[:3])
self.set_calico_rr(list(self.hosts.keys())[:etcd_hosts_count])
else: # Show help if no options
self.show_help()
sys.exit(0)
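The rewritten constructor now sizes the etcd group before placing masters. A compact sketch of the assignment rules in this hunk, using a hypothetical `assign_groups` helper with the same `SCALE_THRESHOLD` semantics (note that `set_kube_node` itself additionally skips etcd/master hosts at scale):

```python
# Sketch of the group-assignment rules shown above; assign_groups is illustrative.
SCALE_THRESHOLD = 50

def assign_groups(hostnames):
    hosts = list(hostnames)
    # etcd gets three members when at least three hosts exist, otherwise one
    etcd_count = 3 if len(hosts) >= 3 else 1
    groups = {"etcd": hosts[:etcd_count], "kube-node": hosts[:]}
    if len(hosts) >= SCALE_THRESHOLD:
        # at scale, masters are placed after the etcd members and calico-rr is set
        groups["kube-master"] = hosts[etcd_count:5]
        groups["calico-rr"] = hosts[:etcd_count]
    else:
        groups["kube-master"] = hosts[:2]
        groups["calico-rr"] = []
    return groups

print(assign_groups(["node%d" % i for i in range(1, 6)]))
```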
@@ -101,8 +110,9 @@ class KubesprayInventory(object):
def write_config(self, config_file):
if config_file:
with open(config_file, 'w') as f:
self.config.write(f)
with open(self.config_file, 'w') as f:
yaml.dump(self.yaml_config, f)
else:
print("WARNING: Unable to save config. Make sure you set "
"CONFIG_FILE env var.")
@@ -112,28 +122,29 @@ class KubesprayInventory(object):
print("DEBUG: {0}".format(msg))
def get_ip_from_opts(self, optstring):
opts = optstring.split(' ')
for opt in opts:
if '=' not in opt:
continue
k, v = opt.split('=')
if k == "ip":
return v
if 'ip' in optstring:
return optstring['ip']
else:
raise ValueError("IP parameter not found in options")
def ensure_required_groups(self, groups):
for group in groups:
try:
if group == 'all':
self.debug("Adding group {0}".format(group))
self.config.add_section(group)
except configparser.DuplicateSectionError:
pass
if group not in self.yaml_config:
all_dict = OrderedDict([('hosts', OrderedDict({})),
('children', OrderedDict({}))])
self.yaml_config = {'all': all_dict}
else:
self.debug("Adding group {0}".format(group))
if group not in self.yaml_config['all']['children']:
self.yaml_config['all']['children'][group] = {'hosts': {}}
def get_host_id(self, host):
'''Returns integer host ID (without padding) from a given hostname.'''
try:
short_hostname = host.split('.')[0]
return int(re.findall("\d+$", short_hostname)[-1])
return int(re.findall("\\d+$", short_hostname)[-1])
except IndexError:
raise ValueError("Host name must end in an integer")
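For reference, the in-memory `yaml_config` structure the builder maintains after these changes looks roughly like the following (values are illustrative), dumped the same way the script does:

```python
# Illustrative shape of the hosts.yaml inventory built by the rewritten script.
import sys
from ruamel.yaml import YAML

yaml_config = {
    "all": {
        "hosts": {
            "node1": {"ansible_host": "10.90.0.2", "ip": "10.90.0.2", "access_ip": "10.90.0.2"},
        },
        "children": {
            "kube-master": {"hosts": {"node1": None}},
            "kube-node": {"hosts": {"node1": None}},
            "etcd": {"hosts": {"node1": None}},
            "k8s-cluster": {"children": {"kube-master": None, "kube-node": None}},
            "calico-rr": {"hosts": {}},
        },
    }
}

YAML().dump(yaml_config, sys.stdout)
```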
@@ -141,12 +152,12 @@ class KubesprayInventory(object):
existing_hosts = OrderedDict()
highest_host_id = 0
try:
for host, opts in self.config.items('all'):
existing_hosts[host] = opts
for host in self.yaml_config['all']['hosts']:
existing_hosts[host] = self.yaml_config['all']['hosts'][host]
host_id = self.get_host_id(host)
if host_id > highest_host_id:
highest_host_id = host_id
except configparser.NoSectionError:
except Exception:
pass
# FIXME(mattymo): Fix condition where delete then add reuses highest id
@@ -163,22 +174,53 @@ class KubesprayInventory(object):
self.debug("Marked {0} for deletion.".format(realhost))
self.delete_host_by_ip(all_hosts, realhost)
elif host[0].isdigit():
if ',' in host:
ip, access_ip = host.split(',')
else:
ip = host
access_ip = host
if self.exists_hostname(all_hosts, host):
self.debug("Skipping existing host {0}.".format(host))
continue
elif self.exists_ip(all_hosts, host):
self.debug("Skipping existing host {0}.".format(host))
elif self.exists_ip(all_hosts, ip):
self.debug("Skipping existing host {0}.".format(ip))
continue
next_host = "{0}{1}".format(HOST_PREFIX, next_host_id)
next_host_id += 1
all_hosts[next_host] = "ansible_host={0} ip={1}".format(
host, host)
all_hosts[next_host] = {'ansible_host': access_ip,
'ip': ip,
'access_ip': access_ip}
elif host[0].isalpha():
raise Exception("Adding hosts by hostname is not supported.")
return all_hosts
def range2ips(self, hosts):
reworked_hosts = []
def ips(start_address, end_address):
try:
# Python 3.x
start = int(ip_address(start_address))
end = int(ip_address(end_address))
except:
# Python 2.7
start = int(ip_address(unicode(start_address)))
end = int(ip_address(unicode(end_address)))
return [ip_address(ip).exploded for ip in range(start, end + 1)]
for host in hosts:
if '-' in host and not host.startswith('-'):
start, end = host.strip().split('-')
try:
reworked_hosts.extend(ips(start, end))
except ValueError:
raise Exception("Range of ip_addresses isn't valid")
else:
reworked_hosts.append(host)
return reworked_hosts
def exists_hostname(self, existing_hosts, hostname):
return hostname in existing_hosts.keys()
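Two new host notations are handled in this hunk: dash-separated IP ranges and comma-separated `ip,access_ip` pairs. A minimal sketch of both behaviours, with hypothetical helper names:

```python
# Illustrative helpers mirroring range2ips and the ip,access_ip parsing above.
from ipaddress import ip_address

def expand_range(spec):
    # "10.90.0.4-10.90.0.6" -> ["10.90.0.4", "10.90.0.5", "10.90.0.6"]
    start, end = (int(ip_address(part)) for part in spec.split("-"))
    return [ip_address(i).exploded for i in range(start, end + 1)]

def split_access_ip(spec):
    # "10.0.0.1,192.168.10.1" -> internal ip plus the address Ansible connects to
    ip, _, access_ip = spec.partition(",")
    access_ip = access_ip or ip
    return {"ansible_host": access_ip, "ip": ip, "access_ip": access_ip}

print(expand_range("10.90.0.4-10.90.0.6"))
print(split_access_ip("10.0.0.1,192.168.10.1"))
```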
@@ -196,16 +238,34 @@ class KubesprayInventory(object):
raise ValueError("Unable to find host by IP: {0}".format(ip))
def purge_invalid_hosts(self, hostnames, protected_names=[]):
for role in self.config.sections():
for host, _ in self.config.items(role):
for role in self.yaml_config['all']['children']:
if role != 'k8s-cluster' and self.yaml_config['all']['children'][role]['hosts']: # noqa
all_hosts = self.yaml_config['all']['children'][role]['hosts'].copy() # noqa
for host in all_hosts.keys():
if host not in hostnames and host not in protected_names:
self.debug("Host {0} removed from role {1}".format(host,
role))
self.config.remove_option(role, host)
self.debug(
"Host {0} removed from role {1}".format(host, role)) # noqa
del self.yaml_config['all']['children'][role]['hosts'][host] # noqa
# purge from all
if self.yaml_config['all']['hosts']:
all_hosts = self.yaml_config['all']['hosts'].copy()
for host in all_hosts.keys():
if host not in hostnames and host not in protected_names:
self.debug("Host {0} removed from role all".format(host))
del self.yaml_config['all']['hosts'][host]
def add_host_to_group(self, group, host, opts=""):
self.debug("adding host {0} to group {1}".format(host, group))
self.config.set(group, host, opts)
if group == 'all':
if self.yaml_config['all']['hosts'] is None:
self.yaml_config['all']['hosts'] = {host: None}
self.yaml_config['all']['hosts'][host] = opts
elif group != 'k8s-cluster:children':
if self.yaml_config['all']['children'][group]['hosts'] is None:
self.yaml_config['all']['children'][group]['hosts'] = {
host: None}
else:
self.yaml_config['all']['children'][group]['hosts'][host] = None # noqa
def set_kube_master(self, hosts):
for host in hosts:
@@ -216,16 +276,16 @@ class KubesprayInventory(object):
self.add_host_to_group('all', host, opts)
def set_k8s_cluster(self):
self.add_host_to_group('k8s-cluster:children', 'kube-node')
self.add_host_to_group('k8s-cluster:children', 'kube-master')
k8s_cluster = {'children': {'kube-master': None, 'kube-node': None}}
self.yaml_config['all']['children']['k8s-cluster'] = k8s_cluster
def set_calico_rr(self, hosts):
for host in hosts:
if host in self.config.items('kube-master'):
if host in self.yaml_config['all']['children']['kube-master']:
self.debug("Not adding {0} to calico-rr group because it "
"conflicts with kube-master group".format(host))
continue
if host in self.config.items('kube-node'):
if host in self.yaml_config['all']['children']['kube-node']:
self.debug("Not adding {0} to calico-rr group because it "
"conflicts with kube-node group".format(host))
continue
@@ -233,14 +293,14 @@ class KubesprayInventory(object):
def set_kube_node(self, hosts):
for host in hosts:
if len(self.config['all']) >= SCALE_THRESHOLD:
if self.config.has_option('etcd', host):
if len(self.yaml_config['all']['hosts']) >= SCALE_THRESHOLD:
if host in self.yaml_config['all']['children']['etcd']['hosts']: # noqa
self.debug("Not adding {0} to kube-node group because of "
"scale deployment and host is in etcd "
"group.".format(host))
continue
if len(self.config['all']) >= MASSIVE_SCALE_THRESHOLD:
if self.config.has_option('kube-master', host):
if len(self.yaml_config['all']['hosts']) >= MASSIVE_SCALE_THRESHOLD: # noqa
if host in self.yaml_config['all']['children']['kube-master']['hosts']: # noqa
self.debug("Not adding {0} to kube-node group because of "
"scale deployment and host is in kube-master "
"group.".format(host))
@@ -250,42 +310,31 @@ class KubesprayInventory(object):
def set_etcd(self, hosts):
for host in hosts:
self.add_host_to_group('etcd', host)
self.add_host_to_group('vault', host)
def load_file(self, files=None):
'''Directly loads JSON, or YAML file to inventory.'''
'''Directly loads JSON to inventory.'''
if not files:
raise Exception("No input file specified.")
import json
import yaml
for filename in list(files):
# Try JSON, then YAML
# Try JSON
try:
with open(filename, 'r') as f:
data = json.load(f)
except ValueError:
try:
with open(filename, 'r') as f:
data = yaml.load(f)
print("yaml")
except ValueError:
raise Exception("Cannot read %s as JSON, YAML, or CSV",
filename)
raise Exception("Cannot read %s as JSON, or CSV", filename)
self.ensure_required_groups(ROLES)
self.set_k8s_cluster()
for group, hosts in data.items():
self.ensure_required_groups([group])
for host, opts in hosts.items():
optstring = "ansible_host={0} ip={0}".format(opts['ip'])
for key, val in opts.items():
if key == "ip":
continue
optstring += " {0}={1}".format(key, val)
optstring = {'ansible_host': opts['ip'],
'ip': opts['ip'],
'access_ip': opts['ip']}
self.add_host_to_group('all', host, optstring)
self.add_host_to_group(group, host)
self.write_config(self.config_file)
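`load_file` now accepts JSON only. A plausible input shape, inferred from the loop above (only each host's `ip` field is consumed when building `ansible_host`/`ip`/`access_ip`), presumably fed via `inventory.py load <file>`:

```python
# Hypothetical example of a JSON inventory accepted by the load command.
import json

example = {
    "kube-master": {"node1": {"ip": "10.90.0.2"}},
    "kube-node": {"node2": {"ip": "10.90.0.3"}},
}
print(json.dumps(example, indent=2))
```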
@@ -313,24 +362,26 @@ print_ips - Write a space-delimited list of IPs from "all" group
Advanced usage:
Add another host after initial creation: inventory.py 10.10.1.5
Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
Add hosts with different ip and access ip: inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.10.3
Delete a host: inventory.py -10.10.1.3
Delete a host by id: inventory.py -node1
Configurable env vars:
DEBUG Enable debug printing. Default: True
CONFIG_FILE File to write config to Default: ./inventory/sample/hosts.ini
CONFIG_FILE File to write config to Default: ./inventory/sample/hosts.yaml
HOST_PREFIX Host prefix for generated hosts. Default: node
SCALE_THRESHOLD Separate ETCD role if # of nodes >= 50
MASSIVE_SCALE_THRESHOLD Separate K8s master and ETCD if # of nodes >= 200
'''
''' # noqa
print(help_text)
def print_config(self):
self.config.write(sys.stdout)
yaml.dump(self.yaml_config, sys.stdout)
def print_ips(self):
ips = []
for host, opts in self.config.items('all'):
for host, opts in self.yaml_config['all']['hosts'].items():
ips.append(self.get_ip_from_opts(opts))
print(' '.join(ips))
@@ -340,5 +391,6 @@ def main(argv=None):
argv = sys.argv[1:]
KubesprayInventory(argv, CONFIG_FILE)
if __name__ == "__main__":
sys.exit(main())

View File

@@ -1 +1,3 @@
configparser>=3.3.0
ruamel.yaml>=0.15.88
ipaddress

View File

@@ -34,7 +34,9 @@ class TestInventory(unittest.TestCase):
self.inv = inventory.KubesprayInventory()
def test_get_ip_from_opts(self):
optstring = "ansible_host=10.90.3.2 ip=10.90.3.2"
optstring = {'ansible_host': '10.90.3.2',
'ip': '10.90.3.2',
'access_ip': '10.90.3.2'}
expected = "10.90.3.2"
result = self.inv.get_ip_from_opts(optstring)
self.assertEqual(expected, result)
@@ -48,7 +50,7 @@ class TestInventory(unittest.TestCase):
groups = ['group1', 'group2']
self.inv.ensure_required_groups(groups)
for group in groups:
self.assertTrue(group in self.inv.config.sections())
self.assertTrue(group in self.inv.yaml_config['all']['children'])
def test_get_host_id(self):
hostnames = ['node99', 'no99de01', '01node01', 'node1.domain',
@@ -67,35 +69,49 @@ class TestInventory(unittest.TestCase):
def test_build_hostnames_add_one(self):
changed_hosts = ['10.90.0.2']
expected = OrderedDict([('node1',
'ansible_host=10.90.0.2 ip=10.90.0.2')])
{'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_add_duplicate(self):
changed_hosts = ['10.90.0.2']
expected = OrderedDict([('node1',
'ansible_host=10.90.0.2 ip=10.90.0.2')])
self.inv.config['all'] = expected
{'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'})])
self.inv.yaml_config['all']['hosts'] = expected
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_add_two(self):
changed_hosts = ['10.90.0.2', '10.90.0.3']
expected = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
self.inv.config['all'] = OrderedDict()
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.inv.yaml_config['all']['hosts'] = OrderedDict()
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_delete_first(self):
changed_hosts = ['-10.90.0.2']
existing_hosts = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
self.inv.config['all'] = existing_hosts
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.inv.yaml_config['all']['hosts'] = existing_hosts
expected = OrderedDict([
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
@@ -103,8 +119,12 @@ class TestInventory(unittest.TestCase):
hostname = 'node1'
expected = True
existing_hosts = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.exists_hostname(existing_hosts, hostname)
self.assertEqual(expected, result)
@@ -112,8 +132,12 @@ class TestInventory(unittest.TestCase):
hostname = 'node99'
expected = False
existing_hosts = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.exists_hostname(existing_hosts, hostname)
self.assertEqual(expected, result)
@@ -121,8 +145,12 @@ class TestInventory(unittest.TestCase):
ip = '10.90.0.2'
expected = True
existing_hosts = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.exists_ip(existing_hosts, ip)
self.assertEqual(expected, result)
@@ -130,26 +158,40 @@ class TestInventory(unittest.TestCase):
ip = '10.90.0.200'
expected = False
existing_hosts = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.exists_ip(existing_hosts, ip)
self.assertEqual(expected, result)
def test_delete_host_by_ip_positive(self):
ip = '10.90.0.2'
expected = OrderedDict([
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
existing_hosts = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.inv.delete_host_by_ip(existing_hosts, ip)
self.assertEqual(expected, existing_hosts)
def test_delete_host_by_ip_negative(self):
ip = '10.90.0.200'
existing_hosts = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.assertRaisesRegexp(ValueError, "Unable to find host",
self.inv.delete_host_by_ip, existing_hosts, ip)
@@ -157,59 +199,71 @@ class TestInventory(unittest.TestCase):
proper_hostnames = ['node1', 'node2']
bad_host = 'doesnotbelong2'
existing_hosts = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3'),
('doesnotbelong2', 'whateveropts=ilike')])
self.inv.config['all'] = existing_hosts
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'}),
('doesnotbelong2', {'whateveropts=ilike'})])
self.inv.yaml_config['all']['hosts'] = existing_hosts
self.inv.purge_invalid_hosts(proper_hostnames)
self.assertTrue(bad_host not in self.inv.config['all'].keys())
self.assertTrue(
bad_host not in self.inv.yaml_config['all']['hosts'].keys())
def test_add_host_to_group(self):
group = 'etcd'
host = 'node1'
opts = 'ip=10.90.0.2'
opts = {'ip': '10.90.0.2'}
self.inv.add_host_to_group(group, host, opts)
self.assertEqual(self.inv.config[group].get(host), opts)
self.assertEqual(
self.inv.yaml_config['all']['children'][group]['hosts'].get(host),
None)
def test_set_kube_master(self):
group = 'kube-master'
host = 'node1'
self.inv.set_kube_master([host])
self.assertTrue(host in self.inv.config[group])
self.assertTrue(
host in self.inv.yaml_config['all']['children'][group]['hosts'])
def test_set_all(self):
group = 'all'
hosts = OrderedDict([
('node1', 'opt1'),
('node2', 'opt2')])
self.inv.set_all(hosts)
for host, opt in hosts.items():
self.assertEqual(self.inv.config[group].get(host), opt)
self.assertEqual(
self.inv.yaml_config['all']['hosts'].get(host), opt)
def test_set_k8s_cluster(self):
group = 'k8s-cluster:children'
group = 'k8s-cluster'
expected_hosts = ['kube-node', 'kube-master']
self.inv.set_k8s_cluster()
for host in expected_hosts:
self.assertTrue(host in self.inv.config[group])
self.assertTrue(
host in
self.inv.yaml_config['all']['children'][group]['children'])
def test_set_kube_node(self):
group = 'kube-node'
host = 'node1'
self.inv.set_kube_node([host])
self.assertTrue(host in self.inv.config[group])
self.assertTrue(
host in self.inv.yaml_config['all']['children'][group]['hosts'])
def test_set_etcd(self):
group = 'etcd'
host = 'node1'
self.inv.set_etcd([host])
self.assertTrue(host in self.inv.config[group])
self.assertTrue(
host in self.inv.yaml_config['all']['children'][group]['hosts'])
def test_scale_scenario_one(self):
num_nodes = 50
@@ -219,11 +273,13 @@ class TestInventory(unittest.TestCase):
hosts["node" + str(hostid)] = ""
self.inv.set_all(hosts)
self.inv.set_etcd(hosts.keys()[0:3])
self.inv.set_kube_master(hosts.keys()[0:2])
self.inv.set_etcd(list(hosts.keys())[0:3])
self.inv.set_kube_master(list(hosts.keys())[0:2])
self.inv.set_kube_node(hosts.keys())
for h in range(3):
self.assertFalse(hosts.keys()[h] in self.inv.config['kube-node'])
self.assertFalse(
list(hosts.keys())[h] in
self.inv.yaml_config['all']['children']['kube-node']['hosts'])
def test_scale_scenario_two(self):
num_nodes = 500
@@ -233,8 +289,57 @@ class TestInventory(unittest.TestCase):
hosts["node" + str(hostid)] = ""
self.inv.set_all(hosts)
self.inv.set_etcd(hosts.keys()[0:3])
self.inv.set_kube_master(hosts.keys()[3:5])
self.inv.set_etcd(list(hosts.keys())[0:3])
self.inv.set_kube_master(list(hosts.keys())[3:5])
self.inv.set_kube_node(hosts.keys())
for h in range(5):
self.assertFalse(hosts.keys()[h] in self.inv.config['kube-node'])
self.assertFalse(
list(hosts.keys())[h] in
self.inv.yaml_config['all']['children']['kube-node']['hosts'])
def test_range2ips_range(self):
changed_hosts = ['10.90.0.2', '10.90.0.4-10.90.0.6', '10.90.0.8']
expected = ['10.90.0.2',
'10.90.0.4',
'10.90.0.5',
'10.90.0.6',
'10.90.0.8']
result = self.inv.range2ips(changed_hosts)
self.assertEqual(expected, result)
def test_range2ips_incorrect_range(self):
host_range = ['10.90.0.4-a.9b.c.e']
self.assertRaisesRegexp(Exception, "Range of ip_addresses isn't valid",
self.inv.range2ips, host_range)
def test_build_hostnames_different_ips_add_one(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_different_ips_add_duplicate(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
self.inv.yaml_config['all']['hosts'] = expected
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_different_ips_add_two(self):
changed_hosts = ['10.90.0.2,192.168.0.2', '10.90.0.3,192.168.0.3']
expected = OrderedDict([
('node1', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node2', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'})])
self.inv.yaml_config['all']['hosts'] = OrderedDict()
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)

View File

@@ -1,15 +1,9 @@
---
- name: Upgrade all packages to the latest version (yum)
yum:
name: '*'
state: latest
when: ansible_os_family == "RedHat"
- name: Install required packages
yum:
name: "{{ item }}"
state: latest
state: present
with_items:
- bind-utils
- ntp
@@ -21,23 +15,13 @@
update_cache: yes
cache_valid_time: 3600
name: "{{ item }}"
state: latest
state: present
install_recommends: no
with_items:
- dnsutils
- ntp
when: ansible_os_family == "Debian"
- name: Upgrade all packages to the latest version (apt)
shell: apt-get -o \
Dpkg::Options::=--force-confdef -o \
Dpkg::Options::=--force-confold -q -y \
dist-upgrade
environment:
DEBIAN_FRONTEND: noninteractive
when: ansible_os_family == "Debian"
# Create deployment user if required
- include: user.yml
when: k8s_deployment_user is defined

View File

@@ -6,5 +6,5 @@ This playbook aims to automate [this](https://metallb.universe.tf/tutorial/layer
## Install
```
ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contrib/metallb/metallb.yml
ansible-playbook --ask-become -i inventory/sample/hosts.ini contrib/metallb/metallb.yml
```

1
contrib/metallb/library Symbolic link
View File

@@ -0,0 +1 @@
../../library

View File

@@ -5,3 +5,4 @@ metallb:
cpu: "100m"
memory: "100Mi"
port: "7472"
version: v0.7.3

View File

@@ -12,6 +12,7 @@
kubectl: "{{bin_dir}}/kubectl"
filename: "{{ kube_config_dir }}/{{ item.item }}"
state: "{{ item.changed | ternary('latest','present') }}"
become: true
with_items: "{{ rendering.results }}"
when:
- "inventory_hostname == groups['kube-master'][0]"

View File

@@ -53,22 +53,6 @@ rules:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: metallb-system
name: leader-election
labels:
app: metallb
rules:
- apiGroups: [""]
resources: ["endpoints"]
resourceNames: ["metallb-speaker"]
verbs: ["get", "update"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: metallb-system
name: config-watcher
@@ -131,21 +115,6 @@ roleRef:
kind: Role
name: config-watcher
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
namespace: metallb-system
name: leader-election
labels:
app: metallb
subjects:
- kind: ServiceAccount
name: speaker
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: leader-election
---
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
@@ -173,7 +142,7 @@ spec:
hostNetwork: true
containers:
- name: speaker
image: metallb/speaker:v0.6.2
image: metallb/speaker:{{ metallb.version }}
imagePullPolicy: IfNotPresent
args:
- --port={{ metallb.port }}
@@ -230,7 +199,7 @@ spec:
runAsUser: 65534 # nobody
containers:
- name: controller
image: metallb/controller:v0.6.2
image: metallb/controller:{{ metallb.version }}
imagePullPolicy: IfNotPresent
args:
- --port={{ metallb.port }}
@@ -250,5 +219,3 @@ spec:
readOnlyRootFilesystem: true
---

View File

@@ -22,4 +22,3 @@
- hosts: kube-master[0]
roles:
- { role: kubernetes-pv }

View File

@@ -79,4 +79,3 @@
src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
state: unmounted
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

View File

@@ -1,2 +1,3 @@
---
dependencies:
- {role: kubernetes-pv/ansible, tags: apps}

View File

@@ -1,3 +1,4 @@
---
# Bootstrap heketi
- name: "Get state of heketi service, deployment and pods."
register: "initial_heketi_state"

View File

@@ -8,7 +8,9 @@
- register: "clusterrolebinding_state"
command: "{{bin_dir}}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
changed_when: false
- assert: { that: "clusterrolebinding_state.stdout != \"\"", message: "Cluster role binding is not present." }
- assert:
that: "clusterrolebinding_state.stdout != \"\""
msg: "Cluster role binding is not present."
- register: "secret_state"
command: "{{bin_dir}}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
@@ -24,4 +26,6 @@
- register: "secret_state"
command: "{{bin_dir}}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
changed_when: false
- assert: { that: "secret_state.stdout != \"\"", message: "Heketi config secret is not present." }
- assert:
that: "secret_state.stdout != \"\""
msg: "Heketi config secret is not present."

View File

@@ -69,7 +69,7 @@
},
"readinessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 60,
"initialDelaySeconds": 3,
"exec": {
"command": [
"/bin/bash",
@@ -80,7 +80,7 @@
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 60,
"initialDelaySeconds": 10,
"exec": {
"command": [
"/bin/bash",

View File

@@ -106,7 +106,7 @@
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 30,
"initialDelaySeconds": 10,
"httpGet": {
"path": "/hello",
"port": 8080

View File

@@ -122,7 +122,7 @@
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 30,
"initialDelaySeconds": 10,
"httpGet": {
"path": "/hello",
"port": 8080

5
contrib/terraform/OWNERS Normal file
View File

@@ -0,0 +1,5 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- holmsten
- miouge1

View File

@@ -24,10 +24,8 @@ module "aws-vpc" {
aws_cidr_subnets_private = "${var.aws_cidr_subnets_private}"
aws_cidr_subnets_public = "${var.aws_cidr_subnets_public}"
default_tags = "${var.default_tags}"
}
module "aws-elb" {
source = "modules/elb"
@@ -38,7 +36,6 @@ module "aws-elb" {
aws_elb_api_port = "${var.aws_elb_api_port}"
k8s_secure_api_port = "${var.k8s_secure_api_port}"
default_tags = "${var.default_tags}"
}
module "aws-iam" {
@@ -60,7 +57,6 @@ resource "aws_instance" "bastion-server" {
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_public,count.index)}"
vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
key_name = "${var.AWS_SSH_KEY_NAME}"
@@ -72,7 +68,6 @@ resource "aws_instance" "bastion-server" {
))}"
}
/*
* Create K8s Master and worker nodes and etcd instances
*
@@ -84,18 +79,14 @@ resource "aws_instance" "k8s-master" {
count = "${var.aws_kube_master_num}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
iam_instance_profile = "${module.aws-iam.kube-master-profile}"
key_name = "${var.AWS_SSH_KEY_NAME}"
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-master${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
@@ -109,18 +100,15 @@ resource "aws_elb_attachment" "attach_master_nodes" {
instance = "${element(aws_instance.k8s-master.*.id,count.index)}"
}
resource "aws_instance" "k8s-etcd" {
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_etcd_size}"
count = "${var.aws_etcd_num}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
key_name = "${var.AWS_SSH_KEY_NAME}"
@@ -130,10 +118,8 @@ resource "aws_instance" "k8s-etcd" {
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "etcd"
))}"
}
resource "aws_instance" "k8s-worker" {
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_kube_worker_size}"
@@ -148,17 +134,13 @@ resource "aws_instance" "k8s-worker" {
iam_instance_profile = "${module.aws-iam.kube-worker-profile}"
key_name = "${var.AWS_SSH_KEY_NAME}"
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-worker${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "worker"
))}"
}
/*
* Create Kubespray Inventory File
*
@@ -176,7 +158,6 @@ data "template_file" "inventory" {
list_etcd = "${join("\n",aws_instance.k8s-etcd.*.tags.Name)}"
elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
}
}
resource "null_resource" "inventories" {
@@ -187,5 +168,4 @@ resource "null_resource" "inventories" {
triggers {
template = "${data.template_file.inventory.rendered}"
}
}

View File

@@ -7,7 +7,6 @@ resource "aws_security_group" "aws-elb" {
))}"
}
resource "aws_security_group_rule" "aws-allow-api-access" {
type = "ingress"
from_port = "${var.aws_elb_api_port}"

View File

@@ -14,14 +14,11 @@ variable "k8s_secure_api_port" {
description = "Secure Port of K8S API Server"
}
variable "aws_avail_zones" {
description = "Availability Zones Used"
type = "list"
}
variable "aws_subnet_ids_public" {
description = "IDs of Public Subnets"
type = "list"

View File

@@ -2,6 +2,7 @@
resource "aws_iam_role" "kube-master" {
name = "kubernetes-${var.aws_cluster_name}-master"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
@@ -20,6 +21,7 @@ EOF
resource "aws_iam_role" "kube-worker" {
name = "kubernetes-${var.aws_cluster_name}-node"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
@@ -41,6 +43,7 @@ EOF
resource "aws_iam_role_policy" "kube-master" {
name = "kubernetes-${var.aws_cluster_name}-master"
role = "${aws_iam_role.kube-master.id}"
policy = <<EOF
{
"Version": "2012-10-17",
@@ -75,6 +78,7 @@ EOF
resource "aws_iam_role_policy" "kube-worker" {
name = "kubernetes-${var.aws_cluster_name}-node"
role = "${aws_iam_role.kube-worker.id}"
policy = <<EOF
{
"Version": "2012-10-17",
@@ -124,7 +128,6 @@ resource "aws_iam_role_policy" "kube-worker" {
EOF
}
#Create AWS Instance Profiles
resource "aws_iam_instance_profile" "kube-master" {

View File

@@ -1,4 +1,3 @@
resource "aws_vpc" "cluster-vpc" {
cidr_block = "${var.aws_vpc_cidr_block}"
@@ -11,17 +10,14 @@ resource "aws_vpc" "cluster-vpc" {
))}"
}
resource "aws_eip" "cluster-nat-eip" {
count = "${length(var.aws_cidr_subnets_public)}"
vpc = true
}
resource "aws_internet_gateway" "cluster-vpc-internetgw" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-internetgw"
))}"
@@ -43,7 +39,6 @@ resource "aws_nat_gateway" "cluster-nat-gateway" {
count = "${length(var.aws_cidr_subnets_public)}"
allocation_id = "${element(aws_eip.cluster-nat-eip.*.id, count.index)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-public.*.id, count.index)}"
}
resource "aws_subnet" "cluster-vpc-subnets-private" {
@@ -63,6 +58,7 @@ resource "aws_subnet" "cluster-vpc-subnets-private" {
resource "aws_route_table" "kubernetes-public" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.cluster-vpc-internetgw.id}"
@@ -76,6 +72,7 @@ resource "aws_route_table" "kubernetes-public" {
resource "aws_route_table" "kubernetes-private" {
count = "${length(var.aws_cidr_subnets_private)}"
vpc_id = "${aws_vpc.cluster-vpc.id}"
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = "${element(aws_nat_gateway.cluster-nat-gateway.*.id, count.index)}"
@@ -84,24 +81,20 @@ resource "aws_route_table" "kubernetes-private" {
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-routetable-private-${count.index}"
))}"
}
resource "aws_route_table_association" "kubernetes-public" {
count = "${length(var.aws_cidr_subnets_public)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-public.*.id,count.index)}"
route_table_id = "${aws_route_table.kubernetes-public.id}"
}
resource "aws_route_table_association" "kubernetes-private" {
count = "${length(var.aws_cidr_subnets_private)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-private.*.id,count.index)}"
route_table_id = "${element(aws_route_table.kubernetes-private.*.id,count.index)}"
}
#Kubernetes Security Groups
resource "aws_security_group" "kubernetes" {
@@ -131,7 +124,6 @@ resource "aws_security_group_rule" "allow-all-egress" {
security_group_id = "${aws_security_group.kubernetes.id}"
}
resource "aws_security_group_rule" "allow-ssh-connections" {
type = "ingress"
from_port = 22

View File

@@ -12,10 +12,8 @@ output "aws_subnet_ids_public" {
output "aws_security_group" {
value = ["${aws_security_group.kubernetes.*.id}"]
}
output "default_tags" {
value = "${var.default_tags}"
}

View File

@@ -2,12 +2,10 @@ variable "aws_vpc_cidr_block" {
description = "CIDR Blocks for AWS VPC"
}
variable "aws_cluster_name" {
description = "Name of Cluster"
}
variable "aws_avail_zones" {
description = "AWS Availability Zones Used"
type = "list"

View File

@@ -14,7 +14,6 @@ output "etcd" {
value = "${join("\n", aws_instance.k8s-etcd.*.private_ip)}"
}
output "aws_elb_api_fqdn" {
value = "${module.aws-elb.aws_elb_api_fqdn}:${var.aws_elb_api_port}"
}

View File

@@ -0,0 +1,53 @@
#Global Vars
aws_cluster_name = "devtest"
#VPC Vars
aws_vpc_cidr_block = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20", "10.250.208.0/20"]
aws_cidr_subnets_public = ["10.250.224.0/20", "10.250.240.0/20"]
#Bastion Host
aws_bastion_size = "t2.medium"
#Kubernetes Cluster
aws_kube_master_num = 3
aws_kube_master_size = "t2.medium"
aws_etcd_num = 3
aws_etcd_size = "t2.medium"
aws_kube_worker_num = 4
aws_kube_worker_size = "t2.medium"
#Settings AWS ELB
aws_elb_api_port = 6443
k8s_secure_api_port = 6443
kube_insecure_apiserver_address = "0.0.0.0"
default_tags = {
# Env = "devtest" # Product = "kubernetes"
}
inventory_file = "../../../inventory/hosts"
## Credentials
#AWS Access Key
AWS_ACCESS_KEY_ID = ""
#AWS Secret Key
AWS_SECRET_ACCESS_KEY = ""
#EC2 SSH Key Name
AWS_SSH_KEY_NAME = ""
#AWS Region
AWS_DEFAULT_REGION = "eu-central-1"

View File

@@ -0,0 +1 @@
../../../../inventory/sample/group_vars

View File

@@ -10,7 +10,7 @@ most modern installs of OpenStack that support the basic services.
### Known compatible public clouds
- [Auro](https://auro.io/)
- [BetaCloud](https://www.betacloud.io/)
- [Betacloud](https://www.betacloud.io/)
- [CityCloud](https://www.citycloud.com/)
- [DreamHost](https://www.dreamhost.com/cloud/computing/)
- [ELASTX](https://elastx.se/)
@@ -109,6 +109,7 @@ Create an inventory directory for your cluster by copying the existing sample an
$ cp -LRp contrib/terraform/openstack/sample-inventory inventory/$CLUSTER
$ cd inventory/$CLUSTER
$ ln -s ../../contrib/terraform/openstack/hosts
$ ln -s ../../contrib
```
This will be the base for subsequent Terraform commands.
@@ -228,7 +229,7 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
|`dns_nameservers`| An array of DNS name server names to be used by hosts in the internal subnet. |
|`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
|`external_net` | UUID of the external network that will be routed to |
|`flavor_k8s_master`,`flavor_k8s_node`,`flavor_etcd`, `flavor_bastion`,`flavor_gfs_node` | Flavor depends on your openstack installation, you can get available flavor IDs through`nova flavor-list` |
|`flavor_k8s_master`,`flavor_k8s_node`,`flavor_etcd`, `flavor_bastion`,`flavor_gfs_node` | Flavor depends on your openstack installation, you can get available flavor IDs through `openstack flavor list` |
|`image`,`image_gfs` | Name of the image to use in provisioning the compute resources. Should already be loaded into glance. |
|`ssh_user`,`ssh_user_gfs` | The username to ssh into the image with. This usually depends on the image you have selected |
|`public_key_path` | Path on your local workstation to the public key file you wish to use in creating the key pairs |
@@ -242,6 +243,8 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
|`supplementary_master_groups` | To add ansible groups to the masters, such as `kube-node` for tainting them as nodes, empty by default. |
|`supplementary_node_groups` | To add ansible groups to the nodes, such as `kube-ingress` for running ingress controller pods, empty by default. |
|`bastion_allowed_remote_ips` | List of CIDR allowed to initiate a SSH connection, `["0.0.0.0/0"]` by default |
|`master_allowed_remote_ips` | List of CIDR blocks allowed to initiate an API connection, `["0.0.0.0/0"]` by default |
|`k8s_allowed_remote_ips` | List of CIDR allowed to initiate a SSH connection, empty by default |
|`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |
#### Terraform state files
@@ -358,7 +361,7 @@ If it fails try to connect manually via SSH. It could be something as simple as
### Configure cluster variables
Edit `inventory/$CLUSTER/group_vars/all.yml`:
Edit `inventory/$CLUSTER/group_vars/all/all.yml`:
- **bin_dir**:
```
# Directory where the binaries will be installed
@@ -371,7 +374,7 @@ bin_dir: /opt/bin
```
cloud_provider: openstack
```
Edit `inventory/$CLUSTER/group_vars/k8s-cluster.yml`:
Edit `inventory/$CLUSTER/group_vars/k8s-cluster/k8s-cluster.yml`:
- Set variable **kube_network_plugin** to your desired networking plugin.
- **flannel** works out-of-the-box
- **calico** requires [configuring OpenStack Neutron ports](/docs/openstack.md) to allow service and pod subnets
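For instance, selecting Calico would mean a line roughly like the following in `inventory/$CLUSTER/group_vars/k8s-cluster/k8s-cluster.yml` (a sketch; pick whichever plugin fits your environment):
```
# Choose network plugin (calico, flannel, weave, ...)
kube_network_plugin: calico
```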
@@ -415,8 +418,8 @@ ssh [os-user]@[master-ip] sudo ls /etc/kubernetes/ssl/
```
4. Get `admin`'s certificates and keys:
```
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1-key.pem > admin-key.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1.pem > admin.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-kube-master-1-key.pem > admin-key.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-kube-master-1.pem > admin.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/ca.pem > ca.pem
```
5. Configure kubectl:

View File

@@ -1,3 +1,7 @@
provider "openstack" {
version = "~> 1.17"
}
module "network" {
source = "modules/network"
@@ -49,9 +53,13 @@ module "compute" {
network_name = "${var.network_name}"
flavor_bastion = "${var.flavor_bastion}"
k8s_master_fips = "${module.ips.k8s_master_fips}"
k8s_master_no_etcd_fips = "${module.ips.k8s_master_no_etcd_fips}"
k8s_node_fips = "${module.ips.k8s_node_fips}"
bastion_fips = "${module.ips.bastion_fips}"
bastion_allowed_remote_ips = "${var.bastion_allowed_remote_ips}"
master_allowed_remote_ips = "${var.master_allowed_remote_ips}"
k8s_allowed_remote_ips = "${var.k8s_allowed_remote_ips}"
k8s_allowed_egress_ips = "${var.k8s_allowed_egress_ips}"
supplementary_master_groups = "${var.supplementary_master_groups}"
supplementary_node_groups = "${var.supplementary_node_groups}"
worker_allowed_ports = "${var.worker_allowed_ports}"
@@ -72,7 +80,7 @@ output "router_id" {
}
output "k8s_master_fips" {
value = "${module.ips.k8s_master_fips}"
value = "${concat(module.ips.k8s_master_fips, module.ips.k8s_master_no_etcd_fips)}"
}
output "k8s_node_fips" {

View File

@@ -6,25 +6,29 @@ resource "openstack_compute_keypair_v2" "k8s" {
resource "openstack_networking_secgroup_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master"
description = "${var.cluster_name} - Kubernetes Master"
delete_default_rules = true
}
resource "openstack_networking_secgroup_rule_v2" "k8s_master" {
count = "${length(var.master_allowed_remote_ips)}"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "6443"
port_range_max = "6443"
remote_ip_prefix = "0.0.0.0/0"
remote_ip_prefix = "${var.master_allowed_remote_ips[count.index]}"
security_group_id = "${openstack_networking_secgroup_v2.k8s_master.id}"
}
resource "openstack_networking_secgroup_v2" "bastion" {
name = "${var.cluster_name}-bastion"
count = "${var.number_of_bastions ? 1 : 0}"
description = "${var.cluster_name} - Bastion Server"
delete_default_rules = true
}
resource "openstack_networking_secgroup_rule_v2" "bastion" {
count = "${length(var.bastion_allowed_remote_ips)}"
count = "${var.number_of_bastions ? length(var.bastion_allowed_remote_ips) : 0}"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
@@ -37,6 +41,7 @@ resource "openstack_networking_secgroup_rule_v2" "bastion" {
resource "openstack_networking_secgroup_v2" "k8s" {
name = "${var.cluster_name}-k8s"
description = "${var.cluster_name} - Kubernetes"
delete_default_rules = true
}
resource "openstack_networking_secgroup_rule_v2" "k8s" {
@@ -46,9 +51,29 @@ resource "openstack_networking_secgroup_rule_v2" "k8s" {
security_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
}
resource "openstack_networking_secgroup_rule_v2" "k8s_allowed_remote_ips" {
count = "${length(var.k8s_allowed_remote_ips)}"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
remote_ip_prefix = "${var.k8s_allowed_remote_ips[count.index]}"
security_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
}
resource "openstack_networking_secgroup_rule_v2" "egress" {
count = "${length(var.k8s_allowed_egress_ips)}"
direction = "egress"
ethertype = "IPv4"
remote_ip_prefix = "${var.k8s_allowed_egress_ips[count.index]}"
security_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
}
resource "openstack_networking_secgroup_v2" "worker" {
name = "${var.cluster_name}-k8s-worker"
description = "${var.cluster_name} - Kubernetes worker nodes"
delete_default_rules = true
}
resource "openstack_networking_secgroup_rule_v2" "worker" {
@@ -75,7 +100,6 @@ resource "openstack_compute_instance_v2" "bastion" {
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.bastion.name}",
"default",
]
metadata = {
@@ -87,7 +111,6 @@ resource "openstack_compute_instance_v2" "bastion" {
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${var.bastion_fips[0]}/ > contrib/terraform/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master" {
@@ -103,9 +126,7 @@ resource "openstack_compute_instance_v2" "k8s_master" {
}
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.bastion.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
"default",
]
metadata = {
@@ -117,7 +138,6 @@ resource "openstack_compute_instance_v2" "k8s_master" {
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
@@ -133,7 +153,6 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
}
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.bastion.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
@@ -146,7 +165,6 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "etcd" {
@@ -168,7 +186,6 @@ resource "openstack_compute_instance_v2" "etcd" {
kubespray_groups = "etcd,vault,no-floating"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
@@ -185,7 +202,6 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
"default",
]
metadata = {
@@ -193,7 +209,6 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
@@ -217,7 +232,6 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_instance_v2" "k8s_node" {
@@ -233,9 +247,7 @@ resource "openstack_compute_instance_v2" "k8s_node" {
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.bastion.name}",
"${openstack_networking_secgroup_v2.worker.name}",
"default",
]
metadata = {
@@ -247,7 +259,6 @@ resource "openstack_compute_instance_v2" "k8s_node" {
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_node_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
@@ -264,7 +275,6 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.worker.name}",
"default",
]
metadata = {
@@ -272,7 +282,6 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
kubespray_groups = "kube-node,k8s-cluster,no-floating,${var.supplementary_node_groups}"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_floatingip_associate_v2" "bastion" {
@@ -287,6 +296,12 @@ resource "openstack_compute_floatingip_associate_v2" "k8s_master" {
floating_ip = "${var.k8s_master_fips[count.index]}"
}
resource "openstack_compute_floatingip_associate_v2" "k8s_master_no_etcd" {
count = "${var.number_of_k8s_masters_no_etcd}"
instance_id = "${element(openstack_compute_instance_v2.k8s_master_no_etcd.*.id, count.index)}"
floating_ip = "${var.k8s_master_no_etcd_fips[count.index]}"
}
resource "openstack_compute_floatingip_associate_v2" "k8s_node" {
count = "${var.number_of_k8s_nodes}"
floating_ip = "${var.k8s_node_fips[count.index]}"
@@ -312,16 +327,13 @@ resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"default",
]
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}"]
metadata = {
ssh_user = "${var.ssh_user_gfs}"
kubespray_groups = "gfs-cluster,network-storage,no-floating"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_volume_attach_v2" "glusterfs_volume" {

View File

@@ -54,6 +54,10 @@ variable "k8s_master_fips" {
type = "list"
}
variable "k8s_master_no_etcd_fips" {
type = "list"
}
variable "k8s_node_fips" {
type = "list"
}
@@ -66,6 +70,18 @@ variable "bastion_allowed_remote_ips" {
type = "list"
}
variable "master_allowed_remote_ips" {
type = "list"
}
variable "k8s_allowed_remote_ips" {
type = "list"
}
variable "k8s_allowed_egress_ips" {
type = "list"
}
variable "supplementary_master_groups" {
default = ""
}

View File

@@ -10,6 +10,12 @@ resource "openstack_networking_floatingip_v2" "k8s_master" {
depends_on = ["null_resource.dummy_dependency"]
}
resource "openstack_networking_floatingip_v2" "k8s_master_no_etcd" {
count = "${var.number_of_k8s_masters_no_etcd}"
pool = "${var.floatingip_pool}"
depends_on = ["null_resource.dummy_dependency"]
}
resource "openstack_networking_floatingip_v2" "k8s_node" {
count = "${var.number_of_k8s_nodes}"
pool = "${var.floatingip_pool}"

View File

@@ -2,6 +2,10 @@ output "k8s_master_fips" {
value = ["${openstack_networking_floatingip_v2.k8s_master.*.address}"]
}
output "k8s_master_no_etcd_fips" {
value = ["${openstack_networking_floatingip_v2.k8s_master_no_etcd.*.address}"]
}
output "k8s_node_fips" {
value = ["${openstack_networking_floatingip_v2.k8s_node.*.address}"]
}

View File

@@ -4,7 +4,6 @@ output "router_id" {
output "router_internal_port_id" {
value = "${element(concat(openstack_networking_router_interface_v2.k8s.*.id, list("")), 0)}"
}
output "subnet_id" {

View File

@@ -6,11 +6,13 @@ public_key_path = "~/.ssh/id_rsa.pub"
# image to use for bastion, masters, standalone etcd instances, and nodes
image = "<image name>"
# user on the node (ex. core on Container Linux, ubuntu on Ubuntu, etc.)
ssh_user = "<cloud-provisioned user>"
# 0|1 bastion nodes
number_of_bastions = 0
#flavor_bastion = "<UUID>"
# standalone etcds
@@ -18,14 +20,20 @@ number_of_etcd = 0
# masters
number_of_k8s_masters = 1
number_of_k8s_masters_no_etcd = 0
number_of_k8s_masters_no_floating_ip = 0
number_of_k8s_masters_no_floating_ip_no_etcd = 0
flavor_k8s_master = "<UUID>"
# nodes
number_of_k8s_nodes = 2
number_of_k8s_nodes_no_floating_ip = 4
#flavor_k8s_node = "<UUID>"
# GlusterFS
@@ -40,7 +48,11 @@ number_of_k8s_nodes_no_floating_ip = 4
# networking
network_name = "<network>"
external_net = "<UUID>"
subnet_cidr = "<cidr>"
floatingip_pool = "<pool>"
bastion_allowed_remote_ips = ["0.0.0.0/0"]

View File

@@ -74,27 +74,27 @@ variable "ssh_user_gfs" {
}
variable "flavor_bastion" {
description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
default = 3
}
variable "flavor_k8s_master" {
description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
default = 3
}
variable "flavor_k8s_node" {
description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
default = 3
}
variable "flavor_etcd" {
description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
default = 3
}
variable "flavor_gfs_node" {
description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
default = 3
}
@@ -145,14 +145,33 @@ variable "bastion_allowed_remote_ips" {
default = ["0.0.0.0/0"]
}
variable "master_allowed_remote_ips" {
description = "An array of CIDRs allowed to access API of masters"
type = "list"
default = ["0.0.0.0/0"]
}
variable "k8s_allowed_remote_ips" {
description = "An array of CIDRs allowed to SSH to hosts"
type = "list"
default = []
}
variable "k8s_allowed_egress_ips" {
description = "An array of CIDRs allowed for egress traffic"
type = "list"
default = ["0.0.0.0/0"]
}
variable "worker_allowed_ports" {
type = "list"
default = [
{
"protocol" = "tcp"
"port_range_min" = 30000
"port_range_max" = 32767
"remote_ip_prefix" = "0.0.0.0/0"
}
},
]
}

View File

@@ -0,0 +1,231 @@
# Kubernetes on Packet with Terraform
Provision a Kubernetes cluster with [Terraform](https://www.terraform.io) on
[Packet](https://www.packet.com).
## Status
This will install a Kubernetes cluster on Packet bare metal. It should work in all locations and on most server types.
## Approach
The terraform configuration inspects variables found in
[variables.tf](variables.tf) to create resources in your Packet project.
There is a [python script](../terraform.py) that reads the generated `.tfstate`
file to generate a dynamic inventory that is consumed by [cluster.yml](../../../cluster.yml)
to actually install Kubernetes with Kubespray.
### Kubernetes Nodes
You can create many different kubernetes topologies by setting the number of
different classes of hosts.
- Master nodes with etcd: `number_of_k8s_masters` variable
- Master nodes without etcd: `number_of_k8s_masters_no_etcd` variable
- Standalone etcd hosts: `number_of_etcd` variable
- Kubernetes worker nodes: `number_of_k8s_nodes` variable
Note that the Ansible script will report an invalid configuration if you wind up
with an *even number* of etcd instances since that is not a valid configuration. This
restriction includes standalone etcd nodes that are deployed in a cluster along with
master nodes with etcd replicas. As an example, if you have three master nodes with
etcd replicas and three standalone etcd nodes, the script will fail since there are
now six total etcd replicas.
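As a hypothetical sketch, the following combination in `cluster.tf` keeps the total etcd member count at three (odd) and therefore passes the check:
```
number_of_k8s_masters         = 3
number_of_k8s_masters_no_etcd = 0
number_of_etcd                = 0
number_of_k8s_nodes           = 2
```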
## Requirements
- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
- Install dependencies: `sudo pip install -r requirements.txt`
- Account with Packet Host
- An SSH key pair
## SSH Key Setup
An SSH key pair is required so Ansible can access the newly provisioned nodes (bare metal Packet hosts). By default, the public SSH key defined in cluster.tf (~/.ssh/id_rsa.pub) will be installed in authorized_keys on the newly provisioned nodes. Terraform will upload this public key and it will then be distributed to all the nodes. If you have already set this public key in Packet (i.e. via the portal), set the public key file name in cluster.tf to blank to prevent the duplicate key from being uploaded, which would cause an error.
If you don't already have a keypair generated (~/.ssh/id_rsa and ~/.ssh/id_rsa.pub), then a new keypair can be generated with the command:
```ShellSession
ssh-keygen -f ~/.ssh/id_rsa
```
## Terraform
Terraform will be used to provision all of the Packet resources with base software as appropriate.
### Configuration
#### Inventory files
Create an inventory directory for your cluster by copying the existing sample and linking the `hosts` script (used to build the inventory based on Terraform state):
```ShellSession
$ cp -LRp contrib/terraform/packet/sample-inventory inventory/$CLUSTER
$ cd inventory/$CLUSTER
$ ln -s ../../contrib/terraform/packet/hosts
```
This will be the base for subsequent Terraform commands.
#### Packet API access
Your Packet API key must be available in the `PACKET_AUTH_TOKEN` environment variable.
This key is typically stored outside of the code repo since it is considered secret.
If someone gets this key, they can start up and shut down hosts in your project!
For more information on how to generate an API key or find your project ID, please see:
https://support.packet.com/kb/articles/api-integrations
The Packet Project ID associated with the key will be set later in cluster.tf.
For more information about the API, please see:
https://www.packet.com/developers/api/
Example:
```ShellSession
$ export PACKET_AUTH_TOKEN="Example-API-Token"
```
Note that to deploy several clusters within the same project you need to use [terraform workspace](https://www.terraform.io/docs/state/workspaces.html#using-workspaces).
#### Cluster variables
The construction of the cluster is driven by values found in
[variables.tf](variables.tf).
For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
The `cluster_name` is used to set a tag on each server deployed as part of this cluster.
This helps when identifying which hosts are associated with each cluster.
While the defaults in variables.tf will successfully deploy a cluster, it is recommended to set the following values (see the example after this list):
* cluster_name = the name of the inventory directory created above as $CLUSTER
* packet_project_id = the Packet Project ID associated with the Packet API token above
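A minimal sketch of those settings in `inventory/$CLUSTER/cluster.tf` (the project ID is a placeholder):
```
cluster_name      = "alpha"
packet_project_id = "<your-packet-project-id>"
```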
#### Enable localhost access
Kubespray can pull down a Kubernetes configuration file for accessing this cluster, by enabling
`kubeconfig_localhost: true` in the Kubespray configuration.
Edit `inventory/$CLUSTER/group_vars/k8s-cluster/k8s-cluster.yml`, uncomment the following line and change the value from `false` to `true`:
`# kubeconfig_localhost: false`
becomes:
`kubeconfig_localhost: true`
Once the Kubespray playbooks are run, a Kubernetes configuration file will be written to the local host at `inventory/$CLUSTER/artifacts/admin.conf`
#### Terraform state files
In the cluster's inventory folder, the following files might be created (either by Terraform
or manually). To prevent you from pushing them accidentally, they are listed in a
`.gitignore` file in the `terraform/packet` directory:
* `.terraform`
* `.tfvars`
* `.tfstate`
* `.tfstate.backup`
You can still add them manually if you want to.
### Initialization
Before Terraform can operate on your cluster you need to install the required
plugins. This is accomplished as follows:
```ShellSession
$ cd inventory/$CLUSTER
$ terraform init ../../contrib/terraform/packet
```
This should finish fairly quickly telling you Terraform has successfully initialized and loaded necessary modules.
### Provisioning cluster
You can apply the Terraform configuration to your cluster with the following command
issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession
$ terraform apply -var-file=cluster.tf ../../contrib/terraform/packet
$ export ANSIBLE_HOST_KEY_CHECKING=False
$ ansible-playbook -i hosts ../../cluster.yml
```
### Destroying cluster
You can destroy your new cluster with the following command issued from the cluster's inventory directory:
```ShellSession
$ terraform destroy -var-file=cluster.tf ../../contrib/terraform/packet
```
If you've started the Ansible run, it may also be a good idea to do some manual cleanup:
* remove the destroyed cluster's SSH host keys from your `~/.ssh/known_hosts` file (see the sketch after this list)
* clean up any temporary cache files: `rm /tmp/$CLUSTER-*`
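For the `known_hosts` cleanup, something along these lines should work (`<node-ip>` is a placeholder for each destroyed node's address):
```ShellSession
$ ssh-keygen -R <node-ip>    # repeat for every destroyed node to drop its stale host key
$ rm /tmp/$CLUSTER-*
```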
### Debugging
You can enable debugging output from Terraform by setting `TF_LOG` to `DEBUG` before running the Terraform command.
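For example, reusing the apply command from above:
```ShellSession
$ export TF_LOG=DEBUG
$ terraform apply -var-file=cluster.tf ../../contrib/terraform/packet
```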
## Ansible
### Node access
#### SSH
Ensure your local ssh-agent is running and your ssh key has been added. This
step is required by the terraform provisioner:
```
$ eval $(ssh-agent -s)
$ ssh-add ~/.ssh/id_rsa
```
If you have deployed and destroyed a previous iteration of your cluster, you will need to clear out any stale keys from your SSH "known hosts" file ( `~/.ssh/known_hosts`).
#### Test access
Make sure you can connect to the hosts. Note that Container Linux by CoreOS will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.
```
$ ansible -i inventory/$CLUSTER/hosts -m ping all
example-k8s_node-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
example-etcd-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
example-k8s-master-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
If it fails try to connect manually via SSH. It could be something as simple as a stale host key.
### Deploy Kubernetes
```
$ ansible-playbook --become -i inventory/$CLUSTER/hosts cluster.yml
```
This will take some time as there are many tasks to run.
## Kubernetes
### Set up kubectl
* [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on the localhost.
* Verify that kubectl runs correctly
```
kubectl version
```
* Verify that the Kubernetes configuration file has been copied over
```
cat inventory/$CLUSTER/artifacts/admin.conf
```
* Verify that all the nodes are running correctly.
```
kubectl version
kubectl --kubeconfig=inventory/$CLUSTER/artifacts/admin.conf get nodes
```
## What's next
Try out your new Kubernetes cluster with the [Hello Kubernetes service](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/).
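As a quick smoke test before working through the linked tutorial, you could create and expose a throwaway deployment (a sketch; the `nginx` image is just an arbitrary example):
```ShellSession
$ export KUBECONFIG=$PWD/inventory/$CLUSTER/artifacts/admin.conf
$ kubectl create deployment hello --image=nginx
$ kubectl expose deployment hello --port=80 --type=NodePort
$ kubectl get svc hello    # note the NodePort, then curl it on any node's public IP
```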

View File

@@ -0,0 +1 @@
../terraform.py

View File

@@ -0,0 +1,62 @@
# Configure the Packet Provider
provider "packet" {
version = "~> 2.0"
}
resource "packet_ssh_key" "k8s" {
count = "${var.public_key_path != "" ? 1 : 0}"
name = "kubernetes-${var.cluster_name}"
public_key = "${chomp(file(var.public_key_path))}"
}
resource "packet_device" "k8s_master" {
depends_on = ["packet_ssh_key.k8s"]
count = "${var.number_of_k8s_masters}"
hostname = "${var.cluster_name}-k8s-master-${count.index+1}"
plan = "${var.plan_k8s_masters}"
facilities = ["${var.facility}"]
operating_system = "${var.operating_system}"
billing_cycle = "${var.billing_cycle}"
project_id = "${var.packet_project_id}"
tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-master", "etcd", "kube-node"]
}
resource "packet_device" "k8s_master_no_etcd" {
depends_on = ["packet_ssh_key.k8s"]
count = "${var.number_of_k8s_masters_no_etcd}"
hostname = "${var.cluster_name}-k8s-master-${count.index+1}"
plan = "${var.plan_k8s_masters_no_etcd}"
facilities = ["${var.facility}"]
operating_system = "${var.operating_system}"
billing_cycle = "${var.billing_cycle}"
project_id = "${var.packet_project_id}"
tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-master"]
}
resource "packet_device" "k8s_etcd" {
depends_on = ["packet_ssh_key.k8s"]
count = "${var.number_of_etcd}"
hostname = "${var.cluster_name}-etcd-${count.index+1}"
plan = "${var.plan_etcd}"
facilities = ["${var.facility}"]
operating_system = "${var.operating_system}"
billing_cycle = "${var.billing_cycle}"
project_id = "${var.packet_project_id}"
tags = ["cluster-${var.cluster_name}", "etcd"]
}
resource "packet_device" "k8s_node" {
depends_on = ["packet_ssh_key.k8s"]
count = "${var.number_of_k8s_nodes}"
hostname = "${var.cluster_name}-k8s-node-${count.index+1}"
plan = "${var.plan_k8s_nodes}"
facilities = ["${var.facility}"]
operating_system = "${var.operating_system}"
billing_cycle = "${var.billing_cycle}"
project_id = "${var.packet_project_id}"
tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-node"]
}

View File

@@ -0,0 +1,15 @@
output "k8s_masters" {
value = "${packet_device.k8s_master.*.access_public_ipv4}"
}
output "k8s_masters_no_etc" {
value = "${packet_device.k8s_master_no_etcd.*.access_public_ipv4}"
}
output "k8s_etcds" {
value = "${packet_device.k8s_etcd.*.access_public_ipv4}"
}
output "k8s_nodes" {
value = "${packet_device.k8s_node.*.access_public_ipv4}"
}

View File

@@ -0,0 +1,32 @@
# your Kubernetes cluster name here
cluster_name = "mycluster"
# Your Packet project ID. See https://support.packet.com/kb/articles/api-integrations
packet_project_id = "Example-API-Token"
# The public SSH key to be uploaded into authorized_keys in bare metal Packet nodes provisioned
# leave this value blank if the public key is already setup in the Packet project
# Terraform will complain if the public key is setup in Packet
public_key_path = "~/.ssh/id_rsa.pub"
# cluster location
facility = "ewr1"
# standalone etcds
number_of_etcd = 0
plan_etcd = "t1.small.x86"
# masters
number_of_k8s_masters = 1
number_of_k8s_masters_no_etcd = 0
plan_k8s_masters = "t1.small.x86"
plan_k8s_masters_no_etcd = "t1.small.x86"
# nodes
number_of_k8s_nodes = 2
plan_k8s_nodes = "t1.small.x86"

View File

@@ -0,0 +1 @@
../../../../inventory/sample/group_vars

View File

@@ -0,0 +1,56 @@
variable "cluster_name" {
default = "kubespray"
}
variable "packet_project_id" {
description = "Your Packet project ID. See https://support.packet.com/kb/articles/api-integrations"
}
variable "operating_system" {
default = "ubuntu_16_04"
}
variable "public_key_path" {
description = "The path of the ssh pub key"
default = "~/.ssh/id_rsa.pub"
}
variable "billing_cycle" {
default = "hourly"
}
variable "facility" {
default = "dfw2"
}
variable "plan_k8s_masters" {
default = "c2.medium.x86"
}
variable "plan_k8s_masters_no_etcd" {
default = "c2.medium.x86"
}
variable "plan_etcd" {
default = "c2.medium.x86"
}
variable "plan_k8s_nodes" {
default = "c2.medium.x86"
}
variable "number_of_k8s_masters" {
default = 0
}
variable "number_of_k8s_masters_no_etcd" {
default = 0
}
variable "number_of_etcd" {
default = 0
}
variable "number_of_k8s_nodes" {
default = 0
}

View File

@@ -149,163 +149,46 @@ def parse_bool(string_form):
raise ValueError('could not convert %r to a bool' % string_form)
@parses('triton_machine')
@calculate_mantl_vars
def triton_machine(resource, module_name):
@parses('packet_device')
def packet_device(resource, tfvars=None):
raw_attrs = resource['primary']['attributes']
name = raw_attrs.get('name')
name = raw_attrs['hostname']
groups = []
attrs = {
'id': raw_attrs['id'],
'dataset': raw_attrs['dataset'],
'disk': raw_attrs['disk'],
'firewall_enabled': parse_bool(raw_attrs['firewall_enabled']),
'image': raw_attrs['image'],
'ips': parse_list(raw_attrs, 'ips'),
'memory': raw_attrs['memory'],
'name': raw_attrs['name'],
'networks': parse_list(raw_attrs, 'networks'),
'package': raw_attrs['package'],
'primary_ip': raw_attrs['primaryip'],
'root_authorized_keys': raw_attrs['root_authorized_keys'],
'state': raw_attrs['state'],
'tags': parse_dict(raw_attrs, 'tags'),
'type': raw_attrs['type'],
'user_data': raw_attrs['user_data'],
'user_script': raw_attrs['user_script'],
# ansible
'ansible_ssh_host': raw_attrs['primaryip'],
'ansible_ssh_port': 22,
'ansible_ssh_user': 'root', # it's "root" on Triton by default
# generic
'public_ipv4': raw_attrs['primaryip'],
'provider': 'triton',
}
# private IPv4
for ip in attrs['ips']:
if ip.startswith('10') or ip.startswith('192.168'): # private IPs
attrs['private_ipv4'] = ip
break
if 'private_ipv4' not in attrs:
attrs['private_ipv4'] = attrs['public_ipv4']
# attrs specific to Mantl
attrs.update({
'consul_dc': _clean_dc(attrs['tags'].get('dc', 'none')),
'role': attrs['tags'].get('role', 'none'),
'ansible_python_interpreter': attrs['tags'].get('python_bin', 'python')
})
# add groups based on attrs
groups.append('triton_image=' + attrs['image'])
groups.append('triton_package=' + attrs['package'])
groups.append('triton_state=' + attrs['state'])
groups.append('triton_firewall_enabled=%s' % attrs['firewall_enabled'])
groups.extend('triton_tags_%s=%s' % item
for item in attrs['tags'].items())
groups.extend('triton_network=' + network
for network in attrs['networks'])
# groups specific to Mantl
groups.append('role=' + attrs['role'])
groups.append('dc=' + attrs['consul_dc'])
return name, attrs, groups
@parses('digitalocean_droplet')
@calculate_mantl_vars
def digitalocean_host(resource, tfvars=None):
raw_attrs = resource['primary']['attributes']
name = raw_attrs['name']
groups = []
attrs = {
'id': raw_attrs['id'],
'image': raw_attrs['image'],
'ipv4_address': raw_attrs['ipv4_address'],
'facilities': parse_list(raw_attrs, 'facilities'),
'hostname': raw_attrs['hostname'],
'operating_system': raw_attrs['operating_system'],
'locked': parse_bool(raw_attrs['locked']),
'metadata': json.loads(raw_attrs.get('user_data', '{}')),
'region': raw_attrs['region'],
'size': raw_attrs['size'],
'ssh_keys': parse_list(raw_attrs, 'ssh_keys'),
'status': raw_attrs['status'],
'tags': parse_list(raw_attrs, 'tags'),
'plan': raw_attrs['plan'],
'project_id': raw_attrs['project_id'],
'state': raw_attrs['state'],
# ansible
'ansible_ssh_host': raw_attrs['ipv4_address'],
'ansible_ssh_port': 22,
'ansible_ssh_user': 'root', # it's always "root" on DO
'ansible_ssh_host': raw_attrs['network.0.address'],
'ansible_ssh_user': 'root', # it's always "root" on Packet
# generic
'public_ipv4': raw_attrs['ipv4_address'],
'private_ipv4': raw_attrs.get('ipv4_address_private',
raw_attrs['ipv4_address']),
'provider': 'digitalocean',
'ipv4_address': raw_attrs['network.0.address'],
'public_ipv4': raw_attrs['network.0.address'],
'ipv6_address': raw_attrs['network.1.address'],
'public_ipv6': raw_attrs['network.1.address'],
'private_ipv4': raw_attrs['network.2.address'],
'provider': 'packet',
}
# attrs specific to Mantl
attrs.update({
'consul_dc': _clean_dc(attrs['metadata'].get('dc', attrs['region'])),
'role': attrs['metadata'].get('role', 'none'),
'ansible_python_interpreter': attrs['metadata'].get('python_bin','python')
})
# add groups based on attrs
groups.append('do_image=' + attrs['image'])
groups.append('do_locked=%s' % attrs['locked'])
groups.append('do_region=' + attrs['region'])
groups.append('do_size=' + attrs['size'])
groups.append('do_status=' + attrs['status'])
groups.extend('do_metadata_%s=%s' % item
for item in attrs['metadata'].items())
groups.append('packet_operating_system=' + attrs['operating_system'])
groups.append('packet_locked=%s' % attrs['locked'])
groups.append('packet_state=' + attrs['state'])
groups.append('packet_plan=' + attrs['plan'])
# groups specific to Mantl
groups.append('role=' + attrs['role'])
groups.append('dc=' + attrs['consul_dc'])
# groups specific to kubespray
groups = groups + attrs['tags']
return name, attrs, groups
@parses('softlayer_virtualserver')
@calculate_mantl_vars
def softlayer_host(resource, module_name):
raw_attrs = resource['primary']['attributes']
name = raw_attrs['name']
groups = []
attrs = {
'id': raw_attrs['id'],
'image': raw_attrs['image'],
'ipv4_address': raw_attrs['ipv4_address'],
'metadata': json.loads(raw_attrs.get('user_data', '{}')),
'region': raw_attrs['region'],
'ram': raw_attrs['ram'],
'cpu': raw_attrs['cpu'],
'ssh_keys': parse_list(raw_attrs, 'ssh_keys'),
'public_ipv4': raw_attrs['ipv4_address'],
'private_ipv4': raw_attrs['ipv4_address_private'],
'ansible_ssh_host': raw_attrs['ipv4_address'],
'ansible_ssh_port': 22,
'ansible_ssh_user': 'root',
'provider': 'softlayer',
}
# attrs specific to Mantl
attrs.update({
'consul_dc': _clean_dc(attrs['metadata'].get('dc', attrs['region'])),
'role': attrs['metadata'].get('role', 'none'),
'ansible_python_interpreter': attrs['metadata'].get('python_bin','python')
})
# groups specific to Mantl
groups.append('role=' + attrs['role'])
groups.append('dc=' + attrs['consul_dc'])
return name, attrs, groups
def openstack_floating_ips(resource):
raw_attrs = resource['primary']['attributes']
attrs = {
@@ -403,281 +286,6 @@ def openstack_host(resource, module_name):
return name, attrs, groups
@parses('aws_instance')
@calculate_mantl_vars
def aws_host(resource, module_name):
name = resource['primary']['attributes']['tags.Name']
raw_attrs = resource['primary']['attributes']
groups = []
attrs = {
'ami': raw_attrs['ami'],
'availability_zone': raw_attrs['availability_zone'],
'ebs_block_device': parse_attr_list(raw_attrs, 'ebs_block_device'),
'ebs_optimized': parse_bool(raw_attrs['ebs_optimized']),
'ephemeral_block_device': parse_attr_list(raw_attrs,
'ephemeral_block_device'),
'id': raw_attrs['id'],
'key_name': raw_attrs['key_name'],
'private': parse_dict(raw_attrs, 'private',
sep='_'),
'public': parse_dict(raw_attrs, 'public',
sep='_'),
'root_block_device': parse_attr_list(raw_attrs, 'root_block_device'),
'security_groups': parse_list(raw_attrs, 'security_groups'),
'subnet': parse_dict(raw_attrs, 'subnet',
sep='_'),
'tags': parse_dict(raw_attrs, 'tags'),
'tenancy': raw_attrs['tenancy'],
'vpc_security_group_ids': parse_list(raw_attrs,
'vpc_security_group_ids'),
# ansible-specific
'ansible_ssh_port': 22,
'ansible_ssh_host': raw_attrs['public_ip'],
# generic
'public_ipv4': raw_attrs['public_ip'],
'private_ipv4': raw_attrs['private_ip'],
'provider': 'aws',
}
# attrs specific to Ansible
if 'tags.sshUser' in raw_attrs:
attrs['ansible_ssh_user'] = raw_attrs['tags.sshUser']
if 'tags.sshPrivateIp' in raw_attrs:
attrs['ansible_ssh_host'] = raw_attrs['private_ip']
# attrs specific to Mantl
attrs.update({
'consul_dc': _clean_dc(attrs['tags'].get('dc', module_name)),
'role': attrs['tags'].get('role', 'none'),
'ansible_python_interpreter': attrs['tags'].get('python_bin','python')
})
# groups specific to Mantl
groups.extend(['aws_ami=' + attrs['ami'],
'aws_az=' + attrs['availability_zone'],
'aws_key_name=' + attrs['key_name'],
'aws_tenancy=' + attrs['tenancy']])
groups.extend('aws_tag_%s=%s' % item for item in attrs['tags'].items())
groups.extend('aws_vpc_security_group=' + group
for group in attrs['vpc_security_group_ids'])
groups.extend('aws_subnet_%s=%s' % subnet
for subnet in attrs['subnet'].items())
# groups specific to Mantl
groups.append('role=' + attrs['role'])
groups.append('dc=' + attrs['consul_dc'])
return name, attrs, groups
@parses('google_compute_instance')
@calculate_mantl_vars
def gce_host(resource, module_name):
name = resource['primary']['id']
raw_attrs = resource['primary']['attributes']
groups = []
# network interfaces
interfaces = parse_attr_list(raw_attrs, 'network_interface')
for interface in interfaces:
interface['access_config'] = parse_attr_list(interface,
'access_config')
for key in interface.keys():
if '.' in key:
del interface[key]
# general attrs
attrs = {
'can_ip_forward': raw_attrs['can_ip_forward'] == 'true',
'disks': parse_attr_list(raw_attrs, 'disk'),
'machine_type': raw_attrs['machine_type'],
'metadata': parse_dict(raw_attrs, 'metadata'),
'network': parse_attr_list(raw_attrs, 'network'),
'network_interface': interfaces,
'self_link': raw_attrs['self_link'],
'service_account': parse_attr_list(raw_attrs, 'service_account'),
'tags': parse_list(raw_attrs, 'tags'),
'zone': raw_attrs['zone'],
# ansible
'ansible_ssh_port': 22,
'provider': 'gce',
}
# attrs specific to Ansible
if 'metadata.ssh_user' in raw_attrs:
attrs['ansible_ssh_user'] = raw_attrs['metadata.ssh_user']
# attrs specific to Mantl
attrs.update({
'consul_dc': _clean_dc(attrs['metadata'].get('dc', module_name)),
'role': attrs['metadata'].get('role', 'none'),
'ansible_python_interpreter': attrs['metadata'].get('python_bin','python')
})
try:
attrs.update({
'ansible_ssh_host': interfaces[0]['access_config'][0]['nat_ip'] or interfaces[0]['access_config'][0]['assigned_nat_ip'],
'public_ipv4': interfaces[0]['access_config'][0]['nat_ip'] or interfaces[0]['access_config'][0]['assigned_nat_ip'],
'private_ipv4': interfaces[0]['address'],
'publicly_routable': True,
})
except (KeyError, ValueError):
attrs.update({'ansible_ssh_host': '', 'publicly_routable': False})
# add groups based on attrs
groups.extend('gce_image=' + disk['image'] for disk in attrs['disks'])
groups.append('gce_machine_type=' + attrs['machine_type'])
groups.extend('gce_metadata_%s=%s' % (key, value)
for (key, value) in attrs['metadata'].items()
if key not in set(['sshKeys']))
groups.extend('gce_tag=' + tag for tag in attrs['tags'])
groups.append('gce_zone=' + attrs['zone'])
if attrs['can_ip_forward']:
groups.append('gce_ip_forward')
if attrs['publicly_routable']:
groups.append('gce_publicly_routable')
# groups specific to Mantl
groups.append('role=' + attrs['metadata'].get('role', 'none'))
groups.append('dc=' + attrs['consul_dc'])
return name, attrs, groups
@parses('vsphere_virtual_machine')
@calculate_mantl_vars
def vsphere_host(resource, module_name):
raw_attrs = resource['primary']['attributes']
network_attrs = parse_dict(raw_attrs, 'network_interface')
network = parse_dict(network_attrs, '0')
ip_address = network.get('ipv4_address', network['ip_address'])
name = raw_attrs['name']
groups = []
attrs = {
'id': raw_attrs['id'],
'ip_address': ip_address,
'private_ipv4': ip_address,
'public_ipv4': ip_address,
'metadata': parse_dict(raw_attrs, 'custom_configuration_parameters'),
'ansible_ssh_port': 22,
'provider': 'vsphere',
}
try:
attrs.update({
'ansible_ssh_host': ip_address,
})
except (KeyError, ValueError):
attrs.update({'ansible_ssh_host': '', })
attrs.update({
'consul_dc': _clean_dc(attrs['metadata'].get('consul_dc', module_name)),
'role': attrs['metadata'].get('role', 'none'),
'ansible_python_interpreter': attrs['metadata'].get('python_bin','python')
})
# attrs specific to Ansible
if 'ssh_user' in attrs['metadata']:
attrs['ansible_ssh_user'] = attrs['metadata']['ssh_user']
groups.append('role=' + attrs['role'])
groups.append('dc=' + attrs['consul_dc'])
return name, attrs, groups
@parses('azure_instance')
@calculate_mantl_vars
def azure_host(resource, module_name):
name = resource['primary']['attributes']['name']
raw_attrs = resource['primary']['attributes']
groups = []
attrs = {
'automatic_updates': raw_attrs['automatic_updates'],
'description': raw_attrs['description'],
'hosted_service_name': raw_attrs['hosted_service_name'],
'id': raw_attrs['id'],
'image': raw_attrs['image'],
'ip_address': raw_attrs['ip_address'],
'location': raw_attrs['location'],
'name': raw_attrs['name'],
'reverse_dns': raw_attrs['reverse_dns'],
'security_group': raw_attrs['security_group'],
'size': raw_attrs['size'],
'ssh_key_thumbprint': raw_attrs['ssh_key_thumbprint'],
'subnet': raw_attrs['subnet'],
'username': raw_attrs['username'],
'vip_address': raw_attrs['vip_address'],
'virtual_network': raw_attrs['virtual_network'],
'endpoint': parse_attr_list(raw_attrs, 'endpoint'),
# ansible
'ansible_ssh_port': 22,
'ansible_ssh_user': raw_attrs['username'],
'ansible_ssh_host': raw_attrs['vip_address'],
}
# attrs specific to mantl
attrs.update({
'consul_dc': attrs['location'].lower().replace(" ", "-"),
'role': attrs['description']
})
# groups specific to mantl
groups.extend(['azure_image=' + attrs['image'],
'azure_location=' + attrs['location'].lower().replace(" ", "-"),
'azure_username=' + attrs['username'],
'azure_security_group=' + attrs['security_group']])
# groups specific to mantl
groups.append('role=' + attrs['role'])
groups.append('dc=' + attrs['consul_dc'])
return name, attrs, groups
@parses('clc_server')
@calculate_mantl_vars
def clc_server(resource, module_name):
raw_attrs = resource['primary']['attributes']
name = raw_attrs.get('id')
groups = []
md = parse_dict(raw_attrs, 'metadata')
attrs = {
'metadata': md,
'ansible_ssh_port': md.get('ssh_port', 22),
'ansible_ssh_user': md.get('ssh_user', 'root'),
'provider': 'clc',
'publicly_routable': False,
}
try:
attrs.update({
'public_ipv4': raw_attrs['public_ip_address'],
'private_ipv4': raw_attrs['private_ip_address'],
'ansible_ssh_host': raw_attrs['public_ip_address'],
'publicly_routable': True,
})
except (KeyError, ValueError):
attrs.update({
'ansible_ssh_host': raw_attrs['private_ip_address'],
'private_ipv4': raw_attrs['private_ip_address'],
})
attrs.update({
'consul_dc': _clean_dc(attrs['metadata'].get('dc', module_name)),
'role': attrs['metadata'].get('role', 'none'),
})
groups.append('role=' + attrs['role'])
groups.append('dc=' + attrs['consul_dc'])
return name, attrs, groups
def iter_host_ips(hosts, ips):
'''Update hosts that have an entry in the floating IP list'''
for host in hosts:

View File

@@ -1,3 +1,4 @@
---
vault_deployment_type: docker
vault_binary_checksum: 3c4d70ba71619a43229e65c67830e30e050eab7a81ac6b28325ff707e5914188
vault_version: 0.10.1

View File

@@ -1,7 +1,7 @@
---
# Stop temporary Vault if it's running (can linger if playbook fails out)
- name: stop vault-temp container
shell: docker stop {{ vault_temp_container_name }} || rkt stop {{ vault_temp_container_name }}
shell: docker stop {{ vault_temp_container_name }}
failed_when: false
register: vault_temp_stop
changed_when: vault_temp_stop is succeeded

View File

@@ -5,17 +5,19 @@
set_fact:
sync_file_dir: "{{ sync_file_path | dirname }}"
sync_file: "{{ sync_file_path | basename }}"
when: sync_file_path is defined and sync_file_path != ''
when:
- sync_file_path is defined
- sync_file_path
- name: "sync_file | Set fact for sync_file_path when undefined"
set_fact:
sync_file_path: "{{ (sync_file_dir, sync_file)|join('/') }}"
when: sync_file_path is not defined or sync_file_path == ''
when: sync_file_path is not defined or not sync_file_path
- name: "sync_file | Set fact for key path name"
set_fact:
sync_file_key_path: "{{ sync_file_path.rsplit('.', 1)|first + '-key.' + sync_file_path.rsplit('.', 1)|last }}"
when: sync_file_key_path is not defined or sync_file_key_path == ''
when: sync_file_key_path is not defined or not sync_file_key_path
- name: "sync_file | Check if {{sync_file_path}} file exists"
stat:
@@ -46,17 +48,17 @@
- name: "sync_file | Remove sync sources with files that do not match sync_file_srcs|first"
set_fact:
_: "{% if inventory_hostname in sync_file_srcs %}{{ sync_file_srcs.remove(inventory_hostname) }}{% endif %}"
when: >-
sync_file_srcs|d([])|length > 1 and
inventory_hostname != sync_file_srcs|first
when:
- sync_file_srcs|d([])|length > 1
- inventory_hostname != sync_file_srcs|first
- name: "sync_file | Remove sync sources with keys that do not match sync_file_srcs|first"
set_fact:
_: "{% if inventory_hostname in sync_file_srcs %}{{ sync_file_srcs.remove(inventory_hostname) }}{% endif %}"
when: >-
sync_file_is_cert|d() and
sync_file_key_srcs|d([])|length > 1 and
inventory_hostname != sync_file_key_srcs|first
when:
- sync_file_is_cert|d()
- sync_file_key_srcs|d([])|length > 1
- inventory_hostname != sync_file_key_srcs|first
- name: "sync_file | Consolidate file and key sources"
set_fact:

View File

@@ -1,45 +0,0 @@
[Unit]
Description=hashicorp vault on rkt
Documentation=https://github.com/hashicorp/vault
Wants=network.target
[Service]
User=root
Restart=on-failure
RestartSec=10s
TimeoutStartSec=5
LimitNOFILE=40000
# Container has the following internal mount points:
# /vault/file/ # File backend storage location
# /vault/logs/ # Log files
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/vault.uuid
ExecStart=/usr/bin/rkt run \
--insecure-options=image \
--volume hosts,kind=host,source=/etc/hosts,readOnly=true \
--mount volume=hosts,target=/etc/hosts \
--volume=volume-vault-file,kind=host,source=/var/lib/vault \
--volume=volume-vault-logs,kind=host,source={{ vault_log_dir }} \
--volume=vault-cert-dir,kind=host,source={{ vault_cert_dir }} \
--mount=volume=vault-cert-dir,target={{ vault_cert_dir }} \
--volume=vault-conf-dir,kind=host,source={{ vault_config_dir }} \
--mount=volume=vault-conf-dir,target={{ vault_config_dir }} \
--volume=vault-secrets-dir,kind=host,source={{ vault_secrets_dir }} \
--mount=volume=vault-secrets-dir,target={{ vault_secrets_dir }} \
--volume=vault-roles-dir,kind=host,source={{ vault_roles_dir }} \
--mount=volume=vault-roles-dir,target={{ vault_roles_dir }} \
--volume=etcd-cert-dir,kind=host,source={{ etcd_cert_dir }} \
--mount=volume=etcd-cert-dir,target={{ etcd_cert_dir }} \
docker://{{ vault_image_repo }}:{{ vault_image_tag }} \
--uuid-file-save=/var/run/vault.uuid \
--name={{ vault_container_name }} \
--net=host \
--caps-retain=CAP_IPC_LOCK \
--exec vault -- \
server \
--config={{ vault_config_dir }}/config.json
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/vault.uuid
[Install]
WantedBy=multi-user.target

View File

@@ -93,6 +93,6 @@ Potential Work
- Change the Vault role to not run certain tasks when ``root_token`` and
``unseal_keys`` are not present. Alternatively, allow user input for these
values when missing.
- Add the ability to start temp Vault with Host, Rkt, or Docker
- Add the ability to start temp Vault with Host or Docker
- Add a dynamic way to change out the backend role creation during Bootstrap,
so other services can be used (such as Consul)

docs/_sidebar.md Normal file
View File

@@ -0,0 +1,40 @@
* [Readme](/)
* [Comparisons](/docs/comparisons.md)
* [Getting started](/docs/getting-started.md)
* [Ansible](docs/ansible.md)
* [Variables](/docs/vars.md)
* [Ansible](/docs/ansible.md)
* Operations
* [Integration](docs/integration.md)
* [Upgrades](/docs/upgrades.md)
* [HA Mode](docs/ha-mode.md)
* [Large deployments](docs/large-deployments.md)
* CNI
* [Calico](docs/calico.md)
* [Contiv](docs/contiv.md)
* [Flannel](docs/flannel.md)
* [Kube Router](docs/kube-router.md)
* [Weave](docs/weave.md)
* [Multus](docs/multus.md)
* [Cloud providers](docs/cloud.md)
* [AWS](docs/aws.md)
* [Azure](docs/azure.md)
* [OpenStack](/docs/openstack.md)
* [Packet](/docs/packet.md)
* [vSphere](/docs/vsphere.md)
* Operating Systems
* [Atomic](docs/atomic.md)
* [Debian](docs/debian.md)
* [Coreos](docs/coreos.md)
* [OpenSUSE](docs/opensuse.md)
* Advanced
* [Proxy](/docs/proxy.md)
* [Downloads](docs/downloads.md)
* [CRI-O](docs/cri-o.md)
* [Netcheck](docs/netcheck.md)
* [DNS Stack](docs/dns-stack.md)
* [Kubernetes reliability](docs/kubernetes-reliability.md)
* Developers
* [Test cases](docs/test_cases.md)
* [Vagrant](docs/vagrant.md)
* [Roadmap](docs/roadmap.md)

View File

@@ -35,12 +35,12 @@ Below is a complete inventory example:
```
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_ssh_host=95.54.0.12 ip=10.3.0.1
node2 ansible_ssh_host=95.54.0.13 ip=10.3.0.2
node3 ansible_ssh_host=95.54.0.14 ip=10.3.0.3
node4 ansible_ssh_host=95.54.0.15 ip=10.3.0.4
node5 ansible_ssh_host=95.54.0.16 ip=10.3.0.5
node6 ansible_ssh_host=95.54.0.17 ip=10.3.0.6
node1 ansible_host=95.54.0.12 ip=10.3.0.1
node2 ansible_host=95.54.0.13 ip=10.3.0.2
node3 ansible_host=95.54.0.14 ip=10.3.0.3
node4 ansible_host=95.54.0.15 ip=10.3.0.4
node5 ansible_host=95.54.0.16 ip=10.3.0.5
node6 ansible_host=95.54.0.17 ip=10.3.0.6
[kube-master]
node1
@@ -70,7 +70,7 @@ The group variables to control main deployment options are located in the direct
Optional variables are located in the `inventory/sample/group_vars/all.yml`.
Mandatory variables that are common for at least one role (or a node group) can be found in the
`inventory/sample/group_vars/k8s-cluster.yml`.
There are also role vars for docker, rkt, kubernetes preinstall and master roles.
There are also role vars for docker, kubernetes preinstall and master roles.
According to the [ansible docs](http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
those cannot be overridden from the group vars. In order to override, one should use
the `-e` runtime flag (the simplest way) or other layers described in the docs.
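For example, a single role default could be pinned for one run roughly like this (assuming `docker_version` is the role default you want to override; check the role's defaults file for the exact variable name):
```
ansible-playbook -i inventory/sample/hosts.ini cluster.yml -e docker_version=18.09
```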
@@ -110,7 +110,6 @@ The following tags are defined in playbooks:
| calico | Network plugin Calico
| canal | Network plugin Canal
| cloud-provider | Cloud-provider related tasks
| dnsmasq | Configuring DNS stack for hosts and K8s apps
| docker | Configuring docker for hosts
| download | Fetching container images to a delegate host
| etcd | Configuring etcd cluster
@@ -152,11 +151,11 @@ Example command to filter and apply only DNS configuration tasks and skip
everything else related to host OS configuration and downloading images of containers:
```
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,dnsmasq,facts --skip-tags=download,bootstrap-os
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os
```
And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:
```
ansible-playbook -i inventory/sample/hosts.ini -e dnsmasq_dns_server='' cluster.yml --tags resolvconf
ansible-playbook -i inventory/sample/hosts.ini -e dns_mode='none' cluster.yml --tags resolvconf
```
And this prepares all container images locally (at the ansible runner node) without installing
or upgrading related stuff or trying to upload container to K8s cluster nodes:
@@ -176,7 +175,8 @@ simply add a line to your inventory, where you have to replace x.x.x.x with the
bastion host.
```
bastion ansible_ssh_host=x.x.x.x
[bastion]
bastion ansible_host=x.x.x.x
```
For more information about Ansible and bastion hosts, read

docs/arch.md Normal file
View File

@@ -0,0 +1,16 @@
## Architecture compatibility
The following table shows the impact of the CPU architecture on compatible features:
- amd64: Cluster using only x86/amd64 CPUs
- arm64: Cluster using only arm64 CPUs
- amd64 + arm64: Cluster with a mix of x86/amd64 and arm64 CPUs
| kube_network_plugin | amd64 | arm64 | amd64 + arm64 |
| ------------------- | ----- | ----- | ------------- |
| Calico | Y | Y | Y |
| Weave | Y | Y | Y |
| Flannel | Y | N | N |
| Canal | Y | N | N |
| Cilium | Y | N | N |
| Contiv | Y | N | N |
| kube-router | Y | N | N |

View File

@@ -51,6 +51,25 @@ This is the AppId from the last command
azure\_aad\_client\_id must be set to the AppId, azure\_aad\_client\_secret is your chosen secret.
#### azure\_loadbalancer\_sku
Sku of Load Balancer and Public IP. Candidate values are: basic and standard.
#### azure\_exclude\_master\_from\_standard\_lb
azure\_exclude\_master\_from\_standard\_lb excludes master nodes from the `standard` load balancer.
#### azure\_disable\_outbound\_snat
azure\_disable\_outbound\_snat disables the outbound SNAT for public load balancer rules. It should only be set when azure\_loadbalancer\_sku is set to `standard`.
#### azure\_primary\_availability\_set\_name
(Optional) The name of the availability set that should be used as the load balancer backend. If this is set, the Azure
cloudprovider will only add nodes from that availability set to the load balancer backend pool. If this is not set, and
multiple agent pools (availability sets) are used, then the cloudprovider will try to add all nodes to a single backend
pool, which is forbidden. In other words, if you use multiple agent pools (availability sets), you MUST set this field.
#### azure\_use\_instance\_metadata
Use instance metadata service where possible
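Taken together, the Azure-specific variables described above might be set in your inventory group_vars roughly as follows (a sketch with placeholder values, not a complete Azure configuration):
```
azure_loadbalancer_sku: standard
azure_exclude_master_from_standard_lb: true
azure_disable_outbound_snat: false
# only needed when multiple agent pools (availability sets) are in use
azure_primary_availability_set_name: "my-agent-pool-as"
azure_use_instance_metadata: true
```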
## Provisioning Azure with Resource Group Templates
You'll find Resource Group Templates and scripts to provision the required infrastructure to Azure in [*contrib/azurerm*](../contrib/azurerm/README.md)

View File

@@ -67,6 +67,15 @@ To re-define you need to edit the inventory and add a group variable `calico_net
calico_network_backend: none
```
##### Optional : Define the default pool CIDR
By default, `kube_pods_subnet` is used as the IP range CIDR for the default IP Pool.
In some cases you may want to add several pools and not have them considered by Kubernetes as external (which means they must be within or equal to the range defined in `kube_pods_subnet`). This starts with the default IP Pool, whose IP range CIDR can be defined in group_vars (k8s-cluster/k8s-net-calico.yml):
```
calico_pool_cidr: 10.233.64.0/20
```
##### Optional : BGP Peering with border routers
In some cases you may want to route the pods subnet and so NAT is not needed on the nodes.
@@ -86,6 +95,12 @@ In order to define global peers, the `peers` variable can be defined in group_va
In order to define peers on a per node basis, the `peers` variable must be defined in hostvars.
NB: Ansible's `hash_behaviour` defaults to "replace", so defining both global and per-node peers leaves you with only the per-node peers. If both global and per-node peers are meant to apply, the global peers have to be repeated in hostvars for each host (alongside the per-node peers).
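A sketch of a global `peers` definition in group_vars, following the commented examples in the sample inventory (router IDs and AS numbers are placeholders):
```
peers:
  - router_id: "10.99.0.34"
    as: "65xxx"
  - router_id: "10.99.0.35"
    as: "65xxx"
```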
Since Calico 3.4, Calico supports advertising Kubernetes service cluster IPs over BGP, just as it advertises pod IPs.
This can be enabled by setting the following variable as follows in group_vars (k8s-cluster/k8s-net-calico.yml):
```
calico_advertise_cluster_ips: true
```
##### Optional : Define global AS number
Optional parameter `global_as_num` defines Calico global AS number (`/calico/bgp/v1/global/as_num` etcd key).

View File

@@ -4,7 +4,7 @@
# as: "65xxx"
# - router_id: "10.99.0.35"
# as: "65xxx"
#
# loadbalancer_apiserver:
# address: "10.99.0.44"
# port: "8383"

View File

@@ -4,7 +4,7 @@
# as: "65xxx"
# - router_id: "10.99.0.3"
# as: "65xxx"
#
# loadbalancer_apiserver:
# address: "10.99.0.21"
# port: "8383"

docs/cni.md Normal file
View File

@@ -0,0 +1,10 @@
CNI
==============
This network plugin only unpacks the CNI plugins version `cni_version` into `/opt/cni/bin` and instructs kubelet to use CNI, that is, it adds the following CLI params:
`KUBELET_NETWORK_PLUGIN="--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"`
Its intended usage is for custom CNI configuration, e.g. manual routing tables + bridge + loopback CNI plugins outside the Kubespray scope. Furthermore, it can be used for CNI plugins not supported by Kubespray, which you can install afterwards.
You are required to fill `/etc/cni/net.d` with valid CNI configuration after using kubespray.
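As an illustration, a minimal configuration using the standard `bridge` and `host-local` plugins from the unpacked bundle might be dropped in as `/etc/cni/net.d/10-mynet.conf` (a sketch; adjust the subnet to your cluster):
```
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.233.64.0/18"
  }
}
```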

View File

@@ -19,7 +19,9 @@ on. Had it belonged to the new [operators world](https://coreos.com/blog/introdu
it may have been named a "Kubernetes cluster operator". Kubespray however,
does generic configuration management tasks from the "OS operators" ansible
world, plus some initial K8s clustering (with networking plugins included) and
control plane bootstrapping. Kubespray [strives](https://github.com/kubernetes-sigs/kubespray/issues/553)
to adopt kubeadm as a tool in order to consume life cycle management domain
knowledge from it and offload generic OS configuration things from it, which
hopefully benefits both sides.
control plane bootstrapping.
Kubespray supports `kubeadm` for cluster creation since v2.3
(and deprecated non-kubeadm deployment starting from v2.8)
in order to consume life cycle management domain knowledge from it
and offload generic OS configuration things from it, which hopefully benefits both sides.

View File

@@ -1,31 +1,28 @@
cri-o
CRI-O
===============
cri-o is container developed by kubernetes project.
Currently, only basic function is supported for cri-o.
[CRI-O] is a lightweight container runtime for Kubernetes.
Kubespray supports basic functionality for using CRI-O as the default container runtime in a cluster.
* cri-o is supported kubernetes 1.11.1 or later.
* helm and other feature may not be supported due to docker dependency.
* scale.yml and upgrade-cluster.yml are not supported.
* Kubernetes supports CRI-O on v1.11.1 or later.
* Helm and other tools may not function as normal due to dependency on Docker.
* `scale.yml` and `upgrade-cluster.yml` are not supported on clusters using CRI-O.
helm and other feature may not be supported due to docker dependency.
Use cri-o instead of docker, set following variable:
_To use CRI-O instead of Docker, set the following variables:_
#### all.yml
```
kubeadm_enabled: true
...
```yaml
download_container: false
skip_downloads: false
```
#### k8s-cluster.yml
```
```yaml
etcd_deployment_type: host
kubelet_deployment_type: host
container_manager: crio
```
[CRI-O]: https://cri-o.io/
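Once the cluster is up, one quick way to confirm the runtime switch (assuming kubectl access as configured elsewhere in the docs):
```
kubectl get nodes -o wide    # the CONTAINER-RUNTIME column should report cri-o://<version>
```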

View File

@@ -20,10 +20,6 @@ ndots value to be used in ``/etc/resolv.conf``
It is important to note that multiple search domains combined with high ``ndots``
values lead to poor performance of DNS stack, so please choose it wisely.
The dnsmasq DaemonSet can accept lower ``ndots`` values and return NXDOMAIN
replies for [bogus internal FQDNS](https://github.com/kubernetes/kubernetes/issues/19634#issuecomment-253948954)
before it even hits the kubedns app. This enables dnsmasq to serve as a
protective, but still recursive resolver in front of kubedns.
#### searchdomains
Custom search domains to be added in addition to the cluster search domains (``default.svc.{{ dns_domain }}, svc.{{ dns_domain }}``).
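For example (domains are placeholders):

```yaml
searchdomains:
  - example.com
  - lab.example.com
```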
@@ -41,8 +37,7 @@ is not set, a default resolver is chosen (depending on cloud provider or 8.8.8.8
#### upstream_dns_servers
DNS servers to be added *after* the cluster DNS. Used by all ``resolvconf_mode`` modes. These serve as backup
DNS servers in early cluster deployment when no cluster DNS is available yet. These are also added as upstream
DNS servers used by ``dnsmasq`` (when deployed with ``dns_mode: dnsmasq_kubedns``).
DNS servers in early cluster deployment when no cluster DNS is available yet.
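For example (any resolvers reachable from the nodes will do):

```yaml
upstream_dns_servers:
  - 8.8.8.8
  - 1.1.1.1
```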
DNS modes supported by Kubespray
============================
@@ -52,32 +47,20 @@ You can modify how Kubespray sets up DNS for your cluster with the variables ``d
## dns_mode
``dns_mode`` configures how Kubespray will setup cluster DNS. There are four modes available:
#### dnsmasq_kubedns
This installs an additional dnsmasq DaemonSet which gives more flexibility and lifts some
limitations (e.g. number of nameservers). Kubelet is instructed to use dnsmasq instead of kubedns/skydns.
It is configured to forward all DNS queries belonging to cluster services to kubedns/skydns. All
other queries are forwardet to the nameservers found in ``upstream_dns_servers`` or ``default_resolver``
#### kubedns (default)
This does not install the dnsmasq DaemonSet and instructs kubelet to directly use kubedns/skydns for
all queries.
#### coredns
This does not install the dnsmasq DaemonSet and instructs kubelet to directly use CoreDNS for
all queries.
#### coredns (default)
This installs CoreDNS as the default cluster DNS for all queries.
#### coredns_dual
This does not install the dnsmasq DaemonSet and instructs kubelet to directly use CoreDNS for
all queries. It will also deploy a secondary CoreDNS stack
This installs CoreDNS as the default cluster DNS for all queries, plus a secondary CoreDNS stack.
#### manual
This does not install dnsmasq or kubedns, but allows you to specify
This does not install coredns, but allows you to specify
`manual_dns_server`, which will be configured on nodes for handling Pod DNS.
Use this method if you plan to install your own DNS server in the cluster after
initial deployment.
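For example (the address is a placeholder for your own DNS server):

```yaml
dns_mode: manual
manual_dns_server: 10.0.0.53
```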
#### none
This does not install any of dnsmasq and kubedns/skydns. This basically disables cluster DNS completely and
This does not install any DNS solution at all. This basically disables cluster DNS completely and
leaves you with a non functional cluster.
## resolvconf_mode
@@ -103,7 +86,7 @@ The following dns options are added to the docker daemon
* attempts:2
For normal PODs, k8s will ignore these options and setup its own DNS settings for the PODs, taking
the --cluster_dns (either dnsmasq or kubedns, depending on dns_mode) kubelet option into account.
the --cluster_dns (either coredns or coredns_dual, depending on dns_mode) kubelet option into account.
For ``hostNetwork: true`` PODs however, k8s will let docker setup DNS settings. Docker containers which
are not started/managed by k8s will also use these docker options.
@@ -115,7 +98,7 @@ servers, which in turn will forward queries to the system nameserver if required
#### host_resolvconf
This activates the classic Kubespray behaviour that modifies the hosts ``/etc/resolv.conf`` file and dhclient
configuration to point to the cluster dns server (either dnsmasq or kubedns, depending on dns_mode).
configuration to point to the cluster dns server (either coredns or coredns_dual, depending on dns_mode).
As cluster DNS is not available on early deployment stage, this mode is split into 2 stages. In the first
stage (``dns_early: true``), ``/etc/resolv.conf`` is configured to use the DNS servers found in ``upstream_dns_servers``
@@ -130,6 +113,11 @@ Does nothing regarding ``/etc/resolv.conf``. This leaves you with a cluster that
The only exception is that ``hostNetwork: true`` PODs and non-k8s managed containers will not be able to resolve
cluster service names.
## Nodelocal DNS cache
Setting ``enable_nodelocaldns`` to ``true`` will make pods reach out to the dns (core-dns) caching agent running on the same node, thereby avoiding iptables DNAT rules and connection tracking. The local caching agent will query kube-dns / core-dns (depending on what main DNS plugin is configured in your cluster) for cache misses of cluster hostnames (``cluster.local`` suffix by default).
More information on the rationale behind this implementation can be found [here](https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0030-nodelocal-dns-cache.md).
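A minimal sketch of enabling it (the listen address shown is an assumption; check the defaults shipped with your Kubespray version):

```yaml
enable_nodelocaldns: true
# nodelocaldns_ip: 169.254.25.10  # assumed default link-local address used by the caching agent
```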
Limitations
-----------


@@ -6,7 +6,7 @@ Building your own inventory
Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. There is
an example inventory located
[here](https://github.com/kubernetes-sigs/kubespray/blob/master/inventory/sample/hosts.ini).
[here](https://github.com/kubernetes-sigs/kubespray/blob/master/inventory/sample/inventory.ini).
You can use an
[inventory generator](https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/inventory_builder/inventory.py)
@@ -20,7 +20,9 @@ Example inventory generator usage:
cp -r inventory/sample inventory/mycluster
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
Then use `inventory/mycluster/hosts.yml` as inventory file.
Starting custom deployment
--------------------------
@@ -30,7 +32,7 @@ and start the deployment:
**IMPORTANT**: Edit my\_inventory/group\_vars/\*.yaml to override data vars:
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b -v \
ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml -b -v \
--private-key=~/.ssh/private_key
See more details in the [ansible guide](ansible.md).
@@ -43,7 +45,7 @@ You may want to add worker, master or etcd nodes to your existing cluster. This
- Add the new worker node to your inventory in the appropriate group (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Run the ansible-playbook command, substituting `cluster.yml` for `scale.yml`:
ansible-playbook -i inventory/mycluster/hosts.ini scale.yml -b -v \
ansible-playbook -i inventory/mycluster/hosts.yml scale.yml -b -v \
--private-key=~/.ssh/private_key
Remove nodes
@@ -53,25 +55,15 @@ You may want to remove **worker** nodes to your existing cluster. This can be do
Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
ansible-playbook -i inventory/mycluster/hosts.yml remove-node.yml -b -v \
--private-key=~/.ssh/private_key
We support two ways to select the nodes:
- Use `--extra-vars "node=<nodename>,<nodename2>"` to select the node you want to delete.
Use `--extra-vars "node=<nodename>,<nodename2>"` to select the node you want to delete.
```
ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
ansible-playbook -i inventory/mycluster/hosts.yml remove-node.yml -b -v \
--private-key=~/.ssh/private_key \
--extra-vars "node=nodename,nodename2"
```
or
- Use `--limit nodename,nodename2` to select the node
```
ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
--private-key=~/.ssh/private_key \
--limit nodename,nodename2"
```
Connecting to Kubernetes
------------------------


@@ -24,7 +24,7 @@ where an external LB or virtual IP management is inconvenient. This option is
configured by the variable `loadbalancer_apiserver_localhost` (defaults to
`True`. Or `False`, if there is an external `loadbalancer_apiserver` defined).
You may also define the port the local internal loadbalancer uses by changing,
`nginx_kube_apiserver_port`. This defaults to the value of
`loadbalancer_apiserver_port`. This defaults to the value of
`kube_apiserver_port`. It is also important to note that Kubespray will only
configure kubelet and kube-proxy on non-master nodes to use the local internal
loadbalancer.
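A sketch of the relevant variables, assuming the defaults described above:

```yaml
loadbalancer_apiserver_localhost: true
# defaults to kube_apiserver_port; override only if you need a different local port
loadbalancer_apiserver_port: 6443
```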
@@ -114,7 +114,7 @@ Where:
* `ext` - Externally load balanced VIP:port and FQDN, not managed by Kubespray;
* `lc` - localhost;
* `bip` - a custom bind IP or localhost for the default bind IP '0.0.0.0';
* `nsp` - nginx secure port, `nginx_kube_apiserver_port`, defers to `sp`;
* `nsp` - nginx secure port, `loadbalancer_apiserver_port`, defers to `sp`;
* `sp` - secure port, `kube_apiserver_port`;
* `lp` - LB port, `loadbalancer_apiserver.port`, defers to the secure port;
* `ip` - the node IP, defers to the ansible IP;


@@ -62,34 +62,6 @@ You can change the default configuration by overriding `kube_router_...` variabl
these are named to follow `kube-router` command-line options as per
<https://www.kube-router.io/docs/user-guide/#try-kube-router-with-cluster-installers>.
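For illustration, a sketch of such overrides (only `kube_router_run_service_proxy` appears elsewhere on this page; the other names are assumed to mirror the corresponding kube-router CLI flags):

```yaml
kube_router_run_router: true
kube_router_run_firewall: true
kube_router_run_service_proxy: false
```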
## Caveats
### kubeadm_enabled: true
If you want to set `kube-router` to replace `kube-proxy`
(`--run-service-proxy=true`) while using `kubeadm_enabled`,
then 'kube-proxy` DaemonSet will be removed *after* kubeadm finishes
running, as it's not possible to skip kube-proxy install in kubeadm flags
and/or config, see https://github.com/kubernetes/kubeadm/issues/776.
Given above, if `--run-service-proxy=true` is needed it would be
better to void `kubeadm_enabled` i.e. set:
```
kubeadm_enabled: false
kube_router_run_service_proxy: true
```
If for some reason you do want/need to set `kubeadm_enabled`, removing
it afterwards behave better if kube-proxy is set to ipvs mode, i.e. set:
```
kubeadm_enabled: true
kube_router_run_service_proxy: true
kube_proxy_mode: ipvs
```
## Advanced BGP Capabilities
https://github.com/cloudnativelabs/kube-router#advanced-bgp-capabilities


@@ -15,8 +15,8 @@ For a large scaled deployments, consider the following configuration changes:
load on a delegate (the first K8s master node) then retrying failed
push or download operations.
* Tune parameters for DNS related applications (dnsmasq daemon set, kubedns
replication controller). Those are ``dns_replicas``, ``dns_cpu_limit``,
* Tune parameters for DNS-related applications.
Those are ``dns_replicas``, ``dns_cpu_limit``,
``dns_cpu_requests``, ``dns_memory_limit``, ``dns_memory_requests``.
Please note that limits must always be greater than or equal to requests.
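As an illustration of those DNS knobs (values are placeholders, not sizing advice):

```yaml
dns_replicas: 2
dns_cpu_requests: 100m
dns_cpu_limit: 300m
dns_memory_requests: 70Mi
dns_memory_limit: 170Mi
```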


@@ -1,4 +1,4 @@
openSUSE Leap 42.3 and Tumbleweed
openSUSE Leap 15.0 and Tumbleweed
===============
openSUSE Leap installation Notes:

docs/packet.md (new file, +97 lines)

@@ -0,0 +1,97 @@
Packet
===============
Kubespray provides support for bare metal deployments using the [Packet bare metal cloud](http://www.packet.com).
Deploying upon bare metal allows Kubernetes to run at locations where an existing public or private cloud might not exist, such
as cell towers or edge colocated installations. The deployment mechanism used by Kubespray for Packet is similar to that used for
AWS and OpenStack clouds (notably using Terraform to deploy the infrastructure). Terraform uses the Packet provider plugin
to provision and configure hosts which are then used by the Kubespray Ansible playbooks. The Ansible inventory is generated
dynamically from the Terraform state file.
## Local Host Configuration
To perform this installation, you will need a localhost to run Terraform/Ansible (laptop, VM, etc) and an account with Packet.
In this example, we're using an m1.large CentOS 7 OpenStack VM as the localhost to kick off the Kubernetes installation.
You'll need Ansible, Git, and PIP.
```bash
sudo yum install epel-release
sudo yum install ansible
sudo yum install git
sudo yum install python-pip
```
## Playbook SSH Key
An SSH key is needed by Kubespray/Ansible to run the playbooks.
This key is installed into the bare metal hosts during the Terraform deployment.
You can generate a new key or use an existing one.
```bash
ssh-keygen -f ~/.ssh/id_rsa
```
## Install Terraform
Terraform is required to deploy the bare metal infrastructure. The steps below are for installing on CentOS 7.
[More terraform installation options are available.](https://learn.hashicorp.com/terraform/getting-started/install.html)
Grab the latest version of Terraform and install it.
```bash
# fetch the latest linux_amd64 release and install the binary
sudo yum install unzip jq
TERRAFORM_VERSION=$(curl -s https://checkpoint-api.hashicorp.com/v1/check/terraform | jq -r -M '.current_version')
curl -LO "https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip"
sudo unzip "terraform_${TERRAFORM_VERSION}_linux_amd64.zip" -d /usr/local/bin/
```
## Download Kubespray
Pull over Kubespray and setup any required libraries.
```bash
git clone https://github.com/kubernetes-sigs/kubespray
cd kubespray
sudo pip install -r requirements.txt
```
## Cluster Definition
In this example, a new cluster called "alpha" will be created.
```bash
cp -LRp contrib/terraform/packet/sample-inventory inventory/alpha
cd inventory/alpha/
ln -s ../../contrib/terraform/packet/hosts
```
Details about the cluster, such as the name, as well as the authentication tokens and project ID
for Packet need to be defined. To find these values see [Packet API Integration](https://support.packet.com/kb/articles/api-integrations)
```bash
vi cluster.tf
```
* cluster_name = alpha
* packet_project_id = ABCDEFGHIJKLMNOPQRSTUVWXYZ123456
* public_key_path = ~/.ssh/id_rsa.pub
## Deploy Bare Metal Hosts
Initializing Terraform will pull down any necessary plugins/providers.
```bash
terraform init ../../contrib/terraform/packet/
```
Run Terraform to deploy the hardware.
```bash
terraform apply -var-file=cluster.tf ../../contrib/terraform/packet
```
## Run Kubespray Playbooks
With the bare metal infrastructure deployed, Kubespray can now install Kubernetes and set up the cluster.
```bash
ansible-playbook --become -i inventory/alpha/hosts cluster.yml
```
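Once the playbook finishes, a quick sanity check can be run against the cluster (assuming root SSH access to the first master node; replace the placeholder with an IP from the Terraform output):

```bash
ssh root@<first-master-ip> kubectl get nodes -o wide
```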

Some files were not shown because too many files have changed in this diff.