Compare commits

...

801 Commits

Author SHA1 Message Date
Brad Beam
72a0d78b3c Merge pull request #1585 from mattymo/canal_upgrade
Fix upgrade for canal and apiserver cert
2017-08-29 18:45:21 -05:00
Matthew Mosesohn
13d08af054 Fix upgrade for canal and apiserver cert
Fixes #1573
2017-08-29 22:08:30 +01:00
Brad Beam
80a7ae9845 Merge pull request #1581 from 2ffs2nns/update-calico-version
update calico version
2017-08-29 07:48:44 -05:00
Eric Hoffmann
6c30a7b2eb update calico version
update calico releases link
2017-08-28 16:23:51 -07:00
Matthew Mosesohn
76b72338da Add CNI config for rkt kubelet (#1579) 2017-08-28 21:11:01 +03:00
Chad Swenson
a39e78d42d Initial version of Flannel using CNI (#1486)
* Updates Controller Manager/Kubelet with Flannel's required configuration for CNI
* Removes old Flannel installation
* Install CNI enabled Flannel DaemonSet/ConfigMap/CNI bins and config (with portmap plugin) on host
* Uses RBAC if enabled
* Fixed an issue that could occur if br_netfilter is not a module and net.bridge.bridge-nf-call-iptables sysctl was not set
2017-08-25 10:07:50 +03:00
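A minimal sketch of the br_netfilter handling the commit above describes, assuming a systemd host; the actual role uses Ansible tasks rather than this shell form:

```
# If br_netfilter is built as a module, load it first; if it is built into
# the kernel, the sysctl can be set directly.
if modinfo br_netfilter >/dev/null 2>&1; then
    modprobe br_netfilter
fi
sysctl -w net.bridge.bridge-nf-call-iptables=1
```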
Brad Beam
4550dccb84 Fixing reference to vault leader url (#1569) 2017-08-24 23:21:39 +03:00
Hassan Zamani
01ce09f343 Add feature_gates var for customizing Kubernetes feature gates (#1520) 2017-08-24 23:18:38 +03:00
Brad Beam
71dca67ca2 Merge pull request #1508 from tmjd/update-calico-2-4-0
Update Calico to 2.4.1 release.
2017-08-24 14:57:29 -05:00
Hans Kristian Flaatten
327f9baccf Update supported component versions in README.md (#1555) 2017-08-24 21:36:53 +03:00
Yuki KIRII
a98b866a66 Verify if br_netfilter module exists (#1492) 2017-08-24 17:47:32 +03:00
Xavier Mehrenberger
3aabba7535 Remove discontinued option --reconcile-cidr if kube_network_plugin=="cloud" (#1568) 2017-08-24 17:01:30 +03:00
Mohamed Mehany
c22cfa255b Added private key file to ssh bastion conf (#1563)
* Added private key file to ssh bastion conf

* Used regular if condition instead of inline conditional
2017-08-24 17:00:45 +03:00
Brad Beam
af211b3d71 Merge pull request #1567 from mattymo/tolerations
Enable scheduling of critical pods and network plugins on master
2017-08-24 08:40:41 -05:00
Matthew Mosesohn
6bb3463e7c Enable scheduling of critical pods and network plugins on master
Added toleration to DNS, netchecker, fluentd, canal, and
calico policy.

Also small fixes to make yamllint pass.
2017-08-24 10:41:17 +01:00
Brad Beam
8b151d12b9 Adding yamllinter to ci steps (#1556)
* Adding yaml linter to ci check

* Minor linting fixes from yamllint

* Changing CI to install python pkgs from requirements.txt

- adding in a secondary requirements.txt for tests
- moving yamllint to tests requirements
2017-08-24 12:09:52 +03:00
Ian Lewis
ecb6dc3679 Register standalone master w/ taints (#1426)
If Kubernetes > 1.6 register standalone master nodes w/ a
node-role.kubernetes.io/master=:NoSchedule taint to allow
for more flexible scheduling rather than just marking unschedulable.
2017-08-23 16:44:11 +03:00
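For reference, a hedged kubectl equivalent of the taint this change registers on standalone masters (placeholder node name, Kubernetes >= 1.6):

```
# Apply the master taint and verify it; critical pods then need a matching
# toleration to land on the node (see the toleration commit further up).
kubectl taint nodes <master-node> node-role.kubernetes.io/master=:NoSchedule
kubectl describe node <master-node> | grep -i taints
```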
riverzhang
49a223a17d Update elrepo-release rpm version (#1554) 2017-08-23 09:54:51 +03:00
Brad Beam
e5cfdc648c Adding ability to override max ttl (#1559)
Prior to this it would fail because we didn't set max ttl for vault temp
2017-08-23 09:54:01 +03:00
Erik Stidham
9f9f70aade Update Calico to 2.4.1 release.
- Switched Calico images to be pulled from quay.io
- Updated Canal too
2017-08-21 09:33:12 -05:00
Bogdan Dobrelya
e91c04f586 Merge pull request #1553 from mattymo/kubelet-deployment-doc
Add note to docs about kubelet deployment type changes
2017-08-21 11:42:23 +02:00
Matthew Mosesohn
277fa6c12d Add note to docs about kubelet deployment type changes 2017-08-21 09:13:59 +01:00
Matthew Mosesohn
ca3050ec3d Update to Kubernetes v1.7.3 (#1549)
Change kubelet deploy mode to host
Enable cri and qos per cgroup for kubelet
Update CoreOS images
Add upgrade hook for switching kubelet deployment from docker to host.
Bump machine type for ubuntu-rkt-sep
2017-08-21 10:53:49 +03:00
Bogdan Dobrelya
1b3ced152b Merge pull request #1544 from bogdando/rpm_spec
[WIP] Support pbr builds and prepare for RPM packaging as the ansible-kubespray artifact
2017-08-21 09:13:59 +02:00
Vijay Katam
97031f9133 Make epel-release install configurable (#1497) 2017-08-20 14:03:10 +03:00
Vijay Katam
c92506e2e7 Add calico variable that enables ignoring Kernel's RPF Setting (#1493) 2017-08-20 14:01:09 +03:00
Kevin Lefevre
65a9772adf Add OpenStack LBaaS support (#1506) 2017-08-20 13:59:15 +03:00
Anton
1e07ee6cc4 etcd_compaction_retention every 8 hour (#1527) 2017-08-20 13:55:48 +03:00
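A hedged illustration of the etcd flag the new variable maps to; other flags omitted, and the value is interpreted as hours on etcd 3.0/3.1:

```
# Keep 8 hours of key history, auto-compacting the rest.
etcd --auto-compaction-retention=8
```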
Abdelsalam Abbas
01a130273f fix issues with if condition (#1537) 2017-08-20 13:55:13 +03:00
Miad Abrin
3c710219a1 Fix Some Typos in kubernetes master role (#1547)
* Fix Typo etc3 -> etcd3

* Fix typo in post-upgrade of master. stop -> start
2017-08-20 13:54:28 +03:00
Maxim Krasilnikov
2ba285a544 Fixed deploy cluster with vault cert manager (#1548)
* Added custom ips to etcd vault distributed certificates

* Added custom ips to kube-master vault distributed certificates

* Added comment about issue_cert_copy_ca var in vault/issue_cert role file

* Generate kube-proxy, controller-manager and scheduler certificates by vault

* Revert "Disable vault from CI (#1546)"

This reverts commit 781f31d2b8.

* Fixed upgrade cluster with vault cert manager

* Remove vault dir in reset playbook
2017-08-20 13:53:58 +03:00
Antoine Legrand
72ae7638bc Merge pull request #1446 from matlockx/master
add possibility to ignore the hostname override
2017-08-18 17:03:40 +02:00
Xavier Lange
3bfad5ca73 Bump etcd to 3.2.4 (#1468) 2017-08-18 17:12:33 +03:00
Bogdan Dobrelya
668d02846d Align pbr config data with the spec file
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 16:04:48 +02:00
Matthew Mosesohn
781f31d2b8 Disable vault from CI (#1546)
https://github.com/kubernetes-incubator/kubespray/issues/1545
2017-08-18 16:45:27 +03:00
Matthew Mosesohn
df28db0066 Fix cert and netchecker upgrade issues (#1543)
* Bump tag for upgrade CI, fix netchecker upgrade

netchecker-server was changed from pod to deployment, so
we need an upgrade hook for it.

CI now uses v2.1.1 as a basis for upgrade.

* Fix upgrades for certs from non-rbac to rbac
2017-08-18 15:46:22 +03:00
Jan Jungnickel
20183f3860 Bump Calico CNI Plugin to 1.8.0 (#1458)
This aligns calico component versions with Calico release 2.1.5 and
fixes an issue with nodes being unable to schedule existing workloads
as per [#349](https://github.com/projectcalico/cni-plugin/issues/349)
2017-08-18 15:40:14 +03:00
Bogdan Dobrelya
48edf1757b Adjust the rpm spec data
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 14:09:55 +02:00
Matthew Mosesohn
2645e88b0c Fix vault setup partially (#1531)
This does not address per-node certs and scheduler/proxy/controller-manager
component certs which are now required. This should be handled in a
follow-up patch.
2017-08-18 15:09:45 +03:00
Bogdan Dobrelya
db121049b3 Move the spec file
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 13:59:27 +02:00
Bogdan Dobrelya
8058cdbc0e Add pbr build configuration
Required for an RPM package builds with the contrib/ansible-kubespray.spec

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 12:56:01 +02:00
Bogdan Dobrelya
31d357284a Update gitignore to prepare for a package build
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 11:58:07 +02:00
Bogdan Dobrelya
4ee77ce026 Add an RPM spec file and customize ansible roles_path
Install roles under /usr/local/share/kubespray/roles,
playbooks - /usr/local/share/kubespray/playbooks/,
ansible.cfg and inventory group vars - into /etc/kubespray.
Ship README and an example inventory as the package docs.
Update the ansible.cfg to consume the roles from the given path,
including virtualenvs prefix, if defined.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-08-18 11:54:20 +02:00
Kyle Bai
8373129588 Support CentOS 7 through Vagrant (#1542) 2017-08-18 09:16:47 +03:00
Malepati Bala Siva Sai Akhil
9a3c6f236d Update Community Code of Conduct (#1530)
Update Community Code of Conduct from kubernetes/kubernetes-template-project
2017-08-15 16:24:20 +03:00
Joseph Heck
bc5159a1f5 Update comparisons.md (#1519)
Minor grammar fixes
2017-08-14 18:48:35 +03:00
Brad Beam
af007c7189 Fixing netchecker-server type - pod => deployment (#1509) 2017-08-14 18:43:56 +03:00
Malepati Bala Siva Sai Akhil
dc79d07303 Fix Typo in Events Code of Conduct (#1521) 2017-08-13 22:49:24 +03:00
Brad Beam
79167c7577 Merge pull request #1461 from Abdelsalam-Abbas/azure_cli_2
Update azure contrib to use azure cli 2.0
2017-08-11 13:56:41 -05:00
Brad Beam
08dd057864 Merge pull request #1517 from seungkyua/apply_efk_rabc_and_fluentd_configmap
Apply RBAC to efk and create fluentd.conf
2017-08-11 13:33:35 -05:00
Abdelsalam Abbas
fee3f288c0 update azure contrib to use azure cli 2.0 2017-08-11 20:13:02 +02:00
Seungkyu Ahn
b22bef5cfb Apply RBAC to efk and create fluentd.conf
Make fluentd.conf a configmap so the configuration can be changed.
Change elasticsearch rc to deployment.
If elasticsearch was previously installed as an rc, it must be deleted first.
2017-08-11 05:31:50 +00:00
Brad Beam
460b5824c3 Merge pull request #1448 from lancomsystems/log-rotataion-example
Add logging options to default docker options
2017-08-10 08:30:23 -05:00
Brad Beam
b0a28b1e80 Merge pull request #1462 from Abdelsalam-Abbas/azure_vars
Add more variables for more clarity
2017-08-10 08:29:09 -05:00
Brad Beam
ca6535f210 Merge pull request #1488 from timtoum/weave_docs
added Weave documentation
2017-08-10 08:26:19 -05:00
Brad Beam
1155008719 Merge pull request #1481 from magnon-bliex/fluentd-template-fix-typo
fixed typo in fluentd-ds.yml.j2
2017-08-10 08:19:59 -05:00
Brad Beam
d07594ed59 Merge pull request #1512 from samuelmanzer/master
Add to network plugins documentation - README.md
2017-08-10 08:13:29 -05:00
Sam Manzer
4b137efdbd Add to network plugins documentation - README.md 2017-08-09 14:28:33 -05:00
Brad Beam
383d582b47 Merge pull request #1382 from jwfang/rbac
basic rbac support
2017-08-07 08:01:51 -05:00
Spencer Smith
6eacedc443 Merge pull request #1483 from delfer/patch-3
Update flannel from 0.6.2 to 0.8.0
2017-08-01 13:57:43 -04:00
timtoum
b1a5bb593c update docs 2017-08-01 15:55:38 +02:00
timtoum
9369c6549a update docs 2017-08-01 14:30:12 +02:00
email
c7731a3b93 update docs 2017-08-01 14:24:19 +02:00
email
24706c163a update docs 2017-08-01 14:12:21 +02:00
email
a276dc47e0 update docs 2017-08-01 10:52:21 +02:00
Spencer Smith
e55f8a61cd Merge pull request #1482 from bradbeam/fix1393
Removing run_once in these tasks so that etcd ca certs get propagated…
2017-07-31 13:47:18 -04:00
email
c8bcca0845 update docs 2017-07-31 16:33:00 +02:00
Spencer Smith
cb6892d2ed Merge pull request #1469 from hzamani/etcd_metrics
Add etcd metrics flag
2017-07-31 09:04:07 -04:00
Spencer Smith
43eda8d878 Merge pull request #1471 from whereismyjetpack/fix_1447
add newline after expanding user information
2017-07-31 09:03:04 -04:00
Spencer Smith
a2534e03bd Merge pull request #1442 from Sispheor/fix_kublet_options
Fix enforce-node-allocatable option
2017-07-31 09:00:42 -04:00
email
dc5b955930 update docs 2017-07-31 13:45:43 +02:00
email
5de7896ffb update docs 2017-07-31 13:28:47 +02:00
email
01af45d14a update docs 2017-07-31 13:23:01 +02:00
nico
cc9f3ea938 Fix enforce-node-allocatable option
Closes #1228
pods is default enforcement

see https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/
add

update
2017-07-31 10:06:53 +02:00
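A small hedged example of the kubelet flag this commit fixes; "pods" is the default enforcement scope, other flags omitted:

```
kubelet --enforce-node-allocatable=pods
```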
Alexander Chumakov
ff43de695e Update flannel from v0.6.2 to v0.8.0 in Readme 2017-07-29 08:00:05 +00:00
Alexander Chumakov
8bc717a55c Update flannel from 0.6.2 to 0.8.0 2017-07-29 10:54:31 +03:00
Brad Beam
d09222c900 Removing run_once in these tasks so that etcd ca certs get propagated properly to worker nodes
without this, etcd ca certs don't exist on worker nodes, causing calico to fail
2017-07-28 14:34:47 -05:00
email
87cdb81fae update docs 2017-07-28 11:33:13 +02:00
magnon-bliex
38eb1d548a fixed typo 2017-07-28 14:10:13 +09:00
Anton
e0960f6288 FIX: Unneeded (extra) cycles in some tasks (#1393) 2017-07-27 20:46:21 +03:00
email
74403f2003 update docs 2017-07-27 17:00:54 +02:00
Spencer Smith
b2c83714d1 Merge pull request #1478 from delfer/patch-2
[terraform/openstack] fixed mistake in README.md
2017-07-27 10:50:26 -04:00
email
2c21672de6 update docs 2017-07-27 15:10:08 +02:00
email
f7dc21773d new doc for weave 2017-07-27 14:40:52 +02:00
timtoum
3e457e4edf Enable weave seed mode for kubespray (#1414)
* Enable weave seed mode for kubespray

* fix task Weave seed | Set peers if existing peers

* fix mac address variabilisation

* fix default values

* fix include seed condition

* change weave var to default values

* fix Set peers if existing peers
2017-07-26 19:09:34 +03:00
Alexander Chumakov
03572d175f [terraform/openstack] fixed mistake in README.md 2017-07-26 17:42:44 +03:00
Dann Bohn
c4894d6092 add newline after expanding user information 2017-07-25 12:59:10 -04:00
Hassan Zamani
3fb0383df4 Add etcd metrics flag 2017-07-25 20:00:30 +04:30
Spencer Smith
ee36763f9d Merge pull request #1464 from johnko/patch-4
set loadbalancer_apiserver_localhost default true
2017-07-25 10:00:56 -04:00
Spencer Smith
955c5549ae Merge pull request #1402 from Lendico/fix_failed_when
"failed_when: false" and "|succeeded" checks for registered vars
2017-07-25 09:33:43 -04:00
Spencer Smith
4a34514b21 Merge pull request #1447 from whereismyjetpack/template_known_users
Template out known_users.csv, optionally add groups
2017-07-25 08:55:08 -04:00
jwfang
805d9f22ce note upgrade from non-RBAC not supported 2017-07-24 19:11:41 +08:00
Brad Beam
20f29327e9 Merge pull request #1379 from gdmello/etcd_data_dir_fix
Custom `etcd_data_dir` saves etcd data to host, not container
2017-07-20 09:30:18 -05:00
John Ko
018b5039e7 set loadbalancer_apiserver_localhost default true
to match this https://github.com/kubernetes-incubator/kubespray/blob/master/roles/kubernetes/node/tasks/main.yml#L20
and the documented behaviour in HA docs 
related to #1456
@rsmitty
2017-07-20 10:27:05 -04:00
Abdelsalam Abbas
d6aeb767a0 Add more azure variables for more clarity 2017-07-20 15:29:27 +02:00
Spencer Smith
b5d3d4741f Merge pull request #1454 from Abdelsalam-Abbas/higher_drain_timeout
raise the timeouts for draining nodes while upgrading kubernetes version
2017-07-19 10:39:33 -04:00
Spencer Smith
85c747d444 Merge pull request #1441 from bradbeam/1434
Adding recursive=true for rkt kubelet dir
2017-07-19 10:38:06 -04:00
Spencer Smith
927e6d89d7 Merge pull request #1435 from delfer/master
Kubernetes upgrade to 1.6.7
2017-07-19 05:23:38 -07:00
jwfang
3d87f23bf5 uncomment unintended local changes 2017-07-19 12:11:47 +08:00
Brad Beam
45845d4a2a Merge pull request #1437 from rajiteh/fix_aws_docs
Add more instructions to setting up AWS provider
2017-07-18 16:43:01 -05:00
Brad Beam
00ef129b2a Merge pull request #1455 from johnko/patch-2
fix some typos in HA doc
2017-07-18 16:12:58 -05:00
John Ko
06b219217b fix some typos in HA doc 2017-07-18 10:44:08 -04:00
jwfang
789910d8eb remove unused netchecker-agent-hostnet-ds.j2 2017-07-17 19:29:59 +08:00
jwfang
a8e6a0763d run netchecker-server with list pods 2017-07-17 19:29:59 +08:00
jwfang
e1386ba604 only patch system:kube-dns role for old dns 2017-07-17 19:29:59 +08:00
jwfang
83deecb9e9 Revert "no need to patch system:kube-dns"
This reverts commit c2ea8c588aa5c3879f402811d3599a7bb3ccab24.
2017-07-17 19:29:59 +08:00
jwfang
d8dcb8f6e0 no need to patch system:kube-dns 2017-07-17 19:29:59 +08:00
jwfang
5fa31eaead add '-e "${AUTHORIZATION_MODES}"' for all cluster.yml 2017-07-17 19:29:59 +08:00
jwfang
d245201614 test: change ubuntu_calico_rbac to ubuntu_flannel_rbac 2017-07-17 19:29:59 +08:00
jwfang
a5b84a47b0 docs: experimental, no calico/vault 2017-07-17 19:29:59 +08:00
jwfang
552b2f0635 change authorization_modes default value 2017-07-17 19:29:59 +08:00
jwfang
0b3badf3d8 revert calico-related changes 2017-07-17 19:29:59 +08:00
jwfang
cea3e224aa change authorization_modes default value 2017-07-17 19:29:59 +08:00
jwfang
1eaf0e1c63 rename task 2017-07-17 19:29:59 +08:00
jwfang
2cda982345 binding group system:nodes to clusterrole calico-role 2017-07-17 19:29:59 +08:00
jwfang
c9734b6d7b run calico-policy-controller with proper sa/role/rolebinding 2017-07-17 19:29:59 +08:00
jwfang
fd01377f12 remove more bins when reset 2017-07-17 19:29:59 +08:00
jwfang
8d2fc88336 add ci test for rbac 2017-07-17 19:29:59 +08:00
jwfang
092bf07cbf basic rbac support 2017-07-17 19:29:59 +08:00
Ubuntu
5145a8e8be higher draining timeouts 2017-07-16 20:52:13 +00:00
Matthew Mosesohn
b495d36fa5 Merge pull request #1450 from johnko/patch-1
fix typo 'on' > 'one'
2017-07-14 23:00:19 +03:00
John Ko
3bdeaa4a6f fix typo 'on' > 'one' 2017-07-14 15:25:09 -04:00
Dann Bohn
d1f58fed4c Template out known_users.csv, optionally add groups 2017-07-14 09:27:20 -04:00
Martin Joehren
12e918bd31 add possibility to ignore the hostname override 2017-07-13 14:04:39 +00:00
Brad Beam
637f445c3f Merge pull request #1365 from AtzeDeVries/master
Give more control over IPIP, but with same default behaviour
2017-07-12 10:17:17 -05:00
Brad Beam
d0e4cf5895 Merge pull request #1438 from gstorme/etcd_retention
add configurable parameter for etcd_auto_compaction_retention
2017-07-12 09:53:15 -05:00
Brad Beam
e0bf8b2aab Adding recursive=true for rkt kubelet dir
Fixes #1434
2017-07-12 09:28:54 -05:00
Matthew Mosesohn
483c06b4ab Merge pull request #1440 from Sispheor/vsphere_doc
add vsphere cloud provider doc
2017-07-12 12:05:26 +03:00
nico
f4a3b31415 add vsphere cloud provider doc
fix typo
2017-07-12 11:01:06 +02:00
Raj Perera
5c7e309d13 Add more instructions to setting up AWS provider 2017-07-11 10:53:19 -04:00
Spencer Smith
7a72b2d558 Merge pull request #1418 from Abdelsalam-Abbas/fix_vagrantfile
make sure every instance is a node if user changed defaults
2017-07-11 08:56:31 -04:00
Spencer Smith
c75b21a510 Merge pull request #1408 from amitkumarj441/patch-1
Remove deprecated 'enable-cri' flag in kubernetes 1.7
2017-07-11 08:56:14 -04:00
Spencer Smith
a9f318d523 Merge pull request #1424 from Abdelsalam-Abbas/fix_azure_https_ports
fix azure kubernetes port to 6443
2017-07-11 08:55:30 -04:00
Spencer Smith
1dca0bd8d7 Merge pull request #1428 from delfer/patch-1
[terraform/openstack] README.md Guide expanded
2017-07-11 08:53:33 -04:00
Alexander Chumakov
f3165a716a Add more config to README.md
Add resolvconf_mode and cloud_provider config description to README.md
2017-07-11 12:46:19 +03:00
Delfer
9f45eba6f6 Kubernetes upgrade to 1.6.7 2017-07-11 09:11:55 +00:00
Abdelsalam Abbas
ecaa7dad49 add a variable for kube_apiserver at all 2017-07-10 20:16:02 +02:00
Spencer Smith
ee84e34570 Merge pull request #1420 from rsmitty/default-matching
match kubespray-defaults dns mode with k8s-cluster setting
2017-07-10 12:35:31 -04:00
Alexander Chumakov
442be2ac02 [terraform/openstack] README.md Guide expanded
Add section how to configure k8s cluster and set up kubectl
2017-07-10 18:53:57 +03:00
Abdelsalam Abbas
22d600e8c0 fix azure kubernetes port to 6443 2017-07-09 09:56:32 +02:00
AtzeDeVries
e160018826 Fixed conflicts, ipip:true as default and added ipip_mode 2017-07-08 14:36:44 +02:00
Spencer Smith
d1a02bd3e9 match kubespray-defaults dns mode with k8s-cluster setting 2017-07-07 13:13:12 -04:00
Julian Poschmann
380fb986b6 Add logging options to default docker options 2017-07-07 12:39:42 +02:00
Abdelsalam Abbas
e7f794531e make sure every instance is a node if user changed defaults of num_instances 2017-07-07 09:20:14 +02:00
Brad Beam
992023288f Merge pull request #1319 from fieryvova/private-dns-server
Add private dns server for a specific zone
2017-07-06 15:02:54 -05:00
Spencer Smith
ef5a36dd69 Merge pull request #1281 from y-taka-23/patch-01
Typo
2017-07-06 14:11:12 -04:00
Spencer Smith
3ab90db6ee Merge pull request #1411 from kevinjqiu/allow-calico-ipip-subnet-mode
Allow calico ipPool to be created with mode "cross-subnet"
2017-07-06 14:04:03 -04:00
Vladimir Kozyrev
e26be9cb8a add private dns server for a specific zone 2017-07-06 16:30:47 +03:00
Spencer Smith
bba555bb08 Merge pull request #1346 from Starefossen/patch-1
Set kubedns minimum replicas to 2
2017-07-06 09:14:11 -04:00
Spencer Smith
4b0af73dd2 Merge pull request #1332 from gstorme/kube_apiserver_insecure_port
Use the kube_apiserver_insecure_port variable instead of static 8080
2017-07-06 09:06:50 -04:00
Spencer Smith
da72b8c385 Merge pull request #1391 from Abdelsalam-Abbas/master
Uncordon Masters which have scheduling Enabled
2017-07-06 09:06:02 -04:00
Spencer Smith
44079b7176 Merge pull request #1401 from Lendico/better_task_naming
Better naming for recurrent tasks
2017-07-06 09:01:07 -04:00
Spencer Smith
19c36fe4c9 Merge pull request #1406 from matlockx/master
added flag for not populating inventory entries to etc hosts file
2017-07-06 08:59:49 -04:00
Kevin Jing Qiu
a742d10c54 Allow calico ipPool to be created with mode "cross-subnet" 2017-07-04 19:05:16 -04:00
Hans Kristian Flaatten
6bd27038cc Set kubedns min replicas to 1 in gitlab config 2017-07-04 16:58:16 +02:00
Hans Kristian Flaatten
5df757a403 Correct indentation and line endings for gitlab config 2017-07-04 16:58:16 +02:00
Hans Kristian Flaatten
38f5d1b18e Set kubedns minimum replicas to 2 2017-07-04 16:58:16 +02:00
Abdelsalam Abbas
5f75d4c099 Uncordon Masters which have scheduling Enabled 2017-07-03 15:30:21 +02:00
Amit Kumar Jaiswal
319a0d65af Update kubelet.j2
Updated with closing endif.
2017-07-03 16:23:35 +05:30
Amit Kumar Jaiswal
3d2680a102 Update kubelet.j2
Updated!
2017-07-03 15:58:50 +05:30
Amit Kumar Jaiswal
c36fb5919a Update kubelet.j2
Updated!!
2017-07-03 15:55:04 +05:30
Amit Kumar Jaiswal
46d3f4369e Updated K8s version
Signed-off-by: Amit Kumar Jaiswal <amitkumarj441@gmail.com>
2017-07-03 04:06:42 +05:30
Martin Joehren
c2b3920b50 added flag for not populating inventory entries to etc hosts file 2017-06-30 16:41:03 +00:00
Spencer Smith
6e7323e3e8 Merge pull request #1398 from tanshanshan/fix-reset
clean files in reset roles
2017-06-30 07:59:44 -04:00
Spencer Smith
e98b0371e5 Merge pull request #1368 from vgkowski/patch-3
change documentation from "self hosted" to "static pod" for the contr…
2017-06-30 07:31:52 -04:00
Spencer Smith
f085419055 Merge pull request #1388 from vgkowski/master
add six package to bootstrap role
2017-06-30 07:30:36 -04:00
Anton Nerozya
1fedbded62 ignore_errors instead of failed_when: false 2017-06-29 20:15:14 +02:00
Anton Nerozya
c8258171ca Better naming for recurrent tasks 2017-06-29 19:50:09 +02:00
tanshanshan
007ee0da8e fix reset 2017-06-29 14:45:15 +08:00
Brad Beam
5e1ac9ce87 Merge pull request #1354 from chadswen/kubedns-var-fix
kubedns consistency fixes
2017-06-27 22:26:46 -05:00
Brad Beam
a7cd08603e Merge pull request #1384 from gdmello/etcd_backup_dir_fix
Make etcd_backup_prefix configurable.
2017-06-27 22:25:53 -05:00
Brad Beam
854cd1a517 Merge pull request #1380 from jwfang/max-dns
docker_dns_servers_strict to control docker_dns_servers rtrim
2017-06-27 21:15:12 -05:00
Spencer Smith
cf8c74cb07 Merge pull request #1342 from Abdelsalam-Abbas/patch-1
Create ansible.md
2017-06-27 13:58:18 -04:00
Spencer Smith
23565ebe62 Merge pull request #1356 from rsmitty/rename
Rename project to kubespray
2017-06-27 11:40:03 -04:00
Chad Swenson
8467bce2a6 Fix inconsistent kubedns version and parameterize kubedns autoscaler image vars 2017-06-27 10:19:31 -05:00
Spencer Smith
e6225d70a1 Merge pull request #1389 from Abdelsalam-Abbas/master
changing username from "ubuntu" to the correct one "vagrant" for ubuntu
2017-06-27 11:04:35 -04:00
Abdelsalam Abbas
a69de8be40 changing username from "ubuntu" to the correct one "vagrant" for ubuntu 2017-06-27 16:42:18 +02:00
gdmelloatpoints
649654207f mount the etcd data directory in the container with the same path as on the host. 2017-06-27 09:29:47 -04:00
gdmelloatpoints
3123502f4c move etcd_backup_prefix to new home. 2017-06-27 09:12:34 -04:00
vincent gromakowski
17d54cffbb add six package to bootstrap role 2017-06-27 10:08:57 +02:00
Brad Beam
bddee7c38e Merge pull request #1338 from kevinjqiu/vagrant-sync-folder
Sync folders on the vagrant machine
2017-06-26 22:10:58 -05:00
Brad Beam
6f9c311285 Merge pull request #1387 from rsmitty/ci-fixes
CI Fixes: turn off coreos updates
2017-06-26 22:00:08 -05:00
Brad Beam
0cfa6a8981 Merge pull request #1372 from seungkyua/apply_kubedns_to_the_latest
Make kubedns up to date
2017-06-26 21:58:03 -05:00
Seungkyu Ahn
d5516a4ca9 Make kubedns up to date
Update kube-dns version to 1.14.2
https://github.com/kubernetes/kubernetes/pull/45684
2017-06-27 00:57:29 +00:00
Spencer Smith
d2b793057e Merge pull request #1370 from Abdelsalam-Abbas/master
Fixing a condition that causes upgrade failure.
2017-06-26 17:15:03 -04:00
Spencer Smith
b2a409fd4d turn off coreos updates 2017-06-26 15:45:08 -04:00
gdmelloatpoints
4ba237c5d8 Make etcd_backup_prefix configurable. Ensures that backups can be stored on a different location other than ${HOST}/var/backups, say an EBS volume on AWS. 2017-06-26 09:42:30 -04:00
AtzeDeVries
f5ef02d4cc Merge remote-tracking branch 'upstream/master' 2017-06-26 11:37:23 +02:00
jwfang
ec2255764a docker_dns_servers_strict to control docker_dns_servers rtrim 2017-06-26 17:29:12 +08:00
Abdelsalam Abbas
1a8e92c922 Fixing cordoning condition that caused failure when upgrading the cluster 2017-06-23 20:41:47 +02:00
gdmelloatpoints
5c1891ec9f In the etcd container, the etcd data directory is always /var/lib/etcd. Reverting to this value, since etcd_data_dir on the host maps to /var/lib/etcd in the container. 2017-06-23 13:49:31 -04:00
Spencer Smith
83265b7f75 renaming kargo-cli to kubespray-cli 2017-06-23 12:35:10 -04:00
Brad Beam
5364a10033 Merge pull request #1374 from Lendico/doc_ansible_integration
Flow for integration with existing ansible repo
2017-06-23 11:31:22 -05:00
Brad Beam
c2a46e4aa3 Merge pull request #1345 from y-taka-23/neutron-for-calico
Modify documented neutron commands for Calico setup
2017-06-23 11:25:56 -05:00
Spencer Smith
bae5ce0bfa Merge branch 'master' into rename 2017-06-23 12:23:51 -04:00
Spencer Smith
cc5edb720c Merge pull request #1378 from rsmitty/fix-premoderator
premoderator breaks on redirect. update to use kubespray.
2017-06-23 12:10:15 -04:00
Spencer Smith
e17c2ef698 premoderator breaks on redirect. update to use kubespray. 2017-06-23 11:49:48 -04:00
AtzeDeVries
61b74f9a5b updated to direct control over ipip 2017-06-23 09:16:05 +02:00
Anton Nerozya
0cd83eadc0 README: Integration with existing ansible repo 2017-06-22 18:58:10 +02:00
Anton Nerozya
1757c45490 Merge remote-tracking branch 'upstream/master' 2017-06-22 18:23:29 +02:00
vgkowski
d85f98d2a9 change documentation from "self hosted" to "static pod" for the control plane 2017-06-21 11:00:11 +02:00
TAKAHASHI Yuto
9e123011c2 Modify documented neutron commands for Calico setup 2017-06-21 15:11:39 +09:00
Brad Beam
774c4d0d6f Merge pull request #1360 from vgkowski/patch-3
Update openstack documentation with Calico
2017-06-20 22:10:48 -05:00
AtzeDeVries
7332679678 Give more control over IPIP, but with same default behaviour 2017-06-20 14:50:08 +02:00
vgkowski
bb6f727f25 Update openstack documentation with Calico
Linked to the issue https://github.com/kubernetes-incubator/kubespray/issues/1359
2017-06-19 15:48:34 +02:00
Matthew Mosesohn
586d2a41ce Merge pull request #1357 from seungkyua/fixed_helm_bash_completion
Fixed helm bash complete
2017-06-19 09:57:36 +03:00
Seungkyu Ahn
91dff61008 Fixed helm bash complete 2017-06-19 15:33:50 +09:00
Spencer Smith
8203383c03 rename almost all mentions of kargo 2017-06-16 13:25:46 -04:00
Spencer Smith
a3c88a0de5 rename kargo mentions in top-level yml files 2017-06-16 12:18:35 -04:00
Gregory Storme
fff0aec720 add configurable parameter for etcd_auto_compaction_retention 2017-06-14 10:39:38 +02:00
Brad Beam
b73786c6d5 Merge pull request #1335 from bradbeam/imagerepo
Set default value for kube_hyperkube_image_repo
2017-06-12 09:46:17 -05:00
Abdelsalam Abbas
67eeccb31f Create ansible.md
fixing a typo
2017-06-12 13:20:15 +02:00
Gregory Storme
266ca9318d Use the kube_apiserver_insecure_port variable instead of static 8080 2017-06-12 09:20:59 +02:00
Kevin Jing Qiu
3e97299a46 Sync folders on the vagrant machine 2017-06-09 17:19:28 -04:00
Brad Beam
eacc42fedd Merge pull request #1240 from bradbeam/vaultfixup
Fixing up vault variables
2017-06-08 22:33:03 -05:00
Brad Beam
db3e8edacd Fixing up vault variables 2017-06-08 16:15:33 -05:00
Brad Beam
6e41634295 Set default value for kube_hyperkube_image_repo
Fixes #1334
2017-06-08 12:22:16 -05:00
Spencer Smith
ef3c2d86d3 Merge pull request #1327 from rsmitty/coreos-testing-update
use latest coreos-stable for testing to avoid upgrades during deployment
2017-06-07 16:31:23 -07:00
Brad Beam
780308c194 Merge pull request #1174 from jlothian/atomic-docker-restart
Fix docker restart in atomic
2017-06-07 12:05:32 -05:00
Brad Beam
696fd690ae Merge pull request #1092 from bradbeam/rkt_docker
Adding flag for docker container in kubelet w/ rkt
2017-06-06 12:58:40 -05:00
Spencer Smith
d323501c7f Merge pull request #1328 from kevinjqiu/coreos-vagrant
Support provisioning vagrant k8s clusters with coreos
2017-06-05 14:30:49 -07:00
Kevin Jing Qiu
66d8b2c18a Specify coreos vagrant box url 2017-06-04 11:31:39 -04:00
Kevin Jing Qiu
6d8a415b4d Update doc on Vagrant local override file 2017-06-02 20:09:37 -04:00
Kevin Jing Qiu
dad268a686 Add default ssh user for different OSes 2017-06-02 19:51:09 -04:00
Kevin Jing Qiu
e7acc2fddf Update doc for Vagrant install 2017-06-02 19:03:43 -04:00
Kevin Jing Qiu
6fb17a813c Support provisioning vagrant k8s clusters with coreos 2017-06-02 18:53:47 -04:00
Spencer Smith
11ede9f872 use latest coreos-stable for testing to avoid upgrades during deployment 2017-06-02 12:24:54 -04:00
Spencer Smith
6ac1c1c886 Merge pull request #1320 from rsmitty/centos-cert-fix
check if cloud_provider is defined
2017-05-31 11:54:15 -04:00
Spencer Smith
01c0ab4f06 check if cloud_provider is defined 2017-05-31 08:24:24 -04:00
Spencer Smith
7713f35326 Merge pull request #1317 from mtsr/versionlock
Adds note on versionlock to README
2017-05-30 14:37:21 -04:00
Spencer Smith
7220b09ff9 Merge pull request #1315 from rsmitty/hostnames-upgrade
Resolve upgrade issues
2017-05-30 11:40:19 -04:00
Spencer Smith
b7298ef51a Merge pull request #1313 from rsmitty/centos-cert-path
add direct path for cert in AWS with RHEL family
2017-05-30 11:37:37 -04:00
Spencer Smith
16b10b026b add scale.yml to do minimum needed for a node bootstrap 2017-05-29 13:49:21 +02:00
Jonas Matser
9b18c073b6 Adds note on versionlock to README
Note to users that auto-updates break clusters that don't lock the docker version somehow.
2017-05-28 20:55:44 +02:00
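Hedged examples of locking the docker version as the note suggests; package names vary by distro and repo:

```
# RHEL/CentOS: pin the installed docker package
yum install -y yum-plugin-versionlock
yum versionlock add docker-engine

# Debian/Ubuntu: hold the package at its current version
apt-mark hold docker-engine
```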
Spencer Smith
dd89e705f2 don't uncordon masters 2017-05-26 17:48:56 -04:00
Spencer Smith
56b86bbfca inventory hostname for cordoning/uncordoning 2017-05-26 17:47:25 -04:00
Spencer Smith
7e2aafcc76 add direct path for cert in AWS with RHEL family 2017-05-26 17:32:50 -04:00
Spencer Smith
11c774b04f Merge pull request #1306 from rsmitty/scale-up
add scale.yml to do minimum needed for a node bootstrap
2017-05-25 18:51:09 -04:00
Spencer Smith
6ba926381b Merge pull request #1309 from jhunthrop/router-peering
adding --skip-exists flag for peer_with_router
2017-05-25 18:50:54 -04:00
Justin Hunthrop
af55e179c7 adding --skip-exists flag for peer_with_router 2017-05-25 14:29:18 -05:00
Spencer Smith
18a42e4b38 add scale.yml to do minimum needed for a node bootstrap 2017-05-24 15:49:21 -04:00
Spencer Smith
a10ccadb54 Merge pull request #1300 from rsmitty/dynamic-inventory-aws
Added dynamic inventory for AWS as contrib
2017-05-23 12:57:51 -04:00
Spencer Smith
15fee582cc Merge pull request #1305 from zouyee/master
upgrade k8s version to 1.6.4
2017-05-23 12:52:13 -04:00
zoues
43408634bb Merge branch 'master' into master 2017-05-23 09:32:28 +08:00
zouyee
d47fce6ce7 upgrade k8s version to 1.6.4 2017-05-23 09:30:03 +08:00
Matthew Mosesohn
9e64267867 Merge pull request #1293 from mattymo/kubelet_host_mode
Add host-based kubelet deployment
2017-05-19 18:07:39 +03:00
Josh Lothian
7ae5785447 Removed the other unused handler
With live-restore: true, we don't need a special docker restart
2017-05-19 09:50:10 -05:00
Josh Lothian
ef8d3f684f Remove unused handler
Previous patch removed the step that sets live-restore
back to false, so don't try to notify that handler any more
2017-05-19 09:45:46 -05:00
Matthew Mosesohn
cc6e3d14ce Add host-based kubelet deployment
Kubelet gets copied from hyperkube container and run locally.
2017-05-19 16:54:07 +03:00
Spencer Smith
83f44b1ac1 Added example json 2017-05-18 17:57:30 -04:00
Spencer Smith
1f470eadd1 Added dynamic inventory for AWS as contrib 2017-05-18 17:52:44 -04:00
Spencer Smith
005b01bd9a Merge pull request #1299 from bradbeam/kubelet
Minor kubelet updates
2017-05-18 12:52:43 -04:00
Josh Lothian
6f67367b57 Leave 'live-restore' false
Leave live-restore false so updates always pick
up new network configuration
2017-05-17 14:31:49 -05:00
Josh Lothian
9ee0600a7f Update handler names and explanation 2017-05-17 14:31:49 -05:00
Josh Lothian
30cc7c847e Reconfigure docker restart behavior on atomic
Before restarting docker, instruct it to kill running
containers when it restarts.

Needs a second docker restart after we restore the original
behavior, otherwise the next time docker is restarted by
an operator, it will unexpectedly bring down all running
containers.
2017-05-17 14:31:49 -05:00
Josh Lothian
a5bb24b886 Fix docker restart in atomic
In atomic, containers are left running when docker is restarted.
When docker is restarted after the flannel config is put in place,
the docker0 interface isn't re-IPed because docker sees the running
containers and won't update the previous config.

This patch kills all the running containers after docker is stopped.
We can't simply `docker stop` the running containers, as they respawn
before we've got a chance to stop the docker daemon, so we need to
use runc to do this after dockerd is stopped.
2017-05-17 14:31:49 -05:00
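A rough sketch of the sequence described above; it assumes plain `runc` can see docker's containers in its default state root, whereas on some hosts the binary is `docker-runc` and/or a --root flag is needed:

```
systemctl stop docker
# docker stop would let the containers respawn, so kill the leftovers via runc
for c in $(runc list -q); do
    runc kill "$c" KILL
done
systemctl start docker
```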
Spencer Smith
f02d810af8 Merge pull request #1298 from rsmitty/centos-bootstrap
issue raw yum command since we don't have facts in bootstrapping
2017-05-17 14:44:54 -04:00
Brad Beam
55f6b6a6ab Merge pull request #940 from Connz/patch-1
Fixed nova command to get available flavors
2017-05-16 21:24:07 -05:00
Brad Beam
b999ee60aa Fixing typo in kubelet cluster-dns and cluster-domain flags 2017-05-16 15:43:29 -05:00
Brad Beam
85afd3ef14 Removing old sysv reference 2017-05-16 15:28:39 -05:00
Spencer Smith
1907030d89 issue raw yum command since we don't have facts in bootstrapping 2017-05-16 10:07:38 -04:00
Spencer Smith
361a5eac7e Merge pull request #1290 from huikang/update-version-readme
Update the kubernetes and docker version in readme
2017-05-15 09:55:04 -04:00
Spencer Smith
fecb41d2ef Merge pull request #1289 from rsmitty/default-dns-mode
default to kubedns &set nxdomain in kubedns deployment if that's the dns_mode
2017-05-15 09:52:07 -04:00
Hui Kang
4cdb641e7b Update the kubernetes and docker version in readme
- kubernetes v1.6.1
- docker v1.13.1

Signed-off-by: Hui Kang <hkang.sunysb@gmail.com>
2017-05-13 22:34:41 -04:00
Spencer Smith
efa2dff681 remove conditional 2017-05-12 17:16:49 -04:00
Spencer Smith
31a7b7d24e default to kubedns and set nxdomain in kubedns deployment if that's the dns_mode 2017-05-12 15:57:24 -04:00
TAKAHASHI Yuto
af8cc4dc4a Typo 2017-05-08 22:55:34 +09:00
Matthew Mosesohn
8eb60f5624 Merge pull request #1280 from moss2k13/bugfix/helm_centos
Updated kubernetes-apps helm installation
2017-05-08 12:45:35 +03:00
moss2k13
791ea89b88 Updated helm installation
Added full path for helm
2017-05-08 09:27:06 +02:00
Spencer Smith
c572760a66 Merge pull request #1254 from iJanki/cert_group
Adding /O=system:masters to admin certificate
2017-05-05 10:58:42 -04:00
Brad Beam
69fc19f7e0 Merge pull request #1252 from adidenko/separate-tags-for-netcheck-containers
Add support for different tags for netcheck containers
2017-05-05 08:04:54 -05:00
Spencer Smith
b939c24b3d Merge pull request #1250 from digitalrebar/master
bootstrap task on centos missing packages
2017-05-02 12:24:11 -04:00
Spencer Smith
3eb494dbe3 Merge pull request #1259 from bradbeam/calico214
Updating calico to v2.1.4
2017-05-02 12:20:47 -04:00
Spencer Smith
d6a66c83c2 Merge pull request #1266 from rsmitty/os-release
mount os-release to ensure the node's OS is what's seen in k8s api
2017-05-02 12:17:48 -04:00
Spencer Smith
582a9a5db8 Merge pull request #1265 from cfarquhar/fix_docs_calico_link
Fix link from ansible.md to calico.md
2017-05-02 12:17:10 -04:00
Spencer Smith
0afbc19ffb ensure the /etc/os-release is mounted read only 2017-05-01 14:51:40 -04:00
Spencer Smith
ac9290f985 add for rkt as well 2017-04-28 17:45:10 -04:00
Brad Beam
a133ba1998 Updating calico to v2.1.4 2017-04-28 14:04:25 -05:00
Spencer Smith
5657738f7e mount os-release to ensure the node's OS is what's seen in k8s api 2017-04-28 13:40:54 -04:00
Charles Farquhar
d310acc1eb Fix link from ansible.md to calico.md
This commit fixes a broken link from ansible.md to calico.md.
2017-04-28 12:10:23 -05:00
Matthew Mosesohn
2b88f10b04 Merge pull request #1262 from holser/switch_ci_to_ansible_2.3
Switch CI to ansible 2.3.0
2017-04-28 12:07:19 +03:00
Aleksandr Didenko
883ba7aa90 Add support for different tags for netcheck containers
Replace 'netcheck_tag' with 'netcheck_version' and add additional
'netcheck_server_tag' and 'netcheck_agent_tag' config options to
provide ability to use different tags for server and agent
containers.
2017-04-27 17:15:28 +02:00
Sergii Golovatiuk
28f55deaae Switch CI to ansible 2.3.0
Closes: 1253
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-04-27 12:50:16 +02:00
Matthew Mosesohn
40407930d5 Merge pull request #1260 from holser/fix_jinja_ansible_2.3
Ansible 2.3 support
2017-04-27 13:39:28 +03:00
Sergii Golovatiuk
674b71b535 Ansible 2.3 support
- Fix when clauses in various places
- Update requirements.txt
- Fix README.md

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-04-26 15:22:10 +02:00
Matthew Mosesohn
677d9c47ac Merge pull request #1256 from AlexeyKasatkin/add_MY_NODE_NAME_var
add MY_NODE_NAME variable into netchecker-agent environment
2017-04-25 18:12:25 +03:00
Aleksey Kasatkin
2638ab98ad add MY_NODE_NAME variable into netchecker-agent environment 2017-04-24 17:19:42 +03:00
Matthew Mosesohn
bc3068c2f9 Merge pull request #1251 from FengyunPan/fix-helm-home
Specify a dir and attach it to helm for HELM_HOME
2017-04-24 15:17:28 +03:00
FengyunPan
2bde9bea1c Specify a dir and attach it to helm for HELM_HOME 2017-04-21 10:51:27 +08:00
Spencer Smith
502f2f040d Merge pull request #1249 from rsmitty/master
add some known tweaks that need to be made for coreos to docs
2017-04-20 18:40:25 -04:00
Greg Althaus
041d4d666e Install required selinux-python bindings in bootstrap
on centos.  The bootstrap tty fixup needs it.
2017-04-20 11:17:01 -05:00
Spencer Smith
c0c10a97e7 Merge pull request #1248 from rsmitty/aws-resolver
allow for correct aws default resolver
2017-04-20 11:25:40 -04:00
Spencer Smith
5a7c50027f add some known tweaks that need to be made for coreos 2017-04-20 11:14:41 -04:00
Spencer Smith
88b5065e7d fix stray 'in' and break into multiple lines for clarity 2017-04-20 09:53:01 -04:00
Spencer Smith
b690008192 allow for correct aws default resolver 2017-04-20 09:32:03 -04:00
Matthew Mosesohn
2d6bc9536c Merge pull request #1246 from holser/disable_dns_for_kube_services
Change DNS policy for kubernetes components
2017-04-20 16:12:52 +03:00
Sergii Golovatiuk
01dc6b2f0e Add aws to default_resolver
When VPC is used, external DNS might not be available. This patch changes
the behavior to use the metadata service instead of external DNS when
upstream_dns_servers is not specified.

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-04-20 11:47:19 +02:00
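Hedged illustration of the metadata-service resolver this patch falls back to: inside a VPC the Amazon-provided DNS is reachable at the VPC base address +2 and, per AWS docs, at 169.254.169.253:

```
dig +short kubernetes.io @169.254.169.253
```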
Sergii Golovatiuk
d8aa2d0a9e Change DNS policy for kubernetes components
According to the code, apiserver, scheduler, controller-manager and proxy don't
resolve the objects they create. It's not harmful to change the
policy to use an external resolver.

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-04-20 11:22:57 +02:00
Matthew Mosesohn
19bb97d24d Merge pull request #1238 from Starefossen/fix/namespace-template-file
Move namespace file to template directory
2017-04-20 12:19:55 +03:00
Matthew Mosesohn
9f4f168804 Merge pull request #1241 from bradbeam/rktcnidir
Explicitly create cni bin dir
2017-04-20 12:19:26 +03:00
Matthew Mosesohn
82e133b382 Merge pull request #1235 from JustinAzoff/patch-1
Fix IPS array variable expansion
2017-04-20 12:08:49 +03:00
Matthew Mosesohn
cf3083d68e Merge pull request #1239 from mattymo/resettags
Add tags to reset playbook and make iptables flush optional
2017-04-20 11:35:08 +03:00
Sergii Golovatiuk
e796cdbb27 Fix restart kube-controller (#1242)
kubernetesUnitPrefix was changed to k8s_* in 1.5. This patch reflects
this change in kargo
2017-04-20 11:26:01 +03:00
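For context, a hedged way to see why handlers matching the old unit prefix stopped working: kubelet-managed containers are named with the k8s_ prefix since 1.5:

```
# Anything still matching the pre-1.5 prefix finds nothing, so the restart
# handler never fires; matching k8s_* does.
docker ps --format '{{.Names}}' | grep '^k8s_kube-controller-manager'
```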
Matthew Mosesohn
2d44582f88 Add tags to reset playbook and make iptables flush optional
Fixes #1229
2017-04-19 19:32:18 +03:00
Spencer Smith
2a61344c03 Merge pull request #1236 from mattymo/minupgrade
Add minimal k8s upgrade playbook
2017-04-19 12:05:39 -04:00
Spencer Smith
77c6aad1b5 Merge pull request #1237 from Starefossen/chore/remove-dot-bak
Remove and ignore .bak files
2017-04-19 12:03:41 -04:00
Brad Beam
b60a897265 Explicitly create cni bin dir
If this path doesn't exist, it will cause kubelet to fail to start when
using rkt
2017-04-19 16:00:44 +00:00
Hans Kristian Flaatten
fdd41c706a Remove and ignore .bak files 2017-04-19 13:37:23 +02:00
Hans Kristian Flaatten
d68cfeed6e Move namespace file to template directory 2017-04-19 13:37:02 +02:00
Matthew Mosesohn
14911e0d22 Add minimal k8s upgrade playbook 2017-04-18 13:28:36 +03:00
Justin
9503434d53 Fix IPS array variable expansion
$IPS only expands to the first ip address in the array:

justin@box:~$ declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
justin@box:~$ echo $IPS
10.10.1.3
justin@box:~$ echo ${IPS[@]}
10.10.1.3 10.10.1.4 10.10.1.5
2017-04-17 20:56:52 -04:00
Spencer Smith
c3c9e955e5 Merge pull request #1232 from rsmitty/custom-flags
add ability for custom flags
2017-04-17 14:01:32 -04:00
Spencer Smith
72d5db92a8 remove stray spaces in templating 2017-04-17 12:24:24 -04:00
Spencer Smith
3f302c8d47 ensure spacing on string of flags 2017-04-17 12:13:39 -04:00
Spencer Smith
04a769bb37 ensure spacing on string of flags 2017-04-17 11:11:10 -04:00
Spencer Smith
f9d4a1c1d8 update to safeguard against accidentally passing string instead of list 2017-04-17 11:09:34 -04:00
Matthew Mosesohn
3e7db46195 Merge pull request #1233 from gbolo/master
allow admission control plug-ins to be easily customized
2017-04-17 12:59:49 +03:00
Matthew Mosesohn
e52aca4837 Merge pull request #1223 from mattymo/vault_cert_skip
Skip vault cert task evaluation when using script certs
2017-04-17 12:52:42 +03:00
Matthew Mosesohn
5ec503bd6f Merge pull request #1222 from bradbeam/calico
Updating calico versions
2017-04-17 12:52:20 +03:00
gbolo
49be805001 allow admission control plug-ins to be easily customized 2017-04-16 22:03:45 -04:00
Spencer Smith
94596388f7 add ability for custom flags 2017-04-14 17:33:04 -04:00
Spencer Smith
5c4980c6e0 Merge pull request #1231 from holser/fix_netchecker-server
Reschedule netchecker-server in case of HW failure.
2017-04-14 10:50:07 -04:00
Spencer Smith
6d157f0b3e Merge pull request #1225 from VincentS/aws_fixes
Fixes for AWS Terraform Deployment and Updated Readme
2017-04-14 10:47:25 -04:00
Spencer Smith
c3d5fdff64 Merge pull request #1192 from justindowning/patch-2
Update upgrades.md
2017-04-14 10:19:35 -04:00
Spencer Smith
d6cbdbd6aa Merge pull request #1230 from jduhamel/jduhamel-kubedns-autoscaler-1
Update kubedns-autoscaler change target
2017-04-14 09:56:48 -04:00
Matthew Mosesohn
d7b8fb3113 Update start_vault_temp.yml 2017-04-14 13:32:41 +03:00
Sergii Golovatiuk
45044c2d75 Reschedule netchecker-server in case of HW failure.
A Pod object is not reschedulable by kubernetes. It means that if the node
with netchecker-server goes down, netchecker-server won't be scheduled
anywhere. This commit changes the type of netchecker-server to
Deployment, so netchecker-server will be scheduled on other nodes in
case of failures.
2017-04-14 10:49:16 +02:00
Joe Duhamel
a9f260d135 Update dnsmasq-autoscaler
changed target to be a deployment rather than a replicationcontroller.
2017-04-13 15:07:06 -04:00
Joe Duhamel
072b3b9d8c Update kubedns-autoscaler change target
The target was a replicationcontroller but kubedns is currently a deployment
2017-04-13 14:55:25 -04:00
Matthew Mosesohn
ae7f59e249 Skip vault cert task evaluation completely when using script cert generation 2017-04-13 19:29:07 +03:00
Spencer Smith
450b4e16b2 Merge pull request #1224 from VincentS/var_fix
Fix undefined variables for etcd deployment
2017-04-12 09:19:02 -04:00
Vincent Schwarzer
c48ffa24be Fixes for AWS Terraform Deployment and Updated Readme 2017-04-12 15:15:54 +02:00
Vincent Schwarzer
7f0c0a0922 Fix for etcd variable issue 2017-04-12 12:59:49 +02:00
Brad Beam
bce1c62308 Updating calico versions 2017-04-11 20:52:04 -05:00
Spencer Smith
9b3aa3451e Merge pull request #1218 from bradbeam/efkidempotent
Fixing resource type for kibana
2017-04-11 19:04:13 -04:00
Spencer Smith
436c0b58db Merge pull request #1217 from bradbeam/helmcompletion
Excluding bash completion for helm on CoreOS
2017-04-11 17:34:11 -04:00
Spencer Smith
7ac62822cb Merge pull request #1219 from zouyee/master
upgrade etcd version from v3.0.6 to v3.0.17
2017-04-11 17:32:56 -04:00
Matthew Mosesohn
af8ae83ea0 Merge pull request #1216 from mattymo/rework_collect_logs
Allow collect-logs.yaml to operate without inventory vars
2017-04-11 16:58:39 +03:00
zouyee
0bcecae2a3 upgrade etcd version from v3.0.6 to v3.0.17 2017-04-11 10:42:35 +08:00
Brad Beam
bd130315b6 Excluding bash completion for helm on CoreOS 2017-04-10 11:07:15 -05:00
Brad Beam
504711647e Fixing resource type for kibana 2017-04-10 11:01:12 -05:00
Matthew Mosesohn
a9a016d7b1 Allow collect-logs.yaml to operate without inventory vars 2017-04-10 18:49:17 +03:00
Antoine Legrand
ab12b23e6f Merge pull request #1173 from bradbeam/dockerlogs
Setting defaults for docker log rotation
2017-04-09 11:50:01 +02:00
Matthew Mosesohn
797bdbd998 Merge pull request #1210 from mattymo/fix-1.5-kubelet
Unbreak 1.5 deployment with kubelet
2017-04-07 08:22:39 +03:00
Matthew Mosesohn
1c45d37348 Update kubelet.j2 2017-04-06 22:59:18 +03:00
Matthew Mosesohn
b521255ec9 Unbreak 1.5 deployment with kubelet
1.5 kubelet fails to start when using unknown params
2017-04-06 21:07:48 +03:00
Matthew Mosesohn
75ea001bfe Merge pull request #1208 from mattymo/1.6-flannel
Update to k8s 1.6 with flannel and centos fixes
2017-04-06 13:04:02 +03:00
Matthew Mosesohn
ff2fb9196f Fix flannel for 1.6 and apply fixes to enable containerized kubelet 2017-04-06 10:06:21 +04:00
Matthew Mosesohn
acae0fe4a3 Merge pull request #1205 from holser/resolv_updates
Refactoring resolv.conf
2017-04-05 14:22:52 +03:00
Matthew Mosesohn
ccc11e5680 Upgrade to Kubernetes 1.6.1 2017-04-05 13:26:36 +03:00
Sergii Golovatiuk
2670eefcd4 Refactoring resolv.conf
- Renaming templates for netchecker
- Add dnsPolicy: ClusterFirstWithHostNet to kube-proxy

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-04-05 09:28:01 +02:00
Matthew Mosesohn
c0cae9e8a0 Merge pull request #1204 from mattymo/resolvconf-nodes
Restart kubelet when updating /etc/resolv.conf on all k8s nodes
2017-04-04 22:03:44 +03:00
Matthew Mosesohn
f8cf6b4f7c Merge pull request #1186 from holser/resolv_conf
Set ClusterFirstWithHostNet for Pods with hostnetwork: true
2017-04-04 20:49:55 +03:00
Matthew Mosesohn
a29182a010 Restart kubelet when updating /etc/resolv.conf on all k8s nodes 2017-04-04 20:43:47 +03:00
Sergii Golovatiuk
1cfe0beac0 Set ClusterFirstWithHostNet for Pods with hostnetwork: true
In kubernetes 1.6 ClusterFirstWithHostNet was added as an option. In
accordance with it, kubelet will generate resolv.conf based on its own
resolv.conf. However, this doesn't create 'options', thus the proper
solution requires some investigation.

This patch sets the same resolv.conf for kubelet as host

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-04-04 16:34:13 +02:00
Matthew Mosesohn
798f90c4d5 Merge pull request #1153 from mattymo/graceful_drain
Move graceful upgrade test to Ubuntu canal HA, adjust drain
2017-04-04 17:33:53 +03:00
Matthew Mosesohn
fac4334950 Merge pull request #1201 from mattymo/configurable_failure
Make any_errors_fatal configurable
2017-04-04 15:51:56 +03:00
Matthew Mosesohn
f8d44a8a88 Merge pull request #1200 from mattymo/issue1190
Fix multiline condition for k8s check certs
2017-04-04 15:48:05 +03:00
Matthew Mosesohn
1136a94a6e Merge pull request #1191 from justindowning/patch-1
pin ansible to version 2.2.1.0
2017-04-04 13:42:32 +03:00
Matthew Mosesohn
fd20e0de90 Wait for container creation in check network test 2017-04-04 13:12:24 +03:00
Matthew Mosesohn
a1150dc334 Make any_errors_fatal configurable
Useful at scale when 1 or 2 nodes may fail and you can proceed with
the majority and go back and fix the others later.
2017-04-04 12:52:47 +03:00
Matthew Mosesohn
b4d06ff8dd Add /var/lib/cni to kubelet
Necessary to persist this directory for host-local IPAM used by Canal
Add pre-upgrade task to copy /var/lib/cni out of old kubelet.
2017-04-03 19:38:24 +03:00
Matthew Mosesohn
7581705007 Merge pull request #1185 from intelsdi-x/hostname
Use hostname module to set hostname, and do it for all Os not only Co…
2017-04-03 19:01:12 +03:00
Matthew Mosesohn
5a5707159a Fix multiline condition for k8s check certs
Fixes #1190
2017-04-03 17:44:55 +03:00
Matthew Mosesohn
742a1681ce Merge pull request #1166 from rogerwelin/master
add iptables --flush to reset role
2017-04-03 17:25:10 +03:00
Matthew Mosesohn
fba9b9cb65 Merge pull request #1182 from artem-panchenko/bumpCalicoPolicyControllerVersion
Bump calico policy controller version
2017-04-03 17:21:52 +03:00
Paweł Skrzyński
61b2d7548a Use hostname module to set hostname, and do it for all Os not only CoreOS 2017-04-03 15:09:33 +02:00
Matthew Mosesohn
80828a7c77 use etcd2 when upgrading unless forced 2017-04-03 15:07:42 +03:00
Matthew Mosesohn
f5af86c9d5 Merge pull request #1194 from adidenko/fix-sync_certs
Fix multiline when condition in sync_certs task
2017-03-31 17:39:40 +03:00
Aleksandr Didenko
58acbe7caf Fix multiline when condition in sync_certs task
Folded style in a multiline 'when' condition causes an error with
unexpected indent. Changing it to literal style should fix
the issue.

Closes #1190
2017-03-30 22:21:04 +02:00
Spencer Smith
355b92d7ba Merge pull request #1170 from jlothian/atomic-docker-network
1169 - fix docker systemd unit
2017-03-30 13:13:28 -07:00
Matthew Mosesohn
d42e4f2344 Update .gitlab-ci.yml 2017-03-30 12:19:15 +04:00
Justin Downing
fbded9cdac Update upgrades.md
Clarify that the `kube_version` environment variable is needed for the CLI "graceful upgrade". Also add an example to check that the upgrade was successful.
2017-03-29 22:00:52 -04:00
Justin Downing
907e43b9d5 pin ansible to version 2.2.1.0
ansible 2.2.2.0 has an [issue]() that causes problems for kargo:

```
(env) kargo ᐅ env/bin/ansible-playbook upgrade-cluster.yml 
ERROR! Unexpected Exception: 'Host' object has no attribute 'remove_group'
```

Pinning ansible to 2.2.1.0 resolved this for me.
2017-03-29 21:40:34 -04:00
Matthew Mosesohn
fb467df47c fix etcd restart 2017-03-29 23:22:49 +04:00
Matthew Mosesohn
48beef25fa delete master containers forcefully 2017-03-27 19:08:22 +03:00
Matthew Mosesohn
a3f568fc64 restart scheduler and controller-manager too 2017-03-27 13:51:35 +03:00
Matthew Mosesohn
57ee304260 ensure post-upgrade purge runs only once 2017-03-27 13:28:37 +03:00
Matthew Mosesohn
0794a866a7 switch debian8-canal-ha to ubuntu 2017-03-27 13:28:37 +03:00
Matthew Mosesohn
49e4d344da move network plugins out of grouped upgrades 2017-03-27 13:28:37 +03:00
Matthew Mosesohn
21a9dea99f move kubernetes-apps/network-plugin back to master role 2017-03-27 13:28:37 +03:00
Matthew Mosesohn
6e505c0c3f Fix delegate tasks for kubectl and etcdctl 2017-03-27 13:28:37 +03:00
Matthew Mosesohn
e9a294fd9c Significantly reduce memory requirements
Canal runs more pods and upgrades need a bit of extra
room to load new pods in and get the old ones out.
2017-03-27 13:28:37 +03:00
Matthew Mosesohn
44d851d5bb Only cordon Ready nodes 2017-03-27 13:28:37 +03:00
Matthew Mosesohn
5ed03ce7f0 Use checksum of dnsmasq config to trigger updates of dnsmasq
Allows config changes made by Ansible to restart dnsmasq deployment
2017-03-27 13:28:37 +03:00
Matthew Mosesohn
c1b9660ec8 Move graceful upgrade test to debian canal HA, adjust drain
Graceful upgrades require 3 nodes
Drain now has a command timeout of 40s
2017-03-27 13:28:37 +03:00
Matthew Mosesohn
c2c334d22f Merge pull request #1181 from holser/refactor_etcd
Refactor etcd role
2017-03-27 13:05:35 +03:00
Antoine Legrand
ed5c848473 Merge pull request #1175 from zoidbergwill/patch-1
Fix markdown of heading in README
2017-03-27 09:33:43 +02:00
Sergii Golovatiuk
f144fd1ed3 Refactor etcd role
- Run docker run from script rather than directly from systemd target
- Refactoring styling/templates

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-03-24 12:34:15 +01:00
Artem Panchenko
e96557f410 Bump calico policy controller version
Latest released version of kube-policy-controller
contains important bug fixes and should be used
by default.
2017-03-24 12:13:09 +02:00
Antoine Legrand
ac96d5ccf0 Merge pull request #1176 from zoidbergwill/patch-2
Update roadmap.md
2017-03-23 12:05:35 +01:00
Matthew Mosesohn
b2af19471e Merge pull request #1177 from rutsky/replace-nbsp
replace non-breakable space with regular space
2017-03-23 12:59:45 +03:00
Matthew Mosesohn
6805d0ff2b Merge pull request #1179 from kubernetes-incubator/missing_defaults
Add missing defaults
2017-03-23 12:16:13 +03:00
Antoine Legrand
6e1de9d820 Add missing defaults 2017-03-23 10:05:34 +01:00
Matthew Mosesohn
d27ca7854f Merge pull request #1161 from VincentS/aws_deployment
Fixes for AWS Terraform Deployment
2017-03-23 11:59:39 +03:00
Vladimir Rutsky
c4e57477fb replace non-breakable space with regular space
A non-breakable space is the 0xc2 0xa0 byte sequence in UTF-8.

To find one:

    $ git grep -I -P '\xc2\xa0'

To replace with regular space:

    $ git grep -l -I -P '\xc2\xa0' | xargs sed -i 's/\xc2\xa0/ /g'

This commit doesn't include changes that will overlap with commit f1c59a91a1.
2017-03-23 00:25:01 +03:00
William Martin Stewart
f1c59a91a1 Update roadmap.md 2017-03-22 22:03:06 +02:00
William Martin Stewart
74c573ef04 Update README.md 2017-03-22 22:01:44 +02:00
Matthew Mosesohn
5f082bc0e5 Merge pull request #1172 from mattymo/dnsmasq_upgrade
Use checksum of dnsmasq config to trigger updates of dnsmasq
2017-03-22 18:00:10 +03:00
Matthew Mosesohn
0e3b7127b5 Merge pull request #1167 from mattymo/dnsmasq_when_deploying_master
Change wait for dnsmasq to skip if there are no kube-nodes in play
2017-03-22 17:59:56 +03:00
Brad Beam
5d3414a40b Setting defaults for docker log rotation 2017-03-22 09:40:10 -04:00
Roger Welin
f4638c7580 add iptables --flush to reset role 2017-03-22 11:10:24 +01:00
Matthew Mosesohn
8b0b500c89 Use checksum of dnsmasq config to trigger updates of dnsmasq
Allows config changes made by Ansible to restart dnsmasq deployment
2017-03-22 13:03:55 +03:00
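A rough manual illustration of the checksum trick (file path and deployment name are placeholders; the real templates compute the checksum in Ansible): changing the annotation value changes the pod template, which makes the Deployment roll its pods:

```
sum=$(sha256sum /etc/dnsmasq.d/01-kube-dns.conf | cut -d' ' -f1)
kubectl -n kube-system patch deployment dnsmasq \
  -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/config\":\"$sum\"}}}}}"
```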
Matthew Mosesohn
04746fc4d8 Merge pull request #1163 from mattymo/kvm_setup
Add KVM hypervisor playbook to contrib
2017-03-22 12:31:14 +03:00
Matthew Mosesohn
463ef3f8bc Merge pull request #1168 from mattymo/disable_download_delegate
Disable download_run_once and download_localhost for most CI scenarios
2017-03-22 12:19:24 +03:00
Josh Lothian
5e2f78424f 1169 - fix docker systemd unit
The docker-network environment file masks the new values
put into /etc/systemd/system/docker.service.d/flannel-options.conf
to renumber the docker0 to work correctly with flannel.
2017-03-21 15:22:14 -05:00
Matthew Mosesohn
3889c2e01c Add KVM hypervisor playbook to contrib
Optional Ansible playbook for preparing a host for running Kargo.
This includes creation of a user account, some basic packages,
and sysctl values required to allow CNI networking on a libvirt network.
2017-03-21 19:50:01 +03:00
Matthew Mosesohn
1887e984a0 Change wait for dnsmasq to skip if there are no kube-nodes in play
Also changed unnecessary delay to a max timeout (now defaulting to 1s sleep
between tries)

Also rename play_hosts to ansible_play_hosts
2017-03-21 18:55:22 +03:00
Matthew Mosesohn
a495bbc1db Disable download_run_once and download_localhost for most CI scenarios
This adds time to deployment, so we should only test it sparingly during
daily master.
2017-03-21 16:41:30 +03:00
Matthew Mosesohn
cd429d3654 Merge pull request #1159 from holser/etcd_backup_restore
Backup etcd
2017-03-21 13:07:44 +03:00
Matthew Mosesohn
771aef0b44 Merge pull request #1162 from holser/bump_coreos_ci
Bump CoreOS stable to latest version
2017-03-20 17:45:04 +03:00
Matthew Mosesohn
f7ef452d8a Merge pull request #1160 from mattymo/simpler_idempotency
Make reset check on idempotency check optional
2017-03-20 17:04:51 +03:00
Matthew Mosesohn
0f64f8db90 Merge pull request #1155 from mattymo/helm
Add helm deployment
2017-03-20 17:00:06 +03:00
Sergii Golovatiuk
c04a6254b9 Backup etcd data before restarting etcd
etcd is crucial part of kubernetes cluster. Ansible restarts etcd on
reconfiguration. Backup helps operator to restore cluster manually in
case of any issues.

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-03-20 14:50:52 +01:00
Sergii Golovatiuk
485e17d6ed Bump CoreOS stable to latest version
1298.6.0 fixes some sporadic network issues. It also includes docker
1.12.6, which brings several stability fixes for kubernetes.

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-03-20 14:31:33 +01:00
Vincent Schwarzer
952ab03d2a Fixes for AWS Terraform Deployment 2017-03-20 12:08:17 +01:00
Matthew Mosesohn
bbb524018e Make reset check on idempotency check optional
By default we do not test reset.yml now.
2017-03-20 13:16:58 +03:00
Matthew Mosesohn
859c08620b Merge pull request #1105 from VincentS/aws_deployment
AWS Terraform for Kargo
2017-03-20 12:55:11 +03:00
Antoine Legrand
f6cd42e6e0 Merge pull request #1158 from rutsky/patch-6
limit jinja2 version to <2.9
2017-03-19 23:42:11 +01:00
Vladimir Rutsky
61ee67d612 limit jinja2 version to <2.9
Ansible 2.2.1 requires jinja2<2.9, see <https://github.com/ansible/ansible/blob/v2.2.1.0-1/setup.py#L25>,
but without explicitly limiting the upper jinja2 version here pip ignores
the Ansible requirements and installs the latest available jinja2
(pip is not very smart here), which is incompatible with
Ansible 2.2.1.
With an incompatible jinja2 version "ansible-vault create" (and probably other parts)
fails with:
  ERROR! Unexpected Exception: The 'jinja2<2.9' distribution was not found
  and is required by ansible
This upper limit should be removed in 2.2.2 release, see:
<978311bf3f>
2017-03-20 01:33:08 +03:00
Matthew Mosesohn
939c1def5d Merge pull request #1152 from mattymo/redhat_weave
Fix weave on RHEL deployment
2017-03-19 16:45:20 +03:00
Matthew Mosesohn
b7ab80e8ea Merge pull request #1149 from mattymo/centos-retries
Retry yum/apt/rpm download commands
2017-03-18 11:12:36 +03:00
Matthew Mosesohn
b69d4b0ecc Add helm deployment 2017-03-17 20:24:41 +03:00
Matthew Mosesohn
2f437d7452 Merge pull request #1157 from rutsky/remove-change-k8s-version
remove obsolete script
2017-03-17 20:23:34 +03:00
Vladimir Rutsky
d761216ec1 remove obsolete script
Currently Kubernetes version can be selected using "kube_version" variable.
2017-03-17 20:09:36 +03:00
Matthew Mosesohn
088e9be931 Merge pull request #1156 from rutsky/patch-5
fix jinja package name
2017-03-17 20:08:36 +03:00
Vladimir Rutsky
32ecac6464 fix jinja package name
Jinja 2.* releases are published under the `Jinja2` name.
2017-03-17 20:07:49 +03:00
Matthew Mosesohn
7760c3e4aa Retry yum/apt/rpm download commands, fix succeeded filter 2017-03-17 18:56:26 +03:00
Matthew Mosesohn
3cfb76e57f Merge pull request #1146 from mattymo/resolvconf_optimize
Condense resolvconf sources before starting loop
2017-03-17 18:42:32 +03:00
Matthew Mosesohn
e1faeb0f6c Fix weave on RHEL deployment
Reduce retry delay checking weave
Always load br_netfilter module
2017-03-17 18:17:47 +03:00
Matthew Mosesohn
25bff851dd Merge pull request #1136 from adidenko/fix-calico-policy-order
Move calico-policy-controller into separate role
2017-03-17 17:32:14 +03:00
Aleksandr Didenko
3a39904011 Move calico-policy-controller into separate role
By default Calico CNI does not create any network access policies
or profiles if 'policy' is enabled in the CNI config. And without any
policies/profiles, network access to/from pods is blocked.

K8s related policies are created by calico-policy-controller in
that case. So we need to start it as soon as possible, before any
real workloads.

This patch also fixes kube-api port in calico-policy-controller
yaml template.

Closes #1132
2017-03-17 11:21:52 +01:00
Matthew Mosesohn
7e1fbfba64 Merge pull request #1147 from mattymo/calico-update
Update calico to 1.1.0-rc8
2017-03-17 13:17:41 +03:00
Matthew Mosesohn
a52064184e Condense resolvconf sources before starting loop 2017-03-17 13:06:56 +03:00
Matthew Mosesohn
b4a1ba828a Merge pull request #1148 from VincentS/patch-1
Fixed Formatting / Ansible-Playbook Command Upgrade Cluster
2017-03-16 19:55:59 +03:00
Vincent Schwarzer
c8c6105ee2 Fixed Formatting / Ansible-Playbook Command
- added -b and fixed a typo in the ansible-playbook command
- fixed formatting issue
2017-03-16 17:53:48 +01:00
Matthew Mosesohn
0b49eeeba3 Update calico to 1.1.0-rc8
Fixes bug in CentOS/RHEL in felix related to overlayfs driver.
2017-03-16 19:23:36 +03:00
Matthew Mosesohn
b0830f0cd7 Merge pull request #1087 from bradbeam/openstack
Adding openstack domain id
2017-03-16 17:53:14 +03:00
Matthew Mosesohn
565d4a53b0 Merge pull request #1108 from idcrook/issue_1107-docker-versioning
Adding Docker CE 'stable' and 'edge' version packages
2017-03-16 16:32:13 +03:00
Matthew Mosesohn
9624662bf6 Merge pull request #1141 from mattymo/idempotency2
More idempotency fixes
2017-03-16 12:29:42 +03:00
Matthew Mosesohn
8195957461 Merge branch 'master' into idempotency2 2017-03-16 09:29:43 +03:00
Matthew Mosesohn
02fed4a082 Merge pull request #1138 from mattymo/idempotency-fixes
Idempotency fixes for etcd certs and resolvconf tasks
2017-03-16 09:20:28 +03:00
Bogdan Dobrelya
34ecf4ea51 Merge pull request #1109 from pcm32/feature/fixTerraformOS
Restores working order of contrib/terraform/openstack
2017-03-15 17:15:35 +01:00
Matthew Mosesohn
a422ad0d50 More idempotency fixes
Fixed sync_tokens fact
Fixed sync_certs for k8s tokens fact
Disabled register docker images changability
Fixed CNI dir permission
Fix idempotency for etcd pre upgrade checks
2017-03-15 19:06:39 +03:00
Matthew Mosesohn
096d96e344 Merge pull request #1137 from holser/bug/1135
Turn on iptables for flannel
2017-03-15 17:06:42 +03:00
Bogdan Dobrelya
e61310bc89 Merge pull request #1140 from VincentS/jinja28
Added Jinja 2.8 to Docs
2017-03-15 13:18:53 +01:00
Vincent Schwarzer
111ca9584e Added Jinja 2.8 to Docs
Added Jinja 2.8 Requirements to docs and pip requirements file which
is needed to run the current Ansible Playbooks.
2017-03-15 13:11:09 +01:00
Matthew Mosesohn
7d35c4592c Merge pull request #1139 from VincentS/docu_fix
Fix for CoreOS Docu
2017-03-15 15:06:41 +03:00
Vincent Schwarzer
3e8386cbf3 Fixed CoreOS Docu
The CoreOS docs were referencing an outdated bootstrap playbook that
is now part of kargo itself.
2017-03-15 13:04:01 +01:00
Matthew Mosesohn
4354162067 Merge pull request #1080 from VincentS/Granular_Auth_Control
Granular authentication Control
2017-03-15 13:12:51 +03:00
Matthew Mosesohn
a62a444229 Merge pull request #1117 from mattymo/etcd3-upgrade
Migrate k8s data to etcd3 api store
2017-03-15 12:56:06 +03:00
Matthew Mosesohn
f6b72fa830 Make resolvconf preinstall idempotent 2017-03-15 01:20:13 +04:00
Sergii Golovatiuk
9667e8615f Turn on iptables for flannel
Closes: #1135
Closes: #1026
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-03-14 17:54:55 +01:00
Vincent Schwarzer
026da060f2 Granular authentication Control
It is now possible to deactivate selected authentication methods
(basic auth, token auth) inside the cluster by adding or
removing the required arguments to the Kube API Server and generating
the secrets accordingly.

The x509 authentication is currently not optional because disabling it
would affect the kubectl clients deployed on the master nodes.
2017-03-14 16:57:35 +01:00
Matthew Mosesohn
3feab1cb2d Merge pull request #1134 from mattymo/1.6-support
Explicitly set cni-bin-dir
2017-03-14 17:53:08 +03:00
Matthew Mosesohn
804e9a09c0 Migrate k8s data to etcd3 api store
Default backend is now etcd3 (was etcd2).
The migration process consists of the following steps:
* check if migration is necessary
* stop etcd on first etcd server
* run migration script
* start etcd on first etcd server
* stop kube-apiserver until configuration is updated
* update kube-apiserver
* purge old etcdv2 data
2017-03-14 17:50:20 +03:00
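A hedged sketch of the core migration step on the first etcd server, following the step list in the commit above (service name and data dir are assumptions); the pre-checks, apiserver reconfiguration and etcdv2 purge are omitted:

    - name: Stop etcd before migrating the data store
      service:
        name: etcd
        state: stopped

    - name: Migrate etcdv2 keys into the v3 store
      command: etcdctl migrate --data-dir=/var/lib/etcd
      environment:
        ETCDCTL_API: "3"

    - name: Start etcd again
      service:
        name: etcd
        state: started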
Matthew Mosesohn
4c6829513c Fix etcd idempotency 2017-03-14 17:23:29 +03:00
Matthew Mosesohn
4038954f96 Merge pull request #1078 from VincentS/oidc_support
Added Support for OpenID Connect Authentication
2017-03-14 12:07:21 +03:00
Matthew Mosesohn
52a6dd5427 Explicitly set cni-bin-dir 2017-03-13 20:13:21 +03:00
Matthew Mosesohn
c301dd5d94 Merge pull request #1118 from mattymo/noderolelabels
Add node labels in kubelet
2017-03-13 19:04:21 +03:00
Connz
28473e919f Fixed nova command to get available flavors
The nova command for getting the flavors is not
nova list-flavors
but
nova flavor-list
2017-03-09 11:10:25 +01:00
Cesarini, Daniele
69636d2453 Adding /O=system:masters to admin certificate
Issue #1125. Make RBAC authorization plugin work out of the box.
"When bootstrapping, superuser credentials should include the system:masters group, for example by creating a client cert with /O=system:masters. This gives those credentials full access to the API and allows an admin to then set up bindings for other users."
2017-03-08 14:42:25 +00:00
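For illustration, a hedged sketch of how such a superuser client certificate request could be generated with the system:masters organization (the key and CSR file names are hypothetical, not necessarily what Kargo uses):

    - name: Create an admin CSR carrying the system:masters organization
      command: >
        openssl req -new -key admin-key.pem -out admin.csr
        -subj "/CN=kube-admin/O=system:masters"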
Antoine Legrand
7cb7eee29d Merge pull request #1116 from kubernetes-incubator/contrib_docs
Reference external documentation sources
2017-03-07 13:33:25 +01:00
David Crook
a52e1069ce updated debian and ubuntu package names based on testing
docker-ce is not the .deb package until the repositories are switched over to the new "downloads" docker webserver
2017-03-06 16:54:39 -07:00
David Crook
a8e5002aeb removed irrelevant comments 2017-03-06 16:02:53 -07:00
David Crook
c515a351c6 Merge branch 'master' into issue_1107-docker-versioning 2017-03-06 16:00:31 -07:00
Antoine Legrand
7777b30693 Merge pull request #1120 from bradbeam/fixtags
Removing cloud_provider tag to fix scenario where cloud_provider is n…
2017-03-06 19:00:41 +01:00
Brad Beam
d04fbf3f78 Removing cloud_provider tag to fix scenario where cloud_provider is not defined 2017-03-06 10:52:38 -06:00
Matthew Mosesohn
54207877bd Add node labels in kubelet
Related-issue: https://github.com/kubernetes/community/issues/300
Upgraded nodes do not obtain labels automatically.
See https://github.com/kubernetes/kubernetes/pull/29459 for more details.
2017-03-06 17:18:42 +03:00
Vincent Schwarzer
3c6b1480b8 Rewrote AWS Terraform for Kargo
Rewrote AWS Terraform deployment for AWS Kargo. It supports now
multiple Availability Zones, AWS Loadbalancer for Kubernetes API,
Bastion Host, ...

For more information see README
2017-03-06 12:52:02 +01:00
Vincent Schwarzer
b075960e3b Added Support for OpenID Connect Authentication
To use OpenID Connect Authentication, besides deploying an OpenID Connect
Identity Provider it is necessary to pass additional arguments to the Kube API Server.
These required arguments were added to the kube apiserver manifest.
2017-03-06 12:40:35 +01:00
Antoine Legrand
85596c2610 Merge pull request #1045 from bradbeam/vsphere
Adding vsphere cloud provider support
2017-03-06 12:34:05 +01:00
Antoine Legrand
0613e3c24d Reference external documentation sources 2017-03-06 12:25:54 +01:00
Antoine Legrand
ee5f009b95 Merge pull request #1112 from mattymo/skip_vault_if_disabled
Disable vault role properly on ansible 2.2.0
2017-03-06 11:27:53 +01:00
Antoine Legrand
d76816d043 Merge pull request #1115 from mattymo/etcd-phases
Remove standalone etcd specific play, cleanup host mode
2017-03-06 11:21:08 +01:00
Matthew Mosesohn
45274560ec Disable vault role properly on ansible 2.2.0
when condition does not seem to work correctly at playbook
level for ansible 2.2.0.
2017-03-05 00:43:01 +04:00
Matthew Mosesohn
02a8e78902 Remove standalone etcd specific play, cleanup host mode
Now etcd role can optionally disable etcd cluster setup for faster
deployment when it is combined with etcd role.
2017-03-04 00:34:26 +04:00
Matthew Mosesohn
8f3d9e93ce Merge pull request #1111 from mattymo/use_find_for_certs
Use find module for checking for certificates
2017-03-03 20:08:33 +03:00
Matthew Mosesohn
a244aca6a4 Merge pull request #1113 from VincentS/AWS_IAM_PROFILES
Added Missing AWS IAM Profiles and Policies
2017-03-03 17:35:55 +03:00
Vincent Schwarzer
5ae85b9de5 Added Missing AWS IAM Profiles and Policies
The AWS IAM profiles and policies required to run Kargo on AWS
are no longer hosted in the kubernetes main repo since kube-up got
deprecated. Hence we have to move the files into the kargo repository.
2017-03-03 15:30:07 +01:00
Matthew Mosesohn
d176818c44 Use find module for checking for certificates
Also generate certs only when absent on master (rather than
when absent on target node)
2017-03-03 16:21:01 +03:00
Bogdan Dobrelya
aeec0f9a71 Merge pull request #1071 from vijaykatam/atomic_host
Add support for atomic host
2017-03-03 13:03:59 +01:00
Matthew Mosesohn
08a02af833 Merge pull request #1075 from VincentS/loadbalancer_aws
Possibility to add Loadbalancers without static IP (e.g. AWS ELB) #1074
2017-03-03 14:07:22 +03:00
Pablo Moreno
cf26585cff Restores working order of contrib/terraform/openstack, includes vault group and avoids group_vars/k8s-cluster.yml 2017-03-02 23:58:07 +00:00
David Crook
3f4a375ac4 first pass at adding 'stable' and 'edge' version packages
- Only have ubuntu to test on
  - fedora and redhat are placeholders/guesses
  - the "old" package repositories seem to have the "new" CE version which is `1.13.1` based
- `docker-ce` looks like it is named as a backported `docker-engine` package in some
  places

- Did not change the `defaults` version anywhere, so should work as before
- Did not point to new package repositories, as existing ones have the new packages.
2017-03-02 13:48:09 -07:00
Matthew Mosesohn
cc632f2713 Merge pull request #1099 from rutsky/patch-4
fix inline verbatim blocks formatting in markdown
2017-03-02 17:46:52 +03:00
Matthew Mosesohn
5ebc9a380c Merge pull request #1060 from holser/etcdv3
Allow to specify etcd backend for kube-api
2017-03-02 17:24:09 +03:00
Matthew Mosesohn
6453650895 Merge pull request #1093 from mattymo/scaledns
Add autoscalers for dnsmasq and kubedns
2017-03-02 16:58:56 +03:00
Matthew Mosesohn
9cb12cf250 Add autoscalers for dnsmasq and kubedns
By default kubedns and dnsmasq scale when installed.
Dnsmasq is no longer a daemonset. It is now a deployment.
Kubedns is no longer a replication controller. It is now a deployment.
Minimum replicas is two (to enable rolling updates).

Reduced memory requirements for dnsmasq and kubedns
2017-03-02 13:44:22 +03:00
Vincent Schwarzer
68e8d74545 Changes based on feedback (additional ansible checks) 2017-03-02 11:04:10 +01:00
Vincent Schwarzer
fc054e21f6 Modified how adding LB for the Kube API is handled (AWS)
Until now it was not possible to add an API load balancer
without a static IP address. But certain load balancers,
like the AWS Elastic Load Balancer, don't have a fixed IP address.
With this commit it is possible to add these kinds of load balancers
to the Kargo deployment.
2017-03-02 11:04:10 +01:00
Matthew Mosesohn
3256f4bc0f Merge pull request #1103 from mattymo/upgradesyntax
Add upgrade-cluster and reset playbooks to syntax check
2017-03-02 12:41:10 +03:00
Matthew Mosesohn
0e9ad8f2c7 Merge pull request #1100 from retr0h/host-vars
Added host_vars to gitignore
2017-03-02 12:32:22 +03:00
Matthew Mosesohn
efbb5b2db3 Merge pull request #1101 from retr0h/docker-1.13.1
Use docker-engine 1.13.1
2017-03-02 12:31:58 +03:00
Matthew Mosesohn
85ed4157ff Add upgrade-cluster and reset playbooks to syntax check 2017-03-02 09:37:16 +04:00
John Dewey
a43569c8a5 Use docker-engine 1.13.1
The default version of Docker was switched to 1.13 in #1059.  This
change also bumped ubuntu from installing docker-engine 1.13.0 to
1.13.1.  This PR updates os families which had 1.13 defined, but
were using 1.13.0.

The impetus for this change is an issue running tiller 1.2.3 on
docker 1.13.0.  See discussion [1][2].

[1] https://github.com/kubernetes/helm/issues/1838
[2] https://github.com/kubernetes-incubator/kargo/pull/1100
2017-03-01 12:53:39 -08:00
John Dewey
e771d0ea39 Updated gitignore pattern per review 2017-03-01 12:45:24 -08:00
John Dewey
9073eba405 Added host_vars to gitignore
Since inventory ships with kargo, the ability to change functionality
without having a dirty git index is nice. An example we wish to change
is the version of docker deployed to our CentOS systems. Due to an issue
with tiller and docker 1.13, we wish to deploy docker 1.12.  Since this
change does not belong in Kargo, we wish to locally override the docker
version, until the issue is sorted.
2017-03-01 11:08:35 -08:00
Matthew Mosesohn
a5cd73d047 Merge pull request #959 from galthaus/host-mode-restart
Restart kube-controller for host_resolvconf mode
2017-03-01 20:54:21 +03:00
Vijay Katam
a0b1eda1d0 Add support for atomic host
Updates based on feedback

Simplify checks for file exists

remove invalid char

Review feedback. Use regular systemd file.

Add template for docker systemd atomic
2017-03-01 09:38:19 -08:00
Vladimir Rutsky
ad80e09ac5 fix inline verbatim blocks formatting in markdown 2017-03-01 17:50:28 +04:00
Antoine Legrand
77e5171679 Merge pull request #1076 from VincentS/etcd_openssl_count_fix
Fixed counter in ETCD Openssl.conf
2017-03-01 14:17:27 +01:00
Bogdan Dobrelya
0c66418dad Merge pull request #1090 from artem-panchenko/calicoAcceptHostEndpointConnections
Allow connections from pods to local endpoints
2017-03-01 13:37:05 +01:00
Bogdan Dobrelya
45a9eac7d2 Merge pull request #1097 from kubernetes-incubator/mattymo-patch-1
Fix vault role in upgrade-cluster.yml
2017-03-01 09:21:02 +01:00
Matthew Mosesohn
838adf7475 Fix vault role in upgrade-cluster.yml 2017-03-01 11:19:38 +03:00
Artem Panchenko
fa05d15093 Allow connections from pods to local endpoints
By default Calico blocks traffic from endpoints
to the host itself by using an iptables DROP
action. It could lead to a situation when service
has one alive endpoint, but pods which run on
the same node can not access it. Changed the action
to RETURN.
2017-03-01 09:21:02 +02:00
Antoine Legrand
1122740bd7 Merge pull request #1094 from retr0h/vagrant-flannel
Ensure vagrant uses flannel
2017-03-01 00:07:24 +01:00
John Dewey
f877278075 Ensure vagrant uses flannel
The Vagrantfile is setup to use flannel.  The default network
was changed to Calico (#1031).  However, the Vagrantfile was
not updated to reflect this.  Ensuring the Vagrantfile remains
functional on master, until someone decides to make it work
with Calico.
2017-02-28 13:31:28 -08:00
Matthew Mosesohn
cbaa6abdd0 Merge pull request #1066 from bradbeam/rkt-kubelet-cloudprovider
Adding KUBELET_CLOUDPROVIDER to kubelet.rkt.service
2017-02-28 20:02:56 +03:00
Matthew Mosesohn
76a4803292 Merge pull request #1084 from mattymo/fixubunturktjob
Remove upgrade from the ubuntu-rkt-sep CI job
2017-02-28 20:02:05 +03:00
Antoine Legrand
b286b2eb31 Merge pull request #1083 from holser/api_port
Change kube-api default port from 443 to 6443
2017-02-28 17:57:35 +01:00
Sergii Golovatiuk
295103adc0 Allow to specify etcd backend for kube-api
The Kubernetes project is about to make etcdv3 the default storage engine in
1.6. This patch allows specifying a particular backend for
kube-apiserver. The user may force the option to etcdv3 for a new environment.
At the same time, if the environment uses v2 it will continue using it
until the user decides to upgrade to v3.

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-02-28 17:13:22 +01:00
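A sketch of what this exposes, assuming a group_vars variable along these lines (the exact variable name is an assumption); the value ends up on the apiserver command line as --storage-backend:

    # etcd2 keeps the old behaviour; etcd3 opts into the new store.
    kube_apiserver_storage_backend: etcd3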
Sergii Golovatiuk
d31c040dc0 Change kube-api default port from 443 to 6443
The operator can specify any port for kube-api (6443 by default). This helps in
cases where some pods, such as Ingress, require 443 exclusively.

Closes: 820
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-02-28 15:45:35 +01:00
Brad Beam
8a63b35f44 Adding flag for docker container in kubelet w/ rkt 2017-02-28 07:55:12 -06:00
Brad Beam
bfff06d402 Adding KUBELET_CLOUDPROVIDER to kubelet.rkt.service 2017-02-28 06:29:35 -06:00
Matthew Mosesohn
21d3d75827 Merge pull request #1086 from bradbeam/lowermem
Lower default memory requests
2017-02-28 13:37:28 +03:00
Matthew Mosesohn
2c3538981a Merge pull request #1077 from holser/bug/1073
Make etcd data dir configurable.
2017-02-28 13:19:20 +03:00
Brad Beam
30a9899262 Making openstack domain name optional 2017-02-27 21:19:27 -06:00
Xavier Lange
dd10b8a27c Bug fix: support kilo's keystone requirement for domain-name, extracts from ENV var 2017-02-27 21:18:30 -06:00
Brad Beam
dbf13290f5 Updating vsphere cloud provider support 2017-02-27 15:08:04 -06:00
Sergii Golovatiuk
f9ff93c606 Make etcd data dir configurable.
Closes: #1073
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-02-27 21:35:51 +01:00
Jan Jungnickel
df476b0088 Initial support for vsphere as cloud provider 2017-02-27 12:51:41 -06:00
Brad Beam
56664b34a6 Lower default memory requests
This is to address out of memory issues on CI as well as help
fit deployments for people starting out with kargo on smaller
machines
2017-02-27 10:53:43 -06:00
Matthew Mosesohn
efb45733de Remove upgrade from the ubuntu-rkt-sep CI job 2017-02-27 18:16:22 +03:00
Vincent Schwarzer
0cbc3d8df6 Fixed counter in ETCD Openssl.conf
When an apiserver_loadbalancer_domain_name is added to the openssl.conf,
the counter is not increased correctly. This didn't seem to have an
effect in the current kargo version.
2017-02-27 12:01:09 +01:00
Bogdan Dobrelya
27b4e61c9f Merge pull request #946 from neith00/master
Using the command module instead of raw
2017-02-27 10:59:53 +01:00
Bogdan Dobrelya
069606947c Merge pull request #1063 from bogdando/fix
Align LB defaults with the HA docs
2017-02-27 10:14:42 +01:00
Matthew Mosesohn
6ae6b7cfcd Merge pull request #1072 from gkopylov/fix_doc_issue
Fix cluster.yml file extension in docs
2017-02-26 15:12:45 +03:00
Kopylov German
d197ce230f Fix cluster.yml file extension in docs 2017-02-26 13:42:52 +03:00
Matthew Mosesohn
c6cb0d3984 Merge pull request #1069 from holser/increase_ssl_ttl
Increase SSL TTL to 3650 days
2017-02-25 10:47:30 +03:00
Sergii Golovatiuk
00cfead9bb Increase SSL TTL to 3650 days
In real scenarios 365 days is a short period of time. 3650 days is good
enough for long-running k8s environments
2017-02-24 15:38:13 +01:00
Antoine Legrand
20b1e4db0b Merge pull request #1068 from holser/uncomment_all.yml
Uncomment one key/value in all.yml
2017-02-24 12:54:51 +01:00
Sergii Golovatiuk
a098a32f7d Uncomment one key/value in all.yml
all.yaml shouldn't be empty otherwise ansible won't be able to merge 2
dicts.

Related bug: ansible/issues/21889
2017-02-24 12:25:45 +01:00
Antoine Legrand
9ee9a1033f Merge pull request #1067 from kubernetes-incubator/ant31-patch-2
Uncommented group_vars variables
2017-02-24 11:45:17 +01:00
Antoine Legrand
eb904668b2 Uncommented group_vars variables 2017-02-24 10:54:25 +01:00
Bogdan Dobrelya
75b69876a3 Merge pull request #1064 from kubernetes-incubator/rework_vars
Add default var role
2017-02-23 21:48:23 +01:00
Antoine Legrand
08d9d24320 Add subnet var in tests 2017-02-23 15:14:28 +01:00
Antoine Legrand
c7d61af332 Comment all variables in group_vars 2017-02-23 14:02:57 +01:00
Antoine Legrand
5f7607412b Add default var role 2017-02-23 12:07:17 +01:00
Antoine Legrand
403fea39f7 Merge pull request #829 from bogdando/opts
Rework group/role vars
2017-02-23 10:39:43 +01:00
Bogdan Dobrelya
f2a4619c57 Align LB defaults with the HA docs
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-02-23 10:32:44 +01:00
Bogdan Dobrelya
712872efba Rework inventory all by real groups' vars
* Leave all.yml to keep only optional vars
* Store groups' specific vars by existing group names
* Fix optional vars casted as mandatory (add default())
* Fix missing defaults for an optional IP var
* Relink group_vars for terraform to reflect changes

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-02-23 09:43:42 +01:00
Matthew Mosesohn
8cbf3fe5f8 Merge pull request #1020 from mattymo/synthscale
Add synthetic scale deployment mode
2017-02-22 19:15:46 +03:00
Matthew Mosesohn
02137f8cee Merge pull request #1059 from holser/docker_iptables
iptables switch for docker
2017-02-22 08:23:58 +03:00
Matthew Mosesohn
43ea281a7f Merge pull request #1061 from ivan4th/fix-shell-vars
Fix shell special vars
2017-02-22 08:23:44 +03:00
Ivan Shvedunov
0006e5ab45 Fix shell special vars 2017-02-21 22:22:40 +03:00
Matthew Mosesohn
d821448e2f Merge branch 'master' into synthscale 2017-02-21 22:17:43 +03:00
Sergii Golovatiuk
3bd46f7ac8 Switch docker to 1.13
- Remove variable dup for Ubuntu
- Update Docker to 1.13
2017-02-21 19:10:34 +01:00
Sergii Golovatiuk
ebf9daf73e Statically disable iptables management for docker
Docker 1.13 changes the behaviour of iptables defaults from allow
to drop. This patch disables docker's iptables management as it was
in Docker 1.12 [1]

[1] https://github.com/docker/docker/pull/28257

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-02-21 19:10:34 +01:00
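A sketch of the effect, assuming the daemon option is injected through a Kargo docker options variable (the variable name is an assumption); --iptables=false is the stock Docker daemon flag that restores the 1.12 behaviour:

    # Appended to the docker daemon command line.
    docker_options: "--iptables=false"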
Matthew Mosesohn
2ba66f0b26 Change coreos-alpha dns mode to host_resolvconf 2017-02-21 18:14:42 +03:00
Matthew Mosesohn
0afadb9149 Merge pull request #1046 from skyscooby/pedantic-syntax-cleanup
Cleanup legacy syntax, spacing, files all to yml
2017-02-21 17:03:16 +03:00
Matthew Mosesohn
19d0159e33 Raise timeout for get netchecker agents 2017-02-21 14:48:25 +03:00
Matthew Mosesohn
d4f15ab402 Merge pull request #1055 from mattymo/etcd-preupgrade-speedup
speed up etcd preupgrade check
2017-02-21 12:51:42 +03:00
Matthew Mosesohn
527e030283 Merge pull request #1058 from holser/update_calico_cni
Update calico-cni to 1.5.6
2017-02-20 23:09:47 +03:00
Matthew Mosesohn
634e6a381c Merge pull request #1043 from rutsky/patch-3
fix typos in azure docs
2017-02-20 20:24:05 +03:00
Matthew Mosesohn
042d094ce7 Merge pull request #1034 from rutsky/fix-openssl-lb-index
fix load balancer DNS name index evaluation in openssl.conf
2017-02-20 20:23:26 +03:00
Matthew Mosesohn
3cc1491833 Merge branch 'master' into pedantic-syntax-cleanup 2017-02-20 20:19:38 +03:00
Matthew Mosesohn
d19e6dec7a speed up etcd preupgrade check 2017-02-20 20:18:10 +03:00
Matthew Mosesohn
6becfc52a8 Merge pull request #1056 from mattymo/k8s153
Update Kubernetes to v1.5.3
2017-02-20 20:13:40 +03:00
Sergii Golovatiuk
a2cbbc5c4f Update calico-cni to 1.5.6
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-02-20 17:14:45 +01:00
Matthew Mosesohn
10173525d8 Update Kubernetes to v1.5.3 2017-02-20 18:14:56 +03:00
Antoine Legrand
ccdb72a422 Merge pull request #1053 from hvnsweeting/master
Update Doc
2017-02-20 10:42:16 +01:00
Hung Nguyen Viet
df96617d3c Only 1 key needed 2017-02-20 14:54:20 +07:00
Antoine Legrand
09aa3e0e79 Merge pull request #1052 from hvnsweeting/master
Put Ansible requirements first
2017-02-20 08:44:16 +01:00
Hung Nguyen Viet
a673e97f02 Put Ansible requirements first
And re-phrase all sentences to passive tense
2017-02-20 14:39:51 +07:00
Matthew Mosesohn
43e86921e0 pin coreos-alpha to 1325 2017-02-19 16:23:35 +03:00
Antoine Legrand
ad58e08a41 Merge pull request #1049 from alop/selinux
Safe disable SELinux
2017-02-19 10:26:01 +01:00
Abel Lopez
0bfc2d0f2f Safe disable SELinux
Sometimes, a sysadmin might outright delete the SELinux rpms and
delete the configuration. This causes the selinux module to fail
with
```
IOError: [Errno 2] No such file or directory: '/etc/selinux/config'\n",
"module_stdout": "", "msg": "MODULE FAILURE"}
```

This simply checks that /etc/selinux/config exists before we try
to set it Permissive.

Update from feedback
2017-02-18 11:54:25 -08:00
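A minimal sketch of the guard described in the commit above, using the stock Ansible stat and selinux modules (task names are illustrative):

    - name: Check whether the SELinux config file exists
      stat:
        path: /etc/selinux/config
      register: selinux_config

    - name: Set SELinux to permissive only if it is actually installed
      selinux:
        policy: targeted
        state: permissive
      when: selinux_config.stat.exists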
Matthew Mosesohn
475a42767a Suppress logging for download image
This generates too much output and during upgrade scenarios
can bring us over the 4mb limit.
2017-02-18 19:10:26 +04:00
Matthew Mosesohn
ce4eefff6a Use first kube-master to check results 2017-02-18 14:11:51 +04:00
Matthew Mosesohn
82b247d1a4 Adapt advanced network checker for scale
Skip nodes not in ansible play (via --limit)
2017-02-18 14:09:57 +04:00
Matthew Mosesohn
a21eb036ee Add no_log to cert tar tasks
This works around 4MB limit for gitlab CI runner.
2017-02-18 14:09:57 +04:00
Matthew Mosesohn
9c1701f2aa Add synthetic scale deployment mode
New deploy modes: scale, ha-scale, separate-scale
Creates 200 fake hosts for deployment with fake hostvars.

Useful for testing certificate generation and propagation to other
master nodes.

Updated test cases descriptions.
2017-02-18 14:09:55 +04:00
Andrew Greenwood
fd17c37feb Regex syntax changes in yml mode 2017-02-17 17:30:39 -05:00
Andrew Greenwood
cde5451e79 Syntax Bugfix 2017-02-17 17:08:44 -05:00
Andrew Greenwood
ca9ea097df Cleanup legacy syntax, spacing, files all to yml
Migrate older inline= syntax to pure yml syntax for module args so as to be consistent with most of the rest of the tasks
Clean up some spacing in various files
Rename some files named yaml to yml for consistency
2017-02-17 16:22:34 -05:00
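An illustrative before/after of the syntax migration described in the commit above (the task itself is made up):

    # Before: inline module args
    - name: Create the kubernetes config dir
      file: path=/etc/kubernetes state=directory

    # After: pure YAML module args
    - name: Create the kubernetes config dir
      file:
        path: /etc/kubernetes
        state: directory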
Antoine Legrand
b84cc14694 Merge pull request #1029 from mattymo/graceful
Add graceful upgrade process
2017-02-17 21:24:32 +01:00
Vladimir Rutsky
a84175b3b9 fix typo: "infrastructore" 2017-02-17 23:27:38 +04:00
Vladimir Rutsky
438b4e9625 fix typos in azure docs 2017-02-17 21:39:22 +04:00
Matthew Mosesohn
a510e7b8f3 Use gce hostname as inventory name
Calico does not allow renaming hosts
2017-02-17 20:21:58 +03:00
Antoine Legrand
e16ebcad6e Merge pull request #1042 from holser/fix_facts
Fix fact tags
2017-02-17 17:56:29 +01:00
Sergii Golovatiuk
e91e58aec9 Fix fact tags
The Ansible playbook fails when tags are limited to "facts,etcd" or to
"facts". This patch allows running ansible-playbook to gather only the facts
that don't require calico/flannel/weave components to be verified. This
allows running ansible with 'facts,bootstrap-os' or just 'facts' to
gather facts that don't require specific components.

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-02-17 12:32:33 +01:00
Antoine Legrand
3629b9051d Merge pull request #1038 from rutsky/kubelet-mount-var-log
Mount host's /var/log into kubelet container
2017-02-17 10:26:12 +01:00
Antoine Legrand
ef919d963b Merge pull request #1040 from retr0h/vagrant-config
Better control instance sizing
2017-02-17 10:25:09 +01:00
Antoine Legrand
4545114408 Merge pull request #1037 from mattymo/coreos_fix
Fix references to CoreOS and Container Linux by CoreOS
2017-02-17 10:21:14 +01:00
Smaine Kahlouch
9ed32b9dd0 Merge pull request #1036 from rutsky/fix-kibana-default-base-url
fix typo in "kibana_base_url" variable name
2017-02-17 07:03:59 +01:00
John Dewey
45dbe6d542 Better control instance sizing
* Git ignore the user controlled config.rb.
* Ability to better control the number of instances running.
2017-02-16 13:09:34 -08:00
Vladimir Rutsky
bff955ff7e Mount host's /var/log into kubelet container
Kubelet is responsible for creating symlinks from /var/lib/docker to /var/log
to make the fluentd logging collector work.
However, without using the host's /var/log those links are invisible to fluentd.

This is done on rkt configuration too.
2017-02-16 22:31:05 +03:00
Matthew Mosesohn
80c0e747a7 Fix references to CoreOS and Container Linux by CoreOS
Fixes #967
2017-02-16 19:25:17 +03:00
Matthew Mosesohn
617edda9ba Adjust weave daemonset for serial deployment 2017-02-16 18:24:30 +03:00
Vladimir Rutsky
7ab04b2e73 fix typo in "kibana_base_url" variable name
This typo led to kibana_base_url being undefined, so Kibana used the
default base URL ("/"), which is incorrect with the default proxy-based
access.
2017-02-16 18:17:06 +03:00
Antoine Legrand
e89056a614 Merge pull request #1033 from rutsky/reset-confirmation
ask confirmation before running reset.yml playbook
2017-02-16 16:10:58 +01:00
Matthew Mosesohn
97ebbb9672 Add graceful upgrade process
Based on #718 introduced by rsmitty.

Includes all roles and all options to support deployment of
new hosts in case they were added to inventory.

Main difference here is that master role is evaluated first
so that master components get upgraded first.

Fixes #694
2017-02-16 17:18:38 +03:00
Vladimir Rutsky
c02213e4af force reset confirmation in CI 2017-02-16 16:35:01 +03:00
Smaine Kahlouch
73e0aeb4ca Merge pull request #1031 from mattymo/defaultcalico
Change default network plugin to Calico
2017-02-16 14:04:12 +01:00
Vladimir Rutsky
a1ec6f401c fix load balancer DNS name index evaluation in openssl.conf
Looks like OpenSSL still properly handles it, even with duplicated
"DNS.X" items.
2017-02-16 00:16:13 +03:00
Vladimir Rutsky
5337d37a1c ask confirmation before running reset.yml playbook 2017-02-15 21:05:46 +03:00
Matthew Mosesohn
d92d955aeb Merge pull request #985 from rutsky/check-mode-for-shell-commands
set "check_mode: on" for read-only "shell" steps that registers result
2017-02-15 17:53:41 +03:00
Matthew Mosesohn
7ac84d386c Merge pull request #1030 from rutsky/remove-swp
remove temporary file
2017-02-15 17:44:41 +03:00
Vladimir Rutsky
8397baa700 remove temporary file 2017-02-15 17:40:05 +03:00
Matthew Mosesohn
2d65554cb9 Change default network plugin to Calico 2017-02-15 16:15:22 +03:00
Matthew Mosesohn
64e40d471c Merge pull request #1028 from holser/ansible.cfg
Add timings to RECAP output.
2017-02-15 12:41:49 +03:00
Sergii Golovatiuk
c5ea29649b Add timings to RECAP output.
- Starting from version 2.0, ansible has 'callback_whitelist =
  profile_tasks'. It allows analyzing CI runs to find time regressions.
- Add skippy to CI's ansible.cfg

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-02-14 18:47:02 +01:00
Antoine Legrand
410438a0e3 Merge pull request #1008 from bradbeam/rkt-proxy
Adding support for proxy w/ rkt kubelet
2017-02-14 17:52:21 +01:00
Spencer Smith
fbaef7e60f specify grace period for draining 2017-02-14 18:51:13 +03:00
Spencer Smith
017a813621 first cut of an upgrade process 2017-02-14 18:51:13 +03:00
Brad Beam
4c891b8bb0 Adding support for proxy w/ rkt kubelet 2017-02-14 08:09:49 -06:00
Matthew Mosesohn
948d9bdadb Merge pull request #1019 from mattymo/issue1011
Update calico to v1.0.2
2017-02-14 14:01:25 +03:00
Matthew Mosesohn
b7258ec3bb Merge pull request #1013 from mattymo/remove_masqerade_all
Disable kube_proxy_masquerade_all
2017-02-14 14:00:29 +03:00
Antoine Legrand
93cb5a5bd6 Merge pull request #1027 from hvnsweeting/master
Multiples doc fixes
2017-02-14 11:39:22 +01:00
Hung Nguyen Viet
d8f46c4410 Highlight important action 2017-02-14 17:18:25 +07:00
Hung Nguyen Viet
d0757ccc5e Fix typo 2017-02-14 17:18:22 +07:00
Antoine Legrand
f4f730bd8a Merge pull request #1025 from holser/bug/961
Install pip on Ubuntu
2017-02-14 10:31:42 +01:00
Matthew Mosesohn
f5e27f1a21 Merge pull request #1021 from holser/remove_deprecated
Replace always_run with check_mode
2017-02-14 11:25:58 +03:00
Matthew Mosesohn
bb6415ddc4 Merge pull request #1015 from holser/rkt_ssl_ca_dirs
Set ssl_ca_dirs for rkt based on fact
2017-02-14 11:25:17 +03:00
Sergii Golovatiuk
2b6179841b Install pip on Ubuntu
- Refactor 'Check if bootstrap is needed' as an ansible loop. This allows
  adding new elements easily without refactoring. Add pip to the list.
- Refactor 'Install python 2.x' task to run once if any of the rc
  codes != 0. need_bootstrap is an array of hashes, so map is used to get a
  single array of rc statuses; the statuses are sorted, and the last element
  is taken and converted to bool.

Closes: #961
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-02-13 19:35:13 +01:00
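A hedged sketch of the loop-based bootstrap check described in the commit above (package names and the exact expressions are assumptions):

    - name: Check if bootstrap is needed
      raw: which {{ item }}
      register: need_bootstrap
      failed_when: false
      with_items:
        - python
        - pip

    - name: Install python and pip if any check returned a non-zero rc
      raw: apt-get update && apt-get install -y python-minimal python-pip
      when: (need_bootstrap.results | map(attribute='rc') | sort | last | int) > 0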
Antoine Legrand
e877cd2874 Merge pull request #1024 from holser/bug/961
Install pip on Ubuntu
2017-02-13 17:53:57 +01:00
Matthew Mosesohn
203ddfcd43 Merge pull request #1023 from mattymo/fix_dnsmasq_cleanup
Clean up dnsmasq purge task
2017-02-13 19:50:01 +03:00
Vladimir Rutsky
09847567ae set "check_mode: no" for read-only "shell" steps that registers result
"shell" step doesn't support check mode, which currently leads to failures,
when Ansible is being run in check mode (because Ansible doesn't run command,
assuming that command might have effect, and no "rc" or "output" is registered).

Setting "check_mode: no" allows to run those "shell" commands in check mode
(which is safe, because those shell commands doesn't have side effects).
2017-02-13 18:53:41 +03:00
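A small illustrative task showing the pattern (the command itself is arbitrary):

    - name: Read-only shell step that must also run in check mode
      shell: docker images -q
      register: docker_images
      check_mode: no
      changed_when: false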
Sergii Golovatiuk
732ae69d22 Install pip on Ubuntu
Closes: #961
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-02-13 16:27:09 +01:00
Greg Althaus
2b10376339 When resolv.conf changes during host_resolvconf mode, we need to
restart the controller to get the new file configuration.
I'm not fond of this form and would like a better way, but this
seems to "work".
2017-02-13 09:20:02 -06:00
Antoine Legrand
9667ac3baf Merge pull request #1022 from kubernetes-incubator/ant31-patch-1
Document gitlab-runner.sh
2017-02-13 15:40:34 +01:00
Matthew Mosesohn
b5be335db3 Clean up dnsmasq purge task 2017-02-13 17:30:15 +03:00
Antoine Legrand
d33945780d Document gitlab-runner.sh 2017-02-13 15:04:35 +01:00
Sergii Golovatiuk
5f4cc3e1de Replace always_run with check_mode
always_run was deprecated in Ansible 2.2 and will be removed in 2.4.
Ansible logs contain "[DEPRECATION WARNING]: always_run is deprecated.
Use check_mode = no instead". This patch fixes the deprecation.
2017-02-13 15:00:56 +01:00
Matthew Mosesohn
ec567bd53c Update calico to v1.0.2
Also calico-cni to v1.5.6, calico-policy to v0.5.2

Fixes: #1011
2017-02-13 15:39:25 +03:00
Sergii Golovatiuk
aeadaa1184 Set ssl_ca_dirs for rkt based on fact
Since the systemd kubelet.service has {{ ssl_ca_dirs }}, the fact should be
gathered before writing kubelet.service.

Closes: #1007
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-02-13 13:28:29 +01:00
Matthew Mosesohn
2f0f0006e3 Merge pull request #988 from mattymo/feat/rolling3
Add CI cases for testing upgrade from v2.0.1 release
2017-02-10 18:09:43 +03:00
Matthew Mosesohn
de047a2b8c Merge pull request #983 from vwfs/centos_kernel_upgrade
Add kernel upgrade for CentOS
2017-02-10 14:40:27 +03:00
Antoine Legrand
86a35652bb Merge pull request #1009 from mattymo/dnsmasq_updates
Enable reset of dnsmasq if manifest or config changes
2017-02-10 11:43:09 +01:00
Matthew Mosesohn
6ae70e03cb fixup upgrades for canal and weave 2017-02-10 13:27:41 +03:00
Matthew Mosesohn
2c532cb74d Disable kube_proxy_masquerade_all
Fixes #1012
2017-02-10 13:16:39 +03:00
Matthew Mosesohn
779f20d64e Merge pull request #1010 from bogdando/fixes
Fix misleading HA docs
2017-02-10 13:01:29 +03:00
Bogdan Dobrelya
89ae9f1f88 Merge pull request #1002 from code0x9/master
use ansible sysctl module for config ip forwarding
2017-02-10 10:40:18 +01:00
Bogdan Dobrelya
ed1ab11001 Fix misleading HA docs
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-02-10 10:28:27 +01:00
Alexander Block
d2e010cbe1 Add kernel upgrade for CentOS 2017-02-10 09:29:12 +01:00
Matthew Mosesohn
a44a0990f5 Enable reset of dnsmasq if manifest or config changes 2017-02-10 10:40:07 +04:00
Matthew Mosesohn
2f88c9eefe Merge pull request #989 from holser/kubelet_remedy
Kubernetes Reliability Improvements
2017-02-10 09:29:29 +03:00
Matthew Mosesohn
60f1936a62 Merge pull request #1004 from galthaus/kubelet-load-modules
Allow kubelet to load kernel modules
2017-02-10 09:28:16 +03:00
Matthew Mosesohn
ee15f99dd7 Add CI cases for testing upgrade from v2.0.1 release
These are manual trigger jobs, but should be run if any PR
impacts upgrades.
2017-02-10 10:20:58 +04:00
Matthew Mosesohn
b0ee27ba46 Merge pull request #1006 from mattymo/fix_weave_upgrade
Enable weave upgrade from previous versions
2017-02-10 09:03:49 +03:00
Antoine Legrand
067bbaa473 Merge pull request #1001 from idcrook/kargo-issue-1000-efk-enable
removed explicit role for efk in cluster.yml
2017-02-10 03:03:18 +01:00
Sergii Golovatiuk
c07d60bc90 Kubernetes Reliability Improvements
- Exclude kubelet CPU/RAM (kube-reserved) from the cgroup. It decreases the
  chance of overcommitment
- Add a possibility to modify the Kubelet node-status-update-frequency
- Add a possibility to configure node-monitor-grace-period,
  node-monitor-period, pod-eviction-timeout for the Kubernetes controller
  manager
- Add Kubernetes Reliability Documentation with recommendations for
  various scenarios.

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-02-09 23:54:08 +01:00
Matthew Mosesohn
29fd957352 Enable weave upgrade from previous versions
Raise readiness probe initial time to 60 (was 30)
2017-02-09 21:39:31 +03:00
Matthew Mosesohn
ef10ce04e2 Merge pull request #1005 from rutsky/patch-2
fix kube_apiserver_ip/kube_apiserver_port description
2017-02-09 21:08:15 +03:00
Vladimir Rutsky
f0269b28f4 fix kube_apiserver_ip/kube_apiserver_port description 2017-02-09 21:47:36 +04:00
Matthew Mosesohn
0a7c6eb9dc Merge pull request #998 from mattymo/fix_upgrade_daemonsets
Fix upgrade for all daemonset type resources
2017-02-09 20:02:21 +03:00
Greg Althaus
3f0c13af8a Make kubelet_load_modules always present but false.
Update code and docs for that assumption.
2017-02-09 10:25:44 -06:00
Greg Althaus
fcd78eb1f7 Due to the nsenter and other reworks, it appears that
kubelet lost the ability to load kernel modules.  This
puts that back by adding the lib/modules mount to kubelet.

The new variable kubelet_load_modules can be set to true
to enable this item.  It is OFF by default.
2017-02-09 10:02:26 -06:00
Matthew Mosesohn
17dfae6d4e Merge pull request #999 from holser/decrease_weave_ram_limits
Lower weave RAM settings.
2017-02-09 13:19:12 +03:00
Mark Lee
e414c25fd7 follow sysctl.conf file symlink if linked 2017-02-09 18:16:52 +09:00
Mark Lee
34a71554ae use ansible sysctl module for config ip forwarding 2017-02-09 17:28:44 +09:00
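A sketch of the module-based approach from the commit above (the sysctl file path is an assumption):

    - name: Enable IP forwarding via the sysctl module
      sysctl:
        name: net.ipv4.ip_forward
        value: 1
        sysctl_file: /etc/sysctl.conf
        state: present
        reload: yes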
Bogdan Dobrelya
3b1a196c75 Merge pull request #902 from insequent/master
Adding vault role
2017-02-09 09:24:52 +01:00
Bogdan Dobrelya
105dbf471e Merge pull request #993 from code0x9/master
enable proxy support on docker repository
2017-02-09 09:21:01 +01:00
David Crook
d4d9f27a8d removed explicit role for efk in cluster.yml 2017-02-08 20:48:28 -07:00
Antoine Legrand
68df0d4909 Merge pull request #986 from vwfs/dnsmasq_system_nameservers
Also add the system nameservers to upstream servers in dnsmasq
2017-02-08 23:21:54 +01:00
Antoine Legrand
9c572fe54b Merge pull request #984 from rutsky/patch-2
fix typo: "explicetely"
2017-02-08 23:19:01 +01:00
Josh Conant
245e05ce61 Vault security hardening and role isolation 2017-02-08 21:41:36 +00:00
Josh Conant
f4ec2d18e5 Adding the Vault role 2017-02-08 21:31:28 +00:00
Sergii Golovatiuk
4124d84c00 Lower weave RAM settings.
- Since Weave 1.8.x was rewritten in Golang we may decrease RAM settings
  to continue using g1-small for CI
2017-02-08 18:50:36 +01:00
Matthew Mosesohn
3c713a3f53 Fix upgrade for all daemonset type resources
Daemonsets cannot be simply upgraded through a single API call,
regardless of any kubectl documentation. The resource must be
purged and then recreated in order to make any changes.
2017-02-08 18:16:00 +03:00
Alexander Block
89e570493a Also add the system nameservers to upstream servers in dnsmasq
Also make no-resolv unconditional again. Otherwise, we may end up in
a resolver loop. The resolver loop was the cause of the parallel
queries piling up.
2017-02-08 14:38:55 +01:00
Matthew Mosesohn
16674774c7 Merge pull request #994 from mattymo/docker_save
Change docker save compress level to 1
2017-02-08 15:13:15 +03:00
Matthew Mosesohn
0180ad7f38 Merge pull request #990 from mattymo/fix_cert_upgrade
Fix check for node-NODEID certs existence
2017-02-08 14:44:09 +03:00
Matthew Mosesohn
bfd1ea1da1 Merge pull request #971 from bradbeam/efk
Adding EFK logging stack
2017-02-08 14:28:04 +03:00
Mark Lee
3eacd0c871 Update rh_docker.repo.j2 2017-02-08 20:03:51 +09:00
Matthew Mosesohn
d587270293 Merge pull request #992 from vwfs/host_mount_dev
Host mount /dev for kubelet
2017-02-08 13:45:22 +03:00
Matthew Mosesohn
3eb13e83cf Change docker save compress level to 1
Faster gzip improves CI deploy times by at least 2 mins.

Fixes #982
2017-02-08 13:25:11 +03:00
Mark Lee
df761713aa Merge branch 'master' of https://github.com/kubespray/kargo 2017-02-08 19:19:26 +09:00
Mark Lee
de50f37fea enable proxy support on docker repository 2017-02-08 19:19:08 +09:00
Matthew Mosesohn
bad6076905 Merge pull request #987 from mattymo/etcd-retune
Re-tune ETCD performance params
2017-02-08 13:00:25 +03:00
Bogdan Dobrelya
c2bd76a22e Merge pull request #956 from adidenko/update-netchecker
Update playbooks to support new netchecker
2017-02-08 10:09:46 +01:00
Alexander Block
010fe30b53 Host mount /dev for kubelet 2017-02-08 09:55:51 +01:00
Matthew Mosesohn
e5779ab786 Fix check for node-NODEID certs existence
Fixes upgrade from pre-individual node cert envs.
2017-02-07 21:06:48 +03:00
Matthew Mosesohn
71e14a13b4 Re-tune ETCD performance params
Reduce election timeout to 5000ms (was 10000ms)
Raise heartbeat interval to 250ms (was 100ms)
Remove etcd cpu share (was 300)
Make etcd_cpu_limit and etcd_memory_limit optional.
2017-02-07 20:15:14 +03:00
Matthew Mosesohn
491074aab1 Merge pull request #969 from mattymo/port_reserve
Prevent dynamic port allocation in nodePort range
2017-02-07 18:24:57 +03:00
Aleksandr Didenko
54af533b31 Update playbooks to support new netchecker
Netchecker is rewritten in Go lang with some new args instead of
env variables. Also netchecker-server no longer requires kubectl
container. Updating playbooks accordingly.
2017-02-07 15:20:34 +01:00
Matthew Mosesohn
4f13043d14 Merge pull request #976 from holser/bug/975
Improve Weave
2017-02-06 22:48:13 +03:00
Vladimir Rutsky
6a5df4d999 fix typo: "pubilcally" 2017-02-06 21:35:02 +04:00
Vladimir Rutsky
d41602088b fix typo: "explicetely" 2017-02-06 21:29:11 +04:00
Matthew Mosesohn
f3a0f73588 Prevent dynamic port allocation in nodePort range
kube_apiserver_node_port_range should be accessible only
to kube-proxy and not be taken by a dynamic port allocation.

Potentially temporary if https://github.com/kubernetes/kubernetes/issues/40920
gets fixed.
2017-02-06 20:01:16 +03:00
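One way to express the reservation, sketched with the Ansible sysctl module (the range value mirrors the Kubernetes default nodePort range and is an assumption here):

    - name: Reserve the nodePort range from dynamic (ephemeral) port allocation
      sysctl:
        name: net.ipv4.ip_local_reserved_ports
        value: "30000-32767"
        sysctl_set: yes
        state: present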
Matthew Mosesohn
be1e1b41bd Merge pull request #981 from kubernetes-incubator/revert-911-DROP_CAPS
Revert "Drop linux capabilities and rework users/groups"
2017-02-06 17:52:58 +03:00
Matthew Mosesohn
fd30131dc2 Revert "Drop linux capabilities and rework users/groups" 2017-02-06 15:58:54 +03:00
Sergii Golovatiuk
5122697f0b Improve Weave
- Remove weave CPU limits from .gitlab-ci.yml. Closes: #975
- Fix weave version in documentation

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-02-06 13:24:40 +01:00
Bogdan Dobrelya
b7bf502e02 Merge pull request #978 from rutsky/patch-1
remove extra `~`
2017-02-06 12:07:54 +01:00
Bogdan Dobrelya
3f70e3a843 Merge pull request #977 from holser/bug/973
Add .swp .swo .swn to .gitignore
2017-02-06 12:07:07 +01:00
Bogdan Dobrelya
cae2982d81 Merge pull request #911 from bogdando/DROP_CAPS
Drop linux capabilities and rework users/groups
2017-02-06 12:05:51 +01:00
Vladimir Rutsky
b638c89556 remove extra ~ 2017-02-06 15:05:24 +04:00
Bogdan Dobrelya
9bc51bd0e2 Merge pull request #972 from kubernetes-incubator/update-roadmap
Update roadmap.md
2017-02-06 12:03:09 +01:00
Sergii Golovatiuk
408b4f3f42 Add .swp .swo .swn to .gitignore
According to http://vimdoc.sourceforge.net/htmldoc/recover.html vim
creates .swo .swn .swp files. This patch adds them to .gitignore in all
directories recursively

Closes: #973
2017-02-06 12:00:49 +01:00
Antoine Legrand
d818ac1d59 Update roadmap.md 2017-02-04 23:23:24 +01:00
Antoine Legrand
bd1c764a1a Merge pull request #963 from rutsky/bastion-ansible-host
handle both 'ansible_host' and 'ansible_ssh_host' in bastion configration
2017-02-04 15:42:39 -05:00
Antoine Legrand
8f377ad8bd Merge pull request #968 from rutsky/remove-deprecated-ubuntu-bootstrap
remove deprecated ubuntu-bootstrap.yml script
2017-02-04 15:36:49 -05:00
Brad Beam
df3e11bdb8 Adding EFK logging stack 2017-02-03 16:27:08 -06:00
Vladimir Rutsky
97dabbe997 remove deprecated ubuntu-bootstrap.yml script
Signed-off-by: Vladimir Rutsky <rutsky.vladimir@gmail.com>
2017-02-03 15:02:17 +03:00
Bogdan Dobrelya
5a7a3f6d4a Merge pull request #949 from vmtyler/master
Fixes Support for OpenStack v3 credentials
2017-02-03 12:22:00 +01:00
Vladimir Rutsky
b4327fdc99 handle both 'ansible_host' and 'ansible_ssh_host' in bastion configuration
'ansible_ssh_host' is deprecated in Ansible 2.0 and at least
'contrib/inventory_builder/inventory.py' uses 'ansible_host' instead.
2017-02-02 18:34:53 +03:00
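A hedged sketch of how both variables can be honoured when building the bastion address (the group name and fact name are assumptions):

    - name: Resolve the bastion address from either inventory variable
      set_fact:
        bastion_address: >-
          {{ hostvars[groups['bastion'][0]].ansible_host
             | default(hostvars[groups['bastion'][0]].ansible_ssh_host) }}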
Matthew Mosesohn
10f924a617 Merge pull request #927 from holser/nsenter_fix
Remove nsenter workaround
2017-02-02 18:18:15 +03:00
Matthew Mosesohn
3dd6a01c8b Merge pull request #901 from galthaus/dns-tweak
DHCP Hook protections
2017-02-02 16:47:16 +03:00
Sergii Golovatiuk
585afef945 Remove nsenter workaround
- Docker 1.12 and later don't need the nsenter hack. This patch removes
  it. Also, it bumps the minimal version to 1.12.

Closes #776

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-02-02 14:38:11 +01:00
Matthew Mosesohn
bdc65990e1 Merge pull request #958 from holser/fix_weave_cpu
Fix CPU out of scope for Weave-net
2017-02-02 16:05:47 +03:00
Sergii Golovatiuk
f2e4ffcac2 Fix weave-net after upgrade to 1.82
- Set recommended CPU settings
- Cleans up upgrade to weave 1.82. The original WeaveWorks
daemonset definition uses weave-net name.
- Limit DS creation to master
- Combined 2 tasks into one with better condition
2017-02-02 10:31:58 +01:00
Matthew Mosesohn
ae66b6e648 Merge pull request #957 from mattymo/weave-net-naming
Rename weave-kube to weave-net
2017-02-02 10:18:02 +03:00
Greg Althaus
923057c1a8 This continues the DHCP hook checks. Also protect the create side
if the system doesn't have any config files at all.
2017-01-31 09:56:27 -06:00
Matthew Mosesohn
0f6e08d34f Merge pull request #951 from mattymo/k8s-certs-scale
Fix cert distribution at scale
2017-01-31 18:49:26 +03:00
Matthew Mosesohn
4889a3e2e1 Merge pull request #954 from artem-panchenko/improve_dnsmasq
Explicitly set config path for DNSMasq
2017-01-31 18:48:46 +03:00
Matthew Mosesohn
39d87a96aa Rename weave-kube to weave-net
Cleans up upgrade to weave 1.82. The original WeaveWorks
daemonset definition uses weave-net name.
2017-01-31 18:47:27 +03:00
Bogdan Dobrelya
e7c03ba66a Merge pull request #955 from mattymo/disable-idempotency-check
Disable idempotency for ubuntu-weave-sep
2017-01-31 14:55:27 +01:00
Matthew Mosesohn
08822ec684 Fix cert distribution at scale
Use stdin instead of bash args to pass node filenames and base64 data.
Use tempfile for master cert data
2017-01-31 16:27:45 +03:00
Matthew Mosesohn
6463a01e04 Merge pull request #880 from bradbeam/weave-kube
Weave kube
2017-01-31 13:31:09 +03:00
Matthew Mosesohn
0cf1850465 Disable idempotency for ubuntu-weave-sep
CI is failing 40% of the time due to errors in reset.
Let's disable idempotency check per-patch until we fix it.

Fixes #953
2017-01-31 13:23:27 +03:00
Artem Panchenko
1418fb394b Explicitly set config path for DNSMasq
When DNSMasq is configured to read its settings
from a folder ('-7' or '--conf-dir' option) it only
checks that the directory exists and doesn't fail if
it's empty. It could lead to a situation when DNSMasq
is running and handles requests but is not properly
configured, so some queries can't be resolved.
2017-01-31 12:14:57 +02:00
Matthew Mosesohn
e4eda88ca9 Merge pull request #944 from tureus/skip-cloud-config-on-etcd
Bugfix: skip cloud_config on etcd
2017-01-30 20:12:36 +03:00
Bogdan Dobrelya
71a3c97d6f Merge pull request #943 from bradbeam/cilint
Fixing lint check for ci
2017-01-30 09:19:44 +01:00
Antoine Legrand
1c3d2924ae Merge pull request #947 from bradbeam/libs
Consolidating kube.py module
2017-01-29 00:02:32 +01:00
Brad Beam
a11b9d28bd Upgrading weave to weave-kube 2017-01-27 17:05:25 -06:00
Brad Beam
b54eb609bf Consolidating kube.py module 2017-01-27 11:28:11 -06:00
Bogdan Dobrelya
dc8ff413f9 Merge pull request #948 from mattymo/update_coreos
Update coreos-stable image
2017-01-27 17:53:17 +01:00
Tyler Britten
f8ffa1601d Fixed for non-null output 2017-01-27 10:47:59 -05:00
Tyler Britten
da01bc1fbb Updated OpenStack vars to check for tenant_id (v2) and project_id (v3) 2017-01-27 10:26:20 -05:00
Matthew Mosesohn
a2079a9ca9 Update coreos-stable image
Our old coreos-stable image has docker 1.10
2017-01-27 16:20:40 +04:00
neith00
bbc8c09753 Using the command module instead of raw
Using the command module instead of raw.
Also fixed the syntax.
2017-01-26 16:28:48 +01:00
Matthew Mosesohn
a627299468 Merge pull request #941 from adidenko/use_ansible_hostname_in_calico
Switch to ansible_hostname in calico
2017-01-26 13:06:35 +03:00
Xavier Lange
e5fdc63bdd Bugfix: skip cloud_config on etcd 2017-01-25 14:09:21 -08:00
Brad Beam
fe83e70074 Fixing lint check for ci 2017-01-25 09:54:32 -06:00
Aleksandr Didenko
46c177b982 Switch to ansible_hostname in calico
For consistency with kubernetes services we should use the same
hostname for nodes, which is 'ansible_hostname'.

Also fixing a missed 'kube-node' in templates; Calico is installed
on 'k8s-cluster' roles, not only 'kube-node'.
2017-01-25 11:49:58 +01:00
Bogdan Dobrelya
1df50adc1c Merge pull request #933 from frozenice/hide-skipped-hosts
add skippy stdout callback
2017-01-25 10:33:20 +01:00
Bogdan Dobrelya
b6cd9a4c4b Merge pull request #938 from bradbeam/ci
Splitting out moderator check from syntax check
2017-01-25 10:12:11 +01:00
Brad Beam
2333ec4d1f Splitting out moderator check from syntax check
- Attempt to clarify CI runs from contributors
2017-01-24 23:05:12 -06:00
Bogdan Dobrelya
85a8a54d3e Merge pull request #935 from sc68cal/terraform_groupvars_update
Update the group_vars for Terraform
2017-01-24 11:33:17 +01:00
Bogdan Dobrelya
7294a22901 Merge pull request #934 from frozenice/use-api-pwd-for-root
also use kube_api_pwd for root account
2017-01-24 11:24:02 +01:00
Matthew Mosesohn
f4b7474ade Merge pull request #926 from adidenko/fix-calico-rr-for-masters
Fix calico-rr peering with k8s masters
2017-01-24 12:38:52 +03:00
Matthew Mosesohn
9428321607 Merge pull request #932 from vwfs/centos_pin_docker_version
Pin docker version on RedHat and CentOS to the desired version
2017-01-24 12:21:50 +03:00
Matthew Mosesohn
882544446a Merge pull request #928 from sc68cal/terraform_identity_version
Specify the version of the credentials to download from Horizon
2017-01-24 12:21:27 +03:00
Sean M. Collins
73160c9b90 Update terraform's group_vars to be a symlink
That way, it will not become stale.

Related bug #929
2017-01-23 16:08:37 -05:00
Sean M. Collins
2184d6a3ff Specify the version of the credentials to download from Horizon
More recent versions of OpenStack Horizon provide Identity v2 and
Identity v3 versions of the RC file.
2017-01-23 14:52:51 -05:00
David Kirstein
6e35895b44 also use kube_api_pwd for root account
This makes it a bit more secure. Also the password can now be changed with an (inventory) variable (no need to edit all.yml).
2017-01-23 19:09:30 +01:00
David Kirstein
8009ff8537 add skippy stdout callback
It removes the teal lines when a host is skipped for a task. This makes the output less spammy and much easier to read. Empty TASK blocks are still included in the output, but that's ok.
2017-01-23 18:53:14 +01:00
Alexander Block
9bf792ce0b Pin docker version on RedHat and CentOS to the desired version 2017-01-23 12:39:54 +01:00
Aleksandr Didenko
f05aaeb329 Fix calico-rr peering with k8s masters
Calico-rr is broken for deployments with separate k8s-master and
k8s-node roles. In order to fix it we should peer k8s-cluster
nodes with calico-rr, not just k8s-node. The same for peering
with routers.

Closes #925
2017-01-23 10:19:09 +01:00
Bogdan Dobrelya
1bdf34e7dc Merge pull request #915 from bradbeam/ci
Sorting ansible args, fixed ci cluster_mode
2017-01-20 09:43:10 +01:00
Bogdan Dobrelya
cd25bfca91 Merge pull request #884 from mattymo/inventory_builder_scale
Add scale thresholds to split etcd and k8s-masters
2017-01-20 09:34:45 +01:00
Bogdan Dobrelya
1b621ab81c Merge pull request #873 from crodetsky/fix_test_cases
Genericize test cases and namespace create pod
2017-01-20 09:30:35 +01:00
Bogdan Dobrelya
cb2e5ac776 Drop linux capabilities and rework users/groups
* Drop linux capabilities for unprivileged containerized
  workloads Kargo configures for deployments.
* Configure required securityContext/user/group/groups for kube
  components' static manifests, etcd, calico-rr and k8s apps,
  like dnsmasq daemonset.
* Rework cloud-init (etcd) users creation for CoreOS.
* Fix nologin paths, adjust defaults for addusers role and ensure
  supplementary groups membership added for users.
* Add netplug user for network plugins (yet unused by privileged
  networking containers though).
* Grant the kube and netplug users read access for etcd certs via
  the etcd certs group.
* Grant group read access to kube certs via the kube cert group.
* Remove privileged mode for calico-rr and run it under its uid/gid
  and supplementary etcd_cert group.
* Adjust docs.
* Align cpu/memory limits and dropped caps with added rkt support
  for control plane.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-20 08:50:42 +01:00
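As a rough illustration of the hardening described above, a static manifest fragment might carry a per-container securityContext like the sketch below; the image name and uid are placeholders, not the values the roles actually template in.

```yaml
# Illustrative sketch only: run a control-plane container as a non-root user
# and drop all Linux capabilities. The image and uid below are placeholders,
# not the values the roles actually template in.
spec:
  containers:
    - name: kube-controller-manager
      image: example.io/hyperkube:v1.5.1   # placeholder image
      securityContext:
        runAsUser: 997                     # hypothetical kube user uid
        capabilities:
          drop: ["ALL"]
```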
Matthew Mosesohn
8ce32eb3e1 Merge pull request #905 from galthaus/async-runs
Add tasks to ensure that the first nodes have their directories for cert gen
2017-01-19 18:32:27 +03:00
Matthew Mosesohn
aae0314bda Merge pull request #904 from galthaus/nginx-port-config
Add nginx local balancer port configuration variable
2017-01-19 18:31:57 +03:00
Matthew Mosesohn
35d5248d41 Merge pull request #913 from galthaus/apps-master-only
Ansible apps should only check for api-server running on the master.
2017-01-19 18:30:58 +03:00
Matthew Mosesohn
0ccc2555d3 Merge pull request #917 from mattymo/rkt_resolvconf
Fix setting resolvconf when using rkt deploy mode
2017-01-19 18:30:21 +03:00
Matthew Mosesohn
b26a711e96 Merge pull request #916 from mattymo/update_ansible
Update Ansible to 2.2.1
2017-01-19 18:13:45 +03:00
Matthew Mosesohn
2218a052b2 Merge pull request #921 from mattymo/docker113
Add docker 1.13, update 1.12 to 1.12.6
2017-01-19 18:13:21 +03:00
Matthew Mosesohn
40f419ca54 Merge pull request #922 from holser/dnsmasq_dns-forward-max
Allow specifying the number of concurrent DNS queries
2017-01-19 18:08:04 +03:00
Matthew Mosesohn
f742fc3dd1 Add scale thresholds to split etcd and k8s-masters
Also adds calico-rr group if there are standalone etcd nodes.
Now if there are 50 or more nodes, 3 etcd nodes will be standalone.
If there are 200 or more nodes, 2 kube-masters will be standalone.
If the thresholds are exceeded, the kube-node group does not include nodes that
belong to the etcd or kube-master groups (per the statements above).
2017-01-19 17:30:56 +03:00
Matthew Mosesohn
33fbcc56d6 Add docker 1.13, update 1.12 to 1.12.6
Fixes #903
2017-01-19 13:58:36 +03:00
Sergii Golovatiuk
61d05dea58 Allow specifying the number of concurrent DNS queries
ndots creates overhead, as every pod opens 5 concurrent connections
that are forwarded to SkyDNS. Under some circumstances dnsmasq may
stop forwarding traffic, logging "Maximum number of concurrent DNS
queries reached".

This patch allows configuring the number of concurrent forwarded DNS
queries ("dns-forward-max") as well as the cache size ("cache-size"),
leaving the default values as they were before.

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-01-19 11:47:37 +01:00
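Both knobs map to plain dnsmasq options. As a hedged sketch, an inventory override could look like the following; the variable names are illustrative, so check the dnsmasq role defaults for the exact ones.

```yaml
# group_vars/all.yml -- illustrative variable names, not necessarily the
# ones the role defines
dns_forward_max: 150    # rendered into dnsmasq as dns-forward-max=150
dns_cache_size: 150     # rendered into dnsmasq as cache-size=150
```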
Matthew Mosesohn
8a821060a3 Update Ansible to 2.2.1 2017-01-19 13:46:46 +03:00
Greg Althaus
0d44599a63 Add explicit name printing in task names for delegated tasks during
cert creation
2017-01-18 14:06:50 -06:00
crodetsky
8e29b08070 Genericize test cases and namespace create pod
This change modifies 020_check-create-pod and 030_check-network test cases to
target `kube-master[0]` instead of `node1` as these tests can be useful in
deployments that do not use the same naming convention as the basic tests.

This change also modifies 020_check-create-pod to create its pod in a `test`
namespace, allowing the `get pods` command to find the expected number of
running containers.

Closes #866 and #867.
2017-01-18 14:52:35 -05:00
Matthew Mosesohn
b6c3e61603 Fix setting resolvconf when using rkt deploy mode
rkt deploy mode doesn't create {{ bin_dir }}/kubelet, so
let's rely on the kubelet.env file instead.
2017-01-18 19:18:47 +03:00
Brad Beam
dc08b75c6a Sorting ansible args, fixed ci cluster_mode
- s/separated/separate/g for cluster_mode so it now generates the correct number of instances
2017-01-18 08:03:04 -06:00
Matthew Mosesohn
5420fa942e Merge pull request #897 from holser/flush_handlers_before_etcd
Flush handlers before etcd restart
2017-01-18 12:27:01 +03:00
Matthew Mosesohn
1ee33d3a8d Merge pull request #910 from mattymo/escape_curly
Fix ansible 2.2.1 handling of registered vars
2017-01-18 11:13:01 +03:00
Greg Althaus
61dab8dc0b Should only check for api-server running on the master.
If this runs on other nodes, it will fail the playbook.
2017-01-17 15:57:34 -06:00
Greg Althaus
0022a2b29e Add doc updates. 2017-01-17 13:15:48 -06:00
Matthew Mosesohn
b2a27ed089 Fix bash completion installation 2017-01-17 20:36:58 +03:00
Matthew Mosesohn
d8ae50800a Work around escaping curly braces for docker inspect 2017-01-17 20:35:38 +03:00
Sergii Golovatiuk
43fa72b7b7 Flush handlers before etcd restart
systemctl daemon-reload should be run before restarting etcd when a task
modifies or creates the etcd unit. Otherwise etcd won't be able to start.

Closes #892

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-01-17 15:04:25 +01:00
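In Ansible terms the fix amounts to forcing notified handlers, such as a daemon-reload, to run before etcd is restarted; a minimal sketch, not the literal patch:

```yaml
# Sketch: make sure a pending "systemctl daemon-reload" handler runs
# before a later task (re)starts etcd with the new unit definition.
- meta: flush_handlers

- name: Restart etcd with the freshly installed unit
  service:
    name: etcd
    state: restarted
```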
Bogdan Dobrelya
36b62b7270 Merge pull request #896 from bogdando/idempot_check
Add idempotency checks for CI
2017-01-17 14:21:32 +01:00
Matthew Mosesohn
73204c868d Merge pull request #909 from mattymo/docker-upgrade
Always trigger docker restart when docker package changes
2017-01-17 11:37:42 +03:00
Matthew Mosesohn
2ee889843a Merge pull request #900 from galthaus/cn-length
Cert fail if inventory names too long
2017-01-16 23:39:32 +03:00
Matthew Mosesohn
74b78e75a1 Always trigger docker restart when docker package changes
Docker upgrade doesn't auto-restart docker, causing failures
when trying to start another container
2017-01-16 17:52:28 +03:00
Greg Althaus
6905edbeb6 Add a variable that defaults to kube_apiserver_port and defines
which port the local nginx proxy should listen on in HA
local load balancer configurations.
2017-01-14 23:38:07 -06:00
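For illustration, such an override would sit in the inventory group vars; the variable name below is a placeholder, the real one is defined in the role defaults.

```yaml
# group_vars/all.yml -- placeholder variable name
# Port the local nginx proxy listens on for the HA setup;
# falls back to kube_apiserver_port when left unset.
nginx_kube_apiserver_port: 8383
```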
Greg Althaus
6c69da1573 This PR adds or modifies a few tasks to allow the playbook to
be run with a limit on each node without regard for order.

The changes make sure that all of the directories needed to do
certificate management are on the master[0] or etcd[0] node regardless
of when the playbook gets run on each node.  This allows for separate
ansible playbook runs in parallel that don't have to be synchronized.
2017-01-14 23:24:34 -06:00
Bogdan Dobrelya
e776dfd800 Add idempotency checks for CI
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-13 17:16:03 +01:00
Greg Althaus
95bf380d07 If the inventory name of the host exceeds 63 characters,
the openssl tools will fail to create signing requests because
the CN is too long.  This is mainly a problem when FQDNs are used
in the inventory file.

This will truncate the hostname for the CN field only at the
first dot.  This should handle the issue for most cases.
2017-01-13 10:02:23 -06:00
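A sketch of the truncation idea in Jinja/Ansible terms (hypothetical task and fact names, not the literal change):

```yaml
# Hypothetical sketch: use only the part of the hostname before the first
# dot as the certificate CN so it stays within openssl's 64-character limit.
- name: Derive certificate CN from a possibly long FQDN
  set_fact:
    cert_cn: "{{ inventory_hostname.split('.')[0] }}"
```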
Bogdan Dobrelya
2a61ad1b57 Merge pull request #895 from mattymo/same_apiserver_certs
Use only one certificate for all apiservers
2017-01-13 13:05:06 +01:00
Matthew Mosesohn
80703010bd Use only one certificate for all apiservers
https://github.com/kubernetes/kubernetes/issues/25063
2017-01-13 14:03:20 +03:00
Bogdan Dobrelya
e88c10670e Merge pull request #891 from galthaus/selinux-order
preinstall fails on AWS CentOS7 image
2017-01-13 11:51:18 +01:00
Bogdan Dobrelya
2a2953c674 Merge pull request #893 from kubernetes-incubator/undo_hostresolvconf
Don't try to delete kargo specific config from dhclient when file does not exist
2017-01-13 11:35:46 +01:00
Alexander Block
1054f37765 Don't try to delete kargo specific config from dhclient when file does not exist
Also remove the check for != "RedHat" when removing the dhclient hook,
as this also has to be done on other distros. Instead, check whether
dhclienthookfile is defined.
2017-01-13 10:56:10 +01:00
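A sketch of the guard described above (variable and task names are assumptions):

```yaml
# Illustrative guard: only remove the hook when a hook file path is defined
# for this distro, instead of keying the task off the OS family.
- name: Remove kubespray-specific dhclient hook
  file:
    path: "{{ dhclienthookfile }}"
    state: absent
  when: dhclienthookfile is defined
```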
Greg Althaus
f77257cf79 When running on a CentOS 7 image in AWS with selinux on, the tasks
fail because selinux prevents the ip-forwarding setting from being applied.

Moving the tasks around addresses two issues: it makes sure that
the correct python tools are in place before selinux is adjusted,
and that IP forwarding is toggled after the selinux adjustments.
2017-01-12 10:12:21 -06:00
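The reordering roughly corresponds to the following sequence; task names and module arguments are illustrative only:

```yaml
# Illustrative ordering: SELinux python bindings first, then the SELinux
# adjustment, and only afterwards the IPv4 forwarding sysctl.
- name: Install SELinux python bindings
  yum:
    name: libselinux-python
    state: present

- name: Relax SELinux before touching sysctls
  selinux:
    policy: targeted
    state: permissive

- name: Enable IPv4 forwarding after SELinux is adjusted
  sysctl:
    name: net.ipv4.ip_forward
    value: 1
    state: present
```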
Bogdan Dobrelya
f004cc07df Merge pull request #830 from mattymo/k8sperhost
Generate individual certificates for k8s hosts
2017-01-12 12:42:14 +01:00
Bogdan Dobrelya
065a4da72d Merge pull request #886 from kubernetes-incubator/undo_hostresolvconf
Add tasks to undo changes to hosts /etc/resolv.conf and dhclient configs
2017-01-12 12:27:22 +01:00
Bogdan Dobrelya
98c7f2eb13 Merge pull request #887 from bogdando/docs
Clarify major/minor/maintenance releases
2017-01-12 11:55:00 +01:00
Bogdan Dobrelya
d332502d3d Clarify major/minor/maintenance releases
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-12 11:25:04 +01:00
Alexander Block
a7bf7867d7 Add tasks to undo changes to hosts /etc/resolv.conf and dhclient configs 2017-01-11 16:56:16 +01:00
Bogdan Dobrelya
c63cda7c21 Merge pull request #883 from bogdando/docs
Docs updates
2017-01-11 15:40:41 +01:00
Bogdan Dobrelya
caab0cdf27 Docs updates
Fix mismatching inventory examples.
Add command examples.
Clarify groups use cases.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-11 15:39:35 +01:00
Bogdan Dobrelya
1191876ae8 Merge pull request #882 from bogdando/releases
Clarify release policy
2017-01-11 11:45:47 +01:00
Bogdan Dobrelya
fa51a589ef Clarify release policy
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-11 11:18:21 +01:00
Matthew Mosesohn
3f274115b0 Generate individual certificates for k8s hosts 2017-01-11 12:58:07 +03:00
Matthew Mosesohn
3b0918981e Merge pull request #878 from bradbeam/rkt-cni
Adding /opt/cni /etc/cni to rkt run kubelet
2017-01-11 12:22:04 +03:00
Bogdan Dobrelya
a327dfeed7 Merge pull request #881 from bogdando/docs
Fix inventory generator link
2017-01-10 17:09:35 +01:00
Bogdan Dobrelya
d8cef34d6c Merge pull request #872 from mattymo/bug868
Bind nginx localhost proxy to localhost
2017-01-10 17:09:25 +01:00
Bogdan Dobrelya
6fb6947feb Fix inventory generator link
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-10 17:02:28 +01:00
Brad Beam
db8173da28 Adding /opt/cni /etc/cni to rkt run kubelet 2017-01-10 08:48:58 -06:00
Bogdan Dobrelya
bcdfb3cfb0 Merge pull request #793 from kubernetes-incubator/fix_dhclientconf_path
Fix wrong path of dhclient on CentOS+Azure
2017-01-10 13:23:55 +01:00
Bogdan Dobrelya
79aeb10431 Merge pull request #858 from bradbeam/calicoctl-canal
Misc updates for canal
2017-01-10 12:24:59 +01:00
Matthew Mosesohn
e22f938ae5 Bind nginx localhost proxy to localhost
This proxy should only be listening for local connections, not 0.0.0.0.

Fixes #868
2017-01-09 17:19:54 +03:00
Brad Beam
cf042b2a4c Create network policy directory for canal 2017-01-05 10:54:27 -06:00
Brad Beam
65c86377fc Adding calicoctl to canal deployment 2017-01-05 10:54:27 -06:00
Alexander Block
8e4e3998dd Fix wrong path of dhclient on CentOS+Azure
This was already fixed in #755 but had to be reverted. This PR should be
more intelligent about deciding which path to use.
2016-12-21 21:51:07 +01:00
397 changed files with 10162 additions and 3056 deletions


@@ -24,7 +24,7 @@ explain why.
- **Version of Ansible** (`ansible --version`):
**Kargo version (commit) (`git rev-parse --short HEAD`):**
**Kubespray version (commit) (`git rev-parse --short HEAD`):**
**Network plugin used**:

.gitignore

@@ -1,13 +1,92 @@
.vagrant
*.retry
inventory/vagrant_ansible_inventory
inventory/group_vars/fake_hosts.yml
inventory/host_vars/
temp
.idea
.tox
.cache
*.egg-info
*.pyc
*.pyo
*.bak
*.tfstate
*.tfstate.backup
**/*.sw[pon]
/ssh-bastion.conf
**/*.sw[pon]
vagrant/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# IPython Notebook
.ipynb_checkpoints
# pyenv
.python-version
# dotenv
.env
# virtualenv
venv/
ENV/


@@ -1,4 +1,5 @@
stages:
- moderator
- unit-tests
- deploy-gce-part1
- deploy-gce-part2
@@ -17,10 +18,7 @@ variables:
# us-west1-a
before_script:
- pip install ansible
- pip install netaddr
- pip install apache-libcloud==0.20.1
- pip install boto==2.9.0
- pip install -r tests/requirements.txt
- mkdir -p /.ssh
- cp tests/ansible.cfg .
@@ -46,14 +44,24 @@ before_script:
PRIVATE_KEY: $GCE_PRIVATE_KEY
GS_ACCESS_KEY_ID: $GS_KEY
GS_SECRET_ACCESS_KEY: $GS_SECRET
CLOUD_MACHINE_TYPE: "g1-small"
ANSIBLE_KEEP_REMOTE_FILES: "1"
ANSIBLE_CONFIG: ./tests/ansible.cfg
BOOTSTRAP_OS: none
DOWNLOAD_LOCALHOST: "false"
DOWNLOAD_RUN_ONCE: "false"
IDEMPOT_CHECK: "false"
RESET_CHECK: "false"
UPGRADE_TEST: "false"
RESOLVCONF_MODE: docker_dns
LOG_LEVEL: "-vv"
ETCD_DEPLOYMENT: "docker"
KUBELET_DEPLOYMENT: "docker"
KUBELET_DEPLOYMENT: "host"
VAULT_DEPLOYMENT: "docker"
WEAVE_CPU_LIMIT: "100m"
AUTHORIZATION_MODES: "{ 'authorization_modes': [] }"
MAGIC: "ci check this"
.gce: &gce
<<: *job
<<: *docker_service
@@ -62,65 +70,178 @@ before_script:
paths:
- downloads/
- $HOME/.cache
stage: deploy-gce
before_script:
- docker info
- pip install ansible==2.1.3.0
- pip install netaddr
- pip install apache-libcloud==0.20.1
- pip install boto==2.9.0
- pip install -r tests/requirements.txt
- mkdir -p /.ssh
- cp tests/ansible.cfg .
- mkdir -p $HOME/.ssh
- echo $PRIVATE_KEY | base64 -d > $HOME/.ssh/id_rsa
- echo $GCE_PEM_FILE | base64 -d > $HOME/.ssh/gce
- echo $GCE_CREDENTIALS > $HOME/.ssh/gce.json
- chmod 400 $HOME/.ssh/id_rsa
- ansible-playbook --version
- cp tests/ansible.cfg .
- export PYPATH=$([ $BOOTSTRAP_OS = none ] && echo /usr/bin/python || echo /opt/bin/python)
script:
- pwd
- ls
- echo ${PWD}
- echo "${STARTUP_SCRIPT}"
- >
ansible-playbook tests/cloud_playbooks/create-gce.yml -i tests/local_inventory/hosts.cfg -c local $LOG_LEVEL
-e mode=${CLUSTER_MODE}
-e test_id=${TEST_ID}
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
ansible-playbook tests/cloud_playbooks/create-gce.yml -i tests/local_inventory/hosts.cfg -c local
${LOG_LEVEL}
-e cloud_image=${CLOUD_IMAGE}
-e cloud_region=${CLOUD_REGION}
-e gce_credentials_file=${HOME}/.ssh/gce.json
-e gce_project_id=${GCE_PROJECT_ID}
-e gce_service_account_email=${GCE_ACCOUNT}
-e gce_credentials_file=${HOME}/.ssh/gce.json
-e cloud_image=${CLOUD_IMAGE}
-e cloud_machine_type=${CLOUD_MACHINE_TYPE}
-e inventory_path=${PWD}/inventory/inventory.ini
-e cloud_region=${CLOUD_REGION}
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-e mode=${CLUSTER_MODE}
-e test_id=${TEST_ID}
-e startup_script="'${STARTUP_SCRIPT}'"
# Check out latest tag if testing upgrade
# Uncomment when gitlab kargo repo has tags
#- test "${UPGRADE_TEST}" != "false" && git fetch --all && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
- test "${UPGRADE_TEST}" != "false" && git checkout 72ae7638bcc94c66afa8620dfa4ad9a9249327ea
# Create cluster
- >
ansible-playbook -i inventory/inventory.ini -b --become-user=root --private-key=${HOME}/.ssh/id_rsa -u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e ansible_python_interpreter=${PYPATH}
-e ansible_ssh_user=${SSH_USER}
-e bootstrap_os=${BOOTSTRAP_OS}
-e cert_management=${CERT_MGMT:-script}
-e cloud_provider=gce
-e deploy_netchecker=true
-e download_localhost=${DOWNLOAD_LOCALHOST}
-e download_run_once=${DOWNLOAD_RUN_ONCE}
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-e kubedns_min_replicas=1
-e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
-e local_release_dir=${PWD}/downloads
-e resolvconf_mode=${RESOLVCONF_MODE}
-e vault_deployment_type=${VAULT_DEPLOYMENT}
-e "${AUTHORIZATION_MODES}"
--limit "all:!fake_hosts"
cluster.yml
# Repeat deployment if testing upgrade
- >
if [ "${UPGRADE_TEST}" != "false" ]; then
test "${UPGRADE_TEST}" == "basic" && PLAYBOOK="cluster.yml";
test "${UPGRADE_TEST}" == "graceful" && PLAYBOOK="upgrade-cluster.yml";
git checkout "${CI_BUILD_REF}";
ansible-playbook -i inventory/inventory.ini -b --become-user=root --private-key=${HOME}/.ssh/id_rsa -u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e ansible_python_interpreter=${PYPATH}
-e ansible_ssh_user=${SSH_USER}
-e bootstrap_os=${BOOTSTRAP_OS}
-e cloud_provider=gce
-e deploy_netchecker=true
-e download_localhost=${DOWNLOAD_LOCALHOST}
-e download_run_once=${DOWNLOAD_RUN_ONCE}
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-e kubedns_min_replicas=1
-e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
-e local_release_dir=${PWD}/downloads
-e resolvconf_mode=${RESOLVCONF_MODE}
-e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-e "${AUTHORIZATION_MODES}"
--limit "all:!fake_hosts"
$PLAYBOOK;
fi
# Tests Cases
## Test Master API
- ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/010_check-apiserver.yml $LOG_LEVEL
## Ping the between 2 pod
- ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/030_check-network.yml $LOG_LEVEL
## Advanced DNS checks
- ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/040_check-network-adv.yml $LOG_LEVEL
## Idempotency checks 1/5 (repeat deployment)
- >
if [ "${IDEMPOT_CHECK}" = "true" ]; then
ansible-playbook -i inventory/inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS
-b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
--private-key=${HOME}/.ssh/id_rsa
-e bootstrap_os=${BOOTSTRAP_OS}
-e ansible_python_interpreter=${PYPATH}
-e download_run_once=true
-e download_localhost=true
-e download_localhost=${DOWNLOAD_LOCALHOST}
-e download_run_once=${DOWNLOAD_RUN_ONCE}
-e deploy_netchecker=true
-e resolvconf_mode=${RESOLVCONF_MODE}
-e local_release_dir=${PWD}/downloads
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
-e kubedns_min_replicas=1
-e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
cluster.yml
-e "${AUTHORIZATION_MODES}"
--limit "all:!fake_hosts"
cluster.yml;
fi
## Idempotency checks 2/5 (Advanced DNS checks)
- >
if [ "${IDEMPOT_CHECK}" = "true" ]; then
ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH}
-u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root
--limit "all:!fake_hosts"
tests/testcases/040_check-network-adv.yml $LOG_LEVEL;
fi
# Tests Cases
## Test Master API
- ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root tests/testcases/010_check-apiserver.yml $LOG_LEVEL
## Idempotency checks 3/5 (reset deployment)
- >
if [ "${IDEMPOT_CHECK}" = "true" AND "${RESET_CHECK}" = "true" ]; then
ansible-playbook -i inventory/inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS
-b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
--private-key=${HOME}/.ssh/id_rsa
-e bootstrap_os=${BOOTSTRAP_OS}
-e ansible_python_interpreter=${PYPATH}
-e reset_confirmation=yes
--limit "all:!fake_hosts"
reset.yml;
fi
## Ping the between 2 pod
- ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root tests/testcases/030_check-network.yml $LOG_LEVEL
## Idempotency checks 4/5 (redeploy after reset)
- >
if [ "${IDEMPOT_CHECK}" = "true" AND "${RESET_CHECK}" = "true" ]; then
ansible-playbook -i inventory/inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS
-b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
--private-key=${HOME}/.ssh/id_rsa
-e bootstrap_os=${BOOTSTRAP_OS}
-e ansible_python_interpreter=${PYPATH}
-e download_localhost=${DOWNLOAD_LOCALHOST}
-e download_run_once=${DOWNLOAD_RUN_ONCE}
-e deploy_netchecker=true
-e resolvconf_mode=${RESOLVCONF_MODE}
-e local_release_dir=${PWD}/downloads
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
-e kubedns_min_replicas=1
-e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
-e "${AUTHORIZATION_MODES}"
--limit "all:!fake_hosts"
cluster.yml;
fi
## Advanced DNS checks
- ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root tests/testcases/040_check-network-adv.yml $LOG_LEVEL
## Idempotency checks 5/5 (Advanced DNS checks)
- >
if [ "${IDEMPOT_CHECK}" = "true" AND "${RESET_CHECK}" = "true" ]; then
ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH}
-u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root
--limit "all:!fake_hosts"
tests/testcases/040_check-network-adv.yml $LOG_LEVEL;
fi
after_script:
- >
@@ -139,18 +260,23 @@ before_script:
.coreos_calico_sep_variables: &coreos_calico_sep_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: calico
CLOUD_IMAGE: coreos-stable
CLOUD_IMAGE: coreos-stable-1465-6-0-v20170817
CLOUD_REGION: us-west1-b
CLUSTER_MODE: separated
CLOUD_MACHINE_TYPE: "n1-standard-2"
CLUSTER_MODE: separate
BOOTSTRAP_OS: coreos
RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
##User-data to simply turn off coreos upgrades
STARTUP_SCRIPT: 'systemctl disable locksmithd && systemctl stop locksmithd'
.debian8_canal_ha_variables: &debian8_canal_ha_variables
.ubuntu_canal_ha_variables: &ubuntu_canal_ha_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: canal
CLOUD_IMAGE: debian-8-kubespray
CLOUD_REGION: us-east1-b
CLOUD_IMAGE: ubuntu-1604-xenial
CLOUD_REGION: europe-west1-b
CLUSTER_MODE: ha
UPGRADE_TEST: "graceful"
STARTUP_SCRIPT: ""
.rhel7_weave_variables: &rhel7_weave_variables
# stage: deploy-gce-part1
@@ -158,67 +284,101 @@ before_script:
CLOUD_IMAGE: rhel-7
CLOUD_REGION: europe-west1-b
CLUSTER_MODE: default
STARTUP_SCRIPT: ""
.centos7_flannel_variables: &centos7_flannel_variables
# stage: deploy-gce-part2
KUBE_NETWORK_PLUGIN: flannel
CLOUD_IMAGE: centos-7
CLOUD_REGION: us-west1-a
CLOUD_MACHINE_TYPE: "n1-standard-2"
CLUSTER_MODE: default
STARTUP_SCRIPT: ""
.debian8_calico_variables: &debian8_calico_variables
# stage: deploy-gce-part2
KUBE_NETWORK_PLUGIN: calico
CLOUD_IMAGE: debian-8-kubespray
CLOUD_REGION: us-central1-b
CLUSTER_MODE: default
STARTUP_SCRIPT: ""
.coreos_canal_variables: &coreos_canal_variables
# stage: deploy-gce-part2
KUBE_NETWORK_PLUGIN: canal
CLOUD_IMAGE: coreos-stable
CLOUD_IMAGE: coreos-stable-1465-6-0-v20170817
CLOUD_REGION: us-east1-b
CLUSTER_MODE: default
BOOTSTRAP_OS: coreos
IDEMPOT_CHECK: "true"
RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
STARTUP_SCRIPT: 'systemctl disable locksmithd && systemctl stop locksmithd'
.rhel7_canal_sep_variables: &rhel7_canal_sep_variables
# stage: deploy-gce-special
KUBE_NETWORK_PLUGIN: canal
CLOUD_IMAGE: rhel-7
CLOUD_REGION: us-east1-b
CLUSTER_MODE: separated
CLUSTER_MODE: separate
STARTUP_SCRIPT: ""
.ubuntu_weave_sep_variables: &ubuntu_weave_sep_variables
# stage: deploy-gce-special
KUBE_NETWORK_PLUGIN: weave
CLOUD_IMAGE: ubuntu-1604-xenial
CLOUD_REGION: us-central1-b
CLUSTER_MODE: separated
CLUSTER_MODE: separate
IDEMPOT_CHECK: "false"
STARTUP_SCRIPT: ""
.centos7_calico_ha_variables: &centos7_calico_ha_variables
# stage: deploy-gce-special
KUBE_NETWORK_PLUGIN: calico
DOWNLOAD_LOCALHOST: "true"
DOWNLOAD_RUN_ONCE: "true"
CLOUD_IMAGE: centos-7
CLOUD_REGION: europe-west1-b
CLUSTER_MODE: ha
CLUSTER_MODE: ha-scale
IDEMPOT_CHECK: "true"
STARTUP_SCRIPT: ""
.coreos_alpha_weave_ha_variables: &coreos_alpha_weave_ha_variables
# stage: deploy-gce-special
KUBE_NETWORK_PLUGIN: weave
CLOUD_IMAGE: coreos-alpha
CLOUD_IMAGE: coreos-alpha-1506-0-0-v20170817
CLOUD_REGION: us-west1-a
CLUSTER_MODE: ha
CLUSTER_MODE: ha-scale
BOOTSTRAP_OS: coreos
RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
STARTUP_SCRIPT: 'systemctl disable locksmithd && systemctl stop locksmithd'
.ubuntu_rkt_sep_variables: &ubuntu_rkt_sep_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: flannel
CLOUD_IMAGE: ubuntu-1604-xenial
CLOUD_REGION: us-central1-b
CLUSTER_MODE: separated
CLUSTER_MODE: separate
ETCD_DEPLOYMENT: rkt
KUBELET_DEPLOYMENT: rkt
STARTUP_SCRIPT: ""
.ubuntu_vault_sep_variables: &ubuntu_vault_sep_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: canal
CERT_MGMT: vault
CLOUD_IMAGE: ubuntu-1604-xenial
CLOUD_REGION: us-central1-b
CLUSTER_MODE: separate
STARTUP_SCRIPT: ""
.ubuntu_flannel_rbac_variables: &ubuntu_flannel_rbac_variables
# stage: deploy-gce-special
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
KUBE_NETWORK_PLUGIN: flannel
CLOUD_IMAGE: ubuntu-1604-xenial
CLOUD_REGION: europe-west1-b
CLUSTER_MODE: separate
STARTUP_SCRIPT: ""
# Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
coreos-calico-sep:
@@ -285,24 +445,24 @@ ubuntu-weave-sep-triggers:
only: ['triggers']
# More builds for PRs/merges (manual) and triggers (auto)
debian8-canal-ha:
ubuntu-canal-ha:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *debian8_canal_ha_variables
<<: *ubuntu_canal_ha_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
debian8-canal-ha-triggers:
ubuntu-canal-ha-triggers:
stage: deploy-gce-part1
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *debian8_canal_ha_variables
<<: *ubuntu_canal_ha_variables
when: on_success
only: ['triggers']
@@ -327,7 +487,7 @@ rhel7-weave-triggers:
when: on_success
only: ['triggers']
debian8-calico:
debian8-calico-upgrade:
stage: deploy-gce-part2
<<: *job
<<: *gce
@@ -434,17 +594,55 @@ ubuntu-rkt-sep:
except: ['triggers']
only: ['master', /^pr-.*$/]
# Premoderated with manual actions
syntax-check:
ubuntu-vault-sep:
stage: deploy-gce-part1
<<: *job
stage: unit-tests
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_vault_sep_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
ubuntu-flannel-rbac-sep:
stage: deploy-gce-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_flannel_rbac_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
# Premoderated with manual actions
ci-authorized:
<<: *job
stage: moderator
before_script:
- apt-get -y install jq
script:
- ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root cluster.yml -vvv --syntax-check
- /bin/sh scripts/premoderator.sh
except: ['triggers', 'master']
syntax-check:
<<: *job
stage: unit-tests
script:
- ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root cluster.yml -vvv --syntax-check
- ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root upgrade-cluster.yml -vvv --syntax-check
- ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root reset.yml -vvv --syntax-check
- ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root extra_playbooks/upgrade-only-k8s.yml -vvv --syntax-check
except: ['triggers', 'master']
yamllint:
<<: *job
stage: unit-tests
script:
- yamllint roles
except: ['triggers', 'master']
tox-inventory-builder:
stage: unit-tests
<<: *job


@@ -1,161 +0,0 @@
sudo: required
services:
- docker
git:
depth: 5
env:
global:
GCE_USER=travis
SSH_USER=$GCE_USER
TEST_ID=$TRAVIS_JOB_NUMBER
CONTAINER_ENGINE=docker
PRIVATE_KEY=$GCE_PRIVATE_KEY
GS_ACCESS_KEY_ID=$GS_KEY
GS_SECRET_ACCESS_KEY=$GS_SECRET
ANSIBLE_KEEP_REMOTE_FILES=1
CLUSTER_MODE=default
BOOTSTRAP_OS=none
matrix:
# Debian Jessie
- >-
KUBE_NETWORK_PLUGIN=canal
CLOUD_IMAGE=debian-8-kubespray
CLOUD_REGION=asia-east1-a
CLUSTER_MODE=ha
- >-
KUBE_NETWORK_PLUGIN=calico
CLOUD_IMAGE=debian-8-kubespray
CLOUD_REGION=europe-west1-c
CLUSTER_MODE=default
# Centos 7
- >-
KUBE_NETWORK_PLUGIN=flannel
CLOUD_IMAGE=centos-7
CLOUD_REGION=asia-northeast1-c
CLUSTER_MODE=default
- >-
KUBE_NETWORK_PLUGIN=calico
CLOUD_IMAGE=centos-7
CLOUD_REGION=us-central1-b
CLUSTER_MODE=ha
# Redhat 7
- >-
KUBE_NETWORK_PLUGIN=weave
CLOUD_IMAGE=rhel-7
CLOUD_REGION=us-east1-c
CLUSTER_MODE=default
# CoreOS stable
#- >-
# KUBE_NETWORK_PLUGIN=weave
# CLOUD_IMAGE=coreos-stable
# CLOUD_REGION=europe-west1-b
# CLUSTER_MODE=ha
# BOOTSTRAP_OS=coreos
- >-
KUBE_NETWORK_PLUGIN=canal
CLOUD_IMAGE=coreos-stable
CLOUD_REGION=us-west1-b
CLUSTER_MODE=default
BOOTSTRAP_OS=coreos
# Extra cases for separated roles
- >-
KUBE_NETWORK_PLUGIN=canal
CLOUD_IMAGE=rhel-7
CLOUD_REGION=asia-northeast1-b
CLUSTER_MODE=separate
- >-
KUBE_NETWORK_PLUGIN=weave
CLOUD_IMAGE=ubuntu-1604-xenial
CLOUD_REGION=europe-west1-d
CLUSTER_MODE=separate
- >-
KUBE_NETWORK_PLUGIN=calico
CLOUD_IMAGE=coreos-stable
CLOUD_REGION=us-central1-f
CLUSTER_MODE=separate
BOOTSTRAP_OS=coreos
matrix:
allow_failures:
- env: KUBE_NETWORK_PLUGIN=weave CLOUD_IMAGE=coreos-stable CLOUD_REGION=europe-west1-b CLUSTER_MODE=ha BOOTSTRAP_OS=coreos
before_install:
# Install Ansible.
- pip install --user ansible
- pip install --user netaddr
# W/A https://github.com/ansible/ansible-modules-core/issues/5196#issuecomment-253766186
- pip install --user apache-libcloud==0.20.1
- pip install --user boto==2.9.0 -U
# Load cached docker images
- if [ -d /var/tmp/releases ]; then find /var/tmp/releases -type f -name "*.tar" | xargs -I {} sh -c "zcat {} | docker load"; fi
cache:
- directories:
- $HOME/.cache/pip
- $HOME/.local
- /var/tmp/releases
before_script:
- echo "RUN $TRAVIS_JOB_NUMBER $KUBE_NETWORK_PLUGIN $CONTAINER_ENGINE "
- mkdir -p $HOME/.ssh
- echo $PRIVATE_KEY | base64 -d > $HOME/.ssh/id_rsa
- echo $GCE_PEM_FILE | base64 -d > $HOME/.ssh/gce
- chmod 400 $HOME/.ssh/id_rsa
- chmod 755 $HOME/.local/bin/ansible-playbook
- $HOME/.local/bin/ansible-playbook --version
- cp tests/ansible.cfg .
- export PYPATH=$([ $BOOTSTRAP_OS = none ] && echo /usr/bin/python || echo /opt/bin/python)
# - "echo $HOME/.local/bin/ansible-playbook -i inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root -e '{\"cloud_provider\": true}' $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN} setup-kubernetes/cluster.yml"
script:
- >
$HOME/.local/bin/ansible-playbook tests/cloud_playbooks/create-gce.yml -i tests/local_inventory/hosts.cfg -c local $LOG_LEVEL
-e mode=${CLUSTER_MODE}
-e test_id=${TEST_ID}
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-e gce_project_id=${GCE_PROJECT_ID}
-e gce_service_account_email=${GCE_ACCOUNT}
-e gce_pem_file=${HOME}/.ssh/gce
-e cloud_image=${CLOUD_IMAGE}
-e inventory_path=${PWD}/inventory/inventory.ini
-e cloud_region=${CLOUD_REGION}
# Create cluster with netchecker app deployed
- >
$HOME/.local/bin/ansible-playbook -i inventory/inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS
-b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-e bootstrap_os=${BOOTSTRAP_OS}
-e ansible_python_interpreter=${PYPATH}
-e download_run_once=true
-e download_localhost=true
-e local_release_dir=/var/tmp/releases
-e deploy_netchecker=true
cluster.yml
# Tests Cases
## Test Master API
- $HOME/.local/bin/ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root tests/testcases/010_check-apiserver.yml $LOG_LEVEL
## Ping the between 2 pod
- $HOME/.local/bin/ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root tests/testcases/030_check-network.yml $LOG_LEVEL
## Advanced DNS checks
- $HOME/.local/bin/ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root tests/testcases/040_check-network-adv.yml $LOG_LEVEL
after_script:
- >
$HOME/.local/bin/ansible-playbook -i inventory/inventory.ini tests/cloud_playbooks/delete-gce.yml -c local $LOG_LEVEL
-e mode=${CLUSTER_MODE}
-e test_id=${TEST_ID}
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
-e gce_project_id=${GCE_PROJECT_ID}
-e gce_service_account_email=${GCE_ACCOUNT}
-e gce_pem_file=${HOME}/.ssh/gce
-e cloud_image=${CLOUD_IMAGE}
-e inventory_path=${PWD}/inventory/inventory.ini
-e cloud_region=${CLOUD_REGION}

.yamllint

@@ -0,0 +1,16 @@
---
extends: default
rules:
braces:
min-spaces-inside: 0
max-spaces-inside: 1
brackets:
min-spaces-inside: 0
max-spaces-inside: 1
indentation:
spaces: 2
indent-sequences: consistent
line-length: disable
new-line-at-end-of-file: disable
truthy: disable


@@ -1,8 +1,8 @@
![Kubernetes Logo](https://s28.postimg.org/lf3q4ocpp/k8s.png)
##Deploy a production ready kubernetes cluster
## Deploy a production ready kubernetes cluster
If you have questions, join us on the [kubernetes slack](https://slack.k8s.io), channel **#kargo**.
If you have questions, join us on the [kubernetes slack](https://slack.k8s.io), channel **#kubespray**.
- Can be deployed on **AWS, GCE, Azure, OpenStack or Baremetal**
- **High available** cluster
@@ -13,15 +13,16 @@ If you have questions, join us on the [kubernetes slack](https://slack.k8s.io),
To deploy the cluster you can use :
[**kargo-cli**](https://github.com/kubespray/kargo-cli) <br>
**Ansible** usual commands and [**inventory builder**](https://github.com/kubernetes-incubator/kargo/blob/master/contrib/inventory_builder/inventory.py) <br>
[**kubespray-cli**](https://github.com/kubespray/kubespray-cli) <br>
**Ansible** usual commands and [**inventory builder**](https://github.com/kubernetes-incubator/kubespray/blob/master/contrib/inventory_builder/inventory.py) <br>
**vagrant** by simply running `vagrant up` (for tests purposes) <br>
* [Requirements](#requirements)
* [Kargo vs ...](docs/comparisons.md)
* [Kubespray vs ...](docs/comparisons.md)
* [Getting started](docs/getting-started.md)
* [Ansible inventory and tags](docs/ansible.md)
* [Integration with existing ansible repo](docs/integration.md)
* [Deployment data variables](docs/vars.md)
* [DNS stack](docs/dns-stack.md)
* [HA mode](docs/ha-mode.md)
@@ -33,6 +34,7 @@ To deploy the cluster you can use :
* [OpenStack](docs/openstack.md)
* [AWS](docs/aws.md)
* [Azure](docs/azure.md)
* [vSphere](docs/vsphere.md)
* [Large deployments](docs/large-deployments.md)
* [Upgrades basics](docs/upgrades.md)
* [Roadmap](docs/roadmap.md)
@@ -50,16 +52,19 @@ Note: Upstart/SysV init based OS types are not supported.
Versions of supported components
--------------------------------
[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.5.1 <br>
[etcd](https://github.com/coreos/etcd/releases) v3.0.6 <br>
[flanneld](https://github.com/coreos/flannel/releases) v0.6.2 <br>
[calicoctl](https://github.com/projectcalico/calico-docker/releases) v0.23.0 <br>
[canal](https://github.com/projectcalico/canal) (given calico/flannel versions) <br>
[weave](http://weave.works/) v1.6.1 <br>
[docker](https://www.docker.com/) v1.12.5 <br>
[rkt](https://coreos.com/rkt/docs/latest/) v1.21.0 <br>
Note: rkt support as docker alternative is limited to control plane (etcd and
[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.7.3 <br>
[etcd](https://github.com/coreos/etcd/releases) v3.2.4 <br>
[flanneld](https://github.com/coreos/flannel/releases) v0.8.0 <br>
[calico](https://docs.projectcalico.org/v2.5/releases/) v2.5.0 <br>
[canal](https://github.com/projectcalico/canal) (given calico/flannel versions) <br>
[weave](http://weave.works/) v2.0.1 <br>
[docker](https://www.docker.com/) v1.13 (see note)<br>
[rkt](https://coreos.com/rkt/docs/latest/) v1.21.0 (see Note 2)<br>
Note: kubernetes doesn't support newer docker versions. Among other things kubelet currently breaks on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
Note 2: rkt support as docker alternative is limited to control plane (etcd and
kubelet). Docker is still used for Kubernetes cluster workloads and network
plugins' related OS services. Also note, only one of the supported network
plugins can be deployed for a given single cluster.
@@ -67,16 +72,19 @@ plugins can be deployed for a given single cluster.
Requirements
--------------
* **Ansible v2.3 (or newer) and python-netaddr is installed on the machine
that will run Ansible commands**
* **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
* The target servers must have **access to the Internet** in order to pull docker images.
* The target servers are configured to allow **IPv4 forwarding**.
* **Your ssh key must be copied** to all the servers part of your inventory.
* The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
in order to avoid any issue during deployment you should disable your firewall.
* The target servers are configured to allow **IPv4 forwarding**.
* **Copy your ssh keys** to all the servers part of your inventory.
* **Ansible v2.2 (or newer) and python-netaddr**
## Network plugins
You can choose between 4 network plugins. (default: `flannel` with vxlan backend)
You can choose between 4 network plugins. (default: `calico`, except Vagrant uses `flannel`)
* [**flannel**](docs/flannel.md): gre/vxlan (layer 2) networking.
@@ -84,18 +92,30 @@ You can choose between 4 network plugins. (default: `flannel` with vxlan backend
* [**canal**](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
* **weave**: Weave is a lightweight container overlay network that doesn't require an external K/V database cluster. <br>
* [**weave**](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster. <br>
(Please refer to `weave` [troubleshooting documentation](http://docs.weave.works/weave/latest_release/troubleshooting.html)).
The choice is defined with the variable `kube_network_plugin`. There is also an
option to leverage built-in cloud provider networking instead.
See also [Network checker](docs/netcheck.md).
## Community docs and resources
- [kubernetes.io/docs/getting-started-guides/kubespray/](https://kubernetes.io/docs/getting-started-guides/kubespray/)
- [kubespray, monitoring and logging](https://github.com/gregbkr/kubernetes-kargo-logging-monitoring) by @gregbkr
- [Deploy Kubernetes w/ Ansible & Terraform](https://rsmitty.github.io/Terraform-Ansible-Kubernetes/) by @rsmitty
- [Deploy a Kubernetes Cluster with Kubespray (video)](https://www.youtube.com/watch?v=N9q51JgbWu8)
## Tools and projects on top of Kubespray
- [Digital Rebar](https://github.com/digitalrebar/digitalrebar)
- [Kubespray-cli](https://github.com/kubespray/kubespray-cli)
- [Fuel-ccp-installer](https://github.com/openstack/fuel-ccp-installer)
- [Terraform Contrib](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform)
## CI Tests
![Gitlab Logo](https://s27.postimg.org/wmtaig1wz/gitlabci.png)
[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-incubator__kargo/badges/master/build.svg)](https://gitlab.com/kargo-ci/kubernetes-incubator__kargo/pipelines) </br>
[![Build graphs](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/badges/master/build.svg)](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/pipelines) </br>
CI/end-to-end tests sponsored by Google (GCE), and [teuto.net](https://teuto.net/) for OpenStack.
CI/end-to-end tests sponsored by Google (GCE), DigitalOcean, [teuto.net](https://teuto.net/) (openstack).
See the [test matrix](docs/test_cases.md) for details.


@@ -1,9 +1,40 @@
# Release Process
The Kargo Project is released on an as-needed basis. The process is as follows:
The Kubespray Project is released on an as-needed basis. The process is as follows:
1. An issue is proposing a new release with a changelog since the last release
2. At least on of the [OWNERS](OWNERS) must LGTM this release
2. At least one of the [OWNERS](OWNERS) must LGTM this release
3. An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION`
4. The release issue is closed
5. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kargo $VERSION is released`
5. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
## Major/minor releases, merge freezes and milestones
* Kubespray does not maintain stable branches for releases. Releases are tags, not
branches, and there are no backports. Therefore, there is no need for merge
freezes as well.
* Fixes for major releases (vX.x.0) and minor releases (vX.Y.x) are delivered
via maintenance releases (vX.Y.Z) and assigned to the corresponding open
milestone (vX.Y). That milestone remains open for the major/minor releases
support lifetime, which ends once the milestone closed. Then only a next major
or minor release can be done.
* Kubespray major and minor releases are bound to the given ``kube_version`` major/minor
version numbers and other components' arbitrary versions, like etcd or network plugins.
Older or newer versions are not supported and not tested for the given release.
* There is no unstable releases and no APIs, thus Kubespray doesn't follow
[semver](http://semver.org/). Every version describes only a stable release.
Breaking changes, if any introduced by changed defaults or non-contrib ansible roles'
playbooks, shall be described in the release notes. Other breaking changes, if any in
the contributed addons or bound versions of Kubernetes and other components, are
considered out of Kubespray scope and are up to the components' teams to deal with and
document.
* Minor releases can change components' versions, but not the major ``kube_version``.
Greater ``kube_version`` requires a new major or minor release. For example, if Kubespray v2.0.0
is bound to ``kube_version: 1.4.x``, ``calico_version: 0.22.0``, ``etcd_version: v3.0.6``,
then Kubespray v2.1.0 may be bound to only minor changes to ``kube_version``, like v1.5.1
and *any* changes to other components, like etcd v4, or calico 1.2.3.
And Kubespray v3.x.x shall be bound to ``kube_version: 2.x.x`` respectively.

Vagrantfile

@@ -7,6 +7,16 @@ Vagrant.require_version ">= 1.8.0"
CONFIG = File.join(File.dirname(__FILE__), "vagrant/config.rb")
COREOS_URL_TEMPLATE = "https://storage.googleapis.com/%s.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json"
SUPPORTED_OS = {
"coreos-stable" => {box: "coreos-stable", bootstrap_os: "coreos", user: "core", box_url: COREOS_URL_TEMPLATE % ["stable"]},
"coreos-alpha" => {box: "coreos-alpha", bootstrap_os: "coreos", user: "core", box_url: COREOS_URL_TEMPLATE % ["alpha"]},
"coreos-beta" => {box: "coreos-beta", bootstrap_os: "coreos", user: "core", box_url: COREOS_URL_TEMPLATE % ["beta"]},
"ubuntu" => {box: "bento/ubuntu-16.04", bootstrap_os: "ubuntu", user: "vagrant"},
"centos" => {box: "bento/centos-7.3", bootstrap_os: "centos", user: "vagrant"},
}
# Defaults for config options defined in CONFIG
$num_instances = 3
$instance_name_prefix = "k8s"
@@ -16,7 +26,12 @@ $vm_cpus = 1
$shared_folders = {}
$forwarded_ports = {}
$subnet = "172.17.8"
$box = "bento/ubuntu-16.04"
$os = "ubuntu"
# The first three nodes are etcd servers
$etcd_instances = $num_instances
# The first two nodes are masters
$kube_master_instances = $num_instances == 1 ? $num_instances : ($num_instances - 1)
$local_release_dir = "/vagrant/temp"
host_vars = {}
@@ -24,6 +39,10 @@ if File.exist?(CONFIG)
require CONFIG
end
# All nodes are kube nodes
$kube_node_instances = $num_instances
$box = SUPPORTED_OS[$os][:box]
# if $inventory is not set, try to use example
$inventory = File.join(File.dirname(__FILE__), "inventory") if ! $inventory
@@ -49,7 +68,10 @@ Vagrant.configure("2") do |config|
# always use Vagrants insecure key
config.ssh.insert_key = false
config.vm.box = $box
if SUPPORTED_OS[$os].has_key? :box_url
config.vm.box_url = SUPPORTED_OS[$os][:box_url]
end
config.ssh.username = SUPPORTED_OS[$os][:user]
# plugin conflict
if Vagrant.has_plugin?("vagrant-vbguest") then
config.vbguest.auto_update = false
@@ -80,6 +102,10 @@ Vagrant.configure("2") do |config|
end
end
$shared_folders.each do |src, dst|
config.vm.synced_folder src, dst
end
config.vm.provider :virtualbox do |vb|
vb.gui = $vm_gui
vb.memory = $vm_memory
@@ -88,12 +114,15 @@ Vagrant.configure("2") do |config|
ip = "#{$subnet}.#{i+100}"
host_vars[vm_name] = {
"ip" => ip,
#"access_ip" => ip,
"flannel_interface" => ip,
"flannel_backend_type" => "host-gw",
"local_release_dir" => "/vagrant/temp",
"download_run_once" => "False"
"ip": ip,
"flannel_interface": ip,
"flannel_backend_type": "host-gw",
"local_release_dir" => $local_release_dir,
"download_run_once": "False",
# Override the default 'calico' with flannel.
# inventory/group_vars/k8s-cluster.yml
"kube_network_plugin": "flannel",
"bootstrap_os": SUPPORTED_OS[$os][:bootstrap_os]
}
config.vm.network :private_network, ip: ip
@@ -112,12 +141,9 @@ Vagrant.configure("2") do |config|
ansible.host_vars = host_vars
#ansible.tags = ['download']
ansible.groups = {
# The first three nodes should be etcd servers
"etcd" => ["#{$instance_name_prefix}-0[1:3]"],
# The first two nodes should be masters
"kube-master" => ["#{$instance_name_prefix}-0[1:2]"],
# all nodes should be kube nodes
"kube-node" => ["#{$instance_name_prefix}-0[1:#{$num_instances}]"],
"etcd" => ["#{$instance_name_prefix}-0[1:#{$etcd_instances}]"],
"kube-master" => ["#{$instance_name_prefix}-0[1:#{$kube_master_instances}]"],
"kube-node" => ["#{$instance_name_prefix}-0[1:#{$kube_node_instances}]"],
"k8s-cluster:children" => ["kube-master", "kube-node"],
}
end


@@ -7,3 +7,7 @@ host_key_checking=False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp
stdout_callback = skippy
library = ./library
callback_whitelist = profile_tasks
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles


@@ -2,66 +2,91 @@
- hosts: localhost
gather_facts: False
roles:
- bastion-ssh-config
tags: [localhost, bastion]
- { role: kubespray-defaults}
- { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
- hosts: k8s-cluster:etcd:calico-rr
any_errors_fatal: true
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
gather_facts: false
vars:
# Need to disable pipelining for bootstrap-os as some systems have requiretty in sudoers set, which makes pipelining
# fail. bootstrap-os fixes this on these systems, so in later plays it can be enabled.
ansible_ssh_pipelining: false
roles:
- bootstrap-os
tags:
- bootstrap-os
- { role: kubespray-defaults}
- { role: bootstrap-os, tags: bootstrap-os}
- hosts: k8s-cluster:etcd:calico-rr
any_errors_fatal: true
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
vars:
ansible_ssh_pipelining: true
gather_facts: true
- hosts: k8s-cluster:etcd:calico-rr
any_errors_fatal: true
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kernel-upgrade, tags: kernel-upgrade, when: kernel_upgrade is defined and kernel_upgrade }
- { role: kubernetes/preinstall, tags: preinstall }
- { role: docker, tags: docker }
- { role: rkt, tags: rkt, when: "'rkt' in [ etcd_deployment_type, kubelet_deployment_type ]" }
- role: rkt
tags: rkt
when: "'rkt' in [etcd_deployment_type, kubelet_deployment_type, vault_deployment_type]"
- hosts: etcd:!k8s-cluster
any_errors_fatal: true
- hosts: etcd:k8s-cluster:vault
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: etcd, tags: etcd }
- { role: kubespray-defaults, when: "cert_management == 'vault'" }
- { role: vault, tags: vault, vault_bootstrap: true, when: "cert_management == 'vault'" }
- hosts: etcd
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: etcd, tags: etcd, etcd_cluster_setup: true }
- hosts: k8s-cluster
any_errors_fatal: true
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: etcd, tags: etcd }
- { role: kubespray-defaults}
- { role: etcd, tags: etcd, etcd_cluster_setup: false }
- hosts: etcd:k8s-cluster:vault
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: vault, tags: vault, when: "cert_management == 'vault'"}
- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes/node, tags: node }
- { role: network_plugin, tags: network }
- hosts: kube-master
any_errors_fatal: true
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes/master, tags: master }
- { role: kubernetes-apps/lib, tags: apps }
- { role: kubernetes-apps/network_plugin, tags: network }
- { role: kubernetes-apps/policy_controller, tags: policy-controller }
- hosts: calico-rr
any_errors_fatal: true
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: network_plugin/calico/rr, tags: network }
- hosts: k8s-cluster
any_errors_fatal: true
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: dnsmasq, when: "dns_mode == 'dnsmasq_kubedns'", tags: dnsmasq }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf }
- hosts: kube-master[0]
any_errors_fatal: true
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubernetes-apps/lib, tags: apps }
- { role: kubespray-defaults}
- { role: kubernetes-apps, tags: apps }


@@ -32,8 +32,7 @@ Conduct may be permanently removed from the project team.
This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by
opening an issue or contacting one or more of the project maintainers.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting a Kubernetes maintainer, Sarah Novotny <sarahnovotny@google.com>, and/or Dan Kohn <dan@linuxfoundation.org>.
This Code of Conduct is adapted from the Contributor Covenant
(http://contributor-covenant.org), version 1.2.0, available at
@@ -53,7 +52,7 @@ The Kubernetes team does not condone any statements by speakers contrary to thes
team reserves the right to deny entrance and/or eject from an event (without refund) any individual found to
be engaging in discriminatory or offensive speech or actions.
Please bring any concerns to to the immediate attention of Kubernetes event staff
Please bring any concerns to the immediate attention of Kubernetes event staff.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/code-of-conduct.md?pixel)]()


@@ -0,0 +1,27 @@
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["ec2:*"],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": ["elasticloadbalancing:*"],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": ["route53:*"],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::kubernetes-*"
]
}
]
}


@@ -0,0 +1,10 @@
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "ec2.amazonaws.com"},
"Action": "sts:AssumeRole"
}
]
}


@@ -0,0 +1,45 @@
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::kubernetes-*"
]
},
{
"Effect": "Allow",
"Action": "ec2:Describe*",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "ec2:AttachVolume",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "ec2:DetachVolume",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": ["route53:*"],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:BatchGetImage"
],
"Resource": "*"
}
]
}


@@ -0,0 +1,10 @@
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "ec2.amazonaws.com"},
"Action": "sts:AssumeRole"
}
]
}


@@ -0,0 +1,61 @@
#!/usr/bin/env python
import boto3
import os
import argparse
import json
class SearchEC2Tags(object):
def __init__(self):
self.parse_args()
if self.args.list:
self.search_tags()
if self.args.host:
data = {}
print json.dumps(data, indent=2)
def parse_args(self):
##Check if VPC_VISIBILITY is set, if not default to private
if "VPC_VISIBILITY" in os.environ:
self.vpc_visibility = os.environ['VPC_VISIBILITY']
else:
self.vpc_visibility = "private"
##Support --list and --host flags. We largely ignore the host one.
parser = argparse.ArgumentParser()
parser.add_argument('--list', action='store_true', default=False, help='List instances')
parser.add_argument('--host', action='store_true', help='Get all the variables about a specific instance')
self.args = parser.parse_args()
def search_tags(self):
hosts = {}
hosts['_meta'] = { 'hostvars': {} }
##Search ec2 three times to find nodes of each group type. Relies on kubespray-role key/value.
for group in ["kube-master", "kube-node", "etcd"]:
hosts[group] = []
tag_key = "kubespray-role"
tag_value = ["*"+group+"*"]
region = os.environ['REGION']
ec2 = boto3.resource('ec2', region)
instances = ec2.instances.filter(Filters=[{'Name': 'tag:'+tag_key, 'Values': tag_value}, {'Name': 'instance-state-name', 'Values': ['running']}])
for instance in instances:
if self.vpc_visibility == "public":
hosts[group].append(instance.public_dns_name)
hosts['_meta']['hostvars'][instance.public_dns_name] = {
'ansible_ssh_host': instance.public_ip_address
}
else:
hosts[group].append(instance.private_dns_name)
hosts['_meta']['hostvars'][instance.private_dns_name] = {
'ansible_ssh_host': instance.private_ip_address
}
hosts['k8s-cluster'] = {'children':['kube-master', 'kube-node']}
print json.dumps(hosts, sort_keys=True, indent=2)
SearchEC2Tags()


@@ -5,7 +5,7 @@ Provision the base infrastructure for a Kubernetes cluster by using [Azure Resou
## Status
This will provision the base infrastructure (vnet, vms, nics, ips, ...) needed for Kubernetes in Azure into the specified
Resource Group. It will not install Kubernetes itself, this has to be done in a later step by yourself (using kargo of course).
Resource Group. It will not install Kubernetes itself, this has to be done in a later step by yourself (using kubespray of course).
## Requirements
@@ -47,7 +47,7 @@ $ ./clear-rg.sh <resource_group_name>
**WARNING** this really deletes everything from your resource group, including everything that was later created by you!
## Generating an inventory for kargo
## Generating an inventory for kubespray
After you have applied the templates, you can generate an inventory with this call:
@@ -55,10 +55,10 @@ After you have applied the templates, you can generate an inventory with this ca
$ ./generate-inventory.sh <resource_group_name>
```
It will create the file ./inventory which can then be used with kargo, e.g.:
It will create the file ./inventory which can then be used with kubespray, e.g.:
```shell
$ cd kargo-root-dir
$ cd kubespray-root-dir
$ ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/group_vars/all.yml" cluster.yml
```


@@ -9,11 +9,18 @@ if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
exit 1
fi
ansible-playbook generate-templates.yml
azure group deployment create -f ./.generated/network.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/storage.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/availability-sets.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/bastion.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/masters.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/minions.json -g $AZURE_RESOURCE_GROUP
if az &>/dev/null; then
echo "azure cli 2.0 found, using it instead of 1.0"
./apply-rg_2.sh "$AZURE_RESOURCE_GROUP"
elif azure &>/dev/null; then
ansible-playbook generate-templates.yml
azure group deployment create -f ./.generated/network.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/storage.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/availability-sets.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/bastion.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/masters.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/minions.json -g $AZURE_RESOURCE_GROUP
else
echo "Azure cli not found"
fi
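A hedged usage sketch; the wrapper is assumed to take the resource group as its first argument, mirroring the `_2` scripts introduced below:

```shell
# Picks az (CLI 2.0) when available, otherwise falls back to the azure CLI 1.0 path
./apply-rg.sh <resource_group_name>
```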

contrib/azurerm/apply-rg_2.sh (new executable file, 19 lines)
View File

@@ -0,0 +1,19 @@
#!/usr/bin/env bash
set -e
AZURE_RESOURCE_GROUP="$1"
if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
echo "AZURE_RESOURCE_GROUP is missing"
exit 1
fi
ansible-playbook generate-templates.yml
az group deployment create --template-file ./.generated/network.json -g $AZURE_RESOURCE_GROUP
az group deployment create --template-file ./.generated/storage.json -g $AZURE_RESOURCE_GROUP
az group deployment create --template-file ./.generated/availability-sets.json -g $AZURE_RESOURCE_GROUP
az group deployment create --template-file ./.generated/bastion.json -g $AZURE_RESOURCE_GROUP
az group deployment create --template-file ./.generated/masters.json -g $AZURE_RESOURCE_GROUP
az group deployment create --template-file ./.generated/minions.json -g $AZURE_RESOURCE_GROUP

View File

@@ -9,6 +9,10 @@ if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
exit 1
fi
ansible-playbook generate-templates.yml
azure group deployment create -g "$AZURE_RESOURCE_GROUP" -f ./.generated/clear-rg.json -m Complete
if az &>/dev/null; then
echo "azure cli 2.0 found, using it instead of 1.0"
./clear-rg_2.sh "$AZURE_RESOURCE_GROUP"
else
ansible-playbook generate-templates.yml
azure group deployment create -g "$AZURE_RESOURCE_GROUP" -f ./.generated/clear-rg.json -m Complete
fi

contrib/azurerm/clear-rg_2.sh (new executable file, 14 lines)
View File

@@ -0,0 +1,14 @@
#!/usr/bin/env bash
set -e
AZURE_RESOURCE_GROUP="$1"
if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
echo "AZURE_RESOURCE_GROUP is missing"
exit 1
fi
ansible-playbook generate-templates.yml
az group deployment create -g "$AZURE_RESOURCE_GROUP" --template-file ./.generated/clear-rg.json --mode Complete

View File

@@ -8,5 +8,11 @@ if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
echo "AZURE_RESOURCE_GROUP is missing"
exit 1
fi
ansible-playbook generate-inventory.yml -e azure_resource_group="$AZURE_RESOURCE_GROUP"
# check if azure cli 2.0 exists else use azure cli 1.0
if az &>/dev/null; then
ansible-playbook generate-inventory_2.yml -e azure_resource_group="$AZURE_RESOURCE_GROUP"
elif azure &>/dev/null; then
ansible-playbook generate-inventory.yml -e azure_resource_group="$AZURE_RESOURCE_GROUP"
else
echo "Azure cli not found"
fi

View File

@@ -0,0 +1,5 @@
---
- hosts: localhost
  gather_facts: False
  roles:
    - generate-inventory_2

View File

@@ -1,5 +1,6 @@
# Due to some Azure limitations, this name must be globally unique
# Due to some Azure limitations (e.g. the Storage Account name must be unique),
# this name must be globally unique - it will be used as a prefix for azure components
cluster_name: example
# Set this to true if you do not want to have public IPs for your masters and minions. This will provision a bastion
@@ -17,10 +18,29 @@ minions_os_disk_size: 1000
admin_username: devops
admin_password: changeme
# MAKE SURE TO CHANGE THIS TO YOUR PUBLIC KEY to access your azure machines
ssh_public_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLRzcxbsFDdEibiyXCSdIFh7bKbXso1NqlKjEyPTptf3aBXHEhVil0lJRjGpTlpfTy7PHvXFbXIOCdv9tOmeH1uxWDDeZawgPFV6VSZ1QneCL+8bxzhjiCn8133wBSPZkN8rbFKd9eEUUBfx8ipCblYblF9FcidylwtMt5TeEmXk8yRVkPiCuEYuDplhc2H0f4PsK3pFb5aDVdaDT3VeIypnOQZZoUxHWqm6ThyHrzLJd3SrZf+RROFWW1uInIDf/SZlXojczUYoffxgT1lERfOJCHJXsqbZWugbxQBwqsVsX59+KPxFFo6nV88h3UQr63wbFx52/MXkX4WrCkAHzN ablock-vwfs@dell-lappy"
# Disable SSH password authentication. Change it to false to allow connecting over SSH with a password
disablePasswordAuthentication: true
# Azure CIDRs
azure_vnet_cidr: 10.0.0.0/8
azure_admin_cidr: 10.241.2.0/24
azure_masters_cidr: 10.0.4.0/24
azure_minions_cidr: 10.240.0.0/16
# Azure loadbalancer port to use to access your cluster
kube_apiserver_port: 6443
# Azure Networking and storage naming to use with inventory/all.yml
#azure_virtual_network_name: KubeVNET
#azure_subnet_admin_name: ad-subnet
#azure_subnet_masters_name: master-subnet
#azure_subnet_minions_name: minion-subnet
#azure_route_table_name: routetable
#azure_security_group_name: secgroup
# Storage types available are: "Standard_LRS","Premium_LRS"
#azure_storage_account_type: Standard_LRS

View File

@@ -8,4 +8,4 @@
vm_list: "{{ vm_list_cmd.stdout }}"
- name: Generate inventory
template: src=inventory.j2 dest="{{playbook_dir}}/inventory"
template: src=inventory.j2 dest="{{playbook_dir}}/inventory"

View File

@@ -0,0 +1,16 @@
---
- name: Query Azure VMs IPs
  command: az vm list-ip-addresses -o json --resource-group {{ azure_resource_group }}
  register: vm_ip_list_cmd

- name: Query Azure VMs Roles
  command: az vm list -o json --resource-group {{ azure_resource_group }}
  register: vm_list_cmd

- set_fact:
    vm_ip_list: "{{ vm_ip_list_cmd.stdout }}"
    vm_roles_list: "{{ vm_list_cmd.stdout }}"

- name: Generate inventory
  template: src=inventory.j2 dest="{{playbook_dir}}/inventory"
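The two `az` queries above can also be run by hand to inspect exactly what the role parses; the resource group name is a placeholder, and a logged-in az CLI 2.0 is assumed.

```shell
# Same queries the role issues; the JSON output is what inventory.j2 consumes
az vm list-ip-addresses -o json --resource-group <resource_group_name>
az vm list -o json --resource-group <resource_group_name>
```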

View File

@@ -0,0 +1,34 @@
{% for vm in vm_ip_list %}
{% if not use_bastion or vm.virtualMachine.name == 'bastion' %}
{{ vm.virtualMachine.name }} ansible_ssh_host={{ vm.virtualMachine.network.publicIpAddresses[0].ipAddress }} ip={{ vm.virtualMachine.network.privateIpAddresses[0] }}
{% else %}
{{ vm.virtualMachine.name }} ansible_ssh_host={{ vm.virtualMachine.network.privateIpAddresses[0] }}
{% endif %}
{% endfor %}
[kube-master]
{% for vm in vm_roles_list %}
{% if 'kube-master' in vm.tags.roles %}
{{ vm.name }}
{% endif %}
{% endfor %}
[etcd]
{% for vm in vm_roles_list %}
{% if 'etcd' in vm.tags.roles %}
{{ vm.name }}
{% endif %}
{% endfor %}
[kube-node]
{% for vm in vm_roles_list %}
{% if 'kube-node' in vm.tags.roles %}
{{ vm.name }}
{% endif %}
{% endfor %}
[k8s-cluster:children]
kube-node
kube-master

View File

@@ -1,15 +1,15 @@
apiVersion: "2015-06-15"
virtualNetworkName: "KubVNET"
virtualNetworkName: "{{ azure_virtual_network_name | default('KubeVNET') }}"
subnetAdminName: "ad-subnet"
subnetMastersName: "master-subnet"
subnetMinionsName: "minion-subnet"
subnetAdminName: "{{ azure_subnet_admin_name | default('ad-subnet') }}"
subnetMastersName: "{{ azure_subnet_masters_name | default('master-subnet') }}"
subnetMinionsName: "{{ azure_subnet_minions_name | default('minion-subnet') }}"
routeTableName: "routetable"
securityGroupName: "secgroup"
routeTableName: "{{ azure_route_table_name | default('routetable') }}"
securityGroupName: "{{ azure_security_group_name | default('secgroup') }}"
nameSuffix: "{{cluster_name}}"
nameSuffix: "{{ cluster_name }}"
availabilitySetMasters: "master-avs"
availabilitySetMinions: "minion-avs"
@@ -33,5 +33,5 @@ imageReference:
imageReferenceJson: "{{imageReference|to_json}}"
storageAccountName: "sa{{nameSuffix | replace('-', '')}}"
storageAccountType: "Standard_LRS"
storageAccountType: "{{ azure_storage_account_type | default('Standard_LRS') }}"

View File

@@ -62,8 +62,8 @@
"id": "[concat(variables('lbID'), '/backendAddressPools/kube-api-backend')]"
},
"protocol": "tcp",
"frontendPort": 443,
"backendPort": 443,
"frontendPort": "{{kube_apiserver_port}}",
"backendPort": "{{kube_apiserver_port}}",
"enableFloatingIP": false,
"idleTimeoutInMinutes": 5,
"probe": {
@@ -77,7 +77,7 @@
"name": "kube-api",
"properties": {
"protocol": "tcp",
"port": 443,
"port": "{{kube_apiserver_port}}",
"intervalInSeconds": 5,
"numberOfProbes": 2
}
@@ -193,4 +193,4 @@
} {% if not loop.last %},{% endif %}
{% endfor %}
]
}
}

View File

@@ -92,7 +92,7 @@
"description": "Allow secure kube-api",
"protocol": "Tcp",
"sourcePortRange": "*",
"destinationPortRange": "443",
"destinationPortRange": "{{kube_apiserver_port}}",
"sourceAddressPrefix": "Internet",
"destinationAddressPrefix": "*",
"access": "Allow",
@@ -106,4 +106,4 @@
"dependsOn": []
}
]
}
}

View File

@@ -40,7 +40,8 @@ import os
import re
import sys
ROLES = ['kube-master', 'all', 'k8s-cluster:children', 'kube-node', 'etcd']
ROLES = ['all', 'kube-master', 'kube-node', 'etcd', 'k8s-cluster:children',
'calico-rr', 'vault']
PROTECTED_NAMES = ROLES
AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'load']
_boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
@@ -51,12 +52,20 @@ def get_var_as_bool(name, default):
value = os.environ.get(name, '')
return _boolean_states.get(value.lower(), default)
# Configurable as shell vars start
CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory.cfg")
# Reconfigures cluster distribution at scale
SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 50))
MASSIVE_SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 200))
DEBUG = get_var_as_bool("DEBUG", True)
HOST_PREFIX = os.environ.get("HOST_PREFIX", "node")
# Configurable as shell vars end
class KargoInventory(object):
class KubesprayInventory(object):
def __init__(self, changed_hosts=None, config_file=None):
self.config = configparser.ConfigParser(allow_no_value=True,
@@ -74,11 +83,16 @@ class KargoInventory(object):
if changed_hosts:
self.hosts = self.build_hostnames(changed_hosts)
self.purge_invalid_hosts(self.hosts.keys(), PROTECTED_NAMES)
self.set_kube_master(list(self.hosts.keys())[:2])
self.set_all(self.hosts)
self.set_k8s_cluster()
self.set_kube_node(self.hosts.keys())
self.set_etcd(list(self.hosts.keys())[:3])
if len(self.hosts) >= SCALE_THRESHOLD:
self.set_kube_master(list(self.hosts.keys())[3:5])
else:
self.set_kube_master(list(self.hosts.keys())[:2])
self.set_kube_node(self.hosts.keys())
if len(self.hosts) >= SCALE_THRESHOLD:
self.set_calico_rr(list(self.hosts.keys())[:3])
else: # Show help if no options
self.show_help()
sys.exit(0)
@@ -205,13 +219,38 @@ class KargoInventory(object):
self.add_host_to_group('k8s-cluster:children', 'kube-node')
self.add_host_to_group('k8s-cluster:children', 'kube-master')
    def set_calico_rr(self, hosts):
        for host in hosts:
            if host in self.config.items('kube-master'):
                self.debug("Not adding {0} to calico-rr group because it "
                           "conflicts with kube-master group".format(host))
                continue
            if host in self.config.items('kube-node'):
                self.debug("Not adding {0} to calico-rr group because it "
                           "conflicts with kube-node group".format(host))
                continue
            self.add_host_to_group('calico-rr', host)

    def set_kube_node(self, hosts):
        for host in hosts:
            if len(self.config['all']) >= SCALE_THRESHOLD:
                if self.config.has_option('etcd', host):
                    self.debug("Not adding {0} to kube-node group because of "
                               "scale deployment and host is in etcd "
                               "group.".format(host))
                    continue
            if len(self.config['all']) >= MASSIVE_SCALE_THRESHOLD:
                if self.config.has_option('kube-master', host):
                    self.debug("Not adding {0} to kube-node group because of "
                               "scale deployment and host is in kube-master "
                               "group.".format(host))
                    continue
            self.add_host_to_group('kube-node', host)

    def set_etcd(self, hosts):
        for host in hosts:
            self.add_host_to_group('etcd', host)
            self.add_host_to_group('vault', host)

    def load_file(self, files=None):
        '''Directly loads JSON, or YAML file to inventory.'''
@@ -275,7 +314,15 @@ print_ips - Write a space-delimited list of IPs from "all" group
Advanced usage:
Add another host after initial creation: inventory.py 10.10.1.5
Delete a host: inventory.py -10.10.1.3
Delete a host by id: inventory.py -node1'''
Delete a host by id: inventory.py -node1
Configurable env vars:
DEBUG Enable debug printing. Default: True
CONFIG_FILE File to write config to. Default: ./inventory.cfg
HOST_PREFIX Host prefix for generated hosts. Default: node
SCALE_THRESHOLD Separate ETCD role if # of nodes >= 50
MASSIVE_SCALE_THRESHOLD Separate K8s master and ETCD if # of nodes >= 200
'''
print(help_text)
def print_config(self):
@@ -291,7 +338,7 @@ Delete a host by id: inventory.py -node1'''
def main(argv=None):
if not argv:
argv = sys.argv[1:]
KargoInventory(argv, CONFIG_FILE)
KubesprayInventory(argv, CONFIG_FILE)
if __name__ == "__main__":
sys.exit(main())
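A hedged usage sketch tying together the help text and the env vars above; the host addresses, config path, and Python interpreter name are examples only.

```shell
# Build an inventory from three hosts; below SCALE_THRESHOLD the first two
# become kube-master and the first three etcd, per the logic above
CONFIG_FILE=./inventory.cfg HOST_PREFIX=node DEBUG=true \
  python inventory.py 10.10.1.3 10.10.1.4 10.10.1.5

# Later, drop a host by address or by generated id, as documented in the help text
python inventory.py -10.10.1.3
python inventory.py -node2
```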

View File

@@ -1,48 +0,0 @@
---
- src: https://gitlab.com/kubespray-ansibl8s/k8s-common.git
  path: roles/apps
  scm: git
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-dashboard.git
# path: roles/apps
# scm: git
#
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-kubedns.git
# path: roles/apps
# scm: git
#
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-elasticsearch.git
# path: roles/apps
# scm: git
#
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-redis.git
# path: roles/apps
# scm: git
#
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-memcached.git
# path: roles/apps
# scm: git
#
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-postgres.git
# path: roles/apps
# scm: git
#
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-pgbouncer.git
# path: roles/apps
# scm: git
#
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-heapster.git
# path: roles/apps
# scm: git
#
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-influxdb.git
# path: roles/apps
# scm: git
#
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-kubedash.git
# path: roles/apps
# scm: git
#
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-kube-logstash.git
# path: roles/apps
# scm: git

View File

@@ -1,3 +1,3 @@
[metadata]
name = kargo-inventory-builder
name = kubespray-inventory-builder
version = 0.1

View File

@@ -31,7 +31,7 @@ class TestInventory(unittest.TestCase):
sys_mock.exit = mock.Mock()
super(TestInventory, self).setUp()
self.data = ['10.90.3.2', '10.90.3.3', '10.90.3.4']
self.inv = inventory.KargoInventory()
self.inv = inventory.KubesprayInventory()
def test_get_ip_from_opts(self):
optstring = "ansible_host=10.90.3.2 ip=10.90.3.2"
@@ -210,3 +210,31 @@ class TestInventory(unittest.TestCase):
self.inv.set_etcd([host])
self.assertTrue(host in self.inv.config[group])
    def test_scale_scenario_one(self):
        num_nodes = 50
        hosts = OrderedDict()

        for hostid in range(1, num_nodes+1):
            hosts["node" + str(hostid)] = ""

        self.inv.set_all(hosts)
        self.inv.set_etcd(hosts.keys()[0:3])
        self.inv.set_kube_master(hosts.keys()[0:2])
        self.inv.set_kube_node(hosts.keys())
        for h in range(3):
            self.assertFalse(hosts.keys()[h] in self.inv.config['kube-node'])

    def test_scale_scenario_two(self):
        num_nodes = 500
        hosts = OrderedDict()

        for hostid in range(1, num_nodes+1):
            hosts["node" + str(hostid)] = ""

        self.inv.set_all(hosts)
        self.inv.set_etcd(hosts.keys()[0:3])
        self.inv.set_kube_master(hosts.keys()[3:5])
        self.inv.set_kube_node(hosts.keys())
        for h in range(5):
            self.assertFalse(hosts.keys()[h] in self.inv.config['kube-node'])

View File

@@ -11,7 +11,7 @@ deps =
-r{toxinidir}/test-requirements.txt
setenv = VIRTUAL_ENV={envdir}
passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
commands = py.test -vv #{posargs:./tests}
commands = pytest -vv #{posargs:./tests}
[testenv:pep8]
usedevelop = False
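Assuming tox is installed and run from the inventory builder directory, the environments referenced above can be exercised like this (the default envlist is not shown in the hunk, so running bare `tox` is an assumption):

```shell
tox          # run the default test environments (pytest -vv per the hunk above)
tox -e pep8  # style-only environment defined in tox.ini
```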

View File

@@ -0,0 +1,11 @@
# Kubespray on KVM Virtual Machines hypervisor preparation
A simple playbook to ensure your system has the right settings to enable Kubespray
deployment on VMs.
This playbook does not create Virtual Machines, nor does it run Kubespray itself.
### User creation
If you want to create a user for running Kubespray deployment, you should specify
both `k8s_deployment_user` and `k8s_deployment_user_pkey_path`.

View File

@@ -0,0 +1,3 @@
#k8s_deployment_user: kubespray
#k8s_deployment_user_pkey_path: /tmp/ssh_rsa

View File

@@ -0,0 +1,8 @@
---
- hosts: localhost
  gather_facts: False
  become: yes
  vars:
    - bootstrap_os: none
  roles:
    - kvm-setup

View File

@@ -0,0 +1,46 @@
---
- name: Upgrade all packages to the latest version (yum)
  yum:
    name: '*'
    state: latest
  when: ansible_os_family == "RedHat"

- name: Install required packages
  yum:
    name: "{{ item }}"
    state: latest
  with_items:
    - bind-utils
    - ntp
  when: ansible_os_family == "RedHat"

- name: Install required packages
  apt:
    upgrade: yes
    update_cache: yes
    cache_valid_time: 3600
    name: "{{ item }}"
    state: latest
    install_recommends: no
  with_items:
    - dnsutils
    - ntp
  when: ansible_os_family == "Debian"

- name: Upgrade all packages to the latest version (apt)
  shell: apt-get -o \
      Dpkg::Options::=--force-confdef -o \
      Dpkg::Options::=--force-confold -q -y \
      dist-upgrade
  environment:
    DEBIAN_FRONTEND: noninteractive
  when: ansible_os_family == "Debian"

# Create deployment user if required
- include: user.yml
  when: k8s_deployment_user is defined

# Set proper sysctl values
- include: sysctl.yml

View File

@@ -0,0 +1,46 @@
---
- name: Load br_netfilter module
  modprobe:
    name: br_netfilter
    state: present
  register: br_netfilter

- name: Add br_netfilter into /etc/modules
  lineinfile:
    dest: /etc/modules
    state: present
    line: 'br_netfilter'
  when: br_netfilter is defined and ansible_os_family == 'Debian'

- name: Add br_netfilter into /etc/modules-load.d/kubespray.conf
  copy:
    dest: /etc/modules-load.d/kubespray.conf
    content: |-
      ### This file is managed by Ansible
      br-netfilter
    owner: root
    group: root
    mode: 0644
  when: br_netfilter is defined

- name: Enable net.ipv4.ip_forward in sysctl
  sysctl:
    name: net.ipv4.ip_forward
    value: 1
    sysctl_file: /etc/sysctl.d/ipv4-ip_forward.conf
    state: present
    reload: yes

- name: Set bridge-nf-call-{arptables,iptables} to 0
  sysctl:
    name: "{{ item }}"
    state: present
    value: 0
    sysctl_file: /etc/sysctl.d/bridge-nf-call.conf
    reload: yes
  with_items:
    - net.bridge.bridge-nf-call-arptables
    - net.bridge.bridge-nf-call-ip6tables
    - net.bridge.bridge-nf-call-iptables
  when: br_netfilter is defined
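After the role has run, the result can be spot-checked on the target host; a minimal sketch, assuming a Debian or RedHat VM prepared by the tasks above:

```shell
lsmod | grep br_netfilter                  # loaded by the modprobe task
sysctl net.ipv4.ip_forward                 # expected: 1
sysctl net.bridge.bridge-nf-call-iptables  # expected: 0
cat /etc/sysctl.d/bridge-nf-call.conf      # file written via sysctl_file
```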

View File

@@ -0,0 +1,46 @@
---
- name: Create user {{ k8s_deployment_user }}
  user:
    name: "{{ k8s_deployment_user }}"
    groups: adm
    shell: /bin/bash

- name: Ensure that .ssh exists
  file:
    path: "/home/{{ k8s_deployment_user }}/.ssh"
    state: directory
    owner: "{{ k8s_deployment_user }}"
    group: "{{ k8s_deployment_user }}"

- name: Configure sudo for deployment user
  copy:
    content: |
      %{{ k8s_deployment_user }} ALL=(ALL) NOPASSWD: ALL
    dest: "/etc/sudoers.d/55-k8s-deployment"
    owner: root
    group: root
    mode: 0644

- name: Write private SSH key
  copy:
    src: "{{ k8s_deployment_user_pkey_path }}"
    dest: "/home/{{ k8s_deployment_user }}/.ssh/id_rsa"
    mode: 0400
    owner: "{{ k8s_deployment_user }}"
    group: "{{ k8s_deployment_user }}"
  when: k8s_deployment_user_pkey_path is defined

- name: Write public SSH key
  shell: "ssh-keygen -y -f /home/{{ k8s_deployment_user }}/.ssh/id_rsa \
          > /home/{{ k8s_deployment_user }}/.ssh/authorized_keys"
  args:
    creates: "/home/{{ k8s_deployment_user }}/.ssh/authorized_keys"
  when: k8s_deployment_user_pkey_path is defined

- name: Fix ssh-pub-key permissions
  file:
    path: "/home/{{ k8s_deployment_user }}/.ssh/authorized_keys"
    mode: 0600
    owner: "{{ k8s_deployment_user }}"
    group: "{{ k8s_deployment_user }}"
  when: k8s_deployment_user_pkey_path is defined
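To confirm the deployment user works end to end, a hedged check from the machine that holds the private key (the user name and key path follow the commented defaults shown earlier and are only examples):

```shell
# Log in with the key whose public half was appended to authorized_keys,
# then verify the passwordless sudo granted by /etc/sudoers.d/55-k8s-deployment
ssh -i /tmp/ssh_rsa kubespray@<target-host> 'sudo -n true && echo sudo OK'
```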

View File

@@ -1,4 +1,4 @@
# Deploying a Kargo Kubernetes Cluster with GlusterFS
# Deploying a Kubespray Kubernetes Cluster with GlusterFS
You can either deploy using Ansible on its own by supplying your own inventory file or by using Terraform to create the VMs and then providing a dynamic inventory to Ansible. The following two sections are self-contained; you don't need to go through one to use the other. So, if you want to provision with Terraform, you can skip the **Using an Ansible inventory** section, and if you want to provision with a pre-built ansible inventory, you can skip the **Using Terraform and Ansible** section.
@@ -6,7 +6,7 @@ You can either deploy using Ansible on its own by supplying your own inventory f
In the same directory as this ReadMe file you should find a file named `inventory.example` which contains an example setup. Please note that, in addition to the Kubernetes nodes/masters, we define a set of machines for GlusterFS and we add them to the group `[gfs-cluster]`, which in turn is added to the larger `[network-storage]` group as a child group.
Change that file to reflect your local setup (adding more machines or removing them and setting the adequate ip numbers), and save it to `inventory/k8s_gfs_inventory`. Make sure that the settings on `inventory/group_vars/all.yml` make sense with your deployment. Then execute change to the kargo root folder, and execute (supposing that the machines are all using ubuntu):
Change that file to reflect your local setup (adding more machines or removing them and setting the adequate ip numbers), and save it to `inventory/k8s_gfs_inventory`. Make sure that the settings on `inventory/group_vars/all.yml` make sense with your deployment. Then change to the kubespray root folder and execute (assuming the machines are all running Ubuntu):
```
ansible-playbook -b --become-user=root -i inventory/k8s_gfs_inventory --user=ubuntu ./cluster.yml
@@ -28,7 +28,7 @@ k8s-master-node-2 ansible_ssh_host=192.168.0.146 ip=192.168.0.146 ansible_ssh_us
## Using Terraform and Ansible
First step is to fill in a `my-kargo-gluster-cluster.tfvars` file with the specification desired for your cluster. An example with all required variables would look like:
First step is to fill in a `my-kubespray-gluster-cluster.tfvars` file with the specification desired for your cluster. An example with all required variables would look like:
```
cluster_name = "cluster1"
@@ -65,15 +65,15 @@ $ echo Setting up Terraform creds && \
export TF_VAR_auth_url=${OS_AUTH_URL}
```
Then, standing on the kargo directory (root base of the Git checkout), issue the following terraform command to create the VMs for the cluster:
Then, standing on the kubespray directory (root base of the Git checkout), issue the following terraform command to create the VMs for the cluster:
```
terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kargo-gluster-cluster.tfvars contrib/terraform/openstack
terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kubespray-gluster-cluster.tfvars contrib/terraform/openstack
```
This will create both your Kubernetes and Gluster VMs. Make sure that the ansible file `contrib/terraform/openstack/group_vars/all.yml` includes any ansible variable that you want to set up (like, for instance, the type of machine for bootstrapping).
Then, provision your Kubernetes (Kargo) cluster with the following ansible call:
Then, provision your Kubernetes (kubespray) cluster with the following ansible call:
```
ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./cluster.yml
@@ -88,5 +88,5 @@ ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./co
If you need to destroy the cluster, you can run:
```
terraform destroy -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kargo-gluster-cluster.tfvars contrib/terraform/openstack
terraform destroy -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kubespray-gluster-cluster.tfvars contrib/terraform/openstack
```

View File

@@ -1 +0,0 @@
../../../../../roles/kubernetes-apps/lib

View File

@@ -0,0 +1,60 @@
%global srcname ansible_kubespray
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
Name: ansible-kubespray
Version: XXX
Release: XXX
Summary: Ansible modules for installing Kubernetes
Group: System Environment/Libraries
License: ASL 2.0
Vendor: Kubespray <smainklh@gmail.com>
Url: https://github.com/kubernetes-incubator/kubespray
Source0: https://github.com/kubernetes-incubator/kubespray/archive/%{upstream_version}.tar.gz
BuildArch: noarch
BuildRequires: git
BuildRequires: python2-devel
BuildRequires: python-setuptools
BuildRequires: python-d2to1
BuildRequires: python-pbr
Requires: ansible
Requires: python-jinja2
Requires: python-netaddr
%description
Ansible-kubespray is a set of Ansible modules and playbooks for
installing a Kubernetes cluster. If you have questions, join us
on the https://slack.k8s.io, channel '#kubespray'.
%prep
%autosetup -n %{name}-%{upstream_version} -S git
%build
%{__python2} setup.py build
%install
export PBR_VERSION=%{version}
export SKIP_PIP_INSTALL=1
%{__python2} setup.py install --skip-build --root %{buildroot}
%files
%doc README.md
%doc inventory/inventory.example
%config /etc/kubespray/ansible.cfg
%config /etc/kubespray/inventory/group_vars/all.yml
%config /etc/kubespray/inventory/group_vars/k8s-cluster.yml
%license LICENSE
%{python2_sitelib}/%{srcname}-%{version}-py%{python2_version}.egg-info
/usr/local/share/kubespray/roles/
/usr/local/share/kubespray/playbooks/
%defattr(-,root,root)
%changelog
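A hypothetical local build of this spec, assuming rpm-build plus the listed BuildRequires are installed, the spec is saved as `ansible-kubespray.spec`, and the Source0 tarball has been placed in `~/rpmbuild/SOURCES`:

```shell
rpmbuild -ba ansible-kubespray.spec   # produces the noarch RPM and the SRPM
```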

View File

@@ -1,2 +1,2 @@
*.tfstate*
inventory
.terraform

View File

@@ -1,261 +0,0 @@
variable "deploymentName" {
type = "string"
description = "The desired name of your deployment."
}
variable "numControllers"{
type = "string"
description = "Desired # of controllers."
}
variable "numEtcd" {
type = "string"
description = "Desired # of etcd nodes. Should be an odd number."
}
variable "numNodes" {
type = "string"
description = "Desired # of nodes."
}
variable "volSizeController" {
type = "string"
description = "Volume size for the controllers (GB)."
}
variable "volSizeEtcd" {
type = "string"
description = "Volume size for etcd (GB)."
}
variable "volSizeNodes" {
type = "string"
description = "Volume size for nodes (GB)."
}
variable "subnet" {
type = "string"
description = "The subnet in which to put your cluster."
}
variable "securityGroups" {
type = "string"
description = "The sec. groups in which to put your cluster."
}
variable "ami"{
type = "string"
description = "AMI to use for all VMs in cluster."
}
variable "SSHKey" {
type = "string"
description = "SSH key to use for VMs."
}
variable "master_instance_type" {
type = "string"
description = "Size of VM to use for masters."
}
variable "etcd_instance_type" {
type = "string"
description = "Size of VM to use for etcd."
}
variable "node_instance_type" {
type = "string"
description = "Size of VM to use for nodes."
}
variable "terminate_protect" {
type = "string"
default = "false"
}
variable "awsRegion" {
type = "string"
}
provider "aws" {
region = "${var.awsRegion}"
}
variable "iam_prefix" {
type = "string"
description = "Prefix name for IAM profiles"
}
resource "aws_iam_instance_profile" "kubernetes_master_profile" {
name = "${var.iam_prefix}_kubernetes_master_profile"
roles = ["${aws_iam_role.kubernetes_master_role.name}"]
}
resource "aws_iam_role" "kubernetes_master_role" {
name = "${var.iam_prefix}_kubernetes_master_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "ec2.amazonaws.com"},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_role_policy" "kubernetes_master_policy" {
name = "${var.iam_prefix}_kubernetes_master_policy"
role = "${aws_iam_role.kubernetes_master_role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["ec2:*"],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": ["elasticloadbalancing:*"],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_instance_profile" "kubernetes_node_profile" {
name = "${var.iam_prefix}_kubernetes_node_profile"
roles = ["${aws_iam_role.kubernetes_node_role.name}"]
}
resource "aws_iam_role" "kubernetes_node_role" {
name = "${var.iam_prefix}_kubernetes_node_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "ec2.amazonaws.com"},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_role_policy" "kubernetes_node_policy" {
name = "${var.iam_prefix}_kubernetes_node_policy"
role = "${aws_iam_role.kubernetes_node_role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "ec2:Describe*",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "ec2:AttachVolume",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "ec2:DetachVolume",
"Resource": "*"
}
]
}
EOF
}
resource "aws_instance" "master" {
count = "${var.numControllers}"
ami = "${var.ami}"
instance_type = "${var.master_instance_type}"
subnet_id = "${var.subnet}"
vpc_security_group_ids = ["${var.securityGroups}"]
key_name = "${var.SSHKey}"
disable_api_termination = "${var.terminate_protect}"
iam_instance_profile = "${aws_iam_instance_profile.kubernetes_master_profile.id}"
root_block_device {
volume_size = "${var.volSizeController}"
}
tags {
Name = "${var.deploymentName}-master-${count.index + 1}"
}
}
resource "aws_instance" "etcd" {
count = "${var.numEtcd}"
ami = "${var.ami}"
instance_type = "${var.etcd_instance_type}"
subnet_id = "${var.subnet}"
vpc_security_group_ids = ["${var.securityGroups}"]
key_name = "${var.SSHKey}"
disable_api_termination = "${var.terminate_protect}"
root_block_device {
volume_size = "${var.volSizeEtcd}"
}
tags {
Name = "${var.deploymentName}-etcd-${count.index + 1}"
}
}
resource "aws_instance" "minion" {
count = "${var.numNodes}"
ami = "${var.ami}"
instance_type = "${var.node_instance_type}"
subnet_id = "${var.subnet}"
vpc_security_group_ids = ["${var.securityGroups}"]
key_name = "${var.SSHKey}"
disable_api_termination = "${var.terminate_protect}"
iam_instance_profile = "${aws_iam_instance_profile.kubernetes_node_profile.id}"
root_block_device {
volume_size = "${var.volSizeNodes}"
}
tags {
Name = "${var.deploymentName}-minion-${count.index + 1}"
}
}
output "kubernetes_master_profile" {
value = "${aws_iam_instance_profile.kubernetes_master_profile.id}"
}
output "kubernetes_node_profile" {
value = "${aws_iam_instance_profile.kubernetes_node_profile.id}"
}
output "master-ip" {
value = "${join(", ", aws_instance.master.*.private_ip)}"
}
output "etcd-ip" {
value = "${join(", ", aws_instance.etcd.*.private_ip)}"
}
output "minion-ip" {
value = "${join(", ", aws_instance.minion.*.private_ip)}"
}

View File

@@ -1,37 +0,0 @@
variable "SSHUser" {
type = "string"
description = "SSH User for VMs."
}
resource "null_resource" "ansible-provision" {
depends_on = ["aws_instance.master","aws_instance.etcd","aws_instance.minion"]
##Create Master Inventory
provisioner "local-exec" {
command = "echo \"[kube-master]\" > inventory"
}
provisioner "local-exec" {
command = "echo \"${join("\n",formatlist("%s ansible_ssh_user=%s", aws_instance.master.*.private_ip, var.SSHUser))}\" >> inventory"
}
##Create ETCD Inventory
provisioner "local-exec" {
command = "echo \"\n[etcd]\" >> inventory"
}
provisioner "local-exec" {
command = "echo \"${join("\n",formatlist("%s ansible_ssh_user=%s", aws_instance.etcd.*.private_ip, var.SSHUser))}\" >> inventory"
}
##Create Nodes Inventory
provisioner "local-exec" {
command = "echo \"\n[kube-node]\" >> inventory"
}
provisioner "local-exec" {
command = "echo \"${join("\n",formatlist("%s ansible_ssh_user=%s", aws_instance.minion.*.private_ip, var.SSHUser))}\" >> inventory"
}
provisioner "local-exec" {
command = "echo \"\n[k8s-cluster:children]\nkube-node\nkube-master\" >> inventory"
}
}

View File

@@ -2,27 +2,56 @@
**Overview:**
- This will create nodes in a VPC inside of AWS
This project will create:
* VPC with Public and Private Subnets in # Availability Zones
* Bastion Hosts and NAT Gateways in the Public Subnet
* A dynamic number of masters, etcd, and worker nodes in the Private Subnet
* evenly distributed over the # of Availability Zones
* AWS ELB in the Public Subnet for accessing the Kubernetes API from the internet
- A dynamic number of masters, etcd, and nodes can be created
- These scripts currently expect Private IP connectivity with the nodes that are created. This means that you may need a tunnel to your VPC or to run these scripts from a VM inside the VPC. Will be looking into how to work around this later.
**Requirements**
- Terraform 0.8.7 or newer
**How to Use:**
- Export the variables for your Amazon credentials:
- Export the variables for your AWS credentials or edit `credentials.tfvars`:
```
export AWS_ACCESS_KEY_ID="xxx"
export AWS_SECRET_ACCESS_KEY="yyy"
export AWS_ACCESS_KEY_ID="www"
export AWS_SECRET_ACCESS_KEY="xxx"
export AWS_SSH_KEY_NAME="yyy"
export AWS_DEFAULT_REGION="zzz"
```
- Rename `contrib/terraform/aws/terraform.tfvars.example` to `terraform.tfvars`
- Update `contrib/terraform/aws/terraform.tfvars` with your data
- Allocate new AWS Elastic IPs: Depending on # of Availability Zones used (2 for each AZ)
- Create an AWS EC2 SSH Key
- Run with `terraform apply --var-file="credentials.tfvars"` or `terraform apply` depending on whether you exported your AWS credentials
- Terraform automatically creates an Ansible Inventory file called `hosts` with the created infrastructure in the directory `inventory`
- Once the infrastructure is created, you can run the kubespray playbooks and supply inventory/hosts with the `-i` flag.
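A hedged end-to-end sketch of those last two steps; the working directory, inventory path, and SSH user depend on your checkout and AMI and are assumptions here:

```shell
cd contrib/terraform/aws
terraform apply --var-file="credentials.tfvars"

# back at the repository root, point Ansible at the generated inventory
cd ../../..
ansible-playbook -i ./inventory/hosts ./cluster.yml -b -u <ami-ssh-user>
```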
**Troubleshooting**
***Remaining AWS IAM Instance Profile***:
If the cluster was destroyed without using Terraform it is possible that
the AWS IAM Instance Profiles still remain. To delete them you can use
the `AWS CLI` with the following command:
```
aws iam delete-instance-profile --region <region_name> --instance-profile-name <profile_name>
```
- Update contrib/terraform/aws/terraform.tfvars with your data
***Ansible Inventory doesn't get created:***
- Run with `terraform apply`
It could happen that Terraform doesn't create an Ansible Inventory file automatically. If this is the case, copy the output after `inventory=`, create a file named `hosts` in the directory `inventory`, and paste the inventory into that file.
- Once the infrastructure is created, you can run the kubespray playbooks and supply contrib/terraform/aws/inventory with the `-i` flag.
**Architecture**
**Future Work:**
Pictured is an AWS Infrastructure created with this Terraform project distributed over two Availability Zones.
- Update the inventory creation file to be something a little more reasonable. It's just a local-exec from Terraform now, using terraform.py or something may make sense in the future.
![AWS Infrastructure with Terraform ](docs/aws_kubespray.png)

View File

@@ -0,0 +1,186 @@
terraform {
required_version = ">= 0.8.7"
}
provider "aws" {
access_key = "${var.AWS_ACCESS_KEY_ID}"
secret_key = "${var.AWS_SECRET_ACCESS_KEY}"
region = "${var.AWS_DEFAULT_REGION}"
}
/*
* Calling modules who create the initial AWS VPC / AWS ELB
* and AWS IAM Roles for Kubernetes Deployment
*/
module "aws-vpc" {
source = "modules/vpc"
aws_cluster_name = "${var.aws_cluster_name}"
aws_vpc_cidr_block = "${var.aws_vpc_cidr_block}"
aws_avail_zones="${var.aws_avail_zones}"
aws_cidr_subnets_private="${var.aws_cidr_subnets_private}"
aws_cidr_subnets_public="${var.aws_cidr_subnets_public}"
}
module "aws-elb" {
source = "modules/elb"
aws_cluster_name="${var.aws_cluster_name}"
aws_vpc_id="${module.aws-vpc.aws_vpc_id}"
aws_avail_zones="${var.aws_avail_zones}"
aws_subnet_ids_public="${module.aws-vpc.aws_subnet_ids_public}"
aws_elb_api_port = "${var.aws_elb_api_port}"
k8s_secure_api_port = "${var.k8s_secure_api_port}"
}
module "aws-iam" {
source = "modules/iam"
aws_cluster_name="${var.aws_cluster_name}"
}
/*
* Create Bastion Instances in AWS
*
*/
resource "aws_instance" "bastion-server" {
ami = "${var.aws_bastion_ami}"
instance_type = "${var.aws_bastion_size}"
count = "${length(var.aws_cidr_subnets_public)}"
associate_public_ip_address = true
availability_zone = "${element(var.aws_avail_zones,count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_public,count.index)}"
vpc_security_group_ids = [ "${module.aws-vpc.aws_security_group}" ]
key_name = "${var.AWS_SSH_KEY_NAME}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-bastion-${count.index}"
Cluster = "${var.aws_cluster_name}"
Role = "bastion-${var.aws_cluster_name}-${count.index}"
}
}
/*
* Create K8s Master and worker nodes and etcd instances
*
*/
resource "aws_instance" "k8s-master" {
ami = "${var.aws_cluster_ami}"
instance_type = "${var.aws_kube_master_size}"
count = "${var.aws_kube_master_num}"
availability_zone = "${element(var.aws_avail_zones,count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
vpc_security_group_ids = [ "${module.aws-vpc.aws_security_group}" ]
iam_instance_profile = "${module.aws-iam.kube-master-profile}"
key_name = "${var.AWS_SSH_KEY_NAME}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-master${count.index}"
Cluster = "${var.aws_cluster_name}"
Role = "master"
}
}
resource "aws_elb_attachment" "attach_master_nodes" {
count = "${var.aws_kube_master_num}"
elb = "${module.aws-elb.aws_elb_api_id}"
instance = "${element(aws_instance.k8s-master.*.id,count.index)}"
}
resource "aws_instance" "k8s-etcd" {
ami = "${var.aws_cluster_ami}"
instance_type = "${var.aws_etcd_size}"
count = "${var.aws_etcd_num}"
availability_zone = "${element(var.aws_avail_zones,count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
vpc_security_group_ids = [ "${module.aws-vpc.aws_security_group}" ]
key_name = "${var.AWS_SSH_KEY_NAME}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-etcd${count.index}"
Cluster = "${var.aws_cluster_name}"
Role = "etcd"
}
}
resource "aws_instance" "k8s-worker" {
ami = "${var.aws_cluster_ami}"
instance_type = "${var.aws_kube_worker_size}"
count = "${var.aws_kube_worker_num}"
availability_zone = "${element(var.aws_avail_zones,count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
vpc_security_group_ids = [ "${module.aws-vpc.aws_security_group}" ]
iam_instance_profile = "${module.aws-iam.kube-worker-profile}"
key_name = "${var.AWS_SSH_KEY_NAME}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-worker${count.index}"
Cluster = "${var.aws_cluster_name}"
Role = "worker"
}
}
/*
* Create Kubespray Inventory File
*
*/
data "template_file" "inventory" {
template = "${file("${path.module}/templates/inventory.tpl")}"
vars {
public_ip_address_bastion = "${join("\n",formatlist("bastion ansible_ssh_host=%s" , aws_instance.bastion-server.*.public_ip))}"
connection_strings_master = "${join("\n",formatlist("%s ansible_ssh_host=%s",aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}"
connection_strings_node = "${join("\n", formatlist("%s ansible_ssh_host=%s", aws_instance.k8s-worker.*.tags.Name, aws_instance.k8s-worker.*.private_ip))}"
connection_strings_etcd = "${join("\n",formatlist("%s ansible_ssh_host=%s", aws_instance.k8s-etcd.*.tags.Name, aws_instance.k8s-etcd.*.private_ip))}"
list_master = "${join("\n",aws_instance.k8s-master.*.tags.Name)}"
list_node = "${join("\n",aws_instance.k8s-worker.*.tags.Name)}"
list_etcd = "${join("\n",aws_instance.k8s-etcd.*.tags.Name)}"
elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
elb_api_port = "loadbalancer_apiserver.port=${var.aws_elb_api_port}"
kube_insecure_apiserver_address = "kube_apiserver_insecure_bind_address: ${var.kube_insecure_apiserver_address}"
}
}
resource "null_resource" "inventories" {
provisioner "local-exec" {
command = "echo '${data.template_file.inventory.rendered}' > ../../../inventory/hosts"
}
}
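If the `local-exec` provisioner does not write the file, the same content can be recovered from the Terraform output of the same name; a sketch, assuming the command runs from `contrib/terraform/aws` with local state:

```shell
terraform output inventory > ../../../inventory/hosts
```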

View File

@@ -0,0 +1,8 @@
#AWS Access Key
AWS_ACCESS_KEY_ID = ""
#AWS Secret Key
AWS_SECRET_ACCESS_KEY = ""
#EC2 SSH Key Name
AWS_SSH_KEY_NAME = ""
#AWS Region
AWS_DEFAULT_REGION = "eu-central-1"

Binary file not shown (new image, 114 KiB).

View File

@@ -0,0 +1,58 @@
resource "aws_security_group" "aws-elb" {
name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
vpc_id = "${var.aws_vpc_id}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
}
}
resource "aws_security_group_rule" "aws-allow-api-access" {
type = "ingress"
from_port = "${var.aws_elb_api_port}"
to_port = "${var.k8s_secure_api_port}"
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.aws-elb.id}"
}
resource "aws_security_group_rule" "aws-allow-api-egress" {
type = "egress"
from_port = 0
to_port = 65535
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.aws-elb.id}"
}
# Create a new AWS ELB for K8S API
resource "aws_elb" "aws-elb-api" {
name = "kubernetes-elb-${var.aws_cluster_name}"
subnets = ["${var.aws_subnet_ids_public}"]
security_groups = ["${aws_security_group.aws-elb.id}"]
listener {
instance_port = "${var.k8s_secure_api_port}"
instance_protocol = "tcp"
lb_port = "${var.aws_elb_api_port}"
lb_protocol = "tcp"
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 3
target = "HTTP:8080/"
interval = 30
}
cross_zone_load_balancing = true
idle_timeout = 400
connection_draining = true
connection_draining_timeout = 400
tags {
Name = "kubernetes-${var.aws_cluster_name}-elb-api"
}
}

View File

@@ -0,0 +1,7 @@
output "aws_elb_api_id" {
value = "${aws_elb.aws-elb-api.id}"
}
output "aws_elb_api_fqdn" {
value = "${aws_elb.aws-elb-api.dns_name}"
}

View File

@@ -0,0 +1,28 @@
variable "aws_cluster_name" {
description = "Name of Cluster"
}
variable "aws_vpc_id" {
description = "AWS VPC ID"
}
variable "aws_elb_api_port" {
description = "Port for AWS ELB"
}
variable "k8s_secure_api_port" {
description = "Secure Port of K8S API Server"
}
variable "aws_avail_zones" {
description = "Availability Zones Used"
type = "list"
}
variable "aws_subnet_ids_public" {
description = "IDs of Public Subnets"
type = "list"
}

View File

@@ -0,0 +1,138 @@
#Add AWS Roles for Kubernetes
resource "aws_iam_role" "kube-master" {
name = "kubernetes-${var.aws_cluster_name}-master"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
}
}
]
}
EOF
}
resource "aws_iam_role" "kube-worker" {
name = "kubernetes-${var.aws_cluster_name}-node"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
}
}
]
}
EOF
}
#Add AWS Policies for Kubernetes
resource "aws_iam_role_policy" "kube-master" {
name = "kubernetes-${var.aws_cluster_name}-master"
role = "${aws_iam_role.kube-master.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["ec2:*"],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": ["elasticloadbalancing:*"],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": ["route53:*"],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::kubernetes-*"
]
}
]
}
EOF
}
resource "aws_iam_role_policy" "kube-worker" {
name = "kubernetes-${var.aws_cluster_name}-node"
role = "${aws_iam_role.kube-worker.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::kubernetes-*"
]
},
{
"Effect": "Allow",
"Action": "ec2:Describe*",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "ec2:AttachVolume",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "ec2:DetachVolume",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": ["route53:*"],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:BatchGetImage"
],
"Resource": "*"
}
]
}
EOF
}
#Create AWS Instance Profiles
resource "aws_iam_instance_profile" "kube-master" {
name = "kube_${var.aws_cluster_name}_master_profile"
roles = ["${aws_iam_role.kube-master.name}"]
}
resource "aws_iam_instance_profile" "kube-worker" {
name = "kube_${var.aws_cluster_name}_node_profile"
roles = ["${aws_iam_role.kube-worker.name}"]
}

View File

@@ -0,0 +1,7 @@
output "kube-master-profile" {
value = "${aws_iam_instance_profile.kube-master.name }"
}
output "kube-worker-profile" {
value = "${aws_iam_instance_profile.kube-worker.name }"
}

View File

@@ -0,0 +1,3 @@
variable "aws_cluster_name" {
description = "Name of Cluster"
}

View File

@@ -0,0 +1,138 @@
resource "aws_vpc" "cluster-vpc" {
cidr_block = "${var.aws_vpc_cidr_block}"
#DNS Related Entries
enable_dns_support = true
enable_dns_hostnames = true
tags {
Name = "kubernetes-${var.aws_cluster_name}-vpc"
}
}
resource "aws_eip" "cluster-nat-eip" {
count = "${length(var.aws_cidr_subnets_public)}"
vpc = true
}
resource "aws_internet_gateway" "cluster-vpc-internetgw" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-internetgw"
}
}
resource "aws_subnet" "cluster-vpc-subnets-public" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
count="${length(var.aws_avail_zones)}"
availability_zone = "${element(var.aws_avail_zones, count.index)}"
cidr_block = "${element(var.aws_cidr_subnets_public, count.index)}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public"
}
}
resource "aws_nat_gateway" "cluster-nat-gateway" {
count = "${length(var.aws_cidr_subnets_public)}"
allocation_id = "${element(aws_eip.cluster-nat-eip.*.id, count.index)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-public.*.id, count.index)}"
}
resource "aws_subnet" "cluster-vpc-subnets-private" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
count="${length(var.aws_avail_zones)}"
availability_zone = "${element(var.aws_avail_zones, count.index)}"
cidr_block = "${element(var.aws_cidr_subnets_private, count.index)}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
}
}
#Routing in VPC
#TODO: Do we need two routing tables for each subnet for redundancy or is one enough?
resource "aws_route_table" "kubernetes-public" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.cluster-vpc-internetgw.id}"
}
tags {
Name = "kubernetes-${var.aws_cluster_name}-routetable-public"
}
}
resource "aws_route_table" "kubernetes-private" {
count = "${length(var.aws_cidr_subnets_private)}"
vpc_id = "${aws_vpc.cluster-vpc.id}"
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = "${element(aws_nat_gateway.cluster-nat-gateway.*.id, count.index)}"
}
tags {
Name = "kubernetes-${var.aws_cluster_name}-routetable-private-${count.index}"
}
}
resource "aws_route_table_association" "kubernetes-public" {
count = "${length(var.aws_cidr_subnets_public)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-public.*.id,count.index)}"
route_table_id = "${aws_route_table.kubernetes-public.id}"
}
resource "aws_route_table_association" "kubernetes-private" {
count = "${length(var.aws_cidr_subnets_private)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-private.*.id,count.index)}"
route_table_id = "${element(aws_route_table.kubernetes-private.*.id,count.index)}"
}
#Kubernetes Security Groups
resource "aws_security_group" "kubernetes" {
name = "kubernetes-${var.aws_cluster_name}-securitygroup"
vpc_id = "${aws_vpc.cluster-vpc.id}"
tags {
Name = "kubernetes-${var.aws_cluster_name}-securitygroup"
}
}
resource "aws_security_group_rule" "allow-all-ingress" {
type = "ingress"
from_port = 0
to_port = 65535
protocol = "-1"
cidr_blocks= ["${var.aws_vpc_cidr_block}"]
security_group_id = "${aws_security_group.kubernetes.id}"
}
resource "aws_security_group_rule" "allow-all-egress" {
type = "egress"
from_port = 0
to_port = 65535
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.kubernetes.id}"
}
resource "aws_security_group_rule" "allow-ssh-connections" {
type = "ingress"
from_port = 22
to_port = 22
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.kubernetes.id}"
}

View File

@@ -0,0 +1,16 @@
output "aws_vpc_id" {
value = "${aws_vpc.cluster-vpc.id}"
}
output "aws_subnet_ids_private" {
value = ["${aws_subnet.cluster-vpc-subnets-private.*.id}"]
}
output "aws_subnet_ids_public" {
value = ["${aws_subnet.cluster-vpc-subnets-public.*.id}"]
}
output "aws_security_group" {
value = ["${aws_security_group.kubernetes.*.id}"]
}

View File

@@ -0,0 +1,24 @@
variable "aws_vpc_cidr_block" {
description = "CIDR Blocks for AWS VPC"
}
variable "aws_cluster_name" {
description = "Name of Cluster"
}
variable "aws_avail_zones" {
description = "AWS Availability Zones Used"
type = "list"
}
variable "aws_cidr_subnets_private" {
description = "CIDR Blocks for private subnets in Availability zones"
type = "list"
}
variable "aws_cidr_subnets_public" {
description = "CIDR Blocks for public subnets in Availability zones"
type = "list"
}

View File

@@ -0,0 +1,24 @@
output "bastion_ip" {
value = "${join("\n", aws_instance.bastion-server.*.public_ip)}"
}
output "masters" {
value = "${join("\n", aws_instance.k8s-master.*.private_ip)}"
}
output "workers" {
value = "${join("\n", aws_instance.k8s-worker.*.private_ip)}"
}
output "etcd" {
value = "${join("\n", aws_instance.k8s-etcd.*.private_ip)}"
}
output "aws_elb_api_fqdn" {
value = "${module.aws-elb.aws_elb_api_fqdn}:${var.aws_elb_api_port}"
}
output "inventory" {
value = "${data.template_file.inventory.rendered}"
}

View File

@@ -0,0 +1,28 @@
${connection_strings_master}
${connection_strings_node}
${connection_strings_etcd}
${public_ip_address_bastion}
[kube-master]
${list_master}
[kube-node]
${list_node}
[etcd]
${list_etcd}
[k8s-cluster:children]
kube-node
kube-master
[k8s-cluster:vars]
${elb_api_fqdn}
${elb_api_port}
${kube_insecure_apiserver_address}

View File

@@ -1,22 +1,31 @@
deploymentName="test-kube-deploy"
#Global Vars
aws_cluster_name = "devtest"
numControllers="2"
numEtcd="3"
numNodes="2"
#VPC Vars
aws_vpc_cidr_block = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20","10.250.208.0/20"]
aws_cidr_subnets_public = ["10.250.224.0/20","10.250.240.0/20"]
aws_avail_zones = ["eu-central-1a","eu-central-1b"]
volSizeController="20"
volSizeEtcd="20"
volSizeNodes="20"
#Bastion Host
aws_bastion_ami = "ami-5900cc36"
aws_bastion_size = "t2.small"
awsRegion="us-west-2"
subnet="subnet-xxxxx"
ami="ami-32a85152"
securityGroups="sg-xxxxx"
SSHUser="core"
SSHKey="my-key"
master_instance_type="m3.xlarge"
etcd_instance_type="m3.xlarge"
node_instance_type="m3.xlarge"
#Kubernetes Cluster
terminate_protect="false"
aws_kube_master_num = 3
aws_kube_master_size = "t2.medium"
aws_etcd_num = 3
aws_etcd_size = "t2.medium"
aws_kube_worker_num = 4
aws_kube_worker_size = "t2.medium"
aws_cluster_ami = "ami-903df7ff"
#Settings AWS ELB
aws_elb_api_port = 443
k8s_secure_api_port = 443

View File

@@ -0,0 +1,32 @@
#Global Vars
aws_cluster_name = "devtest"
#VPC Vars
aws_vpc_cidr_block = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20","10.250.208.0/20"]
aws_cidr_subnets_public = ["10.250.224.0/20","10.250.240.0/20"]
aws_avail_zones = ["eu-central-1a","eu-central-1b"]
#Bastion Host
aws_bastion_ami = "ami-5900cc36"
aws_bastion_size = "t2.small"
#Kubernetes Cluster
aws_kube_master_num = 3
aws_kube_master_size = "t2.medium"
aws_etcd_num = 3
aws_etcd_size = "t2.medium"
aws_kube_worker_num = 4
aws_kube_worker_size = "t2.medium"
aws_cluster_ami = "ami-903df7ff"
#Settings AWS ELB
aws_elb_api_port = 6443
k8s_secure_api_port = 6443
kube_insecure_apiserver_address = 0.0.0.0

View File

@@ -0,0 +1,101 @@
variable "AWS_ACCESS_KEY_ID" {
description = "AWS Access Key"
}
variable "AWS_SECRET_ACCESS_KEY" {
description = "AWS Secret Key"
}
variable "AWS_SSH_KEY_NAME" {
description = "Name of the SSH keypair to use in AWS."
}
variable "AWS_DEFAULT_REGION" {
description = "AWS Region"
}
//General Cluster Settings
variable "aws_cluster_name" {
description = "Name of AWS Cluster"
}
//AWS VPC Variables
variable "aws_vpc_cidr_block" {
description = "CIDR Block for VPC"
}
variable "aws_avail_zones" {
description = "Availability Zones Used"
type = "list"
}
variable "aws_cidr_subnets_private" {
description = "CIDR Blocks for private subnets in Availability Zones"
type = "list"
}
variable "aws_cidr_subnets_public" {
description = "CIDR Blocks for public subnets in Availability Zones"
type = "list"
}
//AWS EC2 Settings
variable "aws_bastion_ami" {
description = "AMI ID for Bastion Host in chosen AWS Region"
}
variable "aws_bastion_size" {
description = "EC2 Instance Size of Bastion Host"
}
/*
* AWS EC2 Settings
* The number should be divisible by the number of used
* AWS Availability Zones without a remainder.
*/
variable "aws_kube_master_num" {
description = "Number of Kubernetes Master Nodes"
}
variable "aws_kube_master_size" {
description = "Instance size of Kube Master Nodes"
}
variable "aws_etcd_num" {
description = "Number of etcd Nodes"
}
variable "aws_etcd_size" {
description = "Instance size of etcd Nodes"
}
variable "aws_kube_worker_num" {
description = "Number of Kubernetes Worker Nodes"
}
variable "aws_kube_worker_size" {
description = "Instance size of Kubernetes Worker Nodes"
}
variable "aws_cluster_ami" {
description = "AMI ID for Kubernetes Cluster"
}
/*
* AWS ELB Settings
*
*/
variable "aws_elb_api_port" {
description = "Port for AWS ELB"
}
variable "k8s_secure_api_port" {
description = "Secure Port of K8S API Server"
}
variable "kube_insecure_apiserver_address" {
description= "Bind Address for insecure Port of K8s API Server"
}

View File

@@ -0,0 +1 @@
../../inventory/group_vars

View File

@@ -30,12 +30,14 @@ requirements.
#### OpenStack
Ensure your OpenStack credentials are loaded in environment variables. This can be done by downloading a credentials .rc file from your OpenStack dashboard and sourcing it:
Ensure your OpenStack **Identity v2** credentials are loaded in environment variables. This can be done by downloading a credentials .rc file from your OpenStack dashboard and sourcing it:
```
$ source ~/.stackrc
```
> You must also set the **OS_REGION_NAME** and **OS_TENANT_ID** environment variables, which are not required by the openstack CLI
You will need two networks before installing, an internal network and
an external (floating IP Pool) network. The internal network can be shared as
we use security groups to provide network segregation. Due to the many
@@ -86,7 +88,7 @@ This will provision one VM as master using a floating ip, two additional masters
Additionally, the terraform based installation now supports provisioning of a GlusterFS shared file system on a separate set of VMs, running either a Debian or RedHat based image. To enable this, you need to add to your `my-terraform-vars.tfvars` the following variables:
```
# Flavour depends on your openstack installation, you can get available flavours through `nova list-flavors`
# Flavour depends on your openstack installation, you can get available flavours through `nova flavor-list`
flavor_gfs_node = "af659280-5b8a-42b5-8865-a703775911da"
# This is the name of an image already available in your openstack installation.
image_gfs = "Ubuntu 15.10"
@@ -99,6 +101,46 @@ ssh_user_gfs = "ubuntu"
If these variables are provided, this will give rise to a new ansible group called `gfs-cluster`, for which we have added ansible roles to execute in the ansible provisioning step. If you are using Container Linux by CoreOS, these GlusterFS VMs necessarily need to be either Debian or RedHat based; Container Linux by CoreOS cannot serve GlusterFS, but can connect to it through binaries available in hyperkube v1.4.3_coreos.0 or higher.
# Configure Cluster variables
Edit `inventory/group_vars/all.yml`:
- Set variable **bootstrap_os** according to the selected image
```
# Valid bootstrap options (required): ubuntu, coreos, centos, none
bootstrap_os: coreos
```
- **bin_dir**
```
# Directory where the binaries will be installed
# Default:
# bin_dir: /usr/local/bin
# For Container Linux by CoreOS:
bin_dir: /opt/bin
```
- and **cloud_provider**
```
cloud_provider: openstack
```
Edit `inventory/group_vars/k8s-cluster.yml`:
- Set variable **kube_network_plugin** according to the selected networking plugin
```
# Choose network plugin (calico, weave or flannel)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: flannel
```
> flannel works out-of-the-box
> calico requires allowing the service and pod subnets on the corresponding OpenStack Neutron ports
- Set variable **resolvconf_mode**
```
# Can be docker_dns, host_resolvconf or none
# Default:
# resolvconf_mode: docker_dns
# For Container Linux by CoreOS:
resolvconf_mode: host_resolvconf
```
For calico configure OpenStack Neutron ports: [OpenStack](/docs/openstack.md)
# Provision a Kubernetes Cluster on OpenStack
@@ -156,6 +198,49 @@ Deploy kubernetes:
$ ansible-playbook --become -i contrib/terraform/openstack/hosts cluster.yml
```
# Set up local kubectl
1. Install kubectl on your workstation:
[Install and Set Up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
2. Add route to internal IP of master node (if needed):
```
sudo route add [master-internal-ip] gw [router-ip]
```
or
```
sudo route add -net [internal-subnet]/24 gw [router-ip]
```
3. List Kubernetes certs&keys:
```
ssh [os-user]@[master-ip] sudo ls /etc/kubernetes/ssl/
```
4. Get admin's certs&key:
```
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1-key.pem > admin-key.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1.pem > admin.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/ca.pem > ca.pem
```
5. Edit the master's OpenStack Neutron security group to allow TCP connections to port 6443 (see the sketch after this list)
6. Configure kubectl:
```
kubectl config set-cluster default-cluster --server=https://[master-internal-ip]:6443 \
--certificate-authority=ca.pem
kubectl config set-credentials default-admin \
--certificate-authority=ca.pem \
--client-key=admin-key.pem \
--client-certificate=admin.pem
kubectl config set-context default-system --cluster=default-cluster --user=default-admin
kubectl config use-context default-system
```
7. Check it:
```
kubectl version
```
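For step 5, the rule can be added with the OpenStack CLI. This is only a sketch: the security group name is a placeholder for whatever the Terraform scripts created in your project, and you may want to restrict `--remote-ip` to your workstation's address:
```
openstack security group rule create --ingress --protocol tcp \
  --dst-port 6443 --remote-ip 0.0.0.0/0 <master-security-group>
```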
# What's next
[Start Hello Kubernetes Service](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/)
# clean up:
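A minimal sketch of the teardown, assuming the same variable file and module directory used for provisioning above:
```
$ terraform destroy -var-file=my-terraform-vars.tfvars contrib/terraform/openstack
```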
@@ -0,0 +1 @@
../../../inventory/group_vars

@@ -1,165 +0,0 @@
# Valid bootstrap options (required): ubuntu, coreos, none
bootstrap_os: none
# Directory where the binaries will be installed
bin_dir: /usr/local/bin
# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
local_release_dir: "/tmp/releases"
# Random shifts for retrying failed ops like pushing/downloading
retry_stagger: 5
# Uncomment this line for Container Linux by CoreOS only.
# Directory where python binary is installed
# ansible_python_interpreter: "/opt/bin/python"
# This is the group that the cert creation scripts chgrp the
# cert files to. Not really changeable...
kube_cert_group: kube-cert
# Cluster Loglevel configuration
kube_log_level: 2
# Users to create for basic auth in Kubernetes API via HTTP
kube_api_pwd: "changeme"
kube_users:
kube:
pass: "{{kube_api_pwd}}"
role: admin
root:
pass: "changeme"
role: admin
# Kubernetes cluster name, also will be used as DNS domain
cluster_name: cluster.local
# Subdomains of DNS domain to be resolved via /etc/resolv.conf
ndots: 5
# Deploy netchecker app to verify DNS resolve as an HTTP service
deploy_netchecker: false
# For some environments, each node has a publicly accessible
# address and an address it should bind services to. These are
# really inventory level variables, but described here for consistency.
#
# When advertising access, the access_ip will be used, but will defer to
# ip and then the default ansible ip when unspecified.
#
# When binding to restrict access, the ip variable will be used, but will
# defer to the default ansible ip when unspecified.
#
# The ip variable is used for specific address binding, e.g. listen address
# for etcd. This is use to help with environments like Vagrant or multi-nic
# systems where one address should be preferred over another.
# ip: 10.2.2.2
#
# The access_ip variable is used to define how other nodes should access
# the node. This is used in flannel to allow other flannel nodes to see
# this node for example. The access_ip is really useful AWS and Google
# environments where the nodes are accessed remotely by the "public" ip,
# but don't know about that address themselves.
# access_ip: 1.1.1.1
# Etcd access modes:
# Enable multiaccess to configure clients to access all of the etcd members directly
# as the "http://hostX:port, http://hostY:port, ..." and ignore the proxy loadbalancers.
# This may be the case if clients support and loadbalance multiple etcd servers natively.
etcd_multiaccess: true
# Assume there are no internal loadbalancers for apiservers exist and listen on
# kube_apiserver_port (default 443)
loadbalancer_apiserver_localhost: true
# Choose network plugin (calico, weave or flannel)
kube_network_plugin: flannel
# Kubernetes internal network for services, unused block of space.
kube_service_addresses: 10.233.0.0/18
# internal network. When used, it will assign IP
# addresses from this range to individual pods.
# This network must be unused in your network infrastructure!
kube_pods_subnet: 10.233.64.0/18
# internal network total size (optional). This is the prefix of the
# entire network. Must be unused in your environment.
# kube_network_prefix: 18
# internal network node size allocation (optional). This is the size allocated
# to each node on your network. With these defaults you should have
# room for 4096 nodes with 254 pods per node.
kube_network_node_prefix: 24
# With calico it is possible to distribute routes with border routers of the datacenter.
peer_with_router: false
# Warning : enabling router peering will disable calico's default behavior ('node mesh').
# The subnets of each nodes will be distributed by the datacenter router
# The port the API Server will be listening on.
kube_apiserver_ip: "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
kube_apiserver_port: 443 # (https)
kube_apiserver_insecure_port: 8080 # (http)
# Internal DNS configuration.
# Kubernetes can create and maintain its own DNS server to resolve service names
# into appropriate IP addresses. It's highly advisable to run such DNS server,
# as it greatly simplifies configuration of your applications - you can use
# service names instead of magic environment variables.
# Can be dnsmasq_kubedns, kubedns or none
dns_mode: dnsmasq_kubedns
# Can be docker_dns, host_resolvconf or none
resolvconf_mode: docker_dns
## Upstream dns servers used by dnsmasq
#upstream_dns_servers:
# - 8.8.8.8
# - 8.8.4.4
dns_domain: "{{ cluster_name }}"
# Ip address of the kubernetes skydns service
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"
# There are some changes specific to the cloud providers
# for instance we need to encapsulate packets with some network plugins
# If set the possible values are either 'gce', 'aws', 'azure' or 'openstack'
# When openstack is used make sure to source in the openstack credentials
# like you would do when using nova-client before starting the playbook.
# When azure is used, you need to also set the following variables.
# cloud_provider:
# see docs/azure.md for details on how to get these values
#azure_tenant_id:
#azure_subscription_id:
#azure_aad_client_id:
#azure_aad_client_secret:
#azure_resource_group:
#azure_location:
#azure_subnet_name:
#azure_security_group_name:
#azure_vnet_name:
## Set these proxy values in order to update docker daemon to use proxies
# http_proxy: ""
# https_proxy: ""
# no_proxy: ""
# Path used to store Docker data
docker_daemon_graph: "/var/lib/docker"
## A string of extra options to pass to the docker daemon.
## This string should be exactly as you wish it to appear.
## An obvious use case is allowing insecure-registry access
## to self hosted registries like so:
docker_options: "--insecure-registry={{ kube_service_addresses }} --graph={{ docker_daemon_graph }}"
# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent
# default packages to install within the cluster
kpm_packages: []
# - name: kube-system/grafana

@@ -68,7 +68,7 @@ resource "openstack_compute_instance_v2" "k8s_master" {
floating_ip = "${element(openstack_networking_floatingip_v2.k8s_master.*.address, count.index)}"
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,kube-node,k8s-cluster"
kubespray_groups = "etcd,kube-master,kube-node,k8s-cluster,vault"
}
}
@@ -87,10 +87,10 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
"${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,kube-node,k8s-cluster"
kubespray_groups = "etcd,kube-master,kube-node,k8s-cluster,vault,no-floating"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/k8s-cluster.yml"
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
}
}
@@ -107,7 +107,7 @@ resource "openstack_compute_instance_v2" "k8s_node" {
floating_ip = "${element(openstack_networking_floatingip_v2.k8s_node.*.address, count.index)}"
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster"
kubespray_groups = "kube-node,k8s-cluster,vault"
}
}
@@ -123,10 +123,10 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster"
kubespray_groups = "kube-node,k8s-cluster,vault,no-floating"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/k8s-cluster.yml"
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
}
}

@@ -8,20 +8,39 @@ The inventory is composed of 3 groups:
* **kube-node** : list of kubernetes nodes where the pods will run.
* **kube-master** : list of servers where kubernetes master components (apiserver, scheduler, controller) will run.
Note: if you want the server to act both as master and node the server must be defined on both groups _kube-master_ and _kube-node_
* **etcd**: list of server to compose the etcd server. you should have at least 3 servers for failover purposes.
* **etcd**: list of servers to compose the etcd server. You should have at least 3 servers for failover purpose.
Note: do not modify the children of _k8s-cluster_, like putting
the _etcd_ group into the _k8s-cluster_, unless you are certain you want
to do that and the group is fully contained in the latter:
```
k8s-cluster ⊂ etcd => kube-node ∩ etcd = etcd
```
When _kube-node_ contains _etcd_, your etcd cluster is also schedulable for Kubernetes workloads.
If you want it standalone, make sure those groups do not intersect.
If you want the server to act both as master and node, the server must be defined
on both groups _kube-master_ and _kube-node_. If you want a standalone and
unschedulable master, the server must be defined only in the _kube-master_ and
not _kube-node_.
There are also two special groups:
* **calico-rr** : explained for [advanced Calico networking cases](calico.md)
* **bastion** : configure a bastion host if your nodes are not directly reachable
Below is a complete inventory example:
```
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_ssh_host=95.54.0.12 # ip=10.3.0.1
node2 ansible_ssh_host=95.54.0.13 # ip=10.3.0.2
node3 ansible_ssh_host=95.54.0.14 # ip=10.3.0.3
node4 ansible_ssh_host=95.54.0.15 # ip=10.3.0.4
node5 ansible_ssh_host=95.54.0.16 # ip=10.3.0.5
node6 ansible_ssh_host=95.54.0.17 # ip=10.3.0.6
node1 ansible_ssh_host=95.54.0.12 ip=10.3.0.1
node2 ansible_ssh_host=95.54.0.13 ip=10.3.0.2
node3 ansible_ssh_host=95.54.0.14 ip=10.3.0.3
node4 ansible_ssh_host=95.54.0.15 ip=10.3.0.4
node5 ansible_ssh_host=95.54.0.16 ip=10.3.0.5
node6 ansible_ssh_host=95.54.0.17 ip=10.3.0.6
[kube-master]
node1
@@ -42,38 +61,39 @@ node6
[k8s-cluster:children]
kube-node
kube-master
etcd
```
Group vars and overriding variables precedence
----------------------------------------------
The group variables to control main deployment options are located in the directory ``inventory/group_vars``.
Optional variables are located in the `inventory/group_vars/all.yml`.
Mandatory variables that are common for at least one role (or a node group) can be found in the
`inventory/group_vars/k8s-cluster.yml`.
There are also role vars for docker, rkt, kubernetes preinstall and master roles.
According to the [ansible docs](http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
those cannot be overridden from the group vars. In order to override, one should use
the `-e` runtime flags (the simplest way) or other layers described in the docs.
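For instance, a sketch of overriding a single variable, or a whole file of overrides, at run time (the inventory path and the chosen variable are only examples):
```
ansible-playbook -b -i inventory/inventory.ini cluster.yml -e kube_network_plugin=weave
ansible-playbook -b -i inventory/inventory.ini cluster.yml -e @my-overrides.yml
```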
Kargo uses only a few layers to override things (or expect them to
Kubespray uses only a few layers to override things (or expect them to
be overridden for roles):
Layer | Comment
------|--------
**role defaults** | provides best UX to override things for Kargo deployments
**role defaults** | provides best UX to override things for Kubespray deployments
inventory vars | Unused
**inventory group_vars** | Expects users to use ``all.yml``,``k8s-cluster.yml`` etc. to override things
inventory host_vars | Unused
playbook group_vars | Unuses
playbook group_vars | Unused
playbook host_vars | Unused
**host facts** | Kargo overrides for internal roles' logic, like state flags
**host facts** | Kubespray overrides for internal roles' logic, like state flags
play vars | Unused
play vars_prompt | Unused
play vars_files | Unused
registered vars | Unused
set_facts | Kargo overrides those, for some places
set_facts | Kubespray overrides those, for some places
**role and include vars** | Provides bad UX to override things! Use extra vars to enforce
block vars (only for tasks in block) | Kargo overrides for internal roles' logic
block vars (only for tasks in block) | Kubespray overrides for internal roles' logic
task vars (only for the task) | Unused for roles, but only for helper scripts
**extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml``
@@ -104,12 +124,12 @@ The following tags are defined in playbooks:
| k8s-pre-upgrade | Upgrading K8s cluster
| k8s-secrets | Configuring K8s certs/keys
| kpm | Installing K8s apps definitions with KPM
| kube-apiserver | Configuring self-hosted kube-apiserver
| kube-controller-manager | Configuring self-hosted kube-controller-manager
| kube-apiserver | Configuring static pod kube-apiserver
| kube-controller-manager | Configuring static pod kube-controller-manager
| kubectl | Installing kubectl and bash completion
| kubelet | Configuring kubelet service
| kube-proxy | Configuring self-hosted kube-proxy
| kube-scheduler | Configuring self-hosted kube-scheduler
| kube-proxy | Configuring static pod kube-proxy
| kube-scheduler | Configuring static pod kube-scheduler
| localhost | Special steps for the localhost (ansible runner)
| master | Configuring K8s master node role
| netchecker | Installing netchecker K8s app
@@ -142,7 +162,7 @@ ansible-playbook -i inventory/inventory.ini -e dns_server='' cluster.yml --tags
And this prepares all container images locally (at the ansible runner node) without installing
or upgrading related stuff or trying to upload containers to the K8s cluster nodes:
```
ansible-playbook -i inventory/inventory.ini cluster.yaml \
ansible-playbook -i inventory/inventory.ini cluster.yml \
-e download_run_once=true -e download_localhost=true \
--tags download --skip-tags upload,upgrade
```

docs/atomic.md Normal file
@@ -0,0 +1,22 @@
Atomic host bootstrap
=====================
Atomic host testing has been done with the network plugin flannel. Change the inventory var `kube_network_plugin: flannel`.
Note: Flannel is the only plugin that has currently been tested with atomic
### Vagrant
* For bootstrapping with Vagrant, use box centos/atomic-host
* Update VagrantFile variable `local_release_dir` to `/var/vagrant/temp`.
* Update `vm_memory = 2048` and `vm_cpus = 2`
* Networking on vagrant hosts has to be brought up manually once they are booted.
```
vagrant ssh
sudo /sbin/ifup enp0s8
```
* For users of vagrant-libvirt, download the qcow2 image from https://wiki.centos.org/SpecialInterestGroup/Atomic/Download/
Then you can proceed to [cluster deployment](#run-deployment)

@@ -3,8 +3,58 @@ AWS
To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`.
Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes/kubernetes/tree/master/cluster/aws/templates/iam). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for Etcd do not need a role.
Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/aws_iam/). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for Etcd do not need a role.
You would also need to tag the resources in your VPC accordingly for the aws provider to utilize them. Tag the subnets and all instances that kubernetes will be run on with key `kubernetes.io/cluster/$cluster_name` (`$cluster_name` must be a unique identifier for the cluster). Tag the subnets that must be targeted by external ELBs with the key `kubernetes.io/role/elb` and internal ELBs with the key `kubernetes.io/role/internal-elb`.
Make sure your VPC has both DNS Hostnames support and Private DNS enabled.
The next step is to make sure the hostnames in your `inventory` file are identical to your internal hostnames in AWS. This may look something like `ip-111-222-333-444.us-west-2.compute.internal`. You can then specify how Ansible connects to these instances with `ansible_ssh_host` and `ansible_ssh_user`.
You can now create your cluster!
### Dynamic Inventory ###
There is also a dynamic inventory script for AWS that can be used if desired. However, be aware that it makes certain assumptions about how you'll create your inventory. It also does not handle all use cases and groups that we may use as part of more advanced deployments. Additions welcome.
This will produce an inventory that is passed into Ansible that looks like the following:
```
{
"_meta": {
"hostvars": {
"ip-172-31-3-xxx.us-east-2.compute.internal": {
"ansible_ssh_host": "172.31.3.xxx"
},
"ip-172-31-8-xxx.us-east-2.compute.internal": {
"ansible_ssh_host": "172.31.8.xxx"
}
}
},
"etcd": [
"ip-172-31-3-xxx.us-east-2.compute.internal"
],
"k8s-cluster": {
"children": [
"kube-master",
"kube-node"
]
},
"kube-master": [
"ip-172-31-3-xxx.us-east-2.compute.internal"
],
"kube-node": [
"ip-172-31-8-xxx.us-east-2.compute.internal"
]
}
```
Guide:
- Create instances in AWS as needed.
- Either during or after creation, add tags to the instances with a key of `kubespray-role` and a value of `kube-master`, `etcd`, or `kube-node`. You can also share roles like `kube-master, etcd`
- Copy the `kubespray-aws-inventory.py` script from `kubespray/contrib/aws_inventory` to the `kubespray/inventory` directory.
- Set the following AWS credentials and info as environment variables in your terminal:
```
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="yyyyy"
export REGION="us-east-2"
```
- We will now create our cluster. There will be one or two small changes. The first is that we will specify `-i inventory/kubespray-aws-inventory.py` as our inventory script. The other is conditional: if your AWS instances are public facing, you can set the `VPC_VISIBILITY` variable to `public`, which will result in public IPs and DNS names being passed into the inventory. This makes your cluster.yml command look like `VPC_VISIBILITY="public" ansible-playbook ... cluster.yml` (see the sketch below).
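Putting the pieces together, one possible invocation looks like the following (a sketch; the SSH user and any extra vars depend on your images and inventory):
```
VPC_VISIBILITY="public" ansible-playbook -b \
  -i inventory/kubespray-aws-inventory.py cluster.yml
```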

@@ -1,7 +1,7 @@
Azure
===============
To deploy kubespray on [Azure](https://azure.microsoft.com) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'azure'`.
To deploy Kubernetes on [Azure](https://azure.microsoft.com) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'azure'`.
All your instances are required to run in a resource group and a routing table has to be attached to the subnet your instances are in.
@@ -49,8 +49,8 @@ This is the AppId from the last command
- Create the role assignment with:
`azure role assignment create --spn http://kubernetes -o "Owner" -c /subscriptions/SUBSCRIPTION_ID`
azure\_aad\_client\_id musst be set to the AppId, azure\_aad\_client\_secret is your choosen secret.
azure\_aad\_client\_id must be set to the AppId, azure\_aad\_client\_secret is your choosen secret.
## Provisioning Azure with Resource Group Templates
You'll find Resource Group Templates and scripts to provision the required infrastructore to Azure in [*contrib/azurerm*](../contrib/azurerm/README.md)
You'll find Resource Group Templates and scripts to provision the required infrastructure to Azure in [*contrib/azurerm*](../contrib/azurerm/README.md)

@@ -96,7 +96,7 @@ You need to edit your inventory and add:
* `cluster_id` by route reflector node/group (see details
[here](https://hub.docker.com/r/calico/routereflector/))
Here's an example of Kargo inventory with route reflectors:
Here's an example of Kubespray inventory with route reflectors:
```
[all]
@@ -145,9 +145,27 @@ cluster_id="1.0.0.1"
The inventory above will deploy the following topology assuming that calico's
`global_as_num` is set to `65400`:
![Image](figures/kargo-calico-rr.png?raw=true)
![Image](figures/kubespray-calico-rr.png?raw=true)
##### Optional : Define default endpoint to host action
By default Calico blocks traffic from endpoints to the host itself by using an iptables DROP action. When using it in Kubernetes the action has to be changed to RETURN (default in kubespray) or ACCEPT (see https://github.com/projectcalico/felix/issues/660 and https://github.com/projectcalico/calicoctl/issues/1389). Otherwise all network packets from pods (with hostNetwork=False) to services endpoints (with hostNetwork=True) within the same node are dropped.
To re-define default action please set the following variable in your inventory:
```
calico_endpoint_to_host_action: "ACCEPT"
```
Cloud providers configuration
=============================
Please refer to the official documentation, for example [GCE configuration](http://docs.projectcalico.org/v1.5/getting-started/docker/installation/gce) requires a security rule for calico ip-ip tunnels. Note, calico is always configured with ``ipip: true`` if the cloud provider was defined.
##### Optional : Ignore kernel's RPF check setting
By default the felix agent (calico-node) will abort if the kernel RPF setting is not 'strict'. If you want Calico to ignore the kernel setting:
```
calico_node_ignorelooserpf: true
```

@@ -3,17 +3,17 @@ Cloud providers
#### Provisioning
You can use kargo-cli to start new instances on cloud providers
You can use kubespray-cli to start new instances on cloud providers
here's an example
```
kargo [aws|gce] --nodes 2 --etcd 3 --cluster-name test-smana
kubespray [aws|gce] --nodes 2 --etcd 3 --cluster-name test-smana
```
#### Deploy kubernetes
With kargo-cli
With kubespray-cli
```
kargo deploy [--aws|--gce] -u admin
kubespray deploy [--aws|--gce] -u admin
```
Or ansible-playbook command

@@ -1,25 +1,25 @@
Kargo vs [Kops](https://github.com/kubernetes/kops)
Kubespray vs [Kops](https://github.com/kubernetes/kops)
---------------
Kargo runs on bare metal and most clouds, using Ansible as its substrate for
Kubespray runs on bare metal and most clouds, using Ansible as its substrate for
provisioning and orchestration. Kops performs the provisioning and orchestration
itself, and as such is less flexible in deployment platforms. For people with
familiarity with Ansible, existing Ansible deployments or the desire to run a
Kubernetes cluster across multiple platforms, Kargo is a good choice. Kops,
Kubernetes cluster across multiple platforms, Kubespray is a good choice. Kops,
however, is more tightly integrated with the unique features of the clouds it
supports so it could be a better choice if you know that you will only be using
one platform for the foreseeable future.
Kargo vs [Kubeadm](https://github.com/kubernetes/kubeadm)
Kubespray vs [Kubeadm](https://github.com/kubernetes/kubeadm)
------------------
Kubeadm provides domain Knowledge of Kubernetes clusters' life cycle
management, including self-hosted layouts, dynamic discovery services and so
on. Had it belong to the new [operators world](https://coreos.com/blog/introducing-operators.html),
it would've likely been named a "Kubernetes cluster operator". Kargo however,
on. Had it belonged to the new [operators world](https://coreos.com/blog/introducing-operators.html),
it may have been named a "Kubernetes cluster operator". Kubespray however,
does generic configuration management tasks from the "OS operators" ansible
world, plus some initial K8s clustering (with networking plugins included) and
control plane bootstrapping. Kargo [strives](https://github.com/kubernetes-incubator/kargo/issues/553)
control plane bootstrapping. Kubespray [strives](https://github.com/kubernetes-incubator/kubespray/issues/553)
to adopt kubeadm as a tool in order to consume life cycle management domain
knowledge from it and offload generic OS configuration things from it, which
hopefully benefits both sides.

@@ -1,24 +1,20 @@
CoreOS bootstrap
===============
Example with **kargo-cli**:
Example with **kubespray-cli**:
```
kargo deploy --gce --coreos
kubespray deploy --gce --coreos
```
Or with Ansible:
Before running the cluster playbook you must satisfy the following requirements:
* On each CoreOS node, a writable directory **/opt/bin** (~400M disk space)
* Uncomment the variable **ansible\_python\_interpreter** in the file `inventory/group_vars/all.yml`
* run the Python bootstrap playbook
```
ansible-playbook -u smana -e ansible_ssh_user=smana -b --become-user=root -i inventory/inventory.cfg coreos-bootstrap.yml
```
General CoreOS Pre-Installation Notes:
- You should set the bootstrap_os variable to `coreos`
- Ensure that the bin_dir is set to `/opt/bin`
- ansible_python_interpreter should be `/opt/bin/python`. This will be laid down by the bootstrap task.
- The default resolvconf_mode setting of `docker_dns` **does not** work for CoreOS. This is because we do not edit the systemd service file for docker on CoreOS nodes. Instead, just use the `host_resolvconf` mode. It should work out of the box.
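Taken together, a sketch of the CoreOS-related overrides in your group vars (whether they belong in `all.yml` or `k8s-cluster.yml` depends on your inventory layout):
```
bootstrap_os: coreos
bin_dir: /opt/bin
ansible_python_interpreter: /opt/bin/python
resolvconf_mode: host_resolvconf
```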
Then you can proceed to [cluster deployment](#run-deployment)

@@ -1,7 +1,7 @@
K8s DNS stack by Kargo
K8s DNS stack by Kubespray
======================
For K8s cluster nodes, kargo configures a [Kubernetes DNS](http://kubernetes.io/docs/admin/dns/)
For K8s cluster nodes, Kubespray configures a [Kubernetes DNS](http://kubernetes.io/docs/admin/dns/)
[cluster add-on](http://releases.k8s.io/master/cluster/addons/README.md)
to serve as an authoritative DNS server for a given ``dns_domain`` and its
``svc, default.svc`` default subdomains (a total of ``ndots: 5`` max levels).
@@ -44,13 +44,13 @@ DNS servers to be added *after* the cluster DNS. Used by all ``resolvconf_mode``
DNS servers in early cluster deployment when no cluster DNS is available yet. These are also added as upstream
DNS servers used by ``dnsmasq`` (when deployed with ``dns_mode: dnsmasq_kubedns``).
DNS modes supported by kargo
DNS modes supported by Kubespray
============================
You can modify how kargo sets up DNS for your cluster with the variables ``dns_mode`` and ``resolvconf_mode``.
You can modify how Kubespray sets up DNS for your cluster with the variables ``dns_mode`` and ``resolvconf_mode``.
## dns_mode
``dns_mode`` configures how kargo will setup cluster DNS. There are three modes available:
``dns_mode`` configures how Kubespray will setup cluster DNS. There are three modes available:
#### dnsmasq_kubedns (default)
This installs an additional dnsmasq DaemonSet which gives more flexibility and lifts some
@@ -67,7 +67,7 @@ This does not install any of dnsmasq and kubedns/skydns. This basically disables
leaves you with a non functional cluster.
## resolvconf_mode
``resolvconf_mode`` configures how kargo will setup DNS for ``hostNetwork: true`` PODs and non-k8s containers.
``resolvconf_mode`` configures how Kubespray will setup DNS for ``hostNetwork: true`` PODs and non-k8s containers.
There are three modes available:
#### docker_dns (default)
@@ -100,7 +100,7 @@ used as a backup nameserver. After cluster DNS is running, all queries will be a
servers, which in turn will forward queries to the system nameserver if required.
#### host_resolvconf
This activates the classic kargo behaviour that modifies the hosts ``/etc/resolv.conf`` file and dhclient
This activates the classic Kubespray behaviour that modifies the hosts ``/etc/resolv.conf`` file and dhclient
configuration to point to the cluster dns server (either dnsmasq or kubedns, depending on dns_mode).
As cluster DNS is not available on early deployment stage, this mode is split into 2 stages. In the first
@@ -120,7 +120,7 @@ cluster service names.
Limitations
-----------
* Kargo has yet ways to configure Kubedns addon to forward requests SkyDns can
* Kubespray does not yet provide a way to configure the Kubedns addon to forward requests SkyDns can
not answer authoritatively to arbitrary recursive resolvers. This task is left
for the future. See the [official SkyDns docs](https://github.com/skynetservices/skydns)
for details.

@@ -1,7 +1,7 @@
Downloading binaries and containers
===================================
Kargo supports several download/upload modes. The default is:
Kubespray supports several download/upload modes. The default is:
* Each node downloads binaries and container images on its own, which is
``download_run_once: False``.

@@ -23,13 +23,6 @@ ip a show dev flannel.1
valid_lft forever preferred_lft forever
```
* Docker must be configured with a bridge ip in the flannel subnet.
```
ps aux | grep docker
root 20196 1.7 2.7 1260616 56840 ? Ssl 10:18 0:07 /usr/bin/docker daemon --bip=10.233.16.1/24 --mtu=1450
```
* Try to run a container and check its ip address
```

View File

@@ -1,32 +1,69 @@
Getting started
===============
The easiest way to run the deployement is to use the **kargo-cli** tool.
A complete documentation can be found in its [github repository](https://github.com/kubespray/kargo-cli).
The easiest way to run the deployment is to use the **kubespray-cli** tool.
A complete documentation can be found in its [github repository](https://github.com/kubespray/kubespray-cli).
Here is a simple example on AWS:
Here is a simple example on AWS:
* Create instances and generate the inventory
```
kargo aws --instances 3
kubespray aws --instances 3
```
* Run the deployment
* Run the deployment
```
kargo deploy --aws -u centos -n calico
kubespray deploy --aws -u centos -n calico
```
Building your own inventory
^^^^^^^^^^^^^^^^^^^^^^^^^^^
---------------------------
Ansible inventory can be stored in 3 formats: YAML, JSON, or inifile. There is
Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. There is
an example inventory located
[here](https://github.com/kubernetes-incubator/kargo/blob/master/inventory/inventory.example).
[here](https://github.com/kubernetes-incubator/kubespray/blob/master/inventory/inventory.example).
You can use an
[inventory generator](https://github.com/kubernetes-incubator/kargo/blob/master/contrib/inventory_generator/inventory_generator.py)
[inventory generator](https://github.com/kubernetes-incubator/kubespray/blob/master/contrib/inventory_builder/inventory.py)
to create or modify an Ansible inventory. Currently, it is limited in
functionality and is only use for making a basic Kargo cluster, but it does
support creating large clusters.
functionality and is only used for making a basic Kubespray cluster, but it does
support creating large clusters. It now supports
separating the etcd and Kubernetes master roles from the node role if the size exceeds a
certain threshold. Run `inventory.py help` for more information.
Example inventory generator usage:
```
cp -r inventory my_inventory
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=my_inventory/inventory.cfg python3 contrib/inventory_builder/inventory.py ${IPS[@]}
```
Starting custom deployment
--------------------------
Once you have an inventory, you may want to customize deployment data vars
and start the deployment:
**IMPORTANT: Edit my_inventory/group_vars/*.yaml to override data vars**
```
ansible-playbook -i my_inventory/inventory.cfg cluster.yml -b -v \
--private-key=~/.ssh/private_key
```
See more details in the [ansible guide](ansible.md).
Adding nodes
--------------------------
You may want to add worker nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.
- Add the new worker node to your inventory under kube-node (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Run the ansible-playbook command, substituting `scale.yml` for `cluster.yml`:
```
ansible-playbook -i my_inventory/inventory.cfg scale.yml -b -v \
--private-key=~/.ssh/private_key
```

@@ -11,37 +11,35 @@ achieve the same goal.
Etcd
----
Etcd proxies are deployed on each node in the `k8s-cluster` group. A proxy is
a separate etcd process. It has a `localhost:2379` frontend and all of the etcd
cluster members as backends. Note that the `access_ip` is used as the backend
IP, if specified. Frontend endpoints cannot be accessed externally as they are
bound to localhost only.
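As an illustration of what such a proxy does (the member URLs reuse the example inventory addresses from above and are not the exact command Kubespray templates):
```
etcd --proxy on \
  --listen-client-urls http://127.0.0.1:2379 \
  --initial-cluster node1=http://10.3.0.1:2380,node2=http://10.3.0.2:2380,node3=http://10.3.0.3:2380
```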
The `etcd_access_endpoint` fact provides an access pattern for clients. And the
`etcd_multiaccess` (defaults to `false`) group var controlls that behavior.
When enabled, it makes deployed components to access the etcd cluster members
`etcd_multiaccess` (defaults to `True`) group var controls that behavior.
It makes deployed components access the etcd cluster members
directly: `http://ip1:2379, http://ip2:2379,...`. This mode assumes the clients
do a loadbalancing and handle HA for connections. Note, a pod definition of a
flannel networking plugin always uses a single `--etcd-server` endpoint!
do a loadbalancing and handle HA for connections.
Kube-apiserver
--------------
K8s components require a loadbalancer to access the apiservers via a reverse
proxy. Kargo includes support for an nginx-based proxy that resides on each
proxy. Kubespray includes support for an nginx-based proxy that resides on each
non-master Kubernetes node. This is referred to as localhost loadbalancing. It
is less efficient than a dedicated load balancer because it creates extra
health checks on the Kubernetes apiserver, but is more practical for scenarios
where an external LB or virtual IP management is inconvenient.
where an external LB or virtual IP management is inconvenient. This option is
configured by the variable `loadbalancer_apiserver_localhost` (defaults to `True`).
You may also define the port the local internal loadbalancer uses by changing,
`nginx_kube_apiserver_port`. This defaults to the value of `kube_apiserver_port`.
It is also important to note that Kubespray will only configure kubelet and kube-proxy
on non-master nodes to use the local internal loadbalancer.
This option is configured by the variable `loadbalancer_apiserver_localhost`.
you will need to configure your own loadbalancer to achieve HA. Note that
deploying a loadbalancer is up to a user and is not covered by ansible roles
in Kargo. By default, it only configures a non-HA endpoint, which points to
the `access_ip` or IP address of the first server node in the `kube-master`
group. It can also configure clients to use endpoints for a given loadbalancer
type. The following diagram shows how traffic to the apiserver is directed.
If you choose to NOT use the local internal loadbalancer, you will need to configure
your own loadbalancer to achieve HA. Note that deploying a loadbalancer is up to
a user and is not covered by ansible roles in Kubespray. By default, it only configures
a non-HA endpoint, which points to the `access_ip` or IP address of the first server
node in the `kube-master` group. It can also configure clients to use endpoints
for a given loadbalancer type. The following diagram shows how traffic to the
apiserver is directed.
![Image](figures/loadbalancer_localhost.png?raw=true)
@@ -63,8 +61,8 @@ listen kubernetes-apiserver-https
mode tcp
timeout client 3h
timeout server 3h
server master1 <IP1>:443
server master2 <IP2>:443
server master1 <IP1>:6443
server master2 <IP2>:6443
balance roundrobin
```
@@ -90,16 +88,18 @@ Access endpoints are evaluated automagically, as the following:
| Endpoint type | kube-master | non-master |
|------------------------------|---------------|---------------------|
| Local LB | http://lc:p | https://lc:sp |
| Local LB (default) | http://lc:p | https://lc:nsp |
| External LB, no internal | https://lb:lp | https://lb:lp |
| No ext/int LB (default) | http://lc:p | https://m[0].aip:sp |
| No ext/int LB | http://lc:p | https://m[0].aip:sp |
Where:
* `m[0]` - the first node in the `kube-master` group;
* `lb` - LB FQDN, `apiserver_loadbalancer_domain_name`;
* `lc` - localhost;
* `p` - insecure port, `kube_apiserver_insecure_port`
* `nsp` - nginx secure port, `nginx_kube_apiserver_port`;
* `sp` - secure port, `kube_apiserver_port`;
* `lp` - LB port, `loadbalancer_apiserver.port`, defers to the secure port;
* `ip` - the node IP, defers to the ansible IP;
* `aip` - `access_ip`, defers to the ip.

docs/integration.md Normal file
@@ -0,0 +1,121 @@
# Kubespray (kargo) in own ansible playbooks repo
1. Fork [kubespray repo](https://github.com/kubernetes-incubator/kubespray) to your personal/organisation account on github.
Note:
* All forked public repos at github will also be public, so **never commit sensitive data to your public forks**.
* The list of all forked repos can be retrieved from the github page of the original project.
2. Add the **forked repo** as a submodule to the desired folder in your existing ansible repo (for example 3d/kubespray):
```git submodule add https://github.com/YOUR_GITHUB/kubespray.git kubespray```
Git will create a _.gitmodules_ file in your existing ansible repo:
```
[submodule "3d/kubespray"]
path = 3d/kubespray
url = https://github.com/YOUR_GITHUB/kubespray.git
```
3. Configure git to show submodule status:
```git config --global status.submoduleSummary true```
4. Add *original* kubespray repo as upstream:
```git remote add upstream https://github.com/kubernetes-incubator/kubespray.git```
5. Sync your master branch with upstream:
```
git checkout master
git fetch upstream
git merge upstream/master
git push origin master
```
6. Create a new branch which you will use in your working environment:
```git checkout -b work```
***Never*** use master branch of your repository for your commits.
7. Modify the paths to the library and roles in your ansible.cfg file (role naming should be unique; you may have to rename your existing roles if they have the same names as roles in the kubespray project):
```
...
library = 3d/kubespray/library/
roles_path = 3d/kubespray/roles/
...
```
8. Copy and modify configs from the kubespray `group_vars` folder to the corresponding `group_vars` folder in your existing project.
You could rename the *all.yml* config to something else, e.g. *kubespray.yml*, and create a corresponding group in your inventory file, which will include all hosts groups related to the kubernetes setup.
9. Modify your ansible inventory file by adding a mapping of your existing groups (if any) to the kubespray naming.
For example:
```
...
#Kargo groups:
[kube-node:children]
kubenode
[k8s-cluster:children]
kubernetes
[etcd:children]
kubemaster
kubemaster-ha
[kube-master:children]
kubemaster
kubemaster-ha
[vault:children]
kube-master
[kubespray:children]
kubernetes
```
* The last entry here is needed to apply the kubespray.yml config file, renamed from all.yml of the kubespray project.
10. Now you can include kubespray tasks in your existing playbooks by including the cluster.yml file:
```
- name: Include kargo tasks
include: 3d/kubespray/cluster.yml
```
Or you could copy separate tasks from cluster.yml into your ansible repository.
11. Commit the changes to your ansible repo. Keep in mind that the submodule folder is just a link to the git commit hash of your forked repo.
When you update your "work" branch you need to commit changes to the ansible repo as well.
Other members of your team should use ```git submodule sync```, ```git submodule update --init``` to get the actual code from the submodule.
# Contributing
If you made useful changes or fixed a bug in the existing kubespray repo, use this flow for PRs to the original kubespray repo.
0. Sign the [CNCF CLA](https://github.com/kubernetes/kubernetes/wiki/CLA-FAQ).
1. Change working directory to git submodule directory (3d/kubespray).
2. Setup desired user.name and user.email for submodule.
If kubespray is the only submodule in your repo you could use something like:
```git submodule foreach --recursive 'git config user.name "First Last" && git config user.email "your-email-addres@used.for.cncf"'```
3. Sync with upstream master:
```
git fetch upstream
git merge upstream/master
git push origin master
```
4. Create new branch for the specific fixes that you want to contribute:
```git checkout -b fixes-name-date-index```
The branch name should be self-explanatory to you; adding a date and/or index will help you track/delete your old PRs.
5. Find the git hash of your commit in the "work" branch and apply it to the newly created "fix" branch:
```
git cherry-pick <COMMIT_HASH>
```
6. If you have several temporary-stage commits, squash them using [```git rebase -i```](http://eli.thegreenplace.net/2014/02/19/squashing-github-pull-requests-into-a-single-commit)
Also you could use an interactive rebase (```git rebase -i HEAD~10```) to delete commits which you don't want to contribute to the original repo.
7. When your changes are in place, you need to check the upstream repo one more time because it could have changed during your work.
Check that you're on correct branch:
```git status```
And pull changes from upstream (if any):
```git pull --rebase upstream master```
8. Now push your changes to your **fork** repo with ```git push```. If your branch doesn't exist on github, git will propose you use something like ```git push --set-upstream origin fixes-name-date-index```.
9. Open your forked repo in a browser; on the main page you will see a proposal to create a pull request for your newly created branch. Check the proposed diff of your PR. If something is wrong you can safely delete the "fix" branch on github using ```git push origin --delete fixes-name-date-index```, ```git branch -D fixes-name-date-index``` and start the whole process from the beginning.
If everything is fine, add a description of your changes (what they do and why they're needed) and confirm pull request creation.

@@ -0,0 +1,103 @@
# Overview
Distributed systems such as Kubernetes are designed to be resilient to
failures. More details about Kubernetes High-Availability (HA) may be found at
[Building High-Availability Clusters](https://kubernetes.io/docs/admin/high-availability/)
To keep the picture simple, most of the HA details are skipped here in order to describe
the Kubelet <-> Controller Manager communication only.
By default the normal behavior looks like:
1. Kubelet updates it status to apiserver periodically, as specified by
`--node-status-update-frequency`. The default value is **10s**.
2. Kubernetes controller manager checks the statuses of Kubelets every
`--node-monitor-period`. The default value is **5s**.
3. In case the status is updated within `--node-monitor-grace-period` of time,
Kubernetes controller manager considers healthy status of Kubelet. The
default value is **40s**.
> Kubernetes controller manager and Kubelets work asynchronously. It means that
> the delay may include any network latency, API Server latency, etcd latency,
> latency caused by load on one's master nodes and so on. So if
> `--node-status-update-frequency` is set to 5s in reality it may appear in
> etcd in 6-7 seconds or even longer when etcd cannot commit data to quorum
> nodes.
# Failure
Kubelet will try to make `nodeStatusUpdateRetry` post attempts. Currently
`nodeStatusUpdateRetry` is a constant set to 5 in
[kubelet.go](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/kubelet.go#L102).
Kubelet will try to update the status in the
[tryUpdateNodeStatus](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/kubelet_node_status.go#L345)
function. Kubelet uses the Golang `http.Client()` method, but has no specified
timeout. Thus there may be some glitches when the API Server is overloaded while
the TCP connection is being established.
So, there will be `nodeStatusUpdateRetry` * `--node-status-update-frequency`
attempts to set a status of node.
At the same time Kubernetes controller manager will try to check
`nodeStatusUpdateRetry` times every `--node-monitor-period` of time. After
`--node-monitor-grace-period` it will consider node unhealthy. It will remove
its pods based on `--pod-eviction-timeout`
Kube proxy has a watcher over API. Once pods are evicted, Kube proxy will
notice and will update iptables of the node. It will remove endpoints from
services so pods from failed node won't be accessible anymore.
# Recommendations for different cases
## Fast Update and Fast Reaction
If `--node-status-update-frequency` is set to **4s** (10s is default).
`--node-monitor-period` to **2s** (5s is default).
`--node-monitor-grace-period` to **20s** (40s is default).
`--pod-eviction-timeout` is set to **30s** (5m is default)
In such a scenario, pods will be evicted in **50s** because the node will be
considered as down after **20s**, and `--pod-eviction-timeout` occurs after
**30s** more. However, this scenario creates an overhead on etcd as every node
will try to update its status every 2 seconds.
If the environment has 1000 nodes, there will be 15000 node updates per
minute which may require large etcd containers or even dedicated nodes for etcd.
> If we calculate the number of tries, the division will give 5, but in reality
> it will be from 3 to 5 with `nodeStatusUpdateRetry` attempts of each try. The
> total number of attempts will vary from 15 to 25 due to latency of all
> components.
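For reference, the flags from this fast scenario map onto the two components roughly as follows (only the relevant flags are shown; this is a sketch, not a full command line):
```
# kubelet, on every node
kubelet --node-status-update-frequency=4s

# kube-controller-manager, on the masters
kube-controller-manager \
  --node-monitor-period=2s \
  --node-monitor-grace-period=20s \
  --pod-eviction-timeout=30s
```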
## Medium Update and Average Reaction
Let's set `--node-status-update-frequency` to **20s**
`--node-monitor-grace-period` to **2m** and `--pod-eviction-timeout` to **1m**.
In that case, Kubelet will try to update the status every 20s. So, it will be 6 * 5
= 30 attempts before the Kubernetes controller manager considers the node to be in
an unhealthy status. After 1m it will evict all pods. The total time before the
eviction process starts will be 3m.
Such scenario is good for medium environments as 1000 nodes will require 3000
etcd updates per minute.
> In reality, there will be from 4 to 6 node update tries. The total number
> of attempts will vary from 20 to 30.
## Low Update and Slow reaction
Let's set `--node-status-update-frequency` to **1m**,
`--node-monitor-grace-period` to **5m** and `--pod-eviction-timeout`
to **1m**. In this scenario, every kubelet will try to update the status every
minute. There will be 5 * 5 = 25 attempts before an unhealthy status is set. After 5m,
the Kubernetes controller manager will set the unhealthy status. This means that pods
will be evicted 1m after the node is marked unhealthy (6m in total).
> In reality, there will be from 3 to 5 tries. The total number of attempts will
> vary from 15 to 25.
There can be different combinations such as Fast Update with Slow reaction to
satisfy specific cases.
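Kubespray exposes these settings through the ``kubelet_status_update_frequency`` and ``kube_controller_*`` variables (see the large-deployments doc). A sketch of applying the medium scenario as extra vars, assuming the variables map onto the flags above as their names suggest:
```
ansible-playbook -b -i inventory/inventory.ini cluster.yml \
  -e kubelet_status_update_frequency=20s \
  -e kube_controller_node_monitor_grace_period=2m \
  -e kube_controller_node_monitor_period=5s \
  -e kube_controller_pod_eviction_timeout=1m
```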

@@ -3,7 +3,8 @@ Large deployments of K8s
For large scale deployments, consider the following configuration changes:
* Tune [ansible settings](http://docs.ansible.com/ansible/intro_configuration.html)
* Tune [ansible settings]
(http://docs.ansible.com/ansible/intro_configuration.html)
for `forks` and `timeout` vars to fit large numbers of nodes being deployed.
* Override containers' `foo_image_repo` vars to point to intranet registry.
@@ -23,9 +24,25 @@ For a large scaled deployments, consider the following configuration changes:
* Tune CPU/memory limits and requests. Those are located in roles' defaults
and named like ``foo_memory_limit``, ``foo_memory_requests`` and
``foo_cpu_limit``, ``foo_cpu_requests``. Note that 'Mi' memory units for K8s
will be submitted as 'M', if applied for ``docker run``, and cpu K8s units will
end up with the 'm' skipped for docker as well. This is required as docker does not
understand k8s units well.
will be submitted as 'M', if applied for ``docker run``, and cpu K8s units
will end up with the 'm' skipped for docker as well. This is required as
docker does not understand k8s units well.
* Tune ``kubelet_status_update_frequency`` to increase reliability of kubelet.
``kube_controller_node_monitor_grace_period``,
``kube_controller_node_monitor_period``,
``kube_controller_pod_eviction_timeout`` for better Kubernetes reliability.
Check out [Kubernetes Reliability](kubernetes-reliability.md)
* Add calico-rr nodes if you are deploying with Calico or Canal. Nodes recover
from host/network interruption much quicker with calico-rr. Note that
calico-rr role must be on a host without kube-master or kube-node role (but
etcd role is okay).
* Check out the
[Inventory](getting-started.md#building-your-own-inventory)
section of the Getting started guide for tips on creating a large scale
Ansible inventory.
For example, when deploying 200 nodes, you may want to run ansible with
``--forks=50``, ``--timeout=600`` and define the ``retry_stagger: 60``.
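A sketch of such a run (inventory path as in the earlier examples; ``retry_stagger`` passed as an extra var):
```
ansible-playbook -b -i inventory/inventory.ini cluster.yml \
  --forks=50 --timeout=600 -e retry_stagger=60
```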

@@ -1,8 +1,8 @@
Network Checker Application
===========================
With the ``deploy_netchecker`` var enabled (defaults to false), Kargo deploys a
Network Checker Application from the 3rd side `l23network/mcp-netchecker` docker
With the ``deploy_netchecker`` var enabled (defaults to false), Kubespray deploys a
Network Checker Application from the third-party `l23network/k8s-netchecker` docker
images. It consists of a server and agents trying to reach the server by the network
connectivity means usual for Kubernetes applications. Therefore, this
automagically verifies pod-to-pod connectivity via the cluster IP and checks
@@ -17,7 +17,7 @@ any of the cluster nodes:
```
curl http://localhost:31081/api/v1/connectivity_check
```
Note that Kargo does not invoke the check but only deploys the application, if
Note that Kubespray does not invoke the check but only deploys the application, if
requested.
There are related application-specific variables:
@@ -25,8 +25,8 @@ There are related application specifc variables:
netchecker_port: 31081
agent_report_interval: 15
netcheck_namespace: default
agent_img: "quay.io/l23network/mcp-netchecker-agent:v0.1"
server_img: "quay.io/l23network/mcp-netchecker-server:v0.1"
agent_img: "quay.io/l23network/k8s-netchecker-agent:v1.0"
server_img: "quay.io/l23network/k8s-netchecker-server:v1.0"
```
Note that the application verifies DNS resolve for FQDNs comprising only the

@@ -35,14 +35,12 @@ Then you can use the instance ids to find the connected [neutron](https://wiki.o
| 5662a4e0-e646-47f0-bf88-d80fbd2d99ef | e1f48aad-df96-4bce-bf61-62ae12bf3f95 |
| e5ae2045-a1e1-4e99-9aac-4353889449a7 | 725cd548-6ea3-426b-baaa-e7306d3c8052 |
Given the port ids on the left, you can set the `allowed_address_pairs` in neutron:
Given the port ids on the left, you can set the `allowed_address_pairs` in neutron.
Note that you have to allow both the `kube_service_addresses` (default `10.233.0.0/18`)
and the `kube_pods_subnet` (default `10.233.64.0/18`) networks.
# allow kube_service_addresses network
neutron port-update 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed_address_pairs list=true type=dict ip_address=10.233.0.0/18
neutron port-update e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed_address_pairs list=true type=dict ip_address=10.233.0.0/18
# allow kube_pods_subnet network
neutron port-update 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed_address_pairs list=true type=dict ip_address=10.233.64.0/18
neutron port-update e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed_address_pairs list=true type=dict ip_address=10.233.64.0/18
# allow kube_service_addresses and kube_pods_subnet network
neutron port-update 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed_address_pairs list=true type=dict ip_address=10.233.0.0/18 ip_address=10.233.64.0/18
neutron port-update e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed_address_pairs list=true type=dict ip_address=10.233.0.0/18 ip_address=10.233.64.0/18
Now you can finally run the playbook.

@@ -1,65 +1,66 @@
Kargo's roadmap
Kubespray's roadmap
=================
### Kubeadm
- Propose kubeadm as an option in order to setup the kubernetes cluster.
That would probably improve deployment speed and certs management [#553](https://github.com/kubespray/kargo/issues/553)
That would probably improve deployment speed and certs management [#553](https://github.com/kubespray/kubespray/issues/553)
### Self deployment (pull-mode) [#320](https://github.com/kubespray/kargo/issues/320)
### Self deployment (pull-mode) [#320](https://github.com/kubespray/kubespray/issues/320)
- the playbook would install and configure docker/rkt and the etcd cluster
- the following data would be inserted into etcd: certs,tokens,users,inventory,group_vars.
- a "kubespray" container would be deployed (kargo-cli, ansible-playbook, kpm)
- a "kubespray" container would be deployed (kubespray-cli, ansible-playbook, kpm)
- to be discussed, a way to provide the inventory
- **self deployment** of the node from inside a container [#321](https://github.com/kubespray/kargo/issues/321)
- **self deployment** of the node from inside a container [#321](https://github.com/kubespray/kubespray/issues/321)
### Provisioning and cloud providers
- Terraform to provision instances on **GCE, AWS, Openstack, Digital Ocean, Azure**
- On AWS autoscaling, multi AZ
- On Azure autoscaling, create loadbalancer [#297](https://github.com/kubespray/kargo/issues/297)
- On GCE be able to create a loadbalancer automatically (IAM ?) [#280](https://github.com/kubespray/kargo/issues/280)
- **TLS boostrap** support for kubelet [#234](https://github.com/kubespray/kargo/issues/234)
(related issues: https://github.com/kubernetes/kubernetes/pull/20439 <br>
- [ ] Terraform to provision instances on **GCE, AWS, Openstack, Digital Ocean, Azure**
- [ ] On AWS autoscaling, multi AZ
- [ ] On Azure autoscaling, create loadbalancer [#297](https://github.com/kubespray/kubespray/issues/297)
- [ ] On GCE be able to create a loadbalancer automatically (IAM ?) [#280](https://github.com/kubespray/kubespray/issues/280)
- [x] **TLS bootstrap** support for kubelet [#234](https://github.com/kubespray/kubespray/issues/234)
(related issues: https://github.com/kubernetes/kubernetes/pull/20439 <br>
https://github.com/kubernetes/kubernetes/issues/18112)
### Tests
- Run kubernetes e2e tests
- migrate to jenkins
- [x] Run kubernetes e2e tests
- [x] migrate to jenkins
(a test is currently a deployment on a 3 node cluster, testing k8s api, ping between 2 pods)
- Full tests on GCE per day (All OS's, all network plugins)
- trigger a single test per pull request
- single test with the Ansible version n-1 per day
- Test idempotency on on single OS but for all network plugins/container engines
- single test on AWS per day
- test different achitectures :
- [x] Full tests on GCE per day (All OS's, all network plugins)
- [x] trigger a single test per pull request
- [ ] ~~single test with the Ansible version n-1 per day~~
- [x] Test idempotency on a single OS but for all network plugins/container engines
- [ ] single test on AWS per day
- [x] test different architectures:
- 3 instances, 3 are members of the etcd cluster, 2 of them acting as master and node, 1 as node
- 5 instances, 3 are etcd and nodes, 2 are masters only
- 7 instances, 3 etcd only, 2 masters, 2 nodes
- test scale up cluster: +1 etcd, +1 master, +1 node
- [ ] test scale up cluster: +1 etcd, +1 master, +1 node
### Lifecycle
- [ ] Adopt the kubeadm tool by delegating CM tasks it is capable to accomplish well [#553](https://github.com/kubespray/kubespray/issues/553)
- [x] Drain worker node when upgrading k8s components in a worker node. [#154](https://github.com/kubespray/kubespray/issues/154)
- [ ] Drain worker node when shutting down/deleting an instance
- [ ] Upgrade granularity: select components to upgrade and skip others
### Networking
- [ ] romana.io support [#160](https://github.com/kubespray/kubespray/issues/160)
- [ ] Configure network policy for Calico. [#159](https://github.com/kubespray/kubespray/issues/159)
- [ ] Opencontrail
- [x] Canal
- [x] Cloud Provider native networking (instead of our network plugins)
### High availability
- (to be discussed) option to set a loadbalancer for the apiservers like ucarp/pacemaker/keepalived
While waiting for the issue [kubernetes/kubernetes#18174](https://github.com/kubernetes/kubernetes/issues/18174) to be fixed.
### Kubespray-cli
- Delete instances
- `kubespray vagrant` to set up a test cluster locally
- `kubespray azure` for Microsoft Azure support
- switch to Terraform instead of Ansible for provisioning
- update $HOME/.kube/config when a cluster is deployed. Optionally switch to this context
### Kubespray API
- Perform all actions through an **API**
- Store inventories / configurations of multiple clusters
- make sure that the state of the cluster is completely saved in no more than one config file beyond the hosts inventory
@@ -72,7 +73,7 @@ Include optionals deployments to init the cluster:
##### Others
##### Dashboards:
- kubernetes-dashboard
- Fabric8
- Tectonic
@@ -86,8 +87,8 @@ Include optionals deployments to init the cluster:
### Others
- remove nodes (adding is already supported)
- being able to choose any k8s version (almost done)
- **rkt** support [#59](https://github.com/kubespray/kubespray/issues/59)
- Review documentation (split in categories)
- **consul** -> if officially supported by k8s
- flex volumes options (e.g. **torrus** support) [#312](https://github.com/kubespray/kubespray/issues/312)
- Clusters federation option (aka **ubernetes**) [#329](https://github.com/kubespray/kubespray/issues/329)
@@ -4,25 +4,40 @@ Travis CI test matrix
GCE instances
-------------
Here is the test matrix for the CI gates:
| Network plugin| OS type| GCE region| Nodes layout|
|-------------------------|-------------------------|-------------------------|-------------------------|
| canal| debian-8-kubespray| asia-east1-a| ha-scale|
| calico| debian-8-kubespray| europe-west1-c| default|
| flannel| centos-7| asia-northeast1-c| default|
| calico| centos-7| us-central1-b| ha|
| weave| rhel-7| us-east1-c| default|
| canal| coreos-stable| us-west1-b| ha-scale|
| canal| rhel-7| asia-northeast1-b| separate|
| weave| ubuntu-1604-xenial| europe-west1-d| separate|
| calico| coreos-stable| us-central1-f| separate|
Node Layouts
------------
There are four node layout types: `default`, `separate`, `ha`, and `scale`.
`default` is a non-HA two-node setup with one separate `kube-node`
and the `etcd` group merged with the `kube-master`.
`separate` layout is when there is only one node of each type: a
kube-master, a kube-node, and an etcd cluster member.
`ha` layout consists of two etcd nodes, two masters and a single worker node,
with role intersection.
`scale` layout can be combined with above layouts. It includes 200 fake hosts
in the Ansible inventory. This helps test TLS certificate generation at scale
to prevent regressions and profile certain long-running tasks. These nodes are
never actually deployed, but certificates are generated for them.
Note that the canal network plugin deploys flannel as well as the calico policy controller.
@@ -40,15 +55,15 @@ GCE instances
| Stage| Network plugin| OS type| GCE region| Nodes layout
|--------------------|--------------------|--------------------|--------------------|--------------------|
| part1| calico| coreos-stable| us-west1-b| separate|
| part1| canal| debian-8-kubespray| us-east1-b| ha|
| part1| weave| rhel-7| europe-west1-b| default|
| part2| flannel| centos-7| us-west1-a| default|
| part2| calico| debian-8-kubespray| us-central1-b| default|
| part2| canal| coreos-stable| us-east1-b| default|
| special| canal| rhel-7| us-east1-b| separate|
| special| weave| ubuntu-1604-xenial| us-central1-b| default|
| special| calico| centos-7| europe-west1-b| ha-scale|
| special| weave| coreos-alpha| us-west1-a| ha-scale|
The "Stage" means a build step of the build pipeline. The steps are ordered as `part1->part2->special`.
@@ -1,11 +1,11 @@
Upgrading Kubernetes in Kubespray
=============================
#### Description
Kubespray handles upgrades the same way it handles initial deployment. That is to
say that each component is laid down in a fixed order. You should be able to
upgrade from Kubespray tag 2.0 up to the current master without difficulty. You can
also individually control versions of components by explicitly defining their
versions. Here are all version vars for each component:
@@ -18,7 +18,7 @@ versions. Here are all version vars for each component:
* flannel_version
* kubedns_version
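For instance, a version pin can be kept in the inventory group vars rather than passed on the command line. A minimal sketch, with illustrative values only (check the role defaults for the versions matching your release):
```yml
# inventory/group_vars/k8s-cluster.yml  (illustrative values)
kube_version: v1.6.0
flannel_version: v0.8.0
```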
#### Unsafe upgrade example
If you wanted to upgrade just kube_version from v1.4.3 to v1.4.6, you could
deploy the following way:
@@ -33,15 +33,37 @@ And then repeat with v1.4.6 as kube_version:
ansible-playbook cluster.yml -i inventory/inventory.cfg -e kube_version=v1.4.6
```
#### Graceful upgrade
Kubespray also supports cordon, drain and uncordoning of nodes when performing
a cluster upgrade. There is a separate playbook used for this purpose. It is
important to note that upgrade-cluster.yml can only be used for upgrading an
existing cluster. That means there must be at least 1 kube-master already
deployed.
```
git fetch origin
git checkout origin/master
ansible-playbook upgrade-cluster.yml -b -i inventory/inventory.cfg -e kube_version=v1.6.0
```
After a successful upgrade, the Server Version should be updated:
```
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T19:15:41Z", GoVersion:"go1.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0+coreos.0", GitCommit:"8031716957d697332f9234ddf85febb07ac6c3e3", GitTreeState:"clean", BuildDate:"2017-03-29T04:33:09Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
```
#### Upgrade order
As mentioned above, components are upgraded in the order in which they were
installed in the Ansible playbook. The order of component installation is as
follows:
* Docker
* etcd
* kubelet and kube-proxy
* network_plugin (such as Calico or Weave)
* kube-apiserver, kube-scheduler, and kube-controller-manager
* Add-ons (such as KubeDNS)
@@ -39,3 +39,31 @@ k8s-01 Ready 45s
k8s-02 Ready 45s
k8s-03 Ready 45s
```
Customize Vagrant
=================
You can override the default settings in the `Vagrantfile` either by directly modifying the `Vagrantfile`
or through an override file.
In the same directory as the `Vagrantfile`, create a folder called `vagrant` and create `config.rb` file in it.
You're able to override the variables defined in `Vagrantfile` by providing the value in the `vagrant/config.rb` file,
e.g.:
echo '$forwarded_ports = {8001 => 8001}' >> vagrant/config.rb
and after `vagrant up` or `vagrant reload`, your host will have port forwarding set up with the guest on port 8001.
Use alternative OS for Vagrant
==============================
By default, Vagrant uses an Ubuntu 16.04 box to provision a local cluster. You may use an alternative supported
operating system for your local cluster.
Customize the `$os` variable in the `Vagrantfile` or as an override, e.g.:
echo '$os = "coreos-stable"' >> vagrant/config.rb
The supported operating systems for vagrant are defined in the `SUPPORTED_OS` constant in the `Vagrantfile`.
@@ -1,4 +1,4 @@
Configurable Parameters in Kubespray
================================
#### Generic Ansible variables
@@ -12,7 +12,7 @@ Some variables of note include:
* *ansible_default_ipv4.address*: IP address Ansible automatically chooses.
Generated based on the output from the command ``ip -4 route get 8.8.8.8``
#### Common vars that are used in Kubespray
* *calico_version* - Specify version of Calico to use
* *calico_cni_version* - Specify version of Calico CNI plugin to use
@@ -23,7 +23,7 @@ Some variables of note include:
* *hyperkube_image_repo* - Specify the Docker repository where Hyperkube
resides
* *hyperkube_image_tag* - Specify the Docker tag where Hyperkube resides
* *kube_network_plugin* - Sets k8s network plugin (default Calico)
* *kube_proxy_mode* - Changes k8s proxy mode to iptables mode
* *kube_version* - Specify a given Kubernetes hyperkube version
* *searchdomains* - Array of DNS domains to search when looking up hostnames
@@ -35,15 +35,16 @@ Some variables of note include:
* *access_ip* - IP for other hosts to use to connect to. Often required when
deploying from a cloud, such as OpenStack or GCE and you have separate
public/floating and private IPs.
* *ansible_default_ipv4.address* - Not Kubespray-specific, but it is used if ip
and access_ip are undefined
* *loadbalancer_apiserver* - If defined, all hosts will connect to this
address instead of localhost for kube-masters and kube-master[0] for
kube-nodes. See more details in the
[HA guide](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ha-mode.md).
* *loadbalancer_apiserver_localhost* - makes all hosts connect to
the internally load-balanced apiserver endpoint. Mutually exclusive with
`loadbalancer_apiserver`. See more details in the
[HA guide](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ha-mode.md).
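For instance, the `loadbalancer_apiserver` entry above could be declared in the group vars roughly as follows; this is only a sketch, and the address/port values are placeholders (see the HA guide for the authoritative format):
```yml
# inventory/group_vars/all.yml  (sketch; placeholder values)
loadbalancer_apiserver:
  address: 10.0.0.100   # hypothetical VIP in front of the kube-masters
  port: 8383
# mutually exclusive alternative:
# loadbalancer_apiserver_localhost: true
```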
#### Cluster variables
@@ -66,6 +67,13 @@ following default cluster paramters:
OpenStack (default is unset)
* *kube_hostpath_dynamic_provisioner* - Required for use of PetSets type in
Kubernetes
* *kube_feature_gates* - A list of key=value pairs that describe feature gates for
alpha/experimental Kubernetes features. (default is `[]`)
* *authorization_modes* - A list of [authorization mode](
https://kubernetes.io/docs/admin/authorization/#using-flags-for-your-authorization-module)
that the cluster should be configured for. Defaults to `[]` (i.e. no authorization).
Note: `RBAC` is currently in an experimental phase and does not support either calico or
vault. Upgrading from non-RBAC to RBAC is not tested.
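Both variables can be set together in the cluster group vars. A minimal sketch, where the feature gate name is only an illustration of the key=value format:
```yml
# inventory/group_vars/k8s-cluster.yml
kube_feature_gates:
  - "Accelerators=true"   # illustrative gate name
authorization_modes:
  - "RBAC"                # experimental, see the note above
```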
Note, if cloud providers have any use of the ``10.233.0.0/16``, like instances'
private addresses, make sure to pick other values for ``kube_service_addresses``
@@ -78,13 +86,13 @@ other settings from your existing /etc/resolv.conf are lost. Set the following
variables to match your requirements.
* *upstream_dns_servers* - Array of upstream DNS servers configured on host in
addition to Kubespray deployed DNS
* *nameservers* - Array of DNS servers configured for use in dnsmasq
* *searchdomains* - Array of up to 4 search domains
* *skip_dnsmasq* - Don't set up dnsmasq (use only KubeDNS)
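A possible group vars sketch combining the DNS variables above (all addresses and domains are placeholders):
```yml
# inventory/group_vars/all.yml  (placeholder values)
upstream_dns_servers:
  - 8.8.8.8
  - 8.8.4.4
nameservers:
  - 10.0.0.2         # hypothetical internal resolver handed to dnsmasq
searchdomains:
  - example.com
skip_dnsmasq: false  # keep dnsmasq in front of KubeDNS
```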
For more information, see [DNS
Stack](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/dns-stack.md).
#### Other service variables
@@ -92,9 +100,31 @@ Stack](https://github.com/kubernetes-incubator/kargo/blob/master/docs/dns-stack.
``--insecure-registry=myregistry.mydomain:5000``
* *http_proxy/https_proxy/no_proxy* - Proxy variables for deploying behind a
proxy
* *kubelet_deployment_type* - Controls which platform to deploy kubelet on.
Available options are ``host``, ``rkt``, and ``docker``. ``docker`` mode
is unlikely to work on newer releases. Starting with Kubernetes v1.7
series, this now defaults to ``host``. Before v1.7, the default was Docker.
This is because of cgroup [issues](https://github.com/kubernetes/kubernetes/issues/43704).
* *kubelet_load_modules* - For some workloads, kubelet needs to load kernel modules. For example,
ceph and rbd backed volumes need dynamic kernel services for mounting persistent volumes into
containers, and these modules may not be loaded by the preinstall Kubernetes processes. Set this variable to
true to let kubelet load kernel modules.
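For instance, both of the settings above could be pinned explicitly in the cluster group vars; the values shown are just one possible combination:
```yml
# inventory/group_vars/k8s-cluster.yml
kubelet_deployment_type: host   # host | rkt | docker (docker is unlikely to work on newer releases)
kubelet_load_modules: true      # let kubelet load kernel modules, e.g. for rbd backed volumes
```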
##### Custom flags for Kube Components
For all kube components, custom flags can be passed in. This allows for edge cases where users need changes to the default deployment that may not be applicable to all deployments. This can be done by providing a list of flags. Example:
```
kubelet_custom_flags:
- "--eviction-hard=memory.available<100Mi"
- "--eviction-soft-grace-period=memory.available=30s"
- "--eviction-soft=memory.available<300Mi"
```
The possible vars are:
* *apiserver_custom_flags*
* *controller_mgr_custom_flags*
* *scheduler_custom_flags*
* *kubelet_custom_flags*
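The same list format applies to the other components; for example, a sketch with ordinary kube-apiserver and kube-controller-manager options picked purely for illustration:
```yml
apiserver_custom_flags:
  - "--runtime-config=batch/v2alpha1=true"
controller_mgr_custom_flags:
  - "--terminated-pod-gc-threshold=100"
```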
#### User accounts
Kubespray sets up two Kubernetes accounts by default: ``root`` and ``kube``. Their
passwords default to changeme. You can set this by changing ``kube_api_pwd``.
docs/vault.md Normal file
@@ -0,0 +1,92 @@
Hashicorp Vault Role
====================
Overview
--------
The Vault role is a two-step process:
1. Bootstrap
You cannot start your certificate management service securely with SSL (and
the datastore behind it) without having the certificates in-hand already. This
presents an unfortunate chicken and egg scenario, with one requiring the other.
To solve for this, the Bootstrap step was added.
This step spins up a temporary instance of Vault to issue certificates for
Vault itself. It then leaves the temporary instance running, so that the Etcd
role can generate certs for itself as well. Eventually, this may be improved
to allow alternate backends (such as Consul), but currently the tasks are
hardcoded to only create a Vault role for Etcd.
2. Cluster
This step is where the long-term Vault cluster is started and configured. Its
first task is to stop any temporary instances of Vault, to free the port for
the long-term cluster. At the end of this step, the entire Vault cluster should be up
and ready to go.
Keys to the Kingdom
-------------------
The two most important security pieces of Vault are the ``root_token``
and ``unsealing_keys``. Both of these values are given exactly once, during
the initialization of the Vault cluster. For convenience, they are saved
to the ``vault_secret_dir`` (default: /etc/vault/secrets) of every host in the
vault group.
It is *highly* recommended that these secrets are removed from the servers after
your cluster has been deployed, and kept in a safe location of your choosing.
Naturally, the seriousness of the situation depends on what you're doing with
your Kubespray cluster, but with these secrets, an attacker will have the ability
to authenticate to almost everything in Kubernetes and decode all private
(HTTPS) traffic on your network signed by Vault certificates.
For even greater security, you may want to remove and store elsewhere any
CA keys generated as well (e.g. /etc/vault/ssl/ca-key.pem).
Vault by default encrypts all traffic to and from the datastore backend, all
resting data, and uses TLS for its TCP listener. It is recommended that you
do not change the Vault config to disable TLS, unless you absolutely have to.
Usage
-----
To get the Vault role running, you must do two things at a minimum:
1. Assign the ``vault`` group to at least 1 node in your inventory
2. Change ``cert_management`` to be ``vault`` instead of ``script``
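A minimal group vars sketch for the second step, assuming the ``vault`` group is already assigned in the inventory:
```yml
# inventory/group_vars/k8s-cluster.yml  (sketch)
cert_management: vault   # default is "script"
```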
Nothing else is required, but customization is possible. Check
``roles/vault/defaults/main.yml`` for the different variables that can be
overridden, most common being ``vault_config``, ``vault_port``, and
``vault_deployment_type``.
Also, if you intend to use a Root or Intermediate CA generated elsewhere,
you'll need to copy the certificate and key to the hosts in the vault group
prior to running the vault role. By default, they'll be located at
``/etc/vault/ssl/ca.pem`` and ``/etc/vault/ssl/ca-key.pem``, respectively.
Additional Notes:
- ``groups.vault|first`` is considered the source of truth for Vault variables
- ``vault_leader_url`` is used as pointer for the current running Vault
- Each service should have its own role and credentials. Currently those
credentials are saved to ``/etc/vault/roles/<role>/``. The service will
need to read in those credentials if it wants to interact with Vault.
Potential Work
--------------
- Change the Vault role to not run certain tasks when ``root_token`` and
``unseal_keys`` are not present. Alternatively, allow user input for these
values when missing.
- Add the ability to start temp Vault with Host, Rkt, or Docker
- Add a dynamic way to change out the backend role creation during Bootstrap,
so other services can be used (such as Consul)
- Segregate Server Cert generation from Auth Cert generation (separate CAs).
This work was partially started with the `auth_cert_backend` tasks, but would
need to be further applied to all roles (particularly Etcd and Kubernetes).
docs/vsphere.md Normal file
@@ -0,0 +1,61 @@
# vSphere cloud provider
Kubespray can be deployed with vSphere as Cloud provider. This feature supports
- Volumes
- Persistent Volumes
- Storage Classes and provisioning of volumes.
- vSphere Storage Policy Based Management for Containers orchestrated by Kubernetes.
## Prerequisites
You first need to configure your vSphere environment by following the [official documentation](https://kubernetes.io/docs/getting-started-guides/vsphere/#vsphere-cloud-provider).
After this step you should have:
- UUID activated for each VM where Kubernetes will be deployed
- A vSphere account with required privileges
## Kubespray configuration
First, you must define the cloud provider in `inventory/group_vars/all.yml` and set it to `vsphere`.
```yml
cloud_provider: vsphere
```
Then, in the same file, you need to declare your vCenter credentials following the description below.
| Variable | Required | Type | Choices | Default | Comment |
|------------------------------|----------|---------|----------------------------|---------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| vsphere_vcenter_ip | TRUE | string | | | IP/URL of the vCenter |
| vsphere_vcenter_port | TRUE | integer | | | Port of the vCenter API. Commonly 443 |
| vsphere_insecure | TRUE | integer | 1, 0 | | set to 1 if the host above uses a self-signed cert |
| vsphere_user | TRUE | string | | | User name for vCenter with required privileges |
| vsphere_password | TRUE | string | | | Password for vCenter |
| vsphere_datacenter | TRUE | string | | | Datacenter name to use |
| vsphere_datastore | TRUE | string | | | Datastore name to use |
| vsphere_working_dir | TRUE | string | | | Working directory from the view "VMs and template" in the vCenter where VM are placed |
| vsphere_scsi_controller_type | TRUE | string | buslogic, pvscsi, parallel | pvscsi | SCSI controller name. Commonly "pvscsi". |
| vsphere_vm_uuid | FALSE | string | | | VM Instance UUID of virtual machine that host K8s master. Can be retrieved from instanceUuid property in VmConfigInfo, or as vc.uuid in VMX file or in `/sys/class/dmi/id/product_serial` |
| vsphere_public_network | FALSE | string | | Blank | Name of the network the VMs are joined to |
Example configuration
```yml
vsphere_vcenter_ip: "myvcenter.domain.com"
vsphere_vcenter_port: 443
vsphere_insecure: 1
vsphere_user: "k8s@vsphere.local"
vsphere_password: "K8s_admin"
vsphere_datacenter: "DATACENTER_name"
vsphere_datastore: "DATASTORE_name"
vsphere_working_dir: "Docker_hosts"
vsphere_scsi_controller_type: "pvscsi"
```
## Deployment
Once the configuration is set, you can execute the playbook again to apply the new configuration:
```
cd kubespray
ansible-playbook -i inventory/inventory.cfg -b -v cluster.yml
```
You'll find some useful examples [here](https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/vsphere) to test your configuration.
docs/weave.md Normal file
@@ -0,0 +1,98 @@
Weave
=======
Weave 2.0.1 is supported by kubespray
Weave uses [**consensus**](https://www.weave.works/docs/net/latest/ipam/#consensus) mode (the default mode) and [**seed**](https://www.weave.works/docs/net/latest/ipam/#seed) mode.
`Consensus` mode is best suited to clusters of a static size, and `seed` mode is best suited to clusters of a dynamic size.
Weave encryption is supported for all communication
* To use Weave encryption, specify a strong password (if no password, no encryption)
```
# In file ./inventory/group_vars/k8s-cluster.yml
weave_password: EnterPasswordHere
```
This password is used to set an environment variable inside the weave container.
Weave is deployed by kubespray using a DaemonSet.
* Check the status of Weave containers
```
# From client
kubectl -n kube-system get pods | grep weave
# output
weave-net-50wd2 2/2 Running 0 2m
weave-net-js9rb 2/2 Running 0 2m
```
There must be as many pods as nodes (here the cluster has 2 nodes, so there are 2 weave pods).
* Check the status of weave (connection, encryption, ...) for each node
```
# On nodes
curl http://127.0.0.1:6784/status
# output on node1
Version: 2.0.1 (up to date; next check at 2017/08/01 13:51:34)
Service: router
Protocol: weave 1..2
Name: fa:16:3e:b3:d6:b2(node1)
Encryption: enabled
PeerDiscovery: enabled
Targets: 2
Connections: 2 (1 established, 1 failed)
Peers: 2 (with 2 established connections)
TrustedSubnets: none
Service: ipam
Status: ready
Range: 10.233.64.0/18
DefaultSubnet: 10.233.64.0/18
```
* Check parameters of weave for each node
```
# On nodes
ps -aux | grep weaver
# output on node1 (here it uses seed mode)
root 8559 0.2 3.0 365280 62700 ? Sl 08:25 0:00 /home/weave/weaver --name=fa:16:3e:b3:d6:b2 --port=6783 --datapath=datapath --host-root=/host --http-addr=127.0.0.1:6784 --status-addr=0.0.0.0:6782 --docker-api= --no-dns --db-prefix=/weavedb/weave-net --ipalloc-range=10.233.64.0/18 --nickname=node1 --ipalloc-init seed=fa:16:3e:b3:d6:b2,fa:16:3e:f0:50:53 --conn-limit=30 --expect-npc 192.168.208.28 192.168.208.19
```
### Consensus mode (default mode)
This mode is best suited to clusters of a static size.
### Seed mode
This mode is best suited to clusters of a dynamic size.
The seed mode also allows multi-cloud and hybrid on-premise/cloud cluster deployments.
* Switch from consensus mode to seed mode
```
# In file ./inventory/group_vars/k8s-cluster.yml
weave_mode_seed: true
```
These two variables are only used when `weave_mode_seed` is set to `true` (**/!\ do not manually change these values**)
```
# In file ./inventory/group_vars/k8s-cluster.yml
weave_seed: uninitialized
weave_peers: uninitialized
```
The first variable, `weave_seed`, contains the initial nodes of the weave network.
The second variable, `weave_peers`, saves the IPs of all nodes joined to the weave network.
These two variables are used to connect a new node to the weave network. The new node needs to know the first nodes (seed) and the list of IPs of all nodes.
To reset these variables and reset the weave network, set them to `uninitialized`.
extra_playbooks/inventory Symbolic link
@@ -0,0 +1 @@
../inventory