* kubedns & kubedns-autoscaler: Stick to master nodes.
- Tolerate only master nodes and not any NoSchedule taint
- Pods are on different nodes
- Pods are required to be on a master node.
* kubedns: use soft nodeAffinity.
Prefer to be on a master node, don't require.
* coredns: Stick to (different) master nodes.
- Pods are on different nodes
- Pods are preferred to be on a master node.
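For reference, a minimal pod-spec sketch of the scheduling constraints described above (the label keys and the `k8s-app: kube-dns` selector are illustrative assumptions, not copied from the actual templates):
```yaml
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:    # soft: prefer masters, don't require them
      - weight: 100
        preference:
          matchExpressions:
            - key: node-role.kubernetes.io/master
              operator: Exists
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:     # hard: replicas land on different nodes
      - labelSelector:
          matchLabels:
            k8s-app: kube-dns
        topologyKey: kubernetes.io/hostname
```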
According to cluster/binary.yml the vault binary will be placed in `{{ bin_dir }}`, and the default for that variable is set in `inventory/sample/group_vars/all.yml`.
Attempting to clarify the language surrounding the etcd node deployment script failure mechanism. I had this error when doing a new cluster deployment last night and, though it should have been, it wasn't immediately apparent to me what was causing the issue (since my default master node hostnames do not specify whether they are also acting as etcd replicas).
ingress-nginx 0.16.2 (https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.16.2)
This patch simplifies the ingress-nginx deployment: by default it deploys on
masters, with customizable options; it also removes the
additional Ansible group "kube-ingress" and its k8s node label
injection.
Reference to https://kubernetes.io/docs/concepts/services-networking/ingress/#prerequisites:
GCE/Google Kubernetes Engine deploys an ingress controller on the master.
By changing `ingress_nginx_nodeselector` plus a custom k8s node
label, the user can customize the DaemonSet deployment target.
If `ingress_nginx_nodeselector` is empty, the DaemonSet will be deployed on
every k8s node (see the sketch below).
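As a rough group_vars sketch of that customization (the variable name comes from this change; the label key and value are only examples):
```yaml
# Pin the ingress-nginx DaemonSet to nodes carrying a custom label;
# leave the dict empty ({}) to deploy on every node.
ingress_nginx_nodeselector:
  node-role.kubernetes.io/ingress: "true"
```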
- cephfs-provisioner 06fddbe2 (https://github.com/kubernetes-incubator/external-storage/tree/06fddbe2/ceph/cephfs)
Notable changes from upstream:
- Added storage class parameters to specify a root path within the backing cephfs and, optionally, use deterministic directory and user names (https://github.com/kubernetes-incubator/external-storage/pull/696)
- Support capacity (https://github.com/kubernetes-incubator/external-storage/pull/770)
- Enable metrics server (https://github.com/kubernetes-incubator/external-storage/pull/797)
Other notable changes:
- Clean up legacy manifests file naming
- Remove legacy manifests, namespace and storageclass before upgrade
- `cephfs_provisioner_monitors` simplified as string
- Default to new deterministic naming
- Add `reclaimPolicy` support in StorageClass
With the legacy non-deterministic naming style (where $UUID is generated randomly):
- cephfs_provisioner_claim_root: /volumes/kubernetes
- cephfs_provisioner_deterministic_names: false
- Generated CephFS volume: /volumes/kubernetes/kubernetes-dynamic-pvc-$UUID
- Generated CephFS user: kubernetes-dynamic-user-$UUID
With the new default deterministic naming style (where $NAMESPACE and $PVC are predictable):
- cephfs_provisioner_claim_root: /volumes
- cephfs_provisioner_deterministic_names: true
- Generated CephFS volume: /volumes/$NAMESPACE/$PVC
- Generated CephFS user: k8s.$NAMESPACE.$PVC
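Putting the variables above together, a minimal group_vars sketch for the new deterministic defaults (the monitor addresses are placeholders):
```yaml
cephfs_provisioner_monitors: "172.24.0.1:6789,172.24.0.2:6789"  # now a plain string
cephfs_provisioner_claim_root: /volumes
cephfs_provisioner_deterministic_names: true
```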
The README says that to check if Python and pip are installed, type:
```
python -v && pip -v
```
Lowercase `-v` is `--verbose`, uppercase `-V` is `--version`. The
command should be:
```
python -V && pip -V
```
The number of pods on a given node is determined by the --max-pods=k
directive. When the address space is exhausted, no more pods can be
scheduled even if, from the --max-pods perspective, the node still has
capacity.
The special case that a pod is scheduled and uses the node IP in the
host network namespace is too "soft" to derive a guarantee.
Comparing kubelet_max_pods with kube_network_node_prefix, when both are given,
allows asserting that the pod limit matches the CIDR address space.
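As a worked example of that comparison (numbers illustrative): a /24 per-node prefix yields 2^(32-24) - 2 = 254 usable addresses, so kubelet_max_pods must not exceed 254.
```yaml
kube_network_node_prefix: 24   # 254 usable pod IPs per node
kubelet_max_pods: 110          # OK: 110 <= 254
```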
* Move front-proxy-client certs back to kube mount
We want the same CA for all k8s certs
* Refactor vault to use a third party module
The module adds idempotency and reduces some of the repetitive
logic in the vault role
Requires ansible-modules-hashivault on the Ansible node and hvac
on the vault hosts themselves
Add upgrade test scenario
Remove bootstrap-os tags from tasks
* fix upgrade issues
* improve unseal logic
* specify ca and fix etcd check
* Fix initialization check
bump machine size
* [terraform/openstack] Restores ability to use existing public nodes and masters as bastion.
* [terraform/openstack] Uses network_id as output
* [terraform/openstack] Fixes link to inventory/local/group_vars
* [terraform/openstack] Adds supplementary master groups
* [terraform/openstack] Updates documentation to avoid manual bastion setup (no longer needed).
* [terraform/openstack] Supplementary master groups in docs.
* [terraform/openstack] Fixes repeated usage of master fips instead of bastion fips
* [terraform/openstack] Missing change for network_id to subnet_id
* [terraform/openstack] Changes conditional to element( concat ) form to avoid type issues with empty lists.
* sysctl file should be in defaults so that it can be overridden
* Change sysctl_file_path to be consistent with roles/kubernetes/preinstall/defaults/main.yml
pip was always being downloaded on subsequent runs. This PR always runs the pip command and checks its return code before downloading pip.
Fix in favor of #2582
Kubespray should not install any helm charts. This is a task
that a user should do on his/her own through ansible or another
tool. It opens the door to wrapping installation of any helm
chart.
The change to support multiple inventory paths led to the Vagrant environment not
getting a default group_vars in its inventory path. Use sample as the
default path if none is specified.
Fix issue #2541
Signed-off-by: Ganesh Maharaj Mahalingam <ganesh.mahalingam@intel.com>
The RPM file that's provided by upstream can be used for SUSE
distributions as well. Moreover we simplify the playbook to use
the 'package' module to install packages across different distros.
Link: https://github.com/rkt/rkt/pull/3904
If the 'docker' package is already installed, then the handlers will not
run and the service will not be (re-)started. As such, lets make sure
that the service is started even if the packages are already installed.
Add support for installing Docker on SUSE distributions. The Docker
repository at https://yum.dockerproject.org/repo/main/ does not support
recent openSUSE distributions so the only alternative is to use the
packages from the distro repositories. This however renders the
'docker_version' Ansible variable useless on SUSE.
The openssl package on Tumbleweed is actually a virtual package covering
openssl-1.0.0 and openssl-1.1.0 implementations. It defaults to 1.1.0 so
when trying to install it and openssl-1.0.0 is installed, zypper fails
with conflicts. As such, lets explicitly pull the package that we need
which also updates the virtual one.
Co-authored-by: Markos Chandras <mchandras@suse.de>
Depending on the VM configuration, vagrant may either use 'rsync' or
vboxfs for populating the working directory to the VM. However, vboxfs
means that any files created by the VM will also be present on the host.
As such, lets be explicit and always use 'rsync' to copy the directory
to the VM so we can keep the host copy clean. Moreover, the default
rsync options include '--copy-links' and this breaks rsync if there are
missing symlinks in the working directory like the following one:
Error: symlink has no referent:
"/home/user/kubespray/contrib/network-storage/glusterfs/group_vars"
As such, we override the default options to drop --copy-links.
While `do` looks cleaner, forcing this extra option in ansible.cfg
seems to be more invasive. It would be better to keep the traditional
approach of `set dummy = ` instead.
The current way to set up the etcd cluster is messy and buggy.
- It checks whether the cluster is healthy before the cluster is even created.
- The unit files are started by handlers, not in the task, so you have to mess with "flush handlers".
- The join_member.yml is not used.
- etcd events cluster is not configured for kubeadm
- remove duplicate runs between running the role on etcd nodes and k8s nodes
* Remove old docker packages
This removes docker packages that are obsolete if docker-ce packages are to be installed, which fixes some package conflict issues that can occur during upgrades.
* Add support for setting obsoletes=0 when installing docker with yum
The default for kibana_base_url does not make sense and makes kibana unusable. The default path forces a 404 when you try to open kibana in the browser. Not setting kibana_base_url works just fine.
Added CoreDNS to downloads
Updated with labels. Should now work without RBAC too
Fix DNS settings on hosts
Rename CoreDNS service from kube-dns to coredns
Add rotate based on http://edgeofsanity.net/rant/2017/12/20/systemd-resolved-is-broken.html
Updated docs with CoreDNS info
Added labels and fixed minor settings from official yaml file: https://github.com/kubernetes/kubernetes/blob/release-1.9/cluster/addons/dns/coredns.yaml.sed
Added a secondary deployment and a secondary service IP. This is to mitigate DNS timeouts and create high resiliency against failures. See discussion at 'https://github.com/coreos/coreos-kubernetes/issues/641#issuecomment-281174806'
Set the DNS list correctly. Thanks to @whereismyjetpack
Only download KubeDNS or CoreDNS if selected
Move dns cleanup to its own file and import tasks based on dns mode
Fix install of KubeDNS when the dnsmasq_kubedns mode is selected
Add new dns option coredns_dual for dual stack deployment. Added variable to configure replicas deployed. Updated docs for dual stack deployment. Removed rotate option in resolv.conf.
Run DNS manifests for CoreDNS and KubeDNS
Set skydns servers on dual stack deployment
Use only one template for CoreDNS dual deployment
Set correct cluster ip for the dns server
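A minimal group_vars sketch for the dual-stack option mentioned above (the `coredns_dual` value comes from this change; `dns_mode` is Kubespray's usual DNS mode switch):
```yaml
dns_mode: coredns_dual   # deploys a secondary CoreDNS with its own service IP
```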
Flannel uses the interface set by the --iface option for inter-host communication.
It defaults to the interface of the default route on the machine.
The flannel config is set via a daemonset, and the flannel config on all nodes is the same,
but different nodes can have different interface names for the inter-host communication network.
The --iface-regex option allows flannel to find the interface whose address is on the inter-host communication network.
* Added option for encrypting secrets to etcd
* Fix keylength to 32
* Forgot the default
* Rename secrets.yaml to secrets_encryption.yaml
* Fix static path for secrets file to use ansible variable
* Rename secrets.yaml.j2 to secrets_encryption.yaml.j2
* Base64 encode the token
* Fixed merge error
* Changed path to credentials dir
* Update path to secrets file which is now readable inside the apiserver container. Set better file permissions
* Add encryption option to k8s-cluster.yml
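A sketch of turning the feature on in k8s-cluster.yml; the exact option name is an assumption, since the log above only states that an encryption option was added:
```yaml
kube_encrypt_secret_data: true   # assumed variable name; enables encryption at rest for secrets in etcd
```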
Setting the following:
```
kube_kubeadm_controller_extra_args:
  address: 0.0.0.0
  terminated-pod-gc-threshold: "100"
```
Results in `terminated-pod-gc-threshold: 100` in the kubeadm config file. But it has to be a string to work.
* Multiple files are now supported across operations.
* Can be specified as a list or a comma separated string.
* Single item per task params will still work without changes.
* Added `files`, `filenames`, and `file`, as aliases for the `filename` param.
* Improved output of error message to always include stderr
* `exists` now supports checking files
Follow up PRs encouraged across roles to start converting `with_items` loops on `kube` tasks into `files` param lists so we can improve performance.
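A rough sketch of what such a converted task could look like (the `kubectl`, `state` and path parameters follow the usual patterns of these roles and are assumptions, not copied from this change):
```yaml
- name: Apply several manifests with a single kube call
  kube:
    kubectl: "{{ bin_dir }}/kubectl"
    filename: "{{ kube_config_dir }}/coredns-deployment.yml,{{ kube_config_dir }}/coredns-svc.yml"
    state: latest
```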
This is trying to match what roles/bastion-ssh-config is trying to do. When the setup goes through the bastion, we want the ssh private key to be used on the bastion instance.
to the API server configuration.
This solves the problem where, if you have non-resolvable node names
and try to scale the cluster by adding new nodes, kubectl commands
start to fail for the newly added nodes, giving a TCP timeout error when
trying to resolve the node hostname against a public DNS server.
Adding this to the default example inventory so it has less of a chance of biting others after weeks of random failures (etcd does not report that it has run out of RAM; it just stalls). 512MB was not enough for us to run one of our products.
* Fix run kubectl error
Fix an error when running kubectl and the first master doesn't work
* If access_ip is defined, use first_kube_master; otherwise a different master uses a different IP
* Delete setting first_kube_master and use kube_apiserver_access_address
* Set filemode to 0640
The weave-net.yml file is readable by all users on the host. However, it contains the weave_password used to encrypt all pod communication. It should only be readable by root.
* Set mode 0640 on users_file with basic auth
* Added cilium support
* Fix typo in debian test config
* Remove empty lines
* Changed cilium version from <latest> to <v1.0.0-rc3>
* Add missing changes for cilium
* Add cilium to CI pipeline
* Fix wrong file name
* Check kernel version for cilium
* fixed ci error
* fixed cilium-ds.j2 template
* added waiting for cilium pods to run
* Fixed missing EOF
* Fixed trailing spaces
* Fixed trailing spaces
* Fixed trailing spaces
* Fixed too many blank lines
* Updated tolerations,annotations in cilium DS template
* Set cilium_version to iptables-1.9 to see if bug is fixed in CI
* Update cilium image tag to v1.0.0-rc4
* Update Cilium test case CI vars filenames
* Add optional prometheus flag, adjust initial readiness delay
* Update README.md with cilium info
When etcd exceeds its memory limit, it becomes useless but keeps running.
We should let OOM killer kill etcd process in the container, so systemd can spot
the problem and restart etcd according to "Restart" setting in etcd.service unit file.
If the OOM problem keeps repeating, i.e. it happens on every single restart,
systemd will eventually back off and stop restarting it anyway.
--restart=on-failure:5 in this file has no effect because a memory allocation error
doesn't by itself cause the process to die.
Related: https://github.com/kubernetes-incubator/kubespray/blob/master/roles/etcd/templates/etcd-docker.service.j2
This kind of reverts a change introduced in #1860.
Even though kubeadm_token_ttl=0 is set there, which means that the kubeadm token never expires, the token is not present in `kubeadm token list` after the cluster is provisioned (at least after it has been running for some time), and there is an issue about this: https://github.com/kubernetes/kubeadm/issues/335. So we need to create a new temporary token during the cluster upgrade.
Ansible automatically installs the python-apt package when using
the 'apt' Ansible module, if python-apt is not present. This patch
removes the (unneeded) explicit installation in the Kubespray
'preinstall' role.
The default path assumes that the vagrant dir is called 'inventory'.
With custom defined inventory dirs that are not called 'inventory' this
fails to create the correct symlink under .vagrant.d.
In some installations, it can take up to 3 seconds to get the value. Retrying
for 5 seconds ensures the command won't return 1.
Signed-off-by: Sébastien Han <seb@redhat.com>
* allow installs to not have the hostname overridden with the fqdn from inventory
* calico-config no longer requires local as and will default to global
* when cloudprovider is not defined, use the inventory_hostname for cni-calico
* allow reset to not restart network (buggy nodes die with this cmd)
* default kube_override_hostname to inventory_hostname instead of ansible_hostname
The "centos/7" box is the official centos box and supports all the major
providers:
- virtualbox: Externally hosted (cloud.centos.org)
- vmware_desktop: Externally hosted (cloud.centos.org)
- libvirt: Externally hosted (cloud.centos.org)
- hyperv: Externally hosted (cloud.centos.org)
Where bento/centos-7.3 only supports:
- parallels: Hosted by Vagrant Cloud (570 MB)
- virtualbox: Hosted by Vagrant Cloud (525 MB)
- vmware_desktop: Hosted by Vagrant Cloud (608 MB)
Signed-off-by: Sébastien Han <seb@redhat.com>
When testing deployments of SDS, it is quite useful to get a Kubernetes
env with nodes having dedicated drives.
You can now enable this by setting: kube_node_instances_with_disks: true
Also you can choose the number of drives per machine and their respective
size:
* kube_node_instances_with_disks_number: 10
* kube_node_instances_with_disks_size: "20G"
Signed-off-by: Sébastien Han <seb@redhat.com>
Cloud resolvers are mandatory for hosts on GCE and OpenStack
clouds. The 8.8.8.8 alternative resolver was dropped because
there is already a default nameserver. The new var name
reflects the purpose better.
Also restart apiserver when modifying dns settings.
If you configure your external loadbalancer to do a simple TCP pass-through to the API servers, and you do not use a DNS FQDN but just the IP, then you need to add the IP address to the certificates too.
Example config:
```
## External LB example config
apiserver_loadbalancer_domain_name: "10.50.63.10"
loadbalancer_apiserver:
  address: 10.50.63.10
  port: 8383
```
Some installations are failing to authenticate with peers due to
etcd picking up/resolving the wrong node.
By setting 'etcd_peer_client_auth' to "False" you can disable peer client cert
authentication.
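A one-line group_vars sketch of the toggle described above:
```yaml
etcd_peer_client_auth: false   # disable peer client certificate authentication
```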
Signed-off-by: Sébastien Han <seb@redhat.com>
* Update rpm spec and pbr setup configs
* Rename package to kubespray
* Do not break Fedora's FHS and install to /usr/share instead
* Remove the vendor tag
* Update source0 for better artifacts' names
* Fix missing files build errors
* Make version/release to auto match from git and fit PEP 440
Co-authored-by: Matthias Runge <mrunge@redhat.com>
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
* Add package paths to roles search in ansible conf
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
* Poke jinja2 requirements in rpm spec file
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
kube-proxy is complaining of missing modules at startup. There is a plan
to also support an LVS implementation of kube-proxy in addition to
the userspace and iptables ones.
* Fix HA docs API access endpoints explained
Follow-up commit 81347298a3
and fix the endpoint value provided in HA docs.
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
* Clarify internal LB with external LB use case
* Clarify how to use both internal and external, non-cluster aware and
not managed with Kubespray, LB solutions.
* Clarify the requirements, like TLS/SSL termination, for such an external LB.
Unlike to the 'cluster-aware' external LB config, endpoints' security must be
managed by that non-cluster aware external LB.
* Note that masters always contact their local apiservers via https://bip:sp.
It's highly unlikely to go down and it reduces latency that might be
introduced when going host->lb->host. Only computes go that path.
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
* Add a note for supplementary_addresses_in_ssl_keys
Explain how to benefit from supplementary_addresses_in_ssl_keys
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
Hardcoded variables are removed from the variables.tf file because they might
not be suitable for all OpenStack clouds, depending on which Identity API
version is available (v2 or v3) and the preferred authentication
method.
Auto configure API access endpoint with a custom bind IP, if provided.
Fix the HA docs: the http URLs are in fact https; clarify the insecure vs secure
API access modes as well.
Closes: #issues/2051
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
Update checksum for kubeadm
Use v1.9.0 kubeadm params
Include hash of ca.crt for kubeadm join
Update tag for testing upgrades
Add workaround for testing upgrades
Remove scale CI scenarios because of slow inventory parsing
in ansible 2.4.x.
Change region for tests to us-central1 to
improve ansible performance
Starting with Kubernetes v1.8.4, kubelet ignores the AWS cloud
provider string and uses the override hostname, which fails
Node admission checks.
Fixes #2094
The search line in /etc/resolv.conf could have
multiple spaces or tabs between domains.
split(' ') will give wrong results in some cases;
use split() without an argument instead.
e.g.
>>> 'domain.tld\tcluster.tld '.split(' ')
['domain.tld\tcluster.tld', '']
>>> 'domain.tld cluster.tld '.split()
['domain.tld', 'cluster.tld']
As we have seen with other containers, sometimes container removal fails on the first attempt due to some Docker bugs. Retrying typically corrects the issue.
Use an etcd-initer init container to generate etcd args; it determines
the etcd name by comparing its IP with the etcd cluster IPs. This makes
the etcd configuration independent of the ansible templating, so
adding master nodes becomes easier.
Putting contiv etcd and etcd-proxy into the same daemonset and managing
the difference via an env file is not good for scaling (adding nodes).
This commit splits them into two daemonsets so that when adding nodes,
k8s can automatically start an etcd-proxy on the new nodes without needing
to run the related play that puts the env file in place.
* Remove the network device created by flannel
* Modify the flannel.1 device path
* Remove trailing spaces
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled).
Rework of #1937 with kubeadm support
Also, fixed an issue in `kubeadm-migrate-certs` where the old apiserver cert was copied as the kubeadm key
* Adding bastion and private network provisioning for openstack terraform
* Remove usage of floating-ip property
* Combine openstack instances + floating ips
* Fix relating floating IPs to hosts for openstack builds
* Tighten up security groups
Allow ssh into all instances with floating IP
* Add the gluster hosts to the no-floating group
* Break terraform into modules
* Update README and var descriptions to match current config
* Remove volume property in gluster compute def
* Include cluster name in internal network and router names
* Make dns_nameservers a variable
* Properly tag instances and subnets with `kubernetes.io/cluster/$cluster_name`
This is required by kubernetes to support multiple clusters in a single vpc/az
* Get rid of loadbalancer_apiserver_address as it is no longer needed
* Dynamically retrieve aws_bastion_ami latest reference by querying AWS rather than hard coded
* Dynamically retrieve the list of availability_zones instead of needing to have them hard coded
* Limit availability zones to first 2, using slice extrapolation function
* Replace the need for hardcoded variable "aws_cluster_ami" by the data provided by Terraform
* Move ami choosing to vars, so people don't need to edit create infrastructure if they want another vendor image (as suggested by @atoms)
* Make name of the data block agnostic of distribution, given there are more than one distribution supported
* Add documentation about other distros being supported and what to change in which location to make these changes
* Allow setting --bind-address for apiserver hyperkube
This is required if you wish to configure a loadbalancer (e.g haproxy)
running on the master nodes without choosing a different port for the
vip from that used by the API - in this case you need the API to bind to
a specific interface, then haproxy can bind the same port on the VIP:
root@overcloud-controller-0 ~]# netstat -taupen | grep 6443
tcp 0 0 192.168.24.6:6443 0.0.0.0:* LISTEN 0 680613 134504/haproxy
tcp 0 0 192.168.24.16:6443 0.0.0.0:* LISTEN 0 653329 131423/hyperkube
tcp 0 0 192.168.24.16:6443 192.168.24.16:58404 ESTABLISHED 0 652991 131423/hyperkube
tcp 0 0 192.168.24.16:58404 192.168.24.16:6443 ESTABLISHED 0 652986 131423/hyperkube
This can be achieved e.g via:
kube_apiserver_bind_address: 192.168.24.16
* Address code review feedback
* Update kube-apiserver.manifest.j2
* Add Contiv support
Contiv is a network plugin for Kubernetes and Docker. It supports
vlan/vxlan/BGP/Cisco ACI technologies. It supports firewall policies,
multiple networks and bridging pods onto physical networks.
* Update contiv version to 1.1.4
Update contiv version to 1.1.4 and added SVC_SUBNET in contiv-config.
* Load openvswitch module to workaround on CentOS7.4
* Set contiv cni version to 0.1.0
Correct contiv CNI version to 0.1.0.
* Use kube_apiserver_endpoint for K8S_API_SERVER
Use kube_apiserver_endpoint as K8S_API_SERVER to make contiv talks
to a available endpoint no matter if there's a loadbalancer or not.
* Make contiv use its own etcd
Before this commit, contiv used an etcd proxy mode to the k8s etcd.
This works fine when the etcd hosts are co-located with the contiv etcd
proxy; however, the k8s peering certs are only in the etcd group, so
the etcd-proxy is unable to peer with the k8s etcd on the
etcd group. In addition, the netplugin always tries to find the etcd
endpoint on localhost, which causes problems for all netplugins
not running on etcd group nodes.
This commit makes contiv use its own etcd, separate from the k8s one.
On kube-master nodes (where netmaster runs), it will run in leader
mode, and on all other nodes it will run in proxy mode.
* Use cp instead of rsync to copy cni binaries
Since rsync has been removed from hyperkube, this commit changes it
to use cp instead.
* Make contiv-etcd able to run on master nodes
* Add rbac_enabled flag for contiv pods
* Add contiv into CNI network plugin lists
* migrate contiv test to tests/files
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
* Add required rules for contiv netplugin
* Better handling json return of fwdMode
* Make contiv etcd port configurable
* Use default var instead of templating
* roles/download/defaults/main.yml: use contiv 1.1.7
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
Move the RS to a Deployment so there is no need to take care of the revision history
limits:
- Delete the old RS
- Make Calico manifest a deployment
- move deployments to apps/v1beta2 API since Kubernetes 1.8
* Defaults for apiserver_loadbalancer_domain_name
When loadbalancer_apiserver is defined, use the
apiserver_loadbalancer_domain_name with a given default value.
Fix inconsistencies when checking if apiserver_loadbalancer_domain_name
is defined AND using it with a default value provided at once.
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
* Define defaults for LB modes in common defaults
Adjust the defaults for apiserver_loadbalancer_domain_name and
loadbalancer_apiserver_localhost to come from a single source, which is
kubespray-defaults. Removes some confusion and simplifies the code.
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
Thought this wasn't required at first but I forgot there's no auto flush at the end of these tasks since the `kubernetes/master` role is not the end of the play.
* Fixes an issue where apiserver and friends (controller manager, scheduler) were prevented from restarting after manifests/secrets are changed. This occurred when a replaced kubelet doesn't reconcile new master manifests, which caused old master component versions to linger during deployment. In my case this was causing upgrades from k8s 1.6/1.7 -> k8s 1.8 to fail
* Improves transitions from kubelet container to host kubelet by preventing issues where kubelet container reappeared during the deployment
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). It's working, but so far I have had to:
1. Make the `uri` module "Wait for apiserver up" checks use `kube_apiserver_port` (HTTPS)
2. Add apiserver client cert/key to the "Wait for apiserver up" checks
3. Update apiserver liveness probe to use HTTPS ports
4. Set `kube_api_anonymous_auth` to true to allow liveness probe to hit apiserver's /healthz over HTTPS (livenessProbes can't use client cert/key unfortunately)
5. RBAC has to be enabled. Anonymous requests are in the `system:unauthenticated` group which is granted access to /healthz by one of RBAC's default ClusterRoleBindings. An equivalent ABAC rule could allow this as well.
Changes 1 and 2 should work for everyone, but 3, 4, and 5 require new coupling of currently independent configuration settings. So I also added a new settings check.
Options:
1. The problem goes away if you have both anonymous-auth and RBAC enabled. This is how kubeadm does it. This may be the best way to go since RBAC is already on by default but anonymous auth is not.
2. Include conditional templates to set a different liveness probe for possible combinations of `kube_apiserver_insecure_port = 0`, RBAC, and `kube_api_anonymous_auth` (won't be possible to cover every case without a guaranteed authorizer for the secure port)
3. Use basic auth headers for the liveness probe (I really don't like this, it adds a new dependency on basic auth which I'd also like to leave independently configurable, and it requires encoded passwords in the apiserver manifest)
Option 1 seems like the clear winner to me, but is there a reason we wouldn't want anonymous-auth on by default? The apiserver binary defaults anonymous-auth to true, but kubespray's default was false.
* Change deprecated vagrant ansible flag 'sudo' to 'become'
* Emphasize that the name of the pip_pyton_modules variable is only considered on CoreOS
* Remove useless unused variable
* Fix warning when jinja2 template-delimiters used in when statement
There is no need for jinja2 template-delimiters like {{ }} or {% %}
any more. They can just be omitted as described in https://github.com/ansible/ansible/issues/22397
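A before/after sketch of the fix described above (the variable name is illustrative):
```yaml
# Before: triggers the "when statements should not include jinja2 templating delimiters" warning
when: "{{ my_condition }}"
# After: the bare expression is enough
when: my_condition
```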
* Fix broken link in getting-started guide
* Change deprecated vagrant ansible flag 'sudo' to 'become'
* Work around an ansible bug where accessing a var via the vars dict doesn't get the real value
When accessing a variable via its name "{{ foo }}" its value is
retrieved. But when the variable value is retrieved via the vars dict
"{{ vars['foo'] }}" this doesn't resolve the expression of the variable
any more due to a bug. So e.g. an expression foo="{{ 1 == 1 }}" is no
longer resolved but just returned as the string "1 == 1".
* Make file yamllint compliant
When proxy vars are set, `uri` module tasks will attempt to route traffic through the proxy. This causes the "Wait for" tasks in the `etcd` and `kubernetes/master` roles to hang, as localhost connections struggle with a proxy.
As far as I know these roles only need local/cluster networking, so a proxy doesn't apply here anyway.
Some time ago I think the hardcoded `/var/lib/docker` was required, but kubelet running in a container has been aware of the Docker path since at least as far back as k8s 1.6.
Without this change, you see a large number of errors in the kubelet logs if you installed with a non-default `docker_daemon_graph`
This allows overriding of apt repo endpoints when internet sources are not accessible. Additionally, switch to using the dockerproject.org gpg key url for apt instead of keyservers.net
* Fix broken CI jobs
Adjust image and image_family scenarios for debian.
Checkout CI file for upgrades
* add debugging to file download
* Fix download for alternate playbooks
* Update ansible ssh args to force ssh user
* Update sync_container.yml
* Refactor downloads to use download role directly
Also disable fact delegation so the download delegate works across OSes.
* clean up bools and ansible_os_family conditionals
* Update main.yml
Needs to set up resolv.conf before updating the Yum cache, otherwise no name resolution is available (resolv.conf is empty).
* Update main.yml
Removing trailing spaces
* Add possibility to insert more IP addresses in certificates
* Add newline at end of files
* Move supp ip parameters to k8s-cluster group file
* Add supplementary addresses in kubeadm master role
* Improve openssl indexes
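For context, a group_vars sketch of the supplementary-addresses feature these changes build on (the addresses are placeholders):
```yaml
supplementary_addresses_in_ssl_keys:
  - 10.50.63.10     # e.g. an external LB IP that must be valid in the apiserver cert
  - 192.168.1.10
```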
* don't try to install this rpm on fedora atomic
* add docker 1.13.1 for fedora
* built-in docker unit file is sufficient, as tested on both fedora and centos atomic
* Change file used to check kubeadm upgrade method
Test for ca.crt instead of admin.conf because admin.conf
is created during normal deployment.
* more fixes for upgrade
In 1.8, the Node authorization mode should be listed first to
allow kubelet to access secrets. This seems to only impact
environments with cloudprovider enabled.
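A sketch of the resulting ordering; the `authorization_modes` list variable is an assumption here, while the underlying kube-apiserver flag is `--authorization-mode=Node,RBAC`:
```yaml
authorization_modes: ['Node', 'RBAC']   # Node listed first so kubelets can access secrets
```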
* Change raw execution to use the yum module
Changed raw execution to use the yum module provided by Ansible.
* Replace ansible_ssh_* by ansible_*
Ansible 2.0 has deprecated the “ssh” from ansible_ssh_user, ansible_ssh_host, and ansible_ssh_port to become ansible_user, ansible_host, and ansible_port. If you are using a version of Ansible prior to 2.0, you should continue using the older style variables (ansible_ssh_*). These shorter variables are ignored, without warning, in older versions of Ansible.
I am not sure about the broader impact of this change. But I have seen on the requirements the version required is ansible>=2.4.0.
http://docs.ansible.com/ansible/latest/intro_inventory.html
This role only support Red Hat type distros and is not maintained
or used by many users. It should be removed because it creates
feature disparity between supported OSes and is not maintained.
* Rename dns_server to dnsmasq_dns_server so that it includes role prefix
as the var name is generic and conflicts when integrating with existing ansible automation.
* Enable selinux state to be configurable with new var preinstall_selinux_state
PID namespace sharing is disabled only in Kubernetes 1.7.
Explicitly enabling it by default could help reduce unexpected
results when upgrading to or downgrading from 1.7.
For Vagrant we need to install python dependencies for provisioning tasks.
Check if Python and pip are installed:
```
python -V && pip -V
```
If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
Install the necessary requirements:
```
sudo pip install -r requirements.txt
vagrant up
```
Documents
---------
- [Requirements](#requirements)
- [Kubespray vs ...](docs/comparisons.md)
- [Getting started](docs/getting-started.md)
- [Ansible inventory and tags](docs/ansible.md)
- [Integration with existing ansible repo](docs/integration.md)
- [Deployment data variables](docs/vars.md)
- [DNS stack](docs/dns-stack.md)
- [HA mode](docs/ha-mode.md)
- [Network plugins](#network-plugins)
- [Vagrant install](docs/vagrant.md)
- [CoreOS bootstrap](docs/coreos.md)
- [Debian Jessie setup](docs/debian.md)
- [openSUSE setup](docs/opensuse.md)
- [Downloaded artifacts](docs/downloads.md)
- [Cloud providers](docs/cloud.md)
- [OpenStack](docs/openstack.md)
- [AWS](docs/aws.md)
- [Azure](docs/azure.md)
- [vSphere](docs/vsphere.md)
- [Large deployments](docs/large-deployments.md)
- [Upgrades basics](docs/upgrades.md)
- [Roadmap](docs/roadmap.md)
Supported Linux Distributions
-----------------------------
- **Container Linux by CoreOS**
- **Debian** Jessie, Stretch, Wheezy
- **Ubuntu** 16.04
- **CentOS/RHEL** 7
- **Fedora/CentOS** Atomic
- **openSUSE** Leap 42.3/Tumbleweed
Note: Upstart/SysV init based OS types are not supported.
Note: kubernetes doesn't support newer docker versions. Among other things, kubelet currently breaks on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the yum versionlock plugin or apt pinning.
plugins' related OS services. Also note, only one of the supported network
plugins can be deployed for a given single cluster.
Requirements
------------
- **Ansible v2.4 (or newer) and python-netaddr is installed on the machine
  that will run Ansible commands**
- **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
- The target servers must have **access to the Internet** in order to pull docker images.
- The target servers are configured to allow **IPv4 forwarding**.
- **Your ssh key must be copied** to all the servers part of your inventory.
- The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
  In order to avoid any issue during deployment you should disable your firewall.
- If kubespray is run from a non-root user account, the correct privilege escalation method
  should be configured on the target servers. Then the `ansible_become` flag
  or command parameters `--become or -b` should be specified.
Network Plugins
---------------
You can choose between 6 network plugins. (default: `calico`, except Vagrant uses `flannel`)
- [canal](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
- [contiv](docs/contiv.md): supports vlan, vxlan, bgp and Cisco SDN networking. This plugin is able to
  apply firewall policies, segregate containers in multiple networks and bridge pods onto physical networks.
- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
  (Please refer to the `weave` [troubleshooting documentation](http://docs.weave.works/weave/latest_release/troubleshooting.html).)
The choice is defined with the variable `kube_network_plugin`. There is also an
option to leverage built-in cloud provider networking instead.
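For instance, the selection boils down to a single inventory variable (sketch):
```yaml
kube_network_plugin: canal   # or calico, flannel, weave, cilium, contiv
```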
As contributors and maintainers of this project, and in the interest of fostering
an open and welcoming community, we pledge to respect all people who contribute
through reporting issues, posting feature requests, updating documentation,
submitting pull requests or patches, and other activities.
We are committed to making participation in this project a harassment-free experience for
everyone, regardless of level of experience, gender, gender identity and expression,
sexual orientation, disability, personal appearance, body size, race, ethnicity, age,
religion, or nationality.
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing other's private information, such as physical or electronic addresses,
without explicit permission
* Other unethical or unprofessional conduct.
Project maintainers have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are not
aligned to this Code of Conduct. By adopting this Code of Conduct, project maintainers
commit themselves to fairly and consistently applying these principles to every aspect
of managing this project. Project maintainers who do not follow or enforce the Code of
Conduct may be permanently removed from the project team.
This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting a Kubernetes maintainer, Sarah Novotny <sarahnovotny@google.com>, and/or Dan Kohn <dan@linuxfoundation.org>.
This Code of Conduct is adapted from the Contributor Covenant
(http://contributor-covenant.org), version 1.2.0, available at
http://contributor-covenant.org/version/1/2/0/
### Kubernetes Events Code of Conduct
Kubernetes events are working conferences intended for professional networking and collaboration in the
Kubernetes community. Attendees are expected to behave according to professional standards and in accordance
with their employer's policies on appropriate workplace behavior.
While at Kubernetes events or related social networking opportunities, attendees should not engage in
discriminatory or offensive speech or actions regarding gender, sexuality, race, or religion. Speakers should
be especially aware of these concerns.
The Kubernetes team does not condone any statements by speakers contrary to these standards. The Kubernetes
team reserves the right to deny entrance and/or eject from an event (without refund) any individual found to
be engaging in discriminatory or offensive speech or actions.
Please bring any concerns to the immediate attention of Kubernetes event staff.
You can either deploy using Ansible on its own by supplying your own inventory file ...
In the same directory of this ReadMe file you should find a file named `inventory.example` which contains an example setup. Please note that, additionally to the Kubernetes nodes/masters, we define a set of machines for GlusterFS and we add them to the group `[gfs-cluster]`, which in turn is added to the larger `[network-storage]` group as a child group.
Change that file to reflect your local setup (adding more machines or removing them and setting the adequate ip numbers), and save it to `inventory/sample/k8s_gfs_inventory`. Make sure that the settings on `inventory/sample/group_vars/all.yml` make sense with your deployment. Then change to the kubespray root folder and execute (supposing that the machines are all using ubuntu):
If your machines are not using Ubuntu, you need to change the `--user=ubuntu` to the correct user. Alternatively, if your Kubernetes machines are using one OS and your GlusterFS a different one, you can instead specify the `ansible_ssh_user=<correct-user>` variable in the inventory file that you just created, for each machine/VM:
when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined and hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb is defined
- Export the variables for your AWS credentials or edit `credentials.tfvars`:
```
export TF_VAR_AWS_ACCESS_KEY_ID="www"
export TF_VAR_AWS_SECRET_ACCESS_KEY="xxx"
export TF_VAR_AWS_SSH_KEY_NAME="yyy"
export TF_VAR_AWS_DEFAULT_REGION="zzz"
```
- Rename `contrib/terraform/aws/terraform.tfvars.example` to `terraform.tfvars`
- Allocate a new AWS Elastic IP. Use this for your `loadbalancer_apiserver_address` value (below)
- Update `contrib/terraform/aws/terraform.tfvars` with your data. By default, the Terraform scripts use CoreOS as base image. If you want to change this behaviour, see note "Using other distrib than CoreOs" below.
- Create an AWS EC2 SSH Key
- Run with `terraform apply --var-file="credentials.tfvars"` or `terraform apply` depending if you exported your AWS credentials
- Terraform automatically creates an Ansible Inventory file called `hosts` with the created infrastructure in the directory `inventory`
- Ansible will automatically generate an ssh config file for your bastion hosts. To connect to hosts with ssh through the bastion host, use the generated `ssh-bastion.conf`.
Ansible automatically detects the bastion and changes its ssh_args accordingly.
This will install a Kubernetes cluster on an OpenStack cloud. It has been tested on an
OpenStack cloud provided by [BlueBox](https://www.blueboxcloud.com/) and on OpenStack at [EMBL-EBI's](http://www.ebi.ac.uk/) [EMBASSY Cloud](http://www.embassycloud.org/),
and it should work on most modern installs of OpenStack that support the basic services.
There are some assumptions made to try and ensure it will work on your openstack cluster.
## Approach
The terraform configuration inspects variables found in
[variables.tf](variables.tf) to create resources in your OpenStack cluster.
There is a [python script](../terraform.py) that reads the generated `.tfstate`
file to generate a dynamic inventory that is consumed by the main ansible script
to actually install kubernetes and stand up the cluster.
* Floating IPs are used for access, but you can have masters and nodes that don't use floating IPs if needed. You currently need at least one floating IP, which needs to be used on a master. If using more than one, at least one should be on a master for bastions to work fine.
* you already have a suitable OS image in glance
* you already have both an internal network and a floating-ip pool created
* you have security-groups enabled
### Networking
The configuration includes creating a private subnet with a router to the
external net. It will allocate floating IPs from a pool and assign them to the
hosts where that makes sense. You have the option of creating bastion hosts
inside the private subnet to access the nodes there. Alternatively, a node with
a floating IP can be used as a jump host to nodes without.
### Kubernetes Nodes
You can create many different kubernetes topologies by setting the number of
different classes of hosts. For each class there are options for allocating
floating IP addresses or not.
- Master nodes with etcd
- Master nodes without etcd
- Standalone etcd hosts
- Kubernetes worker nodes
Note that the Ansible script will report an invalid configuration if you wind up
with an even number of etcd instances since that is not a valid configuration. This
restriction includes standalone etcd nodes that are deployed in a cluster along with
master nodes with etcd replicas. As an example, if you have three master nodes with
etcd replicas and three standalone etcd nodes, the script will fail since there are
now six total etcd replicas.
### GlusterFS
The Terraform configuration supports provisioning of an optional GlusterFS
shared file system based on a separate set of VMs. To enable this, you need to
specify:
- the number of Gluster hosts (minimum 2)
- Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks
- Other properties related to provisioning the hosts
Even if you are using Container Linux by CoreOS for your cluster, you will still
need the GlusterFS VMs to be based on either Debian or RedHat based images.
Container Linux by CoreOS cannot serve GlusterFS, but can connect to it through
binaries available on hyperkube v1.4.3_coreos.0 or higher.
- you have a pair of keys generated that can be used to secure the new hosts
## Module Architecture
The configuration is divided into three modules:
- Network
- IPs
- Compute
The main reason for splitting the configuration up in this way is to easily
accommodate situations where floating IPs are limited by a quota or if you have
any external references to the floating IP (e.g. DNS) that would otherwise have
to be updated.
You can force your existing IPs by modifying the compute variables in
`kubespray.tf` as follows:
```
k8s_master_fips = ["151.101.129.67"]
k8s_node_fips = ["151.101.129.68"]
```
## Terraform
Terraform will be used to provision all of the OpenStack resources. It is also used to deploy and provision the base software
requirements as appropriate.
### Prep
#### Inventory files
Create an inventory directory for your cluster by copying the existing sample and linking the `hosts` script (used to build the inventory based on Terraform state):
This will be the base for subsequent Terraform commands.
#### OpenStack
Ensure your OpenStack **Identity v2** credentials are loaded in environment variables. This can be done by downloading a credentials .rc file from your OpenStack dashboard and sourcing it:
> You must set **OS_REGION_NAME** and **OS_TENANT_ID** environment variables, which are not required by the openstack CLI.
You will need two networks before installing, an internal network and
an external (floating IP Pool) network. The internal network can be shared as
we use security groups to provide network segregation. Due to the many
differences between OpenStack installs the Terraform does not attempt to create
these for you.
#### OpenStack access and credentials
By default Terraform will expect that your networks are called `internal` and
`external`. You can change this by altering the Terraform variables `network_name` and `floatingip_pool`. This can be done on a new variables file or through environment variables.
No provider variables are hardcoded inside `variables.tf` because Terraform
supports various authentication methods for OpenStack: the older script and
environment method (using `openrc`) as well as a newer declarative method, and
different OpenStack environments may support Identity API version 2 or 3.
A full list of variables you can change can be found at [variables.tf](variables.tf).
These are examples and may vary depending on your OpenStack cloud provider,
for an exhaustive list on how to authenticate on OpenStack with Terraform
please read the [OpenStack provider documentation](https://www.terraform.io/docs/providers/openstack/).
All OpenStack resources will use the Terraform variable `cluster_name` (
default `example`) in their name to make it easier to track. For example the
first compute resource will be named `example-kubernetes-1`.
##### Declarative method (recommended)
The recommended authentication method is to describe credentials in a YAML file `clouds.yaml` that can be stored in:
* the current directory
* `~/.config/openstack`
* `/etc/openstack`
`clouds.yaml`:
```
clouds:
  mycloud:
    auth:
      auth_url: https://openstack:5000/v3
      username: "username"
      project_name: "projectname"
      project_id: projectid
      user_domain_name: "Default"
      password: "password"
    region_name: "RegionOne"
    interface: "public"
    identity_api_version: 3
```
If you have multiple clouds defined in your `clouds.yaml` file you can choose
the one you want to use with the environment variable `OS_CLOUD`:
```
export OS_CLOUD=mycloud
```
##### Openrc method
When using classic environment variables, Terraform uses default `OS_*`
environment variables. A script suitable for your environment may be available
from Horizon under *Project* -> *Compute* -> *Access & Security* -> *API Access*.
With identity v2:
```
source openrc
env | grep OS
OS_AUTH_URL=https://openstack:5000/v2.0
OS_PROJECT_ID=projectid
OS_PROJECT_NAME=projectname
OS_USERNAME=username
OS_PASSWORD=password
OS_REGION_NAME=RegionOne
OS_INTERFACE=public
OS_IDENTITY_API_VERSION=2
```
With identity v3:
```
source openrc
env | grep OS
OS_AUTH_URL=https://openstack:5000/v3
OS_PROJECT_ID=projectid
OS_PROJECT_NAME=username
OS_PROJECT_DOMAIN_ID=default
OS_USERNAME=username
OS_PASSWORD=password
OS_REGION_NAME=RegionOne
OS_INTERFACE=public
OS_IDENTITY_API_VERSION=3
OS_USER_DOMAIN_NAME=Default
```
Terraform does not support a mix of DomainName and DomainID, choose one or the
other:
```
* provider.openstack: You must provide exactly one of DomainID or DomainName to authenticate by Username
```
```
unset OS_USER_DOMAIN_NAME
export OS_USER_DOMAIN_ID=default
or
unset OS_PROJECT_DOMAIN_ID
set OS_PROJECT_DOMAIN_NAME=Default
```
#### Cluster variables
The construction of the cluster is driven by values found in
[variables.tf](variables.tf).
For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
|Variable | Description |
|---------|-------------|
|`cluster_name` | All OpenStack resources will use the Terraform variable `cluster_name` (default `example`) in their name to make it easier to track. For example the first compute resource will be named `example-kubernetes-1`. |
|`network_name` | The name to be given to the internal network that will be generated |
|`dns_nameservers`| An array of DNS name server names to be used by hosts in the internal subnet. |
|`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
|`external_net` | UUID of the external network that will be routed to |
|`flavor_k8s_master`,`flavor_k8s_node`,`flavor_etcd`, `flavor_bastion`,`flavor_gfs_node` | Flavor depends on your openstack installation, you can get available flavor IDs through `nova flavor-list` |
|`image`,`image_gfs` | Name of the image to use in provisioning the compute resources. Should already be loaded into glance. |
|`ssh_user`,`ssh_user_gfs` | The username to ssh into the image with. This usually depends on the image you have selected |
|`public_key_path` | Path on your local workstation to the public key file you wish to use in creating the key pairs |
|`number_of_k8s_masters`, `number_of_k8s_masters_no_floating_ip` | Number of nodes that serve as both master and etcd. These can be provisioned with or without floating IP addresses|
|`number_of_k8s_masters_no_etcd`, `number_of_k8s_masters_no_floating_ip_no_etcd` | Number of nodes that serve as just master with no etcd. These can be provisioned with or without floating IP addresses |
|`number_of_etcd` | Number of pure etcd nodes |
|`number_of_k8s_nodes`, `number_of_k8s_nodes_no_floating_ip` | Kubernetes worker nodes. These can be provisioned with or without floating ip addresses. |
|`number_of_bastions` | Number of bastion hosts to create. Scripts assume this is really just zero or one |
|`number_of_gfs_nodes_no_floating_ip` | Number of gluster servers to provision. |
| `gfs_volume_size_in_gb` | Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks |
|`supplementary_master_groups` | To add ansible groups to the masters, such as `kube-node` for tainting them as nodes, empty by default. |
#### Terraform state files
In the cluster's inventory folder, the following files might be created (either by Terraform
or manually), to prevent you from pushing them accidentally they are in a
`.gitignore` file in the `terraform/openstack` directory :
* `.terraform`
* `.tfvars`
* `.tfstate`
* `.tfstate.backup`
You can still add them manually if you want to.
### Initialization
Before Terraform can operate on your cluster you need to install the required
If you've started the Ansible run, it may also be a good idea to do some manual cleanup:
* remove SSH keys from the destroyed cluster from your `~/.ssh/known_hosts` file
* clean up any temporary cache files: `rm /tmp/$CLUSTER-*`
### Debugging
You can enable debugging output from Terraform by setting
`OS_DEBUG` to 1 and `TF_LOG` to `DEBUG` before running the Terraform command.
### Terraform output
Terraform can output values that are useful for configuring Neutron/Octavia LBaaS or Cinder persistent volume provisioning as part of your Kubernetes deployment:
- `private_subnet_id`: the subnet where your instances are running, used for `openstack_lbaas_subnet_id`
- `floating_network_id`: the network where the floating IPs are provisioned, used for `openstack_lbaas_floating_network_id`
## Ansible
### Node access
#### SSH
Ensure your local ssh-agent is running and your ssh key has been added. This
step is required by the terraform provisioner:
```
$ eval $(ssh-agent -s)
$ ssh-add ~/.ssh/id_rsa
```
If you have deployed and destroyed a previous iteration of your cluster, you will need to clear out any stale keys from your SSH "known hosts" file ( `~/.ssh/known_hosts`).
Ensure that you have your Openstack credentials loaded into Terraform
environment variables. Likely via a command similar to:
```
$ echo Setting up Terraform creds && \
  export TF_VAR_username=${OS_USERNAME} && \
  export TF_VAR_password=${OS_PASSWORD} && \
  export TF_VAR_tenant=${OS_TENANT_NAME} && \
  export TF_VAR_auth_url=${OS_AUTH_URL}
```
#### Bastion host
Bastion access will be determined by:
- Your choice on the amount of bastion hosts (set by the `number_of_bastions` terraform variable).
- The existence of nodes/masters with floating IPs (set by the `number_of_k8s_masters`, `number_of_k8s_nodes`, `number_of_k8s_masters_no_etcd` terraform variables).
If you have a bastion host, your ssh traffic will be directly routed through it. This is regardless of whether you have masters/nodes with a floating IP assigned.
If you don't have a bastion host, but at least one of your masters/nodes has a floating IP, then ssh traffic will be tunneled by one of these machines.
So, either a bastion host or at least one master/node with a floating IP is required.
#### Test access
Make sure you can connect to the hosts. Note that Container Linux by CoreOS will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.
##### Alternative: etcd inside masters
If you want to provision master or node VMs that don't use floating IPs and where etcd runs inside the masters, create a `my-terraform-vars.tfvars` file, for example:
```
number_of_k8s_masters = "1"
number_of_k8s_masters_no_floating_ip = "2"
number_of_k8s_nodes_no_floating_ip = "1"
number_of_k8s_nodes = "0"
```
This will provision one VM as master using a floating IP, two additional masters using no floating IPs (these will only have private IPs inside your tenancy) and one VM as node, again without a floating IP.
##### Alternative: etcd on separate machines
If you want to provision master or node VMs that don't use floating IPs and where **etcd is on separate nodes from the Kubernetes masters**, create a `my-terraform-vars.tfvars` file, for example:
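A sketch of such a file, using the variable names from the table above (counts chosen to match the description below):
```
number_of_k8s_masters_no_etcd = "1"
number_of_k8s_masters_no_floating_ip_no_etcd = "2"
number_of_etcd = "3"
number_of_k8s_nodes = "2"
number_of_k8s_nodes_no_floating_ip = "1"
```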
This will provision one VM as master using a floating IP, two additional masters using no floating IPs (these will only have private IPs inside your tenancy), two VMs as nodes with floating IPs, one VM as node without a floating IP and three VMs for etcd.
##### Alternative: add GlusterFS
Additionally, the Terraform-based installation now supports provisioning of a GlusterFS shared file system based on a separate set of VMs running either Debian or RedHat based images. To enable this, you need to add the following variables to your `my-terraform-vars.tfvars`:
```
# Flavour depends on your openstack installation, you can get available flavours through `nova flavor-list`
# This is the name of an image already available in your openstack installation.
image_gfs = "Ubuntu 15.10"
number_of_gfs_nodes_no_floating_ip = "3"
# This is the size of the non-ephemeral volumes to be attached to store the GlusterFS bricks.
gfs_volume_size_in_gb = "50"
# The user needed for the image chosen for GlusterFS.
ssh_user_gfs = "ubuntu"
```
If these variables are provided, this will give rise to a new Ansible group called `gfs-cluster`, for which we have added Ansible roles to execute in the Ansible provisioning step. If you are using Container Linux by CoreOS, these GlusterFS VMs necessarily need to be either Debian or RedHat based VMs; Container Linux by CoreOS cannot serve GlusterFS, but it can connect to it through binaries available in hyperkube v1.4.3_coreos.0 or higher.
If you choose to add masters or nodes without floating IPs (only internal IPs on your OpenStack tenancy), this script will also create a file `contrib/terraform/openstack/k8s-cluster.yml` with an SSH command for Ansible to be able to access your machines by tunneling through the first floating IP used. If you want to handle the SSH tunneling to these machines manually, please delete or move that file. If you want to use it, just leave it there, as Ansible will pick it up automatically.
Make sure you can connect to the hosts:
```
$ ansible -i contrib/terraform/openstack/hosts -m ping all
```
If you are deploying a system that needs bootstrapping, like Container Linux by CoreOS, these hosts might have a state `FAILED` due to Container Linux by CoreOS not having Python. As long as the state is not `UNREACHABLE`, this is fine.
If it fails, try to connect manually via SSH. It could be something as simple as a stale host key.
Deploy kubernetes:
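A typical invocation, assuming the Terraform-generated inventory used above, is:
```
$ ansible-playbook --become -i contrib/terraform/openstack/hosts cluster.yml
```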
### Configure cluster variables
Edit `inventory/$CLUSTER/group_vars/all.yml`:
- Set variable **bootstrap_os** appropriately for your desired image:
Try out your new Kubernetes cluster with the [Hello Kubernetes service](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/).
To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`.
Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/aws_iam/). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for Etcd do not need a role.
You would also need to tag the resources in your VPC accordingly for the aws provider to utilize them. Tag the subnets, route tables and all instances that kubernetes will be run on with key `kubernetes.io/cluster/$cluster_name` (`$cluster_name` must be a unique identifier for the cluster). Tag the subnets that must be targeted by external ELBs with the key `kubernetes.io/role/elb` and internal ELBs with the key `kubernetes.io/role/internal-elb`.
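For example, with the AWS CLI you could tag a public subnet for a cluster named `my-cluster` like this (the resource ID and tag values are illustrative):
```
aws ec2 create-tags --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared \
         Key=kubernetes.io/role/elb,Value=1
```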
Make sure your VPC has both DNS Hostnames support and Private DNS enabled.
You can deploy instances in your cloud environment in several different ways. Examples include Terraform, Ansible (ec2 and gce modules), and manual creation.
Here is the [Contiv documentation](http://contiv.github.io/documents/).
## Administrate Contiv
There are two ways to manage Contiv:
* a web UI managed by the api proxy service
* a CLI named `netctl`
### Interfaces
#### The Web Interface
This UI is hosted on all kubernetes master nodes. The service is available at `https://<one of your master node>:10000`.
You can configure the api proxy by overriding the following variables:
```yaml
contiv_enable_api_proxy: true
contiv_api_proxy_port: 10000
contiv_generate_certificate: true
```
The default credentials to log in are: admin/admin.
#### The Command Line Interface
The second way to modify the Contiv configuration is to use the CLI. To do this, you have to connect to the server and export an environment variable to tell netctl how to connect to the cluster:
```bash
export NETMASTER=http://127.0.0.1:9999
```
The port can be changed by overriding the following variable:
```yaml
contiv_netmaster_port: 9999
```
The CLI doesn't use the authentication process needed by the web interface.
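Once `NETMASTER` is exported, you can query the cluster directly, for example:
```bash
# List the networks known to the netmaster
netctl network ls
```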
### Network configuration
The default configuration uses VXLAN to create an overlay. Two networks are created by default:
* `contivh1`: an infrastructure network. It allows nodes to access the pod IPs. It is mandatory in a Kubernetes environment that uses VXLAN.
* `default-net`: the default network that hosts pods.
You can change the default network configuration by overriding the `contiv_networks` variable.
The default forward mode is set to routing:
```yaml
contiv_fwd_mode: routing
```
The following is an example of how you can use VLAN instead of VXLAN:
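A minimal sketch of such an override; the exact keys accepted by the `contiv_networks` entries should be checked against the role defaults in your Kubespray version (the values below are illustrative):
```yaml
# Switch the forward mode from the default "routing" to bridged VLANs
contiv_fwd_mode: bridge
# Override the default networks; name/subnet are shown as an assumed minimal set of keys
contiv_networks:
  - name: default-net
    subnet: "10.233.64.0/18"
```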
See more details in the [ansible guide](ansible.md).
Adding nodes
------------
You may want to add worker, master or etcd nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.
- Add the new worker node to your inventory in the appropriate group (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Run the ansible-playbook command, substituting `scale.yml` for `cluster.yml`:
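For example (the inventory path is illustrative):
```
$ ansible-playbook -i inventory/mycluster/hosts.ini -b scale.yml
```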
You may want to remove **worker** nodes from your existing cluster. This can be done by re-running the `remove-node.yml` playbook. First, the nodes will be drained, then some Kubernetes services will be stopped and some certificates deleted, and finally the kubectl command to delete these nodes will be executed. This can be combined with the add-node function; it is generally helpful when doing something like autoscaling your clusters. Of course, if a node is not working, you can remove it and install it again.
Add the worker nodes you want to delete to your inventory under kube-node (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
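Again, something along these lines (inventory path illustrative):
```
$ ansible-playbook -i inventory/mycluster/hosts.ini -b remove-node.yml
```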
By default, Kubespray configures kube-master hosts with insecure access to
kube-apiserver via port 8080. A kubeconfig file is not necessary in this case,
because kubectl will use <http://localhost:8080> to connect. The kubeconfig files
generated will point to localhost (on kube-masters) and kube-node hosts will
connect either to a localhost nginx proxy or to a loadbalancer if configured.
More details on this process are in the [HA guide](ha-mode.md).
Kubespray permits connecting to the cluster remotely on any IP of any
kube-master host on port 6443 by default. However, this requires
authentication. One could generate a kubeconfig based on one installed
kube-master hosts (needs improvement) or connect with a username and password.
By default, a user with admin rights is created, named `kube`.
The password can be viewed after deployment by looking at the file
`PATH_TO_KUBESPRAY/credentials/kube_user.creds`. This contains a randomly generated
password. If you wish to set your own password, just precreate/modify this
file yourself.
For more information on kubeconfig and accessing a Kubernetes cluster, refer to
the Kubernetes [documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
Accessing Kubernetes Dashboard
------------------------------
As of kubernetes-dashboard v1.7.x:
- New login options that use apiserver auth proxying of token/basic/kubeconfig by default
- Requires RBAC in `authorization_modes`
- Only serves over https
- No longer available at <https://first_master:6443/ui> until apiserver is updated with the https proxy URL
If the variable `dashboard_enabled` is set (default is true), then you can access the Kubernetes Dashboard through the apiserver proxy URL for the `kubernetes-dashboard` service (see the link below for the exact path and other access options). You will be prompted for credentials.
To see the password, refer to the section above, titled *Connecting to
Kubernetes*. The host can be any kube-master or kube-node or loadbalancer
(when enabled).
It is recommended to access the dashboard from behind a gateway (like an Ingress Controller) that enforces an authentication token. Details and other access options here: <https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above>
Accessing Kubernetes API
------------------------
The main client of Kubernetes is `kubectl`. It is installed on each kube-master
host and can optionally be configured on your ansible host by setting
`kubectl_localhost: true` and `kubeconfig_localhost: true` in the configuration:
- If `kubectl_localhost` is enabled, `kubectl` will be downloaded onto `/usr/local/bin/` and set up with bash completion. A helper script `inventory/mycluster/artifacts/kubectl.sh` is also created for use with the `admin.conf` below.
- If `kubeconfig_localhost` is enabled, `admin.conf` will appear in the `inventory/mycluster/artifacts/` directory after deployment.
If desired, copy kubectl to your bin dir and admin.conf to ~/.kube/config.
You can see a list of nodes by running the following commands:
```
cd artifacts/
./kubectl --kubeconfig admin.conf get nodes
```
7. Modify the path to library and roles in your ansible.cfg file (role naming should be unique; you may have to rename your existing roles if they have the same names as in the kubespray project):
```
...
library = 3d/kubespray/library/
roles_path = 3d/kubespray/roles/
...
```
You could rename the *all.yml* config to something else, i.e. *kubespray.yml*.
10. Now you can include kargo tasks in your existing playbooks by including the cluster.yml file:
```
- name: Include kargo tasks
include: 3d/kubespray/cluster.yml
```
Or you could copy separate tasks from cluster.yml into your ansible repository.
Other members of your team should use ```git submodule sync``` and ```git submodule update``` to bring their submodule to the committed state.
# Contributing
If you made useful changes or fixed a bug in the existing kubespray repo, use this flow for PRs to the original kubespray repo.
0. Sign the [CNCF CLA](https://git.k8s.io/community/CLA.md).
1. Change working directory to git submodule directory (3d/kubespray).
- the playbook would install and configure docker/rkt and the etcd cluster
- the following data would be inserted into etcd: certs,tokens,users,inventory,group_vars.
- a "kubespray" container would be deployed (kubespray-cli, ansible-playbook, kpm)
- a "kubespray" container would be deployed (kubespray-cli, ansible-playbook)
- to be discussed, a way to provide the inventory
- **self deployment** of the node from inside a container [#321](https://github.com/kubespray/kubespray/issues/321)
### Provisioning and cloud providers
- [ ] Terraform to provision instances on **GCE, AWS, Openstack, Digital Ocean, Azure**
- [ ] On AWS autoscaling, multi AZ
- [ ] On Azure autoscaling, create loadbalancer [#297](https://github.com/kubespray/kubespray/issues/297)
- [ ] On GCE be able to create a loadbalancer automatically (IAM ?) [#280](https://github.com/kubespray/kubespray/issues/280)
- [x] **TLS bootstrap** support for kubelet (covered by kubeadm, but not in standard deployment) [#234](https://github.com/kubespray/kubespray/issues/234)
* *kubelet_cgroup_driver* - Allows manual override of the
cgroup-driver option for Kubelet. By default autodetection is used
to match Docker configuration.
* *node_labels* - Labels applied to nodes via kubelet --node-labels parameter.
For example, labels can be set in the inventory as variables or more widely in group_vars.
*node_labels* must be defined as a dict:
```
node_labels:
label1_name: label1_value
label2_name: label2_value
```
##### Custom flags for Kube Components
For all kube components, custom flags can be passed in. This allows for edge cases where users need changes to the default deployment that may not be applicable to all deployments. This can be done by providing a list of flags. Example:
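For instance, a minimal sketch using `kubelet_custom_flags` (assuming this is one of the supported `*_custom_flags` variables in your Kubespray version):
```
kubelet_custom_flags:
  - "--eviction-hard=memory.available<100Mi"
  - "--eviction-soft-grace-period=memory.available=30s"
  - "--eviction-soft=memory.available<300Mi"
```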
| vsphere_vm_uuid | FALSE | string | | | VM Instance UUID of the virtual machine that hosts the K8s master. Can be retrieved from the instanceUuid property in VmConfigInfo, or as vc.uuid in the VMX file or in `/sys/class/dmi/id/product_serial` (Optional, only used for Kubernetes <= 1.9.2) |
| vsphere_public_network | FALSE | string | | Blank | Name of the network the VMs are joined to |
| vsphere_resource_pool | FALSE | string | | Blank | Name of the Resource pool where the VMs are located (Optional, only used for Kubernetes >= 1.9.2) |
Weave encryption is supported for all communication
* To use Weave encryption, specify a strong password (if no password, no encryption)
```
# In file ./inventory/sample/group_vars/k8s-cluster.yml
weave_password: EnterPasswordHere
```
The seed mode also allows multi-cloud and hybrid on-premise/cloud cluster deployments.
* Switch from consensus mode to seed mode
```
# In file ./inventory/sample/group_vars/k8s-cluster.yml
weave_mode_seed: true
```
These two variables are only used when `weave_mode_seed` is set to `true` (**/!\ do not manually change these values**)
```
# In file ./inventory/sample/group_vars/k8s-cluster.yml
weave_seed: uninitialized
weave_peers: uninitialized
```
The first variable, `weave_seed`, contains the initial nodes of the weave network.
The second variable, `weave_peers`, saves the IPs of all nodes joined to the weave network.
These two variables are used to connect a new node to the weave network. The new node needs to know the first nodes (seed) and the list of IPs of all nodes.
## for mounting persistent volumes into containers. These may not be loaded by preinstall kubernetes
## processes. For example, ceph and rbd backed volumes. Set to true to allow kubelet to load kernel
## modules.
#kubelet_load_modules: false
## Internal network total size. This is the prefix of the
## entire network. Must be unused in your environment.
bin_dir: /usr/local/bin
## There are some changes specific to the cloud providers
## for instance we need to encapsulate packets with some network plugins
## If set the possible values are either 'gce', 'aws', 'azure', 'openstack', 'vsphere', or 'external'
## When openstack is used make sure to source in the openstack credentials
## like you would do when using nova-client before starting the playbook.
#cloud_provider:
#azure_subnet_name:
#azure_security_group_name:
#azure_vnet_name:
#azure_vnet_resource_group:
#azure_route_table_name:
## When OpenStack is used, Cinder version can be explicitly specified if autodetection fails (Fixed in 1.9: https://github.com/kubernetes/kubernetes/issues/50461)
{# The default resolver is only needed when the host's resolv.conf was modified by us. If it was not modified, we can rely on dnsmasq to reuse the system's resolv.conf #}