Docs improvements (#7660)

* Docs: update sidebar

* Docs: move registry documentation into docs/

* Docs: move rbd_provisioner documentation into docs/

* Docs: move cephfs_provisioner into docs/

* Docs: move local_volume_provisioner documentation into docs/

* Docs: move ambassador.md to docs/ingress_controller/

* Docs: move metallb.md to docs/ingress_controller/

* Docs: move ingress_nginx documentation into docs/

* Docs: move alb_ingress_controller documentation into docs/

* Docs: merge ambassador documentation into docs/ingress_controller/

* Docs: move cert_manager documentation into docs/

* Docs: move bootstrap-os documentation into docs/

* Docs: update file locations in sidebar
Author: Cristian Calin
Date: 2021-06-01 17:30:27 +03:00 (committed by GitHub)
Parent: 4674b03661
Commit: 6a2ea94b39
12 changed files with 46 additions and 91 deletions


@@ -0,0 +1,43 @@
# AWS ALB Ingress Controller
**NOTE:** The current image version is `v1.1.6`. Please file any issues you find and note the version used.
The AWS ALB Ingress Controller satisfies Kubernetes [ingress resources](https://kubernetes.io/docs/user-guide/ingress) by provisioning [Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html).
This project was originated by [Ticketmaster](https://github.com/ticketmaster) and [CoreOS](https://github.com/coreos) as part of Ticketmaster's move to AWS and CoreOS Tectonic. Learn more about Ticketmaster's Kubernetes initiative from Justin Dean's video at [Tectonic Summit](https://www.youtube.com/watch?v=wqXVKneP0Hg).
This project was donated to Kubernetes SIG-AWS to allow AWS, CoreOS, Ticketmaster and other SIG-AWS contributors to officially maintain the project. SIG-AWS reached this consensus on June 1, 2018.
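For illustration, a minimal `Ingress` that this controller would act on could look like the sketch below. It is not part of this addon itself; the `echoserver` backend and the `internet-facing` scheme are placeholders.
```yaml
# Illustrative sketch: an Ingress picked up by the ALB Ingress Controller.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echoserver
  annotations:
    kubernetes.io/ingress.class: alb
    # Placeholder scheme; use "internal" for a non-public ALB.
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: echoserver
          servicePort: 80
```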
## Documentation
Check out our [Live Docs](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/)!
## Getting started
To get started with the controller, see our [walkthrough](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/walkthrough/echoserver/).
## Setup
- See [controller setup](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/controller/setup/) for how to install the ALB Ingress Controller
- See [external-dns setup](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/external-dns/setup/) for how to set up external-dns to manage Route 53 records.
## Building
For details on building this project, see our [building guide](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/BUILDING/).
## Community, discussion, contribution, and support
Learn how to engage with the Kubernetes community on the [community page](http://kubernetes.io/community/).
You can reach the maintainers of this project at:
- [Slack channel](https://kubernetes.slack.com/messages/sig-aws)
- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-aws)
### Code of conduct
Participation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](code-of-conduct.md).
## License
[![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2Fcoreos%2Falb-ingress-controller.svg?type=large)](https://app.fossa.io/projects/git%2Bgithub.com%2Fcoreos%2Falb-ingress-controller?ref=badge_large)


@@ -0,0 +1,97 @@
# Ambassador
The [Ambassador API Gateway](https://github.com/datawire/ambassador) provides all the functionality of a traditional ingress controller
(e.g., path-based routing) while exposing many additional capabilities such as authentication,
URL rewriting, CORS, rate limiting, and automatic metrics collection.
## Installation
### Configuration
* `ingress_ambassador_namespace` (default `ambassador`): namespace for installing Ambassador.
* `ingress_ambassador_update_window` (default `0 0 * * SUN`): _crontab_-like expression
for specifying when the Operator should try to update the Ambassador API Gateway.
* `ingress_ambassador_version` (default: `*`): SemVer rule for versions allowed for
installation/updates.
* `ingress_ambassador_secure_port` (default: 443): HTTPS port to listen on.
* `ingress_ambassador_insecure_port` (default: 80): HTTP port to listen on.
* `ingress_ambassador_multi_namespaces` (default `false`): By default, Ambassador will only
watch the `ingress_ambassador_namespace` namespace for `AmbassadorInstallation` CRD resources.
When set to `true`, this value will tell the Ambassador Operator to watch **all** namespaces
for CRDs. If you want to run multiple Ambassador ingress instances, set this to `true`.
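For example, these variables can be overridden from your inventory's group vars, as in the sketch below. The file path and the `ingress_ambassador_enabled` flag are assumptions; the remaining keys are the variables documented above, with illustrative values.
```yaml
# e.g. group_vars/k8s_cluster/addons.yml (path is illustrative)
ingress_ambassador_enabled: true              # assumed enable flag for this addon
ingress_ambassador_namespace: ambassador
ingress_ambassador_version: "*"
ingress_ambassador_update_window: "0 0 * * SUN"
ingress_ambassador_secure_port: 443
ingress_ambassador_insecure_port: 80
ingress_ambassador_multi_namespaces: false
```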
### Ingress annotations
The Ambassador API Gateway will automatically load balance `Ingress` resources
that include the annotation `kubernetes.io/ingress.class=ambassador`. All other
`Ingress` resources are ignored.
### Ambassador Operator
This Ambassador addon deploys the Ambassador Operator, which in turn will install
the [Ambassador API Gateway](https://github.com/datawire/ambassador) in
a Kubernetes cluster.
The Ambassador Operator is a Kubernetes Operator that controls Ambassador's complete lifecycle
in your cluster, automating many of the repeatable tasks you would otherwise have to perform
yourself. Once installed, the Operator will complete installations and seamlessly upgrade to new
versions of Ambassador as they become available.
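Once the addon has run, the Operator-driven installation can be inspected; a quick sketch, assuming the Operator's `ambassadorinstallations` resource name and the default namespace:
```console
# Both the custom resource name and the namespace are assumptions; adjust to your setup.
kubectl -n ambassador get ambassadorinstallations
kubectl -n ambassador get pods
```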
## Usage
The following example creates a simple http-echo service and an `Ingress` object
that routes to it.
Note well that the [Ambassador API Gateway](https://github.com/datawire/ambassador) will automatically load balance `Ingress` resources
that include the annotation `kubernetes.io/ingress.class=ambassador`. All other
resources are ignored.
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: foo-app
  labels:
    app: foo
spec:
  containers:
  - name: foo-app
    image: hashicorp/http-echo
    args:
    - "-text=foo"
---
kind: Service
apiVersion: v1
metadata:
  name: foo-service
spec:
  selector:
    app: foo
  ports:
  # Default port used by the image
  - port: 5678
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: ambassador
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: foo-service
          servicePort: 5678
```
Now you can test that the ingress is working with curl:
```console
$ export AMB_IP=$(kubectl get service ambassador -n ambassador -o 'go-template={{range .status.loadBalancer.ingress}}{{print .ip "\n"}}{{end}}')
$ curl $AMB_IP/foo
foo
```


@@ -0,0 +1,203 @@
# Installation Guide
## Contents
- [Prerequisite Generic Deployment Command](#prerequisite-generic-deployment-command)
  - [Provider Specific Steps](#provider-specific-steps)
    - [Docker for Mac](#docker-for-mac)
    - [minikube](#minikube)
    - [AWS](#aws)
    - [GCE - GKE](#gce-gke)
    - [Azure](#azure)
    - [Bare-metal](#bare-metal)
  - [Verify installation](#verify-installation)
  - [Detect installed version](#detect-installed-version)
- [Using Helm](#using-helm)
## Prerequisite Generic Deployment Command
!!! attention
The default configuration watches Ingress objects in *all namespaces*.
To change this behavior use the flag `--watch-namespace` to limit the scope to a particular namespace.
!!! warning
If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions.
!!! attention
If you're using GKE you need to initialize your user as a cluster-admin with the following command:
```console
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
```
The following **Mandatory Command** is required for all deployments except for AWS. See below for the AWS version.
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.40.2/deploy/static/provider/cloud/deploy.yaml
```
### Provider Specific Steps
There are cloud-provider-specific YAML files.
#### Docker for Mac
Kubernetes is available in Docker for Mac (from [version 18.06.0-ce](https://docs.docker.com/docker-for-mac/release-notes/#stable-releases-of-2018))
First you need to [enable Kubernetes](https://docs.docker.com/docker-for-mac/#kubernetes).
Then you have to create a service:
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
```
#### minikube
For standard usage:
```console
minikube addons enable ingress
```
For development:
1. Disable the ingress addon:
```console
minikube addons disable ingress
```
1. Execute `make dev-env`
1. Confirm the `nginx-ingress-controller` deployment exists:
```console
$ kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS    RESTARTS   AGE
default-http-backend-66b447d9cf-rrlf9      1/1     Running   0          12s
nginx-ingress-controller-fdcdcd6dd-vvpgs   1/1     Running   0          11s
```
#### AWS
In AWS we use an Elastic Load Balancer (ELB) to expose the NGINX Ingress controller behind a Service of `Type=LoadBalancer`.
Since Kubernetes v1.9.0 it is possible to use a Classic Load Balancer (ELB) or a Network Load Balancer (NLB).
Please check the [Elastic Load Balancing AWS details page](https://aws.amazon.com/elasticloadbalancing/details/).
##### Elastic Load Balancer - ELB
This setup requires choosing in which layer (L4 or L7) we want to configure the load balancer:
- [Layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_Layer): use a Network Load Balancer (NLB) with TCP as the listener protocol for ports 80 and 443.
- [Layer 7](https://en.wikipedia.org/wiki/OSI_model#Layer_7:_Application_Layer): use an Elastic Load Balancer (ELB) with HTTP as the listener protocol for port 80 and terminate TLS in the ELB.
For L4:
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/deploy.yaml
```
For L7:
Change the value of `service.beta.kubernetes.io/aws-load-balancer-ssl-cert` in the file `provider/aws/deploy-tls-termination.yaml`, replacing the dummy ID with a valid certificate ARN. The dummy value is `"arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"`.
Check whether any change is necessary with regard to the ELB idle timeout. In some scenarios, users may want to modify the ELB idle timeout, so please check the [ELB Idle Timeouts section](#elb-idle-timeouts) for additional information. If a change is required, update the value of `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout` in `provider/aws/deploy-tls-termination.yaml`.
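For reference, both annotations live on the controller `Service` inside that manifest. A trimmed, illustrative excerpt (the Service name and namespace shown here are assumptions; check your downloaded copy of `deploy-tls-termination.yaml`):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller     # assumed name; verify against the manifest
  namespace: ingress-nginx
  annotations:
    # Replace the dummy ARN with the ARN of your ACM certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"
    # Optional: keep this below the NGINX keepalive_timeout (75s by default).
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
spec:
  type: LoadBalancer
```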
Then execute:
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/deploy-tls-termination.yaml
```
This example creates an ELB with just two listeners, one on port 80 and another on port 443.
![Listeners](https://github.com/kubernetes/ingress-nginx/raw/master/docs/images/elb-l7-listener.png)
##### ELB Idle Timeouts
In some scenarios users will need to modify the value of the ELB idle timeout.
Users need to ensure the idle timeout is less than the [keepalive_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) that is configured for NGINX.
By default NGINX `keepalive_timeout` is set to `75s`.
The default ELB idle timeout will work for most scenarios, unless the NGINX [keepalive_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) has been modified,
in which case `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout` will need to be modified to ensure it is less than the `keepalive_timeout` the user has configured.
_Please Note: An idle timeout of `3600s` is recommended when using WebSockets._
More information with regards to idle timeouts for your Load Balancer can be found in the [official AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html).
##### Network Load Balancer (NLB)
This type of load balancer is supported since v1.10.0 as an ALPHA feature.
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/service-nlb.yaml
```
#### GCE-GKE
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
```
**Important Note:** proxy protocol is not supported in GCE/GKE.
#### Azure
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
```
#### Bare-metal
Using [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport):
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
```
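With a NodePort Service the controller is reached on every node's IP at the node ports Kubernetes allocates; you can look them up with:
```console
kubectl -n ingress-nginx get services
```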
!!! tip
For extended notes regarding deployments on bare-metal, see [Bare-metal considerations](./baremetal.md).
### Verify installation
To check if the ingress controller pods have started, run the following command:
```console
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
```
Once the ingress controller pods are running, you can cancel the above command by typing `Ctrl+C`.
Now, you are ready to create your first ingress.
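For example, a first `Ingress` for an existing service could look like the sketch below. It is illustrative only: `my-service`, the host name and the `networking.k8s.io/v1` API version (which requires Kubernetes v1.19+) are assumptions.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-first-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service     # placeholder backend service
            port:
              number: 80
```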
### Detect installed version
To detect which version of the ingress controller is running, exec into the pod and run the `/nginx-ingress-controller --version` command:
```console
POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/component=controller -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
```
## Using Helm
NGINX Ingress controller can be installed via [Helm](https://helm.sh/) using the chart [ingress-nginx/ingress-nginx](https://kubernetes.github.io/ingress-nginx).
The official documentation is available [here](https://kubernetes.github.io/ingress-nginx/deploy/#using-helm).
To install the chart with the release name `my-nginx`:
```console
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install my-nginx ingress-nginx/ingress-nginx
```
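Chart values can be overridden at install time. For example, to expose the controller through a NodePort service (the `controller.service.type` key is an assumption; verify it against the chart's values):
```console
# List the chart's configurable values first.
helm show values ingress-nginx/ingress-nginx
# Example override; verify the key against the output above.
helm install my-nginx ingress-nginx/ingress-nginx --set controller.service.type=NodePort
```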
Detect installed version:
```console
POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -- /nginx-ingress-controller --version
```


@@ -0,0 +1,81 @@
# MetalLB
MetalLB hooks into your Kubernetes cluster and provides a network load-balancer implementation.
It allows you to create Kubernetes services of type "LoadBalancer" in clusters that don't run on a cloud provider, and thus cannot simply hook into 3rd-party products to provide load balancers.
The default operating mode of MetalLB is ["Layer 2"](https://metallb.universe.tf/concepts/layer2/), but it can also operate in ["BGP"](https://metallb.universe.tf/concepts/bgp/) mode.
## Install
You have to explicitly enable the MetalLB extension and set an IP address range from which to allocate LoadBalancer IPs.
```yaml
metallb_enabled: true
metallb_speaker_enabled: true
metallb_ip_range:
- 10.5.0.0/16
```
By default only the MetalLB BGP speaker is allowed to run on control plane nodes. If you have a single-node cluster or a cluster where control plane nodes are also worker nodes, you may need to enable tolerations for the MetalLB controller:
```yaml
metallb_controller_tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Equal"
  value: ""
  effect: "NoSchedule"
- key: "node-role.kubernetes.io/control-plane"
  operator: "Equal"
  value: ""
  effect: "NoSchedule"
```
## BGP Mode
When operating in BGP mode, MetalLB needs its upstream peers defined:
```yaml
metallb_protocol: bgp
metallb_ip_range:
- 10.5.0.0/16
metallb_peers:
- peer_address: 192.0.2.1
  peer_asn: 64512
  my_asn: 4200000000
- peer_address: 192.0.2.2
  peer_asn: 64513
  my_asn: 4200000000
```
When using Calico >= 3.18 you can replace the MetalLB speaker with Calico service LoadBalancer IP advertisement.
See [calico service IPs advertisement documentation](https://docs.projectcalico.org/archive/v3.18/networking/advertise-service-ips#advertise-service-load-balancer-ip-addresses).
In this scenario you should disable the MetalLB speaker and configure `calico_advertise_service_loadbalancer_ips` to match your `metallb_ip_range`:
```yaml
metallb_speaker_enabled: false
metallb_ip_range:
- 10.5.0.0/16
calico_advertise_service_loadbalancer_ips: "{{ metallb_ip_range }}"
```
If you have additional load balancer IP pools defined in `metallb_additional_address_pools`, make sure to add them to the list:
```yaml
metallb_speaker_enabled: false
metallb_ip_range:
- 10.5.0.0/16
metallb_additional_address_pools:
  kube_service_pool_1:
    ip_range:
    - 10.6.0.0/16
    protocol: "bgp"
    auto_assign: false
  kube_service_pool_2:
    ip_range:
    - 10.10.0.0/16
    protocol: "bgp"
    auto_assign: false
calico_advertise_service_loadbalancer_ips:
- 10.5.0.0/16
- 10.6.0.0/16
- 10.10.0.0/16
```
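Once MetalLB is running, any `Service` of type `LoadBalancer` is assigned an address from the configured range. A minimal sketch, with `my-app` as a placeholder:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer     # MetalLB allocates the external IP, e.g. from 10.5.0.0/16 above
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080     # placeholder backend port
```
The allocated address then shows up in the `EXTERNAL-IP` column of `kubectl get service my-app`.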