Compare commits

...

33 Commits

Author SHA1 Message Date
Anshuman Agarwala
63cdf87915 Removed equinix provider (#12229) 2025-05-20 03:53:15 -07:00
Max Gautier
175babc4df Move some approvers to emeritus (#12156)
Thanks for your work!
2025-05-20 03:11:17 -07:00
Ekko
6c5c45b328 Allow stopping ubuntu unattended-upgrades (#12174)
Signed-off-by: Ekko Tu <lihai.tu@daocloud.io>
2025-05-20 01:07:16 -07:00
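
A minimal sketch of how the new toggle (introduced in the defaults/tasks diff further down) might be set next to the existing knob; the group_vars placement is an assumption:

```yml
# group_vars/all/all.yml (placement assumed for illustration)
ubuntu_kernel_unattended_upgrades_disabled: true   # existing knob: skip kernel/linux-* upgrades
ubuntu_stop_unattended_upgrades: true              # new toggle: stop and disable the running service
```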
Kubernetes Prow Robot
019cf2ab42 Merge pull request #12101 from tico88612/refactor/cilium-install
Refactor Cilium CNI installation
2025-05-20 01:01:15 -07:00
dependabot[bot]
571e747689 build(deps): bump cryptography from 44.0.3 to 45.0.2 (#12235)
Bumps [cryptography](https://github.com/pyca/cryptography) from 44.0.3 to 45.0.2.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/44.0.3...45.0.2)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 45.0.2
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-19 07:21:15 -07:00
ChengHao Yang
1266527014 Add cilium cli binary hash before 0.18.3
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:15 +08:00
ChengHao Yang
5e2e63ebe3 Make cilium dnsProxy transparent mode configurable
When Cilium is configured to replace kube-proxy, it automatically
enables dnsProxy, which can conflict with nodelocaldns.
2025-05-19 08:48:15 +08:00
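
A hedged sketch of the upstream Cilium Helm values this commit makes configurable; the key names below are assumptions based on the Cilium chart and may not match the Kubespray variable exactly:

```yml
# Sketch of Cilium Helm values, not Kubespray variables; key names assumed from
# the Cilium chart. With kube-proxy replacement enabled, Cilium turns on its DNS
# proxy; disabling transparent mode is one way to avoid clashing with nodelocaldns.
kubeProxyReplacement: true
dnsProxy:
  enableTransparentMode: false
```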
ChengHao Yang
db290ca686 Add cilium gateway api support
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:15 +08:00
ChengHao Yang
6619d98682 Add cilium hubble export dynamic content
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:15 +08:00
ChengHao Yang
b771d73fe0 Add cilium hubble export file max backups & size mb
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:15 +08:00
ChengHao Yang
65751e8193 Add cilium operator tolerations default values
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:15 +08:00
ChengHao Yang
4c16fc155f Cilium values k8sServiceHost and k8sServicePort use auto
Signed-off-by: ChengHao Yang
<17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:14 +08:00
ChengHao Yang
dcd3461bce Cilium values use image variables
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:14 +08:00
ChengHao Yang
48f75c2c2b Upgrade Cilium related images
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:14 +08:00
ChengHao Yang
a4b73c09a7 Upgrade cilium version to 1.17.3
Signed-off-by: ChengHao Yang
<17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:14 +08:00
ChengHao Yang
af62570110 Change cilium_kube_proxy_replacement to true for CI tests
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:14 +08:00
ChengHao Yang
bebba47eb4 Change kube_owner to root for cilium CI test
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:14 +08:00
ChengHao Yang
86437730de Use cilium-cli to install Cilium
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:14 +08:00
ChengHao Yang
6fe64323db Remove old cilium templates install
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:13 +08:00
ChengHao Yang
1e471d5eeb Upgrade outdated cilium_min_version_required
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:11 +08:00
Max Gautier
3a2862ea19 Move checksums to kubespray_defaults/vars (#12234)
The checksums are not defaults and are not meant to be changed from
the inventories.

Furthermore, role defaults have a lower priority than host facts, which
technically means a rogue host could hijack the hashes for its
variables.
2025-05-18 16:13:14 -07:00
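
As a concrete illustration of the precedence argument (the exact file name is an assumption; the checksum value is taken from the diff below): role vars outrank gathered host facts, while role defaults do not.

```yml
# roles/kubespray_defaults/vars/main.yml -- sketch, exact file name assumed.
# Role vars rank above gathered host facts in Ansible precedence, so a node
# injecting a like-named fact can no longer shadow the checksum; a role
# *default* with the same name could have been shadowed.
kubectl_checksums:
  amd64:
    1.32.5: sha256:aaa7e6ff3bd28c262f2d95c8c967597e097b092e9b79bcb37de699e7488e3e7b
```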
Jay.H
8a4f4d13f7 fix manage-offline-container-images.sh create_registry (#11964) 2025-05-17 07:25:13 -07:00
ErmolenkoMaxim
46a0dc9a51 Add support for hubble-export-file-max-backups and max-size-mb variables (#12072)
* feat(cilium): add configurable Hubble export log rotation parameters

- Adds support for `cilium_hubble_export_file_max_backups` and `cilium_hubble_export_file_max_size_mb`
- Applies values only if `cilium_hubble_export_file_path` is defined
- Default values are set in role defaults
- Cleans up template logic by removing unnecessary conditionals

* Fix indentation for hubble export settings

* Fix undefined variable issue with ipwrap in kubeconfig override that caused pre-commit errors

* Update main.yml

rollback
2025-05-17 00:35:13 -07:00
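
A possible inventory snippet using the two variables named above; only the variable names come from the change, the file path and values are illustrative:

```yml
# group_vars/k8s_cluster/k8s-net-cilium.yml (path and values illustrative)
cilium_enable_hubble: true
cilium_hubble_export_file_path: /var/run/cilium/hubble/events.log
cilium_hubble_export_file_max_backups: 5      # rotated export files to keep
cilium_hubble_export_file_max_size_mb: 10     # rotate once a file reaches this size
```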
Max Gautier
faae36086c Patch versions updates (#12226)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-16 14:13:14 -07:00
ERIK
e4c0c427a3 improve NTP package conflict handling (#12212)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2025-05-16 03:55:14 -07:00
Max Gautier
bca5a4ce3b CI: remove ci-not-authorized job (#12225)
This is now handled directly at the failfast-ci level (i.e. the GitHub <-> GitLab integration).
The whole pipeline will not be triggered unless:
- The author is a maintainer
- The PR has the /ok-to-test label
2025-05-16 03:27:13 -07:00
Antoine Legrand
5c07c6e6d3 Add option to [not] install coredns via Kubespray (#12218) 2025-05-16 03:23:13 -07:00
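
A minimal sketch of opting out with the new flag; its placement in group_vars is an assumption, the flag and its default of `true` appear in the coredns defaults diff below:

```yml
# Skip the Kubespray-managed CoreDNS manifests, e.g. when CoreDNS is handled by
# another tool.
deploy_coredns: false
```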
Takuya Murakami
c6dfe22a41 Improve logging of kubeadm init failure of first control plane node (#12216)
Split the retry task of 'kubeadm init' to show the failure log of
the first execution.
2025-05-16 03:01:13 -07:00
Seena Fallah
ec85b7e2c9 download: respect enable_dns_autoscaler when enabling dnsautoscaler (#12217)
dnsautoscaler should only be enabled when enable_dns_autoscaler is
set to true. Without this, it could be enabled without any manifest
actually using it, which makes it a false signal.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2025-05-15 12:45:13 -07:00
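
A short sketch of the behaviour described above; only `enable_dns_autoscaler` comes from the change, its placement here is an assumption:

```yml
# With the autoscaler disabled, no dns-autoscaler manifests are applied, and after
# this change the dnsautoscaler image download is gated on the same variable,
# so the image is no longer pulled either.
enable_dns_autoscaler: false
```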
Kubernetes Prow Robot
acd6872c80 Merge pull request #12219 from VannTen/test/ha_etcd_separate
Fix broken workaround for separate etcd setup
2025-05-15 12:39:14 -07:00
Max Gautier
22d3cf9c2b Move 'pretend certificates' **after** cert distribution
The link target will only exist after we distribute the certs on each node.
2025-05-15 18:35:34 +02:00
Max Gautier
2d3bd8686f Add testcase separate ha-etcd
Also use a distinct node to test certificate distribution.
2025-05-15 18:20:13 +02:00
Hyeonki Hong
2c3b6c9199 feat: add trigger to restart kube-apiserver when config files change (#12172)
* feat: add trigger to restart kube-apiserver when config files change

* fix: remove not upgrade_cluster_setup condition

* refactor: streamline kube-apiserver restart notifications
2025-05-15 06:51:14 -07:00
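
The change wires apiserver-related template tasks to a restart handler (visible in the control-plane diff below); a generic sketch of that notify/handler flow, with a hypothetical handler body since the real handler is outside this diff:

```yml
# Task (as in the diff below): re-rendering the audit webhook config notifies the handler.
- name: Write api audit webhook config yaml
  template:
    src: apiserver-audit-webhook-config.yaml.j2   # template name assumed for illustration
    dest: "{{ audit_webhook_config_file }}"
    mode: "0640"
  when: kubernetes_audit_webhook
  notify: Control plane | Restart apiserver

# handlers/main.yml (hypothetical body; the real handler performs the actual restart)
- name: Control plane | Restart apiserver
  command: echo "restart kube-apiserver here"
```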
69 changed files with 430 additions and 2716 deletions

View File

@@ -55,37 +55,9 @@ before_script:
extends: .job
needs:
- pipeline-image
- ci-not-authorized
- pre-commit # lint
- vagrant-validate # lint
# For failfast, at least 1 job must be defined in .gitlab-ci.yml
# Premoderated with manual actions
ci-not-authorized:
stage: build
before_script: []
after_script: []
rules:
# LGTM or ok-to-test labels
- if: $PR_LABELS =~ /.*,(lgtm|approved|ok-to-test).*|^(lgtm|approved|ok-to-test).*/i
variables:
CI_OK_TO_TEST: '0'
when: always
- if: $CI_PIPELINE_SOURCE == "schedule" || $CI_PIPELINE_SOURCE == "trigger"
variables:
CI_OK_TO_TEST: '0'
- if: $CI_COMMIT_BRANCH == "master"
variables:
CI_OK_TO_TEST: '0'
- when: always
variables:
CI_OK_TO_TEST: '1'
script:
- exit $CI_OK_TO_TEST
tags:
- ffci
needs: []
include:
- .gitlab-ci/build.yml
- .gitlab-ci/lint.yml

View File

@@ -12,7 +12,6 @@
- ffci
needs:
- pipeline-image
- ci-not-authorized
# TODO: generate testcases matrixes from the files in tests/files/
# this is needed to avoid the need for PR rebasing when a job was added or removed in the target branch
@@ -55,6 +54,7 @@ pr:
- ubuntu22-calico-all-in-one
- ubuntu22-calico-all-in-one-upgrade
- ubuntu24-calico-etcd-datastore
- ubuntu24-ha-separate-etcd
# The ubuntu20-calico-all-in-one jobs are meant as early stages to prevent running the full CI if something is horribly broken
ubuntu20-calico-all-in-one:

View File

@@ -12,7 +12,6 @@
image: $PIPELINE_IMAGE
needs:
- pipeline-image
# - ci-not-authorized
script:
- ./tests/scripts/molecule_run.sh
after_script:

View File

@@ -3,7 +3,6 @@
.terraform_install:
extends: .job
needs:
- ci-not-authorized
- pipeline-image
variables:
TF_VAR_public_key_path: "${ANSIBLE_PRIVATE_KEY_FILE}.pub"
@@ -33,7 +32,6 @@ terraform_validate:
matrix:
- PROVIDER:
- openstack
- equinix
- aws
- exoscale
- hetzner

View File

@@ -1,8 +1,6 @@
---
vagrant:
extends: .job-moderated
needs:
- ci-not-authorized
variables:
CI_PLATFORM: "vagrant"
SSH_USER: "vagrant"

View File

@@ -35,8 +35,8 @@ RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN OS_ARCHITECTURE=$(dpkg --print-architecture) \
&& curl -L "https://dl.k8s.io/release/v1.32.4/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
&& echo "$(curl -L "https://dl.k8s.io/release/v1.32.4/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
&& curl -L "https://dl.k8s.io/release/v1.32.5/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
&& echo "$(curl -L "https://dl.k8s.io/release/v1.32.5/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl
COPY *.yml ./

View File

@@ -1,13 +1,9 @@
aliases:
kubespray-approvers:
- cristicalin
- floryut
- liupeng0518
- mzaian
- oomichi
- yankay
- ant31
- mzaian
- vannten
- yankay
kubespray-reviewers:
- cyclinder
- erikjiang
@@ -19,8 +15,12 @@ aliases:
kubespray-emeritus_approvers:
- atoms
- chadswen
- cristicalin
- floryut
- liupeng0518
- luckysb
- mattymo
- miouge1
- oomichi
- riverzhang
- woopstar

View File

@@ -111,7 +111,7 @@ Note:
<!-- BEGIN ANSIBLE MANAGED BLOCK -->
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) 1.32.4
- [kubernetes](https://github.com/kubernetes/kubernetes) 1.32.5
- [etcd](https://github.com/etcd-io/etcd) 3.5.16
- [docker](https://www.docker.com/) 28.0
- [containerd](https://containerd.io/) 2.0.5
@@ -119,7 +119,7 @@ Note:
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) 1.4.1
- [calico](https://github.com/projectcalico/calico) 3.29.3
- [cilium](https://github.com/cilium/cilium) 1.15.9
- [cilium](https://github.com/cilium/cilium) 1.17.3
- [flannel](https://github.com/flannel-io/flannel) 0.22.0
- [kube-ovn](https://github.com/alauda/kube-ovn) 1.12.21
- [kube-router](https://github.com/cloudnativelabs/kube-router) 2.1.1

View File

@@ -127,7 +127,7 @@ function register_container_images() {
tar -zxvf ${IMAGE_TAR_FILE}
if [ "${create_registry}" ]; then
if ${create_registry}; then
sudo ${runtime} load -i ${IMAGE_DIR}/registry-latest.tar
set +e

View File

@@ -1,246 +0,0 @@
# Kubernetes on Equinix Metal with Terraform
Provision a Kubernetes cluster with [Terraform](https://www.terraform.io) on
[Equinix Metal](https://metal.equinix.com) ([formerly Packet](https://blog.equinix.com/blog/2020/10/06/equinix-metal-metal-and-more/)).
## Status
This will install a Kubernetes cluster on Equinix Metal. It should work in all locations and on most server types.
## Approach
The terraform configuration inspects variables found in
[variables.tf](variables.tf) to create resources in your Equinix Metal project.
There is a [python script](../terraform.py) that reads the generated `.tfstate`
file to generate a dynamic inventory that is consumed by [cluster.yml](../../../cluster.yml)
to actually install Kubernetes with Kubespray.
### Kubernetes Nodes
You can create many different kubernetes topologies by setting the number of
different classes of hosts.
- Master nodes with etcd: `number_of_k8s_masters` variable
- Master nodes without etcd: `number_of_k8s_masters_no_etcd` variable
- Standalone etcd hosts: `number_of_etcd` variable
- Kubernetes worker nodes: `number_of_k8s_nodes` variable
Note that the Ansible script will report an invalid configuration if you wind up
with an *even number* of etcd instances, since etcd requires an odd number of members. This
restriction includes standalone etcd nodes that are deployed in a cluster along with
master nodes with etcd replicas. As an example, if you have three master nodes with
etcd replicas and three standalone etcd nodes, the script will fail since there are
now six total etcd replicas.
## Requirements
- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
- [Install Ansible dependencies](/docs/ansible/ansible.md#installing-ansible)
- Account with Equinix Metal
- An SSH key pair
## SSH Key Setup
An SSH key pair is required so Ansible can access the newly provisioned nodes (Equinix Metal hosts). By default, the public SSH key defined in cluster.tfvars (~/.ssh/id_rsa.pub) will be installed in authorized_keys on the newly provisioned nodes; Terraform uploads this public key and distributes it to all the nodes. If you have already set this public key in Equinix Metal (i.e. via the portal), then set the public key file name in cluster.tfvars to blank to prevent the duplicate key from being uploaded, which would cause an error.
If you don't already have a keypair generated (~/.ssh/id_rsa and ~/.ssh/id_rsa.pub), then a new keypair can be generated with the command:
```ShellSession
ssh-keygen -f ~/.ssh/id_rsa
```
## Terraform
Terraform will be used to provision all of the Equinix Metal resources with base software as appropriate.
### Configuration
#### Inventory files
Create an inventory directory for your cluster by copying the existing sample and linking the `hosts` script (used to build the inventory based on Terraform state):
```ShellSession
cp -LRp contrib/terraform/equinix/sample-inventory inventory/$CLUSTER
cd inventory/$CLUSTER
ln -s ../../contrib/terraform/equinix/hosts
```
This will be the base for subsequent Terraform commands.
#### Equinix Metal API access
Your Equinix Metal API key must be available in the `METAL_AUTH_TOKEN` environment variable.
This key is typically stored outside of the code repo since it is considered secret.
If someone gets this key, they can start up or shut down hosts in your project!
For more information on how to generate an API key or find your project ID, please see
[Accounts Index](https://metal.equinix.com/developers/docs/accounts/).
The Equinix Metal Project ID associated with the key will be set later in `cluster.tfvars`.
For more information about the API, please see [Equinix Metal API](https://metal.equinix.com/developers/api/).
For more information about terraform provider authentication, please see [the equinix provider documentation](https://registry.terraform.io/providers/equinix/equinix/latest/docs).
Example:
```ShellSession
export METAL_AUTH_TOKEN="Example-API-Token"
```
Note that to deploy several clusters within the same project you need to use [terraform workspace](https://www.terraform.io/docs/state/workspaces.html#using-workspaces).
#### Cluster variables
The construction of the cluster is driven by values found in
[variables.tf](variables.tf).
For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
The `cluster_name` is used to set a tag on each server deployed as part of this cluster.
This helps when identifying which hosts are associated with each cluster.
While the defaults in variables.tf will successfully deploy a cluster, it is recommended to set the following values:
- cluster_name = the name of the inventory directory created above as $CLUSTER
- equinix_metal_project_id = the Equinix Metal Project ID associated with the Equinix Metal API token above
#### Enable localhost access
Kubespray will pull down a Kubernetes configuration file for accessing this cluster when
`kubeconfig_localhost: true` is enabled in the Kubespray configuration.
Edit `inventory/$CLUSTER/group_vars/k8s_cluster/k8s_cluster.yml`, uncomment the following line, and change it from `false` to `true`:
`# kubeconfig_localhost: false`
becomes:
`kubeconfig_localhost: true`
Once the Kubespray playbooks are run, a Kubernetes configuration file will be written to the local host at `inventory/$CLUSTER/artifacts/admin.conf`
#### Terraform state files
In the cluster's inventory folder, the following files might be created (either by Terraform
or manually); to prevent you from pushing them accidentally, they are listed in a
`.gitignore` file in the `contrib/terraform/equinix` directory:
- `.terraform`
- `.tfvars`
- `.tfstate`
- `.tfstate.backup`
- `.lock.hcl`
You can still add them manually if you want to.
### Initialization
Before Terraform can operate on your cluster you need to install the required
plugins. This is accomplished as follows:
```ShellSession
cd inventory/$CLUSTER
terraform -chdir=../../contrib/terraform/equinix init -var-file=cluster.tfvars
```
This should finish fairly quickly, telling you Terraform has successfully initialized and loaded the necessary modules.
### Provisioning cluster
You can apply the Terraform configuration to your cluster with the following command
issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession
terraform -chdir=../../contrib/terraform/equinix apply -var-file=cluster.tfvars
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook -i hosts ../../cluster.yml
```
### Destroying cluster
You can destroy your new cluster with the following command issued from the cluster's inventory directory:
```ShellSession
terraform -chdir=../../contrib/terraform/equinix destroy -var-file=cluster.tfvars
```
If you've started the Ansible run, it may also be a good idea to do some manual cleanup:
- Remove SSH keys from the destroyed cluster from your `~/.ssh/known_hosts` file
- Clean up any temporary cache files: `rm /tmp/$CLUSTER-*`
### Debugging
You can enable debugging output from Terraform by setting `TF_LOG` to `DEBUG` before running the Terraform command.
## Ansible
### Node access
#### SSH
Ensure your local ssh-agent is running and your ssh key has been added. This
step is required by the terraform provisioner:
```ShellSession
eval $(ssh-agent -s)
ssh-add ~/.ssh/id_rsa
```
If you have deployed and destroyed a previous iteration of your cluster, you will need to clear out any stale keys from your SSH "known hosts" file ( `~/.ssh/known_hosts`).
#### Test access
Make sure you can connect to the hosts. Note that Flatcar Container Linux by Kinvolk will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.
```ShellSession
$ ansible -i inventory/$CLUSTER/hosts -m ping all
example-k8s_node-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
example-etcd-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
example-k8s-master-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
If it fails, try to connect manually via SSH. It could be something as simple as a stale host key.
### Deploy Kubernetes
```ShellSession
ansible-playbook --become -i inventory/$CLUSTER/hosts cluster.yml
```
This will take some time as there are many tasks to run.
## Kubernetes
### Set up kubectl
- [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on the localhost.
- Verify that Kubectl runs correctly
```ShellSession
kubectl version
```
- Verify that the Kubernetes configuration file has been copied over
```ShellSession
cat inventory/$CLUSTER/artifacts/admin.conf
```
- Verify that all the nodes are running correctly.
```ShellSession
kubectl version
kubectl --kubeconfig=inventory/$CLUSTER/artifacts/admin.conf get nodes
```
## What's next
Try out your new Kubernetes cluster with the [Hello Kubernetes service](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/).

View File

@@ -1 +0,0 @@
../terraform.py

View File

@@ -1,57 +0,0 @@
resource "equinix_metal_ssh_key" "k8s" {
count = var.public_key_path != "" ? 1 : 0
name = "kubernetes-${var.cluster_name}"
public_key = chomp(file(var.public_key_path))
}
resource "equinix_metal_device" "k8s_master" {
depends_on = [equinix_metal_ssh_key.k8s]
count = var.number_of_k8s_masters
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
plan = var.plan_k8s_masters
metro = var.metro
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.equinix_metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_control_plane", "etcd", "kube_node"]
}
resource "equinix_metal_device" "k8s_master_no_etcd" {
depends_on = [equinix_metal_ssh_key.k8s]
count = var.number_of_k8s_masters_no_etcd
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
plan = var.plan_k8s_masters_no_etcd
metro = var.metro
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.equinix_metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_control_plane"]
}
resource "equinix_metal_device" "k8s_etcd" {
depends_on = [equinix_metal_ssh_key.k8s]
count = var.number_of_etcd
hostname = "${var.cluster_name}-etcd-${count.index + 1}"
plan = var.plan_etcd
metro = var.metro
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.equinix_metal_project_id
tags = ["cluster-${var.cluster_name}", "etcd"]
}
resource "equinix_metal_device" "k8s_node" {
depends_on = [equinix_metal_ssh_key.k8s]
count = var.number_of_k8s_nodes
hostname = "${var.cluster_name}-k8s-node-${count.index + 1}"
plan = var.plan_k8s_nodes
metro = var.metro
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.equinix_metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_node"]
}

View File

@@ -1,15 +0,0 @@
output "k8s_masters" {
value = equinix_metal_device.k8s_master.*.access_public_ipv4
}
output "k8s_masters_no_etc" {
value = equinix_metal_device.k8s_master_no_etcd.*.access_public_ipv4
}
output "k8s_etcds" {
value = equinix_metal_device.k8s_etcd.*.access_public_ipv4
}
output "k8s_nodes" {
value = equinix_metal_device.k8s_node.*.access_public_ipv4
}

View File

@@ -1,17 +0,0 @@
terraform {
required_version = ">= 1.0.0"
provider_meta "equinix" {
module_name = "kubespray"
}
required_providers {
equinix = {
source = "equinix/equinix"
version = "1.24.0"
}
}
}
# Configure the Equinix Metal Provider
provider "equinix" {
}

View File

@@ -1,35 +0,0 @@
# your Kubernetes cluster name here
cluster_name = "mycluster"
# Your Equinix Metal project ID. See https://metal.equinix.com/developers/docs/accounts/
equinix_metal_project_id = "Example-Project-Id"
# The public SSH key to be uploaded into authorized_keys in bare metal Equinix Metal nodes provisioned
# leave this value blank if the public key is already setup in the Equinix Metal project
# Terraform will complain if the public key is setup in Equinix Metal
public_key_path = "~/.ssh/id_rsa.pub"
# Equinix interconnected bare metal across our global metros.
metro = "da"
# operating_system
operating_system = "ubuntu_22_04"
# standalone etcds
number_of_etcd = 0
plan_etcd = "t1.small.x86"
# masters
number_of_k8s_masters = 1
number_of_k8s_masters_no_etcd = 0
plan_k8s_masters = "t1.small.x86"
plan_k8s_masters_no_etcd = "t1.small.x86"
# nodes
number_of_k8s_nodes = 2
plan_k8s_nodes = "t1.small.x86"

View File

@@ -1 +0,0 @@
../../../../inventory/sample/group_vars

View File

@@ -1,56 +0,0 @@
variable "cluster_name" {
default = "kubespray"
}
variable "equinix_metal_project_id" {
description = "Your Equinix Metal project ID. See https://metal.equinix.com/developers/docs/accounts/"
}
variable "operating_system" {
default = "ubuntu_22_04"
}
variable "public_key_path" {
description = "The path of the ssh pub key"
default = "~/.ssh/id_rsa.pub"
}
variable "billing_cycle" {
default = "hourly"
}
variable "metro" {
default = "da"
}
variable "plan_k8s_masters" {
default = "c3.small.x86"
}
variable "plan_k8s_masters_no_etcd" {
default = "c3.small.x86"
}
variable "plan_etcd" {
default = "c3.small.x86"
}
variable "plan_k8s_nodes" {
default = "c3.medium.x86"
}
variable "number_of_k8s_masters" {
default = 1
}
variable "number_of_k8s_masters_no_etcd" {
default = 0
}
variable "number_of_etcd" {
default = 0
}
variable "number_of_k8s_nodes" {
default = 1
}

View File

@@ -237,7 +237,7 @@ cilium_operator_extra_volume_mounts:
## Choose Cilium version
```yml
cilium_version: "1.15.9"
cilium_version: "1.17.3"
```
## Add variable to config

docs/_sidebar.md (generated)
View File

@@ -23,7 +23,6 @@
* [Aws](/docs/cloud_providers/aws.md)
* [Azure](/docs/cloud_providers/azure.md)
* [Cloud](/docs/cloud_providers/cloud.md)
* [Equinix-metal](/docs/cloud_providers/equinix-metal.md)
* CNI
* [Calico](/docs/CNI/calico.md)
* [Cilium](/docs/CNI/cilium.md)

View File

@@ -1,100 +0,0 @@
# Equinix Metal
Kubespray provides support for bare metal deployments using [Equinix Metal](http://metal.equinix.com).
Deploying on bare metal allows Kubernetes to run at locations where an existing public or private cloud might not exist, such
as cell towers or edge colocated installations. The deployment mechanism used by Kubespray for Equinix Metal is similar to that used for
AWS and OpenStack clouds (notably using Terraform to deploy the infrastructure). Terraform uses the Equinix Metal provider plugin
to provision and configure hosts, which are then used by the Kubespray Ansible playbooks. The Ansible inventory is generated
dynamically from the Terraform state file.
## Local Host Configuration
To perform this installation, you will need a localhost to run Terraform/Ansible (laptop, VM, etc) and an account with Equinix Metal.
In this example, we are provisioning an m1.large CentOS 7 OpenStack VM as the localhost for the Kubernetes installation.
You'll need Ansible, Git, and pip.
```bash
sudo yum install epel-release
sudo yum install ansible
sudo yum install git
sudo yum install python-pip
```
## Playbook SSH Key
An SSH key is needed by Kubespray/Ansible to run the playbooks.
This key is installed into the bare metal hosts during the Terraform deployment.
You can generate a new key or use an existing one.
```bash
ssh-keygen -f ~/.ssh/id_rsa
```
## Install Terraform
Terraform is required to deploy the bare metal infrastructure. The steps below are for installing on CentOS 7.
[More terraform installation options are available.](https://learn.hashicorp.com/terraform/getting-started/install.html)
Grab the latest version of Terraform and install it.
```bash
echo "https://releases.hashicorp.com/terraform/$(curl -s https://checkpoint-api.hashicorp.com/v1/check/terraform | jq -r -M '.current_version')/terraform_$(curl -s https://checkpoint-api.hashicorp.com/v1/check/terraform | jq -r -M '.current_version')_linux_amd64.zip"
sudo yum install unzip
sudo unzip terraform_0.14.10_linux_amd64.zip -d /usr/local/bin/
```
## Download Kubespray
Pull down Kubespray and set up any required libraries.
```bash
git clone https://github.com/kubernetes-sigs/kubespray
cd kubespray
```
## Install Ansible
Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
## Cluster Definition
In this example, a new cluster called "alpha" will be created.
```bash
cp -LRp contrib/terraform/packet/sample-inventory inventory/alpha
cd inventory/alpha/
ln -s ../../contrib/terraform/packet/hosts
```
Details about the cluster, such as the name, as well as the authentication tokens and project ID
for Equinix Metal need to be defined. To find these values see [Equinix Metal API Accounts](https://metal.equinix.com/developers/docs/accounts/).
```bash
vi cluster.tfvars
```
* cluster_name = alpha
* packet_project_id = ABCDEFGHIJKLMNOPQRSTUVWXYZ123456
* public_key_path = 12345678-90AB-CDEF-GHIJ-KLMNOPQRSTUV
## Deploy Bare Metal Hosts
Initializing Terraform will pull down any necessary plugins/providers.
```bash
terraform init ../../contrib/terraform/packet/
```
Run Terraform to deploy the hardware.
```bash
terraform apply -var-file=cluster.tfvars ../../contrib/terraform/packet
```
## Run Kubespray Playbooks
With the bare metal infrastructure deployed, Kubespray can now install Kubernetes and set up the cluster.
```bash
ansible-playbook --become -i inventory/alpha/hosts cluster.yml
```

View File

@@ -47,8 +47,8 @@ RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
&& pip install --no-compile --no-cache-dir pip -U \
&& pip install --no-compile --no-cache-dir -r tests/requirements.txt \
&& pip install --no-compile --no-cache-dir -r requirements.txt \
&& curl -L https://dl.k8s.io/release/v1.32.4/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/v1.32.4/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
&& curl -L https://dl.k8s.io/release/v1.32.5/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/v1.32.5/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl \
# Install Vagrant
&& curl -LO https://releases.hashicorp.com/vagrant/${VAGRANT_VERSION}/vagrant_${VAGRANT_VERSION}-1_$(dpkg --print-architecture).deb \

View File

@@ -1,6 +1,6 @@
ansible==9.13.0
# Needed for community.crypto module
cryptography==44.0.3
cryptography==45.0.2
# Needed for jinja2 json_query templating
jmespath==1.0.1
# Needed for ansible.utils.ipaddr

View File

@@ -19,6 +19,8 @@ use_oracle_public_repo: true
## Ubuntu specific variables
# Disable unattended-upgrades for Linux kernel and all packages start with linux- on Ubuntu
ubuntu_kernel_unattended_upgrades_disabled: false
# Stop unattended-upgrades if it is currently running on Ubuntu
ubuntu_stop_unattended_upgrades: false
fedora_coreos_packages:
- python

View File

@@ -19,3 +19,11 @@
when:
- ubuntu_kernel_unattended_upgrades_disabled
- unattended_upgrades_file_stat.stat.exists
- name: Stop unattended-upgrades service
service:
name: unattended-upgrades
state: stopped
enabled: false
become: true
when: ubuntu_stop_unattended_upgrades

View File

@@ -98,28 +98,6 @@
loop_control:
label: "{{ item.item }}"
# This is a hack around the fact kubeadm expect the same certs path on all kube_control_plane
# TODO: fix certs generation to have the same file everywhere
# OR work with kubeadm on node-specific config
- name: Gen_certs | Pretend all control plane have all certs (with symlinks)
file:
state: link
src: "{{ etcd_cert_dir }}/node-{{ inventory_hostname }}{{ item[0] }}.pem"
dest: "{{ etcd_cert_dir }}/node-{{ item[1] }}{{ item[0] }}.pem"
mode: "0640"
loop: "{{ suffixes | product(groups['kube_control_plane']) }}"
vars:
suffixes:
- ''
- '-key'
when:
- ('kube_control_plane' in group_names)
- item[1] != inventory_hostname
register: symlink_created
failed_when:
- symlink_created is failed
- ('refusing to convert from file to symlink' not in symlink_created.msg)
- name: Gen_certs | Gather node certs from first etcd node
slurp:
src: "{{ item }}"
@@ -175,3 +153,25 @@
owner: "{{ etcd_owner }}"
mode: "{{ etcd_cert_dir_mode }}"
recurse: true
# This is a hack around the fact kubeadm expect the same certs path on all kube_control_plane
# TODO: fix certs generation to have the same file everywhere
# OR work with kubeadm on node-specific config
- name: Gen_certs | Pretend all control plane have all certs (with symlinks)
file:
state: link
src: "{{ etcd_cert_dir }}/node-{{ inventory_hostname }}{{ item[0] }}.pem"
dest: "{{ etcd_cert_dir }}/node-{{ item[1] }}{{ item[0] }}.pem"
mode: "0640"
loop: "{{ suffixes | product(groups['kube_control_plane']) }}"
vars:
suffixes:
- ''
- '-key'
when:
- ('kube_control_plane' in group_names)
- item[1] != inventory_hostname
register: symlink_created
failed_when:
- symlink_created is failed
- ('refusing to convert from file to symlink' not in symlink_created.msg)

View File

@@ -20,7 +20,7 @@ coredns_default_zone_cache_block: |
coredns_pod_disruption_budget: false
# value for coredns pdb
coredns_pod_disruption_budget_max_unavailable: "30%"
deploy_coredns: true
# coredns_additional_configs adds any extra configuration to coredns
# coredns_additional_configs: |
# whoami

View File

@@ -22,7 +22,9 @@
- coredns
vars:
clusterIP: "{{ skydns_server }}"
when: dns_mode in ['coredns', 'coredns_dual']
when:
- dns_mode in ['coredns', 'coredns_dual']
- deploy_coredns
- name: Kubernetes Apps | CoreDNS Secondary
command:
@@ -38,6 +40,7 @@
coredns_ordinal_suffix: "-secondary"
when:
- dns_mode == 'coredns_dual'
- deploy_coredns
- name: Kubernetes Apps | nodelocalDNS
command:

View File

@@ -61,6 +61,7 @@
dest: "{{ audit_policy_file }}"
mode: "0640"
when: kubernetes_audit or kubernetes_audit_webhook
notify: Control plane | Restart apiserver
- name: Write api audit webhook config yaml
template:
@@ -68,6 +69,7 @@
dest: "{{ audit_webhook_config_file }}"
mode: "0640"
when: kubernetes_audit_webhook
notify: Control plane | Restart apiserver
- name: Create apiserver tracing config directory
file:
@@ -82,6 +84,7 @@
dest: "{{ kube_config_dir }}/tracing/apiserver-tracing.yaml"
mode: "0640"
when: kube_apiserver_tracing
notify: Control plane | Restart apiserver
# Nginx LB(default), If kubeadm_config_api_fqdn is defined, use other LB by kubeadm controlPlaneEndpoint.
- name: Set kubeadm_config_api_fqdn define
@@ -109,6 +112,7 @@
dest: "{{ kube_config_dir }}/admission-controls/admission-controls.yaml"
mode: "0640"
when: kube_apiserver_admission_control_config_file
notify: Control plane | Restart apiserver
- name: Kubeadm | Push admission control config files
template:
@@ -119,6 +123,7 @@
- kube_apiserver_admission_control_config_file
- item in kube_apiserver_admission_plugins_needs_configuration
loop: "{{ kube_apiserver_enable_admission_plugins }}"
notify: Control plane | Restart apiserver
- name: Kubeadm | Check apiserver.crt SANs
vars:
@@ -166,22 +171,32 @@
- not kube_external_ca_mode
- name: Kubeadm | Initialize first control plane node
command: >-
timeout -k {{ kubeadm_init_timeout }} {{ kubeadm_init_timeout }}
{{ bin_dir }}/kubeadm init
--config={{ kube_config_dir }}/kubeadm-config.yaml
--ignore-preflight-errors={{ kubeadm_ignore_preflight_errors | join(',') }}
--skip-phases={{ kubeadm_init_phases_skip | join(',') }}
{{ kube_external_ca_mode | ternary('', '--upload-certs') }}
register: kubeadm_init
# Retry is because upload config sometimes fails
retries: 3
until: kubeadm_init is succeeded or "field is immutable" in kubeadm_init.stderr
when: inventory_hostname == first_kube_control_plane and not kubeadm_already_run.stat.exists
failed_when: kubeadm_init.rc != 0 and "field is immutable" not in kubeadm_init.stderr
vars:
kubeadm_init_first_control_plane_cmd: >-
timeout -k {{ kubeadm_init_timeout }} {{ kubeadm_init_timeout }}
{{ bin_dir }}/kubeadm init
--config={{ kube_config_dir }}/kubeadm-config.yaml
--ignore-preflight-errors={{ kubeadm_ignore_preflight_errors | join(',') }}
--skip-phases={{ kubeadm_init_phases_skip | join(',') }}
{{ kube_external_ca_mode | ternary('', '--upload-certs') }}
environment:
PATH: "{{ bin_dir }}:{{ ansible_env.PATH }}"
notify: Control plane | restart kubelet
block:
- name: Kubeadm | Initialize first control plane node (1st try)
command: "{{ kubeadm_init_first_control_plane_cmd }}"
register: kubeadm_init
failed_when: kubeadm_init.rc != 0 and "field is immutable" not in kubeadm_init.stderr
rescue:
# Retry is because upload config sometimes fails
# This retry task is separated from 1st task to show log of failure of 1st task.
- name: Kubeadm | Initialize first control plane node (retry)
command: "{{ kubeadm_init_first_control_plane_cmd }}"
register: kubeadm_init
retries: 2
until: kubeadm_init is succeeded or "field is immutable" in kubeadm_init.stderr
failed_when: kubeadm_init.rc != 0 and "field is immutable" not in kubeadm_init.stderr
- name: Set kubeadm certificate key
set_fact:

View File

@@ -55,17 +55,6 @@ minimal_node_memory_mb: 1024
minimal_master_memory_mb: 1500
## NTP Settings
# Start the ntpd or chrony service and enable it at system boot.
ntp_enabled: false
# The package to install which provides NTP functionality.
# The default is ntp for most platforms, or chrony on RHEL/CentOS 7 and later.
# The ntp_package can be one of ['ntp', 'ntpsec', 'chrony']
ntp_package: >-
{% if ansible_os_family == "RedHat" -%}
chrony
{%- else -%}
ntp
{%- endif -%}
# Manage the NTP configuration file.
ntp_manage_config: false

View File

@@ -1,12 +1,4 @@
---
- name: Ensure NTP package
package:
name:
- "{{ ntp_package }}"
state: present
when:
- not is_fedora_coreos
- not ansible_os_family in ["Flatcar", "Flatcar Container Linux by Kinvolk"]
- name: Disable systemd-timesyncd
service:

View File

@@ -113,7 +113,7 @@ flannel_cni_version: 1.1.2
weave_version: 2.8.7
cni_version: "{{ (cni_binary_checksums['amd64'] | dict2items)[0].key }}"
cilium_version: "1.15.9"
cilium_version: "1.17.3"
cilium_cli_version: "{{ (ciliumcli_binary_checksums['amd64'] | dict2items)[0].key }}"
cilium_enable_hubble: false
@@ -261,13 +261,13 @@ cilium_operator_image_tag: "v{{ cilium_version }}"
cilium_hubble_relay_image_repo: "{{ quay_image_repo }}/cilium/hubble-relay"
cilium_hubble_relay_image_tag: "v{{ cilium_version }}"
cilium_hubble_certgen_image_repo: "{{ quay_image_repo }}/cilium/certgen"
cilium_hubble_certgen_image_tag: "v0.1.8"
cilium_hubble_certgen_image_tag: "v0.2.1"
cilium_hubble_ui_image_repo: "{{ quay_image_repo }}/cilium/hubble-ui"
cilium_hubble_ui_image_tag: "v0.11.0"
cilium_hubble_ui_image_tag: "v0.13.2"
cilium_hubble_ui_backend_image_repo: "{{ quay_image_repo }}/cilium/hubble-ui-backend"
cilium_hubble_ui_backend_image_tag: "v0.11.0"
cilium_hubble_envoy_image_repo: "{{ docker_image_repo }}/envoyproxy/envoy"
cilium_hubble_envoy_image_tag: "v1.22.5"
cilium_hubble_ui_backend_image_tag: "v0.13.2"
cilium_hubble_envoy_image_repo: "{{ quay_image_repo }}/cilium/cilium-envoy"
cilium_hubble_envoy_image_tag: "v1.32.5-1744305768-f9ddca7dcd91f7ca25a505560e655c47d3dec2cf"
kube_ovn_container_image_repo: "{{ docker_image_repo }}/kubeovn/kube-ovn"
kube_ovn_container_image_tag: "v{{ kube_ovn_version }}"
kube_ovn_vpc_container_image_repo: "{{ docker_image_repo }}/kubeovn/vpc-nat-gateway"
@@ -897,7 +897,7 @@ downloads:
- k8s_cluster
dnsautoscaler:
enabled: "{{ dns_mode in ['coredns', 'coredns_dual'] }}"
enabled: "{{ dns_mode in ['coredns', 'coredns_dual'] and enable_dns_autoscaler }}"
container: true
repo: "{{ dnsautoscaler_image_repo }}"
tag: "{{ dnsautoscaler_image_tag }}"

View File

@@ -770,3 +770,20 @@ system_upgrade_reboot: on-upgrade # never, always
# Enables or disables the scheduler plugins.
scheduler_plugins_enabled: false
## NTP Settings
# Start the ntpd or chrony service and enable it at system boot.
ntp_enabled: false
# TODO: Refactor NTP package selection to integrate with the general package installation system
# instead of using a separate variable approach
# The package to install which provides NTP functionality.
# The default is ntp for most platforms, or chrony on RHEL/CentOS 7 and later.
# The ntp_package can be one of ['ntp', 'ntpsec', 'chrony']
ntp_package: >-
{% if ansible_os_family == "RedHat" -%}
chrony
{%- else -%}
ntp
{%- endif -%}

View File

@@ -108,11 +108,13 @@ crio_archive_checksums:
1.30.0: sha256:e6fe5c39fa7b7cf8167bb59b94dc9028f8def0c4fec4c1c9028ec4b84da6c53a
kubelet_checksums:
arm64:
1.32.5: sha256:034753a2e308afeb4ce3cf332d38346c6e660252eac93b268fac0e112a56ff46
1.32.4: sha256:91117b71eb2bb3dd79ec3ed444e058a347349108bf661838f53ee30d2a0ff168
1.32.3: sha256:5c3c98e6e0fa35d209595037e05022597954b8d764482417a9588e15218f0fe2
1.32.2: sha256:d74b659bbde5adf919529d079975900e51e10bc807f0fda9dc9f6bb07c4a3a7b
1.32.1: sha256:8e6d0eeedd9f0b8b38d4f600ee167816f71cf4dacfa3d9a9bb6c3561cc884e95
1.32.0: sha256:bda9b2324c96693b38c41ecea051bab4c7c434be5683050b5e19025b50dbc0bf
1.31.9: sha256:2debf321e74f430c3832e2426766271f4d51e54927e6ad4be0235d31453dace6
1.31.8: sha256:c071aa506071db5f03a03ea3f406b4250359b08b7ae10eeee3cfb3da05411925
1.31.7: sha256:c6624e9e0bbf31334893f991f9a85c7018d8073c32147f421f6338bc92ac6f33
1.31.6: sha256:79b2bae5f578bae643e44ae1a40c834221983ac8e695c82aad79f2dc96c50ada
@@ -122,6 +124,7 @@ kubelet_checksums:
1.31.2: sha256:118e1b0e85357a81557f9264521c083708f295d7c5f954a4113500fd1afca8f8
1.31.1: sha256:fbd98311e96b9dcdd73d1688760d410cc70aefce26272ff2f20eef51a7c0d1da
1.31.0: sha256:b310da449a9d2f8b928cab5ca12a6772617ba421023894e061ca2647e6d9f1c3
1.30.13: sha256:673ffbf0c84814a0625fef0d4e44647ec7cf3786ab839729d2d03782559b3cdf
1.30.12: sha256:0d280ebaa41b7d4c34977f131cf9cda663db94c3ae33d5613b1729a02b3bedd7
1.30.11: sha256:2ead74deda3ae5ab2fac1e1476d5b4c81ad73cf6383c279b5781513b98e43f39
1.30.10: sha256:497d403610fda7ff4fefa1c5c467a5fe9efbc3b3368ecd40542ef1e22eff88ca
@@ -136,11 +139,13 @@ kubelet_checksums:
1.30.1: sha256:c45049b829af876588ec1a30def3884ce77c2c175cd77485d49c78d2064a38fb
1.30.0: sha256:fa887647422d34f3c7cc5b30fefcf97084d2c3277eff237c5808685ba8e4b15a
amd64:
1.32.5: sha256:2b2988edd1646bf139dee6956d4283c520ff151a36febd10701ffda4852b8250
1.32.4: sha256:3e0c265fe80f3ea1b7271a00879d4dbd5e6ea1e91ecf067670c983e07c33a6f4
1.32.3: sha256:024bb7faffa787c7717a2b37398a8c6df35694a8585a73074b052c3f4c4906ce
1.32.2: sha256:9927fee1678202719075d8d546390bcda86c9e519b811fb7f4820b6823f84cab
1.32.1: sha256:967dc8984651c48230a2ff5319e22cbf858452e974104a19bbade5d1708f72ad
1.32.0: sha256:5ad4965598773d56a37a8e8429c3dc3d86b4c5c26d8417ab333ae345c053dae2
1.31.9: sha256:4e5e2bce4e80575a253654877f0156393d79647a36afb784da27f3ddef446456
1.31.8: sha256:02697f8d14fc36089954380730f300df78b63dada1dc6f52d8e60bd5ce217d48
1.31.7: sha256:279e766a1a7c0dce2efae452c9de1e52b169df31c4b75c9d3b7d51f767ae6d42
1.31.6: sha256:ea50176095dd4650f6b270c79cf6d30deaaeb96ffa7d1eaac6924428cc9d2486
@@ -150,6 +155,7 @@ kubelet_checksums:
1.31.2: sha256:b0de6290267bbb4f6bcd9c4d50bb331e335f8dc47653644ae278844bb04c1fb6
1.31.1: sha256:50619fff95bdd7e690c049cc083f495ae0e7c66d0cdf6a8bcad298af5fe28438
1.31.0: sha256:39e7f1c61c8389ea7680690f8bd5dd733672fa16875ae598df0fd8c205df57a9
1.30.13: sha256:b8d8c3cc0c13b2e42c1d83ab6c03024825bc01887c923fd6f8568ebe066ec28e
1.30.12: sha256:aab260aa88dd27f785bdb64e7e5be0173bcd1a871d0fa84d5dc7736469f7c395
1.30.11: sha256:59177fc92e2b2bb988f7d8d39682ea9e3d9d883273c9c8b51b39502d9b965431
1.30.10: sha256:0c7aa1db3fa339aa13af0f825d25a76b3c74f785d4fcd49d6a0bc5a96f0971f0
@@ -164,11 +170,13 @@ kubelet_checksums:
1.30.1: sha256:87bd6e5de9c0769c605da5fedb77a35c8b764e3bda1632447883c935dcf219d3
1.30.0: sha256:32a32ec3d7e7f8b2648c9dd503ce9ef63b4af1d1677f5b5aed7846fb02d66f18
ppc64le:
1.32.5: sha256:b9cb7bf4b5518e1b5542717c82a753663154e08c84e336feba424cf3575313a3
1.32.4: sha256:62e7854ea84bf0fd5a9c47a1ab7ade7a74b4f160efdf486320ed913b4e8e7f79
1.32.3: sha256:efc2b01d4ab74f283ab4ff2bad4369e2b9f66fa875673b72627aa6e7a7b507cb
1.32.2: sha256:3602474e25b0b42a4b0f43ece2ca1e03fe5f3864f0936537256920bbb2eb9acd
1.32.1: sha256:623889368808042a236d7078d85a23ce5ef0e43b6fadc09bcacfdf704ac876b4
1.32.0: sha256:99d409a8023224d84c361e29cdf21ac0458a5449f03e12550288aa654539e3a1
1.31.9: sha256:53410497c9abf3355c89997654f0e1f189084888dc56a57199c6ed1c4e3cb61c
1.31.8: sha256:925bc404df4a54fed659db28e5bc55b5e4b6707f60d8aa26660b2a20f65a804c
1.31.7: sha256:159be13904091020c2be08a22155f3d3a2e22a0d31d96ceabfa84cabe1dbb6f7
1.31.6: sha256:910a4cfc99e18d6065a4d8abcd678559d278797a5de2110050cc75931b000d8f
@@ -178,6 +186,7 @@ kubelet_checksums:
1.31.2: sha256:b7eb859eaa5494273c587b0dcbb75a5a27251df5e140087de542cb7e358d79b1
1.31.1: sha256:5b9e8de02f797991670c3f16fa7e46edc7e862644bfa376573c2fca2eaf01519
1.31.0: sha256:b347b96dd79d3ac09e490669b38c5c2a49b5d73cf82cb619a1c54c6e0a165dbb
1.30.13: sha256:94ce01e6628f8339a9ff06f13e37298bbd2aedbcb3e37e7943ed8d90fd55e91d
1.30.12: sha256:b49ce79bbaedb9d40805be5f6968c6c9ee9a711dde9fc01831cd257dea7ae8a9
1.30.11: sha256:c9f778480278a4bda2c81cdeec7b2bab9c969299054f9e27234359fd5b80d6f3
1.30.10: sha256:e2061a23cac69937ab2454fc9f870f6e5cad4debe668e81389fd4a7fde36d3dd
@@ -193,11 +202,13 @@ kubelet_checksums:
1.30.0: sha256:8d4aa6b10bcddae9a7c754492743cfea88c1c6a4628cab98cdd29bb18d505d03
kubectl_checksums:
arm:
1.32.5: sha256:7270e6ac4b82b5e4bd037dccae1631964634214baa66a9548deb5edd3f79de31
1.32.4: sha256:bf28793213039690d018bbfa9bcfcfed76a9aa8e18dc299eced8709ca542fcdd
1.32.3: sha256:f990c878e54e5fac82eac7398ef643acca9807838b19014f1816fa9255b2d3d9
1.32.2: sha256:e1e6a2fd4571cd66c885aa42b290930660d34a7331ffb576fcab9fd1a0941a83
1.32.1: sha256:8ccf69be2578d3a324e9fc7d4f3b29bc9743cc02d72f33ba2d0fe30389014bc8
1.32.0: sha256:6b33ea8c80f785fb07be4d021301199ae9ee4f8d7ea037a8ae544d5a7514684e
1.31.9: sha256:54e560eb3ad4b2b0ae95d79d71b2816dfa154b33758e49f2583bec0980f19861
1.31.8: sha256:65fdd04f5171e44620cc4e0b9e0763b1b3d10b2b15c1f7f99b549d36482015d4
1.31.7: sha256:870d919f8ef5f5c608bd69c57893937910de6a8ed2c077fc4f0945375f61734d
1.31.6: sha256:b370a552cd6c9bb5fc42e4e9031b74f35da332f27b585760bacb0d3189d8634d
@@ -207,6 +218,7 @@ kubectl_checksums:
1.31.2: sha256:f2a638bdaa4764e82259ed1548ce2c86056e33a3d09147f7f0c2d4ee5b5e300c
1.31.1: sha256:51b178c9362a4fbe35644399f113d7f904d306261953a51c5c0a57676e209fa6
1.31.0: sha256:a4d6292c88c199688a03ea211bea08c8ae29f1794f5deeeef46862088d124baa
1.30.13: sha256:da7f49225c9c10f69371e5f351ea3049e3561cf02e92c31e72ee46d8575e8c1a
1.30.12: sha256:b8a5de1e9abc5c154fb466dd19758edd149cbea05ac4dfd64ba1f82461745f6f
1.30.11: sha256:ef419b7376850d2ca47413f15d6c94eeefb393ae648c9fb739e931da179adf06
1.30.10: sha256:71dc80f99598d9571191e7b5dc52b4c426da960426b3d62e644b173b50a4c2f2
@@ -221,11 +233,13 @@ kubectl_checksums:
1.30.1: sha256:b05c4c4b1c440e8797445b8b15e9f4a00010f1365533a2420b9e68428da19d89
1.30.0: sha256:ff54e96c73f4b87d740768f77edada7df8f2003f278d3c79bbbaa047b1fc708d
arm64:
1.32.5: sha256:9edee84103e63c40a37cd15bd11e04e7835f65cb3ff5a50972058ffc343b4d96
1.32.4: sha256:c6f96d0468d6976224f5f0d81b65e1a63b47195022646be83e49d38389d572c2
1.32.3: sha256:6c2c91e760efbf3fa111a5f0b99ba8975fb1c58bb3974eca88b6134bcf3717e2
1.32.2: sha256:7381bea99c83c264100f324c2ca6e7e13738a73b8928477ac805991440a065cd
1.32.1: sha256:98206fd83a4fd17f013f8c61c33d0ae8ec3a7c53ec59ef3d6a0a9400862dc5b2
1.32.0: sha256:ba4004f98f3d3a7b7d2954ff0a424caa2c2b06b78c17b1dccf2acc76a311a896
1.31.9: sha256:1e6de599df408824f13602d73333c08c3528cfa5d6c8c98c633868a966882129
1.31.8: sha256:bd76445943b22d976bdbd1d0709e4bcb5f0081cc02c10139f4b3e5e209dc3019
1.31.7: sha256:d95454093057af230f09e7b73ee9ae0714cf9e5197fbcb7b902881ca47b7e249
1.31.6: sha256:fc40a8bbdba41f022aced2dec729a1b9e937ad99872b430b6c2489f1f36a61f5
@@ -235,6 +249,7 @@ kubectl_checksums:
1.31.2: sha256:bb9fd6e5a92c2e2378954a2f1a8b4ccb2e8ba5a3635f870c3f306a53b359f971
1.31.1: sha256:3af2451191e27ecd4ac46bb7f945f76b71e934d54604ca3ffc7fe6f5dd123edb
1.31.0: sha256:f42832db7d77897514639c6df38214a6d8ae1262ee34943364ec1ffaee6c009c
1.30.13: sha256:afed1753b98ab30812203cb469e013082b25502c864f2889e8a0474aac497064
1.30.12: sha256:1af7e16a143c283a29821a09f5a006aacf0fe8368bc18adbd40588ba395e0352
1.30.11: sha256:11f86b29416f344b090c2581df4bc8a98ed7cc14a2bb28e46a6d4aa708af19f4
1.30.10: sha256:9d65d54f02b0b305d9f3f89d19a60d3e130e09f5407df99f6d48f8c10f31e2ae
@@ -249,11 +264,13 @@ kubectl_checksums:
1.30.1: sha256:d90446719b815e3abfe7b2c46ddf8b3fda17599f03ab370d6e47b1580c0e869e
1.30.0: sha256:669af0cf520757298ea60a8b6eb6b719ba443a9c7d35f36d3fb2fd7513e8c7d2
amd64:
1.32.5: sha256:aaa7e6ff3bd28c262f2d95c8c967597e097b092e9b79bcb37de699e7488e3e7b
1.32.4: sha256:10d739e9af8a59c9e7a730a2445916e04bc9cbb44bc79d22ce460cd329fa076c
1.32.3: sha256:ab209d0c5134b61486a0486585604a616a5bb2fc07df46d304b3c95817b2d79f
1.32.2: sha256:4f6a959dcc5b702135f8354cc7109b542a2933c46b808b248a214c1f69f817ea
1.32.1: sha256:e16c80f1a9f94db31063477eb9e61a2e24c1a4eee09ba776b029048f5369db0c
1.32.0: sha256:646d58f6d98ee670a71d9cdffbf6625aeea2849d567f214bc43a35f8ccb7bf70
1.31.9: sha256:720d31a15368ad56993c127a7d4fa2688a8520029c2e6be86b1a877ad6f92624
1.31.8: sha256:be0aa44a50a9aada4e9402e361ffb0d5bb1fd4f6950751399fcaf3b8b936a746
1.31.7: sha256:80a3c83f00241cd402bc8688464e5e3eedd52a461ee41d882f19cf04ad6d0379
1.31.6: sha256:c46b2f5b0027e919299d1eca073ebf13a4c5c0528dd854fc71a5b93396c9fa9d
@@ -263,6 +280,7 @@ kubectl_checksums:
1.31.2: sha256:399e9d1995da80b64d2ef3606c1a239018660d8b35209fba3f7b0bc11c631c68
1.31.1: sha256:57b514a7facce4ee62c93b8dc21fda8cf62ef3fed22e44ffc9d167eab843b2ae
1.31.0: sha256:7c27adc64a84d1c0cc3dcf7bf4b6e916cc00f3f576a2dbac51b318d926032437
1.30.13: sha256:b92bd89b27386b671841d5970b926b645c2ae44e5ca0663cff0f1c836a1530ee
1.30.12: sha256:261a3c4eb12e09207b9e08f0b43d547220569317ed8d7a22638572100ace5b80
1.30.11: sha256:228a8b2679f84de9192a1ac5ad527c9ab73b0f76c452ed74f11da812bbcfaa42
1.30.10: sha256:bc74dbeefd4b9d53f03016f6778f3ffc9a72ef4ca7b7c80fd5dc1a41d52dcab7
@@ -277,11 +295,13 @@ kubectl_checksums:
1.30.1: sha256:5b86f0b06e1a5ba6f8f00e2b01e8ed39407729c4990aeda961f83a586f975e8a
1.30.0: sha256:7c3807c0f5c1b30110a2ff1e55da1d112a6d0096201f1beb81b269f582b5d1c5
ppc64le:
1.32.5: sha256:1fc869a9d620982f16104f3b33c393aba54dd41136d18009bf6fc39accf6465c
1.32.4: sha256:61a8c1f441900b4e61defcb83bb54f61f883f9e75810897cfabfd6860ae7e195
1.32.3: sha256:11e1a377f404bdab6e3587375f7c2ee432df80b56d7ccf6151d4e48cd8063f55
1.32.2: sha256:c25500027cd331ae3e65bed2612491c5307721894e9d39e869f24ca14973677f
1.32.1: sha256:46d98d3463e065dff035d76f6c2b604c990d79634cc574d43b0c21f0367bbf0c
1.32.0: sha256:9f3f239e2601ce53ec4e70b80b7684f9c89817cc9938ed0bb14f125a3c4f8c8f
1.31.9: sha256:4a2786e8f5dcc2acc3820795811289d5a8e80ff34b5e311ac226af389236da94
1.31.8: sha256:4cc6503cecca4a385362392dc9b350837cd00a654ffc7ad424cc30ebf04c3fab
1.31.7: sha256:c00f6aca4ef62dac55b2e7e818c7907704ea96b72ff4861303ee1b5ac4a1158f
1.31.6: sha256:678d2299674c20414d83224caad9c4b8290105c2962c911ec90a2e661777e3aa
@@ -291,6 +311,7 @@ kubectl_checksums:
1.31.2: sha256:3a9405b1f8f606f282abb03bf3f926d160be454c21b3867505f15ad2123d4139
1.31.1: sha256:635275e4b207902bc6dda29de898e5152229271c46cb9613340e36c3abc2cb67
1.31.0: sha256:92393bc295423429522fa8c49724f95f31fa9bf20062d2c123e928d08886c95d
1.30.13: sha256:48a0287fb9d7b35bc2b7095976fcaf57225e9d3ae3d5c9c0165219f8d0ba39e9
1.30.12: sha256:d6434d10b4347cfe1aa93092bc8dd89a9ef0dd40e85b5aba7a705facfbff103f
1.30.11: sha256:d3de093b8b4c791aa171ad895c44fd738aa5b30135e4c7ee78ee6ac59b2967f2
1.30.10: sha256:1bd3adfcb66189575817e7e0149ecb1b6fc157bf06763232ed8d360df8ff29ab
@@ -306,11 +327,13 @@ kubectl_checksums:
1.30.0: sha256:f8a9eac6e12bc8ab7debe6c197d6536f5b3a9f199e8837afd8e4405291351811
kubeadm_checksums:
arm64:
1.32.5: sha256:2956c694ff2891acdc4690b807f87ab48419b4925d3fad2ac52ace2a1160bd17
1.32.4: sha256:1b9d97b44758dc4da20d31e3b6d46f50af75ac48be887793e16797a43d9c30e7
1.32.3: sha256:f9d007aaf1468ea862ef2a1a1a3f6f34cc57358742ceaff518e1533f5a794181
1.32.2: sha256:fd8a8c1c41d719de703bf49c6f56692dd6477188d8f43dcb77019fd8bc30cbd3
1.32.1: sha256:55a57145708aaa37f716f140ef774ca64b7088b6df5ee8eae182936ad6580328
1.32.0: sha256:5da9746a449a3b8a8312b6dd8c48dcb861036cf394306cfbc66a298ba1e8fbde
1.31.9: sha256:d8f5dbb17ce2dead6aedcc700e4293a9395e246079fcdc1772ab9e5cbfeca906
1.31.8: sha256:d0d1a6634e397e4f14b1e5f9b4bd55758ea70bfc114728730d25d563952e453e
1.31.7: sha256:3f95765db3b9ebb0cf2ff213ac3b42a831dd995a48d9a6b1d544137d3f2c3018
1.31.6: sha256:03b6df27c630f6137be129d2cef49dc4da12077381af8d234a92e451ba2a16d2
@@ -320,6 +343,7 @@ kubeadm_checksums:
1.31.2: sha256:0f9d231569b3195504f8458415e9b3080e23fb6a749fe7752abfc7a2884efadf
1.31.1: sha256:66195cd53cda3c73c9ae5e49a1352c710c0ea9ce244bbdeb68b917d809f0ea78
1.31.0: sha256:dbeb84862d844d58f67ad6be64021681a314cda162a04e6047f376f2a9ad0226
1.30.13: sha256:53a256e2ff51d51079e73c5856acfe4c2b1b71ea614aee3e832cf0a72b45fc71
1.30.12: sha256:7abc2db71e0ab3c7c30546851d254542f2c6778d4022437a47a1d48bd722a5d1
1.30.11: sha256:644f70389d6f5186685a2d94c0221b55a280a9ec14bd3f3609f008d9244c70e8
1.30.10: sha256:1dfba299e19ce4b1e605d39604b898c723274eba51495bd8547732a35b90a8c1
@@ -334,11 +358,13 @@ kubeadm_checksums:
1.30.1: sha256:bda423cb4b9d056f99a2ef116bdf227fadbc1c3309fa3d76da571427a7f41478
1.30.0: sha256:c36afd28921303e6db8e58274de16c60a80a1e75030fc3c4e9c4ed6249b6b696
amd64:
1.32.5: sha256:9070c3d469f5a3e777948b63a7a5e6c5bd7682c7416547770a78880fe4293ea9
1.32.4: sha256:445cdebd140dc0a9f4d18505821dcca77d7a21992133bf6731777f5724968255
1.32.3: sha256:be42caa726b85b7723605ca8fea22e4a26e0d439b789a3d9d6e636a7078b3db4
1.32.2: sha256:fb3a90f1bfc78146a8a03b50eb59aaf957a023c1c5a2b166062ef9412550bba6
1.32.1: sha256:5ed13bb4bc1d5fb4579b8cc8c7c2245356837122f9a3fd729c2f6d1338f58dcf
1.32.0: sha256:8a10abe691a693d6deeeb1c992bc75da9d8c76718a22327688f7eb1d7c15f0d6
1.31.9: sha256:9653845e48754df94842cce1ef76874e7f4c1a32d782dd0c7e6cf12e3a718dde
1.31.8: sha256:b979b58548902a152b0ab89265347c34aac9f1c7e9666953806267d033f0d63b
1.31.7: sha256:be84c87c7b40977edf67fb8ee231abb273b93bbab5bb770af0f3f37c0d7c4b81
1.31.6: sha256:c9d9add6c8cdbeb29d5e1743f23060fc06219b23f561eb9f959b5502fb055611
@@ -348,6 +374,7 @@ kubeadm_checksums:
1.31.2: sha256:e3d3f1051d9f7e431aabaf433f121c76fcf6d8401b7ea51f4c7af65af44f1e54
1.31.1: sha256:b3f92d19d482359116dd9ee9c0a10cb86e32a2a2aef79b853d5f07d6a093b0df
1.31.0: sha256:cf3b1a44b11ab226e40610e63d99fae7588a82940bb77da471a6dec624c819c2
1.30.13: sha256:dbea796b7b716f7b30ea99e021c3730ef3debace4c8a62c88abfc266b3ab7a96
1.30.12: sha256:88422e8b3749b5eaf50a9889a56ee5615cd8a027711f26c6687788e758b949f8
1.30.11: sha256:06ff7ff15b7fa9af60189fdece5f7c56efa8b637c38b4a498715ca2f04ccfcb2
1.30.10: sha256:177254194194975df68fd69a3647c86260a6c635bee42f516d3cecc047c4bc7c
@@ -362,11 +389,13 @@ kubeadm_checksums:
1.30.1: sha256:651faa3bbbfb368ed00460e4d11732614310b690b767c51810a7b638cc0961a2
1.30.0: sha256:29f4232c50e6524abba3443ff3b9948d386964d79eb8dfefb409e1f8a8434c14
ppc64le:
1.32.5: sha256:9ace8b24eba37d960a9cafd947015722c383bd695767b7a7c8449a4f6a3f3e9e
1.32.4: sha256:fb0223765d57c59ff4202445b3768e848b6d383dfac058b5882696bca0286053
1.32.3: sha256:68cc7669e47575ead58563c39abf89c7faf1c70fb6733ea9c727f303f2af1abf
1.32.2: sha256:02573483126e39c6b25c769131cf30ea7c470ad635374be343d5e76845a4ecdb
1.32.1: sha256:ff7f1dd3f1a6a5c0cf2c9977ec7c474bd22908850e33358dd40aeba17d8375b0
1.32.0: sha256:d79fe8cbd1d98bcbe56b8c0c3a64716603581cecf274951af49aa07748bf175a
1.31.9: sha256:0edee6d9df59cbde094dc7c78bc2cb326ef5ee05072a41196413d1952d078224
1.31.8: sha256:ce95a67e563099bf0020c8b577d12e1acd28fa622a317c5dbea4dcba38f1a4db
1.31.7: sha256:98c501edf7ceb4defd84a6925d9c69f6a8053f16342091af946ff2f2bdace10b
1.31.6: sha256:03cd9275b9437fc913cbc7b4a365671bd9cb52e67525dd1ba154c792bbfc44fa
@@ -376,6 +405,7 @@ kubeadm_checksums:
1.31.2: sha256:57771542703fbb18916728b3701298fda62f28a1d9f144ae3712846d2bb50f8a
1.31.1: sha256:76667e109e2dfcb332820c35f598b6f588b6f18c8b59acfb956fb9b4995dda4e
1.31.0: sha256:002307ea116a5aa5f78d3d9fb00e9981593711fb79fdfc9be0a9857c370bdcf3
1.30.13: sha256:6751937c03c3202afe650b015ded5ff2d2ec63db2d1a87fae50f07f3084049d8
1.30.12: sha256:dda533c81cbe3cc130f78dffa46c839015a5b75d889c95ee178f8989ff7d21f9
1.30.11: sha256:93f26ae616ad31d59a4160d1948a7b3a621cf8e8b47efe55e7ed84f9667a94fa
1.30.10: sha256:fe825263316c29eb9cf78267ad524953865d058744135121b6b0b5aa0dcbee8c
@@ -526,6 +556,11 @@ calicoctl_binary_checksums:
3.27.0: sha256:3de46d8bc30c6f9d9387d484ed62a5655c1f204b1b831b5a90f0a0d1c1ffd752
ciliumcli_binary_checksums:
arm64:
0.18.3: sha256:e0588268fc9ab6e0b7a363c4e15ecf69ed2a4cade956ab272745262e456f0e54
0.18.2: sha256:db3fae09ba005d6d345858655777bb5c972c9c841f98dc3fad3455d3084dba61
0.18.1: sha256:e6556fc7ccd071d7612446945d361c869dfeb423e0738147e0b46b2550bc2bf9
0.18.0: sha256:fd20a79875c8089694fb9b5dc3a0bf89d51711f9239637931ff0ace76ce78816
0.17.0: sha256:dee29ad27f3958882b450019e2021698282e8fcf8b136c27397798102cc1ad13
0.16.24: sha256:cf7f1276bbcf4aa5e6347d5619efe990cf1340d5898f8405931e277a1f76c670
0.16.23: sha256:7973302bead01c3f2e1d0f03e2766a0d6e76d3c52c666c750b9871a28b9afb32
0.16.22: sha256:b70c15e40b36ac34d59597f2448c5b4e0033964c517f926dbb9654aa07fb1e5b
@@ -561,6 +596,11 @@ ciliumcli_binary_checksums:
0.15.16: sha256:86ed6a2e796c39dd00072e7c141fc35b68d63392d1ac5e183a7ce9d7263e23a0
0.15.15: sha256:5c1693ea163b094a92ebc6997b6e678cc8c24a52040c22433b58b419de74b28f
amd64:
0.18.3: sha256:5fe565f3b98b5846b867319aa76bc057fca37894d80db56edc20e4e809d10b25
0.18.2: sha256:1b4bd5fd5c96ab1195cd4eb56841c983a21149c62ee39922b7955f1cd0eda23a
0.18.1: sha256:c472639d460173e8d807a3f57048f9d1bcdb325e9edba320550d7ec62b72f956
0.18.0: sha256:3ac8bd270763e40a7853c73f8c7ec9e49707e1723801884a083dc25469b6b4ba
0.17.0: sha256:4ba0687ff7d47e182a7328409fb0eae123e64fa6099cd6f8b9bf240c0012ecf4
0.16.24: sha256:019c9c765222b3db5786f7b3a0bff2cd62944a8ce32681acfb47808330f405a7
0.16.23: sha256:e7cd3b982eca9b6214226536a147490ebb6ea3caad40d5a724daeea0bec5e3be
0.16.22: sha256:8bd9faae272aef2e75c686a55de782018013098b66439a1ee0c8ff1e05c5d32c
@@ -824,6 +864,7 @@ kata_containers_binary_checksums:
3.2.0: sha256:40627b7ac677ce0f5ffc73b32c1a8bc553e75b746b6cdf8f14642ac27dac3148
gvisor_runsc_binary_checksums:
arm64:
'20250512.0': sha512:00e9edeb4a9ae702c9617a583f2978a042f20a05807acfa992bc76de4fae2e6e1e994d34ad6f21c826d2cfdea89f6a163c69c0750cf4d90135146438587a3a8c
'20250505.0': sha512:1611599c6788d3c3f7495b5054aaf9ec81e7a714061582f913359886452fa14f8e65b2bd2d139bc24b5955749167f0db03aefaa6b3ae175296b56814f53d7898
'20250429.0': sha512:bd58d212088263ad998fa62dbc7f2a8f74ea3914e8a7a319813c3e461f297dcdbc3e85069aacbcaa8c2e573b0e7b17d730d21ab96f8c3ca9516bd43acc070330
'20250421.0': sha512:647127e139c77d5d360db915d64a21f461fc11ea47d3660feb48952a70639155cd8c19e2bbe16d190a1666c6f689c45bda2aa5d3440596ef174983fe41d8539d
@@ -866,6 +907,7 @@ gvisor_runsc_binary_checksums:
'20240109.0': sha256:51a1b299997834b902192806def688b1e23ff6b14f28a9ed3397f3f6572a189a
'20231218.0': sha256:86262a78946deacc309c0f08883659ee3298c288048dc30955945e71993c81a8
amd64:
'20250512.0': sha512:981a554ad63f7ed082a43be646b8e910481e4bfc837c5ee5dd5a1353a47b0ae337f9b02700649a542db864ae35af6981e6bdef86c6a48a5e47dacfb97be9b7b0
'20250505.0': sha512:25705616c3cfc82bb5772e815b2b6b030664dccba7a0db9babcfad5de46d16ce8bff8cd9cc11d366da4acd0f01fb04a0d95bbae070aa923f1492d2f142f271c3
'20250429.0': sha512:b91d0351907290fe159cf041dbd332f8d2d4151d6a7aaafe161cd842452551b98fa1122e195e2cf42801eb8ff38716270de4f33331dde784cbfc452ec1e368a0
'20250421.0': sha512:419f80c01cef46aaab0a0eaf9be4bc20fd3aba94e8d0dd8ceacd3b166139d5bc8e701964feb11bf6de7a4274924692a7d0b5bcf5de34f5dfaeec57f7f1ecd88f
@@ -909,6 +951,7 @@ gvisor_runsc_binary_checksums:
'20231218.0': sha256:c353d36a134dfc2fab8509f72a34abf6a761603975eb00a39e4077c41aeaf31b
gvisor_containerd_shim_binary_checksums:
arm64:
'20250512.0': sha512:43daf4b8b0e094ebf2cede8bbbf89ee0695ff31924e140bdfcff529296e8f004b457485b9f991ae9ec93cf6150535e297db00a92be8a054589b3316841fbc056
'20250505.0': sha512:42cd72f9b2011a8ad166d9dc246fdb46ef602aae43127373750a7ff65be84f8b300c50e4977495ce59670af5fc5f92c3c5ba96c5d751cb4e6e2fcce373210e06
'20250429.0': sha512:9a9a2c351789e6a14896ec5e56ebe7ca1dc7424087d13c175e38a4522a7e6f1533ac8ef5aeea0bcbd554cd5e4d6b6d7ac3df2dfebcbea3e7164bf00fa823c310
'20250421.0': sha512:c86577ddb8b7b46b5b050000e242dc09bebeffa7cb9d21acb84c4ef896cfa340f024e2b9f463fd4f7945683854c524f4a45de3ff3917f4ba65552cede4229974
@@ -951,6 +994,7 @@ gvisor_containerd_shim_binary_checksums:
'20240109.0': sha256:40eb0a4f5f0013afb221e228fd6e71887127c4b09c7f2eb36705a0cd5c746d57
'20231218.0': sha256:5f66938de981221359a64f05a5c770b228090db3a2697d91ad622c18dd19f4b2
amd64:
'20250512.0': sha512:eb7acb5bbd24dd208643b0e91b2195fabd1ca3887612ad33bc34d62a86e4944f3ad80e7592ee5a49cbd6a12aeaa466127a7a220722c2ea64f37df96bebba4ac2
'20250505.0': sha512:11a1b003a73b2ae8924b03adc557966d815b79d756c9e40adc505c11ffe6f8e30153e5d133566bced39797fbd41651680fc17c0d7686d2ab3cf63b466e68dcc1
'20250429.0': sha512:42b16d541d589d96075c29e4bf7005bc429c28f411c928412fcd18f093b98f3a7969d799d567730e08f379ce9c2ba7c02bb1e8d10b7fa72179349ac2f40c8d7f
'20250421.0': sha512:eda25a84342130d3fe7f23ec3abad56de0fb08ac36c430b423c2d51cc21a75e902a4671ffac9481bf04f8985ded12110e65aa8a2032bda2699083d1b9b07a672

View File

@@ -1,9 +1,9 @@
---
cilium_min_version_required: "1.10"
cilium_min_version_required: "1.15"
# Log-level
cilium_debug: false
cilium_mtu: ""
cilium_mtu: "0"
cilium_enable_ipv4: "{{ ipv4_stack }}"
cilium_enable_ipv6: "{{ ipv6_stack }}"
@@ -11,7 +11,7 @@ cilium_enable_ipv6: "{{ ipv6_stack }}"
cilium_l2announcements: false
# Cilium agent health port
cilium_agent_health_port: "{%- if cilium_version is version('1.11.6', '>=') -%}9879{%- else -%}9876{%- endif -%}"
cilium_agent_health_port: "9879"
# Identity allocation mode selects how identities are shared between cilium
# nodes by setting how they are stored. The options are "crd" or "kvstore".
@@ -26,7 +26,7 @@ cilium_agent_health_port: "{%- if cilium_version is version('1.11.6', '>=') -%}9
# - --synchronize-k8s-nodes
# - --identity-allocation-mode=kvstore
# - Ref: https://docs.cilium.io/en/stable/internals/cilium_operator/#kvstore-operations
cilium_identity_allocation_mode: kvstore
cilium_identity_allocation_mode: crd
# Etcd SSL dirs
cilium_cert_dir: /etc/cilium/certs
@@ -55,20 +55,20 @@ cilium_enable_prometheus: false
cilium_enable_portmap: false
# Monitor aggregation level (none/low/medium/maximum)
cilium_monitor_aggregation: medium
# Kube Proxy Replacement mode (strict/partial)
cilium_kube_proxy_replacement: partial
# Kube Proxy Replacement mode (true/false)
cilium_kube_proxy_replacement: false
# If `cilium_dns_proxy_enable_transparent_mode` is not defined, the default Cilium behavior is used.
# When Cilium is configured to replace kube-proxy, it automatically enables dnsProxy, which can conflict with nodelocaldns.
# You can set it to `false` to avoid the conflict with nodelocaldns.
# https://github.com/cilium/cilium/issues/33144
# cilium_dns_proxy_enable_transparent_mode:
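As a minimal sketch (not part of this change), the two related knobs above could be overridden together in an inventory vars file — the file path is only an assumption — to run kube-proxy replacement while keeping nodelocaldns working:
# editor's illustration; path assumed, e.g. group_vars/k8s_cluster/k8s-net-cilium.yml
cilium_kube_proxy_replacement: true
cilium_dns_proxy_enable_transparent_mode: false  # avoid the dnsProxy/nodelocaldns conflict described above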
# If upgrading from Cilium < 1.5, you may want to override some of these options
# to prevent service disruptions. See also:
# http://docs.cilium.io/en/stable/install/upgrade/#changes-that-may-require-action
cilium_preallocate_bpf_maps: false
# `cilium_tofqdns_enable_poller` is deprecated in 1.8, removed in 1.9
cilium_tofqdns_enable_poller: false
# `cilium_enable_legacy_services` is deprecated in 1.6, removed in 1.9
cilium_enable_legacy_services: false
# Auto direct node routes can be used to advertise pod routes in your cluster
# without any tunnelling (with `cilium_tunnel_mode` set to `disabled`).
# This works only if you have L2 connectivity between all your nodes.
@@ -100,8 +100,8 @@ cilium_encryption_enabled: false
cilium_encryption_type: "ipsec"
# Enable encryption for pure node to node traffic.
# This option is only effective when `cilium_encryption_type` is set to `ipsec`.
cilium_ipsec_node_encryption: false
# This option is only effective when `cilium_encryption_type` is set to `wireguard`.
cilium_encryption_node_encryption: false
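A hedged sketch of how the encryption toggles above combine for WireGuard (values chosen only for illustration):
# editor's illustration of the encryption variables defined above
cilium_encryption_enabled: true
cilium_encryption_type: "wireguard"
cilium_encryption_node_encryption: true  # node-to-node encryption; only effective with the wireguard type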
# If your kernel or distribution does not support WireGuard, Cilium agent can be configured to fall back on the user-space implementation.
# When this flag is enabled and Cilium detects that the kernel has no native support for WireGuard,
@@ -115,6 +115,7 @@ cilium_wireguard_userspace_fallback: false
# In case they select the Pod at egress, then the bandwidth enforcement will be disabled for those Pods.
# Bandwidth Manager requires a v5.1.x or more recent Linux kernel.
cilium_enable_bandwidth_manager: false
cilium_enable_bandwidth_manager_bbr: false
# IP Masquerade Agent
# https://docs.cilium.io/en/stable/concepts/networking/masquerading/
@@ -137,6 +138,7 @@ cilium_non_masquerade_cidrs:
### Indicates whether to masquerade traffic to the link local prefix.
### If the masqLinkLocal is not set or set to false, then 169.254.0.0/16 is appended to the non-masquerade CIDRs list.
cilium_masq_link_local: false
cilium_masq_link_local_ipv6: false
### A time interval at which the agent attempts to reload config from disk
cilium_ip_masq_resync_interval: 60s
@@ -145,10 +147,10 @@ cilium_ip_masq_resync_interval: 60s
cilium_enable_hubble: false
### Enable Hubble-ui
cilium_enable_hubble_ui: "{{ cilium_enable_hubble }}"
### Enable Hubble Metrics
### Enable Hubble Metrics (deprecated)
cilium_enable_hubble_metrics: false
### if cilium_enable_hubble_metrics: true
cilium_hubble_metrics: {}
cilium_hubble_metrics: []
# - dns
# - drop
# - tcp
@@ -160,12 +162,25 @@ cilium_hubble_install: false
### Enable automatic certificate generation if cilium_hubble_install: true
cilium_hubble_tls_generate: false
cilium_hubble_export_file_max_backups: "5"
cilium_hubble_export_file_max_size_mb: "10"
cilium_hubble_export_dynamic_enabled: false
cilium_hubble_export_dynamic_config_content:
- name: all
fieldMask: []
includeFilters: []
excludeFilters: []
filePath: "/var/run/cilium/hubble/events.log"
### Capacity of Hubble events buffer. The provided value must be one less than an integer power of two and no larger than 65535
### (ie: 1, 3, ..., 2047, 4095, ..., 65535) (default 4095)
# cilium_hubble_event_buffer_capacity: 4095
### Buffer size of the channel to receive monitor events.
# cilium_hubble_event_queue_size: 50
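For illustration, a deployment enabling the new dynamic Hubble export could override these defaults as follows (the larger rotation values are arbitrary; the exporter entry mirrors the default shown above):
# editor's illustration, not part of this change
cilium_enable_hubble: true
cilium_hubble_export_dynamic_enabled: true
cilium_hubble_export_file_max_backups: "10"
cilium_hubble_export_file_max_size_mb: "50"
cilium_hubble_export_dynamic_config_content:
  - name: all
    fieldMask: []
    includeFilters: []
    excludeFilters: []
    filePath: "/var/run/cilium/hubble/events.log"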
cilium_gateway_api_enabled: false
# The default IP address management mode is "Cluster Scope".
# https://docs.cilium.io/en/stable/concepts/networking/ipam/
cilium_ipam_mode: cluster-pool
@@ -190,7 +205,8 @@ cilium_ipam_mode: cluster-pool
# Extra arguments for the Cilium agent
cilium_agent_custom_args: []
cilium_agent_custom_args: [] # deprecated
cilium_agent_extra_args: []
# For adding and mounting extra volumes to the cilium agent
cilium_agent_extra_volumes: []
@@ -214,13 +230,19 @@ cilium_operator_extra_volumes: []
cilium_operator_extra_volume_mounts: []
# Extra arguments for the Cilium Operator
cilium_operator_custom_args: []
cilium_operator_custom_args: [] # deprecated
cilium_operator_extra_args: []
# Tolerations of the cilium operator
cilium_operator_tolerations:
- operator: "Exists"
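If the operator should only tolerate control-plane taints instead of everything, the default above could be narrowed roughly like this (a sketch using the standard control-plane taint key):
# editor's illustration of overriding the new default
cilium_operator_tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"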
# Unique ID of the cluster. Must be unique across all connected
# clusters and in the range of 1 to 255. Only required for Cluster Mesh,
# may be 0 if Cluster Mesh is not used.
cilium_cluster_id: 0
# Name of the cluster. Only relevant when building a mesh of clusters.
# The "default" name cannot be used if the Cluster ID is different from 0.
cilium_cluster_name: default
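A minimal Cluster Mesh sketch based on the two variables above (placeholder values):
# editor's illustration; values are placeholders
cilium_cluster_id: 1           # must be unique per cluster, in the range 1-255
cilium_cluster_name: "east"    # "default" is not allowed once the ID is non-zero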
# Make Cilium take ownership over the `/etc/cni/net.d` directory on the node, renaming all non-Cilium CNI configurations to `*.cilium_bak`.
@@ -263,7 +285,7 @@ cilium_enable_bpf_masquerade: false
# host stack (true) or directly and more efficiently out of BPF (false) if
# the kernel supports it. The latter has the implication that it will also
# bypass netfilter in the host namespace.
cilium_enable_host_legacy_routing: true
cilium_enable_host_legacy_routing: false
# -- Enable use of the remote node identity.
# ref: https://docs.cilium.io/en/v1.7/install/upgrade/#configmap-remote-node-identity
@@ -307,9 +329,9 @@ cilium_rolling_restart_wait_retries_count: 30
cilium_rolling_restart_wait_retries_delay_seconds: 10
# Cilium changed the default metrics exporter ports in 1.12
cilium_agent_scrape_port: "{{ cilium_version is version('1.12', '>=') | ternary('9962', '9090') }}"
cilium_operator_scrape_port: "{{ cilium_version is version('1.12', '>=') | ternary('9963', '6942') }}"
cilium_hubble_scrape_port: "{{ cilium_version is version('1.12', '>=') | ternary('9965', '9091') }}"
cilium_agent_scrape_port: "9962"
cilium_operator_scrape_port: "9963"
cilium_hubble_scrape_port: "9965"
# Cilium certgen args for generate certificate for hubble mTLS
cilium_certgen_args:
@@ -328,23 +350,5 @@ cilium_certgen_args:
hubble-relay-client-cert-secret-name: hubble-relay-client-certs
hubble-relay-server-cert-generate: false
# A list of extra rules variables to add to clusterrole for cilium operator, formatted like:
# cilium_clusterrole_rules_operator_extra_vars:
# - apiGroups:
# - '""'
# resources:
# - pods
# verbs:
# - delete
# - apiGroups:
# - '""'
# resources:
# - nodes
# verbs:
# - list
# - watch
# resourceNames:
# - toto
cilium_clusterrole_rules_operator_extra_vars: []
cilium_enable_host_firewall: false
cilium_policy_audit_mode: false

View File

@@ -1,14 +1,7 @@
---
- name: Cilium | Start Resources
kube:
name: "{{ item.item.name }}"
namespace: "kube-system"
kubectl: "{{ bin_dir }}/kubectl"
resource: "{{ item.item.type }}"
filename: "{{ kube_config_dir }}/{{ item.item.name }}-{{ item.item.file }}"
state: "latest"
loop: "{{ cilium_node_manifests.results }}"
when: inventory_hostname == groups['kube_control_plane'][0] and not item is skipped
- name: Cilium | Install
command: "{{ bin_dir }}/cilium install --version {{ cilium_version }} -f {{ kube_config_dir }}/cilium-values.yaml"
when: inventory_hostname == groups['kube_control_plane'][0]
- name: Cilium | Wait for pods to run
command: "{{ kubectl }} -n kube-system get pods -l k8s-app=cilium -o jsonpath='{.items[?(@.status.containerStatuses[0].ready==false)].metadata.name}'" # noqa literal-compare
@@ -19,19 +12,6 @@
failed_when: false
when: inventory_hostname == groups['kube_control_plane'][0]
- name: Cilium | Hubble install
kube:
name: "{{ item.item.name }}"
namespace: "kube-system"
kubectl: "{{ bin_dir }}/kubectl"
resource: "{{ item.item.type }}"
filename: "{{ kube_config_dir }}/addons/hubble/{{ item.item.name }}-{{ item.item.file }}"
state: "latest"
loop: "{{ cilium_hubble_manifests.results }}"
when:
- inventory_hostname == groups['kube_control_plane'][0] and not item is skipped
- cilium_enable_hubble and cilium_hubble_install
- name: Cilium | Wait for CiliumLoadBalancerIPPool CRD to be present
command: "{{ kubectl }} wait --for condition=established --timeout=60s crd/ciliumloadbalancerippools.cilium.io"
register: cillium_lbippool_crd_ready

View File

@@ -48,7 +48,7 @@
msg: "cilium_encryption_type must be either 'ipsec' or 'wireguard'"
when: cilium_encryption_enabled
- name: Stop if cilium_version is < 1.10.0
- name: Stop if cilium_version is < {{ cilium_min_version_required }}
assert:
that: cilium_version is version(cilium_min_version_required, '>=')
msg: "cilium_version is too low. Minimum version {{ cilium_min_version_required }}"

View File

@@ -30,58 +30,6 @@
when:
- cilium_identity_allocation_mode == "kvstore"
- name: Cilium | Create hubble dir
file:
path: "{{ kube_config_dir }}/addons/hubble"
state: directory
owner: root
group: root
mode: "0755"
when:
- inventory_hostname == groups['kube_control_plane'][0]
- cilium_hubble_install
- name: Cilium | Create Cilium node manifests
template:
src: "{{ item.name }}/{{ item.file }}.j2"
dest: "{{ kube_config_dir }}/{{ item.name }}-{{ item.file }}"
mode: "0644"
loop:
- {name: cilium, file: config.yml, type: cm}
- {name: cilium-operator, file: crb.yml, type: clusterrolebinding}
- {name: cilium-operator, file: cr.yml, type: clusterrole}
- {name: cilium, file: crb.yml, type: clusterrolebinding}
- {name: cilium, file: cr.yml, type: clusterrole}
- {name: cilium, file: secret.yml, type: secret, when: "{{ cilium_encryption_enabled and cilium_encryption_type == 'ipsec' }}"}
- {name: cilium, file: ds.yml, type: ds}
- {name: cilium-operator, file: deploy.yml, type: deploy}
- {name: cilium-operator, file: sa.yml, type: sa}
- {name: cilium, file: sa.yml, type: sa}
register: cilium_node_manifests
when:
- ('kube_control_plane' in group_names)
- item.when | default(True) | bool
- name: Cilium | Create Cilium Hubble manifests
template:
src: "{{ item.name }}/{{ item.file }}.j2"
dest: "{{ kube_config_dir }}/addons/hubble/{{ item.name }}-{{ item.file }}"
mode: "0644"
loop:
- {name: hubble, file: config.yml, type: cm}
- {name: hubble, file: crb.yml, type: clusterrolebinding}
- {name: hubble, file: cr.yml, type: clusterrole}
- {name: hubble, file: cronjob.yml, type: cronjob, when: "{{ cilium_hubble_tls_generate }}"}
- {name: hubble, file: deploy.yml, type: deploy}
- {name: hubble, file: job.yml, type: job, when: "{{ cilium_hubble_tls_generate }}"}
- {name: hubble, file: sa.yml, type: sa}
- {name: hubble, file: service.yml, type: service}
register: cilium_hubble_manifests
when:
- inventory_hostname == groups['kube_control_plane'][0]
- cilium_enable_hubble and cilium_hubble_install
- item.when | default(True) | bool
- name: Cilium | Enable portmap addon
template:
src: 000-cilium-portmap.conflist.j2
@@ -89,6 +37,14 @@
mode: "0644"
when: cilium_enable_portmap
- name: Cilium | Render values
template:
src: values.yaml.j2
dest: "{{ kube_config_dir }}/cilium-values.yaml"
mode: "0644"
when:
- inventory_hostname == groups['kube_control_plane'][0]
- name: Cilium | Copy Ciliumcli binary from download dir
copy:
src: "{{ local_release_dir }}/cilium"

View File

@@ -1,193 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cilium-operator
rules:
- apiGroups:
- ""
resources:
# to automatically delete [core|kube]dns pods so that they start being
# managed by Cilium
- pods
verbs:
- get
- list
- watch
- delete
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
# To remove node taints
- nodes
# To set NetworkUnavailable false on startup
- nodes/status
verbs:
- patch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
# to perform LB IP allocation for BGP
- services/status
verbs:
- update
- patch
- apiGroups:
- ""
resources:
# to perform the translation of a CNP that contains `ToGroup` to its endpoints
- services
- endpoints
# to check apiserver connectivity
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- cilium.io
resources:
- ciliumnetworkpolicies
- ciliumnetworkpolicies/status
- ciliumnetworkpolicies/finalizers
- ciliumclusterwidenetworkpolicies
- ciliumclusterwidenetworkpolicies/status
- ciliumclusterwidenetworkpolicies/finalizers
- ciliumendpoints
- ciliumendpoints/status
- ciliumendpoints/finalizers
- ciliumnodes
- ciliumnodes/status
- ciliumnodes/finalizers
- ciliumidentities
- ciliumidentities/status
- ciliumidentities/finalizers
- ciliumlocalredirectpolicies
- ciliumlocalredirectpolicies/status
- ciliumlocalredirectpolicies/finalizers
{% if cilium_version is version('1.11', '>=') %}
- ciliumendpointslices
{% endif %}
{% if cilium_version is version('1.12', '>=') %}
- ciliumbgploadbalancerippools
- ciliumloadbalancerippools
- ciliumloadbalancerippools/status
- ciliumbgppeeringpolicies
- ciliumenvoyconfigs
{% endif %}
{% if cilium_version is version('1.15', '>=') %}
- ciliumbgppeerconfigs
- ciliumbgpadvertisements
- ciliumbgpnodeconfigs
{% endif %}
{% if cilium_version is version('1.16', '>=') %}
- ciliumbgpclusterconfigs
- ciliumbgpclusterconfigs/status
- ciliumbgpnodeconfigoverrides
{% endif %}
verbs:
- '*'
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- create
- get
- list
- update
- watch
# For cilium-operator running in HA mode.
#
# Cilium operator running in HA mode requires the use of ResourceLock for Leader Election
# between multiple running instances.
# The preferred way of doing this is to use LeasesResourceLock as edits to Leases are less
# common and fewer objects in the cluster watch "all Leases".
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- get
- update
{% if cilium_version is version('1.12', '>=') %}
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- update
resourceNames:
- ciliumbgploadbalancerippools.cilium.io
- ciliumbgppeeringpolicies.cilium.io
- ciliumclusterwideenvoyconfigs.cilium.io
- ciliumclusterwidenetworkpolicies.cilium.io
- ciliumegressgatewaypolicies.cilium.io
- ciliumegressnatpolicies.cilium.io
- ciliumendpoints.cilium.io
- ciliumendpointslices.cilium.io
- ciliumenvoyconfigs.cilium.io
- ciliumexternalworkloads.cilium.io
- ciliumidentities.cilium.io
- ciliumlocalredirectpolicies.cilium.io
- ciliumnetworkpolicies.cilium.io
- ciliumnodes.cilium.io
{% if cilium_version is version('1.14', '>=') %}
- ciliumnodeconfigs.cilium.io
- ciliumcidrgroups.cilium.io
- ciliuml2announcementpolicies.cilium.io
- ciliumpodippools.cilium.io
- ciliumloadbalancerippools.cilium.io
{% endif %}
{% if cilium_version is version('1.15', '>=') %}
- ciliumbgpclusterconfigs.cilium.io
- ciliumbgppeerconfigs.cilium.io
- ciliumbgpadvertisements.cilium.io
- ciliumbgpnodeconfigs.cilium.io
- ciliumbgpnodeconfigoverrides.cilium.io
{% endif %}
{% endif %}
{% for rules in cilium_clusterrole_rules_operator_extra_vars %}
- apiGroups:
{% for api in rules['apiGroups'] %}
- {{ api }}
{% endfor %}
resources:
{% for resource in rules['resources'] %}
- {{ resource }}
{% endfor %}
verbs:
{% for verb in rules['verbs'] %}
- {{ verb }}
{% endfor %}
{% if 'resourceNames' in rules %}
resourceNames:
{% for resourceName in rules['resourceNames'] %}
- {{ resourceName }}
{% endfor %}
{% endif %}
{% endfor %}

View File

@@ -1,13 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cilium-operator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cilium-operator
subjects:
- kind: ServiceAccount
name: cilium-operator
namespace: kube-system

View File

@@ -1,170 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cilium-operator
namespace: kube-system
labels:
io.cilium/app: operator
name: cilium-operator
spec:
{% if groups.k8s_cluster | length == 1 %}
replicas: 1
{% else %}
replicas: {{ cilium_operator_replicas }}
{% endif %}
selector:
matchLabels:
io.cilium/app: operator
name: cilium-operator
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
{% if cilium_enable_prometheus %}
annotations:
prometheus.io/port: "{{ cilium_operator_scrape_port }}"
prometheus.io/scrape: "true"
{% endif %}
labels:
io.cilium/app: operator
name: cilium-operator
spec:
containers:
- name: cilium-operator
image: "{{ cilium_operator_image_repo }}:{{ cilium_operator_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
command:
- cilium-operator
args:
- --config-dir=/tmp/cilium/config-map
- --debug=$(CILIUM_DEBUG)
{% if cilium_operator_custom_args is string %}
- {{ cilium_operator_custom_args }}
{% else %}
{% for flag in cilium_operator_custom_args %}
- {{ flag }}
{% endfor %}
{% endif %}
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: CILIUM_DEBUG
valueFrom:
configMapKeyRef:
key: debug
name: cilium-config
optional: true
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: cilium-aws
key: AWS_ACCESS_KEY_ID
optional: true
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: cilium-aws
key: AWS_SECRET_ACCESS_KEY
optional: true
- name: AWS_DEFAULT_REGION
valueFrom:
secretKeyRef:
name: cilium-aws
key: AWS_DEFAULT_REGION
optional: true
{% if (cilium_kube_proxy_replacement == 'strict') or (cilium_kube_proxy_replacement | bool) or (cilium_kube_proxy_replacement | string | lower == 'true') %}
- name: KUBERNETES_SERVICE_HOST
value: "{{ kube_apiserver_global_endpoint | urlsplit('hostname') }}"
- name: KUBERNETES_SERVICE_PORT
value: "{{ kube_apiserver_global_endpoint | urlsplit('port') }}"
{% endif %}
{% if cilium_enable_prometheus %}
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
ports:
- name: prometheus
containerPort: {{ cilium_operator_scrape_port }}
hostPort: {{ cilium_operator_scrape_port }}
protocol: TCP
{% endif %}
livenessProbe:
httpGet:
{% if cilium_enable_ipv4 %}
host: 127.0.0.1
{% else %}
host: '::1'
{% endif %}
path: /healthz
port: 9234
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 3
volumeMounts:
- name: cilium-config-path
mountPath: /tmp/cilium/config-map
readOnly: true
{% if cilium_identity_allocation_mode == "kvstore" %}
- name: etcd-config-path
mountPath: /var/lib/etcd-config
readOnly: true
- name: etcd-secrets
mountPath: "{{ cilium_cert_dir }}"
readOnly: true
{% endif %}
{% for volume_mount in cilium_operator_extra_volume_mounts %}
- {{ volume_mount | to_nice_yaml(indent=2) | indent(14) }}
{% endfor %}
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
restartPolicy: Always
priorityClassName: system-node-critical
serviceAccount: cilium-operator
serviceAccountName: cilium-operator
# In HA mode, cilium-operator pods must not be scheduled on the same
# node as they will clash with each other.
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
io.cilium/app: operator
tolerations:
{{ cilium_operator_tolerations | list | to_nice_yaml(indent=2) | indent(8) }}
volumes:
- name: cilium-config-path
configMap:
name: cilium-config
{% if cilium_identity_allocation_mode == "kvstore" %}
# To read the etcd config stored in config maps
- name: etcd-config-path
configMap:
name: cilium-config
defaultMode: 420
items:
- key: etcd-config
path: etcd.config
# To read the k8s etcd secrets in case the user might want to use TLS
- name: etcd-secrets
hostPath:
path: "{{ cilium_cert_dir }}"
{% endif %}
{% for volume in cilium_operator_extra_volumes %}
- {{ volume | to_nice_yaml(indent=2) | indent(10) }}
{% endfor %}

View File

@@ -1,6 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: cilium-operator
namespace: kube-system

View File

@@ -1,299 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: cilium-config
namespace: kube-system
data:
identity-allocation-mode: {{ cilium_identity_allocation_mode }}
{% if cilium_identity_allocation_mode == "kvstore" %}
# This etcd-config contains the etcd endpoints of your cluster. If you use
# TLS please make sure you follow the tutorial in https://cilium.link/etcd-config
etcd-config: |-
---
endpoints:
{% for ip_addr in etcd_access_addresses.split(',') %}
- {{ ip_addr }}
{% endfor %}
# In case you want to use TLS in etcd, uncomment the 'ca-file' line
# and create a kubernetes secret by following the tutorial in
# https://cilium.link/etcd-config
{% if cilium_version | regex_replace('v') is version('1.17.0', '>=') %}
trusted-ca-file: "{{ cilium_cert_dir }}/ca_cert.crt"
{% else %}
ca-file: "{{ cilium_cert_dir }}/ca_cert.crt"
{% endif %}
# In case you want client to server authentication, uncomment the following
# lines and create a kubernetes secret by following the tutorial in
# https://cilium.link/etcd-config
key-file: "{{ cilium_cert_dir }}/key.pem"
cert-file: "{{ cilium_cert_dir }}/cert.crt"
# kvstore
# https://docs.cilium.io/en/latest/cmdref/kvstore/
kvstore: etcd
kvstore-opt: '{"etcd.config": "/var/lib/etcd-config/etcd.config"}'
{% endif %}
# If you want metrics enabled in all of your Cilium agents, set the port for
# which the Cilium agents will have their metrics exposed.
# This option deprecates the "prometheus-serve-addr" in the
# "cilium-metrics-config" ConfigMap
# NOTE that this will open the port on ALL nodes where Cilium pods are
# scheduled.
{% if cilium_enable_prometheus %}
prometheus-serve-addr: ":{{ cilium_agent_scrape_port }}"
operator-prometheus-serve-addr: ":{{ cilium_operator_scrape_port }}"
enable-metrics: "true"
{% endif %}
# If you want to run cilium in debug mode change this value to true
debug: "{{ cilium_debug }}"
enable-ipv4: "{{ cilium_enable_ipv4 }}"
enable-ipv6: "{{ cilium_enable_ipv6 }}"
# If a serious issue occurs during Cilium startup, this
# invasive option may be set to true to remove all persistent
# state. Endpoints will not be restored using knowledge from a
# prior Cilium run, so they may receive new IP addresses upon
# restart. This also triggers clean-cilium-bpf-state.
clean-cilium-state: "false"
# If you want to clean cilium BPF state, set this to true;
# Removes all BPF maps from the filesystem. Upon restart,
# endpoints are restored with the same IP addresses, however
# any ongoing connections may be disrupted briefly.
# Loadbalancing decisions will be reset, so any ongoing
# connections via a service may be loadbalanced to a different
# backend after restart.
clean-cilium-bpf-state: "false"
# Users who wish to specify their own custom CNI configuration file must set
# custom-cni-conf to "true", otherwise Cilium may overwrite the configuration.
custom-cni-conf: "false"
{% if cilium_version is version('1.14.0', '>=') %}
# Tell the agent to generate and write a CNI configuration file
write-cni-conf-when-ready: /host/etc/cni/net.d/05-cilium.conflist
cni-exclusive: "{{ cilium_cni_exclusive }}"
cni-log-file: "{{ cilium_cni_log_file }}"
{% endif %}
# If you want cilium monitor to aggregate tracing for packets, set this level
# to "low", "medium", or "maximum". The higher the level, the less packets
# that will be seen in monitor output.
monitor-aggregation: "{{ cilium_monitor_aggregation }}"
# ct-global-max-entries-* specifies the maximum number of connections
# supported across all endpoints, split by protocol: tcp or other. One pair
# of maps uses these values for IPv4 connections, and another pair of maps
# use these values for IPv6 connections.
#
# If these values are modified, then during the next Cilium startup the
# tracking of ongoing connections may be disrupted. This may lead to brief
# policy drops or a change in loadbalancing decisions for a connection.
#
# For users upgrading from Cilium 1.2 or earlier, to minimize disruption
# during the upgrade process, comment out these options.
bpf-ct-global-tcp-max: "524288"
bpf-ct-global-any-max: "262144"
# Pre-allocation of map entries allows per-packet latency to be reduced, at
# the expense of up-front memory allocation for the entries in the maps. The
# default value below will minimize memory usage in the default installation;
# users who are sensitive to latency may consider setting this to "true".
#
# This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
# this option and behave as though it is set to "true".
#
# If this value is modified, then during the next Cilium startup the restore
# of existing endpoints and tracking of ongoing connections may be disrupted.
# This may lead to policy drops or a change in loadbalancing decisions for a
# connection for some time. Endpoints may need to be recreated to restore
# connectivity.
#
# If this option is set to "false" during an upgrade from 1.3 or earlier to
# 1.4 or later, then it may cause one-time disruptions during the upgrade.
preallocate-bpf-maps: "{{ cilium_preallocate_bpf_maps }}"
# Regular expression matching compatible Istio sidecar istio-proxy
# container image names
sidecar-istio-proxy-image: "cilium/istio_proxy"
# Encapsulation mode for communication between nodes
# Possible values:
# - disabled
# - vxlan (default)
# - geneve
{% if cilium_version is version('1.14.0', '<') %}
tunnel: "{{ cilium_tunnel_mode }}"
{% elif cilium_version is version('1.14.0', '>=') and cilium_tunnel_mode == 'disabled' %}
routing-mode: 'native'
{% elif cilium_version is version('1.14.0', '>=') and cilium_tunnel_mode != 'disabled' %}
routing-mode: 'tunnel'
tunnel-protocol: "{{ cilium_tunnel_mode }}"
{% endif %}
## DSR setting
bpf-lb-mode: "{{ cilium_loadbalancer_mode }}"
# l2
enable-l2-announcements: "{{ cilium_l2announcements }}"
# Enable Bandwidth Manager
# Cilium's bandwidth manager supports the kubernetes.io/egress-bandwidth Pod annotation.
# Bandwidth enforcement currently does not work in combination with L7 Cilium Network Policies.
# In case they select the Pod at egress, then the bandwidth enforcement will be disabled for those Pods.
# Bandwidth Manager requires a v5.1.x or more recent Linux kernel.
{% if cilium_enable_bandwidth_manager %}
enable-bandwidth-manager: "true"
{% endif %}
# Host Firewall and Policy Audit Mode
enable-host-firewall: "{{ cilium_enable_host_firewall | capitalize }}"
policy-audit-mode: "{{ cilium_policy_audit_mode | capitalize }}"
# Name of the cluster. Only relevant when building a mesh of clusters.
cluster-name: "{{ cilium_cluster_name }}"
# Unique ID of the cluster. Must be unique across all connected clusters and
# in the range of 1 to 255. Only relevant when building a mesh of clusters.
#cluster-id: 1
{% if cilium_cluster_id is defined %}
cluster-id: "{{ cilium_cluster_id }}"
{% endif %}
# `wait-bpf-mount` is removed after v1.10.4
# https://github.com/cilium/cilium/commit/d2217045cb3726a7f823174e086913b69b8090da
{% if cilium_version is version('1.10.4', '<') %}
# wait-bpf-mount makes init container wait until bpf filesystem is mounted
wait-bpf-mount: "false"
{% endif %}
# `kube-proxy-replacement=partial|strict|disabled` is deprecated since january 2024 and unsupported in 1.16.
# Replaced by `kube-proxy-replacement=true|false`
# https://github.com/cilium/cilium/pull/31286
{% if cilium_version is version('1.16', '<') %}
kube-proxy-replacement: "{{ cilium_kube_proxy_replacement }}"
{% else %}
kube-proxy-replacement: "{% if (cilium_kube_proxy_replacement == 'strict') or (cilium_kube_proxy_replacement | bool) or (cilium_kube_proxy_replacement | string | lower == 'true') %}true{% else %}false{% endif %}"
{% endif %}
# `native-routing-cidr` is deprecated in 1.10, removed in 1.12.
# Replaced by `ipv4-native-routing-cidr`
# https://github.com/cilium/cilium/pull/16695
{% if cilium_version is version('1.12', '<') %}
native-routing-cidr: "{{ cilium_native_routing_cidr }}"
{% else %}
{% if cilium_native_routing_cidr | length %}
ipv4-native-routing-cidr: "{{ cilium_native_routing_cidr }}"
{% endif %}
{% if cilium_native_routing_cidr_ipv6 | length %}
ipv6-native-routing-cidr: "{{ cilium_native_routing_cidr_ipv6 }}"
{% endif %}
{% endif %}
auto-direct-node-routes: "{{ cilium_auto_direct_node_routes }}"
operator-api-serve-addr: "{{ cilium_operator_api_serve_addr }}"
# Hubble settings
{% if cilium_enable_hubble %}
enable-hubble: "true"
{% if cilium_enable_hubble_metrics %}
hubble-metrics-server: ":{{ cilium_hubble_scrape_port }}"
hubble-metrics:
{% for hubble_metrics_cycle in cilium_hubble_metrics %}
{{ hubble_metrics_cycle }}
{% endfor %}
{% endif %}
{% if cilium_hubble_event_buffer_capacity is defined %}
hubble-event-buffer-capacity: "{{ cilium_hubble_event_buffer_capacity }}"
{% endif %}
{% if cilium_hubble_event_queue_size is defined %}
hubble-event-queue-size: "{{ cilium_hubble_event_queue_size }}"
{% endif %}
hubble-listen-address: ":4244"
{% if cilium_enable_hubble and cilium_hubble_install %}
hubble-disable-tls: "{% if cilium_hubble_tls_generate %}false{% else %}true{% endif %}"
hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
{% endif %}
{% endif %}
# IP Masquerade Agent
enable-ip-masq-agent: "{{ cilium_ip_masq_agent_enable }}"
{% for key, value in cilium_config_extra_vars.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
# Enable transparent network encryption
{% if cilium_encryption_enabled %}
{% if cilium_encryption_type == "ipsec" %}
enable-ipsec: "true"
ipsec-key-file: /etc/ipsec/keys
encrypt-node: "{{ cilium_ipsec_node_encryption }}"
{% endif %}
{% if cilium_encryption_type == "wireguard" %}
enable-wireguard: "true"
enable-wireguard-userspace-fallback: "{{ cilium_wireguard_userspace_fallback }}"
{% endif %}
{% endif %}
# IPAM settings
ipam: "{{ cilium_ipam_mode }}"
{% if cilium_ipam_mode == "cluster-pool" %}
cluster-pool-ipv4-cidr: "{{ cilium_pool_cidr | default(kube_pods_subnet) }}"
cluster-pool-ipv4-mask-size: "{{ cilium_pool_mask_size | default(kube_network_node_prefix) }}"
{% if cilium_enable_ipv6 %}
cluster-pool-ipv6-cidr: "{{ cilium_pool_cidr_ipv6 | default(kube_pods_subnet_ipv6) }}"
cluster-pool-ipv6-mask-size: "{{ cilium_pool_mask_size_ipv6 | default(kube_network_node_prefix_ipv6) }}"
{% endif %}
{% endif %}
agent-health-port: "{{ cilium_agent_health_port }}"
{% if cilium_version is version('1.11', '>=') and cilium_cgroup_host_root != '' %}
cgroup-root: "{{ cilium_cgroup_host_root }}"
{% endif %}
bpf-map-dynamic-size-ratio: "{{ cilium_bpf_map_dynamic_size_ratio }}"
enable-ipv4-masquerade: "{{ cilium_enable_ipv4_masquerade }}"
enable-ipv6-masquerade: "{{ cilium_enable_ipv6_masquerade }}"
enable-bpf-masquerade: "{{ cilium_enable_bpf_masquerade }}"
enable-host-legacy-routing: "{{ cilium_enable_host_legacy_routing }}"
enable-remote-node-identity: "{{ cilium_enable_remote_node_identity }}"
enable-well-known-identities: "{{ cilium_enable_well_known_identities }}"
monitor-aggregation-flags: "{{ cilium_monitor_aggregation_flags }}"
enable-bpf-clock-probe: "{{ cilium_enable_bpf_clock_probe }}"
enable-bgp-control-plane: "{{ cilium_enable_bgp_control_plane }}"
disable-cnp-status-updates: "{{ cilium_disable_cnp_status_updates }}"
{% if cilium_ip_masq_agent_enable %}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: ip-masq-agent
namespace: kube-system
data:
config: |
nonMasqueradeCIDRs:
{% for cidr in cilium_non_masquerade_cidrs %}
- {{ cidr }}
{% endfor %}
masqLinkLocal: {{ cilium_masq_link_local | bool }}
resyncInterval: "{{ cilium_ip_masq_resync_interval }}"
{% endif %}

View File

@@ -1,166 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cilium
rules:
- apiGroups:
- networking.k8s.io
resources:
- networkpolicies
verbs:
- get
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- namespaces
- services
- pods
- endpoints
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
{% if cilium_version is version('1.12', '<') %}
- apiGroups:
- ""
resources:
- pods
- pods/finalizers
verbs:
- get
- list
- watch
- update
- delete
- apiGroups:
- ""
resources:
- pods
- nodes
verbs:
- get
- list
- watch
- update
{% endif %}
- apiGroups:
- ""
resources:
- nodes
- nodes/status
verbs:
- patch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
# Deprecated for removal in v1.10
- create
- list
- watch
- update
# This is used when validating policies in preflight. This will need to stay
# until we figure out how to avoid "get" inside the preflight, and then
# should be removed ideally.
- get
- apiGroups:
- cilium.io
resources:
- ciliumnetworkpolicies
- ciliumnetworkpolicies/status
- ciliumclusterwidenetworkpolicies
- ciliumclusterwidenetworkpolicies/status
- ciliumendpoints
- ciliumendpoints/status
- ciliumnodes
- ciliumnodes/status
- ciliumidentities
- ciliumlocalredirectpolicies
- ciliumlocalredirectpolicies/status
- ciliumegressnatpolicies
{% if cilium_version is version('1.11', '>=') %}
- ciliumendpointslices
{% endif %}
{% if cilium_version is version('1.12', '>=') %}
- ciliumbgploadbalancerippools
- ciliumbgppeeringpolicies
{% if cilium_version is version('1.13', '>=') %}
- ciliumloadbalancerippools
{% endif %}
{% endif %}
{% if cilium_version is version('1.11.5', '<') %}
- ciliumnetworkpolicies/finalizers
- ciliumclusterwidenetworkpolicies/finalizers
- ciliumendpoints/finalizers
- ciliumnodes/finalizers
- ciliumidentities/finalizers
- ciliumlocalredirectpolicies/finalizers
{% endif %}
{% if cilium_version is version('1.14', '>=') %}
- ciliuml2announcementpolicies/status
{% endif %}
{% if cilium_version is version('1.15', '>=') %}
- ciliumbgpnodeconfigs
- ciliumbgpnodeconfigs/status
- ciliumbgpadvertisements
- ciliumbgppeerconfigs
{% endif %}
{% if cilium_version is version('1.16', '>=') %}
- ciliumbgpclusterconfigs
{% endif %}
verbs:
- '*'
{% if cilium_version is version('1.12', '>=') %}
- apiGroups:
- cilium.io
resources:
- ciliumclusterwideenvoyconfigs
- ciliumenvoyconfigs
- ciliumegressgatewaypolicies
verbs:
- list
- watch
{% endif %}
{% if cilium_version is version('1.14', '>=') %}
- apiGroups:
- cilium.io
resources:
- ciliumcidrgroups
- ciliuml2announcementpolicies
- ciliumpodippools
- ciliumloadbalancerippools
- ciliuml2announcementpolicies/status
verbs:
- list
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- get
- update
- list
- delete
{% endif %}

View File

@@ -1,13 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cilium
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cilium
subjects:
- kind: ServiceAccount
name: cilium
namespace: kube-system

View File

@@ -1,446 +0,0 @@
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: cilium
namespace: kube-system
labels:
k8s-app: cilium
spec:
selector:
matchLabels:
k8s-app: cilium
updateStrategy:
rollingUpdate:
# Specifies the maximum number of Pods that can be unavailable during the update process.
maxUnavailable: 2
type: RollingUpdate
template:
metadata:
annotations:
{% if cilium_enable_prometheus %}
prometheus.io/port: "{{ cilium_agent_scrape_port }}"
prometheus.io/scrape: "true"
{% endif %}
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"dedicated","operator":"Equal","value":"master","effect":"NoSchedule"}]'
labels:
k8s-app: cilium
spec:
containers:
- name: cilium-agent
image: "{{ cilium_image_repo }}:{{ cilium_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
command:
- cilium-agent
args:
- --config-dir=/tmp/cilium/config-map
{% if cilium_mtu != "" %}
- --mtu={{ cilium_mtu }}
{% endif %}
{% if cilium_agent_custom_args is string %}
- {{ cilium_agent_custom_args }}
{% else %}
{% for flag in cilium_agent_custom_args %}
- {{ flag }}
{% endfor %}
{% endif %}
startupProbe:
httpGet:
host: '127.0.0.1'
path: /healthz
port: {{ cilium_agent_health_port }}
scheme: HTTP
httpHeaders:
- name: "brief"
value: "true"
failureThreshold: 105
periodSeconds: 2
successThreshold: 1
livenessProbe:
httpGet:
host: '127.0.0.1'
path: /healthz
port: {{ cilium_agent_health_port }}
scheme: HTTP
httpHeaders:
- name: "brief"
value: "true"
failureThreshold: 10
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 5
readinessProbe:
httpGet:
host: 127.0.0.1
path: /healthz
port: {{ cilium_agent_health_port }}
scheme: HTTP
httpHeaders:
- name: "brief"
value: "true"
initialDelaySeconds: 5
periodSeconds: 30
successThreshold: 1
failureThreshold: 3
timeoutSeconds: 5
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: CILIUM_CLUSTERMESH_CONFIG
value: /var/lib/cilium/clustermesh/
{% if (cilium_kube_proxy_replacement == 'strict') or (cilium_kube_proxy_replacement | bool) or (cilium_kube_proxy_replacement | string | lower == 'true') %}
- name: KUBERNETES_SERVICE_HOST
value: "{{ kube_apiserver_global_endpoint | urlsplit('hostname') }}"
- name: KUBERNETES_SERVICE_PORT
value: "{{ kube_apiserver_global_endpoint | urlsplit('port') }}"
{% endif %}
{% for env_var in cilium_agent_extra_env_vars %}
- {{ env_var | to_nice_yaml(indent=2) | indent(10) }}
{% endfor %}
lifecycle:
{% if cilium_version is version('1.14', '<') %}
postStart:
exec:
command:
- "/cni-install.sh"
- "--cni-exclusive={{ cilium_cni_exclusive | string | lower }}"
{% if cilium_version is version('1.12', '>=') %}
- "--enable-debug={{ cilium_debug | string | lower }}"
- "--log-file={{ cilium_cni_log_file }}"
{% endif %}
{% endif %}
preStop:
exec:
command:
- /cni-uninstall.sh
resources:
limits:
cpu: {{ cilium_cpu_limit }}
memory: {{ cilium_memory_limit }}
requests:
cpu: {{ cilium_cpu_requests }}
memory: {{ cilium_memory_requests }}
{% if cilium_enable_prometheus or cilium_enable_hubble_metrics %}
ports:
{% endif %}
{% if cilium_enable_prometheus %}
- name: prometheus
containerPort: {{ cilium_agent_scrape_port }}
hostPort: {{ cilium_agent_scrape_port }}
protocol: TCP
{% endif %}
{% if cilium_enable_hubble_metrics %}
- name: hubble-metrics
containerPort: {{ cilium_hubble_scrape_port }}
hostPort: {{ cilium_hubble_scrape_port }}
protocol: TCP
{% endif %}
securityContext:
privileged: true
volumeMounts:
- name: bpf-maps
mountPath: /sys/fs/bpf
mountPropagation: Bidirectional
- name: cilium-run
mountPath: /var/run/cilium
{% if cilium_version is version('1.13.1', '<') %}
- name: cni-path
mountPath: /host/opt/cni/bin
{% endif %}
- name: etc-cni-netd
mountPath: /host/etc/cni/net.d
{% if cilium_identity_allocation_mode == "kvstore" %}
- name: etcd-config-path
mountPath: /var/lib/etcd-config
readOnly: true
- name: etcd-secrets
mountPath: "{{ cilium_cert_dir }}"
readOnly: true
{% endif %}
- name: clustermesh-secrets
mountPath: /var/lib/cilium/clustermesh
readOnly: true
- name: cilium-config-path
mountPath: /tmp/cilium/config-map
readOnly: true
{% if cilium_ip_masq_agent_enable %}
- name: ip-masq-agent
mountPath: /etc/config
readOnly: true
{% endif %}
# Needed to be able to load kernel modules
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: xtables-lock
mountPath: /run/xtables.lock
{% if cilium_encryption_enabled and cilium_encryption_type == "ipsec" %}
- name: cilium-ipsec-secrets
mountPath: /etc/ipsec
readOnly: true
{% endif %}
{% if cilium_hubble_install %}
- name: hubble-tls
mountPath: /var/lib/cilium/tls/hubble
readOnly: true
{% endif %}
{% for volume_mount in cilium_agent_extra_volume_mounts %}
- {{ volume_mount | to_nice_yaml(indent=2) | indent(10) }}
{% endfor %}
# In managed etcd mode, Cilium must be able to resolve the DNS name of the etcd service
{% if cilium_identity_allocation_mode == "kvstore" %}
dnsPolicy: ClusterFirstWithHostNet
{% endif %}
hostNetwork: true
initContainers:
{% if cilium_version is version('1.11', '>=') and cilium_cgroup_auto_mount %}
- name: mount-cgroup
image: "{{ cilium_image_repo }}:{{ cilium_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
env:
- name: CGROUP_ROOT
value: {{ cilium_cgroup_host_root }}
- name: BIN_PATH
value: /opt/cni/bin
command:
- sh
- -ec
# The statically linked Go program binary is invoked to avoid any
# dependency on utilities like sh and mount that can be missing on certain
# distros installed on the underlying host. Copy the binary to the
# same directory where we install cilium cni plugin so that exec permissions
# are available.
- |
cp /usr/bin/cilium-mount /hostbin/cilium-mount;
nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
rm /hostbin/cilium-mount
volumeMounts:
- name: hostproc
mountPath: /hostproc
- name: cni-path
mountPath: /hostbin
securityContext:
privileged: true
{% endif %}
{% if cilium_version is version('1.11.7', '>=') %}
- name: apply-sysctl-overwrites
image: "{{ cilium_image_repo }}:{{ cilium_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
env:
- name: BIN_PATH
value: /opt/cni/bin
command:
- sh
- -ec
# The statically linked Go program binary is invoked to avoid any
# dependency on utilities like sh that can be missing on certain
# distros installed on the underlying host. Copy the binary to the
# same directory where we install cilium cni plugin so that exec permissions
# are available.
- |
cp /usr/bin/cilium-sysctlfix /hostbin/cilium-sysctlfix;
nsenter --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-sysctlfix";
rm /hostbin/cilium-sysctlfix
volumeMounts:
- name: hostproc
mountPath: /hostproc
- name: cni-path
mountPath: /hostbin
securityContext:
privileged: true
{% endif %}
- name: clean-cilium-state
image: "{{ cilium_image_repo }}:{{ cilium_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
command:
- /init-container.sh
env:
- name: CILIUM_ALL_STATE
valueFrom:
configMapKeyRef:
name: cilium-config
key: clean-cilium-state
optional: true
- name: CILIUM_BPF_STATE
valueFrom:
configMapKeyRef:
name: cilium-config
key: clean-cilium-bpf-state
optional: true
# Removed in 1.11 and up.
# https://github.com/cilium/cilium/commit/f7a3f59fd74983c600bfce9cac364b76d20849d9
{% if cilium_version is version('1.11', '<') %}
- name: CILIUM_WAIT_BPF_MOUNT
valueFrom:
configMapKeyRef:
key: wait-bpf-mount
name: cilium-config
optional: true
{% endif %}
{% if (cilium_kube_proxy_replacement == 'strict') or (cilium_kube_proxy_replacement | bool) or (cilium_kube_proxy_replacement | string | lower == 'true') %}
- name: KUBERNETES_SERVICE_HOST
value: "{{ kube_apiserver_global_endpoint | urlsplit('hostname') }}"
- name: KUBERNETES_SERVICE_PORT
value: "{{ kube_apiserver_global_endpoint | urlsplit('port') }}"
{% endif %}
securityContext:
privileged: true
volumeMounts:
- name: bpf-maps
mountPath: /sys/fs/bpf
{% if cilium_version is version('1.11', '>=') %}
# Required to mount cgroup filesystem from the host to cilium agent pod
- name: cilium-cgroup
mountPath: {{ cilium_cgroup_host_root }}
mountPropagation: HostToContainer
{% endif %}
- name: cilium-run
mountPath: /var/run/cilium
resources:
requests:
cpu: 100m
memory: 100Mi
{% if cilium_version is version('1.13.1', '>=') %}
# Install the CNI binaries in an InitContainer so we don't have a writable host mount in the agent
- name: install-cni-binaries
image: "{{ cilium_image_repo }}:{{ cilium_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
command:
- "/install-plugin.sh"
resources:
requests:
cpu: 100m
memory: 10Mi
securityContext:
privileged: true
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- name: cni-path
mountPath: /host/opt/cni/bin
{% endif %}
restartPolicy: Always
priorityClassName: system-node-critical
serviceAccount: cilium
serviceAccountName: cilium
terminationGracePeriodSeconds: 1
hostNetwork: true
# In managed etcd mode, Cilium must be able to resolve the DNS name of the etcd service
{% if cilium_identity_allocation_mode == "kvstore" %}
dnsPolicy: ClusterFirstWithHostNet
{% endif %}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
k8s-app: cilium
tolerations:
- operator: Exists
volumes:
# To keep state between restarts / upgrades
- name: cilium-run
hostPath:
path: /var/run/cilium
type: DirectoryOrCreate
# To keep state between restarts / upgrades for bpf maps
- name: bpf-maps
hostPath:
path: /sys/fs/bpf
type: DirectoryOrCreate
{% if cilium_version is version('1.11', '>=') %}
# To mount cgroup2 filesystem on the host
- name: hostproc
hostPath:
path: /proc
type: Directory
# To keep state between restarts / upgrades for cgroup2 filesystem
- name: cilium-cgroup
hostPath:
path: {{ cilium_cgroup_host_root }}
type: DirectoryOrCreate
{% endif %}
# To install cilium cni plugin in the host
- name: cni-path
hostPath:
path: /opt/cni/bin
type: DirectoryOrCreate
# To install cilium cni configuration in the host
- name: etc-cni-netd
hostPath:
path: /etc/cni/net.d
type: DirectoryOrCreate
# To be able to load kernel modules
- name: lib-modules
hostPath:
path: /lib/modules
# To access iptables concurrently with other processes (e.g. kube-proxy)
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
{% if cilium_identity_allocation_mode == "kvstore" %}
# To read the etcd config stored in config maps
- name: etcd-config-path
configMap:
name: cilium-config
# note: the leading zero means this number is in octal representation: do not remove it
defaultMode: 0400
items:
- key: etcd-config
path: etcd.config
# To read the k8s etcd secrets in case the user might want to use TLS
- name: etcd-secrets
hostPath:
path: "{{ cilium_cert_dir }}"
{% endif %}
# To read the clustermesh configuration
- name: clustermesh-secrets
secret:
secretName: cilium-clustermesh
# note: the leading zero means this number is in octal representation: do not remove it
defaultMode: 0400
optional: true
# To read the configuration from the config map
- name: cilium-config-path
configMap:
name: cilium-config
{% if cilium_ip_masq_agent_enable %}
- name: ip-masq-agent
configMap:
name: ip-masq-agent
optional: true
items:
- key: config
path: ip-masq-agent
{% endif %}
{% if cilium_encryption_enabled and cilium_encryption_type == "ipsec" %}
- name: cilium-ipsec-secrets
secret:
secretName: cilium-ipsec-keys
{% endif %}
{% if cilium_hubble_install %}
- name: hubble-tls
projected:
# note: the leading zero means this number is in octal representation: do not remove it
defaultMode: 0400
sources:
- secret:
name: hubble-server-certs
optional: true
items:
- key: ca.crt
path: client-ca.crt
- key: tls.crt
path: server.crt
- key: tls.key
path: server.key
{% endif %}

View File

@@ -1,6 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: cilium
namespace: kube-system

View File

@@ -1,9 +0,0 @@
---
apiVersion: v1
data:
keys: {{ cilium_ipsec_key }}
kind: Secret
metadata:
name: cilium-ipsec-keys
namespace: kube-system
type: Opaque

View File

@@ -1,71 +0,0 @@
#jinja2: trim_blocks:False
---
# Source: cilium helm chart: cilium/templates/hubble-relay/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: hubble-relay-config
namespace: kube-system
data:
config.yaml: |
cluster-name: "{{ cilium_cluster_name }}"
peer-service: "hubble-peer.kube-system.svc.{{ dns_domain }}:443"
listen-address: :4245
metrics-listen-address: ":9966"
dial-timeout:
retry-timeout:
sort-buffer-len-max:
sort-buffer-drain-timeout:
tls-client-cert-file: /var/lib/hubble-relay/tls/client.crt
tls-client-key-file: /var/lib/hubble-relay/tls/client.key
tls-server-cert-file: /var/lib/hubble-relay/tls/server.crt
tls-server-key-file: /var/lib/hubble-relay/tls/server.key
tls-hubble-server-ca-files: /var/lib/hubble-relay/tls/hubble-server-ca.crt
disable-server-tls: {% if cilium_hubble_tls_generate %}false{% else %}true{% endif %}
disable-client-tls: {% if cilium_hubble_tls_generate %}false{% else %}true{% endif %}
---
# Source: cilium/templates/hubble-ui/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: hubble-ui-nginx
namespace: kube-system
data:
nginx.conf: |
server {
listen 8081;
{% if cilium_enable_ipv6 %}
listen [::]:8081;
{% endif %}
server_name localhost;
root /app;
index index.html;
client_max_body_size 1G;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
# CORS
add_header Access-Control-Allow-Methods "GET, POST, PUT, HEAD, DELETE, OPTIONS";
add_header Access-Control-Allow-Origin *;
add_header Access-Control-Max-Age 1728000;
add_header Access-Control-Expose-Headers content-length,grpc-status,grpc-message;
add_header Access-Control-Allow-Headers range,keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout;
if ($request_method = OPTIONS) {
return 204;
}
# /CORS
location /api {
proxy_http_version 1.1;
proxy_pass_request_headers on;
proxy_hide_header Access-Control-Allow-Origin;
proxy_pass http://127.0.0.1:8090;
}
location / {
try_files $uri $uri/ /index.html;
}
}
}

View File

@@ -1,108 +0,0 @@
{% if cilium_hubble_tls_generate %}
---
# Source: cilium/templates/hubble-generate-certs-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: hubble-generate-certs
rules:
- apiGroups:
- ""
resources:
- secrets
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- secrets
resourceNames:
- hubble-server-certs
- hubble-relay-client-certs
- hubble-relay-server-certs
verbs:
- update
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
- hubble-ca-cert
verbs:
- update
- apiGroups:
- ""
resources:
- secrets
resourceNames:
- hubble-ca-secret
verbs:
- get
{% endif %}
---
# Source: cilium/templates/hubble-relay-clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: hubble-relay
rules:
- apiGroups:
- ""
resources:
- componentstatuses
- endpoints
- namespaces
- nodes
- pods
- services
verbs:
- get
- list
- watch
{% if cilium_enable_hubble_ui %}
---
# Source: cilium/templates/hubble-ui-clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: hubble-ui
rules:
- apiGroups:
- networking.k8s.io
resources:
- networkpolicies
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- componentstatuses
- endpoints
- namespaces
- nodes
- pods
- services
verbs:
- get
- list
- watch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- get
- list
- watch
- apiGroups:
- cilium.io
resources:
- "*"
verbs:
- get
- list
- watch
{% endif %}

View File

@@ -1,46 +0,0 @@
{% if cilium_hubble_tls_generate %}
---
# Source: cilium/templates/hubble-generate-certs-clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: hubble-generate-certs
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: hubble-generate-certs
subjects:
- kind: ServiceAccount
name: hubble-generate-certs
namespace: kube-system
{% endif %}
---
# Source: cilium/templates/hubble-relay-clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: hubble-relay
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: hubble-relay
subjects:
- kind: ServiceAccount
namespace: kube-system
name: hubble-relay
{% if cilium_enable_hubble_ui %}
---
# Source: cilium/templates/hubble-ui-clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: hubble-ui
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: hubble-ui
subjects:
- kind: ServiceAccount
namespace: kube-system
name: hubble-ui
{% endif %}

View File

@@ -1,38 +0,0 @@
---
# Source: cilium/templates/hubble-generate-certs-cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: hubble-generate-certs
namespace: kube-system
labels:
k8s-app: hubble-generate-certs
spec:
schedule: "0 0 1 */4 *"
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
metadata:
labels:
k8s-app: hubble-generate-certs
spec:
serviceAccount: hubble-generate-certs
serviceAccountName: hubble-generate-certs
containers:
- name: certgen
image: "{{ cilium_hubble_certgen_image_repo }}:{{ cilium_hubble_certgen_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
command:
- "/usr/bin/cilium-certgen"
# Because this is executed as a job, we pass the values as command
# line args instead of via a config map. This allows users to inspect
# the values used in past runs by examining the completed pod.
args:
{% for key, value in cilium_certgen_args.items() -%}
- "--{{ key }}={{ value }}"
{% endfor %}
hostNetwork: true
restartPolicy: OnFailure
ttlSecondsAfterFinished: 1800
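
A minimal sketch of how the args loop above renders, assuming a hypothetical cilium_certgen_args mapping (the flag names below are illustrative only, not taken from this change):

cilium_certgen_args:
  cilium-namespace: kube-system
  hubble-ca-generate: "true"

renders roughly to:

args:
- "--cilium-namespace=kube-system"
- "--hubble-ca-generate=true"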

View File

@@ -1,203 +0,0 @@
---
# Source: cilium/templates/hubble-relay-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hubble-relay
labels:
k8s-app: hubble-relay
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
k8s-app: hubble-relay
strategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
annotations:
labels:
k8s-app: hubble-relay
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "k8s-app"
operator: In
values:
- cilium
topologyKey: "kubernetes.io/hostname"
containers:
- name: hubble-relay
image: "{{ cilium_hubble_relay_image_repo }}:{{ cilium_hubble_relay_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
command:
- hubble-relay
args:
- serve
ports:
- name: grpc
containerPort: 4245
{% if cilium_enable_prometheus %}
- name: prometheus
containerPort: 9966
protocol: TCP
{% endif %}
readinessProbe:
tcpSocket:
port: grpc
livenessProbe:
tcpSocket:
port: grpc
volumeMounts:
- mountPath: /var/run/cilium
name: hubble-sock-dir
readOnly: true
- mountPath: /etc/hubble-relay
name: config
readOnly: true
{% if cilium_hubble_tls_generate -%}
- mountPath: /var/lib/hubble-relay/tls
name: tls
readOnly: true
{%- endif %}
restartPolicy: Always
serviceAccount: hubble-relay
serviceAccountName: hubble-relay
terminationGracePeriodSeconds: 0
volumes:
- configMap:
name: hubble-relay-config
items:
- key: config.yaml
path: config.yaml
name: config
- hostPath:
path: /var/run/cilium
type: Directory
name: hubble-sock-dir
{% if cilium_hubble_tls_generate -%}
- projected:
sources:
- secret:
name: hubble-relay-client-certs
items:
- key: ca.crt
path: hubble-server-ca.crt
- key: tls.crt
path: client.crt
- key: tls.key
path: client.key
- secret:
name: hubble-server-certs
items:
- key: tls.crt
path: server.crt
- key: tls.key
path: server.key
name: tls
{%- endif %}
{% if cilium_enable_hubble_ui %}
---
# Source: cilium/templates/hubble-ui/deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
namespace: kube-system
labels:
k8s-app: hubble-ui
name: hubble-ui
spec:
replicas: 1
selector:
matchLabels:
k8s-app: hubble-ui
template:
metadata:
annotations:
labels:
k8s-app: hubble-ui
spec:
securityContext:
runAsUser: 1001
serviceAccount: hubble-ui
serviceAccountName: hubble-ui
containers:
- name: frontend
image: "{{ cilium_hubble_ui_image_repo }}:{{ cilium_hubble_ui_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
ports:
- containerPort: 8081
name: http
volumeMounts:
- name: hubble-ui-nginx-conf
mountPath: /etc/nginx/conf.d/default.conf
subPath: nginx.conf
- name: tmp-dir
mountPath: /tmp
resources:
{}
- name: backend
image: "{{ cilium_hubble_ui_backend_image_repo }}:{{ cilium_hubble_ui_backend_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
env:
- name: EVENTS_SERVER_PORT
value: "8090"
{% if cilium_hubble_tls_generate -%}
- name: TLS_TO_RELAY_ENABLED
value: "true"
- name: FLOWS_API_ADDR
value: "hubble-relay:443"
- name: TLS_RELAY_SERVER_NAME
value: ui.{{ cilium_cluster_name }}.hubble-grpc.cilium.io
- name: TLS_RELAY_CA_CERT_FILES
value: /var/lib/hubble-ui/certs/hubble-server-ca.crt
- name: TLS_RELAY_CLIENT_CERT_FILE
value: /var/lib/hubble-ui/certs/client.crt
- name: TLS_RELAY_CLIENT_KEY_FILE
value: /var/lib/hubble-ui/certs/client.key
{% else -%}
- name: FLOWS_API_ADDR
value: "hubble-relay:80"
{% endif %}
{% if cilium_hubble_tls_generate -%}
volumeMounts:
- name: tls
mountPath: /var/lib/hubble-ui/certs
readOnly: true
{%- endif %}
ports:
- containerPort: 8090
name: grpc
resources:
{}
volumes:
- configMap:
defaultMode: 420
name: hubble-ui-nginx
name: hubble-ui-nginx-conf
{% if cilium_hubble_tls_generate -%}
- projected:
sources:
- secret:
name: hubble-relay-client-certs
items:
- key: ca.crt
path: hubble-server-ca.crt
- key: tls.crt
path: client.crt
- key: tls.key
path: client.key
name: tls
{%- endif %}
- emptyDir: {}
name: tmp-dir
{% endif %}

View File

@@ -1,34 +0,0 @@
---
# Source: cilium/templates/hubble-generate-certs-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: hubble-generate-certs
namespace: kube-system
labels:
k8s-app: hubble-generate-certs
spec:
template:
metadata:
labels:
k8s-app: hubble-generate-certs
spec:
serviceAccount: hubble-generate-certs
serviceAccountName: hubble-generate-certs
containers:
- name: certgen
image: "{{ cilium_hubble_certgen_image_repo }}:{{ cilium_hubble_certgen_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
command:
- "/usr/bin/cilium-certgen"
# Because this is executed as a job, we pass the values as command
# line args instead of via a config map. This allows users to inspect
# the values used in past runs by examining the completed pod.
args:
{% for key, value in cilium_certgen_args.items() -%}
- "--{{ key }}={{ value }}"
{% endfor %}
hostNetwork: true
restartPolicy: OnFailure
ttlSecondsAfterFinished: 1800

View File

@@ -1,25 +0,0 @@
{% if cilium_hubble_tls_generate %}
---
# Source: cilium/templates/hubble-generate-certs-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: hubble-generate-certs
namespace: kube-system
{% endif %}
---
# Source: cilium/templates/hubble-relay-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: hubble-relay
namespace: kube-system
{% if cilium_enable_hubble_ui %}
---
# Source: cilium/templates/hubble-ui-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: hubble-ui
namespace: kube-system
{% endif %}

View File

@@ -1,106 +0,0 @@
{% if cilium_enable_prometheus or cilium_enable_hubble_metrics %}
---
# Source: cilium/templates/cilium-agent-service.yaml
kind: Service
apiVersion: v1
metadata:
name: hubble-metrics
namespace: kube-system
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: "{{ cilium_hubble_scrape_port }}"
labels:
k8s-app: hubble
spec:
clusterIP: None
type: ClusterIP
ports:
- name: hubble-metrics
port: 9091
protocol: TCP
targetPort: hubble-metrics
selector:
k8s-app: cilium
---
# Source: cilium/templates/hubble-relay/metrics-service.yaml
# We use a separate service from hubble-relay which can be exposed externally
kind: Service
apiVersion: v1
metadata:
name: hubble-relay-metrics
namespace: kube-system
labels:
k8s-app: hubble-relay
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: "9966"
spec:
clusterIP: None
type: ClusterIP
selector:
k8s-app: hubble-relay
ports:
- name: metrics
port: 9966
protocol: TCP
targetPort: prometheus
{% endif %}
---
# Source: cilium/templates/hubble-relay-service.yaml
kind: Service
apiVersion: v1
metadata:
name: hubble-relay
namespace: kube-system
labels:
k8s-app: hubble-relay
spec:
type: ClusterIP
selector:
k8s-app: hubble-relay
ports:
- protocol: TCP
{% if cilium_hubble_tls_generate -%}
port: 443
{% else -%}
port: 80
{% endif -%}
targetPort: 4245
---
{% if cilium_enable_hubble_ui %}
# Source: cilium/templates/hubble-ui-service.yaml
kind: Service
apiVersion: v1
metadata:
name: hubble-ui
labels:
k8s-app: hubble-ui
namespace: kube-system
spec:
selector:
k8s-app: hubble-ui
ports:
- name: http
port: 80
targetPort: 8081
type: ClusterIP
---
{% endif %}
# Source: cilium/templates/hubble/peer-service.yaml
apiVersion: v1
kind: Service
metadata:
name: hubble-peer
namespace: kube-system
labels:
k8s-app: cilium
spec:
selector:
k8s-app: cilium
ports:
- name: peer-service
port: 443
protocol: TCP
targetPort: 4244
internalTrafficPolicy: Local
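
The Jinja conditional in the hubble-relay Service above switches the exposed port on cilium_hubble_tls_generate while keeping the same targetPort; a minimal sketch of the rendered ports block, assuming TLS generation is enabled:

ports:
- protocol: TCP
  port: 443
  targetPort: 4245

With cilium_hubble_tls_generate set to false, port: 80 is emitted instead.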

View File

@@ -0,0 +1,164 @@
MTU: {{ cilium_mtu }}
debug:
enabled: {{ cilium_debug }}
image:
repository: {{ cilium_image_repo }}
tag: {{ cilium_image_tag }}
k8sServiceHost: "auto"
k8sServicePort: "auto"
ipv4:
enabled: {{ cilium_enable_ipv4 }}
ipv6:
enabled: {{ cilium_enable_ipv6 }}
l2announcements:
enabled: {{ cilium_l2announcements }}
healthPort: {{ cilium_agent_health_port }}
identityAllocationMode: {{ cilium_identity_allocation_mode }}
tunnelProtocol: {{ cilium_tunnel_mode }}
loadBalancer:
mode: {{ cilium_loadbalancer_mode }}
kubeProxyReplacement: {{ cilium_kube_proxy_replacement }}
{% if cilium_dns_proxy_enable_transparent_mode is defined %}
dnsProxy:
enableTransparentMode: {{ cilium_dns_proxy_enable_transparent_mode }}
{% endif %}
extraVolumes:
{{ cilium_agent_extra_volumes | to_nice_yaml(indent=2) | indent(2) }}
extraVolumeMounts:
{{ cilium_agent_extra_volume_mounts | to_nice_yaml(indent=2) | indent(2) }}
extraArgs:
{{ cilium_agent_extra_args | to_nice_yaml(indent=2) | indent(2) }}
bpf:
masquerade: {{ cilium_enable_bpf_masquerade }}
hostLegacyRouting: {{ cilium_enable_host_legacy_routing }}
monitorAggregation: {{ cilium_monitor_aggregation }}
preallocateMaps: {{ cilium_preallocate_bpf_maps }}
mapDynamicSizeRatio: {{ cilium_bpf_map_dynamic_size_ratio }}
cni:
exclusive: {{ cilium_cni_exclusive }}
logFile: {{ cilium_cni_log_file }}
autoDirectNodeRoutes: {{ cilium_auto_direct_node_routes }}
ipv4NativeRoutingCIDR: {{ cilium_native_routing_cidr }}
ipv6NativeRoutingCIDR: {{ cilium_native_routing_cidr_ipv6 }}
encryption:
enabled: {{ cilium_encryption_enabled }}
{% if cilium_encryption_enabled %}
type: {{ cilium_encryption_type }}
{% if cilium_encryption_type == 'wireguard' %}
nodeEncryption: {{ cilium_encryption_node_encryption }}
{% endif %}
{% endif %}
bandwidthManager:
enabled: {{ cilium_enable_bandwidth_manager }}
bbr: {{ cilium_enable_bandwidth_manager_bbr }}
ipMasqAgent:
enabled: {{ cilium_ip_masq_agent_enable }}
{% if cilium_ip_masq_agent_enable %}
config:
nonMasqueradeCIDRs: {{ cilium_non_masquerade_cidrs }}
masqLinkLocal: {{ cilium_masq_link_local }}
masqLinkLocalIPv6: {{ cilium_masq_link_local_ipv6 }}
# cilium_ip_masq_resync_interval
{% endif %}
hubble:
enabled: {{ cilium_enable_hubble }}
relay:
enabled: {{ cilium_enable_hubble }}
image:
repository: {{ cilium_hubble_relay_image_repo }}
tag: {{ cilium_hubble_relay_image_tag }}
ui:
enabled: {{ cilium_enable_hubble_ui }}
backend:
image:
repository: {{ cilium_hubble_ui_backend_image_repo }}
tag: {{ cilium_hubble_ui_backend_image_tag }}
frontend:
image:
repository: {{ cilium_hubble_ui_image_repo }}
tag: {{ cilium_hubble_ui_image_tag }}
metrics:
enabled: {{ cilium_hubble_metrics }}
export:
fileMaxBackups: {{ cilium_hubble_export_file_max_backups }}
fileMaxSizeMb: {{ cilium_hubble_export_file_max_size_mb }}
dynamic:
enabled: {{ cilium_hubble_export_dynamic_enabled }}
config:
content:
{{ cilium_hubble_export_dynamic_config_content | to_nice_yaml(indent=10) | indent(10) }}
gatewayAPI:
enabled: {{ cilium_gateway_api_enabled }}
ipam:
mode: {{ cilium_ipam_mode }}
operator:
clusterPoolIPv4PodCIDRList:
- {{ cilium_pool_cidr | default(kube_pods_subnet) }}
clusterPoolIPv4MaskSize: {{ cilium_pool_mask_size | default(kube_network_node_prefix) }}
clusterPoolIPv6PodCIDRList:
- {{ cilium_pool_cidr_ipv6 | default(kube_pods_subnet_ipv6) }}
clusterPoolIPv6MaskSize: {{ cilium_pool_mask_size_ipv6 | default(kube_network_node_prefix_ipv6) }}
cgroup:
autoMount:
enabled: {{ cilium_cgroup_auto_mount }}
hostRoot: {{ cilium_cgroup_host_root }}
operator:
image:
repository: {{ cilium_operator_image_repo }}
tag: {{ cilium_operator_image_tag }}
replicas: {{ cilium_operator_replicas }}
extraArgs:
{{ cilium_operator_extra_args | to_nice_yaml(indent=2) | indent(4) }}
extraVolumes:
{{ cilium_operator_extra_volumes | to_nice_yaml(indent=2) | indent(4) }}
extraVolumeMounts:
{{ cilium_operator_extra_volume_mounts | to_nice_yaml(indent=2) | indent(4) }}
tolerations:
{{ cilium_operator_tolerations | to_nice_yaml(indent=2) | indent(4) }}
cluster:
id: {{ cilium_cluster_id }}
name: {{ cilium_cluster_name }}
enableIPv4Masquerade: {{ cilium_enable_ipv4_masquerade }}
enableIPv6Masquerade: {{ cilium_enable_ipv6_masquerade }}
hostFirewall:
enabled: {{ cilium_enable_host_firewall }}
certgen:
image:
repository: {{ cilium_hubble_certgen_image_repo }}
tag: {{ cilium_hubble_certgen_image_tag }}
envoy:
image:
repository: {{ cilium_hubble_envoy_image_repo }}
tag: {{ cilium_hubble_envoy_image_tag }}
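
The recurring to_nice_yaml(indent=2) | indent(...) pattern serializes a structured Ansible variable and re-indents it to fit the surrounding block (to_nice_yaml sorts mapping keys alphabetically). A minimal sketch, assuming a hypothetical toleration entry:

cilium_operator_tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule

renders the operator section roughly as:

operator:
  tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/control-plane
      operator: Exists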

View File

@@ -65,14 +65,19 @@
tags:
- bootstrap_os
- name: Install packages requirements
- name: Manage packages
package:
name: "{{ pkgs | dict2items | selectattr('value', 'ansible.builtin.all') | map(attribute='key') }}"
state: present
name: "{{ item.packages | dict2items | selectattr('value', 'ansible.builtin.all') | map(attribute='key') }}"
state: "{{ item.state }}"
register: pkgs_task_result
until: pkgs_task_result is succeeded
retries: "{{ pkg_install_retries }}"
delay: "{{ retry_stagger | random + 3 }}"
when: not (ansible_os_family in ["Flatcar", "Flatcar Container Linux by Kinvolk"] or is_fedora_coreos)
loop:
- { packages: "{{ pkgs_to_remove }}", state: "absent", action_label: "remove" }
- { packages: "{{ pkgs }}", state: "present", action_label: "install" }
loop_control:
label: "{{ item.action_label }}"
tags:
- bootstrap_os
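
The filter chain keeps a package only when every boolean in its condition list evaluates to true; since all() of an empty list is true, packages with no conditions are always selected. A minimal sketch with hypothetical entries:

- name: Show which packages would be selected
  vars:
    pkgs:
      apparmor:
        - "{{ ansible_os_family == 'Debian' }}"
      bash-completion: []
  debug:
    msg: "{{ pkgs | dict2items | selectattr('value', 'ansible.builtin.all') | map(attribute='key') | list }}"

On a Debian host this prints ['apparmor', 'bash-completion']; on other OS families only ['bash-completion'].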

View File

@@ -1,4 +1,9 @@
---
pkgs_to_remove:
systemd-timesyncd:
- "{{ ntp_enabled }}"
- "{{ ntp_package == 'ntp' }}"
- "{{ ansible_os_family == 'Debian' }}"
pkgs:
apparmor:
- "{{ ansible_os_family == 'Debian' }}"
@@ -9,6 +14,9 @@ pkgs:
- "{{ ansible_distribution_major_version == '10' }}"
- "{{ 'k8s_cluster' in group_names }}"
bash-completion: []
chrony:
- "{{ ntp_enabled }}"
- "{{ ntp_package == 'chrony' }}"
conntrack:
- "{{ ansible_os_family in ['Debian', 'RedHat'] }}"
- "{{ ansible_distribution != 'openEuler' }}"
@@ -70,6 +78,12 @@ pkgs:
- "{{ 'k8s_cluster' in group_names }}"
nss:
- "{{ ansible_os_family == 'RedHat' }}"
ntp:
- "{{ ntp_enabled }}"
- "{{ ntp_package == 'ntp' }}"
ntpsec:
- "{{ ntp_enabled }}"
- "{{ ntp_package == 'ntpsec' }}"
openssl: []
python-apt:
- "{{ ansible_os_family == 'Debian' }}"

View File

@@ -40,12 +40,15 @@
include_vars: ../roles/system_packages/vars/main.yml
- name: Verify that the packages list is sorted
loop:
- pkgs_to_remove
- pkgs
vars:
pkgs_lists: "{{ pkgs.keys() | list }}"
pkgs_lists: "{{ lookup('vars', item).keys() | list }}"
ansible_distribution: irrelevant
ansible_distribution_major_version: irrelevant
ansible_distribution_minor_version: irrelevant
ansible_os_family: irrelevant
assert:
that: "pkgs_lists | sort == pkgs_lists"
fail_msg: "pkgs is not sorted: {{ pkgs_lists | ansible.utils.fact_diff(pkgs_lists | sort) }}"
fail_msg: "{{ item }} is not sorted: {{ pkgs_lists | ansible.utils.fact_diff(pkgs_lists | sort) }}"

View File

@@ -25,7 +25,7 @@ from typing import Optional, Any
from . import components
CHECKSUMS_YML = Path("roles/kubespray_defaults/defaults/main/checksums.yml")
CHECKSUMS_YML = Path("roles/kubespray_defaults/vars/main/checksums.yml")
logger = logging.getLogger(__name__)

View File

@@ -14,6 +14,7 @@ kube_proxy_mode: nftables
# NTP management
ntp_enabled: true
ntp_package: chrony
ntp_timezone: Etc/UTC
ntp_manage_config: true
ntp_tinker_panic: true

View File

@@ -7,4 +7,6 @@ mode: ha
kube_network_plugin: cilium
enable_network_policy: true
cilium_kube_proxy_replacement: strict
cilium_kube_proxy_replacement: true
kube_owner: root

View File

@@ -4,3 +4,9 @@ cloud_image: debian-12
# Kubespray settings
kube_network_plugin: cilium
# ntp settings
ntp_enabled: true
ntp_package: ntp
kube_owner: root

View File

@@ -5,6 +5,8 @@ cloud_image: opensuse-leap-15-6
# Kubespray settings
kube_network_plugin: cilium
kube_owner: root
# Docker specific settings:
container_manager: docker
etcd_deployment_type: docker

View File

@@ -6,7 +6,9 @@ vm_memory: 3072
# Kubespray settings
kube_network_plugin: cilium
cilium_kube_proxy_replacement: strict
cilium_kube_proxy_replacement: true
kube_owner: root
# Node Feature Discovery
node_feature_discovery_enabled: true

View File

@@ -7,3 +7,5 @@ mode: separate
kube_network_plugin: cilium
enable_network_policy: true
auto_renew_certificates: true
kube_owner: root

View File

@@ -44,3 +44,7 @@ kubeadm_patches:
example.com/test: "false"
labels:
example.com/prod_level: "prep"
# ntp settings
ntp_enabled: true
ntp_package: ntpsec

View File

@@ -0,0 +1,13 @@
---
cloud_image: ubuntu-2404
cluster_layout:
- node_groups: ['kube_control_plane']
- node_groups: ['kube_control_plane']
- node_groups: ['kube_control_plane']
- node_groups: ['kube_node']
- node_groups: ['etcd']
- node_groups: ['etcd']
- node_groups: ['etcd']
kube_network_plugin: calico
calico_datastore: etcd