CI: Replace kubevirt dynamic inventory with generated yaml

VirtualMachineInstance resources sometimes temporarily lose their
IP (at least as far as the KubeVirt controllers can see).
See https://github.com/kubevirt/kubevirt/issues/12698 for the upstream
bug.

This does not seem to affect actual connections (if it did, our current
CI would not work).
However, our CI executes multiple playbooks, in particular:
1. The provisioning playbook (which checks that the IPs have been
   provisioned by querying the K8S API)
2. Kubespray itself

If any of the VirtualMachineInstances loses its IP after step 1 has
checked for it and before step 2 starts, the dynamic inventory (which is
invoked when the playbook is launched by ansible-playbook) will not have
an IP for that host, and will fall back to using its name for SSH, which
of course will not work.

Instead, when we have a valid state during provisioning (all IPs
present), use it to construct a static inventory which is used for
the rest of the CI run.
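
For illustration, the generated file is an ordinary static YAML
inventory keyed by group. A hypothetical two-node run (names, IPs, and
groups invented here, not taken from the commit) would come out roughly
as:

# ci_inventory.yml -- illustrative output of the to_yaml dump below
kube_control_plane:
  hosts:
    instance-1:
      ansible_host: 10.244.0.10
      ansible_groups: [kube_control_plane, etcd]
      k8s_vmi_name: instance-1
kube_node:
  hosts:
    instance-2:
      ansible_host: 10.244.0.11
      ansible_groups: [kube_node]
      k8s_vmi_name: instance-2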
Max Gautier
2024-11-13 16:01:17 +01:00
parent 329ffd45f0
commit ff4de880ae
4 changed files with 25 additions and 24 deletions


@@ -21,8 +21,26 @@
   retries: 30
   delay: 10
-- name: "Create inventory for CI tests"
-  template:
-    src: "inv.kubevirt.yml.j2"
-    dest: "{{ inventory_path }}/inv.kubevirt.yml"
+- name: Massage VirtualMachineInstance data into an Ansible inventory structure
+  vars:
+    ips: "{{ vmis.resources | map(attribute='status.interfaces.0.ipAddress') }}"
+    names: "{{ vmis.resources | map(attribute='metadata.name') }}"
+    _groups: "{{ vmis.resources | map(attribute='metadata.annotations.ansible_groups') | map('split', ',') }}"
+    hosts: "{{ ips | zip(_groups, names)
+           | map('zip', ['ansible_host', 'ansible_groups', 'k8s_vmi_name'])
+           | map('map', 'reverse') | map('community.general.dict') }}"
+  loop: "{{ hosts | map(attribute='ansible_groups') | flatten | unique }}"
+  set_fact:
+    ci_inventory: "{{ ci_inventory|d({}) | combine({
+                  item: {
+                    'hosts': hosts | selectattr('ansible_groups', 'contains', item)
+                             | rekey_on_member('k8s_vmi_name')
+                  }
+                })
+              }}"
+- name: Create inventory for CI tests
+  copy:
+    content: "{{ ci_inventory | to_yaml }}"
+    dest: "{{ inventory_path }}/ci_inventory.yml"
+    mode: "0644"
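
To unpack the hosts expression in the task above, here is a walkthrough
with a single made-up VMI (all values are assumptions for illustration):

# ips     = ['10.244.0.10']
# names   = ['instance-1']
# _groups = [['kube_control_plane', 'etcd']]
#
# ips | zip(_groups, names)
#   => [('10.244.0.10', ['kube_control_plane', 'etcd'], 'instance-1')]
# ... | map('zip', ['ansible_host', 'ansible_groups', 'k8s_vmi_name'])
#   => [[('10.244.0.10', 'ansible_host'),
#        (['kube_control_plane', 'etcd'], 'ansible_groups'),
#        ('instance-1', 'k8s_vmi_name')]]
# ... | map('map', 'reverse') | map('community.general.dict')
#   => [{'ansible_host': '10.244.0.10',
#        'ansible_groups': ['kube_control_plane', 'etcd'],
#        'k8s_vmi_name': 'instance-1'}]

The set_fact then loops over every unique group, selects the matching
host dicts, and rekeys them on k8s_vmi_name, yielding the group-to-hosts
mapping that the copy task dumps to disk.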


@@ -1,16 +0,0 @@
-plugin: kubevirt.core.kubevirt
-namespaces:
-  - {{ pod_namespace }}
-label_selector: ci_job_id={{ ci_job_id }}
-create_groups: true
-compose:
-  ci_groups: |
-    group_names |
-    select('ansible.builtin.match', 'label_kubespray_io*') |
-    map('regex_replace', 'label_kubespray_io_(.*)_true', '\1')
-use_service: false
-host_format: "{name}"
-keyed_groups:
-  - key: ci_groups
-    prefix: ""
-    separator: ""
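
For context, the deleted config derived Ansible groups from VMI labels:
create_groups makes the kubevirt.core.kubevirt plugin emit a group per
label, which the compose/keyed_groups pair then rewrote into bare
Kubespray group names, roughly (label chosen here for illustration):

# VMI label:              kubespray.io/kube_node: "true"
# group from the plugin:  label_kubespray_io_kube_node_true
# after regex_replace:    kube_node  (emitted as a bare group by keyed_groups)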


@@ -6,15 +6,14 @@ metadata:
   namespace: {{ pod_namespace }}
   annotations:
     kubespray.com/ci.template-path: "tests/cloud_playbooks/roles/packet-ci/templates/vm.yml.j2"
+    ansible_groups: "{{ kubespray_groups | join(',') }}"
+    # This does not use a dns prefix because dots are hard to escape with map(attribute=) in Jinja
   labels:
     kubevirt.io/os: {{ cloud_image }}
     kubevirt.io/size: small
     kubevirt.io/domain: "{{ test_name }}"
     ci_job_id: "{{ ci_job_id }}"
     ci_job_name: "{{ ci_job_name }}"
-{% for group in kubespray_groups -%}
-    kubespray.io/{{ group }}: "true"
-{% endfor -%}
   # leverage the Kubernetes GC for resources cleanup
   ownerReferences:
   - apiVersion: v1
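
With the template change above, a VMI belonging to, say, the
kube_control_plane and etcd groups would render its metadata roughly as
follows (job values invented for illustration); the per-group
kubespray.io/* labels are gone because the massage task now reads the
comma-joined annotation instead:

metadata:
  annotations:
    ansible_groups: "kube_control_plane,etcd"
  labels:
    ci_job_id: "12345"
    ci_job_name: "example-job"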