mirror of https://github.com/kubernetes-sigs/kubespray.git, synced 2025-12-13 21:34:40 +03:00
* Use alternate self-sufficient shellcheck precommit. This pre-commit does not require prerequisites on the host, making it easier to run in CI workflows.
* Switch to upstream ansible-lint pre-commit hook. This way, the hook is self-contained and does not depend on a previous virtualenv installation.
* pre-commit: fix hooks dependencies (ansible-syntax-check, tox-inventory-builder, jinja-syntax-check).
* Fix ci-matrix pre-commit hook: remove the dependency on pydblite, which fails to set up on recent Pythons, and discard the shell script, putting everything into pre-commit.
* pre-commit: apply autofix hooks and fix the rest manually: markdownlint (manual fix), end-of-file-fixer, requirements-txt-fixer, trailing-whitespace.
* Convert check_typo to pre-commit and use a maintained version: client9/misspell is unmaintained and has been forked by the golangci team, see https://github.com/client9/misspell/issues/197#issuecomment-1596318684. They have not yet added a pre-commit config, so use my fork with the pre-commit hook config until the pull request is merged.
* collection-build-install: convert to pre-commit.
* Run pre-commit hooks in a dynamic pipeline: use the GitLab dynamic child pipelines feature to have one source of truth for the pre-commit jobs, the pre-commit config file. Use one cache per pre-commit hook; this should reduce the "fetching cache" time in gitlab-ci, since each job will have a separate cache with only its hook installed.
* Remove the gitlab-ci job now done in pre-commit.
* pre-commit: adjust markdownlint defaults, md fixes: use a style file as recommended by upstream, so there is only one source of truth. Conserve the previous upstream default for MD007 (the upstream default changed in https://github.com/markdownlint/markdownlint/pull/373).
* Update pre-commit hooks.

Co-authored-by: Max Gautier <mg@max.gautier.name>
36 lines
1.9 KiB
YAML
---
## Etcd auto compaction retention for mvcc key value store in hour
# etcd_compaction_retention: 0

## Set level of detail for etcd exported metrics, specify 'extensive' to include histogram metrics.
# etcd_metrics: basic

## Etcd is restricted by default to 512M on systems under 4GB RAM, 512MB is not enough for much more than testing.
## Set this if your etcd nodes have less than 4GB but you want more RAM for etcd. Set to 0 for unrestricted RAM.
## This value is only relevant when deploying etcd with `etcd_deployment_type: docker`
# etcd_memory_limit: "512M"

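## Example (illustrative values, not kubespray defaults): to allow etcd
## 2 gigabytes of RAM on a node with more memory to spare, uncomment:
# etcd_memory_limit: "2G"
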
## Etcd has a default of 2G for its space quota. If you put a value in etcd_memory_limit which is less than
## etcd_quota_backend_bytes, you may encounter out of memory terminations of the etcd cluster. Please check
## etcd documentation for more information.
# 8G is a suggested maximum size for normal environments and etcd warns at startup if the configured value exceeds it.
# etcd_quota_backend_bytes: "2147483648"

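## Example (illustrative, not a default): to raise the space quota to the
## suggested 8G maximum, express it in bytes (8 * 1024^3 = 8589934592),
## and keep etcd_memory_limit at or above this value (or at 0, unrestricted):
# etcd_quota_backend_bytes: "8589934592"
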
# Maximum client request size in bytes the server will accept.
# etcd is designed to handle small key value pairs typical for metadata.
# Larger requests will work, but may increase the latency of other requests.
# etcd_max_request_bytes: "1572864"

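## Example (illustrative, not a default): to accept client requests up to
## 10 MiB (10 * 1024^2 = 10485760 bytes), at the cost of higher latency
## for other requests while large ones are processed:
# etcd_max_request_bytes: "10485760"
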
### ETCD: disable peer client cert authentication.
# This affects ETCD_PEER_CLIENT_CERT_AUTH variable
# etcd_peer_client_auth: true

## Enable distributed tracing
## To enable this experimental feature, set etcd_experimental_enable_distributed_tracing: true, along with
## etcd_experimental_distributed_tracing_sample_rate to choose how many samples to collect per million spans;
## the default sampling rate is 0: https://etcd.io/docs/v3.5/op-guide/monitoring/#distributed-tracing
# etcd_experimental_enable_distributed_tracing: false
# etcd_experimental_distributed_tracing_sample_rate: 100
# etcd_experimental_distributed_tracing_address: "localhost:4317"
# etcd_experimental_distributed_tracing_service_name: etcd
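
## Example (illustrative values): enable tracing and sample 1% of spans
## (10000 out of every million), exporting to a collector on localhost:
# etcd_experimental_enable_distributed_tracing: true
# etcd_experimental_distributed_tracing_sample_rate: 10000
# etcd_experimental_distributed_tracing_address: "localhost:4317"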