Mirror of https://github.com/kubernetes-sigs/kubespray.git (synced 2025-12-14 22:04:43 +03:00)

Compare commits (639 commits)
[Commit list: 639 commits, identified in the mirrored page only by bare SHA1 hashes (92f25bf267 … c85f275bdb); the Author, Date, and message columns were empty in the extraction, so the table is reduced to this placeholder.]
@@ -18,3 +18,13 @@ skip_list:
 # While it can be useful to have these metadata available, they are also available in the existing documentation.
 # (Disabled in May 2019)
 - '701'
+
+# [role-name] "meta/main.yml" Role name role-name does not match ``^+$`` pattern
+# Meta roles in Kubespray don't need proper names
+# (Disabled in June 2021)
+- 'role-name'
+
+# [var-naming] "defaults/main.yml" File defines variable 'apiVersion' that violates variable naming standards
+# In Kubespray we use variables that use camelCase to match their k8s counterparts
+# (Disabled in June 2021)
+- 'var-naming'
.gitignore (vendored, 10 changes)

@@ -99,3 +99,13 @@ target/
 # virtualenv
 venv/
 ENV/
+
+# molecule
+roles/**/molecule/**/__pycache__/
+roles/**/molecule/**/*.conf
+
+# macOS
+.DS_Store
+
+# Temp location used by our scripts
+scripts/tmp/
@@ -8,7 +8,7 @@ stages:
 - deploy-special
 
 variables:
-KUBESPRAY_VERSION: v2.14.1
+KUBESPRAY_VERSION: v2.17.1
 FAILFASTCI_NAMESPACE: 'kargo-ci'
 GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
 ANSIBLE_FORCE_COLOR: "true"

@@ -16,6 +16,7 @@ variables:
 TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
 CI_TEST_VARS: "./tests/files/${CI_JOB_NAME}.yml"
 CI_TEST_REGISTRY_MIRROR: "./tests/common/_docker_hub_registry_mirror.yml"
+CI_TEST_SETTING: "./tests/common/_kubespray_test_settings.yml"
 GS_ACCESS_KEY_ID: $GS_KEY
 GS_SECRET_ACCESS_KEY: $GS_SECRET
 CONTAINER_ENGINE: docker

@@ -30,12 +31,15 @@ variables:
 MITOGEN_ENABLE: "false"
 ANSIBLE_LOG_LEVEL: "-vv"
 RECOVER_CONTROL_PLANE_TEST: "false"
-RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube-master[1:]"
+RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube_control_plane[1:]"
+TERRAFORM_VERSION: 1.0.8
+ANSIBLE_MAJOR_VERSION: "2.10"
 
 before_script:
 - ./tests/scripts/rebase.sh
 - update-alternatives --install /usr/bin/python python /usr/bin/python3 1
-- python -m pip install -r tests/requirements.txt
+- python -m pip uninstall -y ansible ansible-base ansible-core
+- python -m pip install -r tests/requirements-${ANSIBLE_MAJOR_VERSION}.txt
 - mkdir -p /.ssh
 
 .job: &job

@@ -49,6 +53,7 @@ before_script:
 
 .testcases: &testcases
 <<: *job
+retry: 1
 before_script:
 - update-alternatives --install /usr/bin/python python /usr/bin/python3 1
 - ./tests/scripts/rebase.sh
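For context, the new `ANSIBLE_MAJOR_VERSION` variable now selects which requirements file the CI installs. A rough sketch of what the updated `before_script` resolves to with the default value `"2.10"` (individual jobs further down, such as the calico-aio ansible-2_9 / ansible-2_11 variants, override it):

```ShellSession
# Sketch only: the new before_script with ANSIBLE_MAJOR_VERSION="2.10" substituted
./tests/scripts/rebase.sh
update-alternatives --install /usr/bin/python python /usr/bin/python3 1
python -m pip uninstall -y ansible ansible-base ansible-core
python -m pip install -r tests/requirements-2.10.txt
mkdir -p /.ssh
```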
@@ -14,7 +14,7 @@ vagrant-validate:
 stage: unit-tests
 tags: [light]
 variables:
-VAGRANT_VERSION: 2.2.10
+VAGRANT_VERSION: 2.2.19
 script:
 - ./tests/scripts/vagrant-validate.sh
 except: ['triggers', 'master']

@@ -53,6 +53,7 @@ tox-inventory-builder:
 - ./tests/scripts/rebase.sh
 - apt-get update && apt-get install -y python3-pip
 - update-alternatives --install /usr/bin/python python /usr/bin/python3 10
+- python -m pip uninstall -y ansible
 - python -m pip install -r tests/requirements.txt
 script:
 - pip3 install tox
@@ -2,6 +2,7 @@
 .packet:
 extends: .testcases
 variables:
+ANSIBLE_TIMEOUT: "120"
 CI_PLATFORM: packet
 SSH_USER: kubespray
 tags:

@@ -22,25 +23,52 @@
 allow_failure: true
 extends: .packet
 
-packet_ubuntu18-calico-aio:
-stage: deploy-part1
-extends: .packet_pr
-when: on_success
-
-# Future AIO job
+# The ubuntu20-calico-aio jobs are meant as early stages to prevent running the full CI if something is horribly broken
 packet_ubuntu20-calico-aio:
 stage: deploy-part1
 extends: .packet_pr
 when: on_success
+variables:
+RESET_CHECK: "true"
+
+# Exericse ansible variants
+packet_ubuntu20-calico-aio-ansible-2_9:
+stage: deploy-part1
+extends: .packet_pr
+when: on_success
+variables:
+ANSIBLE_MAJOR_VERSION: "2.9"
+RESET_CHECK: "true"
+
+packet_ubuntu20-calico-aio-ansible-2_11:
+stage: deploy-part1
+extends: .packet_pr
+when: on_success
+variables:
+ANSIBLE_MAJOR_VERSION: "2.11"
+RESET_CHECK: "true"
 
 # ### PR JOBS PART2
 
-packet_centos7-flannel-containerd-addons-ha:
+packet_ubuntu18-aio-docker:
+stage: deploy-part2
+extends: .packet_pr
+when: on_success
+
+packet_ubuntu20-aio-docker:
+stage: deploy-part2
+extends: .packet_pr
+when: on_success
+
+packet_ubuntu18-calico-aio:
+stage: deploy-part2
+extends: .packet_pr
+when: on_success
+
+packet_centos7-flannel-addons-ha:
 extends: .packet_pr
 stage: deploy-part2
 when: on_success
-variables:
-MITOGEN_ENABLE: "true"
 
 packet_centos8-crio:
 extends: .packet_pr
@@ -51,10 +79,13 @@ packet_ubuntu18-crio:
 extends: .packet_pr
 stage: deploy-part2
 when: manual
-variables:
-MITOGEN_ENABLE: "true"
 
-packet_ubuntu16-canal-kubeadm-ha:
+packet_fedora35-crio:
+extends: .packet_pr
+stage: deploy-part2
+when: manual
+
+packet_ubuntu16-canal-ha:
 stage: deploy-part2
 extends: .packet_periodic
 when: on_success

@@ -84,12 +115,25 @@ packet_debian10-cilium-svc-proxy:
 extends: .packet_periodic
 when: on_success
 
-packet_debian10-containerd:
+packet_debian10-calico:
+stage: deploy-part2
+extends: .packet_pr
+when: on_success
+
+packet_debian10-docker:
+stage: deploy-part2
+extends: .packet_pr
+when: on_success
+
+packet_debian11-calico:
+stage: deploy-part2
+extends: .packet_pr
+when: on_success
+
+packet_debian11-docker:
 stage: deploy-part2
 extends: .packet_pr
 when: on_success
-variables:
-MITOGEN_ENABLE: "true"
 
 packet_centos7-calico-ha-once-localhost:
 stage: deploy-part2

@@ -111,7 +155,17 @@ packet_centos8-calico:
 extends: .packet_pr
 when: on_success
 
-packet_fedora32-weave:
+packet_centos8-docker:
+stage: deploy-part2
+extends: .packet_pr
+when: on_success
+
+packet_fedora34-docker-weave:
+stage: deploy-part2
+extends: .packet_pr
+when: on_success
+
+packet_fedora35-kube-router:
 stage: deploy-part2
 extends: .packet_pr
 when: on_success

@@ -121,14 +175,14 @@ packet_opensuse-canal:
 extends: .packet_periodic
 when: on_success
 
-packet_ubuntu18-ovn4nfv:
+packet_opensuse-docker-cilium:
 stage: deploy-part2
-extends: .packet_periodic
-when: on_success
+extends: .packet_pr
+when: manual
 
 # ### MANUAL JOBS
 
-packet_ubuntu16-weave-sep:
+packet_ubuntu16-docker-weave-sep:
 stage: deploy-part2
 extends: .packet_pr
 when: manual

@@ -138,12 +192,18 @@ packet_ubuntu18-cilium-sep:
 extends: .packet_pr
 when: manual
 
-packet_ubuntu18-flannel-containerd-ha:
+packet_ubuntu18-flannel-ha:
 stage: deploy-part2
 extends: .packet_pr
 when: manual
 
-packet_ubuntu18-flannel-containerd-ha-once:
+packet_ubuntu18-flannel-ha-once:
+stage: deploy-part2
+extends: .packet_pr
+when: manual
+
+# Calico HA eBPF
+packet_centos8-calico-ha-ebpf:
 stage: deploy-part2
 extends: .packet_pr
 when: manual
@@ -173,19 +233,34 @@ packet_oracle7-canal-ha:
 extends: .packet_pr
 when: manual
 
-packet_fedora33-calico:
+packet_fedora35-docker-calico:
 stage: deploy-part2
 extends: .packet_periodic
 when: on_success
 variables:
-MITOGEN_ENABLE: "true"
+RESET_CHECK: "true"
+
+packet_fedora34-calico-selinux:
+stage: deploy-part2
+extends: .packet_periodic
+when: on_success
+
+packet_fedora35-calico-swap-selinux:
+stage: deploy-part2
+extends: .packet_pr
+when: manual
 
 packet_amazon-linux-2-aio:
 stage: deploy-part2
 extends: .packet_pr
 when: manual
 
-packet_fedora32-kube-ovn-containerd:
+packet_centos8-calico-nodelocaldns-secondary:
+stage: deploy-part2
+extends: .packet_pr
+when: manual
+
+packet_fedora34-kube-ovn:
 stage: deploy-part2
 extends: .packet_periodic
 when: on_success

@@ -193,29 +268,32 @@ packet_fedora32-kube-ovn-containerd:
 # ### PR JOBS PART3
 # Long jobs (45min+)
 
-packet_centos7-weave-upgrade-ha:
+packet_centos7-docker-weave-upgrade-ha:
 stage: deploy-part3
 extends: .packet_periodic
 when: on_success
 variables:
 UPGRADE_TEST: basic
-MITOGEN_ENABLE: "false"
 
-packet_debian9-calico-upgrade:
+# Calico HA Wireguard
+packet_ubuntu20-calico-ha-wireguard:
+stage: deploy-part2
+extends: .packet_pr
+when: manual
+
+packet_debian10-calico-upgrade:
 stage: deploy-part3
 extends: .packet_pr
 when: on_success
 variables:
 UPGRADE_TEST: graceful
-MITOGEN_ENABLE: "false"
 
-packet_debian9-calico-upgrade-once:
+packet_debian10-calico-upgrade-once:
 stage: deploy-part3
 extends: .packet_periodic
 when: on_success
 variables:
 UPGRADE_TEST: graceful
-MITOGEN_ENABLE: "false"
 
 packet_ubuntu18-calico-ha-recover:
 stage: deploy-part3

@@ -223,7 +301,7 @@ packet_ubuntu18-calico-ha-recover:
 when: on_success
 variables:
 RECOVER_CONTROL_PLANE_TEST: "true"
-RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube-master[1:]"
+RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube_control_plane[1:]"
 
 packet_ubuntu18-calico-ha-recover-noquorum:
 stage: deploy-part3

@@ -231,4 +309,4 @@ packet_ubuntu18-calico-ha-recover-noquorum:
 when: on_success
 variables:
 RECOVER_CONTROL_PLANE_TEST: "true"
-RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[1:],kube-master[1:]"
+RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[1:],kube_control_plane[1:]"
@@ -12,13 +12,13 @@
 # Prepare inventory
 - cp contrib/terraform/$PROVIDER/sample-inventory/cluster.tfvars .
 - ln -s contrib/terraform/$PROVIDER/hosts
-- terraform init contrib/terraform/$PROVIDER
+- terraform -chdir="contrib/terraform/$PROVIDER" init
 # Copy SSH keypair
 - mkdir -p ~/.ssh
 - echo "$PACKET_PRIVATE_KEY" | base64 -d > ~/.ssh/id_rsa
 - chmod 400 ~/.ssh/id_rsa
 - echo "$PACKET_PUBLIC_KEY" | base64 -d > ~/.ssh/id_rsa.pub
-- mkdir -p group_vars
+- mkdir -p contrib/terraform/$PROVIDER/group_vars
 # Random subnet to avoid routing conflicts
 - export TF_VAR_subnet_cidr="10.$(( $RANDOM % 256 )).$(( $RANDOM % 256 )).0/24"
 

@@ -28,8 +28,8 @@
 tags: [light]
 only: ['master', /^pr-.*$/]
 script:
-- terraform validate -var-file=cluster.tfvars contrib/terraform/$PROVIDER
-- terraform fmt -check -diff contrib/terraform/$PROVIDER
+- terraform -chdir="contrib/terraform/$PROVIDER" validate
+- terraform -chdir="contrib/terraform/$PROVIDER" fmt -check -diff
 
 .terraform_apply:
 extends: .terraform_install
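The two hunks above move from passing the provider directory as a positional argument to using Terraform's `-chdir` global option, which is the form expected by Terraform 0.14+ and 1.x (the `TERRAFORM_VERSION: 1.0.8` variable introduced earlier). A rough illustration of the equivalent invocations, using the same `$PROVIDER` directory as the CI script:

```ShellSession
# Old style: directory given as an argument
terraform init contrib/terraform/$PROVIDER

# New style: run the command from inside the directory via -chdir
terraform -chdir="contrib/terraform/$PROVIDER" init
```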
@@ -56,70 +56,48 @@
 tf-validate-openstack:
 extends: .terraform_validate
 variables:
-TF_VERSION: 0.12.29
+TF_VERSION: $TERRAFORM_VERSION
 PROVIDER: openstack
 CLUSTER: $CI_COMMIT_REF_NAME
 
 tf-validate-packet:
 extends: .terraform_validate
 variables:
-TF_VERSION: 0.12.29
+TF_VERSION: $TERRAFORM_VERSION
 PROVIDER: packet
 CLUSTER: $CI_COMMIT_REF_NAME
 
 tf-validate-aws:
 extends: .terraform_validate
 variables:
-TF_VERSION: 0.12.29
+TF_VERSION: $TERRAFORM_VERSION
 PROVIDER: aws
 CLUSTER: $CI_COMMIT_REF_NAME
 
-tf-0.13.x-validate-openstack:
+tf-validate-exoscale:
 extends: .terraform_validate
 variables:
-TF_VERSION: 0.13.5
-PROVIDER: openstack
+TF_VERSION: $TERRAFORM_VERSION
+PROVIDER: exoscale
+
+tf-validate-vsphere:
+extends: .terraform_validate
+variables:
+TF_VERSION: $TERRAFORM_VERSION
+PROVIDER: vsphere
 CLUSTER: $CI_COMMIT_REF_NAME
 
-tf-0.13.x-validate-packet:
+tf-validate-upcloud:
 extends: .terraform_validate
 variables:
-TF_VERSION: 0.13.5
-PROVIDER: packet
-CLUSTER: $CI_COMMIT_REF_NAME
-
-tf-0.13.x-validate-aws:
-extends: .terraform_validate
-variables:
-TF_VERSION: 0.13.5
-PROVIDER: aws
-CLUSTER: $CI_COMMIT_REF_NAME
-
-tf-0.14.x-validate-openstack:
-extends: .terraform_validate
-variables:
-TF_VERSION: 0.14.3
-PROVIDER: openstack
-CLUSTER: $CI_COMMIT_REF_NAME
-
-tf-0.14.x-validate-packet:
-extends: .terraform_validate
-variables:
-TF_VERSION: 0.14.3
-PROVIDER: packet
-CLUSTER: $CI_COMMIT_REF_NAME
-
-tf-0.14.x-validate-aws:
-extends: .terraform_validate
-variables:
-TF_VERSION: 0.14.3
-PROVIDER: aws
+TF_VERSION: $TERRAFORM_VERSION
+PROVIDER: upcloud
 CLUSTER: $CI_COMMIT_REF_NAME
 
 # tf-packet-ubuntu16-default:
 # extends: .terraform_apply
 # variables:
-# TF_VERSION: 0.12.29
+# TF_VERSION: $TERRAFORM_VERSION
 # PROVIDER: packet
 # CLUSTER: $CI_COMMIT_REF_NAME
 # TF_VAR_number_of_k8s_masters: "1"
@@ -133,7 +111,7 @@ tf-0.14.x-validate-aws:
 # tf-packet-ubuntu18-default:
 # extends: .terraform_apply
 # variables:
-# TF_VERSION: 0.12.29
+# TF_VERSION: $TERRAFORM_VERSION
 # PROVIDER: packet
 # CLUSTER: $CI_COMMIT_REF_NAME
 # TF_VAR_number_of_k8s_masters: "1"

@@ -168,10 +146,6 @@ tf-0.14.x-validate-aws:
 OS_INTERFACE: public
 OS_IDENTITY_API_VERSION: "3"
 TF_VAR_router_id: "ab95917c-41fb-4881-b507-3a6dfe9403df"
-# Since ELASTX is in Stockholm, Mitogen helps with latency
-MITOGEN_ENABLE: "false"
-# Mitogen doesn't support interpreter discovery yet
-ANSIBLE_PYTHON_INTERPRETER: "/usr/bin/python3"
 
 tf-elastx_cleanup:
 stage: unit-tests

@@ -188,9 +162,10 @@ tf-elastx_ubuntu18-calico:
 extends: .terraform_apply
 stage: deploy-part3
 when: on_success
+allow_failure: true
 variables:
 <<: *elastx_variables
-TF_VERSION: 0.12.29
+TF_VERSION: $TERRAFORM_VERSION
 PROVIDER: openstack
 CLUSTER: $CI_COMMIT_REF_NAME
 ANSIBLE_TIMEOUT: "60"

@@ -216,44 +191,45 @@ tf-elastx_ubuntu18-calico:
 TF_VAR_image: ubuntu-18.04-server-latest
 TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
 
+# OVH voucher expired, commenting job until things are sorted out
 
-tf-ovh_cleanup:
-stage: unit-tests
-tags: [light]
-image: python
-environment: ovh
-variables:
-<<: *ovh_variables
-before_script:
-- pip install -r scripts/openstack-cleanup/requirements.txt
-script:
-- ./scripts/openstack-cleanup/main.py
-
-tf-ovh_ubuntu18-calico:
-extends: .terraform_apply
-when: on_success
-environment: ovh
-variables:
-<<: *ovh_variables
-TF_VERSION: 0.12.29
-PROVIDER: openstack
-CLUSTER: $CI_COMMIT_REF_NAME
-ANSIBLE_TIMEOUT: "60"
-SSH_USER: ubuntu
-TF_VAR_number_of_k8s_masters: "0"
-TF_VAR_number_of_k8s_masters_no_floating_ip: "1"
-TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
-TF_VAR_number_of_etcd: "0"
-TF_VAR_number_of_k8s_nodes: "0"
-TF_VAR_number_of_k8s_nodes_no_floating_ip: "1"
-TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
-TF_VAR_number_of_bastions: "0"
-TF_VAR_number_of_k8s_masters_no_etcd: "0"
-TF_VAR_use_neutron: "0"
-TF_VAR_floatingip_pool: "Ext-Net"
-TF_VAR_external_net: "6011fbc9-4cbf-46a4-8452-6890a340b60b"
-TF_VAR_network_name: "Ext-Net"
-TF_VAR_flavor_k8s_master: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
-TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
-TF_VAR_image: "Ubuntu 18.04"
-TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
+# tf-ovh_cleanup:
+# stage: unit-tests
+# tags: [light]
+# image: python
+# environment: ovh
+# variables:
+# <<: *ovh_variables
+# before_script:
+# - pip install -r scripts/openstack-cleanup/requirements.txt
+# script:
+# - ./scripts/openstack-cleanup/main.py
+
+# tf-ovh_ubuntu18-calico:
+# extends: .terraform_apply
+# when: on_success
+# environment: ovh
+# variables:
+# <<: *ovh_variables
+# TF_VERSION: $TERRAFORM_VERSION
+# PROVIDER: openstack
+# CLUSTER: $CI_COMMIT_REF_NAME
+# ANSIBLE_TIMEOUT: "60"
+# SSH_USER: ubuntu
+# TF_VAR_number_of_k8s_masters: "0"
+# TF_VAR_number_of_k8s_masters_no_floating_ip: "1"
+# TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
+# TF_VAR_number_of_etcd: "0"
+# TF_VAR_number_of_k8s_nodes: "0"
+# TF_VAR_number_of_k8s_nodes_no_floating_ip: "1"
+# TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
+# TF_VAR_number_of_bastions: "0"
+# TF_VAR_number_of_k8s_masters_no_etcd: "0"
+# TF_VAR_use_neutron: "0"
+# TF_VAR_floatingip_pool: "Ext-Net"
+# TF_VAR_external_net: "6011fbc9-4cbf-46a4-8452-6890a340b60b"
+# TF_VAR_network_name: "Ext-Net"
+# TF_VAR_flavor_k8s_master: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
+# TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
+# TF_VAR_image: "Ubuntu 18.04"
+# TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
@@ -11,10 +11,17 @@ molecule_tests:
 - tests/scripts/rebase.sh
 - apt-get update && apt-get install -y python3-pip
 - update-alternatives --install /usr/bin/python python /usr/bin/python3 10
+- python -m pip uninstall -y ansible
 - python -m pip install -r tests/requirements.txt
 - ./tests/scripts/vagrant_clean.sh
 script:
 - ./tests/scripts/molecule_run.sh
+after_script:
+- chronic ./tests/scripts/molecule_logs.sh
+artifacts:
+when: always
+paths:
+- molecule_logs/
 
 .vagrant:
 extends: .testcases

@@ -31,12 +38,19 @@ molecule_tests:
 before_script:
 - apt-get update && apt-get install -y python3-pip
 - update-alternatives --install /usr/bin/python python /usr/bin/python3 10
+- python -m pip uninstall -y ansible
 - python -m pip install -r tests/requirements.txt
 - ./tests/scripts/vagrant_clean.sh
 script:
 - ./tests/scripts/testcases_run.sh
 after_script:
 - chronic ./tests/scripts/testcases_cleanup.sh
+allow_failure: true
+
+vagrant_ubuntu18-calico-dual-stack:
+stage: deploy-part2
+extends: .vagrant
+when: on_success
 
 vagrant_ubuntu18-flannel:
 stage: deploy-part2
@@ -6,11 +6,17 @@
 
 It is recommended to use filter to manage the GitHub email notification, see [examples for setting filters to Kubernetes Github notifications](https://github.com/kubernetes/community/blob/master/communication/best-practices.md#examples-for-setting-filters-to-kubernetes-github-notifications)
 
-To install development dependencies you can use `pip install -r tests/requirements.txt`
+To install development dependencies you can set up a python virtual env with the necessary dependencies:
+
+```ShellSession
+virtualenv venv
+source venv/bin/activate
+pip install -r tests/requirements.txt
+```
 
 #### Linting
 
-Kubespray uses `yamllint` and `ansible-lint`. To run them locally use `yamllint .` and `ansible-lint`
+Kubespray uses `yamllint` and `ansible-lint`. To run them locally use `yamllint .` and `ansible-lint`. It is a good idea to add call these tools as part of your pre-commit hook and avoid a lot of back end forth on fixing linting issues (<https://support.gitkraken.com/working-with-repositories/githooksexample/>).
 
 #### Molecule
 

@@ -29,3 +35,5 @@ Vagrant with VirtualBox or libvirt driver helps you to quickly spin test cluster
 3. Fork the desired repo, develop and test your code changes.
 4. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
 5. Submit a pull request.
+6. Work with the reviewers on their suggestions.
+7. Ensure to rebase to the HEAD of your target branch and squash un-necessary commits (<https://blog.carbonfive.com/always-squash-and-rebase-your-git-commits/>) before final merger of your contribution.
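The updated contributing text above suggests wiring the linters into a pre-commit hook. A minimal sketch of such a hook, assuming `yamllint` and `ansible-lint` are installed (for example inside the virtualenv created above) and that the file lives at the standard `.git/hooks/pre-commit` location:

```ShellSession
#!/bin/sh
# Illustrative .git/hooks/pre-commit hook; make it executable with chmod +x.
# It simply runs the two linters and aborts the commit if either fails.
set -e
yamllint .
ansible-lint
```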
Dockerfile (38 changes)

@@ -1,25 +1,33 @@
 # Use imutable image tags rather than mutable tags (like ubuntu:18.04)
 FROM ubuntu:bionic-20200807
 
-ENV KUBE_VERSION=v1.19.9
-
-RUN mkdir /kubespray
-WORKDIR /kubespray
-RUN apt update -y && \
-apt install -y \
+RUN apt update -y \
+&& apt install -y \
 libssl-dev python3-dev sshpass apt-transport-https jq moreutils \
-ca-certificates curl gnupg2 software-properties-common python3-pip rsync
-RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
-add-apt-repository \
+ca-certificates curl gnupg2 software-properties-common python3-pip unzip rsync git \
+&& rm -rf /var/lib/apt/lists/*
+RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
+&& add-apt-repository \
 "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
 $(lsb_release -cs) \
 stable" \
-&& apt update -y && apt-get install docker-ce -y
-COPY . .
-RUN /usr/bin/python3 -m pip install pip -U && /usr/bin/python3 -m pip install -r tests/requirements.txt && python3 -m pip install -r requirements.txt && update-alternatives --install /usr/bin/python python /usr/bin/python3 1
-
-RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$KUBE_VERSION/bin/linux/amd64/kubectl \
-&& chmod a+x kubectl && cp kubectl /usr/local/bin/kubectl
+&& apt update -y && apt-get install --no-install-recommends -y docker-ce \
+&& rm -rf /var/lib/apt/lists/*
 
 # Some tools like yamllint need this
+# Pip needs this as well at the moment to install ansible
+# (and potentially other packages)
+# See: https://github.com/pypa/pip/issues/10219
 ENV LANG=C.UTF-8
+
+WORKDIR /kubespray
+COPY . .
+RUN /usr/bin/python3 -m pip install --no-cache-dir pip -U \
+&& /usr/bin/python3 -m pip install --no-cache-dir -r tests/requirements.txt \
+&& python3 -m pip install --no-cache-dir -r requirements.txt \
+&& update-alternatives --install /usr/bin/python python /usr/bin/python3 1
+
+RUN KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
+&& curl -LO https://storage.googleapis.com/kubernetes-release/release/$KUBE_VERSION/bin/linux/amd64/kubectl \
+&& chmod a+x kubectl \
+&& mv kubectl /usr/local/bin/kubectl
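With this change the kubectl baked into the image follows `kube_version` from `roles/kubespray-defaults/defaults/main.yaml` instead of a hard-coded `ENV KUBE_VERSION`. A rough local check of the rebuilt image (the `kubespray:dev` tag is an arbitrary example):

```ShellSession
docker build -t kubespray:dev .
# Both tools should now be present inside the image
docker run --rm kubespray:dev kubectl version --client
docker run --rm kubespray:dev ansible --version
```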
Makefile (4 changes)

@@ -1,5 +1,7 @@
 mitogen:
-ansible-playbook -c local mitogen.yml -vv
+@echo Mitogen support is deprecated.
+@echo Please run the following command manually:
+@echo ansible-playbook -c local mitogen.yml -vv
 clean:
 rm -rf dist/
 rm *.retry
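After this change `make mitogen` no longer runs the playbook itself; it only prints guidance. To actually enable the now-deprecated Mitogen support you would run the echoed command by hand:

```ShellSession
# Printed by `make mitogen`; run manually if you still want Mitogen support
ansible-playbook -c local mitogen.yml -vv
```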
@@ -7,11 +7,14 @@ aliases:
 - woopstar
 - luckysb
 - floryut
+- oomichi
 kubespray-reviewers:
 - holmsten
 - bozzo
 - eppo
 - oomichi
+- jayonlau
+- cristicalin
 kubespray-emeritus_approvers:
 - riverzhang
 - atoms
README.md (85 changes)

@@ -5,7 +5,7 @@
 If you have questions, check the documentation at [kubespray.io](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
 You can get your invite [here](http://slack.k8s.io/)
 
-- Can be deployed on **[AWS](docs/aws.md), GCE, [Azure](docs/azure.md), [OpenStack](docs/openstack.md), [vSphere](docs/vsphere.md), [Packet](docs/packet.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
+- Can be deployed on **[AWS](docs/aws.md), GCE, [Azure](docs/azure.md), [OpenStack](docs/openstack.md), [vSphere](docs/vsphere.md), [Equinix Metal](docs/equinix-metal.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
 - **Highly available** cluster
 - **Composable** (Choice of the network plugin for instance)
 - Supports most popular **Linux distributions**

@@ -32,7 +32,7 @@ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inv
 
 # Review and change parameters under ``inventory/mycluster/group_vars``
 cat inventory/mycluster/group_vars/all/all.yml
-cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
+cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
 
 # Deploy Kubespray with Ansible Playbook - run the playbook as root
 # The option `--become` is required, as for example writing SSL keys in /etc/,
@@ -48,11 +48,23 @@ As a consequence, `ansible-playbook` command will fail with:
 ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
 ```
 
-probably pointing on a task depending on a module present in requirements.txt (i.e. "unseal vault").
+probably pointing on a task depending on a module present in requirements.txt.
 
 One way of solving this would be to uninstall the Ansible package and then, to install it via pip but it is not always possible.
 A workaround consists of setting `ANSIBLE_LIBRARY` and `ANSIBLE_MODULE_UTILS` environment variables respectively to the `ansible/modules` and `ansible/module_utils` subdirectories of pip packages installation location, which can be found in the Location field of the output of `pip show [package]` before executing `ansible-playbook`.
 
+A simple way to ensure you get all the correct version of Ansible is to use the [pre-built docker image from Quay](https://quay.io/repository/kubespray/kubespray?tab=tags).
+You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mounts/) to get the inventory and ssh key into the container, like this:
+
+```ShellSession
+docker pull quay.io/kubespray/kubespray:v2.17.1
+docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
+--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
+quay.io/kubespray/kubespray:v2.17.1 bash
+# Inside the container you may now run the kubespray playbooks:
+ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
+```
+
 ### Vagrant
 
 For Vagrant we need to install python dependencies for provisioning tasks.
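For the `ANSIBLE_LIBRARY` / `ANSIBLE_MODULE_UTILS` workaround kept as context above, a hypothetical shell session might look like the following; the package name `ansible` and the resulting path are only examples and will differ per system:

```ShellSession
pip show ansible | grep ^Location:
# Location: /usr/lib/python3/dist-packages    <- example output, copy your own value
export ANSIBLE_LIBRARY=/usr/lib/python3/dist-packages/ansible/modules
export ANSIBLE_MODULE_UTILS=/usr/lib/python3/dist-packages/ansible/module_utils
ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml
```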
@@ -93,7 +105,7 @@ vagrant up
 - [AWS](docs/aws.md)
 - [Azure](docs/azure.md)
 - [vSphere](docs/vsphere.md)
-- [Packet Host](docs/packet.md)
+- [Equinix Metal](docs/equinix-metal.md)
 - [Large deployments](docs/large-deployments.md)
 - [Adding/replacing a node](docs/nodes.md)
 - [Upgrades basics](docs/upgrades.md)
@@ -103,51 +115,56 @@ vagrant up
|
|||||||
## Supported Linux Distributions
|
## Supported Linux Distributions
|
||||||
|
|
||||||
- **Flatcar Container Linux by Kinvolk**
|
- **Flatcar Container Linux by Kinvolk**
|
||||||
- **Debian** Buster, Jessie, Stretch, Wheezy
|
- **Debian** Bullseye, Buster, Jessie, Stretch
|
||||||
- **Ubuntu** 16.04, 18.04, 20.04
|
- **Ubuntu** 16.04, 18.04, 20.04
|
||||||
- **CentOS/RHEL** 7, 8 (experimental: see [centos 8 notes](docs/centos8.md))
|
- **CentOS/RHEL** 7, [8](docs/centos8.md)
|
||||||
- **Fedora** 32, 33
|
- **Fedora** 34, 35
|
||||||
- **Fedora CoreOS** (experimental: see [fcos Note](docs/fcos.md))
|
- **Fedora CoreOS** (see [fcos Note](docs/fcos.md))
|
||||||
- **openSUSE** Leap 15.x/Tumbleweed
|
- **openSUSE** Leap 15.x/Tumbleweed
|
||||||
- **Oracle Linux** 7, 8 (experimental: [centos 8 notes](docs/centos8.md) apply)
|
- **Oracle Linux** 7, [8](docs/centos8.md)
|
||||||
|
- **Alma Linux** [8](docs/centos8.md)
|
||||||
|
- **Rocky Linux** [8](docs/centos8.md)
|
||||||
|
- **Amazon Linux 2** (experimental: see [amazon linux notes](docs/amazonlinux.md))
|
||||||
|
|
||||||
Note: Upstart/SysV init based OS types are not supported.
|
Note: Upstart/SysV init based OS types are not supported.
|
||||||
|
|
||||||
## Supported Components
|
## Supported Components
|
||||||
|
|
||||||
- Core
|
- Core
|
||||||
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.19.9
|
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.22.5
|
||||||
- [etcd](https://github.com/coreos/etcd) v3.4.13
|
- [etcd](https://github.com/coreos/etcd) v3.5.0
|
||||||
- [docker](https://www.docker.com/) v19.03 (see note)
|
- [docker](https://www.docker.com/) v20.10 (see note)
|
||||||
- [containerd](https://containerd.io/) v1.3.9
|
- [containerd](https://containerd.io/) v1.5.8
|
||||||
- [cri-o](http://cri-o.io/) v1.19 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
|
- [cri-o](http://cri-o.io/) v1.22 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
|
||||||
- Network Plugin
|
- Network Plugin
|
||||||
- [cni-plugins](https://github.com/containernetworking/plugins) v0.9.0
|
- [cni-plugins](https://github.com/containernetworking/plugins) v1.0.1
|
||||||
- [calico](https://github.com/projectcalico/calico) v3.16.9
|
- [calico](https://github.com/projectcalico/calico) v3.20.3
|
||||||
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
|
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
|
||||||
- [cilium](https://github.com/cilium/cilium) v1.8.8
|
- [cilium](https://github.com/cilium/cilium) v1.9.11
|
||||||
- [flanneld](https://github.com/coreos/flannel) v0.13.0
|
- [flanneld](https://github.com/flannel-io/flannel) v0.15.1
|
||||||
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.6.1
|
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.8.1
|
||||||
- [kube-router](https://github.com/cloudnativelabs/kube-router) v1.1.1
|
- [kube-router](https://github.com/cloudnativelabs/kube-router) v1.3.2
|
||||||
- [multus](https://github.com/intel/multus-cni) v3.7.0
|
- [multus](https://github.com/intel/multus-cni) v3.8
|
||||||
- [ovn4nfv](https://github.com/opnfv/ovn4nfv-k8s-plugin) v1.1.0
|
- [weave](https://github.com/weaveworks/weave) v2.8.1
|
||||||
- [weave](https://github.com/weaveworks/weave) v2.7.0
|
|
||||||
- Application
|
- Application
|
||||||
- [ambassador](https://github.com/datawire/ambassador): v1.5
|
|
||||||
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
|
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
|
||||||
- [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
|
- [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
|
||||||
- [cert-manager](https://github.com/jetstack/cert-manager) v0.16.1
|
- [cert-manager](https://github.com/jetstack/cert-manager) v1.5.4
|
||||||
- [coredns](https://github.com/coredns/coredns) v1.7.0
|
- [coredns](https://github.com/coredns/coredns) v1.8.0
|
||||||
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.41.2
|
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.0.4
|
||||||
|
|
||||||
Note: The list of available docker version is 18.09, 19.03 and 20.10. The recommended docker version is 19.03. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
|
## Container Runtime Notes
|
||||||
|
|
||||||
|
- The list of available docker version is 18.09, 19.03 and 20.10. The recommended docker version is 20.10. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
|
||||||
|
- The cri-o version should be aligned with the respective kubernetes version (i.e. kube_version=1.20.x, crio_version=1.20)
|
||||||
|
|
||||||
## Requirements
|
## Requirements
|
||||||
|
|
||||||
- **Minimum required version of Kubernetes is v1.18**
|
- **Minimum required version of Kubernetes is v1.20**
|
||||||
- **Ansible v2.9.x, Jinja 2.11+ and python-netaddr are installed on the machine that will run Ansible commands; Ansible 2.10.x is not supported for now**
|
- **Ansible v2.9.x, Jinja 2.11+ and python-netaddr are installed on the machine that will run Ansible commands; Ansible 2.10.x is experimentally supported for now**
|
||||||
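A sketch of satisfying these control-machine requirements with pip; the explicit version pins are only illustrative, and the repository's own `requirements.txt` remains the authoritative list:

```bash
# Install the pinned set shipped with the repository
pip3 install -r requirements.txt

# Or, illustratively, install the pieces named above by hand
pip3 install "ansible>=2.9,<2.10" "jinja2>=2.11" netaddr
```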
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
|
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
|
||||||
- The target servers are configured to allow **IPv4 forwarding**.
|
- The target servers are configured to allow **IPv4 forwarding**.
|
||||||
|
- If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
|
||||||
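To make the forwarding requirement concrete, a sketch of enabling it by hand on a target host (the sysctl file name is arbitrary; whether your images already ship with forwarding enabled depends on your environment):

```bash
# Enable forwarding immediately
sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -w net.ipv6.conf.all.forwarding=1   # only needed when using IPv6 for pods/services

# Persist across reboots (file name is arbitrary)
printf 'net.ipv4.ip_forward=1\nnet.ipv6.conf.all.forwarding=1\n' | \
  sudo tee /etc/sysctl.d/99-forwarding.conf
```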
- The **firewalls are not managed**: you'll need to implement your own rules the way you used to.
|
- The **firewalls are not managed**: you'll need to implement your own rules the way you used to.
|
||||||
In order to avoid any issues during deployment, you should disable your firewall.
|
In order to avoid any issues during deployment, you should disable your firewall.
|
||||||
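For example (illustrative only; which service applies depends on the distribution):

```bash
# firewalld-based systems (CentOS/RHEL/Fedora/openSUSE)
sudo systemctl disable --now firewalld

# ufw-based systems (Ubuntu/Debian)
sudo ufw disable
```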
- If kubespray is run from a non-root user account, a correct privilege escalation method
|
- If kubespray is run from a non-root user account, a correct privilege escalation method
|
||||||
@@ -177,8 +194,6 @@ You can choose between 10 network plugins. (default: `calico`, except Vagrant us
|
|||||||
|
|
||||||
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
|
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
|
||||||
|
|
||||||
- [ovn4nfv](docs/ovn4nfv.md): [ovn4nfv-k8s-plugins](https://github.com/opnfv/ovn4nfv-k8s-plugin) is the network controller, OVS agent and CNI server to offer basic SFC and OVN overlay networking.
|
|
||||||
|
|
||||||
- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
|
- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
|
||||||
(Please refer to `weave` [troubleshooting documentation](https://www.weave.works/docs/net/latest/troubleshooting/)).
|
(Please refer to `weave` [troubleshooting documentation](https://www.weave.works/docs/net/latest/troubleshooting/)).
|
||||||
|
|
||||||
@@ -199,10 +214,10 @@ See also [Network checker](docs/netcheck.md).
|
|||||||
|
|
||||||
## Ingress Plugins
|
## Ingress Plugins
|
||||||
|
|
||||||
- [ambassador](docs/ambassador.md): the Ambassador Ingress Controller and API gateway.
|
|
||||||
|
|
||||||
- [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.
|
- [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.
|
||||||
|
|
||||||
|
- [metallb](docs/metallb.md): the MetalLB bare-metal service LoadBalancer provider.
|
||||||
|
|
||||||
## Community docs and resources
|
## Community docs and resources
|
||||||
|
|
||||||
- [kubernetes.io/docs/setup/production-environment/tools/kubespray/](https://kubernetes.io/docs/setup/production-environment/tools/kubespray/)
|
- [kubernetes.io/docs/setup/production-environment/tools/kubespray/](https://kubernetes.io/docs/setup/production-environment/tools/kubespray/)
|
||||||
@@ -219,6 +234,6 @@ See also [Network checker](docs/netcheck.md).
|
|||||||
|
|
||||||
[](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)
|
[](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)
|
||||||
|
|
||||||
CI/end-to-end tests sponsored by: [CNCF](https://cncf.io), [Packet](https://www.packet.com/), [OVHcloud](https://www.ovhcloud.com/), [ELASTX](https://elastx.se/).
|
CI/end-to-end tests sponsored by: [CNCF](https://cncf.io), [Equinix Metal](https://metal.equinix.com/), [OVHcloud](https://www.ovhcloud.com/), [ELASTX](https://elastx.se/).
|
||||||
|
|
||||||
See the [test matrix](docs/test_cases.md) for details.
|
See the [test matrix](docs/test_cases.md) for details.
|
||||||
|
|||||||
32
Vagrantfile
vendored
@@ -26,8 +26,8 @@ SUPPORTED_OS = {
|
|||||||
"centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
|
"centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
|
||||||
"centos8" => {box: "centos/8", user: "vagrant"},
|
"centos8" => {box: "centos/8", user: "vagrant"},
|
||||||
"centos8-bento" => {box: "bento/centos-8", user: "vagrant"},
|
"centos8-bento" => {box: "bento/centos-8", user: "vagrant"},
|
||||||
"fedora32" => {box: "fedora/32-cloud-base", user: "vagrant"},
|
"fedora34" => {box: "fedora/34-cloud-base", user: "vagrant"},
|
||||||
"fedora33" => {box: "fedora/33-cloud-base", user: "vagrant"},
|
"fedora35" => {box: "fedora/35-cloud-base", user: "vagrant"},
|
||||||
"opensuse" => {box: "bento/opensuse-leap-15.2", user: "vagrant"},
|
"opensuse" => {box: "bento/opensuse-leap-15.2", user: "vagrant"},
|
||||||
"opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"},
|
"opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"},
|
||||||
"oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
|
"oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
|
||||||
@@ -49,12 +49,13 @@ $vm_cpus ||= 2
|
|||||||
$shared_folders ||= {}
|
$shared_folders ||= {}
|
||||||
$forwarded_ports ||= {}
|
$forwarded_ports ||= {}
|
||||||
$subnet ||= "172.18.8"
|
$subnet ||= "172.18.8"
|
||||||
|
$subnet_ipv6 ||= "fd3c:b398:0698:0756"
|
||||||
$os ||= "ubuntu1804"
|
$os ||= "ubuntu1804"
|
||||||
$network_plugin ||= "flannel"
|
$network_plugin ||= "flannel"
|
||||||
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
|
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
|
||||||
$multi_networking ||= false
|
$multi_networking ||= false
|
||||||
$download_run_once ||= "True"
|
$download_run_once ||= "True"
|
||||||
$download_force_cache ||= "True"
|
$download_force_cache ||= "False"
|
||||||
# The first three nodes are etcd servers
|
# The first three nodes are etcd servers
|
||||||
$etcd_instances ||= $num_instances
|
$etcd_instances ||= $num_instances
|
||||||
# The first two nodes are kube masters
|
# The first two nodes are kube masters
|
||||||
@@ -85,9 +86,9 @@ $inventory = File.absolute_path($inventory, File.dirname(__FILE__))
|
|||||||
if ! File.exist?(File.join(File.dirname($inventory), "hosts.ini"))
|
if ! File.exist?(File.join(File.dirname($inventory), "hosts.ini"))
|
||||||
$vagrant_ansible = File.join(File.dirname(__FILE__), ".vagrant", "provisioners", "ansible")
|
$vagrant_ansible = File.join(File.dirname(__FILE__), ".vagrant", "provisioners", "ansible")
|
||||||
FileUtils.mkdir_p($vagrant_ansible) if ! File.exist?($vagrant_ansible)
|
FileUtils.mkdir_p($vagrant_ansible) if ! File.exist?($vagrant_ansible)
|
||||||
if ! File.exist?(File.join($vagrant_ansible,"inventory"))
|
$vagrant_inventory = File.join($vagrant_ansible,"inventory")
|
||||||
FileUtils.ln_s($inventory, File.join($vagrant_ansible,"inventory"))
|
FileUtils.rm_f($vagrant_inventory)
|
||||||
end
|
FileUtils.ln_s($inventory, $vagrant_inventory)
|
||||||
end
|
end
|
||||||
|
|
||||||
if Vagrant.has_plugin?("vagrant-proxyconf")
|
if Vagrant.has_plugin?("vagrant-proxyconf")
|
||||||
@@ -194,11 +195,22 @@ Vagrant.configure("2") do |config|
|
|||||||
end
|
end
|
||||||
|
|
||||||
ip = "#{$subnet}.#{i+100}"
|
ip = "#{$subnet}.#{i+100}"
|
||||||
node.vm.network :private_network, ip: ip
|
node.vm.network :private_network, ip: ip,
|
||||||
|
:libvirt__guest_ipv6 => 'yes',
|
||||||
|
:libvirt__ipv6_address => "#{$subnet_ipv6}::#{i+100}",
|
||||||
|
:libvirt__ipv6_prefix => "64",
|
||||||
|
:libvirt__forward_mode => "none",
|
||||||
|
:libvirt__dhcp_enabled => false
|
||||||
|
|
||||||
# Disable swap for each vm
|
# Disable swap for each vm
|
||||||
node.vm.provision "shell", inline: "swapoff -a"
|
node.vm.provision "shell", inline: "swapoff -a"
|
||||||
|
|
||||||
|
# ubuntu1804 and ubuntu2004 have IPv6 explicitly disabled. This undoes that.
|
||||||
|
if ["ubuntu1804", "ubuntu2004"].include? $os
|
||||||
|
node.vm.provision "shell", inline: "rm -f /etc/modprobe.d/local.conf"
|
||||||
|
node.vm.provision "shell", inline: "sed -i '/net.ipv6.conf.all.disable_ipv6/d' /etc/sysctl.d/99-sysctl.conf /etc/sysctl.conf"
|
||||||
|
end
|
||||||
|
|
||||||
# Disable firewalld on oraclelinux/redhat vms
|
# Disable firewalld on oraclelinux/redhat vms
|
||||||
if ["oraclelinux","oraclelinux8","rhel7","rhel8"].include? $os
|
if ["oraclelinux","oraclelinux8","rhel7","rhel8"].include? $os
|
||||||
node.vm.provision "shell", inline: "systemctl stop firewalld; systemctl disable firewalld"
|
node.vm.provision "shell", inline: "systemctl stop firewalld; systemctl disable firewalld"
|
||||||
@@ -241,9 +253,9 @@ Vagrant.configure("2") do |config|
|
|||||||
#ansible.tags = ['download']
|
#ansible.tags = ['download']
|
||||||
ansible.groups = {
|
ansible.groups = {
|
||||||
"etcd" => ["#{$instance_name_prefix}-[1:#{$etcd_instances}]"],
|
"etcd" => ["#{$instance_name_prefix}-[1:#{$etcd_instances}]"],
|
||||||
"kube-master" => ["#{$instance_name_prefix}-[1:#{$kube_master_instances}]"],
|
"kube_control_plane" => ["#{$instance_name_prefix}-[1:#{$kube_master_instances}]"],
|
||||||
"kube-node" => ["#{$instance_name_prefix}-[1:#{$kube_node_instances}]"],
|
"kube_node" => ["#{$instance_name_prefix}-[1:#{$kube_node_instances}]"],
|
||||||
"k8s-cluster:children" => ["kube-master", "kube-node"],
|
"k8s_cluster:children" => ["kube_control_plane", "kube_node"],
|
||||||
}
|
}
|
||||||
end
|
end
|
||||||
end
|
end
|
||||||
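The underscore-style group names introduced throughout this change (`kube_control_plane`, `kube_node`, `k8s_cluster`, `calico_rr`) can also be used directly in a hand-written static inventory; a sketch with placeholder host names, addresses and file path:

```bash
# Hypothetical minimal inventory using the renamed groups
cat > inventory/mycluster/inventory.ini <<'EOF'
[kube_control_plane]
node1 ansible_host=10.10.1.3

[etcd]
node1 ansible_host=10.10.1.3

[kube_node]
node2 ansible_host=10.10.1.4
node3 ansible_host=10.10.1.5

[k8s_cluster:children]
kube_control_plane
kube_node
EOF
```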
|
|||||||
@@ -3,7 +3,6 @@ pipelining=True
|
|||||||
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
|
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
|
||||||
#control_path = ~/.ssh/ansible-%%r@%%h:%%p
|
#control_path = ~/.ssh/ansible-%%r@%%h:%%p
|
||||||
[defaults]
|
[defaults]
|
||||||
strategy_plugins = plugins/mitogen/ansible_mitogen/plugins/strategy
|
|
||||||
# https://github.com/ansible/ansible/issues/56930 (to ignore group names with - and .)
|
# https://github.com/ansible/ansible/issues/56930 (to ignore group names with - and .)
|
||||||
force_valid_group_names = ignore
|
force_valid_group_names = ignore
|
||||||
|
|
||||||
|
|||||||
@@ -4,8 +4,10 @@
|
|||||||
become: no
|
become: no
|
||||||
vars:
|
vars:
|
||||||
minimal_ansible_version: 2.9.0
|
minimal_ansible_version: 2.9.0
|
||||||
maximal_ansible_version: 2.10.0
|
minimal_ansible_version_2_10: 2.10.11
|
||||||
|
maximal_ansible_version: 2.12.0
|
||||||
ansible_connection: local
|
ansible_connection: local
|
||||||
|
tags: always
|
||||||
tasks:
|
tasks:
|
||||||
- name: "Check {{ minimal_ansible_version }} <= Ansible version < {{ maximal_ansible_version }}"
|
- name: "Check {{ minimal_ansible_version }} <= Ansible version < {{ maximal_ansible_version }}"
|
||||||
assert:
|
assert:
|
||||||
@@ -15,3 +17,29 @@
|
|||||||
- ansible_version.string is version(maximal_ansible_version, "<")
|
- ansible_version.string is version(maximal_ansible_version, "<")
|
||||||
tags:
|
tags:
|
||||||
- check
|
- check
|
||||||
|
|
||||||
|
- name: "Check Ansible version > {{ minimal_ansible_version_2_10 }} when using ansible 2.10"
|
||||||
|
assert:
|
||||||
|
msg: "When using Ansible 2.10, the minimum supported version is {{ minimal_ansible_version_2_10 }}"
|
||||||
|
that:
|
||||||
|
- ansible_version.string is version(minimal_ansible_version_2_10, ">=")
|
||||||
|
- ansible_version.string is version(maximal_ansible_version, "<")
|
||||||
|
when:
|
||||||
|
- ansible_version.string is version('2.10.0', ">=")
|
||||||
|
tags:
|
||||||
|
- check
|
||||||
|
|
||||||
|
- name: "Check that python netaddr is installed"
|
||||||
|
assert:
|
||||||
|
msg: "Python netaddr is not present"
|
||||||
|
that: "'127.0.0.1' | ipaddr"
|
||||||
|
tags:
|
||||||
|
- check
|
||||||
|
|
||||||
|
# CentOS 7 provides too old jinja version
|
||||||
|
- name: "Check that jinja is not too old (install via pip)"
|
||||||
|
assert:
|
||||||
|
msg: "Your Jinja version is too old, install via pip"
|
||||||
|
that: "{% set test %}It works{% endset %}{{ test == 'It works' }}"
|
||||||
|
tags:
|
||||||
|
- check
|
||||||
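The same checks can be reproduced ad hoc on the control machine before running any playbook; this is only a convenience sketch mirroring the assertions above:

```bash
# Version gate: 2.9.x, or >= 2.10.11 when on the 2.10 series
ansible --version

# The ipaddr filter fails with a clear error if python netaddr is missing
ansible localhost -m debug -a "msg={{ '127.0.0.1' | ipaddr }}"
```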
|
|||||||
36
cluster.yml
@@ -2,6 +2,9 @@
|
|||||||
- name: Check ansible version
|
- name: Check ansible version
|
||||||
import_playbook: ansible_version.yml
|
import_playbook: ansible_version.yml
|
||||||
|
|
||||||
|
- name: Ensure compatibility with old groups
|
||||||
|
import_playbook: legacy_groups.yml
|
||||||
|
|
||||||
- hosts: bastion[0]
|
- hosts: bastion[0]
|
||||||
gather_facts: False
|
gather_facts: False
|
||||||
environment: "{{ proxy_disable_env }}"
|
environment: "{{ proxy_disable_env }}"
|
||||||
@@ -9,7 +12,7 @@
|
|||||||
- { role: kubespray-defaults }
|
- { role: kubespray-defaults }
|
||||||
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
|
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
|
||||||
|
|
||||||
- hosts: k8s-cluster:etcd
|
- hosts: k8s_cluster:etcd
|
||||||
strategy: linear
|
strategy: linear
|
||||||
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
||||||
gather_facts: false
|
gather_facts: false
|
||||||
@@ -22,14 +25,14 @@
|
|||||||
tags: always
|
tags: always
|
||||||
import_playbook: facts.yml
|
import_playbook: facts.yml
|
||||||
|
|
||||||
- hosts: k8s-cluster:etcd
|
- hosts: k8s_cluster:etcd
|
||||||
gather_facts: False
|
gather_facts: False
|
||||||
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
||||||
environment: "{{ proxy_disable_env }}"
|
environment: "{{ proxy_disable_env }}"
|
||||||
roles:
|
roles:
|
||||||
- { role: kubespray-defaults }
|
- { role: kubespray-defaults }
|
||||||
- { role: kubernetes/preinstall, tags: preinstall }
|
- { role: kubernetes/preinstall, tags: preinstall }
|
||||||
- { role: "container-engine", tags: "container-engine", when: deploy_container_engine|default(true) }
|
- { role: "container-engine", tags: "container-engine", when: deploy_container_engine }
|
||||||
- { role: download, tags: download, when: "not skip_downloads" }
|
- { role: download, tags: download, when: "not skip_downloads" }
|
||||||
|
|
||||||
- hosts: etcd
|
- hosts: etcd
|
||||||
@@ -45,7 +48,7 @@
|
|||||||
etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
|
etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
|
||||||
when: not etcd_kubeadm_enabled| default(false)
|
when: not etcd_kubeadm_enabled| default(false)
|
||||||
|
|
||||||
- hosts: k8s-cluster
|
- hosts: k8s_cluster
|
||||||
gather_facts: False
|
gather_facts: False
|
||||||
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
||||||
environment: "{{ proxy_disable_env }}"
|
environment: "{{ proxy_disable_env }}"
|
||||||
@@ -58,7 +61,7 @@
|
|||||||
etcd_events_cluster_setup: false
|
etcd_events_cluster_setup: false
|
||||||
when: not etcd_kubeadm_enabled| default(false)
|
when: not etcd_kubeadm_enabled| default(false)
|
||||||
|
|
||||||
- hosts: k8s-cluster
|
- hosts: k8s_cluster
|
||||||
gather_facts: False
|
gather_facts: False
|
||||||
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
||||||
environment: "{{ proxy_disable_env }}"
|
environment: "{{ proxy_disable_env }}"
|
||||||
@@ -66,27 +69,27 @@
|
|||||||
- { role: kubespray-defaults }
|
- { role: kubespray-defaults }
|
||||||
- { role: kubernetes/node, tags: node }
|
- { role: kubernetes/node, tags: node }
|
||||||
|
|
||||||
- hosts: kube-master
|
- hosts: kube_control_plane
|
||||||
gather_facts: False
|
gather_facts: False
|
||||||
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
||||||
environment: "{{ proxy_disable_env }}"
|
environment: "{{ proxy_disable_env }}"
|
||||||
roles:
|
roles:
|
||||||
- { role: kubespray-defaults }
|
- { role: kubespray-defaults }
|
||||||
- { role: kubernetes/master, tags: master }
|
- { role: kubernetes/control-plane, tags: master }
|
||||||
- { role: kubernetes/client, tags: client }
|
- { role: kubernetes/client, tags: client }
|
||||||
- { role: kubernetes-apps/cluster_roles, tags: cluster-roles }
|
- { role: kubernetes-apps/cluster_roles, tags: cluster-roles }
|
||||||
|
|
||||||
- hosts: k8s-cluster
|
- hosts: k8s_cluster
|
||||||
gather_facts: False
|
gather_facts: False
|
||||||
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
||||||
environment: "{{ proxy_disable_env }}"
|
environment: "{{ proxy_disable_env }}"
|
||||||
roles:
|
roles:
|
||||||
- { role: kubespray-defaults }
|
- { role: kubespray-defaults }
|
||||||
- { role: kubernetes/kubeadm, tags: kubeadm}
|
- { role: kubernetes/kubeadm, tags: kubeadm}
|
||||||
- { role: network_plugin, tags: network }
|
|
||||||
- { role: kubernetes/node-label, tags: node-label }
|
- { role: kubernetes/node-label, tags: node-label }
|
||||||
|
- { role: network_plugin, tags: network }
|
||||||
|
|
||||||
- hosts: calico-rr
|
- hosts: calico_rr
|
||||||
gather_facts: False
|
gather_facts: False
|
||||||
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
||||||
environment: "{{ proxy_disable_env }}"
|
environment: "{{ proxy_disable_env }}"
|
||||||
@@ -94,7 +97,7 @@
|
|||||||
- { role: kubespray-defaults }
|
- { role: kubespray-defaults }
|
||||||
- { role: network_plugin/calico/rr, tags: ['network', 'calico_rr'] }
|
- { role: network_plugin/calico/rr, tags: ['network', 'calico_rr'] }
|
||||||
|
|
||||||
- hosts: kube-master[0]
|
- hosts: kube_control_plane[0]
|
||||||
gather_facts: False
|
gather_facts: False
|
||||||
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
||||||
environment: "{{ proxy_disable_env }}"
|
environment: "{{ proxy_disable_env }}"
|
||||||
@@ -102,7 +105,7 @@
|
|||||||
- { role: kubespray-defaults }
|
- { role: kubespray-defaults }
|
||||||
- { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"] }
|
- { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"] }
|
||||||
|
|
||||||
- hosts: kube-master
|
- hosts: kube_control_plane
|
||||||
gather_facts: False
|
gather_facts: False
|
||||||
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
||||||
environment: "{{ proxy_disable_env }}"
|
environment: "{{ proxy_disable_env }}"
|
||||||
@@ -113,16 +116,9 @@
|
|||||||
- { role: kubernetes-apps/policy_controller, tags: policy-controller }
|
- { role: kubernetes-apps/policy_controller, tags: policy-controller }
|
||||||
- { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
|
- { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
|
||||||
- { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
|
- { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
|
||||||
|
|
||||||
- hosts: kube-master
|
|
||||||
gather_facts: False
|
|
||||||
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
|
||||||
environment: "{{ proxy_disable_env }}"
|
|
||||||
roles:
|
|
||||||
- { role: kubespray-defaults }
|
|
||||||
- { role: kubernetes-apps, tags: apps }
|
- { role: kubernetes-apps, tags: apps }
|
||||||
|
|
||||||
- hosts: k8s-cluster
|
- hosts: k8s_cluster
|
||||||
gather_facts: False
|
gather_facts: False
|
||||||
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
|
||||||
environment: "{{ proxy_disable_env }}"
|
environment: "{{ proxy_disable_env }}"
|
||||||
|
|||||||
@@ -35,7 +35,7 @@ class SearchEC2Tags(object):
|
|||||||
hosts['_meta'] = { 'hostvars': {} }
|
hosts['_meta'] = { 'hostvars': {} }
|
||||||
|
|
||||||
##Search ec2 three times to find nodes of each group type. Relies on kubespray-role key/value.
|
##Search ec2 three times to find nodes of each group type. Relies on kubespray-role key/value.
|
||||||
for group in ["kube-master", "kube-node", "etcd"]:
|
for group in ["kube_control_plane", "kube_node", "etcd"]:
|
||||||
hosts[group] = []
|
hosts[group] = []
|
||||||
tag_key = "kubespray-role"
|
tag_key = "kubespray-role"
|
||||||
tag_value = ["*"+group+"*"]
|
tag_value = ["*"+group+"*"]
|
||||||
@@ -69,8 +69,8 @@ class SearchEC2Tags(object):
|
|||||||
|
|
||||||
hosts[group].append(dns_name)
|
hosts[group].append(dns_name)
|
||||||
hosts['_meta']['hostvars'][dns_name] = ansible_host
|
hosts['_meta']['hostvars'][dns_name] = ansible_host
|
||||||
|
|
||||||
hosts['k8s-cluster'] = {'children':['kube-master', 'kube-node']}
|
hosts['k8s_cluster'] = {'children':['kube_control_plane', 'kube_node']}
|
||||||
print(json.dumps(hosts, sort_keys=True, indent=2))
|
print(json.dumps(hosts, sort_keys=True, indent=2))
|
||||||
|
|
||||||
SearchEC2Tags()
|
SearchEC2Tags()
|
||||||
|
|||||||
@@ -12,3 +12,4 @@
|
|||||||
template:
|
template:
|
||||||
src: inventory.j2
|
src: inventory.j2
|
||||||
dest: "{{ playbook_dir }}/inventory"
|
dest: "{{ playbook_dir }}/inventory"
|
||||||
|
mode: 0644
|
||||||
|
|||||||
@@ -7,9 +7,9 @@
|
|||||||
{% endif %}
|
{% endif %}
|
||||||
{% endfor %}
|
{% endfor %}
|
||||||
|
|
||||||
[kube-master]
|
[kube_control_plane]
|
||||||
{% for vm in vm_list %}
|
{% for vm in vm_list %}
|
||||||
{% if 'kube-master' in vm.tags.roles %}
|
{% if 'kube_control_plane' in vm.tags.roles %}
|
||||||
{{ vm.name }}
|
{{ vm.name }}
|
||||||
{% endif %}
|
{% endif %}
|
||||||
{% endfor %}
|
{% endfor %}
|
||||||
@@ -21,13 +21,13 @@
|
|||||||
{% endif %}
|
{% endif %}
|
||||||
{% endfor %}
|
{% endfor %}
|
||||||
|
|
||||||
[kube-node]
|
[kube_node]
|
||||||
{% for vm in vm_list %}
|
{% for vm in vm_list %}
|
||||||
{% if 'kube-node' in vm.tags.roles %}
|
{% if 'kube_node' in vm.tags.roles %}
|
||||||
{{ vm.name }}
|
{{ vm.name }}
|
||||||
{% endif %}
|
{% endif %}
|
||||||
{% endfor %}
|
{% endfor %}
|
||||||
|
|
||||||
[k8s-cluster:children]
|
[k8s_cluster:children]
|
||||||
kube-node
|
kube_node
|
||||||
kube-master
|
kube_control_plane
|
||||||
|
|||||||
@@ -22,8 +22,10 @@
|
|||||||
template:
|
template:
|
||||||
src: inventory.j2
|
src: inventory.j2
|
||||||
dest: "{{ playbook_dir }}/inventory"
|
dest: "{{ playbook_dir }}/inventory"
|
||||||
|
mode: 0644
|
||||||
|
|
||||||
- name: Generate Load Balancer variables
|
- name: Generate Load Balancer variables
|
||||||
template:
|
template:
|
||||||
src: loadbalancer_vars.j2
|
src: loadbalancer_vars.j2
|
||||||
dest: "{{ playbook_dir }}/loadbalancer_vars.yml"
|
dest: "{{ playbook_dir }}/loadbalancer_vars.yml"
|
||||||
|
mode: 0644
|
||||||
|
|||||||
@@ -7,9 +7,9 @@
|
|||||||
{% endif %}
|
{% endif %}
|
||||||
{% endfor %}
|
{% endfor %}
|
||||||
|
|
||||||
[kube-master]
|
[kube_control_plane]
|
||||||
{% for vm in vm_roles_list %}
|
{% for vm in vm_roles_list %}
|
||||||
{% if 'kube-master' in vm.tags.roles %}
|
{% if 'kube_control_plane' in vm.tags.roles %}
|
||||||
{{ vm.name }}
|
{{ vm.name }}
|
||||||
{% endif %}
|
{% endif %}
|
||||||
{% endfor %}
|
{% endfor %}
|
||||||
@@ -21,14 +21,14 @@
|
|||||||
{% endif %}
|
{% endif %}
|
||||||
{% endfor %}
|
{% endfor %}
|
||||||
|
|
||||||
[kube-node]
|
[kube_node]
|
||||||
{% for vm in vm_roles_list %}
|
{% for vm in vm_roles_list %}
|
||||||
{% if 'kube-node' in vm.tags.roles %}
|
{% if 'kube_node' in vm.tags.roles %}
|
||||||
{{ vm.name }}
|
{{ vm.name }}
|
||||||
{% endif %}
|
{% endif %}
|
||||||
{% endfor %}
|
{% endfor %}
|
||||||
|
|
||||||
[k8s-cluster:children]
|
[k8s_cluster:children]
|
||||||
kube-node
|
kube_node
|
||||||
kube-master
|
kube_control_plane
|
||||||
|
|
||||||
|
|||||||
@@ -8,11 +8,13 @@
|
|||||||
path: "{{ base_dir }}"
|
path: "{{ base_dir }}"
|
||||||
state: directory
|
state: directory
|
||||||
recurse: true
|
recurse: true
|
||||||
|
mode: 0755
|
||||||
|
|
||||||
- name: Store json files in base_dir
|
- name: Store json files in base_dir
|
||||||
template:
|
template:
|
||||||
src: "{{ item }}"
|
src: "{{ item }}"
|
||||||
dest: "{{ base_dir }}/{{ item }}"
|
dest: "{{ base_dir }}/{{ item }}"
|
||||||
|
mode: 0644
|
||||||
with_items:
|
with_items:
|
||||||
- network.json
|
- network.json
|
||||||
- storage.json
|
- storage.json
|
||||||
|
|||||||
@@ -144,7 +144,7 @@
|
|||||||
"[concat('Microsoft.Network/networkInterfaces/', 'master-{{i}}-nic')]"
|
"[concat('Microsoft.Network/networkInterfaces/', 'master-{{i}}-nic')]"
|
||||||
],
|
],
|
||||||
"tags": {
|
"tags": {
|
||||||
"roles": "kube-master,etcd"
|
"roles": "kube_control_plane,etcd"
|
||||||
},
|
},
|
||||||
"apiVersion": "{{apiVersion}}",
|
"apiVersion": "{{apiVersion}}",
|
||||||
"properties": {
|
"properties": {
|
||||||
|
|||||||
@@ -61,7 +61,7 @@
|
|||||||
"[concat('Microsoft.Network/networkInterfaces/', 'minion-{{i}}-nic')]"
|
"[concat('Microsoft.Network/networkInterfaces/', 'minion-{{i}}-nic')]"
|
||||||
],
|
],
|
||||||
"tags": {
|
"tags": {
|
||||||
"roles": "kube-node"
|
"roles": "kube_node"
|
||||||
},
|
},
|
||||||
"apiVersion": "{{apiVersion}}",
|
"apiVersion": "{{apiVersion}}",
|
||||||
"properties": {
|
"properties": {
|
||||||
@@ -112,4 +112,4 @@
|
|||||||
} {% if not loop.last %},{% endif %}
|
} {% if not loop.last %},{% endif %}
|
||||||
{% endfor %}
|
{% endfor %}
|
||||||
]
|
]
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -35,6 +35,7 @@
|
|||||||
path-exclude=/usr/share/doc/*
|
path-exclude=/usr/share/doc/*
|
||||||
path-include=/usr/share/doc/*/copyright
|
path-include=/usr/share/doc/*/copyright
|
||||||
dest: /etc/dpkg/dpkg.cfg.d/01_nodoc
|
dest: /etc/dpkg/dpkg.cfg.d/01_nodoc
|
||||||
|
mode: 0644
|
||||||
when:
|
when:
|
||||||
- ansible_os_family == 'Debian'
|
- ansible_os_family == 'Debian'
|
||||||
|
|
||||||
@@ -63,6 +64,7 @@
|
|||||||
copy:
|
copy:
|
||||||
content: "{{ distro_user }} ALL=(ALL) NOPASSWD:ALL"
|
content: "{{ distro_user }} ALL=(ALL) NOPASSWD:ALL"
|
||||||
dest: "/etc/sudoers.d/{{ distro_user }}"
|
dest: "/etc/sudoers.d/{{ distro_user }}"
|
||||||
|
mode: 0640
|
||||||
|
|
||||||
- name: Add my pubkey to "{{ distro_user }}" user authorized keys
|
- name: Add my pubkey to "{{ distro_user }}" user authorized keys
|
||||||
authorized_key:
|
authorized_key:
|
||||||
|
|||||||
@@ -46,7 +46,7 @@ test_distro() {
|
|||||||
pass_or_fail "$prefix: netcheck" || return 1
|
pass_or_fail "$prefix: netcheck" || return 1
|
||||||
}
|
}
|
||||||
|
|
||||||
NODES=($(egrep ^kube-node hosts))
|
NODES=($(egrep ^kube_node hosts))
|
||||||
NETCHECKER_HOST=localhost
|
NETCHECKER_HOST=localhost
|
||||||
|
|
||||||
: ${OUTPUT_DIR:=./out}
|
: ${OUTPUT_DIR:=./out}
|
||||||
|
|||||||
@@ -44,11 +44,11 @@ import re
|
|||||||
import subprocess
|
import subprocess
|
||||||
import sys
|
import sys
|
||||||
|
|
||||||
ROLES = ['all', 'kube-master', 'kube-node', 'etcd', 'k8s-cluster',
|
ROLES = ['all', 'kube_control_plane', 'kube_node', 'etcd', 'k8s_cluster',
|
||||||
'calico-rr']
|
'calico_rr']
|
||||||
PROTECTED_NAMES = ROLES
|
PROTECTED_NAMES = ROLES
|
||||||
AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'print_hostnames',
|
AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'print_hostnames',
|
||||||
'load']
|
'load', 'add']
|
||||||
_boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
|
_boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
|
||||||
'0': False, 'no': False, 'false': False, 'off': False}
|
'0': False, 'no': False, 'false': False, 'off': False}
|
||||||
yaml = YAML()
|
yaml = YAML()
|
||||||
@@ -63,7 +63,9 @@ def get_var_as_bool(name, default):
|
|||||||
|
|
||||||
|
|
||||||
CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory/sample/hosts.yaml")
|
CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory/sample/hosts.yaml")
|
||||||
KUBE_MASTERS = int(os.environ.get("KUBE_MASTERS", 2))
|
# Remove the reference of KUBE_MASTERS after some deprecation cycles.
|
||||||
|
KUBE_CONTROL_HOSTS = int(os.environ.get("KUBE_CONTROL_HOSTS",
|
||||||
|
os.environ.get("KUBE_MASTERS", 2)))
|
||||||
# Reconfigures cluster distribution at scale
|
# Reconfigures cluster distribution at scale
|
||||||
SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 50))
|
SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 50))
|
||||||
MASSIVE_SCALE_THRESHOLD = int(os.environ.get("MASSIVE_SCALE_THRESHOLD", 200))
|
MASSIVE_SCALE_THRESHOLD = int(os.environ.get("MASSIVE_SCALE_THRESHOLD", 200))
|
||||||
@@ -80,32 +82,46 @@ class KubesprayInventory(object):
|
|||||||
def __init__(self, changed_hosts=None, config_file=None):
|
def __init__(self, changed_hosts=None, config_file=None):
|
||||||
self.config_file = config_file
|
self.config_file = config_file
|
||||||
self.yaml_config = {}
|
self.yaml_config = {}
|
||||||
if self.config_file:
|
loadPreviousConfig = False
|
||||||
|
# See whether there are any commands to process
|
||||||
|
if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
|
||||||
|
if changed_hosts[0] == "add":
|
||||||
|
loadPreviousConfig = True
|
||||||
|
changed_hosts = changed_hosts[1:]
|
||||||
|
else:
|
||||||
|
self.parse_command(changed_hosts[0], changed_hosts[1:])
|
||||||
|
sys.exit(0)
|
||||||
|
|
||||||
|
# If the user wants to remove a node, we need to load the config anyway
|
||||||
|
if changed_hosts and changed_hosts[0][0] == "-":
|
||||||
|
loadPreviousConfig = True
|
||||||
|
|
||||||
|
if self.config_file and loadPreviousConfig: # Load previous YAML file
|
||||||
try:
|
try:
|
||||||
self.hosts_file = open(config_file, 'r')
|
self.hosts_file = open(config_file, 'r')
|
||||||
self.yaml_config = yaml.load_all(self.hosts_file)
|
self.yaml_config = yaml.load(self.hosts_file)
|
||||||
except OSError:
|
except OSError as e:
|
||||||
pass
|
# I am assuming we are catching "cannot open file" exceptions
|
||||||
|
print(e)
|
||||||
if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
|
sys.exit(1)
|
||||||
self.parse_command(changed_hosts[0], changed_hosts[1:])
|
|
||||||
sys.exit(0)
|
|
||||||
|
|
||||||
self.ensure_required_groups(ROLES)
|
self.ensure_required_groups(ROLES)
|
||||||
|
|
||||||
if changed_hosts:
|
if changed_hosts:
|
||||||
changed_hosts = self.range2ips(changed_hosts)
|
changed_hosts = self.range2ips(changed_hosts)
|
||||||
self.hosts = self.build_hostnames(changed_hosts)
|
self.hosts = self.build_hostnames(changed_hosts,
|
||||||
|
loadPreviousConfig)
|
||||||
self.purge_invalid_hosts(self.hosts.keys(), PROTECTED_NAMES)
|
self.purge_invalid_hosts(self.hosts.keys(), PROTECTED_NAMES)
|
||||||
self.set_all(self.hosts)
|
self.set_all(self.hosts)
|
||||||
self.set_k8s_cluster()
|
self.set_k8s_cluster()
|
||||||
etcd_hosts_count = 3 if len(self.hosts.keys()) >= 3 else 1
|
etcd_hosts_count = 3 if len(self.hosts.keys()) >= 3 else 1
|
||||||
self.set_etcd(list(self.hosts.keys())[:etcd_hosts_count])
|
self.set_etcd(list(self.hosts.keys())[:etcd_hosts_count])
|
||||||
if len(self.hosts) >= SCALE_THRESHOLD:
|
if len(self.hosts) >= SCALE_THRESHOLD:
|
||||||
self.set_kube_master(list(self.hosts.keys())[
|
self.set_kube_control_plane(list(self.hosts.keys())[
|
||||||
etcd_hosts_count:(etcd_hosts_count + KUBE_MASTERS)])
|
etcd_hosts_count:(etcd_hosts_count + KUBE_CONTROL_HOSTS)])
|
||||||
else:
|
else:
|
||||||
self.set_kube_master(list(self.hosts.keys())[:KUBE_MASTERS])
|
self.set_kube_control_plane(
|
||||||
|
list(self.hosts.keys())[:KUBE_CONTROL_HOSTS])
|
||||||
self.set_kube_node(self.hosts.keys())
|
self.set_kube_node(self.hosts.keys())
|
||||||
if len(self.hosts) >= SCALE_THRESHOLD:
|
if len(self.hosts) >= SCALE_THRESHOLD:
|
||||||
self.set_calico_rr(list(self.hosts.keys())[:etcd_hosts_count])
|
self.set_calico_rr(list(self.hosts.keys())[:etcd_hosts_count])
|
||||||
@@ -155,17 +171,29 @@ class KubesprayInventory(object):
|
|||||||
except IndexError:
|
except IndexError:
|
||||||
raise ValueError("Host name must end in an integer")
|
raise ValueError("Host name must end in an integer")
|
||||||
|
|
||||||
def build_hostnames(self, changed_hosts):
|
# Keeps already specified hosts,
|
||||||
|
# and adds or removes the hosts provided as an argument
|
||||||
|
def build_hostnames(self, changed_hosts, loadPreviousConfig=False):
|
||||||
existing_hosts = OrderedDict()
|
existing_hosts = OrderedDict()
|
||||||
highest_host_id = 0
|
highest_host_id = 0
|
||||||
try:
|
# Load already existing hosts from the YAML
|
||||||
for host in self.yaml_config['all']['hosts']:
|
if loadPreviousConfig:
|
||||||
existing_hosts[host] = self.yaml_config['all']['hosts'][host]
|
try:
|
||||||
host_id = self.get_host_id(host)
|
for host in self.yaml_config['all']['hosts']:
|
||||||
if host_id > highest_host_id:
|
# Read configuration of an existing host
|
||||||
highest_host_id = host_id
|
hostConfig = self.yaml_config['all']['hosts'][host]
|
||||||
except Exception:
|
existing_hosts[host] = hostConfig
|
||||||
pass
|
# If the existing host seems
|
||||||
|
# to have been created automatically, detect its ID
|
||||||
|
if host.startswith(HOST_PREFIX):
|
||||||
|
host_id = self.get_host_id(host)
|
||||||
|
if host_id > highest_host_id:
|
||||||
|
highest_host_id = host_id
|
||||||
|
except Exception as e:
|
||||||
|
# I am assuming we are catching automatically
|
||||||
|
# created hosts without IDs
|
||||||
|
print(e)
|
||||||
|
sys.exit(1)
|
||||||
|
|
||||||
# FIXME(mattymo): Fix condition where delete then add reuses highest id
|
# FIXME(mattymo): Fix condition where delete then add reuses highest id
|
||||||
next_host_id = highest_host_id + 1
|
next_host_id = highest_host_id + 1
|
||||||
@@ -173,6 +201,7 @@ class KubesprayInventory(object):
|
|||||||
|
|
||||||
all_hosts = existing_hosts.copy()
|
all_hosts = existing_hosts.copy()
|
||||||
for host in changed_hosts:
|
for host in changed_hosts:
|
||||||
|
# Delete the host from config if the hostname/IP has a "-" prefix
|
||||||
if host[0] == "-":
|
if host[0] == "-":
|
||||||
realhost = host[1:]
|
realhost = host[1:]
|
||||||
if self.exists_hostname(all_hosts, realhost):
|
if self.exists_hostname(all_hosts, realhost):
|
||||||
@@ -181,6 +210,8 @@ class KubesprayInventory(object):
|
|||||||
elif self.exists_ip(all_hosts, realhost):
|
elif self.exists_ip(all_hosts, realhost):
|
||||||
self.debug("Marked {0} for deletion.".format(realhost))
|
self.debug("Marked {0} for deletion.".format(realhost))
|
||||||
self.delete_host_by_ip(all_hosts, realhost)
|
self.delete_host_by_ip(all_hosts, realhost)
|
||||||
|
# Host/Argument starts with a digit,
|
||||||
|
# then we assume it's an IP address
|
||||||
elif host[0].isdigit():
|
elif host[0].isdigit():
|
||||||
if ',' in host:
|
if ',' in host:
|
||||||
ip, access_ip = host.split(',')
|
ip, access_ip = host.split(',')
|
||||||
@@ -200,11 +231,15 @@ class KubesprayInventory(object):
|
|||||||
next_host = subprocess.check_output(cmd, shell=True)
|
next_host = subprocess.check_output(cmd, shell=True)
|
||||||
next_host = next_host.strip().decode('ascii')
|
next_host = next_host.strip().decode('ascii')
|
||||||
else:
|
else:
|
||||||
|
# Generates a hostname because we have only an IP address
|
||||||
next_host = "{0}{1}".format(HOST_PREFIX, next_host_id)
|
next_host = "{0}{1}".format(HOST_PREFIX, next_host_id)
|
||||||
next_host_id += 1
|
next_host_id += 1
|
||||||
|
# Uses automatically generated node name
|
||||||
|
# in case we don't provide it.
|
||||||
all_hosts[next_host] = {'ansible_host': access_ip,
|
all_hosts[next_host] = {'ansible_host': access_ip,
|
||||||
'ip': ip,
|
'ip': ip,
|
||||||
'access_ip': access_ip}
|
'access_ip': access_ip}
|
||||||
|
# Host/Argument starts with a letter, then we assume it's a hostname
|
||||||
elif host[0].isalpha():
|
elif host[0].isalpha():
|
||||||
if ',' in host:
|
if ',' in host:
|
||||||
try:
|
try:
|
||||||
@@ -223,6 +258,7 @@ class KubesprayInventory(object):
|
|||||||
'access_ip': access_ip}
|
'access_ip': access_ip}
|
||||||
return all_hosts
|
return all_hosts
|
||||||
|
|
||||||
|
# Expand IP ranges into individual addresses
|
||||||
def range2ips(self, hosts):
|
def range2ips(self, hosts):
|
||||||
reworked_hosts = []
|
reworked_hosts = []
|
||||||
|
|
||||||
@@ -266,7 +302,7 @@ class KubesprayInventory(object):
|
|||||||
|
|
||||||
def purge_invalid_hosts(self, hostnames, protected_names=[]):
|
def purge_invalid_hosts(self, hostnames, protected_names=[]):
|
||||||
for role in self.yaml_config['all']['children']:
|
for role in self.yaml_config['all']['children']:
|
||||||
if role != 'k8s-cluster' and self.yaml_config['all']['children'][role]['hosts']: # noqa
|
if role != 'k8s_cluster' and self.yaml_config['all']['children'][role]['hosts']: # noqa
|
||||||
all_hosts = self.yaml_config['all']['children'][role]['hosts'].copy() # noqa
|
all_hosts = self.yaml_config['all']['children'][role]['hosts'].copy() # noqa
|
||||||
for host in all_hosts.keys():
|
for host in all_hosts.keys():
|
||||||
if host not in hostnames and host not in protected_names:
|
if host not in hostnames and host not in protected_names:
|
||||||
@@ -287,52 +323,54 @@ class KubesprayInventory(object):
|
|||||||
if self.yaml_config['all']['hosts'] is None:
|
if self.yaml_config['all']['hosts'] is None:
|
||||||
self.yaml_config['all']['hosts'] = {host: None}
|
self.yaml_config['all']['hosts'] = {host: None}
|
||||||
self.yaml_config['all']['hosts'][host] = opts
|
self.yaml_config['all']['hosts'][host] = opts
|
||||||
elif group != 'k8s-cluster:children':
|
elif group != 'k8s_cluster:children':
|
||||||
if self.yaml_config['all']['children'][group]['hosts'] is None:
|
if self.yaml_config['all']['children'][group]['hosts'] is None:
|
||||||
self.yaml_config['all']['children'][group]['hosts'] = {
|
self.yaml_config['all']['children'][group]['hosts'] = {
|
||||||
host: None}
|
host: None}
|
||||||
else:
|
else:
|
||||||
self.yaml_config['all']['children'][group]['hosts'][host] = None # noqa
|
self.yaml_config['all']['children'][group]['hosts'][host] = None # noqa
|
||||||
|
|
||||||
def set_kube_master(self, hosts):
|
def set_kube_control_plane(self, hosts):
|
||||||
for host in hosts:
|
for host in hosts:
|
||||||
self.add_host_to_group('kube-master', host)
|
self.add_host_to_group('kube_control_plane', host)
|
||||||
|
|
||||||
def set_all(self, hosts):
|
def set_all(self, hosts):
|
||||||
for host, opts in hosts.items():
|
for host, opts in hosts.items():
|
||||||
self.add_host_to_group('all', host, opts)
|
self.add_host_to_group('all', host, opts)
|
||||||
|
|
||||||
def set_k8s_cluster(self):
|
def set_k8s_cluster(self):
|
||||||
k8s_cluster = {'children': {'kube-master': None, 'kube-node': None}}
|
k8s_cluster = {'children': {'kube_control_plane': None,
|
||||||
self.yaml_config['all']['children']['k8s-cluster'] = k8s_cluster
|
'kube_node': None}}
|
||||||
|
self.yaml_config['all']['children']['k8s_cluster'] = k8s_cluster
|
||||||
|
|
||||||
def set_calico_rr(self, hosts):
|
def set_calico_rr(self, hosts):
|
||||||
for host in hosts:
|
for host in hosts:
|
||||||
if host in self.yaml_config['all']['children']['kube-master']:
|
if host in self.yaml_config['all']['children']['kube_control_plane']: # noqa
|
||||||
self.debug("Not adding {0} to calico-rr group because it "
|
self.debug("Not adding {0} to calico_rr group because it "
|
||||||
"conflicts with kube-master group".format(host))
|
"conflicts with kube_control_plane "
|
||||||
|
"group".format(host))
|
||||||
continue
|
continue
|
||||||
if host in self.yaml_config['all']['children']['kube-node']:
|
if host in self.yaml_config['all']['children']['kube_node']:
|
||||||
self.debug("Not adding {0} to calico-rr group because it "
|
self.debug("Not adding {0} to calico_rr group because it "
|
||||||
"conflicts with kube-node group".format(host))
|
"conflicts with kube_node group".format(host))
|
||||||
continue
|
continue
|
||||||
self.add_host_to_group('calico-rr', host)
|
self.add_host_to_group('calico_rr', host)
|
||||||
|
|
||||||
def set_kube_node(self, hosts):
|
def set_kube_node(self, hosts):
|
||||||
for host in hosts:
|
for host in hosts:
|
||||||
if len(self.yaml_config['all']['hosts']) >= SCALE_THRESHOLD:
|
if len(self.yaml_config['all']['hosts']) >= SCALE_THRESHOLD:
|
||||||
if host in self.yaml_config['all']['children']['etcd']['hosts']: # noqa
|
if host in self.yaml_config['all']['children']['etcd']['hosts']: # noqa
|
||||||
self.debug("Not adding {0} to kube-node group because of "
|
self.debug("Not adding {0} to kube_node group because of "
|
||||||
"scale deployment and host is in etcd "
|
"scale deployment and host is in etcd "
|
||||||
"group.".format(host))
|
"group.".format(host))
|
||||||
continue
|
continue
|
||||||
if len(self.yaml_config['all']['hosts']) >= MASSIVE_SCALE_THRESHOLD: # noqa
|
if len(self.yaml_config['all']['hosts']) >= MASSIVE_SCALE_THRESHOLD: # noqa
|
||||||
if host in self.yaml_config['all']['children']['kube-master']['hosts']: # noqa
|
if host in self.yaml_config['all']['children']['kube_control_plane']['hosts']: # noqa
|
||||||
self.debug("Not adding {0} to kube-node group because of "
|
self.debug("Not adding {0} to kube_node group because of "
|
||||||
"scale deployment and host is in kube-master "
|
"scale deployment and host is in "
|
||||||
"group.".format(host))
|
"kube_control_plane group.".format(host))
|
||||||
continue
|
continue
|
||||||
self.add_host_to_group('kube-node', host)
|
self.add_host_to_group('kube_node', host)
|
||||||
|
|
||||||
def set_etcd(self, hosts):
|
def set_etcd(self, hosts):
|
||||||
for host in hosts:
|
for host in hosts:
|
||||||
@@ -389,9 +427,11 @@ help - Display this message
|
|||||||
print_cfg - Write inventory file to stdout
|
print_cfg - Write inventory file to stdout
|
||||||
print_ips - Write a space-delimited list of IPs from "all" group
|
print_ips - Write a space-delimited list of IPs from "all" group
|
||||||
print_hostnames - Write a space-delimited list of Hostnames from "all" group
|
print_hostnames - Write a space-delimited list of Hostnames from "all" group
|
||||||
|
add - Adds specified hosts into an already existing inventory
|
||||||
|
|
||||||
Advanced usage:
|
Advanced usage:
|
||||||
Add another host after initial creation: inventory.py 10.10.1.5
|
Create new or overwrite old inventory file: inventory.py 10.10.1.5
|
||||||
|
Add another host after initial creation: inventory.py add 10.10.1.6
|
||||||
Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
|
Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
|
||||||
Add hosts with different ip and access ip: inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.10.3
|
Add hosts with different ip and access ip: inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.10.3
|
||||||
Add hosts with a specific hostname, ip, and optional access ip: first,10.0.0.1,192.168.10.1 second,10.0.0.2 last,10.0.0.3
|
Add hosts with a specific hostname, ip, and optional access ip: first,10.0.0.1,192.168.10.1 second,10.0.0.2 last,10.0.0.3
|
||||||
@@ -402,9 +442,9 @@ Configurable env vars:
|
|||||||
DEBUG Enable debug printing. Default: True
|
DEBUG Enable debug printing. Default: True
|
||||||
CONFIG_FILE File to write config to Default: ./inventory/sample/hosts.yaml
|
CONFIG_FILE File to write config to Default: ./inventory/sample/hosts.yaml
|
||||||
HOST_PREFIX Host prefix for generated hosts. Default: node
|
HOST_PREFIX Host prefix for generated hosts. Default: node
|
||||||
KUBE_MASTERS Set the number of kube-masters. Default: 2
|
KUBE_CONTROL_HOSTS Set the number of kube-control-planes. Default: 2
|
||||||
SCALE_THRESHOLD Separate ETCD role if # of nodes >= 50
|
SCALE_THRESHOLD Separate ETCD role if # of nodes >= 50
|
||||||
MASSIVE_SCALE_THRESHOLD Separate K8s master and ETCD if # of nodes >= 200
|
MASSIVE_SCALE_THRESHOLD Separate K8s control-plane and ETCD if # of nodes >= 200
|
||||||
''' # noqa
|
''' # noqa
|
||||||
print(help_text)
|
print(help_text)
|
||||||
|
|
||||||
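Putting the reworked help text together, a sketch of the intended workflow; the script path `contrib/inventory_builder/inventory.py` and the inventory location are assumptions, while `CONFIG_FILE`, `KUBE_CONTROL_HOSTS`, the `add` command and the `-` prefix come from the help and code above:

```bash
# Create a new inventory, or overwrite an existing one
CONFIG_FILE=inventory/mycluster/hosts.yaml KUBE_CONTROL_HOSTS=2 \
  python3 contrib/inventory_builder/inventory.py 10.10.1.3 10.10.1.4 10.10.1.5

# Add a host to the existing inventory instead of rebuilding it
CONFIG_FILE=inventory/mycluster/hosts.yaml \
  python3 contrib/inventory_builder/inventory.py add 10.10.1.6

# Remove a host; the "-" prefix makes the script load the previous config first
CONFIG_FILE=inventory/mycluster/hosts.yaml \
  python3 contrib/inventory_builder/inventory.py -10.10.1.4
```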
@@ -425,6 +465,7 @@ def main(argv=None):
|
|||||||
if not argv:
|
if not argv:
|
||||||
argv = sys.argv[1:]
|
argv = sys.argv[1:]
|
||||||
KubesprayInventory(argv, CONFIG_FILE)
|
KubesprayInventory(argv, CONFIG_FILE)
|
||||||
|
return 0
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
if __name__ == "__main__":
|
||||||
|
|||||||
@@ -13,8 +13,8 @@
|
|||||||
# under the License.
|
# under the License.
|
||||||
|
|
||||||
import inventory
|
import inventory
|
||||||
import mock
|
|
||||||
import unittest
|
import unittest
|
||||||
|
from unittest import mock
|
||||||
|
|
||||||
from collections import OrderedDict
|
from collections import OrderedDict
|
||||||
import sys
|
import sys
|
||||||
@@ -67,23 +67,14 @@ class TestInventory(unittest.TestCase):
|
|||||||
self.assertRaisesRegex(ValueError, "Host name must end in an",
|
self.assertRaisesRegex(ValueError, "Host name must end in an",
|
||||||
self.inv.get_host_id, hostname)
|
self.inv.get_host_id, hostname)
|
||||||
|
|
||||||
def test_build_hostnames_add_one(self):
|
|
||||||
changed_hosts = ['10.90.0.2']
|
|
||||||
expected = OrderedDict([('node1',
|
|
||||||
{'ansible_host': '10.90.0.2',
|
|
||||||
'ip': '10.90.0.2',
|
|
||||||
'access_ip': '10.90.0.2'})])
|
|
||||||
result = self.inv.build_hostnames(changed_hosts)
|
|
||||||
self.assertEqual(expected, result)
|
|
||||||
|
|
||||||
def test_build_hostnames_add_duplicate(self):
|
def test_build_hostnames_add_duplicate(self):
|
||||||
changed_hosts = ['10.90.0.2']
|
changed_hosts = ['10.90.0.2']
|
||||||
expected = OrderedDict([('node1',
|
expected = OrderedDict([('node3',
|
||||||
{'ansible_host': '10.90.0.2',
|
{'ansible_host': '10.90.0.2',
|
||||||
'ip': '10.90.0.2',
|
'ip': '10.90.0.2',
|
||||||
'access_ip': '10.90.0.2'})])
|
'access_ip': '10.90.0.2'})])
|
||||||
self.inv.yaml_config['all']['hosts'] = expected
|
self.inv.yaml_config['all']['hosts'] = expected
|
||||||
result = self.inv.build_hostnames(changed_hosts)
|
result = self.inv.build_hostnames(changed_hosts, True)
|
||||||
self.assertEqual(expected, result)
|
self.assertEqual(expected, result)
|
||||||
|
|
||||||
def test_build_hostnames_add_two(self):
|
def test_build_hostnames_add_two(self):
|
||||||
@@ -99,6 +90,30 @@ class TestInventory(unittest.TestCase):
|
|||||||
result = self.inv.build_hostnames(changed_hosts)
|
result = self.inv.build_hostnames(changed_hosts)
|
||||||
self.assertEqual(expected, result)
|
self.assertEqual(expected, result)
|
||||||
|
|
||||||
|
def test_build_hostnames_add_three(self):
|
||||||
|
changed_hosts = ['10.90.0.2', '10.90.0.3', '10.90.0.4']
|
||||||
|
expected = OrderedDict([
|
||||||
|
('node1', {'ansible_host': '10.90.0.2',
|
||||||
|
'ip': '10.90.0.2',
|
||||||
|
'access_ip': '10.90.0.2'}),
|
||||||
|
('node2', {'ansible_host': '10.90.0.3',
|
||||||
|
'ip': '10.90.0.3',
|
||||||
|
'access_ip': '10.90.0.3'}),
|
||||||
|
('node3', {'ansible_host': '10.90.0.4',
|
||||||
|
'ip': '10.90.0.4',
|
||||||
|
'access_ip': '10.90.0.4'})])
|
||||||
|
result = self.inv.build_hostnames(changed_hosts)
|
||||||
|
self.assertEqual(expected, result)
|
||||||
|
|
||||||
|
def test_build_hostnames_add_one(self):
|
||||||
|
changed_hosts = ['10.90.0.2']
|
||||||
|
expected = OrderedDict([('node1',
|
||||||
|
{'ansible_host': '10.90.0.2',
|
||||||
|
'ip': '10.90.0.2',
|
||||||
|
'access_ip': '10.90.0.2'})])
|
||||||
|
result = self.inv.build_hostnames(changed_hosts)
|
||||||
|
self.assertEqual(expected, result)
|
||||||
|
|
||||||
def test_build_hostnames_delete_first(self):
|
def test_build_hostnames_delete_first(self):
|
||||||
changed_hosts = ['-10.90.0.2']
|
changed_hosts = ['-10.90.0.2']
|
||||||
existing_hosts = OrderedDict([
|
existing_hosts = OrderedDict([
|
||||||
@@ -113,7 +128,24 @@ class TestInventory(unittest.TestCase):
|
|||||||
('node2', {'ansible_host': '10.90.0.3',
|
('node2', {'ansible_host': '10.90.0.3',
|
||||||
'ip': '10.90.0.3',
|
'ip': '10.90.0.3',
|
||||||
'access_ip': '10.90.0.3'})])
|
'access_ip': '10.90.0.3'})])
|
||||||
result = self.inv.build_hostnames(changed_hosts)
|
result = self.inv.build_hostnames(changed_hosts, True)
|
||||||
|
self.assertEqual(expected, result)
|
||||||
|
|
||||||
|
def test_build_hostnames_delete_by_hostname(self):
|
||||||
|
changed_hosts = ['-node1']
|
||||||
|
existing_hosts = OrderedDict([
|
||||||
|
('node1', {'ansible_host': '10.90.0.2',
|
||||||
|
'ip': '10.90.0.2',
|
||||||
|
'access_ip': '10.90.0.2'}),
|
||||||
|
('node2', {'ansible_host': '10.90.0.3',
|
||||||
|
'ip': '10.90.0.3',
|
||||||
|
'access_ip': '10.90.0.3'})])
|
||||||
|
self.inv.yaml_config['all']['hosts'] = existing_hosts
|
||||||
|
expected = OrderedDict([
|
||||||
|
('node2', {'ansible_host': '10.90.0.3',
|
||||||
|
'ip': '10.90.0.3',
|
||||||
|
'access_ip': '10.90.0.3'})])
|
||||||
|
result = self.inv.build_hostnames(changed_hosts, True)
|
||||||
self.assertEqual(expected, result)
|
self.assertEqual(expected, result)
|
||||||
|
|
||||||
def test_exists_hostname_positive(self):
|
def test_exists_hostname_positive(self):
|
||||||
@@ -222,11 +254,11 @@ class TestInventory(unittest.TestCase):
|
|||||||
self.inv.yaml_config['all']['children'][group]['hosts'].get(host),
|
self.inv.yaml_config['all']['children'][group]['hosts'].get(host),
|
||||||
None)
|
None)
|
||||||
|
|
||||||
def test_set_kube_master(self):
|
def test_set_kube_control_plane(self):
|
||||||
group = 'kube-master'
|
group = 'kube_control_plane'
|
||||||
host = 'node1'
|
host = 'node1'
|
||||||
|
|
||||||
self.inv.set_kube_master([host])
|
self.inv.set_kube_control_plane([host])
|
||||||
self.assertIn(
|
self.assertIn(
|
||||||
host, self.inv.yaml_config['all']['children'][group]['hosts'])
|
host, self.inv.yaml_config['all']['children'][group]['hosts'])
|
||||||
|
|
||||||
@@ -241,8 +273,8 @@ class TestInventory(unittest.TestCase):
|
|||||||
self.inv.yaml_config['all']['hosts'].get(host), opt)
|
self.inv.yaml_config['all']['hosts'].get(host), opt)
|
||||||
|
|
||||||
def test_set_k8s_cluster(self):
|
def test_set_k8s_cluster(self):
|
||||||
group = 'k8s-cluster'
|
group = 'k8s_cluster'
|
||||||
expected_hosts = ['kube-node', 'kube-master']
|
expected_hosts = ['kube_node', 'kube_control_plane']
|
||||||
|
|
||||||
self.inv.set_k8s_cluster()
|
self.inv.set_k8s_cluster()
|
||||||
for host in expected_hosts:
|
for host in expected_hosts:
|
||||||
@@ -251,7 +283,7 @@ class TestInventory(unittest.TestCase):
|
|||||||
self.inv.yaml_config['all']['children'][group]['children'])
|
self.inv.yaml_config['all']['children'][group]['children'])
|
||||||
|
|
||||||
def test_set_kube_node(self):
|
def test_set_kube_node(self):
|
||||||
group = 'kube-node'
|
group = 'kube_node'
|
||||||
host = 'node1'
|
host = 'node1'
|
||||||
|
|
||||||
self.inv.set_kube_node([host])
|
self.inv.set_kube_node([host])
|
||||||
@@ -275,12 +307,12 @@ class TestInventory(unittest.TestCase):
|
|||||||
|
|
||||||
self.inv.set_all(hosts)
|
self.inv.set_all(hosts)
|
||||||
self.inv.set_etcd(list(hosts.keys())[0:3])
|
self.inv.set_etcd(list(hosts.keys())[0:3])
|
||||||
self.inv.set_kube_master(list(hosts.keys())[0:2])
|
self.inv.set_kube_control_plane(list(hosts.keys())[0:2])
|
||||||
self.inv.set_kube_node(hosts.keys())
|
self.inv.set_kube_node(hosts.keys())
|
||||||
for h in range(3):
|
for h in range(3):
|
||||||
self.assertFalse(
|
self.assertFalse(
|
||||||
list(hosts.keys())[h] in
|
list(hosts.keys())[h] in
|
||||||
self.inv.yaml_config['all']['children']['kube-node']['hosts'])
|
self.inv.yaml_config['all']['children']['kube_node']['hosts'])
|
||||||
|
|
||||||
def test_scale_scenario_two(self):
|
def test_scale_scenario_two(self):
|
||||||
num_nodes = 500
|
num_nodes = 500
|
||||||
@@ -291,12 +323,12 @@ class TestInventory(unittest.TestCase):
|
|||||||
|
|
||||||
self.inv.set_all(hosts)
|
self.inv.set_all(hosts)
|
||||||
self.inv.set_etcd(list(hosts.keys())[0:3])
|
self.inv.set_etcd(list(hosts.keys())[0:3])
|
||||||
self.inv.set_kube_master(list(hosts.keys())[3:5])
|
self.inv.set_kube_control_plane(list(hosts.keys())[3:5])
|
||||||
self.inv.set_kube_node(hosts.keys())
|
self.inv.set_kube_node(hosts.keys())
|
||||||
for h in range(5):
|
for h in range(5):
|
||||||
self.assertFalse(
|
self.assertFalse(
|
||||||
list(hosts.keys())[h] in
|
list(hosts.keys())[h] in
|
||||||
-            self.inv.yaml_config['all']['children']['kube-node']['hosts'])
+            self.inv.yaml_config['all']['children']['kube_node']['hosts'])
 
     def test_range2ips_range(self):
         changed_hosts = ['10.90.0.2', '10.90.0.4-10.90.0.6', '10.90.0.8']
@@ -313,7 +345,7 @@ class TestInventory(unittest.TestCase):
         self.assertRaisesRegex(Exception, "Range of ip_addresses isn't valid",
                                self.inv.range2ips, host_range)
 
-    def test_build_hostnames_different_ips_add_one(self):
+    def test_build_hostnames_create_with_one_different_ips(self):
         changed_hosts = ['10.90.0.2,192.168.0.2']
         expected = OrderedDict([('node1',
                                  {'ansible_host': '192.168.0.2',
@@ -322,17 +354,7 @@ class TestInventory(unittest.TestCase):
         result = self.inv.build_hostnames(changed_hosts)
         self.assertEqual(expected, result)
 
-    def test_build_hostnames_different_ips_add_duplicate(self):
-        changed_hosts = ['10.90.0.2,192.168.0.2']
-        expected = OrderedDict([('node1',
-                                 {'ansible_host': '192.168.0.2',
-                                  'ip': '10.90.0.2',
-                                  'access_ip': '192.168.0.2'})])
-        self.inv.yaml_config['all']['hosts'] = expected
-        result = self.inv.build_hostnames(changed_hosts)
-        self.assertEqual(expected, result)
-
-    def test_build_hostnames_different_ips_add_two(self):
+    def test_build_hostnames_create_with_two_different_ips(self):
         changed_hosts = ['10.90.0.2,192.168.0.2', '10.90.0.3,192.168.0.3']
         expected = OrderedDict([
             ('node1', {'ansible_host': '192.168.0.2',
@@ -341,6 +363,210 @@ class TestInventory(unittest.TestCase):
             ('node2', {'ansible_host': '192.168.0.3',
                        'ip': '10.90.0.3',
                        'access_ip': '192.168.0.3'})])
-        self.inv.yaml_config['all']['hosts'] = OrderedDict()
         result = self.inv.build_hostnames(changed_hosts)
         self.assertEqual(expected, result)
 
+    def test_build_hostnames_create_with_three_different_ips(self):
+        changed_hosts = ['10.90.0.2,192.168.0.2',
+                         '10.90.0.3,192.168.0.3',
+                         '10.90.0.4,192.168.0.4']
+        expected = OrderedDict([
+            ('node1', {'ansible_host': '192.168.0.2',
+                       'ip': '10.90.0.2',
+                       'access_ip': '192.168.0.2'}),
+            ('node2', {'ansible_host': '192.168.0.3',
+                       'ip': '10.90.0.3',
+                       'access_ip': '192.168.0.3'}),
+            ('node3', {'ansible_host': '192.168.0.4',
+                       'ip': '10.90.0.4',
+                       'access_ip': '192.168.0.4'})])
+        result = self.inv.build_hostnames(changed_hosts)
+        self.assertEqual(expected, result)
+
+    def test_build_hostnames_overwrite_one_with_different_ips(self):
+        changed_hosts = ['10.90.0.2,192.168.0.2']
+        expected = OrderedDict([('node1',
+                                 {'ansible_host': '192.168.0.2',
+                                  'ip': '10.90.0.2',
+                                  'access_ip': '192.168.0.2'})])
+        existing = OrderedDict([('node5',
+                                 {'ansible_host': '192.168.0.5',
+                                  'ip': '10.90.0.5',
+                                  'access_ip': '192.168.0.5'})])
+        self.inv.yaml_config['all']['hosts'] = existing
+        result = self.inv.build_hostnames(changed_hosts)
+        self.assertEqual(expected, result)
+
+    def test_build_hostnames_overwrite_three_with_different_ips(self):
+        changed_hosts = ['10.90.0.2,192.168.0.2']
+        expected = OrderedDict([('node1',
+                                 {'ansible_host': '192.168.0.2',
+                                  'ip': '10.90.0.2',
+                                  'access_ip': '192.168.0.2'})])
+        existing = OrderedDict([
+            ('node3', {'ansible_host': '192.168.0.3',
+                       'ip': '10.90.0.3',
+                       'access_ip': '192.168.0.3'}),
+            ('node4', {'ansible_host': '192.168.0.4',
+                       'ip': '10.90.0.4',
+                       'access_ip': '192.168.0.4'}),
+            ('node5', {'ansible_host': '192.168.0.5',
+                       'ip': '10.90.0.5',
+                       'access_ip': '192.168.0.5'})])
+        self.inv.yaml_config['all']['hosts'] = existing
+        result = self.inv.build_hostnames(changed_hosts)
+        self.assertEqual(expected, result)
+
+    def test_build_hostnames_different_ips_add_duplicate(self):
+        changed_hosts = ['10.90.0.2,192.168.0.2']
+        expected = OrderedDict([('node3',
+                                 {'ansible_host': '192.168.0.2',
+                                  'ip': '10.90.0.2',
+                                  'access_ip': '192.168.0.2'})])
+        existing = expected
+        self.inv.yaml_config['all']['hosts'] = existing
+        result = self.inv.build_hostnames(changed_hosts, True)
+        self.assertEqual(expected, result)
+
+    def test_build_hostnames_add_two_different_ips_into_one_existing(self):
+        changed_hosts = ['10.90.0.3,192.168.0.3', '10.90.0.4,192.168.0.4']
+        expected = OrderedDict([
+            ('node2', {'ansible_host': '192.168.0.2',
+                       'ip': '10.90.0.2',
+                       'access_ip': '192.168.0.2'}),
+            ('node3', {'ansible_host': '192.168.0.3',
+                       'ip': '10.90.0.3',
+                       'access_ip': '192.168.0.3'}),
+            ('node4', {'ansible_host': '192.168.0.4',
+                       'ip': '10.90.0.4',
+                       'access_ip': '192.168.0.4'})])
+
+        existing = OrderedDict([
+            ('node2', {'ansible_host': '192.168.0.2',
+                       'ip': '10.90.0.2',
+                       'access_ip': '192.168.0.2'})])
+        self.inv.yaml_config['all']['hosts'] = existing
+        result = self.inv.build_hostnames(changed_hosts, True)
+        self.assertEqual(expected, result)
+
+    def test_build_hostnames_add_two_different_ips_into_two_existing(self):
+        changed_hosts = ['10.90.0.4,192.168.0.4', '10.90.0.5,192.168.0.5']
+        expected = OrderedDict([
+            ('node2', {'ansible_host': '192.168.0.2',
+                       'ip': '10.90.0.2',
+                       'access_ip': '192.168.0.2'}),
+            ('node3', {'ansible_host': '192.168.0.3',
+                       'ip': '10.90.0.3',
+                       'access_ip': '192.168.0.3'}),
+            ('node4', {'ansible_host': '192.168.0.4',
+                       'ip': '10.90.0.4',
+                       'access_ip': '192.168.0.4'}),
+            ('node5', {'ansible_host': '192.168.0.5',
+                       'ip': '10.90.0.5',
+                       'access_ip': '192.168.0.5'})])
+
+        existing = OrderedDict([
+            ('node2', {'ansible_host': '192.168.0.2',
+                       'ip': '10.90.0.2',
+                       'access_ip': '192.168.0.2'}),
+            ('node3', {'ansible_host': '192.168.0.3',
+                       'ip': '10.90.0.3',
+                       'access_ip': '192.168.0.3'})])
+        self.inv.yaml_config['all']['hosts'] = existing
+        result = self.inv.build_hostnames(changed_hosts, True)
+        self.assertEqual(expected, result)
+
+    def test_build_hostnames_add_two_different_ips_into_three_existing(self):
+        changed_hosts = ['10.90.0.5,192.168.0.5', '10.90.0.6,192.168.0.6']
+        expected = OrderedDict([
+            ('node2', {'ansible_host': '192.168.0.2',
+                       'ip': '10.90.0.2',
+                       'access_ip': '192.168.0.2'}),
+            ('node3', {'ansible_host': '192.168.0.3',
+                       'ip': '10.90.0.3',
+                       'access_ip': '192.168.0.3'}),
+            ('node4', {'ansible_host': '192.168.0.4',
+                       'ip': '10.90.0.4',
+                       'access_ip': '192.168.0.4'}),
+            ('node5', {'ansible_host': '192.168.0.5',
+                       'ip': '10.90.0.5',
+                       'access_ip': '192.168.0.5'}),
+            ('node6', {'ansible_host': '192.168.0.6',
+                       'ip': '10.90.0.6',
+                       'access_ip': '192.168.0.6'})])
+
+        existing = OrderedDict([
+            ('node2', {'ansible_host': '192.168.0.2',
+                       'ip': '10.90.0.2',
+                       'access_ip': '192.168.0.2'}),
+            ('node3', {'ansible_host': '192.168.0.3',
+                       'ip': '10.90.0.3',
+                       'access_ip': '192.168.0.3'}),
+            ('node4', {'ansible_host': '192.168.0.4',
+                       'ip': '10.90.0.4',
+                       'access_ip': '192.168.0.4'})])
+        self.inv.yaml_config['all']['hosts'] = existing
+        result = self.inv.build_hostnames(changed_hosts, True)
+        self.assertEqual(expected, result)
+
+    # Add two IP addresses into a config that has
+    # three already defined IP addresses. One of the IP addresses
+    # is a duplicate.
+    def test_build_hostnames_add_two_duplicate_one_overlap(self):
+        changed_hosts = ['10.90.0.4,192.168.0.4', '10.90.0.5,192.168.0.5']
+        expected = OrderedDict([
+            ('node2', {'ansible_host': '192.168.0.2',
+                       'ip': '10.90.0.2',
+                       'access_ip': '192.168.0.2'}),
+            ('node3', {'ansible_host': '192.168.0.3',
+                       'ip': '10.90.0.3',
+                       'access_ip': '192.168.0.3'}),
+            ('node4', {'ansible_host': '192.168.0.4',
+                       'ip': '10.90.0.4',
+                       'access_ip': '192.168.0.4'}),
+            ('node5', {'ansible_host': '192.168.0.5',
+                       'ip': '10.90.0.5',
+                       'access_ip': '192.168.0.5'})])
+
+        existing = OrderedDict([
+            ('node2', {'ansible_host': '192.168.0.2',
+                       'ip': '10.90.0.2',
+                       'access_ip': '192.168.0.2'}),
+            ('node3', {'ansible_host': '192.168.0.3',
+                       'ip': '10.90.0.3',
+                       'access_ip': '192.168.0.3'}),
+            ('node4', {'ansible_host': '192.168.0.4',
+                       'ip': '10.90.0.4',
+                       'access_ip': '192.168.0.4'})])
+        self.inv.yaml_config['all']['hosts'] = existing
+        result = self.inv.build_hostnames(changed_hosts, True)
+        self.assertEqual(expected, result)
+
+    # Add two duplicate IP addresses into a config that has
+    # three already defined IP addresses
+    def test_build_hostnames_add_two_duplicate_two_overlap(self):
+        changed_hosts = ['10.90.0.3,192.168.0.3', '10.90.0.4,192.168.0.4']
+        expected = OrderedDict([
+            ('node2', {'ansible_host': '192.168.0.2',
+                       'ip': '10.90.0.2',
+                       'access_ip': '192.168.0.2'}),
+            ('node3', {'ansible_host': '192.168.0.3',
+                       'ip': '10.90.0.3',
+                       'access_ip': '192.168.0.3'}),
+            ('node4', {'ansible_host': '192.168.0.4',
+                       'ip': '10.90.0.4',
+                       'access_ip': '192.168.0.4'})])
+
+        existing = OrderedDict([
+            ('node2', {'ansible_host': '192.168.0.2',
+                       'ip': '10.90.0.2',
+                       'access_ip': '192.168.0.2'}),
+            ('node3', {'ansible_host': '192.168.0.3',
+                       'ip': '10.90.0.3',
+                       'access_ip': '192.168.0.3'}),
+            ('node4', {'ansible_host': '192.168.0.4',
+                       'ip': '10.90.0.4',
+                       'access_ip': '192.168.0.4'})])
+        self.inv.yaml_config['all']['hosts'] = existing
+        result = self.inv.build_hostnames(changed_hosts, True)
+        self.assertEqual(expected, result)
@@ -1,7 +1,7 @@
 ---
 
 - name: Install required packages
-  yum:
+  package:
     name: "{{ item }}"
     state: present
   with_items:

@@ -11,6 +11,7 @@
     state: directory
     owner: "{{ k8s_deployment_user }}"
     group: "{{ k8s_deployment_user }}"
+    mode: 0700
 
 - name: Configure sudo for deployment user
   copy:
@@ -15,10 +15,10 @@
   roles:
     - { role: glusterfs/server }
 
-- hosts: k8s-cluster
+- hosts: k8s_cluster
   roles:
     - { role: glusterfs/client }
 
-- hosts: kube-master[0]
+- hosts: kube_control_plane[0]
   roles:
     - { role: kubernetes-pv }
@@ -11,10 +11,10 @@
 # ## Set disk_volume_device_1 to desired device for gluster brick, if different to /dev/vdb (default).
 # ## As in the previous case, you can set ip to give direct communication on internal IPs
 # gfs_node1 ansible_ssh_host=95.54.0.18 # disk_volume_device_1=/dev/vdc ip=10.3.0.7
 # gfs_node2 ansible_ssh_host=95.54.0.19 # disk_volume_device_1=/dev/vdc ip=10.3.0.8
 # gfs_node3 ansible_ssh_host=95.54.0.20 # disk_volume_device_1=/dev/vdc ip=10.3.0.9
 
-# [kube-master]
+# [kube_control_plane]
 # node1
 # node2
 
@@ -23,16 +23,16 @@
 # node2
 # node3
 
-# [kube-node]
+# [kube_node]
 # node2
 # node3
 # node4
 # node5
 # node6
 
-# [k8s-cluster:children]
-# kube-node
-# kube-master
+# [k8s_cluster:children]
+# kube_node
+# kube_control_plane
 
 # [gfs-cluster]
 # gfs_node1
@@ -8,7 +8,7 @@ Installs and configures GlusterFS on Linux.
 
 For GlusterFS to connect between servers, TCP ports `24007`, `24008`, and `24009`/`49152`+ (that port, plus an additional incremented port for each additional server in the cluster; the latter if GlusterFS is version 3.4+), and TCP/UDP port `111` must be open. You can open these using whatever firewall you wish (this can easily be configured using the `geerlingguy.firewall` role).
 
-This role performs basic installation and setup of Gluster, but it does not configure or mount bricks (volumes), since that step is easier to do in a series of plays in your own playbook. Ansible 1.9+ includes the [`gluster_volume`](https://docs.ansible.com/gluster_volume_module.html) module to ease the management of Gluster volumes.
+This role performs basic installation and setup of Gluster, but it does not configure or mount bricks (volumes), since that step is easier to do in a series of plays in your own playbook. Ansible 1.9+ includes the [`gluster_volume`](https://docs.ansible.com/ansible/latest/collections/gluster/gluster/gluster_volume_module.html) module to ease the management of Gluster volumes.
 
 ## Role Variables
 
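As a usage illustration of the `gluster_volume` module referenced in the README hunk above (a minimal ad-hoc sketch, not part of this changeset; the host names, volume name, and brick path are assumed placeholders):

```shell
# Create and start a replicated volume from the first GlusterFS server.
# gfs_node1/gfs_node2, "gluster0" and the brick path are placeholders.
ansible 'gfs-cluster[0]' -b -m gluster_volume \
  -a "state=present name=gluster0 replicas=2 cluster=gfs_node1,gfs_node2 bricks=/mnt/xfs-drive-gluster/brick1"
```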
@@ -1,10 +1,10 @@
 ---
 - name: Install Prerequisites
-  yum: name={{ item }} state=present
+  package: name={{ item }} state=present
   with_items:
     - "centos-release-gluster{{ glusterfs_default_release }}"
 
 - name: Install Packages
-  yum: name={{ item }} state=present
+  package: name={{ item }} state=present
   with_items:
     - glusterfs-client

@@ -9,7 +9,7 @@
   when: ansible_os_family == "Debian"
 
 - name: install xfs RedHat
-  yum: name=xfsprogs state=present
+  package: name=xfsprogs state=present
   when: ansible_os_family == "RedHat"
 
 # Format external volumes in xfs

@@ -82,6 +82,7 @@
   template:
     dest: "{{ gluster_mount_dir }}/.test-file.txt"
     src: test-file.txt
+    mode: 0644
   when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
 
 - name: Unmount glusterfs

@@ -1,11 +1,11 @@
 ---
 - name: Install Prerequisites
-  yum: name={{ item }} state=present
+  package: name={{ item }} state=present
   with_items:
     - "centos-release-gluster{{ glusterfs_default_release }}"
 
 - name: Install Packages
-  yum: name={{ item }} state=present
+  package: name={{ item }} state=present
   with_items:
     - glusterfs-server
     - glusterfs-client

@@ -8,7 +8,7 @@
     - { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}
     - { file: glusterfs-kubernetes-endpoint-svc.json.j2, type: svc, dest: glusterfs-kubernetes-endpoint-svc.json}
   register: gluster_pv
-  when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined and hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb is defined
+  when: inventory_hostname == groups['kube_control_plane'][0] and groups['gfs-cluster'] is defined and hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb is defined
 
 - name: Kubernetes Apps | Set GlusterFS endpoint and PV
   kube:

@@ -19,4 +19,4 @@
     filename: "{{ kube_config_dir }}/{{ item.item.dest }}"
     state: "{{ item.changed | ternary('latest','present') }}"
   with_items: "{{ gluster_pv.results }}"
-  when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined
+  when: inventory_hostname == groups['kube_control_plane'][0] and groups['gfs-cluster'] is defined

@@ -1,5 +1,5 @@
 ---
-- hosts: kube-master[0]
+- hosts: kube_control_plane[0]
   roles:
     - { role: tear-down }
 

@@ -3,7 +3,7 @@
   roles:
     - { role: prepare }
 
-- hosts: kube-master[0]
+- hosts: kube_control_plane[0]
   tags:
     - "provision"
   roles:

@@ -2,18 +2,25 @@ all:
   vars:
     heketi_admin_key: "11elfeinhundertundelf"
     heketi_user_key: "!!einseinseins"
+    glusterfs_daemonset:
+      readiness_probe:
+        timeout_seconds: 3
+        initial_delay_seconds: 3
+      liveness_probe:
+        timeout_seconds: 3
+        initial_delay_seconds: 10
   children:
-    k8s-cluster:
+    k8s_cluster:
       vars:
         kubelet_fail_swap_on: false
       children:
-        kube-master:
+        kube_control_plane:
           hosts:
             node1:
         etcd:
           hosts:
             node2:
-        kube-node:
+        kube_node:
           hosts: &kube_nodes
             node1:
             node2:
@@ -11,7 +11,7 @@
 
 - name: "Install glusterfs mount utils (RedHat)"
   become: true
-  yum:
+  package:
     name: "glusterfs-fuse"
     state: "present"
   when: "ansible_os_family == 'RedHat'"

@@ -1,7 +1,10 @@
 ---
 - name: "Kubernetes Apps | Lay Down Heketi Bootstrap"
   become: true
-  template: { src: "heketi-bootstrap.json.j2", dest: "{{ kube_config_dir }}/heketi-bootstrap.json" }
+  template:
+    src: "heketi-bootstrap.json.j2"
+    dest: "{{ kube_config_dir }}/heketi-bootstrap.json"
+    mode: 0640
   register: "rendering"
 - name: "Kubernetes Apps | Install and configure Heketi Bootstrap"
   kube:

@@ -10,6 +10,7 @@
   template:
     src: "topology.json.j2"
     dest: "{{ kube_config_dir }}/topology.json"
+    mode: 0644
 - name: "Copy topology configuration into container."
   changed_when: false
   command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ initial_heketi_pod_name }}:/tmp/topology.json"

@@ -1,6 +1,9 @@
 ---
 - name: "Kubernetes Apps | Lay Down GlusterFS Daemonset"
-  template: { src: "glusterfs-daemonset.json.j2", dest: "{{ kube_config_dir }}/glusterfs-daemonset.json" }
+  template:
+    src: "glusterfs-daemonset.json.j2"
+    dest: "{{ kube_config_dir }}/glusterfs-daemonset.json"
+    mode: 0644
   become: true
   register: "rendering"
 - name: "Kubernetes Apps | Install and configure GlusterFS daemonset"

@@ -27,7 +30,10 @@
     delay: 5
 
 - name: "Kubernetes Apps | Lay Down Heketi Service Account"
-  template: { src: "heketi-service-account.json.j2", dest: "{{ kube_config_dir }}/heketi-service-account.json" }
+  template:
+    src: "heketi-service-account.json.j2"
+    dest: "{{ kube_config_dir }}/heketi-service-account.json"
+    mode: 0644
   become: true
   register: "rendering"
 - name: "Kubernetes Apps | Install and configure Heketi Service Account"

@@ -4,6 +4,7 @@
   template:
     src: "heketi-deployment.json.j2"
    dest: "{{ kube_config_dir }}/heketi-deployment.json"
+    mode: 0644
   register: "rendering"
 
 - name: "Kubernetes Apps | Install and configure Heketi"

@@ -5,7 +5,7 @@
   changed_when: false
 
 - name: "Kubernetes Apps | Deploy cluster role binding."
-  when: "clusterrolebinding_state.stdout == \"\""
+  when: "clusterrolebinding_state.stdout | length == 0"
   command: "{{ bin_dir }}/kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account"
 
 - name: Get clusterrolebindings again

@@ -15,7 +15,7 @@
 
 - name: Make sure that clusterrolebindings are present now
   assert:
-    that: "clusterrolebinding_state.stdout != \"\""
+    that: "clusterrolebinding_state.stdout | length > 0"
     msg: "Cluster role binding is not present."
 
 - name: Get the heketi-config-secret secret

@@ -28,9 +28,10 @@
   template:
     src: "heketi.json.j2"
     dest: "{{ kube_config_dir }}/heketi.json"
+    mode: 0644
 
 - name: "Deploy Heketi config secret"
-  when: "secret_state.stdout == \"\""
+  when: "secret_state.stdout | length == 0"
   command: "{{ bin_dir }}/kubectl create secret generic heketi-config-secret --from-file={{ kube_config_dir }}/heketi.json"
 
 - name: Get the heketi-config-secret secret again

@@ -40,5 +41,5 @@
 
 - name: Make sure the heketi-config-secret secret exists now
   assert:
-    that: "secret_state.stdout != \"\""
+    that: "secret_state.stdout | length > 0"
     msg: "Heketi config secret is not present."

@@ -2,7 +2,10 @@
 - name: "Kubernetes Apps | Lay Down Heketi Storage"
   become: true
   vars: { nodes: "{{ groups['heketi-node'] }}" }
-  template: { src: "heketi-storage.json.j2", dest: "{{ kube_config_dir }}/heketi-storage.json" }
+  template:
+    src: "heketi-storage.json.j2"
+    dest: "{{ kube_config_dir }}/heketi-storage.json"
+    mode: 0644
   register: "rendering"
 - name: "Kubernetes Apps | Install and configure Heketi Storage"
   kube:

@@ -16,6 +16,7 @@
   template:
     src: "storageclass.yml.j2"
     dest: "{{ kube_config_dir }}/storageclass.yml"
+    mode: 0644
   register: "rendering"
 - name: "Kubernetes Apps | Install and configure Storace Class"
   kube:

@@ -10,6 +10,7 @@
   template:
     src: "topology.json.j2"
     dest: "{{ kube_config_dir }}/topology.json"
+    mode: 0644
 - name: "Copy topology configuration into container." # noqa 503
   when: "rendering.changed"
   command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ heketi_pod_name }}:/tmp/topology.json"

@@ -73,8 +73,8 @@
         "privileged": true
       },
       "readinessProbe": {
-        "timeoutSeconds": 3,
-        "initialDelaySeconds": 3,
+        "timeoutSeconds": {{ glusterfs_daemonset.readiness_probe.timeout_seconds }},
+        "initialDelaySeconds": {{ glusterfs_daemonset.readiness_probe.initial_delay_seconds }},
         "exec": {
           "command": [
             "/bin/bash",

@@ -84,8 +84,8 @@
         }
       },
       "livenessProbe": {
-        "timeoutSeconds": 3,
-        "initialDelaySeconds": 10,
+        "timeoutSeconds": {{ glusterfs_daemonset.liveness_probe.timeout_seconds }},
+        "initialDelaySeconds": {{ glusterfs_daemonset.liveness_probe.initial_delay_seconds }},
         "exec": {
           "command": [
             "/bin/bash",

@@ -1,7 +1,7 @@
 ---
 - name: "Install lvm utils (RedHat)"
   become: true
-  yum:
+  package:
     name: "lvm2"
     state: "present"
   when: "ansible_os_family == 'RedHat'"

@@ -19,7 +19,7 @@
   become: true
   shell: "pvs {{ disk_volume_device_1 }} --option vg_name | tail -n+2"
   register: "volume_groups"
-  ignore_errors: true
+  ignore_errors: true  # noqa ignore-errors
   changed_when: false
 
 - name: "Remove volume groups." # noqa 301

@@ -35,11 +35,11 @@
     PATH: "{{ ansible_env.PATH }}:/sbin"  # Make sure we can workaround RH / CentOS conservative path management
   become: true
   command: "pvremove {{ disk_volume_device_1 }} --yes"
-  ignore_errors: true
+  ignore_errors: true  # noqa ignore-errors
 
 - name: "Remove lvm utils (RedHat)"
   become: true
-  yum:
+  package:
     name: "lvm2"
     state: "absent"
   when: "ansible_os_family == 'RedHat' and heketi_remove_lvm"
@@ -1,51 +1,51 @@
 ---
-- name: "Remove storage class." # noqa 301
+- name: Remove storage class. # noqa 301
   command: "{{ bin_dir }}/kubectl delete storageclass gluster"
-  ignore_errors: true
+  ignore_errors: true  # noqa ignore-errors
-- name: "Tear down heketi." # noqa 301
+- name: Tear down heketi. # noqa 301
   command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\""
-  ignore_errors: true
+  ignore_errors: true  # noqa ignore-errors
-- name: "Tear down heketi." # noqa 301
+- name: Tear down heketi. # noqa 301
   command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\""
-  ignore_errors: true
+  ignore_errors: true  # noqa ignore-errors
-- name: "Tear down bootstrap."
+- name: Tear down bootstrap.
   include_tasks: "../../provision/tasks/bootstrap/tear-down.yml"
-- name: "Ensure there is nothing left over." # noqa 301
+- name: Ensure there is nothing left over. # noqa 301
   command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\" -o=json"
   register: "heketi_result"
   until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
   retries: 60
   delay: 5
-- name: "Ensure there is nothing left over." # noqa 301
+- name: Ensure there is nothing left over. # noqa 301
   command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\" -o=json"
   register: "heketi_result"
   until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
   retries: 60
   delay: 5
-- name: "Tear down glusterfs." # noqa 301
+- name: Tear down glusterfs. # noqa 301
   command: "{{ bin_dir }}/kubectl delete daemonset.extensions/glusterfs"
-  ignore_errors: true
+  ignore_errors: true  # noqa ignore-errors
-- name: "Remove heketi storage service." # noqa 301
+- name: Remove heketi storage service. # noqa 301
   command: "{{ bin_dir }}/kubectl delete service heketi-storage-endpoints"
-  ignore_errors: true
+  ignore_errors: true  # noqa ignore-errors
-- name: "Remove heketi gluster role binding" # noqa 301
+- name: Remove heketi gluster role binding # noqa 301
   command: "{{ bin_dir }}/kubectl delete clusterrolebinding heketi-gluster-admin"
-  ignore_errors: true
+  ignore_errors: true  # noqa ignore-errors
-- name: "Remove heketi config secret" # noqa 301
+- name: Remove heketi config secret # noqa 301
   command: "{{ bin_dir }}/kubectl delete secret heketi-config-secret"
-  ignore_errors: true
+  ignore_errors: true  # noqa ignore-errors
-- name: "Remove heketi db backup" # noqa 301
+- name: Remove heketi db backup # noqa 301
   command: "{{ bin_dir }}/kubectl delete secret heketi-db-backup"
-  ignore_errors: true
+  ignore_errors: true  # noqa ignore-errors
-- name: "Remove heketi service account" # noqa 301
+- name: Remove heketi service account # noqa 301
   command: "{{ bin_dir }}/kubectl delete serviceaccount heketi-service-account"
-  ignore_errors: true
+  ignore_errors: true  # noqa ignore-errors
-- name: "Get secrets"
+- name: Get secrets
   command: "{{ bin_dir }}/kubectl get secrets --output=\"json\""
   register: "secrets"
   changed_when: false
-- name: "Remove heketi storage secret"
+- name: Remove heketi storage secret
   vars: { storage_query: "items[?metadata.annotations.\"kubernetes.io/service-account.name\"=='heketi-service-account'].metadata.name|[0]" }
   command: "{{ bin_dir }}/kubectl delete secret {{ secrets.stdout|from_json|json_query(storage_query) }}"
   when: "storage_query is defined"
-  ignore_errors: true
+  ignore_errors: true  # noqa ignore-errors
@@ -1,4 +1,8 @@
-# Container image collecting script for offline deployment
+# Offline deployment
+
+## manage-offline-container-images.sh
+
+Container image collecting script for offline deployment
 
 This script has two features:
 (1) Get container images from an environment which is deployed online.

@@ -19,3 +23,21 @@ Step(2) can be operated with:
 ```shell
 manage-offline-container-images.sh register
 ```
+
+## generate_list.sh
+
+This script generates the list of downloaded files and the list of container images from the `roles/download/defaults/main.yml` file.
+
+Running this script generates three files: all downloaded file URLs in files.list, all container images in images.list, and all component versions in generate.sh.
+
+```shell
+bash generate_list.sh
+tree temp
+temp
+├── files.list
+├── generate.sh
+└── images.list
+0 directories, 3 files
+```
+
+In some cases you may want to update a component version. Edit the `generate.sh` file, then run `bash generate.sh | grep 'https' > files.list` to update files.list, or run `bash generate.sh | grep -v 'https' > images.list` to update images.list.
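For example, to pin a different component version and refresh both lists (a sketch of the workflow described above; the etcd version value is only an illustrative assumption):

```shell
cd temp
# Point the generated helper at the desired version, then rebuild the lists.
sed -i 's/^etcd_version=.*/etcd_version=v3.5.0/' generate.sh
bash generate.sh | grep 'https' > files.list
bash generate.sh | grep -v 'https' > images.list
```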
contrib/offline/generate_list.sh (new file, 57 lines)
@@ -0,0 +1,57 @@
+#!/bin/bash
+set -eo pipefail
+
+CURRENT_DIR=$(cd $(dirname $0); pwd)
+TEMP_DIR="${CURRENT_DIR}/temp"
+REPO_ROOT_DIR="${CURRENT_DIR%/contrib/offline}"
+
+: ${IMAGE_ARCH:="amd64"}
+: ${ANSIBLE_SYSTEM:="linux"}
+: ${ANSIBLE_ARCHITECTURE:="x86_64"}
+: ${DOWNLOAD_YML:="roles/download/defaults/main.yml"}
+: ${KUBE_VERSION_YAML:="roles/kubespray-defaults/defaults/main.yaml"}
+
+mkdir -p ${TEMP_DIR}
+
+# ARCH used in convert {%- if image_arch != 'amd64' -%}-{{ image_arch }}{%- endif -%} to {{arch}}
+if [ "${IMAGE_ARCH}" != "amd64" ]; then ARCH="${IMAGE_ARCH}"; fi
+
+cat > ${TEMP_DIR}/generate.sh << EOF
+arch=${ARCH}
+image_arch=${IMAGE_ARCH}
+ansible_system=${ANSIBLE_SYSTEM}
+ansible_architecture=${ANSIBLE_ARCHITECTURE}
+EOF
+
+# generate all component version by $DOWNLOAD_YML
+grep 'kube_version:' ${REPO_ROOT_DIR}/${KUBE_VERSION_YAML} \
+    | sed 's/: /=/g' >> ${TEMP_DIR}/generate.sh
+grep '_version:' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
+    | sed 's/: /=/g;s/{{/${/g;s/}}/}/g' | tr -d ' ' >> ${TEMP_DIR}/generate.sh
+sed -i 's/kube_major_version=.*/kube_major_version=${kube_version%.*}/g' ${TEMP_DIR}/generate.sh
+sed -i 's/crictl_version=.*/crictl_version=${kube_version%.*}.0/g' ${TEMP_DIR}/generate.sh
+
+# generate all download files url
+grep 'download_url:' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
+    | sed 's/: /=/g;s/ //g;s/{{/${/g;s/}}/}/g;s/|lower//g;s/^.*_url=/echo /g' >> ${TEMP_DIR}/generate.sh
+
+# generate all images list
+grep -E '_repo:|_tag:' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
+    | sed "s#{%- if image_arch != 'amd64' -%}-{{ image_arch }}{%- endif -%}#{{arch}}#g" \
+    | sed 's/: /=/g;s/{{/${/g;s/}}/}/g' | tr -d ' ' >> ${TEMP_DIR}/generate.sh
+sed -n '/^downloads:/,/download_defaults:/p' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
+    | sed -n "s/repo: //p;s/tag: //p" | tr -d ' ' | sed 's/{{/${/g;s/}}/}/g' \
+    | sed 'N;s#\n# #g' | tr ' ' ':' | sed 's/^/echo /g' >> ${TEMP_DIR}/generate.sh
+
+# special handling for https://github.com/kubernetes-sigs/kubespray/pull/7570
+sed -i 's#^coredns_image_repo=.*#coredns_image_repo=${kube_image_repo}$(if printf "%s\\n%s\\n" v1.21 ${kube_version%.*} | sort --check=quiet --version-sort; then echo -n /coredns/coredns;else echo -n /coredns; fi)#' ${TEMP_DIR}/generate.sh
+sed -i 's#^coredns_image_tag=.*#coredns_image_tag=$(if printf "%s\\n%s\\n" v1.21 ${kube_version%.*} | sort --check=quiet --version-sort; then echo -n ${coredns_version};else echo -n ${coredns_version/v/}; fi)#' ${TEMP_DIR}/generate.sh
+
+# add kube-* images to images list
+KUBE_IMAGES="kube-apiserver kube-controller-manager kube-scheduler kube-proxy"
+echo "${KUBE_IMAGES}" | tr ' ' '\n' | xargs -L1 -I {} \
+    echo 'echo ${kube_image_repo}/{}:${kube_version}' >> ${TEMP_DIR}/generate.sh
+
+# print files.list and images.list
+bash ${TEMP_DIR}/generate.sh | grep 'https' | sort > ${TEMP_DIR}/files.list
+bash ${TEMP_DIR}/generate.sh | grep -v 'https' | sort > ${TEMP_DIR}/images.list
@@ -100,15 +100,35 @@ function register_container_images() {
 
   tar -zxvf ${IMAGE_TAR_FILE}
   sudo docker load -i ${IMAGE_DIR}/registry-latest.tar
-  sudo docker run --restart=always -d -p 5000:5000 --name registry registry:latest
   set +e
+  sudo docker container inspect registry >/dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    sudo docker run --restart=always -d -p 5000:5000 --name registry registry:latest
+  fi
   set -e
 
   while read -r line; do
     file_name=$(echo ${line} | awk '{print $1}')
-    org_image=$(echo ${line} | awk '{print $2}')
-    new_image="${LOCALHOST_NAME}:5000/${org_image}"
-    image_id=$(tar -tf ${IMAGE_DIR}/${file_name} | grep "\.json" | grep -v manifest.json | sed s/"\.json"//)
+    raw_image=$(echo ${line} | awk '{print $2}')
+    new_image="${LOCALHOST_NAME}:5000/${raw_image}"
+    org_image=$(sudo docker load -i ${IMAGE_DIR}/${file_name} | head -n1 | awk '{print $3}')
+    image_id=$(sudo docker image inspect ${org_image} | grep "\"Id\":" | awk -F: '{print $3}'| sed s/'\",'//)
+    if [ -z "${file_name}" ]; then
+      echo "Failed to get file_name for line ${line}"
+      exit 1
+    fi
+    if [ -z "${raw_image}" ]; then
+      echo "Failed to get raw_image for line ${line}"
+      exit 1
+    fi
+    if [ -z "${org_image}" ]; then
+      echo "Failed to get org_image for line ${line}"
+      exit 1
+    fi
+    if [ -z "${image_id}" ]; then
+      echo "Failed to get image_id for file ${file_name}"
+      exit 1
+    fi
     sudo docker load -i ${IMAGE_DIR}/${file_name}
     sudo docker tag ${image_id} ${new_image}
     sudo docker push ${new_image}
@@ -1,5 +1,4 @@
 ---
 - hosts: all
-
   roles:
-    - role_under_test
+    - { role: prepare }

contrib/os-services/roles/prepare/defaults/main.yml (new file, 2 lines)
@@ -0,0 +1,2 @@
+---
+disable_service_firewall: false

contrib/os-services/roles/prepare/tasks/main.yml (new file, 23 lines)
@@ -0,0 +1,23 @@
+---
+- block:
+    - name: List services
+      service_facts:
+
+    - name: Disable service firewalld
+      systemd:
+        name: firewalld
+        state: stopped
+        enabled: no
+      when:
+        "'firewalld.service' in services"
+
+    - name: Disable service ufw
+      systemd:
+        name: ufw
+        state: stopped
+        enabled: no
+      when:
+        "'ufw.service' in services"
+
+  when:
+    - disable_service_firewall is defined and disable_service_firewall
@@ -9,8 +9,8 @@ Summary: Ansible modules for installing Kubernetes
 
 Group: System Environment/Libraries
 License: ASL 2.0
-Url: https://github.com/kubernetes-incubator/kubespray
-Source0: https://github.com/kubernetes-incubator/kubespray/archive/%{upstream_version}.tar.gz#/%{name}-%{release}.tar.gz
+Url: https://github.com/kubernetes-sigs/kubespray
+Source0: https://github.com/kubernetes-sigs/kubespray/archive/%{upstream_version}.tar.gz#/%{name}-%{release}.tar.gz
 
 BuildArch: noarch
 BuildRequires: git

@@ -51,7 +51,7 @@ export SKIP_PIP_INSTALL=1
 %doc %{_docdir}/%{name}/inventory/sample/hosts.ini
 %config %{_sysconfdir}/%{name}/ansible.cfg
 %config %{_sysconfdir}/%{name}/inventory/sample/group_vars/all.yml
-%config %{_sysconfdir}/%{name}/inventory/sample/group_vars/k8s-cluster.yml
+%config %{_sysconfdir}/%{name}/inventory/sample/group_vars/k8s_cluster.yml
 %license %{_docdir}/%{name}/LICENSE
 %{python2_sitelib}/%{srcname}-%{release}-py%{python2_version}.egg-info
 %{_datarootdir}/%{name}/roles/
contrib/terraform/aws/.gitignore (vendored)
@@ -1,2 +1,3 @@
 *.tfstate*
+.terraform.lock.hcl
 .terraform
@@ -122,7 +122,7 @@ You can use the following set of commands to get the kubeconfig file from your n
 
 ```commandline
 # Get the controller's IP address.
-CONTROLLER_HOST_NAME=$(cat ./inventory/hosts | grep "\[kube-master\]" -A 1 | tail -n 1)
+CONTROLLER_HOST_NAME=$(cat ./inventory/hosts | grep "\[kube_control_plane\]" -A 1 | tail -n 1)
 CONTROLLER_IP=$(cat ./inventory/hosts | grep $CONTROLLER_HOST_NAME | grep ansible_host | cut -d'=' -f2)
 
 # Get the hostname of the load balancer.
@@ -20,7 +20,7 @@ module "aws-vpc" {
 
   aws_cluster_name = var.aws_cluster_name
   aws_vpc_cidr_block = var.aws_vpc_cidr_block
-  aws_avail_zones = slice(data.aws_availability_zones.available.names, 0, 2)
+  aws_avail_zones = slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names))
   aws_cidr_subnets_private = var.aws_cidr_subnets_private
   aws_cidr_subnets_public = var.aws_cidr_subnets_public
   default_tags = var.default_tags

@@ -31,7 +31,7 @@ module "aws-elb" {
 
   aws_cluster_name = var.aws_cluster_name
   aws_vpc_id = module.aws-vpc.aws_vpc_id
-  aws_avail_zones = slice(data.aws_availability_zones.available.names, 0, 2)
+  aws_avail_zones = slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names))
   aws_subnet_ids_public = module.aws-vpc.aws_subnet_ids_public
   aws_elb_api_port = var.aws_elb_api_port
   k8s_secure_api_port = var.k8s_secure_api_port

@@ -52,20 +52,20 @@ module "aws-iam" {
 resource "aws_instance" "bastion-server" {
   ami = data.aws_ami.distro.id
   instance_type = var.aws_bastion_size
-  count = length(var.aws_cidr_subnets_public)
+  count = var.aws_bastion_num
   associate_public_ip_address = true
-  availability_zone = element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)
+  availability_zone = element(slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names)), count.index)
   subnet_id = element(module.aws-vpc.aws_subnet_ids_public, count.index)
 
   vpc_security_group_ids = module.aws-vpc.aws_security_group
 
   key_name = var.AWS_SSH_KEY_NAME
 
-  tags = merge(var.default_tags, map(
-    "Name", "kubernetes-${var.aws_cluster_name}-bastion-${count.index}",
-    "Cluster", var.aws_cluster_name,
-    "Role", "bastion-${var.aws_cluster_name}-${count.index}"
-  ))
+  tags = merge(var.default_tags, tomap({
+    Name = "kubernetes-${var.aws_cluster_name}-bastion-${count.index}"
+    Cluster = var.aws_cluster_name
+    Role = "bastion-${var.aws_cluster_name}-${count.index}"
+  }))
 }
 
 /*

@@ -79,19 +79,23 @@ resource "aws_instance" "k8s-master" {
 
   count = var.aws_kube_master_num
 
-  availability_zone = element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)
+  availability_zone = element(slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names)), count.index)
   subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
 
   vpc_security_group_ids = module.aws-vpc.aws_security_group
 
-  iam_instance_profile = module.aws-iam.kube-master-profile
+  root_block_device {
+    volume_size = var.aws_kube_master_disk_size
+  }
+
+  iam_instance_profile = module.aws-iam.kube_control_plane-profile
   key_name = var.AWS_SSH_KEY_NAME
 
-  tags = merge(var.default_tags, map(
-    "Name", "kubernetes-${var.aws_cluster_name}-master${count.index}",
-    "kubernetes.io/cluster/${var.aws_cluster_name}", "member",
-    "Role", "master"
-  ))
+  tags = merge(var.default_tags, tomap({
+    Name = "kubernetes-${var.aws_cluster_name}-master${count.index}"
+    "kubernetes.io/cluster/${var.aws_cluster_name}" = "member"
+    Role = "master"
+  }))
 }
 
 resource "aws_elb_attachment" "attach_master_nodes" {

@@ -106,18 +110,22 @@ resource "aws_instance" "k8s-etcd" {
 
   count = var.aws_etcd_num
 
-  availability_zone = element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)
+  availability_zone = element(slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names)), count.index)
   subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
 
   vpc_security_group_ids = module.aws-vpc.aws_security_group
 
+  root_block_device {
+    volume_size = var.aws_etcd_disk_size
+  }
+
   key_name = var.AWS_SSH_KEY_NAME
 
-  tags = merge(var.default_tags, map(
-    "Name", "kubernetes-${var.aws_cluster_name}-etcd${count.index}",
-    "kubernetes.io/cluster/${var.aws_cluster_name}", "member",
-    "Role", "etcd"
-  ))
+  tags = merge(var.default_tags, tomap({
+    Name = "kubernetes-${var.aws_cluster_name}-etcd${count.index}"
+    "kubernetes.io/cluster/${var.aws_cluster_name}" = "member"
+    Role = "etcd"
+  }))
 }
 
 resource "aws_instance" "k8s-worker" {

@@ -126,19 +134,23 @@ resource "aws_instance" "k8s-worker" {
 
   count = var.aws_kube_worker_num
 
-  availability_zone = element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)
+  availability_zone = element(slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names)), count.index)
   subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
 
   vpc_security_group_ids = module.aws-vpc.aws_security_group
 
+  root_block_device {
+    volume_size = var.aws_kube_worker_disk_size
+  }
+
   iam_instance_profile = module.aws-iam.kube-worker-profile
   key_name = var.AWS_SSH_KEY_NAME
 
-  tags = merge(var.default_tags, map(
-    "Name", "kubernetes-${var.aws_cluster_name}-worker${count.index}",
-    "kubernetes.io/cluster/${var.aws_cluster_name}", "member",
-    "Role", "worker"
-  ))
+  tags = merge(var.default_tags, tomap({
+    Name = "kubernetes-${var.aws_cluster_name}-worker${count.index}"
+    "kubernetes.io/cluster/${var.aws_cluster_name}" = "member"
+    Role = "worker"
+  }))
 }
 
 /*

@@ -152,10 +164,10 @@ data "template_file" "inventory" {
     public_ip_address_bastion = join("\n", formatlist("bastion ansible_host=%s", aws_instance.bastion-server.*.public_ip))
     connection_strings_master = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-master.*.private_dns, aws_instance.k8s-master.*.private_ip))
     connection_strings_node = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-worker.*.private_dns, aws_instance.k8s-worker.*.private_ip))
-    connection_strings_etcd = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.private_dns, aws_instance.k8s-etcd.*.private_ip))
     list_master = join("\n", aws_instance.k8s-master.*.private_dns)
     list_node = join("\n", aws_instance.k8s-worker.*.private_dns)
-    list_etcd = join("\n", aws_instance.k8s-etcd.*.private_dns)
+    connection_strings_etcd = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.private_dns, aws_instance.k8s-etcd.*.private_ip))
+    list_etcd = join("\n", ((var.aws_etcd_num > 0) ? (aws_instance.k8s-etcd.*.private_dns) : (aws_instance.k8s-master.*.private_dns)))
     elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
   }
 }
@@ -2,9 +2,9 @@ resource "aws_security_group" "aws-elb" {
   name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
   vpc_id = var.aws_vpc_id
 
-  tags = merge(var.default_tags, map(
-    "Name", "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
-  ))
+  tags = merge(var.default_tags, tomap({
+    Name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
+  }))
 }
 
 resource "aws_security_group_rule" "aws-allow-api-access" {

@@ -51,7 +51,7 @@ resource "aws_elb" "aws-elb-api" {
   connection_draining = true
   connection_draining_timeout = 400
 
-  tags = merge(var.default_tags, map(
-    "Name", "kubernetes-${var.aws_cluster_name}-elb-api"
-  ))
+  tags = merge(var.default_tags, tomap({
+    Name = "kubernetes-${var.aws_cluster_name}-elb-api"
+  }))
 }
@@ -1,6 +1,6 @@
 #Add AWS Roles for Kubernetes
 
-resource "aws_iam_role" "kube-master" {
+resource "aws_iam_role" "kube_control_plane" {
   name = "kubernetes-${var.aws_cluster_name}-master"
 
   assume_role_policy = <<EOF

@@ -40,9 +40,9 @@ EOF
 
 #Add AWS Policies for Kubernetes
 
-resource "aws_iam_role_policy" "kube-master" {
+resource "aws_iam_role_policy" "kube_control_plane" {
   name = "kubernetes-${var.aws_cluster_name}-master"
-  role = aws_iam_role.kube-master.id
+  role = aws_iam_role.kube_control_plane.id
 
   policy = <<EOF
 {

@@ -130,9 +130,9 @@ EOF
 
 #Create AWS Instance Profiles
 
-resource "aws_iam_instance_profile" "kube-master" {
+resource "aws_iam_instance_profile" "kube_control_plane" {
   name = "kube_${var.aws_cluster_name}_master_profile"
-  role = aws_iam_role.kube-master.name
+  role = aws_iam_role.kube_control_plane.name
 }
 
 resource "aws_iam_instance_profile" "kube-worker" {
@@ -1,5 +1,5 @@
-output "kube-master-profile" {
+output "kube_control_plane-profile" {
-  value = aws_iam_instance_profile.kube-master.name
+  value = aws_iam_instance_profile.kube_control_plane.name
 }

 output "kube-worker-profile" {
@@ -5,9 +5,9 @@ resource "aws_vpc" "cluster-vpc" {
   enable_dns_support = true
   enable_dns_hostnames = true

-  tags = merge(var.default_tags, map(
+  tags = merge(var.default_tags, tomap({
-    "Name", "kubernetes-${var.aws_cluster_name}-vpc"
+    Name = "kubernetes-${var.aws_cluster_name}-vpc"
-  ))
+  }))
 }

 resource "aws_eip" "cluster-nat-eip" {

@@ -18,9 +18,9 @@ resource "aws_eip" "cluster-nat-eip" {
 resource "aws_internet_gateway" "cluster-vpc-internetgw" {
   vpc_id = aws_vpc.cluster-vpc.id

-  tags = merge(var.default_tags, map(
+  tags = merge(var.default_tags, tomap({
-    "Name", "kubernetes-${var.aws_cluster_name}-internetgw"
+    Name = "kubernetes-${var.aws_cluster_name}-internetgw"
-  ))
+  }))
 }

 resource "aws_subnet" "cluster-vpc-subnets-public" {

@@ -29,10 +29,10 @@ resource "aws_subnet" "cluster-vpc-subnets-public" {
   availability_zone = element(var.aws_avail_zones, count.index)
   cidr_block = element(var.aws_cidr_subnets_public, count.index)

-  tags = merge(var.default_tags, map(
+  tags = merge(var.default_tags, tomap({
-    "Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public",
+    Name = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public"
-    "kubernetes.io/cluster/${var.aws_cluster_name}", "member"
+    "kubernetes.io/cluster/${var.aws_cluster_name}" = "member"
-  ))
+  }))
 }

 resource "aws_nat_gateway" "cluster-nat-gateway" {

@@ -47,9 +47,9 @@ resource "aws_subnet" "cluster-vpc-subnets-private" {
   availability_zone = element(var.aws_avail_zones, count.index)
   cidr_block = element(var.aws_cidr_subnets_private, count.index)

-  tags = merge(var.default_tags, map(
+  tags = merge(var.default_tags, tomap({
-    "Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
+    Name = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
-  ))
+  }))
 }

 #Routing in VPC

@@ -64,9 +64,9 @@ resource "aws_route_table" "kubernetes-public" {
     gateway_id = aws_internet_gateway.cluster-vpc-internetgw.id
   }

-  tags = merge(var.default_tags, map(
+  tags = merge(var.default_tags, tomap({
-    "Name", "kubernetes-${var.aws_cluster_name}-routetable-public"
+    Name = "kubernetes-${var.aws_cluster_name}-routetable-public"
-  ))
+  }))
 }

 resource "aws_route_table" "kubernetes-private" {

@@ -78,9 +78,9 @@ resource "aws_route_table" "kubernetes-private" {
     nat_gateway_id = element(aws_nat_gateway.cluster-nat-gateway.*.id, count.index)
   }

-  tags = merge(var.default_tags, map(
+  tags = merge(var.default_tags, tomap({
-    "Name", "kubernetes-${var.aws_cluster_name}-routetable-private-${count.index}"
+    Name = "kubernetes-${var.aws_cluster_name}-routetable-private-${count.index}"
-  ))
+  }))
 }

 resource "aws_route_table_association" "kubernetes-public" {

@@ -101,9 +101,9 @@ resource "aws_security_group" "kubernetes" {
   name = "kubernetes-${var.aws_cluster_name}-securitygroup"
   vpc_id = aws_vpc.cluster-vpc.id

-  tags = merge(var.default_tags, map(
+  tags = merge(var.default_tags, tomap({
-    "Name", "kubernetes-${var.aws_cluster_name}-securitygroup"
+    Name = "kubernetes-${var.aws_cluster_name}-securitygroup"
-  ))
+  }))
 }

 resource "aws_security_group_rule" "allow-all-ingress" {
@@ -11,7 +11,7 @@ output "workers" {
 }

 output "etcd" {
-  value = join("\n", aws_instance.k8s-etcd.*.private_ip)
+  value = join("\n", ((var.aws_etcd_num > 0) ? (aws_instance.k8s-etcd.*.private_ip) : (aws_instance.k8s-master.*.private_ip)))
 }

 output "aws_elb_api_fqdn" {
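The conditional above makes dedicated etcd instances optional: when `aws_etcd_num` is 0, the etcd output (and the etcd inventory group derived from it) falls back to the control-plane nodes. A sketch of the corresponding tfvars choice:

```hcl
# Collocate etcd with the control plane instead of running separate instances.
aws_etcd_num        = 0
aws_kube_master_num = 3  # keep an odd count so etcd retains quorum
```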
@@ -9,6 +9,8 @@ aws_cidr_subnets_private = ["10.250.192.0/20", "10.250.208.0/20"]
 aws_cidr_subnets_public = ["10.250.224.0/20", "10.250.240.0/20"]

 #Bastion Host
+aws_bastion_num = 1

 aws_bastion_size = "t2.medium"

 #Kubernetes Cluster

@@ -17,22 +19,26 @@ aws_kube_master_num = 3

 aws_kube_master_size = "t2.medium"

+aws_kube_master_disk_size = 50

 aws_etcd_num = 3

 aws_etcd_size = "t2.medium"

+aws_etcd_disk_size = 50

 aws_kube_worker_num = 4

 aws_kube_worker_size = "t2.medium"

+aws_kube_worker_disk_size = 50

 #Settings AWS ELB

 aws_elb_api_port = 6443

 k8s_secure_api_port = 6443

-kube_insecure_apiserver_address = "0.0.0.0"

 default_tags = {
   # Env = "devtest" # Product = "kubernetes"
 }
@@ -7,22 +7,21 @@ ${public_ip_address_bastion}
 [bastion]
 ${public_ip_address_bastion}

-[kube-master]
+[kube_control_plane]
 ${list_master}

-[kube-node]
+[kube_node]
 ${list_node}

 [etcd]
 ${list_etcd}

+[calico_rr]

-[k8s-cluster:children]
+[k8s_cluster:children]
-kube-node
+kube_node
-kube-master
+kube_control_plane
+calico_rr

-[k8s-cluster:vars]
+[k8s_cluster:vars]
 ${elb_api_fqdn}
@@ -6,26 +6,34 @@ aws_vpc_cidr_block = "10.250.192.0/18"
 aws_cidr_subnets_private = ["10.250.192.0/20", "10.250.208.0/20"]
 aws_cidr_subnets_public = ["10.250.224.0/20", "10.250.240.0/20"]

-#Bastion Host
-aws_bastion_size = "t2.medium"
+# single AZ deployment
+#aws_cidr_subnets_private = ["10.250.192.0/20"]
+#aws_cidr_subnets_public = ["10.250.224.0/20"]

+# 3+ AZ deployment
+#aws_cidr_subnets_private = ["10.250.192.0/24","10.250.193.0/24","10.250.194.0/24","10.250.195.0/24"]
+#aws_cidr_subnets_public = ["10.250.224.0/24","10.250.225.0/24","10.250.226.0/24","10.250.227.0/24"]

+#Bastion Host
+aws_bastion_num = 1
+aws_bastion_size = "t3.small"

 #Kubernetes Cluster
-aws_kube_master_num = 3
-aws_kube_master_size = "t2.medium"
+aws_kube_master_num = 3
+aws_kube_master_size = "t3.medium"
+aws_kube_master_disk_size = 50

-aws_etcd_num = 3
-aws_etcd_size = "t2.medium"
+aws_etcd_num = 0
+aws_etcd_size = "t3.medium"
+aws_etcd_disk_size = 50

-aws_kube_worker_num = 4
-aws_kube_worker_size = "t2.medium"
+aws_kube_worker_num = 4
+aws_kube_worker_size = "t3.medium"
+aws_kube_worker_disk_size = 50

 #Settings AWS ELB
 aws_elb_api_port = 6443
 k8s_secure_api_port = 6443
-kube_insecure_apiserver_address = "0.0.0.0"

 default_tags = {
   # Env = "devtest"
@@ -8,25 +8,26 @@ aws_cidr_subnets_public = ["10.250.224.0/20","10.250.240.0/20"]
 aws_avail_zones = ["eu-central-1a","eu-central-1b"]

 #Bastion Host
-aws_bastion_ami = "ami-5900cc36"
+aws_bastion_num = 1
-aws_bastion_size = "t2.small"
+aws_bastion_size = "t3.small"

 #Kubernetes Cluster

 aws_kube_master_num = 3
-aws_kube_master_size = "t2.medium"
+aws_kube_master_size = "t3.medium"
+aws_kube_master_disk_size = 50

 aws_etcd_num = 3
-aws_etcd_size = "t2.medium"
+aws_etcd_size = "t3.medium"
+aws_etcd_disk_size = 50

 aws_kube_worker_num = 4
-aws_kube_worker_size = "t2.medium"
+aws_kube_worker_size = "t3.medium"
+aws_kube_worker_disk_size = 50
-aws_cluster_ami = "ami-903df7ff"

 #Settings AWS ELB

 aws_elb_api_port = 6443
 k8s_secure_api_port = 6443
-kube_insecure_apiserver_address = 0.0.0.0
+default_tags = { }

+inventory_file = "../../../inventory/hosts"
@@ -25,7 +25,7 @@ data "aws_ami" "distro" {

   filter {
     name   = "name"
-    values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
+    values = ["debian-10-amd64-*"]
   }

   filter {

@@ -33,7 +33,7 @@ data "aws_ami" "distro" {
     values = ["hvm"]
   }

-  owners = ["099720109477"] # Canonical
+  owners = ["136693071363"] # Debian-10

 }

 //AWS VPC Variables
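With this change the default `aws_ami` lookup resolves a Debian 10 image instead of Ubuntu 18.04. If you would rather keep Ubuntu, a sketch of how the data source could be adjusted is shown below; the filter pattern and the exact data source layout are assumptions for Ubuntu 20.04 and are not part of this diff (the Canonical owner ID is the one removed above):

```hcl
data "aws_ami" "distro" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}
```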
@@ -63,10 +63,18 @@ variable "aws_bastion_size" {
 * The number should be divisable by the number of used
 * AWS Availability Zones without an remainder.
 */
+variable "aws_bastion_num" {
+  description = "Number of Bastion Nodes"
+}
+
 variable "aws_kube_master_num" {
   description = "Number of Kubernetes Master Nodes"
 }

+variable "aws_kube_master_disk_size" {
+  description = "Disk size for Kubernetes Master Nodes (in GiB)"
+}
+
 variable "aws_kube_master_size" {
   description = "Instance size of Kube Master Nodes"
 }

@@ -75,6 +83,10 @@ variable "aws_etcd_num" {
   description = "Number of etcd Nodes"
 }

+variable "aws_etcd_disk_size" {
+  description = "Disk size for etcd Nodes (in GiB)"
+}
+
 variable "aws_etcd_size" {
   description = "Instance size of etcd Nodes"
 }

@@ -83,6 +95,10 @@ variable "aws_kube_worker_num" {
   description = "Number of Kubernetes Worker Nodes"
 }

+variable "aws_kube_worker_disk_size" {
+  description = "Disk size for Kubernetes Worker Nodes (in GiB)"
+}
+
 variable "aws_kube_worker_size" {
   description = "Instance size of Kubernetes Worker Nodes"
 }
contrib/terraform/exoscale/README.md (new file, 154 lines)
@@ -0,0 +1,154 @@
# Kubernetes on Exoscale with Terraform

Provision a Kubernetes cluster on [Exoscale](https://www.exoscale.com/) using Terraform and Kubespray

## Overview

The setup looks like the following:

```text
                          Kubernetes cluster
                        +-----------------------+
+---------------+       |  +----------------+   |
| API server LB +--------->| Master/etcd    |   |
+---------------+       |  | node(s)        |   |
                        |  +----------------+   |
                        |          ^            |
                        |          |            |
                        |          v            |
+---------------+       |  +----------------+   |
| Ingress LB    +--------->| Worker         |   |
+---------------+       |  | node(s)        |   |
                        |  +----------------+   |
                        +-----------------------+
```

## Requirements

* Terraform 0.13.0 or newer

*0.12 also works if you modify the provider block to include version and remove all `versions.tf` files*

## Quickstart

NOTE: *Assumes you are at the root of the kubespray repo*

Copy the sample inventory for your cluster and copy the default terraform variables.

```bash
CLUSTER=my-exoscale-cluster
cp -r inventory/sample inventory/$CLUSTER
cp contrib/terraform/exoscale/default.tfvars inventory/$CLUSTER/
cd inventory/$CLUSTER
```

Edit `default.tfvars` to match your setup. You MUST, at the very least, change `ssh_public_keys`.

```bash
# Ensure $EDITOR points to your favorite editor, e.g., vim, emacs, VS Code, etc.
$EDITOR default.tfvars
```

For authentication you can use the credentials file `~/.cloudstack.ini` or `./cloudstack.ini`.
The file should look something like this:

```ini
[cloudstack]
key = <API key>
secret = <API secret>
```

Follow the [Exoscale IAM Quick-start](https://community.exoscale.com/documentation/iam/quick-start/) to learn how to generate API keys.

### Encrypted credentials

To have the credentials encrypted at rest, you can use [sops](https://github.com/mozilla/sops) and only decrypt the credentials at runtime.

```bash
cat << EOF > cloudstack.ini
[cloudstack]
key =
secret =
EOF
sops --encrypt --in-place --pgp <PGP key fingerprint> cloudstack.ini
sops cloudstack.ini
```

Run terraform to create the infrastructure

```bash
terraform init ../../contrib/terraform/exoscale
terraform apply -var-file default.tfvars ../../contrib/terraform/exoscale
```

If your cloudstack credentials file is encrypted using sops, run the following:

```bash
terraform init ../../contrib/terraform/exoscale
sops exec-file -no-fifo cloudstack.ini 'CLOUDSTACK_CONFIG={} terraform apply -var-file default.tfvars ../../contrib/terraform/exoscale'
```

You should now have an inventory file named `inventory.ini` that you can use with kubespray.
You can now copy your inventory file and use it with kubespray to set up a cluster.
You can type `terraform output` to find out the IP addresses of the nodes, as well as the control-plane and data-plane load balancers.

It is a good idea to check that you have basic SSH connectivity to the nodes. You can do that by:

```bash
ansible -i inventory.ini -m ping all
```

Example to use this with the default sample inventory:

```bash
ansible-playbook -i inventory.ini ../../cluster.yml -b -v
```

## Teardown

The Kubernetes cluster cannot create any load-balancers or disks, hence teardown is as simple as a Terraform destroy:

```bash
terraform destroy -var-file default.tfvars ../../contrib/terraform/exoscale
```

## Variables

### Required

* `ssh_public_keys`: List of public SSH keys to install on all machines
* `zone`: The zone where to run the cluster
* `machines`: Machines to provision. Key of this object will be used as the name of the machine
  * `node_type`: The role of this node *(master|worker)*
  * `size`: The size to use
  * `boot_disk`: The boot disk to use
    * `image_name`: Name of the image
    * `root_partition_size`: Size *(in GB)* for the root partition
    * `ceph_partition_size`: Size *(in GB)* for the partition for rook to use as ceph storage. *(Set to 0 to disable)*
    * `node_local_partition_size`: Size *(in GB)* for the partition for node-local-storage. *(Set to 0 to disable)*
* `ssh_whitelist`: List of IP ranges (CIDR) that will be allowed to ssh to the nodes
* `api_server_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the API server
* `nodeport_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the kubernetes nodes on port 30000-32767 (kubernetes nodeports)

### Optional

* `prefix`: Prefix to use for all resources, required to be unique for all clusters in the same project *(Defaults to `default`)*

An example variables file can be found in `default.tfvars`

## Known limitations

### Only single disk

Since Exoscale doesn't support additional disks to be mounted onto an instance, this script has the ability to create partitions for [Rook](https://rook.io/) and [node-local-storage](https://kubernetes.io/docs/concepts/storage/volumes/#local).

### No Kubernetes API

The current solution doesn't use the [Exoscale Kubernetes cloud controller](https://github.com/exoscale/exoscale-cloud-controller-manager).
This means that we need to set up a HTTP(S) loadbalancer in front of all workers and set the Ingress controller to DaemonSet mode.
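Kubespray can satisfy that requirement through its ingress-nginx addon, which it deploys as a DaemonSet. A minimal sketch of the group_vars toggle, assuming the standard kubespray addon variable names (verify them against the `addons.yml` in your inventory):

```yaml
# inventory/$CLUSTER/group_vars/k8s_cluster/addons.yml (path assumed)
ingress_nginx_enabled: true
# Bind the controller to host ports 80/443 on every worker so the Exoscale EIP
# health check (HTTP on port 80, path /healthz) can reach it directly.
ingress_nginx_host_network: true
```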
contrib/terraform/exoscale/default.tfvars (new file, 65 lines)
@@ -0,0 +1,65 @@
prefix = "default"
zone   = "ch-gva-2"

inventory_file = "inventory.ini"

ssh_public_keys = [
  # Put your public SSH key here
  "ssh-rsa I-did-not-read-the-docs",
  "ssh-rsa I-did-not-read-the-docs 2",
]

machines = {
  "master-0" : {
    "node_type" : "master",
    "size" : "Medium",
    "boot_disk" : {
      "image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
      "root_partition_size" : 50,
      "node_local_partition_size" : 0,
      "ceph_partition_size" : 0
    }
  },
  "worker-0" : {
    "node_type" : "worker",
    "size" : "Large",
    "boot_disk" : {
      "image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
      "root_partition_size" : 50,
      "node_local_partition_size" : 0,
      "ceph_partition_size" : 0
    }
  },
  "worker-1" : {
    "node_type" : "worker",
    "size" : "Large",
    "boot_disk" : {
      "image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
      "root_partition_size" : 50,
      "node_local_partition_size" : 0,
      "ceph_partition_size" : 0
    }
  },
  "worker-2" : {
    "node_type" : "worker",
    "size" : "Large",
    "boot_disk" : {
      "image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
      "root_partition_size" : 50,
      "node_local_partition_size" : 0,
      "ceph_partition_size" : 0
    }
  }
}

nodeport_whitelist = [
  "0.0.0.0/0"
]

ssh_whitelist = [
  "0.0.0.0/0"
]

api_server_whitelist = [
  "0.0.0.0/0"
]
contrib/terraform/exoscale/main.tf (new file, 49 lines)
@@ -0,0 +1,49 @@
provider "exoscale" {}

module "kubernetes" {
  source = "./modules/kubernetes-cluster"

  prefix = var.prefix

  machines = var.machines

  ssh_public_keys = var.ssh_public_keys

  ssh_whitelist        = var.ssh_whitelist
  api_server_whitelist = var.api_server_whitelist
  nodeport_whitelist   = var.nodeport_whitelist
}

#
# Generate ansible inventory
#

data "template_file" "inventory" {
  template = file("${path.module}/templates/inventory.tpl")

  vars = {
    connection_strings_master = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s etcd_member_name=etcd%d",
      keys(module.kubernetes.master_ip_addresses),
      values(module.kubernetes.master_ip_addresses).*.public_ip,
      values(module.kubernetes.master_ip_addresses).*.private_ip,
      range(1, length(module.kubernetes.master_ip_addresses) + 1)))
    connection_strings_worker = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s",
      keys(module.kubernetes.worker_ip_addresses),
      values(module.kubernetes.worker_ip_addresses).*.public_ip,
      values(module.kubernetes.worker_ip_addresses).*.private_ip))

    list_master       = join("\n", keys(module.kubernetes.master_ip_addresses))
    list_worker       = join("\n", keys(module.kubernetes.worker_ip_addresses))
    api_lb_ip_address = module.kubernetes.control_plane_lb_ip_address
  }
}

resource "null_resource" "inventories" {
  provisioner "local-exec" {
    command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
  }

  triggers = {
    template = data.template_file.inventory.rendered
  }
}
contrib/terraform/exoscale/modules/kubernetes-cluster/main.tf (new file, 193 lines)
@@ -0,0 +1,193 @@
data "exoscale_compute_template" "os_image" {
  for_each = var.machines

  zone = var.zone
  name = each.value.boot_disk.image_name
}

data "exoscale_compute" "master_nodes" {
  for_each = exoscale_compute.master

  id = each.value.id

  # Since private IP address is not assigned until the nics are created we need this
  depends_on = [exoscale_nic.master_private_network_nic]
}

data "exoscale_compute" "worker_nodes" {
  for_each = exoscale_compute.worker

  id = each.value.id

  # Since private IP address is not assigned until the nics are created we need this
  depends_on = [exoscale_nic.worker_private_network_nic]
}

resource "exoscale_network" "private_network" {
  zone = var.zone
  name = "${var.prefix}-network"

  start_ip = cidrhost(var.private_network_cidr, 1)
  # cidr -1 = Broadcast address
  # cidr -2 = DHCP server address (exoscale specific)
  end_ip  = cidrhost(var.private_network_cidr, -3)
  netmask = cidrnetmask(var.private_network_cidr)
}

resource "exoscale_compute" "master" {
  for_each = {
    for name, machine in var.machines :
    name => machine
    if machine.node_type == "master"
  }

  display_name    = "${var.prefix}-${each.key}"
  template_id     = data.exoscale_compute_template.os_image[each.key].id
  size            = each.value.size
  disk_size       = each.value.boot_disk.root_partition_size + each.value.boot_disk.node_local_partition_size + each.value.boot_disk.ceph_partition_size
  state           = "Running"
  zone            = var.zone
  security_groups = [exoscale_security_group.master_sg.name]

  user_data = templatefile(
    "${path.module}/templates/cloud-init.tmpl",
    {
      eip_ip_address            = exoscale_ipaddress.ingress_controller_lb.ip_address
      node_local_partition_size = each.value.boot_disk.node_local_partition_size
      ceph_partition_size       = each.value.boot_disk.ceph_partition_size
      root_partition_size       = each.value.boot_disk.root_partition_size
      node_type                 = "master"
      ssh_public_keys           = var.ssh_public_keys
    }
  )
}

resource "exoscale_compute" "worker" {
  for_each = {
    for name, machine in var.machines :
    name => machine
    if machine.node_type == "worker"
  }

  display_name    = "${var.prefix}-${each.key}"
  template_id     = data.exoscale_compute_template.os_image[each.key].id
  size            = each.value.size
  disk_size       = each.value.boot_disk.root_partition_size + each.value.boot_disk.node_local_partition_size + each.value.boot_disk.ceph_partition_size
  state           = "Running"
  zone            = var.zone
  security_groups = [exoscale_security_group.worker_sg.name]

  user_data = templatefile(
    "${path.module}/templates/cloud-init.tmpl",
    {
      eip_ip_address            = exoscale_ipaddress.ingress_controller_lb.ip_address
      node_local_partition_size = each.value.boot_disk.node_local_partition_size
      ceph_partition_size       = each.value.boot_disk.ceph_partition_size
      root_partition_size       = each.value.boot_disk.root_partition_size
      node_type                 = "worker"
      ssh_public_keys           = var.ssh_public_keys
    }
  )
}

resource "exoscale_nic" "master_private_network_nic" {
  for_each = exoscale_compute.master

  compute_id = each.value.id
  network_id = exoscale_network.private_network.id
}

resource "exoscale_nic" "worker_private_network_nic" {
  for_each = exoscale_compute.worker

  compute_id = each.value.id
  network_id = exoscale_network.private_network.id
}

resource "exoscale_security_group" "master_sg" {
  name        = "${var.prefix}-master-sg"
  description = "Security group for Kubernetes masters"
}

resource "exoscale_security_group_rules" "master_sg_rules" {
  security_group_id = exoscale_security_group.master_sg.id

  # SSH
  ingress {
    protocol  = "TCP"
    cidr_list = var.ssh_whitelist
    ports     = ["22"]
  }

  # Kubernetes API
  ingress {
    protocol  = "TCP"
    cidr_list = var.api_server_whitelist
    ports     = ["6443"]
  }
}

resource "exoscale_security_group" "worker_sg" {
  name        = "${var.prefix}-worker-sg"
  description = "security group for kubernetes worker nodes"
}

resource "exoscale_security_group_rules" "worker_sg_rules" {
  security_group_id = exoscale_security_group.worker_sg.id

  # SSH
  ingress {
    protocol  = "TCP"
    cidr_list = var.ssh_whitelist
    ports     = ["22"]
  }

  # HTTP(S)
  ingress {
    protocol  = "TCP"
    cidr_list = ["0.0.0.0/0"]
    ports     = ["80", "443"]
  }

  # Kubernetes Nodeport
  ingress {
    protocol  = "TCP"
    cidr_list = var.nodeport_whitelist
    ports     = ["30000-32767"]
  }
}

resource "exoscale_ipaddress" "ingress_controller_lb" {
  zone                     = var.zone
  healthcheck_mode         = "http"
  healthcheck_port         = 80
  healthcheck_path         = "/healthz"
  healthcheck_interval     = 10
  healthcheck_timeout      = 2
  healthcheck_strikes_ok   = 2
  healthcheck_strikes_fail = 3
}

resource "exoscale_secondary_ipaddress" "ingress_controller_lb" {
  for_each = exoscale_compute.worker

  compute_id = each.value.id
  ip_address = exoscale_ipaddress.ingress_controller_lb.ip_address
}

resource "exoscale_ipaddress" "control_plane_lb" {
  zone                     = var.zone
  healthcheck_mode         = "tcp"
  healthcheck_port         = 6443
  healthcheck_interval     = 10
  healthcheck_timeout      = 2
  healthcheck_strikes_ok   = 2
  healthcheck_strikes_fail = 3
}

resource "exoscale_secondary_ipaddress" "control_plane_lb" {
  for_each = exoscale_compute.master

  compute_id = each.value.id
  ip_address = exoscale_ipaddress.control_plane_lb.ip_address
}
@@ -0,0 +1,31 @@
output "master_ip_addresses" {
  value = {
    for key, instance in exoscale_compute.master :
    instance.name => {
      "private_ip" = contains(keys(data.exoscale_compute.master_nodes), key) ? data.exoscale_compute.master_nodes[key].private_network_ip_addresses[0] : ""
      "public_ip"  = exoscale_compute.master[key].ip_address
    }
  }
}

output "worker_ip_addresses" {
  value = {
    for key, instance in exoscale_compute.worker :
    instance.name => {
      "private_ip" = contains(keys(data.exoscale_compute.worker_nodes), key) ? data.exoscale_compute.worker_nodes[key].private_network_ip_addresses[0] : ""
      "public_ip"  = exoscale_compute.worker[key].ip_address
    }
  }
}

output "cluster_private_network_cidr" {
  value = var.private_network_cidr
}

output "ingress_controller_lb_ip_address" {
  value = exoscale_ipaddress.ingress_controller_lb.ip_address
}

output "control_plane_lb_ip_address" {
  value = exoscale_ipaddress.control_plane_lb.ip_address
}
@@ -0,0 +1,52 @@
#cloud-config
%{ if ceph_partition_size > 0 || node_local_partition_size > 0}
bootcmd:
  - [ cloud-init-per, once, move-second-header, sgdisk, --move-second-header, /dev/vda ]
%{ if node_local_partition_size > 0 }
  # Create partition for node local storage
  - [ cloud-init-per, once, create-node-local-part, parted, --script, /dev/vda, 'mkpart extended ext4 ${root_partition_size}GB %{ if ceph_partition_size == 0 }-1%{ else }${root_partition_size + node_local_partition_size}GB%{ endif }' ]
  - [ cloud-init-per, once, create-fs-node-local-part, mkfs.ext4, /dev/vda2 ]
%{ endif }
%{ if ceph_partition_size > 0 }
  # Create partition for rook to use for ceph
  - [ cloud-init-per, once, create-ceph-part, parted, --script, /dev/vda, 'mkpart extended ${root_partition_size + node_local_partition_size}GB -1' ]
%{ endif }
%{ endif }

ssh_authorized_keys:
%{ for ssh_public_key in ssh_public_keys ~}
  - ${ssh_public_key}
%{ endfor ~}

write_files:
  - path: /etc/netplan/eth1.yaml
    content: |
      network:
        version: 2
        ethernets:
          eth1:
            dhcp4: true
%{ if node_type == "worker" }
  # TODO: When a VM is seen as healthy and is added to the EIP loadbalancer
  # pool it no longer can send traffic back to itself via the EIP IP
  # address.
  # Remove this if it ever gets solved.
  - path: /etc/netplan/20-eip-fix.yaml
    content: |
      network:
        version: 2
        ethernets:
          "lo:0":
            match:
              name: lo
            dhcp4: false
            addresses:
              - ${eip_ip_address}/32
%{ endif }
runcmd:
  - netplan apply
%{ if node_local_partition_size > 0 }
  - mkdir -p /mnt/disks/node-local-storage
  - chown nobody:nogroup /mnt/disks/node-local-storage
  - mount /dev/vda2 /mnt/disks/node-local-storage
%{ endif }
@@ -0,0 +1,42 @@
variable "zone" {
  type = string
  # This is currently the only zone that is supposed to be supporting
  # so called "managed private networks".
  # See: https://www.exoscale.com/syslog/introducing-managed-private-networks
  default = "ch-gva-2"
}

variable "prefix" {}

variable "machines" {
  type = map(object({
    node_type = string
    size      = string
    boot_disk = object({
      image_name                = string
      root_partition_size      = number
      ceph_partition_size      = number
      node_local_partition_size = number
    })
  }))
}

variable "ssh_public_keys" {
  type = list(string)
}

variable "ssh_whitelist" {
  type = list(string)
}

variable "api_server_whitelist" {
  type = list(string)
}

variable "nodeport_whitelist" {
  type = list(string)
}

variable "private_network_cidr" {
  default = "172.0.10.0/24"
}
@@ -0,0 +1,9 @@
terraform {
  required_providers {
    exoscale = {
      source  = "exoscale/exoscale"
      version = ">= 0.21"
    }
  }
  required_version = ">= 0.13"
}
contrib/terraform/exoscale/output.tf (new file, 15 lines)
@@ -0,0 +1,15 @@
output "master_ips" {
  value = module.kubernetes.master_ip_addresses
}

output "worker_ips" {
  value = module.kubernetes.worker_ip_addresses
}

output "ingress_controller_lb_ip_address" {
  value = module.kubernetes.ingress_controller_lb_ip_address
}

output "control_plane_lb_ip_address" {
  value = module.kubernetes.control_plane_lb_ip_address
}
contrib/terraform/exoscale/sample-inventory/cluster.tfvars (new file, 65 lines)
@@ -0,0 +1,65 @@
prefix = "default"
zone   = "ch-gva-2"

inventory_file = "inventory.ini"

ssh_public_keys = [
  # Put your public SSH key here
  "ssh-rsa I-did-not-read-the-docs",
  "ssh-rsa I-did-not-read-the-docs 2",
]

machines = {
  "master-0" : {
    "node_type" : "master",
    "size" : "Small",
    "boot_disk" : {
      "image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
      "root_partition_size" : 50,
      "node_local_partition_size" : 0,
      "ceph_partition_size" : 0
    }
  },
  "worker-0" : {
    "node_type" : "worker",
    "size" : "Large",
    "boot_disk" : {
      "image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
      "root_partition_size" : 50,
      "node_local_partition_size" : 0,
      "ceph_partition_size" : 0
    }
  },
  "worker-1" : {
    "node_type" : "worker",
    "size" : "Large",
    "boot_disk" : {
      "image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
      "root_partition_size" : 50,
      "node_local_partition_size" : 0,
      "ceph_partition_size" : 0
    }
  },
  "worker-2" : {
    "node_type" : "worker",
    "size" : "Large",
    "boot_disk" : {
      "image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
      "root_partition_size" : 50,
      "node_local_partition_size" : 0,
      "ceph_partition_size" : 0
    }
  }
}

nodeport_whitelist = [
  "0.0.0.0/0"
]

ssh_whitelist = [
  "0.0.0.0/0"
]

api_server_whitelist = [
  "0.0.0.0/0"
]
contrib/terraform/exoscale/sample-inventory/group_vars (new symbolic link)
@@ -0,0 +1 @@
../../../../inventory/sample/group_vars
contrib/terraform/exoscale/templates/inventory.tpl (new file, 19 lines)
@@ -0,0 +1,19 @@
[all]
${connection_strings_master}
${connection_strings_worker}

[kube_control_plane]
${list_master}

[kube_control_plane:vars]
supplementary_addresses_in_ssl_keys = [ "${api_lb_ip_address}" ]

[etcd]
${list_master}

[kube_node]
${list_worker}

[k8s_cluster:children]
kube_control_plane
kube_node
contrib/terraform/exoscale/variables.tf (new file, 46 lines)
@@ -0,0 +1,46 @@
variable "zone" {
  description = "The zone where to run the cluster"
}

variable "prefix" {
  description = "Prefix for resource names"
  default     = "default"
}

variable "machines" {
  description = "Cluster machines"
  type = map(object({
    node_type = string
    size      = string
    boot_disk = object({
      image_name                = string
      root_partition_size       = number
      ceph_partition_size       = number
      node_local_partition_size = number
    })
  }))
}

variable "ssh_public_keys" {
  description = "List of public SSH keys which are injected into the VMs."
  type        = list(string)
}

variable "ssh_whitelist" {
  description = "List of IP ranges (CIDR) to whitelist for ssh"
  type        = list(string)
}

variable "api_server_whitelist" {
  description = "List of IP ranges (CIDR) to whitelist for kubernetes api server"
  type        = list(string)
}

variable "nodeport_whitelist" {
  description = "List of IP ranges (CIDR) to whitelist for kubernetes nodeports"
  type        = list(string)
}

variable "inventory_file" {
  description = "Where to store the generated inventory file"
}
contrib/terraform/exoscale/versions.tf (new file, 15 lines)
@@ -0,0 +1,15 @@
terraform {
  required_providers {
    exoscale = {
      source  = "exoscale/exoscale"
      version = ">= 0.21"
    }
    null = {
      source = "hashicorp/null"
    }
    template = {
      source = "hashicorp/template"
    }
  }
  required_version = ">= 0.13"
}
@@ -50,13 +50,13 @@ for name in "${WORKER_NAMES[@]}"; do
 done

 echo ""
-echo "[kube-master]"
+echo "[kube_control_plane]"
 for name in "${MASTER_NAMES[@]}"; do
   echo "${name}"
 done

 echo ""
-echo "[kube-master:vars]"
+echo "[kube_control_plane:vars]"
 echo "supplementary_addresses_in_ssl_keys = [ '${API_LB}' ]" # Add LB address to API server certificate
 echo ""
 echo "[etcd]"

@@ -65,12 +65,12 @@ for name in "${MASTER_NAMES[@]}"; do
 done

 echo ""
-echo "[kube-node]"
+echo "[kube_node]"
 for name in "${WORKER_NAMES[@]}"; do
   echo "${name}"
 done

 echo ""
-echo "[k8s-cluster:children]"
+echo "[k8s_cluster:children]"
-echo "kube-master"
+echo "kube_control_plane"
-echo "kube-node"
+echo "kube_node"
contrib/terraform/hetzner/README.md (new file, 107 lines)
@@ -0,0 +1,107 @@
# Kubernetes on Hetzner with Terraform

Provision a Kubernetes cluster on [Hetzner](https://www.hetzner.com/cloud) using Terraform and Kubespray

## Overview

The setup looks like the following:

```text
           Kubernetes cluster
  +--------------------------+
  |     +----------------+   |
  | --> | Master/etcd    |   |
  |     | node(s)        |   |
  |     +----------------+   |
  |             ^            |
  |             |            |
  |             v            |
  |     +----------------+   |
  | --> | Worker         |   |
  |     | node(s)        |   |
  |     +----------------+   |
  +--------------------------+
```

The nodes use a private network for node-to-node communication and a public interface for all external communication.

## Requirements

* Terraform 0.14.0 or newer

## Quickstart

NOTE: Assumes you are at the root of the kubespray repo.

For authentication in your cluster you can use the environment variables.

```bash
export HCLOUD_TOKEN=api-token
```

Copy the cluster configuration file.

```bash
CLUSTER=my-hetzner-cluster
cp -r inventory/sample inventory/$CLUSTER
cp contrib/terraform/hetzner/default.tfvars inventory/$CLUSTER/
cd inventory/$CLUSTER
```

Edit `default.tfvars` to match your requirements.

Run Terraform to create the infrastructure.

```bash
terraform init ../../contrib/terraform/hetzner
terraform apply --var-file default.tfvars ../../contrib/terraform/hetzner/
```

You should now have an inventory file named `inventory.ini` that you can use with kubespray.
You can use the inventory file with kubespray to set up a cluster.

It is a good idea to check that you have basic SSH connectivity to the nodes. You can do that by:

```bash
ansible -i inventory.ini -m ping all
```

You can set up Kubernetes with kubespray using the generated inventory:

```bash
ansible-playbook -i inventory.ini ../../cluster.yml -b -v
```

## Cloud controller

For better support with the cloud you can install the [hcloud cloud controller](https://github.com/hetznercloud/hcloud-cloud-controller-manager) and the [CSI driver](https://github.com/hetznercloud/csi-driver).

Please read the instructions in both repos on how to install them; a rough sketch of the token setup follows below.
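As a rough, non-authoritative sketch of the commonly documented flow (the secret layout and manifest name are assumptions; the upstream READMEs are the source of truth):

```bash
# The cloud controller reads the Hetzner API token from a secret in kube-system.
kubectl -n kube-system create secret generic hcloud --from-literal=token="$HCLOUD_TOKEN"
# Apply the release manifest published by the hcloud-cloud-controller-manager project.
kubectl apply -f https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm.yaml
```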
## Teardown

You can tear down your infrastructure using the following Terraform command:

```bash
terraform destroy --var-file default.tfvars ../../contrib/terraform/hetzner
```

## Variables

* `prefix`: Prefix to add to all resources, if set to "" don't set any prefix
* `ssh_public_keys`: List of public SSH keys to install on all machines
* `zone`: The zone where to run the cluster
* `machines`: Machines to provision. Key of this object will be used as the name of the machine
  * `node_type`: The role of this node *(master|worker)*
  * `size`: Size of the VM
  * `image`: The image to use for the VM
* `ssh_whitelist`: List of IP ranges (CIDR) that will be allowed to ssh to the nodes
* `api_server_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the API server
* `nodeport_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the kubernetes nodes on port 30000-32767 (kubernetes nodeports)
* `ingress_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to kubernetes workers on port 80 and 443
contrib/terraform/hetzner/default.tfvars (new file, 44 lines)
@@ -0,0 +1,44 @@
prefix = "default"
zone   = "hel1"

inventory_file = "inventory.ini"

ssh_public_keys = [
  # Put your public SSH key here
  "ssh-rsa I-did-not-read-the-docs",
  "ssh-rsa I-did-not-read-the-docs 2",
]

machines = {
  "master-0" : {
    "node_type" : "master",
    "size" : "cx21",
    "image" : "ubuntu-20.04",
  },
  "worker-0" : {
    "node_type" : "worker",
    "size" : "cx21",
    "image" : "ubuntu-20.04",
  },
  "worker-1" : {
    "node_type" : "worker",
    "size" : "cx21",
    "image" : "ubuntu-20.04",
  }
}

nodeport_whitelist = [
  "0.0.0.0/0"
]

ingress_whitelist = [
  "0.0.0.0/0"
]

ssh_whitelist = [
  "0.0.0.0/0"
]

api_server_whitelist = [
  "0.0.0.0/0"
]
contrib/terraform/hetzner/main.tf (new file, 51 lines)
@@ -0,0 +1,51 @@
provider "hcloud" {}

module "kubernetes" {
  source = "./modules/kubernetes-cluster"

  prefix = var.prefix

  zone = var.zone

  machines = var.machines

  ssh_public_keys = var.ssh_public_keys

  ssh_whitelist        = var.ssh_whitelist
  api_server_whitelist = var.api_server_whitelist
  nodeport_whitelist   = var.nodeport_whitelist
  ingress_whitelist    = var.ingress_whitelist
}

#
# Generate ansible inventory
#

data "template_file" "inventory" {
  template = file("${path.module}/templates/inventory.tpl")

  vars = {
    connection_strings_master = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s etcd_member_name=etcd%d",
      keys(module.kubernetes.master_ip_addresses),
      values(module.kubernetes.master_ip_addresses).*.public_ip,
      values(module.kubernetes.master_ip_addresses).*.private_ip,
      range(1, length(module.kubernetes.master_ip_addresses) + 1)))
    connection_strings_worker = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s",
      keys(module.kubernetes.worker_ip_addresses),
      values(module.kubernetes.worker_ip_addresses).*.public_ip,
      values(module.kubernetes.worker_ip_addresses).*.private_ip))

    list_master = join("\n", keys(module.kubernetes.master_ip_addresses))
    list_worker = join("\n", keys(module.kubernetes.worker_ip_addresses))
  }
}

resource "null_resource" "inventories" {
  provisioner "local-exec" {
    command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
  }

  triggers = {
    template = data.template_file.inventory.rendered
  }
}
122
contrib/terraform/hetzner/modules/kubernetes-cluster/main.tf
Normal file
122
contrib/terraform/hetzner/modules/kubernetes-cluster/main.tf
Normal file
@@ -0,0 +1,122 @@
resource "hcloud_network" "kubernetes" {
  name     = "${var.prefix}-network"
  ip_range = var.private_network_cidr
}

resource "hcloud_network_subnet" "kubernetes" {
  type         = "cloud"
  network_id   = hcloud_network.kubernetes.id
  network_zone = "eu-central"
  ip_range     = var.private_subnet_cidr
}

resource "hcloud_server" "master" {
  for_each = {
    for name, machine in var.machines :
    name => machine
    if machine.node_type == "master"
  }

  name        = "${var.prefix}-${each.key}"
  image       = each.value.image
  server_type = each.value.size
  location    = var.zone

  user_data = templatefile(
    "${path.module}/templates/cloud-init.tmpl",
    {
      ssh_public_keys = var.ssh_public_keys
    }
  )

  firewall_ids = [hcloud_firewall.master.id]
}

resource "hcloud_server_network" "master" {
  for_each = hcloud_server.master

  server_id = each.value.id

  subnet_id = hcloud_network_subnet.kubernetes.id
}

resource "hcloud_server" "worker" {
  for_each = {
    for name, machine in var.machines :
    name => machine
    if machine.node_type == "worker"
  }

  name        = "${var.prefix}-${each.key}"
  image       = each.value.image
  server_type = each.value.size
  location    = var.zone

  user_data = templatefile(
    "${path.module}/templates/cloud-init.tmpl",
    {
      ssh_public_keys = var.ssh_public_keys
    }
  )

  firewall_ids = [hcloud_firewall.worker.id]
}

resource "hcloud_server_network" "worker" {
  for_each = hcloud_server.worker

  server_id = each.value.id

  subnet_id = hcloud_network_subnet.kubernetes.id
}

resource "hcloud_firewall" "master" {
  name = "${var.prefix}-master-firewall"

  rule {
    direction  = "in"
    protocol   = "tcp"
    port       = "22"
    source_ips = var.ssh_whitelist
  }

  rule {
    direction  = "in"
    protocol   = "tcp"
    port       = "6443"
    source_ips = var.api_server_whitelist
  }
}

resource "hcloud_firewall" "worker" {
  name = "${var.prefix}-worker-firewall"

  rule {
    direction  = "in"
    protocol   = "tcp"
    port       = "22"
    source_ips = var.ssh_whitelist
  }

  rule {
    direction  = "in"
    protocol   = "tcp"
    port       = "80"
    source_ips = var.ingress_whitelist
  }

  rule {
    direction  = "in"
    protocol   = "tcp"
    port       = "443"
    source_ips = var.ingress_whitelist
  }

  rule {
    direction  = "in"
    protocol   = "tcp"
    port       = "30000-32767"
    source_ips = var.nodeport_whitelist
  }
}
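The two `hcloud_server` resources above split a single `machines` map into control-plane and worker instances by filtering on `node_type` in their `for_each` expressions. A minimal sketch of such a map in `cluster.tfvars` (the machine names, server type, and image below are illustrative placeholders, not values taken from this diff):

```hcl
machines = {
  "master-0" = {
    node_type = "master"       # matched by hcloud_server.master
    size      = "cx21"         # example Hetzner server type
    image     = "ubuntu-20.04" # example image name
  }
  "worker-0" = {
    node_type = "worker"       # matched by hcloud_server.worker
    size      = "cx21"
    image     = "ubuntu-20.04"
  }
}
```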
@@ -0,0 +1,23 @@
output "master_ip_addresses" {
  value = {
    for key, instance in hcloud_server.master :
    instance.name => {
      "private_ip" = hcloud_server_network.master[key].ip
      "public_ip"  = hcloud_server.master[key].ipv4_address
    }
  }
}

output "worker_ip_addresses" {
  value = {
    for key, instance in hcloud_server.worker :
    instance.name => {
      "private_ip" = hcloud_server_network.worker[key].ip
      "public_ip"  = hcloud_server.worker[key].ipv4_address
    }
  }
}

output "cluster_private_network_cidr" {
  value = var.private_subnet_cidr
}
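Each output is a map keyed by server name carrying the node's private and public addresses. A hypothetical root-module snippet consuming these maps (the local names and the connection-string format are illustrative, not taken from this diff):

```hcl
# Build "name ansible_host=<public> ip=<private>" lines from the module outputs.
locals {
  master_connection_strings = [
    for name, ips in module.kubernetes.master_ip_addresses :
    "${name} ansible_host=${ips.public_ip} ip=${ips.private_ip}"
  ]
  worker_connection_strings = [
    for name, ips in module.kubernetes.worker_ip_addresses :
    "${name} ansible_host=${ips.public_ip} ip=${ips.private_ip}"
  ]
}
```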
@@ -0,0 +1,17 @@
#cloud-config

users:
  - default
  - name: ubuntu
    shell: /bin/bash
    sudo: "ALL=(ALL) NOPASSWD:ALL"
    ssh_authorized_keys:
%{ for ssh_public_key in ssh_public_keys ~}
      - ${ssh_public_key}
%{ endfor ~}

ssh_authorized_keys:
%{ for ssh_public_key in ssh_public_keys ~}
  - ${ssh_public_key}
%{ endfor ~}
@@ -0,0 +1,41 @@
variable "zone" {
  type = string
}

variable "prefix" {}

variable "machines" {
  type = map(object({
    node_type = string
    size      = string
    image     = string
  }))
}

variable "ssh_public_keys" {
  type = list(string)
}

variable "ssh_whitelist" {
  type = list(string)
}

variable "api_server_whitelist" {
  type = list(string)
}

variable "nodeport_whitelist" {
  type = list(string)
}

variable "ingress_whitelist" {
  type = list(string)
}

variable "private_network_cidr" {
  default = "10.0.0.0/16"
}

variable "private_subnet_cidr" {
  default = "10.0.10.0/24"
}
@@ -0,0 +1,9 @@
terraform {
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = "1.31.1"
    }
  }
  required_version = ">= 0.14"
}
contrib/terraform/hetzner/output.tf (new file, 7 lines)
@@ -0,0 +1,7 @@
output "master_ips" {
  value = module.kubernetes.master_ip_addresses
}

output "worker_ips" {
  value = module.kubernetes.worker_ip_addresses
}
contrib/terraform/hetzner/templates/inventory.tpl (new file, 16 lines)
@@ -0,0 +1,16 @@
[all]
${connection_strings_master}
${connection_strings_worker}

[kube-master]
${list_master}

[etcd]
${list_master}

[kube-node]
${list_worker}

[k8s-cluster:children]
kube-master
kube-node
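The `${...}` placeholders in this template are filled in by the root module when it renders the inventory. The root `main.tf` is not part of this excerpt, so the following is only a sketch of how the rendering could be wired up with the `template` and `null` providers that the root `versions.tf` below requires; the resource names, the reuse of the connection-string locals sketched earlier, and the echo-based file write are all illustrative:

```hcl
# Illustrative only: render inventory.tpl and write it to var.inventory_file.
data "template_file" "inventory" {
  template = file("${path.module}/templates/inventory.tpl")

  vars = {
    connection_strings_master = join("\n", local.master_connection_strings)
    connection_strings_worker = join("\n", local.worker_connection_strings)
    list_master               = join("\n", keys(module.kubernetes.master_ip_addresses))
    list_worker               = join("\n", keys(module.kubernetes.worker_ip_addresses))
  }
}

resource "null_resource" "inventory" {
  provisioner "local-exec" {
    command = "echo \"${data.template_file.inventory.rendered}\" > ${var.inventory_file}"
  }
}
```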
contrib/terraform/hetzner/variables.tf (new file, 46 lines)
@@ -0,0 +1,46 @@
variable "zone" {
  description = "The zone where to run the cluster"
}

variable "prefix" {
  description = "Prefix for resource names"
  default     = "default"
}

variable "machines" {
  description = "Cluster machines"
  type = map(object({
    node_type = string
    size      = string
    image     = string
  }))
}

variable "ssh_public_keys" {
  description = "Public SSH key which are injected into the VMs."
  type        = list(string)
}

variable "ssh_whitelist" {
  description = "List of IP ranges (CIDR) to whitelist for ssh"
  type        = list(string)
}

variable "api_server_whitelist" {
  description = "List of IP ranges (CIDR) to whitelist for kubernetes api server"
  type        = list(string)
}

variable "nodeport_whitelist" {
  description = "List of IP ranges (CIDR) to whitelist for kubernetes nodeports"
  type        = list(string)
}

variable "ingress_whitelist" {
  description = "List of IP ranges (CIDR) to whitelist for HTTP"
  type        = list(string)
}

variable "inventory_file" {
  description = "Where to store the generated inventory file"
}
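Putting these variables together, a minimal `cluster.tfvars` for this provisioner could look like the following; every concrete value (zone, server type, image, key, CIDRs) is a placeholder for illustration, not a default from this diff:

```hcl
prefix         = "k8s"
zone           = "hel1" # example Hetzner location
inventory_file = "inventory.ini"

ssh_public_keys = [
  "ssh-ed25519 AAAA... user@example", # placeholder key
]

machines = {
  "master-0" = { node_type = "master", size = "cx21", image = "ubuntu-20.04" }
  "worker-0" = { node_type = "worker", size = "cx21", image = "ubuntu-20.04" }
}

# Restrict SSH and the API server; keep HTTP(S) and NodePorts open in this example.
ssh_whitelist        = ["203.0.113.0/24"]
api_server_whitelist = ["203.0.113.0/24"]
nodeport_whitelist   = ["0.0.0.0/0"]
ingress_whitelist    = ["0.0.0.0/0"]
```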
contrib/terraform/hetzner/versions.tf (new file, 15 lines)
@@ -0,0 +1,15 @@
terraform {
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = "1.31.1"
    }
    null = {
      source = "hashicorp/null"
    }
    template = {
      source = "hashicorp/template"
    }
  }
  required_version = ">= 0.14"
}
@@ -251,6 +251,7 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
 |`dns_nameservers`| An array of DNS name server names to be used by hosts in the internal subnet. |
 |`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
 |`k8s_master_fips` | A list of floating IPs that you have already pre-allocated; they will be attached to master nodes instead of creating new random floating IPs. |
+|`bastion_fips` | A list of floating IPs that you have already pre-allocated; they will be attached to bastion node instead of creating new random floating IPs. |
 |`external_net` | UUID of the external network that will be routed to |
 |`flavor_k8s_master`,`flavor_k8s_node`,`flavor_etcd`, `flavor_bastion`,`flavor_gfs_node` | Flavor depends on your openstack installation, you can get available flavor IDs through `openstack flavor list` |
 |`image`,`image_gfs` | Name of the image to use in provisioning the compute resources. Should already be loaded into glance. |
@@ -263,25 +264,30 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
 |`number_of_bastions` | Number of bastion hosts to create. Scripts assume this is really just zero or one |
 |`number_of_gfs_nodes_no_floating_ip` | Number of gluster servers to provision. |
 | `gfs_volume_size_in_gb` | Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks |
-|`supplementary_master_groups` | To add ansible groups to the masters, such as `kube-node` for tainting them as nodes, empty by default. |
-|`supplementary_node_groups` | To add ansible groups to the nodes, such as `kube-ingress` for running ingress controller pods, empty by default. |
+|`supplementary_master_groups` | To add ansible groups to the masters, such as `kube_node` for tainting them as nodes, empty by default. |
+|`supplementary_node_groups` | To add ansible groups to the nodes, such as `kube_ingress` for running ingress controller pods, empty by default. |
 |`bastion_allowed_remote_ips` | List of CIDR allowed to initiate a SSH connection, `["0.0.0.0/0"]` by default |
 |`master_allowed_remote_ips` | List of CIDR blocks allowed to initiate an API connection, `["0.0.0.0/0"]` by default |
 |`k8s_allowed_remote_ips` | List of CIDR allowed to initiate a SSH connection, empty by default |
 |`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |
+|`master_allowed_ports` | List of ports to open on master nodes, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "0.0.0.0/0"}]`, empty by default |
 |`wait_for_floatingip` | Let Terraform poll the instance until the floating IP has been associated, `false` by default. |
 |`node_root_volume_size_in_gb` | Size of the root volume for nodes, 0 to use ephemeral storage |
 |`master_root_volume_size_in_gb` | Size of the root volume for masters, 0 to use ephemeral storage |
+|`master_volume_type` | Volume type of the root volume for control_plane, 'Default' by default |
+|`node_volume_type` | Volume type of the root volume for nodes, 'Default' by default |
 |`gfs_root_volume_size_in_gb` | Size of the root volume for gluster, 0 to use ephemeral storage |
 |`etcd_root_volume_size_in_gb` | Size of the root volume for etcd nodes, 0 to use ephemeral storage |
 |`bastion_root_volume_size_in_gb` | Size of the root volume for bastions, 0 to use ephemeral storage |
-|`use_server_group` | Create and use openstack nova servergroups, default: false |
+|`master_server_group_policy` | Enable and use openstack nova servergroups for masters with set policy, default: "" (disabled) |
+|`node_server_group_policy` | Enable and use openstack nova servergroups for nodes with set policy, default: "" (disabled) |
+|`etcd_server_group_policy` | Enable and use openstack nova servergroups for etcd with set policy, default: "" (disabled) |
 |`use_access_ip` | If 1, nodes with floating IPs will transmit internal cluster traffic via floating IPs; if 0 private IPs will be used instead. Default value is 1. |
 |`k8s_nodes` | Map containing worker node definition, see explanation below |
 
 ##### k8s_nodes
 
-Allows a custom defintion of worker nodes giving the operator full control over individual node flavor and
+Allows a custom definition of worker nodes giving the operator full control over individual node flavor and
 availability zone placement. To enable the use of this mode set the `number_of_k8s_nodes` and
 `number_of_k8s_nodes_no_floating_ip` variables to 0. Then define your desired worker node configuration
 using the `k8s_nodes` variable.
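As an aside, a sketch of what that mode could look like in `cluster.tfvars`; the attribute names follow the `az`/`flavor`/`floating_ip` pattern of the OpenStack `k8s_nodes` map, but all values here are illustrative placeholders and should be verified against the full README, which is not shown in this excerpt:

```hcl
number_of_k8s_nodes                = 0
number_of_k8s_nodes_no_floating_ip = 0

# Illustrative node map: one worker per entry, each with its own flavor and AZ.
k8s_nodes = {
  "node-a" = {
    "az"          = "nova"              # example availability zone
    "flavor"      = "example-flavor-id" # pick from `openstack flavor list`
    "floating_ip" = true
  }
  "node-b" = {
    "az"          = "alt-az"
    "flavor"      = "example-flavor-id"
    "floating_ip" = false
  }
}
```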
@@ -420,7 +426,7 @@ terraform apply -var-file=cluster.tfvars ../../contrib/terraform/openstack
 ```
 
 if you chose to create a bastion host, this script will create
-`contrib/terraform/openstack/k8s-cluster.yml` with an ssh command for Ansible to
+`contrib/terraform/openstack/k8s_cluster.yml` with an ssh command for Ansible to
 be able to access your machines tunneling through the bastion's IP address. If
 you want to manually handle the ssh tunneling to these machines, please delete
 or move that file. If you want to use this, just leave it there, as ansible will
@@ -545,7 +551,7 @@ bin_dir: /opt/bin
 cloud_provider: openstack
 ```
 
-Edit `inventory/$CLUSTER/group_vars/k8s-cluster/k8s-cluster.yml`:
+Edit `inventory/$CLUSTER/group_vars/k8s_cluster/k8s_cluster.yml`:
 
 - Set variable **kube_network_plugin** to your desired networking plugin.
   - **flannel** works out-of-the-box
Some files were not shown because too many files have changed in this diff.