Compare commits
2749 Commits
.github/ISSUE_TEMPLATE.md (5 changes, vendored)

@@ -1,8 +1,7 @@
 # Bug reporting
 
-A good bug report has some very specific qualities, so please read over our short document on
-[reporting bugs][report_bugs] before you submit your bug report.
+A good bug report has some very specific qualities, so please read over our short document on [reporting bugs][report_bugs] before submitting a bug report.
 
 To ask a question, go ahead and ignore this.
 
-[report_bugs]: ../Documentation/reporting_bugs.md
+[report_bugs]: https://github.com/coreos/etcd/blob/master/Documentation/reporting_bugs.md
.github/PULL_REQUEST_TEMPLATE.md (2 changes, vendored)

@@ -2,4 +2,4 @@
 
 Please read our [contribution workflow][contributing] before submitting a pull request.
 
-[contributing]: ../CONTRIBUTING.md#contribution-flow
+[contributing]: https://github.com/coreos/etcd/blob/master/CONTRIBUTING.md#contribution-flow
.gitignore (4 changes, vendored)

@@ -1,5 +1,7 @@
+/agent-*
 /coverage
 /gopath
+/gopath.proto
 /go-bindata
 /machine*
 /bin
@@ -10,3 +12,5 @@
 /hack/insta-discovery/.env
 *.test
 tools/functional-tester/docker/bin
+hack/tls-setup/certs
+.idea
.semaphore.sh (16 changes, new executable file)

@@ -0,0 +1,16 @@
+#!/usr/bin/env bash
+
+TEST_SUFFIX=$(date +%s | base64 | head -c 15)
+
+TEST_OPTS="RELEASE_TEST=y INTEGRATION=y PASSES='build unit release integration_e2e functional' MANUAL_VER=v3.2.9"
+if [ "$TEST_ARCH" == "386" ]; then
+  TEST_OPTS="GOARCH=386 PASSES='build unit integration_e2e'"
+fi
+
+docker run \
+  --rm \
+  --volume=`pwd`:/go/src/github.com/coreos/etcd \
+  gcr.io/etcd-development/etcd-test:go1.8.5 \
+  /bin/bash -c "${TEST_OPTS} ./test 2>&1 | tee test-${TEST_SUFFIX}.log"
+
+! egrep "(--- FAIL:|panic: test timed out|appears to have leaked|Too many goroutines)" -B50 -A10 test-${TEST_SUFFIX}.log
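The last line of .semaphore.sh is what actually decides the CI result: because the dockerized `./test` run is piped through `tee`, the test command's own exit status is discarded, and the inverted `egrep` fails the script whenever a failure signature shows up in the log. A minimal sketch of the same idiom, with a hypothetical log name:

```bash
#!/usr/bin/env bash
# Sketch of the failure-gating idiom above; the log name is a placeholder.
# `tee` keeps a full copy of the output for grepping, but the pipeline
# swallows the test command's own exit status.
./test 2>&1 | tee test-output.log

# egrep exits 0 when a failure signature is found; the leading `!` inverts
# that, so the script exits non-zero (failing the CI job) exactly when the
# log contains a test failure or timeout.
! egrep "(--- FAIL:|panic: test timed out)" -B50 -A10 test-output.log
```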
.travis.yml (98 changes)

@@ -1,60 +1,88 @@
 dist: trusty
 language: go
 go_import_path: github.com/coreos/etcd
-sudo: false
+
+sudo: required
+
+services: docker
 
 go:
-- 1.5
-- 1.6
-- tip
+- 1.8.5
+- tip
 
 notifications:
   on_success: never
   on_failure: never
 
 env:
-  global:
-  - GO15VENDOREXPERIMENT=1
   matrix:
-  - TARGET=amd64
-  - TARGET=arm64
-  - TARGET=arm
-  - TARGET=ppc64le
+  - TARGET=amd64
+  - TARGET=amd64-go-tip
+  - TARGET=darwin-amd64
+  - TARGET=windows-amd64
+  - TARGET=arm64
+  - TARGET=arm
+  - TARGET=386
+  - TARGET=ppc64le
 
 matrix:
   fast_finish: true
   allow_failures:
-  - go: tip
+  - go: tip
+    env: TARGET=amd64-go-tip
   exclude:
-  - go: 1.5
-    env: TARGET=arm
-  - go: 1.5
-    env: TARGET=ppc64le
-  - go: 1.6
-    env: TARGET=arm64
+  - go: 1.8.5
+    env: TARGET=amd64-go-tip
+  - go: tip
+    env: TARGET=amd64
+  - go: tip
+    env: TARGET=darwin-amd64
+  - go: tip
+    env: TARGET=windows-amd64
+  - go: tip
+    env: TARGET=arm
+  - go: tip
+    env: TARGET=arm64
+  - go: tip
+    env: TARGET=386
+  - go: tip
+    env: TARGET=ppc64le
 
-addons:
-  apt:
-    packages:
-    - libpcap-dev
-    - libaspell-dev
-    - libhunspell-dev
-
 before_install:
-- go get -v github.com/chzchzchz/goword
-- go get -v honnef.co/go/simple/cmd/gosimple
-- go get -v honnef.co/go/unused/cmd/unused
+- docker pull gcr.io/etcd-development/etcd-test:go1.8.5
 
 # disable godep restore override
 install:
-- pushd cmd/ && go get -t -v ./... && popd
+- pushd cmd/etcd && go get -t -v ./... && popd
 
 script:
 - >
-  if [ "${TARGET}" == "amd64" ]; then
-    GOARCH="${TARGET}" ./test;
-  else
-    GOARCH="${TARGET}" ./build;
-  fi
+  case "${TARGET}" in
+  amd64)
+    docker run --rm \
+      --volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.8.5 \
+      /bin/bash -c "GOARCH=amd64 ./test"
+    ;;
+  amd64-go-tip)
+    GOARCH=amd64 ./test
+    ;;
+  darwin-amd64)
+    docker run --rm \
+      --volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.8.5 \
+      /bin/bash -c "GO_BUILD_FLAGS='-a -v' GOOS=darwin GOARCH=amd64 ./build"
+    ;;
+  windows-amd64)
+    docker run --rm \
+      --volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.8.5 \
+      /bin/bash -c "GO_BUILD_FLAGS='-a -v' GOOS=windows GOARCH=amd64 ./build"
+    ;;
+  386)
+    docker run --rm \
+      --volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.8.5 \
+      /bin/bash -c "GOARCH=386 PASSES='build unit' ./test"
+    ;;
+  *)
+    # test building out of gopath
+    docker run --rm \
+      --volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.8.5 \
+      /bin/bash -c "GO_BUILD_FLAGS='-a -v' GOARCH='${TARGET}' ./build"
+    ;;
+  esac
CONTRIBUTING.md

@@ -1,6 +1,6 @@
 # How to contribute
 
-etcd is Apache 2.0 licensed and accepts contributions via GitHub pull requests. This document outlines some of the conventions on commit message formatting, contact points for developers and other resources to make getting your contribution into etcd easier.
+etcd is Apache 2.0 licensed and accepts contributions via GitHub pull requests. This document outlines some of the conventions on commit message formatting, contact points for developers, and other resources to help get contributions into etcd.
 
 # Email and chat
 
@@ -14,24 +14,20 @@ etcd is Apache 2.0 licensed and accepts contributions via GitHub pull requests.
 
 ## Reporting bugs and creating issues
 
-Reporting bugs is one of the best ways to contribute. However, a good bug report
-has some very specific qualities, so please read over our short document on
-[reporting bugs](https://github.com/coreos/etcd/blob/master/Documentation/reporting_bugs.md)
-before you submit your bug report. This document might contain links known
-issues, another good reason to take a look there, before reporting your bug.
+Reporting bugs is one of the best ways to contribute. However, a good bug report has some very specific qualities, so please read over our short document on [reporting bugs](https://github.com/coreos/etcd/blob/master/Documentation/reporting_bugs.md) before submitting a bug report. This document might contain links to known issues, another good reason to take a look there before reporting a bug.
 
 ## Contribution flow
 
 This is a rough outline of what a contributor's workflow looks like:
 
-- Create a topic branch from where you want to base your work. This is usually master.
+- Create a topic branch from where to base the contribution. This is usually master.
 - Make commits of logical units.
-- Make sure your commit messages are in the proper format (see below).
-- Push your changes to a topic branch in your fork of the repository.
+- Make sure commit messages are in the proper format (see below).
+- Push changes in a topic branch to a personal fork of the repository.
 - Submit a pull request to coreos/etcd.
-- Your PR must receive a LGTM from two maintainers found in the MAINTAINERS file.
+- The PR must receive a LGTM from two maintainers found in the MAINTAINERS file.
 
-Thanks for your contributions!
+Thanks for contributing!
 
 ### Code style
 
@@ -48,8 +44,7 @@ the body of the commit should describe the why.
 ```
 scripts: add the test-cluster command
 
-this uses tmux to setup a test cluster that you can easily kill and
-start for debugging.
+this uses tmux to setup a test cluster that can easily be killed and started for debugging.
 
 Fixes #38
 ```
@@ -64,7 +59,4 @@ The format can be described more formally as follows:
 <footer>
 ```
 
-The first line is the subject and should be no longer than 70 characters, the
-second line is always blank, and other lines should be wrapped at 80 characters.
-This allows the message to be easier to read on GitHub as well as in various
-git tools.
+The first line is the subject and should be no longer than 70 characters, the second line is always blank, and other lines should be wrapped at 80 characters. This allows the message to be easier to read on GitHub as well as in various git tools.
Dockerfile-release

@@ -1,8 +1,15 @@
 FROM alpine:latest
 
-ADD bin/etcd /usr/local/bin/
-ADD bin/etcdctl /usr/local/bin/
+ADD etcd /usr/local/bin/
+ADD etcdctl /usr/local/bin/
 RUN mkdir -p /var/etcd/
+RUN mkdir -p /var/lib/etcd/
+
+# Alpine Linux doesn't use pam, which means that there is no /etc/nsswitch.conf,
+# but Golang relies on /etc/nsswitch.conf to check the order of DNS resolving
+# (see https://github.com/golang/go/commit/9dee7771f561cf6aee081c0af6658cc81fac3918)
+# To fix this we just create /etc/nsswitch.conf and add the following line:
+RUN echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf
+
 EXPOSE 2379 2380
 
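The nsswitch.conf lines added above address a real Go-on-Alpine gotcha: Go's pure-Go resolver reads /etc/nsswitch.conf to decide whether to consult /etc/hosts before DNS, and Alpine ships without that file. A quick hedged check against a built image; the `etcd:local` tag is a placeholder, and the build assumes the etcd and etcdctl binaries are already in the build context:

```bash
# Build the release image; assumes ./etcd and ./etcdctl exist in the context.
docker build -f Dockerfile-release -t etcd:local .

# Print the resolver order Go will see inside the container; "files" listed
# before "dns" means /etc/hosts entries are checked before DNS lookups.
docker run --rm etcd:local cat /etc/nsswitch.conf
```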
Dockerfile-release.arm64 (11 changes, new file)

@@ -0,0 +1,11 @@
+FROM aarch64/ubuntu:16.04
+
+ADD etcd /usr/local/bin/
+ADD etcdctl /usr/local/bin/
+ADD var/etcd /var/etcd
+ADD var/lib/etcd /var/lib/etcd
+
+EXPOSE 2379 2380
+
+# Define default command.
+CMD ["/usr/local/bin/etcd"]
11
Dockerfile-release.ppc64le
Normal file
@ -0,0 +1,11 @@
|
||||
FROM ppc64le/ubuntu:16.04
|
||||
|
||||
ADD etcd /usr/local/bin/
|
||||
ADD etcdctl /usr/local/bin/
|
||||
ADD var/etcd /var/etcd
|
||||
ADD var/lib/etcd /var/lib/etcd
|
||||
|
||||
EXPOSE 2379 2380
|
||||
|
||||
# Define default command.
|
||||
CMD ["/usr/local/bin/etcd"]
|
57
Dockerfile-test
Normal file
@ -0,0 +1,57 @@
|
||||
FROM ubuntu:16.10
|
||||
|
||||
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
|
||||
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
|
||||
|
||||
RUN apt-get -y update \
|
||||
&& apt-get -y install \
|
||||
build-essential \
|
||||
gcc \
|
||||
apt-utils \
|
||||
pkg-config \
|
||||
software-properties-common \
|
||||
apt-transport-https \
|
||||
libssl-dev \
|
||||
sudo \
|
||||
bash \
|
||||
curl \
|
||||
wget \
|
||||
tar \
|
||||
git \
|
||||
netcat \
|
||||
libaspell-dev \
|
||||
libhunspell-dev \
|
||||
hunspell-en-us \
|
||||
aspell-en \
|
||||
shellcheck \
|
||||
&& apt-get -y update \
|
||||
&& apt-get -y upgrade \
|
||||
&& apt-get -y autoremove \
|
||||
&& apt-get -y autoclean
|
||||
|
||||
ENV GOROOT /usr/local/go
|
||||
ENV GOPATH /go
|
||||
ENV PATH ${GOPATH}/bin:${GOROOT}/bin:${PATH}
|
||||
ENV GO_VERSION REPLACE_ME_GO_VERSION
|
||||
ENV GO_DOWNLOAD_URL https://storage.googleapis.com/golang
|
||||
RUN rm -rf ${GOROOT} \
|
||||
&& curl -s ${GO_DOWNLOAD_URL}/go${GO_VERSION}.linux-amd64.tar.gz | tar -v -C /usr/local/ -xz \
|
||||
&& mkdir -p ${GOPATH}/src ${GOPATH}/bin \
|
||||
&& go version
|
||||
|
||||
RUN mkdir -p ${GOPATH}/src/github.com/coreos/etcd
|
||||
WORKDIR ${GOPATH}/src/github.com/coreos/etcd
|
||||
|
||||
ADD ./scripts/install-marker.sh /tmp/install-marker.sh
|
||||
|
||||
RUN go get -v -u -tags spell github.com/chzchzchz/goword \
|
||||
&& go get -v -u github.com/coreos/license-bill-of-materials \
|
||||
&& go get -v -u honnef.co/go/tools/cmd/gosimple \
|
||||
&& go get -v -u honnef.co/go/tools/cmd/unused \
|
||||
&& go get -v -u honnef.co/go/tools/cmd/staticcheck \
|
||||
&& go get -v -u github.com/wadey/gocovmerge \
|
||||
&& go get -v -u github.com/gordonklaus/ineffassign \
|
||||
&& /tmp/install-marker.sh amd64 \
|
||||
&& rm -f /tmp/install-marker.sh \
|
||||
&& curl -s https://codecov.io/bash >/codecov \
|
||||
&& chmod 700 /codecov
|
1
Documentation/README.md
Symbolic link
@ -0,0 +1 @@
|
||||
docs.md
|
@ -14,7 +14,7 @@ GCE n1-highcpu-2 machine type
|
||||
|
||||
## Testing
|
||||
|
||||
Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send requests to each etcd member. Check the [benchmark hacking guide][hack-benchmark] for detailed instructions.
|
||||
Bootstrap another machine and use the [hey HTTP benchmark tool][hey] to send requests to each etcd member. Check the [benchmark hacking guide][hack-benchmark] for detailed instructions.
|
||||
|
||||
## Performance
|
||||
|
||||
@ -48,5 +48,5 @@ Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send r
|
||||
| 256 | 64 | all servers | 1033 | 121.5 |
|
||||
| 256 | 256 | all servers | 3061 | 119.3 |
|
||||
|
||||
[boom]: https://github.com/rakyll/boom
|
||||
[hack-benchmark]: /hack/benchmark/
|
||||
[hey]: https://github.com/rakyll/hey
|
||||
[hack-benchmark]: https://github.com/coreos/etcd/tree/master/hack/benchmark
|
||||
|
@ -24,7 +24,7 @@ Go OS/Arch: linux/amd64
|
||||
|
||||
## Testing
|
||||
|
||||
Bootstrap another machine, outside of the etcd cluster, and run the [`boom` HTTP benchmark tool](https://github.com/rakyll/boom) with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions](../../hack/benchmark/) for the patch and the steps to reproduce our procedures.
|
||||
Bootstrap another machine, outside of the etcd cluster, and run the [`hey` HTTP benchmark tool](https://github.com/rakyll/hey) with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions](../../hack/benchmark/) for the patch and the steps to reproduce our procedures.
|
||||
|
||||
The performance is calculated from the results of 100 benchmark rounds.
|
||||
|
||||
@ -66,4 +66,4 @@ The performance is calulated through results of 100 benchmark rounds.
|
||||
|
||||
- Write QPS to cluster leaders seems to be increased by a small margin. This is because the main loop and entry apply loops were decoupled in the etcd raft logic, eliminating several blocks between them.
|
||||
|
||||
- Write QPS to all members seems to be increased by a significant margin, because followers now receive the latest commit index sooner, and commit proposals more quickly.
|
||||
- Write QPS to all members seems to be increased by a significant margin, because followers now receive the latest commit index sooner, and commit proposals more quickly.
|
||||
|
@ -24,7 +24,7 @@ Also, we use 3 etcd 2.1.0 alpha-stage members to form cluster to get base perfor
|
||||
|
||||
## Testing
|
||||
|
||||
Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send requests to each etcd member. Check the [benchmark hacking guide][hack-benchmark] for detailed instructions.
|
||||
Bootstrap another machine and use the [hey HTTP benchmark tool][hey] to send requests to each etcd member. Check the [benchmark hacking guide][hack-benchmark] for detailed instructions.
|
||||
|
||||
## Performance
|
||||
|
||||
@ -66,7 +66,7 @@ Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send r
|
||||
|
||||
- write QPS to all servers is increased by 30~80% because followers can receive the latest commit index earlier and commit proposals faster.
|
||||
|
||||
[boom]: https://github.com/rakyll/boom
|
||||
[hey]: https://github.com/rakyll/hey
|
||||
[c7146bd5]: https://github.com/coreos/etcd/commits/c7146bd5f2c73716091262edc638401bb8229144
|
||||
[etcd-2.1-benchmark]: etcd-2-1-0-alpha-benchmarks.md
|
||||
[hack-benchmark]: /hack/benchmark/
|
||||
[hack-benchmark]: ../../hack/benchmark/
|
||||
|
@ -39,7 +39,7 @@ The length of key name is always 64 bytes, which is a reasonable length of avera
|
||||
## Data Size Threshold
|
||||
|
||||
- When etcd reaches its data size threshold, it may easily trigger leader elections and drop some proposals.
|
||||
- At most cases, etcd cluster should work smoothly if it doesn't hit the threshold. If it doesn't work well due to insufficient resources, you need to decrease its data size.
|
||||
- For most cases, the etcd cluster should work smoothly if it doesn't hit the threshold. If it doesn't work well due to insufficient resources, decrease its data size.
|
||||
|
||||
| value bytes | key number limitation | suggested data size threshold(MB) | consumed RSS(MB) |
|
||||
|-------------|-----------------------|-----------------------------------|------------------|
|
||||
|
@ -39,4 +39,4 @@ The performance is nearly the same as the one with empty server handler.
|
||||
The performance with the empty server handler is not affected by a single put, so the performance degradation should be caused by the storage package.
|
||||
|
||||
[etcd-v3-benchmark]: /tools/benchmark/
|
||||
[etcd-v3-benchmark]: ../../tools/benchmark/
|
||||
|
168
Documentation/dev-guide/api_concurrency_reference_v3.md
Normal file
@ -0,0 +1,168 @@
|
||||
### etcd concurrency API Reference
|
||||
|
||||
|
||||
This is generated documentation. Please read the proto files for more detail.
|
||||
|
||||
|
||||
##### service `Lock` (etcdserver/api/v3lock/v3lockpb/v3lock.proto)
|
||||
|
||||
The lock service exposes client-side locking facilities as a gRPC interface.
|
||||
|
||||
| Method | Request Type | Response Type | Description |
|
||||
| ------ | ------------ | ------------- | ----------- |
|
||||
| Lock | LockRequest | LockResponse | Lock acquires a distributed shared lock on a given named lock. On success, it will return a unique key that exists so long as the lock is held by the caller. This key can be used in conjunction with transactions to safely ensure updates to etcd only occur while holding lock ownership. The lock is held until Unlock is called on the key or the lease associated with the owner expires. |
|
||||
| Unlock | UnlockRequest | UnlockResponse | Unlock takes a key returned by Lock and releases the hold on lock. The next Lock caller waiting for the lock will then be woken up and given ownership of the lock. |
|
||||
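To make the Lock/Unlock flow concrete, here is a minimal Go sketch that drives this service through its generated gRPC client. It is illustrative only: the endpoint, the 30-second lease TTL, and the lock name `my-lock` are assumptions, not part of the reference above.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/etcdserver/api/v3lock/v3lockpb"
)

func main() {
	// Assumes an etcd member is listening on localhost:2379.
	cli, err := clientv3.NewFromURL("http://localhost:2379")
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// A lease scopes lock ownership; if it expires, the lock is released.
	lease, err := cli.Grant(context.TODO(), 30)
	if err != nil {
		log.Fatal(err)
	}

	lc := v3lockpb.NewLockClient(cli.ActiveConnection())
	lresp, err := lc.Lock(context.TODO(), &v3lockpb.LockRequest{
		Name:  []byte("my-lock"),
		Lease: int64(lease.ID),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("holding lock; ownership key: %q\n", lresp.Key)

	// Release the lock by passing back the key returned by Lock.
	if _, err := lc.Unlock(context.TODO(), &v3lockpb.UnlockRequest{Key: lresp.Key}); err != nil {
		log.Fatal(err)
	}
}
```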
|
||||
|
||||
|
||||
##### message `LockRequest` (etcdserver/api/v3lock/v3lockpb/v3lock.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| name | name is the identifier for the distributed shared lock to be acquired. | bytes |
|
||||
| lease | lease is the ID of the lease that will be attached to ownership of the lock. If the lease expires or is revoked and currently holds the lock, the lock is automatically released. Calls to Lock with the same lease will be treated as a single acquisition; locking twice with the same lease is a no-op. | int64 |
|
||||
|
||||
|
||||
|
||||
##### message `LockResponse` (etcdserver/api/v3lock/v3lockpb/v3lock.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| header | | etcdserverpb.ResponseHeader |
|
||||
| key | key is a key that will exist on etcd for the duration that the Lock caller owns the lock. Users should not modify this key or the lock may exhibit undefined behavior. | bytes |
|
||||
|
||||
|
||||
|
||||
##### message `UnlockRequest` (etcdserver/api/v3lock/v3lockpb/v3lock.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| key | key is the lock ownership key granted by Lock. | bytes |
|
||||
|
||||
|
||||
|
||||
##### message `UnlockResponse` (etcdserver/api/v3lock/v3lockpb/v3lock.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| header | | etcdserverpb.ResponseHeader |
|
||||
|
||||
|
||||
|
||||
##### service `Election` (etcdserver/api/v3election/v3electionpb/v3election.proto)
|
||||
|
||||
The election service exposes client-side election facilities as a gRPC interface.
|
||||
|
||||
| Method | Request Type | Response Type | Description |
|
||||
| ------ | ------------ | ------------- | ----------- |
|
||||
| Campaign | CampaignRequest | CampaignResponse | Campaign waits to acquire leadership in an election, returning a LeaderKey representing the leadership if successful. The LeaderKey can then be used to issue new values on the election, transactionally guard API requests on leadership still being held, and resign from the election. |
|
||||
| Proclaim | ProclaimRequest | ProclaimResponse | Proclaim updates the leader's posted value with a new value. |
|
||||
| Leader | LeaderRequest | LeaderResponse | Leader returns the current election proclamation, if any. |
|
||||
| Observe | LeaderRequest | LeaderResponse | Observe streams election proclamations in-order as made by the election's elected leaders. |
|
||||
| Resign | ResignRequest | ResignResponse | Resign releases election leadership so other campaigners may acquire leadership on the election. |
|
||||
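As with the lock service, here is a hedged Go sketch of the Campaign/Proclaim/Resign flow through the generated Election client; the election name, candidate values, and lease TTL below are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/etcdserver/api/v3election/v3electionpb"
)

func main() {
	// Assumes an etcd member is listening on localhost:2379.
	cli, err := clientv3.NewFromURL("http://localhost:2379")
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Leadership is tied to a lease; if it expires, leadership moves on.
	lease, err := cli.Grant(context.TODO(), 30)
	if err != nil {
		log.Fatal(err)
	}

	ec := v3electionpb.NewElectionClient(cli.ActiveConnection())
	cresp, err := ec.Campaign(context.TODO(), &v3electionpb.CampaignRequest{
		Name:  []byte("my-election"),
		Lease: int64(lease.ID),
		Value: []byte("candidate-1"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("elected; leadership key: %q\n", cresp.Leader.Key)

	// Proclaim a new value, then step down so others may campaign.
	if _, err := ec.Proclaim(context.TODO(), &v3electionpb.ProclaimRequest{
		Leader: cresp.Leader, Value: []byte("candidate-1-v2"),
	}); err != nil {
		log.Fatal(err)
	}
	if _, err := ec.Resign(context.TODO(), &v3electionpb.ResignRequest{Leader: cresp.Leader}); err != nil {
		log.Fatal(err)
	}
}
```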
|
||||
|
||||
|
||||
##### message `CampaignRequest` (etcdserver/api/v3election/v3electionpb/v3election.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| name | name is the election's identifier for the campaign. | bytes |
|
||||
| lease | lease is the ID of the lease attached to leadership of the election. If the lease expires or is revoked before resigning leadership, then the leadership is transferred to the next campaigner, if any. | int64 |
|
||||
| value | value is the initial proclaimed value set when the campaigner wins the election. | bytes |
|
||||
|
||||
|
||||
|
||||
##### message `CampaignResponse` (etcdserver/api/v3election/v3electionpb/v3election.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| header | | etcdserverpb.ResponseHeader |
|
||||
| leader | leader describes the resources used for holding leadership of the election. | LeaderKey |
|
||||
|
||||
|
||||
|
||||
##### message `LeaderKey` (etcdserver/api/v3election/v3electionpb/v3election.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| name | name is the election identifier that corresponds to the leadership key. | bytes |
|
||||
| key | key is an opaque key representing the ownership of the election. If the key is deleted, then leadership is lost. | bytes |
|
||||
| rev | rev is the creation revision of the key. It can be used to test for ownership of an election during transactions by testing the key's creation revision matches rev. | int64 |
|
||||
| lease | lease is the lease ID of the election leader. | int64 |
|
||||
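Because `rev` is the creation revision of the leadership key, a client can transactionally guard writes on leadership still being held, as the Campaign description mentions. Here is a sketch with the Go client, assuming `cli` is a `*clientv3.Client` and `lk` came from an earlier Campaign response:

```go
import (
	"context"

	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/etcdserver/api/v3election/v3electionpb"
)

// putIfLeader writes key=val only while the caller still holds the election
// leadership described by lk: the compare fails once the leadership key is
// replaced or deleted, because its creation revision no longer matches rev.
func putIfLeader(cli *clientv3.Client, lk *v3electionpb.LeaderKey, key, val string) (bool, error) {
	resp, err := cli.Txn(context.TODO()).
		If(clientv3.Compare(clientv3.CreateRevision(string(lk.Key)), "=", lk.Rev)).
		Then(clientv3.OpPut(key, val)).
		Commit()
	if err != nil {
		return false, err
	}
	return resp.Succeeded, nil
}
```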
|
||||
|
||||
|
||||
##### message `LeaderRequest` (etcdserver/api/v3election/v3electionpb/v3election.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| name | name is the election identifier for the leadership information. | bytes |
|
||||
|
||||
|
||||
|
||||
##### message `LeaderResponse` (etcdserver/api/v3election/v3electionpb/v3election.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| header | | etcdserverpb.ResponseHeader |
|
||||
| kv | kv is the key-value pair representing the latest leader update. | mvccpb.KeyValue |
|
||||
|
||||
|
||||
|
||||
##### message `ProclaimRequest` (etcdserver/api/v3election/v3electionpb/v3election.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| leader | leader is the leadership hold on the election. | LeaderKey |
|
||||
| value | value is an update meant to overwrite the leader's current value. | bytes |
|
||||
|
||||
|
||||
|
||||
##### message `ProclaimResponse` (etcdserver/api/v3election/v3electionpb/v3election.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| header | | etcdserverpb.ResponseHeader |
|
||||
|
||||
|
||||
|
||||
##### message `ResignRequest` (etcdserver/api/v3election/v3electionpb/v3election.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| leader | leader is the leadership to relinquish by resignation. | LeaderKey |
|
||||
|
||||
|
||||
|
||||
##### message `ResignResponse` (etcdserver/api/v3election/v3electionpb/v3election.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| header | | etcdserverpb.ResponseHeader |
|
||||
|
||||
|
||||
|
||||
##### message `Event` (mvcc/mvccpb/kv.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| type | type is the kind of event. If type is a PUT, it indicates new data has been stored to the key. If type is a DELETE, it indicates the key was deleted. | EventType |
|
||||
| kv | kv holds the KeyValue for the event. A PUT event contains current kv pair. A PUT event with kv.Version=1 indicates the creation of a key. A DELETE/EXPIRE event contains the deleted key with its modification revision set to the revision of deletion. | KeyValue |
|
||||
| prev_kv | prev_kv holds the key-value pair before the event happens. | KeyValue |
|
||||
|
||||
|
||||
|
||||
##### message `KeyValue` (mvcc/mvccpb/kv.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| key | key is the key in bytes. An empty key is not allowed. | bytes |
|
||||
| create_revision | create_revision is the revision of last creation on this key. | int64 |
|
||||
| mod_revision | mod_revision is the revision of last modification on this key. | int64 |
|
||||
| version | version is the version of the key. A deletion resets the version to zero and any modification of the key increases its version. | int64 |
|
||||
| value | value is the value held by the key, in bytes. | bytes |
|
||||
| lease | lease is the ID of the lease attached to the key. When the attached lease expires, the key will be deleted. If lease is 0, then no lease is attached to the key. | int64 |
|
||||
|
||||
|
||||
|
@ -8,6 +8,8 @@ etcd v3 uses [gRPC][grpc] for its messaging protocol. The etcd project includes
|
||||
|
||||
The gateway accepts a [JSON mapping][json-mapping] for etcd's [protocol buffer][api-ref] message definitions. Note that `key` and `value` fields are defined as byte arrays and therefore must be base64 encoded in JSON.
|
||||
|
||||
Use `curl` to put and get a key:
|
||||
|
||||
```bash
|
||||
<<COMMENT
|
||||
https://www.base64encode.org/
|
||||
@ -17,21 +19,48 @@ COMMENT
|
||||
|
||||
curl -L http://localhost:2379/v3alpha/kv/put \
|
||||
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
|
||||
# {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"3"}}
|
||||
|
||||
curl -L http://localhost:2379/v3alpha/kv/range \
|
||||
-X POST -d '{"key": "Zm9v"}'
|
||||
# {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"3"},"kvs":[{"key":"Zm9v","create_revision":"2","mod_revision":"2","version":"1","value":"YmFy"}],"count":"1"}
|
||||
|
||||
# get all keys prefixed with "foo"
|
||||
curl -L http://localhost:2379/v3alpha/kv/range \
|
||||
-X POST -d '{"key": "Zm9v", "range_end": "Zm9w"}'
|
||||
# {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"3"},"kvs":[{"key":"Zm9v","create_revision":"2","mod_revision":"2","version":"1","value":"YmFy"}],"count":"1"}
|
||||
```
|
||||
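For reference, the base64 strings in these examples can be produced with any encoder; here is a minimal Go sketch using only the standard library:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// The gateway expects key/value bytes as base64 text in JSON.
	fmt.Println(base64.StdEncoding.EncodeToString([]byte("foo"))) // Zm9v
	fmt.Println(base64.StdEncoding.EncodeToString([]byte("bar"))) // YmFy
}
```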
|
||||
Use `curl` to watch a key:
|
||||
|
||||
```bash
|
||||
curl http://localhost:2379/v3alpha/watch \
|
||||
-X POST -d '{"create_request": {"key":"Zm9v"} }' &
|
||||
# {"result":{"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"1","raft_term":"2"},"created":true}}
|
||||
|
||||
curl -L http://localhost:2379/v3alpha/kv/put \
|
||||
-X POST -d '{"key": "Zm9v", "value": "YmFy"}' >/dev/null 2>&1
|
||||
# {"result":{"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"2"},"events":[{"kv":{"key":"Zm9v","create_revision":"2","mod_revision":"2","version":"1","value":"YmFy"}}]}}
|
||||
```
|
||||
|
||||
Use `curl` to issue a transaction:
|
||||
|
||||
```bash
|
||||
curl -L http://localhost:2379/v3alpha/kv/txn \
|
||||
-X POST \
|
||||
-d '{"compare":[{"target":"CREATE","key":"Zm9v","createRevision":"2"}],"success":[{"requestPut":{"key":"Zm9v","value":"YmFy"}}]}'
|
||||
# {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"3","raft_term":"2"},"succeeded":true,"responses":[{"response_put":{"header":{"revision":"3"}}}]}
|
||||
```
|
||||
|
||||
## Swagger
|
||||
|
||||
Generated [Swapper][swagger] API definitions can be found at [rpc.swagger.json][swagger-doc].
|
||||
Generated [Swagger][swagger] API definitions can be found at [rpc.swagger.json][swagger-doc].
|
||||
|
||||
[api-ref]: ./api_reference_v3.md
|
||||
[go-client]: https://github.com/coreos/etcd/tree/master/clientv3
|
||||
[etcdctl]: https://github.com/coreos/etcd/tree/master/etcdctl
|
||||
[grpc]: http://www.grpc.io/
|
||||
[grpc-gateway]: https://github.com/gengo/grpc-gateway
|
||||
[grpc-gateway]: https://github.com/grpc-ecosystem/grpc-gateway
|
||||
[json-mapping]: https://developers.google.com/protocol-buffers/docs/proto3#json
|
||||
[swagger]: http://swagger.io/
|
||||
[swagger-doc]: apispec/swagger/rpc.swagger.json
|
||||
|
@ -40,8 +40,6 @@ This is a generated documentation. Please read the proto files for more.
|
||||
|
||||
##### service `KV` (etcdserver/etcdserverpb/rpc.proto)
|
||||
|
||||
for grpc-gateway
|
||||
|
||||
| Method | Request Type | Response Type | Description |
|
||||
| ------ | ------------ | ------------- | ----------- |
|
||||
| Range | RangeRequest | RangeResponse | Range gets the keys in the range from the key-value store. |
|
||||
@ -59,6 +57,7 @@ for grpc-gateway
|
||||
| LeaseGrant | LeaseGrantRequest | LeaseGrantResponse | LeaseGrant creates a lease which expires if the server does not receive a keepAlive within a given time to live period. All keys attached to the lease will be expired and deleted if the lease expires. Each expired key generates a delete event in the event history. |
|
||||
| LeaseRevoke | LeaseRevokeRequest | LeaseRevokeResponse | LeaseRevoke revokes a lease. All keys attached to the lease will expire and be deleted. |
|
||||
| LeaseKeepAlive | LeaseKeepAliveRequest | LeaseKeepAliveResponse | LeaseKeepAlive keeps the lease alive by streaming keep alive requests from the client to the server and streaming keep alive responses from the server to the client. |
|
||||
| LeaseTimeToLive | LeaseTimeToLiveRequest | LeaseTimeToLiveResponse | LeaseTimeToLive retrieves lease information. |
|
||||
|
||||
|
||||
|
||||
@ -93,8 +92,6 @@ for grpc-gateway
|
||||
|
||||
##### message `AlarmRequest` (etcdserver/etcdserverpb/rpc.proto)
|
||||
|
||||
default, used to query if any alarm is active space quota is exhausted
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| action | action is the kind of alarm request to issue. The action may GET alarm statuses, ACTIVATE an alarm, or DEACTIVATE a raised alarm. | AlarmAction |
|
||||
@ -426,7 +423,8 @@ Empty field.
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| key | key is the first key to delete in the range. | bytes |
|
||||
| range_end | range_end is the key following the last key to delete for the range [key, range_end). If range_end is not given, the range is defined to contain only the key argument. If range_end is '\0', the range is all keys greater than or equal to the key argument. | bytes |
|
||||
| range_end | range_end is the key following the last key to delete for the range [key, range_end). If range_end is not given, the range is defined to contain only the key argument. If range_end is one bit larger than the given key, then the range is all the keys with the prefix (the given key). If range_end is '\0', the range is all keys greater than or equal to the key argument. | bytes |
|
||||
| prev_kv | If prev_kv is set, etcd gets the previous key-value pairs before deleting it. The previous key-value pairs will be returned in the delete response. | bool |
|
||||
|
||||
|
||||
|
||||
@ -436,6 +434,7 @@ Empty field.
|
||||
| ----- | ----------- | ---- |
|
||||
| header | | ResponseHeader |
|
||||
| deleted | deleted is the number of keys deleted by the delete range request. | int64 |
|
||||
| prev_kvs | if prev_kv is set in the request, the previous key-value pairs will be returned. | (slice of) mvccpb.KeyValue |
|
||||
|
||||
|
||||
|
||||
@ -508,6 +507,27 @@ Empty field.
|
||||
|
||||
|
||||
|
||||
##### message `LeaseTimeToLiveRequest` (etcdserver/etcdserverpb/rpc.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| ID | ID is the lease ID for the lease. | int64 |
|
||||
| keys | keys is true to query all the keys attached to this lease. | bool |
|
||||
|
||||
|
||||
|
||||
##### message `LeaseTimeToLiveResponse` (etcdserver/etcdserverpb/rpc.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| header | | ResponseHeader |
|
||||
| ID | ID is the lease ID from the keep alive request. | int64 |
|
||||
| TTL | TTL is the remaining TTL in seconds for the lease; the lease will expire in under TTL+1 seconds. | int64 |
|
||||
| grantedTTL | GrantedTTL is the initial granted time in seconds upon lease creation/renewal. | int64 |
|
||||
| keys | Keys is the list of keys attached to this lease. | (slice of) bytes |
|
||||
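In the Go client, this request/response pair maps to `TimeToLive` with the attached-keys option; a small sketch, assuming `cli` is an existing `*clientv3.Client`:

```go
import (
	"context"
	"fmt"

	"github.com/coreos/etcd/clientv3"
)

// leaseInfo prints the remaining TTL and the keys attached to a lease.
func leaseInfo(cli *clientv3.Client, id clientv3.LeaseID) error {
	resp, err := cli.TimeToLive(context.TODO(), id, clientv3.WithAttachedKeys())
	if err != nil {
		return err
	}
	fmt.Printf("lease %x: %ds remaining of %ds granted, keys=%q\n",
		resp.ID, resp.TTL, resp.GrantedTTL, resp.Keys)
	return nil
}
```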
|
||||
|
||||
|
||||
##### message `Member` (etcdserver/etcdserverpb/rpc.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
@ -533,6 +553,7 @@ Empty field.
|
||||
| ----- | ----------- | ---- |
|
||||
| header | | ResponseHeader |
|
||||
| member | member is the member information for the added member. | Member |
|
||||
| members | members is a list of all members after adding the new member. | (slice of) Member |
|
||||
|
||||
|
||||
|
||||
@ -564,6 +585,7 @@ Empty field.
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| header | | ResponseHeader |
|
||||
| members | members is a list of all members after removing the member. | (slice of) Member |
|
||||
|
||||
|
||||
|
||||
@ -581,6 +603,7 @@ Empty field.
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| header | | ResponseHeader |
|
||||
| members | members is a list of all members after updating the member. | (slice of) Member |
|
||||
|
||||
|
||||
|
||||
@ -591,6 +614,9 @@ Empty field.
|
||||
| key | key is the key, in bytes, to put into the key-value store. | bytes |
|
||||
| value | value is the value, in bytes, to associate with the key in the key-value store. | bytes |
|
||||
| lease | lease is the lease ID to associate with the key in the key-value store. A lease value of 0 indicates no lease. | int64 |
|
||||
| prev_kv | If prev_kv is set, etcd gets the previous key-value pair before changing it. The previous key-value pair will be returned in the put response. | bool |
|
||||
| ignore_value | If ignore_value is set, etcd updates the key using its current value. Returns an error if the key does not exist. | bool |
|
||||
| ignore_lease | If ignore_lease is set, etcd updates the key using its current lease. Returns an error if the key does not exist. | bool |
|
||||
|
||||
|
||||
|
||||
@ -599,6 +625,7 @@ Empty field.
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| header | | ResponseHeader |
|
||||
| prev_kv | if prev_kv is set in the request, the previous key-value pair will be returned. | mvccpb.KeyValue |
|
||||
|
||||
|
||||
|
||||
@ -606,13 +633,19 @@ Empty field.
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| key | default, no sorting lowest target value first highest target value first key is the first key for the range. If range_end is not given, the request only looks up key. | bytes |
|
||||
| range_end | range_end is the upper bound on the requested range [key, range_end). If range_end is '\0', the range is all keys >= key. If the range_end is one bit larger than the given key, then the range requests get the all keys with the prefix (the given key). If both key and range_end are '\0', then range requests returns all keys. | bytes |
|
||||
| limit | limit is a limit on the number of keys returned for the request. | int64 |
|
||||
| key | key is the first key for the range. If range_end is not given, the request only looks up key. | bytes |
|
||||
| range_end | range_end is the upper bound on the requested range [key, range_end). If range_end is '\0', the range is all keys >= key. If range_end is key plus one (e.g., "aa"+1 == "ab", "a\xff"+1 == "b"), then the range request gets all keys prefixed with key. If both key and range_end are '\0', then the range request returns all keys. | bytes |
|
||||
| limit | limit is a limit on the number of keys returned for the request. When limit is set to 0, it is treated as no limit. | int64 |
|
||||
| revision | revision is the point-in-time of the key-value store to use for the range. If revision is less or equal to zero, the range is over the newest key-value store. If the revision has been compacted, ErrCompacted is returned as a response. | int64 |
|
||||
| sort_order | sort_order is the order for returned sorted results. | SortOrder |
|
||||
| sort_target | sort_target is the key-value field to use for sorting. | SortTarget |
|
||||
| serializable | serializable sets the range request to use serializable member-local reads. Range requests are linearizable by default; linearizable requests have higher latency and lower throughput than serializable requests but reflect the current consensus of the cluster. For better performance, in exchange for possible stale reads, a serializable range request is served locally without needing to reach consensus with other nodes in the cluster. | bool |
|
||||
| keys_only | keys_only when set returns only the keys and not the values. | bool |
|
||||
| count_only | count_only when set returns only the count of the keys in the range. | bool |
|
||||
| min_mod_revision | min_mod_revision is the lower bound for returned key mod revisions; all keys with lesser mod revisions will be filtered away. | int64 |
|
||||
| max_mod_revision | max_mod_revision is the upper bound for returned key mod revisions; all keys with greater mod revisions will be filtered away. | int64 |
|
||||
| min_create_revision | min_create_revision is the lower bound for returned key create revisions; all keys with lesser create revisions will be filtered away. | int64 |
|
||||
| max_create_revision | max_create_revision is the upper bound for returned key create revisions; all keys with greater create revisions will be filtered away. | int64 |
|
||||
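The "key plus one" rule for `range_end` is mechanical enough to sketch: increment the last byte below 0xff and truncate the rest, falling back to '\0' when the key is all 0xff bytes. A self-contained Go sketch of that derivation, mirroring what etcd clients compute for prefix queries:

```go
// prefixRangeEnd returns the range_end that pairs with key to cover all keys
// prefixed by key: the last byte below 0xff is incremented and the remainder
// truncated (e.g. "aa" -> "ab", "a\xff" -> "b"). A key of all 0xff bytes has
// no bounded end, so '\0' (all keys >= key) is returned instead.
func prefixRangeEnd(key []byte) []byte {
	end := make([]byte, len(key))
	copy(end, key)
	for i := len(end) - 1; i >= 0; i-- {
		if end[i] < 0xff {
			end[i]++
			return end[:i+1]
		}
	}
	return []byte{0} // '\0': no key can bound the range from above
}
```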
|
||||
|
||||
|
||||
@ -621,8 +654,9 @@ Empty field.
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| header | | ResponseHeader |
|
||||
| kvs | kvs is the list of key-value pairs matched by the range request. | (slice of) mvccpb.KeyValue |
|
||||
| kvs | kvs is the list of key-value pairs matched by the range request. kvs is empty when count is requested. | (slice of) mvccpb.KeyValue |
|
||||
| more | more indicates if there are more keys to return in the requested range. | bool |
|
||||
| count | count is set to the number of keys within the range when requested. | int64 |
|
||||
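As an example of how `count` and the empty `kvs` interact, here is a sketch that counts the keys under a prefix without transferring any values, assuming a `*clientv3.Client`:

```go
import (
	"context"

	"github.com/coreos/etcd/clientv3"
)

// countPrefix returns the number of keys under pfx without fetching values;
// with count_only set, the response's Kvs list comes back empty.
func countPrefix(cli *clientv3.Client, pfx string) (int64, error) {
	resp, err := cli.Get(context.TODO(), pfx, clientv3.WithPrefix(), clientv3.WithCountOnly())
	if err != nil {
		return 0, err
	}
	return resp.Count, nil
}
```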
|
||||
|
||||
|
||||
@ -729,9 +763,11 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| key | key is the key to register for watching. | bytes |
|
||||
| range_end | range_end is the end of the range [key, range_end) to watch. If range_end is not given, only the key argument is watched. If range_end is equal to '\0', all keys greater than or equal to the key argument are watched. | bytes |
|
||||
| range_end | range_end is the end of the range [key, range_end) to watch. If range_end is not given, only the key argument is watched. If range_end is equal to '\0', all keys greater than or equal to the key argument are watched. If the range_end is one bit larger than the given key, then all keys with the prefix (the given key) will be watched. | bytes |
|
||||
| start_revision | start_revision is an optional revision to watch from (inclusive). No start_revision is "now". | int64 |
|
||||
| progress_notify | progress_notify is set so that the etcd server will periodically send a WatchResponse with no events to the new watcher if there are no recent events. It is useful when clients wish to recover a disconnected watcher starting from a recent known revision. The etcd server may decide how often it will send notifications based on current load. | bool |
|
||||
| filters | filters filter the events at server side before it sends back to the watcher. | (slice of) FilterType |
|
||||
| prev_kv | If prev_kv is set, created watcher gets the previous KV before the event happens. If the previous KV is already compacted, nothing will be returned. | bool |
|
||||
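A hedged Go-client sketch tying several of these fields together, watching a prefix with `prev_kv` enabled; `cli` is an assumed `*clientv3.Client`:

```go
import (
	"context"
	"fmt"

	"github.com/coreos/etcd/clientv3"
)

// watchPrefix streams events for all keys prefixed with pfx, requesting the
// previous key-value pair for each event (prev_kv).
func watchPrefix(cli *clientv3.Client, pfx string) {
	for wresp := range cli.Watch(context.Background(), pfx, clientv3.WithPrefix(), clientv3.WithPrevKV()) {
		for _, ev := range wresp.Events {
			fmt.Printf("%s %q -> %q\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}
```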
|
||||
|
||||
|
||||
@ -754,6 +790,7 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
|
||||
| created | created is set to true if the response is for a create watch request. The client should record the watch_id and expect to receive events for the created watcher from the same stream. All events sent to the created watcher will attach with the same watch_id. | bool |
|
||||
| canceled | canceled is set to true if the response is for a cancel watch request. No further events will be sent to the canceled watcher. | bool |
|
||||
| compact_revision | compact_revision is set to the minimum index if a watcher tries to watch at a compacted index. This happens when creating a watcher at a compacted revision or the watcher cannot catch up with the progress of the key-value store. The client should treat the watcher as canceled and should not try to create any watcher with the same start_revision again. | int64 |
|
||||
| cancel_reason | cancel_reason indicates the reason for canceling the watcher. | string |
|
||||
| events | | (slice of) mvccpb.Event |
|
||||
|
||||
|
||||
@ -764,6 +801,7 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
|
||||
| ----- | ----------- | ---- |
|
||||
| type | type is the kind of event. If type is a PUT, it indicates new data has been stored to the key. If type is a DELETE, it indicates the key was deleted. | EventType |
|
||||
| kv | kv holds the KeyValue for the event. A PUT event contains current kv pair. A PUT event with kv.Version=1 indicates the creation of a key. A DELETE/EXPIRE event contains the deleted key with its modification revision set to the revision of deletion. | KeyValue |
|
||||
| prev_kv | prev_kv holds the key-value pair before the event happens. | KeyValue |
|
||||
|
||||
|
||||
|
||||
@ -789,6 +827,22 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
|
||||
|
||||
|
||||
|
||||
##### message `LeaseInternalRequest` (lease/leasepb/lease.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| LeaseTimeToLiveRequest | | etcdserverpb.LeaseTimeToLiveRequest |
|
||||
|
||||
|
||||
|
||||
##### message `LeaseInternalResponse` (lease/leasepb/lease.proto)
|
||||
|
||||
| Field | Description | Type |
|
||||
| ----- | ----------- | ---- |
|
||||
| LeaseTimeToLiveResponse | | etcdserverpb.LeaseTimeToLiveResponse |
|
||||
|
||||
|
||||
|
||||
##### message `Permission` (auth/authpb/auth.proto)
|
||||
|
||||
Permission is a single entity
|
||||
|
File diff suppressed because it is too large
334
Documentation/dev-guide/apispec/swagger/v3election.swagger.json
Normal file
@ -0,0 +1,334 @@
|
||||
{
|
||||
"swagger": "2.0",
|
||||
"info": {
|
||||
"title": "etcdserver/api/v3election/v3electionpb/v3election.proto",
|
||||
"version": "version not set"
|
||||
},
|
||||
"schemes": [
|
||||
"http",
|
||||
"https"
|
||||
],
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"produces": [
|
||||
"application/json"
|
||||
],
|
||||
"paths": {
|
||||
"/v3alpha/election/campaign": {
|
||||
"post": {
|
||||
"summary": "Campaign waits to acquire leadership in an election, returning a LeaderKey\nrepresenting the leadership if successful. The LeaderKey can then be used\nto issue new values on the election, transactionally guard API requests on\nleadership still being held, and resign from the election.",
|
||||
"operationId": "Campaign",
|
||||
"responses": {
|
||||
"200": {
|
||||
"description": "",
|
||||
"schema": {
|
||||
"$ref": "#/definitions/v3electionpbCampaignResponse"
|
||||
}
|
||||
}
|
||||
},
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"in": "body",
|
||||
"required": true,
|
||||
"schema": {
|
||||
"$ref": "#/definitions/v3electionpbCampaignRequest"
|
||||
}
|
||||
}
|
||||
],
|
||||
"tags": [
|
||||
"Election"
|
||||
]
|
||||
}
|
||||
},
|
||||
"/v3alpha/election/leader": {
|
||||
"post": {
|
||||
"summary": "Leader returns the current election proclamation, if any.",
|
||||
"operationId": "Leader",
|
||||
"responses": {
|
||||
"200": {
|
||||
"description": "",
|
||||
"schema": {
|
||||
"$ref": "#/definitions/v3electionpbLeaderResponse"
|
||||
}
|
||||
}
|
||||
},
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"in": "body",
|
||||
"required": true,
|
||||
"schema": {
|
||||
"$ref": "#/definitions/v3electionpbLeaderRequest"
|
||||
}
|
||||
}
|
||||
],
|
||||
"tags": [
|
||||
"Election"
|
||||
]
|
||||
}
|
||||
},
|
||||
"/v3alpha/election/observe": {
|
||||
"post": {
|
||||
"summary": "Observe streams election proclamations in-order as made by the election's\nelected leaders.",
|
||||
"operationId": "Observe",
|
||||
"responses": {
|
||||
"200": {
|
||||
"description": "(streaming responses)",
|
||||
"schema": {
|
||||
"$ref": "#/definitions/v3electionpbLeaderResponse"
|
||||
}
|
||||
}
|
||||
},
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"in": "body",
|
||||
"required": true,
|
||||
"schema": {
|
||||
"$ref": "#/definitions/v3electionpbLeaderRequest"
|
||||
}
|
||||
}
|
||||
],
|
||||
"tags": [
|
||||
"Election"
|
||||
]
|
||||
}
|
||||
},
|
||||
"/v3alpha/election/proclaim": {
|
||||
"post": {
|
||||
"summary": "Proclaim updates the leader's posted value with a new value.",
|
||||
"operationId": "Proclaim",
|
||||
"responses": {
|
||||
"200": {
|
||||
"description": "",
|
||||
"schema": {
|
||||
"$ref": "#/definitions/v3electionpbProclaimResponse"
|
||||
}
|
||||
}
|
||||
},
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"in": "body",
|
||||
"required": true,
|
||||
"schema": {
|
||||
"$ref": "#/definitions/v3electionpbProclaimRequest"
|
||||
}
|
||||
}
|
||||
],
|
||||
"tags": [
|
||||
"Election"
|
||||
]
|
||||
}
|
||||
},
|
||||
"/v3alpha/election/resign": {
|
||||
"post": {
|
||||
"summary": "Resign releases election leadership so other campaigners may acquire\nleadership on the election.",
|
||||
"operationId": "Resign",
|
||||
"responses": {
|
||||
"200": {
|
||||
"description": "",
|
||||
"schema": {
|
||||
"$ref": "#/definitions/v3electionpbResignResponse"
|
||||
}
|
||||
}
|
||||
},
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"in": "body",
|
||||
"required": true,
|
||||
"schema": {
|
||||
"$ref": "#/definitions/v3electionpbResignRequest"
|
||||
}
|
||||
}
|
||||
],
|
||||
"tags": [
|
||||
"Election"
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"definitions": {
|
||||
"etcdserverpbResponseHeader": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"cluster_id": {
|
||||
"type": "string",
|
||||
"format": "uint64",
|
||||
"description": "cluster_id is the ID of the cluster which sent the response."
|
||||
},
|
||||
"member_id": {
|
||||
"type": "string",
|
||||
"format": "uint64",
|
||||
"description": "member_id is the ID of the member which sent the response."
|
||||
},
|
||||
"revision": {
|
||||
"type": "string",
|
||||
"format": "int64",
|
||||
"description": "revision is the key-value store revision when the request was applied."
|
||||
},
|
||||
"raft_term": {
|
||||
"type": "string",
|
||||
"format": "uint64",
|
||||
"description": "raft_term is the raft term when the request was applied."
|
||||
}
|
||||
}
|
||||
},
|
||||
"mvccpbKeyValue": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"key": {
|
||||
"type": "string",
|
||||
"format": "byte",
|
||||
"description": "key is the key in bytes. An empty key is not allowed."
|
||||
},
|
||||
"create_revision": {
|
||||
"type": "string",
|
||||
"format": "int64",
|
||||
"description": "create_revision is the revision of last creation on this key."
|
||||
},
|
||||
"mod_revision": {
|
||||
"type": "string",
|
||||
"format": "int64",
|
||||
"description": "mod_revision is the revision of last modification on this key."
|
||||
},
|
||||
"version": {
|
||||
"type": "string",
|
||||
"format": "int64",
|
||||
"description": "version is the version of the key. A deletion resets\nthe version to zero and any modification of the key\nincreases its version."
|
||||
},
|
||||
"value": {
|
||||
"type": "string",
|
||||
"format": "byte",
|
||||
"description": "value is the value held by the key, in bytes."
|
||||
},
|
||||
"lease": {
|
||||
"type": "string",
|
||||
"format": "int64",
|
||||
"description": "lease is the ID of the lease that attached to key.\nWhen the attached lease expires, the key will be deleted.\nIf lease is 0, then no lease is attached to the key."
|
||||
}
|
||||
}
|
||||
},
|
||||
"v3electionpbCampaignRequest": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"name": {
|
||||
"type": "string",
|
||||
"format": "byte",
|
||||
"description": "name is the election's identifier for the campaign."
|
||||
},
|
||||
"lease": {
|
||||
"type": "string",
|
||||
"format": "int64",
|
||||
"description": "lease is the ID of the lease attached to leadership of the election. If the\nlease expires or is revoked before resigning leadership, then the\nleadership is transferred to the next campaigner, if any."
|
||||
},
|
||||
"value": {
|
||||
"type": "string",
|
||||
"format": "byte",
|
||||
"description": "value is the initial proclaimed value set when the campaigner wins the\nelection."
|
||||
}
|
||||
}
|
||||
},
|
||||
"v3electionpbCampaignResponse": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"header": {
|
||||
"$ref": "#/definitions/etcdserverpbResponseHeader"
|
||||
},
|
||||
"leader": {
|
||||
"$ref": "#/definitions/v3electionpbLeaderKey",
|
||||
"description": "leader describes the resources used for holding leadereship of the election."
|
||||
}
|
||||
}
|
||||
},
|
||||
"v3electionpbLeaderKey": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"name": {
|
||||
"type": "string",
|
||||
"format": "byte",
|
||||
"description": "name is the election identifier that correponds to the leadership key."
|
||||
},
|
||||
"key": {
|
||||
"type": "string",
|
||||
"format": "byte",
|
||||
"description": "key is an opaque key representing the ownership of the election. If the key\nis deleted, then leadership is lost."
|
||||
},
|
||||
"rev": {
|
||||
"type": "string",
|
||||
"format": "int64",
|
||||
"description": "rev is the creation revision of the key. It can be used to test for ownership\nof an election during transactions by testing the key's creation revision\nmatches rev."
|
||||
},
|
||||
"lease": {
|
||||
"type": "string",
|
||||
"format": "int64",
|
||||
"description": "lease is the lease ID of the election leader."
|
||||
}
|
||||
}
|
||||
},
|
||||
"v3electionpbLeaderRequest": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"name": {
|
||||
"type": "string",
|
||||
"format": "byte",
|
||||
"description": "name is the election identifier for the leadership information."
|
||||
}
|
||||
}
|
||||
},
|
||||
"v3electionpbLeaderResponse": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"header": {
|
||||
"$ref": "#/definitions/etcdserverpbResponseHeader"
|
||||
},
|
||||
"kv": {
|
||||
"$ref": "#/definitions/mvccpbKeyValue",
|
||||
"description": "kv is the key-value pair representing the latest leader update."
|
||||
}
|
||||
}
|
||||
},
|
||||
"v3electionpbProclaimRequest": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"leader": {
|
||||
"$ref": "#/definitions/v3electionpbLeaderKey",
|
||||
"description": "leader is the leadership hold on the election."
|
||||
},
|
||||
"value": {
|
||||
"type": "string",
|
||||
"format": "byte",
|
||||
"description": "value is an update meant to overwrite the leader's current value."
|
||||
}
|
||||
}
|
||||
},
|
||||
"v3electionpbProclaimResponse": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"header": {
|
||||
"$ref": "#/definitions/etcdserverpbResponseHeader"
|
||||
}
|
||||
}
|
||||
},
|
||||
"v3electionpbResignRequest": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"leader": {
|
||||
"$ref": "#/definitions/v3electionpbLeaderKey",
|
||||
"description": "leader is the leadership to relinquish by resignation."
|
||||
}
|
||||
}
|
||||
},
|
||||
"v3electionpbResignResponse": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"header": {
|
||||
"$ref": "#/definitions/etcdserverpbResponseHeader"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
146
Documentation/dev-guide/apispec/swagger/v3lock.swagger.json
Normal file
@ -0,0 +1,146 @@
|
||||
{
|
||||
"swagger": "2.0",
|
||||
"info": {
|
||||
"title": "etcdserver/api/v3lock/v3lockpb/v3lock.proto",
|
||||
"version": "version not set"
|
||||
},
|
||||
"schemes": [
|
||||
"http",
|
||||
"https"
|
||||
],
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"produces": [
|
||||
"application/json"
|
||||
],
|
||||
"paths": {
|
||||
"/v3alpha/lock/lock": {
|
||||
"post": {
|
||||
"summary": "Lock acquires a distributed shared lock on a given named lock.\nOn success, it will return a unique key that exists so long as the\nlock is held by the caller. This key can be used in conjunction with\ntransactions to safely ensure updates to etcd only occur while holding\nlock ownership. The lock is held until Unlock is called on the key or the\nlease associate with the owner expires.",
|
||||
"operationId": "Lock",
|
||||
"responses": {
|
||||
"200": {
|
||||
"description": "",
|
||||
"schema": {
|
||||
"$ref": "#/definitions/v3lockpbLockResponse"
|
||||
}
|
||||
}
|
||||
},
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"in": "body",
|
||||
"required": true,
|
||||
"schema": {
|
||||
"$ref": "#/definitions/v3lockpbLockRequest"
|
||||
}
|
||||
}
|
||||
],
|
||||
"tags": [
|
||||
"Lock"
|
||||
]
|
||||
}
|
||||
},
|
||||
"/v3alpha/lock/unlock": {
|
||||
"post": {
|
||||
"summary": "Unlock takes a key returned by Lock and releases the hold on lock. The\nnext Lock caller waiting for the lock will then be woken up and given\nownership of the lock.",
|
||||
"operationId": "Unlock",
|
||||
"responses": {
|
||||
"200": {
|
||||
"description": "",
|
||||
"schema": {
|
||||
"$ref": "#/definitions/v3lockpbUnlockResponse"
|
||||
}
|
||||
}
|
||||
},
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"in": "body",
|
||||
"required": true,
|
||||
"schema": {
|
||||
"$ref": "#/definitions/v3lockpbUnlockRequest"
|
||||
}
|
||||
}
|
||||
],
|
||||
"tags": [
|
||||
"Lock"
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"definitions": {
|
||||
"etcdserverpbResponseHeader": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"cluster_id": {
|
||||
"type": "string",
|
||||
"format": "uint64",
|
||||
"description": "cluster_id is the ID of the cluster which sent the response."
|
||||
},
|
||||
"member_id": {
|
||||
"type": "string",
|
||||
"format": "uint64",
|
||||
"description": "member_id is the ID of the member which sent the response."
|
||||
},
|
||||
"revision": {
|
||||
"type": "string",
|
||||
"format": "int64",
|
||||
"description": "revision is the key-value store revision when the request was applied."
|
||||
},
|
||||
"raft_term": {
|
||||
"type": "string",
|
||||
"format": "uint64",
|
||||
"description": "raft_term is the raft term when the request was applied."
|
||||
}
|
||||
}
|
||||
},
|
||||
"v3lockpbLockRequest": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"name": {
|
||||
"type": "string",
|
||||
"format": "byte",
|
||||
"description": "name is the identifier for the distributed shared lock to be acquired."
|
||||
},
|
||||
"lease": {
|
||||
"type": "string",
|
||||
"format": "int64",
|
||||
"description": "lease is the ID of the lease that will be attached to ownership of the\nlock. If the lease expires or is revoked and currently holds the lock,\nthe lock is automatically released. Calls to Lock with the same lease will\nbe treated as a single acquistion; locking twice with the same lease is a\nno-op."
|
||||
}
|
||||
}
|
||||
},
|
||||
"v3lockpbLockResponse": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"header": {
|
||||
"$ref": "#/definitions/etcdserverpbResponseHeader"
|
||||
},
|
||||
"key": {
|
||||
"type": "string",
|
||||
"format": "byte",
|
||||
"description": "key is a key that will exist on etcd for the duration that the Lock caller\nowns the lock. Users should not modify this key or the lock may exhibit\nundefined behavior."
|
||||
}
|
||||
}
|
||||
},
|
||||
"v3lockpbUnlockRequest": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"key": {
|
||||
"type": "string",
|
||||
"format": "byte",
|
||||
"description": "key is the lock ownership key granted by Lock."
|
||||
}
|
||||
}
|
||||
},
|
||||
"v3lockpbUnlockResponse": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"header": {
|
||||
"$ref": "#/definitions/etcdserverpbResponseHeader"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
@ -1,8 +1,7 @@
|
||||
# Experimental APIs and features
|
||||
|
||||
For the most part, the etcd project is stable, but we are still moving fast! We believe in the release fast philosophy. We want to get early feedback on features still in development and stabilizing. Thus, there are, and will be more, experimental features and APIs. We plan to improve these features based on the early feedback from the community, or abandon them if there is little interest, in the next few releases. If you are running a production system, please do not rely on any experimental features or APIs.
|
||||
For the most part, the etcd project is stable, but we are still moving fast! We believe in the release fast philosophy. We want to get early feedback on features still in development and stabilizing. Thus, there are, and will be more, experimental features and APIs. We plan to improve these features based on the early feedback from the community, or abandon them if there is little interest, in the next few releases. Please do not rely on any experimental features or APIs in a production environment.
|
||||
|
||||
## The current experimental API/features are:
|
||||
|
||||
- v3 auth API: expect to be stale in 3.1 release
|
||||
- etcd gateway: expect to be stable in 3.1 release
|
||||
(none currently)
|
||||
|
65
Documentation/dev-guide/grpc_naming.md
Normal file
@ -0,0 +1,65 @@
|
||||
# gRPC naming and discovery
|
||||
|
||||
etcd provides a gRPC resolver to support an alternative name system that fetches endpoints from etcd for discovering gRPC services. The underlying mechanism is based on watching updates to keys prefixed with the service name.
|
||||
|
||||
## Using etcd discovery with go-grpc
|
||||
|
||||
The etcd client provides a gRPC resolver for resolving gRPC endpoints with an etcd backend. The resolver is initialized with an etcd client and given a target for resolution:
|
||||
|
||||
```go
|
||||
import (
|
||||
"github.com/coreos/etcd/clientv3"
|
||||
etcdnaming "github.com/coreos/etcd/clientv3/naming"
|
||||
|
||||
"google.golang.org/grpc"
|
||||
)
|
||||
|
||||
...
|
||||
|
||||
cli, cerr := clientv3.NewFromURL("http://localhost:2379")
|
||||
r := &etcdnaming.GRPCResolver{Client: cli}
|
||||
b := grpc.RoundRobin(r)
|
||||
conn, gerr := grpc.Dial("my-service", grpc.WithBalancer(b))
|
||||
```
|
||||
|
||||
## Managing service endpoints
|
||||
|
||||
The etcd resolver treats all keys under the resolution target's prefix followed by "/" (e.g., "my-service/") whose values are JSON-encoded go-grpc `naming.Update` objects as potential service endpoints. Endpoints are added to the service by creating new keys and removed from the service by deleting keys.
|
||||
|
||||
### Adding an endpoint
|
||||
|
||||
New endpoints can be added to the service through `etcdctl`:
|
||||
|
||||
```sh
|
||||
ETCDCTL_API=3 etcdctl put my-service/1.2.3.4 '{"Addr":"1.2.3.4","Metadata":"..."}'
|
||||
```
|
||||
|
||||
The etcd client's `GRPCResolver.Update` method can also register new endpoints with a key matching the `Addr`:
|
||||
|
||||
```go
|
||||
r.Update(context.TODO(), "my-service", naming.Update{Op: naming.Add, Addr: "1.2.3.4", Metadata: "..."})
|
||||
```
|
||||
|
||||
### Deleting an endpoint
|
||||
|
||||
Hosts can be deleted from the service through `etcdctl`:
|
||||
|
||||
```sh
|
||||
ETCDCTL_API=3 etcdctl del my-service/1.2.3.4
|
||||
```
|
||||
|
||||
The etcd client's `GRPCResolver.Update` method also supports deleting endpoints:
|
||||
|
||||
```go
|
||||
r.Update(context.TODO(), "my-service", naming.Update{Op: naming.Delete, Addr: "1.2.3.4"})
|
||||
```
|
||||
|
||||
### Registering an endpoint with a lease
|
||||
|
||||
Registering an endpoint with a lease ensures that if the host can't maintain a keepalive heartbeat (e.g., its machine fails), it will be removed from the service:
|
||||
|
||||
```sh
|
||||
lease=`ETCDCTL_API=3 etcdctl lease grant 5 | cut -f2 -d' '`
|
||||
ETCDCTL_API=3 etcdctl put --lease=$lease my-service/1.2.3.4 '{"Addr":"1.2.3.4","Metadata":"..."}'
|
||||
ETCDCTL_API=3 etcdctl lease keep-alive $lease
|
||||
```
|
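The same lease-scoped registration can be sketched with the Go client: write the JSON-encoded update under a short lease and keep the lease alive so the entry disappears if this process dies. The 5-second TTL mirrors the etcdctl example above; the service name and address are placeholders:

```go
import (
	"context"
	"encoding/json"

	"github.com/coreos/etcd/clientv3"
	"google.golang.org/grpc/naming"
)

// registerWithLease puts a service endpoint under a short-lived lease and
// refreshes the lease until the context is canceled or the client closes.
func registerWithLease(cli *clientv3.Client, service, addr string) error {
	lease, err := cli.Grant(context.TODO(), 5) // 5-second TTL
	if err != nil {
		return err
	}
	v, err := json.Marshal(naming.Update{Op: naming.Add, Addr: addr})
	if err != nil {
		return err
	}
	if _, err := cli.Put(context.TODO(), service+"/"+addr, string(v), clientv3.WithLease(lease.ID)); err != nil {
		return err
	}
	_, err = cli.KeepAlive(context.TODO(), lease.ID)
	return err
}
```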
@ -4,30 +4,54 @@ Users mostly interact with etcd by putting or getting the value of a key. This s
|
||||
|
||||
By default, etcdctl talks to the etcd server with the v2 API for backward compatibility. For etcdctl to speak to etcd using the v3 API, the API version must be set to version 3 via the `ETCDCTL_API` environment variable.
|
||||
|
||||
``` bash
|
||||
```bash
|
||||
export ETCDCTL_API=3
|
||||
```
|
||||
|
||||
## Find versions
|
||||
|
||||
The etcdctl version and the server API version are useful for finding the appropriate commands for performing various operations on etcd.
|
||||
|
||||
Here is the command to find the versions:
|
||||
|
||||
```bash
|
||||
$ etcdctl version
|
||||
etcdctl version: 3.1.0-alpha.0+git
|
||||
API version: 3.1
|
||||
```
|
||||
|
||||
## Write a key
|
||||
|
||||
Applications store data in the etcd cluster by writing to keys. Every stored key is replicated to all etcd cluster members through the Raft protocol to achieve consistency and reliability.
|
||||
|
||||
Here is the command to set the value of key `foo` to `bar`:
|
||||
|
||||
``` bash
|
||||
```bash
|
||||
$ etcdctl put foo bar
|
||||
OK
|
||||
```
|
||||
|
||||
A key can also be set for a specified interval of time by attaching a lease to it.
|
||||
|
||||
Here is the command to set the value of key `foo1` to `bar1` for 10s.
|
||||
|
||||
```bash
|
||||
$ etcdctl put foo1 bar1 --lease=1234abcd
|
||||
OK
|
||||
```
|
||||
|
||||
Note: The lease ID `1234abcd` in the above command refers to the ID returned when creating a lease with a 10-second TTL. This ID can then be attached to the key.
|
||||
|
||||
## Read keys
|
||||
|
||||
Applications can read values of keys from an etcd cluster. Queries may read a single key, or a range of keys.
|
||||
Applications can read values of keys from an etcd cluster. Queries may read a single key, or a range of keys.
|
||||
|
||||
Suppose the etcd cluster has stored the following keys:
|
||||
|
||||
```
|
||||
```bash
|
||||
foo = bar
|
||||
foo1 = bar1
|
||||
foo2 = bar2
|
||||
foo3 = bar3
|
||||
```
|
||||
|
||||
@ -39,18 +63,59 @@ foo
|
||||
bar
|
||||
```
|
||||
|
||||
Here is the command to range over the keys from `foo` to `foo9`:
|
||||
Here is the command to read the value of key `foo` in hex format:
|
||||
|
||||
```bash
|
||||
$ etcdctl get foo foo9
|
||||
$ etcdctl get foo --hex
|
||||
\x66\x6f\x6f # Key
|
||||
\x62\x61\x72 # Value
|
||||
```
|
||||
|
||||
Here is the command to read only the value of key `foo`:
|
||||
|
||||
```bash
|
||||
$ etcdctl get foo --print-value-only
|
||||
bar
|
||||
```
|
||||
|
||||
Here is the command to range over the keys from `foo` to `foo3`:
|
||||
|
||||
```bash
|
||||
$ etcdctl get foo foo3
|
||||
foo
|
||||
bar
|
||||
foo1
|
||||
bar1
|
||||
foo2
|
||||
bar2
|
||||
```
|
||||
|
||||
Note that `foo3` is excluded since the range is over the half-open interval `[foo, foo3)`.
|
||||
|
||||
Here is the command to range over all keys prefixed with `foo`:
|
||||
|
||||
```bash
|
||||
$ etcdctl get --prefix foo
|
||||
foo
|
||||
bar
|
||||
foo1
|
||||
bar1
|
||||
foo2
|
||||
bar2
|
||||
foo3
|
||||
bar3
|
||||
```
|
||||
|
||||
Here is the command to range over all keys prefixed with `foo`, limiting the number of results to 2:
|
||||
|
||||
```bash
|
||||
$ etcdctl get --prefix --limit=2 foo
|
||||
foo
|
||||
bar
|
||||
foo1
|
||||
bar1
|
||||
```
|
||||
|
||||
## Read past version of keys
|
||||
|
||||
Applications may want to read superseded versions of a key. For example, an application may wish to roll back to an old configuration by accessing an earlier version of a key. Alternatively, an application may want a consistent view over multiple keys through multiple requests by accessing key history.
|
||||
@ -58,45 +123,81 @@ Since every modification to the etcd cluster key-value store increments the glob
|
||||
|
||||
Suppose an etcd cluster already has the following keys:

```bash
foo = bar # revision = 2
foo1 = bar1 # revision = 3
foo = bar_new # revision = 4
foo1 = bar1_new # revision = 5
```

Here is an example of accessing past versions of keys:

```bash
$ etcdctl get --prefix foo # access the most recent versions of keys
foo
bar_new
foo1
bar1_new

$ etcdctl get --prefix --rev=4 foo # access the versions of keys at revision 4
foo
bar_new
foo1
bar1

$ etcdctl get --prefix --rev=3 foo # access the versions of keys at revision 3
foo
bar
foo1
bar1

$ etcdctl get --prefix --rev=2 foo # access the versions of keys at revision 2
foo
bar

$ etcdctl get --prefix --rev=1 foo # access the versions of keys at revision 1
```

## Read keys which are greater than or equal to the byte value of the specified key

Applications may want to read keys which are greater than or equal to the byte value of the specified key.

Suppose an etcd cluster already has the following keys:

```bash
a = 123
b = 456
z = 789
```

Here is the command to read keys which are greater than or equal to the byte value of key `b`:

```bash
$ etcdctl get --from-key b
b
456
z
789
```

## Delete keys

Applications can delete a key or a range of keys from an etcd cluster.

Suppose an etcd cluster already has the following keys:

```bash
foo = bar
foo1 = bar1
foo3 = bar3
zoo = val
zoo1 = val1
zoo2 = val2
a = 123
b = 456
z = 789
```

Here is the command to delete key `foo`:

```bash
$ etcdctl del foo
1 # one key is deleted
```

Here is the command to delete keys ranging from `foo` to `foo9`:

```bash
$ etcdctl del foo foo9
2 # two keys are deleted
```

Here is the command to delete key `zoo` with the deleted key-value pair returned:

```bash
$ etcdctl del --prev-kv zoo
1 # one key is deleted
zoo # deleted key
val # the value of the deleted key
```

Here is the command to delete keys having the prefix `zoo`:

```bash
$ etcdctl del --prefix zoo
2 # two keys are deleted
```

Here is the command to delete keys which are greater than or equal to the byte value of key `b`:

```bash
$ etcdctl del --from-key b
2 # two keys are deleted
```

## Watch key changes

Applications can watch on a key or a range of keys to monitor for any updates.

Here is the command to watch on key `foo`:

```bash
$ etcdctl watch foo
# in another terminal: etcdctl put foo bar
PUT
foo
bar
```

Here is the command to watch on key `foo` in hex format:

```bash
$ etcdctl watch foo --hex
# in another terminal: etcdctl put foo bar
PUT
\x66\x6f\x6f # Key
\x62\x61\x72 # Value
```

Here is the command to watch on a range of keys from `foo` to `foo9`:

```bash
$ etcdctl watch foo foo9
# in another terminal: etcdctl put foo bar
PUT
foo
bar
# in another terminal: etcdctl put foo1 bar1
PUT
foo1
bar1
```

Here is the command to watch on keys having prefix `foo`:

```bash
$ etcdctl watch --prefix foo
# in another terminal: etcdctl put foo bar
PUT
foo
bar
# in another terminal: etcdctl put fooz1 barz1
PUT
fooz1
barz1
```

Here is the command to watch on multiple keys `foo` and `zoo`:

```bash
$ etcdctl watch -i
$ watch foo
$ watch zoo
# in another terminal: etcdctl put foo bar
PUT
foo
bar
# in another terminal: etcdctl put zoo val
PUT
zoo
val
```

## Watch historical changes of keys

Applications may want to watch for historical changes of keys in etcd. For example, an application may wish to receive all the modifications of a key; if the application stays connected to etcd, then `watch` is good enough. However, if the application or etcd fails, a change may happen during the failure, and the application will not receive the update in real time. To guarantee the update is delivered, the application must be able to watch for historical changes to keys. To do this, an application can specify a historical revision on a watch, just like reading past versions of keys.

Suppose we finished the following sequence of operations:

```bash
$ etcdctl put foo bar # revision = 2
OK
$ etcdctl put foo1 bar1 # revision = 3
OK
$ etcdctl put foo bar_new # revision = 4
OK
$ etcdctl put foo1 bar1_new # revision = 5
OK
```

Here is an example to watch the historical changes:

```bash
# watch for changes on key `foo` since revision 2
$ etcdctl watch --rev=2 foo
PUT
foo
bar
PUT
foo
bar_new
```

```bash
# watch for changes on key `foo` since revision 3
$ etcdctl watch --rev=3 foo
PUT
foo
bar_new
```

Here is an example to watch only from the last historical change:

```bash
# watch for changes on key `foo` and return last revision value along with modified value
$ etcdctl watch --prev-kv foo
# in another terminal: etcdctl put foo bar_latest
PUT
foo # key
bar_new # last value of foo key before modification
foo # key
bar_latest # value of foo key after modification
```

## Compacted revisions

As we mentioned, etcd keeps revisions so that applications can read past versions of keys. However, to avoid accumulating an unbounded amount of history, it is important to compact past revisions. After compacting, etcd removes historical revisions, releasing resources for future use. All superseded data with revisions before the compacted revision will be unavailable.

Here is the command to compact the revisions:

```bash
$ etcdctl compact 5
compacted revision 5

# any revisions before the compacted one are not accessible
$ etcdctl get --rev=4 foo
Error: rpc error: code = 11 desc = etcdserver: mvcc: required revision has been compacted
```

Note: The current revision of the etcd server can be found by using the `get` command on any key (existent or non-existent) in JSON format. The example below uses `mykey`, which does not exist in the etcd server:

```bash
$ etcdctl get mykey -w=json
{"header":{"cluster_id":14841639068965178418,"member_id":10276657743932975437,"revision":15,"raft_term":4}}
```

## Grant leases

Applications can grant leases for keys from an etcd cluster. When a key is attached to a lease, its lifetime is bound to the lease's lifetime which in turn is governed by a time-to-live (TTL). Each lease has a minimum time-to-live (TTL) value specified by the application at grant time. The lease's actual TTL value is at least the minimum TTL and is chosen by the etcd cluster. Once a lease's TTL elapses, the lease expires and all attached keys are deleted.

Here is the command to grant a lease:

```bash
# grant a lease with 10 second TTL
$ etcdctl lease grant 10
lease 32695410dcc0ca06 granted with TTL(10s)
```

## Revoke leases

Applications revoke leases by lease ID. Revoking a lease deletes all of its attached keys.

Suppose we finished the following sequence of operations:

```bash
$ etcdctl lease grant 10
lease 32695410dcc0ca06 granted with TTL(10s)
$ etcdctl put --lease=32695410dcc0ca06 foo bar
OK
```

Here is the command to revoke the same lease:

```bash
$ etcdctl lease revoke 32695410dcc0ca06
lease 32695410dcc0ca06 revoked
```

## Keep leases alive

Applications can keep a lease alive by refreshing its TTL so it does not expire.

Suppose we finished the following sequence of operations:

```bash
$ etcdctl lease grant 10
lease 32695410dcc0ca06 granted with TTL(10s)
```

Here is the command to keep the same lease alive:

```bash
$ etcdctl lease keep-alive 32695410dcc0ca06
lease 32695410dcc0ca06 keepalived with TTL(100)
lease 32695410dcc0ca06 keepalived with TTL(100)
lease 32695410dcc0ca06 keepalived with TTL(100)
...
```

## Get lease information

Applications may want to know about lease information, so that they can renew a lease or check whether it still exists or has expired. Applications may also want to know the keys to which a particular lease is attached.

Suppose we finished the following sequence of operations:

```bash
# grant a lease with 500 second TTL
$ etcdctl lease grant 500
lease 694d5765fc71500b granted with TTL(500s)

# attach key zoo1 to lease 694d5765fc71500b
$ etcdctl put zoo1 val1 --lease=694d5765fc71500b
OK

# attach key zoo2 to lease 694d5765fc71500b
$ etcdctl put zoo2 val2 --lease=694d5765fc71500b
OK
```

Here is the command to get information about the lease:

```bash
$ etcdctl lease timetolive 694d5765fc71500b
lease 694d5765fc71500b granted with TTL(500s), remaining(258s)
```

Here is the command to get information about the lease along with the keys attached to the lease:

```bash
$ etcdctl lease timetolive --keys 694d5765fc71500b
lease 694d5765fc71500b granted with TTL(500s), remaining(132s), attached keys([zoo2 zoo1])

# if the lease has expired or does not exist it will give the below response:
Error: etcdserver: requested lease not found
```

# System limits

## Request size limit

etcd is designed to handle small key value pairs typical for metadata. Larger requests will work, but may increase the latency of other requests. For the time being, etcd guarantees to support RPC requests with up to 1MB of data. In the future, the size limit may be loosened or made configurable.

## Storage size limit

The default storage size limit is 2GB, configurable with the `--quota-backend-bytes` flag; up to 8GB is supported.
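
For example, to raise the quota to its 8GB maximum, the flag takes a byte count; a minimal sketch (other cluster flags omitted):

```bash
# start etcd with an 8GB backend quota (8 * 1024^3 bytes)
$ etcd --quota-backend-bytes=8589934592
```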

## Local multi-member cluster

A `Procfile` at the base of this git repo is provided to easily set up a local multi-member cluster. To start a multi-member cluster, go to the root of an etcd source tree and run:

```
# install goreman program to control Procfile-based applications.
$ go get github.com/mattn/goreman
$ goreman -f Procfile start
...
```

The started members listen on `localhost:2379`, `localhost:22379`, and `localhost:32379` for client requests respectively.

To interact with the started cluster, use etcdctl:

```
# use API version 3
$ export ETCDCTL_API=3

$ etcdctl --write-out=table --endpoints=localhost:2379 member list
+------------------+---------+--------+------------------------+------------------------+
|        ID        | STATUS  |  NAME  |       PEER ADDRS       |      CLIENT ADDRS      |
+------------------+---------+--------+------------------------+------------------------+
| 8211f1d0f64f3269 | started | infra1 | http://127.0.0.1:2380  | http://127.0.0.1:2379  |
| 91bc3c398fb3c146 | started | infra2 | http://127.0.0.1:22380 | http://127.0.0.1:22379 |
| fd422379fda50e48 | started | infra3 | http://127.0.0.1:32380 | http://127.0.0.1:32379 |
+------------------+---------+--------+------------------------+------------------------+

$ etcdctl put foo bar
OK
```


To exercise etcd's fault tolerance, kill a member:

```
# kill etcd2
$ goreman run stop etcd2

$ etcdctl put key hello
OK

$ etcdctl get key
hello

# try to get key from the killed member
```

etcd uses the [capnslog][capnslog] library for logging application output categorized into *levels*. A log message's level is determined according to these conventions:

* Error: Data has been lost, a request has failed for a bad reason, or a required resource has been lost
  * Examples:
    * A failure to allocate disk space for WAL

* Warning: (Hopefully) Temporary conditions that may cause errors, but may work fine. A replica disappearing (that may reconnect) is a warning.

  * Send a normal message to a remote peer
  * Write a log entry to disk

[capnslog]: https://github.com/coreos/pkg/tree/master/capnslog

This guide describes how to release a new version of etcd.

The procedure includes some manual steps for sanity checking, but it can probably be further scripted. Please keep this document up-to-date if making changes to the release process.

## Prepare release

## Write release note

- Write introduction for the new release. For example, what major bugs we fixed, what new features we introduced, or what performance improvements we made.
- Find PRs with `release-note` label and explain them in `NEWS` file, as a straightforward summary of changes for end-users.

## Tag version

## Build release binaries and images

- Ensure `actool` is available, or install it through `go get github.com/appc/spec/actool`.
- Ensure `acbuild` is available.
- Ensure `docker` is available.

Run release script in root directory:

It generates all release binaries and images under directory ./release.

## Sign binaries, images, and source code

The etcd project key must be used to sign the generated binaries and images. `$SUBKEYID` is the key ID of the etcd project Yubikey. Connect the key and run `gpg2 --card-status` to get the ID.

The following commands are used for public release signing:

```
cd release
for i in etcd-*{.zip,.tar.gz}; do gpg2 --default-key $SUBKEYID --armor --output ${i}.asc --detach-sign ${i}; done
for i in etcd-*{.zip,.tar.gz}; do gpg2 --verify ${i}.asc ${i}; done

# sign zipped source code files
wget https://github.com/coreos/etcd/archive/${VERSION}.zip
gpg2 --armor --default-key $SUBKEYID --output ${VERSION}.zip.asc --detach-sign ${VERSION}.zip
gpg2 --verify ${VERSION}.zip.asc ${VERSION}.zip

wget https://github.com/coreos/etcd/archive/${VERSION}.tar.gz
gpg2 --armor --default-key $SUBKEYID --output ${VERSION}.tar.gz.asc --detach-sign ${VERSION}.tar.gz
gpg2 --verify ${VERSION}.tar.gz.asc ${VERSION}.tar.gz
```

The public key for GPG signing can be found at [CoreOS Application Signing Key](https://coreos.com/security/app-signing-key).

## System requirements

The etcd performance benchmarks run etcd on 8 vCPU, 16GB RAM, 50GB SSD GCE instances, but any relatively modern machine with low latency storage and a few gigabytes of memory should suffice for most use cases. Applications with large v2 data stores will require more memory than a large v3 data store since data is kept in anonymous memory instead of memory mapped from a file. For running etcd on a cloud provider, we suggest at least a medium instance on AWS or a standard-1 instance on GCE.

## Download the pre-built binary

The easiest way to get etcd is to use one of the pre-built release binaries, which are available for many platforms; instructions for using these binaries are on the [GitHub releases page][github-release].

## Build the latest version

For those wanting to try the very latest version, build etcd from the `master` branch. [Go](https://golang.org/) version 1.8+ is required to build the latest version of etcd. To ensure etcd is built against well-tested libraries, etcd vendors its dependencies for official release binaries. However, etcd's vendoring is also optional to avoid potential import conflicts when embedding the etcd server or using the etcd client.

To build `etcd` from the `master` branch without a `GOPATH` using the official `build` script:

```sh
$ git clone https://github.com/coreos/etcd.git
$ cd etcd
$ ./build
$ ./bin/etcd
...
```

To build a vendored `etcd` from the `master` branch via `go get`:

```sh
# GOPATH should be set
$ echo $GOPATH
/Users/example/go
$ go get github.com/coreos/etcd/cmd/etcd
$ $GOPATH/bin/etcd
```

To build `etcd` from the `master` branch without vendoring (may not build due to upstream conflicts):

```sh
# GOPATH should be set
$ echo $GOPATH
/Users/example/go
$ go get github.com/coreos/etcd
$ $GOPATH/bin/etcd
```

## Test the installation

If OK is printed, then etcd is working!

[github-release]: https://github.com/coreos/etcd/releases/
[go]: https://golang.org/doc/install
[build-script]: ../build
[cmd-directory]: ../cmd

## Developing with etcd

The easiest way to get started using etcd as a distributed key-value store is to [set up a local cluster][local_cluster].

- [Setting up local clusters][local_cluster]
- [Interacting with etcd][interacting]
- gRPC [etcd core][api_ref] and [etcd concurrency][api_concurrency_ref] API references
- [HTTP JSON API through the gRPC gateway][api_grpc_gateway]
- [gRPC naming and discovery][grpc_naming]
- [Client][namespace_client] and [proxy][namespace_proxy] namespacing
- [Embedding etcd][embed_etcd]
- [Experimental features and APIs][experimental]
- [System limits][system-limit]


## Operating etcd clusters

Administrators who need to create reliable and scalable key-value stores for the developers they support should begin with a [cluster on multiple machines][clustering].

- [Setting up etcd clusters][clustering]
- [Setting up etcd gateways][gateway]
- [Setting up etcd gRPC proxy][grpc_proxy]
- [Hardware recommendations][hardware]
- [Configuration][conf]
- [Authentication][authentication]
- [Monitoring][monitoring]
- [Maintenance][maintenance]
- [Understand failures][failures]
- [Disaster recovery][recovery]
- [Performance][performance]
- [Versioning][versioning]

### Platform guides

- [Supported systems][supported_platforms]
- [Docker container][container_docker]
- [Container Linux, systemd][container_linux_platform]
- [rkt container][container_rkt]
- [Amazon Web Services][aws_platform]
- [FreeBSD][freebsd_platform]

### Security

- [TLS][security]
- [Role-based access control][authentication]

### Maintenance and troubleshooting

- [Frequently asked questions][common questions]
- [Monitoring][monitoring]
- [Maintenance][maintenance]
- [Failure modes][failures]
- [Disaster recovery][recovery]
- [Upgrading][upgrading]

## Learning

To learn more about the concepts and internals behind etcd, read the following pages:

- [Why etcd?][why]
- [Understand data model][data_model]
- [Understand APIs][understand_apis]
- [Glossary][glossary]
- Internals
  - [Auth subsystem][auth_design]

## Frequently Asked Questions (FAQ)

Answers to [common questions] about etcd.

[api_ref]: dev-guide/api_reference_v3.md
[api_concurrency_ref]: dev-guide/api_concurrency_reference_v3.md
[api_grpc_gateway]: dev-guide/api_grpc_gateway.md
[clustering]: op-guide/clustering.md
[conf]: op-guide/configuration.md
[system-limit]: dev-guide/limit.md
[common questions]: faq.md
[why]: learning/why.md
[data_model]: learning/data_model.md
[demo]: demo.md
[download_build]: dl_build.md
[embed_etcd]: https://godoc.org/github.com/coreos/etcd/embed
[grpc_naming]: dev-guide/grpc_naming.md
[failures]: op-guide/failures.md
[gateway]: op-guide/gateway.md
[glossary]: learning/glossary.md
[namespace_client]: https://godoc.org/github.com/coreos/etcd/clientv3/namespace
[namespace_proxy]: op-guide/grpc_proxy.md#namespacing
[grpc_proxy]: op-guide/grpc_proxy.md
[hardware]: op-guide/hardware.md
[interacting]: dev-guide/interacting_v3.md
[local_cluster]: dev-guide/local_cluster.md
[performance]: op-guide/performance.md
[recovery]: op-guide/recovery.md
[maintenance]: op-guide/maintenance.md
[security]: op-guide/security.md
[monitoring]: op-guide/monitoring.md
[v2_migration]: op-guide/v2-migration.md
[container]: op-guide/container.md
[container_rkt]: op-guide/container.md#rkt
[container_docker]: op-guide/container.md#docker
[understand_apis]: learning/api.md
[versioning]: op-guide/versioning.md
[supported_platform]: op-guide/supported-platform.md
[supported_platforms]: op-guide/supported-platform.md
[container_linux_platform]: platforms/container-linux-systemd.md
[freebsd_platform]: platforms/freebsd.md
[aws_platform]: platforms/aws.md
[experimental]: dev-guide/experimental_apis.md
[v3_upgrade]: upgrades/upgrade_3_0.md
[authentication]: op-guide/authentication.md
[auth_design]: learning/auth_design.md
[upgrading]: upgrades/upgrading-etcd.md

## Frequently Asked Questions (FAQ)

### etcd, general

#### Do clients have to send requests to the etcd leader?

[Raft][raft] is leader-based; the leader handles all client requests which need cluster consensus. However, the client does not need to know which node is the leader. Any request that requires consensus sent to a follower is automatically forwarded to the leader. Requests that do not require consensus (e.g., serialized reads) can be processed by any cluster member.
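
For instance, a read that does not need consensus can be served by whichever member receives it; a sketch with etcdctl (assuming key `foo` holds `bar` as in the interacting guide above):

```bash
# "s" selects a serializable (member-local) read; the default "l" is linearizable
$ etcdctl get foo --consistency="s"
foo
bar
```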

### Configuration

#### What is the difference between listen-<client,peer>-urls, advertise-client-urls or initial-advertise-peer-urls?

`listen-client-urls` and `listen-peer-urls` specify the local addresses etcd server binds to for accepting incoming connections. To listen on a port for all interfaces, specify `0.0.0.0` as the listen IP address.

`advertise-client-urls` and `initial-advertise-peer-urls` specify the addresses etcd clients or other etcd members should use to contact the etcd server. The advertise addresses must be reachable from the remote machines. Do not advertise addresses like `localhost` or `0.0.0.0` for a production setup since these addresses are unreachable from remote machines.
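
As an illustration, a member might bind to all interfaces while advertising its routable address; a sketch (the name and addresses are hypothetical, and cluster bootstrap flags are omitted):

```bash
$ etcd --name infra0 \
  --listen-client-urls http://0.0.0.0:2379 \
  --advertise-client-urls http://10.0.1.10:2379 \
  --listen-peer-urls http://0.0.0.0:2380 \
  --initial-advertise-peer-urls http://10.0.1.10:2380
```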

### Deployment

#### System requirements

Since etcd writes data to disk, SSD is highly recommended. To prevent performance degradation or unintentionally overloading the key-value store, etcd enforces a 2GB default storage size quota, configurable up to 8GB. To avoid swapping or running out of memory, the machine should have at least as much RAM to cover the quota. At CoreOS, an etcd cluster is usually deployed on dedicated CoreOS Container Linux machines with dual-core processors, 2GB of RAM, and 80GB of SSD *at the very least*. **Note that performance is intrinsically workload dependent; please test before production deployment**. See [hardware][hardware-setup] for more recommendations.

The most stable production environment is the Linux operating system on amd64 architecture; see [supported platform][supported-platform] for more.

#### Why an odd number of cluster members?

An etcd cluster needs a majority of nodes, a quorum, to agree on updates to the cluster state. For a cluster with n members, quorum is (n/2)+1. For any odd-sized cluster, adding one node will always increase the number of nodes necessary for quorum. Although adding a node to an odd-sized cluster appears better since there are more machines, the fault tolerance is worse since exactly the same number of nodes may fail without losing quorum but there are more nodes that can fail. If the cluster is in a state where it can't tolerate any more failures, adding a node before removing nodes is dangerous because if the new node fails to register with the cluster (e.g., the address is misconfigured), quorum will be permanently lost.

#### What is the maximum cluster size?

Theoretically, there is no hard limit. However, an etcd cluster probably should have no more than seven nodes. [Google Chubby lock service][chubby], similar to etcd and widely deployed within Google for many years, suggests running five nodes. A 5-member etcd cluster can tolerate two member failures, which is enough in most cases. Although larger clusters provide better fault tolerance, the write performance suffers because data must be replicated across more machines.

#### What is failure tolerance?

An etcd cluster operates so long as a member quorum can be established. If quorum is lost through transient network failures (e.g., partitions), etcd automatically and safely resumes once the network recovers and restores quorum; Raft enforces cluster consistency. For power loss, etcd persists the Raft log to disk; etcd replays the log to the point of failure and resumes cluster participation. For permanent hardware failure, the node may be removed from the cluster through [runtime reconfiguration][runtime reconfiguration].

It is recommended to have an odd number of members in a cluster. An odd-size cluster tolerates the same number of failures as an even-size cluster but with fewer nodes. The difference can be seen by comparing even and odd sized clusters:

| Cluster Size | Majority | Failure Tolerance |
|:-:|:-:|:-:|
| 1 | 1 | 0 |
| 2 | 2 | 0 |
| 3 | 2 | 1 |
| 4 | 3 | 1 |
| 5 | 3 | 2 |
| 6 | 4 | 2 |
| 7 | 4 | 3 |
| 8 | 5 | 3 |
| 9 | 5 | 4 |

Adding a member to bring the size of cluster up to an even number doesn't buy additional fault tolerance. Likewise, during a network partition, an odd number of members guarantees that there will always be a majority partition that can continue to operate and be the source of truth when the partition ends.

#### Does etcd work in cross-region or cross data center deployments?

Deploying etcd across regions improves etcd's fault tolerance since members are in separate failure domains. The cost is higher consensus request latency from crossing data center boundaries. Since etcd relies on a member quorum for consensus, the latency from crossing data centers will be somewhat pronounced because at least a majority of cluster members must respond to consensus requests. Additionally, cluster data must be replicated across all peers, so there will be bandwidth cost as well.

With longer latencies, the default etcd configuration may cause frequent elections or heartbeat timeouts. See [tuning] for adjusting timeouts for high latency deployments.

### Operation

#### How do I back up an etcd cluster?

etcdctl provides a `snapshot` command to create backups. See [backup][backup] for more details.
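
A minimal sketch of taking and checking a snapshot (the file name is illustrative):

```bash
# save a point-in-time snapshot of the keyspace
$ etcdctl snapshot save backup.db

# verify the snapshot and print its basic stats
$ etcdctl snapshot status backup.db
```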

#### Should I add a member before removing an unhealthy member?

When replacing an etcd node, it's important to remove the member first and then add its replacement.

etcd employs distributed consensus based on a quorum model; (n+1)/2 members, a majority, must agree on a proposal before it can be committed to the cluster. These proposals include key-value updates and membership changes. This model totally avoids any possibility of split brain inconsistency. The downside is permanent quorum loss is catastrophic.

How this applies to membership: If a 3-member cluster has 1 downed member, it can still make forward progress because the quorum is 2 and 2 members are still live. However, adding a new member to a 3-member cluster will increase the quorum to 3 because 3 votes are required for a majority of 4 members. Since the quorum increased, this extra member buys nothing in terms of fault tolerance; the cluster is still one node failure away from being unrecoverable.

Additionally, that new member is risky because it may turn out to be misconfigured or incapable of joining the cluster. In that case, there's no way to recover quorum because the cluster has two members down and two members up, but needs three votes to change membership to undo the botched membership addition. etcd will by default reject member add attempts that could take down the cluster in this manner.

On the other hand, if the downed member is removed from cluster membership first, the number of members becomes 2 and the quorum remains at 2. Following that removal by adding a new member will also keep the quorum steady at 2. So, even if the new node can't be brought up, it's still possible to remove the new member through quorum on the remaining live members. The remove-then-add order is sketched below.
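
A sketch of that order with etcdctl (the member ID, name, and peer URL are illustrative):

```bash
# remove the downed member first; quorum stays at 2 of 2
$ etcdctl member remove 8211f1d0f64f3269

# then add its replacement
$ etcdctl member add infra4 --peer-urls=http://10.0.1.14:2380
```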

#### Why won't etcd accept my membership changes?

etcd sets `strict-reconfig-check` in order to reject reconfiguration requests that would cause quorum loss. Abandoning quorum is really risky (especially when the cluster is already unhealthy). Although it may be tempting to disable quorum checking if there's quorum loss to add a new member, this could lead to full fledged cluster inconsistency. For many applications, this will make the problem even worse ("disk geometry corruption" being a candidate for most terrifying).

#### Why does etcd lose its leader from disk latency spikes?

This is intentional; disk latency is part of leader liveness. Suppose the cluster leader takes a minute to fsync a raft log update to disk, but the etcd cluster has a one second election timeout. Even though the leader can process network messages within the election interval (e.g., send heartbeats), it's effectively unavailable because it can't commit any new proposals; it's waiting on the slow disk. If the cluster frequently loses its leader due to disk latencies, try [tuning][tuning] the disk settings or etcd time parameters.

#### What does the etcd warning "request ignored (cluster ID mismatch)" mean?
|
||||
|
||||
Every new etcd cluster generates a new cluster ID based on the initial cluster configuration and a user-provided unique `initial-cluster-token` value. By having unique cluster ID's, etcd is protected from cross-cluster interaction which could corrupt the cluster.
|
||||
|
||||
Usually this warning happens after tearing down an old cluster, then reusing some of the peer addresses for the new cluster. If any etcd process from the old cluster is still running it will try to contact the new cluster. The new cluster will recognize a cluster ID mismatch, then ignore the request and emit this warning. This warning is often cleared by ensuring peer addresses among distinct clusters are disjoint.
|
||||
|
||||
#### What does "mvcc: database space exceeded" mean and how do I fix it?
|
||||
|
||||
The [multi-version concurrency control][api-mvcc] data model in etcd keeps an exact history of the keyspace. Without periodically compacting this history (e.g., by setting `--auto-compaction`), etcd will eventually exhaust its storage space. If etcd runs low on storage space, it raises a space quota alarm to protect the cluster from further writes. So long as the alarm is raised, etcd responds to write requests with the error `mvcc: database space exceeded`.
|
||||
|
||||
To recover from the low space quota alarm:
|
||||
|
||||
1. [Compact][maintenance-compact] etcd's history.
|
||||
2. [Defragment][maintenance-defragment] every etcd endpoint.
|
||||
3. [Disarm][maintenance-disarm] the alarm.
|
||||
|
||||
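
A minimal sketch of those three steps with etcdctl (the compaction revision is illustrative):

```bash
# 1. compact away all history up to revision 15
$ etcdctl compact 15

# 2. defragment every etcd endpoint to reclaim the freed space
$ etcdctl defrag

# 3. disarm the space quota alarm
$ etcdctl alarm disarm
```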

### Performance

#### How should I benchmark etcd?

Try the [benchmark] tool. Current [benchmark results][benchmark-result] are available for comparison.

#### What does the etcd warning "apply entries took too long" mean?
|
||||
|
||||
After a majority of etcd members agree to commit a request, each etcd server applies the request to its data store and persists the result to disk. Even with a slow mechanical disk or a virtualized network disk, such as Amazon’s EBS or Google’s PD, applying a request should normally take fewer than 50 milliseconds. If the average apply duration exceeds 100 milliseconds, etcd will warn that entries are taking too long to apply.
|
||||
|
||||
Usually this issue is caused by a slow disk. The disk could be experiencing contention among etcd and other applications, or the disk is too simply slow (e.g., a shared virtualized disk). To rule out a slow disk from causing this warning, monitor [backend_commit_duration_seconds][backend_commit_metrics] (p99 duration should be less than 25ms) to confirm the disk is reasonably fast. If the disk is too slow, assigning a dedicated disk to etcd or using faster disk will typically solve the problem.
|
||||
|
||||
The second most common cause is CPU starvation. If monitoring of the machine’s CPU usage shows heavy utilization, there may not be enough compute capacity for etcd. Moving etcd to dedicated machine, increasing process resource isolation cgroups, or renicing the etcd server process into a higher priority can usually solve the problem.
|
||||
|
||||
Expensive user requests which access too many keys (e.g., fetching the entire keyspace) can also cause long apply latencies. Accessing fewer than a several hundred keys per request, however, should always be performant.
|
||||
|
||||
If none of the above suggestions clear the warnings, please [open an issue][new_issue] with detailed logging, monitoring, metrics and optionally workload information.
|
||||
|
||||
#### What does the etcd warning "failed to send out heartbeat on time" mean?
|
||||
|
||||
etcd uses a leader-based consensus protocol for consistent data replication and log execution. Cluster members elect a single leader, all other members become followers. The elected leader must periodically send heartbeats to its followers to maintain its leadership. Followers infer leader failure if no heartbeats are received within an election interval and trigger an election. If a leader doesn’t send its heartbeats in time but is still running, the election is spurious and likely caused by insufficient resources. To catch these soft failures, if the leader skips two heartbeat intervals, etcd will warn it failed to send a heartbeat on time.
|
||||
|
||||
Usually this issue is caused by a slow disk. Before the leader sends heartbeats attached with metadata, it may need to persist the metadata to disk. The disk could be experiencing contention among etcd and other applications, or the disk is too simply slow (e.g., a shared virtualized disk). To rule out a slow disk from causing this warning, monitor [wal_fsync_duration_seconds][wal_fsync_duration_seconds] (p99 duration should be less than 10ms) to confirm the disk is reasonably fast. If the disk is too slow, assigning a dedicated disk to etcd or using faster disk will typically solve the problem.
|
||||
|
||||
The second most common cause is CPU starvation. If monitoring of the machine’s CPU usage shows heavy utilization, there may not be enough compute capacity for etcd. Moving etcd to dedicated machine, increasing process resource isolation with cgroups, or renicing the etcd server process into a higher priority can usually solve the problem.
|
||||
|
||||
A slow network can also cause this issue. If network metrics among the etcd machines shows long latencies or high drop rate, there may not be enough network capacity for etcd. Moving etcd members to a less congested network will typically solve the problem. However, if the etcd cluster is deployed across data centers, long latency between members is expected. For such deployments, tune the `heartbeat-interval` configuration to roughly match the round trip time between the machines, and the `election-timeout` configuration to be at least 5 * `heartbeat-interval`. See [tuning documentation][tuning] for detailed information.
|
||||
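
For example, members separated by a ~50ms round trip might follow that guidance with flags along these lines (values illustrative; both flags take milliseconds):

```bash
$ etcd --heartbeat-interval=50 --election-timeout=250
```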

If none of the above suggestions clear the warnings, please [open an issue][new_issue] with detailed logging, monitoring, metrics and optionally workload information.

#### What does the etcd warning "snapshotting is taking more than x seconds to finish ..." mean?

etcd sends a snapshot of its complete key-value store to refresh slow followers and for [backups][backup]. Slow snapshot transfer times increase MTTR; if the cluster is ingesting data with high throughput, slow followers may livelock by needing a new snapshot before finishing receiving a snapshot. To catch slow snapshot performance, etcd warns when sending a snapshot takes more than thirty seconds and exceeds the expected transfer time for a 1Gbps connection.

[hardware-setup]: ./op-guide/hardware.md
[supported-platform]: ./op-guide/supported-platform.md
[wal_fsync_duration_seconds]: ./metrics.md#disk
[tuning]: ./tuning.md
[new_issue]: https://github.com/coreos/etcd/issues/new
[backend_commit_metrics]: ./metrics.md#disk
[raft]: https://raft.github.io/raft.pdf
[backup]: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/recovery.md#snapshotting-the-keyspace
[chubby]: http://static.googleusercontent.com/media/research.google.com/en//archive/chubby-osdi06.pdf
[runtime reconfiguration]: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/runtime-configuration.md
[benchmark]: https://github.com/coreos/etcd/tree/master/tools/benchmark
[benchmark-result]: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/performance.md
[api-mvcc]: learning/api.md#revisions
[maintenance-compact]: op-guide/maintenance.md#history-compaction
[maintenance-defragment]: op-guide/maintenance.md#defragmentation
[maintenance-disarm]: ../etcdctl/README.md#alarm-disarm

- [etcdtool](https://github.com/mickep76/etcdtool) - Export/Import/Edit etcd directory as JSON/YAML/TOML and Validate directory using JSON schema
- [etcd-rest](https://github.com/mickep76/etcd-rest) - Create generic REST API in Go using etcd as a backend with validation using JSON schema
- [etcdsh](https://github.com/kamilhark/etcdsh) - A command line client with support of command history and tab completion. Supports v2
- [etcdloadtest](https://github.com/sinsharat/etcdloadtest) - A command line load test client for etcd version 3.0 and above.

**Go libraries**

- [etcd/clientv3](https://github.com/coreos/etcd/blob/master/clientv3) - the officially maintained Go client for v3
- [etcd/client](https://github.com/coreos/etcd/blob/master/client) - the officially maintained Go client for v2
- [go-etcd](https://github.com/coreos/go-etcd) - the deprecated official client. May be useful for older (<2.0.0) versions of etcd.
- [encWrapper](https://github.com/lumjjb/etcd/tree/enc_wrapper/clientwrap/encwrapper) - encWrapper is an encryption wrapper for the etcd client Keys API/KV.

**Java libraries**

- [coreos/jetcd](https://github.com/coreos/jetcd) - Supports v3
- [boonproject/etcd](https://github.com/boonproject/boon/blob/master/etcd/README.md) - Supports v2, Async/Sync and waits
- [justinsb/jetcd](https://github.com/justinsb/jetcd)
- [diwakergupta/jetcd](https://github.com/diwakergupta/jetcd) - Supports v2

**Scala libraries**

- [maciej/etcd-client](https://github.com/maciej/etcd-client) - Supports v2. Akka HTTP-based fully async client
- [eiipii/etcdhttpclient](https://bitbucket.org/eiipii/etcdhttpclient) - Supports v2. Async HTTP client based on Netty and Scala Futures.

**Python libraries**

- [kragniz/python-etcd3](https://github.com/kragniz/python-etcd3) - Work in progress client for v3
- [jplana/python-etcd](https://github.com/jplana/python-etcd) - Supports v2
- [russellhaering/txetcd](https://github.com/russellhaering/txetcd) - a Twisted Python library
- [cholcombe973/autodock](https://github.com/cholcombe973/autodock) - A docker deployment automation tool
- [lisael/aioetcd](https://github.com/lisael/aioetcd) - (Python 3.4+) Asyncio coroutines client (Supports v2)
- [txaio-etcd](https://github.com/crossbario/txaio-etcd) - Asynchronous etcd v3-only client library for Twisted (today) and asyncio (future)
- [dims/etcd3-gateway](https://github.com/dims/etcd3-gateway) - etcd v3 API library using the HTTP grpc gateway

**Node libraries**

**Ruby libraries**

- [iconara/etcd-rb](https://github.com/iconara/etcd-rb)
- [jpfuentes2/etcd-ruby](https://github.com/jpfuentes2/etcd-ruby)
- [ranjib/etcd-ruby](https://github.com/ranjib/etcd-ruby) - Supports v2
- [davissp14/etcdv3-ruby](https://github.com/davissp14/etcdv3-ruby) - Supports v3

**C libraries**

- [apache/celix/etcdlib](https://github.com/apache/celix/tree/develop/etcdlib) - Supports v2
- [jdarcy/etcd-api](https://github.com/jdarcy/etcd-api) - Supports v2
- [shafreeck/cetcd](https://github.com/shafreeck/cetcd) - Supports v2

**C++ libraries**

- [edwardcapriolo/etcdcpp](https://github.com/edwardcapriolo/etcdcpp) - Supports v2
- [suryanathan/etcdcpp](https://github.com/suryanathan/etcdcpp) - Supports v2 (with waits)
- [nokia/etcd-cpp-api](https://github.com/nokia/etcd-cpp-api) - Supports v2
- [nokia/etcd-cpp-apiv3](https://github.com/nokia/etcd-cpp-apiv3) - Supports v3

**Clojure libraries**

**PHP Libraries**

- [linkorb/etcd-php](https://github.com/linkorb/etcd-php)
- [activecollab/etcd](https://github.com/activecollab/etcd)

**Haskell libraries**

**R libraries**

- [ropensci/etseed](https://github.com/ropensci/etseed)

**Nim libraries**

- [etcd_client](https://github.com/FedericoCeratto/nim-etcd-client)

**Tcl libraries**

- [efrecon/etcd-tcl](https://github.com/efrecon/etcd-tcl) - Supports v2, except wait.

**Rust libraries**

- [jimmycuadra/rust-etcd](https://github.com/jimmycuadra/rust-etcd) - Supports v2

**Gradle Plugins**

- [gradle-etcd-rest-plugin](https://github.com/cdancy/gradle-etcd-rest-plugin) - Supports v2

**Projects using etcd**

- [apache/celix](https://github.com/apache/celix) - an implementation of the OSGi specification adapted to C and C++
- [binocarlos/yoda](https://github.com/binocarlos/yoda) - etcd + ZeroMQ
- [blox/blox](https://github.com/blox/blox) - a collection of open source projects for container management and orchestration with AWS ECS
- [calavera/active-proxy](https://github.com/calavera/active-proxy) - HTTP Proxy configured with etcd
- [chain/chain](https://github.com/chain/chain) - software designed to operate and connect to highly scalable permissioned blockchain networks
- [derekchiang/etcdplus](https://github.com/derekchiang/etcdplus) - A set of distributed synchronization primitives built upon etcd
- [go-discover](https://github.com/flynn/go-discover) - service discovery in Go
- [gleicon/goreman](https://github.com/gleicon/goreman/tree/etcd) - Branch of the Go Foreman clone with etcd support
- [mattn/etcdenv](https://github.com/mattn/etcdenv) - "env" shebang with etcd integration
- [kelseyhightower/confd](https://github.com/kelseyhightower/confd) - Manage local app config files using templates and data from etcd
- [configdb](https://git.autistici.org/ai/configdb/tree/master) - A REST relational abstraction on top of arbitrary database backends, aimed at storing configs and inventories.
- [scrz](https://github.com/scrz/scrz) - Container manager, stores configuration in etcd.
- [fleet](https://github.com/coreos/fleet) - Distributed init system
- [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) - Container cluster manager introduced by Google.
- [mailgun/vulcand](https://github.com/mailgun/vulcand) - HTTP proxy that uses etcd as a configuration backend.
- [lytics/metafora](https://github.com/lytics/metafora) - Go distributed task library
- [ryandoyle/nss-etcd](https://github.com/ryandoyle/nss-etcd) - A GNU libc NSS module for resolving names from etcd.
- [Gru](https://github.com/dnaeon/gru) - Orchestration made easy with Go
- [Vitess](http://vitess.io/) - Vitess is a database clustering system for horizontal scaling of MySQL.

# etcd3 API

This document is meant to give an overview of the etcd3 API's central design. It is by no means all encompassing, but intended to focus on the basic ideas needed to understand etcd without the distraction of less common API calls. All etcd3 API's are defined in [gRPC services][grpc-service], which categorize remote procedure calls (RPCs) understood by the etcd server. A full listing of all etcd RPCs are documented in markdown in the [gRPC API listing][grpc-api].

## gRPC Services

Every API request sent to an etcd server is a gRPC remote procedure call. RPCs in etcd3 are categorized based on functionality into services.

Services important for dealing with etcd's key space include:

* KV - Creates, updates, fetches, and deletes key-value pairs.
* Watch - Monitors changes to keys.
* Lease - Primitives for consuming client keep-alive messages.

Services which manage the cluster itself include:

* Auth - Role based authentication mechanism for authenticating users.
* Cluster - Provides membership information and configuration facilities.
* Maintenance - Takes recovery snapshots, defragments the store, and returns per-member status information.

### Requests and Responses

All RPCs in etcd3 follow the same format. Each RPC has a function `Name` which takes `NameRequest` as an argument and returns `NameResponse` as a response. For example, here is the `Range` RPC description:

```protobuf
service KV {
  Range(RangeRequest) returns (RangeResponse)
  ...
}
```

### Response header

All Responses from etcd API have an attached response header which includes cluster metadata for the response:

```proto
message ResponseHeader {
  uint64 cluster_id = 1;
  uint64 member_id = 2;
  int64 revision = 3;
  uint64 raft_term = 4;
}
```

* Cluster_ID - the ID of the cluster generating the response.
* Member_ID - the ID of the member generating the response.
* Revision - the revision of the key-value store when generating the response.
* Raft_Term - the Raft term of the member when generating the response.

An application may read the Cluster_ID (Member_ID) field to ensure it is communicating with the intended cluster (member).

Applications can use `Raft_Term` to detect when the cluster completes a new leader election.

## Key-Value API

The Key-Value API manipulates key-value pairs stored inside etcd. The majority of requests made to etcd are usually key-value requests.

### Key-Value pair

A key-value pair is the smallest unit that the key-value API can manipulate. Each key-value pair has a number of fields, defined in [protobuf format][kv-proto]:

```protobuf
message KeyValue {
  bytes key = 1;
  int64 create_revision = 2;
  int64 mod_revision = 3;
  int64 version = 4;
  bytes value = 5;
  int64 lease = 6;
}
```

* Key - key in bytes. An empty key is not allowed.
* Value - value in bytes.
* Version - version of the key. A deletion resets the version to zero and any modification of the key increases its version.
* Create_Revision - revision of the last creation on the key.
* Mod_Revision - revision of the last modification on the key.
* Lease - the ID of the lease attached to the key. If lease is 0, then no lease is attached to the key.

In addition to just the key and value, etcd attaches additional revision metadata as part of the key message. This revision information orders keys by time of creation and modification, which is useful for managing concurrency for distributed synchronization. The etcd client's [distributed shared locks][locks] use the creation revision to wait for lock ownership. Similarly, the modification revision is used for detecting [software transactional memory][STM] read set conflicts and waiting on [leader election][elections] updates.

#### Revisions

etcd maintains a 64-bit cluster-wide counter, the store revision, that is incremented each time the key space is modified. The revision serves as a global logical clock, sequentially ordering all updates to the store. The change represented by a new revisions is incremental; the data associated with a revision is the data that changed the store. Internally, a new revision means writing the changes to the backend's B+tree, keyed by the incremented revision.
|
||||
|
||||
Revisions become more valuable when taking considering etcd3's [multi-version concurrency control][mvcc] backend. The MVCC model means that the key-value store can be viewed from past revisions since historical key revisions are retained. The retention policy for this history can be configured by cluster administrators for fine-grained storage management; usually etcd3 discards old revisions of keys on a timer. A typical etcd3 cluster retains superseded key data for hours. This also buys reliable handling for long client disconnection, not just transient network disruptions: watchers simply resume from the last observed historical revision. Similarly, to read from the store at a particular point-in-time, read requests can be tagged with a revision to return keys from a view of the key space at the point in time that revision was committed.
|
||||
|
||||
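To make the historical read concrete, here is a hedged sketch using the Go `clientv3` package; the import path, endpoint, and key names are illustrative assumptions, not anything prescribed by this reference:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // assumed endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Write twice; each Put bumps the store revision.
	if _, err := cli.Put(ctx, "foo", "v1"); err != nil {
		log.Fatal(err)
	}
	resp, err := cli.Put(ctx, "foo", "v2")
	if err != nil {
		log.Fatal(err)
	}

	// Read "foo" as of the revision before the second Put (assuming no
	// concurrent writers); the MVCC backend serves the historical value
	// ("v1") if it has not been compacted away.
	old, err := cli.Get(ctx, "foo", clientv3.WithRev(resp.Header.Revision-1))
	if err != nil {
		log.Fatal(err) // a compacted revision returns a compaction error
	}
	for _, kv := range old.Kvs {
		fmt.Printf("%s at revision %d: %s\n", kv.Key, resp.Header.Revision-1, kv.Value)
	}
}
```

The historical read only succeeds while the requested revision is retained; once compaction discards it, the same call fails.
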
#### Key ranges

The etcd3 data model indexes all keys over a flat binary key space. This differs from other key-value store systems that use a hierarchical system of organizing keys into directories. Instead of listing keys by directory, keys are listed by key intervals `[a, b)`.

These intervals are often referred to as "ranges" in etcd3. Operations over ranges are more powerful than operations on directories. Like a hierarchical store, intervals support single key lookups via `[a, a+1)` (e.g., `['a', 'a\x00')` looks up 'a') and directory lookups by encoding keys by directory depth. In addition to those operations, intervals can also encode prefixes; for example the interval `['a', 'b')` looks up all keys prefixed by the string 'a'.

By convention, ranges for a request are denoted by the fields `key` and `range_end`. The `key` field is the first key of the range and should be non-empty. The `range_end` is the key following the last key of the range. If `range_end` is not given or empty, the range is defined to contain only the key argument. If `range_end` is `key` plus one (e.g., "aa"+1 == "ab", "a\xff"+1 == "b"), then the range represents all keys prefixed with key. If both `key` and `range_end` are '\0', then the range represents all keys. If `range_end` is '\0', the range is all keys greater than or equal to the key argument.

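As a hedged illustration of the interval convention, the Go `clientv3` package exposes the key-plus-one computation as `clientv3.GetPrefixRangeEnd`:

```go
package main

import (
	"fmt"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	// A single key 'a' is the interval ["a", "a\x00").
	fmt.Printf("single key: [%q, %q)\n", "a", "a\x00")

	// All keys prefixed by 'a' form ["a", "b"): range_end is key plus one.
	fmt.Printf("prefix \"a\": range_end %q\n", clientv3.GetPrefixRangeEnd("a"))

	// "a\xff"+1 carries into the previous byte, giving "b".
	fmt.Printf("prefix \"a\\xff\": range_end %q\n", clientv3.GetPrefixRangeEnd("a\xff"))
}
```
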
### Range

Keys are fetched from the key-value store using the `Range` API call, which takes a `RangeRequest`:

```protobuf
message RangeRequest {
  enum SortOrder {
    NONE = 0; // default, no sorting
    ASCEND = 1; // lowest target value first
    DESCEND = 2; // highest target value first
  }
  enum SortTarget {
    KEY = 0;
    VERSION = 1;
    CREATE = 2;
    MOD = 3;
    VALUE = 4;
  }

  bytes key = 1;
  bytes range_end = 2;
  int64 limit = 3;
  int64 revision = 4;
  SortOrder sort_order = 5;
  SortTarget sort_target = 6;
  bool serializable = 7;
  bool keys_only = 8;
  bool count_only = 9;
  int64 min_mod_revision = 10;
  int64 max_mod_revision = 11;
  int64 min_create_revision = 12;
  int64 max_create_revision = 13;
}
```

||||
|
||||
* Key, Range_End - The key range to fetch.
|
||||
* Limit - the maximum number of keys returned for the request. When limit is set to 0, it is treated as no limit.
|
||||
* Revision - the point-in-time of the key-value store to use for the range. If revision is less or equal to zero, the range is over the latest key-value store If the revision is compacted, ErrCompacted is returned as a response.
|
||||
* Sort_Order - the ordering for sorted requests.
|
||||
* Sort_Target - the key-value field to sort.
|
||||
* Serializable - sets the range request to use serializable member-local reads. By default, Range is linearizable; it reflects the current consensus of the cluster. For better performance and availability, in exchange for possible stale reads, a serializable range request is served locally without needing to reach consensus with other nodes in the cluster.
|
||||
* Keys_Only - return only the keys and not the values.
|
||||
* Count_Only - return only the count of the keys in the range.
|
||||
* Min_Mod_Revision - the lower bound for key mod revisions; filters out lesser mod revisions.
|
||||
* Max_Mod_Revision - the upper bound for key mod revisions; filters out greater mod revisions.
|
||||
* Min_Create_Revision - the lower bound for key create revisions; filters out lesser create revisions.
|
||||
* Max_Create_Revision - the upper bound for key create revisions; filters out greater create revisions.
|
||||
|
||||
The client receives a `RangeResponse` message from the `Range` call:
|
||||
|
||||
```protobuf
|
||||
message RangeResponse {
|
||||
ResponseHeader header = 1;
|
||||
repeated mvccpb.KeyValue kvs = 2;
|
||||
bool more = 3;
|
||||
int64 count = 4;
|
||||
}
|
||||
```
|
||||
|
||||
* Kvs - the list of key-value pairs matched by the range request. When `Count_Only` is set, `Kvs` is empty.
|
||||
* More - indicates if there are more keys to return in the requested range if `limit` is set.
|
||||
* Count - the total number of keys satisfying the range request.
|
||||
|
||||
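As a hedged illustration of how these fields are set in practice, the Go `clientv3` client maps `Get` options onto `RangeRequest` fields (`WithRange` → `range_end`, `WithLimit` → `limit`, `WithSort` → `sort_order`/`sort_target`, `WithSerializable` → `serializable`); the client is assumed to be constructed as in the earlier revision sketch, and the key names are made up:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

// rangeExample fetches the interval ["key", "key5"): at most 3 pairs,
// sorted by key in descending order, served by the local member
// (serializable, so possibly stale).
func rangeExample(cli *clientv3.Client) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := cli.Get(ctx, "key",
		clientv3.WithRange("key5"),
		clientv3.WithLimit(3),
		clientv3.WithSort(clientv3.SortByKey, clientv3.SortDescend),
		clientv3.WithSerializable())
	if err != nil {
		log.Fatal(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s -> %s (mod_revision=%d)\n", kv.Key, kv.Value, kv.ModRevision)
	}
	// More reports whether the limit cut the result short; Count is the
	// total number of keys in the interval.
	fmt.Println("more:", resp.More, "count:", resp.Count)
}
```
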
### Put

Keys are saved into the key-value store by issuing a `Put` call, which takes a `PutRequest`:

```protobuf
message PutRequest {
  bytes key = 1;
  bytes value = 2;
  int64 lease = 3;
  bool prev_kv = 4;
  bool ignore_value = 5;
  bool ignore_lease = 6;
}
```

* Key - the name of the key to put into the key-value store.
* Value - the value, in bytes, to associate with the key in the key-value store.
* Lease - the lease ID to associate with the key in the key-value store. A lease value of 0 indicates no lease.
* Prev_Kv - when set, responds with the key-value pair data before the update from this `Put` request.
* Ignore_Value - when set, update the key without changing its current value. Returns an error if the key does not exist.
* Ignore_Lease - when set, update the key without changing its current lease. Returns an error if the key does not exist.

The client receives a `PutResponse` message from the `Put` call:

```protobuf
message PutResponse {
  ResponseHeader header = 1;
  mvccpb.KeyValue prev_kv = 2;
}
```

* Prev_Kv - the key-value pair overwritten by the `Put`, if `Prev_Kv` was set in the `PutRequest`.

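For illustration, a hedged `clientv3` sketch of a put that reports the overwritten pair, mirroring `prev_kv` in the request and response; the client construction is assumed as before:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

// putWithPrev writes key=val and prints the pair it overwrote, if any.
func putWithPrev(cli *clientv3.Client, key, val string) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := cli.Put(ctx, key, val, clientv3.WithPrevKV())
	if err != nil {
		log.Fatal(err)
	}
	// PrevKv is nil when the key did not previously exist.
	if resp.PrevKv != nil {
		fmt.Printf("overwrote %s=%s\n", resp.PrevKv.Key, resp.PrevKv.Value)
	}
}
```
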
### Delete Range

Ranges of keys are deleted using the `DeleteRange` call, which takes a `DeleteRangeRequest`:

```protobuf
message DeleteRangeRequest {
  bytes key = 1;
  bytes range_end = 2;
  bool prev_kv = 3;
}
```

* Key, Range_End - The key range to delete.
* Prev_Kv - when set, return the contents of the deleted key-value pairs.

The client receives a `DeleteRangeResponse` message from the `DeleteRange` call:

```protobuf
message DeleteRangeResponse {
  ResponseHeader header = 1;
  int64 deleted = 2;
  repeated mvccpb.KeyValue prev_kvs = 3;
}
```

* Deleted - number of keys deleted.
* Prev_Kvs - a list of all key-value pairs deleted by the DeleteRange operation.

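A hedged sketch of deleting a prefix interval with `clientv3`, returning the removed pairs; the prefix name is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

// deletePrefix removes every key under prefix and prints what was
// deleted, mirroring DeleteRangeRequest.prev_kv.
func deletePrefix(cli *clientv3.Client, prefix string) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := cli.Delete(ctx, prefix, clientv3.WithPrefix(), clientv3.WithPrevKV())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("deleted:", resp.Deleted)
	for _, kv := range resp.PrevKvs {
		fmt.Printf("removed %s=%s\n", kv.Key, kv.Value)
	}
}
```
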
### Transaction

A transaction is an atomic If/Then/Else construct over the key-value store. It provides a primitive for grouping requests together in atomic blocks (i.e., then/else) whose execution is guarded (i.e., if) based on the contents of the key-value store. Transactions can be used for protecting keys from unintended concurrent updates, building compare-and-swap operations, and developing higher-level concurrency control.

A transaction can atomically process multiple requests in a single request. For modifications to the key-value store, this means the store's revision is incremented only once for the transaction and all events generated by the transaction will have the same revision. However, modifications to the same key multiple times within a single transaction are forbidden.

All transactions are guarded by a conjunction of comparisons, similar to an "If" statement. Each comparison checks a single key in the store. It may check for the absence or presence of a value, compare with a given value, or check a key's revision or version. Two different comparisons may apply to the same or different keys. All comparisons are applied atomically; if all comparisons are true, the transaction is said to succeed and etcd applies the transaction's then / `success` request block, otherwise it is said to fail and applies the else / `failure` request block.

Each comparison is encoded as a `Compare` message:

```protobuf
message Compare {
  enum CompareResult {
    EQUAL = 0;
    GREATER = 1;
    LESS = 2;
    NOT_EQUAL = 3;
  }
  enum CompareTarget {
    VERSION = 0;
    CREATE = 1;
    MOD = 2;
    VALUE = 3;
  }
  CompareResult result = 1;
  // target is the key-value field to inspect for the comparison.
  CompareTarget target = 2;
  // key is the subject key for the comparison operation.
  bytes key = 3;
  oneof target_union {
    int64 version = 4;
    int64 create_revision = 5;
    int64 mod_revision = 6;
    bytes value = 7;
  }
}
```

* Result - the kind of logical comparison operation (e.g., equal, less than, etc).
* Target - the key-value field to be compared. Either the key's version, create revision, modification revision, or value.
* Key - the key for the comparison.
* Target_Union - the user-specified data for the comparison.

After processing the comparison block, the transaction applies a block of requests. A block is a list of `RequestOp` messages:

```protobuf
message RequestOp {
  // request is a union of request types accepted by a transaction.
  oneof request {
    RangeRequest request_range = 1;
    PutRequest request_put = 2;
    DeleteRangeRequest request_delete_range = 3;
  }
}
```

* Request_Range - a `RangeRequest`.
* Request_Put - a `PutRequest`. The keys must be unique. It may not share keys with any other Puts or Deletes.
* Request_Delete_Range - a `DeleteRangeRequest`. It may not share keys with any Put or Delete requests.

All together, a transaction is issued with a `Txn` API call, which takes a `TxnRequest`:

```protobuf
message TxnRequest {
  repeated Compare compare = 1;
  repeated RequestOp success = 2;
  repeated RequestOp failure = 3;
}
```

* Compare - A list of predicates representing a conjunction of terms for guarding the transaction.
* Success - A list of requests to process if all compare tests evaluate to true.
* Failure - A list of requests to process if any compare test evaluates to false.

The client receives a `TxnResponse` message from the `Txn` call:

```protobuf
message TxnResponse {
  ResponseHeader header = 1;
  bool succeeded = 2;
  repeated ResponseOp responses = 3;
}
```

* Succeeded - Whether `Compare` evaluated to true or false.
* Responses - A list of responses corresponding to the results from applying the `Success` block if succeeded is true or the `Failure` block if succeeded is false.

The `Responses` list corresponds to the results from the applied `RequestOp` list, with each response encoded as a `ResponseOp`:

```protobuf
message ResponseOp {
  oneof response {
    RangeResponse response_range = 1;
    PutResponse response_put = 2;
    DeleteRangeResponse response_delete_range = 3;
  }
}
```

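To show the guard/then/else shape end to end, here is a hedged compare-and-swap sketch with `clientv3`, building exactly one `Compare` on the key's value, a `success` put, and a `failure` range; the key and values are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

// compareAndSwap atomically replaces oldVal with newVal under key.
func compareAndSwap(cli *clientv3.Client, key, oldVal, newVal string) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := cli.Txn(ctx).
		If(clientv3.Compare(clientv3.Value(key), "=", oldVal)).
		Then(clientv3.OpPut(key, newVal)).
		Else(clientv3.OpGet(key)).
		Commit()
	if err != nil {
		log.Fatal(err)
	}
	if resp.Succeeded {
		fmt.Println("swapped; new revision:", resp.Header.Revision)
		return
	}
	// The guard failed; the failure block returned the current pair.
	for _, kv := range resp.Responses[0].GetResponseRange().Kvs {
		fmt.Printf("current value: %s=%s\n", kv.Key, kv.Value)
	}
}
```
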
## Watch API

The Watch API provides an event-based interface for asynchronously monitoring changes to keys. An etcd3 watch waits for changes to keys by continuously watching from a given revision, either current or historical, and streams key updates back to the client.

### Events

Every change to every key is represented with `Event` messages. An `Event` message provides both the update's data and the type of update:

```protobuf
message Event {
  enum EventType {
    PUT = 0;
    DELETE = 1;
  }
  EventType type = 1;
  KeyValue kv = 2;
  KeyValue prev_kv = 3;
}
```

* Type - The kind of event. A PUT type indicates new data has been stored to the key. A DELETE indicates the key was deleted.
* KV - The KeyValue associated with the event. A PUT event contains the current kv pair. A PUT event with kv.Version=1 indicates the creation of a key. A DELETE event contains the deleted key with its modification revision set to the revision of deletion.
* Prev_KV - The key-value pair for the key from the revision immediately before the event. To save bandwidth, it is only filled out if the watch has explicitly enabled it.

### Watch streams

Watches are long-running requests and use gRPC streams to stream event data. A watch stream is bi-directional; the client writes to the stream to establish watches and reads to receive watch events. A single watch stream can multiplex many distinct watches by tagging events with per-watch identifiers. This multiplexing helps reduce the memory footprint and connection overhead on the core etcd cluster.

Watches make three guarantees about events:
* Ordered - events are ordered by revision; an event will never appear on a watch if it precedes an event in time that has already been posted.
* Reliable - a sequence of events will never drop any subsequence of events; if there are events ordered in time as a < b < c, then if the watch receives events a and c, it is guaranteed to receive b.
* Atomic - a list of events is guaranteed to encompass complete revisions; updates in the same revision over multiple keys will not be split over several lists of events.

A client creates a watch by sending a `WatchCreateRequest` over a stream returned by `Watch`:

```protobuf
message WatchCreateRequest {
  bytes key = 1;
  bytes range_end = 2;
  int64 start_revision = 3;
  bool progress_notify = 4;

  enum FilterType {
    NOPUT = 0;
    NODELETE = 1;
  }
  repeated FilterType filters = 5;
  bool prev_kv = 6;
}
```

* Key, Range_End - The key range to watch.
* Start_Revision - An optional revision for where to inclusively begin watching. If not given, the watch streams events starting after the revision in the watch creation response header. The entire available event history can be watched starting from the last compaction revision.
* Progress_Notify - When set, the watch will periodically receive a WatchResponse with no events, if there are no recent events. It is useful when clients wish to recover a disconnected watcher starting from a recent known revision. The etcd server decides how often to send notifications based on current server load.
* Filters - A list of event types to filter out on the server side.
* Prev_Kv - When set, the watch receives the key-value data from before the event happens. This is useful for knowing what data has been overwritten.

In response to a `WatchCreateRequest` or if there is a new event for some established watch, the client receives a `WatchResponse`:

```protobuf
message WatchResponse {
  ResponseHeader header = 1;
  int64 watch_id = 2;
  bool created = 3;
  bool canceled = 4;
  int64 compact_revision = 5;

  repeated mvccpb.Event events = 11;
}
```

* Watch_ID - the ID of the watch that corresponds to the response.
* Created - set to true if the response is for a create watch request. The client should record the ID and expect to receive events for the watch on the stream. All events sent to the created watcher will have the same watch_id.
* Canceled - set to true if the response is for a cancel watch request. No further events will be sent to the canceled watcher.
* Compact_Revision - set to the minimum historical revision available to etcd if a watcher tries watching at a compacted revision. This happens when creating a watcher at a compacted revision or when the watcher cannot catch up with the progress of the key-value store. The watcher will be canceled; creating new watches with the same start_revision will fail.
* Events - a list of new events in sequence corresponding to the given watch ID.

If the client wishes to stop receiving events for a watch, it issues a `WatchCancelRequest`:

```protobuf
message WatchCancelRequest {
  int64 watch_id = 1;
}
```

* Watch_ID - the ID of the watch to cancel so that no more events are transmitted.

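A hedged `clientv3` watch sketch tying these messages together: the client multiplexes watches over one stream internally, resumes from a historical revision when one is given, and surfaces compaction through `CompactRevision`; the prefix and revision are assumptions:

```go
package main

import (
	"context"
	"fmt"

	"github.com/coreos/etcd/clientv3"
)

// watchPrefix streams events for keys under prefix, starting from
// revision rev (0 means "from now"), mirroring WatchCreateRequest.
func watchPrefix(cli *clientv3.Client, prefix string, rev int64) {
	opts := []clientv3.OpOption{clientv3.WithPrefix(), clientv3.WithPrevKV()}
	if rev > 0 {
		opts = append(opts, clientv3.WithRev(rev))
	}
	for wresp := range cli.Watch(context.Background(), prefix, opts...) {
		if wresp.CompactRevision != 0 {
			// The requested revision was compacted; the watch is canceled.
			fmt.Println("compacted; earliest available revision:", wresp.CompactRevision)
			return
		}
		for _, ev := range wresp.Events {
			fmt.Printf("%s %s=%s (mod_revision=%d)\n",
				ev.Type, ev.Kv.Key, ev.Kv.Value, ev.Kv.ModRevision)
		}
	}
}
```
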
## Lease API

Leases are a mechanism for detecting client liveness. The cluster grants leases with a time-to-live. A lease expires if the etcd cluster does not receive a keep alive within a given TTL period.

To tie leases into the key-value store, each key may be attached to at most one lease. When a lease expires or is revoked, all keys attached to that lease will be deleted. Each expired key generates a delete event in the event history.

### Obtaining leases

Leases are obtained through the `LeaseGrant` API call, which takes a `LeaseGrantRequest`:

```protobuf
message LeaseGrantRequest {
  int64 TTL = 1;
  int64 ID = 2;
}
```

* TTL - the advisory time-to-live, in seconds.
* ID - the requested ID for the lease. If ID is set to 0, etcd will choose an ID.

The client receives a `LeaseGrantResponse` from the `LeaseGrant` call:

```protobuf
message LeaseGrantResponse {
  ResponseHeader header = 1;
  int64 ID = 2;
  int64 TTL = 3;
}
```

* ID - the lease ID for the granted lease.
* TTL - the server-selected time-to-live, in seconds, for the lease.

A lease is revoked with the `LeaseRevoke` API call, which takes a `LeaseRevokeRequest`:

```protobuf
message LeaseRevokeRequest {
  int64 ID = 1;
}
```

* ID - the lease ID to revoke. When the lease is revoked, all attached keys are deleted.

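As a brief hedged sketch, revoking a lease from the Go `clientv3` client is a single call; etcd then deletes the attached keys and emits a delete event for each:

```go
package main

import (
	"context"
	"log"

	"github.com/coreos/etcd/clientv3"
)

// revokeLease revokes the lease with the given ID.
func revokeLease(cli *clientv3.Client, id clientv3.LeaseID) {
	if _, err := cli.Revoke(context.Background(), id); err != nil {
		log.Fatal(err)
	}
}
```
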
### Keep alives

Leases are refreshed using a bi-directional stream created with the `LeaseKeepAlive` API call. When the client wishes to refresh a lease, it sends a `LeaseKeepAliveRequest` over the stream:

```protobuf
message LeaseKeepAliveRequest {
  int64 ID = 1;
}
```

* ID - the lease ID for the lease to keep alive.

The keep alive stream responds with a `LeaseKeepAliveResponse`:

```protobuf
message LeaseKeepAliveResponse {
  ResponseHeader header = 1;
  int64 ID = 2;
  int64 TTL = 3;
}
```

* ID - the lease that was refreshed with a new TTL.
* TTL - the new time-to-live, in seconds, that the lease has remaining.

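Putting the lease calls together, a hedged `clientv3` sketch grants a lease, attaches a key to it, and refreshes it over the keep alive stream; the TTL and key names are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/etcd/clientv3"
)

// putWithLease attaches a key to a new lease and keeps the lease alive,
// mirroring LeaseGrant, PutRequest.lease, and LeaseKeepAlive.
func putWithLease(cli *clientv3.Client, key, val string) {
	ctx := context.Background()

	// Grant a lease with a 10-second advisory TTL.
	grant, err := cli.Grant(ctx, 10)
	if err != nil {
		log.Fatal(err)
	}

	// Attach the key to the lease; it is deleted when the lease expires.
	if _, err := cli.Put(ctx, key, val, clientv3.WithLease(grant.ID)); err != nil {
		log.Fatal(err)
	}

	// Refresh the lease over the keep alive stream until ctx is done.
	kaCh, err := cli.KeepAlive(ctx, grant.ID)
	if err != nil {
		log.Fatal(err)
	}
	for ka := range kaCh {
		fmt.Printf("lease %x refreshed, ttl=%ds\n", ka.ID, ka.TTL)
	}
}
```
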
[elections]: https://github.com/coreos/etcd/blob/master/clientv3/concurrency/election.go
[kv-proto]: https://github.com/coreos/etcd/blob/master/mvcc/mvccpb/kv.proto
[kv-service]: https://github.com/coreos/etcd/blob/master/etcdserver/etcdserverpb/rpc.proto
[response_header]: https://github.com/coreos/etcd/blob/master/etcdserver/etcdserverpb/rpc.proto
[grpc-api]: ../dev-guide/api_reference_v3.md
[grpc-service]: https://github.com/coreos/etcd/blob/master/etcdserver/etcdserverpb/rpc.proto
[locks]: https://github.com/coreos/etcd/blob/master/clientv3/concurrency/mutex.go
[mvcc]: https://en.wikipedia.org/wiki/Multiversion_concurrency_control
[stm]: https://github.com/coreos/etcd/blob/master/clientv3/concurrency/stm.go

@@ -1,6 +1,6 @@

# KV API guarantees

etcd is a consistent and durable key value store with [mini-transaction][txn] support. The key value store is exposed through the KV APIs. etcd tries to ensure the strongest consistency and durability guarantees for a distributed system. This specification enumerates the KV API guarantees made by etcd.

### APIs to consider

@@ -21,7 +21,7 @@ An etcd operation is considered complete when it is committed through consensus,

#### Revision

An etcd operation that modifies the key value store is assigned a single increasing revision. A transaction operation might modify the key value store multiple times, but only one revision is assigned. The revision attribute of a key value pair that was modified by the operation has the same value as the revision of the operation. The revision can be used as a logical clock for the key value store. A key value pair that has a larger revision is modified after a key value pair with a smaller revision. Two key value pairs that have the same revision are modified by an operation "concurrently".

### Guarantees provided

@@ -61,3 +61,4 @@ etcd ensures linearizability for all other operations by default. Linearizability

[strict_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Strict_consistency
[serializable_isolation]: https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable
[Linearizability]: #Linearizability
[txn]: api.md#transactions

Documentation/learning/auth_design.md (new file, 77 lines)

@@ -0,0 +1,77 @@

# etcd v3 authentication design

## Why not reuse the v2 auth system?

The v3 protocol uses gRPC as its transport instead of a RESTful interface like v2. This new protocol provides an opportunity to iterate on and improve the v2 design. For example, v3 auth has connection based authentication, rather than v2's slower per-request authentication. Additionally, v2 auth's semantics tend to be unwieldy in practice with respect to reasoning about consistency, which will be described in the next sections. For v3, there is a well-defined description and implementation of the authentication mechanism which fixes the deficiencies in the v2 auth system.

### Functionality requirements

* Per connection authentication, not per request
* User ID + password based authentication implemented for the gRPC API
* Authentication must be refreshed after auth policy changes
* Its functionality should be as simple and useful as v2
* v3 provides a flat key space, unlike the directory structure of v2. Permission checking will be provided as interval matching.
* It should have stronger consistency guarantees than v2 auth

### Main required changes

* A client must create a dedicated connection only for authentication before sending authenticated requests
* Add permission information (user ID and authorized revision) to the Raft commands (`etcdserverpb.InternalRaftRequest`)
* Every request is permission checked in the state machine layer, rather than the API layer

### Permission metadata consistency

The metadata for auth should be stored and managed in the storage controlled by etcd's Raft protocol, like other data stored in etcd. This is required to avoid sacrificing the availability and consistency of the entire etcd cluster. If reading or writing the metadata (e.g. permission information) needed agreement from every node (more than a quorum), a single node failure could stop the entire cluster. Requiring all nodes to agree at once means that checking ordinary read/write requests cannot be completed if any cluster member is down, even if the cluster has an available quorum. This unanimous scheme ultimately degrades cluster availability; quorum based consensus from raft should suffice since agreement follows from consistent ordering.

The authentication mechanism in the etcd v2 protocol has a tricky part because the metadata consistency should work as above, but does not: each permission check is processed by the etcd member that receives the client request (etcdserver/api/v2http/client.go), including follower members. Therefore, it's possible the check may be based on stale metadata.

This staleness means that auth configuration cannot be reflected as soon as operators execute etcdctl, and there is no way to know how long the stale metadata will be active. Practically, the configuration change is reflected immediately after the command execution. However, in some cases of heavy load, the inconsistent state can be prolonged and it might result in counter-intuitive situations for users and developers. It requires a workaround like this: https://github.com/coreos/etcd/pull/4317#issuecomment-179037582

### Inconsistent permissions are unsafe for linearized requests

Inconsistent authentication state is most serious for writes. Even if an operator disables write on a user, if the write is only ordered with respect to the key value store but not the authentication system, it's possible the write will complete successfully. Without ordering on both the auth store and the key-value store, the system will be susceptible to stale permission attacks.

Therefore, the permission checking logic should be added to the state machine of etcd. Each state machine should check the requests based on its permission information in the apply phase (so the auth information must not be stale).

## Design and implementation

### Authentication

At first, a client must create a gRPC connection only to authenticate its user ID and password. An etcd server will respond with an authentication reply. The response will be an authentication token on success or an error on failure. The client can use its authentication token to present its credentials to etcd when making API requests.

The client connection used to request the authentication token is typically thrown away; it cannot carry the new token's credentials. This is because gRPC doesn't provide a way for adding per RPC credentials after creation of the connection (calling `grpc.Dial()`). Therefore, a client cannot assign a token to its connection that is obtained through the connection. The client needs a new connection for using the token.

#### Notes on the implementation of `Authenticate()` RPC

`Authenticate()` RPC generates an authentication token based on a given user name and password. etcd saves and checks a configured password and a given password using Go's `bcrypt` package. By design, `bcrypt`'s password checking mechanism is computationally expensive, taking nearly 100ms on an ordinary x64 server. Therefore, performing this check in the state machine apply phase would cause performance trouble: the entire etcd cluster could only serve roughly 10 `Authenticate()` requests per second.

For good performance, the v3 auth mechanism checks passwords in etcd's API layer, where it can be parallelized outside of raft. However, this can lead to potential time-of-check/time-of-use (TOCTOU) permission lapses:
1. client A sends a request `Authenticate()`
1. the API layer processes the password checking part of `Authenticate()`
1. another client B sends a request of `ChangePassword()` and the server completes it
1. the state machine layer processes the part of getting a revision number for the `Authenticate()` from A
1. the server returns a success to A
1. now A is authenticated on an obsolete password

To avoid such a situation, the API layer performs *version number validation* based on the revision number of the auth store. During password checking, the API layer saves the revision number of the auth store. After successful password checking, the API layer compares the saved revision number and the latest revision number. If the numbers differ, it means someone else updated the auth metadata, so it retries the checking. With this mechanism, a successful password check based on an obsolete password is avoided.

### Resolving a token in the API layer

After authenticating with `Authenticate()`, a client can create a gRPC connection as it would without auth. In addition to the existing initialization process, the client must associate the token with the newly created connection. `grpc.WithPerRPCCredentials()` provides the functionality for this purpose.

Every authenticated request from the client has a token. The token can be obtained with `grpc.metadata.FromIncomingContext()` on the server side. The server can learn who issued the request and when the user was authorized. The information will be filled in by the API layer in the header (`etcdserverpb.RequestHeader.Username` and `etcdserverpb.RequestHeader.AuthRevision`) of a raft log entry (`etcdserverpb.InternalRaftRequest`).

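For client authors this plumbing is typically hidden; as a hedged sketch, the Go `clientv3` package accepts a username and password in its config and internally calls `Authenticate()` and attaches the token with per-RPC credentials (the endpoint and credentials below are assumptions):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	// The client authenticates with these credentials and then attaches
	// the returned token to each RPC.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // assumed endpoint
		DialTimeout: 5 * time.Second,
		Username:    "myusername", // assumed user
		Password:    "mypassword", // assumed password
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	resp, err := cli.Get(context.Background(), "foo")
	if err != nil {
		log.Fatal(err) // e.g., permission denied without a grant on "foo"
	}
	fmt.Println("kvs:", resp.Kvs)
}
```
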
### Checking permission in the state machine

The auth info in `etcdserverpb.RequestHeader` is checked in the apply phase of the state machine. This step checks that the user is granted permission to the requested keys on the latest revision of the auth store.

### Two types of tokens: simple and JWT

There are two types of tokens: simple and JWT. The simple token isn't designed for production use cases. Its tokens aren't cryptographically signed and servers must statefully track token-user correspondence; it is meant for development testing. JWT tokens should be used for production deployments since they are cryptographically signed and verified. From the implementation perspective, JWT is stateless. A JWT token can include metadata such as username and revision, so servers don't need to remember the correspondence between tokens and the metadata.

## Notes on the difference between KVS models and file system models

etcd v3 is a KVS, not a file system, so permissions can be granted to users in the form of an exact key name or a key range like `["start key", "end key")`. This means it is possible to grant permission on a nonexistent key, and users should take care to avoid unintended grants. In file system like systems (e.g. Chubby or ZooKeeper), an inode like data structure can include the permission information, so granting permission on a nonexistent key is not possible (except in the case of sticky bits).

The etcd v3 model requires multiple lookups of the metadata, unlike file system like systems. The worst case lookup cost is the sum of the user's total granted keys and intervals. The cost cannot be avoided because v3's flat key space is completely different from Unix's file system model (every inode includes permission metadata). Practically the cost won’t be a serious problem because the metadata is small enough to benefit from caching.

@@ -2,15 +2,17 @@

This document defines the various terms used in etcd documentation, command line and source code.

## Alarm

The etcd server raises an alarm whenever the cluster needs operator intervention to remain reliable.

## Authentication

Authentication manages user access permissions for etcd resources.

## Client

A client connects to the etcd cluster to issue service requests such as fetching key-value pairs, writing data, or watching for updates.

## Cluster

@@ -18,6 +20,42 @@ Cluster consists of several members.

The node in each member follows the raft consensus protocol to replicate logs. The cluster receives proposals from members, commits them, and applies them to the local store.

## Compaction

Compaction discards all etcd event history and superseded keys prior to a given revision. It is used to reclaim storage space in the etcd backend database.

## Election

The etcd cluster holds elections among its members to choose a leader as part of the raft consensus protocol.

## Endpoint

A URL pointing to an etcd service or resource.

## Key

A user-defined identifier for storing and retrieving user-defined values in etcd.

## Key range

A set of keys containing either an individual key, a lexical interval for all x such that a < x <= b, or all keys greater than a given key.

## Keyspace

The set of all keys in an etcd cluster.

## Lease

A short-lived renewable contract that deletes keys associated with it on its expiry.

## Member

A logical etcd server that participates in serving an etcd cluster.

## Modification Revision

The first revision to hold the last write to a given key.

## Peer

Peer is another member of the same cluster.

@@ -26,10 +64,34 @@ Peer is another member of the same cluster.

## Proposal

A proposal is a request (for example a write request, or a configuration change request) that needs to go through the raft protocol.

## Quorum

The number of active members needed for consensus to modify the cluster state. etcd requires a member majority to reach quorum.

## Revision

A 64-bit cluster-wide counter that is incremented each time the keyspace is modified.

## Role

A unit of permissions over a set of key ranges which may be granted to a set of users for access control.

## Snapshot

A point-in-time backup of the etcd cluster state.

## Store

The physical storage backing the cluster keyspace.

## Transaction

An atomically executed set of operations. All modified keys in a transaction share the same modification revision.

## Key Version

The number of writes to a key since it was created, starting at 1. The version of a nonexistent or deleted key is 0.

## Watcher

A client opens a watcher to observe updates on a given key range.

Documentation/learning/why.md (new file, 115 lines)

@@ -0,0 +1,115 @@

# etcd versus other key-value stores

The name "etcd" originated from two ideas, the unix "/etc" folder and "d"istributed systems. The "/etc" folder is a place to store configuration data for a single system whereas etcd stores configuration information for large scale distributed systems. Hence, a "d"istributed "/etc" is "etcd".

etcd is designed as a general substrate for large scale distributed systems. These are systems that will never tolerate split-brain operation and are willing to sacrifice availability to achieve this end. etcd stores metadata in a consistent and fault-tolerant way. An etcd cluster is meant to provide key-value storage with best of class stability, reliability, scalability and performance.

Distributed systems use etcd as a consistent key-value store for configuration management, service discovery, and coordinating distributed work. Many [organizations][production-users] use etcd to implement production systems such as container schedulers, service discovery services, and distributed data storage. Common distributed patterns using etcd include [leader election][etcd-etcdctl-elect], [distributed locks][etcd-etcdctl-lock], and monitoring machine liveness.

## Use cases

- Container Linux by CoreOS: Applications running on [Container Linux][container-linux] get automatic, zero-downtime Linux kernel updates. Container Linux uses [locksmith] to coordinate updates. Locksmith implements a distributed semaphore over etcd to ensure only a subset of a cluster is rebooting at any given time.
- [Kubernetes][kubernetes] stores configuration data into etcd for service discovery and cluster management; etcd's consistency is crucial for correctly scheduling and operating services. The Kubernetes API server persists cluster state into etcd. It uses etcd's watch API to monitor the cluster and roll out critical configuration changes.

## Comparison chart

Perhaps etcd already seems like a good fit, but as with all technological decisions, proceed with caution. Please note this documentation is written by the etcd team. Although the ideal is a disinterested comparison of technology and features, the authors’ expertise and biases obviously favor etcd. Use only as directed.

The table below is a handy quick reference for spotting the differences among etcd and its most popular alternatives at a glance. Further commentary and details for each column are in the sections following the table.

| | etcd | ZooKeeper | Consul | NewSQL (Cloud Spanner, CockroachDB, TiDB) |
| --- | --- | --- | --- | --- |
| Concurrency Primitives | [Lock RPCs][etcd-v3lock], [Election RPCs][etcd-v3election], [command line locks][etcd-etcdctl-lock], [command line elections][etcd-etcdctl-elect], [recipes][etcd-recipe] in go | External [curator recipes][curator] in Java | [Native lock API][consul-lock] | [Rare][newsql-leader], if any |
| Linearizable Reads | [Yes][etcd-linread] | No | [Yes][consul-linread] | Sometimes |
| Multi-version Concurrency Control | [Yes][etcd-mvcc] | No | No | Sometimes |
| Transactions | [Field compares, Read, Write][etcd-txn] | [Version checks, Write][zk-txn] | [Field compare, Lock, Read, Write][consul-txn] | SQL-style |
| Change Notification | [Historical and current key intervals][etcd-watch] | [Current keys and directories][zk-watch] | [Current keys and prefixes][consul-watch] | Triggers (sometimes) |
| User permissions | [Role based][etcd-rbac] | [ACLs][zk-acl] | [ACLs][consul-acl] | Varies (per-table [GRANT][cockroach-grant], per-database [roles][spanner-roles]) |
| HTTP/JSON API | [Yes][etcd-json] | No | [Yes][consul-json] | Rarely |
| Membership Reconfiguration | [Yes][etcd-reconfig] | [>3.5.0][zk-reconfig] | [Yes][consul-reconfig] | Yes |
| Maximum reliable database size | Several gigabytes | Hundreds of megabytes (sometimes several gigabytes) | Hundreds of MBs | Terabytes+ |
| Minimum read linearization latency | Network RTT | No read linearization | RTT + fsync | Clock barriers (atomic, NTP) |

### ZooKeeper

ZooKeeper solves the same problem as etcd: distributed system coordination and metadata storage. However, etcd has the luxury of hindsight taken from engineering and operational experience with ZooKeeper’s design and implementation. The lessons learned from ZooKeeper certainly informed etcd’s design, helping it support large scale systems like Kubernetes. The improvements etcd made over ZooKeeper include:

* Dynamic cluster membership reconfiguration
* Stable read/write under high load
* A multi-version concurrency control data model
* Reliable key monitoring which never silently drops events
* Lease primitives decoupling connections from sessions
* APIs for safe distributed shared locks

Furthermore, etcd supports a wide range of languages and frameworks out of the box. Whereas ZooKeeper has its own custom Jute RPC protocol, which is totally unique to ZooKeeper and limits its [supported language bindings][zk-bindings], etcd’s client protocol is built from [gRPC][grpc], a popular RPC framework with language bindings for go, C++, Java, and more. Likewise, gRPC can be serialized into JSON over HTTP, so even general command line utilities like `curl` can talk to it. Since systems can select from a variety of choices, they are built on etcd with native tooling rather than around etcd with a single fixed set of technologies.

When considering features, support, and stability, new applications planning to use ZooKeeper for a consistent key value store would do well to choose etcd instead.

### Consul

Consul bills itself as an end-to-end service discovery framework. To wit, it includes services such as health checking, failure detection, and DNS. Incidentally, Consul also exposes a key value store with mediocre performance and an intricate API. As it stands in Consul 0.7, the storage system does not scale well; systems requiring millions of keys will suffer from high latencies and memory pressure. The key value API is missing, most notably, multi-version keys, conditional transactions, and reliable streaming watches.

etcd and Consul solve different problems. If looking for a distributed consistent key value store, etcd is a better choice over Consul. If looking for end-to-end cluster service discovery, etcd will not have enough features; choose Kubernetes, Consul, or SmartStack.

### NewSQL (Cloud Spanner, CockroachDB, TiDB)

Both etcd and NewSQL databases (e.g., [Cockroach][cockroach], [TiDB][tidb], [Google Spanner][spanner]) provide strong data consistency guarantees with high availability. However, the significantly different system design parameters lead to significantly different client APIs and performance characteristics.

NewSQL databases are meant to horizontally scale across data centers. These systems typically partition data across multiple consistent replication groups (shards), potentially distant, storing data sets on the order of terabytes and above. This sort of scaling makes them poor candidates for distributed coordination as they have long latencies from waiting on clocks and expect updates with mostly localized dependency graphs. The data is organized into tables, including SQL-style query facilities with richer semantics than etcd, but at the cost of additional complexity for processing, planning, and optimizing queries.

In short, choose etcd for storing metadata or coordinating distributed applications. If storing more than a few GB of data or if full SQL queries are needed, choose a NewSQL database.

## Using etcd for metadata

etcd replicates all data within a single consistent replication group. For storing up to a few GB of data with consistent ordering, this is the most efficient approach. Each modification of cluster state, which may change multiple keys, is assigned a globally unique ID, called a revision in etcd, from a monotonically increasing counter for reasoning over ordering. Since there’s only a single replication group, the modification request only needs to go through the raft protocol to commit. By limiting consensus to one replication group, etcd gets distributed consistency with a simple protocol while achieving low latency and high throughput.

The replication behind etcd cannot horizontally scale because it lacks data sharding. In contrast, NewSQL databases usually shard data across multiple consistent replication groups, storing data sets on the order of terabytes and above. However, to assign each modification a globally unique and increasing ID, each request must go through an additional coordination protocol among replication groups. This extra coordination step may potentially conflict on the global ID, forcing ordered requests to retry. The result is a more complicated approach with typically worse performance than etcd for strict ordering.

If an application reasons primarily about metadata or metadata ordering, such as to coordinate processes, choose etcd. If the application needs a large data store spanning multiple data centers and does not heavily depend on strong global ordering properties, choose a NewSQL database.

## Using etcd for distributed coordination

etcd has distributed coordination primitives such as event watches, leases, elections, and distributed shared locks out of the box. These primitives are both maintained and supported by the etcd developers; leaving these primitives to external libraries shirks the responsibility of developing foundational distributed software, essentially leaving the system incomplete. NewSQL databases usually expect these distributed coordination primitives to be authored by third parties. Likewise, ZooKeeper famously has a separate and independent [library][curator] of coordination recipes. Consul, which provides a native locking API, goes so far as to apologize that it’s “[not a bulletproof method][consul-bulletproof]”.

In theory, it’s possible to build these primitives atop any storage system providing strong consistency. However, the algorithms tend to be subtle; it is easy to develop a locking algorithm that appears to work, only to suddenly break due to thundering herd and timing skew. Furthermore, other primitives supported by etcd, such as transactional memory, depend on etcd’s MVCC data model; simple strong consistency is not enough.

For distributed coordination, choosing etcd can help prevent operational headaches and save engineering effort.

[production-users]: ../production-users.md
[grpc]: http://www.grpc.io
[consul-bulletproof]: https://www.consul.io/docs/internals/sessions.html
[curator]: http://curator.apache.org/
[cockroach]: https://github.com/cockroachdb/cockroach
[spanner]: https://cloud.google.com/spanner/
[tidb]: https://github.com/pingcap/tidb
[etcd-v3lock]: https://godoc.org/github.com/coreos/etcd/etcdserver/api/v3lock/v3lockpb
[etcd-v3election]: https://godoc.org/github.com/coreos/etcd/etcdserver/api/v3election/v3electionpb
[etcd-etcdctl-lock]: ../../etcdctl/README.md#lock-lockname-command-arg1-arg2-
[etcd-etcdctl-elect]: ../../etcdctl/README.md#elect-options-election-name-proposal
[etcd-mvcc]: data_model.md
[etcd-recipe]: https://godoc.org/github.com/coreos/etcd/contrib/recipes
[consul-lock]: https://www.consul.io/docs/commands/lock.html
[newsql-leader]: http://dl.acm.org/citation.cfm?id=2960999
[etcd-reconfig]: ../op-guide/runtime-configuration.md
[zk-reconfig]: https://zookeeper.apache.org/doc/trunk/zookeeperReconfig.html
[consul-reconfig]: https://www.consul.io/docs/guides/servers.html
[etcd-linread]: api_guarantees.md#linearizability
[consul-linread]: https://www.consul.io/docs/agent/http.html#consistency
[etcd-json]: ../dev-guide/api_grpc_gateway.md
[consul-json]: https://www.consul.io/docs/agent/http.html#formatted-json-output
[etcd-txn]: api.md#transaction
[zk-txn]: https://zookeeper.apache.org/doc/r3.4.3/api/org/apache/zookeeper/ZooKeeper.html#multi(java.lang.Iterable)
[consul-txn]: https://www.consul.io/docs/agent/http/kv.html#txn
[etcd-watch]: api.md#watch-streams
[zk-watch]: https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkWatches
[consul-watch]: https://www.consul.io/docs/agent/watches.html
[etcd-commonname]: ../op-guide/authentication.md#using-tls-common-name
[etcd-rbac]: ../op-guide/authentication.md#working-with-roles
[zk-acl]: https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#sc_ZooKeeperAccessControl
[consul-acl]: https://www.consul.io/docs/internals/acl.html
[cockroach-grant]: https://www.cockroachlabs.com/docs/grant.html
[spanner-roles]: https://cloud.google.com/spanner/docs/iam#roles
[zk-bindings]: https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#ch_bindings
[container-linux]: https://coreos.com/why
[locksmith]: https://github.com/coreos/locksmith
[kubernetes]: http://kubernetes.io/docs/whatisk8s

@@ -70,6 +70,8 @@ All these metrics are prefixed with `etcd_network_`

| Name | Description | Type |
|---------------------------|--------------------------------------------------------------------|---------------|
| peer_sent_bytes_total | The total number of bytes sent to the peer with ID `To`. | Counter(To) |
| peer_received_bytes_total | The total number of bytes received from the peer with ID `From`. | Counter(From) |
| peer_sent_failures_total | The total number of send failures to the peer with ID `To`. | Counter(To) |
| peer_received_failures_total | The total number of receive failures from the peer with ID `From`. | Counter(From) |
| peer_round_trip_time_seconds | Round-Trip-Time histogram between peers. | Histogram(To) |
| client_grpc_sent_bytes_total | The total number of bytes sent to grpc clients. | Counter |
| client_grpc_received_bytes_total | The total number of bytes received from grpc clients. | Counter |

@@ -80,30 +82,7 @@ All these metrics are prefixed with `etcd_network_`

### gRPC requests

These metrics describe the requests served by a specific etcd member: total received requests, total failed requests, and processing latency. They are useful for tracking user-generated traffic hitting the etcd cluster. These metrics are exposed via [go-grpc-prometheus][go-grpc-prometheus].

All these metrics are prefixed with `etcd_grpc_`

| Name | Description | Type |
|--------------------------------|-------------------------------------------------------------------------------------|------------------------|
| requests_total | Total number of received requests. | Counter(method) |
| requests_failed_total | Total number of failed requests. | Counter(method,error) |
| unary_requests_duration_seconds | Bucketed handling duration of the requests. | Histogram(method) |

Example Prometheus queries that may be useful from these metrics (across all etcd members):

* `sum(rate(etcd_grpc_requests_failed_total{job="etcd"}[1m])) by (grpc_method) / sum(rate(etcd_grpc_requests_total{job="etcd"}[1m])) by (grpc_method)`

  Shows the fraction of requests that failed by gRPC method across all members, over a time window of `1m`.

* `sum(rate(etcd_grpc_requests_total{job="etcd",grpc_method="PUT"}[1m])) by (grpc_method)`

  Shows the rate of PUT requests across all members, over a time window of `1m`.

* `histogram_quantile(0.9, sum(rate(etcd_grpc_unary_requests_duration_seconds{job="etcd",grpc_method="PUT"}[5m])) by (le))`

  Shows the 0.90-tile latency (in seconds) of PUT request handling across all members, over a window of `5m`.

## etcd_debugging namespace metrics

@@ -134,3 +113,4 @@ Heavy file descriptor (`process_open_fds`) usage (i.e., near the process's file

[prometheus-getting-started]: http://prometheus.io/docs/introduction/getting_started/
[prometheus-naming]: http://prometheus.io/docs/practices/naming/
[v2-http-metrics]: v2/metrics.md#http-requests
[go-grpc-prometheus]: https://github.com/grpc-ecosystem/go-grpc-prometheus

Documentation/op-guide/authentication.md (new file, 164 lines)

@@ -0,0 +1,164 @@

# Authentication Guide

## Overview

Authentication was added in etcd 2.1. The etcd v3 API slightly modified the authentication feature's API and user interface to better fit the new data model. This guide is intended to help users set up basic authentication in etcd v3.

## Special users and roles

There is one special user, `root`, and one special role, `root`.

### User `root`

The `root` user, which has full access to etcd, must be created before activating authentication. The idea behind the `root` user is for administrative purposes: managing roles and ordinary users. The `root` user must have the `root` role and is allowed to change anything inside etcd.

### Role `root`

The role `root` may be granted to any user, in addition to the root user. A user with the `root` role has both global read-write access and permission to update the cluster's authentication configuration. Furthermore, the `root` role grants privileges for general cluster maintenance, including modifying cluster membership, defragmenting the store, and taking snapshots.

## Working with users

The `user` subcommand for `etcdctl` handles all things having to do with user accounts.

A listing of users can be found with:

```
$ etcdctl user list
```

Creating a user is as easy as

```
$ etcdctl user add myusername
```

Creating a new user will prompt for a new password. The password can be supplied from standard input when the option `--interactive=false` is given.

Roles can be granted and revoked for a user with:

```
$ etcdctl user grant-role myusername foo
$ etcdctl user revoke-role myusername bar
```

The user's settings can be inspected with:

```
$ etcdctl user get myusername
```

And the password for a user can be changed with

```
$ etcdctl user passwd myusername
```

Changing the password will prompt again for a new password. The password can be supplied from standard input when the option `--interactive=false` is given.

Delete an account with:

```
$ etcdctl user delete myusername
```

## Working with roles

The `role` subcommand for `etcdctl` handles all things having to do with access controls for particular roles, which are granted to individual users.

List roles with:

```
$ etcdctl role list
```

Create a new role with:

```
$ etcdctl role add myrolename
```

A role has no password; it merely defines a new set of access rights.

Roles are granted access to a single key or a range of keys.

The range can be specified as an interval [start-key, end-key) where start-key should be lexically less than end-key.

Access can be granted as either read, write, or both, as in the following examples:

```
# Give read access to a key /foo
$ etcdctl role grant-permission myrolename read /foo

# Give read access to keys with a prefix /foo/. The prefix is equal to the range [/foo/, /foo0)
$ etcdctl role grant-permission myrolename --prefix=true read /foo/

# Give write-only access to the key at /foo/bar
$ etcdctl role grant-permission myrolename write /foo/bar

# Give full access to keys in a range of [key1, key5)
$ etcdctl role grant-permission myrolename readwrite key1 key5

# Give full access to keys with a prefix /pub/
$ etcdctl role grant-permission myrolename --prefix=true readwrite /pub/
```

To see what's granted, we can look at the role at any time:

```
$ etcdctl role get myrolename
```

Revocation of permissions is done the same logical way:

```
$ etcdctl role revoke-permission myrolename /foo/bar
```

As is removing a role entirely:

```
$ etcdctl role remove myrolename
```

## Enabling authentication
|
||||
|
||||
The minimal steps to enabling auth are as follows. The administrator can set up users and roles before or after enabling authentication, as a matter of preference.
|
||||
|
||||
Make sure the root user is created:
|
||||
|
||||
```
|
||||
$ etcdctl user add root
|
||||
Password of root:
|
||||
```
|
||||
|
||||
Enable authentication:
|
||||
|
||||
```
|
||||
$ etcdctl auth enable
|
||||
```
|
||||
|
||||
After this, etcd is running with authentication enabled. To disable it for any reason, use the reciprocal command:
|
||||
|
||||
```
|
||||
$ etcdctl --user root:rootpw auth disable
|
||||
```
## Using `etcdctl` to authenticate

`etcdctl` supports a flag similar to `curl`'s for authentication.

```
$ etcdctl --user user:password get foo
```

The password can be taken from a prompt:

```
$ etcdctl --user user get foo
```

Otherwise, all `etcdctl` commands remain the same. Users and roles can still be created and modified, but require authentication by a user with the root role.
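Once authentication is enabled, such administrative commands carry the root credentials, e.g. (a sketch; `rootpw` is a placeholder):

```
$ etcdctl --user root:rootpw user add myusername
```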
## Using TLS Common Name

If an etcd server is launched with the option `--client-cert-auth=true`, the field of Common Name (CN) in the client's TLS cert will be used as an etcd user. In this case, the common name authenticates the user and the client does not need a password.
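A minimal sketch of such a setup, assuming the certificate file names are placeholders and the CN in `client.crt` matches an existing etcd user:

```
$ etcd --client-cert-auth=true --trusted-ca-file=ca.crt \
  --cert-file=server.crt --key-file=server.key
$ etcdctl --cacert=ca.crt --cert=client.crt --key=client.key get foo
```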
@ -83,7 +83,7 @@ A cluster using self-signed certificates both encrypts traffic and authenticates

On each machine, etcd would be started with these flags:

```
$ etcd --name infra0 --initial-advertise-peer-urls https://10.0.1.10:2380 \
  --listen-peer-urls https://10.0.1.10:2380 \
  --listen-client-urls https://10.0.1.10:2379,https://127.0.0.1:2379 \
  --advertise-client-urls https://10.0.1.10:2379 \
  ...
```
@ -126,7 +126,7 @@ $ etcd --name infra2 --initial-advertise-peer-urls https://10.0.1.12:2380 \

If the cluster needs encrypted communication but does not require authenticated connections, etcd can be configured to automatically generate its keys. On initialization, each member creates its own set of keys based on its advertised IP addresses and hosts.

On each machine, etcd would be started with these flags:

```
$ etcd --name infra0 --initial-advertise-peer-urls https://10.0.1.10:2380 \
  ...
```
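The remaining flags are cut off by the diff; automatic key generation is driven by `--auto-tls` for client connections (its `ETCD_AUTO_TLS` variable appears in the flags reference below) and `--peer-auto-tls` for peer connections, as in this sketch:

```
$ etcd --name infra0 --initial-advertise-peer-urls https://10.0.1.10:2380 \
  --listen-peer-urls https://10.0.1.10:2380 \
  --auto-tls --peer-auto-tls
```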
@ -205,7 +205,7 @@ exit 1

## Discovery

In a number of cases, the IPs of the cluster peers may not be known ahead of time. This is common when utilizing cloud providers or when the network uses DHCP. In these cases, rather than specifying a static configuration, use an existing etcd cluster to bootstrap a new one. This process is called "discovery".

There are two methods that can be used for discovery:
@ -214,17 +214,17 @@ There two methods that can be used for discovery:

### etcd discovery

To better understand the design of the discovery service protocol, we suggest reading the discovery service protocol [documentation][discovery-proto].

#### Lifetime of a discovery URL

A discovery URL identifies a unique etcd cluster. Instead of reusing an existing discovery URL, each etcd instance shares a new discovery URL to bootstrap the new cluster.

Moreover, discovery URLs should ONLY be used for the initial bootstrapping of a cluster. To change cluster membership after the cluster is already running, see the [runtime reconfiguration][runtime-conf] guide.
#### Custom etcd discovery service

Discovery uses an existing cluster to bootstrap itself. If using a private etcd cluster, create a URL like so:

```
$ curl -X PUT https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83/_config/size -d value=3
```
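Each member then bootstraps by pointing `--discovery` at the created URL (a sketch; the member name and peer URL are illustrative, and the host and token come from the `curl` command above):

```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
  --discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```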
@ -271,7 +271,7 @@ $ curl https://discovery.etcd.io/new?size=3

```
https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```

This will create the cluster with an initial size of 3 members. If no size is specified, a default of 3 is used.

```
ETCD_DISCOVERY=https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573d

@ -281,7 +281,7 @@ ETCD_DISCOVERY=https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573d

--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
**Each member must have a different name flag specified or else discovery will fail due to duplicated names. `Hostname` or `machine-id` can be a good choice.**

Now we start etcd with those relevant flags for each member:
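The per-member start commands are elided by the diff; a sketch for one member, reusing the public discovery URL from above (the name and peer URLs are illustrative):

```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
  --listen-peer-urls http://10.0.1.10:2380 \
  --discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```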
@ -357,6 +357,8 @@ To help clients discover the etcd cluster, the following DNS SRV records are loo

If `_etcd-client-ssl._tcp.example.com` is found, clients will attempt to communicate with the etcd cluster over SSL/TLS.

If etcd is using TLS without a custom certificate authority, the discovery domain (e.g., example.com) must match the SRV record domain (e.g., infra1.example.com). This is to mitigate attacks that forge SRV records to point to a different domain; the domain would have a valid certificate under PKI but be controlled by an unknown third party.

#### Create DNS SRV records
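The record listing is elided by the diff; for reference, client SRV entries look like those shown in the gateway guide later in this compare (names and addresses are illustrative):

```
$ dig +noall +answer SRV _etcd-client._tcp.example.com
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra0.example.com.
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra1.example.com.
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra2.example.com.
```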
@ -454,6 +456,10 @@ $ etcd --name infra2 \

```
--listen-peer-urls http://10.0.1.12:2380
```

### Gateway

etcd gateway is a simple TCP proxy that forwards network data to the etcd cluster. Please read the [gateway guide][gateway] for more information.

### Proxy

When the `--proxy` flag is set, etcd runs in [proxy mode][proxy]. This proxy mode only supports the etcd v2 API; there are no plans to support the v3 API. Instead, for v3 API support, there will be a new proxy with enhanced features following the etcd 3.0 release.

@ -469,4 +475,5 @@ To setup an etcd cluster with proxies of v2 API, please read the the [clustering

[proxy]: https://github.com/coreos/etcd/blob/release-2.3/Documentation/proxy.md
[clustering_etcd2]: https://github.com/coreos/etcd/blob/release-2.3/Documentation/clustering.md
[security-guide]: security.md
[tls-setup]: ../../hack/tls-setup
[gateway]: gateway.md
@ -28,7 +28,7 @@ To start etcd automatically using custom settings at startup in Linux, using a [

### --snapshot-count
+ Number of committed transactions to trigger a snapshot to disk.
+ default: "100000"
+ env variable: ETCD_SNAPSHOT_COUNT

### --heartbeat-interval
@ -140,6 +140,12 @@ To start etcd automatically using custom settings at startup in Linux, using a [
+ default: 0
+ env variable: ETCD_AUTO_COMPACTION_RETENTION

### --enable-v2
+ Accept etcd V2 client requests
+ default: true
+ env variable: ETCD_ENABLE_V2

## Proxy flags

`--proxy` prefix flags configure etcd to run in [proxy mode][proxy]. "proxy" supports v2 API only.
@ -179,7 +185,10 @@ To start etcd automatically using custom settings at startup in Linux, using a [

The security flags help to [build a secure etcd cluster][security].

### --ca-file

**DEPRECATED**

+ Path to the client server TLS CA file. `--ca-file ca.crt` could be replaced by `--trusted-ca-file ca.crt --client-cert-auth` and etcd will perform the same.
+ default: none
+ env variable: ETCD_CA_FILE

@ -209,7 +218,10 @@ The security flags help to [build a secure etcd cluster][security].
+ default: false
+ env variable: ETCD_AUTO_TLS

### --peer-ca-file

**DEPRECATED**

+ Path to the peer server TLS CA file. `--peer-ca-file ca.crt` could be replaced by `--peer-trusted-ca-file ca.crt --peer-client-cert-auth` and etcd will perform the same.
+ default: none
+ env variable: ETCD_PEER_CA_FILE
@ -247,7 +259,7 @@ The security flags help to [build a secure etcd cluster][security].
+ env variable: ETCD_DEBUG

### --log-package-levels
+ Set individual etcd subpackages to specific log levels. An example being `etcdserver=WARNING,security=DEBUG`
+ default: none (INFO for all packages)
+ env variable: ETCD_LOG_PACKAGE_LEVELS
@ -276,13 +288,24 @@ Follow the instructions when using these flags.

## Profiling flags

### --enable-pprof
+ Enable runtime profiling data via HTTP server. Address is at client URL + "/debug/pprof/"
+ default: false

### --metrics
+ Set level of detail for exported metrics, specify 'extensive' to include histogram metrics.
+ default: basic

## Auth flags

### --auth-token
+ Specify a token type and token specific options, especially for JWT. Its format is "type,var1=val1,var2=val2,...". Possible type is 'simple' or 'jwt'. Possible variables are 'sign-method' for specifying a sign method of jwt (its possible values are 'ES256', 'ES384', 'ES512', 'HS256', 'HS384', 'HS512', 'RS256', 'RS384', 'RS512', 'PS256', 'PS384', or 'PS512'), 'pub-key' for specifying a path to a public key for verifying jwt, and 'priv-key' for specifying a path to a private key for signing jwt.
+ Example option of JWT: '--auth-token jwt,pub-key=app.rsa.pub,priv-key=app.rsa,sign-method=RS512'
+ default: "simple"
[build-cluster]: clustering.md#static
[reconfig]: runtime-configuration.md
[discovery]: clustering.md#discovery
[iana-ports]: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt
[proxy]: ../v2/proxy.md
[restore]: ../v2/admin_guide.md#restoring-a-backup
[security]: security.md
@ -2,13 +2,106 @@

The following guide shows how to run etcd with rkt and Docker using the [static bootstrap process](clustering.md#static).

## rkt

### Running a single node etcd

The following rkt run command will expose the etcd client API on port 2379 and expose the peer API on port 2380.

Use the host IP address when configuring etcd.

```
export NODE1=192.168.1.21
```

Trust the CoreOS [App Signing Key](https://coreos.com/security/app-signing-key/).

```
sudo rkt trust --prefix coreos.com/etcd
# gpg key fingerprint is: 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E
```

Run the `v3.1.2` version of etcd or specify another release version.

```
sudo rkt run --net=default:IP=${NODE1} coreos.com/etcd:v3.1.2 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380
```

List the cluster member.

```
etcdctl --endpoints=http://192.168.1.21:2379 member list
```

### Running a 3 node etcd cluster

Set up a 3-node cluster with rkt locally, using the `-initial-cluster` flag.

```sh
export NODE1=172.16.28.21
export NODE2=172.16.28.22
export NODE3=172.16.28.23
```

```
# node 1
sudo rkt run --net=default:IP=${NODE1} coreos.com/etcd:v3.1.2 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380

# node 2
sudo rkt run --net=default:IP=${NODE2} coreos.com/etcd:v3.1.2 -- -name=node2 -advertise-client-urls=http://${NODE2}:2379 -initial-advertise-peer-urls=http://${NODE2}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE2}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380

# node 3
sudo rkt run --net=default:IP=${NODE3} coreos.com/etcd:v3.1.2 -- -name=node3 -advertise-client-urls=http://${NODE3}:2379 -initial-advertise-peer-urls=http://${NODE3}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE3}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
```

Verify the cluster is healthy and can be reached.

```
ETCDCTL_API=3 etcdctl --endpoints=http://172.16.28.21:2379,http://172.16.28.22:2379,http://172.16.28.23:2379 endpoint health
```

### DNS

Production clusters which refer to peers by DNS name known to the local resolver must mount the [host's DNS configuration](https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html#customizing-rkt-options).

## Docker

In order to expose the etcd API to clients outside of the Docker host, use the host IP address of the container. Please see [`docker inspect`](https://docs.docker.com/engine/reference/commandline/inspect) for more detail on how to get the IP address. Alternatively, specify the `--net=host` flag to the `docker run` command to skip placing the container inside of a separate network stack.
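For instance, the container's address can be read back with `docker inspect` (a sketch; `etcd` is the container name used in the examples below):

```
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' etcd
```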
### Running a single node etcd

Use the host IP address when configuring etcd:

```
export NODE1=192.168.1.21
```

Run the latest version of etcd:

```
docker run \
  -p 2379:2379 \
  -p 2380:2380 \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:latest \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name node1 \
  --initial-advertise-peer-urls http://${NODE1}:2380 --listen-peer-urls http://${NODE1}:2380 \
  --advertise-client-urls http://${NODE1}:2379 --listen-client-urls http://${NODE1}:2379 \
  --initial-cluster node1=http://${NODE1}:2380
```

List the cluster member:

```
etcdctl --endpoints=http://${NODE1}:2379 member list
```

### Running a 3 node etcd cluster

```
# For each machine
ETCD_VERSION=latest
TOKEN=my-etcd-token
CLUSTER_STATE=new
NAME_1=etcd-node-0
...
```
@ -18,39 +111,52 @@ HOST_1=10.20.30.1

```
HOST_2=10.20.30.2
HOST_3=10.20.30.3
CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
DATA_DIR=/var/lib/etcd

# For node 1
THIS_NAME=${NAME_1}
THIS_IP=${HOST_1}
docker run \
  -p 2379:2379 \
  -p 2380:2380 \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

# For node 2
THIS_NAME=${NAME_2}
THIS_IP=${HOST_2}
docker run \
  -p 2379:2379 \
  -p 2380:2380 \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

# For node 3
THIS_NAME=${NAME_3}
THIS_IP=${HOST_3}
docker run \
  -p 2379:2379 \
  -p 2380:2380 \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
```
To run `etcdctl` using API version 3:

@ -59,3 +165,32 @@ To run `etcdctl` using API version 3:

```
docker exec etcd /bin/sh -c "export ETCDCTL_API=3 && /usr/local/bin/etcdctl put foo bar"
```

## Bare Metal

To provision a 3 node etcd cluster on bare-metal, the examples in the [baremetal repo](https://github.com/coreos/coreos-baremetal/tree/master/examples) may be useful.

## Mounting a certificate volume

The etcd release container does not include default root certificates. To use HTTPS with certificates trusted by a root authority (e.g., for discovery), mount a certificate directory into the etcd container:

```
rkt run \
  --volume etcd-ssl-certs-bundle,kind=host,source=/etc/ssl/certs/ca-certificates.crt \
  --mount volume=etcd-ssl-certs-bundle,target=/etc/ssl/certs/ca-certificates.crt \
  quay.io/coreos/etcd:latest -- --name my-name \
  --initial-advertise-peer-urls http://localhost:2380 --listen-peer-urls http://localhost:2380 \
  --advertise-client-urls http://localhost:2379 --listen-client-urls http://localhost:2379 \
  --discovery https://discovery.etcd.io/c11fbcdc16972e45253491a24fcf45e1
```

```
docker run \
  -p 2379:2379 \
  -p 2380:2380 \
  --volume=/etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt \
  quay.io/coreos/etcd:latest \
  /usr/local/bin/etcd --name my-name \
  --initial-advertise-peer-urls http://localhost:2380 --listen-peer-urls http://localhost:2380 \
  --advertise-client-urls http://localhost:2379 --listen-client-urls http://localhost:2379 \
  --discovery https://discovery.etcd.io/86a9ff6c8cb8b4c4544c1a2f88f8b801
```
BIN Documentation/op-guide/etcd-sample-grafana.png (new file, binary; not shown. Size: 96 KiB)
206 Documentation/op-guide/etcd3_alert.rules (new file)
@ -0,0 +1,206 @@
# general cluster availability

# alert if another failed member will result in an unavailable cluster
ALERT InsufficientMembers
IF count(up{job="etcd"} == 0) > (count(up{job="etcd"}) / 2 - 1)
FOR 3m
LABELS {
  severity = "critical"
}
ANNOTATIONS {
  summary = "etcd cluster insufficient members",
  description = "If one more etcd member goes down the cluster will be unavailable",
}

# etcd leader alerts
# ==================

# alert if any etcd instance has no leader
ALERT NoLeader
IF etcd_server_has_leader{job="etcd"} == 0
FOR 1m
LABELS {
  severity = "critical"
}
ANNOTATIONS {
  summary = "etcd member has no leader",
  description = "etcd member {{ $labels.instance }} has no leader",
}

# alert if there are lots of leader changes
ALERT HighNumberOfLeaderChanges
IF increase(etcd_server_leader_changes_seen_total{job="etcd"}[1h]) > 3
LABELS {
  severity = "warning"
}
ANNOTATIONS {
  summary = "a high number of leader changes within the etcd cluster are happening",
  description = "etcd instance {{ $labels.instance }} has seen {{ $value }} leader changes within the last hour",
}

# gRPC request alerts
# ===================

# alert if more than 1% of gRPC method calls have failed within the last 5 minutes
ALERT HighNumberOfFailedGRPCRequests
IF sum by(grpc_method) (rate(etcd_grpc_requests_failed_total{job="etcd"}[5m]))
  / sum by(grpc_method) (rate(etcd_grpc_total{job="etcd"}[5m])) > 0.01
FOR 10m
LABELS {
  severity = "warning"
}
ANNOTATIONS {
  summary = "a high number of gRPC requests are failing",
  description = "{{ $value }}% of requests for {{ $labels.grpc_method }} failed on etcd instance {{ $labels.instance }}",
}

# alert if more than 5% of gRPC method calls have failed within the last 5 minutes
ALERT HighNumberOfFailedGRPCRequests
IF sum by(grpc_method) (rate(etcd_grpc_requests_failed_total{job="etcd"}[5m]))
  / sum by(grpc_method) (rate(etcd_grpc_total{job="etcd"}[5m])) > 0.05
FOR 5m
LABELS {
  severity = "critical"
}
ANNOTATIONS {
  summary = "a high number of gRPC requests are failing",
  description = "{{ $value }}% of requests for {{ $labels.grpc_method }} failed on etcd instance {{ $labels.instance }}",
}

# alert if the 99th percentile of gRPC method calls take more than 150ms
ALERT GRPCRequestsSlow
IF histogram_quantile(0.99, rate(etcd_grpc_unary_requests_duration_seconds_bucket[5m])) > 0.15
FOR 10m
LABELS {
  severity = "critical"
}
ANNOTATIONS {
  summary = "slow gRPC requests",
  description = "on etcd instance {{ $labels.instance }} gRPC requests to {{ $labels.grpc_method }} are slow",
}

# HTTP requests alerts
# ====================

# alert if more than 1% of requests to an HTTP endpoint have failed within the last 5 minutes
ALERT HighNumberOfFailedHTTPRequests
IF sum by(method) (rate(etcd_http_failed_total{job="etcd"}[5m]))
  / sum by(method) (rate(etcd_http_received_total{job="etcd"}[5m])) > 0.01
FOR 10m
LABELS {
  severity = "warning"
}
ANNOTATIONS {
  summary = "a high number of HTTP requests are failing",
  description = "{{ $value }}% of requests for {{ $labels.method }} failed on etcd instance {{ $labels.instance }}",
}

# alert if more than 5% of requests to an HTTP endpoint have failed within the last 5 minutes
ALERT HighNumberOfFailedHTTPRequests
IF sum by(method) (rate(etcd_http_failed_total{job="etcd"}[5m]))
  / sum by(method) (rate(etcd_http_received_total{job="etcd"}[5m])) > 0.05
FOR 5m
LABELS {
  severity = "critical"
}
ANNOTATIONS {
  summary = "a high number of HTTP requests are failing",
  description = "{{ $value }}% of requests for {{ $labels.method }} failed on etcd instance {{ $labels.instance }}",
}

# alert if the 99th percentile of HTTP requests take more than 150ms
ALERT HTTPRequestsSlow
IF histogram_quantile(0.99, rate(etcd_http_successful_duration_seconds_bucket[5m])) > 0.15
FOR 10m
LABELS {
  severity = "warning"
}
ANNOTATIONS {
  summary = "slow HTTP requests",
  description = "on etcd instance {{ $labels.instance }} HTTP requests to {{ $labels.method }} are slow",
}

# file descriptor alerts
# ======================

instance:fd_utilization = process_open_fds / process_max_fds

# alert if file descriptors are likely to exhaust within the next 4 hours
ALERT FdExhaustionClose
IF predict_linear(instance:fd_utilization[1h], 3600 * 4) > 1
FOR 10m
LABELS {
  severity = "warning"
}
ANNOTATIONS {
  summary = "file descriptors soon exhausted",
  description = "{{ $labels.job }} instance {{ $labels.instance }} will exhaust its file descriptors soon",
}

# alert if file descriptors are likely to exhaust within the next hour
ALERT FdExhaustionClose
IF predict_linear(instance:fd_utilization[10m], 3600) > 1
FOR 10m
LABELS {
  severity = "critical"
}
ANNOTATIONS {
  summary = "file descriptors soon exhausted",
  description = "{{ $labels.job }} instance {{ $labels.instance }} will exhaust its file descriptors soon",
}

# etcd member communication alerts
# ================================

# alert if the 99th percentile of round trips take more than 150ms
ALERT EtcdMemberCommunicationSlow
IF histogram_quantile(0.99, rate(etcd_network_member_round_trip_time_seconds_bucket[5m])) > 0.15
FOR 10m
LABELS {
  severity = "warning"
}
ANNOTATIONS {
  summary = "etcd member communication is slow",
  description = "etcd instance {{ $labels.instance }} member communication with {{ $labels.To }} is slow",
}

# etcd proposal alerts
# ====================

# alert if there are several failed proposals within an hour
ALERT HighNumberOfFailedProposals
IF increase(etcd_server_proposals_failed_total{job="etcd"}[1h]) > 5
LABELS {
  severity = "warning"
}
ANNOTATIONS {
  summary = "a high number of proposals within the etcd cluster are failing",
  description = "etcd instance {{ $labels.instance }} has seen {{ $value }} proposal failures within the last hour",
}

# etcd disk io latency alerts
# ===========================

# alert if the 99th percentile of fsync durations is higher than 500ms
ALERT HighFsyncDurations
IF histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])) > 0.5
FOR 10m
LABELS {
  severity = "warning"
}
ANNOTATIONS {
  summary = "high fsync durations",
  description = "etcd instance {{ $labels.instance }} fsync durations are high",
}

# alert if the 99th percentile of commit durations is higher than 250ms
ALERT HighCommitDurations
IF histogram_quantile(0.99, rate(etcd_disk_backend_commit_duration_seconds_bucket[5m])) > 0.25
FOR 10m
LABELS {
  severity = "warning"
}
ANNOTATIONS {
  summary = "high commit durations",
  description = "etcd instance {{ $labels.instance }} commit durations are high",
}
105 Documentation/op-guide/gateway.md (new file)
@ -0,0 +1,105 @@
# etcd gateway

## What is etcd gateway

etcd gateway is a simple TCP proxy that forwards network data to the etcd cluster. The gateway is stateless and transparent; it neither inspects client requests nor interferes with cluster responses.

The gateway supports multiple etcd server endpoints and works on a simple round-robin policy. It only routes to available endpoints and hides failures from its clients. Other retry policies, such as weighted round-robin, may be supported in the future.

## When to use etcd gateway

Every application that accesses etcd must first have the address of an etcd cluster client endpoint. If multiple applications on the same server access the same etcd cluster, every application still needs to know the advertised client endpoints of the etcd cluster. If the etcd cluster is reconfigured to have different endpoints, every application may also need to update its endpoint list. This wide-scale reconfiguration is both tedious and error-prone.

etcd gateway solves this problem by serving as a stable local endpoint. A typical etcd gateway configuration has each machine running a gateway listening on a local address and every etcd application connecting to its local gateway. The upshot is only the gateway needs to update its endpoints instead of updating each and every application.

In summary, to automatically propagate cluster endpoint changes, the etcd gateway runs on every machine serving multiple applications accessing the same etcd cluster.

## When not to use etcd gateway

- Improving performance

The gateway is not designed for improving etcd cluster performance. It does not provide caching, watch coalescing or batching. The etcd team is developing a caching proxy designed for improving cluster scalability.

- Running on a cluster management system

Advanced cluster management systems like Kubernetes natively support service discovery. Applications can access an etcd cluster with a DNS name or a virtual IP address managed by the system. For example, kube-proxy is equivalent to etcd gateway.
## Start etcd gateway

Consider an etcd cluster with the following static endpoints:

|Name|Address|Hostname|
|------|---------|------------------|
|infra0|10.0.1.10|infra0.example.com|
|infra1|10.0.1.11|infra1.example.com|
|infra2|10.0.1.12|infra2.example.com|

Start the etcd gateway to use these static endpoints with the command:

```bash
$ etcd gateway start --endpoints=infra0.example.com,infra1.example.com,infra2.example.com
2016-08-16 11:21:18.867350 I | tcpproxy: ready to proxy client requests to [...]
```

Alternatively, if using DNS for service discovery, consider the DNS SRV entries:

```bash
$ dig +noall +answer SRV _etcd-client._tcp.example.com
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra0.example.com.
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra1.example.com.
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra2.example.com.
```

```bash
$ dig +noall +answer infra0.example.com infra1.example.com infra2.example.com
infra0.example.com. 300 IN A 10.0.1.10
infra1.example.com. 300 IN A 10.0.1.11
infra2.example.com. 300 IN A 10.0.1.12
```

Start the etcd gateway to fetch the endpoints from the DNS SRV entries with the command:

```bash
$ etcd gateway --discovery-srv=example.com
2016-08-16 11:21:18.867350 I | tcpproxy: ready to proxy client requests to [...]
```
## Configuration flags

### etcd cluster

#### --endpoints

* Comma-separated list of etcd server targets for forwarding client connections.
* Default: `127.0.0.1:2379`
* Invalid example: `https://127.0.0.1:2379` (gateway does not terminate TLS)

#### --discovery-srv

* DNS domain used to bootstrap cluster endpoints through SRV records.
* Default: (not set)

### Network

#### --listen-addr

* Interface and port to bind for accepting client requests.
* Default: `127.0.0.1:23790`
#### --retry-delay

* Duration of delay before retrying to connect to failed endpoints.
* Default: 1m0s
* Invalid example: "123" (expects a duration with a time unit, e.g. "1m")

### Security

#### --insecure-discovery

* Accept SRV records that are insecure or susceptible to man-in-the-middle attacks.
* Default: `false`

#### --trusted-ca-file

* Path to the client TLS CA file for the etcd cluster. Used to authenticate endpoints.
* Default: (not set)
1015 Documentation/op-guide/grafana.json (new file; diff suppressed because it is too large)
193 Documentation/op-guide/grpc_proxy.md (new file)
@ -0,0 +1,193 @@
# gRPC proxy

The gRPC proxy is a stateless etcd reverse proxy operating at the gRPC layer (L7). The proxy is designed to reduce the total processing load on the core etcd cluster. For horizontal scalability, it coalesces watch and lease API requests. To protect the cluster against abusive clients, it caches key range requests.

The gRPC proxy supports multiple etcd server endpoints. When the proxy starts, it randomly picks one etcd server endpoint to use. This endpoint serves all requests until the proxy detects an endpoint failure. If the gRPC proxy detects an endpoint failure, it switches to a different endpoint, if available, to hide failures from its clients. Other retry policies, such as weighted round-robin, may be supported in the future.

## Scalable watch API

The gRPC proxy coalesces multiple client watchers (`c-watchers`) on the same key or range into a single watcher (`s-watcher`) connected to an etcd server. The proxy broadcasts all events from the `s-watcher` to its `c-watchers`.

Assuming N clients watch the same key, one gRPC proxy can reduce the watch load on the etcd server from N to 1. Users can deploy multiple gRPC proxies to further distribute server load.

In the following example, three clients watch on key A. The gRPC proxy coalesces the three watchers, creating a single watcher attached to the etcd server.

```
                   +-------------+
                   | etcd server |
                   +------+------+
                          ^ watch key A (s-watcher)
                          |
                   +------+------+
                   | gRPC proxy  | <-------+
                   |             |         |
                   ++-----+------+         | watch key A (c-watcher)
        watch key A ^     ^ watch key A    |
        (c-watcher) |     | (c-watcher)    |
           +--------+-+  ++--------+  +----+----+
           |  client  |  | client  |  | client  |
           |          |  |         |  |         |
           +----------+  +---------+  +---------+
```

### Limitations

To effectively coalesce multiple client watchers into a single watcher, the gRPC proxy coalesces new `c-watchers` into an existing `s-watcher` when possible. This coalesced `s-watcher` may be out of sync with the etcd server due to network delays or buffered undelivered events. When the watch revision is unspecified, the gRPC proxy will not guarantee the `c-watcher` will start watching from the most recent store revision. For example, if a client watches from an etcd server with revision 1000, that watcher will begin at revision 1000. If a client watches from the gRPC proxy, it may begin watching from revision 990.

Similar limitations apply to cancellation. When the watcher is cancelled, the etcd server’s revision may be greater than the cancellation response revision.

These two limitations should not cause problems for most use cases. In the future, there may be additional options to force the watcher to bypass the gRPC proxy for more accurate revision responses.
## Scalable lease API

To keep its leases alive, a client must establish at least one gRPC stream to an etcd server for sending periodic heartbeats. If an etcd workload involves heavy lease activity spread over many clients, these streams may contribute to excessive CPU utilization. To reduce the total number of streams on the core cluster, the proxy supports lease stream coalescing.

Assuming N clients are updating leases, a single gRPC proxy reduces the stream load on the etcd server from N to 1. Deployments may have additional gRPC proxies to further distribute streams across multiple proxies.

In the following example, three clients update three independent leases (`L1`, `L2`, and `L3`). The gRPC proxy coalesces the three client lease streams (`c-streams`) into a single lease keep alive stream (`s-stream`) attached to an etcd server. The proxy forwards client-side lease heartbeats from the c-streams to the s-stream, then returns the responses to the corresponding c-streams.

```
                   +-------------+
                   | etcd server |
                   +------+------+
                          ^
                          | heartbeat L1, L2, L3
                          | (s-stream)
                          v
                   +------+------+
                   | gRPC proxy  +<-----------+
                   +---+------+--+            | heartbeat L3
                       ^      ^               | (c-stream)
          heartbeat L1 |      | heartbeat L2  |
           (c-stream)  v      v  (c-stream)   v
                 +-----+--+  +-+------+  +----+---+
                 | client |  | client |  | client |
                 +--------+  +--------+  +--------+
```
## Abusive clients protection

The gRPC proxy caches responses for requests when it does not break consistency requirements. This can protect the etcd server from abusive clients in tight for loops.

## Start etcd gRPC proxy

Consider an etcd cluster with the following static endpoints:

|Name|Address|Hostname|
|------|---------|------------------|
|infra0|10.0.1.10|infra0.example.com|
|infra1|10.0.1.11|infra1.example.com|
|infra2|10.0.1.12|infra2.example.com|

Start the etcd gRPC proxy to use these static endpoints with the command:

```bash
$ etcd grpc-proxy start --endpoints=infra0.example.com,infra1.example.com,infra2.example.com --listen-addr=127.0.0.1:2379
```

The etcd gRPC proxy starts and listens on 127.0.0.1:2379. It forwards client requests to one of the three endpoints provided above.
Sending requests through the proxy:

```bash
$ ETCDCTL_API=3 ./etcdctl --endpoints=127.0.0.1:2379 put foo bar
OK
$ ETCDCTL_API=3 ./etcdctl --endpoints=127.0.0.1:2379 get foo
foo
bar
```
## Client endpoint synchronization and name resolution

The proxy supports registering its endpoints for discovery by writing to a user-defined endpoint. This serves two purposes. First, it allows clients to synchronize their endpoints against a set of proxy endpoints for high availability. Second, it is an endpoint provider for etcd [gRPC naming](../dev-guide/grpc_naming.md).

Register proxy(s) by providing a user-defined prefix:

```bash
$ etcd grpc-proxy start --endpoints=localhost:2379 \
  --listen-addr=127.0.0.1:23790 \
  --advertise-client-url=127.0.0.1:23790 \
  --resolver-prefix="___grpc_proxy_endpoint" \
  --resolver-ttl=60

$ etcd grpc-proxy start --endpoints=localhost:2379 \
  --listen-addr=127.0.0.1:23791 \
  --advertise-client-url=127.0.0.1:23791 \
  --resolver-prefix="___grpc_proxy_endpoint" \
  --resolver-ttl=60
```

The proxy will list all its members for member list:

```bash
ETCDCTL_API=3 ./bin/etcdctl --endpoints=http://localhost:23790 member list --write-out table

+----+---------+--------------------------------+------------+-----------------+
| ID | STATUS  |              NAME              | PEER ADDRS |  CLIENT ADDRS   |
+----+---------+--------------------------------+------------+-----------------+
| 0  | started | Gyu-Hos-MBP.sfo.coreos.systems |            | 127.0.0.1:23791 |
| 0  | started | Gyu-Hos-MBP.sfo.coreos.systems |            | 127.0.0.1:23790 |
+----+---------+--------------------------------+------------+-----------------+
```

This lets clients automatically discover proxy endpoints through Sync:

```go
cli, err := clientv3.New(clientv3.Config{
    Endpoints: []string{"http://localhost:23790"},
})
if err != nil {
    log.Fatal(err)
}
defer cli.Close()

// fetch registered grpc-proxy endpoints
if err := cli.Sync(context.Background()); err != nil {
    log.Fatal(err)
}
```

Note that if a proxy is configured without a resolver prefix,

```bash
$ etcd grpc-proxy start --endpoints=localhost:2379 \
  --listen-addr=127.0.0.1:23792 \
  --advertise-client-url=127.0.0.1:23792
```

the member list API to the grpc-proxy returns its own `advertise-client-url`:

```bash
ETCDCTL_API=3 ./bin/etcdctl --endpoints=http://localhost:23792 member list --write-out table

+----+---------+--------------------------------+------------+-----------------+
| ID | STATUS  |              NAME              | PEER ADDRS |  CLIENT ADDRS   |
+----+---------+--------------------------------+------------+-----------------+
| 0  | started | Gyu-Hos-MBP.sfo.coreos.systems |            | 127.0.0.1:23792 |
+----+---------+--------------------------------+------------+-----------------+
```
## Namespacing

Suppose an application expects full control over the entire key space, but the etcd cluster is shared with other applications. To let all applications run without interfering with each other, the proxy can partition the etcd keyspace so clients appear to have access to the complete keyspace. When the proxy is given the flag `--namespace`, all client requests going into the proxy are translated to have a user-defined prefix on the keys. Accesses to the etcd cluster will be under the prefix and responses from the proxy will strip away the prefix; to the client, it appears as if there is no prefix at all.

To namespace a proxy, start it with `--namespace`:

```bash
$ etcd grpc-proxy start --endpoints=localhost:2379 \
  --listen-addr=127.0.0.1:23790 \
  --namespace=my-prefix/
```

Accesses to the proxy are now transparently prefixed on the etcd cluster:

```bash
$ ETCDCTL_API=3 ./bin/etcdctl --endpoints=localhost:23790 put my-key abc
# OK
$ ETCDCTL_API=3 ./bin/etcdctl --endpoints=localhost:23790 get my-key
# my-key
# abc
$ ETCDCTL_API=3 ./bin/etcdctl --endpoints=localhost:2379 get my-prefix/my-key
# my-prefix/my-key
# abc
```
93 Documentation/op-guide/hardware.md (new file)
@ -0,0 +1,93 @@
# Hardware recommendations

etcd usually runs well with limited resources for development or testing purposes; it’s common to develop with etcd on a laptop or a cheap cloud machine. However, when running etcd clusters in production, some hardware guidelines are useful for proper administration. These suggestions are not hard rules; they serve as a good starting point for a robust production deployment. As always, deployments should be tested with simulated workloads before running in production.

## CPUs

Few etcd deployments require a lot of CPU capacity. Typical clusters need two to four cores to run smoothly.
Heavily loaded etcd deployments, serving thousands of clients or tens of thousands of requests per second, tend to be CPU bound since etcd can serve requests from memory. Such heavy deployments usually need eight to sixteen dedicated cores.

## Memory

etcd has a relatively small memory footprint but its performance still depends on having enough memory. An etcd server will aggressively cache key-value data and spends most of the rest of its memory tracking watchers. Typically 8GB is enough. For heavy deployments with thousands of watchers and millions of keys, allocate 16GB to 64GB memory accordingly.

## Disks

Fast disks are the most critical factor for etcd deployment performance and stability.

A slow disk will increase etcd request latency and potentially hurt cluster stability. Since etcd’s consensus protocol depends on persistently storing metadata to a log, a majority of etcd cluster members must write every request down to disk. Additionally, etcd will also incrementally checkpoint its state to disk so it can truncate this log. If these writes take too long, heartbeats may time out and trigger an election, undermining the stability of the cluster.

etcd is very sensitive to disk write latency. Typically 50 sequential IOPS (e.g., a 7200 RPM disk) is required. For heavily loaded clusters, 500 sequential IOPS (e.g., a typical local SSD or a high performance virtualized block device) is recommended. Note that most cloud providers publish concurrent IOPS rather than sequential IOPS; the published concurrent IOPS can be 10x greater than the sequential IOPS. To measure actual sequential IOPS, we suggest using a disk benchmarking tool such as [diskbench][diskbench] or [fio][fio].

etcd requires only modest disk bandwidth but more disk bandwidth buys faster recovery times when a failed member has to catch up with the cluster. Typically 10MB/s will recover 100MB data within 15 seconds. For large clusters, 100MB/s or higher is suggested for recovering 1GB data within 15 seconds.

When possible, back etcd’s storage with a SSD. A SSD usually provides lower write latencies with less variance than a spinning disk, thus improving the stability and reliability of etcd. If using spinning disk, get the fastest disks possible (15,000 RPM). Using RAID 0 is also an effective way to increase disk speed, for both spinning disks and SSD. With at least three cluster members, mirroring and/or parity variants of RAID are unnecessary; etcd's consistent replication already gets high availability.

## Network

Multi-member etcd deployments benefit from a fast and reliable network. In order for etcd to be both consistent and partition tolerant, an unreliable network with partitioning outages will lead to poor availability. Low latency ensures etcd members can communicate fast. High bandwidth can reduce the time to recover a failed etcd member. 1GbE is sufficient for common etcd deployments. For large etcd clusters, a 10GbE network will reduce mean time to recovery.

Deploy etcd members within a single data center when possible to avoid latency overheads and lessen the possibility of partitioning events. If a failure domain in another data center is required, choose a data center closer to the existing one. Please also read the [tuning][tuning] documentation for more information on cross data center deployment.
## Example hardware configurations

Here are a few example hardware setups on AWS and GCE environments. As mentioned before, but must be stressed regardless, administrators should test an etcd deployment with a simulated workload before putting it into production.

Note that these configurations assume these machines are totally dedicated to etcd. Running other applications along with etcd on these machines may cause resource contentions and lead to cluster instability.

### Small cluster

A small cluster serves fewer than 100 clients, fewer than 200 requests per second, and stores no more than 100MB of data.

Example application workload: A 50-node Kubernetes cluster

| Provider | Type | vCPUs | Memory (GB) | Max concurrent IOPS | Disk bandwidth (MB/s) |
|----------|------|-------|--------|------|----------------|
| AWS | m4.large | 2 | 8 | 3600 | 56.25 |
| GCE | n1-standard-1 + 50GB PD SSD | 2 | 7.5 | 1500 | 25 |

### Medium cluster

A medium cluster serves fewer than 500 clients, fewer than 1,000 requests per second, and stores no more than 500MB of data.

Example application workload: A 250-node Kubernetes cluster

| Provider | Type | vCPUs | Memory (GB) | Max concurrent IOPS | Disk bandwidth (MB/s) |
|----------|------|-------|--------|------|----------------|
| AWS | m4.xlarge | 4 | 16 | 6000 | 93.75 |
| GCE | n1-standard-4 + 150GB PD SSD | 4 | 15 | 4500 | 75 |

### Large cluster

A large cluster serves fewer than 1,500 clients, fewer than 10,000 requests per second, and stores no more than 1GB of data.

Example application workload: A 1,000-node Kubernetes cluster

| Provider | Type | vCPUs | Memory (GB) | Max concurrent IOPS | Disk bandwidth (MB/s) |
|----------|------|-------|--------|------|----------------|
| AWS | m4.2xlarge | 8 | 32 | 8000 | 125 |
| GCE | n1-standard-8 + 250GB PD SSD | 8 | 30 | 7500 | 125 |

### xLarge cluster

An xLarge cluster serves more than 1,500 clients, more than 10,000 requests per second, and stores more than 1GB of data.

Example application workload: A 3,000-node Kubernetes cluster

| Provider | Type | vCPUs | Memory (GB) | Max concurrent IOPS | Disk bandwidth (MB/s) |
|----------|------|-------|--------|------|----------------|
| AWS | m4.4xlarge | 16 | 64 | 16,000 | 250 |
| GCE | n1-standard-16 + 500GB PD SSD | 16 | 60 | 15,000 | 250 |

[diskbench]: https://github.com/ongardie/diskbenchmark
[fio]: https://github.com/axboe/fio
[tuning]: ../tuning.md
@ -49,51 +49,50 @@ Finished defragmenting etcd member[127.0.0.1:2379]

## Space quota

The space quota in `etcd` ensures the cluster operates in a reliable fashion. Without a space quota, `etcd` may suffer from poor performance if the keyspace grows excessively large, or it may simply run out of storage space, leading to unpredictable cluster behavior. If the keyspace's backend database for any member exceeds the space quota, `etcd` raises a cluster-wide alarm that puts the cluster into a maintenance mode which only accepts key reads and deletes. Only after freeing enough space in the keyspace, defragmenting the backend database, and clearing the space quota alarm can the cluster resume normal operation.

By default, `etcd` sets a conservative space quota suitable for most applications, but it may be configured on the command line, in bytes:

```sh
# set a very small 16MB quota
$ etcd --quota-backend-bytes=$((16*1024*1024))
```
The space quota can be triggered with a loop:

```sh
# fill keyspace
$ while [ 1 ]; do dd if=/dev/urandom bs=1024 count=1024 | ETCDCTL_API=3 etcdctl put key || break; done
...
Error: rpc error: code = 8 desc = etcdserver: mvcc: database space exceeded
# confirm quota space is exceeded
$ ETCDCTL_API=3 etcdctl --write-out=table endpoint status
+----------------+------------------+-----------+---------+-----------+-----------+------------+
|    ENDPOINT    |        ID        |  VERSION  | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+----------------+------------------+-----------+---------+-----------+-----------+------------+
| 127.0.0.1:2379 | bf9071f4639c75cc | 2.3.0+git | 18 MB   | true      |         2 |       3332 |
+----------------+------------------+-----------+---------+-----------+-----------+------------+
# confirm alarm is raised
$ ETCDCTL_API=3 etcdctl alarm list
memberID:13803658152347727308 alarm:NOSPACE
```
Removing excessive keyspace data and defragmenting the backend database will put the cluster back within the quota limits:

```sh
# get current revision
$ rev=$(ETCDCTL_API=3 etcdctl --endpoints=:2379 endpoint status --write-out="json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9]*')
# compact away all old revisions
$ ETCDCTL_API=3 etcdctl compact $rev
compacted revision 1516
# defragment away excessive space
$ ETCDCTL_API=3 etcdctl defrag
Finished defragmenting etcd member[127.0.0.1:2379]
# disarm alarm
$ ETCDCTL_API=3 etcdctl alarm disarm
memberID:13803658152347727308 alarm:NOSPACE
# test puts are allowed again
$ ETCDCTL_API=3 etcdctl put newkey 123
OK
```
124 Documentation/op-guide/monitoring.md (new file)
@ -0,0 +1,124 @@
# Monitoring etcd
|
||||
|
||||
Each etcd server provides local monitoring information on its client port through http endpoints. The monitoring data is useful for both system health checking and cluster debugging.
|
||||
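For a quick liveness check before digging into the endpoints below, the `/health` endpoint reports whether the member considers itself healthy (a minimal sketch, assuming a local member on the default client port):

```sh
$ curl -L http://localhost:2379/health
{"health": "true"}
```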
|
||||
## Debug endpoint
|
||||
|
||||
If `--debug` is set, the etcd server exports debugging information on its client port under the `/debug` path. Take care when setting `--debug`, since there will be degraded performance and verbose logging.
|
||||
|
||||
The `/debug/pprof` endpoint is the standard go runtime profiling endpoint. This can be used to profile CPU, heap, mutex, and goroutine utilization. For example, here `go tool pprof` gets the top 10 functions where etcd spends its time:
|
||||
|
||||
```sh
|
||||
$ go tool pprof http://localhost:2379/debug/pprof/profile
|
||||
Fetching profile from http://localhost:2379/debug/pprof/profile
|
||||
Please wait... (30s)
|
||||
Saved profile in /home/etcd/pprof/pprof.etcd.localhost:2379.samples.cpu.001.pb.gz
|
||||
Entering interactive mode (type "help" for commands)
|
||||
(pprof) top10
|
||||
310ms of 480ms total (64.58%)
|
||||
Showing top 10 nodes out of 157 (cum >= 10ms)
|
||||
flat flat% sum% cum cum%
|
||||
130ms 27.08% 27.08% 130ms 27.08% runtime.futex
|
||||
70ms 14.58% 41.67% 70ms 14.58% syscall.Syscall
|
||||
20ms 4.17% 45.83% 20ms 4.17% github.com/coreos/etcd/cmd/vendor/golang.org/x/net/http2/hpack.huffmanDecode
|
||||
20ms 4.17% 50.00% 30ms 6.25% runtime.pcvalue
|
||||
20ms 4.17% 54.17% 50ms 10.42% runtime.schedule
|
||||
10ms 2.08% 56.25% 10ms 2.08% github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver.(*EtcdServer).AuthInfoFromCtx
|
||||
10ms 2.08% 58.33% 10ms 2.08% github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver.(*EtcdServer).Lead
|
||||
10ms 2.08% 60.42% 10ms 2.08% github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/pkg/wait.(*timeList).Trigger
|
||||
10ms 2.08% 62.50% 10ms 2.08% github.com/coreos/etcd/cmd/vendor/github.com/prometheus/client_golang/prometheus.(*MetricVec).hashLabelValues
|
||||
10ms 2.08% 64.58% 10ms 2.08% github.com/coreos/etcd/cmd/vendor/golang.org/x/net/http2.(*Framer).WriteHeaders
|
||||
```
|
||||
|
||||
The `/debug/requests` endpoint gives gRPC traces and performance statistics through a web browser. For example, here is a `Range` request for the key `abc`:
|
||||
|
||||
```
|
||||
When Elapsed (s)
|
||||
2017/08/18 17:34:51.999317 0.000244 /etcdserverpb.KV/Range
|
||||
17:34:51.999382 . 65 ... RPC: from 127.0.0.1:47204 deadline:4.999377747s
|
||||
17:34:51.999395 . 13 ... recv: key:"abc"
|
||||
17:34:51.999499 . 104 ... OK
|
||||
17:34:51.999535 . 36 ... sent: header:<cluster_id:14841639068965178418 member_id:10276657743932975437 revision:15 raft_term:17 > kvs:<key:"abc" create_revision:6 mod_revision:14 version:9 value:"asda" > count:1
|
||||
```
|
||||
|
||||
The metrics can be fetched with `curl`:
|
||||
|
||||
```sh
|
||||
$ curl -L http://localhost:2379/metrics
|
||||
|
||||
# HELP etcd_debugging_mvcc_keys_total Total number of keys.
|
||||
# TYPE etcd_debugging_mvcc_keys_total gauge
|
||||
etcd_debugging_mvcc_keys_total 0
|
||||
# HELP etcd_debugging_mvcc_pending_events_total Total number of pending events to be sent.
|
||||
# TYPE etcd_debugging_mvcc_pending_events_total gauge
|
||||
etcd_debugging_mvcc_pending_events_total 0
|
||||
...
|
||||
```
|
||||
|
||||
|
||||
## Prometheus
|
||||
|
||||
Running a [Prometheus][prometheus] monitoring service is the easiest way to ingest and record etcd's metrics.
|
||||
|
||||
First, install Prometheus:
|
||||
|
||||
```sh
|
||||
PROMETHEUS_VERSION="1.3.1"
|
||||
wget https://github.com/prometheus/prometheus/releases/download/v$PROMETHEUS_VERSION/prometheus-$PROMETHEUS_VERSION.linux-amd64.tar.gz -O /tmp/prometheus-$PROMETHEUS_VERSION.linux-amd64.tar.gz
|
||||
tar -xvzf /tmp/prometheus-$PROMETHEUS_VERSION.linux-amd64.tar.gz --directory /tmp/ --strip-components=1
|
||||
/tmp/prometheus -version
|
||||
```
|
||||
|
||||
Set Prometheus's scraper to target the etcd cluster endpoints:
|
||||
|
||||
```sh
|
||||
cat > /tmp/test-etcd.yaml <<EOF
|
||||
global:
|
||||
scrape_interval: 10s
|
||||
scrape_configs:
|
||||
- job_name: test-etcd
|
||||
static_configs:
|
||||
- targets: ['10.240.0.32:2379','10.240.0.33:2379','10.240.0.34:2379']
|
||||
EOF
|
||||
cat /tmp/test-etcd.yaml
|
||||
```
|
||||
|
||||
Set up the Prometheus handler:
|
||||
|
||||
```sh
|
||||
nohup /tmp/prometheus \
|
||||
-config.file /tmp/test-etcd.yaml \
|
||||
-web.listen-address ":9090" \
|
||||
-storage.local.path "test-etcd.data" >> /tmp/test-etcd.log 2>&1 &
|
||||
```
|
||||
|
||||
Now Prometheus will scrape etcd metrics every 10 seconds.
|
||||
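Once scraping begins, individual series can be queried through the Prometheus HTTP API; for example, the key-count gauge shown earlier (a sketch, assuming Prometheus listening on `localhost:9090`):

```sh
$ curl 'http://localhost:9090/api/v1/query?query=etcd_debugging_mvcc_keys_total'
{"status":"success","data":{"resultType":"vector","result":[...]}}
```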
|
||||
|
||||
## Alerting
|
||||
|
||||
There is a [set of default alerts for etcd v3 clusters](./etcd3_alert.rules).
|
||||
|
||||
> Note: `job` labels may need to be adjusted to fit a particular need. The rules were written to apply to a single cluster so it is recommended to choose labels unique to a cluster.
|
||||
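As a sketch of what such a rule looks like in the Prometheus 1.x rule format (the metric, threshold, and file path here are illustrative; use the linked rules file for the maintained set):

```sh
cat > /tmp/test-etcd.rules <<'EOF'
ALERT NoLeader
  IF etcd_server_has_leader{job="test-etcd"} == 0
  FOR 1m
  LABELS { severity = "critical" }
  ANNOTATIONS { description = "etcd instance {{ $labels.instance }} has no leader" }
EOF
```

The rules file is then referenced from the Prometheus configuration with a `rule_files` entry.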
|
||||
## Grafana
|
||||
|
||||
[Grafana][grafana] has built-in Prometheus support; just add a Prometheus data source:
|
||||
|
||||
```
|
||||
Name: test-etcd
|
||||
Type: Prometheus
|
||||
Url: http://localhost:9090
|
||||
Access: proxy
|
||||
```
|
||||
|
||||
Then import the default [etcd dashboard template][template] and customize. For instance, if the Prometheus data source name is `my-etcd`, the `datasource` field values in the dashboard JSON also need to be `my-etcd`.
|
||||
|
||||
Sample dashboard:
|
||||
|
||||

|
||||
|
||||
|
||||
[prometheus]: https://prometheus.io/
|
||||
[grafana]: http://grafana.org/
|
||||
[template]: ./grafana.json
|
@ -17,58 +17,54 @@ For some baseline performance numbers, we consider a three member etcd cluster w
|
||||
- Google Cloud Compute Engine
|
||||
- 3 machines of 8 vCPUs + 16GB Memory + 50GB SSD
|
||||
- 1 machine(client) of 16 vCPUs + 30GB Memory + 50GB SSD
|
||||
- Ubuntu 15.10
|
||||
- etcd v3 master branch (commit SHA d8f325d), Go 1.6.2
|
||||
- Ubuntu 17.04
|
||||
- etcd 3.2.0, go 1.8.3
|
||||
|
||||
With this configuration, etcd can approximately write:
|
||||
|
||||
| Number of keys | Key size in bytes | Value size in bytes | Number of connections | Number of clients | Target etcd server | Average write QPS | Average latency per request | Memory |
|
||||
|----------------|-------------------|---------------------|-----------------------|-------------------|--------------------|-------------------|-----------------------------|--------|
|
||||
| 10,000 | 8 | 256 | 1 | 1 | leader only | 525 | 2ms | 35 MB |
|
||||
| 100,000 | 8 | 256 | 100 | 1000 | leader only | 25,000 | 30ms | 35 MB |
|
||||
| 100,000 | 8 | 256 | 100 | 1000 | all members | 33,000 | 25ms | 35 MB |
|
||||
| Number of keys | Key size in bytes | Value size in bytes | Number of connections | Number of clients | Target etcd server | Average write QPS | Average latency per request | Average server RSS |
|
||||
|---------------:|------------------:|--------------------:|----------------------:|------------------:|--------------------|------------------:|----------------------------:|-------------------:|
|
||||
| 10,000 | 8 | 256 | 1 | 1 | leader only | 583 | 1.6ms | 48 MB |
|
||||
| 100,000 | 8 | 256 | 100 | 1000 | leader only | 44,341 | 22ms | 124MB |
|
||||
| 100,000 | 8 | 256 | 100 | 1000 | all members | 50,104 | 20ms | 126MB |
|
||||
|
||||
Sample commands are:
|
||||
|
||||
```
|
||||
# assuming IP_1 is leader, write requests to the leader
|
||||
benchmark --endpoints={IP_1} --conns=1 --clients=1 \
|
||||
```sh
|
||||
# write to leader
|
||||
benchmark --endpoints=${HOST_1} --target-leader --conns=1 --clients=1 \
|
||||
put --key-size=8 --sequential-keys --total=10000 --val-size=256
|
||||
benchmark --endpoints={IP_1} --conns=100 --clients=1000 \
|
||||
benchmark --endpoints=${HOST_1} --target-leader --conns=100 --clients=1000 \
|
||||
put --key-size=8 --sequential-keys --total=100000 --val-size=256
|
||||
|
||||
# write to all members
|
||||
benchmark --endpoints={IP_1},{IP_2},{IP_3} --conns=100 --clients=1000 \
|
||||
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=100 --clients=1000 \
|
||||
put --key-size=8 --sequential-keys --total=100000 --val-size=256
|
||||
```
|
||||
|
||||
Linearizable read requests go through a quorum of cluster members for consensus to fetch the most recent data. Serializable read requests are cheaper than linearizable reads since they are served by any single etcd member, instead of a quorum of members, in exchange for possibly serving stale data. etcd can read:
|
||||
|
||||
| Number of requests | Key size in bytes | Value size in bytes | Number of connections | Number of clients | Consistency | Average latency per request | Average read QPS |
|
||||
|--------------------|-------------------|---------------------|-----------------------|-------------------|-------------|-----------------------------|------------------|
|
||||
| 10,000 | 8 | 256 | 1 | 1 | Linearizable | 2ms | 560 |
|
||||
| 10,000 | 8 | 256 | 1 | 1 | Serializable | 0.4ms | 7,500 |
|
||||
| 100,000 | 8 | 256 | 100 | 1000 | Linearizable | 15ms | 43,000 |
|
||||
| 100,000 | 8 | 256 | 100 | 1000 | Serializable | 9ms | 93,000 |
|
||||
| Number of requests | Key size in bytes | Value size in bytes | Number of connections | Number of clients | Consistency | Average read QPS | Average latency per request |
|
||||
|-------------------:|------------------:|--------------------:|----------------------:|------------------:|-------------|-----------------:|----------------------------:|
|
||||
| 10,000 | 8 | 256 | 1 | 1 | Linearizable | 1,353 | 0.7ms |
|
||||
| 10,000 | 8 | 256 | 1 | 1 | Serializable | 2,909 | 0.3ms |
|
||||
| 100,000 | 8 | 256 | 100 | 1000 | Linearizable | 141,578 | 5.5ms |
|
||||
| 100,000 | 8 | 256 | 100 | 1000 | Serializable | 185,758 | 2.2ms |
|
||||
|
||||
Sample commands are:
|
||||
|
||||
```
|
||||
# Linearizable read requests
|
||||
benchmark --endpoints={IP_1},{IP_2},{IP_3} --conns=1 --clients=1 \
|
||||
```sh
|
||||
# Single connection read requests
|
||||
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=1 --clients=1 \
|
||||
range YOUR_KEY --consistency=l --total=10000
|
||||
benchmark --endpoints={IP_1},{IP_2},{IP_3} --conns=100 --clients=1000 \
|
||||
range YOUR_KEY --consistency=l --total=100000
|
||||
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=1 --clients=1 \
|
||||
range YOUR_KEY --consistency=s --total=10000
|
||||
|
||||
# Serializable read requests for each member and sum up the numbers
|
||||
for endpoint in {IP_1} {IP_2} {IP_3}; do
|
||||
benchmark --endpoints=$endpoint --conns=1 --clients=1 \
|
||||
range YOUR_KEY --consistency=s --total=10000
|
||||
done
|
||||
for endpoint in {IP_1} {IP_2} {IP_3}; do
|
||||
benchmark --endpoints=$endpoint --conns=100 --clients=1000 \
|
||||
range YOUR_KEY --consistency=s --total=100000
|
||||
done
|
||||
# Many concurrent read requests
|
||||
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=100 --clients=1000 \
|
||||
range YOUR_KEY --consistency=l --total=100000
|
||||
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=100 --clients=1000 \
|
||||
range YOUR_KEY --consistency=s --total=100000
|
||||
```
|
||||
|
||||
We encourage running the benchmark test when setting up an etcd cluster for the first time in a new environment to ensure the cluster achieves adequate performance; cluster latency and throughput can be sensitive to minor environment differences.
|
||||
|
||||
|
@ -11,7 +11,7 @@ To recover from disastrous failure, etcd v3 provides snapshot and restore facili
|
||||
Recovering a cluster first needs a snapshot of the keyspace from an etcd member. A snapshot may either be taken from a live member with the `etcdctl snapshot save` command or by copying the `member/snap/db` file from an etcd data directory. For example, the following command snapshots the keyspace served by `$ENDPOINT` to the file `snapshot.db`:
|
||||
|
||||
```sh
|
||||
$ etcdctl --endpoints $ENDPOINT snapshot save snapshot.db
|
||||
$ ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshot.db
|
||||
```
|
||||
|
||||
### Restoring a cluster
|
||||
@ -23,19 +23,19 @@ Snapshot integrity may be optionally verified at restore time. If the snapshot i
|
||||
A restore initializes a new member of a new cluster, with a fresh cluster configuration using `etcd`'s cluster configuration flags, but preserves the contents of the etcd keyspace. Continuing from the previous example, the following creates new etcd data directories (`m1.etcd`, `m2.etcd`, `m3.etcd`) for a three member cluster:
|
||||
|
||||
```sh
|
||||
$ etcdctl snapshot restore snapshot.db \
|
||||
$ ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
|
||||
--name m1 \
|
||||
--initial-cluster m1=http:/host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
|
||||
--initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
|
||||
--initial-cluster-token etcd-cluster-1 \
|
||||
--initial-advertise-peer-urls http://host1:2380
|
||||
$ etcdctl snapshot restore snapshot.db \
|
||||
$ ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
|
||||
--name m2 \
|
||||
--initial-cluster m1=http:/host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
|
||||
--initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
|
||||
--initial-cluster-token etcd-cluster-1 \
|
||||
--initial-advertise-peer-urls http://host2:2380
|
||||
$ etcdctl snapshot restore snapshot.db \
|
||||
$ ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
|
||||
--name m3 \
|
||||
--initial-cluster m1=http:/host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
|
||||
--initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
|
||||
--initial-cluster-token etcd-cluster-1 \
|
||||
--initial-advertise-peer-urls http://host3:2380
|
||||
```
|
||||
|
@ -2,23 +2,23 @@
|
||||
|
||||
etcd comes with support for incremental runtime reconfiguration, which allows users to update the membership of the cluster at run time.
|
||||
|
||||
Reconfiguration requests can only be processed when the majority of the cluster members are functioning. It is **highly recommended** to always have a cluster size greater than two in production. It is unsafe to remove a member from a two member cluster. The majority of a two member cluster is also two. If there is a failure during the removal process, the cluster might not be able to make progress and may need to [restart from majority failure][majority failure].
|
||||
Reconfiguration requests can only be processed when a majority of cluster members are functioning. It is **highly recommended** to always have a cluster size greater than two in production. It is unsafe to remove a member from a two member cluster. The majority of a two member cluster is also two. If there is a failure during the removal process, the cluster might not be able to make progress and may need to [restart from majority failure][majority failure].
|
||||
|
||||
To better understand the design behind runtime reconfiguration, we suggest reading [the runtime reconfiguration document][runtime-reconf].
|
||||
To better understand the design behind runtime reconfiguration, please read [the runtime reconfiguration document][runtime-reconf].
|
||||
|
||||
## Reconfiguration use cases
|
||||
|
||||
Let's walk through some common reasons for reconfiguring a cluster. Most of these just involve combinations of adding or removing a member, which are explained below under [Cluster Reconfiguration Operations][cluster-reconf].
|
||||
This section will walk through some common reasons for reconfiguring a cluster. Most of these reasons just involve combinations of adding or removing a member, which are explained below under [Cluster Reconfiguration Operations][cluster-reconf].
|
||||
|
||||
### Cycle or upgrade multiple machines
|
||||
|
||||
If multiple cluster members need to move due to planned maintenance (hardware upgrades, network downtime, etc.), it is recommended to modify members one at a time.
|
||||
|
||||
It is safe to remove the leader, however there is a brief period of downtime while the election process takes place. If the cluster holds more than 50MB, it is recommended to [migrate the member's data directory][member migration].
|
||||
It is safe to remove the leader, however there is a brief period of downtime while the election process takes place. If the cluster holds more than 50MB of v2 data, it is recommended to [migrate the member's data directory][member migration].
|
||||
|
||||
### Change the cluster size
|
||||
|
||||
Increasing the cluster size can enhance [failure tolerance][fault tolerance table] and provide better read performance. Since clients can read from any member, increasing the number of members increases the overall read throughput.
|
||||
Increasing the cluster size can enhance [failure tolerance][fault tolerance table] and provide better read performance. Since clients can read from any member, increasing the number of members increases the overall serialized read throughput.
|
||||
|
||||
Decreasing the cluster size can improve the write performance of a cluster, with a trade-off of decreased resilience. Writes into the cluster are replicated to a majority of members of the cluster before considered committed. Decreasing the cluster size lowers the majority, and each write is committed more quickly.
|
||||
|
||||
@ -30,42 +30,34 @@ To replace the machine, follow the instructions for [removing the member][remove
|
||||
|
||||
### Restart cluster from majority failure
|
||||
|
||||
If the majority of the cluster is lost or all of the nodes have changed IP addresses, then manual action is necessary to recover safely.
|
||||
The basic steps in the recovery process include [creating a new cluster using the old data][disaster recovery], forcing a single member to act as the leader, and finally using runtime configuration to [add new members][add member] to this new cluster one at a time.
|
||||
If the majority of the cluster is lost or all of the nodes have changed IP addresses, then manual action is necessary to recover safely. The basic steps in the recovery process include [creating a new cluster using the old data][disaster recovery], forcing a single member to act as the leader, and finally using runtime configuration to [add new members][add member] to this new cluster one at a time.
|
||||
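A minimal sketch of the forcing step, assuming `m1` is a surviving member with an intact data directory (`--force-new-cluster` discards the old membership and starts a one-member cluster):

```sh
$ etcd --name m1 --data-dir /var/lib/etcd --force-new-cluster
```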
|
||||
## Cluster reconfiguration operations
|
||||
|
||||
Now that we have the use cases in mind, let us lay out the operations involved in each.
|
||||
With these use cases in mind, the involved operations can be described for each.
|
||||
|
||||
Before making any change, the simple majority (quorum) of etcd members must be available.
|
||||
This is essentially the same requirement as for any other write to etcd.
|
||||
Before making any change, a simple majority (quorum) of etcd members must be available. This is essentially the same requirement for any kind of write to etcd.
|
||||
|
||||
All changes to the cluster are done one at a time:
|
||||
All changes to the cluster must be done sequentially:
|
||||
|
||||
* To update a single member peerURLs, make an update operation
|
||||
* To replace a single member, make an add then a remove operation
|
||||
* To increase from 3 to 5 members, make two add operations
|
||||
* To decrease from 5 to 3, make two remove operations
|
||||
* To update a single member peerURLs, issue an update operation
|
||||
* To replace a healthy single member, add a new member then remove the old member (see the sketch after this list)
|
||||
* To increase from 3 to 5 members, issue two add operations
|
||||
* To decrease from 5 to 3, issue two remove operations
|
||||
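For example, replacing a member breaks down into the two `etcdctl` operations from the list above (the member ID and URLs here are illustrative):

```sh
# add the replacement first, then remove the member it replaces
$ etcdctl member add infra4 http://10.0.1.14:2380
$ etcdctl member remove a8266ecf031671f3
```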
|
||||
All of these examples will use the `etcdctl` command line tool that ships with etcd.
|
||||
To change membership without `etcdctl`, use the [v2 HTTP members API][member-api] or the [v3 gRPC members API][member-api-grpc].
|
||||
All of these examples use the `etcdctl` command line tool that ships with etcd. To change membership without `etcdctl`, use the [v2 HTTP members API][member-api] or the [v3 gRPC members API][member-api-grpc].
|
||||
|
||||
### Update a member
|
||||
|
||||
#### Update advertise client URLs
|
||||
|
||||
To update the advertise client URLs of a member, simply restart
|
||||
that member with updated client urls flag (`--advertise-client-urls`) or environment variable
|
||||
(`ETCD_ADVERTISE_CLIENT_URLS`). The restarted member will self publish the updated URLs.
|
||||
A wrongly updated client URL will not affect the health of the etcd cluster.
|
||||
To update the advertise client URLs of a member, simply restart that member with updated client urls flag (`--advertise-client-urls`) or environment variable (`ETCD_ADVERTISE_CLIENT_URLS`). The restarted member will self publish the updated URLs. A wrongly updated client URL will not affect the health of the etcd cluster.
|
||||
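A minimal sketch, assuming a member named `node1` whose advertised client URL moves to a new address; all other flags stay as they were:

```sh
$ etcd --name node1 \
  --listen-client-urls http://10.0.1.10:2379 \
  --advertise-client-urls http://10.0.1.10:2379
```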
|
||||
#### Update advertise peer URLs
|
||||
|
||||
To update the advertise peer URLs of a member, first update
|
||||
it explicitly via member command and then restart the member. The additional action is required
|
||||
since updating peer URLs changes the cluster wide configuration and can affect the health of the etcd cluster.
|
||||
To update the advertise peer URLs of a member, first update it explicitly via member command and then restart the member. The additional action is required since updating peer URLs changes the cluster wide configuration and can affect the health of the etcd cluster.
|
||||
|
||||
To update the peer URLs, first, we need to find the target member's ID. To list all members with `etcdctl`:
|
||||
To update the peer URLs, first find the target member's ID. To list all members with `etcdctl`:
|
||||
|
||||
```sh
|
||||
$ etcdctl member list
|
||||
@ -74,7 +66,7 @@ $ etcdctl member list
|
||||
a8266ecf031671f3: name=node1 peerURLs=http://localhost:23801 clientURLs=http://127.0.0.1:23791
|
||||
```
|
||||
|
||||
In this example let's `update` a8266ecf031671f3 member ID and change its peerURLs value to http://10.0.1.10:2380
|
||||
This example will `update` a8266ecf031671f3 member ID and change its peerURLs value to `http://10.0.1.10:2380`:
|
||||
|
||||
```sh
|
||||
$ etcdctl member update a8266ecf031671f3 http://10.0.1.10:2380
|
||||
@ -83,8 +75,7 @@ Updated member with ID a8266ecf031671f3 in cluster
|
||||
|
||||
### Remove a member
|
||||
|
||||
Let us say the member ID we want to remove is a8266ecf031671f3.
|
||||
We then use the `remove` command to perform the removal:
|
||||
Suppose the member ID to remove is a8266ecf031671f3. Use the `remove` command to perform the removal:
|
||||
|
||||
```sh
|
||||
$ etcdctl member remove a8266ecf031671f3
|
||||
@ -106,7 +97,7 @@ Adding a member is a two step process:
|
||||
* Add the new member to the cluster via the [HTTP members API][member-api], the [gRPC members API][member-api-grpc], or the `etcdctl member add` command.
|
||||
* Start the new member with the new cluster configuration, including a list of the updated members (existing members + the new member).
|
||||
|
||||
Using `etcdctl` let's add the new member to the cluster by specifying its [name][conf-name] and [advertised peer URLs][conf-adv-peer]:
|
||||
`etcdctl` adds a new member to the cluster by specifying the member's [name][conf-name] and [advertised peer URLs][conf-adv-peer]:
|
||||
|
||||
```sh
|
||||
$ etcdctl member add infra3 http://10.0.1.13:2380
|
||||
@ -117,8 +108,7 @@ ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,
|
||||
ETCD_INITIAL_CLUSTER_STATE=existing
|
||||
```
|
||||
|
||||
`etcdctl` has informed the cluster about the new member and printed out the environment variables needed to successfully start it.
|
||||
Now start the new etcd process with the relevant flags for the new member:
|
||||
`etcdctl` has informed the cluster about the new member and printed out the environment variables needed to successfully start it. Now start the new etcd process with the relevant flags for the new member:
|
||||
|
||||
```sh
|
||||
$ export ETCD_NAME="infra3"
|
||||
@ -129,13 +119,11 @@ $ etcd --listen-client-urls http://10.0.1.13:2379 --advertise-client-urls http:/
|
||||
|
||||
The new member will run as a part of the cluster and immediately begin catching up with the rest of the cluster.
|
||||
|
||||
If adding multiple members the best practice is to configure a single member at a time and verify it starts correctly before adding more new members.
|
||||
If adding a new member to a 1-node cluster, the cluster cannot make progress before the new member starts because it needs two members as majority to agree on the consensus. This behavior only happens between the time `etcdctl member add` informs the cluster about the new member and the new member successfully establishing a connection to the existing one.
|
||||
If adding multiple members, the best practice is to configure a single member at a time and verify it starts correctly before adding more new members. If adding a new member to a 1-node cluster, the cluster cannot make progress before the new member starts, because it needs two members as a majority to agree on a consensus. This behavior only happens between the time `etcdctl member add` informs the cluster about the new member and the time the new member successfully establishes a connection to the existing one.
|
||||
|
||||
#### Error cases when adding members
|
||||
|
||||
In the following case we have not included our new host in the list of enumerated nodes.
|
||||
If this is a new cluster, the node must be added to the list of initial cluster members.
|
||||
In the following case a new host is not included in the list of enumerated nodes. If this is a new cluster, the node must be added to the list of initial cluster members.
|
||||
|
||||
```sh
|
||||
$ etcd --name infra3 \
|
||||
@ -145,7 +133,7 @@ etcdserver: assign ids error: the member count is unequal
|
||||
exit 1
|
||||
```
|
||||
|
||||
In this case we give a different address (10.0.1.14:2380) to the one that we used to join the cluster (10.0.1.13:2380).
|
||||
In this case, give a different address (10.0.1.14:2380) from the one used to join the cluster (10.0.1.13:2380):
|
||||
|
||||
```sh
|
||||
$ etcd --name infra4 \
|
||||
@ -155,7 +143,7 @@ etcdserver: assign ids error: unmatched member while checking PeerURLs
|
||||
exit 1
|
||||
```
|
||||
|
||||
When we start etcd using the data directory of a removed member, etcd will exit automatically if it connects to any active member in the cluster:
|
||||
If etcd starts using the data directory of a removed member, etcd automatically exits if it connects to any active member in the cluster:
|
||||
|
||||
```sh
|
||||
$ etcd
|
||||
@ -169,7 +157,7 @@ As described in the above, the best practice of adding new members is to configu
|
||||
|
||||
To avoid this problem, etcd provides the `--strict-reconfig-check` option. If this option is passed to etcd, etcd rejects reconfiguration requests if the number of started members would be less than a quorum of the reconfigured cluster.
|
||||
|
||||
It is recommended to enable this option. However, it is disabled by default because of keeping compatibility.
|
||||
It is enabled by default.
|
||||
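The check can be toggled explicitly on the command line; disabling it (a sketch, and not recommended) restores the old permissive behavior:

```sh
$ etcd --strict-reconfig-check=false
```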
|
||||
[add member]: #add-a-new-member
|
||||
[cluster-reconf]: #cluster-reconfiguration-operations
|
||||
|
@ -16,7 +16,7 @@ etcd takes several certificate related configuration options, either through com
|
||||
|
||||
`--key-file=<path>`: Key for the certificate. Must be unencrypted.
|
||||
|
||||
`--client-cert-auth`: When this is set, etcd will check all incoming HTTPS requests for a client certificate signed by the trusted CA; requests that don't supply a valid client certificate will fail.
|
||||
`--client-cert-auth`: When this is set, etcd will check all incoming HTTPS requests for a client certificate signed by the trusted CA; requests that don't supply a valid client certificate will fail. If [authentication][auth] is enabled, the certificate provides credentials for the user name given by the Common Name field.
|
||||
|
||||
`--trusted-ca-file=<path>`: Trusted certificate authority.
|
||||
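As a sketch, the client-facing TLS flags above combine like this (paths and URLs are illustrative):

```sh
$ etcd --name infra0 \
  --client-cert-auth \
  --trusted-ca-file=/etc/ssl/certs/ca.crt \
  --cert-file=/etc/ssl/certs/server.crt \
  --key-file=/etc/ssl/certs/server.key \
  --advertise-client-urls=https://127.0.0.1:2379 \
  --listen-client-urls=https://127.0.0.1:2379
```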
|
||||
@ -219,6 +219,7 @@ Make sure to sign the certificates with a Subject Name the member's public IP ad
|
||||
The certificate needs to be signed for the member's FQDN in its Subject Name; use Subject Alternative Names (short IP SANs) to add the IP address. The `etcd-ca` tool provides a `--domain=` option for its `new-cert` command, and openssl can do [it][alt-name] too.
|
||||
|
||||
[cfssl]: https://github.com/cloudflare/cfssl
|
||||
[tls-setup]: /hack/tls-setup
|
||||
[tls-setup]: ../../hack/tls-setup
|
||||
[tls-guide]: https://github.com/coreos/docs/blob/master/os/generate-self-signed-certificates.md
|
||||
[alt-name]: http://wiki.cacert.org/FAQ/subjectAltName
|
||||
[auth]: authentication.md
|
||||
|
@ -1,14 +1,40 @@
|
||||
## Supported platform
|
||||
## Supported platforms
|
||||
|
||||
### Current support
|
||||
|
||||
The following table lists etcd support status for common architectures and operating systems:
|
||||
|
||||
| Architecture | Operating System | Status | Maintainers |
|
||||
| ------------ | ---------------- | ------------ | --------------------------- |
|
||||
| amd64 | Darwin | Experimental | etcd maintainers |
|
||||
| amd64 | Linux | Stable | etcd maintainers |
|
||||
| amd64 | Windows | Experimental | |
|
||||
| arm64 | Linux | Experimental | @glevand |
|
||||
| arm | Linux | Unstable | |
|
||||
| 386 | Linux | Unstable | |
|
||||
| ppc64le | Linux | Stable | etcd maintainers, @mkumatag |
|
||||
|
||||
* etcd-maintainers are listed in https://github.com/coreos/etcd/blob/master/MAINTAINERS.
|
||||
|
||||
Experimental platforms appear to work in practice and have some platform specific code in etcd, but do not fully conform to the stable support policy. Unstable platforms have been lightly tested, though less thoroughly than experimental ones. Unlisted architecture and operating system pairs are currently unsupported; caveat emptor.
|
||||
|
||||
### Supporting a new platform
|
||||
|
||||
For etcd to officially support a new platform as stable, a few requirements are necessary to ensure acceptable quality:
|
||||
|
||||
1. An "official" maintainer for the platform with clear motivation; someone must be responsible for taking care of the platform.
|
||||
2. Set up CI for build; etcd must compile.
|
||||
3. Set up CI for running unit tests; etcd must pass simple tests.
|
||||
4. Set up CI (TravisCI, SemaphoreCI or Jenkins) for running integration tests; etcd must pass intensive tests.
|
||||
5. (Optional) Set up a functional testing cluster; an etcd cluster should survive stress testing.
|
||||
|
||||
### 32-bit and other unsupported systems
|
||||
|
||||
etcd has known issues on 32-bit systems due to a bug in the Go runtime. See #[358][358] for more information.
|
||||
etcd has known issues on 32-bit systems due to a bug in the Go runtime. See the [Go issue][go-issue] and [atomic package][go-atomic] for more information.
|
||||
|
||||
To avoid inadvertently running a possibly unstable etcd server, `etcd` on unsupported architectures will print
|
||||
a warning message and immediately exit if the environment variable `ETCD_UNSUPPORTED_ARCH` is not set to
|
||||
the target architecture.
|
||||
To avoid inadvertently running a possibly unstable etcd server, `etcd` on unstable or unsupported architectures will print a warning message and immediately exit if the environment variable `ETCD_UNSUPPORTED_ARCH` is not set to the target architecture.
|
||||
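For example, to run an experimental arm64 build anyway (a sketch; the stability caveats above still apply):

```sh
$ ETCD_UNSUPPORTED_ARCH=arm64 ./etcd
```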
|
||||
Currently only the amd64 architecture is officially supported by `etcd`.
|
||||
|
||||
[358]: https://github.com/coreos/etcd/issues/358
|
||||
Currently amd64 and ppc64le architectures are officially supported by `etcd`.
|
||||
|
||||
[go-issue]: https://github.com/golang/go/issues/599
|
||||
[go-atomic]: https://golang.org/pkg/sync/atomic/#pkg-note-BUG
|
||||
|
@ -22,6 +22,12 @@ There are some notable differences between API v2 and API v3:
|
||||
|
||||
Application data can be migrated either offline or online. Offline migration is much simpler than online migration and is recommended.
|
||||
|
||||
Sometimes an etcd cluster may already have v3 data which should not be overwritten. In this case, the migration process may want to confirm no v3 data is committed before proceeding. One way to check that the cluster has no v3 keys is to issue the following `etcdctl` command, which scans the entire v3 keyspace for any key, expecting `0` as output:
|
||||
|
||||
```sh
|
||||
ETCDCTL_API=3 etcdctl get "" --from-key --keys-only --limit 1 | wc -l
|
||||
```
|
||||
|
||||
### Offline migration
|
||||
|
||||
Offline migration is very simple but requires etcd downtime. If an etcd downtime window spanning from seconds to minutes is acceptable, offline migration is a good choice and is easy to automate.
|
||||
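A minimal sketch of the offline path using the `etcdctl migrate` command against a stopped member's data directory (the path is illustrative):

```sh
# stop the etcd member first, then transform its v2 store into v3 data
$ ETCDCTL_API=3 etcdctl migrate --data-dir=/var/lib/etcd
```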
|
77
Documentation/platforms/aws.md
Normal file
@ -0,0 +1,77 @@
|
||||
## Introduction
|
||||
|
||||
This guide assumes operational knowledge of Amazon Web Services (AWS), specifically Amazon Elastic Compute Cloud (EC2). It introduces the design considerations for an etcd deployment on AWS EC2 and how AWS specific features may be utilized in that context.
|
||||
|
||||
## Capacity planning
|
||||
|
||||
As a critical building block for distributed systems it is crucial to perform adequate capacity planning in order to support the intended cluster workload. As a highly available and strongly consistent data store, increasing the number of nodes in an etcd cluster will generally affect performance adversely. This makes sense intuitively, as more nodes means more members for the leader to coordinate state across. The most direct way to increase throughput and decrease latency of an etcd cluster is to allocate more disk I/O, network I/O, CPU, and memory to cluster members. In the event it is impossible to temporarily divert incoming requests to the cluster, scaling up the EC2 instances which comprise the etcd cluster members one at a time may improve performance. It is, however, best to avoid bottlenecks through capacity planning.
|
||||
|
||||
The etcd team has produced a [hardware recommendation guide](../op-guide/hardware.md) which is very useful for “ballparking” how many nodes and what instance type are necessary for a cluster.
|
||||
|
||||
AWS provides a service for creating groups of EC2 instances which are dynamically sized to match load on the instances. Using an Auto Scaling Group ([ASG](http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroup.html)) to dynamically scale an etcd cluster is not recommended for several reasons including:
|
||||
|
||||
* etcd performance is generally inversely proportional to the number of members in a cluster due to the synchronous replication which provides strong consistency of data stored in etcd
|
||||
* the operational complexity of adding [lifecycle hooks](http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html) to properly add and remove members from an etcd cluster by modifying the [runtime configuration](../op-guide/runtime-configuration.md)
|
||||
|
||||
Auto Scaling Groups do provide a number of benefits besides cluster scaling which include:
|
||||
|
||||
* distribution of EC2 instances across Availability Zones (AZs)
|
||||
* EC2 instance fail over across AZs
|
||||
* consolidated monitoring and life cycle control of instances within an ASG
|
||||
|
||||
The use of an ASG to create a [self healing etcd cluster](#self-healing) is one of the design considerations when deploying an etcd cluster to AWS.
|
||||
|
||||
## Cluster design
|
||||
|
||||
The purpose of this section is to provide foundational guidance for deploying etcd on AWS. The discussion will be framed by the following three critical design criteria about the etcd cluster itself:
|
||||
|
||||
* block device provider: limited to the tradeoffs between EBS or instance storage (InstanceStore)
|
||||
* cluster topology: how many nodes should make up an etcd cluster; should these nodes be distributed over multiple AZs
|
||||
* managing etcd members: creating a static cluster of EC2 instances or using an ASG.
|
||||
|
||||
The intended cluster workload should dictate the cluster design. A configuration store for microservices may require different design considerations than a distributed lock service, a secrets store, or a Kubernetes control plane. Cluster design tradeoffs include considerations such as:
|
||||
|
||||
* availability
|
||||
* data durability after member failure
|
||||
* performance/throughput
|
||||
* self healing
|
||||
|
||||
### Availability
|
||||
|
||||
Instance availability on AWS is ultimately determined by the Amazon EC2 Region Service Level Agreement ([SLA](https://aws.amazon.com/ec2/sla/)) which is the policy by which Amazon describes their precise definition of a regional outage.
|
||||
|
||||
In the context of an etcd cluster this means a cluster must contain a minimum of three members where EC2 instances are spread across at least two AZs in order for an etcd cluster to be considered highly available at a Regional level.
|
||||
|
||||
For most use cases the additional latency associated with a cluster spanning across Availability Zones will introduce a negligible performance impact.
|
||||
|
||||
Availability considerations apply to all components of an application; if the application which accesses the etcd cluster will only be deployed to a single Availability Zone it may not make sense to make the etcd cluster highly available across zones.
|
||||
|
||||
### Data durability after member failure
|
||||
|
||||
A highly available etcd cluster is resilient to member loss; however, it is important to consider data durability in the event of disaster when designing an etcd deployment. Deploying etcd on AWS supports multiple mechanisms for data durability.
|
||||
|
||||
* replication: etcd replicates all data to all members of the etcd cluster. Therefore, the more members in the cluster and the more independent the failure domains, the less likely it is that data stored in an etcd cluster will be permanently lost in the event of disaster.
|
||||
* Point in time etcd snapshotting: the etcd v3 API introduced support for snapshotting clusters. The operation is cheap enough (completing on the order of minutes) to run quite frequently, and the resulting archives can be stored in a storage service like Amazon Simple Storage Service (S3); see the sketch after this list.
|
||||
* Amazon Elastic Block Storage (EBS): an EBS volume is a replicated network attached block device which has stronger storage safety guarantees than InstanceStore, whose life cycle is tied to the life cycle of the attached EC2 instance. The life cycle of an EBS volume is not necessarily tied to an EC2 instance; it can be detached and snapshotted independently, which means that a single node etcd cluster backed by an EBS volume can provide a fairly reasonable level of data durability.
|
||||
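A sketch of the snapshot-and-archive flow mentioned in the list above, assuming a configured AWS CLI and an illustrative bucket name:

```sh
# take a point-in-time snapshot and archive it to S3
$ ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save backup.db
$ aws s3 cp backup.db s3://my-etcd-backups/backup-$(date +%Y%m%d).db
```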
|
||||
### Performance/Throughput
|
||||
|
||||
The performance of an etcd cluster is roughly quantifiable through latency and throughput metrics which are primarily affected by disk and network performance. Detailed performance planning information is provided in the [performance section](../op-guide/performance.md) of the etcd operations guide.
|
||||
|
||||
#### Network
|
||||
|
||||
AWS offers EC2 Placement Groups which allow the collocation of EC2 instances within a single Availability Zone which can be utilized in order to minimize network latency between etcd members in the cluster. It is important to remember that collocation of etcd nodes within a single AZ will provide weaker fault tolerance than distributing members across multiple AZs. [Enhanced networking for EC2 instances](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html) may also improve network performance of individual EC2 instances.
|
||||
|
||||
#### Disk
|
||||
|
||||
AWS provides two basic types of block storage: [EBS volumes](https://aws.amazon.com/ebs/) and [EC2 Instance Store](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html). As mentioned, an EBS volume is a network attached block device while instance storage is directly attached to the hypervisor of the EC2 host. EBS volumes will generally have higher latency, lower throughput, and greater performance variance than Instance Store volumes. If performance, rather than data safety, is the primary concern it is highly recommended that instance storage on the EC2 instances be utilized. Remember that the amount of available instance storage varies by EC2 [instance types](https://aws.amazon.com/ec2/instance-types/) which may impose additional performance considerations.
|
||||
|
||||
Inconsistent EBS volume performance can introduce etcd cluster instability. [Provisioned IOPS](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html#EBSVolumeTypes_piops) can provide more consistent performance than general purpose SSD EBS volumes. More information about EBS volume performance is available [from AWS](https://aws.amazon.com/ebs/details/) and Datadog has shared their experience with [getting optimal performance with AWS EBS Provisioned IOPS](https://www.datadoghq.com/blog/aws-ebs-provisioned-iops-getting-optimal-performance/) in their engineering blog.
|
||||
|
||||
### Self healing
|
||||
|
||||
While using an ASG to scale the size of an etcd cluster is not recommended, an ASG can be used effectively to maintain the desired number of nodes in the event of node failure. The maintenance of a stable number of etcd nodes will provide the etcd cluster with a measure of self healing.
|
||||
|
||||
### Next steps
|
||||
|
||||
The operational life cycle of an etcd cluster can be greatly simplified through the use of the etcd-operator. The open source etcd operator is a Kubernetes control plane operator which deploys and manages etcd clusters atop Kubernetes. While still in its early stages, the etcd-operator already offers periodic backups to S3, detection and replacement of failed nodes, and automated disaster recovery from backups in the event of permanent quorum loss.
|
203
Documentation/platforms/container-linux-systemd.md
Normal file
@ -0,0 +1,203 @@
|
||||
# Run etcd on Container Linux with systemd
|
||||
|
||||
The following guide shows how to run etcd with [systemd][systemd-docs] under [Container Linux][container-linux-docs].
|
||||
|
||||
## Provisioning an etcd cluster
|
||||
|
||||
Cluster bootstrapping in Container Linux is simplest with [Ignition][container-linux-ignition]; `coreos-metadata.service` dynamically fetches the machine's IP for discovery. Note that etcd's discovery service protocol is only meant for bootstrapping, and cannot be used with runtime reconfiguration or cluster monitoring.
|
||||
|
||||
The [Container Linux Config Transpiler][container-linux-ct] compiles etcd configuration files into Ignition configuration files:
|
||||
|
||||
```yaml container-linux-config:norender
|
||||
etcd:
|
||||
version: 3.2.0
|
||||
name: s1
|
||||
data_dir: /var/lib/etcd
|
||||
advertise_client_urls: http://{PUBLIC_IPV4}:2379
|
||||
initial_advertise_peer_urls: http://{PRIVATE_IPV4}:2380
|
||||
listen_client_urls: http://0.0.0.0:2379
|
||||
listen_peer_urls: http://{PRIVATE_IPV4}:2380
|
||||
discovery: https://discovery.etcd.io/<token>
|
||||
```
|
||||
|
||||
`ct` would produce the following Ignition Config:
|
||||
|
||||
```
|
||||
$ ct --platform=gce --in-file /tmp/ct-etcd.cnf
|
||||
{"ignition":{"version":"2.0.0","config"...
|
||||
```
|
||||
|
||||
```json ignition-config
|
||||
{
|
||||
"ignition":{"version":"2.0.0","config":{}},
|
||||
"storage":{},
|
||||
"systemd":{
|
||||
"units":[{
|
||||
"name":"etcd-member.service",
|
||||
"enable":true,
|
||||
"dropins":[{
|
||||
"name":"20-clct-etcd-member.conf",
|
||||
"contents":"[Unit]\nRequires=coreos-metadata.service\nAfter=coreos-metadata.service\n\n[Service]\nEnvironmentFile=/run/metadata/coreos\nEnvironment=\"ETCD_IMAGE_TAG=v3.1.8\"\nExecStart=\nExecStart=/usr/lib/coreos/etcd-wrapper $ETCD_OPTS \\\n --name=\"s1\" \\\n --data-dir=\"/var/lib/etcd\" \\\n --listen-peer-urls=\"http://${COREOS_GCE_IP_LOCAL_0}:2380\" \\\n --listen-client-urls=\"http://0.0.0.0:2379\" \\\n --initial-advertise-peer-urls=\"http://${COREOS_GCE_IP_LOCAL_0}:2380\" \\\n --advertise-client-urls=\"http://${COREOS_GCE_IP_EXTERNAL_0}:2379\" \\\n --discovery=\"https://discovery.etcd.io/\u003ctoken\u003e\""}]}]},
|
||||
"networkd":{},
|
||||
"passwd":{}}
|
||||
```
|
||||
|
||||
To avoid accidental misconfiguration, the transpiler helpfully verifies etcd configurations when generating Ignition files:
|
||||
|
||||
```yaml container-linux-config:norender
|
||||
etcd:
|
||||
version: 3.2.0
|
||||
name: s1
|
||||
data_dir_x: /var/lib/etcd
|
||||
advertise_client_urls: http://{PUBLIC_IPV4}:2379
|
||||
initial_advertise_peer_urls: http://{PRIVATE_IPV4}:2380
|
||||
listen_client_urls: http://0.0.0.0:2379
|
||||
listen_peer_urls: http://{PRIVATE_IPV4}:2380
|
||||
discovery: https://discovery.etcd.io/<token>
|
||||
```
|
||||
|
||||
```
|
||||
$ ct --platform=gce --in-file /tmp/ct-etcd.cnf
|
||||
warning at line 3, column 2
|
||||
Config has unrecognized key: data_dir_x
|
||||
```
|
||||
|
||||
See [Container Linux Provisioning][container-linux-provision] for more details.
|
||||
|
||||
## etcd 3.x service
|
||||
|
||||
[Container Linux][container-linux-docs] does not include etcd 3.x binaries by default. Different versions of etcd 3.x can be fetched via `etcd-member.service`.
|
||||
|
||||
Confirm unit file exists:
|
||||
|
||||
```
|
||||
systemctl cat etcd-member.service
|
||||
```
|
||||
|
||||
Check if the etcd service is running:
|
||||
|
||||
```
|
||||
systemctl status etcd-member.service
|
||||
```
|
||||
|
||||
Example systemd drop-in unit to override the default service settings:
|
||||
|
||||
```bash
|
||||
cat > /tmp/20-cl-etcd-member.conf <<EOF
|
||||
[Service]
|
||||
Environment="ETCD_IMAGE_TAG=v3.2.0"
|
||||
Environment="ETCD_DATA_DIR=/var/lib/etcd"
|
||||
Environment="ETCD_SSL_DIR=/etc/ssl/certs"
|
||||
Environment="ETCD_OPTS=--name s1 \
|
||||
--listen-client-urls https://10.240.0.1:2379 \
|
||||
--advertise-client-urls https://10.240.0.1:2379 \
|
||||
--listen-peer-urls https://10.240.0.1:2380 \
|
||||
--initial-advertise-peer-urls https://10.240.0.1:2380 \
|
||||
--initial-cluster s1=https://10.240.0.1:2380,s2=https://10.240.0.2:2380,s3=https://10.240.0.3:2380 \
|
||||
--initial-cluster-token mytoken \
|
||||
--initial-cluster-state new \
|
||||
--client-cert-auth \
|
||||
--trusted-ca-file /etc/ssl/certs/etcd-root-ca.pem \
|
||||
--cert-file /etc/ssl/certs/s1.pem \
|
||||
--key-file /etc/ssl/certs/s1-key.pem \
|
||||
--peer-client-cert-auth \
|
||||
--peer-trusted-ca-file /etc/ssl/certs/etcd-root-ca.pem \
|
||||
--peer-cert-file /etc/ssl/certs/s1.pem \
|
||||
--peer-key-file /etc/ssl/certs/s1-key.pem \
|
||||
--auto-compaction-retention 1"
|
||||
EOF
|
||||
mv /tmp/20-cl-etcd-member.conf /etc/systemd/system/etcd-member.service.d/20-cl-etcd-member.conf
|
||||
```
|
||||
|
||||
Or use a Container Linux Config:
|
||||
|
||||
```yaml container-linux-config:norender
|
||||
systemd:
|
||||
units:
|
||||
- name: etcd-member.service
|
||||
dropins:
|
||||
- name: conf1.conf
|
||||
contents: |
|
||||
[Service]
|
||||
Environment="ETCD_SSL_DIR=/etc/ssl/certs"
|
||||
|
||||
etcd:
|
||||
version: 3.2.0
|
||||
name: s1
|
||||
data_dir: /var/lib/etcd
|
||||
listen_client_urls: https://0.0.0.0:2379
|
||||
advertise_client_urls: https://{PUBLIC_IPV4}:2379
|
||||
listen_peer_urls: https://{PRIVATE_IPV4}:2380
|
||||
initial_advertise_peer_urls: https://{PRIVATE_IPV4}:2380
|
||||
initial_cluster: s1=https://{PRIVATE_IPV4}:2380,s2=https://10.240.0.2:2380,s3=https://10.240.0.3:2380
|
||||
initial_cluster_token: mytoken
|
||||
initial_cluster_state: new
|
||||
client_cert_auth: true
|
||||
trusted_ca_file: /etc/ssl/certs/etcd-root-ca.pem
|
||||
cert-file: /etc/ssl/certs/s1.pem
|
||||
key-file: /etc/ssl/certs/s1-key.pem
|
||||
peer-client-cert-auth: true
|
||||
peer-trusted-ca-file: /etc/ssl/certs/etcd-root-ca.pem
|
||||
peer-cert-file: /etc/ssl/certs/s1.pem
|
||||
peer-key-file: /etc/ssl/certs/s1-key.pem
|
||||
auto-compaction-retention: 1
|
||||
```
|
||||
|
||||
```
|
||||
$ ct --platform=gce --in-file /tmp/ct-etcd.cnf
|
||||
{"ignition":{"version":"2.0.0","config"...
|
||||
```
|
||||
|
||||
To see all runtime drop-in changes for system units:
|
||||
|
||||
```
|
||||
systemd-delta --type=extended
|
||||
```
|
||||
|
||||
To enable and start:
|
||||
|
||||
```
|
||||
systemctl daemon-reload
|
||||
systemctl enable --now etcd-member.service
|
||||
```
|
||||
|
||||
To see the logs:
|
||||
|
||||
```
|
||||
journalctl --unit etcd-member.service --lines 10
|
||||
```
|
||||
|
||||
To stop and disable the service:
|
||||
|
||||
```
|
||||
systemctl disable --now etcd-member.service
|
||||
```
|
||||
|
||||
## etcd 2.x service
|
||||
|
||||
[Container Linux][container-linux-docs] includes a unit file `etcd2.service` for etcd 2.x, which will be removed in the near future. See [Container Linux FAQ][container-linux-faq] for more details.
|
||||
|
||||
Confirm unit file is installed:
|
||||
|
||||
```
|
||||
systemctl cat etcd2.service
|
||||
```
|
||||
|
||||
Check if the etcd service is running:
|
||||
|
||||
```
|
||||
systemctl status etcd2.service
|
||||
```
|
||||
|
||||
To stop and disable:
|
||||
|
||||
```
|
||||
systemctl disable --now etcd2.service
|
||||
```
|
||||
|
||||
[systemd-docs]: https://github.com/systemd/systemd
|
||||
[container-linux-docs]: https://coreos.com/os/docs/latest
|
||||
[container-linux-faq]: https://github.com/coreos/docs/blob/master/etcd/os-faq.md
|
||||
[container-linux-provision]: https://github.com/coreos/docs/blob/master/os/provisioning.md
|
||||
[container-linux-ignition]: https://github.com/coreos/docs/blob/master/ignition/what-is-ignition.md
|
||||
[container-linux-ct]: https://github.com/coreos/container-linux-config-transpiler
|
@ -1,23 +1,18 @@
|
||||
# FreeBSD
|
||||
|
||||
Starting with version 0.1.2 both etcd and etcdctl have been ported to FreeBSD and can
|
||||
be installed either via packages or ports system. Their versions have been recently
|
||||
updated to 0.2.0 so now you can enjoy using etcd and etcdctl on FreeBSD 10.0 (RC4 as
|
||||
of now) and 9.x where they have been tested. They might also work when installed from
|
||||
ports on earlier versions of FreeBSD, but your mileage may vary.
|
||||
Starting with version 0.1.2 both etcd and etcdctl have been ported to FreeBSD and can be installed either via packages or ports system. Their versions have been recently updated to 0.2.0 so now etcd and etcdctl can be enjoyed on FreeBSD 10.0 (RC4 as of now) and 9.x, where they have been tested. They might also work when installed from ports on earlier versions of FreeBSD, but it is untested; caveat emptor.
|
||||
|
||||
## Installation
|
||||
|
||||
### Using pkgng package system
|
||||
|
||||
1. If you do not have pkgng installed, install it with command `pkg` and answering 'Y'
|
||||
when asked
|
||||
1. If pkgng is not installed, install it with the `pkg` command, answering 'Y' when asked.
|
||||
|
||||
2. Update your repository data with `pkg update`
|
||||
2. Update the repository data with `pkg update`.
|
||||
|
||||
3. Install etcd with `pkg install coreos-etcd coreos-etcdctl`
|
||||
3. Install etcd with `pkg install coreos-etcd coreos-etcdctl`.
|
||||
|
||||
4. Verify successful installation with `pkg info | grep etcd` and you should get:
|
||||
4. Verify successful installation by confirming `pkg info | grep etcd` matches:
|
||||
|
||||
```
|
||||
r@fbsd10:/ # pkg info | grep etcd
|
||||
@ -26,21 +21,17 @@ coreosetcdctl0.2.0 Simple commandline client for et
|
||||
r@fbsd10:/ #
|
||||
```
|
||||
|
||||
5. You’re ready to use etcd and etcdctl! For more information about using pkgng, please
|
||||
see: http://www.freebsd.org/doc/handbook/pkgngintro.html
|
||||
5. etcd and etcdctl are ready to use! For more information about using pkgng, please see: http://www.freebsd.org/doc/handbook/pkgngintro.html
|
||||
|
||||
### Using ports system
|
||||
|
||||
1. If you do not have ports installed, install with with `portsnap fetch extract` (it
|
||||
may take some time depending on your hardware and network connection)
|
||||
1. If ports is not installed, install with `portsnap fetch extract` (it may take some time depending on hardware and network connection).
|
||||
|
||||
2. Build etcd with `cd /usr/ports/devel/etcd && make install clean`, you
|
||||
will get an option to build and install documentation and etcdctl with it.
|
||||
2. Build etcd with `cd /usr/ports/devel/etcd && make install clean`. There will be an option to build and install documentation and etcdctl with it.
|
||||
|
||||
3. If you haven't installed it with etcdctl, and you would like to install it later, you can build it
|
||||
with `cd /usr/ports/devel/etcdctl && make install clean`
|
||||
3. If etcd wasn't installed with etcdctl, it can be built later with `cd /usr/ports/devel/etcdctl && make install clean`.
|
||||
|
||||
4. Verify successful installation with `pkg info | grep etcd` and you should get:
|
||||
4. Verify successful installation by confirming `pkg info | grep etcd` matches:
|
||||
|
||||
|
||||
```
|
||||
@ -50,13 +41,8 @@ coreosetcdctl0.2.0 Simple commandline client for et
|
||||
r@fbsd10:/ #
|
||||
```
|
||||
|
||||
5. You’re ready to use etcd and etcdctl! For more information about using ports system,
|
||||
please see: https://www.freebsd.org/doc/handbook/portsusing.html
|
||||
5. etcd and etcdctl are ready to use! For more information about using ports system, please see: https://www.freebsd.org/doc/handbook/portsusing.html
|
||||
|
||||
## Issues
|
||||
|
||||
If you find any issues with the build/install procedure or you've found a problem that
|
||||
you've verified is local to FreeBSD version only (for example, by not being able to
|
||||
reproduce it on any other platform, like OSX or Linux), please sent a
|
||||
problem report using this page for more
|
||||
information: http://www.freebsd.org/sendpr.html
|
||||
If there are any issues with the build/install procedure or there's a problem that is local to FreeBSD only (for example, by not being able to reproduce it on any other platform, like OSX or Linux), please send a problem report using this page for more information: http://www.freebsd.org/sendpr.html
|
||||
|
@ -2,6 +2,15 @@
|
||||
|
||||
This document tracks people and use cases for etcd in production. By creating a list of production use cases we hope to build a community of advisors that we can reach out to with experience using various etcd applications, operating environments, and cluster sizes. The etcd development team may reach out periodically to check in on how etcd is working in the field and to update this list.
|
||||
|
||||
## All Kubernetes Users
|
||||
|
||||
- *Application*: https://kubernetes.io/
|
||||
- *Environments*: AWS, OpenStack, Azure, Google Cloud, Huawei Cloud, Bare Metal, etc
|
||||
|
||||
**This is a meta user; please feel free to document specific Kubernetes clusters!**
|
||||
|
||||
All Kubernetes clusters use etcd as their primary data store. This means etcd's users include such companies as [Niantic, Inc Pokemon Go](https://cloudplatform.googleblog.com/2016/09/bringing-Pokemon-GO-to-life-on-Google-Cloud.html), [Box](https://blog.box.com/blog/kubernetes-box-microservices-maximum-velocity/), [CoreOS](https://coreos.com/tectonic), [Ticketmaster](https://www.youtube.com/watch?v=wqXVKneP0Hg), [Salesforce](https://www.salesforce.com) and many many more.
|
||||
|
||||
## discovery.etcd.io
|
||||
|
||||
- *Application*: https://github.com/coreos/discovery.etcd.io
|
||||
@ -50,7 +59,7 @@ Radius Intelligence uses Kubernetes running CoreOS to containerize and scale int

## Vonage

- *Application*: system configuration for microservices, scheduling, locks (future - service discovery)
- *Application*: kubernetes, vault backend, system configuration for microservices, scheduling, locks (future - service discovery)
- *Launched*: August 2015
- *Cluster Size*: 2 clusters of 5 members in 2 DCs, n local proxies 1-to-1 with microservice (SSL and SRV lookup)
- *Order of Data Size*: kilobytes
@ -58,5 +67,173 @@ Radius Intelligence uses Kubernetes running CoreOS to containerize and scale int
- *Environment*: VMWare, AWS
- *Backups*: Daily snapshots on VMs. Backups done for upgrades.

## PD

- *Application*: embedded etcd
- *Launched*: Mar 2016
- *Cluster Size*: 3 or 5 members
- *Order of Data Size*: megabytes
- *Operator*: PingCAP, Inc.
- *Environment*: Bare Metal, AWS, etc.
- *Backups*: None.

PD (Placement Driver) is the central controller in the TiDB cluster. It saves the cluster meta information, schedules the data, allocates the globally unique timestamps for distributed transactions, etc. It embeds etcd to supply high availability and automatic failover.

## Canal

- *Application*: system configuration for overlay network
- *Launched*: June 2016
- *Cluster Size*: 3 members for each cluster
- *Order of Data Size*: kilobytes
- *Operator*: Huawei Euler Department
- *Environment*: [Huawei Cloud](http://www.hwclouds.com/product/cce.html)
- *Backups*: None, all data can be recreated if necessary.

[teamcity]: https://www.jetbrains.com/teamcity/
[raoofm]: https://github.com/raoofm

## Qiniu Cloud

- *Application*: system configuration for microservices, distributed locks
- *Launched*: Jan. 2016
- *Cluster Size*: several clusters of 3 members each
- *Order of Data Size*: kilobytes
- *Operator*: Pandora, chenchao@qiniu.com
- *Environment*: Bare Metal
- *Backups*: None, all data can be recreated if necessary

## QingCloud

- *Application*: [QingCloud][qingcloud] appcenter cluster for service discovery as [metad][metad] backend.
- *Launched*: December 2016
- *Cluster Size*: 1 cluster of 3 members per user.
- *Order of Data Size*: kilobytes
- *Operator*: [yunify][yunify]
- *Environment*: QingCloud IaaS
- *Backups*: None, all data can be recreated if necessary.

[metad]: https://github.com/yunify/metad
[yunify]: https://github.com/yunify
[qingcloud]: https://qingcloud.com/

## Yandex

- *Application*: system configuration for services, service discovery
- *Launched*: March 2016
- *Cluster Size*: 3 clusters of 5 members
- *Order of Data Size*: several gigabytes
- *Operator*: Yandex; [nekto0n][nekto0n]
- *Environment*: Bare Metal
- *Backups*: None

[nekto0n]: https://github.com/nekto0n

## Tencent Games

- *Application*: Meta data and configuration data for service discovery, Kubernetes, etc.
- *Launched*: Jan. 2015
- *Cluster Size*: 10s of clusters of 3 members each
- *Order of Data Size*: 10s of Megabytes
- *Operator*: Tencent Game Operations Department
- *Environment*: Bare Metal
- *Backups*: Periodic sync to backup server

In Tencent games, we use Docker and Kubernetes to deploy and run our applications, and use etcd to save meta data for service discovery, Kubernetes, etc.

## Hyper.sh

- *Application*: Kubernetes, distributed locks, etc.
- *Launched*: April 2016
- *Cluster Size*: 1 cluster of 3 members
- *Order of Data Size*: 10s of MB
- *Operator*: Hyper.sh
- *Environment*: Bare Metal
- *Backups*: None, all data can be recreated if necessary.

In [hyper.sh][hyper.sh], the container service is backed by [hypernetes][hypernetes], a multi-tenant kubernetes distro. Moreover, we use etcd to coordinate the multiple management services and store global metadata.

[hypernetes]: https://github.com/hyperhq/hypernetes
[Hyper.sh]: https://www.hyper.sh

## Meitu

- *Application*: system configuration for services, service discovery, kubernetes in test environment
- *Launched*: October 2015
- *Cluster Size*: 1 cluster of 3 members
- *Order of Data Size*: megabytes
- *Operator*: Meitu, hxj@meitu.com, [shafreeck][shafreeck]
- *Environment*: Bare Metal
- *Backups*: None, all data can be recreated if necessary.

[shafreeck]: https://github.com/shafreeck

## Grab

- *Application*: system configuration for services, service discovery
- *Launched*: June 2016
- *Cluster Size*: 1 cluster of 7 members
- *Order of Data Size*: megabytes
- *Operator*: Grab, [taxitan][taxitan], [reterVision][reterVision]
- *Environment*: AWS
- *Backups*: None, all data can be recreated if necessary.

[taxitan]: https://github.com/taxitan
[reterVision]: https://github.com/reterVision

## DaoCloud.io

- *Application*: container management
- *Launched*: Sep. 2015
- *Cluster Size*: 1000+ deployments, each deployment contains a 3-node cluster.
- *Order of Data Size*: 100s of Megabytes
- *Operator*: daocloud.io
- *Environment*: Bare Metal and virtual machines
- *Backups*: None, all data can be recreated if necessary.

In [DaoCloud][DaoCloud], we use Docker and Swarm to deploy and run our applications, and we use etcd to save metadata for service discovery.

[DaoCloud]: https://www.daocloud.io

## Branch.io

- *Application*: Kubernetes
- *Launched*: April 2016
- *Cluster Size*: Multiple clusters, multiple sizes
- *Order of Data Size*: 100s of Megabytes
- *Operator*: branch.io
- *Environment*: AWS, Kubernetes
- *Backups*: EBS volume backups

At [Branch][branch], we use kubernetes heavily as our core microservice platform for staging and production.

[branch]: https://branch.io

## Baidu Waimai

- *Application*: SkyDNS, Kubernetes, UDC, CMDB and other distributed systems
- *Launched*: April 2016
- *Cluster Size*: 3 clusters of 5 members
- *Order of Data Size*: several gigabytes
- *Operator*: Baidu Waimai Operations Department
- *Environment*: CentOS 6.5
- *Backups*: backup scripts

## Salesforce.com

- *Application*: Kubernetes
- *Launched*: Jan 2017
- *Cluster Size*: Multiple clusters of 3 members
- *Order of Data Size*: 100s of Megabytes
- *Operator*: Salesforce.com (krmayankk@github)
- *Environment*: Bare Metal
- *Backups*: None, all data can be recreated

## Hosted Graphite

- *Application*: Service discovery, locking, ephemeral application data
- *Launched*: January 2017
- *Cluster Size*: 2 clusters of 7 members
- *Order of Data Size*: Megabytes
- *Operator*: Hosted Graphite (sre@hostedgraphite.com)
- *Environment*: Bare Metal
- *Backups*: None, all data is considered ephemeral.

@ -1,6 +1,6 @@

# Reporting bugs

If any part of the etcd project has bugs or documentation mistakes, please let us know by [opening an issue][issue]. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.
If any part of the etcd project has bugs or documentation mistakes, please let us know by [opening an issue][etcd-issue]. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.

To make the bug report accurate and easy to understand, please try to create bug reports that are:

@ -6,33 +6,16 @@ The network isn't the only source of latency. Each request and response may be i

## Time parameters

The underlying distributed consensus protocol relies on two separate time parameters to ensure that nodes can hand off leadership if one stalls or goes offline. The first parameter is called the *Heartbeat Interval*. This is the frequency with which the leader will notify followers that it is still the leader.
For best practices, the parameter should be set around the round-trip time between members. By default, etcd uses a `100ms` heartbeat interval.

The second parameter is the *Election Timeout*. This timeout is how long a follower node will go without hearing a heartbeat before attempting to become leader itself. By default, etcd uses a `1000ms` election timeout.

Adjusting these values is a trade-off. The heartbeat interval is recommended to be around the maximum of the average round-trip time (RTT) between members, normally around 0.5-1.5x the round-trip time. If the heartbeat interval is too low, etcd will send unnecessary messages that increase the usage of CPU and network resources. On the other hand, a heartbeat interval that is too high leads to a high election timeout, and a higher election timeout takes longer to detect a leader failure. The easiest way to measure round-trip time (RTT) is to use the [PING utility][ping].
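
For example, a quick way to sample peer RTT from one member before picking values (the peer hostname here is illustrative):

```sh
# sample round-trip times to another cluster member
ping -c 5 etcd-peer1.example.com
```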

The election timeout should be set based on the heartbeat interval and average round-trip time between members. Election timeouts must be at least 10 times the round-trip time so they can account for variance in the network. For example, if the round-trip time between members is 10ms then the election timeout should be at least 100ms.

The election timeout should be set to at least 5 to 10 times the heartbeat interval to account for variance in leader replication. For a heartbeat interval of 50ms, set the election timeout to at least 250ms - 500ms.

The upper limit of election timeout is 50000ms (50s), which should only be used when deploying a globally-distributed etcd cluster. A reasonable round-trip time for the continental United States is 130ms, and the time between the US and Japan is around 350-400ms. If the network has uneven performance or regular packet delays/loss then it is possible that a couple of retries may be necessary to successfully send a packet. So 5s is a safe upper limit on global round-trip time. As the election timeout should be an order of magnitude bigger than the broadcast time, in the case of ~5s for a globally distributed cluster, 50 seconds becomes a reasonable maximum.

The heartbeat interval and election timeout value should be the same for all members in one cluster. Setting different values for etcd members may disrupt cluster stability.
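
As a minimal sketch, both parameters can be set per member on the command line or through the corresponding environment variables (the values below simply restate the defaults, in milliseconds):

```sh
# Command line arguments:
$ etcd --heartbeat-interval=100 --election-timeout=1000

# Environment variables:
$ ETCD_HEARTBEAT_INTERVAL=100 ETCD_ELECTION_TIMEOUT=1000 etcd
```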

@ -50,18 +33,13 @@ The values are specified in milliseconds.

## Snapshots

etcd appends all key changes to a log file. This log grows forever and is a complete linear history of every change made to the keys. A complete history works well for lightly used clusters, but heavily used clusters would carry around a large log.

To avoid having a huge log, etcd makes periodic snapshots. These snapshots provide a way for etcd to compact the log by saving the current state of the system and removing old logs.

### Snapshot tuning

Creating snapshots with the V2 backend can be expensive, so snapshots are only created after a given number of changes to etcd. By default, snapshots will be made after every 10,000 changes. If etcd's memory usage and disk usage are too high, try lowering the snapshot threshold by setting the following on the command line:

```sh
# Command line arguments:
@ -71,4 +49,34 @@ $ etcd --snapshot-count=5000
$ ETCD_SNAPSHOT_COUNT=5000 etcd
```

## Disk

An etcd cluster is very sensitive to disk latencies. Since etcd must persist proposals to its log, disk activity from other processes may cause long `fsync` latencies. The upshot is etcd may miss heartbeats, causing request timeouts and temporary leader loss. An etcd server can sometimes stably run alongside these processes when given a high disk priority.

On Linux, etcd's disk priority can be configured with `ionice`:

```sh
# best effort, highest priority
$ sudo ionice -c2 -n0 -p `pgrep etcd`
```
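
Note that `ionice` applies to the running process and does not persist across restarts; if etcd runs under systemd, an equivalent priority can be expressed in the unit file (a sketch using systemd's I/O scheduling directives):

```
[Service]
IOSchedulingClass=best-effort
IOSchedulingPriority=0
```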

## Network

If the etcd leader serves a large number of concurrent client requests, it may delay processing follower peer requests due to network congestion. This manifests as send buffer error messages on the follower nodes:

```
dropped MsgProp to 247ae21ff9436b2d since streamMsg's sending buffer is full
dropped MsgAppResp to 247ae21ff9436b2d since streamMsg's sending buffer is full
```

These errors may be resolved by prioritizing etcd's peer traffic over its client traffic. On Linux, peer traffic can be prioritized by using the traffic control mechanism:

```
tc qdisc add dev eth0 root handle 1: prio bands 3
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip sport 2380 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 2380 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 match ip sport 2379 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 match ip dport 2379 0xffff flowid 1:1
```
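
To undo the prioritization later, delete the queueing discipline (assuming the same interface name):

```
tc qdisc del dev eth0 root
```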

[ping]: https://en.wikipedia.org/wiki/Ping_(networking_utility)

@ -6,27 +6,27 @@ In the general case, upgrading from etcd 2.3 to 3.0 can be a zero-downtime, roll

Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.

### Upgrade checklists

#### Upgrade requirements

To upgrade an existing etcd deployment to 3.0, the running cluster must be 2.3 or greater. If it's before 2.3, please upgrade to [2.3](https://github.com/coreos/etcd/releases/tag/v2.3.0) before upgrading to 3.0.
To upgrade an existing etcd deployment to 3.0, the running cluster must be 2.3 or greater. If it's before 2.3, please upgrade to [2.3](https://github.com/coreos/etcd/releases/tag/v2.3.8) before upgrading to 3.0.

Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. You can check the health of the cluster by using the `etcdctl cluster-health` command.
Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl cluster-health` command before proceeding.

#### Preparation

Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.

Before beginning, [backup the etcd data directory](admin_guide.md#backing-up-the-datastore). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version.
Before beginning, [backup the etcd data directory](../v2/admin_guide.md#backing-up-the-datastore). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to the existing etcd version.

#### Mixed versions

While upgrading, an etcd cluster supports mixed versions of etcd members, and operates with the protocol of the lowest common version. The cluster is only considered upgraded once all of its members are upgraded to version 3.0. Internally, etcd members negotiate with each other to determine the overall cluster version, which controls the reported version and the supported features.

#### Limitations

It might take up to 2 minutes for the newly upgraded member to catch up with the existing cluster when the total data size is larger than 50MB. Check the size of a recent snapshot to estimate the total data size. In other words, it is safest to wait for 2 minutes between upgrading each member.

For a much larger total data size, 100MB or more, this one-time process might take even more time. Administrators of very large etcd clusters of this magnitude can feel free to contact the [etcd team][etcd-contact] before upgrading, and we’ll be happy to provide advice on the procedure.

@ -34,15 +34,15 @@ For a much larger total data size, 100MB or more , this one-time process might t

If all members have been upgraded to v3.0, the cluster will be upgraded to v3.0, and downgrade from this completed state is **not possible**. If any single member is still v2.3, however, the cluster and its operations remain “v2.3”, and it is possible from this mixed cluster state to return to using a v2.3 etcd binary on all members.

Please [backup the data directory](admin_guide.md#backing-up-the-datastore) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
Please [backup the data directory](../v2/admin_guide.md#backing-up-the-datastore) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.

### Upgrade procedure

This example details the upgrade of a three-member v2.3 etcd cluster running on a local machine.

#### 1. Check upgrade requirements.

Is the the cluster healthy and running v.2.3.x?
Is the cluster healthy and running v2.3.x?

```
$ etcdctl cluster-health
@ -52,7 +52,7 @@ member 8211f1d0f64f3269 is healthy: got healthy result from http://localhost:123
cluster is healthy

$ curl http://localhost:2379/version
{"etcdserver":"2.3.x","etcdcluster":"2.3.0"}
{"etcdserver":"2.3.x","etcdcluster":"2.3.8"}
```

#### 2. Stop the existing etcd process
@ -64,7 +64,7 @@ When each etcd process is stopped, expected errors will be logged by other clust
2016-06-27 15:21:48.624175 I | rafthttp: the connection with 8211f1d0f64f3269 became inactive
```

It’s a good idea at this point to [backup the etcd data directory](https://github.com/coreos/etcd/blob/master/Documentation/v2/admin_guide.md#backing-up-the-datastore) to provide a downgrade path should any problems occur:
It’s a good idea at this point to [backup the etcd data directory](../v2/admin_guide.md#backing-up-the-datastore) to provide a downgrade path should any problems occur:

```
$ etcdctl backup \
@ -102,7 +102,7 @@ Upgraded members will log warnings like the following until the entire cluster i

#### 5. Finish

When all members are upgraded, the cluster will report upgrading to 3.0 successfully:

```
2016-06-27 15:22:19.873751 N | membership: updated the cluster version from 2.3 to 3.0
@ -116,4 +116,14 @@ $ ETCDCTL_API=3 etcdctl endpoint health
127.0.0.1:22379 is healthy: successfully committed proposal: took = 18.513301ms
```

## Further considerations

- etcdctl environment variables have been updated. If `ETCDCTL_API=2 etcdctl cluster-health` works properly but `ETCDCTL_API=3 etcdctl endpoint health` responds with `Error: grpc: timed out when dialing`, be sure to use the [new variable names](https://github.com/coreos/etcd/tree/master/etcdctl#etcdctl).
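
As a sketch of the difference, the v2 tool reads `ETCDCTL_ENDPOINT` (singular) while the v3 tool expects `ETCDCTL_ENDPOINTS`; the endpoint value here is illustrative, and the linked etcdctl README remains the authoritative list:

```sh
# v2 etcdctl
$ ETCDCTL_API=2 ETCDCTL_ENDPOINT=http://127.0.0.1:2379 etcdctl cluster-health

# v3 etcdctl reads the plural variable name
$ ETCDCTL_API=3 ETCDCTL_ENDPOINTS=127.0.0.1:2379 etcdctl endpoint health
```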

## Known Issues

- etcd < v3.1 does not work properly if built with Go > v1.7. See [Issue 6951](https://github.com/coreos/etcd/issues/6951) for additional information.
- If an error such as `transport: http2Client.notifyError got notified that the client transport was broken unexpected EOF.` shows up in the etcd server logs, be sure etcd is a pre-built release or built with (etcd v3.1+ & go v1.7+) or (etcd <v3.1 & go v1.6.x).
- Adding a v3 node to a v2.3 cluster during upgrades is not supported and could trigger panics. See [Issue 7429](https://github.com/coreos/etcd/issues/7429) for additional information. Mixed versions of etcd members are only allowed during v3 migration. Finish upgrades before making any membership changes.

[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev
Documentation/upgrades/upgrade_3_1.md (new file, 123 lines)
@ -0,0 +1,123 @@

## Upgrade etcd from 3.0 to 3.1

In the general case, upgrading from etcd 3.0 to 3.1 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.0 processes and replace them with etcd v3.1 processes
- after running all v3.1 processes, new features in v3.1 are available to the cluster

Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.

### Upgrade checklists

#### Upgrade requirements

To upgrade an existing etcd deployment to 3.1, the running cluster must be 3.0 or greater. If it's before 3.0, please [upgrade to 3.0](upgrade_3_0.md) before upgrading to 3.1.

Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.

#### Preparation

Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.

Before beginning, [backup the etcd data](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to the existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up the v2 datastore](../v2/admin_guide.md#backing-up-the-datastore).
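
A sketch of taking both backups before starting, assuming the member's data lives under `/var/lib/etcd` (the paths are illustrative):

```sh
# v3 keyspace snapshot
$ ETCDCTL_API=3 etcdctl snapshot save backup.db

# v2 keyspace backup, only needed if the cluster still holds v2 data
$ etcdctl backup --data-dir /var/lib/etcd --backup-dir /var/lib/etcd-backup
```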

#### Mixed versions

While upgrading, an etcd cluster supports mixed versions of etcd members, and operates with the protocol of the lowest common version. The cluster is only considered upgraded once all of its members are upgraded to version 3.1. Internally, etcd members negotiate with each other to determine the overall cluster version, which controls the reported version and the supported features.

#### Limitations

Note: If the cluster only has v3 data and no v2 data, it is not subject to this limitation.

If the cluster is serving a v2 data set larger than 50MB, each newly upgraded member may take up to two minutes to catch up with the existing cluster. Check the size of a recent snapshot to estimate the total data size. In other words, it is safest to wait for 2 minutes between upgrading each member.

For a much larger total data size, 100MB or more, this one-time process might take even more time. Administrators of very large etcd clusters of this magnitude can feel free to contact the [etcd team][etcd-contact] before upgrading, and we'll be happy to provide advice on the procedure.

#### Downgrade

If all members have been upgraded to v3.1, the cluster will be upgraded to v3.1, and downgrade from this completed state is **not possible**. If any single member is still v3.0, however, the cluster and its operations remain "v3.0", and it is possible from this mixed cluster state to return to using a v3.0 etcd binary on all members.

Please [backup the data directory](../op-guide/maintenance.md#snapshot-backup) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.

### Upgrade procedure

This example shows how to upgrade a 3-member v3.0 etcd cluster running on a local machine.

#### 1. Check upgrade requirements

Is the cluster healthy and running v3.0.x?

```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:2379 is healthy: successfully committed proposal: took = 6.600684ms
localhost:22379 is healthy: successfully committed proposal: took = 8.540064ms
localhost:32379 is healthy: successfully committed proposal: took = 8.763432ms

$ curl http://localhost:2379/version
{"etcdserver":"3.0.16","etcdcluster":"3.0.0"}
```

#### 2. Stop the existing etcd process

When each etcd process is stopped, expected errors will be logged by other cluster members. This is normal since a cluster member connection has been (temporarily) broken:

```
2017-01-17 09:34:18.352662 I | raft: raft.node: 1640829d9eea5cfb elected leader 1640829d9eea5cfb at term 5
2017-01-17 09:34:18.359630 W | etcdserver: failed to reach the peerURL(http://localhost:2380) of member fd32987dcd0511e0 (Get http://localhost:2380/version: dial tcp 127.0.0.1:2380: getsockopt: connection refused)
2017-01-17 09:34:18.359679 W | etcdserver: cannot get the version of member fd32987dcd0511e0 (Get http://localhost:2380/version: dial tcp 127.0.0.1:2380: getsockopt: connection refused)
2017-01-17 09:34:18.548116 W | rafthttp: lost the TCP streaming connection with peer fd32987dcd0511e0 (stream Message writer)
2017-01-17 09:34:19.147816 W | rafthttp: lost the TCP streaming connection with peer fd32987dcd0511e0 (stream MsgApp v2 writer)
2017-01-17 09:34:34.364907 W | etcdserver: failed to reach the peerURL(http://localhost:2380) of member fd32987dcd0511e0 (Get http://localhost:2380/version: dial tcp 127.0.0.1:2380: getsockopt: connection refused)
```

It's a good idea at this point to [backup the etcd data](../op-guide/maintenance.md#snapshot-backup) to provide a downgrade path should any problems occur:

```
$ etcdctl snapshot save backup.db
```

#### 3. Drop-in etcd v3.1 binary and start the new etcd process

The new v3.1 etcd will publish its information to the cluster:

```
2017-01-17 09:36:00.996590 I | etcdserver: published {Name:my-etcd-1 ClientURLs:[http://localhost:2379]} to cluster 46bc3ce73049e678
```

Verify that each member, and then the entire cluster, becomes healthy with the new v3.1 etcd binary:

```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:22379 is healthy: successfully committed proposal: took = 5.540129ms
localhost:32379 is healthy: successfully committed proposal: took = 7.321671ms
localhost:2379 is healthy: successfully committed proposal: took = 10.629901ms
```

Upgraded members will log warnings like the following until the entire cluster is upgraded. This is expected and will cease after all etcd cluster members are upgraded to v3.1:

```
2017-01-17 09:36:38.406268 W | etcdserver: the local etcd version 3.0.16 is not up-to-date
2017-01-17 09:36:38.406295 W | etcdserver: member fd32987dcd0511e0 has a higher version 3.1.0
2017-01-17 09:36:42.407695 W | etcdserver: the local etcd version 3.0.16 is not up-to-date
2017-01-17 09:36:42.407730 W | etcdserver: member fd32987dcd0511e0 has a higher version 3.1.0
```

#### 4. Repeat step 2 to step 3 for all other members

#### 5. Finish

When all members are upgraded, the cluster will report upgrading to 3.1 successfully:

```
2017-01-17 09:37:03.100015 I | etcdserver: updating the cluster version from 3.0 to 3.1
2017-01-17 09:37:03.104263 N | etcdserver/membership: updated the cluster version from 3.0 to 3.1
2017-01-17 09:37:03.104374 I | etcdserver/api: enabled capabilities for version 3.1
```

```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:2379 is healthy: successfully committed proposal: took = 2.312897ms
localhost:22379 is healthy: successfully committed proposal: took = 2.553476ms
localhost:32379 is healthy: successfully committed proposal: took = 2.516902ms
```

[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev
Documentation/upgrades/upgrade_3_2.md (new file, 172 lines)
@ -0,0 +1,172 @@

## Upgrade etcd from 3.1 to 3.2

In the general case, upgrading from etcd 3.1 to 3.2 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.1 processes and replace them with etcd v3.2 processes
- after running all v3.2 processes, new features in v3.2 are available to the cluster

Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.

### Client upgrade checklists

3.2 introduces two breaking changes.

Previously, the `clientv3.Lease.TimeToLive` API returned `lease.ErrLeaseNotFound` on a non-existent lease ID. 3.2 instead returns TTL=-1 in its response and no error (see [#7305](https://github.com/coreos/etcd/pull/7305)).

Before

```go
// when leaseID does not exist
resp, err := TimeToLive(ctx, leaseID)
resp == nil
err == lease.ErrLeaseNotFound
```

After

```go
// when leaseID does not exist
resp, err := TimeToLive(ctx, leaseID)
resp.TTL == -1
err == nil
```
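
Callers that previously branched on `lease.ErrLeaseNotFound` should branch on the returned TTL instead. A minimal sketch against the 3.2 behavior (the `cli` client and `leaseID` variables are assumed to exist):

```go
resp, err := cli.TimeToLive(ctx, leaseID)
if err != nil {
	// transport or server error; the lease may still exist
	return err
}
if resp.TTL == -1 {
	// 3.2+: a non-existent lease is reported via TTL=-1, not an error
}
```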

`clientv3.NewFromConfigFile` is moved to `yaml.NewConfig`.

Before

```go
import "github.com/coreos/etcd/clientv3"
clientv3.NewFromConfigFile
```

After

```go
import clientv3yaml "github.com/coreos/etcd/clientv3/yaml"
clientv3yaml.NewConfig
```

### Server upgrade checklists

#### Upgrade requirements

To upgrade an existing etcd deployment to 3.2, the running cluster must be 3.1 or greater. If it's before 3.1, please [upgrade to 3.1](upgrade_3_1.md) before upgrading to 3.2.

Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.

#### Preparation

Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.

Before beginning, [backup the etcd data](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to the existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up the v2 datastore](../v2/admin_guide.md#backing-up-the-datastore).

#### Mixed versions

While upgrading, an etcd cluster supports mixed versions of etcd members, and operates with the protocol of the lowest common version. The cluster is only considered upgraded once all of its members are upgraded to version 3.2. Internally, etcd members negotiate with each other to determine the overall cluster version, which controls the reported version and the supported features.

#### Limitations

Note: If the cluster only has v3 data and no v2 data, it is not subject to this limitation.

If the cluster is serving a v2 data set larger than 50MB, each newly upgraded member may take up to two minutes to catch up with the existing cluster. Check the size of a recent snapshot to estimate the total data size. In other words, it is safest to wait for 2 minutes between upgrading each member.

For a much larger total data size, 100MB or more, this one-time process might take even more time. Administrators of very large etcd clusters of this magnitude can feel free to contact the [etcd team][etcd-contact] before upgrading, and we'll be happy to provide advice on the procedure.

#### Downgrade

If all members have been upgraded to v3.2, the cluster will be upgraded to v3.2, and downgrade from this completed state is **not possible**. If any single member is still v3.1, however, the cluster and its operations remain "v3.1", and it is possible from this mixed cluster state to return to using a v3.1 etcd binary on all members.

Please [backup the data directory](../op-guide/maintenance.md#snapshot-backup) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
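
If the cluster must return to v3.1 while downgrade is still possible, the snapshot taken during preparation can be restored into a fresh data directory and served with the v3.1 binary; a sketch, with an illustrative path:

```sh
$ ETCDCTL_API=3 etcdctl snapshot restore backup.db --data-dir /var/lib/etcd-restored
```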

### Upgrade procedure

This example shows how to upgrade a 3-member v3.1 etcd cluster running on a local machine.

#### 1. Check upgrade requirements

Is the cluster healthy and running v3.1.x?

```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:2379 is healthy: successfully committed proposal: took = 6.600684ms
localhost:22379 is healthy: successfully committed proposal: took = 8.540064ms
localhost:32379 is healthy: successfully committed proposal: took = 8.763432ms

$ curl http://localhost:2379/version
{"etcdserver":"3.1.7","etcdcluster":"3.1.0"}
```

#### 2. Stop the existing etcd process

When each etcd process is stopped, expected errors will be logged by other cluster members. This is normal since a cluster member connection has been (temporarily) broken:

```
2017-04-27 14:13:31.491746 I | raft: c89feb932daef420 [term 3] received MsgTimeoutNow from 6d4f535bae3ab960 and starts an election to get leadership.
2017-04-27 14:13:31.491769 I | raft: c89feb932daef420 became candidate at term 4
2017-04-27 14:13:31.491788 I | raft: c89feb932daef420 received MsgVoteResp from c89feb932daef420 at term 4
2017-04-27 14:13:31.491797 I | raft: c89feb932daef420 [logterm: 3, index: 9] sent MsgVote request to 6d4f535bae3ab960 at term 4
2017-04-27 14:13:31.491805 I | raft: c89feb932daef420 [logterm: 3, index: 9] sent MsgVote request to 9eda174c7df8a033 at term 4
2017-04-27 14:13:31.491815 I | raft: raft.node: c89feb932daef420 lost leader 6d4f535bae3ab960 at term 4
2017-04-27 14:13:31.524084 I | raft: c89feb932daef420 received MsgVoteResp from 6d4f535bae3ab960 at term 4
2017-04-27 14:13:31.524108 I | raft: c89feb932daef420 [quorum:2] has received 2 MsgVoteResp votes and 0 vote rejections
2017-04-27 14:13:31.524123 I | raft: c89feb932daef420 became leader at term 4
2017-04-27 14:13:31.524136 I | raft: raft.node: c89feb932daef420 elected leader c89feb932daef420 at term 4
2017-04-27 14:13:31.592650 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream MsgApp v2 reader)
2017-04-27 14:13:31.592825 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream Message reader)
2017-04-27 14:13:31.693275 E | rafthttp: failed to dial 6d4f535bae3ab960 on stream Message (dial tcp [::1]:2380: getsockopt: connection refused)
2017-04-27 14:13:31.693289 I | rafthttp: peer 6d4f535bae3ab960 became inactive
2017-04-27 14:13:31.936678 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream Message writer)
```

It's a good idea at this point to [backup the etcd data](../op-guide/maintenance.md#snapshot-backup) to provide a downgrade path should any problems occur:

```
$ etcdctl snapshot save backup.db
```

#### 3. Drop-in etcd v3.2 binary and start the new etcd process

The new v3.2 etcd will publish its information to the cluster:

```
2017-04-27 14:14:25.363225 I | etcdserver: published {Name:s1 ClientURLs:[http://localhost:2379]} to cluster a9ededbffcb1b1f1
```

Verify that each member, and then the entire cluster, becomes healthy with the new v3.2 etcd binary:

```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:22379 is healthy: successfully committed proposal: took = 5.540129ms
localhost:32379 is healthy: successfully committed proposal: took = 7.321771ms
localhost:2379 is healthy: successfully committed proposal: took = 10.629901ms
```

Upgraded members will log warnings like the following until the entire cluster is upgraded. This is expected and will cease after all etcd cluster members are upgraded to v3.2:

```
2017-04-27 14:15:17.071804 W | etcdserver: member c89feb932daef420 has a higher version 3.2.0
2017-04-27 14:15:21.073110 W | etcdserver: the local etcd version 3.1.7 is not up-to-date
2017-04-27 14:15:21.073142 W | etcdserver: member 6d4f535bae3ab960 has a higher version 3.2.0
2017-04-27 14:15:21.073157 W | etcdserver: the local etcd version 3.1.7 is not up-to-date
2017-04-27 14:15:21.073164 W | etcdserver: member c89feb932daef420 has a higher version 3.2.0
```

#### 4. Repeat step 2 to step 3 for all other members

#### 5. Finish

When all members are upgraded, the cluster will report upgrading to 3.2 successfully:

```
2017-04-27 14:15:54.536901 N | etcdserver/membership: updated the cluster version from 3.1 to 3.2
2017-04-27 14:15:54.537035 I | etcdserver/api: enabled capabilities for version 3.2
```

```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:2379 is healthy: successfully committed proposal: took = 2.312897ms
localhost:22379 is healthy: successfully committed proposal: took = 2.553476ms
localhost:32379 is healthy: successfully committed proposal: took = 2.517902ms
```

[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev

Documentation/upgrades/upgrading-etcd.md (new file, 19 lines)
@ -0,0 +1,19 @@

# Upgrading etcd clusters and applications

This section contains documents specific to upgrading etcd clusters and applications.

## Moving from etcd API v2 to API v3
* [Migrate applications from using API v2 to API v3][migrate-apps]

## Upgrading an etcd v3.x cluster
* [Upgrade etcd from 3.0 to 3.1][upgrade-3-1]
* [Upgrade etcd from 3.1 to 3.2][upgrade-3-2]

## Upgrading from etcd v2.3
* [Upgrade a v2.3 cluster to v3.0][upgrade-cluster]

[migrate-apps]: ../op-guide/v2-migration.md
[upgrade-cluster]: upgrade_3_0.md
[upgrade-3-1]: upgrade_3_1.md
[upgrade-3-2]: upgrade_3_2.md

@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# Snapshot Migration

You can migrate a snapshot of your data from a v0.4.9+ cluster into a new etcd 2.2 cluster using a snapshot migration. After snapshot migration, the etcd indexes of your data will change. Many etcd applications rely on these indexes to behave correctly. This operation should only be done while all etcd applications are stopped.

@ -1,165 +1,85 @@

# etcd2
# Documentation

[Go Report Card](https://goreportcard.com/report/github.com/coreos/etcd)
[Build Status](https://travis-ci.org/coreos/etcd)
[Semaphore CI](https://semaphoreci.com/coreos/etcd)
[Docker Repository on Quay](https://quay.io/repository/coreos/etcd-git)
etcd is a distributed key-value store designed to reliably and quickly preserve and provide access to critical data. It enables reliable distributed coordination through distributed locking, leader elections, and write barriers. An etcd cluster is intended for high availability and permanent data storage and retrieval.

**Note**: The `master` branch may be in an *unstable or even broken state* during development. Please use [releases][github-release] instead of the `master` branch in order to get stable binaries.
This is the etcd v2 documentation set. For more recent versions, please see the [etcd v3 guides][etcd-v3].

## Communicating with etcd v2

etcd is a distributed, consistent key-value store for shared configuration and service discovery, with a focus on being:
Reading and writing into the etcd keyspace is done via a simple, RESTful HTTP API, or using language-specific libraries that wrap the HTTP API with higher level primitives.

* *Simple*: curl'able user-facing API (HTTP+JSON)
* *Secure*: optional SSL client cert authentication
* *Fast*: benchmarked 1000s of writes/s per instance
* *Reliable*: properly distributed using Raft
### Reading and Writing

etcd is written in Go and uses the [Raft][raft] consensus algorithm to manage a highly-available replicated log.
- [Client API Documentation][api]
- [Libraries, Tools, and Language Bindings][libraries]
- [Admin API Documentation][admin-api]
- [Members API][members-api]

etcd is used [in production by many companies](./production-users.md), and the development team stands behind it in critical deployment scenarios, where etcd is frequently teamed with applications such as [Kubernetes][k8s], [fleet][fleet], [locksmith][locksmith], [vulcand][vulcand], and many others.
### Security, Auth, Access control

See [etcdctl][etcdctl] for a simple command line client.
Or feel free to just use `curl`, as in the examples below.
- [Security Model][security]
- [Auth and Security][auth_api]
- [Authentication Guide][authentication]

[raft]: https://raft.github.io/
[k8s]: http://kubernetes.io/
[fleet]: https://github.com/coreos/fleet
[locksmith]: https://github.com/coreos/locksmith
[vulcand]: https://github.com/vulcand/vulcand
[etcdctl]: https://github.com/coreos/etcd/tree/master/etcdctl
## etcd v2 Cluster Administration

## Getting Started
Configuration values are distributed within the cluster for your applications to read. Values can be changed programmatically and smart applications can reconfigure automatically. You'll never again have to run a configuration management tool on every machine in order to change a single config value.

### Getting etcd
### General Info

The easiest way to get etcd is to use one of the pre-built release binaries which are available for OSX, Linux, Windows, AppC (ACI), and Docker. Instructions for using these binaries are on the [GitHub releases page][github-release].
- [etcd Proxies][proxy]
- [Production Users][production-users]
- [Admin Guide][admin_guide]
- [Configuration Flags][configuration]
- [Frequently Asked Questions][faq]

For those wanting to try the very latest version, you can build the latest version of etcd from the `master` branch.
You will first need [*Go*](https://golang.org/) installed on your machine (version 1.5+ is required).
All development occurs on `master`, including new features and bug fixes.
Bug fixes are first targeted at `master` and subsequently ported to release branches, as described in the [branch management][branch-management] guide.
### Initial Setup

[github-release]: https://github.com/coreos/etcd/releases/
[branch-management]: branch_management.md
- [Tuning etcd Clusters][tuning]
- [Discovery Service Protocol][discovery_protocol]
- [Running etcd under Docker][docker_guide]

### Running etcd
### Live Reconfiguration

First start a single-member cluster of etcd:
- [Runtime Configuration][runtime-configuration]

```sh
./bin/etcd
```
### Debugging etcd

This will bring up etcd listening on port 2379 for client communication and on port 2380 for server-to-server communication.
- [Metrics Collection][metrics]
- [Error Code][errorcode]
- [Reporting Bugs][reporting_bugs]

Next, let's set a single key, and then retrieve it:
### Migration

```
curl -L http://127.0.0.1:2379/v2/keys/mykey -XPUT -d value="this is awesome"
curl -L http://127.0.0.1:2379/v2/keys/mykey
```
- [Upgrade etcd to 2.3][upgrade_2_3]
- [Upgrade etcd to 2.2][upgrade_2_2]
- [Upgrade to etcd 2.1][upgrade_2_1]
- [Snapshot Migration (0.4.x to 2.x)][04_to_2_snapshot_migration]
- [Backward Compatibility][backward_compatibility]

You have successfully started an etcd and written a key to the store.

### etcd TCP ports

The [official etcd ports][iana-ports] are 2379 for client requests, and 2380 for peer communication. To maintain compatibility, some etcd configuration and documentation continues to refer to the legacy ports 4001 and 7001, but all new etcd use and discussion should adopt the IANA-assigned ports. The legacy ports 4001 and 7001 will be fully deprecated, and support for their use removed, in future etcd releases.

[iana-ports]: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=etcd

### Running local etcd cluster

First install [goreman](https://github.com/mattn/goreman), which manages Procfile-based applications.

Our [Procfile script](./Procfile) will set up a local example cluster. You can start it with:

```sh
goreman start
```

This will bring up 3 etcd members `infra1`, `infra2` and `infra3` and etcd proxy `proxy`, which runs locally and composes a cluster.

You can write a key to the cluster and retrieve the value back from any member or proxy.

### Next Steps

Now it's time to dig into the full etcd API and other guides.

- Explore the full [API][api].
- Set up a [multi-machine cluster][clustering].
- Learn the [config format, env variables and flags][configuration].
- Find [language bindings and tools][libraries-and-tools].
- Use TLS to [secure an etcd cluster][security].
- [Tune etcd][tuning].
- [Upgrade from 0.4.9+ to 2.2.0][upgrade].

[api]: ./api.md
[clustering]: ./clustering.md
[configuration]: ./configuration.md
[libraries-and-tools]: ./libraries-and-tools.md
[security]: ./security.md
[tuning]: ./tuning.md
[upgrade]: ./04_to_2_snapshot_migration.md

## Contact

- Mailing list: [etcd-dev](https://groups.google.com/forum/?hl=en#!forum/etcd-dev)
- IRC: #[etcd](irc://irc.freenode.org:6667/#etcd) on freenode.org
- Planning/Roadmap: [milestones](https://github.com/coreos/etcd/milestones), [roadmap](../../ROADMAP.md)
- Bugs: [issues](https://github.com/coreos/etcd/issues)

## Contributing

See [CONTRIBUTING](../../CONTRIBUTING.md) for details on submitting patches and the contribution workflow.

## Reporting bugs

See [reporting bugs](reporting_bugs.md) for details about reporting any issue you may encounter.

## Known bugs

[GH518](https://github.com/coreos/etcd/issues/518) is a known bug. The issue is that:

```
curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=bar
curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d dir=true -d prevExist=true
```

If the previous node is a key and the client tries to overwrite it with `dir=true`, it does not give a warning such as `Not a directory`. Instead, the key is set to an empty value.

## Project Details

### Versioning

#### Service Versioning

etcd uses [semantic versioning](http://semver.org).
New minor versions may add additional features to the API.

You can get the version of etcd by issuing a request to /version:

```sh
curl -L http://127.0.0.1:2379/version
```

#### API Versioning

The `v2` API responses should not change after the 2.0.0 release but new features will be added over time.

#### 32-bit and other unsupported systems

etcd has known issues on 32-bit systems due to a bug in the Go runtime. See [#358][358] for more information.

To avoid inadvertently running a possibly unstable etcd server, `etcd` on unsupported architectures will print a warning message and immediately exit if the environment variable `ETCD_UNSUPPORTED_ARCH` is not set to the target architecture.

Currently only the amd64 architecture is officially supported by `etcd`.

[358]: https://github.com/coreos/etcd/issues/358

### License

etcd is under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.
[etcd-v3]: ../docs.md
[api]: api.md
[libraries]: libraries-and-tools.md
[admin-api]: other_apis.md
[members-api]: members_api.md
[security]: security.md
[auth_api]: auth_api.md
[authentication]: authentication.md
[proxy]: proxy.md
[production-users]: production-users.md
[admin_guide]: admin_guide.md
[configuration]: configuration.md
[faq]: faq.md
[tuning]: tuning.md
[discovery_protocol]: discovery_protocol.md
[docker_guide]: docker_guide.md
[runtime-configuration]: runtime-configuration.md
[metrics]: metrics.md
[errorcode]: errorcode.md
[reporting_bugs]: reporting_bugs.md
[upgrade_2_3]: upgrade_2_3.md
[upgrade_2_2]: upgrade_2_2.md
[upgrade_2_1]: upgrade_2_1.md
[04_to_2_snapshot_migration]: 04_to_2_snapshot_migration.md
[backward_compatibility]: backward_compatibility.md


@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# Administration

## Data Directory

@ -8,7 +13,7 @@ When first started, etcd stores its configuration into a data directory specifie
Configuration is stored in the write ahead log and includes: the local member ID, cluster ID, and initial cluster configuration.
The write ahead log and snapshot files are used during member operation and to recover after a restart.

Having a dedicated disk to store wal files can improve the throughput and stabilize the cluster.
It is highly recommended to dedicate a wal disk and set `--wal-dir` to point to a directory on that device for a production cluster deployment.
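
A sketch of such a deployment, assuming the dedicated device is mounted at `/mnt/wal-disk` (the paths and member name are illustrative):

```sh
etcd --name infra0 --data-dir /var/lib/etcd --wal-dir /mnt/wal-disk/etcd-wal
```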

If a member’s data directory is ever lost or corrupted then the user should [remove][remove-a-member] the etcd member from the cluster using the `etcdctl` tool.
@ -51,7 +56,7 @@ $ curl -L http://127.0.0.1:2379/health

You can also use etcdctl to check the cluster-wide health information. It will contact all the members of the cluster and collect the health information for you.

```
$ ./etcdctl cluster-health
member 8211f1d0f64f3269 is healthy: got healthy result from http://127.0.0.1:12379
member 91bc3c398fb3c146 is healthy: got healthy result from http://127.0.0.1:22379
member fd422379fda50e48 is healthy: got healthy result from http://127.0.0.1:32379

@ -216,7 +221,7 @@ To recover from such scenarios, etcd provides functionality to backup and restor

#### Backing up the datastore

**NB:** Windows users must stop etcd before running the backup command.
**Note:** Windows users must stop etcd before running the backup command.

The first step of the recovery is to backup the data directory and wal directory, if stored separately, on a functioning etcd node. To do this, use the `etcdctl backup` command, passing in the original data (and wal) directory used by etcd. For example:

@ -262,7 +267,9 @@ Once you have verified that etcd has started successfully, shut it down and move

Now that the node is running successfully, [change its advertised peer URLs][update-a-member], as the `--force-new-cluster` option has set the peer URL to the default listening on localhost.

You can then add more nodes to the cluster and restore resiliency. See the [add a new member][add-a-member] guide for more details. **NB:** If you are trying to restore your cluster using old failed etcd nodes, please make sure you have stopped old etcd instances and removed their old data directories specified by the data-dir configuration parameter.
You can then add more nodes to the cluster and restore resiliency. See the [add a new member][add-a-member] guide for more details.

**Note:** If you are trying to restore your cluster using old failed etcd nodes, please make sure you have stopped old etcd instances and removed their old data directories specified by the data-dir configuration parameter.

### Client Request Timeout

@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# etcd API

## Running a Single Machine Cluster

@ -318,7 +323,7 @@ The first terminal should get the notification and return with the same response

However, the watch command can do more than this.
Using the index, we can watch for commands that have happened in the past.
This is useful for ensuring you don't miss events between watch commands.
Typically, we watch again from the `modifiedIndex` + 1 of the node we got.

Let's try to watch for the set command of index 7 again:
@ -338,13 +343,13 @@ curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=8'

Then even if etcd is on index 9 or 800, the first event to occur to the `/foo`
key between 8 and the current index will be returned.

**Note**: etcd only keeps the responses of the most recent 1000 events across all etcd keys.
It is recommended to send the response to another thread to process immediately
instead of blocking the watch while processing the result.

#### Watch from cleared event index

If we miss all the 1000 events, we need to recover the current state of the
watching key space through a get and then start to watch from the
`X-Etcd-Index` + 1.
@ -366,7 +371,7 @@ To start watch, first we need to fetch the current state of key `/foo`:

```sh
curl 'http://127.0.0.1:2379/v2/keys/foo' -vv
```

```
< HTTP/1.1 200 OK
< Content-Type: application/json
< X-Etcd-Cluster-Id: 7e27652122e8b2ae
@ -375,7 +380,7 @@ curl 'http://127.0.0.1:2379/v2/keys/foo' -vv
< X-Raft-Term: 2
< Date: Mon, 05 Jan 2015 18:54:43 GMT
< Transfer-Encoding: chunked
<
{"action":"get","node":{"key":"/foo","value":"bar","modifiedIndex":7,"createdIndex":7}}
```
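Given a response whose `X-Etcd-Index` is 7 (that header line was elided from this hunk; the value is illustrative), a watch that cannot miss events resumes from that index plus one:

```sh
curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=8'
```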
@ -559,6 +564,25 @@ Let's create a key-value pair first: `foo=one`.

```sh
curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=one
```

```json
{
    "action":"set",
    "node":{
        "key":"/foo",
        "value":"one",
        "modifiedIndex":4,
        "createdIndex":4
    }
}
```
Specifying the `noValueOnSuccess` option skips returning the node as value.

```sh
curl http://127.0.0.1:2379/v2/keys/foo?noValueOnSuccess=true -XPUT -d value=one
# {"action":"set"}
```

Now let's try some invalid `CompareAndSwap` commands.

Trying to set this existing key with `prevExist=false` fails as expected:
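The response itself was elided from this diff; a sketch of what such a failing request looks like, with the error payload drawn from the v2 error table (`errorCode` 105, "Key already exists"; the index value is illustrative):

```sh
curl 'http://127.0.0.1:2379/v2/keys/foo?prevExist=false' -XPUT -d value=three
# {"errorCode":105,"message":"Key already exists","cause":"/foo","index":8}
```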
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# etcd3 API

TODO: API doc

@ -18,7 +23,7 @@ A key’s lifetime spans a generation. Each key may have one or multiple generat

### Physical View

etcd stores the physical data as key-value pairs in a persistent [b+tree][b+tree]. Each revision of the store’s state only contains the delta from its previous revision to be efficient. A single revision may correspond to multiple keys in the tree.

The key of a key-value pair is a 3-tuple (major, sub, type). Major is the store revision holding the key. Sub differentiates among keys within the same revision. Type is an optional suffix for special values (e.g., `t` if the value contains a tombstone). The value of the key-value pair contains the modification from the previous revision, thus one delta from the previous revision. The b+tree is ordered by key in lexical byte-order. Ranged lookups over revision deltas are fast; this enables quickly finding modifications from one specific revision to another. Compaction removes out-of-date key-value pairs.
@ -47,7 +52,7 @@ An etcd operation is considered complete when it is committed through consensus,

#### revision

An etcd operation that modifies the key value store is assigned a single increasing revision. A transaction operation might modify the key value store multiple times, but only one revision is assigned. The revision attribute of a key value pair modified by the operation has the same value as the revision of the operation. The revision can be used as a logical clock for the key value store. A key value pair that has a larger revision is modified after a key value pair with a smaller revision. Two key value pairs that have the same revision are modified by an operation "concurrently".
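A quick way to observe this with the v3 CLI; a hedged sketch (the revision numbers are illustrative, and the JSON output base64-encodes keys and values):

```sh
ETCDCTL_API=3 etcdctl put foo bar
ETCDCTL_API=3 etcdctl get foo -w json
# ... "kvs":[{"key":"Zm9v","create_revision":2,"mod_revision":2,"version":1,"value":"YmFy"}] ...
```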
### Guarantees Provided

@ -73,7 +78,7 @@ Any completed operations are durable. All accessible data is also durable data.

#### Linearizability

Linearizability (also known as Atomic Consistency or External Consistency) is a consistency level between strict consistency and sequential consistency.

For linearizability, suppose each operation receives a timestamp from a loosely synchronized global clock. Operations are linearized if and only if they always complete as though they were executed in a sequential order and each operation appears to complete in the order specified by the program. Likewise, if an operation’s timestamp precedes another, that operation must also precede the other operation in the sequence.

@ -83,10 +88,10 @@ etcd does not ensure linearizability for watch operations. Users are expected to

etcd ensures linearizability for all other operations by default. Linearizability comes with a cost, however, because linearized requests must go through the Raft consensus process. To obtain lower latencies and higher throughput for read requests, clients can configure a request’s consistency mode to `serializable`, which may access stale data with respect to quorum, but removes the performance penalty of linearized accesses' reliance on live consensus.
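The v3 CLI exposes this choice directly; a minimal sketch (`l` for linearizable is the default, `s` selects serializable):

```sh
# linearizable read: goes through consensus, never stale
ETCDCTL_API=3 etcdctl get foo --consistency=l
# serializable read: served from the local member, lower latency, possibly stale
ETCDCTL_API=3 etcdctl get foo --consistency=s
```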
[persistent-ds]: https://en.wikipedia.org/wiki/Persistent_data_structure
[btree]: https://en.wikipedia.org/wiki/B-tree
[b+tree]: https://en.wikipedia.org/wiki/B%2B_tree
[seq_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Sequential_consistency
[strict_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Strict_consistency
[serializable_isolation]: https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable
[Linearizability]: #linearizability
@ -1,13 +1,18 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# v2 Auth and Security

## etcd Resources

There are three types of resources in etcd:

1. permission resources: users and roles in the user store
2. key-value resources: key-value pairs in the key-value store
3. settings resources: security settings, auth settings, and dynamic etcd cluster settings (election/heartbeat)

### Permission Resources

#### Users

A user is an identity to be authenticated. Each user can have multiple roles. The user has a capability (such as reading or writing) on the resource if one of the roles has that capability.
@ -15,7 +20,7 @@ A user is an identity to be authenticated. Each user can have multiple roles. Th

A user named `root` is required before authentication can be enabled, and it always has the ROOT role. The ROOT role can be granted to multiple users, but `root` is required for recovery purposes.

#### Roles

Each role has exactly one associated Permission List. A permission list exists for each permission on key-value resources.

The special static ROOT (named `root`) role has full permissions on all key-value resources and the permission to manage user resources and settings resources. Only the ROOT role has the permission to manage user resources and modify settings resources. The ROOT role is built-in and does not need to be created.

@ -30,8 +35,8 @@ A Permission List is a list of allowed patterns for that particular permission (

### Key-Value Resources

A key-value resource is a key-value pair in the store. Given a list of matching patterns, permission for any given key in a request is granted if any of the patterns in the list match.

Only prefixes or exact keys are supported. A prefix permission string ends in `*`.
A permission on `/foo` is for that exact key or directory, not its children or recursively. `/foo*` is a prefix that matches `/foo` recursively, and all keys thereunder, and keys with that prefix (e.g. `/foobar`. Contrast to the prefix `/foo/*`). `*` alone is permission on the full keyspace.
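As an illustration of those rules (keys hypothetical):

```
/foo     matches only the exact key (or directory) /foo
/foo*    matches /foo, /foobar, and everything under /foo recursively
/foo/*   matches keys under the /foo/ directory, such as /foo/bar, but not /foobar
*        matches the entire keyspace
```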
### Settings Resources

@ -66,7 +71,7 @@ An Error JSON corresponds to:
    }

#### Enable and Disable Authentication

**Get auth status**

GET /v2/auth/enable
@ -215,8 +220,8 @@ PUT /v2/auth/users/charlie

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Put Body:
        JSON struct, above, matching the appropriate name
        * Starting password and roles when creating.
        * Grant/Revoke/Password filled in when updating (to grant roles, revoke roles, or change the password).
    Possible Status Codes:
        200 OK
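As a hedged illustration of this spec (the user name, password, role, and Basic auth credentials are all hypothetical):

```
curl -u root:rootpw -X PUT http://127.0.0.1:2379/v2/auth/users/charlie \
  -H 'Content-Type: application/json' \
  -d '{"user": "charlie", "password": "s3cret", "roles": ["rkt"]}'
```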
@ -345,7 +350,7 @@ PUT /v2/auth/roles/rkt

        401 Unauthorized
        404 Not Found (update non-existent roles)
        409 Conflict (when granting duplicated permission or revoking non-existent permission)
    200 Body:
        JSON state of the role

**Remove A Role**
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# Authentication Guide

## Overview

@ -14,7 +19,7 @@ There is one special user, `root`, and there are two special roles, `root` and `

### User `root`

User `root` must be created before security can be activated. It has the `root` role and allows for the changing of anything inside etcd. The idea behind the `root` user is for recovery purposes -- a password is generated and stored somewhere -- and the root role is granted to the administrator accounts on the system. In the future, for troubleshooting and recovery, we will need to assume some access to the system, and future documentation will assume this root user (though anyone with the role will suffice).

### Role `root`

@ -104,7 +109,7 @@ $ etcdctl role grant myrolename -path '/foo/bar' -write

```
$ etcdctl role grant myrolename -path '/pub/*' -readwrite
```

Beware that

```
# Give full access to keys under /pub??
```
@ -133,12 +138,12 @@ $ etcdctl role remove myrolename

## Enabling authentication

The minimal steps to enabling auth are as follows. The administrator can set up users and roles before or after enabling authentication, as a matter of preference.

Make sure the root user is created:

```
$ etcdctl user add root
New password:
```
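With the root user in place, authentication can then be switched on:

```
$ etcdctl auth enable
```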
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# Backward Compatibility

The main goal of the etcd 2.0 release is to improve cluster safety around bootstrapping and dynamic reconfiguration. To do this, we deprecated the old error-prone APIs and provide a new set of APIs.

@ -32,7 +37,7 @@ The consistent flag for read operations is removed in etcd 2.0.0. The normal rea

The read consistency guarantees are:

The consistent read guarantees the sequential consistency within one client that talks to one etcd server. Reads and writes from one client to one etcd member should be observed in order. If one client writes a value to an etcd server successfully, it should be able to get the value out of the server immediately.

Each etcd member will proxy the request to the leader and only return the result to the user after the result is applied on the local member. Thus after the write succeeds, the user is guaranteed to see the value on the member it sent the request to.
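For reads that must instead reflect the cluster-wide quorum state, the v2 API accepts a quorum option on GET; a minimal sketch:

```
curl 'http://127.0.0.1:2379/v2/keys/foo?quorum=true'
```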
@ -56,6 +61,7 @@ Proxy mode in 2.0 will provide similar functionality, and with improved control

## Discovery Service

A size key needs to be provided inside a [discovery token][discoverytoken].

[discoverytoken]: clustering.md#custom-etcd-discovery-service

## HTTP Admin API
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../../docs.md#documentation

# Benchmarks

etcd benchmarks will be published regularly and tracked for each release below:

@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../../docs.md#documentation

## Physical machines

GCE n1-highcpu-2 machine type

@ -49,4 +54,4 @@ Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send r

| 256 | 256 | all servers | 3061 | 119.3 |

[boom]: https://github.com/rakyll/boom
[hack-benchmark]: ../../../hack/benchmark/
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../../docs.md#documentation

# Benchmarking etcd v2.2.0

## Physical Machines

@ -24,7 +29,7 @@ Go OS/Arch: linux/amd64

## Testing

Bootstrap another machine, outside of the etcd cluster, and run the [`boom` HTTP benchmark tool][boom] with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions][hack] for the patch and the steps to reproduce our procedures.

The performance is calculated through results of 100 benchmark rounds.

@ -66,4 +71,7 @@ The performance is calulated through results of 100 benchmark rounds.

- Write QPS to cluster leaders seems to be increased by a small margin. This is because the main loop and entry apply loops were decoupled in the etcd raft logic, eliminating several blocks between them.

- Write QPS to all members seems to be increased by a significant margin, because followers now receive the latest commit index sooner, and commit proposals more quickly.

[boom]: https://github.com/rakyll/boom
[hack]: ../../../hack/benchmark/
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../../docs.md#documentation

## Physical machines

GCE n1-highcpu-2 machine type

@ -69,4 +74,4 @@ Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send r

[boom]: https://github.com/rakyll/boom
[c7146bd5]: https://github.com/coreos/etcd/commits/c7146bd5f2c73716091262edc638401bb8229144
[etcd-2.1-benchmark]: etcd-2-1-0-alpha-benchmarks.md
[hack-benchmark]: ../../../hack/benchmark/
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../../docs.md#documentation

## Physical machine

GCE n1-standard-2 machine type

@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../../docs.md#documentation

## Physical machines

GCE n1-highcpu-2 machine type

@ -39,4 +44,4 @@ The performance is nearly the same as the one with empty server handler.

The performance with empty server handler is not affected by one put. So the
performance downgrade should be caused by the storage package.

[etcd-v3-benchmark]: ../../../tools/benchmark/
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../../docs.md#documentation

# Watch Memory Usage Benchmark

*NOTE*: The watch features are under active development, and their memory usage may change as that development progresses. We do not expect it to significantly increase beyond the figures stated below.

@ -5,10 +10,10 @@

A primary goal of etcd is supporting a very large number of watchers doing a massively large amount of watching. etcd aims to support O(10k) clients, O(100K) watch streams (O(10) streams per client) and O(10M) total watchings (O(100) watching per stream). The memory consumed by each individual watching accounts for the largest portion of etcd's overall usage, and is therefore the focus of current and future optimizations.

Three related components of etcd watch consume physical memory: each `grpc.Conn`, each watch stream, and each instance of the watching activity. `grpc.Conn` maintains the actual TCP connection and other gRPC connection state. Each `grpc.Conn` consumes O(10kb) of memory, and might have multiple watch streams attached.

Each watch stream is an independent HTTP2 connection which consumes another O(10kb) of memory.
Multiple watchings might share one watch stream.

Watching is the actual struct that tracks the changes on the key-value store. Each watching should only consume < O(1kb).
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../../docs.md#documentation

# Storage Memory Usage Benchmark

<!---todo: link storage to storage design doc-->

@ -60,7 +65,7 @@ GCE n1-standard-2 machine type

In this test, we only benchmark the memory usage of the in-memory index. The goal is to find `c1` and `c2` mentioned above and to understand the hard limit of memory consumption of the storage.

We calculate the memory usage via the Go `runtime.ReadMemStats`. We calculate the total allocated bytes difference before creating the index and after creating the index. It cannot perfectly reflect the memory usage of the in-memory index itself but can show the rough consumption pattern.

| N | versions | key size | memory usage |
|------|----------|----------|--------------|
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# Branch Management

## Guide

@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# Clustering Guide

## Overview

@ -423,7 +428,7 @@ To make understanding this feature easier, we changed the naming of some flags,

|-peers |none |Deprecated. The --initial-cluster flag provides a similar concept with different semantics. Please read this guide on cluster startup.|
|-peers-file |none |Deprecated. The --initial-cluster flag provides a similar concept with different semantics. Please read this guide on cluster startup.|

[client]: ../../client
[client-discoverer]: https://godoc.org/github.com/coreos/etcd/client#Discoverer
[conf-adv-client]: configuration.md#-advertise-client-urls
[conf-listen-client]: configuration.md#-listen-client-urls
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# Configuration Flags

etcd is configurable through command-line flags and environment variables. Options set on the command line take precedence over those from the environment.

@ -176,7 +181,10 @@ To start etcd automatically using custom settings at startup in Linux, using a [

The security flags help to [build a secure etcd cluster][security].

### --ca-file

**DEPRECATED**

+ Path to the client server TLS CA file. `--ca-file ca.crt` could be replaced by `--trusted-ca-file ca.crt --client-cert-auth` and etcd will perform the same.
+ default: none
+ env variable: ETCD_CA_FILE
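A sketch of the suggested migration (the `ca.crt` file name follows the flag description above):

```
# deprecated form
etcd --ca-file ca.crt

# equivalent replacement
etcd --trusted-ca-file ca.crt --client-cert-auth
```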
@ -201,7 +209,10 @@ The security flags help to [build a secure etcd cluster][security].

+ default: none
+ env variable: ETCD_TRUSTED_CA_FILE

### --peer-ca-file

**DEPRECATED**

+ Path to the peer server TLS CA file. `--peer-ca-file ca.crt` could be replaced by `--peer-trusted-ca-file ca.crt --peer-client-cert-auth` and etcd will perform the same.
+ default: none
+ env variable: ETCD_PEER_CA_FILE

@ -234,7 +245,7 @@ The security flags help to [build a secure etcd cluster][security].

+ env variable: ETCD_DEBUG

### --log-package-levels

+ Set individual etcd subpackages to specific log levels. An example being `etcdserver=WARNING,security=DEBUG`
+ default: none (INFO for all packages)
+ env variable: ETCD_LOG_PACKAGE_LEVELS
@ -266,13 +277,13 @@ Follow the instructions when using these flags.

## Profiling flags

### --enable-pprof

+ Enable runtime profiling data via HTTP server. Address is at client URL + "/debug/pprof/"
+ default: false
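For example, with profiling enabled the pprof index can be fetched from the client URL (address illustrative):

```
etcd --enable-pprof &
curl http://127.0.0.1:2379/debug/pprof/
```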
[build-cluster]: clustering.md#static
[reconfig]: runtime-configuration.md
[discovery]: clustering.md#discovery
[iana-ports]: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=etcd
[proxy]: proxy.md
[restore]: admin_guide.md#restoring-a-backup
@ -1,8 +1,13 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../../docs.md#documentation

# etcd release guide

The guide talks about how to release a new version of etcd.

The procedure includes some manual steps for sanity checking but it can probably be further scripted. Please keep this document up-to-date if you want to make changes to the release process.

## Prepare Release

@ -48,7 +53,7 @@ All releases version numbers follow the format of [semantic versioning 2.0.0](ht

## Build Release Binaries and Images

- Ensure `acbuild` is available.
- Ensure `docker` is available.

Run release script in root directory:

@ -70,7 +75,7 @@ cd release

```
# personal GPG is okay for now
for i in etcd-*{.zip,.tar.gz}; do gpg --sign ${i}; done
# use `CoreOS ACI Builder <release@coreos.com>` secret key
for aci in etcd-${VERSION}.*.aci; do gpg -u 88182190 -a --output ${aci}.asc --detach-sig ${aci}; done
```
## Publish Release Page in GitHub
|
||||
@ -88,6 +93,7 @@ gpg -u 88182190 -a --output etcd-${VERSION}-linux-amd64.aci.asc --detach-sig etc
|
||||
```
|
||||
docker login quay.io
|
||||
docker push quay.io/coreos/etcd:${VERSION}
|
||||
docker push quay.io/coreos/etcd:${VERSION}-${arch}
|
||||
```
|
||||
|
||||
- Add `latest` tag to the new image on [quay.io](https://quay.io/repository/coreos/etcd?tag=latest&tab=tags) if this is a stable release.
|
||||
|
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# Discovery Service Protocol

The discovery service protocol helps new etcd members discover all the other members during the cluster bootstrap phase, using a shared discovery URL.
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# Running etcd under Docker

The following guide will show you how to run etcd under Docker using the [static bootstrap process](clustering.md#static).

@ -16,7 +21,7 @@ This will run the latest release version of etcd. You can specify version if nee

```
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --name etcd quay.io/coreos/etcd:v2.3.8 \
 -name etcd0 \
 -advertise-client-urls http://${HostIP}:2379,http://${HostIP}:4001 \
 -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
```

@ -42,11 +47,13 @@ etcdctl -C http://192.168.12.50:4001 member list

Using Docker to setup a multi-node cluster is very similar to the standalone mode configuration.
The main difference being the value used for the `-initial-cluster` flag, which must contain the peer urls for each etcd member in the cluster.

**Although the following commands look very similar, note that `-name`, `-advertise-client-urls` and `-initial-advertise-peer-urls` differ for each cluster member**

### etcd0

```
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --name etcd quay.io/coreos/etcd:v2.3.8 \
 -name etcd0 \
 -advertise-client-urls http://192.168.12.50:2379,http://192.168.12.50:4001 \
 -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
```

@ -61,7 +68,7 @@ docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380

```
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --name etcd quay.io/coreos/etcd:v2.3.8 \
 -name etcd1 \
 -advertise-client-urls http://192.168.12.51:2379,http://192.168.12.51:4001 \
 -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
```

@ -76,7 +83,7 @@ docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380

```
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --name etcd quay.io/coreos/etcd:v2.3.8 \
 -name etcd2 \
 -advertise-client-urls http://192.168.12.52:2379,http://192.168.12.52:4001 \
 -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
```
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# Error Code
Documentation/v2/etcd_alert.rules (new file, 121 lines)
@ -0,0 +1,121 @@

### General cluster availability ###

# alert if another failed member will result in an unavailable cluster
ALERT InsufficientMembers
IF count(up{job="etcd"} == 0) > (count(up{job="etcd"}) / 2 - 1)
FOR 3m
LABELS {
  severity = "critical"
}
ANNOTATIONS {
  summary = "etcd cluster insufficient members",
  description = "If one more etcd member goes down the cluster will be unavailable",
}

### HTTP requests alerts ###

# alert if more than 1% of requests to an HTTP endpoint have failed with a non 4xx response
ALERT HighNumberOfFailedHTTPRequests
IF sum by(method) (rate(etcd_http_failed_total{job="etcd", code!~"4[0-9]{2}"}[5m]))
  / sum by(method) (rate(etcd_http_received_total{job="etcd"}[5m])) > 0.01
FOR 10m
LABELS {
  severity = "warning"
}
ANNOTATIONS {
  summary = "a high number of HTTP requests are failing",
  description = "{{ $value }}% of requests for {{ $labels.method }} failed on etcd instance {{ $labels.instance }}",
}

# alert if more than 5% of requests to an HTTP endpoint have failed with a non 4xx response
ALERT HighNumberOfFailedHTTPRequests
IF sum by(method) (rate(etcd_http_failed_total{job="etcd", code!~"4[0-9]{2}"}[5m]))
  / sum by(method) (rate(etcd_http_received_total{job="etcd"}[5m])) > 0.05
FOR 5m
LABELS {
  severity = "critical"
}
ANNOTATIONS {
  summary = "a high number of HTTP requests are failing",
  description = "{{ $value }}% of requests for {{ $labels.method }} failed on etcd instance {{ $labels.instance }}",
}

# alert if 50% of requests get a 4xx response
ALERT HighNumberOfFailedHTTPRequests
IF sum by(method) (rate(etcd_http_failed_total{job="etcd", code=~"4[0-9]{2}"}[5m]))
  / sum by(method) (rate(etcd_http_received_total{job="etcd"}[5m])) > 0.5
FOR 10m
LABELS {
  severity = "critical"
}
ANNOTATIONS {
  summary = "a high number of HTTP requests are failing",
  description = "{{ $value }}% of requests for {{ $labels.method }} failed with 4xx responses on etcd instance {{ $labels.instance }}",
}

# alert if the 99th percentile of HTTP requests take more than 150ms
ALERT HTTPRequestsSlow
IF histogram_quantile(0.99, rate(etcd_http_successful_duration_seconds_bucket[5m])) > 0.15
FOR 10m
LABELS {
  severity = "warning"
}
ANNOTATIONS {
  summary = "slow HTTP requests",
  description = "on etcd instance {{ $labels.instance }} HTTP requests to {{ $labels.method }} are slow",
}

### File descriptor alerts ###

instance:fd_utilization = process_open_fds / process_max_fds

# alert if file descriptors are likely to exhaust within the next 4 hours
ALERT FdExhaustionClose
IF predict_linear(instance:fd_utilization[1h], 3600 * 4) > 1
FOR 10m
LABELS {
  severity = "warning"
}
ANNOTATIONS {
  summary = "file descriptors soon exhausted",
  description = "{{ $labels.job }} instance {{ $labels.instance }} will exhaust its file descriptors soon",
}

# alert if file descriptors are likely to exhaust within the next hour
ALERT FdExhaustionClose
IF predict_linear(instance:fd_utilization[10m], 3600) > 1
FOR 10m
LABELS {
  severity = "critical"
}
ANNOTATIONS {
  summary = "file descriptors soon exhausted",
  description = "{{ $labels.job }} instance {{ $labels.instance }} will exhaust its file descriptors soon",
}

### etcd proposal alerts ###

# alert if there are several failed proposals within an hour
ALERT HighNumberOfFailedProposals
IF increase(etcd_server_proposal_failed_total{job="etcd"}[1h]) > 5
LABELS {
  severity = "warning"
}
ANNOTATIONS {
  summary = "a high number of proposals within the etcd cluster are failing",
  description = "etcd instance {{ $labels.instance }} has seen {{ $value }} proposal failures within the last hour",
}

### etcd disk io latency alerts ###

# alert if 99th percentile of fsync durations is higher than 500ms
ALERT HighFsyncDurations
IF histogram_quantile(0.99, rate(etcd_wal_fsync_durations_seconds_bucket[5m])) > 0.5
FOR 10m
LABELS {
  severity = "warning"
}
ANNOTATIONS {
  summary = "high fsync durations",
  description = "etcd instance {{ $labels.instance }} fsync durations are high",
}
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# FAQ

## 1) Why can an etcd client read an old version of data when a majority of the etcd cluster members are down?

@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# Glossary

This document defines the various terms used in etcd documentation, command line and source code.
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# FAQ

## Initial Bootstrapping UX

@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# Versioning

Goal: We want to be able to upgrade an individual peer in an etcd cluster to a newer version of etcd.
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# Libraries and Tools

**Tools**

@ -53,8 +58,11 @@

- [shafreeck/cetcd](https://github.com/shafreeck/cetcd) - Supports v2

**C++ libraries**

- [edwardcapriolo/etcdcpp](https://github.com/edwardcapriolo/etcdcpp) - Supports v2
- [suryanathan/etcdcpp](https://github.com/suryanathan/etcdcpp) - Supports v2 (with waits)
- [nokia/etcd-cpp-api](https://github.com/nokia/etcd-cpp-api) - Supports v2
- [nokia/etcd-cpp-apiv3](https://github.com/nokia/etcd-cpp-apiv3)

**Clojure libraries**

@ -112,7 +120,6 @@

- [mattn/etcdenv](https://github.com/mattn/etcdenv) - "env" shebang with etcd integration
- [kelseyhightower/confd](https://github.com/kelseyhightower/confd) - Manage local app config files using templates and data from etcd
- [configdb](https://git.autistici.org/ai/configdb/tree/master) - A REST relational abstraction on top of arbitrary database backends, aimed at storing configs and inventories.
- [scrz](https://github.com/scrz/scrz) - Container manager, stores configuration in etcd.
- [fleet](https://github.com/coreos/fleet) - Distributed init system
- [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) - Container cluster manager introduced by Google.
- [mailgun/vulcand](https://github.com/mailgun/vulcand) - HTTP proxy that uses etcd as a configuration backend.
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# Members API

* [List members](#list-members)
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# Metrics

etcd uses [Prometheus][prometheus] for metrics reporting. The metrics can be used for real-time monitoring and debugging. etcd does not persist its metrics; if a member restarts, the metrics will be reset.
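The raw metrics are served in the Prometheus text format on the client URL, so they can be inspected directly (endpoint address illustrative):

```
curl http://127.0.0.1:2379/metrics
```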
@ -14,9 +19,9 @@ The metrics under the `etcd` prefix are for monitoring and alerting. They are st

### http requests

These metrics describe the serving of requests (non-watch events) served by etcd members in non-proxy mode: total
incoming requests, request failures and processing latency (inc. raft rounds for storage). They are useful for tracking
user-generated traffic hitting the etcd cluster.

All these metrics are prefixed with `etcd_http_`
@ -28,20 +33,20 @@ All these metrics are prefixed with `etcd_http_`

Example Prometheus queries that may be useful from these metrics (across all etcd members):

* `sum(rate(etcd_http_failed_total{job="etcd"}[1m])) by (method) / sum(rate(etcd_http_received_total{job="etcd"}[1m])) by (method)`

    Shows the fraction of events that failed by HTTP method across all members, across a time window of `1m`.

* `sum(rate(etcd_http_received_total{job="etcd",method="GET"}[1m])) by (method)`
  `sum(rate(etcd_http_received_total{job="etcd",method!="GET"}[1m])) by (method)`

    Shows the rate of successful readonly/write queries across all servers, across a time window of `1m`.

* `histogram_quantile(0.9, sum(rate(etcd_http_successful_duration_seconds_bucket{job="etcd",method="GET"}[5m])) by (le))`
  `histogram_quantile(0.9, sum(rate(etcd_http_successful_duration_seconds_bucket{job="etcd",method!="GET"}[5m])) by (le))`

    Show the 0.90-tile latency (in seconds) of read/write (respectively) event handling across all members, with a window of `5m`.
### proxy

@ -56,21 +61,21 @@ All these metrics are prefixed with `etcd_proxy_`

| requests_total | Total number of requests by this proxy instance. | Counter(method) |
| handled_total | Total number of fully handled requests, with responses from etcd members. | Counter(method) |
| dropped_total | Total number of dropped requests due to forwarding errors to etcd members. | Counter(method,error) |
| handling_duration_seconds | Bucketed handling times by HTTP method, including round trip to member instances. | Histogram(method) |

Example Prometheus queries that may be useful from these metrics (across all etcd servers):

* `sum(rate(etcd_proxy_handled_total{job="etcd"}[1m])) by (method)`

    Rate of requests (by HTTP method) handled by all proxies, across a window of `1m`.

* `histogram_quantile(0.9, sum(rate(etcd_proxy_handling_duration_seconds_bucket{job="etcd",method="GET"}[5m])) by (le))`
  `histogram_quantile(0.9, sum(rate(etcd_proxy_handling_duration_seconds_bucket{job="etcd",method!="GET"}[5m])) by (le))`

    Show the 0.90-tile latency (in seconds) of handling of user requests across all proxy machines, with a window of `5m`.

* `sum(rate(etcd_proxy_dropped_total{job="etcd"}[1m])) by (proxying_error)`

    Number of failed requests on the proxy. This should be 0; spikes here indicate connectivity issues to the etcd cluster.

## etcd_debugging namespace metrics
@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation

# Miscellaneous APIs

* [Getting the etcd version](#getting-the-etcd-version)

@ -1,3 +1,8 @@

**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../../docs.md#documentation

# FreeBSD

Starting with version 0.1.2 both etcd and etcdctl have been ported to FreeBSD and can