Compare commits

...

1450 Commits

Author SHA1 Message Date
5cf5d88a18 version: 3.3.14
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-16 16:21:44 -07:00
af8cb6c5b9 Documentation/upgrades: special upgrade guides for >= 3.3.14
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-16 16:21:11 -07:00
9dd98b7c90 version: 3.3.14-rc.0
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-15 15:03:43 -07:00
2f3aa893ec vendor: regenerate
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-15 15:02:26 -07:00
d65219c1ef go.mod: regenerate
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-15 15:02:03 -07:00
b9c976eed8 gitignore: track vendor directory
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-15 15:00:46 -07:00
b196734290 *: test with Go 1.12.9
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-15 14:42:32 -07:00
1aa4af83c0 version: 3.3.14-beta.0
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 11:52:26 -07:00
95a5c57754 tests/e2e: add missing curl
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 11:31:53 -07:00
082c5e0705 e2e: move
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 11:22:39 -07:00
33668f4eff test: do not run "v2store" tests
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 11:12:46 -07:00
c7c09c61d0 test: bump up timeout for e2e tests
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:52:16 -07:00
4f1e65418f travis: fix functional tests
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:40:16 -07:00
e16b21be7b functional: add back, travis
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
0e96b34d9f auth: fix tests
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
3c2b1cd76a travis: do not run functional for now
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
37d10dd8b8 travis: skip windows build
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
84508f7c98 test: fix repo path
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
be3babffb7 tests/e2e: fix
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
61065db065 build: remove tools
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
0ddda8c72e integration: fix tests
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
b889245252 integration: fix "HashKVRequest"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
6e37ece3b9 functional: update
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
f68fac655e travis.yml: fix, run e2e
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
dbfc7bd612 integration: update
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
e5c2dff346 etcdserver: detect leader change on reads
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
9561f6b3b6 clientv3: rewrite based on 3.4
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:06 -07:00
a317433854 raft: fix compile error in "Panic"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 04:05:07 -07:00
7eb9a29e26 pkg/*: add
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 04:05:04 -07:00
5a678bb4e3 etcdserver/api/v3rpc: support watch fragmentation
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 01:22:29 -07:00
92a750432f tests: update
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 01:22:29 -07:00
d167714b36 *: regenerate proto
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 01:22:23 -07:00
9f7294f1e0 etcdserver/etcdserverpb/rpc.proto: add watch progress/fragment
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 01:17:29 -07:00
830bba337f vendor: regenerate, upgrade gRPC to 1.23.0
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 01:16:44 -07:00
27cf72b231 go.mod: migrate to Go module
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 01:16:09 -07:00
d7fc66bcbb scripts: update release, genproto, dep
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 01:14:34 -07:00
cc1591aa4e Makefile/build: sync with 3.4 branch
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 01:13:22 -07:00
08124105ad *: use new adt.IntervalTree interface
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:15:49 -07:00
ffe90b9ff3 pkg/adt: remove TODO
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:02:28 -07:00
036bd1ab09 pkg/adt: fix interval tree black-height property based on rbtree
Author: xkey <xk33430@ly.com>
ref. https://github.com/etcd-io/etcd/pull/10978

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:02:21 -07:00
33e4877b56 pkg/adt: document textbook implementation with pseudo-code
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:02:15 -07:00
c25f746f77 pkg/adt: mask test failure, add TODO
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:02:07 -07:00
f4341fd35c pkg/adt: add "IntervalTree.Delete" failure case
Described in https://github.com/etcd-io/etcd/issues/10877.

"black-height" property: Every path from a node to any descendant leaf node must have the same number of black nodes.

Expected

    After deleting 11 (requires rebalancing):
                            [510,511]
                             /      \
                   ----------        --------------------------
                  /                                            \
              [383,384]                                       [830,831]
              /       \                                      /          \
             /         \                                    /            \
      [261,262](red)  [410,411]                     [647,648]           [899,900](red)
          /               \                              \                      /    \
         /                 \                              \                    /      \
      [82,83]           [292,293]                      [815,816](red)   [888,889]    [972,973]
            \                                                           /
             \                                                         /
          [238,239](red)                                       [953,954](red)

Got

    After deleting 11 (requires rebalancing):
                            [510,511]
                             /      \
                   ----------        --------------------------
                  /                                            \
              [82,83]                                       [830,831]
                    \                                      /          \
                     \                                    /            \
                  [383,384]                        [647,648]            [899,900]
                  /       \                              \                  /    \
                 /         \                              \                /      \
           [261,262]      [410,411]                      [815,816]   [888,889]    [972,973]
             /   \                                                                  /
            /     \                                                                /
     [238,239]   [292,293]                                                  [953,954]

This violates "black-height" property.

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:01:58 -07:00
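
The black-height property above can be checked mechanically. Below is a minimal, self-contained Go sketch, assuming a simplified node type (not the actual pkg/adt types), that returns -1 whenever two root-to-leaf paths disagree on their black count:

```
package main

import "fmt"

type color bool

const (
	black color = true
	red   color = false
)

// node is a simplified stand-in for an interval tree node; the real
// pkg/adt nodes also carry intervals and parent pointers.
type node struct {
	c           color
	left, right *node
}

// blackHeight returns the number of black nodes on every path from n
// down to a nil leaf, or -1 if the black-height property is violated.
func blackHeight(n *node) int {
	if n == nil {
		return 1 // nil leaves count as black
	}
	l, r := blackHeight(n.left), blackHeight(n.right)
	if l == -1 || r == -1 || l != r {
		return -1 // some paths carry more black nodes than others
	}
	if n.c == black {
		return l + 1
	}
	return l
}

func main() {
	// A black child on one side and a nil leaf on the other:
	// the two paths disagree (2 vs. 1 black nodes).
	bad := &node{c: black, left: &node{c: black}}
	fmt.Println(blackHeight(bad)) // -1
}
```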
b3152365bb pkg/adt: test node "11" deletion
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:01:51 -07:00
d938435e44 pkg/adt: README "IntervalTree.Delete" test case images
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:01:43 -07:00
594e7d6627 pkg/adt: README initial commit
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:01:35 -07:00
266214d19e pkg/adt: add "visitLevel", make "IntervalTree" interface, more tests
Make "IntervalTree" an interface to abstract range tree interface

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:01:16 -07:00
0b37ae05b1 pkg: clean up code format
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2019-08-09 11:00:44 -07:00
3aef9a1a8f travis: update
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 10:57:38 -07:00
4527f4c4b0 etcdserver: add "etcd_server_snapshot_apply_inflights_total"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-08 15:13:14 -07:00
1c8fab7365 etcdserver/api: add "etcd_network_snapshot_send_inflights_total", "etcd_network_snapshot_receive_inflights_total"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-08 15:12:08 -07:00
789ff21b18 Merge pull request #10570 from sbenderli/cherry-pick-of-#8334
raft: cherry pick of #8334 to release-3.3
2019-07-23 11:42:42 -07:00
d12f13279f Merge pull request #10827 from yznima/pr-race-3.3
Raft HTTP: fix pause/resume race condition
2019-07-23 10:59:02 -07:00
9f1d6ca1c9 Raft HTTP: fix pause/resume race condition
(cherry picked from commit b1812a410f)
2019-06-17 13:33:27 -04:00
5832014353 Merge pull request #10793 from jingyih/automated-cherry-pick-of-#10788-origin-release-3.3
Automated cherry pick of #10788 on release-3.3
2019-06-05 14:39:55 -07:00
d005486359 ctlv3: add missing newline in EndpointHealth
To make the output consistent with the output before #9540.
2019-06-05 14:36:57 -07:00
89429703db Merge pull request #10782 from jingyih/cherrypick_9540_to_release3p3
ctlv3: cherry pick of #9540 to release 3.3
2019-06-04 09:55:19 -07:00
f835a85965 ctlv3: support "write-out" for "endpoint health" command
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2019-06-03 17:01:54 -07:00
b0babe5d1e Merge pull request #10718 from rohitsardesai83/release-3.3
etcd: Replace ghodss/yaml with sigs.k8s.io/yaml in 3.3
2019-05-29 13:47:56 -07:00
8ed3e70d7c etcd: Replace ghodss/yaml with sigs.k8s.io/yaml 2019-05-29 23:03:16 +05:30
98d3084268 version: bump up 3.3.13
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-05-02 10:22:46 -07:00
b7001c05bc clientv3: fix race condition in "Endpoints" methods (see the sketch after this entry)
From https://github.com/etcd-io/etcd/pull/10595.

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-05-02 10:17:58 -07:00
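
A fix for this kind of race typically guards the endpoint slice with the client's mutex and hands back a copy. A minimal sketch of the pattern, with simplified field names rather than the exact clientv3 internals:

```
package clientv3sketch

import "sync"

// Client is a simplified stand-in for clientv3.Client.
type Client struct {
	mu        sync.RWMutex
	endpoints []string
}

// Endpoints returns a copy of the endpoint list under the read lock,
// so readers cannot race with a concurrent SetEndpoints.
func (c *Client) Endpoints() []string {
	c.mu.RLock()
	defer c.mu.RUnlock()
	eps := make([]string, len(c.endpoints))
	copy(eps, c.endpoints)
	return eps
}

// SetEndpoints replaces the endpoint list under the write lock.
func (c *Client) SetEndpoints(eps ...string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.endpoints = eps
}
```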
f179d4d6a3 etcdserver: improve heartbeat send failures logging
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-05-02 10:02:28 -07:00
c46aa44143 Documentation metadata for 3.3 branch (#10692)
* Update Documentation folder

Signed-off-by: lucperkins <lucperkins@gmail.com>

* Re-add README file

Signed-off-by: lucperkins <lucperkins@gmail.com>
2019-04-30 14:03:05 -07:00
ad7c2cddb0 vendor: add missing files
Change-Id: I53b30e9317de6cd058833d743bc88c46686cea20
2019-04-25 15:45:49 -04:00
6499c14cb6 vendor: Run scripts/updatedeps.sh to cleanup unused code 2019-04-25 15:45:49 -04:00
6e91e3559c client: Switch to case sensitive unmarshalling to be compatible with ugorji
Using lessons learned from k8s changes:
https://github.com/kubernetes/kubernetes/pull/65034

Change-Id: Ia17a8f94ae6ed00c5af2595c2b48d3c9a0344427
2019-04-25 15:45:49 -04:00
7ff7e0aadd *: update bill-of-materials
Change-Id: Ibfa24e28cacd58388f7606a945c8ac35e1c34580
2019-04-25 15:45:49 -04:00
02ccf2013d vendor: Add json-iterator and its dependencies
Change-Id: I1f3fc00f95efadd6da9b4c248156f8460ae0ff97
2019-04-25 15:45:49 -04:00
20bd0c064c scripts: Remove generated code and script
Change-Id: Iac4601443bcad71920fd96b97bfe21c16116577a
2019-04-25 15:45:49 -04:00
69e0daf809 client: Replace ugorji/codec with json-iterator/go
We need to use the stdlib-compatible one that is case-sensitive, etc.

Change-Id: Id0df573a70e09967ac7d8c0a63d99d6a49ce82f1
2019-04-25 15:45:49 -04:00
5f4a45596e Merge pull request #10656 from jpbetz/automated-cherry-pick-of-#10646-release-3.3
Automated cherry pick of #10646
2019-04-18 14:10:02 -07:00
38bf1bdbe0 mvcc: fix db_compaction_total_duration_milliseconds 2019-04-17 16:31:06 -07:00
e206a8b495 wal: Add test for Verify
Signed-off-by: Shreyas Rao <shreyas.sriganesh.rao@sap.com>
2019-04-12 06:56:08 -04:00
cf4836fb2c wal: add Verify function to perform corruption check on wal contents
Signed-off-by: Shreyas Rao <shreyas.sriganesh.rao@sap.com>
2019-04-12 06:56:08 -04:00
43386ac29b *: Change gRPC proxy to expose etcd server endpoint /metrics
This PR resolves an issue where the `/metrics` endpoints exposed by the proxy were not returning metrics of the etcd member servers but of the proxy itself.

Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2019-04-11 17:07:40 -04:00
332e995ccd travis: fix tests by using proper code path
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2019-04-11 16:19:36 -04:00
ad5e169dcf Merge pull request #10597 from purpleidea/3.3/fatal-corruption
etcdserver: Use panic instead of fatal on no space left error
2019-03-29 14:54:46 -07:00
7814718c73 etcdserver: Use panic instead of fatal on no space left error
When using the embed package to embed etcd, sometimes the storage prefix
being used might be full. In this case, this code path triggers, causing
an `etcdserver: create wal error: no space left on device` error, which
causes a fatal. A fatal differs from a panic in that it also calls
os.Exit(1). In this situation, the calling program that embeds the etcd
server will be abruptly killed, which prevents it from cleaning up
safely and from giving a proper error message. Depending on what the
calling program is, this can cause corruption and data loss.

This patch switches the fatal to a panic. Ideally this would be a
regular error which would get propagated upwards to the StartEtcd
command, but in the meantime at least this can be caught with recover().

This fixes the most common fatal that I've experienced, but there are
surely more that need looking into. If possible, the errors should be
threaded down into the code path so that embedding etcd can be more
robust.

Fixes: https://github.com/etcd-io/etcd/issues/10588

This is a cherry-picked version of upstream: 368f70a37c
2019-03-29 17:45:48 -04:00
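
With the fatal switched to a panic, an embedding program can at least contain the failure with recover(), as the message suggests. A minimal sketch under that assumption (cleanup elided; the wrapper name is illustrative):

```
package embedsketch

import (
	"fmt"

	"github.com/coreos/etcd/embed"
)

// startEmbedded wraps embed.StartEtcd so that a panic such as the
// "no space left on device" case above surfaces as an ordinary error,
// letting the caller clean up instead of dying via os.Exit(1).
func startEmbedded(cfg *embed.Config) (e *embed.Etcd, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("etcd start panicked: %v", r)
		}
	}()
	return embed.StartEtcd(cfg)
}
```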
ec22eb908a raft: cherry pick of #8334 to release-3.3 2019-03-21 16:02:30 -04:00
c6964428ff travis.yml: update Go 1.10.8
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-02-07 10:45:15 -08:00
d57e8b8d97 version: 3.3.12
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-02-07 10:41:58 -08:00
e634184dc6 etcdctl: fix strings.HasPrefix args order
Signed-off-by: Iskander Sharipov <quasilyte@gmail.com>
2019-02-07 10:41:44 -08:00
410a879601 version: 3.3.11+git
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-02-07 10:41:33 -08:00
2cf9e51d2a version: 3.3.11
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-01-11 11:12:25 -08:00
15903736d5 auth: fix cherry-pick
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2019-01-09 13:10:32 -05:00
c7f744d6d3 auth: disable CommonName auth for gRPC-gateway
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2019-01-08 21:01:25 +00:00
e6b2f00047 Merge pull request #10335 from gyuho/release-3.3-patch
[Cherry pick 3.3] grpcproxy: fix memory leak
2018-12-17 20:37:04 -08:00
59cc0f9ac5 grpcproxy: fix memory leak
use set instead of slice as interval value

fixes #10326
2018-12-17 19:00:57 -08:00
3a7b8b31fd travis: use Go 1.10.7
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-12-17 19:00:22 -08:00
6f250f9a47 version: 3.3.10+git
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-10 13:30:14 -07:00
27fc7e2296 version: 3.3.10
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-10 10:17:54 -07:00
eb932c2083 travis.yml: use Go 1.10.4
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-10 10:17:36 -07:00
957700f444 etcdserver: add "etcd_server_read_indexes_failed_total"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-09 18:22:02 -07:00
b45f5306dc rafthttp: probe all raft transports
This PR adds another probing routine to monitor the connection
for Raft message transports. Previously, we only monitored
snapshot transports.

In our production cluster, we found one TCP connection had >8-sec
latencies to a remote peer, but the "etcd_network_peer_round_trip_time_seconds"
metric showed a <1-sec latency distribution, which means the etcd server
was not sampling enough while such latency spikes were happening
outside of the snapshot pipeline connection.

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-09 18:18:27 -07:00
8491137b55 etcdserver: add "etcd_server_health_success/failures"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-09 17:54:30 -07:00
ebe950fc1c Merge pull request #10161 from jingyih/automated-cherry-pick-of-#10153-origin-release-3.3
clientv3: automated cherry pick of #10153 to release-3.3
2018-10-08 18:37:52 -07:00
20d83e405f clientv3: concurrency.Mutex.Lock() - preserve invariant
Convenient invariant:
- if werr == nil then the lock is supposed to be held at the moment.

While we cannot be confident in the stronger invariant ('is exactly locked'),
it was inconvenient that the previous code could return `werr == nil` after
Mutex.Unlock.

This could happen when ctx is canceled/timed out exactly after waitDeletes
successfully returned werr == nil and before `<-ctx.Done()` was checked.
While such a situation is very rare, it is still possible.

fixes #10111
2018-10-08 16:42:26 -07:00
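
The invariant matters mostly to callers. A sketch of how client code can rely on it once the fix is in (illustrative usage; the lock key is made up):

```
package locksketch

import (
	"context"

	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/concurrency"
)

func lockAndWork(ctx context.Context, cli *clientv3.Client) error {
	s, err := concurrency.NewSession(cli)
	if err != nil {
		return err
	}
	defer s.Close()

	m := concurrency.NewMutex(s, "/my-lock/")
	// Invariant after the fix: a nil error means the lock is held,
	// even if ctx was canceled just as the lock was acquired.
	if err := m.Lock(ctx); err != nil {
		return err // not holding the lock; nothing to unlock
	}
	defer m.Unlock(context.Background())

	// ... critical section ...
	return nil
}
```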
cb57901e03 Merge pull request #10041 from wenjiaswe/automated-cherry-pick-of-#9997-upstream-release-3.3
Automated cherry pick of #9997
2018-10-03 13:52:02 -07:00
d838e24f80 etcdserver/api/rafthttp: add v3 snapshot send/receive metrics
Distribution would be:
0.1 second or more
...
25.6 seconds or more
51.2 seconds or more

etcd_network_snapshot_send_success
etcd_network_snapshot_send_failures
etcd_network_snapshot_send_total_duration_seconds
etcd_network_snapshot_receive_success
etcd_network_snapshot_receive_failures
etcd_network_snapshot_receive_total_duration_seconds

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-03 11:12:42 -07:00
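
The layout above (0.1s doubling up to 51.2s) is a standard exponential histogram. A sketch of how such a metric could be declared with the Prometheus Go client; the options shown are illustrative, not the exact ones in etcd:

```
package metricssketch

import "github.com/prometheus/client_golang/prometheus"

// snapshotSendSeconds sketches a histogram whose buckets match the
// distribution above: 0.1, 0.2, 0.4, ..., 25.6, 51.2 seconds.
var snapshotSendSeconds = prometheus.NewHistogram(prometheus.HistogramOpts{
	Namespace: "etcd",
	Subsystem: "network",
	Name:      "snapshot_send_total_duration_seconds",
	Help:      "Total latency distributions of v3 snapshot sends.",
	// 10 buckets, each double the previous: 0.1 * 2^0 .. 0.1 * 2^9.
	Buckets: prometheus.ExponentialBuckets(0.1, 2, 10),
})
```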
7ec9ff62b5 etcdserver/api/snap: add v3 snapshot fsync metrics
etcd_snap_db_fsync_duration_seconds_count
etcd_snap_db_save_total_duration_seconds_bucket

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-03 11:12:41 -07:00
dc02dc2ede tests/Dockerfile: update, fix GOPATH
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-01 01:30:23 -07:00
40ed18a457 Merge pull request #10122 from jingyih/cherry-pick-of-#10109-origin-release-3.3
etcdctl: cherry pick of #10109 to release-3.3
2018-09-25 17:30:01 -07:00
60d546e309 etcdctl: cherry pick of #10109 to release-3.3
Add snapshot file integrity verification in snapshot status.
2018-09-25 16:50:47 -07:00
e774f7309c Merge pull request #10093 from jingyih/remove_duplicated_import
etcdserver: remove duplicated imports
2018-09-13 20:57:09 -07:00
9eee0b078e etcdserver: remove duplicated imports
Removed duplicated imports of package 'context' in server.go
2018-09-13 20:44:03 -07:00
d1acb5a5c8 etcdserver: add "etcd_server_id"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-08-29 14:50:17 -07:00
73c1100b04 etcdserver: clarify read index wait timeout warnings
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-08-29 14:38:59 -07:00
c577335a64 rafthttp: clarify "became inactive" warning
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-08-29 14:34:15 -07:00
f69413e9ee Merge pull request #10027 from hexfusion/cherry-pick-a205cfe
etcdserver: cherry-pick #9861 to release-3.3
2018-08-20 12:54:43 -07:00
0dc4632e28 Merge pull request #9861 from gyuho/race
etcdserver/api/v3rpc: remove duplicate gRPC logger set
2018-08-17 22:32:10 -04:00
f8fc923fc0 Merge pull request #10004 from jingyih/automated-cherry-pick-of-#9990-origin-release-3.3
Automated cherry pick of #9990
2018-08-15 06:37:33 -07:00
264bb51a9a etcdserver: code clean up
Code clean up in interceptor.go
2018-08-14 17:08:45 -07:00
c6c0d03522 vendor: add go-grpc-middleware
Rebased to master PR #9994.  Fixed a Go format issue in
v3rpc/interceptor.go.  Updated vendor to include go-grpc-middleware.
2018-08-14 17:08:45 -07:00
94f81368ae etcdserver: add grpc interceptor to log info on incoming requests to etcd server
To improve debuggability of etcd v3, added a gRPC interceptor to log
info on incoming requests to the etcd server. The log output includes
remote client info, request content (with the value field redacted), request
handling latency, response size, etc. Uses the zap logger if available,
otherwise uses capnslog.

Also did some clean up on the chaining of grpc interceptors on server
side.
2018-08-14 16:20:13 -07:00
051587f56f version: bump up to 3.3.9+git 2018-07-24 10:17:06 -07:00
fca8add78a version: 3.3.9
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-24 09:48:32 -07:00
ea40e9f059 etcdserver: add "etcd_server_go_version" metric
Currently, one has to look at server logs manually,
to see what Go version was used to build etcd server.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-23 16:39:24 -07:00
fbc0510a4e clientv3: fix keepalive send interval when response queue is full
The client should update the next keepalive send time
even when the lease keepalive response queue becomes full.

Otherwise, the client sends a keepalive request every 500ms
regardless of TTL, when the send is only expected to happen
at an interval of TTL/3 at minimum.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-23 08:51:18 -07:00
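
The cadence described is simple arithmetic: the next send should be scheduled at TTL/3 from now, whether or not the response could be queued. A hedged sketch of that scheduling step (names are illustrative, not the exact clientv3 code):

```
package main

import (
	"fmt"
	"time"
)

// nextKeepAliveTime computes when the next keepalive should be sent:
// a third of the lease TTL from now, independent of response-queue state.
func nextKeepAliveTime(now time.Time, ttlSeconds int64) time.Time {
	return now.Add(time.Duration(ttlSeconds) * time.Second / 3)
}

func main() {
	// With TTL = 30s the client should send roughly every 10s,
	// never every 500ms.
	fmt.Println(nextKeepAliveTime(time.Now(), 30))
}
```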
267a62199c Merge pull request #9940 from wenjiaswe/automated-cherry-pick-of-#9761-upstream-release-3.3
Automated cherry pick of #9761
2018-07-19 18:27:15 -07:00
143fc4ce79 added "now := time.Now()" 2018-07-19 17:27:40 -07:00
7f421efe48 remove "github.com/gogo/protobuf/plugin/stringer" 2018-07-19 17:15:32 -07:00
d509620793 etcdserver: rename to "heartbeat_send_failures_total"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 16:58:14 -07:00
d5654ba459 mvcc: add "etcd_mvcc_hash_(rev)_duration_seconds"
etcd_mvcc_hash_duration_seconds
etcd_mvcc_hash_rev_duration_seconds

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 16:57:04 -07:00
da304d7aae mvcc/backend: fix defrag duration scale
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 16:54:26 -07:00
978727a963 mvcc/backend: add "etcd_disk_backend_defrag_duration_seconds"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 16:54:26 -07:00
4ad350482e mvcc/backend: document metrics ExponentialBuckets
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 16:53:31 -07:00
f7367d94ff mvcc/backend: clean up mutex, logging
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 16:53:31 -07:00
e43224c3b6 etcdserver: add "etcd_server_slow_apply_total"
{"level":"warn","ts":1527101858.6985068,"caller":"etcdserver/util.go:115","msg":"apply request took too long","took":0.114101529,"expected-duration":0.1,"prefix":"","request":"header:<ID:1029181977902852337> put:<key:\"\\000\\000...

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 16:52:37 -07:00
4c7bf51030 etcdserver: add "etcd_server_heartbeat_failures_total"
{"level":"warn","ts":1527101858.4149103,"caller":"etcdserver/raft.go:370","msg":"failed to send out heartbeat; took too long, server is overloaded likely from slow disk","heartbeat-interval":0.1,"expected-duration":0.2,"exceeded-duration":0.025771662}
{"level":"warn","ts":1527101858.4149644,"caller":"etcdserver/raft.go:370","msg":"failed to send out heartbeat; took too long, server is overloaded likely from slow disk","heartbeat-interval":0.1,"expected-duration":0.2,"exceeded-duration":0.034015766}

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 16:51:08 -07:00
ffe52f74c0 e2e: log errors TestV3CurlCipherSuitesMismatch for now
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 10:11:10 -07:00
1da638c4dc Makefile: use Go 1.10.3 by default
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 10:01:27 -07:00
82ce873987 *: use Go 1.10.3 for testing
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 09:56:59 -07:00
adfd0d3fe7 mvcc: avoid unnecessary metrics update
https://github.com/coreos/etcd/pull/9300

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 14:51:08 -07:00
a410463a0b mvcc: add "etcd_mvcc_db_total_size_in_use_in_bytes"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 14:36:18 -07:00
1da3603e31 mvcc: add "etcd_mvcc_db_total_size_in_bytes"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 14:35:48 -07:00
72c51d3e12 etcdserver: add "etcd_server_quota_backend_bytes"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 13:26:49 -07:00
4481238224 etcdserver: add "etcd_server_slow_read_indexes_total"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 13:00:08 -07:00
82e670766a etcdserver: clarify read index warnings
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 12:53:21 -07:00
09addbdaa0 tests: update test scripts
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-18 14:08:36 -07:00
4ea2271f86 version: 3.3.8+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-15 10:21:06 -07:00
33245c6b5b version: 3.3.8
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-15 09:41:56 -07:00
4c18c56bf6 travis: use Go 1.9.7
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-15 09:41:41 -07:00
cb46e9ee0b gitignore: ignore "docs" and "vendor"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-15 09:34:20 -07:00
1fea97b898 clientv3: backoff on reestablishing watches when Unavailable errors are encountered 2018-06-14 10:47:46 -07:00
5227545764 tests/semaphore.test.bash: update
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-13 14:39:38 -07:00
1ba7c71975 Makefile: update
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-13 14:39:02 -07:00
b7c19232bc etcdserver: Fix txn request 'took too long' warnings to use loggable request stringer 2018-06-12 09:33:33 -07:00
07f833ae3e etcdserver: Add response byte size and range response count to took too long warning 2018-06-11 11:26:26 -07:00
ef154094b3 etcdserver: Replace value contents with value_size in request took too long warning 2018-06-08 09:49:43 -07:00
21f186a40b version: bump up to 3.3.7+git 2018-06-06 10:08:16 -07:00
56536de551 version: 3.3.7
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:50:19 -07:00
a0ebf8cb1c e2e: test client-side cipher suites with curl
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:50:19 -07:00
13715724b8 etcdmain: add "--cipher-suites" flag
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:50:15 -07:00
22d65d8cc2 embed: support custom cipher suites
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:18:16 -07:00
6c2add4142 integration: test client-side TLS cipher suites
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:11:16 -07:00
6a3842776b pkg/transport: add "TLSInfo.CipherSuites" field
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:10:35 -07:00
641bddca0f pkg/tlsutil: add "GetCipherSuite"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:10:16 -07:00
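
A helper like "GetCipherSuite" is essentially a name-to-ID lookup over crypto/tls constants. A minimal sketch with a few entries (the real table in pkg/tlsutil is longer):

```
package tlsutil

import "crypto/tls"

// cipherSuites maps a cipher suite name to its crypto/tls constant.
var cipherSuites = map[string]uint16{
	"TLS_RSA_WITH_AES_128_GCM_SHA256":         tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
	"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256":   tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
	"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384": tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
}

// GetCipherSuite looks up the cipher suite ID for a "--cipher-suites" entry.
func GetCipherSuite(s string) (uint16, bool) {
	v, ok := cipherSuites[s]
	return v, ok
}
```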
21a1162ad1 tests/e2e: test move-leader command with TLS
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 13:56:31 -07:00
e2cb9cbaec ctlv3: support TLS endpoints for move-leader command
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 13:56:05 -07:00
243074c5c5 scripts/release: Fix docker push for 3.1 releases, remove inaccurate warning at the end of release script 2018-05-31 14:44:29 -07:00
26a73f2fa1 version: bump up to 3.3.6+git 2018-05-31 11:57:20 -07:00
932c3c01f9 version: 3.3.6
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-31 11:41:42 -07:00
41888ddbaa mvcc: fix panic by allowing future revision watcher from restore operation
This also happens without gRPC proxy.

Fix panic when gRPC proxy leader watcher is restored:

```
go test -v -tags cluster_proxy -cpu 4 -race -run TestV3WatchRestoreSnapshotUnsync

=== RUN   TestV3WatchRestoreSnapshotUnsync
panic: watcher minimum revision 9223372036854775805 should not exceed current revision 16

goroutine 156 [running]:
github.com/coreos/etcd/mvcc.(*watcherGroup).chooseAll(0xc4202b8720, 0x10, 0xffffffffffffffff, 0x1)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watcher_group.go:242 +0x3b5
github.com/coreos/etcd/mvcc.(*watcherGroup).choose(0xc4202b8720, 0x200, 0x10, 0xffffffffffffffff, 0xc420253378, 0xc420253378)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watcher_group.go:225 +0x289
github.com/coreos/etcd/mvcc.(*watchableStore).syncWatchers(0xc4202b86e0, 0x0)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watchable_store.go:340 +0x237
github.com/coreos/etcd/mvcc.(*watchableStore).syncWatchersLoop(0xc4202b86e0)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watchable_store.go:214 +0x280
created by github.com/coreos/etcd/mvcc.newWatchableStore
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watchable_store.go:90 +0x477
exit status 2
FAIL	github.com/coreos/etcd/integration	2.551s
```

gRPC proxy spawns a watcher with a key "proxy-namespace__lostleader"
and watch revision "int64(math.MaxInt64 - 2)" to detect leader loss.
But, when the partitioned node restores, this watcher triggers a
panic with "watcher minimum revision ... should not exceed current ...".

This check was added a long time ago, by my PR, when there was no gRPC proxy:

https://github.com/coreos/etcd/pull/4043#discussion_r48457145

> we can remove this checking actually. it is impossible for a unsynced watching to have a future rev. or we should just panic here.

However, it is now possible that an unsynced watcher has a future
revision, when it was moved from a synced watcher group through a
restore operation.

This PR adds a "restore" flag to indicate that a watcher was moved
from the synced watcher group by a restore operation. Otherwise,
a watcher with a future revision in an unsynced watcher group
would still panic.

Example logs with future revision watcher from restore operation:

```
{"level":"info","ts":1527196358.9057755,"caller":"mvcc/watcher_group.go:261","msg":"choosing future revision watcher from restore operation","watch-key":"proxy-namespace__lostleader","watch-revision":9223372036854775805,"current-revision":16}
{"level":"info","ts":1527196358.910349,"caller":"mvcc/watcher_group.go:261","msg":"choosing future revision watcher from restore operation","watch-key":"proxy-namespace__lostleader","watch-revision":9223372036854775805,"current-revision":16}
```

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-31 11:41:34 -07:00
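
A condensed sketch of the guarded check the message describes; the field and function names here are made up for illustration, and the real logic lives in mvcc/watcher_group.go:

```
package mvccsketch

import "fmt"

// watcher is a simplified stand-in for the mvcc watcher struct.
type watcher struct {
	minRev  int64
	restore bool // set when moved from the synced group by a restore
}

// checkFutureRev sketches the guarded panic: a watcher with a future
// minimum revision is tolerated only if it came from a restore, and is
// left in the unsynced group until the current revision catches up.
func checkFutureRev(w *watcher, curRev int64) {
	if w.minRev > curRev && !w.restore {
		panic(fmt.Errorf("watcher minimum revision %d should not exceed current revision %d",
			w.minRev, curRev))
	}
}
```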
7292963ae7 auth: fix panic using WithRoot and improve JWT coverage
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-23 23:45:24 -07:00
37767bc6e2 auth: a new auth token provider nop
This commit adds a new auth token provider named nop. The nop provider
refuses every Authenticate() request so CN based authentication can
only be allowed. If the tokenOpts parameter of auth.NewTokenProvider()
is empty, the provider will be used.
2018-05-23 15:48:39 -07:00
d659771bb8 scripts: Fix remote tag check, gcloud login and umask in release script 2018-05-09 11:08:23 -07:00
39d01e716f version: 3.3.5+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-09 11:07:52 -07:00
70c8726202 version: 3.3.5
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-09 09:23:59 -07:00
aaca01a0fa tests/e2e: separate coverage tests for exec commands
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-03 18:48:16 -07:00
bc2d400b4c etcdctl/ctlv3: fix watch with exec commands
The following command was failing because the parser incorrectly
picked up the second "watch" string in the exec command, thus
passing the wrong exec command.

```
ETCDCTL_API=3 ./bin/etcdctl watch aaa -- echo watch event received

panic: runtime error: slice bounds out of range

goroutine 1 [running]:
github.com/coreos/etcd/etcdctl/ctlv3/command.parseWatchArgs(0xc42002e080, 0x8, 0x8, 0xc420206a20, 0x5, 0x6, 0x0, 0x0, 0x0, 0x0, ...)
	/home/gyuho/go/src/github.com/coreos/etcd/etcdctl/ctlv3/command/watch_command.go:303 +0xbed
github.com/coreos/etcd/etcdctl/ctlv3/command.watchCommandFunc(0xc4202a7180, 0xc420206a20, 0x5, 0x6)
	/home/gyuho/go/src/github.com/coreos/etcd/etcdctl/ctlv3/command/watch_command.go:73 +0x11d
github.com/coreos/etcd/vendor/github.com/spf13/cobra.(*Command).execute(0xc4202a7180, 0xc420206960, 0x6, 0x6, 0xc4202a7180, 0xc420206960)
	/home/gyuho/go/src/github.com/coreos/etcd/vendor/github.com/spf13/cobra/command.go:766 +0x2c1
github.com/coreos/etcd/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x1363de0, 0xc420128638, 0xc420185e01, 0xc420185ee8)
	/home/gyuho/go/src/github.com/coreos/etcd/vendor/github.com/spf13/cobra/command.go:852 +0x30a
github.com/coreos/etcd/vendor/github.com/spf13/cobra.(*Command).Execute(0x1363de0, 0x0, 0x0)
	/home/gyuho/go/src/github.com/coreos/etcd/vendor/github.com/spf13/cobra/command.go:800 +0x2b
github.com/coreos/etcd/etcdctl/ctlv3.Start()
	/home/gyuho/go/src/github.com/coreos/etcd/etcdctl/ctlv3/ctl_nocov.go:25 +0x8e
main.main()
	/home/gyuho/go/src/github.com/coreos/etcd/etcdctl/main.go:40 +0x17b
```

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-03 18:48:08 -07:00
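
The root cause is argument splitting: everything after the first "--" belongs to the exec command and must never be re-scanned for the "watch" keyword. A minimal sketch of that split (illustrative; the real parsing lives in watch_command.go):

```
package main

import "fmt"

// splitAtDashDash separates watch arguments from the exec command at the
// first "--", so a literal "watch" in the exec command is never re-parsed.
func splitAtDashDash(args []string) (watchArgs, execArgs []string) {
	for i, a := range args {
		if a == "--" {
			return args[:i], args[i+1:]
		}
	}
	return args, nil
}

func main() {
	w, e := splitAtDashDash([]string{"aaa", "--", "echo", "watch", "event", "received"})
	fmt.Println(w) // [aaa]
	fmt.Println(e) // [echo watch event received]
}
```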
913a98567e tests: use Go 1.9.6
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-01 10:22:04 -07:00
3f888b8085 functional/tester: handle retries in "caseUntilSnapshot"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-30 14:37:20 -07:00
c15c8c6116 functional.yaml: use lower ports
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-30 13:36:36 -07:00
f535bb64f3 scripts: Fix a few etcd release script bugs and make it reenterant. 2018-04-25 10:04:43 -07:00
f01d690e6f etcdmain: document peer-cert-allowed-cn flag 2018-04-24 13:57:51 -07:00
d09fa9c537 version: 3.3.4+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-24 13:56:13 -07:00
fdde8705f5 version: 3.3.4
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-24 12:05:29 -07:00
600b2d1967 scripts: Add scripts/release that performs 'etcd-release-runbook' (https://goo.gl/Gxwysq) style release workflow 2018-04-24 12:05:18 -07:00
870138accb etcdserver: log skipping initial election tick
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:59:01 -07:00
758203bd86 etcdmain: add "--initial-election-tick-advance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:58:57 -07:00
8886a6397c embed: add "InitialElectionTickAdvance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:26:48 -07:00
ea829611b5 integration: set InitialElectionTickAdvance to true by default
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:22:16 -07:00
b923c74fe5 etcdserver: add "InitialElectionTickAdvance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:21:51 -07:00
7cbc2f1068 etcdserver: add is_leader prometheus metric that is 1 on the leader.
Before this change, we had no way to find a leader using the /metrics
endpoint. This commit adds a metric to do that.
2018-04-19 14:59:31 -07:00
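
A sketch of the kind of gauge this adds, with illustrative registration and update hooks rather than the exact etcdserver code:

```
package metricsketch

import "github.com/prometheus/client_golang/prometheus"

// isLeader is 1 on the leader and 0 on followers, so a leader can be
// found by scraping /metrics alone.
var isLeader = prometheus.NewGauge(prometheus.GaugeOpts{
	Namespace: "etcd",
	Subsystem: "server",
	Name:      "is_leader",
	Help:      "Whether or not this member is a leader. 1 if is, 0 otherwise.",
})

func init() {
	prometheus.MustRegister(isLeader)
}

// onLeaderChange would be called from the server's raft status updates.
func onLeaderChange(leading bool) {
	if leading {
		isLeader.Set(1)
	} else {
		isLeader.Set(0)
	}
}
```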
78109152b9 integration: re-overwrite "httptest.Server" TLS.Certificates
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-17 06:17:46 -07:00
08dc184618 pkg/transport: don't set certificates on tls config 2018-04-17 06:17:38 -07:00
48f4ee9268 functional: create symlinks for build
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 16:05:36 -07:00
07a34aa76b travis: run build tests for "functional"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 15:56:30 -07:00
2cabb82375 snapshot: remove tests
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 15:24:02 -07:00
56a9778bc2 functional: initial commit (copied from master)
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 13:19:22 -07:00
5abe521e77 snapshot: initial commit (for functional tests)
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 13:19:19 -07:00
3c4ace2d27 test: simplify
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 11:09:25 -07:00
095fc0b411 etcdserver/stats: make all fields guarded by mutex. 2018-04-11 19:49:00 -07:00
d40abbb502 etcdserver/stats: fix stats data race. 2018-04-11 19:49:00 -07:00
c19be730fd test: remove build flag "-a"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-11 10:17:31 -07:00
99e4a5ffae cmd/vendor: add "go.uber.org/zap"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:46:00 -07:00
3736a126df pkg/proxy: move from "pkg/transport"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:43:23 -07:00
074e417770 tools: remove
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:43:16 -07:00
dd9f05567d travis: update
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:34:27 -07:00
a28cf17f25 test/*: clean up semaphore scripts
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:33:50 -07:00
cdbb8ffdc1 etcdserver: fix "lease_expired_total" metrics
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 17:57:35 -07:00
68ba797549 tests: move test scripts
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-09 11:33:23 -07:00
5d97bccff2 semaphore.sh: update Go version
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-29 09:20:26 -07:00
e5ec25fe0b travis: use Go 1.9.5
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-29 09:07:35 -07:00
c522f6060f version: 3.3.3+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-29 09:07:10 -07:00
e348b1aedd version: 3.3.3
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-28 13:00:06 -07:00
4355d91fcc Documentation/upgrades: backport all upgrade guides
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-27 10:32:43 -07:00
ce7b86b65a compactor: simplify interval logic on periodic compactor
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-26 05:37:31 -07:00
d70a218b19 compactor: adjust interval for period <1-hour 2018-03-26 05:37:24 -07:00
e029de320a compactor: clean up
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-22 11:03:22 -07:00
863a56a998 rafthttp: add missing "peer_sent_failures_total" metrics call
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-14 12:44:38 -04:00
3282d90707 etcdserver: adjust election ticks on restart
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-10 20:05:56 -08:00
b2d5c6c7bd etcdserver: make "advanceTicks" method
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-10 20:05:50 -08:00
6fe7316ec4 rafthttp: add "ActivePeers" to "Transport"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-10 20:05:35 -08:00
40e02256c7 version: 3.3.2+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 14:49:14 -08:00
c9d46ab379 version: 3.3.2
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 12:57:09 -08:00
d1da2023b9 clientv3/integration: test "rpctypes.ErrLeaseTTLTooLarge"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 10:34:34 -08:00
eaa0050d4d *: enforce max lease TTL with 9,000,000,000 seconds
math.MaxInt64 / time.Second is 9,223,372,036. 9,000,000,000 is easier to
remember/document.
2018-03-08 10:34:12 -08:00
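
The arithmetic is easy to verify, since time.Second is 1e9 nanoseconds:

```
package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	// math.MaxInt64 nanoseconds expressed in whole seconds.
	fmt.Println(math.MaxInt64 / int64(time.Second)) // 9223372036
	// The enforced cap rounds this down to a memorable 9,000,000,000.
}
```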
99a12662c1 *: remove unused env vars
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 01:35:36 -08:00
e6d44fa3f2 hack/scripts-dev: fix indentation in run.sh
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 14:32:27 -08:00
43caf2b28a hack/scripts-dev: sync with master branch
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 14:18:58 -08:00
bfb7a155b4 travis: update Go version string
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 14:04:14 -08:00
f76ef3ce8d e2e: fix missing "apiPrefix"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 00:03:02 -08:00
462ba8bb09 embed: fix wrong compactor imports
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-06 23:26:45 -08:00
146ed08052 Documentation/op-guide: highlight defrag operation "--endpoints" flag
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-05 11:15:05 -08:00
1bc974d536 etcdctl: highlight "defrag" command caveats
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-05 11:15:02 -08:00
3e3468d1fa e2e: add "Election" grpc-gateway test cases
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-02 10:40:50 -08:00
207f19354b e2e: add "spawnWithExpectLines"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-02 10:40:41 -08:00
bb8a5377ce api/v3election: error on missing "leader" field
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-02 10:40:34 -08:00
8291e16128 Documentation: make "Consul" section more objective
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-02 10:40:22 -08:00
a5b31087e8 etcdserver: enable "CheckQuorum" when starting with "ForceNewCluster"
We enable "raft.Config.CheckQuorum" by default in other
Raft initial starts. So should start with "ForceNewCluster".

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-02 10:40:08 -08:00
cec79dd706 httpproxy: cancel requests when client closes a connection 2018-03-02 10:39:46 -08:00
3641af83e7 semaphore: release test version 2018-02-27 11:29:58 -08:00
240fda5128 embed: fix revision-based compaction with default value
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-21 09:35:00 -08:00
d627301735 embed: document/validate compaction mode
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-21 09:34:59 -08:00
534c31b4ca version: 3.3.1+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-12 14:36:11 -08:00
28f3f26c0e version: 3.3.1
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-12 09:29:11 -08:00
4737f3a620 hack/scripts-dev: Makefile with Go 1.9.4, 1.8.7
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-12 09:28:56 -08:00
bc6e235052 travis: use Go 1.9.4 with TARGET_GO_VERSION
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-12 09:28:56 -08:00
13c5cedfb8 semaphore: use Go 1.9.4, update release upgrade test version
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-12 09:28:55 -08:00
9942f904fb etcdserver: improve request took too long warning 2018-02-06 16:58:04 -08:00
eaf7d631ad mvcc: restore unsynced watchers
In case syncWatchersLoop() starts before Restore() is called,
watchers already added by that moment are moved to s.synced by the loop.
However, there is broken logic that moves watchers from s.synced
to s.unsynced without setting keyWatchers of the watcherGroup.
Eventually syncWatchers() fails to pick up those watchers from s.unsynced
and no events are sent to the watchers, because newWatcherBatch(), called
in that function, internally uses wg.watcherSetByKey(), which requires
a proper keyWatchers value.
2018-02-06 11:34:46 -08:00
21a1a28c18 hack: sync with etcd master
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:07:01 -08:00
c932e9e2ba tools/functional-tester: update README for local docker testing
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:06:35 -08:00
cf96d8a130 Dockerfile-functional-tester: initial commit
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:06:25 -08:00
a3ec84e311 gitignore: add ".Dockerfile-functional-tester"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:06:12 -08:00
29aca652bf test: configure advertise ports in functional_pass
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:04:42 -08:00
bbfd0077e8 etcd-tester: set advertise ports, delay w/ network faults
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:04:33 -08:00
18df07754f etcd-agent: use "pkg/transport.Proxy"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:04:10 -08:00
56178a8a06 test: remove "use-root" in functional_pass
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:03:58 -08:00
a9a616a09f etcd-agent: remove "use-root"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:03:41 -08:00
abdfa87ae5 functional-tester: remove old assets
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:03:29 -08:00
a4cbba89ff pkg/transport: implement "Proxy"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:02:34 -08:00
0bc06d72df pkg/transport: add "fixtures" for TLS tests
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:02:25 -08:00
a1fbed5abc *: Remove 8GiB quota limitation from documents
Also mention that in v3.3 change log.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-02 14:28:26 -08:00
665fb01f95 version: 3.3.0+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-01 14:14:07 -08:00
c23606781f version: 3.3.0
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-01 10:03:36 -08:00
afa01aaef0 etcdmain: define "defaultGRPCMaxCallSendMsgSize"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-30 09:50:27 -08:00
d20e5a6bb5 Documentation/op-guide: highlight defragment operation
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-30 09:42:37 -08:00
6931dd8442 Documentation/op-guide: revert "--discovery-srv-name" doc changes
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-30 09:41:42 -08:00
f320348682 Documentation: sync with etcd master
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-30 09:37:57 -08:00
d7e6dd77bb grpcproxy: configure --max-send-bytes and --max-recv-bytes for grpc proxy 2018-01-30 09:33:16 -08:00
50d2a00f01 etcdserver: clarify warnings on backend open taking >10 seconds
If the db file is 10 GiB, it can take more than 1 second.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-26 10:55:16 -08:00
c5bba152ee etcdserver: add detailed errors in "ValidateClusterAndAssignIDs"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-25 12:00:21 -08:00
dbde4e986b pkg/netutil: return error from "URLStringsEqual"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-25 12:00:14 -08:00
f9b7fccf1b etcdserver: add error details on DNS resolution failure on advertise URLs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-25 12:00:07 -08:00
9deb838ddb semaphore,travis: test with Go 1.9.3
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-23 14:03:24 -08:00
baf7320e10 version: 3.3.0-rc.4+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-23 14:03:07 -08:00
ea6360f550 version: 3.3.0-rc.4
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-22 11:32:00 -08:00
2aa3d91759 clientv3/integration: add TestMemberAddUpdateWrongURLs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-22 11:31:45 -08:00
7973612c6e words: whitelist "rafthttp"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-22 11:29:51 -08:00
1c91ddc6f4 clientv3: prevent no-scheme URLs to cluster APIs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-22 11:27:25 -08:00
8a18cc96d0 etcdserver/api/v3rpc: debug-log client disconnect on TLS, http/2 stream CANCEL
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-19 12:50:20 -08:00
a90f301ba8 version: 3.3.0-rc.3+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-19 12:48:21 -08:00
374dc5743f version: 3.3.0-rc.3
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 15:23:08 -08:00
55505617df proxy/grpcproxy: remove "Errors" field
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 15:22:54 -08:00
a9317d3d77 e2e: remove "/health" "errors" field test
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 15:22:54 -08:00
02d362ccde etcdserver/api/etcdhttp: remove "errors" field in /health
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 15:22:54 -08:00
d292337d14 api/etcdhttp: change /health type back to string for backwards compatibility 2018-01-17 12:44:38 -08:00
7974f008f3 etcdctl: document "ETCD_WATCH_*"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:44:22 -08:00
4a3f99415e e2e: test ETCD_WATCH_VALUE
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:44:07 -08:00
6340564c84 ctlv3: set ETCD_WATCH_* on watch exec
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:55 -08:00
6735028ec0 ctlv3: exit on exec watch error
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:45 -08:00
906f098053 ctlv3: set ETCD_WATCH_KEY, ETCD_WATCH_VALUE on exec watch
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:38 -08:00
8a66237693 ctlv3: handle pkg/flags warnings
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:27 -08:00
d37afffb98 etcdctl: document watch with ETCDCTL_WATCH_*
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:12 -08:00
7e2759da8d e2e: add watch tests with ETCDCTL_WATCH_*
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:02 -08:00
ad4df985fc ctlv3: support ETCDCTL_WATCH_KEY, ETCDCTL_WATCH_RANGE_END
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:42:54 -08:00
2df89c8bf6 Documentation/op-guide: clarify security.md on TLS auth
Make it more accurate (just as pkg/transport/listener_tls.go does).

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-12 15:23:06 -08:00
6178c45066 etcdctl: don't ask password twice for etcdctl endpoint health --cluster
Currently, etcdctl endpoint health --cluster asks for the password twice
if auth is enabled. This is because the command creates two client
instances: one for the purpose of checking endpoint health and another
for getting cluster members with MemberList(). The latter client doesn't
need to be authenticated because MemberList() is a public RPC. This
commit makes that client an unauthenticated one.

Fix https://github.com/coreos/etcd/issues/9094
2018-01-12 09:59:31 -08:00
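
Since MemberList() is a public RPC, the member-listing client can simply be built without credentials. A hedged sketch of that second client (illustrative clientv3 usage, not the exact etcdctl code):

```
package healthsketch

import (
	"context"
	"fmt"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func listMembers(endpoints []string) error {
	// No Username/Password here: MemberList is public, so this client
	// never triggers a password prompt.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   endpoints,
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		return err
	}
	defer cli.Close()

	resp, err := cli.MemberList(context.Background())
	if err != nil {
		return err
	}
	for _, m := range resp.Members {
		fmt.Println(m.Name, m.ClientURLs)
	}
	return nil
}
```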
9ccae0f81a etcd-tester: update stresser weights with txn stresser
Large key writes (stressEntries[1].weight) should not take this
much weight. It was triggering "database size exceeded" errors.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-12 09:41:51 -08:00
a5079cc381 version: 3.3.0-rc.2+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-11 14:16:08 -08:00
9e079d8f02 version: 3.3.0-rc.2
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-11 11:18:46 -08:00
bd57c9ca5b etcd-tester: fix "writeTxn" key selection
Found when debugging https://github.com/coreos/etcd/issues/9130.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-11 11:18:05 -08:00
58c402a47b test: limit stress-qps for slow CI machines, add txn flags
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2018-01-09 14:18:45 -08:00
3ce73b70bc etcd-tester: add txn stresser
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2018-01-09 14:18:33 -08:00
ee3c81d8d3 ctlv3: add "snapshot restore --wal-dir"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-09 11:12:29 -08:00
2dfabfbef6 DocCommand: use regex wildcard
The current command as such produces no output in the macOS terminal or bash shell.
Using a regex wildcard works fine on macOS and Linux.
2018-01-09 09:11:16 -08:00
bf83d5269f clientv3/integration: fix typos
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-09 09:11:15 -08:00
a609b1eb47 integration: add constant RequestWaitTimeout. 2018-01-09 09:11:15 -08:00
1ae0c0b47d mvcc: check null before set FillPercent not to panic
Since CreateBucketIfNotExists() can return nil when it gets an error,
accessing FillPercent must be done after a nil check, so as not to cause
a panic.
2018-01-08 13:08:03 -08:00
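
A sketch of the nil-guarded pattern on the bolt API described above (illustrative; the real call site is in mvcc/backend):

```
package backendsketch

import (
	"fmt"

	"github.com/boltdb/bolt"
)

// setFillPercent touches FillPercent only after the bucket is known to be
// non-nil: CreateBucketIfNotExists returns a nil bucket on error.
func setFillPercent(tx *bolt.Tx, name []byte) error {
	b, err := tx.CreateBucketIfNotExists(name)
	if err != nil {
		return fmt.Errorf("cannot create bucket %s (%v)", name, err)
	}
	b.FillPercent = 0.9 // assumption: a mostly append-only workload
	return nil
}
```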
ec43197344 etcdserver/api/v3rpc: debug user cancellation and log warning for rest
A context error with the cancel code typically indicates user cancellation,
which should be logged at debug level. For other error codes we should log a warning.

Fixes #9085
2018-01-08 10:14:37 -08:00
70ba0518f1 embed: enable extensive metrics if specified 2018-01-07 18:48:59 -08:00
e330f5004f etcdmain: unset ETCD_UNSUPPORTED_ARCH after arch check
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-05 03:38:35 +00:00
0ec5023b7b pkg/expect: fix deadlock in mac OS
bufio.NewReader.ReadString blocks even
when the process has received syscall.SIGKILL.
Remove the ptyMu mutex and make ReadString return
when the *os.File is closed.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 14:34:01 -08:00
0f69520622 version: bump up to 3.3.0-rc.1+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 14:33:10 -08:00
d3c2acf090 version: bump up to 3.3.0-rc.1
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:27:15 -08:00
5e35f79087 clientv3/integration: fix TestKVLargeRequests with -tags cluster_proxy
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:07:24 -08:00
6dff1a9398 tools/functional-tester: remove duplicate grpclog set
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:02:17 -08:00
325913d6fb etcdserver/api/v3rpc: set grpclog once
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:02:17 -08:00
24c9fb0527 etcdserver,embed: discard gRPC info logs when debug is off
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:02:17 -08:00
8511db5e2b etcdserver/api/v3rpc: log stream error with debug level
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:02:17 -08:00
3193f3c9ab clientv3/leasing: fix racey waitSession
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-21 17:51:03 -08:00
bdc508cadf grpc-proxy: add "--debug" flag to "etcd grpc-proxy start" command
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-21 14:44:10 -08:00
d5a0609412 embed: only discard infos when debug flag is off
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-21 14:44:02 -08:00
67af1a2138 CHANGELOG: remove rc in release-3.3
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 14:32:15 -08:00
66d68a8fdb *: update release upgrade test versions
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 14:16:59 -08:00
ebaa83c985 version: bump up to 3.3.0+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 14:16:49 -08:00
f7a395f030 version: bump up to 3.3.0
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 13:48:48 -08:00
09fe2ddc31 Merge pull request #9057 from gyuho/version-bump-up
*: server-side version bump up to 3.3
2017-12-20 13:48:22 -08:00
bd9bd71a61 rafthttp: add 3.3.0 support
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 13:34:12 -08:00
b3ec44ca99 etcdserver/api: add 3.3.0 as compatible server capability
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 13:34:12 -08:00
578d13c53c Merge pull request #8724 from gyuho/cleanup-retry
clientv3/retry: clean up retryRPCFunc
2017-12-20 13:33:53 -08:00
b01f16a4a4 Merge pull request #9058 from hexfusion/fx_lease_proto
Documentation/dev-guide: update TimeToLive documentation.
2017-12-20 12:50:33 -08:00
211805b188 Merge pull request #9056 from gyuho/lease-expire-doc
Document/upgrades: add "lease timetolive" output change
2017-12-20 12:40:58 -08:00
eb65f26182 Documentation/dev-guide: Update TimeToLive documentation. 2017-12-20 15:39:37 -05:00
255476b5e5 clientv3/retry: clean up retryRPCFunc
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-20 12:30:33 -08:00
a7282a3f9f Document/upgrades: add "lease timetolive" output change
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 12:29:03 -08:00
94e50a1e68 Merge pull request #9047 from gyuho/client-grpc-call-options
*: configurable gRPC message size limits
2017-12-20 12:25:53 -08:00
c00255908d Merge pull request #9053 from tomwilkie/peer_round_trip_time_seconds
Documentation/op-guide: fix typo, s/member_round_trip_time/peer_round_trip_time/
2017-12-20 12:17:47 -08:00
e49e231b81 Merge pull request #8979 from gyuho/changelog-3.3
CHANGELOG: add v3.3 pre-release
2017-12-20 12:16:44 -08:00
3c5eb4f4fe Documentation/upgrades: highlight raw gRPC client wrapper changes
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 11:09:05 -08:00
3d56045da0 integration: bump up wait leader timeout for slow CIs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:05 -08:00
88fe8de99b clientv3/integration: fix TestKVPutError
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:05 -08:00
6bfde98be7 Documentation/upgrades: highlight request limit changes in v3.2, v3.3
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:05 -08:00
3d924aedc8 Documentation/upgrades: clean up 3.2, 3.3 guides
Make headers consistent.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:05 -08:00
1b3ed912a2 words: whitelist more
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:05 -08:00
f38593bbad clientv3/integration: test large KV requests
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:05 -08:00
497412c588 clientv3: call other APIs with default gRPC call options
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:05 -08:00
f87760998b clientv3: call KV/Txn APIs with default gRPC call options
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:05 -08:00
63d66b1011 clientv3: configure gRPC message limits in Config
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:01 -08:00
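
A sketch of setting those limits from application code, using the Config fields these commits add (values here are arbitrary examples):

```
package clientsketch

import (
	"time"

	"github.com/coreos/etcd/clientv3"
)

func newClient() (*clientv3.Client, error) {
	return clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,

		// Raise the per-call gRPC message ceilings for large KV requests.
		MaxCallSendMsgSize: 4 * 1024 * 1024,
		MaxCallRecvMsgSize: 8 * 1024 * 1024,
	})
}
```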
9442f90016 integration: remove typo in "TestV3LargeRequests"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:54:40 -08:00
22127895d8 Merge pull request #8919 from gyuho/exec-watch
etcdctl: support exec watch in v3
2017-12-20 10:53:30 -08:00
cd2f83900a CHANGELOG: add "lease timetolive" change
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 09:54:49 -08:00
5628fff00f CHANGELOG: link to upgrade guides for every release
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 09:54:49 -08:00
38f92801db CHANGELOG: highlight request limit changes, add v3.2.12
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 09:54:45 -08:00
59269aa7b0 CHANGELOG: update "code changes" links
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 09:52:22 -08:00
d9d12acf78 CHANGELOG: add v3.3.0 pre-release
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 09:52:22 -08:00
849c6cfb21 CHANGELOG: minor formatting update in previous releases
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 09:52:22 -08:00
e378b9831c Merge pull request #9052 from gyuho/lease-timetolive-output
etcdctl/ctlv3: clarify "lease timetolive" output on expired lease
2017-12-20 09:49:34 -08:00
f59808a2ca etcdctl: update README for new "lease timetolive" output
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 06:55:23 -08:00
13a8634630 Documentation/op-guide: s/member_round_trip_time/peer_round_trip_time/ 2017-12-20 13:25:54 +00:00
c559b0eede e2e: update "leaseTestTimeToLiveExpire"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 00:52:42 -08:00
9978b4fd35 etcdctl/ctlv3: clarify "lease timetolive" output on expired lease
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 00:40:57 -08:00
25222c22d9 e2e: test watch exec in v3 etcdctl
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-19 19:45:27 -08:00
e89fc20542 etcdctl: document watch exec in v3
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-19 19:45:27 -08:00
904513fa5c etcdctl/ctlv3: support "exec-watch" in watch command
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-19 19:45:24 -08:00
828289db32 Merge pull request #9046 from gyuho/request-limit
integration: test large request response back from server
2017-12-19 16:41:27 -08:00
abfc09b1ca integration: test large request response back from server
Address https://github.com/coreos/etcd/issues/9043.
It won't fix the issue, but we need test coverage on responses back
from the server as well.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-19 14:45:20 -08:00
74a600afae Merge pull request #9045 from gyuho/test-limit
test: bump up clientv3/integration test time out
2017-12-19 14:44:04 -08:00
ba702ae601 test: bump up clientv3/integration test time out
Recently we've added many more tests.
Until we parallelize tests, just increase the timeout.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-19 14:17:55 -08:00
b0a7623be8 Merge pull request #9023 from gyuho/keepalive-doc
clientv3: document context to "KeepAlive" API
2017-12-19 11:53:56 -08:00
ecbd1aec06 Merge pull request #9038 from gyuho/snapshot-error
clientv3: translate Snapshot API gRPC status error
2017-12-19 11:14:25 -08:00
c8a516d515 Documentation/upgrades: document Snapshot API error handling
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-19 10:46:22 -08:00
7cd985bdac clientv3: translate Snapshot API gRPC status error
To be consistent with other APIs.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-19 10:46:19 -08:00
6f899186f8 Merge pull request #9042 from gyuho/grpc-go
vendor: pin "grpc/grpc-go" v1.7.5
2017-12-19 09:40:54 -08:00
d57002ba9c vendor: pin "grpc/grpc-go" v1.7.5
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-18 15:56:23 -08:00
a7445d752b Merge pull request #9039 from gyuho/mvcc-doc
mvcc: clean-up godoc in key_index.go
2017-12-18 13:56:51 -08:00
eb58e7607b Merge pull request #9040 from gyuho/etcd-tester-main
etcd-tester: discard gRPC balancer logs
2017-12-18 13:56:39 -08:00
e21eac808e etcd-tester: discard gRPC balancer logs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-18 13:39:55 -08:00
76dd9d56a1 mvcc: clean-up godoc in key_index.go
Minor clean-up.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-18 13:20:00 -08:00
95ecf9602a Merge pull request #9031 from gyuho/fix-mvcc
mvcc: fetch revisions with current revision, not 0, in HashByRev
2017-12-18 12:26:30 -08:00
2e95ace82b mvcc: fetch revisions with current revision, not 0, in HashByRev
It was fetching revisions with "atRev==0", which makes
"available" from the "keep" method always empty, since
"walk" on "keyIndex" only ever returns true.

"available" should be populated with all revisions to be
kept if the compaction happens at the given revision.
But "available" was left empty by "kvindex.Keep(0)",
since "rev.main > atRev==0" always holds.

Fix https://github.com/coreos/etcd/issues/9022.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-18 12:17:06 -08:00
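A generic sketch of the bug pattern this commit fixes, assuming a simplified keep helper (names and logic are illustrative, not etcd's actual mvcc code): with atRev==0 no revision ever satisfies the keep condition, so "available" stays empty; passing the current revision populates it.

```go
package main

import "fmt"

// keep returns the revisions that would survive a compaction at atRev.
// With atRev==0 nothing qualifies, since every revision's main part is > 0.
func keep(revs []int64, atRev int64) []int64 {
	var available []int64
	for _, r := range revs {
		if r <= atRev {
			available = append(available, r)
		}
	}
	return available
}

func main() {
	revs := []int64{3, 5, 8}
	fmt.Println(keep(revs, 0)) // []      <- the buggy call pattern
	fmt.Println(keep(revs, 8)) // [3 5 8] <- pass the current revision instead
}
```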
fffb26596c words: whitelist "KeepAlive"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-18 10:34:43 -08:00
3e58dd707f clientv3: document lease KeepAlive streaming errors
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-18 10:19:41 -08:00
da3e3b7240 clientv3: document from "don't halt lease client if there is a lease error"
From https://github.com/coreos/etcd/pull/7866.
2017-12-18 09:55:00 -08:00
9c326ab78c Merge pull request #9034 from hexfusion/fx_ttl_doc
clientv3/lease.go: TTL, document expired Lease.
2017-12-18 09:19:55 -08:00
9b98cbb819 Merge pull request #9017 from hexfusion/test_lease_auth
e2e: improve Lease coverage
2017-12-18 09:19:29 -08:00
ed3672850c e2e: improve lease coverage 2017-12-18 10:47:42 -05:00
33a1d307df Merge pull request #9032 from hexfusion/perl_intergration
Documentation/integrations: minor style fix.
2017-12-18 07:20:41 -08:00
a5d9bff24c clientv3/lease.go: TTL, document expired Lease. 2017-12-18 08:34:19 -05:00
ac58646298 Documentation/integrations: minor style fix. 2017-12-18 07:57:28 -05:00
940dace5d1 Merge pull request #9025 from gyuho/meeting
README: add "Community meetings"
2017-12-15 14:59:55 -08:00
207827a94e README: add "Community meetings"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-15 14:37:37 -08:00
91415f8aaa Merge pull request #8961 from gyuho/test-scripts-4
hack/scripts-dev: add "docker-dns-example-certs-common-name-run"
2017-12-15 13:43:43 -08:00
5783460dbb hack/scripts-dev: rename to example
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-15 13:37:55 -08:00
7533f700f1 hack/scripts-dev: add "docker-dns-test-certs-common-name-run"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-15 13:33:41 -08:00
da52b23542 hack/scripts-dev/docker-dns: add "certs-common-name" test case
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-15 13:33:41 -08:00
9deaee3ea1 Merge pull request #9020 from mkumatag/fix_govet
Clientv3: Fix govet error for gotip
2017-12-15 09:21:19 -08:00
b0f0ba7f81 Merge pull request #9018 from gyuho/auth-ctx
*: fix server-side lease expire when auth is enabled
2017-12-15 08:57:26 -08:00
18746c65da Clientv3: Fix govet error for gotip 2017-12-15 14:31:27 +05:30
9fb7bbdb2d integration: add "TestV3AuthWithLeaseRevokeWithRoot"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-14 21:45:50 -08:00
85af65eca9 etcdserver: log lease revoke error
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-14 21:45:20 -08:00
1f191a0e34 auth: use NewIncomingContext for "WithRoot"
"WithRoot" is only used within local node, and
"AuthInfoFromCtx" expects token from incoming context.
Embed token with "NewIncomingContext" so that token
can be found in "AuthInfoFromCtx".

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-14 21:45:16 -08:00
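A minimal sketch of the incoming-context pattern this commit relies on, using the google.golang.org/grpc/metadata package (the key name is illustrative): a value attached with NewIncomingContext is what readers of incoming metadata, such as AuthInfoFromCtx, expect to find.

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/metadata"
)

func main() {
	md := metadata.New(map[string]string{"token": "root-token"})

	// NewIncomingContext makes the metadata visible to code that reads
	// *incoming* metadata, even though no RPC actually arrived.
	ctx := metadata.NewIncomingContext(context.Background(), md)

	if in, ok := metadata.FromIncomingContext(ctx); ok {
		fmt.Println(in.Get("token")) // [root-token]
	}
}
```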
014c375099 Merge pull request #9012 from gyuho/gyuho/help-flag
etcdmain: display default --enable-v2, --strict-reconfig-check value ("true")
2017-12-14 11:40:52 -08:00
608961b2b8 Merge pull request #8921 from gyuho/fileutil-darwin
pkg/fileutil: fix preallocate under OS X kernel
2017-12-14 11:38:17 -08:00
0133d77f0a etcdmain: display default --enable-v2, --strict-reconfig-check value ("true")
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-14 11:25:20 -08:00
954ced48d2 Merge pull request #8923 from gyuho/client-logger
clientv3: simplify balancer level logging
2017-12-14 10:39:02 -08:00
962976f2df pkg/fileutil: fix preallocate under OS X kernel
ftruncate changes st_blocks, and subsequent fallocate
syscalls return EINVAL when the allocated block size
is already greater than the requested block size
(e.g. st_blocks==8, requested blocks are 2).

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-14 10:36:57 -08:00
b48cf77abb Merge pull request #9004 from spzala/readme
Update Running etcd section for pre-built install of etcd
2017-12-13 09:11:41 -08:00
24bdfed0a5 README: update running etcd section for pre-built etcd install
The current Running etcd section only shows how to run etcd when installed
from the master branch. If a user has installed a pre-built release following
the instructions on the release page, ./bin/etcd won't work to bring up
etcd. The Getting etcd section covers both pre-built and master-branch
installs, recommending the pre-built binaries, so the Running etcd section
is updated accordingly.

fix #9003
2017-12-12 22:43:24 -05:00
c886bda7fe Merge pull request #8996 from tinx/api_md_typo_fixes
Documentation/learning/api.md: fix typos
2017-12-12 13:23:02 -08:00
5a2b0dd0a7 Documentation/learning/api.md: fix markup, wording
A few words have been emphasized to be consistent with the rest
of the text. Also, some phrases have been altered for better
readability.
2017-12-12 18:48:06 +01:00
b52856d4f6 Merge pull request #8999 from yudai/fix_revision_failed_message
compactor: fix error message of Revision compactor
2017-12-11 11:44:13 -08:00
06365b6008 compactor: fix error message of Revision compactor
Reorder the parameters so that Noticef can output the error properly.
2017-12-11 10:39:00 -08:00
5f24a81f64 Documentation/learning/api.md: fix typos 2017-12-10 15:18:45 +01:00
809b0d71a3 Merge pull request #8995 from hexfusion/perl_intergration
Documentation/integrations: add Perl clients.
2017-12-09 15:49:03 -08:00
e8ff7da057 Documentation/integrations: add Perl clients. 2017-12-09 13:33:14 -05:00
a7f1fbe00e Merge pull request #8992 from gyuho/server-close
embed: stop *grpc.Server on *serveCtx serve error
2017-12-08 19:54:03 -08:00
e5e109609f Merge pull request #8991 from gyuho/upgrade-guide
Documentation/upgrades: highlight 3.2 breaking change, require gRPC v1.7.4
2017-12-08 18:55:46 -08:00
9744e1ee87 embed: stop *grpc.Server on *serveCtx serve error
If serve errors before *grpc.Server is sent to serversC,
it should be closed manually.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-08 18:50:37 -08:00
e3da56a8df Merge pull request #8985 from gyuho/update-dep
*: update bbolt, gogo/protobuf, golang/protobuf, regenerate protobuf
2017-12-08 08:57:49 -08:00
bbd2147248 Merge pull request #8988 from gyuho/error-return
embed/config: remove v3.2 TODO
2017-12-08 06:41:12 -08:00
015c04bcf5 Merge pull request #8987 from gyuho/tls-shutdown
embed: fix *grpc.Server panic on GracefulStop with TLS-enabled server
2017-12-08 06:40:50 -08:00
4d1a95c18b bill-of-materials: regenerate
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 21:31:13 -08:00
bcd5390b35 *: regenerate protobuf, grpc-gateway
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 21:31:13 -08:00
b1c6b98f3d scripts/genproto: require protoc 3.5, update gogo/proto
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 21:31:13 -08:00
749a9b14e0 vendor: upgrade bbolt, gogo/protobuf, golang/protobuf
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 21:31:13 -08:00
605dcb57a8 Documentation/upgrades: highlight 3.2 breaking change, require gRPC v1.7.4
There's already a section called "Server upgrade checklists" below.
Instead, highlight the listen URLs change as a breaking change in the
server. Also update the 3.2 and 3.3 gRPC requirements to v1.7.4.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 21:28:19 -08:00
af5a5b3998 embed/config: remove v3.2 TODO
Already returning error.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 20:37:12 -08:00
9bd07c91de integration: test GracefulStop on secure embedded server
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 20:36:31 -08:00
552b58dcfb embed: only gracefully shutdown insecure grpc.Server
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 20:36:31 -08:00
e39915feec embed: Avoid panic when shutting down gRPC Server
Avoid panic when stopping gRPC Server if TLS configuration is present.
The provided solution attempts to implement the suggestion from the gRPC team: https://github.com/grpc/grpc-go/issues/1384#issuecomment-317124531.

Fixes #8916
2017-12-07 20:36:31 -08:00
fc2eecf90c Merge pull request #8989 from gyuho/upgrade-doc
Documentation/upgrades: add server upgrade checklists on listen URLs
2017-12-07 19:59:59 -08:00
3d44e55179 Documentation/upgrades: add server upgrade checklists on listen URLs
Address https://github.com/coreos/etcd/issues/6336#issuecomment-246486183
about https://github.com/coreos/etcd/pull/7236.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 17:45:26 -08:00
5b059acd65 semaphore: run upgrade tests against v3.2.11
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 14:35:41 -08:00
a2256a6f24 hack/scripts-dev/Makefile: grpc-proxy with additional metrics URLs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-06 14:24:11 -08:00
84e51cabc7 hack/scripts-dev: fix Makefile quote, configurable host tmp dir
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-06 11:16:53 -08:00
805bcc828c clientv3: simplify V(4) logger with Lvl(4)
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-05 18:48:36 -08:00
5d2461e139 clientv3: add Lvl method to logger
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-05 18:48:36 -08:00
7e0fc6136e Merge pull request #8970 from gyuho/coverage-test
*: fix coverage test failures
2017-12-05 18:38:51 -08:00
f97233d206 test: log gocovmerge merging
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-05 17:18:13 -08:00
89047ab598 Dockerfile-test: use forked version of gocovmerge
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-05 17:17:17 -08:00
1b280a5a12 Merge pull request #8978 from gyuho/clientv3-doc
clientv3/config.go: remove extra whitespace character
2017-12-05 17:14:38 -08:00
944bd2c663 hack/scripts-dev: remove "Too many goroutines" in test scripts
Otherwise, "pkg/testutil" unit tests will trigger test failures.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-05 17:02:24 -08:00
3a941c9455 clientv3/config.go: remove extra whitespace character
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-05 14:51:36 -08:00
3ecd69cc2f hack/scripts-dev: mount host /tmp for Jenkins tests
Was running out of disk space in Jenkins.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-05 10:09:32 -08:00
21252e4219 Merge pull request #8973 from gyuho/changelog
CHANGELOG: add v3.2.11
2017-12-05 09:40:29 -08:00
8b13f7ff12 Merge pull request #8974 from gyuho/indentation
clientv3: fix indentation in doc.go
2017-12-04 17:34:26 -08:00
6458e22708 clientv3: fix indentation in doc.go
Looks off in https://godoc.org/github.com/coreos/etcd/clientv3.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-04 17:05:31 -08:00
788e759559 CHANGELOG: add v3.2.11
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-04 16:57:37 -08:00
6babf6a656 hack/scripts-dev: fix typo in Makefile
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-04 16:27:26 -08:00
287c23c4b1 Merge pull request #8972 from gyuho/grpc
vendor: upgrade grpc/grpc-go to v1.7.4
2017-12-04 16:21:14 -08:00
148192245c Merge pull request #8971 from gyuho/gosimple
*: fix gosimple warnings on sort.StringSlice
2017-12-04 16:13:57 -08:00
b3f53ce16d vendor: upgrade grpc/grpc-go to v1.7.4
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-04 14:27:06 -08:00
21d4307982 lease: use sort.Strings instead of StringSlice
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-04 14:10:14 -08:00
645c7c9a92 auth: use "sort.Strings" instead of StringSlice
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-04 14:09:27 -08:00
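A small illustration of the simplification in the two commits above: sort.Strings is shorthand for sorting a sort.StringSlice.

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	roles := []string{"writer", "admin", "reader"}

	sort.Sort(sort.StringSlice(roles)) // before
	sort.Strings(roles)                // after: equivalent and shorter

	fmt.Println(roles) // [admin reader writer]
}
```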
70db68b6e2 Merge pull request #8938 from gyuho/goddoc
clientv3/doc: update dial-timeout error handling with new gRPC
2017-12-04 13:46:10 -08:00
6b6013fad5 clientv3/doc: update dial-timeout error handling with new gRPC
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-04 13:45:42 -08:00
198d8d6b24 Merge pull request #8963 from gyuho/certs-gateway-srv
hack/scripts-dev: add "certs-gateway" test case with SRV
2017-12-04 13:39:16 -08:00
44e059879c Merge pull request #8968 from zbindenren/master
clientv3: Fix comment for DialKeepAliveTime and DialKeepAliveTimeout
2017-12-04 09:22:27 -08:00
e18afc462b clientv3: Fix comment for DialKeepAliveTime and DialKeepAliveTimeout 2017-12-04 14:22:34 +01:00
7e79c257ca Merge pull request #8960 from jpbetz/version-metric
metrics: Add server_version metric
2017-12-02 12:15:45 -08:00
49b4117077 hack/scripts-dev: add "docker-dns-srv-test-certs-gateway-run" to Makefile
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-01 15:56:10 -08:00
952f3b1a3b hack/scripts-dev/docker-dns-srv: add "certs-gateway" test case
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-01 15:54:58 -08:00
d0ee3e3c64 Merge pull request #8962 from gyuho/test-scripts-5
hack/scripts-dev/docker-dns: add "certs-gateway" test case
2017-12-01 15:51:12 -08:00
6fcfff132f Merge pull request #8959 from gyuho/test-scripts-3
hack/scripts-dev: share docker image between test cases, clean up DNS SRV tests
2017-12-01 15:46:24 -08:00
d50eb4d671 hack/scripts-dev: add separate certs, scripts to "docker-dns-srv"
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 15:45:35 -08:00
e3b3608175 hack/scripts-dev: add docker-dns-srv-test-certs-run, docker-dns-srv-test-certs-wildcard-run
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 15:44:53 -08:00
37ae6e0c41 hack/scripts-dev: keep only shared scripts in docker-dns-srv
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 15:44:53 -08:00
7c6fb57f2f hack/scripts-dev: add "docker-dns-test-certs-gateway-run" to Makefile
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-01 15:43:36 -08:00
9da00c73bd hack/scripts-dev/docker-dns: add "certs-gateway" test case
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-01 15:43:33 -08:00
413aa48593 Merge pull request #8958 from gyuho/test-scripts-2
hack/scripts-dev: share docker image between test cases, clean up DNS tests
2017-12-01 15:41:28 -08:00
461d70254e hack/scripts-dev: add separate certs to "docker-dns"
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 15:39:10 -08:00
4cacbf19dd metrics: Add server_version metric 2017-12-01 15:25:46 -08:00
b1cb99d3eb hack/scripts-dev: add docker-dns-test-certs-run, docker-dns-test-certs-wildcard-run
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 13:49:08 -08:00
a2c2f8ebc6 hack/scripts-dev: only keep shared scripts between test cases
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 13:45:50 -08:00
5db3cdd3bb Merge pull request #8957 from gyuho/test-scripts-1
semaphore, test: grep "test timed out" first, specify leaky goroutine string
2017-12-01 13:44:01 -08:00
75fc59fe0d hack/scripts-dev: grep "test timed out" first
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 13:35:15 -08:00
c6c3e81026 semaphore: grep "test timed out" first, then leaky goroutines
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 13:32:09 -08:00
c152be097b Merge pull request #8955 from gyuho/fix-e2e
e2e: fix remote error string in TestEtcdPeerCNAuth
2017-12-01 13:29:11 -08:00
156722e26a e2e: fix remote error string in TestEtcdPeerCNAuth
The error now reads: embed: rejected connection from "127.0.0.1:58527" (error "remote error: tls: bad certificate", ServerName "").
Change from https://github.com/coreos/etcd/pull/8952.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 12:54:29 -08:00
3537582bcf Merge pull request #8895 from gyuho/tls-doc
Documentation/op-guide: document TLS changes in 3.2
2017-12-01 09:41:32 -08:00
1613ef5822 Merge pull request #8952 from gyuho/tls-log
embed: provide more details on TLS handshake failure
2017-12-01 09:41:16 -08:00
ae589018cb embed: provide more details on TLS handshake failure
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 09:40:23 -08:00
83aa59b480 Documentation/op-guide: document TLS changes in 3.2
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-30 19:20:34 -08:00
b041ce5d51 Merge pull request #8946 from gyuho/cherry-pick
hack/patch: fix some typos in README, make cherrypick.sh executable
2017-11-30 11:35:15 -08:00
3167780cde hack/patch: fix some typos in README, make cherrypick.sh executable
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-30 11:21:38 -08:00
66a9508fdf Merge pull request #8945 from gyuho/logrus
glide: pin transitive dependency "sirupsen/logrus"
2017-11-30 10:19:15 -08:00
c232c85ba7 glide: pin transitive dependency "sirupsen/logrus"
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-30 10:03:04 -08:00
56a012f2ab Merge pull request #8841 from gyuho/test-test
clientv3/integration: add more tests on balancer switch, inflight range
2017-11-30 09:38:53 -08:00
faaac0f964 Merge pull request #8937 from gyuho/quay
Documentation/upgrades: gcr.io as primary, do not deprecate quay.io
2017-11-29 17:36:13 -08:00
7a1deaa12a Merge pull request #8939 from gyuho/server-stream-error-log
api/v3rpc: log grpc stream send/recv errors in server-side
2017-11-29 17:35:20 -08:00
6bd41f36ff api/v3rpc: log grpc stream send/recv errors in server-side
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-29 17:34:05 -08:00
08905ee594 Documentation/upgrades: gcr.io as primary, do not deprecate quay.io
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-29 11:03:44 -08:00
b21180d198 Merge pull request #8936 from gyuho/godoc-clientv3-errors
clientv3: update error handling godoc
2017-11-29 11:00:49 -08:00
92167e8773 clientv3: update error handling godoc
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-29 10:53:54 -08:00
4ad8bd9299 Merge pull request #8858 from gyuho/aaa
test: clean up fmt tests
2017-11-29 09:51:07 -08:00
dbd1787672 Merge pull request #8925 from jpbetz/patch-manager-table
Documentation: Add release manager table
2017-11-28 17:46:07 -08:00
614ef75c01 Merge pull request #8932 from gyuho/release-gcr
scripts/build-docker: build both gcr.io and quay.io images
2017-11-28 15:10:13 -08:00
aca39e2ae1 scripts/build-docker: build both gcr.io and quay.io images
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-28 15:02:09 -08:00
6e116542c2 Merge pull request #8928 from gyuho/timeout-config
embed: error on zero heartbeat-interval, election-timeout
2017-11-28 11:11:20 -08:00
878367ba4c Merge pull request #8930 from jpbetz/changelog-3.1.11
CHANGELOG: add v3.1.11, bug fixes
2017-11-28 11:08:44 -08:00
8cc9063ea6 CHANGELOG: add v3.1.11, bug fixes 2017-11-28 10:59:34 -08:00
c8277e1b02 etcdmain: test wrong heartbeat-interval, election-timeout in TestConfigFileElectionTimeout
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-28 09:53:35 -08:00
cffa130253 embed: error on zero heartbeat-interval, election-timeout
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-28 09:53:32 -08:00
7e876ecc90 Documentation: Add release manager table 2017-11-27 15:43:35 -08:00
a7cb307a18 clientv3/integration: add more tests on balancer switch, inflight range
Test all possible cases of server shutdown with inflight range requests.
Removed redundant tests in kv_test.go.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-27 15:05:12 -08:00
9717a1240c Merge pull request #8926 from gyuho/fix-test
clientv3/integration: move isServerCtxTimeout to server_shutdown_test.go
2017-11-27 15:03:43 -08:00
bd76ac85db clientv3/integration: move isServerCtxTimeout to server_shutdown_test.go
Tests with cluster_proxy tags were failing, since isServerCtxTimeout
was defined with "+build !cluster_proxy".

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-27 15:02:48 -08:00
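A minimal sketch of the build-tag pitfall behind this move (file contents are illustrative, not the actual helper): a symbol defined in a file carrying this constraint is compiled out whenever the cluster_proxy tag is set, so tests built with that tag cannot reference it.

```go
// +build !cluster_proxy

package integration

// isServerCtxTimeout has a stand-in body here; the real helper inspects
// the error returned by the server. Because of the constraint above, this
// function does not exist in builds tagged with cluster_proxy.
func isServerCtxTimeout(err error) bool { return err != nil }
```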
96e32a408f Merge pull request #8896 from gyuho/error
clientv3/integration: handle server-side context timeouts from clock-drift
2017-11-27 14:34:22 -08:00
289653d914 Merge pull request #8920 from gyuho/fix-flag
pkg/flags: fix "SetFlagsFromEnv" error masking
2017-11-27 14:23:09 -08:00
a9105b5a8d clientv3: document context timeout error with server-side clock skew
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-27 14:06:42 -08:00
0d0e8e78f7 clientv3/integration: handle server-side context timeouts from clock-drift
Due to clock drift on the server side, the client's context can
time out on the server side first, while the original client-side
context has not yet timed out.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-27 14:02:09 -08:00
bdff651428 Merge pull request #8922 from gyuho/corrupt-check
*: enable initial corrupt check in tests
2017-11-27 10:37:39 -08:00
a20b24be7b etcd-tester: enable initial corrupt check by default
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-27 09:41:53 -08:00
f228f6a002 e2e: enable initialCorruptCheck by default
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-27 09:39:22 -08:00
965d9806d5 pkg/flags: fix "SetFlagsFromEnv" error masking
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-27 06:41:43 -08:00
5fbb4590fd Merge pull request #8918 from xiang90/ic
integration: always enable initial corruption check
2017-11-27 00:11:35 -08:00
1c69cc5657 integration: always enable initial corruption check 2017-11-26 16:51:04 -08:00
d84d3f2f77 Merge pull request #8554 from gyuho/initial-hash-checking
*: check data corruption on boot
2017-11-23 09:57:26 -08:00
0e4e8ed3d1 embed: corrupt-check on restart member
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-22 21:20:19 -08:00
e0dfc4368f etcdserver: CheckInitialHashKV when "InitialCorruptCheck==true"
etcdserver: only compare hash values if any

It's possible that a peer has a higher revision than the local node.
In that case, hashes will still differ at the requested
revision, but the peer's header revision is greater.

etcdserver: count mismatch only when compact revisions are same

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-22 21:20:14 -08:00
f739853ec6 Merge pull request #8564 from gyuho/upgrade-doc
Documentation/upgrades: add 3.3 changes
2017-11-22 16:20:40 -08:00
1f38f1fddb e2e: add corruption checking tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-22 15:52:09 -08:00
3db5ad8d57 embed,etcdmain: add "--experimental-initial-corrupt-check"
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-22 15:27:14 -08:00
c983f0ae98 Documentation: fix typo in upgrade 3.2 guide 2017-11-22 11:08:21 -08:00
321a9ca0a0 Documentation/upgrades: add 3.3 changes
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-21 10:01:25 -08:00
15bfc1b361 Merge pull request #8893 from dahefanteng/fix-typo
Documentation: change "key file" to "cert file"
2017-11-20 23:55:27 -08:00
8e4d1cb707 Merge pull request #8901 from mitake/auth-context
auth, etcdserver: follow the correct usage of context
2017-11-20 23:54:26 -08:00
f649132a5a auth, etcdserver: follow the correct usage of context
Context keys shouldn't be strings. They should be structs of
their own type.

Fix https://github.com/coreos/etcd/issues/8826
2017-11-21 15:31:19 +09:00
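A minimal sketch of the context-key convention this commit adopts (names are illustrative): an unexported struct type as the key cannot collide with keys defined by other packages, unlike a plain string.

```go
package main

import (
	"context"
	"fmt"
)

type tokenKey struct{} // unexported key type: only this package can create it

func withToken(ctx context.Context, tok string) context.Context {
	return context.WithValue(ctx, tokenKey{}, tok)
}

func tokenFrom(ctx context.Context) (string, bool) {
	tok, ok := ctx.Value(tokenKey{}).(string)
	return tok, ok
}

func main() {
	ctx := withToken(context.Background(), "abc")
	fmt.Println(tokenFrom(ctx)) // abc true
}
```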
f62cd1d66f Merge pull request #8897 from mkumatag/fix_gotip_fmt
Fix go fmt for gotip
2017-11-20 22:25:07 -08:00
e1b1ec8348 etcdmain: Fix go fmt for gotip 2017-11-21 11:37:09 +05:30
fb9e78ff3e Merge pull request #8898 from gyuho/z2
etcdserver,embed: clean up/reorganize client/peer/corrupt handlers
2017-11-20 16:14:02 -08:00
75ababa61f embed: split peer/client/metrics serve methods
Preliminary commit to start the client server later.
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-20 15:23:15 -08:00
08434d0665 etcdserver/corrupt: document data corrupt checking in checkHashKV
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-20 15:04:50 -08:00
1ce3a41e69 etcdserver/corrupt: add "getPeerHashKVs" method
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-20 15:04:45 -08:00
f6f0fb12e0 etcdserver/corrupt: set dial timeout for peer clientv3
Preliminary commit for initial hash checking.
The dial times out when other nodes have not been booted yet.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-20 15:02:57 -08:00
a4c407ece4 Documentation: change "key file" to "cert file"
When referring to "--trusted-ca-file", what we need to provide is a CA cert file, not the CA private key file.
2017-11-20 00:44:32 -05:00
3cff8dd6f8 Merge pull request #8894 from gyuho/a
vendor: upgrade grpc-gateway to v1.3.0, dustin/go-humanize
2017-11-17 15:27:51 -08:00
6a4a30f5d1 vendor: upgrade grpc-gateway to v1.3.0, dustin/go-humanize
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-17 14:02:13 -08:00
24b19ee222 CHANGELOG: fix typos in v3.2.10 release 2017-11-16 23:43:01 -08:00
23fb330df7 CHANGELOG: fix v3.2.10 release date 2017-11-16 13:23:18 -08:00
3766b04b38 Merge pull request #8891 from gyuho/bbb
vendor: coreos/bbolt v1.3.1-coreos.5
2017-11-16 11:34:17 -08:00
ba163efe2e vendor: coreos/bbolt v1.3.1-coreos.5
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-16 10:30:45 -08:00
cbe8c7eda7 Merge pull request #8880 from gyuho/v3beta-endpoint
*: replace grpc-gateway endpoint with /v3beta
2017-11-16 09:42:19 -08:00
7a55a4084d Merge pull request #8884 from gyuho/revert-srv-dns-patch
Revert "embed: fix HTTPs + DNS SRV discovery"
2017-11-15 14:30:08 -08:00
37b3108ce5 Documentation/op-guide: add security guide link to clustering.md
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-15 14:07:06 -08:00
9b772ba94c Documentation/op-guide: add notes for DNS SRV in security.md
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-15 14:07:06 -08:00
94355cb6a5 CHANGELOG: add SRV ServerName auth revert change
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-15 14:07:02 -08:00
fe7b094f63 Revert "embed: fix HTTPs + DNS SRV discovery"
This reverts commit f79d5aaca4.
2017-11-15 13:00:21 -08:00
6260df7404 Merge pull request #8878 from brancz/init-metrics
*: initialize gRPC server metrics with zero values
2017-11-15 10:20:41 -08:00
4a8c788dbf Merge pull request #8879 from brancz/adapt-rules
Adapt rules to use new gRPC metrics
2017-11-15 09:38:25 -08:00
092b270697 Documentation/op-guide: Fix link to Prometheus 2.0 alerting rules 2017-11-15 14:34:55 +01:00
79446ea677 Documentation/op-guide: Adapt alerting rules to new gRPC metrics 2017-11-15 14:33:52 +01:00
627cffd6f8 *: initialize gRPC server metrics with zero values 2017-11-15 11:21:29 +01:00
0f9f452722 e2e: test /v3alpha,beta in v3 curl tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-15 02:14:07 -08:00
c706c6e238 embed: mutate /v3alpha requests with /v3beta for backward compatibilities
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-15 02:14:04 -08:00
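A minimal sketch of the backward-compatibility rewrite described above (illustrative, not the embed package's actual code): legacy /v3alpha paths are mapped onto /v3beta before the handler sees them.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

func rewriteAlpha(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if strings.HasPrefix(r.URL.Path, "/v3alpha/") {
			r.URL.Path = "/v3beta/" + strings.TrimPrefix(r.URL.Path, "/v3alpha/")
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	h := rewriteAlpha(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, r.URL.Path) // echo the (possibly rewritten) path
	}))
	rec := httptest.NewRecorder()
	h.ServeHTTP(rec, httptest.NewRequest("GET", "/v3alpha/kv/range", nil))
	fmt.Println(rec.Body.String()) // /v3beta/kv/range
}
```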
5fd419ff50 embed: replace v3alpha serve path with v3beta
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-15 01:55:49 -08:00
02be1ace59 e2e: replace v3alpha with v3beta in curl grpc-gateway tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-15 01:55:49 -08:00
980942fa44 Documentation/dev-guide: replace v3alpha with v3beta in grpc-gateway doc
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-15 01:55:44 -08:00
ab526e8814 *: regenerate proto, swagger specs
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-15 01:22:09 -08:00
ce6bb4f1c9 etcdserver: replace /v3alpha with /v3beta in proto definitions
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-15 01:19:53 -08:00
d01f3daf95 Merge pull request #8873 from gyuho/grpc-upgrade
vendor: upgrade grpc/grpc-go to v1.7.3
2017-11-14 16:00:59 -08:00
f0497de216 vendor: upgrade grpc/grpc-go to v1.7.3
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-14 13:56:28 -08:00
ec25a5c5b4 Merge pull request #8871 from gyuho/test-script-on-functional-tests
test: Clean agent directories on disk before functional test runs, no…
2017-11-14 13:28:35 -08:00
1bca2e969f test: Clean agent directories on disk before functional test runs, not after
This is primarily so CI tooling can capture the agent logs after the functional tester runs.
2017-11-14 13:09:52 -08:00
6f077bd74c Merge pull request #8866 from hubo1016/patch-1
Documentation/integrations.md: Add aioetcd3 to Python language bindings
2017-11-13 22:07:12 -08:00
6ba39450c3 Documentation/integrations.md: Add aioetcd3 to Python language bindings
aioetcd3 is a Python binding for the etcd v3 API, built on asyncio.

#8866

Signed-off-by: hubo <hubo1016@126.com>
2017-11-14 13:55:35 +08:00
632ba72c6d Merge pull request #8868 from gyuho/bbb
Documentation/upgrades: add client upgrade check list for 3.2.10
2017-11-13 19:31:29 -08:00
eaf47ec053 Documentation/upgrades: add client upgrade check list for 3.2.10
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-13 13:35:21 -08:00
eb19ab14e2 Merge pull request #8656 from gyuho/readme
README: update badges
2017-11-13 11:00:35 -08:00
adeb1fb620 Merge pull request #8848 from brancz/prom-2.0-rules
Documentation/op-guide: Add rules for Prometheus 2.0
2017-11-13 08:44:32 -08:00
27519ffdb4 test: clean up fmt tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-12 14:19:53 -08:00
02ae7a3005 Merge pull request #8861 from gyuho/coverage
*: grpclog.SetLoggerV2 on clientv3.SetLogger, disable gRPC client logs
2017-11-11 22:01:05 -08:00
5a154e8e2b *: disable gRPC client logs in tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-11 20:56:00 -08:00
deb514989c etcdctl/ctlv3: disable grpc client logs when --debug is off
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-11 20:52:33 -08:00
977f33a5a6 clientv3: grpclog.SetLoggerV2 on clientv3.SetLogger
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-11 20:51:45 -08:00
a8fde603b1 Merge pull request #8751 from siddontang/siddontang/raft_learner
raft: add raft learner
2017-11-11 18:43:10 -08:00
43dfefe9e3 Merge pull request #8857 from gyuho/test
*: fix naked returns, integrate with CI
2017-11-10 19:12:26 -08:00
75110dd839 *: fix naked returns
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-10 18:46:15 -08:00
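A small illustration of the style this commit enforces: with named results, a bare return compiles but hides what is returned; the cleanup makes each return site explicit (this is what the nakedret check added below flags).

```go
package main

import "fmt"

// before: named results with a naked return
func splitNaked(n int) (x, y int) {
	x, y = n/2, n-n/2
	return // flagged by nakedret
}

// after: the returned values are visible at the return site
func split(n int) (x, y int) {
	x, y = n/2, n-n/2
	return x, y
}

func main() {
	fmt.Println(splitNaked(5)) // 2 3
	fmt.Println(split(5))      // 2 3
}
```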
c6f2db2e92 raft: support learner 2017-11-11 10:38:21 +08:00
65a606e2e8 test: add naked return checks
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-10 17:55:55 -08:00
0b03d22b5b Dockerfile-test: add "alexkohler/nakedret"
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-10 17:54:55 -08:00
b64c1bfce6 Merge pull request #8840 from gyuho/health-balancer
*: refactor clientv3 balancer, upgrade gRPC to v1.7.2
2017-11-10 15:41:00 -08:00
c669ff9765 clientv3: retry mutable ops on "no connection available"
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-10 15:40:06 -08:00
93f12da1be vendor: upgrade grpc to v1.7.2
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-10 15:39:43 -08:00
123b869a0f clientv3/integration: match grpc.ErrClientConnClosing in TestKVNewAfterClose
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-10 15:39:35 -08:00
103efd922b clientv3/balancer: only notify healthy addresses
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-10 15:39:25 -08:00
012b013538 clientv3: combine "healthBalancer" and "simpleBalancer"
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-10 15:38:05 -08:00
64acd71c11 Merge pull request #8853 from gyuho/ttt
clientv3/integration: remove TestKVGetOneEndpointDown
2017-11-10 14:55:00 -08:00
52f4bc9061 clientv3/integration: remove TestKVGetOneEndpointDown
Already tested in other server shutdown tests.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-10 14:46:40 -08:00
dfe0f8c2bc Merge pull request #8839 from gyuho/test-balancer
clientv3/integration: test linearizable get with leader election, network partition
2017-11-10 13:55:11 -08:00
0dbcd7c1a7 Merge pull request #8849 from gyuho/promhttp
*: deprecate prometheus.Handler, upgrade Prometheus dependencies
2017-11-10 12:04:05 -08:00
6654ae4c2a Merge pull request #8851 from gyuho/doc-doc
*: highlight gRPC metrics change in v3.1
2017-11-10 11:45:50 -08:00
700c9a50c3 CHANGELOG: highlight metrics change in v3.1
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-10 11:00:51 -08:00
8d309bf34a Documentation/upgrades: highlight "go-grpc-prometheus" change
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-10 10:58:00 -08:00
00b15e38df words: whitelist prometheus
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-10 10:08:48 -08:00
993a0cf569 tools: update metrics to use promhttp
Update functional-tester/etcd-tester/main.go to use promhttp.Handler() instead of prometheus.Handler()
2017-11-10 09:47:49 -08:00
527d03e0d2 etcdserver: update metrics to use promhttp
Update api/etcdhttp/metrics.go to use promhttp.Handler() instead of prometheus.Handler()

fixes #8729
2017-11-10 09:47:49 -08:00
973857107e clientv3: update metrics to use promhttp
Update clientv3/example_metrics_test.go and clientv3/integration/metrics_test.go to use promhttp.Handler() instead of prometheus.Handler()

fixes #8729
2017-11-10 09:47:49 -08:00
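A minimal sketch of the migration in the three commits above (the port is arbitrary): promhttp.Handler() from the promhttp subpackage replaces the deprecated prometheus.Handler() for serving the metrics endpoint.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// was: http.Handle("/metrics", prometheus.Handler())
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```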
143de553e6 vendor: upgrade Prometheus dependencies
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-10 09:47:44 -08:00
7ccde4ac8a Merge pull request #8850 from gyuho/you
hack/patch: remove "you" in markdown doc
2017-11-10 09:39:43 -08:00
bb4637bffe hack/patch: remove "you" in markdown doc
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-10 08:59:51 -08:00
10a863aac2 Documentation/op-guide: Add rules for Prometheus 2.0 2017-11-10 14:58:13 +01:00
75c7e62dc7 Merge pull request #8805 from jpbetz/patch-manager-docs
release, documentation, tools: Expand patch management support to the previous two minor versions
2017-11-09 15:06:01 -08:00
3a0e24e6c5 release, documentation, tools: Expand patch management support to the previous two minor versions 2017-11-09 14:07:54 -08:00
05e5b3b62d Merge pull request #8845 from tamalsaha/gw13
*: upgrade grpc-gateway to v1.3
2017-11-08 22:21:51 -08:00
ec881b0507 scripts/genproto: upgrade protoc to 3.4
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-08 18:50:29 -08:00
7ba4ae01b8 vendor: upgrade grpc-gateway to v1.3
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-08 18:46:32 -08:00
c0c19465fc *: upgrade grpc-gateway to v1.3 2017-11-08 18:38:41 -08:00
24c718f7de Merge pull request #8844 from gyuho/ee
etcdmain: do not embed structs (fix go vet warnings)
2017-11-08 15:05:07 -08:00
672d4ae93f Merge pull request #8843 from gyuho/log
store: silence server logs in v2v3 store tests
2017-11-08 15:03:13 -08:00
370ff6b670 etcdmain: do not embed structs (fix go vet warnings)
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-08 14:20:52 -08:00
5cea18baf1 store: silence server logs in v2v3 store tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-08 13:22:00 -08:00
21178f5119 Merge pull request #8815 from zrss/fix-dbstatus-unexpected-write
etcdctl: fix snapshot status accidentally modified the db file
2017-11-08 11:47:06 -08:00
5ed5ee51f5 Merge pull request #8833 from gyuho/release-test
semaphore: manually pin last release version for release tests
2017-11-08 11:15:29 -08:00
efb0057513 Merge pull request #8835 from gyuho/log
*: disable grpc client log in tests by default
2017-11-08 10:23:53 -08:00
0ce02abf59 etcdctl: fix snapshot status accidentally modified the db file 2017-11-09 01:07:48 +08:00
706cf20339 clientv3/integration: test linearizable get with leader election, network partition
Test case that failed my balancer refactor https://github.com/coreos/etcd/pull/8834.
Currently, kv network partition tests do not specifically test
the isolated-leader case.

This PR moves TestKVSwitchUnavailable to network_partition_test.go
and makes it always isolate the leader.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-07 19:51:24 -08:00
47728b8caf Merge pull request #8837 from gyuho/timed-out
test: fail when test times out
2017-11-07 16:42:26 -08:00
dd35fce66c test: fail when test times out
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-07 15:30:51 -08:00
f49f5c9094 *: disable grpc client log in tests by default
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-07 15:05:13 -08:00
1b9f96ebc1 semaphore: manually pin last release version for release tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-07 12:27:26 -08:00
d83820d143 Merge pull request #8824 from gyuho/convert-error-code-2
api/v3rpc: do not convert server context error to grpc/*status.statusError
2017-11-06 17:59:16 -08:00
f48fe8ecda api/v3rpc: do not convert server context error to grpc/*status.statusError
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-06 17:32:23 -08:00
9791429524 Merge pull request #8825 from gyuho/lock
auth: clean up mutex lock/unlocks
2017-11-06 13:39:47 -08:00
38942a2a51 auth: clean up mutex lock/unlocks
Only hold locks when needed.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-06 13:17:29 -08:00
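A small illustration of the locking discipline this commit applies (types are illustrative): the mutex guards only the shared-state access, not unrelated work before or after it.

```go
package main

import (
	"fmt"
	"sync"
)

type store struct {
	mu    sync.Mutex
	users map[string]bool
}

func (s *store) isEnabled(user string) bool {
	s.mu.Lock()
	ok := s.users[user] // critical section kept as small as possible
	s.mu.Unlock()
	return ok // any follow-up work happens without holding the lock
}

func main() {
	s := &store{users: map[string]bool{"root": true}}
	fmt.Println(s.isEnabled("root")) // true
}
```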
1d01aaa395 Merge pull request #8823 from gyuho/pre-allocate
*: preallocate slice (instead of append)
2017-11-06 12:53:46 -08:00
568b856be8 auth: pre-allocate slices in store
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-06 09:16:15 -08:00
ba233e2f4d etcdserver: preallocate slice in apply
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-06 09:13:12 -08:00
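A small illustration of the preallocation pattern in the two commits above: sizing the slice capacity up front avoids repeated reallocation during append.

```go
package main

import "fmt"

func main() {
	src := []int{1, 2, 3, 4}

	out := make([]int, 0, len(src)) // capacity known in advance
	for _, v := range src {
		out = append(out, v*v) // never reallocates
	}
	fmt.Println(out) // [1 4 9 16]
}
```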
21ba9a89a7 Merge pull request #8819 from WIZARD-CXY/fixgrafana
Documentation/op-guide: fix unit in grafana
2017-11-06 08:48:05 -08:00
0b72f651a1 Documentation/op-guide: fix unit in grafana 2017-11-06 13:52:05 +08:00
80d5e1cbb7 Merge pull request #8820 from gyuho/error-code
api/v3rpc: deprecate grpc.Errorf
2017-11-04 23:03:46 -07:00
5d98710b2e api/v3rpc: deprecate grpc.Errorf
It's been deprecated as of grpc/grpc-go v1.6.x.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-04 22:08:17 -07:00
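A minimal sketch of the replacement API: status.Errorf from google.golang.org/grpc/status supersedes the deprecated grpc.Errorf and carries a gRPC status code.

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// was: grpc.Errorf(codes.InvalidArgument, "key is empty")
	err := status.Errorf(codes.InvalidArgument, "key is empty")
	fmt.Println(status.Code(err), err) // InvalidArgument rpc error: ...
}
```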
de950a40e0 Merge pull request #8818 from gyuho/ttt
*: fail tests with egrep "(--- FAIL:|leak)"
2017-11-03 10:56:11 -07:00
e35d34ccea hack/scripts-dev: fail tests with "(--- FAIL:|leak)" in Makefile
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-03 10:15:24 -07:00
31912e35b3 semaphore.sh: fail tests with "(--- FAIL:|leak)"
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-03 10:15:19 -07:00
3f93d9ae00 test: fail tests with "--- FAIL:"
To differentiate from gRPC client log "TRANSIENT_FAILURE"

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-03 10:15:14 -07:00
41d37fcc51 Merge pull request #8816 from gyuho/gofmt
lease/leasehttp: use keyed fields in composite literals
2017-11-03 10:11:12 -07:00
0048df6faf lease/leasehttp: use keyed fields in composite literals
The vet check was complaining that the leasepb.LeaseInternalRequest composite literal uses unkeyed fields.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-03 09:58:03 -07:00
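A small illustration of the keyed-field style this commit adopts (the struct here is illustrative, not the actual leasepb message): keyed literals stay correct if fields are reordered, and the vet warning goes away.

```go
package main

import "fmt"

type leaseRequest struct {
	ID   int64
	Keys bool
}

func main() {
	a := leaseRequest{1, true}           // unkeyed: what the commit replaces
	b := leaseRequest{ID: 1, Keys: true} // keyed: what it uses instead

	fmt.Println(a == b) // true
}
```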
ef475b2502 Merge pull request #8812 from gyuho/default
embed: NewConfig sets LogOutput to "default"
2017-11-02 16:31:33 -07:00
adc3cea8cf etcdmain: use embed.DefaultLogOutput for flags
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-02 14:43:06 -07:00
cdc71ae38e embed: NewConfig sets LogOutput to "default"
Otherwise, embedded etcd will panic in SetupLogging

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-02 14:41:32 -07:00
b65435b86d Merge pull request #8811 from gyuho/fmt
store/stats.go: fix gofmt warnings
2017-11-02 14:19:50 -07:00
f6ca686882 store/stats.go: fix gofmt warnings
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-02 14:18:00 -07:00
ef0e8e17d9 Merge pull request #8810 from gyuho/grpclog-embed
*: move logging to embed, disable grpc server log by default
2017-11-02 14:10:11 -07:00
6127f785a4 embed: disable grpc server logging by default
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-02 13:19:49 -07:00
1fa295e3ba etcdmain: move SetupLogging to embed
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-02 13:19:49 -07:00
4b1e09f2b4 embed: move SetupLogging, LogOutput from etcdmain
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-02 13:19:44 -07:00
f8bec0f631 Merge pull request #8764 from gyuho/hack
hack: add dev scripts
2017-11-02 08:35:43 -07:00
ff05596ba7 hack/scripts: add Makefile for etcd development
Adding some frequently used commands.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-02 06:59:02 -07:00
70f64bb1b6 Dockerfile-test: make Go version flexible, move other test Dockerfiles
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-02 06:59:02 -07:00
736b9f0be3 gitignore: ignore hidden Dockerfiles for docker build
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-02 06:58:58 -07:00
3ac54be402 Merge pull request #8801 from gyuho/windows
vendor: upgrade coreos/bbolt to v1.3.1-coreos.3
2017-10-31 20:09:07 -07:00
67722fc3ff vendor: upgrade coreos/bbolt to v1.3.1-coreos.3
And pin some other dependencies.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-31 19:02:29 -07:00
9e509eb77f Merge pull request #8799 from gyuho/vvv
semaphore,travis: use Go 1.9.2
2017-10-31 13:34:13 -07:00
791370bacf Merge pull request #8796 from gyuho/aaa
clientv3/integration: match more errors in put retries
2017-10-31 13:33:56 -07:00
0ca8f420d4 clientv3/integration: match more errors in put retries
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-31 13:31:38 -07:00
ba749166d5 semaphore,travis: use Go 1.9.2
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-31 13:27:16 -07:00
4e2ef67f2b Merge pull request #8795 from gyuho/balancer-timeout
clientv3/integration: increase balancer switch timeout for TestKVGetResetLoneEndpoint
2017-10-31 13:09:28 -07:00
0f86f0e0e6 words: whitelist more
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-31 11:05:26 -07:00
2c13231e7b clientv3/integration: increase balancer switch timeout for TestKVGetResetLoneEndpoint
Since 3 seconds is the minimum time an endpoint is kept unhealthy,
it is possible that the endpoint switch happens right after the context timeout.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-31 10:48:15 -07:00
63d0ac0fe6 Merge pull request #8790 from gyuho/blackhole-immutable
clientv3/integration: add blackhole tests for range RPCs
2017-10-30 19:58:42 -07:00
8d23e1c870 clientv3/integration: add blackhole tests for range RPCs
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-30 19:18:53 -07:00
9731910754 Merge pull request #8792 from gyuho/blackhole-watch
clientv3/integration: move to TestBalancerUnderBlackholeKeepAliveWatch
2017-10-30 17:40:38 -07:00
a37dd0055f clientv3/integration: move to TestBalancerUnderBlackholeKeepAliveWatch
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-30 17:19:48 -07:00
9ca733255a Merge pull request #8789 from gyuho/blackhole-tests
clientv3/integration: add blackhole tests on mutable operations
2017-10-30 14:14:02 -07:00
8d5c284b6c clientv3/integration: add blackhole tests on mutable operations
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-30 13:32:59 -07:00
299c704295 Merge pull request #8785 from gyuho/ttt
clientv3/integration: finish isolated node test cases
2017-10-30 12:38:02 -07:00
bea930f44d clientv3/integration: finish isolated node test cases
1. one with retry
2. one without retry (range request with longer timeouts)

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-30 11:17:43 -07:00
2200450022 Merge pull request #8783 from gyuho/election-timeout
integration: expose ElectionTimeout method
2017-10-30 10:47:38 -07:00
a41f3b64aa integration: expose ElectionTimeout, multiply ticks to timeout
To be consistent with etcdserver

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-30 09:22:40 -07:00
87ad10c155 Merge pull request #8681 from mitake/binsearch-root-role
auth: use binary search for checking root permission
2017-10-27 15:09:55 -07:00
ca1e6a74e0 Merge pull request #8782 from gyuho/rename
clientv3/integration: rename to 'mustWaitPinReady'
2017-10-27 15:07:31 -07:00
4eb5a70126 Merge pull request #8784 from gyuho/ttt
clientv3/integration: remove client keepalive in network partition tests
2017-10-27 15:01:03 -07:00
5d169b866f clientv3/integration: rename to 'mustWaitPinReady'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-27 15:00:31 -07:00
d75a6a39f5 Merge pull request #8775 from marcovc/master
etcdctl/v3: add lease keep-alive --once flag
2017-10-27 14:58:59 -07:00
03ce2fa037 clientv3/integration: remove client keepalive in network partition tests
Those tests are about balancer endpoint switch, not about keepalive pings.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-27 14:47:37 -07:00
2cea13ba68 Merge pull request #8779 from gyuho/shutdown-test
clientv3/integration: add TestBalancerUnderServerShutdownImmutable
2017-10-27 12:23:22 -07:00
732c40531b Merge pull request #8762 from gyuho/partition-test
clientv3/integration: add TestBalancerUnderNetworkPartitionWatch
2017-10-27 12:22:32 -07:00
0fcafcb828 Merge pull request #8712 from harryge00/benchmark-prompt-password
benchmark: ask for password when it is not supplied
2017-10-27 11:46:46 -07:00
62821158aa Merge pull request #8767 from xiang90/f
clientv3/integration: fix a todo in testNetworkPartitionBalancer
2017-10-27 11:26:40 -07:00
9d95cfb105 clientv3/integration: add TestBalancerUnderServerShutdownImmutable
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-27 10:55:12 -07:00
aaf4a70cd0 etcdctl v3: e2e test for the --once option to the lease keep-alive command
Follow up #8775
2017-10-27 08:48:22 +01:00
1c3567da90 tools/benchmark: ask for password when it is not supplied 2017-10-27 14:30:43 +08:00
a33a3b2872 Merge pull request #8773 from jpbetz/fix-lease-grant-int-test
test: Deflake TestV3LeasePrmote integration test
2017-10-26 21:01:23 -07:00
e980bde82d clientv3/integration: add TestBalancerUnderNetworkPartitionWatch
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-26 18:17:00 -07:00
a9996f8768 test: Deflake TestV3LeasePrmote integration test 2017-10-26 16:58:37 -07:00
0160cd76e5 Merge pull request #8772 from gyuho/shutdown
clientv3/integration: add TestBalancerUnderServerShutdownMutable*
2017-10-26 16:58:33 -07:00
0bfc6a0d92 clientv3/integration: add TestBalancerUnderServerShutdownMutable*
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-26 16:42:33 -07:00
cb188d0b26 etcdctl v3: adds the --once option to the lease keep-alive command
Fixes: #8719
2017-10-27 00:27:11 +01:00
f46c063285 Merge pull request #8774 from gyuho/sync
clientv3/integration: add waitPinReady
2017-10-26 15:13:46 -07:00
6a8d6b6ad9 clientv3/integration: use waitPinReady in blackhole test
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-26 15:12:27 -07:00
af53f54042 clientv3/integration: add waitPinReady
An RPC should be sent to trigger 'readyWait' on the newly pinned address.
Otherwise, endpoints other than ep[0] may be pinned.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-26 15:11:10 -07:00
9b26bde147 Merge pull request #8769 from xiang90/bk
clientv3/integration: add put blackhole test
2017-10-26 15:10:35 -07:00
10c971db70 clientv3/integration: add put blackhole test 2017-10-26 14:09:51 -07:00
7d7e9b6e43 clientv3/integration: fix a todo in testNetworkPartitionBalancer 2017-10-25 22:54:44 -07:00
20f2914e13 Merge pull request #8763 from gyuho/temp
clientv3/integration: Get with context timeout
2017-10-25 17:52:00 -07:00
8fa35216b0 clientv3/integration: Get with context timeout
Address https://github.com/coreos/etcd/pull/8762#discussion_r147019068.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-25 17:43:29 -07:00
995d79a0fc Merge pull request #8758 from gyuho/failure-test
clientv3/integration: add TestBalancerUnderServerShutdownWatch
2017-10-25 17:03:33 -07:00
cea7387b73 clientv3/integration: add TestBalancerUnderServerShutdownWatch
Current Watch integration tests haven't covered the balancer
switch behavior under server failures.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-25 16:13:46 -07:00
c50cfbeaf6 Merge pull request #8759 from gyuho/mmm
integration: use variadic parameters for *Partition
2017-10-25 15:31:33 -07:00
3462d8ba70 Merge pull request #8760 from gyuho/name
clientv3/integration: rename partition tests
2017-10-25 15:00:11 -07:00
6f8c476599 clientv3/integration: rename partition tests
To be consistent with TestBalancerUnderShutdown*

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-25 14:57:16 -07:00
b6f770fc24 integration: use variadic parameters for *Partition
'member' type is not exported.
In network partition tests, we want to do

InjectPartition(t, clus.Members[lead], clus.Members[lead+1])

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-25 14:55:03 -07:00
da0a387aac auth: use binary search for checking root permission
authpb.User.Roles is sorted, so we don't need a linear search to
check whether the user has the root role.
2017-10-25 13:16:37 +09:00
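A minimal sketch of the binary-search check described above, assuming the roles slice is sorted as the commit message states:

```go
package main

import (
	"fmt"
	"sort"
)

func hasRootRole(sortedRoles []string) bool {
	i := sort.SearchStrings(sortedRoles, "root") // O(log n) instead of O(n)
	return i < len(sortedRoles) && sortedRoles[i] == "root"
}

func main() {
	fmt.Println(hasRootRole([]string{"admin", "root", "writer"})) // true
	fmt.Println(hasRootRole([]string{"admin", "writer"}))         // false
}
```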
fff1fb2ed7 Merge pull request #8756 from gyuho/tests
clientv3/integration: do not create v3 clients when not used
2017-10-24 17:38:12 -07:00
ff2ed93b5c clientv3/integration: do not create v3 clients when not used
Add 'SkipCreatingClients' field to skip creating clients if not used.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-24 16:59:41 -07:00
f42534cb21 Merge pull request #8755 from coreos/philips-patch-1
Documentation: add OpenStack to integrations.md
2017-10-24 15:35:39 -07:00
5032feaf22 Documentation: add OpenStack to integrations.md 2017-10-24 15:35:00 -07:00
d095a5c48b Merge pull request #8752 from xiang90/fix_keepalive
clientv3/integration: fix keepalive by waiting for unhealthy
2017-10-24 10:12:13 -07:00
6277828f13 Merge pull request #8743 from dmyerscough/fix-example-snippet
Documentation/op-guide: Fix missing docker volume commands and specify the initial DATA_DIR
2017-10-24 07:03:26 -07:00
8d1f9c654a clientv3/integration: fix keepalive by waiting for unhealthy 2017-10-24 00:56:09 -07:00
abc606f139 Documentation/op-guide: Fix missing docker volume commands and specifying the initial DATA_DIR usage 2017-10-23 22:40:43 -07:00
d16de1b914 Merge pull request #8742 from xiang90/debug_ordering
clientv3: fix balancer unresponsiveness
2017-10-23 21:57:33 -07:00
109f52e3d6 clientv3: fix balancer unresponsiveness
When no address is pinned and the balancer ignores the addr Up due to
its current unhealthy state, the balancer will be unresponsive forever.

This PR fixes it by doing a full reset when there is no pinned addr,
thus re-triggering the Up call.
2017-10-23 21:19:21 -07:00
fdaa04e95f Merge pull request #8749 from gyuho/docker-test
*: fix test docker images, switch travis to docker
2017-10-23 21:12:53 -07:00
2a49b04f09 clientv3/integration: fix typos
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-23 20:13:53 -07:00
0d76ede274 words: whitelist more
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-23 20:13:50 -07:00
d5fc37072c travis: use docker
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-23 20:05:57 -07:00
cd4ca4065e Dockerfile-test: use ubuntu 16.10 as base image
Debian base image from golang-stretch was breaking
shellcheck tests.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-23 19:28:00 -07:00
1724cfa937 Merge pull request #8748 from gyuho/mmm
semaphore: add test scripts
2017-10-23 16:41:24 -07:00
249a2c30d2 Makefile: delete
moving it somewhere else
2017-10-23 16:35:04 -07:00
6337e4a1ec semaphore: add test scripts 2017-10-23 16:35:00 -07:00
319658aef3 Merge pull request #8747 from gyuho/makefile
Makefile: clean up all redundant targets
2017-10-23 13:38:41 -07:00
997469a8cf test: add 'VERBOSE' flag to enable client debugs 2017-10-23 13:13:28 -07:00
2b5733d742 Makefile: remove redundant commands 2017-10-23 13:13:11 -07:00
fa7c8f3f83 gitignore: add covdir 2017-10-23 10:34:30 -07:00
149ee61e02 Dockerfile-test: add codecov for coverage tests 2017-10-23 10:29:08 -07:00
b699c7cff7 Merge pull request #8737 from xiang90/fix_TestWatchKeepAlive
clientv3/integration: shorten keepalive timeout
2017-10-22 21:21:22 -07:00
97f0b28bdb Merge pull request #8738 from gyuho/ccc
clientv3: fix balancer notify, stale endpoint handling, retry
2017-10-22 21:20:44 -07:00
2ae10a8184 Merge pull request #8741 from gyuho/ppp
clientv3/integration: match ErrTimeout in testNetworkPartitionBalancer
2017-10-22 19:16:55 -07:00
f65575073a clientv3/integration: match ErrTimeout in testNetworkPartitionBalancer
For put, etcd can return timeout errors from network partitions.
2017-10-22 18:44:35 -07:00
5943229921 clientv3: wait for current pin endpoint down on notify 2017-10-22 18:02:58 -07:00
3899f9e3c5 clientv3/integration: shorten keepalive timeout 2017-10-22 18:02:15 -07:00
59af91fc69 clientv3: use hostPortError in down function
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-22 18:01:31 -07:00
63ab5addfa clientv3: do not mark stale endpoints as unhealthy 2017-10-22 17:59:26 -07:00
725df70664 clientv3: only stop if EtcdError code is not Unavailable, retry with more error codes
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-22 17:54:14 -07:00
5eef654c3c Merge pull request #8734 from xiang90/testing_log
clientv3: disable server logging for client testing
2017-10-22 16:50:21 -07:00
6f0771d2f6 clientv3: disable server logging for client testing 2017-10-22 16:32:42 -07:00
0c5ca488c1 Merge pull request #8736 from xiang90/disable_retry
clientv3/integration: skip retry test on txn read
2017-10-22 16:15:36 -07:00
06e591d526 clientv3/integration: skip retry test on txn read 2017-10-22 16:14:39 -07:00
ebc09b1149 Merge pull request #8727 from CDKGlobal/fix/close-restore-backup-backend-master
etcdctl: close snapshot backend to close open file on member/snap/db
2017-10-21 10:51:33 -07:00
785a5a11ed Merge pull request #8728 from gyuho/eee
clientv3: remove balancer interface
2017-10-20 16:43:32 -07:00
439c97d465 clientv3: remove balancer interface
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-20 16:31:02 -07:00
7ffcca5946 etcdctl: close snapshot backend to close open file on member/snap/db 2017-10-20 15:25:21 -07:00
6c35754481 Merge pull request #8725 from gyuho/condition
v3rpc/rpctypes: use codes.FailedPrecondition for ErrGRPCNotLeader
2017-10-20 15:06:57 -07:00
2feb8ba545 v3rpc/rpctypes: use codes.FailedPrecondition for ErrGRPCNotLeader
Changes the ErrGRPCNotLeader error code to FailedPrecondition,
so it is no longer retried the way Unavailable errors are.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-20 14:28:17 -07:00
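A minimal sketch of the error-code distinction behind this change: client retry logic keys off the gRPC status code, so moving not-leader errors off Unavailable stops them from being retried (codes shown via the standard status package).

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	errNotLeader := status.Error(codes.FailedPrecondition, "etcdserver: not leader")

	// Retry logic that only retries Unavailable now skips this error.
	fmt.Println(status.Code(errNotLeader) == codes.Unavailable) // false
}
```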
f83ac25412 Merge pull request #8721 from andrewmeissner/feature/update-codecgen
client/v2: regenerate with latest ugorji/go/codec
2017-10-20 09:07:59 -07:00
81ca10f991 client/keys.generated.go: remove ineffassign yynn2 = 0
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-20 08:52:48 -07:00
1b2a62d9d0 client/keys.generated.go: remove redundant and: x.Expiration != nil
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-20 08:47:01 -07:00
cd859cfaa3 scripts: update
ran the updatedep.sh
2017-10-20 09:39:37 -06:00
12a6efb74b update: client
Updating the codec required codecgen to be re-run on the client/keys.go file. This is the result of that run.
2017-10-20 09:23:23 -06:00
b896e985b6 glide: update github.com/ugorji/go/codec
Updating github.com/ugorji/go/codec to the latest commit/version
2017-10-20 09:22:27 -06:00
40b6fcd761 Merge pull request #8717 from gyuho/retry-cleanup
clientv3: clean up retry wrapper, remove all FailFast=false
2017-10-19 16:08:59 -07:00
54ef60d033 clientv3: remove redundant retries in Auth, set FailFast=true
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-19 16:03:12 -07:00
1fa60c9882 clientv3: add TODO for watch retry
Later we can do:

```diff
+// RetryWatchClient implements a WatchClient.
+func RetryWatchClient(c *Client) pb.WatchClient {
+	readRetry := c.newRetryWrapper(isReadStopError)
+	wc := pb.NewWatchClient(c.conn)
+	return &retryWatchClient{wc, readRetry}
+}
+
+type retryWatchClient struct {
+	pb.WatchClient
+	readRetry retryRPCFunc
+}
+
+func (rwc *retryWatchClient) Watch(ctx context.Context, opts ...grpc.CallOption) (stream pb.Watch_WatchClient, err error) {
+	err = rwc.readRetry(ctx, func(rctx context.Context) error {
+		stream, err = rwc.WatchClient.Watch(rctx, opts...)
+		return err
+	})
+	return stream, err
+}

-	return NewWatchFromWatchClient(pb.NewWatchClient(c.conn))
+	return NewWatchFromWatchClient(RetryWatchClient(c))
```

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-19 16:02:01 -07:00
141170c1d4 clientv3: remove redundant retries in Maintenance, set FailFast=true
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-19 16:01:50 -07:00
c09a89d834 clientv3: remove redundant retries in Cluster, set FailFast=true
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-19 16:00:45 -07:00
fecd26f141 clientv3: rename to isRepeatableStopError, isNonRepeatableStopError
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-19 15:58:12 -07:00
b46ab2c36e clientv3: remove redundant retries in KV, set FailFast=true
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-19 15:57:10 -07:00
ad7882590c Merge pull request #8718 from gyuho/qqq
clientv3: remove redundant retries in Lease, set FailFast=true
2017-10-19 15:04:46 -07:00
f95f865060 clientv3: unexport pb.LeaseClient in lease client
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-19 15:02:19 -07:00
87fe8c12ae clientv3: rename to repeatableRetry in lease client
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-19 14:58:54 -07:00
29aa4ce2a1 clientv3: remove redundant retries in Lease, set FailFast=true
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-19 14:53:01 -07:00
a2c61cf04f Merge pull request #8716 from gyuho/ready-wait
clientv3: separate readyWait for ConnectNotify
2017-10-19 13:10:17 -07:00
2540859ee7 clientv3: separate readyWait for ConnectNotify
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-19 13:07:22 -07:00
c945b7b44a Merge pull request #8714 from gyuho/aaa
clientv3: handle stale endpoints, clean up logging
2017-10-19 12:35:30 -07:00
1549403dd2 clientv3: clean up logging, clarify var/field names
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-19 12:33:25 -07:00
ad24700252 clientv3: handle stale endpoint in health balancer
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-19 12:02:31 -07:00
a8f9de2abf Merge pull request #8704 from gyuho/typo
*: fix typo in Makefile, add *.log, release directory to gitignore
2017-10-17 09:06:34 -07:00
5790ffde7c gitignore: ignore *.log, release directory
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-17 09:05:10 -07:00
39fe293649 Makefile: fix typo in 'docker-test-proxy'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-17 09:05:01 -07:00
b989e1992f Merge pull request #8695 from jpbetz/fix-disabled-simple-token-assign
auth: Fix simpleToken to respect disabled state for assign
2017-10-14 15:49:36 +09:00
d3c9643761 auth: Fix simpleToken to respect disabled state for assign 2017-10-13 21:44:07 -07:00
d392debf82 Merge pull request #8693 from gyuho/makefile
Makefile: fix 'test', add 'test-all' commands with docker
2017-10-13 12:42:07 -07:00
f0a78eb516 Makefile: fix 'test', add 'test-all' commands with docker
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-13 12:14:26 -07:00
764a0f79b2 Merge pull request #8683 from gyuho/ctl
etcdctl/ctlv3: inherit/update flags only once in 'check' command
2017-10-11 10:51:40 -07:00
e80b2474fa etcdctl/ctlv3: inherit/update flags only once in 'check' command
When creating multiple clients, 'mustClientFromCmd' overwrites
inherited flags with environment variables, so later clients
were printing warnings on duplicate key updates.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-11 10:37:15 -07:00
0ef0abf9bf Merge pull request #8676 from gyuho/aaa
clientv3: fix typo in 'testNetworkPartitionBalancer'
2017-10-10 19:17:32 -07:00
7f2b6a19d6 clientv3: fix typo in 'testNetworkPartitionBalancer'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-10 16:07:03 -07:00
bc03ce9cab Merge pull request #8674 from gyuho/set-endpoints
clientv3: reset unhealthy on updateAddrs
2017-10-10 13:29:01 -07:00
500c2499f4 clientv3: reset unhealthy on updateAddrs
Otherwise, 'mayPin' incorrectly decides if an address
should be pinned or not.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-10 12:28:57 -07:00
8329963d69 Merge pull request #8669 from gyuho/balancer
clientv3/balancer: handle network partition in health check
2017-10-09 16:54:31 -07:00
e9e17e3fe5 clientv3: pin any endpoint when all unhealthy
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-09 16:02:18 -07:00
826de3c07a words: whitelist more words
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-09 14:54:53 -07:00
8224c748c9 clientv3/integration: add balancer network partition tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-09 14:54:47 -07:00
fbed568b6a clientv3/balancer: mark partitioned member as unhealthy
Previously, when the server returned errors, the retry
wrapper did nothing and passively expected the balancer
to gray-list the isolated endpoint. This is problematic
when multiple endpoints are passed and a network
partition happens.

This patch adds an 'endpointError' method to the 'balancer' interface
to actively handle RPC errors (possibly even before the health-check
API gets called) and gray-list endpoints for the time being,
thus speeding up the endpoint switch.

This is safe in the single-endpoint case, because the balancer
will retry no matter what in that case.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-09 13:40:03 -07:00
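A minimal sketch of the gray-listing idea described above. The names here (grayList, endpointError, mayPin) are illustrative stand-ins, not etcd's internal balancer API:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// grayList tracks endpoints that recently returned RPC errors so the
// balancer can skip them before any health check confirms the failure.
type grayList struct {
	mu        sync.Mutex
	unhealthy map[string]time.Time // endpoint -> time it was gray-listed
	wait      time.Duration        // how long an endpoint stays gray-listed
}

// endpointError records an RPC error against an endpoint immediately,
// rather than waiting for a periodic health check.
func (g *grayList) endpointError(ep string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.unhealthy[ep] = time.Now()
}

// mayPin reports whether an endpoint is currently usable.
func (g *grayList) mayPin(ep string) bool {
	g.mu.Lock()
	defer g.mu.Unlock()
	since, bad := g.unhealthy[ep]
	if bad && time.Since(since) < g.wait {
		return false
	}
	delete(g.unhealthy, ep) // gray-list period elapsed; give it another chance
	return true
}

func main() {
	g := &grayList{unhealthy: map[string]time.Time{}, wait: time.Second}
	g.endpointError("10.0.0.1:2379")
	fmt.Println(g.mayPin("10.0.0.1:2379")) // false: recently errored
	fmt.Println(g.mayPin("10.0.0.2:2379")) // true: never errored
}
```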
1704443c6d clientv3: only health-check when timeout elapses since last failure
Otherwise, a network-partitioned member with an active health-check
server would not be gray-listed, leaving the health balancer stuck
with an isolated endpoint.

Also clarifies some log messages.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-09 13:40:03 -07:00
e47be1f325 Merge pull request #8672 from gyuho/require-leader
etcdctl/ctlv3: enable 'require-leader' for 'watch' command
2017-10-09 13:38:52 -07:00
d44f7d5f67 etcdctl/ctlv3: enable 'require-leader' for 'watch' command
To help with network partition cases.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-09 13:19:30 -07:00
ed92420950 Merge pull request #8666 from lorneli/ordering
clientv3/ordering: compare and update prevRev atomically
2017-10-09 11:14:40 -07:00
09a38a7953 Merge pull request #8671 from gyuho/ddd
Dockerfile-test: add 'ineffassign' to image
2017-10-09 10:38:45 -07:00
2bbd26e8e0 README: update badges
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-09 09:48:23 -07:00
6571829f16 Merge pull request #8663 from YuleiXiao/add_keepalive_for_ctlv3
etcdctl/v3: add keep alive time/timeout
2017-10-09 09:45:59 -07:00
66f2a65f6b Dockerfile-test: add 'ineffassign' to image
Was missing for 'fmt' tests.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-09 09:36:30 -07:00
71197ab2a5 Merge pull request #8670 from gyuho/rrr
README: update 'goreman' guide with 'grpc-proxy'
2017-10-09 09:35:47 -07:00
90c3f91f29 README: update 'goreman' guide with 'grpc-proxy'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-09 09:28:22 -07:00
5096b4ed5d clientv3/ordering: compare and update prevRev atomically
Several goroutines may call setPrevRev concurrently with different
revisions, all higher than prevRev. Previously all of these goroutines
could set prevRev, so prevRev could be replaced by an older one.

If the response's revision equals prevRev, there's no need to call
setPrevRev.
2017-10-09 20:06:19 +08:00
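A minimal sketch of the compare-and-update idea under a mutex. The names (orderingKV, setPrevRev) follow the commit message, but the code is illustrative, not etcd's implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// orderingKV guards prevRev so concurrent responses cannot move it backward.
type orderingKV struct {
	mu      sync.Mutex
	prevRev int64
}

// setPrevRev compares and updates atomically: only a strictly newer
// revision replaces prevRev, so a late-arriving older response is ignored.
func (kv *orderingKV) setPrevRev(rev int64) {
	kv.mu.Lock()
	defer kv.mu.Unlock()
	if rev > kv.prevRev {
		kv.prevRev = rev
	}
}

func main() {
	kv := &orderingKV{}
	var wg sync.WaitGroup
	for _, rev := range []int64{5, 3, 9, 7} { // out-of-order responses
		wg.Add(1)
		go func(r int64) { defer wg.Done(); kv.setPrevRev(r) }(rev)
	}
	wg.Wait()
	fmt.Println(kv.prevRev) // always 9, regardless of scheduling
}
```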
04940efcc2 etcdctl: add keep alive time/timeout in etcdctl
The client can switch from a faulty node to a healthy one when the keepalive times out.

Fixes #7941
2017-10-09 09:51:43 +08:00
a68a3dc79e Merge pull request #8661 from jpbetz/docker-dns-srv-fix
Dockerfile: Improve file permissions for docker build images using bind9
2017-10-07 11:17:57 -07:00
abc81d03a7 Dockerfile: Improve file permissions for docker build images using bind9
/etc/init.d/bind9 is run as the 'bind' user. This fixes file permissions
for the configuration files added by the Dockerfile to match.
2017-10-06 23:34:39 -07:00
b766a26059 Merge pull request #8257 from yudai/websocket_streams
embed: support websocket for bi-directional streams
2017-10-06 21:33:55 -07:00
e8e3467455 Merge pull request #8659 from gyuho/pinned
clientv3: add pinned() method to 'balancer'
2017-10-06 16:03:14 -07:00
bed5f388a8 clientv3: add pinned() method to 'balancer'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-06 15:28:21 -07:00
0cdf5b2d58 vendor: add github.com/tmc/grpc-websocket-proxy
Updating golang.org/x/net as well so that a new dependency
github.com/sirupsen/logrus can be compiled on Windows environments.
2017-10-06 15:14:01 -07:00
077b361bfc Merge pull request #8658 from gyuho/etcdhttp-godoc
etcdserver/api/etcdhttp: document package in doc.go
2017-10-06 10:51:08 -07:00
1109c6c321 etcdserver/api/etcdhttp: document package in doc.go
It was missing from godoc.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-06 10:47:47 -07:00
dcaa0cddfc Merge pull request #8657 from gyuho/debug-line
clientv3: add debugging lines to 'retry' paths
2017-10-06 10:38:44 -07:00
1c6fbcd3d0 clientv3: add debugging lines to 'retry' paths
Helpful for debugging client balancer.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-06 10:13:25 -07:00
d2b3e578e7 Merge pull request #8653 from gyuho/changelog
CHANGELOG: add v3.2.9, minor updates
2017-10-06 09:00:49 -07:00
39912e7018 Merge pull request #8655 from gyuho/makefile
Makefile: suffix test log files
2017-10-06 08:52:11 -07:00
d9e8d4665c Makefile: suffix test log files
In preparation for running all tests inside a container.
Currently, we run Jenkins in a shared environment,
which is not good: it needs manual Go runtime updates,
cannot run two different branches, and suffers port conflicts,
out-of-disk errors, etc.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-06 08:36:32 -07:00
37eabd770e embed: support websocket for bi-directional streams 2017-10-05 16:08:18 -07:00
c58ba620dd Merge pull request #8654 from gyuho/update
e2e/docker-dns-srv: test with TLS
2017-10-05 16:02:23 -07:00
db0ea5d44b Merge pull request #8651 from xiang90/https_srv
embed: fix HTTPs + DNS SRV discovery
2017-10-05 15:49:42 -07:00
cab94ac128 CHANGELOG: add v3.2.9, minor updates
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 15:42:28 -07:00
f79d5aaca4 embed: fix HTTPs + DNS SRV discovery 2017-10-05 15:21:45 -07:00
5d3a5912eb e2e/docker-dns-srv: enable peer, client TLS
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 15:15:02 -07:00
d57159f79a e2e/docker-dns-srv: use 'etcd.local' as SRV, clean up
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 22:05:33 +00:00
e7e24dab64 e2e/docker-dns: enable client-cert-auth in /run.sh
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 22:05:33 +00:00
09f02e5507 fixtures: add 'localhost' to wildcard cert for local cluster
Otherwise, local cluster tests fail.
2017-10-05 22:05:20 +00:00
867e3da0c4 Merge pull request #8652 from gyuho/proxy-tests-Makefile
Makefile: add 'test-proxy', 'test-coverage'
2017-10-05 11:38:02 -07:00
b0dc639807 Makefile: add 'test-proxy', 'test-coverage'
To dockerize all test runs in Jenkins.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 10:52:22 -07:00
70aa30f281 e2e/docker-dns-srv: upgrade Go version to 1.9.1
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 10:40:29 -07:00
8b75689c05 Merge pull request #8648 from gyuho/mu
mvcc: move 'keyi' define before holding locks
2017-10-05 10:28:44 -07:00
9154b31bf3 mvcc: move 'keyi' define before holding locks
To make it consistent with other code paths.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 10:06:28 -07:00
75a51f77f3 Merge pull request #8628 from gyuho/makefile
Makefile: initial commit
2017-10-05 09:58:49 -07:00
b3ff3982b8 Merge pull request #8650 from gyuho/travis
travis: specify Go minor versions
2017-10-05 09:57:57 -07:00
2c93dbf0a8 travis: specify Go minor versions
1.9.x doesn't work with travis Go 'gimme'.
https://travis-ci.org/coreos/etcd/jobs/283789582#L616-L629

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 09:54:52 -07:00
ded97c874b Dockerfile-test: upgrade Go version to 1.9.1
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 09:48:09 -07:00
f5b1da6a20 Makefile: add 'docker-dns-srv-*'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 09:48:06 -07:00
db1be7ebc0 e2e/docker-dns: clean up Procfile.tls
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 09:47:37 -07:00
85bbd0cead e2e/docker-dns-srv: initial commit
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 09:47:37 -07:00
23a302364c Makefile: initial commit
Initial commit to run DNS/SRV tests.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 09:47:33 -07:00
b401659fbb Merge pull request #8649 from gyuho/crypto
vendor: update 'golang.org/x/crypto'
2017-10-05 09:45:20 -07:00
0e6e2f5ec5 vendor: update 'golang.org/x/crypto'
To include 6c586e17d9.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 07:35:11 -07:00
999f329c87 Merge pull request #8634 from gyuho/config
clientv3/yaml: add 'TrustedCAfile' field to replace 'CAfile'
2017-10-04 14:01:40 -07:00
1f2197b1f8 pkg/transport: add TODO to deprecate 'CAFile' field in v4
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-04 14:01:01 -07:00
05f96e8770 clientv3/yaml: add 'TrustedCAfile' field to replace 'CAfile'
To be consistent with etcdmain.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-04 14:00:54 -07:00
58e825c636 Merge pull request #8644 from gyuho/changelog
CHANGELOG: convert from plain text 'news'
2017-10-04 12:28:39 -07:00
2b09a554a2 CHANGELOG: convert from plain text 'news'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-04 11:48:31 -07:00
863dfd1f0e Merge pull request #8616 from mitake/peer-cn-auth
RFC: etcdmain, pkg: CN based auth for inter peer connection
2017-10-04 10:00:53 -07:00
78c57418e0 Merge pull request #8643 from gyuho/ordering
clientv3/ordering: add missing 'errOrderViolation' error check
2017-10-03 18:39:28 -07:00
b2f5393b64 clientv3/ordering: add missing 'errOrderViolation' error check
Fix https://github.com/coreos/etcd/issues/8641.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-03 18:04:36 -07:00
7fb5b90bed Merge pull request #8642 from gyuho/mu
clientv3/ordering: acquire setPrevRev mutex only when needed
2017-10-03 15:56:15 -07:00
69031e3a6d clientv3/ordering: acquire setPrevRev mutex only when needed
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-03 15:06:37 -07:00
e44ce19c1f Merge pull request #8639 from gyuho/ineffassign
test: add 'ineffassign'
2017-10-03 10:30:55 -07:00
6555262cae Merge pull request #8640 from gyuho/proc
Procfile: use grpc-proxy instead of v2 proxy
2017-10-03 10:28:52 -07:00
207c90c5e7 travis: install 'ineffassign'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-03 10:14:37 -07:00
0199bdc266 *: fix 'ineffassign' issues
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-03 10:14:33 -07:00
182d071fd0 Documentation/v2: add Procfile.v2 for proxy
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-03 09:22:31 -07:00
01e83a4334 Procfile: use grpc-proxy instead of v2 proxy
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-03 09:18:56 -07:00
72fbe0576d test: run ineffassign in fmt pass
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-03 02:14:02 -07:00
46223a2202 Merge pull request #8638 from gyuho/typo
Documentation/op-guide: fix typo in configuration.md
2017-10-02 16:47:22 -07:00
530d421f61 Documentation/op-guide: fix typo in configuration.md
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-02 16:35:26 -07:00
8b7fc3e28f Merge pull request #8637 from gyuho/health-log
clientv3: add more health balancer debugging logs
2017-10-02 15:53:45 -07:00
c6e7d3ab7d Merge pull request #8635 from gyuho/options
Documentation/op-guide: add missing flags to configuration.md
2017-10-02 15:42:30 -07:00
b186265003 Merge pull request #8636 from gyuho/monitoring
Documentation/op-guide: add Grafana dashboard link
2017-10-02 15:40:50 -07:00
3f596db104 clientv3: add more health balancer debugging logs
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-02 15:38:36 -07:00
3a566fd3ad Merge pull request #8612 from lorneli/clientv3_integration
clientv3/integration: test leasing txn invalidates deleted cache
2017-10-02 12:29:35 -07:00
245d03f129 Documentation/op-guide: add Grafana dashboard link
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-02 12:24:05 -07:00
834add042e Documentation/op-guide: add missing flags to configuration.md
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-02 09:49:43 -07:00
5f7ce4f7e1 e2e: add a test case for --peer-cert-allowed-cn 2017-10-02 15:59:17 +09:00
1d28a7a69b integration/fixtures: add cert and key of different CN for testing purpose 2017-10-02 15:59:17 +09:00
70018e9207 etcdmain, pkg: CN based auth for inter peer connection
This commit adds an authentication mechanism to inter peer connection
(rafthttp). If the cert based peer auth is enabled and a new option
`--peer-cert-allowed-cn` is passed, an etcd process denies a peer
connection whose CN doesn't match.
2017-10-02 15:59:17 +09:00
aac652009d clientv3/integration: test leasing txn invalidates deleted cache
Test cache invalidating in txnLeasing.commitToCache function.
2017-09-30 13:04:06 +08:00
f361dcc639 Merge pull request #8629 from gyuho/debug-client
integration: enable client debug logging on CLIENT_DEBUG
2017-09-29 12:53:36 -07:00
bc5b7c0937 integration: enable client debug logging on EXPECT_DEBUG
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-29 12:19:59 -07:00
bcef78c665 Merge pull request #8563 from fanminshi/make_auto_compaction_granular
*: support auto-compaction with finer granularity
2017-09-29 11:18:51 -07:00
0e48b5fa0d Merge pull request #8540 from gyuho/news
NEWS: add v3.2.8
2017-09-29 10:52:12 -07:00
daa224a088 Merge pull request #8621 from tpot/agent-test-data-dir
functional-tester: don't specify data dir on tester side
2017-09-29 08:50:04 -07:00
f8e63934b1 functional-tester: don't specify data dir on tester side
The data directory is added automatically in commit 2e3d27e, but the test
was not updated.
2017-09-29 15:06:52 +10:00
0e1993f131 etcdmain: check for empty AutoCompactionRetention 2017-09-28 17:31:09 -07:00
253259452b compactor: support finer retention period in compactor.go 2017-09-28 17:22:52 -07:00
733de98cfb *: modify etcd flags to support finer compaction retention 2017-09-28 17:22:44 -07:00
99cda531cb NEWS: add v3.2.8
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-28 16:32:17 -07:00
2cfe0d6774 Merge pull request #8626 from gyuho/kc
*: add watch with client keepalive test
2017-09-28 16:20:25 -07:00
65ffb52e5f clientv3/integration: add TestWatchKeepAlive
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-28 15:35:29 -07:00
b5c31522ee words: mask more words in spellcheck
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-28 15:35:26 -07:00
044aca7f50 integration: configure keepalive parameters for server
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-28 15:34:39 -07:00
741d7e9dca integration: add Blackhole to bridgeConn
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-28 15:34:36 -07:00
55b728973c Merge pull request #8625 from gyuho/kl
vendor: upgrade grpc/grpc-go to v1.6.0
2017-09-28 14:47:48 -07:00
6b06a69aba vendor: upgrade grpc-go to v1.6.0
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-28 13:35:53 -07:00
10202a54ef Merge pull request #8535 from gyuho/keepalive-server
*: configure server keepalive
2017-09-28 13:26:25 -07:00
4b3d4000af etcdmain: add 'grpc-keepalive-*' flags
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-28 11:24:02 -07:00
157c8eccf0 embed: define keepalive server options
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-28 11:19:29 -07:00
32e15d790f api/rpc: accept grpc.ServerOption's for keepalive policy
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-28 10:55:00 -07:00
e1e236155c Merge pull request #8620 from gyuho/bbolt
vendor: upgrade coreos/bbolt to v1.3.1-coreos.2
2017-09-27 16:21:34 -07:00
8f6a0ee26c vendor: upgrade coreos/bbolt to v1.3.1-coreos.2
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-27 15:28:52 -07:00
398e6ba2a6 Merge pull request #8601 from gyuho/notify
clientv3: wait for ConnectNotify before sending RPCs
2017-09-27 14:22:43 -07:00
636815909d clientv3/integration: match context errors to stopped server
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-27 13:12:08 -07:00
a439095697 clientv3: wait for ConnectNotify before sending RPCs
On a slow CPU, gRPC can lag behind, with RPCs being sent before
'Up' is called, returning 'no address available' on the first try.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-27 13:12:08 -07:00
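A sketch of the wait-before-send pattern: block on a readiness channel before issuing the first RPC, bounded by the caller's context. Here connectNotify and waitReady are hypothetical stand-ins for the balancer's internal ready signal:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitReady blocks until the balancer signals a pinned connection
// (the channel closes) or the context expires.
func waitReady(ctx context.Context, connectNotify <-chan struct{}) error {
	select {
	case <-connectNotify:
		return nil // safe to send RPCs now
	case <-ctx.Done():
		return errors.New("no address available: " + ctx.Err().Error())
	}
}

func main() {
	ready := make(chan struct{})
	go func() { time.Sleep(10 * time.Millisecond); close(ready) }() // connection comes up

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	fmt.Println(waitReady(ctx, ready)) // <nil>
}
```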
b6b4898f6b Merge pull request #8619 from gyuho/lfix
clientv3/integration: fix license, minor nits in leasing_test.go
2017-09-27 09:40:13 -07:00
92f5746c54 clientv3/integration: fix license, minor nits in leasing_test.go
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-27 09:23:39 -07:00
554298d429 Merge pull request #8594 from mitake/auth-priority
RFC: etcdserver: swap priority of cert CN and username + password
2017-09-26 08:41:30 -07:00
f815d9a65b e2e: add and update test cases for CN based auth 2017-09-26 16:12:43 +09:00
2240b6a592 Merge pull request #8604 from gyuho/debug-client
etcdctl,clientv3: add debugging logs
2017-09-26 07:18:00 +09:00
090c192517 clientv3: add debugging logs, warnings
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-26 07:16:16 +09:00
c63d6b6a25 ctlv3: print envs, configure grpc logger with debug flag
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-26 07:05:10 +09:00
b355232dd6 Merge pull request #8606 from gyuho/doc
Documentation/op-guide: remove grafana demo link
2017-09-26 02:39:26 +09:00
607d0762eb Documentation/op-guide: remove grafana demo link
The dashboard was removed during the Tectonic migration
in AWS, while Grafana still runs in GCP.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-26 01:52:35 +09:00
4830ca74e6 Merge pull request #8599 from xiang90/longer_timeout
etcdserver: make dial timeout longer
2017-09-23 16:14:50 -07:00
35e285674b etcdserver: make tick duration calculation clear 2017-09-23 15:43:12 -07:00
230323255a etcdserver: make dial timeout longer 2017-09-22 14:56:41 -07:00
6515a1dfd0 Merge pull request #8289 from mitake/auth-proxy
clientv3, etcdmain, proxy: support authed RPCs with grpcproxy
2017-09-22 16:14:37 +09:00
1296281b27 etcdserver: swap priority of cert CN and username + password 2017-09-22 15:53:47 +09:00
cbddcfd9ad Merge pull request #8556 from gyuho/go-tip
client: fix TestHTTPClusterClientSyncUnpinEndpoint
2017-09-22 13:33:34 +09:00
fbc7acde95 client: permute endpoints manually (for Go 1.9+)
To keep backward compatibility, use the old algorithm of
rand.Rand.Perm.

Reference: caae0917bf (diff-d4a72c5ba8515eae95a093e0aec62635).

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-22 10:19:30 +09:00
527429f30a Merge pull request #8588 from gyuho/aaa
Documentation: use Go 1.9+ in dl_build.md
2017-09-22 06:05:12 +09:00
24cce732b6 Merge pull request #8590 from raoofm/patch-13
etcd.conf.yml.example: peer-client-cert-auth flag
2017-09-21 12:27:19 -07:00
36e37580f3 etcd.conf.yml.example: peer-client-cert-auth flag
The previous config was incorrect for peer client cert auth:
  # Enable peer client cert authentication.
  client-cert-auth: false

corrected to:
  peer-client-cert-auth
2017-09-21 10:41:52 -04:00
517c15d3e1 Documentation: use Go 1.9+ in dl_build.md
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-21 20:21:52 +09:00
3a7858c439 Merge pull request #8582 from mitake/derr
e2e: log an error of TempDir() during the preparation of cluster crea…
2017-09-20 17:05:24 +09:00
8a4c8dc3b0 e2e: log an error of TempDir() during the preparation of cluster creation 2017-09-20 17:01:04 +09:00
e8c18e3368 proxy: handle authed snapshot request in grpcproxy
Like the previous commit 10f783efdd12, this commit lets grpcproxy
forward an auth token supplied by its client in an explicit
manner. Snapshot is a stream RPC, so this process is required
as it is for watch.
2017-09-20 15:27:27 +09:00
c50960e39a e2e: enable tests related to auth and proxy 2017-09-20 15:27:26 +09:00
94b5071c30 etcdmain, proxy: handle authed watch in grpcproxy
This commit lets grpcproxy handle authed watch. The main changes are:
1. forwarding the token of a new broadcast client
2. checking the permission of a new client that joins an
   existing broadcast
2017-09-20 15:27:26 +09:00
e709f83253 etcdmain, proxy: support authed RPCs with grpcproxy
This commit lets grpcproxy support authed RPCs. Auth tokens supplied
by clients are now forwarded to etcdserver by grpcproxy.
2017-09-20 11:14:45 +09:00
aca8a0d5b9 Merge pull request #8574 from abronan/regenerate_keys
client: regenerate sources for etcd/client with new ugorji/go changes
2017-09-20 09:39:13 +09:00
a819e689b0 Merge pull request #8580 from zbwright/patch-1
docs: remove link-breaking space
2017-09-20 08:10:27 +09:00
f45ba5935a docs: remove link-breaking space 2017-09-19 15:54:16 -07:00
8dc4833a3e client: regenerate sources for etcd/client with new codec version
Major updates to ugorji/go changed the signature of some
methods, resulting in the build failing for etcd/client
with a default installation of the codec.

We regenerate the sources using codecgen with the new version
to reflect the changes.

Fixes #8573

Signed-off-by: Alexandre Beslic <abeslic@abronan.com>
2017-09-19 15:14:58 +02:00
5bb9f9591f Merge pull request #8572 from gyuho/op-guide
Documentation/op-guide: add docker:// to 'rkt run gcr.io'
2017-09-19 10:59:01 +09:00
94e563e111 Documentation/op-guide: add docker:// to 'rkt run gcr.io'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-19 10:51:01 +09:00
bcbf18491f Merge pull request #8570 from a-robinson/indent
raft: fix bullet point indentation in README
2017-09-18 13:57:21 -07:00
b9c4f5b22a raft: fix bullet point indentation in README 2017-09-18 16:07:51 -04:00
3cad5e4da1 Merge pull request #8545 from heyitsanthony/health-balancer
clientv3: Health balancer
2017-09-18 09:24:45 -07:00
a4777080cb Merge pull request #8567 from xiang90/r_l
raft: ensure CheckQuorum is enabled when readonlyoption is lease based
2017-09-17 19:29:22 -07:00
9801fd7297 raft: ensure CheckQuorum is enabled when readonlyoption is lease based 2017-09-17 10:46:12 -07:00
085adc5b8b Merge pull request #8566 from heyitsanthony/fix-cov
test: fix flags in coverage test
2017-09-17 10:18:41 -07:00
166e6918a6 test: fix flags in coverage test
broken when fixing shellcheck errors
2017-09-17 00:33:56 -07:00
49e5e78d0f clientv3/integration: test endpoint switches on partitioned member 2017-09-16 13:55:39 -07:00
efd7800e0f clientv3: try next endpoint point on unavailable error 2017-09-16 13:55:39 -07:00
e3deb9f482 clientv3: test health balancer gray listing 2017-09-15 14:24:46 -07:00
84db8fdaea clientv3: health check balancer 2017-09-15 14:24:46 -07:00
6cf0fd7cb0 Merge pull request #8339 from javaforfun/shawnsli/check-msg-type-before-become-follower
raft: check whether it's a leader RPC request when receiving a message with a higher term
2017-09-14 18:25:25 -07:00
58b98c6a14 raft: check leader request when becomeFollower 2017-09-15 08:23:18 +08:00
4afb99ffc1 Merge pull request #8552 from heyitsanthony/fix-proxy-keys-only
grpcproxy: respect KeysOnly flag
2017-09-13 12:01:48 -07:00
7f4464415a grpcproxy: respect KeysOnly flag
Fixes #8478
2017-09-13 09:57:08 -07:00
4366f35e1e e2e: test no value is returned in TestCtlV3GetKeysOnly
The test was checking that the key name is returned, but was not
correctly checking that no value is returned.
2017-09-13 09:50:24 -07:00
1b85dad7b0 Merge pull request #8514 from mitake/empty-key-perm
etcdctl: handle empty key permission correctly
2017-09-13 17:26:08 +09:00
e4c0e11702 e2e: enhance test cases for a way of handling empty keys 2017-09-13 14:25:52 +09:00
1ae6f1614d etcdctl: handle empty key permission correctly
Currently, `etcdctl role grant-permission` doesn't handle an empty key
("") correctly. Because range permissions are treated as
BytesAffineInterval internally, just specifying the empty key as the
beginning of a range introduces an invalid permission which doesn't work
and betrays users' intuition. This commit fixes the handling of an empty
key as a prefix or from-key in permission granting.

Fix https://github.com/coreos/etcd/issues/8494
2017-09-13 14:25:52 +09:00
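A rough sketch of the range convention the fix relies on. The helper permRange is hypothetical and the encoding details are simplified; etcd's actual interval handling lives in its auth and adt packages:

```go
package main

import "fmt"

// permRange maps a (key, from-key) permission request onto a byte-range.
// The point of the fix: an empty key cannot begin a valid interval, so it
// is treated as the open-ended range over all keys instead.
func permRange(key string, fromKey bool) (begin, end string) {
	switch {
	case key == "" && fromKey:
		// Empty key as from-key: smallest key "\x00" to the open end,
		// i.e. every key; previously this produced an invalid interval.
		return "\x00", "\x00"
	case fromKey:
		return key, "\x00" // from key to the open end of the keyspace
	default:
		return key, "" // empty end means a single-key permission
	}
}

func main() {
	b, e := permRange("", true)
	fmt.Printf("begin=%q end=%q\n", b, e) // begin="\x00" end="\x00"
}
```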
510d884e62 Merge pull request #8537 from lorneli/lease_test
lease: test minLeaseTTL limit
2017-09-12 14:01:46 -07:00
6f6279075a Merge pull request #8546 from heyitsanthony/receiver-ci
test: check for inconsistent receiver names
2017-09-12 13:59:52 -07:00
846255b95e Merge pull request #8513 from shenlanse/bug-fix
rafthttp: add remote in pipeline and snapshot handler
2017-09-12 13:48:56 -07:00
10b731baa8 Merge pull request #8516 from purpleidea/feat/leaseid-okay
clientv3: Allow naked LeaseID or int64 for LeaseValue Compare's
2017-09-12 09:05:33 -07:00
28a22075ca lease: test minLeaseTTL limit
Test whether a lease's TTL is set to minLeaseTTL when a TTL
smaller than minLeaseTTL is passed to the Grant function.
2017-09-12 20:24:27 +08:00
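The clamping behavior the test verifies, as a minimal sketch; the constant value and the grantTTL helper are illustrative, not etcd's actual code:

```go
package main

import "fmt"

const minLeaseTTL = 5 // seconds; illustrative value, not etcd's constant

// grantTTL raises a requested TTL to the lessor's minimum, mirroring
// what the test above checks for Grant.
func grantTTL(requested int64) int64 {
	if requested < minLeaseTTL {
		return minLeaseTTL
	}
	return requested
}

func main() {
	fmt.Println(grantTTL(1))  // 5: raised to the minimum
	fmt.Println(grantTTL(30)) // 30: unchanged
}
```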
4fa1dd196c *: make receiver names consistent 2017-09-12 03:54:04 -07:00
9553afbb24 Merge pull request #8533 from gyuho/grpc
*: upgrade grpclog to LoggerV2
2017-09-12 03:53:04 -07:00
bb4e0473ae Merge pull request #8531 from gyuho/error
*: deprecate grpc.Code, grpc.ErrorDesc
2017-09-12 03:52:30 -07:00
98e4a05068 test: check for inconsistent receiver names 2017-09-12 03:41:10 -07:00
5f36875272 rafthttp: add remote in pipeline and snapshot handler when corresponding peer or remote do not exist
Fixes: #8506
2017-09-12 18:38:18 +08:00
69f32bac34 Merge pull request #8542 from gyuho/go-systemd
vendor: upgrade go-systemd to v15, remove cockroachdb/cmux
2017-09-11 15:26:50 -07:00
7761a4672e vendor: upgrade go-systemd to v15, remove cockroachdb/cmux
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-11 14:53:37 -07:00
6f76d52a1a *: deprecate grpc.Code, grpc.ErrorDesc
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-11 09:28:56 -07:00
18ba4d60ec v3rpc/rpctypes: use grpc.status for errors
grpc.Code, grpc.ErrorDesc, grpc.Errorf have been deprecated.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-11 09:20:03 -07:00
bc50a4591a Merge pull request #8536 from gyuho/typo
*: fix minor typos
2017-09-11 07:33:54 -07:00
0b2d8a6c96 *: fix minor typos
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-11 07:33:35 -07:00
3b3d392540 *: use grpclog.LoggerV2
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-08 15:26:16 -07:00
f37ff4a4e2 v3rpc: use grpclog.LoggerV2 for grpc logs
grpclog.Logger has been deprecated.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-08 15:25:36 -07:00
d6c33367c4 clientv3: upgrade grpclog to LoggerV2
grpclog.Logger has been deprecated.
2017-09-08 15:25:32 -07:00
80aa810309 Merge pull request #8519 from heyitsanthony/client-oneshot-failover
client: fail over to next endpoint on oneshot failure
2017-09-08 12:54:35 -07:00
f4355a00ae Merge pull request #8518 from dvrkps/patch-1
travis: add 1.9.x instead of 1.9 to go version
2017-09-08 11:43:04 -07:00
76a35e71be client: fail over to next endpoint on oneshot failure
Fixes #8515
2017-09-08 11:20:20 -07:00
ba89bbb47d Merge pull request #8528 from gyuho/ctx
tools/benchmark: replace 'golang.org/x/net/context' with 'context'
2017-09-08 10:53:31 -07:00
640c0e6ff4 tools/benchmark: replace 'golang.org/x/net/context' with 'context'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-08 09:50:00 -07:00
9a1e294ec6 Merge pull request #8523 from heyitsanthony/remove-gosimple-mask
test: remove S1024 mask from gosimple pass
2017-09-08 09:45:48 -07:00
ae63ac1cf7 test: remove S1024 mask from gosimple pass
Also get stray remaining egreps
2017-09-08 09:21:42 -07:00
f445b463a2 travis: add 1.9.x instead of 1.x go version 2017-09-08 07:41:24 +02:00
6930e471ed Merge pull request #8521 from gyuho/grep
test: use 'grep -E' for non-standard 'egrep'
2017-09-07 20:04:28 -07:00
70c20a9e73 Merge pull request #8522 from gyuho/lessor
lease: use time.Until in 'Remaining'
2017-09-07 18:44:16 -07:00
0e0d9e492f lease: use time.Until in 'Remaining'
Fix 'gosimple' warnings.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-07 18:41:36 -07:00
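The gosimple rewrite in question (the S1024 check mentioned elsewhere in this log), shown side by side:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	expiry := time.Now().Add(10 * time.Second)

	// Before: expiry.Sub(time.Now()) is flagged by gosimple (S1024).
	remaining := expiry.Sub(time.Now())

	// After: equivalent and idiomatic since Go 1.8.
	remaining = time.Until(expiry)

	fmt.Println(remaining.Round(time.Second)) // ~10s
}
```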
e49d93ccb7 test: use 'grep -E' for non-standard 'egrep'
Fix shellcheck complaints.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-07 18:37:33 -07:00
6e39a39e3a Merge pull request #8511 from gyuho/ctx
*: deprecate 'golang.org/x/net/context'
2017-09-07 18:07:57 -07:00
eb55917ef6 Merge pull request #8507 from lorneli/lease_monotime
lease: use monotime in time.Time for Go 1.9
2017-09-07 15:43:24 -07:00
89ee9d6671 travis: add 1.x instead of 1.9 to go version 2017-09-07 23:53:31 +02:00
24498ea167 test: mask 'nil Context' for staticcheck
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-07 13:39:42 -07:00
9a726b424d *: fix leaky context creation with cancel
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-07 13:39:42 -07:00
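The general shape of a leaky-context fix, as a minimal sketch: discarding the CancelFunc keeps the context's timer and goroutine alive until the deadline fires, so the fix is to retain and call cancel:

```go
package main

import (
	"context"
	"time"
)

func main() {
	// Leaky: the CancelFunc is discarded, so the context's resources
	// are not released until the deadline fires (go vet flags this).
	ctx, _ := context.WithTimeout(context.Background(), time.Second)
	_ = ctx

	// Fixed: always retain and call cancel, typically via defer.
	ctx2, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	_ = ctx2
}
```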
9d12ba26e0 README: require Go 1.9+
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-07 13:39:42 -07:00
887a0585e6 vendor: upgrade 'golang.org/x/net' with type alias
Use Go 1.9 type alias.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-07 13:39:42 -07:00
f65aee0759 *: replace 'golang.org/x/net/context' with 'context'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-07 13:39:42 -07:00
ff31fb4b8b Merge pull request #8512 from gyuho/docker
Dockerfile-test: add test image with Go 1.9
2017-09-07 13:33:42 -07:00
a44e11414f Dockerfile-test: add test image with Go 1.9
So the project is not blocked on the Go 1.9 migration by CIs
(e.g. Semaphore CI not supporting Go 1.9).

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-07 13:12:27 -07:00
252cab0c13 clientv3: Allow naked LeaseID or int64 for LeaseValue Compare's
The logical input to Compare would be a LeaseID (type int64) but the
check panics if we give a LeaseID directly. Allow both so that we don't
unnecessarily annoy and confuse the programmer using the API in the most
logical way.
2017-09-07 13:49:35 -04:00
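A usage sketch of the Compare change, assuming the coreos/etcd import path of this era and a reachable etcd endpoint at localhost:2379:

```go
package main

import (
	"context"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	resp, err := cli.Grant(context.Background(), 10)
	if err != nil {
		panic(err)
	}

	// After this change both forms work; previously the naked LeaseID
	// (type clientv3.LeaseID, not int64) made the comparison panic.
	cmpTyped := clientv3.Compare(clientv3.LeaseValue("k"), "=", resp.ID)
	cmpInt := clientv3.Compare(clientv3.LeaseValue("k"), "=", int64(resp.ID))

	_, _ = cli.Txn(context.Background()).If(cmpTyped, cmpInt).Commit()
}
```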
9c3474e4e0 Merge pull request #8500 from heyitsanthony/clientv3-spelling
clientv3: goword spelling check
2017-09-07 10:06:24 -07:00
63aa64d240 lease: use monotime in time.Time for Go 1.9
As of Go 1.9, the time package tracks monotonic time in each time.Time
returned by the time.Now function.

Use time.Time to measure whether a lease is expired and remove the
previous pkg/monotime. Use the zero time.Time to mean forever: there is
no expiration when expiry.IsZero() is true.
2017-09-07 14:18:19 +08:00
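A minimal sketch of the zero-time-means-forever convention described above; expiredAt is an illustrative helper, not etcd's lessor code:

```go
package main

import (
	"fmt"
	"time"
)

// expiredAt reports whether a lease with the given expiry has expired
// at time now. The zero time.Time means "no expiration".
func expiredAt(expiry, now time.Time) bool {
	if expiry.IsZero() {
		return false // forever: never expires
	}
	return expiry.Before(now)
}

func main() {
	now := time.Now()
	fmt.Println(expiredAt(time.Time{}, now))           // false: forever
	fmt.Println(expiredAt(now.Add(-time.Minute), now)) // true: already past
	fmt.Println(expiredAt(now.Add(time.Minute), now))  // false: still valid
}
```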
2bb893b478 rafthttp: add remote in pipeline and snapshot handler when corresponding peer or remote do not exist
Fixes: #8506
2017-09-07 13:49:39 +08:00
2d0eec0b35 clientv3: goword spelling check 2017-09-06 22:11:33 -07:00
4587d56731 travis: enable goword spell checking 2017-09-06 20:47:08 -07:00
ec36d0040b Merge pull request #8508 from heyitsanthony/shellcheck-more
*: fix shellcheck warnings
2017-09-06 20:31:15 -07:00
9abe9da9db *: fix shellcheck warnings
Fixes scripts and removes shellcheck warning suppressions.

* regexp warnings
* use ./*glob* so names don't become options
* use $(..) instead of legacy `..`
* read with -r to avoid mangling backslashes
* double quote to prevent globbing and word splitting
2017-09-06 19:18:04 -07:00
a0361ea3f9 rafthttp: add remote in pipeline and snapshot handler when corresponding peer or remote do not exist
Fixes: #8506
2017-09-07 10:14:54 +08:00
3c1845604b Merge pull request #8484 from lorneli/dev
wal: tiny refactor
2017-09-06 13:50:38 -07:00
05d7dc307b Merge pull request #8490 from lorneli/lease_dev
lease: fix typo and modify findExpiredLeases function
2017-09-06 12:47:25 -07:00
7c50c06fb8 wal: tiny refactor
a. add a comment about reopening the file in the cut function.
b. add the const frameSizeBytes to the decoder.
c. return directly if locked files are empty in the ReleaseLockTo function.
2017-09-07 02:50:37 +08:00
7063a5e5cc lease: add limit in lessor.findExpiredLeases function
The findExpiredLeases function finds expired leases in the leaseMap
until reaching the expiration limit.
2017-09-07 02:34:56 +08:00
77a19cd9d4 lease: fix typos
a. fix a typo in the godoc
b. make the receivers of FakeLessor's functions consistent
2017-09-07 02:34:15 +08:00
4cbe2e8cae Merge pull request #8505 from gyuho/conn-timeout
clientv3: deprecate grpc.ErrClientConnTimeout errors
2017-09-05 16:50:39 -07:00
40e969b02a Merge pull request #8485 from irfansharif/TestRecvMsgPreVote
raft: (re-)introduce TestRecvMsgPreVote
2017-09-05 16:11:52 -07:00
b1595f2792 Merge pull request #8488 from purpleidea/feat/leaseid-helper
clientv3: Add LeaseValue helper to Cmp LeaseID values in Txn
2017-09-05 16:11:21 -07:00
550765d037 clientv3: Add LeaseValue helper to Cmp LeaseID values in Txn 2017-09-05 18:51:12 -04:00
8a351f9851 Documentation/upgrades: add 3.3 upgrade guide
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-05 14:41:53 -07:00
15c3c1be28 *: replace 'grpc.ErrClientConnTimeout' with 'context.DeadlineExceeded'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-05 14:10:43 -07:00
312c68a9c6 clientv3: deprecate grpc.ErrClientConnTimeout errors
Replace with context.DeadlineExceeded.
Address https://github.com/coreos/etcd/issues/8504.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-05 14:10:02 -07:00
9a84c84ea6 Merge pull request #8479 from heyitsanthony/ctlv2-backup-v3
ctlv2: backup --with-v3
2017-09-05 13:46:29 -07:00
9021b85692 Merge pull request #8462 from jiaxuanzhou/serverName
etcdctl: add discovery-srv global flag for v3
2017-09-05 12:29:17 -07:00
9a0f8c5917 etcdctl: add discovery-srv global flag for v3 2017-09-02 10:24:36 +08:00
589a7a19ac Merge pull request #8489 from gyuho/news
NEWS: add v3.2.7
2017-09-01 14:55:45 -07:00
a51135a5f0 NEWS: add v3.2.7
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-01 14:45:38 -07:00
09e30117f5 Merge pull request #8480 from heyitsanthony/fix-decrease-cluster
integration: retry remove in TestDecreaseClusterSize
2017-09-01 13:19:02 -07:00
59d232adf9 integration: retry remove in TestDecreaseClusterSize
The cluster may go through a second leader election if the test machine
is overloaded. Retry the remove until it passes without error.

Fixes #8225
2017-09-01 12:06:59 -07:00
e832048a1f Merge pull request #8481 from heyitsanthony/data-model-generation
Documentation: modifying a key does not create a new gen in data model
2017-09-01 10:03:06 -07:00
248384a468 raft: (re-)introduce TestRecvMsgPreVote
TestRecvMsgPreVote was intended to be introduced in
github.com/coreos/etcd/pull/6624 but was uncapitalized (search for
testRecvMsgPreVote instead) and then subsequently removed due to it
being unused.
2017-09-01 10:45:47 -04:00
079d578959 e2e: test etcdctl backup saves v3 db 2017-09-01 00:24:57 -07:00
b70263247d e2e: launch etcdctl with api=3 when calling etcdctl3
Setting ETCDCTL_API=3 and then calling etcdctl was unwieldy and not
thread safe; all ctl v3 tests had to go through the ctlv3 wrapper and
could not easily mix with v2 commands.
2017-09-01 00:24:57 -07:00
4cd99d1091 Documentation: modifying a key does not create a new gen in data model
Fixes #8444
2017-08-31 23:56:04 -07:00
9f7375c225 ctlv2: save v3 db with v2 data using --with-v3
Also strips out v3 data if not given --with-v3.
2017-08-31 22:57:41 -07:00
b61c7489e0 Merge pull request #8475 from heyitsanthony/mvcc-100-range
mvcc: don't allocate keys when computing Revisions
2017-08-31 16:42:16 -07:00
1b19a5c708 Merge pull request #8407 from heyitsanthony/v2v3
v2 emulation over v3
2017-08-31 16:41:45 -07:00
4c725cee26 Merge pull request #8474 from heyitsanthony/netutil-cmp
netutil: test schemes for URLStringsEqual
2017-08-31 13:40:17 -07:00
9d79d5fe65 mvcc: don't allocate keys when computing Revisions 2017-08-31 13:23:23 -07:00
be7d488982 mvcc: add range benchmark for fetching 100 keys 2017-08-31 13:23:23 -07:00
492bbc9659 netutil: test schemes for URLStringsEqual
add tests for http/https mismatch and unix scheme
2017-08-31 12:41:05 -07:00
32bfd9e5ab test: add v2v3 store tests to integration and cov passes 2017-08-31 12:25:13 -07:00
d4b8193c55 hack/benchmark: update bench.sh to match procfile 2017-08-31 11:47:41 -07:00
e9cf07fa4d e2e: test v2v3 emulation 2017-08-31 11:47:41 -07:00
a0adee5209 etcdmain: add command line flag to etcdmain 2017-08-31 11:47:41 -07:00
5d669290e3 embed: support experimental v2v3 proxy option 2017-08-31 11:47:41 -07:00
75eb05a272 store: test v2v3 store
Changes main store tests to use a timeout select instead of expecting
events to be immediately posted before returning.
2017-08-31 11:47:41 -07:00
cab7572b00 store: separate tests that need Store from those needing *store 2017-08-31 11:47:40 -07:00
8091be6e97 v2v3: ServerV2 backed by clientv3 2017-08-31 11:47:40 -07:00
525fbba1bd etcdctl3: update to use RequestV2 instead of Request 2017-08-31 11:47:40 -07:00
758c3c09fd etcdserver: refactor v2 request processing
Makes interfaces more reusable.
2017-08-31 11:47:40 -07:00
1d3afd4bb5 etcdhttp, v2http, etcdserver: use etcdserver.{Server,ServerV2} interfaces 2017-08-31 11:47:40 -07:00
565831c21c Merge pull request #8455 from janardhan1993/patch-1
Persist entries before hardstate.
2017-08-31 06:45:50 -07:00
b847cde981 raft: update doc for persisting entries before hardstate 2017-08-31 16:24:28 +10:00
7d4a8a6935 Merge pull request #8466 from heyitsanthony/tls-srv-mismatch
srv: if a host matches a peer, only use if url schemes match
2017-08-30 10:42:20 -07:00
409805e9c7 Merge pull request #8469 from mkumatag/fix_govet
Fix go vet errors
2017-08-30 10:06:06 -07:00
247b4ef904 Merge pull request #8465 from heyitsanthony/covbadge
README: add coverage badge
2017-08-30 10:05:03 -07:00
cd772ea737 pkg/pbutil: Fix go vet errors 2017-08-30 20:07:14 +05:30
a671703c08 srv: if a host matches a peer, only use if url schemes match
The https scheme for a peer advertise URL was ignored when resolving through
SRV records.
2017-08-29 23:29:56 -07:00
d31c442197 README: add coverage badge 2017-08-29 22:39:11 -07:00
7cf8eb8dce Merge pull request #8459 from heyitsanthony/mvcc-cancel-close
mvcc: only remove watch cancel after cancel completes
2017-08-29 09:52:48 -07:00
896447ed99 mvcc: only remove watch cancel after cancel completes
If Close() is called before Cancel()'s cancel() completes, the
watch channel will be closed while the watch is still in the
synced list. If there's an event, etcd will try to write to a
closed channel. Instead, remove the watch from the bookkeeping
structures only after cancel completes, so Close() will always
call it.

Fixes #8443
2017-08-28 17:06:33 -07:00
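A simplified sketch of the ordering fix described above: a watch leaves the bookkeeping structure only after its cancel has completed, so Close never races with a watch whose channel was closed underneath it. The stream type and its fields are illustrative, not mvcc's actual structures:

```go
package main

import "sync"

type stream struct {
	mu      sync.Mutex
	watches map[int64]func() // watch id -> cancel func
	ch      chan struct{}    // shared event channel
}

func (s *stream) Cancel(id int64) {
	s.mu.Lock()
	cancel, ok := s.watches[id]
	s.mu.Unlock()
	if !ok {
		return
	}
	cancel() // run cancel to completion first...
	s.mu.Lock()
	delete(s.watches, id) // ...then drop the bookkeeping entry
	s.mu.Unlock()
}

func (s *stream) Close() {
	s.mu.Lock()
	defer s.mu.Unlock()
	for id, cancel := range s.watches { // every remaining watch is still live
		cancel()
		delete(s.watches, id)
	}
	close(s.ch)
}

func main() {
	s := &stream{watches: map[int64]func(){}, ch: make(chan struct{})}
	s.watches[1] = func() {}
	s.Cancel(1)
	s.Close()
}
```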
bd53ae5680 mvcc: test concurrently closing watch streams and canceling watches
Triggers a race that causes a write to a closed watch stream channel.
2017-08-28 17:06:32 -07:00
86d15d1b1c Merge pull request #8457 from mitake/fix-false-groutine-leaks
integration: clean up resources in error paths of TestV3WatchFromCurr…
2017-08-28 12:21:54 -07:00
3fefac17b2 integration: clean up resources in error paths of TestV3WatchFromCurrentRevision
Current error paths of TestV3WatchFromCurrentRevision don't clean the
used resources including goroutines. Because go's tests are executed
continuously in a single process, the leaked goroutines makes error
logs bloated like the below case:
https://jenkins-etcd-public.prod.coreos.systems/job/etcd-coverage/2143/

This commit lets the error paths clean the resources.
2017-08-28 16:31:36 +09:00
9b92e1b2d0 flag: improve StringFlags by supporting a default value at init (#8447)
* flag: improve StringFlags by supporting a default value at init

When initializing the flagSet, setting the default value should be moved
into the StringFlags init func, which is friendlier.

personal proposal

* flag: code improved for StringFlags
2017-08-28 00:02:11 -07:00
60d46a3626 Merge pull request #8453 from heyitsanthony/fix-ctlcov
etcdctl: unset ETCDCTL_ARGS on cov builds
2017-08-27 19:34:11 -07:00
fec145f086 Merge pull request #8454 from lorneli/master
pkg/wait: change list's lock to RWMutex
2017-08-27 10:44:50 -07:00
54fcdb4b5c pkg/wait: change list's lock to RWMutex
Change the list's lock from Mutex to RWMutex, which allows concurrent
calls to the list.IsRegistered function.
2017-08-27 18:23:18 +08:00
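A minimal sketch of the RWMutex pattern: readers take RLock so concurrent IsRegistered lookups don't block each other, while writers still take the exclusive lock. The list type here is illustrative, loosely modeled on pkg/wait:

```go
package main

import (
	"fmt"
	"sync"
)

type list struct {
	mu sync.RWMutex
	m  map[uint64]chan struct{}
}

func (l *list) Register(id uint64) {
	l.mu.Lock() // exclusive lock: mutates the map
	defer l.mu.Unlock()
	l.m[id] = make(chan struct{})
}

func (l *list) IsRegistered(id uint64) bool {
	l.mu.RLock() // read lock: concurrent lookups proceed in parallel
	defer l.mu.RUnlock()
	_, ok := l.m[id]
	return ok
}

func main() {
	l := &list{m: map[uint64]chan struct{}{}}
	l.Register(1)
	fmt.Println(l.IsRegistered(1), l.IsRegistered(2)) // true false
}
```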
1dea4c688e etcdctl: unset ETCDCTL_ARGS on cov builds
The stricter warnings in pkg/flags generate extra output that
breaks coverage tests. Unset the ETCDCTL_ARGS environment variable
so the warnings aren't printed.
2017-08-25 22:43:14 -07:00
c9f677c0ea Merge pull request #8452 from gyuho/badge
clientv3: fix godoc badge link
2017-08-25 17:47:29 -07:00
e441c57972 clientv3: fix godoc badge link
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-25 17:44:36 -07:00
ef5e77e361 Merge pull request #8442 from heyitsanthony/oldrev-test
integration: check concurrent auth ops don't cause old rev errors
2017-08-25 12:03:32 -07:00
d76b29c4d7 Merge pull request #8449 from gyuho/go1.9
*: bump up to Go 1.9 in tests
2017-08-25 09:48:44 -07:00
52855bac49 *: bump up to Go 1.9 in tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-24 19:29:26 -07:00
4ec31f4f7f Merge pull request #8437 from fanminshi/no_outbound_limit_size
v3rpc: use MaxRecvMsgSize and MaxSendMsgSize to limit msg size
2017-08-24 09:52:15 -07:00
752c161ebf Merge pull request #8435 from gyuho/doc
Documentation/v2: remove implementation detail
2017-08-24 08:32:13 -07:00
d3c8f9e856 Documentation/v2: remove implementation detail
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-23 14:56:44 -07:00
dfed636e5a integration: check concurrent auth ops don't cause old rev errors 2017-08-23 14:29:38 -07:00
67d932154c testutil: don't panic on AssertNil on non-nil errors 2017-08-23 14:26:03 -07:00
897cadc88c Merge pull request #8436 from gyuho/bbolt
vendor: upgrade 'coreos/bbolt' to v1.3.1-coreos.1
2017-08-22 20:51:30 -07:00
8e7a0de114 Merge pull request #8439 from heyitsanthony/stm-serialized-snapshot
concurrency: retry snapshot serializable stm if writes since first header rev
2017-08-22 20:47:57 -07:00
b206afc4a7 concurrency: fix STM example to add to balance
Worked by coincidence; the txn would always retry and there
was a 1/10 chance it would pass by selecting the same to/from keys.
2017-08-22 19:39:22 -07:00
1d195521c7 concurrency: retry snapshot serializable stm if writes since first header rev
Was checking the rset key mod rev, which does not work.
2017-08-22 19:39:22 -07:00
b9ef49142c integration: test serializable snapshot STM with old readset revisions
Was hanging.
2017-08-22 19:39:22 -07:00
d2ca782277 v3rpc: limit recv size using MaxRecvMsgSize and send using MaxSendMsgSize
grpc 1.3 uses MaxMsgSize() to limit the received message size. However, grpc 1.4 introduces a 4MB default limit on the send message size. In etcd, the server shouldn't limit the size of messages it sends. Hence, set the maximum send message size using MaxSendMsgSize().
2017-08-22 14:31:01 -07:00
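The grpc-go server options in question, sketched; the receive limit below is an illustrative value, not etcd's configured one:

```go
package main

import (
	"math"

	"google.golang.org/grpc"
)

func main() {
	// Cap what the server accepts, but leave what it sends effectively
	// unlimited, sidestepping grpc 1.4's 4MB default send limit.
	srv := grpc.NewServer(
		grpc.MaxRecvMsgSize(4*1024*1024), // illustrative limit
		grpc.MaxSendMsgSize(math.MaxInt32),
	)
	_ = srv
}
```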
af4957ead8 vendor: upgrade 'coreos/bbolt' to v1.3.1-coreos.1
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-22 11:03:24 -07:00
5c975fdb10 Merge pull request #8420 from heyitsanthony/corrupt-alarm
corruption alarm
2017-08-22 11:00:43 -07:00
603f84bb6d vendor: cockroachdb/cmux -> soheilhy/cmux
Official release is ahead of the fork.
2017-08-22 09:59:59 -07:00
35c5dcefc2 *: cockroachdb/cmux -> soheilhy/cmux
Has fixes not in fork. Includes SetReadTimeout.
2017-08-22 09:59:59 -07:00
6e02779c4f integration: add corruption test 2017-08-22 09:59:59 -07:00
5c611a493b integration: grpc on etcd peer ports 2017-08-22 09:59:59 -07:00
86aeaad924 etcdmain: support experimental-corrupt-check-time flag 2017-08-22 09:59:59 -07:00
1f734e0299 embed: support experimental-corrupt-check-time flag 2017-08-22 09:59:59 -07:00
31381da53a etcdserver: raise alarm on cluster corruption
Fixes #7125
2017-08-22 09:59:59 -07:00
35dffc7bc1 rpctypes,v3rpc: add Corrupt error code 2017-08-22 09:59:59 -07:00
153ba92830 embed: serve basic v3 grpc over peer port 2017-08-22 09:59:59 -07:00
b8bcc891a6 *: regenerate gRPC assets 2017-08-22 09:59:59 -07:00
6be5f9a841 etcdserverpb: add corrupt alarm 2017-08-22 09:59:59 -07:00
65c054003f Merge pull request #8429 from heyitsanthony/leasing-no-acquire-ttl
leasing: don't acquire lease on ttl'd keys
2017-08-21 14:21:26 -07:00
0bf404676d Merge pull request #8428 from heyitsanthony/mvcc-revisions
mvcc: Revisions() method for index to avoid key allocation
2017-08-21 13:30:27 -07:00
02c6f0d559 Merge pull request #8430 from gyuho/news
NEWS: add v3.2.6
2017-08-21 13:05:23 -07:00
94e80e5f57 NEWS: add v3.2.6
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-21 13:00:03 -07:00
5c03ade973 leasing: don't acquire lease on ttl'd keys
TTL'd keys may expire on the cluster without the lease holder's consent.
2017-08-21 12:12:53 -07:00
cf0a07be52 integration: test leasing client does not acquire lease on TTL'd keys 2017-08-21 12:11:19 -07:00
f58c0cfb66 mvcc: Revisions() method for index to avoid key allocation
Save another alloc on the one key path.
2017-08-21 11:30:02 -07:00
7e6a0a8f92 Merge pull request #8427 from gyuho/mvcc-patch-cherry-pick
mvcc: sending events after restore
2017-08-21 10:38:45 -07:00
13041c15ba mvcc: sending events after restore
Fixes: #8411
2017-08-21 10:32:49 -07:00
953c199b74 Merge pull request #8425 from heyitsanthony/bench-get
mvcc: benchmark Range() on a single key
2017-08-21 09:52:40 -07:00
ee5bdf458b Merge pull request #8426 from heyitsanthony/weaken-certs
test: weaken certs
2017-08-21 09:40:23 -07:00
d3f5109215 test: weaken certs
The penalty for TLS is non-trivial with race detection enabled.
Weakening the test certs from 4096-bit RSA to 2048-bit gives ~4x faster
runtimes for TestDoubleTLSClusterSizeOf3.
2017-08-21 03:23:47 -07:00
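What "weakening" the certs amounts to, as a minimal sketch of generating the smaller key with the standard library:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"
)

func main() {
	// 2048-bit keys are far cheaper to generate and to use in TLS
	// handshakes than 4096-bit ones; for throwaway test certs the
	// reduced strength is an acceptable trade-off.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	fmt.Println(key.N.BitLen()) // 2048
}
```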
8b872196d0 backend: cache buckets in read tx
Saves an alloc and about 10% of Range() time.
2017-08-21 02:16:55 -07:00
10b65c97dd mvcc: benchmark Range() on a single key 2017-08-21 00:14:46 -07:00
a9e56e103c Merge pull request #8424 from heyitsanthony/pflag-v1.0.0
vendor: spf13/pflags v1.0.0
2017-08-19 19:20:01 -07:00
8a956459d8 vendor: spf13/pflags v1.0.0 2017-08-19 18:38:34 -07:00
bea33f65a4 Merge pull request #8423 from heyitsanthony/document-grpc-trace
op-guide: add /debug details
2017-08-19 10:58:00 -07:00
47d5ae4971 op-guide: add /debug details
Fixes #8418
2017-08-18 17:58:38 -07:00
3e32cd3877 Merge pull request #8422 from heyitsanthony/close-leasing
leasing, integration, etcdmain: closer function for leasing kv
2017-08-18 16:03:57 -07:00
126e91c449 leasing, integration, etcdmain: closer function for leasing kv
Semaphore was seeing goroutine leaks.
2017-08-18 14:05:57 -07:00
2321835c47 Merge pull request #8415 from heyitsanthony/fix-resolv-unix
netutil: don't resolve unix socket URLs when comparing URLs
2017-08-18 13:24:34 -07:00
dc4ab898eb Merge pull request #8421 from heyitsanthony/doc-get-all
etcdctl: document getting all keys with etcdctl3
2017-08-18 12:23:58 -07:00
6fd37dd9a3 etcdctl: document getting all keys with etcdctl3
People keep asking.
2017-08-18 09:49:55 -07:00
1f228e753d Merge pull request #8419 from gyuho/ctx
auth: replace NewContext with NewOutgoingContext
2017-08-17 20:32:38 -07:00
7734b97b57 e2e: test etcd boots with unix peers 2017-08-17 19:59:09 -07:00
6464574952 netutil: don't resolve unix socket URLs when comparing URLs
Was causing VerifyBootstrap() to hang on unix peers.
2017-08-17 19:58:24 -07:00
35b11bf438 auth: replace NewContext with NewOutgoingContext
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-17 19:46:19 -07:00
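The replacement in question, sketched with grpc-go's metadata package; the "token" key and value below are illustrative:

```go
package main

import (
	"context"

	"google.golang.org/grpc/metadata"
)

func main() {
	// NewContext was split into NewOutgoingContext (metadata this
	// client sends) and NewIncomingContext (metadata a server
	// received); the auth token belongs on the outgoing side.
	md := metadata.Pairs("token", "example-auth-token")
	ctx := metadata.NewOutgoingContext(context.Background(), md)
	_ = ctx
}
```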
c1b7e78c60 Merge pull request #8414 from heyitsanthony/fix-multi-peer
embed: associate peer serve() listener with corresponding peer
2017-08-17 13:29:49 -07:00
15c511ea6a e2e: test booting etcd with multiple peer listeners 2017-08-17 11:25:40 -07:00
f4183c68cc embed: associate peer serve() listener with corresponding peer
Fixes #8383
2017-08-17 10:25:00 -07:00
f33d64a930 Merge pull request #8408 from gyuho/1
Documentation: Update to include Huawei (renamed Canal)
2017-08-16 16:30:55 -07:00
abe5c9c63e Merge pull request #8409 from gyuho/2
dev-guide: note v2 keys to v3 API incompatibility
2017-08-16 15:47:47 -07:00
46b42e3cf0 Merge pull request #8394 from chris-wagner/patch-1
Fix field names in Container Linux Config for etcd 3.x service
2017-08-16 15:45:19 -07:00
61fd39e5d7 dev-guide: note v2 keys to v3 API incompatibility
Update interacting_v3.md

Make it clear to the user that keys created via the v2 API are not
readable by etcdctl with the v3 API. An etcdctl v3 get of a v2 key exits
with 0 and no data, which is quite confusing. Hopefully this makes it a
bit clearer for users who upgraded etcd 3 in the past (and forgot some of
the 2.3 to 3.0 to 3.1 to 3.2 upgrade details) but never updated the API
they used, since v2 was the default, and are now trying to figure out
what happened; this is a further reminder of that backward
incompatibility.
2017-08-16 15:43:58 -07:00
0c456df5c3 Documentation: Update to include Huawei (renamed Canal) 2017-08-16 15:42:03 -07:00
bfb1d9d6a6 Documentation/platforms: fix field names in configuration example 2017-08-16 09:59:51 +02:00
fa32a85e69 Merge pull request #8405 from joshgav/container-listen-ip
docs: use 0.0.0.0 to listen on container-local addrs
2017-08-15 14:35:18 -07:00
c9c20d93ac Documentation/op-guide: use 0.0.0.0 to listen on container-local addr
Since the container is in a separate network namespace, it can't bind to
the host's IP address. Instead, bind to all addresses via 0.0.0.0.
2017-08-15 16:25:01 -05:00
8060b9dd83 Merge pull request #8404 from gyuho/pprof
embed: add 'enable-pprof' tag for config file
2017-08-15 11:50:51 -07:00
e24de6c9ac embed: add 'enable-pprof' tag for config file
Fix https://github.com/coreos/etcd/issues/8402.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-15 11:22:10 -07:00
f1509a102c Merge pull request #8385 from gyuho/shadowed-environment-variables
pkg/flags: warns on shadowed environment variable flags
2017-08-14 16:59:50 -07:00
deb0098d33 Merge pull request #8358 from gyuho/lease-list
api: lease list
2017-08-14 14:32:03 -07:00
01f1013203 e2e: test 'lease list' command
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-14 14:18:57 -07:00
1f20d5d924 etcdctl/ctlv3: add 'lease list' command
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-14 14:18:57 -07:00
556c1a1fe0 integration,clientv3/integration: test LeaseLeases API
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-14 14:18:57 -07:00
f8141db2c7 proxy/grpcproxy: implement LeaseLeases API
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-14 14:18:56 -07:00
15ef98a4ee clientv3: implement LeaseLeases API
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-14 14:18:56 -07:00
d25ae50c02 etcdserver: implement LeaseLeases API
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-14 14:18:56 -07:00
8005f00bcf *: regenerate proto
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-14 14:18:56 -07:00
a7413bbf28 etcdserverpb: define LeaseLeases API
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-14 14:18:56 -07:00
099fbde809 lease: add 'Leases' method
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-14 14:18:53 -07:00
8df21326f9 Merge pull request #8384 from gyuho/advertise-url
embed: warns about empty hosts in advertise urls
2017-08-11 10:15:35 -07:00
135b7f78c9 Merge pull request #8392 from gyuho/bbolt
vendor: coreos/bbolt v1.3.1-coreos.0, add others in glide.yaml
2017-08-10 17:44:31 -07:00
2513e8c9ce integration: increase numPuts to write more than 1 page
For ppc64.
Reference: https://github.com/coreos/bbolt/issues/15#issuecomment-321700834.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-10 16:43:41 -07:00
6489084a51 vendor: coreos/bbolt v1.3.1-coreos.0, add others in glide.yaml 2017-08-10 15:02:58 -07:00
6c4d990c1a Merge pull request #8390 from heyitsanthony/reset-keysgauge-restore
mvcc: reset keys gauge on restore
2017-08-10 12:57:50 -07:00
fe344ef302 embed: warns about empty hosts in advertise urls
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-10 12:31:58 -07:00
ccd1bb1780 mvcc: test keys gauge is reloaded correctly on restore 2017-08-10 09:21:39 -07:00
32866572bf mvcc: reset keys gauge on restore
Fixes #8388
2017-08-10 08:37:50 -07:00
195744aea6 pkg/flags: warns on shadowed environment variable flags
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-09 15:58:48 -07:00
04413454ac Merge pull request #8370 from jiaxuanzhou/lock_cmd
etcdctl: add ttl flag for lock command
2017-08-09 10:04:32 -07:00
9c21eefd09 etcdctl: add ttl flag for lock command 2017-08-09 22:04:43 +08:00
754f454974 Merge pull request #8367 from jpbetz/defrag-file
etcdctlv3: Add option to defrag a data directory directly
2017-08-08 12:27:52 -07:00
921e0dbd72 Merge pull request #8374 from heyitsanthony/fix-leasing-reconn
leasing: retry on errors from acquire txn
2017-08-08 12:07:22 -07:00
39432ac31f etcdctlv3: Add option to defrag a data directory directly, for cases where etcd is not running. 2017-08-08 10:19:32 -07:00
2c958939bb Merge pull request #8378 from heyitsanthony/doc-tls-termination
op-guide: TLS termination with grpc-proxy
2017-08-08 10:19:00 -07:00
7ef41aa285 op-guide: TLS termination with grpc-proxy
Also made the etcdctl calls consistent across the file.
2017-08-08 09:33:51 -07:00
cf0eb3b7ce integration: increase timeout for TestLeasingReconnectOwnerRevoke
Adding retry to acquire on failure causes Get to now retry until a
connection can be reestablished to the etcd server, causing the
timeout to trigger and fail the test.
2017-08-07 15:51:27 -07:00
61ebb98e55 leasing: retry on errors from acquire txn
Gets should retry on transient failure, but the txn inserts a write, skipping
the retry logic in the client. Instead, check the error to decide whether the
txn should be retried.

Fixes #8372
2017-08-07 11:39:12 -07:00
a9b9ef5640 Merge pull request #8351 from gyuho/hash
*: add 'endpoint hashkv' command
2017-08-07 09:21:50 -07:00
c9cd3afa58 Merge pull request #8369 from gyuho/container
Documentation/op-guide: add gcr.io image as alternative
2017-08-07 09:13:22 -07:00
e4e61479f2 op-guide/v2-migration: endpoint hashkv post migration
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-06 13:50:22 -07:00
43ccc549fb e2e: test 'endpoint hashkv' command
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-05 18:17:06 -07:00
5176b63fa0 ctlv3: add 'endpoint hashkv' command
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-05 18:17:06 -07:00
9982cd0528 clientv3/integration: add 'TestMaintenanceHashKV'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-05 18:17:06 -07:00
8c32cd96fb clientv3: add 'HashKV' to 'Maintenance' interface
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-05 18:17:06 -07:00
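A hedged sketch of the new Maintenance.HashKV call, which backs the 'endpoint hashkv' command; the endpoint address is an assumption:

```go
package main

import (
	"context"
	"fmt"

	"github.com/coreos/etcd/clientv3"
)

// hashKV asks a single member for the hash of its MVCC keys up to the given
// revision; rev=0 means "up to the latest revision". Comparing the result
// across members (as 'endpoint hashkv' does) can reveal divergence.
func hashKV(ctx context.Context, cli *clientv3.Client, endpoint string, rev int64) error {
	resp, err := cli.HashKV(ctx, endpoint, rev)
	if err != nil {
		return err
	}
	fmt.Printf("%s: hash=%d rev=%d compact_rev=%d\n",
		endpoint, resp.Hash, resp.Header.Revision, resp.CompactRevision)
	return nil
}
```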
b39891eb45 Merge pull request #8341 from visheshnp/leasing-pr
clientv3: Disconnected Linearized Reads
2017-08-05 17:03:48 -07:00
6ca928c669 dev-internal/release: add gcr.io image commands
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-04 19:56:17 -07:00
7d4b470397 Documentation/op-guide: add gcr.io image as alternative
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-04 19:49:23 -07:00
4aa528c58e Merge pull request #8368 from gyuho/news
NEWS: add v3.2.5
2017-08-04 18:25:45 -07:00
da7f5725e0 NEWS: add v3.2.5
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-04 13:24:25 -07:00
d3716b86ae clientv3: s/ToOpResponse/OpResponse
Closer to idiomatic go.
2017-08-04 11:35:36 -07:00
8fe94356f4 clientv3: more Op accessors 2017-08-04 11:35:36 -07:00
b402ea8590 test: increase clientv3/integration time to accommodate leasing tests 2017-08-04 11:35:36 -07:00
9be715bb66 etcdmain: support key leasing in grpcproxy 2017-08-04 11:35:36 -07:00
468078ffcd integration: leasing tests 2017-08-04 11:35:36 -07:00
a425e98a7e leasing: KV leasing 2017-08-04 11:35:36 -07:00
366f5381e0 Merge pull request #8366 from heyitsanthony/prevkey-proxy
grpcproxy: forward PrevKv flag in Put
2017-08-04 07:31:27 -07:00
6a4194c556 grpcproxy: forward PrevKv flag in Put 2017-08-03 21:38:20 -07:00
c3ae033f25 integration: test Put with PrevKey=true
Was missing in proxy.
2017-08-03 21:37:06 -07:00
faa4a62410 Merge pull request #8355 from heyitsanthony/expect-fd
e2e: remove SIGQUIT debugging for elect and lock
2017-08-03 17:18:17 -07:00
71a706509e Merge pull request #8364 from gyuho/fixtures
integration/fixtures: fix base64 flag, add wildcard.json
2017-08-03 15:55:16 -07:00
107c18f19f Merge pull request #8356 from heyitsanthony/election-example
concurrency: add examples
2017-08-03 15:43:08 -07:00
5072530a80 e2e: remove SIGQUIT debugging for elect and lock
Causes etcdctl to hang with pending SIGQUIT signals according to
/proc/pid/status. The debugging wasn't very useful on travis
either; just totally remove it to get CI working again.
2017-08-03 15:38:06 -07:00
a3ef719598 integration/fixtures: fix base64 flag, add wildcard.json
macOS base64 uses -D and Linux uses -d, while --decode
works on both platforms. Also add the missing server-ca-csr-wildcard.json.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-03 15:31:29 -07:00
b7b31e5770 concurrency: add examples 2017-08-02 21:09:05 -07:00
033c0cbdd8 Merge pull request #8346 from javaforfun/shawnsli/reset-votes-when-become-pre-candidate
raft: reset votes when becomePreCandidate
2017-08-02 19:52:17 -07:00
e77ecb593c Merge pull request #8360 from heyitsanthony/fix-osx-fmt
test: fix PASSES=fmt for OSX
2017-08-02 18:12:21 -07:00
322e6ff022 test: fix PASSES=fmt for OSX
OSX dirname doesn't support multiple arguments; use a for loop instead.

Fixes #8359
2017-08-02 14:43:15 -07:00
42cc64a9e5 raft: add TestPreVoteWithSplitVote 2017-08-02 17:59:28 +08:00
ae748716e6 Merge pull request #8350 from gyuho/fix-typo
ctlv3/command: remove double-quote typos in fields printer
2017-08-01 17:24:41 -07:00
9040b3eb2b ctlv3/command: remove double-quote typos in fields printer
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-01 17:21:15 -07:00
d543870966 Merge pull request #8347 from heyitsanthony/use-from-grpc-md
clientv3: use FromOutgoingContext to bucket watches
2017-08-01 17:05:56 -07:00
98adbbf031 Merge pull request #8321 from zbwright/revise-readme
docs: revising to match sidebar structure.
2017-08-01 16:55:33 -07:00
45e6b658dd Merge pull request #8349 from gyuho/fix-lease-test
clientv3/integration: match context canceled on client close
2017-08-01 14:53:31 -07:00
9f1bfd9e4b Merge pull request #8335 from heyitsanthony/test-put-atmostonce
clientv3: put at most once
2017-08-01 14:52:04 -07:00
b89ef7e295 clientv3/integration: match context canceled on client close
Fix https://github.com/coreos/etcd/issues/8329.

Different behavior from https://github.com/grpc/grpc-go/pull/1369,
in grpc-go transportMonitor.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-01 13:25:13 -07:00
7de417d745 clientv3/integration: use grpc metadata to create unique watch ctxs 2017-08-01 13:14:31 -07:00
fdba9e5fb1 clientv3/integration: test Put succeeds following SetEndpoint
Still gets transport closing errors, but no unavailable endpoint errors.
2017-08-01 12:59:37 -07:00
10db0319d1 ordering: use default clients to populate etcd data
Switching endpoints on the same client was triggering balancer
reconnect errors that should be tested in clientv3/integration.
2017-08-01 12:56:04 -07:00
4669aaa9a2 clientv3: only retry mutable KV RPCs if no endpoints found
Was retrying when it shouldn't, causing multiple puts.
2017-08-01 12:55:51 -07:00
8385c6682a clientv3/integration: test client puts at most once on bad connection 2017-08-01 10:31:13 -07:00
585b1d7bdc Merge pull request #8333 from fanminshi/retrieve_keep_from_index
mvcc: fix TestHashKVWhenCompacting hash mismatch
2017-08-01 09:57:08 -07:00
1c75c383a1 clientv3: use FromOutgoingContext to bucket watches
Watches were bucketed on string(ctx) for historical reasons;
metadata.FromOutgoingContext should be enough to key watches now.

Fixes #8338
2017-08-01 09:26:07 -07:00
3740793b42 raft: reset votes when becomePreCandidate 2017-08-01 22:42:09 +08:00
df5a3d15ce mvcc: increase rev for TestHashKVWhenCompacting 2017-07-31 17:59:49 -07:00
bb86c327e2 mvcc: HashKV gets keep from kvindex.Keep 2017-07-31 17:59:49 -07:00
4c2c5b0084 mvcc: add tests for Keep 2017-07-31 17:59:42 -07:00
e0843c691b Merge pull request #8322 from gyuho/health-grpc-proxy
*: add /health endpoint to grpc-proxy
2017-07-31 15:45:42 -07:00
073fa562d8 Merge pull request #8342 from gyuho/ep-exit
ctlv3: exit non-zero on unhealthy ep command
2017-07-31 15:45:30 -07:00
cd142a0d1c Merge pull request #8324 from heyitsanthony/txn-cmp-lease
api: lease comparison target
2017-07-31 14:52:14 -07:00
6603a77561 ctlv3: exit non-zero on unhealthy ep command 2017-07-31 14:17:01 -07:00
661da1e609 e2e: test /metrics, /health endpoint in grpc-proxy
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-31 14:07:59 -07:00
b8fd5c3dba etcdmain: add '/health' endpoint to grpc-proxy
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-31 14:07:59 -07:00
cd37ef2c1b *: expose etcdhttp.Health, define proxy health handler
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-31 14:07:54 -07:00
7b8fb3cf0a mvcc: add and implement Keep api to index
Keep finds all revisions to be kept for a Compaction at the given rev.
2017-07-31 14:04:03 -07:00
341664f7b6 integration: test txn lease comparisons 2017-07-31 13:00:04 -07:00
79660db61b etcdctl: add lease comparison to txn command 2017-07-31 13:00:04 -07:00
52b031cfa2 clientv3: accept Compare_LEASE in Compare() 2017-07-31 13:00:04 -07:00
ec4ca4408f etcdserver: support lease txn comparison 2017-07-31 13:00:04 -07:00
71e56a44b7 *: regenerate protobuf assets 2017-07-31 13:00:04 -07:00
d8ca2bbffb etcdserverpb: add lease to txn comparison targets
Also shifts down fields following target_union in case there's any further
need to expand. This is OK since range_end is still pre-release.
2017-07-31 13:00:04 -07:00
2951faf770 Merge pull request #8315 from heyitsanthony/experimental-ordering
add experimental serializable ordering feature to grpcproxy
2017-07-28 14:48:53 -07:00
f216165aad Merge pull request #8332 from gyuho/peer-url
ctlv3: print 'ETCD_INITIAL_ADVERTISE_PEER_URLS' in 'member add'
2017-07-28 14:21:21 -07:00
98fc5e5769 ctlv3: print 'ETCD_INITIAL_ADVERTISE_PEER_URLS' in 'member add'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-28 13:53:13 -07:00
ca586147bd Merge pull request #8323 from fanminshi/fix_TestV3HashKV_Hash_MisMatch
integration: fix TestV3HashKV hash mismatch
2017-07-28 10:45:43 -07:00
451b062184 mvcc/backend: add TestBackendWritebackForEach to backend_test.go 2017-07-28 09:39:48 -07:00
785deebd62 mvcc/backend: enforce ordering for UnsafeForEach in read_tx.go
This PR changes UnsafeForEach to traverse boltdb before the buffer.
This ordering guarantees that UnsafeForEach traverses in the same order
before and after the buffer is committed.
2017-07-28 09:30:23 -07:00
b36463efe5 Merge pull request #8312 from gyuho/health-lists
api/etcdhttp: serve error information in '/health', marshal health in JSON
2017-07-27 15:46:39 -07:00
8c7b639f81 Documentation/v2: update /health response
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-27 15:37:04 -07:00
4267d368df api/etcdhttp: serve error information in '/health', marshal health in JSON
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-27 15:36:59 -07:00
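A minimal client-side sketch of consuming the JSON health response; the exact field shapes here ("health" as a string, an optional "errors" list) are assumptions based on this change:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// healthResponse approximates the JSON now served on /health; the field
// names and types here are assumptions for illustration.
type healthResponse struct {
	Health string   `json:"health"`
	Errors []string `json:"errors,omitempty"`
}

func main() {
	resp, err := http.Get("http://localhost:2379/health")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var h healthResponse
	if err := json.NewDecoder(resp.Body).Decode(&h); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("health=%s errors=%v\n", h.Health, h.Errors)
}
```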
ff1c8c2191 docs: revising to match sidebar structure. 2017-07-27 15:06:59 -07:00
8365233d2a Merge pull request #8296 from gyuho/grpc
vendor: upgrade grpc/grpc-go to v1.5.1
2017-07-27 13:21:20 -07:00
f6acd0316c etcdmain: add --experimental-serializable-ordering to grpc proxy
Connect to another endpoint on stale reads.
2017-07-27 12:39:30 -07:00
9fee4b77de bill-of-materials: update 'grpc' LICENSE
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-27 10:47:24 -07:00
8a589d2d73 grpcproxy/cluster_test: serve grpc server after registering service
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-27 10:47:24 -07:00
be794d586c vendor: upgrade grpc-go to v1.5.1
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-27 10:47:17 -07:00
fca56f132a ordering: use x/net/context and add doc.go
Compilation troubles when using the "context" package.
2017-07-26 20:58:41 -07:00
5088ae3e67 docs: add ordering wrapper as experimental feature 2017-07-26 20:58:41 -07:00
2a348fb8e9 Merge pull request #8263 from fanminshi/hash_by_rev
api: hash by rev
2017-07-26 11:22:33 -07:00
ee1c340126 Merge pull request #8309 from gyuho/test-timeout
integration: increase dial timeout in testTLSReload
2017-07-26 10:04:48 -07:00
8609521ce2 mvcc: add TestHashKVWhenCompacting to kvstore_test 2017-07-26 09:48:29 -07:00
766c2540ae integration: add TestV3HashKV in v3_grpc_test.go 2017-07-26 09:48:24 -07:00
9b6799a5b6 integration: increase dial timeout in testTLSReload 2017-07-26 09:37:51 -07:00
ff7a021c8f Merge pull request #8282 from gyuho/metrics-port
*: serve '/metrics' in insecure port
2017-07-26 09:27:37 -07:00
411ab276b0 e2e: test /metrics, /health endpoints
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-26 06:23:55 -07:00
74c8050adc *: use etcdhttp.Handle* for health, prometheus handlers
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-26 06:23:55 -07:00
78432e3bd2 etcdhttp: add metrics.go for metrics, health handler
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-26 06:23:55 -07:00
16943f04e2 Merge pull request #8283 from heyitsanthony/cancel-compact-rpc
v3rpc: set Canceled=true on compacted watch
2017-07-25 19:15:18 -07:00
8b1177194e Merge pull request #8306 from heyitsanthony/v3server-raftreq
etcdserver: consolidate error checking for v3_server functions
2017-07-25 19:14:01 -07:00
a6ae677d8f proxy: support HashKV in grpcproxy 2017-07-25 17:00:56 -07:00
deca9879c2 mvcc: add HashByRev to kv.go
HashByRev computes the hash of all MVCC keys up to a given revision.
2017-07-25 17:00:46 -07:00
478ba2c4f2 etcdserver: consolidate error checking for v3_server functions
Duplicated error checking code moved into raftRequest/raftRequestOnce.
2017-07-25 14:28:39 -07:00
05603c4908 Merge pull request #8291 from zbwright/upgrade-index
docs: adding an index for upgrade pages.
2017-07-25 12:40:52 -07:00
9581f7676c grpcproxy: forward Canceled field when broadcasting watch responses 2017-07-25 12:36:01 -07:00
318caeee7e clientv3: return CompactRevision wresp when set with Canceled 2017-07-25 12:36:01 -07:00
6fb08672d8 v3rpc: set canceled=true when stream is compacted
Fixes #8231
2017-07-25 12:36:01 -07:00
ebcfdd1a3d integration: check Canceled is true in compacted watch response 2017-07-25 12:36:01 -07:00
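From the client side, the fix means a compacted watch can be detected via the Canceled and CompactRevision fields on the watch response; a sketch, assuming a *clientv3.Client named cli and imports of context, fmt, and github.com/coreos/etcd/clientv3:

```go
// watchFrom watches key from rev and handles the compaction case the fix
// above makes reliable: a compacted stream now arrives with Canceled=true.
func watchFrom(cli *clientv3.Client, key string, rev int64) {
	for resp := range cli.Watch(context.Background(), key, clientv3.WithRev(rev)) {
		if resp.Canceled {
			if resp.CompactRevision != 0 {
				fmt.Printf("watch canceled: history compacted at revision %d\n", resp.CompactRevision)
			}
			return
		}
		for _, ev := range resp.Events {
			fmt.Printf("%s %q -> %q\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}
```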
ffa54929ea docs: adding an index for upgrade pages. 2017-07-25 10:53:02 -07:00
d2654f8522 Merge pull request #8092 from mangoslicer/kv-ordering-wrapper
Added initial kv order caching functionality
2017-07-24 22:28:40 -07:00
26bf8c0524 Merge pull request #8292 from zbwright/why-tweak
docs: slight rearranging of top two sections.
2017-07-24 21:08:45 -07:00
93826f2f78 Merge pull request #8288 from irfansharif/pre-vote
raft: introduce/fix TestNodeWithSmallerTermCanCompleteElection
2017-07-24 21:05:42 -07:00
fe33bd1879 Merge pull request #8294 from mitake/proxy-cachemiss
proxy: don't inc a cache miss count in a case of linearizable range
2017-07-24 20:47:19 -07:00
986e98418d Merge pull request #8300 from heyitsanthony/proxy-self-cert
etcdmain: create self-signed certs when listening on https for httpproxy
2017-07-24 18:30:34 -07:00
51d7786050 etcdmain: create self-signed certs when listening on https for httpproxy
Fixes failures from TestCtlV3PutClientAutoTLS in proxy coverage tests.
2017-07-24 15:37:05 -07:00
dfd3ef42cf Merge pull request #8297 from fanminshi/fix_txn_ctl
etcdctl: print "del" instead of "delete" in txn interactive mode
2017-07-24 14:05:14 -07:00
09f67a0d5e e2e: change expectation string in ctlTxn 2017-07-24 10:51:31 -07:00
e9a7f3551b Merge pull request #8281 from heyitsanthony/san-rdns
transport: use reverse lookup to match wildcard DNS SAN
2017-07-22 08:02:57 -07:00
e9d5f75323 e2e/docker: docker image for testing wildcard DNS 2017-07-21 17:14:50 -07:00
52dd13fa35 fixtures: generate wildcard DNS SAN cert
DNS: *.etcd.local
2017-07-21 16:43:26 -07:00
b1aa962233 transport: use reverse lookup to match wildcard DNS SAN
Fixes #8268
2017-07-21 16:43:25 -07:00
bb0e144b43 etcdctl: print "del" instead of "delete" in txn interactive mode 2017-07-21 14:31:39 -07:00
2eb9353019 Merge pull request #8277 from heyitsanthony/test-e2e-grpcproxy
e2e grpcproxy tests
2017-07-21 12:57:25 -07:00
954ec4d1a5 e2e: fix range indexing for args2env conversion
Was dropping the last argument in the slice.
2017-07-21 11:00:23 -07:00
107828d777 test: support -tags cluster_proxy for e2e tests 2017-07-21 11:00:22 -07:00
1dcae41b20 grpcproxy: return nil on receiving snapshot EOF
Gets "code = OutOfRange desc = EOF" errors otherwise.
2017-07-21 11:00:22 -07:00
c5447c2ec9 etcdmain: support crl in grpcproxy 2017-07-21 11:00:22 -07:00
efbee9d8c7 etcdmain: support --auto-tls and --insecure-skip-verify in grpcproxy 2017-07-21 11:00:22 -07:00
1365f87d40 etcdmain: cleanup grpcproxy; support different certs for proxy/etcd
Enables TLS termination in grpcproxy.
2017-07-21 11:00:22 -07:00
d5a0d4d696 etcdmain, embed: --auto-peer-tls and --auto-tls for v2 proxy
Fixes #7930
2017-07-21 11:00:22 -07:00
5d6c6ad20e etcdmain: use client tls info for v2 proxy client connections
Was defaulting to PeerTLSInfo for client connections to the etcd cluster.
Since proxy users may rely on this behavior, only use the client tls
info if given, and fall back to peer tls otherwise.
2017-07-21 11:00:22 -07:00
426ad25924 transport: include InsecureSkipVerify in TLSInfo
Some functions take a TLSInfo to generate a tls.Config and there was no
way to force the InsecureSkipVerify flag.
2017-07-21 11:00:22 -07:00
7c22d35dff etcdmain: support grpc-proxy/gateway compiled with -tags cov 2017-07-21 11:00:22 -07:00
5c6a6bdc5a e2e: refactor to support -tags cluster_proxy 2017-07-21 11:00:22 -07:00
a92ceeec25 raft: introduce/fix TestNodeWithSmallerTermCanCompleteElection
TestNodeWithSmallerTermCanCompleteElection tests the scenario where a
node that has been partitioned away (and fallen behind) rejoins the
cluster at about the same time the leader node gets partitioned away.
Previously the cluster would come to a standstill when run with PreVote
enabled.

When responding to Msg{Pre,}Vote messages we now include the term from
the message, not the local term. To see why consider the case where a
single node was previously partitioned away and its local term is now out
of date. If we include the local term (recall that for pre-votes we
don't update the local term), the (pre-)campaigning node on the other
end will proceed to ignore the message (it ignores all out of date
messages).
The term in the original message and current local term are the same in
the case of regular votes, but different for pre-votes.

NB: Had to change TestRecvMsgVote to include pb.Message.Term when
sending MsgVote messages. The new sanity checks on MsgVoteResp
(m.Term != 0) would panic with the old test as raft.Term would be equal
to 0 when responding with MsgVoteResp messages.
2017-07-21 02:26:02 -04:00
488df4db34 proxy: don't inc a cache miss count in a case of linearizable range
Linearizable range requests don't touch the grpcproxy cache, so
incrementing the miss count wouldn't be meaningful.
2017-07-20 21:51:10 -07:00
a64d15eeed Merge pull request #8286 from heyitsanthony/wal-check-locks
wal: fall back to closing wal if locked dir rename fails
2017-07-20 18:52:08 -07:00
2c4e22fd43 docs: link fix. 2017-07-20 13:35:55 -07:00
fe1ddab714 wal: fall back to closing wal if locked dir rename fails
Detecting Windows at compile time isn't enough since etcd might be
on Linux but the fs is backed by Windows.

Fixes: #8178
Fixes: #6984
2017-07-20 13:30:41 -07:00
fb717aec9b Merge pull request #8280 from jpbetz/compaction-metrics
mvcc: Add metric for count of db key revisions compacted.
2017-07-20 13:16:39 -07:00
01a49a9f7e docs: slight rearranging of top two sections. 2017-07-20 12:04:05 -07:00
c06953ae08 mvcc: Add metric for count of db key revisions compacted.
When digging into etcd/boltdb "storage space exceeded" issues, this metric may help answer questions about if/when compactions occurred and how much data was freed.
2017-07-20 10:07:56 -07:00
46ee06a85c Merge pull request #8284 from heyitsanthony/whitelist-close
testutil: whitelist os.(*file).close
2017-07-19 21:32:55 -07:00
887df72d13 clientv3/ordering: kv order caching 2017-07-19 21:40:50 -04:00
cfbf666dd4 Merge pull request #8285 from gyuho/news
NEWS: add v3.2.4
2017-07-19 14:51:36 -07:00
55d445b891 NEWS: add v3.2.4
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-19 14:39:43 -07:00
bb42d2b40e testutil: whitelist os.(*file).close
The leak detector is catching goroutines trying to close files; these
appear to be runtime-related:

1 instances of:
syscall.Syscall(...)
	/usr/local/golang/1.8.3/go/src/syscall/asm_linux_386.s:20 +0x5
syscall.Close(...)
	/usr/local/golang/1.8.3/go/src/syscall/zsyscall_linux_386.go:296 +0x3d
os.(*file).close(...)
	/usr/local/golang/1.8.3/go/src/os/file_unix.go:140 +0x62

It's unlikely a user goroutine will leak on file close; whitelist it.
2017-07-19 13:28:15 -07:00
608df0fc90 Merge pull request #8272 from gyuho/health
/health reports unhealthy when alarm is raised
2017-07-18 16:15:08 -07:00
9dc65936b1 Merge pull request #8279 from gyuho/aaa
contrib/raftexample: use bytes.Buffer.String (no 'string()')
2017-07-18 16:09:17 -07:00
f78498b42a contrib/raftexample: use bytes.Buffer.String (no 'string()')
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-18 16:06:22 -07:00
91470a8a54 e2e: test '/health' when alarm is raised 2017-07-18 15:51:30 -07:00
61a736a068 etcdserver: check alarms in health handler
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-18 15:51:28 -07:00
d8481c9fda Merge pull request #8278 from gyuho/cherry-pick
Documentation/integrations: add 'networking-vpp', raft lib users
2017-07-18 15:50:03 -07:00
45206b6edf Documentation/integrations: add link to etcd raft lib users 2017-07-18 15:47:42 -07:00
21232017fa Documentation/integrations: add 'networking-vpp' 2017-07-18 15:44:39 -07:00
82126a742e Merge pull request #8274 from lclarkmichalek/patch-2
Add lclarkmichalek/etcdhcp to integrations list
2017-07-18 09:25:09 -07:00
ebb7649e3d Documentation: Add lclarkmichalek/etcdhcp to integrations list 2017-07-18 17:01:28 +01:00
9ce7bb6a1c Merge pull request #8267 from gyuho/close-server
embed: wait up to request-timeout for pending RPCs when closing
2017-07-14 18:51:54 -07:00
fbb75d24a4 v3rpc: add HashKV to server rpc 2017-07-14 16:44:00 -07:00
3dcd2cdcb4 doc: update rpc swagger for HashKV rpc and its req/resp 2017-07-14 16:42:04 -07:00
ed052ce9a3 proto: add HashKV grpc
The HashKV RPC computes the hash of all MVCC keys up to a given revision for a given node.
2017-07-14 16:41:23 -07:00
34fd848a4f integration: test embed.Etcd.Close with watch
Ensure 'Close' returns in time when there are open
connections (watch streams).

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-14 15:32:08 -07:00
334554f655 embed: wait up to request timeout for pending RPCs when closing
Both grpc.Server.Stop and grpc.Server.GracefulStop close the listeners
first, to stop accepting the new connections. GracefulStop blocks until
all clients close their open transports (connections). Unary RPCs
only take a few seconds to finish. Stream RPCs, like watch, might never
close the connections from the client side, thus making the gRPC server
wait forever.

This patch still calls GracefulStop, but waits up to 10s before manually
closing the open transports.

Address https://github.com/coreos/etcd/issues/8224.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-14 15:31:40 -07:00
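The shutdown pattern described above is the standard gRPC one and can be sketched generically; the 10s bound mirrors the patch, and the helper name is illustrative:

```go
package main

import (
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
)

// stopWithTimeout tries GracefulStop so in-flight unary RPCs can finish,
// then falls back to a hard Stop, since stream RPCs such as watch may keep
// their transports open indefinitely.
func stopWithTimeout(srv *grpc.Server, d time.Duration) {
	done := make(chan struct{})
	go func() {
		srv.GracefulStop()
		close(done)
	}()
	select {
	case <-done:
	case <-time.After(d):
		srv.Stop() // force-close remaining streams
	}
}

func main() {
	srv := grpc.NewServer()
	ln, err := net.Listen("tcp", "localhost:0")
	if err != nil {
		log.Fatal(err)
	}
	go srv.Serve(ln)
	stopWithTimeout(srv, 10*time.Second)
}
```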
d28334831d Merge pull request #8242 from gyuho/ppp
*: support additional '/metrics' endpoints
2017-07-14 15:06:15 -07:00
c47d4450c7 etcdmain/grpc-proxy: add 'metrics-addr' option
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-14 11:14:09 -07:00
8463b377d9 etcdmain: add 'listen-metrics-urls' option
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-14 11:14:09 -07:00
9bb5ede659 embed: configure 'ListenMetricsUrls'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-14 11:14:05 -07:00
511f4d5c99 Merge pull request #8266 from heyitsanthony/fix-contributing-irc
Documentation: point contributing irc channel to #etcd
2017-07-14 11:06:39 -07:00
89e4b62a01 Documentation: point contributing irc channel to #etcd 2017-07-14 10:56:09 -07:00
5133d8e993 Merge pull request #8265 from gyuho/news
NEWS: add v3.1.10, v3.2.3
2017-07-14 10:55:03 -07:00
fe0941426d NEWS: add v3.1.10, v3.2.3
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-14 09:57:20 -07:00
858938e32d Merge pull request #8259 from heyitsanthony/etcdctl-err-space
etcdctl: remove extra space in error message
2017-07-14 09:55:06 -07:00
7ff1e8f3bb Merge pull request #8261 from heyitsanthony/fix-test-leasettl
integration: sync lapi server after puts in TestLeaseTimeToLive
2017-07-13 21:31:18 -07:00
755270fa6a integration: sync lapi server after puts in TestLeaseTimeToLive
Linearized read to ensure the keys have committed.
2017-07-13 14:55:43 -07:00
3614c5185d e2e: update tests to use single space for etcdctl errors 2017-07-13 14:27:46 -07:00
28b4dce4f1 etcdctl: remove extra space in error message
Fprintln will insert a space between arguments, so printing "Error: "
would lead to two spaces between the "Error:" and the error string.
2017-07-13 13:04:21 -07:00
3dd7de3908 Merge pull request #8252 from gyuho/test-functional
test: sync with etcd-agent start in functional_pass
2017-07-13 11:13:26 -07:00
14401021ee Merge pull request #8251 from heyitsanthony/whitelist-wg-done
testutil: whitelist WaitGroup.Done
2017-07-13 09:26:15 -07:00
02585157f6 test: sync with etcd-agent start in functional_pass
Fix https://github.com/coreos/etcd/issues/8211.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-13 09:17:42 -07:00
026e05518e testutil: whitelist WaitGroup.Done
Calling a WaitGroup.Done() in a defer will sometimes trigger the leak
detector since the WaitGroup.Wait() will unblock before the defer
block completes. If the leak detector runs before the Done() is
rescheduled, it will spuriously report the finishing Done() as a leak.
This happens enough in CI to be irritating; whitelist it and ignore.
2017-07-12 14:04:24 -07:00
17be3b551a Merge pull request #8250 from heyitsanthony/reorg-op-guide
Documentation, op-guide: reorganize etcd operation section
2017-07-12 13:28:00 -07:00
1b4f8d9904 Documentation, op-guide: reorganize etcd operation section
Reorganizes sections in README.md, slightly changes some titles, puts
sections at a consistent depth.
2017-07-12 12:13:06 -07:00
fd3516f283 Merge pull request #8249 from gyuho/patch
Documentation: refer to LeaseKeepAliveRequest for lease refresh
2017-07-12 09:50:25 -07:00
1e72ace38d Documentation: refer to LeaseKeepAliveRequest for lease refresh 2017-07-12 09:29:24 -07:00
0a2b580f73 Merge pull request #8238 from yudai/allow_more_streams
v3rpc: Let clients establish unlimited streams
2017-07-11 18:21:09 -07:00
148ed90b0d Merge pull request #8245 from heyitsanthony/doc-json-prefix-range
dev-guide: document using range_end for prefixes with json
2017-07-11 18:10:23 -07:00
ae33c5e82a Merge pull request #8244 from heyitsanthony/bridge-default-passthrough
bridge: make pass-through the default
2017-07-11 18:10:06 -07:00
52101e6e93 v3rpc: Let clients establish unlimited streams
As of grpc-go v1.2.0, the maximum number of streams per client is set
to 100 by default on the server side. This limit makes it impossible
for third-party proxies and custom clients to establish many streams.
2017-07-11 13:02:01 -07:00
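Server-side, lifting the limit amounts to a single grpc-go server option; a sketch of the idea (the exact value etcd picks is not shown here):

```go
package main

import (
	"log"
	"math"
	"net"

	"google.golang.org/grpc"
)

func main() {
	// grpc-go defaults to 100 concurrent streams per client connection;
	// raising the cap to MaxUint32 effectively removes the limit.
	srv := grpc.NewServer(grpc.MaxConcurrentStreams(math.MaxUint32))

	ln, err := net.Listen("tcp", "localhost:2379")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(srv.Serve(ln))
}
```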
da2f4bb25d dev-guide: document using range_end for prefixes with json
Lack of a range_end example has caused some confusion.
2017-07-11 11:23:07 -07:00
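The prefix trick the doc describes is computing range_end as the prefix with its last byte incremented; a sketch of that logic (clientv3 ships an equivalent helper, GetPrefixRangeEnd):

```go
package main

import "fmt"

// prefixRangeEnd returns the range_end that pairs with prefix in a range
// request: the prefix with its last non-0xff byte incremented.
func prefixRangeEnd(prefix string) string {
	end := []byte(prefix)
	for i := len(end) - 1; i >= 0; i-- {
		if end[i] < 0xff {
			end[i]++
			return string(end[:i+1])
		}
	}
	return "\x00" // all 0xff: range over the whole keyspace
}

func main() {
	// For the JSON gateway, key and range_end must then be base64 encoded.
	fmt.Println(prefixRangeEnd("foo")) // prints "fop"
}
```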
39f4502cc0 local-tester: use new bridge flags 2017-07-11 10:42:31 -07:00
07bc71b87c bridge: make pass-through the default
Setting only latency options is a pain since every fault must
be disabled on the command line. Instead, by default start
as a standard bridge without any fault injection.
2017-07-11 10:42:31 -07:00
1010b82de2 Merge pull request #8236 from heyitsanthony/v2http-split
*: move v2http handlers without /v2 prefix to etcdhttp
2017-07-10 09:08:03 -07:00
acfde8aba0 Merge pull request #8199 from sakshamsharma/clientv3-keep-alive
clientv3: add keep-alive to connection
2017-07-08 16:32:35 -07:00
e29db923bc *: move v2http handlers without /v2 prefix to etcdhttp
Lets --enable-v2=false configurations provide /metrics, /health, etc.

Fixes #8167
2017-07-07 18:35:57 -07:00
97f37e42e6 Merge pull request #8213 from heyitsanthony/nil-endrev
mvcc: don't allocate end revision while computing range
2017-07-07 15:56:08 -07:00
91dbebfeb2 Merge pull request #8233 from gyuho/version-bump
version: bump up to 3.2.0+git
2017-07-07 15:36:30 -07:00
e51c34124c version: bump up to 3.2.0+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-07 14:16:44 -07:00
69e8a9241a Merge pull request #8232 from gyuho/NEWS
NEWS: add v3.2.2
2017-07-07 13:49:22 -07:00
64840adf70 Merge pull request #8135 from radhikapc/local-cluster
dev-guide: clarify concepts in local_cluster doc
2017-07-07 09:56:31 -07:00
9391c06004 NEWS: add v3.2.2
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-07 09:32:39 -07:00
68eb96e985 dev-guide: clarify concepts in local_cluster doc 2017-07-07 09:28:08 -07:00
4e897529c2 Merge pull request #8229 from jseldess/link-update
Documentation/learning/why: Update link to CockroachDB
2017-07-07 09:02:47 -07:00
4caf9fc7fa Documentation/learning/why: Update link to CockroachDB
Since we implemented docs versioning, the default url is
https://cockroachlabs.com/docs/stable instead of
https://cockroachlabs.com/docs. We have a redirect in place
from /docs to /docs/stable, so existing links aren't broken,
but it's a better user experience to bypass the redirect.
2017-07-07 18:01:12 +02:00
67fa8b823f Merge pull request #8223 from heyitsanthony/ip-san-exit
transport: accept connection if matched IP SAN but no DNS match
2017-07-06 22:46:09 -07:00
e6f4563ea1 Merge pull request #8222 from heyitsanthony/fix-experimental-doc
dev-guide: update experimental APIs
2017-07-06 19:56:10 -07:00
eacb46bf50 Merge pull request #8221 from heyitsanthony/gateway-user-listen-address
embed: connect json gateway with user-provided listen address
2017-07-06 19:50:34 -07:00
00aede4a39 Merge pull request #8219 from heyitsanthony/test-times
test: bump grpcproxy timeout to 20m, print pass times
2017-07-06 19:46:23 -07:00
ab95eb0795 transport: accept connection if matched IP SAN but no DNS match
The IP SAN check would always do a DNS SAN check if DNS is given
and the connection's IP is verified. Instead, don't check DNS
entries if there's a matching IP.

Fixes #8206
2017-07-06 16:11:53 -07:00
e9d096ae6b mvcc: don't allocate end revision while computing range
Use 'nil' since it's only reading a single key. Also preallocates
the result slice based on limit / number of revisions fetched.

Fixes #8208
2017-07-06 15:59:27 -07:00
b8bc005e60 dev-guide: update experimental APIs
No experimental APIs at the moment.

Fixes #8212
2017-07-06 15:45:40 -07:00
63350f5ac1 embed: connect json gateway with user-provided listen address
net.Listener says its address is [::] when given 0.0.0.0, breaking
hosts that have IPv6 disabled.

Fixes #8151
Fixes #7961
2017-07-06 14:24:29 -07:00
2e7615281e Merge pull request #8210 from gyuho/bbolt
*: use 'coreos/bbolt' (replace 'boltdb/bolt')
2017-07-06 13:00:21 -07:00
a57405a958 Merge pull request #8153 from gyuho/leadership-transfer
*: expose Leadership Transfer API to clients
2017-07-06 13:00:08 -07:00
2a30a754e9 clientv3: add keep-alive to connection
This makes the gRPC client connection use a keep-alive.
2017-07-06 12:55:52 -07:00
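A minimal sketch of turning the keep-alive on from the client config; the DialKeepAliveTime/DialKeepAliveTimeout field names follow this change, and the intervals are illustrative:

```go
package main

import (
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:            []string{"localhost:2379"},
		DialKeepAliveTime:    10 * time.Second, // ping the server when idle this long
		DialKeepAliveTimeout: 3 * time.Second,  // drop the connection if no ack
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()
}
```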
a2a80cb1bf test: bump grpcproxy timeout to 20m, print pass times 2017-07-06 12:51:24 -07:00
d48e59e389 Merge pull request #8201 from arthurkiller/master
concurrency: fix typo in Serializable godoc
2017-07-06 11:03:55 -07:00
4df1970188 concurrency: fix typo in Serializable godoc 2017-07-06 12:57:55 +08:00
870302afa6 mvcc/backend: enable 'NoFreelistSync' by default (linux)
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-05 16:10:04 -07:00
89ced7c0c9 bill-of-materials.json: regenerate with 'coreos/bbolt'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-05 14:35:25 -07:00
2b9bfda1d5 vendor: regenerate
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-05 14:34:54 -07:00
318e9c766f *: replace 'boltdb' import paths with 'coreos/bbolt'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-05 14:32:13 -07:00
75665c0fd0 glide.yaml: replace 'boltdb/bolt' with 'coreos/bbolt'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-05 14:29:31 -07:00
894751ef44 Merge pull request #8164 from mitake/auth-granted-keys
allow users to know their roles and permissions
2017-07-05 12:26:59 -07:00
1408b337b6 Merge pull request #8200 from huikang/fix-typo-doc
Documentation: cleanup and fix some typos
2017-06-30 22:03:43 -07:00
663f8835cf Documentation: cleanup and fix some typos
Signed-off-by: Hui Kang <kangh@us.ibm.com>
2017-06-30 20:41:25 -04:00
b7cf080e2c Merge pull request #8198 from xiang90/n
NEWS: clarify the corruption problem
2017-06-30 15:01:35 -07:00
cc114caddd NEWS: clarify the corruption problem 2017-06-30 14:19:51 -07:00
673c6f0650 Merge pull request #8194 from gyuho/lease
lease: fix racey access to 'leaseRevokeRate'
2017-06-30 11:18:21 -07:00
8d41820741 lease: stop lessors after tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-30 10:13:36 -07:00
36c655f29b Merge pull request #8170 from gyuho/v2-docs
Documentation/v2: 'etcd v2' to the title
2017-06-29 16:24:31 -07:00
49fe77eea0 Documentation/v2: 'etcd v2' to the title
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-29 16:10:05 -07:00
7ab442f8c5 Merge pull request #8183 from yudai/fix_revision_compactor_test_race
compactor: Fix data race in revision compactor test
2017-06-27 14:33:08 -07:00
c678dcbd91 compactor: Fix data race in revision compactor test
Use atomic functions to manipulate `rev` of `fakeRevGetter`
so that the tester goroutine can update the
value without racing with the compactor's goroutine.
2017-06-27 13:51:40 -07:00
4dd7ef0a60 Merge pull request #8176 from huikang/fixing-typo-faq
Documentation/faq: fix typo in flag names
2017-06-27 13:20:56 -07:00
625c67f5c5 Documentation/faq: fix typo in flag names
Signed-off-by: Hui Kang <kangh@us.ibm.com>
2017-06-27 16:04:03 -04:00
db595887cf e2e: add test cases for getting user and role information of user itself 2017-06-26 22:20:46 -07:00
e0c33ef881 auth, etcdserver: allow users to know their roles and permissions
Current UserGet() and RoleGet() RPCs require admin permission. It
means that users cannot know which roles they belong to and what
permissions the roles have. This commit changes the semantics so that
users can now see their own roles and permissions.
2017-06-26 22:20:41 -07:00
86eced670c Merge pull request #8177 from gyuho/NEWS
NEWS: add v3.2.1
2017-06-26 13:13:56 -07:00
54a75f9431 NEWS: add v3.2.1
Highlights some important bug fixes + user facing
changes in debugging metrics.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-26 06:55:53 -07:00
703663d1f6 Merge pull request #8163 from huikang/stm-comment-update
Default stm isolation level is serializable snapshot isolation
2017-06-25 18:50:32 -07:00
204c4aa0b0 Merge pull request #7987 from smetro/disable_proposal_forwarding
raft: add DisableProposalForwarding option
2017-06-24 00:02:22 -07:00
6ea5676db4 Merge pull request #8145 from mitake/non-authorized-rpcs
integration: add a test case for non authorized RPCs
2017-06-24 11:51:47 +09:00
d4289588ac e2e: test 'move-leader' command
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-23 13:19:16 -07:00
6e9b776fce etcdctl/ctlv3: add 'move-leader' command
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-23 13:19:12 -07:00
581a83dfd9 clientv3/*: add 'MoveLeader' method to 'Maintenance'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-23 13:07:17 -07:00
c5532dd2a2 integration: test 'MoveLeader' service
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-23 12:53:22 -07:00
403ba1dfa7 etcdserver: expose 'transferLeadership' as 'MoveLeader'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-23 12:51:28 -07:00
3e263d5a4d proxy/*: add 'MoveLeader' RPC
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-23 12:51:27 -07:00
b1a0ae3a3e etcdserver/api/v3rpc: add 'MoveLeader' to 'maintenanceServer'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-23 12:51:24 -07:00
939bbd77c0 etcdserver/*: add 'ErrNotLeader'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-23 12:46:07 -07:00
265303c19a *: regenerate proto with 'MoveLeader' RPC
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-23 12:45:53 -07:00
d82f2572a4 etcdserver/etcdserverpb: define 'MoveLeader' RPC
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-23 12:43:29 -07:00
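Client-side, the new API is a single Maintenance call; a sketch, with the caveat that MoveLeader must be sent to the current leader (otherwise it fails with ErrNotLeader) and the transferee choice here is naive:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	members, err := cli.MemberList(ctx)
	if err != nil {
		log.Fatal(err)
	}
	// Naively pick the first listed member as the transferee; a real caller
	// would exclude the current leader.
	if _, err := cli.MoveLeader(ctx, members.Members[0].ID); err != nil {
		log.Fatal(err)
	}
}
```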
7fffd8b827 concurrency: comment the default stm isolation level is serializable snapshot
The default STM isolation level is serializable snapshot isolation,
which is different from snapshot isolation (SI).

Signed-off-by: Hui Kang <kangh@us.ibm.com>
2017-06-22 22:24:17 -04:00
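For reference, the isolation level can also be requested explicitly when building an STM transaction; a sketch using the concurrency package (the option is redundant here, since serializable snapshot is the documented default):

```go
package main

import (
	"log"

	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/concurrency"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Copy "in" to "out" transactionally under serializable snapshot
	// isolation, retrying automatically on conflicting writes.
	_, err = concurrency.NewSTM(cli,
		func(s concurrency.STM) error {
			s.Put("out", s.Get("in"))
			return nil
		},
		concurrency.WithIsolation(concurrency.SerializableSnapshot),
	)
	if err != nil {
		log.Fatal(err)
	}
}
```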
47a8156851 Merge pull request #8161 from heyitsanthony/fix-watchcancel-test
clientv3/integration: wait for leader before trying to count watches
2017-06-22 18:12:30 -07:00
0fe8fdcb29 Merge pull request #8123 from yudai/revision_compactor
Compactor: Add Revisional compactor
2017-06-22 16:34:28 -07:00
4d6174f770 Merge pull request #8160 from gyuho/ggg
vendor: upgrade grpc-go to 1.4.2
2017-06-22 16:17:19 -07:00
4c43fb83df Merge pull request #8159 from heyitsanthony/crl-test-fix
e2e: accept more kinds of errors in CRL test
2017-06-22 15:59:26 -07:00
9e574afb84 clientv3/integration: wait for leader before trying to count watches
Fixes #8044
2017-06-22 15:02:41 -07:00
861ebe6950 vendor: upgrade grpc-go to 1.4.2
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-22 14:26:45 -07:00
f7df3c80d5 Merge pull request #8149 from heyitsanthony/lease-sort-promote
lessor: extend leases on promote if expires will be rate limited
2017-06-22 13:31:34 -07:00
e22d00a9f1 e2e: accept more kinds of errors in CRL test
Semaphore is failing with context exceeded errors and dial timeouts, only
returning an "Error: ..." from expect on etcdctl. So, only test for
"Error:" instead of grpc internal errors.
2017-06-22 13:27:36 -07:00
cdc7d77beb Merge pull request #8158 from gyuho/fix
etcdctl/ctlv3: remove unnecessary 'return'
2017-06-22 12:47:49 -07:00
f2d8929a09 etcdctl/ctlv3: remove unnecessary 'return'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-22 12:24:00 -07:00
ac061671d5 Revert "integration: remove lease exist checking on randomized expiry"
This reverts commit 95bc33f37f. The new
lease extension algorithm should pass this test.
2017-06-22 11:25:45 -07:00
c38c00f7c3 lessor: extend leases on promote if expires will be rate limited
Instead of unconditionally randomizing, extend leases on promotion
if too many leases expire within the same time span. If the server
has few leases or spread-out expiries, there will be no extension.
2017-06-22 11:25:34 -07:00
310a09691f Merge pull request #8150 from heyitsanthony/update-db-size-defrag
mvcc: use GaugeFunc metric to load db size when requested
2017-06-22 09:38:00 -07:00
522e75cb4f mvcc: use GaugeFunc metric to load db size when requested
Relying on mvcc to set the db size metric can cause it to
miss size changes when a txn commits after the last write
completes before a quiescent period. Instead, load the
db size on demand.

Fixes #8146
2017-06-21 23:58:37 -07:00
4e6c77185b integration: test mvcc db size metric is updated following defrag 2017-06-21 22:59:48 -07:00
9cb12deca6 Merge pull request #8102 from heyitsanthony/txn-nested
api: nested txns
2017-06-21 19:56:43 -07:00
6fe2249bcd integration: add a test case for non authorized RPCs
This commit adds a new test case which ensures that non-authorized RPCs
fail with ErrUserEmpty. The case can happen in a schedule like
below:
1. create a cluster
2. create clients
3. enable authentication of the cluster
4. the clients issue RPCs

Fix https://github.com/coreos/etcd/issues/7770
2017-06-22 11:35:37 +09:00
23c816718a Merge pull request #8143 from heyitsanthony/endpoint-all
etcdctl: use cluster endpoints when passed --cluster
2017-06-21 16:11:13 -07:00
a3f8f47422 *: add Revision compactor 2017-06-21 15:41:07 -07:00
e461017ac5 raft: add DisableProposalForwarding option
This allows users to prevent followers from forwarding proposals to the
leader.
2017-06-21 14:58:28 -07:00
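A sketch of where the option lives for raft library users; everything except DisableProposalForwarding is standard single-node boilerplate:

```go
package main

import "github.com/coreos/etcd/raft"

func main() {
	cfg := &raft.Config{
		ID:              1,
		ElectionTick:    10,
		HeartbeatTick:   1,
		Storage:         raft.NewMemoryStorage(),
		MaxSizePerMsg:   1 << 20,
		MaxInflightMsgs: 256,
		// With this set, a follower drops MsgProp instead of forwarding it,
		// so proposals succeed only when submitted on the leader itself.
		DisableProposalForwarding: true,
	}
	n := raft.StartNode(cfg, []raft.Peer{{ID: 1}})
	defer n.Stop()
}
```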
b10ea20113 namespace: support nested txns 2017-06-21 14:33:16 -07:00
f465e3ea8a grpcproxy: support nested txns 2017-06-21 14:33:15 -07:00
f400010028 clientv3/integration: test clientv3 nested txns 2017-06-21 14:33:15 -07:00
f8dbcd86ec clientv3: support nested Txns with OpTxn 2017-06-21 14:33:15 -07:00
0dd4c2ac69 integration: test grpc nested txns 2017-06-21 14:33:15 -07:00
6ed51dc621 etcdserver, v3rpc: support nested txns 2017-06-21 14:33:15 -07:00
5c7efaa288 adt: Union for interval trees 2017-06-21 14:33:15 -07:00
822473bc31 etcdserverpb: add txns to requestop/responseop 2017-06-21 14:33:15 -07:00
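From clientv3, nesting works by embedding a whole transaction as a single OpTxn; a sketch against a local cluster (key names are illustrative):

```go
package main

import (
	"context"
	"log"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// The inner txn is a single op in the outer txn's Then branch, so it
	// runs only when the outer comparison succeeds.
	inner := clientv3.OpTxn(
		[]clientv3.Cmp{clientv3.Compare(clientv3.Version("sub"), "=", 0)},
		[]clientv3.Op{clientv3.OpPut("sub", "created")},
		nil,
	)
	_, err = cli.Txn(context.Background()).
		If(clientv3.Compare(clientv3.Version("top"), ">", 0)).
		Then(inner).
		Commit()
	if err != nil {
		log.Fatal(err)
	}
}
```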
8b09309c81 Merge pull request #8147 from gyuho/monitoring
Documentation: use 'etcd_disk_' metrics in monitoring
2017-06-21 14:07:18 -07:00
1a2be432c5 etcdctl: --cluster flag using cluster endpoints for endpoint commands
Queries the cluster for endpoints to use for the endpoint commands.

Fixes #8117
2017-06-21 13:55:23 -07:00
7ebcfcf871 Documentation: use 'etcd_disk_' metrics in monitoring
Rather than 'etcd_debugging_' ones that might change
in the future.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-21 12:35:08 -07:00
a40cdc7baa Merge pull request #8142 from gyuho/a
Documentation/release: sign *.aci files
2017-06-20 16:57:53 -07:00
20881bde05 Merge pull request #8128 from gyuho/functional-tester
*: run basic functional-tester cases to test script
2017-06-20 16:20:11 -07:00
6e31901108 test: run basic functional tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-20 16:04:51 -07:00
7689a2535e etcd-tester: add 'exit-on-failure'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-20 16:04:48 -07:00
ca6d7bd836 Documentation/release: sign *.aci files
Thanks to
https://github.com/coreos/etcd/issues/8085#issuecomment-308232300.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-20 14:18:19 -07:00
1df8b90d67 Merge pull request #8121 from gyuho/health-check-service
*: add basic health check service
2017-06-20 13:07:52 -07:00
8ce2c79197 integration: add 'HealthClient.Check' test
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-20 13:02:11 -07:00
117a83c1bf Merge pull request #8140 from heyitsanthony/doc-curl-auth
dev-guide: add authentication example for grpc/json
2017-06-20 10:59:39 -07:00
30029c0019 Merge pull request #8124 from heyitsanthony/crl
reject connections based on CRL file
2017-06-20 10:52:41 -07:00
c1e3172e3a etcdserver/api/v3rpc: add default grpc health service 2017-06-20 10:48:06 -07:00
71e6fe183f vendor: add 'grpc/health/*' 2017-06-20 10:48:06 -07:00
ac62c6c811 Merge pull request #8133 from gyuho/release-test
test: 'FAIL' on release binary download failure
2017-06-20 10:46:11 -07:00
8837719a8d dev-guide: add authentication example for grpc/json 2017-06-20 10:12:17 -07:00
41e26f741b e2e: test rejecting CRL'd client certs 2017-06-19 15:23:41 -07:00
798b14979c fixtures: add gencerts.sh, generate CRL 2017-06-19 15:23:41 -07:00
87d16af2e2 embed: use transport TLS listener for client listener for CRLs 2017-06-19 15:23:41 -07:00
7d7d1ae6a0 etcdmain: configure CRL file through command line 2017-06-19 15:23:41 -07:00
322976bedc transport: CRL checking 2017-06-19 15:23:41 -07:00
a65e3c69a6 Merge pull request #8122 from yudai/fast_fail_proxy
grpcproxy: Disable fast fail on lease grant call to cluster
2017-06-19 15:04:25 -07:00
66f553a96b Merge pull request #8127 from heyitsanthony/fix-restore
mvcc: restore into tree index with one key index
2017-06-19 12:58:18 -07:00
8f8f550443 test: 'FAIL' on release binary download failure
I see CI failing to download release binaries,
but the exit code doesn't trigger a CI job failure.

We need the 'FAIL' string.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-19 12:52:02 -07:00
51a568aa81 mvcc: restore into tree index with one key index
Clobbering the mvcc kvindex with new keyIndexes for each restore
chunk would cause index corruption by dropping historical information.
2017-06-19 12:04:01 -07:00
02164874d9 mvcc: test restore and deletes with small chunk sizes 2017-06-19 12:04:01 -07:00
45fbac5544 Merge pull request #8025 from heyitsanthony/txn-cmp-range
api: txn comparisons on ranges
2017-06-18 11:11:43 -07:00
df2cc4bc8c grpcproxy: Disable fast fail on lease grant call to cluster
Problem Observed
----------------

When there is no etcd process behind the proxy,
clients repeatedly resend lease grant requests without delay.
This behavior can cause abnormal CPU/RAM and network resource
consumption.

Problem Detail
--------------

`LeaseGrant()` uses a bare protobuf client to forward requests.
However, it doesn't use `grpc.FailFast(false)`, which means the method returns
an `Unavailable` error immediately when no etcd process is available.
In clientv3, `Unavailable` errors are not considered a "Halt" error,
and the library retries the request without delay.
Both clients and the proxy consume many CPU cycles processing retry requests.

Resolution
----------

Add `grpc.FailFast(false)` to `LeaseGrant()` of the `leaseProxy`.
This keeps the proxy from returning immediately when no etcd process is
available. Clients will simply time out instead.
2017-06-16 15:09:05 -07:00
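A sketch of the shape of the fix, assuming the generated etcdserverpb LeaseClient the proxy forwards through; the wrapper name is illustrative:

```go
package leaseproxy

import (
	"context"

	pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
	"google.golang.org/grpc"
)

// leaseGrant forwards a grant through the bare protobuf client. With
// grpc.FailFast(false) the call waits for a connection (or the context
// deadline) instead of returning Unavailable immediately, which clientv3
// would otherwise retry in a hot loop.
func leaseGrant(ctx context.Context, lc pb.LeaseClient, req *pb.LeaseGrantRequest) (*pb.LeaseGrantResponse, error) {
	return lc.LeaseGrant(ctx, req, grpc.FailFast(false))
}
```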
8f34d0c8b6 clientv3/integration: test compare on range 2017-06-16 12:13:27 -07:00
7ff6e62c56 namespace: prefix comparison range_end 2017-06-16 12:13:27 -07:00
aeb2dc03aa grpcproxy: invalidate cache on comparison range 2017-06-16 12:13:27 -07:00
fcf1abd23b clientv3: compare helper functions to set range/prefix 2017-06-16 12:13:27 -07:00
fafb054624 integration: test txn range comparisons 2017-06-16 12:13:27 -07:00
8d7c29c732 etcdserver, etcdserverpb: Txn.Compare range_end support 2017-06-16 12:13:27 -07:00
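With the new helpers, a single comparison can cover a whole prefix or range; a sketch that commits only if nothing exists under a prefix yet (key names are illustrative):

```go
package main

import (
	"context"
	"log"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// The comparison applies to every key under "job/"; it succeeds only if
	// each such key has version 0, i.e. no key under the prefix exists.
	cmp := clientv3.Compare(clientv3.Version("job/"), "=", 0).WithPrefix()
	resp, err := cli.Txn(context.Background()).
		If(cmp).
		Then(clientv3.OpPut("job/1", "pending")).
		Commit()
	if err != nil {
		log.Fatal(err)
	}
	log.Println("created:", resp.Succeeded)
}
```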
2389 changed files with 342458 additions and 162221 deletions

7
.gitignore vendored
View File

@ -1,15 +1,20 @@
/agent-*
/coverage
/covdir
/docs
/gopath
/gopath.proto
/go-bindata
/release
/machine*
/bin
.vagrant
*.etcd
*.log
/etcd
*.swp
/hack/insta-discovery/.env
*.test
tools/functional-tester/docker/bin
hack/tls-setup/certs
.idea
*.bak

View File

@ -1,11 +1,15 @@
dist: trusty
language: go
go_import_path: github.com/coreos/etcd
sudo: false
sudo: required
services: docker
go:
- 1.8.3
- tip
- 1.12.9
env:
- GO111MODULE=on
notifications:
on_success: never
@ -13,71 +17,50 @@ notifications:
env:
matrix:
- TARGET=amd64
- TARGET=darwin-amd64
- TARGET=windows-amd64
- TARGET=arm64
- TARGET=arm
- TARGET=386
- TARGET=ppc64le
- TARGET=linux-amd64-integration-1-cpu
- TARGET=linux-amd64-integration-4-cpu
- TARGET=linux-amd64-functional
- TARGET=linux-amd64-unit
- TARGET=linux-amd64-e2e
- TARGET=all-build
- TARGET=linux-386-unit
matrix:
fast_finish: true
allow_failures:
- go: tip
exclude:
- go: tip
env: TARGET=darwin-amd64
- go: tip
env: TARGET=windows-amd64
- go: tip
env: TARGET=arm
- go: tip
env: TARGET=arm64
- go: tip
env: TARGET=386
- go: tip
env: TARGET=ppc64le
- go: 1.12.9
env: TARGET=linux-386-unit
addons:
apt:
sources:
- debian-sid
packages:
- libpcap-dev
- libaspell-dev
- libhunspell-dev
- shellcheck
before_install:
- go get -v -u github.com/chzchzchz/goword
- go get -v -u github.com/coreos/license-bill-of-materials
- go get -v -u honnef.co/go/tools/cmd/gosimple
- go get -v -u honnef.co/go/tools/cmd/unused
- go get -v -u honnef.co/go/tools/cmd/staticcheck
- ./scripts/install-marker.sh amd64
# disable godep restore override
install:
- pushd cmd/etcd && go get -t -v ./... && popd
- go get -t -v -d ./...
script:
- echo "TRAVIS_GO_VERSION=${TRAVIS_GO_VERSION}"
- >
case "${TARGET}" in
amd64)
GOARCH=amd64 ./test
linux-amd64-integration-1-cpu)
GOARCH=amd64 CPU=1 PASSES='integration' ./test
;;
darwin-amd64)
GO_BUILD_FLAGS="-a -v" GOPATH="" GOOS=darwin GOARCH=amd64 ./build
linux-amd64-integration-4-cpu)
GOARCH=amd64 CPU=4 PASSES='integration' ./test
;;
windows-amd64)
GO_BUILD_FLAGS="-a -v" GOPATH="" GOOS=windows GOARCH=amd64 ./build
linux-amd64-functional)
./build && GOARCH=amd64 PASSES='functional' ./test
;;
386)
GOARCH=386 PASSES="build unit" ./test
linux-amd64-unit)
./build && GOARCH=amd64 PASSES='unit' ./test
;;
*)
# test building out of gopath
GO_BUILD_FLAGS="-a -v" GOPATH="" GOARCH="${TARGET}" ./build
linux-amd64-e2e)
GOARCH=amd64 PASSES='build release e2e' MANUAL_VER=v3.3.13 ./test
;;
all-build)
GOARCH=386 PASSES='build' ./test \
&& GO_BUILD_FLAGS='-v' GOOS=darwin GOARCH=amd64 ./build \
&& GO_BUILD_FLAGS='-v' GOARCH=arm ./build \
&& GO_BUILD_FLAGS='-v' GOARCH=arm64 ./build \
&& GO_BUILD_FLAGS='-v' GOARCH=ppc64le ./build
;;
linux-386-unit)
GOARCH=386 ./build && GOARCH=386 PASSES='unit' ./test
;;
esac

44
.words Normal file
View File

@ -0,0 +1,44 @@
DefaultMaxRequestBytes
ErrCodeEnhanceYourCalm
ErrTimeout
GoAway
KeepAlive
Keepalive
MiB
ResourceExhausted
RPC
RPCs
TODO
backoff
blackhole
blackholed
cancelable
cancelation
cluster_proxy
defragment
defragmenting
etcd
gRPC
goroutine
goroutines
healthcheck
iff
inflight
keepalive
keepalives
keyspace
linearization
localhost
mutex
prefetching
protobuf
prometheus
rafthttp
repin
serializable
teardown
too_many_pings
uncontended
unprefixed
unlisting

View File

@ -5,7 +5,7 @@ etcd is Apache 2.0 licensed and accepts contributions via GitHub pull requests.
# Email and chat
- Email: [etcd-dev](https://groups.google.com/forum/?hl=en#!forum/etcd-dev)
- IRC: #[coreos](irc://irc.freenode.org:6667/#coreos) IRC channel on freenode.org
- IRC: #[etcd](irc://irc.freenode.org:6667/#etcd) IRC channel on freenode.org
## Getting started

View File

@ -0,0 +1,53 @@
FROM ubuntu:17.10
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
RUN apt-get -y update \
&& apt-get -y install \
build-essential \
gcc \
apt-utils \
pkg-config \
software-properties-common \
apt-transport-https \
libssl-dev \
sudo \
bash \
curl \
wget \
tar \
git \
&& apt-get -y update \
&& apt-get -y upgrade \
&& apt-get -y autoremove \
&& apt-get -y autoclean
ENV GOROOT /usr/local/go
ENV GOPATH /go
ENV PATH ${GOPATH}/bin:${GOROOT}/bin:${PATH}
ENV GO_VERSION REPLACE_ME_GO_VERSION
ENV GO_DOWNLOAD_URL https://storage.googleapis.com/golang
RUN rm -rf ${GOROOT} \
&& curl -s ${GO_DOWNLOAD_URL}/go${GO_VERSION}.linux-amd64.tar.gz | tar -v -C /usr/local/ -xz \
&& mkdir -p ${GOPATH}/src ${GOPATH}/bin \
&& go version
RUN mkdir -p ${GOPATH}/src/github.com/coreos/etcd
ADD . ${GOPATH}/src/github.com/coreos/etcd
RUN go get -v github.com/coreos/gofail \
&& pushd ${GOPATH}/src/github.com/coreos/etcd \
&& GO_BUILD_FLAGS="-v" ./build \
&& cp ./bin/etcd /etcd \
&& cp ./bin/etcdctl /etcdctl \
&& GO_BUILD_FLAGS="-v" FAILPOINTS=1 ./build \
&& cp ./bin/etcd /etcd-failpoints \
&& ./tools/functional-tester/build \
&& cp ./bin/etcd-agent /etcd-agent \
&& cp ./bin/etcd-tester /etcd-tester \
&& cp ./bin/etcd-runner /etcd-runner \
&& go build -v -o /benchmark ./cmd/tools/benchmark \
&& go build -v -o /etcd-test-proxy ./cmd/tools/etcd-test-proxy \
&& popd \
&& rm -rf ${GOPATH}/src/github.com/coreos/etcd

58
Dockerfile-test Normal file
View File

@ -0,0 +1,58 @@
FROM ubuntu:16.10
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
RUN apt-get -y update \
&& apt-get -y install \
build-essential \
gcc \
apt-utils \
pkg-config \
software-properties-common \
apt-transport-https \
libssl-dev \
sudo \
bash \
curl \
wget \
tar \
git \
netcat \
libaspell-dev \
libhunspell-dev \
hunspell-en-us \
aspell-en \
shellcheck \
&& apt-get -y update \
&& apt-get -y upgrade \
&& apt-get -y autoremove \
&& apt-get -y autoclean
ENV GOROOT /usr/local/go
ENV GOPATH /go
ENV PATH ${GOPATH}/bin:${GOROOT}/bin:${PATH}
ENV GO_VERSION REPLACE_ME_GO_VERSION
ENV GO_DOWNLOAD_URL https://storage.googleapis.com/golang
RUN rm -rf ${GOROOT} \
&& curl -s ${GO_DOWNLOAD_URL}/go${GO_VERSION}.linux-amd64.tar.gz | tar -v -C /usr/local/ -xz \
&& mkdir -p ${GOPATH}/src ${GOPATH}/bin \
&& go version
RUN mkdir -p ${GOPATH}/src/github.com/coreos/etcd
WORKDIR ${GOPATH}/src/github.com/coreos/etcd
ADD ./scripts/install-marker.sh /tmp/install-marker.sh
RUN go get -v -u -tags spell github.com/chzchzchz/goword \
&& go get -v -u github.com/coreos/license-bill-of-materials \
&& go get -v -u honnef.co/go/tools/cmd/gosimple \
&& go get -v -u honnef.co/go/tools/cmd/unused \
&& go get -v -u honnef.co/go/tools/cmd/staticcheck \
&& go get -v -u github.com/gyuho/gocovmerge \
&& go get -v -u github.com/gordonklaus/ineffassign \
&& go get -v -u github.com/alexkohler/nakedret \
&& /tmp/install-marker.sh amd64 \
&& rm -f /tmp/install-marker.sh \
&& curl -s https://codecov.io/bash >/codecov \
&& chmod 700 /codecov

3
Documentation/_index.md Normal file
View File

@ -0,0 +1,3 @@
---
title: etcd version 3.3.12
---

View File

@ -1,18 +0,0 @@
# Benchmarks
etcd benchmarks will be published regularly and tracked for each release below:
- [etcd v2.1.0-alpha][2.1]
- [etcd v2.2.0-rc][2.2]
- [etcd v3 demo][3.0]
# Memory Usage Benchmarks
It records expected memory usage in different scenarios.
- [etcd v2.2.0-rc][2.2-mem]
[2.1]: etcd-2-1-0-alpha-benchmarks.md
[2.2]: etcd-2-2-0-rc-benchmarks.md
[2.2-mem]: etcd-2-2-0-rc-memory-benchmarks.md
[3.0]: etcd-3-demo-benchmarks.md

View File

@ -0,0 +1,3 @@
---
title: Benchmarks
---

View File

@ -1,3 +1,7 @@
---
title: Benchmarking etcd v2.1.0
---
## Physical machines
GCE n1-highcpu-2 machine type

View File

@ -1,4 +1,6 @@
# Benchmarking etcd v2.2.0
---
title: Benchmarking etcd v2.2.0
---
## Physical Machines
@ -26,7 +28,7 @@ Go OS/Arch: linux/amd64
Bootstrap another machine, outside of the etcd cluster, and run the [`hey` HTTP benchmark tool](https://github.com/rakyll/hey) with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions](../../hack/benchmark/) for the patch and the steps to reproduce our procedures.
The performance is calulated through results of 100 benchmark rounds.
The performance is calculated through results of 100 benchmark rounds.
## Performance

View File

@ -1,4 +1,8 @@
## Physical machines
---
title: Benchmarking etcd v2.2.0-rc
---
## Physical machine
GCE n1-highcpu-2 machine type

View File

@ -1,3 +1,7 @@
---
title: Benchmarking etcd v2.2.0-rc-memory
---
## Physical machine
GCE n1-standard-2 machine type

View File

@ -1,3 +1,7 @@
---
title: Benchmarking etcd v3
---
## Physical machines
GCE n1-highcpu-2 machine type

View File

@ -1,4 +1,6 @@
# Watch Memory Usage Benchmark
---
title: Watch Memory Usage Benchmark
---
*NOTE*: The watch features are under active development, and their memory usage may change as that development progresses. We do not expect it to significantly increase beyond the figures stated below.

View File

@ -1,4 +1,6 @@
# Storage Memory Usage Benchmark
---
title: Storage Memory Usage Benchmark
---
<!---todo: link storage to storage design doc-->
Two components of etcd storage consume physical memory. The etcd process allocates an *in-memory index* to speed key lookup. The process's *page cache*, managed by the operating system, stores recently-accessed data from disk for quick re-use.

View File

@ -1,4 +1,6 @@
# Branch management
---
title: Branch management
---
## Guide
@ -7,7 +9,7 @@
* Backwards-compatible bug fixes should target the master branch and subsequently be ported to stable branches.
* Once the master branch is ready for release, it will be tagged and become the new stable branch.
The etcd team has adopted a *rolling release model* and supports one stable version of etcd.
The etcd team has adopted a *rolling release model* and supports two stable versions of etcd.
### Master branch
@ -15,12 +17,12 @@ The `master` branch is our development branch. All new features land here first.
To try new and experimental features, pull `master` and play with it. Note that `master` may not be stable because new features may introduce bugs.
Before the release of the next stable version, feature PRs will be frozen. We will focus on the testing, bug-fix and documentation for one to two weeks.
Before the release of the next stable version, feature PRs will be frozen. A [release manager](./dev-internal/release.md#release-management) will be assigned to major/minor version and will lead the etcd community in test, bug-fix and documentation of the release for one to two weeks.
### Stable branches
All branches with prefix `release-` are considered _stable_ branches.
After every minor release (http://semver.org/), we will have a new stable branch for that release. We will keep fixing the backwards-compatible bugs for the latest stable release, but not previous releases. The _patch_ release, incorporating any bug fixes, will be once every two weeks, given any patches.
After every minor release (http://semver.org/), we will have a new stable branch for that release, managed by a [patch release manager](./dev-internal/release.md#release-management). We will keep fixing the backwards-compatible bugs for the latest two stable releases. A _patch_ release to each supported release branch, incorporating any bug fixes, will be once every two weeks, given any patches.
[master]: https://github.com/coreos/etcd/tree/master

View File

@ -1,4 +1,6 @@
# Demo
---
title: Demo
---
This series of examples shows the basic procedures for working with an etcd cluster.

View File

@ -0,0 +1,3 @@
---
title: Developer guide
---

View File

@ -1,4 +1,6 @@
### etcd concurrency API Reference
---
title: etcd concurrency API Reference
---
This is generated documentation. Please read the proto files for more detail.
@ -20,7 +22,7 @@ The lock service exposes client-side locking facilities as a gRPC interface.
| Field | Description | Type |
| ----- | ----------- | ---- |
| name | name is the identifier for the distributed shared lock to be acquired. | bytes |
| lease | lease is the ID of the lease that will be attached to ownership of the lock. If the lease expires or is revoked and currently holds the lock, the lock is automatically released. Calls to Lock with the same lease will be treated as a single acquistion; locking twice with the same lease is a no-op. | int64 |
| lease | lease is the ID of the lease that will be attached to ownership of the lock. If the lease expires or is revoked and currently holds the lock, the lock is automatically released. Calls to Lock with the same lease will be treated as a single acquisition; locking twice with the same lease is a no-op. | int64 |

View File

@ -1,14 +1,29 @@
---
title: Why gRPC gateway
---
## Why grpc-gateway
etcd v3 uses [gRPC][grpc] for its messaging protocol. The etcd project includes a gRPC-based [Go client][go-client] and a command line utility, [etcdctl][etcdctl], for communicating with an etcd cluster through gRPC. For languages with no gRPC support, etcd provides a JSON [gRPC gateway][grpc-gateway]. This gateway serves a RESTful proxy that translates HTTP/JSON requests into gRPC messages.
etcd v3 uses [gRPC][grpc] for its messaging protocol. The etcd project includes a gRPC-based [Go client][go-client] and a command line utility, [etcdctl][etcdctl], for communicating with an etcd cluster through gRPC. For languages with no gRPC support, etcd provides a JSON [grpc-gateway][grpc-gateway]. This gateway serves a RESTful proxy that translates HTTP/JSON requests into gRPC messages.
## Using gRPC gateway
The gateway accepts a [JSON mapping][json-mapping] for etcd's [protocol buffer][api-ref] message definitions. Note that `key` and `value` fields are defined as byte arrays and therefore must be base64 encoded in JSON. The following examples use `curl`, but any HTTP/JSON client should work all the same.
## Using grpc-gateway
### Notes
The gateway accepts a [JSON mapping][json-mapping] for etcd's [protocol buffer][api-ref] message definitions. Note that `key` and `value` fields are defined as byte arrays and therefore must be base64 encoded in JSON.
gRPC gateway endpoint has changed since etcd v3.3:
Use `curl` to put and get a key:
- etcd v3.2 or before uses only `[CLIENT-URL]/v3alpha/*`.
- etcd v3.3 uses `[CLIENT-URL]/v3beta/*` while keeping `[CLIENT-URL]/v3alpha/*`.
- etcd v3.4 uses `[CLIENT-URL]/v3/*` while keeping `[CLIENT-URL]/v3beta/*`.
- **`[CLIENT-URL]/v3alpha/*` is deprecated**.
- etcd v3.5 or later uses only `[CLIENT-URL]/v3/*`.
- **`[CLIENT-URL]/v3beta/*` is deprecated**.
gRPC-gateway does not support authentication using TLS Common Name.
### Put and get keys
Use the `/v3/kv/range` and `/v3/kv/put` services to read and write keys:
```bash
<<COMMENT
@ -17,36 +32,97 @@ foo is 'Zm9v' in Base64
bar is 'YmFy'
COMMENT
curl -L http://localhost:2379/v3alpha/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
curl -L http://localhost:2379/v3/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
# {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"3"}}
curl -L http://localhost:2379/v3alpha/kv/range \
-X POST -d '{"key": "Zm9v"}'
curl -L http://localhost:2379/v3/kv/range \
-X POST -d '{"key": "Zm9v"}'
# {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"3"},"kvs":[{"key":"Zm9v","create_revision":"2","mod_revision":"2","version":"1","value":"YmFy"}],"count":"1"}
# get all keys prefixed with "foo"
curl -L http://localhost:2379/v3/kv/range \
-X POST -d '{"key": "Zm9v", "range_end": "Zm9w"}'
# {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"3"},"kvs":[{"key":"Zm9v","create_revision":"2","mod_revision":"2","version":"1","value":"YmFy"}],"count":"1"}
```
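The `range_end` of `"Zm9w"` above is just the prefix with its last byte incremented (`foo` → `fop`), base64-encoded. A small sketch deriving both values with the Go client helper `clientv3.GetPrefixRangeEnd`:

```go
package main

import (
	"encoding/base64"
	"fmt"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	prefix := "foo"
	// GetPrefixRangeEnd increments the last byte of the prefix ("foo" -> "fop").
	end := clientv3.GetPrefixRangeEnd(prefix)
	fmt.Println(base64.StdEncoding.EncodeToString([]byte(prefix))) // Zm9v
	fmt.Println(base64.StdEncoding.EncodeToString([]byte(end)))    // Zm9w
}
```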
Use `curl` to watch a key:
### Watch keys
Use the `/v3/watch` service to watch keys:
```bash
curl http://localhost:2379/v3alpha/watch \
-X POST -d '{"create_request": {"key":"Zm9v"} }' &
curl -N http://localhost:2379/v3/watch \
-X POST -d '{"create_request": {"key":"Zm9v"} }' &
# {"result":{"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"1","raft_term":"2"},"created":true}}
curl -L http://localhost:2379/v3alpha/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}' >/dev/null 2>&1
curl -L http://localhost:2379/v3/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}' >/dev/null 2>&1
# {"result":{"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"2"},"events":[{"kv":{"key":"Zm9v","create_revision":"2","mod_revision":"2","version":"1","value":"YmFy"}}]}}
```
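The same watch can be opened from the Go client; a minimal sketch against a local server:

```go
package main

import (
	"context"
	"fmt"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// watch "foo" and print each event as it arrives
	for wresp := range cli.Watch(context.Background(), "foo") {
		for _, ev := range wresp.Events {
			fmt.Printf("%s %q : %q\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}
```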
Use `curl` to issue a transaction:
### Transactions
Issue a transaction with `/v3/kv/txn`:
```bash
curl -L http://localhost:2379/v3alpha/kv/txn \
-X POST \
-d '{"compare":[{"target":"CREATE","key":"Zm9v","createRevision":"2"}],"success":[{"requestPut":{"key":"Zm9v","value":"YmFy"}}]}'
# target CREATE
curl -L http://localhost:2379/v3/kv/txn \
-X POST \
-d '{"compare":[{"target":"CREATE","key":"Zm9v","createRevision":"2"}],"success":[{"requestPut":{"key":"Zm9v","value":"YmFy"}}]}'
# {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"3","raft_term":"2"},"succeeded":true,"responses":[{"response_put":{"header":{"revision":"3"}}}]}
```
```bash
# target VERSION
curl -L http://localhost:2379/v3/kv/txn \
-X POST \
-d '{"compare":[{"version":"4","result":"EQUAL","target":"VERSION","key":"Zm9v"}],"success":[{"requestRange":{"key":"Zm9v"}}]}'
# {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"6","raft_term":"3"},"succeeded":true,"responses":[{"response_range":{"header":{"revision":"6"},"kvs":[{"key":"Zm9v","create_revision":"2","mod_revision":"6","version":"4","value":"YmF6"}],"count":"1"}}]}
```
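The same guarded update can be issued from the Go client; a sketch assuming `cli` is a connected `*clientv3.Client`:

```go
// Put "bar" into "foo" only if foo's create_revision is 2,
// mirroring the "target CREATE" transaction above.
txnResp, err := cli.Txn(context.TODO()).
	If(clientv3.Compare(clientv3.CreateRevision("foo"), "=", 2)).
	Then(clientv3.OpPut("foo", "bar")).
	Commit()
if err != nil {
	panic(err)
}
fmt.Println("succeeded:", txnResp.Succeeded)
```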
### Authentication
Set up authentication with the `/v3/auth` service:
```bash
# create root user
curl -L http://localhost:2379/v3/auth/user/add \
-X POST -d '{"name": "root", "password": "pass"}'
# {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"1","raft_term":"2"}}
# create root role
curl -L http://localhost:2379/v3/auth/role/add \
-X POST -d '{"name": "root"}'
# {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"1","raft_term":"2"}}
# grant root role
curl -L http://localhost:2379/v3/auth/user/grant \
-X POST -d '{"user": "root", "role": "root"}'
# {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"1","raft_term":"2"}}
# enable auth
curl -L http://localhost:2379/v3/auth/enable -X POST -d '{}'
# {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"1","raft_term":"2"}}
```
Authenticate with etcd for an authentication token using `/v3/auth/authenticate`:
```bash
# get the auth token for the root user
curl -L http://localhost:2379/v3/auth/authenticate \
-X POST -d '{"name": "root", "password": "pass"}'
# {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"1","raft_term":"2"},"token":"sssvIpwfnLAcWAQH.9"}
```
Set the `Authorization` header to the authentication token to fetch a key using authentication credentials:
```bash
curl -L http://localhost:2379/v3/kv/put \
-H 'Authorization : sssvIpwfnLAcWAQH.9' \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
# {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"2","raft_term":"2"}}
```
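When using the Go client, there is no need to manage the token by hand; a minimal sketch, assuming the `root` user set up above:

```go
package main

import (
	"context"
	"fmt"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// The client authenticates on its own when Username/Password are set,
	// fetching and refreshing the auth token transparently.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints: []string{"localhost:2379"},
		Username:  "root",
		Password:  "pass",
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	if _, err := cli.Put(context.TODO(), "foo", "bar"); err != nil {
		panic(err)
	}
	fmt.Println("OK")
}
```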
## Swagger
Generated [Swagger][swagger] API definitions can be found at [rpc.swagger.json][swagger-doc].
@ -54,9 +130,8 @@ Generated [Swagger][swagger] API definitions can be found at [rpc.swagger.json][
[api-ref]: ./api_reference_v3.md
[go-client]: https://github.com/coreos/etcd/tree/master/clientv3
[etcdctl]: https://github.com/coreos/etcd/tree/master/etcdctl
[grpc]: http://www.grpc.io/
[grpc]: https://www.grpc.io/
[grpc-gateway]: https://github.com/grpc-ecosystem/grpc-gateway
[json-mapping]: https://developers.google.com/protocol-buffers/docs/proto3#json
[swagger]: http://swagger.io/
[swagger-doc]: apispec/swagger/rpc.swagger.json

View File

@ -1,4 +1,6 @@
### etcd API Reference
---
title: etcd API Reference
---
This is generated documentation. Please read the proto files for more detail.
@ -58,6 +60,7 @@ This is a generated documentation. Please read the proto files for more.
| LeaseRevoke | LeaseRevokeRequest | LeaseRevokeResponse | LeaseRevoke revokes a lease. All keys attached to the lease will expire and be deleted. |
| LeaseKeepAlive | LeaseKeepAliveRequest | LeaseKeepAliveResponse | LeaseKeepAlive keeps the lease alive by streaming keep alive requests from the client to the server and streaming keep alive responses from the server to the client. |
| LeaseTimeToLive | LeaseTimeToLiveRequest | LeaseTimeToLiveResponse | LeaseTimeToLive retrieves lease information. |
| LeaseLeases | LeaseLeasesRequest | LeaseLeasesResponse | LeaseLeases lists all existing leases. |
@ -68,8 +71,10 @@ This is a generated documentation. Please read the proto files for more.
| Alarm | AlarmRequest | AlarmResponse | Alarm activates, deactivates, and queries alarms regarding cluster health. |
| Status | StatusRequest | StatusResponse | Status gets the status of the member. |
| Defragment | DefragmentRequest | DefragmentResponse | Defragment defragments a member's backend database to recover storage space. |
| Hash | HashRequest | HashResponse | Hash returns the hash of the local KV state for consistency checking purpose. This is designed for testing; do not use this in production when there are ongoing transactions. |
| Hash | HashRequest | HashResponse | Hash computes the hash of whole backend keyspace, including key, lease, and other buckets in storage. This is designed for testing ONLY! Do not rely on this in production with ongoing transactions, since Hash operation does not hold MVCC locks. Use "HashKV" API instead for "key" bucket consistency checks. |
| HashKV | HashKVRequest | HashKVResponse | HashKV computes the hash of all MVCC keys up to a given revision. It only iterates "key" bucket in backend storage. |
| Snapshot | SnapshotRequest | SnapshotResponse | Snapshot sends a snapshot of the entire backend from a member over a stream to a client. |
| MoveLeader | MoveLeaderRequest | MoveLeaderResponse | MoveLeader requests current leader node to transfer its leadership to transferee. |
@ -223,8 +228,8 @@ Empty field.
| Field | Description | Type |
| ----- | ----------- | ---- |
| role | | string |
| key | | string |
| range_end | | string |
| key | | bytes |
| range_end | | bytes |
@ -401,6 +406,8 @@ CompactionRequest compacts the key-value store up to a given revision. All super
| create_revision | create_revision is the creation revision of the given key | int64 |
| mod_revision | mod_revision is the last modified revision of the given key. | int64 |
| value | value is the value of the given key, in bytes. | bytes |
| lease | lease is the lease id of the given key. | int64 |
| range_end | range_end compares the given target to all keys in the range [key, range_end). See RangeRequest for more details on key ranges. | bytes |
@ -438,6 +445,24 @@ Empty field.
##### message `HashKVRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| revision | revision is the key-value store revision for the hash operation. | int64 |
##### message `HashKVResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| hash | hash is the hash value computed from the responding member's MVCC keys up to a given revision. | uint32 |
| compact_revision | compact_revision is the compacted revision of key-value store when hash begins. | int64 |
##### message `HashRequest` (etcdserver/etcdserverpb/rpc.proto)
Empty field.
@ -449,7 +474,32 @@ Empty field.
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| hash | hash is the hash value computed from the responding member's key-value store. | uint32 |
| hash | hash is the hash value computed from the responding member's KV's backend. | uint32 |
##### message `LeaseCheckpoint` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| ID | ID is the lease ID to checkpoint. | int64 |
| remaining_TTL | Remaining_TTL is the remaining time until expiry of the lease. | int64 |
##### message `LeaseCheckpointRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| checkpoints | | (slice of) LeaseCheckpoint |
##### message `LeaseCheckpointResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
@ -457,7 +507,7 @@ Empty field.
| Field | Description | Type |
| ----- | ----------- | ---- |
| TTL | TTL is the advisory time-to-live in seconds. | int64 |
| TTL | TTL is the advisory time-to-live in seconds. Expired lease will return -1. | int64 |
| ID | ID is the requested ID for the lease. If ID is set to 0, the lessor chooses an ID. | int64 |
@ -491,6 +541,21 @@ Empty field.
##### message `LeaseLeasesRequest` (etcdserver/etcdserverpb/rpc.proto)
Empty field.
##### message `LeaseLeasesResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| leases | | (slice of) LeaseStatus |
##### message `LeaseRevokeRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
@ -507,6 +572,14 @@ Empty field.
##### message `LeaseStatus` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| ID | | int64 |
##### message `LeaseTimeToLiveRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
@ -607,6 +680,22 @@ Empty field.
##### message `MoveLeaderRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| targetID | targetID is the node ID for the new leader. | uint64 |
##### message `MoveLeaderResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `PutRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
@ -644,7 +733,7 @@ Empty field.
| count_only | count_only when set returns only the count of the keys in the range. | bool |
| min_mod_revision | min_mod_revision is the lower bound for returned key mod revisions; all keys with lesser mod revisions will be filtered away. | int64 |
| max_mod_revision | max_mod_revision is the upper bound for returned key mod revisions; all keys with greater mod revisions will be filtered away. | int64 |
| min_create_revision | min_create_revision is the lower bound for returned key create revisions; all keys with lesser create trevisions will be filtered away. | int64 |
| min_create_revision | min_create_revision is the lower bound for returned key create revisions; all keys with lesser create revisions will be filtered away. | int64 |
| max_create_revision | max_create_revision is the upper bound for returned key create revisions; all keys with greater create revisions will be filtered away. | int64 |
@ -668,6 +757,7 @@ Empty field.
| request_range | | RangeRequest |
| request_put | | PutRequest |
| request_delete_range | | DeleteRangeRequest |
| request_txn | | TxnRequest |
@ -677,7 +767,7 @@ Empty field.
| ----- | ----------- | ---- |
| cluster_id | cluster_id is the ID of the cluster which sent the response. | uint64 |
| member_id | member_id is the ID of the member which sent the response. | uint64 |
| revision | revision is the key-value store revision when the request was applied. | int64 |
| revision | revision is the key-value store revision when the request was applied. For watch progress responses, the header.revision indicates progress. All future events received in this stream are guaranteed to have a higher revision number than the header.revision number. | int64 |
| raft_term | raft_term is the raft term when the request was applied. | uint64 |
@ -690,6 +780,7 @@ Empty field.
| response_range | | RangeResponse |
| response_put | | PutResponse |
| response_delete_range | | DeleteRangeResponse |
| response_txn | | TxnResponse |
@ -721,10 +812,13 @@ Empty field.
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| version | version is the cluster protocol version used by the responding member. | string |
| dbSize | dbSize is the size of the backend database, in bytes, of the responding member. | int64 |
| dbSize | dbSize is the size of the backend database physically allocated, in bytes, of the responding member. | int64 |
| leader | leader is the member ID which the responding member believes is the current leader. | uint64 |
| raftIndex | raftIndex is the current raft index of the responding member. | uint64 |
| raftIndex | raftIndex is the current raft committed index of the responding member. | uint64 |
| raftTerm | raftTerm is the current raft term of the responding member. | uint64 |
| raftAppliedIndex | raftAppliedIndex is the current raft applied index of the responding member. | uint64 |
| errors | errors contains alarm/health information and status. | (slice of) string |
| dbSizeInUse | dbSizeInUse is the size of the backend database logically in use, in bytes, of the responding member. | int64 |
@ -768,6 +862,16 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
| progress_notify | progress_notify is set so that the etcd server will periodically send a WatchResponse with no events to the new watcher if there are no recent events. It is useful when clients wish to recover a disconnected watcher starting from a recent known revision. The etcd server may decide how often it will send notifications based on current load. | bool |
| filters | filters filter the events at server side before it sends back to the watcher. | (slice of) FilterType |
| prev_kv | If prev_kv is set, created watcher gets the previous KV before the event happens. If the previous KV is already compacted, nothing will be returned. | bool |
| watch_id | If watch_id is provided and non-zero, it will be assigned to this watcher. Since creating a watcher in etcd is not a synchronous operation, this can be used to ensure that ordering is correct when creating multiple watchers on the same stream. Creating a watcher with an ID already in use on the stream will cause an error to be returned. | int64 |
| fragment | fragment enables splitting large revisions into multiple watch responses. | bool |
##### message `WatchProgressRequest` (etcdserver/etcdserverpb/rpc.proto)
Requests that a watch stream progress status be sent in the watch response stream as soon as possible.
Empty field.
@ -778,6 +882,7 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
| request_union | request_union is a request to either create a new watcher or cancel an existing watcher. | oneof |
| create_request | | WatchCreateRequest |
| cancel_request | | WatchCancelRequest |
| progress_request | | WatchProgressRequest |
@ -791,6 +896,7 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
| canceled | canceled is set to true if the response is for a cancel watch request. No further events will be sent to the canceled watcher. | bool |
| compact_revision | compact_revision is set to the minimum index if a watcher tries to watch at a compacted index. This happens when creating a watcher at a compacted revision or the watcher cannot catch up with the progress of the key-value store. The client should treat the watcher as canceled and should not try to create any watcher with the same start_revision again. | int64 |
| cancel_reason | cancel_reason indicates the reason for canceling the watcher. | string |
| fragment | fragment is true if a large watch response was split over multiple responses. | bool |
| events | | (slice of) mvccpb.Event |
@ -824,6 +930,7 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
| ----- | ----------- | ---- |
| ID | | int64 |
| TTL | | int64 |
| RemainingTTL | | int64 |

File diff suppressed because it is too large

View File

@ -15,13 +15,13 @@
"application/json"
],
"paths": {
"/v3alpha/election/campaign": {
"/v3/election/campaign": {
"post": {
"summary": "Campaign waits to acquire leadership in an election, returning a LeaderKey\nrepresenting the leadership if successful. The LeaderKey can then be used\nto issue new values on the election, transactionally guard API requests on\nleadership still being held, and resign from the election.",
"operationId": "Campaign",
"responses": {
"200": {
"description": "",
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v3electionpbCampaignResponse"
}
@ -42,13 +42,13 @@
]
}
},
"/v3alpha/election/leader": {
"/v3/election/leader": {
"post": {
"summary": "Leader returns the current election proclamation, if any.",
"operationId": "Leader",
"responses": {
"200": {
"description": "",
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v3electionpbLeaderResponse"
}
@ -69,13 +69,13 @@
]
}
},
"/v3alpha/election/observe": {
"/v3/election/observe": {
"post": {
"summary": "Observe streams election proclamations in-order as made by the election's\nelected leaders.",
"operationId": "Observe",
"responses": {
"200": {
"description": "(streaming responses)",
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/definitions/v3electionpbLeaderResponse"
}
@ -96,13 +96,13 @@
]
}
},
"/v3alpha/election/proclaim": {
"/v3/election/proclaim": {
"post": {
"summary": "Proclaim updates the leader's posted value with a new value.",
"operationId": "Proclaim",
"responses": {
"200": {
"description": "",
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v3electionpbProclaimResponse"
}
@ -123,13 +123,13 @@
]
}
},
"/v3alpha/election/resign": {
"/v3/election/resign": {
"post": {
"summary": "Resign releases election leadership so other campaigners may acquire\nleadership on the election.",
"operationId": "Resign",
"responses": {
"200": {
"description": "",
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v3electionpbResignResponse"
}
@ -168,7 +168,7 @@
"revision": {
"type": "string",
"format": "int64",
"description": "revision is the key-value store revision when the request was applied."
"description": "revision is the key-value store revision when the request was applied.\nFor watch progress responses, the header.revision indicates progress. All future events\nrecieved in this stream are guaranteed to have a higher revision number than the\nheader.revision number."
},
"raft_term": {
"type": "string",

View File

@ -15,13 +15,13 @@
"application/json"
],
"paths": {
"/v3alpha/lock/lock": {
"/v3/lock/lock": {
"post": {
"summary": "Lock acquires a distributed shared lock on a given named lock.\nOn success, it will return a unique key that exists so long as the\nlock is held by the caller. This key can be used in conjunction with\ntransactions to safely ensure updates to etcd only occur while holding\nlock ownership. The lock is held until Unlock is called on the key or the\nlease associate with the owner expires.",
"operationId": "Lock",
"responses": {
"200": {
"description": "",
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v3lockpbLockResponse"
}
@ -42,13 +42,13 @@
]
}
},
"/v3alpha/lock/unlock": {
"/v3/lock/unlock": {
"post": {
"summary": "Unlock takes a key returned by Lock and releases the hold on lock. The\nnext Lock caller waiting for the lock will then be woken up and given\nownership of the lock.",
"operationId": "Unlock",
"responses": {
"200": {
"description": "",
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v3lockpbUnlockResponse"
}
@ -87,7 +87,7 @@
"revision": {
"type": "string",
"format": "int64",
"description": "revision is the key-value store revision when the request was applied."
"description": "revision is the key-value store revision when the request was applied.\nFor watch progress responses, the header.revision indicates progress. All future events\nrecieved in this stream are guaranteed to have a higher revision number than the\nheader.revision number."
},
"raft_term": {
"type": "string",
@ -107,7 +107,7 @@
"lease": {
"type": "string",
"format": "int64",
"description": "lease is the ID of the lease that will be attached to ownership of the\nlock. If the lease expires or is revoked and currently holds the lock,\nthe lock is automatically released. Calls to Lock with the same lease will\nbe treated as a single acquistion; locking twice with the same lease is a\nno-op."
"description": "lease is the ID of the lease that will be attached to ownership of the\nlock. If the lease expires or is revoked and currently holds the lock,\nthe lock is automatically released. Calls to Lock with the same lease will\nbe treated as a single acquisition; locking twice with the same lease is a\nno-op."
}
}
},

View File

@ -1,11 +1,9 @@
# Experimental APIs and features
---
title: Experimental APIs and features
---
For the most part, the etcd project is stable, but we are still moving fast! We believe in the release-fast philosophy. We want to get early feedback on features still in development and stabilizing. Thus, there are, and will be more, experimental features and APIs. We plan to improve these features based on early feedback from the community, or abandon them if there is little interest, in the next few releases. Please do not rely on any experimental features or APIs in a production environment.
## The current experimental API/features are:
- [gateway][gateway]: beta, to be stable in 3.2 release
- [gRPC proxy][grpc-proxy]: alpha, to be stable in 3.2 release
[gateway]: ../op-guide/gateway.md
[grpc-proxy]: ../op-guide/grpc_proxy.md
- [KV ordering](https://godoc.org/github.com/etcd-io/etcd/clientv3/ordering) wrapper. When an etcd client switches endpoints, responses to serializable reads may go backward in time if the new endpoint is lagging behind the rest of the cluster. The ordering wrapper caches the current cluster revision from response headers. If a response revision is less than the cached revision, the client selects another endpoint and reissues the read. Enable in grpcproxy with `--experimental-serializable-ordering`.
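A minimal sketch of wiring up the ordering wrapper with the Go client; exact signatures may differ between releases, so treat this as illustrative:

```go
import (
	"context"

	"go.etcd.io/etcd/clientv3"
	"go.etcd.io/etcd/clientv3/ordering"
)

func orderedGet(cli *clientv3.Client) error {
	// Wrap the client's KV so that a response whose revision is older than
	// the cached cluster revision switches endpoints and retries the read.
	kv := ordering.NewKV(cli.KV, ordering.NewOrderViolationSwitchEndpointClosure(*cli))
	_, err := kv.Get(context.TODO(), "foo", clientv3.WithSerializable())
	return err
}
```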

View File

@ -1,4 +1,6 @@
# gRPC naming and discovery
---
title: gRPC naming and discovery
---
etcd provides a gRPC resolver to support an alternative name system that fetches endpoints from etcd for discovering gRPC services. The underlying mechanism is based on watching updates to keys prefixed with the service name.
@ -8,8 +10,8 @@ The etcd client provides a gRPC resolver for resolving gRPC endpoints with an et
```go
import (
"github.com/coreos/etcd/clientv3"
etcdnaming "github.com/coreos/etcd/clientv3/naming"
"go.etcd.io/etcd/clientv3"
etcdnaming "go.etcd.io/etcd/clientv3/naming"
"google.golang.org/grpc"
)
@ -19,7 +21,7 @@ import (
cli, cerr := clientv3.NewFromURL("http://localhost:2379")
r := &etcdnaming.GRPCResolver{Client: cli}
b := grpc.RoundRobin(r)
conn, gerr := grpc.Dial("my-service", grpc.WithBalancer(b))
conn, gerr := grpc.Dial("my-service", grpc.WithBalancer(b), grpc.WithBlock(), ...)
```
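Endpoints for `my-service` are then managed through the same resolver; a short sketch of adding an address, reusing `r` from above:

```go
import (
	"context"

	"google.golang.org/grpc/naming"
)

// register an endpoint under the "my-service" prefix; naming.Delete removes it
err := r.Update(context.TODO(), "my-service",
	naming.Update{Op: naming.Add, Addr: "1.2.3.4:8080"})
```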
## Managing service endpoints

View File

@ -1,8 +1,13 @@
# Interacting with etcd
---
title: Interacting with etcd
---
Users mostly interact with etcd by putting or getting the value of a key. This section describes how to do that by using etcdctl, a command line tool for interacting with etcd server. The concepts described here should apply to the gRPC APIs or client library APIs.
By default, etcdctl talks to the etcd server with the v2 API for backward compatibility. For etcdctl to speak to etcd using the v3 API, the API version must be set to version 3 via the `ETCDCTL_API` environment variable.
The API version used by etcdctl to speak to etcd may be set to version `2` or `3` via the `ETCDCTL_API` environment variable. By default, etcdctl on master (3.4) uses the v3 API and earlier versions (3.3 and earlier) default to the v2 API.
Note that any key that was created using the v2 API will not be able to be queried via the v3 API. A v3 API ```etcdctl get``` of a v2 key will exit with 0 and no key data; this is the expected behaviour.
```bash
export ETCDCTL_API=3
@ -215,7 +220,7 @@ $ etcdctl del foo foo9
Here is the command to delete key `zoo` with the deleted key value pair returned:
```bash
$ etcdctl del --prev-kv zoo
$ etcdctl del --prev-kv zoo
1 # one key is deleted
zoo # deleted key
val # the value of the deleted key
@ -224,7 +229,7 @@ val # the value of the deleted key
Here is the command to delete keys having prefix as `zoo`:
```bash
$ etcdctl del --prefix zoo
$ etcdctl del --prefix zoo
2 # two keys are deleted
```
@ -290,7 +295,7 @@ barz1
Here is the command to watch on multiple keys `foo` and `zoo`:
```bash
$ etcdctl watch -i
$ etcdctl watch -i
$ watch foo
$ watch zoo
# in another terminal: etcdctl put foo bar
@ -354,6 +359,26 @@ foo # key
bar_latest # value of foo key after modification
```
## Watch progress
Applications may want to check the progress of a watch to determine how up-to-date the watch stream is. For example, if a watch is used to update a cache, it can be useful to know if the cache is stale compared to the revision from a quorum read.
Progress requests can be issued using the "progress" command in an interactive watch session to ask the etcd server to send a progress notify update in the watch stream:
```bash
$ etcdctl watch -i
$ watch a
$ progress
progress notify: 1
# in another terminal: etcdctl put x 0
# in another terminal: etcdctl put y 1
$ progress
progress notify: 3
```
Note: The revision number in the progress notify response is the revision from the local etcd server node that the watch stream is connected to. If this node is partitioned and not part of quorum, this progress notify revision might be lower than the revision returned by a quorum read against a non-partitioned etcd server node.
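The same progress check can be made from the Go client (3.4+); a minimal sketch, assuming `cli` is a connected `*clientv3.Client`:

```go
wch := cli.Watch(context.Background(), "a")

// ask the server to send a progress notification on the watch stream
if err := cli.RequestProgress(context.Background()); err != nil {
	panic(err)
}

for resp := range wch {
	if resp.IsProgressNotify() {
		fmt.Println("progress notify:", resp.Header.Revision)
		break
	}
}
```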
## Compacted revisions
As we mentioned, etcd keeps revisions so that applications can read past versions of keys. However, to avoid accumulating an unbounded amount of history, it is important to compact past revisions. After compacting, etcd removes historical revisions, releasing resources for future use. All superseded data with revisions before the compacted revision will be unavailable.
@ -430,9 +455,9 @@ Here is the command to keep the same lease alive:
```bash
$ etcdctl lease keep-alive 32695410dcc0ca06
lease 32695410dcc0ca06 keepalived with TTL(100)
lease 32695410dcc0ca06 keepalived with TTL(100)
lease 32695410dcc0ca06 keepalived with TTL(100)
lease 32695410dcc0ca06 keepalived with TTL(10)
lease 32695410dcc0ca06 keepalived with TTL(10)
lease 32695410dcc0ca06 keepalived with TTL(10)
...
```
@ -472,4 +497,3 @@ lease 694d5765fc71500b granted with TTL(500s), remaining(132s), attached keys([z
# if the lease has expired or does not exist it will give the below response:
Error: etcdserver: requested lease not found
```
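The equivalent lease flow through the Go client; a minimal sketch against a local server:

```go
package main

import (
	"context"
	"fmt"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// grant a 100s lease and attach a key to it
	grant, err := cli.Grant(context.TODO(), 100)
	if err != nil {
		panic(err)
	}
	if _, err := cli.Put(context.TODO(), "zoo1", "val1", clientv3.WithLease(grant.ID)); err != nil {
		panic(err)
	}

	// keep the lease alive; each response carries the refreshed TTL
	ch, err := cli.KeepAlive(context.TODO(), grant.ID)
	if err != nil {
		panic(err)
	}
	ka := <-ch
	fmt.Println("lease refreshed, TTL:", ka.TTL)
}
```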

View File

@ -1,10 +1,11 @@
# System limits
---
title: System limits
---
## Request size limit
etcd is designed to handle small key value pairs typical for metadata. Larger requests will work, but may increase the latency of other requests. For the time being, etcd guarantees to support RPC requests with up to 1MB of data. In the future, the size limit may be loosened or made configurable.
etcd is designed to handle small key value pairs typical for metadata. Larger requests will work, but may increase the latency of other requests. By default, the maximum size of any request is 1.5 MiB. This limit is configurable through `--max-request-bytes` flag for etcd server.
## Storage size limit
The default storage size limit is 2GB, configurable with `--quota-backend-bytes` flag; supports up to 8GB.
The default storage size limit is 2GB, configurable with `--quota-backend-bytes` flag. 8GB is a suggested maximum size for normal environments and etcd warns at startup if the configured value exceeds it.
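For embedded servers, the same limits can be set programmatically; a hedged sketch using the `embed` package (field names as of this release series):

```go
package main

import "go.etcd.io/etcd/embed"

func main() {
	// Flag equivalents: --max-request-bytes and --quota-backend-bytes.
	cfg := embed.NewConfig()
	cfg.Dir = "default.etcd"
	cfg.MaxRequestBytes = 2 * 1024 * 1024          // raise the request cap to 2 MiB
	cfg.QuotaBackendBytes = 4 * 1024 * 1024 * 1024 // 4 GB storage quota

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		panic(err)
	}
	defer e.Close()
	<-e.Server.ReadyNotify() // wait until the server is ready to serve
}
```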

View File

@ -1,90 +1,151 @@
# Setup a local cluster
---
title: Set up a local cluster
---
For testing and development deployments, the quickest and easiest way is to set up a local cluster. For a production deployment, refer to the [clustering][clustering] section.
For testing and development deployments, the quickest and easiest way is to configure a local cluster. For a production deployment, refer to the [clustering][clustering] section.
## Local standalone cluster
Deploying an etcd cluster as a standalone cluster is straightforward. Start it with just one command:
### Starting a cluster
Run the following to deploy an etcd cluster as a standalone cluster:
```
$ ./etcd
...
```
The started etcd member listens on `localhost:2379` for client requests.
If the `etcd` binary is not present in the current working directory, it might be located either at `$GOPATH/bin/etcd` or at `/usr/local/bin/etcd`. Adjust the path in the command accordingly.
To interact with the started cluster by using etcdctl:
The running etcd member listens on `localhost:2379` for client requests.
```
# use API version 3
$ export ETCDCTL_API=3
### Interacting with the cluster
$ ./etcdctl put foo bar
OK
Use `etcdctl` to interact with the running cluster:
$ ./etcdctl get foo
bar
```
1. Store an example key-value pair in the cluster:
```
$ ./etcdctl put foo bar
OK
```
If OK is printed, the key-value pair was stored successfully.
2. Retrieve the value of `foo`:
```
$ ./etcdctl get foo
bar
```
If `bar` is returned, interaction with the etcd cluster is working as expected.
## Local multi-member cluster
A `Procfile` at the base of this git repo is provided to easily set up a local multi-member cluster. To start a multi-member cluster go to the root of an etcd source tree and run:
### Starting a cluster
```
# install goreman program to control Profile-based applications.
$ go get github.com/mattn/goreman
$ goreman -f Procfile start
...
```
A `Procfile` at the base of the etcd git repository is provided to easily configure a local multi-member cluster. To start a multi-member cluster, navigate to the root of the etcd source tree and perform the following:
The started members listen on `localhost:2379`, `localhost:22379`, and `localhost:32379` for client requests respectively.
1. Install `goreman` to control Procfile-based applications:
To interact with the started cluster by using etcdctl:
```
$ go get github.com/mattn/goreman
```
```
# use API version 3
$ export ETCDCTL_API=3
2. Start a cluster with `goreman` using etcd's stock Procfile:
$ etcdctl --write-out=table --endpoints=localhost:2379 member list
+------------------+---------+--------+------------------------+------------------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
+------------------+---------+--------+------------------------+------------------------+
| 8211f1d0f64f3269 | started | infra1 | http://127.0.0.1:2380 | http://127.0.0.1:2379 |
| 91bc3c398fb3c146 | started | infra2 | http://127.0.0.1:22380 | http://127.0.0.1:22379 |
| fd422379fda50e48 | started | infra3 | http://127.0.0.1:32380 | http://127.0.0.1:32379 |
+------------------+---------+--------+------------------------+------------------------+
```
$ goreman -f Procfile start
```
$ etcdctl put foo bar
OK
```
The members start running. They listen on `localhost:2379`, `localhost:22379`, and `localhost:32379` respectively for client requests.
To exercise etcd's fault tolerance, kill a member:
### Interacting with the cluster
```
# kill etcd2
$ goreman run stop etcd2
Use `etcdctl` to interact with the running cluster:
$ etcdctl put key hello
OK
1. Print the list of members:
$ etcdctl get key
hello
```
$ etcdctl --write-out=table --endpoints=localhost:2379 member list
```
The list of etcd members is displayed as follows:
# try to get key from the killed member
$ etcdctl --endpoints=localhost:22379 get key
2016/04/18 23:07:35 grpc: Conn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:22379: getsockopt: connection refused"; Reconnecting to "localhost:22379"
Error: grpc: timed out trying to connect
```
+------------------+---------+--------+------------------------+------------------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
+------------------+---------+--------+------------------------+------------------------+
| 8211f1d0f64f3269 | started | infra1 | http://127.0.0.1:2380 | http://127.0.0.1:2379 |
| 91bc3c398fb3c146 | started | infra2 | http://127.0.0.1:22380 | http://127.0.0.1:22379 |
| fd422379fda50e48 | started | infra3 | http://127.0.0.1:32380 | http://127.0.0.1:32379 |
+------------------+---------+--------+------------------------+------------------------+
```
# restart the killed member
$ goreman run restart etcd2
2. Store an example key-value pair in the cluster:
# get the key from restarted member
$ etcdctl --endpoints=localhost:22379 get key
hello
```
```
$ etcdctl put foo bar
OK
```
To learn more about interacting with etcd, read [interacting with etcd section][interacting].
If OK is printed, the key-value pair was stored successfully.
### Testing fault tolerance
To exercise etcd's fault tolerance, kill a member and attempt to retrieve the key.
1. Identify the process name of the member to be stopped.
The `Procfile` lists the properties of the multi-member cluster. For example, consider the member with the process name `etcd2`.
2. Stop the member:
```
# kill etcd2
$ goreman run stop etcd2
```
3. Store a key:
```
$ etcdctl put key hello
OK
```
4. Retrieve the key that is stored in the previous step:
```
$ etcdctl get key
hello
```
5. Retrieve a key from the stopped member:
```
$ etcdctl --endpoints=localhost:22379 get key
```
The command should display an error caused by connection failure:
```
2017/06/18 23:07:35 grpc: Conn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:22379: getsockopt: connection refused"; Reconnecting to "localhost:22379"
Error: grpc: timed out trying to connect
```
6. Restart the stopped member:
```
$ goreman run restart etcd2
```
7. Get the key from the restarted member:
```
$ etcdctl --endpoints=localhost:22379 get key
hello
```
Restarting the member re-establishes the connection. `etcdctl` will now be able to retrieve the key successfully. To learn more about interacting with etcd, read the [interacting with etcd][interacting] section.
[interacting]: ./interacting_v3.md
[clustering]: ../op-guide/clustering.md

View File

@ -1,4 +1,6 @@
# Discovery service protocol
---
title: Discovery service protocol
---
The discovery service protocol helps new etcd members discover all other members during the cluster bootstrap phase using a shared discovery URL.

View File

@ -1,4 +1,6 @@
# Logging conventions
---
title: Logging conventions
---
etcd uses the [capnslog][capnslog] library for logging application output categorized into *levels*. A log message's level is determined according to these conventions:

View File

@ -1,9 +1,23 @@
# etcd release guide
---
title: etcd release guide
---
This guide describes how to release a new version of etcd.
The procedure includes some manual steps for sanity checking, but it can probably be further scripted. Please keep this document up-to-date when making changes to the release process.
## Release management
etcd community members are assigned to manage each etcd major/minor version release as well as patches to each stable release branch. The managers are responsible for communicating the timelines and status of each release and for ensuring the stability of the release branch.
| Releases | Manager |
| -------- | ------- |
| 3.1 patch (post 3.1.0) | Joe Betz [@jpbetz](https://github.com/jpbetz) |
| 3.2 patch (post 3.2.0) | Joe Betz [@jpbetz](https://github.com/jpbetz) |
| 3.3 patch (post 3.3.0) | Gyuho Lee [@gyuho](https://github.com/gyuho) |
## Prepare release
Set the desired version as an environment variable for the following steps. Here is an example to release 2.3.0:
@ -18,15 +32,17 @@ All releases version numbers follow the format of [semantic versioning 2.0.0](ht
### Major, minor version release, or its pre-release
- Ensure the relevant milestone on GitHub is complete. All referenced issues should be closed, or moved elsewhere.
- Remove this release from [roadmap](https://github.com/coreos/etcd/blob/master/ROADMAP.md), if necessary.
- Remove this release from [roadmap](https://github.com/etcd-io/etcd/blob/master/ROADMAP.md), if necessary.
- Ensure the latest upgrade documentation is available.
- Bump [hardcoded MinClusterVersion in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L29), if necessary.
- Bump [hardcoded MinClusterVersion in the repository](https://github.com/etcd-io/etcd/blob/master/version/version.go#L29), if necessary.
- Add feature capability maps for the new version, if necessary.
### Patch version release
- Discuss about commits that are backported to the patch release. The commits should not include merge commits.
- Cherry-pick these commits starting from the oldest one into stable branch.
- To request a backport, developers submit cherrypick PRs targeting the release branch. The commits should not include merge commits. The commits should be restricted to bug fixes and security patches.
- The cherrypick PRs should target the appropriate release branch (`base:release-<major>-<minor>`). `hack/patch/cherrypick.sh` may be used to automatically generate cherrypick PRs.
- The release patch manager reviews the cherrypick PRs. Please discuss carefully what is backported to the patch release. Each patch release should be strictly better than its predecessor.
- The release patch manager will cherry-pick these commits starting from the oldest one into stable branch.
## Write release note
@ -36,14 +52,14 @@ All releases version numbers follow the format of [semantic versioning 2.0.0](ht
## Tag version
- Bump [hardcoded Version in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L30) to the latest version `${VERSION}`.
- Bump [hardcoded Version in the repository](https://github.com/etcd-io/etcd/blob/master/version/version.go#L30) to the latest version `${VERSION}`.
- Ensure all tests on CI system are passed.
- Manually check etcd is buildable in Linux, Darwin and Windows.
- Manually check upgrade etcd cluster of previous minor version works well.
- Manually check new features work well.
- Add a signed tag through `git tag -s ${VERSION}`.
- Sanity check tag correctness through `git show tags/$VERSION`.
- Push the tag to GitHub through `git push origin tags/$VERSION`. This assumes `origin` corresponds to "https://github.com/coreos/etcd".
- Push the tag to GitHub through `git push origin tags/$VERSION`. This assumes `origin` corresponds to "https://github.com/etcd-io/etcd".
## Build release binaries and images
@ -53,7 +69,7 @@ All releases version numbers follow the format of [semantic versioning 2.0.0](ht
Run release script in root directory:
```
./scripts/release.sh ${VERSION}
TAG=gcr.io/etcd-development/etcd ./scripts/release.sh ${VERSION}
```
It generates all release binaries and images under directory ./release.
@ -70,11 +86,11 @@ for i in etcd-*{.zip,.tar.gz}; do gpg2 --default-key $SUBKEYID --armor --output
for i in etcd-*{.zip,.tar.gz}; do gpg2 --verify ${i}.asc ${i}; done
# sign zipped source code files
wget https://github.com/coreos/etcd/archive/${VERSION}.zip
wget https://github.com/etcd-io/etcd/archive/${VERSION}.zip
gpg2 --armor --default-key $SUBKEYID --output ${VERSION}.zip.asc --detach-sign ${VERSION}.zip
gpg2 --verify ${VERSION}.zip.asc ${VERSION}.zip
wget https://github.com/coreos/etcd/archive/${VERSION}.tar.gz
wget https://github.com/etcd-io/etcd/archive/${VERSION}.tar.gz
gpg2 --armor --default-key $SUBKEYID --output ${VERSION}.tar.gz.asc --detach-sign ${VERSION}.tar.gz
gpg2 --verify ${VERSION}.tar.gz.asc ${VERSION}.tar.gz
```
@ -86,17 +102,45 @@ The public key for GPG signing can be found at [CoreOS Application Signing Key](
- Set release title as the version name.
- Follow the format of previous release pages.
- Attach the generated binaries, aci image and signatures.
- Attach the generated binaries and signatures.
- Select whether it is a pre-release.
- Publish the release!
## Publish docker image in gcr.io
- Push docker image:
```
gcloud docker -- login -u _json_key -p "$(cat /etc/gcp-key-etcd.json)" https://gcr.io
for TARGET_ARCH in "-arm64" "-ppc64le" ""; do
gcloud docker -- push gcr.io/etcd-development/etcd:${VERSION}${TARGET_ARCH}
done
```
- Add `latest` tag to the new image on [gcr.io](https://console.cloud.google.com/gcr/images/etcd-development/GLOBAL/etcd?project=etcd-development&authuser=1) if this is a stable release.
## Publish docker image in Quay.io
- Build docker images with quay.io:
```
for TARGET_ARCH in "amd64" "arm64" "ppc64le"; do
TAG=quay.io/coreos/etcd GOARCH=${TARGET_ARCH} \
BINARYDIR=release/etcd-${VERSION}-linux-${TARGET_ARCH} \
BUILDDIR=release \
./scripts/build-docker ${VERSION}
done
```
- Push docker image:
```
docker login quay.io
docker push quay.io/coreos/etcd:${VERSION}
for TARGET_ARCH in "-arm64" "-ppc64le" ""; do
docker push quay.io/coreos/etcd:${VERSION}${TARGET_ARCH}
done
```
- Add `latest` tag to the new image on [quay.io](https://quay.io/repository/coreos/etcd?tag=latest&tab=tags) if this is a stable release.
@ -114,5 +158,5 @@ git log ...${PREV_VERSION} --pretty=format:"%an" | sort | uniq | tr '\n' ',' | s
## Post release
- Create new stable branch through `git push origin ${VERSION_MAJOR}.${VERSION_MINOR}` if this is a major stable release. This assumes `origin` corresponds to "https://github.com/coreos/etcd".
- Bump [hardcoded Version in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L30) to the version `${VERSION}+git`.
- Create new stable branch through `git push origin ${VERSION_MAJOR}.${VERSION_MINOR}` if this is a major stable release. This assumes `origin` corresponds to "https://github.com/etcd-io/etcd".
- Bump [hardcoded Version in the repository](https://github.com/etcd-io/etcd/blob/master/version/version.go#L30) to the version `${VERSION}+git`.

View File

@ -1,4 +1,7 @@
# Download and build
---
title: Download and build
weight: 1
---
## System requirements
@ -10,12 +13,12 @@ The easiest way to get etcd is to use one of the pre-built release binaries whic
## Build the latest version
For those wanting to try the very latest version, build etcd from the `master` branch. [Go](https://golang.org/) version 1.8+ is required to build the latest version of etcd. To ensure etcd is built against well-tested libraries, etcd vendors its dependencies for official release binaries. However, etcd's vendoring is also optional to avoid potential import conflicts when embedding the etcd server or using the etcd client.
For those wanting to try the very latest version, build etcd from the `master` branch. [Go](https://golang.org/) version 1.9+ is required to build the latest version of etcd. To ensure etcd is built against well-tested libraries, etcd vendors its dependencies for official release binaries. However, etcd's vendoring is also optional to avoid potential import conflicts when embedding the etcd server or using the etcd client.
To build `etcd` from the `master` branch without a `GOPATH` using the official `build` script:
```sh
$ git clone https://github.com/coreos/etcd.git
$ git clone https://github.com/etcd-io/etcd.git
$ cd etcd
$ ./build
```
@ -26,16 +29,8 @@ To build a vendored `etcd` from the `master` branch via `go get`:
# GOPATH should be set
$ echo $GOPATH
/Users/example/go
$ go get github.com/coreos/etcd/cmd/etcd
```
To build `etcd` from the `master` branch without vendoring (may not build due to upstream conflicts):
```sh
# GOPATH should be set
$ echo $GOPATH
/Users/example/go
$ go get github.com/coreos/etcd
$ go get -v go.etcd.io/etcd
$ go get -v go.etcd.io/etcd/etcdctl
```
## Test the installation
@ -44,14 +39,14 @@ Check the etcd binary is built correctly by starting etcd and setting a key.
### Starting etcd
If etcd is built without using GOPATH, run the following:
If etcd is built without using `go get`, run the following:
```
```sh
$ ./bin/etcd
```
If etcd is built using GOPATH, run the following:
If etcd is built using `go get`, run the following:
```
```sh
$ $GOPATH/bin/etcd
```
@ -59,14 +54,16 @@ $ $GOPATH/bin/etcd
Run the following:
```
$ ETCDCTL_API=3 ./bin/etcdctl put foo bar
```sh
$ ./bin/etcdctl put foo bar
OK
```
(or `$GOPATH/bin/etcdctl put foo bar` if etcdctl was installed with `go get`)
If OK is printed, then etcd is working!
[github-release]: https://github.com/coreos/etcd/releases/
[github-release]: https://github.com/etcd-io/etcd/releases/
[go]: https://golang.org/doc/install
[build-script]: ../build
[cmd-directory]: ../cmd

View File

@ -1,113 +0,0 @@
# Documentation
etcd is a distributed key-value store designed to reliably and quickly preserve and provide access to critical data. It enables reliable distributed coordination through distributed locking, leader elections, and write barriers. An etcd cluster is intended for high availability and permanent data storage and retrieval.
## Getting started
New etcd users and developers should get started by [downloading and building][download_build] etcd. After getting etcd, follow this [quick demo][demo] to see the basics of creating and working with an etcd cluster.
## Developing with etcd
The easiest way to get started using etcd as a distributed key-value store is to [set up a local cluster][local_cluster].
- [Setting up local clusters][local_cluster]
- [Interacting with etcd][interacting]
- gRPC [etcd core][api_ref] and [etcd concurrency][api_concurrency_ref] API references
- [HTTP JSON API through the gRPC gateway][api_grpc_gateway]
- [gRPC naming and discovery][grpc_naming]
- [Client][namespace_client] and [proxy][namespace_proxy] namespacing
- [Embedding etcd][embed_etcd]
- [Experimental features and APIs][experimental]
- [System limits][system-limit]
## Operating etcd clusters
Administrators who need to create reliable and scalable key-value stores for the developers they support should begin with a [cluster on multiple machines][clustering].
- [Setting up etcd clusters][clustering]
- [Setting up etcd gateways][gateway]
- [Setting up etcd gRPC proxy][grpc_proxy]
- [Hardware recommendations][hardware]
- [Configuration][conf]
- [Security][security]
- [Authentication][authentication]
- [Monitoring][monitoring]
- [Maintenance][maintenance]
- [Understand failures][failures]
- [Disaster recovery][recovery]
- [Performance][performance]
- [Versioning][versioning]
### Platform guides
- [Supported systems][supported_platforms]
- [Docker container][container_docker]
- [Container Linux, systemd][container_linux_platform]
- [rkt container][container_rkt]
- [Amazon Web Services][aws_platform]
- [FreeBSD][freebsd_platform]
### Upgrading and compatibility
- [Migrate applications from using API v2 to API v3][v2_migration]
- [Upgrading a v2.3 cluster to v3.0][v3_upgrade]
- [Upgrading a v3.0 cluster to v3.1][v31_upgrade]
- [Upgrading a v3.1 cluster to v3.2][v32_upgrade]
## Learning
To learn more about the concepts and internals behind etcd, read the following pages:
- [Why etcd?][why]
- [Understand data model][data_model]
- [Understand APIs][understand_apis]
- [Glossary][glossary]
- Internals
- [Auth subsystem][auth_design]
## Frequently Asked Questions (FAQ)
Answers to [common questions] about etcd.
[api_ref]: dev-guide/api_reference_v3.md
[api_concurrency_ref]: dev-guide/api_concurrency_reference_v3.md
[api_grpc_gateway]: dev-guide/api_grpc_gateway.md
[clustering]: op-guide/clustering.md
[conf]: op-guide/configuration.md
[system-limit]: dev-guide/limit.md
[common questions]: faq.md
[why]: learning/why.md
[data_model]: learning/data_model.md
[demo]: demo.md
[download_build]: dl_build.md
[embed_etcd]: https://godoc.org/github.com/coreos/etcd/embed
[grpc_naming]: dev-guide/grpc_naming.md
[failures]: op-guide/failures.md
[gateway]: op-guide/gateway.md
[glossary]: learning/glossary.md
[namespace_client]: https://godoc.org/github.com/coreos/etcd/clientv3/namespace
[namespace_proxy]: op-guide/grpc_proxy.md#namespacing
[grpc_proxy]: op-guide/grpc_proxy.md
[hardware]: op-guide/hardware.md
[interacting]: dev-guide/interacting_v3.md
[local_cluster]: dev-guide/local_cluster.md
[performance]: op-guide/performance.md
[recovery]: op-guide/recovery.md
[maintenance]: op-guide/maintenance.md
[security]: op-guide/security.md
[monitoring]: op-guide/monitoring.md
[v2_migration]: op-guide/v2-migration.md
[container_rkt]: op-guide/container.md#rkt
[container_docker]: op-guide/container.md#docker
[understand_apis]: learning/api.md
[versioning]: op-guide/versioning.md
[supported_platforms]: op-guide/supported-platform.md
[container_linux_platform]: platforms/container-linux-systemd.md
[freebsd_platform]: platforms/freebsd.md
[aws_platform]: platforms/aws.md
[experimental]: dev-guide/experimental_apis.md
[v3_upgrade]: upgrades/upgrade_3_0.md
[v31_upgrade]: upgrades/upgrade_3_1.md
[v32_upgrade]: upgrades/upgrade_3_2.md
[authentication]: op-guide/authentication.md
[auth_design]: learning/auth_design.md

View File

@ -1,40 +1,42 @@
## Frequently Asked Questions (FAQ)
---
title: Frequently Asked Questions (FAQ)
---
### etcd, general
## etcd, general
#### Do clients have to send requests to the etcd leader?
### Do clients have to send requests to the etcd leader?
[Raft][raft] is leader-based; the leader handles all client requests which need cluster consensus. However, the client does not need to know which node is the leader. Any request that requires consensus sent to a follower is automatically forwarded to the leader. Requests that do not require consensus (e.g., serialized reads) can be processed by any cluster member.
### Configuration
## Configuration
#### What is the difference between advertise-urls and listen-urls?
### What is the difference between listen-<client,peer>-urls, advertise-client-urls or initial-advertise-peer-urls?
`listen-urls` specifies the local addresses etcd server binds to for accepting incoming connections. To listen on a port for all interfaces, specify `0.0.0.0` as the listen IP address.
`listen-client-urls` and `listen-peer-urls` specify the local addresses etcd server binds to for accepting incoming connections. To listen on a port for all interfaces, specify `0.0.0.0` as the listen IP address.
`advertise-urls` specifies the addresses etcd clients or other etcd members should use to contact the etcd server. The advertise addresses must be reachable from the remote machines. Do not advertise addresses like `localhost` or `0.0.0.0` for a production setup since these addresses are unreachable from remote machines.
`advertise-client-urls` and `initial-advertise-peer-urls` specify the addresses etcd clients or other etcd members should use to contact the etcd server. The advertise addresses must be reachable from the remote machines. Do not advertise addresses like `localhost` or `0.0.0.0` for a production setup since these addresses are unreachable from remote machines.
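As an illustration, a hedged sketch of the same split using the `embed` package (`LCUrls`/`ACUrls` are the listen and advertise client-URL fields in this release series):

```go
package main

import (
	"net/url"

	"go.etcd.io/etcd/embed"
)

func main() {
	lc, _ := url.Parse("http://0.0.0.0:2379")  // listen-client-urls: local bind address
	ac, _ := url.Parse("http://10.0.1.5:2379") // advertise-client-urls: reachable from remote machines

	cfg := embed.NewConfig()
	cfg.LCUrls = []url.URL{*lc}
	cfg.ACUrls = []url.URL{*ac}
	// peer URL fields (LPUrls/APUrls) follow the same pattern
}
```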
#### Why doesn't changing `--listen-peer-urls` or `--initial-advertise-peer-urls` update the advertised peer URLs in `etcdctl member list`?
### Why doesn't changing `--listen-peer-urls` or `--initial-advertise-peer-urls` update the advertised peer URLs in `etcdctl member list`?
A member's advertised peer URLs come from `--initial-advertise-peer-urls` on initial cluster boot. Changing the listen peer URLs or the initial advertise peers after booting the member won't affect the exported advertise peer URLs since changes must go through quorum to avoid membership configuration split brain. Use `etcdctl member update` to update a member's peer URLs.
### Deployment
## Deployment
#### System requirements
### System requirements
Since etcd writes data to disk, SSD is highly recommended. To prevent performance degradation or unintentionally overloading the key-value store, etcd enforces a 2GB default storage size quota, configurable up to 8GB. To avoid swapping or running out of memory, the machine should have at least as much RAM to cover the quota. At CoreOS, an etcd cluster is usually deployed on dedicated CoreOS Container Linux machines with dual-core processors, 2GB of RAM, and 80GB of SSD *at the very least*. **Note that performance is intrinsically workload dependent; please test before production deployment**. See [hardware][hardware-setup] for more recommendations.
Since etcd writes data to disk, SSD is highly recommended. To prevent performance degradation or unintentionally overloading the key-value store, etcd enforces a configurable storage size quota set to 2GB by default. To avoid swapping or running out of memory, the machine should have at least as much RAM to cover the quota. 8GB is a suggested maximum size for normal environments and etcd warns at startup if the configured value exceeds it. At CoreOS, an etcd cluster is usually deployed on dedicated CoreOS Container Linux machines with dual-core processors, 2GB of RAM, and 80GB of SSD *at the very least*. **Note that performance is intrinsically workload dependent; please test before production deployment**. See [hardware][hardware-setup] for more recommendations.
The most stable production environment is the Linux operating system with the amd64 architecture; see [supported platform][supported-platform] for more.
#### Why an odd number of cluster members?
### Why an odd number of cluster members?
An etcd cluster needs a majority of nodes, a quorum, to agree on updates to the cluster state. For a cluster with n members, quorum is (n/2)+1. For any odd-sized cluster, adding one node will always increase the number of nodes necessary for quorum. Although adding a node to an odd-sized cluster appears better since there are more machines, the fault tolerance is worse since exactly the same number of nodes may fail without losing quorum but there are more nodes that can fail. If the cluster is in a state where it can't tolerate any more failures, adding a node before removing nodes is dangerous because if the new node fails to register with the cluster (e.g., the address is misconfigured), quorum will be permanently lost.
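To make the arithmetic concrete, a tiny self-contained sketch (plain arithmetic, no etcd API involved):
```go
package main

import "fmt"

func main() {
	// quorum = (n/2)+1 with integer division; tolerance = n - quorum.
	for n := 1; n <= 7; n++ {
		quorum := n/2 + 1
		fmt.Printf("members=%d quorum=%d tolerable failures=%d\n", n, quorum, n-quorum)
	}
}
```
Running it shows, for example, that growing from 3 to 4 members raises quorum from 2 to 3 while the number of tolerable failures stays at 1.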
#### What is maximum cluster size?
### What is maximum cluster size?
Theoretically, there is no hard limit. However, an etcd cluster probably should have no more than seven nodes. [Google Chubby lock service][chubby], similar to etcd and widely deployed within Google for many years, suggests running five nodes. A 5-member etcd cluster can tolerate two member failures, which is enough in most cases. Although larger clusters provide better fault tolerance, the write performance suffers because data must be replicated across more machines.
#### What is failure tolerance?
### What is failure tolerance?
An etcd cluster operates so long as a member quorum can be established. If quorum is lost through transient network failures (e.g., partitions), etcd automatically and safely resumes once the network recovers and restores quorum; Raft enforces cluster consistency. For power loss, etcd persists the Raft log to disk; etcd replays the log to the point of failure and resumes cluster participation. For permanent hardware failure, the node may be removed from the cluster through [runtime reconfiguration][runtime reconfiguration].
@ -54,19 +56,19 @@ It is recommended to have an odd number of members in a cluster. An odd-size clu
Adding a member to bring the size of cluster up to an even number doesn't buy additional fault tolerance. Likewise, during a network partition, an odd number of members guarantees that there will always be a majority partition that can continue to operate and be the source of truth when the partition ends.
#### Does etcd work in cross-region or cross data center deployments?
### Does etcd work in cross-region or cross data center deployments?
Deploying etcd across regions improves etcd's fault tolerance since members are in separate failure domains. The cost is higher consensus request latency from crossing data center boundaries. Since etcd relies on a member quorum for consensus, the latency from crossing data centers will be somewhat pronounced because at least a majority of cluster members must respond to consensus requests. Additionally, cluster data must be replicated across all peers, so there will be bandwidth cost as well.
With longer latencies, the default etcd configuration may cause frequent elections or heartbeat timeouts. See [tuning] for adjusting timeouts for high latency deployments.
### Operation
## Operation
#### How to backup a etcd cluster?
### How to back up an etcd cluster?
etcdctl provides a `snapshot` command to create backups. See [backup][backup] for more details.
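Roughly the same can be done from the Go client, which exposes the same snapshot RPC; the destination path is a placeholder:
```go
package main

import (
	"context"
	"io"
	"os"

	"go.etcd.io/etcd/clientv3"
)

// saveSnapshot streams a point-in-time snapshot of the keyspace to a local
// file, using the same maintenance RPC that `etcdctl snapshot save` uses.
func saveSnapshot(ctx context.Context, cli *clientv3.Client) error {
	rc, err := cli.Snapshot(ctx)
	if err != nil {
		return err
	}
	defer rc.Close()

	f, err := os.Create("/var/backups/etcd.snapshot.db") // illustrative path
	if err != nil {
		return err
	}
	defer f.Close()

	_, err = io.Copy(f, rc)
	return err
}
```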
#### Should I add a member before removing an unhealthy member?
### Should I add a member before removing an unhealthy member?
When replacing an etcd node, it's important to remove the member first and then add its replacement.
@ -78,21 +80,21 @@ Additionally, that new member is risky because it may turn out to be misconfigur
On the other hand, if the downed member is removed from cluster membership first, the number of members becomes 2 and the quorum remains at 2. Following that removal by adding a new member will also keep the quorum steady at 2. So, even if the new node can't be brought up, it's still possible to remove the new member through quorum on the remaining live members.
#### Why won't etcd accept my membership changes?
### Why won't etcd accept my membership changes?
etcd sets `strict-reconfig-check` in order to reject reconfiguration requests that would cause quorum loss. Abandoning quorum is really risky (especially when the cluster is already unhealthy). Although it may be tempting to disable quorum checking if there's quorum loss to add a new member, this could lead to full-fledged cluster inconsistency. For many applications, this will make the problem even worse ("disk geometry corruption" being a candidate for most terrifying).
#### Why does etcd lose its leader from disk latency spikes?
### Why does etcd lose its leader from disk latency spikes?
This is intentional; disk latency is part of leader liveness. Suppose the cluster leader takes a minute to fsync a raft log update to disk, but the etcd cluster has a one second election timeout. Even though the leader can process network messages within the election interval (e.g., send heartbeats), it's effectively unavailable because it can't commit any new proposals; it's waiting on the slow disk. If the cluster frequently loses its leader due to disk latencies, try [tuning][tuning] the disk settings or etcd time parameters.
#### What does the etcd warning "request ignored (cluster ID mismatch)" mean?
### What does the etcd warning "request ignored (cluster ID mismatch)" mean?
Every new etcd cluster generates a new cluster ID based on the initial cluster configuration and a user-provided unique `initial-cluster-token` value. By having unique cluster IDs, etcd is protected from cross-cluster interaction, which could corrupt the cluster.
Usually this warning happens after tearing down an old cluster, then reusing some of the peer addresses for the new cluster. If any etcd process from the old cluster is still running it will try to contact the new cluster. The new cluster will recognize a cluster ID mismatch, then ignore the request and emit this warning. This warning is often cleared by ensuring peer addresses among distinct clusters are disjoint.
#### What does "mvcc: database space exceeded" mean and how do I fix it?
### What does "mvcc: database space exceeded" mean and how do I fix it?
The [multi-version concurrency control][api-mvcc] data model in etcd keeps an exact history of the keyspace. Without periodically compacting this history (e.g., by setting `--auto-compaction`), etcd will eventually exhaust its storage space. If etcd runs low on storage space, it raises a space quota alarm to protect the cluster from further writes. So long as the alarm is raised, etcd responds to write requests with the error `mvcc: database space exceeded`.
@ -102,16 +104,22 @@ To recover from the low space quota alarm:
2. [Defragment][maintenance-defragment] every etcd endpoint.
3. [Disarm][maintenance-disarm] the alarm.
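A rough Go sketch of these recovery steps (compact, defragment, disarm), assuming an already-constructed `clientv3.Client` and its endpoint list; error handling is simplified for illustration:
```go
package main

import (
	"context"

	"go.etcd.io/etcd/clientv3"
)

// reclaimSpace compacts history up to the current revision, defragments
// each endpoint, then disarms the space quota alarm.
func reclaimSpace(ctx context.Context, cli *clientv3.Client, endpoints []string) error {
	// Any read returns the current store revision in its response header.
	resp, err := cli.Get(ctx, "any-key") // placeholder key
	if err != nil {
		return err
	}
	if _, err := cli.Compact(ctx, resp.Header.Revision); err != nil {
		return err
	}
	for _, ep := range endpoints {
		if _, err := cli.Defragment(ctx, ep); err != nil {
			return err
		}
	}
	// An empty AlarmMember asks the server to disarm all raised alarms.
	_, err = cli.AlarmDisarm(ctx, &clientv3.AlarmMember{})
	return err
}
```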
### Performance
### What does the etcd warning "etcdserver/api/v3rpc: transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:2379->127.0.0.1:43020: read: connection reset by peer" mean?
#### How should I benchmark etcd?
This is a gRPC-side warning emitted when a server receives a TCP RST flag because client-side streams were prematurely closed. For example, a client closes its connection while the gRPC server has not yet processed all HTTP/2 frames in the TCP queue. Some data may have been lost on the server side, but this is fine so long as the client connection has already been closed.
Only [old versions of gRPC](https://github.com/grpc/grpc-go/issues/1362) log this. etcd [>=v3.2.13 logs this at DEBUG level by default](https://github.com/etcd-io/etcd/pull/9080), so it is only visible with the `--debug` flag enabled.
## Performance
### How should I benchmark etcd?
Try the [benchmark] tool. Current [benchmark results][benchmark-result] are available for comparison.
#### What does the etcd warning "apply entries took too long" mean?
### What does the etcd warning "apply entries took too long" mean?
After a majority of etcd members agree to commit a request, each etcd server applies the request to its data store and persists the result to disk. Even with a slow mechanical disk or a virtualized network disk, such as Amazon's EBS or Google's PD, applying a request should normally take fewer than 50 milliseconds. If the average apply duration exceeds 100 milliseconds, etcd will warn that entries are taking too long to apply.
Usually this issue is caused by a slow disk. The disk could be experiencing contention among etcd and other applications, or the disk is simply too slow (e.g., a shared virtualized disk). To rule out a slow disk as the cause of this warning, monitor [backend_commit_duration_seconds][backend_commit_metrics] (p99 duration should be less than 25ms) to confirm the disk is reasonably fast. If the disk is too slow, assigning a dedicated disk to etcd or using a faster disk will typically solve the problem.
The second most common cause is CPU starvation. If monitoring of the machine's CPU usage shows heavy utilization, there may not be enough compute capacity for etcd. Moving etcd to a dedicated machine, increasing process resource isolation with cgroups, or renicing the etcd server process to a higher priority can usually solve the problem.
@ -120,7 +128,7 @@ Expensive user requests which access too many keys (e.g., fetching the entire ke
If none of the above suggestions clear the warnings, please [open an issue][new_issue] with detailed logging, monitoring, metrics and optionally workload information.
#### What does the etcd warning "failed to send out heartbeat on time" mean?
### What does the etcd warning "failed to send out heartbeat on time" mean?
etcd uses a leader-based consensus protocol for consistent data replication and log execution. Cluster members elect a single leader; all other members become followers. The elected leader must periodically send heartbeats to its followers to maintain its leadership. Followers infer leader failure if no heartbeats are received within an election interval and trigger an election. If a leader doesn't send its heartbeats in time but is still running, the election is spurious and likely caused by insufficient resources. To catch these soft failures, if the leader skips two heartbeat intervals, etcd will warn that it failed to send a heartbeat on time.
@ -132,7 +140,7 @@ A slow network can also cause this issue. If network metrics among the etcd mach
If none of the above suggestions clear the warnings, please [open an issue][new_issue] with detailed logging, monitoring, metrics and optionally workload information.
#### What does the etcd warning "snapshotting is taking more than x seconds to finish ..." mean?
### What does the etcd warning "snapshotting is taking more than x seconds to finish ..." mean?
etcd sends a snapshot of its complete key-value store to refresh slow followers and for [backups][backup]. Slow snapshot transfer times increase MTTR; if the cluster is ingesting data with high throughput, slow followers may livelock by needing a new snapshot before finishing receiving a snapshot. To catch slow snapshot performance, etcd warns when sending a snapshot takes more than thirty seconds and exceeds the expected transfer time for a 1Gbps connection.
@ -141,14 +149,14 @@ etcd sends a snapshot of its complete key-value store to refresh slow followers
[supported-platform]: ./op-guide/supported-platform.md
[wal_fsync_duration_seconds]: ./metrics.md#disk
[tuning]: ./tuning.md
[new_issue]: https://github.com/coreos/etcd/issues/new
[new_issue]: https://github.com/etcd-io/etcd/issues/new
[backend_commit_metrics]: ./metrics.md#disk
[raft]: https://raft.github.io/raft.pdf
[backup]: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/recovery.md#snapshotting-the-keyspace
[backup]: https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/recovery.md#snapshotting-the-keyspace
[chubby]: http://static.googleusercontent.com/media/research.google.com/en//archive/chubby-osdi06.pdf
[runtime reconfiguration]: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/runtime-configuration.md
[runtime reconfiguration]: https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/runtime-configuration.md
[benchmark]: https://github.com/coreos/etcd/tree/master/tools/benchmark
[benchmark-result]: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/performance.md
[benchmark-result]: https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/performance.md
[api-mvcc]: learning/api.md#revisions
[maintenance-compact]: op-guide/maintenance.md#history-compaction
[maintenance-defragment]: op-guide/maintenance.md#defragmentation

View File

@ -1,8 +1,11 @@
# Libraries and tools
---
title: Libraries and tools
weight: 2
---
**Tools**
- [etcdctl](https://github.com/coreos/etcd/tree/master/etcdctl) - A command line client for etcd
- [etcdctl](https://github.com/etcd-io/etcd/tree/master/etcdctl) - A command line client for etcd
- [etcd-backup](https://github.com/fanhattan/etcd-backup) - A powerful command line utility for dumping/restoring etcd - Supports v2
- [etcd-dump](https://npmjs.org/package/etcd-dump) - Command line utility for dumping/restoring etcd.
- [etcd-fs](https://github.com/xetorthio/etcd-fs) - FUSE filesystem for etcd
@ -15,17 +18,18 @@
- [etcd-rest](https://github.com/mickep76/etcd-rest) - Create generic REST API in Go using etcd as a backend with validation using JSON schema
- [etcdsh](https://github.com/kamilhark/etcdsh) - A command line client with support of command history and tab completion. Supports v2
- [etcdloadtest](https://github.com/sinsharat/etcdloadtest) - A command line load test client for etcd version 3.0 and above.
- [lucas](https://github.com/ringtail/lucas) - A web-based key-value viewer for Kubernetes etcd3.0+ clusters.
**Go libraries**
- [etcd/clientv3](https://github.com/coreos/etcd/blob/master/clientv3) - the officially maintained Go client for v3
- [etcd/client](https://github.com/coreos/etcd/blob/master/client) - the officially maintained Go client for v2
- [etcd/clientv3](https://github.com/etcd-io/etcd/blob/master/clientv3) - the officially maintained Go client for v3
- [etcd/client](https://github.com/etcd-io/etcd/blob/master/client) - the officially maintained Go client for v2
- [go-etcd](https://github.com/coreos/go-etcd) - the deprecated official client. May be useful for older (<2.0.0) versions of etcd.
- [encWrapper](https://github.com/lumjjb/etcd/tree/enc_wrapper/clientwrap/encwrapper) - encWrapper is an encryption wrapper for the etcd client Keys API/KV.
**Java libraries**
- [coreos/jetcd](https://github.com/coreos/jetcd) - Supports v3
- [coreos/jetcd](https://github.com/etcd-io/jetcd) - Supports v3
- [boonproject/etcd](https://github.com/boonproject/boon/blob/master/etcd/README.md) - Supports v2, Async/Sync and waits
- [justinsb/jetcd](https://github.com/justinsb/jetcd)
- [diwakergupta/jetcd](https://github.com/diwakergupta/jetcd) - Supports v2
@ -38,6 +42,11 @@
- [maciej/etcd-client](https://github.com/maciej/etcd-client) - Supports v2. Akka HTTP-based fully async client
- [eiipii/etcdhttpclient](https://bitbucket.org/eiipii/etcdhttpclient) - Supports v2. Async HTTP client based on Netty and Scala Futures.
**Perl libraries**
- [hexfusion/perl-net-etcd](https://github.com/hexfusion/perl-net-etcd) - Supports v3 grpc gateway HTTP API
- [robn/p5-etcd](https://github.com/robn/p5-etcd) - Supports v2
**Python libraries**
- [kragniz/python-etcd3](https://github.com/kragniz/python-etcd3) - Client for v3
@ -47,6 +56,8 @@
- [lisael/aioetcd](https://github.com/lisael/aioetcd) - (Python 3.4+) Asyncio coroutines client (Supports v2)
- [txaio-etcd](https://github.com/crossbario/txaio-etcd) - Asynchronous etcd v3-only client library for Twisted (today) and asyncio (future)
- [dims/etcd3-gateway](https://github.com/dims/etcd3-gateway) - etcd v3 API library using the HTTP grpc gateway
- [aioetcd3](https://github.com/gaopeiliang/aioetcd3) - (Python 3.6+) etcd v3 API for asyncio
- [Revolution1/etcd3-py](https://github.com/Revolution1/etcd3-py) - (python2.7 and python3.5+) Python client for etcd v3, using gRPC-JSON-Gateway
**Node libraries**
@ -82,17 +93,20 @@
**Erlang libraries**
- [marshall-lee/etcd.erl](https://github.com/marshall-lee/etcd.erl)
- [marshall-lee/etcd.erl](https://github.com/marshall-lee/etcd.erl) - Supports v2
- [zhongwencool/eetcd](https://github.com/zhongwencool/eetcd) - Supports v3+ (GRPC only)
**.Net Libraries**
- [wangjia184/etcdnet](https://github.com/wangjia184/etcdnet) - Supports v2
- [drusellers/etcetera](https://github.com/drusellers/etcetera)
- [shubhamranjan/dotnet-etcd](https://github.com/shubhamranjan/dotnet-etcd) - Supports v3+ (GRPC only)
**PHP Libraries**
- [linkorb/etcd-php](https://github.com/linkorb/etcd-php)
- [activecollab/etcd](https://github.com/activecollab/etcd)
- [ouqiang/etcd-php](https://github.com/ouqiang/etcd-php) - Client for v3 gRPC gateway
**Haskell libraries**
@ -133,6 +147,7 @@
**Projects using etcd**
- [etcd Raft users](../raft/README.md#notable-users) - projects using etcd's raft library implementation.
- [apache/celix](https://github.com/apache/celix) - an implementation of the OSGi specification adapted to C and C++
- [binocarlos/yoda](https://github.com/binocarlos/yoda) - etcd + ZeroMQ
- [blox/blox](https://github.com/blox/blox) - a collection of open source projects for container management and orchestration with AWS ECS
@ -146,7 +161,6 @@
- [mattn/etcdenv](https://github.com/mattn/etcdenv) - "env" shebang with etcd integration
- [kelseyhightower/confd](https://github.com/kelseyhightower/confd) - Manage local app config files using templates and data from etcd
- [configdb](https://git.autistici.org/ai/configdb/tree/master) - A REST relational abstraction on top of arbitrary database backends, aimed at storing configs and inventories.
- [fleet](https://github.com/coreos/fleet) - Distributed init system
- [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) - Container cluster manager introduced by Google.
- [mailgun/vulcand](https://github.com/mailgun/vulcand) - HTTP proxy that uses etcd as a configuration backend.
- [duedil-ltd/discodns](https://github.com/duedil-ltd/discodns) - Simple DNS nameserver using etcd as a database for names and records.
@ -157,3 +171,11 @@
- [ryandoyle/nss-etcd](https://github.com/ryandoyle/nss-etcd) - A GNU libc NSS module for resolving names from etcd.
- [Gru](https://github.com/dnaeon/gru) - Orchestration made easy with Go
- [Vitess](http://vitess.io/) - Vitess is a database clustering system for horizontal scaling of MySQL.
- [lclarkmichalek/etcdhcp](https://github.com/lclarkmichalek/etcdhcp) - DHCP server that uses etcd for persistence and coordination.
- [openstack/networking-vpp](https://github.com/openstack/networking-vpp) - A networking driver that programs the [FD.io VPP dataplane](https://wiki.fd.io/view/VPP) to provide [OpenStack](https://www.openstack.org/) cloud virtual networking
- [OpenStack](https://github.com/openstack/governance/blob/master/reference/base-services.rst) - OpenStack services can rely on etcd as a base service.
- [CoreDNS](https://github.com/coredns/coredns/tree/master/plugin/etcd) - CoreDNS is a DNS server that chains plugins, part of CNCF and Kubernetes
- [Uber M3](https://github.com/m3db/m3) - M3: Uber's Open Source, Large-scale Metrics Platform for Prometheus
- [Rook](https://github.com/rook/rook) - Storage Orchestration for Kubernetes
- [Patroni](https://github.com/zalando/patroni) - A template for PostgreSQL High Availability with ZooKeeper, etcd, or Consul
- [Trillian](https://github.com/google/trillian) - Trillian implements a Merkle tree whose contents are served from a data storage layer, to allow scalability to extremely large trees.

View File

@ -0,0 +1,3 @@
---
title: Learning
---

View File

@ -1,4 +1,6 @@
# etcd3 API
---
title: etcd3 API
---
This document is meant to give an overview of the etcd3 API's central design. It is by no means all encompassing, but intended to focus on the basic ideas needed to understand etcd without the distraction of less common API calls. All etcd3 APIs are defined in [gRPC services][grpc-service], which categorize remote procedure calls (RPCs) understood by the etcd server. A full listing of all etcd RPCs is documented in markdown in the [gRPC API listing][grpc-api].
@ -45,9 +47,9 @@ message ResponseHeader {
* Revision - the revision of the key-value store when generating the response.
* Raft_Term - the Raft term of the member when generating the response.
An application may read the Cluster_ID (Member_ID) field to ensure it is communicating with the intended cluster (member).
An application may read the `Cluster_ID` or `Member_ID` field to ensure it is communicating with the intended cluster (member).
Applications can use the `Revision` to know the latest revision of the key-value store. This is especially useful when applications specify a historical revision to make time `travel query` and wishes to know the latest revision at the time of the request.
Applications can use the `Revision` field to know the latest revision of the key-value store. This is especially useful when applications specify a historical revision to make a `time travel query` and wish to know the latest revision at the time of the request.
Applications can use `Raft_Term` to detect when the cluster completes a new leader election.
@ -84,9 +86,9 @@ In addition to just the key and value, etcd attaches additional revision metadat
#### Revisions
etcd maintains a 64-bit cluster-wide counter, the store revision, that is incremented each time the key space is modified. The revision serves as a global logical clock, sequentially ordering all updates to the store. The change represented by a new revisions is incremental; the data associated with a revision is the data that changed the store. Internally, a new revision means writing the changes to the backend's B+tree, keyed by the incremented revision.
etcd maintains a 64-bit cluster-wide counter, the store revision, that is incremented each time the key space is modified. The revision serves as a global logical clock, sequentially ordering all updates to the store. The change represented by a new revision is incremental; the data associated with a revision is the data that changed the store. Internally, a new revision means writing the changes to the backend's B+tree, keyed by the incremented revision.
Revisions become more valuable when taking considering etcd3's [multi-version concurrency control][mvcc] backend. The MVCC model means that the key-value store can be viewed from past revisions since historical key revisions are retained. The retention policy for this history can be configured by cluster administrators for fine-grained storage management; usually etcd3 discards old revisions of keys on a timer. A typical etcd3 cluster retains superseded key data for hours. This also buys reliable handling for long client disconnection, not just transient network disruptions: watchers simply resume from the last observed historical revision. Similarly, to read from the store at a particular point-in-time, read requests can be tagged with a revision to return keys from a view of the key space at the point in time that revision was committed.
Revisions become more valuable when considering etcd3's [multi-version concurrency control][mvcc] backend. The MVCC model means that the key-value store can be viewed from past revisions since historical key revisions are retained. The retention policy for this history can be configured by cluster administrators for fine-grained storage management; usually etcd3 discards old revisions of keys on a timer. A typical etcd3 cluster retains superseded key data for hours. This also provides reliable handling for long client disconnection, not just transient network disruptions: watchers simply resume from the last observed historical revision. Similarly, to read from the store at a particular point-in-time, read requests can be tagged with a revision to return keys from a view of the key space at the point-in-time that revision was committed.
#### Key ranges
@ -94,7 +96,7 @@ The etcd3 data model indexes all keys over a flat binary key space. This differs
These intervals are often referred to as "ranges" in etcd3. Operations over ranges are more powerful than operations on directories. Like a hierarchical store, intervals support single key lookups via `[a, a+1)` (e.g., ['a', 'a\x00') looks up 'a') and directory lookups by encoding keys by directory depth. In addition to those operations, intervals can also encode prefixes; for example the interval `['a', 'b')` looks up all keys prefixed by the string 'a'.
By convention, ranges for a Request are denoted by the fields `key` and `range_end`. The `key` field is the first key of the range and should be non-empty. The `range_end` is the key following the last key of the range. If `range_end` is not given or empty, the range is defined to contain only the key argument. If `range_end` is `key` plus one (e.g., "aa"+1 == "ab", "a\xff"+1 == "b"), then the range represents all keys prefixed with key. If both `key` and `range_end` are '\0', then range represents all keys. If `range_end` is '\0', the range is all keys greater than or equal to the key argument.
By convention, ranges for a request are denoted by the fields `key` and `range_end`. The `key` field is the first key of the range and should be non-empty. The `range_end` is the key following the last key of the range. If `range_end` is not given or empty, the range is defined to contain only the key argument. If `range_end` is `key` plus one (e.g., "aa"+1 == "ab", "a\xff"+1 == "b"), then the range represents all keys prefixed with key. If both `key` and `range_end` are '\0', then range represents all keys. If `range_end` is '\0', the range is all keys greater than or equal to the key argument.
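A sketch of how these conventions surface in the Go client's option helpers; the keys are placeholders:
```go
package main

import (
	"context"

	"go.etcd.io/etcd/clientv3"
)

// rangeExamples shows how clientv3 option helpers map onto the
// key/range_end convention described above.
func rangeExamples(ctx context.Context, cli *clientv3.Client) {
	cli.Get(ctx, "a")                          // single key: range ["a", "a"+1)
	cli.Get(ctx, "a", clientv3.WithPrefix())   // prefix: range ["a", "b")
	cli.Get(ctx, "a", clientv3.WithRange("c")) // explicit range ["a", "c")
	cli.Get(ctx, "a", clientv3.WithFromKey())  // all keys >= "a" (range_end = '\0')
}
```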
### Range
@ -133,7 +135,7 @@ message RangeRequest {
* Key, Range_End - The key range to fetch.
* Limit - the maximum number of keys returned for the request. When limit is set to 0, it is treated as no limit.
* Revision - the point-in-time of the key-value store to use for the range. If revision is less or equal to zero, the range is over the latest key-value store If the revision is compacted, ErrCompacted is returned as a response.
* Revision - the point-in-time of the key-value store to use for the range. If revision is less than or equal to zero, the range is over the latest key-value store. If the revision has been compacted, ErrCompacted is returned as a response.
* Sort_Order - the ordering for sorted requests.
* Sort_Target - the key-value field to sort.
* Serializable - sets the range request to use serializable member-local reads. By default, Range is linearizable; it reflects the current consensus of the cluster. For better performance and availability, in exchange for possible stale reads, a serializable range request is served locally without needing to reach consensus with other nodes in the cluster.
@ -218,7 +220,7 @@ message DeleteRangeResponse {
```
* Deleted - number of keys deleted.
* Prev_Kv - a list of all key-value pairs deleted by the DeleteRange operation.
* Prev_Kv - a list of all key-value pairs deleted by the `DeleteRange` operation.
### Transaction
@ -226,7 +228,7 @@ A transaction is an atomic If/Then/Else construct over the key-value store. It p
A transaction can atomically process multiple requests in a single request. For modifications to the key-value store, this means the store's revision is incremented only once for the transaction and all events generated by the transaction will have the same revision. However, modifications to the same key multiple times within a single transaction are forbidden.
All transactions are guarded by a conjunction of comparisons, similar to an "If" statement. Each comparison checks a single key in the store. It may check for the absence or presence of a value, compare with a given value, or check a key's revision or version. Two different comparisons may apply to the same or different keys. All comparisons are applied atomically; if all comparisons are true, the transaction is said to succeed and etcd applies the transaction's then / `success` request block, otherwise it is said to fail and applies the else / `failure` request block.
All transactions are guarded by a conjunction of comparisons, similar to an `If` statement. Each comparison checks a single key in the store. It may check for the absence or presence of a value, compare with a given value, or check a key's revision or version. Two different comparisons may apply to the same or different keys. All comparisons are applied atomically; if all comparisons are true, the transaction is said to succeed and etcd applies the transaction's then / `success` request block, otherwise it is said to fail and applies the else / `failure` request block.
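For illustration, a guarded compare-and-swap expressed as a `clientv3` transaction sketch; key and values are placeholders:
```go
package main

import (
	"context"
	"fmt"

	"go.etcd.io/etcd/clientv3"
)

// compareAndSwap writes "new" only if key "k" still holds "old"; otherwise
// it reads the current value back in the failure block.
func compareAndSwap(ctx context.Context, cli *clientv3.Client) error {
	resp, err := cli.Txn(ctx).
		If(clientv3.Compare(clientv3.Value("k"), "=", "old")).
		Then(clientv3.OpPut("k", "new")).
		Else(clientv3.OpGet("k")).
		Commit()
	if err != nil {
		return err
	}
	fmt.Println("comparison succeeded:", resp.Succeeded)
	return nil
}
```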
Each comparison is encoded as a `Compare` message:
@ -321,7 +323,7 @@ message ResponseOp {
## Watch API
The Watch API provides an event-based interface for asynchronously monitoring changes to keys. An etcd3 watch waits for changes to keys by continuously watching from a given revision, either current or historical, and streams key updates back to the client.
The `Watch` API provides an event-based interface for asynchronously monitoring changes to keys. An etcd3 watch waits for changes to keys by continuously watching from a given revision, either current or historical, and streams key updates back to the client.
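A minimal `clientv3` watch sketch, resuming from a given (possibly historical) revision; the key and output formatting are illustrative:
```go
package main

import (
	"context"
	"fmt"

	"go.etcd.io/etcd/clientv3"
)

// watchFrom watches key "k" starting at revision rev; etcd replays any
// missed events from history before streaming live ones.
func watchFrom(ctx context.Context, cli *clientv3.Client, rev int64) {
	for wresp := range cli.Watch(ctx, "k", clientv3.WithRev(rev)) {
		for _, ev := range wresp.Events {
			fmt.Printf("%s %q -> %q (mod revision %d)\n",
				ev.Type, ev.Kv.Key, ev.Kv.Value, ev.Kv.ModRevision)
		}
	}
}
```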
### Events
@ -345,7 +347,7 @@ message Event {
### Watch streams
Watches are long-running requests and use gRPC streams to stream event data. A watch stream is bi-directional; the client writes to the stream to establish watches and reads to receive watch event. A single watch stream can multiplex many distinct watches by tagging events with per-watch identifiers. This multiplexing helps reducing the memory footprint and connection overhead on the core etcd cluster.
Watches are long-running requests and use gRPC streams to stream event data. A watch stream is bi-directional; the client writes to the stream to establish watches and reads to receive watch events. A single watch stream can multiplex many distinct watches by tagging events with per-watch identifiers. This multiplexing helps reduce the memory footprint and connection overhead on the core etcd cluster.
Watches make three guarantees about events:
* Ordered - events are ordered by revision; an event will never appear on a watch if it precedes an event in time that has already been posted.
@ -391,7 +393,7 @@ message WatchResponse {
```
* Watch_ID - the ID of the watch that corresponds to the response.
* Created - set to true if the response is for a create watch request. The client should record ID and expect to receive events for the watch on the stream. All events sent to the created watcher will have the same watch_id.
* Created - set to true if the response is for a create watch request. The client should store the ID and expect to receive events for the watch on the stream. All events sent to the created watcher will have the same watch_id.
* Canceled - set to true if the response is for a cancel watch request. No further events will be sent to the canceled watcher.
* Compact_Revision - set to the minimum historical revision available to etcd if a watcher tries watching at a compacted revision. This happens when creating a watcher at a compacted revision or the watcher cannot catch up with the progress of the key-value store. The watcher will be canceled; creating new watches with the same start_revision will fail.
* Events - a list of new events in sequence corresponding to the given watch ID.
@ -449,7 +451,7 @@ message LeaseRevokeRequest {
### Keep alives
Leases are refreshed using a bi-directional stream created with the `LeaseKeepAlive` API call. When the client wishes to refresh a lease, it sends a `LeaseGrantRequest` over the stream:
Leases are refreshed using a bi-directional stream created with the `LeaseKeepAlive` API call. When the client wishes to refresh a lease, it sends a `LeaseKeepAliveRequest` over the stream:
```protobuf
message LeaseKeepAliveRequest {
@ -472,10 +474,10 @@ message LeaseKeepAliveResponse {
* ID - the lease that was refreshed with a new TTL.
* TTL - the new time-to-live, in seconds, that the lease has remaining.
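Putting the lease RPCs together, a rough `clientv3` sketch that grants a lease, attaches a key to it, and keeps the lease alive; the key, value, and TTL are placeholders:
```go
package main

import (
	"context"
	"fmt"

	"go.etcd.io/etcd/clientv3"
)

// keepLease grants a 10-second lease, attaches a key, and refreshes the
// lease until ctx is cancelled.
func keepLease(ctx context.Context, cli *clientv3.Client) error {
	lease, err := cli.Grant(ctx, 10)
	if err != nil {
		return err
	}
	if _, err := cli.Put(ctx, "k", "v", clientv3.WithLease(lease.ID)); err != nil {
		return err
	}
	// KeepAlive sends LeaseKeepAliveRequests over a bi-directional stream.
	ch, err := cli.KeepAlive(ctx, lease.ID)
	if err != nil {
		return err
	}
	for ka := range ch {
		fmt.Println("lease refreshed; TTL now", ka.TTL)
	}
	return nil
}
```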
[elections]: https://github.com/coreos/etcd/blob/master/clientv3/concurrency/election.go
[kv-proto]: https://github.com/coreos/etcd/blob/master/mvcc/mvccpb/kv.proto
[elections]: https://github.com/etcd-io/etcd/blob/master/clientv3/concurrency/election.go
[kv-proto]: https://github.com/etcd-io/etcd/blob/master/mvcc/mvccpb/kv.proto
[grpc-api]: ../dev-guide/api_reference_v3.md
[grpc-service]: https://github.com/coreos/etcd/blob/master/etcdserver/etcdserverpb/rpc.proto
[locks]: https://github.com/coreos/etcd/blob/master/clientv3/concurrency/mutex.go
[grpc-service]: https://github.com/etcd-io/etcd/blob/master/etcdserver/etcdserverpb/rpc.proto
[locks]: https://github.com/etcd-io/etcd/blob/master/clientv3/concurrency/mutex.go
[mvcc]: https://en.wikipedia.org/wiki/Multiversion_concurrency_control
[stm]: https://github.com/coreos/etcd/blob/master/clientv3/concurrency/stm.go
[stm]: https://github.com/etcd-io/etcd/blob/master/clientv3/concurrency/stm.go

View File

@ -1,4 +1,6 @@
# KV API guarantees
---
title: KV API guarantees
---
etcd is a consistent and durable key value store with [mini-transaction][txn] support. The key value store is exposed through the KV APIs. etcd tries to ensure the strongest consistency and durability guarantees for a distributed system. This specification enumerates the KV API guarantees made by etcd.
@ -51,7 +53,7 @@ Linearizability (also known as Atomic Consistency or External Consistency) is a
For linearizability, suppose each operation receives a timestamp from a loosely synchronized global clock. Operations are linearized if and only if they always complete as though they were executed in a sequential order and each operation appears to complete in the order specified by the program. Likewise, if an operation's timestamp precedes another, that operation must also precede the other operation in the sequence.
For example, consider a client completing a write at time point 1 (*t1*). A client issuing a read at *t2* (for *t2* > *t1*) should receive a value at least as recent as the previous write, completed at *t1*. However, the read might actually complete only by *t3*, and the returned value, current at *t2* when the read began, might be "stale" by *t3*.
For example, consider a client completing a write at time point 1 (*t1*). A client issuing a read at *t2* (for *t2* > *t1*) should receive a value at least as recent as the previous write, completed at *t1*. However, the read might actually complete only by *t3*. Linearizability guarantees the read returns the most current value. Without linearizability guarantee, the returned value, current at *t2* when the read began, might be "stale" by *t3* because a concurrent write might happen between *t2* and *t3*.
etcd does not ensure linearizability for watch operations. Users are expected to verify the revision of watch responses to ensure correct ordering.

View File

@ -1,4 +1,6 @@
# etcd v3 authentication design
---
title: etcd v3 authentication design
---
## Why not reuse the v2 auth system?
@ -26,7 +28,7 @@ The metadata for auth should also be stored and managed in the storage controlle
The authentication mechanism in the etcd v2 protocol has a tricky part: the metadata consistency should work as described above, but does not; each permission check is processed by the etcd member that receives the client request (etcdserver/api/v2http/client.go), including follower members. Therefore, it's possible the check may be based on stale metadata.
This staleness means that auth configuration cannot be reflected as soon as operators execute etcdctl. Therefore there is no way to know how long the stale metadata is active. Practically, the configuration change is reflected immediately after the command execution. However, in some cases of heavy load, the inconsistent state can be prolonged and it might result in counter-intuitive situations for users and developers. It requires a workaround like this: https://github.com/coreos/etcd/pull/4317#issuecomment-179037582
This staleness means that auth configuration cannot be reflected as soon as operators execute etcdctl. Therefore there is no way to know how long the stale metadata is active. Practically, the configuration change is reflected immediately after the command execution. However, in some cases of heavy load, the inconsistent state can be prolonged and it might result in counter-intuitive situations for users and developers. It requires a workaround like this: https://github.com/etcd-io/etcd/pull/4317#issuecomment-179037582
### Inconsistent permissions are unsafe for linearized requests
@ -38,7 +40,7 @@ Therefore, the permission checking logic should be added to the state machine of
### Authentication
At first, a client must create a gRPC connection only to authenticate its user ID and password. An etcd server will respond with an authentication reply. The reponse will be an authentication token on success or an error on failure. The client can use its authentication token to present its credentials to etcd when making API requests.
At first, a client must create a gRPC connection only to authenticate its user ID and password. An etcd server will respond with an authentication reply. The response will be an authentication token on success or an error on failure. The client can use its authentication token to present its credentials to etcd when making API requests.
The client connection used to request the authentication token is typically thrown away; it cannot carry the new token's credentials. This is because gRPC doesn't provide a way to add per-RPC credentials after creation of the connection (calling `grpc.Dial()`). Therefore, a client cannot assign a token to its connection that is obtained through the connection. The client needs a new connection to use the token.
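With the Go client, this handshake is handled internally when credentials are supplied in the config; a minimal sketch (endpoint and credentials are placeholders):
```go
package main

import (
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// With Username/Password set, clientv3 performs the Authenticate RPC on a
	// temporary connection and attaches the resulting token to later requests.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"10.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
		Username:    "root",
		Password:    "changeme",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()
}
```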

View File

@ -0,0 +1,114 @@
---
title: etcd client architecture
weight: 1
---
## Introduction
etcd server has proven its robustness with years of failure injection testing. Most complex application logic is already handled by etcd server and its data stores (e.g. cluster membership is transparent to clients, with the Raft layer forwarding proposals to the leader). Although the server components are correct, composing them with clients requires a different set of intricate protocols to guarantee correctness and high availability under faulty conditions. Ideally, etcd server provides one logical cluster view of many physical machines, and the client implements automatic failover between replicas. This document describes the client's architectural decisions and implementation details.
## Glossary
**clientv3** --- the official Go client for the etcd v3 API.
**clientv3-grpc1.0** --- Official client implementation, with [grpc-go v1.0.x](https://github.com/grpc/grpc-go/releases/tag/v1.0.0), which is used in latest etcd v3.1.
**clientv3-grpc1.7** --- Official client implementation, with [grpc-go v1.7.x](https://github.com/grpc/grpc-go/releases/tag/v1.7.0), which is used in latest etcd v3.2 and v3.3.
**clientv3-grpc1.14** --- Official client implementation, with [grpc-go v1.14.x](https://github.com/grpc/grpc-go/releases/tag/v1.14.0), which is used in latest etcd v3.4.
**Balancer** --- etcd client load balancer that implements retry and failover mechanism. etcd client should automatically balance loads between multiple endpoints.
**Endpoints** --- A list of etcd server endpoints that clients can connect to. Typically, 3 or 5 client URLs of an etcd cluster.
**Pinned endpoint** --- When configured with multiple endpoints, <= v3.3 client balancer chooses only one endpoint to establish a TCP connection, in order to conserve total open connections to etcd cluster. In v3.4, balancer round-robins pinned endpoints for every request, thus distributing loads more evenly.
**Client Connection** --- TCP connection that has been established to an etcd server, via gRPC Dial.
**Sub Connection** --- gRPC SubConn interface. Each sub-connection contains a list of addresses. Balancer creates a SubConn from a list of resolved addresses. gRPC ClientConn can map to multiple SubConn (e.g. example.com resolves to `10.10.10.1` and `10.10.10.2` of two sub-connections). etcd v3.4 balancer employs internal resolver to establish one sub-connection for each endpoint.
**Transient disconnect** --- When gRPC server returns a status error of [code Unavailable](https://godoc.org/google.golang.org/grpc/codes#Code).
## Client requirements
**Correctness** --- Requests may fail in the presence of server faults. However, the client never violates consistency guarantees: global ordering properties hold, corrupted data is never written, mutable operations have at-most-once semantics, watches never observe partial events, and so on.
**Liveness** --- Servers may fail or disconnect briefly. Clients should make progress either way. Clients should [never deadlock](https://github.com/etcd-io/etcd/issues/8980) waiting for a server to come back from offline, unless configured to do so. Ideally, clients detect unavailable servers with HTTP/2 pings and fail over to other nodes with clear error messages.
**Effectiveness** --- Clients should operate effectively with minimum resources: previous TCP connections should be [gracefully closed](https://github.com/etcd-io/etcd/issues/9212) after an endpoint switch. The failover mechanism should effectively predict the next replica to connect to, without wastefully retrying failed nodes.
**Portability** --- The official client should be clearly documented and its implementation applicable to other language bindings. Error handling between different language bindings should be consistent. Since etcd is fully committed to gRPC, the implementation should be closely aligned with gRPC long-term design goals (e.g. a pluggable retry policy should be compatible with [gRPC retry](https://github.com/grpc/proposal/blob/master/A6-client-retries.md)). Upgrades between two client versions should be non-disruptive.
## Client overview
The etcd client implements the following components:
* balancer that establishes gRPC connections to an etcd cluster,
* API client that sends RPCs to an etcd server, and
* error handler that decides whether to retry a failed request or switch endpoints.
Languages may differ in how they establish an initial connection (e.g. configure TLS), how they encode and send Protocol Buffer messages to the server, how they handle stream RPCs, and so on. However, errors returned from the etcd server will be the same, and so should error handling and retry policy.
For example, etcd server may return `"rpc error: code = Unavailable desc = etcdserver: request timed out"`, which is a transient error that expects retries. Or it may return `rpc error: code = InvalidArgument desc = etcdserver: key is not provided`, which means the request was invalid and should not be retried. Go clients can parse errors with `google.golang.org/grpc/status.FromError`, and Java clients with `io.grpc.Status.fromThrowable`.
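A sketch of such error classification in Go, using only the `grpc/status` and `grpc/codes` packages named above; the retry policy itself is an assumption for illustration:
```go
package main

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isRetryable classifies an RPC error: transient server errors (e.g.
// "etcdserver: request timed out") surface as codes.Unavailable and may be
// retried; invalid requests (codes.InvalidArgument) should not be.
func isRetryable(err error) bool {
	s, ok := status.FromError(err)
	if !ok {
		return false // not a gRPC status error
	}
	switch s.Code() {
	case codes.Unavailable:
		return true
	default:
		return false
	}
}
```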
### clientv3-grpc1.0: Balancer Overview
`clientv3-grpc1.0` maintains multiple TCP connections when configured with multiple etcd endpoints. It then picks one address and uses it to send all client requests. The pinned address is maintained until the client object is closed (see *Figure 1*). When the client receives an error, it randomly picks another endpoint and retries.
{{< figure src="/img/client-architecture-balancer-figure-01.png" >}}
### clientv3-grpc1.0: Balancer Limitation
`clientv3-grpc1.0` opening multiple TCP connections may provide faster balancer failover but requires more resources. The balancer does not understand a node's health status or cluster membership. So, it is possible that the balancer gets stuck with one failed or partitioned node.
### clientv3-grpc1.7: Balancer Overview
`clientv3-grpc1.7` maintains only one TCP connection to a chosen etcd server. When given multiple cluster endpoints, a client first tries to connect to them all. As soon as one connection is up, the balancer pins the address, closing the others (see **Figure 2**).
{{< figure src="/img/client-architecture-balancer-figure-02.png" >}}
The pinned address is maintained until the client object is closed. An error, from a server or client network fault, is sent to the client error handler (see **Figure 3**).
{{< figure src="/img/client-architecture-balancer-figure-03.png" >}}
The client error handler takes an error from gRPC server, and decides whether to retry on the same endpoint, or to switch to other addresses, based on the error code and message (see **Figure 4** and **Figure 5**).
{{< figure src="/img/client-architecture-balancer-figure-04.png" >}}
{{< figure src="/img/client-architecture-balancer-figure-05.png" >}}
Stream RPCs, such as Watch and KeepAlive, are often requested with no timeouts. Instead, the client can send periodic HTTP/2 pings to check the status of a pinned endpoint; if the server does not respond to the ping, the balancer switches to other endpoints (see **Figure 6**).
{{< figure src="/img/client-architecture-balancer-figure-06.png" >}}
### clientv3-grpc1.7: Balancer Limitation
`clientv3-grpc1.7` balancer sends HTTP/2 keepalives to detect disconnects from streaming requests. It is a simple gRPC server ping mechanism and does not reason about cluster membership, and is thus unable to detect network partitions. Since a partitioned gRPC server can still respond to client pings, the balancer may get stuck with a partitioned node. Ideally, the keepalive ping would detect the partition and trigger an endpoint switch before the request times out (see [issue #8673](https://github.com/etcd-io/etcd/issues/8673) and **Figure 7**).
{{< figure src="/img/client-architecture-balancer-figure-07.png" >}}
`clientv3-grpc1.7` balancer maintains a list of unhealthy endpoints. Disconnected addresses are added to the “unhealthy” list and considered unavailable until after a wait duration, which is hard-coded to the dial timeout with a default value of 5 seconds. The balancer can have false positives on which endpoints are unhealthy. For instance, endpoint A may come back right after being blacklisted, but still be unusable for the next 5 seconds (see **Figure 8**).
`clientv3-grpc1.0` suffered from the same problems described above.
{{< figure src="/img/client-architecture-balancer-figure-08.png" >}}
Upstream gRPC Go had already migrated to a new balancer interface. For example, `clientv3-grpc1.7`'s underlying balancer implementation uses the new gRPC balancer and tries to be consistent with old balancer behaviors. While its compatibility has been maintained reasonably well, the etcd client still [suffered from subtle breaking changes](https://github.com/grpc/grpc-go/issues/1649). Furthermore, the gRPC maintainers recommend [not relying on the old balancer interface](https://github.com/grpc/grpc-go/issues/1942#issuecomment-375368665). In general, to get better support from upstream, it is best to stay in sync with the latest gRPC releases. And new features, such as retry policy, may not be backported to the gRPC 1.7 branch. Thus, both etcd server and client must migrate to the latest gRPC versions.
### clientv3-grpc1.14: Balancer Overview
`clientv3-grpc1.7` is so tightly coupled with the old gRPC interface that every single gRPC dependency upgrade broke client behavior. The majority of development and debugging effort was devoted to fixing those client behavior changes. As a result, its implementation became overly complicated, with bad assumptions about server connectivity.
The primary goal of `clientv3-grpc1.14` is to simplify balancer failover logic; rather than maintaining a list of unhealthy endpoints, which may be stale, simply round-robin to the next endpoint whenever the client gets disconnected from the current endpoint. It does not assume endpoint status, so no more complicated status tracking is needed (see *Figure 8* and above). Upgrading to `clientv3-grpc1.14` should be painless; all changes were internal and backward compatibility is preserved.
Internally, when given multiple endpoints, `clientv3-grpc1.14` creates multiple sub-connections (one sub-connection per endpoint), while `clientv3-grpc1.7` creates only one connection to a pinned endpoint (see *Figure 9*). For instance, in a 5-node cluster, the `clientv3-grpc1.14` balancer would require 5 TCP connections, while `clientv3-grpc1.7` only requires one. By preserving the pool of TCP connections, `clientv3-grpc1.14` may consume more resources but provides a more flexible load balancer with better failover performance. The default balancing policy is round robin but can easily be extended to support other policies (e.g. power of two, pick leader, etc.). `clientv3-grpc1.14` uses the gRPC resolver group and implements a balancer picker policy, in order to delegate complex balancing work to upstream gRPC. On the other hand, `clientv3-grpc1.7` manually handles each gRPC connection and balancer failover, which complicates the implementation. `clientv3-grpc1.14` implements retry in the gRPC interceptor chain, which automatically handles gRPC internal errors and enables more advanced retry policies like backoff, while `clientv3-grpc1.7` manually interprets gRPC errors for retries.
{{< figure src="/img/client-architecture-balancer-figure-09.png" >}}
### clientv3-grpc1.14: Balancer Limitation
Improvements can be made by caching the status of each endpoint. For instance, the balancer can ping each server in advance to maintain a list of healthy candidates, and use this information when doing round-robin. Or, when disconnected, the balancer can prioritize healthy endpoints. This may complicate the balancer implementation, so it can be addressed in later versions.
Client-side keepalive ping still does not reason about network partitions. A streaming request may get stuck with a partitioned node. An advanced health-checking service needs to be implemented to understand cluster membership (see [issue #8673](https://github.com/etcd-io/etcd/issues/8673) for more detail).
Currently, retry logic is handled manually as an interceptor. This may be simplified via [official gRPC retries](https://github.com/grpc/proposal/blob/master/A6-client-retries.md).

View File

@ -0,0 +1,157 @@
---
title: Client feature matrix
---
## Features
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
Automatic retry | Yes | .
Retry backoff | Yes | .
Automatic failover | Yes | .
Load balancer | Round-Robin | .
`WithRequireLeader(context.Context)` | Yes | .
`TLS` | Yes | Yes
`SetEndpoints` | Yes | .
`Sync` endpoints | Yes | .
`AutoSyncInterval` | Yes | .
`KeepAlive` ping | Yes | .
`MaxCallSendMsgSize` | Yes | .
`MaxCallRecvMsgSize` | Yes | .
`RejectOldCluster` | Yes | .
## [KV](https://godoc.org/go.etcd.io/etcd/clientv3#KV)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`Put` | Yes | .
`Get` | Yes | .
`Delete` | Yes | .
`Compact` | Yes | .
`Do(Op)` | Yes | .
`Txn` | Yes | .
## [Lease](https://godoc.org/go.etcd.io/etcd/clientv3#Lease)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`Grant` | Yes | .
`Revoke` | Yes | .
`TimeToLive` | Yes | .
`Leases` | Yes | .
`KeepAlive` | Yes | .
`KeepAliveOnce` | Yes | .
## [Watcher](https://godoc.org/go.etcd.io/etcd/clientv3#Watcher)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`Watch` | Yes | Yes
`RequestProgress` | Yes | .
## [Cluster](https://godoc.org/go.etcd.io/etcd/clientv3#Cluster)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`MemberList` | Yes | Yes
`MemberAdd` | Yes | Yes
`MemberRemove` | Yes | Yes
`MemberUpdate` | Yes | Yes
## [Maintenance](https://godoc.org/go.etcd.io/etcd/clientv3#Maintenance)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`AlarmList` | Yes | Yes
`AlarmDisarm` | Yes | .
`Defragment` | Yes | .
`Status` | Yes | .
`HashKV` | Yes | .
`Snapshot` | Yes | .
`MoveLeader` | Yes | .
## [Auth](https://godoc.org/go.etcd.io/etcd/clientv3#Auth)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`AuthEnable` | Yes | .
`AuthDisable` | Yes | .
`UserAdd` | Yes | .
`UserDelete` | Yes | .
`UserChangePassword` | Yes | .
`UserGrantRole` | Yes | .
`UserGet` | Yes | .
`UserList` | Yes | .
`UserRevokeRole` | Yes | .
`RoleAdd` | Yes | .
`RoleGrantPermission` | Yes | .
`RoleGet` | Yes | .
`RoleList` | Yes | .
`RoleRevokePermission` | Yes | .
`RoleDelete` | Yes | .
## [clientv3util](https://godoc.org/go.etcd.io/etcd/clientv3/clientv3util)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`KeyExists` | Yes | No
`KeyMissing` | Yes | No
## [Concurrency](https://godoc.org/go.etcd.io/etcd/clientv3/concurrency)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`Session` | Yes | No
`NewMutex(Session, prefix)` | Yes | No
`NewElection(Session, prefix)` | Yes | No
`NewLocker(Session, prefix)` | Yes | No
`STM Isolation SerializableSnapshot` | Yes | No
`STM Isolation Serializable` | Yes | No
`STM Isolation RepeatableReads` | Yes | No
`STM Isolation ReadCommitted` | Yes | No
`STM Get` | Yes | No
`STM Put` | Yes | No
`STM Rev` | Yes | No
`STM Del` | Yes | No
## [Leasing](https://godoc.org/go.etcd.io/etcd/clientv3/leasing)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`NewKV(Client, prefix)` | Yes | No
## [Mirror](https://godoc.org/go.etcd.io/etcd/clientv3/mirror)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`SyncBase` | Yes | No
`SyncUpdates` | Yes | No
## [Namespace](https://godoc.org/go.etcd.io/etcd/clientv3/namespace)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`KV` | Yes | No
`Lease` | Yes | No
`Watcher` | Yes | No
## [Naming](https://godoc.org/go.etcd.io/etcd/clientv3/naming)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`GRPCResolver` | Yes | No
## [Ordering](https://godoc.org/go.etcd.io/etcd/clientv3/ordering)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`KV` | Yes | No
## [Snapshot](https://godoc.org/go.etcd.io/etcd/clientv3/snapshot)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`Save` | Yes | No
`Status` | Yes | No
`Restore` | Yes | No

View File

@ -1,20 +1,22 @@
# Data model
---
title: Data model
---
etcd is designed to reliably store infrequently updated data and provide reliable watch queries. etcd exposes previous versions of key-value pairs to support inexpensive snapshots and watch history events (“time travel queries”). A persistent, multi-version, concurrency-control data model is a good fit for these use cases.
etcd stores data in a multiversion [persistent][persistent-ds] key-value store. The persistent key-value store preserves the previous version of a key-value pair when its value is superseded with new data. The key-value store is effectively immutable; its operations do not update the structure in-place, but instead always generates a new updated structure. All past versions of keys are still accessible and watchable after modification. To prevent the data store from growing indefinitely over time from maintaining old versions, the store may be compacted to shed the oldest versions of superseded data.
etcd stores data in a multiversion [persistent][persistent-ds] key-value store. The persistent key-value store preserves the previous version of a key-value pair when its value is superseded with new data. The key-value store is effectively immutable; its operations do not update the structure in-place, but instead always generate a new updated structure. All past versions of keys are still accessible and watchable after modification. To prevent the data store from growing indefinitely over time and from maintaining old versions, the store may be compacted to shed the oldest versions of superseded data.
### Logical view
The store's logical view is a flat binary key space. The key space has a lexically sorted index on byte string keys so range queries are inexpensive.
The key space maintains multiple revisions. Each atomic mutative operation (e.g., a transaction operation may contain multiple operations) creates a new revision on the key space. All data held by previous revisions remains unchanged. Old versions of key can still be accessed through previous revisions. Likewise, revisions are indexed as well; ranging over revisions with watchers is efficient. If the store is compacted to recover space, revisions before the compact revision will be removed.
The key space maintains multiple **revisions**. Each atomic mutative operation (e.g., a transaction operation may contain multiple operations) creates a new revision on the key space. All data held by previous revisions remains unchanged. Old versions of keys can still be accessed through previous revisions. Likewise, revisions are indexed as well; ranging over revisions with watchers is efficient. If the store is compacted to save space, revisions before the compact revision will be removed. Revisions are monotonically increasing over the lifetime of a cluster.
A key's lifetime spans a generation. Each key may have one or multiple generations. Creating a key increments the generation of that key, starting at 1 if the key never existed. Deleting a key generates a key tombstone, concluding the key's current generation. Each modification of a key creates a new version of the key. Once a compaction happens, any generation ended before the given revision will be removed and values set before the compaction revision except the latest one will be removed.
A key's life spans a generation, from creation to deletion. Each key may have one or multiple generations. Creating a key increments the **version** of that key, starting at 1 if the key does not exist at the current revision. Deleting a key generates a key tombstone, concluding the key's current generation by resetting its version to 0. Each modification of a key increments its version, so versions are monotonically increasing within a key's generation. Once a compaction happens, any generation ended before the compaction revision will be removed, and values set before the compaction revision, except the latest one, will be removed.
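As a concrete illustration (a sketch; the key name is arbitrary and `N` stands for whatever revision the first write lands on, so treat the comments as assumptions rather than verbatim output):
```
$ etcdctl put foo bar      # creates foo at some revision N; foo's version is 1
$ etcdctl put foo baz      # new revision N+1; foo's version is 2
$ etcdctl get foo --rev=N  # time travel: reads the older value "bar"
$ etcdctl del foo          # writes a tombstone, ending foo's current generation
$ etcdctl put foo qux      # starts a new generation; foo's version is 1 again
```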
### Physical view
etcd stores the physical data as key-value pairs in a persistent [b+tree][b+tree]. Each revision of the store's state only contains the delta from its previous revision to be efficient. A single revision may correspond to multiple keys in the tree.
etcd stores the physical data as key-value pairs in a persistent [b+tree][b+tree]. Each revision of the store's state only contains the delta from its previous revision to be efficient. A single revision may correspond to multiple keys in the tree.
The key of a key-value pair is a 3-tuple (major, sub, type). Major is the store revision holding the key. Sub differentiates among keys within the same revision. Type is an optional suffix for special values (e.g., `t` if the value contains a tombstone). The value of the key-value pair contains the modification from the previous revision, thus one delta from the previous revision. The b+tree is ordered by key in lexical byte-order. Ranged lookups over revision deltas are fast; this enables quickly finding modifications from one specific revision to another. Compaction removes out-of-date key-value pairs.
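For instance (an illustrative sketch of the scheme described above, not actual storage output), a transaction applied at store revision 5 that updates two keys and deletes a third would yield b+tree keys along the lines of:
```
(5, 0)    -> delta for the first updated key
(5, 1)    -> delta for the second updated key
(5, 2, t) -> tombstone concluding the deleted key's generation
```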

View File

@ -1,4 +1,6 @@
# Glossary
---
title: Glossary
---
This document defines the various terms used in etcd documentation, command line and source code.

View File

@ -0,0 +1,106 @@
---
title: Learner
---
## Background
Membership reconfiguration has been one of the biggest operational challenges. Let's review the common challenges.
A newly joined etcd member starts with no data, thus demanding more updates from the leader until it catches up with the leader's logs. Then the leader's network is more likely to be overloaded, blocking or dropping leader heartbeats to followers. In such a case, a follower may election-timeout and start a new leader election. That is, a cluster with a new member is more vulnerable to leader elections. Both leader election and the subsequent update propagation to the new member are prone to causing periods of cluster unavailability (see **Figure 1** below).
{{< figure src="/img/server-learner-figure-01.png" >}}
What if a network partition happens? It depends on which partition the leader is in. If the leader still maintains the active quorum, the cluster continues to operate (see **Figure 2**).
{{< figure src="/img/server-learner-figure-02.png" >}}
What if the leader becomes isolated from the rest of the cluster? The leader monitors the progress of each follower. When the leader loses connectivity from the quorum, it reverts back to follower, which affects cluster availability (see **Figure 3**).
{{< figure src="/img/server-learner-figure-03.png" >}}
When a new node is added to a 3-node cluster, the cluster size becomes 4 and the quorum size becomes 3. What if a new node joins the cluster, and then a network partition happens? It depends on which partition the new member lands in after the partition. If the new node happens to be located in the same partition as the leader, the leader still maintains the active quorum of 3. No leader election happens, and cluster availability is unaffected (see **Figure 4**).
{{< figure src="/img/server-learner-figure-04.png" >}}
If the cluster is 2-and-2 partitioned, then neither partition maintains the quorum of 3. In this case, a leader election happens (see **Figure 5**).
{{< figure src="/img/server-learner-figure-05.png" >}}
What if a network partition happens first, and then a new member gets added? A partitioned 3-node cluster already has one disconnected follower. When a new member is added, the quorum changes from 2 to 3. Now, this cluster has only 2 active nodes out of 4, thus losing quorum and starting a new leader election (see **Figure 6**).
{{< figure src="/img/server-learner-figure-06.png" >}}
Since the member add operation can change the quorum size, it is always recommended to “member remove” first when replacing an unhealthy node.
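(For an n-member cluster, quorum is ⌊n/2⌋ + 1: a 3-member cluster has a quorum of 2 and tolerates 1 failure, while a 4-member cluster has a quorum of 3 and still tolerates only 1 failure.)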
Adding a new member to a 1-node cluster changes the quorum size to 2, immediately causing a leader election when the previous leader finds out the quorum is not active. This is because the “member add” operation is a 2-step process where the user needs to apply the “member add” command first, and then start the new node process (see **Figure 7**).
{{< figure src="/img/server-learner-figure-07.png" >}}
An even worse case is when an added member is misconfigured. Membership reconfiguration is a two-step process: “etcdctl member add” and starting an etcd server process with the given peer URL. That is, the “member add” command is applied regardless of the URL, even when the URL value is invalid. If the first step is applied with invalid URLs, the second step cannot even start the new etcd. Once the cluster loses quorum, there is no way to revert the membership change (see **Figure 8**).
{{< figure src="/img/server-learner-figure-08.png" >}}
The same applies to a multi-node cluster. For example, the cluster has two members down (one failed, the other misconfigured) and two members up, but it now requires at least 3 votes to change the cluster membership (see **Figure 9**).
{{< figure src="/img/server-learner-figure-09.png" >}}
As seen above, a simple misconfiguration can leave the whole cluster in an inoperative state. In such a case, an operator must manually recreate the cluster with the `etcd --force-new-cluster` flag. As etcd has become a mission-critical service for [Kubernetes](https://kubernetes.io), even the slightest outage may have significant impact on users. What can we do better to make such operations easier? Among other things, leader election is most critical to cluster availability: Can we make membership reconfiguration less disruptive by not changing the size of quorum? Can a new node be idle, requesting only the minimum updates from the leader, until it catches up? Can membership misconfiguration always be reversible and handled in a more secure way (a wrong member add command should never break the cluster)? Should a user worry about network topology when adding a new member? Can the member add API work regardless of the location of nodes and ongoing network partitions?
## Raft learner
In order to mitigate the availability gaps described in the previous section, [Raft §4.2.1](https://ramcloud.stanford.edu/~ongaro/thesis.pdf) introduces a new node state “Learner,” which joins the cluster as a **non-voting member** until it catches up to the leader's logs.
## Features in v3.4
An operator should do the minimum amount of work possible to add a new learner node. Use the `member add --learner` command to add a new learner, which joins the cluster as a non-voting member but still receives all data from the leader (see **Figure 10**).
{{< figure src="/img/server-learner-figure-10.png" >}}
When a learner has caught up with the leader's progress, the learner can be promoted to a voting member using the `member promote` API, after which it counts towards the quorum (see **Figure 11**).
{{< figure src="/img/server-learner-figure-11.png" >}}
The etcd server validates the promote request to ensure its operational safety. Only after its log has caught up to the leader's can a learner be promoted to a voting member (see **Figure 12**).
{{< figure src="/img/server-learner-figure-12.png" >}}
A learner serves only as a standby node until promoted: leadership cannot be transferred to a learner, and a learner rejects client reads and writes (the client balancer should not route requests to learners). This means a learner does not need to issue Read Index requests to the leader. These limitations simplify the initial learner implementation in the v3.4 release (see **Figure 13**).
{{< figure src="/img/server-learner-figure-13.png" >}}
In addition, etcd limits the total number of learners that a cluster can have, to avoid overloading the leader with log replication. A learner never promotes itself. While etcd provides learner status information and safety checks, the cluster operator must make the final decision whether or not to promote a learner.
## Features in v3.5
**Make learner state only and default** --- Defaulting a new member's state to learner will greatly improve membership reconfiguration safety, because a learner does not change the size of the quorum. Misconfiguration will always be reversible without losing the quorum.
**Make voting-member promotion fully automatic** --- Once a learner catches up to the leader's logs, a cluster can automatically promote the learner. etcd requires certain thresholds to be defined by the user, and once the requirements are satisfied, the learner promotes itself to a voting member. From a user's perspective, the “member add” command would work the same way as today but with greater safety provided by the learner feature.
**Make learner standby failover node** --- A learner joins as a standby node, and gets automatically promoted when the cluster availability is affected.
**Make learner read-only** --- A learner can serve as a read-only node that never gets promoted. In a weak consistency mode, the learner only receives data from the leader and never processes writes. Serving reads locally without consensus overhead would greatly decrease the workload on the leader but may serve stale data. In a strong consistency mode, the learner requests a read index from the leader to serve the latest data, but still rejects writes.
## Learner vs. mirror maker
etcd implements “mirror maker” using the watch API to continuously relay key creates and updates to a separate cluster. Mirroring usually has low latency overhead once it completes the initial synchronization. Learner and mirroring overlap in that both can be used to replicate existing data for read-only access. However, mirroring does not guarantee linearizability. During network disconnects, previous key-values might have been discarded, and clients are expected to verify watch responses for correct ordering. Thus, there is no ordering guarantee in mirroring. Use a mirror for minimum latency (e.g., across data centers) at the cost of consistency. Use a learner to retain all historical data and its ordering.
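For reference, a mirror can be started with `etcdctl make-mirror` (a sketch; the endpoints are illustrative):
```
# continuously relay key creates and updates from the local cluster
# to a destination cluster
$ etcdctl make-mirror --endpoints=http://localhost:2379 http://destination.example.com:2379
```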
## Appendix: learner implementation in v3.4
### Expose "Learner" node type to "MemberAdd" API
The etcd client adds a flag to the “MemberAdd” API for learner nodes, and the etcd server handler applies a membership change entry with the `pb.ConfChangeAddLearnerNode` type. Once the command has been applied, a server joins the cluster with the `etcd --initial-cluster-state=existing` flag. This learner node can neither vote nor count towards quorum.
The etcd server must not transfer leadership to a learner, since it may still lag behind and does not count towards quorum. The etcd server limits the number of learners that a cluster can have to one: the more learners there are, the more data the leader has to propagate. Clients may talk to a learner node, but a learner rejects all requests other than serializable reads and member status API calls. This is for simplicity of the initial implementation. In the future, the learner can be extended as a read-only server that continuously mirrors cluster data. The client balancer must provide a helper function to exclude learner node endpoints; otherwise, requests sent to a learner may fail. Client sync-member calls should factor in the learner node type, as should client endpoint update calls.
`MemberList` and `MemberStatus` responses should indicate which node is a learner.
### Add "MemberPromote" API
Internally in Raft, a second `MemberAdd` call to a learner node promotes it to a voting member. The leader maintains the progress of each follower and learner. If a learner has not completed its snapshot message, its promote request is rejected. A promote request is accepted if and only if: the learner node is in a healthy state, and the learner is in sync with the leader or the delta is within the threshold (e.g., the number of entries to replicate to the learner is less than 1/10 of the snapshot count, meaning it is unlikely that, even after promotion, the leader would need to send a snapshot to the learner). All this logic is hard-coded in the `etcdserver` package and is not configurable.
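For example, with the default `--snapshot-count` of 100,000, a learner lagging by fewer than roughly 10,000 entries would satisfy the 1/10 threshold above (these numbers are illustrative; the default may vary by version).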
## Reference
* Original GitHub issue ([issue #9161](https://github.com/etcd-io/etcd/issues/9161))
* Use case ([issue #3715](https://github.com/etcd-io/etcd/issues/3715))
* Use case ([issue #8888](https://github.com/etcd-io/etcd/issues/8888))
* Use case ([issue #10114](https://github.com/etcd-io/etcd/issues/10114))

View File

@ -1,17 +1,19 @@
# Why etcd
---
title: etcd versus other key-value stores
---
The name "etcd" originated from two ideas, the unix "/etc" folder and "d"istibuted systems. The "/etc" folder is a place to store configuration data for a single system whereas etcd stores configuration information for large scale distributed systems. Hence, a "d"istributed "/etc" is "etcd".
The name "etcd" originated from two ideas, the unix "/etc" folder and "d"istributed systems. The "/etc" folder is a place to store configuration data for a single system whereas etcd stores configuration information for large scale distributed systems. Hence, a "d"istributed "/etc" is "etcd".
etcd stores metadata in a consistent and fault-tolerant way. Distributed systems use etcd as a consistent key-value store for configuration management, service discovery, and coordinating distributed work. Common distributed patterns using etcd include [leader election][etcd-etcdctl-elect], [distributed locks][etcd-etcdctl-lock], and monitoring machine liveness.
etcd is designed as a general substrate for large scale distributed systems. These are systems that will never tolerate split-brain operation and are willing to sacrifice availability to achieve this end. etcd stores metadata in a consistent and fault-tolerant way. An etcd cluster is meant to provide key-value storage with best of class stability, reliability, scalability and performance.
Distributed systems use etcd as a consistent key-value store for configuration management, service discovery, and coordinating distributed work. Many [organizations][production-users] use etcd to implement production systems such as container schedulers, service discovery services, and distributed data storage. Common distributed patterns using etcd include [leader election][etcd-etcdctl-elect], [distributed locks][etcd-etcdctl-lock], and monitoring machine liveness.
## Use cases
- Container Linux by CoreOS: Application running on [Container Linux][container-linux] gets automatic, zero-downtime Linux kernel updates. Container Linux uses [locksmith] to coordinate updates. locksmith implements a distributed semaphore over etcd to ensure only a subset of a cluster is rebooting at any given time.
- Container Linux by CoreOS: Applications running on [Container Linux][container-linux] get automatic, zero-downtime Linux kernel updates. Container Linux uses [locksmith] to coordinate updates. Locksmith implements a distributed semaphore over etcd to ensure only a subset of a cluster is rebooting at any given time.
- [Kubernetes][kubernetes] stores configuration data into etcd for service discovery and cluster management; etcd's consistency is crucial for correctly scheduling and operating services. The Kubernetes API server persists cluster state into etcd. It uses etcd's watch API to monitor the cluster and roll out critical configuration changes.
## etcd versus other key-value stores
When deciding whether to use etcd as a key-value store, it's worth keeping in mind etcd's main goal. Namely, etcd is designed as a general substrate for large scale distributed systems. These are systems that will never tolerate split-brain operation and are willing to sacrifice availability to achieve this end. An etcd cluster is meant to provide consistent key-value storage with best of class stability, reliability, scalability and performance. The upshot of this focus is that many [organizations][production-users] already use etcd to implement production systems such as container schedulers, service discovery services, distributed data storage, and more.
## Comparison chart
Perhaps etcd already seems like a good fit, but as with all technological decisions, proceed with caution. Please note this documentation is written by the etcd team. Although the ideal is a disinterested comparison of technology and features, the authors' expertise and biases obviously favor etcd. Use only as directed.
@ -47,7 +49,7 @@ When considering features, support, and stability, new applications planning to
### Consul
Consul bills itself as an end-to-end service discovery framework. To wit, it includes services such as health checking, failure detection, and DNS. Incidentally, Consul also exposes a key value store with mediocre performance and an intricate API. As it stands in Consul 0.7, the storage system does not scales well; systems requiring millions of keys will suffer from high latencies and memory pressure. The key value API is missing, most notably, multi-version keys, conditional transactions, and reliable streaming watches.
Consul is an end-to-end service discovery framework. It provides built-in health checking, failure detection, and DNS services. In addition, Consul exposes a key value store with RESTful HTTP APIs. [As it stands in Consul 1.0][dbtester-comparison-results], the storage system does not scale as well as other systems like etcd or Zookeeper in key-value operations; systems requiring millions of keys will suffer from high latencies and memory pressure. The key value API is missing, most notably, multi-version keys, conditional transactions, and reliable streaming watches.
etcd and Consul solve different problems. If looking for a distributed consistent key value store, etcd is a better choice than Consul. If looking for end-to-end cluster service discovery, etcd will not have enough features; choose Kubernetes, Consul, or SmartStack.
@ -76,18 +78,18 @@ In theory, its possible to build these primitives atop any storage systems pr
For distributed coordination, choosing etcd can help prevent operational headaches and save engineering effort.
[production-users]: ../production-users.md
[grpc]: http://www.grpc.io
[grpc]: https://www.grpc.io
[consul-bulletproof]: https://www.consul.io/docs/internals/sessions.html
[curator]: http://curator.apache.org/
[cockroach]: https://github.com/cockroachdb/cockroach
[spanner]: https://cloud.google.com/spanner/
[tidb]: https://github.com/pingcap/tidb
[etcd-v3lock]: https://godoc.org/github.com/coreos/etcd/etcdserver/api/v3lock/v3lockpb
[etcd-v3election]: https://godoc.org/github.com/coreos/etcd/etcdserver/api/v3election/v3electionpb
[etcd-etcdctl-lock]: ../../etcdctl/README.md#lock-lockname
[etcd-v3lock]: https://godoc.org/github.com/etcd-io/etcd/etcdserver/api/v3lock/v3lockpb
[etcd-v3election]: https://godoc.org/github.com/etcd-io/etcd/etcdserver/api/v3election/v3electionpb
[etcd-etcdctl-lock]: ../../etcdctl/README.md#lock-lockname-command-arg1-arg2-
[etcd-etcdctl-elect]: ../../etcdctl/README.md#elect-options-election-name-proposal
[etcd-mvcc]: data_model.md
[etcd-recipe]: https://godoc.org/github.com/coreos/etcd/contrib/recipes
[etcd-recipe]: https://godoc.org/github.com/etcd-io/etcd/contrib/recipes
[consul-lock]: https://www.consul.io/docs/commands/lock.html
[newsql-leader]: http://dl.acm.org/citation.cfm?id=2960999
[etcd-reconfig]: ../op-guide/runtime-configuration.md
@ -107,10 +109,10 @@ For distributed coordination, choosing etcd can help prevent operational headach
[etcd-rbac]: ../op-guide/authentication.md#working-with-roles
[zk-acl]: https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#sc_ZooKeeperAccessControl
[consul-acl]: https://www.consul.io/docs/internals/acl.html
[cockroach-grant]: https://www.cockroachlabs.com/docs/grant.html
[cockroach-grant]: https://www.cockroachlabs.com/docs/stable/grant.html
[spanner-roles]: https://cloud.google.com/spanner/docs/iam#roles
[zk-bindings]: https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#ch_bindings
[container-linux]: https://coreos.com/why
[locksmith]: https://github.com/coreos/locksmith
[kubernetes]: http://kubernetes.io/docs/whatisk8s
[kubernetes]: https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/
[dbtester-comparison-results]: https://github.com/coreos/dbtester/tree/master/test-results/2018Q1-02-etcd-zookeeper-consul

View File

@ -1,4 +1,7 @@
# Metrics
---
title: Metrics
weight: 3
---
etcd uses [Prometheus][prometheus] for metrics reporting. The metrics can be used for real-time monitoring and debugging. etcd does not persist its metrics; if a member restarts, the metrics will be reset.
@ -99,7 +102,7 @@ Abnormally high snapshot duration (`snapshot_save_total_duration_seconds`) indic
## Prometheus supplied metrics
The Prometheus client library provides a number of metrics under the `go` and `process` namespaces. There are a few that are particlarly interesting.
The Prometheus client library provides a number of metrics under the `go` and `process` namespaces. There are a few that are particularly interesting.
| Name | Description | Type |
|-----------------------------------|--------------------------------------------|--------------|
@ -113,4 +116,4 @@ Heavy file descriptor (`process_open_fds`) usage (i.e., near the process's file
[prometheus-getting-started]: http://prometheus.io/docs/introduction/getting_started/
[prometheus-naming]: http://prometheus.io/docs/practices/naming/
[v2-http-metrics]: v2/metrics.md#http-requests
[go-grpc-prometheus]: https://github.com/grpc-ecosystem/go-grpc-prometheus
[go-grpc-prometheus]: https://github.com/grpc-ecosystem/go-grpc-prometheus

View File

@ -0,0 +1,3 @@
---
title: Operations guide
---

View File

@ -1,8 +1,10 @@
# Authentication Guide
---
title: Role-based access control
---
## Overview
Authentication was added in etcd 2.1. The etcd v3 API slightly modified the authentication feature's API and user interface to better fit the new data model. This guide is intended to help users set up basic authentication in etcd v3.
Authentication was added in etcd 2.1. The etcd v3 API slightly modified the authentication feature's API and user interface to better fit the new data model. This guide is intended to help users set up basic authentication and role-based access control in etcd v3.
## Special users and roles
@ -32,7 +34,7 @@ Creating a user is as easy as
$ etcdctl user add myusername
```
Creating a new user will prompt for a new password. The password can be supplied from standard input when an option `--interactive=false` is given.
Creating a new user will prompt for a new password. The password can be supplied from standard input when an option `--interactive=false` is given. `--new-user-password` can also be used for supplying the password.
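For example (a sketch; the username and password are illustrative):
```
# supply the password non-interactively
$ etcdctl user add myusername --new-user-password="mypassword"
# or read it from standard input
$ echo "mypassword" | etcdctl user add myusername --interactive=false
```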
Roles can be granted and revoked for a user with:
@ -122,12 +124,12 @@ $ etcdctl role remove myrolename
## Enabling authentication
The minimal steps to enabling auth are as follows. The administrator can set up users and roles before or after enabling authentication, as a matter of preference.
The minimal steps to enabling auth are as follows. The administrator can set up users and roles before or after enabling authentication, as a matter of preference.
Make sure the root user is created:
```
$ etcdctl user add root
$ etcdctl user add root
Password of root:
```
@ -157,8 +159,18 @@ The password can be taken from a prompt:
$ etcdctl --user user get foo
```
The password can also be taken from a command line flag `--password`:
```
$ etcdctl --user user --password password get foo
```
Otherwise, all `etcdctl` commands remain the same. Users and roles can still be created and modified, but require authentication by a user with the root role.
## Using TLS Common Name
As of version v3.2, if an etcd server is launched with the option `--client-cert-auth=true`, the field of Common Name (CN) in the client's TLS cert will be used as an etcd user. In this case, the common name authenticates the user and the client does not need a password. Note that if both 1. `--client-cert-auth=true` is passed and a CN is provided by the client, and 2. a username and password are provided by the client, the username and password based authentication is prioritized. Note that this feature cannot be used with gRPC-proxy and gRPC-gateway. This is because gRPC-proxy terminates TLS from its client, so all the clients share a cert of the proxy. gRPC-gateway uses a TLS connection internally for transforming HTTP requests into gRPC requests, so it shares the same limitation. Therefore the clients cannot provide their CN to the server correctly. gRPC-proxy returns an error and stops if a given cert has a non-empty CN.
As of version v3.3, if an etcd server is launched with the option `--peer-cert-allowed-cn`, filtering of inter-peer connections by CN is enabled. Nodes can only join the etcd cluster if their CN matches the allowed one.
See [etcd security page](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/security.md) for more details.
If an etcd server is launched with the option `--client-cert-auth=true`, the field of Common Name (CN) in the client's TLS cert will be used as an etcd user. In this case, the common name authenticates the user and the client does not need a password.
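As a minimal sketch of CN-based authentication (certificate file names are illustrative; the client certificate's CN must match an existing etcd user):
```
# server: require client certs signed by the trusted CA
$ etcd --client-cert-auth=true \
  --trusted-ca-file=ca.crt --cert-file=server.crt --key-file=server.key
# client: no password needed; the cert's CN names the etcd user
$ etcdctl --cacert=ca.crt --cert=client.crt --key=client.key get foo
```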

View File

@ -1,4 +1,6 @@
# Clustering Guide
---
title: Clustering Guide
---
## Overview
@ -342,8 +344,8 @@ etcdserver: discovery token ignored since a cluster has already been initialized
### DNS discovery
DNS [SRV records][rfc-srv] can be used as a discovery mechanism.
The `-discovery-srv` flag can be used to set the DNS domain name where the discovery SRV records can be found.
The following DNS SRV records are looked up in the listed order:
The `--discovery-srv` flag can be used to set the DNS domain name where the discovery SRV records can be found.
Setting `--discovery-srv example.com` causes DNS SRV records to be looked up in the listed order:
* _etcd-server-ssl._tcp.example.com
* _etcd-server._tcp.example.com
@ -357,8 +359,21 @@ To help clients discover the etcd cluster, the following DNS SRV records are loo
If `_etcd-client-ssl._tcp.example.com` is found, clients will attempt to communicate with the etcd cluster over SSL/TLS.
If etcd is using TLS, the discovery SRV record (e.g. `example.com`) must be included in the SSL certificate DNS SAN along with the hostname, or clustering will fail with log messages like the following:
```
[...] rejected connection from "10.0.1.11:53162" (error "remote error: tls: bad certificate", ServerName "example.com")
```
If etcd is using TLS without a custom certificate authority, the discovery domain (e.g., example.com) must match the SRV record domain (e.g., infra1.example.com). This is to mitigate attacks that forge SRV records to point to a different domain; the domain would have a valid certificate under PKI but be controlled by an unknown third party.
The `--discovery-srv-name` flag additionally configures a suffix to the SRV name that is queried during discovery.
Use this flag to differentiate between multiple etcd clusters under the same domain.
For example, if `--discovery-srv=example.com` and `--discovery-srv-name=foo` are set, the following DNS SRV queries are made (see the sketch after this list):
* _etcd-server-ssl-foo._tcp.example.com
* _etcd-server-foo._tcp.example.com
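As a minimal sketch (host names and the cluster token are illustrative):
```
$ etcd --name infra0 \
  --discovery-srv example.com \
  --discovery-srv-name foo \
  --initial-advertise-peer-urls http://infra0.example.com:2380 \
  --initial-cluster-token etcd-cluster-foo \
  --advertise-client-urls http://infra0.example.com:2379 \
  --listen-client-urls http://0.0.0.0:2379 \
  --listen-peer-urls http://0.0.0.0:2380
```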
#### Create DNS SRV records
```
@ -384,7 +399,8 @@ infra2.example.com. 300 IN A 10.0.1.12
#### Bootstrap the etcd cluster using DNS
etcd cluster members can listen on domain names or IP address, the bootstrap process will resolve DNS A records.
etcd cluster members can advertise domain names or IP addresses; the bootstrap process will resolve DNS A records.
Since 3.2 (3.1 prints warnings) `--listen-peer-urls` and `--listen-client-urls` will reject domain names for network interface binding.
The resolved address in `--initial-advertise-peer-urls` *must match* one of the resolved addresses in the SRV targets. The etcd member reads the resolved address to find out if it belongs to the cluster defined in the SRV records.
@ -395,8 +411,8 @@ $ etcd --name infra0 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://infra0.example.com:2379 \
--listen-client-urls http://infra0.example.com:2379 \
--listen-peer-urls http://infra0.example.com:2380
--listen-client-urls http://0.0.0.0:2379 \
--listen-peer-urls http://0.0.0.0:2380
```
```
@ -406,8 +422,8 @@ $ etcd --name infra1 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://infra1.example.com:2379 \
--listen-client-urls http://infra1.example.com:2379 \
--listen-peer-urls http://infra1.example.com:2380
--listen-client-urls http://0.0.0.0:2379 \
--listen-peer-urls http://0.0.0.0:2380
```
```
@ -417,8 +433,8 @@ $ etcd --name infra2 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://infra2.example.com:2379 \
--listen-client-urls http://infra2.example.com:2379 \
--listen-peer-urls http://infra2.example.com:2380
--listen-client-urls http://0.0.0.0:2379 \
--listen-peer-urls http://0.0.0.0:2380
```
The cluster can also bootstrap using IP addresses instead of domain names:
@ -456,6 +472,8 @@ $ etcd --name infra2 \
--listen-peer-urls http://10.0.1.12:2380
```
Since v3.1.0 (except v3.2.9), when `etcd --discovery-srv=example.com` is configured with TLS, server will only authenticate peers/clients when the provided certs have root domain `example.com` as an entry in Subject Alternative Name (SAN) field. See [Notes for DNS SRV][security-guide-dns-srv].
### Gateway
etcd gateway is a simple TCP proxy that forwards network data to the etcd cluster. Please read [gateway guide][gateway] for more information.
@ -475,5 +493,6 @@ To setup an etcd cluster with proxies of v2 API, please read the [clustering
[proxy]: https://github.com/coreos/etcd/blob/release-2.3/Documentation/proxy.md
[clustering_etcd2]: https://github.com/coreos/etcd/blob/release-2.3/Documentation/clustering.md
[security-guide]: security.md
[security-guide-dns-srv]: security.md#notes-for-dns-srv
[tls-setup]: ../../hack/tls-setup
[gateway]: gateway.md

View File

@ -1,6 +1,13 @@
# Configuration flags
---
title: Configuration flags
---
etcd is configurable through command-line flags and environment variables. Options set on the command line take precedence over those from the environment.
etcd is configurable through a configuration file, various command-line flags, and environment variables.
A reusable configuration file is a YAML file made with the names and values of one or more command-line flags described below. In order to use this file, specify the file path as a value to the `--config-file` flag. The [sample configuration file][sample-config-file] can be used as a starting point to create a new configuration file as needed.
Options set on the command line take precedence over those from the environment. If a configuration file is provided, other command line flags and environment variables will be ignored.
For example, `etcd --config-file etcd.conf.yml.sample --data-dir /tmp` will ignore the `--data-dir` flag.
The format of environment variable for flag `--my-flag` is `ETCD_MY_FLAG`. It applies to all flags.
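As a minimal sketch (the path and values are illustrative; YAML keys mirror the flag names):
```
# write a small config file, then start etcd with it
$ cat > /tmp/etcd.conf.yml <<EOF
name: 'infra0'
data-dir: '/var/lib/etcd'
listen-client-urls: 'http://0.0.0.0:2379'
advertise-client-urls: 'http://10.0.0.1:2379'
EOF
$ etcd --config-file /tmp/etcd.conf.yml
# equivalently, a single flag such as --name can be set via its environment variable form
$ ETCD_NAME=infra0 etcd
```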
@ -42,14 +49,14 @@ To start etcd automatically using custom settings at startup in Linux, using a [
+ env variable: ETCD_ELECTION_TIMEOUT
### --listen-peer-urls
+ List of URLs to listen on for peer traffic. This flag tells the etcd to accept incoming requests from its peers on the specified scheme://IP:port combinations. Scheme can be either http or https.If 0.0.0.0 is specified as the IP, etcd listens to the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. The etcd will respond to requests from any of the listed addresses and ports.
+ List of URLs to listen on for peer traffic. This flag tells etcd to accept incoming requests from its peers on the specified scheme://IP:port combinations. The scheme can be http or https. Alternatively, use `unix://<file-path>` or `unixs://<file-path>` for unix sockets. If 0.0.0.0 is specified as the IP, etcd listens on the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. etcd will respond to requests from any of the listed addresses and ports.
+ default: "http://localhost:2380"
+ env variable: ETCD_LISTEN_PEER_URLS
+ example: "http://10.0.0.1:2380"
+ invalid example: "http://example.com:2380" (domain name is invalid for binding)
### --listen-client-urls
+ List of URLs to listen on for client traffic. This flag tells the etcd to accept incoming requests from the clients on the specified scheme://IP:port combinations. Scheme can be either http or https. If 0.0.0.0 is specified as the IP, etcd listens to the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. The etcd will respond to requests from any of the listed addresses and ports.
+ List of URLs to listen on for client traffic. This flag tells etcd to accept incoming requests from clients on the specified scheme://IP:port combinations. The scheme can be either http or https. Alternatively, use `unix://<file-path>` or `unixs://<file-path>` for unix sockets. If 0.0.0.0 is specified as the IP, etcd listens on the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. etcd will respond to requests from any of the listed addresses and ports.
+ default: "http://localhost:2379"
+ env variable: ETCD_LISTEN_CLIENT_URLS
+ example: "http://10.0.0.1:2379"
@ -69,12 +76,52 @@ To start etcd automatically using custom settings at startup in Linux, using a [
### --cors
+ Comma-separated white list of origins for CORS (cross-origin resource sharing).
+ default: none
+ default: ""
+ env variable: ETCD_CORS
### --quota-backend-bytes
+ Raise alarms when backend size exceeds the given quota (0 defaults to low space quota).
+ default: 0
+ env variable: ETCD_QUOTA_BACKEND_BYTES
### --backend-batch-limit
+ BackendBatchLimit is the maximum number of operations before committing the backend transaction.
+ default: 0
+ env variable: ETCD_BACKEND_BATCH_LIMIT
### --backend-batch-interval
+ BackendBatchInterval is the maximum time before committing the backend transaction.
+ default: 0
+ env variable: ETCD_BACKEND_BATCH_INTERVAL
### --max-txn-ops
+ Maximum number of operations permitted in a transaction.
+ default: 128
+ env variable: ETCD_MAX_TXN_OPS
### --max-request-bytes
+ Maximum client request size in bytes the server will accept.
+ default: 1572864
+ env variable: ETCD_MAX_REQUEST_BYTES
### --grpc-keepalive-min-time
+ Minimum duration interval that a client should wait before pinging the server.
+ default: 5s
+ env variable: ETCD_GRPC_KEEPALIVE_MIN_TIME
### --grpc-keepalive-interval
+ Frequency duration of server-to-client ping to check if a connection is alive (0 to disable).
+ default: 2h
+ env variable: ETCD_GRPC_KEEPALIVE_INTERVAL
### --grpc-keepalive-timeout
+ Additional duration of wait before closing a non-responsive connection (0 to disable).
+ default: 20s
+ env variable: ETCD_GRPC_KEEPALIVE_TIMEOUT
## Clustering flags
`--initial` prefix flags are used in bootstrapping ([static bootstrap][build-cluster], [discovery-service bootstrap][discovery] or [runtime reconfiguration][reconfig]) a new member, and ignored when restarting an existing member.
`--initial-advertise-peer-urls`, `--initial-cluster`, `--initial-cluster-state`, and `--initial-cluster-token` flags are used in bootstrapping ([static bootstrap][build-cluster], [discovery-service bootstrap][discovery] or [runtime reconfiguration][reconfig]) a new member, and ignored when restarting an existing member.
`--discovery` prefix flags need to be set when using [discovery service][discovery].
@ -112,14 +159,19 @@ To start etcd automatically using custom settings at startup in Linux, using a [
### --discovery
+ Discovery URL used to bootstrap the cluster.
+ default: none
+ default: ""
+ env variable: ETCD_DISCOVERY
### --discovery-srv
+ DNS srv domain used to bootstrap the cluster.
+ default: none
+ default: ""
+ env variable: ETCD_DISCOVERY_SRV
### --discovery-srv-name
+ Suffix to the DNS srv name queried when bootstrapping using DNS.
+ default: ""
+ env variable: ETCD_DISCOVERY_SRV_NAME
### --discovery-fallback
+ Expected behavior ("exit" or "proxy") when the discovery service fails. "proxy" supports v2 API only.
+ default: "proxy"
@ -127,12 +179,12 @@ To start etcd automatically using custom settings at startup in Linux, using a [
### --discovery-proxy
+ HTTP proxy to use for traffic to discovery service.
+ default: none
+ default: ""
+ env variable: ETCD_DISCOVERY_PROXY
### --strict-reconfig-check
+ Reject reconfiguration requests that would cause quorum loss.
+ default: false
+ default: true
+ env variable: ETCD_STRICT_RECONFIG_CHECK
### --auto-compaction-retention
@ -140,6 +192,10 @@ To start etcd automatically using custom settings at startup in Linux, using a [
+ default: 0
+ env variable: ETCD_AUTO_COMPACTION_RETENTION
### --auto-compaction-mode
+ Interpret 'auto-compaction-retention' as one of: 'periodic' or 'revision'. 'periodic' for duration based retention, defaulting to hours if no time unit is provided (e.g. '5m'); 'revision' for revision number based retention.
+ default: periodic
+ env variable: ETCD_AUTO_COMPACTION_MODE
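For example (a sketch; the retention values are illustrative):
```
# duration-based: keep roughly one hour of history
$ etcd --auto-compaction-mode=periodic --auto-compaction-retention=1h
# revision-based: keep the most recent 1000 revisions
$ etcd --auto-compaction-mode=revision --auto-compaction-retention=1000
```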
### --enable-v2
+ Accept etcd V2 client requests
@ -185,32 +241,38 @@ To start etcd automatically using custom settings at startup in Linux, using a [
The security flags help to [build a secure etcd cluster][security].
### --ca-file
### --ca-file
**DEPRECATED**
+ Path to the client server TLS CA file. `--ca-file ca.crt` could be replaced by `--trusted-ca-file ca.crt --client-cert-auth` and etcd will perform the same.
+ default: none
+ default: ""
+ env variable: ETCD_CA_FILE
### --cert-file
+ Path to the client server TLS cert file.
+ default: none
+ default: ""
+ env variable: ETCD_CERT_FILE
### --key-file
+ Path to the client server TLS key file.
+ default: none
+ default: ""
+ env variable: ETCD_KEY_FILE
### --client-cert-auth
+ Enable client cert authentication.
+ default: false
+ env variable: ETCD_CLIENT_CERT_AUTH
+ CN authentication is not supported by gRPC-gateway.
### --client-crl-file
+ Path to the client certificate revocation list file.
+ default: ""
+ env variable: ETCD_CLIENT_CRL_FILE
### --trusted-ca-file
+ Path to the client server TLS trusted CA key file.
+ default: none
+ Path to the client server TLS trusted CA cert file.
+ default: ""
+ env variable: ETCD_TRUSTED_CA_FILE
### --auto-tls
@ -218,22 +280,22 @@ The security flags help to [build a secure etcd cluster][security].
+ default: false
+ env variable: ETCD_AUTO_TLS
### --peer-ca-file
### --peer-ca-file
**DEPRECATED**
+ Path to the peer server TLS CA file. `--peer-ca-file ca.crt` could be replaced by `--peer-trusted-ca-file ca.crt --peer-client-cert-auth` and etcd will perform the same.
+ default: none
+ default: ""
+ env variable: ETCD_PEER_CA_FILE
### --peer-cert-file
+ Path to the peer server TLS cert file.
+ default: none
+ Path to the peer server TLS cert file. This is the cert for peer-to-peer traffic, used both for server and client.
+ default: ""
+ env variable: ETCD_PEER_CERT_FILE
### --peer-key-file
+ Path to the peer server TLS key file.
+ default: none
+ Path to the peer server TLS key file. This is the key for peer-to-peer traffic, used both for server and client.
+ default: ""
+ env variable: ETCD_PEER_KEY_FILE
### --peer-client-cert-auth
@ -241,9 +303,14 @@ The security flags help to [build a secure etcd cluster][security].
+ default: false
+ env variable: ETCD_PEER_CLIENT_CERT_AUTH
### --peer-crl-file
+ Path to the peer certificate revocation list file.
+ default: ""
+ env variable: ETCD_PEER_CRL_FILE
### --peer-trusted-ca-file
+ Path to the peer server TLS trusted CA file.
+ default: none
+ default: ""
+ env variable: ETCD_PEER_TRUSTED_CA_FILE
### --peer-auto-tls
@ -251,8 +318,32 @@ The security flags help to [build a secure etcd cluster][security].
+ default: false
+ env variable: ETCD_PEER_AUTO_TLS
### --peer-cert-allowed-cn
+ Allowed CommonName for inter peer authentication.
+ default: none
+ env variable: ETCD_PEER_CERT_ALLOWED_CN
### --cipher-suites
+ Comma-separated list of supported TLS cipher suites between server/client and peers.
+ default: ""
+ env variable: ETCD_CIPHER_SUITES
## Logging flags
### --logger
**Available from v3.4**
+ Specify 'zap' for structured logging or 'capnslog'.
+ default: capnslog
+ env variable: ETCD_LOGGER
### --log-outputs
+ Specify 'stdout' or 'stderr' to skip journald logging even when running under systemd, or a list of comma-separated output targets.
+ default: default
+ env variable: ETCD_LOG_OUTPUTS
+ 'default' uses 'stderr' config for v3.4 during the zap logger migration
### --debug
+ Drop the default log level to DEBUG for all subpackages.
+ default: false (INFO for all packages)
@ -260,10 +351,9 @@ The security flags help to [build a secure etcd cluster][security].
### --log-package-levels
+ Set individual etcd subpackages to specific log levels. An example being `etcdserver=WARNING,security=DEBUG`
+ default: none (INFO for all packages)
+ default: "" (INFO for all packages)
+ env variable: ETCD_LOG_PACKAGE_LEVELS
## Unsafe flags
Please be CAUTIOUS when using unsafe flags because they will break the guarantees given by the consensus protocol.
@ -271,7 +361,7 @@ For example, it may panic if other members in the cluster are still alive.
Follow the instructions when using these flags.
### --force-new-cluster
+ Force to create a new one-member cluster. It commits configuration changes forcing to remove all existing members in the cluster and add itself. It needs to be set to [restore a backup][restore].
+ Force to create a new one-member cluster. It commits configuration changes forcing to remove all existing members in the cluster and add itself, but is strongly discouraged. Please review the [disaster recovery][recovery] documentation for preferred v3 recovery procedures.
+ default: false
+ env variable: ETCD_FORCE_NEW_CLUSTER
@ -283,24 +373,52 @@ Follow the instructions when using these flags.
### --config-file
+ Load server configuration from a file.
+ default: none
+ default: ""
+ example: [sample configuration file][sample-config-file]
+ env variable: ETCD_CONFIG_FILE
## Profiling flags
### --enable-pprof
+ Enable runtime profiling data via HTTP server. Address is at client URL + "/debug/pprof/"
+ default: false
+ env variable: ETCD_ENABLE_PPROF
### --metrics
+ Set level of detail for exported metrics, specify 'extensive' to include histogram metrics.
+ default: basic
+ env variable: ETCD_METRICS
### --listen-metrics-urls
+ List of additional URLs to listen on that will respond to both the `/metrics` and `/health` endpoints
+ default: ""
+ env variable: ETCD_LISTEN_METRICS_URLS
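For example (a sketch; the port is illustrative):
```
# expose /metrics and /health on a dedicated port, then scrape it
$ etcd --listen-metrics-urls=http://0.0.0.0:2112
$ curl -L http://localhost:2112/metrics
```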
## Auth flags
### --auth-token
+ Specify a token type and token specific options, especially for JWT. Its format is "type,var1=val1,var2=val2,...". Possible type is 'simple' or 'jwt'. Possible variables are 'sign-method' for specifying a sign method of jwt (its possible values are 'ES256', 'ES384', 'ES512', 'HS256', 'HS384', 'HS512', 'RS256', 'RS384', 'RS512', 'PS256', 'PS384', or 'PS512'), 'pub-key' for specifying a path to a public key for verifying jwt, and 'priv-key' for specifying a path to a private key for signing jwt.
+ Example option of JWT: '--auth-token jwt,pub-key=app.rsa.pub,priv-key=app.rsa,sign-method=RS512'
+ Specify a token type and token specific options, especially for JWT. Its format is "type,var1=val1,var2=val2,...". Possible type is 'simple' or 'jwt'. Possible variables are 'sign-method' for specifying a sign method of jwt (its possible values are 'ES256', 'ES384', 'ES512', 'HS256', 'HS384', 'HS512', 'RS256', 'RS384', 'RS512', 'PS256', 'PS384', or 'PS512'), 'pub-key' for specifying a path to a public key for verifying jwt, 'priv-key' for specifying a path to a private key for signing jwt, and 'ttl' for specifying TTL of jwt tokens.
+ For asymmetric algorithms ('RS', 'PS', 'ES'), the public key is optional, as the private key contains enough information to both sign and verify tokens.
+ Example option of JWT: '--auth-token jwt,pub-key=app.rsa.pub,priv-key=app.rsa,sign-method=RS512,ttl=10m'
+ default: "simple"
+ env variable: ETCD_AUTH_TOKEN
### --bcrypt-cost
+ Specify the cost / strength of the bcrypt algorithm for hashing auth passwords. Valid values are between 4 and 31.
+ default: 10
+ env variable: (not supported)
## Experimental flags
### --experimental-backend-bbolt-freelist-type
+ The freelist type that the etcd backend (bboltdb) uses (array and map are the supported types).
+ default: array
+ env variable: ETCD_EXPERIMENTAL_BACKEND_BBOLT_FREELIST_TYPE
### --experimental-corrupt-check-time
+ Duration of time between cluster corruption check passes
+ default: 0s
+ env variable: ETCD_EXPERIMENTAL_CORRUPT_CHECK_TIME
[build-cluster]: clustering.md#static
[reconfig]: runtime-configuration.md
@ -311,3 +429,5 @@ Follow the instructions when using these flags.
[security]: security.md
[systemd-intro]: http://freedesktop.org/wiki/Software/systemd/
[tuning]: ../tuning.md#time-parameters
[sample-config-file]: ../../etcd.conf.yml.sample
[recovery]: recovery.md#disaster-recovery

View File

@ -1,4 +1,6 @@
# Run etcd clusters inside containers
---
title: Run etcd clusters inside containers
---
The following guide shows how to run etcd with rkt and Docker using the [static bootstrap process](clustering.md#static).
@ -17,14 +19,14 @@ export NODE1=192.168.1.21
Trust the CoreOS [App Signing Key](https://coreos.com/security/app-signing-key/).
```
sudo rkt trust --prefix coreos.com/etcd
sudo rkt trust --prefix quay.io/coreos/etcd
# gpg key fingerprint is: 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E
```
Run the `v3.1.2` version of etcd or specify another release version.
Run the `v3.2` version of etcd or specify another release version.
```
sudo rkt run --net=default:IP=${NODE1} coreos.com/etcd:v3.1.2 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380
sudo rkt run --net=default:IP=${NODE1} quay.io/coreos/etcd:v3.2 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380
```
List the cluster member.
@ -45,13 +47,13 @@ export NODE3=172.16.28.23
```
# node 1
sudo rkt run --net=default:IP=${NODE1} coreos.com/etcd:v3.1.2 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
sudo rkt run --net=default:IP=${NODE1} quay.io/coreos/etcd:v3.2 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
# node 2
sudo rkt run --net=default:IP=${NODE2} coreos.com/etcd:v3.1.2 -- -name=node2 -advertise-client-urls=http://${NODE2}:2379 -initial-advertise-peer-urls=http://${NODE2}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE2}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
sudo rkt run --net=default:IP=${NODE2} quay.io/coreos/etcd:v3.2 -- -name=node2 -advertise-client-urls=http://${NODE2}:2379 -initial-advertise-peer-urls=http://${NODE2}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE2}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
# node 3
sudo rkt run --net=default:IP=${NODE3} coreos.com/etcd:v3.1.2 -- -name=node3 -advertise-client-urls=http://${NODE3}:2379 -initial-advertise-peer-urls=http://${NODE3}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE3}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
sudo rkt run --net=default:IP=${NODE3} quay.io/coreos/etcd:v3.2 -- -name=node3 -advertise-client-urls=http://${NODE3}:2379 -initial-advertise-peer-urls=http://${NODE3}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE3}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
```
Verify the cluster is healthy and can be reached.
@ -76,18 +78,29 @@ Use the host IP address when configuring etcd:
export NODE1=192.168.1.21
```
Configure a Docker volume to store etcd data:
```
docker volume create --name etcd-data
export DATA_DIR="etcd-data"
```
Run the latest version of etcd:
```
REGISTRY=quay.io/coreos/etcd
# available from v3.2.5
REGISTRY=gcr.io/etcd-development/etcd
docker run \
-p 2379:2379 \
-p 2380:2380 \
--volume=${DATA_DIR}:/etcd-data \
--name etcd quay.io/coreos/etcd:latest \
--name etcd ${REGISTRY}:latest \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name node1 \
--initial-advertise-peer-urls http://${NODE1}:2380 --listen-peer-urls http://${NODE1}:2380 \
--advertise-client-urls http://${NODE1}:2379 --listen-client-urls http://${NODE1}:2379 \
 --initial-advertise-peer-urls http://${NODE1}:2380 --listen-peer-urls http://0.0.0.0:2380 \
 --advertise-client-urls http://${NODE1}:2379 --listen-client-urls http://0.0.0.0:2379 \
--initial-cluster node1=http://${NODE1}:2380
```
@ -100,6 +113,10 @@ etcdctl --endpoints=http://${NODE1}:2379 member list
### Running a 3 node etcd cluster
```
REGISTRY=quay.io/coreos/etcd
# available from v3.2.5
REGISTRY=gcr.io/etcd-development/etcd
# For each machine
ETCD_VERSION=latest
TOKEN=my-etcd-token
@ -120,11 +137,11 @@ docker run \
-p 2379:2379 \
-p 2380:2380 \
--volume=${DATA_DIR}:/etcd-data \
--name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
--name etcd ${REGISTRY}:${ETCD_VERSION} \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
 --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://0.0.0.0:2380 \
 --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://0.0.0.0:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
@ -135,11 +152,11 @@ docker run \
-p 2379:2379 \
-p 2380:2380 \
--volume=${DATA_DIR}:/etcd-data \
--name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
--name etcd ${REGISTRY}:${ETCD_VERSION} \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
 --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://0.0.0.0:2380 \
 --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://0.0.0.0:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
@ -150,11 +167,11 @@ docker run \
-p 2379:2379 \
-p 2380:2380 \
--volume=${DATA_DIR}:/etcd-data \
--name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
--name etcd ${REGISTRY}:${ETCD_VERSION} \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
 --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://0.0.0.0:2380 \
 --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://0.0.0.0:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
```
@ -174,21 +191,30 @@ To provision a 3 node etcd cluster on bare-metal, the examples in the [baremetal
The etcd release container does not include default root certificates. To use HTTPS with certificates trusted by a root authority (e.g., for discovery), mount a certificate directory into the etcd container:
```
REGISTRY=quay.io/coreos/etcd
# available from v3.2.5
REGISTRY=docker://gcr.io/etcd-development/etcd
rkt run \
--insecure-options=image \
--volume etcd-ssl-certs-bundle,kind=host,source=/etc/ssl/certs/ca-certificates.crt \
--mount volume=etcd-ssl-certs-bundle,target=/etc/ssl/certs/ca-certificates.crt \
quay.io/coreos/etcd:latest -- --name my-name \
${REGISTRY}:latest -- --name my-name \
--initial-advertise-peer-urls http://localhost:2380 --listen-peer-urls http://localhost:2380 \
--advertise-client-urls http://localhost:2379 --listen-client-urls http://localhost:2379 \
--discovery https://discovery.etcd.io/c11fbcdc16972e45253491a24fcf45e1
```
```
REGISTRY=quay.io/coreos/etcd
# available from v3.2.5
REGISTRY=gcr.io/etcd-development/etcd
docker run \
-p 2379:2379 \
-p 2380:2380 \
--volume=/etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt \
quay.io/coreos/etcd:latest \
${REGISTRY}:latest \
/usr/local/bin/etcd --name my-name \
--initial-advertise-peer-urls http://localhost:2380 --listen-peer-urls http://localhost:2380 \
--advertise-client-urls http://localhost:2379 --listen-client-urls http://localhost:2379 \

View File

@ -43,8 +43,8 @@ ANNOTATIONS {
# alert if more than 1% of gRPC method calls have failed within the last 5 minutes
ALERT HighNumberOfFailedGRPCRequests
IF sum by(grpc_method) (rate(etcd_grpc_requests_failed_total{job="etcd"}[5m]))
/ sum by(grpc_method) (rate(etcd_grpc_total{job="etcd"}[5m])) > 0.01
IF 100 * (sum by(grpc_method) (rate(etcd_grpc_requests_failed_total{job="etcd"}[5m]))
/ sum by(grpc_method) (rate(etcd_grpc_total{job="etcd"}[5m]))) > 1
FOR 10m
LABELS {
severity = "warning"
@ -56,8 +56,8 @@ ANNOTATIONS {
# alert if more than 5% of gRPC method calls have failed within the last 5 minutes
ALERT HighNumberOfFailedGRPCRequests
IF sum by(grpc_method) (rate(etcd_grpc_requests_failed_total{job="etcd"}[5m]))
/ sum by(grpc_method) (rate(etcd_grpc_total{job="etcd"}[5m])) > 0.05
IF 100 * (sum by(grpc_method) (rate(etcd_grpc_requests_failed_total{job="etcd"}[5m]))
/ sum by(grpc_method) (rate(etcd_grpc_total{job="etcd"}[5m]))) > 5
FOR 5m
LABELS {
severity = "critical"
@ -69,7 +69,7 @@ ANNOTATIONS {
# alert if the 99th percentile of gRPC method calls take more than 150ms
ALERT GRPCRequestsSlow
IF histogram_quantile(0.99, rate(etcd_grpc_unary_requests_duration_seconds_bucket[5m])) > 0.15
IF histogram_quantile(0.99, sum(rate(grpc_server_handling_seconds_bucket{job="etcd",grpc_type="unary"}[5m])) by (grpc_service, grpc_method, le)) > 0.15
FOR 10m
LABELS {
severity = "critical"
@ -79,47 +79,6 @@ ANNOTATIONS {
description = "on etcd instance {{ $labels.instance }} gRPC requests to {{ $labels.grpc_method }} are slow",
}
# HTTP requests alerts
# ====================
# alert if more than 1% of requests to an HTTP endpoint have failed within the last 5 minutes
ALERT HighNumberOfFailedHTTPRequests
IF sum by(method) (rate(etcd_http_failed_total{job="etcd"}[5m]))
/ sum by(method) (rate(etcd_http_received_total{job="etcd"}[5m])) > 0.01
FOR 10m
LABELS {
severity = "warning"
}
ANNOTATIONS {
summary = "a high number of HTTP requests are failing",
description = "{{ $value }}% of requests for {{ $labels.method }} failed on etcd instance {{ $labels.instance }}",
}
# alert if more than 5% of requests to an HTTP endpoint have failed within the last 5 minutes
ALERT HighNumberOfFailedHTTPRequests
IF sum by(method) (rate(etcd_http_failed_total{job="etcd"}[5m]))
/ sum by(method) (rate(etcd_http_received_total{job="etcd"}[5m])) > 0.05
FOR 5m
LABELS {
severity = "critical"
}
ANNOTATIONS {
summary = "a high number of HTTP requests are failing",
description = "{{ $value }}% of requests for {{ $labels.method }} failed on etcd instance {{ $labels.instance }}",
}
# alert if the 99th percentile of HTTP requests take more than 150ms
ALERT HTTPRequestsSlow
IF histogram_quantile(0.99, rate(etcd_http_successful_duration_seconds_bucket[5m])) > 0.15
FOR 10m
LABELS {
severity = "warning"
}
ANNOTATIONS {
summary = "slow HTTP requests",
description = "on etcd instance {{ $labels.instance }} HTTP requests to {{ $labels.method }} are slow",
}
# file descriptor alerts
# ======================
@ -154,7 +113,7 @@ ANNOTATIONS {
# alert if 99th percentile of round trips take 150ms
ALERT EtcdMemberCommunicationSlow
IF histogram_quantile(0.99, rate(etcd_network_member_round_trip_time_seconds_bucket[5m])) > 0.15
IF histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[5m])) > 0.15
FOR 10m
LABELS {
severity = "warning"

View File

@ -0,0 +1,134 @@
# these rules synced manually from https://github.com/etcd-io/etcd/blob/master/Documentation/etcd-mixin/mixin.libsonnet
groups:
- name: etcd
rules:
- alert: etcdInsufficientMembers
annotations:
message: 'etcd cluster "{{ $labels.job }}": insufficient members ({{ $value
}}).'
expr: |
sum(up{job=~".*etcd.*"} == bool 1) by (job) < ((count(up{job=~".*etcd.*"}) by (job) + 1) / 2)
for: 3m
labels:
severity: critical
- alert: etcdNoLeader
annotations:
message: 'etcd cluster "{{ $labels.job }}": member {{ $labels.instance }} has
no leader.'
expr: |
etcd_server_has_leader{job=~".*etcd.*"} == 0
for: 1m
labels:
severity: critical
- alert: etcdHighNumberOfLeaderChanges
annotations:
message: 'etcd cluster "{{ $labels.job }}": instance {{ $labels.instance }}
has seen {{ $value }} leader changes within the last hour.'
expr: |
rate(etcd_server_leader_changes_seen_total{job=~".*etcd.*"}[15m]) > 3
for: 15m
labels:
severity: warning
- alert: etcdHighNumberOfFailedGRPCRequests
annotations:
message: 'etcd cluster "{{ $labels.job }}": {{ $value }}% of requests for {{
$labels.grpc_method }} failed on etcd instance {{ $labels.instance }}.'
expr: |
100 * sum(rate(grpc_server_handled_total{job=~".*etcd.*", grpc_code!="OK"}[5m])) BY (job, instance, grpc_service, grpc_method)
/
sum(rate(grpc_server_handled_total{job=~".*etcd.*"}[5m])) BY (job, instance, grpc_service, grpc_method)
> 1
for: 10m
labels:
severity: warning
- alert: etcdHighNumberOfFailedGRPCRequests
annotations:
message: 'etcd cluster "{{ $labels.job }}": {{ $value }}% of requests for {{
$labels.grpc_method }} failed on etcd instance {{ $labels.instance }}.'
expr: |
100 * sum(rate(grpc_server_handled_total{job=~".*etcd.*", grpc_code!="OK"}[5m])) BY (job, instance, grpc_service, grpc_method)
/
sum(rate(grpc_server_handled_total{job=~".*etcd.*"}[5m])) BY (job, instance, grpc_service, grpc_method)
> 5
for: 5m
labels:
severity: critical
- alert: etcdGRPCRequestsSlow
annotations:
message: 'etcd cluster "{{ $labels.job }}": gRPC requests to {{ $labels.grpc_method
}} are taking {{ $value }}s on etcd instance {{ $labels.instance }}.'
expr: |
histogram_quantile(0.99, sum(rate(grpc_server_handling_seconds_bucket{job=~".*etcd.*", grpc_type="unary"}[5m])) by (job, instance, grpc_service, grpc_method, le))
> 0.15
for: 10m
labels:
severity: critical
- alert: etcdMemberCommunicationSlow
annotations:
message: 'etcd cluster "{{ $labels.job }}": member communication with {{ $labels.To
}} is taking {{ $value }}s on etcd instance {{ $labels.instance }}.'
expr: |
histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket{job=~".*etcd.*"}[5m]))
> 0.15
for: 10m
labels:
severity: warning
- alert: etcdHighNumberOfFailedProposals
annotations:
message: 'etcd cluster "{{ $labels.job }}": {{ $value }} proposal failures within
the last hour on etcd instance {{ $labels.instance }}.'
expr: |
rate(etcd_server_proposals_failed_total{job=~".*etcd.*"}[15m]) > 5
for: 15m
labels:
severity: warning
- alert: etcdHighFsyncDurations
annotations:
message: 'etcd cluster "{{ $labels.job }}": 99th percentile fsync durations are
{{ $value }}s on etcd instance {{ $labels.instance }}.'
expr: |
histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket{job=~".*etcd.*"}[5m]))
> 0.5
for: 10m
labels:
severity: warning
- alert: etcdHighCommitDurations
annotations:
message: 'etcd cluster "{{ $labels.job }}": 99th percentile commit durations are
{{ $value }}s on etcd instance {{ $labels.instance }}.'
expr: |
histogram_quantile(0.99, rate(etcd_disk_backend_commit_duration_seconds_bucket{job=~".*etcd.*"}[5m]))
> 0.25
for: 10m
labels:
severity: warning
- alert: etcdHighNumberOfFailedHTTPRequests
annotations:
message: '{{ $value }}% of requests for {{ $labels.method }} failed on etcd
instance {{ $labels.instance }}.'
expr: |
sum(rate(etcd_http_failed_total{job=~".*etcd.*", code!="404"}[5m])) BY (method) / sum(rate(etcd_http_received_total{job=~".*etcd.*"}[5m]))
BY (method) > 0.01
for: 10m
labels:
severity: warning
- alert: etcdHighNumberOfFailedHTTPRequests
annotations:
message: '{{ $value }}% of requests for {{ $labels.method }} failed on etcd
instance {{ $labels.instance }}.'
expr: |
sum(rate(etcd_http_failed_total{job=~".*etcd.*", code!="404"}[5m])) BY (method) / sum(rate(etcd_http_received_total{job=~".*etcd.*"}[5m]))
BY (method) > 0.05
for: 10m
labels:
severity: critical
- alert: etcdHTTPRequestsSlow
annotations:
message: etcd instance {{ $labels.instance }} HTTP requests to {{ $labels.method
}} are slow.
expr: |
histogram_quantile(0.99, rate(etcd_http_successful_duration_seconds_bucket[5m]))
> 0.15
for: 10m
labels:
severity: warning

View File

@ -1,4 +1,6 @@
# Understand failures
---
title: Failure modes
---
Failures are common in a large deployment of machines. A machine fails when its hardware or software malfunctions. Multiple machines fail together when there are power failures or network issues. Multiple kinds of failures can also happen at once; it is almost impossible to enumerate all possible failure cases.

View File

@ -1,10 +1,12 @@
# etcd gateway
---
title: etcd gateway
---
## What is etcd gateway
etcd gateway is a simple TCP proxy that forwards network data to the etcd cluster. The gateway is stateless and transparent; it neither inspects client requests nor interferes with cluster responses.
The gateway supports multiple etcd server endpoints and works on a simple round-robin policy. It only routes to available enpoints and hides failures from its clients. Other retry policies, such as weighted round-robin, may be supported in the future.
The gateway supports multiple etcd server endpoints and works on a simple round-robin policy. It only routes to available endpoints and hides failures from its clients. Other retry policies, such as weighted round-robin, may be supported in the future.
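For illustration, a minimal sketch of starting the gateway against a static endpoint list (the hostnames and listen address below are placeholder assumptions):
```bash
# proxy local client traffic to three etcd servers, round-robin
$ etcd gateway start \
  --endpoints=infra0.example.com:2379,infra1.example.com:2379,infra2.example.com:2379 \
  --listen-addr=127.0.0.1:23790
```
Clients then connect to `127.0.0.1:23790`, and the gateway forwards to whichever endpoints are currently reachable.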
## When to use etcd gateway
@ -60,7 +62,7 @@ infra2.example.com. 300 IN A 10.0.1.12
Start the etcd gateway to fetch the endpoints from the DNS SRV entries with the command:
```bash
$ etcd gateway --discovery-srv=example.com
$ etcd gateway start --discovery-srv=example.com
2016-08-16 11:21:18.867350 I | tcpproxy: ready to proxy client requests to [...]
```

File diff suppressed because it is too large

View File

@ -1,4 +1,6 @@
# gRPC proxy
---
title: gRPC proxy
---
The gRPC proxy is a stateless etcd reverse proxy operating at the gRPC layer (L7). The proxy is designed to reduce the total processing load on the core etcd cluster. For horizontal scalability, it coalesces watch and lease API requests. To protect the cluster against abusive clients, it caches key range requests.
@ -85,14 +87,14 @@ Start the etcd gRPC proxy to use these static endpoints with the command:
$ etcd grpc-proxy start --endpoints=infra0.example.com,infra1.example.com,infra2.example.com --listen-addr=127.0.0.1:2379
```
The etcd gRPC proxy starts and listens on port 8080. It forwards client requests to one of the three endpoints provided above.
The etcd gRPC proxy starts and listens on port 2379. It forwards client requests to one of the three endpoints provided above.
Sending requests through the proxy:
```bash
$ ETCDCTL_API=3 ./etcdctl --endpoints=127.0.0.1:2379 put foo bar
$ ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 put foo bar
OK
$ ETCDCTL_API=3 ./etcdctl --endpoints=127.0.0.1:2379 get foo
$ ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 get foo
foo
bar
```
@ -120,7 +122,7 @@ $ etcd grpc-proxy start --endpoints=localhost:2379 \
The proxy will list all its members for member list:
```bash
ETCDCTL_API=3 ./bin/etcdctl --endpoints=http://localhost:23790 member list --write-out table
ETCDCTL_API=3 etcdctl --endpoints=http://localhost:23790 member list --write-out table
+----+---------+--------------------------------+------------+-----------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
@ -155,10 +157,10 @@ $ etcd grpc-proxy start --endpoints=localhost:2379 \
--advertise-client-url=127.0.0.1:23792
```
the member list API to the grpc-proxy returns its own `advertise-client-url`:
The member list API to the grpc-proxy returns its own `advertise-client-url`:
```bash
ETCDCTL_API=3 ./bin/etcdctl --endpoints=http://localhost:23792 member list --write-out table
ETCDCTL_API=3 etcdctl --endpoints=http://localhost:23792 member list --write-out table
+----+---------+--------------------------------+------------+-----------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
@ -182,12 +184,69 @@ $ etcd grpc-proxy start --endpoints=localhost:2379 \
Accesses to the proxy are now transparently prefixed on the etcd cluster:
```bash
$ ETCDCTL_API=3 ./bin/etcdctl --endpoints=localhost:23790 put my-key abc
$ ETCDCTL_API=3 etcdctl --endpoints=localhost:23790 put my-key abc
# OK
$ ETCDCTL_API=3 ./bin/etcdctl --endpoints=localhost:23790 get my-key
$ ETCDCTL_API=3 etcdctl --endpoints=localhost:23790 get my-key
# my-key
# abc
$ ETCDCTL_API=3 ./bin/etcdctl --endpoints=localhost:2379 get my-prefix/my-key
$ ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 get my-prefix/my-key
# my-prefix/my-key
# abc
```
## TLS termination
Terminate TLS from a secure etcd cluster with the gRPC proxy by serving an unencrypted local endpoint.
To try it out, start a single member etcd cluster with client https:
```sh
$ etcd --listen-client-urls https://localhost:2379 --advertise-client-urls https://localhost:2379 --cert-file=peer.crt --key-file=peer.key --trusted-ca-file=ca.crt --client-cert-auth
```
Confirm the client port is serving https:
```sh
# fails
$ ETCDCTL_API=3 etcdctl --endpoints=http://localhost:2379 endpoint status
# works
$ ETCDCTL_API=3 etcdctl --endpoints=https://localhost:2379 --cert=client.crt --key=client.key --cacert=ca.crt endpoint status
```
Next, start a gRPC proxy on `localhost:12379` by connecting to the etcd endpoint `https://localhost:2379` using the client certificates:
```sh
$ etcd grpc-proxy start --endpoints=https://localhost:2379 --listen-addr localhost:12379 --cert client.crt --key client.key --cacert=ca.crt --insecure-skip-tls-verify &
```
Finally, test the TLS termination by putting a key into the proxy over http:
```sh
$ ETCDCTL_API=3 etcdctl --endpoints=http://localhost:12379 put abc def
# OK
```
## Metrics and Health
The gRPC proxy exposes `/health` and Prometheus `/metrics` endpoints for the etcd members defined by `--endpoints`. Alternatively, the `--metrics-addr` flag defines an additional URL that will respond to both the `/metrics` and `/health` endpoints.
```bash
$ etcd grpc-proxy start \
--endpoints https://localhost:2379 \
--metrics-addr https://0.0.0.0:4443 \
--listen-addr 127.0.0.1:23790 \
--key client.key \
--key-file proxy-server.key \
--cert client.crt \
--cert-file proxy-server.crt \
--cacert ca.pem \
--trusted-ca-file proxy-ca.pem
```
### Known issue
The main interface of the proxy serves both HTTP/2 and HTTP/1.1. If the proxy is set up with TLS as shown in the above example, a client such as cURL hitting the listening interface must explicitly set the protocol to HTTP/1.1 on requests to `/metrics` or `/health`. The secondary interface defined by the `--metrics-addr` flag does not have this requirement.
```bash
$ curl --cacert proxy-ca.pem --key proxy-client.key --cert proxy-client.crt https://127.0.0.1:23790/metrics --http1.1
```

View File

@ -1,4 +1,6 @@
# Hardware recommendations
---
title: Hardware recommendations
---
etcd usually runs well with limited resources for development or testing purposes; it's common to develop with etcd on a laptop or a cheap cloud machine. However, when running etcd clusters in production, some hardware guidelines are useful for proper administration. These suggestions are not hard rules; they serve as a good starting point for a robust production deployment. As always, deployments should be tested with simulated workloads before running in production.
@ -48,7 +50,7 @@ Example application workload: A 50-node Kubernetes cluster
| Provider | Type | vCPUs | Memory (GB) | Max concurrent IOPS | Disk bandwidth (MB/s) |
|----------|------|-------|--------|------|----------------|
| AWS | m4.large | 2 | 8 | 3600 | 56.25 |
| GCE | n1-standard-1 + 50GB PD SSD | 2 | 7.5 | 1500 | 25 |
| GCE | n1-standard-2 + 50GB PD SSD | 2 | 7.5 | 1500 | 25 |
### Medium cluster

View File

@ -1,4 +1,6 @@
# Maintenance
---
title: Maintenance
---
## Overview
@ -6,25 +8,27 @@ An etcd cluster needs periodic maintenance to remain reliable. Depending on an e
All etcd maintenance manages storage resources consumed by the etcd keyspace. Failure to adequately control the keyspace size is guarded by storage space quotas; if an etcd member runs low on space, a quota will trigger cluster-wide alarms which will put the system into a limited-operation maintenance mode. To avoid running out of space for writes to the keyspace, the etcd keyspace history must be compacted. Storage space itself may be reclaimed by defragmenting etcd members. Finally, periodic snapshot backups of etcd member state makes it possible to recover any unintended logical data loss or corruption caused by operational error.
## History compaction
## Raft log retention
`etcd --snapshot-count` configures the number of applied Raft entries to hold in memory before compaction. When the count is reached, the server first persists snapshot data onto disk, then truncates old entries. When a slow follower requests logs from before a compacted index, the leader sends the snapshot, forcing the follower to overwrite its state.
A higher `--snapshot-count` holds more Raft entries in memory until the snapshot, causing [recurrent higher memory usage](https://github.com/kubernetes/kubernetes/issues/60589#issuecomment-371977156). Since the leader retains the latest Raft entries for longer, a slow follower has more time to catch up before the leader snapshots. `--snapshot-count` is a tradeoff between higher memory usage and better availability for slow followers.
Since v3.2, the default value of `--snapshot-count` has [changed from 10,000 to 100,000](https://github.com/etcd-io/etcd/pull/7160).
Performance-wise, a `--snapshot-count` greater than 100,000 may impact write throughput. A higher number of in-memory objects can slow down the [Go GC mark phase `runtime.scanobject`](https://golang.org/src/runtime/mgc.go), and infrequent memory reclamation makes allocation slow. Performance varies with workloads and system environments. In general, however, too-frequent compaction affects cluster availability and write throughput, while too-infrequent compaction places too much pressure on the Go garbage collector. See https://www.slideshare.net/mitakeh/understanding-performance-aspects-of-etcd-and-raft for more research results.
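As a sketch of the tradeoff above, the count can be pinned explicitly at startup; the value here simply restates the post-v3.2 default and is not a tuning recommendation:
```sh
# hold up to 100,000 applied Raft entries in memory before snapshotting
$ etcd --snapshot-count=100000
```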
## History compaction: v3 API Key-Value Database
Since etcd keeps an exact history of its keyspace, this history should be periodically compacted to avoid performance degradation and eventual storage space exhaustion. Compacting the keyspace history drops all information about keys superseded prior to a given keyspace revision. The space used by these keys then becomes available for additional writes to the keyspace.
The keyspace can be compacted automatically with `etcd`'s time windowed history retention policy, or manually with `etcdctl`. The `etcdctl` method provides fine-grained control over the compacting process whereas automatic compacting fits applications that only need key history for some length of time.
`etcd` can be set to automatically compact the keyspace with the `--auto-compaction` option with a period of hours:
```sh
# keep one hour of history
$ etcd --auto-compaction-retention=1
```
An `etcdctl` initiated compaction works as follows:
```sh
# compact up to revision 3
$ etcdctl compact 3
```
Revisions prior to the compaction revision become inaccessible:
@ -34,11 +38,43 @@ $ etcdctl get --rev=2 somekey
Error: rpc error: code = 11 desc = etcdserver: mvcc: required revision has been compacted
```
### Auto Compaction
`etcd` can be set to automatically compact the keyspace with the `--auto-compaction-*` option with a period of hours:
```sh
# keep one hour of history
$ etcd --auto-compaction-retention=1
```
[v3.0.0](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.0.md) and [v3.1.0](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.1.md) with `--auto-compaction-retention=10` run periodic compaction on the v3 key-value store every 10 hours. The compactor only supports periodic compaction. It records the latest revisions every 5 minutes, until it reaches the first compaction period (e.g. 10 hours). In order to retain the key-value history of the last compaction period, it uses the last revision that was fetched before the compaction period, from the revision records collected every 5 minutes. When `--auto-compaction-retention=10`, the compactor uses revision 100 as the compact revision, where revision 100 is the latest revision fetched 10 hours ago. If compaction succeeds or the requested revision has already been compacted, it resets the period timer and starts over with new historical revision records (e.g. restarts revision collection and compaction for the next 10-hour period). If compaction fails, it retries in 5 minutes.
The [v3.2.0](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.2.md) compactor runs [every hour](https://github.com/etcd-io/etcd/pull/7875). It only supports periodic compaction, and continues to record the latest revisions every 5 minutes. Every hour, it uses the last revision that was fetched before the compaction period, from the revision records collected every 5 minutes. That is, every hour, the compactor discards historical data created before the compaction period; the retention window of the compaction period moves to the next hour. For instance, when hourly writes are 100 and `--auto-compaction-retention=10`, v3.1 compacts revisions 1000, 2000, and 3000 every 10 hours, while v3.2.x, v3.3.0, v3.3.1, and v3.3.2 compact revisions 1000, 1100, and 1200 every hour. If compaction succeeds or the requested revision has already been compacted, it resets the period timer and removes the used compacted revision from the historical revision records (e.g. starts the next revision collection and compaction from the previously collected revisions). If compaction fails, it retries in 5 minutes.
In [v3.3.0](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md), [v3.3.1](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md), and [v3.3.2](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md), `--auto-compaction-mode=revision --auto-compaction-retention=1000` automatically performs `Compact` on `"latest revision" - 1000` every 5 minutes (when the latest revision is 30000, compact on revision 29000). `--auto-compaction-mode=periodic --auto-compaction-retention=72h` automatically performs `Compact` with a 72-hour retention window, every 7.2 hours. Likewise, `--auto-compaction-mode=periodic --auto-compaction-retention=30m` automatically performs `Compact` with a 30-minute retention window, every 3 minutes. The periodic compactor continues to record the latest revisions every 1/10 of the given compaction period (e.g. every 1 hour when `--auto-compaction-mode=periodic --auto-compaction-retention=10h`). For every 1/10 of the given compaction period, the compactor uses the last revision that was fetched before the compaction period to discard historical data. The retention window moves every 1/10 of the given compaction period. For instance, when hourly writes are 100 and `--auto-compaction-retention=10`, v3.1 compacts revisions 1000, 2000, and 3000 every 10 hours, while v3.2.x, v3.3.0, v3.3.1, and v3.3.2 compact revisions 1000, 1100, and 1200 every hour. Furthermore, when writes per minute are 1000, v3.3.0, v3.3.1, and v3.3.2 with `--auto-compaction-mode=periodic --auto-compaction-retention=30m` compact revisions 30000, 33000, and 36000 every 3 minutes, with finer granularity.
When `--auto-compaction-retention=10h`, etcd first waits 10 hours for the first compaction, and then compacts every hour (1/10 of 10 hours) afterwards, like this:
```
0Hr (rev = 1)
1hr (rev = 10)
...
8hr (rev = 80)
9hr (rev = 90)
10hr (rev = 100, Compact(1))
11hr (rev = 110, Compact(10))
...
```
Whether compaction succeeds or not, this process repeats every 1/10 of the given compaction period. If compaction succeeds, it just removes the compacted revision from the historical revision records.
In [v3.3.3](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md), `--auto-compaction-mode=revision --auto-compaction-retention=1000` automatically performs `Compact` on `"latest revision" - 1000` every 5 minutes (when the latest revision is 30000, compact on revision 29000). Previously, `--auto-compaction-mode=periodic --auto-compaction-retention=72h` automatically performed `Compact` with a 72-hour retention window every 7.2 hours. **Now, `Compact` happens every 1 hour, but still with a 72-hour retention window.** Previously, `--auto-compaction-mode=periodic --auto-compaction-retention=30m` automatically performed `Compact` with a 30-minute retention window every 3 minutes. **Now, `Compact` happens every 30 minutes, but still with a 30-minute retention window.** The periodic compactor keeps recording the latest revisions every compaction period when the given period is less than 1 hour, or every 1 hour when the given compaction period is greater than 1 hour (e.g. every 1 hour when `--auto-compaction-mode=periodic --auto-compaction-retention=24h`). For every compaction period or hour, the compactor uses the last revision that was fetched before the compaction period to discard historical data. The retention window moves every given compaction period or hour. For instance, when hourly writes are 100 and `--auto-compaction-mode=periodic --auto-compaction-retention=24h`, `v3.2.x`, `v3.3.0`, `v3.3.1`, and `v3.3.2` compact revisions 2400, 2640, and 2880 every 2.4 hours, while `v3.3.3` *or later* compacts revisions 2400, 2500, and 2600 every 1 hour. Furthermore, when `--auto-compaction-mode=periodic --auto-compaction-retention=30m` and writes per minute are about 1000, `v3.3.0`, `v3.3.1`, and `v3.3.2` compact revisions 30000, 33000, and 36000 every 3 minutes, while `v3.3.3` *or later* compacts revisions 30000, 60000, and 90000 every 30 minutes.
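To make the two modes concrete, a minimal sketch of each invocation (the retention values are arbitrary examples):
```sh
# revision mode: always retain the latest 1000 revisions
$ etcd --auto-compaction-mode=revision --auto-compaction-retention=1000
# periodic mode: retain a 72-hour window of history
$ etcd --auto-compaction-mode=periodic --auto-compaction-retention=72h
```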
## Defragmentation
After compacting the keyspace, the backend database may exhibit internal fragmentation. Any internal fragmentation is space that is free to use by the backend but still consumes storage space. The process of defragmentation releases this storage space back to the file system. Defragmentation is issued on a per-member so that cluster-wide latency spikes may be avoided.
After compacting the keyspace, the backend database may exhibit internal fragmentation. Any internal fragmentation is space that is free to use by the backend but still consumes storage space. Compacting old revisions internally fragments `etcd` by leaving gaps in backend database. Fragmented space is available for use by `etcd` but unavailable to the host filesystem. In other words, deleting application data does not reclaim the space on disk.
Compacting old revisions internally fragments `etcd` by leaving gaps in backend database. Fragmented space is available for use by `etcd` but unavailable to the host filesystem.
The process of defragmentation releases this storage space back to the file system. Defragmentation is issued on a per-member basis so that cluster-wide latency spikes may be avoided.
To defragment an etcd member, use the `etcdctl defrag` command:
@ -47,6 +83,25 @@ $ etcdctl defrag
Finished defragmenting etcd member[127.0.0.1:2379]
```
**Note that defragmentation of a live member blocks the system from reading and writing data while rebuilding its state**.
**Note that defragmentation requests are not replicated over the cluster. The request is only applied to the local node. Specify all members in the `--endpoints` flag or use the `--cluster` flag to automatically find all cluster members.**
Run defragment operations for all endpoints in the cluster associated with the default endpoint:
```bash
$ etcdctl defrag --cluster
Finished defragmenting etcd member[http://127.0.0.1:2379]
Finished defragmenting etcd member[http://127.0.0.1:22379]
Finished defragmenting etcd member[http://127.0.0.1:32379]
```
To defragment an etcd data directory directly, while etcd is not running, use the command:
``` sh
$ etcdctl defrag --data-dir <path-to-etcd-data-dir>
```
## Space quota
The space quota in `etcd` ensures the cluster operates in a reliable fashion. Without a space quota, `etcd` may suffer from poor performance if the keyspace grows excessively large, or it may simply run out of storage space, leading to unpredictable cluster behavior. If the keyspace's backend database for any member exceeds the space quota, `etcd` raises a cluster-wide alarm that puts the cluster into a maintenance mode which only accepts key reads and deletes. Only after freeing enough space in the keyspace, defragmenting the backend database, and clearing the space quota alarm can the cluster resume normal operation.
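The quota itself can be set at startup with `--quota-backend-bytes`; as a sketch (8 GB is an arbitrary example value, and the flag takes bytes):
```sh
# raise the backend quota from the default to 8 GB
$ etcd --quota-backend-bytes=$((8*1024*1024*1024))
```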
@ -74,14 +129,14 @@ $ ETCDCTL_API=3 etcdctl --write-out=table endpoint status
+----------------+------------------+-----------+---------+-----------+-----------+------------+
# confirm alarm is raised
$ ETCDCTL_API=3 etcdctl alarm list
memberID:13803658152347727308 alarm:NOSPACE
memberID:13803658152347727308 alarm:NOSPACE
```
Removing excessive keyspace data and defragmenting the backend database will put the cluster back within the quota limits:
```sh
# get current revision
$ rev=$(ETCDCTL_API=3 etcdctl --endpoints=:2379 endpoint status --write-out="json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9]*')
$ rev=$(ETCDCTL_API=3 etcdctl --endpoints=:2379 endpoint status --write-out="json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9].*')
# compact away all old revisions
$ ETCDCTL_API=3 etcdctl compact $rev
compacted revision 1516
@ -90,12 +145,16 @@ $ ETCDCTL_API=3 etcdctl defrag
Finished defragmenting etcd member[127.0.0.1:2379]
# disarm alarm
$ ETCDCTL_API=3 etcdctl alarm disarm
memberID:13803658152347727308 alarm:NOSPACE
memberID:13803658152347727308 alarm:NOSPACE
# test puts are allowed again
$ ETCDCTL_API=3 etcdctl put newkey 123
OK
```
The metric `etcd_mvcc_db_total_size_in_use_in_bytes` indicates the actual database usage after a history compaction, while `etcd_debugging_mvcc_db_total_size_in_bytes` shows the database size including free space waiting for defragmentation. The latter increases only when the former is close to it, meaning when both of these metrics are close to the quota, a history compaction is required to avoid triggering the space quota.
`etcd_debugging_mvcc_db_total_size_in_bytes` is renamed to `etcd_mvcc_db_total_size_in_bytes` from v3.4.
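A quick way to compare the two metrics side by side is to filter the metrics endpoint; this sketch assumes a plaintext local client URL:
```sh
$ curl -sL http://localhost:2379/metrics | grep db_total_size
```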
## Snapshot backup
Snapshotting the `etcd` cluster on a regular basis serves as a durable backup for an etcd keyspace. By taking periodic snapshots of an etcd member's backend database, an `etcd` cluster can be recovered to a point in time with a known good state.
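For example, a periodic backup can be as simple as saving a snapshot from a live member on a schedule (the endpoint variable is an assumption):
```sh
$ ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save backup.db
```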
@ -110,5 +169,4 @@ $ etcdctl --write-out=table snapshot status backup.db
+----------+----------+------------+------------+
| fe01cf57 | 10 | 7 | 2.1 MB |
+----------+----------+------------+------------+
```

View File

@ -1,21 +1,69 @@
# Monitoring etcd
---
title: Monitoring etcd
---
Each etcd server exports metrics under the `/metrics` path on its client port.
Each etcd server provides local monitoring information on its client port through http endpoints. The monitoring data is useful for both system health checking and cluster debugging.
## Debug endpoint
If `--debug` is set, the etcd server exports debugging information on its client port under the `/debug` path. Take care when setting `--debug`, since there will be degraded performance and verbose logging.
The `/debug/pprof` endpoint is the standard go runtime profiling endpoint. This can be used to profile CPU, heap, mutex, and goroutine utilization. For example, here `go tool pprof` gets the top 10 functions where etcd spends its time:
```sh
$ go tool pprof http://localhost:2379/debug/pprof/profile
Fetching profile from http://localhost:2379/debug/pprof/profile
Please wait... (30s)
Saved profile in /home/etcd/pprof/pprof.etcd.localhost:2379.samples.cpu.001.pb.gz
Entering interactive mode (type "help" for commands)
(pprof) top10
310ms of 480ms total (64.58%)
Showing top 10 nodes out of 157 (cum >= 10ms)
flat flat% sum% cum cum%
130ms 27.08% 27.08% 130ms 27.08% runtime.futex
70ms 14.58% 41.67% 70ms 14.58% syscall.Syscall
20ms 4.17% 45.83% 20ms 4.17% github.com/coreos/etcd/vendor/golang.org/x/net/http2/hpack.huffmanDecode
20ms 4.17% 50.00% 30ms 6.25% runtime.pcvalue
20ms 4.17% 54.17% 50ms 10.42% runtime.schedule
10ms 2.08% 56.25% 10ms 2.08% github.com/coreos/etcd/vendor/github.com/coreos/etcd/etcdserver.(*EtcdServer).AuthInfoFromCtx
10ms 2.08% 58.33% 10ms 2.08% github.com/coreos/etcd/vendor/github.com/coreos/etcd/etcdserver.(*EtcdServer).Lead
10ms 2.08% 60.42% 10ms 2.08% github.com/coreos/etcd/vendor/github.com/coreos/etcd/pkg/wait.(*timeList).Trigger
10ms 2.08% 62.50% 10ms 2.08% github.com/coreos/etcd/vendor/github.com/prometheus/client_golang/prometheus.(*MetricVec).hashLabelValues
10ms 2.08% 64.58% 10ms 2.08% github.com/coreos/etcd/vendor/golang.org/x/net/http2.(*Framer).WriteHeaders
```
The `/debug/requests` endpoint gives gRPC traces and performance statistics through a web browser. For example, here is a `Range` request for the key `abc`:
```
When Elapsed (s)
2017/08/18 17:34:51.999317 0.000244 /etcdserverpb.KV/Range
17:34:51.999382 . 65 ... RPC: from 127.0.0.1:47204 deadline:4.999377747s
17:34:51.999395 . 13 ... recv: key:"abc"
17:34:51.999499 . 104 ... OK
17:34:51.999535 . 36 ... sent: header:<cluster_id:14841639068965178418 member_id:10276657743932975437 revision:15 raft_term:17 > kvs:<key:"abc" create_revision:6 mod_revision:14 version:9 value:"asda" > count:1
```
## Metrics endpoint
Each etcd server exports metrics under the `/metrics` path on its client port and optionally on locations given by `--listen-metrics-urls`.
The metrics can be fetched with `curl`:
```sh
$ curl -L http://localhost:2379/metrics
$ curl -L http://localhost:2379/metrics | grep -v debugging # ignore unstable debugging metrics
# HELP etcd_debugging_mvcc_keys_total Total number of keys.
# TYPE etcd_debugging_mvcc_keys_total gauge
etcd_debugging_mvcc_keys_total 0
# HELP etcd_debugging_mvcc_pending_events_total Total number of pending events to be sent.
# TYPE etcd_debugging_mvcc_pending_events_total gauge
etcd_debugging_mvcc_pending_events_total 0
# HELP etcd_disk_backend_commit_duration_seconds The latency distributions of commit called by backend.
# TYPE etcd_disk_backend_commit_duration_seconds histogram
etcd_disk_backend_commit_duration_seconds_bucket{le="0.002"} 72756
etcd_disk_backend_commit_duration_seconds_bucket{le="0.004"} 401587
etcd_disk_backend_commit_duration_seconds_bucket{le="0.008"} 405979
etcd_disk_backend_commit_duration_seconds_bucket{le="0.016"} 406464
...
```
## Health Check
Since v3.3.0, in addition to responding to the `/metrics` endpoint, any locations specified by `--listen-metrics-urls` will also respond to the `/health` endpoint. This can be useful if the standard endpoint is configured with mutual (client) TLS authentication, but a load balancer or monitoring service still needs access to the health check.
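As a sketch, assuming etcd was started with `--listen-metrics-urls=http://0.0.0.0:2378` (the port is arbitrary), the extra listener answers health checks without client certificates:
```sh
$ curl -L http://localhost:2378/health
# expected output resembles: {"health":"true"}
```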
## Prometheus
@ -24,7 +72,7 @@ Running a [Prometheus][prometheus] monitoring service is the easiest way to inge
First, install Prometheus:
```sh
PROMETHEUS_VERSION="1.3.1"
PROMETHEUS_VERSION="2.0.0"
wget https://github.com/prometheus/prometheus/releases/download/v$PROMETHEUS_VERSION/prometheus-$PROMETHEUS_VERSION.linux-amd64.tar.gz -O /tmp/prometheus-$PROMETHEUS_VERSION.linux-amd64.tar.gz
tar -xvzf /tmp/prometheus-$PROMETHEUS_VERSION.linux-amd64.tar.gz --directory /tmp/ --strip-components=1
/tmp/prometheus -version
@ -56,13 +104,13 @@ nohup /tmp/prometheus \
Now Prometheus will scrape etcd metrics every 10 seconds.
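The scrape configuration elided from this diff resembles the following sketch; the file name and target address are assumptions:
```sh
cat > /tmp/test-etcd.yaml <<EOF
global:
  scrape_interval: 10s
scrape_configs:
  - job_name: test-etcd
    static_configs:
    - targets: ['localhost:2379']
EOF
```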
## Alerting
### Alerting
There is a [set of default alerts for etcd v3 clusters](./etcd3_alert.rules).
There is a set of default alerts for etcd v3 clusters for [Prometheus 1.x](./etcd3_alert.rules) as well as [Prometheus 2.x](./etcd3_alert.rules.yml).
> Note: `job` labels may need to be adjusted to fit a particular need. The rules were written to apply to a single cluster so it is recommended to choose labels unique to a cluster.
## Grafana
### Grafana
[Grafana][grafana] has built-in Prometheus support; just add a Prometheus data source:
@ -75,8 +123,6 @@ Access: proxy
Then import the default [etcd dashboard template][template] and customize. For instance, if Prometheus data source name is `my-etcd`, the `datasource` field values in JSON also need to be `my-etcd`.
See the [demo][demo].
Sample dashboard:
![](./etcd-sample-grafana.png)
@ -85,4 +131,3 @@ Sample dashboard:
[prometheus]: https://prometheus.io/
[grafana]: http://grafana.org/
[template]: ./grafana.json
[demo]: http://dash.etcd.io/dashboard/db/test-etcd

View File

@ -1,10 +1,12 @@
# Performance
---
title: Performance
---
## Understanding performance
etcd provides stable, sustained high performance. Two factors define performance: latency and throughput. Latency is the time taken to complete an operation. Throughput is the total operations completed within some time period. Usually average latency increases as the overall throughput increases when etcd accepts concurrent client requests. In common cloud environments, like a standard `n-4` on Google Compute Engine (GCE) or a comparable machine type on AWS, a three member etcd cluster finishes a request in less than one millisecond under light load, and can complete more than 30,000 requests per second under heavy load.
etcd uses the Raft consensus algorithm to replicate requests among members and reach agreement. Consensus performance, especially commit latency, is limited by two physical constraints: network IO latency and disk IO latency. The minimum time to finish an etcd request is the network Round Trip Time (RTT) between members, plus the time `fdatasync` requires to commit the data to permanant storage. The RTT within a datacenter may be as long as several hundred microseconds. A typical RTT within the United States is around 50ms, and can be as slow as 400ms between continents. The typical fdatasync latency for a spinning disk is about 10ms. For SSDs, the latency is often lower than 1ms. To increase throughput, etcd batches multiple requests together and submits them to Raft. This batching policy lets etcd attain high throughput despite heavy load.
etcd uses the Raft consensus algorithm to replicate requests among members and reach agreement. Consensus performance, especially commit latency, is limited by two physical constraints: network IO latency and disk IO latency. The minimum time to finish an etcd request is the network Round Trip Time (RTT) between members, plus the time `fdatasync` requires to commit the data to permanent storage. The RTT within a datacenter may be as long as several hundred microseconds. A typical RTT within the United States is around 50ms, and can be as slow as 400ms between continents. The typical fdatasync latency for a spinning disk is about 10ms. For SSDs, the latency is often lower than 1ms. To increase throughput, etcd batches multiple requests together and submits them to Raft. This batching policy lets etcd attain high throughput despite heavy load.
There are other sub-systems which impact the overall performance of etcd. Each serialized etcd request must run through etcd's boltdb-backed MVCC storage engine, which usually takes tens of microseconds to finish. Periodically etcd incrementally snapshots its recently applied requests, merging them back with the previous on-disk snapshot. This process may lead to a latency spike. Although this is usually not a problem on SSDs, it may double the observed latency on HDD. Likewise, inflight compactions can impact etcd's performance. Fortunately, the impact is often insignificant since the compaction is staggered so it does not compete for resources with regular requests. The RPC system, gRPC, gives etcd a well-defined, extensible API, but it also introduces additional latency, especially for local reads.
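Since `fdatasync` latency dominates commit latency, it is worth measuring it on candidate storage before deployment. A rough sketch using `fio` (a third-party tool, not part of etcd) that approximates etcd's small sequential WAL writes, each followed by an fdatasync:
```sh
$ mkdir -p test-data
$ fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data \
    --size=22m --bs=2300 --name=etcd-wal-probe
```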

View File

@ -1,4 +1,6 @@
## Disaster recovery
---
title: Disaster recovery
---
etcd is designed to withstand machine failures. An etcd cluster automatically recovers from temporary failures (e.g., machine reboots) and tolerates up to *(N-1)/2* permanent failures for a cluster of N members. When a member permanently fails, whether due to hardware failure or disk corruption, it loses access to the cluster. If the cluster permanently loses more than *(N-1)/2* members then it disastrously fails, irrevocably losing quorum. Once quorum is lost, the cluster cannot reach consensus and therefore cannot continue accepting updates.
@ -6,7 +8,7 @@ To recover from disastrous failure, etcd v3 provides snapshot and restore facili
[v2_recover]: ../v2/admin_guide.md#disaster-recovery
### Snapshotting the keyspace
## Snapshotting the keyspace
Recovering a cluster first needs a snapshot of the keyspace from an etcd member. A snapshot may either be taken from a live member with the `etcdctl snapshot save` command or by copying the `member/snap/db` file from an etcd data directory. For example, the following command snapshots the keyspace served by `$ENDPOINT` to the file `snapshot.db`:
@ -14,7 +16,7 @@ Recovering a cluster first needs a snapshot of the keyspace from an etcd member.
$ ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshot.db
```
### Restoring a cluster
## Restoring a cluster
To restore a cluster, all that is needed is a single snapshot "db" file. A cluster restore with `etcdctl snapshot restore` creates new etcd data directories; all members should restore using the same snapshot. Restoring overwrites some snapshot metadata (specifically, the member ID and cluster ID); the member loses its former identity. This metadata overwrite prevents the new member from inadvertently joining an existing cluster. Therefore in order to start a cluster from a snapshot, the restore must start a new logical cluster.
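As a sketch for one member (names, hosts, and the token are placeholders; each member runs an analogous command against the same snapshot file):
```sh
$ ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
  --name m1 \
  --initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-advertise-peer-urls http://host1:2380
```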
@ -61,3 +63,9 @@ $ etcd \
```
Now the restored etcd cluster should be available and serving the keyspace given by the snapshot.
## Restoring a cluster from membership mis-reconfiguration with wrong URLs
Previously, etcd panicked on [membership mis-reconfiguration with wrong URLs](https://github.com/etcd-io/etcd/issues/9173) (v3.2.15 or later returns an [error early on the client side](https://github.com/etcd-io/etcd/pull/9174) before the etcd server panics).
The recommended way is to restore from a [snapshot](#snapshotting-the-keyspace). `--force-new-cluster` can be used to overwrite cluster membership while keeping existing application data, but it is strongly discouraged because it will panic if other members from the previous cluster are still alive. Make sure to save snapshots periodically.

View File

@ -1,8 +1,10 @@
# Runtime reconfiguration
---
title: Runtime reconfiguration
---
etcd comes with support for incremental runtime reconfiguration, which allows users to update the membership of the cluster at run time.
Reconfiguration requests can only be processed when a majority of cluster members are functioning. It is **highly recommended** to always have a cluster size greater than two in production. It is unsafe to remove a member from a two member cluster. The majority of a two member cluster is also two. If there is a failure during the removal process, the cluster might not able to make progress and need to [restart from majority failure][majority failure].
Reconfiguration requests can only be processed when a majority of cluster members are functioning. It is **highly recommended** to always have a cluster size greater than two in production. It is unsafe to remove a member from a two member cluster. The majority of a two member cluster is also two. If there is a failure during the removal process, the cluster might not be able to make progress and need to [restart from majority failure][majority failure].
To better understand the design behind runtime reconfiguration, please read [the runtime reconfiguration document][runtime-reconf].
@ -41,7 +43,7 @@ Before making any change, a simple majority (quorum) of etcd members must be ava
All changes to the cluster must be done sequentially:
* To update a single member peerURLs, issue an update operation
* To replace a healthy single member, add a new member then remove the old member
* To replace a healthy single member, remove the old member then add a new member
* To increase from 3 to 5 members, issue two add operations
* To decrease from 5 to 3, issue two remove operations (see the remove sketch below)
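As a sketch of a single remove operation (the member ID is hypothetical; list real IDs with `etcdctl member list` first):
```sh
$ etcdctl member remove 8211f1d0f64f3269
```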
@ -55,9 +57,9 @@ To update the advertise client URLs of a member, simply restart that member with
#### Update advertise peer URLs
To update the advertise peer URLs of a member, first update it explicitly via member command and then restart the member. The additional action is required since updating peer URLs changes the cluster wide configuration and can affect the health of the etcd cluster.
To update the advertise peer URLs of a member, first update it explicitly via member command and then restart the member. The additional action is required since updating peer URLs changes the cluster wide configuration and can affect the health of the etcd cluster.
To update the peer URLs, first find the target member's ID. To list all members with `etcdctl`:
To update the advertise peer URLs, first find the target member's ID. To list all members with `etcdctl`:
```sh
$ etcdctl member list
@ -69,7 +71,7 @@ a8266ecf031671f3: name=node1 peerURLs=http://localhost:23801 clientURLs=http://1
This example will `update` a8266ecf031671f3 member ID and change its peerURLs value to `http://10.0.1.10:2380`:
```sh
$ etcdctl member update a8266ecf031671f3 http://10.0.1.10:2380
$ etcdctl member update a8266ecf031671f3 --peer-urls=http://10.0.1.10:2380
Updated member with ID a8266ecf031671f3 in cluster
```
@ -100,7 +102,7 @@ Adding a member is a two step process:
`etcdctl` adds a new member to the cluster by specifying the member's [name][conf-name] and [advertised peer URLs][conf-adv-peer]:
```sh
$ etcdctl member add infra3 http://10.0.1.13:2380
$ etcdctl member add infra3 --peer-urls=http://10.0.1.13:2380
added member 9bf1b35fc7761a23 to cluster
ETCD_NAME="infra3"

View File

@ -1,4 +1,6 @@
# Design of runtime reconfiguration
---
title: Design of runtime reconfiguration
---
Runtime reconfiguration is one of the hardest and most error prone features in a distributed system, especially in a consensus based system like etcd.
@ -6,17 +8,17 @@ Read on to learn about the design of etcd's runtime reconfiguration commands and
## Two phase config changes keep the cluster safe
In etcd, every runtime reconfiguration has to go through [two phases][add-member] for safety reasons. For example, to add a member, first inform cluster of new configuration and then start the new member.
In etcd, every runtime reconfiguration has to go through [two phases][add-member] for safety reasons. For example, to add a member, first inform the cluster of the new configuration and then start the new member.
Phase 1 - Inform cluster of new configuration
To add a member into etcd cluster, make an API call to request a new member to be added to the cluster. This is only way to add a new member into an existing cluster. The API call returns when the cluster agrees on the configuration change.
To add a member into an etcd cluster, make an API call to request a new member to be added to the cluster. This is the only way to add a new member into an existing cluster. The API call returns when the cluster agrees on the configuration change.
Phase 2 - Start new member
To join the etcd member into the existing cluster, specify the correct `initial-cluster` and set `initial-cluster-state` to `existing`. When the member starts, it will contact the existing cluster first and verify the current cluster configuration matches the expected one specified in `initial-cluster`. When the new member successfully starts, the cluster has reached the expected configuration.
To join the new etcd member into the existing cluster, specify the correct `initial-cluster` and set `initial-cluster-state` to `existing`. When the member starts, it will contact the existing cluster first and verify the current cluster configuration matches the expected one specified in `initial-cluster`. When the new member successfully starts, the cluster has reached the expected configuration.
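A sketch of phase 2 for a hypothetical fourth member `infra3` (names and URLs are assumptions):
```sh
$ etcd --name infra3 \
  --initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380,infra3=http://10.0.1.13:2380 \
  --initial-cluster-state existing \
  --initial-advertise-peer-urls http://10.0.1.13:2380
```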
By splitting the process into two discrete phases users are forced to be explicit regarding cluster membership changes. This actually gives users more flexibility and makes things easier to reason about. For example, if there is an attempt to add a new member with the same ID as an existing member in an etcd cluster, the action will fail immediately during phase one without impacting the running cluster. Similar protection is provided to prevent adding new members by mistake. If a new etcd member attempts to join the cluster before the cluster has accepted the configuration change,, it will not be accepted by the cluster.
By splitting the process into two discrete phases users are forced to be explicit regarding cluster membership changes. This actually gives users more flexibility and makes things easier to reason about. For example, if there is an attempt to add a new member with the same ID as an existing member in an etcd cluster, the action will fail immediately during phase one without impacting the running cluster. Similar protection is provided to prevent adding new members by mistake. If a new etcd member attempts to join the cluster before the cluster has accepted the configuration change, it will not be accepted by the cluster.
Without the explicit workflow around cluster membership etcd would be vulnerable to unexpected cluster membership changes. For example, if etcd is running under an init system such as systemd, etcd would be restarted after being removed via the membership API, and attempt to rejoin the cluster on startup. This cycle would continue every time a member is removed via the API and systemd is set to restart etcd after failing, which is unexpected.
@ -26,21 +28,21 @@ We expect runtime reconfiguration to be an infrequent operation. We decided to k
If a cluster permanently loses a majority of its members, a new cluster will need to be started from an old data directory to recover the previous state.
It is entirely possible to force removing the failed members from the existing cluster to recover. However, we decided not to support this method since it bypasses the normal consensus committing phase, which is unsafe. If the member to remove is not actually dead or force removed through different members in the same cluster, etcd will end up with a diverged cluster with same clusterID. This is very dangerous and hard to debug/fix afterwards.
It is entirely possible to force removing the failed members from the existing cluster to recover. However, we decided not to support this method since it bypasses the normal consensus committing phase, which is unsafe. If the member to remove is not actually dead or force removed through different members in the same cluster, etcd will end up with a diverged cluster with same clusterID. This is very dangerous and hard to debug/fix afterwards.
With a correct deployment, the possibility of permanent majority lose is very low. But it is a severe enough problem that worth special care. We strongly suggest reading the [disaster recovery documentation][disaster-recovery] and prepare for permanent majority lose before putting etcd into production.
With a correct deployment, the possibility of permanent majority loss is very low. But it is a severe enough problem that is worth special care. We strongly suggest reading the [disaster recovery documentation][disaster-recovery] and preparing for permanent majority loss before putting etcd into production.
## Do not use public discovery service for runtime reconfiguration
The public discovery service should only be used for bootstrapping a cluster. To join member into an existing cluster, use runtime reconfiguration API.
The public discovery service should only be used for bootstrapping a cluster. To join a member into an existing cluster, use the runtime reconfiguration API.
Discovery service is designed for bootstrapping an etcd cluster in the cloud environment, when the IP addresses of all the members are not known beforehand. After successfully bootstrapping a cluster, the IP addresses of all the members are known. Technically, the discovery service should no longer be needed.
The discovery service is designed for bootstrapping an etcd cluster in a cloud environment, when the IP addresses of all the members are not known beforehand. After successfully bootstrapping a cluster, the IP addresses of all the members are known. Technically, the discovery service should no longer be needed.
It seems that using public discovery service is a convenient way to do runtime reconfiguration, after all discovery service already has all the cluster configuration information. However relying on public discovery service brings troubles:
It seems that using the public discovery service would be a convenient way to do runtime reconfiguration, since the discovery service already has all the cluster configuration information. However, relying on the public discovery service brings troubles:
1. it introduces external dependencies for the entire life-cycle of the cluster, not just bootstrap time. If there is a network issue between the cluster and the public discovery service, the cluster will suffer from it.
2. public discovery service must reflect correct runtime configuration of the cluster during it life-cycle. It has to provide security mechanism to avoid bad actions, and it is hard.
2. public discovery service must reflect correct runtime configuration of the cluster during its life-cycle. It has to provide security mechanisms to avoid bad actions, and it is hard.
3. public discovery service has to keep tens of thousands of cluster configurations. Our public discovery service backend is not ready for that workload.

View File

@ -1,4 +1,6 @@
# Security model
---
title: Transport security model
---
etcd supports automatic TLS as well as authentication through client certificates, for both client-to-server as well as peer (server-to-server / cluster) communication.
@ -38,6 +40,8 @@ The peer options work the same way as the client-to-server options:
If either a client-to-server or peer certificate is supplied the key must also be set. All of these configuration options are also available through the environment variables, `ETCD_CA_FILE`, `ETCD_PEER_CA_FILE` and so on.
`--cipher-suites`: Comma-separated list of supported TLS cipher suites between server/client and peers (empty will be auto-populated by Go). Available from v3.2.22+, v3.3.7+, and v3.4+.
## Example 1: Client-to-server transport security with HTTPS
For this, have a CA certificate (`ca.crt`) and signed key pair (`server.crt`, `server.key`) ready.
@ -122,6 +126,49 @@ And also the response from the server:
}
```
Specify cipher suites to block [weak TLS cipher suites](https://github.com/etcd-io/etcd/issues/8320).
TLS handshake would fail when client hello is requested with invalid cipher suites.
For instance:
```bash
$ etcd \
--cert-file ./server.crt \
--key-file ./server.key \
--trusted-ca-file ./ca.crt \
--cipher-suites TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
```
Then, client requests must specify one of the cipher suites specified in the server:
```bash
# valid cipher suite
$ curl \
--cacert ./ca.crt \
--cert ./server.crt \
--key ./server.key \
-L [CLIENT-URL]/metrics \
--ciphers ECDHE-RSA-AES128-GCM-SHA256
# request succeeds
etcd_server_version{server_version="3.2.22"} 1
...
```
```bash
# invalid cipher suite
$ curl \
--cacert ./ca.crt \
--cert ./server.crt \
--key ./server.key \
-L [CLIENT-URL]/metrics \
--ciphers ECDHE-RSA-DES-CBC3-SHA
# request fails with
(35) error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure
```
## Example 3: Transport security & client certificates in a cluster
etcd supports the same model as above for **peer communication**, which means the communication between etcd members in a cluster.
@ -181,6 +228,10 @@ To disable certificate chain checking, invoke curl with the `-k` flag:
$ curl -k https://127.0.0.1:2379/v2/keys/foo -Xput -d value=bar -v
```
## Notes for DNS SRV
Since v3.1.0 (except v3.2.9), discovery SRV bootstrapping authenticates `ServerName` with a root domain name from the `--discovery-srv` flag. This is to avoid man-in-the-middle cert attacks, by requiring a certificate to have a matching root domain name in its Subject Alternative Name (SAN) field. For instance, `etcd --discovery-srv=etcd.local` will only authenticate peers/clients when the provided certs have the root domain `etcd.local` as an entry in the Subject Alternative Name (SAN) field.
## Notes for etcd proxy
etcd proxy terminates the TLS from its client if the connection is secure, and uses proxy's own key/cert specified in `--peer-key-file` and `--peer-cert-file` to communicate with etcd members.
@ -189,6 +240,163 @@ The proxy communicates with etcd members through both the `--advertise-client-ur
When client authentication is enabled for an etcd member, the administrator must ensure that the peer certificate specified in the proxy's `--peer-cert-file` option is valid for that authentication. The proxy's peer certificate must also be valid for peer authentication if peer authentication is enabled.
## Notes for TLS authentication
Since [v3.2.0](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.2.md#v320-2017-06-09), [TLS certificates get reloaded on every client connection](https://github.com/etcd-io/etcd/pull/7829). This is useful when replacing expired certs without stopping etcd servers; it can be done by overwriting old certs with new ones. Refreshing certs for every connection should not have too much overhead, but could be improved in the future with a caching layer. Example tests can be found [here](https://github.com/coreos/etcd/blob/b041ce5d514a4b4aaeefbffb008f0c7570a18986/integration/v3_grpc_test.go#L1601-L1757).
Since [v3.2.0](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.2.md#v320-2017-06-09), [server denies incoming peer certs with wrong IP `SAN`](https://github.com/etcd-io/etcd/pull/7687). For instance, if peer cert contains any IP addresses in Subject Alternative Name (SAN) field, server authenticates a peer only when the remote IP address matches one of those IP addresses. This is to prevent unauthorized endpoints from joining the cluster. For example, peer B's CSR (with `cfssl`) is:
```json
{
"CN": "etcd peer",
"hosts": [
"*.example.default.svc",
"*.example.default.svc.cluster.local",
"10.138.0.27"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "CA",
"ST": "San Francisco"
}
]
}
```
when peer B's actual IP address is `10.138.0.2`, not `10.138.0.27`. When peer B tries to join the cluster, peer A will reject B with the error `x509: certificate is valid for 10.138.0.27, not 10.138.0.2`, because B's remote IP address does not match any IP address in the Subject Alternative Name (SAN) field.
Since [v3.2.0](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.2.md#v320-2017-06-09), [server resolves TLS `DNSNames` when checking `SAN`](https://github.com/etcd-io/etcd/pull/7767). For instance, if peer cert contains only DNS names (no IP addresses) in Subject Alternative Name (SAN) field, server authenticates a peer only when forward-lookups (`dig b.com`) on those DNS names have matching IP with the remote IP address. For example, peer B's CSR (with `cfssl`) is:
```json
{
"CN": "etcd peer",
"hosts": [
"b.com"
],
```
when peer B's remote IP address is `10.138.0.2`. When peer B tries to join the cluster, peer A looks up the incoming host `b.com` to get the list of IP addresses (e.g. `dig b.com`), and rejects B if the list does not contain the IP `10.138.0.2`, with the error `tls: 10.138.0.2 does not match any of DNSNames ["b.com"]`.
Since [v3.2.2](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.2.md#v322-2017-07-07), [server accepts connections if IP matches, without checking DNS entries](https://github.com/etcd-io/etcd/pull/8223). For instance, if peer cert contains IP addresses and DNS names in Subject Alternative Name (SAN) field, and the remote IP address matches one of those IP addresses, server just accepts connection without further checking the DNS names. For example, peer B's CSR (with `cfssl`) is:
```json
{
"CN": "etcd peer",
"hosts": [
"invalid.domain",
"10.138.0.2"
],
```
when peer B's remote IP address is `10.138.0.2` and `invalid.domain` is an invalid host. When peer B tries to join the cluster, peer A successfully authenticates B, since the Subject Alternative Name (SAN) field has a matching IP address. See [issue#8206](https://github.com/etcd-io/etcd/issues/8206) for more detail.
Since [v3.2.5](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.2.md#v325-2017-08-04), [server supports reverse-lookup on wildcard DNS `SAN`](https://github.com/etcd-io/etcd/pull/8281). For instance, if a peer cert contains only DNS names (no IP addresses) in its Subject Alternative Name (SAN) field, the server first reverse-lookups the remote IP address to get a list of names mapping to that address (e.g. `nslookup IPADDR`), and accepts the connection if any of those names matches the peer cert's DNS names (either exactly or by wildcard). If none matches, the server forward-lookups each DNS entry in the peer cert (e.g. looking up `example.default.svc` when the entry is `*.example.default.svc`), and accepts the connection only when one of the host's resolved addresses matches the peer's remote IP address. For example, peer B's CSR (with `cfssl`) is:
```json
{
"CN": "etcd peer",
"hosts": [
"*.example.default.svc",
"*.example.default.svc.cluster.local"
],
```
when peer B's remote IP address is `10.138.0.2`. When peer B tries to join the cluster, peer A reverse-lookups the IP `10.138.0.2` to get a list of host names, then matches those host names (either exactly or by wildcard) against the DNS names in peer B's cert Subject Alternative Name (SAN) field. If neither the reverse nor the forward lookups produce a match, it returns the error `"tls: "10.138.0.2" does not match any of DNSNames ["*.example.default.svc","*.example.default.svc.cluster.local"]`. See [issue#8268](https://github.com/etcd-io/etcd/issues/8268) for more detail.
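To make the order of checks concrete, below is a simplified Go sketch of the logic described above. It is illustrative only, not etcd's actual implementation, and the helper names are invented:
```go
import (
	"net"
	"strings"
)

// matchesWildcard reports whether host matches a pattern with a single
// leading "*." label, e.g. "*.example.default.svc" matches
// "a.example.default.svc". (Invented helper for illustration.)
func matchesWildcard(pattern, host string) bool {
	if !strings.HasPrefix(pattern, "*.") {
		return false
	}
	i := strings.Index(host, ".")
	return i >= 0 && host[i:] == pattern[1:]
}

// sanAllows sketches the order of checks for a peer cert whose SAN
// field contains only DNS names.
func sanAllows(remoteIP string, dnsNames []string) bool {
	// 1. Reverse-lookup the remote IP and match the returned names
	//    (exactly or by wildcard) against the cert's DNS names.
	if hosts, err := net.LookupAddr(remoteIP); err == nil {
		for _, h := range hosts {
			h = strings.TrimSuffix(h, ".")
			for _, d := range dnsNames {
				if h == d || matchesWildcard(d, h) {
					return true
				}
			}
		}
	}
	// 2. Otherwise, forward-lookup each DNS entry (stripping a leading
	//    wildcard label) and accept on a matching resolved address.
	for _, d := range dnsNames {
		addrs, err := net.LookupHost(strings.TrimPrefix(d, "*."))
		if err != nil {
			continue
		}
		for _, a := range addrs {
			if a == remoteIP {
				return true
			}
		}
	}
	return false
}
```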
[v3.3.0](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md) adds the [`etcd --peer-cert-allowed-cn`](https://github.com/etcd-io/etcd/pull/8616) flag to support [CN (Common Name)-based auth for inter-peer connections](https://github.com/etcd-io/etcd/issues/8262). Kubernetes TLS bootstrapping involves generating dynamic certificates for etcd members and other system components (e.g. API server, kubelet, etc.). Maintaining different CAs for each component provides tighter access control to the etcd cluster but is often tedious. When the `--peer-cert-allowed-cn` flag is specified, a node can only join with a matching common name, even with shared CAs. For example, each member in a 3-node cluster is set up with CSRs (with `cfssl`) as below:
```json
{
"CN": "etcd.local",
"hosts": [
"m1.etcd.local",
"127.0.0.1",
"localhost"
],
```
```json
{
"CN": "etcd.local",
"hosts": [
"m2.etcd.local",
"127.0.0.1",
"localhost"
],
```
```json
{
"CN": "etcd.local",
"hosts": [
"m3.etcd.local",
"127.0.0.1",
"localhost"
],
```
Then only peers with matching common names will be authenticated if `--peer-cert-allowed-cn etcd.local` is given. Nodes with different CNs in their CSRs, or with a different `--peer-cert-allowed-cn`, will be rejected:
```bash
$ etcd --peer-cert-allowed-cn m1.etcd.local
I | embed: rejected connection from "127.0.0.1:48044" (error "CommonName authentication failed", ServerName "m1.etcd.local")
I | embed: rejected connection from "127.0.0.1:55702" (error "remote error: tls: bad certificate", ServerName "m3.etcd.local")
```
Each process should be started with:
```bash
etcd --peer-cert-allowed-cn etcd.local
I | pkg/netutil: resolving m3.etcd.local:32380 to 127.0.0.1:32380
I | pkg/netutil: resolving m2.etcd.local:22380 to 127.0.0.1:22380
I | pkg/netutil: resolving m1.etcd.local:2380 to 127.0.0.1:2380
I | etcdserver: published {Name:m3 ClientURLs:[https://m3.etcd.local:32379]} to cluster 9db03f09b20de32b
I | embed: ready to serve client requests
I | etcdserver: published {Name:m1 ClientURLs:[https://m1.etcd.local:2379]} to cluster 9db03f09b20de32b
I | embed: ready to serve client requests
I | etcdserver: published {Name:m2 ClientURLs:[https://m2.etcd.local:22379]} to cluster 9db03f09b20de32b
I | embed: ready to serve client requests
I | embed: serving client requests on 127.0.0.1:32379
I | embed: serving client requests on 127.0.0.1:22379
I | embed: serving client requests on 127.0.0.1:2379
```
[v3.2.19](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.2.md) and [v3.3.4](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md) fix TLS reload when the [certificate SAN field only includes IP addresses but no domain names](https://github.com/etcd-io/etcd/issues/9541). For example, a member is set up with CSRs (with `cfssl`) as below:
```json
{
"CN": "etcd.local",
"hosts": [
"127.0.0.1"
],
```
In Go, the server calls `(*tls.Config).GetCertificate` for TLS reload if and only if the server's `(*tls.Config).Certificates` field is not empty, or `(*tls.ClientHelloInfo).ServerName` is not empty with a valid SNI from the client. Previously, etcd always populated `(*tls.Config).Certificates` on the initial client TLS handshake, as non-empty. Thus, a client was always expected to supply a matching SNI in order to pass the TLS verification and to trigger `(*tls.Config).GetCertificate` to reload TLS assets.
However, a certificate whose SAN field does [not include any domain names but only IP addresses](https://github.com/etcd-io/etcd/issues/9541) would produce a `*tls.ClientHelloInfo` with an empty `ServerName` field, thus failing to trigger the TLS reload on the initial TLS handshake; this becomes a problem when expired certificates need to be replaced online.
Now, `(*tls.Config).Certificates` is created empty on the initial TLS client handshake, first to trigger `(*tls.Config).GetCertificate`, and then to populate the rest of the certificates on every new TLS connection, even when the client SNI is empty (e.g. cert only includes IPs).
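For reference, the following minimal Go sketch shows how a `GetCertificate` callback re-reads certificates from disk on each handshake; the file paths are placeholders:
```go
import "crypto/tls"

// Leaving Certificates empty forces Go to call GetCertificate on every
// handshake, even when the client sends no SNI (e.g. IP-only SAN certs).
cfg := &tls.Config{
	GetCertificate: func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
		// Re-read the key pair each time; overwriting the files on disk
		// rotates certificates without a server restart.
		cert, err := tls.LoadX509KeyPair("server.crt", "server.key")
		if err != nil {
			return nil, err
		}
		return &cert, nil
	},
}
```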
## Notes for Host Whitelist
The `etcd --host-whitelist` flag specifies acceptable hostnames from HTTP client requests. The client origin policy protects against ["DNS Rebinding"](https://en.wikipedia.org/wiki/DNS_rebinding) attacks on insecure etcd servers. That is, any website can simply create an authorized DNS name and point it at `"localhost"` (or any other address). Then, all HTTP endpoints of an etcd server listening on `"localhost"` become accessible, and are thus vulnerable to DNS rebinding attacks. See [CVE-2018-5702](https://bugs.chromium.org/p/project-zero/issues/detail?id=1447#c2) for more detail.
Client origin policy works as follows:
1. If the client connection is secure via HTTPS, allow any hostname.
2. If the client connection is not secure and `"HostWhitelist"` is not empty, only allow HTTP requests whose Host field is listed in the whitelist.
Note that the client origin policy is enforced whether authentication is enabled or not, for tighter controls.
By default, `etcd --host-whitelist` and `embed.Config.HostWhitelist` are set *empty* to allow all hostnames. Note that when specifying hostnames, loopback addresses are not added automatically. To allow loopback interfaces, add them to whitelist manually (e.g. `"localhost"`, `"127.0.0.1"`, etc.).
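For embedded servers, the same policy can be set in code. A minimal sketch, assuming `HostWhitelist` is a set keyed by hostname, as in recent releases:
```go
import "github.com/coreos/etcd/embed"

cfg := embed.NewConfig()
// Only these Host header values are accepted on insecure HTTP listeners;
// loopback names must be whitelisted explicitly.
cfg.HostWhitelist = map[string]struct{}{
	"localhost": {},
	"127.0.0.1": {},
}
```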
## Frequently asked questions
### I'm seeing an SSLv3 alert handshake failure when using TLS client authentication?

View File

@ -1,12 +1,14 @@
## Supported platforms
---
title: Supported systems
---
### Current support
## Current support
The following table lists etcd support status for common architectures and operating systems,
The following table lists etcd support status for common architectures and operating systems:
| Architecture | Operating System | Status | Maintainers |
| ------------ | ---------------- | ------------ | --------------------------- |
| amd64 | Darwin | Experimental | etcd maintainers |
| amd64 | Linux | Stable | etcd maintainers |
| amd64 | Windows | Experimental | |
| arm64 | Linux | Experimental | @glevand |
@ -14,11 +16,11 @@ The following table lists etcd support status for common architectures and opera
| 386 | Linux | Unstable | |
| ppc64le | Linux | Stable | etcd maintainers, @mkumatag |
* etcd-maintainers are listed in https://github.com/coreos/etcd/blob/master/MAINTAINERS.
* etcd-maintainers are listed in https://github.com/etcd-io/etcd/blob/master/MAINTAINERS.
Experimental platforms appear to work in practice and have some platform-specific code in etcd, but do not fully conform to the stable support policy. Unstable platforms have been lightly tested, but less so than experimental ones. Unlisted architecture and operating system pairs are currently unsupported; caveat emptor.
### Supporting a new platform
## Supporting a new system platform
For etcd to officially support a new platform as stable, a few requirements are necessary to ensure acceptable quality:
@ -28,7 +30,7 @@ For etcd to officially support a new platform as stable, a few requirements are
4. Set up CI (TravisCI, SemaphoreCI or Jenkins) for running integration tests; etcd must pass intensive tests.
5. (Optional) Set up a functional testing cluster; an etcd cluster should survive stress testing.
### 32-bit and other unsupported systems
## 32-bit and other unsupported systems
etcd has known issues on 32-bit systems due to a bug in the Go runtime. See the [Go issue][go-issue] and [atomic package][go-atomic] for more information.

View File

@ -1,4 +1,6 @@
# Migrate applications from using API v2 to API v3
---
title: Migrate applications from using API v2 to API v3
---
The v2 data store is still accessible from the API v2 after upgrading to etcd 3. Thus, it will work as before and require no application changes. With etcd 3, applications use the new grpc API v3 to access the mvcc store, which provides more features and improved performance. The mvcc store and the old store v2 are separate and isolated; writes to the store v2 will not affect the mvcc store and, similarly, writes to the mvcc store will not affect the store v2.
@ -6,7 +8,7 @@ Migrating an application from the API v2 to the API v3 involves two steps: 1) mi
## Migrate client library
API v3 is different from API v2, thus application developers need to use a new client library to send requests to etcd API v3. The documentation of the client v3 is available at https://godoc.org/github.com/coreos/etcd/clientv3.
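For illustration, a minimal clientv3 sketch (the endpoint and key are placeholders):
```go
import (
	"context"
	"fmt"
	"time"

	"github.com/coreos/etcd/clientv3"
)

cli, err := clientv3.New(clientv3.Config{
	Endpoints:   []string{"127.0.0.1:2379"},
	DialTimeout: 5 * time.Second,
})
if err != nil {
	panic(err)
}
defer cli.Close()

// Writes go to the mvcc store over gRPC; the old v2 store is untouched.
if _, err := cli.Put(context.Background(), "foo", "bar"); err != nil {
	panic(err)
}
resp, err := cli.Get(context.Background(), "foo")
if err != nil {
	panic(err)
}
for _, kv := range resp.Kvs {
	fmt.Printf("%s=%s\n", kv.Key, kv.Value)
}
```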
There are some notable differences between API v2 and API v3:
@ -38,13 +40,17 @@ Second, migrate the v2 keys into v3 with the [migrate][migrate_command] (`ETCDCT
Restart the etcd members and everything should just work.
For etcd v3.3+, run `ETCDCTL_API=3 etcdctl endpoint hashkv --cluster` to ensure key-value stores are consistent post migration.
**Warning**: When the v2 store has expiring TTL keys and the migrate command intends to preserve TTLs, the migration may be inconsistent with the last committed v2 state when run on any member whose raft index is less than the last leader's raft index.
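The same consistency check as the `endpoint hashkv` command above can be scripted against clientv3's maintenance API. A hedged sketch, assuming an existing `*clientv3.Client` named `cli` and illustrative endpoints (`HashKV` requires server v3.3+):
```go
import (
	"context"
	"fmt"
)

// Compare the KV store hash across members at the latest revision (rev=0).
// Differing hashes on a quiesced cluster indicate an inconsistent migration.
for _, ep := range []string{"localhost:2379", "localhost:22379", "localhost:32379"} {
	resp, err := cli.HashKV(context.Background(), ep, 0)
	if err != nil {
		panic(err)
	}
	fmt.Println(ep, resp.Hash)
}
```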
### Online migration
If the application cannot tolerate any downtime, then it must migrate online. The implementation of online migration will vary from application to application but the overall idea is the same.
First, write application code using the v3 API. The application must support two modes: a migration mode and a normal mode. The application starts in migration mode. When running in migration mode, the application reads keys using the v3 API first, and, if it cannot find the key, it retries with the API v2. In normal mode, the application only reads keys using the v3 API. The application writes keys over the API v3 in both modes. To acknowledge a switch from migration mode to normal mode, the application watches a switch mode key. When the switch key's value turns to `true`, the application switches over from migration mode to normal mode.
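A hedged sketch of that mode switch; the switch key name and the v2 fallback reader are application-specific assumptions:
```go
import (
	"context"
	"sync/atomic"

	"github.com/coreos/etcd/clientv3"
)

// watchModeSwitch flips the application from migration mode to normal
// mode once the (application-chosen) switch key turns "true".
func watchModeSwitch(ctx context.Context, cli *clientv3.Client, migrating *int32) {
	for wresp := range cli.Watch(ctx, "/app/mode-switch") {
		for _, ev := range wresp.Events {
			if string(ev.Kv.Value) == "true" {
				atomic.StoreInt32(migrating, 0) // normal mode: v3 reads only
			}
		}
	}
}

// readKey reads via the v3 API first; while still migrating, it falls
// back to an application-supplied v2 reader (readV2 is assumed).
func readKey(ctx context.Context, cli *clientv3.Client, migrating *int32,
	key string, readV2 func(string) (string, bool)) (string, bool) {
	if resp, err := cli.Get(ctx, key); err == nil && len(resp.Kvs) > 0 {
		return string(resp.Kvs[0].Value), true
	}
	if atomic.LoadInt32(migrating) == 1 {
		return readV2(key)
	}
	return "", false
}
```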
Second, start a background job to migrate data from the store v2 to the mvcc store by reading keys from the API v2 and writing keys to the API v3.
After finishing data migration, the background job writes `true` into the switch mode key to notify the application that it may switch modes.

View File

@ -1,6 +1,8 @@
## Versioning
---
title: Versioning
---
### Service versioning
## Service versioning
etcd uses [semantic versioning](http://semver.org).
New minor versions may add additional features to the API.
@ -11,7 +13,7 @@ Get the running etcd cluster version with `etcdctl`:
ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 endpoint status
```
### API versioning
## API versioning
The `v3` API responses should not change after the 3.0.0 release but new features will be added over time.
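The server version is also available programmatically. A minimal sketch, assuming an existing `*clientv3.Client` named `cli`; the endpoint is illustrative:
```go
import (
	"context"
	"fmt"
)

// Status reports a single endpoint's server version.
resp, err := cli.Status(context.Background(), "127.0.0.1:2379")
if err != nil {
	panic(err)
}
fmt.Println("server version:", resp.Version)
```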

View File

@ -0,0 +1,3 @@
---
title: Platforms
---

View File

@ -1,4 +1,6 @@
## Introduction
---
title: Amazon Web Services
---
This guide assumes operational knowledge of Amazon Web Services (AWS), specifically Amazon Elastic Compute Cloud (EC2). It provides an introduction to design considerations for an etcd deployment on AWS EC2 and to how AWS-specific features may be utilized in that context.
@ -6,7 +8,7 @@ This guide assumes operational knowledge of Amazon Web Services (AWS), specifica
As a critical building block for distributed systems, it is crucial to perform adequate capacity planning in order to support the intended cluster workload. As etcd is a highly available and strongly consistent data store, increasing the number of nodes in an etcd cluster will generally affect performance adversely. This makes sense intuitively, as more nodes means more members for the leader to coordinate state across. The most direct way to increase throughput and decrease latency of an etcd cluster is to allocate more disk I/O, network I/O, CPU, and memory to cluster members. In the event it is impossible to temporarily divert incoming requests to the cluster, scaling the EC2 instances which comprise the etcd cluster members one at a time may improve performance. It is, however, best to avoid bottlenecks through capacity planning.
The etcd team has produced a [hardware recommendation guide]( ../op-guide/hardware.md) which is very useful for “ballparking” how many nodes and what instance type are necessary for a cluster.
The etcd team has produced a [hardware recommendation guide](../op-guide/hardware.md) which is very useful for “ballparking” how many nodes and what instance type are necessary for a cluster.
AWS provides a service for creating groups of EC2 instances which are dynamically sized to match load on the instances. Using an Auto Scaling Group ([ASG](http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroup.html)) to dynamically scale an etcd cluster is not recommended for several reasons including:

View File

@ -1,4 +1,6 @@
# Run etcd on Container Linux with systemd
---
title: Container Linux with systemd
---
The following guide shows how to run etcd with [systemd][systemd-docs] under [Container Linux][container-linux-docs].
@ -134,13 +136,13 @@ etcd:
initial_cluster_state: new
client_cert_auth: true
trusted_ca_file: /etc/ssl/certs/etcd-root-ca.pem
cert-file: /etc/ssl/certs/s1.pem
key-file: /etc/ssl/certs/s1-key.pem
peer-client-cert-auth: true
peer-trusted-ca-file: /etc/ssl/certs/etcd-root-ca.pem
peer-cert-file: /etc/ssl/certs/s1.pem
peer-key-file: /etc/ssl/certs/s1-key.pem
auto-compaction-retention: 1
cert_file: /etc/ssl/certs/s1.pem
key_file: /etc/ssl/certs/s1-key.pem
peer_client_cert_auth: true
peer_trusted_ca_file: /etc/ssl/certs/etcd-root-ca.pem
peer_cert_file: /etc/ssl/certs/s1.pem
peer_key_file: /etc/ssl/certs/s1-key.pem
auto_compaction_retention: 1
```

View File

@ -1,4 +1,6 @@
# FreeBSD
---
title: FreeBSD
---
Starting with version 0.1.2, both etcd and etcdctl have been ported to FreeBSD and can be installed either via packages or the ports system. Their versions have recently been updated to 0.2.0, so etcd and etcdctl can now be enjoyed on FreeBSD 10.0 (RC4 as of now) and 9.x, where they have been tested. They might also work when installed from ports on earlier versions of FreeBSD, but this is untested; caveat emptor.

View File

@ -1,4 +1,6 @@
# Production users
---
title: Production users
---
This document tracks people and use cases for etcd in production. By creating a list of production use cases we hope to build a community of advisors that we can reach out to with experience using various etcd applications, operation environments, and cluster sizes. The etcd development team may reach out periodically to check in on how etcd is working in the field and update this list.
@ -79,9 +81,9 @@ Radius Intelligence uses Kubernetes running CoreOS to containerize and scale int
PD (Placement Driver) is the central controller in the TiDB cluster. It saves the cluster meta information, schedules the data, allocates the globally unique timestamps for distributed transactions, and so on. It embeds etcd to supply high availability and automatic failover.
## Canal
## Huawei
- *Application*: system configuration for overlay network
- *Application*: System configuration for overlay network (Canal)
- *Launched*: June 2016
- *Cluster Size*: 3 members for each cluster
- *Order of Data Size*: kilobytes
@ -237,3 +239,12 @@ At [Branch][branch], we use kubernetes heavily as our core microservice platform
- *Environment*: Bare Metal
- *Backups*: None, all data is considered ephemeral.
## Transwarp
- *Application*: Transwarp Data Cloud, Transwarp Operating System, Transwarp Data Hub, Sophon
- *Launched*: January 2016
- *Cluster Size*: Multiple clusters, multiple sizes
- *Order of Data Size*: Megabytes
- *Operator*: Transwarp Operating System
- *Environment*: Bare Metal, Container
- *Backups*: backup scripts

View File

@ -1,4 +1,6 @@
# Reporting bugs
---
title: Reporting bugs
---
If any part of the etcd project has bugs or documentation mistakes, please let us know by [opening an issue][etcd-issue]. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.
@ -41,5 +43,5 @@ $ sudo journalctl -u etcd2
Due to an upstream systemd bug, journald may miss the last few log lines when its processes exit. If journalctl says etcd stopped without a fatal or panic message, try `sudo journalctl -f -t etcd2` to get the full log.
[etcd-issue]: https://github.com/coreos/etcd/issues/new
[etcd-issue]: https://github.com/etcd-io/etcd/issues/new
[filing-good-bugs]: http://fantasai.inkedblade.net/style/talks/filing-good-bugs/

View File

@ -1,4 +1,6 @@
# Overview
---
title: etcd v3 API
---
The etcd v3 API is designed to give users a more efficient and cleaner abstraction compared to etcd v2. There are a number of semantic and protocol changes in this new API. For an overview [see Xiang Li's video](https://youtu.be/J5AioGtEPeQ?t=211).
@ -52,6 +54,7 @@ the size in the future a little bit or make it configurable.
## Examples
### Put a key (foo=bar)
```
// A put is always successful
Put( PutRequest { key = foo, value = bar } )
@ -207,5 +210,5 @@ WatchResponse {
```
[api-protobuf]: https://github.com/coreos/etcd/blob/release-2.3/etcdserver/etcdserverpb/rpc.proto
[kv-protobuf]: https://github.com/coreos/etcd/blob/release-2.3/storage/storagepb/kv.proto
[api-protobuf]: https://github.com/etcd-io/etcd/blob/master/etcdserver/etcdserverpb/rpc.proto
[kv-protobuf]: https://github.com/etcd-io/etcd/blob/master/mvcc/mvccpb/kv.proto

View File

@ -1,211 +0,0 @@
# Overview
The etcd v3 API is designed to give users a more efficient and cleaner abstraction compared to etcd v2. There are a number of semantic and protocol changes in this new API. For an overview [see Xiang Li's video](https://youtu.be/J5AioGtEPeQ?t=211).
To prove out the design of the v3 API the team has also built [a number of example recipes](https://github.com/coreos/etcd/tree/master/contrib/recipes); there is a [video discussing these recipes too](https://www.youtube.com/watch?v=fj-2RY-3yVU&feature=youtu.be&t=590).
# Design
1. Flatten binary key-value space
2. Keep the event history until compaction
- access to old version of keys
- user controlled history compaction
3. Support range query
- Pagination support with limit argument
- Support consistency guarantee across multiple range queries
4. Replace TTL key with Lease
- more efficient / low-cost keep-alive
- a logical group of TTL keys
5. Replace CAS/CAD with multi-object Txn
- MUCH MORE powerful and flexible
6. Support efficient watching with multiple ranges
7. RPC API supports the complete set of APIs.
- more efficient than JSON/HTTP
- additional txn/lease support
8. HTTP API supports a subset of APIs.
- easy for people to try out etcd
- easy for people to write simple etcd applications
## Notes
### Request Size Limitation
The max request size is around 1MB. Since etcd replicates requests in a streaming fashion, a very large
request might block other requests for a long time. The use case for etcd is to store small configuration
values, so we prevent users from submitting large requests. This also applies to Txn requests. We might loosen
the size limit in the future a little bit or make it configurable.
## Protobuf Defined API
[api protobuf][api-protobuf]
[kv protobuf][kv-protobuf]
## Examples
### Put a key (foo=bar)
```
// A put is always successful
Put( PutRequest { key = foo, value = bar } )
PutResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 1,
raft_term = 0x1,
}
```
### Get a key (assume we have foo=bar)
```
Get ( RangeRequest { key = foo } )
RangeResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 1,
raft_term = 0x1,
kvs = {
{
key = foo,
value = bar,
create_revision = 1,
mod_revision = 1,
version = 1;
},
},
}
```
### Range over a key space (assume we have foo0=bar0… foo100=bar100)
```
Range ( RangeRequest { key = foo, end_key = foo80, limit = 30 } )
RangeResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 100,
raft_term = 0x1,
kvs = {
{
key = foo0,
value = bar0,
create_revision = 1,
mod_revision = 1,
version = 1;
},
...,
{
key = foo30,
value = bar30,
create_revision = 30,
mod_revision = 30,
version = 1;
},
},
}
```
### Finish a txn (assume we have foo0=bar0, foo1=bar1)
```
Txn(TxnRequest {
// mod_revision of foo0 is equal to 1, mod_revision of foo1 is greater than 1
compare = {
{compareType = equal, key = foo0, mod_revision = 1},
{compareType = greater, key = foo1, mod_revision = 1}}
},
// if the comparison succeeds, put foo2 = bar2
success = {PutRequest { key = foo2, value = success }},
// if the comparison fails, put foo2=fail
failure = {PutRequest { key = foo2, value = failure }},
)
TxnResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 3,
raft_term = 0x1,
succeeded = true,
responses = {
// response of PUT foo2=success
{
cluster_id = 0x1000,
member_id = 0x1,
revision = 3,
raft_term = 0x1,
}
}
}
```
### Watch on a key/range
```
Watch( WatchRequest{
key = foo,
end_key = fop, // prefix foo
start_revision = 20,
end_revision = 10000,
// server decided notification frequency
progress_notification = true,
}
… // this can be a watch request stream
)
// put (foo0=bar0) event at 3
WatchResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 3,
raft_term = 0x1,
event_type = put,
kv = {
key = foo0,
value = bar0,
create_revision = 1,
mod_revision = 1,
version = 1;
},
}
// a notification at 2000
WatchResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 2000,
raft_term = 0x1,
// nil event as notification
}
// put (foo0=bar3000) event at 3000
WatchResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 3000,
raft_term = 0x1,
event_type = put,
kv = {
key = foo0,
value = bar3000,
create_revision = 1,
mod_revision = 3000,
version = 2;
},
}
```
[api-protobuf]: https://github.com/coreos/etcd/blob/master/etcdserver/etcdserverpb/rpc.proto
[kv-protobuf]: https://github.com/coreos/etcd/blob/master/mvcc/mvccpb/kv.proto

View File

@ -1,4 +1,6 @@
# Tuning
---
title: Tuning
---
The default settings in etcd should work well for installations on a local network where the average network latency is low. However, when using etcd across multiple data centers or over networks with high latency, the heartbeat interval and election timeout settings may need tuning.
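Embedded servers can tune the equivalent fields directly. A minimal sketch with the embed package; the values are illustrative, and `TickMs`/`ElectionMs` are assumed to mirror the `--heartbeat-interval`/`--election-timeout` flags:
```go
import "github.com/coreos/etcd/embed"

cfg := embed.NewConfig()
// Milliseconds; raise both together on high-latency networks, keeping
// the election timeout several times the heartbeat interval.
cfg.TickMs = 100
cfg.ElectionMs = 2500
```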
@ -71,12 +73,12 @@ dropped MsgAppResp to 247ae21ff9436b2d since streamMsg's sending buffer is full
These errors may be resolved by prioritizing etcd's peer traffic over its client traffic. On Linux, peer traffic can be prioritized by using the traffic control mechanism:
```
```sh
tc qdisc add dev eth0 root handle 1: prio bands 3
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip sport 2380 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 2380 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 match ip sport 2739 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 match ip dport 2739 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 match ip sport 2379 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 match ip dport 2379 0xffff flowid 1:1
```
[ping]: https://en.wikipedia.org/wiki/Ping_(networking_utility)

View File

@ -0,0 +1,3 @@
---
title: Upgrading
---

View File

@ -1,4 +1,6 @@
## Upgrade etcd from 2.3 to 3.0
---
title: Upgrade etcd from 2.3 to 3.0
---
In the general case, upgrading from etcd 2.3 to 3.0 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v2.3 processes and replace them with etcd v3.0 processes
@ -8,6 +10,8 @@ Before [starting an upgrade](#upgrade-procedure), read through the rest of this
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/etcd-io/etcd/issues/9480), etcd server v3.2+ panics when it restores from existing snapshots but there is no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. It also prevents accidental v3 data loss (e.g. the `db` file might have been moved). etcd requires that, once the v3 migration has happened, the server run only with v3 data. Do not upgrade to newer v3 versions until the v3.0 server contains v3 data.
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.0, the running cluster must be 2.3 or greater. If it's before 2.3, please upgrade to [2.3](https://github.com/coreos/etcd/releases/tag/v2.3.8) before upgrading to 3.0.
@ -122,8 +126,8 @@ $ ETCDCTL_API=3 etcdctl endpoint health
## Known Issues
- etcd < v3.1 does not work properly if built with Go > v1.7. See [Issue 6951](https://github.com/coreos/etcd/issues/6951) for additional information.
- etcd < v3.1 does not work properly if built with Go > v1.7. See [Issue 6951](https://github.com/etcd-io/etcd/issues/6951) for additional information.
- If an error such as `transport: http2Client.notifyError got notified that the client transport was broken unexpected EOF.` shows up in the etcd server logs, be sure etcd is a pre-built release or built with (etcd v3.1+ & go v1.7+) or (etcd <v3.1 & go v1.6.x).
- Adding a v3 node to a v2.3 cluster during upgrades is not supported and could trigger panics. See [Issue 7429](https://github.com/coreos/etcd/issues/7429) for additional information. Mixed versions of etcd members are only allowed during v3 migration. Finish upgrades before making any membership changes.
- Adding a v3 node to a v2.3 cluster during upgrades is not supported and could trigger panics. See [Issue 7429](https://github.com/etcd-io/etcd/issues/7429) for additional information. Mixed versions of etcd members are only allowed during v3 migration. Finish upgrades before making any membership changes.
[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev

View File

@ -1,4 +1,6 @@
## Upgrade etcd from 3.0 to 3.1
---
title: Upgrade etcd from 3.0 to 3.1
---
In the general case, upgrading from etcd 3.0 to 3.1 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.0 processes and replace them with etcd v3.1 processes
@ -8,6 +10,17 @@ Before [starting an upgrade](#upgrade-procedure), read through the rest of this
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/etcd-io/etcd/issues/9480), etcd server v3.2+ panics when it restores from existing snapshots but there is no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. It also prevents accidental v3 data loss (e.g. the `db` file might have been moved). etcd requires that, once the v3 migration has happened, the server run only with v3 data. Do not upgrade to newer v3 versions until the v3.0 server contains v3 data.
#### Monitoring
The following metrics from v3.0.x have been deprecated in favor of [go-grpc-prometheus](https://github.com/grpc-ecosystem/go-grpc-prometheus):
- `etcd_grpc_requests_total`
- `etcd_grpc_requests_failed_total`
- `etcd_grpc_active_streams`
- `etcd_grpc_unary_requests_duration_seconds`
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.1, the running cluster must be 3.0 or greater. If it's before 3.0, please [upgrade to 3.0](upgrade_3_0.md) before upgrading to 3.1.

View File

@ -1,4 +1,6 @@
## Upgrade etcd from 3.1 to 3.2
---
title: Upgrade etcd from 3.1 to 3.2
---
In the general case, upgrading from etcd 3.1 to 3.2 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.1 processes and replace them with etcd v3.2 processes
@ -6,11 +8,171 @@ In the general case, upgrading from etcd 3.1 to 3.2 can be a zero-downtime, roll
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Client upgrade checklists
### Upgrade checklists
3.2 introduces two breaking changes.
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/etcd-io/etcd/issues/9480), etcd server v3.2+ panics when it restores from existing snapshots but there is no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. It also prevents accidental v3 data loss (e.g. the `db` file might have been moved). etcd requires that, once the v3 migration has happened, the server run only with v3 data. Do not upgrade to newer v3 versions until the v3.0 server contains v3 data.
Previously, `clientv3.Lease.TimeToLive` API returned `lease.ErrLeaseNotFound` on non-existent lease ID. 3.2 instead returns TTL=-1 in its response and no error (see [#7305](https://github.com/coreos/etcd/pull/7305)).
Highlighted breaking changes in 3.2.
#### Changed default `snapshot-count` value
A higher `--snapshot-count` holds more Raft entries in memory until the snapshot, thus causing [recurrent higher memory usage](https://github.com/kubernetes/kubernetes/issues/60589#issuecomment-371977156). Since the leader retains the latest Raft entries for longer, a slow follower has more time to catch up before a leader snapshot. `--snapshot-count` is a tradeoff between higher memory usage and better availability for slow followers.
Since v3.2, the default value of `--snapshot-count` has [changed from 10,000 to 100,000](https://github.com/etcd-io/etcd/pull/7160).
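For embedded servers, the equivalent knob is a config field. A hedged sketch, assuming the `SnapshotCount` field of recent releases (older embed versions name it `SnapCount`):
```go
import "github.com/coreos/etcd/embed"

cfg := embed.NewConfig()
// Trade memory (more retained Raft entries) for more catch-up headroom
// for slow followers; 100,000 is the default since v3.2.
cfg.SnapshotCount = 100000
```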
#### Changed gRPC dependency (>=3.2.10)
3.2.10 or later now requires [grpc/grpc-go](https://github.com/grpc/grpc-go/releases) `v1.7.5` (<=3.2.9 requires `v1.2.1`).
##### Deprecated `grpclog.Logger`
`grpclog.Logger` has been deprecated in favor of [`grpclog.LoggerV2`](https://github.com/grpc/grpc-go/blob/master/grpclog/loggerv2.go). `clientv3.Logger` is now `grpclog.LoggerV2`.
Before
```go
import "github.com/coreos/etcd/clientv3"
clientv3.SetLogger(log.New(os.Stderr, "grpc: ", 0))
```
After
```go
import "github.com/coreos/etcd/clientv3"
import "google.golang.org/grpc/grpclog"
clientv3.SetLogger(grpclog.NewLoggerV2(os.Stderr, os.Stderr, os.Stderr))
// log.New above cannot be used (it does not implement the grpclog.LoggerV2 interface)
```
##### Deprecated `grpc.ErrClientConnTimeout`
Previously, a `grpc.ErrClientConnTimeout` error was returned on client dial time-outs. 3.2 instead returns `context.DeadlineExceeded` (see [#8504](https://github.com/etcd-io/etcd/issues/8504)).
Before
```go
// expect dial time-out on ipv4 blackhole
_, err := clientv3.New(clientv3.Config{
Endpoints: []string{"http://254.0.0.1:12345"},
DialTimeout: 2 * time.Second
})
if err == grpc.ErrClientConnTimeout {
// handle errors
}
```
After
```go
_, err := clientv3.New(clientv3.Config{
Endpoints: []string{"http://254.0.0.1:12345"},
DialTimeout: 2 * time.Second
})
if err == context.DeadlineExceeded {
// handle errors
}
```
#### Changed maximum request size limits (>=3.2.10)
3.2.10 and 3.2.11 allow custom request size limits on the server side. >=3.2.12 allows custom request size limits for both server and **client side**. In previous versions (v3.2.10, v3.2.11), the client response size was limited to only 4 MiB.
Server-side request limits can be configured with `--max-request-bytes` flag:
```bash
# limits request size to 1.5 KiB
etcd --max-request-bytes 1536
# client writes exceeding 1.5 KiB will be rejected
etcdctl put foo [LARGE VALUE...]
# etcdserver: request is too large
```
Or configure `embed.Config.MaxRequestBytes` field:
```go
import "github.com/coreos/etcd/embed"
import "github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
// limit requests to 5 MiB
cfg := embed.NewConfig()
cfg.MaxRequestBytes = 5 * 1024 * 1024
// client writes exceeding 5 MiB will be rejected
_, err := cli.Put(ctx, "foo", [LARGE VALUE...])
err == rpctypes.ErrRequestTooLarge
```
**If not specified, server-side limit defaults to 1.5 MiB**.
Client-side request limits must be configured based on server-side limits.
```bash
# limits request size to 1 MiB
etcd --max-request-bytes 1048576
```
```go
import "github.com/coreos/etcd/clientv3"
cli, _ := clientv3.New(clientv3.Config{
Endpoints: []string{"127.0.0.1:2379"},
MaxCallSendMsgSize: 2 * 1024 * 1024,
MaxCallRecvMsgSize: 3 * 1024 * 1024,
})
// client writes exceeding "--max-request-bytes" will be rejected from etcd server
_, err := cli.Put(ctx, "foo", strings.Repeat("a", 1*1024*1024+5))
err == rpctypes.ErrRequestTooLarge
// client writes exceeding "MaxCallSendMsgSize" will be rejected from client-side
_, err = cli.Put(ctx, "foo", strings.Repeat("a", 5*1024*1024))
err.Error() == "rpc error: code = ResourceExhausted desc = grpc: trying to send message larger than max (5242890 vs. 2097152)"
// some writes under limits
for i := range []int{0,1,2,3,4} {
_, err = cli.Put(ctx, fmt.Sprintf("foo%d", i), strings.Repeat("a", 1*1024*1024-500))
if err != nil {
panic(err)
}
}
// client reads exceeding "MaxCallRecvMsgSize" will be rejected from client-side
_, err = cli.Get(ctx, "foo", clientv3.WithPrefix())
err.Error() == "rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5240509 vs. 3145728)"
```
**If not specified, client-side send limit defaults to 2 MiB (1.5 MiB + gRPC overhead bytes) and receive limit to `math.MaxInt32`**. Please see [clientv3 godoc](https://godoc.org/github.com/coreos/etcd/clientv3#Config) for more detail.
#### Changed raw gRPC client wrappers
3.2.12 or later changes the function signatures of `clientv3` gRPC client wrapper. This change was needed to support [custom `grpc.CallOption` on message size limits](https://github.com/etcd-io/etcd/pull/9047).
Before and after
```diff
-func NewKVFromKVClient(remote pb.KVClient) KV {
+func NewKVFromKVClient(remote pb.KVClient, c *Client) KV {
-func NewClusterFromClusterClient(remote pb.ClusterClient) Cluster {
+func NewClusterFromClusterClient(remote pb.ClusterClient, c *Client) Cluster {
-func NewLeaseFromLeaseClient(remote pb.LeaseClient, keepAliveTimeout time.Duration) Lease {
+func NewLeaseFromLeaseClient(remote pb.LeaseClient, c *Client, keepAliveTimeout time.Duration) Lease {
-func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient) Maintenance {
+func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient, c *Client) Maintenance {
-func NewWatchFromWatchClient(wc pb.WatchClient) Watcher {
+func NewWatchFromWatchClient(wc pb.WatchClient, c *Client) Watcher {
```
#### Changed `clientv3.Lease.TimeToLive` API
Previously, `clientv3.Lease.TimeToLive` API returned `lease.ErrLeaseNotFound` on non-existent lease ID. 3.2 instead returns TTL=-1 in its response and no error (see [#7305](https://github.com/etcd-io/etcd/pull/7305)).
Before
@ -30,6 +192,8 @@ resp.TTL == -1
err == nil
```
#### Moved `clientv3.NewFromConfigFile` to `clientv3.yaml.NewConfig`
`clientv3.NewFromConfigFile` is moved to `yaml.NewConfig`.
Before
@ -46,6 +210,12 @@ import clientv3yaml "github.com/coreos/etcd/clientv3/yaml"
clientv3yaml.NewConfig
```
#### Change in `--listen-peer-urls` and `--listen-client-urls`
3.2 now rejects domain names for `--listen-peer-urls` and `--listen-client-urls` (3.1 only printed warnings), since a domain name is invalid for network interface binding. Make sure that those URLs are properly formatted as `scheme://IP:port`.
See [issue #6336](https://github.com/etcd-io/etcd/issues/6336) for more context.
### Server upgrade checklists
#### Upgrade requirements

View File

@ -0,0 +1,541 @@
---
title: Upgrade etcd from 3.2 to 3.3
---
In the general case, upgrading from etcd 3.2 to 3.3 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.2 processes and replace them with etcd v3.3 processes
- after running all v3.3 processes, new features in v3.3 are available to the cluster
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/etcd-io/etcd/issues/9480), etcd server v3.2+ panics when it restores from existing snapshots but there is no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. It also prevents accidental v3 data loss (e.g. the `db` file might have been moved). etcd requires that, once the v3 migration has happened, the server run only with v3 data. Do not upgrade to newer v3 versions until the v3.0 server contains v3 data.
Highlighted breaking changes in 3.3.
#### Changed value type of `etcd --auto-compaction-retention` flag to `string`
Changed the `--auto-compaction-retention` flag to [accept string values](https://github.com/etcd-io/etcd/pull/8563) with [finer granularity](https://github.com/etcd-io/etcd/issues/8503). Now that `--auto-compaction-retention` accepts string values, the etcd configuration YAML file's `auto-compaction-retention` field must be changed to the `string` type. Previously, `--config-file etcd.config.yaml` could have an `auto-compaction-retention: 24` field; it now must be `auto-compaction-retention: "24"` or `auto-compaction-retention: "24h"`. If configured as `--auto-compaction-mode periodic --auto-compaction-retention "24h"`, the time duration value for the `--auto-compaction-retention` flag must be valid for Go's [`time.ParseDuration`](https://golang.org/pkg/time/#ParseDuration) function.
```diff
# etcd.config.yaml
+auto-compaction-mode: periodic
-auto-compaction-retention: 24
+auto-compaction-retention: "24"
+# Or
+auto-compaction-retention: "24h"
```
#### Changed `*etcdserver.ServerConfig` to `etcdserver.ServerConfig` in `etcdserver.EtcdServer`
`etcdserver.EtcdServer` has changed the type of its member field from `*etcdserver.ServerConfig` to `etcdserver.ServerConfig`, and `etcdserver.NewServer` now takes `etcdserver.ServerConfig` instead of `*etcdserver.ServerConfig`.
Before and after (e.g. [k8s.io/kubernetes/test/e2e_node/services/etcd.go](https://github.com/kubernetes/kubernetes/blob/release-1.8/test/e2e_node/services/etcd.go#L50-L55))
```diff
import "github.com/coreos/etcd/etcdserver"
type EtcdServer struct {
*etcdserver.EtcdServer
- config *etcdserver.ServerConfig
+ config etcdserver.ServerConfig
}
func NewEtcd(dataDir string) *EtcdServer {
- config := &etcdserver.ServerConfig{
+ config := etcdserver.ServerConfig{
DataDir: dataDir,
...
}
return &EtcdServer{config: config}
}
func (e *EtcdServer) Start() error {
var err error
e.EtcdServer, err = etcdserver.NewServer(e.config)
...
```
#### Added `embed.Config.LogOutput` struct
**Note that this field has been renamed to `embed.Config.LogOutputs` in `[]string` type in v3.4. Please see [v3.4 upgrade guide](https://github.com/etcd-io/etcd/blob/master/Documentation/upgrades/upgrade_3_4.md) for more details.**
Field `LogOutput` is added to `embed.Config`:
```diff
package embed
type Config struct {
Debug bool `json:"debug"`
LogPkgLevels string `json:"log-package-levels"`
+ LogOutput string `json:"log-output"`
...
```
Before, gRPC server warnings were logged in etcdserver:
```
WARNING: 2017/11/02 11:35:51 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {localhost:2379 <nil>}
WARNING: 2017/11/02 11:35:51 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {localhost:2379 <nil>}
```
From v3.3, gRPC server logs are disabled by default.
**Note that `embed.Config.SetupLogging` method has been deprecated in v3.4. Please see [v3.4 upgrade guide](https://github.com/etcd-io/etcd/blob/master/Documentation/upgrades/upgrade_3_4.md) for more details.**
```go
import "github.com/coreos/etcd/embed"
cfg := &embed.Config{Debug: false}
cfg.SetupLogging()
```
Set `embed.Config.Debug` field to `true` to enable gRPC server logs.
#### Changed `/health` endpoint response
Previously, `[endpoint]:[client-port]/health` returned manually marshaled JSON value. 3.3 now defines [`etcdhttp.Health`](https://godoc.org/github.com/coreos/etcd/etcdserver/api/etcdhttp#Health) struct.
Note that in v3.3.0-rc.0, v3.3.0-rc.1, and v3.3.0-rc.2, `etcdhttp.Health` had a boolean `"health"` field and an `"errors"` field. For backward compatibility, the `"health"` field was reverted to the `string` type and the `"errors"` field was removed. Further health information will be provided in separate APIs.
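Roughly, the reverted struct looks as follows (a simplified sketch, not the verbatim source):
```go
// etcdhttp.Health (simplified sketch): "health" reverted to a string,
// and the rc-only "errors" field removed.
type Health struct {
	Health string `json:"health"`
}
```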
```bash
$ curl http://localhost:2379/health
{"health":"true"}
```
#### Changed gRPC gateway HTTP endpoints (replaced `/v3alpha` with `/v3beta`)
Before
```bash
curl -L http://localhost:2379/v3alpha/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```
After
```bash
curl -L http://localhost:2379/v3beta/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```
Requests to `/v3alpha` endpoints will redirect to `/v3beta`, and `/v3alpha` will be removed in the 3.4 release.
#### Changed maximum request size limits
3.3 now allows custom request size limits for both server and **client side**. In previous versions (v3.2.10, v3.2.11), the client response size was limited to only 4 MiB.
Server-side request limits can be configured with `--max-request-bytes` flag:
```bash
# limits request size to 1.5 KiB
etcd --max-request-bytes 1536
# client writes exceeding 1.5 KiB will be rejected
etcdctl put foo [LARGE VALUE...]
# etcdserver: request is too large
```
Or configure `embed.Config.MaxRequestBytes` field:
```go
import "github.com/coreos/etcd/embed"
import "github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
// limit requests to 5 MiB
cfg := embed.NewConfig()
cfg.MaxRequestBytes = 5 * 1024 * 1024
// client writes exceeding 5 MiB will be rejected
_, err := cli.Put(ctx, "foo", [LARGE VALUE...])
err == rpctypes.ErrRequestTooLarge
```
**If not specified, server-side limit defaults to 1.5 MiB**.
Client-side request limits must be configured based on server-side limits.
```bash
# limits request size to 1 MiB
etcd --max-request-bytes 1048576
```
```go
import "github.com/coreos/etcd/clientv3"
cli, _ := clientv3.New(clientv3.Config{
Endpoints: []string{"127.0.0.1:2379"},
MaxCallSendMsgSize: 2 * 1024 * 1024,
MaxCallRecvMsgSize: 3 * 1024 * 1024,
})
// client writes exceeding "--max-request-bytes" will be rejected from etcd server
_, err := cli.Put(ctx, "foo", strings.Repeat("a", 1*1024*1024+5))
err == rpctypes.ErrRequestTooLarge
// client writes exceeding "MaxCallSendMsgSize" will be rejected from client-side
_, err = cli.Put(ctx, "foo", strings.Repeat("a", 5*1024*1024))
err.Error() == "rpc error: code = ResourceExhausted desc = grpc: trying to send message larger than max (5242890 vs. 2097152)"
// some writes under limits
for i := range []int{0,1,2,3,4} {
_, err = cli.Put(ctx, fmt.Sprintf("foo%d", i), strings.Repeat("a", 1*1024*1024-500))
if err != nil {
panic(err)
}
}
// client reads exceeding "MaxCallRecvMsgSize" will be rejected from client-side
_, err = cli.Get(ctx, "foo", clientv3.WithPrefix())
err.Error() == "rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5240509 vs. 3145728)"
```
**If not specified, client-side send limit defaults to 2 MiB (1.5 MiB + gRPC overhead bytes) and receive limit to `math.MaxInt32`**. Please see [clientv3 godoc](https://godoc.org/github.com/coreos/etcd/clientv3#Config) for more detail.
#### Changed raw gRPC client wrapper function signatures
3.3 changes the function signatures of `clientv3` gRPC client wrapper. This change was needed to support [custom `grpc.CallOption` on message size limits](https://github.com/etcd-io/etcd/pull/9047).
Before and after
```diff
-func NewKVFromKVClient(remote pb.KVClient) KV {
+func NewKVFromKVClient(remote pb.KVClient, c *Client) KV {
-func NewClusterFromClusterClient(remote pb.ClusterClient) Cluster {
+func NewClusterFromClusterClient(remote pb.ClusterClient, c *Client) Cluster {
-func NewLeaseFromLeaseClient(remote pb.LeaseClient, keepAliveTimeout time.Duration) Lease {
+func NewLeaseFromLeaseClient(remote pb.LeaseClient, c *Client, keepAliveTimeout time.Duration) Lease {
-func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient) Maintenance {
+func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient, c *Client) Maintenance {
-func NewWatchFromWatchClient(wc pb.WatchClient) Watcher {
+func NewWatchFromWatchClient(wc pb.WatchClient, c *Client) Watcher {
```
#### Changed clientv3 `Snapshot` API error type
Previously, the clientv3 `Snapshot` API returned a raw [`grpc/*status.statusError`] error type. v3.3 now translates those errors to the corresponding public error types, to be consistent with other APIs.
Before
```go
import "context"
// reading snapshot with canceled context should error out
ctx, cancel := context.WithCancel(context.Background())
rc, _ := cli.Snapshot(ctx)
cancel()
_, err := io.Copy(f, rc)
err.Error() == "rpc error: code = Canceled desc = context canceled"
// reading snapshot with deadline exceeded should error out
ctx, cancel = context.WithTimeout(context.Background(), time.Second)
defer cancel()
rc, _ = cli.Snapshot(ctx)
time.Sleep(2 * time.Second)
_, err = io.Copy(f, rc)
err.Error() == "rpc error: code = DeadlineExceeded desc = context deadline exceeded"
```
After
```go
import "context"
// reading snapshot with canceled context should error out
ctx, cancel := context.WithCancel(context.Background())
rc, _ := cli.Snapshot(ctx)
cancel()
_, err := io.Copy(f, rc)
err == context.Canceled
// reading snapshot with deadline exceeded should error out
ctx, cancel = context.WithTimeout(context.Background(), time.Second)
defer cancel()
rc, _ = cli.Snapshot(ctx)
time.Sleep(2 * time.Second)
_, err = io.Copy(f, rc)
err == context.DeadlineExceeded
```
#### Changed `etcdctl lease timetolive` command output
Previously, the `lease timetolive LEASE_ID` command printed `-1s` for the remaining seconds of an expired lease. 3.3 now outputs a clearer message.
Before
```bash
lease 2d8257079fa1bc0c granted with TTL(0s), remaining(-1s)
```
After
```bash
lease 2d8257079fa1bc0c already expired
```
#### Changed `golang.org/x/net/context` imports
`clientv3` has deprecated `golang.org/x/net/context`. If a project vendors `golang.org/x/net/context` in other code (e.g. etcd generated protocol buffer code) and imports `github.com/coreos/etcd/clientv3`, it requires Go 1.9+ to compile.
Before
```go
import "golang.org/x/net/context"
cli.Put(context.Background(), "f", "v")
```
After
```go
import "context"
cli.Put(context.Background(), "f", "v")
```
#### Changed gRPC dependency
3.3 now requires [grpc/grpc-go](https://github.com/grpc/grpc-go/releases) `v1.7.5`.
##### Deprecated `grpclog.Logger`
`grpclog.Logger` has been deprecated in favor of [`grpclog.LoggerV2`](https://github.com/grpc/grpc-go/blob/master/grpclog/loggerv2.go). `clientv3.Logger` is now `grpclog.LoggerV2`.
Before
```go
import "github.com/coreos/etcd/clientv3"
clientv3.SetLogger(log.New(os.Stderr, "grpc: ", 0))
```
After
```go
import "github.com/coreos/etcd/clientv3"
import "google.golang.org/grpc/grpclog"
clientv3.SetLogger(grpclog.NewLoggerV2(os.Stderr, os.Stderr, os.Stderr))
// log.New above cannot be used (it does not implement the grpclog.LoggerV2 interface)
```
##### Deprecated `grpc.ErrClientConnTimeout`
Previously, a `grpc.ErrClientConnTimeout` error was returned on client dial time-outs. 3.3 instead returns `context.DeadlineExceeded` (see [#8504](https://github.com/etcd-io/etcd/issues/8504)).
Before
```go
// expect dial time-out on ipv4 blackhole
_, err := clientv3.New(clientv3.Config{
Endpoints: []string{"http://254.0.0.1:12345"},
DialTimeout: 2 * time.Second
})
if err == grpc.ErrClientConnTimeout {
// handle errors
}
```
After
```go
_, err := clientv3.New(clientv3.Config{
Endpoints: []string{"http://254.0.0.1:12345"},
DialTimeout: 2 * time.Second
})
if err == context.DeadlineExceeded {
// handle errors
}
```
#### Changed official container registry
etcd now uses [`gcr.io/etcd-development/etcd`](https://gcr.io/etcd-development/etcd) as a primary container registry, and [`quay.io/coreos/etcd`](https://quay.io/coreos/etcd) as secondary.
Before
```bash
docker pull quay.io/coreos/etcd:v3.2.5
```
After
```bash
docker pull gcr.io/etcd-development/etcd:v3.3.0
```
### Upgrades to >= v3.3.14
[v3.3.14](https://github.com/etcd-io/etcd/releases/tag/v3.3.14) had to include some features from 3.4 while trying to minimize the differences in the client balancer implementation. This release fixes ["kube-apiserver 1.13.x refuses to work when first etcd-server is not available" (kubernetes#72102)](https://github.com/kubernetes/kubernetes/issues/72102).
`grpc.ErrClientConnClosing` has been [deprecated in gRPC >= 1.10](https://github.com/grpc/grpc-go/pull/1854).
```diff
import (
+ "go.etcd.io/etcd/clientv3"
"google.golang.org/grpc"
+ "google.golang.org/grpc/codes"
+ "google.golang.org/grpc/status"
)
_, err := kvc.Get(ctx, "a")
-if err == grpc.ErrClientConnClosing {
+if clientv3.IsConnCanceled(err) {
// or
+s, ok := status.FromError(err)
+if ok {
+ if s.Code() == codes.Canceled
```
[The new client balancer](https://github.com/etcd-io/etcd/blob/master/Documentation/learning/design-client.md) uses an asynchronous resolver to pass endpoints to the gRPC dial function. As a result, [v3.3.14](https://github.com/etcd-io/etcd/releases/tag/v3.3.14) or later requires `grpc.WithBlock` dial option to wait until the underlying connection is up.
```diff
import (
"time"
"go.etcd.io/etcd/clientv3"
+ "google.golang.org/grpc"
)
+// "grpc.WithBlock()" to block until the underlying connection is up
ccfg := clientv3.Config{
Endpoints: []string{"localhost:2379"},
DialTimeout: time.Second,
+ DialOptions: []grpc.DialOption{grpc.WithBlock()},
DialKeepAliveTime: time.Second,
DialKeepAliveTimeout: 500 * time.Millisecond,
}
```
Please see [CHANGELOG](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md) for a full list of changes.
### Server upgrade checklists
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.3, the running cluster must be 3.2 or greater. If it's before 3.2, please [upgrade to 3.2](upgrade_3_2.md) before upgrading to 3.3.
Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.
#### Preparation
Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
Before beginning, [backup the etcd data](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to the existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](../v2/admin_guide.md#backing-up-the-datastore).
#### Mixed versions
While upgrading, an etcd cluster supports mixed versions of etcd members, and operates with the protocol of the lowest common version. The cluster is only considered upgraded once all of its members are upgraded to version 3.3. Internally, etcd members negotiate with each other to determine the overall cluster version, which controls the reported version and the supported features.
#### Limitations
Note: If the cluster only has v3 data and no v2 data, it is not subject to this limitation.
If the cluster is serving a v2 data set larger than 50MB, each newly upgraded member may take up to two minutes to catch up with the existing cluster. Check the size of a recent snapshot to estimate the total data size. In other words, it is safest to wait for 2 minutes between upgrading each member.
For a much larger total data size, 100MB or more, this one-time process might take even more time. Administrators of very large etcd clusters of this magnitude can feel free to contact the [etcd team][etcd-contact] before upgrading, and we'll be happy to provide advice on the procedure.
#### Downgrade
If all members have been upgraded to v3.3, the cluster will be upgraded to v3.3, and downgrading from this completed state is **not possible**. If any single member is still v3.2, however, the cluster and its operations remain "v3.2", and it is possible from this mixed cluster state to return to using a v3.2 etcd binary on all members.
Please [backup the data directory](../op-guide/maintenance.md#snapshot-backup) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
### Upgrade procedure
This example shows how to upgrade a 3-member v3.2 etcd cluster running on a local machine.
#### 1. Check upgrade requirements
Is the cluster healthy and running v3.2.x?
```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:2379 is healthy: successfully committed proposal: took = 6.600684ms
localhost:22379 is healthy: successfully committed proposal: took = 8.540064ms
localhost:32379 is healthy: successfully committed proposal: took = 8.763432ms
$ curl http://localhost:2379/version
{"etcdserver":"3.2.7","etcdcluster":"3.2.0"}
```
#### 2. Stop the existing etcd process
When each etcd process is stopped, expected errors will be logged by other cluster members. This is normal since a cluster member connection has been (temporarily) broken:
```
14:13:31.491746 I | raft: c89feb932daef420 [term 3] received MsgTimeoutNow from 6d4f535bae3ab960 and starts an election to get leadership.
14:13:31.491769 I | raft: c89feb932daef420 became candidate at term 4
14:13:31.491788 I | raft: c89feb932daef420 received MsgVoteResp from c89feb932daef420 at term 4
14:13:31.491797 I | raft: c89feb932daef420 [logterm: 3, index: 9] sent MsgVote request to 6d4f535bae3ab960 at term 4
14:13:31.491805 I | raft: c89feb932daef420 [logterm: 3, index: 9] sent MsgVote request to 9eda174c7df8a033 at term 4
14:13:31.491815 I | raft: raft.node: c89feb932daef420 lost leader 6d4f535bae3ab960 at term 4
14:13:31.524084 I | raft: c89feb932daef420 received MsgVoteResp from 6d4f535bae3ab960 at term 4
14:13:31.524108 I | raft: c89feb932daef420 [quorum:2] has received 2 MsgVoteResp votes and 0 vote rejections
14:13:31.524123 I | raft: c89feb932daef420 became leader at term 4
14:13:31.524136 I | raft: raft.node: c89feb932daef420 elected leader c89feb932daef420 at term 4
14:13:31.592650 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream MsgApp v2 reader)
14:13:31.592825 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream Message reader)
14:13:31.693275 E | rafthttp: failed to dial 6d4f535bae3ab960 on stream Message (dial tcp [::1]:2380: getsockopt: connection refused)
14:13:31.693289 I | rafthttp: peer 6d4f535bae3ab960 became inactive
14:13:31.936678 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream Message writer)
```
It's a good idea at this point to [backup the etcd data](../op-guide/maintenance.md#snapshot-backup) to provide a downgrade path should any problems occur:
```
$ etcdctl snapshot save backup.db
```
#### 3. Drop in the etcd v3.3 binary and start the new etcd process
The new v3.3 etcd will publish its information to the cluster:
```
14:14:25.363225 I | etcdserver: published {Name:s1 ClientURLs:[http://localhost:2379]} to cluster a9ededbffcb1b1f1
```
Verify that each member, and then the entire cluster, becomes healthy with the new v3.3 etcd binary:
```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:22379 is healthy: successfully committed proposal: took = 5.540129ms
localhost:32379 is healthy: successfully committed proposal: took = 7.321771ms
localhost:2379 is healthy: successfully committed proposal: took = 10.629901ms
```
Un-upgraded members will log warnings like the following until the entire cluster is upgraded. This is expected and will cease after all etcd cluster members are upgraded to v3.3:
```
14:15:17.071804 W | etcdserver: member c89feb932daef420 has a higher version 3.3.0
14:15:21.073110 W | etcdserver: the local etcd version 3.2.7 is not up-to-date
14:15:21.073142 W | etcdserver: member 6d4f535bae3ab960 has a higher version 3.3.0
14:15:21.073157 W | etcdserver: the local etcd version 3.2.7 is not up-to-date
14:15:21.073164 W | etcdserver: member c89feb932daef420 has a higher version 3.3.0
```
#### 4. Repeat step 2 and step 3 for all other members
#### 5. Finish
When all members are upgraded, the cluster will report a successful upgrade to 3.3:
```
14:15:54.536901 N | etcdserver/membership: updated the cluster version from 3.2 to 3.3
14:15:54.537035 I | etcdserver/api: enabled capabilities for version 3.3
```
```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:2379 is healthy: successfully committed proposal: took = 2.312897ms
localhost:22379 is healthy: successfully committed proposal: took = 2.553476ms
localhost:32379 is healthy: successfully committed proposal: took = 2.517902ms
```
[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev


@ -0,0 +1,343 @@
---
title: Upgrade etcd from 3.4 to 3.5
---
In the general case, upgrading from etcd 3.4 to 3.5 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.4 processes and replace them with etcd v3.5 processes
- after running all v3.5 processes, new features in v3.5 are available to the cluster
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/etcd-io/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but there is no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. It also prevents accidental v3 data loss (e.g. the `db` file might have been moved). etcd requires that a post-v3-migration server start only with v3 data. Do not upgrade to a newer v3 version until the v3.0 server contains v3 data.
Highlighted breaking changes in 3.5.
#### Deprecate `etcd_debugging_mvcc_db_total_size_in_bytes` Prometheus metrics
v3.4 promoted the `etcd_debugging_mvcc_db_total_size_in_bytes` Prometheus metric to `etcd_mvcc_db_total_size_in_bytes`, in order to encourage etcd storage monitoring. v3.5 completely deprecates `etcd_debugging_mvcc_db_total_size_in_bytes`.
```diff
-etcd_debugging_mvcc_db_total_size_in_bytes
+etcd_mvcc_db_total_size_in_bytes
```
Note that `etcd_debugging_*` namespace metrics are marked as experimental. As the monitoring guide improves, more metrics will be promoted.
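Monitoring scripts that scrape the metrics endpoint directly need to switch to the promoted name; below is a minimal sketch, assuming a local member serving metrics on the client port.
```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Scrape the member's Prometheus metrics (local client port assumed).
	resp, err := http.Get("http://localhost:2379/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		// Use the promoted metric name; the etcd_debugging_* name is deprecated.
		if strings.HasPrefix(line, "etcd_mvcc_db_total_size_in_bytes") {
			fmt.Println(line)
		}
	}
}
```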
#### Deprecated `etcd --logger=capnslog`
v3.4 defaults to `--logger=zap` in order to support multiple log outputs and structured logging.
**`etcd --logger=capnslog` has been deprecated in v3.5**, and now `--logger=zap` is the default.
```diff
-etcd --logger=capnslog
+etcd --logger=zap --log-outputs=stderr
+# to write logs to stderr and a.log file at the same time
+etcd --logger=zap --log-outputs=stderr,a.log
```
v3.4 added `etcd --logger=zap` support for structured logging and multiple log outputs. The main motivation is to promote automated etcd monitoring, rather than combing through server logs after things start breaking. Future development will make etcd log as little as possible and make etcd easier to monitor with metrics and alerts.
#### Deprecated `etcd --log-output`
v3.4 renamed [`etcd --log-output` to `--log-outputs`](https://github.com/etcd-io/etcd/pull/9624) to support multiple log outputs.
**`etcd --log-output` has been deprecated in v3.5.**
```diff
-etcd --log-output=stderr
+etcd --log-outputs=stderr
```
#### Deprecated `etcd --log-package-levels`
**`etcd --log-package-levels` flag for `capnslog` has been deprecated.**
Now, **`etcd --logger=zap`** is the default.
```diff
-etcd --log-package-levels 'etcdmain=CRITICAL,etcdserver=DEBUG'
+etcd --logger=zap --log-outputs=stderr
```
#### Deprecated `[CLIENT-URL]/config/local/log`
**The `/config/local/log` endpoint is deprecated in v3.5, as is the `etcd --log-package-levels` flag.**
```diff
-$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"DEBUG"}'
-# debug logging enabled
```
#### Changed gRPC gateway HTTP endpoints (deprecated `/v3beta`)
Before
```bash
curl -L http://localhost:2379/v3beta/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```
After
```bash
curl -L http://localhost:2379/v3/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```
`/v3beta` has been removed in the 3.5 release.
### Server upgrade checklists
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.5, the running cluster must be 3.4 or greater. If it's before 3.4, please [upgrade to 3.4](upgrade_3_3.md) before upgrading to 3.5.
Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.
#### Preparation
Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
Before beginning, [download the snapshot backup](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](../v2/admin_guide.md#backing-up-the-datastore).
#### Mixed versions
While upgrading, an etcd cluster supports mixed versions of etcd members, and operates with the protocol of the lowest common version. The cluster is only considered upgraded once all of its members are upgraded to version 3.5. Internally, etcd members negotiate with each other to determine the overall cluster version, which controls the reported version and the supported features.
#### Limitations
Note: If the cluster only has v3 data and no v2 data, it is not subject to this limitation.
If the cluster is serving a v2 data set larger than 50MB, each newly upgraded member may take up to two minutes to catch up with the existing cluster. Check the size of a recent snapshot to estimate the total data size. To be safe, wait at least two minutes between upgrading each member.
For a much larger total data size, 100MB or more, this one-time process might take even more time. Administrators of very large etcd clusters of this magnitude are encouraged to contact the [etcd team][etcd-contact] before upgrading, and we'll be happy to provide advice on the procedure.
#### Downgrade
If all members have been upgraded to v3.5, the cluster will be upgraded to v3.5, and downgrading from this completed state is **not possible**. If any single member is still v3.4, however, the cluster and its operations remain "v3.4", and it is possible from this mixed cluster state to return to using a v3.4 etcd binary on all members.
Please [download the snapshot backup](../op-guide/maintenance.md#snapshot-backup) to make downgrading the cluster possible even after it has been completely upgraded.
### Upgrade procedure
This example shows how to upgrade a 3-member v3.4 etcd cluster running on a local machine.
#### Step 1: check upgrade requirements
Is the cluster healthy and running v3.4.x?
```bash
etcdctl --endpoints=localhost:2379,localhost:22379,localhost:32379 endpoint health
<<COMMENT
localhost:2379 is healthy: successfully committed proposal: took = 2.118638ms
localhost:22379 is healthy: successfully committed proposal: took = 3.631388ms
localhost:32379 is healthy: successfully committed proposal: took = 2.157051ms
COMMENT
curl http://localhost:2379/version
<<COMMENT
{"etcdserver":"3.4.0","etcdcluster":"3.4.0"}
COMMENT
curl http://localhost:22379/version
<<COMMENT
{"etcdserver":"3.4.0","etcdcluster":"3.4.0"}
COMMENT
curl http://localhost:32379/version
<<COMMENT
{"etcdserver":"3.4.0","etcdcluster":"3.4.0"}
COMMENT
```
#### Step 2: download snapshot backup from leader
[Download the snapshot backup](../op-guide/maintenance.md#snapshot-backup) to provide a downgrade path should any problems occur.
The etcd leader is guaranteed to have the latest application data, so fetch the snapshot from the leader:
```bash
curl -sL http://localhost:2379/metrics | grep etcd_server_is_leader
<<COMMENT
# HELP etcd_server_is_leader Whether or not this member is a leader. 1 if is, 0 otherwise.
# TYPE etcd_server_is_leader gauge
etcd_server_is_leader 1
COMMENT
curl -sL http://localhost:22379/metrics | grep etcd_server_is_leader
<<COMMENT
etcd_server_is_leader 0
COMMENT
curl -sL http://localhost:32379/metrics | grep etcd_server_is_leader
<<COMMENT
etcd_server_is_leader 0
COMMENT
etcdctl --endpoints=localhost:2379 snapshot save backup.db
<<COMMENT
{"level":"info","ts":1526585787.148433,"caller":"snapshot/v3_snapshot.go:109","msg":"created temporary db file","path":"backup.db.part"}
{"level":"info","ts":1526585787.1485257,"caller":"snapshot/v3_snapshot.go:120","msg":"fetching snapshot","endpoint":"localhost:2379"}
{"level":"info","ts":1526585787.1519694,"caller":"snapshot/v3_snapshot.go:133","msg":"fetched snapshot","endpoint":"localhost:2379","took":0.003502721}
{"level":"info","ts":1526585787.1520295,"caller":"snapshot/v3_snapshot.go:142","msg":"saved","path":"backup.db"}
Snapshot saved at backup.db
COMMENT
```
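The leader can also be located programmatically with the client's maintenance API; below is a sketch, assuming the same three local endpoints (a member is the leader when its member ID matches the reported leader ID).
```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	eps := []string{"localhost:2379", "localhost:22379", "localhost:32379"}
	cli, err := clientv3.New(clientv3.Config{Endpoints: eps, DialTimeout: time.Second})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	for _, ep := range eps {
		st, err := cli.Status(ctx, ep)
		if err != nil {
			log.Fatal(err)
		}
		// The member whose ID equals the reported leader ID is the leader.
		fmt.Printf("%s leader=%v\n", ep, st.Header.MemberId == st.Leader)
	}
}
```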
#### Step 3: stop one existing etcd server
When each etcd process is stopped, expected errors will be logged by other cluster members. This is normal since a cluster member connection has been (temporarily) broken:
```bash
{"level":"info","ts":1526587281.2001143,"caller":"etcdserver/server.go:2249","msg":"updating cluster version","from":"3.0","to":"3.4"}
{"level":"info","ts":1526587281.2010646,"caller":"membership/cluster.go:473","msg":"updated cluster version","cluster-id":"7dee9ba76d59ed53","local-member-id":"7339c4e5e833c029","from":"3.0","from":"3.4"}
{"level":"info","ts":1526587281.2012327,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}
{"level":"info","ts":1526587281.2013083,"caller":"etcdserver/server.go:2272","msg":"cluster version is updated","cluster-version":"3.4"}
^C{"level":"info","ts":1526587299.0717514,"caller":"osutil/interrupt_unix.go:63","msg":"received signal; shutting down","signal":"interrupt"}
{"level":"info","ts":1526587299.0718873,"caller":"embed/etcd.go:285","msg":"closing etcd server","name":"s1","data-dir":"/tmp/etcd/s1","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"]}
{"level":"info","ts":1526587299.0722554,"caller":"etcdserver/server.go:1341","msg":"leadership transfer starting","local-member-id":"7339c4e5e833c029","current-leader-member-id":"7339c4e5e833c029","transferee-member-id":"729934363faa4a24"}
{"level":"info","ts":1526587299.0723994,"caller":"raft/raft.go:1107","msg":"7339c4e5e833c029 [term 3] starts to transfer leadership to 729934363faa4a24"}
{"level":"info","ts":1526587299.0724802,"caller":"raft/raft.go:1113","msg":"7339c4e5e833c029 sends MsgTimeoutNow to 729934363faa4a24 immediately as 729934363faa4a24 already has up-to-date log"}
{"level":"info","ts":1526587299.0737045,"caller":"raft/raft.go:797","msg":"7339c4e5e833c029 [term: 3] received a MsgVote message with higher term from 729934363faa4a24 [term: 4]"}
{"level":"info","ts":1526587299.0737681,"caller":"raft/raft.go:656","msg":"7339c4e5e833c029 became follower at term 4"}
{"level":"info","ts":1526587299.073831,"caller":"raft/raft.go:882","msg":"7339c4e5e833c029 [logterm: 3, index: 9, vote: 0] cast MsgVote for 729934363faa4a24 [logterm: 3, index: 9] at term 4"}
{"level":"info","ts":1526587299.0738947,"caller":"raft/node.go:312","msg":"raft.node: 7339c4e5e833c029 lost leader 7339c4e5e833c029 at term 4"}
{"level":"info","ts":1526587299.0748374,"caller":"raft/node.go:306","msg":"raft.node: 7339c4e5e833c029 elected leader 729934363faa4a24 at term 4"}
{"level":"info","ts":1526587299.1726425,"caller":"etcdserver/server.go:1362","msg":"leadership transfer finished","local-member-id":"7339c4e5e833c029","old-leader-member-id":"7339c4e5e833c029","new-leader-member-id":"729934363faa4a24","took":0.100389359}
{"level":"info","ts":1526587299.1728148,"caller":"rafthttp/peer.go:333","msg":"stopping remote peer","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.1751974,"caller":"rafthttp/stream.go:291","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.1752589,"caller":"rafthttp/stream.go:301","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.177348,"caller":"rafthttp/stream.go:291","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.1774004,"caller":"rafthttp/stream.go:301","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b548c2511513015"}
{"level":"info","ts":1526587299.177515,"caller":"rafthttp/pipeline.go:86","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7339c4e5e833c029","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.1777067,"caller":"rafthttp/stream.go:436","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7339c4e5e833c029","remote-peer-id":"b548c2511513015","error":"read tcp 127.0.0.1:34636->127.0.0.1:32380: use of closed network connection"}
{"level":"info","ts":1526587299.1778402,"caller":"rafthttp/stream.go:459","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7339c4e5e833c029","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.1780295,"caller":"rafthttp/stream.go:436","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7339c4e5e833c029","remote-peer-id":"b548c2511513015","error":"read tcp 127.0.0.1:34634->127.0.0.1:32380: use of closed network connection"}
{"level":"info","ts":1526587299.1780987,"caller":"rafthttp/stream.go:459","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7339c4e5e833c029","remote-peer-id":"b548c2511513015"}
{"level":"info","ts":1526587299.1781602,"caller":"rafthttp/peer.go:340","msg":"stopped remote peer","remote-peer-id":"b548c2511513015"}
{"level":"info","ts":1526587299.1781986,"caller":"rafthttp/peer.go:333","msg":"stopping remote peer","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1802843,"caller":"rafthttp/stream.go:291","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1803446,"caller":"rafthttp/stream.go:301","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1824749,"caller":"rafthttp/stream.go:291","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.18255,"caller":"rafthttp/stream.go:301","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"729934363faa4a24"}
{"level":"info","ts":1526587299.18261,"caller":"rafthttp/pipeline.go:86","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7339c4e5e833c029","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1827736,"caller":"rafthttp/stream.go:436","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7339c4e5e833c029","remote-peer-id":"729934363faa4a24","error":"read tcp 127.0.0.1:51482->127.0.0.1:22380: use of closed network connection"}
{"level":"info","ts":1526587299.182845,"caller":"rafthttp/stream.go:459","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7339c4e5e833c029","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1830168,"caller":"rafthttp/stream.go:436","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7339c4e5e833c029","remote-peer-id":"729934363faa4a24","error":"context canceled"}
{"level":"warn","ts":1526587299.1831107,"caller":"rafthttp/peer_status.go:65","msg":"peer became inactive","peer-id":"729934363faa4a24","error":"failed to read 729934363faa4a24 on stream Message (context canceled)"}
{"level":"info","ts":1526587299.1831737,"caller":"rafthttp/stream.go:459","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7339c4e5e833c029","remote-peer-id":"729934363faa4a24"}
{"level":"info","ts":1526587299.1832306,"caller":"rafthttp/peer.go:340","msg":"stopped remote peer","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1837125,"caller":"rafthttp/http.go:424","msg":"failed to find remote peer in cluster","local-member-id":"7339c4e5e833c029","remote-peer-id-stream-handler":"7339c4e5e833c029","remote-peer-id-from":"b548c2511513015","cluster-id":"7dee9ba76d59ed53"}
{"level":"warn","ts":1526587299.1840093,"caller":"rafthttp/http.go:424","msg":"failed to find remote peer in cluster","local-member-id":"7339c4e5e833c029","remote-peer-id-stream-handler":"7339c4e5e833c029","remote-peer-id-from":"b548c2511513015","cluster-id":"7dee9ba76d59ed53"}
{"level":"warn","ts":1526587299.1842315,"caller":"rafthttp/http.go:424","msg":"failed to find remote peer in cluster","local-member-id":"7339c4e5e833c029","remote-peer-id-stream-handler":"7339c4e5e833c029","remote-peer-id-from":"729934363faa4a24","cluster-id":"7dee9ba76d59ed53"}
{"level":"warn","ts":1526587299.1844475,"caller":"rafthttp/http.go:424","msg":"failed to find remote peer in cluster","local-member-id":"7339c4e5e833c029","remote-peer-id-stream-handler":"7339c4e5e833c029","remote-peer-id-from":"729934363faa4a24","cluster-id":"7dee9ba76d59ed53"}
{"level":"info","ts":1526587299.2056687,"caller":"embed/etcd.go:473","msg":"stopping serving peer traffic","address":"127.0.0.1:2380"}
{"level":"info","ts":1526587299.205819,"caller":"embed/etcd.go:480","msg":"stopped serving peer traffic","address":"127.0.0.1:2380"}
{"level":"info","ts":1526587299.2058413,"caller":"embed/etcd.go:289","msg":"closed etcd server","name":"s1","data-dir":"/tmp/etcd/s1","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"]}
```
#### Step 4: restart the etcd server with the same configuration
Restart the etcd server with the same configuration but with the new etcd binary.
```diff
-etcd-old --name s1 \
+etcd-new --name s1 \
--data-dir /tmp/etcd/s1 \
--listen-client-urls http://localhost:2379 \
--advertise-client-urls http://localhost:2379 \
--listen-peer-urls http://localhost:2380 \
--initial-advertise-peer-urls http://localhost:2380 \
--initial-cluster s1=http://localhost:2380,s2=http://localhost:22380,s3=http://localhost:32380 \
--initial-cluster-token tkn \
--initial-cluster-state new
```
The new v3.5 etcd will publish its information to the cluster. At this point, the cluster still operates using the v3.4 protocol, which is the lowest common version.
> `{"level":"info","ts":1526586617.1647713,"caller":"membership/cluster.go:485","msg":"set initial cluster version","cluster-id":"7dee9ba76d59ed53","local-member-id":"7339c4e5e833c029","cluster-version":"3.0"}`
> `{"level":"info","ts":1526586617.1648536,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.0"}`
> `{"level":"info","ts":1526586617.1649303,"caller":"membership/cluster.go:473","msg":"updated cluster version","cluster-id":"7dee9ba76d59ed53","local-member-id":"7339c4e5e833c029","from":"3.0","from":"3.4"}`
> `{"level":"info","ts":1526586617.1649797,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}`
> `{"level":"info","ts":1526586617.2107732,"caller":"etcdserver/server.go:1770","msg":"published local member to cluster through raft","local-member-id":"7339c4e5e833c029","local-member-attributes":"{Name:s1 ClientURLs:[http://localhost:2379]}","request-path":"/0/members/7339c4e5e833c029/attributes","cluster-id":"7dee9ba76d59ed53","publish-timeout":7}`
Verify that each member, and then the entire cluster, becomes healthy with the new v3.5 etcd binary:
```bash
etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
<<COMMENT
localhost:32379 is healthy: successfully committed proposal: took = 2.337471ms
localhost:22379 is healthy: successfully committed proposal: took = 1.130717ms
localhost:2379 is healthy: successfully committed proposal: took = 2.124843ms
COMMENT
```
Un-upgraded members will log warnings like the following until the entire cluster is upgraded.
This is expected and will cease after all etcd cluster members are upgraded to v3.5:
```
:41.942121 W | etcdserver: member 7339c4e5e833c029 has a higher version 3.5.0
:45.945154 W | etcdserver: the local etcd version 3.4.0 is not up-to-date
```
#### Step 5: repeat *step 3* and *step 4* for the rest of the members
When all members are upgraded, the cluster will report a successful upgrade to 3.5:
Member 1:
> `{"level":"info","ts":1526586949.0920913,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.5"}`
> `{"level":"info","ts":1526586949.0921566,"caller":"etcdserver/server.go:2272","msg":"cluster version is updated","cluster-version":"3.5"}`
Member 2:
> `{"level":"info","ts":1526586949.092117,"caller":"membership/cluster.go:473","msg":"updated cluster version","cluster-id":"7dee9ba76d59ed53","local-member-id":"729934363faa4a24","from":"3.4","from":"3.5"}`
> `{"level":"info","ts":1526586949.0923078,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.5"}`
Member 3:
> `{"level":"info","ts":1526586949.0921423,"caller":"membership/cluster.go:473","msg":"updated cluster version","cluster-id":"7dee9ba76d59ed53","local-member-id":"b548c2511513015","from":"3.4","from":"3.5"}`
> `{"level":"info","ts":1526586949.0922918,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.5"}`
```bash
etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
<<COMMENT
localhost:2379 is healthy: successfully committed proposal: took = 492.834µs
localhost:22379 is healthy: successfully committed proposal: took = 1.015025ms
localhost:32379 is healthy: successfully committed proposal: took = 1.853077ms
COMMENT
curl http://localhost:2379/version
<<COMMENT
{"etcdserver":"3.5.0","etcdcluster":"3.5.0"}
COMMENT
curl http://localhost:22379/version
<<COMMENT
{"etcdserver":"3.5.0","etcdcluster":"3.5.0"}
COMMENT
curl http://localhost:32379/version
<<COMMENT
{"etcdserver":"3.5.0","etcdcluster":"3.5.0"}
COMMENT
```
[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev


@ -0,0 +1,26 @@
---
title: Upgrading etcd clusters and applications
---
This section contains documents specific to upgrading etcd clusters and applications.
## Moving from etcd API v2 to API v3
* [Migrate applications from using API v2 to API v3][migrate-apps]
## Upgrading an etcd v3.x cluster
* [Upgrade etcd from 3.0 to 3.1][upgrade-3-1]
* [Upgrade etcd from 3.1 to 3.2][upgrade-3-2]
* [Upgrade etcd from 3.2 to 3.3][upgrade-3-3]
* [Upgrade etcd from 3.3 to 3.4][upgrade-3-4]
## Upgrading from etcd v2.3
* [Upgrade a v2.3 cluster to v3.0][upgrade-cluster]
[migrate-apps]: ../op-guide/v2-migration.md
[upgrade-cluster]: upgrade_3_0.md
[upgrade-3-1]: upgrade_3_1.md
[upgrade-3-2]: upgrade_3_2.md
[upgrade-3-3]: upgrade_3_3.md
[upgrade-3-4]: upgrade_3_4.md


@ -1,31 +0,0 @@
# Snapshot Migration
You can migrate a snapshot of your data from a v0.4.9+ cluster into a new etcd 2.2 cluster using a snapshot migration. After snapshot migration, the etcd indexes of your data will change. Many etcd applications rely on these indexes to behave correctly. This operation should only be done while all etcd applications are stopped.
To get started, get the newest data snapshot from the 0.4.9+ cluster:
```
curl http://cluster.example.com:4001/v2/migration/snapshot > backup.snap
```
Now, import the snapshot into your new cluster:
```
etcdctl --endpoint new_cluster.example.com import --snap backup.snap
```
If you have a large amount of data, you can specify more concurrent workers to copy data in parallel by using the `-c` flag.
If you have hidden keys to copy, you can use the `--hidden` flag to specify them. For example, fleet uses `/_coreos.com/fleet`, so to import those keys pass `--hidden /_coreos.com`.
The data will then be copied quickly into the new cluster:
```
entering dir: /
entering dir: /foo
entering dir: /foo/bar
copying key: /foo/bar/1 1
entering dir: /
entering dir: /foo2
entering dir: /foo2/bar2
copying key: /foo2/bar2/2 2
```


@ -1,165 +0,0 @@
# etcd2
[![Go Report Card](https://goreportcard.com/badge/github.com/coreos/etcd)](https://goreportcard.com/report/github.com/coreos/etcd)
[![Build Status](https://travis-ci.org/coreos/etcd.svg?branch=master)](https://travis-ci.org/coreos/etcd)
[![Build Status](https://semaphoreci.com/api/v1/coreos/etcd/branches/master/shields_badge.svg)](https://semaphoreci.com/coreos/etcd)
[![Docker Repository on Quay.io](https://quay.io/repository/coreos/etcd-git/status "Docker Repository on Quay.io")](https://quay.io/repository/coreos/etcd-git)
**Note**: The `master` branch may be in an *unstable or even broken state* during development. Please use [releases][github-release] instead of the `master` branch in order to get stable binaries.
![etcd Logo](../../logos/etcd-horizontal-color.png)
etcd is a distributed, consistent key-value store for shared configuration and service discovery, with a focus on being:
* *Simple*: curl'able user-facing API (HTTP+JSON)
* *Secure*: optional SSL client cert authentication
* *Fast*: benchmarked 1000s of writes/s per instance
* *Reliable*: properly distributed using Raft
etcd is written in Go and uses the [Raft][raft] consensus algorithm to manage a highly-available replicated log.
etcd is used [in production by many companies](./production-users.md), and the development team stands behind it in critical deployment scenarios, where etcd is frequently teamed with applications such as [Kubernetes][k8s], [fleet][fleet], [locksmith][locksmith], [vulcand][vulcand], and many others.
See [etcdctl][etcdctl] for a simple command line client.
Or feel free to just use `curl`, as in the examples below.
[raft]: https://raft.github.io/
[k8s]: http://kubernetes.io/
[fleet]: https://github.com/coreos/fleet
[locksmith]: https://github.com/coreos/locksmith
[vulcand]: https://github.com/vulcand/vulcand
[etcdctl]: https://github.com/coreos/etcd/tree/master/etcdctl
## Getting Started
### Getting etcd
The easiest way to get etcd is to use one of the pre-built release binaries which are available for OSX, Linux, Windows, AppC (ACI), and Docker. Instructions for using these binaries are on the [GitHub releases page][github-release].
For those wanting to try the very latest version, you can build the latest version of etcd from the `master` branch.
You will first need [*Go*](https://golang.org/) installed on your machine (version 1.5+ is required).
All development occurs on `master`, including new features and bug fixes.
Bug fixes are first targeted at `master` and subsequently ported to release branches, as described in the [branch management][branch-management] guide.
[github-release]: https://github.com/coreos/etcd/releases/
[branch-management]: branch_management.md
### Running etcd
First start a single-member cluster of etcd:
```sh
./bin/etcd
```
This will bring up etcd listening on port 2379 for client communication and on port 2380 for server-to-server communication.
Next, let's set a single key, and then retrieve it:
```
curl -L http://127.0.0.1:2379/v2/keys/mykey -XPUT -d value="this is awesome"
curl -L http://127.0.0.1:2379/v2/keys/mykey
```
You have successfully started an etcd server and written a key to the store.
### etcd TCP ports
The [official etcd ports][iana-ports] are 2379 for client requests, and 2380 for peer communication. To maintain compatibility, some etcd configuration and documentation continues to refer to the legacy ports 4001 and 7001, but all new etcd use and discussion should adopt the IANA-assigned ports. The legacy ports 4001 and 7001 will be fully deprecated, and support for their use removed, in future etcd releases.
[iana-ports]: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt
### Running local etcd cluster
First install [goreman](https://github.com/mattn/goreman), which manages Procfile-based applications.
Our [Procfile script](../../V2Procfile) will set up a local example cluster. You can start it with:
```sh
goreman start
```
This will bring up 3 etcd members (`infra1`, `infra2` and `infra3`) and an etcd proxy (`proxy`), all running locally to compose a cluster.
You can write a key to the cluster and retrieve the value back from any member or proxy.
### Next Steps
Now it's time to dig into the full etcd API and other guides.
- Explore the full [API][api].
- Set up a [multi-machine cluster][clustering].
- Learn the [config format, env variables and flags][configuration].
- Find [language bindings and tools][libraries-and-tools].
- Use TLS to [secure an etcd cluster][security].
- [Tune etcd][tuning].
- [Upgrade from 0.4.9+ to 2.2.0][upgrade].
[api]: ./api.md
[clustering]: ./clustering.md
[configuration]: ./configuration.md
[libraries-and-tools]: ./libraries-and-tools.md
[security]: ./security.md
[tuning]: ./tuning.md
[upgrade]: ./04_to_2_snapshot_migration.md
## Contact
- Mailing list: [etcd-dev](https://groups.google.com/forum/?hl=en#!forum/etcd-dev)
- IRC: #[etcd](irc://irc.freenode.org:6667/#etcd) on freenode.org
- Planning/Roadmap: [milestones](https://github.com/coreos/etcd/milestones), [roadmap](../../ROADMAP.md)
- Bugs: [issues](https://github.com/coreos/etcd/issues)
## Contributing
See [CONTRIBUTING](../../CONTRIBUTING.md) for details on submitting patches and the contribution workflow.
## Reporting bugs
See [reporting bugs](reporting_bugs.md) for details about reporting any issue you may encounter.
## Known bugs
[GH518](https://github.com/coreos/etcd/issues/518) is a known bug. The issue is that:
```
curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=bar
curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d dir=true -d prevExist=true
```
If the previous node is a key and the client tries to overwrite it with `dir=true`, no warning such as `Not a directory` is given. Instead, the key is set to an empty value.
## Project Details
### Versioning
#### Service Versioning
etcd uses [semantic versioning](http://semver.org).
New minor versions may add additional features to the API.
You can get the version of etcd by issuing a request to /version:
```sh
curl -L http://127.0.0.1:2379/version
```
#### API Versioning
The `v2` API responses should not change after the 2.0.0 release, but new features will be added over time.
#### 32-bit and other unsupported systems
etcd has known issues on 32-bit systems due to a bug in the Go runtime. See #[358][358] for more information.
To avoid inadvertently running a possibly unstable etcd server, `etcd` on unsupported architectures will print
a warning message and immediately exit if the environment variable `ETCD_UNSUPPORTED_ARCH` is not set to
the target architecture.
Currently only the amd64 architecture is officially supported by `etcd`.
[358]: https://github.com/coreos/etcd/issues/358
### License
etcd is under the Apache 2.0 license. See the [LICENSE](../../LICENSE) file for details.


@ -1,312 +0,0 @@
# Administration
## Data Directory
### Lifecycle
When first started, etcd stores its configuration into a data directory specified by the data-dir configuration parameter.
Configuration is stored in the write ahead log and includes: the local member ID, cluster ID, and initial cluster configuration.
The write ahead log and snapshot files are used during member operation and to recover after a restart.
Having a dedicated disk to store wal files can improve the throughput and stabilize the cluster.
It is highly recommended to dedicate a wal disk and set `--wal-dir` to point to a directory on that device for a production cluster deployment.
If a member's data directory is ever lost or corrupted then the user should [remove][remove-a-member] the etcd member from the cluster using the `etcdctl` tool.
A user should avoid restarting an etcd member with a data directory from an out-of-date backup.
Using an out-of-date data directory can lead to inconsistency, as the member had agreed to store information via raft and would then re-join claiming it needs that information again.
For maximum safety, if an etcd member suffers any sort of data corruption or loss, it must be removed from the cluster.
Once removed the member can be re-added with an empty data directory.
### Contents
The data directory has two sub-directories in it:
1. wal: write ahead log files are stored here. For details see the [wal package documentation][wal-pkg]
2. snap: log snapshots are stored here. For details see the [snap package documentation][snap-pkg]
If `--wal-dir` flag is set, etcd will write the write ahead log files to the specified directory instead of data directory.
## Cluster Management
### Lifecycle
If you are spinning up multiple clusters for testing, it is recommended that you specify a unique initial-cluster-token for the different clusters.
This can protect you from cluster corruption in case of mis-configuration, because two members started with different cluster tokens will refuse to accept each other.
### Monitoring
It is important to monitor your production etcd cluster for health information and runtime metrics.
#### Health Monitoring
At the lowest level, etcd exposes health information via HTTP at `/health` in JSON format. If it returns `{"health": "true"}`, then the cluster is healthy. Please note the `/health` endpoint is still experimental as of etcd 2.2.
```
$ curl -L http://127.0.0.1:2379/health
{"health": "true"}
```
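This check is easy to script as well; below is a minimal sketch in Go, assuming a local member and the JSON shape shown above.
```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:2379/health")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// /health returns e.g. {"health": "true"} on a healthy cluster.
	var h struct {
		Health string `json:"health"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&h); err != nil {
		log.Fatal(err)
	}
	fmt.Println("healthy:", h.Health == "true")
}
```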
You can also use etcdctl to check the cluster-wide health information. It will contact all the members of the cluster and collect the health information for you.
```
$ ./etcdctl cluster-health
member 8211f1d0f64f3269 is healthy: got healthy result from http://127.0.0.1:12379
member 91bc3c398fb3c146 is healthy: got healthy result from http://127.0.0.1:22379
member fd422379fda50e48 is healthy: got healthy result from http://127.0.0.1:32379
cluster is healthy
```
#### Runtime Metrics
etcd uses [Prometheus][prometheus] for metrics reporting in the server. You can read more through the runtime metrics [doc][metrics].
### Debugging
Debugging a distributed system can be difficult. etcd provides several ways to make debugging easier.
#### Enabling Debug Logging
When you want to debug etcd without stopping it, you can enable debug logging at runtime.
etcd exposes logging configuration at `/config/local/log`.
```
$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"DEBUG"}'
$ # debug logging enabled
$
$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"INFO"}'
$ # debug logging disabled
```
#### Debugging Variables
Debug variables are exposed for real-time debugging purposes. Developers who are familiar with etcd can utilize these variables to debug unexpected behavior. etcd exposes debug variables via HTTP at `/debug/vars` in JSON format. The debug variables contain `cmdline`, `file_descriptor_limit`, `memstats` and `raft.status`.
`cmdline` is the command line arguments passed into etcd.
`file_descriptor_limit` is the max number of file descriptors etcd can utilize.
`memstats` is explained in detail in the [Go runtime documentation][golang-memstats].
`raft.status` is useful when you want to debug low level raft issues if you are familiar with raft internals. In most cases, you do not need to check `raft.status`.
```json
{
"cmdline": ["./etcd"],
"file_descriptor_limit": 0,
"memstats": {"Alloc":4105744,"TotalAlloc":42337320,"Sys":12560632,"...":"..."},
"raft.status": {"id":"ce2a822cea30bfca","term":5,"vote":"ce2a822cea30bfca","commit":23509,"lead":"ce2a822cea30bfca","raftState":"StateLeader","progress":{"ce2a822cea30bfca":{"match":23509,"next":23510,"state":"ProgressStateProbe"}}}
}
```
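Since the endpoint serves plain JSON, selected variables can also be pulled programmatically; a small sketch:
```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:2379/debug/vars")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Decode only the fields of interest; the endpoint also exposes
	// memstats and raft.status, as shown above.
	var vars struct {
		Cmdline             []string `json:"cmdline"`
		FileDescriptorLimit uint64   `json:"file_descriptor_limit"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&vars); err != nil {
		log.Fatal(err)
	}
	fmt.Println(vars.Cmdline, vars.FileDescriptorLimit)
}
```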
### Optimal Cluster Size
The recommended etcd cluster size is 3, 5 or 7, which is decided by the fault tolerance requirement. A 7-member cluster can provide enough fault tolerance in most cases. While a larger cluster provides better fault tolerance, write performance decreases since data needs to be replicated to more machines.
#### Fault Tolerance Table
It is recommended to have an odd number of members in a cluster. Having an odd cluster size doesn't change the number needed for majority, but you gain a higher tolerance for failure by adding the extra member. You can see this in practice when comparing even and odd sized clusters:
| Cluster Size | Majority | Failure Tolerance |
|--------------|------------|-------------------|
| 1 | 1 | 0 |
| 2 | 2 | 0 |
| 3 | 2 | **1** |
| 4 | 3 | 1 |
| 5 | 3 | **2** |
| 6 | 4 | 2 |
| 7 | 4 | **3** |
| 8 | 5 | 3 |
| 9 | 5 | **4** |
As you can see, adding another member to bring the size of cluster up to an odd size is always worth it. During a network partition, an odd number of members also guarantees that there will almost always be a majority of the cluster that can continue to operate and be the source of truth when the partition ends.
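This table follows directly from quorum arithmetic: the majority is `n/2 + 1` (integer division), and the failure tolerance is the remainder. A small sketch that reproduces the table:
```go
package main

import "fmt"

func main() {
	for n := 1; n <= 9; n++ {
		majority := n/2 + 1       // quorum size
		tolerance := n - majority // members that can fail while preserving quorum
		fmt.Printf("size=%d majority=%d failure tolerance=%d\n", n, majority, tolerance)
	}
}
```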
#### Changing Cluster Size
After your cluster is up and running, adding or removing members is done via [runtime reconfiguration][runtime-reconfig], which allows the cluster to be modified without downtime. The `etcdctl` tool has `member list`, `member add` and `member remove` commands to complete this process.
### Member Migration
When there is scheduled machine maintenance or retirement, you might want to migrate an etcd member to another machine without losing the data or changing the member ID.
The data directory contains all the data to recover a member to its point-in-time state. To migrate a member:
* Stop the member process.
* Copy the data directory of the now-idle member to the new machine.
* Update the peer URLs for the replaced member to reflect the new machine according to the [runtime reconfiguration instructions][update-a-member].
* Start etcd on the new machine, using the same configuration and the copy of the data directory.
This example will walk you through the process of migrating the infra1 member to a new machine:
|Name|Peer URL|
|------|--------------|
|infra0|10.0.1.10:2380|
|infra1|10.0.1.11:2380|
|infra2|10.0.1.12:2380|
```sh
$ export ETCDCTL_ENDPOINT=http://10.0.1.10:2379,http://10.0.1.11:2379,http://10.0.1.12:2379
```
```sh
$ etcdctl member list
84194f7c5edd8b37: name=infra0 peerURLs=http://10.0.1.10:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.10:2379
b4db3bf5e495e255: name=infra1 peerURLs=http://10.0.1.11:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.11:2379
bc1083c870280d44: name=infra2 peerURLs=http://10.0.1.12:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.12:2379
```
#### Stop the member etcd process
```sh
$ ssh 10.0.1.11
```
```sh
$ kill `pgrep etcd`
```
#### Copy the data directory of the now-idle member to the new machine
```
$ tar -cvzf infra1.etcd.tar.gz %data_dir%
```
```sh
$ scp infra1.etcd.tar.gz 10.0.1.13:~/
```
#### Update the peer URLs for that member to reflect the new machine
```sh
$ curl http://10.0.1.10:2379/v2/members/b4db3bf5e495e255 -XPUT \
-H "Content-Type: application/json" -d '{"peerURLs":["http://10.0.1.13:2380"]}'
```
Or use the `etcdctl member update` command:
```sh
$ etcdctl member update b4db3bf5e495e255 http://10.0.1.13:2380
```
#### Start etcd on the new machine, using the same configuration and the copy of the data directory
```sh
$ ssh 10.0.1.13
```
```sh
$ tar -xzvf infra1.etcd.tar.gz -C %data_dir%
```
```
etcd -name infra1 \
-listen-peer-urls http://10.0.1.13:2380 \
-listen-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379
```
### Disaster Recovery
etcd is designed to be resilient to machine failures. An etcd cluster can automatically recover from any number of temporary failures (for example, machine reboots), and a cluster of N members can tolerate up to _(N-1)/2_ permanent failures (where a member can no longer access the cluster, due to hardware failure or disk corruption). However, in extreme circumstances, a cluster might permanently lose enough members such that quorum is irrevocably lost. For example, if a three-node cluster suffered two simultaneous and unrecoverable machine failures, it would be normally impossible for the cluster to restore quorum and continue functioning.
To recover from such scenarios, etcd provides functionality to backup and restore the datastore and recreate the cluster without data loss.
#### Backing up the datastore
**Note:** Windows users must stop etcd before running the backup command.
The first step of the recovery is to backup the data directory and wal directory, if stored separately, on a functioning etcd node. To do this, use the `etcdctl backup` command, passing in the original data (and wal) directory used by etcd. For example:
```sh
etcdctl backup \
--data-dir %data_dir% \
[--wal-dir %wal_dir%] \
--backup-dir %backup_data_dir%
[--backup-wal-dir %backup_wal_dir%]
```
This command will rewrite some of the metadata contained in the backup (specifically, the node ID and cluster ID), which means that the node will lose its former identity. In order to recreate a cluster from the backup, you will need to start a new, single-node cluster. The metadata is rewritten to prevent the new node from inadvertently being joined onto an existing cluster.
#### Restoring a backup
To restore a backup using the procedure created above, start etcd with the `-force-new-cluster` option and point it to the backup directory. This will initialize a new, single-member cluster with the default advertised peer URLs, but preserve the entire contents of the etcd data store. Continuing from the previous example:
```sh
etcd \
-data-dir=%backup_data_dir% \
[-wal-dir=%backup_wal_dir%] \
-force-new-cluster \
...
```
Now etcd should be available on this node and serving the original datastore.
Once you have verified that etcd has started successfully, shut it down and move the data and wal, if stored separately, back to the previous location (you may wish to make another copy as well to be safe):
```sh
pkill etcd
rm -fr %data_dir%
rm -fr %wal_dir%
mv %backup_data_dir% %data_dir%
mv %backup_wal_dir% %wal_dir%
etcd \
-data-dir=%data_dir% \
[-wal-dir=%wal_dir%] \
...
```
#### Restoring the cluster
Now that the node is running successfully, [change its advertised peer URLs][update-a-member], as the `--force-new-cluster` option has set the peer URL to the default listening on localhost.
You can then add more nodes to the cluster and restore resiliency. See the [add a new member][add-a-member] guide for more details.
**Note:** If you are trying to restore your cluster using old failed etcd nodes, please make sure you have stopped old etcd instances and removed their old data directories specified by the data-dir configuration parameter.
### Client Request Timeout
etcd sets different timeouts for various types of client requests. The timeout value is not currently tunable; this will be improved soon (https://github.com/coreos/etcd/issues/2038).
#### Get requests
Timeout is not set for get requests, because etcd serves the result locally in a non-blocking way.
**Note**: QuorumGet request is a different type, which is mentioned in the following sections.
#### Watch requests
Timeout is not set for watch requests. etcd will not stop a watch request until client cancels it, or the connection is broken.
#### Delete, Put, Post, QuorumGet requests
The default timeout is 5 seconds. It should be large enough to allow all key modifications if the majority of the cluster is functioning.
If the request times out, it indicates two possibilities:
1. the server the request sent to was not functioning at that time.
2. the majority of the cluster is not functioning.
If timeouts happen several times in a row, administrators should check the status of the cluster and resolve the issue as soon as possible.
### Best Practices
#### Maximum OS threads
By default, etcd uses the default configuration of the Go 1.4 runtime, which means that at most one operating system thread will be used to execute code simultaneously. (Note that this default behavior [has changed in Go 1.5][golang1.5-runtime]).
When using etcd in heavy-load scenarios on machines with multiple cores it will usually be desirable to increase the number of threads that etcd can utilize. To do this, simply set the environment variable GOMAXPROCS to the desired number when starting etcd. For more information on this variable, see the [Go runtime documentation][golang-runtime].
[add-a-member]: runtime-configuration.md#add-a-new-member
[golang1.5-runtime]: https://golang.org/doc/go1.5#runtime
[golang-memstats]: https://golang.org/pkg/runtime/#MemStats
[golang-runtime]: https://golang.org/pkg/runtime
[metrics]: metrics.md
[prometheus]: http://prometheus.io/
[remove-a-member]: runtime-configuration.md#remove-a-member
[runtime-reconfig]: runtime-configuration.md#cluster-reconfiguration-operations
[snap-pkg]: http://godoc.org/github.com/coreos/etcd/snap
[update-a-member]: runtime-configuration.md#update-a-member
[wal-pkg]: http://godoc.org/github.com/coreos/etcd/wal

File diff suppressed because it is too large


@ -1,92 +0,0 @@
# etcd3 API
TODO: API doc
## Data Model
etcd is designed to reliably store infrequently updated data and provide reliable watch queries. etcd exposes previous versions of key-value pairs to support inexpensive snapshots and watch history events (“time travel queries”). A persistent, multi-version, concurrency-control data model is a good fit for these use cases.
etcd stores data in a multiversion [persistent][persistent-ds] key-value store. The persistent key-value store preserves the previous version of a key-value pair when its value is superseded with new data. The key-value store is effectively immutable; its operations do not update the structure in-place, but instead always generate a new updated structure. All past versions of keys are still accessible and watchable after modification. To prevent the data store from growing indefinitely over time from maintaining old versions, the store may be compacted to shed the oldest versions of superseded data.
### Logical View
The store's logical view is a flat binary key space. The key space has a lexically sorted index on byte string keys, so range queries are inexpensive.
The key space maintains multiple revisions. Each atomic mutative operation (e.g., a transaction operation may contain multiple operations) creates a new revision on the key space. All data held by previous revisions remains unchanged. Old versions of a key can still be accessed through previous revisions. Likewise, revisions are indexed as well; ranging over revisions with watchers is efficient. If the store is compacted to recover space, revisions before the compact revision will be removed.
A key's lifetime spans a generation. Each key may have one or multiple generations. Creating a key increments the generation of that key, starting at 1 if the key never existed. Deleting a key generates a key tombstone, concluding the key's current generation. Each modification of a key creates a new version of the key. Once a compaction happens, any generation that ended before the compaction revision will be removed, and all values set before the compaction revision, except the latest one, will be removed.
### Physical View
etcd stores the physical data as key-value pairs in a persistent [b+tree][b+tree]. For efficiency, each revision of the store's state only contains the delta from its previous revision. A single revision may correspond to multiple keys in the tree.
The key of a key-value pair is a 3-tuple (major, sub, type). Major is the store revision holding the key. Sub differentiates among keys within the same revision. Type is an optional suffix for special values (e.g., `t` if the value contains a tombstone). The value of the key-value pair contains the modification from the previous revision, i.e., one delta from the previous revision. The b+tree is ordered by key in lexical byte-order. Ranged lookups over revision deltas are fast; this enables quickly finding modifications from one specific revision to another. Compaction removes out-of-date key-value pairs.
etcd also keeps a secondary in-memory [btree][btree] index to speed up range queries over keys. The keys in the btree index are the keys of the store exposed to the user. The value is a pointer to the modification of the persistent b+tree. Compaction removes dead pointers.
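As an illustration only (etcd's actual `mvcc` package differs in detail), the revision tuple described above and its ordering could be modeled as:
```go
package mvccsketch

// revision is a sketch of the (major, sub) part of the 3-tuple key:
// main is the store revision that holds the key; sub differentiates
// keys modified within the same revision (e.g. by a single transaction).
type revision struct {
	main int64
	sub  int64
}

// greaterThan orders revisions the way the lexically ordered b+tree
// keys do: first by main revision, then by sub.
func (a revision) greaterThan(b revision) bool {
	if a.main != b.main {
		return a.main > b.main
	}
	return a.sub > b.sub
}
```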
## KV API Guarantees
etcd is a consistent and durable key value store with mini-transaction (TODO: link to txn doc when we have it) support. The key value store is exposed through the KV APIs. etcd tries to ensure the strongest consistency and durability guarantees for a distributed system. This specification enumerates the KV API guarantees made by etcd.
### APIs to consider
* Read APIs
* range
* watch
* Write APIs
* put
* delete
* Combination (read-modify-write) APIs
* txn
### etcd Specific Definitions
#### operation completed
An etcd operation is considered complete when it is committed through consensus, and therefore “executed” -- permanently stored -- by the etcd storage engine. The client knows an operation is completed when it receives a response from the etcd server. Note that the client may be uncertain about the status of an operation if it times out, or there is a network disruption between the client and the etcd member. etcd may also abort operations when there is a leader election. etcd does not send `abort` responses to clients' outstanding requests in this event.
#### revision
An etcd operation that modifies the key value store is assigned a single increasing revision. A transaction operation might modify the key value store multiple times, but only one revision is assigned. The revision attribute of a key value pair modified by the operation has the same value as the revision of the operation. The revision can be used as a logical clock for the key value store. A key value pair that has a larger revision was modified after a key value pair with a smaller revision. Two key value pairs that have the same revision were modified by an operation "concurrently".
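In the v3 Go client, the revision assigned to an operation is returned on every response header, so it can be used directly as the logical clock described above. A sketch, assuming a local cluster:
```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	put, err := cli.Put(ctx, "k", "v1")
	if err != nil {
		log.Fatal(err)
	}
	// Header.Revision is the store revision assigned to this operation.
	fmt.Println("put revision:", put.Header.Revision)

	get, err := cli.Get(ctx, "k")
	if err != nil {
		log.Fatal(err)
	}
	// The key was just written, so Kvs is non-empty; ModRevision equals
	// the revision of the last operation that modified the key.
	fmt.Println("mod revision:", get.Kvs[0].ModRevision)
}
```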
### Guarantees Provided
#### Atomicity
All API requests are atomic; an operation either completes entirely or not at all. For watch requests, all events generated by one operation will be in one watch response. Watch never observes partial events for a single operation.
#### Consistency
All API calls ensure [sequential consistency][seq_consistency], the strongest consistency guarantee available from distributed systems. No matter which etcd member server a client makes requests to, a client reads the same events in the same order. If two members complete the same number of operations, the state of the two members is consistent.
For watch operations, etcd guarantees to return the same value for the same key across all members for the same revision. For range operations, etcd has a similar guarantee for [linearized][Linearizability] access; serialized access may be behind the quorum state, so that the later revision is not yet available.
As with all distributed systems, it is impossible for etcd to ensure [strict consistency][strict_consistency]. etcd does not guarantee that it will return to a read the “most recent” value (as measured by a wall clock when a request is completed) available on any cluster member.
#### Isolation
etcd ensures [serializable isolation][serializable_isolation], which is the highest isolation level available in distributed systems. Read operations will never observe any intermediate data.
#### Durability
Any completed operations are durable. All accessible data is also durable data. A read will never return data that has not been made durable.
#### Linearizability
Linearizability (also known as Atomic Consistency or External Consistency) is a consistency level between strict consistency and sequential consistency.
For linearizability, suppose each operation receives a timestamp from a loosely synchronized global clock. Operations are linearized if and only if they always complete as though they were executed in a sequential order and each operation appears to complete in the order specified by the program. Likewise, if an operation's timestamp precedes another's, that operation must also precede the other operation in the sequence.
For example, consider a client completing a write at time point 1 (*t1*). A client issuing a read at *t2* (for *t2* > *t1*) should receive a value at least as recent as the previous write, completed at *t1*. However, the read might actually complete only by *t3*, and the returned value, current at *t2* when the read began, might be "stale" by *t3*.
etcd does not ensure linearizability for watch operations. Users are expected to verify the revision of watch responses to ensure correct ordering.
etcd ensures linearizability for all other operations by default. Linearizability comes with a cost, however, because linearized requests must go through the Raft consensus process. To obtain lower latencies and higher throughput for read requests, clients can configure a request's consistency mode to `serializable`, which may access stale data with respect to quorum, but removes the performance penalty of linearized accesses' reliance on live consensus.
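For example, with the v3 `etcdctl` a read can be switched from the default linearizable mode to serializable per request (a sketch; output omitted):
```
$ etcdctl get foo                  # linearizable (default): goes through consensus
$ etcdctl get foo --consistency=s  # serializable: served from local state, may be stale
```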
[persistent-ds]: https://en.wikipedia.org/wiki/Persistent_data_structure
[btree]: https://en.wikipedia.org/wiki/B-tree
[b+tree]: https://en.wikipedia.org/wiki/B%2B_tree
[seq_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Sequential_consistency
[strict_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Strict_consistency
[serializable_isolation]: https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable
[Linearizability]: #linearizability


@@ -1,511 +0,0 @@
# v2 Auth and Security
## etcd Resources
There are three types of resources in etcd:
1. permission resources: users and roles in the user store
2. key-value resources: key-value pairs in the key-value store
3. settings resources: security settings, auth settings, and dynamic etcd cluster settings (election/heartbeat)
### Permission Resources
#### Users
A user is an identity to be authenticated. Each user can have multiple roles. The user has a capability (such as reading or writing) on the resource if one of the roles has that capability.
A user named `root` is required before authentication can be enabled, and it always has the ROOT role. The ROOT role can be granted to multiple users, but `root` is required for recovery purposes.
#### Roles
Each role has exactly one associated Permission List. A permission list exists for each permission on key-value resources.
The special static ROOT role (named `root`) has full permissions on all key-value resources and the permission to manage user resources and settings resources. Only the ROOT role has the permission to manage user resources and modify settings resources. The ROOT role is built-in and does not need to be created.
There is also a special GUEST role, named `guest`. These are the permissions given to unauthenticated requests to etcd. This role will be created automatically, and by default allows access to the full keyspace for backward compatibility (etcd did not previously authenticate any actions). This role can be modified by a ROOT role holder at any time, to reduce the capabilities of unauthenticated users.
#### Permissions
There are two types of permissions, `read` and `write`. All management and settings require the ROOT role.
A Permission List is a list of allowed patterns for that particular permission (read or write). Only ALLOW prefixes are supported. DENY becomes more complicated and is TBD.
### Key-Value Resources
A key-value resource is a key-value pair in the store. Given a list of matching patterns, permission for any given key in a request is granted if any of the patterns in the list match.
Only prefixes or exact keys are supported. A prefix permission string ends in `*`.
A permission on `/foo` covers that exact key or directory only, not its children. `/foo*` is a prefix that matches `/foo`, all keys under it, and all keys with that prefix (e.g. `/foobar`; contrast with the prefix `/foo/*`). `*` alone is permission on the full keyspace.
### Settings Resources
Specific settings for the cluster as a whole. This can include adding and removing cluster members, enabling or disabling authentication, replacing certificates, and any other dynamic configuration by the administrator (holder of the ROOT role).
## v2 Auth
### Basic Auth
We only support [Basic Auth][basic-auth] for the first version. Clients need to attach the Basic Auth credentials to the HTTP `Authorization` header.
### Authorization field for operations
The `Authorization` header is added to requests to `/v2/keys` and `/v2/auth`, and the status code `401 Unauthorized` is added to the set of responses from the v2 API:

    Authorization: Basic {encoded string}
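The encoded string is the base64 encoding of `user:password`. For example, with the illustrative credentials used later in this document:
```
$ echo -n 'root:betterRootPW!' | base64
cm9vdDpiZXR0ZXJSb290UFch
```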
### Future Work
Other types of auth can be considered in the future (e.g. signed certs, public keys), but the `Authorization:` header allows for other such types.
### Things out of Scope for etcd Permissions
* Pluggable AUTH backends like LDAP (other Authorization tokens generated by LDAP et al may be a possibility)
* Very fine-grained access controls (eg: users modifying keys outside work hours)
## API endpoints
An Error JSON corresponds to:
```
{
  "name": "ErrErrorName",
  "description": "The longer helpful description of the error."
}
```
#### Enable and Disable Authentication
**Get auth status**
```
GET /v2/auth/enable

    Sent Headers:
    Possible Status Codes:
        200 OK
    200 Body:
        {
          "enabled": true
        }
```
**Enable auth**
```
PUT /v2/auth/enable

    Sent Headers:
    Put Body: (empty)
    Possible Status Codes:
        200 OK
        400 Bad Request (if root user has not been created)
        409 Conflict (already enabled)
    200 Body: (empty)
```
**Disable auth**
```
DELETE /v2/auth/enable

    Sent Headers:
        Authorization: Basic <RootAuthString>
    Possible Status Codes:
        200 OK
        401 Unauthorized (if not a root user)
        409 Conflict (already disabled)
    200 Body: (empty)
```
#### Users
The User JSON object is formed as follows:
```
{
"user": "userName",
"password": "password",
"roles": [
"role1",
"role2"
],
"grant": [],
"revoke": []
}
```
Password is only passed when necessary.
**Get a List of Users**
```
GET/HEAD /v2/auth/users

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        401 Unauthorized
    200 Headers:
        Content-type: application/json
    200 Body:
        {
          "users": [
            {
              "user": "alice",
              "roles": [
                {
                  "role": "root",
                  "permissions": {
                    "kv": {
                      "read": ["/*"],
                      "write": ["/*"]
                    }
                  }
                }
              ]
            },
            {
              "user": "bob",
              "roles": [
                {
                  "role": "guest",
                  "permissions": {
                    "kv": {
                      "read": ["/*"],
                      "write": ["/*"]
                    }
                  }
                }
              ]
            }
          ]
        }
```
**Get User Details**
```
GET/HEAD /v2/auth/users/alice

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        401 Unauthorized
        404 Not Found
    200 Headers:
        Content-type: application/json
    200 Body:
        {
          "user": "alice",
          "roles": [
            {
              "role": "fleet",
              "permissions": {
                "kv": {
                  "read": ["/fleet/"],
                  "write": ["/fleet/"]
                }
              }
            },
            {
              "role": "etcd",
              "permissions": {
                "kv": {
                  "read": ["/*"],
                  "write": ["/*"]
                }
              }
            }
          ]
        }
```
**Create Or Update A User**
A user can be created with initial roles, if filled in. However, no roles are required; only the username and password fields are.
```
PUT /v2/auth/users/charlie

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Put Body:
        JSON struct, above, matching the appropriate name
          * Starting password and roles when creating.
          * Grant/Revoke/Password filled in when updating (to grant roles, revoke roles, or change the password).
    Possible Status Codes:
        200 OK
        201 Created
        400 Bad Request
        401 Unauthorized
        404 Not Found (update non-existent users)
        409 Conflict (when granting duplicated roles or revoking non-existent roles)
    200 Headers:
        Content-type: application/json
    200 Body:
        JSON state of the user
```
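For example, a hypothetical `curl` call creating charlie with an initial role (endpoint and credentials are illustrative):
```
$ curl -u 'root:betterRootPW!' -X PUT http://127.0.0.1:2379/v2/auth/users/charlie \
    -H 'Content-Type: application/json' \
    -d '{"user": "charlie", "password": "charliePW", "roles": ["fleet"]}'
```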
**Remove A User**
```
DELETE /v2/auth/users/charlie

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        401 Unauthorized
        403 Forbidden (remove root user when auth is enabled)
        404 Not Found
    200 Headers:
    200 Body: (empty)
```
#### Roles
A full role structure may look like this. A Permission List structure is used for the "permissions", "grant", and "revoke" keys.
```
{
"role" : "fleet",
"permissions" : {
"kv" : {
"read" : [ "/fleet/" ],
"write": [ "/fleet/" ]
}
},
"grant" : {"kv": {...}},
"revoke": {"kv": {...}}
}
```
**Get Role Details**
```
GET/HEAD /v2/auth/roles/fleet

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        401 Unauthorized
        404 Not Found
    200 Headers:
        Content-type: application/json
    200 Body:
        {
          "role": "fleet",
          "permissions": {
            "kv": {
              "read": ["/fleet/"],
              "write": ["/fleet/"]
            }
          }
        }
```
**Get a list of Roles**
```
GET/HEAD /v2/auth/roles

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        401 Unauthorized
    200 Headers:
        Content-type: application/json
    200 Body:
        {
          "roles": [
            {
              "role": "fleet",
              "permissions": {
                "kv": {
                  "read": ["/fleet/"],
                  "write": ["/fleet/"]
                }
              }
            },
            {
              "role": "etcd",
              "permissions": {
                "kv": {
                  "read": ["/*"],
                  "write": ["/*"]
                }
              }
            },
            {
              "role": "quay",
              "permissions": {
                "kv": {
                  "read": ["/*"],
                  "write": ["/*"]
                }
              }
            }
          ]
        }
```
**Create Or Update A Role**
```
PUT /v2/auth/roles/rkt

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Put Body:
        Initial desired JSON state, including the role name for verification and:
          * Starting permission set if creating
          * Granted/Revoked permission set if updating
    Possible Status Codes:
        200 OK
        201 Created
        400 Bad Request
        401 Unauthorized
        404 Not Found (update non-existent roles)
        409 Conflict (when granting duplicated permission or revoking non-existent permission)
    200 Body:
        JSON state of the role
```
**Remove A Role**
```
DELETE /v2/auth/roles/rkt

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        401 Unauthorized
        403 Forbidden (remove root)
        404 Not Found
    200 Headers:
    200 Body: (empty)
```
## Example Workflow
Let's walk through an example to show two tenants (applications, in our case) using etcd permissions.
### Create root user
```
PUT /v2/auth/users/root
Put Body:
{"user" : "root", "password": "betterRootPW!"}
```
### Enable auth
```
PUT /v2/auth/enable
```
### Modify guest role (revoke write permission)
```
PUT /v2/auth/roles/guest
Headers:
Authorization: Basic <root:betterRootPW!>
Put Body:
{
  "role" : "guest",
  "revoke" : {
    "kv" : {
      "write": [
        "/*"
      ]
    }
  }
}
```
### Create Roles for the Applications
Create the rkt role fully specified:
```
PUT /v2/auth/roles/rkt
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{
  "role" : "rkt",
  "permissions" : {
    "kv": {
      "read": [
        "/rkt/*"
      ],
      "write": [
        "/rkt/*"
      ]
    }
  }
}
```
But let's make fleet just a basic role for now:
```
PUT /v2/auth/roles/fleet
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{
"role" : "fleet"
}
```
### Optional: Grant some permissions to the roles
Well, we finally figured out where we want fleet to live. Let's fix it.
(Note that we avoided this in the rkt case, so this step is optional.)
```
PUT /v2/auth/roles/fleet
Headers:
Authorization: Basic <root:betterRootPW!>
Put Body:
{
  "role" : "fleet",
  "grant" : {
    "kv" : {
      "read": [
        "/rkt/fleet",
        "/fleet/*"
      ]
    }
  }
}
```
### Create Users
As before, let's set up rktuser all at once and fleetuser separately:
```
PUT /v2/auth/users/rktuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user" : "rktuser", "password" : "rktpw", "roles" : ["rkt"]}
```
```
PUT /v2/auth/users/fleetuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user" : "fleetuser", "password" : "fleetpw"}
```
### Optional: Grant Roles to Users
Likewise, let's explicitly grant fleetuser access.
```
PUT /v2/auth/users/fleetuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user": "fleetuser", "grant": ["fleet"]}
```
#### Start to use fleetuser and rktuser
For example:
```
PUT /v2/keys/rkt/RktData
Headers:
Authorization: Basic <rktuser:rktpw>
Body:
value=launch
```
Reads and writes outside the prefixes granted will fail with a 401 Unauthorized.
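For example, rktuser writing outside the granted `/rkt/*` prefix would be rejected (a hypothetical request; the error body follows the Error JSON shape above):
```
PUT /v2/keys/fleet/oops
Headers:
    Authorization: Basic <rktuser:rktpw>
Body:
    value=nope
=> 401 Unauthorized
```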
[basic-auth]: https://en.wikipedia.org/wiki/Basic_access_authentication


@@ -1,180 +0,0 @@
# Authentication Guide
## Overview
Authentication -- having users and roles in etcd -- was added in etcd 2.1. This guide will help you set up basic authentication in etcd.
etcd before 2.1 was a completely open system; anyone with access to the API could change keys. In order to preserve backward compatibility and upgradability, this feature is off by default.
For a full discussion of the RESTful API, see [the authentication API documentation][auth-api].
## Special Users and Roles
There is one special user, `root`, and there are two special roles, `root` and `guest`.
### User `root`
User `root` must be created before security can be activated. It has the `root` role and allows for the changing of anything inside etcd. The idea behind the `root` user is for recovery purposes -- a password is generated and stored somewhere -- and the root role is granted to the administrator accounts on the system. In the future, for troubleshooting and recovery, we will need to assume some access to the system, and future documentation will assume this root user (though anyone with the role will suffice).
### Role `root`
Role `root` cannot be modified, but it may be granted to any user. Having access via the root role not only allows global read-write access (as was the case before 2.1) but allows modification of the authentication policy and all administrative things, like modifying the cluster membership.
### Role `guest`
The `guest` role defines the permissions granted to any request that does not provide authentication. This role will be created on security activation (if it doesn't already exist) with full access to all keys, as was true in etcd 2.0. It may be modified at any time, but cannot be removed.
## Working with users
The `user` subcommand for `etcdctl` handles all things having to do with user accounts.
A listing of users can be found with
```
$ etcdctl user list
```
Creating a user is as easy as
```
$ etcdctl user add myusername
```
And there will be a prompt for a new password.
Roles can be granted and revoked for a user with
```
$ etcdctl user grant myusername -roles foo,bar,baz
$ etcdctl user revoke myusername -roles bar,baz
```
We can look at this user with
```
$ etcdctl user get myusername
```
And the password for a user can be changed with
```
$ etcdctl user passwd myusername
```
Which will prompt again for a new password.
To delete an account, there's always
```
$ etcdctl user remove myusername
```
## Working with roles
The `role` subcommand for `etcdctl` handles all things having to do with access controls for particular roles, as granted to individual users.
A listing of roles can be found with
```
$ etcdctl role list
```
A new role can be created with
```
$ etcdctl role add myrolename
```
A role has no password; we are merely defining a new set of access rights.
Roles are granted access to various parts of the keyspace, a single path at a time.
Reading a path is simple; if the path ends in `*`, that key **and all keys prefixed with it** are granted to holders of this role. If it does not end in `*`, only that key and that key alone is granted.
Access can be granted as either read, write, or both, as in the following examples:
```
# Give read access to keys under the /foo directory
$ etcdctl role grant myrolename -path '/foo/*' -read
# Give write-only access to the key at /foo/bar
$ etcdctl role grant myrolename -path '/foo/bar' -write
# Give full access to keys under /pub
$ etcdctl role grant myrolename -path '/pub/*' -readwrite
```
Beware that
```
# Give full access to keys under /pub??
$ etcdctl role grant myrolename -path '/pub*' -readwrite
```
Without the slash, the grant may also include keys under `/publishing`, for example. To cover both the key `/pub` and its subtree, grant both `/pub` and `/pub/*`, as shown below.
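For instance, reusing the role from the earlier examples:
```
$ etcdctl role grant myrolename -path '/pub' -readwrite
$ etcdctl role grant myrolename -path '/pub/*' -readwrite
```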
To see what's granted, we can look at the role at any time:
```
$ etcdctl role get myrolename
```
Revocation of permissions is done the same logical way:
```
$ etcdctl role revoke myrolename -path '/foo/bar' -write
```
As is removing a role entirely
```
$ etcdctl role remove myrolename
```
## Enabling authentication
The minimal steps to enable auth are as follows. The administrator can set up users and roles before or after enabling authentication, as a matter of preference.
Make sure the root user is created:
```
$ etcdctl user add root
New password:
```
And enable authentication
```
$ etcdctl auth enable
```
After this, etcd is running with authentication enabled. To disable it for any reason, use the reciprocal command:
```
$ etcdctl -u root:rootpw auth disable
```
It would also be good to check what guests (unauthenticated users) are allowed to do:
```
$ etcdctl -u root:rootpw role get guest
```
And modify this role appropriately, depending on your policies.
## Using `etcdctl` to authenticate
`etcdctl` supports a flag similar to `curl`'s for authentication.
```
$ etcdctl -u user:password get foo
```
or if you prefer to be prompted:
```
$ etcdctl -u user get foo
```
Otherwise, all `etcdctl` commands remain the same. Users and roles can still be created and modified, but require authentication by a user with the root role.
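For example (the root password is illustrative):
```
$ etcdctl -u root:rootpw user add newuser
$ etcdctl -u root:rootpw role add newrole
```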
[auth-api]: auth_api.md


@@ -1,72 +0,0 @@
# Backward Compatibility
The main goal of the etcd 2.0 release is to improve cluster safety around bootstrapping and dynamic reconfiguration. To do this, we deprecated the old error-prone APIs and provided a new set of APIs.
The other main focus of this release was a more reliable Raft implementation, but as this change is internal it should not have any notable effects on users.
## Command Line Flags Changes
The major flag changes mostly relate to bootstrapping. The `initial-*` flags provide an improved way to specify the required criteria to start the cluster. The advertised URLs now support a list of values instead of a single value, which allows etcd users to gracefully migrate to the new set of IANA-assigned ports (2379/client and 2380/peers) while maintaining backward compatibility with the old ports (see the sketch after the list below).
- `-addr` is replaced by `-advertise-client-urls`.
- `-bind-addr` is replaced by `-listen-client-urls`.
- `-peer-addr` is replaced by `-initial-advertise-peer-urls`.
- `-peer-bind-addr` is replaced by `-listen-peer-urls`.
- `-peers` is replaced by `-initial-cluster`.
- `-peers-file` is replaced by `-initial-cluster`.
- `-peer-heartbeat-interval` is replaced by `-heartbeat-interval`.
- `-peer-election-timeout` is replaced by `-election-timeout`.
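As a sketch of the migration (addresses and cluster names are illustrative, and not all required flags are shown):
```
# etcd 0.4.x style:
etcd -name infra0 -addr 10.0.0.1:4001 -peer-addr 10.0.0.1:7001 -peers 10.0.0.2:7001,10.0.0.3:7001

# etcd 2.0 style, using the IANA-assigned ports:
etcd -name infra0 \
  -advertise-client-urls http://10.0.0.1:2379 \
  -initial-advertise-peer-urls http://10.0.0.1:2380 \
  -initial-cluster 'infra0=http://10.0.0.1:2380,infra1=http://10.0.0.2:2380,infra2=http://10.0.0.3:2380'
```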
The documentation of new command line flags can be found at
https://github.com/coreos/etcd/blob/master/Documentation/v2/configuration.md.
## Data Directory Naming
The default data dir location has changed from `{$hostname}.etcd` to `{name}.etcd`.
## Key-Value API
### Read consistency flag
The consistent flag for read operations is removed in etcd 2.0.0. The normal read operations now provide the same consistency guarantees as the 0.4.6 read operations with the consistent flag set.
The read consistency guarantees are:
Consistent reads guarantee sequential consistency within one client that talks to one etcd server. Reads and writes from one client to one etcd member are observed in order. If one client writes a value to an etcd server successfully, it should be able to read the value back out of the server immediately.
Each etcd member will proxy the request to the leader and only return the result to the user after the result is applied on the local member. Thus after the write succeeds, the user is guaranteed to see the value on the member it sent the request to.
Reads do not provide linearizability. If you want linearizable reads, you need to set the quorum option to true; for example (the endpoint and key are illustrative):
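```
# default read: sequentially consistent, but not linearizable
curl http://127.0.0.1:2379/v2/keys/foo
# linearizable read through quorum
curl 'http://127.0.0.1:2379/v2/keys/foo?quorum=true'
```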
**Previous behavior**
We added an option for a consistent read in the old version of etcd because etcd 0.x redirects the write request to the leader. When the user gets back the result from the leader, the member it sent the request to originally might not have applied the write request yet. With the consistent flag set to true, the client will always send read requests to the leader. So one client should be able to see its last write when consistent=true is enabled. There are no ordering guarantees among different clients.
## Standby
etcd 0.4's standby mode has been deprecated. [Proxy mode][proxymode] is introduced to solve a subset of the problems standby was solving.
Standby mode was intended for large clusters that had a subset of the members acting in the consensus process. Overall this process was too magical and allowed for operators to back themselves into a corner.
Proxy mode in 2.0 will provide similar functionality, with improved control over which machines act as proxies, since the operator configures them explicitly. Proxies also support read-only or read/write modes for increased security and durability.
[proxymode]: proxy.md
## Discovery Service
A size key needs to be provided inside a [discovery token][discoverytoken].
[discoverytoken]: clustering.md#custom-etcd-discovery-service
## HTTP Admin API
`v2/admin` on peer url and `v2/keys/_etcd` are unified under the new [v2/members API][members-api] to better explain which machines are part of an etcd cluster, and to simplify the keyspace for all your use cases.
[members-api]: members_api.md
## HTTP Key Value API
- The follower can now transparently proxy write requests to the leader. Clients will no longer see 307 redirections to the leader from etcd.
- Expiration time is in UTC instead of local time.


@@ -1,18 +0,0 @@
# Benchmarks
etcd benchmarks will be published regularly and tracked for each release below:
- [etcd v2.1.0-alpha][2.1]
- [etcd v2.2.0-rc][2.2]
- [etcd v3 demo][3.0]
# Memory Usage Benchmarks
It records expected memory usage in different scenarios.
- [etcd v2.2.0-rc][2.2-mem]
[2.1]: etcd-2-1-0-alpha-benchmarks.md
[2.2]: etcd-2-2-0-rc-benchmarks.md
[2.2-mem]: etcd-2-2-0-rc-memory-benchmarks.md
[3.0]: etcd-3-demo-benchmarks.md


@@ -1,52 +0,0 @@
## Physical machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
- etcd version 2.1.0 alpha
## etcd Cluster
3 etcd members, each running on a single machine
## Testing
Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send requests to each etcd member. Check the [benchmark hacking guide][hack-benchmark] for detailed instructions.
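For reference, a typical `boom` invocation looks like the following (flags per the boom README; the URL and counts are illustrative):
```
$ boom -n 10000 -c 64 http://10.0.0.1:2379/v2/keys/foo
```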
## Performance
### reading one single key
| key size in bytes | number of clients | target etcd server | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|----------|---------------|
| 64 | 1 | leader only | 1534 | 0.7 |
| 64 | 64 | leader only | 10125 | 9.1 |
| 64 | 256 | leader only | 13892 | 27.1 |
| 256 | 1 | leader only | 1530 | 0.8 |
| 256 | 64 | leader only | 10106 | 10.1 |
| 256 | 256 | leader only | 14667 | 27.0 |
| 64 | 64 | all servers | 24200 | 3.9 |
| 64 | 256 | all servers | 33300 | 11.8 |
| 256 | 64 | all servers | 24800 | 3.9 |
| 256 | 256 | all servers | 33000 | 11.5 |
### writing one single key
| key size in bytes | number of clients | target etcd server | write QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|-----------|---------------|
| 64 | 1 | leader only | 60 | 21.4 |
| 64 | 64 | leader only | 1742 | 46.8 |
| 64 | 256 | leader only | 3982 | 90.5 |
| 256 | 1 | leader only | 58 | 20.3 |
| 256 | 64 | leader only | 1770 | 47.8 |
| 256 | 256 | leader only | 4157 | 105.3 |
| 64 | 64 | all servers | 1028 | 123.4 |
| 64 | 256 | all servers | 3260 | 123.8 |
| 256 | 64 | all servers | 1033 | 121.5 |
| 256 | 256 | all servers | 3061 | 119.3 |
[boom]: https://github.com/rakyll/boom
[hack-benchmark]: ../../../hack/benchmark/


@@ -1,72 +0,0 @@
# Benchmarking etcd v2.2.0
## Physical Machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted as etcd data directory
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
## etcd Cluster
3 etcd 2.2.0 members, each running on a single machine.
Detailed versions:
```
etcd Version: 2.2.0
Git SHA: e4561dd
Go Version: go1.5
Go OS/Arch: linux/amd64
```
## Testing
Bootstrap another machine, outside of the etcd cluster, and run the [`boom` HTTP benchmark tool][boom] with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions][hack] for the patch and the steps to reproduce our procedures.
Performance is calculated from the results of 100 benchmark rounds.
## Performance
### Single Key Read Performance
| key size in bytes | number of clients | target etcd server | average read QPS | read QPS stddev | average 90th Percentile Latency (ms) | latency stddev |
|-------------------|-------------------|--------------------|------------------|-----------------|--------------------------------------|----------------|
| 64 | 1 | leader only | 2303 | 200 | 0.49 | 0.06 |
| 64 | 64 | leader only | 15048 | 685 | 7.60 | 0.46 |
| 64 | 256 | leader only | 14508 | 434 | 29.76 | 1.05 |
| 256 | 1 | leader only | 2162 | 214 | 0.52 | 0.06 |
| 256 | 64 | leader only | 14789 | 792 | 7.69| 0.48 |
| 256 | 256 | leader only | 14424 | 512 | 29.92 | 1.42 |
| 64 | 64 | all servers | 45752 | 2048 | 2.47 | 0.14 |
| 64 | 256 | all servers | 46592 | 1273 | 10.14 | 0.59 |
| 256 | 64 | all servers | 45332 | 1847 | 2.48| 0.12 |
| 256 | 256 | all servers | 46485 | 1340 | 10.18 | 0.74 |
### Single Key Write Performance
| key size in bytes | number of clients | target etcd server | average write QPS | write QPS stddev | average 90th Percentile Latency (ms) | latency stddev |
|-------------------|-------------------|--------------------|------------------|-----------------|--------------------------------------|----------------|
| 64 | 1 | leader only | 55 | 4 | 24.51 | 13.26 |
| 64 | 64 | leader only | 2139 | 125 | 35.23 | 3.40 |
| 64 | 256 | leader only | 4581 | 581 | 70.53 | 10.22 |
| 256 | 1 | leader only | 56 | 4 | 22.37| 4.33 |
| 256 | 64 | leader only | 2052 | 151 | 36.83 | 4.20 |
| 256 | 256 | leader only | 4442 | 560 | 71.59 | 10.03 |
| 64 | 64 | all servers | 1625 | 85 | 58.51 | 5.14 |
| 64 | 256 | all servers | 4461 | 298 | 89.47 | 36.48 |
| 256 | 64 | all servers | 1599 | 94 | 60.11| 6.43 |
| 256 | 256 | all servers | 4315 | 193 | 88.98 | 7.01 |
## Performance Changes
- Because etcd now records metrics for each API call, read QPS performance seems to see a minor decrease in most scenarios. This minimal performance impact was judged a reasonable investment for the breadth of monitoring and debugging information returned.
- Write QPS to cluster leaders seems to be increased by a small margin. This is because the main loop and entry apply loops were decoupled in the etcd raft logic, eliminating several blocks between them.
- Write QPS to all members seems to be increased by a significant margin, because followers now receive the latest commit index sooner, and commit proposals more quickly.
[boom]: https://github.com/rakyll/boom
[hack]: ../../../hack/benchmark/


@@ -1,72 +0,0 @@
## Physical machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
## etcd Cluster
3 etcd 2.2.0-rc members, each running on a single machine.
Detailed versions:
```
etcd Version: 2.2.0-alpha.1+git
Git SHA: 59a5a7e
Go Version: go1.4.2
Go OS/Arch: linux/amd64
```
Also, we use 3 etcd 2.1.0 alpha-stage members to form a cluster as a performance baseline. etcd's commit head is at [c7146bd5][c7146bd5], which is the same as the one used in the [etcd 2.1 benchmark][etcd-2.1-benchmark].
## Testing
Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send requests to each etcd member. Check the [benchmark hacking guide][hack-benchmark] for detailed instructions.
## Performance
### reading one single key
| key size in bytes | number of clients | target etcd server | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|----------|---------------|
| 64 | 1 | leader only | 2804 (-5%) | 0.4 (+0%) |
| 64 | 64 | leader only | 17816 (+0%) | 5.7 (-6%) |
| 64 | 256 | leader only | 18667 (-6%) | 20.4 (+2%) |
| 256 | 1 | leader only | 2181 (-15%) | 0.5 (+25%) |
| 256 | 64 | leader only | 17435 (-7%) | 6.0 (+9%) |
| 256 | 256 | leader only | 18180 (-8%) | 21.3 (+3%) |
| 64 | 64 | all servers | 46965 (-4%) | 2.1 (+0%) |
| 64 | 256 | all servers | 55286 (-6%) | 7.4 (+6%) |
| 256 | 64 | all servers | 46603 (-6%) | 2.1 (+5%) |
| 256 | 256 | all servers | 55291 (-6%) | 7.3 (+4%) |
### writing one single key
| key size in bytes | number of clients | target etcd server | write QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|-----------|---------------|
| 64 | 1 | leader only | 76 (+22%) | 19.4 (-15%) |
| 64 | 64 | leader only | 2461 (+45%) | 31.8 (-32%) |
| 64 | 256 | leader only | 4275 (+1%) | 69.6 (-10%) |
| 256 | 1 | leader only | 64 (+20%) | 16.7 (-30%) |
| 256 | 64 | leader only | 2385 (+30%) | 31.5 (-19%) |
| 256 | 256 | leader only | 4353 (-3%) | 74.0 (+9%) |
| 64 | 64 | all servers | 2005 (+81%) | 49.8 (-55%) |
| 64 | 256 | all servers | 4868 (+35%) | 81.5 (-40%) |
| 256 | 64 | all servers | 1925 (+72%) | 47.7 (-59%) |
| 256 | 256 | all servers | 4975 (+36%) | 70.3 (-36%) |
### performance changes explanation
- read QPS in most scenarios is decreased by 5~8%. The reason is that etcd records store metrics for each store operation. The metrics are important for monitoring and debugging, so this is acceptable.
- write QPS to the leader is increased by 20~30%. This is because we decoupled the raft main loop and the entry apply loop, which previously blocked each other.
- write QPS to all servers is increased by 30~80% because followers now receive the latest commit index earlier and commit proposals faster.
[boom]: https://github.com/rakyll/boom
[c7146bd5]: https://github.com/coreos/etcd/commits/c7146bd5f2c73716091262edc638401bb8229144
[etcd-2.1-benchmark]: etcd-2-1-0-alpha-benchmarks.md
[hack-benchmark]: ../../../hack/benchmark/


@@ -1,47 +0,0 @@
## Physical machine
GCE n1-standard-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 7.5 GB memory
- 2x CPUs
## etcd
```
etcd Version: 2.2.0-rc.0+git
Git SHA: 103cb5c
Go Version: go1.5
Go OS/Arch: linux/amd64
```
## Testing
Start a 3-member etcd cluster, each member using 2 cores.
The key name length is always 64 bytes, which is a reasonable average key length.
## Memory Maximal Usage
- etcd may reach maximal memory usage if one follower is dead and the leader keeps sending snapshots.
- `max RSS` is the maximal memory usage recorded in 3 runs.
| value bytes | key number | data size(MB) | max RSS(MB) | max RSS/data rate on leader |
|-------------|-------------|---------------|-------------|-----------------------------|
| 128 | 50000 | 6 | 433 | 72x |
| 128 | 100000 | 12 | 659 | 54x |
| 128 | 200000 | 24 | 1466 | 61x |
| 1024 | 50000 | 48 | 1253 | 26x |
| 1024 | 100000 | 96 | 2344 | 24x |
| 1024 | 200000 | 192 | 4361 | 22x |
## Data Size Threshold
- When etcd reaches the data size threshold, it may trigger leader elections easily and drop some proposals.
- In most cases, the etcd cluster should work smoothly if it doesn't hit the threshold. If it doesn't work well due to insufficient resources, you need to decrease its data size.
| value bytes | key number limitation | suggested data size threshold(MB) | consumed RSS(MB) |
|-------------|-----------------------|-----------------------------------|------------------|
| 128 | 400K | 48 | 2400 |
| 1024 | 300K | 292 | 6500 |


@@ -1,42 +0,0 @@
## Physical machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
- etcd version 2.2.0
## etcd Cluster
1 etcd member running in v3 demo mode
## Testing
Use [etcd v3 benchmark tool][etcd-v3-benchmark].
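A hypothetical invocation (subcommand and flag names may differ by tool version):
```
$ benchmark --conns=64 --clients=64 range foo --total=100000
```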
## Performance
### reading one single key
| key size in bytes | number of clients | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|----------|---------------|
| 256 | 1 | 2716 | 0.4 |
| 256 | 64 | 16623 | 6.1 |
| 256 | 256 | 16622 | 21.7 |
The performance is nearly the same as with an empty server handler.
### reading one single key after putting
| key size in bytes | number of clients | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|----------|---------------|
| 256 | 1 | 2269 | 0.5 |
| 256 | 64 | 13582 | 8.6 |
| 256 | 256 | 13262 | 47.5 |
The performance with an empty server handler is not affected by one put, so the performance downgrade should be caused by the storage package.
[etcd-v3-benchmark]: ../../../tools/benchmark/
