Compare commits


406 Commits

Author SHA1 Message Date
7dc07f2a9b Merge pull request #12811 from hexfusion/cp-8469-release-3.2
Manual cherry pick of #8469
2021-03-28 17:07:40 -04:00
7ec9c48a45 pkg/pbutil: Fix go vet errors 2021-03-28 16:47:09 -04:00
8ce82ff877 Merge pull request #12809 from hexfusion/fix-logger
raft: correctly pass arguments to Logger.Panic
2021-03-28 21:33:47 +02:00
6064a0e39c raft: correctly pass arguments to Logger.Panic
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2021-03-28 15:20:34 -04:00
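For illustration: raft's Logger interface exposes both a Print-style Panic and a Printf-style Panicf, and the bug fixed here was passing a format string plus arguments to the Print-style call. A minimal sketch (message text illustrative):

```
// Before: Panic has fmt.Print semantics, so the "%v" verb was
// emitted literally and the argument merely appended.
//   logger.Panic("unexpected entry type: %v", typ)
// After: the Printf-style variant expands the arguments.
logger.Panicf("unexpected entry type: %v", typ)
```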
78f1a05493 version: 3.2.32
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2021-03-26 13:04:17 -04:00
7a07e9f3b3 Merge pull request #12639 from retroflexer/check-nil-decoder
wal: fix panic when decoder not set
2021-01-21 13:13:55 -05:00
8d4ab97008 wal: fix panic when decoder not set
Handle the related panic and clarify doc.
2021-01-21 13:00:34 -05:00
79c998d91a Merge pull request #12638 from retroflexer/check-slice-range-in-ReadAll
wal: check out of range slice in "ReadAll", "decoder"
2021-01-21 14:26:24 +01:00
b14255c0b4 wal: check out of range slice in "ReadAll", "decoder"
wal: add slice bound checks in decoder

CHANGELOG-3.5: add wal slice bound check
CHANGELOG-3.5: add "decodeRecord"

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2021-01-21 07:25:04 -05:00
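A hedged sketch of the kind of bound check described above, with illustrative names (the actual change guards the length-prefixed frame reads inside the WAL decoder):

```
// decodeFrame returns the record payload only after validating the
// declared lengths against the remaining buffer, so a corrupt or
// truncated length prefix yields an error instead of a slice panic.
func decodeFrame(buf []byte, recBytes, padBytes int64) ([]byte, error) {
	if recBytes < 0 || padBytes < 0 || recBytes+padBytes > int64(len(buf)) {
		return nil, io.ErrUnexpectedEOF
	}
	return buf[:recBytes], nil
}
```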
13465d6d3d Merge pull request #12553 from kolyshkin/3.2-fix-lock
[3.2 backport] pkg/fileutil: fix constant for linux locking
2021-01-16 22:19:50 +01:00
229492c969 pkg/fileutil: fix constant for linux locking
The constant F_OFD_GETLK is 36, not 37, according to
/usr/include/bits/fcntl-linux.h
Credits go to joakim-tjernlund, who dug deep enough
to find this.

Fixes #31182
2020-12-14 10:59:42 -08:00
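For reference, the Linux open-file-description (OFD) lock commands from fcntl-linux.h; the bug was defining F_OFD_GETLK with the value that belongs to F_OFD_SETLK. A minimal Linux-only sketch (lock path and usage illustrative):

```
package main

import (
	"fmt"
	"io"
	"os"
	"syscall"
)

// Values per /usr/include/bits/fcntl-linux.h.
const (
	F_OFD_GETLK  = 36 // previously mis-defined as 37
	F_OFD_SETLK  = 37
	F_OFD_SETLKW = 38
)

func main() {
	f, err := os.OpenFile("/tmp/demo.lock", os.O_WRONLY|os.O_CREATE, 0600)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	// Non-blocking OFD write lock over the whole file.
	lk := syscall.Flock_t{Type: syscall.F_WRLCK, Whence: int16(io.SeekStart)}
	fmt.Println(syscall.FcntlFlock(f.Fd(), F_OFD_SETLK, &lk))
}
```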
ba92a0e70f version: 3.2.31
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-08-18 09:37:40 -07:00
55db5b49e8 etcdserver: add OS level FD metrics
Similar counts are exposed via Prometheus.
This adds the ones that are perceived by the etcd server.

e.g.

os_fd_limit 120000
os_fd_used 14
process_cpu_seconds_total 0.31
process_max_fds 120000
process_open_fds 17

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-08-18 09:37:11 -07:00
b7243f0175 pkg/runtime: optimize FDUsage by removing sort
No need to sort when we just want the counts.

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-08-18 09:35:58 -07:00
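A minimal sketch of the optimization, assuming the Linux /proc layout: os.File.Readdirnames returns directory entries unsorted, unlike ioutil.ReadDir, which sorts by name before returning.

```
// fdUsage counts open file descriptors by counting the entries of
// /proc/self/fd without sorting them (sketch of pkg/runtime's helper).
func fdUsage() (uint64, error) {
	dir, err := os.Open("/proc/self/fd")
	if err != nil {
		return 0, err
	}
	defer dir.Close()
	names, err := dir.Readdirnames(-1) // -1: read all entries, no sort
	if err != nil {
		return 0, err
	}
	return uint64(len(names)), nil
}
```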
7619b2f744 etcdserver/etcdserverpb: change protobuf field type from int to int64
ref. https://github.com/etcd-io/etcd/pull/12106

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-08-18 09:34:43 -07:00
17acb61209 Merge pull request #11691 from wswcfan/fix-etcd-3.2-to-3.3-upgrade-bug
etcdserver: fix LeaseRevoke may fail to apply when authentication is enabled and upgrading cluster from etcd-3.2 to etcd-3.3
2020-05-22 03:28:16 +08:00
6e77b87c06 auth, etcdserver: attaching a fake root token when calling LeaseRevoke
fixes a case where LeaseRevoke may fail to apply when authentication is enabled
and the cluster is upgrading from etcd-3.2 to etcd-3.3 (#11691)
2020-05-11 23:49:14 +08:00
b7644ae5f0 version: 3.2.30
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-04-01 10:48:30 -07:00
325f2d253c wal: add "etcd_wal_writes_bytes_total"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-04-01 10:48:27 -07:00
b05103392d pkg/ioutil: add "FlushN"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-04-01 10:44:18 -07:00
b69c0023c9 version: 3.2.29
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-03-18 17:19:14 -07:00
35d1d2d58e etcdserver/api/v3rpc: handle api version metadata, add metrics
ref.
https://github.com/etcd-io/etcd/pull/11687

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-03-18 17:19:14 -07:00
20b27a887d clientv3: embed api version in metadata
ref.
https://github.com/etcd-io/etcd/pull/11687

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>

clientv3: fix racy writes to context key

=== RUN   TestWatchOverlapContextCancel
==================
WARNING: DATA RACE
Write at 0x00c42110dd40 by goroutine 99:
  runtime.mapassign()
      /usr/local/go/src/runtime/hashmap.go:485 +0x0
  github.com/coreos/etcd/clientv3.metadataSet()
      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/ctx.go:61 +0x8c
  github.com/coreos/etcd/clientv3.withVersion()
      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/ctx.go:47 +0x137
  github.com/coreos/etcd/clientv3.newStreamClientInterceptor.func1()
      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/client.go:309 +0x81
  google.golang.org/grpc.NewClientStream()
      /go/src/github.com/coreos/etcd/gopath/src/google.golang.org/grpc/stream.go:101 +0x10e
  github.com/coreos/etcd/etcdserver/etcdserverpb.(*watchClient).Watch()
      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/etcdserver/etcdserverpb/rpc.pb.go:3193 +0xe9
  github.com/coreos/etcd/clientv3.(*watchGrpcStream).openWatchClient()
      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/watch.go:788 +0x143
  github.com/coreos/etcd/clientv3.(*watchGrpcStream).newWatchClient()
      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/watch.go:700 +0x5c3
  github.com/coreos/etcd/clientv3.(*watchGrpcStream).run()
      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/watch.go:431 +0x12b

Previous read at 0x00c42110dd40 by goroutine 130:
  reflect.maplen()
      /usr/local/go/src/runtime/hashmap.go:1165 +0x0
  reflect.Value.MapKeys()
      /usr/local/go/src/reflect/value.go:1090 +0x43b
  fmt.(*pp).printValue()
      /usr/local/go/src/fmt/print.go:741 +0x1885
  fmt.(*pp).printArg()
      /usr/local/go/src/fmt/print.go:682 +0x1b1
  fmt.(*pp).doPrintf()
      /usr/local/go/src/fmt/print.go:998 +0x1cad
  fmt.Sprintf()
      /usr/local/go/src/fmt/print.go:196 +0x77
  github.com/coreos/etcd/clientv3.streamKeyFromCtx()
      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/watch.go:825 +0xc8
  github.com/coreos/etcd/clientv3.(*watcher).Watch()
      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/watch.go:265 +0x426
  github.com/coreos/etcd/clientv3/integration.testWatchOverlapContextCancel.func1()
      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/integration/watch_test.go:959 +0x23e

Goroutine 99 (running) created at:
  github.com/coreos/etcd/clientv3.(*watcher).newWatcherGrpcStream()
      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/watch.go:236 +0x59d
  github.com/coreos/etcd/clientv3.(*watcher).Watch()
      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/watch.go:278 +0xbb6
  github.com/coreos/etcd/clientv3/integration.testWatchOverlapContextCancel.func1()
      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/integration/watch_test.go:959 +0x23e

Goroutine 130 (running) created at:
  github.com/coreos/etcd/clientv3/integration.testWatchOverlapContextCancel()
      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/integration/watch_test.go:979 +0x76d
  github.com/coreos/etcd/clientv3/integration.TestWatchOverlapContextCancel()
      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/integration/watch_test.go:922 +0x44
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:657 +0x107
==================

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-03-18 17:19:14 -07:00
42d749057d etcdserver/api/etcdhttp: log server-side /health checks
ref.
https://github.com/etcd-io/etcd/pull/11704

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-03-18 17:19:14 -07:00
a0d497b54a mvcc/backend: Delete orphaned db.tmp files before defrag 2020-02-13 12:44:59 -08:00
2d861f39e9 version: 3.2.28
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2019-11-03 07:33:22 -05:00
aaab73020c Merge pull request #11316 from jingyih/automated-cherry-pick-of-#11308-upstream-release-3.2
Automated cherry pick of #11308 on release-3.2
2019-10-30 21:24:44 -07:00
e2cb1a417c etcdserver: wait purge file loop during shutdown
To prevent the purge file loop from accidentally acquiring the file lock
and removing the files during server shutdown.
2019-10-30 17:07:03 -07:00
fb5e7bfe00 Merge pull request #11277 from wenjiaswe/fixtests
e2e test: remove metricsURLScheme from e2e tests
2019-10-17 17:12:27 -07:00
8a0b902bc6 Fix e2e tests failure 2019-10-17 17:02:47 -07:00
db9f5b0e43 Merge pull request #11271 from wenjiaswe/automated-cherry-pick-of-#11261-upstream-release-3.2
cherry pick "etcd_cluster_version" metric" (#10257, #11233, #11254, #11265) to release-3.2
2019-10-17 14:17:44 -07:00
fc5f94144b Add cluster version fix #11233, #11254, #11265 2019-10-17 12:39:47 -07:00
a038dc6464 tests/e2e: test cluster version
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-10-17 12:39:47 -07:00
5454bb9982 etcdserver/*: add "etcd_cluster_version" metric
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-10-17 12:39:47 -07:00
cc959b4e5a Merge pull request #11195 from andyliuliming/release-3.2-cherry
etcdserver: cherry-pick skip client SAN verification option for 3.2 version.
2019-10-09 09:42:22 -07:00
c552f11bfe helper document update. 2019-10-09 13:32:13 +08:00
ab1f4fec07 etcdserver: add unit test. 2019-10-03 16:03:47 +08:00
e16076a8e0 etcdserver: cherry-pick skip client SAN verification option for 3.2 version.
Co-authored-by: Martin Weindel <martin.weindel@sap.com>
Co-authored-by: Jingyi Hu <jingyih@google.com>
Co-authored-by: Liming Liu <andyliuliming@outlook.com>
2019-10-03 10:15:09 +08:00
bdd97d5ffa version: 3.2.27
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-09-17 13:32:44 -07:00
b7fab558a1 Merge pull request #11157 from gyuho/release-3.2-patch
etcdctl/ctlv3: do not modify db file on "restore"
2019-09-17 11:10:58 -07:00
c7e4e50879 scripts: remove "ACI" builds
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-09-16 22:53:44 -07:00
4e375497ea etcdctl/ctlv3: do not modify db file on "restore" 2019-09-16 22:43:25 -07:00
c074e5c12b Merge pull request #11135 from jingyih/automated-cherry-pick-of-#11126-origin-release-3.2
Automated cherry pick of #11126 on release-3.2
2019-09-07 00:03:52 -07:00
948c284235 mvcc: add store revision metrics
Add experimental metrics etcd_debugging_mvcc_current_revision and
etcd_debugging_mvcc_compact_revision.
2019-09-06 17:22:45 -07:00
85015077a4 Merge pull request #10794 from jingyih/automated-cherry-pick-of-#10788-origin-release-3.2
Automated cherry pick of #10788 on release-3.2
2019-06-05 14:40:04 -07:00
6dd1e913a1 ctlv3: add missing newline in EndpointHealth
To make the output consistent with the output before #9540.
2019-06-05 14:37:59 -07:00
4a1ffd836a Merge pull request #10783 from jingyih/cherrypick_9540_to_release3p2
ctlv3: cherry pick of #9540 to release 3.2
2019-06-04 09:52:32 -07:00
64d53adb1f ctlv3: support "write-out" for "endpoint health" command
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2019-06-03 17:30:31 -07:00
29d42fa1d2 Documentation metadata for 3.2 branch (#10695)
* Add documentation metadata and remove v2 docs

Signed-off-by: lucperkins <lucperkins@gmail.com>

* Add upgrade _index.md

Signed-off-by: lucperkins <lucperkins@gmail.com>
2019-04-30 14:02:44 -07:00
42f7d25d8c Merge pull request #10657 from jpbetz/automated-cherry-pick-of-#10646-release-3.2
Automated cherry pick of #10646
2019-04-18 14:10:05 -07:00
bcbe6dbc29 mvcc: fix db_compaction_total_duration_milliseconds 2019-04-17 16:32:46 -07:00
e06761ed79 Merge pull request #10452 from jpbetz/automated-cherry-pick-of-#10443-origin-release-3.2
Automated cherry pick of #10443 to release 3.2
2019-02-06 09:59:10 -08:00
1d80d5f497 etcdctl: fix strings.HasPrefix args order
Signed-off-by: Iskander Sharipov <quasilyte@gmail.com>
2019-02-05 13:17:54 -08:00
06cec40911 version: 3.2.26
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-01-11 10:04:58 -08:00
ab4693d97f Merge pull request #10386 from hexfusion/release-3.2
[Cherry-pick 3.2] auth: disable CommonName auth for gRPC-gateway
2019-01-11 10:01:12 -08:00
a2b420c364 auth: disable CommonName auth for gRPC-gateway
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2019-01-08 21:09:07 +00:00
dfd8fe97c5 Merge pull request #10334 from gyuho/patch-grpc-proxy
[Cherry-pick 3.2] grpcproxy: fix memory leak
2018-12-17 20:35:23 -08:00
ada4af3b2a grpcproxy: fix memory leak
use set instead of slice as interval value

fixes #10326
2018-12-17 18:58:04 -08:00
2e27fef277 version: bump up to 3.2.25+git 2018-10-10 11:13:42 -07:00
182de1a9e1 version: bump up to 3.2.25 2018-10-10 10:53:11 -07:00
6e15f11fd9 etcdserver: add "etcd_server_read_indexes_failed_total"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-09 18:21:41 -07:00
b6d11019e0 rafthttp: probe all raft transports
This PR adds another probing routine to monitor the connection
for Raft message transports. Previously, we only monitored
snapshot transports.

In our production cluster, we found one TCP connection had >8-sec
latencies to a remote peer, but the "etcd_network_peer_round_trip_time_seconds"
metric showed a <1-sec latency distribution, which means the etcd server
was not sampling enough while such latency spikes happened
outside of the snapshot pipeline connection.

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-09 18:17:16 -07:00
86fdbdc7f9 etcdserver: add "etcd_server_health_success/failures"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-09 18:04:40 -07:00
afa5beda46 Merge pull request #10162 from jingyih/automated-cherry-pick-of-#10153-origin-release-3.2
clientv3: automated cherry pick of #10153 to release-3.2
2018-10-08 18:38:02 -07:00
26cce20022 clientv3: concurrency.Mutex.Lock() - preserve invariant
Convenient invariant:
- if werr == nil then the lock is supposed to be held at the moment.

While we could not be confident in the stronger invariant ('is exactly locked'),
it was inconvenient that the previous code could return `werr == nil` after
Mutex.Unlock.

It could happen when ctx was canceled/timed out exactly after waitDeletes
successfully returned werr == nil and before `<-ctx.Done()` was checked.
While such a situation is very rare, it is still possible.

fixes #10111
2018-10-08 16:46:22 -07:00
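A condensed sketch of the invariant fix, using the identifiers from clientv3/concurrency (control flow simplified): the return value is keyed off werr alone, so a nil error can no longer be returned after the lock key was released.

```
// Wait for deletion of lock-key revisions prior to ours.
hdr, werr := waitDeletes(ctx, client, m.pfx, m.myRev-1)
if werr != nil {
	// Wait failed (e.g. ctx canceled): release our lock key and
	// return the non-nil error, so callers never see nil-after-Unlock.
	m.Unlock(client.Ctx())
} else {
	m.hdr = hdr // success: the lock is held at this point
}
return werr
```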
a4a8d0752e Merge pull request #10123 from jingyih/cherry-pick-of-#10109-origin-release-3.2
etcdctl: cherry pick of #10109 to release-3.2
2018-09-25 19:55:07 -07:00
affd468424 etcdctl: cherry pick of #10109 to release-3.2
Add snapshot file integrity verification when querying snapshot status.
2018-09-25 17:07:44 -07:00
9452e5c1e5 Merge pull request #10042 from wenjiaswe/automated-cherry-pick-of-#9997-upstream-release-3.2
Automated cherry pick of #9997
2018-09-04 12:54:58 -07:00
e2dfe0f5d9 etcdserver/api/rafthttp: add v3 snapshot send/receive metrics
Distribution would be:
0.1 second or more
...
25.6 seconds or more
51.2 seconds or more

etcd_network_snapshot_send_success
etcd_network_snapshot_send_failures
etcd_network_snapshot_send_total_duration_seconds
etcd_network_snapshot_receive_success
etcd_network_snapshot_receive_failures
etcd_network_snapshot_receive_total_duration_seconds

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-08-29 14:51:31 -07:00
9d7242e271 etcdserver: add "etcd_server_id"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-08-29 14:49:26 -07:00
95afd1fb24 etcdserver: clarify read index wait timeout warnings
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-08-29 14:38:43 -07:00
7f337ef13a rafthttp: clarify "became inactive" warning
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-08-29 14:33:46 -07:00
8f1d366c0e etcdserver/api/snap: add v3 snapshot fsync metrics
etcd_snap_db_fsync_duration_seconds_count
etcd_snap_db_save_total_duration_seconds_bucket

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-08-28 14:03:12 -07:00
b3fa36eb7f Merge pull request #10032 from gyuho/init-metrics-3.2
etcdserver/api/v3rpc: display all registered gRPC metrics at start (v3.2)
2018-08-24 18:52:34 -07:00
4928558bc9 etcdserver/api/v3rpc: display all registered gRPC metrics at start
Previously, only the metrics that had been requested at least once were displayed.
Now it shows all metrics, as we do in v3.3 and v3.4+.

grpc_server_started_total{grpc_method="Alarm",grpc_service="etcdserverpb.Maintenance",grpc_type="unary"} 0
grpc_server_started_total{grpc_method="AuthDisable",grpc_service="etcdserverpb.Auth",grpc_type="unary"} 0
grpc_server_started_total{grpc_method="AuthEnable",grpc_service="etcdserverpb.Auth",grpc_type="unary"} 0
grpc_server_started_total{grpc_method="Authenticate",grpc_service="etcdserverpb.Auth",grpc_type="unary"} 0
grpc_server_started_total{grpc_method="Compact",grpc_service="etcdserverpb.KV",grpc_type="unary"} 0
grpc_server_started_total{grpc_method="Defragment",grpc_service="etcdserverpb.Maintenance",grpc_type="unary"} 0
grpc_server_started_total{grpc_method="DeleteRange",grpc_service="etcdserverpb.KV",grpc_type="unary"} 0

Should help document metrics.

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-08-22 19:12:58 -07:00
73b1a2b8db Merge pull request #10025 from jingyih/automated-cherry-pick-of-#9990-origin-release-3.2-1534373481
etcdserver: cherry pick of #9990 to release-3.2
2018-08-20 12:48:14 -07:00
ae0f433761 etcdserver: add grpc interceptor to log info on incoming request to etcdserver.

To improve debuggability of etcd v3. Added a grpc interceptor to log
info on incoming requests to etcd server. The log output includes remote
client info, request content (with value field redacted), request
handling latency, response size, etc.

Dependency on zap logger and grpc_middleware is removed during
backporting.

Added a check in the logging interceptor: if debug level is disabled, skip
logUnaryRequestStats() to avoid potential performance degradation. (PR #10021)
2018-08-17 17:06:13 -07:00
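A minimal sketch of such a unary logging interceptor (logger and field set simplified; per the message above, the backported version also redacts value fields and skips the stats path when debug logging is disabled):

```
func newLogUnaryInterceptor() grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo,
		handler grpc.UnaryHandler) (interface{}, error) {
		start := time.Now()
		resp, err := handler(ctx, req)
		// Remote client info and request content (values redacted)
		// would also be logged here.
		log.Printf("%s took %v (err=%v)", info.FullMethod, time.Since(start), err)
		return resp, err
	}
}
```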
5a3cbe4cf7 version: bump up to 3.2.24+git 2018-07-24 10:29:31 -07:00
420a452267 version: bump up to 3.2.24 2018-07-24 10:24:31 -07:00
348edfeae6 etcdserver: add "etcd_server_go_version" metric
Currently, one has to look at server logs manually,
to see what Go version was used to build etcd server.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-23 16:38:52 -07:00
0d5497a107 clientv3: fix keepalive send interval when response queue is full
The client should update the next keepalive send time
even when the lease keepalive response queue becomes full.

Otherwise, the client sends a keepalive request every 500ms
regardless of TTL, when the send is only expected to happen
at an interval of TTL / 3 at minimum.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-23 08:50:44 -07:00
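A condensed sketch of the fix, with field names following clientv3's keepalive bookkeeping: the next send time is computed from the response TTL and applied even when a receiver's channel is full.

```
// On each keepalive response, schedule the next send at TTL/3.
nextKeepAlive := time.Now().Add(time.Duration(karesp.TTL) * time.Second / 3)
for _, ch := range ka.chs {
	select {
	case ch <- karesp:
	default: // receiver is slow; drop this response...
	}
	// ...but still advance the send time, so a full queue no longer
	// forces a retry every 500ms regardless of TTL.
	ka.nextKeepAlive = nextKeepAlive
}
```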
87418c3432 Merge pull request #9942 from wenjiaswe/automated-cherry-pick-of-#9761-upstream-release-3.2
Automated cherry pick of #9761
2018-07-20 14:26:13 -07:00
8c9fd1b5e6 remove hashRevDurations 2018-07-20 13:48:35 -07:00
a3c0a99067 remove hashRevDurations 2018-07-20 13:45:33 -07:00
b3ab14ca9a remove HashByRev 2018-07-20 13:44:15 -07:00
8798c5cd43 etcdserver: rename to "heartbeat_send_failures_total"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-20 09:58:32 -07:00
4e08898571 mvcc: add "etcd_mvcc_hash_(rev)_duration_seconds"
etcd_mvcc_hash_duration_seconds
etcd_mvcc_hash_rev_duration_seconds

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-20 09:57:47 -07:00
8ac6c888cd mvcc/backend: fix defrag duration scale
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-20 09:52:46 -07:00
aca5c8f4b6 mvcc/backend: add "etcd_disk_backend_defrag_duration_seconds"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-20 09:52:46 -07:00
3535f7a61f mvcc/backend: document metrics ExponentialBuckets
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-20 09:44:15 -07:00
fae9b6f667 mvcc/backend: clean up mutex, logging
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-20 09:44:15 -07:00
66d8194e4d etcdserver: add "etcd_server_slow_apply_total"
{"level":"warn","ts":1527101858.6985068,"caller":"etcdserver/util.go:115","msg":"apply request took too long","took":0.114101529,"expected-duration":0.1,"prefix":"","request":"header:<ID:1029181977902852337> put:<key:\"\\000\\000...

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-20 09:42:52 -07:00
2f0e3fd2df etcdserver: add "etcd_server_heartbeat_failures_total"
{"level":"warn","ts":1527101858.4149103,"caller":"etcdserver/raft.go:370","msg":"failed to send out heartbeat; took too long, server is overloaded likely from slow disk","heartbeat-interval":0.1,"expected-duration":0.2,"exceeded-duration":0.025771662}
{"level":"warn","ts":1527101858.4149644,"caller":"etcdserver/raft.go:370","msg":"failed to send out heartbeat; took too long, server is overloaded likely from slow disk","heartbeat-interval":0.1,"expected-duration":0.2,"exceeded-duration":0.034015766}

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-20 09:37:04 -07:00
cad3cf7b11 mvcc/backend: avoid unnecessary metrics update
https://github.com/coreos/etcd/pull/9300

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 14:52:16 -07:00
bedba66c69 mvcc: add "etcd_mvcc_db_total_size_in_use_in_bytes"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 14:32:56 -07:00
9bc1e15386 mvcc: add "etcd_mvcc_db_total_size_in_bytes"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 14:24:56 -07:00
6e0131e83b etcdserver: add "etcd_server_quota_backend_bytes"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 13:27:15 -07:00
c0e9e14248 etcdserver: add "etcd_server_slow_read_indexes_total"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 12:59:53 -07:00
b763b506ab etcdserver: clarify read index warnings
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 12:54:42 -07:00
d22ee8423d Merge pull request #9894 from xmudrii/3.2-grpcproxy-tls
etcdmain: backport support for different certs for etcd-gRPC proxy
2018-07-02 10:57:39 -07:00
e5531a4d54 etcdmain/grpc-proxy: add 'metrics-addr' option
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2018-07-02 12:06:25 +02:00
8dabfe12ca etcdmain: cleanup grpcproxy; support different certs for proxy/etcd
Enables TLS termination in grpcproxy.
2018-07-02 11:20:14 +02:00
360484a3f0 tests: update test scripts
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-18 14:14:15 -07:00
f8af50a8d8 version: bump up to 3.2.23+git 2018-06-15 09:45:59 -07:00
c9504f61fc version: bump up to 3.2.23 2018-06-15 09:40:41 -07:00
75c159baa8 clientv3: backoff on reestablishing watches when Unavailable errors are encountered 2018-06-14 10:52:52 -07:00
41ece2cf2d e2e: do not test cipher suite in release-3.2
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-13 16:04:21 -07:00
5e6adfac06 Merge pull request #9845 from wenjiaswe/automated-cherry-pick-of-#8960-upstream-release-3.2
Automated cherry pick of #8960
2018-06-13 16:02:48 -07:00
b163084a5f metrics: Add server_version metric 2018-06-13 15:03:10 -07:00
ad7db2bb1e tests/semaphore.test.bash: update
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-13 14:41:17 -07:00
b5dc2266a6 Makefile: update
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-13 14:40:42 -07:00
ba233791e1 Merge pull request #9821 from jpbetz/automated-cherry-pick-of-#9288-origin-release-3.2
Automated cherry pick of detailed "took too long" warnings to release-3.2
2018-06-12 12:51:18 -07:00
0ce2ef14a1 etcdserver: Fix txn request 'took too long' warnings to use loggable request stringer 2018-06-12 12:31:38 -07:00
4db8b94cca etcdserver: Add response byte size and range response count to took too long warning 2018-06-11 16:23:31 -07:00
734e4cf8e6 etcdserver: Replace value contents with value_size in request took too long warning 2018-06-11 15:58:03 -07:00
dcf30b1c54 etcdserver: do not print password in the warning message of expensive request
Fix https://github.com/coreos/etcd/issues/9635
2018-06-11 15:50:55 -07:00
065053d859 etcdserver: Fix to backport of #9288 for pre-RequestV2 code 2018-06-07 11:02:00 -07:00
1935a663df etcdserver: improve request took too long warning 2018-06-07 10:29:29 -07:00
2c7eb87c85 version: bump up to 3.2.22+git 2018-06-06 10:48:55 -07:00
1674e682fe version: 3.2.22
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 19:53:43 -07:00
7c47afd7d2 e2e: test client-side cipher suites with curl
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 19:53:43 -07:00
3e0cc1e717 etcdmain: add "--cipher-suites" flag
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 19:53:43 -07:00
6fa95eb497 embed: support custom cipher suites
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 19:53:43 -07:00
ba4a7e004b integration: test client-side TLS cipher suites
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 19:53:43 -07:00
4bd81d0933 pkg/transport: add "TLSInfo.CipherSuites" field
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 19:53:43 -07:00
f690f3a425 pkg/tlsutil: add "GetCipherSuite"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 19:53:40 -07:00
af6f459a23 version: bump up to 3.2.21+git 2018-05-31 12:50:14 -07:00
3ac81f3ae2 version: bump up to 3.2.21 2018-05-31 12:43:49 -07:00
4ace7c7d77 mvcc: fix panic by allowing future revision watcher from restore operation
This also happens without gRPC proxy.

Fix panic when gRPC proxy leader watcher is restored:

```
go test -v -tags cluster_proxy -cpu 4 -race -run TestV3WatchRestoreSnapshotUnsync

=== RUN   TestV3WatchRestoreSnapshotUnsync
panic: watcher minimum revision 9223372036854775805 should not exceed current revision 16

goroutine 156 [running]:
github.com/coreos/etcd/mvcc.(*watcherGroup).chooseAll(0xc4202b8720, 0x10, 0xffffffffffffffff, 0x1)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watcher_group.go:242 +0x3b5
github.com/coreos/etcd/mvcc.(*watcherGroup).choose(0xc4202b8720, 0x200, 0x10, 0xffffffffffffffff, 0xc420253378, 0xc420253378)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watcher_group.go:225 +0x289
github.com/coreos/etcd/mvcc.(*watchableStore).syncWatchers(0xc4202b86e0, 0x0)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watchable_store.go:340 +0x237
github.com/coreos/etcd/mvcc.(*watchableStore).syncWatchersLoop(0xc4202b86e0)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watchable_store.go:214 +0x280
created by github.com/coreos/etcd/mvcc.newWatchableStore
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watchable_store.go:90 +0x477
exit status 2
FAIL	github.com/coreos/etcd/integration	2.551s
```

gRPC proxy spawns a watcher with a key "proxy-namespace__lostleader"
and watch revision "int64(math.MaxInt64 - 2)" to detect leader loss.
But when the partitioned node restores, this watcher triggers a
panic with "watcher minimum revision ... should not exceed current ...".

This check was added a long time ago, by my PR, when there was no gRPC proxy:

https://github.com/coreos/etcd/pull/4043#discussion_r48457145

> we can remove this checking actually. it is impossible for a unsynced watching to have a future rev. or we should just panic here.

However, it is now possible for an unsynced watcher to have a future
revision, when it was moved from a synced watcher group through a
restore operation.

This PR adds a "restore" flag to indicate that a watcher was moved
from the synced watcher group by a restore operation. Otherwise, a
watcher with a future revision in an unsynced watcher group
would still panic.

Example logs with future revision watcher from restore operation:

```
{"level":"info","ts":1527196358.9057755,"caller":"mvcc/watcher_group.go:261","msg":"choosing future revision watcher from restore operation","watch-key":"proxy-namespace__lostleader","watch-revision":9223372036854775805,"current-revision":16}
{"level":"info","ts":1527196358.910349,"caller":"mvcc/watcher_group.go:261","msg":"choosing future revision watcher from restore operation","watch-key":"proxy-namespace__lostleader","watch-revision":9223372036854775805,"current-revision":16}
```

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-31 11:42:50 -07:00
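A condensed sketch of the guard, as described by the message above (from mvcc's watcher-group selection; simplified): a future-revision watcher is tolerated only when it carries the restore flag.

```
if w.minRev > curRev {
	// Only legal if the watcher was moved out of the synced group
	// by a restore operation; anything else is still a bug.
	if !w.restore {
		panic(fmt.Errorf("watcher minimum revision %d should not exceed current revision %d",
			w.minRev, curRev))
	}
	w.restore = false // chosen once; clear the flag
}
```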
a09874b40c auth: Fix simpleToken to respect disabled state for assign 2018-05-23 15:45:34 -07:00
a5437f246b version: bump up to 3.2.20+git 2018-05-09 10:19:54 -07:00
f272557516 version: bump up to 3.2.20 2018-05-09 10:03:42 -07:00
71eba353d2 Merge pull request #9694 from mohitsoni/release-3.2
Cherry-picking PR 7967 to release-3.2
2018-05-04 12:16:09 -07:00
557eee826f etcdserver: purge old snap.db files
Lots of garbage db files in #7957. Should purge.
2018-05-04 10:51:59 -07:00
b71df1f814 version: bump up to 3.2.19+git 2018-04-24 14:25:10 -07:00
8a9b3d5385 version: bump up to 3.2.19 2018-04-24 14:15:36 -07:00
4e7af272b5 etcdmain: fix "InitialElectionTickAdvance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 11:09:46 -07:00
8ba6bf466f etcdserver: log skipping initial election tick
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:58:11 -07:00
d549256dd9 etcdmain: add "--initial-election-tick-advance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:57:47 -07:00
40aee7bdf8 embed: add "InitialElectionTickAdvance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:56:03 -07:00
7de2064559 integration: set InitialElectionTickAdvance to true by default
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:53:49 -07:00
0d2fe21d8e etcdserver: add "InitialElectionTickAdvance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:53:33 -07:00
d45053c068 etcdserver: add is_leader prometheus metric that is 1 on the leader.
Before this change, we had no way to find the leader using the /metrics
endpoint. This commit adds a metric to do that.
2018-04-19 14:59:53 -07:00
dfcdaa5cc9 integration: fix peer TLS tests
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-17 15:15:02 -07:00
f9d58d2c9f integration: re-overwrite "httptest.Server" TLS.Certificates
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-17 06:16:52 -07:00
7f1225a128 pkg/transport: don't set certificates on tls config 2018-04-17 06:16:15 -07:00
21e7a30d31 functional/tester: remove Txn stresser in 3.2
Nested Txn is not supported

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 19:42:33 -07:00
4e11cea8cb functional: disable auto TLS
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 19:13:50 -07:00
8f59849ca2 vendor: add "gogo/protobuf/gogoproto"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 19:00:01 -07:00
3b770ee8b4 test: set up gopath in 3.2
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 18:26:15 -07:00
71a5f77032 functional: create symlinks for build
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 16:05:13 -07:00
14ce0ea9ba travis: run "build" tests for "functional"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 15:57:15 -07:00
7b1d09023b functional/rpcpb: remove "InsecureSkipVerify"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 15:55:20 -07:00
6efde070b8 functional: disable TLS in release-3.2
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 15:32:14 -07:00
244b3b3d3c snapshot: remove tests
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 15:23:35 -07:00
5bc5c49193 functional: initial commit (copied from master)
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 13:20:06 -07:00
2a42b47400 snapshot: initial commit (for functional tests)
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 13:20:06 -07:00
df90e3ce21 test: simplify
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 11:08:01 -07:00
1dfce6b565 etcdserver/stats: make all fields guarded by mutex. 2018-04-11 19:49:31 -07:00
67a97c9f1a etcdserver/stats: fix stats data race. 2018-04-11 19:49:31 -07:00
7f1d94d5e2 test: remove build flag "-a"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-11 10:17:13 -07:00
a43ae13106 cmd/vendor: add "go.uber.org/zap"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:48:34 -07:00
748d2204a2 pkg/proxy: initial commit (for functional tests)
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:47:47 -07:00
9dea2f7f1f tools: remove
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:46:52 -07:00
c3c88f49bd Makefile: sync with master
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:34:50 -07:00
9e88e0c017 tests/*: clean up travis, semaphore scripts
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:32:41 -07:00
487c8d3d61 etcdserver: fix "lease_expired_total" metrics
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 17:59:07 -07:00
a5dc3b7cb1 tests: move test scripts
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-09 11:34:44 -07:00
71e96522dc version: bump up to 3.2.18+git 2018-03-29 10:57:58 -07:00
eddf599c68 version: bump up to 3.2.18 2018-03-29 10:45:17 -07:00
a00b652460 semaphore: run release tests with v3.2.17
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-29 09:23:55 -07:00
a089a747b5 e2e: remove "/v3beta" endpoints
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-28 12:12:44 -07:00
dd64080eac e2e: remove "authHeader"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-28 11:38:45 -07:00
3f8213a7af e2e: remove some tests from master branch
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-28 11:34:55 -07:00
2160c476a2 clientv3: skip "TestDialTimeout"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-28 11:28:14 -07:00
29185da0e0 Merge pull request #9502 from jpbetz/automated-cherry-pick-of-#9415-release-3.2
Automated cherry pick of #9415
2018-03-28 11:21:36 -07:00
4acfe50869 Merge pull request #9503 from jpbetz/automated-cherry-pick-of-#9437-release-3.2
Automated cherry pick of #9437
2018-03-28 11:04:17 -07:00
e6c5cdf935 etcdserver: adjust election ticks on restart
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-28 10:58:09 -07:00
0a4560319f rafthttp: add missing "peer_sent_failures_total" metrics call
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-28 10:09:50 -07:00
6d7f592c38 etcdserver: make "advanceTicks" method
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-28 10:06:57 -07:00
431fd391da rafthttp: add "ActivePeers" to "Transport"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-28 10:06:28 -07:00
39ea00bc92 Documentation/upgrades: backport all upgrade guides
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-27 10:31:57 -07:00
6f848b3fd3 version: bump up to 3.2.17+git 2018-03-08 14:16:24 -08:00
28c47bb2f8 version: bump up to 3.2.17 2018-03-08 13:45:31 -08:00
ea0fda66eb clientv3/integration: test "rpctypes.ErrLeaseTTLTooLarge"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 10:38:18 -08:00
6e5e3d134e *: enforce max lease TTL with 9,000,000,000 seconds
math.MaxInt64 / time.Second is 9,223,372,036. 9,000,000,000 is easier to
remember/document.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 10:38:07 -08:00
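A minimal sketch of the cap, reusing the error name exercised by the test above (request plumbing elided):

```
// MaxLeaseTTL is 9,000,000,000 seconds: comfortably below
// math.MaxInt64/int64(time.Second) (9,223,372,036) and easy to document.
const MaxLeaseTTL = 9000000000

func checkLeaseTTL(ttlSeconds int64) error {
	if ttlSeconds > MaxLeaseTTL {
		return rpctypes.ErrLeaseTTLTooLarge
	}
	return nil
}
```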
58f9080f60 Merge pull request #9404 from jpbetz/automated-cherry-pick-of-#9379-origin-release-3.2
Automated cherry pick of #9379
2018-03-08 08:21:35 -08:00
f8fc817ce8 *: remove unused env vars
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 01:36:21 -08:00
3bb8edc6aa e2e: fix missing "apiPrefix"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 01:08:26 -08:00
4e7b9d223d e2e: add "Election" grpc-gateway test cases
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 01:07:19 -08:00
fe90bc448c Merge pull request #9405 from jpbetz/automated-cherry-pick-of-#9347-origin-release-3.2
Automated cherry pick of #9347
2018-03-08 00:59:26 -08:00
3710c249eb Merge pull request #9403 from jpbetz/automated-cherry-pick-of-#9336-origin-release-3.2
Automated cherry pick of #9336
2018-03-08 00:58:52 -08:00
cbea4efaf2 etcdserver: enable "CheckQuorum" when starting with "ForceNewCluster"
We enable "raft.Config.CheckQuorum" by default in other
Raft initial starts. So should start with "ForceNewCluster".

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 23:12:32 -08:00
273a43d4d8 api/v3election: error on missing "leader" field
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 23:08:28 -08:00
ceaa55e57e httpproxy: cancel requests when client closes a connection 2018-03-07 23:01:10 -08:00
a537163e9e hack/scripts-dev: fix indentation in run.sh
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 14:32:08 -08:00
660f7fd8a0 hack/scripts-dev: sync with master
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 14:22:18 -08:00
e48a18256f travis: use Go 1.8.7, sync with master
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 14:21:02 -08:00
a61ba42918 Documentation/op-guide: highlight defrag operation "--endpoints" flag
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-05 11:13:54 -08:00
83c94e9e58 etcdctl: highlight "defrag" command caveats
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-05 11:12:41 -08:00
55e008f64b Documentation: make "Consul" section more objective
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-02 10:42:13 -08:00
a4827447be travis: update Go version 2018-02-27 11:28:37 -08:00
41830ca523 semaphore: update Go, release test version 2018-02-27 11:27:47 -08:00
4a620e2013 version: 3.2.16+git 2018-02-12 14:29:08 -08:00
121edf0467 version: 3.2.16 2018-02-12 09:43:33 -08:00
b5abfe1858 Merge pull request #9297 from jpbetz/automated-cherry-pick-of-#9281-origin-release-3.2
Automated cherry pick of #9281
2018-02-07 22:32:59 -08:00
33633da64c mvcc: fix watchable store test for 3.2 cherrypick of #9281 2018-02-07 15:57:34 -08:00
e08abbeae4 mvcc: restore unsynced watchers
In case syncWatchersLoop() starts before Restore() is called,
watchers already added by that moment are moved to s.synced by the loop.
However, there is broken logic that moves watchers from s.synced
to s.unsynced without setting keyWatchers of the watcherGroup.
Eventually syncWatchers() fails to pick up those watchers from s.unsynced
and no events are sent to the watchers, because newWatcherBatch() called
in the function uses wg.watcherSetByKey() internally, which requires
a proper keyWatchers value.
2018-02-07 15:34:21 -08:00
bdc3ed1970 version: 3.2.15+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-23 14:04:11 -08:00
1b3ac99e8a version: 3.2.15
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-22 11:31:16 -08:00
fd4595aa04 clientv3/integration: add TestMemberAddUpdateWrongURLs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-22 11:31:01 -08:00
e5f63b64c3 clientv3: prevent no-scheme URLs to cluster APIs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-22 11:27:04 -08:00
68d27b2d84 etcdserver/api/v3rpc: debug-log client disconnect on TLS, http/2 stream CANCEL
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-19 12:50:06 -08:00
7c4274be05 version: 3.2.14+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-11 14:15:36 -08:00
fb5cd6f1c7 version: 3.2.14
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-11 11:19:37 -08:00
6999bbb47b mvcc: check nil before setting FillPercent to avoid panic
Since CreateBucketIfNotExists() can return nil when it gets an error,
accessing FillPercent must be done after a nil check, so as not to cause
a panic.
2018-01-08 17:46:06 -08:00
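A sketch of the guarded call into the bolt API (bucket name illustrative): CreateBucketIfNotExists returns a nil bucket together with an error, so FillPercent may only be touched on the success path.

```
b, err := tx.CreateBucketIfNotExists([]byte("key"))
if err != nil {
	return fmt.Errorf("cannot create bucket: %v", err)
}
// Safe only after the error check above; b would be nil on failure.
b.FillPercent = 0.9
```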
df4036ab73 etcdserver/api/v3rpc: debug user cancellation and log warning for rest
A context error with the cancel code typically indicates user cancellation, which
should be logged at debug level. For other error codes we should display a warning.

Fixes #9085
2018-01-08 17:46:01 -08:00
848590e99e version: bump up to 3.2.13+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 14:32:36 -08:00
95a726a27e version: bump up to 3.2.13
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 13:29:56 -08:00
288ef7d6fc embed: fix gRPC server panic on GracefulStop
Cherry-pick https://github.com/coreos/etcd/pull/8987.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 13:29:40 -08:00
7b7722ed97 integration: test GracefulStop on secure embedded server
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 12:44:28 -08:00
8a358f832a clientv3/integration: fix TestKVLargeRequests with -tags cluster_proxy
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:12:44 -08:00
2a63909648 tools/functional-tester: remove duplicate grpclog set
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:12:38 -08:00
7fb1fafe0c etcdserver/api/v3rpc: set grpclog once
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>

Conflicts:
	etcdserver/api/v3rpc/grpc.go
2018-01-02 11:12:33 -08:00
7025d7c665 etcdserver,embed: discard gRPC info logs when debug is off
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>

Conflicts:
	embed/etcd.go
	etcdserver/api/v3rpc/grpc.go
	etcdserver/config.go
2018-01-02 11:12:26 -08:00
4ab213a4ec etcdserver/api/v3rpc: log stream error with debug level
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:11:38 -08:00
bb27a63e64 version: bump up to 3.2.12+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:11:25 -08:00
b19dae0065 version: bump up to 3.2.12
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 12:49:55 -08:00
c8915bdb04 integration: bump up wait leader timeout for slow CIs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 12:47:58 -08:00
b6896aa951 clientv3/integration: fix TestKVPutError
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 12:47:49 -08:00
452ccd693d clientv3/integration: test large KV requests
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 12:47:30 -08:00
348b25f3dc clientv3: call other APIs with default gRPC call options
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 12:41:57 -08:00
c67e6d5f5e clientv3: call KV/Txn APIs with default gRPC call options
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 12:39:11 -08:00
e82f0557ac clientv3: configure gRPC message limits in Config
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 12:22:48 -08:00
4cebdd274c integration: remove typo in "TestV3LargeRequests"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:10:45 -08:00
0363c4b1ef integration: test large request response back from server
Address https://github.com/coreos/etcd/issues/9043.
Won't fix it, but we need test coverage on responses back
from the server as well.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:10:15 -08:00
5579dc200d test: bump up clientv3/integration timeout
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:09:47 -08:00
3fd6e7e1de vendor: pin grpc v1.7.5, grpc-gateway v1.3.0 (no code change)
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-19 13:04:45 -08:00
1fa227da71 integration: add "TestV3AuthWithLeaseRevokeWithRoot"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-15 09:17:30 -08:00
47f6d32e3e Merge pull request #9013 from gyuho/automated-cherry-pick-of-#8999-origin-release-3.2
Automated cherry pick of #8999
2017-12-14 09:40:21 -08:00
0265457183 compactor: fix error message of Revision compactor
Reorder the parameters so that Noticef can output the error properly.
2017-12-14 08:39:22 -08:00
04ec94f8d1 semaphore: run upgrade tests against v3.2.11
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 14:36:30 -08:00
ed4d70888c semaphore.sh: do not fail on "Too many goroutines"
To not fail on "pkg/testutil" unit tests.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-05 17:03:32 -08:00
b9aa507f66 hack: sync with master branch (needed for release)
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-05 11:11:58 -08:00
4ed57689cb gitignore: sync with master branch
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-05 11:09:26 -08:00
a2850218b2 scripts/build-docker: build both gcr.io and quay.io images
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-05 11:08:34 -08:00
fc25300cf0 version: bump up to v3.2.11+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-05 11:07:43 -08:00
1e1dbb2392 version: bump up to v3.2.11
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-04 14:24:14 -08:00
ff1f08c93f vendor: upgrade grpc/grpc-go to v1.7.4
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-04 14:23:43 -08:00
78fb932156 Merge pull request #8947 from gyuho/automated-cherry-pick-of-#8939-origin-release-3.2
Automated cherry pick of #8939
2017-12-04 09:35:30 -08:00
c142134a28 Documentation/op-guide: remove non-released flag in monitoring.md
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-04 09:27:45 -08:00
b44b91462e etcdmain: add more details to TLS HandshakeFailure
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 09:50:57 -08:00
5921b2c035 api/v3rpc: log grpc stream send/recv errors in server-side
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-30 11:21:48 -08:00
a19672befc version: bump up to v3.2.10+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-17 11:10:48 -08:00
694728c496 version: bump up to v3.2.10
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-16 13:20:19 -08:00
1557f8b534 Merge pull request #8813 from jpbetz/bbolt-3.2
vendor: Backport bbolt freelist corruption and fragmentation fixes to 3.2 branch
2017-11-16 13:15:04 -08:00
4b9bfa17ee test: Clean agent directories on disk before functional test runs, not after
This is primarily so CI tooling can capture the agent logs after the functional tester runs.
2017-11-16 12:44:30 -08:00
8de0c0419a vendor: Switch from boltdb v1.3.0 to coreos/bbolt v1.3.1-coreos.3 2017-11-16 12:43:17 -08:00
3039c639c0 Merge pull request #8867 from gyuho/clientv3-backport-to-release-3.2
clientv3: backport new balancer to release-3.2, upgrade gRPC to v1.7.3
2017-11-16 10:12:40 -08:00
91335d01bb proxy/grpcproxy: wait until register before Serve
It was fatal-ing with:

grpclog.Fatalf("grpc: Server.RegisterService after Server.Serve for %q", sd.ServiceName)

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-16 09:34:11 -08:00
a8c84ffc93 clientv3: fix client balancer with gRPC v1.7
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-16 09:05:06 -08:00
939337f450 *: add max requests bytes, keepalive to server, blackhole methods to integration
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-16 09:05:06 -08:00
2a6d50470d *: use grpclog.NewLoggerV2
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-16 09:05:06 -08:00
d62e39d5ca *: deprecate "metadata.NewContext"
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-16 09:05:06 -08:00
7f0f5e2b3c bill-of-materials: regenerate
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-16 09:05:06 -08:00
eb1589ad35 *: regenerate proto
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-16 09:05:06 -08:00
546d5fe835 scripts/genproto: update protobuf, grpc-gateway gen
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-16 09:05:06 -08:00
fddae84ce2 vendor: update grpc, grpc-gateway, protobuf
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-16 09:05:02 -08:00
6d406285e6 glide: update grpc, grpc-gateway
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-16 05:32:02 -08:00
8dc20ead31 Merge pull request #8886 from gyuho/qqq
release-3.2: Revert "embed: fix HTTPs + DNS SRV discovery"
2017-11-15 14:47:57 -08:00
d3a3c3154e Revert "embed: fix HTTPs + DNS SRV discovery"
This reverts commit f79d5aaca4.
2017-11-15 14:46:32 -08:00
d5572964e1 Merge pull request #8874 from gyuho/release-branch
release-3.2: fix unit test script, remove old tests, backport functional testing data dir commands
2017-11-15 14:41:41 -08:00
ea51c25030 clientv3: remove balancer tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-15 00:15:06 -08:00
d1447a8f5a test: fix unit tests, remove some unnecessary tests
Unit tests weren't running in CI.
Also removes some unnecessary tests (v2 client, Examples)
in the release branch.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-15 00:15:02 -08:00
c28c14a5f4 test: Clean agent directories on disk before functional test runs, not after
This is primarily so CI tooling can capture the agent logs after the functional tester runs.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-14 17:08:26 -08:00
f9eb75044a semaphore: prioritize timed-out test failures
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-14 17:04:31 -08:00
2250f71e23 semaphore: manually pin v3.2.9 for release upgrade tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-08 12:45:00 -08:00
52be1d7b19 hack/scripts-dev: add Makefile, Dockerfile-test
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-06 14:13:10 -08:00
712024d3e5 semaphore.sh: fail tests with "(--- FAIL:|leak)"
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-03 10:59:01 -07:00
7d99afdc7c test: fail tests with "--- FAIL:"
To differentiate from gRPC client log "TRANSIENT_FAILURE"

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-03 10:58:43 -07:00
5ceea41af4 travis: upgrade Go to 1.8.5, and use container
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-25 19:45:33 -07:00
2f74456443 semaphore.sh: add to release-3.2 branch
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-25 19:44:05 -07:00
fc87ae4202 version: bump up to v3.2.9+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-25 19:43:30 -07:00
f1d7dd87da version: bump up to v3.2.9
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-06 08:58:06 -07:00
ad212d339b Makefile: sync with master branch on test commands
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-06 08:57:42 -07:00
9f49665284 Documentation/op-guide: remove git merge line in monitoring.md
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-06 08:55:56 -07:00
78f8d6e185 embed: fix HTTPs + DNS SRV discovery 2017-10-05 16:03:22 -07:00
a954a0de53 Makefile: initial commit, update Dockerfile
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 10:31:24 -07:00
0c3defdd2b vendor: update 'golang.org/x/crypto'
To include 6c586e17d9.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 09:49:00 -07:00
814588d166 travis: use Go 1.8.4
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 09:19:29 -07:00
2d2932822c version: bump up to 3.2.8+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-10-05 09:18:48 -07:00
e211fb6de3 version: bump up to 3.2.8
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-26 02:41:18 +09:00
fb7e274309 Documentation/op-guide: remove grafana demo link
The dashboard was removed during the Tectonic migration
in AWS, while the Grafana instance still runs in GCP.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-26 02:40:59 +09:00
4a61fcf42d docs: remove link-breaking space 2017-09-20 08:11:02 +09:00
4c8fa30dda e2e: test no value is returned in TestCtlV3GetKeysOnly
The test was checking that the key name is returned, but was not correctly
checking that no value is returned.
2017-09-14 04:42:06 +09:00
01c4f35b30 grpcproxy: respect KeysOnly flag
Fixes #8478
2017-09-14 04:41:58 +09:00
15e9510d2c client: fail over to next endpoint on oneshot failure
Fixes #8515
2017-09-08 13:28:55 -07:00
09b7fd4975 version: bump up to 3.2.7+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-01 14:03:26 -07:00
bb66589f8c version: bump up to 3.2.7
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-09-01 09:15:15 -07:00
267a2fc8c9 integration: check concurrent auth ops don't cause old rev errors 2017-08-25 13:13:56 -07:00
1fc300ecbd testutil: don't panic on AssertNil on non-nil errors 2017-08-25 13:13:26 -07:00
877d0ce469 etcdserver: consolidate error checking for v3_server functions
Duplicated error checking code moved into raftRequest/raftRequestOnce.
2017-08-23 14:39:59 -07:00
2188513161 concurrency: retry snapshot serializable stm if writes since first header rev
Was checking the rset key mod rev, which does not work.
2017-08-22 20:53:47 -07:00
5c7cff66b6 integration: test serializable snapshot STM with old readset revisions
Was hanging.
2017-08-22 20:53:41 -07:00
8c99ab80bd version: bump up to 3.2.6+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-21 13:07:06 -07:00
9d43462d17 version: bump up to 3.2.6
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-21 10:40:55 -07:00
78d68226e6 mvcc: sending events after restore
Fixes: #8411
2017-08-21 10:39:46 -07:00
e9d576c3d6 Documentation: fix broken link on FAQ
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-18 08:43:14 -07:00
1c578cd442 e2e: test booting etcd with multiple peer listeners 2017-08-18 08:43:06 -07:00
b97714b3e6 embed: associate peer serve() listener with corresponding peer
Fixes #8383
2017-08-17 14:16:56 -07:00
ce0a61ff67 dl_build: fix minor typo 2017-08-17 11:44:02 -07:00
74783a38ae docs: revising to match sidebar structure. 2017-08-16 17:03:28 -07:00
d372ff96a0 docs: link fix. 2017-08-16 17:03:12 -07:00
8c7b9db9cc docs: slight rearranging of top two sections. 2017-08-16 17:02:05 -07:00
f1fb342305 docs: adding an index for upgrade pages. 2017-08-16 17:01:49 -07:00
fa0f278783 embed: add 'enable-pprof' tag for config file
Fix https://github.com/coreos/etcd/issues/8402.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-15 11:52:48 -07:00
e197c14847 mvcc: test keys gauge is reloaded correctly on restore 2017-08-10 12:59:24 -07:00
e7bf5477de mvcc: reset keys gauge on restore
Fixes #8388
2017-08-10 12:59:19 -07:00
f81b72fd93 version: bump up to v3.2.5+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-04 11:30:57 -07:00
d0d1a87aa9 version: bump up to v3.2.5
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-04 08:40:50 -07:00
7c6a9a7317 contrib/raftexample: use bytes.Buffer.String (no 'string()')
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-04 08:40:29 -07:00
be8f102efb grpcproxy: forward PrevKv flag in Put 2017-08-04 07:32:17 -07:00
3003901447 integration: test Put with PrevKey=true
Was missing in proxy.
2017-08-04 07:32:11 -07:00
157cfac31b ctlv3/command: remove double-quote typos in fields printer
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-08-01 17:25:53 -07:00
40a1704e6f ctlv3: exit non-zero on unhealthy ep command 2017-07-31 16:00:23 -07:00
30981ecb0a e2e/docker: docker image for testing wildcard DNS 2017-07-24 09:54:55 -07:00
f65a11ced5 fixtures: generate wildcard DNS SAN cert
DNS: *.etcd.local
2017-07-24 09:54:55 -07:00
db4838d4eb transport: use reverse lookup to match wildcard DNS SAN
Fixes #8268
2017-07-24 09:54:55 -07:00
8ab42fb045 *: move v2http handlers without /v2 prefix to etcdhttp
Lets --enable-v2=false configurations provide /metrics, /health, etc.

Fixes #8167
2017-07-24 09:54:48 -07:00
ff9a0a3527 version: bump up to 3.2.4+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-24 09:14:34 -07:00
c31bec0f29 version: bump up to 3.2.4
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-19 08:37:30 -07:00
19fe4b0cac grpcproxy: return nil on receiving snapshot EOF
Gets "code = OutOfRange desc = EOF" errors otherwise.
2017-07-19 08:33:44 -07:00
a5d94fe229 integration: test embed.Etcd.Close with watch
Ensure 'Close' returns in time when there are open
connections (watch streams).

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-14 18:52:20 -07:00
e8f3cbf1c6 embed: wait up to request timeout for pending RPCs when closing
Both grpc.Server.Stop and grpc.Server.GracefulStop close the listeners
first, to stop accepting new connections. GracefulStop blocks until
all clients close their open transports (connections). Unary RPCs
only take a few seconds to finish. Stream RPCs, like watch, might never
close the connections from the client side, thus making the gRPC server wait
forever.

This patch still calls GracefulStop, but waits up to 10s before manually
closing the open transports.

Address https://github.com/coreos/etcd/issues/8224.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-14 18:52:20 -07:00
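A condensed sketch of the pattern this patch describes: run GracefulStop in the background so pending unary RPCs can drain, then fall back to a hard Stop once the grace period lapses.

```
stopped := make(chan struct{})
go func() {
	defer close(stopped)
	grpcServer.GracefulStop() // may block forever on open watch streams
}()
select {
case <-stopped: // all clients closed their transports in time
case <-time.After(10 * time.Second):
	grpcServer.Stop() // force-close the remaining stream transports
	<-stopped         // the concurrent GracefulStop is now interrupted
}
```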
856502f788 version: bump up to 3.2.3+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-14 16:04:54 -07:00
ae23b0ef2f version: bump up to 3.2.3
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-13 12:09:48 -07:00
5ee89be616 testutil: whitelist WaitGroup.Done
Calling a WaitGroup.Done() in a defer will sometimes trigger the leak
detector since the WaitGroup.Wait() will unblock before the defer
block completes. If the leak detector runs before the Done() is
rescheduled, it will spuriously report the finishing Done() as a leak.
This happens enough in CI to be irritating; whitelist it and ignore.
2017-07-13 11:14:12 -07:00
38373b342d test: sync with etcd-agent start in functional_pass
Fix https://github.com/coreos/etcd/issues/8211.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-13 11:14:03 -07:00
536a5f594b v3rpc: Let clients establish unlimited streams
As of go-grpc v1.2.0, the maximum number of streams per client is set to 100
by default on the server side. This change makes it impossible
for third-party proxies and custom clients to establish many streams.
2017-07-12 10:46:33 -07:00
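On the server side this amounts to a one-line option; a sketch of raising the limit to an effectively unlimited value:

```
// math.MaxUint32 streams per HTTP/2 connection, instead of
// grpc-go's post-v1.2.0 default of 100.
srv := grpc.NewServer(grpc.MaxConcurrentStreams(math.MaxUint32))
```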
49e6916e66 dev-guide: document using range_end for prefixes with json
Lack of a range_end example has caused some confusion.
2017-07-12 10:40:37 -07:00
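An example of the documented pattern, sketched with Go's net/http client; the /v3alpha gateway path matching 3.2-era releases is an assumption here. "Zm9v" is base64("foo") and "Zm9w" is base64("fop"), the first key after the "foo" prefix.

```
// Range over every key under the "foo" prefix via the JSON gateway.
body := strings.NewReader(`{"key": "Zm9v", "range_end": "Zm9w"}`)
resp, err := http.Post("http://localhost:2379/v3alpha/kv/range", "application/json", body)
if err != nil {
	log.Fatal(err)
}
defer resp.Body.Close()
```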
b9b6f6f7c4 Documentation: refer to LeaseKeepAliveRequest for lease refresh 2017-07-12 10:40:26 -07:00
6ecbb3bbc5 version: bump up to 3.2.2+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-12 10:36:16 -07:00
cb2a496c4d version: bump up to 3.2.2
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-07 09:01:47 -07:00
fdf525a3fd dev-guide: update experimental APIs
No experimental APIs at the moment.

Fixes #8212
2017-07-07 09:01:30 -07:00
40468ab11f transport: accept connection if matched IP SAN but no DNS match
The IP SAN check would always do a DNS SAN check if DNS is given
and the connection's IP is verified. Instead, don't check DNS
entries if there's a matching IP.

Fixes #8206
2017-07-07 09:01:11 -07:00
f8f79666d4 embed: connect json gateway with user-provided listen address
net.Listener reports its address as [::] when given 0.0.0.0, breaking
hosts that have IPv6 disabled.

Fixes #8151
Fixes #7961
2017-07-07 09:00:40 -07:00
fefcf348f1 embed: share grpc connection for grpc json services 2017-07-07 09:00:32 -07:00
81d39a75ff fixtures: add gencerts.sh, generate CRL 2017-07-07 09:00:01 -07:00
8f2b48465f lease: stop lessors after tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-30 11:18:55 -07:00
026c1734b2 Documentation/faq: fix typo in flag names
Signed-off-by: Hui Kang <kangh@us.ibm.com>
2017-06-30 01:28:44 -07:00
81e1d03d02 Documentation/v2: 'etcd v2' to the title
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-30 01:28:20 -07:00
6171334595 benchmark: refactor watch benchmark 2017-06-27 07:35:01 -07:00
55de54a757 lessor: extend leases on promote if expires will be rate limited
Instead of unconditionally randomizing, extend leases on promotion
if too many leases expire within the same time span. If the server
has few leases or spread-out expiries, there will be no extension.

Squashed previous commits for https://github.com/coreos/etcd/pull/8149.

Author: Anthony Romano <anthony.romano@coreos.com>

This is a combination of 4 commits below:

lease: randomize expiry on initial refresh call

Randomize the very first expiry on lease recovery
to prevent recovered leases from expiring all at
the same time.

Address https://github.com/coreos/etcd/issues/8096.

integration: remove lease exist checking on randomized expiry

A lease with TTL 5 should be renewed with randomization,
so it may still exist after 3 seconds.

lessor: extend leases on promote if expires will be rate limited

Instead of unconditionally randomizing, extend leases on promotion
if too many leases expire within the same time span. If the server
has few leases or spread-out expiries, there will be no extension.

Revert "integration: remove lease exist checking on randomized expiry"

This reverts commit 95bc33f37f. The new
lease extension algorithm should pass this test.
2017-06-23 13:31:59 -07:00
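
A toy model of the promote-time extension (assumed names, not the lessor code): push expiries that exceed a per-second revoke budget further out, extending but never shortening.

```go
import (
	"sort"
	"time"
)

// smoothExpiries spreads lease expiry offsets so that at most
// revokesPerSec leases expire in any one second.
func smoothExpiries(expiries []time.Duration, revokesPerSec int) {
	sort.Slice(expiries, func(i, j int) bool { return expiries[i] < expiries[j] })
	for i := range expiries {
		// earliest slot where the i-th expiry still fits the rate limit
		minSlot := time.Duration(i/revokesPerSec) * time.Second
		if expiries[i] < minSlot {
			expiries[i] = minSlot // extend; a lease is never shortened
		}
	}
}
```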
c14aad0ba6 lease: rate limit revoke runLoop
Fix https://github.com/coreos/etcd/issues/8097.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-23 13:28:33 -07:00
91ccc93042 version: bump up to v3.2.1+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-23 10:33:24 -07:00
61fc123e7a version: bump up to 3.2.1
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-22 09:47:21 -07:00
71d2008385 mvcc: use GaugeFunc metric to load db size when requested
Relying on mvcc to set the db size metric can cause it to
miss size changes when a txn commits after the last write
completes before a quiescent period. Instead, load the
db size on demand.

Fixes #8146
2017-06-22 09:47:01 -07:00
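
A sketch of the GaugeFunc wiring; the metric names and the size accessor are assumptions, not the exact patch:

```go
import "github.com/prometheus/client_golang/prometheus"

// registerDBSizeMetric reads the database size at scrape time instead of
// relying on write paths to push updates, so post-commit and post-defrag
// sizes are always current.
func registerDBSizeMetric(size func() int64) {
	prometheus.MustRegister(prometheus.NewGaugeFunc(
		prometheus.GaugeOpts{
			Namespace: "etcd_debugging",
			Subsystem: "mvcc",
			Name:      "db_total_size_in_bytes",
			Help:      "Total size of the underlying database in bytes.",
		},
		func() float64 { return float64(size()) }, // evaluated per scrape
	))
}
```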
79794bf556 integration: test mvcc db size metric is updated following defrag 2017-06-22 09:46:54 -07:00
db0ca8963f test: run basic functional tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-20 17:15:22 -07:00
27a3356c74 etcd-tester: add 'exit-on-failure'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-20 17:15:16 -07:00
4526284326 mvcc: restore into tree index with one key index
Clobbering the mvcc kvindex with new keyIndexes for each restore
chunk would cause index corruption by dropping historical information.
2017-06-20 10:58:42 -07:00
0b0b1992b8 mvcc: test restore and deletes with small chunk sizes 2017-06-20 10:58:35 -07:00
ed7ef5be8b mvcc: set db size metric on restore
Fixes #8080
2017-06-20 10:58:16 -07:00
ff5be50ee5 integration: test mvcc db size metric is set on restore 2017-06-20 10:58:10 -07:00
a032b3b914 v3rpc: treat nil txn request op as error
Fixes #7889
2017-06-20 10:57:41 -07:00
9388a27649 dev-guide: add txn json example 2017-06-20 10:57:35 -07:00
af1d732916 e2e: test txn over grpc json 2017-06-20 10:57:27 -07:00
939aa66b48 test: 'FAIL' on release binary download failure
CI is failing to download release binaries,
but the exit code doesn't trigger a CI job failure.

We need the 'FAIL' string.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-20 10:55:19 -07:00
3365dd4ff0 Documentation/op-guide: fix failed RPC rate, leader election metrics
This fixes the failed-RPC rate query, where we do not need
subtraction because we already query by the status code.
Also adds grpc_method to make the query more specific. Most of the
time, the failure recovers within 10 seconds, which is our
Prometheus scrape interval, so the 'rate' query might not cover
that time window and shows 0, but the failure still shows up in the graph.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-15 12:00:40 -07:00
959d55ae80 bill-of-materials: regenerate with multi licenses
Fix https://github.com/coreos/etcd/issues/8086.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-14 08:44:11 -07:00
3e1992140a build-aci: Fix ACI image name
The appc discovery spec states that the architecture specifier in the ACI
image file name will be an ACI architecture value.  Our build scripts were
using GOARCH in the image name, which is incorrect for arm64/aarch64.
See: https://github.com/appc/spec/blob/master/spec/discovery.md

Fixes errors like these on arm64 machines:

  $ rkt --debug --insecure-options=image fetch coreos.com/etcd:v3.2.0-rc.1
  image: remote fetching from URL "https://github.com/coreos/etcd/releases/download/v3.2.0-rc.1/etcd-v3.2.0-rc.1-linux-aarch64.aci"
  fetch: bad HTTP status code: 404

Signed-off-by: Geoff Levand <geoff@infradead.org>
2017-06-14 08:43:58 -07:00
b547b982b9 Documentation/upgrades: link to previous guides
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-09 13:04:10 -07:00
56477ca998 version: bump up to 3.2.0+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-09 13:03:56 -07:00
66722b1ada version: bump up to 3.2.0
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-09 10:59:09 -07:00
963339d265 rafthttp: permit very large v2 snapshots
v2 snapshots were hitting the 512MB message decode limit, causing
snapshot transfers to new members to fail for being too big.
2017-06-09 10:49:51 -07:00
c87594f27c etcdserver: use same ReadView for read-only txns
A read-only txn isn't serialized by raft, but it uses a fresh
read txn for every mvcc access prior to executing its request ops.
If a write txn modifies the keys matching the read txn's comparisons,
the read txn may return inconsistent results.

To fix, use the same read-only mvcc txn for the duration of the etcd
txn. Probably gets a modest txn speedup as well since there are
fewer read txn allocations.
2017-06-09 09:50:43 -07:00
e72ad5dd2a mvcc: create TxnWrites from TxnRead with NewReadOnlyTxnWrite
Already used internally by mvcc, but needed by etcdserver txns.
2017-06-09 09:50:37 -07:00
3eb5d24cab integration: test txn comparison and concurrent put ordering 2017-06-09 09:50:30 -07:00
8b9041a938 Documentation/op-guide: do not use host network, fix indentation
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-09 09:14:21 -07:00
864ffec88c v2http: put back /v2/machines and mark as non-deprecated
This reverts commit 2bb33181b6. python-etcd
seems to depend on /v2/machines and the maintainer vanished. Plus, it is
prefixed with /v2/ so it probably can't be deprecated anyway.
2017-06-08 12:05:59 -07:00
12bc2bba36 etcdserver: add leaseExpired debugging metrics
Fix https://github.com/coreos/etcd/issues/8050.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-08 11:23:12 -07:00
3a43afce5a Documentation/op-guide: fix 'grpc_code' field in metrics
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-08 10:16:07 -07:00
0e56ea37e7 fileutil: return immediately if preallocating 0 bytes
fallocate will return EINVAL, causing zeroing to the end of a
0-byte file to fail.

Fixes #8045
2017-06-07 12:59:35 -07:00
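
A minimal sketch of the guard, assuming the Linux fallocate path:

```go
import (
	"os"
	"syscall"
)

// preallocate is a no-op for zero bytes, since fallocate(2) rejects a
// zero length with EINVAL.
func preallocate(f *os.File, sizeInBytes int64) error {
	if sizeInBytes == 0 {
		return nil // fallocate would fail with EINVAL
	}
	return syscall.Fallocate(int(f.Fd()), 0, 0, sizeInBytes)
}
```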
743192aa3b *: clear rarer shellcheck errors on scripts
Clean up the tail of the warnings
2017-06-06 10:44:59 -07:00
e8b156578f travis: add shellcheck 2017-06-06 10:44:53 -07:00
61f3338ce7 test: shellcheck 2017-06-06 10:44:46 -07:00
effffdbdca test, osutil: disable setting SIG_DFL on linux if built with cov tag
Was causing etcd to terminate before finishing writing its
coverage profile.
2017-06-06 09:47:22 -07:00
9bac803bee Documentation/op-guide: fix typo in grafana.json
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-06 09:47:15 -07:00
9169ad0d7d *: fix go tool vet -all -shadow errors 2017-06-06 09:47:06 -07:00
482a7839d9 test: speedup and strengthen go vet checking
Was iterating over every file, reloading everything. Instead,
analyze the package directories. On my machine, the time for
vet checking goes from 34s to 3s. Scans more code too.
2017-06-06 09:46:54 -07:00
ba3058ca79 op-guide: document CN certs in security.md 2017-06-06 09:46:47 -07:00
0e90e504f5 scripts, Documentation: fix swagger generation
Changes to the genproto to support splitting out the grpc-gateway broke
swagger generation.
2017-06-02 11:05:21 -07:00
998fa0de76 Documentation, scripts: regen RPC docs
Was missing the new cancel_reason field. Also includes an updated protodoc
sha to fix generating documentation for the upcoming txn compare range patchset.
2017-06-02 10:27:49 -07:00
c273735729 op-guide: document configuration flags for gateway 2017-06-01 15:59:49 -07:00
c85f736522 mvcc: time restore in restore benchmark
This never worked.
2017-06-01 14:59:31 -07:00
a375ff172e mvcc: chunk reads for restoring
Loading all keys at once would cause etcd to use twice as much
memory as it needs to serve the keys, causing RSS to spike on
boot. Instead, load the keys into the mvcc store by chunk. Uses
pipelining for some concurrency.

Fixes #7822
2017-06-01 14:59:27 -07:00
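
A toy sketch of the chunked, pipelined restore under assumed helper names; the real change lives in mvcc's restore path:

```go
type keyValue struct{ key, value []byte } // stand-in for mvccpb.KeyValue

// restoreChunked reads bounded chunks from the backend in one goroutine
// while the caller folds each chunk into the in-memory index, so the full
// key space is never resident twice.
func restoreChunked(readChunk func(limit int) ([]keyValue, bool), build func([]keyValue)) {
	const restoreChunkKeys = 10000 // assumed chunk size
	ch := make(chan []keyValue, 1) // one-chunk buffer gives pipelining
	go func() {
		defer close(ch)
		for {
			kvs, more := readChunk(restoreChunkKeys)
			ch <- kvs
			if !more {
				return
			}
		}
	}()
	for kvs := range ch {
		build(kvs) // index build overlaps with the next backend read
	}
}
```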
1893af9bbd integration: use unixs:// if client port configured for tls 2017-06-01 09:47:08 -07:00
b4c655677a clientv3: support unixs:// scheme
For using TLS without giving a TLSConfig to the client.
2017-06-01 09:47:03 -07:00
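
Client usage might look like the following sketch; the endpoint form and address are assumptions based on the commit message, not taken from the patch:

```go
import (
	"time"

	"github.com/coreos/etcd/clientv3"
)

// newUnixsClient dials a unix-domain socket with TLS via the unixs://
// scheme, without constructing a tls.Config by hand; the address shown
// is hypothetical.
func newUnixsClient() (*clientv3.Client, error) {
	return clientv3.New(clientv3.Config{
		Endpoints:   []string{"unixs://localhost:12345"},
		DialTimeout: 5 * time.Second,
	})
}
```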
c2160adf1d clientv3/integration: test dialing to TLS without a TLS config times out
etcdctl was getting ctx errors from timing out while trying to issue RPCs
to a TLS endpoint without using TLS for transmission. The client should
immediately bail out with a timeout error.
2017-06-01 09:46:57 -07:00
5ada311416 clientv3: use Endpoints[0] to initialize grpc creds
Dialing out without specifying TLS creds but using an https endpoint
relies on default behavior that depends on passing an https endpoint to
Dial(), so it's not enough to rely entirely on the balancer to supply
endpoints.

Fixes #8008

Also ctx-izes grpc.Dial
2017-06-01 09:46:48 -07:00
f042cd7d9c vendor: ghodss/yaml v1.0.0 2017-05-30 14:44:30 -07:00
f0a400a3a8 vendor: kr/pty v1.0.0 2017-05-30 14:44:23 -07:00
6066977280 op-guide: update performance.md
It's been a year; time to refresh with 3.2.0 data.
2017-05-30 10:16:19 -07:00
fc88eccc74 vendor: use v0.2.0 of go-semver 2017-05-30 10:15:23 -07:00
5cb28a7d83 Documentation: add 'yaml.NewConfig' change in 3.2
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-30 10:14:55 -07:00
de57e88643 Documentation: add FAQ entry for "database space exceeded" errors
Also moves miscategorized cluster id mismatch entry from "performance"
to "operation".
2017-05-26 09:13:13 -07:00
1305 changed files with 97237 additions and 117774 deletions

View File

@ -1,16 +0,0 @@
#!/usr/bin/env bash
TEST_SUFFIX=$(date +%s | base64 | head -c 15)
TEST_OPTS="RELEASE_TEST=y INTEGRATION=y PASSES='build unit release integration_e2e functional' MANUAL_VER=v3.2.9"
if [ "$TEST_ARCH" == "386" ]; then
TEST_OPTS="GOARCH=386 PASSES='build unit integration_e2e'"
fi
docker run \
--rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd \
gcr.io/etcd-development/etcd-test:go1.9.2 \
/bin/bash -c "${TEST_OPTS} ./test 2>&1 | tee test-${TEST_SUFFIX}.log"
! egrep "(--- FAIL:|leak|panic: test timed out)" -A10 -B50 test-${TEST_SUFFIX}.log

View File

@ -6,8 +6,7 @@ sudo: required
services: docker
go:
- 1.9.2
- tip
- 1.8.7
notifications:
on_success: never
@ -15,74 +14,60 @@ notifications:
env:
matrix:
- TARGET=amd64
- TARGET=amd64-go-tip
- TARGET=darwin-amd64
- TARGET=windows-amd64
- TARGET=arm64
- TARGET=arm
- TARGET=386
- TARGET=ppc64le
- TARGET=linux-amd64-integration
- TARGET=linux-amd64-functional
- TARGET=linux-amd64-unit
- TARGET=all-build
- TARGET=linux-386-unit
matrix:
fast_finish: true
allow_failures:
- go: tip
env: TARGET=amd64-go-tip
- go: 1.8.7
env: TARGET=linux-386-unit
exclude:
- go: 1.9.2
env: TARGET=amd64-go-tip
- go: tip
env: TARGET=amd64
- go: tip
env: TARGET=darwin-amd64
- go: tip
env: TARGET=windows-amd64
- go: tip
env: TARGET=arm
- go: tip
env: TARGET=arm64
- go: tip
env: TARGET=386
- go: tip
env: TARGET=ppc64le
env: TARGET=linux-386-unit
before_install:
- docker pull gcr.io/etcd-development/etcd-test:go1.9.2
- if [[ $TRAVIS_GO_VERSION == 1.* ]]; then docker pull gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION}; fi
install:
- pushd cmd/etcd && go get -t -v ./... && popd
script:
- echo "TRAVIS_GO_VERSION=${TRAVIS_GO_VERSION}"
- >
case "${TARGET}" in
amd64)
linux-amd64-integration)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.9.2 \
/bin/bash -c "GOARCH=amd64 ./test"
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GOARCH=amd64 PASSES='integration' ./test"
;;
amd64-go-tip)
GOARCH=amd64 ./test
;;
darwin-amd64)
linux-amd64-functional)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.9.2 \
/bin/bash -c "GO_BUILD_FLAGS='-a -v' GOOS=darwin GOARCH=amd64 ./build"
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "./build && GOARCH=amd64 PASSES='functional' ./test"
;;
windows-amd64)
linux-amd64-unit)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.9.2 \
/bin/bash -c "GO_BUILD_FLAGS='-a -v' GOOS=windows GOARCH=amd64 ./build"
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GOARCH=amd64 PASSES='unit' ./test"
;;
386)
all-build)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.9.2 \
/bin/bash -c "GOARCH=386 PASSES='build unit' ./test"
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GOARCH=amd64 PASSES='build' ./test \
&& GOARCH=386 PASSES='build' ./test \
&& GO_BUILD_FLAGS='-v' GOOS=darwin GOARCH=amd64 ./build \
&& GO_BUILD_FLAGS='-v' GOOS=windows GOARCH=amd64 ./build \
&& GO_BUILD_FLAGS='-v' GOARCH=arm ./build \
&& GO_BUILD_FLAGS='-v' GOARCH=arm64 ./build \
&& GO_BUILD_FLAGS='-v' GOARCH=ppc64le ./build"
;;
*)
# test building out of gopath
linux-386-unit)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.9.2 \
/bin/bash -c "GO_BUILD_FLAGS='-a -v' GOARCH='${TARGET}' ./build"
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GOARCH=386 PASSES='unit' ./test"
;;
esac

.words
View File

@ -1,38 +0,0 @@
ErrCodeEnhanceYourCalm
ErrTimeout
GoAway
RPC
RPCs
TODO
backoff
blackhole
blackholed
cancelable
cancelation
cluster_proxy
defragment
defragmenting
etcd
gRPC
goroutine
goroutines
healthcheck
iff
inflight
keepalive
keepalives
keyspace
linearization
localhost
mutex
prefetching
protobuf
prometheus
repin
serializable
teardown
too_many_pings
uncontended
unprefixed
unlisting

View File

@ -1,490 +0,0 @@
## [v3.2.10](https://github.com/coreos/etcd/releases/tag/v3.2.10) (2017-11-16)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.9...v3.2.10).
### Fixed
- Replace backend key-value database `boltdb/bolt` with [`coreos/bbolt`](https://github.com/coreos/bbolt/releases) to address [backend database size issue](https://github.com/coreos/etcd/issues/8009)
- Fix clientv3 balancer to handle [network partition](https://github.com/coreos/etcd/issues/8711)
- Upgrade [`google.golang.org/grpc`](https://github.com/grpc/grpc-go/releases) v1.2.1 to v1.7.3
- Upgrade [`github.com/grpc-ecosystem/grpc-gateway`](https://github.com/grpc-ecosystem/grpc-gateway/releases) v1.2 to v1.3
- Revert [discovery SRV auth `ServerName` with `*.{ROOT_DOMAIN}`](https://github.com/coreos/etcd/pull/8651) to support non-wildcard subject alternative names in the certs (see [issue #8445](https://github.com/coreos/etcd/issues/8445) for more context)
- For instance, `etcd --discovery-srv=etcd.local` will only authenticate peers/clients when the provided certs have root domain `etcd.local` (**not `*.etcd.local`**) as an entry in the Subject Alternative Name (SAN) field
## [v3.2.9](https://github.com/coreos/etcd/releases/tag/v3.2.9) (2017-10-06)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.8...v3.2.9).
### Fixed (Security)
- Compile with [Go 1.8.4](https://groups.google.com/d/msg/golang-nuts/sHfMg4gZNps/a-HDgDDDAAAJ)
- Update `golang.org/x/crypto/bcrypt` (See [golang/crypto@6c586e1](https://github.com/golang/crypto/commit/6c586e17d90a7d08bbbc4069984180dce3b04117) for more)
- Fix discovery SRV bootstrapping to [authenticate `ServerName` with `*.{ROOT_DOMAIN}`](https://github.com/coreos/etcd/pull/8651), in order to support sub-domain wildcard matching (see [issue #8445](https://github.com/coreos/etcd/issues/8445) for more context)
- For instance, `etcd --discovery-srv=etcd.local` will only authenticate peers/clients when the provided certs have root domain `*.etcd.local` as an entry in the Subject Alternative Name (SAN) field
## [v3.2.8](https://github.com/coreos/etcd/releases/tag/v3.2.8) (2017-09-29)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.7...v3.2.8).
### Fixed
- Fix v2 client failover to next endpoint on mutable operation
- Fix grpc-proxy to respect `KeysOnly` flag
## [v3.2.7](https://github.com/coreos/etcd/releases/tag/v3.2.7) (2017-09-01)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.6...v3.2.7).
### Fixed
- Fix server-side auth so concurrent auth operations do not return old revision error
- Fix concurrency/stm Put with serializable snapshot
- Use store revision from first fetch to resolve write conflicts instead of modified revision
## [v3.2.6](https://github.com/coreos/etcd/releases/tag/v3.2.6) (2017-08-21)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.5...v3.2.6).
### Fixed
- Fix watch restore from snapshot
- Fix `etcd_debugging_mvcc_keys_total` inconsistency
- Fix multiple URLs for `--listen-peer-urls` flag
- Add `enable-pprof` to etcd configuration file format
## [v3.2.5](https://github.com/coreos/etcd/releases/tag/v3.2.5) (2017-08-04)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.4...v3.2.5).
### Changed
- Use reverse lookup to match wildcard DNS SAN
- Return non-zero exit code on unhealthy `endpoint health`
### Fixed
- Fix unreachable /metrics endpoint when `--enable-v2=false`
- Fix grpc-proxy to respect `PrevKv` flag
### Added
- Add container registry `gcr.io/etcd-development/etcd`
## [v3.2.4](https://github.com/coreos/etcd/releases/tag/v3.2.4) (2017-07-19)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.3...v3.2.4).
### Fixed
- Do not block on active client stream when stopping server
- Fix gRPC proxy Snapshot RPC error handling
## [v3.2.3](https://github.com/coreos/etcd/releases/tag/v3.2.3) (2017-07-14)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.2...v3.2.3).
### Fixed
- Let clients establish unlimited streams
### Added
- Tag docker images with minor versions
- e.g. `docker pull quay.io/coreos/etcd:v3.2` to fetch latest v3.2 versions
## [v3.1.10](https://github.com/coreos/etcd/releases/tag/v3.1.10) (2017-07-14)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.9...v3.1.10).
### Changed
- Compile with Go 1.8.3 to fix panic on `net/http.CloseNotify`
### Added
- Tag docker images with minor versions
- e.g. `docker pull quay.io/coreos/etcd:v3.1` to fetch latest v3.1 versions
## [v3.2.2](https://github.com/coreos/etcd/releases/tag/v3.2.2) (2017-07-07)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.1...v3.2.2).
### Improved
- Rate-limit lease revoke on expiration
- Extend leases on promote to avoid queueing effect on lease expiration
### Fixed
- Use user-provided listen address to connect to gRPC gateway
- `net.Listener` rewrites IPv4 0.0.0.0 to IPv6 [::], breaking IPv6 disabled hosts
- Only v3.2.0, v3.2.1 are affected
- Accept connection with matched IP SAN but no DNS match
- Don't check DNS entries in certs if there's a matching IP
- Fix 'tools/benchmark' watch command
## [v3.2.1](https://github.com/coreos/etcd/releases/tag/v3.2.1) (2017-06-23)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.0...v3.2.1).
### Fixed
- Fix backend database in-memory index corruption issue on restore (only 3.2.0 is affected)
- Fix gRPC gateway Txn marshaling issue
- Fix backend database size debugging metrics
## [v3.2.0](https://github.com/coreos/etcd/releases/tag/v3.2.0) (2017-06-09)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.10...v3.2.0).
See [upgrade 3.2](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md).
### Improved
- Improve backend read concurrency
### Added
- Embedded etcd
- `Etcd.Peers` field is now `[]*peerListener`
- RPCs
- Add Election, Lock service
- Native client etcdserver/api/v3client
- client "embedded" in the server
- gRPC proxy
- Proxy endpoint discovery
- Namespaces
- Coalesce lease requests
- v3 client
- STM prefetching
- Add namespace feature
- Add `ErrOldCluster` with server version checking
- Translate WithPrefix() into WithFromKey() for empty key
- v3 etcdctl
- Add `check perf` command
- Add `--from-key` flag to role grant-permission command
- `lock` command takes an optional command to execute
- etcd flags
- Add `--enable-v2` flag to configure v2 backend (enabled by default)
- Add `--auth-token` flag
- `etcd gateway`
- Support DNS SRV priority
- Auth
- Support Watch API
- JWT tokens
- Logging, monitoring
- Server warns large snapshot operations
- Add `etcd_debugging_server_lease_expired_total` metrics
- Security
- Deny incoming peer certs with wrong IP SAN
- Resolve TLS DNSNames when SAN checking
- Reload TLS certificates on every client connection
- Release
- Annotate acbuild with supports-systemd-notify
- Add `nsswitch.conf` to Docker container image
- Add ppc64le, arm64(experimental) builds
- Compile with Go 1.8.3
### Changed
- v3 client
- `LeaseTimeToLive` returns TTL=-1 resp on lease not found
- `clientv3.NewFromConfigFile` is moved to `clientv3/yaml.NewConfig`
- concurrency package's elections updated to match RPC interfaces
- let client dial endpoints not in the balancer
- Dependencies
- Update gRPC to v1.2.1
- Update grpc-gateway to v1.2.0
### Fixed
- Allow v2 snapshot over 512MB
## [v3.1.9](https://github.com/coreos/etcd/releases/tag/v3.1.9) (2017-06-09)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.8...v3.1.9).
### Fixed
- Allow v2 snapshot over 512MB
## [v3.1.8](https://github.com/coreos/etcd/releases/tag/v3.1.8) (2017-05-19)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.7...v3.1.8).
## [v3.1.7](https://github.com/coreos/etcd/releases/tag/v3.1.7) (2017-04-28)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.6...v3.1.7).
## [v3.1.6](https://github.com/coreos/etcd/releases/tag/v3.1.6) (2017-04-19)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.5...v3.1.6).
### Changed
- Remove auth check in Status API
### Fixed
- Fill in Auth API response header
## [v3.1.5](https://github.com/coreos/etcd/releases/tag/v3.1.5) (2017-03-27)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.4...v3.1.5).
### Added
- Add `/etc/nsswitch.conf` file to alpine-based Docker image
### Fixed
- Fix raft memory leak issue
- Fix Windows file path issues
## [v3.1.4](https://github.com/coreos/etcd/releases/tag/v3.1.4) (2017-03-22)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.3...v3.1.4).
## [v3.1.3](https://github.com/coreos/etcd/releases/tag/v3.1.3) (2017-03-10)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.2...v3.1.3).
### Changed
- Use machine default host when advertise URLs are default values (localhost:2379, 2380) AND if listen URL is 0.0.0.0
### Fixed
- Fix `etcd gateway` schema handling in DNS discovery
- Fix sd_notify behaviors in gateway, grpc-proxy
## [v3.1.2](https://github.com/coreos/etcd/releases/tag/v3.1.2) (2017-02-24)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.1...v3.1.2).
### Changed
- Use IPv4 default host, by default (when IPv4 and IPv6 are available)
### Fixed
- Fix `etcd gateway` with multiple endpoints
## [v3.1.1](https://github.com/coreos/etcd/releases/tag/v3.1.1) (2017-02-17)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.0...v3.1.1).
### Changed
- Compile with Go 1.7.5
## [v2.3.8](https://github.com/coreos/etcd/releases/tag/v2.3.8) (2017-02-17)
See [code changes](https://github.com/coreos/etcd/compare/v2.3.7...v2.3.8).
### Changed
- Compile with Go 1.7.5
## [v3.1.0](https://github.com/coreos/etcd/releases/tag/v3.1.0) (2017-01-20)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.16...v3.1.0).
See [upgrade 3.1](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md).
### Improved
- Faster linearizable reads (implements Raft read-index)
- v3 authentication API is now stable
### Added
- Automatic leadership transfer when leader steps down
- etcd flags
- `--strict-reconfig-check` flag is set by default
- Add `--log-output` flag
- Add `--metrics` flag
- v3 client
- Add SetEndpoints method; update endpoints at runtime
- Add Sync method; auto-update endpoints at runtime
- Add Lease TimeToLive API; fetch lease information
- replace Config.Logger field with global logger
- Get API responses are sorted in ascending order by default
- v3 etcdctl
- Add lease timetolive command
- Add `--print-value-only` flag to get command
- Add `--dest-prefix` flag to make-mirror command
- command get responses are sorted in ascending order by default
- `recipes` now conform to sessions defined in clientv3/concurrency
- ACI has symlinks to `/usr/local/bin/etcd*`
- experimental gRPC proxy feature
### Changed
- Deprecated following gRPC metrics in favor of [go-grpc-prometheus](https://github.com/grpc-ecosystem/go-grpc-prometheus):
- `etcd_grpc_requests_total`
- `etcd_grpc_requests_failed_total`
- `etcd_grpc_active_streams`
- `etcd_grpc_unary_requests_duration_seconds`
- etcd uses default route IP if advertise URL is not given
- Cluster rejects removing members if quorum will be lost
- SRV records (e.g., infra1.example.com) must match the discovery domain (i.e., example.com) if no custom certificate authority is given
- TLSConfig ServerName is ignored with user-provided certificates for backwards compatibility; to be deprecated in 3.2
- For example, `etcd --discovery-srv=example.com` will only authenticate peers/clients when the provided certs have root domain `example.com` as an entry in the Subject Alternative Name (SAN) field
- Discovery now has upper limit for waiting on retries
- Warn on binding listeners through domain names; to be deprecated in 3.2
## [v3.0.16](https://github.com/coreos/etcd/releases/tag/v3.0.16) (2016-11-13)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.15...v3.0.16).
## [v3.0.15](https://github.com/coreos/etcd/releases/tag/v3.0.15) (2016-11-11)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.14...v3.0.15).
### Fixed
- Fix cancel watch request with wrong range end
## [v3.0.14](https://github.com/coreos/etcd/releases/tag/v3.0.14) (2016-11-04)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.13...v3.0.14).
### Added
- v3 etcdctl migrate command now supports `--no-ttl` flag to discard keys on transform
## [v3.0.13](https://github.com/coreos/etcd/releases/tag/v3.0.13) (2016-10-24)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.12...v3.0.13).
## [v3.0.12](https://github.com/coreos/etcd/releases/tag/v3.0.12) (2016-10-07)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.11...v3.0.12).
## [v3.0.11](https://github.com/coreos/etcd/releases/tag/v3.0.11) (2016-10-07)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.10...v3.0.11).
### Added
- Server returns previous key-value (optional)
- `clientv3.WithPrevKV` option
- v3 etcdctl put,watch,del `--prev-kv` flag
## [v3.0.10](https://github.com/coreos/etcd/releases/tag/v3.0.10) (2016-09-23)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.9...v3.0.10).
## [v3.0.9](https://github.com/coreos/etcd/releases/tag/v3.0.9) (2016-09-15)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.8...v3.0.9).
### Added
- Warn on domain names on listen URLs (v3.2 will reject domain names)
## [v3.0.8](https://github.com/coreos/etcd/releases/tag/v3.0.8) (2016-09-09)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.7...v3.0.8).
### Changed
- Allow only IP addresses in listen URLs (domain names are rejected)
## [v3.0.7](https://github.com/coreos/etcd/releases/tag/v3.0.7) (2016-08-31)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.6...v3.0.7).
### Changed
- SRV records only allow A records (RFC 2052)
## [v3.0.6](https://github.com/coreos/etcd/releases/tag/v3.0.6) (2016-08-19)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.5...v3.0.6).
## [v3.0.5](https://github.com/coreos/etcd/releases/tag/v3.0.5) (2016-08-19)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.4...v3.0.5).
### Changed
- SRV records (e.g., infra1.example.com) must match the discovery domain (i.e., example.com) if no custom certificate authority is given
## [v3.0.4](https://github.com/coreos/etcd/releases/tag/v3.0.4) (2016-07-27)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.3...v3.0.4).
### Changed
- v2 auth can now use common name from TLS certificate when `--client-cert-auth` is enabled
### Added
- v2 etcdctl ls command now supports `--output=json`
- Add /var/lib/etcd directory to etcd official Docker image
## [v3.0.3](https://github.com/coreos/etcd/releases/tag/v3.0.3) (2016-07-15)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.2...v3.0.3).
### Changed
- Revert Dockerfile to use `CMD`, instead of `ENTRYPOINT`, to support etcdctl run
- Docker commands for v3.0.2 won't work without specifying executable binary paths
- v3 etcdctl default endpoints are now 127.0.0.1:2379
## [v3.0.2](https://github.com/coreos/etcd/releases/tag/v3.0.2) (2016-07-08)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.1...v3.0.2).
### Changed
- Dockerfile uses `ENTRYPOINT`, instead of `CMD`, to run etcd without binary path specified
## [v3.0.1](https://github.com/coreos/etcd/releases/tag/v3.0.1) (2016-07-01)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.0...v3.0.1).
## [v3.0.0](https://github.com/coreos/etcd/releases/tag/v3.0.0) (2016-06-30)
See [code changes](https://github.com/coreos/etcd/compare/v2.3.8...v3.0.0).
See [upgrade 3.0](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md).

View File

@ -1,63 +0,0 @@
## CoreOS Community Code of Conduct
### Contributor Code of Conduct
As contributors and maintainers of this project, and in the interest of
fostering an open and welcoming community, we pledge to respect all people who
contribute through reporting issues, posting feature requests, updating
documentation, submitting pull requests or patches, and other activities.
We are committed to making participation in this project a harassment-free
experience for everyone, regardless of level of experience, gender, gender
identity and expression, sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, religion, or nationality.
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing others' private information, such as physical or electronic addresses, without explicit permission
* Other unethical or unprofessional conduct.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct. By adopting this Code of Conduct,
project maintainers commit themselves to fairly and consistently applying these
principles to every aspect of managing this project. Project maintainers who do
not follow or enforce the Code of Conduct may be permanently removed from the
project team.
This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting a project maintainer, Brandon Philips
<brandon.philips@coreos.com>, and/or Meghan Schofield
<meghan.schofield@coreos.com>.
This Code of Conduct is adapted from the Contributor Covenant
(http://contributor-covenant.org), version 1.2.0, available at
http://contributor-covenant.org/version/1/2/0/
### CoreOS Events Code of Conduct
CoreOS events are working conferences intended for professional networking and
collaboration in the CoreOS community. Attendees are expected to behave
according to professional standards and in accordance with their employers
policies on appropriate workplace behavior.
While at CoreOS events or related social networking opportunities, attendees
should not engage in discriminatory or offensive speech or actions including
but not limited to gender, sexuality, race, age, disability, or religion.
Speakers should be especially aware of these concerns.
CoreOS does not condone any statements by speakers contrary to these standards.
CoreOS reserves the right to deny entrance and/or eject from an event (without
refund) any individual found to be engaging in discriminatory or offensive
speech or actions.
Please bring any concerns to the immediate attention of designated on-site
staff, Brandon Philips <brandon.philips@coreos.com>, and/or Meghan Schofield
<meghan.schofield@coreos.com>.

View File

@ -5,7 +5,7 @@ etcd is Apache 2.0 licensed and accepts contributions via GitHub pull requests.
# Email and chat
- Email: [etcd-dev](https://groups.google.com/forum/?hl=en#!forum/etcd-dev)
- IRC: #[etcd](irc://irc.freenode.org:6667/#etcd) IRC channel on freenode.org
- IRC: #[coreos](irc://irc.freenode.org:6667/#coreos) IRC channel on freenode.org
## Getting started

View File

@ -51,7 +51,6 @@ RUN go get -v -u -tags spell github.com/chzchzchz/goword \
&& go get -v -u honnef.co/go/tools/cmd/staticcheck \
&& go get -v -u github.com/wadey/gocovmerge \
&& go get -v -u github.com/gordonklaus/ineffassign \
&& go get -v -u github.com/alexkohler/nakedret \
&& /tmp/install-marker.sh amd64 \
&& rm -f /tmp/install-marker.sh \
&& curl -s https://codecov.io/bash >/codecov \

Documentation/_index.md Normal file
View File

@ -0,0 +1,3 @@
---
title: etcd version 3.2.17
---

View File

@ -0,0 +1,3 @@
---
title: Benchmarks
---

View File

@ -1,3 +1,7 @@
---
title: Benchmarking etcd v2.1.0
---
## Physical machines
GCE n1-highcpu-2 machine type

View File

@ -1,4 +1,6 @@
# Benchmarking etcd v2.2.0
---
title: Benchmarking etcd v2.2.0
---
## Physical Machines

View File

@ -1,3 +1,7 @@
---
title: Benchmarking etcd v2.2.0-rc
---
## Physical machines
GCE n1-highcpu-2 machine type

View File

@ -1,3 +1,7 @@
---
title: Benchmarking etcd v2.2.0-rc-memory
---
## Physical machine
GCE n1-standard-2 machine type

View File

@ -1,3 +1,7 @@
---
title: Benchmarking etcd v3-demo
---
## Physical machines
GCE n1-highcpu-2 machine type

View File

@ -1,4 +1,6 @@
# Watch Memory Usage Benchmark
---
title: Watch Memory Usage Benchmark
---
*NOTE*: The watch features are under active development, and their memory usage may change as that development progresses. We do not expect it to significantly increase beyond the figures stated below.

View File

@ -1,4 +1,6 @@
# Storage Memory Usage Benchmark
---
title: Storage Memory Usage Benchmark
---
<!---todo: link storage to storage design doc-->
Two components of etcd storage consume physical memory. The etcd process allocates an *in-memory index* to speed key lookup. The process's *page cache*, managed by the operating system, stores recently-accessed data from disk for quick re-use.

View File

@ -1,4 +1,6 @@
# Branch management
---
title: Branch management
---
## Guide
@ -7,7 +9,7 @@
* Backwards-compatible bug fixes should target the master branch and subsequently be ported to stable branches.
* Once the master branch is ready for release, it will be tagged and become the new stable branch.
The etcd team has adopted a *rolling release model* and supports two stable versions of etcd.
The etcd team has adopted a *rolling release model* and supports one stable version of etcd.
### Master branch
@ -21,6 +23,6 @@ Before the release of the next stable version, feature PRs will be frozen. We wi
All branches with prefix `release-` are considered _stable_ branches.
After every minor release (http://semver.org/), we will have a new stable branch for that release. We will keep fixing the backwards-compatible bugs for the latest two stable releases. A _patch_ release to each supported release branch, incorporating any bug fixes, will be made once every two weeks, given any patches.
After every minor release (http://semver.org/), we will have a new stable branch for that release. We will keep fixing the backwards-compatible bugs for the latest stable release, but not previous releases. The _patch_ release, incorporating any bug fixes, will be made once every two weeks, given any patches.
[master]: https://github.com/coreos/etcd/tree/master

View File

@ -1,4 +1,6 @@
# Demo
---
title: Demo
---
This series of examples shows the basic procedures for working with an etcd cluster.

View File

@ -0,0 +1,3 @@
---
title: Developer guide
---

View File

@ -1,3 +1,7 @@
---
title: etcd concurrency API Reference
---
### etcd concurrency API Reference

View File

@ -1,16 +1,15 @@
## Why grpc-gateway
---
title: gRPC gateway
---
etcd v3 uses [gRPC][grpc] for its messaging protocol. The etcd project includes a gRPC-based [Go client][go-client] and a command line utility, [etcdctl][etcdctl], for communicating with an etcd cluster through gRPC. For languages with no gRPC support, etcd provides a JSON [grpc-gateway][grpc-gateway]. This gateway serves a RESTful proxy that translates HTTP/JSON requests into gRPC messages.
## Using grpc-gateway
The gateway accepts a [JSON mapping][json-mapping] for etcd's [protocol buffer][api-ref] message definitions. Note that `key` and `value` fields are defined as byte arrays and therefore must be base64 encoded in JSON. The following examples use `curl`, but any HTTP/JSON client should work all the same.
The gateway accepts a [JSON mapping][json-mapping] for etcd's [protocol buffer][api-ref] message definitions. Note that `key` and `value` fields are defined as byte arrays and therefore must be base64 encoded in JSON.
### Put and get keys
Use the `/v3beta/kv/range` and `/v3beta/kv/put` services to read and write keys:
Use `curl` to put and get a key:
```bash
<<COMMENT
@ -19,88 +18,41 @@ foo is 'Zm9v' in Base64
bar is 'YmFy'
COMMENT
curl -L http://localhost:2379/v3beta/kv/put \
curl -L http://localhost:2379/v3alpha/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
# {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"3"}}
curl -L http://localhost:2379/v3beta/kv/range \
curl -L http://localhost:2379/v3alpha/kv/range \
-X POST -d '{"key": "Zm9v"}'
# {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"3"},"kvs":[{"key":"Zm9v","create_revision":"2","mod_revision":"2","version":"1","value":"YmFy"}],"count":"1"}
# get all keys prefixed with "foo"
curl -L http://localhost:2379/v3beta/kv/range \
curl -L http://localhost:2379/v3alpha/kv/range \
-X POST -d '{"key": "Zm9v", "range_end": "Zm9w"}'
# {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"3"},"kvs":[{"key":"Zm9v","create_revision":"2","mod_revision":"2","version":"1","value":"YmFy"}],"count":"1"}
```
### Watch keys
Use the `/v3beta/watch` service to watch keys:
Use `curl` to watch a key:
```bash
curl http://localhost:2379/v3beta/watch \
curl http://localhost:2379/v3alpha/watch \
-X POST -d '{"create_request": {"key":"Zm9v"} }' &
# {"result":{"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"1","raft_term":"2"},"created":true}}
curl -L http://localhost:2379/v3beta/kv/put \
curl -L http://localhost:2379/v3alpha/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}' >/dev/null 2>&1
# {"result":{"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"2"},"events":[{"kv":{"key":"Zm9v","create_revision":"2","mod_revision":"2","version":"1","value":"YmFy"}}]}}
```
### Transactions
Issue a transaction with `/v3beta/kv/txn`:
Use `curl` to issue a transaction:
```bash
curl -L http://localhost:2379/v3beta/kv/txn \
curl -L http://localhost:2379/v3alpha/kv/txn \
-X POST \
-d '{"compare":[{"target":"CREATE","key":"Zm9v","createRevision":"2"}],"success":[{"requestPut":{"key":"Zm9v","value":"YmFy"}}]}'
# {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"3","raft_term":"2"},"succeeded":true,"responses":[{"response_put":{"header":{"revision":"3"}}}]}
```
### Authentication
Set up authentication with the `/v3beta/auth` service:
```bash
# create root user
curl -L http://localhost:2379/v3beta/auth/user/add \
-X POST -d '{"name": "root", "password": "pass"}'
# {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"1","raft_term":"2"}}
# create root role
curl -L http://localhost:2379/v3beta/auth/role/add \
-X POST -d '{"name": "root"}'
# {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"1","raft_term":"2"}}
# grant root role
curl -L http://localhost:2379/v3beta/auth/user/grant \
-X POST -d '{"user": "root", "role": "root"}'
# {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"1","raft_term":"2"}}
# enable auth
curl -L http://localhost:2379/v3beta/auth/enable -X POST -d '{}'
# {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"1","raft_term":"2"}}
```
Authenticate with etcd for an authentication token using `/v3beta/auth/authenticate`:
```bash
# get the auth token for the root user
curl -L http://localhost:2379/v3beta/auth/authenticate \
-X POST -d '{"name": "root", "password": "pass"}'
# {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"1","raft_term":"2"},"token":"sssvIpwfnLAcWAQH.9"}
```
Set the `Authorization` header to the authentication token to fetch a key using authentication credentials:
```bash
curl -L http://localhost:2379/v3beta/kv/put \
-H 'Authorization : sssvIpwfnLAcWAQH.9' \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
# {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"2","raft_term":"2"}}
```
## Swagger
Generated [Swagger][swagger] API definitions can be found at [rpc.swagger.json][swagger-doc].

View File

@ -1,3 +1,7 @@
---
title: etcd API reference
---
### etcd API Reference
@ -58,7 +62,6 @@ This is a generated documentation. Please read the proto files for more.
| LeaseRevoke | LeaseRevokeRequest | LeaseRevokeResponse | LeaseRevoke revokes a lease. All keys attached to the lease will expire and be deleted. |
| LeaseKeepAlive | LeaseKeepAliveRequest | LeaseKeepAliveResponse | LeaseKeepAlive keeps the lease alive by streaming keep alive requests from the client to the server and streaming keep alive responses from the server to the client. |
| LeaseTimeToLive | LeaseTimeToLiveRequest | LeaseTimeToLiveResponse | LeaseTimeToLive retrieves lease information. |
| LeaseLeases | LeaseLeasesRequest | LeaseLeasesResponse | LeaseLeases lists all existing leases. |
@ -69,10 +72,8 @@ This is a generated documentation. Please read the proto files for more.
| Alarm | AlarmRequest | AlarmResponse | Alarm activates, deactivates, and queries alarms regarding cluster health. |
| Status | StatusRequest | StatusResponse | Status gets the status of the member. |
| Defragment | DefragmentRequest | DefragmentResponse | Defragment defragments a member's backend database to recover storage space. |
| Hash | HashRequest | HashResponse | Hash computes the hash of the KV's backend. This is designed for testing; do not use this in production when there are ongoing transactions. |
| HashKV | HashKVRequest | HashKVResponse | HashKV computes the hash of all MVCC keys up to a given revision. |
| Hash | HashRequest | HashResponse | Hash returns the hash of the local KV state for consistency checking purpose. This is designed for testing; do not use this in production when there are ongoing transactions. |
| Snapshot | SnapshotRequest | SnapshotResponse | Snapshot sends a snapshot of the entire backend from a member over a stream to a client. |
| MoveLeader | MoveLeaderRequest | MoveLeaderResponse | MoveLeader requests current leader node to transfer its leadership to transferee. |
@ -404,8 +405,6 @@ CompactionRequest compacts the key-value store up to a given revision. All super
| create_revision | create_revision is the creation revision of the given key | int64 |
| mod_revision | mod_revision is the last modified revision of the given key. | int64 |
| value | value is the value of the given key, in bytes. | bytes |
| lease | lease is the lease id of the given key. | int64 |
| range_end | range_end compares the given target to all keys in the range [key, range_end). See RangeRequest for more details on key ranges. | bytes |
@ -443,24 +442,6 @@ Empty field.
##### message `HashKVRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| revision | revision is the key-value store revision for the hash operation. | int64 |
##### message `HashKVResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| hash | hash is the hash value computed from the responding member's MVCC keys up to a given revision. | uint32 |
| compact_revision | compact_revision is the compacted revision of key-value store when hash begins. | int64 |
##### message `HashRequest` (etcdserver/etcdserverpb/rpc.proto)
Empty field.
@ -472,7 +453,7 @@ Empty field.
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| hash | hash is the hash value computed from the responding member's KV's backend. | uint32 |
| hash | hash is the hash value computed from the responding member's key-value store. | uint32 |
@ -514,21 +495,6 @@ Empty field.
##### message `LeaseLeasesRequest` (etcdserver/etcdserverpb/rpc.proto)
Empty field.
##### message `LeaseLeasesResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| leases | | (slice of) LeaseStatus |
##### message `LeaseRevokeRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
@ -545,14 +511,6 @@ Empty field.
##### message `LeaseStatus` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| ID | | int64 |
##### message `LeaseTimeToLiveRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
@ -653,22 +611,6 @@ Empty field.
##### message `MoveLeaderRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| targetID | targetID is the node ID for the new leader. | uint64 |
##### message `MoveLeaderResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `PutRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
@ -730,7 +672,6 @@ Empty field.
| request_range | | RangeRequest |
| request_put | | PutRequest |
| request_delete_range | | DeleteRangeRequest |
| request_txn | | TxnRequest |
@ -753,7 +694,6 @@ Empty field.
| response_range | | RangeResponse |
| response_put | | PutResponse |
| response_delete_range | | DeleteRangeResponse |
| response_txn | | TxnResponse |

File diff suppressed because it is too large

View File

@ -15,7 +15,7 @@
"application/json"
],
"paths": {
"/v3beta/election/campaign": {
"/v3alpha/election/campaign": {
"post": {
"summary": "Campaign waits to acquire leadership in an election, returning a LeaderKey\nrepresenting the leadership if successful. The LeaderKey can then be used\nto issue new values on the election, transactionally guard API requests on\nleadership still being held, and resign from the election.",
"operationId": "Campaign",
@ -42,7 +42,7 @@
]
}
},
"/v3beta/election/leader": {
"/v3alpha/election/leader": {
"post": {
"summary": "Leader returns the current election proclamation, if any.",
"operationId": "Leader",
@ -69,7 +69,7 @@
]
}
},
"/v3beta/election/observe": {
"/v3alpha/election/observe": {
"post": {
"summary": "Observe streams election proclamations in-order as made by the election's\nelected leaders.",
"operationId": "Observe",
@ -96,7 +96,7 @@
]
}
},
"/v3beta/election/proclaim": {
"/v3alpha/election/proclaim": {
"post": {
"summary": "Proclaim updates the leader's posted value with a new value.",
"operationId": "Proclaim",
@ -123,7 +123,7 @@
]
}
},
"/v3beta/election/resign": {
"/v3alpha/election/resign": {
"post": {
"summary": "Resign releases election leadership so other campaigners may acquire\nleadership on the election.",
"operationId": "Resign",

View File

@ -15,7 +15,7 @@
"application/json"
],
"paths": {
"/v3beta/lock/lock": {
"/v3alpha/lock/lock": {
"post": {
"summary": "Lock acquires a distributed shared lock on a given named lock.\nOn success, it will return a unique key that exists so long as the\nlock is held by the caller. This key can be used in conjunction with\ntransactions to safely ensure updates to etcd only occur while holding\nlock ownership. The lock is held until Unlock is called on the key or the\nlease associate with the owner expires.",
"operationId": "Lock",
@ -42,7 +42,7 @@
]
}
},
"/v3beta/lock/unlock": {
"/v3alpha/lock/unlock": {
"post": {
"summary": "Unlock takes a key returned by Lock and releases the hold on lock. The\nnext Lock caller waiting for the lock will then be woken up and given\nownership of the lock.",
"operationId": "Unlock",

View File

@ -1,7 +1,9 @@
# Experimental APIs and features
---
title: Experimental APIs and features
---
For the most part, the etcd project is stable, but we are still moving fast! We believe in the release-fast philosophy. We want to get early feedback on features still in development and stabilizing. Thus, there are, and will be more, experimental features and APIs. We plan to improve these features based on the early feedback from the community, or abandon them if there is little interest, in the next few releases. Please do not rely on any experimental features or APIs in a production environment.
## The current experimental API/features are:
- [KV ordering](https://godoc.org/github.com/coreos/etcd/clientv3/ordering) wrapper. When an etcd client switches endpoints, responses to serializable reads may go backward in time if the new endpoint is lagging behind the rest of the cluster. The ordering wrapper caches the current cluster revision from response headers. If a response revision is less than the cached revision, the client selects another endpoint and reissues the read. Enable in grpcproxy with `--experimental-serializable-ordering`.
(none currently)

View File

@ -1,4 +1,6 @@
# gRPC naming and discovery
---
title: gRPC naming and discovery
---
etcd provides a gRPC resolver to support an alternative name system that fetches endpoints from etcd for discovering gRPC services. The underlying mechanism is based on watching updates to keys prefixed with the service name.

View File

@ -1,9 +1,10 @@
# Interacting with etcd
---
title: Interacting with etcd
---
Users mostly interact with etcd by putting or getting the value of a key. This section describes how to do that by using etcdctl, a command line tool for interacting with etcd server. The concepts described here should apply to the gRPC APIs or client library APIs.
By default, etcdctl talks to the etcd server with the v2 API for backward compatibility. For etcdctl to speak to etcd using the v3 API, the API version must be set to version 3 via the `ETCDCTL_API` environment variable. However, note that any key created using the v2 API cannot be queried via the v3 API. A v3 API ```etcdctl get``` of a v2 key will exit with 0 and no key data; this is the expected behaviour.
By default, etcdctl talks to the etcd server with the v2 API for backward compatibility. For etcdctl to speak to etcd using the v3 API, the API version must be set to version 3 via the `ETCDCTL_API` environment variable.
```bash
export ETCDCTL_API=3
@ -216,7 +217,7 @@ $ etcdctl del foo foo9
Here is the command to delete key `zoo` with the deleted key value pair returned:
```bash
$ etcdctl del --prev-kv zoo
$ etcdctl del --prev-kv zoo
1 # one key is deleted
zoo # deleted key
val # the value of the deleted key
@ -225,7 +226,7 @@ val # the value of the deleted key
Here is the command to delete keys having prefix as `zoo`:
```bash
$ etcdctl del --prefix zoo
$ etcdctl del --prefix zoo
2 # two keys are deleted
```
@ -291,7 +292,7 @@ barz1
Here is the command to watch on multiple keys `foo` and `zoo`:
```bash
$ etcdctl watch -i
$ etcdctl watch -i
$ watch foo
$ watch zoo
# in another terminal: etcdctl put foo bar
@ -431,9 +432,9 @@ Here is the command to keep the same lease alive:
```bash
$ etcdctl lease keep-alive 32695410dcc0ca06
lease 32695410dcc0ca06 keepalived with TTL(10)
lease 32695410dcc0ca06 keepalived with TTL(10)
lease 32695410dcc0ca06 keepalived with TTL(10)
lease 32695410dcc0ca06 keepalived with TTL(100)
lease 32695410dcc0ca06 keepalived with TTL(100)
lease 32695410dcc0ca06 keepalived with TTL(100)
...
```
@ -473,3 +474,4 @@ lease 694d5765fc71500b granted with TTL(500s), remaining(132s), attached keys([z
# if the lease has expired or does not exist it will give the below response:
Error: etcdserver: requested lease not found
```

View File

@ -1,4 +1,6 @@
# System limits
---
title: System limits
---
## Request size limit

View File

@ -1,163 +1,92 @@
# Set up a local cluster
---
title: Set up a local cluster
---
For testing and development deployments, the quickest and easiest way is to configure a local cluster. For a production deployment, refer to the [clustering][clustering] section.
For testing and development deployments, the quickest and easiest way is to set up a local cluster. For a production deployment, refer to the [clustering][clustering] section.
## Local standalone cluster
### Starting a cluster
Run the following to deploy an etcd cluster as a standalone cluster:
Deploying an etcd cluster as a standalone cluster is straightforward. Start it with just one command:
```
$ ./etcd
...
```
If the `etcd` binary is not present in the current working directory, it might be located either at `$GOPATH/bin/etcd` or at `/usr/local/bin/etcd`. Run the command appropriately.
The started etcd member listens on `localhost:2379` for client requests.
The running etcd member listens on `localhost:2379` for client requests.
To interact with the started cluster by using etcdctl:
### Interacting with the cluster
```
# use API version 3
$ export ETCDCTL_API=3
Use `etcdctl` to interact with the running cluster:
$ ./etcdctl put foo bar
OK
1. Configure the environment to have `ETCDCTL_API=3` so `etcdctl` uses the etcd API version 3 instead of defaulting to version 2.
```
# use API version 3
$ export ETCDCTL_API=3
```
2. Store an example key-value pair in the cluster:
```
$ ./etcdctl put foo bar
OK
```
If OK is printed, storing the key-value pair was successful.
3. Retrieve the value of `foo`:
```
$ ./etcdctl get foo
bar
```
If `bar` is returned, interaction with the etcd cluster is working as expected.
$ ./etcdctl get foo
bar
```
## Local multi-member cluster
### Starting a cluster
A `Procfile` at the base of this git repo is provided to easily set up a local multi-member cluster. To start a multi-member cluster go to the root of an etcd source tree and run:
A `Procfile` at the base of the etcd git repository is provided to easily configure a local multi-member cluster. To start a multi-member cluster, navigate to the root of the etcd source tree and perform the following:
```
# install goreman program to control Profile-based applications.
$ go get github.com/mattn/goreman
$ goreman -f Procfile start
...
```
1. Install `goreman` to control Procfile-based applications:
The started members listen on `localhost:2379`, `localhost:22379`, and `localhost:32379` for client requests respectively.
```
$ go get github.com/mattn/goreman
```
To interact with the started cluster by using etcdctl:
2. Start a cluster with `goreman` using etcd's stock Procfile:
```
# use API version 3
$ export ETCDCTL_API=3
```
$ goreman -f Procfile start
```
$ etcdctl --write-out=table --endpoints=localhost:2379 member list
+------------------+---------+--------+------------------------+------------------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
+------------------+---------+--------+------------------------+------------------------+
| 8211f1d0f64f3269 | started | infra1 | http://127.0.0.1:2380 | http://127.0.0.1:2379 |
| 91bc3c398fb3c146 | started | infra2 | http://127.0.0.1:22380 | http://127.0.0.1:22379 |
| fd422379fda50e48 | started | infra3 | http://127.0.0.1:32380 | http://127.0.0.1:32379 |
+------------------+---------+--------+------------------------+------------------------+
The members start running. They listen on `localhost:2379`, `localhost:22379`, and `localhost:32379` respectively for client requests.
$ etcdctl put foo bar
OK
```
### Interacting with the cluster
To exercise etcd's fault tolerance, kill a member:
Use `etcdctl` to interact with the running cluster:
```
# kill etcd2
$ goreman run stop etcd2
1. Configure the environment to have `ETCDCTL_API=3` so `etcdctl` uses the etcd API version 3 instead of defaulting to version 2.
$ etcdctl put key hello
OK
```
# use API version 3
$ export ETCDCTL_API=3
```
$ etcdctl get key
hello
2. Print the list of members:
# try to get key from the killed member
$ etcdctl --endpoints=localhost:22379 get key
2016/04/18 23:07:35 grpc: Conn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:22379: getsockopt: connection refused"; Reconnecting to "localhost:22379"
Error: grpc: timed out trying to connect
```
$ etcdctl --write-out=table --endpoints=localhost:2379 member list
```
The list of etcd members is displayed as follows:
# restart the killed member
$ goreman run restart etcd2
```
+------------------+---------+--------+------------------------+------------------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
+------------------+---------+--------+------------------------+------------------------+
| 8211f1d0f64f3269 | started | infra1 | http://127.0.0.1:2380 | http://127.0.0.1:2379 |
| 91bc3c398fb3c146 | started | infra2 | http://127.0.0.1:22380 | http://127.0.0.1:22379 |
| fd422379fda50e48 | started | infra3 | http://127.0.0.1:32380 | http://127.0.0.1:32379 |
+------------------+---------+--------+------------------------+------------------------+
```
# get the key from restarted member
$ etcdctl --endpoints=localhost:22379 get key
hello
```
3. Store an example key-value pair in the cluster:
```
$ etcdctl put foo bar
OK
```
If OK is printed, the key-value pair was stored successfully.
### Testing fault tolerance
To exercise etcd's fault tolerance, kill a member and attempt to retrieve the key.
1. Identify the process name of the member to be stopped.
The `Procfile` lists the properties of the multi-member cluster. For example, consider the member with the process name `etcd2`.
2. Stop the member:
```
# kill etcd2
$ goreman run stop etcd2
```
3. Store a key:
```
$ etcdctl put key hello
OK
```
4. Retrieve the key that is stored in the previous step:
```
$ etcdctl get key
hello
```
5. Retrieve a key from the stopped member:
```
$ etcdctl --endpoints=localhost:22379 get key
```
The command should display an error caused by connection failure:
```
2017/06/18 23:07:35 grpc: Conn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:22379: getsockopt: connection refused"; Reconnecting to "localhost:22379"
Error: grpc: timed out trying to connect
```
6. Restart the stopped member:
```
$ goreman run restart etcd2
```
7. Get the key from the restarted member:
```
$ etcdctl --endpoints=localhost:22379 get key
hello
```
Restarting the member re-establishes the connection. `etcdctl` will now be able to retrieve the key successfully. To learn more about interacting with etcd, read [interacting with etcd section][interacting].
To learn more about interacting with etcd, read [interacting with etcd section][interacting].
[interacting]: ./interacting_v3.md
[clustering]: ../op-guide/clustering.md


@ -0,0 +1,3 @@
---
title: etcd dev internal
---


@ -1,4 +1,6 @@
# Discovery service protocol
---
title: Discovery service protocol
---
The discovery service protocol helps new etcd members discover all other members during the cluster bootstrap phase using a shared discovery URL.
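For illustration, here is a minimal sketch of the flow, assuming the public `discovery.etcd.io` service is used; the token in the URL below is a placeholder, not a real cluster token:

```bash
# ask the discovery service for a URL good for a 3-member cluster
$ curl https://discovery.etcd.io/new?size=3
https://discovery.etcd.io/<generated-token>

# start each member with the returned URL (repeat with infra1, infra2, ...)
$ etcd --name infra0 \
  --initial-advertise-peer-urls http://10.0.1.10:2380 \
  --discovery https://discovery.etcd.io/<generated-token>
```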


@ -1,4 +1,6 @@
# Logging conventions
---
title: Logging conventions
---
etcd uses the [capnslog][capnslog] library for logging application output categorized into *levels*. A log message's level is determined according to these conventions:


@ -1,4 +1,6 @@
# etcd release guide
---
title: etcd release guide
---
This guide describes how to release a new version of etcd.
@ -25,10 +27,8 @@ All releases version numbers follow the format of [semantic versioning 2.0.0](ht
### Patch version release
- To request a backport, developers submit cherrypick PRs targeting the release branch. The commits should not include merge commits. The commits should be restricted to bug fixes and security patches.
- The cherrypick PRs should target the appropriate release branch (`base:release-<major>-<minor>`). `hack/patch/cherrypick.sh` may be used to automatically generate cherrypick PRs; see the sketch after this list.
- The release patch manager reviews the cherrypick PRs. Please discuss carefully what is backported to the patch release. Each patch release should be strictly better than its predecessor.
- The release patch manager will cherry-pick these commits starting from the oldest one into stable branch.
- Discuss about commits that are backported to the patch release. The commits should not include merge commits.
- Cherry-pick these commits starting from the oldest one into stable branch.
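As a sketch only (the exact arguments and environment variables may differ between etcd versions; check the script's usage text), a cherrypick PR for a hypothetical PR number 12345 against `release-3.2` might be generated with:

```bash
$ UPSTREAM_REMOTE=upstream FORK_REMOTE=origin \
  ./hack/patch/cherrypick.sh upstream/release-3.2 12345
```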
## Write release note
@ -55,7 +55,7 @@ All releases version numbers follow the format of [semantic versioning 2.0.0](ht
Run the release script in the root directory:
```
TAG=gcr.io/etcd-development/etcd ./scripts/release.sh ${VERSION}
./scripts/release.sh ${VERSION}
```
It generates all release binaries and images under the directory `./release`.
@ -68,8 +68,8 @@ The following commands are used for public release sign:
```
cd release
for i in etcd-*{.zip,.tar.gz,.aci}; do gpg2 --default-key $SUBKEYID --armor --output ${i}.asc --detach-sign ${i}; done
for i in etcd-*{.zip,.tar.gz,.aci}; do gpg2 --verify ${i}.asc ${i}; done
for i in etcd-*{.zip,.tar.gz}; do gpg2 --default-key $SUBKEYID --armor --output ${i}.asc --detach-sign ${i}; done
for i in etcd-*{.zip,.tar.gz}; do gpg2 --verify ${i}.asc ${i}; done
# sign zipped source code files
wget https://github.com/coreos/etcd/archive/${VERSION}.zip
@ -92,41 +92,13 @@ The public key for GPG signing can be found at [CoreOS Application Signing Key](
- Select whether it is a pre-release.
- Publish the release!
## Publish docker image in gcr.io
- Push docker image:
```
gcloud docker -- login -u _json_key -p "$(cat /etc/gcp-key-etcd.json)" https://gcr.io
for TARGET_ARCH in "-arm64" "-ppc64le" ""; do
gcloud docker -- push gcr.io/etcd-development/etcd:${VERSION}${TARGET_ARCH}
done
```
- Add `latest` tag to the new image on [gcr.io](https://console.cloud.google.com/gcr/images/etcd-development/GLOBAL/etcd?project=etcd-development&authuser=1) if this is a stable release.
## Publish docker image in Quay.io
- Build docker images with quay.io:
```
for TARGET_ARCH in "amd64" "arm64" "ppc64le"; do
TAG=quay.io/coreos/etcd GOARCH=${TARGET_ARCH} \
BINARYDIR=release/etcd-${VERSION}-linux-${TARGET_ARCH} \
BUILDDIR=release \
./scripts/build-docker ${VERSION}
done
```
- Push docker image:
```
docker login quay.io
for TARGET_ARCH in "-arm64" "-ppc64le" ""; do
docker push quay.io/coreos/etcd:${VERSION}${TARGET_ARCH}
done
docker push quay.io/coreos/etcd:${VERSION}
```
- Add `latest` tag to the new image on [quay.io](https://quay.io/repository/coreos/etcd?tag=latest&tab=tags) if this is a stable release.


@ -1,8 +1,10 @@
# Download and build
---
title: Download and build
---
## System requirements
The etcd performance benchmarks run etcd on 8 vCPU, 16GB RAM, 50GB SSD GCE instances, but any relatively modern machine with low latency storage and a few gigabytes of memory should suffice for most use cases. Applications with large v2 data stores will require more memory than a large v3 data store since data is kept in anonymous memory instead of memory mapped from a file. For running etcd on a cloud provider, see the [Example hardware configuration][example-hardware-configurations] documentation.
The etcd performance benchmarks run etcd on 8 vCPU, 16GB RAM, 50GB SSD GCE instances, but any relatively modern machine with low latency storage and a few gigabytes of memory should suffice for most use cases. Applications with large v2 data stores will require more memory than a large v3 data store since data is kept in anonymous memory instead of memory mapped from a file. For running etcd on a cloud provider, we suggest at least a medium instance on AWS or a standard-1 instance on GCE.
## Download the pre-built binary
@ -10,7 +12,7 @@ The easiest way to get etcd is to use one of the pre-built release binaries whic
## Build the latest version
For those wanting to try the very latest version, build etcd from the `master` branch. [Go](https://golang.org/) version 1.9+ is required to build the latest version of etcd. To ensure etcd is built against well-tested libraries, etcd vendors its dependencies for official release binaries. However, etcd's vendoring is also optional to avoid potential import conflicts when embedding the etcd server or using the etcd client.
For those wanting to try the very latest version, build etcd from the `master` branch. [Go](https://golang.org/) version 1.8+ is required to build the latest version of etcd. To ensure etcd is built against well-tested libraries, etcd vendors its dependencies for official release binaries. However, etcd's vendoring is also optional to avoid potential import conflicts when embedding the etcd server or using the etcd client.
To build `etcd` from the `master` branch without a `GOPATH` using the official `build` script:
@ -18,6 +20,7 @@ To build `etcd` from the `master` branch without a `GOPATH` using the official `
$ git clone https://github.com/coreos/etcd.git
$ cd etcd
$ ./build
$ ./bin/etcd
```
To build a vendored `etcd` from the `master` branch via `go get`:
@ -27,6 +30,7 @@ To build a vendored `etcd` from the `master` branch via `go get`:
$ echo $GOPATH
/Users/example/go
$ go get github.com/coreos/etcd/cmd/etcd
$ $GOPATH/bin/etcd
```
To build `etcd` from the `master` branch without vendoring (may not build due to upstream conflicts):
@ -36,28 +40,20 @@ To build `etcd` from the `master` branch without vendoring (may not build due to
$ echo $GOPATH
/Users/example/go
$ go get github.com/coreos/etcd
$ $GOPATH/bin/etcd
```
## Test the installation
Check that the etcd binary is built correctly by starting etcd and setting a key.
### Starting etcd
If etcd is built without using GOPATH, run the following:
Start etcd:
```
$ ./bin/etcd
```
If etcd is built using GOPATH, run the following:
```
$ $GOPATH/bin/etcd
```
### Setting a key
Run the following:
Set a key:
```
$ ETCDCTL_API=3 ./bin/etcdctl put foo bar
@ -70,4 +66,4 @@ If OK is printed, then etcd is working!
[go]: https://golang.org/doc/install
[build-script]: ../build
[cmd-directory]: ../cmd
[example-hardware-configurations]: op-guide/hardware.md#example-hardware-configurations


@ -22,29 +22,30 @@ The easiest way to get started using etcd as a distributed key-value store is to
## Operating etcd clusters
Administrators who need a fault-tolerant etcd cluster for either development or production should begin with a [cluster on multiple machines][clustering].
Administrators who need to create reliable and scalable key-value stores for the developers they support should begin with a [cluster on multiple machines][clustering].
### Setting up etcd
- [Configuration flags][conf]
- [Multi-member cluster][clustering]
- [gRPC proxy][grpc_proxy]
- [L4 gateway][gateway]
### System configuration
- [Supported systems][supported_platforms]
- [Setting up etcd clusters][clustering]
- [Setting up etcd gateways][gateway]
- [Setting up etcd gRPC proxy][grpc_proxy]
- [Hardware recommendations][hardware]
- [Performance benchmarking][performance]
- [Tuning][tuning]
- [Configuration][conf]
- [Security][security]
- [Authentication][authentication]
- [Monitoring][monitoring]
- [Maintenance][maintenance]
- [Understand failures][failures]
- [Disaster recovery][recovery]
- [Performance][performance]
- [Versioning][versioning]
### Platform guides
- [Amazon Web Services][aws_platform]
- [Container Linux, systemd][container_linux_platform]
- [FreeBSD][freebsd_platform]
- [Supported systems][supported_platforms]
- [Docker container][container_docker]
- [Container Linux, systemd][container_linux_platform]
- [rkt container][container_rkt]
- [Amazon Web Services][aws_platform]
- [FreeBSD][freebsd_platform]
### Security
@ -53,7 +54,7 @@ Administrators who need a fault-tolerant etcd cluster for either development or
### Maintenance and troubleshooting
- [Frequently asked questions][faq]
- [Frequently asked questions][common questions]
- [Monitoring][monitoring]
- [Maintenance][maintenance]
- [Failure modes][failures]
@ -71,13 +72,17 @@ To learn more about the concepts and internals behind etcd, read the following p
- Internals
- [Auth subsystem][auth_design]
## Frequently Asked Questions (FAQ)
Answers to [common questions] about etcd.
[api_ref]: dev-guide/api_reference_v3.md
[api_concurrency_ref]: dev-guide/api_concurrency_reference_v3.md
[api_grpc_gateway]: dev-guide/api_grpc_gateway.md
[clustering]: op-guide/clustering.md
[conf]: op-guide/configuration.md
[system-limit]: dev-guide/limit.md
[faq]: faq.md
[common questions]: faq.md
[why]: learning/why.md
[data_model]: learning/data_model.md
[demo]: demo.md
@ -110,5 +115,4 @@ To learn more about the concepts and internals behind etcd, read the following p
[experimental]: dev-guide/experimental_apis.md
[authentication]: op-guide/authentication.md
[auth_design]: learning/auth_design.md
[tuning]: tuning.md
[upgrading]: upgrades/upgrading-etcd.md


@ -1,40 +1,38 @@
# Frequently Asked Questions (FAQ)
---
title: Frequently Asked Questions (FAQ)
---
## etcd, general
### etcd, general
### Do clients have to send requests to the etcd leader?
#### Do clients have to send requests to the etcd leader?
[Raft][raft] is leader-based; the leader handles all client requests which need cluster consensus. However, the client does not need to know which node is the leader. Any request that requires consensus sent to a follower is automatically forwarded to the leader. Requests that do not require consensus (e.g., serialized reads) can be processed by any cluster member.
## Configuration
### Configuration
### What is the difference between listen-<client,peer>-urls, advertise-client-urls or initial-advertise-peer-urls?
#### What is the difference between listen-<client,peer>-urls, advertise-client-urls or initial-advertise-peer-urls?
`listen-client-urls` and `listen-peer-urls` specify the local addresses etcd server binds to for accepting incoming connections. To listen on a port for all interfaces, specify `0.0.0.0` as the listen IP address.
`advertise-client-urls` and `initial-advertise-peer-urls` specify the addresses etcd clients or other etcd members should use to contact the etcd server. The advertise addresses must be reachable from the remote machines. Do not advertise addresses like `localhost` or `0.0.0.0` for a production setup since these addresses are unreachable from remote machines.
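For example, a member can bind all interfaces while advertising only its routable address (the addresses below are illustrative):

```bash
$ etcd --name infra0 \
  --listen-client-urls http://0.0.0.0:2379 \
  --advertise-client-urls http://10.0.1.10:2379 \
  --listen-peer-urls http://0.0.0.0:2380 \
  --initial-advertise-peer-urls http://10.0.1.10:2380
```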
### Why doesn't changing `--listen-peer-urls` or `--initial-advertise-peer-urls` update the advertised peer URLs in `etcdctl member list`?
### Deployment
A member's advertised peer URLs come from `--initial-advertise-peer-urls` on initial cluster boot. Changing the listen peer URLs or the initial advertise peers after booting the member won't affect the exported advertise peer URLs since changes must go through quorum to avoid membership configuration split brain. Use `etcdctl member update` to update a member's peer URLs.
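For example (the member ID below is illustrative; take the real ID from the `member list` output):

```bash
$ ETCDCTL_API=3 etcdctl member list
$ ETCDCTL_API=3 etcdctl member update 8211f1d0f64f3269 --peer-urls=http://10.0.1.10:2380
```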
## Deployment
### System requirements
#### System requirements
Since etcd writes data to disk, SSD is highly recommended. To prevent performance degradation or unintentionally overloading the key-value store, etcd enforces a 2GB default storage size quota, configurable up to 8GB. To avoid swapping or running out of memory, the machine should have at least enough RAM to cover the quota. At CoreOS, an etcd cluster is usually deployed on dedicated CoreOS Container Linux machines with dual-core processors, 2GB of RAM, and 80GB of SSD *at the very least*. **Note that performance is intrinsically workload dependent; please test before production deployment**. See [hardware][hardware-setup] for more recommendations.
The most stable production environment is Linux with the amd64 architecture; see [supported platform][supported-platform] for more.
### Why an odd number of cluster members?
#### Why an odd number of cluster members?
An etcd cluster needs a majority of nodes, a quorum, to agree on updates to the cluster state. For a cluster with n members, quorum is (n/2)+1. For any odd-sized cluster, adding one node will always increase the number of nodes necessary for quorum. Although adding a node to an odd-sized cluster appears better since there are more machines, the fault tolerance is worse since exactly the same number of nodes may fail without losing quorum but there are more nodes that can fail. If the cluster is in a state where it can't tolerate any more failures, adding a node before removing nodes is dangerous because if the new node fails to register with the cluster (e.g., the address is misconfigured), quorum will be permanently lost.
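As a concrete example, a 3-member cluster has quorum 2 and tolerates one failure, while a 4-member cluster has quorum 3 and still tolerates only one failure; the extra member adds risk without adding fault tolerance.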
### What is maximum cluster size?
#### What is maximum cluster size?
Theoretically, there is no hard limit. However, an etcd cluster probably should have no more than seven nodes. [Google Chubby lock service][chubby], similar to etcd and widely deployed within Google for many years, suggests running five nodes. A 5-member etcd cluster can tolerate two member failures, which is enough in most cases. Although larger clusters provide better fault tolerance, the write performance suffers because data must be replicated across more machines.
### What is failure tolerance?
#### What is failure tolerance?
An etcd cluster operates so long as a member quorum can be established. If quorum is lost through transient network failures (e.g., partitions), etcd automatically and safely resumes once the network recovers and restores quorum; Raft enforces cluster consistency. For power loss, etcd persists the Raft log to disk; etcd replays the log to the point of failure and resumes cluster participation. For permanent hardware failure, the node may be removed from the cluster through [runtime reconfiguration][runtime reconfiguration].
@ -54,19 +52,19 @@ It is recommended to have an odd number of members in a cluster. An odd-size clu
Adding a member to bring the size of the cluster up to an even number doesn't buy additional fault tolerance. Likewise, during a network partition, an odd number of members guarantees that there will always be a majority partition that can continue to operate and be the source of truth when the partition ends.
### Does etcd work in cross-region or cross data center deployments?
#### Does etcd work in cross-region or cross data center deployments?
Deploying etcd across regions improves etcd's fault tolerance since members are in separate failure domains. The cost is higher consensus request latency from crossing data center boundaries. Since etcd relies on a member quorum for consensus, the latency from crossing data centers will be somewhat pronounced because at least a majority of cluster members must respond to consensus requests. Additionally, cluster data must be replicated across all peers, so there will be bandwidth cost as well.
With longer latencies, the default etcd configuration may cause frequent elections or heartbeat timeouts. See [tuning] for adjusting timeouts for high latency deployments.
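As an illustration only (the right values must be derived from measured round-trip times between members, not copied verbatim), the time parameters can be raised for a high latency deployment:

```bash
# both values are in milliseconds
$ etcd --heartbeat-interval=300 --election-timeout=3000
```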
## Operation
### Operation
### How to back up an etcd cluster?
#### How to back up an etcd cluster?
etcdctl provides a `snapshot` command to create backups. See [backup][backup] for more details.
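A minimal sketch of taking a backup and checking it (endpoint flags omitted for brevity):

```bash
$ ETCDCTL_API=3 etcdctl snapshot save backup.db
$ ETCDCTL_API=3 etcdctl snapshot status backup.db
```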
### Should I add a member before removing an unhealthy member?
#### Should I add a member before removing an unhealthy member?
When replacing an etcd node, it's important to remove the member first and then add its replacement.
@ -78,21 +76,21 @@ Additionally, that new member is risky because it may turn out to be misconfigur
On the other hand, if the downed member is removed from cluster membership first, the number of members becomes 2 and the quorum remains at 2. Following that removal by adding a new member will also keep the quorum steady at 2. So, even if the new node can't be brought up, it's still possible to remove the new member through quorum on the remaining live members.
### Why won't etcd accept my membership changes?
#### Why won't etcd accept my membership changes?
etcd sets `strict-reconfig-check` in order to reject reconfiguration requests that would cause quorum loss. Abandoning quorum is really risky (especially when the cluster is already unhealthy). Although it may be tempting to disable quorum checking if there's quorum loss to add a new member, this could lead to full-fledged cluster inconsistency. For many applications, this will make the problem even worse ("disk geometry corruption" being a candidate for most terrifying).
### Why does etcd lose its leader from disk latency spikes?
#### Why does etcd lose its leader from disk latency spikes?
This is intentional; disk latency is part of leader liveness. Suppose the cluster leader takes a minute to fsync a raft log update to disk, but the etcd cluster has a one second election timeout. Even though the leader can process network messages within the election interval (e.g., send heartbeats), it's effectively unavailable because it can't commit any new proposals; it's waiting on the slow disk. If the cluster frequently loses its leader due to disk latencies, try [tuning][tuning] the disk settings or etcd time parameters.
### What does the etcd warning "request ignored (cluster ID mismatch)" mean?
#### What does the etcd warning "request ignored (cluster ID mismatch)" mean?
Every new etcd cluster generates a new cluster ID based on the initial cluster configuration and a user-provided unique `initial-cluster-token` value. By having unique cluster IDs, etcd is protected from cross-cluster interaction which could corrupt the cluster.
Usually this warning happens after tearing down an old cluster, then reusing some of the peer addresses for the new cluster. If any etcd process from the old cluster is still running it will try to contact the new cluster. The new cluster will recognize a cluster ID mismatch, then ignore the request and emit this warning. This warning is often cleared by ensuring peer addresses among distinct clusters are disjoint.
### What does "mvcc: database space exceeded" mean and how do I fix it?
#### What does "mvcc: database space exceeded" mean and how do I fix it?
The [multi-version concurrency control][api-mvcc] data model in etcd keeps an exact history of the keyspace. Without periodically compacting this history (e.g., by setting `--auto-compaction`), etcd will eventually exhaust its storage space. If etcd runs low on storage space, it raises a space quota alarm to protect the cluster from further writes. So long as the alarm is raised, etcd responds to write requests with the error `mvcc: database space exceeded`.
@ -102,13 +100,13 @@ To recover from the low space quota alarm:
2. [Defragment][maintenance-defragment] every etcd endpoint.
3. [Disarm][maintenance-disarm] the alarm.
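Sketched with `etcdctl` (endpoint flags omitted for brevity):

```bash
# find the current revision
$ rev=$(ETCDCTL_API=3 etcdctl endpoint status --write-out="json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9].*')
# compact away all revisions older than the current one
$ ETCDCTL_API=3 etcdctl compact $rev
# defragment to reclaim the freed space
$ ETCDCTL_API=3 etcdctl defrag
# disarm the space quota alarm
$ ETCDCTL_API=3 etcdctl alarm disarm
```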
## Performance
### Performance
### How should I benchmark etcd?
#### How should I benchmark etcd?
Try the [benchmark] tool. Current [benchmark results][benchmark-result] are available for comparison.
### What does the etcd warning "apply entries took too long" mean?
#### What does the etcd warning "apply entries took too long" mean?
After a majority of etcd members agree to commit a request, each etcd server applies the request to its data store and persists the result to disk. Even with a slow mechanical disk or a virtualized network disk, such as Amazon's EBS or Google's PD, applying a request should normally take fewer than 50 milliseconds. If the average apply duration exceeds 100 milliseconds, etcd will warn that entries are taking too long to apply.
@ -120,7 +118,7 @@ Expensive user requests which access too many keys (e.g., fetching the entire ke
If none of the above suggestions clear the warnings, please [open an issue][new_issue] with detailed logging, monitoring, metrics and optionally workload information.
### What does the etcd warning "failed to send out heartbeat on time" mean?
#### What does the etcd warning "failed to send out heartbeat on time" mean?
etcd uses a leader-based consensus protocol for consistent data replication and log execution. Cluster members elect a single leader; all other members become followers. The elected leader must periodically send heartbeats to its followers to maintain its leadership. Followers infer leader failure if no heartbeats are received within an election interval and trigger an election. If a leader doesn't send its heartbeats in time but is still running, the election is spurious and likely caused by insufficient resources. To catch these soft failures, if the leader skips two heartbeat intervals, etcd will warn it failed to send a heartbeat on time.
@ -132,7 +130,7 @@ A slow network can also cause this issue. If network metrics among the etcd mach
If none of the above suggestions clear the warnings, please [open an issue][new_issue] with detailed logging, monitoring, metrics and optionally workload information.
### What does the etcd warning "snapshotting is taking more than x seconds to finish ..." mean?
#### What does the etcd warning "snapshotting is taking more than x seconds to finish ..." mean?
etcd sends a snapshot of its complete key-value store to refresh slow followers and for [backups][backup]. Slow snapshot transfer times increase MTTR; if the cluster is ingesting data with high throughput, slow followers may livelock by needing a new snapshot before finishing receiving a snapshot. To catch slow snapshot performance, etcd warns when sending a snapshot takes more than thirty seconds and exceeds the expected transfer time for a 1Gbps connection.


@ -1,4 +1,6 @@
# Libraries and tools
---
title: Libraries and tools
---
**Tools**
@ -40,18 +42,16 @@
**Python libraries**
- [kragniz/python-etcd3](https://github.com/kragniz/python-etcd3) - Client for v3
- [kragniz/python-etcd3](https://github.com/kragniz/python-etcd3) - Work in progress client for v3
- [jplana/python-etcd](https://github.com/jplana/python-etcd) - Supports v2
- [russellhaering/txetcd](https://github.com/russellhaering/txetcd) - a Twisted Python library
- [cholcombe973/autodock](https://github.com/cholcombe973/autodock) - A docker deployment automation tool
- [lisael/aioetcd](https://github.com/lisael/aioetcd) - (Python 3.4+) Asyncio coroutines client (Supports v2)
- [txaio-etcd](https://github.com/crossbario/txaio-etcd) - Asynchronous etcd v3-only client library for Twisted (today) and asyncio (future)
- [dims/etcd3-gateway](https://github.com/dims/etcd3-gateway) - etcd v3 API library using the HTTP grpc gateway
- [aioetcd3](https://github.com/gaopeiliang/aioetcd3) - (Python 3.6+) etcd v3 API for asyncio
**Node libraries**
- [mixer/etcd3](https://github.com/mixer/etcd3) - Supports v3
- [stianeikeland/node-etcd](https://github.com/stianeikeland/node-etcd) - Supports v2 (w Coffeescript)
- [lavagetto/nodejs-etcd](https://github.com/lavagetto/nodejs-etcd) - Supports v2
- [deedubs/node-etcd-config](https://github.com/deedubs/node-etcd-config) - Supports v2
@ -133,7 +133,7 @@
- [cloudfoundry/cf-release](https://github.com/cloudfoundry/cf-release/tree/master/jobs/etcd)
**Projects using etcd**
- [etcd Raft users](../raft/README.md#notable-users) - projects using etcd's raft library implementation.
- [apache/celix](https://github.com/apache/celix) - an implementation of the OSGi specification adapted to C and C++
- [binocarlos/yoda](https://github.com/binocarlos/yoda) - etcd + ZeroMQ
- [blox/blox](https://github.com/blox/blox) - a collection of open source projects for container management and orchestration with AWS ECS
@ -158,6 +158,3 @@
- [ryandoyle/nss-etcd](https://github.com/ryandoyle/nss-etcd) - A GNU libc NSS module for resolving names from etcd.
- [Gru](https://github.com/dnaeon/gru) - Orchestration made easy with Go
- [Vitess](http://vitess.io/) - Vitess is a database clustering system for horizontal scaling of MySQL.
- [lclarkmichalek/etcdhcp](https://github.com/lclarkmichalek/etcdhcp) - DHCP server that uses etcd for persistence and coordination.
- [openstack/networking-vpp](https://github.com/openstack/networking-vpp) - A networking driver that programs the [FD.io VPP dataplane](https://wiki.fd.io/view/VPP) to provide [OpenStack](https://www.openstack.org/) cloud virtual networking
- [openstack](https://github.com/openstack/governance/blob/master/reference/base-services.rst) - OpenStack services can rely on etcd as a base service.


@ -1,4 +1,6 @@
# etcd3 API
---
title: etcd3 API
---
This document is meant to give an overview of the etcd3 API's central design. It is by no means all-encompassing, but intended to focus on the basic ideas needed to understand etcd without the distraction of less common API calls. All etcd3 APIs are defined in [gRPC services][grpc-service], which categorize remote procedure calls (RPCs) understood by the etcd server. A full listing of all etcd RPCs is documented in markdown in the [gRPC API listing][grpc-api].
@ -47,7 +49,7 @@ message ResponseHeader {
An application may read the Cluster_ID (Member_ID) field to ensure it is communicating with the intended cluster (member).
Applications can use the `Revision` to know the latest revision of the key-value store. This is especially useful when applications specify a historical revision to make a time travel query and wish to know the latest revision at the time of the request.
Applications can use the `Revision` to know the latest revision of the key-value store. This is especially useful when applications specify a historical revision to make time `travel query` and wishes to know the latest revision at the time of the request.
Applications can use `Raft_Term` to detect when the cluster completes a new leader election.
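One way to see these fields is the JSON output of `etcdctl`, where every response carries the header:

```bash
$ ETCDCTL_API=3 etcdctl get foo --write-out=json
# the "header" object in the output contains cluster_id, member_id,
# revision, and raft_term
```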


@ -1,4 +1,6 @@
# KV API guarantees
---
title: KV API guarantees
---
etcd is a consistent and durable key value store with [mini-transaction][txn] support. The key value store is exposed through the KV APIs. etcd tries to ensure the strongest consistency and durability guarantees for a distributed system. This specification enumerates the KV API guarantees made by etcd.


@ -1,4 +1,6 @@
# etcd v3 authentication design
---
title: etcd v3 authentication design
---
## Why not reuse the v2 auth system?


@ -1,20 +1,22 @@
# Data model
---
title: Data model
---
etcd is designed to reliably store infrequently updated data and provide reliable watch queries. etcd exposes previous versions of key-value pairs to support inexpensive snapshots and watch history events (“time travel queries”). A persistent, multi-version, concurrency-control data model is a good fit for these use cases.
etcd stores data in a multiversion [persistent][persistent-ds] key-value store. The persistent key-value store preserves the previous version of a key-value pair when its value is superseded with new data. The key-value store is effectively immutable; its operations do not update the structure in-place, but instead always generate a new updated structure. All past versions of keys are still accessible and watchable after modification. To prevent the data store from growing indefinitely over time and from maintaining old versions, the store may be compacted to shed the oldest versions of superseded data.
etcd stores data in a multiversion [persistent][persistent-ds] key-value store. The persistent key-value store preserves the previous version of a key-value pair when its value is superseded with new data. The key-value store is effectively immutable; its operations do not update the structure in-place, but instead always generates a new updated structure. All past versions of keys are still accessible and watchable after modification. To prevent the data store from growing indefinitely over time from maintaining old versions, the store may be compacted to shed the oldest versions of superseded data.
### Logical view
The store's logical view is a flat binary key space. The key space has a lexically sorted index on byte string keys so range queries are inexpensive.
The key space maintains multiple revisions. Each atomic mutative operation (e.g., a transaction operation may contain multiple operations) creates a new revision on the key space. All data held by previous revisions remains unchanged. Old versions of key can still be accessed through previous revisions. Likewise, revisions are indexed as well; ranging over revisions with watchers is efficient. If the store is compacted to save space, revisions before the compact revision will be removed.
The key space maintains multiple revisions. Each atomic mutative operation (e.g., a transaction operation may contain multiple operations) creates a new revision on the key space. All data held by previous revisions remains unchanged. Old versions of key can still be accessed through previous revisions. Likewise, revisions are indexed as well; ranging over revisions with watchers is efficient. If the store is compacted to recover space, revisions before the compact revision will be removed.
A key's lifetime spans a generation, denoted by its version. Each key may have one or multiple generations. Creating a key increments the version of that key, starting at 1 if the key never existed. Deleting a key generates a key tombstone, concluding the key's current generation by resetting its version. Each modification of a key increments its version. Once a compaction happens, any version ended before the given revision will be removed and values set before the compaction revision except the latest one will be removed.
A key's lifetime spans a generation. Each key may have one or multiple generations. Creating a key increments the generation of that key, starting at 1 if the key never existed. Deleting a key generates a key tombstone, concluding the key's current generation. Each modification of a key creates a new version of the key. Once a compaction happens, any generation ended before the given revision will be removed and values set before the compaction revision except the latest one will be removed.
### Physical view
etcd stores the physical data as key-value pairs in a persistent [b+tree][b+tree]. Each revision of the store's state only contains the delta from its previous revision to be efficient. A single revision may correspond to multiple keys in the tree.
The key of a key-value pair is a 3-tuple (major, sub, type). Major is the store revision holding the key. Sub differentiates among keys within the same revision. Type is an optional suffix for special values (e.g., `t` if the value contains a tombstone). The value of the key-value pair contains the modification from the previous revision, thus one delta from the previous revision. The b+tree is ordered by key in lexical byte-order. Ranged lookups over revision deltas are fast; this enables quickly finding modifications from one specific revision to another. Compaction removes out-of-date key-value pairs.
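A short sketch of how revisions behave in practice (on a fresh store, the first write typically lands at revision 2):

```bash
$ ETCDCTL_API=3 etcdctl put foo bar
OK
$ ETCDCTL_API=3 etcdctl put foo baz
OK
# a time travel query: read the superseded value back at the older revision
$ ETCDCTL_API=3 etcdctl get foo --rev=2
foo
bar
```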


@ -1,4 +1,6 @@
# Glossary
---
title: Glossary
---
This document defines the various terms used in etcd documentation, command line and source code.


@ -1,4 +1,6 @@
# etcd versus other key-value stores
---
title: etcd versus other key-value stores
---
The name "etcd" originated from two ideas, the unix "/etc" folder and "d"istributed systems. The "/etc" folder is a place to store configuration data for a single system whereas etcd stores configuration information for large scale distributed systems. Hence, a "d"istributed "/etc" is "etcd".
@ -47,7 +49,7 @@ When considering features, support, and stability, new applications planning to
### Consul
Consul bills itself as an end-to-end service discovery framework. To wit, it includes services such as health checking, failure detection, and DNS. Incidentally, Consul also exposes a key value store with mediocre performance and an intricate API. As it stands in Consul 0.7, the storage system does not scales well; systems requiring millions of keys will suffer from high latencies and memory pressure. The key value API is missing, most notably, multi-version keys, conditional transactions, and reliable streaming watches.
Consul is an end-to-end service discovery framework. It provides built-in health checking, failure detection, and DNS services. In addition, Consul exposes a key value store with RESTful HTTP APIs. [As it stands in Consul 1.0][dbtester-comparison-results], the storage system does not scale as well as other systems like etcd or Zookeeper in key-value operations; systems requiring millions of keys will suffer from high latencies and memory pressure. The key value API is missing, most notably, multi-version keys, conditional transactions, and reliable streaming watches.
etcd and Consul solve different problems. If looking for a distributed consistent key value store, etcd is a better choice over Consul. If looking for end-to-end cluster service discovery, etcd will not have enough features; choose Kubernetes, Consul, or SmartStack.
@ -107,9 +109,10 @@ For distributed coordination, choosing etcd can help prevent operational headach
[etcd-rbac]: ../op-guide/authentication.md#working-with-roles
[zk-acl]: https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#sc_ZooKeeperAccessControl
[consul-acl]: https://www.consul.io/docs/internals/acl.html
[cockroach-grant]: https://www.cockroachlabs.com/docs/stable/grant.html
[cockroach-grant]: https://www.cockroachlabs.com/docs/grant.html
[spanner-roles]: https://cloud.google.com/spanner/docs/iam#roles
[zk-bindings]: https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#ch_bindings
[container-linux]: https://coreos.com/why
[locksmith]: https://github.com/coreos/locksmith
[kubernetes]: http://kubernetes.io/docs/whatisk8s
[dbtester-comparison-results]: https://github.com/coreos/dbtester/tree/master/test-results/2018Q1-02-etcd-zookeeper-consul


@ -1,4 +1,6 @@
# Metrics
---
title: Metrics
---
etcd uses [Prometheus][prometheus] for metrics reporting. The metrics can be used for real-time monitoring and debugging. etcd does not persist its metrics; if a member restarts, the metrics will be reset.


@ -0,0 +1,3 @@
---
title: etcd operations guide
---


@ -1,8 +1,10 @@
# Role-based access control
---
title: Authentication Guide
---
## Overview
Authentication was added in etcd 2.1. The etcd v3 API slightly modified the authentication feature's API and user interface to better fit the new data model. This guide is intended to help users set up basic authentication and role-based access control in etcd v3.
Authentication was added in etcd 2.1. The etcd v3 API slightly modified the authentication feature's API and user interface to better fit the new data model. This guide is intended to help users set up basic authentication in etcd v3.
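As a minimal sketch of the flow described in the rest of this guide, authentication is switched on by creating the root user and then enabling auth:

```bash
$ etcdctl user add root
# (prompts for a password)
$ etcdctl auth enable
```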
## Special users and roles
@ -161,4 +163,4 @@ Otherwise, all `etcdctl` commands remain the same. Users and roles can still be
## Using TLS Common Name
If an etcd server is launched with the option `--client-cert-auth=true`, the field of Common Name (CN) in the client's TLS cert will be used as an etcd user. In this case, the common name authenticates the user and the client does not need a password. Note that if both of 1. `--client-cert-auth=true` is passed and CN is provided by the client, and 2. username and password are provided by the client, the username and password based authentication is prioritized.
If an etcd server is launched with the option `--client-cert-auth=true`, the field of Common Name (CN) in the client's TLS cert will be used as an etcd user. In this case, the common name authenticates the user and the client does not need a password.


@ -1,4 +1,6 @@
# Clustering Guide
---
title: Clustering Guide
---
## Overview
@ -281,7 +283,7 @@ ETCD_DISCOVERY=https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573d
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
**Each member must have a different name flag specified or else discovery will fail due to duplicated names. `Hostname` or `machine-id` can be a good choice.**
**Each member must have a different name flag specified or else discovery will fail due to duplicated names. `Hostname` or `machine-id` can be a good choice.**
Now we start etcd with those relevant flags for each member:
@ -456,8 +458,6 @@ $ etcd --name infra2 \
--listen-peer-urls http://10.0.1.12:2380
```
Since v3.1.0 (except v3.2.9), when `etcd --discovery-srv=example.com` is configured with TLS, server will only authenticate peers/clients when the provided certs have root domain `example.com` as an entry in Subject Alternative Name (SAN) field. See [Notes for DNS SRV][security-guide-dns-srv].
### Gateway
etcd gateway is a simple TCP proxy that forwards network data to the etcd cluster. Please read [gateway guide][gateway] for more information.
@ -477,6 +477,5 @@ To setup an etcd cluster with proxies of v2 API, please read the the [clustering
[proxy]: https://github.com/coreos/etcd/blob/release-2.3/Documentation/proxy.md
[clustering_etcd2]: https://github.com/coreos/etcd/blob/release-2.3/Documentation/clustering.md
[security-guide]: security.md
[security-guide-dns-srv]: security.md#notes-for-dns-srv
[tls-setup]: ../../hack/tls-setup
[gateway]: gateway.md


@ -1,4 +1,6 @@
# Configuration flags
---
title: Configuration flags
---
etcd is configurable through command-line flags and environment variables. Options set on the command line take precedence over those from the environment.
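For example, the following two invocations are equivalent, and the flag form wins if both are given:

```bash
$ etcd --name infra0
$ ETCD_NAME=infra0 etcd
```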
@ -69,39 +71,9 @@ To start etcd automatically using custom settings at startup in Linux, using a [
### --cors
+ Comma-separated white list of origins for CORS (cross-origin resource sharing).
+ default: ""
+ default: none
+ env variable: ETCD_CORS
### --quota-backend-bytes
+ Raise alarms when backend size exceeds the given quota (0 defaults to low space quota).
+ default: 0
+ env variable: ETCD_QUOTA_BACKEND_BYTES
### --max-txn-ops
+ Maximum number of operations permitted in a transaction.
+ default: 128
+ env variable: ETCD_MAX_TXN_OPS
### --max-request-bytes
+ Maximum client request size in bytes the server will accept.
+ default: 1572864
+ env variable: ETCD_MAX_REQUEST_BYTES
### --grpc-keepalive-min-time
+ Minimum duration interval that a client should wait before pinging server.
+ default: 5s
+ env variable: ETCD_GRPC_KEEPALIVE_MIN_TIME
### --grpc-keepalive-interval
+ Frequency duration of server-to-client ping to check if a connection is alive (0 to disable).
+ default: 2h
+ env variable: ETCD_GRPC_KEEPALIVE_INTERVAL
### --grpc-keepalive-timeout
+ Additional duration of wait before closing a non-responsive connection (0 to disable).
+ default: 20s
+ env variable: ETCD_GRPC_KEEPALIVE_TIMEOUT
## Clustering flags
`--initial` prefix flags are used in bootstrapping ([static bootstrap][build-cluster], [discovery-service bootstrap][discovery] or [runtime reconfiguration][reconfig]) a new member, and ignored when restarting an existing member.
@ -142,12 +114,12 @@ To start etcd automatically using custom settings at startup in Linux, using a [
### --discovery
+ Discovery URL used to bootstrap the cluster.
+ default: ""
+ default: none
+ env variable: ETCD_DISCOVERY
### --discovery-srv
+ DNS srv domain used to bootstrap the cluster.
+ default: ""
+ default: none
+ env variable: ETCD_DISCOVERY_SRV
### --discovery-fallback
@ -157,7 +129,7 @@ To start etcd automatically using custom settings at startup in Linux, using a [
### --discovery-proxy
+ HTTP proxy to use for traffic to discovery service.
+ default: ""
+ default: none
+ env variable: ETCD_DISCOVERY_PROXY
### --strict-reconfig-check
@ -170,10 +142,6 @@ To start etcd automatically using custom settings at startup in Linux, using a [
+ default: 0
+ env variable: ETCD_AUTO_COMPACTION_RETENTION
### --auto-compaction-mode
+ Interpret 'auto-compaction-retention' one of: periodic|revision. 'periodic' for duration based retention, defaulting to hours if no time unit is provided (e.g. '5m'). 'revision' for revision number based retention.
+ default: periodic
+ env variable: ETCD_AUTO_COMPACTION_MODE
### --enable-v2
+ Accept etcd V2 client requests
@ -219,22 +187,22 @@ To start etcd automatically using custom settings at startup in Linux, using a [
The security flags help to [build a secure etcd cluster][security].
### --ca-file
**DEPRECATED**
+ Path to the client server TLS CA file. `--ca-file ca.crt` could be replaced by `--trusted-ca-file ca.crt --client-cert-auth` and etcd will perform the same.
+ default: ""
+ default: none
+ env variable: ETCD_CA_FILE
### --cert-file
+ Path to the client server TLS cert file.
+ default: ""
+ default: none
+ env variable: ETCD_CERT_FILE
### --key-file
+ Path to the client server TLS key file.
+ default: ""
+ default: none
+ env variable: ETCD_KEY_FILE
### --client-cert-auth
@ -242,14 +210,9 @@ The security flags help to [build a secure etcd cluster][security].
+ default: false
+ env variable: ETCD_CLIENT_CERT_AUTH
### --client-crl-file
+ Path to the client certificate revocation list file.
+ default: ""
+ env variable: ETCD_CLIENT_CRL_FILE
### --trusted-ca-file
+ Path to the client server TLS trusted CA cert file.
+ default: ""
+ Path to the client server TLS trusted CA cert file.
+ default: none
+ env variable: ETCD_TRUSTED_CA_FILE
### --auto-tls
@ -257,22 +220,22 @@ The security flags help to [build a secure etcd cluster][security].
+ default: false
+ env variable: ETCD_AUTO_TLS
### --peer-ca-file
**DEPRECATED**
+ Path to the peer server TLS CA file. `--peer-ca-file ca.crt` could be replaced by `--peer-trusted-ca-file ca.crt --peer-client-cert-auth` and etcd will perform the same.
+ default: ""
+ default: none
+ env variable: ETCD_PEER_CA_FILE
### --peer-cert-file
+ Path to the peer server TLS cert file.
+ default: ""
+ default: none
+ env variable: ETCD_PEER_CERT_FILE
### --peer-key-file
+ Path to the peer server TLS key file.
+ default: ""
+ default: none
+ env variable: ETCD_PEER_KEY_FILE
### --peer-client-cert-auth
@ -280,14 +243,9 @@ The security flags help to [build a secure etcd cluster][security].
+ default: false
+ env variable: ETCD_PEER_CLIENT_CERT_AUTH
### --peer-crl-file
+ Path to the peer certificate revocation list file.
+ default: ""
+ env variable: ETCD_PEER_CRL_FILE
### --peer-trusted-ca-file
+ Path to the peer server TLS trusted CA file.
+ default: ""
+ default: none
+ env variable: ETCD_PEER_TRUSTED_CA_FILE
### --peer-auto-tls
@ -295,10 +253,10 @@ The security flags help to [build a secure etcd cluster][security].
+ default: false
+ env variable: ETCD_PEER_AUTO_TLS
### --peer-cert-allowed-cn
+ Allowed CommonName for inter peer authentication.
+ default: none
+ env variable: ETCD_PEER_CERT_ALLOWED_CN
### --experimental-peer-skip-client-san-verification
+ Skip verification of SAN field in client certificate for peer connections.
+ default: false
+ env variable: ETCD_EXPERIMENTAL_PEER_SKIP_CLIENT_SAN_VERIFICATION
## Logging flags
@ -309,9 +267,10 @@ The security flags help to [build a secure etcd cluster][security].
### --log-package-levels
+ Set individual etcd subpackages to specific log levels. An example being `etcdserver=WARNING,security=DEBUG`
+ default: "" (INFO for all packages)
+ default: none (INFO for all packages)
+ env variable: ETCD_LOG_PACKAGE_LEVELS
## Unsafe flags
Please be CAUTIOUS when using unsafe flags because they will break the guarantees given by the consensus protocol.
@ -331,7 +290,7 @@ Follow the instructions when using these flags.
### --config-file
+ Load server configuration from a file.
+ default: ""
+ default: none
## Profiling flags
@ -343,10 +302,6 @@ Follow the instructions when using these flags.
+ Set level of detail for exported metrics, specify 'extensive' to include histogram metrics.
+ default: basic
### --listen-metrics-urls
+ List of URLs to listen on for metrics.
+ default: ""
## Auth flags
### --auth-token
@ -354,12 +309,6 @@ Follow the instructions when using these flags.
+ Example option of JWT: '--auth-token jwt,pub-key=app.rsa.pub,priv-key=app.rsa,sign-method=RS512'
+ default: "simple"
## Experimental flags
### --experimental-corrupt-check-time
+ Duration of time between cluster corruption check passes
+ default: 0s
[build-cluster]: clustering.md#static
[reconfig]: runtime-configuration.md
[discovery]: clustering.md#discovery


@ -1,4 +1,6 @@
# Run etcd clusters inside containers
---
title: Run etcd clusters inside containers
---
The following guide shows how to run etcd with rkt and Docker using the [static bootstrap process](clustering.md#static).
@ -76,29 +78,18 @@ Use the host IP address when configuring etcd:
export NODE1=192.168.1.21
```
Configure a Docker volume to store etcd data:
```
docker volume create --name etcd-data
export DATA_DIR="etcd-data"
```
Run the latest version of etcd:
```
REGISTRY=quay.io/coreos/etcd
# available from v3.2.5
REGISTRY=gcr.io/etcd-development/etcd
docker run \
-p 2379:2379 \
-p 2380:2380 \
--volume=${DATA_DIR}:/etcd-data \
--name etcd ${REGISTRY}:latest \
--name etcd quay.io/coreos/etcd:latest \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name node1 \
 --initial-advertise-peer-urls http://${NODE1}:2380 --listen-peer-urls http://0.0.0.0:2380 \
 --advertise-client-urls http://${NODE1}:2379 --listen-client-urls http://0.0.0.0:2379 \
--initial-advertise-peer-urls http://${NODE1}:2380 --listen-peer-urls http://${NODE1}:2380 \
--advertise-client-urls http://${NODE1}:2379 --listen-client-urls http://${NODE1}:2379 \
--initial-cluster node1=http://${NODE1}:2380
```
@ -111,10 +102,6 @@ etcdctl --endpoints=http://${NODE1}:2379 member list
### Running a 3 node etcd cluster
```
REGISTRY=quay.io/coreos/etcd
# available from v3.2.5
REGISTRY=gcr.io/etcd-development/etcd
# For each machine
ETCD_VERSION=latest
TOKEN=my-etcd-token
@ -135,11 +122,11 @@ docker run \
-p 2379:2379 \
-p 2380:2380 \
--volume=${DATA_DIR}:/etcd-data \
--name etcd ${REGISTRY}:${ETCD_VERSION} \
--name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name ${THIS_NAME} \
 --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://0.0.0.0:2380 \
 --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://0.0.0.0:2379 \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
@ -150,11 +137,11 @@ docker run \
-p 2379:2379 \
-p 2380:2380 \
--volume=${DATA_DIR}:/etcd-data \
--name etcd ${REGISTRY}:${ETCD_VERSION} \
--name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name ${THIS_NAME} \
 --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://0.0.0.0:2380 \
 --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://0.0.0.0:2379 \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
@ -165,11 +152,11 @@ docker run \
-p 2379:2379 \
-p 2380:2380 \
--volume=${DATA_DIR}:/etcd-data \
--name etcd ${REGISTRY}:${ETCD_VERSION} \
--name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name ${THIS_NAME} \
 --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://0.0.0.0:2380 \
 --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://0.0.0.0:2379 \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
```
@ -189,30 +176,21 @@ To provision a 3 node etcd cluster on bare-metal, the examples in the [baremetal
The etcd release container does not include default root certificates. To use HTTPS with certificates trusted by a root authority (e.g., for discovery), mount a certificate directory into the etcd container:
```
REGISTRY=quay.io/coreos/etcd
# available from v3.2.5
REGISTRY=docker://gcr.io/etcd-development/etcd
rkt run \
--insecure-options=image \
--volume etcd-ssl-certs-bundle,kind=host,source=/etc/ssl/certs/ca-certificates.crt \
--mount volume=etcd-ssl-certs-bundle,target=/etc/ssl/certs/ca-certificates.crt \
${REGISTRY}:latest -- --name my-name \
quay.io/coreos/etcd:latest -- --name my-name \
--initial-advertise-peer-urls http://localhost:2380 --listen-peer-urls http://localhost:2380 \
--advertise-client-urls http://localhost:2379 --listen-client-urls http://localhost:2379 \
--discovery https://discovery.etcd.io/c11fbcdc16972e45253491a24fcf45e1
```
```
REGISTRY=quay.io/coreos/etcd
# available from v3.2.5
REGISTRY=gcr.io/etcd-development/etcd
docker run \
-p 2379:2379 \
-p 2380:2380 \
--volume=/etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt \
${REGISTRY}:latest \
quay.io/coreos/etcd:latest \
/usr/local/bin/etcd --name my-name \
--initial-advertise-peer-urls http://localhost:2380 --listen-peer-urls http://localhost:2380 \
--advertise-client-urls http://localhost:2379 --listen-client-urls http://localhost:2379 \


@ -69,14 +69,14 @@ ANNOTATIONS {
# alert if the 99th percentile of gRPC method calls take more than 150ms
ALERT GRPCRequestsSlow
IF histogram_quantile(0.99, sum(rate(grpc_server_handling_seconds_bucket{job="etcd",grpc_type="unary"}[5m])) by (grpc_service, grpc_method, le)) > 0.15
IF histogram_quantile(0.99, rate(etcd_grpc_unary_requests_duration_seconds_bucket[5m])) > 0.15
FOR 10m
LABELS {
severity = "critical"
}
ANNOTATIONS {
summary = "slow gRPC requests",
description = "on etcd instance {{ $labels.instance }} gRPC requests to {{ $labels.grpc_method }} are slow",
description = "on etcd instance {{ $labels.instance }} gRPC requests to {{ $label.grpc_method }} are slow",
}
# HTTP requests alerts
@ -84,8 +84,8 @@ ANNOTATIONS {
# alert if more than 1% of requests to an HTTP endpoint have failed within the last 5 minutes
ALERT HighNumberOfFailedHTTPRequests
IF sum(rate(grpc_server_handled_total{grpc_code!="OK",job="etcd"}[5m])) BY (grpc_service, grpc_method)
/ sum(rate(grpc_server_handled_total{job="etcd"}[5m])) BY (grpc_service, grpc_method) > 0.01
IF sum by(method) (rate(etcd_http_failed_total{job="etcd"}[5m]))
/ sum by(method) (rate(etcd_http_received_total{job="etcd"}[5m])) > 0.01
FOR 10m
LABELS {
severity = "warning"
@ -97,8 +97,8 @@ ANNOTATIONS {
# alert if more than 5% of requests to an HTTP endpoint have failed within the last 5 minutes
ALERT HighNumberOfFailedHTTPRequests
IF sum(rate(grpc_server_handled_total{grpc_code!="OK",job="etcd"}[5m])) BY (grpc_service, grpc_method)
/ sum(rate(grpc_server_handled_total{job="etcd"}[5m])) BY (grpc_service, grpc_method) > 0.05
IF sum by(method) (rate(etcd_http_failed_total{job="etcd"}[5m]))
/ sum by(method) (rate(etcd_http_received_total{job="etcd"}[5m])) > 0.05
FOR 5m
LABELS {
severity = "critical"
@ -117,7 +117,7 @@ LABELS {
}
ANNOTATIONS {
summary = "slow HTTP requests",
description = "on etcd instance {{ $labels.instance }} HTTP requests to {{ $labels.method }} are slow",
description = "on etcd instance {{ $labels.instance }} HTTP requests to {{ $label.method }} are slow",
}
# file descriptor alerts
@ -161,7 +161,7 @@ LABELS {
}
ANNOTATIONS {
summary = "etcd member communication is slow",
description = "etcd instance {{ $labels.instance }} member communication with {{ $labels.To }} is slow",
description = "etcd instance {{ $labels.instance }} member communication with {{ $label.To }} is slow",
}
# etcd proposal alerts

View File

@ -1,143 +0,0 @@
groups:
- name: etcd3_alert.rules
rules:
- alert: InsufficientMembers
expr: count(up{job="etcd"} == 0) > (count(up{job="etcd"}) / 2 - 1)
for: 3m
labels:
severity: critical
annotations:
description: If one more etcd member goes down the cluster will be unavailable
summary: etcd cluster insufficient members
- alert: NoLeader
expr: etcd_server_has_leader{job="etcd"} == 0
for: 1m
labels:
severity: critical
annotations:
description: etcd member {{ $labels.instance }} has no leader
summary: etcd member has no leader
- alert: HighNumberOfLeaderChanges
expr: increase(etcd_server_leader_changes_seen_total{job="etcd"}[1h]) > 3
labels:
severity: warning
annotations:
description: etcd instance {{ $labels.instance }} has seen {{ $value }} leader
changes within the last hour
summary: a high number of leader changes within the etcd cluster are happening
- alert: HighNumberOfFailedGRPCRequests
expr: sum(rate(grpc_server_handled_total{grpc_code!="OK",job="etcd"}[5m])) BY (grpc_service, grpc_method)
/ sum(rate(grpc_server_handled_total{job="etcd"}[5m])) BY (grpc_service, grpc_method) > 0.01
for: 10m
labels:
severity: warning
annotations:
description: '{{ $value }}% of requests for {{ $labels.grpc_method }} failed
on etcd instance {{ $labels.instance }}'
summary: a high number of gRPC requests are failing
- alert: HighNumberOfFailedGRPCRequests
expr: sum(rate(grpc_server_handled_total{grpc_code!="OK",job="etcd"}[5m])) BY (grpc_service, grpc_method)
/ sum(rate(grpc_server_handled_total{job="etcd"}[5m])) BY (grpc_service, grpc_method) > 0.05
for: 5m
labels:
severity: critical
annotations:
description: '{{ $value }}% of requests for {{ $labels.grpc_method }} failed
on etcd instance {{ $labels.instance }}'
summary: a high number of gRPC requests are failing
- alert: GRPCRequestsSlow
expr: histogram_quantile(0.99, sum(rate(grpc_server_handling_seconds_bucket{job="etcd",grpc_type="unary"}[5m])) by (grpc_service, grpc_method, le))
> 0.15
for: 10m
labels:
severity: critical
annotations:
description: on etcd instance {{ $labels.instance }} gRPC requests to {{ $labels.grpc_method
}} are slow
summary: slow gRPC requests
- alert: HighNumberOfFailedHTTPRequests
expr: sum(rate(etcd_http_failed_total{job="etcd"}[5m])) BY (method) / sum(rate(etcd_http_received_total{job="etcd"}[5m]))
BY (method) > 0.01
for: 10m
labels:
severity: warning
annotations:
description: '{{ $value }}% of requests for {{ $labels.method }} failed on etcd
instance {{ $labels.instance }}'
summary: a high number of HTTP requests are failing
- alert: HighNumberOfFailedHTTPRequests
expr: sum(rate(etcd_http_failed_total{job="etcd"}[5m])) BY (method) / sum(rate(etcd_http_received_total{job="etcd"}[5m]))
BY (method) > 0.05
for: 5m
labels:
severity: critical
annotations:
description: '{{ $value }}% of requests for {{ $labels.method }} failed on etcd
instance {{ $labels.instance }}'
summary: a high number of HTTP requests are failing
- alert: HTTPRequestsSlow
expr: histogram_quantile(0.99, rate(etcd_http_successful_duration_seconds_bucket[5m]))
> 0.15
for: 10m
labels:
severity: warning
annotations:
description: on etcd instance {{ $labels.instance }} HTTP requests to {{ $labels.method
}} are slow
summary: slow HTTP requests
- record: instance:fd_utilization
expr: process_open_fds / process_max_fds
- alert: FdExhaustionClose
expr: predict_linear(instance:fd_utilization[1h], 3600 * 4) > 1
for: 10m
labels:
severity: warning
annotations:
description: '{{ $labels.job }} instance {{ $labels.instance }} will exhaust
its file descriptors soon'
summary: file descriptors soon exhausted
- alert: FdExhaustionClose
expr: predict_linear(instance:fd_utilization[10m], 3600) > 1
for: 10m
labels:
severity: critical
annotations:
description: '{{ $labels.job }} instance {{ $labels.instance }} will exhaust
its file descriptors soon'
summary: file descriptors soon exhausted
- alert: EtcdMemberCommunicationSlow
expr: histogram_quantile(0.99, rate(etcd_network_member_round_trip_time_seconds_bucket[5m]))
> 0.15
for: 10m
labels:
severity: warning
annotations:
description: etcd instance {{ $labels.instance }} member communication with
{{ $labels.To }} is slow
summary: etcd member communication is slow
- alert: HighNumberOfFailedProposals
expr: increase(etcd_server_proposals_failed_total{job="etcd"}[1h]) > 5
labels:
severity: warning
annotations:
description: etcd instance {{ $labels.instance }} has seen {{ $value }} proposal
failures within the last hour
summary: a high number of proposals within the etcd cluster are failing
- alert: HighFsyncDurations
expr: histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m]))
> 0.5
for: 10m
labels:
severity: warning
annotations:
description: etcd instance {{ $labels.instance }} fync durations are high
summary: high fsync durations
- alert: HighCommitDurations
expr: histogram_quantile(0.99, rate(etcd_disk_backend_commit_duration_seconds_bucket[5m]))
> 0.25
for: 10m
labels:
severity: warning
annotations:
description: etcd instance {{ $labels.instance }} commit durations are high
summary: high commit durations

View File

@ -1,4 +1,6 @@
# Failure modes
---
title: Understand failures
---
Failures are common in a large deployment of machines. A machine fails when its hardware or software malfunctions. Multiple machines fail together when there are power failures or network issues. Multiple kinds of failures can also happen at once; it is almost impossible to enumerate all possible failure cases.

View File

@ -1,4 +1,6 @@
# etcd gateway
---
title: etcd gateway
---
## What is etcd gateway

View File

@ -550,7 +550,7 @@
"values": []
},
"yaxes": [{
"format": "Bps",
"format": "short",
"label": null,
"logBase": 1,
"max": null,

View File

@ -1,4 +1,6 @@
# gRPC proxy
---
title: gRPC proxy
---
The gRPC proxy is a stateless etcd reverse proxy operating at the gRPC layer (L7). The proxy is designed to reduce the total processing load on the core etcd cluster. For horizontal scalability, it coalesces watch and lease API requests. To protect the cluster against abusive clients, it caches key range requests.
@ -90,9 +92,9 @@ The etcd gRPC proxy starts and listens on port 8080. It forwards client requests
Sending requests through the proxy:
```bash
$ ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 put foo bar
$ ETCDCTL_API=3 ./etcdctl --endpoints=127.0.0.1:2379 put foo bar
OK
$ ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 get foo
$ ETCDCTL_API=3 ./etcdctl --endpoints=127.0.0.1:2379 get foo
foo
bar
```
@ -120,7 +122,7 @@ $ etcd grpc-proxy start --endpoints=localhost:2379 \
The proxy will list all its members for member list:
```bash
ETCDCTL_API=3 etcdctl --endpoints=http://localhost:23790 member list --write-out table
ETCDCTL_API=3 ./bin/etcdctl --endpoints=http://localhost:23790 member list --write-out table
+----+---------+--------------------------------+------------+-----------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
@ -155,10 +157,10 @@ $ etcd grpc-proxy start --endpoints=localhost:2379 \
--advertise-client-url=127.0.0.1:23792
```
The member list API to the grpc-proxy returns its own `advertise-client-url`:
the member list API to the grpc-proxy returns its own `advertise-client-url`:
```bash
ETCDCTL_API=3 etcdctl --endpoints=http://localhost:23792 member list --write-out table
ETCDCTL_API=3 ./bin/etcdctl --endpoints=http://localhost:23792 member list --write-out table
+----+---------+--------------------------------+------------+-----------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
@ -182,44 +184,12 @@ $ etcd grpc-proxy start --endpoints=localhost:2379 \
Accesses to the proxy are now transparently prefixed on the etcd cluster:
```bash
$ ETCDCTL_API=3 etcdctl --endpoints=localhost:23790 put my-key abc
$ ETCDCTL_API=3 ./bin/etcdctl --endpoints=localhost:23790 put my-key abc
# OK
$ ETCDCTL_API=3 etcdctl --endpoints=localhost:23790 get my-key
$ ETCDCTL_API=3 ./bin/etcdctl --endpoints=localhost:23790 get my-key
# my-key
# abc
$ ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 get my-prefix/my-key
$ ETCDCTL_API=3 ./bin/etcdctl --endpoints=localhost:2379 get my-prefix/my-key
# my-prefix/my-key
# abc
```
## TLS termination
Terminate TLS from a secure etcd cluster with the grpc proxy by serving an unencrypted local endpoint.
To try it out, start a single member etcd cluster with client https:
```sh
$ etcd --listen-client-urls https://localhost:2379 --advertise-client-urls https://localhost:2379 --cert-file=peer.crt --key-file=peer.key --trusted-ca-file=ca.crt --client-cert-auth
```
Confirm the client port is serving https:
```sh
# fails
$ ETCDCTL_API=3 etcdctl --endpoints=http://localhost:2379 endpoint status
# works
$ ETCDCTL_API=3 etcdctl --endpoints=https://localhost:2379 --cert=client.crt --key=client.key --cacert=ca.crt endpoint status
```
Next, start a grpc proxy on `localhost:12379` by connecting to the etcd endpoint `https://localhost:2379` using the client certificates:
```sh
$ etcd grpc-proxy start --endpoints=https://localhost:2379 --listen-addr localhost:12379 --cert client.crt --key client.key --cacert=ca.crt --insecure-skip-tls-verify &
```
Finally, test the TLS termination by putting a key into the proxy over http:
```sh
$ ETCDCTL_API=3 etcdctl --endpoints=http://localhost:12379 put abc def
# OK
```

View File

@ -1,4 +1,6 @@
# Hardware recommendations
---
title: Hardware recommendations
---
etcd usually runs well with limited resources for development or testing purposes; it's common to develop with etcd on a laptop or a cheap cloud machine. However, when running etcd clusters in production, some hardware guidelines are useful for proper administration. These suggestions are not hard rules; they serve as a good starting point for a robust production deployment. As always, deployments should be tested with simulated workloads before running in production.

View File

@ -1,4 +1,6 @@
# Maintenance
---
title: Maintenance
---
## Overview
@ -47,11 +49,9 @@ $ etcdctl defrag
Finished defragmenting etcd member[127.0.0.1:2379]
```
To defragment an etcd data directory directly, while etcd is not running, use the command:
**Note that defragmenting a live member blocks the system from reading and writing data while it rebuilds its state.**
``` sh
$ etcdctl defrag --data-dir <path-to-etcd-data-dir>
```
**Note that a defragmentation request is not replicated over the cluster; it is applied only to the local node. To defragment the whole cluster, specify all members in the `--endpoints` flag.**
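For example, a sketch that defragments every member of a three-node cluster (the endpoint addresses are illustrative):
```sh
# defragment each member in turn; the endpoints below are hypothetical
$ ETCDCTL_API=3 etcdctl --endpoints=http://10.0.1.10:2379,http://10.0.1.11:2379,http://10.0.1.12:2379 defrag
```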
## Space quota
@ -80,7 +80,7 @@ $ ETCDCTL_API=3 etcdctl --write-out=table endpoint status
+----------------+------------------+-----------+---------+-----------+-----------+------------+
# confirm alarm is raised
$ ETCDCTL_API=3 etcdctl alarm list
memberID:13803658152347727308 alarm:NOSPACE
```
Removing excessive keyspace data and defragmenting the backend database will put the cluster back within the quota limits:
@ -96,7 +96,7 @@ $ ETCDCTL_API=3 etcdctl defrag
Finished defragmenting etcd member[127.0.0.1:2379]
# disarm alarm
$ ETCDCTL_API=3 etcdctl alarm disarm
memberID:13803658152347727308 alarm:NOSPACE
# test puts are allowed again
$ ETCDCTL_API=3 etcdctl put newkey 123
OK

View File

@ -1,4 +1,6 @@
# Monitoring etcd
---
title: Monitoring etcd
---
Each etcd server provides local monitoring information on its client port through http endpoints. The monitoring data is useful for both system health checking and cluster debugging.
@ -41,24 +43,21 @@ When Elapsed (s)
17:34:51.999535 . 36 ... sent: header:<cluster_id:14841639068965178418 member_id:10276657743932975437 revision:15 raft_term:17 > kvs:<key:"abc" create_revision:6 mod_revision:14 version:9 value:"asda" > count:1
```
## Metrics endpoint
Each etcd server exports metrics under the `/metrics` path on its client port and optionally on interfaces given by `--listen-metrics-urls`.
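For instance, a sketch that additionally serves metrics on a dedicated local address (the URL is illustrative):
```sh
# serve /metrics on an extra, dedicated interface as well; the address is hypothetical
etcd --listen-metrics-urls=http://127.0.0.1:2378
```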
The metrics can be fetched with `curl`:
```sh
$ curl -L http://localhost:2379/metrics | grep -v debugging # ignore unstable debugging metrics
$ curl -L http://localhost:2379/metrics
# HELP etcd_disk_backend_commit_duration_seconds The latency distributions of commit called by backend.
# TYPE etcd_disk_backend_commit_duration_seconds histogram
etcd_disk_backend_commit_duration_seconds_bucket{le="0.002"} 72756
etcd_disk_backend_commit_duration_seconds_bucket{le="0.004"} 401587
etcd_disk_backend_commit_duration_seconds_bucket{le="0.008"} 405979
etcd_disk_backend_commit_duration_seconds_bucket{le="0.016"} 406464
# HELP etcd_debugging_mvcc_keys_total Total number of keys.
# TYPE etcd_debugging_mvcc_keys_total gauge
etcd_debugging_mvcc_keys_total 0
# HELP etcd_debugging_mvcc_pending_events_total Total number of pending events to be sent.
# TYPE etcd_debugging_mvcc_pending_events_total gauge
etcd_debugging_mvcc_pending_events_total 0
...
```
## Prometheus
Running a [Prometheus][prometheus] monitoring service is the easiest way to ingest and record etcd's metrics.
@ -66,7 +65,7 @@ Running a [Prometheus][prometheus] monitoring service is the easiest way to inge
First, install Prometheus:
```sh
PROMETHEUS_VERSION="2.0.0"
PROMETHEUS_VERSION="1.3.1"
wget https://github.com/prometheus/prometheus/releases/download/v$PROMETHEUS_VERSION/prometheus-$PROMETHEUS_VERSION.linux-amd64.tar.gz -O /tmp/prometheus-$PROMETHEUS_VERSION.linux-amd64.tar.gz
tar -xvzf /tmp/prometheus-$PROMETHEUS_VERSION.linux-amd64.tar.gz --directory /tmp/ --strip-components=1
/tmp/prometheus -version
@ -98,13 +97,13 @@ nohup /tmp/prometheus \
Now Prometheus will scrape etcd metrics every 10 seconds.
### Alerting
## Alerting
There is a set of default alerts for etcd v3 clusters for [Prometheus 1.x](./etcd3_alert.rules) as well as [Prometheus 2.x](./etcd3_alert.rules.yml).
There is a [set of default alerts for etcd v3 clusters](./etcd3_alert.rules).
> Note: the `job` labels may need to be adjusted to fit a particular need. The rules were written to apply to a single cluster, so it is recommended to choose labels unique to a cluster.
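After adjusting the labels, one way to sanity-check the rules file is Prometheus's `promtool` (a sketch, assuming a Prometheus 2.x installation and the rules file named above):
```sh
# validate the alert rules file before loading it into Prometheus
promtool check rules etcd3_alert.rules.yml
```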
### Grafana
## Grafana
[Grafana][grafana] has built-in Prometheus support; just add a Prometheus data source:
@ -117,8 +116,6 @@ Access: proxy
Then import the default [etcd dashboard template][template] and customize. For instance, if the Prometheus data source name is `my-etcd`, the `datasource` field values in the JSON also need to be `my-etcd`.
See the [demo][demo].
Sample dashboard:
![](./etcd-sample-grafana.png)
@ -127,4 +124,3 @@ Sample dashboard:
[prometheus]: https://prometheus.io/
[grafana]: http://grafana.org/
[template]: ./grafana.json
[demo]: http://dash.etcd.io/dashboard/db/test-etcd-kubernetes

View File

@ -1,4 +1,6 @@
# Performance
---
title: Performance
---
## Understanding performance

View File

@ -1,4 +1,6 @@
# Disaster recovery
---
title: Disaster recovery
---
etcd is designed to withstand machine failures. An etcd cluster automatically recovers from temporary failures (e.g., machine reboots) and tolerates up to *(N-1)/2* permanent failures for a cluster of N members. When a member permanently fails, whether due to hardware failure or disk corruption, it loses access to the cluster. If the cluster permanently loses more than *(N-1)/2* members then it disastrously fails, irrevocably losing quorum. Once quorum is lost, the cluster cannot reach consensus and therefore cannot continue accepting updates.
@ -6,7 +8,7 @@ To recover from disastrous failure, etcd v3 provides snapshot and restore facili
[v2_recover]: ../v2/admin_guide.md#disaster-recovery
## Snapshotting the keyspace
### Snapshotting the keyspace
Recovering a cluster first needs a snapshot of the keyspace from an etcd member. A snapshot may either be taken from a live member with the `etcdctl snapshot save` command or by copying the `member/snap/db` file from an etcd data directory. For example, the following command snapshots the keyspace served by `$ENDPOINT` to the file `snapshot.db`:
@ -14,7 +16,7 @@ Recovering a cluster first needs a snapshot of the keyspace from an etcd member.
$ ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshot.db
```
## Restoring a cluster
### Restoring a cluster
To restore a cluster, all that is needed is a single snapshot "db" file. A cluster restore with `etcdctl snapshot restore` creates new etcd data directories; all members should restore using the same snapshot. Restoring overwrites some snapshot metadata (specifically, the member ID and cluster ID); the member loses its former identity. This metadata overwrite prevents the new member from inadvertently joining an existing cluster. Therefore in order to start a cluster from a snapshot, the restore must start a new logical cluster.
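A restore for one member of a hypothetical three-member cluster might look like the following sketch; the member name, host names, and cluster token are illustrative, and the same command runs on each member with its own `--name` and peer URL:
```sh
$ ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
  --name m1 \
  --initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-advertise-peer-urls http://host1:2380
```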

View File

@ -1,8 +1,10 @@
# Runtime reconfiguration
---
title: Runtime reconfiguration
---
etcd comes with support for incremental runtime reconfiguration, which allows users to update the membership of the cluster at run time.
Reconfiguration requests can only be processed when a majority of cluster members are functioning. It is **highly recommended** to always have a cluster size greater than two in production. It is unsafe to remove a member from a two member cluster. The majority of a two member cluster is also two. If there is a failure during the removal process, the cluster might not be able to make progress and need to [restart from majority failure][majority failure].
Reconfiguration requests can only be processed when a majority of cluster members are functioning. It is **highly recommended** to always have a cluster size greater than two in production. It is unsafe to remove a member from a two member cluster. The majority of a two member cluster is also two. If there is a failure during the removal process, the cluster might not able to make progress and need to [restart from majority failure][majority failure].
To better understand the design behind runtime reconfiguration, please read [the runtime reconfiguration document][runtime-reconf].
@ -41,7 +43,7 @@ Before making any change, a simple majority (quorum) of etcd members must be ava
All changes to the cluster must be done sequentially:
* To update a single member peerURLs, issue an update operation
* To replace a healthy single member, remove the old member then add a new member
* To replace a healthy single member, add a new member then remove the old member (see the sketch after this list)
* To increase from 3 to 5 members, issue two add operations
* To decrease from 5 to 3, issue two remove operations
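As a sketch of the replace case, using the same `etcdctl` syntax as the update example below (the member name, URL, and ID are illustrative):
```sh
# add the replacement member first
$ etcdctl member add node4 http://10.0.1.13:2380
# then remove the old member by its ID
$ etcdctl member remove a8266ecf031671f3
```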
@ -55,9 +57,9 @@ To update the advertise client URLs of a member, simply restart that member with
#### Update advertise peer URLs
To update the advertise peer URLs of a member, first update it explicitly via the member command and then restart the member. The additional action is required since updating peer URLs changes the cluster wide configuration and can affect the health of the etcd cluster.
To update the advertise peer URLs, first find the target member's ID. To list all members with `etcdctl`:
To update the peer URLs, first find the target member's ID. To list all members with `etcdctl`:
```sh
$ etcdctl member list
@ -69,7 +71,7 @@ a8266ecf031671f3: name=node1 peerURLs=http://localhost:23801 clientURLs=http://1
This example will `update` a8266ecf031671f3 member ID and change its peerURLs value to `http://10.0.1.10:2380`:
```sh
$ etcdctl member update a8266ecf031671f3 --peer-urls=http://10.0.1.10:2380
$ etcdctl member update a8266ecf031671f3 http://10.0.1.10:2380
Updated member with ID a8266ecf031671f3 in cluster
```

View File

@ -1,4 +1,6 @@
# Design of runtime reconfiguration
---
title: Design of runtime reconfiguration
---
Runtime reconfiguration is one of the hardest and most error prone features in a distributed system, especially in a consensus based system like etcd.
@ -10,13 +12,13 @@ In etcd, every runtime reconfiguration has to go through [two phases][add-member
Phase 1 - Inform cluster of new configuration
To add a member into etcd cluster, make an API call to request a new member to be added to the cluster. This is the only way to add a new member into an existing cluster. The API call returns when the cluster agrees on the configuration change.
To add a member into etcd cluster, make an API call to request a new member to be added to the cluster. This is only way to add a new member into an existing cluster. The API call returns when the cluster agrees on the configuration change.
Phase 2 - Start new member
To join the new etcd member into the existing cluster, specify the correct `initial-cluster` and set `initial-cluster-state` to `existing`. When the member starts, it will contact the existing cluster first and verify the current cluster configuration matches the expected one specified in `initial-cluster`. When the new member successfully starts, the cluster has reached the expected configuration.
To join the etcd member into the existing cluster, specify the correct `initial-cluster` and set `initial-cluster-state` to `existing`. When the member starts, it will contact the existing cluster first and verify the current cluster configuration matches the expected one specified in `initial-cluster`. When the new member successfully starts, the cluster has reached the expected configuration.
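Putting the two phases together, a minimal sketch (the member name, URLs, and cluster list are all illustrative):
```sh
# phase 1: ask the cluster to agree on the new configuration
$ etcdctl member add node4 http://10.0.1.13:2380

# phase 2: start the new member against the agreed configuration
$ etcd --name node4 \
  --initial-advertise-peer-urls http://10.0.1.13:2380 \
  --listen-peer-urls http://10.0.1.13:2380 \
  --initial-cluster node1=http://10.0.1.10:2380,node2=http://10.0.1.11:2380,node3=http://10.0.1.12:2380,node4=http://10.0.1.13:2380 \
  --initial-cluster-state existing
```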
By splitting the process into two discrete phases users are forced to be explicit regarding cluster membership changes. This actually gives users more flexibility and makes things easier to reason about. For example, if there is an attempt to add a new member with the same ID as an existing member in an etcd cluster, the action will fail immediately during phase one without impacting the running cluster. Similar protection is provided to prevent adding new members by mistake. If a new etcd member attempts to join the cluster before the cluster has accepted the configuration change, it will not be accepted by the cluster.
By splitting the process into two discrete phases users are forced to be explicit regarding cluster membership changes. This actually gives users more flexibility and makes things easier to reason about. For example, if there is an attempt to add a new member with the same ID as an existing member in an etcd cluster, the action will fail immediately during phase one without impacting the running cluster. Similar protection is provided to prevent adding new members by mistake. If a new etcd member attempts to join the cluster before the cluster has accepted the configuration change,, it will not be accepted by the cluster.
Without the explicit workflow around cluster membership etcd would be vulnerable to unexpected cluster membership changes. For example, if etcd is running under an init system such as systemd, etcd would be restarted after being removed via the membership API, and attempt to rejoin the cluster on startup. This cycle would continue every time a member is removed via the API and systemd is set to restart etcd after failing, which is unexpected.
@ -26,21 +28,21 @@ We expect runtime reconfiguration to be an infrequent operation. We decided to k
If a cluster permanently loses a majority of its members, a new cluster will need to be started from an old data directory to recover the previous state.
It is entirely possible to force remove the failed members from the existing cluster to recover. However, we decided not to support this method since it bypasses the normal consensus commit phase, which is unsafe. If the member to remove is not actually dead, or is force removed through different members in the same cluster, etcd will end up with a diverged cluster with the same clusterID. This is very dangerous and hard to debug/fix afterwards.
With a correct deployment, the possibility of permanent majority lose is very low. But it is a severe enough problem that worth special care. We strongly suggest reading the [disaster recovery documentation][disaster-recovery] and preparing for permanent majority lose before putting etcd into production.
With a correct deployment, the possibility of permanent majority lose is very low. But it is a severe enough problem that worth special care. We strongly suggest reading the [disaster recovery documentation][disaster-recovery] and prepare for permanent majority lose before putting etcd into production.
## Do not use public discovery service for runtime reconfiguration
The public discovery service should only be used for bootstrapping a cluster. To join a member into an existing cluster, use the runtime reconfiguration API.
Discovery service is designed for bootstrapping an etcd cluster in the cloud environment, when the IP addresses of all the members are not known beforehand. After successfully bootstrapping a cluster, the IP addresses of all the members are known. Technically, the discovery service should no longer be needed.
It seems that using the public discovery service would be a convenient way to do runtime reconfiguration; after all, the discovery service already has all the cluster configuration information. However, relying on the public discovery service brings trouble:
1. it introduces external dependencies for the entire life-cycle of the cluster, not just bootstrap time. If there is a network issue between the cluster and public discovery service, the cluster will suffer from it.
2. public discovery service must reflect the correct runtime configuration of the cluster during its life-cycle. It has to provide a security mechanism to prevent bad actions, and that is hard.
3. public discovery service has to keep tens of thousands of cluster configurations. Our public discovery service backend is not ready for that workload.

View File

@ -1,4 +1,6 @@
# Transport security model
---
title: Security model
---
etcd supports automatic TLS as well as authentication through client certificates for both clients to server as well as peer (server to server / cluster) communication.
@ -181,10 +183,6 @@ To disable certificate chain checking, invoke curl with the `-k` flag:
$ curl -k https://127.0.0.1:2379/v2/keys/foo -Xput -d value=bar -v
```
## Notes for DNS SRV
Since v3.1.0 (except v3.2.9), discovery SRV bootstrapping authenticates `ServerName` with a root domain name from `--discovery-srv` flag. This is to avoid man-in-the-middle cert attacks, by requiring a certificate to have matching root domain name in its Subject Alternative Name (SAN) field. For instance, `etcd --discovery-srv=etcd.local` will only authenticate peers/clients when the provided certs have root domain `etcd.local` as an entry in Subject Alternative Name (SAN) field
## Notes for etcd proxy
etcd proxy terminates the TLS from its client if the connection is secure, and uses proxy's own key/cert specified in `--peer-key-file` and `--peer-cert-file` to communicate with etcd members.

View File

@ -1,8 +1,10 @@
# Supported systems
---
title: Supported platforms
---
## Current support
### Current support
The following table lists etcd support status for common architectures and operating systems:
The following table lists etcd support status for common architectures and operating systems,
| Architecture | Operating System | Status | Maintainers |
| ------------ | ---------------- | ------------ | --------------------------- |
@ -18,7 +20,7 @@ The following table lists etcd support status for common architectures and opera
Experimental platforms appear to work in practice and have some platform specific code in etcd, but do not fully conform to the stable support policy. Unstable platforms have been lightly tested, though less thoroughly than experimental ones. Unlisted architecture and operating system pairs are currently unsupported; caveat emptor.
## Supporting a new system platform
### Supporting a new platform
For etcd to officially support a new platform as stable, a few requirements are necessary to ensure acceptable quality:
@ -28,7 +30,7 @@ For etcd to officially support a new platform as stable, a few requirements are
4. Set up CI (TravisCI, SemaphoreCI or Jenkins) for running integration tests; etcd must pass intensive tests.
5. (Optional) Set up a functional testing cluster; an etcd cluster should survive stress testing.
## 32-bit and other unsupported systems
### 32-bit and other unsupported systems
etcd has known issues on 32-bit systems due to a bug in the Go runtime. See the [Go issue][go-issue] and [atomic package][go-atomic] for more information.

View File

@ -1,4 +1,6 @@
# Migrate applications from using API v2 to API v3
---
title: Migrate applications from using API v2 to API v3
---
The data store v2 is still accessible from the API v2 after upgrading to etcd3. Thus, it will work as before and require no application changes. With etcd 3, applications use the new grpc API v3 to access the mvcc store, which provides more features and improved performance. The mvcc store and the old store v2 are separate and isolated; writes to the store v2 will not affect the mvcc store and, similarly, writes to the mvcc store will not affect the store v2.
@ -6,7 +8,7 @@ Migrating an application from the API v2 to the API v3 involves two steps: 1) mi
## Migrate client library
API v3 is different from API v2, thus application developers need to use a new client library to send requests to etcd API v3. The documentation of the client v3 is available at https://godoc.org/github.com/coreos/etcd/clientv3.
There are some notable differences between API v2 and API v3:
@ -38,17 +40,13 @@ Second, migrate the v2 keys into v3 with the [migrate][migrate_command] (`ETCDCT
Restart the etcd members and everything should just work.
For etcd v3.3+, run `ETCDCTL_API=3 etcdctl endpoint hashkv --cluster` to ensure key-value stores are consistent post migration.
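As a concrete check (the endpoint is illustrative):
```sh
# compare key-value store hashes across all members (etcd v3.3+)
ETCDCTL_API=3 etcdctl --endpoints=http://10.0.1.10:2379 endpoint hashkv --cluster
```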
**Warning**: when the v2 store has expiring TTL keys and the migrate command intends to preserve TTLs, migration may be inconsistent with the last committed v2 state when run on any member with a raft index smaller than the last leader's raft index.
### Online migration
If the application cannot tolerate any downtime, then it must migrate online. The implementation of online migration will vary from application to application but the overall idea is the same.
First, write application code using the v3 API. The application must support two modes: a migration mode and a normal mode. The application starts in migration mode. When running in migration mode, the application reads keys using the v3 API first, and, if it cannot find the key, it retries with the API v2. In normal mode, the application only reads keys using the v3 API. The application writes keys over the API v3 in both modes. To acknowledge a switch from migration mode to normal mode, the application watches a switch-mode key. When the switch key's value turns to `true`, the application switches over from migration mode to normal mode.
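As a rough shell illustration of a migration-mode read (the key name is hypothetical; a real application would do this in client code). Since a v3 `get` on a missing key succeeds with empty output, the fallback tests for emptiness rather than for an error:
```sh
# read from the v3 store first; fall back to the v2 store when the key is absent
value=$(ETCDCTL_API=3 etcdctl get /config/feature --print-value-only)
if [ -z "$value" ]; then
  value=$(ETCDCTL_API=2 etcdctl get /config/feature)
fi
echo "$value"
```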
Second, start a background job to migrate data from the store v2 to the mvcc store by reading keys from the API v2 and writing keys to the API v3.
After finishing data migration, the background job writes `true` into the switch mode key to notify the application that it may switch modes.

View File

@ -1,6 +1,8 @@
# Versioning
---
title: Versioning
---
## Service versioning
### Service versioning
etcd uses [semantic versioning](http://semver.org)
New minor versions may add additional features to the API.
@ -11,7 +13,7 @@ Get the running etcd cluster version with `etcdctl`:
ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 endpoint status
```
## API versioning
### API versioning
The `v3` API responses should not change after the 3.0.0 release but new features will be added over time.

View File

@ -0,0 +1,3 @@
---
title: Platforms
---

View File

@ -1,4 +1,8 @@
# Amazon Web Services
---
title: Amazon Web Services
---
## Introduction
This guide assumes operational knowledge of Amazon Web Services (AWS), specifically Amazon Elastic Compute Cloud (EC2). It provides an introduction to design considerations for an etcd deployment on AWS EC2 and how AWS-specific features may be utilized in that context.

View File

@ -1,4 +1,6 @@
# Container Linux with systemd
---
title: Run etcd on Container Linux with systemd
---
The following guide shows how to run etcd with [systemd][systemd-docs] under [Container Linux][container-linux-docs].
@ -134,13 +136,13 @@ etcd:
initial_cluster_state: new
client_cert_auth: true
trusted_ca_file: /etc/ssl/certs/etcd-root-ca.pem
cert_file: /etc/ssl/certs/s1.pem
key_file: /etc/ssl/certs/s1-key.pem
peer_client_cert_auth: true
peer_trusted_ca_file: /etc/ssl/certs/etcd-root-ca.pem
peer_cert_file: /etc/ssl/certs/s1.pem
peer_key_file: /etc/ssl/certs/s1-key.pem
auto_compaction_retention: 1
cert-file: /etc/ssl/certs/s1.pem
key-file: /etc/ssl/certs/s1-key.pem
peer-client-cert-auth: true
peer-trusted-ca-file: /etc/ssl/certs/etcd-root-ca.pem
peer-cert-file: /etc/ssl/certs/s1.pem
peer-key-file: /etc/ssl/certs/s1-key.pem
auto-compaction-retention: 1
```
```

View File

@ -1,4 +1,6 @@
# FreeBSD
---
title: FreeBSD
---
Starting with version 0.1.2 both etcd and etcdctl have been ported to FreeBSD and can be installed either via packages or ports system. Their versions have been recently updated to 0.2.0 so now etcd and etcdctl can be enjoyed on FreeBSD 10.0 (RC4 as of now) and 9.x, where they have been tested. They might also work when installed from ports on earlier versions of FreeBSD, but it is untested; caveat emptor.

View File

@ -1,4 +1,6 @@
# Production users
---
title: Production users
---
This document tracks people and use cases for etcd in production. By creating a list of production use cases we hope to build a community of advisors that we can reach out to with experience using various etcd applications, operation environments, and cluster sizes. The etcd development team may reach out periodically to check in on how etcd is working in the field and update this list.
@ -79,9 +81,9 @@ Radius Intelligence uses Kubernetes running CoreOS to containerize and scale int
PD (Placement Driver) is the central controller in the TiDB cluster. It saves the cluster meta information, schedules the data, allocates the globally unique timestamp for the distributed transaction, etc. It embeds etcd to provide high availability and automatic failover.
## Huawei
## Canal
- *Application*: System configuration for overlay network (Canal)
- *Application*: system configuration for overlay network
- *Launched*: June 2016
- *Cluster Size*: 3 members for each cluster
- *Order of Data Size*: kilobytes

View File

@ -1,4 +1,6 @@
# Reporting bugs
---
title: Reporting bugs
---
If any part of the etcd project has bugs or documentation mistakes, please let us know by [opening an issue][etcd-issue]. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.

View File

@ -0,0 +1,3 @@
---
title: RFC
---

View File

@ -1,4 +1,6 @@
# Overview
---
title: etcd v3 API
---
The etcd v3 API is designed to give users a more efficient and cleaner abstraction compared to etcd v2. There are a number of semantic and protocol changes in this new API. For an overview [see Xiang Li's video](https://youtu.be/J5AioGtEPeQ?t=211).

View File

@ -1,4 +1,6 @@
# Tuning
---
title: Tuning
---
The default settings in etcd should work well for installations on a local network where the average network latency is low. However, when using etcd across multiple data centers or over networks with high latency, the heartbeat interval and election timeout settings may need tuning.
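Both settings are exposed as command-line flags; a sketch with purely illustrative values (see the rest of this guide for how to size them against the actual round-trip latency):
```sh
# values are in milliseconds; illustrative only, tune for the observed network latency
etcd --heartbeat-interval=300 --election-timeout=3000
```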

View File

@ -0,0 +1,3 @@
---
title: etcd upgrades
---

View File

@ -1,4 +1,6 @@
## Upgrade etcd from 2.3 to 3.0
---
title: Upgrade etcd from 2.3 to 3.0
---
In the general case, upgrading from etcd 2.3 to 3.0 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v2.3 processes and replace them with etcd v3.0 processes
@ -8,6 +10,8 @@ Before [starting an upgrade](#upgrade-procedure), read through the rest of this
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/coreos/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. This also prevents accidental v3 data loss (e.g. `db` file might have been moved). etcd requires that post v3 migration can only happen with v3 data. Do not upgrade to newer v3 versions until v3.0 server contains v3 data.
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.0, the running cluster must be 2.3 or greater. If it's before 2.3, please upgrade to [2.3](https://github.com/coreos/etcd/releases/tag/v2.3.8) before upgrading to 3.0.

View File

@ -1,4 +1,6 @@
## Upgrade etcd from 3.0 to 3.1
---
title: Upgrade etcd from 3.0 to 3.1
---
In the general case, upgrading from etcd 3.0 to 3.1 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.0 processes and replace them with etcd v3.1 processes
@ -8,6 +10,8 @@ Before [starting an upgrade](#upgrade-procedure), read through the rest of this
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/coreos/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. This also prevents accidental v3 data loss (e.g. `db` file might have been moved). etcd requires that post v3 migration can only happen with v3 data. Do not upgrade to newer v3 versions until v3.0 server contains v3 data.
#### Monitoring
Following metrics from v3.0.x have been deprecated in favor of [go-grpc-prometheus](https://github.com/grpc-ecosystem/go-grpc-prometheus):

View File

@ -1,4 +1,6 @@
## Upgrade etcd from 3.1 to 3.2
---
title: Upgrade etcd from 3.1 to 3.2
---
In the general case, upgrading from etcd 3.1 to 3.2 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.1 processes and replace them with etcd v3.2 processes
@ -6,51 +8,44 @@ In the general case, upgrading from etcd 3.1 to 3.2 can be a zero-downtime, roll
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Client upgrade checklists
### Upgrade checklists
3.2 introduces two breaking changes.
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/coreos/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. This also prevents accidental v3 data loss (e.g. `db` file might have been moved). etcd requires that post v3 migration can only happen with v3 data. Do not upgrade to newer v3 versions until v3.0 server contains v3 data.
Previously, `clientv3.Lease.TimeToLive` API returned `lease.ErrLeaseNotFound` on non-existent lease ID. 3.2 instead returns TTL=-1 in its response and no error (see [#7305](https://github.com/coreos/etcd/pull/7305)).
Highlighted breaking changes in 3.2.
Before
#### Change in default `snapshot-count` value
```go
// when leaseID does not exist
resp, err := TimeToLive(ctx, leaseID)
resp == nil
err == lease.ErrLeaseNotFound
```
The default value of `--snapshot-count` has [changed from 10,000 to 100,000](https://github.com/coreos/etcd/pull/7160). A higher snapshot count means etcd holds Raft entries in memory for longer before discarding old entries. It is a trade-off between less frequent snapshotting and [higher memory usage](https://github.com/kubernetes/kubernetes/issues/60589#issuecomment-371977156). A higher `--snapshot-count` manifests as higher memory usage, while retaining more Raft entries helps the availability of slow followers: the leader can still replicate its logs to followers, rather than forcing them to rebuild their stores from leader snapshots.
After
#### Change in gRPC dependency (>=3.2.10)
```go
// when leaseID does not exist
resp, err := TimeToLive(ctx, leaseID)
resp.TTL == -1
err == nil
```
3.2.10 or later now requires [grpc/grpc-go](https://github.com/grpc/grpc-go/releases) `v1.7.5` (<=3.2.9 requires `v1.2.1`).
`clientv3.NewFromConfigFile` is moved to `yaml.NewConfig`.
##### Deprecate `grpclog.Logger`
`grpclog.Logger` has been deprecated in favor of [`grpclog.LoggerV2`](https://github.com/grpc/grpc-go/blob/master/grpclog/loggerv2.go). `clientv3.Logger` is now `grpclog.LoggerV2`.
Before
```go
import "github.com/coreos/etcd/clientv3"
clientv3.NewFromConfigFile
clientv3.SetLogger(log.New(os.Stderr, "grpc: ", 0))
```
After
```go
import clientv3yaml "github.com/coreos/etcd/clientv3/yaml"
clientv3yaml.NewConfig
import "github.com/coreos/etcd/clientv3"
import "google.golang.org/grpc/grpclog"
clientv3.SetLogger(grpclog.NewLoggerV2(os.Stderr, os.Stderr, os.Stderr))
// log.New above cannot be used (it does not implement the grpclog.LoggerV2 interface)
```
### Client upgrade checklists (>=3.2.10)
##### Deprecate `grpc.ErrClientConnTimeout`
>=3.2.10 upgrades `grpc/grpc-go` to >=v1.7.x from v1.2.1, which introduces some breaking changes.
Previously, `grpc.ErrClientConnTimeout` error is returned on client dial time-outs. >=3.2.10 instead returns `context.DeadlineExceeded` (see [#8504](https://github.com/coreos/etcd/issues/8504)).
Previously, `grpc.ErrClientConnTimeout` error is returned on client dial time-outs. 3.2 instead returns `context.DeadlineExceeded` (see [#8504](https://github.com/coreos/etcd/issues/8504)).
Before
@ -77,6 +72,148 @@ if err == context.DeadlineExceeded {
}
```
#### Change in maximum request size limits (>=3.2.10)
3.2.10 and 3.2.11 allow custom request size limits on the server side. >=3.2.12 allows custom request size limits for both server and **client side**. In previous versions (v3.2.10, v3.2.11), the client response size was limited to 4 MiB.
Server-side request limits can be configured with `--max-request-bytes` flag:
```bash
# limits request size to 1.5 KiB
etcd --max-request-bytes 1536
# client writes exceeding 1.5 KiB will be rejected
etcdctl put foo [LARGE VALUE...]
# etcdserver: request is too large
```
Or configure `embed.Config.MaxRequestBytes` field:
```go
import "github.com/coreos/etcd/embed"
import "github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
// limit requests to 5 MiB
cfg := embed.NewConfig()
cfg.MaxRequestBytes = 5 * 1024 * 1024
// client writes exceeding 5 MiB will be rejected
_, err := cli.Put(ctx, "foo", [LARGE VALUE...])
err == rpctypes.ErrRequestTooLarge
```
**If not specified, server-side limit defaults to 1.5 MiB**.
Client-side request limits must be configured based on server-side limits.
```bash
# limits request size to 1 MiB
etcd --max-request-bytes 1048576
```
```go
import "github.com/coreos/etcd/clientv3"
cli, _ := clientv3.New(clientv3.Config{
Endpoints: []string{"127.0.0.1:2379"},
MaxCallSendMsgSize: 2 * 1024 * 1024,
MaxCallRecvMsgSize: 3 * 1024 * 1024,
})
// client writes exceeding "--max-request-bytes" will be rejected from etcd server
_, err := cli.Put(ctx, "foo", strings.Repeat("a", 1*1024*1024+5))
err == rpctypes.ErrRequestTooLarge
// client writes exceeding "MaxCallSendMsgSize" will be rejected from client-side
_, err = cli.Put(ctx, "foo", strings.Repeat("a", 5*1024*1024))
err.Error() == "rpc error: code = ResourceExhausted desc = grpc: trying to send message larger than max (5242890 vs. 2097152)"
// some writes under limits
for i := range []int{0,1,2,3,4} {
_, err = cli.Put(ctx, fmt.Sprintf("foo%d", i), strings.Repeat("a", 1*1024*1024-500))
if err != nil {
panic(err)
}
}
// client reads exceeding "MaxCallRecvMsgSize" will be rejected from client-side
_, err = cli.Get(ctx, "foo", clientv3.WithPrefix())
err.Error() == "rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5240509 vs. 3145728)"
```
**If not specified, client-side send limit defaults to 2 MiB (1.5 MiB + gRPC overhead bytes) and receive limit to `math.MaxInt32`**. Please see [clientv3 godoc](https://godoc.org/github.com/coreos/etcd/clientv3#Config) for more detail.
#### Change in raw gRPC client wrappers
3.2.12 or later changes the function signatures of `clientv3` gRPC client wrapper. This change was needed to support [custom `grpc.CallOption` on message size limits](https://github.com/coreos/etcd/pull/9047).
Before and after
```diff
-func NewKVFromKVClient(remote pb.KVClient) KV {
+func NewKVFromKVClient(remote pb.KVClient, c *Client) KV {
-func NewClusterFromClusterClient(remote pb.ClusterClient) Cluster {
+func NewClusterFromClusterClient(remote pb.ClusterClient, c *Client) Cluster {
-func NewLeaseFromLeaseClient(remote pb.LeaseClient, keepAliveTimeout time.Duration) Lease {
+func NewLeaseFromLeaseClient(remote pb.LeaseClient, c *Client, keepAliveTimeout time.Duration) Lease {
-func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient) Maintenance {
+func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient, c *Client) Maintenance {
-func NewWatchFromWatchClient(wc pb.WatchClient) Watcher {
+func NewWatchFromWatchClient(wc pb.WatchClient, c *Client) Watcher {
```
#### Change in `clientv3.Lease.TimeToLive` API
Previously, `clientv3.Lease.TimeToLive` API returned `lease.ErrLeaseNotFound` on non-existent lease ID. 3.2 instead returns TTL=-1 in its response and no error (see [#7305](https://github.com/coreos/etcd/pull/7305)).
Before
```go
// when leaseID does not exist
resp, err := TimeToLive(ctx, leaseID)
resp == nil
err == lease.ErrLeaseNotFound
```
After
```go
// when leaseID does not exist
resp, err := TimeToLive(ctx, leaseID)
resp.TTL == -1
err == nil
```
#### Change in `clientv3.NewFromConfigFile`
`clientv3.NewFromConfigFile` is moved to `yaml.NewConfig`.
Before
```go
import "github.com/coreos/etcd/clientv3"
clientv3.NewFromConfigFile
```
After
```go
import clientv3yaml "github.com/coreos/etcd/clientv3/yaml"
clientv3yaml.NewConfig
```
#### Change in `--listen-peer-urls` and `--listen-client-urls`
3.2 now rejects domain names for `--listen-peer-urls` and `--listen-client-urls` (3.1 only prints out warnings), since a domain name is invalid for network interface binding. Make sure that those URLs are properly formatted as `scheme://IP:port`.
See [issue #6336](https://github.com/coreos/etcd/issues/6336) for more context.
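For example (the addresses are illustrative):
```sh
# rejected in 3.2: a domain name cannot be bound to a network interface
etcd --listen-client-urls http://example.com:2379

# accepted: scheme://IP:port
etcd --listen-client-urls http://127.0.0.1:2379
```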
### Server upgrade checklists
#### Upgrade requirements

View File

@ -1,4 +1,6 @@
## Upgrade etcd from 3.2 to 3.3
---
title: Upgrade etcd from 3.2 to 3.3
---
In the general case, upgrading from etcd 3.2 to 3.3 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.2 processes and replace them with etcd v3.3 processes
@ -6,9 +8,306 @@ In the general case, upgrading from etcd 3.2 to 3.3 can be a zero-downtime, roll
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Client upgrade checklists
### Upgrade checklists
3.3 introduces breaking changes (TODO: update this before 3.3 release).
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/coreos/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. This also prevents accidental v3 data loss (e.g. `db` file might have been moved). etcd requires that post v3 migration can only happen with v3 data. Do not upgrade to newer v3 versions until v3.0 server contains v3 data.
Highlighted breaking changes in 3.3.
#### Change in `etcdserver.EtcdServer` struct
`etcdserver.EtcdServer` has changed the type of its member field `*etcdserver.ServerConfig` to `etcdserver.ServerConfig`. And `etcdserver.NewServer` now takes `etcdserver.ServerConfig`, instead of `*etcdserver.ServerConfig`.
Before and after (e.g. [k8s.io/kubernetes/test/e2e_node/services/etcd.go](https://github.com/kubernetes/kubernetes/blob/release-1.8/test/e2e_node/services/etcd.go#L50-L55))
```diff
import "github.com/coreos/etcd/etcdserver"
type EtcdServer struct {
*etcdserver.EtcdServer
- config *etcdserver.ServerConfig
+ config etcdserver.ServerConfig
}
func NewEtcd(dataDir string) *EtcdServer {
- config := &etcdserver.ServerConfig{
+ config := etcdserver.ServerConfig{
DataDir: dataDir,
...
}
return &EtcdServer{config: config}
}
func (e *EtcdServer) Start() error {
var err error
e.EtcdServer, err = etcdserver.NewServer(e.config)
...
```
#### Change in `embed.EtcdServer` struct
Field `LogOutput` is added to `embed.Config`:
```diff
package embed
type Config struct {
Debug bool `json:"debug"`
LogPkgLevels string `json:"log-package-levels"`
+ LogOutput string `json:"log-output"`
...
```
Previously, gRPC server warnings were logged in etcdserver.
```
WARNING: 2017/11/02 11:35:51 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {localhost:2379 <nil>}
WARNING: 2017/11/02 11:35:51 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {localhost:2379 <nil>}
```
From v3.3, gRPC server logs are disabled by default.
```go
import "github.com/coreos/etcd/embed"
cfg := &embed.Config{Debug: false}
cfg.SetupLogging()
```
Set `embed.Config.Debug` field to `true` to enable gRPC server logs.
#### Change in `/health` endpoint response
Previously, `[endpoint]:[client-port]/health` returned a manually marshaled JSON value. 3.3 now defines the [`etcdhttp.Health`](https://godoc.org/github.com/coreos/etcd/etcdserver/api/etcdhttp#Health) struct.
Note that in v3.3.0-rc.0, v3.3.0-rc.1, and v3.3.0-rc.2, `etcdhttp.Health` has boolean type `"health"` and `"errors"` fields. For backward compatibility, we reverted the `"health"` field to `string` type and removed the `"errors"` field. Further health information will be provided in separate APIs.
```bash
$ curl http://localhost:2379/health
{"health":"true"}
```
#### Change in gRPC gateway HTTP endpoints (replaced `/v3alpha` with `/v3beta`)
Before
```bash
curl -L http://localhost:2379/v3alpha/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```
After
```bash
curl -L http://localhost:2379/v3beta/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```
Requests to `/v3alpha` endpoints will redirect to `/v3beta`, and `/v3alpha` will be removed in the 3.4 release.
#### Change in maximum request size limits
3.3 now allows custom request size limits for both server and **client side**. In previous versions (v3.2.10, v3.2.11), the client response size was limited to 4 MiB.
Server-side request limits can be configured with `--max-request-bytes` flag:
```bash
# limits request size to 1.5 KiB
etcd --max-request-bytes 1536
# client writes exceeding 1.5 KiB will be rejected
etcdctl put foo [LARGE VALUE...]
# etcdserver: request is too large
```
Or configure `embed.Config.MaxRequestBytes` field:
```go
import "github.com/coreos/etcd/embed"
import "github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
// limit requests to 5 MiB
cfg := embed.NewConfig()
cfg.MaxRequestBytes = 5 * 1024 * 1024
// client writes exceeding 5 MiB will be rejected
_, err := cli.Put(ctx, "foo", [LARGE VALUE...])
err == rpctypes.ErrRequestTooLarge
```
**If not specified, server-side limit defaults to 1.5 MiB**.
Client-side request limits must be configured based on server-side limits.
```bash
# limits request size to 1 MiB
etcd --max-request-bytes 1048576
```
```go
import "github.com/coreos/etcd/clientv3"
cli, _ := clientv3.New(clientv3.Config{
Endpoints: []string{"127.0.0.1:2379"},
MaxCallSendMsgSize: 2 * 1024 * 1024,
MaxCallRecvMsgSize: 3 * 1024 * 1024,
})
// client writes exceeding "--max-request-bytes" will be rejected from etcd server
_, err := cli.Put(ctx, "foo", strings.Repeat("a", 1*1024*1024+5))
err == rpctypes.ErrRequestTooLarge
// client writes exceeding "MaxCallSendMsgSize" will be rejected from client-side
_, err = cli.Put(ctx, "foo", strings.Repeat("a", 5*1024*1024))
err.Error() == "rpc error: code = ResourceExhausted desc = grpc: trying to send message larger than max (5242890 vs. 2097152)"
// some writes under limits
for i := range []int{0,1,2,3,4} {
_, err = cli.Put(ctx, fmt.Sprintf("foo%d", i), strings.Repeat("a", 1*1024*1024-500))
if err != nil {
panic(err)
}
}
// client reads exceeding "MaxCallRecvMsgSize" will be rejected from client-side
_, err = cli.Get(ctx, "foo", clientv3.WithPrefix())
err.Error() == "rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5240509 vs. 3145728)"
```
**If not specified, client-side send limit defaults to 2 MiB (1.5 MiB + gRPC overhead bytes) and receive limit to `math.MaxInt32`**. Please see [clientv3 godoc](https://godoc.org/github.com/coreos/etcd/clientv3#Config) for more detail.
#### Change in raw gRPC client wrappers
3.3 changes the function signatures of `clientv3` gRPC client wrapper. This change was needed to support [custom `grpc.CallOption` on message size limits](https://github.com/coreos/etcd/pull/9047).
Before and after
```diff
-func NewKVFromKVClient(remote pb.KVClient) KV {
+func NewKVFromKVClient(remote pb.KVClient, c *Client) KV {
-func NewClusterFromClusterClient(remote pb.ClusterClient) Cluster {
+func NewClusterFromClusterClient(remote pb.ClusterClient, c *Client) Cluster {
-func NewLeaseFromLeaseClient(remote pb.LeaseClient, keepAliveTimeout time.Duration) Lease {
+func NewLeaseFromLeaseClient(remote pb.LeaseClient, c *Client, keepAliveTimeout time.Duration) Lease {
-func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient) Maintenance {
+func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient, c *Client) Maintenance {
-func NewWatchFromWatchClient(wc pb.WatchClient) Watcher {
+func NewWatchFromWatchClient(wc pb.WatchClient, c *Client) Watcher {
```
#### Change in clientv3 `Snapshot` API error type
Previously, the clientv3 `Snapshot` API returned a raw [`grpc/*status.statusError`] type error. v3.3 now translates those errors into corresponding public error types, to be consistent with other APIs.
Before
```go
import "context"
// reading snapshot with canceled context should error out
ctx, cancel := context.WithCancel(context.Background())
rc, _ := cli.Snapshot(ctx)
cancel()
_, err := io.Copy(f, rc)
err.Error() == "rpc error: code = Canceled desc = context canceled"
// reading snapshot with deadline exceeded should error out
ctx, cancel = context.WithTimeout(context.Background(), time.Second)
defer cancel()
rc, _ = cli.Snapshot(ctx)
time.Sleep(2 * time.Second)
_, err = io.Copy(f, rc)
err.Error() == "rpc error: code = DeadlineExceeded desc = context deadline exceeded"
```
After
```go
import "context"
// reading snapshot with canceled context should error out
ctx, cancel := context.WithCancel(context.Background())
rc, _ := cli.Snapshot(ctx)
cancel()
_, err := io.Copy(f, rc)
err == context.Canceled
// reading snapshot with deadline exceeded should error out
ctx, cancel = context.WithTimeout(context.Background(), time.Second)
defer cancel()
rc, _ = cli.Snapshot(ctx)
time.Sleep(2 * time.Second)
_, err = io.Copy(f, rc)
err == context.DeadlineExceeded
```
#### Change in `etcdctl lease timetolive` command output
Previously, the `lease timetolive LEASE_ID` command printed `-1s` for the remaining seconds of an expired lease. 3.3 now outputs a clearer message.
Before
```bash
lease 2d8257079fa1bc0c granted with TTL(0s), remaining(-1s)
```
After
```bash
lease 2d8257079fa1bc0c already expired
```
#### Change in `golang.org/x/net/context` imports
`clientv3` has deprecated `golang.org/x/net/context`. If a project vendors `golang.org/x/net/context` in other code (e.g. etcd generated protocol buffer code) and imports `github.com/coreos/etcd/clientv3`, it requires Go 1.9+ to compile.
Before
```go
import "golang.org/x/net/context"
cli.Put(context.Background(), "f", "v")
```
After
```go
import "context"
cli.Put(context.Background(), "f", "v")
```
#### Change in gRPC dependency
3.3 now requires [grpc/grpc-go](https://github.com/grpc/grpc-go/releases) `v1.7.5`.
##### Deprecate `grpclog.Logger`
`grpclog.Logger` has been deprecated in favor of [`grpclog.LoggerV2`](https://github.com/grpc/grpc-go/blob/master/grpclog/loggerv2.go). `clientv3.Logger` is now `grpclog.LoggerV2`.
Before
```go
import "github.com/coreos/etcd/clientv3"
clientv3.SetLogger(log.New(os.Stderr, "grpc: ", 0))
```
After
```go
import "github.com/coreos/etcd/clientv3"
import "google.golang.org/grpc/grpclog"
clientv3.SetLogger(grpclog.NewLoggerV2(os.Stderr, os.Stderr, os.Stderr))
// log.New above cannot be used (it does not implement the grpclog.LoggerV2 interface)
```
##### Deprecate `grpc.ErrClientConnTimeout`
Previously, a `grpc.ErrClientConnTimeout` error was returned on client dial time-outs. 3.3 instead returns `context.DeadlineExceeded` (see [#8504](https://github.com/coreos/etcd/issues/8504)).
```go
if err == context.DeadlineExceeded {
	// handle errors
}
```
#### Change in official container registry
etcd now uses [`gcr.io/etcd-development/etcd`](https://gcr.io/etcd-development/etcd) as a primary container registry, and [`quay.io/coreos/etcd`](https://quay.io/coreos/etcd) as secondary.
Before
```bash
docker pull quay.io/coreos/etcd:v3.2.5
```
After
```bash
docker pull gcr.io/etcd-development/etcd:v3.3.0
```
### Server upgrade checklists
#### Upgrade requirements
```
localhost:22379 is healthy: successfully committed proposal: took = 2.553476ms
localhost:32379 is healthy: successfully committed proposal: took = 2.517902ms
```
[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev


@ -0,0 +1,173 @@
---
title: Upgrade etcd from 3.3 to 3.4
---
In the general case, upgrading from etcd 3.3 to 3.4 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.3 processes and replace them with etcd v3.4 processes
- after running all v3.4 processes, new features in v3.4 are available to the cluster
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/coreos/etcd/issues/9480), etcd server v3.2+ panics when it restores from existing snapshots but there is no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. It also prevents accidental v3 data loss (e.g. the `db` file might have been moved). etcd requires that a post-v3-migration upgrade can only happen with v3 data. Do not upgrade to a newer v3 version until the v3.0 server contains v3 data.
Highlighted breaking changes in 3.4.
#### Change in `etcd` flags
The `--ca-file` and `--peer-ca-file` flags, deprecated since v2.1, are replaced by `--trusted-ca-file` and `--peer-trusted-ca-file`:
```diff
-etcd --ca-file ca-client.crt
+etcd --trusted-ca-file ca-client.crt
```
```diff
-etcd --peer-ca-file ca-peer.crt
+etcd --peer-trusted-ca-file ca-peer.crt
```
#### Change in `pkg/transport`
The `pkg/transport.TLSInfo.CAFile` field is deprecated in favor of `TrustedCAFile`:
```diff
import "github.com/coreos/etcd/pkg/transport"
tlsInfo := transport.TLSInfo{
CertFile: "/tmp/test-certs/test.pem",
KeyFile: "/tmp/test-certs/test-key.pem",
- CAFile: "/tmp/test-certs/trusted-ca.pem",
+ TrustedCAFile: "/tmp/test-certs/trusted-ca.pem",
}
tlsConfig, err := tlsInfo.ClientConfig()
if err != nil {
panic(err)
}
```
### Server upgrade checklists
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.4, the running cluster must be 3.3 or greater. If it's before 3.3, please [upgrade to 3.3](upgrade_3_3.md) before upgrading to 3.4.
Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.
#### Preparation
Always test the services relying on etcd in a staging environment before deploying the upgrade to production.
Before beginning, [backup the etcd data](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to the existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](../v2/admin_guide.md#backing-up-the-datastore).
#### Mixed versions
While upgrading, an etcd cluster supports mixed versions of etcd members, and operates with the protocol of the lowest common version. The cluster is only considered upgraded once all of its members are upgraded to version 3.4. Internally, etcd members negotiate with each other to determine the overall cluster version, which controls the reported version and the supported features.
#### Limitations
Note: If the cluster only has v3 data and no v2 data, it is not subject to this limitation.
If the cluster is serving a v2 data set larger than 50MB, each newly upgraded member may take up to two minutes to catch up with the existing cluster. Check the size of a recent snapshot to estimate the total data size. In other words, it is safest to wait more than two minutes before upgrading the next member.
For a much larger total data size, 100MB or more, this one-time process might take even longer. Administrators of very large etcd clusters of this magnitude can feel free to contact the [etcd team][etcd-contact] before upgrading, and we'll be happy to provide advice on the procedure.
#### Downgrade
If all members have been upgraded to v3.4, the cluster will be upgraded to v3.4, and downgrading from this completed state is **not possible**. If any single member is still v3.3, however, the cluster and its operations remain "v3.3", and it is possible from this mixed cluster state to return to using a v3.3 etcd binary on all members.
Please [backup the data directory](../op-guide/maintenance.md#snapshot-backup) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
### Upgrade procedure
This example shows how to upgrade a 3-member v3.3 etcd cluster running on a local machine.
#### 1. Check upgrade requirements
Is the cluster healthy and running v3.3.x?
```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:2379 is healthy: successfully committed proposal: took = 6.600684ms
localhost:22379 is healthy: successfully committed proposal: took = 8.540064ms
localhost:32379 is healthy: successfully committed proposal: took = 8.763432ms
$ curl http://localhost:2379/version
{"etcdserver":"3.3.0","etcdcluster":"3.3.0"}
```
#### 2. Stop the existing etcd process
When each etcd process is stopped, expected errors will be logged by other cluster members. This is normal since a cluster member connection has been (temporarily) broken:
```
14:13:31.491746 I | raft: c89feb932daef420 [term 3] received MsgTimeoutNow from 6d4f535bae3ab960 and starts an election to get leadership.
14:13:31.491769 I | raft: c89feb932daef420 became candidate at term 4
14:13:31.491788 I | raft: c89feb932daef420 received MsgVoteResp from c89feb932daef420 at term 4
14:13:31.491797 I | raft: c89feb932daef420 [logterm: 3, index: 9] sent MsgVote request to 6d4f535bae3ab960 at term 4
14:13:31.491805 I | raft: c89feb932daef420 [logterm: 3, index: 9] sent MsgVote request to 9eda174c7df8a033 at term 4
14:13:31.491815 I | raft: raft.node: c89feb932daef420 lost leader 6d4f535bae3ab960 at term 4
14:13:31.524084 I | raft: c89feb932daef420 received MsgVoteResp from 6d4f535bae3ab960 at term 4
14:13:31.524108 I | raft: c89feb932daef420 [quorum:2] has received 2 MsgVoteResp votes and 0 vote rejections
14:13:31.524123 I | raft: c89feb932daef420 became leader at term 4
14:13:31.524136 I | raft: raft.node: c89feb932daef420 elected leader c89feb932daef420 at term 4
14:13:31.592650 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream MsgApp v2 reader)
14:13:31.592825 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream Message reader)
14:13:31.693275 E | rafthttp: failed to dial 6d4f535bae3ab960 on stream Message (dial tcp [::1]:2380: getsockopt: connection refused)
14:13:31.693289 I | rafthttp: peer 6d4f535bae3ab960 became inactive
14:13:31.936678 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream Message writer)
```
It's a good idea at this point to [backup the etcd data](../op-guide/maintenance.md#snapshot-backup) to provide a downgrade path should any problems occur:
```
$ etcdctl snapshot save backup.db
```
#### 3. Drop in the etcd v3.4 binary and start the new etcd process
The new v3.4 etcd will publish its information to the cluster:
```
14:14:25.363225 I | etcdserver: published {Name:s1 ClientURLs:[http://localhost:2379]} to cluster a9ededbffcb1b1f1
```
Verify that each member, and then the entire cluster, becomes healthy with the new v3.4 etcd binary:
```
$ ETCDCTL_API=3 ./etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:22379 is healthy: successfully committed proposal: took = 5.540129ms
localhost:32379 is healthy: successfully committed proposal: took = 7.321771ms
localhost:2379 is healthy: successfully committed proposal: took = 10.629901ms
```
Upgraded members will log warnings like the following until the entire cluster is upgraded. This is expected and will cease after all etcd cluster members are upgraded to v3.4:
```
14:15:17.071804 W | etcdserver: member c89feb932daef420 has a higher version 3.4.0
14:15:21.073110 W | etcdserver: the local etcd version 3.3.0 is not up-to-date
14:15:21.073142 W | etcdserver: member 6d4f535bae3ab960 has a higher version 3.4.0
14:15:21.073157 W | etcdserver: the local etcd version 3.3.0 is not up-to-date
14:15:21.073164 W | etcdserver: member c89feb932daef420 has a higher version 3.4.0
```
#### 4. Repeat step 2 to step 3 for all other members
#### 5. Finish
When all members are upgraded, the cluster will report a successful upgrade to 3.4:
```
14:15:54.536901 N | etcdserver/membership: updated the cluster version from 3.3 to 3.4
14:15:54.537035 I | etcdserver/api: enabled capabilities for version 3.4
```
```
$ ETCDCTL_API=3 ./etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:2379 is healthy: successfully committed proposal: took = 2.312897ms
localhost:22379 is healthy: successfully committed proposal: took = 2.553476ms
localhost:32379 is healthy: successfully committed proposal: took = 2.517902ms
```
[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev


@ -1,4 +1,6 @@
# Upgrading etcd clusters and applications
---
title: Upgrading etcd clusters and applications
---
This section contains documents specific to upgrading etcd clusters and applications.


@ -1,36 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../docs.md#documentation
# Snapshot Migration
You can migrate a snapshot of your data from a v0.4.9+ cluster into a new etcd 2.2 cluster using a snapshot migration. After snapshot migration, the etcd indexes of your data will change. Many etcd applications rely on these indexes to behave correctly. This operation should only be done while all etcd applications are stopped.
To get started, get the newest data snapshot from the 0.4.9+ cluster:
```
curl http://cluster.example.com:4001/v2/migration/snapshot > backup.snap
```
Now, import the snapshot into your new cluster:
```
etcdctl --endpoint new_cluster.example.com import --snap backup.snap
```
If you have a large amount of data, you can specify more concurrent workers to copy data in parallel by using the `-c` flag.
If you have hidden keys to copy, you can use the `--hidden` flag to specify them. For example, fleet uses `/_coreos.com/fleet`, so to import those keys use `--hidden /_coreos.com`.
And the data will quickly copy into the new cluster:
```
entering dir: /
entering dir: /foo
entering dir: /foo/bar
copying key: /foo/bar/1 1
entering dir: /
entering dir: /foo2
entering dir: /foo2/bar2
copying key: /foo2/bar2/2 2
```


@ -1,85 +0,0 @@
# Documentation
etcd is a distributed key-value store designed to reliably and quickly preserve and provide access to critical data. It enables reliable distributed coordination through distributed locking, leader elections, and write barriers. An etcd cluster is intended for high availability and permanent data storage and retrieval.
This is the etcd v2 documentation set. For more recent versions, please see the [etcd v3 guides][etcd-v3].
## Communicating with etcd v2
Reading and writing into the etcd keyspace is done via a simple, RESTful HTTP API, or using language-specific libraries that wrap the HTTP API with higher level primitives.
### Reading and Writing
- [Client API Documentation][api]
- [Libraries, Tools, and Language Bindings][libraries]
- [Admin API Documentation][admin-api]
- [Members API][members-api]
### Security, Auth, Access control
- [Security Model][security]
- [Auth and Security][auth_api]
- [Authentication Guide][authentication]
## etcd v2 Cluster Administration
Configuration values are distributed within the cluster for your applications to read. Values can be changed programmatically and smart applications can reconfigure automatically. You'll never again have to run a configuration management tool on every machine in order to change a single config value.
### General Info
- [etcd Proxies][proxy]
- [Production Users][production-users]
- [Admin Guide][admin_guide]
- [Configuration Flags][configuration]
- [Frequently Asked Questions][faq]
### Initial Setup
- [Tuning etcd Clusters][tuning]
- [Discovery Service Protocol][discovery_protocol]
- [Running etcd under Docker][docker_guide]
### Live Reconfiguration
- [Runtime Configuration][runtime-configuration]
### Debugging etcd
- [Metrics Collection][metrics]
- [Error Code][errorcode]
- [Reporting Bugs][reporting_bugs]
### Migration
- [Upgrade etcd to 2.3][upgrade_2_3]
- [Upgrade etcd to 2.2][upgrade_2_2]
- [Upgrade to etcd 2.1][upgrade_2_1]
- [Snapshot Migration (0.4.x to 2.x)][04_to_2_snapshot_migration]
- [Backward Compatibility][backward_compatibility]
[etcd-v3]: ../docs.md
[api]: api.md
[libraries]: libraries-and-tools.md
[admin-api]: other_apis.md
[members-api]: members_api.md
[security]: security.md
[auth_api]: auth_api.md
[authentication]: authentication.md
[proxy]: proxy.md
[production-users]: production-users.md
[admin_guide]: admin_guide.md
[configuration]: configuration.md
[faq]: faq.md
[tuning]: tuning.md
[discovery_protocol]: discovery_protocol.md
[docker_guide]: docker_guide.md
[runtime-configuration]: runtime-configuration.md
[metrics]: metrics.md
[errorcode]: errorcode.md
[reporting_bugs]: reporting_bugs.md
[upgrade_2_3]: upgrade_2_3.md
[upgrade_2_2]: upgrade_2_2.md
[upgrade_2_1]: upgrade_2_1.md
[04_to_2_snapshot_migration]: 04_to_2_snapshot_migration.md
[backward_compatibility]: backward_compatibility.md


@ -1,317 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../docs.md#documentation
# Administration
## Data Directory
### Lifecycle
When first started, etcd stores its configuration in a data directory specified by the data-dir configuration parameter.
Configuration is stored in the write ahead log and includes: the local member ID, cluster ID, and initial cluster configuration.
The write ahead log and snapshot files are used during member operation and to recover after a restart.
Having a dedicated disk to store wal files can improve the throughput and stabilize the cluster.
It is highly recommended to dedicate a wal disk and set `--wal-dir` to point to a directory on that device for a production cluster deployment.
If a member's data directory is ever lost or corrupted then the user should [remove][remove-a-member] the etcd member from the cluster using the `etcdctl` tool.
A user should avoid restarting an etcd member with a data directory from an out-of-date backup.
Using an out-of-date data directory can lead to inconsistency, as the member previously agreed to store information via raft and, upon re-joining, claims it needs that information again.
For maximum safety, if an etcd member suffers any sort of data corruption or loss, it must be removed from the cluster.
Once removed the member can be re-added with an empty data directory.
### Contents
The data directory has two sub-directories in it:
1. wal: write ahead log files are stored here. For details see the [wal package documentation][wal-pkg]
2. snap: log snapshots are stored here. For details see the [snap package documentation][snap-pkg]
If the `--wal-dir` flag is set, etcd will write the write ahead log files to the specified directory instead of the data directory.
## Cluster Management
### Lifecycle
If you are spinning up multiple clusters for testing, it is recommended that you specify a unique initial-cluster-token for the different clusters.
This can protect you from cluster corruption in case of mis-configuration, because two members started with different cluster tokens will refuse requests from each other.
### Monitoring
It is important to monitor your production etcd cluster for health information and runtime metrics.
#### Health Monitoring
At the lowest level, etcd exposes health information via HTTP at `/health` in JSON format. If it returns `{"health":true}`, then the cluster is healthy.
```
$ curl -L http://127.0.0.1:2379/health
{"health":true}
```
You can also use etcdctl to check the cluster-wide health information. It will contact all the members of the cluster and collect the health information for you.
```
$ ./etcdctl cluster-health
member 8211f1d0f64f3269 is healthy: got healthy result from http://127.0.0.1:12379
member 91bc3c398fb3c146 is healthy: got healthy result from http://127.0.0.1:22379
member fd422379fda50e48 is healthy: got healthy result from http://127.0.0.1:32379
cluster is healthy
```
#### Runtime Metrics
etcd uses [Prometheus][prometheus] for metrics reporting in the server. You can read more through the runtime metrics [doc][metrics].
### Debugging
Debugging a distributed system can be difficult. etcd provides several ways to make debugging easier.
#### Enabling Debug Logging
When you want to debug etcd without stopping it, you can enable debug logging at runtime.
etcd exposes logging configuration at `/config/local/log`.
```
$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"DEBUG"}'
$ # debug logging enabled
$
$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"INFO"}'
$ # debug logging disabled
```
#### Debugging Variables
Debug variables are exposed for real-time debugging purposes. Developers who are familiar with etcd can utilize these variables to debug unexpected behavior. etcd exposes debug variables via HTTP at `/debug/vars` in JSON format. The debug variables contain `cmdline`, `file_descriptor_limit`, `memstats` and `raft.status`.
`cmdline` is the command line arguments passed into etcd.
`file_descriptor_limit` is the max number of file descriptors etcd can utilize.
`memstats` is explained in detail in the [Go runtime documentation][golang-memstats].
`raft.status` is useful when you want to debug low level raft issues if you are familiar with raft internals. In most cases, you do not need to check `raft.status`.
```json
{
"cmdline": ["./etcd"],
"file_descriptor_limit": 0,
"memstats": {"Alloc":4105744,"TotalAlloc":42337320,"Sys":12560632,"...":"..."},
"raft.status": {"id":"ce2a822cea30bfca","term":5,"vote":"ce2a822cea30bfca","commit":23509,"lead":"ce2a822cea30bfca","raftState":"StateLeader","progress":{"ce2a822cea30bfca":{"match":23509,"next":23510,"state":"ProgressStateProbe"}}}
}
```
### Optimal Cluster Size
The recommended etcd cluster size is 3, 5 or 7, which is decided by the fault tolerance requirement. A 7-member cluster can provide enough fault tolerance in most cases. While a larger cluster provides better fault tolerance, write performance drops since data must be replicated to more machines.
#### Fault Tolerance Table
It is recommended to have an odd number of members in a cluster. Having an odd cluster size doesn't change the number needed for majority, but you gain a higher tolerance for failure by adding the extra member. You can see this in practice when comparing even and odd sized clusters:
| Cluster Size | Majority | Failure Tolerance |
|--------------|------------|-------------------|
| 1 | 1 | 0 |
| 2 | 2 | 0 |
| 3 | 2 | **1** |
| 4 | 3 | 1 |
| 5 | 3 | **2** |
| 6 | 4 | 2 |
| 7 | 4 | **3** |
| 8 | 5 | 3 |
| 9 | 5 | **4** |
As you can see, adding another member to bring the size of cluster up to an odd size is always worth it. During a network partition, an odd number of members also guarantees that there will almost always be a majority of the cluster that can continue to operate and be the source of truth when the partition ends.
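The majority and failure-tolerance columns follow directly from integer arithmetic: a quorum is `n/2 + 1` members, and the tolerance is whatever remains. A minimal Go sketch reproducing the table (illustrative helper names, not part of etcd):
```go
package main

import "fmt"

// majority returns the quorum size of an n-member cluster.
func majority(n int) int { return n/2 + 1 }

// tolerance returns how many members may fail while a quorum survives.
func tolerance(n int) int { return n - majority(n) }

func main() {
	for n := 1; n <= 9; n++ {
		fmt.Printf("size=%d majority=%d tolerance=%d\n", n, majority(n), tolerance(n))
	}
}
```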
#### Changing Cluster Size
After your cluster is up and running, adding or removing members is done via [runtime reconfiguration][runtime-reconfig], which allows the cluster to be modified without downtime. The `etcdctl` tool has `member list`, `member add` and `member remove` commands to complete this process.
### Member Migration
When there is a scheduled machine maintenance or retirement, you might want to migrate an etcd member to another machine without losing its data or changing the member ID.
The data directory contains all the data to recover a member to its point-in-time state. To migrate a member:
* Stop the member process.
* Copy the data directory of the now-idle member to the new machine.
* Update the peer URLs for the replaced member to reflect the new machine according to the [runtime reconfiguration instructions][update-a-member].
* Start etcd on the new machine, using the same configuration and the copy of the data directory.
This example will walk you through the process of migrating the infra1 member to a new machine:
|Name|Peer URL|
|------|--------------|
|infra0|10.0.1.10:2380|
|infra1|10.0.1.11:2380|
|infra2|10.0.1.12:2380|
```sh
$ export ETCDCTL_ENDPOINT=http://10.0.1.10:2379,http://10.0.1.11:2379,http://10.0.1.12:2379
```
```sh
$ etcdctl member list
84194f7c5edd8b37: name=infra0 peerURLs=http://10.0.1.10:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.10:2379
b4db3bf5e495e255: name=infra1 peerURLs=http://10.0.1.11:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.11:2379
bc1083c870280d44: name=infra2 peerURLs=http://10.0.1.12:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.12:2379
```
#### Stop the member etcd process
```sh
$ ssh 10.0.1.11
```
```sh
$ kill `pgrep etcd`
```
#### Copy the data directory of the now-idle member to the new machine
```
$ tar -cvzf infra1.etcd.tar.gz %data_dir%
```
```sh
$ scp infra1.etcd.tar.gz 10.0.1.13:~/
```
#### Update the peer URLs for that member to reflect the new machine
```sh
$ curl http://10.0.1.10:2379/v2/members/b4db3bf5e495e255 -XPUT \
-H "Content-Type: application/json" -d '{"peerURLs":["http://10.0.1.13:2380"]}'
```
Or use the `etcdctl member update` command:
```sh
$ etcdctl member update b4db3bf5e495e255 http://10.0.1.13:2380
```
#### Start etcd on the new machine, using the same configuration and the copy of the data directory
```sh
$ ssh 10.0.1.13
```
```sh
$ tar -xzvf infra1.etcd.tar.gz -C %data_dir%
```
```
etcd -name infra1 \
-listen-peer-urls http://10.0.1.13:2380 \
-listen-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379
```
### Disaster Recovery
etcd is designed to be resilient to machine failures. An etcd cluster can automatically recover from any number of temporary failures (for example, machine reboots), and a cluster of N members can tolerate up to _(N-1)/2_ permanent failures (where a member can no longer access the cluster, due to hardware failure or disk corruption). However, in extreme circumstances, a cluster might permanently lose enough members such that quorum is irrevocably lost. For example, if a three-node cluster suffered two simultaneous and unrecoverable machine failures, it would normally be impossible for the cluster to restore quorum and continue functioning.
To recover from such scenarios, etcd provides functionality to backup and restore the datastore and recreate the cluster without data loss.
#### Backing up the datastore
**Note:** Windows users must stop etcd before running the backup command.
The first step of the recovery is to backup the data directory and wal directory, if stored separately, on a functioning etcd node. To do this, use the `etcdctl backup` command, passing in the original data (and wal) directory used by etcd. For example:
```sh
etcdctl backup \
--data-dir %data_dir% \
[--wal-dir %wal_dir%] \
--backup-dir %backup_data_dir%
[--backup-wal-dir %backup_wal_dir%]
```
This command will rewrite some of the metadata contained in the backup (specifically, the node ID and cluster ID), which means that the node will lose its former identity. In order to recreate a cluster from the backup, you will need to start a new, single-node cluster. The metadata is rewritten to prevent the new node from inadvertently being joined onto an existing cluster.
#### Restoring a backup
To restore a backup using the procedure created above, start etcd with the `-force-new-cluster` option, pointing it at the backup directory. This will initialize a new, single-member cluster with the default advertised peer URLs, but preserve the entire contents of the etcd data store. Continuing from the previous example:
```sh
etcd \
-data-dir=%backup_data_dir% \
[-wal-dir=%backup_wal_dir%] \
-force-new-cluster \
...
```
Now etcd should be available on this node and serving the original datastore.
Once you have verified that etcd has started successfully, shut it down and move the data and wal, if stored separately, back to the previous location (you may wish to make another copy as well to be safe):
```sh
pkill etcd
rm -fr %data_dir%
rm -fr %wal_dir%
mv %backup_data_dir% %data_dir%
mv %backup_wal_dir% %wal_dir%
etcd \
-data-dir=%data_dir% \
[-wal-dir=%wal_dir%] \
...
```
#### Restoring the cluster
Now that the node is running successfully, [change its advertised peer URLs][update-a-member], as the `--force-new-cluster` option has set the peer URL to the default listening on localhost.
You can then add more nodes to the cluster and restore resiliency. See the [add a new member][add-a-member] guide for more details.
**Note:** If you are trying to restore your cluster using old failed etcd nodes, please make sure you have stopped old etcd instances and removed their old data directories specified by the data-dir configuration parameter.
### Client Request Timeout
etcd sets different timeouts for various types of client requests. The timeout value is not tunable now; this will be improved soon (https://github.com/coreos/etcd/issues/2038).
#### Get requests
Timeout is not set for get requests, because etcd serves the result locally in a non-blocking way.
**Note**: QuorumGet requests are a different type, which is mentioned in the following sections.
#### Watch requests
Timeout is not set for watch requests. etcd will not stop a watch request until client cancels it, or the connection is broken.
#### Delete, Put, Post, QuorumGet requests
The default timeout is 5 seconds. It should be large enough to allow all key modifications if the majority of the cluster is functioning.
If the request times out, it indicates two possibilities:
1. the server the request sent to was not functioning at that time.
2. the majority of the cluster is not functioning.
If timeouts happen several times in a row, administrators should check the status of the cluster and resolve the issue as soon as possible.
### Best Practices
#### Maximum OS threads
By default, etcd uses the default configuration of the Go 1.4 runtime, which means that at most one operating system thread will be used to execute code simultaneously. (Note that this default behavior [has changed in Go 1.5][golang1.5-runtime]).
When using etcd in heavy-load scenarios on machines with multiple cores it will usually be desirable to increase the number of threads that etcd can utilize. To do this, simply set the environment variable GOMAXPROCS to the desired number when starting etcd. For more information on this variable, see the [Go runtime documentation][golang-runtime].
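As a quick way to confirm the effective setting, any Go program (etcd included) can query it via the runtime package; a small sketch:
```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// runtime.GOMAXPROCS(0) queries the current value without changing it;
	// it reflects the GOMAXPROCS environment variable, e.g. `GOMAXPROCS=2 etcd`.
	fmt.Println("GOMAXPROCS =", runtime.GOMAXPROCS(0))
}
```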
[add-a-member]: runtime-configuration.md#add-a-new-member
[golang1.5-runtime]: https://golang.org/doc/go1.5#runtime
[golang-memstats]: https://golang.org/pkg/runtime/#MemStats
[golang-runtime]: https://golang.org/pkg/runtime
[metrics]: metrics.md
[prometheus]: http://prometheus.io/
[remove-a-member]: runtime-configuration.md#remove-a-member
[runtime-reconfig]: runtime-configuration.md#cluster-reconfiguration-operations
[snap-pkg]: http://godoc.org/github.com/coreos/etcd/snap
[update-a-member]: runtime-configuration.md#update-a-member
[wal-pkg]: http://godoc.org/github.com/coreos/etcd/wal



@ -1,97 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../docs.md#documentation
# etcd3 API
TODO: API doc
## Data Model
etcd is designed to reliably store infrequently updated data and provide reliable watch queries. etcd exposes previous versions of key-value pairs to support inexpensive snapshots and watch history events (“time travel queries”). A persistent, multi-version, concurrency-control data model is a good fit for these use cases.
etcd stores data in a multiversion [persistent][persistent-ds] key-value store. The persistent key-value store preserves the previous version of a key-value pair when its value is superseded with new data. The key-value store is effectively immutable; its operations do not update the structure in place, but instead always generate a new updated structure. All past versions of keys are still accessible and watchable after modification. To prevent the data store from growing indefinitely over time from maintaining old versions, the store may be compacted to shed the oldest versions of superseded data.
### Logical View
The store's logical view is a flat binary key space. The key space has a lexically sorted index on byte string keys, so range queries are inexpensive.
The key space maintains multiple revisions. Each atomic mutative operation (e.g., a transaction operation may contain multiple operations) creates a new revision on the key space. All data held by previous revisions remains unchanged. Old versions of keys can still be accessed through previous revisions. Likewise, revisions are indexed as well; ranging over revisions with watchers is efficient. If the store is compacted to recover space, revisions before the compact revision will be removed.
A key's lifetime spans a generation. Each key may have one or multiple generations. Creating a key increments the generation of that key, starting at 1 if the key never existed. Deleting a key generates a key tombstone, concluding the key's current generation. Each modification of a key creates a new version of the key. Once a compaction happens, any generation that ended before the compaction revision will be removed, and values set before the compaction revision, except the latest one, will be removed.
### Physical View
etcd stores the physical data as key-value pairs in a persistent [b+tree][b+tree]. Each revision of the store's state only contains the delta from its previous revision, to be efficient. A single revision may correspond to multiple keys in the tree.
The key of each key-value pair is a 3-tuple (major, sub, type). Major is the store revision holding the key. Sub differentiates among keys within the same revision. Type is an optional suffix for special values (e.g., `t` if the value contains a tombstone). The value of the key-value pair contains the modification from the previous revision, thus one delta from the previous revision. The b+tree is ordered by key in lexical byte order. Ranged lookups over revision deltas are fast; this enables quickly finding modifications from one specific revision to another. Compaction removes out-of-date key-value pairs.
etcd also keeps a secondary in-memory [btree][btree] index to speed up range queries over keys. The keys in the btree index are the keys of the store exposed to the user. The value is a pointer to the modification of the persistent b+tree. Compaction removes dead pointers.
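To make the (major, sub, type) keying concrete, below is a hedged sketch of one way such a revision key can be encoded so that lexical byte order matches revision order; it is an illustration of the idea, not etcd's exact on-disk layout:
```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// revision identifies one modification: main is the store revision,
// sub differentiates keys changed by the same atomic operation.
type revision struct {
	main, sub int64
}

// toBytes encodes a revision as a fixed-width big-endian key, so that
// comparing the bytes lexically compares (main, sub) numerically.
// A tombstone gets a trailing 't', playing the role of the "type" suffix.
func toBytes(rev revision, tombstone bool) []byte {
	buf := make([]byte, 17, 18)
	binary.BigEndian.PutUint64(buf[0:8], uint64(rev.main))
	buf[8] = '_'
	binary.BigEndian.PutUint64(buf[9:17], uint64(rev.sub))
	if tombstone {
		buf = append(buf, 't')
	}
	return buf
}

func main() {
	a := toBytes(revision{main: 5, sub: 0}, false)
	b := toBytes(revision{main: 5, sub: 1}, true)
	fmt.Println(bytes.Compare(a, b) < 0) // true: byte order follows revision order
}
```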
## KV API Guarantees
etcd is a consistent and durable key value store with mini-transaction (TODO: link to txn doc when we have it) support. The key value store is exposed through the KV APIs. etcd tries to ensure the strongest consistency and durability guarantees for a distributed system. This specification enumerates the KV API guarantees made by etcd.
### APIs to consider
* Read APIs
* range
* watch
* Write APIs
* put
* delete
* Combination (read-modify-write) APIs
* txn
### etcd Specific Definitions
#### operation completed
An etcd operation is considered complete when it is committed through consensus, and therefore “executed” -- permanently stored -- by the etcd storage engine. The client knows an operation is completed when it receives a response from the etcd server. Note that the client may be uncertain about the status of an operation if it times out, or there is a network disruption between the client and the etcd member. etcd may also abort operations when there is a leader election. etcd does not send `abort` responses to clients' outstanding requests in this event.
#### revision
An etcd operation that modifies the key value store is assigned a single increasing revision. A transaction operation might modify the key value store multiple times, but only one revision is assigned. The revision attribute of a key value pair modified by the operation has the same value as the revision of the operation. The revision can be used as a logical clock for the key value store. A key value pair that has a larger revision was modified after a key value pair with a smaller revision. Two key value pairs that have the same revision were modified by an operation "concurrently".
### Guarantees Provided
#### Atomicity
All API requests are atomic; an operation either completes entirely or not at all. For watch requests, all events generated by one operation will be in one watch response. Watch never observes partial events for a single operation.
#### Consistency
All API calls ensure [sequential consistency][seq_consistency], the strongest consistency guarantee available from distributed systems. No matter which etcd member server a client makes requests to, a client reads the same events in the same order. If two members complete the same number of operations, the state of the two members is consistent.
For watch operations, etcd guarantees to return the same value for the same key across all members for the same revision. For range operations, etcd has a similar guarantee for [linearized][Linearizability] access; serialized access may be behind the quorum state, so that the later revision is not yet available.
As with all distributed systems, it is impossible for etcd to ensure [strict consistency][strict_consistency]. etcd does not guarantee that it will return to a read the “most recent” value (as measured by a wall clock when a request is completed) available on any cluster member.
#### Isolation
etcd ensures [serializable isolation][serializable_isolation], which is the highest isolation level available in distributed systems. Read operations will never observe any intermediate data.
#### Durability
Any completed operations are durable. All accessible data is also durable data. A read will never return data that has not been made durable.
#### Linearizability
Linearizability (also known as Atomic Consistency or External Consistency) is a consistency level between strict consistency and sequential consistency.
For linearizability, suppose each operation receives a timestamp from a loosely synchronized global clock. Operations are linearized if and only if they always complete as though they were executed in a sequential order and each operation appears to complete in the order specified by the program. Likewise, if an operation's timestamp precedes another's, that operation must also precede the other operation in the sequence.
For example, consider a client completing a write at time point 1 (*t1*). A client issuing a read at *t2* (for *t2* > *t1*) should receive a value at least as recent as the previous write, completed at *t1*. However, the read might actually complete only by *t3*, and the returned value, current at *t2* when the read began, might be "stale" by *t3*.
etcd does not ensure linearizability for watch operations. Users are expected to verify the revision of watch responses to ensure correct ordering.
etcd ensures linearizability for all other operations by default. Linearizability comes with a cost, however, because linearized requests must go through the Raft consensus process. To obtain lower latencies and higher throughput for read requests, clients can configure a request's consistency mode to `serializable`, which may access stale data with respect to quorum, but removes the performance penalty of linearized accesses' reliance on live consensus.
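For example, the v3 Go client exposes this per request through the `clientv3.WithSerializable` option (the sketch assumes an already-constructed `cli *clientv3.Client`):
```go
import (
	"context"

	"github.com/coreos/etcd/clientv3"
)

// serializableGet reads "foo" from the local member's state instead of
// going through raft consensus: lower latency, possibly stale data.
func serializableGet(cli *clientv3.Client) error {
	_, err := cli.Get(context.Background(), "foo", clientv3.WithSerializable())
	return err
}
```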
[persistent-ds]: https://en.wikipedia.org/wiki/Persistent_data_structure
[btree]: https://en.wikipedia.org/wiki/B-tree
[b+tree]: https://en.wikipedia.org/wiki/B%2B_tree
[seq_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Sequential_consistency
[strict_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Strict_consistency
[serializable_isolation]: https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable
[Linearizability]: #linearizability


@ -1,516 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../docs.md#documentation
# v2 Auth and Security
## etcd Resources
There are three types of resources in etcd:
1. permission resources: users and roles in the user store
2. key-value resources: key-value pairs in the key-value store
3. settings resources: security settings, auth settings, and dynamic etcd cluster settings (election/heartbeat)
### Permission Resources
#### Users
A user is an identity to be authenticated. Each user can have multiple roles. The user has a capability (such as reading or writing) on the resource if one of the roles has that capability.
A user named `root` is required before authentication can be enabled, and it always has the ROOT role. The ROOT role can be granted to multiple users, but `root` is required for recovery purposes.
#### Roles
Each role has exactly one associated Permission List. A permission list exists for each permission on key-value resources.
The special static ROOT role (named `root`) has full permissions on all key-value resources and the permission to manage user resources and settings resources. Only the ROOT role has the permission to manage user resources and modify settings resources. The ROOT role is built-in and does not need to be created.
There is also a special GUEST role, named `guest`. These are the permissions given to unauthenticated requests to etcd. This role will be created automatically, and by default allows access to the full keyspace due to backward compatibility (etcd did not previously authenticate any actions). This role can be modified by a ROOT role holder at any time, to reduce the capabilities of unauthenticated users.
#### Permissions
There are two types of permissions, `read` and `write`. All management and settings require the ROOT role.
A Permission List is a list of allowed patterns for that particular permission (read or write). Only ALLOW prefixes are supported. DENY becomes more complicated and is TBD.
### Key-Value Resources
A key-value resource is a key-value pair in the store. Given a list of matching patterns, permission for any given key in a request is granted if any of the patterns in the list match.
Only prefixes or exact keys are supported. A prefix permission string ends in `*`.
A permission on `/foo` is for that exact key or directory, not its children or recursively. `/foo*` is a prefix that matches `/foo` recursively, all keys thereunder, and all keys with that prefix (e.g. `/foobar`; contrast with the prefix `/foo/*`). `*` alone is permission on the full keyspace.
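The matching rule can be stated compactly in code; a hedged sketch of the semantics described above (hypothetical helper, not etcd's implementation):
```go
import "strings"

// matches reports whether a granted pattern covers a key: a pattern
// ending in "*" grants every key carrying the preceding prefix,
// otherwise only the exact key is granted.
func matches(pattern, key string) bool {
	if strings.HasSuffix(pattern, "*") {
		return strings.HasPrefix(key, strings.TrimSuffix(pattern, "*"))
	}
	return pattern == key
}
```
Under this rule `/foo*` covers `/foo`, `/foo/bar`, and `/foobar`, while `/foo/*` covers only keys under `/foo/`.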
### Settings Resources
Specific settings for the cluster as a whole. This can include adding and removing cluster members, enabling or disabling authentication, replacing certificates, and any other dynamic configuration by the administrator (holder of the ROOT role).
## v2 Auth
### Basic Auth
We only support [Basic Auth][basic-auth] for the first version. The client needs to attach the basic auth credentials to the HTTP Authorization header.
### Authorization field for operations
Added to requests to /v2/keys, /v2/auth
Add code 401 Unauthorized to the set of responses from the v2 API
Authorization: Basic {encoded string}
### Future Work
Other types of auth can be considered for the future (eg, signed certs, public keys) but the `Authorization:` header allows for other such types
### Things out of Scope for etcd Permissions
* Pluggable AUTH backends like LDAP (other Authorization tokens generated by LDAP et al may be a possibility)
* Very fine-grained access controls (eg: users modifying keys outside work hours)
## API endpoints
An Error JSON corresponds to:
{
"name": "ErrErrorName",
"description" : "The longer helpful description of the error."
}
#### Enable and Disable Authentication
**Get auth status**
GET /v2/auth/enable
Sent Headers:
Possible Status Codes:
200 OK
200 Body:
{
"enabled": true
}
**Enable auth**
PUT /v2/auth/enable
Sent Headers:
Put Body: (empty)
Possible Status Codes:
200 OK
400 Bad Request (if root user has not been created)
409 Conflict (already enabled)
200 Body: (empty)
**Disable auth**
DELETE /v2/auth/enable
Sent Headers:
Authorization: Basic <RootAuthString>
Possible Status Codes:
200 OK
401 Unauthorized (if not a root user)
409 Conflict (already disabled)
200 Body: (empty)
#### Users
The User JSON object is formed as follows:
```
{
"user": "userName",
"password": "password",
"roles": [
"role1",
"role2"
],
"grant": [],
"revoke": []
}
```
Password is only passed when necessary.
**Get a List of Users**
GET/HEAD /v2/auth/users
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
200 Headers:
Content-type: application/json
200 Body:
{
"users": [
{
"user": "alice",
"roles": [
{
"role": "root",
"permissions": {
"kv": {
"read": ["/*"],
"write": ["/*"]
}
}
}
]
},
{
"user": "bob",
"roles": [
{
"role": "guest",
"permissions": {
"kv": {
"read": ["/*"],
"write": ["/*"]
}
}
}
]
}
]
}
**Get User Details**
GET/HEAD /v2/auth/users/alice
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
404 Not Found
200 Headers:
Content-type: application/json
200 Body:
{
"user" : "alice",
"roles" : [
{
"role": "fleet",
"permissions" : {
"kv" : {
"read": [ "/fleet/" ],
"write": [ "/fleet/" ]
}
}
},
{
"role": "etcd",
"permissions" : {
"kv" : {
"read": [ "/*" ],
"write": [ "/*" ]
}
}
}
]
}
**Create Or Update A User**
A user can be created with initial roles, if filled in. However, no roles are required; only the username and password fields are.
PUT /v2/auth/users/charlie
Sent Headers:
Authorization: Basic <BasicAuthString>
Put Body:
JSON struct, above, matching the appropriate name
* Starting password and roles when creating.
* Grant/Revoke/Password filled in when updating (to grant roles, revoke roles, or change the password).
Possible Status Codes:
200 OK
201 Created
400 Bad Request
401 Unauthorized
404 Not Found (update non-existent users)
409 Conflict (when granting duplicated roles or revoking non-existent roles)
200 Headers:
Content-type: application/json
200 Body:
JSON state of the user
**Remove A User**
DELETE /v2/auth/users/charlie
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
403 Forbidden (remove root user when auth is enabled)
404 Not Found
200 Headers:
200 Body: (empty)
#### Roles
A full role structure may look like this. A Permission List structure is used for the "permissions", "grant", and "revoke" keys.
```
{
"role" : "fleet",
"permissions" : {
"kv" : {
"read" : [ "/fleet/" ],
"write": [ "/fleet/" ]
}
},
"grant" : {"kv": {...}},
"revoke": {"kv": {...}}
}
```
**Get Role Details**
GET/HEAD /v2/auth/roles/fleet
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
404 Not Found
200 Headers:
Content-type: application/json
200 Body:
{
"role" : "fleet",
"permissions" : {
"kv" : {
"read": [ "/fleet/" ],
"write": [ "/fleet/" ]
}
}
}
**Get a list of Roles**
GET/HEAD /v2/auth/roles
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
200 Headers:
Content-type: application/json
200 Body:
{
"roles": [
{
"role": "fleet",
"permissions": {
"kv": {
"read": ["/fleet/"],
"write": ["/fleet/"]
}
}
},
{
"role": "etcd",
"permissions": {
"kv": {
"read": ["/*"],
"write": ["/*"]
}
}
},
{
"role": "quay",
"permissions": {
"kv": {
"read": ["/*"],
"write": ["/*"]
}
}
}
]
}
**Create Or Update A Role**
PUT /v2/auth/roles/rkt
Sent Headers:
Authorization: Basic <BasicAuthString>
Put Body:
Initial desired JSON state, including the role name for verification and:
* Starting permission set if creating
* Granted/Revoked permission set if updating
Possible Status Codes:
200 OK
201 Created
400 Bad Request
401 Unauthorized
404 Not Found (update non-existent roles)
409 Conflict (when granting duplicated permission or revoking non-existent permission)
200 Body:
JSON state of the role
**Remove A Role**
DELETE /v2/auth/roles/rkt
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
403 Forbidden (remove root)
404 Not Found
200 Headers:
200 Body: (empty)
## Example Workflow
Let's walk through an example to show two tenants (applications, in our case) using etcd permissions.
### Create root role
```
PUT /v2/auth/users/root
Put Body:
{"user" : "root", "password": "betterRootPW!"}
```
### Enable auth
```
PUT /v2/auth/enable
```
### Modify guest role (revoke write permission)
```
PUT /v2/auth/roles/guest
Headers:
Authorization: Basic <root:betterRootPW!>
Put Body:
{
"role" : "guest",
"revoke" : {
"kv" : {
"write": [
"/*"
]
}
}
}
```
### Create Roles for the Applications
Create the rkt role fully specified:
```
PUT /v2/auth/roles/rkt
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{
"role" : "rkt",
"permissions" : {
"kv": {
"read": [
"/rkt/*"
],
"write": [
"/rkt/*"
]
}
}
}
```
But let's make fleet just a basic role for now:
```
PUT /v2/auth/roles/fleet
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{
"role" : "fleet"
}
```
### Optional: Grant some permissions to the roles
Well, we finally figured out where we want fleet to live. Let's fix it.
(Note that we avoided this in the rkt case. So this step is optional.)
```
PUT /v2/auth/roles/fleet
Headers:
Authorization: Basic <root:betterRootPW!>
Put Body:
{
"role" : "fleet",
"grant" : {
"kv" : {
"read": [
"/rkt/fleet",
"/fleet/*"
]
}
}
}
```
### Create Users
Same as before, let's use rocket all at once and fleet separately.
```
PUT /v2/auth/users/rktuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user" : "rktuser", "password" : "rktpw", "roles" : ["rkt"]}
```
```
PUT /v2/auth/users/fleetuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user" : "fleetuser", "password" : "fleetpw"}
```
### Optional: Grant Roles to Users
Likewise, let's explicitly grant fleetuser access.
```
PUT /v2/auth/users/fleetuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user": "fleetuser", "grant": ["fleet"]}
```
#### Start to use fleetuser and rktuser
For example:
```
PUT /v2/keys/rkt/RktData
Headers:
Authorization: Basic <rktuser:rktpw>
Body:
value=launch
```
Reads and writes outside the prefixes granted will fail with a 401 Unauthorized.
[basic-auth]: https://en.wikipedia.org/wiki/Basic_access_authentication


@ -1,185 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../docs.md#documentation
# Authentication Guide
## Overview
Authentication -- having users and roles in etcd -- was added in etcd 2.1. This guide will help you set up basic authentication in etcd.
etcd before 2.1 was a completely open system; anyone with access to the API could change keys. In order to preserve backward compatibility and upgradability, this feature is off by default.
For a full discussion of the RESTful API, see [the authentication API documentation][auth-api].
## Special Users and Roles
There is one special user, `root`, and there are two special roles, `root` and `guest`.
### User `root`
User `root` must be created before security can be activated. It has the `root` role and allows for the changing of anything inside etcd. The idea behind the `root` user is for recovery purposes -- a password is generated and stored somewhere -- and the root role is granted to the administrator accounts on the system. In the future, for troubleshooting and recovery, we will need to assume some access to the system, and future documentation will assume this root user (though anyone with the role will suffice).
### Role `root`
Role `root` cannot be modified, but it may be granted to any user. Having access via the root role not only allows global read-write access (as was the case before 2.1) but allows modification of the authentication policy and all administrative things, like modifying the cluster membership.
### Role `guest`
The `guest` role defines the permissions granted to any request that does not provide an authentication. This will be created on security activation (if it doesn't already exist) to have full access to all keys, as was true in etcd 2.0. It may be modified at any time, and cannot be removed.
## Working with users
The `user` subcommand for `etcdctl` handles all things having to do with user accounts.
A listing of users can be found with
```
$ etcdctl user list
```
Creating a user is as easy as
```
$ etcdctl user add myusername
```
And there will be a prompt for a new password.
Roles can be granted and revoked for a user with
```
$ etcdctl user grant myusername -roles foo,bar,baz
$ etcdctl user revoke myusername -roles bar,baz
```
We can look at this user with
```
$ etcdctl user get myusername
```
And the password for a user can be changed with
```
$ etcdctl user passwd myusername
```
Which will prompt again for a new password.
To delete an account, there's always
```
$ etcdctl user remove myusername
```
## Working with roles
The `role` subcommand for `etcdctl` handles all things having to do with access controls for particular roles, as were granted to individual users.
A listing of roles can be found with
```
$ etcdctl role list
```
A new role can be created with
```
$ etcdctl role add myrolename
```
A role has no password; we are merely defining a new set of access rights.
Roles are granted access to various parts of the keyspace, a single path at a time.
Reading a path is simple; if the path ends in `*`, that key **and all keys prefixed with it**, are granted to holders of this role. If it does not end in `*`, only that key and that key alone is granted.
Access can be granted as either read, write, or both, as in the following examples:
```
# Give read access to keys under the /foo directory
$ etcdctl role grant myrolename -path '/foo/*' -read
# Give write-only access to the key at /foo/bar
$ etcdctl role grant myrolename -path '/foo/bar' -write
# Give full access to keys under /pub
$ etcdctl role grant myrolename -path '/pub/*' -readwrite
```
Beware that
```
# Give full access to keys under /pub??
$ etcdctl role grant myrolename -path '/pub*' -readwrite
```
Without the slash, this may include keys under `/publishing`, for example. To do both, grant `/pub` and `/pub/*`.
To see what's granted, we can look at the role at any time:
```
$ etcdctl role get myrolename
```
Revocation of permissions is done the same logical way:
```
$ etcdctl role revoke myrolename -path '/foo/bar' -write
```
As is removing a role entirely
```
$ etcdctl role remove myrolename
```
## Enabling authentication
The minimal steps to enabling auth are as follows. The administrator can set up users and roles before or after enabling authentication, as a matter of preference.
Make sure the root user is created:
```
$ etcdctl user add root
New password:
```
And enable authentication
```
$ etcdctl auth enable
```
After this, etcd is running with authentication enabled. To disable it for any reason, use the reciprocal command:
```
$ etcdctl -u root:rootpw auth disable
```
It would also be good to check what guests (unauthenticated users) are allowed to do:
```
$ etcdctl -u root:rootpw role get guest
```
And modify this role appropriately, depending on your policies.
## Using `etcdctl` to authenticate
`etcdctl` supports a flag similar to `curl`'s for authentication.
```
$ etcdctl -u user:password get foo
```
or if you prefer to be prompted:
```
$ etcdctl -u user get foo
```
Otherwise, all `etcdctl` commands remain the same. Users and roles can still be created and modified, but require authentication by a user with the root role.
[auth-api]: auth_api.md


@ -1,77 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../docs.md#documentation
# Backward Compatibility
The main goal of the etcd 2.0 release is to improve cluster safety around bootstrapping and dynamic reconfiguration. To do this, we deprecated the old error-prone APIs and provide a new set of APIs.
The other main focus of this release was a more reliable Raft implementation, but as this change is internal it should not have any notable effects on users.
## Command Line Flags Changes
The major flag changes are mostly related to bootstrapping. The `initial-*` flags provide an improved way to specify the required criteria to start the cluster. The advertised URLs now support a list of values instead of a single value, which allows etcd users to gracefully migrate to the new set of IANA-assigned ports (2379/client and 2380/peers) while maintaining backward compatibility with the old ports.
- `-addr` is replaced by `-advertise-client-urls`.
- `-bind-addr` is replaced by `-listen-client-urls`.
- `-peer-addr` is replaced by `-initial-advertise-peer-urls`.
- `-peer-bind-addr` is replaced by `-listen-peer-urls`.
- `-peers` is replaced by `-initial-cluster`.
- `-peers-file` is replaced by `-initial-cluster`.
- `-peer-heartbeat-interval` is replaced by `-heartbeat-interval`.
- `-peer-election-timeout` is replaced by `-election-timeout`.
The documentation of new command line flags can be found at
https://github.com/coreos/etcd/blob/master/Documentation/v2/configuration.md.
## Data Directory Naming
The default data dir location has changed from {$hostname}.etcd to {name}.etcd.
## Key-Value API
### Read consistency flag
The consistent flag for read operations is removed in etcd 2.0.0. The normal read operations provide the same consistency guarantees as the 0.4.6 read operations with the consistent flag set.
The read consistency guarantees are:
A consistent read guarantees sequential consistency within one client that talks to one etcd server: reads and writes from one client to one etcd member are observed in order, and if a client successfully writes a value to an etcd server, it can immediately read that value back from the same server.
Each etcd member proxies the request to the leader and only returns the result to the user after the result is applied on the local member. Thus, after the write succeeds, the user is guaranteed to see the value on the member it sent the request to.
Reads do not provide linearizability. If you want linearizable reads, you need to set the quorum option to true.
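For example, a quorum read through the HTTP API might look like this (a sketch; the endpoint address is a placeholder):
```
$ curl 'http://127.0.0.1:2379/v2/keys/foo?quorum=true'
```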
**Previous behavior**
We added an option for consistent reads in the old version of etcd because etcd 0.x redirects write requests to the leader. When the user gets the result back from the leader, the member it originally sent the request to might not have applied the write yet. With the consistent flag set to true, the client always sends read requests to the leader, so one client is able to see its own last write when consistent=true is enabled. There are no ordering guarantees among different clients.
## Standby
etcd 0.4's standby mode has been deprecated. [Proxy mode][proxymode] is introduced to solve a subset of the problems standby was solving.
Standby mode was intended for large clusters in which only a subset of the members took part in the consensus process. Overall this process was too magical and allowed operators to back themselves into a corner.
Proxy mode in 2.0 will provide similar functionality, and with improved control over which machines act as proxies due to the operator specifically configuring them. Proxies also support read only or read/write modes for increased security and durability.
[proxymode]: proxy.md
## Discovery Service
A size key needs to be provided inside a [discovery token][discoverytoken].
[discoverytoken]: clustering.md#custom-etcd-discovery-service
## HTTP Admin API
`v2/admin` on peer url and `v2/keys/_etcd` are unified under the new [v2/members API][members-api] to better explain which machines are part of an etcd cluster, and to simplify the keyspace for all your use cases.
[members-api]: members_api.md
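For example, the cluster membership can be inspected with a plain HTTP GET (a sketch; the endpoint address is a placeholder):
```
$ curl http://127.0.0.1:2379/v2/members
```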
## HTTP Key Value API
- The follower can now transparently proxy write requests to the leader. Clients will no longer see 307 redirections to the leader from etcd.
- Expiration time is in UTC instead of local time.


@ -1,23 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../../docs.md#documentation
# Benchmarks
etcd benchmarks will be published regularly and tracked for each release below:
- [etcd v2.1.0-alpha][2.1]
- [etcd v2.2.0-rc][2.2]
- [etcd v3 demo][3.0]
# Memory Usage Benchmarks
It records expected memory usage in different scenarios.
- [etcd v2.2.0-rc][2.2-mem]
[2.1]: etcd-2-1-0-alpha-benchmarks.md
[2.2]: etcd-2-2-0-rc-benchmarks.md
[2.2-mem]: etcd-2-2-0-rc-memory-benchmarks.md
[3.0]: etcd-3-demo-benchmarks.md


@ -1,57 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../../docs.md#documentation
## Physical machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
- etcd version 2.1.0 alpha
## etcd Cluster
3 etcd members, each running on a single machine
## Testing
Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send requests to each etcd member. Check the [benchmark hacking guide][hack-benchmark] for detailed instructions.
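A typical invocation might look like the following (a sketch; the member address and the request/concurrency counts are placeholders for the actual benchmark parameters):
```
# 10000 single-key reads with 64 concurrent clients against one member
$ boom -n 10000 -c 64 http://10.0.1.10:2379/v2/keys/foo
```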
## Performance
### reading a single key
| key size in bytes | number of clients | target etcd server | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|----------|---------------|
| 64 | 1 | leader only | 1534 | 0.7 |
| 64 | 64 | leader only | 10125 | 9.1 |
| 64 | 256 | leader only | 13892 | 27.1 |
| 256 | 1 | leader only | 1530 | 0.8 |
| 256 | 64 | leader only | 10106 | 10.1 |
| 256 | 256 | leader only | 14667 | 27.0 |
| 64 | 64 | all servers | 24200 | 3.9 |
| 64 | 256 | all servers | 33300 | 11.8 |
| 256 | 64 | all servers | 24800 | 3.9 |
| 256 | 256 | all servers | 33000 | 11.5 |
### writing a single key
| key size in bytes | number of clients | target etcd server | write QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|-----------|---------------|
| 64 | 1 | leader only | 60 | 21.4 |
| 64 | 64 | leader only | 1742 | 46.8 |
| 64 | 256 | leader only | 3982 | 90.5 |
| 256 | 1 | leader only | 58 | 20.3 |
| 256 | 64 | leader only | 1770 | 47.8 |
| 256 | 256 | leader only | 4157 | 105.3 |
| 64 | 64 | all servers | 1028 | 123.4 |
| 64 | 256 | all servers | 3260 | 123.8 |
| 256 | 64 | all servers | 1033 | 121.5 |
| 256 | 256 | all servers | 3061 | 119.3 |
[boom]: https://github.com/rakyll/boom
[hack-benchmark]: ../../../hack/benchmark/


@ -1,77 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../../docs.md#documentation
# Benchmarking etcd v2.2.0
## Physical Machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted as etcd data directory
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
## etcd Cluster
3 etcd 2.2.0 members, each running on a single machine.
Detailed versions:
```
etcd Version: 2.2.0
Git SHA: e4561dd
Go Version: go1.5
Go OS/Arch: linux/amd64
```
## Testing
Bootstrap another machine, outside of the etcd cluster, and run the [`boom` HTTP benchmark tool][boom] with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions][hack] for the patch and the steps to reproduce our procedures.
The performance is calculated from the results of 100 benchmark rounds.
## Performance
### Single Key Read Performance
| key size in bytes | number of clients | target etcd server | average read QPS | read QPS stddev | average 90th Percentile Latency (ms) | latency stddev |
|-------------------|-------------------|--------------------|------------------|-----------------|--------------------------------------|----------------|
| 64 | 1 | leader only | 2303 | 200 | 0.49 | 0.06 |
| 64 | 64 | leader only | 15048 | 685 | 7.60 | 0.46 |
| 64 | 256 | leader only | 14508 | 434 | 29.76 | 1.05 |
| 256 | 1 | leader only | 2162 | 214 | 0.52 | 0.06 |
| 256 | 64 | leader only | 14789 | 792 | 7.69 | 0.48 |
| 256 | 256 | leader only | 14424 | 512 | 29.92 | 1.42 |
| 64 | 64 | all servers | 45752 | 2048 | 2.47 | 0.14 |
| 64 | 256 | all servers | 46592 | 1273 | 10.14 | 0.59 |
| 256 | 64 | all servers | 45332 | 1847 | 2.48 | 0.12 |
| 256 | 256 | all servers | 46485 | 1340 | 10.18 | 0.74 |
### Single Key Write Performance
| key size in bytes | number of clients | target etcd server | average write QPS | write QPS stddev | average 90th Percentile Latency (ms) | latency stddev |
|-------------------|-------------------|--------------------|------------------|-----------------|--------------------------------------|----------------|
| 64 | 1 | leader only | 55 | 4 | 24.51 | 13.26 |
| 64 | 64 | leader only | 2139 | 125 | 35.23 | 3.40 |
| 64 | 256 | leader only | 4581 | 581 | 70.53 | 10.22 |
| 256 | 1 | leader only | 56 | 4 | 22.37 | 4.33 |
| 256 | 64 | leader only | 2052 | 151 | 36.83 | 4.20 |
| 256 | 256 | leader only | 4442 | 560 | 71.59 | 10.03 |
| 64 | 64 | all servers | 1625 | 85 | 58.51 | 5.14 |
| 64 | 256 | all servers | 4461 | 298 | 89.47 | 36.48 |
| 256 | 64 | all servers | 1599 | 94 | 60.11 | 6.43 |
| 256 | 256 | all servers | 4315 | 193 | 88.98 | 7.01 |
## Performance Changes
- Because etcd now records metrics for each API call, read QPS performance seems to see a minor decrease in most scenarios. This minimal performance impact was judged a reasonable investment for the breadth of monitoring and debugging information returned.
- Write QPS to cluster leaders seems to be increased by a small margin. This is because the main loop and entry apply loops were decoupled in the etcd raft logic, eliminating several blocks between them.
- Write QPS to all members seems to be increased by a significant margin, because followers now receive the latest commit index sooner, and commit proposals more quickly.
[boom]: https://github.com/rakyll/boom
[hack]: ../../../hack/benchmark/


@ -1,77 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../../docs.md#documentation
## Physical machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
## etcd Cluster
3 etcd 2.2.0-rc members, each running on a single machine.
Detailed versions:
```
etcd Version: 2.2.0-alpha.1+git
Git SHA: 59a5a7e
Go Version: go1.4.2
Go OS/Arch: linux/amd64
```
Also, we use 3 etcd 2.1.0 alpha-stage members to form a cluster to establish a performance baseline. etcd's commit head is at [c7146bd5][c7146bd5], which is the same as the one we used in the [etcd 2.1 benchmark][etcd-2.1-benchmark].
## Testing
Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send requests to each etcd member. Check the [benchmark hacking guide][hack-benchmark] for detailed instructions.
## Performance
### reading a single key
| key size in bytes | number of clients | target etcd server | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|----------|---------------|
| 64 | 1 | leader only | 2804 (-5%) | 0.4 (+0%) |
| 64 | 64 | leader only | 17816 (+0%) | 5.7 (-6%) |
| 64 | 256 | leader only | 18667 (-6%) | 20.4 (+2%) |
| 256 | 1 | leader only | 2181 (-15%) | 0.5 (+25%) |
| 256 | 64 | leader only | 17435 (-7%) | 6.0 (+9%) |
| 256 | 256 | leader only | 18180 (-8%) | 21.3 (+3%) |
| 64 | 64 | all servers | 46965 (-4%) | 2.1 (+0%) |
| 64 | 256 | all servers | 55286 (-6%) | 7.4 (+6%) |
| 256 | 64 | all servers | 46603 (-6%) | 2.1 (+5%) |
| 256 | 256 | all servers | 55291 (-6%) | 7.3 (+4%) |
### writing a single key
| key size in bytes | number of clients | target etcd server | write QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|-----------|---------------|
| 64 | 1 | leader only | 76 (+22%) | 19.4 (-15%) |
| 64 | 64 | leader only | 2461 (+45%) | 31.8 (-32%) |
| 64 | 256 | leader only | 4275 (+1%) | 69.6 (-10%) |
| 256 | 1 | leader only | 64 (+20%) | 16.7 (-30%) |
| 256 | 64 | leader only | 2385 (+30%) | 31.5 (-19%) |
| 256 | 256 | leader only | 4353 (-3%) | 74.0 (+9%) |
| 64 | 64 | all servers | 2005 (+81%) | 49.8 (-55%) |
| 64 | 256 | all servers | 4868 (+35%) | 81.5 (-40%) |
| 256 | 64 | all servers | 1925 (+72%) | 47.7 (-59%) |
| 256 | 256 | all servers | 4975 (+36%) | 70.3 (-36%) |
### performance changes explanation
- read QPS in most scenarios is decreased by 5~8%. The reason is that etcd records store metrics for each store operation. The metrics are important for monitoring and debugging, so this is acceptable.
- write QPS to the leader is increased by 20~30%. This is because we decoupled the raft main loop and the entry apply loop, which avoids them blocking each other.
- write QPS to all servers is increased by 30~80% because followers now receive the latest commit index earlier and commit proposals faster.
[boom]: https://github.com/rakyll/boom
[c7146bd5]: https://github.com/coreos/etcd/commits/c7146bd5f2c73716091262edc638401bb8229144
[etcd-2.1-benchmark]: etcd-2-1-0-alpha-benchmarks.md
[hack-benchmark]: ../../../hack/benchmark/


@ -1,52 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../../docs.md#documentation
## Physical machine
GCE n1-standard-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 7.5 GB memory
- 2x CPUs
## etcd
```
etcd Version: 2.2.0-rc.0+git
Git SHA: 103cb5c
Go Version: go1.5
Go OS/Arch: linux/amd64
```
## Testing
Start a 3-member etcd cluster, each member using 2 cores.
The key name length is always 64 bytes, which is a reasonable average key length.
## Memory Maximal Usage
- etcd may reach its maximal memory usage if one follower is dead and the leader keeps sending snapshots.
- `max RSS` is the maximal memory usage recorded in 3 runs.
| value bytes | key number | data size(MB) | max RSS(MB) | max RSS/data rate on leader |
|-------------|-------------|---------------|-------------|-----------------------------|
| 128 | 50000 | 6 | 433 | 72x |
| 128 | 100000 | 12 | 659 | 54x |
| 128 | 200000 | 24 | 1466 | 61x |
| 1024 | 50000 | 48 | 1253 | 26x |
| 1024 | 100000 | 96 | 2344 | 24x |
| 1024 | 200000 | 192 | 4361 | 22x |
## Data Size Threshold
- When etcd reaches the data size threshold, it may trigger leader elections easily and drop part of the proposals.
- In most cases, the etcd cluster should work smoothly if it doesn't hit the threshold. If it doesn't work well due to insufficient resources, you need to decrease its data size.
| value bytes | key number limitation | suggested data size threshold(MB) | consumed RSS(MB) |
|-------------|-----------------------|-----------------------------------|------------------|
| 128 | 400K | 48 | 2400 |
| 1024 | 300K | 292 | 6500 |


@ -1,47 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../../docs.md#documentation
## Physical machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
- etcd version 2.2.0
## etcd Cluster
1 etcd member running in v3 demo mode
## Testing
Use [etcd v3 benchmark tool][etcd-v3-benchmark].
## Performance
### reading a single key
| key size in bytes | number of clients | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|----------|---------------|
| 256 | 1 | 2716 | 0.4 |
| 256 | 64 | 16623 | 6.1 |
| 256 | 256 | 16622 | 21.7 |
The performance is nearly the same as that with an empty server handler.
### reading a single key after putting
| key size in bytes | number of clients | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|----------|---------------|
| 256 | 1 | 2269 | 0.5 |
| 256 | 64 | 13582 | 8.6 |
| 256 | 256 | 13262 | 47.5 |
The performance with an empty server handler is not affected by one put, so the performance downgrade should be caused by the storage package.
[etcd-v3-benchmark]: ../../../tools/benchmark/


@ -1,82 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../../docs.md#documentation
# Watch Memory Usage Benchmark
*NOTE*: The watch features are under active development, and their memory usage may change as that development progresses. We do not expect it to significantly increase beyond the figures stated below.
A primary goal of etcd is supporting a very large number of watchers doing a massively large amount of watching. etcd aims to support O(10k) clients, O(100K) watch streams (O(10) streams per client) and O(10M) total watchings (O(100) watching per stream). The memory consumed by each individual watching accounts for the largest portion of etcd's overall usage, and is therefore the focus of current and future optimizations.
Three related components of etcd watch consume physical memory: each `grpc.Conn`, each watch stream, and each instance of the watching activity. `grpc.Conn` maintains the actual TCP connection and other gRPC connection state. Each `grpc.Conn` consumes O(10kb) of memory, and might have multiple watch streams attached.
Each watch stream is an independent HTTP2 connection which consumes another O(10kb) of memory.
Multiple watchings might share one watch stream.
Watching is the actual struct that tracks the changes on the key-value store. Each watching should only consume < O(1kb).
```
              +--------+       +-----------+
         +--> | stream | --+-> | watch foo |
         |    +--------+   |   +-----------+
         |                 |
+------+ |                 |   +-----------+
| conn |-+                 +-> | watch bar |
|      |-+                     +-----------+
+------+ |    +--------+
         +--> | stream |
         |    +--------+
         |
         |    +--------+
         +--> | stream |
              +--------+
```
The theoretical memory consumption of watch can be approximated with the formula:
`memory = c1 * number_of_conns + c2 * total_number_of_streams + c3 * total_number_of_watchings`
## Testing Environment
etcd version
- git head https://github.com/coreos/etcd/commit/185097ffaa627b909007e772c175e8fefac17af3
GCE n1-standard-2 machine type
- 7.5 GB memory
- 2x CPUs
## Overall memory usage
The overall memory usage captures how much [RSS][rss] etcd consumes with the client watchers. While the result may vary by as much as 10%, it is still meaningful, since the goal is to learn about the rough memory usage and the pattern of allocations.
With the benchmark result, we can calculate roughly that `c1 = 17kb`, `c2 = 18kb` and `c3 = 350bytes`. So each additional client connection consumes 17kb of memory, each additional stream consumes 18kb of memory, and each additional watching costs only about 350 bytes. A single etcd server can maintain millions of watchings with a few GB of memory in the normal case.
| clients | streams per client | watchings per stream | total watchings | memory usage |
|---------|---------|-----------|----------------|--------------|
| 1k | 1 | 1 | 1k | 50MB |
| 2k | 1 | 1 | 2k | 90MB |
| 5k | 1 | 1 | 5k | 200MB |
| 1k | 10 | 1 | 10k | 217MB |
| 2k | 10 | 1 | 20k | 417MB |
| 5k | 10 | 1 | 50k | 980MB |
| 1k | 50 | 1 | 50k | 1001MB |
| 2k | 50 | 1 | 100k | 1960MB |
| 5k | 50 | 1 | 250k | 4700MB |
| 1k | 50 | 10 | 500k | 1171MB |
| 2k | 50 | 10 | 1M | 2371MB |
| 5k | 50 | 10 | 2.5M | 5710MB |
| 1k | 50 | 100 | 5M | 2380MB |
| 2k | 50 | 100 | 10M | 4672MB |
| 5k | 50 | 100 | 50M | *OOM* |
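As a rough sanity check, plugging the derived constants back into the formula reproduces the measured figures reasonably well; for the row with 1k clients, 50 streams per client, and 10 watchings per stream:
```
conns:       1,000                 * 17kb  ≈   17MB
streams:     1,000 * 50  = 50,000  * 18kb  ≈  900MB
watchings:   50,000 * 10 = 500,000 * 350b  ≈  175MB
total                                      ≈ 1092MB   (measured: 1171MB)
```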
[rss]: https://en.wikipedia.org/wiki/Resident_set_size


@ -1,103 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../../docs.md#documentation
# Storage Memory Usage Benchmark
<!---todo: link storage to storage design doc-->
Two components of etcd storage consume physical memory. The etcd process allocates an *in-memory index* to speed key lookup. The process's *page cache*, managed by the operating system, stores recently-accessed data from disk for quick re-use.
The in-memory index holds all the keys in a [B-tree][btree] data structure, along with pointers to the on-disk data (the values). Each key in the B-tree may contain multiple pointers, pointing to different versions of its values. The theoretical memory consumption of the in-memory index can hence be approximated with the formula:
`N * (c1 + avg_key_size) + N * (avg_versions_of_key) * (c2 + size_of_pointer)`
where `c1` is the key metadata overhead and `c2` is the version metadata overhead.
The diagram below shows the detailed structure of the in-memory index B-tree.
```
                        In-mem index
                       +------------+
                       | key | ...  |
+------------+         +------------+
|            | <-------| v1  | ...  |
|    disk    |         +------------+   Tree Node
|            | <-------| v2  | ...  |
+------------+         +------------+
                             |
                             v
                       +------------+
                       | ...        |   Tree Node
                       +------------+
```
[Page cache memory][pagecache] is managed by the operating system and is not covered in detail in this document.
## Testing Environment
etcd version
- git head https://github.com/coreos/etcd/commit/776e9fb7be7eee5e6b58ab977c8887b4fe4d48db
GCE n1-standard-2 machine type
- 7.5 GB memory
- 2x CPUs
## In-memory index memory usage
In this test, we only benchmark the memory usage of the in-memory index. The goal is to find `c1` and `c2` mentioned above and to understand the hard limit of memory consumption of the storage.
We measure memory consumption via Go's runtime.ReadMemStats, taking the difference in total allocated bytes before and after creating the index. This cannot perfectly reflect the memory usage of the in-memory index itself, but it shows the rough consumption pattern.
| N | versions | key size | memory usage |
|------|----------|----------|--------------|
| 100K | 1 | 64bytes | 22MB |
| 100K | 5 | 64bytes | 39MB |
| 1M | 1 | 64bytes | 218MB |
| 1M | 5 | 64bytes | 432MB |
| 100K | 1 | 256bytes | 41MB |
| 100K | 5 | 256bytes | 65MB |
| 1M | 1 | 256bytes | 409MB |
| 1M | 5 | 256bytes | 506MB |
Based on the result, we can calculate `c1=120bytes`, `c2=30bytes`. We only need two sets of data to calculate `c1` and `c2`, since they are the only unknown variables in the formula; the values `c1=120bytes` and `c2=30bytes` are the averages over the 4 sets of `c1` and `c2` we calculated. The key metadata overhead is still relatively nontrivial (50%) for small key-value pairs. However, this is a significant improvement over the old store, which had at least 1000% overhead.
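As a rough sanity check (assuming an 8-byte pointer on a 64-bit machine, which the table does not state), the formula reproduces the measured numbers; for N=1M, 1 version, and 64-byte keys:
```
key part:      1M * (120 + 64)    = 184MB
version part:  1M * 1 * (30 + 8)  =  38MB
total                             ≈ 222MB   (measured: 218MB)
```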
## Overall memory usage
The overall memory usage captures how much RSS etcd consumes with the storage. The value size should have very little impact on the overall memory usage of etcd, since we keep values on disk and only retain hot values in memory, managed by the OS page cache.
| N | versions | key size | value size | memory usage |
|------|----------|----------|------------|--------------|
| 100K | 1 | 64bytes | 256bytes | 40MB |
| 100K | 5 | 64bytes | 256bytes | 89MB |
| 1M | 1 | 64bytes | 256bytes | 470MB |
| 1M | 5 | 64bytes | 256bytes | 880MB |
| 100K | 1 | 64bytes | 1KB | 102MB |
| 100K | 5 | 64bytes | 1KB | 164MB |
| 1M | 1 | 64bytes | 1KB | 587MB |
| 1M | 5 | 64bytes | 1KB | 836MB |
Based on the result, we know the value size does not significantly impact the memory consumption. There is some minor increase due to more data held in the OS page cache.
[btree]: https://en.wikipedia.org/wiki/B-tree
[pagecache]: https://en.wikipedia.org/wiki/Page_cache


@ -1,31 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../docs.md#documentation
# Branch Management
## Guide
* New development occurs on the [master branch][master].
* Master branch should always have a green build!
* Backwards-compatible bug fixes should target the master branch and subsequently be ported to stable branches.
* Once the master branch is ready for release, it will be tagged and become the new stable branch.
The etcd team has adopted a *rolling release model* and supports one stable version of etcd.
### Master branch
The `master` branch is our development branch. All new features land here first.
If you want to try new features, pull `master` and play with it. Note that `master` may not be stable because new features may introduce bugs.
Before the release of the next stable version, feature PRs will be frozen. We will focus on testing, bug fixes, and documentation for one to two weeks.
### Stable branches
All branches with prefix `release-` are considered _stable_ branches.
After every minor release (http://semver.org/), we will have a new stable branch for that release. We will keep fixing backwards-compatible bugs for the latest stable release, but not for previous releases. A _patch_ release, incorporating any bug fixes, will be cut roughly every two weeks, given any patches.
[master]: https://github.com/coreos/etcd/tree/master

Some files were not shown because too many files have changed in this diff.