Compare commits


1048 Commits

Author SHA1 Message Date
8334790777 Merge pull request #6657 from gyuho/build
*: fix build script, bump up version
2016-10-14 14:40:17 -07:00
a81997ac3f version: bump up to v3.1.0-rc.0 2016-10-14 14:21:32 -07:00
06fd31cde9 build: get GitSHA first 2016-10-14 14:21:20 -07:00
4c444df7a6 Revert "version: bump to v3.1.0-rc.0"
This reverts commit cb178a78ea.
2016-10-14 14:20:33 -07:00
cb178a78ea version: bump to v3.1.0-rc.0 2016-10-14 14:06:21 -07:00
45588c1f9f Merge pull request #6650 from gyuho/flag
*: tests, README on environment variables in etcdctl v3
2016-10-14 12:15:27 -07:00
66f9e81c9a etcdctl: update README on environment variables 2016-10-14 11:58:59 -07:00
8081254498 e2e: add tests with environment vars for flags 2016-10-14 11:58:56 -07:00
a00ed609c3 pkg/flags: export 'FlagToEnv' for e2e tests 2016-10-14 11:15:28 -07:00
77d6ecbc5f Merge pull request #6649 from fanminshi/discovery_max_wait
discovery: add upper limit for waiting on a retry
2016-10-14 09:46:08 -07:00
84508697ce Merge pull request #6639 from mitake/functional-tester-external
functional-tester: a new option -failure-wrapper for enabling/disabling external fault injector
2016-10-14 07:56:26 -07:00
d1660b5ba3 Merge pull request #6619 from mitake/health-key
etcdctl, e2e: add --check-key option to endpoint health
2016-10-13 20:27:37 -07:00
eb9a01258e discovery: add upper limit for waiting on a retry
Adding an upper limit ensures that exponential backoff doesn't exceed 5 minutes on a retry.

FIX #6648
2016-10-13 20:14:41 -07:00
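
A minimal runnable sketch of the capped exponential backoff this commit describes; the function and constants are illustrative, not the discovery package's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// nextWait doubles the wait per retry but never exceeds maxWait, so a
// long retry streak caps out instead of growing without bound.
func nextWait(retry uint, base, maxWait time.Duration) time.Duration {
	w := base << retry // base * 2^retry
	if w <= 0 || w > maxWait {
		return maxWait
	}
	return w
}

func main() {
	for i := uint(0); i < 12; i++ {
		fmt.Println(i, nextWait(i, 2*time.Second, 5*time.Minute))
	}
}
```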
d585b43abe etcdctl, e2e: add --check-key option to endpoint health
This commit adds a new option --check-key to the endpoint health command
for health checking with a custom key. It is mainly for avoiding
permission problems.
2016-10-14 11:39:46 +09:00
b2b03d9926 functional-tester: a new option -failure-wrapper for enabling/disabling external fault injector
This commit adds a new option -failure-wrapper to etcd-tester. The
option receives the path of a script that is used for enabling/disabling
external fault injectors. The script is called with the argument "enable"
when the injector needs to be enabled (when failure.Inject() is called)
and with "disabled" in the opposite case (when failure.Recover() is
called).
2016-10-14 11:31:28 +09:00
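
A hedged sketch of invoking such a wrapper script; the action strings mirror the message above, and everything else is illustrative rather than etcd-tester's actual code:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runWrapper invokes the external fault-injector script with a single
// action argument: "enable" around failure.Inject(), "disabled" around
// failure.Recover(), per the commit message above.
func runWrapper(path, action string) error {
	out, err := exec.Command(path, action).CombinedOutput()
	if err != nil {
		return fmt.Errorf("failure-wrapper %s: %v: %s", action, err, out)
	}
	return nil
}

func main() {
	// Hypothetical script path, for illustration only.
	if err := runWrapper("./failure-wrapper.sh", "enable"); err != nil {
		fmt.Println(err)
	}
}
```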
57008f1690 Merge pull request #6644 from kragniz/increase-warn-duration
etcdserver: increase warnApplyDuration from 10ms to 100ms
2016-10-13 10:58:58 -07:00
9df97eb441 etcdserver: increase warnApplyDuration from 10ms to 100ms
When running test suites for a client locally I'm getting spammed by log
lines such as:

    etcdserver: apply entries took too long [14.226771ms for 1 entries]

The comments in #6278 mention future plans to increase the
threshold for logging these warnings, but it hadn't been done yet.
2016-10-13 17:55:50 +01:00
354891f75d Merge pull request #6634 from gyuho/manual
integration: add TestV3WatchWithPrevKV
2016-10-12 16:42:42 -07:00
c3948284a0 integration: add TestV3WatchWithPrevKV 2016-10-12 16:21:52 -07:00
614adb0230 Merge pull request #6628 from gyuho/fix-waitgroup
etcdserver: make WaitGroup.Add sync with Wait
2016-10-12 14:10:54 -07:00
546873f27e Merge pull request #6632 from heyitsanthony/grpc-naming
clientv3/naming: support resolving to multiple hosts
2016-10-12 13:18:36 -07:00
0c61d8804a etcdserver: make WaitGroup.Add sync with Wait 2016-10-12 13:11:35 -07:00
a97866b629 Merge pull request #6633 from xiang90/fix_rev_inconsistency
mvcc: fix rev inconsistency
2016-10-12 13:04:15 -07:00
3dbd30fcaa Documentation: add grpc naming resolver doc 2016-10-12 11:56:14 -07:00
7d50dc06a2 clientv3/naming: support resolving to multiple hosts
The previous implementation watched a single key, so there was no way
to associate separate hosts with separate keys for a single
grpc target. Instead, accept all keys on a prefix.

Also fixes the first Next() to read current name data from etcd instead
of waiting for the next event on a synced watcher.
2016-10-12 11:27:22 -07:00
93225ebafc mvcc: fix rev inconsistency
Try:

./etcdctl put foo bar
./etcdctl del foo
./etcdctl compact 3

restart etcd

./etcdctl get foo
mvcc: required revision has been compacted

The error is unexpected when ranging over the head revision.

Internally, we incorrectly set the current revision smaller than the
compacted revision when we remove all keys around the compacted revision.

This commit fixes the issue by recovering the current revision to at
least the compacted revision.
2016-10-12 10:42:57 -07:00
cb9c77c4ba Merge pull request #6620 from nekto0n/put_update_optimize
Optimize updating key by storing lease in lessor
2016-10-12 09:47:11 -07:00
064e02f4b3 mvcc: Optimize updating key by storing lease in lessor 2016-10-12 09:37:09 +05:00
66f945c4bf Merge pull request #6629 from gyuho/clientv3-logger
clientv3: drop Config.Logger field
2016-10-11 17:01:13 -07:00
084c407a8d clientv3: drop Config.Logger field
Fix https://github.com/coreos/etcd/issues/6603.

Instead, it adds 'SetLogger' to set a global logger interface,
avoiding unnecessary logger updates.
2016-10-11 16:38:32 -07:00
e9f3101c49 Merge pull request #6625 from xiang90/grpc_proxy_doc
doc: add grpc proxy doc
2016-10-11 16:06:05 -07:00
17a6025ac8 doc: add grpc proxy doc 2016-10-11 15:15:45 -07:00
4c1a738caf Merge pull request #6627 from xiang90/apply_log
etcdserver: better panic logging
2016-10-11 14:44:46 -07:00
dbaa44372b etcdserver: better panic logging 2016-10-11 13:34:18 -07:00
c10dad41a3 Merge pull request #6604 from sinsharat/support_debug_build_using_delve_gdb
build: Added support for debugging using delve, gdb, etc
2016-10-11 13:03:35 -07:00
9ac2c8072a build: Added support for debugging using delve, gdb, etc 2016-10-12 01:00:15 +05:30
a7247b3c7e Merge pull request #6618 from heyitsanthony/fix-e2e-err-leak
e2e: close process if spawnWithExpects fails
2016-10-11 11:30:28 -07:00
2448f6a003 e2e: close process if spawnWithExpects fails
Was causing a process leak in TestCtlV3Alarm
2016-10-10 15:52:37 -07:00
d7f69d0f92 Merge pull request #6617 from gyuho/vendor-update
vendor: update glide and grpc-go
2016-10-10 14:48:17 -07:00
4a07bbec59 clientv3: implement new grpc.Balancer interface 2016-10-10 11:18:29 -07:00
e3558a64cf vendor: update grpc-go v1.0.2 tag
Fix https://github.com/coreos/etcd/issues/6529.
2016-10-10 11:18:01 -07:00
69ea359e62 vendor: update glide.yaml with grpc-go v1.0.2 tag 2016-10-10 11:17:47 -07:00
b9f3ef09e1 vendor: clean up dependencies (remove unused ones) 2016-10-10 11:17:27 -07:00
def1a3b77f script/updatedep: update glide, glide-vc version 2016-10-10 11:11:58 -07:00
3a6fe61c03 Merge pull request #6610 from heyitsanthony/bench-lease
benchmark: submit keepalive requests concurrently with report.Run()
2016-10-10 09:53:08 -07:00
fd60205e95 Merge pull request #6616 from bdarnell/genproto-gopath
scripts: Don't erase gopath.proto after genproto.sh
2016-10-10 09:19:49 -07:00
ef4e3ef55a scripts: Don't erase gopath.proto after genproto.sh
Wiping gopath.proto after a successful run does nothing but slow down
the next run unnecessarily as it downloads everything again.
2016-10-10 11:33:43 +08:00
602fd6a67e Merge pull request #6613 from mitake/ep-health
etcdctl: parse auth related options in endpoint health command
2016-10-09 06:58:06 -07:00
644ec0ddef etcdctl, e2e: parse auth related options in endpoint health command
Partially fixes https://github.com/coreos/etcd/issues/6611
2016-10-09 20:34:09 +09:00
c1d115b322 benchmark: submit keepalive requests concurrently with report.Run()
Otherwise the report won't consume the results and the benchmark hangs.
2016-10-07 15:57:38 -07:00
ac4d39cfb0 Merge pull request #6583 from sinsharat/windows_etcd3.0.1_etcdctlv2api_issue_fix
etcdctlv2: windows compatibility issue fix for etcd v3.0.1
2016-10-07 13:58:57 -07:00
3f60ee0d27 Merge pull request #6590 from gyuho/etcdserver
etcdserver: separate EtcdServer from raftNode
2016-10-07 13:39:37 -07:00
e011ea25ca etcdserver: separate EtcdServer from raftNode 2016-10-07 13:18:39 -07:00
e1e16d9b28 Merge pull request #6608 from gyuho/news
NEWS: add 'prev-kv' feature for upcoming v3.0.11
2016-10-07 12:43:17 -07:00
ab2a20402e NEWS: add 'prev-kv' feature for upcoming v3.0.11 2016-10-07 11:22:02 -07:00
71f8f3ceb6 Merge pull request #6607 from glevand/for-merge-typo
Documentation: Minor typo fix
2016-10-07 10:31:38 -07:00
f1437a8932 Documentation: Minor typo fix
Signed-off-by: Geoff Levand <geoff@infradead.org>
2016-10-07 10:17:43 -07:00
75f812eaa3 etcdctlv2: windows compatibility issue fix for etcd v3.0.1 2016-10-07 22:15:30 +05:30
4e4140040a Merge pull request #6602 from nekto0n/watchable_store_bench
mvcc: add BenchmarkWatchableStoreTxnPut benchmark
2016-10-07 09:13:44 -07:00
e2bd6f2213 Merge pull request #6601 from nekto0n/interval_tree_fast_stab
adt: fast path Stab in empty interval tree
2016-10-07 09:13:23 -07:00
f3cdfcdcf4 Merge pull request #6486 from glevand/for-merge-arm64
Get tests working on ARM64
2016-10-06 17:53:10 -07:00
686282393d Merge pull request #6600 from heyitsanthony/report
benchmark: split out report and add --precise option
2016-10-06 17:14:08 -07:00
e7d8292cd1 benchmark: add --precise flag
Usually benchmark writes with %4.4f; this adds optional %g formatting.
2016-10-06 16:18:47 -07:00
3d28faa3eb pkg/report, tools/benchmark: refactor report out of tools/benchmark
Only tracks time series when requested. Can configure output precision.
2016-10-06 16:18:47 -07:00
ea9e857eb9 Merge pull request #6599 from fanminshi/lease_error_type_fix
Lease: Add lease errors to togRPCError()
2016-10-06 15:47:51 -07:00
cbbd1f0f44 Merge pull request #6598 from xiang90/cleanup
v3rpc: return nil as error explicitly
2016-10-06 15:30:04 -07:00
a862fd9f0f Lease: Add lease errors to togRPCError()
This allows lease functions to convert lease errors to the appropriate gRPC errors.
2016-10-06 14:29:31 -07:00
10cafe56b8 v3rpc: return nil as error explicitly 2016-10-06 14:14:43 -07:00
4a5fa261c6 Merge pull request #6596 from gyuho/protect-TTL
lease: add TTL() method
2016-10-06 11:41:51 -07:00
65ac718a11 etcdserver: use 'TTL()' on lease.Lease 2016-10-06 11:24:12 -07:00
5adca4a720 lease, leasehttp: add TTL() method
Fix https://github.com/coreos/etcd/issues/6595.
2016-10-06 11:24:09 -07:00
9970ded79f mvcc: add BenchmarkWatchableStoreTxnPut benchmark 2016-10-06 22:44:25 +05:00
eae70c9379 adt: fast path Stab in empty interval tree 2016-10-06 22:41:33 +05:00
b8079b7fc0 Merge pull request #6594 from heyitsanthony/e2e-etcdctl-timeout
e2e: print correct timeout for etcdctl tests
2016-10-06 10:40:46 -07:00
fa1e28102e Merge pull request #5316 from ajityagaty/too_many_allocs
mvcc: Reduce number of allocs in PUT when watchableStore has no watchers.
2016-10-06 09:47:59 -07:00
e28706d9e2 e2e: print correct timeout for etcdctl tests 2016-10-06 09:18:41 -07:00
cc04d80b09 Merge pull request #6578 from glevand/for-merge-serial
test: Run integration pass in series
2016-10-05 19:28:15 -07:00
54c252ee63 clientv3/kv_test: Fix quota test
Updates TestKVPutError.  Change the quota to work with systems
that have a 64 KiB page size. Increase the db sync wait time to
one second.  Also, add some comments for the hard coded value.

Signed-off-by: Geoff Levand <geoff@infradead.org>
2016-10-05 16:41:06 -07:00
84d2ff93b0 integration/v3_grpc_test: Fix quota tests
Use the system page size to set the test quota size. Also, change
a comment related to setting the node quota to be clearer.

Signed-off-by: Geoff Levand <geoff@infradead.org>
2016-10-05 16:41:06 -07:00
de8adc9e03 e2e/ctl_v3_alarm_test: Fix quota test
Rework the over-quota test to be more realistic. Take into
consideration that the system page size will be different across
platforms.

Signed-off-by: Geoff Levand <geoff@infradead.org>
2016-10-05 16:41:06 -07:00
8c60a532a6 e2e/ctl_v3_alarm_test: Use fixed small buf size
We just need a small chunk of data to test put, so to be
consistent across platforms use a fixed size of 64 bytes.

Signed-off-by: Geoff Levand <geoff@infradead.org>
2016-10-05 16:41:06 -07:00
beb194967e Documentation: Improve quota example
Signed-off-by: Geoff Levand <geoff@infradead.org>
2016-10-05 16:41:06 -07:00
bdbb32dfe8 Documentation: Set ETCDCTL_API for v3 features
Signed-off-by: Geoff Levand <geoff@infradead.org>
2016-10-05 16:41:06 -07:00
b65a2cec18 Documentation: Clarify Space quota section
Signed-off-by: Geoff Levand <geoff@infradead.org>
2016-10-05 16:41:06 -07:00
f0469f7f25 Merge pull request #6570 from xiang90/lease_expire
Fix lease expire
2016-10-05 15:49:45 -07:00
3cbc5285e0 test: Run integration pass in series
On slower or heavily loaded platforms, running the integration pass in
parallel results in test timeout errors.

Rename the integration_pass function to integration_e2e_pass, and add two
new functions integration_pass and e2e_pass.

Signed-off-by: Geoff Levand <geoff@infradead.org>
2016-10-05 15:35:14 -07:00
0f0c048e29 etcdserver: fix early lessor promotion issue
If we promote the lessor before finishing applying all
entries from the last term, we might incorrectly renew
already revoked leases.

Here is an example:

- Term 1: revoke lease A accepted by raft
- Old leader failed, new election happened
- Term 2: promote
- Term 2: keepalive for A succeeds. A now has a 10 second TTL
- Term 2: revoke lease A from Term 1 gets committed and applied
- Term 2: lease A with its 10 second TTL is revoked

To solve this, the new leader MUST apply all entries from the old term
before promoting its lessor to start accepting renew requests.
2016-10-05 14:41:47 -07:00
279c103517 lease: fix lease expire and add a test 2016-10-05 14:41:47 -07:00
7f0d5946ff Merge pull request #6589 from heyitsanthony/etcdctl-lock-one-session
etcdctl: remove superfluous session in lock command
2016-10-05 14:36:11 -07:00
b980ab0c67 Merge pull request #6582 from heyitsanthony/fix-cancel-close
clientv3: only return closing error to watcher if context is not canceled
2016-10-05 13:37:29 -07:00
f2af08f5aa etcdctl: remove superfluous session in lock command 2016-10-05 13:30:36 -07:00
f67f8d3b31 Merge pull request #6587 from heyitsanthony/watch-fix-revrace
clientv3: fix race on watch initial revision
2016-10-05 10:55:31 -07:00
d5cd563ce7 Merge pull request #6572 from glevand/for-merge-release_pass
Fixes for release_pass test
2016-10-05 09:42:16 -07:00
06d5cf2d52 clientv3: fix race on watch initial revision
The initial revision was being updated in the substream goroutine defer;
this was racing with the resume path fetching the initial revision when
the substream closes during resume. Instead, update the initial revision
whenever the substream processes a new watch response. Since the substream
cannot receive a watch response while it is resuming, the write to the
initial revision is ordered to always happen after the resume read.

Fixes #6586
2016-10-05 09:36:06 -07:00
e285f599e2 clientv3: only return closing error to watcher if context is not canceled
Fixes #6503
2016-10-04 16:09:50 -07:00
8e1c989ec3 integration: test a canceled watch won't return a closing error 2016-10-04 14:47:40 -07:00
98897b7603 Merge pull request #6580 from spoonben/fix-404-docs
docs: link directly to github procfile
2016-10-04 13:54:10 -07:00
25f1088edd test: Fixes for release_pass
Some fixes related to release_pass:

o Create the output directory ./bin if it does not exist.
o Define the GOARCH variable if it is not defined.
o Simplify the race detection test.
o Download the release archive based on GOARCH.
o If the release file is not found, return success.  This will allow the tests
  to continue.

Signed-off-by: Geoff Levand <geoff@infradead.org>
2016-10-04 13:42:53 -07:00
9c5a32eb7a docs: link directly to github procfile
This is in response to https://github.com/coreos/docs/issues/822.
Unfortunately, because of how the doc sync works, there has to be
a direct link here.
2016-10-04 13:42:17 -07:00
5269bbd277 Merge pull request #6513 from gyuho/manual
raft: refactor inflight
2016-10-04 13:31:43 -07:00
dc8bf26cd8 raft: refactor inflight 2016-10-04 13:12:16 -07:00
19122b463e Merge pull request #6525 from heyitsanthony/watcher-disconn
clientv3: simplify watcher synchronization
2016-10-04 11:21:12 -07:00
02f557068e Merge pull request #6576 from xiang90/fix_doc
doc: build should work for non-github users
2016-10-04 10:32:37 -07:00
904e5090fd doc: build should work for non-github users 2016-10-04 10:26:51 -07:00
5b50658118 clientv3: simplify watch synchronization
Was more complicated than it needed to be and didn't really work in the
first place. Restructured watcher registration to use a queue.
2016-10-03 16:56:14 -07:00
9ce398f8a6 integration: test canceling watchers when disconnected 2016-10-03 16:56:14 -07:00
33e4f2ea28 Merge pull request #6563 from gyuho/gen-proto
scripts/genproto: use 'gopath.proto' for $GOPATH
2016-10-03 15:54:38 -07:00
9b56e51ca7 *: regenerate proto + gofmt change 2016-10-03 15:34:34 -07:00
8174fcf201 scripts/genproto: use 'gopath.proto' for $GOPATH 2016-10-03 15:34:31 -07:00
dfe85b26cc Merge pull request #6571 from xiang90/log_pkg
*: set repo correctly for logging
2016-10-03 15:49:44 -05:00
b7f02a8c0a Merge pull request #6568 from gyuho/e2e
e2e: test 'https' scheme endpoints
2016-10-03 13:24:00 -07:00
29dd3cf5bd Revert "clientv3/integration: add TestDialWithHTTPS"
This reverts commit a96a28d603.
2016-10-03 13:05:08 -07:00
0dc14d1771 e2e: test 'https' scheme endpoints 2016-10-03 13:04:58 -07:00
c26ebe3262 Merge pull request #6453 from vimalk78/wal-optimize-marshal-outside-lock
wal/wal.go: optimized WAL.SaveSnapshot to do Marshal outside the mutex lock
2016-10-03 11:50:11 -07:00
dd607b5eff Merge pull request #6560 from gyuho/scheme
clientv3: handle 'https' scheme in endpoint
2016-10-03 09:44:46 -07:00
a96a28d603 clientv3/integration: add TestDialWithHTTPS 2016-10-03 02:16:07 -07:00
962433c17f *: set repo correctly for logging 2016-10-03 17:03:22 +08:00
f45542394b clientv3: handle 'https' scheme in endpoint 2016-10-03 01:03:28 -07:00
02912fe8c4 Merge pull request #6564 from gyuho/nil-ref
e2e: skip when 'etcdProcess' is nil
2016-10-01 01:32:56 -07:00
5c51c600aa e2e: skip when 'etcdProcess' is nil 2016-10-01 00:45:28 -07:00
37bd0932f7 Merge pull request #6557 from heyitsanthony/fix-publish-retry
etcdserver: use stream recorder for TestPublishRetry
2016-09-30 18:40:18 -07:00
613525f711 Merge pull request #6559 from heyitsanthony/fix-lease-hash
lessor: delete keys in deterministic order on revoke
2016-09-30 17:23:00 -07:00
4f9be94643 lessor: delete keys in deterministic order on revoke
Fixes #6558
2016-09-30 16:45:52 -07:00
289e3c0c63 etcdserver: use stream recorder for TestPublishRetry
Fixes #6546
2016-09-30 15:43:32 -07:00
7225c77a3b Merge pull request #6556 from gyuho/simplify
lease: remove redundant lookup methods
2016-09-30 11:19:14 -07:00
4871a4a5f3 lease: remove redundant get method 2016-09-30 10:27:27 -07:00
c349e089b1 Merge pull request #6550 from heyitsanthony/watch-prog-notify
clientv3: make IsProgressNotify() false on compact event and closed channel
2016-09-29 11:12:25 -07:00
6ac284a577 grpcproxy: use valid progress notification in broadcast test 2016-09-29 10:45:25 -07:00
dac6e700f8 Merge pull request #6519 from mitake/functional-tester
functional-tester: decoupling functionalities of etcd-tester
2016-09-29 10:07:47 -07:00
868617ef86 Merge pull request #6548 from gyuho/get-config-embed
embed: add 'Config' method
2016-09-29 09:14:32 -05:00
b8017004ba embed: add 'Config' method 2016-09-29 07:10:59 -07:00
a781f4ebda Merge pull request #6551 from xiang90/fix_log_repo
pkg: use etcd as logging repo
2016-09-29 02:46:57 -05:00
9473e9c30e pkg: use etcd as logging repo 2016-09-29 15:29:38 +08:00
d80c13555a Merge pull request #6543 from xiang90/improve_txn
etcdserver: use linearizableReadNotify for txn
2016-09-28 19:54:30 -05:00
bf2581390d clientv3: make IsProgressNotify() false on compact event and closed channel
Fixes #6549
2016-09-28 16:49:39 -07:00
0ca0260c89 Merge pull request #6531 from sinsharat/glossary_update
Documentation/learning: Glossary update
2016-09-28 11:39:02 -07:00
2353cbca71 Merge pull request #6544 from gyuho/page-offset
wal, ioutil: set page offset for encoder
2016-09-28 11:37:24 -07:00
f5588526cc wal: set PageWriter offset in file encoder 2016-09-28 11:03:24 -07:00
d0c29cc610 pkg/ioutil: configure pageOffset in NewPageWriter 2016-09-28 09:45:54 -07:00
0b8b40ccca Merge pull request #6545 from gyuho/grammar
wal: fix minor wording in comment
2016-09-28 09:34:43 -07:00
231530e0c5 wal: fix minor wording in comment 2016-09-28 09:12:31 -07:00
ea0c65797a etcdserver: use linearizableReadNotify for txn 2016-09-28 20:47:49 +08:00
6c2414ebd1 Documentation/learning: Glossary update 2016-09-28 11:18:47 +05:30
f4ec303d1b wal/wal.go: modified WAL.SaveSnapshot to do the Marshal before acquiring the mutex 2016-09-28 10:35:19 +05:30
1e1dd24d05 Merge pull request #6536 from sinsharat/etcdctlv3_readme_update
etcdctlv3: minor updates to put and make-mirror command
2016-09-27 23:58:42 -05:00
dcfbcb7a68 etcdctlv3: minor updates to put and make-mirror command 2016-09-28 10:20:08 +05:30
3807faeddf Merge pull request #6541 from hhkbp2/improve-test-coverage
raft: add test cases to improve test coverage
2016-09-27 23:24:52 -05:00
eee23eaf43 Merge pull request #6540 from fanminshi/lease_panic_fix
etcdserver: fix a node panic bug caused by a LeaseTimeToLive call on a nonexistent lease
2016-09-27 23:17:16 -05:00
7d48855630 functional-tester: decouple failures from tester
This commit adds a new option --failures to etcd-tester. The option
receives a comma-delimited argument like this:
"default,failpoints". The given arguments are interpreted as names of
failures to be injected into an etcd cluster. Available failures
are default (the default scenario in etcd-tester) and failpoints. If no
args are passed to the option (--failures=""), no failures are
injected during testing.
2016-09-28 11:30:53 +09:00
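
A small sketch of parsing such a comma-delimited flag; the flag name comes from the message above, while the surrounding code is hypothetical:

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

var failures = flag.String("failures", "default,failpoints",
	"comma-delimited names of failures to inject")

// failureNames splits the flag value; an empty value means that no
// failures are injected at all.
func failureNames() []string {
	if *failures == "" {
		return nil
	}
	return strings.Split(*failures, ",")
}

func main() {
	flag.Parse()
	fmt.Println(failureNames())
}
```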
a6eb2939b1 raft: add test cases to improve test coverage 2016-09-28 10:19:30 +08:00
8ef6687018 etcdserver: fix a node panic bug caused by a LeaseTimeToLive call on a nonexistent lease
When a non-leader etcd server receives a LeaseTimeToLive call on a nonexistent lease, it responds with a nil resp and a nil error. The invoking function parses the nil resp and hits a segmentation fault.
I fix the bug by making sure the lease-not-found error is returned so that the invoking function parses the error message instead.

fix #6537
2016-09-27 17:46:30 -07:00
e68cd086ee Merge pull request #6532 from heyitsanthony/no-gopath-build
build: support building out of path when GOPATH is not set
2016-09-27 13:25:04 -07:00
1e3a71d098 build: support building out of path when GOPATH is not set
Otherwise gets "go: GOPATH entry is relative; must be absolute path: ""."
2016-09-27 10:20:52 -07:00
150576fa72 Merge pull request #6212 from xiang90/readindex
etcdserver: initial read index implementation
2016-09-27 11:51:08 -05:00
e5f4fb1a79 Merge pull request #6527 from sinsharat/intracting_v3
etcdctlv3: interactive_v3 compaction and timetolive command update
2016-09-27 06:14:18 -05:00
ab20187f93 etcdctlv3: interactive_v3 compaction and timetolive command update 2016-09-27 16:19:55 +05:30
6167a2aaa7 Merge pull request #6524 from sinsharat/intracting_v3
etcdctlv3: interactive_v3 watch command update
2016-09-27 01:35:04 -05:00
e3e3993022 etcdserver: support read index
Use read index to achieve l-read.
2016-09-27 13:41:40 +08:00
7d9355ffba Merge pull request #6523 from vimalk78/correct-compactor-logger-package
compactor/compactor.go: corrected the capnslog package name
2016-09-26 14:21:24 -07:00
cd1306f866 etcdctlv3: interactive_v3 watch command update 2016-09-27 00:00:05 +05:30
e1550bae61 compactor/compactor.go: corrected the capnslog package name 2016-09-26 23:52:48 +05:30
e1efdd591e Merge pull request #6521 from sinsharat/intracting_v3
etcdctlv3: interactive_v3 del command update
2016-09-26 10:48:09 -05:00
83f2fa7adc etcdctlv3: interactive_v3 del command update 2016-09-26 19:56:20 +05:30
e2d51961dd Merge pull request #6520 from sinsharat/intracting_v3
etcdctlv3: interactive_v3 get command update
2016-09-26 07:20:44 -05:00
213e8a5b15 Merge pull request #6514 from gyuho/sort
test: grep versions with --sort
2016-09-26 06:31:05 -05:00
595743651b etcdctlv3: interactive_v3 get command update 2016-09-26 16:28:29 +05:30
06546cf100 Merge pull request #6517 from sinsharat/intracting_v3
etcdctlv3: interactive_v3 version and put command update
2016-09-26 03:53:13 -05:00
7a95831018 etcdctlv3: interactive_v3 version and put command update 2016-09-26 12:32:08 +05:30
cf83de6488 Merge pull request #6510 from sinsharat/etcdctlv3_readme_final_draft
etcdctlv3: corrected and organised etcdctl commands
2016-09-25 19:19:06 -05:00
f5b9238a3c Merge pull request #6516 from gyuho/vvv
vendor: remove unused code
2016-09-23 18:54:05 -07:00
f957c401d3 vendor: remove unused code 2016-09-23 16:57:28 -07:00
20211ed6bf test: grep versions with --sort 2016-09-23 15:49:20 -07:00
cf09562e40 Merge pull request #6512 from gyuho/dep
vendor: update 'google/btree'
2016-09-23 13:21:52 -07:00
ecb577d40c vendor: update 'google/btree' 2016-09-23 12:54:25 -07:00
15d268709e version: bump to v3.1.0-alpha.1+git 2016-09-23 11:32:39 -07:00
2469a95685 version: bump to v3.1.0-alpha.1 2016-09-23 11:19:22 -07:00
4ef44d1130 Merge pull request #6506 from mitake/decouple-stresser
functional-tester: decouple stresser from tester
2016-09-23 10:05:03 -07:00
044e5cf3a9 Merge pull request #6498 from ychen11/ychen11/etcdserverpb
Added more lines of comments into rpc.proto
2016-09-23 10:04:24 -07:00
0e493c11c2 functional-tester: decouple stresser from tester
This commit decouples the stresser from the tester of
functional-tester. To do this, it adds a new option
--stresser to etcd-tester. The option accepts two types of stresser:
"default" and "nop". If the option is "default", etcd-tester stresses
its etcd cluster with the existing stresser. If the option is "nop",
etcd-tester does nothing for stressing.

Partially fixes https://github.com/coreos/etcd/issues/6446
2016-09-24 01:04:57 +09:00
69f5b4ba79 Documentation: made watch request doc more clear 2016-09-23 23:13:55 +08:00
af8728f328 etcdctlv3: corrected and organised etcdctl commands 2016-09-23 18:21:54 +05:30
51aa220449 Merge pull request #6507 from sinsharat/readme_del_cmd_options_example_update
etcdctlv3: added options and examples for del from-key
2016-09-22 17:08:50 -05:00
308038e96a etcdctlv3: added options and examples for del from-key 2016-09-22 22:54:20 +05:30
b1e4defc48 Merge pull request #6501 from sinsharat/feature_add_del_from-key
etcdctlv3: del command from-key feature added
2016-09-22 09:15:04 -05:00
804e215981 Merge pull request #6505 from sinsharat/compaction_options_update
etcdctlv3: updated compaction options
2016-09-22 06:52:43 -07:00
5fa233a564 etcdctlv3: updated compaction options 2016-09-22 19:06:05 +05:30
35ff70656b etcdctlv3: del command from-key feature added 2016-09-22 16:55:36 +05:30
ea97aa3f0f Merge pull request #6504 from sinsharat/member_command_options_update
etcdctlv3: updated member command options
2016-09-22 03:47:49 -07:00
1601ee761a etcdctlv3: updated member command options 2016-09-22 15:04:54 +05:30
4de39d3683 Merge pull request #6502 from xiang90/etcdctl_mirror
etcdctl: remove the use of remprefix
2016-09-22 01:05:55 -05:00
30b26f8f50 etcdctl: remove the use of remprefix 2016-09-22 08:43:31 +08:00
3453ce55e3 Merge pull request #6496 from sinsharat/refactor_mirror_command_tests
e2e: refactored ctlv3_make_mirror_test
2016-09-21 19:33:42 -05:00
4ec0fce109 Merge pull request #6493 from gyuho/tester-build
functional-tester: build from repo root, vendor
2016-09-21 16:57:34 -07:00
27c500d8d0 Merge pull request #6487 from heyitsanthony/watch-stress
clientv3: process closed watcherStreams in watcherGrpcStream run loop
2016-09-21 13:55:25 -07:00
3f7f6fb557 Merge pull request #6500 from sinsharat/readme_del_option_update
etcdctlv3: updated del command options
2016-09-21 13:54:18 -07:00
a32518006c clientv3: process closed watcherStreams in watcherGrpcStream run loop
Was racing with Watch() when closing the grpc stream on no watchers.

Fixes #6476
2016-09-21 13:28:00 -07:00
bcda9af15d etcdctlv3: updated del command options 2016-09-22 00:16:53 +05:30
d743b8b866 Merge pull request #6474 from gyuho/auto-sync
clientv3: add 'Sync' method
2016-09-21 10:57:10 -07:00
deef16b376 integration: test client watchers with overlapped context cancels 2016-09-21 09:40:24 -07:00
592538986d e2e: refactored ctlv3_make_mirror_test 2016-09-21 22:07:03 +05:30
cdb1e34799 clientv3: add 'Sync' method 2016-09-21 09:10:25 -07:00
c016325647 Merge pull request #6495 from vimalk78/wal-improve-coverage-add-testcase-save-with-cut
wal/wal.go: improved coverage by testing WAL.Save which causes a WAL.cut to happen
2016-09-21 11:04:21 -05:00
4426e282d6 Merge pull request #6497 from gyuho/raft-example
raftexample: remove snapshot TODO in README
2016-09-21 08:44:04 -07:00
3492753edf e2e: refactored ctlv3_make_mirror_test 2016-09-21 20:01:24 +05:30
113b27229b raftexample: remove snapshot TODO in README 2016-09-21 05:07:04 -07:00
13e7172b4b Merge pull request #6244 from gyuho/raft-example
raftexample: implement Raft snapshot
2016-09-21 04:55:29 -07:00
e4fbf7db00 raftexample: implement Raft snapshot 2016-09-21 04:23:05 -07:00
4b83f40618 raftexample: add index fields to filter entries 2016-09-21 04:23:05 -07:00
666d555450 raftexample: add snapshotter, handle Ready in raft 2016-09-21 04:23:05 -07:00
15fa8dd866 raftexample: add snapshot methods to kvstore 2016-09-21 04:23:01 -07:00
064411b51c wal/wal.go: improved coverage by testing WAL.Save which causes a WAL.cut to happen 2016-09-21 16:50:55 +05:30
d3906e75bf Merge pull request #6494 from sinsharat/update_snapshot_restore_options
etcdctlv3: updated snapshot restore options
2016-09-21 05:50:34 -05:00
05175480b3 etcdctlv3: updated snapshot restore options 2016-09-21 16:17:32 +05:30
0604fccfea Merge pull request #6492 from sinsharat/make-mirror_no_dest_test
etcdctlv3: test case: make-mirror no dest prefix
2016-09-21 03:12:01 -07:00
cff06ef64d Merge pull request #6491 from gyuho/functional
functional-tester: use different ports in Procfile
2016-09-21 02:54:54 -07:00
409fc439d1 etcdctlv3: test case: make-mirror no dest prefix 2016-09-21 15:12:36 +05:30
b2c4992a82 functional-tester: use different ports in Procfile 2016-09-21 02:39:45 -07:00
e8adc24c32 functional-tester: build from repo root, vendor 2016-09-21 02:06:13 -07:00
d6a3ce17d5 Merge pull request #6472 from sinsharat/make-mirror_modify_dest_test
etcdctlv3: test case: make-mirror modify dest prefix
2016-09-21 00:43:56 -07:00
e5ff5d92e6 etcdctlv3: test case: make-mirror modify dest prefix 2016-09-21 05:40:52 +05:30
b91d8625c8 Merge pull request #6485 from sinsharat/readme_get_features_update
ctlv3: updated readme for options and examples for get command
2016-09-21 07:26:46 +08:00
9743ee8b83 etcdctlv3: updated readme for options and examples for get command 2016-09-21 04:51:13 +05:30
095cff4415 Merge pull request #6478 from heyitsanthony/untangle-check
etcd-tester: split out consistency checking code from tester
2016-09-20 10:56:17 -07:00
d4eff5381c etcd-tester: split out consistency checking code from tester 2016-09-20 10:26:58 -07:00
3da8c6512b Merge pull request #6481 from sinsharat/update_timetolive_options
etcdctlv3: updated options for TIMETOLIVE
2016-09-20 23:29:15 +08:00
3e67702d4b etcdctlv3: updated options for TIMETOLIVE 2016-09-20 16:40:58 +05:30
b586060812 Merge pull request #6475 from fanminshi/leaseparallel
etcdserver: parallelize expired leases process
2016-09-19 16:46:31 -07:00
690a0b6f00 etcdserver: parallelize expired leases process
When 1000 leases expire at the same time, etcd takes more than 5 seconds to clean them up. This means that even after the leases have expired, keys associated with the leases are still accessible. I increase the deletion throughput by parallelizing the lease deletion process.
2016-09-19 16:17:49 -07:00
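
The speedup comes from revoking expired leases concurrently instead of serially; a minimal sketch, with revoke and the lease IDs as stand-ins for the lessor's internals:

```go
package main

import (
	"fmt"
	"sync"
)

// revokeAll fires one goroutine per expired lease so that slow
// deletions overlap instead of queueing behind one another.
func revokeAll(ids []int64, revoke func(int64)) {
	var wg sync.WaitGroup
	for _, id := range ids {
		wg.Add(1)
		go func(id int64) {
			defer wg.Done()
			revoke(id)
		}(id)
	}
	wg.Wait()
}

func main() {
	revokeAll([]int64{1, 2, 3}, func(id int64) {
		fmt.Println("revoked lease", id)
	})
}
```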
69c7ea0b4a Merge pull request #6473 from heyitsanthony/watchreconn-putretry
integration: l-read before Put in TestWatchReconnRequest
2016-09-19 14:52:26 -07:00
0fb2cab221 integration: l-read before Put in TestWatchReconnRequest
TestWatchReconnRequest occasionally triggers elections because it spins on
dropped connections, eating up CPU. In case there's an election, submit an
l-read to wait for the cluster to settle down.

Fixes #6314
2016-09-19 14:14:32 -07:00
c9e06fa1ed Merge pull request #6330 from gyuho/balancer-sync
clientv3: add SetEndpoints method
2016-09-20 04:52:13 +09:00
d26cfdb7d1 Merge pull request #6425 from heyitsanthony/etcdserver-wg
etcdserver: tighten up goroutine management
2016-09-19 12:51:16 -07:00
f11b35eb71 clientv3/integration: test 'SetEndpoints' 2016-09-20 04:36:14 +09:00
b9d18d4ac9 clientv3: add 'SetEndpoints' method 2016-09-20 04:36:01 +09:00
3866e78c26 etcdserver: tighten up goroutine management
All outstanding goroutines now go into the etcdserver waitgroup. Goroutines are
shut down with a "stopping" channel, which is closed when the run() goroutine
shuts down. The done channel will only close once the waitgroup is totally cleared.
2016-09-19 12:10:41 -07:00
a70513621c Merge pull request #6470 from xiang90/fix_doc
doc: use 2379 as port of the first member in local cluster
2016-09-19 08:34:11 -05:00
328c42f1b7 doc: use 2379 as port of the first member in local cluster 2016-09-19 21:28:33 +08:00
2dc06787ae Merge pull request #6467 from coreos/revert-6465-tls-copy
Revert "pkg/transport: update tls.Config copy method"
2016-09-19 16:02:41 +09:00
629d9e7dab Revert "pkg/transport: update tls.Config copy method" 2016-09-19 15:07:12 +09:00
db9ed233dc Merge pull request #6465 from gyuho/tls-copy
pkg/transport: update tls.Config copy method
2016-09-19 00:46:08 +09:00
8c9a88c7d4 pkg/transport: update tls.Config copy method
For Go 1.7
2016-09-18 22:50:45 +09:00
33dbf5c6bd Merge pull request #6463 from xiang90/fix_http
embed: fix go 1.7 http issue
2016-09-18 08:44:04 -05:00
7a48ca4cea embed: fix go 1.7 http issue
Go 1.7 introduces an HTTP2 compatibility issue: we now
need to explicitly enable HTTP2 when TLS is set.
2016-09-18 18:38:55 +08:00
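
A sketch of explicit HTTP/2 enablement using the golang.org/x/net/http2 helper; this shows the general shape of such a fix, not necessarily the exact change made in embed:

```go
package main

import (
	"net/http"

	"golang.org/x/net/http2"
)

// enableHTTP2 explicitly wires HTTP/2 into a server that will serve
// TLS, which Go 1.7 no longer does implicitly in this configuration.
func enableHTTP2(srv *http.Server) error {
	return http2.ConfigureServer(srv, &http2.Server{})
}

func main() {
	srv := &http.Server{Addr: ":2379"}
	if err := enableHTTP2(srv); err != nil {
		panic(err)
	}
	// srv.ListenAndServeTLS("server.crt", "server.key") would follow.
}
```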
ac2077559d Merge pull request #6461 from gyuho/travis
travis: test with Go 1.7.1
2016-09-17 22:10:09 +09:00
63d6a4e0e1 travis: test with Go 1.7.1 2016-09-17 20:57:28 +09:00
4a7c1da9b3 Merge pull request #6460 from sinsharat/readme_update
etcdctlv3: updated readme for make-mirror: modify/remove prefix in dest cluster
2016-09-17 19:57:15 +09:00
6c408eb779 etcdctlv3: updated readme.md for make-mirror modify/remove prefix in dest cluster 2016-09-17 16:13:01 +05:30
86aeeca644 Merge pull request #6454 from sinsharat/windows_save_snapshot_fix
ctlv3: close snapshot file before rename (Windows)
2016-09-16 18:09:59 -05:00
0d65061a2d Merge pull request #6439 from sinsharat/make_mirror_feature_add
etcdctl/ctlv3: make-mirror: feature add to modify/remove prefix in dest cluster
2016-09-16 18:07:20 -05:00
01a0db0fce Merge pull request #6456 from heyitsanthony/version-bump-git
version: bump to 3.1.0-alpha.0+git
2016-09-16 15:12:30 -07:00
0a8bf60a9d version: bump to 3.1.0-alpha.0+git 2016-09-16 09:56:29 -07:00
fef6557f6c ctlv3: close snapshot file before rename (Windows) 2016-09-16 21:55:04 +05:30
b571f4d627 etcdctl/ctlv3: feature added to modify/remove prefix in the destination cluster 2016-09-16 18:48:41 +05:30
5c2053109b Merge pull request #6449 from gyuho/supported-stream
rafthttp: add v3.x to supported streams
2016-09-16 21:47:20 +09:00
8827619f5b rafthttp: add v3.x to supported streams 2016-09-16 20:49:00 +09:00
143e2f27fc Merge pull request #6447 from xiang90/cap
api: update capability map
2016-09-16 02:35:26 -05:00
d6904ce415 Merge pull request #6441 from petermattis/pmattis/tick-quiesced
raft: add RawNode.TickQuiesced
2016-09-16 01:48:21 -05:00
c6feb695dc api: update capability map 2016-09-16 14:34:55 +08:00
37fa6ac45c raft: add RawNode.TickQuiesced
TickQuiesced allows the caller to support "quiesced" Raft groups which
do not perform periodic heartbeats and elections. This is useful in a
system with thousands of Raft groups where these periodic operations can
be overwhelming in an otherwise idle system.

It might seem possible to avoid advancing the logical clock at all in
such Raft groups, but doing so has an interaction with the CheckQuorum
functionality. If a follower is not quiesced while the leader is, the
follower can call an election that will fail because the leader's lease
has not expired (electionElapsed < electionTimeout). The next time the
leader sends a heartbeat to this follower the follower will see that the
heartbeat is from a previous term and respond with a MsgAppResp. This in
turn will cause the leader to step down and become a follower even
though there isn't a leader in the group. By allowing the leader's
logical clock to advance via TickQuiesced, the leader won't reject the
election and there will be a smooth transfer of leadership to the
follower.
2016-09-15 21:05:18 -04:00
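
A sketch of how a multi-group caller might drive this, assuming etcd's raft package; the package name and the quiesced bookkeeping are hypothetical:

```go
package multiraft // hypothetical caller package

import "github.com/coreos/etcd/raft"

// tickGroup advances one group's logical clock. Quiesced groups skip
// heartbeat/election processing but still advance electionElapsed,
// avoiding the CheckQuorum interaction described above.
func tickGroup(rn *raft.RawNode, quiesced bool) {
	if quiesced {
		rn.TickQuiesced()
	} else {
		rn.Tick()
	}
}
```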
2724c3946e Merge pull request #6444 from heyitsanthony/version-bump-3.1
version: bump to 3.1.0-alpha.0
2016-09-15 15:24:59 -07:00
c658fa62c5 version: bump to 3.1.0-alpha.0 2016-09-15 15:13:51 -07:00
624eb609fa Merge pull request #6443 from gyuho/news
NEWS: add v3.0.8, v3.0.9
2016-09-16 07:09:42 +09:00
1b1e54a281 NEWS: add v3.0.8, v3.0.9 2016-09-16 07:05:31 +09:00
9913e0073c Merge pull request #6438 from gyuho/e2e-backends
e2e: rename 'backends' to 'processes'
2016-09-15 19:00:28 +09:00
7cd7b5d539 e2e: rename 'backends' to 'processes' 2016-09-15 18:30:08 +09:00
a12b317552 Merge pull request #6428 from gyuho/snapshot-test
e2e: test snapshot restore
2016-09-15 04:22:03 -05:00
bb337c87d0 e2e: test snapshot restore 2016-09-15 17:58:00 +09:00
fb760b4c53 Merge pull request #6403 from vimalk78/rafthttp-mertics-record-rw-failures
rafthttp/metrics.go: fixed TODO: record write/recv failures.
2016-09-15 02:46:20 -05:00
d814804fa1 Merge pull request #6437 from sinsharat/readme_update
etcdctl: readme.md display fix
2016-09-15 16:20:42 +09:00
cd3a7fb833 etcdctl: readme.md display fix 2016-09-15 12:23:56 +05:30
64e1a327ee rafthttp/metrics.go: fixed TODO: record write/recv failures. 2016-09-15 11:32:08 +05:30
b3a083d336 Merge pull request #6436 from LiamHaworth/bugfix/6433-support-for-charset-in-content-type-header
etcdserver, api, v2http, client: Added support for semicolons
2016-09-14 23:25:31 -05:00
5cfa9e2384 etcdserver, api, v2http, client: Added support for semicolons
Added support to the v2 API to fix an issue (6433) where, if there is a semicolon
and fields after it, the API would return an "invalid Content-type" message even
if the content type was actually correct.
2016-09-15 13:54:22 +10:00
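
A hedged sketch of tolerant Content-Type checking using the standard library's mime.ParseMediaType, which ignores parameters after a semicolon; the v2http fix itself may be shaped differently:

```go
package main

import (
	"fmt"
	"mime"
)

// isJSON accepts "application/json" with or without parameters such as
// "; charset=utf-8", instead of comparing the header verbatim.
func isJSON(contentType string) bool {
	mt, _, err := mime.ParseMediaType(contentType)
	return err == nil && mt == "application/json"
}

func main() {
	fmt.Println(isJSON("application/json"))                // true
	fmt.Println(isJSON("application/json; charset=utf-8")) // true
	fmt.Println(isJSON("text/plain"))                      // false
}
```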
e77baa3dcb Merge pull request #6424 from heyitsanthony/v3api-createminmax
etcdserver: range queries with min/max create revision
2016-09-14 19:10:52 -07:00
059f419ac5 Merge pull request #6429 from xiang90/fix_balancer
clientv3: balancer panics when call up after close
2016-09-14 19:42:24 -05:00
82af0c4a7d ctlv3: remove superfluous session creation 2016-09-14 17:03:33 -07:00
9b1fe45853 concurrency: use create max revision for locks and elections 2016-09-14 17:03:33 -07:00
004a5f0dbc clientv3: balancer panics when call up after close
Fix the issue by adding a simple guard variable.
2016-09-15 07:43:42 +08:00
aa7a35798d integration: add tests for MinCreateRev and MaxCreateRev 2016-09-14 15:31:45 -07:00
5bd251a6fa clientv3: WithMinCreateRev, WithMaxCreateRev 2016-09-14 15:31:45 -07:00
c0981a90f7 etcdserver, etcdserverpb: range min_create_revision and max_create_revision 2016-09-14 15:31:45 -07:00
c74ac99871 Merge pull request #6423 from heyitsanthony/fix-rwmutex
recipes: fix rwmutex locking
2016-09-14 09:50:26 -07:00
3730802fef Merge pull request #6427 from mitake/prefix-print
etcdctl: improve printing of role get for prefix permission
2016-09-14 02:27:28 -05:00
8eac9fb93d Merge pull request #6401 from hhkbp2/add-read-index-for-raft-rawnode
raft: add read index for RawNode
2016-09-14 02:14:49 -05:00
4211c0b7af etcdctl, clientv3: improve printing of role get for prefix permission
This commit improves the printing of the role get command for prefix
permissions. If a range permission corresponds to a prefix permission,
it is explicitly printed for a user. Below is an example of the new
printing:

$ ETCDCTL_API=3 bin/etcdctl --user root:p role get r1
Role r1
KV Read:
        [/dir/, /dir0) (prefix /dir/)
        [k1, k5)
KV Write:
        [/dir/, /dir0) (prefix /dir/)
        [k1, k5)
2016-09-14 16:10:32 +09:00
eeca614cd3 raft: add read index for RawNode 2016-09-14 14:43:46 +08:00
672472f85e Merge pull request #6414 from mitake/prefix-perm
etcdctl: an option for granting permission with key prefix
2016-09-13 23:29:40 -05:00
4e2b09a7ca etcdctl: an option for granting permission with key prefix
This commit adds a new option --prefix to the "role grant-permission"
command. If the option is passed, the command interprets the key as a
prefix for a range permission.

Example of usage:
$ ETCDCTL_API=3 bin/etcdctl --user root:p role grant-permission --prefix r1 readwrite /dir/
Role r1 updated
$ ETCDCTL_API=3 bin/etcdctl --user root:p role get r1
Role r1
KV Read:
        [/dir/, /dir0)
        [k1, k5)
KV Write:
        [/dir/, /dir0)
        [k1, k5)
$ ETCDCTL_API=3 bin/etcdctl --user u1:p put /dir/key val
OK
2016-09-14 12:54:14 +09:00
c350cd7679 Merge pull request #6417 from xiang90/fix_TestPipelineExceedMaximumServing
rafthttp: fix TestPipelineExceedMaximumServing
2016-09-13 17:59:43 -05:00
9b91e96510 integration: fix rwmutex test to check write locking 2016-09-13 14:09:59 -07:00
9f829fdab7 recipes: fix rwmutex so locking works
Fixes #6408
2016-09-13 14:09:59 -07:00
c6bfdb909b Merge pull request #6412 from heyitsanthony/revert-domain-listener
embed: warn on domain name in listener
2016-09-13 10:25:18 -07:00
afef9cc312 Merge pull request #6418 from sinsharat/update_readme
etcdctl/ctlv3: updated readme.md for timetolive example
2016-09-14 02:06:57 +09:00
6f4e3696d2 etcdctl/ctlv3: updated readme.md for timetolive example 2016-09-13 22:31:34 +05:30
c7212b438d embed: warn on domain name in listener 2016-09-13 09:17:40 -07:00
0d35ba9b94 rafthttp: fix TestPipelineExceedMaximumServing
The timeout is too short. It might take more than 10ms to send a
request over a blocking chan (when the buffer is full). Changing the
timeout to 1 second fixes this issue.
2016-09-13 19:06:11 +08:00
e6a7f25065 Merge pull request #6411 from heyitsanthony/v3api-minmaxmod
etcdserver: Range with min/max mod revision
2016-09-13 05:54:58 -05:00
cfe717e926 Merge pull request #6275 from xiang90/raft_l
raft: support safe readonly request
2016-09-13 01:36:04 -05:00
8c492c70ef Merge pull request #6413 from xiang90/fix_wait
clientv3: return error from response when possible
2016-09-12 22:54:42 -05:00
56084a7cc8 clientv3: return error from response when possible 2016-09-13 11:18:21 +08:00
fa2e9c2449 Revert "Merge pull request #6365 from heyitsanthony/fix-dns-bind"
This reverts commit af5ab7b351, reversing
changes made to da6a0f0594.
2016-09-12 19:45:35 -07:00
17e7f83212 integration: test MinModRev/MaxModRev 2016-09-12 19:44:14 -07:00
b0481ba858 clientv3: WithMinModRev and WithMaxModRev 2016-09-12 19:44:14 -07:00
3df8838501 Merge pull request #6404 from glycerine/range_fixes
etcd/auth: fix range handling bugs.
2016-09-12 21:26:59 -05:00
af0264d2e6 etcdserver, etcdserverpb: add MinModRevision and MaxModRevision options to Range 2016-09-12 15:17:57 -07:00
ce01fb3cdf Merge pull request #6410 from fanminshi/master
etcd-tester: fix peer-port parsing bug with localhost url
2016-09-12 14:00:06 -07:00
8a63071463 etcd-tester: fix peer-port parsing bug with localhost url
The format "http://localhost:1234" causes the existing port parser to fail. Add new logic to parse the host name first, then extract the port.

Fixes #6409
2016-09-12 13:29:52 -07:00
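
A sketch of the safer parse order: parse the URL first, then split host and port; the function name is illustrative:

```go
package main

import (
	"fmt"
	"net"
	"net/url"
	"strconv"
)

// peerPort extracts the port from a peer URL such as
// "http://localhost:1234" by parsing the URL first, rather than
// splitting the raw string on ':'.
func peerPort(rawurl string) (int, error) {
	u, err := url.Parse(rawurl)
	if err != nil {
		return 0, err
	}
	_, port, err := net.SplitHostPort(u.Host)
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(port)
}

func main() {
	fmt.Println(peerPort("http://localhost:1234"))
}
```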
ef1ef0ba16 auth: fix range handling bugs.
Test 15, counting from zero, in TestGetMergedPerms
in etcd/auth/range_perm_cache_test.go was incorrectly
trying to assert that [a, b) merged with [b, "")
should be [a, b). Added a test specifically for
this. This patch fixes the incorrect larger test
and the bugs in the code that it was hiding.

Fixes #6359
2016-09-12 09:23:19 -05:00
710b14ce56 raft: support safe readonly request
Implement the raft read-only request described in raft thesis section 6.4,
alongside the existing clock/lease-based approach.
2016-09-12 15:13:52 +08:00
840f4d48c8 Merge pull request #6402 from gyuho/logger
*: separate 'capnslog' log level setting
2016-09-10 21:38:53 -05:00
bfb9d837d9 Merge pull request #6399 from AdoHe/master
update language bindings doc to add coreos/jetcd
2016-09-10 21:55:41 +09:00
caaa8a48aa libraries-and-tools.md: add Java client 2016-09-10 20:47:31 +08:00
03b9d6f24c *: separate 'capnslog' log level setting 2016-09-10 20:26:51 +09:00
9a67d71e6c Merge pull request #6396 from heyitsanthony/rafthttp-msg-leak
rafthttp: log stream stopped message before closing channel
2016-09-09 17:52:03 -05:00
8f47468a40 Merge pull request #6397 from fanminshi/master
functional-tester: correct goreman command in readme
2016-09-09 17:30:54 -05:00
a571655983 functional-tester: correct goreman command in readme
Update the readme file to have the correct goreman command to start the functional tester locally.
2016-09-09 14:56:23 -07:00
0250f0c984 rafthttp: log stream stopped message before closing channel
Was causing spurious goroutine leak failures in testing.
2016-09-09 12:47:06 -07:00
92f141d670 Merge pull request #6393 from sinsharat/readme_update
etcdctl: readme.md doc made uniform
2016-09-09 12:04:48 -07:00
d5edb62bd0 etcdctl: readme.md doc made uniform 2016-09-10 00:32:36 +05:30
b22b405465 Merge pull request #6390 from gyuho/simple
wal: simplify dir.Close call
2016-09-09 09:50:38 +09:00
20fc9dc463 Merge pull request #6389 from heyitsanthony/func-tester-noroot
functional-tester: run locally
2016-09-08 19:48:33 -05:00
ccb46d2024 wal: simplify dir.Close call 2016-09-09 09:23:55 +09:00
0b675845f6 Merge pull request #6321 from gyuho/lease-information
*: lease timetolive
2016-09-09 08:43:28 +09:00
aa6b1e6a10 functional-tester: add Procfile 2016-09-08 16:35:55 -07:00
b7dc6cc604 e2e: test 'lease timetolive' 2016-09-09 08:22:41 +09:00
04a4cea630 etcdctl/ctlv3: add 'lease timetolive' command 2016-09-09 08:21:58 +09:00
4c08f6767c clientv3: add lease.TimeToLive + tests 2016-09-09 08:18:45 +09:00
55ba3d95fb etcd-tester: support per-agent client/peer/failpoint ports 2016-09-08 16:15:18 -07:00
78cfc8db95 grpcproxy: implement 'LeaseTimeToLive' 2016-09-09 08:14:46 +09:00
63b0cd470d etcdserver: implement 'LeaseTimeToLive' 2016-09-09 08:14:14 +09:00
0712ebc9b5 v2http: handle '/leases/internal' 2016-09-09 08:12:31 +09:00
2e25a772a5 etcd-agent: support rootless operation and configurable gofail ports 2016-09-08 16:12:00 -07:00
617d2d5b98 lease/*: add lease handler for 'LeaseTimeToLive' 2016-09-09 08:11:46 +09:00
3132e36bf3 etcdserverpb: add 'LeaseTimeToLive' RPC 2016-09-09 08:08:14 +09:00
33b3fdc627 Merge pull request #6388 from groxxda/patch-1
etcd.service: order after network.target
2016-09-08 16:31:29 -05:00
758f0d9017 Merge pull request #6387 from sinsharat/fix_ctl_win
ctlv3: fix line parsing for Windows
2016-09-08 16:27:26 -05:00
17377f5642 example .service file: Order after network.target
From the [systemd NetworkTarget description](https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/):
```
[...]since the shutdown ordering of units in systemd is the reverse of the startup ordering, any unit that is order After=network.target can be sure that it is stopped before the network is shut down if the system is powered off. This allows services to cleanly terminate connections before going down, instead of abruptly losing connectivity for ongoing connections, leaving them in an undefined state.[...]
```
2016-09-08 23:11:01 +02:00
8b764aac71 ctlv3: fix line parsing for Windows 2016-09-09 01:58:33 +05:30
bb3ba1ee1c Merge pull request #6381 from heyitsanthony/fix-wal-rename
wal: fsync directory after wal file rename
2016-09-08 12:56:50 -07:00
28d80ad709 Merge pull request #6370 from xiang90/fix_restore
etcdctl: restore should create a snapshot
2016-09-08 14:25:07 -05:00
e9f841627c Merge pull request #6384 from hhkbp2/add-test-case-for-leader-transfer-from-follower
raft: add test case for leader transfer from follower
2016-09-08 13:58:03 -05:00
4563efd766 Merge pull request #6382 from heyitsanthony/unhealthy-err
v3api, rpctypes: add ErrUnhealthy
2016-09-08 09:15:58 -07:00
68f2fdc1ff raft: add test case for leader transfer from follower 2016-09-08 17:22:52 +08:00
bd7107bd4b wal: fsync directory after wal file rename
Fixes #6368
2016-09-08 00:09:16 -07:00
c449da6ff9 fileutil: windows OpenDir
Windows needs to open a directory with write access to fsync but the go
runtime won't open directories that way.
2016-09-08 00:09:16 -07:00
0cc2f82e7e Merge pull request #6383 from gyuho/lease-client
clientv3: use correct context in toErr (lease)
2016-09-08 01:39:40 -05:00
1aec483e42 clientv3: use correct context in toErr (lease) 2016-09-08 10:58:11 +09:00
1defeda792 v3api, rpctypes: add ErrUnhealthy 2016-09-07 16:51:49 -07:00
0b6350227c Merge pull request #6341 from xiang90/handle_overload
grpcproxy: handle overloaded stream
2016-09-07 16:55:41 -05:00
656167d760 etcdctl: Corrected command in Readme.md (#6376)
Corrected command in Readme.md
2016-09-07 21:09:24 +09:00
a6c905ad96 Merge pull request #6367 from heyitsanthony/fix-watch-init-reconn
clientv3: drain buffered WatchResponses before resuming
2016-09-07 03:15:01 -05:00
f411583ed1 Merge pull request #6374 from sinsharat/master
etcdctlv3: Readme.md updated
2016-09-07 02:29:14 -05:00
534cb0b749 etcdctlv3: Readme.md updated
1. Under the PUT example, the put command was written in capitals, which
gives the error below:
Error: unknown command "PUT" for "etcdctl"
Hence corrected the same.
2. The lease ID is written with 0x to denote hex, but since it's an
example, copy-pasting the command gives the error below:
Error: bad lease ID (strconv.ParseInt: parsing "0x1234abcd": invalid
syntax), expecting ID in Hex
Hence modified it to a valid sample value so that a user new to
etcd does not get confused.
3. The command ./etcdctl range foo does not work and gives the error
below:
Error: unknown command "range" for "etcdctl"
Hence corrected the same.

#6372
2016-09-07 12:35:20 +05:30
7b7b29ad1e Merge pull request #6373 from vimalk78/master
pkg/pbutil: corrected the package name in logger in pbutil.go
2016-09-07 01:35:21 -05:00
5ea6990a73 corrected the package name in logger 2016-09-07 11:52:01 +05:30
ce49fb6ec4 raft: add tests for IsLocalMsg (#6357)
* raft: add tests for IsLocalMsg

* report index of failed tests
2016-09-07 12:52:37 +09:00
7e182fa24a etcdctl: restore should create a snapshot
Restore should create a snapshot, so the new db file
can be sent to newly joined members.
2016-09-07 11:21:53 +08:00
b24527f2f0 Merge pull request #6353 from petermattis/pmattis/grow-inflights-buffer
raft: grow the inflights buffer instead of preallocating
2016-09-07 09:51:45 +09:00
ad318ee891 clientv3: drain buffered WatchResponses before resuming
Otherwise, the watcherStream can receive WatchResponses in the
middle of a resume, corrupting the stream.

Fixes #6364
2016-09-06 17:15:39 -07:00
af5ab7b351 Merge pull request #6365 from heyitsanthony/fix-dns-bind
embed: reject domain names before binding
2016-09-06 16:02:46 -07:00
7644a8ad76 integration: test domain name URLs are rejected before binding 2016-09-06 15:33:47 -07:00
2752169d6a embed: reject binding listeners to domain names
Fixes #6336
2016-09-06 15:33:28 -07:00
c1948f2940 raft: grow the inflights buffer instead of preallocating
Grow the inflights buffer as needed instead of preallocating it to its
max size. This avoids preallocating a lot of unnecessary
space (8*MaxInflightMsgs) when using lots of raft groups while still
allowing for a reasonable MaxInflightMsgs configuration.
2016-09-06 18:07:01 -04:00
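
A simplified sketch of growing the buffer on demand, doubling up to the configured maximum (field names approximate the raft inflights type, but this is not the package's exact code):

```go
package main

import "fmt"

// inflights tracks in-flight message indexes; buffer starts empty and
// grows geometrically instead of being preallocated to its maximum.
type inflights struct {
	size   int      // maximum number of inflight messages
	buffer []uint64 // grown lazily, never beyond size
}

// growBuf doubles the buffer, capped at the configured maximum.
func (in *inflights) growBuf() {
	newSize := len(in.buffer) * 2
	if newSize == 0 {
		newSize = 1
	} else if newSize > in.size {
		newSize = in.size
	}
	newBuffer := make([]uint64, newSize)
	copy(newBuffer, in.buffer)
	in.buffer = newBuffer
}

func main() {
	in := &inflights{size: 8}
	for len(in.buffer) < in.size {
		in.growBuf()
		fmt.Println("buffer grown to", len(in.buffer))
	}
}
```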
da6a0f0594 Merge pull request #6362 from kevinburke/fix-typo
Documentation: fix typo
2016-09-06 14:45:03 -05:00
96ed856bca Merge pull request #6345 from topecongiro/patch-1
rafthttp: remove unnecessary sendc from peer
2016-09-06 11:32:16 -07:00
e508ce36ef Documentation: fix typo
"its" in this case is not short for "it is", it should be a possessive.
2016-09-06 11:26:27 -07:00
0b9c65c82f Merge pull request #6360 from jonboulle/master
scripts, doc: remove actool references
2016-09-06 18:37:28 +02:00
fd0539c8cc scripts, doc: remove actool references
Since c597d591b5 the release script uses
acbuild instead of actool, so purge all the references and have the
release script check for acbuild's presence instead.
2016-09-06 17:47:41 +02:00
d36c0a1444 Merge pull request #6356 from mitake/root-role
auth, e2e: the root role should be granted access to every key
2016-09-06 15:31:20 +08:00
bc5d7bbe03 auth, e2e, clientv3: the root role should be granted access to every key
This commit changes the semantics of the root role. The role should be
able to access every key.

Partially fixes https://github.com/coreos/etcd/issues/6355
2016-09-06 16:10:28 +09:00
271df0dd71 Merge pull request #6354 from es-chow/fix-typo-in-interacting_v3-md
interacting_v3.md: fix typo
2016-09-06 10:17:56 +09:00
b17b482268 interacting_v3.md: fix typo 2016-09-06 09:08:37 +08:00
65fb1ad362 Merge pull request #6351 from petermattis/pmattis/raft-global-rand
raft: use a singleton global rand
2016-09-05 22:25:18 +08:00
4a33aa3917 raft: use a singleton global rand
rand.NewSource creates a 4872-byte object. With a small number of raft
groups in a process this isn't a problem. With 10k raft groups we'd use
46MB for these random sources. The only usage is in
raft.resetRandomizedElectionTimeout, which isn't performance critical.

Fixes #6347.
2016-09-05 09:03:18 -04:00
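
The commit's approach amounts to a mutex-guarded shared rand.Rand; a close but simplified sketch:

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

// lockedRand guards a single rand.Rand so that many raft groups can
// share one source instead of each allocating their own.
type lockedRand struct {
	mu   sync.Mutex
	rand *rand.Rand
}

func (r *lockedRand) Intn(n int) int {
	r.mu.Lock()
	v := r.rand.Intn(n)
	r.mu.Unlock()
	return v
}

var globalRand = &lockedRand{
	rand: rand.New(rand.NewSource(time.Now().UnixNano())),
}

func main() {
	// e.g. randomizing an election timeout offset
	fmt.Println(globalRand.Intn(10))
}
```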
1ebeef5cbf Merge pull request #6350 from nekto0n/fix_message_limit
rafthttp: fix misprint in readBytesLimit value
2016-09-05 15:26:19 +09:00
1b40fe7709 Merge pull request #6348 from plasticbox/master
libraries-and-tools.md: remove C++
2016-09-05 15:07:23 +09:00
da26e230a0 rafthttp: fix misprint in readBytesLimit value
and make the test pass in restricted test environments
2016-09-05 11:06:08 +05:00
f36267bf74 libraries-and-tools.md: remove C++ 2016-09-05 15:03:07 +09:00
a66b1e7c60 Merge pull request #6349 from gyuho/decode-length-limit
rafthttp: check decode size before buffer alloc
2016-09-05 14:25:23 +09:00
5c8ba23767 rafthttp: check decode size before buffer alloc
Fix https://github.com/coreos/etcd/issues/5386.
2016-09-05 14:06:03 +09:00
ec9e77db96 rafthttp: remove unnecessary sendc from peer 2016-09-04 13:07:31 +09:00
2e0dc8467d Merge pull request #6344 from glycerine/partial_fix_6343
etcdctl/ctlv3: don't crash when we should prompt for pw.
2016-09-03 10:55:38 -07:00
cccbf302f2 etcdctl/ctlv3: don't crash when we should prompt for pw.
When 'etcdctl --user name get blah' is invoked to
prompt for a password, don't panic.

Addresses the segfault part of #6343
2016-09-03 10:32:16 -07:00
56cfe40184 grpcproxy: fix a data race 2016-09-03 07:53:18 -07:00
b56ee178d5 grpcproxy: handle overloaded stream 2016-09-03 07:49:20 -07:00
0d07154926 Merge pull request #6340 from xiang90/fix_double_create
grpcproxy: fix double create event
2016-09-02 16:37:29 -07:00
81bd381048 Merge pull request #6339 from xiang90/close
grpcproxy: stop watchers in watch groups
2016-09-02 16:03:12 -07:00
805d4cbd93 grpcproxy: fix double create event 2016-09-02 16:02:46 -07:00
eded62e60c grpcproxy: stop watchers in watch groups 2016-09-02 16:01:11 -07:00
5b14b834c9 Merge pull request #6338 from xiang90/create
grpcproxy: fix more issues in watch path
2016-09-02 15:14:12 -07:00
8cd47c4348 grpcproxy: fix more issues in watch path 2016-09-02 15:13:21 -07:00
f7293125cf Merge pull request #6337 from xiang90/watch_cancel
grpcproxy: support cancel watcher
2016-09-02 13:38:20 -07:00
51b4d6b7a8 grpcproxy: support cancel watcher
We do not wait for the cancellation from the actual etcd server,
but generate it on the proxy side. The rule is to return the
latest rev that the watcher has seen. This should be good
enough for most use cases, if not all.
2016-09-02 12:36:47 -07:00
acc270edbf Merge pull request #6333 from plasticbox/master
libraries-and-tools.md: add C++ client package
2016-09-02 09:29:49 -07:00
ed2b3314b8 libraries-and-tools.md: add C++ client package 2016-09-02 14:05:49 +09:00
e93ee6179c Merge pull request #6325 from heyitsanthony/etcdctl-txn-quotes
etcdctl: fix quotes in txn and watch
2016-09-01 19:55:16 -07:00
666e7bd120 e2e: add quoted key/value to txn test 2016-09-01 19:39:23 -07:00
b1740f5fe4 etcdctl: fix quoted string handling in txn and watch
Fixes #6315
2016-09-01 19:39:23 -07:00
c59e0aa83e Merge pull request #6332 from heyitsanthony/fix-watcher-stream-cancel
grpcproxy: shutdown on client context cancel
2016-09-01 16:18:29 -07:00
7b2f769643 clientv3: only resume watcher if error is non-halting 2016-09-01 15:22:35 -07:00
3489fa82fb integration: don't nest proxies in cluster_proxy mode 2016-09-01 15:21:52 -07:00
d3ecebd14e grpcproxy: shut down watcher proxy when client context is done 2016-09-01 15:20:50 -07:00
26999db927 Merge pull request #6331 from xiang90/fix_proxy
grpcproxy: fix stream closing issue
2016-09-01 11:27:37 -07:00
9ef0f5ef8a grpcproxy: fix stream closing issue 2016-09-01 09:35:56 -07:00
9e5bccd458 Merge pull request #6324 from xiang90/fix_proxy_data_race
grpcproxy: fix data race
2016-08-31 18:48:51 -07:00
b982c80c14 grpcproxy: fix data race 2016-08-31 16:52:04 -07:00
48706a9cd6 Merge pull request #6320 from xiang90/fixTestIssue3699
integration: fix live lock in issue3699
2016-08-31 12:43:43 -07:00
5b60be9626 integration: fix live lock in issue3699
Do not restart the killed member immediately.
The member will advance its election timeout after restart,
so it will have a better chance to become the leader again.
2016-08-31 12:25:24 -07:00
d016383740 Merge pull request #6319 from gyuho/news
NEWS: add v3.0.7
2016-08-31 11:22:09 -07:00
44e710f76c NEWS: add v3.0.7 2016-08-31 09:31:05 -07:00
a6d22b96c3 Merge pull request #6317 from gyuho/release-test
e2e: add 'TestReleaseUpgradeWithRestart'
2016-08-30 21:22:20 -07:00
2d552927e0 Merge pull request #6316 from gyuho/grpc-endpoints
e2e: remove stripSchema
2016-08-30 21:03:06 -07:00
a1598d767b e2e: add 'TestReleaseUpgradeWithRestart' 2016-08-30 21:01:10 -07:00
54ab9a1aba Merge pull request #6312 from gyuho/release-upgrade-test-v2
test: test with v3.0 (preparation for v3.1)
2016-08-30 20:57:18 -07:00
3aa2d1b40e test: test with v3.0 (preparation for v3.1) 2016-08-30 20:54:07 -07:00
c8ad147c0a e2e: remove stripSchema 2016-08-30 20:52:33 -07:00
e29c79c54c Merge pull request #6310 from heyitsanthony/wal-page-write
wal: use page buffered writer for writing records
2016-08-30 19:34:12 -07:00
28277b5a65 wal: use page buffered writer for writing records
Forces torn writes to only happen on sector boundaries.

Fixes #6271
2016-08-30 15:49:07 -07:00
2943bf9086 ioutil: add page buffered writer
A buffered writer that only writes full pages or when explicitly flushed.
2016-08-30 15:49:07 -07:00
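
A rough sketch of the page-buffering idea (hypothetical names; the real ioutil writer may differ):

    import "io"

    // pageWriter forwards only page-aligned chunks; Flush drains the rest.
    type pageWriter struct {
        w        io.Writer
        pageSize int
        buf      []byte
    }

    func (pw *pageWriter) Write(p []byte) (int, error) {
        pw.buf = append(pw.buf, p...)
        // Write out the largest page-aligned prefix of the buffer.
        if n := (len(pw.buf) / pw.pageSize) * pw.pageSize; n > 0 {
            if _, err := pw.w.Write(pw.buf[:n]); err != nil {
                return 0, err
            }
            pw.buf = pw.buf[n:]
        }
        return len(p), nil
    }

    func (pw *pageWriter) Flush() error {
        if len(pw.buf) == 0 {
            return nil
        }
        _, err := pw.w.Write(pw.buf)
        pw.buf = pw.buf[:0]
        return err
    }
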
48941cea95 Merge pull request #6308 from gyuho/manual2
client: do not send previous node data (optional)
2016-08-30 13:33:22 -07:00
ff7458508f Documentation/v2: add 'noValueOnSuccess' example 2016-08-30 11:49:12 -07:00
b9cd329c61 Merge pull request #6309 from xiang90/fix_upgrade
etcdserver: allow zero kv index for cluster upgrade
2016-08-30 11:46:14 -07:00
771ee43169 etcdserver: allow zero kv index for cluster upgrade
If a user upgrades etcd from 2.3.x to 3.0 and shuts down the
cluster immediately without triggering any new backend writes,
then the consistent index in the backend would be zero.

The user cannot restart etcdserver due to today's strict index
match checking. We now have to loosen this a bit for this case.
2016-08-30 11:28:18 -07:00
5c06fc9093 integration: change to 'NoValueOnSuccess' 2016-08-30 10:58:44 -07:00
2da7b63809 v2http: change to 'NoValueOnSuccess' 2016-08-30 10:53:02 -07:00
fb39e96862 client: change to 'NoValueOnSuccess' 2016-08-30 10:52:58 -07:00
572bfd99ff v2http: update function returns 2016-08-30 10:29:37 -07:00
82053f04b2 client: do not send previous node data (optional)
- Do not send back node data when specified
- Remove node and prevNode when noDataOnSuccess is set
2016-08-30 10:04:09 -07:00
7873c25abd Merge pull request #6307 from gyuho/manual
libraries-and-tools.md: add C++ client package
2016-08-30 10:00:49 -07:00
e7314a2460 libraries-and-tools.md: add C++ client package 2016-08-30 09:51:27 -07:00
9e9bbb829e Merge pull request #6289 from purpleidea/feat/move-readynotify
embed: Move the ReadyNotify() call to a more sane place
2016-08-29 20:06:17 -07:00
547bf1a92d Merge pull request #6284 from glycerine/fix6278
fix unintended deadlock on key prefixes
2016-08-29 19:50:50 -07:00
9aee3f01cd embed: Move the ReadyNotify() call to a better place
When using the embed functionality, you can't call the Server.Stop()
function until StartEtcd returns, but in error situations StartEtcd can
itself block until there is a call to Server.Stop(). Since we have a
catch-22, ReadyNotify() can now be called manually by users who wish to
wait for server startup, or in parallel with a timeout if they wish to
cancel it after some time.

Chzz pointed out that this is also more consistent with the
etcdserver.Start() behaviour.

purpleidea pointed out that this is actually more correct too, because
we can now register the stop interrupt handler before we block on
startup.
2016-08-29 22:45:41 -04:00
9497e9678c clientv3/concurrency: allow election on prefixes of keys.
After winning an election or obtaining a lock, we
auto-append a slash after the provided key prefix.
This avoids the previous deadlock due to waiting
on the wrong key.

Fixes #6278
2016-08-29 18:34:14 -07:00
48f4a7d037 Merge pull request #6286 from bdarnell/initial-election-check-quorum
raft: Allow an election immediately after start with checkQuorum
2016-08-29 17:59:32 -07:00
a7a867c1e6 raft: Allow an election immediately after start with checkQuorum
Previously, the checkQuorum flag required an election timeout to
expire before a node could cast its first vote. This change permits
the node to cast a vote at any time when the leader is not known,
including immediately after startup.
2016-08-30 08:28:41 +08:00
f4c30425c0 Merge pull request #6298 from sinsharat/master
store: added missing test case scenario for scan of de-queued entries
2016-08-29 13:55:55 -07:00
452dedf8ab Merge pull request #6297 from gyuho/grpc-proxy
grpcproxy: fix recursive Context method
2016-08-29 13:31:44 -07:00
f6cda8ac0b Merge pull request #6299 from sinsharat/master
store: removed duplicate method call for the same method
2016-08-29 13:27:57 -07:00
396fac416e Merge pull request #6273 from gyuho/get-cmd
ctlv3: add 'print-value-only' flag to get command
2016-08-29 13:25:30 -07:00
db7e38b0ed Merge pull request #6300 from sinsharat/master
wal: document grammar correction
2016-08-29 12:22:38 -07:00
69ed560fae wal: document grammar correction
Corrected grammar mistake for doc.go
2016-08-30 00:50:02 +05:30
754b9025c4 store: removed duplicate method call for the same method
The get func was calling path's Join and Clean methods, which are
already called in the internalGet(nodePath) func. Hence the methods
were being invoked twice unnecessarily.

#6295
2016-08-30 00:44:53 +05:30
1c59708c51 e2e: test 'print-value-only' flag 2016-08-29 12:09:16 -07:00
524a5a1afb ctlv3: add 'print-value-only' flag to get command 2016-08-29 12:09:07 -07:00
45079ec6c1 Merge pull request #6274 from dghubble/etcd3-rkt-docs
Documentation: Add initial etcd3 with rkt docs
2016-08-29 12:01:27 -07:00
4f150b06e5 store: added missing test case scenario for scan of de-queued entries
Test case added to check error handling for replaced entries.

#6255
2016-08-30 00:30:48 +05:30
fa79d42b98 Documentation: Add initial etcd3 with rkt docs 2016-08-29 11:59:46 -07:00
86bf2bc443 grpcproxy: fix recursive Context method 2016-08-29 11:37:35 -07:00
e53b99588a Merge pull request #6288 from heyitsanthony/fix-retryread
clientv3: retry non-mutable rpcs on Internal codes
2016-08-28 20:41:19 -07:00
5e963608b7 clientv3: do not treat Internal codes as halting
Fixes #6277
2016-08-28 20:20:22 -07:00
3552420dfd clientv3: set failfast=false on read-only txns 2016-08-28 19:40:38 -07:00
64ac631863 rpctypes: set unknown codes to Unknown instead of internal
An unrecognized error code isn't "very broken".
2016-08-28 19:37:35 -07:00
f73258a51f Merge pull request #6282 from gyuho/tester-error
etcd-tester: return error for mismatch rev/hash
2016-08-27 22:25:18 -07:00
0bf2ef3c1b etcd-tester: return error for mismatch rev/hash 2016-08-27 22:14:42 -07:00
a0759298c5 Merge pull request #6281 from xiang90/fix
etcd-tester: do not restart stresser on error
2016-08-27 20:49:08 -07:00
017aac88a8 etcd-tester: do not restart stresser on error 2016-08-27 20:47:45 -07:00
0be190df4d Merge pull request #6279 from xiang90/fix_hash
mvcc: force commit and hash should be atomic for getting hash
2016-08-27 20:09:22 -07:00
1437388f77 mvcc: force commit and hash should be atomic for getting hash 2016-08-27 19:22:22 -07:00
c388b2f22f Merge pull request #6264 from heyitsanthony/error-codes
clientv3: use grpc codes to translate raw grpc errors
2016-08-26 11:52:37 -07:00
a50c707050 clientv3/integration: wait for two request timeouts in txn tests
Read only txns and Get may timeout once if the leader is lost.
2016-08-26 10:04:10 -07:00
3a49cbb769 Merge pull request #6269 from aaronlehmann/hold-lock-while-renaming
On non-Windows OS, hold file lock while renaming WAL directory
2016-08-26 09:53:59 -07:00
af4f82228c wal: hold file lock while renaming WAL directory on non-Windows
Windows requires this lock to be released before the directory is
renamed. But on unix-like operating systems, releasing the lock and
trying to reacquire it immediately can be flaky if a process is forked
around the same time. The file descriptors are marked as close-on-exec
by the Go runtime, but there is a window between the fork and exec where
another process will be holding the lock.
2016-08-26 09:27:51 -07:00
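
A POSIX-only sketch of the ordering (the flock follows the open file descriptor, not the path; function and paths are illustrative):

    import (
        "os"
        "syscall"
    )

    // renameLocked renames tmpDir to walDir while f's lock is held, so
    // there is no unlocked window for a forked child process to grab it.
    func renameLocked(f *os.File, tmpDir, walDir string) error {
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            return err
        }
        return os.Rename(tmpDir, walDir) // lock still held via f
    }
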
df54ad2208 v3rpc, rpctypes: add error types for timeouts 2016-08-26 09:22:09 -07:00
267063efd0 clientv3: use grpc codes to translate raw grpc errors 2016-08-26 09:22:09 -07:00
417b9469aa Merge pull request #6270 from heyitsanthony/etcdserver-timeout
etcdserver: use request timeout defined by ServerConfig for v3 requests
2016-08-25 20:50:21 -07:00
254c0ea814 etcdserver: use request timeout defined by ServerConfig for v3 requests 2016-08-25 18:39:01 -07:00
4f5cacc835 Merge pull request #6267 from heyitsanthony/fix-wal-tear
wal: fix CRC corruption on writes following write tears
2016-08-25 17:10:08 -07:00
f1ead43482 wal: zero out wal tail past its first zero record
Whenever the WAL is opened for writes, it should write zeroes to its tail
starting from the first zero record. Otherwise, if there are entries past
the first zero record due to a torn write, any new writes that overlap the
old entries will lead to a garbage record on the tail and cause a CRC
mismatch.
2016-08-25 14:24:46 -07:00
58a36cb651 fileutil: add ZeroToEnd for zeroing files 2016-08-25 14:24:46 -07:00
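
A hedged sketch of tail-zeroing with only the standard library (the real fileutil.ZeroToEnd may also preallocate):

    import (
        "io"
        "os"
    )

    // zeroToEnd zeroes f from its current offset to the end: truncating
    // away the tail and growing the file back yields zero bytes.
    func zeroToEnd(f *os.File) error {
        off, err := f.Seek(0, io.SeekCurrent)
        if err != nil {
            return err
        }
        end, err := f.Seek(0, io.SeekEnd)
        if err != nil {
            return err
        }
        if err = f.Truncate(off); err != nil {
            return err
        }
        if err = f.Truncate(end); err != nil {
            return err
        }
        _, err = f.Seek(off, io.SeekStart)
        return err
    }
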
0d8d9a374c wal: test for truncation on torn writes 2016-08-25 14:24:46 -07:00
488ae52a51 Merge pull request #6259 from xiang90/fix_test_c
clientv3/integration: fix TestKVPutStoppedServerAndClose
2016-08-24 14:14:17 -07:00
f2b7c501cc clientv3/integration: fix TestKVPutStoppedServerAndClose 2016-08-24 13:57:27 -07:00
bb110b0a2d Merge pull request #6257 from heyitsanthony/doc-fix-buglink
Documentation: update links for unaligned 64-bit atomics issue
2016-08-24 09:37:00 -07:00
159c8ee6e0 Documentation: update links for unaligned 64-bit atomics issue
Fixes #6256
2016-08-24 09:13:53 -07:00
1c989edb47 Merge pull request #6253 from heyitsanthony/srv-arec
discovery: reject IP address records in SRVGetCluster
2016-08-24 06:56:17 -07:00
3dc12e33f1 discovery: reject IP address records in SRVGetCluster
Was incorrectly trimming the trailing '.' from the target; this in turn
caused the etcd server to accept any SRV record with an IP target
instead of only targets with A records.
2016-08-23 18:10:42 -07:00
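
For illustration, vetting targets right after the SRV lookup (service/proto/domain are example values, not etcd's exact code):

    import (
        "net"
        "strings"
    )

    // hostTargets keeps only SRV targets that are DNS names, rejecting
    // any record whose target parses as a raw IP address.
    func hostTargets(service, proto, domain string) ([]string, error) {
        _, srvs, err := net.LookupSRV(service, proto, domain)
        if err != nil {
            return nil, err
        }
        var hosts []string
        for _, srv := range srvs {
            target := strings.TrimSuffix(srv.Target, ".")
            if net.ParseIP(target) != nil {
                continue // IP target: only targets with A records allowed
            }
            hosts = append(hosts, target)
        }
        return hosts, nil
    }
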
8e4fcaa6dc Merge pull request #6251 from xiang90/ctl_doc
etcdctl: list output options
2016-08-23 11:32:33 -07:00
86dcfbf205 etcdctl: list output options 2016-08-23 11:32:00 -07:00
83e66d2962 Merge pull request #6248 from xiang90/fix_mvcc
mvcc: only write txn should update index
2016-08-23 10:50:46 -07:00
c12104bd15 Merge pull request #6247 from xiang90/fix_snap
etcdserver: kv.commit needs to be serialized with apply
2016-08-23 09:39:54 -07:00
7f3d4bfae5 etcdserver: kv.commit needs to be serialized with apply
kv.commit updates the consistent index in the backend. When
executing in parallel with apply, it might grab the tx lock
after apply updates the consistent index and before apply
starts to execute the operation. If the server dies right
after kv.commit, the consistent index is updated but the operation
is not executed. If we restart the etcd server, etcd will skip
the operation. :(

There are a few other places that we need to take care of,
but let us fix this first.
2016-08-23 09:16:09 -07:00
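
A minimal sketch of the invariant (all names hypothetical; the real etcdserver code is more involved):

    import "sync"

    var (
        applyMu         sync.Mutex
        consistentIndex uint64
    )

    // apply records the index and executes the operation under one lock,
    // so the two can never be observed out of step.
    func apply(index uint64, op func()) {
        applyMu.Lock()
        defer applyMu.Unlock()
        consistentIndex = index
        op()
    }

    // commit persists the index under the same lock, so it never persists
    // an index whose operation has not yet executed.
    func commit(persist func(uint64)) {
        applyMu.Lock()
        defer applyMu.Unlock()
        persist(consistentIndex)
    }
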
959f860a40 Merge pull request #6249 from gyuho/fix-count
etcd-tester: fix compact rev counting
2016-08-22 23:36:57 -07:00
0c37df7265 etcd-tester: fix compact rev counting 2016-08-22 22:58:44 -07:00
e1789aa531 mvcc: only write txn should update index 2016-08-22 22:05:51 -07:00
028b954052 Merge pull request #6245 from requenym/patch-1
documentation: update libraries-and-tools.md
2016-08-22 19:08:15 -07:00
49ef47a9a4 documentation: update libraries-and-tools.md 2016-08-22 20:21:29 -04:00
13f79affb6 Merge pull request #6243 from xiang90/fix_m
e2e: remove server testing in etcdctl test
2016-08-22 16:14:51 -07:00
aa89bc35fd Merge pull request #6242 from heyitsanthony/rwdial-timeout
pkg/transport: bump wait time in TestReadWriteTimeoutDialer for write deadline
2016-08-22 16:13:50 -07:00
722d66b03d Merge pull request #6241 from gyuho/progress-doc
clientv3: specify watch progress notify interval
2016-08-22 15:59:01 -07:00
be38c50567 clientv3: specify watch progress notify interval
For watch request
2016-08-22 15:44:59 -07:00
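
A short usage sketch (assumes an existing *clientv3.Client; the key name is arbitrary):

    import (
        "context"

        "github.com/coreos/etcd/clientv3"
    )

    func watchWithProgress(cli *clientv3.Client) {
        rch := cli.Watch(context.Background(), "foo", clientv3.WithProgressNotify())
        for resp := range rch {
            if resp.IsProgressNotify() {
                // empty response; resp.Header.Revision marks watch progress
            }
        }
    }
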
1d58c7d3b2 e2e: remove server testing in etcdctl test 2016-08-22 15:34:50 -07:00
3b92384394 pkg/transport: bump wait time in TestReadWriteTimeoutDialer for write deadline
Was able to get 2s wait times with 500 concurrent requests on a fast machine;
a slower machine could possibly see similar delays with a single connection.

Fixes #6220
2016-08-22 15:30:44 -07:00
c39b7205a6 Merge pull request #6228 from mitake/e2e-txn-auth
e2e: a test case for txn and permission
2016-08-22 09:24:18 -07:00
3d5d3b90e9 e2e: a test case for txn and permission
This commit adds a new test case for checking the permission mechanism
can work well in txn requests.
2016-08-22 12:06:19 +09:00
0504b277b6 Merge pull request #6235 from coreos/procfile-location
local_cluster: make it clear where Procfile is
2016-08-21 19:55:50 -07:00
4c7bced34e local_cluster: make it clear where Procfile is
It isn't clear where to start with these instructions; fix this.
2016-08-21 17:14:59 -04:00
8c88c1611e Merge pull request #6231 from heyitsanthony/fix-rafthttp-test
rafthttp: fix race in TestStreamWriterAttachOutgoingConn
2016-08-19 20:40:42 -07:00
784c4446d9 rafthttp: fix race in TestStreamWriterAttachOutgoingConn
Fixes #6230
2016-08-19 19:59:16 -07:00
262c98f327 Merge pull request #6229 from xiang90/applynotify
etcdserver: add waitApplyIndex
2016-08-19 16:58:21 -07:00
83de13e4a8 etcdserver: support apply wait 2016-08-19 16:18:35 -07:00
940402a27d Merge pull request #6225 from xiang90/cache
grpc-proxy: invalidate cache entries when there is a put/delete
2016-08-19 15:11:59 -07:00
8db4f5b8e1 pkg/wait: change wait time to use logical clock 2016-08-19 15:10:37 -07:00
146bce3377 Merge pull request #6211 from gyuho/proxy-timeout
integration: improve TestTransferLeader
2016-08-19 13:32:18 -07:00
eaa5d9772f integration: improve TestTransferLeader
so that it can check leader transition
2016-08-19 13:11:38 -07:00
c8bbb8c53e grpc-proxy: invalidate cache entries when there is a put/delete 2016-08-19 12:52:19 -07:00
5e6d2a23b7 Merge pull request #6226 from gyuho/vendor
vendor: update grpc/grpc-go for clientconn patch
2016-08-18 20:35:25 -07:00
01471481a9 vendor: update grpc/grpc-go for clientconn patch 2016-08-18 20:17:24 -07:00
f4b6ed2469 Merge pull request #6223 from heyitsanthony/fix-rafthttp-badoutgoing
rafthttp: remove WaitSchedule() from tests
2016-08-18 16:44:56 -07:00
da1e022890 rafthttp: remove WaitSchedule() from tests
Fixes #6187
2016-08-18 16:26:35 -07:00
5e9fe0dc23 Merge pull request #6222 from hongchaodeng/master
integration: NewClusterV3() should launch cluster before creating clients
2016-08-18 14:52:04 -07:00
5630a76766 integration: NewClusterV3 should launch cluster before creating clients 2016-08-18 14:05:21 -07:00
8021487b7a Merge pull request #6219 from sinsharat/master
raft: handled panic for Term due to IOB
2016-08-18 12:33:52 -07:00
a8fc4396e2 Merge pull request #6218 from gyuho/boltdb
vendor: boltdb/bolt v1.3.0 for Go 1.7
2016-08-18 11:06:05 -07:00
9b3b1f80dd raft: handled panic for Term due to IOB
Instead of raising a panic, return an error for better handling

#6215
2016-08-18 23:11:38 +05:30
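
The shape of the guard, as a hedged standalone sketch (not raft's actual data structures):

    import "errors"

    var errIndexOutOfBounds = errors.New("raft: requested index is out of bounds")

    // term returns the term stored for index i, where terms[0] holds the
    // term of index first; it reports an error instead of panicking.
    func term(terms []uint64, first, i uint64) (uint64, error) {
        if i < first || i >= first+uint64(len(terms)) {
            return 0, errIndexOutOfBounds
        }
        return terms[i-first], nil
    }
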
00f5a01378 vendor: boltdb/bolt v1.3.0 for Go 1.7 2016-08-18 10:36:20 -07:00
cc4f4b47bc Merge pull request #6198 from heyitsanthony/reenable-outside-gopath
build: re-enable building outside gopath
2016-08-18 09:44:34 -07:00
a20d4a2d31 Merge pull request #6209 from heyitsanthony/fix-waittime-test
pkg/wait: don't expect time.Now() to be strict increasing in WaitTime tests
2016-08-17 13:46:11 -07:00
14f6dd4ded Merge pull request #6210 from gyuho/race
integration: fix race in TestDoubleBarrierFailover
2016-08-17 12:19:06 -07:00
10c9e238f0 integration: fix race in TestDoubleBarrierFailover 2016-08-17 11:56:49 -07:00
f9d122066e pkg/wait: don't expect time.Now() to be strict increasing in WaitTime tests 2016-08-17 11:53:34 -07:00
57fde954b9 Merge pull request #6208 from xiang90/better_logging
etcdserver: improve logging for leadership transfer
2016-08-17 11:47:38 -07:00
d0fa390048 etcdserver: improve logging for leadership transfer 2016-08-17 11:40:46 -07:00
5aa935f3b7 Merge pull request #6207 from gyuho/wait-extra
integration: write to leader group first, or wait
2016-08-17 11:25:09 -07:00
f2fedbae9b integration: write to leader group first, or wait
Write to leader group first, or give more time to
acknowledge the leader after network partition recovery
2016-08-17 11:09:33 -07:00
a5022c1cba Merge pull request #6205 from heyitsanthony/ft-large-writes
functional-tester: put large keys
2016-08-17 10:49:56 -07:00
e7a7fb2bb1 Merge pull request #6204 from gyuho/news
NEWS: add v3.0.5
2016-08-17 09:57:06 -07:00
6655afda4b NEWS: add v3.0.5 2016-08-17 09:56:45 -07:00
47b6449934 functional-tester: put large keys
For testing writes that must span multiple pages.
2016-08-17 09:51:44 -07:00
30cf8b7f0f Merge pull request #6197 from gyuho/mutex-proxies
integration: fix race in setting shared proxies
2016-08-17 09:15:10 -07:00
83dd121bae build: re-enable building outside gopath
Have build return an error code if build fails and add a test to travis
to confirm running build outside the gopath works.
2016-08-16 20:06:05 -07:00
38c370a7c5 Merge pull request #6196 from gyuho/clockwork
vendor: use v0.1.0 clockwork
2016-08-16 19:52:34 -07:00
fb00a32b86 integration: fix races in global proxies 2016-08-16 19:43:31 -07:00
f91f7dfb91 v2http: fix tests to use new clockwork 2016-08-16 16:36:24 -07:00
3f0f4bfee7 vendor: clockwork v0.1.0 2016-08-16 16:31:10 -07:00
28b797b538 Merge pull request #6194 from heyitsanthony/fix-gofail
build: don't override gopath by default, demote old gopath on override
2016-08-16 14:27:08 -07:00
cf063ed475 Merge pull request #6193 from xiang90/gw
docs: add gateway
2016-08-16 14:16:22 -07:00
b499f69181 docs: add gateway 2016-08-16 14:02:45 -07:00
e1519cf460 build: don't override gopath by default, demote old gopath on override
Builds already vendor through cmd/ so there's no reason to set the GOPATH; it
was also breaking gofail builds. For builds that need to override GOPATH, also
include the old GOPATH as a fallback for dependencies outside cmd/vendor/.
2016-08-16 13:46:07 -07:00
8d7703528a Merge pull request #5845 from heyitsanthony/clientv3-ignore-dead-eps
clientv3: respect up/down notifications from grpc
2016-08-16 11:56:03 -07:00
3eadf964f4 clientv3: use failfast and retry wrappers for at-most-once rpcs 2016-08-16 10:49:50 -07:00
ee3797ddff integration: treat client TLS connecting to insecure server as timeout 2016-08-16 10:17:16 -07:00
46765ad79c clientv3: respect up/down notifications from grpc
Fixes #5842
2016-08-16 09:49:36 -07:00
462eb511c5 Merge pull request #6183 from heyitsanthony/go-install-etcd
build: support go install github.com/coreos/etcd/cmd/etcd
2016-08-15 16:29:36 -07:00
b125d590cf Merge pull request #6186 from gyuho/grpcproxy-fix
proxy/grpcproxy: fix nil-map assign to 'singles'
2016-08-15 16:25:02 -07:00
b9d01fb98b vendor: update grpc 2016-08-15 16:19:40 -07:00
a4ef36c8bf proxy/grpcproxy: fix nil-map assign to 'singles' 2016-08-15 15:48:45 -07:00
d5d2370fc8 Merge pull request #6172 from xiang90/session
session: remove session manager and add ttl
2016-08-15 15:20:19 -07:00
961b03420e Merge pull request #6185 from heyitsanthony/wait-time-collision
wait: make WaitTime robust against deadline collisions
2016-08-15 15:15:29 -07:00
16b2d9ca5e Merge pull request #6170 from heyitsanthony/default-advertise-ip
use default ip for advertise URL
2016-08-15 15:12:25 -07:00
449923c98b build: support go install github.com/coreos/etcd/cmd/etcd
Could build via github.com/coreos/etcd/cmd but that would generate a binary
named "cmd", which is not ideal.
2016-08-15 15:08:41 -07:00
7b84456366 Merge pull request #6163 from gyuho/vendor
vendor: migrate to glide + update go-systemd, probing
2016-08-15 15:01:09 -07:00
c3f069c9fc wait: make WaitTime robust against deadline collisions 2016-08-15 14:38:41 -07:00
0307382c1a Merge pull request #6184 from xiang90/rm
ROADMAP: update
2016-08-15 14:30:23 -07:00
db834301eb ROADMAP: update 2016-08-15 14:30:10 -07:00
feaff17259 session: remove session manager and add ttl 2016-08-15 14:12:25 -07:00
2cc245e8bf etcdmain: report default advertise detection / fallback 2016-08-15 14:08:09 -07:00
29372f9dd2 vendor: update go-systemd, probing 2016-08-15 14:04:07 -07:00
ddf65421e7 scripts: use glide in updatedep.sh 2016-08-15 14:04:03 -07:00
b207dd095c glide: initial commit 2016-08-15 12:10:32 -07:00
d5900e8b63 vendor: migrate to glide 2016-08-15 12:10:21 -07:00
e810dec662 Merge pull request #6182 from gyuho/fix
rafthttp: use reportCriticalError, fix typo
2016-08-15 11:48:30 -07:00
e8594b60b1 embed: use default route IP for default advertise URL
Fixes #2858
2016-08-15 11:12:26 -07:00
d23392ed8e netutil: GetDefaultHost for getting the default IP of the host machine 2016-08-15 11:12:26 -07:00
bd450c1ba3 rafthttp: use reportCriticalError, fix typo 2016-08-15 10:40:58 -07:00
561c3b918a Merge pull request #6179 from ypu/binDir
e2e: Update binary path with binDir
2016-08-15 10:36:04 -07:00
9eb6ea34bd Merge pull request #6175 from heyitsanthony/fix-conn-race
rafthttp: fix race between streamReader.stop() and connection closer
2016-08-15 09:27:24 -07:00
d0d8e49e20 e2e: Update binary path with binDir
Signed-off-by: Yiqiao Pu <ypu@redhat.com>
2016-08-15 17:22:42 +08:00
911c8442b7 rafthttp: fix race between streamReader.stop() and connection closer 2016-08-15 01:36:09 -07:00
96e018634a Merge pull request #6173 from gyuho/ccc
pkg/httputil: simplify RequestCanceler args
2016-08-14 20:41:19 -07:00
f14fd43548 proxy/httpproxy: fix httputil.RequestCanceler 2016-08-14 14:37:08 -07:00
0503676bde rafthttp: fix httputil.RequestCanceler 2016-08-14 14:36:51 -07:00
ae4b4109b2 pkg/httputil: simplify RequestCanceler args 2016-08-14 14:35:50 -07:00
1b5a129bbe Merge pull request #6171 from gyuho/go-vet
*: fix spell errors from go report card
2016-08-13 23:10:17 -07:00
19b35c939a proxy/grpcproxy: fix spell 'gropu' to 'group' 2016-08-13 20:55:15 -07:00
4d3b281369 etcdserver: fix spell errors 2016-08-13 20:54:48 -07:00
6b671b88dc etcdctl/ctlv3: fix spell errors 2016-08-13 20:54:27 -07:00
d788eb8d92 Merge pull request #6038 from gyuho/leader
*: transfer leadership when stopping leader
2016-08-13 14:47:54 -07:00
a205242ca5 integration: add 'TestTransferLeader/Stop' 2016-08-13 14:32:01 -07:00
64a0e34602 etcdserver: transfer leadership when stopping 2016-08-13 14:31:58 -07:00
7b11c288fe Merge pull request #6169 from sinsharat/master
etcdserver: optimized verifying local member
2016-08-12 19:09:55 -07:00
1fec4ba127 etcdserver: optimized verifying local member
Moved the preparing and sorting of advertised peer URLs, and the
sorting of peer URLs, so they run only when strict verification needs
to be done. This avoids the processing when strict verification is not
required, as in the case of the VerifyJoinExisting function.

#6165
2016-08-13 06:17:21 +05:30
817de6d212 Merge pull request #6168 from heyitsanthony/fix-periodic-test-block
compactor: wait for After() in TestPeriodic
2016-08-12 13:54:06 -07:00
5eff6fb7db compactor: wait for After() in TestPeriodic
If the test calls clock.Advance() after the compactor checks clock.Now()
but before the compactor calls clock.After(), the compactor will wait
forever on clock.After() expecting the lost clock.Advance().

Reproduced failure by putting a Sleep() in the clock.Now() continue path.

Fixes #6060 (again)
2016-08-12 13:28:40 -07:00
f975fe8068 Merge pull request #6140 from gyuho/network-partition
*: add network partition tests
2016-08-12 12:33:24 -07:00
0a00328a7c integration: add network partition tests 2016-08-12 12:15:29 -07:00
82a3d90763 Merge pull request #6167 from xiang90/fix_txn_rev
etcdserver: fix wrong rev in header when nothing is actually got executed
2016-08-12 12:14:48 -07:00
92a0f08722 etcdserver: fix wrong rev in header when nothing is actually got executed 2016-08-12 11:44:13 -07:00
67b1c7cce5 Merge pull request #6166 from heyitsanthony/clientv3-nonblock-new
clientv3: support non-blocking New()
2016-08-12 10:57:45 -07:00
429d5ab20b clientv3: only block on New() when DialTimeout > 0
Fixes #6162
2016-08-12 10:33:11 -07:00
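
A usage sketch; the endpoint and timeout values are illustrative:

    import (
        "time"

        "github.com/coreos/etcd/clientv3"
    )

    func newClient() (*clientv3.Client, error) {
        // DialTimeout > 0: New() blocks until connected or timed out.
        // DialTimeout == 0: New() returns without blocking.
        return clientv3.New(clientv3.Config{
            Endpoints:   []string{"localhost:2379"},
            DialTimeout: 5 * time.Second,
        })
    }
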
c6c6cfb502 etcdserver: implement 'CutPeer', 'MendPeer' 2016-08-12 07:38:52 -07:00
c33ea20fef Merge pull request #6161 from sinsharat/master
etcdserver: stats/server - refactored
2016-08-11 17:03:23 -07:00
965b2901d5 Merge pull request #6156 from heyitsanthony/remove-member-quorum
etcdserver: reject member removal that breaks active quorum
2016-08-11 11:40:38 -07:00
aa9837e8ff e2e: support --strict-reconfig-check=false 2016-08-11 11:14:14 -07:00
e742ff331f integration: test member removal which breaks active quorum is rejected 2016-08-11 11:14:14 -07:00
6205a9a6cb etcdserver: stats/server - refactored
removed code duplication and improved readability

#6160
2016-08-11 22:09:25 +05:30
de06dc1272 Merge pull request #6155 from gyuho/raft-leader-transfer
*: expose Raft leader transfer
2016-08-11 08:03:28 -07:00
d3812ed664 Merge pull request #6157 from siddontang/siddontang/fix-overflow
raft: fix overflow
2016-08-11 07:53:48 -07:00
f8ee322b08 raft: fix overflow 2016-08-11 09:24:49 +08:00
8a32929d29 Merge pull request #6154 from gyuho/rafthttp-pause
rafthttp: add Transport.Cut/MendPeer
2016-08-10 17:10:30 -07:00
937ae658dd rafthttp: add Transport.Cut/MendPeer
From https://github.com/coreos/etcd/pull/6140.
2016-08-10 17:09:35 -07:00
a1ce07a321 etcdserver: reject member removal that breaks the current active quorum 2016-08-10 17:00:39 -07:00
a56cb82180 etcdserver: add TransferLeadership for raft.Node 2016-08-10 16:26:11 -07:00
e64ef3f261 raft: add 'TransferLeadership' to Node interface 2016-08-10 16:25:22 -07:00
f4141f0f51 raft: handle 'MsgTransferLeader' in follower 2016-08-10 16:24:29 -07:00
d72cee1b0c Merge pull request #6153 from gyuho/example
clientv3: update base example with TLS
2016-08-10 14:53:42 -07:00
1644679d00 clientv3: add 'ExampleConfig_withTLS' 2016-08-10 14:37:34 -07:00
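
The gist of a TLS-enabled client config (certificate loading elided; assumes the TLS field on clientv3.Config):

    import (
        "crypto/tls"

        "github.com/coreos/etcd/clientv3"
    )

    func newTLSClient(tlsCfg *tls.Config) (*clientv3.Client, error) {
        return clientv3.New(clientv3.Config{
            Endpoints: []string{"https://localhost:2379"},
            TLS:       tlsCfg, // e.g. built from cert/key/CA files
        })
    }
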
7eb43ea75b Merge pull request #6152 from xiang90/fix_count
mvcc: fix count
2016-08-10 11:42:10 -07:00
f5549cba2a Merge pull request #6151 from heyitsanthony/configfile-defaults
embed: load config defaults before loading from file
2016-08-10 11:27:57 -07:00
de864d3b58 mvcc: fix count 2016-08-10 10:54:25 -07:00
2bb1f9c8a4 Merge pull request #6150 from gyuho/metrics
etcdserver: use Counter for proposals_failed_total
2016-08-10 09:59:10 -07:00
eb97aba581 e2e: test etcd boots with example config file 2016-08-10 09:45:17 -07:00
6de993b468 embed: load config defaults before loading config from file 2016-08-10 09:44:50 -07:00
06e2338108 Merge pull request #6113 from ypu/e2e
Add some test flags for e2e test
2016-08-10 09:28:27 -07:00
d219e96359 etcdserver: use Counter for proposals_failed_total
It only ever goes up.
2016-08-10 09:27:51 -07:00
b6f5b6b1c9 Merge pull request #6147 from sinsharat/master
etcdserver: Error handling for invalid empty raft cluster
2016-08-10 08:52:45 -07:00
2b5a5c77cf etcdserver: Error handling for invalid empty raft cluster
Implemented the TODO: GetClusterFromRemotePeers should not return a
nil error with an invalid empty cluster

#6137
2016-08-10 19:23:19 +05:30
a5e4fbd335 e2e: Make the certificate file path configurable
This commit makes it more convenient to run the e2e tests in an
environment without the e2e source code.

Signed-off-by: Yiqiao Pu <ypu@redhat.com>
2016-08-10 15:40:12 +08:00
2ca87f6c03 e2e: Make it possible to run with an existing binary
Add the bin-dir option to the command line, so the e2e tests can
run with an existing binary. For example (run the command under the e2e
directory):
go test -v -timeout 10m -bin-dir /usr/bin -cpu 1,2,4

Signed-off-by: Yiqiao Pu <ypu@redhat.com>
2016-08-10 15:40:12 +08:00
81f5e31ed2 Merge pull request #6142 from heyitsanthony/fix-cancel-watch-imm
clientv3: handle watchGrpcStream shutdown if prior to goroutine start
2016-08-09 20:53:56 -07:00
2d3eda4afa Merge pull request #6139 from aaronlehmann/export-segment-size
wal: Export SegmentSizeBytes as a variable
2016-08-09 20:39:56 -07:00
1c83a46c6d clientv3: handle watchGrpcStream shutdown if prior to goroutine start
Fixes #6141
2016-08-09 19:59:04 -07:00
2b996b6038 wal: Export SegmentSizeBytes as a variable
In test situations, it's useful to create smaller than usual WAL files
to test rotation and to avoid the overhead of preallocation on old-style
filesystems that don't handle it efficiently. This commit changes
segmentSizeBytes to an exported variable so that tests can override it
from an init() function.
2016-08-09 15:38:30 -07:00
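
For example, a test package might shrink segments like this (the 64KB value is arbitrary):

    import "github.com/coreos/etcd/wal"

    func init() {
        // Small segments make rotation cheap to exercise and avoid
        // preallocation overhead on filesystems that lack fast fallocate.
        wal.SegmentSizeBytes = 64 * 1024
    }
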
88a77f30e1 Merge pull request #6136 from heyitsanthony/fix-watcher-leak
clientv3: close watcher stream once all watchers detach
2016-08-09 10:23:15 -07:00
8c1c291332 clientv3/integration: test watcher cancelation propagation to server 2016-08-09 00:10:57 -07:00
5e651a0d0d clientv3: close watcher stream once all watchers detach
Fixes #6134
2016-08-09 00:10:57 -07:00
c3c41234f1 integration: support querying member metrics 2016-08-08 23:45:50 -07:00
c7e4198742 Merge pull request #6129 from xiang90/fix_raft
raft: fix getting unapplied log entries
2016-08-08 16:30:42 -07:00
8f3a11c73c Merge pull request #6105 from gyuho/release-notes
NEWS: add release notes for >v3.0.0 releases
2016-08-08 15:39:14 -07:00
f58a119b44 Merge pull request #6132 from gyuho/manual
Documentation/dev-guide: add bash syntax to doc
2016-08-08 15:26:14 -07:00
adbd936f22 Documentation/dev-guide: add bash syntax to doc 2016-08-08 15:06:02 -07:00
39f39c185e NEWS: add release notes for >v3.0.0 releases
Fix https://github.com/coreos/etcd/issues/6049.
2016-08-08 15:01:17 -07:00
918af500c3 Merge pull request #6130 from gyuho/port-e2e
e2e: use unix port for release tests
2016-08-08 14:41:09 -07:00
311c19e494 e2e: use unix port for release tests
Fix https://github.com/coreos/etcd/issues/5947.

When we restart, the previous port could still be bound
by the OS. Use a Unix socket to avoid such rebind cases.
2016-08-08 14:26:19 -07:00
5f0c122496 raft: fix getting unapplied log entries 2016-08-08 10:44:02 -07:00
bb28c9ab00 Merge pull request #6126 from gyuho/tester
etcd-tester: fix tester for 5-node cluster
2016-08-07 21:58:49 -07:00
c6cf015e26 etcd-tester: fix tester for 5-node cluster
1. fix failure case counting
2. match ErrClientConnClosing in stresser
3. longer timeout for set-health-key
4. fixed range for range/delete stresser
5. remove Limit in RangeRequest
2016-08-07 21:15:01 -07:00
fb7c4da361 Merge pull request #6124 from heyitsanthony/share-limiter
functional-tester: share limiter among stresser
2016-08-07 19:17:06 -07:00
978ae9de29 functional-tester: share limiter among stresser
Otherwise, adding more members stresses the cluster with more ops.
2016-08-07 19:15:00 -07:00
7678b84f2c Merge pull request #6123 from xiang90/fix_limiter
tools/functional-tester: fix limiter
2016-08-07 16:20:17 -07:00
619a40b22b Merge pull request #6122 from xiang90/debug_stresser
tools/functional-tester: better logging
2016-08-07 16:17:52 -07:00
f6a1585902 functional-tester: reduce rate to 3000 2016-08-07 14:34:01 -07:00
107a07563f tools/functional-tester: fix limiter 2016-08-07 14:28:16 -07:00
69204397ee tools/functional-tester: better logging 2016-08-07 14:21:44 -07:00
f505bcb91a Merge pull request #6117 from gyuho/lease-test
integration: add more lease tests
2016-08-05 19:25:07 -07:00
f1f31f1015 integration: add more lease tests
Fix https://github.com/coreos/etcd/issues/6102.
2016-08-05 19:09:46 -07:00
c71f0ea174 Merge pull request #6106 from heyitsanthony/strict-reconfig-healthy
etcdserver, embed: stricter reconfig checking
2016-08-05 17:15:01 -07:00
9063ce5e3f etcdserver, embed: stricter reconfig checking
Make --strict-reconfig-check the default and check whether the cluster is
healthy when adding a member.
2016-08-05 16:59:25 -07:00
9764652356 Merge pull request #6081 from gyuho/functional-tester
etcd-tester: delete/range with limit, clean up
2016-08-05 11:28:41 -07:00
854a215329 etcd-tester: delete/range with limit, clean up 2016-08-05 11:21:36 -07:00
4a7fabd219 Merge pull request #6098 from xiang90/lease
Fix Lease
2016-08-05 10:08:24 -07:00
6c3efde51b Merge pull request #6099 from sinsharat/master
raft: handling of applying old snapshots
2016-08-05 07:38:07 -07:00
d69d438289 *: minor cleanup for lease 2016-08-04 20:39:32 -07:00
7ed8a133d2 Merge pull request #6104 from gyuho/typo
pkg/transport: fix minor typo
2016-08-04 16:03:40 -07:00
c38f0290a7 pkg/transport: fix minor typo 2016-08-04 16:00:18 -07:00
c46955b60a Merge pull request #6097 from swingbach/master
raft: fix #6096
2016-08-04 11:40:02 -07:00
e2a956c0c4 Merge pull request #6100 from gyuho/sort-comment
clientv3: ignore sort-ascend-key option
2016-08-04 11:28:49 -07:00
bd62b0a646 mvcc: attach keys to leases after recover all state
The previous logic is wrong. When we have history like Put(foo, bar, lease1)
and Put(foo, bar, lease2), we will end up attaching foo to both lease 1 and
lease 2. Similar things can happen for detach by clearing the lease of a key.

Now we fix this by attaching leases only at the end of the recovery.
We use a map to keep the last lease attachment state.
2016-08-04 11:17:58 -07:00
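
A hedged sketch of the last-writer-wins map (types are stand-ins for the mvcc/lease internals):

    type keyRev struct {
        key   string
        lease int64 // 0 means no lease / lease cleared
    }

    // attachAtEnd replays history oldest-to-newest, keeps only each key's
    // final lease, then attaches once, so no key ends up on two leases.
    func attachAtEnd(history []keyRev, attach func(lease int64, key string)) {
        last := make(map[string]int64)
        for _, rev := range history {
            if rev.lease == 0 {
                delete(last, rev.key) // cleared lease detaches the key
            } else {
                last[rev.key] = rev.lease
            }
        }
        for key, id := range last {
            attach(id, key)
        }
    }
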
ddddecc3ab clientv3: ignore sort-ascend-key option 2016-08-04 11:13:41 -07:00
75c06cacae lease: do lease deletion in the kv txn 2016-08-04 10:06:47 -07:00
4d59b6f52c lease: delete kvs in a txn 2016-08-04 10:06:46 -07:00
fd757756f5 raft: handling of applying old snapshots
There was a TODO to handle ErrorSnapshotOutOfDate in the
ApplySnapshot function; this is now implemented.

#6090
2016-08-04 21:08:24 +05:30
29a077bdbe etcdserver: always recover lessor first 2016-08-04 08:06:19 -07:00
41dee84733 raft: fix #6096 2016-08-04 18:31:22 +08:00
eb36d0dbba Merge pull request #6084 from heyitsanthony/srv-servername
etcdctl: set TLS servername on discovery
2016-08-03 23:51:11 -07:00
a752338d45 Documentation: update clustering guide about PKI SRV record forging 2016-08-03 22:28:03 -07:00
d1809830bb embed: use ServerName on TLS DNS discovery without CA file 2016-08-03 22:28:03 -07:00
ab4ac828f3 etcdmain: check TLS on gateway SRV records 2016-08-03 22:28:03 -07:00
e218834b58 etcdctl: set ServerName for TLS when using --discovery-srv 2016-08-03 22:28:03 -07:00
cd781bf30c transport: add ServerName to TLSConfig and add ValidateSecureEndpoints
ServerName prevents accepting forged SRV records with cross-domain
credentials. ValidateSecureEndpoints prevents downgrade attacks from SRV
records.
2016-08-03 22:28:03 -07:00
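
The core of the defense is standard crypto/tls name pinning; a hedged sketch:

    import "crypto/tls"

    // Verify the server certificate against the domain we queried for SRV
    // records, not whatever host the record points at; a forged record
    // with cross-domain credentials then fails verification.
    func srvTLSConfig(queriedDomain string) *tls.Config {
        return &tls.Config{ServerName: queriedDomain}
    }
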
6e7baab32c Merge pull request #6070 from swingbach/master
raft: fix #6068
2016-08-03 19:59:07 -07:00
cabd28516c Merge pull request #6092 from gyuho/transport
pkg/transport: update scheme to unix without copy
2016-08-03 10:59:00 -07:00
c8cc87c3f5 pkg/transport: update scheme to unix without copying URL 2016-08-03 10:35:28 -07:00
bc9882f521 Merge pull request #6087 from xiang90/grpc_create
grpcproxy: handle create event
2016-08-03 09:31:33 -07:00
57c68ab1db grpcproxy: handle create event 2016-08-02 20:51:30 -07:00
c30a436829 Merge pull request #6086 from xiang90/sc
clientv3: add send created notification
2016-08-02 20:27:04 -07:00
33c3583b50 clientv3: add send created notification 2016-08-02 20:08:11 -07:00
76e62c39b0 Merge pull request #6085 from heyitsanthony/lease-elect-timeout
etcdserver, lease: tie lease min ttl to election timeout
2016-08-02 13:27:04 -07:00
bf71497537 etcdserver, lease: tie lease min ttl to election timeout 2016-08-02 13:06:57 -07:00
c0a8da7fd0 raft: minor refactor 2016-08-02 08:46:43 +08:00
4db07dbc93 Merge pull request #6079 from gyuho/cleanup-functional-tester
etcd-tester: remove unnecessary arg from stresser
2016-08-01 15:40:50 -07:00
755eee0d30 etcd-tester: remove unnecessary arg from stresser 2016-08-01 15:35:31 -07:00
b23045e34d Merge pull request #6078 from gyuho/release-note
dev-internal: update release note
2016-08-01 15:14:05 -07:00
fc4b30a1e0 dev-internal: update release note
For https://github.com/coreos/etcd/issues/6049.
2016-08-01 15:09:47 -07:00
9836990aa7 Merge pull request #6077 from gyuho/auth-guest
v2http: use guest access in non-TLS mode
2016-08-01 14:32:46 -07:00
87498e0209 v2http: use guest access in non-TLS mode
Fix https://github.com/coreos/etcd/issues/6075.
2016-08-01 14:00:38 -07:00
59ac42ff38 Merge pull request #6073 from heyitsanthony/rafthttp-close-stream
rafthttp: close http socket when pipeline handler gets a raft error
2016-07-31 21:49:04 -07:00
911dcc9386 rafthttp: close http socket when pipeline handler gets a raft error
Otherwise the http stream remains open and keeps receiving raft messages.
This can lead to "raft: stopped" log spam on closing an embedded server.

Fixes #5981
2016-07-31 20:25:42 -07:00
a2715e3bda Merge pull request #6072 from xiang90/tls_err
Log TLS error in health checking
2016-07-31 20:17:47 -07:00
9311d7b77e rafthttp: log health checking error early 2016-07-31 19:58:22 -07:00
5a83f05e96 dep: update probing 2016-07-31 18:24:00 -07:00
a60387bab2 Merge pull request #6001 from mitake/auth-errcode
client, etcdserver: propagate status code of auth related error
2016-07-31 08:28:41 -07:00
564bf8d17e client: utility functions for getting detail of v2 auth errors
The current v2 auth API doesn't propagate its error code. This commit adds
utility functions for parsing error messages and getting details of v2
auth errors.

Fixes https://github.com/coreos/etcd/issues/5894
2016-07-31 21:23:58 +09:00
4d309f0cb7 Merge pull request #6054 from heyitsanthony/serialize-refactor
etcdserver: apply serialized requests outside auth apply lock
2016-07-30 22:44:26 -07:00
06da46c4ee etcdserver: apply serialized requests outside auth apply lock
Fixes #6010
2016-07-30 22:00:49 -07:00
b43722dd48 Merge pull request #6069 from xiang90/raft_doc
raft: better doc
2016-07-30 21:11:57 -07:00
8d12017fe2 raft: better doc 2016-07-30 21:11:37 -07:00
992f628e6e raft: fix #6068 2016-07-30 03:27:29 +08:00
e2088b8073 Merge pull request #6063 from siddontang/siddontang/embed-handler
embed: support registering client handlers
2016-07-27 22:57:27 -07:00
86de0797e1 embed: support registering user handlers 2016-07-28 13:39:06 +08:00
72eb2d8893 Merge pull request #6064 from heyitsanthony/clientv3-watch-filter
clientv3: watch filters
2016-07-27 21:48:25 -07:00
4c9a2a65c9 integration: test clientv3 watch filters 2016-07-27 21:25:06 -07:00
943fe70178 clientv3: support watch filters 2016-07-27 21:24:52 -07:00
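
A usage sketch (assumes an existing client and context; the key name is arbitrary):

    import (
        "context"

        "github.com/coreos/etcd/clientv3"
    )

    // WithFilterPut drops put events server-side; only deletes arrive.
    func watchDeletesOnly(ctx context.Context, cli *clientv3.Client) clientv3.WatchChan {
        return cli.Watch(ctx, "foo", clientv3.WithFilterPut())
    }
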
79d25a6884 Merge pull request #6061 from heyitsanthony/fix-snapshot-test
etcdserver: don't race when waiting for store in TestSnapshot
2016-07-27 19:15:41 -07:00
3d8e4ace47 Merge pull request #6062 from heyitsanthony/fix-test-periodic
compactor: fix race in TestPeriodic
2016-07-27 19:15:27 -07:00
76a99fa1c3 compactor: fix race in TestPeriodic
Test ordering now similar to TestPeriodicPause

Fixes #6060
2016-07-27 16:03:22 -07:00
1153350a95 Merge pull request #6059 from jlamillan/honor_global_output
etcdctl: Add support for formatting output of key-related commands
2016-07-27 16:03:17 -07:00
cfe09d34b8 etcdserver: don't race when waiting for store in TestSnapshot 2016-07-27 15:37:27 -07:00
205f10aeb6 etcdctl: Add support for formatting output of key-related commands
All v2 key and dir related commands will now honor the global format option if
it was specified. Otherwise, the output will remain the same.
2016-07-27 14:17:19 -07:00
6136b26f38 Merge pull request #6056 from gyuho/gateway
scripts/genproto: use latest grpc-gateway c8ec92d0
2016-07-27 13:37:35 -07:00
273c6f6ba9 Merge pull request #6058 from gyuho/dockerfile-release
Dockerfile-release: add '/var/lib/etcd/'
2016-07-27 13:37:26 -07:00
de99dfb134 Dockerfile-release: add '/var/lib/etcd/'
We have '/var/etcd/' in the Dockerfile for historical reasons.
In most cases, users store data in '/var/lib/etcd/'.
2016-07-27 13:24:07 -07:00
982e18d80b *: regenerate proto with latest grpc-gateway 2016-07-27 13:21:03 -07:00
6e95ce26fb scripts/genproto: use latest grpc-gateway c8ec92d0 2016-07-27 13:20:15 -07:00
13c2d32061 Merge pull request #6045 from heyitsanthony/fix-version-race
etcdserver, api, membership: don't race on setting version
2016-07-27 08:56:39 -07:00
a75688bd17 Merge pull request #6039 from xiang90/fix_r
raft: hide Campaign rules on applying all entries
2016-07-26 20:52:09 -07:00
3c3b33b00f Merge pull request #5911 from mitake/skip-apply-txn
etcdserver: skip range requests in txn if the result is needless
2016-07-26 20:48:41 -07:00
0090573749 etcdserver: skip range requests in txn if the result is needless
If a server isn't serving txn requests from a client, the server
doesn't need the result of range requests in the txn.

This is a follow-up commit to
https://github.com/coreos/etcd/pull/5689
2016-07-26 19:49:07 -07:00
de2c3ec3db etcdserver, api, membership: don't race on setting version
Fixes #6029
2016-07-26 18:21:40 -07:00
640d511684 Merge pull request #6047 from gyuho/doc
Documentation: fix links in upgrades
2016-07-26 12:54:14 -07:00
914e9266cb Documentation: fix links in upgrades 2016-07-26 12:51:59 -07:00
0d6c028aa2 Merge pull request #6032 from xiang90/gateway
fix a few issues in grpc gateway
2016-07-25 16:48:38 -07:00
484f579905 raft: hide Campaign rules on applying all entries 2016-07-25 15:53:39 -07:00
864947a825 Merge pull request #6037 from heyitsanthony/disable-tracing
etcdmain: disable grpc tracing by default
2016-07-25 15:20:28 -07:00
d6b22323a8 etcdmain: disable grpc tracing by default 2016-07-25 14:23:36 -07:00
6079be7dae Merge pull request #6036 from heyitsanthony/fix-embed-defaults
embed: add listen urls to default config
2016-07-25 12:49:45 -07:00
537057bd11 Merge pull request #6033 from heyitsanthony/watch-adapter
integration: support watch with cluster_proxy tag
2016-07-25 11:34:15 -07:00
42fc36b4d6 embed: add listen urls to default config
Was only setting the advertise urls.
2016-07-25 11:06:03 -07:00
7f0f9795bf Merge pull request #6028 from xiang90/plat
doc: update platform.md
2016-07-25 09:58:55 -07:00
2b4c37f54a grpcproxy: don't leak goroutines on watch proxy shutdown 2016-07-25 09:34:36 -07:00
418bb5e176 grpcproxy: bind clientv3.Watcher on initialization 2016-07-25 09:34:36 -07:00
1cad722a6d integration: support watch apis in cluster_proxy build 2016-07-25 09:34:36 -07:00
ac96963003 clientv3: support creating a Watch from a WatchClient 2016-07-25 09:34:36 -07:00
4fa9363aca grpcproxy: client watch adapter 2016-07-25 09:34:36 -07:00
020a24f1c3 *: regenerate proto for handling eof error 2016-07-23 16:21:44 -07:00
38b69a9301 scripts:genproto.sh: update grpc-gateway 2016-07-23 16:18:42 -07:00
fffa484a9f *: regenerate proto for adding deleterange 2016-07-23 16:17:44 -07:00
b4ce427d45 etcdserverpb: add missing deleterange annotation 2016-07-23 15:59:53 -07:00
116a1b5855 Merge pull request #6031 from gyuho/vet-fix
grpcproxy: define 'watchergroups' in pointer
2016-07-22 17:39:36 -07:00
abbefc9e25 grpcproxy: define 'watchergroups' in pointer
To avoid copying mutex lock values
2016-07-22 16:54:11 -07:00
5b288f6cd1 Merge pull request #6030 from gyuho/raft-raft
raft: replace 'reflect.DeepEqual' with bytes.Equal
2016-07-22 16:53:11 -07:00
4ff6c72257 raft: replace 'reflect.DeepEqual' with bytes.Equal 2016-07-22 16:34:13 -07:00
8f4a36fd32 doc: update platform.md 2016-07-22 11:24:19 -07:00
ec5c5d9ddf Merge pull request #6021 from xiang90/gateway_test
e2e: add gateway test
2016-07-21 16:48:04 -07:00
c603b5e6a1 e2e: add gateway test 2016-07-21 16:19:54 -07:00
2bf55e3a15 Merge pull request #6016 from endocode/kayrus/fix_serve_err_return
embed: Fixed serve() err return
2016-07-21 11:17:08 -07:00
fee9e2b183 embed: Fixed serve() err return 2016-07-21 18:06:08 +02:00
de638a5e4d Merge pull request #5991 from gyuho/manual
v2http: client certificate auth via common name
2016-07-21 08:02:17 -07:00
214c1e55b0 Merge pull request #5999 from jlamillan/master
Add support for formatting output of ls command in json or extended fo…
2016-07-21 07:09:52 -07:00
32553c5796 Merge pull request #6006 from dongsupark/dongsu/fix-build-error-go-systemd
etcdmain: correctly check return values from SdNotify()
2016-07-21 07:08:58 -07:00
624187d25f etcdmain: correctly check return values from SdNotify()
SdNotify() now returns 2 values, sent and err. So startEtcdOrProxyV2()
needs to check the 2 return values correctly. As the 2 values are
independent of each other, error checking needs to be slightly updated
too.

SdNotifyNoSocket, which was previously provided by go-systemd, does not
exist any more. In that case (false, nil) will be returned instead.
2016-07-21 09:19:07 +02:00
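
A sketch against a recent go-systemd API (the exact signature has shifted across versions, so treat this as illustrative):

    import "github.com/coreos/go-systemd/daemon"

    func notifyReady() {
        // false = keep NOTIFY_SOCKET in the environment
        sent, err := daemon.SdNotify(false, daemon.SdNotifyReady)
        switch {
        case err != nil:
            // notification failed outright
        case !sent:
            // (false, nil): no systemd socket, i.e. not run under systemd
        }
    }
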
00c9fe4753 vendor: update go-systemd
Godeps.json and vendor need to be updated according to the newest
go-systemd, as SdNotify() in go-systemd has changed its API.
2016-07-21 08:20:52 +02:00
f18d5433cc etcdctl: Add support for formatting output of ls command in json
The ls command will check for and honor json or extended output formats.

Fixes #5993
2016-07-20 18:05:23 -07:00
42db8f55b2 e2e: test auth enabled with CN name cert 2016-07-20 16:55:45 -07:00
e001848270 Merge pull request #5772 from heyitsanthony/integration-proxy
integration: build tag for proxy
2016-07-20 16:28:12 -07:00
5066981cc7 v2http: test with 'ClientCertAuthEnabled' 2016-07-20 16:24:33 -07:00
25aeeb35c3 v2http: set 'ClientCertAuthEnabled' in client.go 2016-07-20 16:24:15 -07:00
68ece954fb v2http: add 'ClientCertAuthEnabled' in handlers 2016-07-20 16:23:41 -07:00
be001c44e8 embed: set 'ClientCertAuthEnabled' 2016-07-20 16:23:24 -07:00
9510bd6036 etcdserver: add 'ClientCertAuthEnabled' option 2016-07-20 16:22:59 -07:00
0f0d32b073 v2http: move 'testdata' from 'etcdhttp' 2016-07-20 16:20:42 -07:00
ff5709bb41 v2http: client cert cn authentication
Introduce client certificate authentication using the certificate CN.
2016-07-20 16:20:13 -07:00
ab17165352 v2http: refactor http basic auth
refactor http basic auth code to combine basic auth extraction and validation
2016-07-20 16:20:05 -07:00
768ccb8c10 grpcproxy: respect prev_kv flag 2016-07-20 15:58:33 -07:00
becbd9f3d6 test: grpcproxy integration test pass
Run via
PASSES=grpcproxy ./test
2016-07-20 15:58:33 -07:00
7b3d502b96 integration: build tag cluster_proxy for testing backed by proxy 2016-07-20 15:40:33 -07:00
17e0164f57 clientv3: add KV constructor using pb.KVClient 2016-07-20 15:40:33 -07:00
54df540c2c grpcproxy: wrapper from pb.KVServer to pb.KVClient 2016-07-20 15:40:33 -07:00
15aa64eb3c Merge pull request #6009 from heyitsanthony/fix-progress-notify
v3rpc: don't elide next progress notification on progress notification
2016-07-20 13:46:11 -07:00
65d7e7963a Merge pull request #6011 from heyitsanthony/fix-migrate-test
e2e: use a single member cluster in TestCtlV3Migrate
2016-07-20 13:27:17 -07:00
8c8742f43c integration: change timeouts for TestWatchWithProgressNotify
a) 2 * progress interval was passing with dropped notifies
b) waitResponse was waiting so long that it expected a dropped notify
2016-07-20 13:23:44 -07:00
a289bf58e6 e2e: use a single member cluster in TestCtlV3Migrate
Occasionally migrate would fail because a minority node would be missing
v2 keys. Instead, just use a single member cluster.

Fixes #5992
2016-07-20 12:10:09 -07:00
299ebc6137 v3rpc: don't elide next progress notification on progress notification
Fixes #5878
2016-07-20 11:37:20 -07:00
a7b098b26d Merge pull request #6007 from heyitsanthony/fix-watch-test
integration: fix race in TestV3WatchMultipleEventsTxnSynced
2016-07-20 10:34:54 -07:00
82ddeb38b4 integration: fix race in TestV3WatchMultipleEventsTxnSynced
Writes between watcher creation request and reply were being dropped.

Fixes #5789
2016-07-20 09:55:39 -07:00
aba478fb8a Merge pull request #5793 from mitake/auth-revision
auth, etcdserver: introduce revision of authStore for avoiding TOCTOU problem
2016-07-20 09:32:54 -07:00
edcfcae332 Merge pull request #5995 from heyitsanthony/clientv3-retry-stopped
rpctypes, clientv3: retry RPC on EtcdStopped
2016-07-20 08:54:14 -07:00
ef6b74411c auth, etcdserver: introduce revision of authStore for avoiding TOCTOU problem
This commit introduces revision of authStore. The revision number
represents a version of authStore that is incremented by updating auth
related information.

The revision is required for avoiding TOCTOU problems. Currently there
are two types of TOCTOU problems in v3 auth.

The first one is in ordinary linearizable requests, with a sequence like
the one below:
1. A request from client CA is processed in follower FA. FA looks up the
   username (call it U) for the request from the request's token. At
   this time, the request is authorized correctly.
2. Another request from client CB is processed in follower FB. CB
   is for changing U's password.
3. FB forwards the request from CB to the leader before FA. Now U's
   password is updated and the request from CA should be rejected.
4. However, the request from CA is processed by the leader because
   authentication is already done in FA.

For avoiding the above sequence, this commit lets
etcdserverpb.RequestHeader have a member revision. The member is
initialized during authentication by followers and checked in a
leader. If the revision in RequestHeader is lower than the leader's
authStore revision, it means a sequence like above happened. In such a
case, the state machine returns auth.ErrAuthRevisionObsolete. The
error code lets nodes retry their requests.

The second one, the case of serializable range and txn, is more
subtle, because these requests are processed directly in the follower. The
TOCTOU problem can be caused by a sequence like the one below:
1. A serializable request from client CA is processed in follower FA. At
   first, FA looks up the username (call it U) and its permission
   before actually accessing the KV.
2. Another request from client CB is processed in follower FB and
   forwarded to the leader. The cluster including FA now commits a log
   entry of the request from CB. Assume the request changed the
   permission or password of U.
3. Now the serializable request from CA is accessing the KV. Even if
   the access is allowed at the point of 1, it can now be invalid
   because of the change introduced in 2.

For avoiding the above sequence, this commit lets the functions of
serializable requests (EtcdServer.Range() and EtcdServer.Txn())
compare the revision in the request header with the latest revision of
authStore after the actual access. If the saved revision is lower than
the latest one, it means the permission may have changed. Although this
can introduce false positives (e.g. changing another user's password),
it prevents the TOCTOU problem. This idea is an implementation of
Anthony's comment:
https://github.com/coreos/etcd/pull/5739#issuecomment-228128254
2016-07-20 14:39:04 +09:00
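
A minimal sketch of the leader-side guard (names follow this commit message's description, not necessarily the final code):

    import "errors"

    var ErrAuthRevisionObsolete = errors.New("auth: revision in header is obsolete")

    // The follower stamps the auth revision it authenticated against; the
    // leader rejects the request if auth state advanced in between, and
    // the client retries after re-authenticating.
    func checkAuthRevision(headerRevision, storeRevision uint64) error {
        if headerRevision < storeRevision {
            return ErrAuthRevisionObsolete
        }
        return nil
    }
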
8abae076d1 rpctypes, clientv3: retry RPC on EtcdStopped
Fixes #5983
2016-07-19 18:29:12 -07:00
6e290abee2 Merge pull request #5998 from heyitsanthony/tls-timeout-conn
transport: wrap timeout listener with tls listener
2016-07-19 17:42:05 -07:00
99e0655c2f transport: wrap timeout listener with tls listener
Otherwise the listener will return timeoutConns, causing a type
assertion to tls.Conn in net/http to fail, so http.Request.TLS is never set.
2016-07-19 16:47:14 -07:00
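
The wrapping order, sketched with the standard library (the inner listener stands in for etcd's timeout listener):

    import (
        "crypto/tls"
        "net"
    )

    // Put the timeout listener inside and the TLS listener outside, so
    // Accept() yields *tls.Conn and net/http can set http.Request.TLS.
    func tlsTimeoutListener(inner net.Listener, cfg *tls.Config) net.Listener {
        return tls.NewListener(inner, cfg)
    }
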
80c2e4098d Merge pull request #5997 from xiang90/l_r
raft: fix readindex
2016-07-19 15:25:53 -07:00
1c5754f02d raft: fix readindex 2016-07-19 15:00:58 -07:00
e5f0cdcc69 Merge pull request #5984 from xiang90/p_b
grpcproxy: do not send duplicate events to watchers
2016-07-19 12:47:23 -07:00
783675f91c grpcproxy: do not send duplicate events to watchers 2016-07-19 10:14:57 -07:00
d3d954d659 Merge pull request #5990 from xiang90/wcr
clientv3/integration: fix race in TestWatchCompactRevision
2016-07-19 10:08:28 -07:00
e177d9eda2 clientv3/integration: fix race in TestWatchCompactRevision 2016-07-19 09:31:44 -07:00
1bf78476cf Merge pull request #5980 from xiang90/gateway
etcdmain: gateway supports dns srv discovery
2016-07-18 22:10:54 -07:00
c7c5cd324b etcdmain: gateway supports dns srv discovery 2016-07-18 21:53:24 -07:00
fcc96c9ebd Merge pull request #5976 from heyitsanthony/fix-kadc
integration: drain keepalives in TestLeaseKeepAliveCloseAfterDisconnectRevoke
2016-07-18 20:21:44 -07:00
d914502090 Merge pull request #5978 from heyitsanthony/fix-compactor
compactor: make event ordering well-defined in TestPeriodicPause
2016-07-18 20:06:57 -07:00
27a30768e1 integration: drain keepalives in TestLeaseKeepAliveCloseAfterDisconnectRevoke
Fixes #5900
2016-07-18 19:45:59 -07:00
a1d823c2aa compactor: make event ordering well-defined in TestPeriodicPause
Fixes #5847
2016-07-18 19:45:31 -07:00
a61862acc7 Merge pull request #5977 from xiang90/b_proxy
grpcproxy: return interface
2016-07-18 19:12:43 -07:00
5cccb49498 Merge pull request #5979 from heyitsanthony/unix-embed
embed: support unix sockets
2016-07-18 17:05:58 -07:00
5271cf0160 grpcproxy: return interface 2016-07-18 16:47:58 -07:00
8d897fd51f integration: use unix sockets in TestEmbedEtcd
Was getting tcp port conflicts in Semaphore even after assigning unique ports.

Fixes #5953
2016-07-18 16:42:08 -07:00
e177f391f2 embed: support unix peers 2016-07-18 16:41:41 -07:00
32ed0aa0b3 Merge pull request #5626 from gyuho/stresser
etcd-tester: stress with range, delete
2016-07-18 15:26:34 -07:00
969bcd282b etcd-tester: stress with range, delete 2016-07-18 15:17:08 -07:00
7fbc1e39a6 Merge pull request #5973 from heyitsanthony/purge-test
fileutil: rework purge tests so they don't poll
2016-07-18 14:59:19 -07:00
7bfe75cbf3 Merge pull request #5963 from xiang90/p_filter
grpcproxy: add filter to watcher
2016-07-18 14:56:10 -07:00
3a5e418ff9 Merge pull request #5974 from xiang90/a_proxy
grpcproxy: add auth
2016-07-18 14:55:13 -07:00
cae56f583e Merge pull request #5975 from bts/restrict-channel-types-in-demo
contrib/raftexample: Restrict commit/error channel types in raftNode
2016-07-18 14:54:56 -07:00
e1892e264d grpcproxy: add auth 2016-07-18 14:26:22 -07:00
851d69181d Merge pull request #5972 from xiang90/m_proxy
grpcproxy: add maintenance proxy
2016-07-18 14:24:22 -07:00
b86e723107 contrib/raftexample: Restrict channel types 2016-07-18 17:19:54 -04:00
c920ce0453 fileutil: rework purge tests so they don't poll
Fixes #5966
2016-07-18 14:19:09 -07:00
fd24340903 grpcproxy: add maintenance proxy 2016-07-18 13:31:03 -07:00
58aa3483c3 grpcproxy: add filter to watcher 2016-07-18 13:02:34 -07:00
6dbdf6e55f Merge pull request #5958 from xiang90/lease_proxy
*: add lease proxy
2016-07-18 12:57:14 -07:00
3f74e9db0d *: add lease proxy 2016-07-18 12:06:59 -07:00
b61f882635 Merge pull request #5962 from xiang90/c_p
*: add cluster proxy
2016-07-18 11:55:35 -07:00
1c8b30dbdb Merge pull request #5957 from heyitsanthony/wait-panic
testutil, clientv3: wait for panics in txn tests to complete
2016-07-18 11:18:23 -07:00
dc80ae86d9 Merge pull request #5969 from gyuho/vendor-fix
*: fix 'gogo/protobuf' compatibility issue
2016-07-18 10:56:34 -07:00
8893ab0198 Merge pull request #5965 from endocode/kayrus/build_env
build: allow to build outside the etcd directory
2016-07-18 10:36:27 -07:00
984badeb03 testutil, clientv3: wait for panics in txn tests to complete
Fixes #5901
2016-07-18 09:37:33 -07:00
50be793f09 *: regenerate proto 2016-07-18 09:33:32 -07:00
e7c1594c82 vendor: update 'gogo/protobuf' 2016-07-18 09:33:09 -07:00
6e53f75092 scripts: update gogo/protobuf, use 'gofast' plugin
- Fix https://github.com/coreos/etcd/issues/5942
- Partial fix for https://github.com/coreos/etcd/issues/5865
2016-07-18 09:31:27 -07:00
cab2e45319 build: allow to build outside the etcd directory
Also added a GOPATH hack which allows building without setting any GOPATH
env var. Just run the build script once golang is installed.
2016-07-18 17:40:08 +02:00
336e4f2f28 Merge pull request #5960 from xiang90/a_i
etcdserver: set applied index correctly
2016-07-16 19:56:27 -07:00
d9e939d5d1 Merge pull request #5961 from heyitsanthony/test-e2e-unsupported
e2e: run e2e tests on unsupported architectures
2016-07-16 17:12:04 -07:00
52764f1e5a Merge pull request #5959 from heyitsanthony/build-same-place-xarch
build: build cross-compiled binaries in bin/ by default
2016-07-16 17:11:57 -07:00
bdfbd26e94 *: add cluster proxy 2016-07-16 12:15:32 -07:00
2d761d64a4 etcdserver: set applied index correctly 2016-07-16 11:44:18 -07:00
884452c403 e2e: run e2e tests on unsupported architectures 2016-07-16 10:30:19 -07:00
cb9ee7320b build: build cross-compiled binaries in bin/ by default
Otherwise GOARCH=386 PASSES="build integration" ./test fails on amd64
because the e2e tests can't find the binaries. Added a BINDIR option
for writing the build output somewhere else, in case it's needed.
2016-07-16 10:21:25 -07:00
331ec82400 Merge pull request #5955 from gyuho/port
integration: new ports for embed test
2016-07-15 20:19:33 -07:00
4a5795b55f integration: new ports for embed test 2016-07-15 16:47:32 -07:00
04155423f5 Merge pull request #5956 from xiang90/fix_renew
*: fix issue found in fast lease renew
2016-07-15 15:53:59 -07:00
4835322aa1 Merge pull request #5954 from heyitsanthony/fix-embed-cfg-validate
embed: fix nil dereference on error to set up initial cluster
2016-07-15 15:41:40 -07:00
b26f1bb2b6 Merge pull request #5951 from gyuho/vendor
*: update grpc-gateway and its import paths
2016-07-15 15:33:18 -07:00
93e3112471 Merge pull request #5910 from xiang90/grpc_proxy
grpcproxy: initial watch proxy
2016-07-15 15:12:21 -07:00
3839a55910 *: fix issue found in fast lease renew 2016-07-15 15:07:15 -07:00
34602b87ec embed: fix nil dereference on error to set up initial cluster 2016-07-15 14:43:00 -07:00
5f3aa43899 grpcproxy: initial watch proxy 2016-07-15 14:30:45 -07:00
ecebe7b979 vendor: change to 'grpc-ecosystem' from 'gengo' 2016-07-15 13:29:05 -07:00
5b92e17e86 *: regenerate proto files 2016-07-15 13:24:19 -07:00
4a7b730e69 scripts: update genproto with grpc-ecosystem 2016-07-15 13:21:41 -07:00
4ec94989cf Documentation: change to grpc-ecosystem 2016-07-15 12:11:30 -07:00
b2b98399fb embed: change import path to 'grpc-ecosystem' 2016-07-15 12:10:38 -07:00
1ba7bb237f Merge pull request #5939 from heyitsanthony/x86-unit-test
travis: unit test on 386
2016-07-15 09:43:31 -07:00
38d38f2635 travis: unit test on 386 2016-07-14 20:23:35 -07:00
1dfafd8fe0 test: separate phases of tests into configurable passes 2016-07-14 20:23:35 -07:00
b50d2395fd Merge pull request #5949 from heyitsanthony/fix-functester-failfast
etcd-tester: add FailFast(false) to grpc calls
2016-07-14 19:50:21 -07:00
0419d3ecf7 etcd-tester: add FailFast(false) to grpc calls 2016-07-14 19:16:41 -07:00
3e21d9f023 Merge pull request #5945 from Infra-Red/patch-bench
hack/benchmark: remove deprecated boom parameter
2016-07-14 18:44:48 -07:00
bf0be0fe5e Merge pull request #5948 from heyitsanthony/upgrade-grpc-cred-clobber
vendor: update grpc
2016-07-14 18:44:35 -07:00
b3f8490660 integration: add FailFast(false) to failing tests 2016-07-14 17:58:58 -07:00
d8f0ef0e80 clientv3: use grpc.FailFast(false) for all calls 2016-07-14 17:58:58 -07:00
d9a8a326df vendor: update grpc
Fixes #5871
2016-07-14 17:58:58 -07:00
07ed4da2ff integration: test grpc error equivalence with Error() 2016-07-14 17:58:49 -07:00
51c5c307fa rpctypes: test error equivalence with Error()
grpc.Errorf() now returns *rpcError, which makes comparisons shallow.
2016-07-14 15:59:06 -07:00
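Since grpc.Errorf now returns a freshly allocated *rpcError, pointer comparison between a received error and a sentinel fails even when the messages match; comparing Error() strings works. A minimal sketch of the idea — the error value here is an illustrative stand-in, not the real rpctypes sentinel:

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative stand-in; the real sentinel errors live in rpctypes and
// are built with grpc.Errorf.
var errEmptyKey = errors.New("etcdserver: key is not provided")

func main() {
	// An error received over gRPC is a fresh allocation, so pointer
	// equality fails even though the message is identical.
	received := errors.New("etcdserver: key is not provided")
	fmt.Println(received == errEmptyKey)                 // false: shallow compare
	fmt.Println(received.Error() == errEmptyKey.Error()) // true: compare messages
}
```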
ee78f590ba hack/benchmark: remove deprecated boom parameter
The benchmark script fails to run when the deprecated -readall flag is provided.
2016-07-14 23:49:53 +03:00
575682f593 Merge pull request #5944 from heyitsanthony/mvcc-failpoints
build, backend: add backend commit failpoints
2016-07-14 13:08:25 -07:00
14d7dc940d Merge pull request #5943 from gyuho/pause-before-compaction-2
etcd-tester: pause before compaction, fix races, cleanups
2016-07-14 13:02:53 -07:00
ba2725c2d0 build, backend: add backend commit failpoints 2016-07-14 12:26:35 -07:00
ceb9fe4822 etcd-tester: stop stress before compact, fix races
fix race condition between stresser cancel, start
2016-07-14 12:16:42 -07:00
b0f2e5e64a Merge pull request #5927 from xiang90/pacing
*: deny proposals when there is a huge gap between apply/commit
2016-07-14 11:47:53 -07:00
8e59fb749c etcd-tester: increase default qps, fix cleanup 2016-07-14 11:20:16 -07:00
c0cc161ba8 Merge pull request #5937 from coreos/revert-5876-manual
Revert "Dockerfile: use 'ENTRYPOINT' instead of 'CMD'"
2016-07-14 10:35:21 -07:00
27b03f0ed5 *: deny proposals when there is a huge gap between apply/commit 2016-07-14 10:02:55 -07:00
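The pacing change above can be sketched as a simple guard on the difference between the commit and apply indexes; the threshold constant and function names below are hypothetical stand-ins, not etcd's actual identifiers:

```go
package main

import (
	"errors"
	"fmt"
)

// maxGap is a placeholder threshold; the real constant lives in etcdserver.
const maxGap = 5000

var errTooManyRequests = errors.New("etcdserver: too many requests")

// acceptProposal sketches the back-pressure check: when the applier has
// fallen too far behind the commit index, refuse new proposals rather
// than letting the raft log grow without bound.
func acceptProposal(committedIndex, appliedIndex uint64) error {
	if committedIndex > appliedIndex && committedIndex-appliedIndex > maxGap {
		return errTooManyRequests
	}
	return nil
}

func main() {
	fmt.Println(acceptProposal(10500, 5000)) // gap 5500 > 5000: rejected
	fmt.Println(acceptProposal(5100, 5000))  // gap 100: accepted (<nil>)
}
```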
35d379b052 Merge pull request #5934 from heyitsanthony/fix-publish-race
e2e: wait for every etcd server to publish to cluster
2016-07-13 19:22:08 -07:00
2f7da66d43 Revert "Dockerfile: use 'ENTRYPOINT' instead of 'CMD'" 2016-07-13 19:06:20 -07:00
6b487fb199 e2e: wait for every etcd server to publish to cluster
If etcdctl accesses the cluster before all members are published, it
will get an "unsupported protocol scheme" error. To fix, wait for both
the capabilities and published message.

Fixes #5824
2016-07-13 17:01:43 -07:00
3d109be3b4 Merge pull request #3621 from yichengq/usage-stderr
etcdmain: print usage in stderr when flag.Parse fails
2016-07-13 16:56:26 -07:00
071eac3838 Merge pull request #5918 from xiang90/init
etcdmain: only get initial cluster setting if the member is not initi…
2016-07-13 16:28:32 -07:00
c7881fddc2 Merge pull request #5933 from xiang90/doc
doc: better link for embed etcd
2016-07-13 16:27:32 -07:00
9bcf5a83fb doc: better link for embed etcd 2016-07-13 16:24:56 -07:00
c32dd164fe Merge pull request #5932 from heyitsanthony/nuke-etcdctl-v0.4
etcdctl: remove v0.4 support
2016-07-13 16:18:33 -07:00
8368e6a992 embed: only get initial cluster setting if the member is not init 2016-07-13 16:03:27 -07:00
97ff1abb3e etcdctl: remove 0.4 import command 2016-07-13 15:30:37 -07:00
439b96f090 etcdctl: remove 0.4 peer syncing 2016-07-13 15:26:25 -07:00
06fd46f835 Merge pull request #5928 from xiang90/err_code
rpctypes: use permission deny code for permission deny error
2016-07-13 15:00:02 -07:00
41a98dbd66 Merge pull request #5925 from heyitsanthony/embed-etcdmain
embeddable etcdmain
2016-07-13 13:51:19 -07:00
f6ef6157cc Documentation: link embedding etcd into docs 2016-07-13 13:28:11 -07:00
c0299ca6f4 integration: test embedded etcd 2016-07-13 10:40:03 -07:00
f4f33ea767 etcdmain, embed: export Config and StartEtcd into embed/
Lets programs embed etcd.

Fixes #5430
2016-07-13 10:40:03 -07:00
81d5ae3ce1 rpctypes: use permission deny code for permission deny error 2016-07-13 10:32:10 -07:00
7114a27345 Merge pull request #5922 from xiang90/l_b
tools/benchmark: add benchmark for lease keepalive
2016-07-12 10:48:04 -07:00
8273e1c07e tools/benchmark: add benchmark for lease keepalive 2016-07-12 10:40:56 -07:00
a243064e76 Merge pull request #5924 from xiang90/rm_04
integration: remove upgrade test for etcd0.4
2016-07-12 10:36:16 -07:00
6392ef5c44 integration: remove upgrade test for etcd0.4 2016-07-12 10:13:03 -07:00
7432e9fbe9 Merge pull request #5809 from swingbach/master
raft: make leader transferring workable when quorum check is on
2016-07-12 09:46:18 -07:00
b9f6de9277 Merge pull request #5895 from smallfish/master
etcdserver/api/v2http, Documentation: fix debug pprof index missing trailing /
2016-07-12 07:10:53 -07:00
b2c1112288 Merge pull request #5921 from xiang90/r
raft: do not change RecentActive when resetState for progress
2016-07-12 06:54:14 -07:00
c36a40ca15 raft: introduce top-level context in message struct 2016-07-12 16:14:06 +08:00
eb08f2274e raft: do not change RecentActive when resetState for progress 2016-07-11 21:12:14 -07:00
cc26f2c889 Merge pull request #5913 from rboyer/correct-sample-peer-config-file
Correct security configuration for peers in sample config file.
2016-07-11 19:41:41 -07:00
4bc29e2b9c Merge pull request #5902 from mitake/bench-auth
tools: add --user for auth in benchmarks
2016-07-11 18:38:07 -07:00
8a21be721f Merge pull request #5919 from gyuho/raft-lead
raft: set leader id in stepFollower
2016-07-11 18:34:07 -07:00
7edb6bcbe1 etcd: correct security configuration for peers in sample config file 2016-07-11 20:19:27 -05:00
6f3a40cb53 raft: set leader id in stepFollower
The follower has already set its leader ID from
previous append messages from the leader, but
to be consistent, this adds a line to set its
leader ID from the leader snapshot message.
2016-07-11 16:37:31 -07:00
ea0a569c4d Merge pull request #5917 from xiang90/rm
*: remove unnecessary data upgrade code
2016-07-11 15:38:07 -07:00
f65e75e4b3 *: remove unnecessary data upgrade code 2016-07-11 15:11:56 -07:00
c0f292e6b8 Merge pull request #5916 from xiang90/ctl
etcdctl: only takes 127.0.0.1:2379 as default endpoint
2016-07-11 13:34:55 -07:00
55ca788efe etcdctl: only takes 127.0.0.1:2379 as default endpoint 2016-07-11 13:28:02 -07:00
2b6f04a58e Merge pull request #5906 from gyuho/release-test
e2e: add basic upgrade tests
2016-07-11 12:42:54 -07:00
a3347e3e68 Merge pull request #5915 from heyitsanthony/doc-new-platform
Documentation: clarify support policy
2016-07-11 12:41:38 -07:00
5b0d52f8c3 Documentation: clarify support policy 2016-07-11 12:10:17 -07:00
e8e561e8f5 e2e: add basic upgrade tests 2016-07-11 11:28:04 -07:00
e5b5cf02d3 test: add upgrade test flag 2016-07-11 11:10:24 -07:00
0d9b6ba0ab raft: fix a few problems 2016-07-11 14:59:53 +08:00
da44e17b58 Merge pull request #5908 from gyuho/raft-cleanup
raft: remove unnecessary type-cast, else-clause
2016-07-10 16:42:42 -07:00
c396b6aaaa raft: remove unnecessary type-cast, else-clause 2016-07-09 22:01:19 -07:00
c47689d98f Merge pull request #5689 from mitake/skip-apply
RFC: etcdserver, pkg: skip needless log entry applying
2016-07-10 01:23:35 +09:00
474eb1b44b Merge pull request #5890 from jaredeh/32bit
Easy 32bit architecture fixes
2016-07-08 13:36:52 -07:00
f78d4713ea etcdserver: atomic access alignment
Most fields accessed with sync/atomic functions are 64-bit aligned, but a couple
are not. This makes the comments out of date and therefore misleading.

Affected fields reordered, comments scrubbed and updated.
2016-07-08 11:20:47 -07:00
90889ebc0f raftpb: atomic access alignment
The Entry struct has misaligned fields that are accessed atomically. The
misalignment is caused by the EntryType enum, which the Protocol Buffers
spec forces to be a 32-bit int.

Moving the order of the fields without renumbering them in the .proto file
seems to align the Go structure without changing the wire format.
2016-07-08 11:13:53 -07:00
df94f58462 raft: atomic access alignment
The relevant structures are properly aligned; however, there is no comment
highlighting the need to keep them aligned, as is present elsewhere in the
codebase.

Adding a note to keep alignment, in line with similar comments in the codebase.
2016-07-08 11:05:41 -07:00
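The alignment issue behind these three commits: sync/atomic's 64-bit operations require 8-byte alignment, and on 32-bit architectures Go only guarantees that for the first word of an allocated struct. A small illustrative sketch, with made-up field names:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"unsafe"
)

// counters keeps its atomically accessed 64-bit fields first: on 32-bit
// architectures only the first word of an allocated struct is guaranteed
// to be 64-bit aligned, so a leading 32-bit field would misalign them and
// make atomic.AddUint64 panic there.
type counters struct {
	applied uint64 // accessed with sync/atomic; must stay 64-bit aligned
	term    uint64 // accessed with sync/atomic; must stay 64-bit aligned
	kind    int32  // plain field; placed after the 64-bit fields
}

func main() {
	var c counters
	atomic.AddUint64(&c.applied, 1)
	fmt.Println(atomic.LoadUint64(&c.applied)) // 1
	fmt.Println(unsafe.Offsetof(c.applied))    // 0: first word, aligned
}
```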
eded9f5f84 Merge pull request #5887 from gyuho/rate-limiting-stresser
etcd-tester: add rate limits to stresser
2016-07-08 09:32:25 -07:00
a153448b84 tools: add --user for auth in benchmarks
This commit adds --user for auth in benchmarks. Its purpose is
to measure the overhead of v3 API authentication. Of course, the given
user must be granted permission on the target keys before benchmarking.

Example of a case with no authentication:
% ./benchmark range k1
bench with linearizable range
 10000 / 10000 Booooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo! 100.00%2m10s

Summary:
  Total:        130.1850 secs.
  Slowest:      0.4071 secs.
  Fastest:      0.0064 secs.
  Average:      0.0130 secs.
  Stddev:       0.0079 secs.
  Requests/sec: 76.8138

Response time histogram:
  0.006 [1]     |
  0.046 [9990]  |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.087 [3]     |
  0.127 [0]     |
  0.167 [3]     |
  0.207 [2]     |
  0.247 [0]     |
  0.287 [0]     |
  0.327 [0]     |
  0.367 [0]     |
  0.407 [1]     |

Latency distribution:
  10% in 0.0076 secs.
  25% in 0.0086 secs.
  50% in 0.0113 secs.
  75% in 0.0146 secs.
  90% in 0.0209 secs.
  95% in 0.0272 secs.
  99% in 0.0344 secs.

Example of a case with authentication:
% ./benchmark --user=u1:p range k1
bench with linearizable range
 10000 / 10000 Booooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo! 100.00%2m11s

Summary:
  Total:        131.4923 secs.
  Slowest:      0.1637 secs.
  Fastest:      0.0065 secs.
  Average:      0.0131 secs.
  Stddev:       0.0070 secs.
  Requests/sec: 76.0501

Response time histogram:
  0.006 [1]     |
  0.022 [9075]  |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.038 [875]   |∎∎∎
  0.054 [36]    |
  0.069 [5]     |
  0.085 [1]     |
  0.101 [1]     |
  0.117 [0]     |
  0.132 [0]     |
  0.148 [5]     |
  0.164 [1]     |

Latency distribution:
  10% in 0.0076 secs.
  25% in 0.0087 secs.
  50% in 0.0114 secs.
  75% in 0.0150 secs.
  90% in 0.0215 secs.
  95% in 0.0272 secs.
  99% in 0.0347 secs.

It seems that the current auth mechanism does not introduce visible overhead.
2016-07-08 16:53:05 +09:00
abb20ec51f etcdserver, pkg: skip needless log entry applying
This commit lets etcdserver skip needless log entry applying. If the
result of applying a log entry isn't required by the node (the client
that issued the request isn't talking to the node) and the operation
has no side effects, applying can be skipped.

It would help reduce disk I/O on followers and be useful for
a cluster that processes many serializable gets.
2016-07-08 15:16:45 +09:00
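A hedged sketch of the skip condition described above; the types and names are illustrative, not etcdserver's internals:

```go
package main

import "fmt"

// request is a hypothetical stand-in for a raft log entry's payload.
type request struct {
	isSerializableGet bool // a side-effect-free read
}

// canSkipApply sketches the idea: an entry may be skipped when applying it
// changes no state (a serializable get) and no client on this node is
// waiting for its result (i.e., the request originated elsewhere).
func canSkipApply(r request, hasLocalWaiter bool) bool {
	return r.isSerializableGet && !hasLocalWaiter
}

func main() {
	// A read proposed by another member: nothing to store, nobody local waits.
	fmt.Println(canSkipApply(request{isSerializableGet: true}, false)) // true
	// The same read issued by a local client must still be applied and answered.
	fmt.Println(canSkipApply(request{isSerializableGet: true}, true)) // false
}
```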
7c39f41e7c etcd-tester: add rate limiter to stresser 2016-07-07 21:55:12 -07:00
b970e03e19 Merge pull request #5446 from gyuho/gateway_log
tcpproxy: log proxy start
2016-07-07 21:38:01 -07:00
ce8900e3b4 Merge pull request #5899 from heyitsanthony/qos-tuning
Documentation: tuning advice for peer prioritization
2016-07-07 19:48:07 -07:00
e6d15b966c etcdserver/api/v2http, Documentation: fix debug pprof index missing trailing / 2016-07-08 10:21:05 +08:00
6f0a67603a Documentation: tuning advice for peer prioritization 2016-07-07 19:14:31 -07:00
a2760c9f49 Merge pull request #5888 from heyitsanthony/v2-one-shot
client: make set/delete one shot operations
2016-07-07 16:41:01 -07:00
c30f89f1d0 client/integration: test v2 client one shot operations 2016-07-07 15:58:58 -07:00
946b3cce1d client: make set/delete one shot operations
Old behavior would retry set and delete even if there's an error. This
can lead to the client returning an error for deleting twice, instead
of returning an error for an indeterminate state.

Fixes #5832
2016-07-07 15:51:08 -07:00
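The one-shot semantics can be sketched as a wrapper that surfaces an ambiguous transport error instead of retrying a non-idempotent operation; everything here is a standalone illustration, not the v2 client's API:

```go
package main

import (
	"errors"
	"fmt"
)

var errTimeout = errors.New("request timed out; state indeterminate")

// deleteOnce sketches one-shot semantics: attempt the request once and
// return the transport error as-is, rather than retrying a delete that
// may already have been applied (which would then report "key not found").
func deleteOnce(try func() error) error {
	if err := try(); err != nil {
		return err // report the indeterminate state; do not retry
	}
	return nil
}

func main() {
	calls := 0
	err := deleteOnce(func() error {
		calls++
		return errTimeout // the server may or may not have deleted the key
	})
	fmt.Println(calls, err) // 1 request timed out; state indeterminate
}
```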
4f2da16d82 Merge pull request #5897 from xiang90/lock
v3rpc: lock progress and prevKV map correctly
2016-07-07 15:20:37 -07:00
427496ebb8 v3rpc: lock progress and prevKV map correctly 2016-07-07 15:01:05 -07:00
dc2dced129 Merge pull request #5892 from heyitsanthony/auth-cheap-bcrypt
auth: cheap bcrypt for tests
2016-07-07 09:04:57 -07:00
b6a497214e Merge pull request #5883 from westhood/master
clientv3: fix sync base
2016-07-07 07:09:55 -07:00
0b0cbaac09 clientv3: use cheap bcrypt for ExampleAuth and use embedded auth api
Fixes #5783
2016-07-06 23:35:14 -07:00
d4e0e419dc auth: set bcrypt cost to minimum for test cases
DefaultCost makes auth tests 10x more expensive than MinCost.

Fixes #5851
2016-07-06 23:35:06 -07:00
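For reference, bcrypt work grows exponentially with the cost parameter, which is why tests that create many users are far cheaper at MinCost while production stays at DefaultCost. A small program comparing hashing time with golang.org/x/crypto/bcrypt:

```go
package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/bcrypt"
)

// hashTime measures how long one password hash takes at a given bcrypt cost.
func hashTime(cost int) time.Duration {
	start := time.Now()
	if _, err := bcrypt.GenerateFromPassword([]byte("test-password"), cost); err != nil {
		panic(err)
	}
	return time.Since(start)
}

func main() {
	fmt.Println("MinCost:    ", hashTime(bcrypt.MinCost))     // cost 4
	fmt.Println("DefaultCost:", hashTime(bcrypt.DefaultCost)) // cost 10
}
```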
16b0c1d1e1 clientv3: fix sync base
It is not correct to use WithPrefix; the range end changes in every
internal batch.
2016-07-07 12:02:53 +08:00
88a9cf2cea clientv3: add public function to get prefix range end 2016-07-07 10:35:40 +08:00
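The prefix-to-range-end conversion behind these two commits can be sketched as incrementing the last byte of the prefix that is below 0xff; this mirrors what the public helper does, though the function below is a standalone illustration:

```go
package main

import "fmt"

// prefixRangeEnd returns the key that ends the range [prefix, end) covering
// every key with the given prefix: copy the prefix and increment its last
// byte that is below 0xff, truncating everything after it.
func prefixRangeEnd(prefix string) string {
	end := []byte(prefix)
	for i := len(end) - 1; i >= 0; i-- {
		if end[i] < 0xff {
			end[i]++
			return string(end[:i+1])
		}
	}
	// The prefix is all 0xff bytes; "\x00" means "to the end of the keyspace".
	return "\x00"
}

func main() {
	fmt.Printf("%q\n", prefixRangeEnd("my-service/")) // "my-service0" ('/'+1 == '0')
	fmt.Printf("%q\n", prefixRangeEnd("foo"))         // "fop"
}
```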
c4a280e511 Merge pull request #5881 from goby/master
hack: fix etcd execute path in k8s example
2016-07-06 17:04:16 -07:00
244b1d7d20 tcpproxy: add start logging line 2016-07-06 14:21:26 -07:00
1c9e0a0e33 Merge pull request #5886 from heyitsanthony/health-check-str
rafthttp: make health check meaning clearer
2016-07-06 11:32:27 -07:00
4db8f018cb Merge pull request #5885 from xiang90/fix_snap_test
etcdserver: fix TestSnap
2016-07-06 11:21:13 -07:00
3a080143a7 rafthttp: make health check meaning clearer 2016-07-06 10:31:13 -07:00
3451623c71 etcdserver: fix TestSnap 2016-07-06 10:30:15 -07:00
8c4df9a96f hack: fix etcd execute path in k8s example
Change /etcd to /usr/local/bin/etcd
2016-07-06 15:07:00 +08:00
234c30c061 Merge pull request #5880 from xiang90/put_prev
add options to return prev_kv
2016-07-05 21:03:56 -07:00
7ec822107a *: add put prevkv 2016-07-05 20:45:01 -07:00
12bf1a3382 *: rename preserveKVs to prevKv 2016-07-05 20:45:01 -07:00
a78cdeae81 Merge pull request #5877 from heyitsanthony/rsa-fixtures
test: certificate fixes for fedora
2016-07-05 19:34:23 -07:00
929d6ab62c Merge pull request #5850 from xiang90/get_o_kv
*: support get-old-kv in watch
2016-07-05 16:37:24 -07:00
c853704ac9 *: support get-old-kv in watch 2016-07-05 16:17:09 -07:00
c642430fae integration: use RSA certs for testing
Some systems don't support EC due to patent issues, but the tests
should still work.

Fixes #5744
2016-07-05 13:21:21 -07:00
066afd6abd Merge pull request #5876 from gyuho/manual
Dockerfile: use 'ENTRYPOINT' instead of 'CMD'
2016-07-05 11:40:02 -07:00
f19cef960e Dockerfile: use 'ENTRYPOINT' instead of 'CMD'
Use ENTRYPOINT so people can pass flags to etcd
without specifying the binary.

Signed-off-by: Secret <haichuang221@163.com>
2016-07-05 11:28:19 -07:00
beab76c7a9 Merge pull request #5872 from gyuho/build_doc
Documentation: add instruction on vendoring, build
2016-07-05 10:43:28 -07:00
ff5ddd0909 Documentation: add instruction on vendoring, build
Addressing https://github.com/coreos/etcd/issues/5857#issuecomment-230174840.
2016-07-05 09:55:44 -07:00
660f0fcc3d Merge pull request #5873 from gyuho/raft_updates
raft: fix minor grammar, remove TODO
2016-07-05 09:49:08 -07:00
8c71eb71df Merge pull request #5867 from vmatekole/master
Documentation: Example config amendment
2016-07-05 09:17:22 -07:00
c52bf1ac5d Documentation: Example config amendment 2016-07-05 16:27:47 +02:00
9e0de02fde raft: fix minor grammar, remove TODO
- test 'Term' panic cases (remove TODO)
- fix minor grammar in 'Node' godoc
2016-07-05 07:21:52 -07:00
c7dd74d8d3 Merge pull request #5869 from gyuho/raft_log_test
raft: minor updates and clean up in log.go
2016-07-04 21:51:13 -07:00
881a120453 raft: minor updates and clean up in log.go
- remove redundant test case in log_test.go
- fix test case comment ('equal or larger')
- lastnewi after matching index and term
2016-07-04 16:52:17 -07:00
b566ca225c Merge pull request #5855 from heyitsanthony/fix-windows-wal-init
wal: release wal locks before renaming directory on init
2016-07-03 19:21:23 -07:00
8d99a666f9 Merge pull request #5854 from xiang90/r_f
raft: add features section to readme file
2016-07-03 18:00:31 -07:00
c76dcc5190 raft: add features section to readme file 2016-07-03 17:59:59 -07:00
df61322e5b Merge pull request #5862 from xiang90/fix_sn
etcdserver: commit before sending snapshot
2016-07-03 15:30:20 -07:00
7cb61af245 Merge pull request #5864 from gyuho/raft_cleanup
raft: remove unnecessary reflect.DeepEqual in test
2016-07-03 14:03:08 -07:00
70bf768005 Merge pull request #5861 from xiang90/fix_watch
v3rpc: do not panic on user error for watch
2016-07-03 13:56:33 -07:00
8a8a8253fa etcdserver: commit before sending snapshot 2016-07-03 13:54:05 -07:00
9b5e99efe0 raft: remove unnecessary reflect.DeepEqual in test 2016-07-03 13:42:26 -07:00
13a4056327 v3rpc: do not panic on user error for watch 2016-07-03 08:57:48 -07:00
5991209c2d wal: release wal locks before renaming directory on init
Fixes #5852
2016-07-02 12:14:37 -07:00
7cc4596ebd Merge pull request #5849 from heyitsanthony/fix-compactor-test-races
compactor: make tests deterministic
2016-07-01 23:07:36 -07:00
9405583745 Merge pull request #5830 from heyitsanthony/functest-failpoints
functional-tester: failpoint support
2016-07-01 16:58:36 -07:00
1af7c400d1 compactor: make tests deterministic
Fixes #5847
2016-07-01 16:50:05 -07:00
a5f043c85b etcd-tester: add failpoint cases
Fixes #5754
2016-07-01 15:31:49 -07:00
c6a3048e81 Merge pull request #5848 from gyuho/cluster_version
etcdserver/api: print only major.minor version API
2016-07-01 15:19:08 -07:00
ba023e539a etcdserver/api: print only major.minor version API
Before

2016-07-01 14:57:50.927170 I | api: enabled capabilities for version 3.0.0

After

2016-07-01 14:57:50.927170 I | api: enabled capabilities for version 3.0
2016-07-01 14:58:06 -07:00
c8c5f41a01 Merge pull request #5836 from xiang90/better_d_prev
*: support return prev deleted kv
2016-07-01 14:43:33 -07:00
8d4701bb1d etcd-agent: enable GOFAIL_HTTP endpoint 2016-07-01 14:39:48 -07:00
40c4a7894d *: support return prev deleted kv 2016-07-01 14:01:48 -07:00
ab6f49dc67 Merge pull request #5844 from gyuho/go_version
*: test, docs with go1.6+
2016-07-01 11:27:51 -07:00
a53f538f27 *: test, docs with go1.6+
etcd v3 uses http/2, which doesn't work well with go1.5
2016-07-01 11:16:38 -07:00
d163aefc1a Merge pull request #5823 from davygeek/configcheck
*: fixed some warnings
2016-07-01 10:28:59 -07:00
bf0ab6a2df Merge pull request #5843 from gyuho/manual
Documentation: fix typo in api_grpc_gateway.md
2016-07-01 10:21:45 -07:00
c7a0830a62 Merge pull request #5841 from heyitsanthony/fix-be-semver
etcdserver: exit on missing backend only if semver is >= 3.0.0
2016-07-01 10:14:05 -07:00
b3464a918b Documentation: fix typo in api_grpc_gateway.md 2016-07-01 10:07:14 -07:00
b7f5f8fc99 etcdserver: exit on missing backend only if semver is >= 3.0.0 2016-07-01 09:10:01 -07:00
581f847e06 Merge pull request #5829 from gyuho/ftest
etcd-tester: fix slow leader with injectLatency
2016-06-30 13:46:13 -07:00
0d44947c11 etcd-tester: fix slow leader with injectLatency 2016-06-30 13:41:27 -07:00
78b143b800 Merge pull request #5828 from gyuho/docker
release: fix Dockerfile etcd binary paths
2016-06-30 12:25:51 -07:00
a2f6ec3128 release: fix Dockerfile etcd binary paths
The release script uses the binary files in 'release/image-docker',
not the ones in "bin/". Tested with the v3.0.0 release.
2016-06-30 11:47:33 -07:00
c68d60c99f Merge pull request #5827 from gyuho/version
*: clean up beta in docs, bump to 3.0.0+git
2016-06-30 10:03:28 -07:00
4cd834910e version: bump to v3.0.0+git 2016-06-30 09:43:10 -07:00
cb1a1426b1 *: remove beta from docs 2016-06-30 09:39:52 -07:00
04a9141e45 Merge pull request #5825 from sofuture/jeff/tls-setup-fixes
hack/tls-setup minor fixes
2016-06-30 09:29:41 -07:00
548360b140 Merge pull request #5826 from gyuho/back
Doc: fix typo in dev-guide.md
2016-06-30 09:20:07 -07:00
8ce7481a7f Doc: fix typo in dev-guide.md 2016-06-30 09:14:03 -07:00
74d75a96eb hack: install goreman in tls-setup example 2016-06-30 10:11:47 -06:00
0938c861f0 hack: add tls-setup example generated certs to gitignore 2016-06-30 10:11:28 -06:00
8c96d2573f *: fixed some warnings 2016-06-30 23:13:46 +08:00
ad556b7e7d Merge pull request #5821 from davygeek/discovery
discovery: Uniform code style
2016-06-30 07:19:08 -07:00
ea0eab84a4 discovery: Uniform code style 2016-06-30 22:00:01 +08:00
5f4d1c8891 Merge pull request #5819 from gyuho/tester_fix
etcd-tester: handle error in RevHash
2016-06-29 19:36:08 -07:00
dc49016987 etcd-tester: handle error in RevHash 2016-06-29 19:31:45 -07:00
0e137e21bc Merge pull request #5817 from gyuho/ctl_consistent
ctlv3: make flags, commands formats consistent
2016-06-29 16:15:54 -07:00
9b47ca5972 ctlv3: make flags, commands formats consistent
1. Capitalize first letter
2. Remove period at the end

(followed the pattern in the Linux coreutils man pages)
2016-06-29 15:52:06 -07:00
3b80df7f4e Merge pull request #5814 from heyitsanthony/functest-refactor
etcd-tester: refactor cluster and failure code
2016-06-29 15:25:55 -07:00
b7d0497c47 Merge pull request #5807 from xiang90/gproxy
*: initial implementation of grpc-proxy
2016-06-29 13:28:57 -07:00
150321f5ac Merge pull request #5815 from gyuho/raft_test_fix
raft: give correct offset in unstable test
2016-06-29 13:27:14 -07:00
2cc2372165 raft: give correct offset in unstable test
`unstable.entries[i] has raft log position i+unstable.offset`

So, this fixes some test cases by giving them correct
offsets.
2016-06-29 12:29:36 -07:00
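The invariant quoted above — `unstable.entries[i]` has raft log position `i+unstable.offset` — is easy to get wrong in tests; a minimal sketch of the index mapping, with simplified types:

```go
package main

import "fmt"

// entry is a minimal stand-in for a raft log entry.
type entry struct{ Index uint64 }

// unstable mirrors the invariant from the commit message:
// entries[i] has raft log position i+offset.
type unstable struct {
	offset  uint64
	entries []entry
}

// at returns the entry stored at raft log index idx.
func (u *unstable) at(idx uint64) entry {
	return u.entries[idx-u.offset]
}

func main() {
	u := unstable{
		offset:  5,
		entries: []entry{{Index: 5}, {Index: 6}, {Index: 7}},
	}
	fmt.Println(u.at(6).Index) // 6: entries[6-5] == entries[1]
}
```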
6d8c647db8 *: initial implementation of grpc-proxy 2016-06-29 12:06:04 -07:00
5f459a64ce etcd-tester: refactor cluster member handling 2016-06-29 11:25:33 -07:00
402df5bd03 etcd-tester: refactor failure code to reduce code duplication 2016-06-29 11:03:34 -07:00
63f78bf7c8 etcd-tester: refactor round loop 2016-06-29 11:03:34 -07:00
66d195ff75 Merge pull request #5813 from nekto0n/encoder-pointer
rafthttp: use pointers to avoid extra copies upon message encoding
2016-06-29 10:01:27 -07:00
ff908b4ba8 Merge pull request #5812 from heyitsanthony/test-merge-base
test: use merge-base for commit title checking
2016-06-29 09:50:26 -07:00
dc218fb41d test: use merge-base for commit title checking
Otherwise, it will compare a branch with a forked master against the upstream master.
2016-06-29 09:28:53 -07:00
fd5bc21522 rafthttp: use pointers to avoid extra copies upon message encoding 2016-06-29 21:17:18 +05:00
7f3b2e23a4 Merge pull request #5811 from davygeek/golintnotice
client: follow golint notice change errors.New to fmt.Errorf
2016-06-29 09:12:49 -07:00
e020b2a228 raft: make leader transferring workable when quorum check is on 2016-06-29 18:24:58 +08:00
8e9097d0c0 Merge pull request #5748 from mitake/auth-disable
disabling auth in v3 API
2016-06-28 22:32:44 -07:00
3b91648070 client: follow golint notice: change errors.New to fmt.Errorf and use 'var eps []string' instead of 'make([]string, 0)' 2016-06-29 13:21:49 +08:00
a4667cb863 Merge pull request #5700 from mqliang/proxy-compact
proxy: implement compaction function
2016-06-28 20:49:01 -07:00
2e2f405b1e proxy: replace c with client to improve readability 2016-06-29 11:30:03 +08:00
f28a87d835 proxy: implement compaction 2016-06-29 11:28:10 +08:00
83d9ce3d7c Merge pull request #5803 from gyuho/manual
raft: simplify truncateAndAppend
2016-06-28 20:07:35 -07:00
8f8ff4d519 Merge pull request #5805 from gyuho/ftest
etcd-tester: match ErrTimeout in stresser
2016-06-28 19:51:21 -07:00
745e1e2cf9 e2e: enhance the test case of auth disabling 2016-06-29 11:31:42 +09:00
15f2fd0726 etcd-tester: match ErrTimeout in stresser
Fix https://github.com/coreos/etcd/issues/5804.
2016-06-28 19:20:28 -07:00
df31eab136 raft: simplify truncateAndAppend
truncateAndAppend does not need the value of 'after' minus one
2016-06-28 18:53:12 -07:00
66107b8653 auth: invalidate every token in disabling auth 2016-06-29 10:31:46 +09:00
8e825de35f Merge pull request #5513 from vikstrous/clustererror
improve error message for ClusterError
2016-06-28 18:15:35 -07:00
8216fdc59f Merge pull request #5799 from xiang90/grpcnaming
clientv3: add grpc naming resolver
2016-06-28 17:51:23 -07:00
4f57bb313f clientv3: add grpc naming resolver 2016-06-28 17:06:58 -07:00
1b2f025414 Merge pull request #5801 from heyitsanthony/fix-watch-closeerr-race
clientv3: only use return closeErr when donec is closed
2016-06-28 17:05:48 -07:00
1db4ee8c61 clientv3: only use closeErr on watch when donec is closed
Fixes #5800
2016-06-28 16:14:09 -07:00
81322b498e Merge pull request #5798 from gyuho/tests_more
Test other projects to ensure they compile
2016-06-28 15:22:32 -07:00
ede0b584b8 test: test builds on other projects 2016-06-28 15:03:19 -07:00
da180e0790 Merge pull request #5796 from gyuho/bench
benchmark: fix Compact request
2016-06-28 14:14:07 -07:00
bc6d7659af Merge pull request #5795 from xiang90/filter
*: support watch with filters
2016-06-28 14:07:12 -07:00
ae057ec508 benchmark: fix Compact request 2016-06-28 13:58:28 -07:00
dced92f8bd *: support watch with filters
Now users can filter events by type. The API is also extensible;
it might make sense for the proxy to filter out events based on
more expensive/customized filters.
2016-06-28 13:46:57 -07:00
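A usage sketch of the new server-side filters through clientv3, assuming an etcd endpoint reachable on localhost:2379:

```go
package main

import (
	"fmt"
	"time"

	"github.com/coreos/etcd/clientv3"
	"golang.org/x/net/context"
)

func main() {
	// A sketch assuming an etcd server on localhost:2379.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Filter out PUT events on the server side: the watcher only sees deletes.
	wch := cli.Watch(context.Background(), "foo", clientv3.WithFilterPut())
	for resp := range wch {
		for _, ev := range resp.Events {
			fmt.Printf("%s %q\n", ev.Type, ev.Kv.Key) // e.g. DELETE "foo"
		}
	}
}
```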
5f1c763993 Merge pull request #5553 from swingbach/master
raft: implemented read-only query when quorum check is on
2016-06-28 12:47:43 -07:00
ddffdc3e37 Merge pull request #5725 from mitake/auth-not-enabled
auth, etcdserver: let Authenticate() fail if auth isn't enabled
2016-06-28 12:34:54 -07:00
ec232ec9d8 Merge pull request #5787 from heyitsanthony/compact-resp
clientv3, ctl3, clientv3/integration: add compact response to compact
2016-06-28 12:27:42 -07:00
38035c8c13 Merge pull request #5794 from xiang90/fix_c
mvcc: do not hash consistent index
2016-06-28 12:25:32 -07:00
ef9754910e mvcc: do not hash consistent index 2016-06-28 09:36:26 -07:00
1c25aa6c48 clientv3, ctl3, clientv3/integration: add compact response to compact 2016-06-28 09:32:31 -07:00
0cd5c658aa Merge pull request #5788 from gyuho/retry_tester
etcd-tester: match ErrTimeoutDueToLeaderFail
2016-06-27 21:16:07 -07:00
ac68f70843 etcd-tester: match ErrTimeoutDueToLeaderFail
Stressers on followers should retry when a failure is injected into
their leader.
2016-06-27 20:48:06 -07:00
0faae33ace raft: implemented read-only query when quorum check is on 2016-06-28 10:52:53 +08:00
8df37d53d6 auth, etcdserver: let Authenticate() fail if auth isn't enabled
A successful Authenticate() would be confusing and make troubleshooting
harder if auth isn't enabled in a cluster.
2016-06-26 22:49:23 -07:00
da85108ca2 client: improve error message for ClusterError 2016-06-22 13:13:12 -07:00
34b0736f2c mvcc: reduce number of allocs in watchableStore when there are no watchers
When there are no watchers, the number of allocations made while handling
a PUT operation can be reduced by exiting early.
2016-05-11 00:51:00 -07:00
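The early-exit pattern behind this commit, in a standalone sketch with simplified types rather than the mvcc internals:

```go
package main

import "fmt"

// notify sketches the optimization: when nobody watches, skip building the
// per-event notification structures entirely, avoiding the allocations that
// would otherwise happen on every PUT.
func notify(watchers map[string][]chan string, key, value string) {
	if len(watchers) == 0 {
		return // fast path: no watchers, no allocations
	}
	for _, ch := range watchers[key] {
		select {
		case ch <- value:
		default: // a slow watcher must not block the write path
		}
	}
}

func main() {
	notify(nil, "foo", "bar") // no watchers: returns immediately
	fmt.Println("put handled without watcher overhead")
}
```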
7ba352d9ca etcdmain: print usage in stderr when flag.Parse fails
This fits the convention for stderr.

I still let `etcd --version` and `etcd --help` print to stdout
because when users explicitly ask for version/help docs, they expect to see
the docs in stdout.

Ref:
http://www.jstorimer.com/blogs/workingwithcode/7766119-when-to-use-stderr-instead-of-stdout
2015-09-30 14:19:39 -07:00
571 changed files with 22341 additions and 19880 deletions

.gitignore vendored
View File

@ -1,5 +1,6 @@
/coverage
/gopath
/gopath.proto
/go-bindata
/machine*
/bin
@ -10,3 +11,4 @@
/hack/insta-discovery/.env
*.test
tools/functional-tester/docker/bin
hack/tls-setup/certs

View File

@ -4,8 +4,7 @@ go_import_path: github.com/coreos/etcd
sudo: false
go:
- 1.5
- 1.6
- 1.7.1
- tip
env:
@ -15,25 +14,19 @@ env:
- TARGET=amd64
- TARGET=arm64
- TARGET=arm
- TARGET=ppc64le
- TARGET=386
matrix:
fast_finish: true
allow_failures:
- go: tip
exclude:
- go: 1.5
env: TARGET=arm
- go: 1.5
env: TARGET=ppc64le
- go: 1.6
env: TARGET=arm64
- go: tip
env: TARGET=arm
- go: tip
env: TARGET=arm64
- go: tip
env: TARGET=ppc64le
env: TARGET=386
addons:
apt:
@ -49,12 +42,19 @@ before_install:
# disable godep restore override
install:
- pushd cmd/ && go get -t -v ./... && popd
- pushd cmd/etcd && go get -t -v ./... && popd
script:
- >
if [ "${TARGET}" == "amd64" ]; then
GOARCH="${TARGET}" ./test;
else
GOARCH="${TARGET}" ./build;
fi
case "${TARGET}" in
amd64)
GOARCH=amd64 ./test
;;
386)
GOARCH=386 PASSES="build unit" ./test
;;
*)
# test building out of gopath
GO_BUILD_FLAGS="-a -v" GOPATH="" GOARCH="${TARGET}" ./build
;;
esac

View File

@ -1,8 +1,9 @@
FROM alpine:latest
ADD bin/etcd /usr/local/bin/
ADD bin/etcdctl /usr/local/bin/
ADD etcd /usr/local/bin/
ADD etcdctl /usr/local/bin/
RUN mkdir -p /var/etcd/
RUN mkdir -p /var/lib/etcd/
EXPOSE 2379 2380

View File

@ -25,13 +25,13 @@ curl -L http://localhost:2379/v3alpha/kv/range \
## Swagger
Generated [Swapper][swagger] API definitions can be found at [rpc.swagger.json][swagger-doc].
Generated [Swagger][swagger] API definitions can be found at [rpc.swagger.json][swagger-doc].
[api-ref]: ./api_reference_v3.md
[go-client]: https://github.com/coreos/etcd/tree/master/clientv3
[etcdctl]: https://github.com/coreos/etcd/tree/master/etcdctl
[grpc]: http://www.grpc.io/
[grpc-gateway]: https://github.com/gengo/grpc-gateway
[grpc-gateway]: https://github.com/grpc-ecosystem/grpc-gateway
[json-mapping]: https://developers.google.com/protocol-buffers/docs/proto3#json
[swagger]: http://swagger.io/
[swagger-doc]: apispec/swagger/rpc.swagger.json

View File

@ -59,6 +59,7 @@ for grpc-gateway
| LeaseGrant | LeaseGrantRequest | LeaseGrantResponse | LeaseGrant creates a lease which expires if the server does not receive a keepAlive within a given time to live period. All keys attached to the lease will be expired and deleted if the lease expires. Each expired key generates a delete event in the event history. |
| LeaseRevoke | LeaseRevokeRequest | LeaseRevokeResponse | LeaseRevoke revokes a lease. All keys attached to the lease will expire and be deleted. |
| LeaseKeepAlive | LeaseKeepAliveRequest | LeaseKeepAliveResponse | LeaseKeepAlive keeps the lease alive by streaming keep alive requests from the client to the server and streaming keep alive responses from the server to the client. |
| LeaseTimeToLive | LeaseTimeToLiveRequest | LeaseTimeToLiveResponse | LeaseTimeToLive retrieves lease information. |
@ -427,6 +428,7 @@ Empty field.
| ----- | ----------- | ---- |
| key | key is the first key to delete in the range. | bytes |
| range_end | range_end is the key following the last key to delete for the range [key, range_end). If range_end is not given, the range is defined to contain only the key argument. If range_end is '\0', the range is all keys greater than or equal to the key argument. | bytes |
| prev_kv | If prev_kv is set, etcd gets the previous key-value pairs before deleting it. The previous key-value pairs will be returned in the delete response. | bool |
@ -436,6 +438,7 @@ Empty field.
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| deleted | deleted is the number of keys deleted by the delete range request. | int64 |
| prev_kvs | if prev_kv is set in the request, the previous key-value pairs will be returned. | (slice of) mvccpb.KeyValue |
@ -508,6 +511,27 @@ Empty field.
##### message `LeaseTimeToLiveRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| ID | ID is the lease ID for the lease. | int64 |
| keys | keys is true to query all the keys attached to this lease. | bool |
##### message `LeaseTimeToLiveResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| ID | ID is the lease ID from the keep alive request. | int64 |
| TTL | TTL is the remaining TTL in seconds for the lease; the lease will expire in under TTL+1 seconds. | int64 |
| grantedTTL | GrantedTTL is the initial granted time in seconds upon lease creation/renewal. | int64 |
| keys | Keys is the list of keys attached to this lease. | (slice of) bytes |
##### message `Member` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
@ -591,6 +615,7 @@ Empty field.
| key | key is the key, in bytes, to put into the key-value store. | bytes |
| value | value is the value, in bytes, to associate with the key in the key-value store. | bytes |
| lease | lease is the lease ID to associate with the key in the key-value store. A lease value of 0 indicates no lease. | int64 |
| prev_kv | If prev_kv is set, etcd gets the previous key-value pair before changing it. The previous key-value pair will be returned in the put response. | bool |
@ -599,6 +624,7 @@ Empty field.
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| prev_kv | if prev_kv is set in the request, the previous key-value pair will be returned. | mvccpb.KeyValue |
@ -613,6 +639,12 @@ Empty field.
| sort_order | sort_order is the order for returned sorted results. | SortOrder |
| sort_target | sort_target is the key-value field to use for sorting. | SortTarget |
| serializable | serializable sets the range request to use serializable member-local reads. Range requests are linearizable by default; linearizable requests have higher latency and lower throughput than serializable requests but reflect the current consensus of the cluster. For better performance, in exchange for possible stale reads, a serializable range request is served locally without needing to reach consensus with other nodes in the cluster. | bool |
| keys_only | keys_only when set returns only the keys and not the values. | bool |
| count_only | count_only when set returns only the count of the keys in the range. | bool |
| min_mod_revision | min_mod_revision is the lower bound for returned key mod revisions; all keys with lesser mod revisions will be filtered away. | int64 |
| max_mod_revision | max_mod_revision is the upper bound for returned key mod revisions; all keys with greater mod revisions will be filtered away. | int64 |
| min_create_revision | min_create_revision is the lower bound for returned key create revisions; all keys with lesser create revisions will be filtered away. | int64 |
| max_create_revision | max_create_revision is the upper bound for returned key create revisions; all keys with greater create revisions will be filtered away. | int64 |
@ -621,8 +653,9 @@ Empty field.
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| kvs | kvs is the list of key-value pairs matched by the range request. | (slice of) mvccpb.KeyValue |
| kvs | kvs is the list of key-value pairs matched by the range request. kvs is empty when count is requested. | (slice of) mvccpb.KeyValue |
| more | more indicates if there are more keys to return in the requested range. | bool |
| count | count is set to the number of keys within the range when requested. | int64 |
@ -732,6 +765,8 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
| range_end | range_end is the end of the range [key, range_end) to watch. If range_end is not given, only the key argument is watched. If range_end is equal to '\0', all keys greater than or equal to the key argument are watched. | bytes |
| start_revision | start_revision is an optional revision to watch from (inclusive). No start_revision is "now". | int64 |
| progress_notify | progress_notify is set so that the etcd server will periodically send a WatchResponse with no events to the new watcher if there are no recent events. It is useful when clients wish to recover a disconnected watcher starting from a recent known revision. The etcd server may decide how often it will send notifications based on current load. | bool |
| filters | filter out put event. filter out delete event. filters filter the events at server side before it sends back to the watcher. | (slice of) FilterType |
| prev_kv | If prev_kv is set, created watcher gets the previous KV before the event happens. If the previous KV is already compacted, nothing will be returned. | bool |
@ -764,6 +799,7 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
| ----- | ----------- | ---- |
| type | type is the kind of event. If type is a PUT, it indicates new data has been stored to the key. If type is a DELETE, it indicates the key was deleted. | EventType |
| kv | kv holds the KeyValue for the event. A PUT event contains current kv pair. A PUT event with kv.Version=1 indicates the creation of a key. A DELETE/EXPIRE event contains the deleted key with its modification revision set to the revision of deletion. | KeyValue |
| prev_kv | prev_kv holds the key-value pair before the event happens. | KeyValue |
@ -789,6 +825,22 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
##### message `LeaseInternalRequest` (lease/leasepb/lease.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| LeaseTimeToLiveRequest | | etcdserverpb.LeaseTimeToLiveRequest |
##### message `LeaseInternalResponse` (lease/leasepb/lease.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| LeaseTimeToLiveResponse | | etcdserverpb.LeaseTimeToLiveResponse |
##### message `Permission` (auth/authpb/auth.proto)
Permission is a single entity

File diff suppressed because it is too large

View File

@ -4,5 +4,5 @@ For the most part, the etcd project is stable, but we are still moving fast! We
## The current experimental API/features are:
- v3 auth API: expect to be stale in 3.1 release
- etcd gateway: expect to be stable in 3.1 release
- v3 auth API: expect to be stable in 3.1 release
- etcd gateway: expect to be stable in 3.1 release

View File

@ -0,0 +1,65 @@
# gRPC naming and discovery
etcd provides a gRPC resolver to support an alternative name system that fetches endpoints from etcd for discovering gRPC services. The underlying mechanism is based on watching updates to keys prefixed with the service name.
## Using etcd discovery with go-grpc
The etcd client provides a gRPC resolver for resolving gRPC endpoints with an etcd backend. The resolver is initialized with an etcd client and given a target for resolution:
```go
import (
"github.com/coreos/etcd/clientv3"
etcdnaming "github.com/coroes/etcd/clientv3/naming"
"google.golang.org/grpc"
)
...
cli, cerr := clientv3.NewFromURL("http://localhost:2379")
r := &etcdnaming.GRPCResolver{Client: cli}
b := grpc.RoundRobin(r)
conn, gerr := grpc.Dial("my-service", grpc.WithBalancer(b))
```
## Managing service endpoints
The etcd resolver treats all keys under the prefix of the resolution target following a "/" (e.g., "my-service/") with JSON-encoded go-grpc `naming.Update` values as potential service endpoints. Endpoints are added to the service by creating new keys and removed from the service by deleting keys.
### Adding an endpoint
New endpoints can be added to the service through `etcdctl`:
```sh
ETCDCTL_API=3 etcdctl put my-service/1.2.3.4 '{"Addr":"1.2.3.4","Metadata":"..."}'
```
The etcd client's `GRPCResolver.Update` method can also register new endpoints with a key matching the `Addr`:
```go
r.Update(context.TODO(), "my-service", naming.Update{Op: naming.Add, Addr: "1.2.3.4", Metadata: "..."})
```
### Deleting an endpoint
Hosts can be deleted from the service through `etcdctl`:
```sh
ETCDCTL_API=3 etcdctl del my-service/1.2.3.4
```
The etcd client's `GRPCResolver.Update` method also supports deleting endpoints:
```go
r.Update(context.TODO(), "my-service", naming.Update{Op: naming.Delete, Addr: "1.2.3.4"})
```
### Registering an endpoint with a lease
Registering an endpoint with a lease ensures that if the host can't maintain a keepalive heartbeat (e.g., its machine fails), it will be removed from the service:
```sh
lease=`ETCDCTL_API=3 etcdctl lease grant 5 | cut -f2 -d' '`
ETCDCTL_API=3 etcdctl put --lease=$lease my-service/1.2.3.4 '{"Addr":"1.2.3.4","Metadata":"..."}'
ETCDCTL_API=3 etcdctl lease keep-alive $lease
```

View File

@ -4,28 +4,51 @@ Users mostly interact with etcd by putting or getting the value of a key. This s
By default, etcdctl talks to the etcd server with the v2 API for backward compatibility. For etcdctl to speak to etcd using the v3 API, the API version must be set to version 3 via the `ETCDCTL_API` environment variable.
``` bash
```bash
export ETCDCTL_API=3
```
## Find versions
The etcdctl version and the server API version can be useful in finding the appropriate commands for performing various operations on etcd.
Here is the command to find the versions:
```bash
$ etcdctl version
etcdctl version: 3.1.0-alpha.0+git
API version: 3.1
```
## Write a key
Applications store keys into the etcd cluster by writing to keys. Every stored key is replicated to all etcd cluster members through the Raft protocol to achieve consistency and reliability.
Here is the command to set the value of key `foo` to `bar`:
``` bash
```bash
$ etcdctl put foo bar
OK
```
Also, a key can be set for a specified interval of time by attaching a lease to it.
Here is the command to set the value of key `foo1` to `bar1` for 10s:
```bash
$ etcdctl put foo1 bar1 --lease=1234abcd
OK
```
Note: The lease id `1234abcd` in the above command refers to the id returned on creating a lease of 10s. This id can then be attached to the key.
## Read keys
Applications can read values of keys from an etcd cluster. Queries may read a single key, or a range of keys.
Applications can read values of keys from an etcd cluster. Queries may read a single key, or a range of keys.
Suppose the etcd cluster has stored the following keys:
```
```bash
foo = bar
foo1 = bar1
foo3 = bar3
@ -39,6 +62,21 @@ foo
bar
```
Here is the command to read the value of key `foo` in hex format:
```bash
$ etcdctl get foo --hex
\x66\x6f\x6f # Key
\x62\x61\x72 # Value
```
Here is the command to read only the value of key `foo`:
```bash
$ etcdctl get foo --print-value-only
bar
```
Here is the command to range over the keys from `foo` to `foo9`:
```bash
@ -51,6 +89,16 @@ foo3
bar3
```
Here is the command to range over the keys from `foo` to `foo9` limiting the number of results to 2:
```bash
$ etcdctl get foo foo9 --limit 2
foo
bar
foo1
bar1
```
## Read past version of keys
Applications may want to read superseded versions of a key. For example, an application may wish to roll back to an old configuration by accessing an earlier version of a key. Alternatively, an application may want a consistent view over multiple keys through multiple requests by accessing key history.
@ -58,11 +106,11 @@ Since every modification to the etcd cluster key-value store increments the glob
Suppose an etcd cluster already has the following keys:
``` bash
$ etcdctl put foo bar # revision = 2
$ etcdctl put foo1 bar1 # revision = 3
$ etcdctl put foo bar_new # revision = 4
$ etcdctl put foo1 bar1_new # revision = 5
```bash
foo = bar # revision = 2
foo1 = bar1 # revision = 3
foo = bar_new # revision = 4
foo1 = bar1_new # revision = 5
```
Here is an example to access the past versions of keys:
@ -93,10 +141,46 @@ bar
$ etcdctl get --rev=1 foo foo9 # access the versions of keys at revision 1
```
## Read keys which are greater than or equal to the byte value of the specified key
Applications may want to read keys which are greater than or equal to the byte value of the specified key.
Suppose an etcd cluster already has the following keys:
```bash
a = 123
b = 456
z = 789
```
Here is the command to read keys which are greater than or equal to the byte value of key `b` :
```bash
$ etcdctl get --from-key b
b
456
z
789
```
## Delete keys
Applications can delete a key or a range of keys from an etcd cluster.
Suppose an etcd cluster already has the following keys:
```bash
foo = bar
foo1 = bar1
foo3 = bar3
zoo = val
zoo1 = val1
zoo2 = val2
a = 123
b = 456
z = 789
```
Here is the command to delete key `foo`:
```bash
@ -111,6 +195,29 @@ $ etcdctl del foo foo9
2 # two keys are deleted
```
Here is the command to delete key `zoo` with the deleted key value pair returned:
```bash
$ etcdctl del --prev-kv zoo
1 # one key is deleted
zoo # deleted key
val # the value of the deleted key
```
Here is the command to delete keys having prefix as `zoo`:
```bash
$ etcdctl del --prefix zoo
2 # two keys are deleted
```
Here is the command to delete keys which are greater than or equal to the byte value of key `b` :
```bash
$ etcdctl del --from-key b
2 # two keys are deleted
```
## Watch key changes
Applications can watch on a key or a range of keys to monitor for any updates.
@ -118,38 +225,86 @@ Applications can watch on a key or a range of keys to monitor for any updates.
Here is the command to watch on key `foo`:
```bash
$ etcdctl watch foo
$ etcdctl watch foo
# in another terminal: etcdctl put foo bar
PUT
foo
bar
```
Here is the command to watch on key `foo` in hex format:
```bash
$ etcdctl watch foo --hex
# in another terminal: etcdctl put foo bar
PUT
\x66\x6f\x6f # Key
\x62\x61\x72 # Value
```
Here is the command to watch on a range key from `foo` to `foo9`:
```bash
$ etcdctl watch foo foo9
# in another terminal: etcdctl put foo bar
PUT
foo
bar
# in another terminal: etcdctl put foo1 bar1
PUT
foo1
bar1
```
Here is the command to watch on keys having prefix `foo`:
```bash
$ etcdctl watch --prefix foo
# in another terminal: etcdctl put foo bar
PUT
foo
bar
# in another terminal: etcdctl put fooz1 barz1
PUT
fooz1
barz1
```
Here is the command to watch on multiple keys `foo` and `zoo`:
```bash
$ etcdctl watch -i
$ watch foo
$ watch zoo
# in another terminal: etcdctl put foo bar
PUT
foo
bar
# in another terminal: etcdctl put zoo val
PUT
zoo
val
```
## Watch historical changes of keys
Applications may want to watch for historical changes of keys in etcd. For example, an application may wish to receive all the modifications of a key; if the application stays connected to etcd, then `watch` is good enough. However, if the application or etcd fails, a change may happen during the failure, and the application will not receive the update in real time. To guarantee the update is delivered, the application must be able to watch for historical changes to keys. To do this, an application can specify a historical revision on a watch, just like reading past version of keys.
Suppose we finished the following sequence of operations:
``` bash
etcdctl put foo bar # revision = 2
etcdctl put foo1 bar1 # revision = 3
etcdctl put foo bar_new # revision = 4
etcdctl put foo1 bar1_new # revision = 5
```bash
$ etcdctl put foo bar # revision = 2
OK
$ etcdctl put foo1 bar1 # revision = 3
OK
$ etcdctl put foo bar_new # revision = 4
OK
$ etcdctl put foo1 bar1_new # revision = 5
OK
```
Here is an example to watch the historical changes:
```bash
# watch for changes on key `foo` since revision 2
$ etcdctl watch --rev=2 foo
@ -159,7 +314,9 @@ bar
PUT
foo
bar_new
```
```bash
# watch for changes on key `foo` since revision 3
$ etcdctl watch --rev=3 foo
PUT
@ -167,6 +324,19 @@ foo
bar_new
```
Here is an example to watch only from the last historical change:
```bash
# watch for changes on key `foo` and return last revision value along with modified value
$ etcdctl watch --prev-kv foo
# in another terminal: etcdctl put foo bar_latest
PUT
foo # key
bar_new # last value of foo key before modification
foo # key
bar_latest # value of foo key after modification
```
## Compacted revisions
As we mentioned, etcd keeps revisions so that applications can read past versions of keys. However, to avoid accumulating an unbounded amount of history, it is important to compact past revisions. After compacting, etcd removes historical revisions, releasing resources for future use. All superseded data with revisions before the compacted revision will be unavailable.
@ -182,13 +352,20 @@ $ etcdctl get --rev=4 foo
Error: rpc error: code = 11 desc = etcdserver: mvcc: required revision has been compacted
```
Note: The current revision of the etcd server can be found by using the get command on any key (existent or non-existent) in JSON format. An example is shown below for mykey, which does not exist in the etcd server:
```bash
$ etcdctl get mykey -w=json
{"header":{"cluster_id":14841639068965178418,"member_id":10276657743932975437,"revision":15,"raft_term":4}}
```
## Grant leases
Applications can grant leases for keys from an etcd cluster. When a key is attached to a lease, its lifetime is bound to the lease's lifetime which in turn is governed by a time-to-live (TTL). Each lease has a minimum time-to-live (TTL) value specified by the application at grant time. The lease's actual TTL value is at least the minimum TTL and is chosen by the etcd cluster. Once a lease's TTL elapses, the lease expires and all attached keys are deleted.
Here is the command to grant a lease:
```
```bash
# grant a lease with 10 second TTL
$ etcdctl lease grant 10
lease 32695410dcc0ca06 granted with TTL(10s)
@ -204,7 +381,7 @@ Applications revoke leases by lease ID. Revoking a lease deletes all of its atta
Suppose we finished the following sequence of operations:
```
```bash
$ etcdctl lease grant 10
lease 32695410dcc0ca06 granted with TTL(10s)
$ etcdctl put --lease=32695410dcc0ca06 foo bar
@ -213,7 +390,7 @@ OK
Here is the command to revoke the same lease:
```
```bash
$ etcdctl lease revoke 32695410dcc0ca06
lease 32695410dcc0ca06 revoked
@ -227,17 +404,54 @@ Applications can keep a lease alive by refreshing its TTL so it does not expire.
Suppose we finished the following sequence of operations:
```
```bash
$ etcdctl lease grant 10
lease 32695410dcc0ca06 granted with TTL(10s)
```
Here is the command to keep the same lease alive:
```
$ etcdctl lease keep-alive 32695410dcc0ca0
lease 32695410dcc0ca0 keepalived with TTL(100)
lease 32695410dcc0ca0 keepalived with TTL(100)
lease 32695410dcc0ca0 keepalived with TTL(100)
```bash
$ etcdctl lease keep-alive 32695410dcc0ca06
lease 32695410dcc0ca06 keepalived with TTL(100)
lease 32695410dcc0ca06 keepalived with TTL(100)
lease 32695410dcc0ca06 keepalived with TTL(100)
...
```
## Get lease information
Applications may want to know about lease information, so that they can renew the lease or check whether it still exists or has expired. Applications may also want to know the keys to which a particular lease is attached.
Suppose we finished the following sequence of operations:
```bash
# grant a lease with 500 second TTL
$ etcdctl lease grant 500
lease 694d5765fc71500b granted with TTL(500s)
# attach key zoo1 to lease 694d5765fc71500b
$ etcdctl put zoo1 val1 --lease=694d5765fc71500b
OK
# attach key zoo2 to lease 694d5765fc71500b
$ etcdctl put zoo2 val2 --lease=694d5765fc71500b
OK
```
Here is the command to get information about the lease:
```bash
$ etcdctl lease timetolive 694d5765fc71500b
lease 694d5765fc71500b granted with TTL(500s), remaining(258s)
```
Here is the command to get information about the lease along with the keys attached to the lease:
```bash
$ etcdctl lease timetolive --keys 694d5765fc71500b
lease 694d5765fc71500b granted with TTL(500s), remaining(132s), attached keys([zoo2 zoo1])
# if the lease has expired or does not exist it will give the below response:
Error: etcdserver: requested lease not found
```

View File

@ -28,7 +28,7 @@ bar
## Local multi-member cluster
A Procfile is provided to easily set up a local multi-member cluster. Start a multi-member cluster with a few commands:
A `Procfile` at the base of this git repo is provided to easily set up a local multi-member cluster. To start a multi-member cluster go to the root of an etcd source tree and run:
```
# install goreman program to control Profile-based applications.
@ -37,7 +37,7 @@ $ goreman -f Procfile start
...
```
The started members listen on `localhost:12379`, `localhost:22379`, and `localhost:32379` for client requests respectively.
The started members listen on `localhost:2379`, `localhost:22379`, and `localhost:32379` for client requests respectively.
To interact with the started cluster by using etcdctl:
@ -49,12 +49,12 @@ $ etcdctl --write-out=table --endpoints=localhost:12379 member list
+------------------+---------+--------+------------------------+------------------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
+------------------+---------+--------+------------------------+------------------------+
| 8211f1d0f64f3269 | started | infra1 | http://127.0.0.1:12380 | http://127.0.0.1:12379 |
| 8211f1d0f64f3269 | started | infra1 | http://127.0.0.1:2380 | http://127.0.0.1:2379 |
| 91bc3c398fb3c146 | started | infra2 | http://127.0.0.1:22380 | http://127.0.0.1:22379 |
| fd422379fda50e48 | started | infra3 | http://127.0.0.1:32380 | http://127.0.0.1:32379 |
+------------------+---------+--------+------------------------+------------------------+
$ etcdctl --endpoints=localhost:12379 put foo bar
$ etcdctl put foo bar
OK
```
@ -64,10 +64,10 @@ To exercise etcd's fault tolerance, kill a member:
# kill etcd2
$ goreman run stop etcd2
$ etcdctl --endpoints=localhost:12379 put key hello
$ etcdctl put key hello
OK
$ etcdctl --endpoints=localhost:12379 get key
$ etcdctl get key
hello
# try to get key from the killed member

View File

@ -31,8 +31,8 @@ All releases version numbers follow the format of [semantic versioning 2.0.0](ht
## Write release note
- Write introduction for the new release. For example, what major bug we fix, what new features we introduce or what performance improvement we make.
- Write changelog for the last release. ChangeLog should be straightforward and easy to understand for the end-user.
- Put `[GH XXXX]` at the head of change line to reference Pull Request that introduces the change. Moreover, add a link on it to jump to the Pull Request.
- Find PRs with `release-note` label and explain them in `NEWS` file, as a straightforward summary of changes for end-users.
## Tag version
@ -47,7 +47,7 @@ All releases version numbers follow the format of [semantic versioning 2.0.0](ht
## Build release binaries and images
- Ensure `actool` is available, or installing it through `go get github.com/appc/spec/actool`.
- Ensure `acbuild` is available.
- Ensure `docker` is available.
Run release script in root directory:

View File

@ -11,7 +11,9 @@ The easiest way to get etcd is to use one of the pre-built release binaries whic
## Build the latest version
For those wanting to try the very latest version, build etcd from the `master` branch.
[Go](https://golang.org/) version 1.5+ is required to build the latest version of etcd.
[Go](https://golang.org/) version 1.6+ (with HTTP2 support) is required to build the latest version of etcd.
etcd vendors its dependencies for official release binaries, while making vendoring optional to avoid import conflicts.
The [`build` script][build-script] automatically includes the vendored dependencies from the [`cmd`][cmd-directory] directory.
Here are the commands to build an etcd binary from the `master` branch:
@ -26,7 +28,7 @@ $ echo $GOPATH
$ mkdir -p $GOPATH/src/github.com/coreos
$ cd $GOPATH/src/github.com/coreos
$ git clone github.com:coreos/etcd.git
$ git clone https://github.com/coreos/etcd.git
$ cd etcd
$ ./build
$ ./bin/etcd
@ -54,3 +56,6 @@ If OK is printed, then etcd is working!
[github-release]: https://github.com/coreos/etcd/releases/
[go]: https://golang.org/doc/install
[build-script]: ../build
[cmd-directory]: ../cmd

View File

@ -14,13 +14,17 @@ The easiest way to get started using etcd as a distributed key-value store is to
- [Interacting with etcd][interacting]
- [API references][api_ref]
- [gRPC gateway][api_grpc_gateway]
- [gRPC naming and discovery][grpc_naming]
- [Embedding etcd][embed_etcd]
- [Experimental features and APIs][experimental]
## Operating etcd clusters
Administrators who need to create reliable and scalable key-value stores for the developers they support should begin with a [cluster on multiple machines][clustering].
- [Setting up clusters][clustering]
- [Setting up etcd clusters][clustering]
- [Setting up etcd gateways][gateway]
- [Setting up etcd gRPC proxy (pre-alpha)][grpc_proxy]
- [Run etcd clusters inside containers][container]
- [Configuration][conf]
- [Security][security]
@ -56,8 +60,12 @@ To learn more about the concepts and internals behind etcd, read the following p
[data_model]: learning/data_model.md
[demo]: demo.md
[download_build]: dl_build.md
[embed_etcd]: https://godoc.org/github.com/coreos/etcd/embed
[grpc_naming]: dev-guide/grpc_naming.md
[failures]: op-guide/failures.md
[gateway]: op-guide/gateway.md
[glossary]: learning/glossary.md
[grpc_proxy]: op-guide/grpc_proxy.md
[interacting]: dev-guide/interacting_v3.md
[local_cluster]: dev-guide/local_cluster.md
[performance]: op-guide/performance.md

View File

@ -2,15 +2,17 @@
This document defines the various terms used in etcd documentation, command line and source code.
## Node
## Alarm
Node is an instance of raft state machine.
The etcd server raises an alarm whenever the cluster needs operator intervention to remain reliable.
It has a unique identification, and records other nodes' progress internally when it is the leader.
## Authentication
## Member
Authentication manages user access permissions for etcd resources.
Member is an instance of etcd. It hosts a node, and provides service to clients.
## Client
A client connects to the etcd cluster to issue service requests such as fetching key-value pairs, writing data, or watching for updates.
## Cluster
@ -18,6 +20,42 @@ Cluster consists of several members.
The node in each member follows the raft consensus protocol to replicate logs. The cluster receives proposals from members, commits them, and applies them to the local store.
## Compaction
Compaction discards all etcd event history and superseded keys prior to a given revision. It is used to reclaim storage space in the etcd backend database.
## Election
The etcd cluster holds elections among its members to choose a leader as part of the raft consensus protocol.
## Endpoint
A URL pointing to an etcd service or resource.
## Key
A user-defined identifier for storing and retrieving user-defined values in etcd.
## Key range
A set of keys containing either an individual key, a lexical half-open interval for all x such that a <= x < b, or all keys greater than or equal to a given key.
## Keyspace
The set of all keys in an etcd cluster.
## Lease
A short-lived renewable contract that deletes keys associated with it on its expiry.
## Member
A logical etcd server that participates in serving an etcd cluster.
## Modification Revision
The first revision to hold the last write to a given key.
## Peer
Peer is another member of the same cluster.
@ -26,10 +64,34 @@ Peer is another member of the same cluster.
A proposal is a request (for example, a write request or a configuration change request) that needs to go through the raft protocol.
## Client
## Quorum
Client is a caller of the cluster's HTTP API.
The number of active members needed for consensus to modify the cluster state. etcd requires a member majority to reach quorum; for example, a five-member cluster needs three active members.
## Machine (deprecated)
## Revision
The alternative of Member in etcd before 2.0
A 64-bit cluster-wide counter that is incremented each time the keyspace is modified.
## Role
A unit of permissions over a set of key ranges which may be granted to a set of users for access control.
## Snapshot
A point-in-time backup of the etcd cluster state.
## Store
The physical storage backing the cluster keyspace.
## Transaction
An atomically executed set of operations. All modified keys in a transaction share the same modification revision.
## Key Version
The number of writes to a key since it was created, starting at 1. The version of a nonexistent or deleted key is 0.
## Watcher
A client opens a watcher to observe updates on a given key range.

View File

@ -23,6 +23,7 @@
**Java libraries**
- [coreos/jetcd](https://github.com/coreos/jetcd) - Supports v3
- [boonproject/etcd](https://github.com/boonproject/boon/blob/master/etcd/README.md) - Supports v2, Async/Sync and waits
- [justinsb/jetcd](https://github.com/justinsb/jetcd)
- [diwakergupta/jetcd](https://github.com/diwakergupta/jetcd) - Supports v2
@ -61,6 +62,8 @@
**C++ libraries**
- [edwardcapriolo/etcdcpp](https://github.com/edwardcapriolo/etcdcpp) - Supports v2
- [suryanathan/etcdcpp](https://github.com/suryanathan/etcdcpp) - Supports v2 (with waits)
- [nokia/etcd-cpp-api](https://github.com/nokia/etcd-cpp-api) - Supports v2
- [nokia/etcd-cpp-apiv3](https://github.com/nokia/etcd-cpp-apiv3) - Supports v3
**Clojure libraries**
@ -80,6 +83,7 @@
**PHP Libraries**
- [linkorb/etcd-php](https://github.com/linkorb/etcd-php)
- [activecollab/etcd](https://github.com/activecollab/etcd)
**Haskell libraries**

View File

@ -70,6 +70,8 @@ All these metrics are prefixed with `etcd_network_`
|---------------------------|--------------------------------------------------------------------|---------------|
| peer_sent_bytes_total | The total number of bytes sent to the peer with ID `To`. | Counter(To) |
| peer_received_bytes_total | The total number of bytes received from the peer with ID `From`. | Counter(From) |
| peer_sent_failures_total | The total number of send failures from the peer with ID `To`. | Counter(To) |
| peer_received_failures_total | The total number of receive failures from the peer with ID `From`. | Counter(From) |
| peer_round_trip_time_seconds | Round-Trip-Time histogram between peers. | Histogram(To) |
| client_grpc_sent_bytes_total | The total number of bytes sent to grpc clients. | Counter |
| client_grpc_received_bytes_total| The total number of bytes received from grpc clients. | Counter |
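etcd serves these metrics in Prometheus text format on each member's client URL at `/metrics`. A short Go sketch that scrapes just the `etcd_network_` series; the `localhost:2379` endpoint is an assumption:
```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://localhost:2379/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Print only the etcd_network_* counters from the table above.
	s := bufio.NewScanner(resp.Body)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), "etcd_network_") {
			fmt.Println(s.Text())
		}
	}
	if err := s.Err(); err != nil {
		log.Fatal(err)
	}
}
```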

View File

@ -357,6 +357,8 @@ To help clients discover the etcd cluster, the following DNS SRV records are loo
If `_etcd-client-ssl._tcp.example.com` is found, clients will attempt to communicate with the etcd cluster over SSL/TLS.
If etcd is using TLS without a custom certificate authority, the discovery domain (e.g., example.com) must match the SRV record domain (e.g., infra1.example.com). This is to mitigate attacks that forge SRV records to point to a different domain; the domain would have a valid certificate under PKI but be controlled by an unknown third party.
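For reference, the SRV lookup clients perform can be reproduced with the Go resolver. A sketch, assuming the `example.com` discovery domain above:
```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Resolves _etcd-client._tcp.example.com, as etcd clients do for discovery.
	_, srvs, err := net.LookupSRV("etcd-client", "tcp", "example.com")
	if err != nil {
		log.Fatal(err)
	}
	for _, srv := range srvs {
		fmt.Printf("endpoint: %s:%d\n", srv.Target, srv.Port)
	}
}
```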
#### Create DNS SRV records
```
@ -454,6 +456,10 @@ $ etcd --name infra2 \
--listen-peer-urls http://10.0.1.12:2380
```
### Gateway
etcd gateway is a simple TCP proxy that forwards network data to the etcd cluster. Please read [gateway guide] for more information.
### Proxy
When the `--proxy` flag is set, etcd runs in [proxy mode][proxy]. This proxy mode only supports the etcd v2 API; there are no plans to support the v3 API. Instead, for v3 API support, there will be a new proxy with enhanced features following the etcd 3.0 release.
@ -470,3 +476,4 @@ To set up an etcd cluster with proxies of v2 API, please read the [clustering
[clustering_etcd2]: https://github.com/coreos/etcd/blob/release-2.3/Documentation/clustering.md
[security-guide]: security.md
[tls-setup]: /hack/tls-setup
[gateway]: gateway.md

View File

@ -276,7 +276,7 @@ Follow the instructions when using these flags.
## Profiling flags
### --enable-pprof
+ Enable runtime profiling data via HTTP server. Address is at client URL + "/debug/pprof"
+ Enable runtime profiling data via HTTP server. Address is at client URL + "/debug/pprof/"
+ default: false
[build-cluster]: clustering.md#static

View File

@ -2,13 +2,75 @@
The following guide shows how to run etcd with rkt and Docker using the [static bootstrap process](clustering.md#static).
## rkt
### Running a single node etcd
The following rkt run command will expose the etcd client API on port 2379 and expose the peer API on port 2380.
Use the host IP address when configuring etcd.
```
export NODE1=192.168.1.21
```
Trust the CoreOS [App Signing Key](https://coreos.com/security/app-signing-key/).
```
sudo rkt trust --prefix coreos.com/etcd
# gpg key fingerprint is: 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E
```
Run the `v3.0.6` version of etcd or specify another release version.
```
sudo rkt run --net=default:IP=${NODE1} coreos.com/etcd:v3.0.6 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380
```
List the cluster member.
```
etcdctl --endpoints=http://192.168.1.21:2379 member list
```
### Running a 3 node etcd cluster
Set up a 3-node cluster with rkt locally, using the `-initial-cluster` flag.
```sh
export NODE1=172.16.28.21
export NODE2=172.16.28.22
export NODE3=172.16.28.23
```
```
# node 1
sudo rkt run --net=default:IP=${NODE1} coreos.com/etcd:v3.0.6 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
# node 2
sudo rkt run --net=default:IP=${NODE2} coreos.com/etcd:v3.0.6 -- -name=node2 -advertise-client-urls=http://${NODE2}:2379 -initial-advertise-peer-urls=http://${NODE2}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE2}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
# node 3
sudo rkt run --net=default:IP=${NODE3} coreos.com/etcd:v3.0.6 -- -name=node3 -advertise-client-urls=http://${NODE3}:2379 -initial-advertise-peer-urls=http://${NODE3}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE3}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
```
Verify the cluster is healthy and can be reached.
```
ETCDCTL_API=3 etcdctl --endpoints=http://172.16.28.21:2379,http://172.16.28.22:2379,http://172.16.28.23:2379 endpoint-health
```
### DNS
Production clusters which refer to peers by DNS name known to the local resolver must mount the [host's DNS configuration](https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html#customizing-rkt-options).
## Docker
In order to expose the etcd API to clients outside of the Docker host, use the host IP address of the container. Please see [`docker inspect`](https://docs.docker.com/engine/reference/commandline/inspect) for more detail on how to get the IP address. Alternatively, specify the `--net=host` flag to the `docker run` command to skip placing the container inside of a separate network stack.
```
# For each machine
ETCD_VERSION=v3.0.0-beta.0
ETCD_VERSION=v3.0.0
TOKEN=my-etcd-token
CLUSTER_STATE=new
NAME_1=etcd-node-0
@ -59,3 +121,7 @@ To run `etcdctl` using API version 3:
docker exec etcd /bin/sh -c "export ETCDCTL_API=3 && /usr/local/bin/etcdctl put foo bar"
```
## Bare Metal
To provision a 3 node etcd cluster on bare-metal, you might find the examples in the [baremetal repo](https://github.com/coreos/coreos-baremetal/tree/master/examples) useful.

View File

@ -0,0 +1,66 @@
# etcd gateway
## What is etcd gateway
etcd gateway is a simple TCP proxy that forwards network data to the etcd cluster. The gateway is stateless and transparent; it neither inspects client requests nor interferes with cluster responses.
The gateway supports multiple etcd server endpoints. When the gateway starts, it randomly picks one etcd server endpoint and forwards all requests to that endpoint. This endpoint serves all requests until the gateway detects a network failure. If the gateway detects an endpoint failure, it will switch to a different endpoint, if available, to hide failures from its clients. Other retry policies, such as weighted round-robin, may be supported in the future.
## When to use etcd gateway
Every application that accesses etcd must first have the address of an etcd cluster client endpoint. If multiple applications on the same server access the same etcd cluster, every application still needs to know the advertised client endpoints of the etcd cluster. If the etcd cluster is reconfigured to have different endpoints, every application may also need to update its endpoint list. This wide-scale reconfiguration is both tedious and error prone.
etcd gateway solves this problem by serving as a stable local endpoint. A typical etcd gateway configuration has
each machine running a gateway listening on a local address and every etcd application connecting to its local gateway. The upshot is only the gateway needs to update its endpoints instead of updating each and every application.
In summary, to automatically propagate cluster endpoint changes, the etcd gateway runs on every machine, serving multiple applications that access the same etcd cluster.
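To make the forwarding model concrete, here is a deliberately naive sketch of the same idea, not the gateway's actual implementation: accept local connections and copy bytes to one randomly chosen endpoint. The `127.0.0.1:23790` listen address and the endpoint list are assumptions:
```go
package main

import (
	"io"
	"log"
	"math/rand"
	"net"
)

var endpoints = []string{
	"infra0.example.com:2379",
	"infra1.example.com:2379",
	"infra2.example.com:2379",
}

func main() {
	l, err := net.Listen("tcp", "127.0.0.1:23790")
	if err != nil {
		log.Fatal(err)
	}
	for {
		c, err := l.Accept()
		if err != nil {
			log.Fatal(err)
		}
		// Pick one endpoint at random, as the gateway does at startup.
		go proxy(c, endpoints[rand.Intn(len(endpoints))])
	}
}

func proxy(c net.Conn, ep string) {
	defer c.Close()
	up, err := net.Dial("tcp", ep)
	if err != nil {
		return // a real gateway would fail over to another endpoint here
	}
	defer up.Close()
	go io.Copy(up, c) // client -> etcd
	io.Copy(c, up)    // etcd -> client
}
```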
## When not to use etcd gateway
- Improving performance
The gateway is not designed for improving etcd cluster performance. It does not provide caching, watch coalescing or batching. The etcd team is developing a caching proxy designed for improving cluster scalability.
- Running on a cluster management system
Advanced cluster management systems like Kubernetes natively support service discovery. Applications can access an etcd cluster with a DNS name or a virtual IP address managed by the system. For example, kube-proxy is equivalent to etcd gateway.
## Start etcd gateway
Consider an etcd cluster with the following static endpoints:
|Name|Address|Hostname|
|------|---------|------------------|
|infra0|10.0.1.10|infra0.example.com|
|infra1|10.0.1.11|infra1.example.com|
|infra2|10.0.1.12|infra2.example.com|
Start the etcd gateway to use these static endpoints with the command:
```bash
$ etcd gateway start --endpoints=infra0.example.com,infra1.example.com,infra2.example.com
2016-08-16 11:21:18.867350 I | tcpproxy: ready to proxy client requests to [...]
```
Alternatively, if using DNS for service discovery, consider the DNS SRV entries:
```bash
$ dig +noall +answer SRV _etcd-client._tcp.example.com
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra0.example.com.
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra1.example.com.
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra2.example.com.
```
```bash
$ dig +noall +answer infra0.example.com infra1.example.com infra2.example.com
infra0.example.com. 300 IN A 10.0.1.10
infra1.example.com. 300 IN A 10.0.1.11
infra2.example.com. 300 IN A 10.0.1.12
```
Start the etcd gateway to fetch the endpoints from the DNS SRV entries with the command:
```bash
$ etcd gateway --discovery-srv=example.com
2016-08-16 11:21:18.867350 I | tcpproxy: ready to proxy client requests to [...]
```

View File

@ -0,0 +1,49 @@
# gRPC proxy
*This is a pre-alpha feature; we are looking for early feedback.*
The gRPC proxy is a stateless etcd reverse proxy operating at the gRPC layer (L7). The proxy is designed to reduce the total processing load on the core etcd cluster. For horizontal scalability, it coalesces watch and lease API requests. To protect the cluster against abusive clients, it caches key range requests.
The gRPC proxy supports multiple etcd server endpoints. When the proxy starts, it randomly picks one etcd server endpoint to use. This endpoint serves all requests until the proxy detects an endpoint failure. If the gRPC proxy detects an endpoint failure, it switches to a different endpoint, if available, to hide failures from its clients. Other retry policies, such as weighted round-robin, may be supported in the future.
## Scalable watch API
The gRPC proxy coalesces multiple client watchers (`c-watchers`) on the same key or range into a single watcher (`s-watcher`) connected to an etcd server. The proxy broadcasts all events from the `s-watcher` to its `c-watchers`.
Assuming N clients watch the same key, one gRPC proxy can reduce the watch load on the etcd server from N to 1. Users can deploy multiple gRPC proxies to further distribute server load.
In the following example, three clients watch on key A. The gRPC proxy coalesces the three watchers, creating a single watcher attached to the etcd server.
```
+-------------+
| etcd server |
+------+------+
^ watch key A (s-watcher)
|
+-------+-----+
| gRPC proxy | <-------+
| | |
++-----+------+ |watch key A (c-watcher)
watch key A ^ ^ watch key A |
(c-watcher) | | (c-watcher) |
+-------+-+ ++--------+ +----+----+
| client | | client | | client |
| | | | | |
+---------+ +---------+ +---------+
```
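To reproduce the diagram with real clients, a sketch that opens the three `c-watchers` through a proxy endpoint; `localhost:23790` is an assumed proxy address:
```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	// Three independent clients, each watching key A through the proxy.
	for i := 0; i < 3; i++ {
		go watch(i)
	}
	time.Sleep(time.Minute)
}

func watch(id int) {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:23790"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()
	// The proxy coalesces these c-watchers into a single s-watcher upstream.
	for wresp := range cli.Watch(context.Background(), "A") {
		for _, ev := range wresp.Events {
			fmt.Printf("watcher %d: %s %s=%s\n", id, ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}
```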
### Limitations
To effectively coalesce multiple client watchers into a single watcher, the gRPC proxy coalesces new `c-watchers` into an existing `s-watcher` when possible. This coalesced `s-watcher` may be out of sync with the etcd server due to network delays or buffered undelivered events. When the watch revision is unspecified, the gRPC proxy will not guarantee the `c-watcher` will start watching from the most recent store revision. For example, if a client watches from an etcd server with revision 1000, that watcher will begin at revision 1000. If a client watches from the gRPC proxy, it may begin watching from an older revision, such as 990.
Similar limitations apply to cancellation. When a watcher is cancelled, the etcd server's revision may be greater than the cancellation response revision.
These two limitations should not cause problems for most use cases. In the future, there may be additional options to force the watcher to bypass the gRPC proxy for more accurate revision responses.
## Scalable lease API
TODO
## Abusive clients protection
The gRPC proxy caches responses for requests when it does not break consistency requirements. This can protect the etcd server from abusive clients issuing requests in tight loops.

View File

@ -49,51 +49,50 @@ Finished defragmenting etcd member[127.0.0.1:2379]
## Space quota
The space quota in `etcd` ensures the cluster operates in a reliable fashion. Without a space quota, `etcd` may suffer from poor performance if the keyspace grows excessively large, or it may simply run out of storage space, leading to unpredictable cluster behavior. If the keyspace's backend database for any member exceeds the space quota, `etcd` raises a cluster-wide alarm that puts the cluster into a maintenance mode which only accepts key reads and deletes. After freeing enough space in the keyspace, the alarm can be disarmed and the cluster will resume normal operation.
The space quota in `etcd` ensures the cluster operates in a reliable fashion. Without a space quota, `etcd` may suffer from poor performance if the keyspace grows excessively large, or it may simply run out of storage space, leading to unpredictable cluster behavior. If the keyspace's backend database for any member exceeds the space quota, `etcd` raises a cluster-wide alarm that puts the cluster into a maintenance mode which only accepts key reads and deletes. Only after freeing enough space in the keyspace and defragmenting the backend database, along with clearing the space quota alarm can the cluster resume normal operation.
By default, `etcd` sets a conservative space quota suitable for most applications, but it may be configured on the command line, in bytes:
```sh
# set a very small 16MB quota
$ etcd --quota-backend-bytes=16777216
$ etcd --quota-backend-bytes=$((16*1024*1024))
```
The space quota can be triggered with a loop:
```sh
# fill keyspace
$ while [ 1 ]; do dd if=/dev/urandom bs=1024 count=1024 | etcdctl put key || break; done
$ while [ 1 ]; do dd if=/dev/urandom bs=1024 count=1024 | ETCDCTL_API=3 etcdctl put key || break; done
...
Error: rpc error: code = 8 desc = etcdserver: mvcc: database space exceeded
# confirm quota space is exceeded
$ etcdctl --write-out=table endpoint status
$ ETCDCTL_API=3 etcdctl --write-out=table endpoint status
+----------------+------------------+-----------+---------+-----------+-----------+------------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+----------------+------------------+-----------+---------+-----------+-----------+------------+
| 127.0.0.1:2379 | bf9071f4639c75cc | 2.3.0+git | 18 MB | true | 2 | 3332 |
+----------------+------------------+-----------+---------+-----------+-----------+------------+
# confirm alarm is raised
$ etcdctl alarm list
$ ETCDCTL_API=3 etcdctl alarm list
memberID:13803658152347727308 alarm:NOSPACE
```
Removing excessive keyspace data will put the cluster back within the quota limits so the alarm can be disarmed:
Removing excessive keyspace data and defragmenting the backend database will put the cluster back within the quota limits:
```sh
# get current revision
$ etcdctl --endpoints=:2379 endpoint status
[{"Endpoint":"127.0.0.1:2379","Status":{"header":{"cluster_id":8925027824743593106,"member_id":13803658152347727308,"revision":1516,"raft_term":2},"version":"2.3.0+git","dbSize":17973248,"leader":13803658152347727308,"raftIndex":6359,"raftTerm":2}}]
$ rev=$(ETCDCTL_API=3 etcdctl --endpoints=:2379 endpoint status --write-out="json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9]*')
# compact away all old revisions
$ etdctl compact 1516
$ ETCDCTL_API=3 etcdctl compact $rev
compacted revision 1516
# defragment away excessive space
$ etcdctl defrag
$ ETCDCTL_API=3 etcdctl defrag
Finished defragmenting etcd member[127.0.0.1:2379]
# disarm alarm
$ etcdctl alarm disarm
$ ETCDCTL_API=3 etcdctl alarm disarm
memberID:13803658152347727308 alarm:NOSPACE
# test puts are allowed again
$ etdctl put newkey 123
$ ETCDCTL_API=3 etcdctl put newkey 123
OK
```
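The same checks can be scripted against the maintenance API. A clientv3 sketch; the endpoint and the 16MB quota figure are assumptions carried over from the example above:
```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Report the backend database size against the configured quota.
	status, err := cli.Status(context.TODO(), "127.0.0.1:2379")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("db size: %d bytes (quota: %d)\n", status.DbSize, 16*1024*1024)

	// List raised alarms (e.g., NOSPACE once the quota is exceeded).
	alarms, err := cli.AlarmList(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	for _, a := range alarms.Alarms {
		fmt.Printf("member %x alarm: %s\n", a.MemberID, a.Alarm)
	}
}
```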

View File

@ -1,14 +1,39 @@
## Supported platform
## Supported platforms
### Current support
The following table lists etcd support status for common architectures and operating systems:
| Architecture | Operating System | Status | Maintainers |
| ------------ | ---------------- | ------------ | ---------------- |
| amd64 | Darwin | Experimental | etcd maintainers |
| amd64 | Linux | Stable | etcd maintainers |
| amd64 | Windows | Experimental | |
| arm64 | Linux | Experimental | @glevand |
| arm | Linux | Unstable | |
| 386 | Linux | Unstable | |
* etcd-maintainers are listed in https://github.com/coreos/etcd/blob/master/MAINTAINERS.
Experimental platforms appear to work in practice and have some platform-specific code in etcd, but do not fully conform to the stable support policy. Unstable platforms have been tested even more lightly than experimental ones. Unlisted architecture and operating system pairs are currently unsupported; caveat emptor.
### Supporting a new platform
For etcd to officially support a new platform as stable, a few requirements are necessary to ensure acceptable quality:
1. An "official" maintainer for the platform with clear motivation; someone must be responsible for taking care of the platform.
2. Set up CI for build; etcd must compile.
3. Set up CI for running unit tests; etcd must pass simple tests.
4. Set up CI (TravisCI, SemaphoreCI or Jenkins) for running integration tests; etcd must pass intensive tests.
5. (Optional) Set up a functional testing cluster; an etcd cluster should survive stress testing.
### 32-bit and other unsupported systems
etcd has known issues on 32-bit systems due to a bug in the Go runtime. See #[358][358] for more information.
etcd has known issues on 32-bit systems due to a bug in the Go runtime. See the [Go issue][go-issue] and [atomic package][go-atomic] for more information.
To avoid inadvertently running a possibly unstable etcd server, `etcd` on unsupported architectures will print
a warning message and immediately exit if the environment variable `ETCD_UNSUPPORTED_ARCH` is not set to
the target architecture.
To avoid inadvertently running a possibly unstable etcd server, `etcd` on unstable or unsupported architectures will print a warning message and immediately exit if the environment variable `ETCD_UNSUPPORTED_ARCH` is not set to the target architecture.
Currently only the amd64 architecture is officially supported by `etcd`.
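The guard described above amounts to a check like the following sketch (not etcd's actual startup code):
```go
package main

import (
	"fmt"
	"os"
	"runtime"
)

func main() {
	// Refuse to run on a non-amd64 architecture unless ETCD_UNSUPPORTED_ARCH
	// names the target architecture, mirroring the behavior described above.
	if runtime.GOARCH != "amd64" && os.Getenv("ETCD_UNSUPPORTED_ARCH") != runtime.GOARCH {
		fmt.Fprintf(os.Stderr, "running etcd on %s is unsupported; set ETCD_UNSUPPORTED_ARCH=%s to proceed\n",
			runtime.GOARCH, runtime.GOARCH)
		os.Exit(1)
	}
	fmt.Println("architecture check passed")
}
```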
[358]: https://github.com/coreos/etcd/issues/358
[go-issue]: https://github.com/golang/go/issues/599
[go-atomic]: https://golang.org/pkg/sync/atomic/#pkg-note-BUG

View File

@ -71,4 +71,23 @@ $ etcd --snapshot-count=5000
$ ETCD_SNAPSHOT_COUNT=5000 etcd
```
## Network
If the etcd leader serves a large number of concurrent client requests, it may delay processing follower peer requests due to network congestion. This manifests as send buffer error messages on the follower nodes:
```
dropped MsgProp to 247ae21ff9436b2d since streamMsg's sending buffer is full
dropped MsgAppResp to 247ae21ff9436b2d since streamMsg's sending buffer is full
```
These errors may be resolved by prioritizing etcd's peer traffic over its client traffic. On Linux, peer traffic can be prioritized by using the traffic control mechanism:
```
tc qdisc add dev eth0 root handle 1: prio bands 3
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip sport 2380 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 2380 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 match ip sport 2379 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 match ip dport 2379 0xffff flowid 1:1
```
[ping]: https://en.wikipedia.org/wiki/Ping_(networking_utility)

View File

@ -18,7 +18,7 @@ Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Y
Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
Before beginning, [backup the etcd data directory](admin_guide.md#backing-up-the-datastore). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version.
Before beginning, [backup the etcd data directory](../v2/admin_guide.md#backing-up-the-datastore). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version.
#### Mixed Versions
@ -34,7 +34,7 @@ For a much larger total data size, 100MB or more, this one-time process might t
If all members have been upgraded to v3.0, the cluster will be upgraded to v3.0, and downgrade from this completed state is **not possible**. If any single member is still v2.3, however, the cluster and its operations remains “v2.3”, and it is possible from this mixed cluster state to return to using a v2.3 etcd binary on all members.
Please [backup the data directory](admin_guide.md#backing-up-the-datastore) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
Please [backup the data directory](../v2/admin_guide.md#backing-up-the-datastore) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
### Upgrade Procedure
@ -64,7 +64,7 @@ When each etcd process is stopped, expected errors will be logged by other clust
2016-06-27 15:21:48.624175 I | rafthttp: the connection with 8211f1d0f64f3269 became inactive
```
It's a good idea at this point to [backup the etcd data directory](https://github.com/coreos/etcd/blob/master/Documentation/v2/admin_guide.md#backing-up-the-datastore) to provide a downgrade path should any problems occur:
It's a good idea at this point to [backup the etcd data directory](../v2/admin_guide.md#backing-up-the-datastore) to provide a downgrade path should any problems occur:
```
$ etcdctl backup \

View File

@ -559,6 +559,25 @@ Let's create a key-value pair first: `foo=one`.
curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=one
```
```json
{
"action":"set",
"node":{
"key":"/foo",
"value":"one",
"modifiedIndex":4,
"createdIndex":4
}
}
```
Specifying the `noValueOnSuccess` option skips returning the node as the value.
```sh
curl http://127.0.0.1:2379/v2/keys/foo?noValueOnSuccess=true -XPUT -d value=one
# {"action":"set"}
```
Now let's try some invalid `CompareAndSwap` commands.
Trying to set this existing key with `prevExist=false` fails as expected:

View File

@ -266,7 +266,7 @@ Follow the instructions when using these flags.
## Profiling flags
### --enable-pprof
+ Enable runtime profiling data via HTTP server. Address is at client URL + "/debug/pprof"
+ Enable runtime profiling data via HTTP server. Address is at client URL + "/debug/pprof/"
+ default: false
[build-cluster]: clustering.md#static

View File

@ -48,7 +48,7 @@ All releases version numbers follow the format of [semantic versioning 2.0.0](ht
## Build Release Binaries and Images
- Ensure `actool` is available, or install it with `go get github.com/appc/spec/actool`.
- Ensure `acbuild` is available.
- Ensure `docker` is available.
Run release script in root directory:

View File

@ -105,7 +105,7 @@ ETCD_INITIAL_CLUSTER_STATE=existing
### Stop the proxy process
Stop the existing proxy so we can wipe it's state on disk and reload it with the new configuration:
Stop the existing proxy so we can wipe its state on disk and reload it with the new configuration:
``` bash
ps aux | grep etcd
@ -149,5 +149,5 @@ If an error occurs, check the [add member troubleshooting doc][runtime-configura
[discovery-service]: clustering.md#discovery
[goreman]: https://github.com/mattn/goreman
[procfile]: /Procfile
[procfile]: https://github.com/coreos/etcd/blob/master/Procfile
[runtime-configuration]: runtime-configuration.md#error-cases-when-adding-members

NEWS
View File

@ -0,0 +1,54 @@
etcd v3.0.11 (2016-10-07)
- server returns previous key-value (optional)
- clientv3 WithPrevKV option
- v3 etcdctl prev-kv flag
etcd v3.0.10 (2016-09-23)
etcd v3.0.9 (2016-09-15)
- warn on domain names on listen URLs (v3.2 will reject domain names)
etcd v3.0.8 (2016-09-09)
- allow only IP addresses in listen URLs (domain names are rejected)
etcd v3.0.7 (2016-08-31)
- SRV records only allow A records (RFC 2052)
etcd v3.0.6 (2016-08-19)
etcd v3.0.5 (2016-08-19)
- SRV records (e.g., infra1.example.com) must match the discovery domain
(i.e., example.com) when using the default certificate authority.
etcd v3.0.4 (2016-07-27)
- v2 auth can now use common name from TLS certificate when --client-cert-auth is enabled
- v2 etcdctl ls command now supports --output=json
- Add /var/lib/etcd directory to etcd official Docker image
etcd v3.0.3 (2016-07-15)
- Revert Dockerfile to use CMD, instead of ENTRYPOINT, to support etcdctl run
- Docker commands for v3.0.2 won't work without specifying executable binary paths
- v3 etcdctl default endpoints are now 127.0.0.1:2379
etcd v3.0.2 (2016-07-08)
- Dockerfile uses ENTRYPOINT, instead of CMD, to run etcd without binary path specified
etcd v3.0.1 (2016-07-01)

View File

@ -39,13 +39,14 @@ See [etcdctl][etcdctl] for a simple command line client.
The easiest way to get etcd is to use one of the pre-built release binaries which are available for OSX, Linux, Windows, AppC (ACI), and Docker. Instructions for using these binaries are on the [GitHub releases page][github-release].
For those wanting to try the very latest version, you can build the latest version of etcd from the `master` branch.
You will first need [*Go*](https://golang.org/) installed on your machine (version 1.5+ is required).
For those wanting to try the very latest version, you can [build the latest version of etcd][dl-build] from the `master` branch.
You will first need [*Go*](https://golang.org/) installed on your machine (version 1.6+ is required).
All development occurs on `master`, including new features and bug fixes.
Bug fixes are first targeted at `master` and subsequently ported to release branches, as described in the [branch management][branch-management] guide.
[github-release]: https://github.com/coreos/etcd/releases/
[branch-management]: ./Documentation/branch_management.md
[dl-build]: ./Documentation/dl_build.md#build-the-latest-version
### Running etcd

View File

@ -6,26 +6,19 @@ This document defines a high level roadmap for etcd development.
The dates below should not be considered authoritative, but rather indicative of the projected timeline of the project. The [milestones defined in GitHub](https://github.com/coreos/etcd/milestones) represent the most up-to-date and issue-for-issue plans.
etcd 2.3 is our current stable branch. The roadmap below outlines new features that will be added to etcd, and while subject to change, defines what the future stable releases will look like.
etcd 3.0 is our current stable branch. The roadmap below outlines new features that will be added to etcd, and while subject to change, defines what the future stable releases will look like.
### etcd 3.0 (April)
- v3 API ([see also the issue tag](https://github.com/coreos/etcd/issues?utf8=%E2%9C%93&q=label%3Aarea/v3api))
- Leases
- Binary protocol
- Support a large number of watchers
- Failure guarantees documented
- Simple v3 client (golang)
- v3 API
- Locking
- Better disk backend
- Improved write throughput
- Support larger datasets and histories
- Simpler disaster recovery UX
- Integrated with Kubernetes
- Mirroring
### etcd 3.1 (2016-Oct)
- Stable L4 gateway
- Experimental support for scalable proxy
- Automatic leadership transfer for the rolling upgrade
- V3 API improvements
- Get previous key-value pair
- Get only keys (ignore values)
- Get only key count
### etcd 3.1 (July)
- API bindings for other languages
### etcd 3.+ (future)
- Horizontally scalable proxy layer
### etcd 3.2 (2017-Feb)
- Stable scalable proxy
- JWT token based auth
- Improved watch performance
- ...

View File

@ -18,12 +18,12 @@ package authpb
import (
"fmt"
proto "github.com/gogo/protobuf/proto"
proto "github.com/golang/protobuf/proto"
math "math"
)
import io "io"
io "io"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@ -32,7 +32,7 @@ var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
const _ = proto.GoGoProtoPackageIsVersion1
const _ = proto.ProtoPackageIsVersion1
type Permission_Type int32
@ -798,23 +798,23 @@ var (
)
var fileDescriptorAuth = []byte{
// 276 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xe2, 0x4a, 0x2c, 0x2d, 0xc9,
0xd0, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x03, 0xb1, 0x0b, 0x92, 0xa4, 0x44, 0xd2, 0xf3,
0xd3, 0xf3, 0xc1, 0x42, 0xfa, 0x20, 0x16, 0x44, 0x56, 0xc9, 0x87, 0x8b, 0x25, 0xb4, 0x38, 0xb5,
0x48, 0x48, 0x88, 0x8b, 0x25, 0x2f, 0x31, 0x37, 0x55, 0x82, 0x51, 0x81, 0x51, 0x83, 0x27, 0x08,
0xcc, 0x16, 0x92, 0xe2, 0xe2, 0x28, 0x48, 0x2c, 0x2e, 0x2e, 0xcf, 0x2f, 0x4a, 0x91, 0x60, 0x02,
0x8b, 0xc3, 0xf9, 0x42, 0x22, 0x5c, 0xac, 0x45, 0xf9, 0x39, 0xa9, 0xc5, 0x12, 0xcc, 0x0a, 0xcc,
0x1a, 0x9c, 0x41, 0x10, 0x8e, 0xd2, 0x1c, 0x46, 0x2e, 0xae, 0x80, 0xd4, 0xa2, 0xdc, 0xcc, 0xe2,
0xe2, 0xcc, 0xfc, 0x3c, 0x21, 0x63, 0xa0, 0x01, 0x40, 0x5e, 0x48, 0x65, 0x01, 0xc4, 0x60, 0x3e,
0x23, 0x71, 0x3d, 0x88, 0x6b, 0xf4, 0x10, 0xaa, 0xf4, 0x40, 0xd2, 0x41, 0x70, 0x85, 0x42, 0x02,
0x5c, 0xcc, 0xd9, 0xa9, 0x95, 0x50, 0x0b, 0x41, 0x4c, 0x21, 0x69, 0x2e, 0xce, 0xa2, 0xc4, 0xbc,
0xf4, 0xd4, 0xf8, 0xd4, 0xbc, 0x14, 0xa0, 0x7d, 0x60, 0x87, 0x80, 0x05, 0x5c, 0xf3, 0x52, 0x94,
0xb4, 0xb8, 0x58, 0xc0, 0xda, 0x38, 0xb8, 0x58, 0x82, 0x5c, 0x1d, 0x5d, 0x04, 0x18, 0x84, 0x38,
0xb9, 0x58, 0xc3, 0x83, 0x3c, 0x43, 0x5c, 0x05, 0x18, 0x85, 0x78, 0xb9, 0x38, 0x41, 0x82, 0x10,
0x2e, 0x93, 0x52, 0x08, 0x50, 0x0d, 0xd0, 0x9d, 0x58, 0x3d, 0x6b, 0xc1, 0xc5, 0x0b, 0xb4, 0x0b,
0xe1, 0x2c, 0xa0, 0x03, 0x98, 0x35, 0xb8, 0x8d, 0x84, 0x30, 0x1d, 0x1c, 0x84, 0xaa, 0xd0, 0x49,
0xe4, 0xc4, 0x43, 0x39, 0x86, 0x0b, 0x40, 0x7c, 0xe2, 0x91, 0x1c, 0xe3, 0x05, 0x20, 0x7e, 0x00,
0xc4, 0x49, 0x6c, 0xe0, 0xf0, 0x35, 0x06, 0x04, 0x00, 0x00, 0xff, 0xff, 0x9e, 0x31, 0x53, 0xfd,
0x8b, 0x01, 0x00, 0x00,
// 288 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x6c, 0x90, 0xc1, 0x4a, 0xc3, 0x30,
0x1c, 0xc6, 0x9b, 0xb6, 0x1b, 0xed, 0x5f, 0x27, 0x25, 0x0c, 0x0c, 0x13, 0x42, 0xe9, 0xa9, 0x78,
0xa8, 0xb0, 0x5d, 0xbc, 0x2a, 0xf6, 0x20, 0x78, 0x90, 0x50, 0xf1, 0x28, 0x1d, 0x0d, 0x75, 0x6c,
0x6d, 0x4a, 0x32, 0x91, 0xbe, 0x89, 0x07, 0x1f, 0x68, 0xc7, 0x3d, 0x82, 0xab, 0x2f, 0x22, 0x4d,
0x64, 0x43, 0xdc, 0xed, 0xfb, 0xbe, 0xff, 0x97, 0xe4, 0x97, 0x3f, 0x40, 0xfe, 0xb6, 0x7e, 0x4d,
0x1a, 0x29, 0xd6, 0x02, 0x0f, 0x7b, 0xdd, 0xcc, 0x27, 0xe3, 0x52, 0x94, 0x42, 0x47, 0x57, 0xbd,
0x32, 0xd3, 0xe8, 0x01, 0xdc, 0x27, 0xc5, 0x25, 0xc6, 0xe0, 0xd6, 0x79, 0xc5, 0x09, 0x0a, 0x51,
0x7c, 0xca, 0xb4, 0xc6, 0x13, 0xf0, 0x9a, 0x5c, 0xa9, 0x77, 0x21, 0x0b, 0x62, 0xeb, 0x7c, 0xef,
0xf1, 0x18, 0x06, 0x52, 0xac, 0xb8, 0x22, 0x4e, 0xe8, 0xc4, 0x3e, 0x33, 0x26, 0xfa, 0x44, 0x00,
0x8f, 0x5c, 0x56, 0x0b, 0xa5, 0x16, 0xa2, 0xc6, 0x33, 0xf0, 0x1a, 0x2e, 0xab, 0xac, 0x6d, 0xcc,
0xc5, 0x67, 0xd3, 0xf3, 0xc4, 0xd0, 0x24, 0x87, 0x56, 0xd2, 0x8f, 0xd9, 0xbe, 0x88, 0x03, 0x70,
0x96, 0xbc, 0xfd, 0x7d, 0xb0, 0x97, 0xf8, 0x02, 0x7c, 0x99, 0xd7, 0x25, 0x7f, 0xe1, 0x75, 0x41,
0x1c, 0x03, 0xa2, 0x83, 0xb4, 0x2e, 0xa2, 0x4b, 0x70, 0xf5, 0x31, 0x0f, 0x5c, 0x96, 0xde, 0xdc,
0x05, 0x16, 0xf6, 0x61, 0xf0, 0xcc, 0xee, 0xb3, 0x34, 0x40, 0x78, 0x04, 0x7e, 0x1f, 0x1a, 0x6b,
0x47, 0x19, 0xb8, 0x4c, 0xac, 0xf8, 0xd1, 0xcf, 0x5e, 0xc3, 0x68, 0xc9, 0xdb, 0x03, 0x16, 0xb1,
0x43, 0x27, 0x3e, 0x99, 0xe2, 0xff, 0xc0, 0xec, 0x6f, 0xf1, 0x96, 0x6c, 0x76, 0xd4, 0xda, 0xee,
0xa8, 0xb5, 0xe9, 0x28, 0xda, 0x76, 0x14, 0x7d, 0x75, 0x14, 0x7d, 0x7c, 0x53, 0x6b, 0x3e, 0xd4,
0x3b, 0x9e, 0xfd, 0x04, 0x00, 0x00, 0xff, 0xff, 0xcc, 0x76, 0x8d, 0x4f, 0x8f, 0x01, 0x00, 0x00,
}

View File

@ -22,7 +22,10 @@ import (
"github.com/coreos/etcd/mvcc/backend"
)
// isSubset returns true if a is a subset of b
// isSubset returns true if a is a subset of b.
// If a is a prefix of b, then a is a subset of b.
// Given intervals [a1,a2) and [b1,b2), is
// the a interval a subset of b?
func isSubset(a, b *rangePerm) bool {
switch {
case len(a.end) == 0 && len(b.end) == 0:
@ -32,9 +35,11 @@ func isSubset(a, b *rangePerm) bool {
// b is a key, a is a range
return false
case len(a.end) == 0:
return 0 <= bytes.Compare(a.begin, b.begin) && bytes.Compare(a.begin, b.end) <= 0
// a is a key, b is a range. need b1 <= a1 and a1 < b2
return bytes.Compare(b.begin, a.begin) <= 0 && bytes.Compare(a.begin, b.end) < 0
default:
return 0 <= bytes.Compare(a.begin, b.begin) && bytes.Compare(a.end, b.end) <= 0
// both are ranges. need b1 <= a1 and a2 <= b2
return bytes.Compare(b.begin, a.begin) <= 0 && bytes.Compare(a.end, b.end) <= 0
}
}
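To make the corrected comparisons concrete, an illustrative in-package sketch (assumes `fmt` is imported; `rangePerm` intervals are half-open `[begin, end)`):
```go
func exampleIsSubset() {
	a := &rangePerm{begin: []byte("a"), end: []byte("b")} // range ["a", "b")
	b := &rangePerm{begin: []byte("a"), end: []byte("c")} // range ["a", "c")
	k := &rangePerm{begin: []byte("b")}                   // single key "b"

	fmt.Println(isSubset(a, b)) // true:  "a" <= "a" && "b" <= "c"
	fmt.Println(isSubset(k, b)) // true:  "a" <= "b" && "b" <  "c"
	fmt.Println(isSubset(b, a)) // false: "c" is not <= "b"
}
```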
@ -46,7 +51,7 @@ func isRangeEqual(a, b *rangePerm) bool {
// If there are equal ranges, removeSubsetRangePerms only keeps one of them.
func removeSubsetRangePerms(perms []*rangePerm) []*rangePerm {
// TODO(mitake): currently it is O(n^2), we need a better algorithm
newp := make([]*rangePerm, 0)
var newp []*rangePerm
for i := range perms {
skip := false
@ -81,19 +86,25 @@ func removeSubsetRangePerms(perms []*rangePerm) []*rangePerm {
// mergeRangePerms merges adjacent rangePerms.
func mergeRangePerms(perms []*rangePerm) []*rangePerm {
merged := make([]*rangePerm, 0)
var merged []*rangePerm
perms = removeSubsetRangePerms(perms)
sort.Sort(RangePermSliceByBegin(perms))
i := 0
for i < len(perms) {
begin, next := i, i
for next+1 < len(perms) && bytes.Compare(perms[next].end, perms[next+1].begin) != -1 {
for next+1 < len(perms) && bytes.Compare(perms[next].end, perms[next+1].begin) >= 0 {
next++
}
merged = append(merged, &rangePerm{begin: perms[begin].begin, end: perms[next].end})
// don't merge ["a", "b") with ["b", ""), because perms[next+1].end is empty.
if next != begin && len(perms[next].end) > 0 {
merged = append(merged, &rangePerm{begin: perms[begin].begin, end: perms[next].end})
} else {
merged = append(merged, perms[begin])
if next != begin {
merged = append(merged, perms[next])
}
}
i = next + 1
}

View File

@ -46,6 +46,10 @@ func TestGetMergedPerms(t *testing.T) {
[]*rangePerm{{[]byte("a"), []byte("b")}},
[]*rangePerm{{[]byte("a"), []byte("b")}},
},
{
[]*rangePerm{{[]byte("a"), []byte("b")}, {[]byte("b"), []byte("")}},
[]*rangePerm{{[]byte("a"), []byte("b")}, {[]byte("b"), []byte("")}},
},
{
[]*rangePerm{{[]byte("a"), []byte("b")}, {[]byte("b"), []byte("c")}},
[]*rangePerm{{[]byte("a"), []byte("c")}},
@ -106,7 +110,7 @@ func TestGetMergedPerms(t *testing.T) {
},
{
[]*rangePerm{{[]byte("a"), []byte("")}, {[]byte("b"), []byte("c")}, {[]byte("b"), []byte("")}, {[]byte("c"), []byte("")}, {[]byte("d"), []byte("")}},
[]*rangePerm{{[]byte("a"), []byte("")}, {[]byte("b"), []byte("c")}, {[]byte("d"), []byte("")}},
[]*rangePerm{{[]byte("a"), []byte("")}, {[]byte("b"), []byte("c")}, {[]byte("c"), []byte("")}, {[]byte("d"), []byte("")}},
},
// duplicate ranges
{

View File

@ -20,6 +20,7 @@ package auth
import (
"crypto/rand"
"math/big"
"strings"
)
const (
@ -53,3 +54,14 @@ func (as *authStore) assignSimpleTokenToUser(username, token string) {
as.simpleTokens[token] = username
as.simpleTokensMu.Unlock()
}
func (as *authStore) invalidateUser(username string) {
as.simpleTokensMu.Lock()
defer as.simpleTokensMu.Unlock()
for token, name := range as.simpleTokens {
if strings.Compare(name, username) == 0 {
delete(as.simpleTokens, token)
}
}
}

View File

@ -16,6 +16,7 @@ package auth
import (
"bytes"
"encoding/binary"
"errors"
"fmt"
"sort"
@ -35,6 +36,8 @@ var (
authEnabled = []byte{1}
authDisabled = []byte{0}
revisionKey = []byte("authRevision")
authBucketName = []byte("auth")
authUsersBucketName = []byte("authUsers")
authRolesBucketName = []byte("authRoles")
@ -51,13 +54,25 @@ var (
ErrPermissionDenied = errors.New("auth: permission denied")
ErrRoleNotGranted = errors.New("auth: role is not granted to the user")
ErrPermissionNotGranted = errors.New("auth: permission is not granted to the role")
ErrAuthNotEnabled = errors.New("auth: authentication is not enabled")
ErrAuthOldRevision = errors.New("auth: revision in header is old")
// BcryptCost is the algorithm cost / strength for hashing auth passwords
BcryptCost = bcrypt.DefaultCost
)
const (
rootUser = "root"
rootRole = "root"
revBytesLen = 8
)
type AuthInfo struct {
Username string
Revision uint64
}
type AuthStore interface {
// AuthEnable turns on the authentication feature
AuthEnable() error
@ -110,23 +125,27 @@ type AuthStore interface {
// RoleList gets a list of all roles
RoleList(r *pb.AuthRoleListRequest) (*pb.AuthRoleListResponse, error)
// UsernameFromToken gets a username from the given Token
UsernameFromToken(token string) (string, bool)
// AuthInfoFromToken gets a username from the given Token and current revision number
// (The revision number is used for preventing the TOCTOU problem)
AuthInfoFromToken(token string) (*AuthInfo, bool)
// IsPutPermitted checks put permission of the user
IsPutPermitted(username string, key []byte) bool
IsPutPermitted(authInfo *AuthInfo, key []byte) error
// IsRangePermitted checks range permission of the user
IsRangePermitted(username string, key, rangeEnd []byte) bool
IsRangePermitted(authInfo *AuthInfo, key, rangeEnd []byte) error
// IsDeleteRangePermitted checks delete-range permission of the user
IsDeleteRangePermitted(username string, key, rangeEnd []byte) bool
IsDeleteRangePermitted(authInfo *AuthInfo, key, rangeEnd []byte) error
// IsAdminPermitted checks admin permission of the user
IsAdminPermitted(username string) bool
IsAdminPermitted(authInfo *AuthInfo) error
// GenSimpleToken produces a simple random string
GenSimpleToken() (string, error)
// Revision gets current revision of authStore
Revision() uint64
}
type authStore struct {
@ -138,6 +157,8 @@ type authStore struct {
simpleTokensMu sync.RWMutex
simpleTokens map[string]string // token -> username
revision uint64
}
func (as *authStore) AuthEnable() error {
@ -166,6 +187,8 @@ func (as *authStore) AuthEnable() error {
as.rangePermCache = make(map[string]*unifiedRangePermissions)
as.revision = getRevision(tx)
plog.Noticef("Authentication enabled")
return nil
@ -176,6 +199,7 @@ func (as *authStore) AuthDisable() {
tx := b.BatchTx()
tx.Lock()
tx.UnsafePut(authBucketName, enableFlagKey, authDisabled)
as.commitRevision(tx)
tx.Unlock()
b.ForceCommit()
@ -183,10 +207,18 @@ func (as *authStore) AuthDisable() {
as.enabled = false
as.enabledMu.Unlock()
as.simpleTokensMu.Lock()
as.simpleTokens = make(map[string]string) // invalidate all tokens
as.simpleTokensMu.Unlock()
plog.Noticef("Authentication disabled")
}
func (as *authStore) Authenticate(ctx context.Context, username, password string) (*pb.AuthenticateResponse, error) {
if !as.isAuthEnabled() {
return nil, ErrAuthNotEnabled
}
// TODO(mitake): after adding jwt support, branching based on values of ctx is required
index := ctx.Value("index").(uint64)
simpleToken := ctx.Value("simpleToken").(string)
@ -223,6 +255,9 @@ func (as *authStore) Recover(be backend.Backend) {
enabled = true
}
}
as.revision = getRevision(tx)
tx.Unlock()
as.enabledMu.Lock()
@ -231,7 +266,7 @@ func (as *authStore) Recover(be backend.Backend) {
}
func (as *authStore) UserAdd(r *pb.AuthUserAddRequest) (*pb.AuthUserAddResponse, error) {
hashed, err := bcrypt.GenerateFromPassword([]byte(r.Password), bcrypt.DefaultCost)
hashed, err := bcrypt.GenerateFromPassword([]byte(r.Password), BcryptCost)
if err != nil {
plog.Errorf("failed to hash password: %s", err)
return nil, err
@ -253,6 +288,8 @@ func (as *authStore) UserAdd(r *pb.AuthUserAddRequest) (*pb.AuthUserAddResponse,
putUser(tx, newUser)
as.commitRevision(tx)
plog.Noticef("added a new user: %s", r.Name)
return &pb.AuthUserAddResponse{}, nil
@ -270,6 +307,11 @@ func (as *authStore) UserDelete(r *pb.AuthUserDeleteRequest) (*pb.AuthUserDelete
delUser(tx, r.Name)
as.commitRevision(tx)
as.invalidateCachedPerm(r.Name)
as.invalidateUser(r.Name)
plog.Noticef("deleted a user: %s", r.Name)
return &pb.AuthUserDeleteResponse{}, nil
@ -278,7 +320,7 @@ func (as *authStore) UserDelete(r *pb.AuthUserDeleteRequest) (*pb.AuthUserDelete
func (as *authStore) UserChangePassword(r *pb.AuthUserChangePasswordRequest) (*pb.AuthUserChangePasswordResponse, error) {
// TODO(mitake): measure the cost of bcrypt.GenerateFromPassword()
// If the cost is too high, we should move the encryption to outside of the raft
hashed, err := bcrypt.GenerateFromPassword([]byte(r.Password), bcrypt.DefaultCost)
hashed, err := bcrypt.GenerateFromPassword([]byte(r.Password), BcryptCost)
if err != nil {
plog.Errorf("failed to hash password: %s", err)
return nil, err
@ -301,6 +343,11 @@ func (as *authStore) UserChangePassword(r *pb.AuthUserChangePasswordRequest) (*p
putUser(tx, updatedUser)
as.commitRevision(tx)
as.invalidateCachedPerm(r.Name)
as.invalidateUser(r.Name)
plog.Noticef("changed a password of a user: %s", r.Name)
return &pb.AuthUserChangePasswordResponse{}, nil
@ -336,6 +383,8 @@ func (as *authStore) UserGrantRole(r *pb.AuthUserGrantRoleRequest) (*pb.AuthUser
as.invalidateCachedPerm(r.User)
as.commitRevision(tx)
plog.Noticef("granted role %s to user %s", r.Role, r.User)
return &pb.AuthUserGrantRoleResponse{}, nil
}
@ -404,6 +453,8 @@ func (as *authStore) UserRevokeRole(r *pb.AuthUserRevokeRoleRequest) (*pb.AuthUs
as.invalidateCachedPerm(r.Name)
as.commitRevision(tx)
plog.Noticef("revoked role %s from user %s", r.Role, r.Name)
return &pb.AuthUserRevokeRoleResponse{}, nil
}
@ -473,6 +524,8 @@ func (as *authStore) RoleRevokePermission(r *pb.AuthRoleRevokePermissionRequest)
// It should be optimized.
as.clearCachedPerm()
as.commitRevision(tx)
plog.Noticef("revoked key %s from role %s", r.Key, r.Role)
return &pb.AuthRoleRevokePermissionResponse{}, nil
}
@ -501,6 +554,8 @@ func (as *authStore) RoleDelete(r *pb.AuthRoleDeleteRequest) (*pb.AuthRoleDelete
delRole(tx, r.Role)
as.commitRevision(tx)
plog.Noticef("deleted role %s", r.Role)
return &pb.AuthRoleDeleteResponse{}, nil
}
@ -521,16 +576,18 @@ func (as *authStore) RoleAdd(r *pb.AuthRoleAddRequest) (*pb.AuthRoleAddResponse,
putRole(tx, newRole)
as.commitRevision(tx)
plog.Noticef("Role %s is created", r.Name)
return &pb.AuthRoleAddResponse{}, nil
}
func (as *authStore) UsernameFromToken(token string) (string, bool) {
func (as *authStore) AuthInfoFromToken(token string) (*AuthInfo, bool) {
as.simpleTokensMu.RLock()
defer as.simpleTokensMu.RUnlock()
t, ok := as.simpleTokens[token]
return t, ok
return &AuthInfo{Username: t, Revision: as.revision}, ok
}
type permSlice []*authpb.Permission
@ -582,15 +639,21 @@ func (as *authStore) RoleGrantPermission(r *pb.AuthRoleGrantPermissionRequest) (
// It should be optimized.
as.clearCachedPerm()
as.commitRevision(tx)
plog.Noticef("role %s's permission of key %s is updated as %s", r.Name, r.Perm.Key, authpb.Permission_Type_name[int32(r.Perm.PermType)])
return &pb.AuthRoleGrantPermissionResponse{}, nil
}
func (as *authStore) isOpPermitted(userName string, key, rangeEnd []byte, permTyp authpb.Permission_Type) bool {
func (as *authStore) isOpPermitted(userName string, revision uint64, key, rangeEnd []byte, permTyp authpb.Permission_Type) error {
// TODO(mitake): this function would be costly so we need a caching mechanism
if !as.isAuthEnabled() {
return true
return nil
}
if revision < as.revision {
return ErrAuthOldRevision
}
tx := as.be.BatchTx()
@ -600,43 +663,52 @@ func (as *authStore) isOpPermitted(userName string, key, rangeEnd []byte, permTy
user := getUser(tx, userName)
if user == nil {
plog.Errorf("invalid user name %s for permission checking", userName)
return false
return ErrPermissionDenied
}
// root role should have permission on all ranges
if hasRootRole(user) {
return nil
}
if as.isRangeOpPermitted(tx, userName, key, rangeEnd, permTyp) {
return true
return nil
}
return false
return ErrPermissionDenied
}
func (as *authStore) IsPutPermitted(username string, key []byte) bool {
return as.isOpPermitted(username, key, nil, authpb.WRITE)
func (as *authStore) IsPutPermitted(authInfo *AuthInfo, key []byte) error {
return as.isOpPermitted(authInfo.Username, authInfo.Revision, key, nil, authpb.WRITE)
}
func (as *authStore) IsRangePermitted(username string, key, rangeEnd []byte) bool {
return as.isOpPermitted(username, key, rangeEnd, authpb.READ)
func (as *authStore) IsRangePermitted(authInfo *AuthInfo, key, rangeEnd []byte) error {
return as.isOpPermitted(authInfo.Username, authInfo.Revision, key, rangeEnd, authpb.READ)
}
func (as *authStore) IsDeleteRangePermitted(username string, key, rangeEnd []byte) bool {
return as.isOpPermitted(username, key, rangeEnd, authpb.WRITE)
func (as *authStore) IsDeleteRangePermitted(authInfo *AuthInfo, key, rangeEnd []byte) error {
return as.isOpPermitted(authInfo.Username, authInfo.Revision, key, rangeEnd, authpb.WRITE)
}
func (as *authStore) IsAdminPermitted(username string) bool {
func (as *authStore) IsAdminPermitted(authInfo *AuthInfo) error {
if !as.isAuthEnabled() {
return true
return nil
}
tx := as.be.BatchTx()
tx.Lock()
defer tx.Unlock()
u := getUser(tx, username)
u := getUser(tx, authInfo.Username)
if u == nil {
return false
return ErrUserNotFound
}
return hasRootRole(u)
if !hasRootRole(u) {
return ErrPermissionDenied
}
return nil
}
func getUser(tx backend.BatchTx, username string) *authpb.User {
@ -748,13 +820,18 @@ func NewAuthStore(be backend.Backend) *authStore {
tx.UnsafeCreateBucket(authUsersBucketName)
tx.UnsafeCreateBucket(authRolesBucketName)
as := &authStore{
be: be,
simpleTokens: make(map[string]string),
revision: 0,
}
as.commitRevision(tx)
tx.Unlock()
be.ForceCommit()
return &authStore{
be: be,
simpleTokens: make(map[string]string),
}
return as
}
func hasRootRole(u *authpb.User) bool {
@ -765,3 +842,23 @@ func hasRootRole(u *authpb.User) bool {
}
return false
}
func (as *authStore) commitRevision(tx backend.BatchTx) {
as.revision++
revBytes := make([]byte, revBytesLen)
binary.BigEndian.PutUint64(revBytes, as.revision)
tx.UnsafePut(authBucketName, revisionKey, revBytes)
}
func getRevision(tx backend.BatchTx) uint64 {
_, vs := tx.UnsafeRange(authBucketName, []byte(revisionKey), nil, 0)
if len(vs) != 1 {
plog.Panicf("failed to get the key of auth store revision")
}
return binary.BigEndian.Uint64(vs[0])
}
func (as *authStore) Revision() uint64 {
return as.revision
}

View File

@ -20,9 +20,12 @@ import (
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"github.com/coreos/etcd/mvcc/backend"
"golang.org/x/crypto/bcrypt"
"golang.org/x/net/context"
)
func init() { BcryptCost = bcrypt.MinCost }
func TestUserAdd(t *testing.T) {
b, tPath := backend.NewDefaultTmpBackend()
defer func() {
@ -45,6 +48,25 @@ func TestUserAdd(t *testing.T) {
}
}
func enableAuthAndCreateRoot(as *authStore) error {
_, err := as.UserAdd(&pb.AuthUserAddRequest{Name: "root", Password: "root"})
if err != nil {
return err
}
_, err = as.RoleAdd(&pb.AuthRoleAddRequest{Name: "root"})
if err != nil {
return err
}
_, err = as.UserGrantRole(&pb.AuthUserGrantRoleRequest{User: "root", Role: "root"})
if err != nil {
return err
}
return as.AuthEnable()
}
func TestAuthenticate(t *testing.T) {
b, tPath := backend.NewDefaultTmpBackend()
defer func() {
@ -53,9 +75,13 @@ func TestAuthenticate(t *testing.T) {
}()
as := NewAuthStore(b)
err := enableAuthAndCreateRoot(as)
if err != nil {
t.Fatal(err)
}
ua := &pb.AuthUserAddRequest{Name: "foo", Password: "bar"}
_, err := as.UserAdd(ua)
_, err = as.UserAdd(ua)
if err != nil {
t.Fatal(err)
}
@ -96,9 +122,13 @@ func TestUserDelete(t *testing.T) {
}()
as := NewAuthStore(b)
err := enableAuthAndCreateRoot(as)
if err != nil {
t.Fatal(err)
}
ua := &pb.AuthUserAddRequest{Name: "foo"}
_, err := as.UserAdd(ua)
_, err = as.UserAdd(ua)
if err != nil {
t.Fatal(err)
}
@ -128,8 +158,12 @@ func TestUserChangePassword(t *testing.T) {
}()
as := NewAuthStore(b)
err := enableAuthAndCreateRoot(as)
if err != nil {
t.Fatal(err)
}
_, err := as.UserAdd(&pb.AuthUserAddRequest{Name: "foo"})
_, err = as.UserAdd(&pb.AuthUserAddRequest{Name: "foo"})
if err != nil {
t.Fatal(err)
}
@ -169,9 +203,13 @@ func TestRoleAdd(t *testing.T) {
}()
as := NewAuthStore(b)
err := enableAuthAndCreateRoot(as)
if err != nil {
t.Fatal(err)
}
// adds a new role
_, err := as.RoleAdd(&pb.AuthRoleAddRequest{Name: "role-test"})
_, err = as.RoleAdd(&pb.AuthRoleAddRequest{Name: "role-test"})
if err != nil {
t.Fatal(err)
}
@ -185,8 +223,12 @@ func TestUserGrant(t *testing.T) {
}()
as := NewAuthStore(b)
err := enableAuthAndCreateRoot(as)
if err != nil {
t.Fatal(err)
}
_, err := as.UserAdd(&pb.AuthUserAddRequest{Name: "foo"})
_, err = as.UserAdd(&pb.AuthUserAddRequest{Name: "foo"})
if err != nil {
t.Fatal(err)
}

build
View File

@ -4,15 +4,20 @@
ORG_PATH="github.com/coreos"
REPO_PATH="${ORG_PATH}/etcd"
export GO15VENDOREXPERIMENT="1"
eval $(go env)
GIT_SHA=`git rev-parse --short HEAD || echo "GitNotFound"`
if [ ! -z "$FAILPOINTS" ]; then
GIT_SHA="$GIT_SHA"-FAILPOINTS
fi
# Set GO_LDFLAGS="" for building with all symbols for debugging.
if [ -z "${GO_LDFLAGS+x}" ]; then GO_LDFLAGS="-s"; fi
GO_LDFLAGS="$GO_LDFLAGS -X ${REPO_PATH}/cmd/vendor/${REPO_PATH}/version.GitSHA=${GIT_SHA}"
# enable/disable failpoints
toggle_failpoints() {
FAILPKGS="etcdserver/"
FAILPKGS="etcdserver/ mvcc/backend/"
mode="disable"
if [ ! -z "$FAILPOINTS" ]; then mode="enable"; fi
@ -27,18 +32,33 @@ toggle_failpoints() {
}
etcd_build() {
if [ -z "${GOARCH}" ] || [ "${GOARCH}" = "$(go env GOHOSTARCH)" ]; then
out="bin"
else
out="bin/${GOARCH}"
fi
out="bin"
if [ -n "${BINDIR}" ]; then out="${BINDIR}"; fi
toggle_failpoints
# Static compilation is useful when etcd is run in a container
CGO_ENABLED=0 go build $GO_BUILD_FLAGS -installsuffix cgo -ldflags "-s -X ${REPO_PATH}/cmd/vendor/${REPO_PATH}/version.GitSHA=${GIT_SHA}" -o ${out}/etcd ${REPO_PATH}/cmd
CGO_ENABLED=0 go build $GO_BUILD_FLAGS -installsuffix cgo -ldflags "-s" -o ${out}/etcdctl ${REPO_PATH}/cmd/etcdctl
CGO_ENABLED=0 go build $GO_BUILD_FLAGS -installsuffix cgo -ldflags "$GO_LDFLAGS" -o ${out}/etcd ${REPO_PATH}/cmd/etcd || return
CGO_ENABLED=0 go build $GO_BUILD_FLAGS -installsuffix cgo -ldflags "$GO_LDFLAGS" -o ${out}/etcdctl ${REPO_PATH}/cmd/etcdctl || return
}
etcd_setup_gopath() {
CDIR=$(cd `dirname "$0"` && pwd)
cd "$CDIR"
etcdGOPATH=${CDIR}/gopath
# preserve old gopath to support building with unvendored tooling deps (e.g., gofail)
if [ -n "$GOPATH" ]; then
GOPATH=":$GOPATH"
fi
export GOPATH=${etcdGOPATH}$GOPATH
rm -f ${etcdGOPATH}/src
mkdir -p ${etcdGOPATH}
ln -s ${CDIR}/cmd/vendor ${etcdGOPATH}/src
}
toggle_failpoints
# don't build when sourced
(echo "$0" | grep "/build$" > /dev/null) && etcd_build || true
# only build when called directly, not sourced
if echo "$0" | grep "build$" >/dev/null; then
# force new gopath so builds outside of gopath work
etcd_setup_gopath
etcd_build
fi

View File

@ -1,6 +1,13 @@
$ORG_PATH="github.com/coreos"
$REPO_PATH="$ORG_PATH/etcd"
$PWD = $((Get-Item -Path ".\" -Verbose).FullName)
$GO_LDFLAGS="-s"
# Set $Env:GO_LDFLAGS=" " (a single space) to build with all symbols for debugging.
if ($Env:GO_LDFLAGS.length -gt 0) {
$GO_LDFLAGS=$Env:GO_LDFLAGS
}
$GO_LDFLAGS="$GO_LDFLAGS -X $REPO_PATH/cmd/vendor/$REPO_PATH/version.GitSHA=$GIT_SHA"
# rebuild symlinks
echo "Rebuilding symlinks"
@ -41,5 +48,5 @@ if (-not $env:GOPATH) {
$env:CGO_ENABLED = 0
$env:GO15VENDOREXPERIMENT = 1
$GIT_SHA="$(git rev-parse --short HEAD)"
go build -a -installsuffix cgo -ldflags "-s -X $REPO_PATH/cmd/vendor/$REPO_PATH/version.GitSHA=$GIT_SHA" -o bin\etcd.exe "$REPO_PATH\cmd"
go build -a -installsuffix cgo -ldflags "-s" -o bin\etcdctl.exe "$REPO_PATH\cmd\etcdctl"
go build -a -installsuffix cgo -ldflags $GO_LDFLAGS -o bin\etcd.exe "$REPO_PATH\cmd\etcd"
go build -a -installsuffix cgo -ldflags $GO_LDFLAGS -o bin\etcdctl.exe "$REPO_PATH\cmd\etcdctl"

View File

@ -37,6 +37,10 @@ var (
ErrClusterUnavailable = errors.New("client: etcd cluster is unavailable or misconfigured")
ErrNoLeaderEndpoint = errors.New("client: no leader endpoint available")
errTooManyRedirectChecks = errors.New("client: too many redirect checks")
// oneShotCtxValue is set on a context using WithValue(&oneShotCtxValue) so
// that Do() will not retry a request
oneShotCtxValue interface{}
)
var DefaultRequestTimeout = 5 * time.Second
@ -301,7 +305,7 @@ func (c *httpClusterClient) SetEndpoints(eps []string) error {
// If the endpoint list doesn't contain lu (the leader URL), just keep c.pinned = 0.
// Forwarding between follower and leader would be required, but it works.
default:
return errors.New(fmt.Sprintf("invalid endpoint selection mode: %d", c.selectionMode))
return fmt.Errorf("invalid endpoint selection mode: %d", c.selectionMode)
}
return nil
@ -335,6 +339,7 @@ func (c *httpClusterClient) Do(ctx context.Context, act httpAction) (*http.Respo
var body []byte
var err error
cerr := &ClusterError{}
isOneShot := ctx.Value(&oneShotCtxValue) != nil
for i := pinned; i < leps+pinned; i++ {
k := i % leps
@ -348,6 +353,9 @@ func (c *httpClusterClient) Do(ctx context.Context, act httpAction) (*http.Respo
if err == context.Canceled || err == context.DeadlineExceeded {
return nil, nil, err
}
if isOneShot {
return nil, nil, err
}
continue
}
if resp.StatusCode/100 == 5 {
@ -358,6 +366,9 @@ func (c *httpClusterClient) Do(ctx context.Context, act httpAction) (*http.Respo
default:
cerr.Errors = append(cerr.Errors, fmt.Errorf("client: etcd member %s returns server error [%s]", eps[k].String(), http.StatusText(resp.StatusCode)))
}
if isOneShot {
return nil, nil, cerr.Errors[0]
}
continue
}
if k != pinned {
@ -393,7 +404,7 @@ func (c *httpClusterClient) Sync(ctx context.Context) error {
c.Lock()
defer c.Unlock()
eps := make([]string, 0)
var eps []string
for _, m := range ms {
eps = append(eps, m.ClientURLs...)
}

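The one-shot marker above uses a common Go context idiom: the address of a package-level variable serves as both the WithValue key and the value, so the key cannot collide with keys set by any other package. A minimal, self-contained sketch of the idiom (names here are illustrative, not etcd's):

package main

import (
	"fmt"

	"golang.org/x/net/context"
)

// oneShotKey is illustrative; its address is the context key, so no other
// package can construct a colliding key.
var oneShotKey interface{}

func markOneShot(ctx context.Context) context.Context {
	return context.WithValue(ctx, &oneShotKey, &oneShotKey)
}

func isOneShot(ctx context.Context) bool { return ctx.Value(&oneShotKey) != nil }

func main() {
	ctx := context.Background()
	fmt.Println(isOneShot(ctx))              // false
	fmt.Println(isOneShot(markOneShot(ctx))) // true
}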
View File

@ -855,7 +855,7 @@ func TestHTTPClusterClientAutoSyncFail(t *testing.T) {
}
err = hc.AutoSync(context.Background(), time.Hour)
if err.Error() != ErrClusterUnavailable.Error() {
if !strings.HasPrefix(err.Error(), ErrClusterUnavailable.Error()) {
t.Fatalf("incorrect error value: want=%v got=%v", ErrClusterUnavailable, err)
}
}

View File

@ -21,7 +21,11 @@ type ClusterError struct {
}
func (ce *ClusterError) Error() string {
return ErrClusterUnavailable.Error()
s := ErrClusterUnavailable.Error()
for i, e := range ce.Errors {
s += fmt.Sprintf("; error #%d: %s\n", i, e)
}
return s
}
func (ce *ClusterError) Detail() string {

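With Error() aggregating per-member causes, callers now see why the cluster looked unavailable instead of only the generic message. A rough sketch of the resulting output (the wrapped errors are illustrative):

package main

import (
	"errors"
	"fmt"

	"github.com/coreos/etcd/client"
)

func main() {
	cerr := &client.ClusterError{Errors: []error{
		errors.New("dial tcp 127.0.0.1:2379: connection refused"),
		errors.New("dial tcp 127.0.0.1:22379: connection refused"),
	}}
	// prints the generic message followed by each member error
	fmt.Println(cerr.Error())
}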
View File

@ -0,0 +1,134 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package integration
import (
"fmt"
"net/http"
"net/http/httptest"
"os"
"strings"
"sync/atomic"
"testing"
"golang.org/x/net/context"
"github.com/coreos/etcd/client"
"github.com/coreos/etcd/integration"
"github.com/coreos/etcd/pkg/testutil"
)
// TestV2NoRetryEOF tests that destructive API calls won't retry on a disconnection.
func TestV2NoRetryEOF(t *testing.T) {
defer testutil.AfterTest(t)
// generate an EOF response; specify the address so it appears first in the sorted endpoint list
lEOF := integration.NewListenerWithAddr(t, fmt.Sprintf("eof:123.%d.sock", os.Getpid()))
defer lEOF.Close()
tries := uint32(0)
go func() {
for {
conn, err := lEOF.Accept()
if err != nil {
return
}
atomic.AddUint32(&tries, 1)
conn.Close()
}
}()
eofURL := integration.UrlScheme + "://" + lEOF.Addr().String()
cli := integration.MustNewHTTPClient(t, []string{eofURL, eofURL}, nil)
kapi := client.NewKeysAPI(cli)
for i, f := range noRetryList(kapi) {
startTries := atomic.LoadUint32(&tries)
if err := f(); err == nil {
t.Errorf("#%d: expected EOF error, got nil", i)
}
endTries := atomic.LoadUint32(&tries)
if startTries+1 != endTries {
t.Errorf("#%d: expected 1 try, got %d", i, endTries-startTries)
}
}
}
// TestV2NoRetryNoLeader tests that destructive API calls won't retry when the server returns an error code.
func TestV2NoRetryNoLeader(t *testing.T) {
defer testutil.AfterTest(t)
lHttp := integration.NewListenerWithAddr(t, fmt.Sprintf("errHttp:123.%d.sock", os.Getpid()))
eh := &errHandler{errCode: http.StatusServiceUnavailable}
srv := httptest.NewUnstartedServer(eh)
defer lHttp.Close()
defer srv.Close()
srv.Listener = lHttp
go srv.Start()
lHttpURL := integration.UrlScheme + "://" + lHttp.Addr().String()
cli := integration.MustNewHTTPClient(t, []string{lHttpURL, lHttpURL}, nil)
kapi := client.NewKeysAPI(cli)
// test error code
for i, f := range noRetryList(kapi) {
reqs := eh.reqs
if err := f(); err == nil || !strings.Contains(err.Error(), "no leader") {
t.Errorf("#%d: expected \"no leader\", got %v", i, err)
}
if eh.reqs != reqs+1 {
t.Errorf("#%d: expected 1 request, got %d", i, eh.reqs-reqs)
}
}
}
// TestV2RetryRefuse tests that destructive API calls will retry if a connection is refused.
func TestV2RetryRefuse(t *testing.T) {
defer testutil.AfterTest(t)
cl := integration.NewCluster(t, 1)
cl.Launch(t)
defer cl.Terminate(t)
// test connection refused; expect no error failover
cli := integration.MustNewHTTPClient(t, []string{integration.UrlScheme + "://refuseconn:123", cl.URL(0)}, nil)
kapi := client.NewKeysAPI(cli)
if _, err := kapi.Set(context.Background(), "/delkey", "def", nil); err != nil {
t.Fatal(err)
}
for i, f := range noRetryList(kapi) {
if err := f(); err != nil {
t.Errorf("#%d: unexpected retry failure (%v)", i, err)
}
}
}
type errHandler struct {
errCode int
reqs int
}
func (eh *errHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
req.Body.Close()
eh.reqs++
w.WriteHeader(eh.errCode)
}
func noRetryList(kapi client.KeysAPI) []func() error {
return []func() error{
func() error {
opts := &client.SetOptions{PrevExist: client.PrevNoExist}
_, err := kapi.Set(context.Background(), "/setkey", "bar", opts)
return err
},
func() error {
_, err := kapi.Delete(context.Background(), "/delkey", nil)
return err
},
}
}

17
client/integration/doc.go Normal file
View File

@ -0,0 +1,17 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package integration implements tests built upon embedded etcd, focusing on
// the correctness of the etcd v2 client.
package integration

View File

@ -0,0 +1,20 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package integration
import (
"os"
"testing"
"github.com/coreos/etcd/pkg/testutil"
)
func TestMain(m *testing.M) {
v := m.Run()
if v == 0 && testutil.CheckLeakedGoroutine() {
os.Exit(1)
}
os.Exit(v)
}

View File

@ -8,10 +8,11 @@ package client
import (
"errors"
"fmt"
codec1978 "github.com/ugorji/go/codec"
"reflect"
"runtime"
time "time"
codec1978 "github.com/ugorji/go/codec"
)
const (

View File

@ -191,6 +191,10 @@ type SetOptions struct {
// Dir specifies whether or not this Node should be created as a directory.
Dir bool
// NoValueOnSuccess specifies whether the response should omit the Node's
// current value: if set, the value is returned only when the request fails.
NoValueOnSuccess bool
}
type GetOptions struct {
@ -335,9 +339,14 @@ func (k *httpKeysAPI) Set(ctx context.Context, key, val string, opts *SetOptions
act.TTL = opts.TTL
act.Refresh = opts.Refresh
act.Dir = opts.Dir
act.NoValueOnSuccess = opts.NoValueOnSuccess
}
resp, body, err := k.client.Do(ctx, act)
doCtx := ctx
if act.PrevExist == PrevNoExist {
doCtx = context.WithValue(doCtx, &oneShotCtxValue, &oneShotCtxValue)
}
resp, body, err := k.client.Do(doCtx, act)
if err != nil {
return nil, err
}
@ -385,7 +394,8 @@ func (k *httpKeysAPI) Delete(ctx context.Context, key string, opts *DeleteOption
act.Recursive = opts.Recursive
}
resp, body, err := k.client.Do(ctx, act)
doCtx := context.WithValue(ctx, &oneShotCtxValue, &oneShotCtxValue)
resp, body, err := k.client.Do(doCtx, act)
if err != nil {
return nil, err
}
@ -518,15 +528,16 @@ func (w *waitAction) HTTPRequest(ep url.URL) *http.Request {
}
type setAction struct {
Prefix string
Key string
Value string
PrevValue string
PrevIndex uint64
PrevExist PrevExistType
TTL time.Duration
Refresh bool
Dir bool
Prefix string
Key string
Value string
PrevValue string
PrevIndex uint64
PrevExist PrevExistType
TTL time.Duration
Refresh bool
Dir bool
NoValueOnSuccess bool
}
func (a *setAction) HTTPRequest(ep url.URL) *http.Request {
@ -560,6 +571,9 @@ func (a *setAction) HTTPRequest(ep url.URL) *http.Request {
if a.Refresh {
form.Add("refresh", "true")
}
if a.NoValueOnSuccess {
params.Set("noValueOnSuccess", strconv.FormatBool(a.NoValueOnSuccess))
}
u.RawQuery = params.Encode()
body := strings.NewReader(form.Encode())

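A hedged usage sketch of the new SetOptions field through the v2 client; the endpoint and key are illustrative:

package main

import (
	"fmt"
	"log"

	"github.com/coreos/etcd/client"
	"golang.org/x/net/context"
)

func main() {
	cli, err := client.New(client.Config{Endpoints: []string{"http://127.0.0.1:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	kapi := client.NewKeysAPI(cli)
	// with NoValueOnSuccess, a successful response omits the value; this
	// helps when values are large or read permission on the key is narrow
	resp, err := kapi.Set(context.Background(), "/foo", "bar",
		&client.SetOptions{NoValueOnSuccess: true})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("value in response: %q\n", resp.Node.Value) // expected to be empty
}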
View File

@ -407,6 +407,15 @@ func TestSetAction(t *testing.T) {
wantURL: "http://example.com/foo?dir=true",
wantBody: "",
},
// NoValueOnSuccess is set
{
act: setAction{
Key: "foo",
NoValueOnSuccess: true,
},
wantURL: "http://example.com/foo?noValueOnSuccess=true",
wantBody: "value=",
},
}
for i, tt := range tests {

View File

@ -14,6 +14,20 @@
package client
import (
"regexp"
)
var (
roleNotFoundRegExp *regexp.Regexp
userNotFoundRegExp *regexp.Regexp
)
func init() {
roleNotFoundRegExp = regexp.MustCompile("auth: Role .* does not exist.")
userNotFoundRegExp = regexp.MustCompile("auth: User .* does not exist.")
}
// IsKeyNotFound returns true if the error code is ErrorCodeKeyNotFound.
func IsKeyNotFound(err error) bool {
if cErr, ok := err.(Error); ok {
@ -21,3 +35,19 @@ func IsKeyNotFound(err error) bool {
}
return false
}
// IsRoleNotFound returns true if the error means role not found of v2 API.
func IsRoleNotFound(err error) bool {
if ae, ok := err.(authError); ok {
return roleNotFoundRegExp.MatchString(ae.Message)
}
return false
}
// IsUserNotFound returns true if the error means user not found of v2 API.
func IsUserNotFound(err error) bool {
if ae, ok := err.(authError); ok {
return userNotFoundRegExp.MatchString(ae.Message)
}
return false
}

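Assuming the usual v2 auth API surface (NewAuthRoleAPI/GetRole), a caller might use the new predicates like this; the role name is illustrative:

package main

import (
	"fmt"
	"log"

	"github.com/coreos/etcd/client"
	"golang.org/x/net/context"
)

func main() {
	cli, err := client.New(client.Config{Endpoints: []string{"http://127.0.0.1:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	roles := client.NewAuthRoleAPI(cli)
	if _, err := roles.GetRole(context.Background(), "missing-role"); err != nil {
		if client.IsRoleNotFound(err) {
			fmt.Println("role does not exist; safe to create it")
		} else {
			log.Fatal(err)
		}
	}
}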
View File

@ -43,6 +43,7 @@ type (
AuthRoleListResponse pb.AuthRoleListResponse
PermissionType authpb.Permission_Type
Permission authpb.Permission
)
const (
@ -145,12 +146,12 @@ func (auth *auth) UserGrantRole(ctx context.Context, user string, role string) (
}
func (auth *auth) UserGet(ctx context.Context, name string) (*AuthUserGetResponse, error) {
resp, err := auth.remote.UserGet(ctx, &pb.AuthUserGetRequest{Name: name})
resp, err := auth.remote.UserGet(ctx, &pb.AuthUserGetRequest{Name: name}, grpc.FailFast(false))
return (*AuthUserGetResponse)(resp), toErr(ctx, err)
}
func (auth *auth) UserList(ctx context.Context) (*AuthUserListResponse, error) {
resp, err := auth.remote.UserList(ctx, &pb.AuthUserListRequest{})
resp, err := auth.remote.UserList(ctx, &pb.AuthUserListRequest{}, grpc.FailFast(false))
return (*AuthUserListResponse)(resp), toErr(ctx, err)
}
@ -175,12 +176,12 @@ func (auth *auth) RoleGrantPermission(ctx context.Context, name string, key, ran
}
func (auth *auth) RoleGet(ctx context.Context, role string) (*AuthRoleGetResponse, error) {
resp, err := auth.remote.RoleGet(ctx, &pb.AuthRoleGetRequest{Role: role})
resp, err := auth.remote.RoleGet(ctx, &pb.AuthRoleGetRequest{Role: role}, grpc.FailFast(false))
return (*AuthRoleGetResponse)(resp), toErr(ctx, err)
}
func (auth *auth) RoleList(ctx context.Context) (*AuthRoleListResponse, error) {
resp, err := auth.remote.RoleList(ctx, &pb.AuthRoleListRequest{})
resp, err := auth.remote.RoleList(ctx, &pb.AuthRoleListRequest{}, grpc.FailFast(false))
return (*AuthRoleListResponse)(resp), toErr(ctx, err)
}
@ -208,7 +209,7 @@ type authenticator struct {
}
func (auth *authenticator) authenticate(ctx context.Context, name string, password string) (*AuthenticateResponse, error) {
resp, err := auth.remote.Authenticate(ctx, &pb.AuthenticateRequest{Name: name, Password: password})
resp, err := auth.remote.Authenticate(ctx, &pb.AuthenticateRequest{Name: name, Password: password}, grpc.FailFast(false))
return (*AuthenticateResponse)(resp), toErr(ctx, err)
}

View File

@ -17,7 +17,7 @@ package clientv3
import (
"net/url"
"strings"
"sync/atomic"
"sync"
"golang.org/x/net/context"
"google.golang.org/grpc"
@ -26,32 +26,182 @@ import (
// simpleBalancer does the bare minimum to expose multiple eps
// to the grpc reconnection code path
type simpleBalancer struct {
// eps are the client's endpoints stripped of any URL scheme
eps []string
ch chan []grpc.Address
numGets uint32
// addrs are the client's endpoints for grpc
addrs []grpc.Address
// notifyCh notifies grpc of the set of addresses for connecting
notifyCh chan []grpc.Address
// readyc closes once the first connection is up
readyc chan struct{}
readyOnce sync.Once
// mu protects upEps, pinAddr, and connectingAddr
mu sync.RWMutex
// upEps holds the current endpoints that have an active connection
upEps map[string]struct{}
// upc closes when upEps transitions from empty to non-zero or the balancer closes.
upc chan struct{}
// grpc issues TLS cert checks using the string passed into dial so
// that string must be the host. To recover the full scheme://host URL,
// have a map from hosts to the original endpoint.
host2ep map[string]string
// pinAddr is the currently pinned address; set to the empty string on
// initialization and shutdown.
pinAddr string
closed bool
}
func newSimpleBalancer(eps []string) grpc.Balancer {
ch := make(chan []grpc.Address, 1)
func newSimpleBalancer(eps []string) *simpleBalancer {
notifyCh := make(chan []grpc.Address, 1)
addrs := make([]grpc.Address, len(eps))
for i := range eps {
addrs[i].Addr = getHost(eps[i])
}
ch <- addrs
return &simpleBalancer{eps: eps, ch: ch}
notifyCh <- addrs
sb := &simpleBalancer{
addrs: addrs,
notifyCh: notifyCh,
readyc: make(chan struct{}),
upEps: make(map[string]struct{}),
upc: make(chan struct{}),
host2ep: getHost2ep(eps),
}
return sb
}
func (b *simpleBalancer) Start(target string) error { return nil }
func (b *simpleBalancer) Up(addr grpc.Address) func(error) { return func(error) {} }
func (b *simpleBalancer) Get(ctx context.Context, opts grpc.BalancerGetOptions) (grpc.Address, func(), error) {
v := atomic.AddUint32(&b.numGets, 1)
ep := b.eps[v%uint32(len(b.eps))]
return grpc.Address{Addr: getHost(ep)}, func() {}, nil
func (b *simpleBalancer) Start(target string, config grpc.BalancerConfig) error { return nil }
func (b *simpleBalancer) ConnectNotify() <-chan struct{} {
b.mu.Lock()
defer b.mu.Unlock()
return b.upc
}
func (b *simpleBalancer) Notify() <-chan []grpc.Address { return b.ch }
func (b *simpleBalancer) getEndpoint(host string) string {
b.mu.Lock()
defer b.mu.Unlock()
return b.host2ep[host]
}
func getHost2ep(eps []string) map[string]string {
hm := make(map[string]string, len(eps))
for i := range eps {
_, host, _ := parseEndpoint(eps[i])
hm[host] = eps[i]
}
return hm
}
func (b *simpleBalancer) updateAddrs(eps []string) {
np := getHost2ep(eps)
b.mu.Lock()
defer b.mu.Unlock()
match := len(np) == len(b.host2ep)
for k, v := range np {
if b.host2ep[k] != v {
match = false
break
}
}
if match {
// same endpoints, so no need to update address
return
}
b.host2ep = np
addrs := make([]grpc.Address, 0, len(eps))
for i := range eps {
addrs = append(addrs, grpc.Address{Addr: getHost(eps[i])})
}
b.addrs = addrs
b.notifyCh <- addrs
}
func (b *simpleBalancer) Up(addr grpc.Address) func(error) {
b.mu.Lock()
defer b.mu.Unlock()
// gRPC might call Up after it called Close. Check for that here at the
// application layer to "fix" it up; otherwise simpleBalancer might panic
// since b.upc is closed.
if b.closed {
return func(err error) {}
}
if len(b.upEps) == 0 {
// notify waiting Get()s and pin first connected address
close(b.upc)
b.pinAddr = addr.Addr
}
b.upEps[addr.Addr] = struct{}{}
// notify client that a connection is up
b.readyOnce.Do(func() { close(b.readyc) })
return func(err error) {
b.mu.Lock()
delete(b.upEps, addr.Addr)
if len(b.upEps) == 0 && b.pinAddr != "" {
b.upc = make(chan struct{})
} else if b.pinAddr == addr.Addr {
// choose new random up endpoint
for k := range b.upEps {
b.pinAddr = k
break
}
}
b.mu.Unlock()
}
}
func (b *simpleBalancer) Get(ctx context.Context, opts grpc.BalancerGetOptions) (grpc.Address, func(), error) {
var addr string
for {
b.mu.RLock()
ch := b.upc
b.mu.RUnlock()
select {
case <-ch:
case <-ctx.Done():
return grpc.Address{Addr: ""}, nil, ctx.Err()
}
b.mu.RLock()
addr = b.pinAddr
upEps := len(b.upEps)
b.mu.RUnlock()
if addr == "" {
return grpc.Address{Addr: ""}, nil, grpc.ErrClientConnClosing
}
if upEps > 0 {
break
}
}
return grpc.Address{Addr: addr}, func() {}, nil
}
func (b *simpleBalancer) Notify() <-chan []grpc.Address { return b.notifyCh }
func (b *simpleBalancer) Close() error {
close(b.ch)
b.mu.Lock()
defer b.mu.Unlock()
// Guard against gRPC calling Close twice. TODO: remove this check
// once it is certain gRPC won't call Close twice.
if b.closed {
return nil
}
b.closed = true
close(b.notifyCh)
// terminate all waiting Get()s
b.pinAddr = ""
if len(b.upEps) == 0 {
close(b.upc)
}
return nil
}

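Stripped of the gRPC plumbing, the balancer's pinning policy is simple: pin the first address that comes up, and repin to any remaining live address when the pinned one goes down. A toy model of just that policy (not the etcd implementation):

package main

import (
	"fmt"
	"sync"
)

type pinner struct {
	mu  sync.Mutex
	up  map[string]struct{}
	pin string
}

func newPinner() *pinner { return &pinner{up: make(map[string]struct{})} }

// Up marks addr as live and returns the callback to run when it goes down.
func (p *pinner) Up(addr string) (down func()) {
	p.mu.Lock()
	if len(p.up) == 0 {
		p.pin = addr // first address up gets pinned
	}
	p.up[addr] = struct{}{}
	p.mu.Unlock()
	return func() {
		p.mu.Lock()
		delete(p.up, addr)
		if p.pin == addr {
			p.pin = "" // repin to any remaining live address
			for k := range p.up {
				p.pin = k
				break
			}
		}
		p.mu.Unlock()
	}
}

func main() {
	p := newPinner()
	downA := p.Up("a:2379")
	p.Up("b:2379")
	fmt.Println(p.pin) // a:2379 stays pinned while it is up
	downA()
	fmt.Println(p.pin) // b:2379 after a:2379 goes down
}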
View File

@ -18,8 +18,6 @@ import (
"crypto/tls"
"errors"
"fmt"
"io/ioutil"
"log"
"net"
"net/url"
"strings"
@ -29,6 +27,7 @@ import (
"golang.org/x/net/context"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/metadata"
)
@ -46,9 +45,11 @@ type Client struct {
Auth
Maintenance
conn *grpc.ClientConn
cfg Config
creds *credentials.TransportCredentials
conn *grpc.ClientConn
cfg Config
creds *credentials.TransportCredentials
balancer *simpleBalancer
retryWrapper retryRpcFunc
ctx context.Context
cancel context.CancelFunc
@ -96,6 +97,44 @@ func (c *Client) Ctx() context.Context { return c.ctx }
// Endpoints lists the registered endpoints for the client.
func (c *Client) Endpoints() []string { return c.cfg.Endpoints }
// SetEndpoints updates client's endpoints.
func (c *Client) SetEndpoints(eps ...string) {
c.cfg.Endpoints = eps
c.balancer.updateAddrs(eps)
}
// Sync synchronizes client's endpoints with the known endpoints from the etcd membership.
func (c *Client) Sync(ctx context.Context) error {
mresp, err := c.MemberList(ctx)
if err != nil {
return err
}
var eps []string
for _, m := range mresp.Members {
eps = append(eps, m.ClientURLs...)
}
c.SetEndpoints(eps...)
return nil
}
func (c *Client) autoSync() {
if c.cfg.AutoSyncInterval == time.Duration(0) {
return
}
for {
select {
case <-c.ctx.Done():
return
case <-time.After(c.cfg.AutoSyncInterval):
ctx, _ := context.WithTimeout(c.ctx, 5*time.Second)
if err := c.Sync(ctx); err != nil && err != c.ctx.Err() {
logger.Println("Auto sync endpoints failed:", err)
}
}
}
}
type authTokenCredential struct {
token string
}
@ -110,19 +149,31 @@ func (cred authTokenCredential) GetRequestMetadata(ctx context.Context, s ...str
}, nil
}
func (c *Client) dialTarget(endpoint string) (proto string, host string, creds *credentials.TransportCredentials) {
func parseEndpoint(endpoint string) (proto string, host string, scheme string) {
proto = "tcp"
host = endpoint
creds = c.creds
url, uerr := url.Parse(endpoint)
if uerr != nil || !strings.Contains(endpoint, "://") {
return
}
scheme = url.Scheme
// strip scheme:// prefix since grpc dials by host
host = url.Host
switch url.Scheme {
case "http", "https":
case "unix":
proto = "unix"
default:
proto, host = "", ""
}
return
}
func (c *Client) processCreds(scheme string) (creds *credentials.TransportCredentials) {
creds = c.creds
switch scheme {
case "unix":
case "http":
creds = nil
case "https":
@ -133,30 +184,20 @@ func (c *Client) dialTarget(endpoint string) (proto string, host string, creds *
emptyCreds := credentials.NewTLS(tlsconfig)
creds = &emptyCreds
default:
return "", "", nil
creds = nil
}
return
}
// dialSetupOpts gives the dial opts prioer to any authentication
func (c *Client) dialSetupOpts(endpoint string, dopts ...grpc.DialOption) []grpc.DialOption {
opts := []grpc.DialOption{
grpc.WithBlock(),
grpc.WithTimeout(c.cfg.DialTimeout),
// dialSetupOpts gives the dial opts prior to any authentication
func (c *Client) dialSetupOpts(endpoint string, dopts ...grpc.DialOption) (opts []grpc.DialOption) {
if c.cfg.DialTimeout > 0 {
opts = []grpc.DialOption{grpc.WithTimeout(c.cfg.DialTimeout)}
}
opts = append(opts, dopts...)
// grpc issues TLS cert checks using the string passed into dial so
// that string must be the host. To recover the full scheme://host URL,
// have a map from hosts to the original endpoint.
host2ep := make(map[string]string)
for i := range c.cfg.Endpoints {
_, host, _ := c.dialTarget(c.cfg.Endpoints[i])
host2ep[host] = c.cfg.Endpoints[i]
}
f := func(host string, t time.Duration) (net.Conn, error) {
proto, host, _ := c.dialTarget(host2ep[host])
proto, host, _ := parseEndpoint(c.balancer.getEndpoint(host))
if proto == "" {
return nil, fmt.Errorf("unknown scheme for %q", host)
}
@ -169,7 +210,10 @@ func (c *Client) dialSetupOpts(endpoint string, dopts ...grpc.DialOption) []grpc
}
opts = append(opts, grpc.WithDialer(f))
_, _, creds := c.dialTarget(endpoint)
creds := c.creds
if _, _, scheme := parseEndpoint(endpoint); len(scheme) != 0 {
creds = c.processCreds(scheme)
}
if creds != nil {
opts = append(opts, grpc.WithTransportCredentials(*creds))
} else {
@ -240,12 +284,30 @@ func newClient(cfg *Config) (*Client, error) {
client.Password = cfg.Password
}
b := newSimpleBalancer(cfg.Endpoints)
conn, err := client.dial(cfg.Endpoints[0], grpc.WithBalancer(b))
client.balancer = newSimpleBalancer(cfg.Endpoints)
conn, err := client.dial(cfg.Endpoints[0], grpc.WithBalancer(client.balancer))
if err != nil {
return nil, err
}
client.conn = conn
client.retryWrapper = client.newRetryWrapper()
// wait for a connection
if cfg.DialTimeout > 0 {
hasConn := false
waitc := time.After(cfg.DialTimeout)
select {
case <-client.balancer.readyc:
hasConn = true
case <-ctx.Done():
case <-waitc:
}
if !hasConn {
client.cancel()
conn.Close()
return nil, grpc.ErrClientConnTimeout
}
}
client.Cluster = NewCluster(client)
client.KV = NewKV(client)
@ -253,13 +315,8 @@ func newClient(cfg *Config) (*Client, error) {
client.Watcher = NewWatcher(client)
client.Auth = NewAuth(client)
client.Maintenance = NewMaintenance(client)
if cfg.Logger != nil {
logger.Set(cfg.Logger)
} else {
// disable client side grpc by default
logger.Set(log.New(ioutil.Discard, "", 0))
}
go client.autoSync()
return client, nil
}
@ -275,8 +332,14 @@ func isHaltErr(ctx context.Context, err error) bool {
if err == nil {
return false
}
return strings.HasPrefix(grpc.ErrorDesc(err), "etcdserver: ") ||
strings.Contains(err.Error(), grpc.ErrClientConnClosing.Error())
code := grpc.Code(err)
// Unavailable codes mean the system will be right back.
// (e.g., can't connect, lost leader)
// Treat Internal codes as if something failed, leaving the
// system in an inconsistent state, but retrying could make progress.
// (e.g., failed in middle of send, corrupted frame)
// TODO: are permanent Internal errors possible from grpc?
return code != codes.Unavailable && code != codes.Internal
}
func toErr(ctx context.Context, err error) error {
@ -284,9 +347,20 @@ func toErr(ctx context.Context, err error) error {
return nil
}
err = rpctypes.Error(err)
if ctx.Err() != nil && strings.Contains(err.Error(), "context") {
err = ctx.Err()
} else if strings.Contains(err.Error(), grpc.ErrClientConnClosing.Error()) {
if _, ok := err.(rpctypes.EtcdError); ok {
return err
}
code := grpc.Code(err)
switch code {
case codes.DeadlineExceeded:
fallthrough
case codes.Canceled:
if ctx.Err() != nil {
err = ctx.Err()
}
case codes.Unavailable:
err = ErrNoAvailableEndpoints
case codes.FailedPrecondition:
err = grpc.ErrClientConnClosing
}
return err

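A hedged usage sketch of the new endpoint-management surface; the endpoints and intervals are illustrative:

package main

import (
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
	"golang.org/x/net/context"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:        []string{"127.0.0.1:2379"},
		DialTimeout:      5 * time.Second,
		AutoSyncInterval: 5 * time.Minute, // 0 disables auto-sync
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// swap in a new endpoint list at runtime
	cli.SetEndpoints("127.0.0.1:22379", "127.0.0.1:32379")

	// or pull the current member list from the cluster once
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	if err := cli.Sync(ctx); err != nil {
		log.Println("sync failed:", err)
	}
	cancel()
}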
View File

@ -19,11 +19,15 @@ import (
"testing"
"time"
"github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
"github.com/coreos/etcd/pkg/testutil"
"golang.org/x/net/context"
"google.golang.org/grpc"
)
func TestDialTimeout(t *testing.T) {
defer testutil.AfterTest(t)
donec := make(chan error)
go func() {
// without timeout, grpc keeps redialing if connection refused
@ -55,9 +59,24 @@ func TestDialTimeout(t *testing.T) {
}
}
func TestDialNoTimeout(t *testing.T) {
cfg := Config{Endpoints: []string{"127.0.0.1:12345"}}
c, err := New(cfg)
if c == nil || err != nil {
t.Fatalf("new client with no dial timeout should succeed, got %v", err)
}
c.Close()
}
func TestIsHaltErr(t *testing.T) {
if !isHaltErr(nil, fmt.Errorf("etcdserver: some etcdserver error")) {
t.Errorf(`error prefixed with "etcdserver: " should be Halted`)
t.Errorf(`error prefixed with "etcdserver: " should be Halted by default`)
}
if isHaltErr(nil, rpctypes.ErrGRPCStopped) {
t.Errorf("error %v should not halt", rpctypes.ErrGRPCStopped)
}
if isHaltErr(nil, rpctypes.ErrGRPCNoLeader) {
t.Errorf("error %v should not halt", rpctypes.ErrGRPCNoLeader)
}
ctx, cancel := context.WithCancel(context.TODO())
if isHaltErr(ctx, nil) {

View File

@ -17,6 +17,7 @@ package clientv3
import (
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"golang.org/x/net/context"
"google.golang.org/grpc"
)
type (
@ -46,7 +47,7 @@ type cluster struct {
}
func NewCluster(c *Client) Cluster {
return &cluster{remote: pb.NewClusterClient(c.conn)}
return &cluster{remote: RetryClusterClient(c)}
}
func (c *cluster) MemberAdd(ctx context.Context, peerAddrs []string) (*MemberAddResponse, error) {
@ -90,7 +91,7 @@ func (c *cluster) MemberUpdate(ctx context.Context, id uint64, peerAddrs []strin
func (c *cluster) MemberList(ctx context.Context) (*MemberListResponse, error) {
// it is safe to retry on list.
for {
resp, err := c.remote.MemberList(ctx, &pb.MemberListRequest{})
resp, err := c.remote.MemberList(ctx, &pb.MemberListRequest{}, grpc.FailFast(false))
if err == nil {
return (*MemberListResponse)(resp), nil
}

View File

@ -29,7 +29,7 @@ var (
)
type Election struct {
client *v3.Client
session *Session
keyPrefix string
@ -39,27 +39,24 @@ type Election struct {
}
// NewElection returns a new election on a given key prefix.
func NewElection(client *v3.Client, pfx string) *Election {
return &Election{client: client, keyPrefix: pfx}
func NewElection(s *Session, pfx string) *Election {
return &Election{session: s, keyPrefix: pfx + "/"}
}
// Campaign puts a value as eligible for the election. It blocks until
// it is elected, an error occurs, or the context is cancelled.
func (e *Election) Campaign(ctx context.Context, val string) error {
s, serr := NewSession(e.client)
if serr != nil {
return serr
}
s := e.session
client := e.session.Client()
k := fmt.Sprintf("%s/%x", e.keyPrefix, s.Lease())
txn := e.client.Txn(ctx).If(v3.Compare(v3.CreateRevision(k), "=", 0))
k := fmt.Sprintf("%s%x", e.keyPrefix, s.Lease())
txn := client.Txn(ctx).If(v3.Compare(v3.CreateRevision(k), "=", 0))
txn = txn.Then(v3.OpPut(k, val, v3.WithLease(s.Lease())))
txn = txn.Else(v3.OpGet(k))
resp, err := txn.Commit()
if err != nil {
return err
}
e.leaderKey, e.leaderRev, e.leaderSession = k, resp.Header.Revision, s
if !resp.Succeeded {
kv := resp.Responses[0].GetResponseRange().Kvs[0]
@ -72,12 +69,12 @@ func (e *Election) Campaign(ctx context.Context, val string) error {
}
}
err = waitDeletes(ctx, e.client, e.keyPrefix, v3.WithPrefix(), v3.WithRev(e.leaderRev-1))
err = waitDeletes(ctx, client, e.keyPrefix, e.leaderRev-1)
if err != nil {
// clean up in case of context cancel
select {
case <-ctx.Done():
e.Resign(e.client.Ctx())
e.Resign(client.Ctx())
default:
e.leaderSession = nil
}
@ -92,8 +89,9 @@ func (e *Election) Proclaim(ctx context.Context, val string) error {
if e.leaderSession == nil {
return ErrElectionNotLeader
}
client := e.session.Client()
cmp := v3.Compare(v3.CreateRevision(e.leaderKey), "=", e.leaderRev)
txn := e.client.Txn(ctx).If(cmp)
txn := client.Txn(ctx).If(cmp)
txn = txn.Then(v3.OpPut(e.leaderKey, val, v3.WithLease(e.leaderSession.Lease())))
tresp, terr := txn.Commit()
if terr != nil {
@ -111,7 +109,8 @@ func (e *Election) Resign(ctx context.Context) (err error) {
if e.leaderSession == nil {
return nil
}
_, err = e.client.Delete(ctx, e.leaderKey)
client := e.session.Client()
_, err = client.Delete(ctx, e.leaderKey)
e.leaderKey = ""
e.leaderSession = nil
return err
@ -119,7 +118,8 @@ func (e *Election) Resign(ctx context.Context) (err error) {
// Leader returns the leader value for the current election.
func (e *Election) Leader(ctx context.Context) (string, error) {
resp, err := e.client.Get(ctx, e.keyPrefix, v3.WithFirstCreate()...)
client := e.session.Client()
resp, err := client.Get(ctx, e.keyPrefix, v3.WithFirstCreate()...)
if err != nil {
return "", err
} else if len(resp.Kvs) == 0 {
@ -139,9 +139,11 @@ func (e *Election) Observe(ctx context.Context) <-chan v3.GetResponse {
}
func (e *Election) observe(ctx context.Context, ch chan<- v3.GetResponse) {
client := e.session.Client()
defer close(ch)
for {
resp, err := e.client.Get(ctx, e.keyPrefix, v3.WithFirstCreate()...)
resp, err := client.Get(ctx, e.keyPrefix, v3.WithFirstCreate()...)
if err != nil {
return
}
@ -152,7 +154,7 @@ func (e *Election) observe(ctx context.Context, ch chan<- v3.GetResponse) {
if len(resp.Kvs) == 0 {
// wait for first key put on prefix
opts := []v3.OpOption{v3.WithRev(resp.Header.Revision), v3.WithPrefix()}
wch := e.client.Watch(cctx, e.keyPrefix, opts...)
wch := client.Watch(cctx, e.keyPrefix, opts...)
for kv == nil {
wr, ok := <-wch
@ -172,7 +174,7 @@ func (e *Election) observe(ctx context.Context, ch chan<- v3.GetResponse) {
kv = resp.Kvs[0]
}
wch := e.client.Watch(cctx, string(kv.Key), v3.WithRev(kv.ModRevision))
wch := client.Watch(cctx, string(kv.Key), v3.WithRev(kv.ModRevision))
keyDeleted := false
for !keyDeleted {
wr, ok := <-wch

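With Election now built on an explicit Session, a minimal campaign looks roughly like the following; the endpoint, prefix, and value are illustrative:

package main

import (
	"log"

	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/concurrency"
	"golang.org/x/net/context"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"127.0.0.1:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// one session (one lease) can now back many elections and mutexes
	s, err := concurrency.NewSession(cli)
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	e := concurrency.NewElection(s, "/my-election")
	if err := e.Campaign(context.TODO(), "candidate-1"); err != nil {
		log.Fatal(err)
	}
	// ... act as leader ...
	if err := e.Resign(context.TODO()); err != nil {
		log.Fatal(err)
	}
}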
View File

@ -16,7 +16,6 @@ package concurrency
import (
"fmt"
"math"
v3 "github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/mvcc/mvccpb"
@ -26,46 +25,40 @@ import (
func waitDelete(ctx context.Context, client *v3.Client, key string, rev int64) error {
cctx, cancel := context.WithCancel(ctx)
defer cancel()
var wr v3.WatchResponse
wch := client.Watch(cctx, key, v3.WithRev(rev))
for wr := range wch {
for wr = range wch {
for _, ev := range wr.Events {
if ev.Type == mvccpb.DELETE {
return nil
}
}
}
if err := wr.Err(); err != nil {
return err
}
if err := ctx.Err(); err != nil {
return err
}
return fmt.Errorf("lost watcher waiting for delete")
}
// waitDeletes efficiently waits until all keys matched by Get(key, opts...) are deleted
func waitDeletes(ctx context.Context, client *v3.Client, key string, opts ...v3.OpOption) error {
getOpts := []v3.OpOption{v3.WithSort(v3.SortByCreateRevision, v3.SortAscend)}
getOpts = append(getOpts, opts...)
resp, err := client.Get(ctx, key, getOpts...)
maxRev := int64(math.MaxInt64)
getOpts = append(getOpts, v3.WithRev(0))
for err == nil {
for len(resp.Kvs) > 0 {
i := len(resp.Kvs) - 1
if resp.Kvs[i].CreateRevision <= maxRev {
break
}
resp.Kvs = resp.Kvs[:i]
// waitDeletes efficiently waits until all keys matching the prefix with a
// create revision no greater than maxCreateRev are deleted.
func waitDeletes(ctx context.Context, client *v3.Client, pfx string, maxCreateRev int64) error {
getOpts := append(v3.WithLastCreate(), v3.WithMaxCreateRev(maxCreateRev))
for {
resp, err := client.Get(ctx, pfx, getOpts...)
if err != nil {
return err
}
if len(resp.Kvs) == 0 {
break
return nil
}
lastKV := resp.Kvs[len(resp.Kvs)-1]
maxRev = lastKV.CreateRevision
err = waitDelete(ctx, client, string(lastKV.Key), maxRev)
if err != nil || len(resp.Kvs) == 1 {
break
lastKey := string(resp.Kvs[0].Key)
if err = waitDelete(ctx, client, lastKey, resp.Header.Revision); err != nil {
return err
}
getOpts = append(getOpts, v3.WithLimit(int64(len(resp.Kvs)-1)))
resp, err = client.Get(ctx, key, getOpts...)
}
return err
}

View File

@ -24,32 +24,30 @@ import (
// Mutex implements the sync Locker interface with etcd
type Mutex struct {
client *v3.Client
s *Session
pfx string
myKey string
myRev int64
}
func NewMutex(client *v3.Client, pfx string) *Mutex {
return &Mutex{client, pfx, "", -1}
func NewMutex(s *Session, pfx string) *Mutex {
return &Mutex{s, pfx + "/", "", -1}
}
// Lock locks the mutex with a cancellable context. If the context is cancelled
// while trying to acquire the lock, the mutex tries to clean its stale lock entry.
func (m *Mutex) Lock(ctx context.Context) error {
s, serr := NewSession(m.client)
if serr != nil {
return serr
}
s := m.s
client := m.s.Client()
m.myKey = fmt.Sprintf("%s/%x", m.pfx, s.Lease())
m.myKey = fmt.Sprintf("%s%x", m.pfx, s.Lease())
cmp := v3.Compare(v3.CreateRevision(m.myKey), "=", 0)
// put self in lock waiters via myKey; oldest waiter holds lock
put := v3.OpPut(m.myKey, "", v3.WithLease(s.Lease()))
// reuse key in case this session already holds the lock
get := v3.OpGet(m.myKey)
resp, err := m.client.Txn(ctx).If(cmp).Then(put).Else(get).Commit()
resp, err := client.Txn(ctx).If(cmp).Then(put).Else(get).Commit()
if err != nil {
return err
}
@ -59,18 +57,19 @@ func (m *Mutex) Lock(ctx context.Context) error {
}
// wait for deletion of waiters' keys with create revisions prior to myKey
err = waitDeletes(ctx, m.client, m.pfx, v3.WithPrefix(), v3.WithRev(m.myRev-1))
err = waitDeletes(ctx, client, m.pfx, m.myRev-1)
// release lock key if cancelled
select {
case <-ctx.Done():
m.Unlock(m.client.Ctx())
m.Unlock(client.Ctx())
default:
}
return err
}
func (m *Mutex) Unlock(ctx context.Context) error {
if _, err := m.client.Delete(ctx, m.myKey); err != nil {
client := m.s.Client()
if _, err := client.Delete(ctx, m.myKey); err != nil {
return err
}
m.myKey = "\x00"
@ -87,17 +86,19 @@ func (m *Mutex) Key() string { return m.myKey }
type lockerMutex struct{ *Mutex }
func (lm *lockerMutex) Lock() {
if err := lm.Mutex.Lock(lm.client.Ctx()); err != nil {
client := lm.s.Client()
if err := lm.Mutex.Lock(client.Ctx()); err != nil {
panic(err)
}
}
func (lm *lockerMutex) Unlock() {
if err := lm.Mutex.Unlock(lm.client.Ctx()); err != nil {
client := lm.s.Client()
if err := lm.Mutex.Unlock(client.Ctx()); err != nil {
panic(err)
}
}
// NewLocker creates a sync.Locker backed by an etcd mutex.
func NewLocker(client *v3.Client, pfx string) sync.Locker {
return &lockerMutex{NewMutex(client, pfx)}
func NewLocker(s *Session, pfx string) sync.Locker {
return &lockerMutex{NewMutex(s, pfx)}
}

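Locking follows the same pattern under the new signature; a sketch with an illustrative prefix:

package main

import (
	"log"

	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/concurrency"
	"golang.org/x/net/context"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"127.0.0.1:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	s, err := concurrency.NewSession(cli)
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	m := concurrency.NewMutex(s, "/my-lock")
	if err := m.Lock(context.TODO()); err != nil {
		log.Fatal(err)
	}
	// ... critical section; the lock dies with the session's lease ...
	if err := m.Unlock(context.TODO()); err != nil {
		log.Fatal(err)
	}
}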
View File

@ -15,21 +15,11 @@
package concurrency
import (
"sync"
v3 "github.com/coreos/etcd/clientv3"
"golang.org/x/net/context"
)
// only keep one ephemeral lease per client
var clientSessions clientSessionMgr = clientSessionMgr{sessions: make(map[*v3.Client]*Session)}
const sessionTTL = 60
type clientSessionMgr struct {
sessions map[*v3.Client]*Session
mu sync.Mutex
}
const defaultSessionTTL = 60
// Session represents a lease kept alive for the lifetime of a client.
// Fault-tolerant applications may use sessions to reason about liveness.
@ -42,14 +32,13 @@ type Session struct {
}
// NewSession gets the leased session for a client.
func NewSession(client *v3.Client) (*Session, error) {
clientSessions.mu.Lock()
defer clientSessions.mu.Unlock()
if s, ok := clientSessions.sessions[client]; ok {
return s, nil
func NewSession(client *v3.Client, opts ...SessionOption) (*Session, error) {
ops := &sessionOptions{ttl: defaultSessionTTL}
for _, opt := range opts {
opt(ops)
}
resp, err := client.Grant(client.Ctx(), sessionTTL)
resp, err := client.Grant(client.Ctx(), int64(ops.ttl))
if err != nil {
return nil, err
}
@ -63,16 +52,10 @@ func NewSession(client *v3.Client) (*Session, error) {
donec := make(chan struct{})
s := &Session{client: client, id: id, cancel: cancel, donec: donec}
clientSessions.sessions[client] = s
// keep the lease alive until client error or cancelled context
go func() {
defer func() {
clientSessions.mu.Lock()
delete(clientSessions.sessions, client)
clientSessions.mu.Unlock()
close(donec)
}()
defer close(donec)
for range keepAlive {
// eat messages until keep alive channel closes
}
@ -81,6 +64,11 @@ func NewSession(client *v3.Client) (*Session, error) {
return s, nil
}
// Client is the etcd client that is attached to the session.
func (s *Session) Client() *v3.Client {
return s.client
}
// Lease is the lease ID for keys bound to the session.
func (s *Session) Lease() v3.LeaseID { return s.id }
@ -102,3 +90,20 @@ func (s *Session) Close() error {
_, err := s.client.Revoke(s.client.Ctx(), s.id)
return err
}
type sessionOptions struct {
ttl int
}
// SessionOption configures Session.
type SessionOption func(*sessionOptions)
// WithTTL configures the session's TTL in seconds.
// If TTL is <= 0, the default 60 seconds TTL will be used.
func WithTTL(ttl int) SessionOption {
return func(so *sessionOptions) {
if ttl > 0 {
so.ttl = ttl
}
}
}

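The option pattern keeps 60 seconds as the default while letting callers trade keep-alive traffic against failure-detection latency. A short sketch with an illustrative TTL:

package main

import (
	"fmt"
	"log"

	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/concurrency"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"127.0.0.1:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// a 15-second TTL: locks and leadership bound to this session are
	// released sooner if the client dies without revoking the lease
	s, err := concurrency.NewSession(cli, concurrency.WithTTL(15))
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()
	fmt.Println("session lease:", s.Lease())
}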
View File

@ -28,15 +28,16 @@ type Config struct {
// Endpoints is a list of URLs
Endpoints []string
// AutoSyncInterval is the interval at which the endpoint list is updated
// with the cluster's latest members. 0 disables auto-sync (the default).
AutoSyncInterval time.Duration
// DialTimeout is the timeout for failing to establish a connection.
DialTimeout time.Duration
// TLS holds the client secure credentials, if any.
TLS *tls.Config
// Logger is the logger used by client library.
Logger Logger
// Username is a username for authentication
Username string
@ -46,6 +47,7 @@ type Config struct {
type yamlConfig struct {
Endpoints []string `json:"endpoints"`
AutoSyncInterval time.Duration `json:"auto-sync-interval"`
DialTimeout time.Duration `json:"dial-timeout"`
InsecureTransport bool `json:"insecure-transport"`
InsecureSkipTLSVerify bool `json:"insecure-skip-tls-verify"`
@ -68,8 +70,9 @@ func configFromFile(fpath string) (*Config, error) {
}
cfg := &Config{
Endpoints: yc.Endpoints,
DialTimeout: yc.DialTimeout,
Endpoints: yc.Endpoints,
AutoSyncInterval: yc.AutoSyncInterval,
DialTimeout: yc.DialTimeout,
}
if yc.InsecureTransport {

View File

@ -32,35 +32,63 @@ func ExampleAuth() {
}
defer cli.Close()
authapi := clientv3.NewAuth(cli)
if _, err = authapi.RoleAdd(context.TODO(), "root"); err != nil {
if _, err = cli.RoleAdd(context.TODO(), "root"); err != nil {
log.Fatal(err)
}
if _, err = cli.UserAdd(context.TODO(), "root", "123"); err != nil {
log.Fatal(err)
}
if _, err = cli.UserGrantRole(context.TODO(), "root", "root"); err != nil {
log.Fatal(err)
}
if _, err = authapi.RoleGrantPermission(
if _, err = cli.RoleAdd(context.TODO(), "r"); err != nil {
log.Fatal(err)
}
if _, err = cli.RoleGrantPermission(
context.TODO(),
"root", // role name
"foo", // key
"zoo", // range end
"r", // role name
"foo", // key
"zoo", // range end
clientv3.PermissionType(clientv3.PermReadWrite),
); err != nil {
log.Fatal(err)
}
if _, err = authapi.UserAdd(context.TODO(), "root", "123"); err != nil {
if _, err = cli.UserAdd(context.TODO(), "u", "123"); err != nil {
log.Fatal(err)
}
if _, err = authapi.UserGrantRole(context.TODO(), "root", "root"); err != nil {
if _, err = cli.UserGrantRole(context.TODO(), "u", "r"); err != nil {
log.Fatal(err)
}
if _, err = authapi.AuthEnable(context.TODO()); err != nil {
if _, err = cli.AuthEnable(context.TODO()); err != nil {
log.Fatal(err)
}
cliAuth, err := clientv3.New(clientv3.Config{
Endpoints: endpoints,
DialTimeout: dialTimeout,
Username: "u",
Password: "123",
})
if err != nil {
log.Fatal(err)
}
defer cliAuth.Close()
if _, err = cliAuth.Put(context.TODO(), "foo1", "bar"); err != nil {
log.Fatal(err)
}
_, err = cliAuth.Txn(context.TODO()).
If(clientv3.Compare(clientv3.Value("zoo1"), ">", "abc")).
Then(clientv3.OpPut("zoo1", "XYZ")).
Else(clientv3.OpPut("zoo1", "ABC")).
Commit()
fmt.Println(err)
// now check the permission with the root account
rootCli, err := clientv3.New(clientv3.Config{
Endpoints: endpoints,
DialTimeout: dialTimeout,
Username: "root",
@ -69,31 +97,17 @@ func ExampleAuth() {
if err != nil {
log.Fatal(err)
}
defer cliAuth.Close()
defer rootCli.Close()
kv := clientv3.NewKV(cliAuth)
if _, err = kv.Put(context.TODO(), "foo1", "bar"); err != nil {
log.Fatal(err)
}
_, err = kv.Txn(context.TODO()).
If(clientv3.Compare(clientv3.Value("zoo1"), ">", "abc")).
Then(clientv3.OpPut("zoo1", "XYZ")).
Else(clientv3.OpPut("zoo1", "ABC")).
Commit()
fmt.Println(err)
// now check the permission
authapi2 := clientv3.NewAuth(cliAuth)
resp, err := authapi2.RoleGet(context.TODO(), "root")
resp, err := rootCli.RoleGet(context.TODO(), "r")
if err != nil {
log.Fatal(err)
}
fmt.Printf("root user permission: key %q, range end %q\n", resp.Perm[0].Key, resp.Perm[0].RangeEnd)
fmt.Printf("user u permission: key %q, range end %q\n", resp.Perm[0].Key, resp.Perm[0].RangeEnd)
if _, err = authapi2.AuthDisable(context.TODO()); err != nil {
if _, err = rootCli.AuthDisable(context.TODO()); err != nil {
log.Fatal(err)
}
// Output: etcdserver: permission denied
// root user permission: key "foo", range end "zoo"
// user u permission: key "foo", range end "zoo"
}

View File

@ -210,7 +210,7 @@ func ExampleKV_compact() {
compRev := resp.Header.Revision // specify compact revision of your choice
ctx, cancel = context.WithTimeout(context.Background(), requestTimeout)
err = cli.Compact(ctx, compRev)
_, err = cli.Compact(ctx, compRev)
cancel()
if err != nil {
log.Fatal(err)

View File

@ -19,6 +19,8 @@ import (
"time"
"github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/pkg/transport"
"github.com/coreos/pkg/capnslog"
"golang.org/x/net/context"
)
@ -29,6 +31,9 @@ var (
)
func Example() {
var plog = capnslog.NewPackageLogger("github.com/coreos/etcd", "clientv3")
clientv3.SetLogger(plog)
cli, err := clientv3.New(clientv3.Config{
Endpoints: endpoints,
DialTimeout: dialTimeout,
@ -43,3 +48,29 @@ func Example() {
log.Fatal(err)
}
}
func ExampleConfig_withTLS() {
tlsInfo := transport.TLSInfo{
CertFile: "/tmp/test-certs/test-name-1.pem",
KeyFile: "/tmp/test-certs/test-name-1-key.pem",
TrustedCAFile: "/tmp/test-certs/trusted-ca.pem",
}
tlsConfig, err := tlsInfo.ClientConfig()
if err != nil {
log.Fatal(err)
}
cli, err := clientv3.New(clientv3.Config{
Endpoints: endpoints,
DialTimeout: dialTimeout,
TLS: tlsConfig,
})
if err != nil {
log.Fatal(err)
}
defer cli.Close() // make sure to close the client
_, err = cli.Put(context.TODO(), "foo", "bar")
if err != nil {
log.Fatal(err)
}
}

View File

@ -0,0 +1,60 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package integration
import (
"math/rand"
"testing"
"time"
"github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/integration"
"github.com/coreos/etcd/pkg/testutil"
"golang.org/x/net/context"
)
// TestDialSetEndpoints ensures SetEndpoints can replace unavailable endpoints with available ones.
func TestDialSetEndpoints(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 3})
defer clus.Terminate(t)
// get endpoint list
eps := make([]string, 3)
for i := range eps {
eps[i] = clus.Members[i].GRPCAddr()
}
toKill := rand.Intn(len(eps))
cfg := clientv3.Config{Endpoints: []string{eps[toKill]}, DialTimeout: 1 * time.Second}
cli, err := clientv3.New(cfg)
if err != nil {
t.Fatal(err)
}
defer cli.Close()
// make a dead node
clus.Members[toKill].Stop(t)
clus.WaitLeader(t)
// update client with available endpoints
cli.SetEndpoints(eps[(toKill+1)%3])
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
if _, err = cli.Get(ctx, "foo", clientv3.WithSerializable()); err != nil {
t.Fatal(err)
}
cancel()
}

View File

@ -16,6 +16,8 @@ package integration
import (
"bytes"
"math/rand"
"os"
"reflect"
"strings"
"testing"
@ -34,8 +36,8 @@ func TestKVPutError(t *testing.T) {
defer testutil.AfterTest(t)
var (
maxReqBytes = 1.5 * 1024 * 1024
quota = int64(maxReqBytes * 1.2)
maxReqBytes = 1.5 * 1024 * 1024 // hard coded max in v3_server.go
quota = int64(int(maxReqBytes) + 8*os.Getpagesize())
)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1, QuotaBackendBytes: quota})
defer clus.Terminate(t)
@ -48,7 +50,7 @@ func TestKVPutError(t *testing.T) {
t.Fatalf("expected %v, got %v", rpctypes.ErrEmptyKey, err)
}
_, err = kv.Put(ctx, "key", strings.Repeat("a", int(maxReqBytes+100))) // 1.5MB
_, err = kv.Put(ctx, "key", strings.Repeat("a", int(maxReqBytes+100)))
if err != rpctypes.ErrRequestTooLarge {
t.Fatalf("expected %v, got %v", rpctypes.ErrRequestTooLarge, err)
}
@ -58,7 +60,7 @@ func TestKVPutError(t *testing.T) {
t.Fatal(err)
}
time.Sleep(500 * time.Millisecond) // give enough time for commit
time.Sleep(1 * time.Second) // give enough time for commit
_, err = kv.Put(ctx, "foo2", strings.Repeat("a", int(maxReqBytes-50)))
if err != rpctypes.ErrNoSpace { // over quota
@ -470,17 +472,17 @@ func TestKVCompactError(t *testing.T) {
t.Fatalf("couldn't put 'foo' (%v)", err)
}
}
err := kv.Compact(ctx, 6)
_, err := kv.Compact(ctx, 6)
if err != nil {
t.Fatalf("couldn't compact 6 (%v)", err)
}
err = kv.Compact(ctx, 6)
_, err = kv.Compact(ctx, 6)
if err != rpctypes.ErrCompacted {
t.Fatalf("expected %v, got %v", rpctypes.ErrCompacted, err)
}
err = kv.Compact(ctx, 100)
_, err = kv.Compact(ctx, 100)
if err != rpctypes.ErrFutureRev {
t.Fatalf("expected %v, got %v", rpctypes.ErrFutureRev, err)
}
@ -501,11 +503,11 @@ func TestKVCompact(t *testing.T) {
}
}
err := kv.Compact(ctx, 7)
_, err := kv.Compact(ctx, 7)
if err != nil {
t.Fatalf("couldn't compact kv space (%v)", err)
}
err = kv.Compact(ctx, 7)
_, err = kv.Compact(ctx, 7)
if err == nil || err != rpctypes.ErrCompacted {
t.Fatalf("error got %v, want %v", err, rpctypes.ErrCompacted)
}
@ -525,7 +527,7 @@ func TestKVCompact(t *testing.T) {
t.Fatalf("wchan got %v, expected closed", wr)
}
err = kv.Compact(ctx, 1000)
_, err = kv.Compact(ctx, 1000)
if err == nil || err != rpctypes.ErrFutureRev {
t.Fatalf("error got %v, want %v", err, rpctypes.ErrFutureRev)
}
@ -647,18 +649,121 @@ func TestKVGetCancel(t *testing.T) {
}
}
// TestKVPutStoppedServerAndClose ensures closing after a failed Put works.
func TestKVPutStoppedServerAndClose(t *testing.T) {
// TestKVGetStoppedServerAndClose ensures closing after a failed Get works.
func TestKVGetStoppedServerAndClose(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
cli := clus.Client(0)
clus.Members[0].Stop(t)
ctx, cancel := context.WithTimeout(context.TODO(), time.Second)
// this Put fails and triggers an asynchronous connection retry
_, err := cli.Put(ctx, "abc", "123")
// this Get fails and triggers an asynchronous connection retry
_, err := cli.Get(ctx, "abc")
cancel()
if !strings.Contains(err.Error(), "context deadline") {
t.Fatal(err)
}
}
// TestKVPutStoppedServerAndClose ensures closing after a failed Put works.
func TestKVPutStoppedServerAndClose(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
cli := clus.Client(0)
clus.Members[0].Stop(t)
ctx, cancel := context.WithTimeout(context.TODO(), time.Second)
// Get retries on all errors, so use it here to eat the potential broken-pipe
// error for the next Put. The gRPC client might see a broken pipe error when
// it issues the Get before noticing the original connection is down due to
// the member shutdown.
_, err := cli.Get(ctx, "abc")
cancel()
if !strings.Contains(err.Error(), "context deadline") {
t.Fatal(err)
}
// this Put fails and triggers an asynchronous connection retry
ctx, cancel = context.WithTimeout(context.TODO(), time.Second)
_, err = cli.Put(ctx, "abc", "123")
cancel()
if !strings.Contains(err.Error(), "context deadline") {
t.Fatal(err)
}
}
// TestKVPutOneEndpointDown ensures a client can connect and issue a get even if one endpoint is down
func TestKVPutOneEndpointDown(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 3})
defer clus.Terminate(t)
// get endpoint list
eps := make([]string, 3)
for i := range eps {
eps[i] = clus.Members[i].GRPCAddr()
}
// make a dead node
clus.Members[rand.Intn(len(eps))].Stop(t)
// try to connect with dead node in the endpoint list
cfg := clientv3.Config{Endpoints: eps, DialTimeout: 1 * time.Second}
cli, err := clientv3.New(cfg)
if err != nil {
t.Fatal(err)
}
defer cli.Close()
ctx, cancel := context.WithTimeout(context.TODO(), 3*time.Second)
if _, err := cli.Get(ctx, "abc", clientv3.WithSerializable()); err != nil {
t.Fatal(err)
}
cancel()
}
// TestKVGetResetLoneEndpoint ensures that if an endpoint resets and all other
// endpoints are down, then it will reconnect.
func TestKVGetResetLoneEndpoint(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 2})
defer clus.Terminate(t)
// get endpoint list
eps := make([]string, 2)
for i := range eps {
eps[i] = clus.Members[i].GRPCAddr()
}
cfg := clientv3.Config{Endpoints: eps, DialTimeout: 500 * time.Millisecond}
cli, err := clientv3.New(cfg)
if err != nil {
t.Fatal(err)
}
defer cli.Close()
// disconnect everything
clus.Members[0].Stop(t)
clus.Members[1].Stop(t)
// have Get try to reconnect
donec := make(chan struct{})
go func() {
ctx, cancel := context.WithTimeout(context.TODO(), 5*time.Second)
if _, err := cli.Get(ctx, "abc", clientv3.WithSerializable()); err != nil {
t.Fatal(err)
}
cancel()
close(donec)
}()
time.Sleep(500 * time.Millisecond)
clus.Members[0].Restart(t)
select {
case <-time.After(10 * time.Second):
t.Fatalf("timed out waiting for Get")
case <-donec:
}
}

View File

@ -15,6 +15,8 @@
package integration
import (
"reflect"
"sort"
"testing"
"time"
@ -359,7 +361,8 @@ func TestLeaseKeepAliveCloseAfterDisconnectRevoke(t *testing.T) {
if kerr != nil {
t.Fatal(kerr)
}
if kresp := <-rc; kresp.ID != resp.ID {
kresp := <-rc
if kresp.ID != resp.ID {
t.Fatalf("ID = %x, want %x", kresp.ID, resp.ID)
}
@ -374,13 +377,14 @@ func TestLeaseKeepAliveCloseAfterDisconnectRevoke(t *testing.T) {
clus.Members[0].Restart(t)
select {
case ka, ok := <-rc:
if ok {
t.Fatalf("unexpected keepalive %v", ka)
// some keep-alives may still be buffered; drain until close
timer := time.After(time.Duration(kresp.TTL) * time.Second)
for kresp != nil {
select {
case kresp = <-rc:
case <-timer:
t.Fatalf("keepalive channel did not close")
}
case <-time.After(5 * time.Second):
t.Fatalf("keepalive channel did not close")
}
}
@ -453,3 +457,56 @@ func TestLeaseKeepAliveTTLTimeout(t *testing.T) {
clus.Members[0].Restart(t)
}
func TestLeaseTimeToLive(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 3})
defer clus.Terminate(t)
lapi := clientv3.NewLease(clus.RandClient())
defer lapi.Close()
resp, err := lapi.Grant(context.Background(), 10)
if err != nil {
t.Errorf("failed to create lease %v", err)
}
kv := clientv3.NewKV(clus.RandClient())
keys := []string{"foo1", "foo2"}
for i := range keys {
if _, err = kv.Put(context.TODO(), keys[i], "bar", clientv3.WithLease(resp.ID)); err != nil {
t.Fatal(err)
}
}
lresp, lerr := lapi.TimeToLive(context.Background(), resp.ID, clientv3.WithAttachedKeys())
if lerr != nil {
t.Fatal(lerr)
}
if lresp.ID != resp.ID {
t.Fatalf("leaseID expected %d, got %d", resp.ID, lresp.ID)
}
if lresp.GrantedTTL != int64(10) {
t.Fatalf("GrantedTTL expected %d, got %d", 10, lresp.GrantedTTL)
}
if lresp.TTL == 0 || lresp.TTL > lresp.GrantedTTL {
t.Fatalf("unexpected TTL %d (granted %d)", lresp.TTL, lresp.GrantedTTL)
}
ks := make([]string, len(lresp.Keys))
for i := range lresp.Keys {
ks[i] = string(lresp.Keys[i])
}
sort.Strings(ks)
if !reflect.DeepEqual(ks, keys) {
t.Fatalf("keys expected %v, got %v", keys, ks)
}
lresp, lerr = lapi.TimeToLive(context.Background(), resp.ID)
if lerr != nil {
t.Fatal(lerr)
}
if len(lresp.Keys) != 0 {
t.Fatalf("unexpected keys %+v", lresp.Keys)
}
}

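Outside the test harness, the new TimeToLive call reads roughly as follows; the endpoint, TTL, and key are illustrative:

package main

import (
	"fmt"
	"log"

	"github.com/coreos/etcd/clientv3"
	"golang.org/x/net/context"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"127.0.0.1:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	resp, err := cli.Grant(context.TODO(), 10)
	if err != nil {
		log.Fatal(err)
	}
	if _, err = cli.Put(context.TODO(), "foo", "bar", clientv3.WithLease(resp.ID)); err != nil {
		log.Fatal(err)
	}

	// WithAttachedKeys also reports every key bound to the lease
	lresp, err := cli.TimeToLive(context.TODO(), resp.ID, clientv3.WithAttachedKeys())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("lease %x: %d of %d seconds left, keys %q\n",
		lresp.ID, lresp.TTL, lresp.GrantedTTL, lresp.Keys)
}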
View File

@ -0,0 +1,21 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package integration
import "github.com/coreos/pkg/capnslog"
func init() {
capnslog.SetGlobalLogLevel(capnslog.INFO)
}

View File

@ -15,7 +15,9 @@
package integration
import (
"fmt"
"reflect"
"sync"
"testing"
"time"
@ -69,3 +71,55 @@ func TestMirrorSync(t *testing.T) {
t.Fatal("failed to receive update in one second")
}
}
func TestMirrorSyncBase(t *testing.T) {
cluster := integration.NewClusterV3(nil, &integration.ClusterConfig{Size: 1})
defer cluster.Terminate(nil)
cli := cluster.Client(0)
ctx := context.TODO()
keyCh := make(chan string)
var wg sync.WaitGroup
for i := 0; i < 50; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for key := range keyCh {
if _, err := cli.Put(ctx, key, "test"); err != nil {
t.Fatal(err)
}
}
}()
}
for i := 0; i < 2000; i++ {
keyCh <- fmt.Sprintf("test%d", i)
}
close(keyCh)
wg.Wait()
syncer := mirror.NewSyncer(cli, "test", 0)
respCh, errCh := syncer.SyncBase(ctx)
count := 0
for resp := range respCh {
count = count + len(resp.Kvs)
if !resp.More {
break
}
}
for err := range errCh {
t.Fatalf("unexpected error %v", err)
}
if count != 2000 {
t.Errorf("unexpected kv count: %d", count)
}
}

View File

@ -73,6 +73,7 @@ func TestTxnWriteFail(t *testing.T) {
}()
go func() {
defer close(getc)
select {
case <-time.After(5 * time.Second):
t.Fatalf("timed out waiting for txn fail")
@ -86,11 +87,10 @@ func TestTxnWriteFail(t *testing.T) {
if len(gresp.Kvs) != 0 {
t.Fatalf("expected no keys, got %v", gresp.Kvs)
}
close(getc)
}()
select {
case <-time.After(5 * time.Second):
case <-time.After(2 * clus.Members[1].ServerConfig.ReqTimeout()):
t.Fatalf("timed out waiting for get")
case <-getc:
}
@ -125,7 +125,7 @@ func TestTxnReadRetry(t *testing.T) {
clus.Members[0].Restart(t)
select {
case <-donec:
case <-time.After(5 * time.Second):
case <-time.After(2 * clus.Members[1].ServerConfig.ReqTimeout()):
t.Fatalf("waited too long")
}
}


@ -211,6 +211,14 @@ func testWatchReconnRequest(t *testing.T, wctx *watchctx) {
stopc <- struct{}{}
<-donec
// spinning on dropping connections may trigger a leader election
// due to resource starvation; l-read to ensure the cluster is stable
ctx, cancel := context.WithTimeout(context.TODO(), 30*time.Second)
if _, err := wctx.kv.Get(ctx, "_"); err != nil {
t.Fatal(err)
}
cancel()
// ensure watcher works
putAndWatch(t, wctx, "a", "a")
}
@ -375,7 +383,7 @@ func TestWatchResumeCompacted(t *testing.T) {
t.Fatal(err)
}
}
if err := kv.Compact(context.TODO(), 3); err != nil {
if _, err := kv.Compact(context.TODO(), 3); err != nil {
t.Fatal(err)
}
@ -400,7 +408,7 @@ func TestWatchResumeCompacted(t *testing.T) {
func TestWatchCompactRevision(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 3})
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
// set some keys
@ -414,7 +422,7 @@ func TestWatchCompactRevision(t *testing.T) {
w := clientv3.NewWatcher(clus.RandClient())
defer w.Close()
if err := kv.Compact(context.TODO(), 4); err != nil {
if _, err := kv.Compact(context.TODO(), 4); err != nil {
t.Fatal(err)
}
wch := w.Watch(context.Background(), "foo", clientv3.WithRev(2))
@ -487,7 +495,7 @@ func testWatchWithProgressNotify(t *testing.T, watchOnPut bool) {
} else if len(resp.Events) != 0 { // wait for notification otherwise
t.Fatalf("expected no events, but got %+v", resp.Events)
}
case <-time.After(2 * pi):
case <-time.After(time.Duration(1.5 * float64(pi))):
t.Fatalf("watch response expected in %v, but timed out", pi)
}
}
@ -673,3 +681,190 @@ func TestWatchWithRequireLeader(t *testing.T) {
t.Fatalf("expected response, got closed channel")
}
}
// TestWatchWithFilter checks that watch filtering works.
func TestWatchWithFilter(t *testing.T) {
cluster := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer cluster.Terminate(t)
client := cluster.RandClient()
ctx := context.Background()
wcNoPut := client.Watch(ctx, "a", clientv3.WithFilterPut())
wcNoDel := client.Watch(ctx, "a", clientv3.WithFilterDelete())
if _, err := client.Put(ctx, "a", "abc"); err != nil {
t.Fatal(err)
}
if _, err := client.Delete(ctx, "a"); err != nil {
t.Fatal(err)
}
npResp := <-wcNoPut
if len(npResp.Events) != 1 || npResp.Events[0].Type != clientv3.EventTypeDelete {
t.Fatalf("expected delete event, got %+v", npResp.Events)
}
ndResp := <-wcNoDel
if len(ndResp.Events) != 1 || ndResp.Events[0].Type != clientv3.EventTypePut {
t.Fatalf("expected put event, got %+v", ndResp.Events)
}
select {
case resp := <-wcNoPut:
t.Fatalf("unexpected event on filtered put (%+v)", resp)
case resp := <-wcNoDel:
t.Fatalf("unexpected event on filtered delete (%+v)", resp)
case <-time.After(100 * time.Millisecond):
}
}
// TestWatchWithCreatedNotification checks that createdNotification works.
func TestWatchWithCreatedNotification(t *testing.T) {
cluster := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer cluster.Terminate(t)
client := cluster.RandClient()
ctx := context.Background()
createC := client.Watch(ctx, "a", clientv3.WithCreatedNotify())
resp := <-createC
if !resp.Created {
t.Fatalf("expected created event, got %v", resp)
}
}
// TestWatchCancelOnServer ensures client watcher cancels propagate back to the server.
func TestWatchCancelOnServer(t *testing.T) {
cluster := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer cluster.Terminate(t)
client := cluster.RandClient()
for i := 0; i < 10; i++ {
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
client.Watch(ctx, "a", clientv3.WithCreatedNotify())
cancel()
}
// wait for cancels to propagate
time.Sleep(time.Second)
watchers, err := cluster.Members[0].Metric("etcd_debugging_mvcc_watcher_total")
if err != nil {
t.Fatal(err)
}
if watchers != "0" {
t.Fatalf("expected 0 watchers, got %q", watchers)
}
}
// TestWatchOverlapContextCancel stresses the watcher stream teardown path by
// creating/canceling watchers to ensure that new watchers are not taken down
// by a torn down watch stream. The sort of race that's being detected:
// 1. create w1 using a cancelable ctx with %v as "ctx"
// 2. cancel ctx
// 3. watcher client begins tearing down watcher grpc stream since no more watchers
// 4. start creating watcher w2 using a new "ctx" (not canceled), attaches to old grpc stream
// 5. watcher client finishes tearing down stream on "ctx"
// 6. w2 comes back canceled
func TestWatchOverlapContextCancel(t *testing.T) {
f := func(clus *integration.ClusterV3) {}
testWatchOverlapContextCancel(t, f)
}
func TestWatchOverlapDropConnContextCancel(t *testing.T) {
f := func(clus *integration.ClusterV3) {
clus.Members[0].DropConnections()
}
testWatchOverlapContextCancel(t, f)
}
func testWatchOverlapContextCancel(t *testing.T, f func(*integration.ClusterV3)) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
// each unique context "%v" has a unique grpc stream
n := 100
ctxs, ctxc := make([]context.Context, 5), make([]chan struct{}, 5)
for i := range ctxs {
// make "%v" unique
ctxs[i] = context.WithValue(context.TODO(), "key", i)
// limits the maximum number of outstanding watchers per stream
ctxc[i] = make(chan struct{}, 2)
}
// issue concurrent watches on "abc" with cancel
cli := clus.RandClient()
if _, err := cli.Put(context.TODO(), "abc", "def"); err != nil {
t.Fatal(err)
}
ch := make(chan struct{}, n)
for i := 0; i < n; i++ {
go func() {
defer func() { ch <- struct{}{} }()
idx := rand.Intn(len(ctxs))
ctx, cancel := context.WithCancel(ctxs[idx])
ctxc[idx] <- struct{}{}
wch := cli.Watch(ctx, "abc", clientv3.WithRev(1))
f(clus)
select {
case _, ok := <-wch:
if !ok {
t.Fatalf("unexpected closed channel %p", wch)
}
// may take a second or two to reestablish a watcher because of
// grpc backoff policies for disconnects
case <-time.After(5 * time.Second):
t.Errorf("timed out waiting for watch on %p", wch)
}
// randomize how cancel overlaps with watch creation
if rand.Intn(2) == 0 {
<-ctxc[idx]
cancel()
} else {
cancel()
<-ctxc[idx]
}
}()
}
// join on watches
for i := 0; i < n; i++ {
select {
case <-ch:
case <-time.After(5 * time.Second):
t.Fatalf("timed out waiting for completed watch")
}
}
}
// TestWatchCancelAndCloseClient ensures that canceling a watcher then immediately
// closing the client does not return a client closing error.
func TestWatchCancelAndCloseClient(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
cli := clus.Client(0)
ctx, cancel := context.WithCancel(context.Background())
wch := cli.Watch(ctx, "abc")
donec := make(chan struct{})
go func() {
defer close(donec)
select {
case wr, ok := <-wch:
if ok {
t.Fatalf("expected closed watch after cancel(), got resp=%+v err=%v", wr, wr.Err())
}
case <-time.After(5 * time.Second):
t.Fatal("timed out waiting for closed channel")
}
}()
cancel()
if err := cli.Close(); err != nil {
t.Fatal(err)
}
<-donec
clus.TakeClient(0)
}


@ -17,13 +17,15 @@ package clientv3
import (
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"golang.org/x/net/context"
"google.golang.org/grpc"
)
type (
PutResponse pb.PutResponse
GetResponse pb.RangeResponse
DeleteResponse pb.DeleteRangeResponse
TxnResponse pb.TxnResponse
CompactResponse pb.CompactionResponse
PutResponse pb.PutResponse
GetResponse pb.RangeResponse
DeleteResponse pb.DeleteRangeResponse
TxnResponse pb.TxnResponse
)
type KV interface {
@ -47,7 +49,7 @@ type KV interface {
Delete(ctx context.Context, key string, opts ...OpOption) (*DeleteResponse, error)
// Compact compacts etcd KV history before the given rev.
Compact(ctx context.Context, rev int64, opts ...CompactOption) error
Compact(ctx context.Context, rev int64, opts ...CompactOption) (*CompactResponse, error)
// Do applies a single Op on KV without a transaction.
// Do is useful when declaring operations to be issued at a later time
@ -80,7 +82,11 @@ type kv struct {
}
func NewKV(c *Client) KV {
return &kv{remote: pb.NewKVClient(c.conn)}
return &kv{remote: RetryKVClient(c)}
}
func NewKVFromKVClient(remote pb.KVClient) KV {
return &kv{remote: remote}
}
func (kv *kv) Put(ctx context.Context, key, val string, opts ...OpOption) (*PutResponse, error) {
@ -98,11 +104,12 @@ func (kv *kv) Delete(ctx context.Context, key string, opts ...OpOption) (*Delete
return r.del, toErr(ctx, err)
}
func (kv *kv) Compact(ctx context.Context, rev int64, opts ...CompactOption) error {
if _, err := kv.remote.Compact(ctx, OpCompact(rev, opts...).toRequest()); err != nil {
return toErr(ctx, err)
func (kv *kv) Compact(ctx context.Context, rev int64, opts ...CompactOption) (*CompactResponse, error) {
resp, err := kv.remote.Compact(ctx, OpCompact(rev, opts...).toRequest(), grpc.FailFast(false))
if err != nil {
return nil, toErr(ctx, err)
}
return nil
return (*CompactResponse)(resp), err
}
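// Illustrative sketch, not part of this file: how a caller adapts to the new
// two-value Compact signature. "cli" is an assumed, already-connected
// *clientv3.Client; assumed imports: clientv3 and golang.org/x/net/context.
func exampleCompact(ctx context.Context, cli *clientv3.Client) error {
	// Compact now returns the server's CompactResponse along with the error.
	resp, err := cli.Compact(ctx, 100)
	if err != nil {
		return err
	}
	_ = resp.Header // response header from the compaction RPC
	return nil
}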
func (kv *kv) Txn(ctx context.Context) Txn {
@ -134,34 +141,20 @@ func (kv *kv) do(ctx context.Context, op Op) (OpResponse, error) {
// TODO: handle other ops
case tRange:
var resp *pb.RangeResponse
r := &pb.RangeRequest{
Key: op.key,
RangeEnd: op.end,
Limit: op.limit,
Revision: op.rev,
Serializable: op.serializable,
KeysOnly: op.keysOnly,
CountOnly: op.countOnly,
}
if op.sort != nil {
r.SortOrder = pb.RangeRequest_SortOrder(op.sort.Order)
r.SortTarget = pb.RangeRequest_SortTarget(op.sort.Target)
}
resp, err = kv.remote.Range(ctx, r)
resp, err = kv.remote.Range(ctx, op.toRangeRequest(), grpc.FailFast(false))
if err == nil {
return OpResponse{get: (*GetResponse)(resp)}, nil
}
case tPut:
var resp *pb.PutResponse
r := &pb.PutRequest{Key: op.key, Value: op.val, Lease: int64(op.leaseID)}
r := &pb.PutRequest{Key: op.key, Value: op.val, Lease: int64(op.leaseID), PrevKv: op.prevKV}
resp, err = kv.remote.Put(ctx, r)
if err == nil {
return OpResponse{put: (*PutResponse)(resp)}, nil
}
case tDeleteRange:
var resp *pb.DeleteRangeResponse
r := &pb.DeleteRangeRequest{Key: op.key, RangeEnd: op.end}
r := &pb.DeleteRangeRequest{Key: op.key, RangeEnd: op.end, PrevKv: op.prevKV}
resp, err = kv.remote.DeleteRange(ctx, r)
if err == nil {
return OpResponse{del: (*DeleteResponse)(resp)}, nil


@ -21,6 +21,7 @@ import (
"github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"golang.org/x/net/context"
"google.golang.org/grpc"
)
type (
@ -43,6 +44,21 @@ type LeaseKeepAliveResponse struct {
TTL int64
}
// LeaseTimeToLiveResponse is used to convert the protobuf lease timetolive response.
type LeaseTimeToLiveResponse struct {
*pb.ResponseHeader
ID LeaseID `json:"id"`
// TTL is the remaining TTL in seconds for the lease; the lease will expire in under TTL+1 seconds.
TTL int64 `json:"ttl"`
// GrantedTTL is the initial granted time in seconds upon lease creation/renewal.
GrantedTTL int64 `json:"granted-ttl"`
// Keys is the list of keys attached to this lease.
Keys [][]byte `json:"keys"`
}
const (
// defaultTTL is the assumed lease TTL used for the first keepalive
// deadline before the actual TTL is known to the client.
@ -60,6 +76,9 @@ type Lease interface {
// Revoke revokes the given lease.
Revoke(ctx context.Context, id LeaseID) (*LeaseRevokeResponse, error)
// TimeToLive retrieves the lease information of the given lease ID.
TimeToLive(ctx context.Context, id LeaseID, opts ...LeaseOption) (*LeaseTimeToLiveResponse, error)
// KeepAlive keeps the given lease alive forever.
KeepAlive(ctx context.Context, id LeaseID) (<-chan *LeaseKeepAliveResponse, error)
@ -109,7 +128,7 @@ func NewLease(c *Client) Lease {
l := &lessor{
donec: make(chan struct{}),
keepAlives: make(map[LeaseID]*keepAlive),
remote: pb.NewLeaseClient(c.conn),
remote: RetryLeaseClient(c),
firstKeepAliveTimeout: c.cfg.DialTimeout + time.Second,
}
if l.firstKeepAliveTimeout == time.Second {
@ -140,7 +159,7 @@ func (l *lessor) Grant(ctx context.Context, ttl int64) (*LeaseGrantResponse, err
return gresp, nil
}
if isHaltErr(cctx, err) {
return nil, toErr(ctx, err)
return nil, toErr(cctx, err)
}
if nerr := l.newStream(); nerr != nil {
return nil, nerr
@ -169,6 +188,30 @@ func (l *lessor) Revoke(ctx context.Context, id LeaseID) (*LeaseRevokeResponse,
}
}
func (l *lessor) TimeToLive(ctx context.Context, id LeaseID, opts ...LeaseOption) (*LeaseTimeToLiveResponse, error) {
cctx, cancel := context.WithCancel(ctx)
done := cancelWhenStop(cancel, l.stopCtx.Done())
defer close(done)
for {
r := toLeaseTimeToLiveRequest(id, opts...)
resp, err := l.remote.LeaseTimeToLive(cctx, r)
if err == nil {
gresp := &LeaseTimeToLiveResponse{
ResponseHeader: resp.GetHeader(),
ID: LeaseID(resp.ID),
TTL: resp.TTL,
GrantedTTL: resp.GrantedTTL,
Keys: resp.Keys,
}
return gresp, nil
}
if isHaltErr(cctx, err) {
return nil, toErr(cctx, err)
}
}
}
func (l *lessor) KeepAlive(ctx context.Context, id LeaseID) (<-chan *LeaseKeepAliveResponse, error) {
ch := make(chan *LeaseKeepAliveResponse, leaseResponseChSize)
@ -261,7 +304,7 @@ func (l *lessor) keepAliveOnce(ctx context.Context, id LeaseID) (*LeaseKeepAlive
cctx, cancel := context.WithCancel(ctx)
defer cancel()
stream, err := l.remote.LeaseKeepAlive(cctx)
stream, err := l.remote.LeaseKeepAlive(cctx, grpc.FailFast(false))
if err != nil {
return nil, toErr(ctx, err)
}
@ -389,7 +432,7 @@ func (l *lessor) sendKeepAliveLoop(stream pb.Lease_LeaseKeepAliveClient) {
return
}
tosend := make([]LeaseID, 0)
var tosend []LeaseID
now := time.Now()
l.mu.Lock()
@ -418,7 +461,7 @@ func (l *lessor) getKeepAliveStream() pb.Lease_LeaseKeepAliveClient {
func (l *lessor) newStream() error {
sctx, cancel := context.WithCancel(l.stopCtx)
stream, err := l.remote.LeaseKeepAlive(sctx)
stream, err := l.remote.LeaseKeepAlive(sctx, grpc.FailFast(false))
if err != nil {
cancel()
return toErr(sctx, err)


@ -15,13 +15,15 @@
package clientv3
import (
"io/ioutil"
"log"
"os"
"sync"
"google.golang.org/grpc/grpclog"
)
// Logger is the logger used by the client library.
// It implements the grpclog.Logger interface.
type Logger grpclog.Logger
var (
@ -34,20 +36,36 @@ type settableLogger struct {
}
func init() {
// use go's standard logger by default like grpc
// disable client side logs by default
logger.mu.Lock()
logger.l = log.New(os.Stderr, "", log.LstdFlags)
logger.l = log.New(ioutil.Discard, "", 0)
// logger has to override the grpclog at initialization so that
// any changes to the grpclog go through logger with locking
// instead of through SetLogger
//
// now updates only happen through settableLogger.set
grpclog.SetLogger(&logger)
logger.mu.Unlock()
}
func (s *settableLogger) Set(l Logger) {
// SetLogger sets client-side Logger. By default, logs are disabled.
func SetLogger(l Logger) {
logger.set(l)
}
// GetLogger returns the current logger.
func GetLogger() Logger {
return logger.get()
}
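// Illustrative sketch, not part of this file: restoring the old default of
// logging to stderr, since client-side logs are now discarded by default.
// A *log.Logger satisfies the grpclog.Logger interface aliased above;
// assumed imports: log, os, and github.com/coreos/etcd/clientv3.
func exampleEnableClientLogs() {
	clientv3.SetLogger(log.New(os.Stderr, "clientv3: ", log.LstdFlags))
}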
func (s *settableLogger) set(l Logger) {
s.mu.Lock()
logger.l = l
s.mu.Unlock()
}
func (s *settableLogger) Get() Logger {
func (s *settableLogger) get() Logger {
s.mu.RLock()
l := logger.l
s.mu.RUnlock()
@ -56,9 +74,9 @@ func (s *settableLogger) Get() Logger {
// implement the grpclog.Logger interface
func (s *settableLogger) Fatal(args ...interface{}) { s.Get().Fatal(args...) }
func (s *settableLogger) Fatalf(format string, args ...interface{}) { s.Get().Fatalf(format, args...) }
func (s *settableLogger) Fatalln(args ...interface{}) { s.Get().Fatalln(args...) }
func (s *settableLogger) Print(args ...interface{}) { s.Get().Print(args...) }
func (s *settableLogger) Printf(format string, args ...interface{}) { s.Get().Printf(format, args...) }
func (s *settableLogger) Println(args ...interface{}) { s.Get().Println(args...) }
func (s *settableLogger) Fatal(args ...interface{}) { s.get().Fatal(args...) }
func (s *settableLogger) Fatalf(format string, args ...interface{}) { s.get().Fatalf(format, args...) }
func (s *settableLogger) Fatalln(args ...interface{}) { s.get().Fatalln(args...) }
func (s *settableLogger) Print(args ...interface{}) { s.get().Print(args...) }
func (s *settableLogger) Printf(format string, args ...interface{}) { s.get().Printf(format, args...) }
func (s *settableLogger) Println(args ...interface{}) { s.get().Println(args...) }


@ -20,10 +20,14 @@ import (
"strings"
"testing"
"github.com/coreos/etcd/auth"
"github.com/coreos/etcd/integration"
"github.com/coreos/etcd/pkg/testutil"
"golang.org/x/crypto/bcrypt"
)
func init() { auth.BcryptCost = bcrypt.MinCost }
// TestMain sets up an etcd cluster if running the examples.
func TestMain(m *testing.M) {
useCluster := true // default to running all tests


@ -19,6 +19,7 @@ import (
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"golang.org/x/net/context"
"google.golang.org/grpc"
)
type (
@ -67,7 +68,7 @@ func (m *maintenance) AlarmList(ctx context.Context) (*AlarmResponse, error) {
Alarm: pb.AlarmType_NONE, // all
}
for {
resp, err := m.remote.Alarm(ctx, req)
resp, err := m.remote.Alarm(ctx, req, grpc.FailFast(false))
if err == nil {
return (*AlarmResponse)(resp), nil
}
@ -100,7 +101,7 @@ func (m *maintenance) AlarmDisarm(ctx context.Context, am *AlarmMember) (*AlarmR
return &ret, nil
}
resp, err := m.remote.Alarm(ctx, req)
resp, err := m.remote.Alarm(ctx, req, grpc.FailFast(false))
if err == nil {
return (*AlarmResponse)(resp), nil
}
@ -114,7 +115,7 @@ func (m *maintenance) Defragment(ctx context.Context, endpoint string) (*Defragm
}
defer conn.Close()
remote := pb.NewMaintenanceClient(conn)
resp, err := remote.Defragment(ctx, &pb.DefragmentRequest{})
resp, err := remote.Defragment(ctx, &pb.DefragmentRequest{}, grpc.FailFast(false))
if err != nil {
return nil, toErr(ctx, err)
}
@ -128,7 +129,7 @@ func (m *maintenance) Status(ctx context.Context, endpoint string) (*StatusRespo
}
defer conn.Close()
remote := pb.NewMaintenanceClient(conn)
resp, err := remote.Status(ctx, &pb.StatusRequest{})
resp, err := remote.Status(ctx, &pb.StatusRequest{}, grpc.FailFast(false))
if err != nil {
return nil, toErr(ctx, err)
}
@ -136,7 +137,7 @@ func (m *maintenance) Status(ctx context.Context, endpoint string) (*StatusRespo
}
func (m *maintenance) Snapshot(ctx context.Context) (io.ReadCloser, error) {
ss, err := m.remote.Snapshot(ctx, &pb.SnapshotRequest{})
ss, err := m.remote.Snapshot(ctx, &pb.SnapshotRequest{}, grpc.FailFast(false))
if err != nil {
return nil, toErr(ctx, err)
}


@ -78,7 +78,7 @@ func (s *syncer) SyncBase(ctx context.Context) (<-chan clientv3.GetResponse, cha
// If len(s.prefix) != 0, we will sync the key-value space with the given prefix.
// We then range from the prefix to the next prefix if it exists, or from the
// prefix to the end if the next prefix does not exist.
opts = append(opts, clientv3.WithPrefix())
opts = append(opts, clientv3.WithRange(clientv3.GetPrefixRangeEnd(s.prefix)))
key = s.prefix
}

clientv3/naming/grpc.go Normal file

@ -0,0 +1,128 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package naming
import (
"encoding/json"
etcd "github.com/coreos/etcd/clientv3"
"golang.org/x/net/context"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/naming"
)
// GRPCResolver creates a grpc.Watcher for a target to track its resolution changes.
type GRPCResolver struct {
// Client is an initialized etcd client.
Client *etcd.Client
}
func (gr *GRPCResolver) Update(ctx context.Context, target string, nm naming.Update) (err error) {
switch nm.Op {
case naming.Add:
var v []byte
if v, err = json.Marshal(nm); err != nil {
return grpc.Errorf(codes.InvalidArgument, err.Error())
}
_, err = gr.Client.KV.Put(ctx, target+"/"+nm.Addr, string(v))
case naming.Delete:
_, err = gr.Client.Delete(ctx, target+"/"+nm.Addr)
default:
return grpc.Errorf(codes.InvalidArgument, "naming: bad naming op")
}
return err
}
func (gr *GRPCResolver) Resolve(target string) (naming.Watcher, error) {
ctx, cancel := context.WithCancel(context.Background())
w := &gRPCWatcher{c: gr.Client, target: target + "/", ctx: ctx, cancel: cancel}
return w, nil
}
type gRPCWatcher struct {
c *etcd.Client
target string
ctx context.Context
cancel context.CancelFunc
wch etcd.WatchChan
err error
}
// Next gets the next set of updates from the etcd resolver.
// Calls to Next should be serialized; concurrent calls are not safe since
// there is no way to reconcile the update ordering.
func (gw *gRPCWatcher) Next() ([]*naming.Update, error) {
if gw.wch == nil {
// first Next() returns all addresses
return gw.firstNext()
}
if gw.err != nil {
return nil, gw.err
}
// process new events on target/*
wr, ok := <-gw.wch
if !ok {
gw.err = grpc.Errorf(codes.Unavailable, "naming: watch closed")
return nil, gw.err
}
if gw.err = wr.Err(); gw.err != nil {
return nil, gw.err
}
updates := make([]*naming.Update, 0, len(wr.Events))
for _, e := range wr.Events {
var jupdate naming.Update
var err error
switch e.Type {
case etcd.EventTypePut:
err = json.Unmarshal(e.Kv.Value, &jupdate)
jupdate.Op = naming.Add
case etcd.EventTypeDelete:
err = json.Unmarshal(e.PrevKv.Value, &jupdate)
jupdate.Op = naming.Delete
}
if err == nil {
updates = append(updates, &jupdate)
}
}
return updates, nil
}
func (gw *gRPCWatcher) firstNext() ([]*naming.Update, error) {
// Use serialized request so resolution still works if the target etcd
// server is partitioned away from the quorum.
resp, err := gw.c.Get(gw.ctx, gw.target, etcd.WithPrefix(), etcd.WithSerializable())
if gw.err = err; err != nil {
return nil, err
}
updates := make([]*naming.Update, 0, len(resp.Kvs))
for _, kv := range resp.Kvs {
var jupdate naming.Update
if err := json.Unmarshal(kv.Value, &jupdate); err != nil {
continue
}
updates = append(updates, &jupdate)
}
opts := []etcd.OpOption{etcd.WithRev(resp.Header.Revision + 1), etcd.WithPrefix(), etcd.WithPrevKV()}
gw.wch = gw.c.Watch(gw.ctx, gw.target, opts...)
return updates, nil
}
func (gw *gRPCWatcher) Close() { gw.cancel() }
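A usage sketch for the resolver above: register an address under a target, then dial through a round-robin balancer built on the resolver. The target name, address, and the insecure dial option are illustrative assumptions; grpc.RoundRobin and grpc.WithBalancer are the grpc-go naming APIs of this era.
package main
import (
	etcd "github.com/coreos/etcd/clientv3"
	etcdnaming "github.com/coreos/etcd/clientv3/naming"

	"golang.org/x/net/context"
	"google.golang.org/grpc"
	"google.golang.org/grpc/naming"
)
func dialViaEtcd(cli *etcd.Client) (*grpc.ClientConn, error) {
	r := &etcdnaming.GRPCResolver{Client: cli}
	// Advertise an address under the "my-service" target; watchers resolving
	// the same target observe the update.
	up := naming.Update{Op: naming.Add, Addr: "1.2.3.4:8080"}
	if err := r.Update(context.TODO(), "my-service", up); err != nil {
		return nil, err
	}
	// Round-robin across whatever addresses are registered under the target.
	// WithInsecure keeps the sketch short; real deployments set credentials.
	return grpc.Dial("my-service", grpc.WithBalancer(grpc.RoundRobin(r)), grpc.WithInsecure())
}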


@ -0,0 +1,135 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package naming
import (
"encoding/json"
"reflect"
"testing"
"golang.org/x/net/context"
"google.golang.org/grpc/naming"
etcd "github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/integration"
"github.com/coreos/etcd/pkg/testutil"
)
func TestGRPCResolver(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
r := GRPCResolver{
Client: clus.RandClient(),
}
w, err := r.Resolve("foo")
if err != nil {
t.Fatal("failed to resolve foo", err)
}
defer w.Close()
addOp := naming.Update{Op: naming.Add, Addr: "127.0.0.1", Metadata: "metadata"}
err = r.Update(context.TODO(), "foo", addOp)
if err != nil {
t.Fatal("failed to add foo", err)
}
us, err := w.Next()
if err != nil {
t.Fatal("failed to get udpate", err)
}
wu := &naming.Update{
Op: naming.Add,
Addr: "127.0.0.1",
Metadata: "metadata",
}
if !reflect.DeepEqual(us[0], wu) {
t.Fatalf("up = %#v, want %#v", us[0], wu)
}
delOp := naming.Update{Op: naming.Delete, Addr: "127.0.0.1"}
err = r.Update(context.TODO(), "foo", delOp)
us, err = w.Next()
if err != nil {
t.Fatal("failed to get udpate", err)
}
wu = &naming.Update{
Op: naming.Delete,
Addr: "127.0.0.1",
Metadata: "metadata",
}
if !reflect.DeepEqual(us[0], wu) {
t.Fatalf("up = %#v, want %#v", us[0], wu)
}
}
// TestGRPCResolverMulti ensures the resolver initializes correctly with
// multiple hosts and receives multiple updates in a single revision.
func TestGRPCResolverMulti(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
c := clus.RandClient()
v, verr := json.Marshal(naming.Update{Addr: "127.0.0.1", Metadata: "md"})
if verr != nil {
t.Fatal(verr)
}
if _, err := c.Put(context.TODO(), "foo/host", string(v)); err != nil {
t.Fatal(err)
}
if _, err := c.Put(context.TODO(), "foo/host2", string(v)); err != nil {
t.Fatal(err)
}
r := GRPCResolver{c}
w, err := r.Resolve("foo")
if err != nil {
t.Fatal("failed to resolve foo", err)
}
defer w.Close()
updates, nerr := w.Next()
if nerr != nil {
t.Fatal(nerr)
}
if len(updates) != 2 {
t.Fatalf("expected two updates, got %+v", updates)
}
_, err = c.Txn(context.TODO()).Then(etcd.OpDelete("foo/host"), etcd.OpDelete("foo/host2")).Commit()
if err != nil {
t.Fatal(err)
}
updates, nerr = w.Next()
if nerr != nil {
t.Fatal(nerr)
}
if len(updates) != 2 || (updates[0].Op != naming.Delete && updates[1].Op != naming.Delete) {
t.Fatalf("expected two updates, got %+v", updates)
}
}


@ -14,9 +14,7 @@
package clientv3
import (
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
)
import pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
type opType int
@ -43,40 +41,63 @@ type Op struct {
serializable bool
keysOnly bool
countOnly bool
minModRev int64
maxModRev int64
minCreateRev int64
maxCreateRev int64
// for range, watch
rev int64
// for watch, put, delete
prevKV bool
// progressNotify is for progress updates.
progressNotify bool
// createdNotify is for created event
createdNotify bool
// filters for watchers
filterPut bool
filterDelete bool
// for put
val []byte
leaseID LeaseID
}
func (op Op) toRangeRequest() *pb.RangeRequest {
if op.t != tRange {
panic("op.t != tRange")
}
r := &pb.RangeRequest{
Key: op.key,
RangeEnd: op.end,
Limit: op.limit,
Revision: op.rev,
Serializable: op.serializable,
KeysOnly: op.keysOnly,
CountOnly: op.countOnly,
MinModRevision: op.minModRev,
MaxModRevision: op.maxModRev,
MinCreateRevision: op.minCreateRev,
MaxCreateRevision: op.maxCreateRev,
}
if op.sort != nil {
r.SortOrder = pb.RangeRequest_SortOrder(op.sort.Order)
r.SortTarget = pb.RangeRequest_SortTarget(op.sort.Target)
}
return r
}
func (op Op) toRequestOp() *pb.RequestOp {
switch op.t {
case tRange:
r := &pb.RangeRequest{
Key: op.key,
RangeEnd: op.end,
Limit: op.limit,
Revision: op.rev,
Serializable: op.serializable,
KeysOnly: op.keysOnly,
CountOnly: op.countOnly,
}
if op.sort != nil {
r.SortOrder = pb.RangeRequest_SortOrder(op.sort.Order)
r.SortTarget = pb.RangeRequest_SortTarget(op.sort.Target)
}
return &pb.RequestOp{Request: &pb.RequestOp_RequestRange{RequestRange: r}}
return &pb.RequestOp{Request: &pb.RequestOp_RequestRange{RequestRange: op.toRangeRequest()}}
case tPut:
r := &pb.PutRequest{Key: op.key, Value: op.val, Lease: int64(op.leaseID)}
r := &pb.PutRequest{Key: op.key, Value: op.val, Lease: int64(op.leaseID), PrevKv: op.prevKV}
return &pb.RequestOp{Request: &pb.RequestOp_RequestPut{RequestPut: r}}
case tDeleteRange:
r := &pb.DeleteRangeRequest{Key: op.key, RangeEnd: op.end}
r := &pb.DeleteRangeRequest{Key: op.key, RangeEnd: op.end, PrevKv: op.prevKV}
return &pb.RequestOp{Request: &pb.RequestOp_RequestDeleteRange{RequestDeleteRange: r}}
default:
panic("Unknown Op")
@ -109,6 +130,14 @@ func OpDelete(key string, opts ...OpOption) Op {
panic("unexpected serializable in delete")
case ret.countOnly:
panic("unexpected countOnly in delete")
case ret.minModRev != 0, ret.maxModRev != 0:
panic("unexpected mod revision filter in delete")
case ret.minCreateRev != 0, ret.maxCreateRev != 0:
panic("unexpected create revision filter in delete")
case ret.filterDelete, ret.filterPut:
panic("unexpected filter in delete")
case ret.createdNotify:
panic("unexpected createdNotify in delete")
}
return ret
}
@ -128,7 +157,15 @@ func OpPut(key, val string, opts ...OpOption) Op {
case ret.serializable:
panic("unexpected serializable in put")
case ret.countOnly:
panic("unexpected countOnly in delete")
panic("unexpected countOnly in put")
case ret.minModRev != 0, ret.maxModRev != 0:
panic("unexpected mod revision filter in put")
case ret.minCreateRev != 0, ret.maxCreateRev != 0:
panic("unexpected create revision filter in put")
case ret.filterDelete, ret.filterPut:
panic("unexpected filter in put")
case ret.createdNotify:
panic("unexpected createdNotify in put")
}
return ret
}
@ -146,7 +183,11 @@ func opWatch(key string, opts ...OpOption) Op {
case ret.serializable:
panic("unexpected serializable in watch")
case ret.countOnly:
panic("unexpected countOnly in delete")
panic("unexpected countOnly in watch")
case ret.minModRev != 0, ret.maxModRev != 0:
panic("unexpected mod revision filter in watch")
case ret.minCreateRev != 0, ret.maxCreateRev != 0:
panic("unexpected create revision filter in watch")
}
return ret
}
@ -178,10 +219,24 @@ func WithRev(rev int64) OpOption { return func(op *Op) { op.rev = rev } }
// 'order' can be either 'SortNone', 'SortAscend', 'SortDescend'.
func WithSort(target SortTarget, order SortOrder) OpOption {
return func(op *Op) {
if target == SortByKey && order == SortAscend {
// If order != SortNone, the server fetches the entire key-space
// and then applies the sort and limit, if provided.
// Since the current mvcc.Range implementation returns results
// sorted by key in lexicographically ascending order, the
// client should ignore SortOrder if the target is SortByKey.
order = SortNone
}
op.sort = &SortOption{target, order}
}
}
// GetPrefixRangeEnd gets the range end of the prefix.
// 'Get(foo, WithPrefix())' is equal to 'Get(foo, WithRange(GetPrefixRangeEnd(foo)))'.
func GetPrefixRangeEnd(prefix string) string {
return string(getPrefix([]byte(prefix)))
}
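// Illustrative sketch, not part of this file: the equivalence stated above.
// "cli" and "ctx" are assumed from the caller; both Gets request the same
// interval, e.g. ["foo", "fop") for prefix "foo", since the last byte of the
// prefix is incremented by one.
func examplePrefixRange(ctx context.Context, cli *clientv3.Client) error {
	if _, err := cli.Get(ctx, "foo", clientv3.WithPrefix()); err != nil {
		return err
	}
	_, err := cli.Get(ctx, "foo", clientv3.WithRange(clientv3.GetPrefixRangeEnd("foo")))
	return err
}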
func getPrefix(key []byte) []byte {
end := make([]byte, len(key))
copy(end, key)
@ -235,6 +290,18 @@ func WithCountOnly() OpOption {
return func(op *Op) { op.countOnly = true }
}
// WithMinModRev filters out keys for Get with modification revisions less than the given revision.
func WithMinModRev(rev int64) OpOption { return func(op *Op) { op.minModRev = rev } }
// WithMaxModRev filters out keys for Get with modification revisions greater than the given revision.
func WithMaxModRev(rev int64) OpOption { return func(op *Op) { op.maxModRev = rev } }
// WithMinCreateRev filters out keys for Get with creation revisions less than the given revision.
func WithMinCreateRev(rev int64) OpOption { return func(op *Op) { op.minCreateRev = rev } }
// WithMaxCreateRev filters out keys for Get with creation revisions greater than the given revision.
func WithMaxCreateRev(rev int64) OpOption { return func(op *Op) { op.maxCreateRev = rev } }
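// Illustrative sketch, not part of this file: the revision filters compose
// with other Get options. This hypothetical helper fetches only keys under
// "prefix" whose last modification happened at revision "rev" or later;
// "cli" is an assumed connected *clientv3.Client.
func exampleRangeSince(ctx context.Context, cli *clientv3.Client, prefix string, rev int64) (*clientv3.GetResponse, error) {
	return cli.Get(ctx, prefix, clientv3.WithPrefix(), clientv3.WithMinModRev(rev))
}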
// WithFirstCreate gets the key with the oldest creation revision in the request range.
func WithFirstCreate() []OpOption { return withTop(SortByCreateRevision, SortAscend) }
@ -258,10 +325,65 @@ func withTop(target SortTarget, order SortOrder) []OpOption {
return []OpOption{WithPrefix(), WithSort(target, order), WithLimit(1)}
}
// WithProgressNotify makes watch server send periodic progress updates.
// WithProgressNotify makes the watch server send periodic progress updates
// every 10 minutes when there are no incoming events.
// Progress updates have zero events in WatchResponse.
func WithProgressNotify() OpOption {
return func(op *Op) {
op.progressNotify = true
}
}
// WithCreatedNotify makes the watch server send the created event.
func WithCreatedNotify() OpOption {
return func(op *Op) {
op.createdNotify = true
}
}
// WithFilterPut discards PUT events from the watcher.
func WithFilterPut() OpOption {
return func(op *Op) { op.filterPut = true }
}
// WithFilterDelete discards DELETE events from the watcher.
func WithFilterDelete() OpOption {
return func(op *Op) { op.filterDelete = true }
}
// WithPrevKV gets the previous key-value pair before the event happens. If the previous KV is already compacted,
// nothing will be returned.
func WithPrevKV() OpOption {
return func(op *Op) {
op.prevKV = true
}
}
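// Illustrative sketch, not part of this file: combining the new watch
// options. With WithFilterPut only DELETE events arrive, and WithPrevKV
// attaches each deleted key's prior value when it has not been compacted.
// "cli", "ctx", and an fmt import are assumed.
func exampleWatchDeletes(ctx context.Context, cli *clientv3.Client, key string) {
	for wresp := range cli.Watch(ctx, key, clientv3.WithFilterPut(), clientv3.WithPrevKV()) {
		for _, ev := range wresp.Events {
			if ev.PrevKv != nil {
				fmt.Printf("deleted %s (was %q)\n", ev.Kv.Key, ev.PrevKv.Value)
			}
		}
	}
}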
// LeaseOp represents an operation that a lease can execute.
type LeaseOp struct {
id LeaseID
// for TimeToLive
attachedKeys bool
}
// LeaseOption configures lease operations.
type LeaseOption func(*LeaseOp)
func (op *LeaseOp) applyOpts(opts []LeaseOption) {
for _, opt := range opts {
opt(op)
}
}
// WithAttachedKeys requests the lease TimeToLive API to return
// the keys attached to the given lease ID.
func WithAttachedKeys() LeaseOption {
return func(op *LeaseOp) { op.attachedKeys = true }
}
func toLeaseTimeToLiveRequest(id LeaseID, opts ...LeaseOption) *pb.LeaseTimeToLiveRequest {
ret := &LeaseOp{id: id}
ret.applyOpts(opts)
return &pb.LeaseTimeToLiveRequest{ID: int64(id), Keys: ret.attachedKeys}
}
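// Illustrative sketch, not part of this file: a hypothetical helper listing
// the keys attached to a lease via TimeToLive with WithAttachedKeys;
// "cli" is an assumed connected *clientv3.Client.
func exampleLeaseKeys(ctx context.Context, cli *clientv3.Client, id clientv3.LeaseID) ([]string, error) {
	resp, err := cli.TimeToLive(ctx, id, clientv3.WithAttachedKeys())
	if err != nil {
		return nil, err
	}
	keys := make([]string, len(resp.Keys))
	for i, k := range resp.Keys {
		keys[i] = string(k)
	}
	return keys, nil
}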

clientv3/op_test.go Normal file

@ -0,0 +1,38 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package clientv3
import (
"reflect"
"testing"
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
)
// TestOpWithSort tests if WithSort(ASCEND, KEY) and WithLimit are specified,
// RangeRequest ignores the SortOption to avoid unnecessarily fetching
// the entire key-space.
func TestOpWithSort(t *testing.T) {
opReq := OpGet("foo", WithSort(SortByKey, SortAscend), WithLimit(10)).toRequestOp().Request
q, ok := opReq.(*pb.RequestOp_RequestRange)
if !ok {
t.Fatalf("expected range request, got %v", reflect.TypeOf(opReq))
}
req := q.RequestRange
wreq := &pb.RangeRequest{Key: []byte("foo"), SortOrder: pb.RangeRequest_NONE, Limit: 10}
if !reflect.DeepEqual(req, wreq) {
t.Fatalf("expected %+v, got %+v", wreq, req)
}
}

clientv3/retry.go Normal file

@ -0,0 +1,253 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package clientv3
import (
"github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"golang.org/x/net/context"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
)
type rpcFunc func(ctx context.Context) error
type retryRpcFunc func(context.Context, rpcFunc)
func (c *Client) newRetryWrapper() retryRpcFunc {
return func(rpcCtx context.Context, f rpcFunc) {
for {
err := f(rpcCtx)
if err == nil {
return
}
// only retry if unavailable
if grpc.Code(err) != codes.Unavailable {
return
}
// always stop retry on etcd errors
eErr := rpctypes.Error(err)
if _, ok := eErr.(rpctypes.EtcdError); ok {
return
}
select {
case <-c.balancer.ConnectNotify():
case <-rpcCtx.Done():
case <-c.ctx.Done():
return
}
}
}
}
type retryKVClient struct {
pb.KVClient
retryf retryRpcFunc
}
// RetryKVClient implements a KVClient that uses the client's FailFast retry policy.
func RetryKVClient(c *Client) pb.KVClient {
return &retryKVClient{pb.NewKVClient(c.conn), c.retryWrapper}
}
func (rkv *retryKVClient) Put(ctx context.Context, in *pb.PutRequest, opts ...grpc.CallOption) (resp *pb.PutResponse, err error) {
rkv.retryf(ctx, func(rctx context.Context) error {
resp, err = rkv.KVClient.Put(rctx, in, opts...)
return err
})
return resp, err
}
func (rkv *retryKVClient) DeleteRange(ctx context.Context, in *pb.DeleteRangeRequest, opts ...grpc.CallOption) (resp *pb.DeleteRangeResponse, err error) {
rkv.retryf(ctx, func(rctx context.Context) error {
resp, err = rkv.KVClient.DeleteRange(rctx, in, opts...)
return err
})
return resp, err
}
func (rkv *retryKVClient) Txn(ctx context.Context, in *pb.TxnRequest, opts ...grpc.CallOption) (resp *pb.TxnResponse, err error) {
rkv.retryf(ctx, func(rctx context.Context) error {
resp, err = rkv.KVClient.Txn(rctx, in, opts...)
return err
})
return resp, err
}
func (rkv *retryKVClient) Compact(ctx context.Context, in *pb.CompactionRequest, opts ...grpc.CallOption) (resp *pb.CompactionResponse, err error) {
rkv.retryf(ctx, func(rctx context.Context) error {
resp, err = rkv.KVClient.Compact(rctx, in, opts...)
return err
})
return resp, err
}
type retryLeaseClient struct {
pb.LeaseClient
retryf retryRpcFunc
}
// RetryLeaseClient implements a LeaseClient that uses the client's FailFast retry policy.
func RetryLeaseClient(c *Client) pb.LeaseClient {
return &retryLeaseClient{pb.NewLeaseClient(c.conn), c.retryWrapper}
}
func (rlc *retryLeaseClient) LeaseGrant(ctx context.Context, in *pb.LeaseGrantRequest, opts ...grpc.CallOption) (resp *pb.LeaseGrantResponse, err error) {
rlc.retryf(ctx, func(rctx context.Context) error {
resp, err = rlc.LeaseClient.LeaseGrant(rctx, in, opts...)
return err
})
return resp, err
}
func (rlc *retryLeaseClient) LeaseRevoke(ctx context.Context, in *pb.LeaseRevokeRequest, opts ...grpc.CallOption) (resp *pb.LeaseRevokeResponse, err error) {
rlc.retryf(ctx, func(rctx context.Context) error {
resp, err = rlc.LeaseClient.LeaseRevoke(rctx, in, opts...)
return err
})
return resp, err
}
type retryClusterClient struct {
pb.ClusterClient
retryf retryRpcFunc
}
// RetryClusterClient implements a ClusterClient that uses the client's FailFast retry policy.
func RetryClusterClient(c *Client) pb.ClusterClient {
return &retryClusterClient{pb.NewClusterClient(c.conn), c.retryWrapper}
}
func (rcc *retryClusterClient) MemberAdd(ctx context.Context, in *pb.MemberAddRequest, opts ...grpc.CallOption) (resp *pb.MemberAddResponse, err error) {
rcc.retryf(ctx, func(rctx context.Context) error {
resp, err = rcc.ClusterClient.MemberAdd(rctx, in, opts...)
return err
})
return resp, err
}
func (rcc *retryClusterClient) MemberRemove(ctx context.Context, in *pb.MemberRemoveRequest, opts ...grpc.CallOption) (resp *pb.MemberRemoveResponse, err error) {
rcc.retryf(ctx, func(rctx context.Context) error {
resp, err = rcc.ClusterClient.MemberRemove(rctx, in, opts...)
return err
})
return resp, err
}
func (rcc *retryClusterClient) MemberUpdate(ctx context.Context, in *pb.MemberUpdateRequest, opts ...grpc.CallOption) (resp *pb.MemberUpdateResponse, err error) {
rcc.retryf(ctx, func(rctx context.Context) error {
resp, err = rcc.ClusterClient.MemberUpdate(rctx, in, opts...)
return err
})
return resp, err
}
type retryAuthClient struct {
pb.AuthClient
retryf retryRpcFunc
}
// RetryAuthClient implements an AuthClient that uses the client's FailFast retry policy.
func RetryAuthClient(c *Client) pb.AuthClient {
return &retryAuthClient{pb.NewAuthClient(c.conn), c.retryWrapper}
}
func (rac *retryAuthClient) AuthEnable(ctx context.Context, in *pb.AuthEnableRequest, opts ...grpc.CallOption) (resp *pb.AuthEnableResponse, err error) {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.AuthEnable(rctx, in, opts...)
return err
})
return resp, err
}
func (rac *retryAuthClient) AuthDisable(ctx context.Context, in *pb.AuthDisableRequest, opts ...grpc.CallOption) (resp *pb.AuthDisableResponse, err error) {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.AuthDisable(rctx, in, opts...)
return err
})
return resp, err
}
func (rac *retryAuthClient) UserAdd(ctx context.Context, in *pb.AuthUserAddRequest, opts ...grpc.CallOption) (resp *pb.AuthUserAddResponse, err error) {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.UserAdd(rctx, in, opts...)
return err
})
return resp, err
}
func (rac *retryAuthClient) UserDelete(ctx context.Context, in *pb.AuthUserDeleteRequest, opts ...grpc.CallOption) (resp *pb.AuthUserDeleteResponse, err error) {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.UserDelete(rctx, in, opts...)
return err
})
return resp, err
}
func (rac *retryAuthClient) UserChangePassword(ctx context.Context, in *pb.AuthUserChangePasswordRequest, opts ...grpc.CallOption) (resp *pb.AuthUserChangePasswordResponse, err error) {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.UserChangePassword(rctx, in, opts...)
return err
})
return resp, err
}
func (rac *retryAuthClient) UserGrantRole(ctx context.Context, in *pb.AuthUserGrantRoleRequest, opts ...grpc.CallOption) (resp *pb.AuthUserGrantRoleResponse, err error) {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.UserGrantRole(rctx, in, opts...)
return err
})
return resp, err
}
func (rac *retryAuthClient) UserRevokeRole(ctx context.Context, in *pb.AuthUserRevokeRoleRequest, opts ...grpc.CallOption) (resp *pb.AuthUserRevokeRoleResponse, err error) {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.UserRevokeRole(rctx, in, opts...)
return err
})
return resp, err
}
func (rac *retryAuthClient) RoleAdd(ctx context.Context, in *pb.AuthRoleAddRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleAddResponse, err error) {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.RoleAdd(rctx, in, opts...)
return err
})
return resp, err
}
func (rac *retryAuthClient) RoleDelete(ctx context.Context, in *pb.AuthRoleDeleteRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleDeleteResponse, err error) {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.RoleDelete(rctx, in, opts...)
return err
})
return resp, err
}
func (rac *retryAuthClient) RoleGrantPermission(ctx context.Context, in *pb.AuthRoleGrantPermissionRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleGrantPermissionResponse, err error) {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.RoleGrantPermission(rctx, in, opts...)
return err
})
return resp, err
}
func (rac *retryAuthClient) RoleRevokePermission(ctx context.Context, in *pb.AuthRoleRevokePermissionRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleRevokePermissionResponse, err error) {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.RoleRevokePermission(rctx, in, opts...)
return err
})
return resp, err
}
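The exported constructors above drop in wherever a generated client is expected. A brief sketch; the helper name and key are hypothetical, and the client c is assumed connected:
package main
import (
	"github.com/coreos/etcd/clientv3"
	pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
	"golang.org/x/net/context"
)
// retriedPut issues a Put through the retrying KV client: a call failing with
// codes.Unavailable is re-issued once the balancer reports a reconnect, while
// any other error, including etcd server errors, returns immediately.
func retriedPut(ctx context.Context, c *clientv3.Client) (*pb.PutResponse, error) {
	rkv := clientv3.RetryKVClient(c)
	return rkv.Put(ctx, &pb.PutRequest{Key: []byte("k"), Value: []byte("v")})
}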


@ -19,6 +19,7 @@ import (
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"golang.org/x/net/context"
"google.golang.org/grpc"
)
// Txn is the interface that wraps mini-transactions.
@ -152,7 +153,12 @@ func (txn *txn) Commit() (*TxnResponse, error) {
func (txn *txn) commit() (*TxnResponse, error) {
r := &pb.TxnRequest{Compare: txn.cmps, Success: txn.sus, Failure: txn.fas}
resp, err := txn.kv.remote.Txn(txn.ctx, r)
var opts []grpc.CallOption
if !txn.isWrite {
opts = []grpc.CallOption{grpc.FailFast(false)}
}
resp, err := txn.kv.remote.Txn(txn.ctx, r, opts...)
if err != nil {
return nil, err
}


@ -17,9 +17,13 @@ package clientv3
import (
"testing"
"time"
"github.com/coreos/etcd/pkg/testutil"
)
func TestTxnPanics(t *testing.T) {
defer testutil.AfterTest(t)
kv := &kv{}
errc := make(chan string)


@ -23,6 +23,7 @@ import (
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
mvccpb "github.com/coreos/etcd/mvcc/mvccpb"
"golang.org/x/net/context"
"google.golang.org/grpc"
)
const (
@ -60,6 +61,9 @@ type WatchResponse struct {
// the channel sends a final response that has Canceled set to true with a non-nil Err().
Canceled bool
// Created is used to indicate the creation of the watcher.
Created bool
closeErr error
}
@ -88,7 +92,7 @@ func (wr *WatchResponse) Err() error {
// IsProgressNotify returns true if the WatchResponse is a progress notification.
func (wr *WatchResponse) IsProgressNotify() bool {
return len(wr.Events) == 0 && !wr.Canceled
return len(wr.Events) == 0 && !wr.Canceled && !wr.Created && wr.CompactRevision == 0 && wr.Header.Revision != 0
}
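// Illustrative sketch, not part of this file: a watch loop separating
// progress notifications from real events with the tightened predicate
// above; "cli", "ctx", and an fmt import are assumed.
func exampleDrainWatch(ctx context.Context, cli *clientv3.Client) {
	for wresp := range cli.Watch(ctx, "foo", clientv3.WithProgressNotify()) {
		if wresp.IsProgressNotify() {
			fmt.Println("progress notify at revision", wresp.Header.Revision)
			continue
		}
		for _, ev := range wresp.Events {
			fmt.Println("event:", ev.Type, string(ev.Kv.Key))
		}
	}
}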
// watcher implements the Watcher interface
@ -97,10 +101,12 @@ type watcher struct {
// mu protects the grpc streams map
mu sync.RWMutex
// streams holds all the active grpc streams keyed by ctx value.
streams map[string]*watchGrpcStream
}
// watchGrpcStream tracks all watch resources attached to a single grpc stream.
type watchGrpcStream struct {
owner *watcher
remote pb.WatchClient
@ -111,10 +117,10 @@ type watchGrpcStream struct {
ctxKey string
cancel context.CancelFunc
// mu protects the streams map
mu sync.RWMutex
// streams holds all active watchers
streams map[int64]*watcherStream
// substreams holds all active watchers on this grpc stream
substreams map[int64]*watcherStream
// resuming holds all resuming watchers on this grpc stream
resuming []*watcherStream
// reqc sends a watch request from Watch() to the main goroutine
reqc chan *watchRequest
@ -126,8 +132,12 @@ type watchGrpcStream struct {
donec chan struct{}
// errc transmits errors from grpc Recv to the watch stream reconn logic
errc chan error
// closingc gets the watcherStream of closing watchers
closingc chan *watcherStream
// the error that closed the watch stream
// resumec closes to signal that all substreams should begin resuming
resumec chan struct{}
// closeErr is the error that closed the watch stream
closeErr error
}
@ -137,8 +147,14 @@ type watchRequest struct {
key string
end string
rev int64
// progressNotify is for progress updates.
// send created notification event if this field is true
createdNotify bool
// progressNotify is for progress updates
progressNotify bool
// filters is the list of events to filter out
filters []pb.WatchCreateRequest_FilterType
// get the previous key-value pair before the event happens
prevKV bool
// retc receives a chan WatchResponse once the watcher is established
retc chan chan WatchResponse
}
@ -149,20 +165,27 @@ type watcherStream struct {
initReq watchRequest
// outc publishes watch responses to subscriber
outc chan<- WatchResponse
outc chan WatchResponse
// recvc buffers watch responses before publishing
recvc chan *WatchResponse
id int64
// donec closes when the watcherStream goroutine stops.
donec chan struct{}
// closing is set to true when stream should be scheduled to shutdown.
closing bool
// id is the registered watch id on the grpc stream
id int64
// lastRev is revision last successfully sent over outc
lastRev int64
// resumec indicates the stream must recover at a given revision
resumec chan int64
// buf holds all events received from etcd but not yet consumed by the client
buf []*WatchResponse
}
func NewWatcher(c *Client) Watcher {
return NewWatchFromWatchClient(pb.NewWatchClient(c.conn))
}
func NewWatchFromWatchClient(wc pb.WatchClient) Watcher {
return &watcher{
remote: pb.NewWatchClient(c.conn),
remote: wc,
streams: make(map[string]*watchGrpcStream),
}
}
@ -181,18 +204,20 @@ func (vc *valCtx) Err() error { return nil }
func (w *watcher) newWatcherGrpcStream(inctx context.Context) *watchGrpcStream {
ctx, cancel := context.WithCancel(&valCtx{inctx})
wgs := &watchGrpcStream{
owner: w,
remote: w.remote,
ctx: ctx,
ctxKey: fmt.Sprintf("%v", inctx),
cancel: cancel,
streams: make(map[int64]*watcherStream),
owner: w,
remote: w.remote,
ctx: ctx,
ctxKey: fmt.Sprintf("%v", inctx),
cancel: cancel,
substreams: make(map[int64]*watcherStream),
respc: make(chan *pb.WatchResponse),
reqc: make(chan *watchRequest),
stopc: make(chan struct{}),
donec: make(chan struct{}),
errc: make(chan error, 1),
respc: make(chan *pb.WatchResponse),
reqc: make(chan *watchRequest),
stopc: make(chan struct{}),
donec: make(chan struct{}),
errc: make(chan error, 1),
closingc: make(chan *watcherStream),
resumec: make(chan struct{}),
}
go wgs.run()
return wgs
@ -202,14 +227,24 @@ func (w *watcher) newWatcherGrpcStream(inctx context.Context) *watchGrpcStream {
func (w *watcher) Watch(ctx context.Context, key string, opts ...OpOption) WatchChan {
ow := opWatch(key, opts...)
retc := make(chan chan WatchResponse, 1)
var filters []pb.WatchCreateRequest_FilterType
if ow.filterPut {
filters = append(filters, pb.WatchCreateRequest_NOPUT)
}
if ow.filterDelete {
filters = append(filters, pb.WatchCreateRequest_NODELETE)
}
wr := &watchRequest{
ctx: ctx,
createdNotify: ow.createdNotify,
key: string(ow.key),
end: string(ow.end),
rev: ow.rev,
progressNotify: ow.progressNotify,
retc: retc,
filters: filters,
prevKV: ow.prevKV,
retc: make(chan chan WatchResponse, 1),
}
ok := false
@ -253,7 +288,7 @@ func (w *watcher) Watch(ctx context.Context, key string, opts ...OpOption) Watch
// receive channel
if ok {
select {
case ret := <-retc:
case ret := <-wr.retc:
return ret
case <-ctx.Done():
case <-donec:
@ -293,65 +328,57 @@ func (w *watchGrpcStream) Close() (err error) {
return toErr(w.ctx, err)
}
func (w *watchGrpcStream) addStream(resp *pb.WatchResponse, pendingReq *watchRequest) {
if pendingReq == nil {
// no pending request; ignore
return
}
if resp.Canceled || resp.CompactRevision != 0 {
// a cancel at id creation time means the start revision has
// been compacted out of the store
ret := make(chan WatchResponse, 1)
ret <- WatchResponse{
Header: *resp.Header,
CompactRevision: resp.CompactRevision,
Canceled: true}
close(ret)
pendingReq.retc <- ret
return
}
ret := make(chan WatchResponse)
if resp.WatchId == -1 {
// failed; no channel
close(ret)
pendingReq.retc <- ret
return
}
ws := &watcherStream{
initReq: *pendingReq,
id: resp.WatchId,
outc: ret,
// buffered so unlikely to block on sending while holding mu
recvc: make(chan *WatchResponse, 4),
resumec: make(chan int64),
}
if pendingReq.rev == 0 {
// note the header revision so that a put following a current watcher
// disconnect will arrive on the watcher channel after reconnect
ws.initReq.rev = resp.Header.Revision
}
func (w *watcher) closeStream(wgs *watchGrpcStream) {
w.mu.Lock()
w.streams[ws.id] = ws
close(wgs.donec)
wgs.cancel()
if w.streams != nil {
delete(w.streams, wgs.ctxKey)
}
w.mu.Unlock()
// pass back the subscriber channel for the watcher
pendingReq.retc <- ret
// send messages to subscriber
go w.serveStream(ws)
}
// closeStream closes the watcher resources and removes it
func (w *watchGrpcStream) closeStream(ws *watcherStream) {
// cancels request stream; subscriber receives nil channel
close(ws.initReq.retc)
// close subscriber's channel
func (w *watchGrpcStream) addSubstream(resp *pb.WatchResponse, ws *watcherStream) {
if resp.WatchId == -1 {
// failed; no channel
close(ws.recvc)
return
}
ws.id = resp.WatchId
w.substreams[ws.id] = ws
}
func (w *watchGrpcStream) sendCloseSubstream(ws *watcherStream, resp *WatchResponse) {
select {
case ws.outc <- *resp:
case <-ws.initReq.ctx.Done():
case <-time.After(closeSendErrTimeout):
}
close(ws.outc)
delete(w.streams, ws.id)
}
func (w *watchGrpcStream) closeSubstream(ws *watcherStream) {
// send channel response in case stream was never established
select {
case ws.initReq.retc <- ws.outc:
default:
}
// close subscriber's channel
if closeErr := w.closeErr; closeErr != nil && ws.initReq.ctx.Err() == nil {
go w.sendCloseSubstream(ws, &WatchResponse{closeErr: w.closeErr})
} else {
close(ws.outc)
}
if ws.id != -1 {
delete(w.substreams, ws.id)
return
}
for i := range w.resuming {
if w.resuming[i] == ws {
w.resuming[i] = nil
return
}
}
}
// run is the root of the goroutines for managing a watcher client
@ -359,15 +386,29 @@ func (w *watchGrpcStream) run() {
var wc pb.Watch_WatchClient
var closeErr error
// substreams marked to close but goroutine still running; needed for
// avoiding double-closing recvc on grpc stream teardown
closing := make(map[*watcherStream]struct{})
defer func() {
w.owner.mu.Lock()
w.closeErr = closeErr
if w.owner.streams != nil {
delete(w.owner.streams, w.ctxKey)
// shutdown substreams and resuming substreams
for _, ws := range w.substreams {
if _, ok := closing[ws]; !ok {
close(ws.recvc)
}
}
close(w.donec)
w.owner.mu.Unlock()
w.cancel()
for _, ws := range w.resuming {
if _, ok := closing[ws]; ws != nil && !ok {
close(ws.recvc)
}
}
w.joinSubstreams()
for toClose := len(w.substreams) + len(w.resuming); toClose > 0; toClose-- {
w.closeSubstream(<-w.closingc)
}
w.owner.closeStream(w)
}()
// start a stream with the etcd grpc server
@ -375,42 +416,49 @@ func (w *watchGrpcStream) run() {
return
}
var pendingReq, failedReq *watchRequest
curReqC := w.reqc
cancelSet := make(map[int64]struct{})
for {
select {
// Watch() requested
case pendingReq = <-curReqC:
// no more watch requests until there's a response
curReqC = nil
if err := wc.Send(pendingReq.toPB()); err == nil {
// pendingReq now waits on w.respc
break
case wreq := <-w.reqc:
outc := make(chan WatchResponse, 1)
ws := &watcherStream{
initReq: *wreq,
id: -1,
outc: outc,
// unbuffered so resumes won't cause repeat events
recvc: make(chan *WatchResponse),
}
ws.donec = make(chan struct{})
go w.serveSubstream(ws, w.resumec)
// queue up for watcher creation/resume
w.resuming = append(w.resuming, ws)
if len(w.resuming) == 1 {
// head of resume queue, can register a new watcher
wc.Send(ws.initReq.toPB())
}
failedReq = pendingReq
// New events from the watch client
case pbresp := <-w.respc:
switch {
case pbresp.Created:
// response to pending req, try to add
w.addStream(pbresp, pendingReq)
pendingReq = nil
curReqC = w.reqc
// response to head of queue creation
if ws := w.resuming[0]; ws != nil {
w.addSubstream(pbresp, ws)
w.dispatchEvent(pbresp)
w.resuming[0] = nil
}
if ws := w.nextResume(); ws != nil {
wc.Send(ws.initReq.toPB())
}
case pbresp.Canceled:
delete(cancelSet, pbresp.WatchId)
// shutdown serveStream, if any
w.mu.Lock()
if ws, ok := w.streams[pbresp.WatchId]; ok {
if ws, ok := w.substreams[pbresp.WatchId]; ok {
// signal to stream goroutine to update closingc
close(ws.recvc)
delete(w.streams, ws.id)
}
numStreams := len(w.streams)
w.mu.Unlock()
if numStreams == 0 {
// don't leak watcher streams
return
closing[ws] = struct{}{}
}
default:
// dispatch to appropriate watch stream
@ -431,57 +479,66 @@ func (w *watchGrpcStream) run() {
wc.Send(req)
}
// watch client failed to recv; spawn another if possible
// TODO report watch client errors from errc?
case err := <-w.errc:
if toErr(w.ctx, err) == v3rpc.ErrNoLeader {
if isHaltErr(w.ctx, err) || toErr(w.ctx, err) == v3rpc.ErrNoLeader {
closeErr = err
return
}
if wc, closeErr = w.newWatchClient(); closeErr != nil {
return
}
curReqC = w.reqc
if pendingReq != nil {
failedReq = pendingReq
if ws := w.nextResume(); ws != nil {
wc.Send(ws.initReq.toPB())
}
cancelSet = make(map[int64]struct{})
case <-w.stopc:
return
}
// send failed; queue for retry
if failedReq != nil {
go func(wr *watchRequest) {
select {
case w.reqc <- wr:
case <-wr.ctx.Done():
case <-w.donec:
}
}(pendingReq)
failedReq = nil
pendingReq = nil
case ws := <-w.closingc:
w.closeSubstream(ws)
delete(closing, ws)
if len(w.substreams)+len(w.resuming) == 0 {
// no more watchers on this stream, shutdown
return
}
}
}
}
// nextResume chooses the next resuming watcher to register with the grpc stream.
// Abandoned streams are marked as nil in the queue since the head must wait for its inflight registration.
func (w *watchGrpcStream) nextResume() *watcherStream {
for len(w.resuming) != 0 {
if w.resuming[0] != nil {
return w.resuming[0]
}
w.resuming = w.resuming[1:len(w.resuming)]
}
return nil
}
// dispatchEvent sends a WatchResponse to the appropriate watcher stream
func (w *watchGrpcStream) dispatchEvent(pbresp *pb.WatchResponse) bool {
	ws, ok := w.substreams[pbresp.WatchId]
	if !ok {
		return false
	}
	events := make([]*Event, len(pbresp.Events))
	for i, ev := range pbresp.Events {
		events[i] = (*Event)(ev)
	}
	wr := &WatchResponse{
		Header:          *pbresp.Header,
		Events:          events,
		CompactRevision: pbresp.CompactRevision,
		Created:         pbresp.Created,
		Canceled:        pbresp.Canceled,
	}
	select {
	case ws.recvc <- wr:
	case <-ws.donec:
		return false
	}
	return true
}
// serveWatchClient forwards messages from the grpc stream to run()
@ -503,111 +560,114 @@ func (w *watchGrpcStream) serveWatchClient(wc pb.Watch_WatchClient) {
	}
}
// serveSubstream forwards watch responses from run() to the subscriber
func (w *watchGrpcStream) serveSubstream(ws *watcherStream, resumec chan struct{}) {
	if ws.closing {
		panic("created substream goroutine but substream is closing")
	}

	// nextRev is the minimum expected next revision
	nextRev := ws.initReq.rev
	resuming := false
	defer func() {
		if !resuming {
			ws.closing = true
		}
		close(ws.donec)
		if !resuming {
			w.closingc <- ws
		}
	}()

	emptyWr := &WatchResponse{}
	for {
		curWr := emptyWr
		outc := ws.outc

		if len(ws.buf) > 0 && ws.buf[0].Created {
			select {
			case ws.initReq.retc <- ws.outc:
				// send first creation event only if requested
				if !ws.initReq.createdNotify {
					ws.buf = ws.buf[1:]
				}
			default:
			}
		}

		if len(ws.buf) > 0 {
			curWr = ws.buf[0]
		} else {
			outc = nil
		}

		select {
		case outc <- *curWr:
			if ws.buf[0].Err() != nil {
				return
			}
			ws.buf[0] = nil
			ws.buf = ws.buf[1:]
		case wr, ok := <-ws.recvc:
			if !ok {
				// shutdown from closeSubstream
				return
			}
			// TODO pause channel if buffer gets too large
			ws.buf = append(ws.buf, wr)
			nextRev = wr.Header.Revision
			if len(wr.Events) > 0 {
				nextRev = wr.Events[len(wr.Events)-1].Kv.ModRevision + 1
			}
			ws.initReq.rev = nextRev
		case <-ws.initReq.ctx.Done():
			return
		case <-resumec:
			resuming = true
			return
		}
	}
	// lazily send cancel message if events on missing id
}
func (w *watchGrpcStream) newWatchClient() (pb.Watch_WatchClient, error) {
	// connect to grpc stream
	wc, err := w.openWatchClient()
	if err != nil {
		return nil, v3rpc.Error(err)
	}
	// mark all substreams as resuming
	if len(w.substreams)+len(w.resuming) > 0 {
		close(w.resumec)
		w.resumec = make(chan struct{})
		w.joinSubstreams()
		for _, ws := range w.substreams {
			ws.id = -1
			w.resuming = append(w.resuming, ws)
		}
		for _, ws := range w.resuming {
			if ws == nil || ws.closing {
				continue
			}
			ws.donec = make(chan struct{})
			go w.serveSubstream(ws, w.resumec)
		}
	}
	w.substreams = make(map[int64]*watcherStream)
	// receive data from new grpc stream
	go w.serveWatchClient(wc)
	return wc, nil
}
// joinSubstreams waits for all substream goroutines to complete
func (w *watchGrpcStream) joinSubstreams() {
	for _, ws := range w.substreams {
		<-ws.donec
	}
	for _, ws := range w.resuming {
		if ws != nil {
			<-ws.donec
		}
	}
}
// openWatchClient retries opening a watchclient until retryConnection fails
@ -616,12 +676,12 @@ func (w *watchGrpcStream) openWatchClient() (ws pb.Watch_WatchClient, err error)
		select {
		case <-w.stopc:
			if err == nil {
				return nil, context.Canceled
			}
			return nil, err
		default:
		}
		if ws, err = w.remote.Watch(w.ctx, grpc.FailFast(false)); ws != nil && err == nil {
			break
		}
		if isHaltErr(w.ctx, err) {
@ -631,48 +691,6 @@ func (w *watchGrpcStream) openWatchClient() (ws pb.Watch_WatchClient, err error)
	return ws, nil
}
// resumeWatchers rebuilds every registered watcher on a new client
func (w *watchGrpcStream) resumeWatchers(wc pb.Watch_WatchClient) error {
	w.mu.RLock()
	streams := make([]*watcherStream, 0, len(w.streams))
	for _, ws := range w.streams {
		streams = append(streams, ws)
	}
	w.mu.RUnlock()

	for _, ws := range streams {
		// pause serveStream
		ws.resumec <- -1

		// reconstruct watcher from initial request
		if ws.lastRev != 0 {
			ws.initReq.rev = ws.lastRev
		}
		if err := wc.Send(ws.initReq.toPB()); err != nil {
			return err
		}

		// wait for request ack
		resp, err := wc.Recv()
		if err != nil {
			return err
		} else if len(resp.Events) != 0 || !resp.Created {
			return fmt.Errorf("watcher: unexpected response (%+v)", resp)
		}

		// id may be different since new remote watcher; update map
		w.mu.Lock()
		delete(w.streams, ws.id)
		ws.id = resp.WatchId
		w.streams[ws.id] = ws
		w.mu.Unlock()

		// unpause serveStream
		ws.resumec <- ws.lastRev
	}
	return nil
}
// toPB converts an internal watch request structure to its protobuf WatchRequest structure
func (wr *watchRequest) toPB() *pb.WatchRequest {
	req := &pb.WatchCreateRequest{
@ -680,6 +698,8 @@ func (wr *watchRequest) toPB() *pb.WatchRequest {
		Key:            []byte(wr.key),
		RangeEnd:       []byte(wr.end),
		ProgressNotify: wr.progressNotify,
		Filters:        wr.filters,
		PrevKv:         wr.prevKV,
	}
	cr := &pb.WatchRequest_CreateRequest{CreateRequest: req}
	return &pb.WatchRequest{RequestUnion: cr}
}
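The two fields added above (Filters, PrevKv) surface through the client API as watch options. As a rough usage sketch, not part of this diff, assuming the v3.1-era clientv3 options WithPrevKV and WithFilterDelete and an illustrative endpoint:

```go
package main

import (
	"fmt"

	"golang.org/x/net/context"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// WithPrevKV asks the server to populate Event.PrevKv (the PrevKv field above);
	// WithFilterDelete populates Filters so delete events are dropped server-side.
	wch := cli.Watch(context.Background(), "foo",
		clientv3.WithPrevKV(), clientv3.WithFilterDelete())
	for wresp := range wch {
		for _, ev := range wresp.Events {
			fmt.Printf("%s %q -> %q\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}
```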

cmd/Godeps/Godeps.json (generated)

@ -1,281 +0,0 @@
{
"ImportPath": "github.com/coreos/etcd",
"GoVersion": "go1.6",
"GodepVersion": "v74",
"Packages": [
"./..."
],
"Deps": [
{
"ImportPath": "bitbucket.org/ww/goautoneg",
"Comment": "null-5",
"Rev": "'75cd24fc2f2c2a2088577d12123ddee5f54e0675'"
},
{
"ImportPath": "github.com/akrennmair/gopcap",
"Rev": "00e11033259acb75598ba416495bb708d864a010"
},
{
"ImportPath": "github.com/beorn7/perks/quantile",
"Rev": "b965b613227fddccbfffe13eae360ed3fa822f8d"
},
{
"ImportPath": "github.com/bgentry/speakeasy",
"Rev": "36e9cfdd690967f4f690c6edcc9ffacd006014a0"
},
{
"ImportPath": "github.com/boltdb/bolt",
"Comment": "v1.2.1",
"Rev": "dfb21201d9270c1082d5fb0f07f500311ff72f18"
},
{
"ImportPath": "github.com/cockroachdb/cmux",
"Rev": "112f0506e7743d64a6eb8fedbcff13d9979bbf92"
},
{
"ImportPath": "github.com/coreos/go-semver/semver",
"Rev": "568e959cd89871e61434c1143528d9162da89ef2"
},
{
"ImportPath": "github.com/coreos/go-systemd/daemon",
"Comment": "v3-6-gcea488b",
"Rev": "cea488b4e6855fee89b6c22a811e3c5baca861b6"
},
{
"ImportPath": "github.com/coreos/go-systemd/journal",
"Comment": "v3-6-gcea488b",
"Rev": "cea488b4e6855fee89b6c22a811e3c5baca861b6"
},
{
"ImportPath": "github.com/coreos/go-systemd/util",
"Comment": "v3-6-gcea488b",
"Rev": "cea488b4e6855fee89b6c22a811e3c5baca861b6"
},
{
"ImportPath": "github.com/coreos/pkg/capnslog",
"Comment": "v2-8-gfa29b1d",
"Rev": "fa29b1d70f0beaddd4c7021607cc3c3be8ce94b8"
},
{
"ImportPath": "github.com/cpuguy83/go-md2man/md2man",
"Comment": "v1.0.4",
"Rev": "71acacd42f85e5e82f70a55327789582a5200a90"
},
{
"ImportPath": "github.com/dustin/go-humanize",
"Rev": "8929fe90cee4b2cb9deb468b51fb34eba64d1bf0"
},
{
"ImportPath": "github.com/gengo/grpc-gateway/runtime",
"Rev": "dcb844349dc5d2cb0300fdc4d2d374839d0d2e13"
},
{
"ImportPath": "github.com/gengo/grpc-gateway/runtime/internal",
"Rev": "dcb844349dc5d2cb0300fdc4d2d374839d0d2e13"
},
{
"ImportPath": "github.com/gengo/grpc-gateway/utilities",
"Rev": "dcb844349dc5d2cb0300fdc4d2d374839d0d2e13"
},
{
"ImportPath": "github.com/ghodss/yaml",
"Rev": "73d445a93680fa1a78ae23a5839bad48f32ba1ee"
},
{
"ImportPath": "github.com/gogo/protobuf/proto",
"Comment": "v0.2-13-gc3995ae",
"Rev": "c3995ae437bb78d1189f4f147dfe5f87ad3596e4"
},
{
"ImportPath": "github.com/golang/glog",
"Rev": "44145f04b68cf362d9c4df2182967c2275eaefed"
},
{
"ImportPath": "github.com/golang/groupcache/lru",
"Rev": "02826c3e79038b59d737d3b1c0a1d937f71a4433"
},
{
"ImportPath": "github.com/golang/protobuf/jsonpb",
"Rev": "8616e8ee5e20a1704615e6c8d7afcdac06087a67"
},
{
"ImportPath": "github.com/golang/protobuf/proto",
"Rev": "8616e8ee5e20a1704615e6c8d7afcdac06087a67"
},
{
"ImportPath": "github.com/google/btree",
"Rev": "7d79101e329e5a3adf994758c578dab82b90c017"
},
{
"ImportPath": "github.com/inconshreveable/mousetrap",
"Rev": "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
},
{
"ImportPath": "github.com/jonboulle/clockwork",
"Rev": "72f9bd7c4e0c2a40055ab3d0f09654f730cce982"
},
{
"ImportPath": "github.com/kballard/go-shellquote",
"Rev": "d8ec1a69a250a17bb0e419c386eac1f3711dc142"
},
{
"ImportPath": "github.com/kr/pty",
"Comment": "release.r56-29-gf7ee69f",
"Rev": "f7ee69f31298ecbe5d2b349c711e2547a617d398"
},
{
"ImportPath": "github.com/mattn/go-runewidth",
"Comment": "v0.0.1",
"Rev": "d6bea18f789704b5f83375793155289da36a3c7f"
},
{
"ImportPath": "github.com/matttproud/golang_protobuf_extensions/pbutil",
"Rev": "fc2b8d3a73c4867e51861bbdd5ae3c1f0869dd6a"
},
{
"ImportPath": "github.com/olekukonko/tablewriter",
"Rev": "cca8bbc0798408af109aaaa239cbd2634846b340"
},
{
"ImportPath": "github.com/prometheus/client_golang/prometheus",
"Comment": "0.7.0-52-ge51041b",
"Rev": "e51041b3fa41cece0dca035740ba6411905be473"
},
{
"ImportPath": "github.com/prometheus/client_model/go",
"Comment": "model-0.0.2-12-gfa8ad6f",
"Rev": "fa8ad6fec33561be4280a8f0514318c79d7f6cb6"
},
{
"ImportPath": "github.com/prometheus/common/expfmt",
"Rev": "ffe929a3f4c4faeaa10f2b9535c2b1be3ad15650"
},
{
"ImportPath": "github.com/prometheus/common/model",
"Rev": "ffe929a3f4c4faeaa10f2b9535c2b1be3ad15650"
},
{
"ImportPath": "github.com/prometheus/procfs",
"Rev": "454a56f35412459b5e684fd5ec0f9211b94f002a"
},
{
"ImportPath": "github.com/russross/blackfriday",
"Comment": "v1.4-2-g300106c",
"Rev": "300106c228d52c8941d4b3de6054a6062a86dda3"
},
{
"ImportPath": "github.com/shurcooL/sanitized_anchor_name",
"Rev": "10ef21a441db47d8b13ebcc5fd2310f636973c77"
},
{
"ImportPath": "github.com/spacejam/loghisto",
"Rev": "323309774dec8b7430187e46cd0793974ccca04a"
},
{
"ImportPath": "github.com/spf13/cobra",
"Rev": "1c44ec8d3f1552cac48999f9306da23c4d8a288b"
},
{
"ImportPath": "github.com/spf13/pflag",
"Rev": "08b1a584251b5b62f458943640fc8ebd4d50aaa5"
},
{
"ImportPath": "github.com/stretchr/testify/assert",
"Rev": "9cc77fa25329013ce07362c7742952ff887361f2"
},
{
"ImportPath": "github.com/ugorji/go/codec",
"Rev": "f1f1a805ed361a0e078bb537e4ea78cd37dcf065"
},
{
"ImportPath": "github.com/urfave/cli",
"Comment": "v1.17.0-79-g6011f16",
"Rev": "6011f165dc288c72abd8acd7722f837c5c64198d"
},
{
"ImportPath": "github.com/xiang90/probing",
"Rev": "6a0cc1ae81b4cc11db5e491e030e4b98fba79c19"
},
{
"ImportPath": "golang.org/x/crypto/bcrypt",
"Rev": "1351f936d976c60a0a48d728281922cf63eafb8d"
},
{
"ImportPath": "golang.org/x/crypto/blowfish",
"Rev": "1351f936d976c60a0a48d728281922cf63eafb8d"
},
{
"ImportPath": "golang.org/x/net/context",
"Rev": "6acef71eb69611914f7a30939ea9f6e194c78172"
},
{
"ImportPath": "golang.org/x/net/http2",
"Rev": "6acef71eb69611914f7a30939ea9f6e194c78172"
},
{
"ImportPath": "golang.org/x/net/http2/hpack",
"Rev": "6acef71eb69611914f7a30939ea9f6e194c78172"
},
{
"ImportPath": "golang.org/x/net/internal/timeseries",
"Rev": "6acef71eb69611914f7a30939ea9f6e194c78172"
},
{
"ImportPath": "golang.org/x/net/trace",
"Rev": "6acef71eb69611914f7a30939ea9f6e194c78172"
},
{
"ImportPath": "golang.org/x/sys/unix",
"Rev": "9c60d1c508f5134d1ca726b4641db998f2523357"
},
{
"ImportPath": "golang.org/x/time/rate",
"Rev": "a4bde12657593d5e90d0533a3e4fd95e635124cb"
},
{
"ImportPath": "google.golang.org/grpc",
"Rev": "e78224b060cf3215247b7be455f80ea22e469b66"
},
{
"ImportPath": "google.golang.org/grpc/codes",
"Rev": "e78224b060cf3215247b7be455f80ea22e469b66"
},
{
"ImportPath": "google.golang.org/grpc/credentials",
"Rev": "e78224b060cf3215247b7be455f80ea22e469b66"
},
{
"ImportPath": "google.golang.org/grpc/grpclog",
"Rev": "e78224b060cf3215247b7be455f80ea22e469b66"
},
{
"ImportPath": "google.golang.org/grpc/internal",
"Rev": "e78224b060cf3215247b7be455f80ea22e469b66"
},
{
"ImportPath": "google.golang.org/grpc/metadata",
"Rev": "e78224b060cf3215247b7be455f80ea22e469b66"
},
{
"ImportPath": "google.golang.org/grpc/naming",
"Rev": "e78224b060cf3215247b7be455f80ea22e469b66"
},
{
"ImportPath": "google.golang.org/grpc/peer",
"Rev": "e78224b060cf3215247b7be455f80ea22e469b66"
},
{
"ImportPath": "google.golang.org/grpc/transport",
"Rev": "e78224b060cf3215247b7be455f80ea22e469b66"
},
{
"ImportPath": "gopkg.in/cheggaaa/pb.v1",
"Comment": "v1.0.1",
"Rev": "29ad9b62f9e0274422d738242b94a5b89440bfa6"
},
{
"ImportPath": "gopkg.in/yaml.v2",
"Rev": "53feefa2559fb8dfa8d81baad31be332c97d6c77"
}
]
}

cmd/Godeps/Readme (generated)

@ -1,5 +0,0 @@
This directory tree is generated automatically by godep.
Please do not edit.
See https://github.com/tools/godep for more information.

cmd/etcd (symbolic link)

@ -0,0 +1 @@
../


@ -1 +0,0 @@
../etcdmain


@ -1 +0,0 @@
../main.go


@ -1,13 +0,0 @@
include $(GOROOT)/src/Make.inc

TARG=bitbucket.org/ww/goautoneg
GOFILES=autoneg.go

include $(GOROOT)/src/Make.pkg

format:
	gofmt -w *.go

docs:
	gomake clean
	godoc ${TARG} > README.txt


@ -1,67 +0,0 @@
PACKAGE
package goautoneg
import "bitbucket.org/ww/goautoneg"
HTTP Content-Type Autonegotiation.
The functions in this package implement the behaviour specified in
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
Copyright (c) 2011, Open Knowledge Foundation Ltd.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
Neither the name of the Open Knowledge Foundation Ltd. nor the
names of its contributors may be used to endorse or promote
products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
FUNCTIONS
func Negotiate(header string, alternatives []string) (content_type string)
Negotiate the most appropriate content_type given the accept header
and a list of alternatives.
func ParseAccept(header string) (accept []Accept)
Parse an Accept Header string returning a sorted list
of clauses
TYPES
type Accept struct {
Type, SubType string
Q float32
Params map[string]string
}
Structure to represent a clause in an HTTP Accept Header
SUBDIRECTORIES
.hg


@ -1,5 +0,0 @@
#*
*~
/tools/pass/pass
/tools/pcaptest/pcaptest
/tools/tcpdump/tcpdump


@ -1,11 +0,0 @@
# PCAP
This is a simple wrapper around libpcap for Go. Originally written by Andreas
Krennmair <ak@synflood.at> and only minorly touched up by Mark Smith <mark@qq.is>.
Please see the included pcaptest.go and tcpdump.go programs for instructions on
how to use this library.
Miek Gieben <miek@miek.nl> has created a more Go-like package and replaced functionality
with standard functions from the standard library. The package has also been renamed to
pcap.

File diff suppressed because it is too large


@ -1,2 +0,0 @@
example/example
example/example.exe


@ -1,30 +0,0 @@
# Speakeasy
This package provides cross-platform Go (#golang) helpers for taking user input
from the terminal while not echoing the input back (similar to `getpasswd`). The
package uses syscalls to avoid any dependence on cgo, and is therefore
compatible with cross-compiling.
[![GoDoc](https://godoc.org/github.com/bgentry/speakeasy?status.png)][godoc]
## Unicode
Multi-byte unicode characters work successfully on Mac OS X. On Windows,
however, this may be problematic (as is UTF in general on Windows). Other
platforms have not been tested.
## License
The code herein was not written by me, but was compiled from two separate open
source packages. Unix portions were imported from [gopass][gopass], while
Windows portions were imported from the [CloudFoundry Go CLI][cf-cli]'s
[Windows terminal helpers][cf-ui-windows].
The [license for the windows portion](./LICENSE_WINDOWS) has been copied exactly
from the source (though I attempted to fill in the correct owner in the
boilerplate copyright notice).
[cf-cli]: https://github.com/cloudfoundry/cli "CloudFoundry Go CLI"
[cf-ui-windows]: https://github.com/cloudfoundry/cli/blob/master/src/cf/terminal/ui_windows.go "CloudFoundry Go CLI Windows input helpers"
[godoc]: https://godoc.org/github.com/bgentry/speakeasy "speakeasy on Godoc.org"
[gopass]: https://code.google.com/p/gopass "gopass"


@ -1,4 +0,0 @@
*.prof
*.test
*.swp
/bin/


@ -1,18 +0,0 @@
BRANCH=`git rev-parse --abbrev-ref HEAD`
COMMIT=`git rev-parse --short HEAD`
GOLDFLAGS="-X main.branch $(BRANCH) -X main.commit $(COMMIT)"
default: build
race:
@go test -v -race -test.run="TestSimulate_(100op|1000op)"
# go get github.com/kisielk/errcheck
errcheck:
@errcheck -ignorepkg=bytes -ignore=os:Remove github.com/boltdb/bolt
test:
@go test -v -cover .
@go test -v ./cmd/bolt
.PHONY: fmt test


@ -1,850 +0,0 @@
Bolt [![Coverage Status](https://coveralls.io/repos/boltdb/bolt/badge.svg?branch=master)](https://coveralls.io/r/boltdb/bolt?branch=master) [![GoDoc](https://godoc.org/github.com/boltdb/bolt?status.svg)](https://godoc.org/github.com/boltdb/bolt) ![Version](https://img.shields.io/badge/version-1.0-green.svg)
====
Bolt is a pure Go key/value store inspired by [Howard Chu's][hyc_symas]
[LMDB project][lmdb]. The goal of the project is to provide a simple,
fast, and reliable database for projects that don't require a full database
server such as Postgres or MySQL.
Since Bolt is meant to be used as such a low-level piece of functionality,
simplicity is key. The API will be small and only focus on getting values
and setting values. That's it.
[hyc_symas]: https://twitter.com/hyc_symas
[lmdb]: http://symas.com/mdb/
## Project Status
Bolt is stable and the API is fixed. Full unit test coverage and randomized
black box testing are used to ensure database consistency and thread safety.
Bolt is currently in high-load production environments serving databases as
large as 1TB. Many companies such as Shopify and Heroku use Bolt-backed
services every day.
## Table of Contents
- [Getting Started](#getting-started)
- [Installing](#installing)
- [Opening a database](#opening-a-database)
- [Transactions](#transactions)
- [Read-write transactions](#read-write-transactions)
- [Read-only transactions](#read-only-transactions)
- [Batch read-write transactions](#batch-read-write-transactions)
- [Managing transactions manually](#managing-transactions-manually)
- [Using buckets](#using-buckets)
- [Using key/value pairs](#using-keyvalue-pairs)
- [Autoincrementing integer for the bucket](#autoincrementing-integer-for-the-bucket)
- [Iterating over keys](#iterating-over-keys)
- [Prefix scans](#prefix-scans)
- [Range scans](#range-scans)
- [ForEach()](#foreach)
- [Nested buckets](#nested-buckets)
- [Database backups](#database-backups)
- [Statistics](#statistics)
- [Read-Only Mode](#read-only-mode)
- [Mobile Use (iOS/Android)](#mobile-use-iosandroid)
- [Resources](#resources)
- [Comparison with other databases](#comparison-with-other-databases)
- [Postgres, MySQL, & other relational databases](#postgres-mysql--other-relational-databases)
- [LevelDB, RocksDB](#leveldb-rocksdb)
- [LMDB](#lmdb)
- [Caveats & Limitations](#caveats--limitations)
- [Reading the Source](#reading-the-source)
- [Other Projects Using Bolt](#other-projects-using-bolt)
## Getting Started
### Installing
To start using Bolt, install Go and run `go get`:
```sh
$ go get github.com/boltdb/bolt/...
```
This will retrieve the library and install the `bolt` command line utility into
your `$GOBIN` path.
### Opening a database
The top-level object in Bolt is a `DB`. It is represented as a single file on
your disk and represents a consistent snapshot of your data.
To open your database, simply use the `bolt.Open()` function:
```go
package main
import (
"log"
"github.com/boltdb/bolt"
)
func main() {
// Open the my.db data file in your current directory.
// It will be created if it doesn't exist.
db, err := bolt.Open("my.db", 0600, nil)
if err != nil {
log.Fatal(err)
}
defer db.Close()
...
}
```
Please note that Bolt obtains a file lock on the data file so multiple processes
cannot open the same database at the same time. Opening an already open Bolt
database will cause it to hang until the other process closes it. To prevent
an indefinite wait you can pass a timeout option to the `Open()` function:
```go
db, err := bolt.Open("my.db", 0600, &bolt.Options{Timeout: 1 * time.Second})
```
### Transactions
Bolt allows only one read-write transaction at a time but allows as many
read-only transactions as you want at a time. Each transaction has a consistent
view of the data as it existed when the transaction started.
Individual transactions and all objects created from them (e.g. buckets, keys)
are not thread safe. To work with data in multiple goroutines you must start
a transaction for each one or use locking to ensure only one goroutine accesses
a transaction at a time. Creating transaction from the `DB` is thread safe.
Read-only transactions and read-write transactions should not depend on one
another and generally shouldn't be opened simultaneously in the same goroutine.
This can cause a deadlock as the read-write transaction needs to periodically
re-map the data file but it cannot do so while a read-only transaction is open.
#### Read-write transactions
To start a read-write transaction, you can use the `DB.Update()` function:
```go
err := db.Update(func(tx *bolt.Tx) error {
...
return nil
})
```
Inside the closure, you have a consistent view of the database. You commit the
transaction by returning `nil` at the end. You can also rollback the transaction
at any point by returning an error. All database operations are allowed inside
a read-write transaction.
Always check the return error as it will report any disk failures that can cause
your transaction to not complete. If you return an error within your closure
it will be passed through.
#### Read-only transactions
To start a read-only transaction, you can use the `DB.View()` function:
```go
err := db.View(func(tx *bolt.Tx) error {
...
return nil
})
```
You also get a consistent view of the database within this closure, however,
no mutating operations are allowed within a read-only transaction. You can only
retrieve buckets, retrieve values, and copy the database within a read-only
transaction.
#### Batch read-write transactions
Each `DB.Update()` waits for disk to commit the writes. This overhead
can be minimized by combining multiple updates with the `DB.Batch()`
function:
```go
err := db.Batch(func(tx *bolt.Tx) error {
...
return nil
})
```
Concurrent Batch calls are opportunistically combined into larger
transactions. Batch is only useful when there are multiple goroutines
calling it.
The trade-off is that `Batch` can call the given
function multiple times, if parts of the transaction fail. The
function must be idempotent and side effects must take effect only
after a successful return from `DB.Batch()`.
For example: don't display messages from inside the function, instead
set variables in the enclosing scope:
```go
var id uint64
err := db.Batch(func(tx *bolt.Tx) error {
// Find last key in bucket, decode as bigendian uint64, increment
// by one, encode back to []byte, and add new key.
...
id = newValue
return nil
})
if err != nil {
return ...
}
fmt.Printf("Allocated ID %d\n", id)
```
#### Managing transactions manually
The `DB.View()` and `DB.Update()` functions are wrappers around the `DB.Begin()`
function. These helper functions will start the transaction, execute a function,
and then safely close your transaction if an error is returned. This is the
recommended way to use Bolt transactions.
However, sometimes you may want to manually start and end your transactions.
You can use the `DB.Begin()` function directly but **please** be sure to close
the transaction.
```go
// Start a writable transaction.
tx, err := db.Begin(true)
if err != nil {
return err
}
defer tx.Rollback()
// Use the transaction...
_, err = tx.CreateBucket([]byte("MyBucket"))
if err != nil {
return err
}
// Commit the transaction and check for error.
if err := tx.Commit(); err != nil {
return err
}
```
The first argument to `DB.Begin()` is a boolean stating if the transaction
should be writable.
### Using buckets
Buckets are collections of key/value pairs within the database. All keys in a
bucket must be unique. You can create a bucket using the `DB.CreateBucket()`
function:
```go
db.Update(func(tx *bolt.Tx) error {
b, err := tx.CreateBucket([]byte("MyBucket"))
if err != nil {
return fmt.Errorf("create bucket: %s", err)
}
return nil
})
```
You can also create a bucket only if it doesn't exist by using the
`Tx.CreateBucketIfNotExists()` function. It's a common pattern to call this
function for all your top-level buckets after you open your database so you can
guarantee that they exist for future transactions.
To delete a bucket, simply call the `Tx.DeleteBucket()` function.
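A minimal sketch of that ensure-at-startup pattern (the bucket names here are illustrative):

```go
err := db.Update(func(tx *bolt.Tx) error {
	// Ensure all top-level buckets exist so later transactions can assume them.
	for _, name := range []string{"users", "events"} {
		if _, err := tx.CreateBucketIfNotExists([]byte(name)); err != nil {
			return fmt.Errorf("create bucket %s: %s", name, err)
		}
	}
	return nil
})
```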
### Using key/value pairs
To save a key/value pair to a bucket, use the `Bucket.Put()` function:
```go
db.Update(func(tx *bolt.Tx) error {
b := tx.Bucket([]byte("MyBucket"))
err := b.Put([]byte("answer"), []byte("42"))
return err
})
```
This will set the value of the `"answer"` key to `"42"` in the `MyBucket`
bucket. To retrieve this value, we can use the `Bucket.Get()` function:
```go
db.View(func(tx *bolt.Tx) error {
b := tx.Bucket([]byte("MyBucket"))
v := b.Get([]byte("answer"))
fmt.Printf("The answer is: %s\n", v)
return nil
})
```
The `Get()` function does not return an error because its operation is
guaranteed to work (unless there is some kind of system failure). If the key
exists then it will return its byte slice value. If it doesn't exist then it
will return `nil`. It's important to note that you can have a zero-length value
set to a key which is different than the key not existing.
Use the `Bucket.Delete()` function to delete a key from the bucket.
Please note that values returned from `Get()` are only valid while the
transaction is open. If you need to use a value outside of the transaction
then you must use `copy()` to copy it to another byte slice.
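For instance, a sketch of copying a value out before the transaction closes (bucket and key names illustrative):

```go
var answer []byte
db.View(func(tx *bolt.Tx) error {
	v := tx.Bucket([]byte("MyBucket")).Get([]byte("answer"))
	if v != nil {
		// v points into the mmap'd file; copy it before the transaction ends.
		answer = make([]byte, len(v))
		copy(answer, v)
	}
	return nil
})
// answer remains valid here.
```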
### Autoincrementing integer for the bucket
By using the `NextSequence()` function, you can let Bolt determine a sequence
which can be used as the unique identifier for your key/value pairs. See the
example below.
```go
// CreateUser saves u to the store. The new user ID is set on u once the data is persisted.
func (s *Store) CreateUser(u *User) error {
return s.db.Update(func(tx *bolt.Tx) error {
// Retrieve the users bucket.
// This should be created when the DB is first opened.
b := tx.Bucket([]byte("users"))
// Generate ID for the user.
// This returns an error only if the Tx is closed or not writeable.
// That can't happen in an Update() call so I ignore the error check.
id, _ := b.NextSequence()
u.ID = int(id)
// Marshal user data into bytes.
buf, err := json.Marshal(u)
if err != nil {
return err
}
// Persist bytes to users bucket.
return b.Put(itob(u.ID), buf)
})
}
// itob returns an 8-byte big endian representation of v.
func itob(v int) []byte {
b := make([]byte, 8)
binary.BigEndian.PutUint64(b, uint64(v))
return b
}
type User struct {
ID int
...
}
```
### Iterating over keys
Bolt stores its keys in byte-sorted order within a bucket. This makes sequential
iteration over these keys extremely fast. To iterate over keys we'll use a
`Cursor`:
```go
db.View(func(tx *bolt.Tx) error {
// Assume bucket exists and has keys
b := tx.Bucket([]byte("MyBucket"))
c := b.Cursor()
for k, v := c.First(); k != nil; k, v = c.Next() {
fmt.Printf("key=%s, value=%s\n", k, v)
}
return nil
})
```
The cursor allows you to move to a specific point in the list of keys and move
forward or backward through the keys one at a time.
The following functions are available on the cursor:
```
First() Move to the first key.
Last() Move to the last key.
Seek() Move to a specific key.
Next() Move to the next key.
Prev() Move to the previous key.
```
Each of those functions has a return signature of `(key []byte, value []byte)`.
When you have iterated to the end of the cursor then `Next()` will return a
`nil` key. You must seek to a position using `First()`, `Last()`, or `Seek()`
before calling `Next()` or `Prev()`. If you do not seek to a position then
these functions will return a `nil` key.
During iteration, if the key is non-`nil` but the value is `nil`, that means
the key refers to a bucket rather than a value. Use `Bucket.Bucket()` to
access the sub-bucket.
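A short sketch of that check during iteration (bucket name illustrative):

```go
db.View(func(tx *bolt.Tx) error {
	b := tx.Bucket([]byte("MyBucket"))
	c := b.Cursor()
	for k, v := c.First(); k != nil; k, v = c.Next() {
		if v == nil {
			// nil value: k names a nested bucket, not a regular key.
			sub := b.Bucket(k)
			fmt.Printf("sub-bucket: %s (found=%v)\n", k, sub != nil)
			continue
		}
		fmt.Printf("key=%s, value=%s\n", k, v)
	}
	return nil
})
```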
#### Prefix scans
To iterate over a key prefix, you can combine `Seek()` and `bytes.HasPrefix()`:
```go
db.View(func(tx *bolt.Tx) error {
// Assume bucket exists and has keys
c := tx.Bucket([]byte("MyBucket")).Cursor()
prefix := []byte("1234")
for k, v := c.Seek(prefix); bytes.HasPrefix(k, prefix); k, v = c.Next() {
fmt.Printf("key=%s, value=%s\n", k, v)
}
return nil
})
```
#### Range scans
Another common use case is scanning over a range such as a time range. If you
use a sortable time encoding such as RFC3339 then you can query a specific
date range like this:
```go
db.View(func(tx *bolt.Tx) error {
// Assume our events bucket exists and has RFC3339 encoded time keys.
c := tx.Bucket([]byte("Events")).Cursor()
// Our time range spans the 90's decade.
min := []byte("1990-01-01T00:00:00Z")
max := []byte("2000-01-01T00:00:00Z")
// Iterate over the 90's.
for k, v := c.Seek(min); k != nil && bytes.Compare(k, max) <= 0; k, v = c.Next() {
fmt.Printf("%s: %s\n", k, v)
}
return nil
})
```
Note that, while RFC3339 is sortable, the Golang implementation of RFC3339Nano does not use a fixed number of digits after the decimal point and is therefore not sortable.
#### ForEach()
You can also use the function `ForEach()` if you know you'll be iterating over
all the keys in a bucket:
```go
db.View(func(tx *bolt.Tx) error {
// Assume bucket exists and has keys
b := tx.Bucket([]byte("MyBucket"))
b.ForEach(func(k, v []byte) error {
fmt.Printf("key=%s, value=%s\n", k, v)
return nil
})
return nil
})
```
### Nested buckets
You can also store a bucket in a key to create nested buckets. The API is the
same as the bucket management API on the `DB` object:
```go
func (*Bucket) CreateBucket(key []byte) (*Bucket, error)
func (*Bucket) CreateBucketIfNotExists(key []byte) (*Bucket, error)
func (*Bucket) DeleteBucket(key []byte) error
```
### Database backups
Bolt is a single file so it's easy to backup. You can use the `Tx.WriteTo()`
function to write a consistent view of the database to a writer. If you call
this from a read-only transaction, it will perform a hot backup and not block
your other database reads and writes.
By default, it will use a regular file handle which will utilize the operating
system's page cache. See the [`Tx`](https://godoc.org/github.com/boltdb/bolt#Tx)
documentation for information about optimizing for larger-than-RAM datasets.
One common use case is to backup over HTTP so you can use tools like `cURL` to
do database backups:
```go
func BackupHandleFunc(w http.ResponseWriter, req *http.Request) {
err := db.View(func(tx *bolt.Tx) error {
w.Header().Set("Content-Type", "application/octet-stream")
w.Header().Set("Content-Disposition", `attachment; filename="my.db"`)
w.Header().Set("Content-Length", strconv.Itoa(int(tx.Size())))
_, err := tx.WriteTo(w)
return err
})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
}
```
Then you can backup using this command:
```sh
$ curl http://localhost/backup > my.db
```
Or you can open your browser to `http://localhost/backup` and it will download
automatically.
If you want to backup to another file you can use the `Tx.CopyFile()` helper
function.
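For example, a sketch using `Tx.CopyFile()` (the destination path is illustrative):

```go
err := db.View(func(tx *bolt.Tx) error {
	// Write a consistent snapshot of the database to the given path.
	return tx.CopyFile("/tmp/my.db.bak", 0600)
})
```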
### Statistics
The database keeps a running count of many of the internal operations it
performs so you can better understand what's going on. By grabbing a snapshot
of these stats at two points in time we can see what operations were performed
in that time range.
For example, we could start a goroutine to log stats every 10 seconds:
```go
go func() {
// Grab the initial stats.
prev := db.Stats()
for {
// Wait for 10s.
time.Sleep(10 * time.Second)
// Grab the current stats and diff them.
stats := db.Stats()
diff := stats.Sub(&prev)
// Encode stats to JSON and print to STDERR.
json.NewEncoder(os.Stderr).Encode(diff)
// Save stats for the next loop.
prev = stats
}
}()
```
It's also useful to pipe these stats to a service such as statsd for monitoring
or to provide an HTTP endpoint that will perform a fixed-length sample.
### Read-Only Mode
Sometimes it is useful to create a shared, read-only Bolt database. To do this,
set the `Options.ReadOnly` flag when opening your database. Read-only mode
uses a shared lock to allow multiple processes to read from the database but
it will block any processes from opening the database in read-write mode.
```go
db, err := bolt.Open("my.db", 0666, &bolt.Options{ReadOnly: true})
if err != nil {
log.Fatal(err)
}
```
### Mobile Use (iOS/Android)
Bolt is able to run on mobile devices by leveraging the binding feature of the
[gomobile](https://github.com/golang/mobile) tool. Create a struct that will
contain your database logic and a reference to a `*bolt.DB`, with an initializing
constructor that takes in a file path where the database file will be stored.
Neither Android nor iOS requires extra permissions or cleanup from using this method.
```go
func NewBoltDB(filepath string) *BoltDB {
db, err := bolt.Open(filepath+"/demo.db", 0600, nil)
if err != nil {
log.Fatal(err)
}
return &BoltDB{db}
}
type BoltDB struct {
db *bolt.DB
...
}
func (b *BoltDB) Path() string {
return b.db.Path()
}
func (b *BoltDB) Close() {
b.db.Close()
}
```
Database logic should be defined as methods on this wrapper struct.
To initialize this struct from the native language (both platforms now sync
their local storage to the cloud. These snippets disable that functionality for the
database file):
#### Android
```java
String path;
if (android.os.Build.VERSION.SDK_INT >=android.os.Build.VERSION_CODES.LOLLIPOP){
path = getNoBackupFilesDir().getAbsolutePath();
} else{
path = getFilesDir().getAbsolutePath();
}
Boltmobiledemo.BoltDB boltDB = Boltmobiledemo.NewBoltDB(path)
```
#### iOS
```objc
- (void)demo {
NSString* path = [NSSearchPathForDirectoriesInDomains(NSLibraryDirectory,
NSUserDomainMask,
YES) objectAtIndex:0];
GoBoltmobiledemoBoltDB * demo = GoBoltmobiledemoNewBoltDB(path);
[self addSkipBackupAttributeToItemAtPath:demo.path];
//Some DB Logic would go here
[demo close];
}
- (BOOL)addSkipBackupAttributeToItemAtPath:(NSString *) filePathString
{
NSURL* URL= [NSURL fileURLWithPath: filePathString];
assert([[NSFileManager defaultManager] fileExistsAtPath: [URL path]]);
NSError *error = nil;
BOOL success = [URL setResourceValue: [NSNumber numberWithBool: YES]
forKey: NSURLIsExcludedFromBackupKey error: &error];
if(!success){
NSLog(@"Error excluding %@ from backup %@", [URL lastPathComponent], error);
}
return success;
}
```
## Resources
For more information on getting started with Bolt, check out the following articles:
* [Intro to BoltDB: Painless Performant Persistence](http://npf.io/2014/07/intro-to-boltdb-painless-performant-persistence/) by [Nate Finch](https://github.com/natefinch).
* [Bolt -- an embedded key/value database for Go](https://www.progville.com/go/bolt-embedded-db-golang/) by Progville
## Comparison with other databases
### Postgres, MySQL, & other relational databases
Relational databases structure data into rows and are only accessible through
the use of SQL. This approach provides flexibility in how you store and query
your data but also incurs overhead in parsing and planning SQL statements. Bolt
accesses all data by a byte slice key. This makes Bolt fast to read and write
data by key but provides no built-in support for joining values together.
Most relational databases (with the exception of SQLite) are standalone servers
that run separately from your application. This gives your systems
flexibility to connect multiple application servers to a single database
server but also adds overhead in serializing and transporting data over the
network. Bolt runs as a library included in your application so all data access
has to go through your application's process. This brings data closer to your
application but limits multi-process access to the data.
### LevelDB, RocksDB
LevelDB and its derivatives (RocksDB, HyperLevelDB) are similar to Bolt in that
they are libraries bundled into the application, however, their underlying
structure is a log-structured merge-tree (LSM tree). An LSM tree optimizes
random writes by using a write ahead log and multi-tiered, sorted files called
SSTables. Bolt uses a B+tree internally and only a single file. Both approaches
have trade-offs.
If you require a high random write throughput (>10,000 w/sec) or you need to use
spinning disks then LevelDB could be a good choice. If your application is
read-heavy or does a lot of range scans then Bolt could be a good choice.
One other important consideration is that LevelDB does not have transactions.
It supports batch writing of key/values pairs and it supports read snapshots
but it will not give you the ability to do a compare-and-swap operation safely.
Bolt supports fully serializable ACID transactions.
### LMDB
Bolt was originally a port of LMDB so it is architecturally similar. Both use
a B+tree, have ACID semantics with fully serializable transactions, and support
lock-free MVCC using a single writer and multiple readers.
The two projects have somewhat diverged. LMDB heavily focuses on raw performance
while Bolt has focused on simplicity and ease of use. For example, LMDB allows
several unsafe actions such as direct writes for the sake of performance. Bolt
opts to disallow actions which can leave the database in a corrupted state. The
only exception to this in Bolt is `DB.NoSync`.
There are also a few differences in API. LMDB requires a maximum mmap size when
opening an `mdb_env` whereas Bolt will handle incremental mmap resizing
automatically. LMDB overloads the getter and setter functions with multiple
flags whereas Bolt splits these specialized cases into their own functions.
## Caveats & Limitations
It's important to pick the right tool for the job and Bolt is no exception.
Here are a few things to note when evaluating and using Bolt:
* Bolt is good for read intensive workloads. Sequential write performance is
also fast but random writes can be slow. You can use `DB.Batch()` or add a
write-ahead log to help mitigate this issue.
* Bolt uses a B+tree internally so there can be a lot of random page access.
SSDs provide a significant performance boost over spinning disks.
* Try to avoid long running read transactions. Bolt uses copy-on-write so
old pages cannot be reclaimed while an old transaction is using them.
* Byte slices returned from Bolt are only valid during a transaction. Once the
transaction has been committed or rolled back then the memory they point to
can be reused by a new page or can be unmapped from virtual memory and you'll
see an `unexpected fault address` panic when accessing it.
* Be careful when using `Bucket.FillPercent`. Setting a high fill percent for
buckets that have random inserts will cause your database to have very poor
page utilization.
* Use larger buckets in general. Smaller buckets cause poor page utilization
once they become larger than the page size (typically 4KB).
* Bulk loading a lot of random writes into a new bucket can be slow as the
page will not split until the transaction is committed. Randomly inserting
more than 100,000 key/value pairs into a single new bucket in a single
transaction is not advised.
* Bolt uses a memory-mapped file so the underlying operating system handles the
caching of the data. Typically, the OS will cache as much of the file as it
can in memory and will release memory as needed to other processes. This means
that Bolt can show very high memory usage when working with large databases.
However, this is expected and the OS will release memory as needed. Bolt can
handle databases much larger than the available physical RAM, provided its
memory-map fits in the process virtual address space. It may be problematic
on 32-bit systems.
* The data structures in the Bolt database are memory mapped so the data file
will be endian specific. This means that you cannot copy a Bolt file from a
little endian machine to a big endian machine and have it work. For most
users this is not a concern since most modern CPUs are little endian.
* Because of the way pages are laid out on disk, Bolt cannot truncate data files
and return free pages back to the disk. Instead, Bolt maintains a free list
of unused pages within its data file. These free pages can be reused by later
transactions. This works well for many use cases as databases generally tend
to grow. However, it's important to note that deleting large chunks of data
will not allow you to reclaim that space on disk.
For more information on page allocation, [see this comment][page-allocation].
[page-allocation]: https://github.com/boltdb/bolt/issues/308#issuecomment-74811638
## Reading the Source
Bolt is a relatively small code base (<3KLOC) for an embedded, serializable,
transactional key/value database so it can be a good starting point for people
interested in how databases work.
The best places to start are the main entry points into Bolt:
- `Open()` - Initializes the reference to the database. It's responsible for
creating the database if it doesn't exist, obtaining an exclusive lock on the
file, reading the meta pages, & memory-mapping the file.
- `DB.Begin()` - Starts a read-only or read-write transaction depending on the
value of the `writable` argument. This requires briefly obtaining the "meta"
lock to keep track of open transactions. Only one read-write transaction can
exist at a time so the "rwlock" is acquired during the life of a read-write
transaction.
- `Bucket.Put()` - Writes a key/value pair into a bucket. After validating the
arguments, a cursor is used to traverse the B+tree to the page and position
where the key & value will be written. Once the position is found, the bucket
materializes the underlying page and the page's parent pages into memory as
"nodes". These nodes are where mutations occur during read-write transactions.
These changes get flushed to disk during commit.
- `Bucket.Get()` - Retrieves a key/value pair from a bucket. This uses a cursor
to move to the page & position of a key/value pair. During a read-only
transaction, the key and value data is returned as a direct reference to the
underlying mmap file so there's no allocation overhead. For read-write
transactions, this data may reference the mmap file or one of the in-memory
node values.
- `Cursor` - This object is simply for traversing the B+tree of on-disk pages
or in-memory nodes. It can seek to a specific key, move to the first or last
value, or it can move forward or backward. The cursor handles the movement up
and down the B+tree transparently to the end user.
- `Tx.Commit()` - Converts the in-memory dirty nodes and the list of free pages
into pages to be written to disk. Writing to disk then occurs in two phases.
First, the dirty pages are written to disk and an `fsync()` occurs. Second, a
new meta page with an incremented transaction ID is written and another
`fsync()` occurs. This two phase write ensures that partially written data
pages are ignored in the event of a crash since the meta page pointing to them
is never written. Partially written meta pages are invalidated because they
are written with a checksum.
If you have additional notes that could be helpful for others, please submit
them via pull request.
## Other Projects Using Bolt
Below is a list of public, open source projects that use Bolt:
* [Operation Go: A Routine Mission](http://gocode.io) - An online programming game for Golang using Bolt for user accounts and a leaderboard.
* [Bazil](https://bazil.org/) - A file system that lets your data reside where it is most convenient for it to reside.
* [DVID](https://github.com/janelia-flyem/dvid) - Added Bolt as optional storage engine and testing it against Basho-tuned leveldb.
* [Skybox Analytics](https://github.com/skybox/skybox) - A standalone funnel analysis tool for web analytics.
* [Scuttlebutt](https://github.com/benbjohnson/scuttlebutt) - Uses Bolt to store and process all Twitter mentions of GitHub projects.
* [Wiki](https://github.com/peterhellberg/wiki) - A tiny wiki using Goji, BoltDB and Blackfriday.
* [ChainStore](https://github.com/pressly/chainstore) - Simple key-value interface to a variety of storage engines organized as a chain of operations.
* [MetricBase](https://github.com/msiebuhr/MetricBase) - Single-binary version of Graphite.
* [Gitchain](https://github.com/gitchain/gitchain) - Decentralized, peer-to-peer Git repositories aka "Git meets Bitcoin".
* [event-shuttle](https://github.com/sclasen/event-shuttle) - A Unix system service to collect and reliably deliver messages to Kafka.
* [ipxed](https://github.com/kelseyhightower/ipxed) - Web interface and api for ipxed.
* [BoltStore](https://github.com/yosssi/boltstore) - Session store using Bolt.
* [photosite/session](https://godoc.org/bitbucket.org/kardianos/photosite/session) - Sessions for a photo viewing site.
* [LedisDB](https://github.com/siddontang/ledisdb) - A high performance NoSQL, using Bolt as optional storage.
* [ipLocator](https://github.com/AndreasBriese/ipLocator) - A fast ip-geo-location-server using bolt with bloom filters.
* [cayley](https://github.com/google/cayley) - Cayley is an open-source graph database using Bolt as optional backend.
* [bleve](http://www.blevesearch.com/) - A pure Go search engine similar to ElasticSearch that uses Bolt as the default storage backend.
* [tentacool](https://github.com/optiflows/tentacool) - REST api server to manage system stuff (IP, DNS, Gateway...) on a linux server.
* [SkyDB](https://github.com/skydb/sky) - Behavioral analytics database.
* [Seaweed File System](https://github.com/chrislusf/seaweedfs) - Highly scalable distributed key~file system with O(1) disk read.
* [InfluxDB](https://influxdata.com) - Scalable datastore for metrics, events, and real-time analytics.
* [Freehold](http://tshannon.bitbucket.org/freehold/) - An open, secure, and lightweight platform for your files and data.
* [Prometheus Annotation Server](https://github.com/oliver006/prom_annotation_server) - Annotation server for PromDash & Prometheus service monitoring system.
* [Consul](https://github.com/hashicorp/consul) - Consul is service discovery and configuration made easy. Distributed, highly available, and datacenter-aware.
* [Kala](https://github.com/ajvb/kala) - Kala is a modern job scheduler optimized to run on a single node. It is persistent, JSON over HTTP API, ISO 8601 duration notation, and dependent jobs.
* [drive](https://github.com/odeke-em/drive) - drive is an unofficial Google Drive command line client for \*NIX operating systems.
* [stow](https://github.com/djherbis/stow) - a persistence manager for objects
backed by boltdb.
* [buckets](https://github.com/joyrexus/buckets) - a bolt wrapper streamlining
simple tx and key scans.
* [mbuckets](https://github.com/abhigupta912/mbuckets) - A Bolt wrapper that allows easy operations on multi level (nested) buckets.
* [Request Baskets](https://github.com/darklynx/request-baskets) - A web service to collect arbitrary HTTP requests and inspect them via REST API or simple web UI, similar to [RequestBin](http://requestb.in/) service
* [Go Report Card](https://goreportcard.com/) - Go code quality report cards as a (free and open source) service.
* [Boltdb Boilerplate](https://github.com/bobintornado/boltdb-boilerplate) - Boilerplate wrapper around bolt aiming to make simple calls one-liners.
* [lru](https://github.com/crowdriff/lru) - Easy to use Bolt-backed Least-Recently-Used (LRU) read-through cache with chainable remote stores.
* [Storm](https://github.com/asdine/storm) - A simple ORM around BoltDB.
* [GoWebApp](https://github.com/josephspurrier/gowebapp) - A basic MVC web application in Go using BoltDB.
* [SimpleBolt](https://github.com/xyproto/simplebolt) - A simple way to use BoltDB. Deals mainly with strings.
* [Algernon](https://github.com/xyproto/algernon) - A HTTP/2 web server with built-in support for Lua. Uses BoltDB as the default database backend.
If you are using Bolt in a project please send a pull request to add it to the list.


@ -1,18 +0,0 @@
version: "{build}"
os: Windows Server 2012 R2
clone_folder: c:\gopath\src\github.com\boltdb\bolt
environment:
GOPATH: c:\gopath
install:
- echo %PATH%
- echo %GOPATH%
- go version
- go env
- go get -v -t ./...
build_script:
- go test -v ./...

Some files were not shown because too many files have changed in this diff.