Compare commits

...

379 Commits

Author SHA1 Message Date
33245c6b5b version: 3.3.8
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-15 09:41:56 -07:00
4c18c56bf6 travis: use Go 1.9.7
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-15 09:41:41 -07:00
cb46e9ee0b gitignore: ignore "docs" and "vendor"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-15 09:34:20 -07:00
1fea97b898 clientv3: backoff on reestablishing watches when Unavailable errors are encountered 2018-06-14 10:47:46 -07:00
5227545764 tests/semaphore.test.bash: update
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-13 14:39:38 -07:00
1ba7c71975 Makefile: update
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-13 14:39:02 -07:00
b7c19232bc etcdserver: Fix txn request 'took too long' warnings to use loggable request stringer 2018-06-12 09:33:33 -07:00
07f833ae3e etcdserver: Add response byte size and range response count to took too long warning 2018-06-11 11:26:26 -07:00
ef154094b3 etcdserver: Replace value contents with value_size in request took too long warning 2018-06-08 09:49:43 -07:00
21f186a40b version: bump up to 3.3.7+git 2018-06-06 10:08:16 -07:00
56536de551 version: 3.3.7
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:50:19 -07:00
a0ebf8cb1c e2e: test client-side cipher suites with curl
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:50:19 -07:00
13715724b8 etcdmain: add "--cipher-suites" flag
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:50:15 -07:00
22d65d8cc2 embed: support custom cipher suites
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:18:16 -07:00
6c2add4142 integration: test client-side TLS cipher suites
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:11:16 -07:00
6a3842776b pkg/transport: add "TLSInfo.CipherSuites" field
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:10:35 -07:00
641bddca0f pkg/tlsutil: add "GetCipherSuite"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:10:16 -07:00
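
To illustrate the cipher-suite plumbing above, here is a minimal sketch of resolving a TLS cipher suite name to its ID. It uses the tls.CipherSuites / tls.InsecureCipherSuites helpers from modern Go; the commits above predate those helpers, which is why pkg/tlsutil kept its own lookup.

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// cipherSuiteID resolves an IANA cipher suite name to its uint16 ID.
// Sketch only; it is not the actual GetCipherSuite implementation.
func cipherSuiteID(name string) (uint16, bool) {
	for _, cs := range append(tls.CipherSuites(), tls.InsecureCipherSuites()...) {
		if cs.Name == name {
			return cs.ID, true
		}
	}
	return 0, false
}

func main() {
	id, ok := cipherSuiteID("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256")
	fmt.Printf("0x%04x %v\n", id, ok) // 0xc02f true
}
```
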
21a1162ad1 tests/e2e: test move-leader command with TLS
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 13:56:31 -07:00
e2cb9cbaec ctlv3: support TLS endpoints for move-leader command
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 13:56:05 -07:00
243074c5c5 scripts/release: Fix docker push for 3.1 releases, remove inaccurate warning at the end of release script 2018-05-31 14:44:29 -07:00
26a73f2fa1 version: bump up to 3.3.6+git 2018-05-31 11:57:20 -07:00
932c3c01f9 version: 3.3.6
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-31 11:41:42 -07:00
41888ddbaa mvcc: fix panic by allowing future revision watcher from restore operation
This also happens without gRPC proxy.

Fix panic when gRPC proxy leader watcher is restored:

```
go test -v -tags cluster_proxy -cpu 4 -race -run TestV3WatchRestoreSnapshotUnsync

=== RUN   TestV3WatchRestoreSnapshotUnsync
panic: watcher minimum revision 9223372036854775805 should not exceed current revision 16

goroutine 156 [running]:
github.com/coreos/etcd/mvcc.(*watcherGroup).chooseAll(0xc4202b8720, 0x10, 0xffffffffffffffff, 0x1)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watcher_group.go:242 +0x3b5
github.com/coreos/etcd/mvcc.(*watcherGroup).choose(0xc4202b8720, 0x200, 0x10, 0xffffffffffffffff, 0xc420253378, 0xc420253378)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watcher_group.go:225 +0x289
github.com/coreos/etcd/mvcc.(*watchableStore).syncWatchers(0xc4202b86e0, 0x0)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watchable_store.go:340 +0x237
github.com/coreos/etcd/mvcc.(*watchableStore).syncWatchersLoop(0xc4202b86e0)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watchable_store.go:214 +0x280
created by github.com/coreos/etcd/mvcc.newWatchableStore
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watchable_store.go:90 +0x477
exit status 2
FAIL	github.com/coreos/etcd/integration	2.551s
```

gRPC proxy spawns a watcher with a key "proxy-namespace__lostleader"
and watch revision "int64(math.MaxInt64 - 2)" to detect leader loss.
But, when the partitioned node restores, this watcher triggers
panic with "watcher minimum revision ... should not exceed current ...".

This check was added a long time ago, by my PR, when there was no gRPC proxy:

https://github.com/coreos/etcd/pull/4043#discussion_r48457145

> we can remove this checking actually. it is impossible for a unsynced watching to have a future rev. or we should just panic here.

However, it is now possible for an unsynced watcher to have a future
revision, when it was moved from the synced watcher group through a
restore operation.

This PR adds a "restore" flag to indicate that a watcher was moved
from the synced watcher group by a restore operation. Otherwise,
a watcher with a future revision in the unsynced watcher group
would still trigger the panic.

Example logs with future revision watcher from restore operation:

```
{"level":"info","ts":1527196358.9057755,"caller":"mvcc/watcher_group.go:261","msg":"choosing future revision watcher from restore operation","watch-key":"proxy-namespace__lostleader","watch-revision":9223372036854775805,"current-revision":16}
{"level":"info","ts":1527196358.910349,"caller":"mvcc/watcher_group.go:261","msg":"choosing future revision watcher from restore operation","watch-key":"proxy-namespace__lostleader","watch-revision":9223372036854775805,"current-revision":16}
```

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-31 11:41:34 -07:00
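
A minimal sketch of the idea described in this commit, with hypothetical field and function names (the real mvcc watcherGroup is more involved): an unsynced watcher carries a restore flag, and only watchers without it trip the future-revision check.

```go
package main

import (
	"fmt"
	"math"
)

// watcher is a simplified stand-in for an mvcc watcher (hypothetical fields).
type watcher struct {
	minRev  int64 // minimum revision the watcher still needs
	restore bool  // moved back from the synced group by a restore operation
}

// chooseMinRev mirrors the check: a future-revision watcher is only
// tolerated when it carries the restore flag.
func chooseMinRev(curRev int64, ws []*watcher) (int64, error) {
	minRev := int64(math.MaxInt64)
	for _, w := range ws {
		if w.minRev > curRev && !w.restore {
			return 0, fmt.Errorf("watcher minimum revision %d should not exceed current revision %d", w.minRev, curRev)
		}
		if w.minRev < minRev {
			minRev = w.minRev
		}
	}
	return minRev, nil
}

func main() {
	ws := []*watcher{{minRev: math.MaxInt64 - 2, restore: true}, {minRev: 5}}
	fmt.Println(chooseMinRev(16, ws)) // 5 <nil>
}
```
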
7292963ae7 auth: fix panic using WithRoot and improve JWT coverage
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-23 23:45:24 -07:00
37767bc6e2 auth: a new auth token provider nop
This commit adds a new auth token provider named nop. The nop provider
refuses every Authenticate() request, so only CN-based authentication is
allowed. If the tokenOpts parameter of auth.NewTokenProvider() is empty,
this provider is used.
2018-05-23 15:48:39 -07:00
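
A simplified stand-in for the described provider (the real auth token provider interface has more methods); it only shows the behavior: every authenticate call is refused.

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// tokenNop refuses every authenticate request, so only CN-based
// (client-certificate) authentication can succeed when it is selected.
type tokenNop struct{}

func (t *tokenNop) authenticate(ctx context.Context, user string) (string, error) {
	return "", errors.New("token-based authentication is disabled (nop provider)")
}

func main() {
	p := &tokenNop{}
	_, err := p.authenticate(context.Background(), "root")
	fmt.Println(err)
}
```
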
d659771bb8 scripts: Fix remote tag check, gcloud login and umask in release script 2018-05-09 11:08:23 -07:00
39d01e716f version: 3.3.5+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-09 11:07:52 -07:00
70c8726202 version: 3.3.5
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-09 09:23:59 -07:00
aaca01a0fa tests/e2e: separate coverage tests for exec commands
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-03 18:48:16 -07:00
bc2d400b4c etcdctl/ctlv3: fix watch with exec commands
The following command was failing because the parser incorrectly
picked up the second "watch" string in the exec command, thus
passing the wrong exec command.

```
ETCDCTL_API=3 ./bin/etcdctl watch aaa -- echo watch event received

panic: runtime error: slice bounds out of range

goroutine 1 [running]:
github.com/coreos/etcd/etcdctl/ctlv3/command.parseWatchArgs(0xc42002e080, 0x8, 0x8, 0xc420206a20, 0x5, 0x6, 0x0, 0x0, 0x0, 0x0, ...)
	/home/gyuho/go/src/github.com/coreos/etcd/etcdctl/ctlv3/command/watch_command.go:303 +0xbed
github.com/coreos/etcd/etcdctl/ctlv3/command.watchCommandFunc(0xc4202a7180, 0xc420206a20, 0x5, 0x6)
	/home/gyuho/go/src/github.com/coreos/etcd/etcdctl/ctlv3/command/watch_command.go:73 +0x11d
github.com/coreos/etcd/vendor/github.com/spf13/cobra.(*Command).execute(0xc4202a7180, 0xc420206960, 0x6, 0x6, 0xc4202a7180, 0xc420206960)
	/home/gyuho/go/src/github.com/coreos/etcd/vendor/github.com/spf13/cobra/command.go:766 +0x2c1
github.com/coreos/etcd/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x1363de0, 0xc420128638, 0xc420185e01, 0xc420185ee8)
	/home/gyuho/go/src/github.com/coreos/etcd/vendor/github.com/spf13/cobra/command.go:852 +0x30a
github.com/coreos/etcd/vendor/github.com/spf13/cobra.(*Command).Execute(0x1363de0, 0x0, 0x0)
	/home/gyuho/go/src/github.com/coreos/etcd/vendor/github.com/spf13/cobra/command.go:800 +0x2b
github.com/coreos/etcd/etcdctl/ctlv3.Start()
	/home/gyuho/go/src/github.com/coreos/etcd/etcdctl/ctlv3/ctl_nocov.go:25 +0x8e
main.main()
	/home/gyuho/go/src/github.com/coreos/etcd/etcdctl/main.go:40 +0x17b
```

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-03 18:48:08 -07:00
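
A hedged sketch of the parsing idea behind the fix: split the argument list at the first "--" so tokens inside the exec command (such as a literal "watch") are never re-interpreted as watch arguments. splitWatchArgs is a hypothetical helper, not the real parseWatchArgs.

```go
package main

import "fmt"

// splitWatchArgs separates watch arguments from the exec command at the
// first "--", so the exec command is never scanned for watch keywords.
func splitWatchArgs(args []string) (watchArgs, execCmd []string) {
	for i, a := range args {
		if a == "--" {
			return args[:i], args[i+1:]
		}
	}
	return args, nil
}

func main() {
	args := []string{"aaa", "--", "echo", "watch", "event", "received"}
	w, e := splitWatchArgs(args)
	fmt.Println(w) // [aaa]
	fmt.Println(e) // [echo watch event received]
}
```
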
913a98567e tests: use Go 1.9.6
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-01 10:22:04 -07:00
3f888b8085 functional/tester: handle retries in "caseUntilSnapshot"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-30 14:37:20 -07:00
c15c8c6116 functional.yaml: use lower ports
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-30 13:36:36 -07:00
f535bb64f3 scripts: Fix a few etcd release script bugs and make it reenterant. 2018-04-25 10:04:43 -07:00
f01d690e6f etcdmain: document peer-cert-allowed-cn flag 2018-04-24 13:57:51 -07:00
d09fa9c537 version: 3.3.4+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-24 13:56:13 -07:00
fdde8705f5 version: 3.3.4
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-24 12:05:29 -07:00
600b2d1967 scripts: Add scripts/release that performs 'etcd-release-runbook' (https://goo.gl/Gxwysq) style release workflow 2018-04-24 12:05:18 -07:00
870138accb etcdserver: log skipping initial election tick
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:59:01 -07:00
758203bd86 etcdmain: add "--initial-election-tick-advance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:58:57 -07:00
8886a6397c embed: add "InitialElectionTickAdvance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:26:48 -07:00
ea829611b5 integration: set InitialElectionTickAdvance to true by default
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:22:16 -07:00
b923c74fe5 etcdserver: add "InitialElectionTickAdvance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:21:51 -07:00
7cbc2f1068 etcdserver: add is_leader prometheus metric that is 1 on the leader.
Before this change, we had no way to find the leader using the /metrics
endpoint. This commit adds a metric to do that.
2018-04-19 14:59:31 -07:00
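
A sketch of such a gauge with the Prometheus Go client; the metric name shown follows etcd's usual etcd_server_* convention but is an assumption here, and the real server sets it from its raft status loop.

```go
package main

import "github.com/prometheus/client_golang/prometheus"

// isLeader is 1 on the leader and 0 on followers.
var isLeader = prometheus.NewGauge(prometheus.GaugeOpts{
	Namespace: "etcd",
	Subsystem: "server",
	Name:      "is_leader",
	Help:      "Whether or not this member is the leader. 1 if it is, 0 otherwise.",
})

func init() {
	prometheus.MustRegister(isLeader)
}

// onLeadershipChange would be called whenever this member gains or loses leadership.
func onLeadershipChange(leader bool) {
	if leader {
		isLeader.Set(1)
	} else {
		isLeader.Set(0)
	}
}

func main() {
	onLeadershipChange(true)
}
```
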
78109152b9 integration: re-overwrite "httptest.Server" TLS.Certificates
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-17 06:17:46 -07:00
08dc184618 pkg/transport: don't set certificates on tls config 2018-04-17 06:17:38 -07:00
48f4ee9268 functional: create symlinks for build
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 16:05:36 -07:00
07a34aa76b travis: run build tests for "functional"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 15:56:30 -07:00
2cabb82375 snapshot: remove tests
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 15:24:02 -07:00
56a9778bc2 functional: initial commit (copied from master)
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 13:19:22 -07:00
5abe521e77 snapshot: initial commit (for functional tests)
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 13:19:19 -07:00
3c4ace2d27 test: simplify
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 11:09:25 -07:00
095fc0b411 etcdserver/stats: make all fields guarded by mutex. 2018-04-11 19:49:00 -07:00
d40abbb502 etcdserver/stats: fix stats data race. 2018-04-11 19:49:00 -07:00
c19be730fd test: remove build flag "-a"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-11 10:17:31 -07:00
99e4a5ffae cmd/vendor: add "go.uber.org/zap"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:46:00 -07:00
3736a126df pkg/proxy: move from "pkg/transport"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:43:23 -07:00
074e417770 tools: remove
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:43:16 -07:00
dd9f05567d travis: update
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:34:27 -07:00
a28cf17f25 test/*: clean up semaphore scripts
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:33:50 -07:00
cdbb8ffdc1 etcdserver: fix "lease_expired_total" metrics
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 17:57:35 -07:00
68ba797549 tests: move test scripts
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-09 11:33:23 -07:00
5d97bccff2 semaphore.sh: update Go version
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-29 09:20:26 -07:00
e5ec25fe0b travis: use Go 1.9.5
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-29 09:07:35 -07:00
c522f6060f version: 3.3.3+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-29 09:07:10 -07:00
e348b1aedd version: 3.3.3
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-28 13:00:06 -07:00
4355d91fcc Documentation/upgrades: backport all upgrade guides
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-27 10:32:43 -07:00
ce7b86b65a compactor: simplify interval logic on periodic compactor
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-26 05:37:31 -07:00
d70a218b19 compactor: adjust interval for period <1-hour 2018-03-26 05:37:24 -07:00
e029de320a compactor: clean up
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-22 11:03:22 -07:00
863a56a998 rafthttp: add missing "peer_sent_failures_total" metrics call
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-14 12:44:38 -04:00
3282d90707 etcdserver: adjust election ticks on restart
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-10 20:05:56 -08:00
b2d5c6c7bd etcdserver: make "advanceTicks" method
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-10 20:05:50 -08:00
6fe7316ec4 rafthttp: add "ActivePeers" to "Transport"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-10 20:05:35 -08:00
40e02256c7 version: 3.3.2+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 14:49:14 -08:00
c9d46ab379 version: 3.3.2
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 12:57:09 -08:00
d1da2023b9 clientv3/integration: test "rpctypes.ErrLeaseTTLTooLarge"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 10:34:34 -08:00
eaa0050d4d *: enforce max lease TTL with 9,000,000,000 seconds
math.MaxInt64 / time.Second is 9,223,372,036. 9,000,000,000 is easier to
remember/document.
2018-03-08 10:34:12 -08:00
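
A quick check of the arithmetic behind the chosen cap (sketch only, not the actual enforcement code):

```go
package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	// Largest whole-second TTL that still fits in int64 nanoseconds.
	maxSeconds := math.MaxInt64 / int64(time.Second)
	const maxLeaseTTL = 9000000000 // the enforced, easier-to-remember cap

	fmt.Println(maxSeconds)               // 9223372036
	fmt.Println(maxLeaseTTL < maxSeconds) // true: the cap stays below the overflow bound
}
```
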
99a12662c1 *: remove unused env vars
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 01:35:36 -08:00
e6d44fa3f2 hack/scripts-dev: fix indentation in run.sh
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 14:32:27 -08:00
43caf2b28a hack/scripts-dev: sync with master branch
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 14:18:58 -08:00
bfb7a155b4 travis: update Go version string
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 14:04:14 -08:00
f76ef3ce8d e2e: fix missing "apiPrefix"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 00:03:02 -08:00
462ba8bb09 embed: fix wrong compactor imports
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-06 23:26:45 -08:00
146ed08052 Documentation/op-guide: highlight defrag operation "--endpoints" flag
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-05 11:15:05 -08:00
1bc974d536 etcdctl: highlight "defrag" command caveats
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-05 11:15:02 -08:00
3e3468d1fa e2e: add "Election" grpc-gateway test cases
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-02 10:40:50 -08:00
207f19354b e2e: add "spawnWithExpectLines"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-02 10:40:41 -08:00
bb8a5377ce api/v3election: error on missing "leader" field
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-02 10:40:34 -08:00
8291e16128 Documentation: make "Consul" section more objective
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-02 10:40:22 -08:00
a5b31087e8 etcdserver: enable "CheckQuorum" when starting with "ForceNewCluster"
We enable "raft.Config.CheckQuorum" by default in other
Raft initial starts. So should start with "ForceNewCluster".

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-02 10:40:08 -08:00
cec79dd706 httpproxy: cancel requests when client closes a connection 2018-03-02 10:39:46 -08:00
3641af83e7 semaphore: release test version 2018-02-27 11:29:58 -08:00
240fda5128 embed: fix revision-based compaction with default value
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-21 09:35:00 -08:00
d627301735 embed: document/validate compaction mode
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-21 09:34:59 -08:00
534c31b4ca version: 3.3.1+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-12 14:36:11 -08:00
28f3f26c0e version: 3.3.1
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-12 09:29:11 -08:00
4737f3a620 hack/scripts-dev: Makefile with Go 1.9.4, 1.8.7
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-12 09:28:56 -08:00
bc6e235052 travis: use Go 1.9.4 with TARGET_GO_VERSION
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-12 09:28:56 -08:00
13c5cedfb8 semaphore: use Go 1.9.4, update release upgrade test version
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-12 09:28:55 -08:00
9942f904fb etcdserver: improve request took too long warning 2018-02-06 16:58:04 -08:00
eaf7d631ad mvcc: restore unsynced watchers
If syncWatchersLoop() starts before Restore() is called,
watchers already added by that point are moved to s.synced by the loop.
However, the logic that moves watchers from s.synced to s.unsynced is
broken: it does not set keyWatchers of the watcherGroup.
Eventually syncWatchers() fails to pick up those watchers from s.unsynced
and no events are sent to them, because newWatcherBatch(), called
in that function, uses wg.watcherSetByKey() internally, which requires
a proper keyWatchers value.
2018-02-06 11:34:46 -08:00
21a1a28c18 hack: sync with etcd master
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:07:01 -08:00
c932e9e2ba tools/functional-tester: update README for local docker testing
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:06:35 -08:00
cf96d8a130 Dockerfile-functional-tester: initial commit
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:06:25 -08:00
a3ec84e311 gitignore: add ".Dockerfile-functional-tester"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:06:12 -08:00
29aca652bf test: configure advertise ports in functional_pass
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:04:42 -08:00
bbfd0077e8 etcd-tester: set advertise ports, delay w/ network faults
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:04:33 -08:00
18df07754f etcd-agent: use "pkg/transport.Proxy"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:04:10 -08:00
56178a8a06 test: remove "use-root" in functional_pass
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:03:58 -08:00
a9a616a09f etcd-agent: remove "use-root"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:03:41 -08:00
abdfa87ae5 functional-tester: remove old assets
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:03:29 -08:00
a4cbba89ff pkg/transport: implement "Proxy"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:02:34 -08:00
0bc06d72df pkg/transport: add "fixtures" for TLS tests
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:02:25 -08:00
a1fbed5abc *: Remove 8GiB quota limitation from documents
Also mention that in v3.3 change log.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-02 14:28:26 -08:00
665fb01f95 version: 3.3.0+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-01 14:14:07 -08:00
c23606781f version: 3.3.0
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-01 10:03:36 -08:00
afa01aaef0 etcdmain: define "defaultGRPCMaxCallSendMsgSize"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-30 09:50:27 -08:00
d20e5a6bb5 Documentation/op-guide: highlight defragment operation
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-30 09:42:37 -08:00
6931dd8442 Documentation/op-guide: revert "--discovery-srv-name" doc changes
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-30 09:41:42 -08:00
f320348682 Documentation: sync with etcd master
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-30 09:37:57 -08:00
d7e6dd77bb grpcproxy: configure --max-send-bytes and --max-recv-bytes for grpc proxy 2018-01-30 09:33:16 -08:00
50d2a00f01 etcdserver: clarify warnings on backend open taking >10 seconds
If the db file is 10 GiB, opening it can take more than 1 second.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-26 10:55:16 -08:00
c5bba152ee etcdserver: add detailed errors in "ValidateClusterAndAssignIDs"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-25 12:00:21 -08:00
dbde4e986b pkg/netutil: return error from "URLStringsEqual"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-25 12:00:14 -08:00
f9b7fccf1b etcdserver: add error details on DNS resolution failure on advertise URLs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-25 12:00:07 -08:00
9deb838ddb semaphore,travis: test with Go 1.9.3
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-23 14:03:24 -08:00
baf7320e10 version: 3.3.0-rc.4+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-23 14:03:07 -08:00
ea6360f550 version: 3.3.0-rc.4
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-22 11:32:00 -08:00
2aa3d91759 clientv3/integration: add TestMemberAddUpdateWrongURLs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-22 11:31:45 -08:00
7973612c6e words: whitelist "rafthttp"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-22 11:29:51 -08:00
1c91ddc6f4 clientv3: prevent no-scheme URLs to cluster APIs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-22 11:27:25 -08:00
8a18cc96d0 etcdserver/api/v3rpc: debug-log client disconnect on TLS, http/2 stream CANCEL
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-19 12:50:20 -08:00
a90f301ba8 version: 3.3.0-rc.3+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-19 12:48:21 -08:00
374dc5743f version: 3.3.0-rc.3
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 15:23:08 -08:00
55505617df proxy/grpcproxy: remove "Errors" field
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 15:22:54 -08:00
a9317d3d77 e2e: remove "/health" "errors" field test
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 15:22:54 -08:00
02d362ccde etcdserver/api/etcdhttp: remove "errors" field in /health
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 15:22:54 -08:00
d292337d14 api/etcdhttp: change /health type back to string for backwards compatibility 2018-01-17 12:44:38 -08:00
7974f008f3 etcdctl: document "ETCD_WATCH_*"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:44:22 -08:00
4a3f99415e e2e: test ETCD_WATCH_VALUE
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:44:07 -08:00
6340564c84 ctlv3: set ETCD_WATCH_* on watch exec
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:55 -08:00
6735028ec0 ctlv3: exit on exec watch error
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:45 -08:00
906f098053 ctlv3: set ETCD_WATCH_KEY, ETCD_WATCH_VALUE on exec watch
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:38 -08:00
8a66237693 ctlv3: handle pkg/flags warnings
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:27 -08:00
d37afffb98 etcdctl: document watch with ETCDCTL_WATCH_*
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:12 -08:00
7e2759da8d e2e: add watch tests with ETCDCTL_WATCH_*
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:02 -08:00
ad4df985fc ctlv3: support ETCDCTL_WATCH_KEY, ETCDCTL_WATCH_RANGE_END
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:42:54 -08:00
2df89c8bf6 Documentation/op-guide: clarify security.md on TLS auth
Make it more accurate (just as pkg/transport/listener_tls.go does).

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-12 15:23:06 -08:00
6178c45066 etcdctl: don't ask password twice for etcdctl endpoint health --cluster
Currently, etcdctl endpoint health --cluster asks for the password twice if auth
is enabled. This is because the command creates two client instances:
one for checking endpoint health and another for getting cluster
members with MemberList(). The latter client doesn't need to be
authenticated because MemberList() is a public RPC. This commit makes
that client an unauthenticated one.

Fix https://github.com/coreos/etcd/issues/9094
2018-01-12 09:59:31 -08:00
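
A minimal clientv3 sketch of the idea: the client used only for MemberList() omits credentials, so no second password prompt is needed. The endpoint is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	// No Username/Password on purpose: MemberList is a public RPC.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	resp, err := cli.MemberList(context.Background())
	if err != nil {
		panic(err)
	}
	for _, m := range resp.Members {
		fmt.Println(m.Name, m.ClientURLs)
	}
}
```
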
9ccae0f81a etcd-tester: update stresser weights with txn stresser
Large key writes (stressEntries[1].weight) should not take this
much weight. It was triggering "database size exceeded" errors.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-12 09:41:51 -08:00
a5079cc381 version: 3.3.0-rc.2+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-11 14:16:08 -08:00
9e079d8f02 version: 3.3.0-rc.2
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-11 11:18:46 -08:00
bd57c9ca5b etcd-tester: fix "writeTxn" key selection
Found when debugging https://github.com/coreos/etcd/issues/9130.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-11 11:18:05 -08:00
58c402a47b test: limit stress-qps for slow CI machines, add txn flags
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2018-01-09 14:18:45 -08:00
3ce73b70bc etcd-tester: add txn stresser
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2018-01-09 14:18:33 -08:00
ee3c81d8d3 ctlv3: add "snapshot restore --wal-dir"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-09 11:12:29 -08:00
2dfabfbef6 DocCommand: use regex wildcard
The current command, as written, produces no output in the macOS terminal or bash shell.
Using a regex wildcard works fine on macOS and Linux.
2018-01-09 09:11:16 -08:00
bf83d5269f clientv3/integration: fix typos
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-09 09:11:15 -08:00
a609b1eb47 integration: add constant RequestWaitTimeout. 2018-01-09 09:11:15 -08:00
1ae0c0b47d mvcc: check null before set FillPercent not to panic
Since CreateBucketIfNotExists() can return nil when it gets an error,
FillPercent must only be accessed after a nil check, so as not to cause
a panic.
2018-01-08 13:08:03 -08:00
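
A small bbolt sketch of the pattern the fix enforces (the file name is a placeholder): check the error from CreateBucketIfNotExists() before touching FillPercent.

```go
package main

import (
	"log"

	bolt "github.com/coreos/bbolt"
)

func main() {
	db, err := bolt.Open("test.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("key"))
		if err != nil {
			// b is nil here; touching b.FillPercent would panic.
			return err
		}
		b.FillPercent = 0.9 // safe: only set after the error/nil check
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```
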
ec43197344 etcdserver/api/v3rpc: debug user cancellation and log warning for rest
A context error with the cancel code typically indicates user cancellation, which
should be logged at debug level. For other error codes we should log a warning.

Fixes #9085
2018-01-08 10:14:37 -08:00
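
A sketch of that logging policy using gRPC status codes; the real server uses its own leveled logger, so log.Printf here is only illustrative.

```go
package main

import (
	"log"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// logStreamError logs user cancellations at debug level and everything else
// as a warning, mirroring the policy described above.
func logStreamError(err error) {
	if err == nil {
		return
	}
	if status.Code(err) == codes.Canceled {
		log.Printf("debug: stream canceled by client: %v", err)
		return
	}
	log.Printf("warning: stream error: %v", err)
}

func main() {
	logStreamError(status.Error(codes.Canceled, "context canceled"))
	logStreamError(status.Error(codes.Unavailable, "transport is closing"))
}
```
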
70ba0518f1 embed: enable extensive metrics if specified 2018-01-07 18:48:59 -08:00
e330f5004f etcdmain: unset ETCD_UNSUPPORTED_ARCH after arch check
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-05 03:38:35 +00:00
0ec5023b7b pkg/expect: fix deadlock in mac OS
bufio.NewReader.ReadString blocks even
when the process has received syscall.SIGKILL.
Remove the ptyMu mutex and make ReadString return
when the *os.File is closed.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 14:34:01 -08:00
0f69520622 version: bump up to 3.3.0-rc.1+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 14:33:10 -08:00
d3c2acf090 version: bump up to 3.3.0-rc.1
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:27:15 -08:00
5e35f79087 clientv3/integration: fix TestKVLargeRequests with -tags cluster_proxy
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:07:24 -08:00
6dff1a9398 tools/functional-tester: remove duplicate grpclog set
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:02:17 -08:00
325913d6fb etcdserver/api/v3rpc: set grpclog once
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:02:17 -08:00
24c9fb0527 etcdserver,embed: discard gRPC info logs when debug is off
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:02:17 -08:00
8511db5e2b etcdserver/api/v3rpc: log stream error with debug level
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:02:17 -08:00
3193f3c9ab clientv3/leasing: fix racey waitSession
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-21 17:51:03 -08:00
bdc508cadf grpc-proxy: add "--debug" flag to "etcd grpc-proxy start" command
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-21 14:44:10 -08:00
d5a0609412 embed: only discard infos when debug flag is off
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-21 14:44:02 -08:00
67af1a2138 CHANGELOG: remove rc in release-3.3
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 14:32:15 -08:00
66d68a8fdb *: update release upgrade test versions
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 14:16:59 -08:00
ebaa83c985 version: bump up to 3.3.0+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 14:16:49 -08:00
f7a395f030 version: bump up to 3.3.0
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 13:48:48 -08:00
09fe2ddc31 Merge pull request #9057 from gyuho/version-bump-up
*: server-side version bump up to 3.3
2017-12-20 13:48:22 -08:00
bd9bd71a61 rafthttp: add 3.3.0 support
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 13:34:12 -08:00
b3ec44ca99 etcdserver/api: add 3.3.0 as compatible server capability
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 13:34:12 -08:00
578d13c53c Merge pull request #8724 from gyuho/cleanup-retry
clientv3/retry: clean up retryRPCFunc
2017-12-20 13:33:53 -08:00
b01f16a4a4 Merge pull request #9058 from hexfusion/fx_lease_proto
Documentation/dev-guide: update TimeToLive documentation.
2017-12-20 12:50:33 -08:00
211805b188 Merge pull request #9056 from gyuho/lease-expire-doc
Document/upgrades: add "lease timetolive" output change
2017-12-20 12:40:58 -08:00
eb65f26182 Documentation/dev-guide: Update TimeToLive documentation. 2017-12-20 15:39:37 -05:00
255476b5e5 clientv3/retry: clean up retryRPCFunc
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-20 12:30:33 -08:00
a7282a3f9f Document/upgrades: add "lease timetolive" output change
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 12:29:03 -08:00
94e50a1e68 Merge pull request #9047 from gyuho/client-grpc-call-options
*: configurable gRPC message size limits
2017-12-20 12:25:53 -08:00
c00255908d Merge pull request #9053 from tomwilkie/peer_round_trip_time_seconds
Documentation/op-guide: fix typo, s/member_round_trip_time/peer_round_trip_time/
2017-12-20 12:17:47 -08:00
e49e231b81 Merge pull request #8979 from gyuho/changelog-3.3
CHANGELOG: add v3.3 pre-release
2017-12-20 12:16:44 -08:00
3c5eb4f4fe Documentation/upgrades: highlight raw gRPC client wrapper changes
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 11:09:05 -08:00
3d56045da0 integration: bump up wait leader timeout for slow CIs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:05 -08:00
88fe8de99b clientv3/integration: fix TestKVPutError
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:05 -08:00
6bfde98be7 Documentation/upgrades: highlight request limit changes in v3.2, v3.3
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:05 -08:00
3d924aedc8 Documentation/upgrades: clean up 3.2, 3.3 guides
Make headers consistent.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:05 -08:00
1b3ed912a2 words: whitelist more
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:05 -08:00
f38593bbad clientv3/integration: test large KV requests
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:05 -08:00
497412c588 clientv3: call other APIs with default gRPC call options
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:05 -08:00
f87760998b clientv3: call KV/Txn APIs with default gRPC call options
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:05 -08:00
63d66b1011 clientv3: configure gRPC message limits in Config
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:58:01 -08:00
9442f90016 integration: remove typo in "TestV3LargeRequests"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 10:54:40 -08:00
22127895d8 Merge pull request #8919 from gyuho/exec-watch
etcdctl: support exec watch in v3
2017-12-20 10:53:30 -08:00
cd2f83900a CHANGELOG: add "lease timetolive" change
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 09:54:49 -08:00
5628fff00f CHANGELOG: link to upgrade guides for every release
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 09:54:49 -08:00
38f92801db CHANGELOG: highlight request limit changes, add v3.2.12
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 09:54:45 -08:00
59269aa7b0 CHANGELOG: update "code changes" links
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 09:52:22 -08:00
d9d12acf78 CHANGELOG: add v3.3.0 pre-release
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 09:52:22 -08:00
849c6cfb21 CHANGELOG: minor formatting update in previous releases
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 09:52:22 -08:00
e378b9831c Merge pull request #9052 from gyuho/lease-timetolive-output
etcdctl/ctlv3: clarify "lease timetolive" output on expired lease
2017-12-20 09:49:34 -08:00
f59808a2ca etcdctl: update README for new "lease timetolive" output
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 06:55:23 -08:00
13a8634630 Documentation/op-guide: s/member_round_trip_time/peer_round_trip_time/ 2017-12-20 13:25:54 +00:00
c559b0eede e2e: update "leaseTestTimeToLiveExpire"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 00:52:42 -08:00
9978b4fd35 etcdctl/ctlv3: clarify "lease timetolive" output on expired lease
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 00:40:57 -08:00
25222c22d9 e2e: test watch exec in v3 etcdctl
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-19 19:45:27 -08:00
e89fc20542 etcdctl: document watch exec in v3
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-19 19:45:27 -08:00
904513fa5c etcdctl/ctlv3: support "exec-watch" in watch command
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-19 19:45:24 -08:00
828289db32 Merge pull request #9046 from gyuho/request-limit
integration: test large request response back from server
2017-12-19 16:41:27 -08:00
abfc09b1ca integration: test large request response back from server
Address https://github.com/coreos/etcd/issues/9043.
Won't fix it, but we need test coverage on response back
from server as well.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-19 14:45:20 -08:00
74a600afae Merge pull request #9045 from gyuho/test-limit
test: bump up clientv3/integration test time out
2017-12-19 14:44:04 -08:00
ba702ae601 test: bump up clientv3/integration test time out
Recently we've added many more tests.
Until we parallelize tests, just increase the timeout.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-19 14:17:55 -08:00
b0a7623be8 Merge pull request #9023 from gyuho/keepalive-doc
clientv3: document context to "KeepAlive" API
2017-12-19 11:53:56 -08:00
ecbd1aec06 Merge pull request #9038 from gyuho/snapshot-error
clientv3: translate Snapshot API gRPC status error
2017-12-19 11:14:25 -08:00
c8a516d515 Documentation/upgrades: document Snapshot API error handling
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-19 10:46:22 -08:00
7cd985bdac clientv3: translate Snapshot API gRPC status error
To be consistent with other APIs.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-19 10:46:19 -08:00
6f899186f8 Merge pull request #9042 from gyuho/grpc-go
vendor: pin "grpc/grpc-go" v1.7.5
2017-12-19 09:40:54 -08:00
d57002ba9c vendor: pin "grpc/grpc-go" v1.7.5
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-18 15:56:23 -08:00
a7445d752b Merge pull request #9039 from gyuho/mvcc-doc
mvcc: clean-up godoc in key_index.go
2017-12-18 13:56:51 -08:00
eb58e7607b Merge pull request #9040 from gyuho/etcd-tester-main
etcd-tester: discard gRPC balancer logs
2017-12-18 13:56:39 -08:00
e21eac808e etcd-tester: discard gRPC balancer logs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-18 13:39:55 -08:00
76dd9d56a1 mvcc: clean-up godoc in key_index.go
Minor clean-up.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-18 13:20:00 -08:00
95ecf9602a Merge pull request #9031 from gyuho/fix-mvcc
mvcc: fetch revisions with current revision, not 0, in HashByRev
2017-12-18 12:26:30 -08:00
2e95ace82b mvcc: fetch revisions with current revision, not 0, in HashByRev
It was getting revisions with "atRev==0", which makes
"available" from the "keep" method always empty, since
"walk" on "keyIndex" only returns true.

"available" should be populated with all revisions to be
kept if the compaction happens at the given revision.
But "available" was empty with "kvindex.Keep(0)",
since it is always the case that "rev.main > atRev" when "atRev==0".

Fix https://github.com/coreos/etcd/issues/9022.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-18 12:17:06 -08:00
fffb26596c words: whitelist "KeepAlive"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-18 10:34:43 -08:00
3e58dd707f clientv3: document lease KeepAlive streaming errors
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-18 10:19:41 -08:00
da3e3b7240 clientv3: document from "don't halt lease client if there is a lease error"
From https://github.com/coreos/etcd/pull/7866.
2017-12-18 09:55:00 -08:00
9c326ab78c Merge pull request #9034 from hexfusion/fx_ttl_doc
clientv3/lease.go: TTL, document expired Lease.
2017-12-18 09:19:55 -08:00
9b98cbb819 Merge pull request #9017 from hexfusion/test_lease_auth
e2e: improve Lease coverage
2017-12-18 09:19:29 -08:00
ed3672850c e2e: improve lease coverage 2017-12-18 10:47:42 -05:00
33a1d307df Merge pull request #9032 from hexfusion/perl_intergration
Documentation/integrations: minor style fix.
2017-12-18 07:20:41 -08:00
a5d9bff24c clientv3/lease.go: TTL, document expired Lease. 2017-12-18 08:34:19 -05:00
ac58646298 Documentation/integrations: minor style fix. 2017-12-18 07:57:28 -05:00
940dace5d1 Merge pull request #9025 from gyuho/meeting
README: add "Community meetings"
2017-12-15 14:59:55 -08:00
207827a94e README: add "Community meetings"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-15 14:37:37 -08:00
91415f8aaa Merge pull request #8961 from gyuho/test-scripts-4
hack/scripts-dev: add "docker-dns-example-certs-common-name-run"
2017-12-15 13:43:43 -08:00
5783460dbb hack/scripts-dev: rename to example
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-15 13:37:55 -08:00
7533f700f1 hack/scripts-dev: add "docker-dns-test-certs-common-name-run"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-15 13:33:41 -08:00
da52b23542 hack/scripts-dev/docker-dns: add "certs-common-name" test case
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-15 13:33:41 -08:00
9deaee3ea1 Merge pull request #9020 from mkumatag/fix_govet
Clientv3: Fix govet error for gotip
2017-12-15 09:21:19 -08:00
b0f0ba7f81 Merge pull request #9018 from gyuho/auth-ctx
*: fix server-side lease expire when auth is enabled
2017-12-15 08:57:26 -08:00
18746c65da Clientv3: Fix govet error for gotip 2017-12-15 14:31:27 +05:30
9fb7bbdb2d integration: add "TestV3AuthWithLeaseRevokeWithRoot"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-14 21:45:50 -08:00
85af65eca9 etcdserver: log lease revoke error
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-14 21:45:20 -08:00
1f191a0e34 auth: use NewIncomingContext for "WithRoot"
"WithRoot" is only used within local node, and
"AuthInfoFromCtx" expects token from incoming context.
Embed token with "NewIncomingContext" so that token
can be found in "AuthInfoFromCtx".

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-14 21:45:16 -08:00
014c375099 Merge pull request #9012 from gyuho/gyuho/help-flag
etcdmain: display default --enable-v2, --strict-reconfig-check value ("true")
2017-12-14 11:40:52 -08:00
608961b2b8 Merge pull request #8921 from gyuho/fileutil-darwin
pkg/fileutil: fix preallocate under OS X kernel
2017-12-14 11:38:17 -08:00
0133d77f0a etcdmain: display default --enable-v2, --strict-reconfig-check value ("true")
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-14 11:25:20 -08:00
954ced48d2 Merge pull request #8923 from gyuho/client-logger
clientv3: simplify balancer level logging
2017-12-14 10:39:02 -08:00
962976f2df pkg/fileutil: fix preallocate under OS X kernel
ftruncate changes st_blocks, and subsequent fallocate
syscalls return EINVAL when the allocated block size
is already greater than the requested block size
(e.g. st_blocks==8, requested blocks are 2).

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-14 10:36:57 -08:00
b48cf77abb Merge pull request #9004 from spzala/readme
Update Running etcd section for pre-built install of etcd
2017-12-13 09:11:41 -08:00
24bdfed0a5 README: update running etcd section for pre-built etcd install
The current Running etcd section only shows how to run etcd when it is built
from the master branch. If the user has installed a pre-built release following the
instructions on the release page, ./bin/etcd won't work to bring up etcd.
The Getting etcd section covers both pre-built releases and the master branch,
recommending the pre-built release, so the Running etcd section is updated
accordingly.

fix #9003
2017-12-12 22:43:24 -05:00
c886bda7fe Merge pull request #8996 from tinx/api_md_typo_fixes
Documentation/learning/api.md: fix typos
2017-12-12 13:23:02 -08:00
5a2b0dd0a7 Documentation/learning/api.md: fix markup, wording
A few words have been emphasized to be consistent with the rest
of the text. Also, some phrases have been altered for better
readability.
2017-12-12 18:48:06 +01:00
b52856d4f6 Merge pull request #8999 from yudai/fix_revision_failed_message
compactor: fix error message of Revision compactor
2017-12-11 11:44:13 -08:00
06365b6008 compactor: fix error message of Revision compactor
Reorder the parameters so that Noticef can output the error properly.
2017-12-11 10:39:00 -08:00
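
The fix is purely an argument-ordering one; a generic printf-style illustration follows (not the actual Noticef call in the compactor).

```go
package main

import (
	"errors"
	"log"
)

func main() {
	rev := int64(42)
	err := errors.New("mvcc: required revision has been compacted")

	// Swapped arguments: %d receives the error and %v receives the revision,
	// so the message comes out garbled (e.g. "%!d(*errors.errorString=...)").
	log.Printf("Failed auto-compaction at revision %d (%v)", err, rev)

	// Corrected ordering: verbs and arguments line up.
	log.Printf("Failed auto-compaction at revision %d (%v)", rev, err)
}
```
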
5f24a81f64 Documentation/learning/api.md: fix typos 2017-12-10 15:18:45 +01:00
809b0d71a3 Merge pull request #8995 from hexfusion/perl_intergration
Documentation/integrations: add Perl clients.
2017-12-09 15:49:03 -08:00
e8ff7da057 Documentation/integrations: add Perl clients. 2017-12-09 13:33:14 -05:00
a7f1fbe00e Merge pull request #8992 from gyuho/server-close
embed: stop *grpc.Server on *serveCtx serve error
2017-12-08 19:54:03 -08:00
e5e109609f Merge pull request #8991 from gyuho/upgrade-guide
Documentation/upgrades: highlight 3.2 breaking change, require gRPC v1.7.4
2017-12-08 18:55:46 -08:00
9744e1ee87 embed: stop *grpc.Server on *serveCtx serve error
If serve errors before *grpc.Server is sent to serversC,
it should be closed manually.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-08 18:50:37 -08:00
e3da56a8df Merge pull request #8985 from gyuho/update-dep
*: update bbolt, gogo/protobuf, golang/protobuf, regenerate protobuf
2017-12-08 08:57:49 -08:00
bbd2147248 Merge pull request #8988 from gyuho/error-return
embed/config: remove v3.2 TODO
2017-12-08 06:41:12 -08:00
015c04bcf5 Merge pull request #8987 from gyuho/tls-shutdown
embed: fix *grpc.Server panic on GracefulStop with TLS-enabled server
2017-12-08 06:40:50 -08:00
4d1a95c18b bill-of-materials: regenerate
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 21:31:13 -08:00
bcd5390b35 *: regenerate protobuf, grpc-gateway
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 21:31:13 -08:00
b1c6b98f3d scripts/genproto: require protoc 3.5, update gogo/proto
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 21:31:13 -08:00
749a9b14e0 vendor: upgrade bbolt, gogo/protobuf, golang/protobuf
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 21:31:13 -08:00
605dcb57a8 Documentation/upgrades: highlight 3.2 breaking change, require gRPC v1.7.4
There's already a section called "Server upgrade checklists" below.
Instead, highlight the listen URLs change as a breaking change in
server. Also update 3.2 and 3.3 gRPC requirements as v1.7.4.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 21:28:19 -08:00
af5a5b3998 embed/config: remove v3.2 TODO
Already returning error.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 20:37:12 -08:00
9bd07c91de integration: test GracefulStop on secure embedded server
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 20:36:31 -08:00
552b58dcfb embed: only gracefully shutdown insecure grpc.Server
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 20:36:31 -08:00
e39915feec embed: Avoid panic when shutting down gRPC Server
Avoid a panic when stopping the gRPC server if a TLS configuration is present.
The provided solution (attempts to) implement the suggestion from the gRPC team: https://github.com/grpc/grpc-go/issues/1384#issuecomment-317124531.

Fixes #8916
2017-12-07 20:36:31 -08:00
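
A sketch of the shutdown policy described in the two commits above, assuming only the insecure server is drained gracefully; the real embed package tracks more state than this.

```go
package main

import "google.golang.org/grpc"

// stopServer drains only the insecure server; the TLS-serving server is
// hard-stopped to avoid the GracefulStop panic referenced above.
func stopServer(gs *grpc.Server, insecure bool) {
	if insecure {
		gs.GracefulStop() // safe: this server owns its own plain listener
		return
	}
	gs.Stop() // TLS case: hard stop
}

func main() {
	stopServer(grpc.NewServer(), true)
}
```
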
fc2eecf90c Merge pull request #8989 from gyuho/upgrade-doc
Document/upgrades: add server upgrade checklists on listen URLs
2017-12-07 19:59:59 -08:00
3d44e55179 Document/upgrades: add server upgrade checklists on listen URLs
Address https://github.com/coreos/etcd/issues/6336#issuecomment-246486183
about https://github.com/coreos/etcd/pull/7236.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 17:45:26 -08:00
5b059acd65 semaphore: run upgrade tests against v3.2.11
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-07 14:35:41 -08:00
a2256a6f24 hack/scripts-dev/Makefile: grpc-proxy with additional metrics URLs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-06 14:24:11 -08:00
84e51cabc7 hack/scripts-dev: fix Makefile quoute, configurable host tmp dir
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-06 11:16:53 -08:00
805bcc828c clientv3: simplify V(4) logger with Lvl(4)
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-05 18:48:36 -08:00
5d2461e139 clientv3: add Lvl method to logger
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-05 18:48:36 -08:00
7e0fc6136e Merge pull request #8970 from gyuho/coverage-test
*: fix coverage test failures
2017-12-05 18:38:51 -08:00
f97233d206 test: log gocovmerge merging
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-05 17:18:13 -08:00
89047ab598 Dockerfile-test: use forked version of gocovmerge
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-05 17:17:17 -08:00
1b280a5a12 Merge pull request #8978 from gyuho/clientv3-doc
clientv3/config.go: remove extra whitespace character
2017-12-05 17:14:38 -08:00
944bd2c663 hack/scripts-dev: remove "Too many goroutines" in test scripts
Otherwise, "pkg/testutil" unit tests will trigger test failures.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-05 17:02:24 -08:00
3a941c9455 clientv3/config.go: remove extra whitespace character
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-05 14:51:36 -08:00
3ecd69cc2f hack/scripts-dev: mount host /tmp for Jenkins tests
Was running out of disk space in Jenkins.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-05 10:09:32 -08:00
21252e4219 Merge pull request #8973 from gyuho/changelog
CHANGELOG: add v3.2.11
2017-12-05 09:40:29 -08:00
8b13f7ff12 Merge pull request #8974 from gyuho/indentation
clientv3: fix indentation in doc.go
2017-12-04 17:34:26 -08:00
6458e22708 clientv3: fix indentation in doc.go
Looks off in https://godoc.org/github.com/coreos/etcd/clientv3.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-04 17:05:31 -08:00
788e759559 CHANGELOG: add v3.2.11
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-04 16:57:37 -08:00
6babf6a656 hack/scripts-dev: fix typo in Makefile
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-04 16:27:26 -08:00
287c23c4b1 Merge pull request #8972 from gyuho/grpc
vendor: upgrade grpc/grpc-go to v1.7.4
2017-12-04 16:21:14 -08:00
148192245c Merge pull request #8971 from gyuho/gosimple
*: fix gosimple warnings on sort.StringSlice
2017-12-04 16:13:57 -08:00
b3f53ce16d vendor: upgrade grpc/grpc-go to v1.7.4
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-04 14:27:06 -08:00
21d4307982 lease: use sort.Strings instead of StringSlice
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-04 14:10:14 -08:00
645c7c9a92 auth: use "sort.Strings" instead of StringSlice
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-04 14:09:27 -08:00
70db68b6e2 Merge pull request #8938 from gyuho/goddoc
clientv3/doc: update dial-timeout error handling with new gRPC
2017-12-04 13:46:10 -08:00
6b6013fad5 clientv3/doc: update dial-timeout error handling with new gRPC
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-04 13:45:42 -08:00
198d8d6b24 Merge pull request #8963 from gyuho/certs-gateway-srv
hack/scripts-dev: add "certs-gateway" test case with SRV
2017-12-04 13:39:16 -08:00
44e059879c Merge pull request #8968 from zbindenren/master
clientv3: Fix comment for DialKeepAliveTime and DialKeepAliveTimeout
2017-12-04 09:22:27 -08:00
e18afc462b clientv3: Fix comment for DialKeepAliveTime and DialKeepAliveTimeout 2017-12-04 14:22:34 +01:00
7e79c257ca Merge pull request #8960 from jpbetz/version-metric
metrics: Add server_version metric
2017-12-02 12:15:45 -08:00
49b4117077 hack/scripts-dev: add "docker-dns-srv-test-certs-gateway-run" to Makefile
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-01 15:56:10 -08:00
952f3b1a3b hack/scripts-dev/docker-dns-srv: add "certs-gateway" test case
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-01 15:54:58 -08:00
d0ee3e3c64 Merge pull request #8962 from gyuho/test-scripts-5
hack/scripts-dev/docker-dns: add "certs-gateway" test case
2017-12-01 15:51:12 -08:00
6fcfff132f Merge pull request #8959 from gyuho/test-scripts-3
hack/scripts-dev: share docker image between test cases, clean up DNS SRV tests
2017-12-01 15:46:24 -08:00
d50eb4d671 hack/scripts-dev: add separate certs, scripts to "docker-dns-srv"
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 15:45:35 -08:00
e3b3608175 hack/scripts-dev: add docker-dns-srv-test-certs-run, docker-dns-srv-test-certs-wildcard-run
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 15:44:53 -08:00
37ae6e0c41 hack/scripts-dev: keep only shared scripts in docker-dns-srv
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 15:44:53 -08:00
7c6fb57f2f hack/scripts-dev: add "docker-dns-test-certs-gateway-run" to Makefile
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-01 15:43:36 -08:00
9da00c73bd hack/scripts-dev/docker-dns: add "certs-gateway" test case
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-01 15:43:33 -08:00
413aa48593 Merge pull request #8958 from gyuho/test-scripts-2
hack/scripts-dev: share docker image between test cases, clean up DNS tests
2017-12-01 15:41:28 -08:00
461d70254e hack/scripts-dev: add separate certs to "docker-dns"
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 15:39:10 -08:00
4cacbf19dd metrics: Add server_version metric 2017-12-01 15:25:46 -08:00
b1cb99d3eb hack/scripts-dev: add docker-dns-test-certs-run, docker-dns-test-certs-wildcard-run
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 13:49:08 -08:00
a2c2f8ebc6 hack/scripts-dev: only keep shared scripts between test cases
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 13:45:50 -08:00
5db3cdd3bb Merge pull request #8957 from gyuho/test-scripts-1
semaphore, test: grep "test timed out" first, specify leaky goroutine string
2017-12-01 13:44:01 -08:00
75fc59fe0d hack/scripts-dev: grep "test timed out" first
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 13:35:15 -08:00
c6c3e81026 semaphore: grep "test timed out" first, then leaky goroutines
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 13:32:09 -08:00
c152be097b Merge pull request #8955 from gyuho/fix-e2e
e2e: fix remote error string in TestEtcdPeerCNAuth
2017-12-01 13:29:11 -08:00
156722e26a e2e: fix remote error string in TestEtcdPeerCNAuth
The error is now: embed: rejected connection from "127.0.0.1:58527" (error "remote error: tls: bad certificate", ServerName "").
This follows the change from https://github.com/coreos/etcd/pull/8952.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 12:54:29 -08:00
3537582bcf Merge pull request #8895 from gyuho/tls-doc
Documentation/op-guide: document TLS changes in 3.2
2017-12-01 09:41:32 -08:00
1613ef5822 Merge pull request #8952 from gyuho/tls-log
embed: provide more details on TLS handshake failure
2017-12-01 09:41:16 -08:00
ae589018cb embed: provide more details on TLS handshake failure
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-12-01 09:40:23 -08:00
83aa59b480 Documentation/op-guide: document TLS changes in 3.2
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-30 19:20:34 -08:00
b041ce5d51 Merge pull request #8946 from gyuho/cherry-pick
hack/patch: fix some typos in README, make cherrypick.sh executable
2017-11-30 11:35:15 -08:00
3167780cde hack/patch: fix some typos in README, make cherrypick.sh executable
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-30 11:21:38 -08:00
66a9508fdf Merge pull request #8945 from gyuho/logrus
glide: pin transitive dependency "sirupsen/logrus"
2017-11-30 10:19:15 -08:00
c232c85ba7 glide: pin transitive dependency "sirupsen/logrus"
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-30 10:03:04 -08:00
56a012f2ab Merge pull request #8841 from gyuho/test-test
clientv3/integration: add more tests on balancer switch, inflight range
2017-11-30 09:38:53 -08:00
faaac0f964 Merge pull request #8937 from gyuho/quay
Documentation/upgrades: gcr.io as primary, do not deprecate quay.io
2017-11-29 17:36:13 -08:00
7a1deaa12a Merge pull request #8939 from gyuho/server-stream-error-log
api/v3rpc: log grpc stream send/recv errors in server-side
2017-11-29 17:35:20 -08:00
6bd41f36ff api/v3rpc: log grpc stream send/recv errors in server-side
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-29 17:34:05 -08:00
08905ee594 Documentation/upgrades: gcr.io as primary, do not deprecate quay.io
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-29 11:03:44 -08:00
b21180d198 Merge pull request #8936 from gyuho/godoc-clientv3-errors
clientv3: update error handling godoc
2017-11-29 11:00:49 -08:00
92167e8773 clientv3: update error handling godoc
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-29 10:53:54 -08:00
4ad8bd9299 Merge pull request #8858 from gyuho/aaa
test: clean up fmt tests
2017-11-29 09:51:07 -08:00
dbd1787672 Merge pull request #8925 from jpbetz/patch-manager-table
Documentation: Add release manager table
2017-11-28 17:46:07 -08:00
614ef75c01 Merge pull request #8932 from gyuho/release-gcr
scripts/build-docker: build both gcr.io and quay.io images
2017-11-28 15:10:13 -08:00
aca39e2ae1 scripts/build-docker: build both gcr.io and quay.io images
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-28 15:02:09 -08:00
6e116542c2 Merge pull request #8928 from gyuho/timeout-config
embed: error on zero heartbeat-interval, election-timeout
2017-11-28 11:11:20 -08:00
878367ba4c Merge pull request #8930 from jpbetz/changelog-3.1.11
CHANGELOG: add v3.1.11, bug fixes
2017-11-28 11:08:44 -08:00
8cc9063ea6 CHANGELOG: add v3.1.11, bug fixes 2017-11-28 10:59:34 -08:00
c8277e1b02 etcdmain: test wrong heartbeat-interval, election-timeout in TestConfigFileElectionTimeout
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-28 09:53:35 -08:00
cffa130253 embed: error on zero heartbeat-interval, election-timeout
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-28 09:53:32 -08:00
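
A sketch of the added validation (hypothetical helper; the real check lives in embed's config validation):

```go
package main

import "fmt"

// validateTimeouts rejects zero heartbeat-interval or election-timeout values.
func validateTimeouts(heartbeatMs, electionMs uint) error {
	if heartbeatMs == 0 {
		return fmt.Errorf("--heartbeat-interval must be >0 (got %dms)", heartbeatMs)
	}
	if electionMs == 0 {
		return fmt.Errorf("--election-timeout must be >0 (got %dms)", electionMs)
	}
	return nil
}

func main() {
	fmt.Println(validateTimeouts(0, 1000))   // heartbeat error
	fmt.Println(validateTimeouts(100, 1000)) // <nil>
}
```
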
7e876ecc90 Documentation: Add release manager table 2017-11-27 15:43:35 -08:00
a7cb307a18 clientv3/integration: add more tests on balancer switch, inflight range
Test all possible cases of server shutdown with inflight range requests.
Removed redundant tests in kv_test.go.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-27 15:05:12 -08:00
9717a1240c Merge pull request #8926 from gyuho/fix-test
clientv3/integration: move isServerCtxTimeout to server_shutdown_test.go
2017-11-27 15:03:43 -08:00
bd76ac85db clientv3/integration: move isServerCtxTimeout to server_shutdown_test.go
Tests with cluster_proxy tags were failing, since isServerCtxTimeout
was defined with "+build !cluster_proxy".

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-27 15:02:48 -08:00
96e32a408f Merge pull request #8896 from gyuho/error
clientv3/integration: handle server-side context timeouts from clock-drift
2017-11-27 14:34:22 -08:00
289653d914 Merge pull request #8920 from gyuho/fix-flag
pkg/flags: fix "SetFlagsFromEnv" error masking
2017-11-27 14:23:09 -08:00
a9105b5a8d clientv3: document context timeout error with server-side clock skew
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-27 14:06:42 -08:00
0d0e8e78f7 clientv3/integration: handle server-side context timeouts from clock-drift
Due to clock drift on the server side, the client context times out
first on the server side, while the original client-side context has
not timed out yet.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-27 14:02:09 -08:00
bdff651428 Merge pull request #8922 from gyuho/corrupt-check
*: enable initial corrupt check in tests
2017-11-27 10:37:39 -08:00
a20b24be7b etcd-tester: enable initial corrupt check by default
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-27 09:41:53 -08:00
f228f6a002 e2e: enable initialCorruptCheck by default
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-27 09:39:22 -08:00
965d9806d5 pkg/flags: fix "SetFlagsFromEnv" error masking
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-27 06:41:43 -08:00
5fbb4590fd Merge pull request #8918 from xiang90/ic
integration: always enable initial corruption check
2017-11-27 00:11:35 -08:00
1c69cc5657 integration: always enable initial corruption check 2017-11-26 16:51:04 -08:00
d84d3f2f77 Merge pull request #8554 from gyuho/initial-hash-checking
*: check data corruption on boot
2017-11-23 09:57:26 -08:00
0e4e8ed3d1 embed: corrupt-check on restart member
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-22 21:20:19 -08:00
e0dfc4368f etcdserver: CheckInitialHashKV when "InitialCorruptCheck==true"
etcdserver: only compare hash values if any

It's possible that a peer has a higher revision than the local node.
In that case, hashes will still differ at the requested
revision, but the peer's header revision is greater.

etcdserver: count mismatch only when compact revisions are same

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-22 21:20:14 -08:00
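For illustration only (not part of the commit): a hedged sketch of the comparison rule described above; the `HashKVResponse` fields are etcd's, but this is not the actual etcdserver code.

```
import pb "github.com/coreos/etcd/etcdserver/etcdserverpb"

// countHashMismatch counts a mismatch only when both sides compacted at the
// same revision; a peer with a higher header revision is not necessarily
// corrupted, so differing hashes are ignored in that case.
func countHashMismatch(local, peer *pb.HashKVResponse) int {
	if peer.CompactRevision == local.CompactRevision && peer.Hash != local.Hash {
		return 1
	}
	return 0
}
```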
f739853ec6 Merge pull request #8564 from gyuho/upgrade-doc
Documentation/upgrades: add 3.3 changes
2017-11-22 16:20:40 -08:00
1f38f1fddb e2e: add corruption checking tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-22 15:52:09 -08:00
3db5ad8d57 embed,etcdmain: add "--experimental-initial-corrupt-check"
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-22 15:27:14 -08:00
c983f0ae98 Documentation: fix typo in upgrade 3.2 guide 2017-11-22 11:08:21 -08:00
321a9ca0a0 Documentation/upgrades: add 3.3 changes
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-21 10:01:25 -08:00
27519ffdb4 test: clean up fmt tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-12 14:19:53 -08:00
602 changed files with 39344 additions and 8122 deletions

.gitignore
View File

@@ -1,13 +1,14 @@
/agent-*
/coverage
/covdir
/docs
/vendor
/gopath
/gopath.proto
/go-bindata
/release
/machine*
/bin
.Dockerfile-test
.vagrant
*.etcd
*.log
@@ -15,8 +16,6 @@
*.swp
/hack/insta-discovery/.env
*.test
tools/functional-tester/docker/bin
hack/scripts-dev/docker-dns/.Dockerfile
hack/scripts-dev/docker-dns-srv/.Dockerfile
hack/tls-setup/certs
.idea
*.bak

View File

@@ -1,16 +0,0 @@
#!/usr/bin/env bash
TEST_SUFFIX=$(date +%s | base64 | head -c 15)
TEST_OPTS="RELEASE_TEST=y INTEGRATION=y PASSES='build unit release integration_e2e functional' MANUAL_VER=v3.2.9"
if [ "$TEST_ARCH" == "386" ]; then
TEST_OPTS="GOARCH=386 PASSES='build unit integration_e2e'"
fi
docker run \
--rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd \
gcr.io/etcd-development/etcd-test:go1.9.2 \
/bin/bash -c "${TEST_OPTS} ./test 2>&1 | tee test-${TEST_SUFFIX}.log"
! egrep "(--- FAIL:|leak|panic: test timed out)" -A10 -B50 test-${TEST_SUFFIX}.log

View File

@@ -6,8 +6,7 @@ sudo: required
services: docker
go:
- 1.9.2
- tip
- 1.9.7
notifications:
on_success: never
@@ -15,74 +14,84 @@ notifications:
env:
matrix:
- TARGET=amd64
- TARGET=amd64-go-tip
- TARGET=darwin-amd64
- TARGET=windows-amd64
- TARGET=arm64
- TARGET=arm
- TARGET=386
- TARGET=ppc64le
- TARGET=linux-amd64-build
- TARGET=linux-amd64-unit
- TARGET=linux-amd64-integration
- TARGET=linux-amd64-functional
- TARGET=linux-386-build
- TARGET=linux-386-unit
- TARGET=darwin-amd64-build
- TARGET=windows-amd64-build
- TARGET=linux-arm-build
- TARGET=linux-arm64-build
- TARGET=linux-ppc64le-build
matrix:
fast_finish: true
allow_failures:
- go: tip
env: TARGET=amd64-go-tip
exclude:
- go: 1.9.2
env: TARGET=amd64-go-tip
- go: tip
env: TARGET=amd64
- go: tip
env: TARGET=darwin-amd64
- go: tip
env: TARGET=windows-amd64
- go: tip
env: TARGET=arm
- go: tip
env: TARGET=arm64
- go: tip
env: TARGET=386
- go: tip
env: TARGET=ppc64le
before_install:
- docker pull gcr.io/etcd-development/etcd-test:go1.9.2
- if [[ $TRAVIS_GO_VERSION == 1.* ]]; then docker pull gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION}; fi
install:
- pushd cmd/etcd && go get -t -v ./... && popd
script:
- echo "TRAVIS_GO_VERSION=${TRAVIS_GO_VERSION}"
- >
case "${TARGET}" in
amd64)
linux-amd64-build)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.9.2 \
/bin/bash -c "GOARCH=amd64 ./test"
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GOARCH=amd64 PASSES='build' ./test"
;;
amd64-go-tip)
GOARCH=amd64 ./test
;;
darwin-amd64)
linux-amd64-unit)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.9.2 \
/bin/bash -c "GO_BUILD_FLAGS='-a -v' GOOS=darwin GOARCH=amd64 ./build"
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GOARCH=amd64 PASSES='unit' ./test"
;;
windows-amd64)
linux-amd64-integration)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.9.2 \
/bin/bash -c "GO_BUILD_FLAGS='-a -v' GOOS=windows GOARCH=amd64 ./build"
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GOARCH=amd64 PASSES='integration' ./test"
;;
386)
linux-amd64-functional)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.9.2 \
/bin/bash -c "GOARCH=386 PASSES='build unit' ./test"
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "./build && GOARCH=amd64 PASSES='build functional' ./test"
;;
*)
# test building out of gopath
linux-386-build)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.9.2 \
/bin/bash -c "GO_BUILD_FLAGS='-a -v' GOARCH='${TARGET}' ./build"
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GOARCH=386 PASSES='build' ./test"
;;
linux-386-unit)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GOARCH=386 PASSES='unit' ./test"
;;
darwin-amd64-build)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GO_BUILD_FLAGS='-v' GOOS=darwin GOARCH=amd64 ./build"
;;
windows-amd64-build)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GO_BUILD_FLAGS='-v' GOOS=windows GOARCH=amd64 ./build"
;;
linux-arm-build)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GO_BUILD_FLAGS='-v' GOARCH=arm ./build"
;;
linux-arm64-build)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GO_BUILD_FLAGS='-v' GOARCH=arm64 ./build"
;;
linux-ppc64le-build)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GO_BUILD_FLAGS='-v' GOARCH=ppc64le ./build"
;;
esac

.words
View File

@@ -1,6 +1,11 @@
DefaultMaxRequestBytes
ErrCodeEnhanceYourCalm
ErrTimeout
GoAway
KeepAlive
Keepalive
MiB
ResourceExhausted
RPC
RPCs
TODO
@@ -28,6 +33,7 @@ mutex
prefetching
protobuf
prometheus
rafthttp
repin
serializable
teardown

View File

@@ -1,48 +1,307 @@
## [v3.2.10](https://github.com/coreos/etcd/releases/tag/v3.2.10) (2017-11-16)
## [v3.3.0](https://github.com/coreos/etcd/releases/tag/v3.3.0)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.9...v3.2.10).
See [code changes](https://github.com/coreos/etcd/compare/v3.2.0...v3.3.0) and [v3.3 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_3.md) for any breaking changes.
### Improved
- Use [`coreos/bbolt`](https://github.com/coreos/bbolt/releases) to replace [`boltdb/bolt`](https://github.com/boltdb/bolt#project-status).
- Fix [etcd database size grows until `mvcc: database space exceeded`](https://github.com/coreos/etcd/issues/8009).
- [Reduce memory allocation](https://github.com/coreos/etcd/pull/8428) on [Range operations](https://github.com/coreos/etcd/pull/8475).
- [Rate limit](https://github.com/coreos/etcd/pull/8099) and [randomize](https://github.com/coreos/etcd/pull/8101) lease revoke on restart or leader elections.
- Prevent [spikes in Raft proposal rate](https://github.com/coreos/etcd/issues/8096).
- Support `clientv3` balancer failover under [network faults/partitions](https://github.com/coreos/etcd/issues/8711).
- Better warning on [mismatched `--initial-cluster`](https://github.com/coreos/etcd/pull/8083) flag.
### Changed(Breaking Changes)
- Require [Go 1.9+](https://github.com/coreos/etcd/issues/6174).
- Compile with *Go 1.9.2*.
- Deprecate [`golang.org/x/net/context`](https://github.com/coreos/etcd/pull/8511).
- Require [`google.golang.org/grpc`](https://github.com/grpc/grpc-go/releases) [**`v1.7.4`**](https://github.com/grpc/grpc-go/releases/tag/v1.7.4) or [**`v1.7.5+`**](https://github.com/grpc/grpc-go/releases/tag/v1.7.5):
- Deprecate [`metadata.Incoming/OutgoingContext`](https://github.com/coreos/etcd/pull/7896).
- Deprecate `grpclog.Logger`, upgrade to [`grpclog.LoggerV2`](https://github.com/coreos/etcd/pull/8533).
- Deprecate [`grpc.ErrClientConnTimeout`](https://github.com/coreos/etcd/pull/8505) errors in `clientv3`.
- Use [`MaxRecvMsgSize` and `MaxSendMsgSize`](https://github.com/coreos/etcd/pull/8437) to limit message size in the etcd server.
- Upgrade [`github.com/grpc-ecosystem/grpc-gateway`](https://github.com/grpc-ecosystem/grpc-gateway/releases) `v1.2.2` to `v1.3.0`.
- Translate [gRPC status error in v3 client `Snapshot` API](https://github.com/coreos/etcd/pull/9038).
- Upgrade [`github.com/ugorji/go/codec`](https://github.com/ugorji/go) for v2 `client`.
- [Regenerated](https://github.com/coreos/etcd/pull/8721) v2 `client` source code with latest `ugorji/go/codec`.
- Fix [`/health` endpoint JSON output](https://github.com/coreos/etcd/pull/8312).
- v3 `etcdctl` [`lease timetolive LEASE_ID`](https://github.com/coreos/etcd/issues/9028) on expired lease now prints [`lease LEASE_ID already expired`](https://github.com/coreos/etcd/pull/9047).
- <=3.2 prints `lease LEASE_ID granted with TTL(0s), remaining(-1s)`.
### Added(`etcd`)
- Add [`--experimental-enable-v2v3`](https://github.com/coreos/etcd/pull/8407) flag to [emulate v2 API with v3](https://github.com/coreos/etcd/issues/6925).
- Add [`--experimental-corrupt-check-time`](https://github.com/coreos/etcd/pull/8420) flag to [raise corrupt alarm monitoring](https://github.com/coreos/etcd/issues/7125).
- Add [`--experimental-initial-corrupt-check`](https://github.com/coreos/etcd/pull/8554) flag to [check database hash before serving client/peer traffic](https://github.com/coreos/etcd/issues/8313).
- Add [`--max-txn-ops`](https://github.com/coreos/etcd/pull/7976) flag to [configure maximum number operations in transaction](https://github.com/coreos/etcd/issues/7826).
- Add [`--max-request-bytes`](https://github.com/coreos/etcd/pull/7968) flag to [configure maximum client request size](https://github.com/coreos/etcd/issues/7923).
- If not configured, it defaults to 1.5 MiB.
- Add [`--client-crl-file`, `--peer-crl-file`](https://github.com/coreos/etcd/pull/8124) flags for [Certificate revocation list](https://github.com/coreos/etcd/issues/4034).
- Add [`--peer-require-cn`](https://github.com/coreos/etcd/pull/8616) flag to support [CN-based auth for inter-peer connection](https://github.com/coreos/etcd/issues/8262).
- Add [`--listen-metrics-urls`](https://github.com/coreos/etcd/pull/8242) flag for additional `/metrics` endpoints.
- Support [additional (non) TLS `/metrics` endpoints for a TLS-enabled cluster](https://github.com/coreos/etcd/pull/8282).
- e.g. `--listen-metrics-urls=https://localhost:2378,http://localhost:9379` to serve `/metrics` on secure port 2378 and insecure port 9379.
- Useful for [bypassing critical APIs when monitoring etcd](https://github.com/coreos/etcd/issues/8060).
- Add [`--auto-compaction-mode`](https://github.com/coreos/etcd/pull/8123) flag to [support revision-based compaction](https://github.com/coreos/etcd/issues/8098).
- Change `--auto-compaction-retention` flag to [accept string values](https://github.com/coreos/etcd/pull/8563) with [finer granularity](https://github.com/coreos/etcd/issues/8503).
- Add [`--grpc-keepalive-min-time`, `--grpc-keepalive-interval`, `--grpc-keepalive-timeout`](https://github.com/coreos/etcd/pull/8535) flags to configure server-side keepalive policies.
- Serve [`/health` endpoint as unhealthy](https://github.com/coreos/etcd/pull/8272) when [alarm is raised](https://github.com/coreos/etcd/issues/8207).
- Provide [error information in `/health`](https://github.com/coreos/etcd/pull/8312).
- e.g. `{"health":false,"errors":["NOSPACE"]}`.
- Move [logging setup to embed package](https://github.com/coreos/etcd/pull/8810).
- Disable gRPC server log by default.
- Use [monotonic time in Go 1.9](https://github.com/coreos/etcd/pull/8507) for `lease` package.
- Warn on [empty hosts in advertise URLs](https://github.com/coreos/etcd/pull/8384).
- Address [advertise client URLs accepts empty hosts](https://github.com/coreos/etcd/issues/8379).
- etcd `v3.4` will exit on this error.
- e.g. `--advertise-client-urls=http://:2379`.
- Warn on [shadowed environment variables](https://github.com/coreos/etcd/pull/8385).
- Address [error on shadowed environment variables](https://github.com/coreos/etcd/issues/8380).
- etcd `v3.4` will exit on this error.
### Added(API)
- Support [ranges in transaction comparisons](https://github.com/coreos/etcd/pull/8025) for [disconnected linearized reads](https://github.com/coreos/etcd/issues/7924).
- Add [nested transactions](https://github.com/coreos/etcd/pull/8102) to extend [proxy use cases](https://github.com/coreos/etcd/issues/7857).
- Add [lease comparison target in transaction](https://github.com/coreos/etcd/pull/8324).
- Add [lease list](https://github.com/coreos/etcd/pull/8358).
- Add [hash by revision](https://github.com/coreos/etcd/pull/8263) for [better corruption checking against boltdb](https://github.com/coreos/etcd/issues/8016).
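For illustration only (not part of the changelog): a minimal Go sketch of the transaction additions above (lease comparison target, nested transactions) using `clientv3`; the endpoint, key names, and lease usage are assumptions made for this example.

```
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Grant a lease so there is a lease ID to compare against.
	lease, err := cli.Grant(ctx, 10)
	if err != nil {
		log.Fatal(err)
	}

	// Lease comparison target: the transaction succeeds only if "lock/key"
	// is attached to this lease; the Then branch runs a nested transaction.
	resp, err := cli.Txn(ctx).
		If(clientv3.Compare(clientv3.LeaseValue("lock/key"), "=", lease.ID)).
		Then(clientv3.OpTxn(nil, []clientv3.Op{clientv3.OpPut("lock/owner", "me")}, nil)).
		Commit()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("txn succeeded:", resp.Succeeded)
}
```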
### Added(`etcd/clientv3`)
- Add [health balancer](https://github.com/coreos/etcd/pull/8545) to fix [watch API hangs](https://github.com/coreos/etcd/issues/7247), improve [endpoint switch under network faults](https://github.com/coreos/etcd/issues/7941).
- [Refactor balancer](https://github.com/coreos/etcd/pull/8840) and add [client-side keepalive pings](https://github.com/coreos/etcd/pull/8199) to handle [network partitions](https://github.com/coreos/etcd/issues/8711).
- Add [`MaxCallSendMsgSize` and `MaxCallRecvMsgSize`](https://github.com/coreos/etcd/pull/9047) fields to [`clientv3.Config`](https://godoc.org/github.com/coreos/etcd/clientv3#Config).
- Fix [exceeded response size limit error in client-side](https://github.com/coreos/etcd/issues/9043).
- Address [kubernetes#51099](https://github.com/kubernetes/kubernetes/issues/51099).
- `MaxCallSendMsgSize` default value is 2 MiB, if not configured.
- `MaxCallRecvMsgSize` default value is `math.MaxInt32`, if not configured.
- Accept [`Compare_LEASE`](https://github.com/coreos/etcd/pull/8324) in [`clientv3.Compare`](https://godoc.org/github.com/coreos/etcd/clientv3#Compare).
- Add [`LeaseValue` helper](https://github.com/coreos/etcd/pull/8488) to `Cmp` `LeaseID` values in `Txn`.
- Add [`MoveLeader`](https://github.com/coreos/etcd/pull/8153) to `Maintenance`.
- Add [`HashKV`](https://github.com/coreos/etcd/pull/8351) to `Maintenance`.
- Add [`Leases`](https://github.com/coreos/etcd/pull/8358) to `Lease`.
- Add [`clientv3/ordering`](https://github.com/coreos/etcd/pull/8092) to enforce [ordering in serialized requests](https://github.com/coreos/etcd/issues/7623).
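Again for illustration only: a hedged sketch of the `clientv3` additions above. The message-size values mirror the documented defaults, while the endpoint and the commented-out member ID are assumptions.

```
package main

import (
	"context"
	"fmt"
	"log"
	"math"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:          []string{"localhost:2379"},
		DialTimeout:        5 * time.Second,
		MaxCallSendMsgSize: 2 * 1024 * 1024, // documented default: 2 MiB
		MaxCallRecvMsgSize: math.MaxInt32,   // documented default: math.MaxInt32
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// New Lease API: list granted leases.
	if ls, err := cli.Leases(ctx); err == nil {
		fmt.Println("leases:", len(ls.Leases))
	}

	// New Maintenance API: per-endpoint KV hash by revision (0 = latest).
	if h, err := cli.HashKV(ctx, "localhost:2379", 0); err == nil {
		fmt.Printf("hash=%d at revision=%d\n", h.Hash, h.Header.Revision)
	}

	// New Maintenance API: transfer leadership to another member
	// (0x1234 is an illustrative member ID).
	// _, _ = cli.MoveLeader(ctx, 0x1234)
}
```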
### Added(v2 `etcdctl`)
- Add [`backup --with-v3`](https://github.com/coreos/etcd/pull/8479) flag.
### Added(v3 `etcdctl`)
- Add [`--discovery-srv`](https://github.com/coreos/etcd/pull/8462) flag.
- Add [`--keepalive-time`, `--keepalive-timeout`](https://github.com/coreos/etcd/pull/8663) flags.
- Add [`lease list`](https://github.com/coreos/etcd/pull/8358) command.
- Add [`lease keep-alive --once`](https://github.com/coreos/etcd/pull/8775) flag.
- Make [`lease timetolive LEASE_ID`](https://github.com/coreos/etcd/issues/9028) on expired lease print [`lease LEASE_ID already expired`](https://github.com/coreos/etcd/pull/9047).
- <=3.2 prints `lease LEASE_ID granted with TTL(0s), remaining(-1s)`.
- Add [`defrag --data-dir`](https://github.com/coreos/etcd/pull/8367) flag.
- Add [`move-leader`](https://github.com/coreos/etcd/pull/8153) command.
- Add [`endpoint hashkv`](https://github.com/coreos/etcd/pull/8351) command.
- Add [`endpoint --cluster`](https://github.com/coreos/etcd/pull/8143) flag, equivalent to [v2 `etcdctl cluster-health`](https://github.com/coreos/etcd/issues/8117).
- Make `endpoint health` command terminate with [non-zero exit code on unhealthy status](https://github.com/coreos/etcd/pull/8342).
- Add [`lock --ttl`](https://github.com/coreos/etcd/pull/8370) flag.
- Support [`watch [key] [range_end] -- [exec-command…]`](https://github.com/coreos/etcd/pull/8919), equivalent to [v2 `etcdctl exec-watch`](https://github.com/coreos/etcd/issues/8814).
- Enable [`clientv3.WithRequireLeader(context.Context)` for `watch`](https://github.com/coreos/etcd/pull/8672) command.
- Print [`"del"` instead of `"delete"`](https://github.com/coreos/etcd/pull/8297) in `txn` interactive mode.
- Print [`ETCD_INITIAL_ADVERTISE_PEER_URLS` in `member add`](https://github.com/coreos/etcd/pull/8332).
### Added(metrics)
- Add [`etcd --listen-metrics-urls`](https://github.com/coreos/etcd/pull/8242) flag for additional `/metrics` endpoints.
- Useful for [bypassing critical APIs when monitoring etcd](https://github.com/coreos/etcd/issues/8060).
- Add [`etcd_server_version`](https://github.com/coreos/etcd/pull/8960) Prometheus metric.
- To replace [Kubernetes `etcd-version-monitor`](https://github.com/coreos/etcd/issues/8948).
- Add [`etcd_debugging_mvcc_db_compaction_keys_total`](https://github.com/coreos/etcd/pull/8280) Prometheus metric.
- Add [`etcd_debugging_server_lease_expired_total`](https://github.com/coreos/etcd/pull/8064) Prometheus metric.
- To improve [lease revoke monitoring](https://github.com/coreos/etcd/issues/8050).
- Document [Prometheus 2.0 rules](https://github.com/coreos/etcd/pull/8879).
- Initialize gRPC server [metrics with zero values](https://github.com/coreos/etcd/pull/8878).
### Added(`grpc-proxy`)
- Add [`grpc-proxy start --experimental-leasing-prefix`](https://github.com/coreos/etcd/pull/8341) flag:
- For disconnected linearized reads.
- Based on [V system leasing](https://github.com/coreos/etcd/issues/6065).
- See ["Disconnected consistent reads with etcd" blog post](https://coreos.com/blog/coreos-labs-disconnected-consistent-reads-with-etcd).
- Add [`grpc-proxy start --experimental-serializable-ordering`](https://github.com/coreos/etcd/pull/8315) flag.
- To ensure serializable reads have monotonically increasing store revisions across endpoints.
- Add [`grpc-proxy start --metrics-addr`](https://github.com/coreos/etcd/pull/8242) flag for an additional `/metrics` endpoint.
- Set `--metrics-addr=http://[HOST]:9379` to serve `/metrics` on insecure port 9379.
- Serve [`/health` endpoint in grpc-proxy](https://github.com/coreos/etcd/pull/8322).
- Add [`grpc-proxy start --debug`](https://github.com/coreos/etcd/pull/8994) flag.
### Added(gRPC gateway)
- Replace [gRPC gateway](https://github.com/grpc-ecosystem/grpc-gateway) endpoint with [`/v3beta`](https://github.com/coreos/etcd/pull/8880).
- To deprecate [`/v3alpha`](https://github.com/coreos/etcd/issues/8125) in `v3.4`.
- Support ["authorization" token](https://github.com/coreos/etcd/pull/7999).
- Support [websocket for bi-directional streams](https://github.com/coreos/etcd/pull/8257).
- Fix [`Watch` API with gRPC gateway](https://github.com/coreos/etcd/issues/8237).
- Upgrade gRPC gateway to [v1.3.0](https://github.com/coreos/etcd/issues/8838).
### Added(`etcd/raft`)
- Add [non-voting member](https://github.com/coreos/etcd/pull/8751).
- To implement [Raft thesis 4.2.1 Catching up new servers](https://github.com/coreos/etcd/issues/8568).
- `Learner` node does not vote or promote itself.
### Added/Fixed(Security/Auth)
- Add [CRL based connection rejection](https://github.com/coreos/etcd/pull/8124) to manage [revoked certs](https://github.com/coreos/etcd/issues/4034).
- Document [TLS authentication changes](https://github.com/coreos/etcd/pull/8895):
- [Server accepts connections if IP matches, without checking DNS entries](https://github.com/coreos/etcd/pull/8223). For instance, if peer cert contains IP addresses and DNS names in Subject Alternative Name (SAN) field, and the remote IP address matches one of those IP addresses, server just accepts connection without further checking the DNS names.
- [Server supports reverse-lookup on wildcard DNS `SAN`](https://github.com/coreos/etcd/pull/8281). For instance, if peer cert contains only DNS names (no IP addresses) in Subject Alternative Name (SAN) field, server first reverse-lookups the remote IP address to get a list of names mapping to that address (e.g. `nslookup IPADDR`). Then accepts the connection if those names have a matching name with peer cert's DNS names (either by exact or wildcard match). If none is matched, server forward-lookups each DNS entry in peer cert (e.g. look up `example.default.svc` when the entry is `*.example.default.svc`), and accepts connection only when the host's resolved addresses have the matching IP address with the peer's remote IP address.
- Add [`etcd --peer-require-cn`](https://github.com/coreos/etcd/pull/8616) flag.
- To support [CommonName(CN) based auth](https://github.com/coreos/etcd/issues/8262) for inter peer connection.
- [Swap priority](https://github.com/coreos/etcd/pull/8594) of cert CommonName(CN) and username + password.
- To address ["username and password specified in the request should take priority over CN in the cert"](https://github.com/coreos/etcd/issues/8584).
- Protect [lease revoke with auth](https://github.com/coreos/etcd/pull/8031).
- Provide user's role on [auth permission error](https://github.com/coreos/etcd/pull/8164).
- Fix [auth store panic with disabled token](https://github.com/coreos/etcd/pull/8695).
- Update `golang.org/x/crypto/bcrypt` (see [golang/crypto@6c586e1](https://github.com/golang/crypto/commit/6c586e17d90a7d08bbbc4069984180dce3b04117)).
### Fixed(v2)
- [Fail-over v2 client](https://github.com/coreos/etcd/pull/8519) to next endpoint on [oneshot failure](https://github.com/coreos/etcd/issues/8515).
- [Put back `/v2/machines`](https://github.com/coreos/etcd/pull/8062) endpoint for python-etcd wrapper.
### Fixed(v3)
- Fix [range/put/delete operation metrics](https://github.com/coreos/etcd/pull/8054) with transaction:
- `etcd_debugging_mvcc_range_total`
- `etcd_debugging_mvcc_put_total`
- `etcd_debugging_mvcc_delete_total`
- `etcd_debugging_mvcc_txn_total`
- Fix [`etcd_debugging_mvcc_keys_total`](https://github.com/coreos/etcd/pull/8390) on restore.
- Fix [`etcd_debugging_mvcc_db_total_size_in_bytes`](https://github.com/coreos/etcd/pull/8120) on restore.
- Also change to [`prometheus.NewGaugeFunc`](https://github.com/coreos/etcd/pull/8150).
- Fix [backend database in-memory index corruption](https://github.com/coreos/etcd/pull/8127) issue on restore (only 3.2.0 is affected).
- Fix [watch restore from snapshot](https://github.com/coreos/etcd/pull/8427).
- Fix ["put at-most-once" in `clientv3`](https://github.com/coreos/etcd/pull/8335).
- Handle [empty key permission](https://github.com/coreos/etcd/pull/8514) in `etcdctl`.
- [Fix server crash](https://github.com/coreos/etcd/pull/8010) on [invalid transaction request from gRPC gateway](https://github.com/coreos/etcd/issues/7889).
- Fix [`clientv3.WatchResponse.Canceled`](https://github.com/coreos/etcd/pull/8283) on [compacted watch request](https://github.com/coreos/etcd/issues/8231).
- Handle [WAL renaming failure on Windows](https://github.com/coreos/etcd/pull/8286).
- Make [peer dial timeout longer](https://github.com/coreos/etcd/pull/8599).
- See [coreos/etcd-operator#1300](https://github.com/coreos/etcd-operator/issues/1300) for more detail.
- Make server [wait up to request time-out](https://github.com/coreos/etcd/pull/8267) with [pending RPCs](https://github.com/coreos/etcd/issues/8224).
- Fix [`grpc.Server` panic on `GracefulStop`](https://github.com/coreos/etcd/pull/8987) with [TLS-enabled server](https://github.com/coreos/etcd/issues/8916).
- Fix ["multiple peer URLs cannot start" issue](https://github.com/coreos/etcd/issues/8383).
- Fix server-side auth so [concurrent auth operations do not return old revision error](https://github.com/coreos/etcd/pull/8442).
- Fix [`concurrency/stm` `Put` with serializable snapshot](https://github.com/coreos/etcd/pull/8439).
- Use store revision from first fetch to resolve write conflicts instead of modified revision.
- Fix [`grpc-proxy` Snapshot API error handling](https://github.com/coreos/etcd/commit/dbd16d52fbf81e5fd806d21ff5e9148d5bf203ab).
- Fix [`grpc-proxy` KV API `PrevKv` flag handling](https://github.com/coreos/etcd/pull/8366).
- Fix [`grpc-proxy` KV API `KeysOnly` flag handling](https://github.com/coreos/etcd/pull/8552).
- Upgrade [`coreos/go-systemd`](https://github.com/coreos/go-systemd/releases) to `v15` (see https://github.com/coreos/go-systemd/releases/tag/v15).
### Other
- Support previous two minor versions (see our [new release policy](https://github.com/coreos/etcd/pull/8805)).
- `v3.3.x` is the last release cycle that supports `ACI`:
- AppC was [officially suspended](https://github.com/appc/spec#-disclaimer-), as of late 2016.
- [`acbuild`](https://github.com/containers/build#this-project-is-currently-unmaintained) is not maintained anymore.
- `*.aci` files won't be available from etcd `v3.4` release.
- Add container registry [`gcr.io/etcd-development/etcd`](https://gcr.io/etcd-development/etcd).
- [quay.io/coreos/etcd](https://quay.io/coreos/etcd) is still supported as secondary.
## [v3.2.12](https://github.com/coreos/etcd/releases/tag/v3.2.12) (2017-12-20)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.11...v3.2.12) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
- Replace backend key-value database `boltdb/bolt` with [`coreos/bbolt`](https://github.com/coreos/bbolt/releases) to address [backend database size issue](https://github.com/coreos/etcd/issues/8009)
- Fix clientv3 balancer to handle [network partition](https://github.com/coreos/etcd/issues/8711)
- Upgrade [`google.golang.org/grpc`](https://github.com/grpc/grpc-go/releases) v1.2.1 to v1.7.3
- Upgrade [`github.com/grpc-ecosystem/grpc-gateway`](https://github.com/grpc-ecosystem/grpc-gateway/releases) v1.2 to v1.3
- Revert [discovery SRV auth `ServerName` with `*.{ROOT_DOMAIN}`](https://github.com/coreos/etcd/pull/8651) to support non-wildcard subject alternative names in the certs (see [issue #8445](https://github.com/coreos/etcd/issues/8445) for more contexts)
- For instance, `etcd --discovery-srv=etcd.local` will only authenticate peers/clients when the provided certs have root domain `etcd.local` (**not `*.etcd.local`**) as an entry in Subject Alternative Name (SAN) field
- Fix [error message of `Revision` compactor](https://github.com/coreos/etcd/pull/8999) in server-side.
### Added(`etcd/clientv3`,`etcdctl/v3`)
- Add [`MaxCallSendMsgSize` and `MaxCallRecvMsgSize`](https://github.com/coreos/etcd/pull/9047) fields to [`clientv3.Config`](https://godoc.org/github.com/coreos/etcd/clientv3#Config).
- Fix [exceeded response size limit error in client-side](https://github.com/coreos/etcd/issues/9043).
- Address [kubernetes#51099](https://github.com/kubernetes/kubernetes/issues/51099).
- `MaxCallSendMsgSize` default value is 2 MiB, if not configured.
- `MaxCallRecvMsgSize` default value is `math.MaxInt32`, if not configured.
### Other
- Pin [grpc v1.7.5](https://github.com/grpc/grpc-go/releases/tag/v1.7.5), [grpc-gateway v1.3.0](https://github.com/grpc-ecosystem/grpc-gateway/releases/tag/v1.3.0).
- No code change, just to be explicit about recommended versions.
## [v3.2.11](https://github.com/coreos/etcd/releases/tag/v3.2.11) (2017-12-05)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.10...v3.2.11) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
- Fix racy `WriteStatus` call in grpc-go's server handler transport to prevent [TLS-enabled etcd server crash](https://github.com/coreos/etcd/issues/8904):
- Upgrade [`google.golang.org/grpc`](https://github.com/grpc/grpc-go/releases) `v1.7.3` to `v1.7.4`.
- Add [gRPC RPC failure warnings](https://github.com/coreos/etcd/pull/8939) to help debug such issues in the future.
- Remove `--listen-metrics-urls` flag from the monitoring document (not released in `v3.2.x`, planned for `v3.3.x`).
### Added
- Provide [more cert details](https://github.com/coreos/etcd/pull/8952/files) on TLS handshake failures.
## [v3.1.11](https://github.com/coreos/etcd/releases/tag/v3.1.11) (2017-11-28)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.10...v3.1.11) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Fixed
- [#8411](https://github.com/coreos/etcd/issues/8411),[#8806](https://github.com/coreos/etcd/pull/8806) mvcc: fix watch restore from snapshot
- [#8009](https://github.com/coreos/etcd/issues/8009),[#8902](https://github.com/coreos/etcd/pull/8902) backport coreos/bbolt v1.3.1-coreos.5
## [v3.2.10](https://github.com/coreos/etcd/releases/tag/v3.2.10) (2017-11-16)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.9...v3.2.10) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
- Replace backend key-value database `boltdb/bolt` with [`coreos/bbolt`](https://github.com/coreos/bbolt/releases) to address [backend database size issue](https://github.com/coreos/etcd/issues/8009).
- Fix `clientv3` balancer to handle [network partitions](https://github.com/coreos/etcd/issues/8711):
- Upgrade [`google.golang.org/grpc`](https://github.com/grpc/grpc-go/releases) `v1.2.1` to `v1.7.3`.
- Upgrade [`github.com/grpc-ecosystem/grpc-gateway`](https://github.com/grpc-ecosystem/grpc-gateway/releases) `v1.2` to `v1.3`.
- Revert [discovery SRV auth `ServerName` with `*.{ROOT_DOMAIN}`](https://github.com/coreos/etcd/pull/8651) to support non-wildcard subject alternative names in the certs (see [issue #8445](https://github.com/coreos/etcd/issues/8445) for more contexts).
- For instance, `etcd --discovery-srv=etcd.local` will only authenticate peers/clients when the provided certs have root domain `etcd.local` (**not `*.etcd.local`**) as an entry in Subject Alternative Name (SAN) field.
## [v3.2.9](https://github.com/coreos/etcd/releases/tag/v3.2.9) (2017-10-06)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.8...v3.2.9).
See [code changes](https://github.com/coreos/etcd/compare/v3.2.8...v3.2.9) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed(Security)
- Compile with [Go 1.8.4](https://groups.google.com/d/msg/golang-nuts/sHfMg4gZNps/a-HDgDDDAAAJ)
- Update `golang.org/x/crypto/bcrypt` (See [golang/crypto@6c586e1](https://github.com/golang/crypto/commit/6c586e17d90a7d08bbbc4069984180dce3b04117) for more)
- Fix discovery SRV bootstrapping to [authenticate `ServerName` with `*.{ROOT_DOMAIN}`](https://github.com/coreos/etcd/pull/8651), in order to support sub-domain wildcard matching (see [issue #8445](https://github.com/coreos/etcd/issues/8445) for more contexts)
- For instance, `etcd --discovery-srv=etcd.local` will only authenticate peers/clients when the provided certs have root domain `*.etcd.local` as an entry in Subject Alternative Name (SAN) field
- Compile with [Go 1.8.4](https://groups.google.com/d/msg/golang-nuts/sHfMg4gZNps/a-HDgDDDAAAJ).
- Update `golang.org/x/crypto/bcrypt` (see [golang/crypto@6c586e1](https://github.com/golang/crypto/commit/6c586e17d90a7d08bbbc4069984180dce3b04117)).
- Fix discovery SRV bootstrapping to [authenticate `ServerName` with `*.{ROOT_DOMAIN}`](https://github.com/coreos/etcd/pull/8651), in order to support sub-domain wildcard matching (see [issue #8445](https://github.com/coreos/etcd/issues/8445) for more contexts).
- For instance, `etcd --discovery-srv=etcd.local` will only authenticate peers/clients when the provided certs have root domain `*.etcd.local` as an entry in Subject Alternative Name (SAN) field.
## [v3.2.8](https://github.com/coreos/etcd/releases/tag/v3.2.8) (2017-09-29)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.7...v3.2.8).
See [code changes](https://github.com/coreos/etcd/compare/v3.2.7...v3.2.8) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
- Fix v2 client failover to next endpoint on mutable operation
- Fix grpc-proxy to respect `KeysOnly` flag
- Fix v2 client failover to next endpoint on mutable operation.
- Fix grpc-proxy to respect `KeysOnly` flag.
## [v3.2.7](https://github.com/coreos/etcd/releases/tag/v3.2.7) (2017-09-01)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.6...v3.2.7).
See [code changes](https://github.com/coreos/etcd/compare/v3.2.6...v3.2.7) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
- Fix server-side auth so concurrent auth operations do not return old revision error
- Fix server-side auth so concurrent auth operations do not return old revision error.
- Fix concurrency/stm Put with serializable snapshot
- Use store revision from first fetch to resolve write conflicts instead of modified revision
- Use store revision from first fetch to resolve write conflicts instead of modified revision.
## [v3.2.6](https://github.com/coreos/etcd/releases/tag/v3.2.6) (2017-08-21)
@@ -51,34 +310,34 @@ See [code changes](https://github.com/coreos/etcd/compare/v3.2.5...v3.2.6).
### Fixed
- Fix watch restore from snapshot
- Fix `etcd_debugging_mvcc_keys_total` inconsistency
- Fix multiple URLs for `--listen-peer-urls` flag
- Add `enable-pprof` to etcd configuration file format
- Fix watch restore from snapshot.
- Fix `etcd_debugging_mvcc_keys_total` inconsistency.
- Fix multiple URLs for `--listen-peer-urls` flag.
- Add `--enable-pprof` flag to etcd configuration file format.
## [v3.2.5](https://github.com/coreos/etcd/releases/tag/v3.2.5) (2017-08-04)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.4...v3.2.5).
See [code changes](https://github.com/coreos/etcd/compare/v3.2.4...v3.2.5) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Changed
- Use reverse lookup to match wildcard DNS SAN
- Return non-zero exit code on unhealthy `endpoint health`
- Use reverse lookup to match wildcard DNS SAN.
- Return non-zero exit code on unhealthy `endpoint health`.
### Fixed
- Fix unreachable /metrics endpoint when `--enable-v2=false`
- Fix grpc-proxy to respect `PrevKv` flag
- Fix unreachable /metrics endpoint when `--enable-v2=false`.
- Fix grpc-proxy to respect `PrevKv` flag.
### Added
- Add container registry `gcr.io/etcd-development/etcd`
- Add container registry `gcr.io/etcd-development/etcd`.
## [v3.2.4](https://github.com/coreos/etcd/releases/tag/v3.2.4) (2017-07-19)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.3...v3.2.4).
See [code changes](https://github.com/coreos/etcd/compare/v3.2.3...v3.2.4) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
@@ -88,7 +347,7 @@ See [code changes](https://github.com/coreos/etcd/compare/v3.2.3...v3.2.4).
## [v3.2.3](https://github.com/coreos/etcd/releases/tag/v3.2.3) (2017-07-14)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.2...v3.2.3).
See [code changes](https://github.com/coreos/etcd/compare/v3.2.2...v3.2.3) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
@@ -102,7 +361,7 @@ See [code changes](https://github.com/coreos/etcd/compare/v3.2.2...v3.2.3).
## [v3.1.10](https://github.com/coreos/etcd/releases/tag/v3.1.10) (2017-07-14)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.9...v3.1.10).
See [code changes](https://github.com/coreos/etcd/compare/v3.1.9...v3.1.10) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Changed
@@ -110,192 +369,191 @@ See [code changes](https://github.com/coreos/etcd/compare/v3.1.9...v3.1.10).
### Added
- Tag docker images with minor versions
- e.g. `docker pull quay.io/coreos/etcd:v3.1` to fetch latest v3.1 versions
- Tag docker images with minor versions.
- e.g. `docker pull quay.io/coreos/etcd:v3.1` to fetch latest v3.1 versions.
## [v3.2.2](https://github.com/coreos/etcd/releases/tag/v3.2.2) (2017-07-07)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.1...v3.2.2).
See [code changes](https://github.com/coreos/etcd/compare/v3.2.1...v3.2.2) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Improved
- Rate-limit lease revoke on expiration
- Extend leases on promote to avoid queueing effect on lease expiration
- Rate-limit lease revoke on expiration.
- Extend leases on promote to avoid queueing effect on lease expiration.
### Fixed
- Use user-provided listen address to connect to gRPC gateway
- `net.Listener` rewrites IPv4 0.0.0.0 to IPv6 [::], breaking IPv6 disabled hosts
- Only v3.2.0, v3.2.1 are affected
- Accept connection with matched IP SAN but no DNS match
- Don't check DNS entries in certs if there's a matching IP
- Fix 'tools/benchmark' watch command
- Use user-provided listen address to connect to gRPC gateway:
- `net.Listener` rewrites IPv4 0.0.0.0 to IPv6 [::], breaking IPv6 disabled hosts.
- Only v3.2.0, v3.2.1 are affected.
- Accept connection with matched IP SAN but no DNS match.
- Don't check DNS entries in certs if there's a matching IP.
- Fix 'tools/benchmark' watch command.
## [v3.2.1](https://github.com/coreos/etcd/releases/tag/v3.2.1) (2017-06-23)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.0...v3.2.1).
See [code changes](https://github.com/coreos/etcd/compare/v3.2.0...v3.2.1) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
- Fix backend database in-memory index corruption issue on restore (only 3.2.0 is affected)
- Fix gRPC gateway Txn marshaling issue
- Fix backend database size debugging metrics
- Fix backend database in-memory index corruption issue on restore (only 3.2.0 is affected).
- Fix gRPC gateway Txn marshaling issue.
- Fix backend database size debugging metrics.
## [v3.2.0](https://github.com/coreos/etcd/releases/tag/v3.2.0) (2017-06-09)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.10...v3.2.0).
See [upgrade 3.2](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md).
See [code changes](https://github.com/coreos/etcd/compare/v3.1.0...v3.2.0) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Improved
- Improve backend read concurrency
- Improve backend read concurrency.
### Added
- Embedded etcd
- `Etcd.Peers` field is now `[]*peerListener`
- `Etcd.Peers` field is now `[]*peerListener`.
- RPCs
- Add Election, Lock service
- Add Election, Lock service.
- Native client etcdserver/api/v3client
- client "embedded" in the server
- client "embedded" in the server.
- gRPC proxy
- Proxy endpoint discovery
- Namespaces
- Coalesce lease requests
- Proxy endpoint discovery.
- Namespaces.
- Coalesce lease requests.
- v3 client
- STM prefetching
- Add namespace feature
- Add `ErrOldCluster` with server version checking
- Translate WithPrefix() into WithFromKey() for empty key
- STM prefetching.
- Add namespace feature.
- Add `ErrOldCluster` with server version checking.
- Translate `WithPrefix()` into `WithFromKey()` for empty key.
- v3 etcdctl
- Add `check perf` command
- Add `--from-key` flag to role grant-permission command
- `lock` command takes an optional command to execute
- Add `check perf` command.
- Add `--from-key` flag to role grant-permission command.
- `lock` command takes an optional command to execute.
- etcd flags
- Add `--enable-v2` flag to configure v2 backend (enabled by default)
- Add `--auth-token` flag
- Add `--enable-v2` flag to configure v2 backend (enabled by default).
- Add `--auth-token` flag.
- `etcd gateway`
- Support DNS SRV priority
- Support DNS SRV priority.
- Auth
- Support Watch API
- JWT tokens
- Support Watch API.
- JWT tokens.
- Logging, monitoring
- Server warns large snapshot operations
- Add `etcd_debugging_server_lease_expired_total` metrics
- Server warns large snapshot operations.
- Add `etcd_debugging_server_lease_expired_total` metrics.
- Security
- Deny incoming peer certs with wrong IP SAN
- Resolve TLS DNSNames when SAN checking
- Reload TLS certificates on every client connection
- Deny incoming peer certs with wrong IP SAN.
- Resolve TLS `DNSNames` when SAN checking.
- Reload TLS certificates on every client connection.
- Release
- Annotate acbuild with supports-systemd-notify
- Add `nsswitch.conf` to Docker container image
- Add ppc64le, arm64(experimental) builds
- Compile with Go 1.8.3
- Annotate acbuild with supports-systemd-notify.
- Add `nsswitch.conf` to Docker container image.
- Add ppc64le, arm64(experimental) builds.
- Compile with `Go 1.8.3`.
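For illustration only: a hedged sketch of the v3 client namespace feature listed above; the prefix `my-app/` and the helper name are assumptions.

```
import (
	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/namespace"
)

// applyNamespace wraps a client so that all KV, Watch, and Lease operations
// are transparently prefixed with "my-app/".
func applyNamespace(cli *clientv3.Client) {
	cli.KV = namespace.NewKV(cli.KV, "my-app/")
	cli.Watcher = namespace.NewWatcher(cli.Watcher, "my-app/")
	cli.Lease = namespace.NewLease(cli.Lease, "my-app/")
}
```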
### Changed
- v3 client
- `LeaseTimeToLive` returns TTL=-1 resp on lease not found
- `clientv3.NewFromConfigFile` is moved to `clientv3/yaml.NewConfig`
- concurrency package's elections updated to match RPC interfaces
- let client dial endpoints not in the balancer
- `LeaseTimeToLive` returns TTL=-1 resp on lease not found.
- `clientv3.NewFromConfigFile` is moved to `clientv3/yaml.NewConfig`.
- concurrency package's elections updated to match RPC interfaces.
- let client dial endpoints not in the balancer.
- Dependencies
- Update gRPC to v1.2.1
- Update grpc-gateway to v1.2.0
- Update [`google.golang.org/grpc`](https://github.com/grpc/grpc-go/releases) to `v1.2.1`.
- Update [`github.com/grpc-ecosystem/grpc-gateway`](https://github.com/grpc-ecosystem/grpc-gateway/releases) to `v1.2.0`.
### Fixed
- Allow v2 snapshot over 512MB
- Allow v2 snapshot over 512MB.
## [v3.1.9](https://github.com/coreos/etcd/releases/tag/v3.1.9) (2017-06-09)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.8...v3.1.9).
See [code changes](https://github.com/coreos/etcd/compare/v3.1.8...v3.1.9) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Fixed
- Allow v2 snapshot over 512MB
- Allow v2 snapshot over 512MB.
## [v3.1.8](https://github.com/coreos/etcd/releases/tag/v3.1.8) (2017-05-19)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.7...v3.1.8).
See [code changes](https://github.com/coreos/etcd/compare/v3.1.7...v3.1.8) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
## [v3.1.7](https://github.com/coreos/etcd/releases/tag/v3.1.7) (2017-04-28)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.6...v3.1.7).
See [code changes](https://github.com/coreos/etcd/compare/v3.1.6...v3.1.7) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
## [v3.1.6](https://github.com/coreos/etcd/releases/tag/v3.1.6) (2017-04-19)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.5...v3.1.6).
See [code changes](https://github.com/coreos/etcd/compare/v3.1.5...v3.1.6) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Changed
- Remove auth check in Status API
- Remove auth check in Status API.
### Fixed
- Fill in Auth API response header
- Fill in Auth API response header.
## [v3.1.5](https://github.com/coreos/etcd/releases/tag/v3.1.5) (2017-03-27)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.4...v3.1.5).
See [code changes](https://github.com/coreos/etcd/compare/v3.1.4...v3.1.5) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Added
- Add `/etc/nsswitch.conf` file to alpine-based Docker image
- Add `/etc/nsswitch.conf` file to alpine-based Docker image.
### Fixed
- Fix raft memory leak issue
- Fix Windows file path issues
- Fix raft memory leak issue.
- Fix Windows file path issues.
## [v3.1.4](https://github.com/coreos/etcd/releases/tag/v3.1.4) (2017-03-22)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.3...v3.1.4).
See [code changes](https://github.com/coreos/etcd/compare/v3.1.3...v3.1.4) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
## [v3.1.3](https://github.com/coreos/etcd/releases/tag/v3.1.3) (2017-03-10)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.2...v3.1.3).
See [code changes](https://github.com/coreos/etcd/compare/v3.1.2...v3.1.3) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Changed
- Use machine default host when advertise URLs are default values(localhost:2379,2380) AND if listen URL is 0.0.0.0
- Use machine default host when advertise URLs are default values (`localhost:2379,2380`) AND if listen URL is `0.0.0.0`.
### Fixed
- Fix `etcd gateway` schema handling in DNS discovery
- Fix sd_notify behaviors in gateway, grpc-proxy
- Fix `etcd gateway` schema handling in DNS discovery.
- Fix sd_notify behaviors in `gateway`, `grpc-proxy`.
## [v3.1.2](https://github.com/coreos/etcd/releases/tag/v3.1.2) (2017-02-24)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.1...v3.1.2).
See [code changes](https://github.com/coreos/etcd/compare/v3.1.1...v3.1.2) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Changed
- Use IPv4 default host, by default (when IPv4 and IPv6 are available)
- Use IPv4 default host, by default (when IPv4 and IPv6 are available).
### Fixed
- Fix `etcd gateway` with multiple endpoints
- Fix `etcd gateway` with multiple endpoints.
## [v3.1.1](https://github.com/coreos/etcd/releases/tag/v3.1.1) (2017-02-17)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.0...v3.1.1).
See [code changes](https://github.com/coreos/etcd/compare/v3.1.0...v3.1.1) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Changed
- Compile with Go 1.7.5
- Compile with `Go 1.7.5`.
## [v2.3.8](https://github.com/coreos/etcd/releases/tag/v2.3.8) (2017-02-17)
@@ -304,40 +562,39 @@ See [code changes](https://github.com/coreos/etcd/compare/v2.3.7...v2.3.8).
### Changed
- Compile with Go 1.7.5
- Compile with `Go 1.7.5`.
## [v3.1.0](https://github.com/coreos/etcd/releases/tag/v3.1.0) (2017-01-20)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.16...v3.1.0).
See [upgrade 3.1](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.0...v3.1.0) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Improved
- Faster linearizable reads (implements Raft read-index)
- v3 authentication API is now stable
- Faster linearizable reads (implements Raft read-index).
- v3 authentication API is now stable.
### Added
- Automatic leadership transfer when leader steps down
- Automatic leadership transfer when leader steps down.
- etcd flags
- `--strict-reconfig-check` flag is set by default
- Add `--log-output` flag
- Add `--metrics` flag
- `--strict-reconfig-check` flag is set by default.
- Add `--log-output` flag.
- Add `--metrics` flag.
- v3 client
- Add SetEndpoints method; update endpoints at runtime
- Add Sync method; auto-update endpoints at runtime
- Add Lease TimeToLive API; fetch lease information
- replace Config.Logger field with global logger
- Get API responses are sorted in ascending order by default
- Add `SetEndpoints` method; update endpoints at runtime.
- Add `Sync` method; auto-update endpoints at runtime.
- Add `Lease TimeToLive` API; fetch lease information.
- Replace `Config.Logger` field with global logger.
- Get API responses are sorted in ascending order by default.
- v3 etcdctl
- Add lease timetolive command
- Add `--print-value-only` flag to get command
- Add `--dest-prefix` flag to make-mirror command
- command get responses are sorted in ascending order by default
- `recipes` now conform to sessions defined in clientv3/concurrency
- ACI has symlinks to `/usr/local/bin/etcd*`
- experimental gRPC proxy feature
- Add `lease timetolive` command.
- Add `--print-value-only` flag to get command.
- Add `--dest-prefix` flag to make-mirror command.
- `get` command responses are sorted in ascending order by default.
- `recipes` now conform to sessions defined in `clientv3/concurrency`.
- ACI has symlinks to `/usr/local/bin/etcd*`.
- Experimental gRPC proxy feature.
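For illustration only: a hedged sketch of the v3.1 client additions listed above (`SetEndpoints`, `Sync`, lease `TimeToLive`); the endpoint names, lease ID, and helper name are assumptions.

```
import (
	"context"
	"fmt"

	"github.com/coreos/etcd/clientv3"
)

// showClientAdditions exercises the new v3.1 client calls.
func showClientAdditions(cli *clientv3.Client, leaseID clientv3.LeaseID) error {
	// Update endpoints at runtime.
	cli.SetEndpoints("infra1:2379", "infra2:2379", "infra3:2379")

	// Refresh endpoints from the cluster membership.
	if err := cli.Sync(context.Background()); err != nil {
		return err
	}

	// Fetch lease information.
	resp, err := cli.TimeToLive(context.Background(), leaseID)
	if err != nil {
		return err
	}
	fmt.Println("remaining TTL:", resp.TTL)
	return nil
}
```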
### Changed
@@ -346,145 +603,144 @@ See [upgrade 3.1](https://github.com/coreos/etcd/blob/master/Documentation/upgra
- `etcd_grpc_requests_failed_total`
- `etcd_grpc_active_streams`
- `etcd_grpc_unary_requests_duration_seconds`
- etcd uses default route IP if advertise URL is not given
- Cluster rejects removing members if quorum will be lost
- SRV records (e.g., infra1.example.com) must match the discovery domain (i.e., example.com) if no custom certificate authority is given
- TLSConfig ServerName is ignored with user-provided certificates for backwards compatibility; to be deprecated in 3.2
- For example, `etcd --discovery-srv=example.com` will only authenticate peers/clients when the provided certs have root domain `example.com` as an entry in Subject Alternative Name (SAN) field
- Discovery now has upper limit for waiting on retries
- Warn on binding listeners through domain names; to be deprecated in 3.2
- etcd uses default route IP if advertise URL is not given.
- Cluster rejects removing members if quorum will be lost.
- SRV records (e.g., infra1.example.com) must match the discovery domain (i.e., example.com) if no custom certificate authority is given.
- `TLSConfig.ServerName` is ignored with user-provided certificates for backwards compatibility; to be deprecated.
- For example, `etcd --discovery-srv=example.com` will only authenticate peers/clients when the provided certs have root domain `example.com` as an entry in Subject Alternative Name (SAN) field.
- Discovery now has upper limit for waiting on retries.
- Warn on binding listeners through domain names; to be deprecated.
## [v3.0.16](https://github.com/coreos/etcd/releases/tag/v3.0.16) (2016-11-13)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.15...v3.0.16).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.15...v3.0.16) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
## [v3.0.15](https://github.com/coreos/etcd/releases/tag/v3.0.15) (2016-11-11)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.14...v3.0.15).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.14...v3.0.15) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Fixed
- Fix cancel watch request with wrong range end
- Fix cancel watch request with wrong range end.
## [v3.0.14](https://github.com/coreos/etcd/releases/tag/v3.0.14) (2016-11-04)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.13...v3.0.14).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.13...v3.0.14) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Added
- v3 etcdctl migrate command now supports `--no-ttl` flag to discard keys on transform
- v3 `etcdctl migrate` command now supports `--no-ttl` flag to discard keys on transform.
## [v3.0.13](https://github.com/coreos/etcd/releases/tag/v3.0.13) (2016-10-24)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.12...v3.0.13).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.12...v3.0.13) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
## [v3.0.12](https://github.com/coreos/etcd/releases/tag/v3.0.12) (2016-10-07)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.11...v3.0.12).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.11...v3.0.12) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
## [v3.0.11](https://github.com/coreos/etcd/releases/tag/v3.0.11) (2016-10-07)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.10...v3.0.11).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.10...v3.0.11) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Added
- Server returns previous key-value (optional)
- `clientv3.WithPrevKV` option
- v3 etcdctl put,watch,del `--prev-kv` flag
- v3 etcdctl `put,watch,del --prev-kv` flag
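For illustration only: a hedged sketch of the `clientv3.WithPrevKV` option above; the key, value, and helper name are assumptions.

```
import (
	"context"
	"fmt"

	"github.com/coreos/etcd/clientv3"
)

// putWithPrevKV writes a key and prints the previous value, if any.
func putWithPrevKV(ctx context.Context, cli *clientv3.Client) error {
	resp, err := cli.Put(ctx, "greeting", "hello", clientv3.WithPrevKV())
	if err != nil {
		return err
	}
	if resp.PrevKv != nil { // returned only if the key already existed
		fmt.Printf("previous value: %s\n", resp.PrevKv.Value)
	}
	return nil
}
```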
## [v3.0.10](https://github.com/coreos/etcd/releases/tag/v3.0.10) (2016-09-23)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.9...v3.0.10).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.9...v3.0.10) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
## [v3.0.9](https://github.com/coreos/etcd/releases/tag/v3.0.9) (2016-09-15)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.8...v3.0.9).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.8...v3.0.9) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Added
- Warn on domain names on listen URLs (v3.2 will reject domain names)
- Warn on domain names on listen URLs (v3.2 will reject domain names).
## [v3.0.8](https://github.com/coreos/etcd/releases/tag/v3.0.8) (2016-09-09)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.7...v3.0.8).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.7...v3.0.8) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Changed
- Allow only IP addresses in listen URLs (domain names are rejected)
- Allow only IP addresses in listen URLs (domain names are rejected).
## [v3.0.7](https://github.com/coreos/etcd/releases/tag/v3.0.7) (2016-08-31)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.6...v3.0.7).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.6...v3.0.7) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Changed
- SRV records only allow A records (RFC 2052)
- SRV records only allow A records (RFC 2052).
## [v3.0.6](https://github.com/coreos/etcd/releases/tag/v3.0.6) (2016-08-19)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.5...v3.0.6).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.5...v3.0.6) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
## [v3.0.5](https://github.com/coreos/etcd/releases/tag/v3.0.5) (2016-08-19)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.4...v3.0.5).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.4...v3.0.5) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Changed
- SRV records (e.g., infra1.example.com) must match the discovery domain (i.e., example.com) if no custom certificate authority is given
- SRV records (e.g., infra1.example.com) must match the discovery domain (i.e., example.com) if no custom certificate authority is given.
## [v3.0.4](https://github.com/coreos/etcd/releases/tag/v3.0.4) (2016-07-27)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.3...v3.0.4).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.3...v3.0.4) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Changed
- v2 auth can now use common name from TLS certificate when `--client-cert-auth` is enabled
- v2 auth can now use common name from TLS certificate when `--client-cert-auth` is enabled.
### Added
- v2 etcdctl ls command now supports `--output=json`
- Add /var/lib/etcd directory to etcd official Docker image
- v2 `etcdctl ls` command now supports `--output=json`.
- Add /var/lib/etcd directory to etcd official Docker image.
## [v3.0.3](https://github.com/coreos/etcd/releases/tag/v3.0.3) (2016-07-15)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.2...v3.0.3).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.2...v3.0.3) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Changed
- Revert Dockerfile to use `CMD`, instead of `ENTRYPOINT`, to support etcdctl run
- Docker commands for v3.0.2 won't work without specifying executable binary paths
- v3 etcdctl default endpoints are now 127.0.0.1:2379
- Revert Dockerfile to use `CMD`, instead of `ENTRYPOINT`, to support `etcdctl` run.
- Docker commands for v3.0.2 won't work without specifying executable binary paths.
- v3 etcdctl default endpoints are now `127.0.0.1:2379`.
## [v3.0.2](https://github.com/coreos/etcd/releases/tag/v3.0.2) (2016-07-08)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.1...v3.0.2).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.1...v3.0.2) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Changed
- Dockerfile uses `ENTRYPOINT`, instead of `CMD`, to run etcd without binary path specified
- Dockerfile uses `ENTRYPOINT`, instead of `CMD`, to run etcd without binary path specified.
## [v3.0.1](https://github.com/coreos/etcd/releases/tag/v3.0.1) (2016-07-01)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.0...v3.0.1).
See [code changes](https://github.com/coreos/etcd/compare/v3.0.0...v3.0.1) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
## [v3.0.0](https://github.com/coreos/etcd/releases/tag/v3.0.0) (2016-06-30)
See [code changes](https://github.com/coreos/etcd/compare/v2.3.8...v3.0.0).
See [upgrade 3.0](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md).
See [code changes](https://github.com/coreos/etcd/compare/v2.3.0...v3.0.0) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.

View File

@ -0,0 +1,53 @@
FROM ubuntu:17.10
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
RUN apt-get -y update \
&& apt-get -y install \
build-essential \
gcc \
apt-utils \
pkg-config \
software-properties-common \
apt-transport-https \
libssl-dev \
sudo \
bash \
curl \
wget \
tar \
git \
&& apt-get -y update \
&& apt-get -y upgrade \
&& apt-get -y autoremove \
&& apt-get -y autoclean
ENV GOROOT /usr/local/go
ENV GOPATH /go
ENV PATH ${GOPATH}/bin:${GOROOT}/bin:${PATH}
ENV GO_VERSION REPLACE_ME_GO_VERSION
ENV GO_DOWNLOAD_URL https://storage.googleapis.com/golang
RUN rm -rf ${GOROOT} \
&& curl -s ${GO_DOWNLOAD_URL}/go${GO_VERSION}.linux-amd64.tar.gz | tar -v -C /usr/local/ -xz \
&& mkdir -p ${GOPATH}/src ${GOPATH}/bin \
&& go version
RUN mkdir -p ${GOPATH}/src/github.com/coreos/etcd
ADD . ${GOPATH}/src/github.com/coreos/etcd
RUN go get -v github.com/coreos/gofail \
&& pushd ${GOPATH}/src/github.com/coreos/etcd \
&& GO_BUILD_FLAGS="-v" ./build \
&& cp ./bin/etcd /etcd \
&& cp ./bin/etcdctl /etcdctl \
&& GO_BUILD_FLAGS="-v" FAILPOINTS=1 ./build \
&& cp ./bin/etcd /etcd-failpoints \
&& ./tools/functional-tester/build \
&& cp ./bin/etcd-agent /etcd-agent \
&& cp ./bin/etcd-tester /etcd-tester \
&& cp ./bin/etcd-runner /etcd-runner \
&& go build -v -o /benchmark ./cmd/tools/benchmark \
&& go build -v -o /etcd-test-proxy ./cmd/tools/etcd-test-proxy \
&& popd \
&& rm -rf ${GOPATH}/src/github.com/coreos/etcd

View File

@ -49,7 +49,7 @@ RUN go get -v -u -tags spell github.com/chzchzchz/goword \
&& go get -v -u honnef.co/go/tools/cmd/gosimple \
&& go get -v -u honnef.co/go/tools/cmd/unused \
&& go get -v -u honnef.co/go/tools/cmd/staticcheck \
&& go get -v -u github.com/wadey/gocovmerge \
&& go get -v -u github.com/gyuho/gocovmerge \
&& go get -v -u github.com/gordonklaus/ineffassign \
&& go get -v -u github.com/alexkohler/nakedret \
&& /tmp/install-marker.sh amd64 \

View File

@ -15,12 +15,12 @@ The `master` branch is our development branch. All new features land here first.
To try new and experimental features, pull `master` and play with it. Note that `master` may not be stable because new features may introduce bugs.
Before the release of the next stable version, feature PRs will be frozen. We will focus on the testing, bug-fix and documentation for one to two weeks.
Before the release of the next stable version, feature PRs will be frozen. A [release manager](./dev-internal/release.md#release-management) will be assigned to each major/minor version and will lead the etcd community in the testing, bug-fixing and documentation of the release for one to two weeks.
### Stable branches
All branches with prefix `release-` are considered _stable_ branches.
After every minor release (http://semver.org/), we will have a new stable branch for that release. We will keep fixing the backwards-compatible bugs for the latest two stable releases. A _patch_ release to each supported release branch, incorporating any bug fixes, will be once every two weeks, given any patches.
After every minor release (http://semver.org/), we will have a new stable branch for that release, managed by a [patch release manager](./dev-internal/release.md#release-management). We will keep fixing the backwards-compatible bugs for the latest two stable releases. A _patch_ release to each supported release branch, incorporating any bug fixes, will be made roughly once every two weeks, given any patches.
[master]: https://github.com/coreos/etcd/tree/master

View File

@ -480,7 +480,7 @@ Empty field.
| Field | Description | Type |
| ----- | ----------- | ---- |
| TTL | TTL is the advisory time-to-live in seconds. | int64 |
| TTL | TTL is the advisory time-to-live in seconds. Expired lease will return -1. | int64 |
| ID | ID is the requested ID for the lease. If ID is set to 0, the lessor chooses an ID. | int64 |

View File

@ -1639,7 +1639,7 @@
"format": "int64"
},
"TTL": {
"description": "TTL is the advisory time-to-live in seconds.",
"description": "TTL is the advisory time-to-live in seconds. Expired lease will return -1.",
"type": "string",
"format": "int64"
}

View File

@ -6,5 +6,4 @@ etcd is designed to handle small key value pairs typical for metadata. Larger re
## Storage size limit
The default storage size limit is 2GB, configurable with `--quota-backend-bytes` flag; supports up to 8GB.
The default storage size limit is 2GB, configurable with the `--quota-backend-bytes` flag. 8GB is a suggested maximum size for normal environments and etcd warns at startup if the configured value exceeds it.
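As a rough sketch for embedded users (assuming the `embed` package's `QuotaBackendBytes` field, which mirrors the flag; the 8GB value is only an example), the quota could be raised like this:
```go
package main

import (
	"log"

	"github.com/coreos/etcd/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "default.etcd"
	// Raise the backend quota from the 2GB default to 8GB (illustrative value only).
	cfg.QuotaBackendBytes = 8 * 1024 * 1024 * 1024

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()
	<-e.Server.ReadyNotify()
	log.Println("embedded etcd ready with the larger backend quota")
}
```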

View File

@ -4,6 +4,17 @@ The guide talks about how to release a new version of etcd.
The procedure includes some manual steps for sanity checking, but it can probably be further scripted. Please keep this document up-to-date if making changes to the release process.
## Release management
etcd community members are assigned to manage the release of each etcd major/minor version, as well as to manage patches
to each stable release branch. The managers are responsible for communicating the timelines and status of each
release and for ensuring the stability of the release branch.
| Releases | Manager |
| -------- | ------- |
| 3.1 patch (post 3.1.0) | Joe Betz [@jpbetz](https://github.com/jpbetz) |
| 3.2 patch (post 3.2.0) | Gyuho Lee [@gyuho](https://github.com/gyuho) |
## Prepare release
Set desired version as environment variable for following steps. Here is an example to release 2.3.0:

View File

@ -22,7 +22,7 @@ A member's advertised peer URLs come from `--initial-advertise-peer-urls` on ini
### System requirements
Since etcd writes data to disk, SSD is highly recommended. To prevent performance degradation or unintentionally overloading the key-value store, etcd enforces a 2GB default storage size quota, configurable up to 8GB. To avoid swapping or running out of memory, the machine should have at least as much RAM to cover the quota. At CoreOS, an etcd cluster is usually deployed on dedicated CoreOS Container Linux machines with dual-core processors, 2GB of RAM, and 80GB of SSD *at the very least*. **Note that performance is intrinsically workload dependent; please test before production deployment**. See [hardware][hardware-setup] for more recommendations.
Since etcd writes data to disk, SSD is highly recommended. To prevent performance degradation or unintentionally overloading the key-value store, etcd enforces a configurable storage size quota set to 2GB by default. To avoid swapping or running out of memory, the machine should have at least as much RAM to cover the quota. 8GB is a suggested maximum size for normal environments and etcd warns at startup if the configured value exceeds it. At CoreOS, an etcd cluster is usually deployed on dedicated CoreOS Container Linux machines with dual-core processors, 2GB of RAM, and 80GB of SSD *at the very least*. **Note that performance is intrinsically workload dependent; please test before production deployment**. See [hardware][hardware-setup] for more recommendations.
Most stable production environment is Linux operating system with amd64 architecture; see [supported platform][supported-platform] for more.
@ -102,6 +102,12 @@ To recover from the low space quota alarm:
2. [Defragment][maintenance-defragment] every etcd endpoint.
3. [Disarm][maintenance-disarm] the alarm.
### What does the etcd warning "etcdserver/api/v3rpc: transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:2379->127.0.0.1:43020: read: connection reset by peer" mean?
This is a gRPC-side warning logged when a server receives a TCP RST flag because client-side streams were closed prematurely. For example, a client closes its connection while the gRPC server has not yet processed all HTTP/2 frames in the TCP queue. Some data may have been lost on the server side, but that is okay as long as the client connection has already been closed.
Only [old versions of gRPC](https://github.com/grpc/grpc-go/issues/1362) log this. etcd [>=v3.2.13 by default logs this at DEBUG level](https://github.com/coreos/etcd/pull/9080), so it is only visible when the `--debug` flag is enabled.
## Performance
### How should I benchmark etcd?

View File

@ -38,6 +38,11 @@
- [maciej/etcd-client](https://github.com/maciej/etcd-client) - Supports v2. Akka HTTP-based fully async client
- [eiipii/etcdhttpclient](https://bitbucket.org/eiipii/etcdhttpclient) - Supports v2. Async HTTP client based on Netty and Scala Futures.
**Perl libraries**
- [hexfusion/perl-net-etcd](https://github.com/hexfusion/perl-net-etcd) - Supports v3 grpc gateway HTTP API
- [robn/p5-etcd](https://github.com/robn/p5-etcd) - Supports v2
**Python libraries**
- [kragniz/python-etcd3](https://github.com/kragniz/python-etcd3) - Client for v3
@ -147,7 +152,6 @@
- [mattn/etcdenv](https://github.com/mattn/etcdenv) - "env" shebang with etcd integration
- [kelseyhightower/confd](https://github.com/kelseyhightower/confd) - Manage local app config files using templates and data from etcd
- [configdb](https://git.autistici.org/ai/configdb/tree/master) - A REST relational abstraction on top of arbitrary database backends, aimed at storing configs and inventories.
- [fleet](https://github.com/coreos/fleet) - Distributed init system
- [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) - Container cluster manager introduced by Google.
- [mailgun/vulcand](https://github.com/mailgun/vulcand) - HTTP proxy that uses etcd as a configuration backend.
- [duedil-ltd/discodns](https://github.com/duedil-ltd/discodns) - Simple DNS nameserver using etcd as a database for names and records.

View File

@ -45,9 +45,9 @@ message ResponseHeader {
* Revision - the revision of the key-value store when generating the response.
* Raft_Term - the Raft term of the member when generating the response.
An application may read the Cluster_ID (Member_ID) field to ensure it is communicating with the intended cluster (member).
An application may read the `Cluster_ID` or `Member_ID` field to ensure it is communicating with the intended cluster (member).
Applications can use the `Revision` to know the latest revision of the key-value store. This is especially useful when applications specify a historical revision to make time `travel query` and wish to know the latest revision at the time of the request.
Applications can use the `Revision` field to know the latest revision of the key-value store. This is especially useful when applications specify a historical revision to make a `time travel query` and wish to know the latest revision at the time of the request.
Applications can use `Raft_Term` to detect when the cluster completes a new leader election.
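For illustration, here is a brief clientv3 sketch (the key and expected cluster ID are hypothetical) of reading these header fields from a response:
```go
import (
	"context"
	"fmt"

	"github.com/coreos/etcd/clientv3"
)

// checkHeader sketches how an application might inspect the response header.
func checkHeader(ctx context.Context, cli *clientv3.Client, expectedClusterID uint64) error {
	resp, err := cli.Get(ctx, "foo")
	if err != nil {
		return err
	}
	h := resp.Header
	if h.ClusterId != expectedClusterID {
		return fmt.Errorf("talking to unexpected cluster %x", h.ClusterId)
	}
	// Revision is the latest store revision at the time of this request;
	// RaftTerm changes whenever a new leader election completes.
	fmt.Printf("revision=%d raft_term=%d member=%x\n", h.Revision, h.RaftTerm, h.MemberId)
	return nil
}
```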
@ -84,9 +84,9 @@ In addition to just the key and value, etcd attaches additional revision metadat
#### Revisions
etcd maintains a 64-bit cluster-wide counter, the store revision, that is incremented each time the key space is modified. The revision serves as a global logical clock, sequentially ordering all updates to the store. The change represented by a new revisions is incremental; the data associated with a revision is the data that changed the store. Internally, a new revision means writing the changes to the backend's B+tree, keyed by the incremented revision.
etcd maintains a 64-bit cluster-wide counter, the store revision, that is incremented each time the key space is modified. The revision serves as a global logical clock, sequentially ordering all updates to the store. The change represented by a new revision is incremental; the data associated with a revision is the data that changed the store. Internally, a new revision means writing the changes to the backend's B+tree, keyed by the incremented revision.
Revisions become more valuable when taking considering etcd3's [multi-version concurrency control][mvcc] backend. The MVCC model means that the key-value store can be viewed from past revisions since historical key revisions are retained. The retention policy for this history can be configured by cluster administrators for fine-grained storage management; usually etcd3 discards old revisions of keys on a timer. A typical etcd3 cluster retains superseded key data for hours. This also buys reliable handling for long client disconnection, not just transient network disruptions: watchers simply resume from the last observed historical revision. Similarly, to read from the store at a particular point-in-time, read requests can be tagged with a revision to return keys from a view of the key space at the point in time that revision was committed.
Revisions become more valuable when considering etcd3's [multi-version concurrency control][mvcc] backend. The MVCC model means that the key-value store can be viewed from past revisions since historical key revisions are retained. The retention policy for this history can be configured by cluster administrators for fine-grained storage management; usually etcd3 discards old revisions of keys on a timer. A typical etcd3 cluster retains superseded key data for hours. This also provides reliable handling for long client disconnection, not just transient network disruptions: watchers simply resume from the last observed historical revision. Similarly, to read from the store at a particular point-in-time, read requests can be tagged with a revision to return keys from a view of the key space at the point-in-time that revision was committed.
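A short clientv3 sketch (the key `foo` is hypothetical) of such a time-travel read against an older, still-retained revision:
```go
import (
	"context"
	"fmt"

	"github.com/coreos/etcd/clientv3"
)

// readAtRevision reads a key at the latest revision, then again at an earlier revision.
func readAtRevision(ctx context.Context, cli *clientv3.Client) error {
	cur, err := cli.Get(ctx, "foo")
	if err != nil {
		return err
	}
	latest := cur.Header.Revision

	// Read "foo" as of an older revision; this returns ErrCompacted if the
	// revision has already been discarded by compaction.
	old, err := cli.Get(ctx, "foo", clientv3.WithRev(latest-1))
	if err != nil {
		return err
	}
	fmt.Printf("latest revision=%d, keys at revision %d: %d\n", latest, latest-1, len(old.Kvs))
	return nil
}
```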
#### Key ranges
@ -94,7 +94,7 @@ The etcd3 data model indexes all keys over a flat binary key space. This differs
These intervals are often referred to as "ranges" in etcd3. Operations over ranges are more powerful than operations on directories. Like a hierarchical store, intervals support single key lookups via `[a, a+1)` (e.g., ['a', 'a\x00') looks up 'a') and directory lookups by encoding keys by directory depth. In addition to those operations, intervals can also encode prefixes; for example the interval `['a', 'b')` looks up all keys prefixed by the string 'a'.
By convention, ranges for a Request are denoted by the fields `key` and `range_end`. The `key` field is the first key of the range and should be non-empty. The `range_end` is the key following the last key of the range. If `range_end` is not given or empty, the range is defined to contain only the key argument. If `range_end` is `key` plus one (e.g., "aa"+1 == "ab", "a\xff"+1 == "b"), then the range represents all keys prefixed with key. If both `key` and `range_end` are '\0', then range represents all keys. If `range_end` is '\0', the range is all keys greater than or equal to the key argument.
By convention, ranges for a request are denoted by the fields `key` and `range_end`. The `key` field is the first key of the range and should be non-empty. The `range_end` is the key following the last key of the range. If `range_end` is not given or empty, the range is defined to contain only the key argument. If `range_end` is `key` plus one (e.g., "aa"+1 == "ab", "a\xff"+1 == "b"), then the range represents all keys prefixed with key. If both `key` and `range_end` are '\0', then range represents all keys. If `range_end` is '\0', the range is all keys greater than or equal to the key argument.
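For illustration (keys are hypothetical), the same conventions map onto clientv3 range options roughly as follows:
```go
import (
	"context"

	"github.com/coreos/etcd/clientv3"
)

// rangeExamples sketches how key/range_end intervals are expressed as clientv3 options.
func rangeExamples(ctx context.Context, cli *clientv3.Client) {
	cli.Get(ctx, "a")                          // single key: ["a", "a"+1)
	cli.Get(ctx, "a", clientv3.WithRange("b")) // interval ["a", "b")
	cli.Get(ctx, "a", clientv3.WithPrefix())   // all keys prefixed with "a"
	cli.Get(ctx, "a", clientv3.WithFromKey())  // all keys >= "a"
}
```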
### Range
@ -133,7 +133,7 @@ message RangeRequest {
* Key, Range_End - The key range to fetch.
* Limit - the maximum number of keys returned for the request. When limit is set to 0, it is treated as no limit.
* Revision - the point-in-time of the key-value store to use for the range. If revision is less or equal to zero, the range is over the latest key-value store If the revision is compacted, ErrCompacted is returned as a response.
* Revision - the point-in-time of the key-value store to use for the range. If revision is less or equal to zero, the range is over the latest key-value store. If the revision is compacted, ErrCompacted is returned as a response.
* Sort_Order - the ordering for sorted requests.
* Sort_Target - the key-value field to sort.
* Serializable - sets the range request to use serializable member-local reads. By default, Range is linearizable; it reflects the current consensus of the cluster. For better performance and availability, in exchange for possible stale reads, a serializable range request is served locally without needing to reach consensus with other nodes in the cluster.
@ -218,7 +218,7 @@ message DeleteRangeResponse {
```
* Deleted - number of keys deleted.
* Prev_Kv - a list of all key-value pairs deleted by the DeleteRange operation.
* Prev_Kv - a list of all key-value pairs deleted by the `DeleteRange` operation.
### Transaction
@ -226,7 +226,7 @@ A transaction is an atomic If/Then/Else construct over the key-value store. It p
A transaction can atomically process multiple requests in a single request. For modifications to the key-value store, this means the store's revision is incremented only once for the transaction and all events generated by the transaction will have the same revision. However, modifications to the same key multiple times within a single transaction are forbidden.
All transactions are guarded by a conjunction of comparisons, similar to an "If" statement. Each comparison checks a single key in the store. It may check for the absence or presence of a value, compare with a given value, or check a key's revision or version. Two different comparisons may apply to the same or different keys. All comparisons are applied atomically; if all comparisons are true, the transaction is said to succeed and etcd applies the transaction's then / `success` request block, otherwise it is said to fail and applies the else / `failure` request block.
All transactions are guarded by a conjunction of comparisons, similar to an `If` statement. Each comparison checks a single key in the store. It may check for the absence or presence of a value, compare with a given value, or check a key's revision or version. Two different comparisons may apply to the same or different keys. All comparisons are applied atomically; if all comparisons are true, the transaction is said to succeed and etcd applies the transaction's then / `success` request block, otherwise it is said to fail and applies the else / `failure` request block.
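To make the guarded If/Then/Else structure concrete, here is a brief clientv3 sketch (the key and values are hypothetical):
```go
import (
	"context"
	"fmt"

	"github.com/coreos/etcd/clientv3"
)

// compareAndSwap puts "new" only if the current value of "key" is still "old";
// otherwise it reads the current value in the else block.
func compareAndSwap(ctx context.Context, cli *clientv3.Client) error {
	resp, err := cli.Txn(ctx).
		If(clientv3.Compare(clientv3.Value("key"), "=", "old")).
		Then(clientv3.OpPut("key", "new")).
		Else(clientv3.OpGet("key")).
		Commit()
	if err != nil {
		return err
	}
	fmt.Println("comparison succeeded:", resp.Succeeded)
	return nil
}
```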
Each comparison is encoded as a `Compare` message:
@ -321,7 +321,7 @@ message ResponseOp {
## Watch API
The Watch API provides an event-based interface for asynchronously monitoring changes to keys. An etcd3 watch waits for changes to keys by continuously watching from a given revision, either current or historical, and streams key updates back to the client.
The `Watch` API provides an event-based interface for asynchronously monitoring changes to keys. An etcd3 watch waits for changes to keys by continuously watching from a given revision, either current or historical, and streams key updates back to the client.
### Events
@ -345,7 +345,7 @@ message Event {
### Watch streams
Watches are long-running requests and use gRPC streams to stream event data. A watch stream is bi-directional; the client writes to the stream to establish watches and reads to receive watch event. A single watch stream can multiplex many distinct watches by tagging events with per-watch identifiers. This multiplexing helps reducing the memory footprint and connection overhead on the core etcd cluster.
Watches are long-running requests and use gRPC streams to stream event data. A watch stream is bi-directional; the client writes to the stream to establish watches and reads to receive watch events. A single watch stream can multiplex many distinct watches by tagging events with per-watch identifiers. This multiplexing helps reduce the memory footprint and connection overhead on the core etcd cluster.
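A minimal watch sketch (the `jobs/` prefix is hypothetical); many such watches can share one stream, multiplexed by watch ID:
```go
import (
	"context"
	"fmt"

	"github.com/coreos/etcd/clientv3"
)

// watchPrefix consumes events for every key under the "jobs/" prefix.
func watchPrefix(ctx context.Context, cli *clientv3.Client) {
	wch := cli.Watch(ctx, "jobs/", clientv3.WithPrefix())
	for wresp := range wch {
		for _, ev := range wresp.Events {
			fmt.Printf("%s %q : %q (revision %d)\n",
				ev.Type, ev.Kv.Key, ev.Kv.Value, wresp.Header.Revision)
		}
	}
}
```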
Watches make three guarantees about events:
* Ordered - events are ordered by revision; an event will never appear on a watch if it precedes an event in time that has already been posted.
@ -391,7 +391,7 @@ message WatchResponse {
```
* Watch_ID - the ID of the watch that corresponds to the response.
* Created - set to true if the response is for a create watch request. The client should record ID and expect to receive events for the watch on the stream. All events sent to the created watcher will have the same watch_id.
* Created - set to true if the response is for a create watch request. The client should store the ID and expect to receive events for the watch on the stream. All events sent to the created watcher will have the same watch_id.
* Canceled - set to true if the response is for a cancel watch request. No further events will be sent to the canceled watcher.
* Compact_Revision - set to the minimum historical revision available to etcd if a watcher tries watching at a compacted revision. This happens when creating a watcher at a compacted revision or the watcher cannot catch up with the progress of the key-value store. The watcher will be canceled; creating new watches with the same start_revision will fail.
* Events - a list of new events in sequence corresponding to the given watch ID.

View File

@ -47,7 +47,7 @@ When considering features, support, and stability, new applications planning to
### Consul
Consul bills itself as an end-to-end service discovery framework. To wit, it includes services such as health checking, failure detection, and DNS. Incidentally, Consul also exposes a key value store with mediocre performance and an intricate API. As it stands in Consul 0.7, the storage system does not scales well; systems requiring millions of keys will suffer from high latencies and memory pressure. The key value API is missing, most notably, multi-version keys, conditional transactions, and reliable streaming watches.
Consul is an end-to-end service discovery framework. It provides built-in health checking, failure detection, and DNS services. In addition, Consul exposes a key value store with RESTful HTTP APIs. [As it stands in Consul 1.0][dbtester-comparison-results], the storage system does not scale as well as other systems like etcd or Zookeeper in key-value operations; systems requiring millions of keys will suffer from high latencies and memory pressure. The key value API is missing, most notably, multi-version keys, conditional transactions, and reliable streaming watches.
etcd and Consul solve different problems. If looking for a distributed consistent key value store, etcd is a better choice over Consul. If looking for end-to-end cluster service discovery, etcd will not have enough features; choose Kubernetes, Consul, or SmartStack.
@ -113,3 +113,4 @@ For distributed coordination, choosing etcd can help prevent operational headach
[container-linux]: https://coreos.com/why
[locksmith]: https://github.com/coreos/locksmith
[kubernetes]: http://kubernetes.io/docs/whatisk8s
[dbtester-comparison-results]: https://github.com/coreos/dbtester/tree/master/test-results/2018Q1-02-etcd-zookeeper-consul

View File

@ -1,6 +1,11 @@
# Configuration flags
etcd is configurable through command-line flags and environment variables. Options set on the command line take precedence over those from the environment.
etcd is configurable through a configuration file, various command-line flags, and environment variables.
A reusable configuration file is a YAML file containing the names and values of one or more of the command-line flags described below. In order to use this file, specify the file path as a value to the `--config-file` flag. The [sample configuration file][sample-config-file] can be used as a starting point to create a new configuration file as needed.
Options set on the command line take precedence over those from the environment. If a configuration file is provided, other command line flags and environment variables will be ignored.
For example, `etcd --config-file etcd.conf.yml.sample --data-dir /tmp` will ignore the `--data-dir` flag.
The format of the environment variable for flag `--my-flag` is `ETCD_MY_FLAG`. This applies to all flags.
@ -266,12 +271,12 @@ The security flags help to [build a secure etcd cluster][security].
+ env variable: ETCD_PEER_CA_FILE
### --peer-cert-file
+ Path to the peer server TLS cert file.
+ Path to the peer server TLS cert file. This is the cert for peer-to-peer traffic, used both for server and client.
+ default: ""
+ env variable: ETCD_PEER_CERT_FILE
### --peer-key-file
+ Path to the peer server TLS key file.
+ Path to the peer server TLS key file. This is the key for peer-to-peer traffic, used both for server and client.
+ default: ""
+ env variable: ETCD_PEER_KEY_FILE
@ -332,6 +337,7 @@ Follow the instructions when using these flags.
### --config-file
+ Load server configuration from a file.
+ default: ""
+ example: [sample configuration file][sample-config-file]
## Profiling flags
@ -369,3 +375,4 @@ Follow the instructions when using these flags.
[security]: security.md
[systemd-intro]: http://freedesktop.org/wiki/Software/systemd/
[tuning]: ../tuning.md#time-parameters
[sample-config-file]: ../../etcd.conf.yml.sample

View File

@ -17,14 +17,14 @@ export NODE1=192.168.1.21
Trust the CoreOS [App Signing Key](https://coreos.com/security/app-signing-key/).
```
sudo rkt trust --prefix coreos.com/etcd
sudo rkt trust --prefix quay.io/coreos/etcd
# gpg key fingerprint is: 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E
```
Run the `v3.1.2` version of etcd or specify another release version.
Run the `v3.2` version of etcd or specify another release version.
```
sudo rkt run --net=default:IP=${NODE1} coreos.com/etcd:v3.1.2 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380
sudo rkt run --net=default:IP=${NODE1} quay.io/coreos/etcd:v3.2 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380
```
List the cluster member.
@ -45,13 +45,13 @@ export NODE3=172.16.28.23
```
# node 1
sudo rkt run --net=default:IP=${NODE1} coreos.com/etcd:v3.1.2 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
sudo rkt run --net=default:IP=${NODE1} quay.io/coreos/etcd:v3.2 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
# node 2
sudo rkt run --net=default:IP=${NODE2} coreos.com/etcd:v3.1.2 -- -name=node2 -advertise-client-urls=http://${NODE2}:2379 -initial-advertise-peer-urls=http://${NODE2}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE2}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
sudo rkt run --net=default:IP=${NODE2} quay.io/coreos/etcd:v3.2 -- -name=node2 -advertise-client-urls=http://${NODE2}:2379 -initial-advertise-peer-urls=http://${NODE2}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE2}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
# node 3
sudo rkt run --net=default:IP=${NODE3} coreos.com/etcd:v3.1.2 -- -name=node3 -advertise-client-urls=http://${NODE3}:2379 -initial-advertise-peer-urls=http://${NODE3}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE3}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
sudo rkt run --net=default:IP=${NODE3} quay.io/coreos/etcd:v3.2 -- -name=node3 -advertise-client-urls=http://${NODE3}:2379 -initial-advertise-peer-urls=http://${NODE3}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE3}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
```
Verify the cluster is healthy and can be reached.

View File

@ -43,8 +43,8 @@ ANNOTATIONS {
# alert if more than 1% of gRPC method calls have failed within the last 5 minutes
ALERT HighNumberOfFailedGRPCRequests
IF sum by(grpc_method) (rate(etcd_grpc_requests_failed_total{job="etcd"}[5m]))
/ sum by(grpc_method) (rate(etcd_grpc_total{job="etcd"}[5m])) > 0.01
IF 100 * (sum by(grpc_method) (rate(etcd_grpc_requests_failed_total{job="etcd"}[5m]))
/ sum by(grpc_method) (rate(etcd_grpc_total{job="etcd"}[5m]))) > 1
FOR 10m
LABELS {
severity = "warning"
@ -56,8 +56,8 @@ ANNOTATIONS {
# alert if more than 5% of gRPC method calls have failed within the last 5 minutes
ALERT HighNumberOfFailedGRPCRequests
IF sum by(grpc_method) (rate(etcd_grpc_requests_failed_total{job="etcd"}[5m]))
/ sum by(grpc_method) (rate(etcd_grpc_total{job="etcd"}[5m])) > 0.05
IF 100 * (sum by(grpc_method) (rate(etcd_grpc_requests_failed_total{job="etcd"}[5m]))
/ sum by(grpc_method) (rate(etcd_grpc_total{job="etcd"}[5m]))) > 5
FOR 5m
LABELS {
severity = "critical"
@ -84,8 +84,8 @@ ANNOTATIONS {
# alert if more than 1% of requests to an HTTP endpoint have failed within the last 5 minutes
ALERT HighNumberOfFailedHTTPRequests
IF sum(rate(grpc_server_handled_total{grpc_code!="OK",job="etcd"}[5m])) BY (grpc_service, grpc_method)
/ sum(rate(grpc_server_handled_total{job="etcd"}[5m])) BY (grpc_service, grpc_method) > 0.01
IF 100 * (sum(rate(grpc_server_handled_total{grpc_code!="OK",job="etcd"}[5m])) BY (grpc_service, grpc_method)
/ sum(rate(grpc_server_handled_total{job="etcd"}[5m])) BY (grpc_service, grpc_method)) > 1
FOR 10m
LABELS {
severity = "warning"
@ -97,8 +97,8 @@ ANNOTATIONS {
# alert if more than 5% of requests to an HTTP endpoint have failed within the last 5 minutes
ALERT HighNumberOfFailedHTTPRequests
IF sum(rate(grpc_server_handled_total{grpc_code!="OK",job="etcd"}[5m])) BY (grpc_service, grpc_method)
/ sum(rate(grpc_server_handled_total{job="etcd"}[5m])) BY (grpc_service, grpc_method) > 0.05
IF 100 * (sum(rate(grpc_server_handled_total{grpc_code!="OK",job="etcd"}[5m])) BY (grpc_service, grpc_method)
/ sum(rate(grpc_server_handled_total{job="etcd"}[5m])) BY (grpc_service, grpc_method)) > 5
FOR 5m
LABELS {
severity = "critical"
@ -154,7 +154,7 @@ ANNOTATIONS {
# alert if 99th percentile of round trips take 150ms
ALERT EtcdMemberCommunicationSlow
IF histogram_quantile(0.99, rate(etcd_network_member_round_trip_time_seconds_bucket[5m])) > 0.15
IF histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[5m])) > 0.15
FOR 10m
LABELS {
severity = "warning"

View File

@ -26,8 +26,8 @@ groups:
changes within the last hour
summary: a high number of leader changes within the etcd cluster are happening
- alert: HighNumberOfFailedGRPCRequests
expr: sum(rate(grpc_server_handled_total{grpc_code!="OK",job="etcd"}[5m])) BY (grpc_service, grpc_method)
/ sum(rate(grpc_server_handled_total{job="etcd"}[5m])) BY (grpc_service, grpc_method) > 0.01
expr: 100 * (sum(rate(grpc_server_handled_total{grpc_code!="OK",job="etcd"}[5m])) BY (grpc_service, grpc_method)
/ sum(rate(grpc_server_handled_total{job="etcd"}[5m])) BY (grpc_service, grpc_method)) > 1
for: 10m
labels:
severity: warning
@ -36,8 +36,8 @@ groups:
on etcd instance {{ $labels.instance }}'
summary: a high number of gRPC requests are failing
- alert: HighNumberOfFailedGRPCRequests
expr: sum(rate(grpc_server_handled_total{grpc_code!="OK",job="etcd"}[5m])) BY (grpc_service, grpc_method)
/ sum(rate(grpc_server_handled_total{job="etcd"}[5m])) BY (grpc_service, grpc_method) > 0.05
expr: 100 * (sum(rate(grpc_server_handled_total{grpc_code!="OK",job="etcd"}[5m])) BY (grpc_service, grpc_method)
/ sum(rate(grpc_server_handled_total{job="etcd"}[5m])) BY (grpc_service, grpc_method)) > 5
for: 5m
labels:
severity: critical
@ -56,8 +56,8 @@ groups:
}} are slow
summary: slow gRPC requests
- alert: HighNumberOfFailedHTTPRequests
expr: sum(rate(etcd_http_failed_total{job="etcd"}[5m])) BY (method) / sum(rate(etcd_http_received_total{job="etcd"}[5m]))
BY (method) > 0.01
expr: 100 * (sum(rate(etcd_http_failed_total{job="etcd"}[5m])) BY (method) / sum(rate(etcd_http_received_total{job="etcd"}[5m]))
BY (method)) > 1
for: 10m
labels:
severity: warning
@ -66,8 +66,8 @@ groups:
instance {{ $labels.instance }}'
summary: a high number of HTTP requests are failing
- alert: HighNumberOfFailedHTTPRequests
expr: sum(rate(etcd_http_failed_total{job="etcd"}[5m])) BY (method) / sum(rate(etcd_http_received_total{job="etcd"}[5m]))
BY (method) > 0.05
expr: 100 * (sum(rate(etcd_http_failed_total{job="etcd"}[5m])) BY (method) / sum(rate(etcd_http_received_total{job="etcd"}[5m]))
BY (method)) > 5
for: 5m
labels:
severity: critical
@ -106,7 +106,7 @@ groups:
its file descriptors soon'
summary: file descriptors soon exhausted
- alert: EtcdMemberCommunicationSlow
expr: histogram_quantile(0.99, rate(etcd_network_member_round_trip_time_seconds_bucket[5m]))
expr: histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[5m]))
> 0.15
for: 10m
labels:

View File

@ -48,7 +48,7 @@ Example application workload: A 50-node Kubernetes cluster
| Provider | Type | vCPUs | Memory (GB) | Max concurrent IOPS | Disk bandwidth (MB/s) |
|----------|------|-------|--------|------|----------------|
| AWS | m4.large | 2 | 8 | 3600 | 56.25 |
| GCE | n1-standard-1 + 50GB PD SSD | 2 | 7.5 | 1500 | 25 |
| GCE | n1-standard-2 + 50GB PD SSD | 2 | 7.5 | 1500 | 25 |
### Medium cluster

View File

@ -36,9 +36,9 @@ Error: rpc error: code = 11 desc = etcdserver: mvcc: required revision has been
## Defragmentation
After compacting the keyspace, the backend database may exhibit internal fragmentation. Any internal fragmentation is space that is free to use by the backend but still consumes storage space. The process of defragmentation releases this storage space back to the file system. Defragmentation is issued on a per-member so that cluster-wide latency spikes may be avoided.
After compacting the keyspace, the backend database may exhibit internal fragmentation. Any internal fragmentation is space that is free to use by the backend but still consumes storage space. Compacting old revisions internally fragments `etcd` by leaving gaps in the backend database. Fragmented space is available for use by `etcd` but unavailable to the host filesystem. In other words, deleting application data does not reclaim the space on disk.
Compacting old revisions internally fragments `etcd` by leaving gaps in backend database. Fragmented space is available for use by `etcd` but unavailable to the host filesystem.
The process of defragmentation releases this storage space back to the file system. Defragmentation is issued on a per-member basis so that cluster-wide latency spikes may be avoided.
To defragment an etcd member, use the `etcdctl defrag` command:
@ -47,6 +47,10 @@ $ etcdctl defrag
Finished defragmenting etcd member[127.0.0.1:2379]
```
**Note that defragmentation of a live member blocks the system from reading and writing data while rebuilding its state**.
**Note that a defragmentation request is not replicated over the cluster. That is, the request is only applied to the local node. Specify all members in the `--endpoints` flag.**
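Because the request only applies to the targeted member, a client has to defragment each endpoint explicitly; a rough clientv3 sketch (the endpoint list is supplied by the caller):
```go
import (
	"context"
	"fmt"

	"github.com/coreos/etcd/clientv3"
)

// defragAll issues a defragment request to every member endpoint in turn.
func defragAll(ctx context.Context, cli *clientv3.Client, endpoints []string) error {
	for _, ep := range endpoints {
		// Defragment is a maintenance call applied only to the target endpoint.
		if _, err := cli.Defragment(ctx, ep); err != nil {
			return fmt.Errorf("defragment %s: %v", ep, err)
		}
		fmt.Println("defragmented", ep)
	}
	return nil
}
```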
To defragment an etcd data directory directly, while etcd is not running, use the command:
``` sh
@ -80,14 +84,14 @@ $ ETCDCTL_API=3 etcdctl --write-out=table endpoint status
+----------------+------------------+-----------+---------+-----------+-----------+------------+
# confirm alarm is raised
$ ETCDCTL_API=3 etcdctl alarm list
memberID:13803658152347727308 alarm:NOSPACE
memberID:13803658152347727308 alarm:NOSPACE
```
Removing excessive keyspace data and defragmenting the backend database will put the cluster back within the quota limits:
```sh
# get current revision
$ rev=$(ETCDCTL_API=3 etcdctl --endpoints=:2379 endpoint status --write-out="json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9]*')
$ rev=$(ETCDCTL_API=3 etcdctl --endpoints=:2379 endpoint status --write-out="json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9].*')
# compact away all old revisions
$ ETCDCTL_API=3 etcdctl compact $rev
compacted revision 1516
@ -96,7 +100,7 @@ $ ETCDCTL_API=3 etcdctl defrag
Finished defragmenting etcd member[127.0.0.1:2379]
# disarm alarm
$ ETCDCTL_API=3 etcdctl alarm disarm
memberID:13803658152347727308 alarm:NOSPACE
memberID:13803658152347727308 alarm:NOSPACE
# test puts are allowed again
$ ETCDCTL_API=3 etcdctl put newkey 123
OK

View File

@ -193,6 +193,134 @@ The proxy communicates with etcd members through both the `--advertise-client-ur
When client authentication is enabled for an etcd member, the administrator must ensure that the peer certificate specified in the proxy's `--peer-cert-file` option is valid for that authentication. The proxy's peer certificate must also be valid for peer authentication if peer authentication is enabled.
## Notes for TLS authentication
Since [v3.2.0](https://github.com/coreos/etcd/blob/master/CHANGELOG-3.2.md#v320-2017-06-09), [TLS certificates get reloaded on every client connection](https://github.com/coreos/etcd/pull/7829). This is useful when replacing expiring certs without stopping etcd servers; it can be done by overwriting old certs with new ones. Refreshing certs for every connection should not have too much overhead, but can be improved in the future with a caching layer. Example tests can be found [here](https://github.com/coreos/etcd/blob/b041ce5d514a4b4aaeefbffb008f0c7570a18986/integration/v3_grpc_test.go#L1601-L1757).
Since [v3.2.0](https://github.com/coreos/etcd/blob/master/CHANGELOG-3.2.md#v320-2017-06-09), [server denies incoming peer certs with wrong IP `SAN`](https://github.com/coreos/etcd/pull/7687). For instance, if peer cert contains any IP addresses in Subject Alternative Name (SAN) field, server authenticates a peer only when the remote IP address matches one of those IP addresses. This is to prevent unauthorized endpoints from joining the cluster. For example, peer B's CSR (with `cfssl`) is:
```json
{
"CN": "etcd peer",
"hosts": [
"*.example.default.svc",
"*.example.default.svc.cluster.local",
"10.138.0.27"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "CA",
"ST": "San Francisco"
}
]
}
```
when peer B's actual IP address is `10.138.0.2`, not `10.138.0.27`. When peer B tries to join the cluster, peer A will reject B with the error `x509: certificate is valid for 10.138.0.27, not 10.138.0.2`, because B's remote IP address does not match the one in Subject Alternative Name (SAN) field.
Since [v3.2.0](https://github.com/coreos/etcd/blob/master/CHANGELOG-3.2.md#v320-2017-06-09), [server resolves TLS `DNSNames` when checking `SAN`](https://github.com/coreos/etcd/pull/7767). For instance, if peer cert contains only DNS names (no IP addresses) in Subject Alternative Name (SAN) field, server authenticates a peer only when forward-lookups (`dig b.com`) on those DNS names have matching IP with the remote IP address. For example, peer B's CSR (with `cfssl`) is:
```json
{
"CN": "etcd peer",
"hosts": [
"b.com"
],
```
when peer B's remote IP address is `10.138.0.2`. When peer B tries to join the cluster, peer A looks up the incoming host `b.com` to get the list of IP addresses (e.g. `dig b.com`). And rejects B if the list does not contain the IP `10.138.0.2`, with the error `tls: 10.138.0.2 does not match any of DNSNames ["b.com"]`.
Since [v3.2.2](https://github.com/coreos/etcd/blob/master/CHANGELOG-3.2.md#v322-2017-07-07), [server accepts connections if IP matches, without checking DNS entries](https://github.com/coreos/etcd/pull/8223). For instance, if peer cert contains IP addresses and DNS names in Subject Alternative Name (SAN) field, and the remote IP address matches one of those IP addresses, server just accepts connection without further checking the DNS names. For example, peer B's CSR (with `cfssl`) is:
```json
{
"CN": "etcd peer",
"hosts": [
"invalid.domain",
"10.138.0.2"
],
```
when peer B's remote IP address is `10.138.0.2` and `invalid.domain` is an invalid host. When peer B tries to join the cluster, peer A successfully authenticates B, since the Subject Alternative Name (SAN) field has a valid matching IP address. See [issue#8206](https://github.com/coreos/etcd/issues/8206) for more detail.
Since [v3.2.5](https://github.com/coreos/etcd/blob/master/CHANGELOG-3.2.md#v325-2017-08-04), [server supports reverse-lookup on wildcard DNS `SAN`](https://github.com/coreos/etcd/pull/8281). For instance, if peer cert contains only DNS names (no IP addresses) in Subject Alternative Name (SAN) field, server first reverse-lookups the remote IP address to get a list of names mapping to that address (e.g. `nslookup IPADDR`). Then accepts the connection if those names have a matching name with peer cert's DNS names (either by exact or wildcard match). If none is matched, server forward-lookups each DNS entry in peer cert (e.g. look up `example.default.svc` when the entry is `*.example.default.svc`), and accepts connection only when the host's resolved addresses have the matching IP address with the peer's remote IP address. For example, peer B's CSR (with `cfssl`) is:
```json
{
"CN": "etcd peer",
"hosts": [
"*.example.default.svc",
"*.example.default.svc.cluster.local"
],
```
when peer B's remote IP address is `10.138.0.2`. When peer B tries to join the cluster, peer A reverse-lookups the IP `10.138.0.2` to get the list of host names, and matches them, either exactly or by wildcard, against peer B's cert DNS names in the Subject Alternative Name (SAN) field. If none of the reverse/forward lookups worked, it returns an error `"tls: "10.138.0.2" does not match any of DNSNames ["*.example.default.svc","*.example.default.svc.cluster.local"]`. See [issue#8268](https://github.com/coreos/etcd/issues/8268) for more detail.
[v3.3.0](https://github.com/coreos/etcd/blob/master/CHANGELOG-3.3.md) adds the [`etcd --peer-cert-allowed-cn`](https://github.com/coreos/etcd/pull/8616) flag to support [CN(Common Name)-based auth for inter-peer connections](https://github.com/coreos/etcd/issues/8262). Kubernetes TLS bootstrapping involves generating dynamic certificates for etcd members and other system components (e.g. API server, kubelet, etc.). Maintaining different CAs for each component provides tighter access control to the etcd cluster but is often tedious. When the `--peer-cert-allowed-cn` flag is specified, a node can only join with a matching common name, even with shared CAs. For example, each member in a 3-node cluster is set up with CSRs (with `cfssl`) as below:
```json
{
"CN": "etcd.local",
"hosts": [
"m1.etcd.local",
"127.0.0.1",
"localhost"
],
```
```json
{
"CN": "etcd.local",
"hosts": [
"m2.etcd.local",
"127.0.0.1",
"localhost"
],
```
```json
{
"CN": "etcd.local",
"hosts": [
"m3.etcd.local",
"127.0.0.1",
"localhost"
],
```
Then only peers with matching common names will be authenticated if `--peer-cert-allowed-cn etcd.local` is given. Nodes with different CNs in their CSRs or with a different `--peer-cert-allowed-cn` value will be rejected:
```bash
$ etcd --peer-cert-allowed-cn m1.etcd.local
I | embed: rejected connection from "127.0.0.1:48044" (error "CommonName authentication failed", ServerName "m1.etcd.local")
I | embed: rejected connection from "127.0.0.1:55702" (error "remote error: tls: bad certificate", ServerName "m3.etcd.local")
```
Each process should be started with:
```bash
etcd --peer-cert-allowed-cn etcd.local
I | pkg/netutil: resolving m3.etcd.local:32380 to 127.0.0.1:32380
I | pkg/netutil: resolving m2.etcd.local:22380 to 127.0.0.1:22380
I | pkg/netutil: resolving m1.etcd.local:2380 to 127.0.0.1:2380
I | etcdserver: published {Name:m3 ClientURLs:[https://m3.etcd.local:32379]} to cluster 9db03f09b20de32b
I | embed: ready to serve client requests
I | etcdserver: published {Name:m1 ClientURLs:[https://m1.etcd.local:2379]} to cluster 9db03f09b20de32b
I | embed: ready to serve client requests
I | etcdserver: published {Name:m2 ClientURLs:[https://m2.etcd.local:22379]} to cluster 9db03f09b20de32b
I | embed: ready to serve client requests
I | embed: serving client requests on 127.0.0.1:32379
I | embed: serving client requests on 127.0.0.1:22379
I | embed: serving client requests on 127.0.0.1:2379
```
## Frequently asked questions
### I'm seeing a SSLv3 alert handshake failure when using TLS client authentication?

View File

@ -8,6 +8,8 @@ Before [starting an upgrade](#upgrade-procedure), read through the rest of this
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/coreos/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. This also prevents accidental v3 data loss (e.g. `db` file might have been moved). etcd requires that post v3 migration can only happen with v3 data. Do not upgrade to newer v3 versions until v3.0 server contains v3 data.
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.0, the running cluster must be 2.3 or greater. If it's before 2.3, please upgrade to [2.3](https://github.com/coreos/etcd/releases/tag/v2.3.8) before upgrading to 3.0.

View File

@ -8,6 +8,8 @@ Before [starting an upgrade](#upgrade-procedure), read through the rest of this
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/coreos/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. This also prevents accidental v3 data loss (e.g. `db` file might have been moved). etcd requires that post v3 migration can only happen with v3 data. Do not upgrade to newer v3 versions until v3.0 server contains v3 data.
#### Monitoring
Following metrics from v3.0.x have been deprecated in favor of [go-grpc-prometheus](https://github.com/grpc-ecosystem/go-grpc-prometheus):

View File

@ -6,51 +6,44 @@ In the general case, upgrading from etcd 3.1 to 3.2 can be a zero-downtime, roll
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Client upgrade checklists
### Upgrade checklists
3.2 introduces two breaking changes.
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/coreos/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. This also prevents accidental v3 data loss (e.g. `db` file might have been moved). etcd requires that post v3 migration can only happen with v3 data. Do not upgrade to newer v3 versions until v3.0 server contains v3 data.
Previously, `clientv3.Lease.TimeToLive` API returned `lease.ErrLeaseNotFound` on non-existent lease ID. 3.2 instead returns TTL=-1 in its response and no error (see [#7305](https://github.com/coreos/etcd/pull/7305)).
Highlighted breaking changes in 3.2.
Before
#### Change in default `snapshot-count` value
```go
// when leaseID does not exist
resp, err := TimeToLive(ctx, leaseID)
resp == nil
err == lease.ErrLeaseNotFound
```
The default value of `--snapshot-count` has [changed from 10,000 to 100,000](https://github.com/coreos/etcd/pull/7160). A higher snapshot count means Raft entries are held in memory for longer before old entries are discarded. It is a trade-off between less frequent snapshotting and [higher memory usage](https://github.com/kubernetes/kubernetes/issues/60589#issuecomment-371977156). A higher `--snapshot-count` manifests as higher memory usage, while retaining more Raft entries helps with the availability of slow followers: the leader is still able to replicate its logs to followers, rather than forcing followers to rebuild their stores from leader snapshots.
After
#### Change in gRPC dependency (>=3.2.10)
```go
// when leaseID does not exist
resp, err := TimeToLive(ctx, leaseID)
resp.TTL == -1
err == nil
```
3.2.10 or later now requires [grpc/grpc-go](https://github.com/grpc/grpc-go/releases) `v1.7.5` (<=3.2.9 requires `v1.2.1`).
`clientv3.NewFromConfigFile` is moved to `yaml.NewConfig`.
##### Deprecate `grpclog.Logger`
`grpclog.Logger` has been deprecated in favor of [`grpclog.LoggerV2`](https://github.com/grpc/grpc-go/blob/master/grpclog/loggerv2.go). `clientv3.Logger` is now `grpclog.LoggerV2`.
Before
```go
import "github.com/coreos/etcd/clientv3"
clientv3.NewFromConfigFile
clientv3.SetLogger(log.New(os.Stderr, "grpc: ", 0))
```
After
```go
import clientv3yaml "github.com/coreos/etcd/clientv3/yaml"
clientv3yaml.NewConfig
import "github.com/coreos/etcd/clientv3"
import "google.golang.org/grpc/grpclog"
clientv3.SetLogger(grpclog.NewLoggerV2(os.Stderr, os.Stderr, os.Stderr))
// log.New above cannot be used (not implement grpclog.LoggerV2 interface)
```
### Client upgrade checklists (>=3.2.10)
##### Deprecate `grpc.ErrClientConnTimeout`
>=3.2.10 upgrades `grpc/grpc-go` to >=v1.7.x from v1.2.1, which introduces some breaking changes.
Previously, `grpc.ErrClientConnTimeout` error is returned on client dial time-outs. >=3.2.10 instead returns `context.DeadlineExceeded` (see [#8504](https://github.com/coreos/etcd/issues/8504)).
Previously, `grpc.ErrClientConnTimeout` error is returned on client dial time-outs. 3.2 instead returns `context.DeadlineExceeded` (see [#8504](https://github.com/coreos/etcd/issues/8504)).
Before
@ -77,6 +70,148 @@ if err == context.DeadlineExceeded {
}
```
#### Change in maximum request size limits (>=3.2.10)
3.2.10 and 3.2.11 allow custom request size limits on the server side. >=3.2.12 allows custom request size limits for both server and **client side**. In previous versions (v3.2.10, v3.2.11), client response size was limited to only 4 MiB.
Server-side request limits can be configured with `--max-request-bytes` flag:
```bash
# limits request size to 1.5 KiB
etcd --max-request-bytes 1536
# client writes exceeding 1.5 KiB will be rejected
etcdctl put foo [LARGE VALUE...]
# etcdserver: request is too large
```
Or configure `embed.Config.MaxRequestBytes` field:
```go
import "github.com/coreos/etcd/embed"
import "github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
// limit requests to 5 MiB
cfg := embed.NewConfig()
cfg.MaxRequestBytes = 5 * 1024 * 1024
// client writes exceeding 5 MiB will be rejected
_, err := cli.Put(ctx, "foo", [LARGE VALUE...])
err == rpctypes.ErrRequestTooLarge
```
**If not specified, server-side limit defaults to 1.5 MiB**.
Client-side request limits must be configured based on server-side limits.
```bash
# limits request size to 1 MiB
etcd --max-request-bytes 1048576
```
```go
import "github.com/coreos/etcd/clientv3"
cli, _ := clientv3.New(clientv3.Config{
Endpoints: []string{"127.0.0.1:2379"},
MaxCallSendMsgSize: 2 * 1024 * 1024,
MaxCallRecvMsgSize: 3 * 1024 * 1024,
})
// client writes exceeding "--max-request-bytes" will be rejected from etcd server
_, err := cli.Put(ctx, "foo", strings.Repeat("a", 1*1024*1024+5))
err == rpctypes.ErrRequestTooLarge
// client writes exceeding "MaxCallSendMsgSize" will be rejected from client-side
_, err = cli.Put(ctx, "foo", strings.Repeat("a", 5*1024*1024))
err.Error() == "rpc error: code = ResourceExhausted desc = grpc: trying to send message larger than max (5242890 vs. 2097152)"
// some writes under limits
for i := range []int{0,1,2,3,4} {
_, err = cli.Put(ctx, fmt.Sprintf("foo%d", i), strings.Repeat("a", 1*1024*1024-500))
if err != nil {
panic(err)
}
}
// client reads exceeding "MaxCallRecvMsgSize" will be rejected from client-side
_, err = cli.Get(ctx, "foo", clientv3.WithPrefix())
err.Error() == "rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5240509 vs. 3145728)"
```
**If not specified, client-side send limit defaults to 2 MiB (1.5 MiB + gRPC overhead bytes) and receive limit to `math.MaxInt32`**. Please see [clientv3 godoc](https://godoc.org/github.com/coreos/etcd/clientv3#Config) for more detail.
#### Change in raw gRPC client wrappers
3.2.12 or later changes the function signatures of `clientv3` gRPC client wrapper. This change was needed to support [custom `grpc.CallOption` on message size limits](https://github.com/coreos/etcd/pull/9047).
Before and after
```diff
-func NewKVFromKVClient(remote pb.KVClient) KV {
+func NewKVFromKVClient(remote pb.KVClient, c *Client) KV {
-func NewClusterFromClusterClient(remote pb.ClusterClient) Cluster {
+func NewClusterFromClusterClient(remote pb.ClusterClient, c *Client) Cluster {
-func NewLeaseFromLeaseClient(remote pb.LeaseClient, keepAliveTimeout time.Duration) Lease {
+func NewLeaseFromLeaseClient(remote pb.LeaseClient, c *Client, keepAliveTimeout time.Duration) Lease {
-func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient) Maintenance {
+func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient, c *Client) Maintenance {
-func NewWatchFromWatchClient(wc pb.WatchClient) Watcher {
+func NewWatchFromWatchClient(wc pb.WatchClient, c *Client) Watcher {
```
#### Change in `clientv3.Lease.TimeToLive` API
Previously, `clientv3.Lease.TimeToLive` API returned `lease.ErrLeaseNotFound` on non-existent lease ID. 3.2 instead returns TTL=-1 in its response and no error (see [#7305](https://github.com/coreos/etcd/pull/7305)).
Before
```go
// when leaseID does not exist
resp, err := TimeToLive(ctx, leaseID)
resp == nil
err == lease.ErrLeaseNotFound
```
After
```go
// when leaseID does not exist
resp, err := TimeToLive(ctx, leaseID)
resp.TTL == -1
err == nil
```
#### Change in `clientv3.NewFromConfigFile`
`clientv3.NewFromConfigFile` is moved to `yaml.NewConfig`.
Before
```go
import "github.com/coreos/etcd/clientv3"
clientv3.NewFromConfigFile
```
After
```go
import clientv3yaml "github.com/coreos/etcd/clientv3/yaml"
clientv3yaml.NewConfig
```
#### Change in `--listen-peer-urls` and `--listen-client-urls`
3.2 now rejects domain names for `--listen-peer-urls` and `--listen-client-urls` (3.1 only prints out warnings), since a domain name is invalid for network interface binding. Make sure that those URLs are properly formatted as `scheme://IP:port`.
See [issue #6336](https://github.com/coreos/etcd/issues/6336) for more context.
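For example, a hypothetical invocation that binds to a hostname now fails at startup, while an IP-based URL is accepted (the hostname and address below are illustrative):
```bash
# rejected in 3.2+ (a domain name cannot be used for interface binding)
etcd --listen-client-urls http://etcd.example.com:2379

# accepted (bind to an IP address)
etcd --listen-client-urls http://10.0.0.1:2379 --advertise-client-urls http://10.0.0.1:2379
```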
### Server upgrade checklists
#### Upgrade requirements

View File

@ -6,9 +6,306 @@ In the general case, upgrading from etcd 3.2 to 3.3 can be a zero-downtime, roll
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Client upgrade checklists
### Upgrade checklists
3.3 introduces breaking changes (TODO: update this before 3.3 release).
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/coreos/etcd/issues/9480), etcd server v3.2+ panics when it restores from existing snapshots but finds no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. It also prevents accidental v3 data loss (e.g. the `db` file might have been moved). etcd requires that a post-v3 migration can only happen with v3 data. Do not upgrade to newer v3 versions until the v3.0 server contains v3 data.
Highlighted breaking changes in 3.3.
#### Change in `etcdserver.EtcdServer` struct
`etcdserver.EtcdServer` has changed the type of its member field `*etcdserver.ServerConfig` to `etcdserver.ServerConfig`, and `etcdserver.NewServer` now takes `etcdserver.ServerConfig` instead of `*etcdserver.ServerConfig`.
Before and after (e.g. [k8s.io/kubernetes/test/e2e_node/services/etcd.go](https://github.com/kubernetes/kubernetes/blob/release-1.8/test/e2e_node/services/etcd.go#L50-L55))
```diff
import "github.com/coreos/etcd/etcdserver"
type EtcdServer struct {
*etcdserver.EtcdServer
- config *etcdserver.ServerConfig
+ config etcdserver.ServerConfig
}
func NewEtcd(dataDir string) *EtcdServer {
- config := &etcdserver.ServerConfig{
+ config := etcdserver.ServerConfig{
DataDir: dataDir,
...
}
return &EtcdServer{config: config}
}
func (e *EtcdServer) Start() error {
var err error
e.EtcdServer, err = etcdserver.NewServer(e.config)
...
```
#### Change in `embed.EtcdServer` struct
Field `LogOutput` is added to `embed.Config`:
```diff
package embed
type Config struct {
Debug bool `json:"debug"`
LogPkgLevels string `json:"log-package-levels"`
+ LogOutput string `json:"log-output"`
...
```
Previously, gRPC server warnings were logged in etcdserver:
```
WARNING: 2017/11/02 11:35:51 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {localhost:2379 <nil>}
WARNING: 2017/11/02 11:35:51 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {localhost:2379 <nil>}
```
From v3.3, gRPC server logs are disabled by default.
```go
import "github.com/coreos/etcd/embed"
cfg := &embed.Config{Debug: false}
cfg.SetupLogging()
```
Set `embed.Config.Debug` field to `true` to enable gRPC server logs.
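A minimal sketch of re-enabling those logs in an embedded server, using the same `embed` package shown above:
```go
import "github.com/coreos/etcd/embed"

cfg := embed.NewConfig()
cfg.Debug = true // re-enable gRPC server logs
cfg.SetupLogging()
```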
#### Change in `/health` endpoint response
Previously, `[endpoint]:[client-port]/health` returned a manually marshaled JSON value. 3.3 now defines the [`etcdhttp.Health`](https://godoc.org/github.com/coreos/etcd/etcdserver/api/etcdhttp#Health) struct.
Note that in v3.3.0-rc.0, v3.3.0-rc.1, and v3.3.0-rc.2, `etcdhttp.Health` has boolean type `"health"` and `"errors"` fields. For backward compatibility, we reverted the `"health"` field to `string` type and removed the `"errors"` field. Further health information will be provided in separate APIs.
```bash
$ curl http://localhost:2379/health
{"health":"true"}
```
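A short sketch of consuming the new response shape from Go; the locally defined struct below mirrors the `"health"` string field for illustration and is not the `etcdhttp.Health` type itself:
```go
import (
	"encoding/json"
	"net/http"
)

// healthResponse mirrors the v3.3 /health JSON body, e.g. {"health":"true"}.
type healthResponse struct {
	Health string `json:"health"`
}

// clusterHealthy queries [endpoint]/health and reports whether the cluster is healthy.
func clusterHealthy(endpoint string) (bool, error) {
	resp, err := http.Get(endpoint + "/health")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var h healthResponse
	if err := json.NewDecoder(resp.Body).Decode(&h); err != nil {
		return false, err
	}
	return h.Health == "true", nil
}
```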
#### Change in gRPC gateway HTTP endpoints (replaced `/v3alpha` with `/v3beta`)
Before
```bash
curl -L http://localhost:2379/v3alpha/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```
After
```bash
curl -L http://localhost:2379/v3beta/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```
Requests to `/v3alpha` endpoints will redirect to `/v3beta`, and `/v3alpha` will be removed in the 3.4 release.
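As a follow-up sketch against the same gateway (assuming the same local endpoint), the value written above can be read back through the range endpoint; `Zm9v` is the base64 encoding of `foo`:
```bash
curl -L http://localhost:2379/v3beta/kv/range \
  -X POST -d '{"key": "Zm9v"}'
```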
#### Change in maximum request size limits
3.3 now allows custom request size limits for both server and **client side**. In previous versions (v3.2.10, v3.2.11), the client response size was limited to only 4 MiB.
Server-side request limits can be configured with `--max-request-bytes` flag:
```bash
# limits request size to 1.5 KiB
etcd --max-request-bytes 1536
# client writes exceeding 1.5 KiB will be rejected
etcdctl put foo [LARGE VALUE...]
# etcdserver: request is too large
```
Or configure `embed.Config.MaxRequestBytes` field:
```go
import "github.com/coreos/etcd/embed"
import "github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
// limit requests to 5 MiB
cfg := embed.NewConfig()
cfg.MaxRequestBytes = 5 * 1024 * 1024
// client writes exceeding 5 MiB will be rejected
_, err := cli.Put(ctx, "foo", [LARGE VALUE...])
err == rpctypes.ErrRequestTooLarge
```
**If not specified, server-side limit defaults to 1.5 MiB**.
Client-side request limits must be configured based on server-side limits.
```bash
# limits request size to 1 MiB
etcd --max-request-bytes 1048576
```
```go
import "github.com/coreos/etcd/clientv3"
cli, _ := clientv3.New(clientv3.Config{
Endpoints: []string{"127.0.0.1:2379"},
MaxCallSendMsgSize: 2 * 1024 * 1024,
MaxCallRecvMsgSize: 3 * 1024 * 1024,
})
// client writes exceeding "--max-request-bytes" will be rejected from etcd server
_, err := cli.Put(ctx, "foo", strings.Repeat("a", 1*1024*1024+5))
err == rpctypes.ErrRequestTooLarge
// client writes exceeding "MaxCallSendMsgSize" will be rejected from client-side
_, err = cli.Put(ctx, "foo", strings.Repeat("a", 5*1024*1024))
err.Error() == "rpc error: code = ResourceExhausted desc = grpc: trying to send message larger than max (5242890 vs. 2097152)"
// some writes under limits
for i := range []int{0,1,2,3,4} {
_, err = cli.Put(ctx, fmt.Sprintf("foo%d", i), strings.Repeat("a", 1*1024*1024-500))
if err != nil {
panic(err)
}
}
// client reads exceeding "MaxCallRecvMsgSize" will be rejected from client-side
_, err = cli.Get(ctx, "foo", clientv3.WithPrefix())
err.Error() == "rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5240509 vs. 3145728)"
```
**If not specified, client-side send limit defaults to 2 MiB (1.5 MiB + gRPC overhead bytes) and receive limit to `math.MaxInt32`**. Please see [clientv3 godoc](https://godoc.org/github.com/coreos/etcd/clientv3#Config) for more detail.
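As a minimal sketch, leaving both fields unset keeps those defaults (the endpoint below is an assumption):
```go
import "github.com/coreos/etcd/clientv3"

// MaxCallSendMsgSize and MaxCallRecvMsgSize are left at zero,
// so the client uses the 2 MiB send and math.MaxInt32 receive defaults.
cli, err := clientv3.New(clientv3.Config{
	Endpoints: []string{"127.0.0.1:2379"},
})
if err != nil {
	panic(err)
}
defer cli.Close()
```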
#### Change in raw gRPC client wrappers
3.3 changes the function signatures of the `clientv3` gRPC client wrappers. This change was needed to support [custom `grpc.CallOption` on message size limits](https://github.com/coreos/etcd/pull/9047).
Before and after
```diff
-func NewKVFromKVClient(remote pb.KVClient) KV {
+func NewKVFromKVClient(remote pb.KVClient, c *Client) KV {
-func NewClusterFromClusterClient(remote pb.ClusterClient) Cluster {
+func NewClusterFromClusterClient(remote pb.ClusterClient, c *Client) Cluster {
-func NewLeaseFromLeaseClient(remote pb.LeaseClient, keepAliveTimeout time.Duration) Lease {
+func NewLeaseFromLeaseClient(remote pb.LeaseClient, c *Client, keepAliveTimeout time.Duration) Lease {
-func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient) Maintenance {
+func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient, c *Client) Maintenance {
-func NewWatchFromWatchClient(wc pb.WatchClient) Watcher {
+func NewWatchFromWatchClient(wc pb.WatchClient, c *Client) Watcher {
```
#### Change in clientv3 `Snapshot` API error type
Previously, the clientv3 `Snapshot` API returned a raw [`grpc/*status.statusError`] type error. v3.3 now translates those errors into the corresponding public error types, to be consistent with other APIs.
Before
```go
import "context"
// reading snapshot with canceled context should error out
ctx, cancel := context.WithCancel(context.Background())
rc, _ := cli.Snapshot(ctx)
cancel()
_, err := io.Copy(f, rc)
err.Error() == "rpc error: code = Canceled desc = context canceled"
// reading snapshot with deadline exceeded should error out
ctx, cancel = context.WithTimeout(context.Background(), time.Second)
defer cancel()
rc, _ = cli.Snapshot(ctx)
time.Sleep(2 * time.Second)
_, err = io.Copy(f, rc)
err.Error() == "rpc error: code = DeadlineExceeded desc = context deadline exceeded"
```
After
```go
import "context"
// reading snapshot with canceled context should error out
ctx, cancel := context.WithCancel(context.Background())
rc, _ := cli.Snapshot(ctx)
cancel()
_, err := io.Copy(f, rc)
err == context.Canceled
// reading snapshot with deadline exceeded should error out
ctx, cancel = context.WithTimeout(context.Background(), time.Second)
defer cancel()
rc, _ = cli.Snapshot(ctx)
time.Sleep(2 * time.Second)
_, err = io.Copy(f, rc)
err == context.DeadlineExceeded
```
#### Change in `etcdctl lease timetolive` command output
Previously, the `lease timetolive LEASE_ID` command printed `-1s` as the remaining seconds for an expired lease. 3.3 now outputs a clearer message.
Before
```bash
lease 2d8257079fa1bc0c granted with TTL(0s), remaining(-1s)
```
After
```bash
lease 2d8257079fa1bc0c already expired
```
#### Change in `golang.org/x/net/context` imports
`clientv3` has deprecated `golang.org/x/net/context`. If a project vendors `golang.org/x/net/context` in other code (e.g. etcd generated protocol buffer code) and imports `github.com/coreos/etcd/clientv3`, it requires Go 1.9+ to compile.
Before
```go
import "golang.org/x/net/context"
cli.Put(context.Background(), "f", "v")
```
After
```go
import "context"
cli.Put(context.Background(), "f", "v")
```
#### Change in gRPC dependency
3.3 now requires [grpc/grpc-go](https://github.com/grpc/grpc-go/releases) `v1.7.5`.
##### Deprecate `grpclog.Logger`
`grpclog.Logger` has been deprecated in favor of [`grpclog.LoggerV2`](https://github.com/grpc/grpc-go/blob/master/grpclog/loggerv2.go). `clientv3.Logger` is now `grpclog.LoggerV2`.
Before
```go
import "github.com/coreos/etcd/clientv3"
clientv3.SetLogger(log.New(os.Stderr, "grpc: ", 0))
```
After
```go
import "github.com/coreos/etcd/clientv3"
import "google.golang.org/grpc/grpclog"
clientv3.SetLogger(grpclog.NewLoggerV2(os.Stderr, os.Stderr, os.Stderr))
// log.New above cannot be used (it does not implement the grpclog.LoggerV2 interface)
```
##### Deprecate `grpc.ErrClientConnTimeout`
Previously, a `grpc.ErrClientConnTimeout` error was returned on client dial time-outs. 3.3 instead returns `context.DeadlineExceeded` (see [#8504](https://github.com/coreos/etcd/issues/8504)).
@ -37,6 +334,22 @@ if err == context.DeadlineExceeded {
}
```
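A hedged sketch of the new behavior, mirroring the example in the clientv3 package documentation; the blackhole address and timeout are illustrative:
```go
import (
	"context"
	"time"

	"github.com/coreos/etcd/clientv3"
)

// expect a dial time-out on an IPv4 blackhole address
_, err := clientv3.New(clientv3.Config{
	Endpoints:   []string{"http://254.0.0.1:12345"},
	DialTimeout: 2 * time.Second,
})

// etcd clientv3 >= v3.2.10, grpc/grpc-go >= v1.7.3
if err == context.DeadlineExceeded {
	// handle dial time-out
}
```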
#### Change in official container registry
etcd now uses [`gcr.io/etcd-development/etcd`](https://gcr.io/etcd-development/etcd) as a primary container registry, and [`quay.io/coreos/etcd`](https://quay.io/coreos/etcd) as secondary.
Before
```bash
docker pull quay.io/coreos/etcd:v3.2.5
```
After
```bash
docker pull gcr.io/etcd-development/etcd:v3.3.0
```
### Server upgrade checklists
#### Upgrade requirements
@ -160,4 +473,4 @@ localhost:22379 is healthy: successfully committed proposal: took = 2.553476ms
localhost:32379 is healthy: successfully committed proposal: took = 2.517902ms
```
[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev
[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev

View File

@ -0,0 +1,171 @@
## Upgrade etcd from 3.3 to 3.4
In the general case, upgrading from etcd 3.3 to 3.4 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.3 processes and replace them with etcd v3.4 processes
- after running all v3.4 processes, new features in v3.4 are available to the cluster
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/coreos/etcd/issues/9480), etcd server v3.2+ panics when it restores from existing snapshots but finds no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. It also prevents accidental v3 data loss (e.g. the `db` file might have been moved). etcd requires that a post-v3 migration can only happen with v3 data. Do not upgrade to newer v3 versions until the v3.0 server contains v3 data.
Highlighted breaking changes in 3.4.
#### Change in `etcd` flags
The `--ca-file` and `--peer-ca-file` flags are deprecated (and have been since v2.1); use `--trusted-ca-file` and `--peer-trusted-ca-file` instead.
```diff
-etcd --ca-file ca-client.crt
+etcd --trusted-ca-file ca-client.crt
```
```diff
-etcd --peer-ca-file ca-peer.crt
+etcd --peer-trusted-ca-file ca-peer.crt
```
#### Change in `pkg/transport`
The `pkg/transport.TLSInfo.CAFile` field is deprecated; use `TrustedCAFile` instead.
```diff
import "github.com/coreos/etcd/pkg/transport"
tlsInfo := transport.TLSInfo{
CertFile: "/tmp/test-certs/test.pem",
KeyFile: "/tmp/test-certs/test-key.pem",
- CAFile: "/tmp/test-certs/trusted-ca.pem",
+ TrustedCAFile: "/tmp/test-certs/trusted-ca.pem",
}
tlsConfig, err := tlsInfo.ClientConfig()
if err != nil {
panic(err)
}
```
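The resulting `tls.Config` from `ClientConfig()` can then be handed to a v3 client; a brief sketch with an assumed HTTPS endpoint:
```go
import "github.com/coreos/etcd/clientv3"

// tlsConfig comes from tlsInfo.ClientConfig() above
cli, err := clientv3.New(clientv3.Config{
	Endpoints: []string{"https://127.0.0.1:2379"},
	TLS:       tlsConfig,
})
if err != nil {
	panic(err)
}
defer cli.Close()
```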
### Server upgrade checklists
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.4, the running cluster must be 3.3 or greater. If it's before 3.3, please [upgrade to 3.3](upgrade_3_3.md) before upgrading to 3.4.
Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.
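For instance (endpoints are illustrative):
```bash
ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=localhost:2379,localhost:22379,localhost:32379
```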
#### Preparation
Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
Before beginning, [backup the etcd data](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](../v2/admin_guide.md#backing-up-the-datastore).
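For example, a v3 snapshot can be taken with the following command (the file name is illustrative):
```bash
ETCDCTL_API=3 etcdctl snapshot save backup.db
```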
#### Mixed versions
While upgrading, an etcd cluster supports mixed versions of etcd members, and operates with the protocol of the lowest common version. The cluster is only considered upgraded once all of its members are upgraded to version 3.4. Internally, etcd members negotiate with each other to determine the overall cluster version, which controls the reported version and the supported features.
#### Limitations
Note: If the cluster only has v3 data and no v2 data, it is not subject to this limitation.
If the cluster is serving a v2 data set larger than 50MB, each newly upgraded member may take up to two minutes to catch up with the existing cluster. Check the size of a recent snapshot to estimate the total data size. In other words, it is safest to wait for 2 minutes between upgrading each member.
For a much larger total data size, 100MB or more, this one-time process might take even more time. Administrators of very large etcd clusters of this magnitude can feel free to contact the [etcd team][etcd-contact] before upgrading, and we'll be happy to provide advice on the procedure.
#### Downgrade
If all members have been upgraded to v3.4, the cluster will be upgraded to v3.4, and downgrade from this completed state is **not possible**. If any single member is still v3.3, however, the cluster and its operations remain "v3.3", and it is possible from this mixed cluster state to return to using a v3.3 etcd binary on all members.
Please [backup the data directory](../op-guide/maintenance.md#snapshot-backup) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
### Upgrade procedure
This example shows how to upgrade a 3-member v3.3 etcd cluster running on a local machine.
#### 1. Check upgrade requirements
Is the cluster healthy and running v3.3.x?
```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:2379 is healthy: successfully committed proposal: took = 6.600684ms
localhost:22379 is healthy: successfully committed proposal: took = 8.540064ms
localhost:32379 is healthy: successfully committed proposal: took = 8.763432ms
$ curl http://localhost:2379/version
{"etcdserver":"3.3.0","etcdcluster":"3.3.0"}
```
#### 2. Stop the existing etcd process
When each etcd process is stopped, expected errors will be logged by other cluster members. This is normal since a cluster member connection has been (temporarily) broken:
```
14:13:31.491746 I | raft: c89feb932daef420 [term 3] received MsgTimeoutNow from 6d4f535bae3ab960 and starts an election to get leadership.
14:13:31.491769 I | raft: c89feb932daef420 became candidate at term 4
14:13:31.491788 I | raft: c89feb932daef420 received MsgVoteResp from c89feb932daef420 at term 4
14:13:31.491797 I | raft: c89feb932daef420 [logterm: 3, index: 9] sent MsgVote request to 6d4f535bae3ab960 at term 4
14:13:31.491805 I | raft: c89feb932daef420 [logterm: 3, index: 9] sent MsgVote request to 9eda174c7df8a033 at term 4
14:13:31.491815 I | raft: raft.node: c89feb932daef420 lost leader 6d4f535bae3ab960 at term 4
14:13:31.524084 I | raft: c89feb932daef420 received MsgVoteResp from 6d4f535bae3ab960 at term 4
14:13:31.524108 I | raft: c89feb932daef420 [quorum:2] has received 2 MsgVoteResp votes and 0 vote rejections
14:13:31.524123 I | raft: c89feb932daef420 became leader at term 4
14:13:31.524136 I | raft: raft.node: c89feb932daef420 elected leader c89feb932daef420 at term 4
14:13:31.592650 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream MsgApp v2 reader)
14:13:31.592825 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream Message reader)
14:13:31.693275 E | rafthttp: failed to dial 6d4f535bae3ab960 on stream Message (dial tcp [::1]:2380: getsockopt: connection refused)
14:13:31.693289 I | rafthttp: peer 6d4f535bae3ab960 became inactive
14:13:31.936678 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream Message writer)
```
It's a good idea at this point to [backup the etcd data](../op-guide/maintenance.md#snapshot-backup) to provide a downgrade path should any problems occur:
```
$ etcdctl snapshot save backup.db
```
#### 3. Drop-in etcd v3.4 binary and start the new etcd process
The new v3.4 etcd will publish its information to the cluster:
```
14:14:25.363225 I | etcdserver: published {Name:s1 ClientURLs:[http://localhost:2379]} to cluster a9ededbffcb1b1f1
```
Verify that each member, and then the entire cluster, becomes healthy with the new v3.4 etcd binary:
```
$ ETCDCTL_API=3 /etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:22379 is healthy: successfully committed proposal: took = 5.540129ms
localhost:32379 is healthy: successfully committed proposal: took = 7.321771ms
localhost:2379 is healthy: successfully committed proposal: took = 10.629901ms
```
Upgraded members will log warnings like the following until the entire cluster is upgraded. This is expected and will cease after all etcd cluster members are upgraded to v3.4:
```
14:15:17.071804 W | etcdserver: member c89feb932daef420 has a higher version 3.4.0
14:15:21.073110 W | etcdserver: the local etcd version 3.3.0 is not up-to-date
14:15:21.073142 W | etcdserver: member 6d4f535bae3ab960 has a higher version 3.4.0
14:15:21.073157 W | etcdserver: the local etcd version 3.3.0 is not up-to-date
14:15:21.073164 W | etcdserver: member c89feb932daef420 has a higher version 3.4.0
```
#### 4. Repeat step 2 to step 3 for all other members
#### 5. Finish
When all members are upgraded, the cluster will report upgrading to 3.4 successfully:
```
14:15:54.536901 N | etcdserver/membership: updated the cluster version from 3.3 to 3.4
14:15:54.537035 I | etcdserver/api: enabled capabilities for version 3.4
```
```
$ ETCDCTL_API=3 /etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:2379 is healthy: successfully committed proposal: took = 2.312897ms
localhost:22379 is healthy: successfully committed proposal: took = 2.553476ms
localhost:32379 is healthy: successfully committed proposal: took = 2.517902ms
```
[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev

View File

@ -45,12 +45,12 @@ It is important to monitor your production etcd cluster for healthy information
#### Health Monitoring
At lowest level, etcd exposes health information via HTTP at `/health` in JSON format. If it returns `{"health":true}`, then the cluster is healthy.
At lowest level, etcd exposes health information via HTTP at `/health` in JSON format. If it returns `{"health":"true"}`, then the cluster is healthy.
```
$ curl -L http://127.0.0.1:2379/health
{"health":true}
{"health":"true"}
```
You can also use etcdctl to check the cluster-wide health information. It will contact all the members of the cluster and collect the health information for you.

View File

@ -29,5 +29,5 @@ curl http://10.0.0.10:2379/health
```
```json
{"health":true}
{"health":"true"}
```

521
Makefile Normal file
View File

@ -0,0 +1,521 @@
# run from repository root
# Example:
# make build
# make clean
# make docker-clean
# make docker-start
# make docker-kill
# make docker-remove
.PHONY: build
build:
GO_BUILD_FLAGS="-v" ./build
./bin/etcd --version
ETCDCTL_API=3 ./bin/etcdctl version
clean:
rm -f ./codecov
rm -rf ./agent-*
rm -rf ./covdir
rm -f ./*.coverprofile
rm -f ./*.log
rm -f ./bin/Dockerfile-release
rm -rf ./bin/*.etcd
rm -rf ./default.etcd
rm -rf ./tests/e2e/default.etcd
rm -rf ./gopath
rm -rf ./gopath.proto
rm -rf ./release
rm -f ./snapshot/localhost:*
rm -f ./integration/127.0.0.1:* ./integration/localhost:*
rm -f ./clientv3/integration/127.0.0.1:* ./clientv3/integration/localhost:*
rm -f ./clientv3/ordering/127.0.0.1:* ./clientv3/ordering/localhost:*
docker-clean:
docker images
docker image prune --force
docker-start:
service docker restart
docker-kill:
docker kill `docker ps -q` || true
docker-remove:
docker rm --force `docker ps -a -q` || true
docker rmi --force `docker images -q` || true
# GO_VERSION ?= 1.10.3
GO_VERSION ?= 1.9.6
ETCD_VERSION ?= $(shell git rev-parse --short HEAD || echo "GitNotFound")
TEST_SUFFIX = $(shell date +%s | base64 | head -c 15)
TEST_OPTS ?= PASSES='unit'
TMP_DIR_MOUNT_FLAG = --mount type=tmpfs,destination=/tmp
ifdef HOST_TMP_DIR
TMP_DIR_MOUNT_FLAG = --mount type=bind,source=$(HOST_TMP_DIR),destination=/tmp
endif
# Example:
# GO_VERSION=1.8.7 make build-docker-test
# GO_VERSION=1.9.7 make build-docker-test
# make build-docker-test
#
# gcloud docker -- login -u _json_key -p "$(cat /etc/gcp-key-etcd-development.json)" https://gcr.io
# GO_VERSION=1.8.7 make push-docker-test
# GO_VERSION=1.9.7 make push-docker-test
# make push-docker-test
#
# gsutil -m acl ch -u allUsers:R -r gs://artifacts.etcd-development.appspot.com
# GO_VERSION=1.9.7 make pull-docker-test
# make pull-docker-test
build-docker-test:
$(info GO_VERSION: $(GO_VERSION))
@sed -i.bak 's|REPLACE_ME_GO_VERSION|$(GO_VERSION)|g' ./tests/Dockerfile
docker build \
--tag gcr.io/etcd-development/etcd-test:go$(GO_VERSION) \
--file ./tests/Dockerfile .
@mv ./tests/Dockerfile.bak ./tests/Dockerfile
push-docker-test:
$(info GO_VERSION: $(GO_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd-test:go$(GO_VERSION)
pull-docker-test:
$(info GO_VERSION: $(GO_VERSION))
docker pull gcr.io/etcd-development/etcd-test:go$(GO_VERSION)
# Example:
# make build-docker-test
# make compile-with-docker-test
# make compile-setup-gopath-with-docker-test
compile-with-docker-test:
$(info GO_VERSION: $(GO_VERSION))
docker run \
--rm \
--mount type=bind,source=`pwd`,destination=/go/src/github.com/coreos/etcd \
gcr.io/etcd-development/etcd-test:go$(GO_VERSION) \
/bin/bash -c "GO_BUILD_FLAGS=-v ./build && ./bin/etcd --version"
compile-setup-gopath-with-docker-test:
$(info GO_VERSION: $(GO_VERSION))
docker run \
--rm \
--mount type=bind,source=`pwd`,destination=/etcd \
gcr.io/etcd-development/etcd-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && ETCD_SETUP_GOPATH=1 GO_BUILD_FLAGS=-v ./build && ./bin/etcd --version && rm -rf ./gopath"
# Example:
#
# Local machine:
# TEST_OPTS="PASSES='fmt'" make test
# TEST_OPTS="PASSES='fmt bom dep build unit'" make test
# TEST_OPTS="PASSES='build unit release integration_e2e functional'" make test
# TEST_OPTS="PASSES='build grpcproxy'" make test
#
# Example (test with docker):
# make pull-docker-test
# TEST_OPTS="PASSES='fmt'" make docker-test
# TEST_OPTS="VERBOSE=2 PASSES='unit'" make docker-test
#
# Travis CI (test with docker):
# TEST_OPTS="PASSES='fmt bom dep build unit'" make docker-test
#
# Semaphore CI (test with docker):
# TEST_OPTS="PASSES='build unit release integration_e2e functional'" make docker-test
# HOST_TMP_DIR=/tmp TEST_OPTS="PASSES='build unit release integration_e2e functional'" make docker-test
# TEST_OPTS="GOARCH=386 PASSES='build unit integration_e2e'" make docker-test
#
# grpc-proxy tests (test with docker):
# TEST_OPTS="PASSES='build grpcproxy'" make docker-test
# HOST_TMP_DIR=/tmp TEST_OPTS="PASSES='build grpcproxy'" make docker-test
.PHONY: test
test:
$(info TEST_OPTS: $(TEST_OPTS))
$(info log-file: test-$(TEST_SUFFIX).log)
$(TEST_OPTS) ./test 2>&1 | tee test-$(TEST_SUFFIX).log
! egrep "(--- FAIL:|panic: test timed out|appears to have leaked)" -B50 -A10 test-$(TEST_SUFFIX).log
docker-test:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
$(info TEST_OPTS: $(TEST_OPTS))
$(info log-file: test-$(TEST_SUFFIX).log)
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`,destination=/go/src/github.com/coreos/etcd \
gcr.io/etcd-development/etcd-test:go$(GO_VERSION) \
/bin/bash -c "$(TEST_OPTS) ./test 2>&1 | tee test-$(TEST_SUFFIX).log"
! egrep "(--- FAIL:|panic: test timed out|appears to have leaked)" -B50 -A10 test-$(TEST_SUFFIX).log
docker-test-coverage:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
$(info log-file: docker-test-coverage-$(TEST_SUFFIX).log)
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`,destination=/go/src/github.com/coreos/etcd \
gcr.io/etcd-development/etcd-test:go$(GO_VERSION) \
/bin/bash -c "COVERDIR=covdir PASSES='build build_cov cov' ./test 2>&1 | tee docker-test-coverage-$(TEST_SUFFIX).log && /codecov -t 6040de41-c073-4d6f-bbf8-d89256ef31e1"
! egrep "(--- FAIL:|panic: test timed out|appears to have leaked)" -B50 -A10 docker-test-coverage-$(TEST_SUFFIX).log
# Example:
# make compile-with-docker-test
# ETCD_VERSION=v3-test make build-docker-release-master
# ETCD_VERSION=v3-test make push-docker-release-master
# gsutil -m acl ch -u allUsers:R -r gs://artifacts.etcd-development.appspot.com
build-docker-release-master:
$(info ETCD_VERSION: $(ETCD_VERSION))
cp ./Dockerfile-release ./bin/Dockerfile-release
docker build \
--tag gcr.io/etcd-development/etcd:$(ETCD_VERSION) \
--file ./bin/Dockerfile-release \
./bin
rm -f ./bin/Dockerfile-release
docker run \
--rm \
gcr.io/etcd-development/etcd:$(ETCD_VERSION) \
/bin/sh -c "/usr/local/bin/etcd --version && ETCDCTL_API=3 /usr/local/bin/etcdctl version"
push-docker-release-master:
$(info ETCD_VERSION: $(ETCD_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd:$(ETCD_VERSION)
# Example:
# make build-docker-test
# make compile-with-docker-test
# make build-docker-static-ip-test
#
# gcloud docker -- login -u _json_key -p "$(cat /etc/gcp-key-etcd-development.json)" https://gcr.io
# make push-docker-static-ip-test
#
# gsutil -m acl ch -u allUsers:R -r gs://artifacts.etcd-development.appspot.com
# make pull-docker-static-ip-test
#
# make docker-static-ip-test-certs-run
# make docker-static-ip-test-certs-metrics-proxy-run
build-docker-static-ip-test:
$(info GO_VERSION: $(GO_VERSION))
@sed -i.bak 's|REPLACE_ME_GO_VERSION|$(GO_VERSION)|g' ./tests/docker-static-ip/Dockerfile
docker build \
--tag gcr.io/etcd-development/etcd-static-ip-test:go$(GO_VERSION) \
--file ./tests/docker-static-ip/Dockerfile \
./tests/docker-static-ip
@mv ./tests/docker-static-ip/Dockerfile.bak ./tests/docker-static-ip/Dockerfile
push-docker-static-ip-test:
$(info GO_VERSION: $(GO_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd-static-ip-test:go$(GO_VERSION)
pull-docker-static-ip-test:
$(info GO_VERSION: $(GO_VERSION))
docker pull gcr.io/etcd-development/etcd-static-ip-test:go$(GO_VERSION)
docker-static-ip-test-certs-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-static-ip/certs,destination=/certs \
gcr.io/etcd-development/etcd-static-ip-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs/run.sh && rm -rf m*.etcd"
docker-static-ip-test-certs-metrics-proxy-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-static-ip/certs-metrics-proxy,destination=/certs-metrics-proxy \
gcr.io/etcd-development/etcd-static-ip-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-metrics-proxy/run.sh && rm -rf m*.etcd"
# Example:
# make build-docker-test
# make compile-with-docker-test
# make build-docker-dns-test
#
# gcloud docker -- login -u _json_key -p "$(cat /etc/gcp-key-etcd-development.json)" https://gcr.io
# make push-docker-dns-test
#
# gsutil -m acl ch -u allUsers:R -r gs://artifacts.etcd-development.appspot.com
# make pull-docker-dns-test
#
# make docker-dns-test-insecure-run
# make docker-dns-test-certs-run
# make docker-dns-test-certs-gateway-run
# make docker-dns-test-certs-wildcard-run
# make docker-dns-test-certs-common-name-auth-run
# make docker-dns-test-certs-common-name-multi-run
build-docker-dns-test:
$(info GO_VERSION: $(GO_VERSION))
@sed -i.bak 's|REPLACE_ME_GO_VERSION|$(GO_VERSION)|g' ./tests/docker-dns/Dockerfile
docker build \
--tag gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
--file ./tests/docker-dns/Dockerfile \
./tests/docker-dns
@mv ./tests/docker-dns/Dockerfile.bak ./tests/docker-dns/Dockerfile
docker run \
--rm \
--dns 127.0.0.1 \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "/etc/init.d/bind9 start && cat /dev/null >/etc/hosts && dig etcd.local"
push-docker-dns-test:
$(info GO_VERSION: $(GO_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION)
pull-docker-dns-test:
$(info GO_VERSION: $(GO_VERSION))
docker pull gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION)
docker-dns-test-insecure-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns/insecure,destination=/insecure \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /insecure/run.sh && rm -rf m*.etcd"
docker-dns-test-certs-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns/certs,destination=/certs \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs/run.sh && rm -rf m*.etcd"
docker-dns-test-certs-gateway-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns/certs-gateway,destination=/certs-gateway \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-gateway/run.sh && rm -rf m*.etcd"
docker-dns-test-certs-wildcard-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns/certs-wildcard,destination=/certs-wildcard \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-wildcard/run.sh && rm -rf m*.etcd"
docker-dns-test-certs-common-name-auth-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns/certs-common-name-auth,destination=/certs-common-name-auth \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-common-name-auth/run.sh && rm -rf m*.etcd"
docker-dns-test-certs-common-name-multi-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns/certs-common-name-multi,destination=/certs-common-name-multi \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-common-name-multi/run.sh && rm -rf m*.etcd"
# Example:
# make build-docker-test
# make compile-with-docker-test
# make build-docker-dns-srv-test
# gcloud docker -- login -u _json_key -p "$(cat /etc/gcp-key-etcd-development.json)" https://gcr.io
# make push-docker-dns-srv-test
# gsutil -m acl ch -u allUsers:R -r gs://artifacts.etcd-development.appspot.com
# make pull-docker-dns-srv-test
# make docker-dns-srv-test-certs-run
# make docker-dns-srv-test-certs-gateway-run
# make docker-dns-srv-test-certs-wildcard-run
build-docker-dns-srv-test:
$(info GO_VERSION: $(GO_VERSION))
@sed -i.bak 's|REPLACE_ME_GO_VERSION|$(GO_VERSION)|g' ./tests/docker-dns-srv/Dockerfile
docker build \
--tag gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION) \
--file ./tests/docker-dns-srv/Dockerfile \
./tests/docker-dns-srv
@mv ./tests/docker-dns-srv/Dockerfile.bak ./tests/docker-dns-srv/Dockerfile
docker run \
--rm \
--dns 127.0.0.1 \
gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION) \
/bin/bash -c "/etc/init.d/bind9 start && cat /dev/null >/etc/hosts && dig +noall +answer SRV _etcd-client-ssl._tcp.etcd.local && dig +noall +answer SRV _etcd-server-ssl._tcp.etcd.local && dig +noall +answer m1.etcd.local m2.etcd.local m3.etcd.local"
push-docker-dns-srv-test:
$(info GO_VERSION: $(GO_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION)
pull-docker-dns-srv-test:
$(info GO_VERSION: $(GO_VERSION))
docker pull gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION)
docker-dns-srv-test-certs-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns-srv/certs,destination=/certs \
gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs/run.sh && rm -rf m*.etcd"
docker-dns-srv-test-certs-gateway-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns-srv/certs-gateway,destination=/certs-gateway \
gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-gateway/run.sh && rm -rf m*.etcd"
docker-dns-srv-test-certs-wildcard-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns-srv/certs-wildcard,destination=/certs-wildcard \
gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-wildcard/run.sh && rm -rf m*.etcd"
# Example:
# make build-functional
# make build-docker-functional
# make push-docker-functional
# make pull-docker-functional
build-functional:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
./functional/build
./bin/etcd-agent -help || true && \
./bin/etcd-proxy -help || true && \
./bin/etcd-runner --help || true && \
./bin/etcd-tester -help || true
build-docker-functional:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
@sed -i.bak 's|REPLACE_ME_GO_VERSION|$(GO_VERSION)|g' ./functional/Dockerfile
docker build \
--tag gcr.io/etcd-development/etcd-functional:go$(GO_VERSION) \
--file ./functional/Dockerfile \
.
@mv ./functional/Dockerfile.bak ./functional/Dockerfile
docker run \
--rm \
gcr.io/etcd-development/etcd-functional:go$(GO_VERSION) \
/bin/bash -c "./bin/etcd --version && \
./bin/etcd-failpoints --version && \
ETCDCTL_API=3 ./bin/etcdctl version && \
./bin/etcd-agent -help || true && \
./bin/etcd-proxy -help || true && \
./bin/etcd-runner --help || true && \
./bin/etcd-tester -help || true && \
./bin/benchmark --help || true"
push-docker-functional:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd-functional:go$(GO_VERSION)
pull-docker-functional:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
docker pull gcr.io/etcd-development/etcd-functional:go$(GO_VERSION)

View File

@ -36,6 +36,14 @@ See [etcdctl][etcdctl] for a simple command line client.
[etcdctl]: https://github.com/coreos/etcd/tree/master/etcdctl
[etcd-tests]: http://dash.etcd.io
## Community meetings
etcd contributors and maintainers have bi-weekly meetings at 11:00 AM (USA Pacific) on Tuesdays. There is an [iCalendar][rfc5545] format for the meetings [here](meeting.ics). Anyone is welcome to join via [Zoom][zoom] or audio-only: +1 669 900 6833. An initial agenda will be posted to the [shared Google docs][shared-meeting-notes] a day before each meeting, and everyone is welcome to suggest additional topics or other agendas.
[rfc5545]: https://tools.ietf.org/html/rfc5545
[zoom]: https://coreos.zoom.us/j/854793406
[shared-meeting-notes]: https://docs.google.com/document/d/1DbVXOHvd9scFsSmL2oNg4YGOHJdXqtx583DmeVWrB_M/edit#
## Getting started
### Getting etcd
@ -51,7 +59,22 @@ For those wanting to try the very latest version, [build the latest version of e
### Running etcd
First start a single-member cluster of etcd:
First start a single-member cluster of etcd.
If etcd is installed using the [pre-built release binaries][github-release], run it from the installation location as below:
```sh
/tmp/etcd-download-test/etcd
```
The etcd command can be run directly if the binary is moved onto the system path as below:
```sh
mv /tmp/etcd-download-test/etcd /usr/local/bin/
etcd
```
If etcd is [built from the master branch][dl-build], run it as below:
```sh
./bin/etcd

View File

@ -1,6 +1,5 @@
// Code generated by protoc-gen-gogo.
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: auth.proto
// DO NOT EDIT!
/*
Package authpb is a generated protocol buffer package.
@ -22,6 +21,8 @@ import (
math "math"
_ "github.com/gogo/protobuf/gogoproto"
io "io"
)
@ -217,24 +218,6 @@ func (m *Role) MarshalTo(dAtA []byte) (int, error) {
return i, nil
}
func encodeFixed64Auth(dAtA []byte, offset int, v uint64) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
dAtA[offset+4] = uint8(v >> 32)
dAtA[offset+5] = uint8(v >> 40)
dAtA[offset+6] = uint8(v >> 48)
dAtA[offset+7] = uint8(v >> 56)
return offset + 8
}
func encodeFixed32Auth(dAtA []byte, offset int, v uint32) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
return offset + 4
}
func encodeVarintAuth(dAtA []byte, offset int, v uint64) int {
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)

View File

@ -16,6 +16,7 @@ package auth
import (
"context"
"fmt"
"testing"
)
@ -92,3 +93,8 @@ func TestJWTBad(t *testing.T) {
}
opts["priv-key"] = jwtPrivKey
}
// testJWTOpts is useful for passing to NewTokenProvider which requires a string.
func testJWTOpts() string {
return fmt.Sprintf("%s,pub-key=%s,priv-key=%s,sign-method=RS256", tokenTypeJWT, jwtPubKey, jwtPrivKey)
}

35
auth/nop.go Normal file
View File

@ -0,0 +1,35 @@
// Copyright 2018 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package auth
import (
"context"
)
type tokenNop struct{}
func (t *tokenNop) enable() {}
func (t *tokenNop) disable() {}
func (t *tokenNop) invalidateUser(string) {}
func (t *tokenNop) genTokenPrefix() (string, error) { return "", nil }
func (t *tokenNop) info(ctx context.Context, token string, rev uint64) (*AuthInfo, bool) {
return nil, false
}
func (t *tokenNop) assign(ctx context.Context, username string, revision uint64) (string, error) {
return "", ErrAuthFailed
}
func newTokenProviderNop() (*tokenNop, error) {
return &tokenNop{}, nil
}

View File

@ -73,6 +73,9 @@ const (
rootUser = "root"
rootRole = "root"
tokenTypeSimple = "simple"
tokenTypeJWT = "jwt"
revBytesLen = 8
)
@ -458,7 +461,7 @@ func (as *authStore) UserGrantRole(r *pb.AuthUserGrantRoleRequest) (*pb.AuthUser
}
user.Roles = append(user.Roles, r.Role)
sort.Sort(sort.StringSlice(user.Roles))
sort.Strings(user.Roles)
putUser(tx, user)
@ -1050,11 +1053,15 @@ func NewTokenProvider(tokenOpts string, indexWaiter func(uint64) <-chan struct{}
}
switch tokenType {
case "simple":
case tokenTypeSimple:
plog.Warningf("simple token is not cryptographically signed")
return newTokenProviderSimple(indexWaiter), nil
case "jwt":
case tokenTypeJWT:
return newTokenProviderJWT(typeSpecificOpts)
case "":
return newTokenProviderNop()
default:
plog.Errorf("unknown token type: %s", tokenType)
return nil, ErrInvalidAuthOpts
@ -1067,7 +1074,7 @@ func (as *authStore) WithRoot(ctx context.Context) context.Context {
}
var ctxForAssign context.Context
if ts := as.tokenProvider.(*tokenSimple); ts != nil {
if ts, ok := as.tokenProvider.(*tokenSimple); ok && ts != nil {
ctx1 := context.WithValue(ctx, AuthenticateParamIndex{}, uint64(0))
prefix, err := ts.genTokenPrefix()
if err != nil {
@ -1090,7 +1097,9 @@ func (as *authStore) WithRoot(ctx context.Context) context.Context {
"token": token,
}
tokenMD := metadata.New(mdMap)
return metadata.NewOutgoingContext(ctx, tokenMD)
// use "mdIncomingKey{}" since it's called from local etcdserver
return metadata.NewIncomingContext(ctx, tokenMD)
}
func (as *authStore) HasRole(user, role string) bool {

View File

@ -48,7 +48,7 @@ func TestNewAuthStoreRevision(t *testing.T) {
b, tPath := backend.NewDefaultTmpBackend()
defer os.Remove(tPath)
tp, err := NewTokenProvider("simple", dummyIndexWaiter)
tp, err := NewTokenProvider(tokenTypeSimple, dummyIndexWaiter)
if err != nil {
t.Fatal(err)
}
@ -76,7 +76,7 @@ func TestNewAuthStoreRevision(t *testing.T) {
func setupAuthStore(t *testing.T) (store *authStore, teardownfunc func(t *testing.T)) {
b, tPath := backend.NewDefaultTmpBackend()
tp, err := NewTokenProvider("simple", dummyIndexWaiter)
tp, err := NewTokenProvider(tokenTypeSimple, dummyIndexWaiter)
if err != nil {
t.Fatal(err)
}
@ -513,7 +513,7 @@ func TestAuthInfoFromCtxRace(t *testing.T) {
b, tPath := backend.NewDefaultTmpBackend()
defer os.Remove(tPath)
tp, err := NewTokenProvider("simple", dummyIndexWaiter)
tp, err := NewTokenProvider(tokenTypeSimple, dummyIndexWaiter)
if err != nil {
t.Fatal(err)
}
@ -579,7 +579,7 @@ func TestRecoverFromSnapshot(t *testing.T) {
as.Close()
tp, err := NewTokenProvider("simple", dummyIndexWaiter)
tp, err := NewTokenProvider(tokenTypeSimple, dummyIndexWaiter)
if err != nil {
t.Fatal(err)
}
@ -661,7 +661,7 @@ func TestRolesOrder(t *testing.T) {
b, tPath := backend.NewDefaultTmpBackend()
defer os.Remove(tPath)
tp, err := NewTokenProvider("simple", dummyIndexWaiter)
tp, err := NewTokenProvider(tokenTypeSimple, dummyIndexWaiter)
if err != nil {
t.Fatal(err)
}
@ -701,3 +701,43 @@ func TestRolesOrder(t *testing.T) {
}
}
}
func TestAuthInfoFromCtxWithRootSimple(t *testing.T) {
testAuthInfoFromCtxWithRoot(t, tokenTypeSimple)
}
func TestAuthInfoFromCtxWithRootJWT(t *testing.T) {
opts := testJWTOpts()
testAuthInfoFromCtxWithRoot(t, opts)
}
// testAuthInfoFromCtxWithRoot ensures "WithRoot" properly embeds token in the context.
func testAuthInfoFromCtxWithRoot(t *testing.T, opts string) {
b, tPath := backend.NewDefaultTmpBackend()
defer os.Remove(tPath)
tp, err := NewTokenProvider(opts, dummyIndexWaiter)
if err != nil {
t.Fatal(err)
}
as := NewAuthStore(b, tp)
defer as.Close()
if err = enableAuthAndCreateRoot(as); err != nil {
t.Fatal(err)
}
ctx := context.Background()
ctx = as.WithRoot(ctx)
ai, aerr := as.AuthInfoFromCtx(ctx)
if aerr != nil {
t.Error(err)
}
if ai == nil {
t.Error("expected non-nil *AuthInfo")
}
if ai.Username != "root" {
t.Errorf("expected user name 'root', got %+v", ai)
}
}

View File

@ -108,7 +108,7 @@
]
},
{
"project": "github.com/gogo/protobuf/proto",
"project": "github.com/gogo/protobuf",
"licenses": [
{
"type": "BSD 3-clause \"New\" or \"Revised\" License",

View File

@ -101,60 +101,65 @@ type Auth interface {
}
type auth struct {
remote pb.AuthClient
remote pb.AuthClient
callOpts []grpc.CallOption
}
func NewAuth(c *Client) Auth {
return &auth{remote: RetryAuthClient(c)}
api := &auth{remote: RetryAuthClient(c)}
if c != nil {
api.callOpts = c.callOpts
}
return api
}
func (auth *auth) AuthEnable(ctx context.Context) (*AuthEnableResponse, error) {
resp, err := auth.remote.AuthEnable(ctx, &pb.AuthEnableRequest{})
resp, err := auth.remote.AuthEnable(ctx, &pb.AuthEnableRequest{}, auth.callOpts...)
return (*AuthEnableResponse)(resp), toErr(ctx, err)
}
func (auth *auth) AuthDisable(ctx context.Context) (*AuthDisableResponse, error) {
resp, err := auth.remote.AuthDisable(ctx, &pb.AuthDisableRequest{})
resp, err := auth.remote.AuthDisable(ctx, &pb.AuthDisableRequest{}, auth.callOpts...)
return (*AuthDisableResponse)(resp), toErr(ctx, err)
}
func (auth *auth) UserAdd(ctx context.Context, name string, password string) (*AuthUserAddResponse, error) {
resp, err := auth.remote.UserAdd(ctx, &pb.AuthUserAddRequest{Name: name, Password: password})
resp, err := auth.remote.UserAdd(ctx, &pb.AuthUserAddRequest{Name: name, Password: password}, auth.callOpts...)
return (*AuthUserAddResponse)(resp), toErr(ctx, err)
}
func (auth *auth) UserDelete(ctx context.Context, name string) (*AuthUserDeleteResponse, error) {
resp, err := auth.remote.UserDelete(ctx, &pb.AuthUserDeleteRequest{Name: name})
resp, err := auth.remote.UserDelete(ctx, &pb.AuthUserDeleteRequest{Name: name}, auth.callOpts...)
return (*AuthUserDeleteResponse)(resp), toErr(ctx, err)
}
func (auth *auth) UserChangePassword(ctx context.Context, name string, password string) (*AuthUserChangePasswordResponse, error) {
resp, err := auth.remote.UserChangePassword(ctx, &pb.AuthUserChangePasswordRequest{Name: name, Password: password})
resp, err := auth.remote.UserChangePassword(ctx, &pb.AuthUserChangePasswordRequest{Name: name, Password: password}, auth.callOpts...)
return (*AuthUserChangePasswordResponse)(resp), toErr(ctx, err)
}
func (auth *auth) UserGrantRole(ctx context.Context, user string, role string) (*AuthUserGrantRoleResponse, error) {
resp, err := auth.remote.UserGrantRole(ctx, &pb.AuthUserGrantRoleRequest{User: user, Role: role})
resp, err := auth.remote.UserGrantRole(ctx, &pb.AuthUserGrantRoleRequest{User: user, Role: role}, auth.callOpts...)
return (*AuthUserGrantRoleResponse)(resp), toErr(ctx, err)
}
func (auth *auth) UserGet(ctx context.Context, name string) (*AuthUserGetResponse, error) {
resp, err := auth.remote.UserGet(ctx, &pb.AuthUserGetRequest{Name: name})
resp, err := auth.remote.UserGet(ctx, &pb.AuthUserGetRequest{Name: name}, auth.callOpts...)
return (*AuthUserGetResponse)(resp), toErr(ctx, err)
}
func (auth *auth) UserList(ctx context.Context) (*AuthUserListResponse, error) {
resp, err := auth.remote.UserList(ctx, &pb.AuthUserListRequest{})
resp, err := auth.remote.UserList(ctx, &pb.AuthUserListRequest{}, auth.callOpts...)
return (*AuthUserListResponse)(resp), toErr(ctx, err)
}
func (auth *auth) UserRevokeRole(ctx context.Context, name string, role string) (*AuthUserRevokeRoleResponse, error) {
resp, err := auth.remote.UserRevokeRole(ctx, &pb.AuthUserRevokeRoleRequest{Name: name, Role: role})
resp, err := auth.remote.UserRevokeRole(ctx, &pb.AuthUserRevokeRoleRequest{Name: name, Role: role}, auth.callOpts...)
return (*AuthUserRevokeRoleResponse)(resp), toErr(ctx, err)
}
func (auth *auth) RoleAdd(ctx context.Context, name string) (*AuthRoleAddResponse, error) {
resp, err := auth.remote.RoleAdd(ctx, &pb.AuthRoleAddRequest{Name: name})
resp, err := auth.remote.RoleAdd(ctx, &pb.AuthRoleAddRequest{Name: name}, auth.callOpts...)
return (*AuthRoleAddResponse)(resp), toErr(ctx, err)
}
@ -164,27 +169,27 @@ func (auth *auth) RoleGrantPermission(ctx context.Context, name string, key, ran
RangeEnd: []byte(rangeEnd),
PermType: authpb.Permission_Type(permType),
}
resp, err := auth.remote.RoleGrantPermission(ctx, &pb.AuthRoleGrantPermissionRequest{Name: name, Perm: perm})
resp, err := auth.remote.RoleGrantPermission(ctx, &pb.AuthRoleGrantPermissionRequest{Name: name, Perm: perm}, auth.callOpts...)
return (*AuthRoleGrantPermissionResponse)(resp), toErr(ctx, err)
}
func (auth *auth) RoleGet(ctx context.Context, role string) (*AuthRoleGetResponse, error) {
resp, err := auth.remote.RoleGet(ctx, &pb.AuthRoleGetRequest{Role: role})
resp, err := auth.remote.RoleGet(ctx, &pb.AuthRoleGetRequest{Role: role}, auth.callOpts...)
return (*AuthRoleGetResponse)(resp), toErr(ctx, err)
}
func (auth *auth) RoleList(ctx context.Context) (*AuthRoleListResponse, error) {
resp, err := auth.remote.RoleList(ctx, &pb.AuthRoleListRequest{})
resp, err := auth.remote.RoleList(ctx, &pb.AuthRoleListRequest{}, auth.callOpts...)
return (*AuthRoleListResponse)(resp), toErr(ctx, err)
}
func (auth *auth) RoleRevokePermission(ctx context.Context, role string, key, rangeEnd string) (*AuthRoleRevokePermissionResponse, error) {
resp, err := auth.remote.RoleRevokePermission(ctx, &pb.AuthRoleRevokePermissionRequest{Role: role, Key: key, RangeEnd: rangeEnd})
resp, err := auth.remote.RoleRevokePermission(ctx, &pb.AuthRoleRevokePermissionRequest{Role: role, Key: key, RangeEnd: rangeEnd}, auth.callOpts...)
return (*AuthRoleRevokePermissionResponse)(resp), toErr(ctx, err)
}
func (auth *auth) RoleDelete(ctx context.Context, role string) (*AuthRoleDeleteResponse, error) {
resp, err := auth.remote.RoleDelete(ctx, &pb.AuthRoleDeleteRequest{Role: role})
resp, err := auth.remote.RoleDelete(ctx, &pb.AuthRoleDeleteRequest{Role: role}, auth.callOpts...)
return (*AuthRoleDeleteResponse)(resp), toErr(ctx, err)
}
@ -197,12 +202,13 @@ func StrToPermissionType(s string) (PermissionType, error) {
}
type authenticator struct {
conn *grpc.ClientConn // conn in-use
remote pb.AuthClient
conn *grpc.ClientConn // conn in-use
remote pb.AuthClient
callOpts []grpc.CallOption
}
func (auth *authenticator) authenticate(ctx context.Context, name string, password string) (*AuthenticateResponse, error) {
resp, err := auth.remote.Authenticate(ctx, &pb.AuthenticateRequest{Name: name, Password: password})
resp, err := auth.remote.Authenticate(ctx, &pb.AuthenticateRequest{Name: name, Password: password}, auth.callOpts...)
return (*AuthenticateResponse)(resp), toErr(ctx, err)
}
@ -210,14 +216,18 @@ func (auth *authenticator) close() {
auth.conn.Close()
}
func newAuthenticator(endpoint string, opts []grpc.DialOption) (*authenticator, error) {
func newAuthenticator(endpoint string, opts []grpc.DialOption, c *Client) (*authenticator, error) {
conn, err := grpc.Dial(endpoint, opts...)
if err != nil {
return nil, err
}
return &authenticator{
api := &authenticator{
conn: conn,
remote: pb.NewAuthClient(conn),
}, nil
}
if c != nil {
api.callOpts = c.callOpts
}
return api, nil
}

View File

@ -56,7 +56,7 @@ type Client struct {
cfg Config
creds *credentials.TransportCredentials
balancer *healthBalancer
mu sync.Mutex
mu *sync.Mutex
ctx context.Context
cancel context.CancelFunc
@ -67,6 +67,8 @@ type Client struct {
Password string
// tokenCred is an instance of WithPerRPCCredentials()'s argument
tokenCred *authTokenCredential
callOpts []grpc.CallOption
}
// New creates a new etcdv3 client from a given configuration.
@ -295,7 +297,7 @@ func (c *Client) getToken(ctx context.Context) error {
endpoint := c.cfg.Endpoints[i]
host := getHost(endpoint)
// use dial options without dopts to avoid reusing the client balancer
auth, err = newAuthenticator(host, c.dialSetupOpts(endpoint))
auth, err = newAuthenticator(host, c.dialSetupOpts(endpoint), c)
if err != nil {
continue
}
@ -385,11 +387,30 @@ func newClient(cfg *Config) (*Client, error) {
creds: creds,
ctx: ctx,
cancel: cancel,
mu: new(sync.Mutex),
callOpts: defaultCallOpts,
}
if cfg.Username != "" && cfg.Password != "" {
client.Username = cfg.Username
client.Password = cfg.Password
}
if cfg.MaxCallSendMsgSize > 0 || cfg.MaxCallRecvMsgSize > 0 {
if cfg.MaxCallRecvMsgSize > 0 && cfg.MaxCallSendMsgSize > cfg.MaxCallRecvMsgSize {
return nil, fmt.Errorf("gRPC message recv limit (%d bytes) must be greater than send limit (%d bytes)", cfg.MaxCallRecvMsgSize, cfg.MaxCallSendMsgSize)
}
callOpts := []grpc.CallOption{
defaultFailFast,
defaultMaxCallSendMsgSize,
defaultMaxCallRecvMsgSize,
}
if cfg.MaxCallSendMsgSize > 0 {
callOpts[1] = grpc.MaxCallSendMsgSize(cfg.MaxCallSendMsgSize)
}
if cfg.MaxCallRecvMsgSize > 0 {
callOpts[2] = grpc.MaxCallRecvMsgSize(cfg.MaxCallRecvMsgSize)
}
client.callOpts = callOpts
}
client.balancer = newHealthBalancer(cfg.Endpoints, cfg.DialTimeout, func(ep string) (bool, error) {
return grpcHealthCheck(client, ep)
@ -508,6 +529,20 @@ func isHaltErr(ctx context.Context, err error) bool {
return ev.Code() != codes.Unavailable && ev.Code() != codes.Internal
}
// isUnavailableErr returns true if the given error is an unavailable error
func isUnavailableErr(ctx context.Context, err error) bool {
if ctx != nil && ctx.Err() != nil {
return false
}
if err == nil {
return false
}
ev, _ := status.FromError(err)
// Unavailable codes mean the system will be right back.
// (e.g., can't connect, lost leader)
return ev.Code() == codes.Unavailable
}
func toErr(ctx context.Context, err error) error {
if err == nil {
return nil

View File

@ -18,6 +18,9 @@ import (
"context"
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"github.com/coreos/etcd/pkg/types"
"google.golang.org/grpc"
)
type (
@ -43,20 +46,34 @@ type Cluster interface {
}
type cluster struct {
remote pb.ClusterClient
remote pb.ClusterClient
callOpts []grpc.CallOption
}
func NewCluster(c *Client) Cluster {
return &cluster{remote: RetryClusterClient(c)}
api := &cluster{remote: RetryClusterClient(c)}
if c != nil {
api.callOpts = c.callOpts
}
return api
}
func NewClusterFromClusterClient(remote pb.ClusterClient) Cluster {
return &cluster{remote: remote}
func NewClusterFromClusterClient(remote pb.ClusterClient, c *Client) Cluster {
api := &cluster{remote: remote}
if c != nil {
api.callOpts = c.callOpts
}
return api
}
func (c *cluster) MemberAdd(ctx context.Context, peerAddrs []string) (*MemberAddResponse, error) {
// fail-fast before panic in rafthttp
if _, err := types.NewURLs(peerAddrs); err != nil {
return nil, err
}
r := &pb.MemberAddRequest{PeerURLs: peerAddrs}
resp, err := c.remote.MemberAdd(ctx, r)
resp, err := c.remote.MemberAdd(ctx, r, c.callOpts...)
if err != nil {
return nil, toErr(ctx, err)
}
@ -65,7 +82,7 @@ func (c *cluster) MemberAdd(ctx context.Context, peerAddrs []string) (*MemberAdd
func (c *cluster) MemberRemove(ctx context.Context, id uint64) (*MemberRemoveResponse, error) {
r := &pb.MemberRemoveRequest{ID: id}
resp, err := c.remote.MemberRemove(ctx, r)
resp, err := c.remote.MemberRemove(ctx, r, c.callOpts...)
if err != nil {
return nil, toErr(ctx, err)
}
@ -73,9 +90,14 @@ func (c *cluster) MemberRemove(ctx context.Context, id uint64) (*MemberRemoveRes
}
func (c *cluster) MemberUpdate(ctx context.Context, id uint64, peerAddrs []string) (*MemberUpdateResponse, error) {
// fail-fast before panic in rafthttp
if _, err := types.NewURLs(peerAddrs); err != nil {
return nil, err
}
// it is safe to retry on update.
r := &pb.MemberUpdateRequest{ID: id, PeerURLs: peerAddrs}
resp, err := c.remote.MemberUpdate(ctx, r)
resp, err := c.remote.MemberUpdate(ctx, r, c.callOpts...)
if err == nil {
return (*MemberUpdateResponse)(resp), nil
}
@ -84,7 +106,7 @@ func (c *cluster) MemberUpdate(ctx context.Context, id uint64, peerAddrs []strin
func (c *cluster) MemberList(ctx context.Context) (*MemberListResponse, error) {
// it is safe to retry on list.
resp, err := c.remote.MemberList(ctx, &pb.MemberListRequest{})
resp, err := c.remote.MemberList(ctx, &pb.MemberListRequest{}, c.callOpts...)
if err == nil {
return (*MemberListResponse)(resp), nil
}
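
A minimal usage sketch of the member API with the new client-side URL validation: a well-formed peer URL (scheme plus host:port) goes through, while the malformed forms listed in the tests further below fail before reaching rafthttp. The endpoint address and peer URL here are placeholders.

```go
package example

import (
	"context"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func addMember() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// scheme and port are required, e.g. "http://10.0.0.4:2380"
	resp, err := cli.MemberAdd(ctx, []string{"http://10.0.0.4:2380"})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("added member %x", resp.Member.ID)
}
```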


@ -33,14 +33,27 @@ type Config struct {
// DialTimeout is the timeout for failing to establish a connection.
DialTimeout time.Duration `json:"dial-timeout"`
// DialKeepAliveTime is the time in seconds after which client pings the server to see if
// DialKeepAliveTime is the time after which client pings the server to see if
// transport is alive.
DialKeepAliveTime time.Duration `json:"dial-keep-alive-time"`
// DialKeepAliveTimeout is the time in seconds that the client waits for a response for the
// keep-alive probe. If the response is not received in this time, the connection is closed.
// DialKeepAliveTimeout is the time that the client waits for a response for the
// keep-alive probe. If the response is not received in this time, the connection is closed.
DialKeepAliveTimeout time.Duration `json:"dial-keep-alive-timeout"`
// MaxCallSendMsgSize is the client-side request send limit in bytes.
// If 0, it defaults to 2.0 MiB (2 * 1024 * 1024).
// Make sure that "MaxCallSendMsgSize" < server-side default send/recv limit.
// ("--max-request-bytes" flag to etcd or "embed.Config.MaxRequestBytes").
MaxCallSendMsgSize int
// MaxCallRecvMsgSize is the client-side response receive limit.
// If 0, it defaults to "math.MaxInt32", because range response can
// easily exceed request send limits.
// Make sure that "MaxCallRecvMsgSize" >= server-side default send/recv limit.
// ("--max-request-bytes" flag to etcd or "embed.Config.MaxRequestBytes").
MaxCallRecvMsgSize int
// TLS holds the client secure credentials, if any.
TLS *tls.Config
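
A minimal sketch of a client configured with the new message-size fields, assuming a local etcd whose "--max-request-bytes" is at least as large as the send limit; all sizes and timeouts are illustrative.

```go
package example

import (
	"time"

	"github.com/coreos/etcd/clientv3"
)

func newClientWithLimits() (*clientv3.Client, error) {
	return clientv3.New(clientv3.Config{
		Endpoints:            []string{"localhost:2379"},
		DialTimeout:          5 * time.Second,
		DialKeepAliveTime:    10 * time.Second, // ping the server every 10s
		DialKeepAliveTimeout: 3 * time.Second,  // drop the connection if no reply within 3s
		MaxCallSendMsgSize:   2 * 1024 * 1024,  // keep below the server's request limit
		MaxCallRecvMsgSize:   16 * 1024 * 1024, // large enough for big range responses
	})
}
```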


@ -16,6 +16,22 @@
//
// Create client using `clientv3.New`:
//
// // expect dial time-out on ipv4 blackhole
// _, err := clientv3.New(clientv3.Config{
// Endpoints: []string{"http://254.0.0.1:12345"},
// DialTimeout: 2 * time.Second
// })
//
// // etcd clientv3 >= v3.2.10, grpc/grpc-go >= v1.7.3
// if err == context.DeadlineExceeded {
// // handle errors
// }
//
// // etcd clientv3 <= v3.2.9, grpc/grpc-go <= v1.2.1
// if err == grpc.ErrClientConnTimeout {
// // handle errors
// }
//
// cli, err := clientv3.New(clientv3.Config{
// Endpoints: []string{"localhost:2379", "localhost:22379", "localhost:32379"},
// DialTimeout: 5 * time.Second,
@ -41,10 +57,11 @@
// The Client has internal state (watchers and leases), so Clients should be reused instead of created as needed.
// Clients are safe for concurrent use by multiple goroutines.
//
// etcd client returns 2 types of errors:
// etcd client returns 3 types of errors:
//
// 1. context error: canceled or deadline exceeded.
// 2. gRPC error: see https://github.com/coreos/etcd/blob/master/etcdserver/api/v3rpc/rpctypes/error.go
// 1. context error: canceled or deadline exceeded.
// 2. gRPC status error: e.g. when server-side clock drift causes the server-side deadline to be exceeded before the client's context deadline.
// 3. gRPC error: see https://github.com/coreos/etcd/blob/master/etcdserver/api/v3rpc/rpctypes/error.go
//
// Here is the example code to handle client errors:
//
@ -54,6 +71,12 @@
// // ctx is canceled by another routine
// } else if err == context.DeadlineExceeded {
// // ctx is attached with a deadline and it exceeded
// } else if ev, ok := status.FromError(err); ok {
// code := ev.Code()
// if code == codes.DeadlineExceeded {
// // server-side context might have timed-out first (due to clock skew)
// // while original client-side context is not timed-out yet
// }
// } else if verr, ok := err.(*v3rpc.ErrEmptyKey); ok {
// // process (verr.Errors)
// } else {
@ -61,4 +84,14 @@
// }
// }
//
// go func() { cli.Close() }()
// _, err := kvc.Get(ctx, "a")
// if err != nil {
// if err == context.Canceled {
// // grpc balancer calls 'Get' with an inflight client.Close
// } else if err == grpc.ErrClientConnClosing {
// // grpc balancer calls 'Get' after client.Close.
// }
// }
//
package clientv3


@ -158,34 +158,26 @@ func (b *healthBalancer) pinned() string {
func (b *healthBalancer) hostPortError(hostPort string, err error) {
if b.endpoint(hostPort) == "" {
if logger.V(4) {
logger.Infof("clientv3/balancer: %q is stale (skip marking as unhealthy on %q)", hostPort, err.Error())
}
logger.Lvl(4).Infof("clientv3/balancer: %q is stale (skip marking as unhealthy on %q)", hostPort, err.Error())
return
}
b.unhealthyMu.Lock()
b.unhealthyHostPorts[hostPort] = time.Now()
b.unhealthyMu.Unlock()
if logger.V(4) {
logger.Infof("clientv3/balancer: %q is marked unhealthy (%q)", hostPort, err.Error())
}
logger.Lvl(4).Infof("clientv3/balancer: %q is marked unhealthy (%q)", hostPort, err.Error())
}
func (b *healthBalancer) removeUnhealthy(hostPort, msg string) {
if b.endpoint(hostPort) == "" {
if logger.V(4) {
logger.Infof("clientv3/balancer: %q was not in unhealthy (%q)", hostPort, msg)
}
logger.Lvl(4).Infof("clientv3/balancer: %q was not in unhealthy (%q)", hostPort, msg)
return
}
b.unhealthyMu.Lock()
delete(b.unhealthyHostPorts, hostPort)
b.unhealthyMu.Unlock()
if logger.V(4) {
logger.Infof("clientv3/balancer: %q is removed from unhealthy (%q)", hostPort, msg)
}
logger.Lvl(4).Infof("clientv3/balancer: %q is removed from unhealthy (%q)", hostPort, msg)
}
func (b *healthBalancer) countUnhealthy() (count int) {
@ -207,9 +199,7 @@ func (b *healthBalancer) cleanupUnhealthy() {
for k, v := range b.unhealthyHostPorts {
if time.Since(v) > b.healthCheckTimeout {
delete(b.unhealthyHostPorts, k)
if logger.V(4) {
logger.Infof("clientv3/balancer: removed %q from unhealthy after %v", k, b.healthCheckTimeout)
}
logger.Lvl(4).Infof("clientv3/balancer: removed %q from unhealthy after %v", k, b.healthCheckTimeout)
}
}
b.unhealthyMu.Unlock()
@ -412,9 +402,7 @@ func (b *healthBalancer) Up(addr grpc.Address) func(error) {
}
if b.pinAddr != "" {
if logger.V(4) {
logger.Infof("clientv3/balancer: %q is up but not pinned (already pinned %q)", addr.Addr, b.pinAddr)
}
logger.Lvl(4).Infof("clientv3/balancer: %q is up but not pinned (already pinned %q)", addr.Addr, b.pinAddr)
return func(err error) {}
}
@ -422,9 +410,7 @@ func (b *healthBalancer) Up(addr grpc.Address) func(error) {
close(b.upc)
b.downc = make(chan struct{})
b.pinAddr = addr.Addr
if logger.V(4) {
logger.Infof("clientv3/balancer: pin %q", addr.Addr)
}
logger.Lvl(4).Infof("clientv3/balancer: pin %q", addr.Addr)
// notify client that a connection is up
b.readyOnce.Do(func() { close(b.readyc) })
@ -441,9 +427,7 @@ func (b *healthBalancer) Up(addr grpc.Address) func(error) {
close(b.downc)
b.pinAddr = ""
b.mu.Unlock()
if logger.V(4) {
logger.Infof("clientv3/balancer: unpin %q (%q)", addr.Addr, err.Error())
}
logger.Lvl(4).Infof("clientv3/balancer: unpin %q (%q)", addr.Addr, err.Error())
}
}
@ -470,9 +454,7 @@ func (b *healthBalancer) mayPin(addr grpc.Address) bool {
// 3. grpc-healthcheck still SERVING, thus retry to pin
// instead, return before grpc-healthcheck if failed within healthcheck timeout
if elapsed := time.Since(failedTime); elapsed < b.healthCheckTimeout {
if logger.V(4) {
logger.Infof("clientv3/balancer: %q is up but not pinned (failed %v ago, require minimum %v after failure)", addr.Addr, elapsed, b.healthCheckTimeout)
}
logger.Lvl(4).Infof("clientv3/balancer: %q is up but not pinned (failed %v ago, require minimum %v after failure)", addr.Addr, elapsed, b.healthCheckTimeout)
return false
}


@ -54,7 +54,7 @@ func TestBalancerUnderBlackholeKeepAliveWatch(t *testing.T) {
// TODO: only send healthy endpoint to gRPC so gRPC wont waste time to
// dial for unhealthy endpoint.
// then we can reduce 3s to 1s.
timeout := pingInterval + 3*time.Second
timeout := pingInterval + integration.RequestWaitTimeout
cli, err := clientv3.New(ccfg)
if err != nil {
@ -106,7 +106,7 @@ func TestBalancerUnderBlackholeKeepAliveWatch(t *testing.T) {
func TestBalancerUnderBlackholeNoKeepAlivePut(t *testing.T) {
testBalancerUnderBlackholeNoKeepAlive(t, func(cli *clientv3.Client, ctx context.Context) error {
_, err := cli.Put(ctx, "foo", "bar")
if err == context.DeadlineExceeded || err == rpctypes.ErrTimeout {
if err == context.DeadlineExceeded || isServerCtxTimeout(err) || err == rpctypes.ErrTimeout {
return errExpected
}
return err
@ -116,7 +116,7 @@ func TestBalancerUnderBlackholeNoKeepAlivePut(t *testing.T) {
func TestBalancerUnderBlackholeNoKeepAliveDelete(t *testing.T) {
testBalancerUnderBlackholeNoKeepAlive(t, func(cli *clientv3.Client, ctx context.Context) error {
_, err := cli.Delete(ctx, "foo")
if err == context.DeadlineExceeded || err == rpctypes.ErrTimeout {
if err == context.DeadlineExceeded || isServerCtxTimeout(err) || err == rpctypes.ErrTimeout {
return errExpected
}
return err
@ -129,7 +129,7 @@ func TestBalancerUnderBlackholeNoKeepAliveTxn(t *testing.T) {
If(clientv3.Compare(clientv3.Version("foo"), "=", 0)).
Then(clientv3.OpPut("foo", "bar")).
Else(clientv3.OpPut("foo", "baz")).Commit()
if err == context.DeadlineExceeded || err == rpctypes.ErrTimeout {
if err == context.DeadlineExceeded || isServerCtxTimeout(err) || err == rpctypes.ErrTimeout {
return errExpected
}
return err
@ -139,7 +139,7 @@ func TestBalancerUnderBlackholeNoKeepAliveTxn(t *testing.T) {
func TestBalancerUnderBlackholeNoKeepAliveLinearizableGet(t *testing.T) {
testBalancerUnderBlackholeNoKeepAlive(t, func(cli *clientv3.Client, ctx context.Context) error {
_, err := cli.Get(ctx, "a")
if err == context.DeadlineExceeded || err == rpctypes.ErrTimeout {
if err == context.DeadlineExceeded || isServerCtxTimeout(err) || err == rpctypes.ErrTimeout {
return errExpected
}
return err
@ -149,7 +149,7 @@ func TestBalancerUnderBlackholeNoKeepAliveLinearizableGet(t *testing.T) {
func TestBalancerUnderBlackholeNoKeepAliveSerializableGet(t *testing.T) {
testBalancerUnderBlackholeNoKeepAlive(t, func(cli *clientv3.Client, ctx context.Context) error {
_, err := cli.Get(ctx, "a", clientv3.WithSerializable())
if err == context.DeadlineExceeded {
if err == context.DeadlineExceeded || isServerCtxTimeout(err) {
return errExpected
}
return err


@ -126,3 +126,36 @@ func TestMemberUpdate(t *testing.T) {
t.Errorf("urls = %v, want %v", urls, resp.Members[0].PeerURLs)
}
}
func TestMemberAddUpdateWrongURLs(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
capi := clus.RandClient()
tt := [][]string{
// missing protocol scheme
{"://127.0.0.1:2379"},
// unsupported scheme
{"mailto://127.0.0.1:2379"},
// not conform to host:port
{"http://127.0.0.1"},
// contain a path
{"http://127.0.0.1:2379/path"},
// first path segment in URL cannot contain colon
{"127.0.0.1:1234"},
// URL scheme must be http, https, unix, or unixs
{"localhost:1234"},
}
for i := range tt {
_, err := capi.MemberAdd(context.Background(), tt[i])
if err == nil {
t.Errorf("#%d: MemberAdd err = nil, but error", i)
}
_, err = capi.MemberUpdate(context.Background(), 0, tt[i])
if err == nil {
t.Errorf("#%d: MemberUpdate err = nil, but error", i)
}
}
}


@ -121,7 +121,7 @@ func testDialSetEndpoints(t *testing.T, setBefore bool) {
if !setBefore {
cli.SetEndpoints(eps[toKill%3], eps[(toKill+1)%3])
}
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
ctx, cancel := context.WithTimeout(context.Background(), integration.RequestWaitTimeout)
if _, err = cli.Get(ctx, "foo", clientv3.WithSerializable()); err != nil {
t.Fatal(err)
}
@ -182,7 +182,7 @@ func TestDialForeignEndpoint(t *testing.T) {
// grpc can return a lazy connection that's not connected yet; confirm
// that it can communicate with the cluster.
kvc := clientv3.NewKVFromKVClient(pb.NewKVClient(conn))
kvc := clientv3.NewKVFromKVClient(pb.NewKVClient(conn), clus.Client(0))
ctx, cancel := context.WithTimeout(context.TODO(), 5*time.Second)
defer cancel()
if _, gerr := kvc.Get(ctx, "abc"); gerr != nil {


@ -30,6 +30,7 @@ import (
"github.com/coreos/etcd/pkg/testutil"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
)
func TestKVPutError(t *testing.T) {
@ -39,7 +40,7 @@ func TestKVPutError(t *testing.T) {
maxReqBytes = 1.5 * 1024 * 1024 // hard coded max in v3_server.go
quota = int64(int(maxReqBytes) + 8*os.Getpagesize())
)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1, QuotaBackendBytes: quota})
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1, QuotaBackendBytes: quota, ClientMaxCallSendMsgSize: 100 * 1024 * 1024})
defer clus.Terminate(t)
kv := clus.RandClient()
@ -452,7 +453,7 @@ func TestKVGetErrConnClosed(t *testing.T) {
clus.TakeClient(0)
select {
case <-time.After(3 * time.Second):
case <-time.After(integration.RequestWaitTimeout):
t.Fatal("kv.Get took too long")
case <-donec:
}
@ -479,7 +480,7 @@ func TestKVNewAfterClose(t *testing.T) {
close(donec)
}()
select {
case <-time.After(3 * time.Second):
case <-time.After(integration.RequestWaitTimeout):
t.Fatal("kv.Get took too long")
case <-donec:
}
@ -825,53 +826,6 @@ func TestKVPutStoppedServerAndClose(t *testing.T) {
}
}
// TestKVGetResetLoneEndpoint ensures that if an endpoint resets and all other
// endpoints are down, then it will reconnect.
func TestKVGetResetLoneEndpoint(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 2, SkipCreatingClient: true})
defer clus.Terminate(t)
// get endpoint list
eps := make([]string, 2)
for i := range eps {
eps[i] = clus.Members[i].GRPCAddr()
}
cfg := clientv3.Config{Endpoints: eps, DialTimeout: 500 * time.Millisecond}
cli, err := clientv3.New(cfg)
if err != nil {
t.Fatal(err)
}
defer cli.Close()
// disconnect everything
clus.Members[0].Stop(t)
clus.Members[1].Stop(t)
// have Get try to reconnect
donec := make(chan struct{})
go func() {
// 3-second is the minimum interval between endpoint being marked
// as unhealthy and being removed from unhealthy, so possibly
// takes >5-second to unpin and repin an endpoint
// TODO: decrease timeout when balancer switch rewrite
ctx, cancel := context.WithTimeout(context.TODO(), 7*time.Second)
if _, err := cli.Get(ctx, "abc", clientv3.WithSerializable()); err != nil {
t.Fatal(err)
}
cancel()
close(donec)
}()
time.Sleep(500 * time.Millisecond)
clus.Members[0].Restart(t)
select {
case <-time.After(10 * time.Second):
t.Fatalf("timed out waiting for Get")
case <-donec:
}
}
// TestKVPutAtMostOnce ensures that a Put will only occur at most once
// in the presence of network errors.
func TestKVPutAtMostOnce(t *testing.T) {
@ -908,3 +862,95 @@ func TestKVPutAtMostOnce(t *testing.T) {
t.Fatalf("expected version <= 10, got %+v", resp.Kvs[0])
}
}
// TestKVLargeRequests tests various client/server side request limits.
func TestKVLargeRequests(t *testing.T) {
defer testutil.AfterTest(t)
tests := []struct {
// make sure that "MaxCallSendMsgSize" < server-side default send/recv limit
maxRequestBytesServer uint
maxCallSendBytesClient int
maxCallRecvBytesClient int
valueSize int
expectError error
}{
{
maxRequestBytesServer: 1,
maxCallSendBytesClient: 0,
maxCallRecvBytesClient: 0,
valueSize: 1024,
expectError: rpctypes.ErrRequestTooLarge,
},
// without proper client-side receive size limit
// "code = ResourceExhausted desc = grpc: received message larger than max (5242929 vs. 4194304)"
{
maxRequestBytesServer: 7*1024*1024 + 512*1024,
maxCallSendBytesClient: 7 * 1024 * 1024,
maxCallRecvBytesClient: 0,
valueSize: 5 * 1024 * 1024,
expectError: nil,
},
{
maxRequestBytesServer: 10 * 1024 * 1024,
maxCallSendBytesClient: 100 * 1024 * 1024,
maxCallRecvBytesClient: 0,
valueSize: 10 * 1024 * 1024,
expectError: rpctypes.ErrRequestTooLarge,
},
{
maxRequestBytesServer: 10 * 1024 * 1024,
maxCallSendBytesClient: 10 * 1024 * 1024,
maxCallRecvBytesClient: 0,
valueSize: 10 * 1024 * 1024,
expectError: grpc.Errorf(codes.ResourceExhausted, "grpc: trying to send message larger than max "),
},
{
maxRequestBytesServer: 10 * 1024 * 1024,
maxCallSendBytesClient: 100 * 1024 * 1024,
maxCallRecvBytesClient: 0,
valueSize: 10*1024*1024 + 5,
expectError: rpctypes.ErrRequestTooLarge,
},
{
maxRequestBytesServer: 10 * 1024 * 1024,
maxCallSendBytesClient: 10 * 1024 * 1024,
maxCallRecvBytesClient: 0,
valueSize: 10*1024*1024 + 5,
expectError: grpc.Errorf(codes.ResourceExhausted, "grpc: trying to send message larger than max "),
},
}
for i, test := range tests {
clus := integration.NewClusterV3(t,
&integration.ClusterConfig{
Size: 1,
MaxRequestBytes: test.maxRequestBytesServer,
ClientMaxCallSendMsgSize: test.maxCallSendBytesClient,
ClientMaxCallRecvMsgSize: test.maxCallRecvBytesClient,
},
)
cli := clus.Client(0)
_, err := cli.Put(context.TODO(), "foo", strings.Repeat("a", test.valueSize))
if _, ok := err.(rpctypes.EtcdError); ok {
if err != test.expectError {
t.Errorf("#%d: expected %v, got %v", i, test.expectError, err)
}
} else if err != nil && !strings.HasPrefix(err.Error(), test.expectError.Error()) {
t.Errorf("#%d: expected %v, got %v", i, test.expectError, err)
}
// put request went through, now expects large response back
if err == nil {
_, err = cli.Get(context.TODO(), "foo")
if err != nil {
t.Errorf("#%d: get expected no error, got %v", i, err)
}
}
clus.Terminate(t)
}
}


@ -55,6 +55,11 @@ func TestLeaseGrant(t *testing.T) {
kv := clus.RandClient()
_, merr := lapi.Grant(context.Background(), clientv3.MaxLeaseTTL+1)
if merr != rpctypes.ErrLeaseTTLTooLarge {
t.Fatalf("err = %v, want %v", merr, rpctypes.ErrLeaseTTLTooLarge)
}
resp, err := lapi.Grant(context.Background(), 10)
if err != nil {
t.Errorf("failed to create lease %v", err)
@ -299,7 +304,7 @@ func TestLeaseGrantErrConnClosed(t *testing.T) {
}
select {
case <-time.After(3 * time.Second):
case <-time.After(integration.RequestWaitTimeout):
t.Fatal("le.Grant took too long")
case <-donec:
}
@ -325,7 +330,7 @@ func TestLeaseGrantNewAfterClose(t *testing.T) {
close(donec)
}()
select {
case <-time.After(3 * time.Second):
case <-time.After(integration.RequestWaitTimeout):
t.Fatal("le.Grant took too long")
case <-donec:
}
@ -357,7 +362,7 @@ func TestLeaseRevokeNewAfterClose(t *testing.T) {
close(donec)
}()
select {
case <-time.After(3 * time.Second):
case <-time.After(integration.RequestWaitTimeout):
t.Fatal("le.Revoke took too long")
case <-donec:
}


@ -15,11 +15,20 @@
package integration
import (
"bytes"
"context"
"fmt"
"io"
"io/ioutil"
"path/filepath"
"testing"
"time"
"github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
"github.com/coreos/etcd/integration"
"github.com/coreos/etcd/lease"
"github.com/coreos/etcd/mvcc"
"github.com/coreos/etcd/mvcc/backend"
"github.com/coreos/etcd/pkg/testutil"
)
@ -84,3 +93,100 @@ func TestMaintenanceMoveLeader(t *testing.T) {
t.Fatalf("new leader expected %d, got %d", target, lead)
}
}
// TestMaintenanceSnapshotError ensures that context cancel/timeout
// before snapshot reading returns corresponding context errors.
func TestMaintenanceSnapshotError(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
// reading snapshot with canceled context should error out
ctx, cancel := context.WithCancel(context.Background())
rc1, err := clus.RandClient().Snapshot(ctx)
if err != nil {
t.Fatal(err)
}
defer rc1.Close()
cancel()
_, err = io.Copy(ioutil.Discard, rc1)
if err != context.Canceled {
t.Errorf("expected %v, got %v", context.Canceled, err)
}
// reading snapshot with deadline exceeded should error out
ctx, cancel = context.WithTimeout(context.Background(), time.Second)
defer cancel()
rc2, err := clus.RandClient().Snapshot(ctx)
if err != nil {
t.Fatal(err)
}
defer rc2.Close()
time.Sleep(2 * time.Second)
_, err = io.Copy(ioutil.Discard, rc2)
if err != nil && err != context.DeadlineExceeded {
t.Errorf("expected %v, got %v", context.DeadlineExceeded, err)
}
}
// TestMaintenanceSnapshotErrorInflight ensures that inflight context cancel/timeout
// fails snapshot reading with corresponding context errors.
func TestMaintenanceSnapshotErrorInflight(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
// take about 1-second to read snapshot
clus.Members[0].Stop(t)
dpath := filepath.Join(clus.Members[0].DataDir, "member", "snap", "db")
b := backend.NewDefaultBackend(dpath)
s := mvcc.NewStore(b, &lease.FakeLessor{}, nil)
rev := 100000
for i := 2; i <= rev; i++ {
s.Put([]byte(fmt.Sprintf("%10d", i)), bytes.Repeat([]byte("a"), 1024), lease.NoLease)
}
s.Close()
b.Close()
clus.Members[0].Restart(t)
// reading snapshot with canceled context should error out
ctx, cancel := context.WithCancel(context.Background())
rc1, err := clus.RandClient().Snapshot(ctx)
if err != nil {
t.Fatal(err)
}
defer rc1.Close()
donec := make(chan struct{})
go func() {
time.Sleep(300 * time.Millisecond)
cancel()
close(donec)
}()
_, err = io.Copy(ioutil.Discard, rc1)
if err != nil && err != context.Canceled {
t.Errorf("expected %v, got %v", context.Canceled, err)
}
<-donec
// reading snapshot with deadline exceeded should error out
ctx, cancel = context.WithTimeout(context.Background(), time.Second)
defer cancel()
rc2, err := clus.RandClient().Snapshot(ctx)
if err != nil {
t.Fatal(err)
}
defer rc2.Close()
// 300ms left and expect timeout while snapshot reading is in progress
time.Sleep(700 * time.Millisecond)
_, err = io.Copy(ioutil.Discard, rc2)
if err != nil && err != context.DeadlineExceeded {
t.Errorf("expected %v, got %v", context.DeadlineExceeded, err)
}
}


@ -36,7 +36,7 @@ var errExpected = errors.New("expected error")
func TestBalancerUnderNetworkPartitionPut(t *testing.T) {
testBalancerUnderNetworkPartition(t, func(cli *clientv3.Client, ctx context.Context) error {
_, err := cli.Put(ctx, "a", "b")
if err == context.DeadlineExceeded || err == rpctypes.ErrTimeout {
if err == context.DeadlineExceeded || isServerCtxTimeout(err) || err == rpctypes.ErrTimeout {
return errExpected
}
return err
@ -46,7 +46,7 @@ func TestBalancerUnderNetworkPartitionPut(t *testing.T) {
func TestBalancerUnderNetworkPartitionDelete(t *testing.T) {
testBalancerUnderNetworkPartition(t, func(cli *clientv3.Client, ctx context.Context) error {
_, err := cli.Delete(ctx, "a")
if err == context.DeadlineExceeded || err == rpctypes.ErrTimeout {
if err == context.DeadlineExceeded || isServerCtxTimeout(err) || err == rpctypes.ErrTimeout {
return errExpected
}
return err
@ -59,7 +59,7 @@ func TestBalancerUnderNetworkPartitionTxn(t *testing.T) {
If(clientv3.Compare(clientv3.Version("foo"), "=", 0)).
Then(clientv3.OpPut("foo", "bar")).
Else(clientv3.OpPut("foo", "baz")).Commit()
if err == context.DeadlineExceeded || err == rpctypes.ErrTimeout {
if err == context.DeadlineExceeded || isServerCtxTimeout(err) || err == rpctypes.ErrTimeout {
return errExpected
}
return err
@ -82,7 +82,7 @@ func TestBalancerUnderNetworkPartitionLinearizableGetWithLongTimeout(t *testing.
func TestBalancerUnderNetworkPartitionLinearizableGetWithShortTimeout(t *testing.T) {
testBalancerUnderNetworkPartition(t, func(cli *clientv3.Client, ctx context.Context) error {
_, err := cli.Get(ctx, "a")
if err == context.DeadlineExceeded {
if err == context.DeadlineExceeded || isServerCtxTimeout(err) {
return errExpected
}
return err
@ -234,7 +234,7 @@ func testBalancerUnderNetworkPartitionWatch(t *testing.T, isolateLeader bool) {
wch := watchCli.Watch(clientv3.WithRequireLeader(context.Background()), "foo", clientv3.WithCreatedNotify())
select {
case <-wch:
case <-time.After(3 * time.Second):
case <-time.After(integration.RequestWaitTimeout):
t.Fatal("took too long to create watch")
}
@ -252,7 +252,7 @@ func testBalancerUnderNetworkPartitionWatch(t *testing.T, isolateLeader bool) {
if err = ev.Err(); err != rpctypes.ErrNoLeader {
t.Fatalf("expected %v, got %v", rpctypes.ErrNoLeader, err)
}
case <-time.After(3 * time.Second): // enough time to detect leader lost
case <-time.After(integration.RequestWaitTimeout): // enough time to detect leader lost
t.Fatal("took too long to detect leader lost")
}
}


@ -17,6 +17,7 @@ package integration
import (
"bytes"
"context"
"strings"
"testing"
"time"
@ -24,6 +25,9 @@ import (
"github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
"github.com/coreos/etcd/integration"
"github.com/coreos/etcd/pkg/testutil"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
// TestBalancerUnderServerShutdownWatch expects that watch client
@ -59,7 +63,7 @@ func TestBalancerUnderServerShutdownWatch(t *testing.T) {
wch := watchCli.Watch(context.Background(), key, clientv3.WithCreatedNotify())
select {
case <-wch:
case <-time.After(3 * time.Second):
case <-time.After(integration.RequestWaitTimeout):
t.Fatal("took too long to create watch")
}
@ -100,7 +104,7 @@ func TestBalancerUnderServerShutdownWatch(t *testing.T) {
if err == nil {
break
}
if err == context.DeadlineExceeded || err == rpctypes.ErrTimeout || err == rpctypes.ErrTimeoutDueToLeaderFail {
if err == context.DeadlineExceeded || isServerCtxTimeout(err) || err == rpctypes.ErrTimeout || err == rpctypes.ErrTimeoutDueToLeaderFail {
continue
}
t.Fatal(err)
@ -235,3 +239,129 @@ func testBalancerUnderServerShutdownImmutable(t *testing.T, op func(*clientv3.Cl
t.Errorf("failed to finish range request in time %v (timeout %v)", err, timeout)
}
}
func TestBalancerUnderServerStopInflightLinearizableGetOnRestart(t *testing.T) {
tt := []pinTestOpt{
{pinLeader: true, stopPinFirst: true},
{pinLeader: true, stopPinFirst: false},
{pinLeader: false, stopPinFirst: true},
{pinLeader: false, stopPinFirst: false},
}
for i := range tt {
testBalancerUnderServerStopInflightRangeOnRestart(t, true, tt[i])
}
}
func TestBalancerUnderServerStopInflightSerializableGetOnRestart(t *testing.T) {
tt := []pinTestOpt{
{pinLeader: true, stopPinFirst: true},
{pinLeader: true, stopPinFirst: false},
{pinLeader: false, stopPinFirst: true},
{pinLeader: false, stopPinFirst: false},
}
for i := range tt {
testBalancerUnderServerStopInflightRangeOnRestart(t, false, tt[i])
}
}
type pinTestOpt struct {
pinLeader bool
stopPinFirst bool
}
// testBalancerUnderServerStopInflightRangeOnRestart expects
// inflight range request reconnects on server restart.
func testBalancerUnderServerStopInflightRangeOnRestart(t *testing.T, linearizable bool, opt pinTestOpt) {
defer testutil.AfterTest(t)
cfg := &integration.ClusterConfig{
Size: 2,
SkipCreatingClient: true,
}
if linearizable {
cfg.Size = 3
}
clus := integration.NewClusterV3(t, cfg)
defer clus.Terminate(t)
eps := []string{clus.Members[0].GRPCAddr(), clus.Members[1].GRPCAddr()}
if linearizable {
eps = append(eps, clus.Members[2].GRPCAddr())
}
lead := clus.WaitLeader(t)
target := lead
if !opt.pinLeader {
target = (target + 1) % 2
}
// pin eps[target]
cli, err := clientv3.New(clientv3.Config{Endpoints: []string{eps[target]}})
if err != nil {
t.Errorf("failed to create client: %v", err)
}
defer cli.Close()
// wait for eps[target] to be pinned
mustWaitPinReady(t, cli)
// add all eps to the list, so that when the original pinned one fails
// the client can switch to other available eps
cli.SetEndpoints(eps...)
if opt.stopPinFirst {
clus.Members[target].Stop(t)
// give some time for balancer switch before stopping the other
time.Sleep(time.Second)
clus.Members[(target+1)%2].Stop(t)
} else {
clus.Members[(target+1)%2].Stop(t)
// balancer cannot pin other member since it's already stopped
clus.Members[target].Stop(t)
}
// 3 seconds is the minimum interval between an endpoint being marked
// unhealthy and being removed from the unhealthy list, so it can
// take more than 5 seconds to unpin and repin an endpoint
// TODO: decrease timeout when balancer switch rewrite
clientTimeout := 7 * time.Second
var gops []clientv3.OpOption
if !linearizable {
gops = append(gops, clientv3.WithSerializable())
}
donec, readyc := make(chan struct{}), make(chan struct{}, 1)
go func() {
defer close(donec)
ctx, cancel := context.WithTimeout(context.TODO(), clientTimeout)
readyc <- struct{}{}
_, err := cli.Get(ctx, "abc", gops...)
cancel()
if err != nil {
t.Fatal(err)
}
}()
<-readyc
clus.Members[target].Restart(t)
select {
case <-time.After(clientTimeout + integration.RequestWaitTimeout):
t.Fatalf("timed out waiting for Get [linearizable: %v, opt: %+v]", linearizable, opt)
case <-donec:
}
}
// e.g. due to clock drift on the server side, the server-side context
// can time out before the original client-side context does
func isServerCtxTimeout(err error) bool {
if err == nil {
return false
}
ev, _ := status.FromError(err)
code := ev.Code()
return code == codes.DeadlineExceeded && strings.Contains(err.Error(), "context deadline exceeded")
}
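
A small sketch separating the two deadline cases the checks above distinguish: the caller's own context expiring locally versus a DeadlineExceeded status propagated from the server. The helper name `classifyDeadline` is hypothetical.

```go
package example

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func classifyDeadline(err error) string {
	if err == context.DeadlineExceeded {
		return "client-side deadline" // the ctx attached to the call expired locally
	}
	if ev, ok := status.FromError(err); ok && ev.Code() == codes.DeadlineExceeded {
		return "server-side deadline" // the server's context expired first (e.g. clock drift)
	}
	return "other"
}
```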


@ -678,7 +678,7 @@ func TestWatchErrConnClosed(t *testing.T) {
clus.TakeClient(0)
select {
case <-time.After(3 * time.Second):
case <-time.After(integration.RequestWaitTimeout):
t.Fatal("wc.Watch took too long")
case <-donec:
}
@ -705,7 +705,7 @@ func TestWatchAfterClose(t *testing.T) {
close(donec)
}()
select {
case <-time.After(3 * time.Second):
case <-time.After(integration.RequestWaitTimeout):
t.Fatal("wc.Watch took too long")
case <-donec:
}
@ -751,7 +751,7 @@ func TestWatchWithRequireLeader(t *testing.T) {
if resp.Err() != rpctypes.ErrNoLeader {
t.Fatalf("expected %v watch response error, got %+v", rpctypes.ErrNoLeader, resp)
}
case <-time.After(3 * time.Second):
case <-time.After(integration.RequestWaitTimeout):
t.Fatal("watch without leader took too long to close")
}
@ -760,7 +760,7 @@ func TestWatchWithRequireLeader(t *testing.T) {
if ok {
t.Fatalf("expected closed channel, got response %v", resp)
}
case <-time.After(3 * time.Second):
case <-time.After(integration.RequestWaitTimeout):
t.Fatal("waited too long for channel to close")
}


@ -18,6 +18,8 @@ import (
"context"
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"google.golang.org/grpc"
)
type (
@ -88,15 +90,24 @@ func (resp *TxnResponse) OpResponse() OpResponse {
}
type kv struct {
remote pb.KVClient
remote pb.KVClient
callOpts []grpc.CallOption
}
func NewKV(c *Client) KV {
return &kv{remote: RetryKVClient(c)}
api := &kv{remote: RetryKVClient(c)}
if c != nil {
api.callOpts = c.callOpts
}
return api
}
func NewKVFromKVClient(remote pb.KVClient) KV {
return &kv{remote: remote}
func NewKVFromKVClient(remote pb.KVClient, c *Client) KV {
api := &kv{remote: remote}
if c != nil {
api.callOpts = c.callOpts
}
return api
}
func (kv *kv) Put(ctx context.Context, key, val string, opts ...OpOption) (*PutResponse, error) {
@ -115,7 +126,7 @@ func (kv *kv) Delete(ctx context.Context, key string, opts ...OpOption) (*Delete
}
func (kv *kv) Compact(ctx context.Context, rev int64, opts ...CompactOption) (*CompactResponse, error) {
resp, err := kv.remote.Compact(ctx, OpCompact(rev, opts...).toRequest())
resp, err := kv.remote.Compact(ctx, OpCompact(rev, opts...).toRequest(), kv.callOpts...)
if err != nil {
return nil, toErr(ctx, err)
}
@ -124,8 +135,9 @@ func (kv *kv) Compact(ctx context.Context, rev int64, opts ...CompactOption) (*C
func (kv *kv) Txn(ctx context.Context) Txn {
return &txn{
kv: kv,
ctx: ctx,
kv: kv,
ctx: ctx,
callOpts: kv.callOpts,
}
}
@ -134,27 +146,27 @@ func (kv *kv) Do(ctx context.Context, op Op) (OpResponse, error) {
switch op.t {
case tRange:
var resp *pb.RangeResponse
resp, err = kv.remote.Range(ctx, op.toRangeRequest())
resp, err = kv.remote.Range(ctx, op.toRangeRequest(), kv.callOpts...)
if err == nil {
return OpResponse{get: (*GetResponse)(resp)}, nil
}
case tPut:
var resp *pb.PutResponse
r := &pb.PutRequest{Key: op.key, Value: op.val, Lease: int64(op.leaseID), PrevKv: op.prevKV, IgnoreValue: op.ignoreValue, IgnoreLease: op.ignoreLease}
resp, err = kv.remote.Put(ctx, r)
resp, err = kv.remote.Put(ctx, r, kv.callOpts...)
if err == nil {
return OpResponse{put: (*PutResponse)(resp)}, nil
}
case tDeleteRange:
var resp *pb.DeleteRangeResponse
r := &pb.DeleteRangeRequest{Key: op.key, RangeEnd: op.end, PrevKv: op.prevKV}
resp, err = kv.remote.DeleteRange(ctx, r)
resp, err = kv.remote.DeleteRange(ctx, r, kv.callOpts...)
if err == nil {
return OpResponse{del: (*DeleteResponse)(resp)}, nil
}
case tTxn:
var resp *pb.TxnResponse
resp, err = kv.remote.Txn(ctx, op.toTxnRequest())
resp, err = kv.remote.Txn(ctx, op.toTxnRequest(), kv.callOpts...)
if err == nil {
return OpResponse{txn: (*TxnResponse)(resp)}, nil
}


@ -22,6 +22,7 @@ import (
"github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"google.golang.org/grpc"
"google.golang.org/grpc/metadata"
)
@ -50,7 +51,7 @@ type LeaseTimeToLiveResponse struct {
*pb.ResponseHeader
ID LeaseID `json:"id"`
// TTL is the remaining TTL in seconds for the lease; the lease will expire in under TTL+1 seconds.
// TTL is the remaining TTL in seconds for the lease; the lease will expire in under TTL+1 seconds. Expired lease will return -1.
TTL int64 `json:"ttl"`
// GrantedTTL is the initial granted time in seconds upon lease creation/renewal.
@ -113,11 +114,29 @@ type Lease interface {
// Leases retrieves all leases.
Leases(ctx context.Context) (*LeaseLeasesResponse, error)
// KeepAlive keeps the given lease alive forever.
// KeepAlive keeps the given lease alive forever. If the keepalive response
// posted to the channel is not consumed immediately, the lease client will
// continue sending keep alive requests to the etcd server at least every
// second until latest response is consumed.
//
// The returned "LeaseKeepAliveResponse" channel closes if underlying keep
// alive stream is interrupted in some way the client cannot handle itself;
// given context "ctx" is canceled or timed out. "LeaseKeepAliveResponse"
// from this closed channel is nil.
//
// If client keep alive loop halts with an unexpected error (e.g. "etcdserver:
// no leader") or canceled by the caller (e.g. context.Canceled), the error
// is returned. Otherwise, it retries.
//
// TODO(v4.0): post errors to last keep alive message before closing
// (see https://github.com/coreos/etcd/pull/7866)
KeepAlive(ctx context.Context, id LeaseID) (<-chan *LeaseKeepAliveResponse, error)
// KeepAliveOnce renews the lease once. In most of the cases, KeepAlive
// should be used instead of KeepAliveOnce.
// KeepAliveOnce renews the lease once. The response corresponds to the
// first message from calling KeepAlive. If the response has a recoverable
// error, KeepAliveOnce will retry the RPC with a new keep alive message.
//
// In most cases, KeepAlive should be used instead of KeepAliveOnce.
KeepAliveOnce(ctx context.Context, id LeaseID) (*LeaseKeepAliveResponse, error)
// Close releases all resources Lease keeps for efficient communication
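
A minimal sketch of the KeepAlive contract documented above: grant a lease, keep draining the returned channel so refreshes continue, and treat a closed channel as context cancellation or an unrecoverable stream error. Endpoint and TTL values are illustrative.

```go
package example

import (
	"context"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func keepLeaseAlive(ctx context.Context) error {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		return err
	}
	defer cli.Close()

	grant, err := cli.Grant(ctx, 10) // 10-second TTL
	if err != nil {
		return err
	}
	kach, err := cli.KeepAlive(ctx, grant.ID)
	if err != nil {
		return err
	}
	for ka := range kach {
		// consume every response; an unread channel stalls further refreshes
		log.Printf("lease %x refreshed, TTL=%d", ka.ID, ka.TTL)
	}
	// channel closed: ctx was canceled/timed out, or the stream broke in a
	// way the lease client cannot recover from
	return ctx.Err()
}
```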
@ -148,6 +167,8 @@ type lessor struct {
// firstKeepAliveOnce ensures stream starts after first KeepAlive call.
firstKeepAliveOnce sync.Once
callOpts []grpc.CallOption
}
// keepAlive multiplexes a keepalive for a lease over multiple channels
@ -163,10 +184,10 @@ type keepAlive struct {
}
func NewLease(c *Client) Lease {
return NewLeaseFromLeaseClient(RetryLeaseClient(c), c.cfg.DialTimeout+time.Second)
return NewLeaseFromLeaseClient(RetryLeaseClient(c), c, c.cfg.DialTimeout+time.Second)
}
func NewLeaseFromLeaseClient(remote pb.LeaseClient, keepAliveTimeout time.Duration) Lease {
func NewLeaseFromLeaseClient(remote pb.LeaseClient, c *Client, keepAliveTimeout time.Duration) Lease {
l := &lessor{
donec: make(chan struct{}),
keepAlives: make(map[LeaseID]*keepAlive),
@ -176,6 +197,9 @@ func NewLeaseFromLeaseClient(remote pb.LeaseClient, keepAliveTimeout time.Durati
if l.firstKeepAliveTimeout == time.Second {
l.firstKeepAliveTimeout = defaultTTL
}
if c != nil {
l.callOpts = c.callOpts
}
reqLeaderCtx := WithRequireLeader(context.Background())
l.stopCtx, l.stopCancel = context.WithCancel(reqLeaderCtx)
return l
@ -183,7 +207,7 @@ func NewLeaseFromLeaseClient(remote pb.LeaseClient, keepAliveTimeout time.Durati
func (l *lessor) Grant(ctx context.Context, ttl int64) (*LeaseGrantResponse, error) {
r := &pb.LeaseGrantRequest{TTL: ttl}
resp, err := l.remote.LeaseGrant(ctx, r)
resp, err := l.remote.LeaseGrant(ctx, r, l.callOpts...)
if err == nil {
gresp := &LeaseGrantResponse{
ResponseHeader: resp.GetHeader(),
@ -198,7 +222,7 @@ func (l *lessor) Grant(ctx context.Context, ttl int64) (*LeaseGrantResponse, err
func (l *lessor) Revoke(ctx context.Context, id LeaseID) (*LeaseRevokeResponse, error) {
r := &pb.LeaseRevokeRequest{ID: int64(id)}
resp, err := l.remote.LeaseRevoke(ctx, r)
resp, err := l.remote.LeaseRevoke(ctx, r, l.callOpts...)
if err == nil {
return (*LeaseRevokeResponse)(resp), nil
}
@ -207,7 +231,7 @@ func (l *lessor) Revoke(ctx context.Context, id LeaseID) (*LeaseRevokeResponse,
func (l *lessor) TimeToLive(ctx context.Context, id LeaseID, opts ...LeaseOption) (*LeaseTimeToLiveResponse, error) {
r := toLeaseTimeToLiveRequest(id, opts...)
resp, err := l.remote.LeaseTimeToLive(ctx, r)
resp, err := l.remote.LeaseTimeToLive(ctx, r, l.callOpts...)
if err == nil {
gresp := &LeaseTimeToLiveResponse{
ResponseHeader: resp.GetHeader(),
@ -222,7 +246,7 @@ func (l *lessor) TimeToLive(ctx context.Context, id LeaseID, opts ...LeaseOption
}
func (l *lessor) Leases(ctx context.Context) (*LeaseLeasesResponse, error) {
resp, err := l.remote.LeaseLeases(ctx, &pb.LeaseLeasesRequest{})
resp, err := l.remote.LeaseLeases(ctx, &pb.LeaseLeasesRequest{}, l.callOpts...)
if err == nil {
leases := make([]LeaseStatus, len(resp.Leases))
for i := range resp.Leases {
@ -371,7 +395,7 @@ func (l *lessor) keepAliveOnce(ctx context.Context, id LeaseID) (*LeaseKeepAlive
cctx, cancel := context.WithCancel(ctx)
defer cancel()
stream, err := l.remote.LeaseKeepAlive(cctx)
stream, err := l.remote.LeaseKeepAlive(cctx, l.callOpts...)
if err != nil {
return nil, toErr(ctx, err)
}
@ -442,7 +466,7 @@ func (l *lessor) recvKeepAliveLoop() (gerr error) {
// resetRecv opens a new lease stream and starts sending keep alive requests.
func (l *lessor) resetRecv() (pb.Lease_LeaseKeepAliveClient, error) {
sctx, cancel := context.WithCancel(l.stopCtx)
stream, err := l.remote.LeaseKeepAlive(sctx)
stream, err := l.remote.LeaseKeepAlive(sctx, l.callOpts...)
if err != nil {
cancel()
return nil, err


@ -445,8 +445,11 @@ func (lkv *leasingKV) revokeLeaseKvs(ctx context.Context, kvs []*mvccpb.KeyValue
}
func (lkv *leasingKV) waitSession(ctx context.Context) error {
lkv.leases.mu.RLock()
sessionc := lkv.sessionc
lkv.leases.mu.RUnlock()
select {
case <-lkv.sessionc:
case <-sessionc:
return nil
case <-lkv.ctx.Done():
return lkv.ctx.Err()
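
The fix above snapshots the channel under the read lock before selecting on it. A generic sketch of that pattern, with hypothetical type and field names: copy the mutable channel field to a local variable while holding the lock, then select on the copy so a writer replacing the field cannot race the select.

```go
package example

import "sync"

type sessionState struct {
	mu    sync.RWMutex
	ready chan struct{} // may be swapped by a writer holding mu
}

func (s *sessionState) waitReady(stop <-chan struct{}) bool {
	s.mu.RLock()
	ready := s.ready // snapshot under the lock
	s.mu.RUnlock()

	select {
	case <-ready:
		return true
	case <-stop:
		return false
	}
}
```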


@ -23,10 +23,23 @@ import (
// Logger is the logger used by client library.
// It implements grpclog.LoggerV2 interface.
type Logger grpclog.LoggerV2
type Logger interface {
grpclog.LoggerV2
// Lvl returns logger if logger's verbosity level >= "lvl".
// Otherwise, logger that discards all logs.
Lvl(lvl int) Logger
// to satisfy capnslog
Print(args ...interface{})
Printf(format string, args ...interface{})
Println(args ...interface{})
}
var (
logger settableLogger
loggerMu sync.RWMutex
logger Logger
)
type settableLogger struct {
@ -36,38 +49,35 @@ type settableLogger struct {
func init() {
// disable client side logs by default
logger.mu.Lock()
logger.l = grpclog.NewLoggerV2(ioutil.Discard, ioutil.Discard, ioutil.Discard)
// logger has to override the grpclog at initialization so that
// any changes to the grpclog go through logger with locking
// instead of through SetLogger
//
// now updates only happen through settableLogger.set
grpclog.SetLoggerV2(&logger)
logger.mu.Unlock()
logger = &settableLogger{}
SetLogger(grpclog.NewLoggerV2(ioutil.Discard, ioutil.Discard, ioutil.Discard))
}
// SetLogger sets client-side Logger. By default, logs are disabled.
func SetLogger(l Logger) {
logger.set(l)
// SetLogger sets client-side Logger.
func SetLogger(l grpclog.LoggerV2) {
loggerMu.Lock()
logger = NewLogger(l)
// override grpclog so that any changes happen with locking
grpclog.SetLoggerV2(logger)
loggerMu.Unlock()
}
// GetLogger returns the current logger.
func GetLogger() Logger {
return logger.get()
loggerMu.RLock()
l := logger
loggerMu.RUnlock()
return l
}
func (s *settableLogger) set(l Logger) {
s.mu.Lock()
logger.l = l
grpclog.SetLoggerV2(&logger)
s.mu.Unlock()
// NewLogger returns a new Logger with grpclog.LoggerV2.
func NewLogger(gl grpclog.LoggerV2) Logger {
return &settableLogger{l: gl}
}
func (s *settableLogger) get() Logger {
func (s *settableLogger) get() grpclog.LoggerV2 {
s.mu.RLock()
l := logger.l
l := s.l
s.mu.RUnlock()
return l
}
@ -94,3 +104,32 @@ func (s *settableLogger) Print(args ...interface{}) { s.get().In
func (s *settableLogger) Printf(format string, args ...interface{}) { s.get().Infof(format, args...) }
func (s *settableLogger) Println(args ...interface{}) { s.get().Infoln(args...) }
func (s *settableLogger) V(l int) bool { return s.get().V(l) }
func (s *settableLogger) Lvl(lvl int) Logger {
s.mu.RLock()
l := s.l
s.mu.RUnlock()
if l.V(lvl) {
return s
}
return &noLogger{}
}
type noLogger struct{}
func (*noLogger) Info(args ...interface{}) {}
func (*noLogger) Infof(format string, args ...interface{}) {}
func (*noLogger) Infoln(args ...interface{}) {}
func (*noLogger) Warning(args ...interface{}) {}
func (*noLogger) Warningf(format string, args ...interface{}) {}
func (*noLogger) Warningln(args ...interface{}) {}
func (*noLogger) Error(args ...interface{}) {}
func (*noLogger) Errorf(format string, args ...interface{}) {}
func (*noLogger) Errorln(args ...interface{}) {}
func (*noLogger) Fatal(args ...interface{}) {}
func (*noLogger) Fatalf(format string, args ...interface{}) {}
func (*noLogger) Fatalln(args ...interface{}) {}
func (*noLogger) Print(args ...interface{}) {}
func (*noLogger) Printf(format string, args ...interface{}) {}
func (*noLogger) Println(args ...interface{}) {}
func (*noLogger) V(l int) bool { return false }
func (ng *noLogger) Lvl(lvl int) Logger { return ng }
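
A brief sketch of the new SetLogger entry point, which now takes a grpclog.LoggerV2 directly; verbosity 4 makes the Lvl(4) balancer and retry messages above visible, and the discard writers restore the default silence.

```go
package example

import (
	"io/ioutil"
	"os"

	"github.com/coreos/etcd/clientv3"
	"google.golang.org/grpc/grpclog"
)

func enableClientLogs() {
	// log to stderr at verbosity 4 so Lvl(4) messages become visible
	clientv3.SetLogger(grpclog.NewLoggerV2WithVerbosity(os.Stderr, os.Stderr, os.Stderr, 4))
}

func disableClientLogs() {
	// back to the default: all client-side logs discarded
	clientv3.SetLogger(grpclog.NewLoggerV2(ioutil.Discard, ioutil.Discard, ioutil.Discard))
}
```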

clientv3/logger_test.go (new file, 51 lines)

@ -0,0 +1,51 @@
// Copyright 2017 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package clientv3
import (
"bytes"
"io/ioutil"
"strings"
"testing"
"google.golang.org/grpc/grpclog"
)
func TestLogger(t *testing.T) {
buf := new(bytes.Buffer)
l := NewLogger(grpclog.NewLoggerV2WithVerbosity(buf, buf, buf, 10))
l.Infof("hello world!")
if !strings.Contains(buf.String(), "hello world!") {
t.Fatalf("expected 'hello world!', got %q", buf.String())
}
buf.Reset()
l.Lvl(10).Infof("Level 10")
l.Lvl(30).Infof("Level 30")
if !strings.Contains(buf.String(), "Level 10") {
t.Fatalf("expected 'Level 10', got %q", buf.String())
}
if strings.Contains(buf.String(), "Level 30") {
t.Fatalf("unexpected 'Level 30', got %q", buf.String())
}
buf.Reset()
l = NewLogger(grpclog.NewLoggerV2(ioutil.Discard, ioutil.Discard, ioutil.Discard))
l.Infof("ignore this")
if len(buf.Bytes()) > 0 {
t.Fatalf("unexpected logs %q", buf.String())
}
}


@ -19,6 +19,8 @@ import (
"io"
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"google.golang.org/grpc"
)
type (
@ -63,12 +65,13 @@ type Maintenance interface {
}
type maintenance struct {
dial func(endpoint string) (pb.MaintenanceClient, func(), error)
remote pb.MaintenanceClient
dial func(endpoint string) (pb.MaintenanceClient, func(), error)
remote pb.MaintenanceClient
callOpts []grpc.CallOption
}
func NewMaintenance(c *Client) Maintenance {
return &maintenance{
api := &maintenance{
dial: func(endpoint string) (pb.MaintenanceClient, func(), error) {
conn, err := c.dial(endpoint)
if err != nil {
@ -79,15 +82,23 @@ func NewMaintenance(c *Client) Maintenance {
},
remote: RetryMaintenanceClient(c, c.conn),
}
if c != nil {
api.callOpts = c.callOpts
}
return api
}
func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient) Maintenance {
return &maintenance{
func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient, c *Client) Maintenance {
api := &maintenance{
dial: func(string) (pb.MaintenanceClient, func(), error) {
return remote, func() {}, nil
},
remote: remote,
}
if c != nil {
api.callOpts = c.callOpts
}
return api
}
func (m *maintenance) AlarmList(ctx context.Context) (*AlarmResponse, error) {
@ -96,7 +107,7 @@ func (m *maintenance) AlarmList(ctx context.Context) (*AlarmResponse, error) {
MemberID: 0, // all
Alarm: pb.AlarmType_NONE, // all
}
resp, err := m.remote.Alarm(ctx, req)
resp, err := m.remote.Alarm(ctx, req, m.callOpts...)
if err == nil {
return (*AlarmResponse)(resp), nil
}
@ -126,7 +137,7 @@ func (m *maintenance) AlarmDisarm(ctx context.Context, am *AlarmMember) (*AlarmR
return &ret, nil
}
resp, err := m.remote.Alarm(ctx, req)
resp, err := m.remote.Alarm(ctx, req, m.callOpts...)
if err == nil {
return (*AlarmResponse)(resp), nil
}
@ -139,7 +150,7 @@ func (m *maintenance) Defragment(ctx context.Context, endpoint string) (*Defragm
return nil, toErr(ctx, err)
}
defer cancel()
resp, err := remote.Defragment(ctx, &pb.DefragmentRequest{})
resp, err := remote.Defragment(ctx, &pb.DefragmentRequest{}, m.callOpts...)
if err != nil {
return nil, toErr(ctx, err)
}
@ -152,7 +163,7 @@ func (m *maintenance) Status(ctx context.Context, endpoint string) (*StatusRespo
return nil, toErr(ctx, err)
}
defer cancel()
resp, err := remote.Status(ctx, &pb.StatusRequest{})
resp, err := remote.Status(ctx, &pb.StatusRequest{}, m.callOpts...)
if err != nil {
return nil, toErr(ctx, err)
}
@ -165,7 +176,7 @@ func (m *maintenance) HashKV(ctx context.Context, endpoint string, rev int64) (*
return nil, toErr(ctx, err)
}
defer cancel()
resp, err := remote.HashKV(ctx, &pb.HashKVRequest{Revision: rev})
resp, err := remote.HashKV(ctx, &pb.HashKVRequest{Revision: rev}, m.callOpts...)
if err != nil {
return nil, toErr(ctx, err)
}
@ -173,7 +184,7 @@ func (m *maintenance) HashKV(ctx context.Context, endpoint string, rev int64) (*
}
func (m *maintenance) Snapshot(ctx context.Context) (io.ReadCloser, error) {
ss, err := m.remote.Snapshot(ctx, &pb.SnapshotRequest{})
ss, err := m.remote.Snapshot(ctx, &pb.SnapshotRequest{}, m.callOpts...)
if err != nil {
return nil, toErr(ctx, err)
}
@ -196,10 +207,20 @@ func (m *maintenance) Snapshot(ctx context.Context) (io.ReadCloser, error) {
}
pw.Close()
}()
return pr, nil
return &snapshotReadCloser{ctx: ctx, ReadCloser: pr}, nil
}
type snapshotReadCloser struct {
ctx context.Context
io.ReadCloser
}
func (rc *snapshotReadCloser) Read(p []byte) (n int, err error) {
n, err = rc.ReadCloser.Read(p)
return n, toErr(rc.ctx, err)
}
func (m *maintenance) MoveLeader(ctx context.Context, transfereeID uint64) (*MoveLeaderResponse, error) {
resp, err := m.remote.MoveLeader(ctx, &pb.MoveLeaderRequest{TargetID: transfereeID})
resp, err := m.remote.MoveLeader(ctx, &pb.MoveLeaderRequest{TargetID: transfereeID}, m.callOpts...)
return (*MoveLeaderResponse)(resp), toErr(ctx, err)
}
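
A minimal sketch of streaming a snapshot to disk; with the snapshotReadCloser wrapper above, a canceled or expired context surfaces from Read as a context error rather than a raw gRPC error. The path and timeout are illustrative.

```go
package example

import (
	"context"
	"io"
	"os"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func saveSnapshot(cli *clientv3.Client, path string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	rc, err := cli.Snapshot(ctx)
	if err != nil {
		return err
	}
	defer rc.Close()

	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	// io.Copy returns context.Canceled / context.DeadlineExceeded if ctx
	// ends while the snapshot stream is still being read
	_, err = io.Copy(f, rc)
	return err
}
```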

clientv3/options.go (new file, 49 lines)

@ -0,0 +1,49 @@
// Copyright 2017 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package clientv3
import (
"math"
"google.golang.org/grpc"
)
var (
// Disable gRPC internal retrial logic
// TODO: enable when gRPC retry is stable (FailFast=false)
// Reference:
// - https://github.com/grpc/grpc-go/issues/1532
// - https://github.com/grpc/proposal/blob/master/A6-client-retries.md
defaultFailFast = grpc.FailFast(true)
// client-side request send limit, gRPC default is math.MaxInt32
// Make sure that "client-side send limit < server-side default send/recv limit"
// Same value as "embed.DefaultMaxRequestBytes" plus gRPC overhead bytes
defaultMaxCallSendMsgSize = grpc.MaxCallSendMsgSize(2 * 1024 * 1024)
// client-side response receive limit, gRPC default is 4MB
// Make sure that "client-side receive limit >= server-side default send/recv limit"
// because range response can easily exceed request send limits
// Default to math.MaxInt32; writes exceeding server-side send limit fails anyway
defaultMaxCallRecvMsgSize = grpc.MaxCallRecvMsgSize(math.MaxInt32)
)
// defaultCallOpts defines a list of default "gRPC.CallOption".
// Some options are exposed to "clientv3.Config".
// Defaults will be overridden by the settings in "clientv3.Config".
var defaultCallOpts = []grpc.CallOption{defaultFailFast, defaultMaxCallSendMsgSize, defaultMaxCallRecvMsgSize}
// MaxLeaseTTL is the maximum lease TTL value
const MaxLeaseTTL = 9000000000


@ -204,7 +204,7 @@ var rangeTests = []struct {
func TestKvOrdering(t *testing.T) {
for i, tt := range rangeTests {
mKV := &mockKV{clientv3.NewKVFromKVClient(nil), tt.response.OpResponse()}
mKV := &mockKV{clientv3.NewKVFromKVClient(nil, nil), tt.response.OpResponse()}
kv := &kvOrdering{
mKV,
func(r *clientv3.GetResponse) OrderViolationFunc {
@ -258,7 +258,7 @@ var txnTests = []struct {
func TestTxnOrdering(t *testing.T) {
for i, tt := range txnTests {
mKV := &mockKV{clientv3.NewKVFromKVClient(nil), tt.response.OpResponse()}
mKV := &mockKV{clientv3.NewKVFromKVClient(nil, nil), tt.response.OpResponse()}
kv := &kvOrdering{
mKV,
func(r *clientv3.TxnResponse) OrderViolationFunc {


@ -25,10 +25,26 @@ import (
"google.golang.org/grpc/status"
)
type retryPolicy uint8
const (
repeatable retryPolicy = iota
nonRepeatable
)
type rpcFunc func(ctx context.Context) error
type retryRPCFunc func(context.Context, rpcFunc) error
type retryRPCFunc func(context.Context, rpcFunc, retryPolicy) error
type retryStopErrFunc func(error) bool
// immutable requests (e.g. Get) should be retried unless it's
// an obvious server-side error (e.g. rpctypes.ErrRequestTooLarge).
//
// "isRepeatableStopError" returns "true" when an immutable request
// is interrupted by server-side or gRPC-side error and its status
// code is not transient (!= codes.Unavailable).
//
// Returning "true" means retry should stop, since client cannot
// handle itself even with retries.
func isRepeatableStopError(err error) bool {
eErr := rpctypes.Error(err)
// always stop retry on etcd errors
@ -40,6 +56,17 @@ func isRepeatableStopError(err error) bool {
return ev.Code() != codes.Unavailable
}
// mutable requests (e.g. Put, Delete, Txn) should only be retried
// when the status code is codes.Unavailable when initial connection
// has not been established (no pinned endpoint).
//
// "isNonRepeatableStopError" returns "true" when a mutable request
// is interrupted by non-transient error that client cannot handle itself,
// or transient error while the connection has already been established
// (pinned endpoint exists).
//
// Returning "true" means retry should stop, otherwise it violates
// write-at-most-once semantics.
func isNonRepeatableStopError(err error) bool {
ev, _ := status.FromError(err)
if ev.Code() != codes.Unavailable {
@ -49,8 +76,15 @@ func isNonRepeatableStopError(err error) bool {
return desc != "there is no address available" && desc != "there is no connection available"
}
func (c *Client) newRetryWrapper(isStop retryStopErrFunc) retryRPCFunc {
return func(rpcCtx context.Context, f rpcFunc) error {
func (c *Client) newRetryWrapper() retryRPCFunc {
return func(rpcCtx context.Context, f rpcFunc, rp retryPolicy) error {
var isStop retryStopErrFunc
switch rp {
case repeatable:
isStop = isRepeatableStopError
case nonRepeatable:
isStop = isNonRepeatableStopError
}
for {
if err := readyWait(rpcCtx, c.ctx, c.balancer.ConnectNotify()); err != nil {
return err
@ -60,17 +94,13 @@ func (c *Client) newRetryWrapper(isStop retryStopErrFunc) retryRPCFunc {
if err == nil {
return nil
}
if logger.V(4) {
logger.Infof("clientv3/retry: error %q on pinned endpoint %q", err.Error(), pinned)
}
logger.Lvl(4).Infof("clientv3/retry: error %q on pinned endpoint %q", err.Error(), pinned)
if s, ok := status.FromError(err); ok && (s.Code() == codes.Unavailable || s.Code() == codes.DeadlineExceeded || s.Code() == codes.Internal) {
// mark this before endpoint switch is triggered
c.balancer.hostPortError(pinned, err)
c.balancer.next()
if logger.V(4) {
logger.Infof("clientv3/retry: switching from %q due to error %q", pinned, err.Error())
}
logger.Lvl(4).Infof("clientv3/retry: switching from %q due to error %q", pinned, err.Error())
}
if isStop(err) {
@ -80,24 +110,20 @@ func (c *Client) newRetryWrapper(isStop retryStopErrFunc) retryRPCFunc {
}
}
func (c *Client) newAuthRetryWrapper() retryRPCFunc {
return func(rpcCtx context.Context, f rpcFunc) error {
func (c *Client) newAuthRetryWrapper(retryf retryRPCFunc) retryRPCFunc {
return func(rpcCtx context.Context, f rpcFunc, rp retryPolicy) error {
for {
pinned := c.balancer.pinned()
err := f(rpcCtx)
err := retryf(rpcCtx, f, rp)
if err == nil {
return nil
}
if logger.V(4) {
logger.Infof("clientv3/auth-retry: error %q on pinned endpoint %q", err.Error(), pinned)
}
logger.Lvl(4).Infof("clientv3/auth-retry: error %q on pinned endpoint %q", err.Error(), pinned)
// always stop retry on etcd errors other than invalid auth token
if rpctypes.Error(err) == rpctypes.ErrInvalidAuthToken {
gterr := c.getToken(rpcCtx)
if gterr != nil {
if logger.V(4) {
logger.Infof("clientv3/auth-retry: cannot retry due to error %q(%q) on pinned endpoint %q", err.Error(), gterr.Error(), pinned)
}
logger.Lvl(4).Infof("clientv3/auth-retry: cannot retry due to error %q(%q) on pinned endpoint %q", err.Error(), gterr.Error(), pinned)
return err // return the original error for simplicity
}
continue
@ -107,390 +133,364 @@ func (c *Client) newAuthRetryWrapper() retryRPCFunc {
}
}
type retryKVClient struct {
kc pb.KVClient
retryf retryRPCFunc
}
// RetryKVClient implements a KVClient.
func RetryKVClient(c *Client) pb.KVClient {
repeatableRetry := c.newRetryWrapper(isRepeatableStopError)
nonRepeatableRetry := c.newRetryWrapper(isNonRepeatableStopError)
conn := pb.NewKVClient(c.conn)
retryBasic := &retryKVClient{&nonRepeatableKVClient{conn, nonRepeatableRetry}, repeatableRetry}
retryAuthWrapper := c.newAuthRetryWrapper()
return &retryKVClient{
&nonRepeatableKVClient{retryBasic, retryAuthWrapper},
retryAuthWrapper}
kc: pb.NewKVClient(c.conn),
retryf: c.newAuthRetryWrapper(c.newRetryWrapper()),
}
}
type retryKVClient struct {
*nonRepeatableKVClient
repeatableRetry retryRPCFunc
}
func (rkv *retryKVClient) Range(ctx context.Context, in *pb.RangeRequest, opts ...grpc.CallOption) (resp *pb.RangeResponse, err error) {
err = rkv.repeatableRetry(ctx, func(rctx context.Context) error {
err = rkv.retryf(ctx, func(rctx context.Context) error {
resp, err = rkv.kc.Range(rctx, in, opts...)
return err
})
}, repeatable)
return resp, err
}
type nonRepeatableKVClient struct {
kc pb.KVClient
nonRepeatableRetry retryRPCFunc
}
func (rkv *nonRepeatableKVClient) Put(ctx context.Context, in *pb.PutRequest, opts ...grpc.CallOption) (resp *pb.PutResponse, err error) {
err = rkv.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rkv *retryKVClient) Put(ctx context.Context, in *pb.PutRequest, opts ...grpc.CallOption) (resp *pb.PutResponse, err error) {
err = rkv.retryf(ctx, func(rctx context.Context) error {
resp, err = rkv.kc.Put(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
func (rkv *nonRepeatableKVClient) DeleteRange(ctx context.Context, in *pb.DeleteRangeRequest, opts ...grpc.CallOption) (resp *pb.DeleteRangeResponse, err error) {
err = rkv.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rkv *retryKVClient) DeleteRange(ctx context.Context, in *pb.DeleteRangeRequest, opts ...grpc.CallOption) (resp *pb.DeleteRangeResponse, err error) {
err = rkv.retryf(ctx, func(rctx context.Context) error {
resp, err = rkv.kc.DeleteRange(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
func (rkv *nonRepeatableKVClient) Txn(ctx context.Context, in *pb.TxnRequest, opts ...grpc.CallOption) (resp *pb.TxnResponse, err error) {
// TODO: repeatableRetry if read-only txn
err = rkv.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rkv *retryKVClient) Txn(ctx context.Context, in *pb.TxnRequest, opts ...grpc.CallOption) (resp *pb.TxnResponse, err error) {
// TODO: "repeatable" for read-only txn
err = rkv.retryf(ctx, func(rctx context.Context) error {
resp, err = rkv.kc.Txn(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
func (rkv *nonRepeatableKVClient) Compact(ctx context.Context, in *pb.CompactionRequest, opts ...grpc.CallOption) (resp *pb.CompactionResponse, err error) {
err = rkv.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rkv *retryKVClient) Compact(ctx context.Context, in *pb.CompactionRequest, opts ...grpc.CallOption) (resp *pb.CompactionResponse, err error) {
err = rkv.retryf(ctx, func(rctx context.Context) error {
resp, err = rkv.kc.Compact(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
type retryLeaseClient struct {
lc pb.LeaseClient
repeatableRetry retryRPCFunc
lc pb.LeaseClient
retryf retryRPCFunc
}
// RetryLeaseClient implements a LeaseClient.
func RetryLeaseClient(c *Client) pb.LeaseClient {
retry := &retryLeaseClient{
pb.NewLeaseClient(c.conn),
c.newRetryWrapper(isRepeatableStopError),
return &retryLeaseClient{
lc: pb.NewLeaseClient(c.conn),
retryf: c.newAuthRetryWrapper(c.newRetryWrapper()),
}
return &retryLeaseClient{retry, c.newAuthRetryWrapper()}
}
func (rlc *retryLeaseClient) LeaseTimeToLive(ctx context.Context, in *pb.LeaseTimeToLiveRequest, opts ...grpc.CallOption) (resp *pb.LeaseTimeToLiveResponse, err error) {
err = rlc.repeatableRetry(ctx, func(rctx context.Context) error {
err = rlc.retryf(ctx, func(rctx context.Context) error {
resp, err = rlc.lc.LeaseTimeToLive(rctx, in, opts...)
return err
})
}, repeatable)
return resp, err
}
func (rlc *retryLeaseClient) LeaseLeases(ctx context.Context, in *pb.LeaseLeasesRequest, opts ...grpc.CallOption) (resp *pb.LeaseLeasesResponse, err error) {
err = rlc.repeatableRetry(ctx, func(rctx context.Context) error {
err = rlc.retryf(ctx, func(rctx context.Context) error {
resp, err = rlc.lc.LeaseLeases(rctx, in, opts...)
return err
})
}, repeatable)
return resp, err
}
func (rlc *retryLeaseClient) LeaseGrant(ctx context.Context, in *pb.LeaseGrantRequest, opts ...grpc.CallOption) (resp *pb.LeaseGrantResponse, err error) {
err = rlc.repeatableRetry(ctx, func(rctx context.Context) error {
err = rlc.retryf(ctx, func(rctx context.Context) error {
resp, err = rlc.lc.LeaseGrant(rctx, in, opts...)
return err
})
}, repeatable)
return resp, err
}
func (rlc *retryLeaseClient) LeaseRevoke(ctx context.Context, in *pb.LeaseRevokeRequest, opts ...grpc.CallOption) (resp *pb.LeaseRevokeResponse, err error) {
err = rlc.repeatableRetry(ctx, func(rctx context.Context) error {
err = rlc.retryf(ctx, func(rctx context.Context) error {
resp, err = rlc.lc.LeaseRevoke(rctx, in, opts...)
return err
})
}, repeatable)
return resp, err
}
func (rlc *retryLeaseClient) LeaseKeepAlive(ctx context.Context, opts ...grpc.CallOption) (stream pb.Lease_LeaseKeepAliveClient, err error) {
err = rlc.repeatableRetry(ctx, func(rctx context.Context) error {
err = rlc.retryf(ctx, func(rctx context.Context) error {
stream, err = rlc.lc.LeaseKeepAlive(rctx, opts...)
return err
})
}, repeatable)
return stream, err
}
type retryClusterClient struct {
*nonRepeatableClusterClient
repeatableRetry retryRPCFunc
cc pb.ClusterClient
retryf retryRPCFunc
}
// RetryClusterClient implements a ClusterClient.
func RetryClusterClient(c *Client) pb.ClusterClient {
repeatableRetry := c.newRetryWrapper(isRepeatableStopError)
nonRepeatableRetry := c.newRetryWrapper(isNonRepeatableStopError)
cc := pb.NewClusterClient(c.conn)
return &retryClusterClient{&nonRepeatableClusterClient{cc, nonRepeatableRetry}, repeatableRetry}
return &retryClusterClient{
cc: pb.NewClusterClient(c.conn),
retryf: c.newRetryWrapper(),
}
}
func (rcc *retryClusterClient) MemberList(ctx context.Context, in *pb.MemberListRequest, opts ...grpc.CallOption) (resp *pb.MemberListResponse, err error) {
err = rcc.repeatableRetry(ctx, func(rctx context.Context) error {
err = rcc.retryf(ctx, func(rctx context.Context) error {
resp, err = rcc.cc.MemberList(rctx, in, opts...)
return err
})
}, repeatable)
return resp, err
}
type nonRepeatableClusterClient struct {
cc pb.ClusterClient
nonRepeatableRetry retryRPCFunc
}
func (rcc *nonRepeatableClusterClient) MemberAdd(ctx context.Context, in *pb.MemberAddRequest, opts ...grpc.CallOption) (resp *pb.MemberAddResponse, err error) {
err = rcc.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rcc *retryClusterClient) MemberAdd(ctx context.Context, in *pb.MemberAddRequest, opts ...grpc.CallOption) (resp *pb.MemberAddResponse, err error) {
err = rcc.retryf(ctx, func(rctx context.Context) error {
resp, err = rcc.cc.MemberAdd(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
func (rcc *nonRepeatableClusterClient) MemberRemove(ctx context.Context, in *pb.MemberRemoveRequest, opts ...grpc.CallOption) (resp *pb.MemberRemoveResponse, err error) {
err = rcc.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rcc *retryClusterClient) MemberRemove(ctx context.Context, in *pb.MemberRemoveRequest, opts ...grpc.CallOption) (resp *pb.MemberRemoveResponse, err error) {
err = rcc.retryf(ctx, func(rctx context.Context) error {
resp, err = rcc.cc.MemberRemove(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
func (rcc *nonRepeatableClusterClient) MemberUpdate(ctx context.Context, in *pb.MemberUpdateRequest, opts ...grpc.CallOption) (resp *pb.MemberUpdateResponse, err error) {
err = rcc.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rcc *retryClusterClient) MemberUpdate(ctx context.Context, in *pb.MemberUpdateRequest, opts ...grpc.CallOption) (resp *pb.MemberUpdateResponse, err error) {
err = rcc.retryf(ctx, func(rctx context.Context) error {
resp, err = rcc.cc.MemberUpdate(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
type retryMaintenanceClient struct {
mc pb.MaintenanceClient
retryf retryRPCFunc
}
// RetryMaintenanceClient implements a Maintenance.
func RetryMaintenanceClient(c *Client, conn *grpc.ClientConn) pb.MaintenanceClient {
repeatableRetry := c.newRetryWrapper(isRepeatableStopError)
nonRepeatableRetry := c.newRetryWrapper(isNonRepeatableStopError)
mc := pb.NewMaintenanceClient(conn)
return &retryMaintenanceClient{&nonRepeatableMaintenanceClient{mc, nonRepeatableRetry}, repeatableRetry}
}
type retryMaintenanceClient struct {
*nonRepeatableMaintenanceClient
repeatableRetry retryRPCFunc
return &retryMaintenanceClient{
mc: pb.NewMaintenanceClient(conn),
retryf: c.newRetryWrapper(),
}
}
func (rmc *retryMaintenanceClient) Alarm(ctx context.Context, in *pb.AlarmRequest, opts ...grpc.CallOption) (resp *pb.AlarmResponse, err error) {
err = rmc.repeatableRetry(ctx, func(rctx context.Context) error {
err = rmc.retryf(ctx, func(rctx context.Context) error {
resp, err = rmc.mc.Alarm(rctx, in, opts...)
return err
})
}, repeatable)
return resp, err
}
func (rmc *retryMaintenanceClient) Status(ctx context.Context, in *pb.StatusRequest, opts ...grpc.CallOption) (resp *pb.StatusResponse, err error) {
err = rmc.repeatableRetry(ctx, func(rctx context.Context) error {
err = rmc.retryf(ctx, func(rctx context.Context) error {
resp, err = rmc.mc.Status(rctx, in, opts...)
return err
})
}, repeatable)
return resp, err
}
func (rmc *retryMaintenanceClient) Hash(ctx context.Context, in *pb.HashRequest, opts ...grpc.CallOption) (resp *pb.HashResponse, err error) {
err = rmc.repeatableRetry(ctx, func(rctx context.Context) error {
err = rmc.retryf(ctx, func(rctx context.Context) error {
resp, err = rmc.mc.Hash(rctx, in, opts...)
return err
})
}, repeatable)
return resp, err
}
func (rmc *retryMaintenanceClient) HashKV(ctx context.Context, in *pb.HashKVRequest, opts ...grpc.CallOption) (resp *pb.HashKVResponse, err error) {
err = rmc.repeatableRetry(ctx, func(rctx context.Context) error {
err = rmc.retryf(ctx, func(rctx context.Context) error {
resp, err = rmc.mc.HashKV(rctx, in, opts...)
return err
})
}, repeatable)
return resp, err
}
func (rmc *retryMaintenanceClient) Snapshot(ctx context.Context, in *pb.SnapshotRequest, opts ...grpc.CallOption) (stream pb.Maintenance_SnapshotClient, err error) {
err = rmc.repeatableRetry(ctx, func(rctx context.Context) error {
err = rmc.retryf(ctx, func(rctx context.Context) error {
stream, err = rmc.mc.Snapshot(rctx, in, opts...)
return err
})
}, repeatable)
return stream, err
}
func (rmc *retryMaintenanceClient) MoveLeader(ctx context.Context, in *pb.MoveLeaderRequest, opts ...grpc.CallOption) (resp *pb.MoveLeaderResponse, err error) {
err = rmc.repeatableRetry(ctx, func(rctx context.Context) error {
err = rmc.retryf(ctx, func(rctx context.Context) error {
resp, err = rmc.mc.MoveLeader(rctx, in, opts...)
return err
})
}, repeatable)
return resp, err
}
type nonRepeatableMaintenanceClient struct {
mc pb.MaintenanceClient
nonRepeatableRetry retryRPCFunc
}
func (rmc *nonRepeatableMaintenanceClient) Defragment(ctx context.Context, in *pb.DefragmentRequest, opts ...grpc.CallOption) (resp *pb.DefragmentResponse, err error) {
err = rmc.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rmc *retryMaintenanceClient) Defragment(ctx context.Context, in *pb.DefragmentRequest, opts ...grpc.CallOption) (resp *pb.DefragmentResponse, err error) {
err = rmc.retryf(ctx, func(rctx context.Context) error {
resp, err = rmc.mc.Defragment(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
type retryAuthClient struct {
*nonRepeatableAuthClient
repeatableRetry retryRPCFunc
ac pb.AuthClient
retryf retryRPCFunc
}
// RetryAuthClient implements a AuthClient.
func RetryAuthClient(c *Client) pb.AuthClient {
repeatableRetry := c.newRetryWrapper(isRepeatableStopError)
nonRepeatableRetry := c.newRetryWrapper(isNonRepeatableStopError)
ac := pb.NewAuthClient(c.conn)
return &retryAuthClient{&nonRepeatableAuthClient{ac, nonRepeatableRetry}, repeatableRetry}
return &retryAuthClient{
ac: pb.NewAuthClient(c.conn),
retryf: c.newRetryWrapper(),
}
}
func (rac *retryAuthClient) UserList(ctx context.Context, in *pb.AuthUserListRequest, opts ...grpc.CallOption) (resp *pb.AuthUserListResponse, err error) {
err = rac.repeatableRetry(ctx, func(rctx context.Context) error {
err = rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.ac.UserList(rctx, in, opts...)
return err
})
}, repeatable)
return resp, err
}
func (rac *retryAuthClient) UserGet(ctx context.Context, in *pb.AuthUserGetRequest, opts ...grpc.CallOption) (resp *pb.AuthUserGetResponse, err error) {
err = rac.repeatableRetry(ctx, func(rctx context.Context) error {
err = rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.ac.UserGet(rctx, in, opts...)
return err
})
}, repeatable)
return resp, err
}
func (rac *retryAuthClient) RoleGet(ctx context.Context, in *pb.AuthRoleGetRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleGetResponse, err error) {
err = rac.repeatableRetry(ctx, func(rctx context.Context) error {
err = rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.ac.RoleGet(rctx, in, opts...)
return err
})
}, repeatable)
return resp, err
}
func (rac *retryAuthClient) RoleList(ctx context.Context, in *pb.AuthRoleListRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleListResponse, err error) {
err = rac.repeatableRetry(ctx, func(rctx context.Context) error {
err = rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.ac.RoleList(rctx, in, opts...)
return err
})
}, repeatable)
return resp, err
}
type nonRepeatableAuthClient struct {
ac pb.AuthClient
nonRepeatableRetry retryRPCFunc
}
func (rac *nonRepeatableAuthClient) AuthEnable(ctx context.Context, in *pb.AuthEnableRequest, opts ...grpc.CallOption) (resp *pb.AuthEnableResponse, err error) {
err = rac.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rac *retryAuthClient) AuthEnable(ctx context.Context, in *pb.AuthEnableRequest, opts ...grpc.CallOption) (resp *pb.AuthEnableResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.ac.AuthEnable(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
func (rac *nonRepeatableAuthClient) AuthDisable(ctx context.Context, in *pb.AuthDisableRequest, opts ...grpc.CallOption) (resp *pb.AuthDisableResponse, err error) {
err = rac.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rac *retryAuthClient) AuthDisable(ctx context.Context, in *pb.AuthDisableRequest, opts ...grpc.CallOption) (resp *pb.AuthDisableResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.ac.AuthDisable(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
func (rac *nonRepeatableAuthClient) UserAdd(ctx context.Context, in *pb.AuthUserAddRequest, opts ...grpc.CallOption) (resp *pb.AuthUserAddResponse, err error) {
err = rac.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rac *retryAuthClient) UserAdd(ctx context.Context, in *pb.AuthUserAddRequest, opts ...grpc.CallOption) (resp *pb.AuthUserAddResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.ac.UserAdd(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
func (rac *nonRepeatableAuthClient) UserDelete(ctx context.Context, in *pb.AuthUserDeleteRequest, opts ...grpc.CallOption) (resp *pb.AuthUserDeleteResponse, err error) {
err = rac.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rac *retryAuthClient) UserDelete(ctx context.Context, in *pb.AuthUserDeleteRequest, opts ...grpc.CallOption) (resp *pb.AuthUserDeleteResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.ac.UserDelete(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
func (rac *nonRepeatableAuthClient) UserChangePassword(ctx context.Context, in *pb.AuthUserChangePasswordRequest, opts ...grpc.CallOption) (resp *pb.AuthUserChangePasswordResponse, err error) {
err = rac.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rac *retryAuthClient) UserChangePassword(ctx context.Context, in *pb.AuthUserChangePasswordRequest, opts ...grpc.CallOption) (resp *pb.AuthUserChangePasswordResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.ac.UserChangePassword(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
func (rac *nonRepeatableAuthClient) UserGrantRole(ctx context.Context, in *pb.AuthUserGrantRoleRequest, opts ...grpc.CallOption) (resp *pb.AuthUserGrantRoleResponse, err error) {
err = rac.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rac *retryAuthClient) UserGrantRole(ctx context.Context, in *pb.AuthUserGrantRoleRequest, opts ...grpc.CallOption) (resp *pb.AuthUserGrantRoleResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.ac.UserGrantRole(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
func (rac *nonRepeatableAuthClient) UserRevokeRole(ctx context.Context, in *pb.AuthUserRevokeRoleRequest, opts ...grpc.CallOption) (resp *pb.AuthUserRevokeRoleResponse, err error) {
err = rac.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rac *retryAuthClient) UserRevokeRole(ctx context.Context, in *pb.AuthUserRevokeRoleRequest, opts ...grpc.CallOption) (resp *pb.AuthUserRevokeRoleResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.ac.UserRevokeRole(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
func (rac *nonRepeatableAuthClient) RoleAdd(ctx context.Context, in *pb.AuthRoleAddRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleAddResponse, err error) {
err = rac.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rac *retryAuthClient) RoleAdd(ctx context.Context, in *pb.AuthRoleAddRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleAddResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.ac.RoleAdd(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
func (rac *nonRepeatableAuthClient) RoleDelete(ctx context.Context, in *pb.AuthRoleDeleteRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleDeleteResponse, err error) {
err = rac.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rac *retryAuthClient) RoleDelete(ctx context.Context, in *pb.AuthRoleDeleteRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleDeleteResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.ac.RoleDelete(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
func (rac *nonRepeatableAuthClient) RoleGrantPermission(ctx context.Context, in *pb.AuthRoleGrantPermissionRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleGrantPermissionResponse, err error) {
err = rac.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rac *retryAuthClient) RoleGrantPermission(ctx context.Context, in *pb.AuthRoleGrantPermissionRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleGrantPermissionResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.ac.RoleGrantPermission(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
func (rac *nonRepeatableAuthClient) RoleRevokePermission(ctx context.Context, in *pb.AuthRoleRevokePermissionRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleRevokePermissionResponse, err error) {
err = rac.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rac *retryAuthClient) RoleRevokePermission(ctx context.Context, in *pb.AuthRoleRevokePermissionRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleRevokePermissionResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.ac.RoleRevokePermission(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
func (rac *nonRepeatableAuthClient) Authenticate(ctx context.Context, in *pb.AuthenticateRequest, opts ...grpc.CallOption) (resp *pb.AuthenticateResponse, err error) {
err = rac.nonRepeatableRetry(ctx, func(rctx context.Context) error {
func (rac *retryAuthClient) Authenticate(ctx context.Context, in *pb.AuthenticateRequest, opts ...grpc.CallOption) (resp *pb.AuthenticateResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.ac.Authenticate(rctx, in, opts...)
return err
})
}, nonRepeatable)
return resp, err
}
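The retry.go changes above replace the separate repeatable / non-repeatable client wrappers with a single wrapper whose methods pass a retryPolicy ("repeatable" or "nonRepeatable") into one shared retryf closure. A minimal standalone sketch of that shape, assuming a hypothetical errTransient sentinel and omitting etcd's balancer, auth, and gRPC status handling:

package main

import (
	"context"
	"errors"
	"fmt"
)

// Illustrative names only; errTransient is hypothetical, not an etcd error.
type retryPolicy int

const (
	repeatable retryPolicy = iota
	nonRepeatable
)

var errTransient = errors.New("transient")

type rpcFunc func(ctx context.Context) error
type retryRPCFunc func(ctx context.Context, f rpcFunc, rp retryPolicy) error

// newRetryWrapper retries only repeatable calls, and only on a transient error.
func newRetryWrapper() retryRPCFunc {
	return func(ctx context.Context, f rpcFunc, rp retryPolicy) error {
		for {
			err := f(ctx)
			if err == nil {
				return nil
			}
			if rp == nonRepeatable || !errors.Is(err, errTransient) {
				return err
			}
			// repeatable request hit a transient error: try again
		}
	}
}

func main() {
	retryf := newRetryWrapper()
	attempts := 0
	_ = retryf(context.Background(), func(ctx context.Context) error {
		attempts++
		if attempts < 3 {
			return errTransient
		}
		return nil
	}, repeatable)
	fmt.Println("attempts:", attempts) // attempts: 3
}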


@@ -19,6 +19,8 @@ import (
"sync"
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"google.golang.org/grpc"
)
// Txn is the interface that wraps mini-transactions.
@@ -65,6 +67,8 @@ type txn struct {
sus []*pb.RequestOp
fas []*pb.RequestOp
callOpts []grpc.CallOption
}
func (txn *txn) If(cs ...Cmp) Txn {
@@ -139,7 +143,7 @@ func (txn *txn) Commit() (*TxnResponse, error) {
var resp *pb.TxnResponse
var err error
resp, err = txn.kv.remote.Txn(txn.ctx, r)
resp, err = txn.kv.remote.Txn(txn.ctx, r, txn.callOpts...)
if err != nil {
return nil, toErr(txn.ctx, err)
}


@@ -106,7 +106,8 @@ func (wr *WatchResponse) IsProgressNotify() bool {
// watcher implements the Watcher interface
type watcher struct {
remote pb.WatchClient
remote pb.WatchClient
callOpts []grpc.CallOption
// mu protects the grpc streams map
mu sync.RWMutex
@@ -117,8 +118,9 @@ type watcher struct {
// watchGrpcStream tracks all watch resources attached to a single grpc stream.
type watchGrpcStream struct {
owner *watcher
remote pb.WatchClient
owner *watcher
remote pb.WatchClient
callOpts []grpc.CallOption
// ctx controls internal remote.Watch requests
ctx context.Context
@@ -189,14 +191,18 @@ type watcherStream struct {
}
func NewWatcher(c *Client) Watcher {
return NewWatchFromWatchClient(pb.NewWatchClient(c.conn))
return NewWatchFromWatchClient(pb.NewWatchClient(c.conn), c)
}
func NewWatchFromWatchClient(wc pb.WatchClient) Watcher {
return &watcher{
func NewWatchFromWatchClient(wc pb.WatchClient, c *Client) Watcher {
w := &watcher{
remote: wc,
streams: make(map[string]*watchGrpcStream),
}
if c != nil {
w.callOpts = c.callOpts
}
return w
}
// never closes
@@ -215,6 +221,7 @@ func (w *watcher) newWatcherGrpcStream(inctx context.Context) *watchGrpcStream {
wgs := &watchGrpcStream{
owner: w,
remote: w.remote,
callOpts: w.callOpts,
ctx: ctx,
ctxKey: streamKeyFromCtx(inctx),
cancel: cancel,
@@ -762,10 +769,13 @@ func (w *watchGrpcStream) joinSubstreams() {
}
}
var maxBackoff = 100 * time.Millisecond
// openWatchClient retries opening a watch client until success or halt.
// manually retry in case "ws==nil && err==nil"
// TODO: remove FailFast=false
func (w *watchGrpcStream) openWatchClient() (ws pb.Watch_WatchClient, err error) {
backoff := time.Millisecond
for {
select {
case <-w.ctx.Done():
@@ -775,12 +785,23 @@ func (w *watchGrpcStream) openWatchClient() (ws pb.Watch_WatchClient, err error)
return nil, err
default:
}
if ws, err = w.remote.Watch(w.ctx, grpc.FailFast(false)); ws != nil && err == nil {
if ws, err = w.remote.Watch(w.ctx, w.callOpts...); ws != nil && err == nil {
break
}
if isHaltErr(w.ctx, err) {
return nil, v3rpc.Error(err)
}
if isUnavailableErr(w.ctx, err) {
// retry, but backoff
if backoff < maxBackoff {
// 25% backoff factor
backoff = backoff + backoff/4
if backoff > maxBackoff {
backoff = maxBackoff
}
}
time.Sleep(backoff)
}
}
return ws, nil
}
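For reference, a small self-contained sketch (not part of this diff) of how the backoff above grows: it starts at 1ms, increases by 25% after each Unavailable error, and is clamped at the 100ms maxBackoff:

package main

import (
	"fmt"
	"time"
)

func main() {
	const maxBackoff = 100 * time.Millisecond
	backoff := time.Millisecond
	for attempt := 1; attempt <= 25; attempt++ {
		fmt.Printf("attempt %2d: sleep %v\n", attempt, backoff)
		// same growth rule as openWatchClient: +25%, capped at maxBackoff
		if backoff < maxBackoff {
			backoff += backoff / 4
			if backoff > maxBackoff {
				backoff = maxBackoff
			}
		}
	}
}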

cmd/functional Symbolic link

@@ -0,0 +1 @@
../functional


@@ -77,15 +77,20 @@ func NewHighBiased(epsilon float64) *Stream {
// is guaranteed to be within (Quantile±Epsilon).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error properties.
func NewTargeted(targets map[float64]float64) *Stream {
func NewTargeted(targetMap map[float64]float64) *Stream {
// Convert map to slice to avoid slow iterations on a map.
// ƒ is called on the hot path, so converting the map to a slice
// beforehand results in significant CPU savings.
targets := targetMapToSlice(targetMap)
ƒ := func(s *stream, r float64) float64 {
var m = math.MaxFloat64
var f float64
for quantile, epsilon := range targets {
if quantile*s.n <= r {
f = (2 * epsilon * r) / quantile
for _, t := range targets {
if t.quantile*s.n <= r {
f = (2 * t.epsilon * r) / t.quantile
} else {
f = (2 * epsilon * (s.n - r)) / (1 - quantile)
f = (2 * t.epsilon * (s.n - r)) / (1 - t.quantile)
}
if f < m {
m = f
@@ -96,6 +101,25 @@ func NewTargeted(targets map[float64]float64) *Stream {
return newStream(ƒ)
}
type target struct {
quantile float64
epsilon float64
}
func targetMapToSlice(targetMap map[float64]float64) []target {
targets := make([]target, 0, len(targetMap))
for quantile, epsilon := range targetMap {
t := target{
quantile: quantile,
epsilon: epsilon,
}
targets = append(targets, t)
}
return targets
}
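The comment in NewTargeted is the whole rationale for this change; a hypothetical *_test.go micro-benchmark (not from this change set) showing why iterating a pre-built slice on the hot path beats iterating the original map:

// Save as a *_test.go file and run with `go test -bench .`.
package main

import "testing"

type target struct{ quantile, epsilon float64 }

var (
	targetMap   = map[float64]float64{0.50: 0.05, 0.90: 0.01, 0.99: 0.001}
	targetSlice = []target{{0.50, 0.05}, {0.90, 0.01}, {0.99, 0.001}}
	sink        float64 // keeps the compiler from optimizing the loops away
)

func BenchmarkRangeMap(b *testing.B) {
	for i := 0; i < b.N; i++ {
		for q, e := range targetMap {
			sink += q * e
		}
	}
}

func BenchmarkRangeSlice(b *testing.B) {
	for i := 0; i < b.N; i++ {
		for _, t := range targetSlice {
			sink += t.quantile * t.epsilon
		}
	}
}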
// Stream computes quantiles for a stream of float64s. It is not thread-safe by
// design. Take care when using across multiple goroutines.
type Stream struct {


@@ -802,9 +802,7 @@ retry:
// pass success, or bolt internal errors, to all callers
for _, c := range b.calls {
if c.err != nil {
c.err <- err
}
c.err <- err
}
break retry
}


@@ -483,7 +483,7 @@ func (tx *Tx) allocate(count int) (*page, error) {
tx.pages[p.id] = p
// Update statistics.
tx.stats.PageCount++
tx.stats.PageCount += count
tx.stats.PageAlloc += count * tx.db.pageSize
return p, nil

cmd/vendor/github.com/gogo/protobuf/gogoproto/doc.go generated vendored Normal file

@@ -0,0 +1,169 @@
// Protocol Buffers for Go with Gadgets
//
// Copyright (c) 2013, The GoGo Authors. All rights reserved.
// http://github.com/gogo/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
/*
Package gogoproto provides extensions for protocol buffers to achieve:
- fast marshalling and unmarshalling.
- peace of mind by optionally generating test and benchmark code.
- more canonical Go structures.
- less typing by optionally generating extra helper code.
- goprotobuf compatibility
More Canonical Go Structures
A lot of the time, working with a goprotobuf struct will lead you to a place where you create another struct that is easier to work with, and then have a function to copy the values between the two structs.
You might also find that basic structs that started their life as part of an API need to be sent over the wire. With gob, you could just send it. With goprotobuf, you need to make a parallel struct.
Gogoprotobuf tries to fix these problems with the nullable, embed, customtype and customname field extensions.
- nullable, if false, a field is generated without a pointer (see warning below).
- embed, if true, the field is generated as an embedded field.
- customtype, It works with the Marshal and Unmarshal methods, to allow you to have your own types in your struct, but marshal to bytes. For example, custom.Uuid or custom.Fixed128
- customname (beta), Changes the generated fieldname. This is especially useful when generated methods conflict with fieldnames.
- casttype (beta), Changes the generated fieldtype. All generated code assumes that this type is castable to the protocol buffer field type. It does not work for structs or enums.
- castkey (beta), Changes the generated fieldtype for a map key. All generated code assumes that this type is castable to the protocol buffer field type. Only supported on maps.
- castvalue (beta), Changes the generated fieldtype for a map value. All generated code assumes that this type is castable to the protocol buffer field type. Only supported on maps.
Warning about nullable: According to the Protocol Buffer specification, you should be able to tell whether a field is set or unset. With the option nullable=false this feature is lost, since your non-nullable fields will always be set. It can be seen as a layer on top of Protocol Buffers, where before and after marshalling all non-nullable fields are set and they cannot be unset.
Let us look at:
github.com/gogo/protobuf/test/example/example.proto
for a quicker overview.
The following message:
package test;
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
message A {
optional string Description = 1 [(gogoproto.nullable) = false];
optional int64 Number = 2 [(gogoproto.nullable) = false];
optional bytes Id = 3 [(gogoproto.customtype) = "github.com/gogo/protobuf/test/custom.Uuid", (gogoproto.nullable) = false];
}
Will generate a go struct which looks a lot like this:
type A struct {
Description string
Number int64
Id github_com_gogo_protobuf_test_custom.Uuid
}
You will see there are no pointers, since all fields are non-nullable.
You will also see a custom type which marshals to a string.
Be warned it is your responsibility to test your custom types thoroughly.
You should think of every possible empty and nil case for your marshaling, unmarshaling and size methods.
Next we will embed the message A in message B.
message B {
optional A A = 1 [(gogoproto.nullable) = false, (gogoproto.embed) = true];
repeated bytes G = 2 [(gogoproto.customtype) = "github.com/gogo/protobuf/test/custom.Uint128", (gogoproto.nullable) = false];
}
See below that A is embedded in B.
type B struct {
A
G []github_com_gogo_protobuf_test_custom.Uint128
}
Also see the repeated custom type.
type Uint128 [2]uint64
Next we will create a custom name for one of our fields.
message C {
optional int64 size = 1 [(gogoproto.customname) = "MySize"];
}
See below that the field's name is MySize and not Size.
type C struct {
MySize *int64
}
This is useful when having a protocol buffer message with a field name which conflicts with a generated method.
As an example, having a field name size and using the sizer plugin to generate a Size method will cause a go compiler error.
Using customname you can fix this error without changing the field name.
This is typically useful when working with a protocol buffer that was designed before these methods and/or the go language were available.
Gogoprotobuf also has some more subtle changes, these could be changed back:
- the generated package names for imports do not have the extra /filename.pb,
but are actually the imports specified in the .proto file.
Gogoprotobuf also has lost some features which should be brought back with time:
- Marshalling and unmarshalling with reflect and without the unsafe package,
this requires work in pointer_reflect.go
Why does nullable break protocol buffer specifications:
The protocol buffer specification states, somewhere, that you should be able to tell whether a
field is set or unset. With the option nullable=false this feature is lost,
since your non-nullable fields will always be set. It can be seen as a layer on top of
protocol buffers, where before and after marshalling all non-nullable fields are set
and they cannot be unset.
Goprotobuf Compatibility:
Gogoprotobuf is compatible with Goprotobuf, because it is compatible with protocol buffers.
Gogoprotobuf generates the same code as goprotobuf if no extensions are used.
The enumprefix, getters and stringer extensions can be used to remove some of the unnecessary code generated by goprotobuf:
- gogoproto_import, if false, the generated code imports github.com/golang/protobuf/proto instead of github.com/gogo/protobuf/proto.
- goproto_enum_prefix, if false, generates the enum constant names without the messagetype prefix
- goproto_enum_stringer (experimental), if false, the enum is generated without the default string method, this is useful for rather using enum_stringer, or allowing you to write your own string method.
- goproto_getters, if false, the message is generated without get methods, this is useful when you would rather want to use face
- goproto_stringer, if false, the message is generated without the default string method, this is useful for rather using stringer, or allowing you to write your own string method.
- goproto_extensions_map (beta), if false, the extensions field is generated as type []byte instead of type map[int32]proto.Extension
- goproto_unrecognized (beta), if false, XXX_unrecognized field is not generated. This is useful in conjunction with gogoproto.nullable=false, to generate structures completely devoid of pointers and reduce GC pressure at the cost of losing information about unrecognized fields.
- goproto_registration (beta), if true, the generated files will register all messages and types against both gogo/protobuf and golang/protobuf. This is necessary when using third-party packages which read registrations from golang/protobuf (such as the grpc-gateway).
Less Typing and Peace of Mind is explained in their specific plugin folders godoc:
- github.com/gogo/protobuf/plugin/<extension_name>
If you do not use any of these extension the code that is generated
will be the same as if goprotobuf has generated it.
The most complete way to see examples is to look at
github.com/gogo/protobuf/test/thetest.proto
Gogoprototest is a separate project,
because we want to keep gogoprotobuf independent of goprotobuf,
but we still want to test it thoroughly.
*/
package gogoproto


@@ -0,0 +1,803 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: gogo.proto
/*
Package gogoproto is a generated protocol buffer package.
It is generated from these files:
gogo.proto
It has these top-level messages:
*/
package gogoproto
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
import google_protobuf "github.com/gogo/protobuf/protoc-gen-gogo/descriptor"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
var E_GoprotoEnumPrefix = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.EnumOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 62001,
Name: "gogoproto.goproto_enum_prefix",
Tag: "varint,62001,opt,name=goproto_enum_prefix,json=goprotoEnumPrefix",
Filename: "gogo.proto",
}
var E_GoprotoEnumStringer = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.EnumOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 62021,
Name: "gogoproto.goproto_enum_stringer",
Tag: "varint,62021,opt,name=goproto_enum_stringer,json=goprotoEnumStringer",
Filename: "gogo.proto",
}
var E_EnumStringer = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.EnumOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 62022,
Name: "gogoproto.enum_stringer",
Tag: "varint,62022,opt,name=enum_stringer,json=enumStringer",
Filename: "gogo.proto",
}
var E_EnumCustomname = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.EnumOptions)(nil),
ExtensionType: (*string)(nil),
Field: 62023,
Name: "gogoproto.enum_customname",
Tag: "bytes,62023,opt,name=enum_customname,json=enumCustomname",
Filename: "gogo.proto",
}
var E_Enumdecl = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.EnumOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 62024,
Name: "gogoproto.enumdecl",
Tag: "varint,62024,opt,name=enumdecl",
Filename: "gogo.proto",
}
var E_EnumvalueCustomname = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.EnumValueOptions)(nil),
ExtensionType: (*string)(nil),
Field: 66001,
Name: "gogoproto.enumvalue_customname",
Tag: "bytes,66001,opt,name=enumvalue_customname,json=enumvalueCustomname",
Filename: "gogo.proto",
}
var E_GoprotoGettersAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63001,
Name: "gogoproto.goproto_getters_all",
Tag: "varint,63001,opt,name=goproto_getters_all,json=goprotoGettersAll",
Filename: "gogo.proto",
}
var E_GoprotoEnumPrefixAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63002,
Name: "gogoproto.goproto_enum_prefix_all",
Tag: "varint,63002,opt,name=goproto_enum_prefix_all,json=goprotoEnumPrefixAll",
Filename: "gogo.proto",
}
var E_GoprotoStringerAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63003,
Name: "gogoproto.goproto_stringer_all",
Tag: "varint,63003,opt,name=goproto_stringer_all,json=goprotoStringerAll",
Filename: "gogo.proto",
}
var E_VerboseEqualAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63004,
Name: "gogoproto.verbose_equal_all",
Tag: "varint,63004,opt,name=verbose_equal_all,json=verboseEqualAll",
Filename: "gogo.proto",
}
var E_FaceAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63005,
Name: "gogoproto.face_all",
Tag: "varint,63005,opt,name=face_all,json=faceAll",
Filename: "gogo.proto",
}
var E_GostringAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63006,
Name: "gogoproto.gostring_all",
Tag: "varint,63006,opt,name=gostring_all,json=gostringAll",
Filename: "gogo.proto",
}
var E_PopulateAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63007,
Name: "gogoproto.populate_all",
Tag: "varint,63007,opt,name=populate_all,json=populateAll",
Filename: "gogo.proto",
}
var E_StringerAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63008,
Name: "gogoproto.stringer_all",
Tag: "varint,63008,opt,name=stringer_all,json=stringerAll",
Filename: "gogo.proto",
}
var E_OnlyoneAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63009,
Name: "gogoproto.onlyone_all",
Tag: "varint,63009,opt,name=onlyone_all,json=onlyoneAll",
Filename: "gogo.proto",
}
var E_EqualAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63013,
Name: "gogoproto.equal_all",
Tag: "varint,63013,opt,name=equal_all,json=equalAll",
Filename: "gogo.proto",
}
var E_DescriptionAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63014,
Name: "gogoproto.description_all",
Tag: "varint,63014,opt,name=description_all,json=descriptionAll",
Filename: "gogo.proto",
}
var E_TestgenAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63015,
Name: "gogoproto.testgen_all",
Tag: "varint,63015,opt,name=testgen_all,json=testgenAll",
Filename: "gogo.proto",
}
var E_BenchgenAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63016,
Name: "gogoproto.benchgen_all",
Tag: "varint,63016,opt,name=benchgen_all,json=benchgenAll",
Filename: "gogo.proto",
}
var E_MarshalerAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63017,
Name: "gogoproto.marshaler_all",
Tag: "varint,63017,opt,name=marshaler_all,json=marshalerAll",
Filename: "gogo.proto",
}
var E_UnmarshalerAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63018,
Name: "gogoproto.unmarshaler_all",
Tag: "varint,63018,opt,name=unmarshaler_all,json=unmarshalerAll",
Filename: "gogo.proto",
}
var E_StableMarshalerAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63019,
Name: "gogoproto.stable_marshaler_all",
Tag: "varint,63019,opt,name=stable_marshaler_all,json=stableMarshalerAll",
Filename: "gogo.proto",
}
var E_SizerAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63020,
Name: "gogoproto.sizer_all",
Tag: "varint,63020,opt,name=sizer_all,json=sizerAll",
Filename: "gogo.proto",
}
var E_GoprotoEnumStringerAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63021,
Name: "gogoproto.goproto_enum_stringer_all",
Tag: "varint,63021,opt,name=goproto_enum_stringer_all,json=goprotoEnumStringerAll",
Filename: "gogo.proto",
}
var E_EnumStringerAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63022,
Name: "gogoproto.enum_stringer_all",
Tag: "varint,63022,opt,name=enum_stringer_all,json=enumStringerAll",
Filename: "gogo.proto",
}
var E_UnsafeMarshalerAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63023,
Name: "gogoproto.unsafe_marshaler_all",
Tag: "varint,63023,opt,name=unsafe_marshaler_all,json=unsafeMarshalerAll",
Filename: "gogo.proto",
}
var E_UnsafeUnmarshalerAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63024,
Name: "gogoproto.unsafe_unmarshaler_all",
Tag: "varint,63024,opt,name=unsafe_unmarshaler_all,json=unsafeUnmarshalerAll",
Filename: "gogo.proto",
}
var E_GoprotoExtensionsMapAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63025,
Name: "gogoproto.goproto_extensions_map_all",
Tag: "varint,63025,opt,name=goproto_extensions_map_all,json=goprotoExtensionsMapAll",
Filename: "gogo.proto",
}
var E_GoprotoUnrecognizedAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63026,
Name: "gogoproto.goproto_unrecognized_all",
Tag: "varint,63026,opt,name=goproto_unrecognized_all,json=goprotoUnrecognizedAll",
Filename: "gogo.proto",
}
var E_GogoprotoImport = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63027,
Name: "gogoproto.gogoproto_import",
Tag: "varint,63027,opt,name=gogoproto_import,json=gogoprotoImport",
Filename: "gogo.proto",
}
var E_ProtosizerAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63028,
Name: "gogoproto.protosizer_all",
Tag: "varint,63028,opt,name=protosizer_all,json=protosizerAll",
Filename: "gogo.proto",
}
var E_CompareAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63029,
Name: "gogoproto.compare_all",
Tag: "varint,63029,opt,name=compare_all,json=compareAll",
Filename: "gogo.proto",
}
var E_TypedeclAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63030,
Name: "gogoproto.typedecl_all",
Tag: "varint,63030,opt,name=typedecl_all,json=typedeclAll",
Filename: "gogo.proto",
}
var E_EnumdeclAll = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63031,
Name: "gogoproto.enumdecl_all",
Tag: "varint,63031,opt,name=enumdecl_all,json=enumdeclAll",
Filename: "gogo.proto",
}
var E_GoprotoRegistration = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FileOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 63032,
Name: "gogoproto.goproto_registration",
Tag: "varint,63032,opt,name=goproto_registration,json=goprotoRegistration",
Filename: "gogo.proto",
}
var E_GoprotoGetters = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64001,
Name: "gogoproto.goproto_getters",
Tag: "varint,64001,opt,name=goproto_getters,json=goprotoGetters",
Filename: "gogo.proto",
}
var E_GoprotoStringer = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64003,
Name: "gogoproto.goproto_stringer",
Tag: "varint,64003,opt,name=goproto_stringer,json=goprotoStringer",
Filename: "gogo.proto",
}
var E_VerboseEqual = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64004,
Name: "gogoproto.verbose_equal",
Tag: "varint,64004,opt,name=verbose_equal,json=verboseEqual",
Filename: "gogo.proto",
}
var E_Face = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64005,
Name: "gogoproto.face",
Tag: "varint,64005,opt,name=face",
Filename: "gogo.proto",
}
var E_Gostring = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64006,
Name: "gogoproto.gostring",
Tag: "varint,64006,opt,name=gostring",
Filename: "gogo.proto",
}
var E_Populate = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64007,
Name: "gogoproto.populate",
Tag: "varint,64007,opt,name=populate",
Filename: "gogo.proto",
}
var E_Stringer = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 67008,
Name: "gogoproto.stringer",
Tag: "varint,67008,opt,name=stringer",
Filename: "gogo.proto",
}
var E_Onlyone = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64009,
Name: "gogoproto.onlyone",
Tag: "varint,64009,opt,name=onlyone",
Filename: "gogo.proto",
}
var E_Equal = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64013,
Name: "gogoproto.equal",
Tag: "varint,64013,opt,name=equal",
Filename: "gogo.proto",
}
var E_Description = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64014,
Name: "gogoproto.description",
Tag: "varint,64014,opt,name=description",
Filename: "gogo.proto",
}
var E_Testgen = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64015,
Name: "gogoproto.testgen",
Tag: "varint,64015,opt,name=testgen",
Filename: "gogo.proto",
}
var E_Benchgen = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64016,
Name: "gogoproto.benchgen",
Tag: "varint,64016,opt,name=benchgen",
Filename: "gogo.proto",
}
var E_Marshaler = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64017,
Name: "gogoproto.marshaler",
Tag: "varint,64017,opt,name=marshaler",
Filename: "gogo.proto",
}
var E_Unmarshaler = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64018,
Name: "gogoproto.unmarshaler",
Tag: "varint,64018,opt,name=unmarshaler",
Filename: "gogo.proto",
}
var E_StableMarshaler = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64019,
Name: "gogoproto.stable_marshaler",
Tag: "varint,64019,opt,name=stable_marshaler,json=stableMarshaler",
Filename: "gogo.proto",
}
var E_Sizer = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64020,
Name: "gogoproto.sizer",
Tag: "varint,64020,opt,name=sizer",
Filename: "gogo.proto",
}
var E_UnsafeMarshaler = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64023,
Name: "gogoproto.unsafe_marshaler",
Tag: "varint,64023,opt,name=unsafe_marshaler,json=unsafeMarshaler",
Filename: "gogo.proto",
}
var E_UnsafeUnmarshaler = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64024,
Name: "gogoproto.unsafe_unmarshaler",
Tag: "varint,64024,opt,name=unsafe_unmarshaler,json=unsafeUnmarshaler",
Filename: "gogo.proto",
}
var E_GoprotoExtensionsMap = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64025,
Name: "gogoproto.goproto_extensions_map",
Tag: "varint,64025,opt,name=goproto_extensions_map,json=goprotoExtensionsMap",
Filename: "gogo.proto",
}
var E_GoprotoUnrecognized = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64026,
Name: "gogoproto.goproto_unrecognized",
Tag: "varint,64026,opt,name=goproto_unrecognized,json=goprotoUnrecognized",
Filename: "gogo.proto",
}
var E_Protosizer = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64028,
Name: "gogoproto.protosizer",
Tag: "varint,64028,opt,name=protosizer",
Filename: "gogo.proto",
}
var E_Compare = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64029,
Name: "gogoproto.compare",
Tag: "varint,64029,opt,name=compare",
Filename: "gogo.proto",
}
var E_Typedecl = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.MessageOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 64030,
Name: "gogoproto.typedecl",
Tag: "varint,64030,opt,name=typedecl",
Filename: "gogo.proto",
}
var E_Nullable = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FieldOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 65001,
Name: "gogoproto.nullable",
Tag: "varint,65001,opt,name=nullable",
Filename: "gogo.proto",
}
var E_Embed = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FieldOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 65002,
Name: "gogoproto.embed",
Tag: "varint,65002,opt,name=embed",
Filename: "gogo.proto",
}
var E_Customtype = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FieldOptions)(nil),
ExtensionType: (*string)(nil),
Field: 65003,
Name: "gogoproto.customtype",
Tag: "bytes,65003,opt,name=customtype",
Filename: "gogo.proto",
}
var E_Customname = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FieldOptions)(nil),
ExtensionType: (*string)(nil),
Field: 65004,
Name: "gogoproto.customname",
Tag: "bytes,65004,opt,name=customname",
Filename: "gogo.proto",
}
var E_Jsontag = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FieldOptions)(nil),
ExtensionType: (*string)(nil),
Field: 65005,
Name: "gogoproto.jsontag",
Tag: "bytes,65005,opt,name=jsontag",
Filename: "gogo.proto",
}
var E_Moretags = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FieldOptions)(nil),
ExtensionType: (*string)(nil),
Field: 65006,
Name: "gogoproto.moretags",
Tag: "bytes,65006,opt,name=moretags",
Filename: "gogo.proto",
}
var E_Casttype = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FieldOptions)(nil),
ExtensionType: (*string)(nil),
Field: 65007,
Name: "gogoproto.casttype",
Tag: "bytes,65007,opt,name=casttype",
Filename: "gogo.proto",
}
var E_Castkey = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FieldOptions)(nil),
ExtensionType: (*string)(nil),
Field: 65008,
Name: "gogoproto.castkey",
Tag: "bytes,65008,opt,name=castkey",
Filename: "gogo.proto",
}
var E_Castvalue = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FieldOptions)(nil),
ExtensionType: (*string)(nil),
Field: 65009,
Name: "gogoproto.castvalue",
Tag: "bytes,65009,opt,name=castvalue",
Filename: "gogo.proto",
}
var E_Stdtime = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FieldOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 65010,
Name: "gogoproto.stdtime",
Tag: "varint,65010,opt,name=stdtime",
Filename: "gogo.proto",
}
var E_Stdduration = &proto.ExtensionDesc{
ExtendedType: (*google_protobuf.FieldOptions)(nil),
ExtensionType: (*bool)(nil),
Field: 65011,
Name: "gogoproto.stdduration",
Tag: "varint,65011,opt,name=stdduration",
Filename: "gogo.proto",
}
func init() {
proto.RegisterExtension(E_GoprotoEnumPrefix)
proto.RegisterExtension(E_GoprotoEnumStringer)
proto.RegisterExtension(E_EnumStringer)
proto.RegisterExtension(E_EnumCustomname)
proto.RegisterExtension(E_Enumdecl)
proto.RegisterExtension(E_EnumvalueCustomname)
proto.RegisterExtension(E_GoprotoGettersAll)
proto.RegisterExtension(E_GoprotoEnumPrefixAll)
proto.RegisterExtension(E_GoprotoStringerAll)
proto.RegisterExtension(E_VerboseEqualAll)
proto.RegisterExtension(E_FaceAll)
proto.RegisterExtension(E_GostringAll)
proto.RegisterExtension(E_PopulateAll)
proto.RegisterExtension(E_StringerAll)
proto.RegisterExtension(E_OnlyoneAll)
proto.RegisterExtension(E_EqualAll)
proto.RegisterExtension(E_DescriptionAll)
proto.RegisterExtension(E_TestgenAll)
proto.RegisterExtension(E_BenchgenAll)
proto.RegisterExtension(E_MarshalerAll)
proto.RegisterExtension(E_UnmarshalerAll)
proto.RegisterExtension(E_StableMarshalerAll)
proto.RegisterExtension(E_SizerAll)
proto.RegisterExtension(E_GoprotoEnumStringerAll)
proto.RegisterExtension(E_EnumStringerAll)
proto.RegisterExtension(E_UnsafeMarshalerAll)
proto.RegisterExtension(E_UnsafeUnmarshalerAll)
proto.RegisterExtension(E_GoprotoExtensionsMapAll)
proto.RegisterExtension(E_GoprotoUnrecognizedAll)
proto.RegisterExtension(E_GogoprotoImport)
proto.RegisterExtension(E_ProtosizerAll)
proto.RegisterExtension(E_CompareAll)
proto.RegisterExtension(E_TypedeclAll)
proto.RegisterExtension(E_EnumdeclAll)
proto.RegisterExtension(E_GoprotoRegistration)
proto.RegisterExtension(E_GoprotoGetters)
proto.RegisterExtension(E_GoprotoStringer)
proto.RegisterExtension(E_VerboseEqual)
proto.RegisterExtension(E_Face)
proto.RegisterExtension(E_Gostring)
proto.RegisterExtension(E_Populate)
proto.RegisterExtension(E_Stringer)
proto.RegisterExtension(E_Onlyone)
proto.RegisterExtension(E_Equal)
proto.RegisterExtension(E_Description)
proto.RegisterExtension(E_Testgen)
proto.RegisterExtension(E_Benchgen)
proto.RegisterExtension(E_Marshaler)
proto.RegisterExtension(E_Unmarshaler)
proto.RegisterExtension(E_StableMarshaler)
proto.RegisterExtension(E_Sizer)
proto.RegisterExtension(E_UnsafeMarshaler)
proto.RegisterExtension(E_UnsafeUnmarshaler)
proto.RegisterExtension(E_GoprotoExtensionsMap)
proto.RegisterExtension(E_GoprotoUnrecognized)
proto.RegisterExtension(E_Protosizer)
proto.RegisterExtension(E_Compare)
proto.RegisterExtension(E_Typedecl)
proto.RegisterExtension(E_Nullable)
proto.RegisterExtension(E_Embed)
proto.RegisterExtension(E_Customtype)
proto.RegisterExtension(E_Customname)
proto.RegisterExtension(E_Jsontag)
proto.RegisterExtension(E_Moretags)
proto.RegisterExtension(E_Casttype)
proto.RegisterExtension(E_Castkey)
proto.RegisterExtension(E_Castvalue)
proto.RegisterExtension(E_Stdtime)
proto.RegisterExtension(E_Stdduration)
}
func init() { proto.RegisterFile("gogo.proto", fileDescriptorGogo) }
var fileDescriptorGogo = []byte{
// 1201 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x98, 0xcb, 0x6f, 0x1c, 0x45,
0x13, 0xc0, 0xf5, 0xe9, 0x73, 0x64, 0x6f, 0xf9, 0x85, 0xd7, 0xc6, 0x84, 0x08, 0x44, 0x72, 0xe3,
0xe4, 0x9c, 0x22, 0x94, 0xb6, 0x22, 0xcb, 0xb1, 0x1c, 0x2b, 0x11, 0x06, 0x63, 0xe2, 0x00, 0xe2,
0xb0, 0x9a, 0xdd, 0x6d, 0x4f, 0x06, 0x66, 0xa6, 0x87, 0x99, 0x9e, 0x28, 0xce, 0x0d, 0x85, 0x87,
0x10, 0xe2, 0x8d, 0x04, 0x09, 0x49, 0x80, 0x03, 0xef, 0x67, 0x78, 0x1f, 0xb9, 0xf0, 0xb8, 0xf2,
0x3f, 0x70, 0x01, 0xcc, 0xdb, 0x37, 0x5f, 0x50, 0xcd, 0x56, 0xcd, 0xf6, 0xac, 0x57, 0xea, 0xde,
0xdb, 0xec, 0xba, 0x7f, 0xbf, 0xad, 0xa9, 0x9a, 0xae, 0xea, 0x31, 0x80, 0xaf, 0x7c, 0x35, 0x97,
0xa4, 0x4a, 0xab, 0x7a, 0x0d, 0xaf, 0x8b, 0xcb, 0x03, 0x07, 0x7d, 0xa5, 0xfc, 0x50, 0x1e, 0x2e,
0x3e, 0x35, 0xf3, 0xcd, 0xc3, 0x6d, 0x99, 0xb5, 0xd2, 0x20, 0xd1, 0x2a, 0xed, 0x2c, 0x16, 0x77,
0xc1, 0x34, 0x2d, 0x6e, 0xc8, 0x38, 0x8f, 0x1a, 0x49, 0x2a, 0x37, 0x83, 0xf3, 0xf5, 0x5b, 0xe6,
0x3a, 0xe4, 0x1c, 0x93, 0x73, 0xcb, 0x71, 0x1e, 0xdd, 0x9d, 0xe8, 0x40, 0xc5, 0xd9, 0xfe, 0xeb,
0x3f, 0xff, 0xff, 0xe0, 0xff, 0x6e, 0x1f, 0x59, 0x9f, 0x22, 0x14, 0xff, 0xb6, 0x56, 0x80, 0x62,
0x1d, 0x6e, 0xac, 0xf8, 0x32, 0x9d, 0x06, 0xb1, 0x2f, 0x53, 0x8b, 0xf1, 0x3b, 0x32, 0x4e, 0x1b,
0xc6, 0x7b, 0x09, 0x15, 0x4b, 0x30, 0x3e, 0x88, 0xeb, 0x7b, 0x72, 0x8d, 0x49, 0x53, 0xb2, 0x02,
0x93, 0x85, 0xa4, 0x95, 0x67, 0x5a, 0x45, 0xb1, 0x17, 0x49, 0x8b, 0xe6, 0x87, 0x42, 0x53, 0x5b,
0x9f, 0x40, 0x6c, 0xa9, 0xa4, 0x84, 0x80, 0x11, 0xfc, 0xa6, 0x2d, 0x5b, 0xa1, 0xc5, 0xf0, 0x23,
0x05, 0x52, 0xae, 0x17, 0x67, 0x60, 0x06, 0xaf, 0xcf, 0x79, 0x61, 0x2e, 0xcd, 0x48, 0x0e, 0xf5,
0xf5, 0x9c, 0xc1, 0x65, 0x2c, 0xfb, 0xe9, 0xe2, 0x50, 0x11, 0xce, 0x74, 0x29, 0x30, 0x62, 0x32,
0xaa, 0xe8, 0x4b, 0xad, 0x65, 0x9a, 0x35, 0xbc, 0xb0, 0x5f, 0x78, 0x27, 0x82, 0xb0, 0x34, 0x5e,
0xda, 0xae, 0x56, 0x71, 0xa5, 0x43, 0x2e, 0x86, 0xa1, 0xd8, 0x80, 0x9b, 0xfa, 0x3c, 0x15, 0x0e,
0xce, 0xcb, 0xe4, 0x9c, 0xd9, 0xf3, 0x64, 0xa0, 0x76, 0x0d, 0xf8, 0xfb, 0xb2, 0x96, 0x0e, 0xce,
0xd7, 0xc8, 0x59, 0x27, 0x96, 0x4b, 0x8a, 0xc6, 0x53, 0x30, 0x75, 0x4e, 0xa6, 0x4d, 0x95, 0xc9,
0x86, 0x7c, 0x24, 0xf7, 0x42, 0x07, 0xdd, 0x15, 0xd2, 0x4d, 0x12, 0xb8, 0x8c, 0x1c, 0xba, 0x8e,
0xc2, 0xc8, 0xa6, 0xd7, 0x92, 0x0e, 0x8a, 0xab, 0xa4, 0x18, 0xc6, 0xf5, 0x88, 0x2e, 0xc2, 0x98,
0xaf, 0x3a, 0xb7, 0xe4, 0x80, 0x5f, 0x23, 0x7c, 0x94, 0x19, 0x52, 0x24, 0x2a, 0xc9, 0x43, 0x4f,
0xbb, 0x44, 0xf0, 0x3a, 0x2b, 0x98, 0x21, 0xc5, 0x00, 0x69, 0x7d, 0x83, 0x15, 0x99, 0x91, 0xcf,
0x05, 0x18, 0x55, 0x71, 0xb8, 0xa5, 0x62, 0x97, 0x20, 0xde, 0x24, 0x03, 0x10, 0x82, 0x82, 0x79,
0xa8, 0xb9, 0x16, 0xe2, 0xad, 0x6d, 0xde, 0x1e, 0x5c, 0x81, 0x15, 0x98, 0xe4, 0x06, 0x15, 0xa8,
0xd8, 0x41, 0xf1, 0x36, 0x29, 0x26, 0x0c, 0x8c, 0x6e, 0x43, 0xcb, 0x4c, 0xfb, 0xd2, 0x45, 0xf2,
0x0e, 0xdf, 0x06, 0x21, 0x94, 0xca, 0xa6, 0x8c, 0x5b, 0x67, 0xdd, 0x0c, 0xef, 0x72, 0x2a, 0x99,
0x41, 0xc5, 0x12, 0x8c, 0x47, 0x5e, 0x9a, 0x9d, 0xf5, 0x42, 0xa7, 0x72, 0xbc, 0x47, 0x8e, 0xb1,
0x12, 0xa2, 0x8c, 0xe4, 0xf1, 0x20, 0x9a, 0xf7, 0x39, 0x23, 0x06, 0x46, 0x5b, 0x2f, 0xd3, 0x5e,
0x33, 0x94, 0x8d, 0x41, 0x6c, 0x1f, 0xf0, 0xd6, 0xeb, 0xb0, 0xab, 0xa6, 0x71, 0x1e, 0x6a, 0x59,
0x70, 0xc1, 0x49, 0xf3, 0x21, 0x57, 0xba, 0x00, 0x10, 0x7e, 0x00, 0x6e, 0xee, 0x3b, 0x26, 0x1c,
0x64, 0x1f, 0x91, 0x6c, 0xb6, 0xcf, 0xa8, 0xa0, 0x96, 0x30, 0xa8, 0xf2, 0x63, 0x6e, 0x09, 0xb2,
0xc7, 0xb5, 0x06, 0x33, 0x79, 0x9c, 0x79, 0x9b, 0x83, 0x65, 0xed, 0x13, 0xce, 0x5a, 0x87, 0xad,
0x64, 0xed, 0x34, 0xcc, 0x92, 0x71, 0xb0, 0xba, 0x7e, 0xca, 0x8d, 0xb5, 0x43, 0x6f, 0x54, 0xab,
0xfb, 0x20, 0x1c, 0x28, 0xd3, 0x79, 0x5e, 0xcb, 0x38, 0x43, 0xa6, 0x11, 0x79, 0x89, 0x83, 0xf9,
0x3a, 0x99, 0xb9, 0xe3, 0x2f, 0x97, 0x82, 0x55, 0x2f, 0x41, 0xf9, 0xfd, 0xb0, 0x9f, 0xe5, 0x79,
0x9c, 0xca, 0x96, 0xf2, 0xe3, 0xe0, 0x82, 0x6c, 0x3b, 0xa8, 0x3f, 0xeb, 0x29, 0xd5, 0x86, 0x81,
0xa3, 0xf9, 0x24, 0xdc, 0x50, 0x9e, 0x55, 0x1a, 0x41, 0x94, 0xa8, 0x54, 0x5b, 0x8c, 0x9f, 0x73,
0xa5, 0x4a, 0xee, 0x64, 0x81, 0x89, 0x65, 0x98, 0x28, 0x3e, 0xba, 0x3e, 0x92, 0x5f, 0x90, 0x68,
0xbc, 0x4b, 0x51, 0xe3, 0x68, 0xa9, 0x28, 0xf1, 0x52, 0x97, 0xfe, 0xf7, 0x25, 0x37, 0x0e, 0x42,
0xa8, 0x71, 0xe8, 0xad, 0x44, 0xe2, 0xb4, 0x77, 0x30, 0x7c, 0xc5, 0x8d, 0x83, 0x19, 0x52, 0xf0,
0x81, 0xc1, 0x41, 0xf1, 0x35, 0x2b, 0x98, 0x41, 0xc5, 0x3d, 0xdd, 0x41, 0x9b, 0x4a, 0x3f, 0xc8,
0x74, 0xea, 0xe1, 0x6a, 0x8b, 0xea, 0x9b, 0xed, 0xea, 0x21, 0x6c, 0xdd, 0x40, 0xc5, 0x29, 0x98,
0xec, 0x39, 0x62, 0xd4, 0x6f, 0xdb, 0x63, 0x5b, 0x95, 0x59, 0xe6, 0xf9, 0xa5, 0xf0, 0xd1, 0x1d,
0x6a, 0x46, 0xd5, 0x13, 0x86, 0xb8, 0x13, 0xeb, 0x5e, 0x3d, 0x07, 0xd8, 0x65, 0x17, 0x77, 0xca,
0xd2, 0x57, 0x8e, 0x01, 0xe2, 0x04, 0x8c, 0x57, 0xce, 0x00, 0x76, 0xd5, 0x63, 0xa4, 0x1a, 0x33,
0x8f, 0x00, 0xe2, 0x08, 0x0c, 0xe1, 0x3c, 0xb7, 0xe3, 0x8f, 0x13, 0x5e, 0x2c, 0x17, 0xc7, 0x60,
0x84, 0xe7, 0xb8, 0x1d, 0x7d, 0x82, 0xd0, 0x12, 0x41, 0x9c, 0x67, 0xb8, 0x1d, 0x7f, 0x92, 0x71,
0x46, 0x10, 0x77, 0x4f, 0xe1, 0xb7, 0x4f, 0x0f, 0x51, 0x1f, 0xe6, 0xdc, 0xcd, 0xc3, 0x30, 0x0d,
0x6f, 0x3b, 0xfd, 0x14, 0xfd, 0x38, 0x13, 0xe2, 0x0e, 0xd8, 0xe7, 0x98, 0xf0, 0x67, 0x08, 0xed,
0xac, 0x17, 0x4b, 0x30, 0x6a, 0x0c, 0x6c, 0x3b, 0xfe, 0x2c, 0xe1, 0x26, 0x85, 0xa1, 0xd3, 0xc0,
0xb6, 0x0b, 0x9e, 0xe3, 0xd0, 0x89, 0xc0, 0xb4, 0xf1, 0xac, 0xb6, 0xd3, 0xcf, 0x73, 0xd6, 0x19,
0x11, 0x0b, 0x50, 0x2b, 0xfb, 0xaf, 0x9d, 0x7f, 0x81, 0xf8, 0x2e, 0x83, 0x19, 0x30, 0xfa, 0xbf,
0x5d, 0xf1, 0x22, 0x67, 0xc0, 0xa0, 0x70, 0x1b, 0xf5, 0xce, 0x74, 0xbb, 0xe9, 0x25, 0xde, 0x46,
0x3d, 0x23, 0x1d, 0xab, 0x59, 0xb4, 0x41, 0xbb, 0xe2, 0x65, 0xae, 0x66, 0xb1, 0x1e, 0xc3, 0xe8,
0x1d, 0x92, 0x76, 0xc7, 0x2b, 0x1c, 0x46, 0xcf, 0x8c, 0x14, 0x6b, 0x50, 0xdf, 0x3b, 0x20, 0xed,
0xbe, 0x57, 0xc9, 0x37, 0xb5, 0x67, 0x3e, 0x8a, 0xfb, 0x60, 0xb6, 0xff, 0x70, 0xb4, 0x5b, 0x2f,
0xed, 0xf4, 0xbc, 0xce, 0x98, 0xb3, 0x51, 0x9c, 0xee, 0x76, 0x59, 0x73, 0x30, 0xda, 0xb5, 0x97,
0x77, 0xaa, 0x8d, 0xd6, 0x9c, 0x8b, 0x62, 0x11, 0xa0, 0x3b, 0x93, 0xec, 0xae, 0x2b, 0xe4, 0x32,
0x20, 0xdc, 0x1a, 0x34, 0x92, 0xec, 0xfc, 0x55, 0xde, 0x1a, 0x44, 0xe0, 0xd6, 0xe0, 0x69, 0x64,
0xa7, 0xaf, 0xf1, 0xd6, 0x60, 0x44, 0xcc, 0xc3, 0x48, 0x9c, 0x87, 0x21, 0x3e, 0x5b, 0xf5, 0x5b,
0xfb, 0x8c, 0x1b, 0x19, 0xb6, 0x19, 0xfe, 0x65, 0x97, 0x60, 0x06, 0xc4, 0x11, 0xd8, 0x27, 0xa3,
0xa6, 0x6c, 0xdb, 0xc8, 0x5f, 0x77, 0xb9, 0x9f, 0xe0, 0x6a, 0xb1, 0x00, 0xd0, 0x79, 0x99, 0xc6,
0x28, 0x6c, 0xec, 0x6f, 0xbb, 0x9d, 0xf7, 0x7a, 0x03, 0xe9, 0x0a, 0x8a, 0xb7, 0x71, 0x8b, 0x60,
0xbb, 0x2a, 0x28, 0x5e, 0xc0, 0x8f, 0xc2, 0xf0, 0x43, 0x99, 0x8a, 0xb5, 0xe7, 0xdb, 0xe8, 0xdf,
0x89, 0xe6, 0xf5, 0x98, 0xb0, 0x48, 0xa5, 0x52, 0x7b, 0x7e, 0x66, 0x63, 0xff, 0x20, 0xb6, 0x04,
0x10, 0x6e, 0x79, 0x99, 0x76, 0xb9, 0xef, 0x3f, 0x19, 0x66, 0x00, 0x83, 0xc6, 0xeb, 0x87, 0xe5,
0x96, 0x8d, 0xfd, 0x8b, 0x83, 0xa6, 0xf5, 0xe2, 0x18, 0xd4, 0xf0, 0xb2, 0xf8, 0x3f, 0x84, 0x0d,
0xfe, 0x9b, 0xe0, 0x2e, 0x81, 0xbf, 0x9c, 0xe9, 0xb6, 0x0e, 0xec, 0xc9, 0xfe, 0x87, 0x2a, 0xcd,
0xeb, 0xc5, 0x22, 0x8c, 0x66, 0xba, 0xdd, 0xce, 0xe9, 0x44, 0x63, 0xc1, 0xff, 0xdd, 0x2d, 0x5f,
0x72, 0x4b, 0xe6, 0xf8, 0x21, 0x98, 0x6e, 0xa9, 0xa8, 0x17, 0x3c, 0x0e, 0x2b, 0x6a, 0x45, 0xad,
0x15, 0xbb, 0xe8, 0xbf, 0x00, 0x00, 0x00, 0xff, 0xff, 0x0a, 0x9c, 0xec, 0xd8, 0x50, 0x13, 0x00,
0x00,
}

cmd/vendor/github.com/gogo/protobuf/gogoproto/helper.go (357 added lines, generated, vendored)
View File

@@ -0,0 +1,357 @@
// Protocol Buffers for Go with Gadgets
//
// Copyright (c) 2013, The GoGo Authors. All rights reserved.
// http://github.com/gogo/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package gogoproto
import google_protobuf "github.com/gogo/protobuf/protoc-gen-gogo/descriptor"
import proto "github.com/gogo/protobuf/proto"
func IsEmbed(field *google_protobuf.FieldDescriptorProto) bool {
return proto.GetBoolExtension(field.Options, E_Embed, false)
}
func IsNullable(field *google_protobuf.FieldDescriptorProto) bool {
return proto.GetBoolExtension(field.Options, E_Nullable, true)
}
func IsStdTime(field *google_protobuf.FieldDescriptorProto) bool {
return proto.GetBoolExtension(field.Options, E_Stdtime, false)
}
func IsStdDuration(field *google_protobuf.FieldDescriptorProto) bool {
return proto.GetBoolExtension(field.Options, E_Stdduration, false)
}
func NeedsNilCheck(proto3 bool, field *google_protobuf.FieldDescriptorProto) bool {
nullable := IsNullable(field)
if field.IsMessage() || IsCustomType(field) {
return nullable
}
if proto3 {
return false
}
return nullable || *field.Type == google_protobuf.FieldDescriptorProto_TYPE_BYTES
}
func IsCustomType(field *google_protobuf.FieldDescriptorProto) bool {
typ := GetCustomType(field)
if len(typ) > 0 {
return true
}
return false
}
func IsCastType(field *google_protobuf.FieldDescriptorProto) bool {
typ := GetCastType(field)
if len(typ) > 0 {
return true
}
return false
}
func IsCastKey(field *google_protobuf.FieldDescriptorProto) bool {
typ := GetCastKey(field)
if len(typ) > 0 {
return true
}
return false
}
func IsCastValue(field *google_protobuf.FieldDescriptorProto) bool {
typ := GetCastValue(field)
if len(typ) > 0 {
return true
}
return false
}
func HasEnumDecl(file *google_protobuf.FileDescriptorProto, enum *google_protobuf.EnumDescriptorProto) bool {
return proto.GetBoolExtension(enum.Options, E_Enumdecl, proto.GetBoolExtension(file.Options, E_EnumdeclAll, true))
}
func HasTypeDecl(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_Typedecl, proto.GetBoolExtension(file.Options, E_TypedeclAll, true))
}
func GetCustomType(field *google_protobuf.FieldDescriptorProto) string {
if field == nil {
return ""
}
if field.Options != nil {
v, err := proto.GetExtension(field.Options, E_Customtype)
if err == nil && v.(*string) != nil {
return *(v.(*string))
}
}
return ""
}
func GetCastType(field *google_protobuf.FieldDescriptorProto) string {
if field == nil {
return ""
}
if field.Options != nil {
v, err := proto.GetExtension(field.Options, E_Casttype)
if err == nil && v.(*string) != nil {
return *(v.(*string))
}
}
return ""
}
func GetCastKey(field *google_protobuf.FieldDescriptorProto) string {
if field == nil {
return ""
}
if field.Options != nil {
v, err := proto.GetExtension(field.Options, E_Castkey)
if err == nil && v.(*string) != nil {
return *(v.(*string))
}
}
return ""
}
func GetCastValue(field *google_protobuf.FieldDescriptorProto) string {
if field == nil {
return ""
}
if field.Options != nil {
v, err := proto.GetExtension(field.Options, E_Castvalue)
if err == nil && v.(*string) != nil {
return *(v.(*string))
}
}
return ""
}
func IsCustomName(field *google_protobuf.FieldDescriptorProto) bool {
name := GetCustomName(field)
if len(name) > 0 {
return true
}
return false
}
func IsEnumCustomName(field *google_protobuf.EnumDescriptorProto) bool {
name := GetEnumCustomName(field)
if len(name) > 0 {
return true
}
return false
}
func IsEnumValueCustomName(field *google_protobuf.EnumValueDescriptorProto) bool {
name := GetEnumValueCustomName(field)
if len(name) > 0 {
return true
}
return false
}
func GetCustomName(field *google_protobuf.FieldDescriptorProto) string {
if field == nil {
return ""
}
if field.Options != nil {
v, err := proto.GetExtension(field.Options, E_Customname)
if err == nil && v.(*string) != nil {
return *(v.(*string))
}
}
return ""
}
func GetEnumCustomName(field *google_protobuf.EnumDescriptorProto) string {
if field == nil {
return ""
}
if field.Options != nil {
v, err := proto.GetExtension(field.Options, E_EnumCustomname)
if err == nil && v.(*string) != nil {
return *(v.(*string))
}
}
return ""
}
func GetEnumValueCustomName(field *google_protobuf.EnumValueDescriptorProto) string {
if field == nil {
return ""
}
if field.Options != nil {
v, err := proto.GetExtension(field.Options, E_EnumvalueCustomname)
if err == nil && v.(*string) != nil {
return *(v.(*string))
}
}
return ""
}
func GetJsonTag(field *google_protobuf.FieldDescriptorProto) *string {
if field == nil {
return nil
}
if field.Options != nil {
v, err := proto.GetExtension(field.Options, E_Jsontag)
if err == nil && v.(*string) != nil {
return (v.(*string))
}
}
return nil
}
func GetMoreTags(field *google_protobuf.FieldDescriptorProto) *string {
if field == nil {
return nil
}
if field.Options != nil {
v, err := proto.GetExtension(field.Options, E_Moretags)
if err == nil && v.(*string) != nil {
return (v.(*string))
}
}
return nil
}
type EnableFunc func(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool
func EnabledGoEnumPrefix(file *google_protobuf.FileDescriptorProto, enum *google_protobuf.EnumDescriptorProto) bool {
return proto.GetBoolExtension(enum.Options, E_GoprotoEnumPrefix, proto.GetBoolExtension(file.Options, E_GoprotoEnumPrefixAll, true))
}
func EnabledGoStringer(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_GoprotoStringer, proto.GetBoolExtension(file.Options, E_GoprotoStringerAll, true))
}
func HasGoGetters(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_GoprotoGetters, proto.GetBoolExtension(file.Options, E_GoprotoGettersAll, true))
}
func IsUnion(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_Onlyone, proto.GetBoolExtension(file.Options, E_OnlyoneAll, false))
}
func HasGoString(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_Gostring, proto.GetBoolExtension(file.Options, E_GostringAll, false))
}
func HasEqual(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_Equal, proto.GetBoolExtension(file.Options, E_EqualAll, false))
}
func HasVerboseEqual(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_VerboseEqual, proto.GetBoolExtension(file.Options, E_VerboseEqualAll, false))
}
func IsStringer(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_Stringer, proto.GetBoolExtension(file.Options, E_StringerAll, false))
}
func IsFace(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_Face, proto.GetBoolExtension(file.Options, E_FaceAll, false))
}
func HasDescription(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_Description, proto.GetBoolExtension(file.Options, E_DescriptionAll, false))
}
func HasPopulate(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_Populate, proto.GetBoolExtension(file.Options, E_PopulateAll, false))
}
func HasTestGen(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_Testgen, proto.GetBoolExtension(file.Options, E_TestgenAll, false))
}
func HasBenchGen(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_Benchgen, proto.GetBoolExtension(file.Options, E_BenchgenAll, false))
}
func IsMarshaler(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_Marshaler, proto.GetBoolExtension(file.Options, E_MarshalerAll, false))
}
func IsUnmarshaler(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_Unmarshaler, proto.GetBoolExtension(file.Options, E_UnmarshalerAll, false))
}
func IsStableMarshaler(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_StableMarshaler, proto.GetBoolExtension(file.Options, E_StableMarshalerAll, false))
}
func IsSizer(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_Sizer, proto.GetBoolExtension(file.Options, E_SizerAll, false))
}
func IsProtoSizer(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_Protosizer, proto.GetBoolExtension(file.Options, E_ProtosizerAll, false))
}
func IsGoEnumStringer(file *google_protobuf.FileDescriptorProto, enum *google_protobuf.EnumDescriptorProto) bool {
return proto.GetBoolExtension(enum.Options, E_GoprotoEnumStringer, proto.GetBoolExtension(file.Options, E_GoprotoEnumStringerAll, true))
}
func IsEnumStringer(file *google_protobuf.FileDescriptorProto, enum *google_protobuf.EnumDescriptorProto) bool {
return proto.GetBoolExtension(enum.Options, E_EnumStringer, proto.GetBoolExtension(file.Options, E_EnumStringerAll, false))
}
func IsUnsafeMarshaler(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_UnsafeMarshaler, proto.GetBoolExtension(file.Options, E_UnsafeMarshalerAll, false))
}
func IsUnsafeUnmarshaler(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_UnsafeUnmarshaler, proto.GetBoolExtension(file.Options, E_UnsafeUnmarshalerAll, false))
}
func HasExtensionsMap(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_GoprotoExtensionsMap, proto.GetBoolExtension(file.Options, E_GoprotoExtensionsMapAll, true))
}
func HasUnrecognized(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
if IsProto3(file) {
return false
}
return proto.GetBoolExtension(message.Options, E_GoprotoUnrecognized, proto.GetBoolExtension(file.Options, E_GoprotoUnrecognizedAll, true))
}
func IsProto3(file *google_protobuf.FileDescriptorProto) bool {
return file.GetSyntax() == "proto3"
}
func ImportsGoGoProto(file *google_protobuf.FileDescriptorProto) bool {
return proto.GetBoolExtension(file.Options, E_GogoprotoImport, true)
}
func HasCompare(file *google_protobuf.FileDescriptorProto, message *google_protobuf.DescriptorProto) bool {
return proto.GetBoolExtension(message.Options, E_Compare, proto.GetBoolExtension(file.Options, E_CompareAll, false))
}
func RegistersGolangProto(file *google_protobuf.FileDescriptorProto) bool {
return proto.GetBoolExtension(file.Options, E_GoprotoRegistration, false)
}
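The helper functions above are what a protoc-gen-gogo plugin consults, per field or message, while generating code: each reads a gogoproto extension from the descriptor options and falls back to a file-level `*_all` option or a hard-coded default. Below is a minimal sketch of calling a few of them directly against a hand-built field descriptor; the field name and type are invented for illustration and the import paths are the upstream (non-vendored) ones.

```
package main

import (
	"fmt"

	"github.com/gogo/protobuf/gogoproto"
	descriptor "github.com/gogo/protobuf/protoc-gen-gogo/descriptor"
)

func main() {
	name := "id"
	typ := descriptor.FieldDescriptorProto_TYPE_STRING
	field := &descriptor.FieldDescriptorProto{Name: &name, Type: &typ}

	// With no options set, the defaults from helper.go apply.
	fmt.Println(gogoproto.IsNullable(field))    // true  (nullable defaults to true)
	fmt.Println(gogoproto.GetCustomType(field)) // ""    (no customtype option present)
	fmt.Println(gogoproto.IsCustomType(field))  // false
}
```

In the generator these calls are made against descriptors delivered by protoc, not hand-built ones; the sketch only shows the defaulting behaviour.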

View File

@@ -174,11 +174,11 @@ func sizeFixed32(x uint64) int {
// This is the format used for the sint64 protocol buffer type.
func (p *Buffer) EncodeZigzag64(x uint64) error {
// use signed number to get arithmetic right shift.
return p.EncodeVarint(uint64((x << 1) ^ uint64((int64(x) >> 63))))
return p.EncodeVarint((x << 1) ^ uint64((int64(x) >> 63)))
}
func sizeZigzag64(x uint64) int {
return sizeVarint(uint64((x << 1) ^ uint64((int64(x) >> 63))))
return sizeVarint((x << 1) ^ uint64((int64(x) >> 63)))
}
// EncodeZigzag32 writes a zigzag-encoded 32-bit integer
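The change above only drops a redundant `uint64` conversion; the zigzag expression itself is unchanged. It maps signed integers to unsigned ones so that values near zero stay small on the wire. A standalone sketch of that mapping (not part of the diff):

```
package main

import "fmt"

// zigzag64 mirrors the expression in EncodeZigzag64: (x << 1) ^ uint64(int64(x) >> 63).
func zigzag64(v int64) uint64 {
	x := uint64(v)
	return (x << 1) ^ uint64(int64(x)>>63)
}

func main() {
	for _, v := range []int64{0, -1, 1, -2, 2} {
		fmt.Printf("%d -> %d\n", v, zigzag64(v)) // 0->0, -1->1, 1->2, -2->3, 2->4
	}
}
```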

View File

@@ -73,7 +73,6 @@ for a protocol buffer variable v:
When the .proto file specifies `syntax="proto3"`, there are some differences:
- Non-repeated fields of non-message type are values instead of pointers.
- Getters are only generated for message and oneof fields.
- Enum types do not get an Enum method.
The simplest way to describe this is to see an example.
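A hedged, hand-written illustration of the remaining two bullet points (this is not generated output; the message `message M { string name = 1; }` behind it is hypothetical):

```
package main

import "fmt"

// proto2 shape: non-repeated scalar fields are pointers, so "unset" is distinct.
type MProto2 struct{ Name *string }

// proto3 shape: non-repeated fields of non-message type are plain values.
type MProto3 struct{ Name string }

func main() {
	fmt.Println(MProto2{}.Name == nil) // true: the field was never set
	fmt.Println(MProto3{}.Name == "")  // true: only the zero value remains
}
```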

View File

@@ -193,6 +193,7 @@ type Properties struct {
Default string // default value
HasDefault bool // whether an explicit default was provided
CustomType string
CastType string
StdTime bool
StdDuration bool
@@ -341,6 +342,8 @@ func (p *Properties) Parse(s string) {
p.OrigName = strings.Split(f, "=")[1]
case strings.HasPrefix(f, "customtype="):
p.CustomType = strings.Split(f, "=")[1]
case strings.HasPrefix(f, "casttype="):
p.CastType = strings.Split(f, "=")[1]
case f == "stdtime":
p.StdTime = true
case f == "stdduration":
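The new `casttype=` case mirrors the existing `customtype=` handling: `Parse` walks the comma-separated options in the generated struct tag and records the target type name. A minimal sketch of feeding such a tag to `Properties.Parse` directly; the tag string and the `types.ID` name are invented for illustration.

```
package main

import (
	"fmt"

	proto "github.com/gogo/protobuf/proto"
)

func main() {
	p := new(proto.Properties)
	// A hypothetical tag of the shape protoc-gen-gogo emits for a cast field.
	p.Parse("varint,1,opt,name=id,casttype=github.com/example/types.ID")
	fmt.Println(p.OrigName) // "id"
	fmt.Println(p.CastType) // "github.com/example/types.ID"
}
```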

View File

@@ -522,6 +522,17 @@ func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Propert
}
return nil
}
} else if len(props.CastType) > 0 {
if _, ok := v.Interface().(interface {
String() string
}); ok {
switch v.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
_, err := fmt.Fprintf(w, "%d", v.Interface())
return err
}
}
} else if props.StdTime {
t, ok := v.Interface().(time.Time)
if !ok {
@@ -531,9 +542,9 @@ func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Propert
if err != nil {
return err
}
props.StdTime = false
err = tm.writeAny(w, reflect.ValueOf(tproto), props)
props.StdTime = true
propsCopy := *props // Make a copy so that this is goroutine-safe
propsCopy.StdTime = false
err = tm.writeAny(w, reflect.ValueOf(tproto), &propsCopy)
return err
} else if props.StdDuration {
d, ok := v.Interface().(time.Duration)
@@ -541,9 +552,9 @@ func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Propert
return fmt.Errorf("stdtime is not time.Duration, but %T", v.Interface())
}
dproto := durationProto(d)
props.StdDuration = false
err := tm.writeAny(w, reflect.ValueOf(dproto), props)
props.StdDuration = true
propsCopy := *props // Make a copy so that this is goroutine-safe
propsCopy.StdDuration = false
err := tm.writeAny(w, reflect.ValueOf(dproto), &propsCopy)
return err
}
}
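The point of the copy is concurrency: the old code flipped `StdTime`/`StdDuration` on the shared `*Properties` around the recursive `writeAny` call, which is a data race when two goroutines marshal with the same properties. A self-contained toy sketch of the two shapes; the `props` type here merely stands in for `proto.Properties`.

```
package main

import (
	"fmt"
	"sync"
)

type props struct{ StdTime bool }

// writeRacy has the shape of the old code: flip the flag on the shared pointer,
// recurse, then flip it back -- concurrent callers race on p.StdTime.
func writeRacy(p *props) {
	p.StdTime = false
	_ = fmt.Sprint(p) // stands in for the recursive writeAny call
	p.StdTime = true
}

// writeSafe has the shape of the new code: work on a private copy.
func writeSafe(p *props) {
	cp := *p
	cp.StdTime = false
	_ = fmt.Sprint(&cp)
}

func main() {
	shared := &props{StdTime: true}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			writeSafe(shared) // no goroutine ever writes to shared
		}()
	}
	wg.Wait()
	fmt.Println(shared.StdTime) // true
}
```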

View File

@@ -983,7 +983,7 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {
return p.readStruct(fv, terminator)
case reflect.Uint32:
if x, err := strconv.ParseUint(tok.value, 0, 32); err == nil {
fv.SetUint(uint64(x))
fv.SetUint(x)
return nil
}
case reflect.Uint64:

View File

@@ -0,0 +1,118 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2016 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Package descriptor provides functions for obtaining protocol buffer
// descriptors for generated Go types.
//
// These functions cannot go in package proto because they depend on the
// generated protobuf descriptor messages, which themselves depend on proto.
package descriptor
import (
"bytes"
"compress/gzip"
"fmt"
"io/ioutil"
"github.com/gogo/protobuf/proto"
)
// extractFile extracts a FileDescriptorProto from a gzip'd buffer.
func extractFile(gz []byte) (*FileDescriptorProto, error) {
r, err := gzip.NewReader(bytes.NewReader(gz))
if err != nil {
return nil, fmt.Errorf("failed to open gzip reader: %v", err)
}
defer r.Close()
b, err := ioutil.ReadAll(r)
if err != nil {
return nil, fmt.Errorf("failed to uncompress descriptor: %v", err)
}
fd := new(FileDescriptorProto)
if err := proto.Unmarshal(b, fd); err != nil {
return nil, fmt.Errorf("malformed FileDescriptorProto: %v", err)
}
return fd, nil
}
// Message is a proto.Message with a method to return its descriptor.
//
// Message types generated by the protocol compiler always satisfy
// the Message interface.
type Message interface {
proto.Message
Descriptor() ([]byte, []int)
}
// ForMessage returns a FileDescriptorProto and a DescriptorProto from within it
// describing the given message.
func ForMessage(msg Message) (fd *FileDescriptorProto, md *DescriptorProto) {
gz, path := msg.Descriptor()
fd, err := extractFile(gz)
if err != nil {
panic(fmt.Sprintf("invalid FileDescriptorProto for %T: %v", msg, err))
}
md = fd.MessageType[path[0]]
for _, i := range path[1:] {
md = md.NestedType[i]
}
return fd, md
}
// Is this field a scalar numeric type?
func (field *FieldDescriptorProto) IsScalar() bool {
if field.Type == nil {
return false
}
switch *field.Type {
case FieldDescriptorProto_TYPE_DOUBLE,
FieldDescriptorProto_TYPE_FLOAT,
FieldDescriptorProto_TYPE_INT64,
FieldDescriptorProto_TYPE_UINT64,
FieldDescriptorProto_TYPE_INT32,
FieldDescriptorProto_TYPE_FIXED64,
FieldDescriptorProto_TYPE_FIXED32,
FieldDescriptorProto_TYPE_BOOL,
FieldDescriptorProto_TYPE_UINT32,
FieldDescriptorProto_TYPE_ENUM,
FieldDescriptorProto_TYPE_SFIXED32,
FieldDescriptorProto_TYPE_SFIXED64,
FieldDescriptorProto_TYPE_SINT32,
FieldDescriptorProto_TYPE_SINT64:
return true
default:
return false
}
}
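This new `descriptor` helper file gives runtime access to descriptors: `extractFile` gunzips the blob that generated code embeds, and `ForMessage` walks the index path returned by `Descriptor()` to find the message's `DescriptorProto`. A minimal usage sketch, assuming the vendored import path shown in the file; any generated message works, including the descriptor types themselves.

```
package main

import (
	"fmt"

	descriptor "github.com/gogo/protobuf/protoc-gen-gogo/descriptor"
)

func main() {
	fd, md := descriptor.ForMessage(&descriptor.FileDescriptorProto{})
	fmt.Println(fd.GetName()) // the .proto file the message was generated from
	fmt.Println(md.GetName()) // "FileDescriptorProto"
}
```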

File diff suppressed because it is too large.

View File

@@ -0,0 +1,749 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: descriptor.proto
/*
Package descriptor is a generated protocol buffer package.
It is generated from these files:
descriptor.proto
It has these top-level messages:
FileDescriptorSet
FileDescriptorProto
DescriptorProto
ExtensionRangeOptions
FieldDescriptorProto
OneofDescriptorProto
EnumDescriptorProto
EnumValueDescriptorProto
ServiceDescriptorProto
MethodDescriptorProto
FileOptions
MessageOptions
FieldOptions
OneofOptions
EnumOptions
EnumValueOptions
ServiceOptions
MethodOptions
UninterpretedOption
SourceCodeInfo
GeneratedCodeInfo
*/
package descriptor
import fmt "fmt"
import strings "strings"
import github_com_gogo_protobuf_proto "github.com/gogo/protobuf/proto"
import sort "sort"
import strconv "strconv"
import reflect "reflect"
import proto "github.com/gogo/protobuf/proto"
import math "math"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
func (this *FileDescriptorSet) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 5)
s = append(s, "&descriptor.FileDescriptorSet{")
if this.File != nil {
s = append(s, "File: "+fmt.Sprintf("%#v", this.File)+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *FileDescriptorProto) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 16)
s = append(s, "&descriptor.FileDescriptorProto{")
if this.Name != nil {
s = append(s, "Name: "+valueToGoStringDescriptor(this.Name, "string")+",\n")
}
if this.Package != nil {
s = append(s, "Package: "+valueToGoStringDescriptor(this.Package, "string")+",\n")
}
if this.Dependency != nil {
s = append(s, "Dependency: "+fmt.Sprintf("%#v", this.Dependency)+",\n")
}
if this.PublicDependency != nil {
s = append(s, "PublicDependency: "+fmt.Sprintf("%#v", this.PublicDependency)+",\n")
}
if this.WeakDependency != nil {
s = append(s, "WeakDependency: "+fmt.Sprintf("%#v", this.WeakDependency)+",\n")
}
if this.MessageType != nil {
s = append(s, "MessageType: "+fmt.Sprintf("%#v", this.MessageType)+",\n")
}
if this.EnumType != nil {
s = append(s, "EnumType: "+fmt.Sprintf("%#v", this.EnumType)+",\n")
}
if this.Service != nil {
s = append(s, "Service: "+fmt.Sprintf("%#v", this.Service)+",\n")
}
if this.Extension != nil {
s = append(s, "Extension: "+fmt.Sprintf("%#v", this.Extension)+",\n")
}
if this.Options != nil {
s = append(s, "Options: "+fmt.Sprintf("%#v", this.Options)+",\n")
}
if this.SourceCodeInfo != nil {
s = append(s, "SourceCodeInfo: "+fmt.Sprintf("%#v", this.SourceCodeInfo)+",\n")
}
if this.Syntax != nil {
s = append(s, "Syntax: "+valueToGoStringDescriptor(this.Syntax, "string")+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *DescriptorProto) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 14)
s = append(s, "&descriptor.DescriptorProto{")
if this.Name != nil {
s = append(s, "Name: "+valueToGoStringDescriptor(this.Name, "string")+",\n")
}
if this.Field != nil {
s = append(s, "Field: "+fmt.Sprintf("%#v", this.Field)+",\n")
}
if this.Extension != nil {
s = append(s, "Extension: "+fmt.Sprintf("%#v", this.Extension)+",\n")
}
if this.NestedType != nil {
s = append(s, "NestedType: "+fmt.Sprintf("%#v", this.NestedType)+",\n")
}
if this.EnumType != nil {
s = append(s, "EnumType: "+fmt.Sprintf("%#v", this.EnumType)+",\n")
}
if this.ExtensionRange != nil {
s = append(s, "ExtensionRange: "+fmt.Sprintf("%#v", this.ExtensionRange)+",\n")
}
if this.OneofDecl != nil {
s = append(s, "OneofDecl: "+fmt.Sprintf("%#v", this.OneofDecl)+",\n")
}
if this.Options != nil {
s = append(s, "Options: "+fmt.Sprintf("%#v", this.Options)+",\n")
}
if this.ReservedRange != nil {
s = append(s, "ReservedRange: "+fmt.Sprintf("%#v", this.ReservedRange)+",\n")
}
if this.ReservedName != nil {
s = append(s, "ReservedName: "+fmt.Sprintf("%#v", this.ReservedName)+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *DescriptorProto_ExtensionRange) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 7)
s = append(s, "&descriptor.DescriptorProto_ExtensionRange{")
if this.Start != nil {
s = append(s, "Start: "+valueToGoStringDescriptor(this.Start, "int32")+",\n")
}
if this.End != nil {
s = append(s, "End: "+valueToGoStringDescriptor(this.End, "int32")+",\n")
}
if this.Options != nil {
s = append(s, "Options: "+fmt.Sprintf("%#v", this.Options)+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *DescriptorProto_ReservedRange) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 6)
s = append(s, "&descriptor.DescriptorProto_ReservedRange{")
if this.Start != nil {
s = append(s, "Start: "+valueToGoStringDescriptor(this.Start, "int32")+",\n")
}
if this.End != nil {
s = append(s, "End: "+valueToGoStringDescriptor(this.End, "int32")+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *ExtensionRangeOptions) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 5)
s = append(s, "&descriptor.ExtensionRangeOptions{")
if this.UninterpretedOption != nil {
s = append(s, "UninterpretedOption: "+fmt.Sprintf("%#v", this.UninterpretedOption)+",\n")
}
s = append(s, "XXX_InternalExtensions: "+extensionToGoStringDescriptor(this)+",\n")
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *FieldDescriptorProto) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 14)
s = append(s, "&descriptor.FieldDescriptorProto{")
if this.Name != nil {
s = append(s, "Name: "+valueToGoStringDescriptor(this.Name, "string")+",\n")
}
if this.Number != nil {
s = append(s, "Number: "+valueToGoStringDescriptor(this.Number, "int32")+",\n")
}
if this.Label != nil {
s = append(s, "Label: "+valueToGoStringDescriptor(this.Label, "FieldDescriptorProto_Label")+",\n")
}
if this.Type != nil {
s = append(s, "Type: "+valueToGoStringDescriptor(this.Type, "FieldDescriptorProto_Type")+",\n")
}
if this.TypeName != nil {
s = append(s, "TypeName: "+valueToGoStringDescriptor(this.TypeName, "string")+",\n")
}
if this.Extendee != nil {
s = append(s, "Extendee: "+valueToGoStringDescriptor(this.Extendee, "string")+",\n")
}
if this.DefaultValue != nil {
s = append(s, "DefaultValue: "+valueToGoStringDescriptor(this.DefaultValue, "string")+",\n")
}
if this.OneofIndex != nil {
s = append(s, "OneofIndex: "+valueToGoStringDescriptor(this.OneofIndex, "int32")+",\n")
}
if this.JsonName != nil {
s = append(s, "JsonName: "+valueToGoStringDescriptor(this.JsonName, "string")+",\n")
}
if this.Options != nil {
s = append(s, "Options: "+fmt.Sprintf("%#v", this.Options)+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *OneofDescriptorProto) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 6)
s = append(s, "&descriptor.OneofDescriptorProto{")
if this.Name != nil {
s = append(s, "Name: "+valueToGoStringDescriptor(this.Name, "string")+",\n")
}
if this.Options != nil {
s = append(s, "Options: "+fmt.Sprintf("%#v", this.Options)+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *EnumDescriptorProto) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 7)
s = append(s, "&descriptor.EnumDescriptorProto{")
if this.Name != nil {
s = append(s, "Name: "+valueToGoStringDescriptor(this.Name, "string")+",\n")
}
if this.Value != nil {
s = append(s, "Value: "+fmt.Sprintf("%#v", this.Value)+",\n")
}
if this.Options != nil {
s = append(s, "Options: "+fmt.Sprintf("%#v", this.Options)+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *EnumValueDescriptorProto) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 7)
s = append(s, "&descriptor.EnumValueDescriptorProto{")
if this.Name != nil {
s = append(s, "Name: "+valueToGoStringDescriptor(this.Name, "string")+",\n")
}
if this.Number != nil {
s = append(s, "Number: "+valueToGoStringDescriptor(this.Number, "int32")+",\n")
}
if this.Options != nil {
s = append(s, "Options: "+fmt.Sprintf("%#v", this.Options)+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *ServiceDescriptorProto) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 7)
s = append(s, "&descriptor.ServiceDescriptorProto{")
if this.Name != nil {
s = append(s, "Name: "+valueToGoStringDescriptor(this.Name, "string")+",\n")
}
if this.Method != nil {
s = append(s, "Method: "+fmt.Sprintf("%#v", this.Method)+",\n")
}
if this.Options != nil {
s = append(s, "Options: "+fmt.Sprintf("%#v", this.Options)+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *MethodDescriptorProto) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 10)
s = append(s, "&descriptor.MethodDescriptorProto{")
if this.Name != nil {
s = append(s, "Name: "+valueToGoStringDescriptor(this.Name, "string")+",\n")
}
if this.InputType != nil {
s = append(s, "InputType: "+valueToGoStringDescriptor(this.InputType, "string")+",\n")
}
if this.OutputType != nil {
s = append(s, "OutputType: "+valueToGoStringDescriptor(this.OutputType, "string")+",\n")
}
if this.Options != nil {
s = append(s, "Options: "+fmt.Sprintf("%#v", this.Options)+",\n")
}
if this.ClientStreaming != nil {
s = append(s, "ClientStreaming: "+valueToGoStringDescriptor(this.ClientStreaming, "bool")+",\n")
}
if this.ServerStreaming != nil {
s = append(s, "ServerStreaming: "+valueToGoStringDescriptor(this.ServerStreaming, "bool")+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *FileOptions) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 23)
s = append(s, "&descriptor.FileOptions{")
if this.JavaPackage != nil {
s = append(s, "JavaPackage: "+valueToGoStringDescriptor(this.JavaPackage, "string")+",\n")
}
if this.JavaOuterClassname != nil {
s = append(s, "JavaOuterClassname: "+valueToGoStringDescriptor(this.JavaOuterClassname, "string")+",\n")
}
if this.JavaMultipleFiles != nil {
s = append(s, "JavaMultipleFiles: "+valueToGoStringDescriptor(this.JavaMultipleFiles, "bool")+",\n")
}
if this.JavaGenerateEqualsAndHash != nil {
s = append(s, "JavaGenerateEqualsAndHash: "+valueToGoStringDescriptor(this.JavaGenerateEqualsAndHash, "bool")+",\n")
}
if this.JavaStringCheckUtf8 != nil {
s = append(s, "JavaStringCheckUtf8: "+valueToGoStringDescriptor(this.JavaStringCheckUtf8, "bool")+",\n")
}
if this.OptimizeFor != nil {
s = append(s, "OptimizeFor: "+valueToGoStringDescriptor(this.OptimizeFor, "FileOptions_OptimizeMode")+",\n")
}
if this.GoPackage != nil {
s = append(s, "GoPackage: "+valueToGoStringDescriptor(this.GoPackage, "string")+",\n")
}
if this.CcGenericServices != nil {
s = append(s, "CcGenericServices: "+valueToGoStringDescriptor(this.CcGenericServices, "bool")+",\n")
}
if this.JavaGenericServices != nil {
s = append(s, "JavaGenericServices: "+valueToGoStringDescriptor(this.JavaGenericServices, "bool")+",\n")
}
if this.PyGenericServices != nil {
s = append(s, "PyGenericServices: "+valueToGoStringDescriptor(this.PyGenericServices, "bool")+",\n")
}
if this.PhpGenericServices != nil {
s = append(s, "PhpGenericServices: "+valueToGoStringDescriptor(this.PhpGenericServices, "bool")+",\n")
}
if this.Deprecated != nil {
s = append(s, "Deprecated: "+valueToGoStringDescriptor(this.Deprecated, "bool")+",\n")
}
if this.CcEnableArenas != nil {
s = append(s, "CcEnableArenas: "+valueToGoStringDescriptor(this.CcEnableArenas, "bool")+",\n")
}
if this.ObjcClassPrefix != nil {
s = append(s, "ObjcClassPrefix: "+valueToGoStringDescriptor(this.ObjcClassPrefix, "string")+",\n")
}
if this.CsharpNamespace != nil {
s = append(s, "CsharpNamespace: "+valueToGoStringDescriptor(this.CsharpNamespace, "string")+",\n")
}
if this.SwiftPrefix != nil {
s = append(s, "SwiftPrefix: "+valueToGoStringDescriptor(this.SwiftPrefix, "string")+",\n")
}
if this.PhpClassPrefix != nil {
s = append(s, "PhpClassPrefix: "+valueToGoStringDescriptor(this.PhpClassPrefix, "string")+",\n")
}
if this.PhpNamespace != nil {
s = append(s, "PhpNamespace: "+valueToGoStringDescriptor(this.PhpNamespace, "string")+",\n")
}
if this.UninterpretedOption != nil {
s = append(s, "UninterpretedOption: "+fmt.Sprintf("%#v", this.UninterpretedOption)+",\n")
}
s = append(s, "XXX_InternalExtensions: "+extensionToGoStringDescriptor(this)+",\n")
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *MessageOptions) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 9)
s = append(s, "&descriptor.MessageOptions{")
if this.MessageSetWireFormat != nil {
s = append(s, "MessageSetWireFormat: "+valueToGoStringDescriptor(this.MessageSetWireFormat, "bool")+",\n")
}
if this.NoStandardDescriptorAccessor != nil {
s = append(s, "NoStandardDescriptorAccessor: "+valueToGoStringDescriptor(this.NoStandardDescriptorAccessor, "bool")+",\n")
}
if this.Deprecated != nil {
s = append(s, "Deprecated: "+valueToGoStringDescriptor(this.Deprecated, "bool")+",\n")
}
if this.MapEntry != nil {
s = append(s, "MapEntry: "+valueToGoStringDescriptor(this.MapEntry, "bool")+",\n")
}
if this.UninterpretedOption != nil {
s = append(s, "UninterpretedOption: "+fmt.Sprintf("%#v", this.UninterpretedOption)+",\n")
}
s = append(s, "XXX_InternalExtensions: "+extensionToGoStringDescriptor(this)+",\n")
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *FieldOptions) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 11)
s = append(s, "&descriptor.FieldOptions{")
if this.Ctype != nil {
s = append(s, "Ctype: "+valueToGoStringDescriptor(this.Ctype, "FieldOptions_CType")+",\n")
}
if this.Packed != nil {
s = append(s, "Packed: "+valueToGoStringDescriptor(this.Packed, "bool")+",\n")
}
if this.Jstype != nil {
s = append(s, "Jstype: "+valueToGoStringDescriptor(this.Jstype, "FieldOptions_JSType")+",\n")
}
if this.Lazy != nil {
s = append(s, "Lazy: "+valueToGoStringDescriptor(this.Lazy, "bool")+",\n")
}
if this.Deprecated != nil {
s = append(s, "Deprecated: "+valueToGoStringDescriptor(this.Deprecated, "bool")+",\n")
}
if this.Weak != nil {
s = append(s, "Weak: "+valueToGoStringDescriptor(this.Weak, "bool")+",\n")
}
if this.UninterpretedOption != nil {
s = append(s, "UninterpretedOption: "+fmt.Sprintf("%#v", this.UninterpretedOption)+",\n")
}
s = append(s, "XXX_InternalExtensions: "+extensionToGoStringDescriptor(this)+",\n")
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *OneofOptions) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 5)
s = append(s, "&descriptor.OneofOptions{")
if this.UninterpretedOption != nil {
s = append(s, "UninterpretedOption: "+fmt.Sprintf("%#v", this.UninterpretedOption)+",\n")
}
s = append(s, "XXX_InternalExtensions: "+extensionToGoStringDescriptor(this)+",\n")
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *EnumOptions) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 7)
s = append(s, "&descriptor.EnumOptions{")
if this.AllowAlias != nil {
s = append(s, "AllowAlias: "+valueToGoStringDescriptor(this.AllowAlias, "bool")+",\n")
}
if this.Deprecated != nil {
s = append(s, "Deprecated: "+valueToGoStringDescriptor(this.Deprecated, "bool")+",\n")
}
if this.UninterpretedOption != nil {
s = append(s, "UninterpretedOption: "+fmt.Sprintf("%#v", this.UninterpretedOption)+",\n")
}
s = append(s, "XXX_InternalExtensions: "+extensionToGoStringDescriptor(this)+",\n")
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *EnumValueOptions) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 6)
s = append(s, "&descriptor.EnumValueOptions{")
if this.Deprecated != nil {
s = append(s, "Deprecated: "+valueToGoStringDescriptor(this.Deprecated, "bool")+",\n")
}
if this.UninterpretedOption != nil {
s = append(s, "UninterpretedOption: "+fmt.Sprintf("%#v", this.UninterpretedOption)+",\n")
}
s = append(s, "XXX_InternalExtensions: "+extensionToGoStringDescriptor(this)+",\n")
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *ServiceOptions) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 6)
s = append(s, "&descriptor.ServiceOptions{")
if this.Deprecated != nil {
s = append(s, "Deprecated: "+valueToGoStringDescriptor(this.Deprecated, "bool")+",\n")
}
if this.UninterpretedOption != nil {
s = append(s, "UninterpretedOption: "+fmt.Sprintf("%#v", this.UninterpretedOption)+",\n")
}
s = append(s, "XXX_InternalExtensions: "+extensionToGoStringDescriptor(this)+",\n")
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *MethodOptions) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 7)
s = append(s, "&descriptor.MethodOptions{")
if this.Deprecated != nil {
s = append(s, "Deprecated: "+valueToGoStringDescriptor(this.Deprecated, "bool")+",\n")
}
if this.IdempotencyLevel != nil {
s = append(s, "IdempotencyLevel: "+valueToGoStringDescriptor(this.IdempotencyLevel, "MethodOptions_IdempotencyLevel")+",\n")
}
if this.UninterpretedOption != nil {
s = append(s, "UninterpretedOption: "+fmt.Sprintf("%#v", this.UninterpretedOption)+",\n")
}
s = append(s, "XXX_InternalExtensions: "+extensionToGoStringDescriptor(this)+",\n")
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *UninterpretedOption) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 11)
s = append(s, "&descriptor.UninterpretedOption{")
if this.Name != nil {
s = append(s, "Name: "+fmt.Sprintf("%#v", this.Name)+",\n")
}
if this.IdentifierValue != nil {
s = append(s, "IdentifierValue: "+valueToGoStringDescriptor(this.IdentifierValue, "string")+",\n")
}
if this.PositiveIntValue != nil {
s = append(s, "PositiveIntValue: "+valueToGoStringDescriptor(this.PositiveIntValue, "uint64")+",\n")
}
if this.NegativeIntValue != nil {
s = append(s, "NegativeIntValue: "+valueToGoStringDescriptor(this.NegativeIntValue, "int64")+",\n")
}
if this.DoubleValue != nil {
s = append(s, "DoubleValue: "+valueToGoStringDescriptor(this.DoubleValue, "float64")+",\n")
}
if this.StringValue != nil {
s = append(s, "StringValue: "+valueToGoStringDescriptor(this.StringValue, "byte")+",\n")
}
if this.AggregateValue != nil {
s = append(s, "AggregateValue: "+valueToGoStringDescriptor(this.AggregateValue, "string")+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *UninterpretedOption_NamePart) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 6)
s = append(s, "&descriptor.UninterpretedOption_NamePart{")
if this.NamePart != nil {
s = append(s, "NamePart: "+valueToGoStringDescriptor(this.NamePart, "string")+",\n")
}
if this.IsExtension != nil {
s = append(s, "IsExtension: "+valueToGoStringDescriptor(this.IsExtension, "bool")+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *SourceCodeInfo) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 5)
s = append(s, "&descriptor.SourceCodeInfo{")
if this.Location != nil {
s = append(s, "Location: "+fmt.Sprintf("%#v", this.Location)+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *SourceCodeInfo_Location) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 9)
s = append(s, "&descriptor.SourceCodeInfo_Location{")
if this.Path != nil {
s = append(s, "Path: "+fmt.Sprintf("%#v", this.Path)+",\n")
}
if this.Span != nil {
s = append(s, "Span: "+fmt.Sprintf("%#v", this.Span)+",\n")
}
if this.LeadingComments != nil {
s = append(s, "LeadingComments: "+valueToGoStringDescriptor(this.LeadingComments, "string")+",\n")
}
if this.TrailingComments != nil {
s = append(s, "TrailingComments: "+valueToGoStringDescriptor(this.TrailingComments, "string")+",\n")
}
if this.LeadingDetachedComments != nil {
s = append(s, "LeadingDetachedComments: "+fmt.Sprintf("%#v", this.LeadingDetachedComments)+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *GeneratedCodeInfo) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 5)
s = append(s, "&descriptor.GeneratedCodeInfo{")
if this.Annotation != nil {
s = append(s, "Annotation: "+fmt.Sprintf("%#v", this.Annotation)+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func (this *GeneratedCodeInfo_Annotation) GoString() string {
if this == nil {
return "nil"
}
s := make([]string, 0, 8)
s = append(s, "&descriptor.GeneratedCodeInfo_Annotation{")
if this.Path != nil {
s = append(s, "Path: "+fmt.Sprintf("%#v", this.Path)+",\n")
}
if this.SourceFile != nil {
s = append(s, "SourceFile: "+valueToGoStringDescriptor(this.SourceFile, "string")+",\n")
}
if this.Begin != nil {
s = append(s, "Begin: "+valueToGoStringDescriptor(this.Begin, "int32")+",\n")
}
if this.End != nil {
s = append(s, "End: "+valueToGoStringDescriptor(this.End, "int32")+",\n")
}
if this.XXX_unrecognized != nil {
s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
}
s = append(s, "}")
return strings.Join(s, "")
}
func valueToGoStringDescriptor(v interface{}, typ string) string {
rv := reflect.ValueOf(v)
if rv.IsNil() {
return "nil"
}
pv := reflect.Indirect(rv).Interface()
return fmt.Sprintf("func(v %v) *%v { return &v } ( %#v )", typ, typ, pv)
}
func extensionToGoStringDescriptor(m github_com_gogo_protobuf_proto.Message) string {
e := github_com_gogo_protobuf_proto.GetUnsafeExtensionsMap(m)
if e == nil {
return "nil"
}
s := "proto.NewUnsafeXXX_InternalExtensions(map[int32]proto.Extension{"
keys := make([]int, 0, len(e))
for k := range e {
keys = append(keys, int(k))
}
sort.Ints(keys)
ss := []string{}
for _, k := range keys {
ss = append(ss, strconv.Itoa(k)+": "+e[int32(k)].GoString())
}
s += strings.Join(ss, ",") + "})"
return s
}
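All of the methods in this generated file exist to satisfy `fmt.GoStringer`, so printing a descriptor with the `%#v` verb yields reconstructable Go source rather than the default struct dump. A short sketch (output shown is approximate and elided):

```
package main

import (
	"fmt"

	descriptor "github.com/gogo/protobuf/protoc-gen-gogo/descriptor"
)

func main() {
	name := "example.proto"
	fd := &descriptor.FileDescriptorProto{Name: &name}
	// %#v calls the generated GoString method, printing something like:
	// &descriptor.FileDescriptorProto{Name: func(v string) *string { return &v }("example.proto"), ...}
	fmt.Printf("%#v\n", fd)
}
```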

View File

@@ -0,0 +1,390 @@
// Protocol Buffers for Go with Gadgets
//
// Copyright (c) 2013, The GoGo Authors. All rights reserved.
// http://github.com/gogo/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package descriptor
import (
"strings"
)
func (msg *DescriptorProto) GetMapFields() (*FieldDescriptorProto, *FieldDescriptorProto) {
if !msg.GetOptions().GetMapEntry() {
return nil, nil
}
return msg.GetField()[0], msg.GetField()[1]
}
func dotToUnderscore(r rune) rune {
if r == '.' {
return '_'
}
return r
}
func (field *FieldDescriptorProto) WireType() (wire int) {
switch *field.Type {
case FieldDescriptorProto_TYPE_DOUBLE:
return 1
case FieldDescriptorProto_TYPE_FLOAT:
return 5
case FieldDescriptorProto_TYPE_INT64:
return 0
case FieldDescriptorProto_TYPE_UINT64:
return 0
case FieldDescriptorProto_TYPE_INT32:
return 0
case FieldDescriptorProto_TYPE_UINT32:
return 0
case FieldDescriptorProto_TYPE_FIXED64:
return 1
case FieldDescriptorProto_TYPE_FIXED32:
return 5
case FieldDescriptorProto_TYPE_BOOL:
return 0
case FieldDescriptorProto_TYPE_STRING:
return 2
case FieldDescriptorProto_TYPE_GROUP:
return 2
case FieldDescriptorProto_TYPE_MESSAGE:
return 2
case FieldDescriptorProto_TYPE_BYTES:
return 2
case FieldDescriptorProto_TYPE_ENUM:
return 0
case FieldDescriptorProto_TYPE_SFIXED32:
return 5
case FieldDescriptorProto_TYPE_SFIXED64:
return 1
case FieldDescriptorProto_TYPE_SINT32:
return 0
case FieldDescriptorProto_TYPE_SINT64:
return 0
}
panic("unreachable")
}
func (field *FieldDescriptorProto) GetKeyUint64() (x uint64) {
packed := field.IsPacked()
wireType := field.WireType()
fieldNumber := field.GetNumber()
if packed {
wireType = 2
}
x = uint64(uint32(fieldNumber)<<3 | uint32(wireType))
return x
}
func (field *FieldDescriptorProto) GetKey3Uint64() (x uint64) {
packed := field.IsPacked3()
wireType := field.WireType()
fieldNumber := field.GetNumber()
if packed {
wireType = 2
}
x = uint64(uint32(fieldNumber)<<3 | uint32(wireType))
return x
}
func (field *FieldDescriptorProto) GetKey() []byte {
x := field.GetKeyUint64()
i := 0
keybuf := make([]byte, 0)
for i = 0; x > 127; i++ {
keybuf = append(keybuf, 0x80|uint8(x&0x7F))
x >>= 7
}
keybuf = append(keybuf, uint8(x))
return keybuf
}
func (field *FieldDescriptorProto) GetKey3() []byte {
x := field.GetKey3Uint64()
i := 0
keybuf := make([]byte, 0)
for i = 0; x > 127; i++ {
keybuf = append(keybuf, 0x80|uint8(x&0x7F))
x >>= 7
}
keybuf = append(keybuf, uint8(x))
return keybuf
}
func (desc *FileDescriptorSet) GetField(packageName, messageName, fieldName string) *FieldDescriptorProto {
msg := desc.GetMessage(packageName, messageName)
if msg == nil {
return nil
}
for _, field := range msg.GetField() {
if field.GetName() == fieldName {
return field
}
}
return nil
}
func (file *FileDescriptorProto) GetMessage(typeName string) *DescriptorProto {
for _, msg := range file.GetMessageType() {
if msg.GetName() == typeName {
return msg
}
nes := file.GetNestedMessage(msg, strings.TrimPrefix(typeName, msg.GetName()+"."))
if nes != nil {
return nes
}
}
return nil
}
func (file *FileDescriptorProto) GetNestedMessage(msg *DescriptorProto, typeName string) *DescriptorProto {
for _, nes := range msg.GetNestedType() {
if nes.GetName() == typeName {
return nes
}
res := file.GetNestedMessage(nes, strings.TrimPrefix(typeName, nes.GetName()+"."))
if res != nil {
return res
}
}
return nil
}
func (desc *FileDescriptorSet) GetMessage(packageName string, typeName string) *DescriptorProto {
for _, file := range desc.GetFile() {
if strings.Map(dotToUnderscore, file.GetPackage()) != strings.Map(dotToUnderscore, packageName) {
continue
}
for _, msg := range file.GetMessageType() {
if msg.GetName() == typeName {
return msg
}
}
for _, msg := range file.GetMessageType() {
for _, nes := range msg.GetNestedType() {
if nes.GetName() == typeName {
return nes
}
if msg.GetName()+"."+nes.GetName() == typeName {
return nes
}
}
}
}
return nil
}
func (desc *FileDescriptorSet) IsProto3(packageName string, typeName string) bool {
for _, file := range desc.GetFile() {
if strings.Map(dotToUnderscore, file.GetPackage()) != strings.Map(dotToUnderscore, packageName) {
continue
}
for _, msg := range file.GetMessageType() {
if msg.GetName() == typeName {
return file.GetSyntax() == "proto3"
}
}
for _, msg := range file.GetMessageType() {
for _, nes := range msg.GetNestedType() {
if nes.GetName() == typeName {
return file.GetSyntax() == "proto3"
}
if msg.GetName()+"."+nes.GetName() == typeName {
return file.GetSyntax() == "proto3"
}
}
}
}
return false
}
func (msg *DescriptorProto) IsExtendable() bool {
return len(msg.GetExtensionRange()) > 0
}
func (desc *FileDescriptorSet) FindExtension(packageName string, typeName string, fieldName string) (extPackageName string, field *FieldDescriptorProto) {
parent := desc.GetMessage(packageName, typeName)
if parent == nil {
return "", nil
}
if !parent.IsExtendable() {
return "", nil
}
extendee := "." + packageName + "." + typeName
for _, file := range desc.GetFile() {
for _, ext := range file.GetExtension() {
if strings.Map(dotToUnderscore, file.GetPackage()) == strings.Map(dotToUnderscore, packageName) {
if !(ext.GetExtendee() == typeName || ext.GetExtendee() == extendee) {
continue
}
} else {
if ext.GetExtendee() != extendee {
continue
}
}
if ext.GetName() == fieldName {
return file.GetPackage(), ext
}
}
}
return "", nil
}
func (desc *FileDescriptorSet) FindExtensionByFieldNumber(packageName string, typeName string, fieldNum int32) (extPackageName string, field *FieldDescriptorProto) {
parent := desc.GetMessage(packageName, typeName)
if parent == nil {
return "", nil
}
if !parent.IsExtendable() {
return "", nil
}
extendee := "." + packageName + "." + typeName
for _, file := range desc.GetFile() {
for _, ext := range file.GetExtension() {
if strings.Map(dotToUnderscore, file.GetPackage()) == strings.Map(dotToUnderscore, packageName) {
if !(ext.GetExtendee() == typeName || ext.GetExtendee() == extendee) {
continue
}
} else {
if ext.GetExtendee() != extendee {
continue
}
}
if ext.GetNumber() == fieldNum {
return file.GetPackage(), ext
}
}
}
return "", nil
}
func (desc *FileDescriptorSet) FindMessage(packageName string, typeName string, fieldName string) (msgPackageName string, msgName string) {
parent := desc.GetMessage(packageName, typeName)
if parent == nil {
return "", ""
}
field := parent.GetFieldDescriptor(fieldName)
if field == nil {
var extPackageName string
extPackageName, field = desc.FindExtension(packageName, typeName, fieldName)
if field == nil {
return "", ""
}
packageName = extPackageName
}
typeNames := strings.Split(field.GetTypeName(), ".")
if len(typeNames) == 1 {
msg := desc.GetMessage(packageName, typeName)
if msg == nil {
return "", ""
}
return packageName, msg.GetName()
}
if len(typeNames) > 2 {
for i := 1; i < len(typeNames)-1; i++ {
packageName = strings.Join(typeNames[1:len(typeNames)-i], ".")
typeName = strings.Join(typeNames[len(typeNames)-i:], ".")
msg := desc.GetMessage(packageName, typeName)
if msg != nil {
typeNames := strings.Split(msg.GetName(), ".")
if len(typeNames) == 1 {
return packageName, msg.GetName()
}
return strings.Join(typeNames[1:len(typeNames)-1], "."), typeNames[len(typeNames)-1]
}
}
}
return "", ""
}
func (msg *DescriptorProto) GetFieldDescriptor(fieldName string) *FieldDescriptorProto {
for _, field := range msg.GetField() {
if field.GetName() == fieldName {
return field
}
}
return nil
}
func (desc *FileDescriptorSet) GetEnum(packageName string, typeName string) *EnumDescriptorProto {
for _, file := range desc.GetFile() {
if strings.Map(dotToUnderscore, file.GetPackage()) != strings.Map(dotToUnderscore, packageName) {
continue
}
for _, enum := range file.GetEnumType() {
if enum.GetName() == typeName {
return enum
}
}
}
return nil
}
func (f *FieldDescriptorProto) IsEnum() bool {
return *f.Type == FieldDescriptorProto_TYPE_ENUM
}
func (f *FieldDescriptorProto) IsMessage() bool {
return *f.Type == FieldDescriptorProto_TYPE_MESSAGE
}
func (f *FieldDescriptorProto) IsBytes() bool {
return *f.Type == FieldDescriptorProto_TYPE_BYTES
}
func (f *FieldDescriptorProto) IsRepeated() bool {
return f.Label != nil && *f.Label == FieldDescriptorProto_LABEL_REPEATED
}
func (f *FieldDescriptorProto) IsString() bool {
return *f.Type == FieldDescriptorProto_TYPE_STRING
}
func (f *FieldDescriptorProto) IsBool() bool {
return *f.Type == FieldDescriptorProto_TYPE_BOOL
}
func (f *FieldDescriptorProto) IsRequired() bool {
return f.Label != nil && *f.Label == FieldDescriptorProto_LABEL_REQUIRED
}
func (f *FieldDescriptorProto) IsPacked() bool {
return f.Options != nil && f.GetOptions().GetPacked()
}
func (f *FieldDescriptorProto) IsPacked3() bool {
if f.IsRepeated() && f.IsScalar() {
if f.Options == nil || f.GetOptions().Packed == nil {
return true
}
return f.Options != nil && f.GetOptions().GetPacked()
}
return false
}
func (m *DescriptorProto) HasExtension() bool {
return len(m.ExtensionRange) > 0
}


@ -73,6 +73,31 @@ type Marshaler struct {
// Whether to use the original (.proto) name for fields.
OrigName bool
// A custom URL resolver to use when marshaling Any messages to JSON.
// If unset, the default resolution strategy is to extract the
// fully-qualified type name from the type URL and pass that to
// proto.MessageType(string).
AnyResolver AnyResolver
}
// AnyResolver takes a type URL, present in an Any message, and resolves it into
// an instance of the associated message.
type AnyResolver interface {
Resolve(typeUrl string) (proto.Message, error)
}
func defaultResolveAny(typeUrl string) (proto.Message, error) {
// Only the part of typeUrl after the last slash is relevant.
mname := typeUrl
if slash := strings.LastIndex(mname, "/"); slash >= 0 {
mname = mname[slash+1:]
}
mt := proto.MessageType(mname)
if mt == nil {
return nil, fmt.Errorf("unknown message type %q", mname)
}
return reflect.New(mt.Elem()).Interface().(proto.Message), nil
}
// JSONPBMarshaler is implemented by protobuf messages that customize the
@ -168,8 +193,7 @@ func (m *Marshaler) marshalObject(out *errWriter, v proto.Message, indent, typeU
// "Generated output always contains 3, 6, or 9 fractional digits,
// depending on required precision."
s, ns := s.Field(0).Int(), s.Field(1).Int()
d := time.Duration(s)*time.Second + time.Duration(ns)*time.Nanosecond
x := fmt.Sprintf("%.9f", d.Seconds())
x := fmt.Sprintf("%d.%09d", s, ns)
x = strings.TrimSuffix(x, "000")
x = strings.TrimSuffix(x, "000")
out.write(`"`)
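For illustration of the "3, 6, or 9 fractional digits" rule above, a minimal sketch (not part of the diff) of the new Duration formatting path, with hypothetical values:

```
package main

import (
    "fmt"
    "strings"
)

func main() {
    // Hypothetical Duration of 1 second and 500,000,000 nanoseconds.
    s, ns := int64(1), int64(500000000)
    x := fmt.Sprintf("%d.%09d", s, ns) // "1.500000000"
    x = strings.TrimSuffix(x, "000")   // "1.500000"
    x = strings.TrimSuffix(x, "000")   // "1.500" -> three fractional digits
    fmt.Println(x + "s")
}
```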
@ -344,16 +368,17 @@ func (m *Marshaler) marshalAny(out *errWriter, any proto.Message, indent string)
turl := v.Field(0).String()
val := v.Field(1).Bytes()
// Only the part of type_url after the last slash is relevant.
mname := turl
if slash := strings.LastIndex(mname, "/"); slash >= 0 {
mname = mname[slash+1:]
var msg proto.Message
var err error
if m.AnyResolver != nil {
msg, err = m.AnyResolver.Resolve(turl)
} else {
msg, err = defaultResolveAny(turl)
}
mt := proto.MessageType(mname)
if mt == nil {
return fmt.Errorf("unknown message type %q", mname)
if err != nil {
return err
}
msg := reflect.New(mt.Elem()).Interface().(proto.Message)
if err := proto.Unmarshal(val, msg); err != nil {
return err
}
@ -590,6 +615,12 @@ type Unmarshaler struct {
// Whether to allow messages to contain unknown fields, as opposed to
// failing to unmarshal.
AllowUnknownFields bool
// A custom URL resolver to use when unmarshaling Any messages from JSON.
// If unset, the default resolution strategy is to extract the
// fully-qualified type name from the type URL and pass that to
// proto.MessageType(string).
AnyResolver AnyResolver
}
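For context, a minimal sketch (not part of the diff) of how the AnyResolver hook added to both Marshaler and Unmarshaler might be used; the resolver type and URL prefix below are hypothetical:

```
package main

import (
    "fmt"
    "reflect"
    "strings"

    "github.com/golang/protobuf/jsonpb"
    "github.com/golang/protobuf/proto"
)

// customResolver maps a hypothetical private URL prefix onto message
// types registered with the proto package, mirroring defaultResolveAny.
type customResolver struct{}

func (customResolver) Resolve(typeURL string) (proto.Message, error) {
    name := strings.TrimPrefix(typeURL, "types.example.com/")
    mt := proto.MessageType(name)
    if mt == nil {
        return nil, fmt.Errorf("unknown message type %q", name)
    }
    return reflect.New(mt.Elem()).Interface().(proto.Message), nil
}

func main() {
    // When AnyResolver is nil, both fall back to defaultResolveAny.
    m := &jsonpb.Marshaler{AnyResolver: customResolver{}}
    u := &jsonpb.Unmarshaler{AnyResolver: customResolver{}}
    _, _ = m, u
}
```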
// UnmarshalNext unmarshals the next protocol buffer from a JSON object stream.
@ -639,7 +670,14 @@ func (u *Unmarshaler) unmarshalValue(target reflect.Value, inputValue json.RawMe
// Allocate memory for pointer fields.
if targetType.Kind() == reflect.Ptr {
// If input value is "null" and target is a pointer type, then the field should be treated as not set
// UNLESS the target is structpb.Value, in which case it should be set to structpb.NullValue.
_, isJSONPBUnmarshaler := target.Interface().(JSONPBUnmarshaler)
if string(inputValue) == "null" && targetType != reflect.TypeOf(&stpb.Value{}) && !isJSONPBUnmarshaler {
return nil
}
target.Set(reflect.New(targetType.Elem()))
return u.unmarshalValue(target.Elem(), inputValue, prop)
}
@ -647,15 +685,11 @@ func (u *Unmarshaler) unmarshalValue(target reflect.Value, inputValue json.RawMe
return jsu.UnmarshalJSONPB(u, []byte(inputValue))
}
// Handle well-known types.
// Handle well-known types that are not pointers.
if w, ok := target.Addr().Interface().(wkt); ok {
switch w.XXX_WellKnownType() {
case "DoubleValue", "FloatValue", "Int64Value", "UInt64Value",
"Int32Value", "UInt32Value", "BoolValue", "StringValue", "BytesValue":
// "Wrappers use the same representation in JSON
// as the wrapped primitive type, except that null is allowed."
// encoding/json will turn JSON `null` into Go `nil`,
// so we don't have to do any extra work.
return u.unmarshalValue(target.Field(0), inputValue, prop)
case "Any":
// Use json.RawMessage pointer type instead of value to support pre-1.8 version.
@ -677,16 +711,17 @@ func (u *Unmarshaler) unmarshalValue(target reflect.Value, inputValue json.RawMe
}
target.Field(0).SetString(turl)
mname := turl
if slash := strings.LastIndex(mname, "/"); slash >= 0 {
mname = mname[slash+1:]
var m proto.Message
var err error
if u.AnyResolver != nil {
m, err = u.AnyResolver.Resolve(turl)
} else {
m, err = defaultResolveAny(turl)
}
mt := proto.MessageType(mname)
if mt == nil {
return fmt.Errorf("unknown message type %q", mname)
if err != nil {
return err
}
m := reflect.New(mt.Elem()).Interface().(proto.Message)
if _, ok := m.(wkt); ok {
val, ok := jsonFields["value"]
if !ok {
@ -716,21 +751,16 @@ func (u *Unmarshaler) unmarshalValue(target reflect.Value, inputValue json.RawMe
return nil
case "Duration":
ivStr := string(inputValue)
if ivStr == "null" {
target.Field(0).SetInt(0)
target.Field(1).SetInt(0)
return nil
}
unq, err := strconv.Unquote(ivStr)
unq, err := strconv.Unquote(string(inputValue))
if err != nil {
return err
}
d, err := time.ParseDuration(unq)
if err != nil {
return fmt.Errorf("bad Duration: %v", err)
}
ns := d.Nanoseconds()
s := ns / 1e9
ns %= 1e9
@ -738,33 +768,25 @@ func (u *Unmarshaler) unmarshalValue(target reflect.Value, inputValue json.RawMe
target.Field(1).SetInt(ns)
return nil
case "Timestamp":
ivStr := string(inputValue)
if ivStr == "null" {
target.Field(0).SetInt(0)
target.Field(1).SetInt(0)
return nil
}
unq, err := strconv.Unquote(ivStr)
unq, err := strconv.Unquote(string(inputValue))
if err != nil {
return err
}
t, err := time.Parse(time.RFC3339Nano, unq)
if err != nil {
return fmt.Errorf("bad Timestamp: %v", err)
}
target.Field(0).SetInt(int64(t.Unix()))
target.Field(0).SetInt(t.Unix())
target.Field(1).SetInt(int64(t.Nanosecond()))
return nil
case "Struct":
if string(inputValue) == "null" {
// Interpret a null struct as empty.
return nil
}
var m map[string]json.RawMessage
if err := json.Unmarshal(inputValue, &m); err != nil {
return fmt.Errorf("bad StructValue: %v", err)
}
target.Field(0).Set(reflect.ValueOf(map[string]*stpb.Value{}))
for k, jv := range m {
pv := &stpb.Value{}
@ -775,14 +797,11 @@ func (u *Unmarshaler) unmarshalValue(target reflect.Value, inputValue json.RawMe
}
return nil
case "ListValue":
if string(inputValue) == "null" {
// Interpret a null ListValue as empty.
return nil
}
var s []json.RawMessage
if err := json.Unmarshal(inputValue, &s); err != nil {
return fmt.Errorf("bad ListValue: %v", err)
}
target.Field(0).Set(reflect.ValueOf(make([]*stpb.Value, len(s), len(s))))
for i, sv := range s {
if err := u.unmarshalValue(target.Field(0).Index(i), sv, prop); err != nil {
@ -933,11 +952,13 @@ func (u *Unmarshaler) unmarshalValue(target reflect.Value, inputValue json.RawMe
if err := json.Unmarshal(inputValue, &slc); err != nil {
return err
}
len := len(slc)
target.Set(reflect.MakeSlice(targetType, len, len))
for i := 0; i < len; i++ {
if err := u.unmarshalValue(target.Index(i), slc[i], prop); err != nil {
return err
if slc != nil {
l := len(slc)
target.Set(reflect.MakeSlice(targetType, l, l))
for i := 0; i < l; i++ {
if err := u.unmarshalValue(target.Index(i), slc[i], prop); err != nil {
return err
}
}
}
return nil
@ -949,33 +970,35 @@ func (u *Unmarshaler) unmarshalValue(target reflect.Value, inputValue json.RawMe
if err := json.Unmarshal(inputValue, &mp); err != nil {
return err
}
target.Set(reflect.MakeMap(targetType))
var keyprop, valprop *proto.Properties
if prop != nil {
// These could still be nil if the protobuf metadata is broken somehow.
// TODO: This won't work because the fields are unexported.
// We should probably just reparse them.
//keyprop, valprop = prop.mkeyprop, prop.mvalprop
}
for ks, raw := range mp {
// Unmarshal map key. The core json library already decoded the key into a
// string, so we handle that specially. Other types were quoted post-serialization.
var k reflect.Value
if targetType.Key().Kind() == reflect.String {
k = reflect.ValueOf(ks)
} else {
k = reflect.New(targetType.Key()).Elem()
if err := u.unmarshalValue(k, json.RawMessage(ks), keyprop); err != nil {
if mp != nil {
target.Set(reflect.MakeMap(targetType))
var keyprop, valprop *proto.Properties
if prop != nil {
// These could still be nil if the protobuf metadata is broken somehow.
// TODO: This won't work because the fields are unexported.
// We should probably just reparse them.
//keyprop, valprop = prop.mkeyprop, prop.mvalprop
}
for ks, raw := range mp {
// Unmarshal map key. The core json library already decoded the key into a
// string, so we handle that specially. Other types were quoted post-serialization.
var k reflect.Value
if targetType.Key().Kind() == reflect.String {
k = reflect.ValueOf(ks)
} else {
k = reflect.New(targetType.Key()).Elem()
if err := u.unmarshalValue(k, json.RawMessage(ks), keyprop); err != nil {
return err
}
}
// Unmarshal map value.
v := reflect.New(targetType.Elem()).Elem()
if err := u.unmarshalValue(v, raw, valprop); err != nil {
return err
}
target.SetMapIndex(k, v)
}
// Unmarshal map value.
v := reflect.New(targetType.Elem()).Elem()
if err := u.unmarshalValue(v, raw, valprop); err != nil {
return err
}
target.SetMapIndex(k, v)
}
return nil
}


@ -174,11 +174,11 @@ func sizeFixed32(x uint64) int {
// This is the format used for the sint64 protocol buffer type.
func (p *Buffer) EncodeZigzag64(x uint64) error {
// use signed number to get arithmetic right shift.
return p.EncodeVarint(uint64((x << 1) ^ uint64((int64(x) >> 63))))
return p.EncodeVarint((x << 1) ^ uint64((int64(x) >> 63)))
}
func sizeZigzag64(x uint64) int {
return sizeVarint(uint64((x << 1) ^ uint64((int64(x) >> 63))))
return sizeVarint((x << 1) ^ uint64((int64(x) >> 63)))
}
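For reference, a standalone sketch (not part of the diff) of the zigzag mapping implemented by the expression above:

```
package main

import "fmt"

// zigzag64 maps signed values of small magnitude to small unsigned
// values: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
func zigzag64(x int64) uint64 {
    return uint64((x << 1) ^ (x >> 63))
}

// unzigzag64 inverts the mapping.
func unzigzag64(u uint64) int64 {
    return int64(u>>1) ^ -int64(u&1)
}

func main() {
    for _, v := range []int64{0, -1, 1, -2, 2} {
        fmt.Println(v, "->", zigzag64(v), "->", unzigzag64(zigzag64(v)))
    }
}
```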
// EncodeZigzag32 writes a zigzag-encoded 32-bit integer


@ -865,7 +865,7 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {
return p.readStruct(fv, terminator)
case reflect.Uint32:
if x, err := strconv.ParseUint(tok.value, 0, 32); err == nil {
fv.SetUint(uint64(x))
fv.SetUint(x)
return nil
}
case reflect.Uint64:


@ -51,6 +51,9 @@ const googleApis = "type.googleapis.com/"
// function. AnyMessageName is provided for less common use cases like filtering a
// sequence of Any messages based on a set of allowed message type names.
func AnyMessageName(any *any.Any) (string, error) {
if any == nil {
return "", fmt.Errorf("message is nil")
}
slash := strings.LastIndex(any.TypeUrl, "/")
if slash < 0 {
return "", fmt.Errorf("message type url %q is invalid", any.TypeUrl)


@ -1,11 +1,11 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: github.com/golang/protobuf/ptypes/any/any.proto
// source: google/protobuf/any.proto
/*
Package any is a generated protocol buffer package.
It is generated from these files:
github.com/golang/protobuf/ptypes/any/any.proto
google/protobuf/any.proto
It has these top-level messages:
Any
@ -62,6 +62,16 @@ const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// any.Unpack(foo)
// ...
//
// Example 4: Pack and unpack a message in Go
//
// foo := &pb.Foo{...}
// any, err := ptypes.MarshalAny(foo)
// ...
// foo := &pb.Foo{}
// if err := ptypes.UnmarshalAny(any, foo); err != nil {
// ...
// }
//
// The pack methods provided by protobuf library will by default use
// 'type.googleapis.com/full.type.name' as the type URL and the unpack
// methods only use the fully qualified type name after the last '/'
@ -149,20 +159,20 @@ func init() {
proto.RegisterType((*Any)(nil), "google.protobuf.Any")
}
func init() { proto.RegisterFile("github.com/golang/protobuf/ptypes/any/any.proto", fileDescriptor0) }
func init() { proto.RegisterFile("google/protobuf/any.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 184 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xd2, 0x4f, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf, 0xcf, 0x49, 0xcc, 0x4b, 0xd7, 0x2f, 0x28,
0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x2f, 0x28, 0xa9, 0x2c, 0x48, 0x2d, 0xd6, 0x4f, 0xcc,
0xab, 0x04, 0x61, 0x3d, 0xb0, 0xb8, 0x10, 0x7f, 0x7a, 0x7e, 0x7e, 0x7a, 0x4e, 0xaa, 0x1e, 0x4c,
0x95, 0x92, 0x19, 0x17, 0xb3, 0x63, 0x5e, 0xa5, 0x90, 0x24, 0x17, 0x07, 0x48, 0x79, 0x7c, 0x69,
0x51, 0x8e, 0x04, 0xa3, 0x02, 0xa3, 0x06, 0x67, 0x10, 0x3b, 0x88, 0x1f, 0x5a, 0x94, 0x23, 0x24,
0xc2, 0xc5, 0x5a, 0x96, 0x98, 0x53, 0x9a, 0x2a, 0xc1, 0xa4, 0xc0, 0xa8, 0xc1, 0x13, 0x04, 0xe1,
0x38, 0xe5, 0x73, 0x09, 0x27, 0xe7, 0xe7, 0xea, 0xa1, 0x19, 0xe7, 0xc4, 0xe1, 0x98, 0x57, 0x19,
0x00, 0xe2, 0x04, 0x30, 0x46, 0xa9, 0x12, 0xe5, 0xb8, 0x45, 0x4c, 0xcc, 0xee, 0x01, 0x4e, 0xab,
0x98, 0xe4, 0xdc, 0x21, 0x46, 0x05, 0x40, 0x95, 0xe8, 0x85, 0xa7, 0xe6, 0xe4, 0x78, 0xe7, 0xe5,
0x97, 0xe7, 0x85, 0x80, 0x94, 0x26, 0xb1, 0x81, 0xf5, 0x1a, 0x03, 0x02, 0x00, 0x00, 0xff, 0xff,
0x45, 0x1f, 0x1a, 0xf2, 0xf3, 0x00, 0x00, 0x00,
// 185 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x4c, 0xcf, 0xcf, 0x4f,
0xcf, 0x49, 0xd5, 0x2f, 0x28, 0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x4f, 0xcc, 0xab, 0xd4,
0x03, 0x73, 0x84, 0xf8, 0x21, 0x52, 0x7a, 0x30, 0x29, 0x25, 0x33, 0x2e, 0x66, 0xc7, 0xbc, 0x4a,
0x21, 0x49, 0x2e, 0x8e, 0x92, 0xca, 0x82, 0xd4, 0xf8, 0xd2, 0xa2, 0x1c, 0x09, 0x46, 0x05, 0x46,
0x0d, 0xce, 0x20, 0x76, 0x10, 0x3f, 0xb4, 0x28, 0x47, 0x48, 0x84, 0x8b, 0xb5, 0x2c, 0x31, 0xa7,
0x34, 0x55, 0x82, 0x49, 0x81, 0x51, 0x83, 0x27, 0x08, 0xc2, 0x71, 0xca, 0xe7, 0x12, 0x4e, 0xce,
0xcf, 0xd5, 0x43, 0x33, 0xce, 0x89, 0xc3, 0x31, 0xaf, 0x32, 0x00, 0xc4, 0x09, 0x60, 0x8c, 0x52,
0x4d, 0xcf, 0x2c, 0xc9, 0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf, 0xcf, 0x49, 0xcc,
0x4b, 0x47, 0xb8, 0xa8, 0x00, 0x64, 0x7a, 0x31, 0xc8, 0x61, 0x8b, 0x98, 0x98, 0xdd, 0x03, 0x9c,
0x56, 0x31, 0xc9, 0xb9, 0x43, 0x8c, 0x0a, 0x80, 0x2a, 0xd1, 0x0b, 0x4f, 0xcd, 0xc9, 0xf1, 0xce,
0xcb, 0x2f, 0xcf, 0x0b, 0x01, 0x29, 0x4d, 0x62, 0x03, 0xeb, 0x35, 0x06, 0x04, 0x00, 0x00, 0xff,
0xff, 0x13, 0xf8, 0xe8, 0x42, 0xdd, 0x00, 0x00, 0x00,
}


@ -1,11 +1,11 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: github.com/golang/protobuf/ptypes/duration/duration.proto
// source: google/protobuf/duration.proto
/*
Package duration is a generated protocol buffer package.
It is generated from these files:
github.com/golang/protobuf/ptypes/duration/duration.proto
google/protobuf/duration.proto
It has these top-level messages:
Duration
@ -125,22 +125,20 @@ func init() {
proto.RegisterType((*Duration)(nil), "google.protobuf.Duration")
}
func init() {
proto.RegisterFile("github.com/golang/protobuf/ptypes/duration/duration.proto", fileDescriptor0)
}
func init() { proto.RegisterFile("google/protobuf/duration.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 189 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xb2, 0x4c, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf, 0xcf, 0x49, 0xcc, 0x4b, 0xd7, 0x2f, 0x28,
0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x2f, 0x28, 0xa9, 0x2c, 0x48, 0x2d, 0xd6, 0x4f, 0x29,
0x2d, 0x4a, 0x2c, 0xc9, 0xcc, 0xcf, 0x83, 0x33, 0xf4, 0xc0, 0x2a, 0x84, 0xf8, 0xd3, 0xf3, 0xf3,
0xd3, 0x73, 0x52, 0xf5, 0x60, 0xea, 0x95, 0xac, 0xb8, 0x38, 0x5c, 0xa0, 0x4a, 0x84, 0x24, 0xb8,
0xd8, 0x8b, 0x53, 0x93, 0xf3, 0xf3, 0x52, 0x8a, 0x25, 0x18, 0x15, 0x18, 0x35, 0x98, 0x83, 0x60,
0x5c, 0x21, 0x11, 0x2e, 0xd6, 0xbc, 0xc4, 0xbc, 0xfc, 0x62, 0x09, 0x26, 0x05, 0x46, 0x0d, 0xd6,
0x20, 0x08, 0xc7, 0xa9, 0x86, 0x4b, 0x38, 0x39, 0x3f, 0x57, 0x0f, 0xcd, 0x48, 0x27, 0x5e, 0x98,
0x81, 0x01, 0x20, 0x91, 0x00, 0xc6, 0x28, 0x2d, 0xe2, 0xdd, 0xfb, 0x83, 0x91, 0x71, 0x11, 0x13,
0xb3, 0x7b, 0x80, 0xd3, 0x2a, 0x26, 0x39, 0x77, 0x88, 0xb9, 0x01, 0x50, 0xa5, 0x7a, 0xe1, 0xa9,
0x39, 0x39, 0xde, 0x79, 0xf9, 0xe5, 0x79, 0x21, 0x20, 0x2d, 0x49, 0x6c, 0x60, 0x33, 0x8c, 0x01,
0x01, 0x00, 0x00, 0xff, 0xff, 0x45, 0x5a, 0x81, 0x3d, 0x0e, 0x01, 0x00, 0x00,
// 190 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x4b, 0xcf, 0xcf, 0x4f,
0xcf, 0x49, 0xd5, 0x2f, 0x28, 0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x4f, 0x29, 0x2d, 0x4a,
0x2c, 0xc9, 0xcc, 0xcf, 0xd3, 0x03, 0x8b, 0x08, 0xf1, 0x43, 0xe4, 0xf5, 0x60, 0xf2, 0x4a, 0x56,
0x5c, 0x1c, 0x2e, 0x50, 0x25, 0x42, 0x12, 0x5c, 0xec, 0xc5, 0xa9, 0xc9, 0xf9, 0x79, 0x29, 0xc5,
0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0xcc, 0x41, 0x30, 0xae, 0x90, 0x08, 0x17, 0x6b, 0x5e, 0x62, 0x5e,
0x7e, 0xb1, 0x04, 0x93, 0x02, 0xa3, 0x06, 0x6b, 0x10, 0x84, 0xe3, 0x54, 0xc3, 0x25, 0x9c, 0x9c,
0x9f, 0xab, 0x87, 0x66, 0xa4, 0x13, 0x2f, 0xcc, 0xc0, 0x00, 0x90, 0x48, 0x00, 0x63, 0x94, 0x56,
0x7a, 0x66, 0x49, 0x46, 0x69, 0x92, 0x5e, 0x72, 0x7e, 0xae, 0x7e, 0x7a, 0x7e, 0x4e, 0x62, 0x5e,
0x3a, 0xc2, 0x7d, 0x05, 0x25, 0x95, 0x05, 0xa9, 0xc5, 0x70, 0x67, 0xfe, 0x60, 0x64, 0x5c, 0xc4,
0xc4, 0xec, 0x1e, 0xe0, 0xb4, 0x8a, 0x49, 0xce, 0x1d, 0x62, 0x6e, 0x00, 0x54, 0xa9, 0x5e, 0x78,
0x6a, 0x4e, 0x8e, 0x77, 0x5e, 0x7e, 0x79, 0x5e, 0x08, 0x48, 0x4b, 0x12, 0x1b, 0xd8, 0x0c, 0x63,
0x40, 0x00, 0x00, 0x00, 0xff, 0xff, 0xdc, 0x84, 0x30, 0xff, 0xf3, 0x00, 0x00, 0x00,
}


@ -1,11 +1,11 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: github.com/golang/protobuf/ptypes/struct/struct.proto
// source: google/protobuf/struct.proto
/*
Package structpb is a generated protocol buffer package.
It is generated from these files:
github.com/golang/protobuf/ptypes/struct/struct.proto
google/protobuf/struct.proto
It has these top-level messages:
Struct
@ -346,37 +346,35 @@ func init() {
proto.RegisterEnum("google.protobuf.NullValue", NullValue_name, NullValue_value)
}
func init() {
proto.RegisterFile("github.com/golang/protobuf/ptypes/struct/struct.proto", fileDescriptor0)
}
func init() { proto.RegisterFile("google/protobuf/struct.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 417 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x92, 0x41, 0x8b, 0xd3, 0x40,
0x14, 0x80, 0x3b, 0xc9, 0x36, 0x98, 0x17, 0x59, 0x97, 0x11, 0xb4, 0xac, 0xa0, 0xa1, 0x7b, 0x09,
0x22, 0x09, 0x56, 0x04, 0x31, 0x5e, 0x0c, 0xac, 0xbb, 0x60, 0x58, 0x62, 0x74, 0x57, 0xf0, 0x52,
0x9a, 0x34, 0x8d, 0xa1, 0xd3, 0x99, 0x90, 0xcc, 0x28, 0x3d, 0xfa, 0x2f, 0x3c, 0x7b, 0xf4, 0xe8,
0xaf, 0xf3, 0x28, 0x33, 0x93, 0x44, 0x69, 0x29, 0x78, 0x9a, 0xbe, 0x37, 0xdf, 0xfb, 0xe6, 0xbd,
0xd7, 0xc0, 0xf3, 0xb2, 0xe2, 0x9f, 0x45, 0xe6, 0xe7, 0x6c, 0x13, 0x94, 0x8c, 0x2c, 0x68, 0x19,
0xd4, 0x0d, 0xe3, 0x2c, 0x13, 0xab, 0xa0, 0xe6, 0xdb, 0xba, 0x68, 0x83, 0x96, 0x37, 0x22, 0xe7,
0xdd, 0xe1, 0xab, 0x5b, 0x7c, 0xa7, 0x64, 0xac, 0x24, 0x85, 0xdf, 0xb3, 0xd3, 0xef, 0x08, 0xac,
0xf7, 0x8a, 0xc0, 0x21, 0x58, 0xab, 0xaa, 0x20, 0xcb, 0x76, 0x82, 0x5c, 0xd3, 0x73, 0x66, 0x67,
0xfe, 0x0e, 0xec, 0x6b, 0xd0, 0x7f, 0xa3, 0xa8, 0x73, 0xca, 0x9b, 0x6d, 0xda, 0x95, 0x9c, 0xbe,
0x03, 0xe7, 0x9f, 0x34, 0x3e, 0x01, 0x73, 0x5d, 0x6c, 0x27, 0xc8, 0x45, 0x9e, 0x9d, 0xca, 0x9f,
0xf8, 0x09, 0x8c, 0xbf, 0x2c, 0x88, 0x28, 0x26, 0x86, 0x8b, 0x3c, 0x67, 0x76, 0x6f, 0x4f, 0x7e,
0x23, 0x6f, 0x53, 0x0d, 0xbd, 0x34, 0x5e, 0xa0, 0xe9, 0x2f, 0x03, 0xc6, 0x2a, 0x89, 0x43, 0x00,
0x2a, 0x08, 0x99, 0x6b, 0x81, 0x94, 0x1e, 0xcf, 0x4e, 0xf7, 0x04, 0x57, 0x82, 0x10, 0xc5, 0x5f,
0x8e, 0x52, 0x9b, 0xf6, 0x01, 0x3e, 0x83, 0xdb, 0x54, 0x6c, 0xb2, 0xa2, 0x99, 0xff, 0x7d, 0x1f,
0x5d, 0x8e, 0x52, 0x47, 0x67, 0x07, 0xa8, 0xe5, 0x4d, 0x45, 0xcb, 0x0e, 0x32, 0x65, 0xe3, 0x12,
0xd2, 0x59, 0x0d, 0x3d, 0x02, 0xc8, 0x18, 0xeb, 0xdb, 0x38, 0x72, 0x91, 0x77, 0x4b, 0x3e, 0x25,
0x73, 0x1a, 0x78, 0xa5, 0x2c, 0x22, 0xe7, 0x1d, 0x32, 0x56, 0xa3, 0xde, 0x3f, 0xb0, 0xc7, 0x4e,
0x2f, 0x72, 0x3e, 0x4c, 0x49, 0xaa, 0xb6, 0xaf, 0xb5, 0x54, 0xed, 0xfe, 0x94, 0x71, 0xd5, 0xf2,
0x61, 0x4a, 0xd2, 0x07, 0x91, 0x05, 0x47, 0xeb, 0x8a, 0x2e, 0xa7, 0x21, 0xd8, 0x03, 0x81, 0x7d,
0xb0, 0x94, 0xac, 0xff, 0x47, 0x0f, 0x2d, 0xbd, 0xa3, 0x1e, 0x3f, 0x00, 0x7b, 0x58, 0x22, 0x3e,
0x06, 0xb8, 0xba, 0x8e, 0xe3, 0xf9, 0xcd, 0xeb, 0xf8, 0xfa, 0xfc, 0x64, 0x14, 0x7d, 0x43, 0x70,
0x37, 0x67, 0x9b, 0x5d, 0x45, 0xe4, 0xe8, 0x69, 0x12, 0x19, 0x27, 0xe8, 0xd3, 0xd3, 0xff, 0xfd,
0x30, 0x43, 0x7d, 0xd4, 0xd9, 0x6f, 0x84, 0x7e, 0x18, 0xe6, 0x45, 0x12, 0xfd, 0x34, 0x1e, 0x5e,
0x68, 0x79, 0xd2, 0xf7, 0xf7, 0xb1, 0x20, 0xe4, 0x2d, 0x65, 0x5f, 0xe9, 0x07, 0x59, 0x99, 0x59,
0x4a, 0xf5, 0xec, 0x4f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x9b, 0x6e, 0x5d, 0x3c, 0xfe, 0x02, 0x00,
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x74, 0x92, 0x41, 0x8b, 0xd3, 0x40,
0x14, 0xc7, 0x3b, 0xc9, 0x36, 0x98, 0x17, 0x59, 0x97, 0x11, 0xb4, 0xac, 0xa2, 0xa1, 0x7b, 0x09,
0x22, 0x29, 0xd6, 0x8b, 0x18, 0x2f, 0x06, 0xd6, 0x5d, 0x30, 0x2c, 0x31, 0xba, 0x15, 0xbc, 0x94,
0x26, 0x4d, 0x63, 0xe8, 0x74, 0x26, 0x24, 0x33, 0x4a, 0x8f, 0x7e, 0x0b, 0xcf, 0x1e, 0x3d, 0xfa,
0xe9, 0x3c, 0xca, 0xcc, 0x24, 0xa9, 0xb4, 0xf4, 0x94, 0xbc, 0xf7, 0x7e, 0xef, 0x3f, 0xef, 0xff,
0x66, 0xe0, 0x71, 0xc1, 0x58, 0x41, 0xf2, 0x49, 0x55, 0x33, 0xce, 0x52, 0xb1, 0x9a, 0x34, 0xbc,
0x16, 0x19, 0xf7, 0x55, 0x8c, 0xef, 0xe9, 0xaa, 0xdf, 0x55, 0xc7, 0x3f, 0x11, 0x58, 0x1f, 0x15,
0x81, 0x03, 0xb0, 0x56, 0x65, 0x4e, 0x96, 0xcd, 0x08, 0xb9, 0xa6, 0xe7, 0x4c, 0x2f, 0xfc, 0x3d,
0xd8, 0xd7, 0xa0, 0xff, 0x4e, 0x51, 0x97, 0x94, 0xd7, 0xdb, 0xa4, 0x6d, 0x39, 0xff, 0x00, 0xce,
0x7f, 0x69, 0x7c, 0x06, 0xe6, 0x3a, 0xdf, 0x8e, 0x90, 0x8b, 0x3c, 0x3b, 0x91, 0xbf, 0xf8, 0x39,
0x0c, 0xbf, 0x2d, 0x88, 0xc8, 0x47, 0x86, 0x8b, 0x3c, 0x67, 0xfa, 0xe0, 0x40, 0x7c, 0x26, 0xab,
0x89, 0x86, 0x5e, 0x1b, 0xaf, 0xd0, 0xf8, 0x8f, 0x01, 0x43, 0x95, 0xc4, 0x01, 0x00, 0x15, 0x84,
0xcc, 0xb5, 0x80, 0x14, 0x3d, 0x9d, 0x9e, 0x1f, 0x08, 0xdc, 0x08, 0x42, 0x14, 0x7f, 0x3d, 0x48,
0x6c, 0xda, 0x05, 0xf8, 0x02, 0xee, 0x52, 0xb1, 0x49, 0xf3, 0x7a, 0xbe, 0x3b, 0x1f, 0x5d, 0x0f,
0x12, 0x47, 0x67, 0x7b, 0xa8, 0xe1, 0x75, 0x49, 0x8b, 0x16, 0x32, 0xe5, 0xe0, 0x12, 0xd2, 0x59,
0x0d, 0x3d, 0x05, 0x48, 0x19, 0xeb, 0xc6, 0x38, 0x71, 0x91, 0x77, 0x47, 0x1e, 0x25, 0x73, 0x1a,
0x78, 0xa3, 0x54, 0x44, 0xc6, 0x5b, 0x64, 0xa8, 0xac, 0x3e, 0x3c, 0xb2, 0xc7, 0x56, 0x5e, 0x64,
0xbc, 0x77, 0x49, 0xca, 0xa6, 0xeb, 0xb5, 0x54, 0xef, 0xa1, 0xcb, 0xa8, 0x6c, 0x78, 0xef, 0x92,
0x74, 0x41, 0x68, 0xc1, 0xc9, 0xba, 0xa4, 0xcb, 0x71, 0x00, 0x76, 0x4f, 0x60, 0x1f, 0x2c, 0x25,
0xd6, 0xdd, 0xe8, 0xb1, 0xa5, 0xb7, 0xd4, 0xb3, 0x47, 0x60, 0xf7, 0x4b, 0xc4, 0xa7, 0x00, 0x37,
0xb7, 0x51, 0x34, 0x9f, 0xbd, 0x8d, 0x6e, 0x2f, 0xcf, 0x06, 0xe1, 0x0f, 0x04, 0xf7, 0x33, 0xb6,
0xd9, 0x97, 0x08, 0x1d, 0xed, 0x26, 0x96, 0x71, 0x8c, 0xbe, 0xbc, 0x28, 0x4a, 0xfe, 0x55, 0xa4,
0x7e, 0xc6, 0x36, 0x93, 0x82, 0x91, 0x05, 0x2d, 0x76, 0x4f, 0xb1, 0xe2, 0xdb, 0x2a, 0x6f, 0xda,
0x17, 0x19, 0xe8, 0x4f, 0x95, 0xfe, 0x45, 0xe8, 0x97, 0x61, 0x5e, 0xc5, 0xe1, 0x6f, 0xe3, 0xc9,
0x95, 0x16, 0x8f, 0xbb, 0xf9, 0x3e, 0xe7, 0x84, 0xbc, 0xa7, 0xec, 0x3b, 0xfd, 0x24, 0x3b, 0x53,
0x4b, 0x49, 0xbd, 0xfc, 0x17, 0x00, 0x00, 0xff, 0xff, 0xe8, 0x1b, 0x59, 0xf8, 0xe5, 0x02, 0x00,
0x00,
}


@ -99,6 +99,15 @@ func Timestamp(ts *tspb.Timestamp) (time.Time, error) {
return t, validateTimestamp(ts)
}
// TimestampNow returns a google.protobuf.Timestamp for the current time.
func TimestampNow() *tspb.Timestamp {
ts, err := TimestampProto(time.Now())
if err != nil {
panic("ptypes: time.Now() out of Timestamp range")
}
return ts
}
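A brief usage sketch (not part of the diff) of the new TimestampNow helper together with the existing Timestamp conversion:

```
package main

import (
    "fmt"

    "github.com/golang/protobuf/ptypes"
)

func main() {
    ts := ptypes.TimestampNow() // *tspb.Timestamp for the current time
    t, err := ptypes.Timestamp(ts)
    if err != nil {
        panic(err) // not expected for a value produced by TimestampNow
    }
    fmt.Println(t.UTC())
}
```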
// TimestampProto converts the time.Time to a google.protobuf.Timestamp proto.
// It returns an error if the resulting Timestamp is invalid.
func TimestampProto(t time.Time) (*tspb.Timestamp, error) {


@ -1,11 +1,11 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: github.com/golang/protobuf/ptypes/timestamp/timestamp.proto
// source: google/protobuf/timestamp.proto
/*
Package timestamp is a generated protocol buffer package.
It is generated from these files:
github.com/golang/protobuf/ptypes/timestamp/timestamp.proto
google/protobuf/timestamp.proto
It has these top-level messages:
Timestamp
@ -141,22 +141,20 @@ func init() {
proto.RegisterType((*Timestamp)(nil), "google.protobuf.Timestamp")
}
func init() {
proto.RegisterFile("github.com/golang/protobuf/ptypes/timestamp/timestamp.proto", fileDescriptor0)
}
func init() { proto.RegisterFile("google/protobuf/timestamp.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 190 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xb2, 0x4e, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf, 0xcf, 0x49, 0xcc, 0x4b, 0xd7, 0x2f, 0x28,
0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x2f, 0x28, 0xa9, 0x2c, 0x48, 0x2d, 0xd6, 0x2f, 0xc9,
0xcc, 0x4d, 0x2d, 0x2e, 0x49, 0xcc, 0x2d, 0x40, 0xb0, 0xf4, 0xc0, 0x6a, 0x84, 0xf8, 0xd3, 0xf3,
0xf3, 0xd3, 0x73, 0x52, 0xf5, 0x60, 0x3a, 0x94, 0xac, 0xb9, 0x38, 0x43, 0x60, 0x6a, 0x84, 0x24,
0xb8, 0xd8, 0x8b, 0x53, 0x93, 0xf3, 0xf3, 0x52, 0x8a, 0x25, 0x18, 0x15, 0x18, 0x35, 0x98, 0x83,
0x60, 0x5c, 0x21, 0x11, 0x2e, 0xd6, 0xbc, 0xc4, 0xbc, 0xfc, 0x62, 0x09, 0x26, 0x05, 0x46, 0x0d,
0xd6, 0x20, 0x08, 0xc7, 0xa9, 0x8e, 0x4b, 0x38, 0x39, 0x3f, 0x57, 0x0f, 0xcd, 0x4c, 0x27, 0x3e,
0xb8, 0x89, 0x01, 0x20, 0xa1, 0x00, 0xc6, 0x28, 0x6d, 0x12, 0xdc, 0xfc, 0x83, 0x91, 0x71, 0x11,
0x13, 0xb3, 0x7b, 0x80, 0xd3, 0x2a, 0x26, 0x39, 0x77, 0x88, 0xc9, 0x01, 0x50, 0xb5, 0x7a, 0xe1,
0xa9, 0x39, 0x39, 0xde, 0x79, 0xf9, 0xe5, 0x79, 0x21, 0x20, 0x3d, 0x49, 0x6c, 0x60, 0x43, 0x8c,
0x01, 0x01, 0x00, 0x00, 0xff, 0xff, 0x6b, 0x59, 0x0a, 0x4d, 0x13, 0x01, 0x00, 0x00,
// 191 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x4f, 0xcf, 0xcf, 0x4f,
0xcf, 0x49, 0xd5, 0x2f, 0x28, 0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x2f, 0xc9, 0xcc, 0x4d,
0x2d, 0x2e, 0x49, 0xcc, 0x2d, 0xd0, 0x03, 0x0b, 0x09, 0xf1, 0x43, 0x14, 0xe8, 0xc1, 0x14, 0x28,
0x59, 0x73, 0x71, 0x86, 0xc0, 0xd4, 0x08, 0x49, 0x70, 0xb1, 0x17, 0xa7, 0x26, 0xe7, 0xe7, 0xa5,
0x14, 0x4b, 0x30, 0x2a, 0x30, 0x6a, 0x30, 0x07, 0xc1, 0xb8, 0x42, 0x22, 0x5c, 0xac, 0x79, 0x89,
0x79, 0xf9, 0xc5, 0x12, 0x4c, 0x0a, 0x8c, 0x1a, 0xac, 0x41, 0x10, 0x8e, 0x53, 0x1d, 0x97, 0x70,
0x72, 0x7e, 0xae, 0x1e, 0x9a, 0x99, 0x4e, 0x7c, 0x70, 0x13, 0x03, 0x40, 0x42, 0x01, 0x8c, 0x51,
0xda, 0xe9, 0x99, 0x25, 0x19, 0xa5, 0x49, 0x7a, 0xc9, 0xf9, 0xb9, 0xfa, 0xe9, 0xf9, 0x39, 0x89,
0x79, 0xe9, 0x08, 0x27, 0x16, 0x94, 0x54, 0x16, 0xa4, 0x16, 0x23, 0x5c, 0xfa, 0x83, 0x91, 0x71,
0x11, 0x13, 0xb3, 0x7b, 0x80, 0xd3, 0x2a, 0x26, 0x39, 0x77, 0x88, 0xc9, 0x01, 0x50, 0xb5, 0x7a,
0xe1, 0xa9, 0x39, 0x39, 0xde, 0x79, 0xf9, 0xe5, 0x79, 0x21, 0x20, 0x3d, 0x49, 0x6c, 0x60, 0x43,
0x8c, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff, 0xbc, 0x77, 0x4a, 0x07, 0xf7, 0x00, 0x00, 0x00,
}


@ -94,10 +94,7 @@ func (entry Entry) log(level Level, msg string) {
entry.Level = level
entry.Message = msg
entry.Logger.mu.Lock()
err := entry.Logger.Hooks.Fire(level, &entry)
entry.Logger.mu.Unlock()
if err != nil {
if err := entry.Logger.Hooks.Fire(level, &entry); err != nil {
entry.Logger.mu.Lock()
fmt.Fprintf(os.Stderr, "Failed to fire hook: %v\n", err)
entry.Logger.mu.Unlock()


@ -315,9 +315,3 @@ func (logger *Logger) level() Level {
func (logger *Logger) SetLevel(level Level) {
atomic.StoreUint32((*uint32)(&logger.Level), uint32(level))
}
func (logger *Logger) AddHook(hook Hook) {
logger.mu.Lock()
defer logger.mu.Unlock()
logger.Hooks.Add(hook)
}

19  cmd/vendor/go.uber.org/atomic/LICENSE.txt generated vendored Normal file

@ -0,0 +1,19 @@
Copyright (c) 2016 Uber Technologies, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

309  cmd/vendor/go.uber.org/atomic/atomic.go generated vendored Normal file

@ -0,0 +1,309 @@
// Copyright (c) 2016 Uber Technologies, Inc.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
// Package atomic provides simple wrappers around numerics to enforce atomic
// access.
package atomic
import (
"math"
"sync/atomic"
)
// Int32 is an atomic wrapper around an int32.
type Int32 struct{ v int32 }
// NewInt32 creates an Int32.
func NewInt32(i int32) *Int32 {
return &Int32{i}
}
// Load atomically loads the wrapped value.
func (i *Int32) Load() int32 {
return atomic.LoadInt32(&i.v)
}
// Add atomically adds to the wrapped int32 and returns the new value.
func (i *Int32) Add(n int32) int32 {
return atomic.AddInt32(&i.v, n)
}
// Sub atomically subtracts from the wrapped int32 and returns the new value.
func (i *Int32) Sub(n int32) int32 {
return atomic.AddInt32(&i.v, -n)
}
// Inc atomically increments the wrapped int32 and returns the new value.
func (i *Int32) Inc() int32 {
return i.Add(1)
}
// Dec atomically decrements the wrapped int32 and returns the new value.
func (i *Int32) Dec() int32 {
return i.Sub(1)
}
// CAS is an atomic compare-and-swap.
func (i *Int32) CAS(old, new int32) bool {
return atomic.CompareAndSwapInt32(&i.v, old, new)
}
// Store atomically stores the passed value.
func (i *Int32) Store(n int32) {
atomic.StoreInt32(&i.v, n)
}
// Swap atomically swaps the wrapped int32 and returns the old value.
func (i *Int32) Swap(n int32) int32 {
return atomic.SwapInt32(&i.v, n)
}
// Int64 is an atomic wrapper around an int64.
type Int64 struct{ v int64 }
// NewInt64 creates an Int64.
func NewInt64(i int64) *Int64 {
return &Int64{i}
}
// Load atomically loads the wrapped value.
func (i *Int64) Load() int64 {
return atomic.LoadInt64(&i.v)
}
// Add atomically adds to the wrapped int64 and returns the new value.
func (i *Int64) Add(n int64) int64 {
return atomic.AddInt64(&i.v, n)
}
// Sub atomically subtracts from the wrapped int64 and returns the new value.
func (i *Int64) Sub(n int64) int64 {
return atomic.AddInt64(&i.v, -n)
}
// Inc atomically increments the wrapped int64 and returns the new value.
func (i *Int64) Inc() int64 {
return i.Add(1)
}
// Dec atomically decrements the wrapped int64 and returns the new value.
func (i *Int64) Dec() int64 {
return i.Sub(1)
}
// CAS is an atomic compare-and-swap.
func (i *Int64) CAS(old, new int64) bool {
return atomic.CompareAndSwapInt64(&i.v, old, new)
}
// Store atomically stores the passed value.
func (i *Int64) Store(n int64) {
atomic.StoreInt64(&i.v, n)
}
// Swap atomically swaps the wrapped int64 and returns the old value.
func (i *Int64) Swap(n int64) int64 {
return atomic.SwapInt64(&i.v, n)
}
// Uint32 is an atomic wrapper around a uint32.
type Uint32 struct{ v uint32 }
// NewUint32 creates a Uint32.
func NewUint32(i uint32) *Uint32 {
return &Uint32{i}
}
// Load atomically loads the wrapped value.
func (i *Uint32) Load() uint32 {
return atomic.LoadUint32(&i.v)
}
// Add atomically adds to the wrapped uint32 and returns the new value.
func (i *Uint32) Add(n uint32) uint32 {
return atomic.AddUint32(&i.v, n)
}
// Sub atomically subtracts from the wrapped uint32 and returns the new value.
func (i *Uint32) Sub(n uint32) uint32 {
return atomic.AddUint32(&i.v, ^(n - 1))
}
// Inc atomically increments the wrapped uint32 and returns the new value.
func (i *Uint32) Inc() uint32 {
return i.Add(1)
}
// Dec atomically decrements the wrapped uint32 and returns the new value.
func (i *Uint32) Dec() uint32 {
return i.Sub(1)
}
// CAS is an atomic compare-and-swap.
func (i *Uint32) CAS(old, new uint32) bool {
return atomic.CompareAndSwapUint32(&i.v, old, new)
}
// Store atomically stores the passed value.
func (i *Uint32) Store(n uint32) {
atomic.StoreUint32(&i.v, n)
}
// Swap atomically swaps the wrapped uint32 and returns the old value.
func (i *Uint32) Swap(n uint32) uint32 {
return atomic.SwapUint32(&i.v, n)
}
// Uint64 is an atomic wrapper around a uint64.
type Uint64 struct{ v uint64 }
// NewUint64 creates a Uint64.
func NewUint64(i uint64) *Uint64 {
return &Uint64{i}
}
// Load atomically loads the wrapped value.
func (i *Uint64) Load() uint64 {
return atomic.LoadUint64(&i.v)
}
// Add atomically adds to the wrapped uint64 and returns the new value.
func (i *Uint64) Add(n uint64) uint64 {
return atomic.AddUint64(&i.v, n)
}
// Sub atomically subtracts from the wrapped uint64 and returns the new value.
func (i *Uint64) Sub(n uint64) uint64 {
return atomic.AddUint64(&i.v, ^(n - 1))
}
// Inc atomically increments the wrapped uint64 and returns the new value.
func (i *Uint64) Inc() uint64 {
return i.Add(1)
}
// Dec atomically decrements the wrapped uint64 and returns the new value.
func (i *Uint64) Dec() uint64 {
return i.Sub(1)
}
// CAS is an atomic compare-and-swap.
func (i *Uint64) CAS(old, new uint64) bool {
return atomic.CompareAndSwapUint64(&i.v, old, new)
}
// Store atomically stores the passed value.
func (i *Uint64) Store(n uint64) {
atomic.StoreUint64(&i.v, n)
}
// Swap atomically swaps the wrapped uint64 and returns the old value.
func (i *Uint64) Swap(n uint64) uint64 {
return atomic.SwapUint64(&i.v, n)
}
// Bool is an atomic Boolean.
type Bool struct{ v uint32 }
// NewBool creates a Bool.
func NewBool(initial bool) *Bool {
return &Bool{boolToInt(initial)}
}
// Load atomically loads the Boolean.
func (b *Bool) Load() bool {
return truthy(atomic.LoadUint32(&b.v))
}
// CAS is an atomic compare-and-swap.
func (b *Bool) CAS(old, new bool) bool {
return atomic.CompareAndSwapUint32(&b.v, boolToInt(old), boolToInt(new))
}
// Store atomically stores the passed value.
func (b *Bool) Store(new bool) {
atomic.StoreUint32(&b.v, boolToInt(new))
}
// Swap sets the given value and returns the previous value.
func (b *Bool) Swap(new bool) bool {
return truthy(atomic.SwapUint32(&b.v, boolToInt(new)))
}
// Toggle atomically negates the Boolean and returns the previous value.
func (b *Bool) Toggle() bool {
return truthy(atomic.AddUint32(&b.v, 1) - 1)
}
func truthy(n uint32) bool {
return n&1 == 1
}
func boolToInt(b bool) uint32 {
if b {
return 1
}
return 0
}
// Float64 is an atomic wrapper around float64.
type Float64 struct {
v uint64
}
// NewFloat64 creates a Float64.
func NewFloat64(f float64) *Float64 {
return &Float64{math.Float64bits(f)}
}
// Load atomically loads the wrapped value.
func (f *Float64) Load() float64 {
return math.Float64frombits(atomic.LoadUint64(&f.v))
}
// Store atomically stores the passed value.
func (f *Float64) Store(s float64) {
atomic.StoreUint64(&f.v, math.Float64bits(s))
}
// Add atomically adds to the wrapped float64 and returns the new value.
func (f *Float64) Add(s float64) float64 {
for {
old := f.Load()
new := old + s
if f.CAS(old, new) {
return new
}
}
}
// Sub atomically subtracts from the wrapped float64 and returns the new value.
func (f *Float64) Sub(s float64) float64 {
return f.Add(-s)
}
// CAS is an atomic compare-and-swap.
func (f *Float64) CAS(old, new float64) bool {
return atomic.CompareAndSwapUint64(&f.v, math.Float64bits(old), math.Float64bits(new))
}
// Value shadows the type of the same name from sync/atomic
// https://godoc.org/sync/atomic#Value
type Value struct{ atomic.Value }
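A minimal usage sketch (not part of the diff) of the wrappers defined in this file:

```
package main

import (
    "fmt"

    "go.uber.org/atomic"
)

func main() {
    n := atomic.NewInt64(0)
    n.Inc()
    n.Add(41)
    fmt.Println(n.Load()) // 42

    ready := atomic.NewBool(false)
    if ready.CAS(false, true) {
        fmt.Println("flipped to ready")
    }

    f := atomic.NewFloat64(1.5)
    f.Add(2.5)
    fmt.Println(f.Load()) // 4
}
```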

49  cmd/vendor/go.uber.org/atomic/string.go generated vendored Normal file

@ -0,0 +1,49 @@
// Copyright (c) 2016 Uber Technologies, Inc.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
package atomic
// String is an atomic type-safe wrapper around Value for strings.
type String struct{ v Value }
// NewString creates a String.
func NewString(str string) *String {
s := &String{}
if str != "" {
s.Store(str)
}
return s
}
// Load atomically loads the wrapped string.
func (s *String) Load() string {
v := s.v.Load()
if v == nil {
return ""
}
return v.(string)
}
// Store atomically stores the passed string.
// Note: Converting the string to an interface{} to store in the Value
// requires an allocation.
func (s *String) Store(str string) {
s.v.Store(str)
}
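A tiny sketch (not part of the diff): because String wraps Value, its zero value is usable, and Load on an unset String returns the empty string:

```
package main

import (
    "fmt"

    "go.uber.org/atomic"
)

func main() {
    var s atomic.String
    fmt.Printf("%q\n", s.Load()) // ""
    s.Store("ready")
    fmt.Println(s.Load()) // ready
}
```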

19  cmd/vendor/go.uber.org/multierr/LICENSE.txt generated vendored Normal file

@ -0,0 +1,19 @@
Copyright (c) 2017 Uber Technologies, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

401  cmd/vendor/go.uber.org/multierr/error.go generated vendored Normal file

@ -0,0 +1,401 @@
// Copyright (c) 2017 Uber Technologies, Inc.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
// Package multierr allows combining one or more errors together.
//
// Overview
//
// Errors can be combined with the use of the Combine function.
//
// multierr.Combine(
// reader.Close(),
// writer.Close(),
// conn.Close(),
// )
//
// If only two errors are being combined, the Append function may be used
// instead.
//
// err = multierr.Append(reader.Close(), writer.Close())
//
// This makes it possible to record resource cleanup failures from deferred
// blocks with the help of named return values.
//
// func sendRequest(req Request) (err error) {
// conn, err := openConnection()
// if err != nil {
// return err
// }
// defer func() {
// err = multierr.Append(err, conn.Close())
// }()
// // ...
// }
//
// The underlying list of errors for a returned error object may be retrieved
// with the Errors function.
//
// errors := multierr.Errors(err)
// if len(errors) > 0 {
// fmt.Println("The following errors occurred:")
// }
//
// Advanced Usage
//
// Errors returned by Combine and Append MAY implement the following
// interface.
//
// type errorGroup interface {
// // Returns a slice containing the underlying list of errors.
// //
// // This slice MUST NOT be modified by the caller.
// Errors() []error
// }
//
// Note that if you need access to the list of errors behind a multierr error, you
// should prefer using the Errors function. That said, if you need cheap
// read-only access to the underlying errors slice, you can attempt to cast
// the error to this interface. You MUST handle the failure case gracefully
// because errors returned by Combine and Append are not guaranteed to
// implement this interface.
//
// var errors []error
// group, ok := err.(errorGroup)
// if ok {
// errors = group.Errors()
// } else {
// errors = []error{err}
// }
package multierr // import "go.uber.org/multierr"
import (
"bytes"
"fmt"
"io"
"strings"
"sync"
"go.uber.org/atomic"
)
var (
// Separator for single-line error messages.
_singlelineSeparator = []byte("; ")
_newline = []byte("\n")
// Prefix for multi-line messages
_multilinePrefix = []byte("the following errors occurred:")
// Prefix for the first and following lines of an item in a list of
// multi-line error messages.
//
// For example, if a single item is:
//
// foo
// bar
//
// It will become,
//
// - foo
// bar
_multilineSeparator = []byte("\n - ")
_multilineIndent = []byte(" ")
)
// _bufferPool is a pool of bytes.Buffers.
var _bufferPool = sync.Pool{
New: func() interface{} {
return &bytes.Buffer{}
},
}
type errorGroup interface {
Errors() []error
}
// Errors returns a slice containing zero or more errors that the supplied
// error is composed of. If the error is nil, the returned slice is empty.
//
// err := multierr.Append(r.Close(), w.Close())
// errors := multierr.Errors(err)
//
// If the error is not composed of other errors, the returned slice contains
// just the error that was passed in.
//
// Callers of this function are free to modify the returned slice.
func Errors(err error) []error {
if err == nil {
return nil
}
// Note that we're casting to multiError, not errorGroup. Our contract is
// that returned errors MAY implement errorGroup. Errors, however, only
// has special behavior for multierr-specific error objects.
//
// This behavior can be expanded in the future but I think it's prudent to
// start with as little as possible in terms of contract and possibility
// of misuse.
eg, ok := err.(*multiError)
if !ok {
return []error{err}
}
errors := eg.Errors()
result := make([]error, len(errors))
copy(result, errors)
return result
}
// multiError is an error that holds one or more errors.
//
// An instance of this is guaranteed to be non-empty and flattened. That is,
// none of the errors inside multiError are other multiErrors.
//
// multiError formats to a semi-colon delimited list of error messages with
// %v and with a more readable multi-line format with %+v.
type multiError struct {
copyNeeded atomic.Bool
errors []error
}
var _ errorGroup = (*multiError)(nil)
// Errors returns the list of underlying errors.
//
// This slice MUST NOT be modified.
func (merr *multiError) Errors() []error {
if merr == nil {
return nil
}
return merr.errors
}
func (merr *multiError) Error() string {
if merr == nil {
return ""
}
buff := _bufferPool.Get().(*bytes.Buffer)
buff.Reset()
merr.writeSingleline(buff)
result := buff.String()
_bufferPool.Put(buff)
return result
}
func (merr *multiError) Format(f fmt.State, c rune) {
if c == 'v' && f.Flag('+') {
merr.writeMultiline(f)
} else {
merr.writeSingleline(f)
}
}
func (merr *multiError) writeSingleline(w io.Writer) {
first := true
for _, item := range merr.errors {
if first {
first = false
} else {
w.Write(_singlelineSeparator)
}
io.WriteString(w, item.Error())
}
}
func (merr *multiError) writeMultiline(w io.Writer) {
w.Write(_multilinePrefix)
for _, item := range merr.errors {
w.Write(_multilineSeparator)
writePrefixLine(w, _multilineIndent, fmt.Sprintf("%+v", item))
}
}
// Writes s to the writer with the given prefix added before each line after
// the first.
func writePrefixLine(w io.Writer, prefix []byte, s string) {
first := true
for len(s) > 0 {
if first {
first = false
} else {
w.Write(prefix)
}
idx := strings.IndexByte(s, '\n')
if idx < 0 {
idx = len(s) - 1
}
io.WriteString(w, s[:idx+1])
s = s[idx+1:]
}
}
type inspectResult struct {
// Number of top-level non-nil errors
Count int
// Total number of errors including multiErrors
Capacity int
// Index of the first non-nil error in the list. Value is meaningless if
// Count is zero.
FirstErrorIdx int
// Whether the list contains at least one multiError
ContainsMultiError bool
}
// Inspects the given slice of errors so that we can efficiently allocate
// space for it.
func inspect(errors []error) (res inspectResult) {
first := true
for i, err := range errors {
if err == nil {
continue
}
res.Count++
if first {
first = false
res.FirstErrorIdx = i
}
if merr, ok := err.(*multiError); ok {
res.Capacity += len(merr.errors)
res.ContainsMultiError = true
} else {
res.Capacity++
}
}
return
}
// fromSlice converts the given list of errors into a single error.
func fromSlice(errors []error) error {
res := inspect(errors)
switch res.Count {
case 0:
return nil
case 1:
// only one non-nil entry
return errors[res.FirstErrorIdx]
case len(errors):
if !res.ContainsMultiError {
// already flat
return &multiError{errors: errors}
}
}
nonNilErrs := make([]error, 0, res.Capacity)
for _, err := range errors[res.FirstErrorIdx:] {
if err == nil {
continue
}
if nested, ok := err.(*multiError); ok {
nonNilErrs = append(nonNilErrs, nested.errors...)
} else {
nonNilErrs = append(nonNilErrs, err)
}
}
return &multiError{errors: nonNilErrs}
}
// Combine combines the passed errors into a single error.
//
// If zero arguments were passed or if all items are nil, a nil error is
// returned.
//
// Combine(nil, nil) // == nil
//
// If only a single error was passed, it is returned as-is.
//
// Combine(err) // == err
//
// Combine skips over nil arguments so this function may be used to combine
// together errors from operations that fail independently of each other.
//
// multierr.Combine(
// reader.Close(),
// writer.Close(),
// pipe.Close(),
// )
//
// If any of the passed errors is a multierr error, it will be flattened along
// with the other errors.
//
// multierr.Combine(multierr.Combine(err1, err2), err3)
// // is the same as
// multierr.Combine(err1, err2, err3)
//
// The returned error formats into a readable multi-line error message if
// formatted with %+v.
//
// fmt.Sprintf("%+v", multierr.Combine(err1, err2))
func Combine(errors ...error) error {
return fromSlice(errors)
}
// Append appends the given errors together. Either value may be nil.
//
// This function is a specialization of Combine for the common case where
// there are only two errors.
//
// err = multierr.Append(reader.Close(), writer.Close())
//
// The following pattern may also be used to record failure of deferred
// operations without losing information about the original error.
//
// func doSomething(..) (err error) {
// f := acquireResource()
// defer func() {
// err = multierr.Append(err, f.Close())
// }()
func Append(left error, right error) error {
switch {
case left == nil:
return right
case right == nil:
return left
}
if _, ok := right.(*multiError); !ok {
if l, ok := left.(*multiError); ok && !l.copyNeeded.Swap(true) {
// Common case where the error on the left is constantly being
// appended to.
errs := append(l.errors, right)
return &multiError{errors: errs}
} else if !ok {
// Both errors are single errors.
return &multiError{errors: []error{left, right}}
}
}
// Either right or both, left and right, are multiErrors. Rely on usual
// expensive logic.
errors := [2]error{left, right}
return fromSlice(errors[0:])
}
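A compact, self-contained sketch (not part of the diff) exercising Combine, Errors, and the multi-line %+v format described above:

```
package main

import (
    "errors"
    "fmt"

    "go.uber.org/multierr"
)

func main() {
    err := multierr.Combine(
        errors.New("close reader"),
        nil, // nil entries are skipped
        errors.New("close writer"),
    )
    fmt.Println(err)                       // close reader; close writer
    fmt.Printf("%+v\n", err)               // multi-line form
    fmt.Println(len(multierr.Errors(err))) // 2
}
```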

19  cmd/vendor/go.uber.org/zap/LICENSE.txt generated vendored Normal file

@ -0,0 +1,19 @@
Copyright (c) 2016-2017 Uber Technologies, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

320  cmd/vendor/go.uber.org/zap/array.go generated vendored Normal file

@ -0,0 +1,320 @@
// Copyright (c) 2016 Uber Technologies, Inc.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
package zap
import (
"time"
"go.uber.org/zap/zapcore"
)
// Array constructs a field with the given key and ArrayMarshaler. It provides
// a flexible, but still type-safe and efficient, way to add array-like types
// to the logging context. The struct's MarshalLogArray method is called lazily.
func Array(key string, val zapcore.ArrayMarshaler) zapcore.Field {
return zapcore.Field{Key: key, Type: zapcore.ArrayMarshalerType, Interface: val}
}
// Bools constructs a field that carries a slice of bools.
func Bools(key string, bs []bool) zapcore.Field {
return Array(key, bools(bs))
}
// ByteStrings constructs a field that carries a slice of []byte, each of which
// must be UTF-8 encoded text.
func ByteStrings(key string, bss [][]byte) zapcore.Field {
return Array(key, byteStringsArray(bss))
}
// Complex128s constructs a field that carries a slice of complex numbers.
func Complex128s(key string, nums []complex128) zapcore.Field {
return Array(key, complex128s(nums))
}
// Complex64s constructs a field that carries a slice of complex numbers.
func Complex64s(key string, nums []complex64) zapcore.Field {
return Array(key, complex64s(nums))
}
// Durations constructs a field that carries a slice of time.Durations.
func Durations(key string, ds []time.Duration) zapcore.Field {
return Array(key, durations(ds))
}
// Float64s constructs a field that carries a slice of floats.
func Float64s(key string, nums []float64) zapcore.Field {
return Array(key, float64s(nums))
}
// Float32s constructs a field that carries a slice of floats.
func Float32s(key string, nums []float32) zapcore.Field {
return Array(key, float32s(nums))
}
// Ints constructs a field that carries a slice of integers.
func Ints(key string, nums []int) zapcore.Field {
return Array(key, ints(nums))
}
// Int64s constructs a field that carries a slice of integers.
func Int64s(key string, nums []int64) zapcore.Field {
return Array(key, int64s(nums))
}
// Int32s constructs a field that carries a slice of integers.
func Int32s(key string, nums []int32) zapcore.Field {
return Array(key, int32s(nums))
}
// Int16s constructs a field that carries a slice of integers.
func Int16s(key string, nums []int16) zapcore.Field {
return Array(key, int16s(nums))
}
// Int8s constructs a field that carries a slice of integers.
func Int8s(key string, nums []int8) zapcore.Field {
return Array(key, int8s(nums))
}
// Strings constructs a field that carries a slice of strings.
func Strings(key string, ss []string) zapcore.Field {
return Array(key, stringArray(ss))
}
// Times constructs a field that carries a slice of time.Times.
func Times(key string, ts []time.Time) zapcore.Field {
return Array(key, times(ts))
}
// Uints constructs a field that carries a slice of unsigned integers.
func Uints(key string, nums []uint) zapcore.Field {
return Array(key, uints(nums))
}
// Uint64s constructs a field that carries a slice of unsigned integers.
func Uint64s(key string, nums []uint64) zapcore.Field {
return Array(key, uint64s(nums))
}
// Uint32s constructs a field that carries a slice of unsigned integers.
func Uint32s(key string, nums []uint32) zapcore.Field {
return Array(key, uint32s(nums))
}
// Uint16s constructs a field that carries a slice of unsigned integers.
func Uint16s(key string, nums []uint16) zapcore.Field {
return Array(key, uint16s(nums))
}
// Uint8s constructs a field that carries a slice of unsigned integers.
func Uint8s(key string, nums []uint8) zapcore.Field {
return Array(key, uint8s(nums))
}
// Uintptrs constructs a field that carries a slice of pointer addresses.
func Uintptrs(key string, us []uintptr) zapcore.Field {
return Array(key, uintptrs(us))
}
// Errors constructs a field that carries a slice of errors.
func Errors(key string, errs []error) zapcore.Field {
return Array(key, errArray(errs))
}
type bools []bool
func (bs bools) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range bs {
arr.AppendBool(bs[i])
}
return nil
}
type byteStringsArray [][]byte
func (bss byteStringsArray) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range bss {
arr.AppendByteString(bss[i])
}
return nil
}
type complex128s []complex128
func (nums complex128s) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range nums {
arr.AppendComplex128(nums[i])
}
return nil
}
type complex64s []complex64
func (nums complex64s) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range nums {
arr.AppendComplex64(nums[i])
}
return nil
}
type durations []time.Duration
func (ds durations) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range ds {
arr.AppendDuration(ds[i])
}
return nil
}
type float64s []float64
func (nums float64s) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range nums {
arr.AppendFloat64(nums[i])
}
return nil
}
type float32s []float32
func (nums float32s) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range nums {
arr.AppendFloat32(nums[i])
}
return nil
}
type ints []int
func (nums ints) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range nums {
arr.AppendInt(nums[i])
}
return nil
}
type int64s []int64
func (nums int64s) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range nums {
arr.AppendInt64(nums[i])
}
return nil
}
type int32s []int32
func (nums int32s) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range nums {
arr.AppendInt32(nums[i])
}
return nil
}
type int16s []int16
func (nums int16s) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range nums {
arr.AppendInt16(nums[i])
}
return nil
}
type int8s []int8
func (nums int8s) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range nums {
arr.AppendInt8(nums[i])
}
return nil
}
type stringArray []string
func (ss stringArray) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range ss {
arr.AppendString(ss[i])
}
return nil
}
type times []time.Time
func (ts times) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range ts {
arr.AppendTime(ts[i])
}
return nil
}
type uints []uint
func (nums uints) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range nums {
arr.AppendUint(nums[i])
}
return nil
}
type uint64s []uint64
func (nums uint64s) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range nums {
arr.AppendUint64(nums[i])
}
return nil
}
type uint32s []uint32
func (nums uint32s) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range nums {
arr.AppendUint32(nums[i])
}
return nil
}
type uint16s []uint16
func (nums uint16s) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range nums {
arr.AppendUint16(nums[i])
}
return nil
}
type uint8s []uint8
func (nums uint8s) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range nums {
arr.AppendUint8(nums[i])
}
return nil
}
type uintptrs []uintptr
func (nums uintptrs) MarshalLogArray(arr zapcore.ArrayEncoder) error {
for i := range nums {
arr.AppendUintptr(nums[i])
}
return nil
}
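
A minimal usage sketch for the array helpers above, assuming only the public zap API as vendored in this diff. The `userIDs` type is hypothetical: it shows a caller-defined `zapcore.ArrayMarshaler`, mirroring the unexported slice wrappers (`bools`, `int64s`, ...) that `array.go` uses internally, and is encoded lazily by `zap.Array`:

```go
package main

import (
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// userIDs is a hypothetical caller-defined slice type. Implementing
// zapcore.ArrayMarshaler lets zap.Array encode it only when the entry is
// actually written, the same pattern array.go applies to built-in slices.
type userIDs []int64

func (ids userIDs) MarshalLogArray(arr zapcore.ArrayEncoder) error {
	for i := range ids {
		arr.AppendInt64(ids[i])
	}
	return nil
}

func main() {
	logger, _ := zap.NewExample() // example logger; error ignored for brevity
	defer logger.Sync()

	logger.Info("request handled",
		zap.Strings("tags", []string{"grpc", "watch"}), // slice helper from array.go
		zap.Array("user_ids", userIDs{1, 2, 3}),        // custom ArrayMarshaler
	)
}
```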

106
cmd/vendor/go.uber.org/zap/buffer/buffer.go generated vendored Normal file

@@ -0,0 +1,106 @@
// Copyright (c) 2016 Uber Technologies, Inc.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
// Package buffer provides a thin wrapper around a byte slice. Unlike the
// standard library's bytes.Buffer, it supports a portion of the strconv
// package's zero-allocation formatters.
package buffer
import "strconv"
const _size = 1024 // by default, create 1 KiB buffers
// Buffer is a thin wrapper around a byte slice. It's intended to be pooled, so
// the only way to construct one is via a Pool.
type Buffer struct {
bs []byte
pool Pool
}
// AppendByte writes a single byte to the Buffer.
func (b *Buffer) AppendByte(v byte) {
b.bs = append(b.bs, v)
}
// AppendString writes a string to the Buffer.
func (b *Buffer) AppendString(s string) {
b.bs = append(b.bs, s...)
}
// AppendInt appends an integer to the underlying buffer (assuming base 10).
func (b *Buffer) AppendInt(i int64) {
b.bs = strconv.AppendInt(b.bs, i, 10)
}
// AppendUint appends an unsigned integer to the underlying buffer (assuming
// base 10).
func (b *Buffer) AppendUint(i uint64) {
b.bs = strconv.AppendUint(b.bs, i, 10)
}
// AppendBool appends a bool to the underlying buffer.
func (b *Buffer) AppendBool(v bool) {
b.bs = strconv.AppendBool(b.bs, v)
}
// AppendFloat appends a float to the underlying buffer. It doesn't quote NaN
// or +/- Inf.
func (b *Buffer) AppendFloat(f float64, bitSize int) {
b.bs = strconv.AppendFloat(b.bs, f, 'f', -1, bitSize)
}
// Len returns the length of the underlying byte slice.
func (b *Buffer) Len() int {
return len(b.bs)
}
// Cap returns the capacity of the underlying byte slice.
func (b *Buffer) Cap() int {
return cap(b.bs)
}
// Bytes returns a mutable reference to the underlying byte slice.
func (b *Buffer) Bytes() []byte {
return b.bs
}
// String returns a string copy of the underlying byte slice.
func (b *Buffer) String() string {
return string(b.bs)
}
// Reset resets the underlying byte slice. Subsequent writes re-use the slice's
// backing array.
func (b *Buffer) Reset() {
b.bs = b.bs[:0]
}
// Write implements io.Writer.
func (b *Buffer) Write(bs []byte) (int, error) {
b.bs = append(b.bs, bs...)
return len(bs), nil
}
// Free returns the Buffer to its Pool.
//
// Callers must not retain references to the Buffer after calling Free.
func (b *Buffer) Free() {
b.pool.put(b)
}
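
The Buffer type above has no exported constructor; callers obtain one from a Pool (defined in the companion pool.go file shown below). A minimal sketch of the append helpers, assuming only the methods listed in this diff:

```go
package main

import (
	"fmt"

	"go.uber.org/zap/buffer"
)

func main() {
	pool := buffer.NewPool()

	buf := pool.Get() // a reset *buffer.Buffer backed by a pooled 1 KiB slice
	buf.AppendString("took=")
	buf.AppendInt(42) // strconv.AppendInt under the hood, base 10
	buf.AppendString("ms ok=")
	buf.AppendBool(true)

	fmt.Println(buf.String(), buf.Len()) // "took=42ms ok=true" 17
	buf.Free()                           // return to the pool; do not retain buf afterwards
}
```

Because Buffer also implements io.Writer (the Write method above), it can be the target of fmt.Fprintf and similar helpers without an intermediate copy.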

49
cmd/vendor/go.uber.org/zap/buffer/pool.go generated vendored Normal file

@@ -0,0 +1,49 @@
// Copyright (c) 2016 Uber Technologies, Inc.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
package buffer
import "sync"
// A Pool is a type-safe wrapper around a sync.Pool.
type Pool struct {
p *sync.Pool
}
// NewPool constructs a new Pool.
func NewPool() Pool {
return Pool{p: &sync.Pool{
New: func() interface{} {
return &Buffer{bs: make([]byte, 0, _size)}
},
}}
}
// Get retrieves a Buffer from the pool, creating one if necessary.
func (p Pool) Get() *Buffer {
buf := p.p.Get().(*Buffer)
buf.Reset()
buf.pool = p
return buf
}
func (p Pool) put(buf *Buffer) {
p.p.Put(buf)
}
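
A sketch of the Get/Free lifecycle, assuming the Pool and Buffer APIs exactly as vendored above; `writeLines` is a hypothetical helper that reuses pooled buffers across iterations:

```go
package main

import (
	"io"
	"os"

	"go.uber.org/zap/buffer"
)

// writeLines renders each line into a pooled Buffer and flushes it to w.
// Get resets a recycled buffer and records its owning pool; Free hands it
// back so later iterations can reuse the same backing array.
func writeLines(w io.Writer, lines []string) error {
	pool := buffer.NewPool()
	for _, line := range lines {
		buf := pool.Get()
		buf.AppendString(line)
		buf.AppendByte('\n')
		_, err := w.Write(buf.Bytes())
		buf.Free()
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = writeLines(os.Stdout, []string{"alpha", "beta"})
}
```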

Some files were not shown because too many files have changed in this diff