Compare commits

...

74 Commits

Author SHA1 Message Date
2373ddb445 version: bump up to 3.1.14 2018-04-24 13:23:45 -07:00
5da3a7232f Merge pull request #9606 from jpbetz/automated-cherry-pick-of-#9587-release-3.1
Automated cherry pick of #9587
2018-04-23 12:48:23 -07:00
3865d69db3 etcdserver: add is_leader prometheus metric that is 1 on the leader.
Before this change, we had no way to find the leader using the /metrics
endpoint. This commit adds a metric to do that.
2018-04-23 11:10:12 -07:00
c764878701 etcdmain: fix "InitialElectionTickAdvance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 11:09:07 -07:00
e66af56730 etcdserver: log skipping initial election tick
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 11:07:51 -07:00
097a653945 etcdmain: add "--initial-election-tick-advance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 11:07:36 -07:00
d2673cec81 embed: add "InitialElectionTickAdvance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 11:06:36 -07:00
4e63906c33 integration: set InitialElectionTickAdvance to true by default
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 11:05:15 -07:00
0c0bf3f1a5 etcdserver: add "InitialElectionTickAdvance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 11:03:16 -07:00
16487395e1 test: simplify CI tests
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 19:05:34 -07:00
c6ae68d3f7 travis.yml: update, remove go tip tests
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 11:15:37 -07:00
3b6bd6eea6 tests: move Semaphore script
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-09 11:36:23 -07:00
b5ae9b6879 version: bump up to 3.1.13+git 2018-03-29 10:56:52 -07:00
1558170293 version: bump up to 3.1.13 2018-03-29 10:28:55 -07:00
c3a14a2b28 semaphore: run release test with v3.1.12
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-29 09:23:15 -07:00
6f75c56c5e etcdserver: Manually backport etcdserver/raft.go tickMu fix to 3.1 2018-03-28 12:40:07 -07:00
908c0f4f98 rafthttp: add missing "peer_sent_failures_total" metrics call
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-28 12:39:59 -07:00
35c6ea7a67 Documentation/upgrades: backport all upgrade guides
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-28 12:39:59 -07:00
8eeab582d0 etcdserver: adjust election ticks on restart
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-28 10:17:30 -07:00
c536205249 etcdserver: make "advanceTicks" method
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-28 10:05:02 -07:00
2e57d99a2c rafthttp: add "ActivePeers" to "Transport"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-28 10:02:13 -07:00
2fdc4aa06c version: bump up to 3.1.12+git 2018-03-08 14:17:36 -08:00
918698add7 version: bump up to 3.1.12 2018-03-08 13:01:30 -08:00
00d14cfd03 test: fix etcd-tester flags
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 08:24:04 -08:00
6e11a79fd8 *: remove unused env vars
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 01:36:54 -08:00
028f99b103 test: fix "functional-tester" build script
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 01:29:57 -08:00
290fa0c1be *: sync "functional-tester" from 3.2 branch
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 00:57:30 -08:00
0874fcbed4 test: run "functional" tests in 3.1 branch
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 00:26:52 -08:00
7a148fee36 Merge pull request #9401 from jpbetz/automated-cherry-pick-of-#9297-origin-release-3.1
Automated cherry pick of #9297
2018-03-07 22:03:14 -08:00
087b9aa3dc mvcc: fix watchable store test for 3.2 cherrypick of #9281 2018-03-07 21:20:32 -08:00
b6373f1625 mvcc: restore unsynced watchers
In case syncWatchersLoop() starts before Restore() is called,
watchers already added by that moment are moved to s.synced by the loop.
However, there is broken logic that moves watchers from s.synced
to s.unsynced without setting keyWatchers of the watcherGroup.
Eventually syncWatchers() fails to pick up those watchers from s.unsynced
and no events are sent to the watchers, because newWatcherBatch(), called
in the function, internally uses wg.watcherSetByKey(), which requires
a proper keyWatchers value.
2018-03-07 21:20:21 -08:00
4178b75411 hack/scripts-dev: fix indentation in run.sh
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 14:31:25 -08:00
9c8e39e7f4 hack/scripts-dev: sync with master
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 14:25:10 -08:00
af3021aa1a semaphore: update Go version, release test version
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 14:24:57 -08:00
df0b652d6a travis: use docker, sync with master
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 14:24:04 -08:00
8e5d62cf1e Merge pull request #8933 from gyuho/release-doc
release-3.1: scripts/build-docker: build both gcr.io and quay.io images
2017-11-30 12:49:51 -08:00
212d801294 scripts/build-docker: build both gcr.io and quay.io images
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-28 15:03:38 -08:00
c2d8d9fd26 version: bump up to 3.1.11+git 2017-11-28 14:43:37 -08:00
960f4604bc version: bump up to v3.1.11 2017-11-28 10:48:24 -08:00
22b67da920 Merge pull request #8902 from jpbetz/automated-cherry-pick-of-#8813-release-3.1
Automated cherry pick of #8813 release 3.1
2017-11-22 11:21:13 -08:00
4b53ab0909 test: Clean agent directories on disk before functional test runs, not after
This is primarily so CI tooling can capture the agent logs after the functional tester runs.
2017-11-21 11:41:57 -08:00
b32ec69f9b vendor: Switch from boltdb v1.3.0 to coreos/bbolt v1.3.1-coreos.3 2017-11-21 11:34:45 -08:00
3ab9894b04 Merge pull request #8806 from jpbetz/automated-cherry-pick-of-#8427-origin-release-3.1-1509570173
Automated cherry pick of #8427 to release 3.1 branch
2017-11-09 18:07:40 -08:00
00d6d4aba7 e2e: test against latest v3.1.x release
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-08 15:11:34 -08:00
88b5e22b73 semaphore: manually pin v3.1.10 for release upgrade test
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-08 12:46:15 -08:00
2bb8278fbf hack/scripts-dev: add Makefile, Dockerfile-test
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-06 14:13:51 -08:00
935c76b8c3 semaphore.sh: fail tests with "(--- FAIL:|leak)"
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-03 10:59:23 -07:00
e83f50ec7c mvcc: sending events after restore
Fixes: #8411
2017-11-02 17:15:35 -07:00
232a81d804 client/integration: use only digits in unix port
Fix https://github.com/coreos/etcd/issues/7558.

Same as https://github.com/coreos/etcd/issues/6959.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-02 13:56:55 -07:00
6ffc32cd06 semaphore.sh: add to release-3.1 branch 2017-11-02 13:56:48 -07:00
f8aeb21c2d version: bump up to v3.1.10+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-11-02 13:56:16 -07:00
0520cb9304 version: bump up to 3.1.10
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-14 09:37:27 -07:00
424e4ae1cc fixtures: add gencerts.sh, generate CRL 2017-07-14 09:37:15 -07:00
a631a80a39 travis.yml: test with Go 1.8.3
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-14 08:51:24 -07:00
fc08fd75ee version: bump up to 3.1.9+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-07-14 08:51:13 -07:00
0f4a535c2f version: bump up to 3.1.9
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-09 10:44:38 -07:00
c765bef483 rafthttp: permit very large v2 snapshots
v2 snapshots were hitting the 512MB message decode limit, causing
snapshot sends to new members to fail for being too big.
2017-06-09 10:44:07 -07:00
5586a5806e version: bump up to 3.1.8+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-09 10:43:34 -07:00
d267ca9c18 version: bump up to 3.1.8
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-05 12:25:40 -07:00
4176fe768f *: fix other broken links in markdown
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-03 22:27:58 -07:00
950c846144 Documentation/v3: fix broken links
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-03 22:27:51 -07:00
0b78d66abe Documentation/v2: fix broken links
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-03 16:59:23 -07:00
2d58079626 integration: close accepted connection on stopc path
Connection pausing added another exit condition in the listener
path, causing the bridge to leak connections instead of closing
them when signalled to close. Also adds some additional Close
paranoia.

Fixes #7823
2017-05-03 14:51:47 -07:00
be171fa424 etcdserver: apply() sets consistIndex for any entry type
previously, apply() doesn't set consistIndex for EntryConfChange type.
this causes a misalignment between consistIndex and applied index,
where an EntryConfChange entry sets the applied index but not consistIndex.

suppose that addMember() is called and the leader reflects that change.
1. the applied index and consistIndex are now misaligned.
2. a new follower node joins.
3. the leader sends the snapshot to the follower,
	where the applied index is the snapshot metadata index.
4. the follower saves the snapshot and database (which includes consistIndex) from the leader.
5. the restarting follower loads the snapshot and database.
6. the follower checks the snapshot metadata index (same as applied index) against the database consistIndex,
	finds they don't match, and then panics.

FIXES #7834
2017-05-03 09:22:48 -07:00
4b60243fc5 etcd-2-1-0-bench: Fix an absolute bare link to resource outside of Documentation dir 2017-05-03 08:32:23 -07:00
2c5d79f49f Docs: replace absolute links with relative ones. 2017-05-03 08:32:15 -07:00
424abca6ac version: bump up to 3.1.7+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-03 08:31:53 -07:00
43b75072bf version: bump up to 3.1.7
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-04-27 08:25:50 -07:00
78141fae60 clientv3: set current revision to create rev regardless of CreateNotify
Turns out the optimization to ignore setting the init rev for
current revision watches breaks some ordering assumptions. Since
Watch only returns a channel once it gets a response, it should
bind the revision at the time of the first create response.

Was causing TestWatchReconnInit to fail.
2017-04-25 10:54:39 -07:00
3be37f042e integration: add pause/unpause to client bridge
Resetting connections sometimes isn't enough; need to stop/resume
accepting connections for some tests while keeping the member up.
2017-04-25 10:54:15 -07:00
7c896098d2 clientv3/integration: test watch resume with disconnect before first event 2017-04-25 10:53:58 -07:00
30f4e36de4 clientv3: only update initReq.rev == 0 with creation watch revision
Always updating the initReq.rev on watch create will resume from the wrong
revision if initReq is ever nonzero.
2017-04-25 10:53:37 -07:00
557abbe437 ctlv3: use printer for lease command results
Fixes #7783
2017-04-20 10:39:36 -07:00
4b448c209b version: bump up to 3.1.6+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-04-20 10:39:18 -07:00
363 changed files with 10044 additions and 1485 deletions

View File

@@ -1,10 +1,12 @@
dist: trusty
language: go
go_import_path: github.com/coreos/etcd
sudo: false
sudo: required
services: docker
go:
- 1.7.5
- 1.8.7
notifications:
on_success: never
@@ -12,41 +14,84 @@ notifications:
env:
matrix:
- TARGET=amd64
- TARGET=arm64
- TARGET=arm
- TARGET=386
- TARGET=ppc64le
- TARGET=linux-amd64-build
- TARGET=linux-amd64-unit
- TARGET=linux-amd64-integration
- TARGET=linux-amd64-functional
- TARGET=linux-386-build
- TARGET=linux-386-unit
- TARGET=darwin-amd64-build
- TARGET=windows-amd64-build
- TARGET=linux-arm-build
- TARGET=linux-arm64-build
- TARGET=linux-ppc64le-build
matrix:
fast_finish: true
allow_failures:
- go: tip
exclude:
- go: tip
env: TARGET=arm
- go: tip
env: TARGET=arm64
- go: tip
env: TARGET=386
- go: tip
env: TARGET=ppc64le
# disable godep restore override
before_install:
- if [[ $TRAVIS_GO_VERSION == 1.* ]]; then docker pull gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION}; fi
install:
- pushd cmd/etcd && go get -t -v ./... && popd
script:
- echo "TRAVIS_GO_VERSION=${TRAVIS_GO_VERSION}"
- >
case "${TARGET}" in
amd64)
GOARCH=amd64 ./test
linux-amd64-build)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GOARCH=amd64 PASSES='build' ./test"
;;
386)
GOARCH=386 PASSES="build unit" ./test
linux-amd64-unit)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GOARCH=amd64 PASSES='unit' ./test"
;;
*)
# test building out of gopath
GO_BUILD_FLAGS="-a -v" GOPATH="" GOARCH="${TARGET}" ./build
linux-amd64-integration)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GOARCH=amd64 PASSES='integration' ./test"
;;
linux-amd64-functional)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "./build && GOARCH=amd64 PASSES='functional' ./test"
;;
linux-386-build)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GOARCH=386 PASSES='build' ./test"
;;
linux-386-unit)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GOARCH=386 PASSES='unit' ./test"
;;
darwin-amd64-build)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GO_BUILD_FLAGS='-v' GOOS=darwin GOARCH=amd64 ./build"
;;
windows-amd64-build)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GO_BUILD_FLAGS='-v' GOOS=windows GOARCH=amd64 ./build"
;;
linux-arm-build)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GO_BUILD_FLAGS='-v' GOARCH=arm ./build"
;;
linux-arm64-build)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GO_BUILD_FLAGS='-v' GOARCH=arm64 ./build"
;;
linux-ppc64le-build)
docker run --rm \
--volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
/bin/bash -c "GO_BUILD_FLAGS='-v' GOARCH=ppc64le ./build"
;;
esac

Dockerfile-test (new file, 57 lines)
View File

@@ -0,0 +1,57 @@
FROM ubuntu:16.10
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
RUN apt-get -y update \
&& apt-get -y install \
build-essential \
gcc \
apt-utils \
pkg-config \
software-properties-common \
apt-transport-https \
libssl-dev \
sudo \
bash \
curl \
wget \
tar \
git \
netcat \
libaspell-dev \
libhunspell-dev \
hunspell-en-us \
aspell-en \
shellcheck \
&& apt-get -y update \
&& apt-get -y upgrade \
&& apt-get -y autoremove \
&& apt-get -y autoclean
ENV GOROOT /usr/local/go
ENV GOPATH /go
ENV PATH ${GOPATH}/bin:${GOROOT}/bin:${PATH}
ENV GO_VERSION REPLACE_ME_GO_VERSION
ENV GO_DOWNLOAD_URL https://storage.googleapis.com/golang
RUN rm -rf ${GOROOT} \
&& curl -s ${GO_DOWNLOAD_URL}/go${GO_VERSION}.linux-amd64.tar.gz | tar -v -C /usr/local/ -xz \
&& mkdir -p ${GOPATH}/src ${GOPATH}/bin \
&& go version
RUN mkdir -p ${GOPATH}/src/github.com/coreos/etcd
WORKDIR ${GOPATH}/src/github.com/coreos/etcd
ADD ./scripts/install-marker.sh /tmp/install-marker.sh
RUN go get -v -u -tags spell github.com/chzchzchz/goword \
&& go get -v -u github.com/coreos/license-bill-of-materials \
&& go get -v -u honnef.co/go/tools/cmd/gosimple \
&& go get -v -u honnef.co/go/tools/cmd/unused \
&& go get -v -u honnef.co/go/tools/cmd/staticcheck \
&& go get -v -u github.com/wadey/gocovmerge \
&& go get -v -u github.com/gordonklaus/ineffassign \
&& /tmp/install-marker.sh amd64 \
&& rm -f /tmp/install-marker.sh \
&& curl -s https://codecov.io/bash >/codecov \
&& chmod 700 /codecov

View File

@@ -49,4 +49,4 @@ Bootstrap another machine and use the [hey HTTP benchmark tool][hey] to send req
| 256 | 256 | all servers | 3061 | 119.3 |
[hey]: https://github.com/rakyll/hey
[hack-benchmark]: /hack/benchmark/
[hack-benchmark]: https://github.com/coreos/etcd/tree/master/hack/benchmark

View File

@@ -69,4 +69,4 @@ Bootstrap another machine and use the [hey HTTP benchmark tool][hey] to send req
[hey]: https://github.com/rakyll/hey
[c7146bd5]: https://github.com/coreos/etcd/commits/c7146bd5f2c73716091262edc638401bb8229144
[etcd-2.1-benchmark]: etcd-2-1-0-alpha-benchmarks.md
[hack-benchmark]: /hack/benchmark/
[hack-benchmark]: ../../hack/benchmark/

View File

@@ -39,4 +39,4 @@ The performance is nearly the same as the one with empty server handler.
The performance with empty server handler is not affected by one put. So the
performance downgrade should be caused by storage package.
[etcd-v3-benchmark]: /tools/benchmark/
[etcd-v3-benchmark]: ../../tools/benchmark/

View File

@@ -3,7 +3,7 @@
etcd uses the [capnslog][capnslog] library for logging application output categorized into *levels*; a usage sketch follows this excerpt. A log message's level is determined according to these conventions:
* Error: Data has been lost, a request has failed for a bad reason, or a required resource has been lost
* Examples:
* A failure to allocate disk space for WAL
* Warning: (Hopefully) Temporary conditions that may cause errors, but may work fine. A replica disappearing (that may reconnect) is a warning.
@@ -26,4 +26,4 @@ etcd uses the [capnslog][capnslog] library for logging application output catego
* Send a normal message to a remote peer
* Write a log entry to disk
[capnslog]: [https://github.com/coreos/pkg/tree/master/capnslog]
[capnslog]: https://github.com/coreos/pkg/tree/master/capnslog
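
Following the conventions above, a minimal capnslog usage sketch; the package name, the `example` logger, and the messages are illustrative, not taken from etcd itself:

```go
package main

import (
	"errors"

	"github.com/coreos/pkg/capnslog"
)

// Each package declares one package-level logger, as etcd packages do.
var plog = capnslog.NewPackageLogger("github.com/coreos/etcd", "example")

func main() {
	err := errors.New("no space left on device")
	plog.Errorf("failed to allocate disk space for WAL: %v", err) // Error: data has been lost
	plog.Warningf("replica disappeared; it may reconnect")        // Warning: temporary, may self-heal
	plog.Infof("rare but important cluster event")                // Info: relevant to the operator
	plog.Debugf("sent a normal message to a remote peer")         // Debug: very frequent, verbose
}
```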

View File

@@ -475,5 +475,5 @@ To set up an etcd cluster with proxies of v2 API, please read the [clustering
[proxy]: https://github.com/coreos/etcd/blob/release-2.3/Documentation/proxy.md
[clustering_etcd2]: https://github.com/coreos/etcd/blob/release-2.3/Documentation/clustering.md
[security-guide]: security.md
[tls-setup]: /hack/tls-setup
[tls-setup]: ../../hack/tls-setup
[gateway]: gateway.md

View File

@@ -286,7 +286,7 @@ Follow the instructions when using these flags.
[build-cluster]: clustering.md#static
[reconfig]: runtime-configuration.md
[discovery]: clustering.md#discovery
[iana-ports]: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=etcd
[iana-ports]: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt
[proxy]: ../v2/proxy.md
[restore]: ../v2/admin_guide.md#restoring-a-backup
[security]: security.md

View File

@@ -36,9 +36,9 @@ watch key A ^ ^ watch key A |
To effectively coalesce multiple client watchers into a single watcher, the gRPC proxy coalesces new `c-watchers` into an existing `s-watcher` when possible. This coalesced `s-watcher` may be out of sync with the etcd server due to network delays or buffered undelivered events. When the watch revision is unspecified, the gRPC proxy will not guarantee the `c-watcher` will start watching from the most recent store revision. For example, if a client watches from an etcd server with revision 1000, that watcher will begin at revision 1000. If a client watches from the gRPC proxy, it may begin watching from revision 990.
Similar limitations apply to cancellation. When the watcher is cancelled, the etcd server's revision may be greater than the cancellation response revision.
These two limitations should not cause problems for most use cases. In the future, there may be additional options to force the watcher to bypass the gRPC proxy for more accurate revision responses.
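
Where exact start revisions matter behind the proxy, a client can pin the watch revision explicitly instead of relying on "current". A minimal clientv3 sketch, assuming an illustrative proxy endpoint and key:

```go
package main

import (
	"context"
	"fmt"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	// 127.0.0.1:23790 stands in for a gRPC proxy endpoint.
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"127.0.0.1:23790"}})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Learn the current store revision from a read's response header, then
	// watch from revision+1 so no events are missed even if the coalesced
	// proxy watcher lags behind the server.
	resp, err := cli.Get(context.Background(), "foo")
	if err != nil {
		panic(err)
	}
	wch := cli.Watch(context.Background(), "foo", clientv3.WithRev(resp.Header.Revision+1))
	for wresp := range wch {
		for _, ev := range wresp.Events {
			fmt.Printf("%s %q -> %q\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}
```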
## Scalable lease API
@@ -75,3 +75,4 @@ $ ETCDCTL_API=3 ./etcdctl --endpoints=127.0.0.1:2379 get foo
foo
bar
```

View File

@@ -219,6 +219,6 @@ Make sure to sign the certificates with a Subject Name the member's public IP ad
The certificate needs to be signed for the member's FQDN in its Subject Name; use Subject Alternative Names (IP SANs for short) to add the IP address. The `etcd-ca` tool provides a `--domain=` option for its `new-cert` command, and openssl can make [it][alt-name] too; an illustrative Go sketch follows the links below.
[cfssl]: https://github.com/cloudflare/cfssl
[tls-setup]: /hack/tls-setup
[tls-setup]: ../../hack/tls-setup
[tls-guide]: https://github.com/coreos/docs/blob/master/os/generate-self-signed-certificates.md
[alt-name]: http://wiki.cacert.org/FAQ/subjectAltName
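
For illustration only, a self-signed example in Go showing where the FQDN and the IP SAN go; the hostname and address are hypothetical, and real setups would use cfssl, etcd-ca, or openssl as described above:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "member1.example.com"}, // the member's FQDN
		DNSNames:     []string{"member1.example.com"},
		IPAddresses:  []net.IP{net.ParseIP("10.0.1.10")}, // IP SAN for the member's public IP
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```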

View File

@@ -50,7 +50,7 @@ Radius Intelligence uses Kubernetes running CoreOS to containerize and scale int
## Vonage
- *Application*: system configuration for microservices, scheduling, locks (future - service discovery)
- *Launched*: August 2015
- *Cluster Size*: 2 clusters of 5 members in 2 DCs, n local proxies 1-to-1 with microservice, (ssl and SRV look up)
- *Order of Data Size*: kilobytes
@@ -60,3 +60,148 @@ Radius Intelligence uses Kubernetes running CoreOS to containerize and scale int
[teamcity]: https://www.jetbrains.com/teamcity/
[raoofm]:https://github.com/raoofm
## Qiniu Cloud
- *Application*: system configuration for microservices, distributed locks
- *Launched*: Jan. 2016
- *Cluster Size*: 3 members each with several clusters
- *Order of Data Size*: kilobytes
- *Operator*: Pandora, chenchao@qiniu.com
- *Environment*: Baremetal
- *Backups*: None, all data can be recreated if necessary
## QingCloud
- *Application*: [QingCloud][qingcloud] appcenter cluster for service discovery as [metad][metad] backend.
- *Launched*: December 2016
- *Cluster Size*: 1 cluster of 3 members per user.
- *Order of Data Size*: kilobytes
- *Operator*: [yunify][yunify]
- *Environment*: QingCloud IaaS
- *Backups*: None, all data can be recreated if necessary.
[metad]:https://github.com/yunify/metad
[yunify]:https://github.com/yunify
[qingcloud]:https://qingcloud.com/
## Yandex
- *Application*: system configuration for services, service discovery
- *Launched*: March 2016
- *Cluster Size*: 3 clusters of 5 members
- *Order of Data Size*: several gigabytes
- *Operator*: Yandex; [nekto0n][nekto0n]
- *Environment*: Bare Metal
- *Backups*: None
[nekto0n]:https://github.com/nekto0n
## Tencent Games
- *Application*: Meta data and configuration data for service discovery, Kubernetes, etc.
- *Launched*: Jan. 2015
- *Cluster Size*: 3 members each with 10s of clusters
- *Order of Data Size*: 10s of Megabytes
- *Operator*: Tencent Game Operations Department
- *Environment*: Baremetal
- *Backups*: Periodic sync to backup server
In Tencent games, we use Docker and Kubernetes to deploy and run our applications, and use etcd to save meta data for service discovery, Kubernetes, etc.
## Hyper.sh
- *Application*: Kubernetes, distributed locks, etc.
- *Launched*: April 2016
- *Cluster Size*: 1 cluster of 3 members
- *Order of Data Size*: 10s of MB
- *Operator*: Hyper.sh
- *Environment*: Baremetal
- *Backups*: None, all data can be recreated if necessary.
In [hyper.sh][hyper.sh], the container service is backed by [hypernetes][hypernetes], a multi-tenant kubernetes distro. Moreover, we use etcd to coordinate the multiple management services and store global metadata.
[hypernetes]:https://github.com/hyperhq/hypernetes
[Hyper.sh]:https://www.hyper.sh
## Meitu
- *Application*: system configuration for services, service discovery, kubernetes in test environment
- *Launched*: October 2015
- *Cluster Size*: 1 cluster of 3 members
- *Order of Data Size*: megabytes
- *Operator*: Meitu, hxj@meitu.com, [shafreeck][shafreeck]
- *Environment*: Bare Metal
- *Backups*: None, all data can be recreated if necessary.
[shafreeck]:https://github.com/shafreeck
## Grab
- *Application*: system configuration for services, service discovery
- *Launched*: June 2016
- *Cluster Size*: 1 cluster of 7 members
- *Order of Data Size*: megabytes
- *Operator*: Grab, [taxitan][taxitan], [reterVision][reterVision]
- *Environment*: AWS
- *Backups*: None, all data can be recreated if necessary.
[taxitan]:https://github.com/taxitan
[reterVision]:https://github.com/reterVision
## DaoCloud.io
- *Application*: container management
- *Launched*: Sep. 2015
- *Cluster Size*: 1000+ deployments, each deployment contains a 3 node cluster.
- *Order of Data Size*: 100s of Megabytes
- *Operator*: daocloud.io
- *Environment*: Baremetal and virtual machines
- *Backups*: None, all data can be recreated if necessary.
In [DaoCloud][DaoCloud], we use Docker and Swarm to deploy and run our applications, and we use etcd to save metadata for service discovery.
[DaoCloud]:https://www.daocloud.io
## Branch.io
- *Application*: Kubernetes
- *Launched*: April 2016
- *Cluster Size*: Multiple clusters, multiple sizes
- *Order of Data Size*: 100s of Megabytes
- *Operator*: branch.io
- *Environment*: AWS, Kubernetes
- *Backups*: EBS volume backups
At [Branch][branch], we use kubernetes heavily as our core microservice platform for staging and production.
[branch]: https://branch.io
## Baidu Waimai
- *Application*: SkyDNS, Kubernetes, UDC, CMDB and other distributed systems
- *Launched*: April 2016
- *Cluster Size*: 3 clusters of 5 members
- *Order of Data Size*: several gigabytes
- *Operator*: Baidu Waimai Operations Department
- *Environment*: CentOS 6.5
- *Backups*: backup scripts
## Salesforce.com
- *Application*: Kubernetes
- *Launched*: Jan 2017
- *Cluster Size*: Multiple clusters of 3 members
- *Order of Data Size*: 100s of Megabytes
- *Operator*: Salesforce.com (krmayankk@github)
- *Environment*: BareMetal
- *Backups*: None, all data can be recreated
## Hosted Graphite
- *Application*: Service discovery, locking, ephemeral application data
- *Launched*: January 2017
- *Cluster Size*: 2 clusters of 7 members
- *Order of Data Size*: Megabytes
- *Operator*: Hosted Graphite (sre@hostedgraphite.com)
- *Environment*: Bare Metal
- *Backups*: None, all data is considered ephemeral.

View File

@@ -1,6 +1,6 @@
# Reporting bugs
If any part of the etcd project has bugs or documentation mistakes, please let us know by [opening an issue][issue]. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.
If any part of the etcd project has bugs or documentation mistakes, please let us know by [opening an issue][etcd-issue]. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.
To make the bug report accurate and easy to understand, please try to create bug reports that are:

View File

@@ -8,9 +8,11 @@ Before [starting an upgrade](#upgrade-procedure), read through the rest of this
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/coreos/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but there is no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. It also prevents accidental v3 data loss (e.g. the `db` file might have been moved). etcd requires that a post-v3-migration server only run with v3 data. Do not upgrade to newer v3 versions until the v3.0 server contains v3 data.
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.0, the running cluster must be 2.3 or greater. If it's before 2.3, please upgrade to [2.3](https://github.com/coreos/etcd/releases/tag/v2.3.0) before upgrading to 3.0.
To upgrade an existing etcd deployment to 3.0, the running cluster must be 2.3 or greater. If it's before 2.3, please upgrade to [2.3](https://github.com/coreos/etcd/releases/tag/v2.3.8) before upgrading to 3.0.
Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl cluster-health` command before proceeding.
@@ -52,7 +54,7 @@ member 8211f1d0f64f3269 is healthy: got healthy result from http://localhost:123
cluster is healthy
$ curl http://localhost:2379/version
{"etcdserver":"2.3.x","etcdcluster":"2.3.0"}
{"etcdserver":"2.3.x","etcdcluster":"2.3.8"}
```
#### 2. Stop the existing etcd process
@@ -116,4 +118,14 @@ $ ETCDCTL_API=3 etcdctl endpoint health
127.0.0.1:22379 is healthy: successfully committed proposal: took = 18.513301ms
```
## Further considerations
- etcdctl environment variables have been updated. If `ETCDCTL_API=2 etcdctl cluster-health` works properly but `ETCDCTL_API=3 etcdctl endpoint health` responds with `Error: grpc: timed out when dialing`, be sure to use the [new variable names](https://github.com/coreos/etcd/tree/master/etcdctl#etcdctl).
## Known Issues
- etcd < v3.1 does not work properly if built with Go > v1.7. See [Issue 6951](https://github.com/coreos/etcd/issues/6951) for additional information.
- If an error such as `transport: http2Client.notifyError got notified that the client transport was broken unexpected EOF.` shows up in the etcd server logs, be sure etcd is a pre-built release or built with (etcd v3.1+ & go v1.7+) or (etcd <v3.1 & go v1.6.x).
- Adding a v3 node to a v2.3 cluster during upgrades is not supported and could trigger panics. See [Issue 7429](https://github.com/coreos/etcd/issues/7429) for additional information. Mixed versions of etcd members are only allowed during v3 migration. Finish upgrades before making any membership changes.
[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev

View File

@@ -8,9 +8,20 @@ Before [starting an upgrade](#upgrade-procedure), read through the rest of this
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/coreos/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but there is no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. It also prevents accidental v3 data loss (e.g. the `db` file might have been moved). etcd requires that a post-v3-migration server only run with v3 data. Do not upgrade to newer v3 versions until the v3.0 server contains v3 data.
#### Monitoring
The following metrics from v3.0.x have been deprecated in favor of [go-grpc-prometheus](https://github.com/grpc-ecosystem/go-grpc-prometheus) (a registration sketch follows the list):
- `etcd_grpc_requests_total`
- `etcd_grpc_requests_failed_total`
- `etcd_grpc_active_streams`
- `etcd_grpc_unary_requests_duration_seconds`
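
If dashboards depended on the removed metrics, equivalent RPC metrics come from instrumenting the gRPC server with go-grpc-prometheus. A hedged sketch of that wiring for a standalone gRPC server; etcd performs this internally, and the ports here are illustrative:

```go
package main

import (
	"net"
	"net/http"

	grpc_prometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
	"google.golang.org/grpc"
)

func main() {
	// Count and time every unary and streaming RPC.
	srv := grpc.NewServer(
		grpc.UnaryInterceptor(grpc_prometheus.UnaryServerInterceptor),
		grpc.StreamInterceptor(grpc_prometheus.StreamServerInterceptor),
	)
	// ... register gRPC services on srv here ...
	grpc_prometheus.Register(srv) // pre-initialize metrics for registered services

	// Expose the metrics on /metrics, as etcd does on its client port.
	http.Handle("/metrics", promhttp.Handler())
	go http.ListenAndServe(":9090", nil)

	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		panic(err)
	}
	srv.Serve(lis)
}
```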
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.1, the running cluster must be 3.0 or greater. If it's before 3.0, please upgrade to [3.0](https://github.com/coreos/etcd/releases/tag/v3.0.16) before upgrading to 3.1.
To upgrade an existing etcd deployment to 3.1, the running cluster must be 3.0 or greater. If it's before 3.0, please [upgrade to 3.0](upgrade_3_0.md) before upgrading to 3.1.
Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.

View File

@@ -0,0 +1,338 @@
## Upgrade etcd from 3.1 to 3.2
In the general case, upgrading from etcd 3.1 to 3.2 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.1 processes and replace them with etcd v3.2 processes
- after running all v3.2 processes, new features in v3.2 are available to the cluster
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/coreos/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but there is no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. It also prevents accidental v3 data loss (e.g. the `db` file might have been moved). etcd requires that a post-v3-migration server only run with v3 data. Do not upgrade to newer v3 versions until the v3.0 server contains v3 data.
Highlighted breaking changes in 3.2.
#### Change in default `snapshot-count` value
The default value of `--snapshot-count` has [changed from 10,000 to 100,000](https://github.com/coreos/etcd/pull/7160). A higher snapshot count means etcd holds Raft entries in memory longer before discarding old entries; it is a trade-off between less frequent snapshotting and [higher memory usage](https://github.com/kubernetes/kubernetes/issues/60589#issuecomment-371977156). A higher `--snapshot-count` manifests as higher memory usage, while retaining more Raft entries helps the availability of slow followers: the leader can still replicate its log to followers, rather than forcing followers to rebuild their stores from leader snapshots.
#### Change in gRPC dependency (>=3.2.10)
3.2.10 or later now requires [grpc/grpc-go](https://github.com/grpc/grpc-go/releases) `v1.7.5` (<=3.2.9 requires `v1.2.1`).
##### Deprecate `grpclog.Logger`
`grpclog.Logger` has been deprecated in favor of [`grpclog.LoggerV2`](https://github.com/grpc/grpc-go/blob/master/grpclog/loggerv2.go). `clientv3.Logger` is now `grpclog.LoggerV2`.
Before
```go
import "github.com/coreos/etcd/clientv3"
clientv3.SetLogger(log.New(os.Stderr, "grpc: ", 0))
```
After
```go
import "github.com/coreos/etcd/clientv3"
import "google.golang.org/grpc/grpclog"
clientv3.SetLogger(grpclog.NewLoggerV2(os.Stderr, os.Stderr, os.Stderr))
// log.New above cannot be used (it does not implement the grpclog.LoggerV2 interface)
```
##### Deprecate `grpc.ErrClientConnTimeout`
Previously, a `grpc.ErrClientConnTimeout` error was returned on client dial time-outs. 3.2 instead returns `context.DeadlineExceeded` (see [#8504](https://github.com/coreos/etcd/issues/8504)).
Before
```go
// expect dial time-out on ipv4 blackhole
_, err := clientv3.New(clientv3.Config{
Endpoints: []string{"http://254.0.0.1:12345"},
DialTimeout: 2 * time.Second
})
if err == grpc.ErrClientConnTimeout {
// handle errors
}
```
After
```go
_, err := clientv3.New(clientv3.Config{
Endpoints: []string{"http://254.0.0.1:12345"},
DialTimeout: 2 * time.Second
})
if err == context.DeadlineExceeded {
// handle errors
}
```
#### Change in maximum request size limits (>=3.2.10)
3.2.10 and 3.2.11 allow custom request size limits on the server side. >=3.2.12 allows custom request size limits for both server and **client side**. In previous versions (v3.2.10, v3.2.11), client response size was limited to only 4 MiB.
Server-side request limits can be configured with `--max-request-bytes` flag:
```bash
# limits request size to 1.5 KiB
etcd --max-request-bytes 1536
# client writes exceeding 1.5 KiB will be rejected
etcdctl put foo [LARGE VALUE...]
# etcdserver: request is too large
```
Or configure `embed.Config.MaxRequestBytes` field:
```go
import "github.com/coreos/etcd/embed"
import "github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
// limit requests to 5 MiB
cfg := embed.NewConfig()
cfg.MaxRequestBytes = 5 * 1024 * 1024
// client writes exceeding 5 MiB will be rejected
_, err := cli.Put(ctx, "foo", [LARGE VALUE...])
err == rpctypes.ErrRequestTooLarge
```
**If not specified, server-side limit defaults to 1.5 MiB**.
Client-side request limits must be configured based on server-side limits.
```bash
# limits request size to 1 MiB
etcd --max-request-bytes 1048576
```
```go
import "github.com/coreos/etcd/clientv3"
cli, _ := clientv3.New(clientv3.Config{
Endpoints: []string{"127.0.0.1:2379"},
MaxCallSendMsgSize: 2 * 1024 * 1024,
MaxCallRecvMsgSize: 3 * 1024 * 1024,
})
// client writes exceeding "--max-request-bytes" will be rejected from etcd server
_, err := cli.Put(ctx, "foo", strings.Repeat("a", 1*1024*1024+5))
err == rpctypes.ErrRequestTooLarge
// client writes exceeding "MaxCallSendMsgSize" will be rejected from client-side
_, err = cli.Put(ctx, "foo", strings.Repeat("a", 5*1024*1024))
err.Error() == "rpc error: code = ResourceExhausted desc = grpc: trying to send message larger than max (5242890 vs. 2097152)"
// some writes under limits
for i := range []int{0,1,2,3,4} {
_, err = cli.Put(ctx, fmt.Sprintf("foo%d", i), strings.Repeat("a", 1*1024*1024-500))
if err != nil {
panic(err)
}
}
// client reads exceeding "MaxCallRecvMsgSize" will be rejected from client-side
_, err = cli.Get(ctx, "foo", clientv3.WithPrefix())
err.Error() == "rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5240509 vs. 3145728)"
```
**If not specified, client-side send limit defaults to 2 MiB (1.5 MiB + gRPC overhead bytes) and receive limit to `math.MaxInt32`**. Please see [clientv3 godoc](https://godoc.org/github.com/coreos/etcd/clientv3#Config) for more detail.
#### Change in raw gRPC client wrappers
3.2.12 or later changes the function signatures of the `clientv3` gRPC client wrappers. This change was needed to support [custom `grpc.CallOption` on message size limits](https://github.com/coreos/etcd/pull/9047). An adaptation sketch follows the diff.
Before and after
```diff
-func NewKVFromKVClient(remote pb.KVClient) KV {
+func NewKVFromKVClient(remote pb.KVClient, c *Client) KV {
-func NewClusterFromClusterClient(remote pb.ClusterClient) Cluster {
+func NewClusterFromClusterClient(remote pb.ClusterClient, c *Client) Cluster {
-func NewLeaseFromLeaseClient(remote pb.LeaseClient, keepAliveTimeout time.Duration) Lease {
+func NewLeaseFromLeaseClient(remote pb.LeaseClient, c *Client, keepAliveTimeout time.Duration) Lease {
-func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient) Maintenance {
+func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient, c *Client) Maintenance {
-func NewWatchFromWatchClient(wc pb.WatchClient) Watcher {
+func NewWatchFromWatchClient(wc pb.WatchClient, c *Client) Watcher {
```
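
Callers adapt by threading the `*clientv3.Client` through, as in this hedged sketch; the helper name is hypothetical:

```go
import (
	"github.com/coreos/etcd/clientv3"
	pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
)

// newRawKV is a hypothetical helper: the extra *clientv3.Client argument lets
// the wrapper apply the client's per-call options, such as message size limits.
func newRawKV(cli *clientv3.Client) clientv3.KV {
	return clientv3.NewKVFromKVClient(pb.NewKVClient(cli.ActiveConnection()), cli)
}
```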
#### Change in `clientv3.Lease.TimeToLive` API
Previously, `clientv3.Lease.TimeToLive` API returned `lease.ErrLeaseNotFound` on non-existent lease ID. 3.2 instead returns TTL=-1 in its response and no error (see [#7305](https://github.com/coreos/etcd/pull/7305)).
Before
```go
// when leaseID does not exist
resp, err := TimeToLive(ctx, leaseID)
resp == nil
err == lease.ErrLeaseNotFound
```
After
```go
// when leaseID does not exist
resp, err := TimeToLive(ctx, leaseID)
resp.TTL == -1
err == nil
```
#### Change in `clientv3.NewFromConfigFile`
`clientv3.NewFromConfigFile` has moved to `yaml.NewConfig` in the `clientv3/yaml` package.
Before
```go
import "github.com/coreos/etcd/clientv3"
clientv3.NewFromConfigFile
```
After
```go
import clientv3yaml "github.com/coreos/etcd/clientv3/yaml"
clientv3yaml.NewConfig
```
#### Change in `--listen-peer-urls` and `--listen-client-urls`
3.2 now rejects domain names for `--listen-peer-urls` and `--listen-client-urls` (3.1 only prints out warnings), since a domain name is invalid for network interface binding. Make sure that those URLs are properly formatted as `scheme://IP:port`.
See [issue #6336](https://github.com/coreos/etcd/issues/6336) for more context.
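
A hedged sketch of a conforming configuration with the embed package; the addresses are illustrative, and `LPUrls`/`LCUrls` are assumed to be the listen-URL fields of `embed.Config` in this release line:

```go
import (
	"net/url"

	"github.com/coreos/etcd/embed"
)

func listenConfig() (*embed.Config, error) {
	cfg := embed.NewConfig()
	// Bind to a concrete IP, scheme://IP:port; a domain name such as
	// https://etcd.example.com:2379 is rejected by 3.2.
	peerURL, err := url.Parse("https://10.0.1.10:2380")
	if err != nil {
		return nil, err
	}
	clientURL, err := url.Parse("https://10.0.1.10:2379")
	if err != nil {
		return nil, err
	}
	cfg.LPUrls = []url.URL{*peerURL}
	cfg.LCUrls = []url.URL{*clientURL}
	return cfg, nil
}
```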
### Server upgrade checklists
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.2, the running cluster must be 3.1 or greater. If it's before 3.1, please [upgrade to 3.1](upgrade_3_1.md) before upgrading to 3.2.
Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.
#### Preparation
Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
Before beginning, [backup the etcd data](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](../v2/admin_guide.md#backing-up-the-datastore).
#### Mixed versions
While upgrading, an etcd cluster supports mixed versions of etcd members, and operates with the protocol of the lowest common version. The cluster is only considered upgraded once all of its members are upgraded to version 3.2. Internally, etcd members negotiate with each other to determine the overall cluster version, which controls the reported version and the supported features.
#### Limitations
Note: If the cluster only has v3 data and no v2 data, it is not subject to this limitation.
If the cluster is serving a v2 data set larger than 50MB, each newly upgraded member may take up to two minutes to catch up with the existing cluster. Check the size of a recent snapshot to estimate the total data size. In other words, it is safest to wait for 2 minutes between upgrading each member.
For a much larger total data size, 100MB or more, this one-time process might take even more time. Administrators of very large etcd clusters of this magnitude can feel free to contact the [etcd team][etcd-contact] before upgrading, and we'll be happy to provide advice on the procedure.
#### Downgrade
If all members have been upgraded to v3.2, the cluster will be upgraded to v3.2, and downgrading from this completed state is **not possible**. If any single member is still v3.1, however, the cluster and its operations remain "v3.1", and it is possible from this mixed cluster state to return to using a v3.1 etcd binary on all members.
Please [backup the data directory](../op-guide/maintenance.md#snapshot-backup) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
### Upgrade procedure
This example shows how to upgrade a 3-member v3.1 etcd cluster running on a local machine.
#### 1. Check upgrade requirements
Is the cluster healthy and running v3.1.x?
```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:2379 is healthy: successfully committed proposal: took = 6.600684ms
localhost:22379 is healthy: successfully committed proposal: took = 8.540064ms
localhost:32379 is healthy: successfully committed proposal: took = 8.763432ms
$ curl http://localhost:2379/version
{"etcdserver":"3.1.7","etcdcluster":"3.1.0"}
```
#### 2. Stop the existing etcd process
When each etcd process is stopped, expected errors will be logged by other cluster members. This is normal since a cluster member connection has been (temporarily) broken:
```
2017-04-27 14:13:31.491746 I | raft: c89feb932daef420 [term 3] received MsgTimeoutNow from 6d4f535bae3ab960 and starts an election to get leadership.
2017-04-27 14:13:31.491769 I | raft: c89feb932daef420 became candidate at term 4
2017-04-27 14:13:31.491788 I | raft: c89feb932daef420 received MsgVoteResp from c89feb932daef420 at term 4
2017-04-27 14:13:31.491797 I | raft: c89feb932daef420 [logterm: 3, index: 9] sent MsgVote request to 6d4f535bae3ab960 at term 4
2017-04-27 14:13:31.491805 I | raft: c89feb932daef420 [logterm: 3, index: 9] sent MsgVote request to 9eda174c7df8a033 at term 4
2017-04-27 14:13:31.491815 I | raft: raft.node: c89feb932daef420 lost leader 6d4f535bae3ab960 at term 4
2017-04-27 14:13:31.524084 I | raft: c89feb932daef420 received MsgVoteResp from 6d4f535bae3ab960 at term 4
2017-04-27 14:13:31.524108 I | raft: c89feb932daef420 [quorum:2] has received 2 MsgVoteResp votes and 0 vote rejections
2017-04-27 14:13:31.524123 I | raft: c89feb932daef420 became leader at term 4
2017-04-27 14:13:31.524136 I | raft: raft.node: c89feb932daef420 elected leader c89feb932daef420 at term 4
2017-04-27 14:13:31.592650 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream MsgApp v2 reader)
2017-04-27 14:13:31.592825 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream Message reader)
2017-04-27 14:13:31.693275 E | rafthttp: failed to dial 6d4f535bae3ab960 on stream Message (dial tcp [::1]:2380: getsockopt: connection refused)
2017-04-27 14:13:31.693289 I | rafthttp: peer 6d4f535bae3ab960 became inactive
2017-04-27 14:13:31.936678 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream Message writer)
```
It's a good idea at this point to [backup the etcd data](../op-guide/maintenance.md#snapshot-backup) to provide a downgrade path should any problems occur:
```
$ etcdctl snapshot save backup.db
```
#### 3. Drop-in etcd v3.2 binary and start the new etcd process
The new v3.2 etcd will publish its information to the cluster:
```
2017-04-27 14:14:25.363225 I | etcdserver: published {Name:s1 ClientURLs:[http://localhost:2379]} to cluster a9ededbffcb1b1f1
```
Verify that each member, and then the entire cluster, becomes healthy with the new v3.2 etcd binary:
```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:22379 is healthy: successfully committed proposal: took = 5.540129ms
localhost:32379 is healthy: successfully committed proposal: took = 7.321771ms
localhost:2379 is healthy: successfully committed proposal: took = 10.629901ms
```
Upgraded members will log warnings like the following until the entire cluster is upgraded. This is expected and will cease after all etcd cluster members are upgraded to v3.2:
```
2017-04-27 14:15:17.071804 W | etcdserver: member c89feb932daef420 has a higher version 3.2.0
2017-04-27 14:15:21.073110 W | etcdserver: the local etcd version 3.1.7 is not up-to-date
2017-04-27 14:15:21.073142 W | etcdserver: member 6d4f535bae3ab960 has a higher version 3.2.0
2017-04-27 14:15:21.073157 W | etcdserver: the local etcd version 3.1.7 is not up-to-date
2017-04-27 14:15:21.073164 W | etcdserver: member c89feb932daef420 has a higher version 3.2.0
```
#### 4. Repeat step 2 to step 3 for all other members
#### 5. Finish
When all members are upgraded, the cluster will report upgrading to 3.2 successfully:
```
2017-04-27 14:15:54.536901 N | etcdserver/membership: updated the cluster version from 3.1 to 3.2
2017-04-27 14:15:54.537035 I | etcdserver/api: enabled capabilities for version 3.2
```
```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:2379 is healthy: successfully committed proposal: took = 2.312897ms
localhost:22379 is healthy: successfully committed proposal: took = 2.553476ms
localhost:32379 is healthy: successfully committed proposal: took = 2.517902ms
```
[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev

View File

@@ -0,0 +1,476 @@
## Upgrade etcd from 3.2 to 3.3
In the general case, upgrading from etcd 3.2 to 3.3 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.2 processes and replace them with etcd v3.3 processes
- after running all v3.3 processes, new features in v3.3 are available to the cluster
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/coreos/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but there is no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. It also prevents accidental v3 data loss (e.g. the `db` file might have been moved). etcd requires that a post-v3-migration server only run with v3 data. Do not upgrade to newer v3 versions until the v3.0 server contains v3 data.
Highlighted breaking changes in 3.3.
#### Change in `etcdserver.EtcdServer` struct
`etcdserver.EtcdServer` has changed the type of its server-config member field from `*etcdserver.ServerConfig` to `etcdserver.ServerConfig`. Accordingly, `etcdserver.NewServer` now takes `etcdserver.ServerConfig` instead of `*etcdserver.ServerConfig`.
Before and after (e.g. [k8s.io/kubernetes/test/e2e_node/services/etcd.go](https://github.com/kubernetes/kubernetes/blob/release-1.8/test/e2e_node/services/etcd.go#L50-L55))
```diff
import "github.com/coreos/etcd/etcdserver"
type EtcdServer struct {
*etcdserver.EtcdServer
- config *etcdserver.ServerConfig
+ config etcdserver.ServerConfig
}
func NewEtcd(dataDir string) *EtcdServer {
- config := &etcdserver.ServerConfig{
+ config := etcdserver.ServerConfig{
DataDir: dataDir,
...
}
return &EtcdServer{config: config}
}
func (e *EtcdServer) Start() error {
var err error
e.EtcdServer, err = etcdserver.NewServer(e.config)
...
```
#### Change in `embed.EtcdServer` struct
Field `LogOutput` is added to `embed.Config`:
```diff
package embed
type Config struct {
Debug bool `json:"debug"`
LogPkgLevels string `json:"log-package-levels"`
+ LogOutput string `json:"log-output"`
...
```
Before v3.3, gRPC server warnings were logged in etcdserver:
```
WARNING: 2017/11/02 11:35:51 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {localhost:2379 <nil>}
WARNING: 2017/11/02 11:35:51 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {localhost:2379 <nil>}
```
From v3.3, gRPC server logs are disabled by default.
```go
import "github.com/coreos/etcd/embed"
cfg := &embed.Config{Debug: false}
cfg.SetupLogging()
```
Set `embed.Config.Debug` field to `true` to enable gRPC server logs.
#### Change in `/health` endpoint response
Previously, `[endpoint]:[client-port]/health` returned a manually marshaled JSON value. 3.3 now defines an [`etcdhttp.Health`](https://godoc.org/github.com/coreos/etcd/etcdserver/api/etcdhttp#Health) struct.
Note that in v3.3.0-rc.0, v3.3.0-rc.1, and v3.3.0-rc.2, `etcdhttp.Health` had a boolean-typed `"health"` field and an `"errors"` field. For backward compatibility, we reverted the `"health"` field to `string` type and removed the `"errors"` field. Further health information will be provided in separate APIs.
```bash
$ curl http://localhost:2379/health
{"health":"true"}
```
#### Change in gRPC gateway HTTP endpoints (replaced `/v3alpha` with `/v3beta`)
Before
```bash
curl -L http://localhost:2379/v3alpha/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```
After
```bash
curl -L http://localhost:2379/v3beta/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```
Requests to `/v3alpha` endpoints will redirect to `/v3beta`, and `/v3alpha` will be removed in the 3.4 release.
#### Change in maximum request size limits
3.3 now allows custom request size limits for both server and **client side**. In previous versions (v3.2.10, v3.2.11), client response size was limited to only 4 MiB.
Server-side request limits can be configured with `--max-request-bytes` flag:
```bash
# limits request size to 1.5 KiB
etcd --max-request-bytes 1536
# client writes exceeding 1.5 KiB will be rejected
etcdctl put foo [LARGE VALUE...]
# etcdserver: request is too large
```
Or configure `embed.Config.MaxRequestBytes` field:
```go
import "github.com/coreos/etcd/embed"
import "github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
// limit requests to 5 MiB
cfg := embed.NewConfig()
cfg.MaxRequestBytes = 5 * 1024 * 1024
// client writes exceeding 5 MiB will be rejected
_, err := cli.Put(ctx, "foo", [LARGE VALUE...])
err == rpctypes.ErrRequestTooLarge
```
**If not specified, server-side limit defaults to 1.5 MiB**.
Client-side request limits must be configured based on server-side limits.
```bash
# limits request size to 1 MiB
etcd --max-request-bytes 1048576
```
```go
import "github.com/coreos/etcd/clientv3"
cli, _ := clientv3.New(clientv3.Config{
Endpoints: []string{"127.0.0.1:2379"},
MaxCallSendMsgSize: 2 * 1024 * 1024,
MaxCallRecvMsgSize: 3 * 1024 * 1024,
})
// client writes exceeding "--max-request-bytes" will be rejected from etcd server
_, err := cli.Put(ctx, "foo", strings.Repeat("a", 1*1024*1024+5))
err == rpctypes.ErrRequestTooLarge
// client writes exceeding "MaxCallSendMsgSize" will be rejected from client-side
_, err = cli.Put(ctx, "foo", strings.Repeat("a", 5*1024*1024))
err.Error() == "rpc error: code = ResourceExhausted desc = grpc: trying to send message larger than max (5242890 vs. 2097152)"
// some writes under limits
for i := range []int{0,1,2,3,4} {
_, err = cli.Put(ctx, fmt.Sprintf("foo%d", i), strings.Repeat("a", 1*1024*1024-500))
if err != nil {
panic(err)
}
}
// client reads exceeding "MaxCallRecvMsgSize" will be rejected from client-side
_, err = cli.Get(ctx, "foo", clientv3.WithPrefix())
err.Error() == "rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5240509 vs. 3145728)"
```
**If not specified, client-side send limit defaults to 2 MiB (1.5 MiB + gRPC overhead bytes) and receive limit to `math.MaxInt32`**. Please see [clientv3 godoc](https://godoc.org/github.com/coreos/etcd/clientv3#Config) for more detail.
#### Change in raw gRPC client wrappers
3.3 changes the function signatures of the `clientv3` gRPC client wrappers. This change was needed to support [custom `grpc.CallOption` on message size limits](https://github.com/coreos/etcd/pull/9047).
Before and after
```diff
-func NewKVFromKVClient(remote pb.KVClient) KV {
+func NewKVFromKVClient(remote pb.KVClient, c *Client) KV {
-func NewClusterFromClusterClient(remote pb.ClusterClient) Cluster {
+func NewClusterFromClusterClient(remote pb.ClusterClient, c *Client) Cluster {
-func NewLeaseFromLeaseClient(remote pb.LeaseClient, keepAliveTimeout time.Duration) Lease {
+func NewLeaseFromLeaseClient(remote pb.LeaseClient, c *Client, keepAliveTimeout time.Duration) Lease {
-func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient) Maintenance {
+func NewMaintenanceFromMaintenanceClient(remote pb.MaintenanceClient, c *Client) Maintenance {
-func NewWatchFromWatchClient(wc pb.WatchClient) Watcher {
+func NewWatchFromWatchClient(wc pb.WatchClient, c *Client) Watcher {
```
#### Change in clientv3 `Snapshot` API error type
Previously, the clientv3 `Snapshot` API returned a raw [`grpc/*status.statusError`] type error. v3.3 now translates those errors to corresponding public error types, to be consistent with other APIs.
Before
```go
import "context"
// reading snapshot with canceled context should error out
ctx, cancel := context.WithCancel(context.Background())
rc, _ := cli.Snapshot(ctx)
cancel()
_, err := io.Copy(f, rc)
err.Error() == "rpc error: code = Canceled desc = context canceled"
// reading snapshot with deadline exceeded should error out
ctx, cancel = context.WithTimeout(context.Background(), time.Second)
defer cancel()
rc, _ = cli.Snapshot(ctx)
time.Sleep(2 * time.Second)
_, err = io.Copy(f, rc)
err.Error() == "rpc error: code = DeadlineExceeded desc = context deadline exceeded"
```
After
```go
import "context"
// reading snapshot with canceled context should error out
ctx, cancel := context.WithCancel(context.Background())
rc, _ := cli.Snapshot(ctx)
cancel()
_, err := io.Copy(f, rc)
err == context.Canceled
// reading snapshot with deadline exceeded should error out
ctx, cancel = context.WithTimeout(context.Background(), time.Second)
defer cancel()
rc, _ = cli.Snapshot(ctx)
time.Sleep(2 * time.Second)
_, err = io.Copy(f, rc)
err == context.DeadlineExceeded
```
#### Change in `etcdctl lease timetolive` command output
Previously, the `lease timetolive LEASE_ID` command printed `-1s` for remaining seconds on an expired lease. 3.3 now outputs clearer messages.
Before
```bash
lease 2d8257079fa1bc0c granted with TTL(0s), remaining(-1s)
```
After
```bash
lease 2d8257079fa1bc0c already expired
```
#### Change in `golang.org/x/net/context` imports
`clientv3` has deprecated `golang.org/x/net/context`. If a project vendors `golang.org/x/net/context` in other code (e.g. etcd generated protocol buffer code) and imports `github.com/coreos/etcd/clientv3`, it requires Go 1.9+ to compile.
Before
```go
import "golang.org/x/net/context"
cli.Put(context.Background(), "f", "v")
```
After
```go
import "context"
cli.Put(context.Background(), "f", "v")
```
#### Change in gRPC dependency
3.3 now requires [grpc/grpc-go](https://github.com/grpc/grpc-go/releases) `v1.7.5`.
##### Deprecate `grpclog.Logger`
`grpclog.Logger` has been deprecated in favor of [`grpclog.LoggerV2`](https://github.com/grpc/grpc-go/blob/master/grpclog/loggerv2.go). `clientv3.Logger` is now `grpclog.LoggerV2`.
Before
```go
import "github.com/coreos/etcd/clientv3"
clientv3.SetLogger(log.New(os.Stderr, "grpc: ", 0))
```
After
```go
import "github.com/coreos/etcd/clientv3"
import "google.golang.org/grpc/grpclog"
clientv3.SetLogger(grpclog.NewLoggerV2(os.Stderr, os.Stderr, os.Stderr))
// log.New above cannot be used (it does not implement the grpclog.LoggerV2 interface)
```
##### Deprecate `grpc.ErrClientConnTimeout`
Previously, a `grpc.ErrClientConnTimeout` error was returned on client dial time-outs. 3.3 instead returns `context.DeadlineExceeded` (see [#8504](https://github.com/coreos/etcd/issues/8504)).
Before
```go
// expect dial time-out on ipv4 blackhole
_, err := clientv3.New(clientv3.Config{
Endpoints: []string{"http://254.0.0.1:12345"},
DialTimeout: 2 * time.Second,
})
if err == grpc.ErrClientConnTimeout {
// handle errors
}
```
After
```go
_, err := clientv3.New(clientv3.Config{
Endpoints: []string{"http://254.0.0.1:12345"},
DialTimeout: 2 * time.Second,
})
if err == context.DeadlineExceeded {
// handle errors
}
```
#### Change in official container registry
etcd now uses [`gcr.io/etcd-development/etcd`](https://gcr.io/etcd-development/etcd) as its primary container registry, with [`quay.io/coreos/etcd`](https://quay.io/coreos/etcd) as a secondary.
Before
```bash
docker pull quay.io/coreos/etcd:v3.2.5
```
After
```bash
docker pull gcr.io/etcd-development/etcd:v3.3.0
```
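Both registries publish the same release images, so a pull can fall back to the secondary registry if needed (tags are illustrative):
```bash
docker pull gcr.io/etcd-development/etcd:v3.3.0 || docker pull quay.io/coreos/etcd:v3.3.0
```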
### Server upgrade checklists
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.3, the running cluster must be 3.2 or greater. If it's before 3.2, please [upgrade to 3.2](upgrade_3_2.md) before upgrading to 3.3.
Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.
#### Preparation
Always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
Before beginning, [backup the etcd data](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to the existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](../v2/admin_guide.md#backing-up-the-datastore).
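As a sketch, both stores can be backed up before starting; paths here are illustrative, and the v2 `backup` command is only needed if the cluster still holds v2 data:
```bash
# v3 data: stream a snapshot from a running member
ETCDCTL_API=3 etcdctl snapshot save backup.db
# v2 data: copy the v2 store out of the member's data directory
ETCDCTL_API=2 etcdctl backup --data-dir /var/lib/etcd --backup-dir /tmp/etcd-v2-backup
```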
#### Mixed versions
While upgrading, an etcd cluster supports mixed versions of etcd members, and operates with the protocol of the lowest common version. The cluster is only considered upgraded once all of its members are upgraded to version 3.3. Internally, etcd members negotiate with each other to determine the overall cluster version, which controls the reported version and the supported features.
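The negotiated version is visible on any member. While the cluster is mixed, the `etcdcluster` field stays at the lower version, even on members that already run the new binary (output is illustrative):
```bash
curl http://localhost:2379/version
# {"etcdserver":"3.3.0","etcdcluster":"3.2.0"}
```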
#### Limitations
Note: If the cluster only has v3 data and no v2 data, it is not subject to this limitation.
If the cluster is serving a v2 data set larger than 50MB, each newly upgraded member may take up to two minutes to catch up with the existing cluster. Check the size of a recent snapshot to estimate the total data size. In other words, it is safest to wait for 2 minutes between upgrading each member.
For a much larger total data size, 100MB or more, this one-time process might take even longer. Administrators of very large etcd clusters of this magnitude should feel free to contact the [etcd team][etcd-contact] before upgrading, and we'll be happy to provide advice on the procedure.
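One way to check the size of a recent snapshot is `etcdctl snapshot status`, sketched below with the assumed snapshot file `backup.db`:
```bash
ETCDCTL_API=3 etcdctl snapshot status backup.db --write-out=table
# prints the snapshot hash, revision, total keys, and total size
```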
#### Downgrade
If all members have been upgraded to v3.3, the cluster will be upgraded to v3.3, and downgrading from this completed state is **not possible**. If any single member is still v3.2, however, the cluster and its operations remain "v3.2", and it is possible from this mixed cluster state to return to using a v3.2 etcd binary on all members.
Please [backup the data directory](../op-guide/maintenance.md#snapshot-backup) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
### Upgrade procedure
This example shows how to upgrade a 3-member v3.2 etcd cluster running on a local machine.
#### 1. Check upgrade requirements
Is the cluster healthy and running v3.2.x?
```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:2379 is healthy: successfully committed proposal: took = 6.600684ms
localhost:22379 is healthy: successfully committed proposal: took = 8.540064ms
localhost:32379 is healthy: successfully committed proposal: took = 8.763432ms
$ curl http://localhost:2379/version
{"etcdserver":"3.2.7","etcdcluster":"3.2.0"}
```
#### 2. Stop the existing etcd process
When each etcd process is stopped, expected errors will be logged by other cluster members. This is normal since a cluster member connection has been (temporarily) broken:
```
14:13:31.491746 I | raft: c89feb932daef420 [term 3] received MsgTimeoutNow from 6d4f535bae3ab960 and starts an election to get leadership.
14:13:31.491769 I | raft: c89feb932daef420 became candidate at term 4
14:13:31.491788 I | raft: c89feb932daef420 received MsgVoteResp from c89feb932daef420 at term 4
14:13:31.491797 I | raft: c89feb932daef420 [logterm: 3, index: 9] sent MsgVote request to 6d4f535bae3ab960 at term 4
14:13:31.491805 I | raft: c89feb932daef420 [logterm: 3, index: 9] sent MsgVote request to 9eda174c7df8a033 at term 4
14:13:31.491815 I | raft: raft.node: c89feb932daef420 lost leader 6d4f535bae3ab960 at term 4
14:13:31.524084 I | raft: c89feb932daef420 received MsgVoteResp from 6d4f535bae3ab960 at term 4
14:13:31.524108 I | raft: c89feb932daef420 [quorum:2] has received 2 MsgVoteResp votes and 0 vote rejections
14:13:31.524123 I | raft: c89feb932daef420 became leader at term 4
14:13:31.524136 I | raft: raft.node: c89feb932daef420 elected leader c89feb932daef420 at term 4
14:13:31.592650 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream MsgApp v2 reader)
14:13:31.592825 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream Message reader)
14:13:31.693275 E | rafthttp: failed to dial 6d4f535bae3ab960 on stream Message (dial tcp [::1]:2380: getsockopt: connection refused)
14:13:31.693289 I | rafthttp: peer 6d4f535bae3ab960 became inactive
14:13:31.936678 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream Message writer)
```
It's a good idea at this point to [backup the etcd data](../op-guide/maintenance.md#snapshot-backup) to provide a downgrade path should any problems occur:
```
$ etcdctl snapshot save backup.db
```
#### 3. Drop-in etcd v3.3 binary and start the new etcd process
The new v3.3 etcd will publish its information to the cluster:
```
14:14:25.363225 I | etcdserver: published {Name:s1 ClientURLs:[http://localhost:2379]} to cluster a9ededbffcb1b1f1
```
Verify that each member, and then the entire cluster, becomes healthy with the new v3.3 etcd binary:
```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:22379 is healthy: successfully committed proposal: took = 5.540129ms
localhost:32379 is healthy: successfully committed proposal: took = 7.321771ms
localhost:2379 is healthy: successfully committed proposal: took = 10.629901ms
```
Upgraded members will log warnings like the following until the entire cluster is upgraded. This is expected and will cease after all etcd cluster members are upgraded to v3.3:
```
14:15:17.071804 W | etcdserver: member c89feb932daef420 has a higher version 3.3.0
14:15:21.073110 W | etcdserver: the local etcd version 3.2.7 is not up-to-date
14:15:21.073142 W | etcdserver: member 6d4f535bae3ab960 has a higher version 3.3.0
14:15:21.073157 W | etcdserver: the local etcd version 3.2.7 is not up-to-date
14:15:21.073164 W | etcdserver: member c89feb932daef420 has a higher version 3.3.0
```
#### 4. Repeat steps 2 and 3 for all other members
#### 5. Finish
When all members are upgraded, the cluster will report a successful upgrade to 3.3:
```
14:15:54.536901 N | etcdserver/membership: updated the cluster version from 3.2 to 3.3
14:15:54.537035 I | etcdserver/api: enabled capabilities for version 3.3
```
```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:2379 is healthy: successfully committed proposal: took = 2.312897ms
localhost:22379 is healthy: successfully committed proposal: took = 2.553476ms
localhost:32379 is healthy: successfully committed proposal: took = 2.517902ms
```
[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev


@ -0,0 +1,171 @@
## Upgrade etcd from 3.3 to 3.4
In the general case, upgrading from etcd 3.3 to 3.4 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.3 processes and replace them with etcd v3.4 processes
- after running all v3.4 processes, new features in v3.4 are available to the cluster
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/coreos/etcd/issues/9480), etcd server v3.2+ panics when it restores from existing snapshots but finds no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. It also prevents accidental v3 data loss (e.g. the `db` file might have been moved). etcd requires that a post-v3-migration server run only with v3 data present. Do not upgrade to newer v3 versions until the v3.0 server contains v3 data.
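Before upgrading such a server, it is worth verifying that the v3 backend file exists; a sketch, with an illustrative data directory:
```bash
# upgrading is only safe once this file exists and holds v3 data
ls -l /var/lib/etcd/member/snap/db
```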
Highlighted breaking changes in 3.4.
#### Change in `etcd` flags
The `--ca-file` and `--peer-ca-file` flags have been deprecated since v2.1; use `--trusted-ca-file` and `--peer-trusted-ca-file` instead.
```diff
-etcd --ca-file ca-client.crt
+etcd --trusted-ca-file ca-client.crt
```
```diff
-etcd --peer-ca-file ca-peer.crt
+etcd --peer-trusted-ca-file ca-peer.crt
```
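Taken together, a server that previously passed both deprecated CA flags would now start as follows (certificate paths are illustrative):
```bash
etcd --trusted-ca-file ca-client.crt --peer-trusted-ca-file ca-peer.crt
```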
#### Change in `pkg/transport`
The `pkg/transport.TLSInfo.CAFile` field is deprecated; use `TrustedCAFile` instead.
```diff
import "github.com/coreos/etcd/pkg/transport"
tlsInfo := transport.TLSInfo{
CertFile: "/tmp/test-certs/test.pem",
KeyFile: "/tmp/test-certs/test-key.pem",
- CAFile: "/tmp/test-certs/trusted-ca.pem",
+ TrustedCAFile: "/tmp/test-certs/trusted-ca.pem",
}
tlsConfig, err := tlsInfo.ClientConfig()
if err != nil {
panic(err)
}
```
### Server upgrade checklists
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.4, the running cluster must be 3.3 or greater. If it's before 3.3, please [upgrade to 3.3](upgrade_3_3.md) before upgrading to 3.4.
Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.
#### Preparation
Always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
Before beginning, [backup the etcd data](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to the existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](../v2/admin_guide.md#backing-up-the-datastore).
#### Mixed versions
While upgrading, an etcd cluster supports mixed versions of etcd members, and operates with the protocol of the lowest common version. The cluster is only considered upgraded once all of its members are upgraded to version 3.4. Internally, etcd members negotiate with each other to determine the overall cluster version, which controls the reported version and the supported features.
#### Limitations
Note: If the cluster only has v3 data and no v2 data, it is not subject to this limitation.
If the cluster is serving a v2 data set larger than 50MB, each newly upgraded member may take up to two minutes to catch up with the existing cluster. Check the size of a recent snapshot to estimate the total data size. In other words, it is safest to wait for 2 minutes between upgrading each member.
For a much larger total data size, 100MB or more, this one-time process might take even longer. Administrators of very large etcd clusters of this magnitude should feel free to contact the [etcd team][etcd-contact] before upgrading, and we'll be happy to provide advice on the procedure.
#### Downgrade
If all members have been upgraded to v3.4, the cluster will be upgraded to v3.4, and downgrading from this completed state is **not possible**. If any single member is still v3.3, however, the cluster and its operations remain "v3.3", and it is possible from this mixed cluster state to return to using a v3.3 etcd binary on all members.
Please [backup the data directory](../op-guide/maintenance.md#snapshot-backup) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
### Upgrade procedure
This example shows how to upgrade a 3-member v3.3 etcd cluster running on a local machine.
#### 1. Check upgrade requirements
Is the cluster healthy and running v3.3.x?
```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:2379 is healthy: successfully committed proposal: took = 6.600684ms
localhost:22379 is healthy: successfully committed proposal: took = 8.540064ms
localhost:32379 is healthy: successfully committed proposal: took = 8.763432ms
$ curl http://localhost:2379/version
{"etcdserver":"3.3.0","etcdcluster":"3.3.0"}
```
#### 2. Stop the existing etcd process
When each etcd process is stopped, expected errors will be logged by other cluster members. This is normal since a cluster member connection has been (temporarily) broken:
```
14:13:31.491746 I | raft: c89feb932daef420 [term 3] received MsgTimeoutNow from 6d4f535bae3ab960 and starts an election to get leadership.
14:13:31.491769 I | raft: c89feb932daef420 became candidate at term 4
14:13:31.491788 I | raft: c89feb932daef420 received MsgVoteResp from c89feb932daef420 at term 4
14:13:31.491797 I | raft: c89feb932daef420 [logterm: 3, index: 9] sent MsgVote request to 6d4f535bae3ab960 at term 4
14:13:31.491805 I | raft: c89feb932daef420 [logterm: 3, index: 9] sent MsgVote request to 9eda174c7df8a033 at term 4
14:13:31.491815 I | raft: raft.node: c89feb932daef420 lost leader 6d4f535bae3ab960 at term 4
14:13:31.524084 I | raft: c89feb932daef420 received MsgVoteResp from 6d4f535bae3ab960 at term 4
14:13:31.524108 I | raft: c89feb932daef420 [quorum:2] has received 2 MsgVoteResp votes and 0 vote rejections
14:13:31.524123 I | raft: c89feb932daef420 became leader at term 4
14:13:31.524136 I | raft: raft.node: c89feb932daef420 elected leader c89feb932daef420 at term 4
14:13:31.592650 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream MsgApp v2 reader)
14:13:31.592825 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream Message reader)
14:13:31.693275 E | rafthttp: failed to dial 6d4f535bae3ab960 on stream Message (dial tcp [::1]:2380: getsockopt: connection refused)
14:13:31.693289 I | rafthttp: peer 6d4f535bae3ab960 became inactive
14:13:31.936678 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream Message writer)
```
It's a good idea at this point to [backup the etcd data](../op-guide/maintenance.md#snapshot-backup) to provide a downgrade path should any problems occur:
```
$ etcdctl snapshot save backup.db
```
#### 3. Drop-in etcd v3.4 binary and start the new etcd process
The new v3.4 etcd will publish its information to the cluster:
```
14:14:25.363225 I | etcdserver: published {Name:s1 ClientURLs:[http://localhost:2379]} to cluster a9ededbffcb1b1f1
```
Verify that each member, and then the entire cluster, becomes healthy with the new v3.4 etcd binary:
```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:22379 is healthy: successfully committed proposal: took = 5.540129ms
localhost:32379 is healthy: successfully committed proposal: took = 7.321771ms
localhost:2379 is healthy: successfully committed proposal: took = 10.629901ms
```
Upgraded members will log warnings like the following until the entire cluster is upgraded. This is expected and will cease after all etcd cluster members are upgraded to v3.4:
```
14:15:17.071804 W | etcdserver: member c89feb932daef420 has a higher version 3.4.0
14:15:21.073110 W | etcdserver: the local etcd version 3.3.0 is not up-to-date
14:15:21.073142 W | etcdserver: member 6d4f535bae3ab960 has a higher version 3.4.0
14:15:21.073157 W | etcdserver: the local etcd version 3.3.0 is not up-to-date
14:15:21.073164 W | etcdserver: member c89feb932daef420 has a higher version 3.4.0
```
#### 4. Repeat steps 2 and 3 for all other members
#### 5. Finish
When all members are upgraded, the cluster will report a successful upgrade to 3.4:
```
14:15:54.536901 N | etcdserver/membership: updated the cluster version from 3.3 to 3.4
14:15:54.537035 I | etcdserver/api: enabled capabilities for version 3.4
```
```
$ ETCDCTL_API=3 etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
localhost:2379 is healthy: successfully committed proposal: took = 2.312897ms
localhost:22379 is healthy: successfully committed proposal: took = 2.553476ms
localhost:32379 is healthy: successfully committed proposal: took = 2.517902ms
```
[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev


@ -0,0 +1,19 @@
# Upgrading etcd clusters and applications
This section contains documents specific to upgrading etcd clusters and applications.
## Moving from etcd API v2 to API v3
* [Migrate applications from using API v2 to API v3][migrate-apps]
## Upgrading an etcd v3.x cluster
* [Upgrade etcd from 3.0 to 3.1][upgrade-3-1]
* [Upgrade etcd from 3.1 to 3.2][upgrade-3-2]
## Upgrading from etcd v2.3
* [Upgrade a v2.3 cluster to v3.0][upgrade-cluster]
[migrate-apps]: ../op-guide/v2-migration.md
[upgrade-cluster]: upgrade_3_0.md
[upgrade-3-1]: upgrade_3_1.md
[upgrade-3-2]: upgrade_3_2.md


@ -67,13 +67,13 @@ You have successfully started an etcd and written a key to the store.
The [official etcd ports][iana-ports] are 2379 for client requests, and 2380 for peer communication. To maintain compatibility, some etcd configuration and documentation continues to refer to the legacy ports 4001 and 7001, but all new etcd use and discussion should adopt the IANA-assigned ports. The legacy ports 4001 and 7001 will be fully deprecated, and support for their use removed, in future etcd releases.
[iana-ports]: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=etcd
[iana-ports]: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt
### Running local etcd cluster
First install [goreman](https://github.com/mattn/goreman), which manages Procfile-based applications.
Our [Procfile script](./Procfile) will set up a local example cluster. You can start it with:
Our [Procfile script](../../V2Procfile) will set up a local example cluster. You can start it with:
```sh
goreman start
@ -162,4 +162,4 @@ Currently only the amd64 architecture is officially supported by `etcd`.
### License
etcd is under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.
etcd is under the Apache 2.0 license. See the [LICENSE](../../LICENSE) file for details.


@ -18,7 +18,7 @@ A key's lifetime spans a generation. Each key may have one or multiple generat
### Physical View
etcd stores the physical data as key-value pairs in a persistent [b+tree][b+tree]. Each revision of the store's state only contains the delta from its previous revision to be efficient. A single revision may correspond to multiple keys in the tree.
etcd stores the physical data as key-value pairs in a persistent [b+tree][b+tree]. Each revision of the store's state only contains the delta from its previous revision to be efficient. A single revision may correspond to multiple keys in the tree.
The key of each key-value pair is a 3-tuple (major, sub, type). Major is the store revision holding the key. Sub differentiates among keys within the same revision. Type is an optional suffix for special values (e.g., `t` if the value contains a tombstone). The value of the key-value pair contains the modification from the previous revision, thus one delta from the previous revision. The b+tree is ordered by key in lexical byte-order. Ranged lookups over revision deltas are fast; this enables quickly finding modifications from one specific revision to another. Compaction removes out-of-date key-value pairs.
@ -73,7 +73,7 @@ Any completed operations are durable. All accessible data is also durable data.
#### Linearizability
Linearizability (also known as Atomic Consistency or External Consistency) is a consistency level between strict consistency and sequential consistency.
Linearizability (also known as Atomic Consistency or External Consistency) is a consistency level between strict consistency and sequential consistency.
For linearizability, suppose each operation receives a timestamp from a loosely synchronized global clock. Operations are linearized if and only if they always complete as though they were executed in a sequential order and each operation appears to complete in the order specified by the program. Likewise, if an operation's timestamp precedes another's, that operation must also precede the other operation in the sequence.
@ -83,10 +83,10 @@ etcd does not ensure linearizability for watch operations. Users are expected to
etcd ensures linearizability for all other operations by default. Linearizability comes with a cost, however, because linearized requests must go through the Raft consensus process. To obtain lower latencies and higher throughput for read requests, clients can configure a request's consistency mode to `serializable`, which may access stale data with respect to quorum, but removes the performance penalty of linearized accesses' reliance on live consensus.
[persistent-ds]: [https://en.wikipedia.org/wiki/Persistent_data_structure]
[btree]: [https://en.wikipedia.org/wiki/B-tree]
[b+tree]: [https://en.wikipedia.org/wiki/B%2B_tree]
[seq_consistency]: [https://en.wikipedia.org/wiki/Consistency_model#Sequential_consistency]
[strict_consistency]: [https://en.wikipedia.org/wiki/Consistency_model#Strict_consistency]
[serializable_isolation]: [https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable]
[Linearizability]: [#Linearizability]
[persistent-ds]: https://en.wikipedia.org/wiki/Persistent_data_structure
[btree]: https://en.wikipedia.org/wiki/B-tree
[b+tree]: https://en.wikipedia.org/wiki/B%2B_tree
[seq_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Sequential_consistency
[strict_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Strict_consistency
[serializable_isolation]: https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable
[Linearizability]: #linearizability


@ -32,7 +32,7 @@ The consistent flag for read operations is removed in etcd 2.0.0. The normal rea
The read consistency guarantees are:
The consistent read guarantees the sequential consistency within one client that talks to one etcd server. Read/Write from one client to one etcd member should be observed in order. If one client writes a value to an etcd server successfully, it should be able to get the value out of the server immediately.
The consistent read guarantees the sequential consistency within one client that talks to one etcd server. Read/Write from one client to one etcd member should be observed in order. If one client writes a value to an etcd server successfully, it should be able to get the value out of the server immediately.
Each etcd member will proxy the request to the leader and only return the result to the user after the result is applied on the local member. Thus after the write succeeds, the user is guaranteed to see the value on the member it sent the request to.
@ -56,6 +56,7 @@ Proxy mode in 2.0 will provide similar functionality, and with improved control
## Discovery Service
A size key needs to be provided inside a [discovery token][discoverytoken].
[discoverytoken]: clustering.md#custom-etcd-discovery-service
## HTTP Admin API


@ -49,4 +49,4 @@ Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send r
| 256 | 256 | all servers | 3061 | 119.3 |
[boom]: https://github.com/rakyll/boom
[hack-benchmark]: /hack/benchmark/
[hack-benchmark]: ../../../hack/benchmark/


@ -24,7 +24,7 @@ Go OS/Arch: linux/amd64
## Testing
Bootstrap another machine, outside of the etcd cluster, and run the [`boom` HTTP benchmark tool](https://github.com/rakyll/boom) with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions](../../hack/benchmark/) for the patch and the steps to reproduce our procedures.
Bootstrap another machine, outside of the etcd cluster, and run the [`boom` HTTP benchmark tool][boom] with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions][hack] for the patch and the steps to reproduce our procedures.
The performance is calculated through the results of 100 benchmark rounds.
@ -66,4 +66,7 @@ The performance is calulated through results of 100 benchmark rounds.
- Write QPS to cluster leaders seems to be increased by a small margin. This is because the main loop and entry apply loops were decoupled in the etcd raft logic, eliminating several blocks between them.
- Write QPS to all members seems to be increased by a significant margin, because followers now receive the latest commit index sooner, and commit proposals more quickly.
- Write QPS to all members seems to be increased by a significant margin, because followers now receive the latest commit index sooner, and commit proposals more quickly.
[boom]: https://github.com/rakyll/boom
[hack]: ../../../hack/benchmark/


@ -69,4 +69,4 @@ Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send r
[boom]: https://github.com/rakyll/boom
[c7146bd5]: https://github.com/coreos/etcd/commits/c7146bd5f2c73716091262edc638401bb8229144
[etcd-2.1-benchmark]: etcd-2-1-0-alpha-benchmarks.md
[hack-benchmark]: /hack/benchmark/
[hack-benchmark]: ../../../hack/benchmark/


@ -39,4 +39,4 @@ The performance is nearly the same as the one with empty server handler.
The performance with empty server handler is not affected by one put. So the
performance downgrade should be caused by storage package.
[etcd-v3-benchmark]: /tools/benchmark/
[etcd-v3-benchmark]: ../../../tools/benchmark/


@ -423,7 +423,7 @@ To make understanding this feature easier, we changed the naming of some flags,
|-peers |none |Deprecated. The --initial-cluster flag provides a similar concept with different semantics. Please read this guide on cluster startup.|
|-peers-file |none |Deprecated. The --initial-cluster flag provides a similar concept with different semantics. Please read this guide on cluster startup.|
[client]: /client
[client]: ../../client
[client-discoverer]: https://godoc.org/github.com/coreos/etcd/client#Discoverer
[conf-adv-client]: configuration.md#-advertise-client-urls
[conf-listen-client]: configuration.md#-listen-client-urls


@ -234,7 +234,7 @@ The security flags help to [build a secure etcd cluster][security].
+ env variable: ETCD_DEBUG
### --log-package-levels
+ Set individual etcd subpackages to specific log levels. An example being `etcdserver=WARNING,security=DEBUG`
+ Set individual etcd subpackages to specific log levels. An example being `etcdserver=WARNING,security=DEBUG`
+ default: none (INFO for all packages)
+ env variable: ETCD_LOG_PACKAGE_LEVELS
@ -272,7 +272,7 @@ Follow the instructions when using these flags.
[build-cluster]: clustering.md#static
[reconfig]: runtime-configuration.md
[discovery]: clustering.md#discovery
[iana-ports]: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=etcd
[iana-ports]: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt
[proxy]: proxy.md
[reconfig]: runtime-configuration.md
[restore]: admin_guide.md#restoring-a-backup


@ -112,7 +112,6 @@
- [mattn/etcdenv](https://github.com/mattn/etcdenv) - "env" shebang with etcd integration
- [kelseyhightower/confd](https://github.com/kelseyhightower/confd) - Manage local app config files using templates and data from etcd
- [configdb](https://git.autistici.org/ai/configdb/tree/master) - A REST relational abstraction on top of arbitrary database backends, aimed at storing configs and inventories.
- [scrz](https://github.com/scrz/scrz) - Container manager, stores configuration in etcd.
- [fleet](https://github.com/coreos/fleet) - Distributed init system
- [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) - Container cluster manager introduced by Google.
- [mailgun/vulcand](https://github.com/mailgun/vulcand) - HTTP proxy that uses etcd as a configuration backend.


@ -1,6 +1,6 @@
# Reporting Bugs
If you find bugs or documentation mistakes in the etcd project, please let us know by [opening an issue][issue]. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.
If you find bugs or documentation mistakes in the etcd project, please let us know by [opening an issue][etcd-issue]. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.
To make your bug report accurate and easy to understand, please try to create bug reports that are:


@ -7,25 +7,25 @@ To prove out the design of the v3 API the team has also built [a number of examp
# Design
1. Flatten binary key-value space
2. Keep the event history until compaction
- access to old version of keys
- user controlled history compaction
3. Support range query
- Pagination support with limit argument
- Support consistency guarantee across multiple range queries
4. Replace TTL key with Lease
- more efficient/ low cost keep alive
- a logical group of TTL keys
5. Replace CAS/CAD with multi-object Txn
- MUCH MORE powerful and flexible
6. Support efficient watching with multiple ranges
7. RPC API supports the complete set of APIs.
7. RPC API supports the complete set of APIs.
- more efficient than JSON/HTTP
- additional txn/lease support
@ -56,7 +56,7 @@ the size in the future a little bit or make it configurable.
// A put is always successful
Put( PutRequest { key = foo, value = bar } )
PutResponse {
PutResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 1,
@ -119,7 +119,7 @@ RangeResponse {
Txn(TxnRequest {
// mod_revision of foo0 is equal to 1, mod_revision of foo1 is greater than 1
compare = {
{compareType = equal, key = foo0, mod_revision = 1},
{compareType = equal, key = foo0, mod_revision = 1},
{compareType = greater, key = foo1, mod_revision = 1}}
},
// if the comparison succeeds, put foo2 = bar2
@ -156,7 +156,7 @@ Watch( WatchRequest{
end_revision = 10000,
// server decided notification frequency
progress_notification = true,
}
}
… // this can be a watch request stream
)
@ -176,7 +176,7 @@ WatchResponse {
},
}
// a notification at 2000
WatchResponse {
cluster_id = 0x1000,
@ -185,9 +185,9 @@ WatchResponse {
raft_term = 0x1,
// nil event as notification
}
// put (foo0=bar3000) event at 3000
WatchResponse {
cluster_id = 0x1000,
@ -204,8 +204,8 @@ WatchResponse {
},
}
```
[api-protobuf]: https://github.com/coreos/etcd/blob/master/etcdserver/etcdserverpb/rpc.proto
[kv-protobuf]: https://github.com/coreos/etcd/blob/master/storage/storagepb/kv.proto
[api-protobuf]: https://github.com/coreos/etcd/blob/release-2.3/etcdserver/etcdserverpb/rpc.proto
[kv-protobuf]: https://github.com/coreos/etcd/blob/release-2.3/storage/storagepb/kv.proto


@ -188,6 +188,6 @@ Make sure that you sign your certificates with a Subject Name your member's publ
If you need your certificate to be signed for your member's FQDN in its Subject Name then you could use Subject Alternative Names (short IP SANs) to add your IP address. The `etcd-ca` tool provides `--domain=` option for its `new-cert` command, and openssl can make [it][alt-name] too.
[cfssl]: https://github.com/cloudflare/cfssl
[tls-setup]: /hack/tls-setup
[tls-setup]: ../../hack/tls-setup
[tls-guide]: https://github.com/coreos/docs/blob/master/os/generate-self-signed-certificates.md
[alt-name]: http://wiki.cacert.org/FAQ/subjectAltName

Makefile (516 lines, new file)

@ -0,0 +1,516 @@
# run from repository root
# Example:
# make build
# make clean
# make docker-clean
# make docker-start
# make docker-kill
# make docker-remove
.PHONY: build
build:
GO_BUILD_FLAGS="-v" ./build
./bin/etcd --version
ETCDCTL_API=3 ./bin/etcdctl version
clean:
rm -f ./codecov
rm -rf ./agent-*
rm -rf ./covdir
rm -f ./*.log
rm -f ./bin/Dockerfile-release
rm -rf ./bin/*.etcd
rm -rf ./gopath
rm -rf ./gopath.proto
rm -rf ./release
rm -f ./integration/127.0.0.1:* ./integration/localhost:*
rm -f ./clientv3/integration/127.0.0.1:* ./clientv3/integration/localhost:*
rm -f ./clientv3/ordering/127.0.0.1:* ./clientv3/ordering/localhost:*
docker-clean:
docker images
docker image prune --force
docker-start:
service docker restart
docker-kill:
docker kill `docker ps -q` || true
docker-remove:
docker rm --force `docker ps -a -q` || true
docker rmi --force `docker images -q` || true
GO_VERSION ?= 1.10.1
ETCD_VERSION ?= $(shell git rev-parse --short HEAD || echo "GitNotFound")
TEST_SUFFIX = $(shell date +%s | base64 | head -c 15)
TEST_OPTS ?= PASSES='unit'
TMP_DIR_MOUNT_FLAG = --mount type=tmpfs,destination=/tmp
ifdef HOST_TMP_DIR
TMP_DIR_MOUNT_FLAG = --mount type=bind,source=$(HOST_TMP_DIR),destination=/tmp
endif
# Example:
# GO_VERSION=1.8.7 make build-docker-test
# GO_VERSION=1.9.5 make build-docker-test
# make build-docker-test
#
# gcloud docker -- login -u _json_key -p "$(cat /etc/gcp-key-etcd-development.json)" https://gcr.io
# GO_VERSION=1.8.7 make push-docker-test
# GO_VERSION=1.9.5 make push-docker-test
# make push-docker-test
#
# gsutil -m acl ch -u allUsers:R -r gs://artifacts.etcd-development.appspot.com
# GO_VERSION=1.9.5 make pull-docker-test
# make pull-docker-test
build-docker-test:
$(info GO_VERSION: $(GO_VERSION))
@sed -i.bak 's|REPLACE_ME_GO_VERSION|$(GO_VERSION)|g' ./tests/Dockerfile
docker build \
--tag gcr.io/etcd-development/etcd-test:go$(GO_VERSION) \
--file ./tests/Dockerfile .
@mv ./tests/Dockerfile.bak ./tests/Dockerfile
push-docker-test:
$(info GO_VERSION: $(GO_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd-test:go$(GO_VERSION)
pull-docker-test:
$(info GO_VERSION: $(GO_VERSION))
docker pull gcr.io/etcd-development/etcd-test:go$(GO_VERSION)
# Example:
# make build-docker-test
# make compile-with-docker-test
# make compile-setup-gopath-with-docker-test
compile-with-docker-test:
$(info GO_VERSION: $(GO_VERSION))
docker run \
--rm \
--mount type=bind,source=`pwd`,destination=/go/src/github.com/coreos/etcd \
gcr.io/etcd-development/etcd-test:go$(GO_VERSION) \
/bin/bash -c "GO_BUILD_FLAGS=-v ./build && ./bin/etcd --version"
compile-setup-gopath-with-docker-test:
$(info GO_VERSION: $(GO_VERSION))
docker run \
--rm \
--mount type=bind,source=`pwd`,destination=/etcd \
gcr.io/etcd-development/etcd-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && ETCD_SETUP_GOPATH=1 GO_BUILD_FLAGS=-v ./build && ./bin/etcd --version && rm -rf ./gopath"
# Example:
#
# Local machine:
# TEST_OPTS="PASSES='fmt'" make test
# TEST_OPTS="PASSES='fmt bom dep compile build unit'" make test
# TEST_OPTS="PASSES='build unit release integration_e2e functional'" make test
# TEST_OPTS="PASSES='build grpcproxy'" make test
#
# Example (test with docker):
# make pull-docker-test
# TEST_OPTS="PASSES='fmt'" make docker-test
# TEST_OPTS="VERBOSE=2 PASSES='unit'" make docker-test
#
# Travis CI (test with docker):
# TEST_OPTS="PASSES='fmt bom dep compile build unit'" make docker-test
#
# Semaphore CI (test with docker):
# TEST_OPTS="PASSES='build unit release integration_e2e functional'" make docker-test
# HOST_TMP_DIR=/tmp TEST_OPTS="PASSES='build unit release integration_e2e functional'" make docker-test
# TEST_OPTS="GOARCH=386 PASSES='build unit integration_e2e'" make docker-test
#
# grpc-proxy tests (test with docker):
# TEST_OPTS="PASSES='build grpcproxy'" make docker-test
# HOST_TMP_DIR=/tmp TEST_OPTS="PASSES='build grpcproxy'" make docker-test
.PHONY: test
test:
$(info TEST_OPTS: $(TEST_OPTS))
$(info log-file: test-$(TEST_SUFFIX).log)
$(TEST_OPTS) ./test 2>&1 | tee test-$(TEST_SUFFIX).log
! egrep "(--- FAIL:|panic: test timed out|appears to have leaked)" -B50 -A10 test-$(TEST_SUFFIX).log
docker-test:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
$(info TEST_OPTS: $(TEST_OPTS))
$(info log-file: test-$(TEST_SUFFIX).log)
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`,destination=/go/src/github.com/coreos/etcd \
gcr.io/etcd-development/etcd-test:go$(GO_VERSION) \
/bin/bash -c "$(TEST_OPTS) ./test 2>&1 | tee test-$(TEST_SUFFIX).log"
! egrep "(--- FAIL:|panic: test timed out|appears to have leaked)" -B50 -A10 test-$(TEST_SUFFIX).log
docker-test-coverage:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
$(info log-file: docker-test-coverage-$(TEST_SUFFIX).log)
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`,destination=/go/src/github.com/coreos/etcd \
gcr.io/etcd-development/etcd-test:go$(GO_VERSION) \
/bin/bash -c "COVERDIR=covdir PASSES='build build_cov cov' ./test 2>&1 | tee docker-test-coverage-$(TEST_SUFFIX).log && /codecov -t 6040de41-c073-4d6f-bbf8-d89256ef31e1"
! egrep "(--- FAIL:|panic: test timed out|appears to have leaked)" -B50 -A10 docker-test-coverage-$(TEST_SUFFIX).log
# Example:
# make compile-with-docker-test
# ETCD_VERSION=v3-test make build-docker-release-master
# ETCD_VERSION=v3-test make push-docker-release-master
# gsutil -m acl ch -u allUsers:R -r gs://artifacts.etcd-development.appspot.com
build-docker-release-master:
$(info ETCD_VERSION: $(ETCD_VERSION))
cp ./Dockerfile-release ./bin/Dockerfile-release
docker build \
--tag gcr.io/etcd-development/etcd:$(ETCD_VERSION) \
--file ./bin/Dockerfile-release \
./bin
rm -f ./bin/Dockerfile-release
docker run \
--rm \
gcr.io/etcd-development/etcd:$(ETCD_VERSION) \
/bin/sh -c "/usr/local/bin/etcd --version && ETCDCTL_API=3 /usr/local/bin/etcdctl version"
push-docker-release-master:
$(info ETCD_VERSION: $(ETCD_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd:$(ETCD_VERSION)
# Example:
# make build-docker-test
# make compile-with-docker-test
# make build-docker-static-ip-test
#
# gcloud docker -- login -u _json_key -p "$(cat /etc/gcp-key-etcd-development.json)" https://gcr.io
# make push-docker-static-ip-test
#
# gsutil -m acl ch -u allUsers:R -r gs://artifacts.etcd-development.appspot.com
# make pull-docker-static-ip-test
#
# make docker-static-ip-test-certs-run
# make docker-static-ip-test-certs-metrics-proxy-run
build-docker-static-ip-test:
$(info GO_VERSION: $(GO_VERSION))
@sed -i.bak 's|REPLACE_ME_GO_VERSION|$(GO_VERSION)|g' ./tests/docker-static-ip/Dockerfile
docker build \
--tag gcr.io/etcd-development/etcd-static-ip-test:go$(GO_VERSION) \
--file ./tests/docker-static-ip/Dockerfile \
./tests/docker-static-ip
@mv ./tests/docker-static-ip/Dockerfile.bak ./tests/docker-static-ip/Dockerfile
push-docker-static-ip-test:
$(info GO_VERSION: $(GO_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd-static-ip-test:go$(GO_VERSION)
pull-docker-static-ip-test:
$(info GO_VERSION: $(GO_VERSION))
docker pull gcr.io/etcd-development/etcd-static-ip-test:go$(GO_VERSION)
docker-static-ip-test-certs-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-static-ip/certs,destination=/certs \
gcr.io/etcd-development/etcd-static-ip-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs/run.sh && rm -rf m*.etcd"
docker-static-ip-test-certs-metrics-proxy-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-static-ip/certs-metrics-proxy,destination=/certs-metrics-proxy \
gcr.io/etcd-development/etcd-static-ip-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-metrics-proxy/run.sh && rm -rf m*.etcd"
# Example:
# make build-docker-test
# make compile-with-docker-test
# make build-docker-dns-test
#
# gcloud docker -- login -u _json_key -p "$(cat /etc/gcp-key-etcd-development.json)" https://gcr.io
# make push-docker-dns-test
#
# gsutil -m acl ch -u allUsers:R -r gs://artifacts.etcd-development.appspot.com
# make pull-docker-dns-test
#
# make docker-dns-test-insecure-run
# make docker-dns-test-certs-run
# make docker-dns-test-certs-gateway-run
# make docker-dns-test-certs-wildcard-run
# make docker-dns-test-certs-common-name-auth-run
# make docker-dns-test-certs-common-name-multi-run
build-docker-dns-test:
$(info GO_VERSION: $(GO_VERSION))
@sed -i.bak 's|REPLACE_ME_GO_VERSION|$(GO_VERSION)|g' ./tests/docker-dns/Dockerfile
docker build \
--tag gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
--file ./tests/docker-dns/Dockerfile \
./tests/docker-dns
@mv ./tests/docker-dns/Dockerfile.bak ./tests/docker-dns/Dockerfile
docker run \
--rm \
--dns 127.0.0.1 \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "/etc/init.d/bind9 start && cat /dev/null >/etc/hosts && dig etcd.local"
push-docker-dns-test:
$(info GO_VERSION: $(GO_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION)
pull-docker-dns-test:
$(info GO_VERSION: $(GO_VERSION))
docker pull gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION)
docker-dns-test-insecure-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns/insecure,destination=/insecure \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /insecure/run.sh && rm -rf m*.etcd"
docker-dns-test-certs-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns/certs,destination=/certs \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs/run.sh && rm -rf m*.etcd"
docker-dns-test-certs-gateway-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns/certs-gateway,destination=/certs-gateway \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-gateway/run.sh && rm -rf m*.etcd"
docker-dns-test-certs-wildcard-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns/certs-wildcard,destination=/certs-wildcard \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-wildcard/run.sh && rm -rf m*.etcd"
docker-dns-test-certs-common-name-auth-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns/certs-common-name-auth,destination=/certs-common-name-auth \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-common-name-auth/run.sh && rm -rf m*.etcd"
docker-dns-test-certs-common-name-multi-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns/certs-common-name-multi,destination=/certs-common-name-multi \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-common-name-multi/run.sh && rm -rf m*.etcd"
# Example:
# make build-docker-test
# make compile-with-docker-test
# make build-docker-dns-srv-test
# gcloud docker -- login -u _json_key -p "$(cat /etc/gcp-key-etcd-development.json)" https://gcr.io
# make push-docker-dns-srv-test
# gsutil -m acl ch -u allUsers:R -r gs://artifacts.etcd-development.appspot.com
# make pull-docker-dns-srv-test
# make docker-dns-srv-test-certs-run
# make docker-dns-srv-test-certs-gateway-run
# make docker-dns-srv-test-certs-wildcard-run
build-docker-dns-srv-test:
$(info GO_VERSION: $(GO_VERSION))
@sed -i.bak 's|REPLACE_ME_GO_VERSION|$(GO_VERSION)|g' ./tests/docker-dns-srv/Dockerfile
docker build \
--tag gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION) \
--file ./tests/docker-dns-srv/Dockerfile \
./tests/docker-dns-srv
@mv ./tests/docker-dns-srv/Dockerfile.bak ./tests/docker-dns-srv/Dockerfile
docker run \
--rm \
--dns 127.0.0.1 \
gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION) \
/bin/bash -c "/etc/init.d/bind9 start && cat /dev/null >/etc/hosts && dig +noall +answer SRV _etcd-client-ssl._tcp.etcd.local && dig +noall +answer SRV _etcd-server-ssl._tcp.etcd.local && dig +noall +answer m1.etcd.local m2.etcd.local m3.etcd.local"
push-docker-dns-srv-test:
$(info GO_VERSION: $(GO_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION)
pull-docker-dns-srv-test:
$(info GO_VERSION: $(GO_VERSION))
docker pull gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION)
docker-dns-srv-test-certs-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns-srv/certs,destination=/certs \
gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs/run.sh && rm -rf m*.etcd"
docker-dns-srv-test-certs-gateway-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns-srv/certs-gateway,destination=/certs-gateway \
gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-gateway/run.sh && rm -rf m*.etcd"
docker-dns-srv-test-certs-wildcard-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/tests/docker-dns-srv/certs-wildcard,destination=/certs-wildcard \
gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-wildcard/run.sh && rm -rf m*.etcd"
# Example:
# make build-functional
# make build-docker-functional
# make push-docker-functional
# make pull-docker-functional
build-functional:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
./functional/build
./bin/etcd-agent -help || true && \
./bin/etcd-proxy -help || true && \
./bin/etcd-runner --help || true && \
./bin/etcd-tester -help || true
build-docker-functional:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
@sed -i.bak 's|REPLACE_ME_GO_VERSION|$(GO_VERSION)|g' ./functional/Dockerfile
docker build \
--tag gcr.io/etcd-development/etcd-functional:go$(GO_VERSION) \
--file ./functional/Dockerfile \
.
@mv ./functional/Dockerfile.bak ./functional/Dockerfile
docker run \
--rm \
gcr.io/etcd-development/etcd-functional:go$(GO_VERSION) \
/bin/bash -c "./bin/etcd --version && \
./bin/etcd-failpoints --version && \
ETCDCTL_API=3 ./bin/etcdctl version && \
./bin/etcd-agent -help || true && \
./bin/etcd-proxy -help || true && \
./bin/etcd-runner --help || true && \
./bin/etcd-tester -help || true && \
./bin/benchmark --help || true"
push-docker-functional:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd-functional:go$(GO_VERSION)
pull-docker-functional:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
docker pull gcr.io/etcd-development/etcd-functional:go$(GO_VERSION)


@ -78,7 +78,7 @@ That's it! etcd is now running and serving client requests. For more
The [official etcd ports][iana-ports] are 2379 for client requests, and 2380 for peer communication.
[iana-ports]: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=etcd
[iana-ports]: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt
### Running a local etcd cluster
@ -136,5 +136,3 @@ See [reporting bugs](Documentation/reporting_bugs.md) for details about reportin
### License
etcd is under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.


@ -34,7 +34,7 @@ import (
func TestV2NoRetryEOF(t *testing.T) {
defer testutil.AfterTest(t)
// generate an EOF response; specify address so appears first in sorted ep list
lEOF := integration.NewListenerWithAddr(t, fmt.Sprintf("eof:123.%d.sock", os.Getpid()))
lEOF := integration.NewListenerWithAddr(t, fmt.Sprintf("127.0.0.1:%05d", os.Getpid()))
defer lEOF.Close()
tries := uint32(0)
go func() {
@ -65,8 +65,7 @@ func TestV2NoRetryEOF(t *testing.T) {
// TestV2NoRetryNoLeader tests destructive api calls won't retry if given an error code.
func TestV2NoRetryNoLeader(t *testing.T) {
defer testutil.AfterTest(t)
lHttp := integration.NewListenerWithAddr(t, fmt.Sprintf("errHttp:123.%d.sock", os.Getpid()))
lHttp := integration.NewListenerWithAddr(t, fmt.Sprintf("127.0.0.1:%05d", os.Getpid()))
eh := &errHandler{errCode: http.StatusServiceUnavailable}
srv := httptest.NewUnstartedServer(eh)
defer lHttp.Close()


@ -17,5 +17,5 @@ package integration
import "github.com/coreos/pkg/capnslog"
func init() {
capnslog.SetGlobalLogLevel(capnslog.INFO)
capnslog.SetGlobalLogLevel(capnslog.CRITICAL)
}


@ -347,7 +347,57 @@ func putAndWatch(t *testing.T, wctx *watchctx, key, val string) {
}
}
// TestWatchResumeComapcted checks that the watcher gracefully closes in case
func TestWatchResumeInitRev(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
cli := clus.Client(0)
if _, err := cli.Put(context.TODO(), "b", "2"); err != nil {
t.Fatal(err)
}
if _, err := cli.Put(context.TODO(), "a", "3"); err != nil {
t.Fatal(err)
}
// if resume is broken, it'll pick up this key first instead of a=3
if _, err := cli.Put(context.TODO(), "a", "4"); err != nil {
t.Fatal(err)
}
wch := clus.Client(0).Watch(context.Background(), "a", clientv3.WithRev(1), clientv3.WithCreatedNotify())
if resp, ok := <-wch; !ok || resp.Header.Revision != 4 {
t.Fatalf("got (%v, %v), expected create notification rev=4", resp, ok)
}
// pause wch
clus.Members[0].DropConnections()
clus.Members[0].PauseConnections()
select {
case resp, ok := <-wch:
t.Skipf("wch should block, got (%+v, %v); drop not fast enough", resp, ok)
case <-time.After(100 * time.Millisecond):
}
// resume wch
clus.Members[0].UnpauseConnections()
select {
case resp, ok := <-wch:
if !ok {
t.Fatal("unexpected watch close")
}
if len(resp.Events) == 0 {
t.Fatal("expected event on watch")
}
if string(resp.Events[0].Kv.Value) != "3" {
t.Fatalf("expected value=3, got event %+v", resp.Events[0])
}
case <-time.After(5 * time.Second):
t.Fatal("watch timed out")
}
}
// TestWatchResumeCompacted checks that the watcher gracefully closes in case
// that it tries to resume to a revision that's been compacted out of the store.
// Since the watcher's server restarts with stale data, the watcher will receive
// either a compaction error or all keys by staying in sync before the compaction

View File

@ -616,10 +616,24 @@ func (w *watchGrpcStream) serveSubstream(ws *watcherStream, resumec chan struct{
if ws.initReq.createdNotify {
ws.outc <- *wr
}
// once the watch channel is returned, a current revision
// watch must resume at the store revision. This is necessary
// for the following case to work as expected:
// wch := m1.Watch("a")
// m2.Put("a", "b")
// <-wch
// If the revision is only bound on the first observed event,
// if wch is disconnected before the Put is issued, then reconnects
// after it is committed, it'll miss the Put.
if ws.initReq.rev == 0 {
nextRev = wr.Header.Revision
}
}
} else {
// current progress of watch; <= store revision
nextRev = wr.Header.Revision
}
nextRev = wr.Header.Revision
if len(wr.Events) > 0 {
nextRev = wr.Events[len(wr.Events)-1].Kv.ModRevision + 1
}


@ -5,3 +5,6 @@ const maxMapSize = 0x7FFFFFFF // 2GB
// maxAllocSize is the size used when creating array pointers.
const maxAllocSize = 0xFFFFFFF
// Are unaligned load/stores broken on this arch?
var brokenUnaligned = false


@ -5,3 +5,6 @@ const maxMapSize = 0xFFFFFFFFFFFF // 256TB
// maxAllocSize is the size used when creating array pointers.
const maxAllocSize = 0x7FFFFFFF
// Are unaligned load/stores broken on this arch?
var brokenUnaligned = false

cmd/vendor/github.com/coreos/bbolt/bolt_arm.go (28 lines, generated, vendored, new file)

@ -0,0 +1,28 @@
package bolt
import "unsafe"
// maxMapSize represents the largest mmap size supported by Bolt.
const maxMapSize = 0x7FFFFFFF // 2GB
// maxAllocSize is the size used when creating array pointers.
const maxAllocSize = 0xFFFFFFF
// Are unaligned load/stores broken on this arch?
var brokenUnaligned bool
func init() {
// Simple check to see whether this arch handles unaligned load/stores
// correctly.
// ARM9 and older devices require load/stores to be from/to aligned
// addresses. If not, the lower 2 bits are cleared and that address is
// read in a jumbled up order.
// See http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.faqs/ka15414.html
raw := [6]byte{0xfe, 0xef, 0x11, 0x22, 0x22, 0x11}
val := *(*uint32)(unsafe.Pointer(uintptr(unsafe.Pointer(&raw)) + 2))
brokenUnaligned = val != 0x11222211
}


@ -7,3 +7,6 @@ const maxMapSize = 0xFFFFFFFFFFFF // 256TB
// maxAllocSize is the size used when creating array pointers.
const maxAllocSize = 0x7FFFFFFF
// Are unaligned load/stores broken on this arch?
var brokenUnaligned = false

cmd/vendor/github.com/coreos/bbolt/bolt_mips64x.go (12 lines, generated, vendored, new file)

@ -0,0 +1,12 @@
// +build mips64 mips64le
package bolt
// maxMapSize represents the largest mmap size supported by Bolt.
const maxMapSize = 0x8000000000 // 512GB
// maxAllocSize is the size used when creating array pointers.
const maxAllocSize = 0x7FFFFFFF
// Are unaligned load/stores broken on this arch?
var brokenUnaligned = false


@ -1,7 +1,12 @@
// +build mips mipsle
package bolt
// maxMapSize represents the largest mmap size supported by Bolt.
const maxMapSize = 0x7FFFFFFF // 2GB
const maxMapSize = 0x40000000 // 1GB
// maxAllocSize is the size used when creating array pointers.
const maxAllocSize = 0xFFFFFFF
// Are unaligned load/stores broken on this arch?
var brokenUnaligned = false


@ -7,3 +7,6 @@ const maxMapSize = 0xFFFFFFFFFFFF // 256TB
// maxAllocSize is the size used when creating array pointers.
const maxAllocSize = 0x7FFFFFFF
// Are unaligned load/stores broken on this arch?
var brokenUnaligned = false


@ -7,3 +7,6 @@ const maxMapSize = 0xFFFFFFFFFFFF // 256TB
// maxAllocSize is the size used when creating array pointers.
const maxAllocSize = 0x7FFFFFFF
// Are unaligned load/stores broken on this arch?
var brokenUnaligned = false


@ -7,3 +7,6 @@ const maxMapSize = 0xFFFFFFFFFFFF // 256TB
// maxAllocSize is the size used when creating array pointers.
const maxAllocSize = 0x7FFFFFFF
// Are unaligned load/stores broken on this arch?
var brokenUnaligned = false

View File

@ -13,29 +13,32 @@ import (
// flock acquires an advisory lock on a file descriptor.
func flock(db *DB, mode os.FileMode, exclusive bool, timeout time.Duration) error {
var t time.Time
if timeout != 0 {
t = time.Now()
}
fd := db.file.Fd()
flag := syscall.LOCK_NB
if exclusive {
flag |= syscall.LOCK_EX
} else {
flag |= syscall.LOCK_SH
}
for {
// If we're beyond our timeout then return an error.
// This can only occur after we've attempted a flock once.
if t.IsZero() {
t = time.Now()
} else if timeout > 0 && time.Since(t) > timeout {
return ErrTimeout
}
flag := syscall.LOCK_SH
if exclusive {
flag = syscall.LOCK_EX
}
// Otherwise attempt to obtain an exclusive lock.
err := syscall.Flock(int(db.file.Fd()), flag|syscall.LOCK_NB)
// Attempt to obtain an exclusive lock.
err := syscall.Flock(int(fd), flag)
if err == nil {
return nil
} else if err != syscall.EWOULDBLOCK {
return err
}
// If we timed out then return an error.
if timeout != 0 && time.Since(t) > timeout-flockRetryTimeout {
return ErrTimeout
}
// Wait for a bit and try again.
time.Sleep(50 * time.Millisecond)
time.Sleep(flockRetryTimeout)
}
}

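The rewritten loop above is what gives `Options.Timeout` its meaning: the lock is attempted non-blockingly, then retried every `flockRetryTimeout` (50ms) until the deadline passes. A minimal sketch of opening a database with a lock timeout, assuming the `coreos/bbolt` import path used throughout this diff and a placeholder file name:

```go
package main

import (
	"log"
	"time"

	bolt "github.com/coreos/bbolt"
)

func main() {
	// If another process holds an exclusive lock on my.db, Open retries
	// every 50ms and gives up with bolt.ErrTimeout after one second.
	db, err := bolt.Open("my.db", 0600, &bolt.Options{Timeout: time.Second})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```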
View File

@ -13,34 +13,33 @@ import (
// flock acquires an advisory lock on a file descriptor.
func flock(db *DB, mode os.FileMode, exclusive bool, timeout time.Duration) error {
var t time.Time
if timeout != 0 {
t = time.Now()
}
fd := db.file.Fd()
var lockType int16
if exclusive {
lockType = syscall.F_WRLCK
} else {
lockType = syscall.F_RDLCK
}
for {
// If we're beyond our timeout then return an error.
// This can only occur after we've attempted a flock once.
if t.IsZero() {
t = time.Now()
} else if timeout > 0 && time.Since(t) > timeout {
return ErrTimeout
}
var lock syscall.Flock_t
lock.Start = 0
lock.Len = 0
lock.Pid = 0
lock.Whence = 0
lock.Pid = 0
if exclusive {
lock.Type = syscall.F_WRLCK
} else {
lock.Type = syscall.F_RDLCK
}
err := syscall.FcntlFlock(db.file.Fd(), syscall.F_SETLK, &lock)
// Attempt to obtain an exclusive lock.
lock := syscall.Flock_t{Type: lockType}
err := syscall.FcntlFlock(fd, syscall.F_SETLK, &lock)
if err == nil {
return nil
} else if err != syscall.EAGAIN {
return err
}
// If we timed out then return an error.
if timeout != 0 && time.Since(t) > timeout-flockRetryTimeout {
return ErrTimeout
}
// Wait for a bit and try again.
time.Sleep(50 * time.Millisecond)
time.Sleep(flockRetryTimeout)
}
}

View File

@ -59,29 +59,30 @@ func flock(db *DB, mode os.FileMode, exclusive bool, timeout time.Duration) erro
db.lockfile = f
var t time.Time
if timeout != 0 {
t = time.Now()
}
fd := f.Fd()
var flag uint32 = flagLockFailImmediately
if exclusive {
flag |= flagLockExclusive
}
for {
// If we're beyond our timeout then return an error.
// This can only occur after we've attempted a flock once.
if t.IsZero() {
t = time.Now()
} else if timeout > 0 && time.Since(t) > timeout {
return ErrTimeout
}
var flag uint32 = flagLockFailImmediately
if exclusive {
flag |= flagLockExclusive
}
err := lockFileEx(syscall.Handle(db.lockfile.Fd()), flag, 0, 1, 0, &syscall.Overlapped{})
// Attempt to obtain an exclusive lock.
err := lockFileEx(syscall.Handle(fd), flag, 0, 1, 0, &syscall.Overlapped{})
if err == nil {
return nil
} else if err != errLockViolation {
return err
}
// If we timed out then return an error.
if timeout != 0 && time.Since(t) > timeout-flockRetryTimeout {
return ErrTimeout
}
// Wait for a bit and try again.
time.Sleep(50 * time.Millisecond)
time.Sleep(flockRetryTimeout)
}
}
@ -89,7 +90,7 @@ func flock(db *DB, mode os.FileMode, exclusive bool, timeout time.Duration) erro
func funlock(db *DB) error {
err := unlockFileEx(syscall.Handle(db.lockfile.Fd()), 0, 1, 0, &syscall.Overlapped{})
db.lockfile.Close()
os.Remove(db.path+lockExt)
os.Remove(db.path + lockExt)
return err
}

View File

@ -14,13 +14,6 @@ const (
MaxValueSize = (1 << 31) - 2
)
const (
maxUint = ^uint(0)
minUint = 0
maxInt = int(^uint(0) >> 1)
minInt = -maxInt - 1
)
const bucketHeaderSize = int(unsafe.Sizeof(bucket{}))
const (
@ -130,9 +123,17 @@ func (b *Bucket) Bucket(name []byte) *Bucket {
func (b *Bucket) openBucket(value []byte) *Bucket {
var child = newBucket(b.tx)
// If unaligned load/stores are broken on this arch and value is
// unaligned simply clone to an aligned byte array.
unaligned := brokenUnaligned && uintptr(unsafe.Pointer(&value[0]))&3 != 0
if unaligned {
value = cloneBytes(value)
}
// If this is a writable transaction then we need to copy the bucket entry.
// Read-only transactions can point directly at the mmap entry.
if b.tx.writable {
if b.tx.writable && !unaligned {
child.bucket = &bucket{}
*child.bucket = *(*bucket)(unsafe.Pointer(&value[0]))
} else {
@ -167,9 +168,8 @@ func (b *Bucket) CreateBucket(key []byte) (*Bucket, error) {
if bytes.Equal(key, k) {
if (flags & bucketLeafFlag) != 0 {
return nil, ErrBucketExists
} else {
return nil, ErrIncompatibleValue
}
return nil, ErrIncompatibleValue
}
// Create empty, inline bucket.
@ -316,7 +316,12 @@ func (b *Bucket) Delete(key []byte) error {
// Move cursor to correct position.
c := b.Cursor()
_, _, flags := c.seek(key)
k, _, flags := c.seek(key)
// Return nil if the key doesn't exist.
if !bytes.Equal(key, k) {
return nil
}
// Return an error if there is already existing bucket value.
if (flags & bucketLeafFlag) != 0 {
@ -329,6 +334,28 @@ func (b *Bucket) Delete(key []byte) error {
return nil
}
// Sequence returns the current integer for the bucket without incrementing it.
func (b *Bucket) Sequence() uint64 { return b.bucket.sequence }
// SetSequence updates the sequence number for the bucket.
func (b *Bucket) SetSequence(v uint64) error {
if b.tx.db == nil {
return ErrTxClosed
} else if !b.Writable() {
return ErrTxNotWritable
}
// Materialize the root node if it hasn't been already so that the
// bucket will be saved during commit.
if b.rootNode == nil {
_ = b.node(b.root, nil)
}
// Increment and return the sequence.
b.bucket.sequence = v
return nil
}
// NextSequence returns an autoincrementing integer for the bucket.
func (b *Bucket) NextSequence() (uint64, error) {
if b.tx.db == nil {

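`Sequence` and `SetSequence` round out the existing `NextSequence` API: the counter can now be read and seeded without incrementing it. A hedged usage sketch (database path and bucket name are placeholders):

```go
package main

import (
	"fmt"
	"log"

	bolt "github.com/coreos/bbolt"
)

func main() {
	db, err := bolt.Open("my.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("events"))
		if err != nil {
			return err
		}
		// Seed the counter, e.g. when migrating existing data.
		if err := b.SetSequence(1000); err != nil {
			return err
		}
		// NextSequence picks up from the seeded value: 1001, 1002, ...
		id, err := b.NextSequence()
		if err != nil {
			return err
		}
		fmt.Printf("allocated id %d; Sequence() now reads %d\n", id, b.Sequence())
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```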
View File

@ -7,8 +7,7 @@ import (
"log"
"os"
"runtime"
"runtime/debug"
"strings"
"sort"
"sync"
"time"
"unsafe"
@ -23,6 +22,8 @@ const version = 2
// Represents a marker value to indicate that a file is a Bolt DB.
const magic uint32 = 0xED0CDAED
const pgidNoFreelist pgid = 0xffffffffffffffff
// IgnoreNoSync specifies whether the NoSync field of a DB is ignored when
// syncing changes to a file. This is required as some operating systems,
// such as OpenBSD, do not have a unified buffer cache (UBC) and writes
@ -39,6 +40,9 @@ const (
// default page size for db is set to the OS page size.
var defaultPageSize = os.Getpagesize()
// The time elapsed between consecutive file locking attempts.
const flockRetryTimeout = 50 * time.Millisecond
// DB represents a collection of buckets persisted to a file on disk.
// All data access is performed through transactions which can be obtained through the DB.
// All the functions on DB will return a ErrDatabaseNotOpen if accessed before Open() is called.
@ -61,6 +65,11 @@ type DB struct {
// THIS IS UNSAFE. PLEASE USE WITH CAUTION.
NoSync bool
// When true, skips syncing freelist to disk. This improves the database
// write performance under normal operation, but requires a full database
// re-sync during recovery.
NoFreelistSync bool
// When true, skips the truncate call when growing the database.
// Setting this to true is only safe on non-ext3/ext4 systems.
// Skipping truncation avoids preallocation of hard drive space and
@ -107,9 +116,11 @@ type DB struct {
opened bool
rwtx *Tx
txs []*Tx
freelist *freelist
stats Stats
freelist *freelist
freelistLoad sync.Once
pagePool sync.Pool
batchMu sync.Mutex
@ -148,14 +159,17 @@ func (db *DB) String() string {
// If the file does not exist then it will be created automatically.
// Passing in nil options will cause Bolt to open the database with the default options.
func Open(path string, mode os.FileMode, options *Options) (*DB, error) {
var db = &DB{opened: true}
db := &DB{
opened: true,
}
// Set default options if no options are provided.
if options == nil {
options = DefaultOptions
}
db.NoSync = options.NoSync
db.NoGrowSync = options.NoGrowSync
db.MmapFlags = options.MmapFlags
db.NoFreelistSync = options.NoFreelistSync
// Set default values for later DB operations.
db.MaxBatchSize = DefaultMaxBatchSize
@ -184,6 +198,7 @@ func Open(path string, mode os.FileMode, options *Options) (*DB, error) {
// The database file is locked using the shared lock (more than one process may
// hold a lock at the same time) otherwise (options.ReadOnly is set).
if err := flock(db, mode, !db.readOnly, options.Timeout); err != nil {
db.lockfile = nil // make 'unused' happy. TODO: rework locks
_ = db.close()
return nil, err
}
@ -191,6 +206,11 @@ func Open(path string, mode os.FileMode, options *Options) (*DB, error) {
// Default values for test hooks
db.ops.writeAt = db.file.WriteAt
if db.pageSize = options.PageSize; db.pageSize == 0 {
// Set the default page size to the OS page size.
db.pageSize = defaultPageSize
}
// Initialize the database if it doesn't exist.
if info, err := db.file.Stat(); err != nil {
return nil, err
@ -202,20 +222,21 @@ func Open(path string, mode os.FileMode, options *Options) (*DB, error) {
} else {
// Read the first meta page to determine the page size.
var buf [0x1000]byte
if _, err := db.file.ReadAt(buf[:], 0); err == nil {
m := db.pageInBuffer(buf[:], 0).meta()
if err := m.validate(); err != nil {
// If we can't read the page size, we can assume it's the same
// as the OS -- since that's how the page size was chosen in the
// first place.
//
// If the first page is invalid and this OS uses a different
// page size than what the database was created with then we
// are out of luck and cannot access the database.
db.pageSize = os.Getpagesize()
} else {
// If we can't read the page size, but can read a page, assume
// it's the same as the OS or one given -- since that's how the
// page size was chosen in the first place.
//
// If the first page is invalid and this OS uses a different
// page size than what the database was created with then we
// are out of luck and cannot access the database.
//
// TODO: scan for next page
if bw, err := db.file.ReadAt(buf[:], 0); err == nil && bw == len(buf) {
if m := db.pageInBuffer(buf[:], 0).meta(); m.validate() == nil {
db.pageSize = int(m.pageSize)
}
} else {
return nil, ErrInvalid
}
}
@ -232,14 +253,50 @@ func Open(path string, mode os.FileMode, options *Options) (*DB, error) {
return nil, err
}
// Read in the freelist.
db.freelist = newFreelist()
db.freelist.read(db.page(db.meta().freelist))
if db.readOnly {
return db, nil
}
db.loadFreelist()
// Flush freelist when transitioning from no sync to sync so
// NoFreelistSync unaware boltdb can open the db later.
if !db.NoFreelistSync && !db.hasSyncedFreelist() {
tx, err := db.Begin(true)
if tx != nil {
err = tx.Commit()
}
if err != nil {
_ = db.close()
return nil, err
}
}
// Mark the database as opened and return.
return db, nil
}
// loadFreelist reads the freelist if it is synced, or reconstructs it
// by scanning the DB if it is not synced. It assumes there are no
// concurrent accesses being made to the freelist.
func (db *DB) loadFreelist() {
db.freelistLoad.Do(func() {
db.freelist = newFreelist()
if !db.hasSyncedFreelist() {
// Reconstruct free list by scanning the DB.
db.freelist.readIDs(db.freepages())
} else {
// Read free list from freelist page.
db.freelist.read(db.page(db.meta().freelist))
}
db.stats.FreePageN = len(db.freelist.ids)
})
}
func (db *DB) hasSyncedFreelist() bool {
return db.meta().freelist != pgidNoFreelist
}
// mmap opens the underlying memory-mapped file and initializes the meta references.
// minsz is the minimum size that the new mmap can be.
func (db *DB) mmap(minsz int) error {
@ -341,9 +398,6 @@ func (db *DB) mmapSize(size int) (int, error) {
// init creates a new database file and initializes its meta pages.
func (db *DB) init() error {
// Set the page size to the OS page size.
db.pageSize = os.Getpagesize()
// Create two meta pages on a buffer.
buf := make([]byte, db.pageSize*4)
for i := 0; i < 2; i++ {
@ -526,21 +580,36 @@ func (db *DB) beginRWTx() (*Tx, error) {
t := &Tx{writable: true}
t.init(db)
db.rwtx = t
db.freePages()
return t, nil
}
// Free any pages associated with closed read-only transactions.
var minid txid = 0xFFFFFFFFFFFFFFFF
for _, t := range db.txs {
if t.meta.txid < minid {
minid = t.meta.txid
}
// freePages releases any pages associated with closed read-only transactions.
func (db *DB) freePages() {
// Free all pending pages prior to earliest open transaction.
sort.Sort(txsById(db.txs))
minid := txid(0xFFFFFFFFFFFFFFFF)
if len(db.txs) > 0 {
minid = db.txs[0].meta.txid
}
if minid > 0 {
db.freelist.release(minid - 1)
}
return t, nil
// Release unused txid extents.
for _, t := range db.txs {
db.freelist.releaseRange(minid, t.meta.txid-1)
minid = t.meta.txid + 1
}
db.freelist.releaseRange(minid, txid(0xFFFFFFFFFFFFFFFF))
// Any page both allocated and freed in an extent is safe to release.
}
type txsById []*Tx
func (t txsById) Len() int { return len(t) }
func (t txsById) Swap(i, j int) { t[i], t[j] = t[j], t[i] }
func (t txsById) Less(i, j int) bool { return t[i].meta.txid < t[j].meta.txid }
// removeTx removes a transaction from the database.
func (db *DB) removeTx(tx *Tx) {
// Release the read lock on the mmap.
@ -552,7 +621,10 @@ func (db *DB) removeTx(tx *Tx) {
// Remove the transaction.
for i, t := range db.txs {
if t == tx {
db.txs = append(db.txs[:i], db.txs[i+1:]...)
last := len(db.txs) - 1
db.txs[i] = db.txs[last]
db.txs[last] = nil
db.txs = db.txs[:last]
break
}
}
@ -630,11 +702,7 @@ func (db *DB) View(fn func(*Tx) error) error {
return err
}
if err := t.Rollback(); err != nil {
return err
}
return nil
return t.Rollback()
}
// Batch calls fn as part of a batch. It behaves similar to Update,
@ -823,7 +891,7 @@ func (db *DB) meta() *meta {
}
// allocate returns a contiguous block of memory starting at a given page.
func (db *DB) allocate(count int) (*page, error) {
func (db *DB) allocate(txid txid, count int) (*page, error) {
// Allocate a temporary buffer for the page.
var buf []byte
if count == 1 {
@ -835,7 +903,7 @@ func (db *DB) allocate(count int) (*page, error) {
p.overflow = uint32(count - 1)
// Use pages from the freelist if they are available.
if p.id = db.freelist.allocate(count); p.id != 0 {
if p.id = db.freelist.allocate(txid, count); p.id != 0 {
return p, nil
}
@ -890,6 +958,38 @@ func (db *DB) IsReadOnly() bool {
return db.readOnly
}
func (db *DB) freepages() []pgid {
tx, err := db.beginTx()
defer func() {
err = tx.Rollback()
if err != nil {
panic("freepages: failed to rollback tx")
}
}()
if err != nil {
panic("freepages: failed to open read only tx")
}
reachable := make(map[pgid]*page)
nofreed := make(map[pgid]bool)
ech := make(chan error)
go func() {
for e := range ech {
panic(fmt.Sprintf("freepages: failed to get all reachable pages (%v)", e))
}
}()
tx.checkBucket(&tx.root, reachable, nofreed, ech)
close(ech)
var fids []pgid
for i := pgid(2); i < db.meta().pgid; i++ {
if _, ok := reachable[i]; !ok {
fids = append(fids, i)
}
}
return fids
}
// Options represents the options that can be set when opening a database.
type Options struct {
// Timeout is the amount of time to wait to obtain a file lock.
@ -900,6 +1000,10 @@ type Options struct {
// Sets the DB.NoGrowSync flag before memory mapping the file.
NoGrowSync bool
// Do not sync freelist to disk. This improves the database write performance
// under normal operation, but requires a full database re-sync during recovery.
NoFreelistSync bool
// Open database in read-only mode. Uses flock(..., LOCK_SH |LOCK_NB) to
// grab a shared lock (UNIX).
ReadOnly bool
@ -916,6 +1020,14 @@ type Options struct {
// If initialMmapSize is smaller than the previous database size,
// it takes no effect.
InitialMmapSize int
// PageSize overrides the default OS page size.
PageSize int
// NoSync sets the initial value of DB.NoSync. Normally this can just be
// set directly on the DB itself when returned from Open(), but this option
// is useful in APIs which expose Options but not the underlying DB.
NoSync bool
}
// DefaultOptions represent the options used if nil options are passed into Open().
@ -952,15 +1064,11 @@ func (s *Stats) Sub(other *Stats) Stats {
diff.PendingPageN = s.PendingPageN
diff.FreeAlloc = s.FreeAlloc
diff.FreelistInuse = s.FreelistInuse
diff.TxN = other.TxN - s.TxN
diff.TxN = s.TxN - other.TxN
diff.TxStats = s.TxStats.Sub(&other.TxStats)
return diff
}
func (s *Stats) add(other *Stats) {
s.TxStats.add(&other.TxStats)
}
type Info struct {
Data uintptr
PageSize int
@ -999,7 +1107,8 @@ func (m *meta) copy(dest *meta) {
func (m *meta) write(p *page) {
if m.root.root >= m.pgid {
panic(fmt.Sprintf("root bucket pgid (%d) above high water mark (%d)", m.root.root, m.pgid))
} else if m.freelist >= m.pgid {
} else if m.freelist >= m.pgid && m.freelist != pgidNoFreelist {
// TODO: reject pgidNoFreeList if !NoFreelistSync
panic(fmt.Sprintf("freelist pgid (%d) above high water mark (%d)", m.freelist, m.pgid))
}
@ -1026,11 +1135,3 @@ func _assert(condition bool, msg string, v ...interface{}) {
panic(fmt.Sprintf("assertion failed: "+msg, v...))
}
}
func warn(v ...interface{}) { fmt.Fprintln(os.Stderr, v...) }
func warnf(msg string, v ...interface{}) { fmt.Fprintf(os.Stderr, msg+"\n", v...) }
func printstack() {
stack := strings.Join(strings.Split(string(debug.Stack()), "\n")[2:], "\n")
fmt.Fprintln(os.Stderr, stack)
}

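The new `Options` fields tie the pieces above together: `NoFreelistSync` skips persisting the freelist on commit, and `loadFreelist` reconstructs it by scanning the database on the next open. A sketch of opting in (file name is a placeholder; the performance/recovery tradeoff is the one documented in the field comments above):

```go
package main

import (
	"log"

	bolt "github.com/coreos/bbolt"
)

func main() {
	db, err := bolt.Open("my.db", 0600, &bolt.Options{
		// Faster commits; the freelist is rebuilt by a full scan on open.
		NoFreelistSync: true,
		// PageSize overrides the OS default, another Options field added here.
		PageSize: 4096,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```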
View File

@ -6,25 +6,40 @@ import (
"unsafe"
)
// txPending holds a list of pgids and corresponding allocation txns
// that are pending to be freed.
type txPending struct {
ids []pgid
alloctx []txid // txids allocating the ids
lastReleaseBegin txid // beginning txid of last matching releaseRange
}
// freelist represents a list of all pages that are available for allocation.
// It also tracks pages that have been freed but are still in use by open transactions.
type freelist struct {
ids []pgid // all free and available free page ids.
pending map[txid][]pgid // mapping of soon-to-be free page ids by tx.
cache map[pgid]bool // fast lookup of all free and pending page ids.
ids []pgid // all free and available free page ids.
allocs map[pgid]txid // mapping of txid that allocated a pgid.
pending map[txid]*txPending // mapping of soon-to-be free page ids by tx.
cache map[pgid]bool // fast lookup of all free and pending page ids.
}
// newFreelist returns an empty, initialized freelist.
func newFreelist() *freelist {
return &freelist{
pending: make(map[txid][]pgid),
allocs: make(map[pgid]txid),
pending: make(map[txid]*txPending),
cache: make(map[pgid]bool),
}
}
// size returns the size of the page after serialization.
func (f *freelist) size() int {
return pageHeaderSize + (int(unsafe.Sizeof(pgid(0))) * f.count())
n := f.count()
if n >= 0xFFFF {
// The first element will be used to store the count. See freelist.write.
n++
}
return pageHeaderSize + (int(unsafe.Sizeof(pgid(0))) * n)
}
// count returns count of pages on the freelist
@ -40,27 +55,26 @@ func (f *freelist) free_count() int {
// pending_count returns count of pending pages
func (f *freelist) pending_count() int {
var count int
for _, list := range f.pending {
count += len(list)
for _, txp := range f.pending {
count += len(txp.ids)
}
return count
}
// all returns a list of all free ids and all pending ids in one sorted list.
func (f *freelist) all() []pgid {
m := make(pgids, 0)
for _, list := range f.pending {
m = append(m, list...)
// copyall copies into dst a list of all free ids and all pending ids in one sorted list.
// f.count returns the minimum length required for dst.
func (f *freelist) copyall(dst []pgid) {
m := make(pgids, 0, f.pending_count())
for _, txp := range f.pending {
m = append(m, txp.ids...)
}
sort.Sort(m)
return pgids(f.ids).merge(m)
mergepgids(dst, f.ids, m)
}
// allocate returns the starting page id of a contiguous list of pages of a given size.
// If a contiguous block cannot be found then 0 is returned.
func (f *freelist) allocate(n int) pgid {
func (f *freelist) allocate(txid txid, n int) pgid {
if len(f.ids) == 0 {
return 0
}
@ -93,7 +107,7 @@ func (f *freelist) allocate(n int) pgid {
for i := pgid(0); i < pgid(n); i++ {
delete(f.cache, initial+i)
}
f.allocs[initial] = txid
return initial
}
@ -110,28 +124,73 @@ func (f *freelist) free(txid txid, p *page) {
}
// Free page and all its overflow pages.
var ids = f.pending[txid]
txp := f.pending[txid]
if txp == nil {
txp = &txPending{}
f.pending[txid] = txp
}
allocTxid, ok := f.allocs[p.id]
if ok {
delete(f.allocs, p.id)
} else if (p.flags & freelistPageFlag) != 0 {
// Freelist is always allocated by prior tx.
allocTxid = txid - 1
}
for id := p.id; id <= p.id+pgid(p.overflow); id++ {
// Verify that page is not already free.
if f.cache[id] {
panic(fmt.Sprintf("page %d already freed", id))
}
// Add to the freelist and cache.
ids = append(ids, id)
txp.ids = append(txp.ids, id)
txp.alloctx = append(txp.alloctx, allocTxid)
f.cache[id] = true
}
f.pending[txid] = ids
}
// release moves all page ids for a transaction id (or older) to the freelist.
func (f *freelist) release(txid txid) {
m := make(pgids, 0)
for tid, ids := range f.pending {
for tid, txp := range f.pending {
if tid <= txid {
// Move transaction's pending pages to the available freelist.
// Don't remove from the cache since the page is still free.
m = append(m, ids...)
m = append(m, txp.ids...)
delete(f.pending, tid)
}
}
sort.Sort(m)
f.ids = pgids(f.ids).merge(m)
}
// releaseRange moves pending pages allocated within an extent [begin,end] to the free list.
func (f *freelist) releaseRange(begin, end txid) {
if begin > end {
return
}
var m pgids
for tid, txp := range f.pending {
if tid < begin || tid > end {
continue
}
// Don't recompute freed pages if ranges haven't updated.
if txp.lastReleaseBegin == begin {
continue
}
for i := 0; i < len(txp.ids); i++ {
if atx := txp.alloctx[i]; atx < begin || atx > end {
continue
}
m = append(m, txp.ids[i])
txp.ids[i] = txp.ids[len(txp.ids)-1]
txp.ids = txp.ids[:len(txp.ids)-1]
txp.alloctx[i] = txp.alloctx[len(txp.alloctx)-1]
txp.alloctx = txp.alloctx[:len(txp.alloctx)-1]
i--
}
txp.lastReleaseBegin = begin
if len(txp.ids) == 0 {
delete(f.pending, tid)
}
}
@ -142,12 +201,29 @@ func (f *freelist) release(txid txid) {
// rollback removes the pages from a given pending tx.
func (f *freelist) rollback(txid txid) {
// Remove page ids from cache.
for _, id := range f.pending[txid] {
delete(f.cache, id)
txp := f.pending[txid]
if txp == nil {
return
}
// Remove pages from pending list.
var m pgids
for i, pgid := range txp.ids {
delete(f.cache, pgid)
tx := txp.alloctx[i]
if tx == 0 {
continue
}
if tx != txid {
// Pending free aborted; restore page back to alloc list.
f.allocs[pgid] = tx
} else {
// Freed page was allocated by this txn; OK to throw away.
m = append(m, pgid)
}
}
// Remove pages from pending list and mark as free if allocated by txid.
delete(f.pending, txid)
sort.Sort(m)
f.ids = pgids(f.ids).merge(m)
}
// freed returns whether a given page is in the free list.
@ -157,6 +233,9 @@ func (f *freelist) freed(pgid pgid) bool {
// read initializes the freelist from a freelist page.
func (f *freelist) read(p *page) {
if (p.flags & freelistPageFlag) == 0 {
panic(fmt.Sprintf("invalid freelist page: %d, page type is %s", p.id, p.typ()))
}
// If the page.count is at the max uint16 value (64k) then it's considered
// an overflow and the size of the freelist is stored as the first element.
idx, count := 0, int(p.count)
@ -169,7 +248,7 @@ func (f *freelist) read(p *page) {
if count == 0 {
f.ids = nil
} else {
ids := ((*[maxAllocSize]pgid)(unsafe.Pointer(&p.ptr)))[idx:count]
ids := ((*[maxAllocSize]pgid)(unsafe.Pointer(&p.ptr)))[idx : idx+count]
f.ids = make([]pgid, len(ids))
copy(f.ids, ids)
@ -181,27 +260,33 @@ func (f *freelist) read(p *page) {
f.reindex()
}
// read initializes the freelist from a given list of ids.
func (f *freelist) readIDs(ids []pgid) {
f.ids = ids
f.reindex()
}
// write writes the page ids onto a freelist page. All free and pending ids are
// saved to disk since in the event of a program crash, all pending ids will
// become free.
func (f *freelist) write(p *page) error {
// Combine the old free pgids and pgids waiting on an open transaction.
ids := f.all()
// Update the header flag.
p.flags |= freelistPageFlag
// The page.count can only hold up to 64k elements so if we overflow that
// number then we handle it by putting the size in the first element.
if len(ids) == 0 {
p.count = uint16(len(ids))
} else if len(ids) < 0xFFFF {
p.count = uint16(len(ids))
copy(((*[maxAllocSize]pgid)(unsafe.Pointer(&p.ptr)))[:], ids)
lenids := f.count()
if lenids == 0 {
p.count = uint16(lenids)
} else if lenids < 0xFFFF {
p.count = uint16(lenids)
f.copyall(((*[maxAllocSize]pgid)(unsafe.Pointer(&p.ptr)))[:])
} else {
p.count = 0xFFFF
((*[maxAllocSize]pgid)(unsafe.Pointer(&p.ptr)))[0] = pgid(len(ids))
copy(((*[maxAllocSize]pgid)(unsafe.Pointer(&p.ptr)))[1:], ids)
((*[maxAllocSize]pgid)(unsafe.Pointer(&p.ptr)))[0] = pgid(lenids)
f.copyall(((*[maxAllocSize]pgid)(unsafe.Pointer(&p.ptr)))[1:])
}
return nil
@ -213,8 +298,8 @@ func (f *freelist) reload(p *page) {
// Build a cache of only pending pages.
pcache := make(map[pgid]bool)
for _, pendingIDs := range f.pending {
for _, pendingID := range pendingIDs {
for _, txp := range f.pending {
for _, pendingID := range txp.ids {
pcache[pendingID] = true
}
}
@ -236,12 +321,12 @@ func (f *freelist) reload(p *page) {
// reindex rebuilds the free cache based on available and pending free lists.
func (f *freelist) reindex() {
f.cache = make(map[pgid]bool)
f.cache = make(map[pgid]bool, len(f.ids))
for _, id := range f.ids {
f.cache[id] = true
}
for _, pendingIDs := range f.pending {
for _, pendingID := range pendingIDs {
for _, txp := range f.pending {
for _, pendingID := range txp.ids {
f.cache[pendingID] = true
}
}

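`releaseRange` deletes matching ids from `txp.ids` and `txp.alloctx` with the swap-with-last idiom, which removes in place without preserving order. The same pattern in isolation, on a plain slice (the element type and predicate are illustrative):

```go
package main

import "fmt"

// removeIf deletes, in place and without preserving order, every element
// for which keep returns false -- the swap-with-last idiom used by
// freelist.releaseRange above.
func removeIf(ids []uint64, keep func(uint64) bool) []uint64 {
	for i := 0; i < len(ids); i++ {
		if keep(ids[i]) {
			continue
		}
		ids[i] = ids[len(ids)-1]
		ids = ids[:len(ids)-1]
		i-- // re-examine the element swapped into position i
	}
	return ids
}

func main() {
	ids := []uint64{3, 8, 5, 12, 7}
	fmt.Println(removeIf(ids, func(id uint64) bool { return id < 10 })) // [3 8 5 7]
}
```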
View File

@ -365,7 +365,7 @@ func (n *node) spill() error {
}
// Allocate contiguous space for the node.
p, err := tx.allocate((node.size() / tx.db.pageSize) + 1)
p, err := tx.allocate((node.size() + tx.db.pageSize - 1) / tx.db.pageSize)
if err != nil {
return err
}

View File

@ -145,12 +145,33 @@ func (a pgids) merge(b pgids) pgids {
// Return the opposite slice if one is nil.
if len(a) == 0 {
return b
} else if len(b) == 0 {
}
if len(b) == 0 {
return a
}
merged := make(pgids, len(a)+len(b))
mergepgids(merged, a, b)
return merged
}
// Create a list to hold all elements from both lists.
merged := make(pgids, 0, len(a)+len(b))
// mergepgids copies the sorted union of a and b into dst.
// If dst is too small, it panics.
func mergepgids(dst, a, b pgids) {
if len(dst) < len(a)+len(b) {
panic(fmt.Errorf("mergepgids bad len %d < %d + %d", len(dst), len(a), len(b)))
}
// Copy in the opposite slice if one is nil.
if len(a) == 0 {
copy(dst, b)
return
}
if len(b) == 0 {
copy(dst, a)
return
}
// Merged will hold all elements from both lists.
merged := dst[:0]
// Assign lead to the slice with a lower starting value, follow to the higher value.
lead, follow := a, b
@ -172,7 +193,5 @@ func (a pgids) merge(b pgids) pgids {
}
// Append what's left in follow.
merged = append(merged, follow...)
return merged
_ = append(merged, follow...)
}

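`mergepgids` is a standard two-pointer merge of sorted slices into a caller-allocated destination, which lets `copyall` write straight into the freelist page buffer. A self-contained sketch of the same algorithm on ints:

```go
package main

import "fmt"

// mergeSorted copies the sorted union of a and b into dst, which must have
// room for len(a)+len(b) elements -- the same contract as mergepgids above.
func mergeSorted(dst, a, b []int) {
	out := dst[:0]
	for len(a) > 0 && len(b) > 0 {
		if a[0] <= b[0] {
			out = append(out, a[0])
			a = a[1:]
		} else {
			out = append(out, b[0])
			b = b[1:]
		}
	}
	// At most one of a, b is non-empty here; both appends write into
	// dst's backing array because the total never exceeds cap(dst).
	out = append(out, a...)
	_ = append(out, b...)
}

func main() {
	a := []int{1, 4, 9}
	b := []int{2, 4, 7, 11}
	dst := make([]int, len(a)+len(b))
	mergeSorted(dst, a, b)
	fmt.Println(dst) // [1 2 4 4 7 9 11]
}
```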
View File

@ -126,10 +126,7 @@ func (tx *Tx) DeleteBucket(name []byte) error {
// the error is returned to the caller.
func (tx *Tx) ForEach(fn func(name []byte, b *Bucket) error) error {
return tx.root.ForEach(func(k, v []byte) error {
if err := fn(k, tx.root.Bucket(k)); err != nil {
return err
}
return nil
return fn(k, tx.root.Bucket(k))
})
}
@ -169,28 +166,18 @@ func (tx *Tx) Commit() error {
// Free the old root bucket.
tx.meta.root.root = tx.root.root
opgid := tx.meta.pgid
// Free the freelist and allocate new pages for it. This will overestimate
// the size of the freelist but not underestimate the size (which would be bad).
tx.db.freelist.free(tx.meta.txid, tx.db.page(tx.meta.freelist))
p, err := tx.allocate((tx.db.freelist.size() / tx.db.pageSize) + 1)
if err != nil {
tx.rollback()
return err
// Free the old freelist because commit writes out a fresh freelist.
if tx.meta.freelist != pgidNoFreelist {
tx.db.freelist.free(tx.meta.txid, tx.db.page(tx.meta.freelist))
}
if err := tx.db.freelist.write(p); err != nil {
tx.rollback()
return err
}
tx.meta.freelist = p.id
// If the high water mark has moved up then attempt to grow the database.
if tx.meta.pgid > opgid {
if err := tx.db.grow(int(tx.meta.pgid+1) * tx.db.pageSize); err != nil {
tx.rollback()
if !tx.db.NoFreelistSync {
err := tx.commitFreelist()
if err != nil {
return err
}
} else {
tx.meta.freelist = pgidNoFreelist
}
// Write dirty pages to disk.
@ -235,6 +222,31 @@ func (tx *Tx) Commit() error {
return nil
}
func (tx *Tx) commitFreelist() error {
// Allocate new pages for the new free list. This will overestimate
// the size of the freelist but not underestimate the size (which would be bad).
opgid := tx.meta.pgid
p, err := tx.allocate((tx.db.freelist.size() / tx.db.pageSize) + 1)
if err != nil {
tx.rollback()
return err
}
if err := tx.db.freelist.write(p); err != nil {
tx.rollback()
return err
}
tx.meta.freelist = p.id
// If the high water mark has moved up then attempt to grow the database.
if tx.meta.pgid > opgid {
if err := tx.db.grow(int(tx.meta.pgid+1) * tx.db.pageSize); err != nil {
tx.rollback()
return err
}
}
return nil
}
// Rollback closes the transaction and ignores all previous updates. Read-only
// transactions must be rolled back and not committed.
func (tx *Tx) Rollback() error {
@ -305,7 +317,11 @@ func (tx *Tx) WriteTo(w io.Writer) (n int64, err error) {
if err != nil {
return 0, err
}
defer func() { _ = f.Close() }()
defer func() {
if cerr := f.Close(); err == nil {
err = cerr
}
}()
// Generate a meta page. We use the same page data for both meta pages.
buf := make([]byte, tx.db.pageSize)
@ -333,7 +349,7 @@ func (tx *Tx) WriteTo(w io.Writer) (n int64, err error) {
}
// Move past the meta pages in the file.
if _, err := f.Seek(int64(tx.db.pageSize*2), os.SEEK_SET); err != nil {
if _, err := f.Seek(int64(tx.db.pageSize*2), io.SeekStart); err != nil {
return n, fmt.Errorf("seek: %s", err)
}
@ -344,7 +360,7 @@ func (tx *Tx) WriteTo(w io.Writer) (n int64, err error) {
return n, err
}
return n, f.Close()
return n, nil
}
// CopyFile copies the entire database to file at the given path.
@ -379,9 +395,14 @@ func (tx *Tx) Check() <-chan error {
}
func (tx *Tx) check(ch chan error) {
// Force loading free list if opened in ReadOnly mode.
tx.db.loadFreelist()
// Check if any pages are double freed.
freed := make(map[pgid]bool)
for _, id := range tx.db.freelist.all() {
all := make([]pgid, tx.db.freelist.count())
tx.db.freelist.copyall(all)
for _, id := range all {
if freed[id] {
ch <- fmt.Errorf("page %d: already freed", id)
}
@ -392,8 +413,10 @@ func (tx *Tx) check(ch chan error) {
reachable := make(map[pgid]*page)
reachable[0] = tx.page(0) // meta0
reachable[1] = tx.page(1) // meta1
for i := uint32(0); i <= tx.page(tx.meta.freelist).overflow; i++ {
reachable[tx.meta.freelist+pgid(i)] = tx.page(tx.meta.freelist)
if tx.meta.freelist != pgidNoFreelist {
for i := uint32(0); i <= tx.page(tx.meta.freelist).overflow; i++ {
reachable[tx.meta.freelist+pgid(i)] = tx.page(tx.meta.freelist)
}
}
// Recursively check buckets.
@ -451,7 +474,7 @@ func (tx *Tx) checkBucket(b *Bucket, reachable map[pgid]*page, freed map[pgid]bo
// allocate returns a contiguous block of memory starting at a given page.
func (tx *Tx) allocate(count int) (*page, error) {
p, err := tx.db.allocate(count)
p, err := tx.db.allocate(tx.meta.txid, count)
if err != nil {
return nil, err
}

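The `WriteTo` change above stops discarding the error from the deferred `Close` by folding it into the named return value. The idiom in isolation (file name is a placeholder):

```go
package main

import (
	"fmt"
	"os"
)

// writeFile reports a Close failure unless an earlier write error already
// occurred, mirroring the deferred-close pattern adopted by Tx.WriteTo above.
func writeFile(path string, data []byte) (err error) {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer func() {
		if cerr := f.Close(); err == nil {
			err = cerr
		}
	}()
	_, err = f.Write(data)
	return err
}

func main() {
	if err := writeFile("out.txt", []byte("hello\n")); err != nil {
		fmt.Println("write failed:", err)
	}
}
```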
View File

@ -23,6 +23,7 @@ import (
"github.com/coreos/etcd/pkg/fileutil"
"github.com/coreos/etcd/pkg/testutil"
"github.com/coreos/etcd/version"
)
// TestReleaseUpgrade ensures that changes to the master branch do not affect
@ -53,7 +54,7 @@ func TestReleaseUpgrade(t *testing.T) {
// so there's a window at boot time where it doesn't have V3rpcCapability enabled
// poll /version until etcdcluster is >2.3.x before making v3 requests
for i := 0; i < 7; i++ {
if err = cURLGet(epc, cURLReq{endpoint: "/version", expected: `"etcdcluster":"3.0`}); err != nil {
if err = cURLGet(epc, cURLReq{endpoint: "/version", expected: `"etcdcluster":"` + version.Cluster(version.Version)}); err != nil {
t.Logf("#%d: v3 is not ready yet (%v)", i, err)
time.Sleep(time.Second)
continue

View File

@ -80,8 +80,38 @@ type Config struct {
// TickMs is the number of milliseconds between heartbeat ticks.
// TODO: decouple tickMs and heartbeat tick (current heartbeat tick = 1).
// make ticks a cluster wide configuration.
TickMs uint `json:"heartbeat-interval"`
ElectionMs uint `json:"election-timeout"`
TickMs uint `json:"heartbeat-interval"`
ElectionMs uint `json:"election-timeout"`
// InitialElectionTickAdvance, if true, makes the local member fast-forward
// election ticks to speed up the "initial" leader election trigger. This
// benefits the case of larger election ticks. For instance, a cross
// datacenter deployment may require a longer election timeout of 10 seconds.
// If true, the local node does not need to wait the full 10 seconds; instead,
// it forwards its election ticks to 8 seconds, leaving only 2 seconds
// before leader election.
//
// Major assumptions are that:
// - the cluster has no active leader, thus advancing ticks enables faster
// leader election, or
// - the cluster already has an established leader, and a rejoining follower
// is likely to receive heartbeats from the leader after tick advance
// and before election timeout.
//
// However, when the network from the leader to a rejoining follower is
// congested, and the follower does not receive leader heartbeats within
// the remaining election ticks, a disruptive election has to happen,
// affecting cluster availability.
//
// Disabling this would slow down the initial bootstrap process for cross
// datacenter deployments. Make your own tradeoff by configuring
// --initial-election-tick-advance at the cost of slow initial bootstrap.
//
// If single-node, it advances ticks regardless.
//
// See https://github.com/coreos/etcd/issues/9333 for more detail.
InitialElectionTickAdvance bool `json:"initial-election-tick-advance"`
QuotaBackendBytes int64 `json:"quota-backend-bytes"`
// clustering
@ -152,13 +182,14 @@ func NewConfig() *Config {
lcurl, _ := url.Parse(DefaultListenClientURLs)
acurl, _ := url.Parse(DefaultAdvertiseClientURLs)
cfg := &Config{
CorsInfo: &cors.CORSInfo{},
MaxSnapFiles: DefaultMaxSnapshots,
MaxWalFiles: DefaultMaxWALs,
Name: DefaultName,
SnapCount: etcdserver.DefaultSnapCount,
TickMs: 100,
ElectionMs: 1000,
CorsInfo: &cors.CORSInfo{},
MaxSnapFiles: DefaultMaxSnapshots,
MaxWalFiles: DefaultMaxWALs,
Name: DefaultName,
SnapCount: etcdserver.DefaultSnapCount,
TickMs: 100,
ElectionMs: 1000,
InitialElectionTickAdvance: true,
LPUrls: []url.URL{*lpurl},
LCUrls: []url.URL{*lcurl},
APUrls: []url.URL{*apurl},

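`NewConfig` defaults the new flag to true, so operators only ever need to opt out. A hedged sketch of doing so through the `embed` package (the data directory is a placeholder):

```go
package main

import (
	"log"

	"github.com/coreos/etcd/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "default.etcd" // placeholder data directory
	// Trade slower initial bootstrap for fewer disruptive elections when a
	// rejoining follower's network to the leader may be congested.
	cfg.InitialElectionTickAdvance = false

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()
	<-e.Server.ReadyNotify()
	log.Println("etcd is ready")
}
```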
View File

@ -97,27 +97,28 @@ func StartEtcd(inCfg *Config) (e *Etcd, err error) {
}
srvcfg := &etcdserver.ServerConfig{
Name: cfg.Name,
ClientURLs: cfg.ACUrls,
PeerURLs: cfg.APUrls,
DataDir: cfg.Dir,
DedicatedWALDir: cfg.WalDir,
SnapCount: cfg.SnapCount,
MaxSnapFiles: cfg.MaxSnapFiles,
MaxWALFiles: cfg.MaxWalFiles,
InitialPeerURLsMap: urlsmap,
InitialClusterToken: token,
DiscoveryURL: cfg.Durl,
DiscoveryProxy: cfg.Dproxy,
NewCluster: cfg.IsNewCluster(),
ForceNewCluster: cfg.ForceNewCluster,
PeerTLSInfo: cfg.PeerTLSInfo,
TickMs: cfg.TickMs,
ElectionTicks: cfg.ElectionTicks(),
AutoCompactionRetention: cfg.AutoCompactionRetention,
QuotaBackendBytes: cfg.QuotaBackendBytes,
StrictReconfigCheck: cfg.StrictReconfigCheck,
ClientCertAuthEnabled: cfg.ClientTLSInfo.ClientCertAuth,
Name: cfg.Name,
ClientURLs: cfg.ACUrls,
PeerURLs: cfg.APUrls,
DataDir: cfg.Dir,
DedicatedWALDir: cfg.WalDir,
SnapCount: cfg.SnapCount,
MaxSnapFiles: cfg.MaxSnapFiles,
MaxWALFiles: cfg.MaxWalFiles,
InitialPeerURLsMap: urlsmap,
InitialClusterToken: token,
DiscoveryURL: cfg.Durl,
DiscoveryProxy: cfg.Dproxy,
NewCluster: cfg.IsNewCluster(),
ForceNewCluster: cfg.ForceNewCluster,
PeerTLSInfo: cfg.PeerTLSInfo,
TickMs: cfg.TickMs,
ElectionTicks: cfg.ElectionTicks(),
InitialElectionTickAdvance: cfg.InitialElectionTickAdvance,
AutoCompactionRetention: cfg.AutoCompactionRetention,
QuotaBackendBytes: cfg.QuotaBackendBytes,
StrictReconfigCheck: cfg.StrictReconfigCheck,
ClientCertAuthEnabled: cfg.ClientTLSInfo.ClientCertAuth,
}
if e.Server, err = etcdserver.NewServer(srvcfg); err != nil {

View File

@ -949,7 +949,7 @@ RPC: RoleGrantPermission
#### Output
`Role <role name> updated`.
`Role <role name> updated`.
#### Examples

View File

@ -336,6 +336,6 @@ etcdctl is under the Apache 2.0 license. See the [LICENSE][license] file for det
[authentication]: ../Documentation/v2/authentication.md
[etcd]: https://github.com/coreos/etcd
[github-release]: https://github.com/coreos/etcd/releases/
[license]: https://github.com/coreos/etcdctl/blob/master/LICENSE
[license]: ../LICENSE
[semver]: http://semver.org/
[username-flag]: #--username--u

View File

@ -67,7 +67,7 @@ func leaseGrantCommandFunc(cmd *cobra.Command, args []string) {
if err != nil {
ExitWithError(ExitError, fmt.Errorf("failed to grant lease (%v)\n", err))
}
fmt.Printf("lease %016x granted with TTL(%ds)\n", resp.ID, resp.TTL)
display.Grant(*resp)
}
// NewLeaseRevokeCommand returns the cobra command for "lease revoke".
@ -90,12 +90,12 @@ func leaseRevokeCommandFunc(cmd *cobra.Command, args []string) {
id := leaseFromArgs(args[0])
ctx, cancel := commandCtx(cmd)
_, err := mustClientFromCmd(cmd).Revoke(ctx, id)
resp, err := mustClientFromCmd(cmd).Revoke(ctx, id)
cancel()
if err != nil {
ExitWithError(ExitError, fmt.Errorf("failed to revoke lease (%v)\n", err))
}
fmt.Printf("lease %016x revoked\n", id)
display.Revoke(id, *resp)
}
var timeToLiveKeys bool
@ -154,9 +154,12 @@ func leaseKeepAliveCommandFunc(cmd *cobra.Command, args []string) {
}
for resp := range respc {
fmt.Printf("lease %016x keepalived with TTL(%d)\n", resp.ID, resp.TTL)
display.KeepAlive(*resp)
}
if _, ok := (display).(*simplePrinter); ok {
fmt.Printf("lease %016x expired or revoked.\n", id)
}
fmt.Printf("lease %016x expired or revoked.\n", id)
}
func leaseFromArgs(arg string) v3.LeaseID {

View File

@ -32,6 +32,9 @@ type printer interface {
Txn(v3.TxnResponse)
Watch(v3.WatchResponse)
Grant(r v3.LeaseGrantResponse)
Revoke(id v3.LeaseID, r v3.LeaseRevokeResponse)
KeepAlive(r v3.LeaseKeepAliveResponse)
TimeToLive(r v3.LeaseTimeToLiveResponse, keys bool)
MemberAdd(v3.MemberAddResponse)
@ -81,13 +84,18 @@ type printerRPC struct {
p func(interface{})
}
func (p *printerRPC) Del(r v3.DeleteResponse) { p.p((*pb.DeleteRangeResponse)(&r)) }
func (p *printerRPC) Get(r v3.GetResponse) { p.p((*pb.RangeResponse)(&r)) }
func (p *printerRPC) Put(r v3.PutResponse) { p.p((*pb.PutResponse)(&r)) }
func (p *printerRPC) Txn(r v3.TxnResponse) { p.p((*pb.TxnResponse)(&r)) }
func (p *printerRPC) Watch(r v3.WatchResponse) { p.p(&r) }
func (p *printerRPC) Del(r v3.DeleteResponse) { p.p((*pb.DeleteRangeResponse)(&r)) }
func (p *printerRPC) Get(r v3.GetResponse) { p.p((*pb.RangeResponse)(&r)) }
func (p *printerRPC) Put(r v3.PutResponse) { p.p((*pb.PutResponse)(&r)) }
func (p *printerRPC) Txn(r v3.TxnResponse) { p.p((*pb.TxnResponse)(&r)) }
func (p *printerRPC) Watch(r v3.WatchResponse) { p.p(&r) }
func (p *printerRPC) Grant(r v3.LeaseGrantResponse) { p.p(r) }
func (p *printerRPC) Revoke(id v3.LeaseID, r v3.LeaseRevokeResponse) { p.p(r) }
func (p *printerRPC) KeepAlive(r v3.LeaseKeepAliveResponse) { p.p(r) }
func (p *printerRPC) TimeToLive(r v3.LeaseTimeToLiveResponse, keys bool) { p.p(&r) }
func (p *printerRPC) MemberAdd(r v3.MemberAddResponse) { p.p((*pb.MemberAddResponse)(&r)) }
func (p *printerRPC) MemberAdd(r v3.MemberAddResponse) { p.p((*pb.MemberAddResponse)(&r)) }
func (p *printerRPC) MemberRemove(id uint64, r v3.MemberRemoveResponse) {
p.p((*pb.MemberRemoveResponse)(&r))
}

View File

@ -92,6 +92,22 @@ func (p *fieldsPrinter) Watch(resp v3.WatchResponse) {
}
}
func (p *fieldsPrinter) Grant(r v3.LeaseGrantResponse) {
p.hdr(r.ResponseHeader)
fmt.Println(`"ID" :`, r.ID)
fmt.Println(`"TTL" :`, r.TTL)
}
func (p *fieldsPrinter) Revoke(id v3.LeaseID, r v3.LeaseRevokeResponse) {
p.hdr(r.Header)
}
func (p *fieldsPrinter) KeepAlive(r v3.LeaseKeepAliveResponse) {
p.hdr(r.ResponseHeader)
fmt.Println(`"ID" :`, r.ID)
fmt.Println(`"TTL" :`, r.TTL)
}
func (p *fieldsPrinter) TimeToLive(r v3.LeaseTimeToLiveResponse, keys bool) {
p.hdr(r.ResponseHeader)
fmt.Println(`"ID" :`, r.ID)

View File

@ -79,6 +79,18 @@ func (s *simplePrinter) Watch(resp v3.WatchResponse) {
}
}
func (s *simplePrinter) Grant(resp v3.LeaseGrantResponse) {
fmt.Printf("lease %016x granted with TTL(%ds)\n", resp.ID, resp.TTL)
}
func (p *simplePrinter) Revoke(id v3.LeaseID, r v3.LeaseRevokeResponse) {
fmt.Printf("lease %016x revoked\n", id)
}
func (p *simplePrinter) KeepAlive(resp v3.LeaseKeepAliveResponse) {
fmt.Printf("lease %016x keepalived with TTL(%d)\n", resp.ID, resp.TTL)
}
func (s *simplePrinter) TimeToLive(resp v3.LeaseTimeToLiveResponse, keys bool) {
txt := fmt.Sprintf("lease %016x granted with TTL(%ds), remaining(%ds)", resp.ID, resp.GrantedTTL, resp.TTL)
if keys {

View File

@ -27,7 +27,7 @@ import (
"reflect"
"strings"
"github.com/boltdb/bolt"
bolt "github.com/coreos/bbolt"
"github.com/coreos/etcd/etcdserver"
"github.com/coreos/etcd/etcdserver/etcdserverpb"
"github.com/coreos/etcd/etcdserver/membership"

View File

@ -134,6 +134,7 @@ func newConfig() *config {
fs.Uint64Var(&cfg.SnapCount, "snapshot-count", cfg.SnapCount, "Number of committed transactions to trigger a snapshot to disk.")
fs.UintVar(&cfg.TickMs, "heartbeat-interval", cfg.TickMs, "Time (in milliseconds) of a heartbeat interval.")
fs.UintVar(&cfg.ElectionMs, "election-timeout", cfg.ElectionMs, "Time (in milliseconds) for an election to timeout.")
fs.BoolVar(&cfg.InitialElectionTickAdvance, "initial-election-tick-advance", cfg.InitialElectionTickAdvance, "Whether to fast-forward initial election ticks on boot for faster election.")
fs.Int64Var(&cfg.QuotaBackendBytes, "quota-backend-bytes", cfg.QuotaBackendBytes, "Raise alarms when backend size exceeds the given quota. 0 means use the default quota.")
// clustering

View File

@ -48,6 +48,8 @@ member flags:
time (in milliseconds) of a heartbeat interval.
--election-timeout '1000'
time (in milliseconds) for an election to timeout. See tuning documentation for details.
--initial-election-tick-advance 'true'
whether to fast-forward initial election ticks on boot for faster election.
--listen-peer-urls 'http://localhost:2380'
list of URLs to listen on for peer traffic.
--listen-client-urls 'http://localhost:2379'

View File

@ -48,8 +48,38 @@ type ServerConfig struct {
ForceNewCluster bool
PeerTLSInfo transport.TLSInfo
TickMs uint
ElectionTicks int
TickMs uint
ElectionTicks int
// InitialElectionTickAdvance, if true, makes the local member fast-forward
// election ticks to speed up the "initial" leader election trigger. This
// benefits the case of larger election ticks. For instance, a cross
// datacenter deployment may require a longer election timeout of 10 seconds.
// If true, the local node does not need to wait the full 10 seconds; instead,
// it forwards its election ticks to 8 seconds, leaving only 2 seconds
// before leader election.
//
// Major assumptions are that:
// - the cluster has no active leader, thus advancing ticks enables faster
// leader election, or
// - the cluster already has an established leader, and a rejoining follower
// is likely to receive heartbeats from the leader after tick advance
// and before election timeout.
//
// However, when the network from the leader to a rejoining follower is
// congested, and the follower does not receive leader heartbeats within
// the remaining election ticks, a disruptive election has to happen,
// affecting cluster availability.
//
// Disabling this would slow down the initial bootstrap process for cross
// datacenter deployments. Make your own tradeoff by configuring
// --initial-election-tick-advance at the cost of slow initial bootstrap.
//
// If single-node, it advances ticks regardless.
//
// See https://github.com/coreos/etcd/issues/9333 for more detail.
InitialElectionTickAdvance bool
BootstrapTimeout time.Duration
AutoCompactionRetention int

View File

@ -28,6 +28,12 @@ var (
Name: "has_leader",
Help: "Whether or not a leader exists. 1 is existence, 0 is not.",
})
isLeader = prometheus.NewGauge(prometheus.GaugeOpts{
Namespace: "etcd",
Subsystem: "server",
Name: "is_leader",
Help: "Whether or not this member is a leader. 1 if is, 0 otherwise.",
})
leaderChanges = prometheus.NewCounter(prometheus.CounterOpts{
Namespace: "etcd",
Subsystem: "server",
@ -62,6 +68,7 @@ var (
func init() {
prometheus.MustRegister(hasLeader)
prometheus.MustRegister(isLeader)
prometheus.MustRegister(leaderChanges)
prometheus.MustRegister(proposalsCommitted)
prometheus.MustRegister(proposalsApplied)

View File

@ -95,6 +95,8 @@ type raftNode struct {
lead uint64
mu sync.Mutex
tickMu sync.Mutex
// last lead elected time
lt time.Time
@ -129,6 +131,13 @@ type raftNode struct {
done chan struct{}
}
// raft.Node does not have locks in Raft package
func (r *raftNode) tick() {
r.tickMu.Lock()
r.Tick()
r.tickMu.Unlock()
}
// start prepares and starts raftNode in a new goroutine. It is no longer safe
// to modify the fields after it has been started.
func (r *raftNode) start(rh *raftReadyHandler) {
@ -144,7 +153,7 @@ func (r *raftNode) start(rh *raftReadyHandler) {
for {
select {
case <-r.ticker:
r.Tick()
r.tick()
case rd := <-r.Ready():
if rd.SoftState != nil {
if lead := atomic.LoadUint64(&r.lead); rd.SoftState.Lead != raft.None && lead != rd.SoftState.Lead {
@ -162,6 +171,11 @@ func (r *raftNode) start(rh *raftReadyHandler) {
atomic.StoreUint64(&r.lead, rd.SoftState.Lead)
islead = rd.RaftState == raft.StateLeader
if islead {
isLeader.Set(1)
} else {
isLeader.Set(0)
}
rh.updateLeadership()
}
@ -321,13 +335,13 @@ func (r *raftNode) resumeSending() {
p.Resume()
}
// advanceTicksForElection advances ticks to the node for fast election.
// This reduces the time to wait for first leader election if bootstrapping the whole
// cluster, while leaving at least 1 heartbeat for possible existing leader
// to contact it.
func advanceTicksForElection(n raft.Node, electionTicks int) {
for i := 0; i < electionTicks-1; i++ {
n.Tick()
// advanceTicks advances ticks of Raft node.
// This can be used for fast-forwarding election
// ticks in multi data-center deployments, thus
// speeding up election process.
func (r *raftNode) advanceTicks(ticks int) {
for i := 0; i < ticks; i++ {
r.tick()
}
}
@ -368,8 +382,7 @@ func startNode(cfg *ServerConfig, cl *membership.RaftCluster, ids []types.ID) (i
raftStatusMu.Lock()
raftStatus = n.Status
raftStatusMu.Unlock()
advanceTicksForElection(n, c.ElectionTick)
return
return id, n, s, w
}
func restartNode(cfg *ServerConfig, snapshot *raftpb.Snapshot) (types.ID, *membership.RaftCluster, raft.Node, *raft.MemoryStorage, *wal.WAL) {
@ -402,7 +415,6 @@ func restartNode(cfg *ServerConfig, snapshot *raftpb.Snapshot) (types.ID, *membe
raftStatusMu.Lock()
raftStatus = n.Status
raftStatusMu.Unlock()
advanceTicksForElection(n, c.ElectionTick)
return id, cl, n, s, w
}

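The `tickMu` wrapper exists because `raft.Node` does no locking of its own, yet ticks are now driven both by the ticker goroutine and by `advanceTicks`. The same guard pattern in isolation (the counter stands in for the unsynchronized call):

```go
package main

import (
	"fmt"
	"sync"
)

// ticker serializes access to a non-thread-safe operation the way raftNode
// wraps raft.Node.Tick with tickMu above.
type ticker struct {
	mu    sync.Mutex
	ticks int
}

func (t *ticker) tick() {
	t.mu.Lock()
	t.ticks++ // stands in for the unsynchronized raft.Node.Tick call
	t.mu.Unlock()
}

func main() {
	t := &ticker{}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); t.tick() }()
	}
	wg.Wait()
	fmt.Println(t.ticks) // always 10, thanks to the mutex
}
```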
View File

@ -506,11 +506,55 @@ func NewServer(cfg *ServerConfig) (srv *EtcdServer, err error) {
return srv, nil
}
func (s *EtcdServer) adjustTicks() {
clusterN := len(s.cluster.Members())
// single-node fresh start, or single-node recovers from snapshot
if clusterN == 1 {
ticks := s.Cfg.ElectionTicks - 1
plog.Infof("%s as single-node; fast-forwarding %d ticks (election ticks %d)", s.ID(), ticks, s.Cfg.ElectionTicks)
s.r.advanceTicks(ticks)
return
}
if !s.Cfg.InitialElectionTickAdvance {
plog.Infof("skipping initial election tick advance (election tick %d)", s.Cfg.ElectionTicks)
return
}
// Retry for up to "rafthttp.ConnReadTimeout" (5 seconds) until a peer
// connection reports in; otherwise, if:
// 1. all connections failed, or
// 2. there are no active peers, or
// 3. a restarted single-node has no snapshot,
// then do nothing, because advancing ticks would have no effect.
waitTime := rafthttp.ConnReadTimeout
itv := 50 * time.Millisecond
for i := int64(0); i < int64(waitTime/itv); i++ {
select {
case <-time.After(itv):
case <-s.stopping:
return
}
peerN := s.r.transport.ActivePeers()
if peerN > 1 {
// multi-node cluster received peer connection reports;
// adjust ticks in case of slow leader message receipt
ticks := s.Cfg.ElectionTicks - 2
plog.Infof("%s initialized peer connection; fast-forwarding %d ticks (election ticks %d) with %d active peer(s)", s.ID(), ticks, s.Cfg.ElectionTicks, peerN)
s.r.advanceTicks(ticks)
return
}
}
}
// Start prepares and starts server in a new goroutine. It is no longer safe to
// modify a server's fields after it has been sent to Start.
// It also starts a goroutine to publish its server information.
func (s *EtcdServer) Start() {
s.start()
s.goAttach(func() { s.adjustTicks() })
s.goAttach(func() { s.publish(s.Cfg.ReqTimeout()) })
s.goAttach(s.purgeFile)
s.goAttach(func() { monitorFileDescriptor(s.stopping) })
@ -608,7 +652,7 @@ type raftReadyHandler struct {
}
func (s *EtcdServer) run() {
snap, err := s.r.raftStorage.Snapshot()
snapshot, err := s.r.raftStorage.Snapshot()
if err != nil {
plog.Panicf("get snapshot from raft storage error: %v", err)
}
@ -666,10 +710,10 @@ func (s *EtcdServer) run() {
// asynchronously accept apply packets, dispatch progress in-order
sched := schedule.NewFIFOScheduler()
ep := etcdProgress{
confState: snap.Metadata.ConfState,
snapi: snap.Metadata.Index,
appliedt: snap.Metadata.Term,
appliedi: snap.Metadata.Index,
confState: snapshot.Metadata.ConfState,
snapi: snapshot.Metadata.Index,
appliedt: snapshot.Metadata.Term,
appliedi: snapshot.Metadata.Index,
}
defer func() {
@ -1253,9 +1297,14 @@ func (s *EtcdServer) apply(es []raftpb.Entry, confState *raftpb.ConfState) (appl
case raftpb.EntryNormal:
s.applyEntryNormal(&e)
case raftpb.EntryConfChange:
// set the consistent index of current executing entry
if e.Index > s.consistIndex.ConsistentIndex() {
s.consistIndex.setConsistentIndex(e.Index)
}
var cc raftpb.ConfChange
pbutil.MustUnmarshal(&cc, e.Data)
removedSelf, err := s.applyConfChange(cc, confState)
s.setAppliedIndex(e.Index)
shouldStop = shouldStop || removedSelf
s.w.Trigger(cc.ID, err)
default:

View File

@ -83,6 +83,7 @@ func (s *nopTransporterWithActiveTime) RemovePeer(id types.ID) {}
func (s *nopTransporterWithActiveTime) RemoveAllPeers() {}
func (s *nopTransporterWithActiveTime) UpdatePeer(id types.ID, us []string) {}
func (s *nopTransporterWithActiveTime) ActiveSince(id types.ID) time.Time { return s.activeMap[id] }
func (s *nopTransporterWithActiveTime) ActivePeers() int { return 0 }
func (s *nopTransporterWithActiveTime) Stop() {}
func (s *nopTransporterWithActiveTime) Pause() {}
func (s *nopTransporterWithActiveTime) Resume() {}

10 glide.lock generated
View File

@ -1,5 +1,5 @@
hash: ca3c895fa60c9ca9f53408202fb7643705f9960212d342967ed0da8e93606cc4
updated: 2017-01-18T10:26:48.990115455-08:00
hash: 8a2307db167c2f9bc511956fbec0a4400bc38db103603fc6a87bb179ac853bf5
updated: 2017-11-21T11:36:36.822426751-08:00
imports:
- name: github.com/beorn7/perks
version: 4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9
@ -7,10 +7,10 @@ imports:
- quantile
- name: github.com/bgentry/speakeasy
version: 36e9cfdd690967f4f690c6edcc9ffacd006014a0
- name: github.com/boltdb/bolt
version: 583e8937c61f1af6513608ccc75c97b6abdf4ff9
- name: github.com/cockroachdb/cmux
version: 112f0506e7743d64a6eb8fedbcff13d9979bbf92
- name: github.com/coreos/bbolt
version: 32c383e75ce054674c53b5a07e55de85332aee14
- name: github.com/coreos/go-semver
version: 568e959cd89871e61434c1143528d9162da89ef2
subpackages:
@ -74,7 +74,7 @@ imports:
subpackages:
- prometheus
- name: github.com/prometheus/client_model
version: fa8ad6fec33561be4280a8f0514318c79d7f6cb6
version: 6f3806018612930941127f2a7c6c453ba2c527d2
subpackages:
- go
- name: github.com/prometheus/common

View File

@ -2,8 +2,8 @@ package: github.com/coreos/etcd
import:
- package: github.com/bgentry/speakeasy
version: 36e9cfdd690967f4f690c6edcc9ffacd006014a0
- package: github.com/boltdb/bolt
version: v1.3.0
- package: github.com/coreos/bbolt
version: v1.3.1-coreos.5
- package: github.com/cockroachdb/cmux
version: 112f0506e7743d64a6eb8fedbcff13d9979bbf92
- package: github.com/coreos/go-semver
@ -48,9 +48,21 @@ import:
- package: github.com/olekukonko/tablewriter
version: cca8bbc0798408af109aaaa239cbd2634846b340
- package: github.com/prometheus/client_golang
version: v0.8.0
version: c5b7fccd204277076155f10851dad72b76a49317
subpackages:
- prometheus
- package: github.com/prometheus/client_model
version: 6f3806018612930941127f2a7c6c453ba2c527d2
subpackages:
- go
- package: github.com/prometheus/common
version: 195bde7883f7c39ea62b0d92ab7359b5327065cb
subpackages:
- expfmt
- internal/bitbucket.org/ww/goautoneg
- model
- package: github.com/prometheus/procfs
version: fcdb11ccb4389efb1b210b7ffb623ab71c5fdd60
- package: github.com/spf13/cobra
version: 1c44ec8d3f1552cac48999f9306da23c4d8a288b
- package: github.com/spf13/pflag
@ -102,4 +114,22 @@ import:
subpackages:
- assert
- package: github.com/karlseguin/ccache
version: v2.0.2
version: a2d62155777b39595c825ed3824279e642a5db3c
- package: github.com/beorn7/perks
version: 4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9
- package: github.com/cpuguy83/go-md2man
version: a65d4d2de4d5f7c74868dfa9b202a3c8be315aaa
- package: github.com/matttproud/golang_protobuf_extensions
version: c12348ce28de40eed0136aa2b644d0ee0650e56c
subpackages:
- pbutil
- package: github.com/russross/blackfriday
version: 5f33e7b7878355cd2b7e6b8eefc48a5472c69f70
- package: github.com/mattn/go-runewidth
version: 737072b4e32b7a5018b4a7125da8d12de90e8045
- package: github.com/shurcooL/sanitized_anchor_name
version: 1dba4b3954bc059efc3991ec364f9f9a35f597d2
- package: golang.org/x/sys
version: 478fcf54317e52ab69f40bb4c7a1520288d7f7ea
subpackages:
- unix

4 hack/benchmark/bench.sh Normal file → Executable file
View File

@ -1,8 +1,8 @@
#!/bin/bash -e
leader=http://10.240.201.15:2379
leader=http://localhost:2379
# assume three servers
servers=( http://10.240.201.15:2379 http://10.240.212.209:2379 http://10.240.95.3:2379 )
servers=( http://localhost:2379 http://localhost:22379 http://localhost:32379 )
keyarray=( 64 256 )

View File

@ -1 +1 @@
Starts a cluster via the discovery service on your local machine. Useful for testing.
Starts a cluster via the discovery service locally. Useful for testing.

37 hack/patch/README.md Normal file
View File

@ -0,0 +1,37 @@
# ./hack/patch/cherrypick.sh
Handles cherry-picks of PR(s) from etcd master to a stable etcd release branch automatically.
## Setup
Set the `UPSTREAM_REMOTE` and `FORK_REMOTE` environment variables.
`UPSTREAM_REMOTE` should be set to the git remote name of `github.com/coreos/etcd`,
and `FORK_REMOTE` should be set to the git remote name of the forked etcd
repo (`github.com/${github-username}/etcd`). Use `git remote -v` to
look up the git remote names. If etcd has not been forked, create
one on github.com and register it locally with `git remote add ...`.
```
export UPSTREAM_REMOTE=origin
export FORK_REMOTE=${github-username}
export GITHUB_USER=${github-username}
```
Next, install hub from https://github.com/github/hub
## Usage
To cherry pick PR 12345 onto release-3.2 and propose it as a PR, run:
```sh
./hack/patch/cherrypick.sh ${UPSTREAM_REMOTE}/release-3.2 12345
```
To cherry pick 12345 and then 56789 and propose them together as a single PR, run:
```
./hack/patch/cherrypick.sh ${UPSTREAM_REMOTE}/release-3.2 12345 56789
```

229 hack/patch/cherrypick.sh Executable file
View File

@ -0,0 +1,229 @@
#!/usr/bin/env bash
# Based on github.com/kubernetes/kubernetes/blob/v1.8.2/hack/cherry_pick_pull.sh
# Checkout a PR from GitHub. (Yes, this is sitting in a Git tree. How
# meta.) Assumes you care about pulls from remote "upstream" and
# checks them out to a branch named:
# automated-cherry-pick-of-<pr>-<target branch>-<timestamp>
set -o errexit
set -o nounset
set -o pipefail
declare -r ETCD_ROOT="$(dirname "${BASH_SOURCE}")/../.."
cd "${ETCD_ROOT}"
declare -r STARTINGBRANCH=$(git symbolic-ref --short HEAD)
declare -r REBASEMAGIC="${ETCD_ROOT}/.git/rebase-apply"
DRY_RUN=${DRY_RUN:-""}
REGENERATE_DOCS=${REGENERATE_DOCS:-""}
UPSTREAM_REMOTE=${UPSTREAM_REMOTE:-upstream}
FORK_REMOTE=${FORK_REMOTE:-origin}
if [[ -z ${GITHUB_USER:-} ]]; then
echo "Please export GITHUB_USER=<your-user> (or GH organization, if that's where your fork lives)"
exit 1
fi
if ! which hub > /dev/null; then
echo "Can't find 'hub' tool in PATH, please install from https://github.com/github/hub"
exit 1
fi
if [[ "$#" -lt 2 ]]; then
echo "${0} <remote branch> <pr-number>...: cherry pick one or more <pr> onto <remote branch> and leave instructions for proposing pull request"
echo
echo " Checks out <remote branch> and handles the cherry-pick of <pr> (possibly multiple) for you."
echo " Examples:"
echo " $0 upstream/release-3.14 12345 # Cherry-picks PR 12345 onto upstream/release-3.14 and proposes that as a PR."
echo " $0 upstream/release-3.14 12345 56789 # Cherry-picks PR 12345, then 56789 and proposes the combination as a single PR."
echo
echo " Set the DRY_RUN environment var to skip git push and creating PR."
echo " This is useful for creating patches to a release branch without making a PR."
echo " When DRY_RUN is set the script will leave you in a branch containing the commits you cherry-picked."
echo
echo " Set the REGENERATE_DOCS environment var to regenerate documentation for the target branch after picking the specified commits."
echo " This is useful when picking commits containing changes to API documentation."
echo
echo " Set UPSTREAM_REMOTE (default: upstream) and FORK_REMOTE (default: origin)"
echo " To override the default remote names to what you have locally."
exit 2
fi
if git_status=$(git status --porcelain --untracked=no 2>/dev/null) && [[ -n "${git_status}" ]]; then
echo "!!! Dirty tree. Clean up and try again."
exit 1
fi
if [[ -e "${REBASEMAGIC}" ]]; then
echo "!!! 'git rebase' or 'git am' in progress. Clean up and try again."
exit 1
fi
declare -r BRANCH="$1"
shift 1
declare -r PULLS=( "$@" )
function join { local IFS="$1"; shift; echo "$*"; }
declare -r PULLDASH=$(join - "${PULLS[@]/#/#}") # Generates something like "#12345-#56789"
declare -r PULLSUBJ=$(join " " "${PULLS[@]/#/#}") # Generates something like "#12345 #56789"
echo "+++ Updating remotes..."
git remote update "${UPSTREAM_REMOTE}" "${FORK_REMOTE}"
if ! git log -n1 --format=%H "${BRANCH}" >/dev/null 2>&1; then
echo "!!! '${BRANCH}' not found. The second argument should be something like ${UPSTREAM_REMOTE}/release-0.21."
echo " (In particular, it needs to be a valid, existing remote branch that I can 'git checkout'.)"
exit 1
fi
declare -r NEWBRANCHREQ="automated-cherry-pick-of-${PULLDASH}" # "Required" portion for tools.
declare -r NEWBRANCH="$(echo "${NEWBRANCHREQ}-${BRANCH}" | sed 's/\//-/g')"
declare -r NEWBRANCHUNIQ="${NEWBRANCH}-$(date +%s)"
echo "+++ Creating local branch ${NEWBRANCHUNIQ}"
cleanbranch=""
prtext=""
gitamcleanup=false
function return_to_kansas {
if [[ "${gitamcleanup}" == "true" ]]; then
echo
echo "+++ Aborting in-progress git am."
git am --abort >/dev/null 2>&1 || true
fi
# return to the starting branch and delete the PR text file
if [[ -z "${DRY_RUN}" ]]; then
echo
echo "+++ Returning you to the ${STARTINGBRANCH} branch and cleaning up."
git checkout -f "${STARTINGBRANCH}" >/dev/null 2>&1 || true
if [[ -n "${cleanbranch}" ]]; then
git branch -D "${cleanbranch}" >/dev/null 2>&1 || true
fi
if [[ -n "${prtext}" ]]; then
rm "${prtext}"
fi
fi
}
trap return_to_kansas EXIT
SUBJECTS=()
function make-a-pr() {
local rel="$(basename "${BRANCH}")"
echo
echo "+++ Creating a pull request on GitHub at ${GITHUB_USER}:${NEWBRANCH}"
# This looks like an unnecessary use of a tmpfile, but it avoids
# https://github.com/github/hub/issues/976 Otherwise stdin is stolen
# when we shove the heredoc at hub directly, tickling the ioctl
# crash.
prtext="$(mktemp -t prtext.XXXX)" # cleaned in return_to_kansas
cat >"${prtext}" <<EOF
Automated cherry pick of ${PULLSUBJ}
Cherry pick of ${PULLSUBJ} on ${rel}.
$(printf '%s\n' "${SUBJECTS[@]}")
EOF
hub pull-request -F "${prtext}" -h "${GITHUB_USER}:${NEWBRANCH}" -b "coreos:${rel}"
}
git checkout -b "${NEWBRANCHUNIQ}" "${BRANCH}"
cleanbranch="${NEWBRANCHUNIQ}"
gitamcleanup=true
for pull in "${PULLS[@]}"; do
echo "+++ Downloading patch to /tmp/${pull}.patch (in case you need to do this again)"
curl -o "/tmp/${pull}.patch" -sSL "http://github.com/coreos/etcd/pull/${pull}.patch"
echo
echo "+++ About to attempt cherry pick of PR. To reattempt:"
echo " $ git am -3 /tmp/${pull}.patch"
echo
git am -3 "/tmp/${pull}.patch" || {
conflicts=false
while unmerged=$(git status --porcelain | grep ^U) && [[ -n ${unmerged} ]] \
|| [[ -e "${REBASEMAGIC}" ]]; do
conflicts=true # <-- We should have detected conflicts once
echo
echo "+++ Conflicts detected:"
echo
(git status --porcelain | grep ^U) || echo "!!! None. Did you git am --continue?"
echo
echo "+++ Please resolve the conflicts in another window (and remember to 'git add / git am --continue')"
read -p "+++ Proceed (anything but 'y' aborts the cherry-pick)? [y/n] " -r
echo
if ! [[ "${REPLY}" =~ ^[yY]$ ]]; then
echo "Aborting." >&2
exit 1
fi
done
if [[ "${conflicts}" != "true" ]]; then
echo "!!! git am failed, likely because of an in-progress 'git am' or 'git rebase'"
exit 1
fi
}
# set the subject
subject=$(grep -m 1 "^Subject" "/tmp/${pull}.patch" | sed -e 's/Subject: \[PATCH//g' | sed 's/.*] //')
SUBJECTS+=("#${pull}: ${subject}")
# remove the patch file from /tmp
rm -f "/tmp/${pull}.patch"
done
gitamcleanup=false
# Re-generate docs (if needed)
if [[ -n "${REGENERATE_DOCS}" ]]; then
echo
echo "Regenerating docs..."
if ! hack/generate-docs.sh; then
echo
echo "hack/generate-docs.sh FAILED to complete."
exit 1
fi
fi
if [[ -n "${DRY_RUN}" ]]; then
echo "!!! Skipping git push and PR creation because you set DRY_RUN."
echo "To return to the branch you were in when you invoked this script:"
echo
echo " git checkout ${STARTINGBRANCH}"
echo
echo "To delete this branch:"
echo
echo " git branch -D ${NEWBRANCHUNIQ}"
exit 0
fi
if git remote -v | grep ^${FORK_REMOTE} | grep etcd/etcd.git; then
echo "!!! You have ${FORK_REMOTE} configured as your etcd/etcd.git"
echo "This isn't normal. Leaving you with push instructions:"
echo
echo "+++ First manually push the branch this script created:"
echo
echo " git push REMOTE ${NEWBRANCHUNIQ}:${NEWBRANCH}"
echo
echo "where REMOTE is your personal fork (maybe ${UPSTREAM_REMOTE}? Consider swapping those.)."
echo "OR consider setting UPSTREAM_REMOTE and FORK_REMOTE to different values."
echo
make-a-pr
cleanbranch=""
exit 0
fi
echo
echo "+++ I'm about to do the following to push to GitHub (and I'm assuming ${FORK_REMOTE} is your personal fork):"
echo
echo " git push ${FORK_REMOTE} ${NEWBRANCHUNIQ}:${NEWBRANCH}"
echo
read -p "+++ Proceed (anything but 'y' aborts the cherry-pick)? [y/n] " -r
if ! [[ "${REPLY}" =~ ^[yY]$ ]]; then
echo "Aborting." >&2
exit 1
fi
git push "${FORK_REMOTE}" -f "${NEWBRANCHUNIQ}:${NEWBRANCH}"
make-a-pr

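As a worked example of the naming logic: picking PRs 12345 and 56789 onto `upstream/release-3.2` yields `PULLDASH=#12345-#56789`, and after the `/`-to-`-` substitution the pushed branch is

```
automated-cherry-pick-of-#12345-#56789-upstream-release-3.2
```

with a `-$(date +%s)` suffix on the local working branch to keep it unique.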
499
hack/scripts-dev/Makefile Normal file

@ -0,0 +1,499 @@
# run from repository root
# Example:
# make build -f ./hack/scripts-dev/Makefile
# make clean -f ./hack/scripts-dev/Makefile
# make clean-docker -f ./hack/scripts-dev/Makefile
# make restart-docker -f ./hack/scripts-dev/Makefile
# make delete-docker-images -f ./hack/scripts-dev/Makefile
.PHONY: build
build:
GO_BUILD_FLAGS="-v" ./build
./bin/etcd --version
ETCDCTL_API=3 ./bin/etcdctl version
clean:
rm -f ./codecov
rm -rf ./agent-*
rm -rf ./covdir
rm -f ./*.log
rm -f ./bin/Dockerfile-release
rm -rf ./bin/*.etcd
rm -rf ./gopath
rm -rf ./release
rm -f ./integration/127.0.0.1:* ./integration/localhost:*
rm -f ./clientv3/integration/127.0.0.1:* ./clientv3/integration/localhost:*
rm -f ./clientv3/ordering/127.0.0.1:* ./clientv3/ordering/localhost:*
clean-docker:
docker images
docker image prune --force
restart-docker:
service docker restart
delete-docker-images:
docker rm --force $$(docker ps -a -q) || true
docker rmi --force $$(docker images -q) || true
GO_VERSION ?= 1.10
ETCD_VERSION ?= $(shell git rev-parse --short HEAD || echo "GitNotFound")
TEST_SUFFIX = $(shell date +%s | base64 | head -c 15)
TEST_OPTS ?= PASSES='unit'
TMP_DIR_MOUNT_FLAG = --mount type=tmpfs,destination=/tmp
ifdef HOST_TMP_DIR
TMP_DIR_MOUNT_FLAG = --mount type=bind,source=$(HOST_TMP_DIR),destination=/tmp
endif
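# For example (hypothetical path): with HOST_TMP_DIR=/tmp/etcd-test, TMP_DIR_MOUNT_FLAG
# becomes "--mount type=bind,source=/tmp/etcd-test,destination=/tmp", so /tmp inside
# the container is backed by the host instead of an in-memory tmpfs.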
# Example:
# GO_VERSION=1.8.7 make build-docker-test -f ./hack/scripts-dev/Makefile
# make build-docker-test -f ./hack/scripts-dev/Makefile
# gcloud docker -- login -u _json_key -p "$(cat /etc/gcp-key-etcd-development.json)" https://gcr.io
# GO_VERSION=1.8.7 make push-docker-test -f ./hack/scripts-dev/Makefile
# make push-docker-test -f ./hack/scripts-dev/Makefile
# gsutil -m acl ch -u allUsers:R -r gs://artifacts.etcd-development.appspot.com
# GO_VERSION=1.8.7 make pull-docker-test -f ./hack/scripts-dev/Makefile
# make pull-docker-test -f ./hack/scripts-dev/Makefile
build-docker-test:
$(info GO_VERSION: $(GO_VERSION))
@sed -i.bak 's|REPLACE_ME_GO_VERSION|$(GO_VERSION)|g' ./Dockerfile-test
docker build \
--tag gcr.io/etcd-development/etcd-test:go$(GO_VERSION) \
--file ./Dockerfile-test .
@mv ./Dockerfile-test.bak ./Dockerfile-test
push-docker-test:
$(info GO_VERSION: $(GO_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd-test:go$(GO_VERSION)
pull-docker-test:
$(info GO_VERSION: $(GO_VERSION))
docker pull gcr.io/etcd-development/etcd-test:go$(GO_VERSION)
# Example:
# make build-docker-test -f ./hack/scripts-dev/Makefile
# make compile-with-docker-test -f ./hack/scripts-dev/Makefile
# make compile-setup-gopath-with-docker-test -f ./hack/scripts-dev/Makefile
compile-with-docker-test:
$(info GO_VERSION: $(GO_VERSION))
docker run \
--rm \
--mount type=bind,source=`pwd`,destination=/go/src/github.com/coreos/etcd \
gcr.io/etcd-development/etcd-test:go$(GO_VERSION) \
/bin/bash -c "GO_BUILD_FLAGS=-v ./build && ./bin/etcd --version"
compile-setup-gopath-with-docker-test:
$(info GO_VERSION: $(GO_VERSION))
docker run \
--rm \
--mount type=bind,source=`pwd`,destination=/etcd \
gcr.io/etcd-development/etcd-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && ETCD_SETUP_GOPATH=1 GO_BUILD_FLAGS=-v ./build && ./bin/etcd --version && rm -rf ./gopath"
# Example:
#
# Local machine:
# TEST_OPTS="PASSES='fmt'" make test -f ./hack/scripts-dev/Makefile
# TEST_OPTS="PASSES='fmt bom dep compile build unit'" make test -f ./hack/scripts-dev/Makefile
# TEST_OPTS="PASSES='build unit release integration_e2e functional'" make test -f ./hack/scripts-dev/Makefile
# TEST_OPTS="PASSES='build grpcproxy'" make test -f ./hack/scripts-dev/Makefile
#
# Example (test with docker):
# make pull-docker-test -f ./hack/scripts-dev/Makefile
# TEST_OPTS="PASSES='fmt'" make docker-test -f ./hack/scripts-dev/Makefile
# TEST_OPTS="VERBOSE=2 PASSES='unit'" make docker-test -f ./hack/scripts-dev/Makefile
#
# Travis CI (test with docker):
# TEST_OPTS="PASSES='fmt bom dep compile build unit'" make docker-test -f ./hack/scripts-dev/Makefile
#
# Semaphore CI (test with docker):
# TEST_OPTS="PASSES='build unit release integration_e2e functional'" make docker-test -f ./hack/scripts-dev/Makefile
# HOST_TMP_DIR=/tmp TEST_OPTS="PASSES='build unit release integration_e2e functional'" make docker-test -f ./hack/scripts-dev/Makefile
# TEST_OPTS="GOARCH=386 PASSES='build unit integration_e2e'" make docker-test -f ./hack/scripts-dev/Makefile
#
# grpc-proxy tests (test with docker):
# TEST_OPTS="PASSES='build grpcproxy'" make docker-test -f ./hack/scripts-dev/Makefile
# HOST_TMP_DIR=/tmp TEST_OPTS="PASSES='build grpcproxy'" make docker-test -f ./hack/scripts-dev/Makefile
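# Note: the leading "!" on the egrep lines below inverts the exit status, so the
# target fails when the log contains "--- FAIL:", a test-timeout panic, or a leaked
# goroutine report, even though ./test's own exit code is hidden by the tee pipeline.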
.PHONY: test
test:
$(info TEST_OPTS: $(TEST_OPTS))
$(info log-file: test-$(TEST_SUFFIX).log)
$(TEST_OPTS) ./test 2>&1 | tee test-$(TEST_SUFFIX).log
! egrep "(--- FAIL:|panic: test timed out|appears to have leaked)" -B50 -A10 test-$(TEST_SUFFIX).log
docker-test:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
$(info TEST_OPTS: $(TEST_OPTS))
$(info log-file: test-$(TEST_SUFFIX).log)
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`,destination=/go/src/github.com/coreos/etcd \
gcr.io/etcd-development/etcd-test:go$(GO_VERSION) \
/bin/bash -c "$(TEST_OPTS) ./test 2>&1 | tee test-$(TEST_SUFFIX).log"
! egrep "(--- FAIL:|panic: test timed out|appears to have leaked)" -B50 -A10 test-$(TEST_SUFFIX).log
docker-test-coverage:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
$(info log-file: docker-test-coverage-$(TEST_SUFFIX).log)
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`,destination=/go/src/github.com/coreos/etcd \
gcr.io/etcd-development/etcd-test:go$(GO_VERSION) \
/bin/bash -c "COVERDIR=covdir PASSES='build build_cov cov' ./test 2>&1 | tee docker-test-coverage-$(TEST_SUFFIX).log && /codecov -t 6040de41-c073-4d6f-bbf8-d89256ef31e1"
! egrep "(--- FAIL:|panic: test timed out|appears to have leaked)" -B50 -A10 docker-test-coverage-$(TEST_SUFFIX).log
# Example:
# make compile-with-docker-test -f ./hack/scripts-dev/Makefile
# ETCD_VERSION=v3-test make build-docker-release-master -f ./hack/scripts-dev/Makefile
# ETCD_VERSION=v3-test make push-docker-release-master -f ./hack/scripts-dev/Makefile
# gsutil -m acl ch -u allUsers:R -r gs://artifacts.etcd-development.appspot.com
build-docker-release-master:
$(info ETCD_VERSION: $(ETCD_VERSION))
cp ./Dockerfile-release ./bin/Dockerfile-release
docker build \
--tag gcr.io/etcd-development/etcd:$(ETCD_VERSION) \
--file ./bin/Dockerfile-release \
./bin
rm -f ./bin/Dockerfile-release
docker run \
--rm \
gcr.io/etcd-development/etcd:$(ETCD_VERSION) \
/bin/sh -c "/usr/local/bin/etcd --version && ETCDCTL_API=3 /usr/local/bin/etcdctl version"
push-docker-release-master:
$(info ETCD_VERSION: $(ETCD_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd:$(ETCD_VERSION)
# Example:
# make build-docker-test -f ./hack/scripts-dev/Makefile
# make compile-with-docker-test -f ./hack/scripts-dev/Makefile
# make build-docker-static-ip-test -f ./hack/scripts-dev/Makefile
# gcloud docker -- login -u _json_key -p "$(cat /etc/gcp-key-etcd-development.json)" https://gcr.io
# make push-docker-static-ip-test -f ./hack/scripts-dev/Makefile
# gsutil -m acl ch -u allUsers:R -r gs://artifacts.etcd-development.appspot.com
# make pull-docker-static-ip-test -f ./hack/scripts-dev/Makefile
# make docker-static-ip-test-certs-run -f ./hack/scripts-dev/Makefile
# make docker-static-ip-test-certs-metrics-proxy-run -f ./hack/scripts-dev/Makefile
build-docker-static-ip-test:
$(info GO_VERSION: $(GO_VERSION))
@sed -i.bak 's|REPLACE_ME_GO_VERSION|$(GO_VERSION)|g' ./hack/scripts-dev/docker-static-ip/Dockerfile
docker build \
--tag gcr.io/etcd-development/etcd-static-ip-test:go$(GO_VERSION) \
--file ./hack/scripts-dev/docker-static-ip/Dockerfile \
./hack/scripts-dev/docker-static-ip
@mv ./hack/scripts-dev/docker-static-ip/Dockerfile.bak ./hack/scripts-dev/docker-static-ip/Dockerfile
push-docker-static-ip-test:
$(info GO_VERSION: $(GO_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd-static-ip-test:go$(GO_VERSION)
pull-docker-static-ip-test:
$(info GO_VERSION: $(GO_VERSION))
docker pull gcr.io/etcd-development/etcd-static-ip-test:go$(GO_VERSION)
docker-static-ip-test-certs-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/hack/scripts-dev/docker-static-ip/certs,destination=/certs \
gcr.io/etcd-development/etcd-static-ip-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs/run.sh && rm -rf m*.etcd"
docker-static-ip-test-certs-metrics-proxy-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/hack/scripts-dev/docker-static-ip/certs-metrics-proxy,destination=/certs-metrics-proxy \
gcr.io/etcd-development/etcd-static-ip-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-metrics-proxy/run.sh && rm -rf m*.etcd"
# Example:
# make build-docker-test -f ./hack/scripts-dev/Makefile
# make compile-with-docker-test -f ./hack/scripts-dev/Makefile
# make build-docker-dns-test -f ./hack/scripts-dev/Makefile
# gcloud docker -- login -u _json_key -p "$(cat /etc/gcp-key-etcd-development.json)" https://gcr.io
# make push-docker-dns-test -f ./hack/scripts-dev/Makefile
# gsutil -m acl ch -u allUsers:R -r gs://artifacts.etcd-development.appspot.com
# make pull-docker-dns-test -f ./hack/scripts-dev/Makefile
# make docker-dns-test-insecure-run -f ./hack/scripts-dev/Makefile
# make docker-dns-test-certs-run -f ./hack/scripts-dev/Makefile
# make docker-dns-test-certs-gateway-run -f ./hack/scripts-dev/Makefile
# make docker-dns-test-certs-wildcard-run -f ./hack/scripts-dev/Makefile
# make docker-dns-test-certs-common-name-auth-run -f ./hack/scripts-dev/Makefile
# make docker-dns-test-certs-common-name-multi-run -f ./hack/scripts-dev/Makefile
build-docker-dns-test:
$(info GO_VERSION: $(GO_VERSION))
@sed -i.bak 's|REPLACE_ME_GO_VERSION|$(GO_VERSION)|g' ./hack/scripts-dev/docker-dns/Dockerfile
docker build \
--tag gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
--file ./hack/scripts-dev/docker-dns/Dockerfile \
./hack/scripts-dev/docker-dns
@mv ./hack/scripts-dev/docker-dns/Dockerfile.bak ./hack/scripts-dev/docker-dns/Dockerfile
docker run \
--rm \
--dns 127.0.0.1 \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "/etc/init.d/bind9 start && cat /dev/null >/etc/hosts && dig etcd.local"
push-docker-dns-test:
$(info GO_VERSION: $(GO_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION)
pull-docker-dns-test:
$(info GO_VERSION: $(GO_VERSION))
docker pull gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION)
docker-dns-test-insecure-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/hack/scripts-dev/docker-dns/insecure,destination=/insecure \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /insecure/run.sh && rm -rf m*.etcd"
docker-dns-test-certs-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/hack/scripts-dev/docker-dns/certs,destination=/certs \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs/run.sh && rm -rf m*.etcd"
docker-dns-test-certs-gateway-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/hack/scripts-dev/docker-dns/certs-gateway,destination=/certs-gateway \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-gateway/run.sh && rm -rf m*.etcd"
docker-dns-test-certs-wildcard-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/hack/scripts-dev/docker-dns/certs-wildcard,destination=/certs-wildcard \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-wildcard/run.sh && rm -rf m*.etcd"
docker-dns-test-certs-common-name-auth-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/hack/scripts-dev/docker-dns/certs-common-name-auth,destination=/certs-common-name-auth \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-common-name-auth/run.sh && rm -rf m*.etcd"
docker-dns-test-certs-common-name-multi-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/hack/scripts-dev/docker-dns/certs-common-name-multi,destination=/certs-common-name-multi \
gcr.io/etcd-development/etcd-dns-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-common-name-multi/run.sh && rm -rf m*.etcd"
# Example:
# make build-docker-test -f ./hack/scripts-dev/Makefile
# make compile-with-docker-test -f ./hack/scripts-dev/Makefile
# make build-docker-dns-srv-test -f ./hack/scripts-dev/Makefile
# gcloud docker -- login -u _json_key -p "$(cat /etc/gcp-key-etcd-development.json)" https://gcr.io
# make push-docker-dns-srv-test -f ./hack/scripts-dev/Makefile
# gsutil -m acl ch -u allUsers:R -r gs://artifacts.etcd-development.appspot.com
# make pull-docker-dns-srv-test -f ./hack/scripts-dev/Makefile
# make docker-dns-srv-test-certs-run -f ./hack/scripts-dev/Makefile
# make docker-dns-srv-test-certs-gateway-run -f ./hack/scripts-dev/Makefile
# make docker-dns-srv-test-certs-wildcard-run -f ./hack/scripts-dev/Makefile
build-docker-dns-srv-test:
$(info GO_VERSION: $(GO_VERSION))
@sed -i.bak 's|REPLACE_ME_GO_VERSION|$(GO_VERSION)|g' ./hack/scripts-dev/docker-dns-srv/Dockerfile
docker build \
--tag gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION) \
--file ./hack/scripts-dev/docker-dns-srv/Dockerfile \
./hack/scripts-dev/docker-dns-srv
@mv ./hack/scripts-dev/docker-dns-srv/Dockerfile.bak ./hack/scripts-dev/docker-dns-srv/Dockerfile
docker run \
--rm \
--dns 127.0.0.1 \
gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION) \
/bin/bash -c "/etc/init.d/bind9 start && cat /dev/null >/etc/hosts && dig +noall +answer SRV _etcd-client-ssl._tcp.etcd.local && dig +noall +answer SRV _etcd-server-ssl._tcp.etcd.local && dig +noall +answer m1.etcd.local m2.etcd.local m3.etcd.local"
push-docker-dns-srv-test:
$(info GO_VERSION: $(GO_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION)
pull-docker-dns-srv-test:
$(info GO_VERSION: $(GO_VERSION))
docker pull gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION)
docker-dns-srv-test-certs-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/hack/scripts-dev/docker-dns-srv/certs,destination=/certs \
gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs/run.sh && rm -rf m*.etcd"
docker-dns-srv-test-certs-gateway-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/hack/scripts-dev/docker-dns-srv/certs-gateway,destination=/certs-gateway \
gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-gateway/run.sh && rm -rf m*.etcd"
docker-dns-srv-test-certs-wildcard-run:
$(info GO_VERSION: $(GO_VERSION))
$(info HOST_TMP_DIR: $(HOST_TMP_DIR))
$(info TMP_DIR_MOUNT_FLAG: $(TMP_DIR_MOUNT_FLAG))
docker run \
--rm \
--tty \
--dns 127.0.0.1 \
$(TMP_DIR_MOUNT_FLAG) \
--mount type=bind,source=`pwd`/bin,destination=/etcd \
--mount type=bind,source=`pwd`/hack/scripts-dev/docker-dns-srv/certs-wildcard,destination=/certs-wildcard \
gcr.io/etcd-development/etcd-dns-srv-test:go$(GO_VERSION) \
/bin/bash -c "cd /etcd && /certs-wildcard/run.sh && rm -rf m*.etcd"
# Example:
# make build-etcd-test-proxy -f ./hack/scripts-dev/Makefile
build-etcd-test-proxy:
go build -v -o ./bin/etcd-test-proxy ./tools/etcd-test-proxy
# Example:
# make build-docker-functional-tester -f ./hack/scripts-dev/Makefile
# make push-docker-functional-tester -f ./hack/scripts-dev/Makefile
# make pull-docker-functional-tester -f ./hack/scripts-dev/Makefile
build-docker-functional-tester:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
@sed -i.bak 's|REPLACE_ME_GO_VERSION|$(GO_VERSION)|g' ./Dockerfile-functional-tester
docker build \
--tag gcr.io/etcd-development/etcd-functional-tester:go$(GO_VERSION) \
--file ./Dockerfile-functional-tester \
.
@mv ./Dockerfile-functional-tester.bak ./Dockerfile-functional-tester
docker run \
--rm \
gcr.io/etcd-development/etcd-functional-tester:go$(GO_VERSION) \
/bin/bash -c "/etcd --version && \
/etcd-failpoints --version && \
ETCDCTL_API=3 /etcdctl version && \
/etcd-agent -help || true && \
/etcd-tester -help || true && \
/etcd-runner --help || true && \
/benchmark --help || true && \
/etcd-test-proxy -help || true"
push-docker-functional-tester:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
gcloud docker -- push gcr.io/etcd-development/etcd-functional-tester:go$(GO_VERSION)
pull-docker-functional-tester:
$(info GO_VERSION: $(GO_VERSION))
$(info ETCD_VERSION: $(ETCD_VERSION))
docker pull gcr.io/etcd-development/etcd-functional-tester:go$(GO_VERSION)

1
hack/scripts-dev/README Normal file

@ -0,0 +1 @@
scripts for etcd development


@ -0,0 +1,44 @@
FROM ubuntu:17.10
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
RUN apt-get -y update \
&& apt-get -y install \
build-essential \
gcc \
apt-utils \
pkg-config \
software-properties-common \
apt-transport-https \
libssl-dev \
sudo \
bash \
curl \
tar \
git \
netcat \
bind9 \
dnsutils \
&& apt-get -y update \
&& apt-get -y upgrade \
&& apt-get -y autoremove \
&& apt-get -y autoclean
ENV GOROOT /usr/local/go
ENV GOPATH /go
ENV PATH ${GOPATH}/bin:${GOROOT}/bin:${PATH}
ENV GO_VERSION REPLACE_ME_GO_VERSION
ENV GO_DOWNLOAD_URL https://storage.googleapis.com/golang
RUN rm -rf ${GOROOT} \
&& curl -s ${GO_DOWNLOAD_URL}/go${GO_VERSION}.linux-amd64.tar.gz | tar -v -C /usr/local/ -xz \
&& mkdir -p ${GOPATH}/src ${GOPATH}/bin \
&& go version \
&& go get -v -u github.com/mattn/goreman
RUN mkdir -p /var/bind /etc/bind
RUN chown root:bind /var/bind /etc/bind
ADD named.conf etcd.zone rdns.zone /etc/bind/
RUN chown root:bind /etc/bind/named.conf /etc/bind/etcd.zone /etc/bind/rdns.zone
ADD resolv.conf /etc/resolv.conf

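Assuming this is the DNS test image driven by the Makefile above (tag shown for the default `GO_VERSION`), the embedded bind9 zone can be spot-checked without running etcd:

```sh
docker run --rm --dns 127.0.0.1 \
  gcr.io/etcd-development/etcd-dns-test:go1.10 \
  /bin/bash -c "/etc/init.d/bind9 start && cat /dev/null >/etc/hosts && dig +short m1.etcd.local"
```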

@ -0,0 +1,7 @@
etcd1: ./etcd --name m1 --data-dir /tmp/m1.data --listen-client-urls https://127.0.0.1:2379 --advertise-client-urls https://m1.etcd.local:2379 --listen-peer-urls https://127.0.0.1:2380 --initial-advertise-peer-urls=https://m1.etcd.local:2380 --initial-cluster-token tkn --discovery-srv=etcd.local --initial-cluster-state new --peer-cert-file=/certs-gateway/server.crt --peer-key-file=/certs-gateway/server.key.insecure --peer-trusted-ca-file=/certs-gateway/ca.crt --peer-client-cert-auth --cert-file=/certs-gateway/server.crt --key-file=/certs-gateway/server.key.insecure --trusted-ca-file=/certs-gateway/ca.crt --client-cert-auth
etcd2: ./etcd --name m2 --data-dir /tmp/m2.data --listen-client-urls https://127.0.0.1:22379 --advertise-client-urls https://m2.etcd.local:22379 --listen-peer-urls https://127.0.0.1:22380 --initial-advertise-peer-urls=https://m2.etcd.local:22380 --initial-cluster-token tkn --discovery-srv=etcd.local --initial-cluster-state new --peer-cert-file=/certs-gateway/server.crt --peer-key-file=/certs-gateway/server.key.insecure --peer-trusted-ca-file=/certs-gateway/ca.crt --peer-client-cert-auth --cert-file=/certs-gateway/server.crt --key-file=/certs-gateway/server.key.insecure --trusted-ca-file=/certs-gateway/ca.crt --client-cert-auth
etcd3: ./etcd --name m3 --data-dir /tmp/m3.data --listen-client-urls https://127.0.0.1:32379 --advertise-client-urls https://m3.etcd.local:32379 --listen-peer-urls https://127.0.0.1:32380 --initial-advertise-peer-urls=https://m3.etcd.local:32380 --initial-cluster-token tkn --discovery-srv=etcd.local --initial-cluster-state new --peer-cert-file=/certs-gateway/server.crt --peer-key-file=/certs-gateway/server.key.insecure --peer-trusted-ca-file=/certs-gateway/ca.crt --peer-client-cert-auth --cert-file=/certs-gateway/server.crt --key-file=/certs-gateway/server.key.insecure --trusted-ca-file=/certs-gateway/ca.crt --client-cert-auth
gateway: ./etcd gateway start --discovery-srv etcd.local --trusted-ca-file /certs-gateway/ca.crt --listen-addr 127.0.0.1:23790

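With this Procfile up, a client can reach the cluster through the gateway's single endpoint instead of via SRV discovery; a sketch using the same certificates (mirroring the run.sh shown further below):

```sh
ETCDCTL_API=3 ./etcdctl \
  --cacert=/certs-gateway/ca.crt \
  --cert=/certs-gateway/server.crt \
  --key=/certs-gateway/server.key.insecure \
  --endpoints=127.0.0.1:23790 \
  put foo bar
```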

@ -0,0 +1,19 @@
{
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"O": "etcd",
"OU": "etcd Security",
"L": "San Francisco",
"ST": "California",
"C": "USA"
}
],
"CN": "ca",
"ca": {
"expiry": "87600h"
}
}


@ -0,0 +1,22 @@
-----BEGIN CERTIFICATE-----
MIIDsTCCApmgAwIBAgIUbQA3lX1hcR1W8D5wmmAwaLp4AWQwDQYJKoZIhvcNAQEL
BQAwbzEMMAoGA1UEBhMDVVNBMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQH
Ew1TYW4gRnJhbmNpc2NvMQ0wCwYDVQQKEwRldGNkMRYwFAYDVQQLEw1ldGNkIFNl
Y3VyaXR5MQswCQYDVQQDEwJjYTAeFw0xNzEyMDExOTI5MDBaFw0yNzExMjkxOTI5
MDBaMG8xDDAKBgNVBAYTA1VTQTETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UE
BxMNU2FuIEZyYW5jaXNjbzENMAsGA1UEChMEZXRjZDEWMBQGA1UECxMNZXRjZCBT
ZWN1cml0eTELMAkGA1UEAxMCY2EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
AoIBAQDdZjG+dJixdUuZLIlPVE/qvqNqbgIQy3Hrgq9OlPevLu3FAKIgTHoSKugq
jOuBjzAtmbGTky3PPmkjWrOUWKEUYMuJJzXA1fO2NALXle47NVyVVfuwCmDnaAAL
Sw4QTZKREoe3EwswbeYguQinCqazRwbXMzzfypIfaHAyGrqFCq12IvarrjfDcamm
egtPkxNNdj1QHbkeYXcp76LOSBRjD2B3bzZvyVv/wPORaGTFXQ0feGz/93/Y/E0z
BL5TdZ84qmgKxW04hxkhhuuxsL5zDNpbXcGm//Zw9qzO/AvtEux6ag9t0JziiEtj
zLz5M7yXivfG4oxEeLKTieS/1ZkbAgMBAAGjRTBDMA4GA1UdDwEB/wQEAwIBBjAS
BgNVHRMBAf8ECDAGAQH/AgECMB0GA1UdDgQWBBR7XtZP3fc6ElgHl6hdSHLmrFWj
MzANBgkqhkiG9w0BAQsFAAOCAQEAPy3ol3CPyFxuWD0IGKde26p1mT8cdoaeRbOa
2Z3GMuRrY2ojaKMfXuroOi+5ZbR9RSvVXhVX5tEMOSy81tb5OGPZP24Eroh4CUfK
bw7dOeBNCm9tcmHkV+5frJwOgjN2ja8W8jBlV1flLx+Jpyk2PSGun5tQPsDlqzor
E8QQ2FzCzxoGiEpB53t5gKeX+mH6gS1c5igJ5WfsEGXBC4xJm/u8/sg30uCGP6kT
tCoQ8gnvGen2OqYJEfCIEk28/AZJvJ90TJFS3ExXJpyfImK9j5VcTohW+KvcX5xF
W7M6KCGVBQtophobt3v/Zs4f11lWck9xVFCPGn9+LI1dbJUIIQ==
-----END CERTIFICATE-----


@ -0,0 +1,13 @@
{
"signing": {
"default": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}


@ -0,0 +1,26 @@
#!/bin/bash
if ! [[ "$0" =~ "./gencerts.sh" ]]; then
echo "must be run from 'fixtures'"
exit 255
fi
if ! which cfssl; then
echo "cfssl is not installed"
exit 255
fi
cfssl gencert --initca=true ./ca-csr.json | cfssljson --bare ./ca
mv ca.pem ca.crt
openssl x509 -in ca.crt -noout -text
# generate wildcard certificates DNS: *.etcd.local
cfssl gencert \
--ca ./ca.crt \
--ca-key ./ca-key.pem \
--config ./gencert.json \
./server-ca-csr.json | cfssljson --bare ./server
mv server.pem server.crt
mv server-key.pem server.key.insecure
rm -f *.csr *.pem *.stderr *.txt

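To double-check the output of gencerts.sh, standard openssl tooling can confirm the chain and the SANs (a quick manual check, not part of the script):

```sh
openssl verify -CAfile ./ca.crt ./server.crt
openssl x509 -in ./server.crt -noout -text | grep -A1 "Subject Alternative Name"
```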

@ -0,0 +1,47 @@
#!/bin/sh
rm -rf /tmp/m1.data /tmp/m2.data /tmp/m3.data
/etc/init.d/bind9 start
# get rid of hosts so go lookup won't resolve 127.0.0.1 to localhost
cat /dev/null >/etc/hosts
goreman -f /certs-gateway/Procfile start &
# TODO: remove random sleeps
sleep 7s
ETCDCTL_API=3 ./etcdctl \
--cacert=/certs-gateway/ca.crt \
--cert=/certs-gateway/server.crt \
--key=/certs-gateway/server.key.insecure \
--discovery-srv etcd.local \
endpoint health --cluster
ETCDCTL_API=3 ./etcdctl \
--cacert=/certs-gateway/ca.crt \
--cert=/certs-gateway/server.crt \
--key=/certs-gateway/server.key.insecure \
--discovery-srv etcd.local \
put abc def
ETCDCTL_API=3 ./etcdctl \
--cacert=/certs-gateway/ca.crt \
--cert=/certs-gateway/server.crt \
--key=/certs-gateway/server.key.insecure \
--discovery-srv etcd.local \
get abc
ETCDCTL_API=3 ./etcdctl \
--cacert=/certs-gateway/ca.crt \
--cert=/certs-gateway/server.crt \
--key=/certs-gateway/server.key.insecure \
--endpoints=127.0.0.1:23790 \
put ghi jkl
ETCDCTL_API=3 ./etcdctl \
--cacert=/certs-gateway/ca.crt \
--cert=/certs-gateway/server.crt \
--key=/certs-gateway/server.key.insecure \
--endpoints=127.0.0.1:23790 \
get ghi

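The fixed `sleep 7s` above is flagged with a TODO; one way to remove it is to poll `endpoint health` until the SRV-discovered cluster responds. A sketch with the same flags:

```sh
until ETCDCTL_API=3 ./etcdctl \
  --cacert=/certs-gateway/ca.crt \
  --cert=/certs-gateway/server.crt \
  --key=/certs-gateway/server.key.insecure \
  --discovery-srv etcd.local \
  endpoint health --cluster; do
  sleep 1
done
```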

@ -0,0 +1,23 @@
{
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"O": "etcd",
"OU": "etcd Security",
"L": "San Francisco",
"ST": "California",
"C": "USA"
}
],
"hosts": [
"m1.etcd.local",
"m2.etcd.local",
"m3.etcd.local",
"etcd.local",
"127.0.0.1",
"localhost"
]
}


@ -0,0 +1,25 @@
-----BEGIN CERTIFICATE-----
MIIENTCCAx2gAwIBAgIUcviGEkA57QgUUFUIuB23kO/jHWIwDQYJKoZIhvcNAQEL
BQAwbzEMMAoGA1UEBhMDVVNBMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQH
Ew1TYW4gRnJhbmNpc2NvMQ0wCwYDVQQKEwRldGNkMRYwFAYDVQQLEw1ldGNkIFNl
Y3VyaXR5MQswCQYDVQQDEwJjYTAeFw0xNzEyMDExOTI5MDBaFw0yNzExMjkxOTI5
MDBaMGIxDDAKBgNVBAYTA1VTQTETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UE
BxMNU2FuIEZyYW5jaXNjbzENMAsGA1UEChMEZXRjZDEWMBQGA1UECxMNZXRjZCBT
ZWN1cml0eTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL6rB1Kh08Fo
FieWqzB4WvKxSFjLWlNfAXbSC1IEPEc/2JOSTF/VfsEX7Xf4eDlTUIZ/TpMS4nUE
Jn0rOIxDJWieQgF99a88CKCwVeqyiQ1iGlI/Ls78P7712QJ1QvcYPBRCvAFo2VLg
TSNhq4taRtAnP690TJVKMSxHg7qtMIpiBLc8ryNbtNUkQHl7/puiBZVVFwHQZm6d
ZRkfMqXWs4+VKLTx0pqJaM0oWVISQlLWQV83buVsuDVyLAZu2MjRYZwBj9gQwZDO
15VGvacjMU+l1+nLRuODrpGeGlxwfT57jqipbUtTsoZFsGxPdIWn14M6Pzw/mML4
guYLKv3UqkkCAwEAAaOB1TCB0jAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYI
KwYBBQUHAwEGCCsGAQUFBwMCMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFKYKYVPu
XPnZ2j0NORiNPUJpBnhkMB8GA1UdIwQYMBaAFHte1k/d9zoSWAeXqF1IcuasVaMz
MFMGA1UdEQRMMEqCDW0xLmV0Y2QubG9jYWyCDW0yLmV0Y2QubG9jYWyCDW0zLmV0
Y2QubG9jYWyCCmV0Y2QubG9jYWyCCWxvY2FsaG9zdIcEfwAAATANBgkqhkiG9w0B
AQsFAAOCAQEAK40lD6Nx/V6CaShL95fQal7mFp/LXiyrlFTqCqrCruVnntwpukSx
I864bNMxVSTStEA3NM5V4mGuYjRvdjS65LBhaS1MQDPb4ofPj0vnxDOx6fryRIsB
wYKDuT4LSQ7pV/hBfL/bPb+itvb24G4/ECbduOprrywxmZskeEm/m0WqUb1A08Hv
6vDleyt382Wnxahq8txhMU+gNLTGVne60hhfLR+ePK7MJ4oyk3yeUxsmsnBkYaOu
gYOak5nWzRa09dLq6/vHQLt6n0AB0VurMAjshzO2rsbdOkD233sdkvKiYpayAyEf
Iu7S5vNjP9jiUgmws6G95wgJOd2xv54D4Q==
-----END CERTIFICATE-----


@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAvqsHUqHTwWgWJ5arMHha8rFIWMtaU18BdtILUgQ8Rz/Yk5JM
X9V+wRftd/h4OVNQhn9OkxLidQQmfSs4jEMlaJ5CAX31rzwIoLBV6rKJDWIaUj8u
zvw/vvXZAnVC9xg8FEK8AWjZUuBNI2Gri1pG0Cc/r3RMlUoxLEeDuq0wimIEtzyv
I1u01SRAeXv+m6IFlVUXAdBmbp1lGR8ypdazj5UotPHSmolozShZUhJCUtZBXzdu
5Wy4NXIsBm7YyNFhnAGP2BDBkM7XlUa9pyMxT6XX6ctG44OukZ4aXHB9PnuOqKlt
S1OyhkWwbE90hafXgzo/PD+YwviC5gsq/dSqSQIDAQABAoIBAEAOsb0fRUdbMuZG
BmmYZeXXjdjXKReNea5zzv3VEnNVjeu2YRZpYdZ5tXxy6+FGjm1BZCKhW5e4tz2i
QbNN88l8MezSZrJi1vs1gwgAx27JoNI1DALaWIhNjIT45HCjobuk2AkZMrpXRVM3
wyxkPho8tXa6+efGL1MTC7yx5vb2dbhnEsjrPdUO0GLVP56bgrz7vRk+hE772uq2
QDenZg+PcH+hOhptbY1h9CYotGWYXCpi0+yoHhsh5PTcEpyPmLWSkACsHovm3MIn
a5oU0uh28nVBfYE0Sk6I9XBERHVO/OrCvz4Y3ZbVyGpCdLcaMB5wI1P4a5ULV52+
VPrALQkCgYEA+w85KYuL+eUjHeMqa8V8A9xgcl1+dvB8SXgfRRm5QTqxgetzurD9
G7vgMex42nqgoW1XUx6i9roRk3Qn3D2NKvBJcpMohYcY3HcGkCsBwtNUCyOWKasS
Oj2q9LzPjVqTFII0zzarQ85XuuZyTRieFAMoYmsS8O/GcapKqYhPIDMCgYEAwmuR
ctnCNgoEj1NaLBSAcq7njONvYUFvbXO8BCyd1WeLZyz/krgXxuhQh9oXIccWAKX2
uxIDaoWV8F5c8bNOkeebHzVHfaLpwl4IlLa/i5WTIc+IZmpBR0aiS021k/M3KkDg
KnQXAer6jEymT3lUL0AqZd+GX6DjFw61zPOFH5MCgYAnCiv6YN/IYTA/woZjMddi
Bk/dGNrEhgrdpdc++IwNL6JQsJtTaZhCSsnHGZ2FY9I8p/MPUtFGipKXGlXkcpHU
Hn9dWLLRaLud9MhJfNaORCxqewMrwZVZByPhYMbplS8P3lt16WtiZODRiGo3wN87
/221OC8+1hpGrJNln3OmbwKBgDV8voEoY4PWcba0qcQix8vFTrK2B3hsNimYg4tq
cum5GOMDwDQvLWttkmotl9uVF/qJrj19ES+HHN8KNuvP9rexTj3hvI9V+JWepSG0
vTG7rsTIgbAbX2Yqio/JC0Fu0ihvvLwxP/spGFDs7XxD1uNA9ekc+6znaFJ5m46N
GHy9AoGBAJmGEv5+rM3cucRyYYhE7vumXeCLXyAxxaf0f7+1mqRVO6uNGNGbNY6U
Heq6De4yc1VeAXUpkGQi/afPJNMU+fy8paCjFyzID1yLvdtFOG38KDbgMmj4t+cH
xTp2RT3MkcCWPq2+kXZeQjPdesPkzdB+nA8ckaSursV908n6AHcM
-----END RSA PRIVATE KEY-----


@ -0,0 +1,5 @@
etcd1: ./etcd --name m1 --data-dir /tmp/m1.data --listen-client-urls https://127.0.0.1:2379 --advertise-client-urls https://m1.etcd.local:2379 --listen-peer-urls https://127.0.0.1:2380 --initial-advertise-peer-urls=https://m1.etcd.local:2380 --initial-cluster-token tkn --discovery-srv=etcd.local --initial-cluster-state new --peer-cert-file=/certs-wildcard/server.crt --peer-key-file=/certs-wildcard/server.key.insecure --peer-trusted-ca-file=/certs-wildcard/ca.crt --peer-client-cert-auth --cert-file=/certs-wildcard/server.crt --key-file=/certs-wildcard/server.key.insecure --trusted-ca-file=/certs-wildcard/ca.crt --client-cert-auth
etcd2: ./etcd --name m2 --data-dir /tmp/m2.data --listen-client-urls https://127.0.0.1:22379 --advertise-client-urls https://m2.etcd.local:22379 --listen-peer-urls https://127.0.0.1:22380 --initial-advertise-peer-urls=https://m2.etcd.local:22380 --initial-cluster-token tkn --discovery-srv=etcd.local --initial-cluster-state new --peer-cert-file=/certs-wildcard/server.crt --peer-key-file=/certs-wildcard/server.key.insecure --peer-trusted-ca-file=/certs-wildcard/ca.crt --peer-client-cert-auth --cert-file=/certs-wildcard/server.crt --key-file=/certs-wildcard/server.key.insecure --trusted-ca-file=/certs-wildcard/ca.crt --client-cert-auth
etcd3: ./etcd --name m3 --data-dir /tmp/m3.data --listen-client-urls https://127.0.0.1:32379 --advertise-client-urls https://m3.etcd.local:32379 --listen-peer-urls https://127.0.0.1:32380 --initial-advertise-peer-urls=https://m3.etcd.local:32380 --initial-cluster-token tkn --discovery-srv=etcd.local --initial-cluster-state new --peer-cert-file=/certs-wildcard/server.crt --peer-key-file=/certs-wildcard/server.key.insecure --peer-trusted-ca-file=/certs-wildcard/ca.crt --peer-client-cert-auth --cert-file=/certs-wildcard/server.crt --key-file=/certs-wildcard/server.key.insecure --trusted-ca-file=/certs-wildcard/ca.crt --client-cert-auth


@ -0,0 +1,19 @@
{
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"O": "etcd",
"OU": "etcd Security",
"L": "San Francisco",
"ST": "California",
"C": "USA"
}
],
"CN": "ca",
"ca": {
"expiry": "87600h"
}
}

Some files were not shown because too many files have changed in this diff.