Compare commits

16 Commits:

- 02697ca725
- bd693c7069
- 52c90cdcfb
- a88b22ac0a
- e93f8b8a12
- 86e616c6e9
- 5ae55a2c0d
- 62ce6eef7b
- 7df4f5c804
- 461c24e899
- 6d90d03bf0
- 9995e80a2c
- 229405f113
- b3f2a998d4
- 8436e901e9
- c03f5cb941
@@ -1,120 +0,0 @@ (file deleted)

## Allow-legacy mode

Allow-legacy is a special mode in etcd that enables a running etcd cluster to transition smoothly between major versions of etcd. For example, the internal API versions of etcd 0.4 (internal v1) and etcd 2.0 (internal v2) are not compatible, so the cluster needs to be updated all at once to make the switch. To minimize downtime, allow-legacy coordinates with all members of the cluster to shut down, migrate the data, and restart on the new version.

Allow-legacy helps users upgrade v0.4 etcd clusters easily, keeping downtime minimal -- less than 1 minute for clusters storing less than 50 MB.

It currently supports upgrading from internal v1 to internal v2.
### Setup

This mode is enabled if `ETCD_ALLOW_LEGACY_MODE` is set to true, or if etcd is running on a CoreOS system.

It treats `ETCD_BINARY_DIR` as the directory for etcd binaries, organized in this way:

```
ETCD_BINARY_DIR
|-- 1
|-- 2
```

`1` is etcd with the internal v1 protocol; you should use etcd v0.4.7 here. `2` is etcd with the internal v2 protocol, which is etcd v2.x.

The default value for `ETCD_BINARY_DIR` is `/usr/libexec/etcd/internal_versions/`.
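Given this layout, selecting a binary is just a path join on the internal version number. A minimal sketch, assuming a hypothetical `binaryPath` helper (the real wrapper logic on CoreOS is more involved):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// defaultDir mirrors the documented default for ETCD_BINARY_DIR.
const defaultDir = "/usr/libexec/etcd/internal_versions/"

// binaryPath returns the etcd binary to execute for a given internal
// version, following the ETCD_BINARY_DIR layout described above.
func binaryPath(binaryDir, internalVersion string) string {
	if binaryDir == "" {
		binaryDir = defaultDir
	}
	return filepath.Join(binaryDir, internalVersion)
}

func main() {
	fmt.Println(binaryPath("", "2"))         // default dir, internal v2
	fmt.Println(binaryPath("/opt/etcd", "1")) // custom dir, internal v1
}
```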
### Upgrading a Cluster

When started with a v1 data directory and v1 flags, etcd executes the v0.4.7 binary and runs exactly as before. To start the migration, follow the steps below:
#### 1. Check the Cluster Health

Before upgrading, check the health of the cluster to make sure everything is working properly. Check the health by running:

```
$ etcdctl cluster-health
cluster is healthy
member 6e3bd23ae5f1eae0 is healthy
member 924e2e83e93f2560 is healthy
member a8266ecf031671f3 is healthy
```

If the cluster and all members are healthy, you can start the upgrade process. If not, identify the unhealthy machines and repair them using the [admin guide](./admin_guide.md).
#### 2. Trigger the Upgrade

When you're ready, use the `etcdctl upgrade` command to start upgrading the etcd cluster to 2.0:

```
# Defaults work on a CoreOS machine running etcd
$ etcdctl upgrade
```

```
# Advanced example specifying a peer url
$ etcdctl upgrade --old-version=1 --new-version=2 --peer-url=$PEER_URL
```

`PEER_URL` can be any accessible peer url of the cluster.

Once triggered, all peer-mode members will print out:

```
detected next internal version 2, exit after 10 seconds.
```
#### Parallel Coordinated Upgrade

As part of the upgrade, etcd does internal coordination within the cluster for a brief period and then exits. Clusters storing 50 MB should be unavailable for less than 1 minute.

#### Restart etcd Processes

After the etcd processes exit, they need to be restarted. You can do this manually or configure your unit system to do it automatically. On CoreOS, etcd is already configured to start automatically with systemd.

When restarted, the data directory of each member is upgraded, after which etcd v2.0 will be running and serving requests. The upgrade is now complete!

Standby-mode members are a special case: they are upgraded into proxy mode (a new feature in etcd 2.0) upon restarting. When the upgrade is triggered, any standbys will exit with the message:

```
Detect the cluster has been upgraded to internal API v2. Exit now.
```

Once restarted, standbys run in v2.0 proxy mode, which proxies user requests to the etcd cluster.
#### 3. Check the Cluster Health

After the upgrade process, run the health check again to verify the upgrade. If the cluster or any member is unhealthy, refer to [failure recovery](#failure-recovery).
### Downgrade

If the upgrade fails due to disk or network issues, you can restart the upgrade process manually. However, once you upgrade etcd to the internal v2 protocol, you CANNOT downgrade it back to the internal v1 protocol. If you may want to downgrade etcd in the future, back up your v1 data dir beforehand.
### Upgrade Process on CoreOS

When running on a CoreOS system, allow-legacy mode is enabled by default and an automatic update will set up everything needed to execute the upgrade. The `etcd.service` on CoreOS is already configured to restart automatically. All you need to do is run `etcdctl upgrade` when you're ready, as described above.
### Internal Details

At its bootstrap stage, etcd v0.4.7 registers the versions of the etcd binaries available on its local machine into the key space. When the upgrade command is executed, etcdctl checks whether each member has an internal-version-v2 etcd binary available. If so, each member is asked to record that it needs to be upgraded the next time it starts, and it exits after 10 seconds.

Once restarted, etcd v2.0 sees the recorded upgrade flag, upgrades the data directory, and executes etcd v2.0.
### Failure Recovery

If `etcdctl cluster-health` reports that the cluster is unhealthy, the upgrade process has failed. This may happen if the network is broken or a disk has failed.

The way to recover is to manually upgrade the whole cluster to v2.0:

- Log into the machines that ran v0.4 peer-mode etcd
- Stop all etcd services
- Remove the `member` directory under the etcd data-dir
- Start the etcd service using the [2.0 flags](configuration.md). For example:

```
$ etcd --data-dir=$DATA_DIR --listen-peer-urls http://$LISTEN_PEER_ADDR \
    --advertise-client-urls http://$ADVERTISE_CLIENT_ADDR \
    --listen-client-urls http://$LISTEN_CLIENT_ADDR
```

- Once this is done, the v2.0 etcd cluster should be working.
```diff
@@ -287,7 +287,7 @@ curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=7'

 The watch command returns immediately with the same response as previously.

-**Note**: etcd only keeps the responses of the most recent 1000 events.
+**Note**: etcd only keeps the responses of the most recent 1000 events across all etcd keys.
 It is recommended to send the response to another thread to process immediately
 instead of blocking the watch while processing the result.
```
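The note's advice — hand each watch response to another goroutine so the watch loop can immediately re-issue the next wait before the 1000-event window slides past it — can be sketched like this (toy `response` type, not the etcd client API):

```go
package main

import "fmt"

// response is a toy stand-in for an etcd watch response.
type response struct {
	index int
}

// watchLoop simulates the recommended pattern: the loop hands each
// response off to a worker channel and immediately computes the next
// waitIndex to watch from, instead of processing inline.
func watchLoop(events []response, process chan<- response) int {
	next := 0
	for _, r := range events {
		process <- r       // hand off to the processing goroutine
		next = r.index + 1 // re-watch from the next index right away
	}
	return next
}

func main() {
	process := make(chan response, 8)
	done := make(chan struct{})
	go func() { // processing goroutine: does the slow work
		for r := range process {
			_ = r
		}
		close(done)
	}()

	next := watchLoop([]response{{7}, {8}, {9}}, process)
	close(process)
	<-done
	fmt.Println("next waitIndex:", next)
}
```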
Binary file not shown. Before: 7.9 KiB.

build
```diff
@@ -14,5 +14,3 @@ eval $(go env)

 # Static compilation is useful when etcd is run in a container
 CGO_ENABLED=0 go build -a -installsuffix cgo -ldflags '-s' -o bin/etcd ${REPO_PATH}
 CGO_ENABLED=0 go build -a -installsuffix cgo -ldflags '-s' -o bin/etcdctl ${REPO_PATH}/etcdctl
-go build -o bin/etcd-migrate ${REPO_PATH}/tools/etcd-migrate
-go build -o bin/etcd-dump-logs ${REPO_PATH}/tools/etcd-dump-logs
```
etcdctl/command/import_snap_command.go (new file, 128 lines)

@@ -0,0 +1,128 @@

```go
package command

import (
	"errors"
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"strings"
	"sync"

	"github.com/coreos/etcd/Godeps/_workspace/src/github.com/codegangsta/cli"
	"github.com/coreos/etcd/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd"
	"github.com/coreos/etcd/store"
)

type set struct {
	key   string
	value string
	ttl   int64
}

func NewImportSnapCommand() cli.Command {
	return cli.Command{
		Name:  "import",
		Usage: "import a snapshot to a cluster",
		Flags: []cli.Flag{
			cli.StringFlag{Name: "snap", Value: "", Usage: "Path to the valid etcd 0.4.x snapshot."},
			cli.StringSliceFlag{Name: "hidden", Value: new(cli.StringSlice), Usage: "Hidden key spaces to import from snapshot"},
			cli.IntFlag{Name: "c", Value: 10, Usage: "Number of concurrent clients to import the data"},
		},
		Action: handleImportSnap,
	}
}

func handleImportSnap(c *cli.Context) {
	d, err := ioutil.ReadFile(c.String("snap"))
	if err != nil {
		if c.String("snap") == "" {
			fmt.Printf("no snapshot file provided (use --snap)\n")
		} else {
			fmt.Printf("cannot read snapshot file %s\n", c.String("snap"))
		}
		os.Exit(1)
	}

	st := store.New()
	err = st.Recovery(d)
	if err != nil {
		fmt.Printf("cannot recover the snapshot file: %v\n", err)
		os.Exit(1)
	}

	endpoints, err := getEndpoints(c)
	if err != nil {
		handleError(ErrorFromEtcd, err)
	}
	tr, err := getTransport(c)
	if err != nil {
		handleError(ErrorFromEtcd, err)
	}

	wg := &sync.WaitGroup{}
	setc := make(chan set)
	concurrent := c.Int("c")
	fmt.Printf("starting to import snapshot %s with %d clients\n", c.String("snap"), concurrent)
	for i := 0; i < concurrent; i++ {
		client := etcd.NewClient(endpoints)
		client.SetTransport(tr)

		if c.GlobalBool("debug") {
			go dumpCURL(client)
		}

		if ok := client.SyncCluster(); !ok {
			handleError(FailedToConnectToHost, errors.New("cannot sync with the cluster using endpoints "+strings.Join(endpoints, ", ")))
		}
		wg.Add(1)
		go runSet(client, setc, wg)
	}

	all, err := st.Get("/", true, true)
	if err != nil {
		handleError(ErrorFromEtcd, err)
	}
	n := copyKeys(all.Node, setc)

	hiddens := c.StringSlice("hidden")
	for _, h := range hiddens {
		allh, err := st.Get(h, true, true)
		if err != nil {
			handleError(ErrorFromEtcd, err)
		}
		n += copyKeys(allh.Node, setc)
	}
	close(setc)
	wg.Wait()
	fmt.Printf("finished importing %d keys\n", n)
}

func copyKeys(n *store.NodeExtern, setc chan set) int {
	num := 0
	if !n.Dir {
		setc <- set{n.Key, *n.Value, n.TTL}
		return 1
	}
	log.Println("entering dir:", n.Key)
	for _, nn := range n.Nodes {
		sub := copyKeys(nn, setc)
		num += sub
	}
	return num
}

func runSet(c *etcd.Client, setc chan set, wg *sync.WaitGroup) {
	for s := range setc {
		log.Println("copying key:", s.key)
		if s.ttl != 0 && s.ttl < 300 {
			log.Printf("extending key %s's ttl to 300 seconds", s.key)
			s.ttl = 5 * 60
		}
		_, err := c.Set(s.key, s.value, uint64(s.ttl))
		if err != nil {
			log.Fatalf("failed to copy key: %v\n", err)
		}
	}
	wg.Done()
}
```
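The import command above walks the snapshot's node tree recursively and fans the keys out to concurrent worker goroutines over a channel. A standalone sketch of that pattern with a toy node type (not etcd's `store.NodeExtern`):

```go
package main

import (
	"fmt"
	"sync"
)

// node is a toy stand-in for etcd's store.NodeExtern.
type node struct {
	key   string
	value string
	dir   bool
	nodes []*node
}

// copyKeys walks the tree and sends every leaf key to setc,
// returning the number of keys sent (mirrors the command above).
func copyKeys(n *node, setc chan<- string) int {
	if !n.dir {
		setc <- n.key
		return 1
	}
	num := 0
	for _, nn := range n.nodes {
		num += copyKeys(nn, setc)
	}
	return num
}

func main() {
	root := &node{key: "/", dir: true, nodes: []*node{
		{key: "/foo", value: "bar"},
		{key: "/dir", dir: true, nodes: []*node{
			{key: "/dir/a", value: "1"},
			{key: "/dir/b", value: "2"},
		}},
	}}

	setc := make(chan string)
	var wg sync.WaitGroup
	// two concurrent "clients", as with `etcdctl import -c 2`
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for k := range setc {
				_ = k // a real client would Set the key here
			}
		}()
	}

	n := copyKeys(root, setc)
	close(setc)
	wg.Wait()
	fmt.Printf("finished importing %d keys\n", n)
}
```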
@@ -1,78 +0,0 @@ (file deleted)

```go
/*
Copyright 2015 CoreOS, Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package command

import (
	"fmt"
	"log"
	"net/http"
	"os"

	"github.com/coreos/etcd/Godeps/_workspace/src/github.com/codegangsta/cli"
	"github.com/coreos/etcd/pkg/transport"
)

func UpgradeCommand() cli.Command {
	return cli.Command{
		Name:  "upgrade",
		Usage: "upgrade an old version etcd cluster to a new version",
		Flags: []cli.Flag{
			cli.StringFlag{Name: "old-version", Value: "1", Usage: "Old internal version"},
			cli.StringFlag{Name: "new-version", Value: "2", Usage: "New internal version"},
			cli.StringFlag{Name: "peer-url", Value: "http://localhost:7001", Usage: "An etcd peer url string"},
			cli.StringFlag{Name: "peer-cert-file", Value: "", Usage: "identify HTTPS peer using this SSL certificate file"},
			cli.StringFlag{Name: "peer-key-file", Value: "", Usage: "identify HTTPS peer using this SSL key file"},
			cli.StringFlag{Name: "peer-ca-file", Value: "", Usage: "verify certificates of HTTPS-enabled peers using this CA bundle"},
		},
		Action: handleUpgrade,
	}
}

func handleUpgrade(c *cli.Context) {
	if c.String("old-version") != "1" {
		fmt.Printf("Do not support upgrade from version %s\n", c.String("old-version"))
		os.Exit(1)
	}
	if c.String("new-version") != "2" {
		fmt.Printf("Do not support upgrade to version %s\n", c.String("new-version"))
		os.Exit(1)
	}
	tls := transport.TLSInfo{
		CAFile:   c.String("peer-ca-file"),
		CertFile: c.String("peer-cert-file"),
		KeyFile:  c.String("peer-key-file"),
	}
	t, err := transport.NewTransport(tls)
	if err != nil {
		log.Fatal(err)
	}
	client := http.Client{Transport: t}
	resp, err := client.Get(c.String("peer-url") + "/v2/admin/next-internal-version")
	if err != nil {
		fmt.Printf("Failed to send upgrade request to %s: %v\n", c.String("peer-url"), err)
		return
	}
	if resp.StatusCode == http.StatusOK {
		fmt.Println("Cluster will start upgrading from internal version 1 to 2 in 10 seconds.")
		return
	}
	if resp.StatusCode == http.StatusNotFound {
		fmt.Println("Cluster cannot upgrade to 2: version is not 0.4.7")
		return
	}
	// note: the flag is --peer-url; the original read c.String("cluster-url") here
	fmt.Printf("Failed to send upgrade request to %s: bad status code %d\n", c.String("peer-url"), resp.StatusCode)
}
```
```diff
@@ -65,7 +65,7 @@ func getPeersFlagValue(c *cli.Context) []string {

 	// If we still don't have peers, use a default
 	if peerstr == "" {
-		peerstr = "127.0.0.1:4001"
+		peerstr = "127.0.0.1:4001,127.0.0.1:2379"
 	}

 	return strings.Split(peerstr, ",")
```
```diff
@@ -53,7 +53,7 @@ func main() {
 		command.NewWatchCommand(),
 		command.NewExecWatchCommand(),
 		command.NewMemberCommand(),
-		command.UpgradeCommand(),
+		command.NewImportSnapCommand(),
 	}

 	app.Run(os.Args)
```
```diff
@@ -49,7 +49,7 @@ type ServerConfig struct {
 // VerifyBootstrapConfig sanity-checks the initial config for bootstrap case
 // and returns an error for things that should never happen.
 func (c *ServerConfig) VerifyBootstrap() error {
-	if err := c.verifyLocalMember(); err != nil {
+	if err := c.verifyLocalMember(true); err != nil {
 		return err
 	}
 	if err := c.Cluster.Validate(); err != nil {
@@ -64,7 +64,10 @@ func (c *ServerConfig) VerifyBootstrap() error {
 // VerifyJoinExisting sanity-checks the initial config for join existing cluster
 // case and returns an error for things that should never happen.
 func (c *ServerConfig) VerifyJoinExisting() error {
-	if err := c.verifyLocalMember(); err != nil {
+	// no need for strict checking since the member has announced its
+	// peer urls to the cluster before starting and does not have to set
+	// them in the configuration again.
+	if err := c.verifyLocalMember(false); err != nil {
 		return err
 	}
 	if err := c.Cluster.Validate(); err != nil {
@@ -76,9 +79,10 @@ func (c *ServerConfig) VerifyJoinExisting() error {
 	return nil
 }

-// verifyLocalMember verifies that the local member is valid and is listed
-// in the cluster correctly.
-func (c *ServerConfig) verifyLocalMember() error {
+// verifyLocalMember verifies that the configured member is in the configured
+// cluster. If strict is set, it also verifies that the configured member
+// has the same peer urls as the configured advertised peer urls.
+func (c *ServerConfig) verifyLocalMember(strict bool) error {
 	m := c.Cluster.MemberByName(c.Name)
 	// Make sure the cluster at least contains the local server.
 	if m == nil {
@@ -92,8 +96,10 @@ func (c *ServerConfig) verifyLocalMember() error {
 	// TODO: Remove URLStringsEqual after improvement of using hostnames #2150 #2123
 	apurls := c.PeerURLs.StringSlice()
 	sort.Strings(apurls)
-	if !netutil.URLStringsEqual(apurls, m.PeerURLs) {
-		return fmt.Errorf("%s has different advertised URLs in the cluster and advertised peer URLs list", c.Name)
+	if strict {
+		if !netutil.URLStringsEqual(apurls, m.PeerURLs) {
+			return fmt.Errorf("%s has different advertised URLs in the cluster and advertised peer URLs list", c.Name)
+		}
 	}
 	return nil
 }
```
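The `strict` flag gates an order-insensitive comparison of peer URL lists: required at bootstrap, skipped when joining an existing cluster. A simplified standalone sketch, using plain sorted-string equality in place of etcd's `netutil.URLStringsEqual`:

```go
package main

import (
	"fmt"
	"sort"
)

// urlsMatch reports whether two peer URL lists are equal ignoring
// order (a simplification of netutil.URLStringsEqual).
func urlsMatch(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	as := append([]string(nil), a...)
	bs := append([]string(nil), b...)
	sort.Strings(as)
	sort.Strings(bs)
	for i := range as {
		if as[i] != bs[i] {
			return false
		}
	}
	return true
}

// verifyLocalMember mirrors the diff above: the URL check only runs
// when strict is set (bootstrap), not when joining an existing cluster.
func verifyLocalMember(strict bool, configured, inCluster []string) error {
	if strict && !urlsMatch(configured, inCluster) {
		return fmt.Errorf("advertised peer URLs differ from the cluster's view")
	}
	return nil
}

func main() {
	// bootstrap: a mismatch is an error
	fmt.Println(verifyLocalMember(true, []string{"http://localhost:12345"}, []string{"http://localhost:7001"}))
	// join-existing: the same mismatch is tolerated
	fmt.Println(verifyLocalMember(false, []string{"http://localhost:12345"}, []string{"http://localhost:7001"}))
}
```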
```diff
@@ -22,6 +22,9 @@ import (
 )

 func mustNewURLs(t *testing.T, urls []string) []url.URL {
+	if len(urls) == 0 {
+		return nil
+	}
 	u, err := types.NewURLs(urls)
 	if err != nil {
 		t.Fatalf("error creating new URLs from %q: %v", urls, err)
@@ -65,12 +68,14 @@ func TestConfigVerifyLocalMember(t *testing.T) {
 	tests := []struct {
 		clusterSetting string
 		apurls         []string
+		strict         bool
 		shouldError    bool
 	}{
 		{
 			// Node must exist in cluster
 			"",
 			nil,
+			true,

 			true,
 		},
@@ -78,6 +83,7 @@ func TestConfigVerifyLocalMember(t *testing.T) {
 			// Initial cluster set
 			"node1=http://localhost:7001,node2=http://localhost:7002",
 			[]string{"http://localhost:7001"},
+			true,

 			false,
 		},
@@ -85,6 +91,7 @@ func TestConfigVerifyLocalMember(t *testing.T) {
 			// Default initial cluster
 			"node1=http://localhost:2380,node1=http://localhost:7001",
 			[]string{"http://localhost:2380", "http://localhost:7001"},
+			true,

 			false,
 		},
@@ -92,6 +99,7 @@ func TestConfigVerifyLocalMember(t *testing.T) {
 			// Advertised peer URLs must match those in cluster-state
 			"node1=http://localhost:7001",
 			[]string{"http://localhost:12345"},
+			true,

 			true,
 		},
@@ -99,9 +107,26 @@ func TestConfigVerifyLocalMember(t *testing.T) {
 			// Advertised peer URLs must match those in cluster-state
 			"node1=http://localhost:7001,node1=http://localhost:12345",
 			[]string{"http://localhost:12345"},
+			true,

 			true,
 		},
+		{
+			// Advertised peer URLs must match those in cluster-state
+			"node1=http://localhost:7001",
+			[]string{},
+			true,
+
+			true,
+		},
+		{
+			// do not care about the urls if strict is not set
+			"node1=http://localhost:7001",
+			[]string{},
+			false,
+
+			false,
+		},
 	}

 	for i, tt := range tests {
@@ -116,7 +141,7 @@ func TestConfigVerifyLocalMember(t *testing.T) {
 		if tt.apurls != nil {
 			cfg.PeerURLs = mustNewURLs(t, tt.apurls)
 		}
-		err = cfg.verifyLocalMember()
+		err = cfg.verifyLocalMember(tt.strict)
 		if (err == nil) && tt.shouldError {
 			t.Errorf("%#v", *cluster)
 			t.Errorf("#%d: Got no error where one was expected", i)
```
```diff
@@ -119,7 +119,6 @@ func (h *keysHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 		writeError(w, err)
 		return
 	}
-
 	switch {
 	case resp.Event != nil:
 		if err := writeKeyEvent(w, resp.Event, h.timer); err != nil {
@@ -334,7 +333,7 @@ func serveVersion(w http.ResponseWriter, r *http.Request) {
 	if !allowMethod(w, r.Method, "GET") {
 		return
 	}
-	fmt.Fprintf(w, `{"releaseVersion":"%s","internalVersion":"%s"}`, version.Version, version.InternalVersion)
+	w.Write([]byte("etcd " + version.Version))
 }

 // parseKeyRequest converts a received http.Request on keysPrefix to
@@ -1327,7 +1327,7 @@ func TestServeVersion(t *testing.T) {
 	if rw.Code != http.StatusOK {
 		t.Errorf("code=%d, want %d", rw.Code, http.StatusOK)
 	}
-	w := fmt.Sprintf(`{"releaseVersion":"%s","internalVersion":"%s"}`, version.Version, version.InternalVersion)
+	w := fmt.Sprintf("etcd %s", version.Version)
 	if g := rw.Body.String(); g != w {
 		t.Fatalf("body = %q, want %q", g, w)
 	}
```
```diff
@@ -84,7 +84,6 @@ func (wh *watcherHub) watch(key string, recursive, stream bool, index, storeInde
 	if ok { // add the new watcher to the back of the list
 		elem = l.PushBack(w)
-
 	} else { // create a new list and add the new watcher
 		l = list.New()
 		elem = l.PushBack(w)
@@ -146,6 +145,7 @@ func (wh *watcherHub) notifyWatchers(e *Event, nodePath string, deleted bool) {
 			// if we successfully notify a watcher
 			// we need to remove the watcher from the list
 			// and decrease the counter
+			w.removed = true
 			l.Remove(curr)
 			atomic.AddInt64(&wh.count, -1)
 		}
```
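The added `w.removed = true` marks a watcher as already taken off the list, so a second removal path (such as the watcher's own cancel) does not remove it or decrement the counter again. A toy illustration of that flag pattern, with hypothetical `hub`/`watcher` types rather than etcd's store internals:

```go
package main

import "fmt"

// watcher is a toy stand-in for etcd's store watcher.
type watcher struct {
	removed bool
}

type hub struct {
	count int64
}

// notify mirrors notifyWatchers in the diff: mark the watcher
// removed before taking it off the list and decrementing.
func (h *hub) notify(w *watcher) {
	w.removed = true
	h.count--
}

// remove mirrors a watcher's cancel path: it must not decrement
// again if notify already removed the watcher.
func (h *hub) remove(w *watcher) {
	if w.removed {
		return
	}
	w.removed = true
	h.count--
}

func main() {
	h := &hub{count: 1}
	w := &watcher{}
	h.notify(w)
	h.remove(w) // no double decrement
	fmt.Println(h.count)
}
```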
````diff
@@ -15,7 +15,8 @@ etcd will detect 0.4.x data dir and update the data automatically (while leaving

 The tool can be run via:
 ```sh
-./bin/etcd-migrate --data-dir=<PATH TO YOUR DATA>
+./go build
+./etcd-migrate --data-dir=<PATH TO YOUR DATA>
 ```

 It should autodetect everything and convert the data-dir to be 2.0 compatible. It does not remove the 0.4.x data, and is safe to convert multiple times; the 2.0 data will be overwritten. Recovering the disk space once everything is settled is covered later in the document.
@@ -44,4 +45,4 @@ If the conversion has completed, the entire cluster is running on something 2.0-
 rm -ri snapshot conf log
 ```

 It will ask before every deletion, but these are the 0.4.x files and will not affect the working 2.0 data.
````
@ -23,8 +23,7 @@ import (
|
|||||||
)
|
)
|
||||||
|
|
||||||
var (
|
var (
|
||||||
Version = "2.0.7"
|
Version = "2.0.9"
|
||||||
InternalVersion = "2"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
// WalVersion is an enum for versions of etcd logs.
|
// WalVersion is an enum for versions of etcd logs.
|
||||||
|