Compare commits
50 Commits
| SHA1 |
|---|
| 9970141f76 |
| 16c2bcf951 |
| 868b7f7902 |
| 1c958f8fc3 |
| dfeecd2537 |
| ed58193ebe |
| 79c650d900 |
| a451cf2333 |
| 3455431da3 |
| 9424a10f49 |
| fbcfe8e1c4 |
| 757bb3af13 |
| 2cd367e9d9 |
| a974bbfe4f |
| 99dcc8c322 |
| 3d2523e7e0 |
| 25e69d9659 |
| 707174b56a |
| ce92cc3dc5 |
| 5bfbf3a48c |
| e04a188358 |
| a51fda3e5e |
| ca44801650 |
| 2387ef3f21 |
| d5bfca9465 |
| 7cb126967c |
| 444e017c05 |
| 356675b70f |
| d7768635fd |
| 37796ed84c |
| f007cf321d |
| ca29691543 |
| 4bebb538eb |
| c27db1ec5e |
| a5fc1d214d |
| 1df0b941d7 |
| 3a71eb9d72 |
| 001cceb1cd |
| 98ff4af7f2 |
| db4c5e0eaa |
| b3c5ed60bd |
| 673d90728e |
| 22c944d8ef |
| a2d16b52bb |
| b637b3a607 |
| 0eba3c9000 |
| c3aab42959 |
| 62560f9959 |
| 3c04f8b664 |
| cc37c58103 |
CHANGELOG (15 additions)

@@ -1,3 +1,18 @@
+v0.4.3
+* Avoid panic() on truncated or unexpected log data (#834, #833)
+* Fix missing stats field (#807)
+* Lengthen default peer removal delay to 30mins (#835)
+* Reduce logging on heartbeat timeouts (#836)
+
+v0.4.2
+* Improvements to the clustering documents
+* Set content-type properly on errors (#469)
+* Standbys re-join if they should be part of the cluster (#810, #815, #818)
+
+v0.4.1
+* Re-introduce DELETE on the machines endpoint
+* Document the machines endpoint
+
 v0.4.0
 * Introduced standby mode
 * Added HEAD requests
@@ -1205,21 +1205,22 @@ The configuration endpoint manages shared cluster wide properties.

 ### Set Cluster Config

 ```sh
-curl -L http://127.0.0.1:7001/v2/admin/config -XPUT -d '{"activeSize":3, "promoteDelay":1800}'
+curl -L http://127.0.0.1:7001/v2/admin/config -XPUT -d '{"activeSize":3, "removeDelay":1800, "syncInterval":5}'
 ```

 ```json
 {
     "activeSize": 3,
-    "promoteDelay": 1800
+    "removeDelay": 1800,
+    "syncInterval": 5
 }
 ```

 `activeSize` is the maximum number of peers that can join the cluster and participate in the consensus protocol.

-The size of cluster is controlled to be around a certain number. If it is not, it will promote standby-mode instances or demote peer-mode instances to make it happen.
+The size of the cluster is kept close to this number. If it is not, standby-mode instances will join or peer-mode instances will be removed to bring it back in line.

-`promoteDelay` indicates the minimum length of delay that has been observed before promotion or demotion.
+`removeDelay` indicates the minimum time that a machine must be observed to be unresponsive before it is removed from the cluster.
@@ -1230,6 +1231,61 @@ curl -L http://127.0.0.1:7001/v2/admin/config

 ```json
 {
     "activeSize": 3,
-    "promoteDelay": 1800
+    "removeDelay": 1800,
+    "syncInterval": 5
 }
 ```

+## Remove Machines
+
+At times you may want to manually remove a machine. Using the machines endpoint
+you can find and remove machines.
+
+First, list all the machines in the cluster.
+
+```sh
+curl -L http://127.0.0.1:7001/v2/admin/machines
+```
+
+```json
+[
+    {
+        "clientURL": "http://127.0.0.1:4001",
+        "name": "peer1",
+        "peerURL": "http://127.0.0.1:7001",
+        "state": "leader"
+    },
+    {
+        "clientURL": "http://127.0.0.1:4002",
+        "name": "peer2",
+        "peerURL": "http://127.0.0.1:7002",
+        "state": "follower"
+    },
+    {
+        "clientURL": "http://127.0.0.1:4003",
+        "name": "peer3",
+        "peerURL": "http://127.0.0.1:7003",
+        "state": "follower"
+    }
+]
+```
+
+Then take a closer look at the machine you want to remove.
+
+```sh
+curl -L http://127.0.0.1:7001/v2/admin/machines/peer2
+```
+
+```json
+{
+    "clientURL": "http://127.0.0.1:4002",
+    "name": "peer2",
+    "peerURL": "http://127.0.0.1:7002",
+    "state": "follower"
+}
+```
+
+And finally remove it.
+
+```sh
+curl -L -XDELETE http://127.0.0.1:7001/v2/admin/machines/peer2
+```
@@ -53,3 +53,9 @@ The Discovery API submits the `-peer-addr` of each etcd instance to the configured endpoint.

 The discovery API will automatically clean up the address of a stale peer that is no longer part of the cluster. The TTL for this process is a week, which should be long enough to handle any extremely long outage you may encounter. There is no harm in having stale peers in the list until they are cleaned up, since an etcd instance only needs to connect to one valid peer in the cluster to join.

 [discovery-design]: https://github.com/coreos/etcd/blob/master/Documentation/design/cluster-finding.md
+
+## Lifetime of a Discovery URL
+
+A discovery URL identifies a single etcd cluster. Do not re-use discovery URLs for new clusters.
+
+When a machine starts with a new discovery URL, the discovery URL is activated and records the machine's metadata. If you destroy the whole cluster and attempt to bring it back up with the same discovery URL, this will fail. That is intentional: all of the registered machines are gone, including their logs, so there is nothing left from which to recover the killed cluster.
@@ -107,7 +107,7 @@ curl -L http://127.0.0.1:4001/v2/keys/foo -XPUT -d value=bar

 If one machine disconnects from the cluster, it can rejoin the cluster automatically once communication is restored.

-If one machine is killed, it could rejoin the cluster when started with old name. If the peer address is changed, etcd will treat the new peer address as the refreshed one, which benefits instance migration, or virtual machine boot with different IP.
+If one machine is killed, it can rejoin the cluster when restarted with its old name. If its peer address has changed, etcd treats the new peer address as a refresh of the old one, which helps with instance migration or a virtual machine booting with a different IP. Changing the peer address is only supported while the majority of the cluster is alive, because this behavior requires the consensus of the etcd cluster.

 **Note:** For now, it is the user's responsibility to ensure that the machine does not join a cluster that already has a member with the same name; otherwise an unexpected error will occur. This will be improved in a future release.
@@ -167,15 +167,3 @@ Etcd can also do internal server-to-server communication using SSL client certs.

 To do this just change the `-*-file` flags to `-peer-*-file`.

 If you are using SSL for server-to-server communication, you must use it on all instances of etcd.
-
-### What size cluster should I use?
-
-Every command the client sends to the master is broadcast to all of the followers.
-The command is not committed until the majority of the cluster peers receive that command.
-
-Because of this majority voting property, the ideal cluster should be kept small to keep speed up and be made up of an odd number of peers.
-
-Odd numbers are good because if you have 8 peers the majority will be 5 and if you have 9 peers the majority will still be 5.
-The result is that an 8 peer cluster can tolerate 3 peer failures and a 9 peer cluster can tolerate 4 machine failures.
-And in the best case when all 9 peers are responding the cluster will perform at the speed of the fastest 5 machines.
@@ -18,8 +18,8 @@ If there are not enough peers to meet the active size, standbys will send join requests to the cluster.

 If there are more peers than the target active size then peers are removed by the leader and will become standbys.

 The remove delay specifies how long the cluster should wait before removing a dead peer.
-By default this is 5 seconds.
-If a peer is inactive for 5 seconds then the peer is removed.
+By default this is 30 minutes.
+If a peer is inactive for 30 minutes then the peer is removed.

 The standby sync interval specifies the synchronization interval of standbys with the cluster.
 By default this is 5 seconds.
@@ -20,6 +20,7 @@

 - [transitorykris/etcd-py](https://github.com/transitorykris/etcd-py)
 - [jplana/python-etcd](https://github.com/jplana/python-etcd) - Supports v2
 - [russellhaering/txetcd](https://github.com/russellhaering/txetcd) - a Twisted Python library
+- [cholcombe973/autodock](https://github.com/cholcombe973/autodock) - A Docker deployment automation tool

 **Node libraries**
@@ -1,30 +1,38 @@
 # Optimal etcd Cluster Size

-etcd's Raft consensus algorithm is most efficient in small clusters between 3 and 9 peers. Let's briefly explore how etcd works internally to understand why.
+etcd's Raft consensus algorithm is most efficient in small clusters between 3 and 9 peers. For clusters larger than 9, etcd will select a subset of instances to participate in the algorithm in order to keep it efficient. The end of this document briefly explores how etcd works internally and why these choices have been made.

 ## Cluster Management

-Currently, each CoreOS machine is an etcd peer — if you have 30 CoreOS machines, you have 30 etcd peers and end up with a cluster size that is way too large. If desired, you may manually stop some of these etcd instances to increase cluster performance.
+You can manage the active cluster size through the [cluster config API](https://github.com/coreos/etcd/blob/master/Documentation/api.md#cluster-config). `activeSize` represents the etcd peers allowed to actively participate in the consensus algorithm.

-Functionality is being developed to expose two different types of followers: active and benched followers. Active followers will influence operations within the cluster. Benched followers will not participate, but will transparently proxy etcd traffic to an active follower. This allows every CoreOS machine to expose etcd on port 4001 for ease of use. Benched followers will have the ability to transition into an active follower if needed.
+If the total number of etcd instances exceeds this number, additional peers are started as [standbys](https://github.com/coreos/etcd/blob/master/Documentation/design/standbys.md), which can be promoted to active participation if one of the existing active instances fails or is removed.

-## Writing to etcd
+## Internals of etcd
+
+### Writing to etcd

 Writes to an etcd peer are always redirected to the leader of the cluster and distributed to all of the peers immediately. A write is only considered successful when a majority of the peers acknowledge the write.

-For example, in a 5 node cluster, a write operation is only as fast as the 3rd fastest machine. This is the main reason for keeping your etcd cluster below 9 nodes. In practice, you only need to worry about write performance in high latency environments such as a cluster spanning multiple data centers.
+For example, in a cluster with 5 peers, a write operation is only as fast as the 3rd fastest machine. This is the main reason for keeping the number of active peers below 9. In practice, you only need to worry about write performance in high latency environments such as a cluster spanning multiple data centers.

-## Leader Election
+### Leader Election

-The leader election process is similar to writing a key — a majority of the cluster must acknowledge the new leader before cluster operations can continue. The longer each node takes to elect a new leader means you have to wait longer before you can write to the cluster again. In low latency environments this process takes milliseconds.
+The leader election process is similar to writing a key — a majority of the active peers must acknowledge the new leader before cluster operations can continue. The longer each peer takes to elect a new leader, the longer you have to wait before you can write to the cluster again. In low latency environments this process takes milliseconds.

-## Odd Cluster Size
+### Odd Active Cluster Size

-The other important cluster optimization is to always have an odd cluster size. Adding an odd node to the cluster doesn't change the size of the majority and therefore doesn't increase the total latency of the majority as described above. But you do gain a higher tolerance for peer failure by adding the extra machine. You can see this in practice when comparing two even and odd sized clusters:
+The other important cluster optimization is to always have an odd active cluster size (i.e. `activeSize`). Going from an even number of peers to the next odd number doesn't change the size of the majority and therefore doesn't increase the total latency of the majority as described above, but you do gain a higher tolerance for peer failure by adding the extra machine. You can see this in practice when comparing even and odd sized clusters:

-| Cluster Size | Majority | Failure Tolerance |
+| Active Peers | Majority | Failure Tolerance |
 |--------------|------------|-------------------|
+| 1 peer | 1 peer | None |
+| 3 peers | 2 peers | 1 peer |
+| 4 peers | 3 peers | 1 peer |
+| 5 peers | 3 peers | **2 peers** |
+| 6 peers | 4 peers | 2 peers |
+| 7 peers | 4 peers | **3 peers** |
-| 8 machines | 5 machines | 3 machines |
-| 9 machines | 5 machines | **4 machines** |
+| 8 peers | 5 peers | 3 peers |
+| 9 peers | 5 peers | **4 peers** |

-As you can see, adding another node to bring the cluster up to an odd size is always worth it. During a network partition, an odd cluster size also guarantees that there will almost always be a majority of the cluster that can continue to operate and be the source of truth when the partition ends.
+As you can see, adding another peer to bring the number of active peers up to an odd size is always worth it. During a network partition, an odd number of active peers also guarantees that there will almost always be a majority of the cluster that can continue to operate and be the source of truth when the partition ends.
@@ -1,6 +1,6 @@
 # etcd

-README version 0.4.0
+README version 0.4.3

 A highly-available key value store for shared configuration and service discovery.
 etcd is inspired by [Apache ZooKeeper][zookeeper] and [doozer][doozer], with a focus on being:
@@ -143,5 +143,6 @@ func (e Error) Write(w http.ResponseWriter) {
 		status = http.StatusInternalServerError
 	}
-	http.Error(w, e.toJsonString(), status)
+	w.WriteHeader(status)
+	fmt.Fprintln(w, e.toJsonString())
 }
@@ -232,6 +232,7 @@ func (e *Etcd) Run() {
 		DataDir: e.Config.DataDir,
 	}
 	e.StandbyServer = server.NewStandbyServer(ssConfig, client)
+	e.StandbyServer.SetRaftServer(raftServer)

 	// Generating config could be slow.
 	// Put it here to make listen happen immediately after peer-server starting.
@@ -347,6 +348,7 @@ func (e *Etcd) runServer() {
 		raftServer.SetElectionTimeout(electionTimeout)
 		raftServer.SetHeartbeatInterval(heartbeatInterval)
 		e.PeerServer.SetRaftServer(raftServer, e.Config.Snapshot)
+		e.StandbyServer.SetRaftServer(raftServer)

 		e.PeerServer.SetJoinIndex(e.StandbyServer.JoinIndex())
 		e.setMode(PeerMode)
Binary file not shown.
@@ -33,6 +33,8 @@ function package {
 	cp ${proj}/README.md ${target}/README-${proj}.md
 }

+mkdir release
+cd release

 for i in darwin windows linux; do
 	export GOOS=${i}
@@ -12,7 +12,7 @@ const (
 	MinActiveSize = 3

 	// DefaultRemoveDelay is the default elapsed time before removal.
-	DefaultRemoveDelay = float64((5 * time.Second) / time.Second)
+	DefaultRemoveDelay = float64((30 * time.Minute) / time.Second)

 	// MinRemoveDelay is the minimum remove delay allowed.
 	MinRemoveDelay = float64((2 * time.Second) / time.Second)
@@ -23,6 +23,10 @@ import (
 )

 const (
+	// MaxHeartbeatTimeoutBackoff is the maximum number of seconds before we warn
+	// the user again about a peer not accepting heartbeats.
+	MaxHeartbeatTimeoutBackoff = 15 * time.Second
+
 	// ThresholdMonitorTimeout is the time between log notifications that the
 	// Raft heartbeat is too close to the election timeout.
 	ThresholdMonitorTimeout = 5 * time.Second
@@ -70,10 +74,18 @@ type PeerServer struct {
 	routineGroup         sync.WaitGroup
 	timeoutThresholdChan chan interface{}

+	logBackoffs map[string]*logBackoff
+
 	metrics *metrics.Bucket
 	sync.Mutex
 }

+type logBackoff struct {
+	next    time.Time
+	backoff time.Duration
+	count   int
+}
+
 // TODO: find a good policy to do snapshot
 type snapshotConf struct {
 	// Etcd will check if a snapshot is needed every checkingInterval
@@ -97,6 +109,7 @@ func NewPeerServer(psConfig PeerServerConfig, client *Client, registry *Registry
 		serverStats: serverStats,

 		timeoutThresholdChan: make(chan interface{}, 1),
+		logBackoffs:          make(map[string]*logBackoff),

 		metrics: mb,
 	}
@@ -214,6 +227,7 @@ func (s *PeerServer) FindCluster(discoverURL string, peers []string) (toStart bo
 		// TODO(yichengq): Think about the action that should be done
 		// if it cannot connect to any of the previously known nodes.
 		log.Debugf("%s is restarting the cluster %v", name, possiblePeers)
+		s.SetJoinIndex(s.raftServer.CommitIndex())
 		toStart = true
 		return
 	}
@@ -355,6 +369,7 @@ func (s *PeerServer) HTTPHandler() http.Handler {
 	router.HandleFunc("/v2/admin/config", s.setClusterConfigHttpHandler).Methods("PUT")
 	router.HandleFunc("/v2/admin/machines", s.getMachinesHttpHandler).Methods("GET")
 	router.HandleFunc("/v2/admin/machines/{name}", s.getMachineHttpHandler).Methods("GET")
+	router.HandleFunc("/v2/admin/machines/{name}", s.RemoveHttpHandler).Methods("DELETE")

 	return router
 }
@@ -625,7 +640,7 @@ func (s *PeerServer) joinByPeer(server raft.Server, peer string, scheme string)
 }

 func (s *PeerServer) Stats() []byte {
-	s.serverStats.LeaderInfo.Uptime = time.Now().Sub(s.serverStats.LeaderInfo.startTime).String()
+	s.serverStats.LeaderInfo.Uptime = time.Now().Sub(s.serverStats.LeaderInfo.StartTime).String()

 	// TODO: register state listener to raft to change this field
 	// rather than compare the state each time Stats() is called.
@@ -685,11 +700,12 @@ func (s *PeerServer) raftEventLogger(event raft.Event) {
 	case raft.RemovePeerEventType:
 		log.Infof("%s: peer removed: '%v'", s.Config.Name, value)
 	case raft.HeartbeatIntervalEventType:
-		var name = "<unknown>"
-		if peer, ok := value.(*raft.Peer); ok {
-			name = peer.Name
+		peer, ok := value.(*raft.Peer)
+		if !ok {
+			log.Warnf("%s: heartbeat timeout from unknown peer", s.Config.Name)
+			return
 		}
-		log.Infof("%s: warning: heartbeat timed out: '%v'", s.Config.Name, name)
+		s.logHeartbeatTimeout(peer)
 	case raft.ElectionTimeoutThresholdEventType:
 		select {
 		case s.timeoutThresholdChan <- value:
@@ -699,6 +715,35 @@ func (s *PeerServer) raftEventLogger(event raft.Event) {
 	}
 }

+// logHeartbeatTimeout logs about the edge triggered heartbeat timeout event
+// only if we haven't warned within a reasonable interval.
+func (s *PeerServer) logHeartbeatTimeout(peer *raft.Peer) {
+	b, ok := s.logBackoffs[peer.Name]
+	if !ok {
+		b = &logBackoff{time.Time{}, time.Second, 1}
+		s.logBackoffs[peer.Name] = b
+	}
+
+	if peer.LastActivity().After(b.next) {
+		b.next = time.Time{}
+		b.backoff = time.Second
+		b.count = 1
+	}
+
+	if b.next.After(time.Now()) {
+		b.count++
+		return
+	}
+
+	b.backoff = 2 * b.backoff
+	if b.backoff > MaxHeartbeatTimeoutBackoff {
+		b.backoff = MaxHeartbeatTimeoutBackoff
+	}
+	b.next = time.Now().Add(b.backoff)
+
+	log.Infof("%s: warning: heartbeat timed out peer=%q missed=%d backoff=%q", s.Config.Name, peer.Name, b.count, b.backoff)
+}
+
 func (s *PeerServer) recordMetricEvent(event raft.Event) {
 	name := fmt.Sprintf("raft.event.%s", event.Type())
 	value := event.Value().(time.Duration)
@@ -13,9 +13,9 @@ type raftServerStats struct {
 	StartTime time.Time `json:"startTime"`

 	LeaderInfo struct {
-		Name      string `json:"leader"`
-		Uptime    string `json:"uptime"`
-		startTime time.Time
+		Name      string    `json:"leader"`
+		Uptime    string    `json:"uptime"`
+		StartTime time.Time `json:"startTime"`
 	} `json:"leaderInfo"`

 	RecvAppendRequestCnt uint64 `json:"recvAppendRequestCnt,"`
@@ -43,7 +43,7 @@ func NewRaftServerStats(name string) *raftServerStats {
 			back: -1,
 		},
 	}
-	stats.LeaderInfo.startTime = time.Now()
+	stats.LeaderInfo.StartTime = time.Now()
 	return stats
 }

@@ -54,7 +54,7 @@ func (ss *raftServerStats) RecvAppendReq(leaderName string, pkgSize int) {
 	ss.State = raft.Follower
 	if leaderName != ss.LeaderInfo.Name {
 		ss.LeaderInfo.Name = leaderName
-		ss.LeaderInfo.startTime = time.Now()
+		ss.LeaderInfo.StartTime = time.Now()
 	}

 	ss.recvRateQueue.Insert(NewPackageStats(time.Now(), pkgSize))
@@ -70,7 +70,7 @@ func (ss *raftServerStats) SendAppendReq(pkgSize int) {
 	if ss.State != raft.Leader {
 		ss.State = raft.Leader
 		ss.LeaderInfo.Name = ss.Name
-		ss.LeaderInfo.startTime = now
+		ss.LeaderInfo.StartTime = now
 	}

 	ss.sendRateQueue.Insert(NewPackageStats(now, pkgSize))
@@ -1,3 +1,3 @@
 package server

-const ReleaseVersion = "0.4.0"
+const ReleaseVersion = "0.4.3"
@@ -36,8 +36,9 @@ type standbyInfo struct {
 }

 type StandbyServer struct {
-	Config StandbyServerConfig
-	client *Client
+	Config     StandbyServerConfig
+	client     *Client
+	raftServer raft.Server

 	standbyInfo
 	joinIndex uint64
@@ -62,6 +63,10 @@ func NewStandbyServer(config StandbyServerConfig, client *Client) *StandbyServer
 	return s
 }

+func (s *StandbyServer) SetRaftServer(raftServer raft.Server) {
+	s.raftServer = raftServer
+}
+
 func (s *StandbyServer) Start() {
 	s.Lock()
 	defer s.Unlock()
@@ -235,6 +240,13 @@ func (s *StandbyServer) syncCluster(peerURLs []string) error {
 }

 func (s *StandbyServer) join(peer string) error {
+	for _, url := range s.ClusterURLs() {
+		if s.Config.PeerURL == url {
+			s.joinIndex = s.raftServer.CommitIndex()
+			return nil
+		}
+	}
+
 	// Our version must match the leaders version
 	version, err := s.client.GetVersion(peer)
 	if err != nil {
@@ -24,12 +24,15 @@ func TestV2GetKey(t *testing.T) {
 		v.Set("value", "XXX")
 		fullURL := fmt.Sprintf("%s%s", s.URL(), "/v2/keys/foo/bar")
 		resp, _ := tests.Get(fullURL)
+		assert.Equal(t, resp.Header.Get("Content-Type"), "application/json")
 		assert.Equal(t, resp.StatusCode, http.StatusNotFound)

 		resp, _ = tests.PutForm(fullURL, v)
+		assert.Equal(t, resp.Header.Get("Content-Type"), "application/json")
 		tests.ReadBody(resp)

 		resp, _ = tests.Get(fullURL)
+		assert.Equal(t, resp.Header.Get("Content-Type"), "application/json")
 		assert.Equal(t, resp.StatusCode, http.StatusOK)
 		body := tests.ReadBodyJSON(resp)
 		assert.Equal(t, body["action"], "get", "")
@@ -4,6 +4,7 @@ import (
 	"bytes"
 	"os"
+	"strconv"
 	"strings"
 	"testing"
 	"time"

@@ -100,6 +101,8 @@ func TestTLSMultiNodeKillAllAndRecovery(t *testing.T) {
 		t.Fatal("cannot create cluster")
 	}

+	time.Sleep(time.Second)
+
 	c := etcd.NewClient(nil)

 	go Monitor(clusterSize, clusterSize, leaderChan, all, stop)

@@ -239,3 +242,74 @@ func TestMultiNodeKillAllAndRecoveryWithStandbys(t *testing.T) {
 	assert.NoError(t, err)
 	assert.Equal(t, len(result.Node.Nodes), 7)
 }
+
+// Create a five-node cluster.
+// Kill all the nodes and restart, then remove the leader.
+func TestMultiNodeKillAllAndRecoveryAndRemoveLeader(t *testing.T) {
+	procAttr := new(os.ProcAttr)
+	procAttr.Files = []*os.File{nil, os.Stdout, os.Stderr}
+
+	stop := make(chan bool)
+	leaderChan := make(chan string, 1)
+	all := make(chan bool, 1)
+
+	clusterSize := 5
+	argGroup, etcds, err := CreateCluster(clusterSize, procAttr, false)
+	defer DestroyCluster(etcds)
+
+	if err != nil {
+		t.Fatal("cannot create cluster")
+	}
+
+	c := etcd.NewClient(nil)
+
+	go Monitor(clusterSize, clusterSize, leaderChan, all, stop)
+	<-all
+	<-leaderChan
+	stop <- true
+
+	// It needs some time to sync current commits and write them to disk.
+	// Otherwise some instance may restart as a new peer, and we don't yet
+	// support reconnecting to an old cluster that lost its majority
+	// without the log.
+	time.Sleep(time.Second)
+
+	c.SyncCluster()
+
+	// kill all
+	DestroyCluster(etcds)
+
+	time.Sleep(time.Second)
+
+	stop = make(chan bool)
+	leaderChan = make(chan string, 1)
+	all = make(chan bool, 1)
+
+	time.Sleep(time.Second)
+
+	for i := 0; i < clusterSize; i++ {
+		etcds[i], err = os.StartProcess(EtcdBinPath, argGroup[i], procAttr)
+	}
+
+	go Monitor(clusterSize, 1, leaderChan, all, stop)
+
+	<-all
+	leader := <-leaderChan
+
+	_, err = c.Set("foo", "bar", 0)
+	if err != nil {
+		t.Fatalf("Recovery error: %s", err)
+	}
+
+	port, _ := strconv.Atoi(strings.Split(leader, ":")[2])
+	num := port - 7000
+	resp, _ := tests.Delete(leader+"/v2/admin/machines/node"+strconv.Itoa(num), "application/json", nil)
+	if !assert.Equal(t, resp.StatusCode, 200) {
+		t.FailNow()
+	}
+
+	// check that the old leader is in standby mode now
+	time.Sleep(time.Second)
+	resp, _ = tests.Get(leader + "/name")
+	assert.Equal(t, resp.StatusCode, 404)
+}
@@ -31,7 +31,7 @@ func TestRemoveNode(t *testing.T) {

 	c.SyncCluster()

-	resp, _ := tests.Put("http://localhost:7001/v2/admin/config", "application/json", bytes.NewBufferString(`{"activeSize":4, "syncInterval":1}`))
+	resp, _ := tests.Put("http://localhost:7001/v2/admin/config", "application/json", bytes.NewBufferString(`{"activeSize":4, "syncInterval":5}`))
 	if !assert.Equal(t, resp.StatusCode, 200) {
 		t.FailNow()
 	}
@@ -41,11 +41,6 @@ func TestRemoveNode(t *testing.T) {
 	client := &http.Client{}

 	for i := 0; i < 2; i++ {
-		r, _ := tests.Put("http://localhost:7001/v2/admin/config", "application/json", bytes.NewBufferString(`{"activeSize":3}`))
-		if !assert.Equal(t, r.StatusCode, 200) {
-			t.FailNow()
-		}
-
 		client.Do(rmReq)

 		fmt.Println("send remove to node3 and wait for its exiting")
@@ -76,12 +71,7 @@ func TestRemoveNode(t *testing.T) {
 			panic(err)
 		}

-		r, _ = tests.Put("http://localhost:7001/v2/admin/config", "application/json", bytes.NewBufferString(`{"activeSize":4}`))
-		if !assert.Equal(t, r.StatusCode, 200) {
-			t.FailNow()
-		}
-
-		time.Sleep(time.Second + time.Second)
+		time.Sleep(time.Second + 5*time.Second)

 		resp, err = c.Get("_etcd/machines", false, false)

@@ -96,11 +86,6 @@ func TestRemoveNode(t *testing.T) {

 	// first kill the node, then remove it, then add it back
 	for i := 0; i < 2; i++ {
-		r, _ := tests.Put("http://localhost:7001/v2/admin/config", "application/json", bytes.NewBufferString(`{"activeSize":3}`))
-		if !assert.Equal(t, r.StatusCode, 200) {
-			t.FailNow()
-		}
-
 		etcds[2].Kill()
 		fmt.Println("kill node3 and wait for its exiting")
 		etcds[2].Wait()
@@ -131,11 +116,6 @@ func TestRemoveNode(t *testing.T) {
 			panic(err)
 		}

-		r, _ = tests.Put("http://localhost:7001/v2/admin/config", "application/json", bytes.NewBufferString(`{"activeSize":4}`))
-		if !assert.Equal(t, r.StatusCode, 200) {
-			t.FailNow()
-		}
-
 		time.Sleep(time.Second + time.Second)

 		resp, err = c.Get("_etcd/machines", false, false)
@@ -169,7 +149,8 @@ func TestRemovePausedNode(t *testing.T) {
 	if !assert.Equal(t, r.StatusCode, 200) {
 		t.FailNow()
 	}
-	time.Sleep(2 * time.Second)
+	// Wait for standby instances to update their cluster config
+	time.Sleep(6 * time.Second)

 	resp, err := c.Get("_etcd/machines", false, false)
 	if err != nil {
@@ -89,7 +89,7 @@ func TestSnapshot(t *testing.T) {

 	index, _ = strconv.Atoi(snapshots[0].Name()[2:6])

-	if index < 1010 || index > 1025 {
+	if index < 1010 || index > 1029 {
 		t.Fatal("wrong name of snapshot :", snapshots[0].Name())
 	}
 }
@@ -8,6 +8,7 @@ import (
	"time"

	"github.com/coreos/etcd/server"
	"github.com/coreos/etcd/store"
	"github.com/coreos/etcd/tests"
	"github.com/coreos/etcd/third_party/github.com/coreos/go-etcd/etcd"
	"github.com/coreos/etcd/third_party/github.com/stretchr/testify/assert"

@@ -279,3 +280,61 @@ func TestStandbyDramaticChange(t *testing.T) {
		}
	}
}
+
+func TestStandbyJoinMiss(t *testing.T) {
+	clusterSize := 2
+	_, etcds, err := CreateCluster(clusterSize, &os.ProcAttr{Files: []*os.File{nil, os.Stdout, os.Stderr}}, false)
+	if err != nil {
+		t.Fatal("cannot create cluster")
+	}
+	defer DestroyCluster(etcds)
+
+	c := etcd.NewClient(nil)
+	c.SyncCluster()
+
+	time.Sleep(1 * time.Second)
+
+	// Verify that we have two machines.
+	result, err := c.Get("_etcd/machines", false, true)
+	assert.NoError(t, err)
+	assert.Equal(t, len(result.Node.Nodes), clusterSize)
+
+	resp, _ := tests.Put("http://localhost:7001/v2/admin/config", "application/json", bytes.NewBufferString(`{"removeDelay":4, "syncInterval":4}`))
+	if !assert.Equal(t, resp.StatusCode, 200) {
+		t.FailNow()
+	}
+	time.Sleep(time.Second)
+
+	resp, _ = tests.Delete("http://localhost:7001/v2/admin/machines/node2", "application/json", nil)
+	if !assert.Equal(t, resp.StatusCode, 200) {
+		t.FailNow()
+	}
+
+	// Wait for a monitor cycle before checking for removal.
+	time.Sleep(server.ActiveMonitorTimeout + (1 * time.Second))
+
+	// Verify that we now have one peer.
+	result, err = c.Get("_etcd/machines", false, true)
+	assert.NoError(t, err)
+	assert.Equal(t, len(result.Node.Nodes), 1)
+
+	// Simulate the join failure
+	_, err = server.NewClient(nil).AddMachine("http://localhost:7001",
+		&server.JoinCommand{
+			MinVersion: store.MinVersion(),
+			MaxVersion: store.MaxVersion(),
+			Name:       "node2",
+			RaftURL:    "http://127.0.0.1:7002",
+			EtcdURL:    "http://127.0.0.1:4002",
+		})
+	assert.NoError(t, err)
+
+	time.Sleep(6 * time.Second)
+
+	go tests.Delete("http://localhost:7001/v2/admin/machines/node2", "application/json", nil)
+
+	time.Sleep(time.Second)
+	result, err = c.Get("_etcd/machines", false, true)
+	assert.NoError(t, err)
+	assert.Equal(t, len(result.Node.Nodes), 1)
+}
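The expected hex strings in the protobuf encode/decode hunks that follow each begin with a field key. As a reminder of how those keys are formed (an illustrative sketch, not part of the diff): the key is `(field_number << 3) | wire_type`, so `"0807"` opens with `0x08`, meaning field 1 with wire type 0 (varint), followed by the value 7.

```go
package main

import "fmt"

// fieldKey computes the one-byte protobuf field key for field numbers up to 15:
// (field_number << 3) | wire_type. Larger field numbers need a multi-byte varint key.
func fieldKey(field, wire byte) byte {
	return field<<3 | wire
}

func main() {
	fmt.Printf("%02x\n", fieldKey(1, 0))  // 0x08 — "0807": field 1, varint, value 7
	fmt.Printf("%02x\n", fieldKey(10, 0)) // 0x50 — "5001": field 10, varint, value 1
	fmt.Printf("%02x\n", fieldKey(13, 5)) // 0x6d — "6d20000000": field 13, fixed32
}
```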
@@ -507,61 +507,61 @@ func TestReset(t *testing.T) {
func TestEncodeDecode1(t *testing.T) {
	pb := initGoTest(false)
	overify(t, pb,
		"0807"+ // field 1, encoding 0, value 7
		"220d"+"0a056c6162656c120474797065"+ // field 4, encoding 2 (GoTestField)
		"5001"+ // field 10, encoding 0, value 1
		"5803"+ // field 11, encoding 0, value 3
		"6006"+ // field 12, encoding 0, value 6
		"6d20000000"+ // field 13, encoding 5, value 0x20
		"714000000000000000"+ // field 14, encoding 1, value 0x40
		"78a019"+ // field 15, encoding 0, value 0xca0 = 3232
		"8001c032"+ // field 16, encoding 0, value 0x1940 = 6464
		"8d0100004a45"+ // field 17, encoding 5, value 3232.0
		"9101000000000040b940"+ // field 18, encoding 1, value 6464.0
		"9a0106"+"737472696e67"+ // field 19, encoding 2, string "string"
		"b304"+ // field 70, encoding 3, start group
		"ba0408"+"7265717569726564"+ // field 71, encoding 2, string "required"
		"b404"+ // field 70, encoding 4, end group
		"aa0605"+"6279746573"+ // field 101, encoding 2, string "bytes"
		"b0063f"+ // field 102, encoding 0, 0x3f zigzag32
		"b8067f") // field 103, encoding 0, 0x7f zigzag64
}

// All required fields set, defaults provided.
func TestEncodeDecode2(t *testing.T) {
	pb := initGoTest(true)
	overify(t, pb,
		"0807"+ // field 1, encoding 0, value 7
		"220d"+"0a056c6162656c120474797065"+ // field 4, encoding 2 (GoTestField)
		"5001"+ // field 10, encoding 0, value 1
		"5803"+ // field 11, encoding 0, value 3
		"6006"+ // field 12, encoding 0, value 6
		"6d20000000"+ // field 13, encoding 5, value 32
		"714000000000000000"+ // field 14, encoding 1, value 64
		"78a019"+ // field 15, encoding 0, value 3232
		"8001c032"+ // field 16, encoding 0, value 6464
		"8d0100004a45"+ // field 17, encoding 5, value 3232.0
		"9101000000000040b940"+ // field 18, encoding 1, value 6464.0
		"9a0106"+"737472696e67"+ // field 19, encoding 2 string "string"
		"c00201"+ // field 40, encoding 0, value 1
		"c80220"+ // field 41, encoding 0, value 32
		"d00240"+ // field 42, encoding 0, value 64
		"dd0240010000"+ // field 43, encoding 5, value 320
		"e1028002000000000000"+ // field 44, encoding 1, value 640
		"e8028019"+ // field 45, encoding 0, value 3200
		"f0028032"+ // field 46, encoding 0, value 6400
		"fd02e0659948"+ // field 47, encoding 5, value 314159.0
		"81030000000050971041"+ // field 48, encoding 1, value 271828.0
		"8a0310"+"68656c6c6f2c2022776f726c6421220a"+ // field 49, encoding 2 string "hello, \"world!\"\n"
		"b304"+ // start group field 70 level 1
		"ba0408"+"7265717569726564"+ // field 71, encoding 2, string "required"
		"b404"+ // end group field 70 level 1
		"aa0605"+"6279746573"+ // field 101, encoding 2 string "bytes"
		"b0063f"+ // field 102, encoding 0, 0x3f zigzag32
		"b8067f"+ // field 103, encoding 0, 0x7f zigzag64
		"8a1907"+"4269676e6f7365"+ // field 401, encoding 2, string "Bignose"
		"90193f"+ // field 402, encoding 0, value 63
		"98197f") // field 403, encoding 0, value 127
}
@@ -583,37 +583,37 @@ func TestEncodeDecode3(t *testing.T) {
	pb.F_Sint64Defaulted = Int64(-64)

	overify(t, pb,
		"0807"+ // field 1, encoding 0, value 7
		"220d"+"0a056c6162656c120474797065"+ // field 4, encoding 2 (GoTestField)
		"5001"+ // field 10, encoding 0, value 1
		"5803"+ // field 11, encoding 0, value 3
		"6006"+ // field 12, encoding 0, value 6
		"6d20000000"+ // field 13, encoding 5, value 32
		"714000000000000000"+ // field 14, encoding 1, value 64
		"78a019"+ // field 15, encoding 0, value 3232
		"8001c032"+ // field 16, encoding 0, value 6464
		"8d0100004a45"+ // field 17, encoding 5, value 3232.0
		"9101000000000040b940"+ // field 18, encoding 1, value 6464.0
		"9a0106"+"737472696e67"+ // field 19, encoding 2 string "string"
		"c00201"+ // field 40, encoding 0, value 1
		"c80220"+ // field 41, encoding 0, value 32
		"d00240"+ // field 42, encoding 0, value 64
		"dd0240010000"+ // field 43, encoding 5, value 320
		"e1028002000000000000"+ // field 44, encoding 1, value 640
		"e8028019"+ // field 45, encoding 0, value 3200
		"f0028032"+ // field 46, encoding 0, value 6400
		"fd02e0659948"+ // field 47, encoding 5, value 314159.0
		"81030000000050971041"+ // field 48, encoding 1, value 271828.0
		"8a0310"+"68656c6c6f2c2022776f726c6421220a"+ // field 49, encoding 2 string "hello, \"world!\"\n"
		"b304"+ // start group field 70 level 1
		"ba0408"+"7265717569726564"+ // field 71, encoding 2, string "required"
		"b404"+ // end group field 70 level 1
		"aa0605"+"6279746573"+ // field 101, encoding 2 string "bytes"
		"b0063f"+ // field 102, encoding 0, 0x3f zigzag32
		"b8067f"+ // field 103, encoding 0, 0x7f zigzag64
		"8a1907"+"4269676e6f7365"+ // field 401, encoding 2, string "Bignose"
		"90193f"+ // field 402, encoding 0, value 63
		"98197f") // field 403, encoding 0, value 127
}
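Several expected strings in these hunks carry comments like `0x3f zigzag32` for sint fields (`"b0063f"`, `"b8067f"`). A sketch of the zigzag mapping those comments refer to (illustration only, not the library's code): small negative numbers are folded onto small unsigned values so they stay short as varints.

```go
package main

import "fmt"

// zigzag32 maps a signed value onto an unsigned one so that small negative
// numbers stay small when varint-encoded: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
func zigzag32(n int32) uint32 {
	return uint32((n << 1) ^ (n >> 31))
}

// unzigzag32 is the inverse mapping.
func unzigzag32(z uint32) int32 {
	return int32(z>>1) ^ -int32(z&1)
}

func main() {
	fmt.Printf("%#x %#x\n", zigzag32(-32), zigzag32(-64)) // 0x3f 0x7f, as in "b0063f" and "b8067f"
	fmt.Println(unzigzag32(0x40), unzigzag32(0x3f))       // 32 -32, as in "d00c40" and "d00c3f"
}
```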
@@ -639,56 +639,56 @@ func TestEncodeDecode4(t *testing.T) {
	pb.Optionalgroup = initGoTest_OptionalGroup()

	overify(t, pb,
		"0807"+ // field 1, encoding 0, value 7
		"1205"+"68656c6c6f"+ // field 2, encoding 2, string "hello"
		"1807"+ // field 3, encoding 0, value 7
		"220d"+"0a056c6162656c120474797065"+ // field 4, encoding 2 (GoTestField)
		"320d"+"0a056c6162656c120474797065"+ // field 6, encoding 2 (GoTestField)
		"5001"+ // field 10, encoding 0, value 1
		"5803"+ // field 11, encoding 0, value 3
		"6006"+ // field 12, encoding 0, value 6
		"6d20000000"+ // field 13, encoding 5, value 32
		"714000000000000000"+ // field 14, encoding 1, value 64
		"78a019"+ // field 15, encoding 0, value 3232
		"8001c032"+ // field 16, encoding 0, value 6464
		"8d0100004a45"+ // field 17, encoding 5, value 3232.0
		"9101000000000040b940"+ // field 18, encoding 1, value 6464.0
		"9a0106"+"737472696e67"+ // field 19, encoding 2 string "string"
		"f00101"+ // field 30, encoding 0, value 1
		"f80120"+ // field 31, encoding 0, value 32
		"800240"+ // field 32, encoding 0, value 64
		"8d02a00c0000"+ // field 33, encoding 5, value 3232
		"91024019000000000000"+ // field 34, encoding 1, value 6464
		"9802a0dd13"+ // field 35, encoding 0, value 323232
		"a002c0ba27"+ // field 36, encoding 0, value 646464
		"ad0200000042"+ // field 37, encoding 5, value 32.0
		"b1020000000000005040"+ // field 38, encoding 1, value 64.0
		"ba0205"+"68656c6c6f"+ // field 39, encoding 2, string "hello"
		"c00201"+ // field 40, encoding 0, value 1
		"c80220"+ // field 41, encoding 0, value 32
		"d00240"+ // field 42, encoding 0, value 64
		"dd0240010000"+ // field 43, encoding 5, value 320
		"e1028002000000000000"+ // field 44, encoding 1, value 640
		"e8028019"+ // field 45, encoding 0, value 3200
		"f0028032"+ // field 46, encoding 0, value 6400
		"fd02e0659948"+ // field 47, encoding 5, value 314159.0
		"81030000000050971041"+ // field 48, encoding 1, value 271828.0
		"8a0310"+"68656c6c6f2c2022776f726c6421220a"+ // field 49, encoding 2 string "hello, \"world!\"\n"
		"b304"+ // start group field 70 level 1
		"ba0408"+"7265717569726564"+ // field 71, encoding 2, string "required"
		"b404"+ // end group field 70 level 1
		"d305"+ // start group field 90 level 1
		"da0508"+"6f7074696f6e616c"+ // field 91, encoding 2, string "optional"
		"d405"+ // end group field 90 level 1
		"aa0605"+"6279746573"+ // field 101, encoding 2 string "bytes"
		"b0063f"+ // field 102, encoding 0, 0x3f zigzag32
		"b8067f"+ // field 103, encoding 0, 0x7f zigzag64
		"ea1207"+"4269676e6f7365"+ // field 301, encoding 2, string "Bignose"
		"f0123f"+ // field 302, encoding 0, value 63
		"f8127f"+ // field 303, encoding 0, value 127
		"8a1907"+"4269676e6f7365"+ // field 401, encoding 2, string "Bignose"
		"90193f"+ // field 402, encoding 0, value 63
		"98197f") // field 403, encoding 0, value 127
}
@@ -712,71 +712,71 @@ func TestEncodeDecode5(t *testing.T) {
	pb.Repeatedgroup = []*GoTest_RepeatedGroup{initGoTest_RepeatedGroup(), initGoTest_RepeatedGroup()}

	overify(t, pb,
		"0807"+ // field 1, encoding 0, value 7
		"220d"+"0a056c6162656c120474797065"+ // field 4, encoding 2 (GoTestField)
		"2a0d"+"0a056c6162656c120474797065"+ // field 5, encoding 2 (GoTestField)
		"2a0d"+"0a056c6162656c120474797065"+ // field 5, encoding 2 (GoTestField)
		"5001"+ // field 10, encoding 0, value 1
		"5803"+ // field 11, encoding 0, value 3
		"6006"+ // field 12, encoding 0, value 6
		"6d20000000"+ // field 13, encoding 5, value 32
		"714000000000000000"+ // field 14, encoding 1, value 64
		"78a019"+ // field 15, encoding 0, value 3232
		"8001c032"+ // field 16, encoding 0, value 6464
		"8d0100004a45"+ // field 17, encoding 5, value 3232.0
		"9101000000000040b940"+ // field 18, encoding 1, value 6464.0
		"9a0106"+"737472696e67"+ // field 19, encoding 2 string "string"
		"a00100"+ // field 20, encoding 0, value 0
		"a00101"+ // field 20, encoding 0, value 1
		"a80120"+ // field 21, encoding 0, value 32
		"a80121"+ // field 21, encoding 0, value 33
		"b00140"+ // field 22, encoding 0, value 64
		"b00141"+ // field 22, encoding 0, value 65
		"bd01a00c0000"+ // field 23, encoding 5, value 3232
		"bd01050d0000"+ // field 23, encoding 5, value 3333
		"c1014019000000000000"+ // field 24, encoding 1, value 6464
		"c101a519000000000000"+ // field 24, encoding 1, value 6565
		"c801a0dd13"+ // field 25, encoding 0, value 323232
		"c80195ac14"+ // field 25, encoding 0, value 333333
		"d001c0ba27"+ // field 26, encoding 0, value 646464
		"d001b58928"+ // field 26, encoding 0, value 656565
		"dd0100000042"+ // field 27, encoding 5, value 32.0
		"dd0100000442"+ // field 27, encoding 5, value 33.0
		"e1010000000000005040"+ // field 28, encoding 1, value 64.0
		"e1010000000000405040"+ // field 28, encoding 1, value 65.0
		"ea0105"+"68656c6c6f"+ // field 29, encoding 2, string "hello"
		"ea0106"+"7361696c6f72"+ // field 29, encoding 2, string "sailor"
		"c00201"+ // field 40, encoding 0, value 1
		"c80220"+ // field 41, encoding 0, value 32
		"d00240"+ // field 42, encoding 0, value 64
		"dd0240010000"+ // field 43, encoding 5, value 320
		"e1028002000000000000"+ // field 44, encoding 1, value 640
		"e8028019"+ // field 45, encoding 0, value 3200
		"f0028032"+ // field 46, encoding 0, value 6400
		"fd02e0659948"+ // field 47, encoding 5, value 314159.0
		"81030000000050971041"+ // field 48, encoding 1, value 271828.0
		"8a0310"+"68656c6c6f2c2022776f726c6421220a"+ // field 49, encoding 2 string "hello, \"world!\"\n"
		"b304"+ // start group field 70 level 1
		"ba0408"+"7265717569726564"+ // field 71, encoding 2, string "required"
		"b404"+ // end group field 70 level 1
		"8305"+ // start group field 80 level 1
		"8a0508"+"7265706561746564"+ // field 81, encoding 2, string "repeated"
		"8405"+ // end group field 80 level 1
		"8305"+ // start group field 80 level 1
		"8a0508"+"7265706561746564"+ // field 81, encoding 2, string "repeated"
		"8405"+ // end group field 80 level 1
		"aa0605"+"6279746573"+ // field 101, encoding 2 string "bytes"
		"b0063f"+ // field 102, encoding 0, 0x3f zigzag32
		"b8067f"+ // field 103, encoding 0, 0x7f zigzag64
		"ca0c03"+"626967"+ // field 201, encoding 2, string "big"
		"ca0c04"+"6e6f7365"+ // field 201, encoding 2, string "nose"
		"d00c40"+ // field 202, encoding 0, value 32
		"d00c3f"+ // field 202, encoding 0, value -32
		"d80c8001"+ // field 203, encoding 0, value 64
		"d80c7f"+ // field 203, encoding 0, value -64
		"8a1907"+"4269676e6f7365"+ // field 401, encoding 2, string "Bignose"
		"90193f"+ // field 402, encoding 0, value 63
		"98197f") // field 403, encoding 0, value 127
}
@@ -796,43 +796,43 @@ func TestEncodeDecode6(t *testing.T) {
	pb.F_Sint64RepeatedPacked = []int64{64, -64}

	overify(t, pb,
		"0807"+ // field 1, encoding 0, value 7
		"220d"+"0a056c6162656c120474797065"+ // field 4, encoding 2 (GoTestField)
		"5001"+ // field 10, encoding 0, value 1
		"5803"+ // field 11, encoding 0, value 3
		"6006"+ // field 12, encoding 0, value 6
		"6d20000000"+ // field 13, encoding 5, value 32
		"714000000000000000"+ // field 14, encoding 1, value 64
		"78a019"+ // field 15, encoding 0, value 3232
		"8001c032"+ // field 16, encoding 0, value 6464
		"8d0100004a45"+ // field 17, encoding 5, value 3232.0
		"9101000000000040b940"+ // field 18, encoding 1, value 6464.0
		"9a0106"+"737472696e67"+ // field 19, encoding 2 string "string"
		"9203020001"+ // field 50, encoding 2, 2 bytes, value 0, value 1
		"9a03022021"+ // field 51, encoding 2, 2 bytes, value 32, value 33
		"a203024041"+ // field 52, encoding 2, 2 bytes, value 64, value 65
		"aa0308"+ // field 53, encoding 2, 8 bytes
		"a00c0000050d0000"+ // value 3232, value 3333
		"b20310"+ // field 54, encoding 2, 16 bytes
		"4019000000000000a519000000000000"+ // value 6464, value 6565
		"ba0306"+ // field 55, encoding 2, 6 bytes
		"a0dd1395ac14"+ // value 323232, value 333333
		"c20306"+ // field 56, encoding 2, 6 bytes
		"c0ba27b58928"+ // value 646464, value 656565
		"ca0308"+ // field 57, encoding 2, 8 bytes
		"0000004200000442"+ // value 32.0, value 33.0
		"d20310"+ // field 58, encoding 2, 16 bytes
		"00000000000050400000000000405040"+ // value 64.0, value 65.0
		"b304"+ // start group field 70 level 1
		"ba0408"+"7265717569726564"+ // field 71, encoding 2, string "required"
		"b404"+ // end group field 70 level 1
		"aa0605"+"6279746573"+ // field 101, encoding 2 string "bytes"
		"b0063f"+ // field 102, encoding 0, 0x3f zigzag32
		"b8067f"+ // field 103, encoding 0, 0x7f zigzag64
		"b21f02"+ // field 502, encoding 2, 2 bytes
		"403f"+ // value 32, value -32
		"ba1f03"+ // field 503, encoding 2, 3 bytes
		"80017f") // value 64, value -64
}
// Test that we can encode empty bytes fields.

@@ -898,13 +898,13 @@ func TestSkippingUnrecognizedFields(t *testing.T) {

	// Now new a GoSkipTest record.
	skip := &GoSkipTest{
		SkipInt32:   Int32(32),
		SkipFixed32: Uint32(3232),
		SkipFixed64: Uint64(6464),
		SkipString:  String("skipper"),
		Skipgroup: &GoSkipTest_SkipGroup{
			GroupInt32:  Int32(75),
			GroupString: String("wxyz"),
		},
	}

@@ -944,8 +944,8 @@ func TestSkippingUnrecognizedFields(t *testing.T) {
func TestSubmessageUnrecognizedFields(t *testing.T) {
	nm := &NewMessage{
		Nested: &NewMessage_Nested{
			Name:      String("Nigel"),
			FoodGroup: String("carbs"),
		},
	}
	b, err := Marshal(nm)

@@ -960,9 +960,9 @@ func TestSubmessageUnrecognizedFields(t *testing.T) {
	}
	exp := &OldMessage{
		Nested: &OldMessage_Nested{
			Name: String("Nigel"),
			// normal protocol buffer users should not do this
			XXX_unrecognized: []byte("\x12\x05carbs"),
		},
	}
	if !Equal(om, exp) {
@@ -999,7 +999,7 @@ func TestBigRepeated(t *testing.T) {
	pb := initGoTest(true)

	// Create the arrays
	const N = 50 // Internally the library starts much smaller.
	pb.Repeatedgroup = make([]*GoTest_RepeatedGroup, N)
	pb.F_Sint64Repeated = make([]int64, N)
	pb.F_Sint32Repeated = make([]int32, N)

@@ -1047,7 +1047,7 @@ func TestBigRepeated(t *testing.T) {

	// Check the checkable values
	for i := uint64(0); i < N; i++ {
		if pbd.Repeatedgroup[i] == nil { // TODO: more checking?
			t.Error("pbd.Repeatedgroup bad")
		}
		var x uint64

@@ -1099,7 +1099,7 @@ func TestBigRepeated(t *testing.T) {
		if pbd.F_BoolRepeated[i] != (i%2 == 0) {
			t.Error("pbd.F_BoolRepeated bad", x, i)
		}
		if pbd.RepeatedField[i] == nil { // TODO: more checking?
			t.Error("pbd.RepeatedField bad")
		}
	}
@ -1159,8 +1159,8 @@ func TestProto1RepeatedGroup(t *testing.T) {
|
||||
pb := &MessageList{
|
||||
Message: []*MessageList_Message{
|
||||
{
|
||||
Name: String("blah"),
|
||||
Count: Int32(7),
|
||||
Name: String("blah"),
|
||||
Count: Int32(7),
|
||||
},
|
||||
// NOTE: pb.Message[1] is a nil
|
||||
nil,
|
||||
@ -1240,9 +1240,9 @@ type NNIMessage struct {
|
||||
nni nonNillableInt
|
||||
}
|
||||
|
||||
func (*NNIMessage) Reset() {}
|
||||
func (*NNIMessage) String() string { return "" }
|
||||
func (*NNIMessage) ProtoMessage() {}
|
||||
func (*NNIMessage) Reset() {}
|
||||
func (*NNIMessage) String() string { return "" }
|
||||
func (*NNIMessage) ProtoMessage() {}
|
||||
|
||||
// A type that implements the Marshaler interface and is nillable.
|
||||
type nillableMessage struct {
|
||||
@ -1257,9 +1257,9 @@ type NMMessage struct {
|
||||
nm *nillableMessage
|
||||
}
|
||||
|
||||
func (*NMMessage) Reset() {}
|
||||
func (*NMMessage) String() string { return "" }
|
||||
func (*NMMessage) ProtoMessage() {}
|
||||
func (*NMMessage) Reset() {}
|
||||
func (*NMMessage) String() string { return "" }
|
||||
func (*NMMessage) ProtoMessage() {}
|
||||
|
||||
// Verify a type that uses the Marshaler interface, but has a nil pointer.
|
||||
func TestNilMarshaler(t *testing.T) {
|
||||
@ -1273,7 +1273,7 @@ func TestNilMarshaler(t *testing.T) {
|
||||
// Try a struct with a Marshaler field that is not nillable.
|
||||
nnim := new(NNIMessage)
|
||||
nnim.nni = 7
|
||||
var _ Marshaler = nnim.nni // verify it is truly a Marshaler
|
||||
var _ Marshaler = nnim.nni // verify it is truly a Marshaler
|
||||
if _, err := Marshal(nnim); err != nil {
|
||||
t.Error("unexpected error marshaling nnim: ", err)
|
||||
}
|
||||
@@ -1286,23 +1286,23 @@ func TestAllSetDefaults(t *testing.T) {
		F_Nan: Float32(1.7),
	}
	expected := &Defaults{
		F_Bool:    Bool(true),
		F_Int32:   Int32(32),
		F_Int64:   Int64(64),
		F_Fixed32: Uint32(320),
		F_Fixed64: Uint64(640),
		F_Uint32:  Uint32(3200),
		F_Uint64:  Uint64(6400),
		F_Float:   Float32(314159),
		F_Double:  Float64(271828),
		F_String:  String(`hello, "world!"` + "\n"),
		F_Bytes:   []byte("Bignose"),
		F_Sint32:  Int32(-32),
		F_Sint64:  Int64(-64),
		F_Enum:    Defaults_GREEN.Enum(),
		F_Pinf:    Float32(float32(math.Inf(1))),
		F_Ninf:    Float32(float32(math.Inf(-1))),
		F_Nan:     Float32(1.7),
	}
	SetDefaults(m)
	if !Equal(m, expected) {
@@ -1323,16 +1323,16 @@ func TestSetDefaultsWithSetField(t *testing.T) {

func TestSetDefaultsWithSubMessage(t *testing.T) {
	m := &OtherMessage{
		Key: Int64(123),
		Inner: &InnerMessage{
			Host: String("gopher"),
		},
	}
	expected := &OtherMessage{
		Key: Int64(123),
		Inner: &InnerMessage{
			Host: String("gopher"),
			Port: Int32(4000),
		},
	}
	SetDefaults(m)

@@ -1375,12 +1375,12 @@ func TestMaximumTagNumber(t *testing.T) {

func TestJSON(t *testing.T) {
	m := &MyMessage{
		Count: Int32(4),
		Pet:   []string{"bunny", "kitty"},
		Inner: &InnerMessage{
			Host: String("cauchy"),
		},
		Bikeshed: MyMessage_GREEN.Enum(),
	}
	const expected = `{"count":4,"pet":["bunny","kitty"],"inner":{"host":"cauchy"},"bikeshed":1}`

@@ -1413,7 +1413,7 @@ func TestJSON(t *testing.T) {
}

func TestBadWireType(t *testing.T) {
	b := []byte{7<<3 | 6} // field 7, wire type 6
	pb := new(OtherMessage)
	if err := Unmarshal(b, pb); err == nil {
		t.Errorf("Unmarshal did not fail")
@@ -1610,10 +1610,10 @@ func TestUnmarshalMergesMessages(t *testing.T) {
	// If a nested message occurs twice in the input,
	// the fields should be merged when decoding.
	a := &OtherMessage{
		Key: Int64(123),
		Inner: &InnerMessage{
			Host: String("polhode"),
			Port: Int32(1234),
		},
	}
	aData, err := Marshal(a)

@@ -1621,10 +1621,10 @@ func TestUnmarshalMergesMessages(t *testing.T) {
		t.Fatalf("Marshal(a): %v", err)
	}
	b := &OtherMessage{
		Weight: Float32(1.2),
		Inner: &InnerMessage{
			Host:      String("herpolhode"),
			Connected: Bool(true),
		},
	}
	bData, err := Marshal(b)

@@ -1632,12 +1632,12 @@ func TestUnmarshalMergesMessages(t *testing.T) {
		t.Fatalf("Marshal(b): %v", err)
	}
	want := &OtherMessage{
		Key:    Int64(123),
		Weight: Float32(1.2),
		Inner: &InnerMessage{
			Host:      String("herpolhode"),
			Port:      Int32(1234),
			Connected: Bool(true),
		},
	}
	got := new(OtherMessage)
@@ -1651,8 +1651,8 @@ func TestUnmarshalMergesMessages(t *testing.T) {

func TestEncodingSizes(t *testing.T) {
	tests := []struct {
		m Message
		n int
	}{
		{&Defaults{F_Int32: Int32(math.MaxInt32)}, 6},
		{&Defaults{F_Int32: Int32(math.MinInt32)}, 6},

@@ -1676,22 +1676,22 @@ func TestRequiredNotSetError(t *testing.T) {
	pb.F_Int32Required = nil
	pb.F_Int64Required = nil

	expected := "0807" + // field 1, encoding 0, value 7
		"2206" + "120474797065" + // field 4, encoding 2 (GoTestField)
		"5001" + // field 10, encoding 0, value 1
		"6d20000000" + // field 13, encoding 5, value 0x20
		"714000000000000000" + // field 14, encoding 1, value 0x40
		"78a019" + // field 15, encoding 0, value 0xca0 = 3232
		"8001c032" + // field 16, encoding 0, value 0x1940 = 6464
		"8d0100004a45" + // field 17, encoding 5, value 3232.0
		"9101000000000040b940" + // field 18, encoding 1, value 6464.0
		"9a0106" + "737472696e67" + // field 19, encoding 2, string "string"
		"b304" + // field 70, encoding 3, start group
		"ba0408" + "7265717569726564" + // field 71, encoding 2, string "required"
		"b404" + // field 70, encoding 4, end group
		"aa0605" + "6279746573" + // field 101, encoding 2, string "bytes"
		"b0063f" + // field 102, encoding 0, 0x3f zigzag32
		"b8067f" // field 103, encoding 0, 0x7f zigzag64

	o := old()
	bytes, err := Marshal(pb)
@@ -1751,7 +1751,7 @@ func fuzzUnmarshal(t *testing.T, data []byte) {

func testMsg() *GoTest {
	pb := initGoTest(true)
	const N = 1000 // Internally the library starts much smaller.
	pb.F_Int32Repeated = make([]int32, N)
	pb.F_DoubleRepeated = make([]float64, N)
	for i := 0; i < N; i++ {

@@ -1869,13 +1869,13 @@ func BenchmarkUnmarshalUnrecognizedFields(b *testing.B) {
	b.StopTimer()
	pb := initGoTestField()
	skip := &GoSkipTest{
		SkipInt32:   Int32(32),
		SkipFixed32: Uint32(3232),
		SkipFixed64: Uint64(6464),
		SkipString:  String("skipper"),
		Skipgroup: &GoSkipTest_SkipGroup{
			GroupInt32:  Int32(75),
			GroupString: String("wxyz"),
		},
	}
@@ -83,9 +83,14 @@ func mergeStruct(out, in reflect.Value) {
 		mergeAny(out.Field(i), in.Field(i))
 	}

-	if emIn, ok := in.Addr().Interface().(extendableProto); ok {
-		emOut := out.Addr().Interface().(extendableProto)
+	if emIn, ok := in.Addr().Interface().(extensionsMap); ok {
+		emOut := out.Addr().Interface().(extensionsMap)
 		mergeExtension(emOut.ExtensionMap(), emIn.ExtensionMap())
+	} else if emIn, ok := in.Addr().Interface().(extensionsBytes); ok {
+		emOut := out.Addr().Interface().(extensionsBytes)
+		bIn := emIn.GetExtensions()
+		bOut := emOut.GetExtensions()
+		*bOut = append(*bOut, *bIn...)
 	}

 	uf := in.FieldByName("XXX_unrecognized")
@@ -40,13 +40,13 @@ import (
)

var cloneTestMessage = &pb.MyMessage{
	Count: proto.Int32(42),
	Name:  proto.String("Dave"),
	Pet:   []string{"bunny", "kitty", "horsey"},
	Inner: &pb.InnerMessage{
		Host:      proto.String("niles"),
		Port:      proto.Int32(9099),
		Connected: proto.Bool(true),
	},
	Others: []*pb.OtherMessage{
		{

@@ -56,7 +56,7 @@ var cloneTestMessage = &pb.MyMessage{
	Somegroup: &pb.MyMessage_SomeGroup{
		GroupField: proto.Int32(6),
	},
	RepBytes: [][]byte{[]byte("sham"), []byte("wow")},
}

func init() {

@@ -99,17 +99,17 @@ var mergeTests = []struct {
			Name: proto.String("Dave"),
		},
		want: &pb.MyMessage{
			Count: proto.Int32(42),
			Name:  proto.String("Dave"),
		},
	},
	{
		src: &pb.MyMessage{
			Inner: &pb.InnerMessage{
				Host:      proto.String("hey"),
				Connected: proto.Bool(true),
			},
			Pet: []string{"horsey"},
			Others: []*pb.OtherMessage{
				{
					Value: []byte("some bytes"),

@@ -118,10 +118,10 @@ var mergeTests = []struct {
		},
		dst: &pb.MyMessage{
			Inner: &pb.InnerMessage{
				Host: proto.String("niles"),
				Port: proto.Int32(9099),
			},
			Pet: []string{"bunny", "kitty"},
			Others: []*pb.OtherMessage{
				{
					Key: proto.Int64(31415926535),

@@ -134,11 +134,11 @@ var mergeTests = []struct {
		},
		want: &pb.MyMessage{
			Inner: &pb.InnerMessage{
				Host:      proto.String("hey"),
				Connected: proto.Bool(true),
				Port:      proto.Int32(9099),
			},
			Pet: []string{"bunny", "kitty", "horsey"},
			Others: []*pb.OtherMessage{
				{
					Key: proto.Int64(31415926535),

@@ -158,13 +158,13 @@ var mergeTests = []struct {
		Somegroup: &pb.MyMessage_SomeGroup{
			GroupField: proto.Int32(6),
		},
		RepBytes: [][]byte{[]byte("sham")},
	},
	want: &pb.MyMessage{
		Somegroup: &pb.MyMessage_SomeGroup{
			GroupField: proto.Int32(6),
		},
		RepBytes: [][]byte{[]byte("sham"), []byte("wow")},
	},
},
}
@@ -235,12 +235,6 @@ func (o *Buffer) skipAndSave(t reflect.Type, tag, wire int, base structPointer,

 	ptr := structPointer_Bytes(base, unrecField)

-	if *ptr == nil {
-		// This is the first skipped element,
-		// allocate a new buffer.
-		*ptr = o.bufalloc()
-	}
-
 	// Add the skipped field to struct field
 	obuf := o.buf

@@ -381,9 +375,14 @@ func (o *Buffer) unmarshalType(st reflect.Type, prop *StructProperties, is_group
 		if prop.extendable {
 			if e := structPointer_Interface(base, st).(extendableProto); isExtensionField(e, int32(tag)) {
 				if err = o.skip(st, tag, wire); err == nil {
-					ext := e.ExtensionMap()[int32(tag)] // may be missing
-					ext.enc = append(ext.enc, o.buf[oi:o.index]...)
-					e.ExtensionMap()[int32(tag)] = ext
+					if ee, ok := e.(extensionsMap); ok {
+						ext := ee.ExtensionMap()[int32(tag)] // may be missing
+						ext.enc = append(ext.enc, o.buf[oi:o.index]...)
+						ee.ExtensionMap()[int32(tag)] = ext
+					} else if ee, ok := e.(extensionsBytes); ok {
+						ext := ee.GetExtensions()
+						*ext = append(*ext, o.buf[oi:o.index]...)
+					}
 				}
 				continue
 			}
@@ -221,6 +221,10 @@ func Marshal(pb Message) ([]byte, error) {
 	if err != nil && !state.shouldContinue(err, nil) {
 		return nil, err
 	}
+	if p.buf == nil && err == nil {
+		// Return a non-nil slice on success.
+		return []byte{}, nil
+	}
 	return p.buf, err
 }

@@ -400,23 +404,8 @@ func (o *Buffer) enc_struct_message(p *Properties, base structPointer) error {
 		return nil
 	}

-	// need the length before we can write out the message itself,
-	// so marshal into a separate byte buffer first.
-	obuf := o.buf
-	o.buf = o.bufalloc()
-
-	err := o.enc_struct(p.stype, p.sprop, structp)
-
-	nbuf := o.buf
-	o.buf = obuf
-	if err != nil && !state.shouldContinue(err, nil) {
-		o.buffree(nbuf)
-		return err
-	}
 	o.buf = append(o.buf, p.tagcode...)
-	o.EncodeRawBytes(nbuf)
-	o.buffree(nbuf)
-	return state.err
+	return o.enc_len_struct(p.stype, p.sprop, structp, &state)
 }

 func size_struct_message(p *Properties, base structPointer) int {
@@ -748,24 +737,14 @@ func (o *Buffer) enc_slice_struct_message(p *Properties, base structPointer) err
 			continue
 		}

-		obuf := o.buf
-		o.buf = o.bufalloc()
-
-		err := o.enc_struct(p.stype, p.sprop, structp)
-
-		nbuf := o.buf
-		o.buf = obuf
+		o.buf = append(o.buf, p.tagcode...)
+		err := o.enc_len_struct(p.stype, p.sprop, structp, &state)
 		if err != nil && !state.shouldContinue(err, nil) {
-			o.buffree(nbuf)
 			if err == ErrNil {
 				return ErrRepeatedHasNil
 			}
 			return err
 		}
-		o.buf = append(o.buf, p.tagcode...)
-		o.EncodeRawBytes(nbuf)
-
-		o.buffree(nbuf)
 	}
 	return state.err
 }
@@ -923,6 +902,36 @@ func size_struct(t reflect.Type, prop *StructProperties, base structPointer) (n
 	return
 }

+var zeroes [20]byte // longer than any conceivable sizeVarint
+
+// Encode a struct, preceded by its encoded length (as a varint).
+func (o *Buffer) enc_len_struct(t reflect.Type, prop *StructProperties, base structPointer, state *errorState) error {
+	iLen := len(o.buf)
+	o.buf = append(o.buf, 0, 0, 0, 0) // reserve four bytes for length
+	iMsg := len(o.buf)
+	err := o.enc_struct(t, prop, base)
+	if err != nil && !state.shouldContinue(err, nil) {
+		return err
+	}
+	lMsg := len(o.buf) - iMsg
+	lLen := sizeVarint(uint64(lMsg))
+	switch x := lLen - (iMsg - iLen); {
+	case x > 0: // actual length is x bytes larger than the space we reserved
+		// Move msg x bytes right.
+		o.buf = append(o.buf, zeroes[:x]...)
+		copy(o.buf[iMsg+x:], o.buf[iMsg:iMsg+lMsg])
+	case x < 0: // actual length is x bytes smaller than the space we reserved
+		// Move msg x bytes left.
+		copy(o.buf[iMsg+x:], o.buf[iMsg:iMsg+lMsg])
+		o.buf = o.buf[:len(o.buf)+x] // x is negative
+	}
+	// Encode the length in the reserved space.
+	o.buf = o.buf[:iLen]
+	o.EncodeVarint(uint64(lMsg))
+	o.buf = o.buf[:len(o.buf)+lMsg]
+	return state.err
+}
+
 // errorState maintains the first error that occurs and updates that error
 // with additional context.
 type errorState struct {
@@ -44,6 +44,24 @@ type Sizer interface {
 	Size() int
 }

+func (o *Buffer) enc_ext_slice_byte(p *Properties, base structPointer) error {
+	s := *structPointer_Bytes(base, p.field)
+	if s == nil {
+		return ErrNil
+	}
+	o.buf = append(o.buf, s...)
+	return nil
+}
+
+func size_ext_slice_byte(p *Properties, base structPointer) (n int) {
+	s := *structPointer_Bytes(base, p.field)
+	if s == nil {
+		return 0
+	}
+	n += len(s)
+	return
+}
+
 // Encode a reference to bool pointer.
 func (o *Buffer) enc_ref_bool(p *Properties, base structPointer) error {
 	v := structPointer_RefBool(base, p.field)
@@ -156,23 +174,8 @@ func (o *Buffer) enc_ref_struct_message(p *Properties, base structPointer) error
 		return nil
 	}

-	// need the length before we can write out the message itself,
-	// so marshal into a separate byte buffer first.
-	obuf := o.buf
-	o.buf = o.bufalloc()
-
-	err := o.enc_struct(p.stype, p.sprop, structp)
-
-	nbuf := o.buf
-	o.buf = obuf
-	if err != nil && !state.shouldContinue(err, nil) {
-		o.buffree(nbuf)
-		return err
-	}
 	o.buf = append(o.buf, p.tagcode...)
-	o.EncodeRawBytes(nbuf)
-	o.buffree(nbuf)
-	return nil
+	return o.enc_len_struct(p.stype, p.sprop, structp, &state)
 }

 //TODO this is only copied, please fix this

@@ -222,26 +225,17 @@ func (o *Buffer) enc_slice_ref_struct_message(p *Properties, base structPointer)
 			continue
 		}

-		obuf := o.buf
-		o.buf = o.bufalloc()
-
-		err := o.enc_struct(p.stype, p.sprop, structp)
-
-		nbuf := o.buf
-		o.buf = obuf
+		o.buf = append(o.buf, p.tagcode...)
+		err := o.enc_len_struct(p.stype, p.sprop, structp, &state)
 		if err != nil && !state.shouldContinue(err, nil) {
-			o.buffree(nbuf)
 			if err == ErrNil {
 				return ErrRepeatedHasNil
 			}
 			return err
 		}
-		o.buf = append(o.buf, p.tagcode...)
-		o.EncodeRawBytes(nbuf)
-
-		o.buffree(nbuf)
 	}
-	return nil
+	return state.err
 }

 //TODO this is only copied, please fix this
@@ -85,9 +85,9 @@ func init() {
}

var EqualTests = []struct {
	desc string
	a, b Message
	exp  bool
}{
	{"different types", &pb.GoEnum{}, &pb.GoTestField{}, false},
	{"equal empty", &pb.GoEnum{}, &pb.GoEnum{}, true},

@@ -142,13 +142,13 @@ var EqualTests = []struct {
	{
		"message with group",
		&pb.MyMessage{
			Count: Int32(1),
			Somegroup: &pb.MyMessage_SomeGroup{
				GroupField: Int32(5),
			},
		},
		&pb.MyMessage{
			Count: Int32(1),
			Somegroup: &pb.MyMessage_SomeGroup{
				GroupField: Int32(5),
			},
		},
@@ -55,9 +55,18 @@ type ExtensionRange struct {
 type extendableProto interface {
 	Message
 	ExtensionRangeArray() []ExtensionRange
 }

+type extensionsMap interface {
+	extendableProto
+	ExtensionMap() map[int32]Extension
+}
+
+type extensionsBytes interface {
+	extendableProto
+	GetExtensions() *[]byte
+}
+
 var extendableProtoType = reflect.TypeOf((*extendableProto)(nil)).Elem()

 // ExtensionDesc represents an extension specification.
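The hunk above splits the old contract into two storage strategies: extensionsMap keeps decoded extensions in a map keyed by field number, while extensionsBytes keeps them as raw encoded bytes. A minimal standalone sketch of that dispatch pattern, with all type and function names invented for illustration (the real interfaces embed the proto Message type):

```go
package main

import "fmt"

// Two toy message types, one per storage strategy.
type mapBacked struct{ exts map[int32][]byte }

func (m *mapBacked) ExtensionMap() map[int32][]byte { return m.exts }

type bytesBacked struct{ raw []byte }

func (b *bytesBacked) GetExtensions() *[]byte { return &b.raw }

// setRaw stores an encoded extension, dispatching on the concrete
// storage the way SetRawExtension does in the diff.
func setRaw(pb interface{}, id int32, enc []byte) {
	switch e := pb.(type) {
	case interface{ ExtensionMap() map[int32][]byte }:
		e.ExtensionMap()[id] = enc // map-backed: index by field number
	case interface{ GetExtensions() *[]byte }:
		ext := e.GetExtensions()
		*ext = append(*ext, enc...) // bytes-backed: append raw encoding
	default:
		panic("unreachable")
	}
}

func main() {
	m := &mapBacked{exts: map[int32][]byte{}}
	b := &bytesBacked{}
	setRaw(m, 100, []byte{0xa2, 0x06, 0x01, 'x'})
	setRaw(b, 100, []byte{0xa2, 0x06, 0x01, 'x'})
	fmt.Println(len(m.exts), len(b.raw)) // 1 4
}
```

The bytes-backed form avoids decoding extensions that are never read, at the cost of a linear scan on access, which is why the accessors in this diff grow a second branch that walks the raw buffer.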
@@ -92,7 +101,15 @@ type Extension struct {

 // SetRawExtension is for testing only.
 func SetRawExtension(base extendableProto, id int32, b []byte) {
-	base.ExtensionMap()[id] = Extension{enc: b}
+	if ebase, ok := base.(extensionsMap); ok {
+		ebase.ExtensionMap()[id] = Extension{enc: b}
+	} else if ebase, ok := base.(extensionsBytes); ok {
+		clearExtension(base, id)
+		ext := ebase.GetExtensions()
+		*ext = append(*ext, b...)
+	} else {
+		panic("unreachable")
+	}
 }

 // isExtensionField returns true iff the given field number is in an extension range.
@@ -210,51 +227,127 @@ func sizeExtensionMap(m map[int32]Extension) (n int) {
 // HasExtension returns whether the given extension is present in pb.
 func HasExtension(pb extendableProto, extension *ExtensionDesc) bool {
 	// TODO: Check types, field numbers, etc.?
-	_, ok := pb.ExtensionMap()[extension.Field]
-	return ok
+	if epb, doki := pb.(extensionsMap); doki {
+		_, ok := epb.ExtensionMap()[extension.Field]
+		return ok
+	} else if epb, doki := pb.(extensionsBytes); doki {
+		ext := epb.GetExtensions()
+		buf := *ext
+		o := 0
+		for o < len(buf) {
+			tag, n := DecodeVarint(buf[o:])
+			fieldNum := int32(tag >> 3)
+			if int32(fieldNum) == extension.Field {
+				return true
+			}
+			wireType := int(tag & 0x7)
+			o += n
+			l, err := size(buf[o:], wireType)
+			if err != nil {
+				return false
+			}
+			o += l
+		}
+		return false
+	}
+	panic("unreachable")
+}
+
+func deleteExtension(pb extensionsBytes, theFieldNum int32, offset int) int {
+	ext := pb.GetExtensions()
+	for offset < len(*ext) {
+		tag, n1 := DecodeVarint((*ext)[offset:])
+		fieldNum := int32(tag >> 3)
+		wireType := int(tag & 0x7)
+		n2, err := size((*ext)[offset+n1:], wireType)
+		if err != nil {
+			panic(err)
+		}
+		newOffset := offset + n1 + n2
+		if fieldNum == theFieldNum {
+			*ext = append((*ext)[:offset], (*ext)[newOffset:]...)
+			return offset
+		}
+		offset = newOffset
+	}
+	return -1
+}
+
+func clearExtension(pb extendableProto, fieldNum int32) {
+	if epb, doki := pb.(extensionsMap); doki {
+		delete(epb.ExtensionMap(), fieldNum)
+	} else if epb, doki := pb.(extensionsBytes); doki {
+		offset := 0
+		for offset != -1 {
+			offset = deleteExtension(epb, fieldNum, offset)
+		}
+	} else {
+		panic("unreachable")
+	}
 }

 // ClearExtension removes the given extension from pb.
 func ClearExtension(pb extendableProto, extension *ExtensionDesc) {
 	// TODO: Check types, field numbers, etc.?
-	delete(pb.ExtensionMap(), extension.Field)
+	clearExtension(pb, extension.Field)
 }
 // GetExtension parses and returns the given extension of pb.
 // If the extension is not present it returns ErrMissingExtension.
 // If the returned extension is modified, SetExtension must be called
 // for the modifications to be reflected in pb.
 func GetExtension(pb extendableProto, extension *ExtensionDesc) (interface{}, error) {
 	if err := checkExtensionTypes(pb, extension); err != nil {
 		return nil, err
 	}

-	e, ok := pb.ExtensionMap()[extension.Field]
-	if !ok {
-		return nil, ErrMissingExtension
-	}
-	if e.value != nil {
-		// Already decoded. Check the descriptor, though.
-		if e.desc != extension {
-			// This shouldn't happen. If it does, it means that
-			// GetExtension was called twice with two different
-			// descriptors with the same field number.
-			return nil, errors.New("proto: descriptor conflict")
-		}
-		return e.value, nil
-	}
-
-	v, err := decodeExtension(e.enc, extension)
-	if err != nil {
-		return nil, err
-	}
-
-	// Remember the decoded version and drop the encoded version.
-	// That way it is safe to mutate what we return.
-	e.value = v
-	e.desc = extension
-	e.enc = nil
-	return e.value, nil
+	if epb, doki := pb.(extensionsMap); doki {
+		e, ok := epb.ExtensionMap()[extension.Field]
+		if !ok {
+			return nil, ErrMissingExtension
+		}
+		if e.value != nil {
+			// Already decoded. Check the descriptor, though.
+			if e.desc != extension {
+				// This shouldn't happen. If it does, it means that
+				// GetExtension was called twice with two different
+				// descriptors with the same field number.
+				return nil, errors.New("proto: descriptor conflict")
+			}
+			return e.value, nil
+		}
+
+		v, err := decodeExtension(e.enc, extension)
+		if err != nil {
+			return nil, err
+		}
+
+		// Remember the decoded version and drop the encoded version.
+		// That way it is safe to mutate what we return.
+		e.value = v
+		e.desc = extension
+		e.enc = nil
+		return e.value, nil
+	} else if epb, doki := pb.(extensionsBytes); doki {
+		ext := epb.GetExtensions()
+		o := 0
+		for o < len(*ext) {
+			tag, n := DecodeVarint((*ext)[o:])
+			fieldNum := int32(tag >> 3)
+			wireType := int(tag & 0x7)
+			l, err := size((*ext)[o+n:], wireType)
+			if err != nil {
+				return nil, err
+			}
+			if int32(fieldNum) == extension.Field {
+				v, err := decodeExtension((*ext)[o:o+n+l], extension)
+				if err != nil {
+					return nil, err
+				}
+				return v, nil
+			}
+			o += n + l
+		}
+	}
+	panic("unreachable")
 }

 // decodeExtension decodes an extension encoded in b.
@@ -319,7 +412,21 @@ func SetExtension(pb extendableProto, extension *ExtensionDesc, value interface{
 		return errors.New("proto: bad extension value type")
 	}

-	pb.ExtensionMap()[extension.Field] = Extension{desc: extension, value: value}
+	if epb, doki := pb.(extensionsMap); doki {
+		epb.ExtensionMap()[extension.Field] = Extension{desc: extension, value: value}
+	} else if epb, doki := pb.(extensionsBytes); doki {
+		ClearExtension(pb, extension)
+		ext := epb.GetExtensions()
+		et := reflect.TypeOf(extension.ExtensionType)
+		props := extensionProperties(extension)
+		p := NewBuffer(nil)
+		x := reflect.New(et)
+		x.Elem().Set(reflect.ValueOf(value))
+		if err := props.enc(p, props, toStructPointer(x)); err != nil {
+			return err
+		}
+		*ext = append(*ext, p.buf...)
+	}
 	return nil
 }
@@ -31,6 +31,7 @@ import (
 	"fmt"
 	"reflect"
+	"sort"
 	"strings"
 )

 func GetBoolExtension(pb extendableProto, extension *ExtensionDesc, ifnotset bool) bool {
@@ -58,6 +59,48 @@ func SizeOfExtensionMap(m map[int32]Extension) (n int) {
 	return sizeExtensionMap(m)
 }

+type sortableMapElem struct {
+	field int32
+	ext   Extension
+}
+
+func newSortableExtensionsFromMap(m map[int32]Extension) sortableExtensions {
+	s := make(sortableExtensions, 0, len(m))
+	for k, v := range m {
+		s = append(s, &sortableMapElem{field: k, ext: v})
+	}
+	return s
+}
+
+type sortableExtensions []*sortableMapElem
+
+func (this sortableExtensions) Len() int { return len(this) }
+
+func (this sortableExtensions) Swap(i, j int) { this[i], this[j] = this[j], this[i] }
+
+func (this sortableExtensions) Less(i, j int) bool { return this[i].field < this[j].field }
+
+func (this sortableExtensions) String() string {
+	sort.Sort(this)
+	ss := make([]string, len(this))
+	for i := range this {
+		ss[i] = fmt.Sprintf("%d: %v", this[i].field, this[i].ext)
+	}
+	return "map[" + strings.Join(ss, ",") + "]"
+}
+
+func StringFromExtensionsMap(m map[int32]Extension) string {
+	return newSortableExtensionsFromMap(m).String()
+}
+
+func StringFromExtensionsBytes(ext []byte) string {
+	m, err := BytesToExtensionsMap(ext)
+	if err != nil {
+		panic(err)
+	}
+	return StringFromExtensionsMap(m)
+}
+
 func EncodeExtensionMap(m map[int32]Extension, data []byte) (n int, err error) {
 	if err := encodeExtensionMap(m); err != nil {
 		return 0, err
@ -83,6 +126,58 @@ func GetRawExtension(m map[int32]Extension, id int32) ([]byte, error) {
|
||||
return m[id].enc, nil
|
||||
}
|
||||
|
||||
func size(buf []byte, wire int) (int, error) {
|
||||
switch wire {
|
||||
case WireVarint:
|
||||
_, n := DecodeVarint(buf)
|
||||
return n, nil
|
||||
case WireFixed64:
|
||||
return 8, nil
|
||||
case WireBytes:
|
||||
v, n := DecodeVarint(buf)
|
||||
return int(v) + n, nil
|
||||
case WireFixed32:
|
||||
return 4, nil
|
||||
case WireStartGroup:
|
||||
offset := 0
|
||||
for {
|
||||
u, n := DecodeVarint(buf[offset:])
|
||||
fwire := int(u & 0x7)
|
||||
offset += n
|
||||
if fwire == WireEndGroup {
|
||||
return offset, nil
|
||||
}
|
||||
s, err := size(buf[offset:], wire)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
offset += s
|
||||
}
|
||||
}
|
||||
return 0, fmt.Errorf("proto: can't get size for unknown wire type %d", wire)
|
||||
}
|
||||
|
||||
func BytesToExtensionsMap(buf []byte) (map[int32]Extension, error) {
	m := make(map[int32]Extension)
	i := 0
	for i < len(buf) {
		tag, n := DecodeVarint(buf[i:])
		if n <= 0 {
			return nil, fmt.Errorf("unable to decode varint")
		}
		fieldNum := int32(tag >> 3)
		wireType := int(tag & 0x7)
		l, err := size(buf[i+n:], wireType)
		if err != nil {
			return nil, err
		}
		end := i + int(l) + n
		m[int32(fieldNum)] = Extension{enc: buf[i:end]}
		i = end
	}
	return m, nil
}

func NewExtension(e []byte) Extension {
	ee := Extension{enc: make([]byte, len(e))}
	copy(ee.enc, e)

@@ -240,10 +240,8 @@ func GetStats() Stats { return stats }
// the global functions Marshal and Unmarshal create a
// temporary Buffer and are fine for most applications.
type Buffer struct {
-	buf       []byte     // encode/decode byte stream
-	index     int        // write point
-	freelist  [10][]byte // list of available buffers
-	nfreelist int        // number of free buffers
+	buf   []byte // encode/decode byte stream
+	index int    // write point

	// pools of basic types to amortize allocation.
	bools []bool
@@ -260,20 +258,11 @@ type Buffer struct {
// NewBuffer allocates a new Buffer and initializes its internal data to
// the contents of the argument slice.
func NewBuffer(e []byte) *Buffer {
-	p := new(Buffer)
-	if e == nil {
-		e = p.bufalloc()
-	}
-	p.buf = e
-	p.index = 0
-	return p
+	return &Buffer{buf: e}
}

// Reset resets the Buffer, ready for marshaling a new protocol buffer.
func (p *Buffer) Reset() {
-	if p.buf == nil {
-		p.buf = p.bufalloc()
-	}
	p.buf = p.buf[0:0] // for reading/writing
	p.index = 0        // for reading
}
@@ -288,44 +277,6 @@ func (p *Buffer) SetBuf(s []byte) {
// Bytes returns the contents of the Buffer.
func (p *Buffer) Bytes() []byte { return p.buf }

-// Allocate a buffer for the Buffer.
-func (p *Buffer) bufalloc() []byte {
-	if p.nfreelist > 0 {
-		// reuse an old one
-		p.nfreelist--
-		s := p.freelist[p.nfreelist]
-		return s[0:0]
-	}
-	// make a new one
-	s := make([]byte, 0, 16)
-	return s
-}
-
-// Free (and remember in freelist) a byte buffer for the Buffer.
-func (p *Buffer) buffree(s []byte) {
-	if p.nfreelist < len(p.freelist) {
-		// Take next slot.
-		p.freelist[p.nfreelist] = s
-		p.nfreelist++
-		return
-	}
-
-	// Find the smallest.
-	besti := -1
-	bestl := len(s)
-	for i, b := range p.freelist {
-		if len(b) < bestl {
-			besti = i
-			bestl = len(b)
-		}
-	}
-
-	// Overwrite the smallest.
-	if besti >= 0 {
-		p.freelist[besti] = s
-	}
-}
-
/*
 * Helper routines for simplifying the creation of optional fields of basic type.
 */
@@ -51,10 +51,17 @@ func structPointer_InterfaceRef(p structPointer, f field, t reflect.Type) interf
}

func copyUintPtr(oldptr, newptr uintptr, size int) {
-	for j := 0; j < size; j++ {
-		oldb := (*byte)(unsafe.Pointer(oldptr + uintptr(j)))
-		*(*byte)(unsafe.Pointer(newptr + uintptr(j))) = *oldb
-	}
+	oldbytes := make([]byte, 0)
+	oldslice := (*reflect.SliceHeader)(unsafe.Pointer(&oldbytes))
+	oldslice.Data = oldptr
+	oldslice.Len = size
+	oldslice.Cap = size
+	newbytes := make([]byte, 0)
+	newslice := (*reflect.SliceHeader)(unsafe.Pointer(&newbytes))
+	newslice.Data = newptr
+	newslice.Len = size
+	newslice.Cap = size
+	copy(newbytes, oldbytes)
}

func structPointer_Copy(oldptr structPointer, newptr structPointer, size int) {
@@ -575,9 +575,15 @@ func getPropertiesLocked(t reflect.Type) *StructProperties {
		p.init(f.Type, name, f.Tag.Get("protobuf"), &f, false)

		if f.Name == "XXX_extensions" { // special case
-			p.enc = (*Buffer).enc_map
-			p.dec = nil // not needed
-			p.size = size_map
+			if len(f.Tag.Get("protobuf")) > 0 {
+				p.enc = (*Buffer).enc_ext_slice_byte
+				p.dec = nil // not needed
+				p.size = size_ext_slice_byte
+			} else {
+				p.enc = (*Buffer).enc_map
+				p.dec = nil // not needed
+				p.size = size_map
+			}
		}
		if f.Name == "XXX_unrecognized" { // special case
			prop.unrecField = toField(&f)

@@ -58,8 +58,8 @@ func init() {
}

var SizeTests = []struct {
	desc string
	pb   Message
}{
	{"empty", &pb.OtherMessage{}},
	// Basic types.

File diff suppressed because it is too large
@@ -79,6 +79,13 @@ type textWriter struct {
	w writer
}

// textMarshaler is implemented by Messages that can marshal themselves.
// It is identical to encoding.TextMarshaler, introduced in go 1.2,
// which will eventually replace it.
type textMarshaler interface {
	MarshalText() (text []byte, err error)
}

func (w *textWriter) WriteString(s string) (n int, err error) {
	if !strings.Contains(s, "\n") {
		if !w.compact && w.complete {
@@ -366,7 +373,15 @@ func writeAny(w *textWriter, v reflect.Value, props *Properties) error {
		}
	}
	w.indent()
-	if err := writeStruct(w, v); err != nil {
+	if tm, ok := v.Interface().(textMarshaler); ok {
+		text, err := tm.MarshalText()
+		if err != nil {
+			return err
+		}
+		if _, err = w.Write(text); err != nil {
+			return err
+		}
+	} else if err := writeStruct(w, v); err != nil {
		return err
	}
	w.unindent()
@@ -555,7 +570,18 @@ func writeExtensions(w *textWriter, pv reflect.Value) error {
	// Order the extensions by ID.
	// This isn't strictly necessary, but it will give us
	// canonical output, which will also make testing easier.
-	m := ep.ExtensionMap()
+	var m map[int32]Extension
+	if em, ok := ep.(extensionsMap); ok {
+		m = em.ExtensionMap()
+	} else if em, ok := ep.(extensionsBytes); ok {
+		eb := em.GetExtensions()
+		var err error
+		m, err = BytesToExtensionsMap(*eb)
+		if err != nil {
+			return err
+		}
+	}

	ids := make([]int32, 0, len(m))
	for id := range m {
		ids = append(ids, id)
@@ -653,6 +679,19 @@ func marshalText(w io.Writer, pb Message, compact bool) error {
		compact: compact,
	}

	if tm, ok := pb.(textMarshaler); ok {
		text, err := tm.MarshalText()
		if err != nil {
			return err
		}
		if _, err = aw.Write(text); err != nil {
			return err
		}
		if bw != nil {
			return bw.Flush()
		}
		return nil
	}
	// Dereference the received pointer so we don't have outer < and >.
	v := reflect.Indirect(val)
	if err := writeStruct(aw, v); err != nil {
@@ -666,7 +705,9 @@ func marshalText(w io.Writer, pb Message, compact bool) error {

// MarshalText writes a given protocol buffer in text format.
// The only errors returned are from w.
-func MarshalText(w io.Writer, pb Message) error { return marshalText(w, pb, false) }
+func MarshalText(w io.Writer, pb Message) error {
+	return marshalText(w, pb, false)
+}

// MarshalTextString is the same as MarshalText, but returns the string directly.
func MarshalTextString(pb Message) string {

@@ -48,6 +48,13 @@ import (
	"unicode/utf8"
)

// textUnmarshaler is implemented by Messages that can unmarshal themselves.
// It is identical to encoding.TextUnmarshaler, introduced in go 1.2,
// which will eventually replace it.
type textUnmarshaler interface {
	UnmarshalText(text []byte) error
}

type ParseError struct {
	Message string
	Line    int // 1-based line number
@@ -686,6 +693,7 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) *ParseError {
		default:
			return p.errorf("expected '{' or '<', found %q", tok.value)
		}
		// TODO: Handle nested messages which implement textUnmarshaler.
		return p.readStruct(fv, terminator)
	case reflect.Uint32:
		if x, err := strconv.ParseUint(tok.value, 0, 32); err == nil {
@@ -704,6 +712,10 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) *ParseError {
// UnmarshalText reads a protocol buffer in Text format. UnmarshalText resets pb
// before starting to unmarshal, so any existing data in pb is always removed.
func UnmarshalText(s string, pb Message) error {
	if um, ok := pb.(textUnmarshaler); ok {
		err := um.UnmarshalText([]byte(s))
		return err
	}
	pb.Reset()
	v := reflect.ValueOf(pb)
	if pe := newTextParser(s).readStruct(v.Elem(), ""); pe != nil {

@@ -41,9 +41,9 @@ import (
)

type UnmarshalTextTest struct {
	in  string
	err string // if "", no error expected
	out *MyMessage
}

func buildExtStructTest(text string) UnmarshalTextTest {
@@ -78,97 +78,97 @@ func buildExtRepStringTest(text string) UnmarshalTextTest {
var unMarshalTextTests = []UnmarshalTextTest{
	// Basic
	{
		in: " count:42\n name:\"Dave\" ",
		out: &MyMessage{
			Count: Int32(42),
			Name:  String("Dave"),
		},
	},

	// Empty quoted string
	{
		in: `count:42 name:""`,
		out: &MyMessage{
			Count: Int32(42),
			Name:  String(""),
		},
	},

	// Quoted string concatenation
	{
		in: `count:42 name: "My name is "` + "\n" + `"elsewhere"`,
		out: &MyMessage{
			Count: Int32(42),
			Name:  String("My name is elsewhere"),
		},
	},

	// Quoted string with escaped apostrophe
	{
		in: `count:42 name: "HOLIDAY - New Year\'s Day"`,
		out: &MyMessage{
			Count: Int32(42),
			Name:  String("HOLIDAY - New Year's Day"),
		},
	},

	// Quoted string with single quote
	{
		in: `count:42 name: 'Roger "The Ramster" Ramjet'`,
		out: &MyMessage{
			Count: Int32(42),
			Name:  String(`Roger "The Ramster" Ramjet`),
		},
	},

	// Quoted string with all the accepted special characters from the C++ test
	{
		in: `count:42 name: ` + "\"\\\"A string with \\' characters \\n and \\r newlines and \\t tabs and \\001 slashes \\\\ and multiple spaces\"",
		out: &MyMessage{
			Count: Int32(42),
			Name:  String("\"A string with ' characters \n and \r newlines and \t tabs and \001 slashes \\ and multiple spaces"),
		},
	},

	// Quoted string with quoted backslash
	{
		in: `count:42 name: "\\'xyz"`,
		out: &MyMessage{
			Count: Int32(42),
			Name:  String(`\'xyz`),
		},
	},

	// Quoted string with UTF-8 bytes.
	{
		in: "count:42 name: '\303\277\302\201\xAB'",
		out: &MyMessage{
			Count: Int32(42),
			Name:  String("\303\277\302\201\xAB"),
		},
	},

	// Bad quoted string
	{
		in:  `inner: < host: "\0" >` + "\n",
		err: `line 1.15: invalid quoted string "\0"`,
	},

	// Number too large for int64
	{
		in:  "count: 123456789012345678901",
		err: "line 1.7: invalid int32: 123456789012345678901",
	},

	// Number too large for int32
	{
		in:  "count: 1234567890123",
		err: "line 1.7: invalid int32: 1234567890123",
	},

	// Number in hexadecimal
	{
		in: "count: 0x2beef",
		out: &MyMessage{
			Count: Int32(0x2beef),
		},
@@ -176,7 +176,7 @@ var unMarshalTextTests = []UnmarshalTextTest{

	// Number in octal
	{
		in: "count: 024601",
		out: &MyMessage{
			Count: Int32(024601),
		},
@@ -184,9 +184,9 @@ var unMarshalTextTests = []UnmarshalTextTest{

	// Floating point number with "f" suffix
	{
		in: "count: 4 others:< weight: 17.0f >",
		out: &MyMessage{
			Count: Int32(4),
			Others: []*OtherMessage{
				{
					Weight: Float32(17),
@@ -197,69 +197,69 @@ var unMarshalTextTests = []UnmarshalTextTest{

	// Floating point positive infinity
	{
		in: "count: 4 bigfloat: inf",
		out: &MyMessage{
			Count:    Int32(4),
			Bigfloat: Float64(math.Inf(1)),
		},
	},

	// Floating point negative infinity
	{
		in: "count: 4 bigfloat: -inf",
		out: &MyMessage{
			Count:    Int32(4),
			Bigfloat: Float64(math.Inf(-1)),
		},
	},

	// Number too large for float32
	{
		in:  "others:< weight: 12345678901234567890123456789012345678901234567890 >",
		err: "line 1.17: invalid float32: 12345678901234567890123456789012345678901234567890",
	},

	// Number posing as a quoted string
	{
		in:  `inner: < host: 12 >` + "\n",
		err: `line 1.15: invalid string: 12`,
	},

	// Quoted string posing as int32
	{
		in:  `count: "12"`,
		err: `line 1.7: invalid int32: "12"`,
	},

	// Quoted string posing a float32
	{
		in:  `others:< weight: "17.4" >`,
		err: `line 1.17: invalid float32: "17.4"`,
	},

	// Enum
	{
		in: `count:42 bikeshed: BLUE`,
		out: &MyMessage{
			Count:    Int32(42),
			Bikeshed: MyMessage_BLUE.Enum(),
		},
	},

	// Repeated field
	{
		in: `count:42 pet: "horsey" pet:"bunny"`,
		out: &MyMessage{
			Count: Int32(42),
			Pet:   []string{"horsey", "bunny"},
		},
	},

	// Repeated message with/without colon and <>/{}
	{
		in: `count:42 others:{} others{} others:<> others:{}`,
		out: &MyMessage{
			Count: Int32(42),
			Others: []*OtherMessage{
				{},
				{},
@@ -271,9 +271,9 @@ var unMarshalTextTests = []UnmarshalTextTest{

	// Missing colon for inner message
	{
		in: `count:42 inner < host: "cauchy.syd" >`,
		out: &MyMessage{
			Count: Int32(42),
			Inner: &InnerMessage{
				Host: String("cauchy.syd"),
			},
@@ -282,33 +282,33 @@ var unMarshalTextTests = []UnmarshalTextTest{

	// Missing colon for string field
	{
		in:  `name "Dave"`,
		err: `line 1.5: expected ':', found "\"Dave\""`,
	},

	// Missing colon for int32 field
	{
		in:  `count 42`,
		err: `line 1.6: expected ':', found "42"`,
	},

	// Missing required field
	{
		in:  ``,
		err: `line 1.0: message testdata.MyMessage missing required field "count"`,
	},

	// Repeated non-repeated field
	{
		in:  `name: "Rob" name: "Russ"`,
		err: `line 1.12: non-repeated field "name" was repeated`,
	},

	// Group
	{
		in: `count: 17 SomeGroup { group_field: 12 }`,
		out: &MyMessage{
			Count: Int32(17),
			Somegroup: &MyMessage_SomeGroup{
				GroupField: Int32(12),
			},
@@ -317,18 +317,18 @@ var unMarshalTextTests = []UnmarshalTextTest{

	// Semicolon between fields
	{
		in: `count:3;name:"Calvin"`,
		out: &MyMessage{
			Count: Int32(3),
			Name:  String("Calvin"),
		},
	},
	// Comma between fields
	{
		in: `count:4,name:"Ezekiel"`,
		out: &MyMessage{
			Count: Int32(4),
			Name:  String("Ezekiel"),
		},
	},

@@ -363,25 +363,25 @@ var unMarshalTextTests = []UnmarshalTextTest{
		` >` +
		`>`,
		out: &MyMessage{
			Count: Int32(42),
			Name:  String("Dave"),
			Quote: String(`"I didn't want to go."`),
			Pet:   []string{"bunny", "kitty", "horsey"},
			Inner: &InnerMessage{
				Host:      String("footrest.syd"),
				Port:      Int32(7001),
				Connected: Bool(true),
			},
			Others: []*OtherMessage{
				{
					Key:   Int64(3735928559),
					Value: []byte{0x1, 'A', '\a', '\f'},
				},
				{
					Weight: Float32(58.9),
					Inner: &InnerMessage{
						Host: String("lesha.mtv"),
						Port: Int32(8002),
					},
				},
			},
@@ -413,6 +413,16 @@ func TestUnmarshalText(t *testing.T) {
	}
}

func TestUnmarshalTextCustomMessage(t *testing.T) {
	msg := &textMessage{}
	if err := UnmarshalText("custom", msg); err != nil {
		t.Errorf("Unexpected error from custom unmarshal: %v", err)
	}
	if UnmarshalText("not custom", msg) == nil {
		t.Errorf("Didn't get expected error from custom unmarshal")
	}
}

// Regression test; this caused a panic.
func TestRepeatedEnum(t *testing.T) {
	pb := new(RepeatedEnum)

@@ -44,37 +44,57 @@ import (
	pb "./testdata"
)

// textMessage implements the methods that allow it to marshal and unmarshal
// itself as text.
type textMessage struct {
}

func (*textMessage) MarshalText() ([]byte, error) {
	return []byte("custom"), nil
}

func (*textMessage) UnmarshalText(bytes []byte) error {
	if string(bytes) != "custom" {
		return errors.New("expected 'custom'")
	}
	return nil
}

func (*textMessage) Reset()         {}
func (*textMessage) String() string { return "" }
func (*textMessage) ProtoMessage()  {}

func newTestMessage() *pb.MyMessage {
	msg := &pb.MyMessage{
		Count: proto.Int32(42),
		Name:  proto.String("Dave"),
		Quote: proto.String(`"I didn't want to go."`),
		Pet:   []string{"bunny", "kitty", "horsey"},
		Inner: &pb.InnerMessage{
			Host:      proto.String("footrest.syd"),
			Port:      proto.Int32(7001),
			Connected: proto.Bool(true),
		},
		Others: []*pb.OtherMessage{
			{
				Key:   proto.Int64(0xdeadbeef),
				Value: []byte{1, 65, 7, 12},
			},
			{
				Weight: proto.Float32(6.022),
				Inner: &pb.InnerMessage{
					Host: proto.String("lesha.mtv"),
					Port: proto.Int32(8002),
				},
			},
		},
		Bikeshed: pb.MyMessage_BLUE.Enum(),
		Somegroup: &pb.MyMessage_SomeGroup{
			GroupField: proto.Int32(8),
		},
		// One normally wouldn't do this.
		// This is an undeclared tag 13, as a varint (wire type 0) with value 4.
		XXX_unrecognized: []byte{13<<3 | 0, 4},
	}
	ext := &pb.Ext{
		Data: proto.String("Big gobs for big rats"),
@@ -153,6 +173,16 @@ func TestMarshalText(t *testing.T) {
	}
}

func TestMarshalTextCustomMessage(t *testing.T) {
	buf := new(bytes.Buffer)
	if err := proto.MarshalText(buf, &textMessage{}); err != nil {
		t.Fatalf("proto.MarshalText: %v", err)
	}
	s := buf.String()
	if s != "custom" {
		t.Errorf("Got %q, expected %q", s, "custom")
	}
}
func TestMarshalTextNil(t *testing.T) {
	want := "<nil>"
	tests := []proto.Message{nil, (*pb.MyMessage)(nil)}
@@ -250,8 +280,8 @@ func TestCompactText(t *testing.T) {

func TestStringEscaping(t *testing.T) {
	testCases := []struct {
		in  *pb.Strings
		out string
	}{
		{
			// Test data from C++ test (TextFormatTest.StringEscape).
@@ -299,8 +329,8 @@ func TestStringEscaping(t *testing.T) {
// This is a proxy for something like a nearly-full or imminently-failing disk,
// or a network connection that is about to die.
type limitedWriter struct {
	b     bytes.Buffer
	limit int
}

var outOfSpace = errors.New("proto: insufficient space")
@@ -337,8 +367,8 @@ func TestMarshalTextFailing(t *testing.T) {

func TestFloats(t *testing.T) {
	tests := []struct {
		f    float64
		want string
	}{
		{0, "0"},
		{4.7, "4.7"},

third_party/github.com/goraft/raft/log.go (3 changes, vendored)
@@ -168,9 +168,10 @@ func (l *Log) open(path string) error {
		if err == io.EOF {
			debugln("open.log.append: finish ")
		} else {
-			if err = os.Truncate(path, readBytes); err != nil {
+			if err = l.file.Truncate(readBytes); err != nil {
				return fmt.Errorf("raft.Log: Unable to recover: %v", err)
			}
+			l.file.Seek(readBytes, os.SEEK_SET)
		}
		break
	}

@@ -2,6 +2,23 @@
// source: append_entries_request.proto
// DO NOT EDIT!

/*
	Package protobuf is a generated protocol buffer package.

	It is generated from these files:
		append_entries_request.proto
		append_entries_responses.proto
		log_entry.proto
		request_vote_request.proto
		request_vote_responses.proto
		snapshot_recovery_request.proto
		snapshot_recovery_response.proto
		snapshot_request.proto
		snapshot_response.proto

	It has these top-level messages:
		AppendEntriesRequest
*/
package protobuf

import proto "github.com/coreos/etcd/third_party/code.google.com/p/gogoprotobuf/proto"
@@ -110,7 +127,7 @@ func (m *AppendEntriesRequest) Unmarshal(data []byte) error {
	switch fieldNum {
	case 1:
		if wireType != 0 {
-			return proto.ErrWrongType
+			return code_google_com_p_gogoprotobuf_proto2.ErrWrongType
		}
		var v uint64
		for shift := uint(0); ; shift += 7 {
@@ -127,7 +144,7 @@ func (m *AppendEntriesRequest) Unmarshal(data []byte) error {
		m.Term = &v
	case 2:
		if wireType != 0 {
-			return proto.ErrWrongType
+			return code_google_com_p_gogoprotobuf_proto2.ErrWrongType
		}
		var v uint64
		for shift := uint(0); ; shift += 7 {
@@ -144,7 +161,7 @@ func (m *AppendEntriesRequest) Unmarshal(data []byte) error {
		m.PrevLogIndex = &v
	case 3:
		if wireType != 0 {
-			return proto.ErrWrongType
+			return code_google_com_p_gogoprotobuf_proto2.ErrWrongType
		}
		var v uint64
		for shift := uint(0); ; shift += 7 {
@@ -161,7 +178,7 @@ func (m *AppendEntriesRequest) Unmarshal(data []byte) error {
		m.PrevLogTerm = &v
	case 4:
		if wireType != 0 {
-			return proto.ErrWrongType
+			return code_google_com_p_gogoprotobuf_proto2.ErrWrongType
		}
		var v uint64
		for shift := uint(0); ; shift += 7 {
@@ -178,7 +195,7 @@ func (m *AppendEntriesRequest) Unmarshal(data []byte) error {
		m.CommitIndex = &v
	case 5:
		if wireType != 2 {
-			return proto.ErrWrongType
+			return code_google_com_p_gogoprotobuf_proto2.ErrWrongType
		}
		var stringLen uint64
		for shift := uint(0); ; shift += 7 {
@@ -201,7 +218,7 @@ func (m *AppendEntriesRequest) Unmarshal(data []byte) error {
		index = postIndex
	case 6:
		if wireType != 2 {
-			return proto.ErrWrongType
+			return code_google_com_p_gogoprotobuf_proto2.ErrWrongType
		}
		var msglen int
		for shift := uint(0); ; shift += 7 {
@@ -236,6 +253,9 @@ func (m *AppendEntriesRequest) Unmarshal(data []byte) error {
			if err != nil {
				return err
			}
+			if (index + skippy) > l {
+				return io1.ErrUnexpectedEOF
+			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, data[index:index+skippy]...)
			index += skippy
		}
@@ -309,7 +329,6 @@ func sovAppendEntriesRequest(x uint64) (n int) {
}
func sozAppendEntriesRequest(x uint64) (n int) {
	return sovAppendEntriesRequest(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func NewPopulatedAppendEntriesRequest(r randyAppendEntriesRequest, easy bool) *AppendEntriesRequest {
	this := &AppendEntriesRequest{}

@@ -94,7 +94,7 @@ func (m *AppendEntriesResponse) Unmarshal(data []byte) error {
	switch fieldNum {
	case 1:
		if wireType != 0 {
-			return proto.ErrWrongType
+			return code_google_com_p_gogoprotobuf_proto4.ErrWrongType
		}
		var v uint64
		for shift := uint(0); ; shift += 7 {
@@ -111,7 +111,7 @@ func (m *AppendEntriesResponse) Unmarshal(data []byte) error {
		m.Term = &v
	case 2:
		if wireType != 0 {
-			return proto.ErrWrongType
+			return code_google_com_p_gogoprotobuf_proto4.ErrWrongType
		}
		var v uint64
		for shift := uint(0); ; shift += 7 {
@@ -128,7 +128,7 @@ func (m *AppendEntriesResponse) Unmarshal(data []byte) error {
		m.Index = &v
	case 3:
		if wireType != 0 {
-			return proto.ErrWrongType
+			return code_google_com_p_gogoprotobuf_proto4.ErrWrongType
		}
		var v uint64
		for shift := uint(0); ; shift += 7 {
@@ -145,7 +145,7 @@ func (m *AppendEntriesResponse) Unmarshal(data []byte) error {
		m.CommitIndex = &v
	case 4:
		if wireType != 0 {
-			return proto.ErrWrongType
+			return code_google_com_p_gogoprotobuf_proto4.ErrWrongType
		}
		var v int
		for shift := uint(0); ; shift += 7 {
@@ -175,6 +175,9 @@ func (m *AppendEntriesResponse) Unmarshal(data []byte) error {
			if err != nil {
				return err
			}
+			if (index + skippy) > l {
+				return io2.ErrUnexpectedEOF
+			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, data[index:index+skippy]...)
			index += skippy
		}
@@ -236,7 +239,6 @@ func sovAppendEntriesResponses(x uint64) (n int) {
}
func sozAppendEntriesResponses(x uint64) (n int) {
	return sovAppendEntriesResponses(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func NewPopulatedAppendEntriesResponse(r randyAppendEntriesResponses, easy bool) *AppendEntriesResponse {
	this := &AppendEntriesResponse{}

@@ -94,7 +94,7 @@ func (m *LogEntry) Unmarshal(data []byte) error {
	switch fieldNum {
	case 1:
		if wireType != 0 {
-			return proto.ErrWrongType
+			return code_google_com_p_gogoprotobuf_proto.ErrWrongType
		}
		var v uint64
		for shift := uint(0); ; shift += 7 {
@@ -111,7 +111,7 @@ func (m *LogEntry) Unmarshal(data []byte) error {
		m.Index = &v
	case 2:
		if wireType != 0 {
-			return proto.ErrWrongType
+			return code_google_com_p_gogoprotobuf_proto.ErrWrongType
		}
		var v uint64
		for shift := uint(0); ; shift += 7 {
@@ -128,7 +128,7 @@ func (m *LogEntry) Unmarshal(data []byte) error {
		m.Term = &v
	case 3:
		if wireType != 2 {
-			return proto.ErrWrongType
+			return code_google_com_p_gogoprotobuf_proto.ErrWrongType
		}
		var stringLen uint64
		for shift := uint(0); ; shift += 7 {
@@ -151,7 +151,7 @@ func (m *LogEntry) Unmarshal(data []byte) error {
		index = postIndex
	case 4:
		if wireType != 2 {
-			return proto.ErrWrongType
+			return code_google_com_p_gogoprotobuf_proto.ErrWrongType
		}
		var byteLen int
		for shift := uint(0); ; shift += 7 {
@@ -185,6 +185,9 @@ func (m *LogEntry) Unmarshal(data []byte) error {
			if err != nil {
				return err
			}
+			if (index + skippy) > l {
+				return io.ErrUnexpectedEOF
+			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, data[index:index+skippy]...)
			index += skippy
		}
@@ -248,7 +251,6 @@ func sovLogEntry(x uint64) (n int) {
}
func sozLogEntry(x uint64) (n int) {
	return sovLogEntry(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func NewPopulatedLogEntry(r randyLogEntry, easy bool) *LogEntry {
	this := &LogEntry{}

@ -94,7 +94,7 @@ func (m *RequestVoteRequest) Unmarshal(data []byte) error {
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 0 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto6.ErrWrongType
|
||||
}
|
||||
var v uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -111,7 +111,7 @@ func (m *RequestVoteRequest) Unmarshal(data []byte) error {
|
||||
m.Term = &v
|
||||
case 2:
|
||||
if wireType != 0 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto6.ErrWrongType
|
||||
}
|
||||
var v uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -128,7 +128,7 @@ func (m *RequestVoteRequest) Unmarshal(data []byte) error {
|
||||
m.LastLogIndex = &v
|
||||
case 3:
|
||||
if wireType != 0 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto6.ErrWrongType
|
||||
}
|
||||
var v uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -145,7 +145,7 @@ func (m *RequestVoteRequest) Unmarshal(data []byte) error {
|
||||
m.LastLogTerm = &v
|
||||
case 4:
|
||||
if wireType != 2 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto6.ErrWrongType
|
||||
}
|
||||
var stringLen uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -180,6 +180,9 @@ func (m *RequestVoteRequest) Unmarshal(data []byte) error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if (index + skippy) > l {
|
||||
return io3.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, data[index:index+skippy]...)
|
||||
index += skippy
|
||||
}
|
||||
@ -242,7 +245,6 @@ func sovRequestVoteRequest(x uint64) (n int) {
|
||||
}
|
||||
func sozRequestVoteRequest(x uint64) (n int) {
|
||||
return sovRequestVoteRequest(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
return sovRequestVoteRequest(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
}
|
||||
func NewPopulatedRequestVoteRequest(r randyRequestVoteRequest, easy bool) *RequestVoteRequest {
|
||||
this := &RequestVoteRequest{}
|
||||
|
@ -78,7 +78,7 @@ func (m *RequestVoteResponse) Unmarshal(data []byte) error {
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 0 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto8.ErrWrongType
|
||||
}
|
||||
var v uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -95,7 +95,7 @@ func (m *RequestVoteResponse) Unmarshal(data []byte) error {
|
||||
m.Term = &v
|
||||
case 2:
|
||||
if wireType != 0 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto8.ErrWrongType
|
||||
}
|
||||
var v int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -125,6 +125,9 @@ func (m *RequestVoteResponse) Unmarshal(data []byte) error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if (index + skippy) > l {
|
||||
return io4.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, data[index:index+skippy]...)
|
||||
index += skippy
|
||||
}
|
||||
@ -178,7 +181,6 @@ func sovRequestVoteResponses(x uint64) (n int) {
|
||||
}
|
||||
func sozRequestVoteResponses(x uint64) (n int) {
|
||||
return sovRequestVoteResponses(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
return sovRequestVoteResponses(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
}
|
||||
func NewPopulatedRequestVoteResponse(r randyRequestVoteResponses, easy bool) *RequestVoteResponse {
|
||||
this := &RequestVoteResponse{}
|
||||
|
@ -125,7 +125,7 @@ func (m *SnapshotRecoveryRequest) Unmarshal(data []byte) error {
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 2 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto10.ErrWrongType
|
||||
}
|
||||
var stringLen uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -148,7 +148,7 @@ func (m *SnapshotRecoveryRequest) Unmarshal(data []byte) error {
|
||||
index = postIndex
|
||||
case 2:
|
||||
if wireType != 0 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto10.ErrWrongType
|
||||
}
|
||||
var v uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -165,7 +165,7 @@ func (m *SnapshotRecoveryRequest) Unmarshal(data []byte) error {
|
||||
m.LastIndex = &v
|
||||
case 3:
|
||||
if wireType != 0 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto10.ErrWrongType
|
||||
}
|
||||
var v uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -182,7 +182,7 @@ func (m *SnapshotRecoveryRequest) Unmarshal(data []byte) error {
|
||||
m.LastTerm = &v
|
||||
case 4:
|
||||
if wireType != 2 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto10.ErrWrongType
|
||||
}
|
||||
var msglen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -205,7 +205,7 @@ func (m *SnapshotRecoveryRequest) Unmarshal(data []byte) error {
|
||||
index = postIndex
|
||||
case 5:
|
||||
if wireType != 2 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto10.ErrWrongType
|
||||
}
|
||||
var byteLen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -239,6 +239,9 @@ func (m *SnapshotRecoveryRequest) Unmarshal(data []byte) error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if (index + skippy) > l {
|
||||
return io5.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, data[index:index+skippy]...)
|
||||
index += skippy
|
||||
}
|
||||
@ -266,7 +269,7 @@ func (m *SnapshotRecoveryRequest_Peer) Unmarshal(data []byte) error {
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 2 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto10.ErrWrongType
|
||||
}
|
||||
var stringLen uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -289,7 +292,7 @@ func (m *SnapshotRecoveryRequest_Peer) Unmarshal(data []byte) error {
|
||||
index = postIndex
|
||||
case 2:
|
||||
if wireType != 2 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto10.ErrWrongType
|
||||
}
|
||||
var stringLen uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -324,6 +327,9 @@ func (m *SnapshotRecoveryRequest_Peer) Unmarshal(data []byte) error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if (index + skippy) > l {
|
||||
return io5.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, data[index:index+skippy]...)
|
||||
index += skippy
|
||||
}
|
||||
@ -422,7 +428,6 @@ func sovSnapshotRecoveryRequest(x uint64) (n int) {
|
||||
}
|
||||
func sozSnapshotRecoveryRequest(x uint64) (n int) {
|
||||
return sovSnapshotRecoveryRequest(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
return sovSnapshotRecoveryRequest(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
}
|
||||
func NewPopulatedSnapshotRecoveryRequest(r randySnapshotRecoveryRequest, easy bool) *SnapshotRecoveryRequest {
|
||||
this := &SnapshotRecoveryRequest{}
|
||||
|
@ -86,7 +86,7 @@ func (m *SnapshotRecoveryResponse) Unmarshal(data []byte) error {
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 0 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto12.ErrWrongType
|
||||
}
|
||||
var v uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -103,7 +103,7 @@ func (m *SnapshotRecoveryResponse) Unmarshal(data []byte) error {
|
||||
m.Term = &v
|
||||
case 2:
|
||||
if wireType != 0 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto12.ErrWrongType
|
||||
}
|
||||
var v int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -121,7 +121,7 @@ func (m *SnapshotRecoveryResponse) Unmarshal(data []byte) error {
|
||||
m.Success = &b
|
||||
case 3:
|
||||
if wireType != 0 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto12.ErrWrongType
|
||||
}
|
||||
var v uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -150,6 +150,9 @@ func (m *SnapshotRecoveryResponse) Unmarshal(data []byte) error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if (index + skippy) > l {
|
||||
return io6.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, data[index:index+skippy]...)
|
||||
index += skippy
|
||||
}
|
||||
@ -207,7 +210,6 @@ func sovSnapshotRecoveryResponse(x uint64) (n int) {
|
||||
}
|
||||
func sozSnapshotRecoveryResponse(x uint64) (n int) {
|
||||
return sovSnapshotRecoveryResponse(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
return sovSnapshotRecoveryResponse(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
}
|
||||
func NewPopulatedSnapshotRecoveryResponse(r randySnapshotRecoveryResponse, easy bool) *SnapshotRecoveryResponse {
|
||||
this := &SnapshotRecoveryResponse{}
|
||||
|
@ -86,7 +86,7 @@ func (m *SnapshotRequest) Unmarshal(data []byte) error {
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 2 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto14.ErrWrongType
|
||||
}
|
||||
var stringLen uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -109,7 +109,7 @@ func (m *SnapshotRequest) Unmarshal(data []byte) error {
|
||||
index = postIndex
|
||||
case 2:
|
||||
if wireType != 0 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto14.ErrWrongType
|
||||
}
|
||||
var v uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -126,7 +126,7 @@ func (m *SnapshotRequest) Unmarshal(data []byte) error {
|
||||
m.LastIndex = &v
|
||||
case 3:
|
||||
if wireType != 0 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto14.ErrWrongType
|
||||
}
|
||||
var v uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -155,6 +155,9 @@ func (m *SnapshotRequest) Unmarshal(data []byte) error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if (index + skippy) > l {
|
||||
return io7.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, data[index:index+skippy]...)
|
||||
index += skippy
|
||||
}
|
||||
@ -213,7 +216,6 @@ func sovSnapshotRequest(x uint64) (n int) {
|
||||
}
|
||||
func sozSnapshotRequest(x uint64) (n int) {
|
||||
return sovSnapshotRequest(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
return sovSnapshotRequest(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
}
|
||||
func NewPopulatedSnapshotRequest(r randySnapshotRequest, easy bool) *SnapshotRequest {
|
||||
this := &SnapshotRequest{}
|
||||
|
@ -70,7 +70,7 @@ func (m *SnapshotResponse) Unmarshal(data []byte) error {
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 0 {
|
||||
return proto.ErrWrongType
|
||||
return code_google_com_p_gogoprotobuf_proto16.ErrWrongType
|
||||
}
|
||||
var v int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
@ -100,6 +100,9 @@ func (m *SnapshotResponse) Unmarshal(data []byte) error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if (index + skippy) > l {
|
||||
return io8.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, data[index:index+skippy]...)
|
||||
index += skippy
|
||||
}
|
||||
@ -149,7 +152,6 @@ func sovSnapshotResponse(x uint64) (n int) {
|
||||
}
|
||||
func sozSnapshotResponse(x uint64) (n int) {
|
||||
return sovSnapshotResponse(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
return sovSnapshotResponse(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
}
|
||||
func NewPopulatedSnapshotResponse(r randySnapshotResponse, easy bool) *SnapshotResponse {
|
||||
this := &SnapshotResponse{}
|
||||