Compare commits
185 commits: v3.2.23 ... v3.2.0+git
SHA1:

e475a4ea71 7f149d8fb6 a825709940 1acc8090e3 e962b0c849
44a6c2121b 8fa96cb303 42584f84b4 03ab4d9cc5 5fedaf2dd7
5e059fd8dc 0d0c0f3959 5fe58228b4 b9a53db0c2 639687bb89
15b86d064d b6b56160cd 703893f334 099952136a 52afc03d68
6e74c335e2 aa0e6b26c0 44422f3898 dcf52bbfac 95bc33f37f
5bba05703c 037e33e833 1748fe3eda f5b96991a1 1f206c027a
3a37b68cda c27634c215 e5aa938fec 13d9438cf9 66687da3ba
0caab26310 ee0c805de2 0011b78bd5 e6d26675e6 e402606f02
750dc7f157 74e020b715 3993f37a26 e006e2dbcb a7c33d48de
4445996a38 5ae04259c4 b7741c6ecf 4ebeba0e18 2afd0a726f
7ff5b05004 933aa09b73 3fcb8336aa b194276289 7f3127441b
3a6180d490 bdddbcc414 84e6aaff66 d5b917daad ad0b3cfdab
9543431aeb d6750158fb 56841bbc5f 798119ed6f 5bb0a091fc
d173b09a1b da48f1feaf ad22aaa354 3b460506d9 56db7e56f9
a8c073c51e 762b2c625c 74a2b2e873 2caae60004 4dff7aaa2a
9ffdb3a59e 300feea177 45fd8279f0 d335821c51 c6330d86f1
c2dadbd9f8 eb3622942b fa4903c83c aaa9e1735a 3df9352c00
8f8f79db56 7b68318284 0c655902f2 83b2ea2f60 0352ce79b8
8d8d1d225a fe727f3106 a36d62a30c 29911195de c3fcf0f339
09abea5784 87a3c87e45 fb086ef13f fd71da47d1 4c5f9e0910
e12c7f6dd4 8542f2e673 d83d7e8262 d8935903a2 9a367a39d0
7350525937 0989780a77 1711fdba32 f5a5abf8ad 402fa8a827
ef63abdf7f 85f433232a 17ad275124 42104fd44b 2332afe877
e3ff4bf095 1561eb612c 8fbf7ce744 88a3bb74b3 f5fc6649fe
aefd3eb4cf 9c2bbc51ca 3cbbb54927 ace1760628 887db5a3db
9b33aa1967 591443d838 68e0e4abc1 cdb722123a 1be245269e
97519cf79f 156612bb25 4301f49988 c578ac4a1a 1cbc7cc274
f80be42a55 82153e8840 e0653043ff 0c923bdf11 085bea5c5a
166ae10ca3 ea8561c35c 1b48d6e5df 00e581754b 8effbda3a7
d8210da505 1467b456ae 85095760ff f03ed33c87 7acd43e8bb
a20e667c5b 3748e3cf28 119bca6ce7 c250e7be9e 0970fe78a0
5d837e5ab3 c3879e3776 84226a722c 1de75d2035 ee45c948ac
e42d5174ef e66a1439db 6846e49edf 3e1eb1a2e7 ac4855e911
73dee0bec4 0506f49f9e 343a018361 57de98f132 99366c6b42
384a84ceee dac2c10ce9 9b6c8d216f 2f84f3d8d8 212a1efd47
68a72c6b6e 9e7740011b b003734be6 e9f464debc ae7ddfb483
ab16fa1f07 db7ab961bf 44a49ff45a 1aca63e9e0 163fd2d76b
@@ -41,10 +41,13 @@ matrix:

addons:
apt:
sources:
- debian-sid
packages:
- libpcap-dev
- libaspell-dev
- libhunspell-dev
- shellcheck

before_install:
- go get -v -u github.com/chzchzchz/goword
63 CODE_OF_CONDUCT.md (new file)
@@ -0,0 +1,63 @@
## CoreOS Community Code of Conduct

### Contributor Code of Conduct

As contributors and maintainers of this project, and in the interest of
fostering an open and welcoming community, we pledge to respect all people who
contribute through reporting issues, posting feature requests, updating
documentation, submitting pull requests or patches, and other activities.

We are committed to making participation in this project a harassment-free
experience for everyone, regardless of level of experience, gender, gender
identity and expression, sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, religion, or nationality.

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing others' private information, such as physical or electronic addresses, without explicit permission
* Other unethical or unprofessional conduct.

Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct. By adopting this Code of Conduct,
project maintainers commit themselves to fairly and consistently applying these
principles to every aspect of managing this project. Project maintainers who do
not follow or enforce the Code of Conduct may be permanently removed from the
project team.

This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting a project maintainer, Brandon Philips
<brandon.philips@coreos.com>, and/or Meghan Schofield
<meghan.schofield@coreos.com>.

This Code of Conduct is adapted from the Contributor Covenant
(http://contributor-covenant.org), version 1.2.0, available at
http://contributor-covenant.org/version/1/2/0/

### CoreOS Events Code of Conduct

CoreOS events are working conferences intended for professional networking and
collaboration in the CoreOS community. Attendees are expected to behave
according to professional standards and in accordance with their employer’s
policies on appropriate workplace behavior.

While at CoreOS events or related social networking opportunities, attendees
should not engage in discriminatory or offensive speech or actions including
but not limited to gender, sexuality, race, age, disability, or religion.
Speakers should be especially aware of these concerns.

CoreOS does not condone any statements by speakers contrary to these standards.
CoreOS reserves the right to deny entrance and/or eject from an event (without
refund) any individual found to be engaging in discriminatory or offensive
speech or actions.

Please bring any concerns to the immediate attention of designated on-site
staff, Brandon Philips <brandon.philips@coreos.com>, and/or Meghan Schofield
<meghan.schofield@coreos.com>.
@@ -38,6 +38,15 @@ curl -L http://localhost:2379/v3alpha/kv/put \
# {"result":{"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"2"},"events":[{"kv":{"key":"Zm9v","create_revision":"2","mod_revision":"2","version":"1","value":"YmFy"}}]}}
```

Use `curl` to issue a transaction:

```bash
curl -L http://localhost:2379/v3alpha/kv/txn \
  -X POST \
  -d '{"compare":[{"target":"CREATE","key":"Zm9v","createRevision":"2"}],"success":[{"requestPut":{"key":"Zm9v","value":"YmFy"}}]}'
# {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"3","raft_term":"2"},"succeeded":true,"responses":[{"response_put":{"header":{"revision":"3"}}}]}
```
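As a side note, the JSON gateway expects keys and values as base64-encoded byte strings; `Zm9v` and `YmFy` above are simply `foo` and `bar`. A quick way to produce and check these encodings from a shell (standard `base64` utility assumed):

```sh
# encode a key/value for the JSON gateway
$ echo -n foo | base64    # Zm9v
$ echo -n bar | base64    # YmFy
# decode a value returned by the gateway
$ echo YmFy | base64 -d   # bar
```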
## Swagger

Generated [Swagger][swagger] API definitions can be found at [rpc.swagger.json][swagger-doc].
@@ -790,6 +790,7 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
| created | created is set to true if the response is for a create watch request. The client should record the watch_id and expect to receive events for the created watcher from the same stream. All events sent to the created watcher will attach with the same watch_id. | bool |
| canceled | canceled is set to true if the response is for a cancel watch request. No further events will be sent to the canceled watcher. | bool |
| compact_revision | compact_revision is set to the minimum index if a watcher tries to watch at a compacted index. This happens when creating a watcher at a compacted revision or the watcher cannot catch up with the progress of the key-value store. The client should treat the watcher as canceled and should not try to create any watcher with the same start_revision again. | int64 |
| cancel_reason | cancel_reason indicates the reason for canceling the watcher. | string |
| events | | (slice of) mvccpb.Event |
File diff suppressed because it is too large
@@ -2,7 +2,7 @@

## System requirements

The etcd performance benchmarks run etcd on 8 vCPU, 16GB RAM, 50GB SSD GCE instances, but any relatively modern machine with low latency storage and a few gigabytes of memory should suffice for most use cases. Applications with large v2 data stores will require more memory than a large v3 data store since data is kept in anonymous memory instead of memory mapped from a file. than For running etcd on a cloud provider, we suggest at least a medium instance on AWS or a standard-1 instance on GCE.
The etcd performance benchmarks run etcd on 8 vCPU, 16GB RAM, 50GB SSD GCE instances, but any relatively modern machine with low latency storage and a few gigabytes of memory should suffice for most use cases. Applications with large v2 data stores will require more memory than a large v3 data store since data is kept in anonymous memory instead of memory mapped from a file. For running etcd on a cloud provider, see the [Example hardware configuration][example-hardware-configurations] documentation.

## Download the pre-built binary
@@ -18,7 +18,6 @@ To build `etcd` from the `master` branch without a `GOPATH` using the official `
$ git clone https://github.com/coreos/etcd.git
$ cd etcd
$ ./build
$ ./bin/etcd
```

To build a vendored `etcd` from the `master` branch via `go get`:

@@ -28,7 +27,6 @@ To build a vendored `etcd` from the `master` branch via `go get`:
$ echo $GOPATH
/Users/example/go
$ go get github.com/coreos/etcd/cmd/etcd
$ $GOPATH/bin/etcd
```

To build `etcd` from the `master` branch without vendoring (may not build due to upstream conflicts):

@@ -38,20 +36,28 @@ To build `etcd` from the `master` branch without vendoring (may not build due to
$ echo $GOPATH
/Users/example/go
$ go get github.com/coreos/etcd
$ $GOPATH/bin/etcd
```

## Test the installation

Check the etcd binary is built correctly by starting etcd and setting a key.

Start etcd:
### Starting etcd

If etcd is built without using GOPATH, run the following:

```
$ ./bin/etcd
```
If etcd is built using GOPATH, run the following:

Set a key:
```
$ $GOPATH/bin/etcd
```

### Setting a key

Run the following:

```
$ ETCDCTL_API=3 ./bin/etcdctl put foo bar
```
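If the put succeeds, etcdctl prints `OK`. A minimal read-back check with the same `etcdctl` binary (`foo` is the key written above) confirms both the write and the client path:

```sh
$ ETCDCTL_API=3 ./bin/etcdctl get foo
# foo
# bar
```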
@@ -64,4 +70,4 @@ If OK is printed, then etcd is working!

[go]: https://golang.org/doc/install
[build-script]: ../build
[cmd-directory]: ../cmd

[example-hardware-configurations]: op-guide/hardware.md#example-hardware-configurations
@@ -14,6 +14,10 @@

`advertise-urls` specifies the addresses etcd clients or other etcd members should use to contact the etcd server. The advertise addresses must be reachable from the remote machines. Do not advertise addresses like `localhost` or `0.0.0.0` for a production setup since these addresses are unreachable from remote machines.

#### Why doesn't changing `--listen-peer-urls` or `--initial-advertise-peer-urls` update the advertised peer URLs in `etcdctl member list`?

A member's advertised peer URLs come from `--initial-advertise-peer-urls` on initial cluster boot. Changing the listen peer URLs or the initial advertise peers after booting the member won't affect the exported advertise peer URLs since changes must go through quorum to avoid membership configuration split brain. Use `etcdctl member update` to update a member's peer URLs.
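For illustration only, the update flow looks roughly like this; the member ID and URL below are placeholders, and the `--peer-urls` flag spelling is an assumption based on the v3 `etcdctl` help:

```sh
# find the member ID, then point its peer URL at the new address
$ ETCDCTL_API=3 etcdctl member list
$ ETCDCTL_API=3 etcdctl member update 8e9e05c52164694d --peer-urls=http://10.0.1.12:2380
```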
### Deployment

#### System requirements
@@ -78,10 +82,26 @@ On the other hand, if the downed member is removed from cluster membership first

etcd sets `strict-reconfig-check` in order to reject reconfiguration requests that would cause quorum loss. Abandoning quorum is really risky (especially when the cluster is already unhealthy). Although it may be tempting to disable quorum checking if there's quorum loss to add a new member, this could lead to full fledged cluster inconsistency. For many applications, this will make the problem even worse ("disk geometry corruption" being a candidate for most terrifying).

### Why does etcd lose its leader from disk latency spikes?
#### Why does etcd lose its leader from disk latency spikes?

This is intentional; disk latency is part of leader liveness. Suppose the cluster leader takes a minute to fsync a raft log update to disk, but the etcd cluster has a one second election timeout. Even though the leader can process network messages within the election interval (e.g., send heartbeats), it's effectively unavailable because it can't commit any new proposals; it's waiting on the slow disk. If the cluster frequently loses its leader due to disk latencies, try [tuning][tuning] the disk settings or etcd time parameters.
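The time parameters mentioned here are ordinary server flags; a sketch using the common rule of thumb that the election timeout is roughly ten times the heartbeat interval (values are illustrative, both in milliseconds):

```sh
$ etcd --heartbeat-interval=100 --election-timeout=1000 ...
```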
#### What does the etcd warning "request ignored (cluster ID mismatch)" mean?

Every new etcd cluster generates a new cluster ID based on the initial cluster configuration and a user-provided unique `initial-cluster-token` value. By having unique cluster ID's, etcd is protected from cross-cluster interaction which could corrupt the cluster.

Usually this warning happens after tearing down an old cluster, then reusing some of the peer addresses for the new cluster. If any etcd process from the old cluster is still running it will try to contact the new cluster. The new cluster will recognize a cluster ID mismatch, then ignore the request and emit this warning. This warning is often cleared by ensuring peer addresses among distinct clusters are disjoint.

#### What does "mvcc: database space exceeded" mean and how do I fix it?

The [multi-version concurrency control][api-mvcc] data model in etcd keeps an exact history of the keyspace. Without periodically compacting this history (e.g., by setting `--auto-compaction`), etcd will eventually exhaust its storage space. If etcd runs low on storage space, it raises a space quota alarm to protect the cluster from further writes. So long as the alarm is raised, etcd responds to write requests with the error `mvcc: database space exceeded`.

To recover from the low space quota alarm:

1. [Compact][maintenance-compact] etcd's history.
2. [Defragment][maintenance-defragment] every etcd endpoint.
3. [Disarm][maintenance-disarm] the alarm.
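One possible shell sequence for these three steps, assuming default endpoints and a v3 `etcdctl` (the revision extraction is just one way to obtain the current revision):

```sh
# 1. compact away all old revisions
$ rev=$(ETCDCTL_API=3 etcdctl endpoint status --write-out="json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9].*')
$ ETCDCTL_API=3 etcdctl compact $rev
# 2. defragment every etcd endpoint to reclaim the freed space
$ ETCDCTL_API=3 etcdctl defrag
# 3. disarm the space quota alarm
$ ETCDCTL_API=3 etcdctl alarm disarm
```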
### Performance

#### How should I benchmark etcd?
@@ -112,12 +132,6 @@ A slow network can also cause this issue. If network metrics among the etcd mach

If none of the above suggestions clear the warnings, please [open an issue][new_issue] with detailed logging, monitoring, metrics and optionally workload information.

#### What does the etcd warning "request ignored (cluster ID mismatch)" mean?

Every new etcd cluster generates a new cluster ID based on the initial cluster configuration and a user-provided unique `initial-cluster-token` value. By having unique cluster ID's, etcd is protected from cross-cluster interaction which could corrupt the cluster.

Usually this warning happens after tearing down an old cluster, then reusing some of the peer addresses for the new cluster. If any etcd process from the old cluster is still running it will try to contact the new cluster. The new cluster will recognize a cluster ID mismatch, then ignore the request and emit this warning. This warning is often cleared by ensuring peer addresses among distinct clusters are disjoint.

#### What does the etcd warning "snapshotting is taking more than x seconds to finish ..." mean?

etcd sends a snapshot of its complete key-value store to refresh slow followers and for [backups][backup]. Slow snapshot transfer times increase MTTR; if the cluster is ingesting data with high throughput, slow followers may livelock by needing a new snapshot before finishing receiving a snapshot. To catch slow snapshot performance, etcd warns when sending a snapshot takes more than thirty seconds and exceeds the expected transfer time for a 1Gbps connection.

@@ -135,3 +149,7 @@ etcd sends a snapshot of its complete key-value store to refresh slow followers
[runtime reconfiguration]: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/runtime-configuration.md
[benchmark]: https://github.com/coreos/etcd/tree/master/tools/benchmark
[benchmark-result]: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/performance.md
[api-mvcc]: learning/api.md#revisions
[maintenance-compact]: op-guide/maintenance.md#history-compaction
[maintenance-defragment]: op-guide/maintenance.md#defragmentation
[maintenance-disarm]: ../etcdctl/README.md#alarm-disarm
@@ -40,7 +40,7 @@

**Python libraries**

- [kragniz/python-etcd3](https://github.com/kragniz/python-etcd3) - Work in progress client for v3
- [kragniz/python-etcd3](https://github.com/kragniz/python-etcd3) - Client for v3
- [jplana/python-etcd](https://github.com/jplana/python-etcd) - Supports v2
- [russellhaering/txetcd](https://github.com/russellhaering/txetcd) - a Twisted Python library
- [cholcombe973/autodock](https://github.com/cholcombe973/autodock) - A docker deployment automation tool

@@ -50,6 +50,7 @@

**Node libraries**

- [mixer/etcd3](https://github.com/mixer/etcd3) - Supports v3
- [stianeikeland/node-etcd](https://github.com/stianeikeland/node-etcd) - Supports v2 (w Coffeescript)
- [lavagetto/nodejs-etcd](https://github.com/lavagetto/nodejs-etcd) - Supports v2
- [deedubs/node-etcd-config](https://github.com/deedubs/node-etcd-config) - Supports v2
@@ -60,7 +60,7 @@ For avoiding such a situation, the API layer performs *version number validation

After authenticating with `Authenticate()`, a client can create a gRPC connection as it would without auth. In addition to the existing initialization process, the client must associate the token with the newly created connection. `grpc.WithPerRPCCredentials()` provides the functionality for this purpose.

Every authenticated request from the client has a token. The token can be obtained with `grpc.metadata.FromContext()` in the server side. The server can obtain who is issuing the request and when the user was authorized. The information will be filled by the API layer in the header (`etcdserverpb.RequestHeader.Username` and `etcdserverpb.RequestHeader.AuthRevision`) of a raft log entry (`etcdserverpb.InternalRaftRequest`).
Every authenticated request from the client has a token. The token can be obtained with `grpc.metadata.FromIncomingContext()` in the server side. The server can obtain who is issuing the request and when the user was authorized. The information will be filled by the API layer in the header (`etcdserverpb.RequestHeader.Username` and `etcdserverpb.RequestHeader.AuthRevision`) of a raft log entry (`etcdserverpb.InternalRaftRequest`).

### Checking permission in the state machine
@@ -281,7 +281,7 @@ ETCD_DISCOVERY=https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573d
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```

**Each member must have a different name flag specified or else discovery will fail due to duplicated names. `Hostname` or `machine-id` can be a good choice. **
**Each member must have a different name flag specified or else discovery will fail due to duplicated names. `Hostname` or `machine-id` can be a good choice.**
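Purely for illustration, distinct names for three members joining through the discovery URL generated above might look like this (`infra0`/`infra1`/`infra2` are placeholder names; the full per-member flags follow below):

```sh
$ etcd --name infra0 --discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de ...
$ etcd --name infra1 --discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de ...
$ etcd --name infra2 --discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de ...
```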
Now we start etcd with those relevant flags for each member:
@@ -79,14 +79,16 @@ export NODE1=192.168.1.21
Run the latest version of etcd:

```
docker run --net=host \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:latest \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name node1 \
  --initial-advertise-peer-urls http://${NODE1}:2380 --listen-peer-urls http://${NODE1}:2380 \
  --advertise-client-urls http://${NODE1}:2379 --listen-client-urls http://${NODE1}:2379 \
  --initial-cluster node1=http://${NODE1}:2380
docker run \
  -p 2379:2379 \
  -p 2380:2380 \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:latest \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name node1 \
  --initial-advertise-peer-urls http://${NODE1}:2380 --listen-peer-urls http://${NODE1}:2380 \
  --advertise-client-urls http://${NODE1}:2379 --listen-client-urls http://${NODE1}:2379 \
  --initial-cluster node1=http://${NODE1}:2380
```

List the cluster member:
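The listing command itself falls outside this hunk; against the client URL advertised above it would typically be something like (v3 `etcdctl` assumed):

```sh
$ ETCDCTL_API=3 etcdctl --endpoints=http://${NODE1}:2379 member list
```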
@@ -114,41 +116,47 @@ DATA_DIR=/var/lib/etcd
# For node 1
THIS_NAME=${NAME_1}
THIS_IP=${HOST_1}
docker run --net=host \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
docker run \
  -p 2379:2379 \
  -p 2380:2380 \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

# For node 2
THIS_NAME=${NAME_2}
THIS_IP=${HOST_2}
docker run --net=host \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
docker run \
  -p 2379:2379 \
  -p 2380:2380 \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

# For node 3
THIS_NAME=${NAME_3}
THIS_IP=${HOST_3}
docker run --net=host \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
docker run \
  -p 2379:2379 \
  -p 2380:2380 \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
```

To run `etcdctl` using API version 3:

@@ -170,17 +178,19 @@ rkt run \
  --volume etcd-ssl-certs-bundle,kind=host,source=/etc/ssl/certs/ca-certificates.crt \
  --mount volume=etcd-ssl-certs-bundle,target=/etc/ssl/certs/ca-certificates.crt \
  quay.io/coreos/etcd:latest -- --name my-name \
  --initial-advertise-peer-urls http://localhost:2380 --listen-peer-urls http://localhost:2380 \
  --advertise-client-urls http://localhost:2379 --listen-client-urls http://localhost:2379 \
  --discovery https://discovery.etcd.io/c11fbcdc16972e45253491a24fcf45e1
  --initial-advertise-peer-urls http://localhost:2380 --listen-peer-urls http://localhost:2380 \
  --advertise-client-urls http://localhost:2379 --listen-client-urls http://localhost:2379 \
  --discovery https://discovery.etcd.io/c11fbcdc16972e45253491a24fcf45e1
```

```
docker run \
  --volume=/etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt \
  quay.io/coreos/etcd:latest \
  /usr/local/bin/etcd --name my-name \
  --initial-advertise-peer-urls http://localhost:2380 --listen-peer-urls http://localhost:2380 \
  --advertise-client-urls http://localhost:2379 --listen-client-urls http://localhost:2379 \
  --discovery https://discovery.etcd.io/86a9ff6c8cb8b4c4544c1a2f88f8b801
  -p 2379:2379 \
  -p 2380:2380 \
  --volume=/etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt \
  quay.io/coreos/etcd:latest \
  /usr/local/bin/etcd --name my-name \
  --initial-advertise-peer-urls http://localhost:2380 --listen-peer-urls http://localhost:2380 \
  --advertise-client-urls http://localhost:2379 --listen-client-urls http://localhost:2379 \
  --discovery https://discovery.etcd.io/86a9ff6c8cb8b4c4544c1a2f88f8b801
```
@@ -76,7 +76,7 @@ LABELS {
}
ANNOTATIONS {
  summary = "slow gRPC requests",
  description = "on etcd instance {{ $labels.instance }} gRPC requests to {{ $label.grpc_method }} are slow",
  description = "on etcd instance {{ $labels.instance }} gRPC requests to {{ $labels.grpc_method }} are slow",
}

# HTTP requests alerts
@@ -117,7 +117,7 @@ LABELS {
}
ANNOTATIONS {
  summary = "slow HTTP requests",
  description = "on etcd instance {{ $labels.instance }} HTTP requests to {{ $label.method }} are slow",
  description = "on etcd instance {{ $labels.instance }} HTTP requests to {{ $labels.method }} are slow",
}

# file descriptor alerts
@@ -161,7 +161,7 @@ LABELS {
}
ANNOTATIONS {
  summary = "etcd member communication is slow",
  description = "etcd instance {{ $labels.instance }} member communication with {{ $label.To }} is slow",
  description = "etcd instance {{ $labels.instance }} member communication with {{ $labels.To }} is slow",
}

# etcd proposal alerts
@@ -10,8 +10,7 @@ The gateway supports multiple etcd server endpoints and works on a simple round-

Every application that accesses etcd must first have the address of an etcd cluster client endpoint. If multiple applications on the same server access the same etcd cluster, every application still needs to know the advertised client endpoints of the etcd cluster. If the etcd cluster is reconfigured to have different endpoints, every application may also need to update its endpoint list. This wide-scale reconfiguration is both tedious and error prone.

etcd gateway solves this problem by serving as a stable local endpoint. A typical etcd gateway configuration has
each machine running a gateway listening on a local address and every etcd application connecting to its local gateway. The upshot is only the gateway needs to update its endpoints instead of updating each and every application.
etcd gateway solves this problem by serving as a stable local endpoint. A typical etcd gateway configuration has each machine running a gateway listening on a local address and every etcd application connecting to its local gateway. The upshot is only the gateway needs to update its endpoints instead of updating each and every application.

In summary, to automatically propagate cluster endpoint changes, the etcd gateway runs on every machine serving multiple applications accessing the same etcd cluster.

@@ -64,3 +63,43 @@ Start the etcd gateway to fetch the endpoints from the DNS SRV entries with the
$ etcd gateway --discovery-srv=example.com
2016-08-16 11:21:18.867350 I | tcpproxy: ready to proxy client requests to [...]
```

## Configuration flags

### etcd cluster

#### --endpoints

* Comma-separated list of etcd server targets for forwarding client connections.
* Default: `127.0.0.1:2379`
* Invalid example: `https://127.0.0.1:2379` (gateway does not terminate TLS)

#### --discovery-srv

* DNS domain used to bootstrap cluster endpoints through SRV records.
* Default: (not set)

### Network

#### --listen-addr

* Interface and port to bind for accepting client requests.
* Default: `127.0.0.1:23790`

#### --retry-delay

* Duration of delay before retrying to connect to failed endpoints.
* Default: 1m0s
* Invalid example: "123" (expects time unit in format)

### Security

#### --insecure-discovery

* Accept SRV records that are insecure or susceptible to man-in-the-middle attacks.
* Default: `false`

#### --trusted-ca-file

* Path to the client TLS CA file for the etcd cluster. Used to authenticate endpoints.
* Default: (not set)
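Combining a few of these flags, a local gateway in front of a three-member cluster could be started roughly as follows; the endpoint addresses are placeholders and the `gateway start` subcommand is an assumption:

```sh
$ etcd gateway start \
  --endpoints=10.0.1.10:2379,10.0.1.11:2379,10.0.1.12:2379 \
  --listen-addr=127.0.0.1:23790
```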
@@ -114,18 +114,21 @@
"span": 5,
"stack": false,
"steppedLine": false,
"targets": [{
"expr": "sum(rate({grpc_type=\"unary\",grpc_code!=\"OK\"} [1m]))",
"targets": [
{
"expr": "sum(rate(grpc_server_started_total{grpc_type=\"unary\"}[5m]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{instance}} RPC Rate",
"legendFormat": "RPC Rate",
"metric": "grpc_server_started_total",
"refId": "A",
"step": 2
},
{
"expr": "sum(rate(grpc_server_started_total{grpc_type=\"unary\",grpc_code!=\"OK\"} [1m])) - sum(rate(grpc_server_handled_total{grpc_type=\"unary\"} [1m]))",
"expr": "sum(rate(grpc_server_handled_total{grpc_type=\"unary\",grpc_code!=\"OK\"}[5m]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{instance}} RPC Failed Rate",
"legendFormat": "RPC Failed Rate",
"metric": "grpc_server_handled_total",
"refId": "B",
"step": 2
@@ -197,7 +200,7 @@
"stack": true,
"steppedLine": false,
"targets": [{
"expr": "sum(grpc_server_started_total {grpc_service=\"etcdserverpb.Watch\",grpc_type=\"bidi_stream\",grpc_code!=\"OK\"}) - sum(grpc_server_handled_total {grpc_service=\"etcdserverpb.Watch\",grpc_type=\"bidi_stream\"})",
"expr": "sum(grpc_server_started_total{grpc_service=\"etcdserverpb.Watch\",grpc_type=\"bidi_stream\"}) - sum(grpc_server_handled_total{grpc_service=\"etcdserverpb.Watch\",grpc_type=\"bidi_stream\"})",
"intervalFactor": 2,
"legendFormat": "Watch Streams",
"metric": "grpc_server_handled_total",
@@ -205,7 +208,7 @@
"step": 4
},
{
"expr": "sum(grpc_server_started_total {grpc_service=\"etcdserverpb.Lease\",grpc_type=\"bidi_stream\"}) - sum(grpc_server_handled_total {grpc_service=\"etcdserverpb.Lease\",grpc_type=\"bidi_stream\"})",
"expr": "sum(grpc_server_started_total{grpc_service=\"etcdserverpb.Lease\",grpc_type=\"bidi_stream\"}) - sum(grpc_server_handled_total{grpc_service=\"etcdserverpb.Lease\",grpc_type=\"bidi_stream\"})",
"intervalFactor": 2,
"legendFormat": "Lease Streams",
"metric": "grpc_server_handled_total",
@@ -361,7 +364,7 @@
"stack": false,
"steppedLine": true,
"targets": [{
"expr": "histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket [5m])) by (instance, le))",
"expr": "histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])) by (instance, le))",
"hide": false,
"intervalFactor": 2,
"legendFormat": "{{instance}} WAL fsync",
@@ -370,7 +373,7 @@
"step": 4
},
{
"expr": "histogram_quantile(0.99, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket [5m])) by (instance, le))",
"expr": "histogram_quantile(0.99, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket[5m])) by (instance, le))",
"intervalFactor": 2,
"legendFormat": "{{instance}} DB fsync",
"metric": "etcd_disk_backend_commit_duration_seconds_bucket",
@@ -522,7 +525,7 @@
"stack": true,
"steppedLine": false,
"targets": [{
"expr": "rate(etcd_network_client_grpc_received_bytes_total [1m])",
"expr": "rate(etcd_network_client_grpc_received_bytes_total[5m])",
"intervalFactor": 2,
"legendFormat": "{{instance}} Client Traffic In",
"metric": "etcd_network_client_grpc_received_bytes_total",
@@ -595,7 +598,7 @@
"stack": true,
"steppedLine": false,
"targets": [{
"expr": "rate(etcd_network_client_grpc_sent_bytes_total [1m])",
"expr": "rate(etcd_network_client_grpc_sent_bytes_total[5m])",
"intervalFactor": 2,
"legendFormat": "{{instance}} Client Traffic Out",
"metric": "etcd_network_client_grpc_sent_bytes_total",
@@ -668,7 +671,7 @@
"stack": false,
"steppedLine": false,
"targets": [{
"expr": "sum(rate(etcd_network_peer_received_bytes_total [1m])) by (instance)",
"expr": "sum(rate(etcd_network_peer_received_bytes_total[5m])) by (instance)",
"intervalFactor": 2,
"legendFormat": "{{instance}} Peer Traffic In",
"metric": "etcd_network_peer_received_bytes_total",
@@ -742,7 +745,7 @@
"stack": false,
"steppedLine": false,
"targets": [{
"expr": "sum(rate(etcd_network_peer_sent_bytes_total [1m])) by (instance)",
"expr": "sum(rate(etcd_network_peer_sent_bytes_total[5m])) by (instance)",
"hide": false,
"interval": "",
"intervalFactor": 2,
@@ -822,7 +825,7 @@
"stack": false,
"steppedLine": false,
"targets": [{
"expr": "sum(rate(etcd_server_proposals_failed_total [1m]))",
"expr": "sum(rate(etcd_server_proposals_failed_total[5m]))",
"intervalFactor": 2,
"legendFormat": "Proposal Failure Rate",
"metric": "etcd_server_proposals_failed_total",
@@ -838,7 +841,7 @@
"step": 2
},
{
"expr": "sum(rate(etcd_server_proposals_committed_total [1m]))",
"expr": "sum(rate(etcd_server_proposals_committed_total[5m]))",
"intervalFactor": 2,
"legendFormat": "Proposal Commit Rate",
"metric": "etcd_server_proposals_committed_total",
@@ -846,7 +849,7 @@
"step": 2
},
{
"expr": "sum(rate(etcd_server_proposals_applied_total [1m]))",
"expr": "sum(rate(etcd_server_proposals_applied_total[5m]))",
"intervalFactor": 2,
"legendFormat": "Proposal Apply Rate",
"refId": "D",
@@ -922,9 +925,9 @@
"stack": false,
"steppedLine": false,
"targets": [{
"expr": "etcd_server_leader_changes_seen_total",
"expr": "changes(etcd_server_leader_changes_seen_total[1d])",
"intervalFactor": 2,
"legendFormat": "{{instance}} Leader Change Seen",
"legendFormat": "{{instance}} Total Leader Elections Per Day",
"metric": "etcd_server_leader_changes_seen_total",
"refId": "A",
"step": 2
@@ -932,7 +935,7 @@
"thresholds": [],
"timeFrom": null,
"timeShift": null,
"title": "Rate Leader Elections",
"title": "Total Leader Elections Per Day",
"tooltip": {
"msResolution": false,
"shared": true,
@@ -1009,4 +1012,4 @@
"version": 215,
"links": [],
"gnetId": null
}
}
@@ -17,58 +17,54 @@ For some baseline performance numbers, we consider a three member etcd cluster w

- Google Cloud Compute Engine
- 3 machines of 8 vCPUs + 16GB Memory + 50GB SSD
- 1 machine(client) of 16 vCPUs + 30GB Memory + 50GB SSD
- Ubuntu 15.10
- etcd v3 master branch (commit SHA d8f325d), Go 1.6.2
- Ubuntu 17.04
- etcd 3.2.0, go 1.8.3

With this configuration, etcd can approximately write:

| Number of keys | Key size in bytes | Value size in bytes | Number of connections | Number of clients | Target etcd server | Average write QPS | Average latency per request | Memory |
|----------------|-------------------|---------------------|-----------------------|-------------------|--------------------|-------------------|-----------------------------|--------|
| 10,000 | 8 | 256 | 1 | 1 | leader only | 525 | 2ms | 35 MB |
| 100,000 | 8 | 256 | 100 | 1000 | leader only | 25,000 | 30ms | 35 MB |
| 100,000 | 8 | 256 | 100 | 1000 | all members | 33,000 | 25ms | 35 MB |
| Number of keys | Key size in bytes | Value size in bytes | Number of connections | Number of clients | Target etcd server | Average write QPS | Average latency per request | Average server RSS |
|---------------:|------------------:|--------------------:|----------------------:|------------------:|--------------------|------------------:|----------------------------:|-------------------:|
| 10,000 | 8 | 256 | 1 | 1 | leader only | 583 | 1.6ms | 48 MB |
| 100,000 | 8 | 256 | 100 | 1000 | leader only | 44,341 | 22ms | 124MB |
| 100,000 | 8 | 256 | 100 | 1000 | all members | 50,104 | 20ms | 126MB |

Sample commands are:

```
# assuming IP_1 is leader, write requests to the leader
benchmark --endpoints={IP_1} --conns=1 --clients=1 \
```sh
# write to leader
benchmark --endpoints=${HOST_1} --target-leader --conns=1 --clients=1 \
put --key-size=8 --sequential-keys --total=10000 --val-size=256
benchmark --endpoints={IP_1} --conns=100 --clients=1000 \
benchmark --endpoints=${HOST_1} --target-leader --conns=100 --clients=1000 \
put --key-size=8 --sequential-keys --total=100000 --val-size=256

# write to all members
benchmark --endpoints={IP_1},{IP_2},{IP_3} --conns=100 --clients=1000 \
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=100 --clients=1000 \
put --key-size=8 --sequential-keys --total=100000 --val-size=256
```
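The `benchmark` tool used in these commands ships with the etcd sources under `tools/benchmark`; one way to build it, assuming a conventional GOPATH layout, is:

```sh
$ go get -v github.com/coreos/etcd/tools/benchmark
$ $GOPATH/bin/benchmark --help
```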
Linearizable read requests go through a quorum of cluster members for consensus to fetch the most recent data. Serializable read requests are cheaper than linearizable reads since they are served by any single etcd member, instead of a quorum of members, in exchange for possibly serving stale data. etcd can read:

| Number of requests | Key size in bytes | Value size in bytes | Number of connections | Number of clients | Consistency | Average latency per request | Average read QPS |
|--------------------|-------------------|---------------------|-----------------------|-------------------|-------------|-----------------------------|------------------|
| 10,000 | 8 | 256 | 1 | 1 | Linearizable | 2ms | 560 |
| 10,000 | 8 | 256 | 1 | 1 | Serializable | 0.4ms | 7,500 |
| 100,000 | 8 | 256 | 100 | 1000 | Linearizable | 15ms | 43,000 |
| 100,000 | 8 | 256 | 100 | 1000 | Serializable | 9ms | 93,000 |
| Number of requests | Key size in bytes | Value size in bytes | Number of connections | Number of clients | Consistency | Average read QPS | Average latency per request |
|-------------------:|------------------:|--------------------:|----------------------:|------------------:|-------------|-----------------:|----------------------------:|
| 10,000 | 8 | 256 | 1 | 1 | Linearizable | 1,353 | 0.7ms |
| 10,000 | 8 | 256 | 1 | 1 | Serializable | 2,909 | 0.3ms |
| 100,000 | 8 | 256 | 100 | 1000 | Linearizable | 141,578 | 5.5ms |
| 100,000 | 8 | 256 | 100 | 1000 | Serializable | 185,758 | 2.2ms |

Sample commands are:

```
# Linearizable read requests
benchmark --endpoints={IP_1},{IP_2},{IP_3} --conns=1 --clients=1 \
```sh
# Single connection read requests
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=1 --clients=1 \
range YOUR_KEY --consistency=l --total=10000
benchmark --endpoints={IP_1},{IP_2},{IP_3} --conns=100 --clients=1000 \
range YOUR_KEY --consistency=l --total=100000
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=1 --clients=1 \
range YOUR_KEY --consistency=s --total=10000

# Serializable read requests for each member and sum up the numbers
for endpoint in {IP_1} {IP_2} {IP_3}; do
benchmark --endpoints=$endpoint --conns=1 --clients=1 \
range YOUR_KEY --consistency=s --total=10000
done
for endpoint in {IP_1} {IP_2} {IP_3}; do
benchmark --endpoints=$endpoint --conns=100 --clients=1000 \
range YOUR_KEY --consistency=s --total=100000
done
# Many concurrent read requests
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=100 --clients=1000 \
range YOUR_KEY --consistency=l --total=100000
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=100 --clients=1000 \
range YOUR_KEY --consistency=s --total=100000
```

We encourage running the benchmark test when setting up an etcd cluster for the first time in a new environment to ensure the cluster achieves adequate performance; cluster latency and throughput can be sensitive to minor environment differences.
We encourage running the benchmark test when setting up an etcd cluster for the first time in a new environment to ensure the cluster achieves adequate performance; cluster latency and throughput can be sensitive to minor environment differences.
@@ -16,7 +16,7 @@ etcd takes several certificate related configuration options, either through com

`--key-file=<path>`: Key for the certificate. Must be unencrypted.

`--client-cert-auth`: When this is set etcd will check all incoming HTTPS requests for a client certificate signed by the trusted CA, requests that don't supply a valid client certificate will fail.
`--client-cert-auth`: When this is set etcd will check all incoming HTTPS requests for a client certificate signed by the trusted CA, requests that don't supply a valid client certificate will fail. If [authentication][auth] is enabled, the certificate provides credentials for the user name given by the Common Name field.

`--trusted-ca-file=<path>`: Trusted certificate authority.
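As a sketch of how these options combine, a server requiring client certificates on its client listener might be started like this (all file paths and URLs are placeholders):

```sh
$ etcd --name infra0 \
  --cert-file=/path/to/server.crt --key-file=/path/to/server.key \
  --client-cert-auth --trusted-ca-file=/path/to/ca.crt \
  --advertise-client-urls=https://127.0.0.1:2379 --listen-client-urls=https://127.0.0.1:2379
```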
@@ -222,3 +222,4 @@ The certificate needs to be signed for the member's FQDN in its Subject Name, us
[tls-setup]: ../../hack/tls-setup
[tls-guide]: https://github.com/coreos/docs/blob/master/os/generate-self-signed-certificates.md
[alt-name]: http://wiki.cacert.org/FAQ/subjectAltName
[auth]: authentication.md
@@ -10,7 +10,7 @@ Before [starting an upgrade](#upgrade-procedure), read through the rest of this

#### Upgrade requirements

To upgrade an existing etcd deployment to 3.0, the running cluster must be 2.3 or greater. If it's before 2.3, please upgrade to [2.3](https://github.com/coreos/etcd/releases/tag/v2.3.0) before upgrading to 3.0.
To upgrade an existing etcd deployment to 3.0, the running cluster must be 2.3 or greater. If it's before 2.3, please upgrade to [2.3](https://github.com/coreos/etcd/releases/tag/v2.3.8) before upgrading to 3.0.

Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl cluster-health` command before proceeding.

@@ -52,7 +52,7 @@ member 8211f1d0f64f3269 is healthy: got healthy result from http://localhost:123
cluster is healthy

$ curl http://localhost:2379/version
{"etcdserver":"2.3.x","etcdcluster":"2.3.0"}
{"etcdserver":"2.3.x","etcdcluster":"2.3.8"}
```

#### 2. Stop the existing etcd process
@@ -10,7 +10,7 @@ Before [starting an upgrade](#upgrade-procedure), read through the rest of this

#### Upgrade requirements

To upgrade an existing etcd deployment to 3.1, the running cluster must be 3.0 or greater. If it's before 3.0, please upgrade to [3.0](https://github.com/coreos/etcd/releases/tag/v3.0.16) before upgrading to 3.1.
To upgrade an existing etcd deployment to 3.1, the running cluster must be 3.0 or greater. If it's before 3.0, please [upgrade to 3.0](upgrade_3_0.md) before upgrading to 3.1.

Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.
@@ -30,11 +30,27 @@ resp.TTL == -1
err == nil
```

`clientv3.NewFromConfigFile` is moved to `yaml.NewConfig`.

Before

```go
import "github.com/coreos/etcd/clientv3"
clientv3.NewFromConfigFile
```

After

```go
import clientv3yaml "github.com/coreos/etcd/clientv3/yaml"
clientv3yaml.NewConfig
```

### Server upgrade checklists

#### Upgrade requirements

To upgrade an existing etcd deployment to 3.2, the running cluster must be 3.1 or greater. If it's before 3.1, please upgrade to [3.1](https://github.com/coreos/etcd/releases/tag/v3.1.7) before upgrading to 3.2.
To upgrade an existing etcd deployment to 3.2, the running cluster must be 3.1 or greater. If it's before 3.1, please [upgrade to 3.1](upgrade_3_1.md) before upgrading to 3.2.

Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.
@@ -62,7 +62,7 @@ ALERT HTTPRequestsSlow
}
ANNOTATIONS {
  summary = "slow HTTP requests",
  description = "on etcd instance {{ $labels.instance }} HTTP requests to {{ $label.method }} are slow",
  description = "on etcd instance {{ $labels.instance }} HTTP requests to {{ $labels.method }} are slow",
}

### File descriptor alerts ###
@@ -16,7 +16,7 @@ etcd takes several certificate related configuration options, either through com

`--key-file=<path>`: Key for the certificate. Must be unencrypted.

`--client-cert-auth`: When this is set etcd will check all incoming HTTPS requests for a client certificate signed by the trusted CA, requests that don't supply a valid client certificate will fail.
`--client-cert-auth`: When this is set etcd will check all incoming HTTPS requests for a client certificate signed by the trusted CA, requests that don't supply a valid client certificate will fail. If [authentication][auth] is enabled, the certificate provides credentials for the user name given by the Common Name field.

`--trusted-ca-file=<path>`: Trusted certificate authority.

@@ -191,3 +191,4 @@ If you need your certificate to be signed for your member's FQDN in its Subject
[tls-setup]: ../../hack/tls-setup
[tls-guide]: https://github.com/coreos/docs/blob/master/os/generate-self-signed-certificates.md
[alt-name]: http://wiki.cacert.org/FAQ/subjectAltName
[auth]: authentication.md
57 NEWS
@@ -1,3 +1,60 @@
etcd v3.2.0 (2017-06-09)
- improved backend read concurrency
- embedded etcd
  - Etcd.Peers field is now []*peerListener
- RPCs
  - add Election, Lock service
- native client etcdserver/api/v3client
  - client "embedded" in the server
- v3 client
  - LeaseTimeToLive returns TTL=-1 resp on lease not found
  - clientv3.NewFromConfigFile is moved to clientv3/yaml.NewConfig
  - STM prefetching
  - add namespace feature
  - concurrency package's elections updated to match RPC interfaces
  - let client dial endpoints not in the balancer
  - add ErrOldCluster with server version checking
  - translate WithPrefix() into WithFromKey() for empty key
- v3 etcdctl
  - add check perf command
  - add --from-key flag to role grant-permission command
  - lock command takes an optional command to execute
- etcd flags
  - add --enable-v2 flag to configure v2 backend (enabled by default)
  - add --auth-token flag
- gRPC proxy
  - proxy endpoint discovery
  - namespaces
  - coalesce lease requests
- gateway
  - support DNS SRV priority
- auth
  - support Watch API
  - JWT tokens
- logging, monitoring
  - server warns large snapshot operations
  - add 'etcd_debugging_server_lease_expired_total' metrics
- security
  - deny incoming peer certs with wrong IP SAN
  - resolve TLS DNSNames when SAN checking
  - reload TLS certificates on every client connection
- release
  - annotate acbuild with supports-systemd-notify
  - add nsswitch.conf to Docker container image
  - add ppc64le, arm64(experimental) builds
- Go 1.8.3
  - gRPC v1.2.1
  - grpc-gateway to v1.2.0
- v2
  - allow snapshot over 512MB

etcd v3.1.9 (2017-06-09)
- allow v2 snapshot over 512MB

etcd v3.1.8 (2017-05-19)

etcd v3.1.7 (2017-04-28)

etcd v3.1.6 (2017-04-19)
- remove auth check in Status API
- fill in Auth API response header
@@ -97,7 +97,9 @@ func prepareOpts(opts map[string]string) (jwtSignMethod, jwtPubKeyPath, jwtPrivK
			return "", "", "", ErrInvalidAuthOpts
		}
	}

	if len(jwtSignMethod) == 0 {
		return "", "", "", ErrInvalidAuthOpts
	}
	return jwtSignMethod, jwtPubKeyPath, jwtPrivKeyPath, nil
}
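The option keys parsed here (`sign-method`, `pub-key`, `priv-key`) correspond to the server's `--auth-token` flag; a hedged example of enabling the JWT token provider (the key file names are placeholders):

```sh
$ etcd --auth-token jwt,pub-key=app.rsa.pub,priv-key=app.rsa,sign-method=RS256
```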
94 auth/jwt_test.go (new file)
@@ -0,0 +1,94 @@
// Copyright 2017 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package auth

import (
	"context"
	"testing"
)

const (
	jwtPubKey  = "../integration/fixtures/server.crt"
	jwtPrivKey = "../integration/fixtures/server.key.insecure"
)

func TestJWTInfo(t *testing.T) {
	opts := map[string]string{
		"pub-key":     jwtPubKey,
		"priv-key":    jwtPrivKey,
		"sign-method": "RS256",
	}
	jwt, err := newTokenProviderJWT(opts)
	if err != nil {
		t.Fatal(err)
	}
	token, aerr := jwt.assign(context.TODO(), "abc", 123)
	if aerr != nil {
		t.Fatal(err)
	}
	ai, ok := jwt.info(context.TODO(), token, 123)
	if !ok {
		t.Fatalf("failed to authenticate with token %s", token)
	}
	if ai.Revision != 123 {
		t.Fatalf("expected revision 123, got %d", ai.Revision)
	}
	ai, ok = jwt.info(context.TODO(), "aaa", 120)
	if ok || ai != nil {
		t.Fatalf("expected aaa to fail to authenticate, got %+v", ai)
	}
}

func TestJWTBad(t *testing.T) {
	opts := map[string]string{
		"pub-key":     jwtPubKey,
		"priv-key":    jwtPrivKey,
		"sign-method": "RS256",
	}
	// private key instead of public key
	opts["pub-key"] = jwtPrivKey
	if _, err := newTokenProviderJWT(opts); err == nil {
		t.Fatalf("expected failure on missing public key")
	}
	opts["pub-key"] = jwtPubKey

	// public key instead of private key
	opts["priv-key"] = jwtPubKey
	if _, err := newTokenProviderJWT(opts); err == nil {
		t.Fatalf("expected failure on missing public key")
	}
	opts["priv-key"] = jwtPrivKey

	// missing signing option
	delete(opts, "sign-method")
	if _, err := newTokenProviderJWT(opts); err == nil {
		t.Fatal("expected error on missing option")
	}
	opts["sign-method"] = "RS256"

	// bad file for pubkey
	opts["pub-key"] = "whatever"
	if _, err := newTokenProviderJWT(opts); err == nil {
		t.Fatalf("expected failure on missing public key")
	}
	opts["pub-key"] = jwtPubKey

	// bad file for private key
	opts["priv-key"] = "whatever"
	if _, err := newTokenProviderJWT(opts); err == nil {
		t.Fatalf("expeceted failure on missing private key")
	}
	opts["priv-key"] = jwtPrivKey
}
@@ -118,6 +118,8 @@ func (t *tokenSimple) genTokenPrefix() (string, error) {

func (t *tokenSimple) assignSimpleTokenToUser(username, token string) {
	t.simpleTokensMu.Lock()
	defer t.simpleTokensMu.Unlock()

	_, ok := t.simpleTokens[token]
	if ok {
		plog.Panicf("token %s is alredy used", token)
@@ -125,7 +127,6 @@ func (t *tokenSimple) assignSimpleTokenToUser(username, token string) {

	t.simpleTokens[token] = username
	t.simpleTokenKeeper.addSimpleToken(token)
	t.simpleTokensMu.Unlock()
}

func (t *tokenSimple) invalidateUser(username string) {
@ -162,6 +162,9 @@ type AuthStore interface {
|
||||
|
||||
// AuthInfoFromTLS gets AuthInfo from TLS info of gRPC's context
|
||||
AuthInfoFromTLS(ctx context.Context) *AuthInfo
|
||||
|
||||
// WithRoot generates and installs a token that can be used as a root credential
|
||||
WithRoot(ctx context.Context) context.Context
|
||||
}
|
||||
|
||||
type TokenProvider interface {
|
||||
@ -992,13 +995,17 @@ func (as *authStore) AuthInfoFromTLS(ctx context.Context) *AuthInfo {
|
||||
}
|
||||
|
||||
func (as *authStore) AuthInfoFromCtx(ctx context.Context) (*AuthInfo, error) {
|
||||
md, ok := metadata.FromContext(ctx)
|
||||
md, ok := metadata.FromIncomingContext(ctx)
|
||||
if !ok {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
ts, tok := md["token"]
|
||||
if !tok {
|
||||
//TODO(mitake|hexfusion) review unifying key names
|
||||
ts, ok := md["token"]
|
||||
if !ok {
|
||||
ts, ok = md["authorization"]
|
||||
}
|
||||
if !ok {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
@ -1008,6 +1015,7 @@ func (as *authStore) AuthInfoFromCtx(ctx context.Context) (*AuthInfo, error) {
|
||||
plog.Warningf("invalid auth token: %s", token)
|
||||
return nil, ErrInvalidAuthToken
|
||||
}
|
||||
|
||||
return authInfo, nil
|
||||
}
|
||||
|
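AuthInfoFromCtx now reads the token from the incoming gRPC metadata, accepting either the "token" or the "authorization" key. A hedged sketch of the matching client-side step, attaching a token to outgoing metadata so the lookup above can find it (the token value is a placeholder):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/metadata"
)

// withAuthToken returns a context whose outgoing gRPC metadata carries the
// given etcd auth token under the "token" key. Purely illustrative.
func withAuthToken(ctx context.Context, token string) context.Context {
	md := metadata.Pairs("token", token)
	return metadata.NewOutgoingContext(ctx, md)
}

func main() {
	ctx := withAuthToken(context.Background(), "example-token")
	if md, ok := metadata.FromOutgoingContext(ctx); ok {
		fmt.Println(md["token"]) // ["example-token"]
	}
}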
||||
@ -1057,3 +1065,35 @@ func NewTokenProvider(tokenOpts string, indexWaiter func(uint64) <-chan struct{}
|
||||
return nil, ErrInvalidAuthOpts
|
||||
}
|
||||
}
|
||||
|
||||
func (as *authStore) WithRoot(ctx context.Context) context.Context {
|
||||
if !as.isAuthEnabled() {
|
||||
return ctx
|
||||
}
|
||||
|
||||
var ctxForAssign context.Context
|
||||
if ts, ok := as.tokenProvider.(*tokenSimple); ok && ts != nil {
|
||||
ctx1 := context.WithValue(ctx, "index", uint64(0))
|
||||
prefix, err := ts.genTokenPrefix()
|
||||
if err != nil {
|
||||
plog.Errorf("failed to generate prefix of internally used token")
|
||||
return ctx
|
||||
}
|
||||
ctxForAssign = context.WithValue(ctx1, "simpleToken", prefix)
|
||||
} else {
|
||||
ctxForAssign = ctx
|
||||
}
|
||||
|
||||
token, err := as.tokenProvider.assign(ctxForAssign, "root", as.Revision())
|
||||
if err != nil {
|
||||
// this must not happen
|
||||
plog.Errorf("failed to assign token for lease revoking: %s", err)
|
||||
return ctx
|
||||
}
|
||||
|
||||
mdMap := map[string]string{
|
||||
"token": token,
|
||||
}
|
||||
tokenMD := metadata.New(mdMap)
|
||||
return metadata.NewContext(ctx, tokenMD)
|
||||
}
|
||||
|
@ -453,7 +453,8 @@ func TestAuthInfoFromCtx(t *testing.T) {
|
||||
t.Errorf("expected (nil, nil), got (%v, %v)", ai, err)
|
||||
}
|
||||
|
||||
ctx = metadata.NewContext(context.Background(), metadata.New(map[string]string{"tokens": "dummy"}))
|
||||
// as if it came from RPC
|
||||
ctx = metadata.NewIncomingContext(context.Background(), metadata.New(map[string]string{"tokens": "dummy"}))
|
||||
ai, err = as.AuthInfoFromCtx(ctx)
|
||||
if err != nil && ai != nil {
|
||||
t.Errorf("expected (nil, nil), got (%v, %v)", ai, err)
|
||||
@ -465,19 +466,19 @@ func TestAuthInfoFromCtx(t *testing.T) {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
ctx = metadata.NewContext(context.Background(), metadata.New(map[string]string{"token": "Invalid Token"}))
|
||||
ctx = metadata.NewIncomingContext(context.Background(), metadata.New(map[string]string{"token": "Invalid Token"}))
|
||||
_, err = as.AuthInfoFromCtx(ctx)
|
||||
if err != ErrInvalidAuthToken {
|
||||
t.Errorf("expected %v, got %v", ErrInvalidAuthToken, err)
|
||||
}
|
||||
|
||||
ctx = metadata.NewContext(context.Background(), metadata.New(map[string]string{"token": "Invalid.Token"}))
|
||||
ctx = metadata.NewIncomingContext(context.Background(), metadata.New(map[string]string{"token": "Invalid.Token"}))
|
||||
_, err = as.AuthInfoFromCtx(ctx)
|
||||
if err != ErrInvalidAuthToken {
|
||||
t.Errorf("expected %v, got %v", ErrInvalidAuthToken, err)
|
||||
}
|
||||
|
||||
ctx = metadata.NewContext(context.Background(), metadata.New(map[string]string{"token": resp.Token}))
|
||||
ctx = metadata.NewIncomingContext(context.Background(), metadata.New(map[string]string{"token": resp.Token}))
|
||||
ai, err = as.AuthInfoFromCtx(ctx)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
@ -521,7 +522,7 @@ func TestAuthInfoFromCtxRace(t *testing.T) {
|
||||
donec := make(chan struct{})
|
||||
go func() {
|
||||
defer close(donec)
|
||||
ctx := metadata.NewContext(context.Background(), metadata.New(map[string]string{"token": "test"}))
|
||||
ctx := metadata.NewIncomingContext(context.Background(), metadata.New(map[string]string{"token": "test"}))
|
||||
as.AuthInfoFromCtx(ctx)
|
||||
}()
|
||||
as.UserAdd(&pb.AuthUserAddRequest{Name: "test"})
|
||||
|
@ -1,212 +1,388 @@
|
||||
[
|
||||
{
|
||||
"project": "bitbucket.org/ww/goautoneg",
|
||||
"license": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/beorn7/perks/quantile",
|
||||
"license": "MIT License",
|
||||
"confidence": 0.989
|
||||
"licenses": [
|
||||
{
|
||||
"type": "MIT License",
|
||||
"confidence": 0.9891304347826086
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/bgentry/speakeasy",
|
||||
"license": "MIT License",
|
||||
"confidence": 0.944
|
||||
"licenses": [
|
||||
{
|
||||
"type": "MIT License",
|
||||
"confidence": 0.9441624365482234
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/boltdb/bolt",
|
||||
"license": "MIT License",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "MIT License",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/cockroachdb/cmux",
|
||||
"license": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/coreos/etcd",
|
||||
"license": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/coreos/go-semver/semver",
|
||||
"license": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/coreos/go-systemd",
|
||||
"license": "Apache License 2.0",
|
||||
"confidence": 0.997
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 0.9966703662597114
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/coreos/pkg",
|
||||
"license": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/cpuguy83/go-md2man/md2man",
|
||||
"license": "MIT License",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "MIT License",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/dgrijalva/jwt-go",
|
||||
"license": "MIT License",
|
||||
"confidence": 0.989
|
||||
"licenses": [
|
||||
{
|
||||
"type": "MIT License",
|
||||
"confidence": 0.9891304347826086
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/dustin/go-humanize",
|
||||
"license": "MIT License",
|
||||
"confidence": 0.969
|
||||
"licenses": [
|
||||
{
|
||||
"type": "MIT License",
|
||||
"confidence": 0.96875
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/ghodss/yaml",
|
||||
"license": "MIT License and BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "MIT License and BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/gogo/protobuf/proto",
|
||||
"license": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.909
|
||||
"licenses": [
|
||||
{
|
||||
"type": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.9090909090909091
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/golang/groupcache/lru",
|
||||
"license": "Apache License 2.0",
|
||||
"confidence": 0.997
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 0.9966703662597114
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/golang/protobuf",
|
||||
"license": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.92
|
||||
"licenses": [
|
||||
{
|
||||
"type": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.92
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/google/btree",
|
||||
"license": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/grpc-ecosystem/go-grpc-prometheus",
|
||||
"license": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/grpc-ecosystem/grpc-gateway",
|
||||
"license": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.979
|
||||
"licenses": [
|
||||
{
|
||||
"type": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.979253112033195
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/inconshreveable/mousetrap",
|
||||
"license": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "MIT License and BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 1
|
||||
},
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/jonboulle/clockwork",
|
||||
"license": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/mattn/go-runewidth",
|
||||
"license": "MIT License",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "MIT License",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/matttproud/golang_protobuf_extensions/pbutil",
|
||||
"license": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/olekukonko/tablewriter",
|
||||
"license": "MIT License",
|
||||
"confidence": 0.989
|
||||
"licenses": [
|
||||
{
|
||||
"type": "MIT License",
|
||||
"confidence": 0.9891304347826086
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/prometheus/client_golang/prometheus",
|
||||
"license": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/prometheus/client_model/go",
|
||||
"license": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/prometheus/common",
|
||||
"license": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/prometheus/procfs",
|
||||
"license": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/russross/blackfriday",
|
||||
"license": "BSD 2-clause \"Simplified\" License",
|
||||
"confidence": 0.963
|
||||
},
|
||||
{
|
||||
"project": "github.com/shurcooL/sanitized_anchor_name",
|
||||
"license": "MIT License",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "BSD 2-clause \"Simplified\" License",
|
||||
"confidence": 0.9626168224299065
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/spf13/cobra",
|
||||
"license": "Apache License 2.0",
|
||||
"confidence": 0.957
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 0.9573241061130334
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/spf13/pflag",
|
||||
"license": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.966
|
||||
"licenses": [
|
||||
{
|
||||
"type": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.9663865546218487
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/ugorji/go/codec",
|
||||
"license": "MIT License",
|
||||
"confidence": 0.995
|
||||
"licenses": [
|
||||
{
|
||||
"type": "MIT License",
|
||||
"confidence": 0.9946524064171123
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/urfave/cli",
|
||||
"license": "MIT License",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "MIT License",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/xiang90/probing",
|
||||
"license": "MIT License",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "MIT License",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "golang.org/x/crypto",
|
||||
"license": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.966
|
||||
"licenses": [
|
||||
{
|
||||
"type": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.9663865546218487
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "golang.org/x/net",
|
||||
"license": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.966
|
||||
"licenses": [
|
||||
{
|
||||
"type": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.9663865546218487
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "golang.org/x/text",
|
||||
"license": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.966
|
||||
"licenses": [
|
||||
{
|
||||
"type": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.9663865546218487
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "golang.org/x/time/rate",
|
||||
"license": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.966
|
||||
"licenses": [
|
||||
{
|
||||
"type": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.9663865546218487
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "google.golang.org/genproto/googleapis/rpc/status",
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0",
|
||||
"confidence": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "google.golang.org/grpc",
|
||||
"license": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.979
|
||||
"licenses": [
|
||||
{
|
||||
"type": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.979253112033195
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "gopkg.in/cheggaaa/pb.v1",
|
||||
"license": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.992
|
||||
"licenses": [
|
||||
{
|
||||
"type": "BSD 3-clause \"New\" or \"Revised\" License",
|
||||
"confidence": 0.9916666666666667
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "gopkg.in/yaml.v2",
|
||||
"license": "Apache License 2.0 and MIT License",
|
||||
"confidence": 1
|
||||
"licenses": [
|
||||
{
|
||||
"type": "The Unlicense",
|
||||
"confidence": 0.35294117647058826
|
||||
},
|
||||
{
|
||||
"type": "MIT License",
|
||||
"confidence": 0.8975609756097561
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
|
@ -1,18 +1,26 @@
|
||||
[
|
||||
{
|
||||
"project": "bitbucket.org/ww/goautoneg",
|
||||
"license": "BSD 3-clause \"New\" or \"Revised\" License"
|
||||
"licenses": [
|
||||
{
|
||||
"type": "BSD 3-clause \"New\" or \"Revised\" License"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/ghodss/yaml",
|
||||
"license": "MIT License and BSD 3-clause \"New\" or \"Revised\" License"
|
||||
"licenses": [
|
||||
{
|
||||
"type": "MIT License and BSD 3-clause \"New\" or \"Revised\" License"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"project": "github.com/inconshreveable/mousetrap",
|
||||
"license": "Apache License 2.0"
|
||||
},
|
||||
{
|
||||
"project": "gopkg.in/yaml.v2",
|
||||
"license": "Apache License 2.0 and MIT License"
|
||||
"licenses": [
|
||||
{
|
||||
"type": "Apache License 2.0"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
|
23
build
@ -3,9 +3,7 @@
|
||||
# set some environment variables
|
||||
ORG_PATH="github.com/coreos"
|
||||
REPO_PATH="${ORG_PATH}/etcd"
|
||||
export GO15VENDOREXPERIMENT="1"
|
||||
|
||||
eval $(go env)
|
||||
GIT_SHA=`git rev-parse --short HEAD || echo "GitNotFound"`
|
||||
if [ ! -z "$FAILPOINTS" ]; then
|
||||
GIT_SHA="$GIT_SHA"-FAILPOINTS
|
||||
@ -17,11 +15,7 @@ GO_LDFLAGS="$GO_LDFLAGS -X ${REPO_PATH}/cmd/vendor/${REPO_PATH}/version.GitSHA=$
|
||||
# enable/disable failpoints
|
||||
toggle_failpoints() {
|
||||
FAILPKGS="etcdserver/ mvcc/backend/"
|
||||
|
||||
mode="disable"
|
||||
if [ ! -z "$FAILPOINTS" ]; then mode="enable"; fi
|
||||
if [ ! -z "$1" ]; then mode="$1"; fi
|
||||
|
||||
mode="$1"
|
||||
if which gofail >/dev/null 2>&1; then
|
||||
gofail "$mode" $FAILPKGS
|
||||
elif [ "$mode" != "disable" ]; then
|
||||
@ -30,19 +24,26 @@ toggle_failpoints() {
|
||||
fi
|
||||
}
|
||||
|
||||
toggle_failpoints_default() {
|
||||
mode="disable"
|
||||
if [ ! -z "$FAILPOINTS" ]; then mode="enable"; fi
|
||||
toggle_failpoints "$mode"
|
||||
}
|
||||
|
||||
etcd_build() {
|
||||
out="bin"
|
||||
if [ -n "${BINDIR}" ]; then out="${BINDIR}"; fi
|
||||
toggle_failpoints
|
||||
toggle_failpoints_default
|
||||
# Static compilation is useful when etcd is run in a container
|
||||
CGO_ENABLED=0 go build $GO_BUILD_FLAGS -installsuffix cgo -ldflags "$GO_LDFLAGS" -o ${out}/etcd ${REPO_PATH}/cmd/etcd || return
|
||||
CGO_ENABLED=0 go build $GO_BUILD_FLAGS -installsuffix cgo -ldflags "$GO_LDFLAGS" -o ${out}/etcdctl ${REPO_PATH}/cmd/etcdctl || return
|
||||
}
|
||||
|
||||
etcd_setup_gopath() {
|
||||
CDIR=$(cd `dirname "$0"` && pwd)
|
||||
d=$(dirname "$0")
|
||||
CDIR=$(cd "$d" && pwd)
|
||||
cd "$CDIR"
|
||||
etcdGOPATH=${CDIR}/gopath
|
||||
etcdGOPATH="${CDIR}/gopath"
|
||||
# preserve old gopath to support building with unvendored tooling deps (e.g., gofail)
|
||||
if [ -n "$GOPATH" ]; then
|
||||
GOPATH=":$GOPATH"
|
||||
@ -53,7 +54,7 @@ etcd_setup_gopath() {
|
||||
ln -s ${CDIR}/cmd/vendor ${etcdGOPATH}/src
|
||||
}
|
||||
|
||||
toggle_failpoints
|
||||
toggle_failpoints_default
|
||||
|
||||
# only build when called directly, not sourced
|
||||
if echo "$0" | grep "build$" >/dev/null; then
|
||||
|
@ -59,7 +59,7 @@ Use a custom context to set timeouts on your operations:
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
|
||||
defer cancel()
|
||||
|
||||
// set a new key, ignoring it's previous state
|
||||
// set a new key, ignoring its previous state
|
||||
_, err := kAPI.Set(ctx, "/ping", "pong", nil)
|
||||
if err != nil {
|
||||
if err == context.DeadlineExceeded {
|
||||
|
93
client/example_keys_test.go
Normal file
@ -0,0 +1,93 @@
|
||||
// Copyright 2017 The etcd Authors
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package client_test
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
"sort"
|
||||
|
||||
"github.com/coreos/etcd/client"
|
||||
"golang.org/x/net/context"
|
||||
)
|
||||
|
||||
func ExampleKeysAPI_directory() {
|
||||
c, err := client.New(client.Config{
|
||||
Endpoints: exampleEndpoints,
|
||||
Transport: exampleTransport,
|
||||
})
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
kapi := client.NewKeysAPI(c)
|
||||
|
||||
// Setting '/myNodes' to create a directory that will hold some keys.
|
||||
o := client.SetOptions{Dir: true}
|
||||
resp, err := kapi.Set(context.Background(), "/myNodes", "", &o)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
// Add keys to /myNodes directory.
|
||||
resp, err = kapi.Set(context.Background(), "/myNodes/key1", "value1", nil)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
resp, err = kapi.Set(context.Background(), "/myNodes/key2", "value2", nil)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
// fetch directory
|
||||
resp, err = kapi.Get(context.Background(), "/myNodes", nil)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
// print directory keys
|
||||
sort.Sort(resp.Node.Nodes)
|
||||
for _, n := range resp.Node.Nodes {
|
||||
fmt.Printf("Key: %q, Value: %q\n", n.Key, n.Value)
|
||||
}
|
||||
|
||||
// Output:
|
||||
// Key: "/myNodes/key1", Value: "value1"
|
||||
// Key: "/myNodes/key2", Value: "value2"
|
||||
}
|
||||
|
||||
func ExampleKeysAPI_setget() {
|
||||
c, err := client.New(client.Config{
|
||||
Endpoints: exampleEndpoints,
|
||||
Transport: exampleTransport,
|
||||
})
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
kapi := client.NewKeysAPI(c)
|
||||
|
||||
// Set key "/foo" to value "bar".
|
||||
resp, err := kapi.Set(context.Background(), "/foo", "bar", nil)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
// Get key "/foo"
|
||||
resp, err = kapi.Get(context.Background(), "/foo", nil)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
fmt.Printf("%q key has %q value\n", resp.Node.Key, resp.Node.Value)
|
||||
|
||||
// Output: "/foo" key has "bar" value
|
||||
}
|
77
client/main_test.go
Normal file
@ -0,0 +1,77 @@
|
||||
// Copyright 2017 The etcd Authors
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package client_test
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
"os"
|
||||
"regexp"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/coreos/etcd/integration"
|
||||
"github.com/coreos/etcd/pkg/testutil"
|
||||
"github.com/coreos/etcd/pkg/transport"
|
||||
)
|
||||
|
||||
var exampleEndpoints []string
|
||||
var exampleTransport *http.Transport
|
||||
|
||||
// TestMain sets up an etcd cluster if running the examples.
|
||||
func TestMain(m *testing.M) {
|
||||
useCluster, hasRunArg := false, false // default to running only Test*
|
||||
for _, arg := range os.Args {
|
||||
if strings.HasPrefix(arg, "-test.run=") {
|
||||
exp := strings.Split(arg, "=")[1]
|
||||
match, err := regexp.MatchString(exp, "Example")
|
||||
useCluster = (err == nil && match) || strings.Contains(exp, "Example")
|
||||
hasRunArg = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if !hasRunArg {
|
||||
// force only running Test* if no args given to avoid leak false
|
||||
// positives from having a long-running cluster for the examples.
|
||||
os.Args = append(os.Args, "-test.run=Test")
|
||||
}
|
||||
|
||||
v := 0
|
||||
if useCluster {
|
||||
tr, trerr := transport.NewTransport(transport.TLSInfo{}, time.Second)
|
||||
if trerr != nil {
|
||||
fmt.Fprintf(os.Stderr, "%v", trerr)
|
||||
os.Exit(1)
|
||||
}
|
||||
cfg := integration.ClusterConfig{Size: 1}
|
||||
clus := integration.NewClusterV3(nil, &cfg)
|
||||
exampleEndpoints = []string{clus.Members[0].URL()}
|
||||
exampleTransport = tr
|
||||
v = m.Run()
|
||||
clus.Terminate(nil)
|
||||
if err := testutil.CheckAfterTest(time.Second); err != nil {
|
||||
fmt.Fprintf(os.Stderr, "%v", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
} else {
|
||||
v = m.Run()
|
||||
}
|
||||
|
||||
if v == 0 && testutil.CheckLeakedGoroutine() {
|
||||
os.Exit(1)
|
||||
}
|
||||
os.Exit(v)
|
||||
}
|
@ -44,7 +44,7 @@ type Member struct {
|
||||
PeerURLs []string `json:"peerURLs"`
|
||||
|
||||
// ClientURLs represents the HTTP(S) endpoints on which this Member
|
||||
// serves it's client-facing APIs.
|
||||
// serves its client-facing APIs.
|
||||
ClientURLs []string `json:"clientURLs"`
|
||||
}
|
||||
|
||||
|
@ -182,7 +182,7 @@ func parseEndpoint(endpoint string) (proto string, host string, scheme string) {
|
||||
host = url.Host
|
||||
switch url.Scheme {
|
||||
case "http", "https":
|
||||
case "unix":
|
||||
case "unix", "unixs":
|
||||
proto = "unix"
|
||||
host = url.Host + url.Path
|
||||
default:
|
||||
@ -197,7 +197,7 @@ func (c *Client) processCreds(scheme string) (creds *credentials.TransportCreden
|
||||
case "unix":
|
||||
case "http":
|
||||
creds = nil
|
||||
case "https":
|
||||
case "https", "unixs":
|
||||
if creds != nil {
|
||||
break
|
||||
}
|
||||
@ -322,7 +322,7 @@ func (c *Client) dial(endpoint string, dopts ...grpc.DialOption) (*grpc.ClientCo
|
||||
|
||||
opts = append(opts, c.cfg.DialOptions...)
|
||||
|
||||
conn, err := grpc.Dial(host, opts...)
|
||||
conn, err := grpc.DialContext(c.ctx, host, opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@ -333,7 +333,7 @@ func (c *Client) dial(endpoint string, dopts ...grpc.DialOption) (*grpc.ClientCo
|
||||
// when the cluster has a leader.
|
||||
func WithRequireLeader(ctx context.Context) context.Context {
|
||||
md := metadata.Pairs(rpctypes.MetadataRequireLeaderKey, rpctypes.MetadataHasLeader)
|
||||
return metadata.NewContext(ctx, md)
|
||||
return metadata.NewOutgoingContext(ctx, md)
|
||||
}
|
||||
|
||||
func newClient(cfg *Config) (*Client, error) {
|
||||
@ -367,7 +367,9 @@ func newClient(cfg *Config) (*Client, error) {
|
||||
}
|
||||
|
||||
client.balancer = newSimpleBalancer(cfg.Endpoints)
|
||||
conn, err := client.dial("", grpc.WithBalancer(client.balancer))
|
||||
// use Endpoints[0] so that for https:// without any tls config given, then
|
||||
// grpc will assume the ServerName is in the endpoint.
|
||||
conn, err := client.dial(cfg.Endpoints[0], grpc.WithBalancer(client.balancer))
|
||||
if err != nil {
|
||||
client.cancel()
|
||||
client.balancer.Close()
|
||||
|
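The dial path now uses grpc.DialContext and WithRequireLeader builds outgoing metadata. As a usage sketch, assuming a reachable cluster at a placeholder endpoint, a caller can wrap a context with clientv3.WithRequireLeader so requests fail fast on members that have lost their leader:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Requests made with this context are rejected by members that have
	// lost their leader, instead of blocking indefinitely.
	ctx := clientv3.WithRequireLeader(context.Background())
	resp, err := cli.Get(ctx, "foo")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Kvs)
}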
@ -49,7 +49,9 @@ func (m *Mutex) Lock(ctx context.Context) error {
|
||||
put := v3.OpPut(m.myKey, "", v3.WithLease(s.Lease()))
|
||||
// reuse key in case this session already holds the lock
|
||||
get := v3.OpGet(m.myKey)
|
||||
resp, err := client.Txn(ctx).If(cmp).Then(put).Else(get).Commit()
|
||||
// fetch current holder to complete uncontended path with only one RPC
|
||||
getOwner := v3.OpGet(m.pfx, v3.WithFirstCreate()...)
|
||||
resp, err := client.Txn(ctx).If(cmp).Then(put, getOwner).Else(get, getOwner).Commit()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@ -57,6 +59,12 @@ func (m *Mutex) Lock(ctx context.Context) error {
|
||||
if !resp.Succeeded {
|
||||
m.myRev = resp.Responses[0].GetResponseRange().Kvs[0].CreateRevision
|
||||
}
|
||||
// if no key on prefix / the minimum rev is key, already hold the lock
|
||||
ownerKey := resp.Responses[1].GetResponseRange().Kvs
|
||||
if len(ownerKey) == 0 || ownerKey[0].CreateRevision == m.myRev {
|
||||
m.hdr = resp.Header
|
||||
return nil
|
||||
}
|
||||
|
||||
// wait for deletion revisions prior to myKey
|
||||
hdr, werr := waitDeletes(ctx, client, m.pfx, m.myRev-1)
|
||||
|
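With the extra getOwner read above, an uncontended Lock completes in a single Txn round trip: the same transaction writes the session's key and reads the current owner. A minimal usage sketch of the mutex, assuming a cluster at a placeholder endpoint and a placeholder lock prefix:

package main

import (
	"context"
	"log"

	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/concurrency"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}}) // placeholder
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	s, err := concurrency.NewSession(cli)
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	m := concurrency.NewMutex(s, "/my-lock/") // placeholder prefix
	if err := m.Lock(context.Background()); err != nil {
		log.Fatal(err)
	}
	// ... critical section ...
	if err := m.Unlock(context.Background()); err != nil {
		log.Fatal(err)
	}
}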
@ -30,7 +30,7 @@ import (
|
||||
"google.golang.org/grpc"
|
||||
)
|
||||
|
||||
func ExampleMetrics_range() {
|
||||
func ExampleClient_metrics() {
|
||||
cli, err := clientv3.New(clientv3.Config{
|
||||
Endpoints: endpoints,
|
||||
DialOptions: []grpc.DialOption{
|
||||
|
@ -66,6 +66,22 @@ func TestDialTLSExpired(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
// TestDialTLSNoConfig ensures the client fails to dial / times out
|
||||
// when TLS endpoints (https, unixs) are given but no tls config.
|
||||
func TestDialTLSNoConfig(t *testing.T) {
|
||||
defer testutil.AfterTest(t)
|
||||
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1, ClientTLS: &testTLSInfo})
|
||||
defer clus.Terminate(t)
|
||||
// expect 'signed by unknown authority'
|
||||
_, err := clientv3.New(clientv3.Config{
|
||||
Endpoints: []string{clus.Members[0].GRPCAddr()},
|
||||
DialTimeout: time.Second,
|
||||
})
|
||||
if err != grpc.ErrClientConnTimeout {
|
||||
t.Fatalf("expected %v, got %v", grpc.ErrClientConnTimeout, err)
|
||||
}
|
||||
}
|
||||
|
||||
// TestDialSetEndpoints ensures SetEndpoints can replace unavailable endpoints with available ones.
|
||||
func TestDialSetEndpointsBeforeFail(t *testing.T) {
|
||||
testDialSetEndpoints(t, true)
|
||||
|
@ -20,7 +20,7 @@ import (
|
||||
"time"
|
||||
|
||||
"github.com/coreos/etcd/clientv3"
|
||||
"github.com/coreos/etcd/etcdserver/api/v3rpc"
|
||||
"github.com/coreos/etcd/embed"
|
||||
"github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
|
||||
"github.com/coreos/etcd/integration"
|
||||
"github.com/coreos/etcd/pkg/testutil"
|
||||
@ -41,7 +41,7 @@ func TestTxnError(t *testing.T) {
|
||||
t.Fatalf("expected %v, got %v", rpctypes.ErrDuplicateKey, err)
|
||||
}
|
||||
|
||||
ops := make([]clientv3.Op, v3rpc.MaxOpsPerTxn+10)
|
||||
ops := make([]clientv3.Op, int(embed.DefaultMaxTxnOps+10))
|
||||
for i := range ops {
|
||||
ops[i] = clientv3.OpPut(fmt.Sprintf("foo%d", i), "")
|
||||
}
|
||||
|
@ -323,7 +323,7 @@ func (l *lessor) closeRequireLeader() {
|
||||
reqIdxs := 0
|
||||
// find all required leader channels, close, mark as nil
|
||||
for i, ctx := range ka.ctxs {
|
||||
md, ok := metadata.FromContext(ctx)
|
||||
md, ok := metadata.FromOutgoingContext(ctx)
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
@ -386,7 +386,7 @@ func (l *lessor) recvKeepAliveLoop() (gerr error) {
|
||||
close(l.donec)
|
||||
l.loopErr = gerr
|
||||
for _, ka := range l.keepAlives {
|
||||
ka.Close()
|
||||
ka.close()
|
||||
}
|
||||
l.keepAlives = make(map[LeaseID]*keepAlive)
|
||||
l.mu.Unlock()
|
||||
@ -467,7 +467,7 @@ func (l *lessor) recvKeepAlive(resp *pb.LeaseKeepAliveResponse) {
|
||||
if karesp.TTL <= 0 {
|
||||
// lease expired; close all keep alive channels
|
||||
delete(l.keepAlives, karesp.ID)
|
||||
ka.Close()
|
||||
ka.close()
|
||||
return
|
||||
}
|
||||
|
||||
@ -497,7 +497,7 @@ func (l *lessor) deadlineLoop() {
|
||||
for id, ka := range l.keepAlives {
|
||||
if ka.deadline.Before(now) {
|
||||
// waited too long for response; lease may be expired
|
||||
ka.Close()
|
||||
ka.close()
|
||||
delete(l.keepAlives, id)
|
||||
}
|
||||
}
|
||||
@ -539,7 +539,7 @@ func (l *lessor) sendKeepAliveLoop(stream pb.Lease_LeaseKeepAliveClient) {
|
||||
}
|
||||
}
|
||||
|
||||
func (ka *keepAlive) Close() {
|
||||
func (ka *keepAlive) close() {
|
||||
close(ka.donec)
|
||||
for _, ch := range ka.chs {
|
||||
close(ch)
|
||||
|
@ -32,15 +32,21 @@ func init() { auth.BcryptCost = bcrypt.MinCost }
|
||||
|
||||
// TestMain sets up an etcd cluster if running the examples.
|
||||
func TestMain(m *testing.M) {
|
||||
useCluster := true // default to running all tests
|
||||
useCluster, hasRunArg := false, false // default to running only Test*
|
||||
for _, arg := range os.Args {
|
||||
if strings.HasPrefix(arg, "-test.run=") {
|
||||
exp := strings.Split(arg, "=")[1]
|
||||
match, err := regexp.MatchString(exp, "Example")
|
||||
useCluster = (err == nil && match) || strings.Contains(exp, "Example")
|
||||
hasRunArg = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if !hasRunArg {
|
||||
// force only running Test* if no args given to avoid leak false
|
||||
// positives from having a long-running cluster for the examples.
|
||||
os.Args = append(os.Args, "-test.run=Test")
|
||||
}
|
||||
|
||||
v := 0
|
||||
if useCluster {
|
||||
|
@ -40,10 +40,9 @@ type WatchChan <-chan WatchResponse
|
||||
|
||||
type Watcher interface {
|
||||
// Watch watches on a key or prefix. The watched events will be returned
|
||||
// through the returned channel.
|
||||
// If the watch is slow or the required rev is compacted, the watch request
|
||||
// might be canceled from the server-side and the chan will be closed.
|
||||
// 'opts' can be: 'WithRev' and/or 'WithPrefix'.
|
||||
// through the returned channel. If revisions waiting to be sent over the
|
||||
// watch are compacted, then the watch will be canceled by the server, the
|
||||
// client will post a compacted error watch response, and the channel will close.
|
||||
Watch(ctx context.Context, key string, opts ...OpOption) WatchChan
|
||||
|
||||
// Close closes the watcher and cancels all watch requests.
|
||||
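The reworded contract states that when the requested revisions have been compacted, the server cancels the watch, the client posts a compacted-error response, and the channel closes. A hedged sketch of how a caller might observe this, assuming a placeholder endpoint and a deliberately old start revision:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}}) // placeholder
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Watch from revision 1; if it has been compacted away, the watch is
	// canceled, a response carrying the compaction error is delivered,
	// and the channel is closed.
	wch := cli.Watch(context.Background(), "foo", clientv3.WithRev(1))
	for resp := range wch {
		if resp.Canceled {
			fmt.Println("watch canceled:", resp.Err(), "compacted at", resp.CompactRevision)
			break
		}
		for _, ev := range resp.Events {
			fmt.Printf("%s %q : %q\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}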
@ -317,14 +316,14 @@ func (w *watcher) Close() (err error) {
|
||||
w.streams = nil
|
||||
w.mu.Unlock()
|
||||
for _, wgs := range streams {
|
||||
if werr := wgs.Close(); werr != nil {
|
||||
if werr := wgs.close(); werr != nil {
|
||||
err = werr
|
||||
}
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
func (w *watchGrpcStream) Close() (err error) {
|
||||
func (w *watchGrpcStream) close() (err error) {
|
||||
w.cancel()
|
||||
<-w.donec
|
||||
select {
|
||||
|
151
cmd/vendor/github.com/coreos/go-semver/semver/semver.go
generated
vendored
@ -1,3 +1,18 @@
|
||||
// Copyright 2013-2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
// Semantic Versions http://semver.org
|
||||
package semver
|
||||
|
||||
import (
|
||||
@ -29,35 +44,21 @@ func splitOff(input *string, delim string) (val string) {
|
||||
return val
|
||||
}
|
||||
|
||||
func New(version string) *Version {
|
||||
return Must(NewVersion(version))
|
||||
}
|
||||
|
||||
func NewVersion(version string) (*Version, error) {
|
||||
v := Version{}
|
||||
|
||||
dotParts := strings.SplitN(version, ".", 3)
|
||||
|
||||
if len(dotParts) != 3 {
|
||||
return nil, errors.New(fmt.Sprintf("%s is not in dotted-tri format", version))
|
||||
if err := v.Set(version); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
v.Metadata = splitOff(&dotParts[2], "+")
|
||||
v.PreRelease = PreRelease(splitOff(&dotParts[2], "-"))
|
||||
|
||||
parsed := make([]int64, 3, 3)
|
||||
|
||||
for i, v := range dotParts[:3] {
|
||||
val, err := strconv.ParseInt(v, 10, 64)
|
||||
parsed[i] = val
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
v.Major = parsed[0]
|
||||
v.Minor = parsed[1]
|
||||
v.Patch = parsed[2]
|
||||
|
||||
return &v, nil
|
||||
}
|
||||
|
||||
// Must is a helper for wrapping NewVersion and will panic if err is not nil.
|
||||
func Must(v *Version, err error) *Version {
|
||||
if err != nil {
|
||||
panic(err)
|
||||
@ -65,45 +66,99 @@ func Must(v *Version, err error) *Version {
|
||||
return v
|
||||
}
|
||||
|
||||
func (v *Version) String() string {
|
||||
// Set parses and updates v from the given version string. Implements flag.Value
|
||||
func (v *Version) Set(version string) error {
|
||||
metadata := splitOff(&version, "+")
|
||||
preRelease := PreRelease(splitOff(&version, "-"))
|
||||
dotParts := strings.SplitN(version, ".", 3)
|
||||
|
||||
if len(dotParts) != 3 {
|
||||
return fmt.Errorf("%s is not in dotted-tri format", version)
|
||||
}
|
||||
|
||||
parsed := make([]int64, 3, 3)
|
||||
|
||||
for i, v := range dotParts[:3] {
|
||||
val, err := strconv.ParseInt(v, 10, 64)
|
||||
parsed[i] = val
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
v.Metadata = metadata
|
||||
v.PreRelease = preRelease
|
||||
v.Major = parsed[0]
|
||||
v.Minor = parsed[1]
|
||||
v.Patch = parsed[2]
|
||||
return nil
|
||||
}
|
||||
|
||||
func (v Version) String() string {
|
||||
var buffer bytes.Buffer
|
||||
|
||||
base := fmt.Sprintf("%d.%d.%d", v.Major, v.Minor, v.Patch)
|
||||
buffer.WriteString(base)
|
||||
fmt.Fprintf(&buffer, "%d.%d.%d", v.Major, v.Minor, v.Patch)
|
||||
|
||||
if v.PreRelease != "" {
|
||||
buffer.WriteString(fmt.Sprintf("-%s", v.PreRelease))
|
||||
fmt.Fprintf(&buffer, "-%s", v.PreRelease)
|
||||
}
|
||||
|
||||
if v.Metadata != "" {
|
||||
buffer.WriteString(fmt.Sprintf("+%s", v.Metadata))
|
||||
fmt.Fprintf(&buffer, "+%s", v.Metadata)
|
||||
}
|
||||
|
||||
return buffer.String()
|
||||
}
|
||||
|
||||
func (v *Version) LessThan(versionB Version) bool {
|
||||
versionA := *v
|
||||
cmp := recursiveCompare(versionA.Slice(), versionB.Slice())
|
||||
|
||||
if cmp == 0 {
|
||||
cmp = preReleaseCompare(versionA, versionB)
|
||||
func (v *Version) UnmarshalYAML(unmarshal func(interface{}) error) error {
|
||||
var data string
|
||||
if err := unmarshal(&data); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if cmp == -1 {
|
||||
return true
|
||||
}
|
||||
|
||||
return false
|
||||
return v.Set(data)
|
||||
}
|
||||
|
||||
/* Slice converts the comparable parts of the semver into a slice of strings */
|
||||
func (v *Version) Slice() []int64 {
|
||||
func (v Version) MarshalJSON() ([]byte, error) {
|
||||
return []byte(`"` + v.String() + `"`), nil
|
||||
}
|
||||
|
||||
func (v *Version) UnmarshalJSON(data []byte) error {
|
||||
l := len(data)
|
||||
if l == 0 || string(data) == `""` {
|
||||
return nil
|
||||
}
|
||||
if l < 2 || data[0] != '"' || data[l-1] != '"' {
|
||||
return errors.New("invalid semver string")
|
||||
}
|
||||
return v.Set(string(data[1 : l-1]))
|
||||
}
|
||||
|
||||
// Compare tests if v is less than, equal to, or greater than versionB,
|
||||
// returning -1, 0, or +1 respectively.
|
||||
func (v Version) Compare(versionB Version) int {
|
||||
if cmp := recursiveCompare(v.Slice(), versionB.Slice()); cmp != 0 {
|
||||
return cmp
|
||||
}
|
||||
return preReleaseCompare(v, versionB)
|
||||
}
|
||||
|
||||
// Equal tests if v is equal to versionB.
|
||||
func (v Version) Equal(versionB Version) bool {
|
||||
return v.Compare(versionB) == 0
|
||||
}
|
||||
|
||||
// LessThan tests if v is less than versionB.
|
||||
func (v Version) LessThan(versionB Version) bool {
|
||||
return v.Compare(versionB) < 0
|
||||
}
|
||||
|
||||
// Slice converts the comparable parts of the semver into a slice of integers.
|
||||
func (v Version) Slice() []int64 {
|
||||
return []int64{v.Major, v.Minor, v.Patch}
|
||||
}
|
||||
|
||||
func (p *PreRelease) Slice() []string {
|
||||
preRelease := string(*p)
|
||||
func (p PreRelease) Slice() []string {
|
||||
preRelease := string(p)
|
||||
return strings.Split(preRelease, ".")
|
||||
}
|
||||
|
||||
@ -119,7 +174,7 @@ func preReleaseCompare(versionA Version, versionB Version) int {
|
||||
return -1
|
||||
}
|
||||
|
||||
// If there is a prelease, check and compare each part.
|
||||
// If there is a prerelease, check and compare each part.
|
||||
return recursivePreReleaseCompare(a.Slice(), b.Slice())
|
||||
}
|
||||
|
||||
@ -141,9 +196,12 @@ func recursiveCompare(versionA []int64, versionB []int64) int {
|
||||
}
|
||||
|
||||
func recursivePreReleaseCompare(versionA []string, versionB []string) int {
|
||||
// Handle slice length disparity.
|
||||
// A larger set of pre-release fields has a higher precedence than a smaller set,
|
||||
// if all of the preceding identifiers are equal.
|
||||
if len(versionA) == 0 {
|
||||
// Nothing to compare to, so we return 0
|
||||
if len(versionB) > 0 {
|
||||
return -1
|
||||
}
|
||||
return 0
|
||||
} else if len(versionB) == 0 {
|
||||
// We're longer than versionB so return 1.
|
||||
@ -153,7 +211,8 @@ func recursivePreReleaseCompare(versionA []string, versionB []string) int {
|
||||
a := versionA[0]
|
||||
b := versionB[0]
|
||||
|
||||
aInt := false; bInt := false
|
||||
aInt := false
|
||||
bInt := false
|
||||
|
||||
aI, err := strconv.Atoi(versionA[0])
|
||||
if err == nil {
|
||||
|
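The updated vendored semver gains Set (usable as a flag.Value), JSON/YAML unmarshalling, and value-receiver Compare/Equal/LessThan. A small usage sketch against the upstream github.com/coreos/go-semver/semver import path, with arbitrarily chosen version strings:

package main

import (
	"fmt"
	"log"

	"github.com/coreos/go-semver/semver"
)

func main() {
	a, err := semver.NewVersion("3.2.0+git")
	if err != nil {
		log.Fatal(err)
	}
	b := semver.New("3.2.23") // Must-style constructor; panics on bad input

	fmt.Println(a.LessThan(*b)) // true: 3.2.0 < 3.2.23
	fmt.Println(a.Compare(*b))  // -1

	// Set implements flag.Value, so a Version can be parsed in place.
	var v semver.Version
	if err := v.Set("1.2.3-rc.1+build.7"); err != nil {
		log.Fatal(err)
	}
	fmt.Println(v.Major, v.Minor, v.Patch, v.PreRelease, v.Metadata)
}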
14
cmd/vendor/github.com/coreos/go-semver/semver/sort.go
generated
vendored
@ -1,3 +1,17 @@
|
||||
// Copyright 2013-2015 CoreOS, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package semver
|
||||
|
||||
import (
|
||||
|
35
cmd/vendor/github.com/cpuguy83/go-md2man/md2man/roff.go
generated
vendored
@ -118,11 +118,24 @@ func (r *roffRenderer) Paragraph(out *bytes.Buffer, text func() bool) {
|
||||
}
|
||||
}
|
||||
|
||||
// TODO: This might not work
|
||||
func (r *roffRenderer) Table(out *bytes.Buffer, header []byte, body []byte, columnData []int) {
|
||||
out.WriteString(".TS\nallbox;\n")
|
||||
out.WriteString("\n.TS\nallbox;\n")
|
||||
|
||||
max_delims := 0
|
||||
lines := strings.Split(strings.TrimRight(string(header), "\n")+"\n"+strings.TrimRight(string(body), "\n"), "\n")
|
||||
for _, w := range lines {
|
||||
cur_delims := strings.Count(w, "\t")
|
||||
if cur_delims > max_delims {
|
||||
max_delims = cur_delims
|
||||
}
|
||||
}
|
||||
out.Write([]byte(strings.Repeat("l ", max_delims+1) + "\n"))
|
||||
out.Write([]byte(strings.Repeat("l ", max_delims+1) + ".\n"))
|
||||
out.Write(header)
|
||||
if len(header) > 0 {
|
||||
out.Write([]byte("\n"))
|
||||
}
|
||||
|
||||
out.Write(body)
|
||||
out.WriteString("\n.TE\n")
|
||||
}
|
||||
@ -132,24 +145,30 @@ func (r *roffRenderer) TableRow(out *bytes.Buffer, text []byte) {
|
||||
out.WriteString("\n")
|
||||
}
|
||||
out.Write(text)
|
||||
out.WriteString("\n")
|
||||
}
|
||||
|
||||
func (r *roffRenderer) TableHeaderCell(out *bytes.Buffer, text []byte, align int) {
|
||||
if out.Len() > 0 {
|
||||
out.WriteString(" ")
|
||||
out.WriteString("\t")
|
||||
}
|
||||
out.Write(text)
|
||||
out.WriteString(" ")
|
||||
if len(text) == 0 {
|
||||
text = []byte{' '}
|
||||
}
|
||||
out.Write([]byte("\\fB\\fC" + string(text) + "\\fR"))
|
||||
}
|
||||
|
||||
// TODO: This is probably broken
|
||||
func (r *roffRenderer) TableCell(out *bytes.Buffer, text []byte, align int) {
|
||||
if out.Len() > 0 {
|
||||
out.WriteString("\t")
|
||||
}
|
||||
if len(text) > 30 {
|
||||
text = append([]byte("T{\n"), text...)
|
||||
text = append(text, []byte("\nT}")...)
|
||||
}
|
||||
if len(text) == 0 {
|
||||
text = []byte{' '}
|
||||
}
|
||||
out.Write(text)
|
||||
out.WriteString("\t")
|
||||
}
|
||||
|
||||
func (r *roffRenderer) Footnotes(out *bytes.Buffer, text func() bool) {
|
||||
|
6
cmd/vendor/github.com/ghodss/yaml/fields.go
generated
vendored
@ -45,7 +45,11 @@ func indirect(v reflect.Value, decodingNull bool) (json.Unmarshaler, encoding.Te
|
||||
break
|
||||
}
|
||||
if v.IsNil() {
|
||||
v.Set(reflect.New(v.Type().Elem()))
|
||||
if v.CanSet() {
|
||||
v.Set(reflect.New(v.Type().Elem()))
|
||||
} else {
|
||||
v = reflect.New(v.Type().Elem())
|
||||
}
|
||||
}
|
||||
if v.Type().NumMethod() > 0 {
|
||||
if u, ok := v.Interface().(json.Unmarshaler); ok {
|
||||
|
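The fields.go change only calls reflect.Value.Set when the value is settable, allocating a fresh pointer otherwise. A tiny illustrative sketch of that distinction, using a throwaway struct type:

package main

import (
	"fmt"
	"reflect"
)

func main() {
	type T struct{ P *int }

	// Addressable: a field reached through a pointer to the struct can be Set.
	var t T
	v := reflect.ValueOf(&t).Elem().Field(0)
	fmt.Println(v.CanSet()) // true
	v.Set(reflect.New(v.Type().Elem()))

	// Not addressable: the same field reached through a value copy cannot be
	// Set; calling Set would panic, so code like the fix above must allocate
	// a new value and work on that instead.
	u := reflect.ValueOf(T{}).Field(0)
	fmt.Println(u.CanSet()) // false
	fresh := reflect.New(u.Type().Elem())
	_ = fresh
}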
6
cmd/vendor/github.com/ghodss/yaml/yaml.go
generated
vendored
@ -15,12 +15,12 @@ import (
|
||||
func Marshal(o interface{}) ([]byte, error) {
|
||||
j, err := json.Marshal(o)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error marshaling into JSON: ", err)
|
||||
return nil, fmt.Errorf("error marshaling into JSON: %v", err)
|
||||
}
|
||||
|
||||
y, err := JSONToYAML(j)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error converting JSON to YAML: ", err)
|
||||
return nil, fmt.Errorf("error converting JSON to YAML: %v", err)
|
||||
}
|
||||
|
||||
return y, nil
|
||||
@ -48,7 +48,7 @@ func JSONToYAML(j []byte) ([]byte, error) {
|
||||
var jsonObj interface{}
|
||||
// We are using yaml.Unmarshal here (instead of json.Unmarshal) because the
|
||||
// Go JSON library doesn't try to pick the right number type (int, float,
|
||||
// etc.) when unmarshling to interface{}, it just picks float64
|
||||
// etc.) when unmarshalling to interface{}, it just picks float64
|
||||
// universally. go-yaml does go through the effort of picking the right
|
||||
// number type, so we can preserve number type throughout this process.
|
||||
err := yaml.Unmarshal(j, &jsonObj)
|
||||
|
112
cmd/vendor/github.com/gogo/protobuf/proto/decode.go
generated
vendored
@ -61,7 +61,6 @@ var ErrInternalBadWireType = errors.New("proto: internal error: bad wiretype for
|
||||
// int32, int64, uint32, uint64, bool, and enum
|
||||
// protocol buffer types.
|
||||
func DecodeVarint(buf []byte) (x uint64, n int) {
|
||||
// x, n already 0
|
||||
for shift := uint(0); shift < 64; shift += 7 {
|
||||
if n >= len(buf) {
|
||||
return 0, 0
|
||||
@ -78,13 +77,7 @@ func DecodeVarint(buf []byte) (x uint64, n int) {
|
||||
return 0, 0
|
||||
}
|
||||
|
||||
// DecodeVarint reads a varint-encoded integer from the Buffer.
|
||||
// This is the format for the
|
||||
// int32, int64, uint32, uint64, bool, and enum
|
||||
// protocol buffer types.
|
||||
func (p *Buffer) DecodeVarint() (x uint64, err error) {
|
||||
// x, err already 0
|
||||
|
||||
func (p *Buffer) decodeVarintSlow() (x uint64, err error) {
|
||||
i := p.index
|
||||
l := len(p.buf)
|
||||
|
||||
@ -107,6 +100,107 @@ func (p *Buffer) DecodeVarint() (x uint64, err error) {
|
||||
return
|
||||
}
|
||||
|
||||
// DecodeVarint reads a varint-encoded integer from the Buffer.
|
||||
// This is the format for the
|
||||
// int32, int64, uint32, uint64, bool, and enum
|
||||
// protocol buffer types.
|
||||
func (p *Buffer) DecodeVarint() (x uint64, err error) {
|
||||
i := p.index
|
||||
buf := p.buf
|
||||
|
||||
if i >= len(buf) {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
} else if buf[i] < 0x80 {
|
||||
p.index++
|
||||
return uint64(buf[i]), nil
|
||||
} else if len(buf)-i < 10 {
|
||||
return p.decodeVarintSlow()
|
||||
}
|
||||
|
||||
var b uint64
|
||||
// we already checked the first byte
|
||||
x = uint64(buf[i]) - 0x80
|
||||
i++
|
||||
|
||||
b = uint64(buf[i])
|
||||
i++
|
||||
x += b << 7
|
||||
if b&0x80 == 0 {
|
||||
goto done
|
||||
}
|
||||
x -= 0x80 << 7
|
||||
|
||||
b = uint64(buf[i])
|
||||
i++
|
||||
x += b << 14
|
||||
if b&0x80 == 0 {
|
||||
goto done
|
||||
}
|
||||
x -= 0x80 << 14
|
||||
|
||||
b = uint64(buf[i])
|
||||
i++
|
||||
x += b << 21
|
||||
if b&0x80 == 0 {
|
||||
goto done
|
||||
}
|
||||
x -= 0x80 << 21
|
||||
|
||||
b = uint64(buf[i])
|
||||
i++
|
||||
x += b << 28
|
||||
if b&0x80 == 0 {
|
||||
goto done
|
||||
}
|
||||
x -= 0x80 << 28
|
||||
|
||||
b = uint64(buf[i])
|
||||
i++
|
||||
x += b << 35
|
||||
if b&0x80 == 0 {
|
||||
goto done
|
||||
}
|
||||
x -= 0x80 << 35
|
||||
|
||||
b = uint64(buf[i])
|
||||
i++
|
||||
x += b << 42
|
||||
if b&0x80 == 0 {
|
||||
goto done
|
||||
}
|
||||
x -= 0x80 << 42
|
||||
|
||||
b = uint64(buf[i])
|
||||
i++
|
||||
x += b << 49
|
||||
if b&0x80 == 0 {
|
||||
goto done
|
||||
}
|
||||
x -= 0x80 << 49
|
||||
|
||||
b = uint64(buf[i])
|
||||
i++
|
||||
x += b << 56
|
||||
if b&0x80 == 0 {
|
||||
goto done
|
||||
}
|
||||
x -= 0x80 << 56
|
||||
|
||||
b = uint64(buf[i])
|
||||
i++
|
||||
x += b << 63
|
||||
if b&0x80 == 0 {
|
||||
goto done
|
||||
}
|
||||
// x -= 0x80 << 63 // Always zero.
|
||||
|
||||
return 0, errOverflow
|
||||
|
||||
done:
|
||||
p.index = i
|
||||
return x, nil
|
||||
}
|
||||
|
||||
// DecodeFixed64 reads a 64-bit integer from the Buffer.
|
||||
// This is the format for the
|
||||
// fixed64, sfixed64, and double protocol buffer types.
|
||||
@ -340,6 +434,8 @@ func (p *Buffer) DecodeGroup(pb Message) error {
|
||||
// Buffer and places the decoded result in pb. If the struct
|
||||
// underlying pb does not match the data in the buffer, the results can be
|
||||
// unpredictable.
|
||||
//
|
||||
// Unlike proto.Unmarshal, this does not reset pb before starting to unmarshal.
|
||||
func (p *Buffer) Unmarshal(pb Message) error {
|
||||
// If the object can unmarshal itself, let it.
|
||||
if u, ok := pb.(Unmarshaler); ok {
|
||||
|
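The unrolled fast path added to DecodeVarint above decodes standard base-128 varints. As a worked illustration with the standard library's encoding/binary, which uses the same wire format, the value 300 encodes to the two bytes 0xAC 0x02:

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// 300 = 0b1_0010_1100. Varint encoding emits 7 bits per byte, least
	// significant group first, with the high bit set on all but the last
	// byte: 0xAC 0x02.
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, 300)
	fmt.Printf("% x\n", buf[:n]) // ac 02

	// Decoding reverses the process, exactly what the unrolled loop does
	// byte by byte.
	x, n := binary.Uvarint(buf[:n])
	fmt.Println(x, n) // 300 2
}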
5
cmd/vendor/github.com/gogo/protobuf/proto/decode_gogo.go
generated
vendored
@ -98,7 +98,7 @@ func setPtrCustomType(base structPointer, f field, v interface{}) {
|
||||
if v == nil {
|
||||
return
|
||||
}
|
||||
structPointer_SetStructPointer(base, f, structPointer(reflect.ValueOf(v).Pointer()))
|
||||
structPointer_SetStructPointer(base, f, toStructPointer(reflect.ValueOf(v)))
|
||||
}
|
||||
|
||||
func setCustomType(base structPointer, f field, value interface{}) {
|
||||
@ -165,7 +165,8 @@ func (o *Buffer) dec_custom_slice_bytes(p *Properties, base structPointer) error
|
||||
}
|
||||
newBas := appendStructPointer(base, p.field, p.ctype)
|
||||
|
||||
setCustomType(newBas, 0, custom)
|
||||
var zero field
|
||||
setCustomType(newBas, zero, custom)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
100
cmd/vendor/github.com/gogo/protobuf/proto/duration.go
generated
vendored
Normal file
@ -0,0 +1,100 @@
|
||||
// Go support for Protocol Buffers - Google's data interchange format
|
||||
//
|
||||
// Copyright 2016 The Go Authors. All rights reserved.
|
||||
// https://github.com/golang/protobuf
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
//
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
// * Neither the name of Google Inc. nor the names of its
|
||||
// contributors may be used to endorse or promote products derived from
|
||||
// this software without specific prior written permission.
|
||||
//
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
package proto
|
||||
|
||||
// This file implements conversions between google.protobuf.Duration
|
||||
// and time.Duration.
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"time"
|
||||
)
|
||||
|
||||
const (
|
||||
// Range of a Duration in seconds, as specified in
|
||||
// google/protobuf/duration.proto. This is about 10,000 years in seconds.
|
||||
maxSeconds = int64(10000 * 365.25 * 24 * 60 * 60)
|
||||
minSeconds = -maxSeconds
|
||||
)
|
||||
|
||||
// validateDuration determines whether the Duration is valid according to the
|
||||
// definition in google/protobuf/duration.proto. A valid Duration
|
||||
// may still be too large to fit into a time.Duration (the range of Duration
|
||||
// is about 10,000 years, and the range of time.Duration is about 290).
|
||||
func validateDuration(d *duration) error {
|
||||
if d == nil {
|
||||
return errors.New("duration: nil Duration")
|
||||
}
|
||||
if d.Seconds < minSeconds || d.Seconds > maxSeconds {
|
||||
return fmt.Errorf("duration: %#v: seconds out of range", d)
|
||||
}
|
||||
if d.Nanos <= -1e9 || d.Nanos >= 1e9 {
|
||||
return fmt.Errorf("duration: %#v: nanos out of range", d)
|
||||
}
|
||||
// Seconds and Nanos must have the same sign, unless d.Nanos is zero.
|
||||
if (d.Seconds < 0 && d.Nanos > 0) || (d.Seconds > 0 && d.Nanos < 0) {
|
||||
return fmt.Errorf("duration: %#v: seconds and nanos have different signs", d)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// DurationFromProto converts a Duration to a time.Duration. DurationFromProto
|
||||
// returns an error if the Duration is invalid or is too large to be
|
||||
// represented in a time.Duration.
|
||||
func durationFromProto(p *duration) (time.Duration, error) {
|
||||
if err := validateDuration(p); err != nil {
|
||||
return 0, err
|
||||
}
|
||||
d := time.Duration(p.Seconds) * time.Second
|
||||
if int64(d/time.Second) != p.Seconds {
|
||||
return 0, fmt.Errorf("duration: %#v is out of range for time.Duration", p)
|
||||
}
|
||||
if p.Nanos != 0 {
|
||||
d += time.Duration(p.Nanos)
|
||||
if (d < 0) != (p.Nanos < 0) {
|
||||
return 0, fmt.Errorf("duration: %#v is out of range for time.Duration", p)
|
||||
}
|
||||
}
|
||||
return d, nil
|
||||
}
|
||||
|
||||
// DurationProto converts a time.Duration to a Duration.
|
||||
func durationProto(d time.Duration) *duration {
|
||||
nanos := d.Nanoseconds()
|
||||
secs := nanos / 1e9
|
||||
nanos -= secs * 1e9
|
||||
return &duration{
|
||||
Seconds: secs,
|
||||
Nanos: int32(nanos),
|
||||
}
|
||||
}
|
203
cmd/vendor/github.com/gogo/protobuf/proto/duration_gogo.go
generated
vendored
Normal file
@ -0,0 +1,203 @@
|
||||
// Protocol Buffers for Go with Gadgets
|
||||
//
|
||||
// Copyright (c) 2016, The GoGo Authors. All rights reserved.
|
||||
// http://github.com/gogo/protobuf
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
//
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
//
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
package proto
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"time"
|
||||
)
|
||||
|
||||
var durationType = reflect.TypeOf((*time.Duration)(nil)).Elem()
|
||||
|
||||
type duration struct {
|
||||
Seconds int64 `protobuf:"varint,1,opt,name=seconds,proto3" json:"seconds,omitempty"`
|
||||
Nanos int32 `protobuf:"varint,2,opt,name=nanos,proto3" json:"nanos,omitempty"`
|
||||
}
|
||||
|
||||
func (m *duration) Reset() { *m = duration{} }
|
||||
func (*duration) ProtoMessage() {}
|
||||
func (*duration) String() string { return "duration<string>" }
|
||||
|
||||
func init() {
|
||||
RegisterType((*duration)(nil), "gogo.protobuf.proto.duration")
|
||||
}
|
||||
|
||||
func (o *Buffer) decDuration() (time.Duration, error) {
|
||||
b, err := o.DecodeRawBytes(true)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
dproto := &duration{}
|
||||
if err := Unmarshal(b, dproto); err != nil {
|
||||
return 0, err
|
||||
}
|
||||
return durationFromProto(dproto)
|
||||
}
|
||||
|
||||
func (o *Buffer) dec_duration(p *Properties, base structPointer) error {
|
||||
d, err := o.decDuration()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
word64_Set(structPointer_Word64(base, p.field), o, uint64(d))
|
||||
return nil
|
||||
}
|
||||
|
||||
func (o *Buffer) dec_ref_duration(p *Properties, base structPointer) error {
|
||||
d, err := o.decDuration()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
word64Val_Set(structPointer_Word64Val(base, p.field), o, uint64(d))
|
||||
return nil
|
||||
}
|
||||
|
||||
func (o *Buffer) dec_slice_duration(p *Properties, base structPointer) error {
|
||||
d, err := o.decDuration()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
newBas := appendStructPointer(base, p.field, reflect.SliceOf(reflect.PtrTo(durationType)))
|
||||
var zero field
|
||||
setPtrCustomType(newBas, zero, &d)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (o *Buffer) dec_slice_ref_duration(p *Properties, base structPointer) error {
|
||||
d, err := o.decDuration()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
structPointer_Word64Slice(base, p.field).Append(uint64(d))
|
||||
return nil
|
||||
}
|
||||
|
||||
func size_duration(p *Properties, base structPointer) (n int) {
|
||||
structp := structPointer_GetStructPointer(base, p.field)
|
||||
if structPointer_IsNil(structp) {
|
||||
return 0
|
||||
}
|
||||
dur := structPointer_Interface(structp, durationType).(*time.Duration)
|
||||
d := durationProto(*dur)
|
||||
size := Size(d)
|
||||
return size + sizeVarint(uint64(size)) + len(p.tagcode)
|
||||
}
|
||||
|
||||
func (o *Buffer) enc_duration(p *Properties, base structPointer) error {
|
||||
structp := structPointer_GetStructPointer(base, p.field)
|
||||
if structPointer_IsNil(structp) {
|
||||
return ErrNil
|
||||
}
|
||||
dur := structPointer_Interface(structp, durationType).(*time.Duration)
|
||||
d := durationProto(*dur)
|
||||
data, err := Marshal(d)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
o.EncodeRawBytes(data)
|
||||
return nil
|
||||
}
|
||||
|
||||
func size_ref_duration(p *Properties, base structPointer) (n int) {
|
||||
dur := structPointer_InterfaceAt(base, p.field, durationType).(*time.Duration)
|
||||
d := durationProto(*dur)
|
||||
size := Size(d)
|
||||
return size + sizeVarint(uint64(size)) + len(p.tagcode)
|
||||
}
|
||||
|
||||
func (o *Buffer) enc_ref_duration(p *Properties, base structPointer) error {
|
||||
dur := structPointer_InterfaceAt(base, p.field, durationType).(*time.Duration)
|
||||
d := durationProto(*dur)
|
||||
data, err := Marshal(d)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
o.EncodeRawBytes(data)
|
||||
return nil
|
||||
}
|
||||
|
||||
func size_slice_duration(p *Properties, base structPointer) (n int) {
|
||||
pdurs := structPointer_InterfaceAt(base, p.field, reflect.SliceOf(reflect.PtrTo(durationType))).(*[]*time.Duration)
|
||||
durs := *pdurs
|
||||
for i := 0; i < len(durs); i++ {
|
||||
if durs[i] == nil {
|
||||
return 0
|
||||
}
|
||||
dproto := durationProto(*durs[i])
|
||||
size := Size(dproto)
|
||||
n += len(p.tagcode) + size + sizeVarint(uint64(size))
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func (o *Buffer) enc_slice_duration(p *Properties, base structPointer) error {
|
||||
pdurs := structPointer_InterfaceAt(base, p.field, reflect.SliceOf(reflect.PtrTo(durationType))).(*[]*time.Duration)
|
||||
durs := *pdurs
|
||||
for i := 0; i < len(durs); i++ {
|
||||
if durs[i] == nil {
|
||||
return errRepeatedHasNil
|
||||
}
|
||||
dproto := durationProto(*durs[i])
|
||||
data, err := Marshal(dproto)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
o.EncodeRawBytes(data)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func size_slice_ref_duration(p *Properties, base structPointer) (n int) {
|
||||
pdurs := structPointer_InterfaceAt(base, p.field, reflect.SliceOf(durationType)).(*[]time.Duration)
|
||||
durs := *pdurs
|
||||
for i := 0; i < len(durs); i++ {
|
||||
dproto := durationProto(durs[i])
|
||||
size := Size(dproto)
|
||||
n += len(p.tagcode) + size + sizeVarint(uint64(size))
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func (o *Buffer) enc_slice_ref_duration(p *Properties, base structPointer) error {
|
||||
pdurs := structPointer_InterfaceAt(base, p.field, reflect.SliceOf(durationType)).(*[]time.Duration)
|
||||
durs := *pdurs
|
||||
for i := 0; i < len(durs); i++ {
|
||||
dproto := durationProto(durs[i])
|
||||
data, err := Marshal(dproto)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
o.EncodeRawBytes(data)
|
||||
}
|
||||
return nil
|
||||
}
|
cmd/vendor/github.com/gogo/protobuf/proto/encode.go (25 changes, generated, vendored)
@@ -234,10 +234,6 @@ func Marshal(pb Message) ([]byte, error) {
|
||||
}
|
||||
p := NewBuffer(nil)
|
||||
err := p.Marshal(pb)
|
||||
var state errorState
|
||||
if err != nil && !state.shouldContinue(err, nil) {
|
||||
return nil, err
|
||||
}
|
||||
if p.buf == nil && err == nil {
|
||||
// Return a non-nil slice on success.
|
||||
return []byte{}, nil
|
||||
@ -266,11 +262,8 @@ func (p *Buffer) Marshal(pb Message) error {
|
||||
// Can the object marshal itself?
|
||||
if m, ok := pb.(Marshaler); ok {
|
||||
data, err := m.Marshal()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
p.buf = append(p.buf, data...)
|
||||
return nil
|
||||
return err
|
||||
}
|
||||
|
||||
t, base, err := getbase(pb)
|
||||
@ -282,7 +275,7 @@ func (p *Buffer) Marshal(pb Message) error {
|
||||
}
|
||||
|
||||
if collectStats {
|
||||
stats.Encode++
|
||||
(stats).Encode++ // Parens are to work around a goimports bug.
|
||||
}
|
||||
|
||||
if len(p.buf) > maxMarshalSize {
|
||||
@ -309,7 +302,7 @@ func Size(pb Message) (n int) {
|
||||
}
|
||||
|
||||
if collectStats {
|
||||
stats.Size++
|
||||
(stats).Size++ // Parens are to work around a goimports bug.
|
||||
}
|
||||
|
||||
return
|
||||
@ -1014,7 +1007,6 @@ func size_slice_struct_message(p *Properties, base structPointer) (n int) {
|
||||
if p.isMarshaler {
|
||||
m := structPointer_Interface(structp, p.stype).(Marshaler)
|
||||
data, _ := m.Marshal()
|
||||
n += len(p.tagcode)
|
||||
n += sizeRawBytes(data)
|
||||
continue
|
||||
}
|
||||
@ -1083,10 +1075,17 @@ func (o *Buffer) enc_map(p *Properties, base structPointer) error {
|
||||
|
||||
func (o *Buffer) enc_exts(p *Properties, base structPointer) error {
|
||||
exts := structPointer_Extensions(base, p.field)
|
||||
if err := encodeExtensions(exts); err != nil {
|
||||
|
||||
v, mu := exts.extensionsRead()
|
||||
if v == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
mu.Lock()
|
||||
defer mu.Unlock()
|
||||
if err := encodeExtensionsMap(v); err != nil {
|
||||
return err
|
||||
}
|
||||
v, _ := exts.extensionsRead()
|
||||
|
||||
return o.enc_map_body(v)
|
||||
}
|
||||
|
cmd/vendor/github.com/gogo/protobuf/proto/encode_gogo.go (16 changes, generated, vendored)
@@ -196,12 +196,10 @@ func size_ref_struct_message(p *Properties, base structPointer) int {
|
||||
// Encode a slice of references to message struct pointers ([]struct).
|
||||
func (o *Buffer) enc_slice_ref_struct_message(p *Properties, base structPointer) error {
|
||||
var state errorState
|
||||
ss := structPointer_GetStructPointer(base, p.field)
|
||||
ss1 := structPointer_GetRefStructPointer(ss, field(0))
|
||||
size := p.stype.Size()
|
||||
l := structPointer_Len(base, p.field)
|
||||
ss := structPointer_StructRefSlice(base, p.field, p.stype.Size())
|
||||
l := ss.Len()
|
||||
for i := 0; i < l; i++ {
|
||||
structp := structPointer_Add(ss1, field(uintptr(i)*size))
|
||||
structp := ss.Index(i)
|
||||
if structPointer_IsNil(structp) {
|
||||
return errRepeatedHasNil
|
||||
}
|
||||
@ -233,13 +231,11 @@ func (o *Buffer) enc_slice_ref_struct_message(p *Properties, base structPointer)
|
||||
|
||||
//TODO this is only copied, please fix this
|
||||
func size_slice_ref_struct_message(p *Properties, base structPointer) (n int) {
|
||||
ss := structPointer_GetStructPointer(base, p.field)
|
||||
ss1 := structPointer_GetRefStructPointer(ss, field(0))
|
||||
size := p.stype.Size()
|
||||
l := structPointer_Len(base, p.field)
|
||||
ss := structPointer_StructRefSlice(base, p.field, p.stype.Size())
|
||||
l := ss.Len()
|
||||
n += l * len(p.tagcode)
|
||||
for i := 0; i < l; i++ {
|
||||
structp := structPointer_Add(ss1, field(uintptr(i)*size))
|
||||
structp := ss.Index(i)
|
||||
if structPointer_IsNil(structp) {
|
||||
return // return the size up to this point
|
||||
}
|
||||
|
cmd/vendor/github.com/gogo/protobuf/proto/equal.go (8 changes, generated, vendored)
@@ -54,13 +54,17 @@ Equality is defined in this way:
|
||||
in a proto3 .proto file, fields are not "set"; specifically,
|
||||
zero length proto3 "bytes" fields are equal (nil == {}).
|
||||
- Two repeated fields are equal iff their lengths are the same,
|
||||
and their corresponding elements are equal (a "bytes" field,
|
||||
although represented by []byte, is not a repeated field)
|
||||
and their corresponding elements are equal. Note a "bytes" field,
|
||||
although represented by []byte, is not a repeated field and the
|
||||
rule for the scalar fields described above applies.
|
||||
- Two unset fields are equal.
|
||||
- Two unknown field sets are equal if their current
|
||||
encoded state is equal.
|
||||
- Two extension sets are equal iff they have corresponding
|
||||
elements that are pairwise equal.
|
||||
- Two map fields are equal iff their lengths are the same,
|
||||
and they contain the same set of elements. Zero-length map
|
||||
fields are equal.
|
||||
- Every other combination of things is not equal.
|
||||
|
||||
The return value is undefined if a and b are not protocol buffers.
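A hedged illustration of the proto3 bytes rule stated above, using a hand-written stand-in for a generated message (the type, its tags, and the upstream import path are assumptions for the example):

package main

import (
	"fmt"

	"github.com/gogo/protobuf/proto"
)

// pair stands in for a generated proto3 message with a bytes field.
type pair struct {
	Seconds int64  `protobuf:"varint,1,opt,name=seconds,proto3"`
	Payload []byte `protobuf:"bytes,2,opt,name=payload,proto3"`
}

func (m *pair) Reset()         { *m = pair{} }
func (m *pair) String() string { return "pair" }
func (*pair) ProtoMessage()    {}

func main() {
	a := &pair{Seconds: 1, Payload: nil}
	b := &pair{Seconds: 1, Payload: []byte{}}
	// Zero-length proto3 bytes compare equal (nil == {}), per the rule above.
	fmt.Println(proto.Equal(a, b)) // true
}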
|
||||
|
cmd/vendor/github.com/gogo/protobuf/proto/extensions.go (4 changes, generated, vendored)
@@ -167,6 +167,7 @@ type ExtensionDesc struct {
|
||||
Field int32 // field number
|
||||
Name string // fully-qualified name of extension, for text formatting
|
||||
Tag string // protobuf tag style
|
||||
Filename string // name of the file in which the extension is defined
|
||||
}
|
||||
|
||||
func (ed *ExtensionDesc) repeated() bool {
|
||||
@ -587,6 +588,9 @@ func ExtensionDescs(pb Message) ([]*ExtensionDesc, error) {
|
||||
registeredExtensions := RegisteredExtensions(pb)
|
||||
|
||||
emap, mu := epb.extensionsRead()
|
||||
if emap == nil {
|
||||
return nil, nil
|
||||
}
|
||||
mu.Lock()
|
||||
defer mu.Unlock()
|
||||
extensions := make([]*ExtensionDesc, 0, len(emap))
|
||||
|
cmd/vendor/github.com/gogo/protobuf/proto/lib.go (2 changes, generated, vendored)
@@ -308,7 +308,7 @@ func GetStats() Stats { return stats }
|
||||
// temporary Buffer and are fine for most applications.
|
||||
type Buffer struct {
|
||||
buf []byte // encode/decode byte stream
|
||||
index int // write point
|
||||
index int // read point
|
||||
|
||||
// pools of basic types to amortize allocation.
|
||||
bools []bool
|
||||
|
cmd/vendor/github.com/gogo/protobuf/proto/pointer_reflect_gogo.go (85 lines, generated, vendored, new file)
@@ -0,0 +1,85 @@
|
||||
// Protocol Buffers for Go with Gadgets
|
||||
//
|
||||
// Copyright (c) 2016, The GoGo Authors. All rights reserved.
|
||||
// http://github.com/gogo/protobuf
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
//
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
//
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
// +build appengine js
|
||||
|
||||
package proto
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
)
|
||||
|
||||
func structPointer_FieldPointer(p structPointer, f field) structPointer {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func appendStructPointer(base structPointer, f field, typ reflect.Type) structPointer {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func structPointer_InterfaceAt(p structPointer, f field, t reflect.Type) interface{} {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func structPointer_InterfaceRef(p structPointer, f field, t reflect.Type) interface{} {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func structPointer_GetRefStructPointer(p structPointer, f field) structPointer {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func structPointer_Add(p structPointer, size field) structPointer {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func structPointer_Len(p structPointer, f field) int {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func structPointer_GetSliceHeader(p structPointer, f field) *reflect.SliceHeader {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func structPointer_Copy(oldptr structPointer, newptr structPointer, size int) {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func structPointer_StructRefSlice(p structPointer, f field, size uintptr) *structRefSlice {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
type structRefSlice struct{}
|
||||
|
||||
func (v *structRefSlice) Len() int {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func (v *structRefSlice) Index(i int) structPointer {
|
||||
panic("not implemented")
|
||||
}
|
cmd/vendor/github.com/gogo/protobuf/proto/pointer_unsafe_gogo.go (23 changes, generated, vendored)
@@ -26,7 +26,7 @@
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
// +build !appengine
|
||||
// +build !appengine,!js
|
||||
|
||||
// This file contains the implementation of the proto field accesses using package unsafe.
|
||||
|
||||
@ -105,3 +105,24 @@ func structPointer_Add(p structPointer, size field) structPointer {
|
||||
func structPointer_Len(p structPointer, f field) int {
|
||||
return len(*(*[]interface{})(unsafe.Pointer(structPointer_GetRefStructPointer(p, f))))
|
||||
}
|
||||
|
||||
func structPointer_StructRefSlice(p structPointer, f field, size uintptr) *structRefSlice {
|
||||
return &structRefSlice{p: p, f: f, size: size}
|
||||
}
|
||||
|
||||
// A structRefSlice represents a slice of structs (themselves submessages or groups).
|
||||
type structRefSlice struct {
|
||||
p structPointer
|
||||
f field
|
||||
size uintptr
|
||||
}
|
||||
|
||||
func (v *structRefSlice) Len() int {
|
||||
return structPointer_Len(v.p, v.f)
|
||||
}
|
||||
|
||||
func (v *structRefSlice) Index(i int) structPointer {
|
||||
ss := structPointer_GetStructPointer(v.p, v.f)
|
||||
ss1 := structPointer_GetRefStructPointer(ss, 0)
|
||||
return structPointer_Add(ss1, field(uintptr(i)*v.size))
|
||||
}
|
||||
|
cmd/vendor/github.com/gogo/protobuf/proto/properties.go (40 changes, generated, vendored)
@@ -190,10 +190,11 @@ type Properties struct {
|
||||
proto3 bool // whether this is known to be a proto3 field; set for []byte only
|
||||
oneof bool // whether this is a oneof field
|
||||
|
||||
Default string // default value
|
||||
HasDefault bool // whether an explicit default was provided
|
||||
CustomType string
|
||||
def_uint64 uint64
|
||||
Default string // default value
|
||||
HasDefault bool // whether an explicit default was provided
|
||||
CustomType string
|
||||
StdTime bool
|
||||
StdDuration bool
|
||||
|
||||
enc encoder
|
||||
valEnc valueEncoder // set for bool and numeric types only
|
||||
@ -340,6 +341,10 @@ func (p *Properties) Parse(s string) {
|
||||
p.OrigName = strings.Split(f, "=")[1]
|
||||
case strings.HasPrefix(f, "customtype="):
|
||||
p.CustomType = strings.Split(f, "=")[1]
|
||||
case f == "stdtime":
|
||||
p.StdTime = true
|
||||
case f == "stdduration":
|
||||
p.StdDuration = true
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -355,11 +360,22 @@ func (p *Properties) setEncAndDec(typ reflect.Type, f *reflect.StructField, lock
|
||||
p.enc = nil
|
||||
p.dec = nil
|
||||
p.size = nil
|
||||
if len(p.CustomType) > 0 {
|
||||
isMap := typ.Kind() == reflect.Map
|
||||
if len(p.CustomType) > 0 && !isMap {
|
||||
p.setCustomEncAndDec(typ)
|
||||
p.setTag(lockGetProp)
|
||||
return
|
||||
}
|
||||
if p.StdTime && !isMap {
|
||||
p.setTimeEncAndDec(typ)
|
||||
p.setTag(lockGetProp)
|
||||
return
|
||||
}
|
||||
if p.StdDuration && !isMap {
|
||||
p.setDurationEncAndDec(typ)
|
||||
p.setTag(lockGetProp)
|
||||
return
|
||||
}
|
||||
switch t1 := typ; t1.Kind() {
|
||||
default:
|
||||
fmt.Fprintf(os.Stderr, "proto: no coders for %v\n", t1)
|
||||
@ -630,6 +646,10 @@ func (p *Properties) setEncAndDec(typ reflect.Type, f *reflect.StructField, lock
|
||||
// so we need encoders for the pointer to this type.
|
||||
vtype = reflect.PtrTo(vtype)
|
||||
}
|
||||
|
||||
p.mvalprop.CustomType = p.CustomType
|
||||
p.mvalprop.StdDuration = p.StdDuration
|
||||
p.mvalprop.StdTime = p.StdTime
|
||||
p.mvalprop.init(vtype, "Value", f.Tag.Get("protobuf_val"), nil, lockGetProp)
|
||||
}
|
||||
p.setTag(lockGetProp)
|
||||
@ -920,7 +940,15 @@ func RegisterType(x Message, name string) {
|
||||
}
|
||||
|
||||
// MessageName returns the fully-qualified proto name for the given message type.
|
||||
func MessageName(x Message) string { return revProtoTypes[reflect.TypeOf(x)] }
|
||||
func MessageName(x Message) string {
|
||||
type xname interface {
|
||||
XXX_MessageName() string
|
||||
}
|
||||
if m, ok := x.(xname); ok {
|
||||
return m.XXX_MessageName()
|
||||
}
|
||||
return revProtoTypes[reflect.TypeOf(x)]
|
||||
}
|
||||
|
||||
// MessageType returns the message type (pointer to struct) for a named message.
|
||||
func MessageType(name string) reflect.Type { return protoTypes[name] }
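The "stdtime" and "stdduration" options parsed above let a field hold time.Time or time.Duration directly. A hedged sketch of how the option surfaces in Properties (the message type is hand-written for illustration; the import path assumes upstream gogo/protobuf rather than the vendored copy):

package main

import (
	"fmt"
	"reflect"
	"time"

	"github.com/gogo/protobuf/proto"
)

// job stands in for a generated message that uses the stdduration option.
type job struct {
	Timeout time.Duration `protobuf:"bytes,1,opt,name=timeout,stdduration"`
}

func (m *job) Reset()         { *m = job{} }
func (m *job) String() string { return "job" }
func (*job) ProtoMessage()    {}

func main() {
	props := proto.GetProperties(reflect.TypeOf(job{}))
	// Parse saw the stdduration option, so the duration codecs above are used.
	fmt.Println(props.Prop[0].StdDuration) // true
}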
|
||||
|
cmd/vendor/github.com/gogo/protobuf/proto/properties_gogo.go (45 changes, generated, vendored)
@@ -51,6 +51,51 @@ func (p *Properties) setCustomEncAndDec(typ reflect.Type) {
|
||||
}
|
||||
}
|
||||
|
||||
func (p *Properties) setDurationEncAndDec(typ reflect.Type) {
|
||||
if p.Repeated {
|
||||
if typ.Elem().Kind() == reflect.Ptr {
|
||||
p.enc = (*Buffer).enc_slice_duration
|
||||
p.dec = (*Buffer).dec_slice_duration
|
||||
p.size = size_slice_duration
|
||||
} else {
|
||||
p.enc = (*Buffer).enc_slice_ref_duration
|
||||
p.dec = (*Buffer).dec_slice_ref_duration
|
||||
p.size = size_slice_ref_duration
|
||||
}
|
||||
} else if typ.Kind() == reflect.Ptr {
|
||||
p.enc = (*Buffer).enc_duration
|
||||
p.dec = (*Buffer).dec_duration
|
||||
p.size = size_duration
|
||||
} else {
|
||||
p.enc = (*Buffer).enc_ref_duration
|
||||
p.dec = (*Buffer).dec_ref_duration
|
||||
p.size = size_ref_duration
|
||||
}
|
||||
}
|
||||
|
||||
func (p *Properties) setTimeEncAndDec(typ reflect.Type) {
|
||||
if p.Repeated {
|
||||
if typ.Elem().Kind() == reflect.Ptr {
|
||||
p.enc = (*Buffer).enc_slice_time
|
||||
p.dec = (*Buffer).dec_slice_time
|
||||
p.size = size_slice_time
|
||||
} else {
|
||||
p.enc = (*Buffer).enc_slice_ref_time
|
||||
p.dec = (*Buffer).dec_slice_ref_time
|
||||
p.size = size_slice_ref_time
|
||||
}
|
||||
} else if typ.Kind() == reflect.Ptr {
|
||||
p.enc = (*Buffer).enc_time
|
||||
p.dec = (*Buffer).dec_time
|
||||
p.size = size_time
|
||||
} else {
|
||||
p.enc = (*Buffer).enc_ref_time
|
||||
p.dec = (*Buffer).dec_ref_time
|
||||
p.size = size_ref_time
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
func (p *Properties) setSliceOfNonPointerStructs(typ reflect.Type) {
|
||||
t2 := typ.Elem()
|
||||
p.sstype = typ
|
||||
|
cmd/vendor/github.com/gogo/protobuf/proto/text.go (177 changes, generated, vendored)
@@ -51,6 +51,7 @@ import (
|
||||
"sort"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
var (
|
||||
@ -181,7 +182,93 @@ type raw interface {
|
||||
Bytes() []byte
|
||||
}
|
||||
|
||||
func writeStruct(w *textWriter, sv reflect.Value) error {
|
||||
func requiresQuotes(u string) bool {
|
||||
// When type URL contains any characters except [0-9A-Za-z./\-]*, it must be quoted.
|
||||
for _, ch := range u {
|
||||
switch {
|
||||
case ch == '.' || ch == '/' || ch == '_':
|
||||
continue
|
||||
case '0' <= ch && ch <= '9':
|
||||
continue
|
||||
case 'A' <= ch && ch <= 'Z':
|
||||
continue
|
||||
case 'a' <= ch && ch <= 'z':
|
||||
continue
|
||||
default:
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// isAny reports whether sv is a google.protobuf.Any message
|
||||
func isAny(sv reflect.Value) bool {
|
||||
type wkt interface {
|
||||
XXX_WellKnownType() string
|
||||
}
|
||||
t, ok := sv.Addr().Interface().(wkt)
|
||||
return ok && t.XXX_WellKnownType() == "Any"
|
||||
}
|
||||
|
||||
// writeProto3Any writes an expanded google.protobuf.Any message.
|
||||
//
|
||||
// It returns (false, nil) if sv value can't be unmarshaled (e.g. because
|
||||
// required messages are not linked in).
|
||||
//
|
||||
// It returns (true, error) when sv was written in expanded format or an error
|
||||
// was encountered.
|
||||
func (tm *TextMarshaler) writeProto3Any(w *textWriter, sv reflect.Value) (bool, error) {
|
||||
turl := sv.FieldByName("TypeUrl")
|
||||
val := sv.FieldByName("Value")
|
||||
if !turl.IsValid() || !val.IsValid() {
|
||||
return true, errors.New("proto: invalid google.protobuf.Any message")
|
||||
}
|
||||
|
||||
b, ok := val.Interface().([]byte)
|
||||
if !ok {
|
||||
return true, errors.New("proto: invalid google.protobuf.Any message")
|
||||
}
|
||||
|
||||
parts := strings.Split(turl.String(), "/")
|
||||
mt := MessageType(parts[len(parts)-1])
|
||||
if mt == nil {
|
||||
return false, nil
|
||||
}
|
||||
m := reflect.New(mt.Elem())
|
||||
if err := Unmarshal(b, m.Interface().(Message)); err != nil {
|
||||
return false, nil
|
||||
}
|
||||
w.Write([]byte("["))
|
||||
u := turl.String()
|
||||
if requiresQuotes(u) {
|
||||
writeString(w, u)
|
||||
} else {
|
||||
w.Write([]byte(u))
|
||||
}
|
||||
if w.compact {
|
||||
w.Write([]byte("]:<"))
|
||||
} else {
|
||||
w.Write([]byte("]: <\n"))
|
||||
w.ind++
|
||||
}
|
||||
if err := tm.writeStruct(w, m.Elem()); err != nil {
|
||||
return true, err
|
||||
}
|
||||
if w.compact {
|
||||
w.Write([]byte("> "))
|
||||
} else {
|
||||
w.ind--
|
||||
w.Write([]byte(">\n"))
|
||||
}
|
||||
return true, nil
|
||||
}
|
||||
|
||||
func (tm *TextMarshaler) writeStruct(w *textWriter, sv reflect.Value) error {
|
||||
if tm.ExpandAny && isAny(sv) {
|
||||
if canExpand, err := tm.writeProto3Any(w, sv); canExpand {
|
||||
return err
|
||||
}
|
||||
}
|
||||
st := sv.Type()
|
||||
sprops := GetProperties(st)
|
||||
for i := 0; i < sv.NumField(); i++ {
|
||||
@ -234,10 +321,10 @@ func writeStruct(w *textWriter, sv reflect.Value) error {
|
||||
continue
|
||||
}
|
||||
if len(props.Enum) > 0 {
|
||||
if err := writeEnum(w, v, props); err != nil {
|
||||
if err := tm.writeEnum(w, v, props); err != nil {
|
||||
return err
|
||||
}
|
||||
} else if err := writeAny(w, v, props); err != nil {
|
||||
} else if err := tm.writeAny(w, v, props); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := w.WriteByte('\n'); err != nil {
|
||||
@ -279,7 +366,7 @@ func writeStruct(w *textWriter, sv reflect.Value) error {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if err := writeAny(w, key, props.mkeyprop); err != nil {
|
||||
if err := tm.writeAny(w, key, props.mkeyprop); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := w.WriteByte('\n'); err != nil {
|
||||
@ -296,7 +383,7 @@ func writeStruct(w *textWriter, sv reflect.Value) error {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if err := writeAny(w, val, props.mvalprop); err != nil {
|
||||
if err := tm.writeAny(w, val, props.mvalprop); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := w.WriteByte('\n'); err != nil {
|
||||
@ -368,10 +455,10 @@ func writeStruct(w *textWriter, sv reflect.Value) error {
|
||||
}
|
||||
|
||||
if len(props.Enum) > 0 {
|
||||
if err := writeEnum(w, fv, props); err != nil {
|
||||
if err := tm.writeEnum(w, fv, props); err != nil {
|
||||
return err
|
||||
}
|
||||
} else if err := writeAny(w, fv, props); err != nil {
|
||||
} else if err := tm.writeAny(w, fv, props); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
@ -389,7 +476,7 @@ func writeStruct(w *textWriter, sv reflect.Value) error {
|
||||
pv.Elem().Set(sv)
|
||||
}
|
||||
if pv.Type().Implements(extensionRangeType) {
|
||||
if err := writeExtensions(w, pv); err != nil {
|
||||
if err := tm.writeExtensions(w, pv); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
@ -419,20 +506,45 @@ func writeRaw(w *textWriter, b []byte) error {
|
||||
}
|
||||
|
||||
// writeAny writes an arbitrary field.
|
||||
func writeAny(w *textWriter, v reflect.Value, props *Properties) error {
|
||||
func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Properties) error {
|
||||
v = reflect.Indirect(v)
|
||||
|
||||
if props != nil && len(props.CustomType) > 0 {
|
||||
custom, ok := v.Interface().(Marshaler)
|
||||
if ok {
|
||||
data, err := custom.Marshal()
|
||||
if props != nil {
|
||||
if len(props.CustomType) > 0 {
|
||||
custom, ok := v.Interface().(Marshaler)
|
||||
if ok {
|
||||
data, err := custom.Marshal()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if err := writeString(w, string(data)); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
} else if props.StdTime {
|
||||
t, ok := v.Interface().(time.Time)
|
||||
if !ok {
|
||||
return fmt.Errorf("stdtime is not time.Time, but %T", v.Interface())
|
||||
}
|
||||
tproto, err := timestampProto(t)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if err := writeString(w, string(data)); err != nil {
|
||||
return err
|
||||
props.StdTime = false
|
||||
err = tm.writeAny(w, reflect.ValueOf(tproto), props)
|
||||
props.StdTime = true
|
||||
return err
|
||||
} else if props.StdDuration {
|
||||
d, ok := v.Interface().(time.Duration)
|
||||
if !ok {
|
||||
return fmt.Errorf("stdtime is not time.Duration, but %T", v.Interface())
|
||||
}
|
||||
return nil
|
||||
dproto := durationProto(d)
|
||||
props.StdDuration = false
|
||||
err := tm.writeAny(w, reflect.ValueOf(dproto), props)
|
||||
props.StdDuration = true
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
@ -482,15 +594,15 @@ func writeAny(w *textWriter, v reflect.Value, props *Properties) error {
|
||||
}
|
||||
}
|
||||
w.indent()
|
||||
if tm, ok := v.Interface().(encoding.TextMarshaler); ok {
|
||||
text, err := tm.MarshalText()
|
||||
if etm, ok := v.Interface().(encoding.TextMarshaler); ok {
|
||||
text, err := etm.MarshalText()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if _, err = w.Write(text); err != nil {
|
||||
return err
|
||||
}
|
||||
} else if err := writeStruct(w, v); err != nil {
|
||||
} else if err := tm.writeStruct(w, v); err != nil {
|
||||
return err
|
||||
}
|
||||
w.unindent()
|
||||
@ -634,7 +746,7 @@ func (s int32Slice) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
|
||||
|
||||
// writeExtensions writes all the extensions in pv.
|
||||
// pv is assumed to be a pointer to a protocol message struct that is extendable.
|
||||
func writeExtensions(w *textWriter, pv reflect.Value) error {
|
||||
func (tm *TextMarshaler) writeExtensions(w *textWriter, pv reflect.Value) error {
|
||||
emap := extensionMaps[pv.Type().Elem()]
|
||||
e := pv.Interface().(Message)
|
||||
|
||||
@ -689,13 +801,13 @@ func writeExtensions(w *textWriter, pv reflect.Value) error {
|
||||
|
||||
// Repeated extensions will appear as a slice.
|
||||
if !desc.repeated() {
|
||||
if err := writeExtension(w, desc.Name, pb); err != nil {
|
||||
if err := tm.writeExtension(w, desc.Name, pb); err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
v := reflect.ValueOf(pb)
|
||||
for i := 0; i < v.Len(); i++ {
|
||||
if err := writeExtension(w, desc.Name, v.Index(i).Interface()); err != nil {
|
||||
if err := tm.writeExtension(w, desc.Name, v.Index(i).Interface()); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
@ -704,7 +816,7 @@ func writeExtensions(w *textWriter, pv reflect.Value) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func writeExtension(w *textWriter, name string, pb interface{}) error {
|
||||
func (tm *TextMarshaler) writeExtension(w *textWriter, name string, pb interface{}) error {
|
||||
if _, err := fmt.Fprintf(w, "[%s]:", name); err != nil {
|
||||
return err
|
||||
}
|
||||
@ -713,7 +825,7 @@ func writeExtension(w *textWriter, name string, pb interface{}) error {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if err := writeAny(w, reflect.ValueOf(pb), nil); err != nil {
|
||||
if err := tm.writeAny(w, reflect.ValueOf(pb), nil); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := w.WriteByte('\n'); err != nil {
|
||||
@ -740,12 +852,13 @@ func (w *textWriter) writeIndent() {
|
||||
|
||||
// TextMarshaler is a configurable text format marshaler.
|
||||
type TextMarshaler struct {
|
||||
Compact bool // use compact text format (one line).
|
||||
Compact bool // use compact text format (one line).
|
||||
ExpandAny bool // expand google.protobuf.Any messages of known types
|
||||
}
|
||||
|
||||
// Marshal writes a given protocol buffer in text format.
|
||||
// The only errors returned are from w.
|
||||
func (m *TextMarshaler) Marshal(w io.Writer, pb Message) error {
|
||||
func (tm *TextMarshaler) Marshal(w io.Writer, pb Message) error {
|
||||
val := reflect.ValueOf(pb)
|
||||
if pb == nil || val.IsNil() {
|
||||
w.Write([]byte("<nil>"))
|
||||
@ -760,11 +873,11 @@ func (m *TextMarshaler) Marshal(w io.Writer, pb Message) error {
|
||||
aw := &textWriter{
|
||||
w: ww,
|
||||
complete: true,
|
||||
compact: m.Compact,
|
||||
compact: tm.Compact,
|
||||
}
|
||||
|
||||
if tm, ok := pb.(encoding.TextMarshaler); ok {
|
||||
text, err := tm.MarshalText()
|
||||
if etm, ok := pb.(encoding.TextMarshaler); ok {
|
||||
text, err := etm.MarshalText()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@ -778,7 +891,7 @@ func (m *TextMarshaler) Marshal(w io.Writer, pb Message) error {
|
||||
}
|
||||
// Dereference the received pointer so we don't have outer < and >.
|
||||
v := reflect.Indirect(val)
|
||||
if err := writeStruct(aw, v); err != nil {
|
||||
if err := tm.writeStruct(aw, v); err != nil {
|
||||
return err
|
||||
}
|
||||
if bw != nil {
|
||||
@ -788,9 +901,9 @@ func (m *TextMarshaler) Marshal(w io.Writer, pb Message) error {
|
||||
}
|
||||
|
||||
// Text is the same as Marshal, but returns the string directly.
|
||||
func (m *TextMarshaler) Text(pb Message) string {
|
||||
func (tm *TextMarshaler) Text(pb Message) string {
|
||||
var buf bytes.Buffer
|
||||
m.Marshal(&buf, pb)
|
||||
tm.Marshal(&buf, pb)
|
||||
return buf.String()
|
||||
}
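A hedged usage sketch of the configurable marshaler shown above; the message type is hand-written for the example, the import path assumes upstream gogo/protobuf, and the exact output spacing may differ:

package main

import (
	"fmt"

	"github.com/gogo/protobuf/proto"
)

// note stands in for a generated message.
type note struct {
	Text string `protobuf:"bytes,1,opt,name=text"`
}

func (m *note) Reset()         { *m = note{} }
func (m *note) String() string { return "note" }
func (*note) ProtoMessage()    {}

func main() {
	// Compact keeps the output on one line; ExpandAny inlines Any messages
	// whose concrete types are linked into the binary.
	tm := &proto.TextMarshaler{Compact: true, ExpandAny: true}
	fmt.Println(tm.Text(&note{Text: "hi"})) // prints something like: text:"hi"
}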
|
||||
|
||||
|
cmd/vendor/github.com/gogo/protobuf/proto/text_gogo.go (6 changes, generated, vendored)
@@ -33,10 +33,10 @@ import (
|
||||
"reflect"
|
||||
)
|
||||
|
||||
func writeEnum(w *textWriter, v reflect.Value, props *Properties) error {
|
||||
func (tm *TextMarshaler) writeEnum(w *textWriter, v reflect.Value, props *Properties) error {
|
||||
m, ok := enumStringMaps[props.Enum]
|
||||
if !ok {
|
||||
if err := writeAny(w, v, props); err != nil {
|
||||
if err := tm.writeAny(w, v, props); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
@ -48,7 +48,7 @@ func writeEnum(w *textWriter, v reflect.Value, props *Properties) error {
|
||||
}
|
||||
s, ok := m[key]
|
||||
if !ok {
|
||||
if err := writeAny(w, v, props); err != nil {
|
||||
if err := tm.writeAny(w, v, props); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
cmd/vendor/github.com/gogo/protobuf/proto/text_parser.go (195 changes, generated, vendored)
@@ -46,9 +46,13 @@ import (
|
||||
"reflect"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
"unicode/utf8"
|
||||
)
|
||||
|
||||
// Error string emitted when deserializing Any and fields are already set
|
||||
const anyRepeatedlyUnpacked = "Any message unpacked multiple times, or %q already set"
|
||||
|
||||
type ParseError struct {
|
||||
Message string
|
||||
Line int // 1-based line number
|
||||
@ -168,7 +172,7 @@ func (p *textParser) advance() {
|
||||
p.cur.offset, p.cur.line = p.offset, p.line
|
||||
p.cur.unquoted = ""
|
||||
switch p.s[0] {
|
||||
case '<', '>', '{', '}', ':', '[', ']', ';', ',':
|
||||
case '<', '>', '{', '}', ':', '[', ']', ';', ',', '/':
|
||||
// Single symbol
|
||||
p.cur.value, p.s = p.s[0:1], p.s[1:len(p.s)]
|
||||
case '"', '\'':
|
||||
@ -456,7 +460,10 @@ func (p *textParser) readStruct(sv reflect.Value, terminator string) error {
|
||||
fieldSet := make(map[string]bool)
|
||||
// A struct is a sequence of "name: value", terminated by one of
|
||||
// '>' or '}', or the end of the input. A name may also be
|
||||
// "[extension]".
|
||||
// "[extension]" or "[type/url]".
|
||||
//
|
||||
// The whole struct can also be an expanded Any message, like:
|
||||
// [type/url] < ... struct contents ... >
|
||||
for {
|
||||
tok := p.next()
|
||||
if tok.err != nil {
|
||||
@ -466,33 +473,74 @@ func (p *textParser) readStruct(sv reflect.Value, terminator string) error {
|
||||
break
|
||||
}
|
||||
if tok.value == "[" {
|
||||
// Looks like an extension.
|
||||
// Looks like an extension or an Any.
|
||||
//
|
||||
// TODO: Check whether we need to handle
|
||||
// namespace rooted names (e.g. ".something.Foo").
|
||||
tok = p.next()
|
||||
if tok.err != nil {
|
||||
return tok.err
|
||||
extName, err := p.consumeExtName()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if s := strings.LastIndex(extName, "/"); s >= 0 {
|
||||
// If it contains a slash, it's an Any type URL.
|
||||
messageName := extName[s+1:]
|
||||
mt := MessageType(messageName)
|
||||
if mt == nil {
|
||||
return p.errorf("unrecognized message %q in google.protobuf.Any", messageName)
|
||||
}
|
||||
tok = p.next()
|
||||
if tok.err != nil {
|
||||
return tok.err
|
||||
}
|
||||
// consume an optional colon
|
||||
if tok.value == ":" {
|
||||
tok = p.next()
|
||||
if tok.err != nil {
|
||||
return tok.err
|
||||
}
|
||||
}
|
||||
var terminator string
|
||||
switch tok.value {
|
||||
case "<":
|
||||
terminator = ">"
|
||||
case "{":
|
||||
terminator = "}"
|
||||
default:
|
||||
return p.errorf("expected '{' or '<', found %q", tok.value)
|
||||
}
|
||||
v := reflect.New(mt.Elem())
|
||||
if pe := p.readStruct(v.Elem(), terminator); pe != nil {
|
||||
return pe
|
||||
}
|
||||
b, err := Marshal(v.Interface().(Message))
|
||||
if err != nil {
|
||||
return p.errorf("failed to marshal message of type %q: %v", messageName, err)
|
||||
}
|
||||
if fieldSet["type_url"] {
|
||||
return p.errorf(anyRepeatedlyUnpacked, "type_url")
|
||||
}
|
||||
if fieldSet["value"] {
|
||||
return p.errorf(anyRepeatedlyUnpacked, "value")
|
||||
}
|
||||
sv.FieldByName("TypeUrl").SetString(extName)
|
||||
sv.FieldByName("Value").SetBytes(b)
|
||||
fieldSet["type_url"] = true
|
||||
fieldSet["value"] = true
|
||||
continue
|
||||
}
|
||||
|
||||
var desc *ExtensionDesc
|
||||
// This could be faster, but it's functional.
|
||||
// TODO: Do something smarter than a linear scan.
|
||||
for _, d := range RegisteredExtensions(reflect.New(st).Interface().(Message)) {
|
||||
if d.Name == tok.value {
|
||||
if d.Name == extName {
|
||||
desc = d
|
||||
break
|
||||
}
|
||||
}
|
||||
if desc == nil {
|
||||
return p.errorf("unrecognized extension %q", tok.value)
|
||||
}
|
||||
// Check the extension terminator.
|
||||
tok = p.next()
|
||||
if tok.err != nil {
|
||||
return tok.err
|
||||
}
|
||||
if tok.value != "]" {
|
||||
return p.errorf("unrecognized extension terminator %q", tok.value)
|
||||
return p.errorf("unrecognized extension %q", extName)
|
||||
}
|
||||
|
||||
props := &Properties{}
|
||||
@ -550,7 +598,11 @@ func (p *textParser) readStruct(sv reflect.Value, terminator string) error {
|
||||
props = oop.Prop
|
||||
nv := reflect.New(oop.Type.Elem())
|
||||
dst = nv.Elem().Field(0)
|
||||
sv.Field(oop.Field).Set(nv)
|
||||
field := sv.Field(oop.Field)
|
||||
if !field.IsNil() {
|
||||
return p.errorf("field '%s' would overwrite already parsed oneof '%s'", name, sv.Type().Field(oop.Field).Name)
|
||||
}
|
||||
field.Set(nv)
|
||||
}
|
||||
if !dst.IsValid() {
|
||||
return p.errorf("unknown field name %q in %v", name, st)
|
||||
@ -657,6 +709,35 @@ func (p *textParser) readStruct(sv reflect.Value, terminator string) error {
|
||||
return reqFieldErr
|
||||
}
|
||||
|
||||
// consumeExtName consumes extension name or expanded Any type URL and the
|
||||
// following ']'. It returns the name or URL consumed.
|
||||
func (p *textParser) consumeExtName() (string, error) {
|
||||
tok := p.next()
|
||||
if tok.err != nil {
|
||||
return "", tok.err
|
||||
}
|
||||
|
||||
// If extension name or type url is quoted, it's a single token.
|
||||
if len(tok.value) > 2 && isQuote(tok.value[0]) && tok.value[len(tok.value)-1] == tok.value[0] {
|
||||
name, err := unquoteC(tok.value[1:len(tok.value)-1], rune(tok.value[0]))
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
return name, p.consumeToken("]")
|
||||
}
|
||||
|
||||
// Consume everything up to "]"
|
||||
var parts []string
|
||||
for tok.value != "]" {
|
||||
parts = append(parts, tok.value)
|
||||
tok = p.next()
|
||||
if tok.err != nil {
|
||||
return "", p.errorf("unrecognized type_url or extension name: %s", tok.err)
|
||||
}
|
||||
}
|
||||
return strings.Join(parts, ""), nil
|
||||
}
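For reference, the expanded-Any grammar consumed above corresponds to text-format input for an Any message of roughly the following shape (the type URL and inner field are illustrative; the named type must be linked in so MessageType can resolve it):

[type.googleapis.com/example.Inner] <
  value: 1
>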
|
||||
|
||||
// consumeOptionalSeparator consumes an optional semicolon or comma.
|
||||
// It is used in readStruct to provide backward compatibility.
|
||||
func (p *textParser) consumeOptionalSeparator() error {
|
||||
@ -717,6 +798,80 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {
|
||||
}
|
||||
return nil
|
||||
}
|
||||
if props.StdTime {
|
||||
fv := v
|
||||
p.back()
|
||||
props.StdTime = false
|
||||
tproto := &timestamp{}
|
||||
err := p.readAny(reflect.ValueOf(tproto).Elem(), props)
|
||||
props.StdTime = true
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
tim, err := timestampFromProto(tproto)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if props.Repeated {
|
||||
t := reflect.TypeOf(v.Interface())
|
||||
if t.Kind() == reflect.Slice {
|
||||
if t.Elem().Kind() == reflect.Ptr {
|
||||
ts := fv.Interface().([]*time.Time)
|
||||
ts = append(ts, &tim)
|
||||
fv.Set(reflect.ValueOf(ts))
|
||||
return nil
|
||||
} else {
|
||||
ts := fv.Interface().([]time.Time)
|
||||
ts = append(ts, tim)
|
||||
fv.Set(reflect.ValueOf(ts))
|
||||
return nil
|
||||
}
|
||||
}
|
||||
}
|
||||
if reflect.TypeOf(v.Interface()).Kind() == reflect.Ptr {
|
||||
v.Set(reflect.ValueOf(&tim))
|
||||
} else {
|
||||
v.Set(reflect.Indirect(reflect.ValueOf(&tim)))
|
||||
}
|
||||
return nil
|
||||
}
|
||||
if props.StdDuration {
|
||||
fv := v
|
||||
p.back()
|
||||
props.StdDuration = false
|
||||
dproto := &duration{}
|
||||
err := p.readAny(reflect.ValueOf(dproto).Elem(), props)
|
||||
props.StdDuration = true
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
dur, err := durationFromProto(dproto)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if props.Repeated {
|
||||
t := reflect.TypeOf(v.Interface())
|
||||
if t.Kind() == reflect.Slice {
|
||||
if t.Elem().Kind() == reflect.Ptr {
|
||||
ds := fv.Interface().([]*time.Duration)
|
||||
ds = append(ds, &dur)
|
||||
fv.Set(reflect.ValueOf(ds))
|
||||
return nil
|
||||
} else {
|
||||
ds := fv.Interface().([]time.Duration)
|
||||
ds = append(ds, dur)
|
||||
fv.Set(reflect.ValueOf(ds))
|
||||
return nil
|
||||
}
|
||||
}
|
||||
}
|
||||
if reflect.TypeOf(v.Interface()).Kind() == reflect.Ptr {
|
||||
v.Set(reflect.ValueOf(&dur))
|
||||
} else {
|
||||
v.Set(reflect.Indirect(reflect.ValueOf(&dur)))
|
||||
}
|
||||
return nil
|
||||
}
|
||||
switch fv := v; fv.Kind() {
|
||||
case reflect.Slice:
|
||||
at := v.Type()
|
||||
@ -759,12 +914,12 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {
|
||||
fv.Set(reflect.Append(fv, reflect.New(at.Elem()).Elem()))
|
||||
return p.readAny(fv.Index(fv.Len()-1), props)
|
||||
case reflect.Bool:
|
||||
// Either "true", "false", 1 or 0.
|
||||
// true/1/t/True or false/f/0/False.
|
||||
switch tok.value {
|
||||
case "true", "1":
|
||||
case "true", "1", "t", "True":
|
||||
fv.SetBool(true)
|
||||
return nil
|
||||
case "false", "0":
|
||||
case "false", "0", "f", "False":
|
||||
fv.SetBool(false)
|
||||
return nil
|
||||
}
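A hedged sketch exercising the relaxed bool literals accepted above, with a hand-written stand-in message (the import path assumes upstream gogo/protobuf):

package main

import (
	"fmt"

	"github.com/gogo/protobuf/proto"
)

// task stands in for a generated message with a bool field.
type task struct {
	Done bool `protobuf:"varint,1,opt,name=done"`
}

func (m *task) Reset()         { *m = task{} }
func (m *task) String() string { return "task" }
func (*task) ProtoMessage()    {}

func main() {
	m := &task{}
	// "t" is now accepted in addition to "true" and "1".
	err := proto.UnmarshalText(`done: t`, m)
	fmt.Println(m.Done, err) // true <nil>
}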
|
||||
|
cmd/vendor/github.com/gogo/protobuf/proto/timestamp.go (113 lines, generated, vendored, new file)
@@ -0,0 +1,113 @@
|
||||
// Go support for Protocol Buffers - Google's data interchange format
|
||||
//
|
||||
// Copyright 2016 The Go Authors. All rights reserved.
|
||||
// https://github.com/golang/protobuf
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
//
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
// * Neither the name of Google Inc. nor the names of its
|
||||
// contributors may be used to endorse or promote products derived from
|
||||
// this software without specific prior written permission.
|
||||
//
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
package proto
|
||||
|
||||
// This file implements operations on google.protobuf.Timestamp.
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"time"
|
||||
)
|
||||
|
||||
const (
|
||||
// Seconds field of the earliest valid Timestamp.
|
||||
// This is time.Date(1, 1, 1, 0, 0, 0, 0, time.UTC).Unix().
|
||||
minValidSeconds = -62135596800
|
||||
// Seconds field just after the latest valid Timestamp.
|
||||
// This is time.Date(10000, 1, 1, 0, 0, 0, 0, time.UTC).Unix().
|
||||
maxValidSeconds = 253402300800
|
||||
)
|
||||
|
||||
// validateTimestamp determines whether a Timestamp is valid.
|
||||
// A valid timestamp represents a time in the range
|
||||
// [0001-01-01, 10000-01-01) and has a Nanos field
|
||||
// in the range [0, 1e9).
|
||||
//
|
||||
// If the Timestamp is valid, validateTimestamp returns nil.
|
||||
// Otherwise, it returns an error that describes
|
||||
// the problem.
|
||||
//
|
||||
// Every valid Timestamp can be represented by a time.Time, but the converse is not true.
|
||||
func validateTimestamp(ts *timestamp) error {
|
||||
if ts == nil {
|
||||
return errors.New("timestamp: nil Timestamp")
|
||||
}
|
||||
if ts.Seconds < minValidSeconds {
|
||||
return fmt.Errorf("timestamp: %#v before 0001-01-01", ts)
|
||||
}
|
||||
if ts.Seconds >= maxValidSeconds {
|
||||
return fmt.Errorf("timestamp: %#v after 10000-01-01", ts)
|
||||
}
|
||||
if ts.Nanos < 0 || ts.Nanos >= 1e9 {
|
||||
return fmt.Errorf("timestamp: %#v: nanos not in range [0, 1e9)", ts)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// timestampFromProto converts a google.protobuf.Timestamp proto to a time.Time.
|
||||
// It returns an error if the argument is invalid.
|
||||
//
|
||||
// Unlike most Go functions, if timestampFromProto returns an error, the first return value
|
||||
// is not the zero time.Time. Instead, it is the value obtained from the
|
||||
// time.Unix function when passed the contents of the Timestamp, in the UTC
|
||||
// locale. This may or may not be a meaningful time; many invalid Timestamps
|
||||
// do map to valid time.Times.
|
||||
//
|
||||
// A nil Timestamp returns an error. The first return value in that case is
|
||||
// undefined.
|
||||
func timestampFromProto(ts *timestamp) (time.Time, error) {
|
||||
// Don't return the zero value on error, because it corresponds to a valid
|
||||
// timestamp. Instead return whatever time.Unix gives us.
|
||||
var t time.Time
|
||||
if ts == nil {
|
||||
t = time.Unix(0, 0).UTC() // treat nil like the empty Timestamp
|
||||
} else {
|
||||
t = time.Unix(ts.Seconds, int64(ts.Nanos)).UTC()
|
||||
}
|
||||
return t, validateTimestamp(ts)
|
||||
}
|
||||
|
||||
// timestampProto converts the time.Time to a google.protobuf.Timestamp proto.
|
||||
// It returns an error if the resulting Timestamp is invalid.
|
||||
func timestampProto(t time.Time) (*timestamp, error) {
|
||||
seconds := t.Unix()
|
||||
nanos := int32(t.Sub(time.Unix(seconds, 0)))
|
||||
ts := &timestamp{
|
||||
Seconds: seconds,
|
||||
Nanos: nanos,
|
||||
}
|
||||
if err := validateTimestamp(ts); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return ts, nil
|
||||
}
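A standalone sketch of the same seconds/nanos split timestampProto performs, using only the standard library (illustrative, not part of the vendored package):

package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Date(2017, 3, 1, 12, 0, 0, 250000000, time.UTC)
	secs := t.Unix()
	nanos := int32(t.Sub(time.Unix(secs, 0))) // sub-second remainder, always in [0, 1e9)
	back := time.Unix(secs, int64(nanos)).UTC()
	fmt.Println(secs, nanos, back.Equal(t)) // 1488369600 250000000 true
}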
|
cmd/vendor/github.com/gogo/protobuf/proto/timestamp_gogo.go (229 lines, generated, vendored, new file)
@@ -0,0 +1,229 @@
|
||||
// Protocol Buffers for Go with Gadgets
|
||||
//
|
||||
// Copyright (c) 2016, The GoGo Authors. All rights reserved.
|
||||
// http://github.com/gogo/protobuf
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
//
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
//
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
package proto
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"time"
|
||||
)
|
||||
|
||||
var timeType = reflect.TypeOf((*time.Time)(nil)).Elem()
|
||||
|
||||
type timestamp struct {
|
||||
Seconds int64 `protobuf:"varint,1,opt,name=seconds,proto3" json:"seconds,omitempty"`
|
||||
Nanos int32 `protobuf:"varint,2,opt,name=nanos,proto3" json:"nanos,omitempty"`
|
||||
}
|
||||
|
||||
func (m *timestamp) Reset() { *m = timestamp{} }
|
||||
func (*timestamp) ProtoMessage() {}
|
||||
func (*timestamp) String() string { return "timestamp<string>" }
|
||||
|
||||
func init() {
|
||||
RegisterType((*timestamp)(nil), "gogo.protobuf.proto.timestamp")
|
||||
}
|
||||
|
||||
func (o *Buffer) decTimestamp() (time.Time, error) {
|
||||
b, err := o.DecodeRawBytes(true)
|
||||
if err != nil {
|
||||
return time.Time{}, err
|
||||
}
|
||||
tproto := &timestamp{}
|
||||
if err := Unmarshal(b, tproto); err != nil {
|
||||
return time.Time{}, err
|
||||
}
|
||||
return timestampFromProto(tproto)
|
||||
}
|
||||
|
||||
func (o *Buffer) dec_time(p *Properties, base structPointer) error {
|
||||
t, err := o.decTimestamp()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
setPtrCustomType(base, p.field, &t)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (o *Buffer) dec_ref_time(p *Properties, base structPointer) error {
|
||||
t, err := o.decTimestamp()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
setCustomType(base, p.field, &t)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (o *Buffer) dec_slice_time(p *Properties, base structPointer) error {
|
||||
t, err := o.decTimestamp()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
newBas := appendStructPointer(base, p.field, reflect.SliceOf(reflect.PtrTo(timeType)))
|
||||
var zero field
|
||||
setPtrCustomType(newBas, zero, &t)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (o *Buffer) dec_slice_ref_time(p *Properties, base structPointer) error {
|
||||
t, err := o.decTimestamp()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
newBas := appendStructPointer(base, p.field, reflect.SliceOf(timeType))
|
||||
var zero field
|
||||
setCustomType(newBas, zero, &t)
|
||||
return nil
|
||||
}
|
||||
|
||||
func size_time(p *Properties, base structPointer) (n int) {
|
||||
structp := structPointer_GetStructPointer(base, p.field)
|
||||
if structPointer_IsNil(structp) {
|
||||
return 0
|
||||
}
|
||||
	tim := structPointer_Interface(structp, timeType).(*time.Time)
	t, err := timestampProto(*tim)
	if err != nil {
		return 0
	}
	size := Size(t)
	return size + sizeVarint(uint64(size)) + len(p.tagcode)
}

func (o *Buffer) enc_time(p *Properties, base structPointer) error {
	structp := structPointer_GetStructPointer(base, p.field)
	if structPointer_IsNil(structp) {
		return ErrNil
	}
	tim := structPointer_Interface(structp, timeType).(*time.Time)
	t, err := timestampProto(*tim)
	if err != nil {
		return err
	}
	data, err := Marshal(t)
	if err != nil {
		return err
	}
	o.buf = append(o.buf, p.tagcode...)
	o.EncodeRawBytes(data)
	return nil
}

func size_ref_time(p *Properties, base structPointer) (n int) {
	tim := structPointer_InterfaceAt(base, p.field, timeType).(*time.Time)
	t, err := timestampProto(*tim)
	if err != nil {
		return 0
	}
	size := Size(t)
	return size + sizeVarint(uint64(size)) + len(p.tagcode)
}

func (o *Buffer) enc_ref_time(p *Properties, base structPointer) error {
	tim := structPointer_InterfaceAt(base, p.field, timeType).(*time.Time)
	t, err := timestampProto(*tim)
	if err != nil {
		return err
	}
	data, err := Marshal(t)
	if err != nil {
		return err
	}
	o.buf = append(o.buf, p.tagcode...)
	o.EncodeRawBytes(data)
	return nil
}

func size_slice_time(p *Properties, base structPointer) (n int) {
	ptims := structPointer_InterfaceAt(base, p.field, reflect.SliceOf(reflect.PtrTo(timeType))).(*[]*time.Time)
	tims := *ptims
	for i := 0; i < len(tims); i++ {
		if tims[i] == nil {
			return 0
		}
		tproto, err := timestampProto(*tims[i])
		if err != nil {
			return 0
		}
		size := Size(tproto)
		n += len(p.tagcode) + size + sizeVarint(uint64(size))
	}
	return n
}

func (o *Buffer) enc_slice_time(p *Properties, base structPointer) error {
	ptims := structPointer_InterfaceAt(base, p.field, reflect.SliceOf(reflect.PtrTo(timeType))).(*[]*time.Time)
	tims := *ptims
	for i := 0; i < len(tims); i++ {
		if tims[i] == nil {
			return errRepeatedHasNil
		}
		tproto, err := timestampProto(*tims[i])
		if err != nil {
			return err
		}
		data, err := Marshal(tproto)
		if err != nil {
			return err
		}
		o.buf = append(o.buf, p.tagcode...)
		o.EncodeRawBytes(data)
	}
	return nil
}

func size_slice_ref_time(p *Properties, base structPointer) (n int) {
	ptims := structPointer_InterfaceAt(base, p.field, reflect.SliceOf(timeType)).(*[]time.Time)
	tims := *ptims
	for i := 0; i < len(tims); i++ {
		tproto, err := timestampProto(tims[i])
		if err != nil {
			return 0
		}
		size := Size(tproto)
		n += len(p.tagcode) + size + sizeVarint(uint64(size))
	}
	return n
}

func (o *Buffer) enc_slice_ref_time(p *Properties, base structPointer) error {
	ptims := structPointer_InterfaceAt(base, p.field, reflect.SliceOf(timeType)).(*[]time.Time)
	tims := *ptims
	for i := 0; i < len(tims); i++ {
		tproto, err := timestampProto(tims[i])
		if err != nil {
			return err
		}
		data, err := Marshal(tproto)
		if err != nil {
			return err
		}
		o.buf = append(o.buf, p.tagcode...)
		o.EncodeRawBytes(data)
	}
	return nil
}
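All of the size_*/enc_* helpers above share one wire-format rule: a time.Time field is first converted with timestampProto and then written as a length-delimited embedded message, so its cost is the tag bytes plus a varint length prefix plus the encoded payload. A standalone sketch of that arithmetic (sizeVarint is re-implemented here purely for illustration, not imported from the vendored package):

package main

import "fmt"

// sizeVarint mirrors the varint length calculation used by the size_* helpers above.
func sizeVarint(x uint64) (n int) {
	for {
		n++
		x >>= 7
		if x == 0 {
			return n
		}
	}
}

func main() {
	tagcode := 1 // e.g. one tag byte for a low field number with wire type 2 (bytes)
	payload := 6 // hypothetical size of the encoded Timestamp message
	total := tagcode + sizeVarint(uint64(payload)) + payload
	fmt.Println(total) // 8: tag + length prefix + payload
}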
260 cmd/vendor/github.com/golang/protobuf/jsonpb/jsonpb.go (generated, vendored)
@ -44,6 +44,7 @@ import (
	"errors"
	"fmt"
	"io"
	"math"
	"reflect"
	"sort"
	"strconv"
@ -51,6 +52,8 @@ import (
	"time"

	"github.com/golang/protobuf/proto"

	stpb "github.com/golang/protobuf/ptypes/struct"
)

// Marshaler is a configurable object for converting between
@ -72,6 +75,22 @@ type Marshaler struct {
	OrigName bool
}

// JSONPBMarshaler is implemented by protobuf messages that customize the
// way they are marshaled to JSON. Messages that implement this should
// also implement JSONPBUnmarshaler so that the custom format can be
// parsed.
type JSONPBMarshaler interface {
	MarshalJSONPB(*Marshaler) ([]byte, error)
}

// JSONPBUnmarshaler is implemented by protobuf messages that customize
// the way they are unmarshaled from JSON. Messages that implement this
// should also implement JSONPBMarshaler so that the custom format can be
// produced.
type JSONPBUnmarshaler interface {
	UnmarshalJSONPB(*Unmarshaler, []byte) error
}
// Marshal marshals a protocol buffer into JSON.
func (m *Marshaler) Marshal(out io.Writer, pb proto.Message) error {
	writer := &errWriter{writer: out}
@ -89,6 +108,12 @@ func (m *Marshaler) MarshalToString(pb proto.Message) (string, error) {

type int32Slice []int32

var nonFinite = map[string]float64{
	`"NaN"`:       math.NaN(),
	`"Infinity"`:  math.Inf(1),
	`"-Infinity"`: math.Inf(-1),
}

// For sorting extensions ids to ensure stable output.
func (s int32Slice) Len() int           { return len(s) }
func (s int32Slice) Less(i, j int) bool { return s[i] < s[j] }
@ -100,6 +125,31 @@ type wkt interface {

// marshalObject writes a struct to the Writer.
func (m *Marshaler) marshalObject(out *errWriter, v proto.Message, indent, typeURL string) error {
	if jsm, ok := v.(JSONPBMarshaler); ok {
		b, err := jsm.MarshalJSONPB(m)
		if err != nil {
			return err
		}
		if typeURL != "" {
			// we are marshaling this object to an Any type
			var js map[string]*json.RawMessage
			if err = json.Unmarshal(b, &js); err != nil {
				return fmt.Errorf("type %T produced invalid JSON: %v", v, err)
			}
			turl, err := json.Marshal(typeURL)
			if err != nil {
				return fmt.Errorf("failed to marshal type URL %q to JSON: %v", typeURL, err)
			}
			js["@type"] = (*json.RawMessage)(&turl)
			if b, err = json.Marshal(js); err != nil {
				return err
			}
		}

		out.write(string(b))
		return out.err
	}

	s := reflect.ValueOf(v).Elem()

	// Handle well-known types.
@ -126,8 +176,8 @@ func (m *Marshaler) marshalObject(out *errWriter, v proto.Message, indent, typeU
		out.write(x)
		out.write(`s"`)
		return out.err
	case "Struct":
		// Let marshalValue handle the `fields` map.
	case "Struct", "ListValue":
		// Let marshalValue handle the `Struct.fields` map or the `ListValue.values` slice.
		// TODO: pass the correct Properties if needed.
		return m.marshalValue(out, &proto.Properties{}, s.Field(0), indent)
	case "Timestamp":
@ -180,7 +230,7 @@ func (m *Marshaler) marshalObject(out *errWriter, v proto.Message, indent, typeU

		// IsNil will panic on most value kinds.
		switch value.Kind() {
		case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
		case reflect.Chan, reflect.Func, reflect.Interface:
			if value.IsNil() {
				continue
			}
@ -208,6 +258,10 @@ func (m *Marshaler) marshalObject(out *errWriter, v proto.Message, indent, typeU
			if value.Len() == 0 {
				continue
			}
		case reflect.Map, reflect.Ptr, reflect.Slice:
			if value.IsNil() {
				continue
			}
		}
	}
@ -371,10 +425,15 @@ func (m *Marshaler) marshalField(out *errWriter, prop *proto.Properties, v refle

// marshalValue writes the value to the Writer.
func (m *Marshaler) marshalValue(out *errWriter, prop *proto.Properties, v reflect.Value, indent string) error {

	var err error
	v = reflect.Indirect(v)

	// Handle nil pointer
	if v.Kind() == reflect.Invalid {
		out.write("null")
		return out.err
	}

	// Handle repeated elements.
	if v.Kind() == reflect.Slice && v.Type().Elem().Kind() != reflect.Uint8 {
		out.write("[")
@ -404,9 +463,6 @@ func (m *Marshaler) marshalValue(out *errWriter, prop *proto.Properties, v refle

	// Handle well-known types.
	// Most are handled up in marshalObject (because 99% are messages).
	type wkt interface {
		XXX_WellKnownType() string
	}
	if wkt, ok := v.Interface().(wkt); ok {
		switch wkt.XXX_WellKnownType() {
		case "NullValue":
@ -494,6 +550,24 @@ func (m *Marshaler) marshalValue(out *errWriter, prop *proto.Properties, v refle
		return out.err
	}

	// Handle non-finite floats, e.g. NaN, Infinity and -Infinity.
	if v.Kind() == reflect.Float32 || v.Kind() == reflect.Float64 {
		f := v.Float()
		var sval string
		switch {
		case math.IsInf(f, 1):
			sval = `"Infinity"`
		case math.IsInf(f, -1):
			sval = `"-Infinity"`
		case math.IsNaN(f):
			sval = `"NaN"`
		}
		if sval != "" {
			out.write(sval)
			return out.err
		}
	}

	// Default handling defers to the encoding/json library.
	b, err := json.Marshal(v.Interface())
	if err != nil {
@ -569,12 +643,13 @@ func (u *Unmarshaler) unmarshalValue(target reflect.Value, inputValue json.RawMe
		return u.unmarshalValue(target.Elem(), inputValue, prop)
	}

	// Handle well-known types.
	type wkt interface {
		XXX_WellKnownType() string
	if jsu, ok := target.Addr().Interface().(JSONPBUnmarshaler); ok {
		return jsu.UnmarshalJSONPB(u, []byte(inputValue))
	}
	if wkt, ok := target.Addr().Interface().(wkt); ok {
		switch wkt.XXX_WellKnownType() {

	// Handle well-known types.
	if w, ok := target.Addr().Interface().(wkt); ok {
		switch w.XXX_WellKnownType() {
		case "DoubleValue", "FloatValue", "Int64Value", "UInt64Value",
			"Int32Value", "UInt32Value", "BoolValue", "StringValue", "BytesValue":
			// "Wrappers use the same representation in JSON
@ -583,9 +658,72 @@ func (u *Unmarshaler) unmarshalValue(target reflect.Value, inputValue json.RawMe
			// so we don't have to do any extra work.
			return u.unmarshalValue(target.Field(0), inputValue, prop)
		case "Any":
			return fmt.Errorf("unmarshaling Any not supported yet")
			// Use json.RawMessage pointer type instead of value to support pre-1.8 version.
			// 1.8 changed RawMessage.MarshalJSON from pointer type to value type, see
			// https://github.com/golang/go/issues/14493
			var jsonFields map[string]*json.RawMessage
			if err := json.Unmarshal(inputValue, &jsonFields); err != nil {
				return err
			}

			val, ok := jsonFields["@type"]
			if !ok || val == nil {
				return errors.New("Any JSON doesn't have '@type'")
			}

			var turl string
			if err := json.Unmarshal([]byte(*val), &turl); err != nil {
				return fmt.Errorf("can't unmarshal Any's '@type': %q", *val)
			}
			target.Field(0).SetString(turl)

			mname := turl
			if slash := strings.LastIndex(mname, "/"); slash >= 0 {
				mname = mname[slash+1:]
			}
			mt := proto.MessageType(mname)
			if mt == nil {
				return fmt.Errorf("unknown message type %q", mname)
			}

			m := reflect.New(mt.Elem()).Interface().(proto.Message)
			if _, ok := m.(wkt); ok {
				val, ok := jsonFields["value"]
				if !ok {
					return errors.New("Any JSON doesn't have 'value'")
				}

				if err := u.unmarshalValue(reflect.ValueOf(m).Elem(), *val, nil); err != nil {
					return fmt.Errorf("can't unmarshal Any nested proto %T: %v", m, err)
				}
			} else {
				delete(jsonFields, "@type")
				nestedProto, err := json.Marshal(jsonFields)
				if err != nil {
					return fmt.Errorf("can't generate JSON for Any's nested proto to be unmarshaled: %v", err)
				}

				if err = u.unmarshalValue(reflect.ValueOf(m).Elem(), nestedProto, nil); err != nil {
					return fmt.Errorf("can't unmarshal Any nested proto %T: %v", m, err)
				}
			}

			b, err := proto.Marshal(m)
			if err != nil {
				return fmt.Errorf("can't marshal proto %T into Any.Value: %v", m, err)
			}
			target.Field(1).SetBytes(b)

			return nil
		case "Duration":
			unq, err := strconv.Unquote(string(inputValue))
			ivStr := string(inputValue)
			if ivStr == "null" {
				target.Field(0).SetInt(0)
				target.Field(1).SetInt(0)
				return nil
			}

			unq, err := strconv.Unquote(ivStr)
			if err != nil {
				return err
			}
@ -600,7 +738,14 @@ func (u *Unmarshaler) unmarshalValue(target reflect.Value, inputValue json.RawMe
			target.Field(1).SetInt(ns)
			return nil
		case "Timestamp":
			unq, err := strconv.Unquote(string(inputValue))
			ivStr := string(inputValue)
			if ivStr == "null" {
				target.Field(0).SetInt(0)
				target.Field(1).SetInt(0)
				return nil
			}

			unq, err := strconv.Unquote(ivStr)
			if err != nil {
				return err
			}
@ -611,6 +756,62 @@ func (u *Unmarshaler) unmarshalValue(target reflect.Value, inputValue json.RawMe
			target.Field(0).SetInt(int64(t.Unix()))
			target.Field(1).SetInt(int64(t.Nanosecond()))
			return nil
		case "Struct":
			if string(inputValue) == "null" {
				// Interpret a null struct as empty.
				return nil
			}
			var m map[string]json.RawMessage
			if err := json.Unmarshal(inputValue, &m); err != nil {
				return fmt.Errorf("bad StructValue: %v", err)
			}
			target.Field(0).Set(reflect.ValueOf(map[string]*stpb.Value{}))
			for k, jv := range m {
				pv := &stpb.Value{}
				if err := u.unmarshalValue(reflect.ValueOf(pv).Elem(), jv, prop); err != nil {
					return fmt.Errorf("bad value in StructValue for key %q: %v", k, err)
				}
				target.Field(0).SetMapIndex(reflect.ValueOf(k), reflect.ValueOf(pv))
			}
			return nil
		case "ListValue":
			if string(inputValue) == "null" {
				// Interpret a null ListValue as empty.
				return nil
			}
			var s []json.RawMessage
			if err := json.Unmarshal(inputValue, &s); err != nil {
				return fmt.Errorf("bad ListValue: %v", err)
			}
			target.Field(0).Set(reflect.ValueOf(make([]*stpb.Value, len(s), len(s))))
			for i, sv := range s {
				if err := u.unmarshalValue(target.Field(0).Index(i), sv, prop); err != nil {
					return err
				}
			}
			return nil
		case "Value":
			ivStr := string(inputValue)
			if ivStr == "null" {
				target.Field(0).Set(reflect.ValueOf(&stpb.Value_NullValue{}))
			} else if v, err := strconv.ParseFloat(ivStr, 0); err == nil {
				target.Field(0).Set(reflect.ValueOf(&stpb.Value_NumberValue{v}))
			} else if v, err := strconv.Unquote(ivStr); err == nil {
				target.Field(0).Set(reflect.ValueOf(&stpb.Value_StringValue{v}))
			} else if v, err := strconv.ParseBool(ivStr); err == nil {
				target.Field(0).Set(reflect.ValueOf(&stpb.Value_BoolValue{v}))
			} else if err := json.Unmarshal(inputValue, &[]json.RawMessage{}); err == nil {
				lv := &stpb.ListValue{}
				target.Field(0).Set(reflect.ValueOf(&stpb.Value_ListValue{lv}))
				return u.unmarshalValue(reflect.ValueOf(lv).Elem(), inputValue, prop)
			} else if err := json.Unmarshal(inputValue, &map[string]json.RawMessage{}); err == nil {
				sv := &stpb.Struct{}
				target.Field(0).Set(reflect.ValueOf(&stpb.Value_StructValue{sv}))
				return u.unmarshalValue(reflect.ValueOf(sv).Elem(), inputValue, prop)
			} else {
				return fmt.Errorf("unrecognized type for Value %q", ivStr)
			}
			return nil
		}
	}

@ -694,6 +895,26 @@ func (u *Unmarshaler) unmarshalValue(target reflect.Value, inputValue json.RawMe
			}
		}
	}
	// Handle proto2 extensions.
	if len(jsonFields) > 0 {
		if ep, ok := target.Addr().Interface().(proto.Message); ok {
			for _, ext := range proto.RegisteredExtensions(ep) {
				name := fmt.Sprintf("[%s]", ext.Name)
				raw, ok := jsonFields[name]
				if !ok {
					continue
				}
				delete(jsonFields, name)
				nv := reflect.New(reflect.TypeOf(ext.ExtensionType).Elem())
				if err := u.unmarshalValue(nv.Elem(), raw, nil); err != nil {
					return err
				}
				if err := proto.SetExtension(ep, ext, nv.Interface()); err != nil {
					return err
				}
			}
		}
	}
	if !u.AllowUnknownFields && len(jsonFields) > 0 {
		// Pick any field to be the scapegoat.
		var f string
@ -766,6 +987,15 @@ func (u *Unmarshaler) unmarshalValue(target reflect.Value, inputValue json.RawMe
		inputValue = inputValue[1 : len(inputValue)-1]
	}

	// Non-finite numbers can be encoded as strings.
	isFloat := targetType.Kind() == reflect.Float32 || targetType.Kind() == reflect.Float64
	if isFloat {
		if num, ok := nonFinite[string(inputValue)]; ok {
			target.SetFloat(num)
			return nil
		}
	}

	// Use the encoding/json for parsing other value types.
	return json.Unmarshal(inputValue, target.Addr().Interface())
}
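A minimal sketch of how the new JSONPBMarshaler/JSONPBUnmarshaler hooks are picked up by this jsonpb package. Celsius is a hand-written stand-in for a protoc-generated message (hypothetical type; only the method set matters here):

package main

import (
	"fmt"

	"github.com/golang/protobuf/jsonpb"
)

// Celsius pretends to be a generated message so jsonpb will accept it.
type Celsius struct{ Deg float64 }

func (c *Celsius) Reset()         { *c = Celsius{} }
func (c *Celsius) String() string { return fmt.Sprintf("%g C", c.Deg) }
func (*Celsius) ProtoMessage()    {}

// MarshalJSONPB emits a custom JSON form instead of the default field map.
func (c *Celsius) MarshalJSONPB(*jsonpb.Marshaler) ([]byte, error) {
	return []byte(`"` + fmt.Sprintf("%g C", c.Deg) + `"`), nil
}

// UnmarshalJSONPB parses the same custom form back into the message.
func (c *Celsius) UnmarshalJSONPB(_ *jsonpb.Unmarshaler, b []byte) error {
	_, err := fmt.Sscanf(string(b), `"%g C"`, &c.Deg)
	return err
}

func main() {
	m := &jsonpb.Marshaler{}
	s, _ := m.MarshalToString(&Celsius{Deg: 21.5}) // uses MarshalJSONPB
	fmt.Println(s)                                 // "21.5 C"

	var c Celsius
	_ = jsonpb.UnmarshalString(s, &c) // uses UnmarshalJSONPB
	fmt.Println(c.Deg)                // 21.5
}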
11 cmd/vendor/github.com/golang/protobuf/proto/encode.go (generated, vendored)
@ -1075,10 +1075,17 @@ func (o *Buffer) enc_map(p *Properties, base structPointer) error {

func (o *Buffer) enc_exts(p *Properties, base structPointer) error {
	exts := structPointer_Extensions(base, p.field)
	if err := encodeExtensions(exts); err != nil {

	v, mu := exts.extensionsRead()
	if v == nil {
		return nil
	}

	mu.Lock()
	defer mu.Unlock()
	if err := encodeExtensionsMap(v); err != nil {
		return err
	}
	v, _ := exts.extensionsRead()

	return o.enc_map_body(v)
}
1 cmd/vendor/github.com/golang/protobuf/proto/extensions.go (generated, vendored)
@ -154,6 +154,7 @@ type ExtensionDesc struct {
	Field         int32  // field number
	Name          string // fully-qualified name of extension, for text formatting
	Tag           string // protobuf tag style
	Filename      string // name of the file in which the extension is defined
}

func (ed *ExtensionDesc) repeated() bool {
1 cmd/vendor/github.com/golang/protobuf/proto/lib.go (generated, vendored)
@ -73,7 +73,6 @@ for a protocol buffer variable v:

When the .proto file specifies `syntax="proto3"`, there are some differences:

  - Non-repeated fields of non-message type are values instead of pointers.
  - Getters are only generated for message and oneof fields.
  - Enum types do not get an Enum method.

The simplest way to describe this is to see an example.
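A small hand-written sketch of what those proto3 rules mean for the generated Go code (hypothetical types, not produced from a real .proto):

package main

import "fmt"

// Nested and Msg imitate protoc-gen-go output for a proto3 message.
type Nested struct{ Id int32 }

type Msg struct {
	Name string  // proto3 scalar: a plain value, no pointer and no getter
	Info *Nested // message field: still a pointer, and a getter is generated
}

// GetInfo mirrors the nil-safe getter generated for message fields only.
func (m *Msg) GetInfo() *Nested {
	if m != nil {
		return m.Info
	}
	return nil
}

func main() {
	var m *Msg
	fmt.Println(m.GetInfo() == nil) // true: getters are safe on nil receivers
}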
168 cmd/vendor/github.com/golang/protobuf/ptypes/any/any.pb.go (generated, vendored, new file)
@ -0,0 +1,168 @@
|
||||
// Code generated by protoc-gen-go. DO NOT EDIT.
|
||||
// source: github.com/golang/protobuf/ptypes/any/any.proto
|
||||
|
||||
/*
|
||||
Package any is a generated protocol buffer package.
|
||||
|
||||
It is generated from these files:
|
||||
github.com/golang/protobuf/ptypes/any/any.proto
|
||||
|
||||
It has these top-level messages:
|
||||
Any
|
||||
*/
|
||||
package any
|
||||
|
||||
import proto "github.com/golang/protobuf/proto"
|
||||
import fmt "fmt"
|
||||
import math "math"
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
var _ = proto.Marshal
|
||||
var _ = fmt.Errorf
|
||||
var _ = math.Inf
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the proto package it is being compiled against.
|
||||
// A compilation error at this line likely means your copy of the
|
||||
// proto package needs to be updated.
|
||||
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
|
||||
|
||||
// `Any` contains an arbitrary serialized protocol buffer message along with a
|
||||
// URL that describes the type of the serialized message.
|
||||
//
|
||||
// Protobuf library provides support to pack/unpack Any values in the form
|
||||
// of utility functions or additional generated methods of the Any type.
|
||||
//
|
||||
// Example 1: Pack and unpack a message in C++.
|
||||
//
|
||||
// Foo foo = ...;
|
||||
// Any any;
|
||||
// any.PackFrom(foo);
|
||||
// ...
|
||||
// if (any.UnpackTo(&foo)) {
|
||||
// ...
|
||||
// }
|
||||
//
|
||||
// Example 2: Pack and unpack a message in Java.
|
||||
//
|
||||
// Foo foo = ...;
|
||||
// Any any = Any.pack(foo);
|
||||
// ...
|
||||
// if (any.is(Foo.class)) {
|
||||
// foo = any.unpack(Foo.class);
|
||||
// }
|
||||
//
|
||||
// Example 3: Pack and unpack a message in Python.
|
||||
//
|
||||
// foo = Foo(...)
|
||||
// any = Any()
|
||||
// any.Pack(foo)
|
||||
// ...
|
||||
// if any.Is(Foo.DESCRIPTOR):
|
||||
// any.Unpack(foo)
|
||||
// ...
|
||||
//
|
||||
// The pack methods provided by protobuf library will by default use
|
||||
// 'type.googleapis.com/full.type.name' as the type URL and the unpack
|
||||
// methods only use the fully qualified type name after the last '/'
|
||||
// in the type URL, for example "foo.bar.com/x/y.z" will yield type
|
||||
// name "y.z".
|
||||
//
|
||||
//
|
||||
// JSON
|
||||
// ====
|
||||
// The JSON representation of an `Any` value uses the regular
|
||||
// representation of the deserialized, embedded message, with an
|
||||
// additional field `@type` which contains the type URL. Example:
|
||||
//
|
||||
// package google.profile;
|
||||
// message Person {
|
||||
// string first_name = 1;
|
||||
// string last_name = 2;
|
||||
// }
|
||||
//
|
||||
// {
|
||||
// "@type": "type.googleapis.com/google.profile.Person",
|
||||
// "firstName": <string>,
|
||||
// "lastName": <string>
|
||||
// }
|
||||
//
|
||||
// If the embedded message type is well-known and has a custom JSON
|
||||
// representation, that representation will be embedded adding a field
|
||||
// `value` which holds the custom JSON in addition to the `@type`
|
||||
// field. Example (for message [google.protobuf.Duration][]):
|
||||
//
|
||||
// {
|
||||
// "@type": "type.googleapis.com/google.protobuf.Duration",
|
||||
// "value": "1.212s"
|
||||
// }
|
||||
//
|
||||
type Any struct {
|
||||
// A URL/resource name whose content describes the type of the
|
||||
// serialized protocol buffer message.
|
||||
//
|
||||
// For URLs which use the scheme `http`, `https`, or no scheme, the
|
||||
// following restrictions and interpretations apply:
|
||||
//
|
||||
// * If no scheme is provided, `https` is assumed.
|
||||
// * The last segment of the URL's path must represent the fully
|
||||
// qualified name of the type (as in `path/google.protobuf.Duration`).
|
||||
// The name should be in a canonical form (e.g., leading "." is
|
||||
// not accepted).
|
||||
// * An HTTP GET on the URL must yield a [google.protobuf.Type][]
|
||||
// value in binary format, or produce an error.
|
||||
// * Applications are allowed to cache lookup results based on the
|
||||
// URL, or have them precompiled into a binary to avoid any
|
||||
// lookup. Therefore, binary compatibility needs to be preserved
|
||||
// on changes to types. (Use versioned type names to manage
|
||||
// breaking changes.)
|
||||
//
|
||||
// Schemes other than `http`, `https` (or the empty scheme) might be
|
||||
// used with implementation specific semantics.
|
||||
//
|
||||
TypeUrl string `protobuf:"bytes,1,opt,name=type_url,json=typeUrl" json:"type_url,omitempty"`
|
||||
// Must be a valid serialized protocol buffer of the above specified type.
|
||||
Value []byte `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
|
||||
}
|
||||
|
||||
func (m *Any) Reset() { *m = Any{} }
|
||||
func (m *Any) String() string { return proto.CompactTextString(m) }
|
||||
func (*Any) ProtoMessage() {}
|
||||
func (*Any) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
|
||||
func (*Any) XXX_WellKnownType() string { return "Any" }
|
||||
|
||||
func (m *Any) GetTypeUrl() string {
|
||||
if m != nil {
|
||||
return m.TypeUrl
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (m *Any) GetValue() []byte {
|
||||
if m != nil {
|
||||
return m.Value
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func init() {
|
||||
proto.RegisterType((*Any)(nil), "google.protobuf.Any")
|
||||
}
|
||||
|
||||
func init() { proto.RegisterFile("github.com/golang/protobuf/ptypes/any/any.proto", fileDescriptor0) }
|
||||
|
||||
var fileDescriptor0 = []byte{
|
||||
// 184 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xd2, 0x4f, 0xcf, 0x2c, 0xc9,
|
||||
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf, 0xcf, 0x49, 0xcc, 0x4b, 0xd7, 0x2f, 0x28,
|
||||
0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x2f, 0x28, 0xa9, 0x2c, 0x48, 0x2d, 0xd6, 0x4f, 0xcc,
|
||||
0xab, 0x04, 0x61, 0x3d, 0xb0, 0xb8, 0x10, 0x7f, 0x7a, 0x7e, 0x7e, 0x7a, 0x4e, 0xaa, 0x1e, 0x4c,
|
||||
0x95, 0x92, 0x19, 0x17, 0xb3, 0x63, 0x5e, 0xa5, 0x90, 0x24, 0x17, 0x07, 0x48, 0x79, 0x7c, 0x69,
|
||||
0x51, 0x8e, 0x04, 0xa3, 0x02, 0xa3, 0x06, 0x67, 0x10, 0x3b, 0x88, 0x1f, 0x5a, 0x94, 0x23, 0x24,
|
||||
0xc2, 0xc5, 0x5a, 0x96, 0x98, 0x53, 0x9a, 0x2a, 0xc1, 0xa4, 0xc0, 0xa8, 0xc1, 0x13, 0x04, 0xe1,
|
||||
0x38, 0xe5, 0x73, 0x09, 0x27, 0xe7, 0xe7, 0xea, 0xa1, 0x19, 0xe7, 0xc4, 0xe1, 0x98, 0x57, 0x19,
|
||||
0x00, 0xe2, 0x04, 0x30, 0x46, 0xa9, 0x12, 0xe5, 0xb8, 0x45, 0x4c, 0xcc, 0xee, 0x01, 0x4e, 0xab,
|
||||
0x98, 0xe4, 0xdc, 0x21, 0x46, 0x05, 0x40, 0x95, 0xe8, 0x85, 0xa7, 0xe6, 0xe4, 0x78, 0xe7, 0xe5,
|
||||
0x97, 0xe7, 0x85, 0x80, 0x94, 0x26, 0xb1, 0x81, 0xf5, 0x1a, 0x03, 0x02, 0x00, 0x00, 0xff, 0xff,
|
||||
0x45, 0x1f, 0x1a, 0xf2, 0xf3, 0x00, 0x00, 0x00,
|
||||
}
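Assuming the ptypes helper package from the same protobuf release is also available, packing and unpacking an Any looks roughly like this sketch:

package main

import (
	"fmt"

	"github.com/golang/protobuf/ptypes"
	"github.com/golang/protobuf/ptypes/duration"
)

func main() {
	// Pack a well-known message; the type URL becomes
	// "type.googleapis.com/google.protobuf.Duration".
	d := &duration.Duration{Seconds: 1, Nanos: 212000000}
	a, err := ptypes.MarshalAny(d)
	if err != nil {
		panic(err)
	}
	fmt.Println(a.TypeUrl)

	// Unpack it again into a fresh Duration.
	var out duration.Duration
	if err := ptypes.UnmarshalAny(a, &out); err != nil {
		panic(err)
	}
	fmt.Println(out.Seconds, out.Nanos) // 1 212000000
}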
|
382
cmd/vendor/github.com/golang/protobuf/ptypes/struct/struct.pb.go
generated
vendored
Normal file
382
cmd/vendor/github.com/golang/protobuf/ptypes/struct/struct.pb.go
generated
vendored
Normal file
@ -0,0 +1,382 @@
|
||||
// Code generated by protoc-gen-go. DO NOT EDIT.
|
||||
// source: github.com/golang/protobuf/ptypes/struct/struct.proto
|
||||
|
||||
/*
|
||||
Package structpb is a generated protocol buffer package.
|
||||
|
||||
It is generated from these files:
|
||||
github.com/golang/protobuf/ptypes/struct/struct.proto
|
||||
|
||||
It has these top-level messages:
|
||||
Struct
|
||||
Value
|
||||
ListValue
|
||||
*/
|
||||
package structpb
|
||||
|
||||
import proto "github.com/golang/protobuf/proto"
|
||||
import fmt "fmt"
|
||||
import math "math"
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
var _ = proto.Marshal
|
||||
var _ = fmt.Errorf
|
||||
var _ = math.Inf
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the proto package it is being compiled against.
|
||||
// A compilation error at this line likely means your copy of the
|
||||
// proto package needs to be updated.
|
||||
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
|
||||
|
||||
// `NullValue` is a singleton enumeration to represent the null value for the
|
||||
// `Value` type union.
|
||||
//
|
||||
// The JSON representation for `NullValue` is JSON `null`.
|
||||
type NullValue int32
|
||||
|
||||
const (
|
||||
// Null value.
|
||||
NullValue_NULL_VALUE NullValue = 0
|
||||
)
|
||||
|
||||
var NullValue_name = map[int32]string{
|
||||
0: "NULL_VALUE",
|
||||
}
|
||||
var NullValue_value = map[string]int32{
|
||||
"NULL_VALUE": 0,
|
||||
}
|
||||
|
||||
func (x NullValue) String() string {
|
||||
return proto.EnumName(NullValue_name, int32(x))
|
||||
}
|
||||
func (NullValue) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
|
||||
func (NullValue) XXX_WellKnownType() string { return "NullValue" }
|
||||
|
||||
// `Struct` represents a structured data value, consisting of fields
|
||||
// which map to dynamically typed values. In some languages, `Struct`
|
||||
// might be supported by a native representation. For example, in
|
||||
// scripting languages like JS a struct is represented as an
|
||||
// object. The details of that representation are described together
|
||||
// with the proto support for the language.
|
||||
//
|
||||
// The JSON representation for `Struct` is JSON object.
|
||||
type Struct struct {
|
||||
// Unordered map of dynamically typed values.
|
||||
Fields map[string]*Value `protobuf:"bytes,1,rep,name=fields" json:"fields,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
|
||||
}
|
||||
|
||||
func (m *Struct) Reset() { *m = Struct{} }
|
||||
func (m *Struct) String() string { return proto.CompactTextString(m) }
|
||||
func (*Struct) ProtoMessage() {}
|
||||
func (*Struct) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
|
||||
func (*Struct) XXX_WellKnownType() string { return "Struct" }
|
||||
|
||||
func (m *Struct) GetFields() map[string]*Value {
|
||||
if m != nil {
|
||||
return m.Fields
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// `Value` represents a dynamically typed value which can be either
|
||||
// null, a number, a string, a boolean, a recursive struct value, or a
|
||||
// list of values. A producer of value is expected to set one of that
|
||||
// variants, absence of any variant indicates an error.
|
||||
//
|
||||
// The JSON representation for `Value` is JSON value.
|
||||
type Value struct {
|
||||
// The kind of value.
|
||||
//
|
||||
// Types that are valid to be assigned to Kind:
|
||||
// *Value_NullValue
|
||||
// *Value_NumberValue
|
||||
// *Value_StringValue
|
||||
// *Value_BoolValue
|
||||
// *Value_StructValue
|
||||
// *Value_ListValue
|
||||
Kind isValue_Kind `protobuf_oneof:"kind"`
|
||||
}
|
||||
|
||||
func (m *Value) Reset() { *m = Value{} }
|
||||
func (m *Value) String() string { return proto.CompactTextString(m) }
|
||||
func (*Value) ProtoMessage() {}
|
||||
func (*Value) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
|
||||
func (*Value) XXX_WellKnownType() string { return "Value" }
|
||||
|
||||
type isValue_Kind interface {
|
||||
isValue_Kind()
|
||||
}
|
||||
|
||||
type Value_NullValue struct {
|
||||
NullValue NullValue `protobuf:"varint,1,opt,name=null_value,json=nullValue,enum=google.protobuf.NullValue,oneof"`
|
||||
}
|
||||
type Value_NumberValue struct {
|
||||
NumberValue float64 `protobuf:"fixed64,2,opt,name=number_value,json=numberValue,oneof"`
|
||||
}
|
||||
type Value_StringValue struct {
|
||||
StringValue string `protobuf:"bytes,3,opt,name=string_value,json=stringValue,oneof"`
|
||||
}
|
||||
type Value_BoolValue struct {
|
||||
BoolValue bool `protobuf:"varint,4,opt,name=bool_value,json=boolValue,oneof"`
|
||||
}
|
||||
type Value_StructValue struct {
|
||||
StructValue *Struct `protobuf:"bytes,5,opt,name=struct_value,json=structValue,oneof"`
|
||||
}
|
||||
type Value_ListValue struct {
|
||||
ListValue *ListValue `protobuf:"bytes,6,opt,name=list_value,json=listValue,oneof"`
|
||||
}
|
||||
|
||||
func (*Value_NullValue) isValue_Kind() {}
|
||||
func (*Value_NumberValue) isValue_Kind() {}
|
||||
func (*Value_StringValue) isValue_Kind() {}
|
||||
func (*Value_BoolValue) isValue_Kind() {}
|
||||
func (*Value_StructValue) isValue_Kind() {}
|
||||
func (*Value_ListValue) isValue_Kind() {}
|
||||
|
||||
func (m *Value) GetKind() isValue_Kind {
|
||||
if m != nil {
|
||||
return m.Kind
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *Value) GetNullValue() NullValue {
|
||||
if x, ok := m.GetKind().(*Value_NullValue); ok {
|
||||
return x.NullValue
|
||||
}
|
||||
return NullValue_NULL_VALUE
|
||||
}
|
||||
|
||||
func (m *Value) GetNumberValue() float64 {
|
||||
if x, ok := m.GetKind().(*Value_NumberValue); ok {
|
||||
return x.NumberValue
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *Value) GetStringValue() string {
|
||||
if x, ok := m.GetKind().(*Value_StringValue); ok {
|
||||
return x.StringValue
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (m *Value) GetBoolValue() bool {
|
||||
if x, ok := m.GetKind().(*Value_BoolValue); ok {
|
||||
return x.BoolValue
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (m *Value) GetStructValue() *Struct {
|
||||
if x, ok := m.GetKind().(*Value_StructValue); ok {
|
||||
return x.StructValue
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *Value) GetListValue() *ListValue {
|
||||
if x, ok := m.GetKind().(*Value_ListValue); ok {
|
||||
return x.ListValue
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// XXX_OneofFuncs is for the internal use of the proto package.
|
||||
func (*Value) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
|
||||
return _Value_OneofMarshaler, _Value_OneofUnmarshaler, _Value_OneofSizer, []interface{}{
|
||||
(*Value_NullValue)(nil),
|
||||
(*Value_NumberValue)(nil),
|
||||
(*Value_StringValue)(nil),
|
||||
(*Value_BoolValue)(nil),
|
||||
(*Value_StructValue)(nil),
|
||||
(*Value_ListValue)(nil),
|
||||
}
|
||||
}
|
||||
|
||||
func _Value_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
|
||||
m := msg.(*Value)
|
||||
// kind
|
||||
switch x := m.Kind.(type) {
|
||||
case *Value_NullValue:
|
||||
b.EncodeVarint(1<<3 | proto.WireVarint)
|
||||
b.EncodeVarint(uint64(x.NullValue))
|
||||
case *Value_NumberValue:
|
||||
b.EncodeVarint(2<<3 | proto.WireFixed64)
|
||||
b.EncodeFixed64(math.Float64bits(x.NumberValue))
|
||||
case *Value_StringValue:
|
||||
b.EncodeVarint(3<<3 | proto.WireBytes)
|
||||
b.EncodeStringBytes(x.StringValue)
|
||||
case *Value_BoolValue:
|
||||
t := uint64(0)
|
||||
if x.BoolValue {
|
||||
t = 1
|
||||
}
|
||||
b.EncodeVarint(4<<3 | proto.WireVarint)
|
||||
b.EncodeVarint(t)
|
||||
case *Value_StructValue:
|
||||
b.EncodeVarint(5<<3 | proto.WireBytes)
|
||||
if err := b.EncodeMessage(x.StructValue); err != nil {
|
||||
return err
|
||||
}
|
||||
case *Value_ListValue:
|
||||
b.EncodeVarint(6<<3 | proto.WireBytes)
|
||||
if err := b.EncodeMessage(x.ListValue); err != nil {
|
||||
return err
|
||||
}
|
||||
case nil:
|
||||
default:
|
||||
return fmt.Errorf("Value.Kind has unexpected type %T", x)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func _Value_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
|
||||
m := msg.(*Value)
|
||||
switch tag {
|
||||
case 1: // kind.null_value
|
||||
if wire != proto.WireVarint {
|
||||
return true, proto.ErrInternalBadWireType
|
||||
}
|
||||
x, err := b.DecodeVarint()
|
||||
m.Kind = &Value_NullValue{NullValue(x)}
|
||||
return true, err
|
||||
case 2: // kind.number_value
|
||||
if wire != proto.WireFixed64 {
|
||||
return true, proto.ErrInternalBadWireType
|
||||
}
|
||||
x, err := b.DecodeFixed64()
|
||||
m.Kind = &Value_NumberValue{math.Float64frombits(x)}
|
||||
return true, err
|
||||
case 3: // kind.string_value
|
||||
if wire != proto.WireBytes {
|
||||
return true, proto.ErrInternalBadWireType
|
||||
}
|
||||
x, err := b.DecodeStringBytes()
|
||||
m.Kind = &Value_StringValue{x}
|
||||
return true, err
|
||||
case 4: // kind.bool_value
|
||||
if wire != proto.WireVarint {
|
||||
return true, proto.ErrInternalBadWireType
|
||||
}
|
||||
x, err := b.DecodeVarint()
|
||||
m.Kind = &Value_BoolValue{x != 0}
|
||||
return true, err
|
||||
case 5: // kind.struct_value
|
||||
if wire != proto.WireBytes {
|
||||
return true, proto.ErrInternalBadWireType
|
||||
}
|
||||
msg := new(Struct)
|
||||
err := b.DecodeMessage(msg)
|
||||
m.Kind = &Value_StructValue{msg}
|
||||
return true, err
|
||||
case 6: // kind.list_value
|
||||
if wire != proto.WireBytes {
|
||||
return true, proto.ErrInternalBadWireType
|
||||
}
|
||||
msg := new(ListValue)
|
||||
err := b.DecodeMessage(msg)
|
||||
m.Kind = &Value_ListValue{msg}
|
||||
return true, err
|
||||
default:
|
||||
return false, nil
|
||||
}
|
||||
}
|
||||
|
||||
func _Value_OneofSizer(msg proto.Message) (n int) {
|
||||
m := msg.(*Value)
|
||||
// kind
|
||||
switch x := m.Kind.(type) {
|
||||
case *Value_NullValue:
|
||||
n += proto.SizeVarint(1<<3 | proto.WireVarint)
|
||||
n += proto.SizeVarint(uint64(x.NullValue))
|
||||
case *Value_NumberValue:
|
||||
n += proto.SizeVarint(2<<3 | proto.WireFixed64)
|
||||
n += 8
|
||||
case *Value_StringValue:
|
||||
n += proto.SizeVarint(3<<3 | proto.WireBytes)
|
||||
n += proto.SizeVarint(uint64(len(x.StringValue)))
|
||||
n += len(x.StringValue)
|
||||
case *Value_BoolValue:
|
||||
n += proto.SizeVarint(4<<3 | proto.WireVarint)
|
||||
n += 1
|
||||
case *Value_StructValue:
|
||||
s := proto.Size(x.StructValue)
|
||||
n += proto.SizeVarint(5<<3 | proto.WireBytes)
|
||||
n += proto.SizeVarint(uint64(s))
|
||||
n += s
|
||||
case *Value_ListValue:
|
||||
s := proto.Size(x.ListValue)
|
||||
n += proto.SizeVarint(6<<3 | proto.WireBytes)
|
||||
n += proto.SizeVarint(uint64(s))
|
||||
n += s
|
||||
case nil:
|
||||
default:
|
||||
panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
// `ListValue` is a wrapper around a repeated field of values.
|
||||
//
|
||||
// The JSON representation for `ListValue` is JSON array.
|
||||
type ListValue struct {
|
||||
// Repeated field of dynamically typed values.
|
||||
Values []*Value `protobuf:"bytes,1,rep,name=values" json:"values,omitempty"`
|
||||
}
|
||||
|
||||
func (m *ListValue) Reset() { *m = ListValue{} }
|
||||
func (m *ListValue) String() string { return proto.CompactTextString(m) }
|
||||
func (*ListValue) ProtoMessage() {}
|
||||
func (*ListValue) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
|
||||
func (*ListValue) XXX_WellKnownType() string { return "ListValue" }
|
||||
|
||||
func (m *ListValue) GetValues() []*Value {
|
||||
if m != nil {
|
||||
return m.Values
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func init() {
|
||||
proto.RegisterType((*Struct)(nil), "google.protobuf.Struct")
|
||||
proto.RegisterType((*Value)(nil), "google.protobuf.Value")
|
||||
proto.RegisterType((*ListValue)(nil), "google.protobuf.ListValue")
|
||||
proto.RegisterEnum("google.protobuf.NullValue", NullValue_name, NullValue_value)
|
||||
}
|
||||
|
||||
func init() {
|
||||
proto.RegisterFile("github.com/golang/protobuf/ptypes/struct/struct.proto", fileDescriptor0)
|
||||
}
|
||||
|
||||
var fileDescriptor0 = []byte{
|
||||
// 417 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x92, 0x41, 0x8b, 0xd3, 0x40,
|
||||
0x14, 0x80, 0x3b, 0xc9, 0x36, 0x98, 0x17, 0x59, 0x97, 0x11, 0xb4, 0xac, 0xa0, 0xa1, 0x7b, 0x09,
|
||||
0x22, 0x09, 0x56, 0x04, 0x31, 0x5e, 0x0c, 0xac, 0xbb, 0x60, 0x58, 0x62, 0x74, 0x57, 0xf0, 0x52,
|
||||
0x9a, 0x34, 0x8d, 0xa1, 0xd3, 0x99, 0x90, 0xcc, 0x28, 0x3d, 0xfa, 0x2f, 0x3c, 0x7b, 0xf4, 0xe8,
|
||||
0xaf, 0xf3, 0x28, 0x33, 0x93, 0x44, 0x69, 0x29, 0x78, 0x9a, 0xbe, 0x37, 0xdf, 0xfb, 0xe6, 0xbd,
|
||||
0xd7, 0xc0, 0xf3, 0xb2, 0xe2, 0x9f, 0x45, 0xe6, 0xe7, 0x6c, 0x13, 0x94, 0x8c, 0x2c, 0x68, 0x19,
|
||||
0xd4, 0x0d, 0xe3, 0x2c, 0x13, 0xab, 0xa0, 0xe6, 0xdb, 0xba, 0x68, 0x83, 0x96, 0x37, 0x22, 0xe7,
|
||||
0xdd, 0xe1, 0xab, 0x5b, 0x7c, 0xa7, 0x64, 0xac, 0x24, 0x85, 0xdf, 0xb3, 0xd3, 0xef, 0x08, 0xac,
|
||||
0xf7, 0x8a, 0xc0, 0x21, 0x58, 0xab, 0xaa, 0x20, 0xcb, 0x76, 0x82, 0x5c, 0xd3, 0x73, 0x66, 0x67,
|
||||
0xfe, 0x0e, 0xec, 0x6b, 0xd0, 0x7f, 0xa3, 0xa8, 0x73, 0xca, 0x9b, 0x6d, 0xda, 0x95, 0x9c, 0xbe,
|
||||
0x03, 0xe7, 0x9f, 0x34, 0x3e, 0x01, 0x73, 0x5d, 0x6c, 0x27, 0xc8, 0x45, 0x9e, 0x9d, 0xca, 0x9f,
|
||||
0xf8, 0x09, 0x8c, 0xbf, 0x2c, 0x88, 0x28, 0x26, 0x86, 0x8b, 0x3c, 0x67, 0x76, 0x6f, 0x4f, 0x7e,
|
||||
0x23, 0x6f, 0x53, 0x0d, 0xbd, 0x34, 0x5e, 0xa0, 0xe9, 0x2f, 0x03, 0xc6, 0x2a, 0x89, 0x43, 0x00,
|
||||
0x2a, 0x08, 0x99, 0x6b, 0x81, 0x94, 0x1e, 0xcf, 0x4e, 0xf7, 0x04, 0x57, 0x82, 0x10, 0xc5, 0x5f,
|
||||
0x8e, 0x52, 0x9b, 0xf6, 0x01, 0x3e, 0x83, 0xdb, 0x54, 0x6c, 0xb2, 0xa2, 0x99, 0xff, 0x7d, 0x1f,
|
||||
0x5d, 0x8e, 0x52, 0x47, 0x67, 0x07, 0xa8, 0xe5, 0x4d, 0x45, 0xcb, 0x0e, 0x32, 0x65, 0xe3, 0x12,
|
||||
0xd2, 0x59, 0x0d, 0x3d, 0x02, 0xc8, 0x18, 0xeb, 0xdb, 0x38, 0x72, 0x91, 0x77, 0x4b, 0x3e, 0x25,
|
||||
0x73, 0x1a, 0x78, 0xa5, 0x2c, 0x22, 0xe7, 0x1d, 0x32, 0x56, 0xa3, 0xde, 0x3f, 0xb0, 0xc7, 0x4e,
|
||||
0x2f, 0x72, 0x3e, 0x4c, 0x49, 0xaa, 0xb6, 0xaf, 0xb5, 0x54, 0xed, 0xfe, 0x94, 0x71, 0xd5, 0xf2,
|
||||
0x61, 0x4a, 0xd2, 0x07, 0x91, 0x05, 0x47, 0xeb, 0x8a, 0x2e, 0xa7, 0x21, 0xd8, 0x03, 0x81, 0x7d,
|
||||
0xb0, 0x94, 0xac, 0xff, 0x47, 0x0f, 0x2d, 0xbd, 0xa3, 0x1e, 0x3f, 0x00, 0x7b, 0x58, 0x22, 0x3e,
|
||||
0x06, 0xb8, 0xba, 0x8e, 0xe3, 0xf9, 0xcd, 0xeb, 0xf8, 0xfa, 0xfc, 0x64, 0x14, 0x7d, 0x43, 0x70,
|
||||
0x37, 0x67, 0x9b, 0x5d, 0x45, 0xe4, 0xe8, 0x69, 0x12, 0x19, 0x27, 0xe8, 0xd3, 0xd3, 0xff, 0xfd,
|
||||
0x30, 0x43, 0x7d, 0xd4, 0xd9, 0x6f, 0x84, 0x7e, 0x18, 0xe6, 0x45, 0x12, 0xfd, 0x34, 0x1e, 0x5e,
|
||||
0x68, 0x79, 0xd2, 0xf7, 0xf7, 0xb1, 0x20, 0xe4, 0x2d, 0x65, 0x5f, 0xe9, 0x07, 0x59, 0x99, 0x59,
|
||||
0x4a, 0xf5, 0xec, 0x4f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x9b, 0x6e, 0x5d, 0x3c, 0xfe, 0x02, 0x00,
|
||||
0x00,
|
||||
}
|
2
cmd/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/context.go
generated
vendored
2
cmd/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/context.go
generated
vendored
@ -94,7 +94,7 @@ func AnnotateContext(ctx context.Context, req *http.Request) (context.Context, e
|
||||
if len(pairs) == 0 {
|
||||
return ctx, nil
|
||||
}
|
||||
return metadata.NewContext(ctx, metadata.Pairs(pairs...)), nil
|
||||
return metadata.NewOutgoingContext(ctx, metadata.Pairs(pairs...)), nil
|
||||
}
|
||||
|
||||
// ServerMetadata consists of metadata sent from gRPC server.
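The gateway now attaches outbound metadata with grpc's newer outgoing-context API; a small sketch of the same call pattern (the header key is only an illustration):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/metadata"
)

func main() {
	md := metadata.Pairs("grpcgateway-user-agent", "curl/7.54.0")
	ctx := metadata.NewOutgoingContext(context.Background(), md)

	got, ok := metadata.FromOutgoingContext(ctx)
	fmt.Println(ok, got["grpcgateway-user-agent"])
}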
|
||||
|
25 cmd/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/query.go (generated, vendored)
@ -48,8 +48,11 @@ func populateFieldValueFromPath(msg proto.Message, fieldPath []string, values []
			return fmt.Errorf("non-aggregate type in the mid of path: %s", strings.Join(fieldPath, "."))
		}
		var f reflect.Value
		f, props = fieldByProtoName(m, fieldName)
		if !f.IsValid() {
		var err error
		f, props, err = fieldByProtoName(m, fieldName)
		if err != nil {
			return err
		} else if !f.IsValid() {
			grpclog.Printf("field not found in %T: %s", msg, strings.Join(fieldPath, "."))
			return nil
		}
@ -92,14 +95,26 @@ func populateFieldValueFromPath(msg proto.Message, fieldPath []string, values []

// fieldByProtoName looks up a field whose corresponding protobuf field name is "name".
// "m" must be a struct value. It returns zero reflect.Value if no such field found.
func fieldByProtoName(m reflect.Value, name string) (reflect.Value, *proto.Properties) {
func fieldByProtoName(m reflect.Value, name string) (reflect.Value, *proto.Properties, error) {
	props := proto.GetProperties(m.Type())

	// look up field name in oneof map
	if op, ok := props.OneofTypes[name]; ok {
		v := reflect.New(op.Type.Elem())
		field := m.Field(op.Field)
		if !field.IsNil() {
			return reflect.Value{}, nil, fmt.Errorf("field already set for %s oneof", props.Prop[op.Field].OrigName)
		}
		field.Set(v)
		return v.Elem().Field(0), op.Prop, nil
	}

	for _, p := range props.Prop {
		if p.OrigName == name {
			return m.FieldByName(p.Name), p
			return m.FieldByName(p.Name), p, nil
		}
	}
	return reflect.Value{}, nil
	return reflect.Value{}, nil, nil
}

func populateRepeatedField(f reflect.Value, values []string, props *proto.Properties) error {
|
||||
|
2 cmd/vendor/github.com/kr/pty/ioctl.go (generated, vendored)
@ -1,3 +1,5 @@
// +build !windows

package pty

import "syscall"
|
||||
|
76
cmd/vendor/github.com/kr/pty/pty_dragonfly.go
generated
vendored
Normal file
76
cmd/vendor/github.com/kr/pty/pty_dragonfly.go
generated
vendored
Normal file
@ -0,0 +1,76 @@
|
||||
package pty
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"os"
|
||||
"strings"
|
||||
"syscall"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
// same code as pty_darwin.go
|
||||
func open() (pty, tty *os.File, err error) {
|
||||
p, err := os.OpenFile("/dev/ptmx", os.O_RDWR, 0)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
sname, err := ptsname(p)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
err = grantpt(p)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
err = unlockpt(p)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
t, err := os.OpenFile(sname, os.O_RDWR, 0)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
return p, t, nil
|
||||
}
|
||||
|
||||
func grantpt(f *os.File) error {
|
||||
_, err := isptmaster(f.Fd())
|
||||
return err
|
||||
}
|
||||
|
||||
func unlockpt(f *os.File) error {
|
||||
_, err := isptmaster(f.Fd())
|
||||
return err
|
||||
}
|
||||
|
||||
func isptmaster(fd uintptr) (bool, error) {
|
||||
err := ioctl(fd, syscall.TIOCISPTMASTER, 0)
|
||||
return err == nil, err
|
||||
}
|
||||
|
||||
var (
|
||||
emptyFiodgnameArg fiodgnameArg
|
||||
ioctl_FIODNAME = _IOW('f', 120, unsafe.Sizeof(emptyFiodgnameArg))
|
||||
)
|
||||
|
||||
func ptsname(f *os.File) (string, error) {
|
||||
name := make([]byte, _C_SPECNAMELEN)
|
||||
fa := fiodgnameArg{Name: (*byte)(unsafe.Pointer(&name[0])), Len: _C_SPECNAMELEN, Pad_cgo_0: [4]byte{0, 0, 0, 0}}
|
||||
|
||||
err := ioctl(f.Fd(), ioctl_FIODNAME, uintptr(unsafe.Pointer(&fa)))
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
for i, c := range name {
|
||||
if c == 0 {
|
||||
s := "/dev/" + string(name[:i])
|
||||
return strings.Replace(s, "ptm", "pts", -1), nil
|
||||
}
|
||||
}
|
||||
return "", errors.New("TIOCPTYGNAME string not NUL-terminated")
|
||||
}
|
2 cmd/vendor/github.com/kr/pty/pty_unsupported.go (generated, vendored)
@ -1,4 +1,4 @@
// +build !linux,!darwin,!freebsd
// +build !linux,!darwin,!freebsd,!dragonfly

package pty
|
||||
|
||||
|
2 cmd/vendor/github.com/kr/pty/run.go (generated, vendored)
@ -1,3 +1,5 @@
// +build !windows

package pty

import (
|
||||
|
17 cmd/vendor/github.com/kr/pty/types_dragonfly.go (generated, vendored, new file)
@ -0,0 +1,17 @@
// +build ignore

package pty

/*
#define _KERNEL
#include <sys/conf.h>
#include <sys/param.h>
#include <sys/filio.h>
*/
import "C"

const (
	_C_SPECNAMELEN = C.SPECNAMELEN /* max length of devicename */
)

type fiodgnameArg C.struct_fiodname_args
|
2 cmd/vendor/github.com/kr/pty/util.go (generated, vendored)
@ -1,3 +1,5 @@
// +build !windows

package pty

import (
|
||||
|
14 cmd/vendor/github.com/kr/pty/ztypes_dragonfly_amd64.go (generated, vendored, new file)
@ -0,0 +1,14 @@
// Created by cgo -godefs - DO NOT EDIT
// cgo -godefs types_dragonfly.go

package pty

const (
	_C_SPECNAMELEN = 0x3f
)

type fiodgnameArg struct {
	Name      *byte
	Len       uint32
	Pad_cgo_0 [4]byte
}
|
12 cmd/vendor/github.com/kr/pty/ztypes_mipsx.go (generated, vendored, new file)
@ -0,0 +1,12 @@
// Created by cgo -godefs - DO NOT EDIT
// cgo -godefs types.go

// +build linux
// +build mips mipsle mips64 mips64le

package pty

type (
	_C_int  int32
	_C_uint uint32
)
|
28 cmd/vendor/github.com/russross/blackfriday/block.go (generated, vendored)
@ -15,8 +15,7 @@ package blackfriday

import (
	"bytes"

	"github.com/shurcooL/sanitized_anchor_name"
	"unicode"
)

// Parse block-level data.
@ -243,7 +242,7 @@ func (p *parser) prefixHeader(out *bytes.Buffer, data []byte) int {
		}
		if end > i {
			if id == "" && p.flags&EXTENSION_AUTO_HEADER_IDS != 0 {
				id = sanitized_anchor_name.Create(string(data[i:end]))
				id = SanitizedAnchorName(string(data[i:end]))
			}
			work := func() bool {
				p.inline(out, data[i:end])
@ -1364,7 +1363,7 @@ func (p *parser) paragraph(out *bytes.Buffer, data []byte) int {

			id := ""
			if p.flags&EXTENSION_AUTO_HEADER_IDS != 0 {
				id = sanitized_anchor_name.Create(string(data[prev:eol]))
				id = SanitizedAnchorName(string(data[prev:eol]))
			}

			p.r.Header(out, work, level, id)
@ -1428,3 +1427,24 @@ func (p *parser) paragraph(out *bytes.Buffer, data []byte) int {
	p.renderParagraph(out, data[:i])
	return i
}

// SanitizedAnchorName returns a sanitized anchor name for the given text.
//
// It implements the algorithm specified in the package comment.
func SanitizedAnchorName(text string) string {
	var anchorName []rune
	futureDash := false
	for _, r := range text {
		switch {
		case unicode.IsLetter(r) || unicode.IsNumber(r):
			if futureDash && len(anchorName) > 0 {
				anchorName = append(anchorName, '-')
			}
			futureDash = false
			anchorName = append(anchorName, unicode.ToLower(r))
		default:
			futureDash = true
		}
	}
	return string(anchorName)
}
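SanitizedAnchorName is now exported directly from blackfriday, so callers no longer need the standalone sanitized_anchor_name package; a quick usage sketch:

package main

import (
	"fmt"

	"github.com/russross/blackfriday"
)

func main() {
	// Letters and digits are lower-cased; runs of anything else collapse to one dash.
	fmt.Println(blackfriday.SanitizedAnchorName("Hello, World -- Go 1.8!"))
	// Output: hello-world-go-1-8
}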
|
||||
|
32 cmd/vendor/github.com/russross/blackfriday/doc.go (generated, vendored, new file)
@ -0,0 +1,32 @@
// Package blackfriday is a Markdown processor.
//
// It translates plain text with simple formatting rules into HTML or LaTeX.
//
// Sanitized Anchor Names
//
// Blackfriday includes an algorithm for creating sanitized anchor names
// corresponding to a given input text. This algorithm is used to create
// anchors for headings when EXTENSION_AUTO_HEADER_IDS is enabled. The
// algorithm is specified below, so that other packages can create
// compatible anchor names and links to those anchors.
//
// The algorithm iterates over the input text, interpreted as UTF-8,
// one Unicode code point (rune) at a time. All runes that are letters (category L)
// or numbers (category N) are considered valid characters. They are mapped to
// lower case, and included in the output. All other runes are considered
// invalid characters. Invalid characters that preceed the first valid character,
// as well as invalid character that follow the last valid character
// are dropped completely. All other sequences of invalid characters
// between two valid characters are replaced with a single dash character '-'.
//
// SanitizedAnchorName exposes this functionality, and can be used to
// create compatible links to the anchor names generated by blackfriday.
// This algorithm is also implemented in a small standalone package at
// github.com/shurcooL/sanitized_anchor_name. It can be useful for clients
// that want a small package and don't need full functionality of blackfriday.
package blackfriday

// NOTE: Keep Sanitized Anchor Name algorithm in sync with package
// github.com/shurcooL/sanitized_anchor_name.
// Otherwise, users of sanitized_anchor_name will get anchor names
// that are incompatible with those generated by blackfriday.
|
4 cmd/vendor/github.com/russross/blackfriday/inline.go (generated, vendored)
@ -488,6 +488,7 @@ func link(p *parser, out *bytes.Buffer, data []byte, offset int) int {
		}

		p.notes = append(p.notes, ref)
		p.notesRecord[string(ref.link)] = struct{}{}

		link = ref.link
		title = ref.title
@ -498,9 +499,10 @@ func link(p *parser, out *bytes.Buffer, data []byte, offset int) int {
			return 0
		}

		if t == linkDeferredFootnote {
		if t == linkDeferredFootnote && !p.isFootnote(lr) {
			lr.noteId = len(p.notes) + 1
			p.notes = append(p.notes, lr)
			p.notesRecord[string(lr.link)] = struct{}{}
		}

		// keep link and title from reference
|
||||
|
12 cmd/vendor/github.com/russross/blackfriday/markdown.go (generated, vendored)
@ -13,9 +13,6 @@
//
//

// Blackfriday markdown processor.
//
// Translates plain text with simple formatting rules into HTML or LaTeX.
package blackfriday

import (
@ -221,7 +218,8 @@ type parser struct {
	// Footnotes need to be ordered as well as available to quickly check for
	// presence. If a ref is also a footnote, it's stored both in refs and here
	// in notes. Slice is nil if footnotes not enabled.
	notes []*reference
	notes       []*reference
	notesRecord map[string]struct{}
}

func (p *parser) getRef(refid string) (ref *reference, found bool) {
@ -244,6 +242,11 @@ func (p *parser) getRef(refid string) (ref *reference, found bool) {
	return ref, found
}

func (p *parser) isFootnote(ref *reference) bool {
	_, ok := p.notesRecord[string(ref.link)]
	return ok
}

//
//
// Public interface
@ -379,6 +382,7 @@ func MarkdownOptions(input []byte, renderer Renderer, opts Options) []byte {

	if extensions&EXTENSION_FOOTNOTES != 0 {
		p.notes = make([]*reference, 0)
		p.notesRecord = make(map[string]struct{})
	}

	first := firstPass(p, input)
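A minimal sketch of rendering with the footnote extension enabled, which exercises the new notesRecord bookkeeping (the input text is just an illustration):

package main

import (
	"fmt"

	"github.com/russross/blackfriday"
)

func main() {
	input := []byte("Some text.[^note]\n\n[^note]: the footnote body\n")
	renderer := blackfriday.HtmlRenderer(blackfriday.HTML_USE_XHTML, "", "")
	out := blackfriday.Markdown(input, renderer, blackfriday.EXTENSION_FOOTNOTES)
	fmt.Println(string(out))
}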
|
||||
|
21 cmd/vendor/github.com/shurcooL/sanitized_anchor_name/LICENSE (generated, vendored)
@ -1,21 +0,0 @@
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2015 Dmitri Shuralyov
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
29 cmd/vendor/github.com/shurcooL/sanitized_anchor_name/main.go (generated, vendored)
@ -1,29 +0,0 @@
|
||||
// Package sanitized_anchor_name provides a func to create sanitized anchor names.
|
||||
//
|
||||
// Its logic can be reused by multiple packages to create interoperable anchor names
|
||||
// and links to those anchors.
|
||||
//
|
||||
// At this time, it does not try to ensure that generated anchor names
|
||||
// are unique, that responsibility falls on the caller.
|
||||
package sanitized_anchor_name // import "github.com/shurcooL/sanitized_anchor_name"
|
||||
|
||||
import "unicode"
|
||||
|
||||
// Create returns a sanitized anchor name for the given text.
|
||||
func Create(text string) string {
|
||||
var anchorName []rune
|
||||
var futureDash = false
|
||||
for _, r := range []rune(text) {
|
||||
switch {
|
||||
case unicode.IsLetter(r) || unicode.IsNumber(r):
|
||||
if futureDash && len(anchorName) > 0 {
|
||||
anchorName = append(anchorName, '-')
|
||||
}
|
||||
futureDash = false
|
||||
anchorName = append(anchorName, unicode.ToLower(r))
|
||||
default:
|
||||
futureDash = true
|
||||
}
|
||||
}
|
||||
return string(anchorName)
|
||||
}
|
8 cmd/vendor/golang.org/x/text/unicode/norm/input.go (generated, vendored)
@ -90,16 +90,20 @@ func (in *input) charinfoNFKC(p int) (uint16, int) {
}

func (in *input) hangul(p int) (r rune) {
	var size int
	if in.bytes == nil {
		if !isHangulString(in.str[p:]) {
			return 0
		}
		r, _ = utf8.DecodeRuneInString(in.str[p:])
		r, size = utf8.DecodeRuneInString(in.str[p:])
	} else {
		if !isHangul(in.bytes[p:]) {
			return 0
		}
		r, _ = utf8.DecodeRune(in.bytes[p:])
		r, size = utf8.DecodeRune(in.bytes[p:])
	}
	if size != hangulUTF8Size {
		return 0
	}
	return r
}
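The fix keeps the size returned by the UTF-8 decoder and only treats the rune as Hangul when it occupies exactly hangulUTF8Size (3) bytes; a standalone sketch of that check:

package main

import (
	"fmt"
	"unicode/utf8"
)

func main() {
	const hangulUTF8Size = 3
	r, size := utf8.DecodeRuneInString("\uAC00") // HANGUL SYLLABLE GA
	fmt.Println(r, size, size == hangulUTF8Size) // 44032 3 true
}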
|
||||
|
202 cmd/vendor/google.golang.org/genproto/LICENSE (generated, vendored, new file)
@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
143 cmd/vendor/google.golang.org/genproto/googleapis/rpc/status/status.pb.go (generated, vendored, Normal file)
@@ -0,0 +1,143 @@
|
||||
// Code generated by protoc-gen-go. DO NOT EDIT.
|
||||
// source: google/rpc/status.proto
|
||||
|
||||
/*
|
||||
Package status is a generated protocol buffer package.
|
||||
|
||||
It is generated from these files:
|
||||
google/rpc/status.proto
|
||||
|
||||
It has these top-level messages:
|
||||
Status
|
||||
*/
|
||||
package status
|
||||
|
||||
import proto "github.com/golang/protobuf/proto"
|
||||
import fmt "fmt"
|
||||
import math "math"
|
||||
import google_protobuf "github.com/golang/protobuf/ptypes/any"
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
var _ = proto.Marshal
|
||||
var _ = fmt.Errorf
|
||||
var _ = math.Inf
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the proto package it is being compiled against.
|
||||
// A compilation error at this line likely means your copy of the
|
||||
// proto package needs to be updated.
|
||||
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
|
||||
|
||||
// The `Status` type defines a logical error model that is suitable for different
|
||||
// programming environments, including REST APIs and RPC APIs. It is used by
|
||||
// [gRPC](https://github.com/grpc). The error model is designed to be:
|
||||
//
|
||||
// - Simple to use and understand for most users
|
||||
// - Flexible enough to meet unexpected needs
|
||||
//
|
||||
// # Overview
|
||||
//
|
||||
// The `Status` message contains three pieces of data: error code, error message,
|
||||
// and error details. The error code should be an enum value of
|
||||
// [google.rpc.Code][google.rpc.Code], but it may accept additional error codes if needed. The
|
||||
// error message should be a developer-facing English message that helps
|
||||
// developers *understand* and *resolve* the error. If a localized user-facing
|
||||
// error message is needed, put the localized message in the error details or
|
||||
// localize it in the client. The optional error details may contain arbitrary
|
||||
// information about the error. There is a predefined set of error detail types
|
||||
// in the package `google.rpc` which can be used for common error conditions.
|
||||
//
|
||||
// # Language mapping
|
||||
//
|
||||
// The `Status` message is the logical representation of the error model, but it
|
||||
// is not necessarily the actual wire format. When the `Status` message is
|
||||
// exposed in different client libraries and different wire protocols, it can be
|
||||
// mapped differently. For example, it will likely be mapped to some exceptions
|
||||
// in Java, but more likely mapped to some error codes in C.
|
||||
//
|
||||
// # Other uses
|
||||
//
|
||||
// The error model and the `Status` message can be used in a variety of
|
||||
// environments, either with or without APIs, to provide a
|
||||
// consistent developer experience across different environments.
|
||||
//
|
||||
// Example uses of this error model include:
|
||||
//
|
||||
// - Partial errors. If a service needs to return partial errors to the client,
|
||||
// it may embed the `Status` in the normal response to indicate the partial
|
||||
// errors.
|
||||
//
|
||||
// - Workflow errors. A typical workflow has multiple steps. Each step may
|
||||
// have a `Status` message for error reporting purpose.
|
||||
//
|
||||
// - Batch operations. If a client uses batch request and batch response, the
|
||||
// `Status` message should be used directly inside batch response, one for
|
||||
// each error sub-response.
|
||||
//
|
||||
// - Asynchronous operations. If an API call embeds asynchronous operation
|
||||
// results in its response, the status of those operations should be
|
||||
// represented directly using the `Status` message.
|
||||
//
|
||||
// - Logging. If some API errors are stored in logs, the message `Status` could
|
||||
// be used directly after any stripping needed for security/privacy reasons.
|
||||
type Status struct {
|
||||
// The status code, which should be an enum value of [google.rpc.Code][google.rpc.Code].
|
||||
Code int32 `protobuf:"varint,1,opt,name=code" json:"code,omitempty"`
|
||||
// A developer-facing error message, which should be in English. Any
|
||||
// user-facing error message should be localized and sent in the
|
||||
// [google.rpc.Status.details][google.rpc.Status.details] field, or localized by the client.
|
||||
Message string `protobuf:"bytes,2,opt,name=message" json:"message,omitempty"`
|
||||
// A list of messages that carry the error details. There will be a
|
||||
// common set of message types for APIs to use.
|
||||
Details []*google_protobuf.Any `protobuf:"bytes,3,rep,name=details" json:"details,omitempty"`
|
||||
}
|
||||
|
||||
func (m *Status) Reset() { *m = Status{} }
|
||||
func (m *Status) String() string { return proto.CompactTextString(m) }
|
||||
func (*Status) ProtoMessage() {}
|
||||
func (*Status) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
|
||||
|
||||
func (m *Status) GetCode() int32 {
|
||||
if m != nil {
|
||||
return m.Code
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *Status) GetMessage() string {
|
||||
if m != nil {
|
||||
return m.Message
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (m *Status) GetDetails() []*google_protobuf.Any {
|
||||
if m != nil {
|
||||
return m.Details
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func init() {
|
||||
proto.RegisterType((*Status)(nil), "google.rpc.Status")
|
||||
}
|
||||
|
||||
func init() { proto.RegisterFile("google/rpc/status.proto", fileDescriptor0) }
|
||||
|
||||
var fileDescriptor0 = []byte{
|
||||
// 209 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x4f, 0xcf, 0xcf, 0x4f,
|
||||
0xcf, 0x49, 0xd5, 0x2f, 0x2a, 0x48, 0xd6, 0x2f, 0x2e, 0x49, 0x2c, 0x29, 0x2d, 0xd6, 0x2b, 0x28,
|
||||
0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x82, 0x48, 0xe8, 0x15, 0x15, 0x24, 0x4b, 0x49, 0x42, 0x15, 0x81,
|
||||
0x65, 0x92, 0x4a, 0xd3, 0xf4, 0x13, 0xf3, 0x2a, 0x21, 0xca, 0x94, 0xd2, 0xb8, 0xd8, 0x82, 0xc1,
|
||||
0xda, 0x84, 0x84, 0xb8, 0x58, 0x92, 0xf3, 0x53, 0x52, 0x25, 0x18, 0x15, 0x18, 0x35, 0x58, 0x83,
|
||||
0xc0, 0x6c, 0x21, 0x09, 0x2e, 0xf6, 0xdc, 0xd4, 0xe2, 0xe2, 0xc4, 0xf4, 0x54, 0x09, 0x26, 0x05,
|
||||
0x46, 0x0d, 0xce, 0x20, 0x18, 0x57, 0x48, 0x8f, 0x8b, 0x3d, 0x25, 0xb5, 0x24, 0x31, 0x33, 0xa7,
|
||||
0x58, 0x82, 0x59, 0x81, 0x59, 0x83, 0xdb, 0x48, 0x44, 0x0f, 0x6a, 0x21, 0xcc, 0x12, 0x3d, 0xc7,
|
||||
0xbc, 0xca, 0x20, 0x98, 0x22, 0xa7, 0x38, 0x2e, 0xbe, 0xe4, 0xfc, 0x5c, 0x3d, 0x84, 0xa3, 0x9c,
|
||||
0xb8, 0x21, 0xf6, 0x06, 0x80, 0x94, 0x07, 0x30, 0x46, 0x99, 0x43, 0xa5, 0xd2, 0xf3, 0x73, 0x12,
|
||||
0xf3, 0xd2, 0xf5, 0xf2, 0x8b, 0xd2, 0xf5, 0xd3, 0x53, 0xf3, 0xc0, 0x86, 0xe9, 0x43, 0xa4, 0x12,
|
||||
0x0b, 0x32, 0x8b, 0x91, 0xfc, 0x69, 0x0d, 0xa1, 0x16, 0x31, 0x31, 0x07, 0x05, 0x38, 0x27, 0xb1,
|
||||
0x81, 0x55, 0x1a, 0x03, 0x02, 0x00, 0x00, 0xff, 0xff, 0xa4, 0x53, 0xf0, 0x7c, 0x10, 0x01, 0x00,
|
||||
0x00,
|
||||
}
|
8 cmd/vendor/google.golang.org/grpc/balancer.go (generated, vendored)
@@ -35,6 +35,7 @@ package grpc
import (
    "fmt"
    "net"
    "sync"

    "golang.org/x/net/context"
@@ -60,6 +61,10 @@ type BalancerConfig struct {
    // use to dial to a remote load balancer server. The Balancer implementations
    // can ignore this if it does not need to talk to another party securely.
    DialCreds credentials.TransportCredentials
    // Dialer is the custom dialer the Balancer implementation can use to dial
    // to a remote load balancer server. The Balancer implementations
    // can ignore this if it doesn't need to talk to remote balancer.
    Dialer func(context.Context, string) (net.Conn, error)
}

// BalancerGetOptions configures a Get call.
@@ -385,6 +390,9 @@ func (rr *roundRobin) Notify() <-chan []Address {
func (rr *roundRobin) Close() error {
    rr.mu.Lock()
    defer rr.mu.Unlock()
    if rr.done {
        return errBalancerClosed
    }
    rr.done = true
    if rr.w != nil {
        rr.w.Close()
109 cmd/vendor/google.golang.org/grpc/call.go (generated, vendored)
@@ -43,6 +43,7 @@ import (
|
||||
"google.golang.org/grpc/codes"
|
||||
"google.golang.org/grpc/peer"
|
||||
"google.golang.org/grpc/stats"
|
||||
"google.golang.org/grpc/status"
|
||||
"google.golang.org/grpc/transport"
|
||||
)
|
||||
|
||||
@ -72,14 +73,17 @@ func recvResponse(ctx context.Context, dopts dialOptions, t transport.ClientTran
|
||||
}
|
||||
}
|
||||
for {
|
||||
if err = recv(p, dopts.codec, stream, dopts.dc, reply, dopts.maxMsgSize, inPayload); err != nil {
|
||||
if c.maxReceiveMessageSize == nil {
|
||||
return Errorf(codes.Internal, "callInfo maxReceiveMessageSize field uninitialized(nil)")
|
||||
}
|
||||
if err = recv(p, dopts.codec, stream, dopts.dc, reply, *c.maxReceiveMessageSize, inPayload); err != nil {
|
||||
if err == io.EOF {
|
||||
break
|
||||
}
|
||||
return
|
||||
}
|
||||
}
|
||||
if inPayload != nil && err == io.EOF && stream.StatusCode() == codes.OK {
|
||||
if inPayload != nil && err == io.EOF && stream.Status().Code() == codes.OK {
|
||||
// TODO in the current implementation, inTrailer may be handled before inPayload in some cases.
|
||||
// Fix the order if necessary.
|
||||
dopts.copts.StatsHandler.HandleRPC(ctx, inPayload)
|
||||
@ -92,11 +96,7 @@ func recvResponse(ctx context.Context, dopts dialOptions, t transport.ClientTran
|
||||
}
|
||||
|
||||
// sendRequest writes out various information of an RPC such as Context and Message.
|
||||
func sendRequest(ctx context.Context, dopts dialOptions, compressor Compressor, callHdr *transport.CallHdr, t transport.ClientTransport, args interface{}, opts *transport.Options) (_ *transport.Stream, err error) {
|
||||
stream, err := t.NewStream(ctx, callHdr)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
func sendRequest(ctx context.Context, dopts dialOptions, compressor Compressor, c *callInfo, callHdr *transport.CallHdr, stream *transport.Stream, t transport.ClientTransport, args interface{}, opts *transport.Options) (err error) {
|
||||
defer func() {
|
||||
if err != nil {
|
||||
// If err is connection error, t will be closed, no need to close stream here.
|
||||
@ -119,7 +119,13 @@ func sendRequest(ctx context.Context, dopts dialOptions, compressor Compressor,
|
||||
}
|
||||
outBuf, err := encode(dopts.codec, args, compressor, cbuf, outPayload)
|
||||
if err != nil {
|
||||
return nil, Errorf(codes.Internal, "grpc: %v", err)
|
||||
return err
|
||||
}
|
||||
if c.maxSendMessageSize == nil {
|
||||
return Errorf(codes.Internal, "callInfo maxSendMessageSize field uninitialized(nil)")
|
||||
}
|
||||
if len(outBuf) > *c.maxSendMessageSize {
|
||||
return Errorf(codes.ResourceExhausted, "grpc: trying to send message larger than max (%d vs. %d)", len(outBuf), *c.maxSendMessageSize)
|
||||
}
|
||||
err = t.Write(stream, outBuf, opts)
|
||||
if err == nil && outPayload != nil {
|
||||
@ -130,10 +136,10 @@ func sendRequest(ctx context.Context, dopts dialOptions, compressor Compressor,
|
||||
// does not exist.) so that t.Write could get io.EOF from wait(...). Leave the following
|
||||
// recvResponse to get the final status.
|
||||
if err != nil && err != io.EOF {
|
||||
return nil, err
|
||||
return err
|
||||
}
|
||||
// Sent successfully.
|
||||
return stream, nil
|
||||
return nil
|
||||
}
|
||||
|
||||
// Invoke sends the RPC request on the wire and returns after response is received.
|
||||
@ -148,14 +154,18 @@ func Invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
|
||||
|
||||
func invoke(ctx context.Context, method string, args, reply interface{}, cc *ClientConn, opts ...CallOption) (e error) {
|
||||
c := defaultCallInfo
|
||||
if mc, ok := cc.getMethodConfig(method); ok {
|
||||
c.failFast = !mc.WaitForReady
|
||||
if mc.Timeout > 0 {
|
||||
var cancel context.CancelFunc
|
||||
ctx, cancel = context.WithTimeout(ctx, mc.Timeout)
|
||||
defer cancel()
|
||||
}
|
||||
mc := cc.GetMethodConfig(method)
|
||||
if mc.WaitForReady != nil {
|
||||
c.failFast = !*mc.WaitForReady
|
||||
}
|
||||
|
||||
if mc.Timeout != nil && *mc.Timeout >= 0 {
|
||||
var cancel context.CancelFunc
|
||||
ctx, cancel = context.WithTimeout(ctx, *mc.Timeout)
|
||||
defer cancel()
|
||||
}
|
||||
|
||||
opts = append(cc.dopts.callOptions, opts...)
|
||||
for _, o := range opts {
|
||||
if err := o.before(&c); err != nil {
|
||||
return toRPCErr(err)
|
||||
@ -166,6 +176,10 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
|
||||
o.after(&c)
|
||||
}
|
||||
}()
|
||||
|
||||
c.maxSendMessageSize = getMaxSize(mc.MaxReqSize, c.maxSendMessageSize, defaultClientMaxSendMessageSize)
|
||||
c.maxReceiveMessageSize = getMaxSize(mc.MaxRespSize, c.maxReceiveMessageSize, defaultClientMaxReceiveMessageSize)
|
||||
|
||||
if EnableTracing {
|
||||
c.traceInfo.tr = trace.New("grpc.Sent."+methodFamily(method), method)
|
||||
defer c.traceInfo.tr.Finish()
|
||||
@ -182,26 +196,25 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
|
||||
}
|
||||
}()
|
||||
}
|
||||
ctx = newContextWithRPCInfo(ctx)
|
||||
sh := cc.dopts.copts.StatsHandler
|
||||
if sh != nil {
|
||||
ctx = sh.TagRPC(ctx, &stats.RPCTagInfo{FullMethodName: method})
|
||||
ctx = sh.TagRPC(ctx, &stats.RPCTagInfo{FullMethodName: method, FailFast: c.failFast})
|
||||
begin := &stats.Begin{
|
||||
Client: true,
|
||||
BeginTime: time.Now(),
|
||||
FailFast: c.failFast,
|
||||
}
|
||||
sh.HandleRPC(ctx, begin)
|
||||
}
|
||||
defer func() {
|
||||
if sh != nil {
|
||||
defer func() {
|
||||
end := &stats.End{
|
||||
Client: true,
|
||||
EndTime: time.Now(),
|
||||
Error: e,
|
||||
}
|
||||
sh.HandleRPC(ctx, end)
|
||||
}
|
||||
}()
|
||||
}()
|
||||
}
|
||||
topts := &transport.Options{
|
||||
Last: true,
|
||||
Delay: false,
|
||||
@ -223,6 +236,9 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
|
||||
if cc.dopts.cp != nil {
|
||||
callHdr.SendCompress = cc.dopts.cp.Type()
|
||||
}
|
||||
if c.creds != nil {
|
||||
callHdr.Creds = c.creds
|
||||
}
|
||||
|
||||
gopts := BalancerGetOptions{
|
||||
BlockingWait: !c.failFast,
|
||||
@ -230,7 +246,7 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
|
||||
t, put, err = cc.getTransport(ctx, gopts)
|
||||
if err != nil {
|
||||
// TODO(zhaoq): Probably revisit the error handling.
|
||||
if _, ok := err.(*rpcError); ok {
|
||||
if _, ok := status.FromError(err); ok {
|
||||
return err
|
||||
}
|
||||
if err == errConnClosing || err == errConnUnavailable {
|
||||
@ -245,19 +261,35 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
|
||||
if c.traceInfo.tr != nil {
|
||||
c.traceInfo.tr.LazyLog(&payload{sent: true, msg: args}, true)
|
||||
}
|
||||
stream, err = sendRequest(ctx, cc.dopts, cc.dopts.cp, callHdr, t, args, topts)
|
||||
stream, err = t.NewStream(ctx, callHdr)
|
||||
if err != nil {
|
||||
if put != nil {
|
||||
if _, ok := err.(transport.ConnectionError); ok {
|
||||
// If error is connection error, transport was sending data on wire,
|
||||
// and we are not sure if anything has been sent on wire.
|
||||
// If error is not connection error, we are sure nothing has been sent.
|
||||
updateRPCInfoInContext(ctx, rpcInfo{bytesSent: true, bytesReceived: false})
|
||||
}
|
||||
put()
|
||||
}
|
||||
if _, ok := err.(transport.ConnectionError); (ok || err == transport.ErrStreamDrain) && !c.failFast {
|
||||
continue
|
||||
}
|
||||
return toRPCErr(err)
|
||||
}
|
||||
err = sendRequest(ctx, cc.dopts, cc.dopts.cp, &c, callHdr, stream, t, args, topts)
|
||||
if err != nil {
|
||||
if put != nil {
|
||||
updateRPCInfoInContext(ctx, rpcInfo{
|
||||
bytesSent: stream.BytesSent(),
|
||||
bytesReceived: stream.BytesReceived(),
|
||||
})
|
||||
put()
|
||||
put = nil
|
||||
}
|
||||
// Retry a non-failfast RPC when
|
||||
// i) there is a connection error; or
|
||||
// ii) the server started to drain before this RPC was initiated.
|
||||
if _, ok := err.(transport.ConnectionError); ok || err == transport.ErrStreamDrain {
|
||||
if c.failFast {
|
||||
return toRPCErr(err)
|
||||
}
|
||||
if _, ok := err.(transport.ConnectionError); (ok || err == transport.ErrStreamDrain) && !c.failFast {
|
||||
continue
|
||||
}
|
||||
return toRPCErr(err)
|
||||
@ -265,13 +297,13 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
|
||||
err = recvResponse(ctx, cc.dopts, t, &c, stream, reply)
|
||||
if err != nil {
|
||||
if put != nil {
|
||||
updateRPCInfoInContext(ctx, rpcInfo{
|
||||
bytesSent: stream.BytesSent(),
|
||||
bytesReceived: stream.BytesReceived(),
|
||||
})
|
||||
put()
|
||||
put = nil
|
||||
}
|
||||
if _, ok := err.(transport.ConnectionError); ok || err == transport.ErrStreamDrain {
|
||||
if c.failFast {
|
||||
return toRPCErr(err)
|
||||
}
|
||||
if _, ok := err.(transport.ConnectionError); (ok || err == transport.ErrStreamDrain) && !c.failFast {
|
||||
continue
|
||||
}
|
||||
return toRPCErr(err)
|
||||
@ -281,9 +313,12 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
|
||||
}
|
||||
t.CloseStream(stream, nil)
|
||||
if put != nil {
|
||||
updateRPCInfoInContext(ctx, rpcInfo{
|
||||
bytesSent: stream.BytesSent(),
|
||||
bytesReceived: stream.BytesReceived(),
|
||||
})
|
||||
put()
|
||||
put = nil
|
||||
}
|
||||
return Errorf(stream.StatusCode(), "%s", stream.StatusDesc())
|
||||
return stream.Status().Err()
|
||||
}
|
||||
}
|
||||
|
222 cmd/vendor/google.golang.org/grpc/clientconn.go (generated, vendored)
@@ -36,7 +36,6 @@ package grpc
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"math"
|
||||
"net"
|
||||
"strings"
|
||||
"sync"
|
||||
@ -57,8 +56,7 @@ var (
|
||||
ErrClientConnClosing = errors.New("grpc: the client connection is closing")
|
||||
// ErrClientConnTimeout indicates that the ClientConn cannot establish the
|
||||
// underlying connections within the specified timeout.
|
||||
// DEPRECATED: Please use context.DeadlineExceeded instead. This error will be
|
||||
// removed in Q1 2017.
|
||||
// DEPRECATED: Please use context.DeadlineExceeded instead.
|
||||
ErrClientConnTimeout = errors.New("grpc: timed out when dialing")
|
||||
|
||||
// errNoTransportSecurity indicates that there is no transport security
|
||||
@ -80,7 +78,8 @@ var (
|
||||
errConnClosing = errors.New("grpc: the connection is closing")
|
||||
// errConnUnavailable indicates that the connection is unavailable.
|
||||
errConnUnavailable = errors.New("grpc: the connection is unavailable")
|
||||
errNoAddr = errors.New("grpc: there is no address available to dial")
|
||||
// errBalancerClosed indicates that the balancer is closed.
|
||||
errBalancerClosed = errors.New("grpc: balancer is closed")
|
||||
// minimum time to give a connection to complete
|
||||
minConnectTimeout = 20 * time.Second
|
||||
)
|
||||
@ -88,30 +87,54 @@ var (
|
||||
// dialOptions configure a Dial call. dialOptions are set by the DialOption
|
||||
// values passed to Dial.
|
||||
type dialOptions struct {
|
||||
unaryInt UnaryClientInterceptor
|
||||
streamInt StreamClientInterceptor
|
||||
codec Codec
|
||||
cp Compressor
|
||||
dc Decompressor
|
||||
bs backoffStrategy
|
||||
balancer Balancer
|
||||
block bool
|
||||
insecure bool
|
||||
timeout time.Duration
|
||||
scChan <-chan ServiceConfig
|
||||
copts transport.ConnectOptions
|
||||
maxMsgSize int
|
||||
unaryInt UnaryClientInterceptor
|
||||
streamInt StreamClientInterceptor
|
||||
codec Codec
|
||||
cp Compressor
|
||||
dc Decompressor
|
||||
bs backoffStrategy
|
||||
balancer Balancer
|
||||
block bool
|
||||
insecure bool
|
||||
timeout time.Duration
|
||||
scChan <-chan ServiceConfig
|
||||
copts transport.ConnectOptions
|
||||
callOptions []CallOption
|
||||
}
|
||||
|
||||
const defaultClientMaxMsgSize = math.MaxInt32
|
||||
const (
|
||||
defaultClientMaxReceiveMessageSize = 1024 * 1024 * 4
|
||||
defaultClientMaxSendMessageSize = 1024 * 1024 * 4
|
||||
)
|
||||
|
||||
// DialOption configures how we set up the connection.
|
||||
type DialOption func(*dialOptions)
|
||||
|
||||
// WithMaxMsgSize returns a DialOption which sets the maximum message size the client can receive.
|
||||
func WithMaxMsgSize(s int) DialOption {
|
||||
// WithInitialWindowSize returns a DialOption which sets the value for initial window size on a stream.
|
||||
// The lower bound for window size is 64K and any value smaller than that will be ignored.
|
||||
func WithInitialWindowSize(s int32) DialOption {
|
||||
return func(o *dialOptions) {
|
||||
o.maxMsgSize = s
|
||||
o.copts.InitialWindowSize = s
|
||||
}
|
||||
}
|
||||
|
||||
// WithInitialConnWindowSize returns a DialOption which sets the value for initial window size on a connection.
|
||||
// The lower bound for window size is 64K and any value smaller than that will be ignored.
|
||||
func WithInitialConnWindowSize(s int32) DialOption {
|
||||
return func(o *dialOptions) {
|
||||
o.copts.InitialConnWindowSize = s
|
||||
}
|
||||
}
|
||||
|
||||
// WithMaxMsgSize returns a DialOption which sets the maximum message size the client can receive. Deprecated: use WithDefaultCallOptions(MaxCallRecvMsgSize(s)) instead.
|
||||
func WithMaxMsgSize(s int) DialOption {
|
||||
return WithDefaultCallOptions(MaxCallRecvMsgSize(s))
|
||||
}
|
||||
|
||||
// WithDefaultCallOptions returns a DialOption which sets the default CallOptions for calls over the connection.
|
||||
func WithDefaultCallOptions(cos ...CallOption) DialOption {
|
||||
return func(o *dialOptions) {
|
||||
o.callOptions = append(o.callOptions, cos...)
|
||||
}
|
||||
}
|
||||
|
||||
@ -206,7 +229,7 @@ func WithTransportCredentials(creds credentials.TransportCredentials) DialOption
|
||||
}
|
||||
|
||||
// WithPerRPCCredentials returns a DialOption which sets
|
||||
// credentials which will place auth state on each outbound RPC.
|
||||
// credentials and places auth state on each outbound RPC.
|
||||
func WithPerRPCCredentials(creds credentials.PerRPCCredentials) DialOption {
|
||||
return func(o *dialOptions) {
|
||||
o.copts.PerRPCCredentials = append(o.copts.PerRPCCredentials, creds)
|
||||
@ -243,7 +266,7 @@ func WithStatsHandler(h stats.Handler) DialOption {
|
||||
}
|
||||
}
|
||||
|
||||
// FailOnNonTempDialError returns a DialOption that specified if gRPC fails on non-temporary dial errors.
|
||||
// FailOnNonTempDialError returns a DialOption that specifies if gRPC fails on non-temporary dial errors.
|
||||
// If f is true, and dialer returns a non-temporary error, gRPC will fail the connection to the network
|
||||
// address and won't try to reconnect.
|
||||
// The default value of FailOnNonTempDialError is false.
|
||||
@ -297,22 +320,29 @@ func Dial(target string, opts ...DialOption) (*ClientConn, error) {
|
||||
}
|
||||
|
||||
// DialContext creates a client connection to the given target. ctx can be used to
|
||||
// cancel or expire the pending connecting. Once this function returns, the
|
||||
// cancel or expire the pending connection. Once this function returns, the
|
||||
// cancellation and expiration of ctx will be noop. Users should call ClientConn.Close
|
||||
// to terminate all the pending operations after this function returns.
|
||||
// This is the EXPERIMENTAL API.
|
||||
func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *ClientConn, err error) {
|
||||
cc := &ClientConn{
|
||||
target: target,
|
||||
conns: make(map[Address]*addrConn),
|
||||
}
|
||||
cc.ctx, cc.cancel = context.WithCancel(context.Background())
|
||||
cc.dopts.maxMsgSize = defaultClientMaxMsgSize
|
||||
|
||||
for _, opt := range opts {
|
||||
opt(&cc.dopts)
|
||||
}
|
||||
cc.mkp = cc.dopts.copts.KeepaliveParams
|
||||
|
||||
if cc.dopts.copts.Dialer == nil {
|
||||
cc.dopts.copts.Dialer = newProxyDialer(
|
||||
func(ctx context.Context, addr string) (net.Conn, error) {
|
||||
return dialContext(ctx, "tcp", addr)
|
||||
},
|
||||
)
|
||||
}
|
||||
|
||||
grpcUA := "grpc-go/" + Version
|
||||
if cc.dopts.copts.UserAgent != "" {
|
||||
cc.dopts.copts.UserAgent += " " + grpcUA
|
||||
} else {
|
||||
@ -337,15 +367,16 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *
|
||||
}
|
||||
}()
|
||||
|
||||
scSet := false
|
||||
if cc.dopts.scChan != nil {
|
||||
// Wait for the initial service config.
|
||||
// Try to get an initial service config.
|
||||
select {
|
||||
case sc, ok := <-cc.dopts.scChan:
|
||||
if ok {
|
||||
cc.sc = sc
|
||||
scSet = true
|
||||
}
|
||||
case <-ctx.Done():
|
||||
return nil, ctx.Err()
|
||||
default:
|
||||
}
|
||||
}
|
||||
// Set defaults.
|
||||
@ -361,53 +392,44 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *
|
||||
} else if cc.dopts.insecure && cc.dopts.copts.Authority != "" {
|
||||
cc.authority = cc.dopts.copts.Authority
|
||||
} else {
|
||||
colonPos := strings.LastIndex(target, ":")
|
||||
if colonPos == -1 {
|
||||
colonPos = len(target)
|
||||
}
|
||||
cc.authority = target[:colonPos]
|
||||
cc.authority = target
|
||||
}
|
||||
var ok bool
|
||||
waitC := make(chan error, 1)
|
||||
go func() {
|
||||
var addrs []Address
|
||||
defer close(waitC)
|
||||
if cc.dopts.balancer == nil && cc.sc.LB != nil {
|
||||
cc.dopts.balancer = cc.sc.LB
|
||||
}
|
||||
if cc.dopts.balancer == nil {
|
||||
// Connect to target directly if balancer is nil.
|
||||
addrs = append(addrs, Address{Addr: target})
|
||||
} else {
|
||||
if cc.dopts.balancer != nil {
|
||||
var credsClone credentials.TransportCredentials
|
||||
if creds != nil {
|
||||
credsClone = creds.Clone()
|
||||
}
|
||||
config := BalancerConfig{
|
||||
DialCreds: credsClone,
|
||||
Dialer: cc.dopts.copts.Dialer,
|
||||
}
|
||||
if err := cc.dopts.balancer.Start(target, config); err != nil {
|
||||
waitC <- err
|
||||
return
|
||||
}
|
||||
ch := cc.dopts.balancer.Notify()
|
||||
if ch == nil {
|
||||
// There is no name resolver installed.
|
||||
addrs = append(addrs, Address{Addr: target})
|
||||
} else {
|
||||
addrs, ok = <-ch
|
||||
if !ok || len(addrs) == 0 {
|
||||
waitC <- errNoAddr
|
||||
return
|
||||
if ch != nil {
|
||||
if cc.dopts.block {
|
||||
doneChan := make(chan struct{})
|
||||
go cc.lbWatcher(doneChan)
|
||||
<-doneChan
|
||||
} else {
|
||||
go cc.lbWatcher(nil)
|
||||
}
|
||||
}
|
||||
}
|
||||
for _, a := range addrs {
|
||||
if err := cc.resetAddrConn(a, false, nil); err != nil {
|
||||
waitC <- err
|
||||
return
|
||||
}
|
||||
}
|
||||
close(waitC)
|
||||
// No balancer, or no resolver within the balancer. Connect directly.
|
||||
if err := cc.resetAddrConn(Address{Addr: target}, cc.dopts.block, nil); err != nil {
|
||||
waitC <- err
|
||||
return
|
||||
}
|
||||
}()
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
@ -417,16 +439,21 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
// If balancer is nil or balancer.Notify() is nil, ok will be false here.
|
||||
// The lbWatcher goroutine will not be created.
|
||||
if ok {
|
||||
go cc.lbWatcher()
|
||||
if cc.dopts.scChan != nil && !scSet {
|
||||
// Blocking wait for the initial service config.
|
||||
select {
|
||||
case sc, ok := <-cc.dopts.scChan:
|
||||
if ok {
|
||||
cc.sc = sc
|
||||
}
|
||||
case <-ctx.Done():
|
||||
return nil, ctx.Err()
|
||||
}
|
||||
}
|
||||
|
||||
if cc.dopts.scChan != nil {
|
||||
go cc.scWatcher()
|
||||
}
|
||||
|
||||
return cc, nil
|
||||
}
|
||||
|
||||
@ -475,9 +502,14 @@ type ClientConn struct {
|
||||
mu sync.RWMutex
|
||||
sc ServiceConfig
|
||||
conns map[Address]*addrConn
|
||||
// Keepalive parameter can be udated if a GoAway is received.
|
||||
mkp keepalive.ClientParameters
|
||||
}
|
||||
|
||||
func (cc *ClientConn) lbWatcher() {
|
||||
// lbWatcher watches the Notify channel of the balancer in cc and manages
|
||||
// connections accordingly. If doneChan is not nil, it is closed after the
|
||||
// first successfull connection is made.
|
||||
func (cc *ClientConn) lbWatcher(doneChan chan struct{}) {
|
||||
for addrs := range cc.dopts.balancer.Notify() {
|
||||
var (
|
||||
add []Address // Addresses need to setup connections.
|
||||
@ -504,7 +536,15 @@ func (cc *ClientConn) lbWatcher() {
|
||||
}
|
||||
cc.mu.Unlock()
|
||||
for _, a := range add {
|
||||
cc.resetAddrConn(a, true, nil)
|
||||
if doneChan != nil {
|
||||
err := cc.resetAddrConn(a, true, nil)
|
||||
if err == nil {
|
||||
close(doneChan)
|
||||
doneChan = nil
|
||||
}
|
||||
} else {
|
||||
cc.resetAddrConn(a, false, nil)
|
||||
}
|
||||
}
|
||||
for _, c := range del {
|
||||
c.tearDown(errConnDrain)
|
||||
@ -533,12 +573,15 @@ func (cc *ClientConn) scWatcher() {
|
||||
// resetAddrConn creates an addrConn for addr and adds it to cc.conns.
|
||||
// If there is an old addrConn for addr, it will be torn down, using tearDownErr as the reason.
|
||||
// If tearDownErr is nil, errConnDrain will be used instead.
|
||||
func (cc *ClientConn) resetAddrConn(addr Address, skipWait bool, tearDownErr error) error {
|
||||
func (cc *ClientConn) resetAddrConn(addr Address, block bool, tearDownErr error) error {
|
||||
ac := &addrConn{
|
||||
cc: cc,
|
||||
addr: addr,
|
||||
dopts: cc.dopts,
|
||||
}
|
||||
cc.mu.RLock()
|
||||
ac.dopts.copts.KeepaliveParams = cc.mkp
|
||||
cc.mu.RUnlock()
|
||||
ac.ctx, ac.cancel = context.WithCancel(cc.ctx)
|
||||
ac.stateCV = sync.NewCond(&ac.mu)
|
||||
if EnableTracing {
|
||||
@ -583,8 +626,7 @@ func (cc *ClientConn) resetAddrConn(addr Address, skipWait bool, tearDownErr err
|
||||
stale.tearDown(tearDownErr)
|
||||
}
|
||||
}
|
||||
// skipWait may overwrite the decision in ac.dopts.block.
|
||||
if ac.dopts.block && !skipWait {
|
||||
if block {
|
||||
if err := ac.resetTransport(false); err != nil {
|
||||
if err != errConnClosing {
|
||||
// Tear down ac and delete it from cc.conns.
|
||||
@ -617,12 +659,23 @@ func (cc *ClientConn) resetAddrConn(addr Address, skipWait bool, tearDownErr err
|
||||
return nil
|
||||
}
|
||||
|
||||
// TODO: Avoid the locking here.
|
||||
func (cc *ClientConn) getMethodConfig(method string) (m MethodConfig, ok bool) {
|
||||
// GetMethodConfig gets the method config of the input method.
|
||||
// If there's an exact match for input method (i.e. /service/method), we return
|
||||
// the corresponding MethodConfig.
|
||||
// If there isn't an exact match for the input method, we look for the default config
|
||||
// under the service (i.e /service/). If there is a default MethodConfig for
|
||||
// the serivce, we return it.
|
||||
// Otherwise, we return an empty MethodConfig.
|
||||
func (cc *ClientConn) GetMethodConfig(method string) MethodConfig {
|
||||
// TODO: Avoid the locking here.
|
||||
cc.mu.RLock()
|
||||
defer cc.mu.RUnlock()
|
||||
m, ok = cc.sc.Methods[method]
|
||||
return
|
||||
m, ok := cc.sc.Methods[method]
|
||||
if !ok {
|
||||
i := strings.LastIndex(method, "/")
|
||||
m, _ = cc.sc.Methods[method[:i+1]]
|
||||
}
|
||||
return m
|
||||
}
|
||||
|
||||
func (cc *ClientConn) getTransport(ctx context.Context, opts BalancerGetOptions) (transport.ClientTransport, func(), error) {
|
||||
@ -663,6 +716,7 @@ func (cc *ClientConn) getTransport(ctx context.Context, opts BalancerGetOptions)
|
||||
}
|
||||
if !ok {
|
||||
if put != nil {
|
||||
updateRPCInfoInContext(ctx, rpcInfo{bytesSent: false, bytesReceived: false})
|
||||
put()
|
||||
}
|
||||
return nil, nil, errConnClosing
|
||||
@ -670,6 +724,7 @@ func (cc *ClientConn) getTransport(ctx context.Context, opts BalancerGetOptions)
|
||||
t, err := ac.wait(ctx, cc.dopts.balancer != nil, !opts.BlockingWait)
|
||||
if err != nil {
|
||||
if put != nil {
|
||||
updateRPCInfoInContext(ctx, rpcInfo{bytesSent: false, bytesReceived: false})
|
||||
put()
|
||||
}
|
||||
return nil, nil, err
|
||||
@ -721,6 +776,20 @@ type addrConn struct {
|
||||
tearDownErr error
|
||||
}
|
||||
|
||||
// adjustParams updates parameters used to create transports upon
|
||||
// receiving a GoAway.
|
||||
func (ac *addrConn) adjustParams(r transport.GoAwayReason) {
|
||||
switch r {
|
||||
case transport.TooManyPings:
|
||||
v := 2 * ac.dopts.copts.KeepaliveParams.Time
|
||||
ac.cc.mu.Lock()
|
||||
if v > ac.cc.mkp.Time {
|
||||
ac.cc.mkp.Time = v
|
||||
}
|
||||
ac.cc.mu.Unlock()
|
||||
}
|
||||
}
|
||||
|
||||
// printf records an event in ac's event log, unless ac has been closed.
|
||||
// REQUIRES ac.mu is held.
|
||||
func (ac *addrConn) printf(format string, a ...interface{}) {
|
||||
@ -829,11 +898,14 @@ func (ac *addrConn) resetTransport(closeTransport bool) error {
|
||||
}
|
||||
ac.mu.Unlock()
|
||||
closeTransport = false
|
||||
timer := time.NewTimer(sleepTime - time.Since(connectTime))
|
||||
select {
|
||||
case <-time.After(sleepTime - time.Since(connectTime)):
|
||||
case <-timer.C:
|
||||
case <-ac.ctx.Done():
|
||||
timer.Stop()
|
||||
return ac.ctx.Err()
|
||||
}
|
||||
timer.Stop()
|
||||
continue
|
||||
}
|
||||
ac.mu.Lock()
|
||||
@ -877,6 +949,7 @@ func (ac *addrConn) transportMonitor() {
|
||||
}
|
||||
return
|
||||
case <-t.GoAway():
|
||||
ac.adjustParams(t.GetGoAwayReason())
|
||||
// If GoAway happens without any network I/O error, ac is closed without shutting down the
|
||||
// underlying transport (the transport will be closed when all the pending RPCs finished or
|
||||
// failed.).
|
||||
@ -885,9 +958,9 @@ func (ac *addrConn) transportMonitor() {
|
||||
// In both cases, a new ac is created.
|
||||
select {
|
||||
case <-t.Error():
|
||||
ac.cc.resetAddrConn(ac.addr, true, errNetworkIO)
|
||||
ac.cc.resetAddrConn(ac.addr, false, errNetworkIO)
|
||||
default:
|
||||
ac.cc.resetAddrConn(ac.addr, true, errConnDrain)
|
||||
ac.cc.resetAddrConn(ac.addr, false, errConnDrain)
|
||||
}
|
||||
return
|
||||
case <-t.Error():
|
||||
@ -896,7 +969,8 @@ func (ac *addrConn) transportMonitor() {
|
||||
t.Close()
|
||||
return
|
||||
case <-t.GoAway():
|
||||
ac.cc.resetAddrConn(ac.addr, true, errNetworkIO)
|
||||
ac.adjustParams(t.GetGoAwayReason())
|
||||
ac.cc.resetAddrConn(ac.addr, false, errNetworkIO)
|
||||
return
|
||||
default:
|
||||
}
|
||||
|
119 cmd/vendor/google.golang.org/grpc/codec.go (generated, vendored, Normal file)
@@ -0,0 +1,119 @@
|
||||
/*
|
||||
*
|
||||
* Copyright 2014, Google Inc.
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
* modification, are permitted provided that the following conditions are
|
||||
* met:
|
||||
*
|
||||
* * Redistributions of source code must retain the above copyright
|
||||
* notice, this list of conditions and the following disclaimer.
|
||||
* * Redistributions in binary form must reproduce the above
|
||||
* copyright notice, this list of conditions and the following disclaimer
|
||||
* in the documentation and/or other materials provided with the
|
||||
* distribution.
|
||||
* * Neither the name of Google Inc. nor the names of its
|
||||
* contributors may be used to endorse or promote products derived from
|
||||
* this software without specific prior written permission.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
*
|
||||
*/
|
||||
|
||||
package grpc
|
||||
|
||||
import (
|
||||
"math"
|
||||
"sync"
|
||||
|
||||
"github.com/golang/protobuf/proto"
|
||||
)
|
||||
|
||||
// Codec defines the interface gRPC uses to encode and decode messages.
|
||||
// Note that implementations of this interface must be thread safe;
|
||||
// a Codec's methods can be called from concurrent goroutines.
|
||||
type Codec interface {
|
||||
// Marshal returns the wire format of v.
|
||||
Marshal(v interface{}) ([]byte, error)
|
||||
// Unmarshal parses the wire format into v.
|
||||
Unmarshal(data []byte, v interface{}) error
|
||||
// String returns the name of the Codec implementation. The returned
|
||||
// string will be used as part of content type in transmission.
|
||||
String() string
|
||||
}
|
||||
|
||||
// protoCodec is a Codec implementation with protobuf. It is the default codec for gRPC.
|
||||
type protoCodec struct {
|
||||
}
|
||||
|
||||
type cachedProtoBuffer struct {
|
||||
lastMarshaledSize uint32
|
||||
proto.Buffer
|
||||
}
|
||||
|
||||
func capToMaxInt32(val int) uint32 {
|
||||
if val > math.MaxInt32 {
|
||||
return uint32(math.MaxInt32)
|
||||
}
|
||||
return uint32(val)
|
||||
}
|
||||
|
||||
func (p protoCodec) marshal(v interface{}, cb *cachedProtoBuffer) ([]byte, error) {
|
||||
protoMsg := v.(proto.Message)
|
||||
newSlice := make([]byte, 0, cb.lastMarshaledSize)
|
||||
|
||||
cb.SetBuf(newSlice)
|
||||
cb.Reset()
|
||||
if err := cb.Marshal(protoMsg); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
out := cb.Bytes()
|
||||
cb.lastMarshaledSize = capToMaxInt32(len(out))
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (p protoCodec) Marshal(v interface{}) ([]byte, error) {
|
||||
cb := protoBufferPool.Get().(*cachedProtoBuffer)
|
||||
out, err := p.marshal(v, cb)
|
||||
|
||||
// put back buffer and lose the ref to the slice
|
||||
cb.SetBuf(nil)
|
||||
protoBufferPool.Put(cb)
|
||||
return out, err
|
||||
}
|
||||
|
||||
func (p protoCodec) Unmarshal(data []byte, v interface{}) error {
|
||||
cb := protoBufferPool.Get().(*cachedProtoBuffer)
|
||||
cb.SetBuf(data)
|
||||
v.(proto.Message).Reset()
|
||||
err := cb.Unmarshal(v.(proto.Message))
|
||||
cb.SetBuf(nil)
|
||||
protoBufferPool.Put(cb)
|
||||
return err
|
||||
}
|
||||
|
||||
func (protoCodec) String() string {
|
||||
return "proto"
|
||||
}
|
||||
|
||||
var (
|
||||
protoBufferPool = &sync.Pool{
|
||||
New: func() interface{} {
|
||||
return &cachedProtoBuffer{
|
||||
Buffer: proto.Buffer{},
|
||||
lastMarshaledSize: 16,
|
||||
}
|
||||
},
|
||||
}
|
||||
)
|
2 cmd/vendor/google.golang.org/grpc/codes/codes.go (generated, vendored)
@@ -44,7 +44,7 @@ const (
    // OK is returned on success.
    OK Code = 0

    // Canceled indicates the operation was cancelled (typically by the caller).
    // Canceled indicates the operation was canceled (typically by the caller).
    Canceled Code = 1

    // Unknown error. An example of where this error may be returned is
12 cmd/vendor/google.golang.org/grpc/credentials/credentials.go (generated, vendored)
@@ -102,6 +102,10 @@ type TransportCredentials interface {
    // authentication protocol on rawConn for clients. It returns the authenticated
    // connection and the corresponding auth information about the connection.
    // Implementations must use the provided context to implement timely cancellation.
    // gRPC will try to reconnect if the error returned is a temporary error
    // (io.EOF, context.DeadlineExceeded or err.Temporary() == true).
    // If the returned error is a wrapper error, implementations should make sure that
    // the error implements Temporary() to have the correct retry behaviors.
    ClientHandshake(context.Context, string, net.Conn) (net.Conn, AuthInfo, error)
    // ServerHandshake does the authentication handshake for servers. It returns
    // the authenticated connection and the corresponding auth information about
@@ -192,14 +196,14 @@ func NewTLS(c *tls.Config) TransportCredentials {
    return tc
}

// NewClientTLSFromCert constructs a TLS from the input certificate for client.
// NewClientTLSFromCert constructs TLS credentials from the input certificate for client.
// serverNameOverride is for testing only. If set to a non empty string,
// it will override the virtual host name of authority (e.g. :authority header field) in requests.
func NewClientTLSFromCert(cp *x509.CertPool, serverNameOverride string) TransportCredentials {
    return NewTLS(&tls.Config{ServerName: serverNameOverride, RootCAs: cp})
}

// NewClientTLSFromFile constructs a TLS from the input certificate file for client.
// NewClientTLSFromFile constructs TLS credentials from the input certificate file for client.
// serverNameOverride is for testing only. If set to a non empty string,
// it will override the virtual host name of authority (e.g. :authority header field) in requests.
func NewClientTLSFromFile(certFile, serverNameOverride string) (TransportCredentials, error) {
@@ -214,12 +218,12 @@ func NewClientTLSFromFile(certFile, serverNameOverride string) (TransportCredent
    return NewTLS(&tls.Config{ServerName: serverNameOverride, RootCAs: cp}), nil
}

// NewServerTLSFromCert constructs a TLS from the input certificate for server.
// NewServerTLSFromCert constructs TLS credentials from the input certificate for server.
func NewServerTLSFromCert(cert *tls.Certificate) TransportCredentials {
    return NewTLS(&tls.Config{Certificates: []tls.Certificate{*cert}})
}

// NewServerTLSFromFile constructs a TLS from the input certificate file and key
// NewServerTLSFromFile constructs TLS credentials from the input certificate file and key
// file for server.
func NewServerTLSFromFile(certFile, keyFile string) (TransportCredentials, error) {
    cert, err := tls.LoadX509KeyPair(certFile, keyFile)
113 cmd/vendor/google.golang.org/grpc/go16.go (generated, vendored, Normal file)
@@ -0,0 +1,113 @@
|
||||
// +build go1.6,!go1.7
|
||||
|
||||
/*
|
||||
* Copyright 2016, Google Inc.
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
* modification, are permitted provided that the following conditions are
|
||||
* met:
|
||||
*
|
||||
* * Redistributions of source code must retain the above copyright
|
||||
* notice, this list of conditions and the following disclaimer.
|
||||
* * Redistributions in binary form must reproduce the above
|
||||
* copyright notice, this list of conditions and the following disclaimer
|
||||
* in the documentation and/or other materials provided with the
|
||||
* distribution.
|
||||
* * Neither the name of Google Inc. nor the names of its
|
||||
* contributors may be used to endorse or promote products derived from
|
||||
* this software without specific prior written permission.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
*
|
||||
*/
|
||||
|
||||
package grpc
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"net"
|
||||
"net/http"
|
||||
"os"
|
||||
|
||||
"google.golang.org/grpc/codes"
|
||||
"google.golang.org/grpc/status"
|
||||
"google.golang.org/grpc/transport"
|
||||
|
||||
"golang.org/x/net/context"
|
||||
)
|
||||
|
||||
// dialContext connects to the address on the named network.
|
||||
func dialContext(ctx context.Context, network, address string) (net.Conn, error) {
|
||||
return (&net.Dialer{Cancel: ctx.Done()}).Dial(network, address)
|
||||
}
|
||||
|
||||
func sendHTTPRequest(ctx context.Context, req *http.Request, conn net.Conn) error {
|
||||
req.Cancel = ctx.Done()
|
||||
if err := req.Write(conn); err != nil {
|
||||
return fmt.Errorf("failed to write the HTTP request: %v", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// toRPCErr converts an error into an error from the status package.
|
||||
func toRPCErr(err error) error {
|
||||
if _, ok := status.FromError(err); ok {
|
||||
return err
|
||||
}
|
||||
switch e := err.(type) {
|
||||
case transport.StreamError:
|
||||
return status.Error(e.Code, e.Desc)
|
||||
case transport.ConnectionError:
|
||||
return status.Error(codes.Internal, e.Desc)
|
||||
default:
|
||||
switch err {
|
||||
case context.DeadlineExceeded:
|
||||
return status.Error(codes.DeadlineExceeded, err.Error())
|
||||
case context.Canceled:
|
||||
return status.Error(codes.Canceled, err.Error())
|
||||
case ErrClientConnClosing:
|
||||
return status.Error(codes.FailedPrecondition, err.Error())
|
||||
}
|
||||
}
|
||||
return status.Error(codes.Unknown, err.Error())
|
||||
}
|
||||
|
||||
// convertCode converts a standard Go error into its canonical code. Note that
|
||||
// this is only used to translate the error returned by the server applications.
|
||||
func convertCode(err error) codes.Code {
|
||||
switch err {
|
||||
case nil:
|
||||
return codes.OK
|
||||
case io.EOF:
|
||||
return codes.OutOfRange
|
||||
case io.ErrClosedPipe, io.ErrNoProgress, io.ErrShortBuffer, io.ErrShortWrite, io.ErrUnexpectedEOF:
|
||||
return codes.FailedPrecondition
|
||||
case os.ErrInvalid:
|
||||
return codes.InvalidArgument
|
||||
case context.Canceled:
|
||||
return codes.Canceled
|
||||
case context.DeadlineExceeded:
|
||||
return codes.DeadlineExceeded
|
||||
}
|
||||
switch {
|
||||
case os.IsExist(err):
|
||||
return codes.AlreadyExists
|
||||
case os.IsNotExist(err):
|
||||
return codes.NotFound
|
||||
case os.IsPermission(err):
|
||||
return codes.PermissionDenied
|
||||
}
|
||||
return codes.Unknown
|
||||
}
|
113
cmd/vendor/google.golang.org/grpc/go17.go
generated
vendored
Normal file
113
cmd/vendor/google.golang.org/grpc/go17.go
generated
vendored
Normal file
@ -0,0 +1,113 @@
|
||||
// +build go1.7
|
||||
|
||||
/*
|
||||
* Copyright 2016, Google Inc.
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
* modification, are permitted provided that the following conditions are
|
||||
* met:
|
||||
*
|
||||
* * Redistributions of source code must retain the above copyright
|
||||
* notice, this list of conditions and the following disclaimer.
|
||||
* * Redistributions in binary form must reproduce the above
|
||||
* copyright notice, this list of conditions and the following disclaimer
|
||||
* in the documentation and/or other materials provided with the
|
||||
* distribution.
|
||||
* * Neither the name of Google Inc. nor the names of its
|
||||
* contributors may be used to endorse or promote products derived from
|
||||
* this software without specific prior written permission.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
*
|
||||
*/
|
||||
|
||||
package grpc
|
||||
|
||||
import (
|
||||
"context"
|
||||
"io"
|
||||
"net"
|
||||
"net/http"
|
||||
"os"
|
||||
|
||||
"google.golang.org/grpc/codes"
|
||||
"google.golang.org/grpc/status"
|
||||
"google.golang.org/grpc/transport"
|
||||
|
||||
netctx "golang.org/x/net/context"
|
||||
)
|
||||
|
||||
// dialContext connects to the address on the named network.
|
||||
func dialContext(ctx context.Context, network, address string) (net.Conn, error) {
|
||||
return (&net.Dialer{}).DialContext(ctx, network, address)
|
||||
}
|
||||
|
||||
func sendHTTPRequest(ctx context.Context, req *http.Request, conn net.Conn) error {
|
||||
req = req.WithContext(ctx)
|
||||
if err := req.Write(conn); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// toRPCErr converts an error into an error from the status package.
|
||||
func toRPCErr(err error) error {
|
||||
if _, ok := status.FromError(err); ok {
|
||||
return err
|
||||
}
|
||||
switch e := err.(type) {
|
||||
case transport.StreamError:
|
||||
return status.Error(e.Code, e.Desc)
|
||||
case transport.ConnectionError:
|
||||
return status.Error(codes.Internal, e.Desc)
|
||||
default:
|
||||
switch err {
|
||||
case context.DeadlineExceeded, netctx.DeadlineExceeded:
|
||||
return status.Error(codes.DeadlineExceeded, err.Error())
|
||||
case context.Canceled, netctx.Canceled:
|
||||
return status.Error(codes.Canceled, err.Error())
|
||||
case ErrClientConnClosing:
|
||||
return status.Error(codes.FailedPrecondition, err.Error())
|
||||
}
|
||||
}
|
||||
return status.Error(codes.Unknown, err.Error())
|
||||
}
|
||||
|
||||
// convertCode converts a standard Go error into its canonical code. Note that
|
||||
// this is only used to translate the error returned by the server applications.
|
||||
func convertCode(err error) codes.Code {
|
||||
switch err {
|
||||
case nil:
|
||||
return codes.OK
|
||||
case io.EOF:
|
||||
return codes.OutOfRange
|
||||
case io.ErrClosedPipe, io.ErrNoProgress, io.ErrShortBuffer, io.ErrShortWrite, io.ErrUnexpectedEOF:
|
||||
return codes.FailedPrecondition
|
||||
case os.ErrInvalid:
|
||||
return codes.InvalidArgument
|
||||
case context.Canceled, netctx.Canceled:
|
||||
return codes.Canceled
|
||||
case context.DeadlineExceeded, netctx.DeadlineExceeded:
|
||||
return codes.DeadlineExceeded
|
||||
}
|
||||
switch {
|
||||
case os.IsExist(err):
|
||||
return codes.AlreadyExists
|
||||
case os.IsNotExist(err):
|
||||
return codes.NotFound
|
||||
case os.IsPermission(err):
|
||||
return codes.PermissionDenied
|
||||
}
|
||||
return codes.Unknown
|
||||
}
|
765
cmd/vendor/google.golang.org/grpc/grpclb.go
generated
vendored
Normal file
765
cmd/vendor/google.golang.org/grpc/grpclb.go
generated
vendored
Normal file
@ -0,0 +1,765 @@
|
||||
/*
|
||||
*
|
||||
* Copyright 2016, Google Inc.
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
* modification, are permitted provided that the following conditions are
|
||||
* met:
|
||||
*
|
||||
* * Redistributions of source code must retain the above copyright
|
||||
* notice, this list of conditions and the following disclaimer.
|
||||
* * Redistributions in binary form must reproduce the above
|
||||
* copyright notice, this list of conditions and the following disclaimer
|
||||
* in the documentation and/or other materials provided with the
|
||||
* distribution.
|
||||
* * Neither the name of Google Inc. nor the names of its
|
||||
* contributors may be used to endorse or promote products derived from
|
||||
* this software without specific prior written permission.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
*
|
||||
*/
|
||||
|
||||
package grpc
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"math/rand"
|
||||
"net"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"golang.org/x/net/context"
|
||||
"google.golang.org/grpc/codes"
|
||||
lbpb "google.golang.org/grpc/grpclb/grpc_lb_v1"
|
||||
"google.golang.org/grpc/grpclog"
|
||||
"google.golang.org/grpc/metadata"
|
||||
"google.golang.org/grpc/naming"
|
||||
)
|
||||
|
||||
// Client API for LoadBalancer service.
|
||||
// Mostly copied from generated pb.go file.
|
||||
// To avoid circular dependency.
|
||||
type loadBalancerClient struct {
|
||||
cc *ClientConn
|
||||
}
|
||||
|
||||
func (c *loadBalancerClient) BalanceLoad(ctx context.Context, opts ...CallOption) (*balanceLoadClientStream, error) {
|
||||
desc := &StreamDesc{
|
||||
StreamName: "BalanceLoad",
|
||||
ServerStreams: true,
|
||||
ClientStreams: true,
|
||||
}
|
||||
stream, err := NewClientStream(ctx, desc, c.cc, "/grpc.lb.v1.LoadBalancer/BalanceLoad", opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
x := &balanceLoadClientStream{stream}
|
||||
return x, nil
|
||||
}
|
||||
|
||||
type balanceLoadClientStream struct {
|
||||
ClientStream
|
||||
}
|
||||
|
||||
func (x *balanceLoadClientStream) Send(m *lbpb.LoadBalanceRequest) error {
|
||||
return x.ClientStream.SendMsg(m)
|
||||
}
|
||||
|
||||
func (x *balanceLoadClientStream) Recv() (*lbpb.LoadBalanceResponse, error) {
|
||||
m := new(lbpb.LoadBalanceResponse)
|
||||
if err := x.ClientStream.RecvMsg(m); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return m, nil
|
||||
}
|
||||
|
||||
// AddressType indicates the address type returned by name resolution.
|
||||
type AddressType uint8
|
||||
|
||||
const (
|
||||
// Backend indicates the server is a backend server.
|
||||
Backend AddressType = iota
|
||||
// GRPCLB indicates the server is a grpclb load balancer.
|
||||
GRPCLB
|
||||
)
|
||||
|
||||
// AddrMetadataGRPCLB contains the information the name resolver for grpclb should provide. The
|
||||
// name resolver used by the grpclb balancer is required to provide this type of metadata in
|
||||
// its address updates.
|
||||
type AddrMetadataGRPCLB struct {
|
||||
// AddrType is the type of server (grpc load balancer or backend).
|
||||
AddrType AddressType
|
||||
// ServerName is the name of the grpc load balancer. Used for authentication.
|
||||
ServerName string
|
||||
}
|
||||
|
||||
// NewGRPCLBBalancer creates a grpclb load balancer.
|
||||
func NewGRPCLBBalancer(r naming.Resolver) Balancer {
|
||||
return &balancer{
|
||||
r: r,
|
||||
}
|
||||
}
|
||||
|
||||
type remoteBalancerInfo struct {
|
||||
addr string
|
||||
// the server name used for authentication with the remote LB server.
|
||||
name string
|
||||
}
|
||||
|
||||
// grpclbAddrInfo consists of the information of a backend server.
|
||||
type grpclbAddrInfo struct {
|
||||
addr Address
|
||||
connected bool
|
||||
// dropForRateLimiting indicates whether this particular request should be
|
||||
// dropped by the client for rate limiting.
|
||||
dropForRateLimiting bool
|
||||
// dropForLoadBalancing indicates whether this particular request should be
|
||||
// dropped by the client for load balancing.
|
||||
dropForLoadBalancing bool
|
||||
}
|
||||
|
||||
type balancer struct {
|
||||
r naming.Resolver
|
||||
target string
|
||||
mu sync.Mutex
|
||||
seq int // a sequence number to make sure addrCh does not get stale addresses.
|
||||
w naming.Watcher
|
||||
addrCh chan []Address
|
||||
rbs []remoteBalancerInfo
|
||||
addrs []*grpclbAddrInfo
|
||||
next int
|
||||
waitCh chan struct{}
|
||||
done bool
|
||||
expTimer *time.Timer
|
||||
rand *rand.Rand
|
||||
|
||||
clientStats lbpb.ClientStats
|
||||
}
|
||||
|
||||
func (b *balancer) watchAddrUpdates(w naming.Watcher, ch chan []remoteBalancerInfo) error {
|
||||
updates, err := w.Next()
|
||||
if err != nil {
|
||||
grpclog.Printf("grpclb: failed to get next addr update from watcher: %v", err)
|
||||
return err
|
||||
}
|
||||
b.mu.Lock()
|
||||
defer b.mu.Unlock()
|
||||
if b.done {
|
||||
return ErrClientConnClosing
|
||||
}
|
||||
for _, update := range updates {
|
||||
switch update.Op {
|
||||
case naming.Add:
|
||||
var exist bool
|
||||
for _, v := range b.rbs {
|
||||
// TODO: Is the same addr with different server name a different balancer?
|
||||
if update.Addr == v.addr {
|
||||
exist = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if exist {
|
||||
continue
|
||||
}
|
||||
md, ok := update.Metadata.(*AddrMetadataGRPCLB)
|
||||
if !ok {
|
||||
// TODO: Revisit the handling here and may introduce some fallback mechanism.
|
||||
grpclog.Printf("The name resolution contains unexpected metadata %v", update.Metadata)
|
||||
continue
|
||||
}
|
||||
switch md.AddrType {
|
||||
case Backend:
|
||||
// TODO: Revisit the handling here and may introduce some fallback mechanism.
|
||||
grpclog.Printf("The name resolution does not give grpclb addresses")
|
||||
continue
|
||||
case GRPCLB:
|
||||
b.rbs = append(b.rbs, remoteBalancerInfo{
|
||||
addr: update.Addr,
|
||||
name: md.ServerName,
|
||||
})
|
||||
default:
|
||||
grpclog.Printf("Received unknow address type %d", md.AddrType)
|
||||
continue
|
||||
}
|
||||
case naming.Delete:
|
||||
for i, v := range b.rbs {
|
||||
if update.Addr == v.addr {
|
||||
copy(b.rbs[i:], b.rbs[i+1:])
|
||||
b.rbs = b.rbs[:len(b.rbs)-1]
|
||||
break
|
||||
}
|
||||
}
|
||||
default:
|
||||
grpclog.Println("Unknown update.Op ", update.Op)
|
||||
}
|
||||
}
|
||||
// TODO: Fall back to the basic round-robin load balancing if the resulting address is
|
||||
// not a load balancer.
|
||||
select {
|
||||
case <-ch:
|
||||
default:
|
||||
}
|
||||
ch <- b.rbs
|
||||
return nil
|
||||
}
|
||||
|
||||
func (b *balancer) serverListExpire(seq int) {
|
||||
b.mu.Lock()
|
||||
defer b.mu.Unlock()
|
||||
// TODO: gRPC interanls do not clear the connections when the server list is stale.
|
||||
// This means RPCs will keep using the existing server list until b receives new
|
||||
// server list even though the list is expired. Revisit this behavior later.
|
||||
if b.done || seq < b.seq {
|
||||
return
|
||||
}
|
||||
b.next = 0
|
||||
b.addrs = nil
|
||||
// Ask grpc internals to close all the corresponding connections.
|
||||
b.addrCh <- nil
|
||||
}
|
||||
|
||||
func convertDuration(d *lbpb.Duration) time.Duration {
|
||||
if d == nil {
|
||||
return 0
|
||||
}
|
||||
return time.Duration(d.Seconds)*time.Second + time.Duration(d.Nanos)*time.Nanosecond
|
||||
}
|
||||
|
||||
func (b *balancer) processServerList(l *lbpb.ServerList, seq int) {
|
||||
if l == nil {
|
||||
return
|
||||
}
|
||||
servers := l.GetServers()
|
||||
expiration := convertDuration(l.GetExpirationInterval())
|
||||
var (
|
||||
sl []*grpclbAddrInfo
|
||||
addrs []Address
|
||||
)
|
||||
for _, s := range servers {
|
||||
md := metadata.Pairs("lb-token", s.LoadBalanceToken)
|
||||
addr := Address{
|
||||
Addr: fmt.Sprintf("%s:%d", net.IP(s.IpAddress), s.Port),
|
||||
Metadata: &md,
|
||||
}
|
||||
sl = append(sl, &grpclbAddrInfo{
|
||||
addr: addr,
|
||||
dropForRateLimiting: s.DropForRateLimiting,
|
||||
dropForLoadBalancing: s.DropForLoadBalancing,
|
||||
})
|
||||
addrs = append(addrs, addr)
|
||||
}
|
||||
b.mu.Lock()
|
||||
defer b.mu.Unlock()
|
||||
if b.done || seq < b.seq {
|
||||
return
|
||||
}
|
||||
if len(sl) > 0 {
|
||||
// reset b.next to 0 when replacing the server list.
|
||||
b.next = 0
|
||||
b.addrs = sl
|
||||
b.addrCh <- addrs
|
||||
if b.expTimer != nil {
|
||||
b.expTimer.Stop()
|
||||
b.expTimer = nil
|
||||
}
|
||||
if expiration > 0 {
|
||||
b.expTimer = time.AfterFunc(expiration, func() {
|
||||
b.serverListExpire(seq)
|
||||
})
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (b *balancer) sendLoadReport(s *balanceLoadClientStream, interval time.Duration, done <-chan struct{}) {
|
||||
ticker := time.NewTicker(interval)
|
||||
defer ticker.Stop()
|
||||
for {
|
||||
select {
|
||||
case <-ticker.C:
|
||||
case <-done:
|
||||
return
|
||||
}
|
||||
b.mu.Lock()
|
||||
stats := b.clientStats
|
||||
b.clientStats = lbpb.ClientStats{} // Clear the stats.
|
||||
b.mu.Unlock()
|
||||
t := time.Now()
|
||||
stats.Timestamp = &lbpb.Timestamp{
|
||||
Seconds: t.Unix(),
|
||||
Nanos: int32(t.Nanosecond()),
|
||||
}
|
||||
if err := s.Send(&lbpb.LoadBalanceRequest{
|
||||
LoadBalanceRequestType: &lbpb.LoadBalanceRequest_ClientStats{
|
||||
ClientStats: &stats,
|
||||
},
|
||||
}); err != nil {
|
||||
grpclog.Printf("grpclb: failed to send load report: %v", err)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (b *balancer) callRemoteBalancer(lbc *loadBalancerClient, seq int) (retry bool) {
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
stream, err := lbc.BalanceLoad(ctx)
|
||||
if err != nil {
|
||||
grpclog.Printf("grpclb: failed to perform RPC to the remote balancer %v", err)
|
||||
return
|
||||
}
|
||||
b.mu.Lock()
|
||||
if b.done {
|
||||
b.mu.Unlock()
|
||||
return
|
||||
}
|
||||
b.mu.Unlock()
|
||||
initReq := &lbpb.LoadBalanceRequest{
|
||||
LoadBalanceRequestType: &lbpb.LoadBalanceRequest_InitialRequest{
|
||||
InitialRequest: &lbpb.InitialLoadBalanceRequest{
|
||||
Name: b.target,
|
||||
},
|
||||
},
|
||||
}
|
||||
if err := stream.Send(initReq); err != nil {
|
||||
grpclog.Printf("grpclb: failed to send init request: %v", err)
|
||||
// TODO: backoff on retry?
|
||||
return true
|
||||
}
|
||||
reply, err := stream.Recv()
|
||||
if err != nil {
|
||||
grpclog.Printf("grpclb: failed to recv init response: %v", err)
|
||||
// TODO: backoff on retry?
|
||||
return true
|
||||
}
|
||||
initResp := reply.GetInitialResponse()
|
||||
if initResp == nil {
|
||||
grpclog.Println("grpclb: reply from remote balancer did not include initial response.")
|
||||
return
|
||||
}
|
||||
// TODO: Support delegation.
|
||||
if initResp.LoadBalancerDelegate != "" {
|
||||
// delegation
|
||||
grpclog.Println("TODO: Delegation is not supported yet.")
|
||||
return
|
||||
}
|
||||
streamDone := make(chan struct{})
|
||||
defer close(streamDone)
|
||||
b.mu.Lock()
|
||||
b.clientStats = lbpb.ClientStats{} // Clear client stats.
|
||||
b.mu.Unlock()
|
||||
if d := convertDuration(initResp.ClientStatsReportInterval); d > 0 {
|
||||
go b.sendLoadReport(stream, d, streamDone)
|
||||
}
|
||||
// Retrieve the server list.
|
||||
for {
|
||||
reply, err := stream.Recv()
|
||||
if err != nil {
|
||||
grpclog.Printf("grpclb: failed to recv server list: %v", err)
|
||||
break
|
||||
}
|
||||
b.mu.Lock()
|
||||
if b.done || seq < b.seq {
|
||||
b.mu.Unlock()
|
||||
return
|
||||
}
|
||||
b.seq++ // tick when receiving a new list of servers.
|
||||
seq = b.seq
|
||||
b.mu.Unlock()
|
||||
if serverList := reply.GetServerList(); serverList != nil {
|
||||
b.processServerList(serverList, seq)
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
func (b *balancer) Start(target string, config BalancerConfig) error {
|
||||
b.rand = rand.New(rand.NewSource(time.Now().Unix()))
|
||||
// TODO: Fall back to the basic direct connection if there is no name resolver.
|
||||
if b.r == nil {
|
||||
return errors.New("there is no name resolver installed")
|
||||
}
|
||||
b.target = target
|
||||
b.mu.Lock()
|
||||
if b.done {
|
||||
b.mu.Unlock()
|
||||
return ErrClientConnClosing
|
||||
}
|
||||
b.addrCh = make(chan []Address)
|
||||
w, err := b.r.Resolve(target)
|
||||
if err != nil {
|
||||
b.mu.Unlock()
|
||||
grpclog.Printf("grpclb: failed to resolve address: %v, err: %v", target, err)
|
||||
return err
|
||||
}
|
||||
b.w = w
|
||||
b.mu.Unlock()
|
||||
balancerAddrsCh := make(chan []remoteBalancerInfo, 1)
|
||||
// Spawn a goroutine to monitor the name resolution of remote load balancer.
|
||||
go func() {
|
||||
for {
|
||||
if err := b.watchAddrUpdates(w, balancerAddrsCh); err != nil {
|
||||
grpclog.Printf("grpclb: the naming watcher stops working due to %v.\n", err)
|
||||
close(balancerAddrsCh)
|
||||
return
|
||||
}
|
||||
}
|
||||
}()
|
||||
// Spawn a goroutine to talk to the remote load balancer.
|
||||
go func() {
|
||||
var (
|
||||
cc *ClientConn
|
||||
// ccError is closed when there is an error in the current cc.
|
||||
// A new rb should be picked from rbs and connected.
|
||||
ccError chan struct{}
|
||||
rb *remoteBalancerInfo
|
||||
rbs []remoteBalancerInfo
|
||||
rbIdx int
|
||||
)
|
||||
|
||||
defer func() {
|
||||
if ccError != nil {
|
||||
select {
|
||||
case <-ccError:
|
||||
default:
|
||||
close(ccError)
|
||||
}
|
||||
}
|
||||
if cc != nil {
|
||||
cc.Close()
|
||||
}
|
||||
}()
|
||||
|
||||
for {
|
||||
var ok bool
|
||||
select {
|
||||
case rbs, ok = <-balancerAddrsCh:
|
||||
if !ok {
|
||||
return
|
||||
}
|
||||
foundIdx := -1
|
||||
if rb != nil {
|
||||
for i, trb := range rbs {
|
||||
if trb == *rb {
|
||||
foundIdx = i
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
if foundIdx >= 0 {
|
||||
if foundIdx >= 1 {
|
||||
// Move the address in use to the beginning of the list.
|
||||
b.rbs[0], b.rbs[foundIdx] = b.rbs[foundIdx], b.rbs[0]
|
||||
rbIdx = 0
|
||||
}
|
||||
continue // If found, don't dial new cc.
|
||||
} else if len(rbs) > 0 {
|
||||
// Pick a random one from the list, instead of always using the first one.
|
||||
if l := len(rbs); l > 1 && rb != nil {
|
||||
tmpIdx := b.rand.Intn(l - 1)
|
||||
b.rbs[0], b.rbs[tmpIdx] = b.rbs[tmpIdx], b.rbs[0]
|
||||
}
|
||||
rbIdx = 0
|
||||
rb = &rbs[0]
|
||||
} else {
|
||||
// foundIdx < 0 && len(rbs) <= 0.
|
||||
rb = nil
|
||||
}
|
||||
case <-ccError:
|
||||
ccError = nil
|
||||
if rbIdx < len(rbs)-1 {
|
||||
rbIdx++
|
||||
rb = &rbs[rbIdx]
|
||||
} else {
|
||||
rb = nil
|
||||
}
|
||||
}
|
||||
|
||||
if rb == nil {
|
||||
continue
|
||||
}
|
||||
|
||||
if cc != nil {
|
||||
cc.Close()
|
||||
}
|
||||
// Talk to the remote load balancer to get the server list.
|
||||
var (
|
||||
err error
|
||||
dopts []DialOption
|
||||
)
|
||||
if creds := config.DialCreds; creds != nil {
|
||||
if rb.name != "" {
|
||||
if err := creds.OverrideServerName(rb.name); err != nil {
|
||||
grpclog.Printf("grpclb: failed to override the server name in the credentials: %v", err)
|
||||
continue
|
||||
}
|
||||
}
|
||||
dopts = append(dopts, WithTransportCredentials(creds))
|
||||
} else {
|
||||
dopts = append(dopts, WithInsecure())
|
||||
}
|
||||
if dialer := config.Dialer; dialer != nil {
|
||||
// WithDialer takes a different type of function, so we instead use a special DialOption here.
|
||||
dopts = append(dopts, func(o *dialOptions) { o.copts.Dialer = dialer })
|
||||
}
|
||||
ccError = make(chan struct{})
|
||||
cc, err = Dial(rb.addr, dopts...)
|
||||
if err != nil {
|
||||
grpclog.Printf("grpclb: failed to setup a connection to the remote balancer %v: %v", rb.addr, err)
|
||||
close(ccError)
|
||||
continue
|
||||
}
|
||||
b.mu.Lock()
|
||||
b.seq++ // tick when getting a new balancer address
|
||||
seq := b.seq
|
||||
b.next = 0
|
||||
b.mu.Unlock()
|
||||
go func(cc *ClientConn, ccError chan struct{}) {
|
||||
lbc := &loadBalancerClient{cc}
|
||||
b.callRemoteBalancer(lbc, seq)
|
||||
cc.Close()
|
||||
select {
|
||||
case <-ccError:
|
||||
default:
|
||||
close(ccError)
|
||||
}
|
||||
}(cc, ccError)
|
||||
}
|
||||
}()
|
||||
return nil
|
||||
}
|
||||
|
||||
func (b *balancer) down(addr Address, err error) {
|
||||
b.mu.Lock()
|
||||
defer b.mu.Unlock()
|
||||
for _, a := range b.addrs {
|
||||
if addr == a.addr {
|
||||
a.connected = false
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (b *balancer) Up(addr Address) func(error) {
|
||||
b.mu.Lock()
|
||||
defer b.mu.Unlock()
|
||||
if b.done {
|
||||
return nil
|
||||
}
|
||||
var cnt int
|
||||
for _, a := range b.addrs {
|
||||
if a.addr == addr {
|
||||
if a.connected {
|
||||
return nil
|
||||
}
|
||||
a.connected = true
|
||||
}
|
||||
if a.connected && !a.dropForRateLimiting && !a.dropForLoadBalancing {
|
||||
cnt++
|
||||
}
|
||||
}
|
||||
// addr is the only one which is connected. Notify the Get() callers who are blocking.
|
||||
if cnt == 1 && b.waitCh != nil {
|
||||
close(b.waitCh)
|
||||
b.waitCh = nil
|
||||
}
|
||||
return func(err error) {
|
||||
b.down(addr, err)
|
||||
}
|
||||
}
|
||||
|
||||
func (b *balancer) Get(ctx context.Context, opts BalancerGetOptions) (addr Address, put func(), err error) {
|
||||
var ch chan struct{}
|
||||
b.mu.Lock()
|
||||
if b.done {
|
||||
b.mu.Unlock()
|
||||
err = ErrClientConnClosing
|
||||
return
|
||||
}
|
||||
seq := b.seq
|
||||
|
||||
defer func() {
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
put = func() {
|
||||
s, ok := rpcInfoFromContext(ctx)
|
||||
if !ok {
|
||||
return
|
||||
}
|
||||
b.mu.Lock()
|
||||
defer b.mu.Unlock()
|
||||
if b.done || seq < b.seq {
|
||||
return
|
||||
}
|
||||
b.clientStats.NumCallsFinished++
|
||||
if !s.bytesSent {
|
||||
b.clientStats.NumCallsFinishedWithClientFailedToSend++
|
||||
} else if s.bytesReceived {
|
||||
b.clientStats.NumCallsFinishedKnownReceived++
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
b.clientStats.NumCallsStarted++
|
||||
if len(b.addrs) > 0 {
|
||||
if b.next >= len(b.addrs) {
|
||||
b.next = 0
|
||||
}
|
||||
next := b.next
|
||||
for {
|
||||
a := b.addrs[next]
|
||||
next = (next + 1) % len(b.addrs)
|
||||
if a.connected {
|
||||
if !a.dropForRateLimiting && !a.dropForLoadBalancing {
|
||||
addr = a.addr
|
||||
b.next = next
|
||||
b.mu.Unlock()
|
||||
return
|
||||
}
|
||||
if !opts.BlockingWait {
|
||||
b.next = next
|
||||
if a.dropForLoadBalancing {
|
||||
b.clientStats.NumCallsFinished++
|
||||
b.clientStats.NumCallsFinishedWithDropForLoadBalancing++
|
||||
} else if a.dropForRateLimiting {
|
||||
b.clientStats.NumCallsFinished++
|
||||
b.clientStats.NumCallsFinishedWithDropForRateLimiting++
|
||||
}
|
||||
b.mu.Unlock()
|
||||
err = Errorf(codes.Unavailable, "%s drops requests", a.addr.Addr)
|
||||
return
|
||||
}
|
||||
}
|
||||
if next == b.next {
|
||||
// Has iterated all the possible address but none is connected.
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
if !opts.BlockingWait {
|
||||
if len(b.addrs) == 0 {
|
||||
b.clientStats.NumCallsFinished++
|
||||
b.clientStats.NumCallsFinishedWithClientFailedToSend++
|
||||
b.mu.Unlock()
|
||||
err = Errorf(codes.Unavailable, "there is no address available")
|
||||
return
|
||||
}
|
||||
// Returns the next addr on b.addrs for a failfast RPC.
|
||||
addr = b.addrs[b.next].addr
|
||||
b.next++
|
||||
b.mu.Unlock()
|
||||
return
|
||||
}
|
||||
// Wait on b.waitCh for non-failfast RPCs.
|
||||
if b.waitCh == nil {
|
||||
ch = make(chan struct{})
|
||||
b.waitCh = ch
|
||||
} else {
|
||||
ch = b.waitCh
|
||||
}
|
||||
b.mu.Unlock()
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
b.mu.Lock()
|
||||
b.clientStats.NumCallsFinished++
|
||||
b.clientStats.NumCallsFinishedWithClientFailedToSend++
|
||||
b.mu.Unlock()
|
||||
err = ctx.Err()
|
||||
return
|
||||
case <-ch:
|
||||
b.mu.Lock()
|
||||
if b.done {
|
||||
b.clientStats.NumCallsFinished++
|
||||
b.clientStats.NumCallsFinishedWithClientFailedToSend++
|
||||
b.mu.Unlock()
|
||||
err = ErrClientConnClosing
|
||||
return
|
||||
}
|
||||
|
||||
if len(b.addrs) > 0 {
|
||||
if b.next >= len(b.addrs) {
|
||||
b.next = 0
|
||||
}
|
||||
next := b.next
|
||||
for {
|
||||
a := b.addrs[next]
|
||||
next = (next + 1) % len(b.addrs)
|
||||
if a.connected {
|
||||
if !a.dropForRateLimiting && !a.dropForLoadBalancing {
|
||||
addr = a.addr
|
||||
b.next = next
|
||||
b.mu.Unlock()
|
||||
return
|
||||
}
|
||||
if !opts.BlockingWait {
|
||||
b.next = next
|
||||
if a.dropForLoadBalancing {
|
||||
b.clientStats.NumCallsFinished++
|
||||
b.clientStats.NumCallsFinishedWithDropForLoadBalancing++
|
||||
} else if a.dropForRateLimiting {
|
||||
b.clientStats.NumCallsFinished++
|
||||
b.clientStats.NumCallsFinishedWithDropForRateLimiting++
|
||||
}
|
||||
b.mu.Unlock()
|
||||
err = Errorf(codes.Unavailable, "drop requests for the addreess %s", a.addr.Addr)
|
||||
return
|
||||
}
|
||||
}
|
||||
if next == b.next {
|
||||
// Has iterated all the possible address but none is connected.
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
// The newly added addr got removed by Down() again.
|
||||
if b.waitCh == nil {
|
||||
ch = make(chan struct{})
|
||||
b.waitCh = ch
|
||||
} else {
|
||||
ch = b.waitCh
|
||||
}
|
||||
b.mu.Unlock()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (b *balancer) Notify() <-chan []Address {
|
||||
return b.addrCh
|
||||
}
|
||||
|
||||
func (b *balancer) Close() error {
|
||||
b.mu.Lock()
|
||||
defer b.mu.Unlock()
|
||||
if b.done {
|
||||
return errBalancerClosed
|
||||
}
|
||||
b.done = true
|
||||
if b.expTimer != nil {
|
||||
b.expTimer.Stop()
|
||||
}
|
||||
if b.waitCh != nil {
|
||||
close(b.waitCh)
|
||||
}
|
||||
if b.addrCh != nil {
|
||||
close(b.addrCh)
|
||||
}
|
||||
if b.w != nil {
|
||||
b.w.Close()
|
||||
}
|
||||
return nil
|
||||
}
|
629
cmd/vendor/google.golang.org/grpc/grpclb/grpc_lb_v1/grpclb.pb.go
generated
vendored
Normal file
629
cmd/vendor/google.golang.org/grpc/grpclb/grpc_lb_v1/grpclb.pb.go
generated
vendored
Normal file
@ -0,0 +1,629 @@
|
||||
// Code generated by protoc-gen-go.
|
||||
// source: grpclb.proto
|
||||
// DO NOT EDIT!
|
||||
|
||||
/*
|
||||
Package grpc_lb_v1 is a generated protocol buffer package.
|
||||
|
||||
It is generated from these files:
|
||||
grpclb.proto
|
||||
|
||||
It has these top-level messages:
|
||||
Duration
|
||||
Timestamp
|
||||
LoadBalanceRequest
|
||||
InitialLoadBalanceRequest
|
||||
ClientStats
|
||||
LoadBalanceResponse
|
||||
InitialLoadBalanceResponse
|
||||
ServerList
|
||||
Server
|
||||
*/
|
||||
package grpc_lb_v1
|
||||
|
||||
import proto "github.com/golang/protobuf/proto"
|
||||
import fmt "fmt"
|
||||
import math "math"
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
var _ = proto.Marshal
|
||||
var _ = fmt.Errorf
|
||||
var _ = math.Inf
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the proto package it is being compiled against.
|
||||
// A compilation error at this line likely means your copy of the
|
||||
// proto package needs to be updated.
|
||||
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
|
||||
|
||||
type Duration struct {
|
||||
// Signed seconds of the span of time. Must be from -315,576,000,000
|
||||
// to +315,576,000,000 inclusive.
|
||||
Seconds int64 `protobuf:"varint,1,opt,name=seconds" json:"seconds,omitempty"`
|
||||
// Signed fractions of a second at nanosecond resolution of the span
|
||||
// of time. Durations less than one second are represented with a 0
|
||||
// `seconds` field and a positive or negative `nanos` field. For durations
|
||||
// of one second or more, a non-zero value for the `nanos` field must be
|
||||
// of the same sign as the `seconds` field. Must be from -999,999,999
|
||||
// to +999,999,999 inclusive.
|
||||
Nanos int32 `protobuf:"varint,2,opt,name=nanos" json:"nanos,omitempty"`
|
||||
}
|
||||
|
||||
func (m *Duration) Reset() { *m = Duration{} }
|
||||
func (m *Duration) String() string { return proto.CompactTextString(m) }
|
||||
func (*Duration) ProtoMessage() {}
|
||||
func (*Duration) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
|
||||
|
||||
func (m *Duration) GetSeconds() int64 {
|
||||
if m != nil {
|
||||
return m.Seconds
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *Duration) GetNanos() int32 {
|
||||
if m != nil {
|
||||
return m.Nanos
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
type Timestamp struct {
|
||||
// Represents seconds of UTC time since Unix epoch
|
||||
// 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to
|
||||
// 9999-12-31T23:59:59Z inclusive.
|
||||
Seconds int64 `protobuf:"varint,1,opt,name=seconds" json:"seconds,omitempty"`
|
||||
// Non-negative fractions of a second at nanosecond resolution. Negative
|
||||
// second values with fractions must still have non-negative nanos values
|
||||
// that count forward in time. Must be from 0 to 999,999,999
|
||||
// inclusive.
|
||||
Nanos int32 `protobuf:"varint,2,opt,name=nanos" json:"nanos,omitempty"`
|
||||
}
|
||||
|
||||
func (m *Timestamp) Reset() { *m = Timestamp{} }
|
||||
func (m *Timestamp) String() string { return proto.CompactTextString(m) }
|
||||
func (*Timestamp) ProtoMessage() {}
|
||||
func (*Timestamp) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
|
||||
|
||||
func (m *Timestamp) GetSeconds() int64 {
|
||||
if m != nil {
|
||||
return m.Seconds
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *Timestamp) GetNanos() int32 {
|
||||
if m != nil {
|
||||
return m.Nanos
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
type LoadBalanceRequest struct {
|
||||
// Types that are valid to be assigned to LoadBalanceRequestType:
|
||||
// *LoadBalanceRequest_InitialRequest
|
||||
// *LoadBalanceRequest_ClientStats
|
||||
LoadBalanceRequestType isLoadBalanceRequest_LoadBalanceRequestType `protobuf_oneof:"load_balance_request_type"`
|
||||
}
|
||||
|
||||
func (m *LoadBalanceRequest) Reset() { *m = LoadBalanceRequest{} }
|
||||
func (m *LoadBalanceRequest) String() string { return proto.CompactTextString(m) }
|
||||
func (*LoadBalanceRequest) ProtoMessage() {}
|
||||
func (*LoadBalanceRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
|
||||
|
||||
type isLoadBalanceRequest_LoadBalanceRequestType interface {
|
||||
isLoadBalanceRequest_LoadBalanceRequestType()
|
||||
}
|
||||
|
||||
type LoadBalanceRequest_InitialRequest struct {
|
||||
InitialRequest *InitialLoadBalanceRequest `protobuf:"bytes,1,opt,name=initial_request,json=initialRequest,oneof"`
|
||||
}
|
||||
type LoadBalanceRequest_ClientStats struct {
|
||||
ClientStats *ClientStats `protobuf:"bytes,2,opt,name=client_stats,json=clientStats,oneof"`
|
||||
}
|
||||
|
||||
func (*LoadBalanceRequest_InitialRequest) isLoadBalanceRequest_LoadBalanceRequestType() {}
|
||||
func (*LoadBalanceRequest_ClientStats) isLoadBalanceRequest_LoadBalanceRequestType() {}
|
||||
|
||||
func (m *LoadBalanceRequest) GetLoadBalanceRequestType() isLoadBalanceRequest_LoadBalanceRequestType {
|
||||
if m != nil {
|
||||
return m.LoadBalanceRequestType
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *LoadBalanceRequest) GetInitialRequest() *InitialLoadBalanceRequest {
|
||||
if x, ok := m.GetLoadBalanceRequestType().(*LoadBalanceRequest_InitialRequest); ok {
|
||||
return x.InitialRequest
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *LoadBalanceRequest) GetClientStats() *ClientStats {
|
||||
if x, ok := m.GetLoadBalanceRequestType().(*LoadBalanceRequest_ClientStats); ok {
|
||||
return x.ClientStats
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// XXX_OneofFuncs is for the internal use of the proto package.
|
||||
func (*LoadBalanceRequest) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
|
||||
return _LoadBalanceRequest_OneofMarshaler, _LoadBalanceRequest_OneofUnmarshaler, _LoadBalanceRequest_OneofSizer, []interface{}{
|
||||
(*LoadBalanceRequest_InitialRequest)(nil),
|
||||
(*LoadBalanceRequest_ClientStats)(nil),
|
||||
}
|
||||
}
|
||||
|
||||
func _LoadBalanceRequest_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
|
||||
m := msg.(*LoadBalanceRequest)
|
||||
// load_balance_request_type
|
||||
switch x := m.LoadBalanceRequestType.(type) {
|
||||
case *LoadBalanceRequest_InitialRequest:
|
||||
b.EncodeVarint(1<<3 | proto.WireBytes)
|
||||
if err := b.EncodeMessage(x.InitialRequest); err != nil {
|
||||
return err
|
||||
}
|
||||
case *LoadBalanceRequest_ClientStats:
|
||||
b.EncodeVarint(2<<3 | proto.WireBytes)
|
||||
if err := b.EncodeMessage(x.ClientStats); err != nil {
|
||||
return err
|
||||
}
|
||||
case nil:
|
||||
default:
|
||||
return fmt.Errorf("LoadBalanceRequest.LoadBalanceRequestType has unexpected type %T", x)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func _LoadBalanceRequest_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
|
||||
m := msg.(*LoadBalanceRequest)
|
||||
switch tag {
|
||||
case 1: // load_balance_request_type.initial_request
|
||||
if wire != proto.WireBytes {
|
||||
return true, proto.ErrInternalBadWireType
|
||||
}
|
||||
msg := new(InitialLoadBalanceRequest)
|
||||
err := b.DecodeMessage(msg)
|
||||
m.LoadBalanceRequestType = &LoadBalanceRequest_InitialRequest{msg}
|
||||
return true, err
|
||||
case 2: // load_balance_request_type.client_stats
|
||||
if wire != proto.WireBytes {
|
||||
return true, proto.ErrInternalBadWireType
|
||||
}
|
||||
msg := new(ClientStats)
|
||||
err := b.DecodeMessage(msg)
|
||||
m.LoadBalanceRequestType = &LoadBalanceRequest_ClientStats{msg}
|
||||
return true, err
|
||||
default:
|
||||
return false, nil
|
||||
}
|
||||
}
|
||||
|
||||
func _LoadBalanceRequest_OneofSizer(msg proto.Message) (n int) {
|
||||
m := msg.(*LoadBalanceRequest)
|
||||
// load_balance_request_type
|
||||
switch x := m.LoadBalanceRequestType.(type) {
|
||||
case *LoadBalanceRequest_InitialRequest:
|
||||
s := proto.Size(x.InitialRequest)
|
||||
n += proto.SizeVarint(1<<3 | proto.WireBytes)
|
||||
n += proto.SizeVarint(uint64(s))
|
||||
n += s
|
||||
case *LoadBalanceRequest_ClientStats:
|
||||
s := proto.Size(x.ClientStats)
|
||||
n += proto.SizeVarint(2<<3 | proto.WireBytes)
|
||||
n += proto.SizeVarint(uint64(s))
|
||||
n += s
|
||||
case nil:
|
||||
default:
|
||||
panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
type InitialLoadBalanceRequest struct {
|
||||
// Name of load balanced service (IE, balancer.service.com)
|
||||
// length should be less than 256 bytes.
|
||||
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
|
||||
}
|
||||
|
||||
func (m *InitialLoadBalanceRequest) Reset() { *m = InitialLoadBalanceRequest{} }
|
||||
func (m *InitialLoadBalanceRequest) String() string { return proto.CompactTextString(m) }
|
||||
func (*InitialLoadBalanceRequest) ProtoMessage() {}
|
||||
func (*InitialLoadBalanceRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
|
||||
|
||||
func (m *InitialLoadBalanceRequest) GetName() string {
|
||||
if m != nil {
|
||||
return m.Name
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// Contains client level statistics that are useful to load balancing. Each
|
||||
// count except the timestamp should be reset to zero after reporting the stats.
|
||||
type ClientStats struct {
|
||||
// The timestamp of generating the report.
|
||||
Timestamp *Timestamp `protobuf:"bytes,1,opt,name=timestamp" json:"timestamp,omitempty"`
|
||||
// The total number of RPCs that started.
|
||||
NumCallsStarted int64 `protobuf:"varint,2,opt,name=num_calls_started,json=numCallsStarted" json:"num_calls_started,omitempty"`
|
||||
// The total number of RPCs that finished.
|
||||
NumCallsFinished int64 `protobuf:"varint,3,opt,name=num_calls_finished,json=numCallsFinished" json:"num_calls_finished,omitempty"`
|
||||
// The total number of RPCs that were dropped by the client because of rate
|
||||
// limiting.
|
||||
NumCallsFinishedWithDropForRateLimiting int64 `protobuf:"varint,4,opt,name=num_calls_finished_with_drop_for_rate_limiting,json=numCallsFinishedWithDropForRateLimiting" json:"num_calls_finished_with_drop_for_rate_limiting,omitempty"`
|
||||
// The total number of RPCs that were dropped by the client because of load
|
||||
// balancing.
|
||||
NumCallsFinishedWithDropForLoadBalancing int64 `protobuf:"varint,5,opt,name=num_calls_finished_with_drop_for_load_balancing,json=numCallsFinishedWithDropForLoadBalancing" json:"num_calls_finished_with_drop_for_load_balancing,omitempty"`
|
||||
// The total number of RPCs that failed to reach a server except dropped RPCs.
|
||||
NumCallsFinishedWithClientFailedToSend int64 `protobuf:"varint,6,opt,name=num_calls_finished_with_client_failed_to_send,json=numCallsFinishedWithClientFailedToSend" json:"num_calls_finished_with_client_failed_to_send,omitempty"`
|
||||
// The total number of RPCs that finished and are known to have been received
|
||||
// by a server.
|
||||
NumCallsFinishedKnownReceived int64 `protobuf:"varint,7,opt,name=num_calls_finished_known_received,json=numCallsFinishedKnownReceived" json:"num_calls_finished_known_received,omitempty"`
|
||||
}
|
||||
|
||||
func (m *ClientStats) Reset() { *m = ClientStats{} }
|
||||
func (m *ClientStats) String() string { return proto.CompactTextString(m) }
|
||||
func (*ClientStats) ProtoMessage() {}
|
||||
func (*ClientStats) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
|
||||
|
||||
func (m *ClientStats) GetTimestamp() *Timestamp {
|
||||
if m != nil {
|
||||
return m.Timestamp
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *ClientStats) GetNumCallsStarted() int64 {
|
||||
if m != nil {
|
||||
return m.NumCallsStarted
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *ClientStats) GetNumCallsFinished() int64 {
|
||||
if m != nil {
|
||||
return m.NumCallsFinished
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *ClientStats) GetNumCallsFinishedWithDropForRateLimiting() int64 {
|
||||
if m != nil {
|
||||
return m.NumCallsFinishedWithDropForRateLimiting
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *ClientStats) GetNumCallsFinishedWithDropForLoadBalancing() int64 {
|
||||
if m != nil {
|
||||
return m.NumCallsFinishedWithDropForLoadBalancing
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *ClientStats) GetNumCallsFinishedWithClientFailedToSend() int64 {
|
||||
if m != nil {
|
||||
return m.NumCallsFinishedWithClientFailedToSend
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *ClientStats) GetNumCallsFinishedKnownReceived() int64 {
|
||||
if m != nil {
|
||||
return m.NumCallsFinishedKnownReceived
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
type LoadBalanceResponse struct {
|
||||
// Types that are valid to be assigned to LoadBalanceResponseType:
|
||||
// *LoadBalanceResponse_InitialResponse
|
||||
// *LoadBalanceResponse_ServerList
|
||||
LoadBalanceResponseType isLoadBalanceResponse_LoadBalanceResponseType `protobuf_oneof:"load_balance_response_type"`
|
||||
}
|
||||
|
||||
func (m *LoadBalanceResponse) Reset() { *m = LoadBalanceResponse{} }
|
||||
func (m *LoadBalanceResponse) String() string { return proto.CompactTextString(m) }
|
||||
func (*LoadBalanceResponse) ProtoMessage() {}
|
||||
func (*LoadBalanceResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
|
||||
|
||||
type isLoadBalanceResponse_LoadBalanceResponseType interface {
|
||||
isLoadBalanceResponse_LoadBalanceResponseType()
|
||||
}
|
||||
|
||||
type LoadBalanceResponse_InitialResponse struct {
|
||||
InitialResponse *InitialLoadBalanceResponse `protobuf:"bytes,1,opt,name=initial_response,json=initialResponse,oneof"`
|
||||
}
|
||||
type LoadBalanceResponse_ServerList struct {
|
||||
ServerList *ServerList `protobuf:"bytes,2,opt,name=server_list,json=serverList,oneof"`
|
||||
}
|
||||
|
||||
func (*LoadBalanceResponse_InitialResponse) isLoadBalanceResponse_LoadBalanceResponseType() {}
|
||||
func (*LoadBalanceResponse_ServerList) isLoadBalanceResponse_LoadBalanceResponseType() {}
|
||||
|
||||
func (m *LoadBalanceResponse) GetLoadBalanceResponseType() isLoadBalanceResponse_LoadBalanceResponseType {
|
||||
if m != nil {
|
||||
return m.LoadBalanceResponseType
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *LoadBalanceResponse) GetInitialResponse() *InitialLoadBalanceResponse {
|
||||
if x, ok := m.GetLoadBalanceResponseType().(*LoadBalanceResponse_InitialResponse); ok {
|
||||
return x.InitialResponse
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *LoadBalanceResponse) GetServerList() *ServerList {
|
||||
if x, ok := m.GetLoadBalanceResponseType().(*LoadBalanceResponse_ServerList); ok {
|
||||
return x.ServerList
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// XXX_OneofFuncs is for the internal use of the proto package.
|
||||
func (*LoadBalanceResponse) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
|
||||
return _LoadBalanceResponse_OneofMarshaler, _LoadBalanceResponse_OneofUnmarshaler, _LoadBalanceResponse_OneofSizer, []interface{}{
|
||||
(*LoadBalanceResponse_InitialResponse)(nil),
|
||||
(*LoadBalanceResponse_ServerList)(nil),
|
||||
}
|
||||
}
|
||||
|
||||
func _LoadBalanceResponse_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
|
||||
m := msg.(*LoadBalanceResponse)
|
||||
// load_balance_response_type
|
||||
switch x := m.LoadBalanceResponseType.(type) {
|
||||
case *LoadBalanceResponse_InitialResponse:
|
||||
b.EncodeVarint(1<<3 | proto.WireBytes)
|
||||
if err := b.EncodeMessage(x.InitialResponse); err != nil {
|
||||
return err
|
||||
}
|
||||
case *LoadBalanceResponse_ServerList:
|
||||
b.EncodeVarint(2<<3 | proto.WireBytes)
|
||||
if err := b.EncodeMessage(x.ServerList); err != nil {
|
||||
return err
|
||||
}
|
||||
case nil:
|
||||
default:
|
||||
return fmt.Errorf("LoadBalanceResponse.LoadBalanceResponseType has unexpected type %T", x)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func _LoadBalanceResponse_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
|
||||
m := msg.(*LoadBalanceResponse)
|
||||
switch tag {
|
||||
case 1: // load_balance_response_type.initial_response
|
||||
if wire != proto.WireBytes {
|
||||
return true, proto.ErrInternalBadWireType
|
||||
}
|
||||
msg := new(InitialLoadBalanceResponse)
|
||||
err := b.DecodeMessage(msg)
|
||||
m.LoadBalanceResponseType = &LoadBalanceResponse_InitialResponse{msg}
|
||||
return true, err
|
||||
case 2: // load_balance_response_type.server_list
|
||||
if wire != proto.WireBytes {
|
||||
return true, proto.ErrInternalBadWireType
|
||||
}
|
||||
msg := new(ServerList)
|
||||
err := b.DecodeMessage(msg)
|
||||
m.LoadBalanceResponseType = &LoadBalanceResponse_ServerList{msg}
|
||||
return true, err
|
||||
default:
|
||||
return false, nil
|
||||
}
|
||||
}
|
||||
|
||||
func _LoadBalanceResponse_OneofSizer(msg proto.Message) (n int) {
|
||||
m := msg.(*LoadBalanceResponse)
|
||||
// load_balance_response_type
|
||||
switch x := m.LoadBalanceResponseType.(type) {
|
||||
case *LoadBalanceResponse_InitialResponse:
|
||||
s := proto.Size(x.InitialResponse)
|
||||
n += proto.SizeVarint(1<<3 | proto.WireBytes)
|
||||
n += proto.SizeVarint(uint64(s))
|
||||
n += s
|
||||
case *LoadBalanceResponse_ServerList:
|
||||
s := proto.Size(x.ServerList)
|
||||
n += proto.SizeVarint(2<<3 | proto.WireBytes)
|
||||
n += proto.SizeVarint(uint64(s))
|
||||
n += s
|
||||
case nil:
|
||||
default:
|
||||
panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
type InitialLoadBalanceResponse struct {
|
||||
// This is an application layer redirect that indicates the client should use
|
||||
// the specified server for load balancing. When this field is non-empty in
|
||||
// the response, the client should open a separate connection to the
|
||||
// load_balancer_delegate and call the BalanceLoad method. Its length should
|
||||
// be less than 64 bytes.
|
||||
LoadBalancerDelegate string `protobuf:"bytes,1,opt,name=load_balancer_delegate,json=loadBalancerDelegate" json:"load_balancer_delegate,omitempty"`
|
||||
// This interval defines how often the client should send the client stats
|
||||
// to the load balancer. Stats should only be reported when the duration is
|
||||
// positive.
|
||||
ClientStatsReportInterval *Duration `protobuf:"bytes,2,opt,name=client_stats_report_interval,json=clientStatsReportInterval" json:"client_stats_report_interval,omitempty"`
|
||||
}
|
||||
|
||||
func (m *InitialLoadBalanceResponse) Reset() { *m = InitialLoadBalanceResponse{} }
|
||||
func (m *InitialLoadBalanceResponse) String() string { return proto.CompactTextString(m) }
|
||||
func (*InitialLoadBalanceResponse) ProtoMessage() {}
|
||||
func (*InitialLoadBalanceResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
|
||||
|
||||
func (m *InitialLoadBalanceResponse) GetLoadBalancerDelegate() string {
|
||||
if m != nil {
|
||||
return m.LoadBalancerDelegate
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (m *InitialLoadBalanceResponse) GetClientStatsReportInterval() *Duration {
|
||||
if m != nil {
|
||||
return m.ClientStatsReportInterval
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
type ServerList struct {
|
||||
// Contains a list of servers selected by the load balancer. The list will
|
||||
// be updated when server resolutions change or as needed to balance load
|
||||
// across more servers. The client should consume the server list in order
|
||||
// unless instructed otherwise via the client_config.
|
||||
Servers []*Server `protobuf:"bytes,1,rep,name=servers" json:"servers,omitempty"`
|
||||
// Indicates the amount of time that the client should consider this server
|
||||
// list as valid. It may be considered stale after waiting this interval of
|
||||
// time after receiving the list. If the interval is not positive, the
|
||||
// client can assume the list is valid until the next list is received.
|
||||
ExpirationInterval *Duration `protobuf:"bytes,3,opt,name=expiration_interval,json=expirationInterval" json:"expiration_interval,omitempty"`
|
||||
}
|
||||
|
||||
func (m *ServerList) Reset() { *m = ServerList{} }
|
||||
func (m *ServerList) String() string { return proto.CompactTextString(m) }
|
||||
func (*ServerList) ProtoMessage() {}
|
||||
func (*ServerList) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
|
||||
|
||||
func (m *ServerList) GetServers() []*Server {
|
||||
if m != nil {
|
||||
return m.Servers
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *ServerList) GetExpirationInterval() *Duration {
|
||||
if m != nil {
|
||||
return m.ExpirationInterval
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Contains server information. When none of the [drop_for_*] fields are true,
|
||||
// use the other fields. When drop_for_rate_limiting is true, ignore all other
|
||||
// fields. Use drop_for_load_balancing only when it is true and
|
||||
// drop_for_rate_limiting is false.
|
||||
type Server struct {
|
||||
// A resolved address for the server, serialized in network-byte-order. It may
|
||||
// either be an IPv4 or IPv6 address.
|
||||
IpAddress []byte `protobuf:"bytes,1,opt,name=ip_address,json=ipAddress,proto3" json:"ip_address,omitempty"`
|
||||
// A resolved port number for the server.
|
||||
Port int32 `protobuf:"varint,2,opt,name=port" json:"port,omitempty"`
|
||||
// An opaque but printable token given to the frontend for each pick. All
|
||||
// frontend requests for that pick must include the token in its initial
|
||||
// metadata. The token is used by the backend to verify the request and to
|
||||
// allow the backend to report load to the gRPC LB system.
|
||||
//
|
||||
// Its length is variable but less than 50 bytes.
|
||||
LoadBalanceToken string `protobuf:"bytes,3,opt,name=load_balance_token,json=loadBalanceToken" json:"load_balance_token,omitempty"`
|
||||
// Indicates whether this particular request should be dropped by the client
|
||||
// for rate limiting.
|
||||
DropForRateLimiting bool `protobuf:"varint,4,opt,name=drop_for_rate_limiting,json=dropForRateLimiting" json:"drop_for_rate_limiting,omitempty"`
|
||||
// Indicates whether this particular request should be dropped by the client
|
||||
// for load balancing.
|
||||
DropForLoadBalancing bool `protobuf:"varint,5,opt,name=drop_for_load_balancing,json=dropForLoadBalancing" json:"drop_for_load_balancing,omitempty"`
|
||||
}
|
||||
|
||||
func (m *Server) Reset() { *m = Server{} }
|
||||
func (m *Server) String() string { return proto.CompactTextString(m) }
|
||||
func (*Server) ProtoMessage() {}
|
||||
func (*Server) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }
|
||||
|
||||
func (m *Server) GetIpAddress() []byte {
|
||||
if m != nil {
|
||||
return m.IpAddress
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *Server) GetPort() int32 {
|
||||
if m != nil {
|
||||
return m.Port
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *Server) GetLoadBalanceToken() string {
|
||||
if m != nil {
|
||||
return m.LoadBalanceToken
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (m *Server) GetDropForRateLimiting() bool {
|
||||
if m != nil {
|
||||
return m.DropForRateLimiting
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (m *Server) GetDropForLoadBalancing() bool {
|
||||
if m != nil {
|
||||
return m.DropForLoadBalancing
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func init() {
|
||||
proto.RegisterType((*Duration)(nil), "grpc.lb.v1.Duration")
|
||||
proto.RegisterType((*Timestamp)(nil), "grpc.lb.v1.Timestamp")
|
||||
proto.RegisterType((*LoadBalanceRequest)(nil), "grpc.lb.v1.LoadBalanceRequest")
|
||||
proto.RegisterType((*InitialLoadBalanceRequest)(nil), "grpc.lb.v1.InitialLoadBalanceRequest")
|
||||
proto.RegisterType((*ClientStats)(nil), "grpc.lb.v1.ClientStats")
|
||||
proto.RegisterType((*LoadBalanceResponse)(nil), "grpc.lb.v1.LoadBalanceResponse")
|
||||
proto.RegisterType((*InitialLoadBalanceResponse)(nil), "grpc.lb.v1.InitialLoadBalanceResponse")
|
||||
proto.RegisterType((*ServerList)(nil), "grpc.lb.v1.ServerList")
|
||||
proto.RegisterType((*Server)(nil), "grpc.lb.v1.Server")
|
||||
}
|
||||
|
||||
func init() { proto.RegisterFile("grpclb.proto", fileDescriptor0) }
|
||||
|
||||
var fileDescriptor0 = []byte{
|
||||
// 733 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x55, 0xdd, 0x4e, 0x1b, 0x39,
|
||||
0x14, 0x66, 0x36, 0xfc, 0xe5, 0x24, 0x5a, 0x58, 0x93, 0x85, 0xc0, 0xc2, 0x2e, 0x1b, 0xa9, 0x34,
|
||||
0xaa, 0x68, 0x68, 0x43, 0x7b, 0xd1, 0x9f, 0x9b, 0x02, 0x45, 0x41, 0xe5, 0xa2, 0x72, 0xa8, 0x7a,
|
||||
0x55, 0x59, 0x4e, 0xc6, 0x80, 0xc5, 0xc4, 0x9e, 0xda, 0x4e, 0x68, 0x2f, 0x7b, 0xd9, 0x47, 0xe9,
|
||||
0x63, 0x54, 0x7d, 0x86, 0xbe, 0x4f, 0x65, 0x7b, 0x26, 0x33, 0x90, 0x1f, 0xd4, 0xbb, 0xf1, 0xf1,
|
||||
0x77, 0xbe, 0xf3, 0xf9, 0xd8, 0xdf, 0x19, 0x28, 0x5f, 0xa8, 0xb8, 0x1b, 0x75, 0x1a, 0xb1, 0x92,
|
||||
0x46, 0x22, 0xb0, 0xab, 0x46, 0xd4, 0x69, 0x0c, 0x1e, 0xd7, 0x9e, 0xc3, 0xe2, 0x51, 0x5f, 0x51,
|
||||
0xc3, 0xa5, 0x40, 0x55, 0x58, 0xd0, 0xac, 0x2b, 0x45, 0xa8, 0xab, 0xc1, 0x76, 0x50, 0x2f, 0xe0,
|
||||
0x74, 0x89, 0x2a, 0x30, 0x27, 0xa8, 0x90, 0xba, 0xfa, 0xc7, 0x76, 0x50, 0x9f, 0xc3, 0x7e, 0x51,
|
||||
0x7b, 0x01, 0xc5, 0x33, 0xde, 0x63, 0xda, 0xd0, 0x5e, 0xfc, 0xdb, 0xc9, 0xdf, 0x03, 0x40, 0xa7,
|
||||
0x92, 0x86, 0x07, 0x34, 0xa2, 0xa2, 0xcb, 0x30, 0xfb, 0xd8, 0x67, 0xda, 0xa0, 0xb7, 0xb0, 0xc4,
|
||||
0x05, 0x37, 0x9c, 0x46, 0x44, 0xf9, 0x90, 0xa3, 0x2b, 0x35, 0xef, 0x35, 0x32, 0xd5, 0x8d, 0x13,
|
||||
0x0f, 0x19, 0xcd, 0x6f, 0xcd, 0xe0, 0x3f, 0x93, 0xfc, 0x94, 0xf1, 0x25, 0x94, 0xbb, 0x11, 0x67,
|
||||
0xc2, 0x10, 0x6d, 0xa8, 0xf1, 0x2a, 0x4a, 0xcd, 0xb5, 0x3c, 0xdd, 0xa1, 0xdb, 0x6f, 0xdb, 0xed,
|
||||
0xd6, 0x0c, 0x2e, 0x75, 0xb3, 0xe5, 0xc1, 0x3f, 0xb0, 0x1e, 0x49, 0x1a, 0x92, 0x8e, 0x2f, 0x93,
|
||||
0x8a, 0x22, 0xe6, 0x73, 0xcc, 0x6a, 0x7b, 0xb0, 0x3e, 0x51, 0x09, 0x42, 0x30, 0x2b, 0x68, 0x8f,
|
||||
0x39, 0xf9, 0x45, 0xec, 0xbe, 0x6b, 0x5f, 0x67, 0xa1, 0x94, 0x2b, 0x86, 0xf6, 0xa1, 0x68, 0xd2,
|
||||
0x0e, 0x26, 0xe7, 0xfc, 0x3b, 0x2f, 0x6c, 0xd8, 0x5e, 0x9c, 0xe1, 0xd0, 0x03, 0xf8, 0x4b, 0xf4,
|
||||
0x7b, 0xa4, 0x4b, 0xa3, 0x48, 0xdb, 0x33, 0x29, 0xc3, 0x42, 0x77, 0xaa, 0x02, 0x5e, 0x12, 0xfd,
|
||||
0xde, 0xa1, 0x8d, 0xb7, 0x7d, 0x18, 0xed, 0x02, 0xca, 0xb0, 0xe7, 0x5c, 0x70, 0x7d, 0xc9, 0xc2,
|
||||
0x6a, 0xc1, 0x81, 0x97, 0x53, 0xf0, 0x71, 0x12, 0x47, 0x04, 0x1a, 0xa3, 0x68, 0x72, 0xcd, 0xcd,
|
||||
0x25, 0x09, 0x95, 0x8c, 0xc9, 0xb9, 0x54, 0x44, 0x51, 0xc3, 0x48, 0xc4, 0x7b, 0xdc, 0x70, 0x71,
|
||||
0x51, 0x9d, 0x75, 0x4c, 0xf7, 0x6f, 0x33, 0xbd, 0xe7, 0xe6, 0xf2, 0x48, 0xc9, 0xf8, 0x58, 0x2a,
|
||||
0x4c, 0x0d, 0x3b, 0x4d, 0xe0, 0x88, 0xc2, 0xde, 0x9d, 0x05, 0x72, 0xed, 0xb6, 0x15, 0xe6, 0x5c,
|
||||
0x85, 0xfa, 0x94, 0x0a, 0x59, 0xef, 0x6d, 0x89, 0x0f, 0xf0, 0x70, 0x52, 0x89, 0xe4, 0x19, 0x9c,
|
||||
0x53, 0x1e, 0xb1, 0x90, 0x18, 0x49, 0x34, 0x13, 0x61, 0x75, 0xde, 0x15, 0xd8, 0x19, 0x57, 0xc0,
|
||||
0x5f, 0xd5, 0xb1, 0xc3, 0x9f, 0xc9, 0x36, 0x13, 0x21, 0x6a, 0xc1, 0xff, 0x63, 0xe8, 0xaf, 0x84,
|
||||
0xbc, 0x16, 0x44, 0xb1, 0x2e, 0xe3, 0x03, 0x16, 0x56, 0x17, 0x1c, 0xe5, 0xd6, 0x6d, 0xca, 0x37,
|
||||
0x16, 0x85, 0x13, 0x50, 0xed, 0x47, 0x00, 0x2b, 0x37, 0x9e, 0x8d, 0x8e, 0xa5, 0xd0, 0x0c, 0xb5,
|
||||
0x61, 0x39, 0x73, 0x80, 0x8f, 0x25, 0x4f, 0x63, 0xe7, 0x2e, 0x0b, 0x78, 0x74, 0x6b, 0x06, 0x2f,
|
||||
0x0d, 0x3d, 0x90, 0x90, 0x3e, 0x83, 0x92, 0x66, 0x6a, 0xc0, 0x14, 0x89, 0xb8, 0x36, 0x89, 0x07,
|
||||
0x56, 0xf3, 0x7c, 0x6d, 0xb7, 0x7d, 0xca, 0x9d, 0x87, 0x40, 0x0f, 0x57, 0x07, 0x9b, 0xb0, 0x71,
|
||||
0xcb, 0x01, 0x9e, 0xd3, 0x5b, 0xe0, 0x5b, 0x00, 0x1b, 0x93, 0xa5, 0xa0, 0x27, 0xb0, 0x9a, 0x4f,
|
||||
0x56, 0x24, 0x64, 0x11, 0xbb, 0xa0, 0x26, 0xb5, 0x45, 0x25, 0xca, 0x92, 0xd4, 0x51, 0xb2, 0x87,
|
||||
0xde, 0xc1, 0x66, 0xde, 0xb2, 0x44, 0xb1, 0x58, 0x2a, 0x43, 0xb8, 0x30, 0x4c, 0x0d, 0x68, 0x94,
|
||||
0xc8, 0xaf, 0xe4, 0xe5, 0xa7, 0x43, 0x0c, 0xaf, 0xe7, 0xdc, 0x8b, 0x5d, 0xde, 0x49, 0x92, 0x56,
|
||||
0xfb, 0x12, 0x00, 0x64, 0xc7, 0x44, 0xbb, 0x76, 0x62, 0xd9, 0x95, 0x9d, 0x58, 0x85, 0x7a, 0xa9,
|
||||
0x89, 0x46, 0xfb, 0x81, 0x53, 0x08, 0x7a, 0x0d, 0x2b, 0xec, 0x53, 0xcc, 0x7d, 0x95, 0x4c, 0x4a,
|
||||
0x61, 0x8a, 0x14, 0x94, 0x25, 0x0c, 0x35, 0xfc, 0x0c, 0x60, 0xde, 0x53, 0xa3, 0x2d, 0x00, 0x1e,
|
||||
0x13, 0x1a, 0x86, 0x8a, 0x69, 0x3f, 0x34, 0xcb, 0xb8, 0xc8, 0xe3, 0x57, 0x3e, 0x60, 0xe7, 0x87,
|
||||
0x55, 0x9f, 0x4c, 0x4d, 0xf7, 0x6d, 0xed, 0x7c, 0xe3, 0x2e, 0x8c, 0xbc, 0x62, 0xc2, 0x69, 0x28,
|
||||
0xe2, 0xe5, 0x5c, 0x2b, 0xcf, 0x6c, 0x1c, 0xed, 0xc3, 0xea, 0x14, 0xdb, 0x2e, 0xe2, 0x95, 0x70,
|
||||
0x8c, 0x45, 0x9f, 0xc2, 0xda, 0x34, 0x2b, 0x2e, 0xe2, 0x4a, 0x38, 0xc6, 0x76, 0xcd, 0x0e, 0x94,
|
||||
0x73, 0xf7, 0xaf, 0x10, 0x86, 0x52, 0xf2, 0x6d, 0xc3, 0xe8, 0xdf, 0x7c, 0x83, 0x46, 0x87, 0xe5,
|
||||
0xc6, 0x7f, 0x13, 0xf7, 0xfd, 0x43, 0xaa, 0x07, 0x8f, 0x82, 0xce, 0xbc, 0xfb, 0x7d, 0xed, 0xff,
|
||||
0x0a, 0x00, 0x00, 0xff, 0xff, 0x64, 0xbf, 0xda, 0x5e, 0xce, 0x06, 0x00, 0x00,
|
||||
}
|
Some files were not shown because too many files have changed in this diff Show More
Reference in New Issue
Block a user