Compare commits

...

367 Commits

Author SHA1 Message Date
a2d4f85d33 *: bump to v2.3.0-alpha.0 2015-11-06 09:08:39 -08:00
63872d812a Merge pull request #3825 from jonboulle/master
contrib: add example systemd unit file
2015-11-06 17:53:32 +01:00
652d3f1974 contrib: add example systemd unit file 2015-11-06 17:50:19 +01:00
e3ce605cb5 Merge pull request #3826 from jonboulle/scripts
scripts: enforce genproto.sh is run from repo root
2015-11-06 17:26:16 +01:00
de0cb472be scripts: enforce genproto.sh is run from repo root 2015-11-06 16:13:24 +01:00
f48e95f7b0 Merge pull request #3822 from mitake/strict-reconfig-error-log
etcdserver: correct error log for strict reconfig checking
2015-11-06 15:13:27 +01:00
2c8ffa6bcb etcdserver: correct error log for strict reconfig checking
This commit fixes an error log caused by the strict reconfig checking
option.

Before:
14:21:38 etcd2 | 2015-11-05 14:21:38.870356 E | etcdhttp: got unexpected response error (etcdserver: re-configuration failed due to not enough started members)

After:
13:27:33 etcd2 | 2015-11-05 13:27:33.089364 E | etcdhttp: etcdserver: re-configuration failed due to not enough started members

The error is not unexpected, so the old message was incorrect.
2015-11-06 11:03:42 +09:00
9d880f136f Merge pull request #3818 from yichengq/req-snap-log
etcdserver: fix snapshot index in creation log line
2015-11-05 14:04:46 -08:00
0874c44cdc etcdserver: fix snapshot index in creation log line
The snapshot is created at appliedi instead of snapi.
2015-11-05 14:02:09 -08:00
dadfdf6af8 Merge pull request #3802 from yichengq/fix-storage-watch
storage: delete key instead of setting it to false
2015-11-05 11:40:46 -08:00
08f0d94019 Merge pull request #3809 from xiang90/rpc_kv
*: refactor kv rpc implementation
2015-11-04 19:05:48 -08:00
47cad59571 Merge pull request #3813 from yichengq/update-version
*: update clusterMinVersion and feature maps for incoming v2.3
2015-11-04 14:37:37 -08:00
ec3c2d23a3 *: update feature maps to adopt v2.3.0 2015-11-04 14:30:35 -08:00
b82c171f5f version: update MinClusterVersion to v2.2.0
This is the preparation for bumping to v2.3.0-alpha
2015-11-04 14:30:04 -08:00
03951495d3 Merge pull request #3811 from gyuho/storage_watchergauge_fix
storage: move watcherGauge to watchable_store
2015-11-04 13:23:10 -08:00
6e5eb03544 storage: move watcherGauge to watchable_store
watcherGauge should be increased every time we create a Watcher, not per watch
method call.
2015-11-04 13:17:47 -08:00
319b77b051 Merge pull request #3810 from gyuho/storage_metrics_add_watcher_gauge
storage: add metrics to watcher
2015-11-04 13:09:07 -08:00
4ebf28aa2e storage: add metrics to watcher
This adds metrics to watcher, and reorders the MustRegister calls in init
to match the order in which the gauges are defined.
2015-11-04 13:01:52 -08:00
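For readers skimming these two metrics commits, the pattern is roughly the sketch below, written against the Prometheus Go client; the names (watcherGauge, newWatcher) are illustrative assumptions, not etcd's actual storage code.

```
package metrics

import "github.com/prometheus/client_golang/prometheus"

// Declare gauges first, then register them in init in the same order they
// are defined, mirroring the ordering cleanup described above.
var watcherGauge = prometheus.NewGauge(prometheus.GaugeOpts{
	Namespace: "etcd",
	Subsystem: "storage",
	Name:      "watchers",
	Help:      "Total number of active watchers.",
})

func init() {
	prometheus.MustRegister(watcherGauge)
}

type watcher struct{}

// newWatcher is the single creation point, so the gauge counts live Watcher
// objects rather than individual watch() calls.
func newWatcher() *watcher {
	watcherGauge.Inc()
	return &watcher{}
}

// watch registers one more key on an existing watcher; it deliberately does
// not touch watcherGauge.
func (w *watcher) watch(key string) {}
```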
33fe6f41fb Merge pull request #3808 from yichengq/fix-wait-test
pkg/wait: extend wait timeout in TestWaitTime
2015-11-04 11:39:24 -08:00
3d15526c35 Merge pull request #3796 from yichengq/fix-get-version
etcdserver: not reuse connections for peer transport
2015-11-04 11:39:14 -08:00
c37bd2385a *: refactor kv rpc implementation 2015-11-04 11:36:17 -08:00
3b8349c06e pkg/wait: extend wait timeout in TestWaitTime
Fix this error happening on travis:
```
--- FAIL: TestWaitTime-2 (0.01s)
		wait_time_test.go:46: cannot receive from ch as expected
```
2015-11-04 11:18:17 -08:00
4ccbcb91c8 rafthttp: add functions to create listener and roundTripper
This moves the code that creates the listener and roundTripper for raft communication
into the same place, and uses explicit functions to build them. This prevents
possible development errors in the future.
2015-11-04 11:12:46 -08:00
32819f6b3f etcdserver: use roundTripper to request peerURL
It uses roundTripper instead of Transport because roundTripper is
sufficient for its requirements.
2015-11-04 10:49:42 -08:00
5272ee99b5 Merge pull request #3804 from xiang90/ctl_watch
etcdctlv3: support watch
2015-11-04 10:21:05 -08:00
616078dc1b Merge pull request #3807 from xiang90/kv
*: rename etcd service to kv service in gRPC
2015-11-04 10:11:02 -08:00
1a3f7f7fa4 *: rename etcd service to kv service in gRPC 2015-11-04 10:05:49 -08:00
65d153db73 Merge pull request #3783 from yichengq/merge-logger
rafthttp: use MergeLogger for rafthttp logging
2015-11-04 09:48:43 -08:00
6040d57106 rafthttp: use MergeLogger to merge message-drop log
rafthttp logs repeated messages when large numbers of message-drop events
occur, which spams the log.
Use MergeLogger to merge these log lines.
2015-11-04 07:26:58 -08:00
5329159b5e rafthttp: remove failureMap from peerStatus
The logging mechanism is verbose, so it is removed from peerStatus.

We want to see the status changes of connections with peers,
and the one error that leads to deactivation.
There is no need to print out all non-repeated errors.
2015-11-04 07:26:33 -08:00
5c1b833232 etcdctlv3: support watch
A draft implementation for the demo.
2015-11-03 19:28:57 -08:00
c8e622f517 storage: make putm/delm a set with empty value
This cleans up the code and reduces allocations.
2015-11-03 19:10:45 -08:00
6dbfc21846 storage: delete key instead of setting it to false
When getting the watched events, it iterates over all keys in putm and delm
to generate the events. If we don't delete the key from putm/delm,
it would range over keys that were not actually put or deleted, which is
incorrect.

Fix the panic that happens when single put/delete is watched.
2015-11-03 19:00:39 -08:00
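A minimal sketch of the semantics these two storage commits describe, assuming putm/delm are keyed by key string (variable names taken from the commit messages):

```
package main

import "fmt"

func main() {
	// putm/delm are sets: the empty struct value costs no space, which is
	// the allocation reduction the set-with-empty-value commit mentions.
	putm := map[string]struct{}{}
	putm["foo"] = struct{}{} // "foo" was put in this batch

	// Once the event for "foo" is generated, delete the key outright.
	// Keeping it around as a false-like flag would make a later pass range
	// over a key that was never actually put or deleted.
	delete(putm, "foo")

	_, pending := putm["foo"]
	fmt.Println(pending) // false: no stale key remains
}
```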
94c6b6a93d Merge pull request #3801 from yichengq/fix-raft-timeout
raft: extend wait timeout in TestNodeAdvance
2015-11-03 18:29:47 -08:00
0de52414cd raft: extend wait timeout in TestNodeAdvance
This fixes the failure met in semaphore CI.
2015-11-03 16:57:18 -08:00
1f1d8e9282 Merge pull request #3800 from xiang90/watch_server
*: serve watch service
2015-11-03 16:32:29 -08:00
10de2e6dbe *: serve watch service
Implement the watch service and hook it up
with the gRPC server in etcdmain.
2015-11-03 15:58:34 -08:00
70cb8b8391 Merge pull request #3799 from gyuho/nameing_in_metrics_watching
storage: apply same naming in metrics.go
2015-11-03 15:23:39 -08:00
bdc280c4a7 storage: apply same naming in metrics.go
This PR follows up on Xiang's https://github.com/coreos/etcd/pull/3795
to make the naming consistent with its interface change.
2015-11-03 15:19:18 -08:00
f6b097c0cc Merge pull request #3798 from xiang90/watch_new
*: add v3 watch service
2015-11-03 14:39:20 -08:00
c160085f44 *: add v3 watch service 2015-11-03 14:21:24 -08:00
154fc8e19c Merge pull request #3795 from xiang90/watch_stream
storage: add watchChan
2015-11-03 13:32:49 -08:00
a1129dd5a5 storage: support multiple watching per watcher
We want to support multiple watchings per watcher channel, so that
a single goroutine can watch multiple keys/prefixes.
2015-11-03 12:36:11 -08:00
34e7611093 Merge pull request #3797 from gyuho/procfile_20151103
Procfile: delay proxy waiting for initial cluster
2015-11-03 10:24:48 -08:00
5e29449d61 Procfile: delay proxy waiting for initial cluster
This fixes #3647 by delaying proxy start by 3 seconds. Without this, the proxy
starts at the same time as the initial cluster; since the default proxy director
refresh interval is 30 seconds, if the cluster is not ready on the first try, the
proxy fails to discover it and has to wait another 30 seconds, which delays
proxying for the first 30 seconds.
2015-11-03 10:22:48 -08:00
0eee88a3d9 etcdserver: use timeout transport as peer transport
This pairs with remote timeout listeners.

etcd uses a timeout listener, and times out accepted connections
if there is no activity, so idle connections may time out easily.
Because the timeout transport doesn't reuse connections, it prevents
using a timed-out connection.

This fixes the problem that etcd fails to get the version of its peers.
2015-11-03 07:58:03 -08:00
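As a rough illustration (not etcd's actual pkg/transport code), a transport that pairs with remote timeout listeners might look like this; the function name and field choices are assumptions:

```
package transport

import (
	"net"
	"net/http"
	"time"
)

// newTimeoutTransport dials with a bounded timeout and never pools
// connections: a pooled idle connection could already have been timed out
// by the peer's timeout listener, causing requests on it to fail.
func newTimeoutTransport(dialTimeout time.Duration) *http.Transport {
	return &http.Transport{
		Dial:              (&net.Dialer{Timeout: dialTimeout}).Dial,
		DisableKeepAlives: true, // no reuse, so no stale timed-out connections
	}
}
```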
fe165de1d1 Merge pull request #3794 from yichengq/fix-proxy-term
etcdmain: fix parsing discovery error
2015-11-02 17:33:47 -08:00
9757dcd3a2 etcdmain: fix parsing discovery error
The discovery error is now wrapped in a struct, and cannot be compared
to predefined errors. Correct the comparison behavior to fix the
problem.
2015-11-02 17:23:06 -08:00
4fd65ecd4c Merge pull request #3785 from yichengq/fix-block-test
storage: extend wait timeout for execution
2015-11-02 12:53:18 -08:00
a74dd4c47a Merge pull request #3790 from xiang90/etcd-top
add etcdtop
2015-11-02 12:02:05 -08:00
8f9d237d21 Merge pull request #3792 from wojtek-t/update_ugorji
Update dependency on ugorji/go/codec
2015-11-02 07:43:48 -08:00
65ae8784fb client: regenerate code to unmarshal key response
Regenerate code for unmarshaling key response with a new version of
ugorji/go/codec.
2015-11-02 12:06:32 +01:00
02eec7763d Godeps: update ugorji/go/codec dependency
Update ugorji/go/codec dependency to the newer version.
2015-11-02 12:03:49 +01:00
e1b2e7245b tools/etcd-top: add copyright header 2015-11-01 18:19:32 -08:00
c4f8fe96e8 travis: install libpcap 2015-11-01 18:16:20 -08:00
2bfe995fb8 Godeps: add dependency for etcd-top 2015-11-01 18:07:27 -08:00
00557e96af tools: add etcd-top 2015-11-01 18:07:27 -08:00
59b5dabc66 storage: extend wait timeout for execution
Extend the timeout so the test always passes on travis.
2015-10-30 17:44:31 -07:00
f787e7904b Merge pull request #3762 from jonboulle/auth
etcdserver: restructure auth.Store and auth.User
2015-10-30 16:49:28 -07:00
ee522025b3 etcdserver: restructure auth.Store and auth.User
This attempts to decouple password-related functions, which previously
existed both in the Store and User structs, by splitting them out into a
separate interface, PasswordStore.  This means that they can be more
easily swapped out during testing.

This also changes the relevant tests to use mock password functions
instead of the bcrypt-backed implementations; as a result, the tests are
much faster.

Before:
```
	github.com/coreos/etcd/etcdserver/auth		31.495s
	github.com/coreos/etcd/etcdserver/etcdhttp	91.205s
```

After:
```
	github.com/coreos/etcd/etcdserver/auth		1.207s
	github.com/coreos/etcd/etcdserver/etcdhttp	1.207s
```
2015-10-30 16:33:40 -07:00
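The commit message suggests a shape like the following sketch; the exact method set is an assumption for illustration, not the real auth package interface:

```
package auth

// PasswordStore isolates the password-related functions so tests can swap
// the expensive bcrypt-backed implementation for a trivial one.
type PasswordStore interface {
	CheckPassword(user User, password string) bool
	HashPassword(password string) (string, error)
}

type User struct {
	Name string
	Hash string
}

// Store embeds a PasswordStore; production wires in bcrypt, tests wire in
// fastPasswords, which is why the suites drop from ~31s/~91s to ~1s.
type Store struct {
	PasswordStore
}

type fastPasswords struct{}

func (fastPasswords) CheckPassword(u User, p string) bool   { return u.Hash == p }
func (fastPasswords) HashPassword(p string) (string, error) { return p, nil }
```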
2840260b3b Merge pull request #3781 from gyuho/doc_typo_20151029
Documentation: fix typo in proxy.md
2015-10-29 20:34:16 -07:00
294f03f85f Documentation: fix typo in proxy.md
This fixes some typos in proxy.md.

Thanks,
2015-10-29 20:30:50 -07:00
00b4880494 Update ROADMAP.md 2015-10-29 16:05:40 -07:00
42255748cd Update ROADMAP.md 2015-10-29 16:03:14 -07:00
1b3d9130c9 Merge pull request #3759 from yichengq/rafthttp-unreachable
rafthttp: mark unreachable on unexpected response
2015-10-29 15:12:23 -07:00
1944893ef8 Merge pull request #3776 from gyuho/etcdmain_doc
etcdmain: fix package description compatible with godoc.org
2015-10-29 12:36:11 -07:00
821c071f3f etcdmain: fix package description for godoc.org
This fixes the package description for etcdmain, which wasn't compatible with
godoc.org, by deleting the extra blank lines between the comment and the package name.
2015-10-29 12:28:52 -07:00
84d7825a77 rafthttp: stop masking errMemberRemoved in pipeline
It makes logic more straightforward and readable. Also, it makes the
handle method consistent with stream and snapshot sender.
2015-10-28 21:40:48 -07:00
908a011604 rafthttp: mark unreachable on unexpected response
In rafthttp, when making a request to some endpoint, it may receive a
response with an unexpected status code or header, indicating the endpoint
doesn't function correctly. It should mark the endpoint unreachable.
2015-10-28 21:40:11 -07:00
2fe6893d5d Merge pull request #3772 from xiang90/watcher_sep
storage: move watcher interface into watcher.go
2015-10-28 21:14:55 -07:00
f71bcfa8ce storage: move watcher interface into watcher.go 2015-10-28 21:10:58 -07:00
de99c9ed58 Merge pull request #3770 from yichengq/link-etcdctl
docs/libraries-and-tools: update the link of etcdctl
2015-10-28 14:07:33 -07:00
4f36897f8c Merge pull request #3767 from kamilhark/master
Added etcdsh command line tool to the list
2015-10-28 13:41:14 -07:00
ebde1d720e docs/libraries-and-tools: update the link of etcdctl
The old repo is deprecated, and we develop etcdctl in the etcd repo now.
2015-10-28 13:37:28 -07:00
b5c176360e Merge pull request #3768 from yichengq/fix-publish-test
etcdserver: extend wait timeout in TestPublishRetry
2015-10-28 13:31:51 -07:00
695a5148cf Merge pull request #3769 from msoap/fix-docs
documentation: changed link to style doc
LGTM. Link accurate (old link was redir'd anyway).
Tiny fix, doc-only, so directly merging.
2015-10-28 13:13:37 -07:00
3dad5fffc0 documentation: changed link to style doc
The Go project has moved from code.google.com to github.com.
2015-10-28 21:49:28 +02:00
7d757bbc8a etcdserver: extend wait timeout in TestPublishRetry
It fixes the failure in semaphore CI:
```
--- FAIL: TestPublishRetry (0.00s)
		server_test.go:1108: len(action) = 1, want >= 2
```
2015-10-28 12:07:00 -07:00
ae18e6ea37 docs/libraries-and-tools: Added etcdsh command line tool to the list 2015-10-28 19:38:09 +01:00
099d8674c4 Merge pull request #3746 from yichengq/load-storage
etcdserver: fix recovering snapshot from disk
2015-10-27 14:42:41 -07:00
4b8ee2d66e storage: skip old entry in ConsistentWatchableStore
This avoids applying the same entry twice when restoring from disk.
2015-10-26 23:26:06 -07:00
263b270708 etcdserver: commit v3 storage before releasing WAL
This ensures that v3 storage can always find the following log entries
on restart.
2015-10-26 21:06:08 -07:00
70f9407d2d Merge pull request #3758 from xiang90/race
*: fix various data races detected by race detector
2015-10-26 20:57:31 -07:00
ab4892ade2 Merge pull request #3749 from gyuho/etcdmain_flags_20151025
etcdmain: make flags and formats identical
2015-10-26 20:54:37 -07:00
a8e6e71bf9 *: fix various data races detected by race detector 2015-10-26 20:49:37 -07:00
306dd7183b Merge pull request #3757 from xiang90/race
rafthttp: fix data races detected by go race detector
2015-10-26 17:10:17 -07:00
336d177c82 rafthttp: fix data races detected by go race detector 2015-10-26 15:29:08 -07:00
4766227b76 Merge pull request #3750 from yichengq/rafthttp-continue
rafthttp: fix wrong return in pipeline.handle
2015-10-26 14:11:42 -07:00
4076dda101 rafthttp: fix wrong return in pipeline.handle
pipeline.handle is long-lived, and should continue to receive the
next message to send out when the current message fails to send. So it
should `continue` instead of `return` here.
2015-10-26 14:05:19 -07:00
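The difference between `continue` and `return` here is easiest to see in a stripped-down loop; this is an illustrative sketch, not the actual rafthttp code:

```
package main

import "fmt"

type pipeline struct{ msgc chan string }

func (p *pipeline) post(m string) error {
	if m == "bad" {
		return fmt.Errorf("send failed")
	}
	return nil
}

// handle is long-lived: one failed send must not terminate the loop.
func (p *pipeline) handle() {
	for m := range p.msgc {
		if err := p.post(m); err != nil {
			fmt.Println("dropped:", err)
			continue // with `return`, message "b" below would never be sent
		}
		fmt.Println("sent:", m)
	}
}

func main() {
	p := &pipeline{msgc: make(chan string, 3)}
	p.msgc <- "a"
	p.msgc <- "bad"
	p.msgc <- "b"
	close(p.msgc)
	p.handle()
}
```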
44bbc87698 Merge pull request #3756 from suryanathan/master
docs/libraries-and-tools: Update libraries-and-tools.md with etcdcpp
2015-10-26 13:52:42 -07:00
e4ada19996 docs/libraries-and-tools: Update libraries-and-tools.md with etcdcpp
Add a C++ language binding for API version 2.2.0.
2015-10-26 16:44:17 -04:00
cc378585a9 Merge pull request #3755 from jonboulle/master
travis: only run unit tests
2015-10-26 13:36:18 -07:00
516be7a781 travis: only run unit tests
Travis has chronic problems successfully running the integration suite -
we've moved to Semaphore for that purpose - but it can
still be useful as a fail-fast option for unit tests and formatting.
2015-10-26 12:47:15 -07:00
52782cf8ee etcdmain: make flags and formats identical
This makes flagsline and config.go identical in their flag descriptions and some
punctuation conventions.
2015-10-25 06:31:37 -07:00
d44b79c3c9 Merge pull request #3748 from coreos/revert-3737-rafthttp-continue
Revert "rafthttp: fix wrong return in pipeline.handle"
2015-10-24 21:05:52 -07:00
5eda45ece6 Revert "rafthttp: fix wrong return in pipeline.handle" 2015-10-24 20:25:56 -07:00
dbba5bb373 Merge pull request #3737 from yichengq/rafthttp-continue
rafthttp: fix wrong return in pipeline.handle
2015-10-24 19:42:38 -07:00
7e38f05ceb Merge pull request #3742 from yichengq/save-index
etcdserver: save consistent index into v3 storage
2015-10-24 09:48:28 -07:00
15ed6d8268 etcdserver: save consistent index into v3 storage
This helps recover the consistent index on restart in the future.
2015-10-24 09:27:24 -07:00
f648d52afe rafthttp: fix wrong return in pipeline.handle
pipeline.handle is long-lived, and should continue to receive the
next message to send out when the current message fails to send. So it
should `continue` instead of `return` here.
2015-10-23 17:00:03 -07:00
41cb39b68a storage: Get -> ConsistentIndex in ConsistentIndexGetter
To make the method name more specific in the context.
2015-10-23 16:40:55 -07:00
4f47b08cf6 Merge pull request #3744 from yichengq/fix-sem
raft: extend wait timeout in TestMultiNodeAdvance
2015-10-23 13:20:52 -07:00
bf3057e5bd raft: extend wait timeout in TestMultiNodeAdvance
This fixes the failure met in semaphore CI:

```
--- FAIL: TestMultiNodeAdvance-2 (0.01s)
		multinode_test.go:458: expect Ready after Advance, but there is
		no Ready available
```
2015-10-23 12:08:24 -07:00
01559fafeb Merge pull request #3741 from yichengq/receive-restore
etcdserver: restore KV snapshot when receiving snapshot
2015-10-23 09:24:17 -07:00
cacc0d6432 etcdserver: restore KV snapshot when receiving snapshot
When a slow follower receives the snapshot sent from the leader, it
should rename the snapshot file to the default KV file path, and
restore the KV snapshot.

I have tested it manually and it works well.
2015-10-23 08:43:26 -07:00
d33c26c20a Merge pull request #3730 from yichengq/storage-consistent
storage: add consistentWatchableStore
2015-10-23 08:15:04 -07:00
4fb4bc3ca8 storage: add consistentWatchableStore
consistentWatchableStore maintains an index that is always consistent
with the latest txn. The index can be used to indicate the progress
of the store so far during recovery.
2015-10-22 22:54:51 -07:00
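Conceptually (names assumed, not etcd's actual code), keeping the applied index consistent with the latest txn lets recovery skip entries that were already applied, as the earlier "skip old entry" commit also relies on:

```
package main

import "sync"

type consistentStore struct {
	mu    sync.Mutex
	index uint64            // index of the latest applied entry
	data  map[string]string // stands in for the real backend
}

// apply records the entry's index in the same critical section as the data
// change, so a snapshot of the store always carries a matching index.
func (s *consistentStore) apply(index uint64, key, val string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if index <= s.index {
		return // already applied before a restart; skip the duplicate entry
	}
	s.data[key] = val
	s.index = index
}

func main() {
	s := &consistentStore{data: map[string]string{}}
	s.apply(7, "foo", "bar")
	s.apply(7, "foo", "stale") // replayed after restart: ignored
}
```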
ae62a77de6 Merge pull request #3729 from xiang90/mem_bench
doc: add benchmark doc for new storage pkg
2015-10-22 10:54:43 -07:00
e3cedeeb12 doc: add benchmark doc for new storage pkg 2015-10-22 13:53:03 -04:00
2feccd3fa4 Merge pull request #3733 from yichengq/fix-wait-timeout
pkg/transport: extend wait timeout for write
2015-10-22 13:07:14 -04:00
d3ebecdddd pkg/transport: extend wait timeout for write
This helps the test pass safely in semaphore CI.

Based on my manual testing, it may take at most 500ms to return an
error in semaphore CI, so I set 1s as a safe value.
2015-10-21 18:27:21 -07:00
8b08fff1e9 Merge pull request #3731 from yichengq/storage-kv
storage: fix WatchableKV interface and refine comment
2015-10-21 17:27:24 -07:00
01b163e77d Merge pull request #3588 from gyuho/storage/watchable_store.go-use-map-for-unsynced
storage/watchable_store.go: use map for unsynced
2015-10-21 16:50:15 -07:00
44cecb8624 Merge pull request #3732 from yichengq/config-header
docs/configuration: fix heading hierarchy
2015-10-21 15:50:38 -07:00
f73d0ed1d9 storage: use map for watchable store unsynced
This addresses `TODO: use map to reduce cancel cost`.
I switched from slice to map, and the benchmark results show
that the map implementation performs better, as follows:

```
[1]:
benchmark                                   old ns/op     new ns/op     delta
BenchmarkWatchableStoreUnsyncedCancel       215212        1307          -99.39%
BenchmarkWatchableStoreUnsyncedCancel-2     120453        710           -99.41%
BenchmarkWatchableStoreUnsyncedCancel-4     120765        748           -99.38%
BenchmarkWatchableStoreUnsyncedCancel-8     121391        719           -99.41%

benchmark                                   old allocs     new allocs     delta
BenchmarkWatchableStoreUnsyncedCancel       0              0              +0.00%
BenchmarkWatchableStoreUnsyncedCancel-2     0              0              +0.00%
BenchmarkWatchableStoreUnsyncedCancel-4     0              0              +0.00%
BenchmarkWatchableStoreUnsyncedCancel-8     0              0              +0.00%

benchmark                                   old bytes     new bytes     delta
BenchmarkWatchableStoreUnsyncedCancel       200           1             -99.50%
BenchmarkWatchableStoreUnsyncedCancel-2     138           0             -100.00%
BenchmarkWatchableStoreUnsyncedCancel-4     138           0             -100.00%
BenchmarkWatchableStoreUnsyncedCancel-8     139           0             -100.00%

[2]:
benchmark                                   old ns/op     new ns/op     delta
BenchmarkWatchableStoreUnsyncedCancel       212550        1117          -99.47%
BenchmarkWatchableStoreUnsyncedCancel-2     120927        691           -99.43%
BenchmarkWatchableStoreUnsyncedCancel-4     120752        699           -99.42%
BenchmarkWatchableStoreUnsyncedCancel-8     121012        688           -99.43%

benchmark                                   old allocs     new allocs     delta
BenchmarkWatchableStoreUnsyncedCancel       0              0              +0.00%
BenchmarkWatchableStoreUnsyncedCancel-2     0              0              +0.00%
BenchmarkWatchableStoreUnsyncedCancel-4     0              0              +0.00%
BenchmarkWatchableStoreUnsyncedCancel-8     0              0              +0.00%

benchmark                                   old bytes     new bytes     delta
BenchmarkWatchableStoreUnsyncedCancel       197           1             -99.49%
BenchmarkWatchableStoreUnsyncedCancel-2     138           0             -100.00%
BenchmarkWatchableStoreUnsyncedCancel-4     138           0             -100.00%
BenchmarkWatchableStoreUnsyncedCancel-8     139           0             -100.00%

[3]:
benchmark                                   old ns/op     new ns/op     delta
BenchmarkWatchableStoreUnsyncedCancel       214268        1183          -99.45%
BenchmarkWatchableStoreUnsyncedCancel-2     120763        759           -99.37%
BenchmarkWatchableStoreUnsyncedCancel-4     120321        708           -99.41%
BenchmarkWatchableStoreUnsyncedCancel-8     121628        680           -99.44%

benchmark                                   old allocs     new allocs     delta
BenchmarkWatchableStoreUnsyncedCancel       0              0              +0.00%
BenchmarkWatchableStoreUnsyncedCancel-2     0              0              +0.00%
BenchmarkWatchableStoreUnsyncedCancel-4     0              0              +0.00%
BenchmarkWatchableStoreUnsyncedCancel-8     0              0              +0.00%

benchmark                                   old bytes     new bytes     delta
BenchmarkWatchableStoreUnsyncedCancel       200           1             -99.50%
BenchmarkWatchableStoreUnsyncedCancel-2     139           0             -100.00%
BenchmarkWatchableStoreUnsyncedCancel-4     138           0             -100.00%
BenchmarkWatchableStoreUnsyncedCancel-8     139           0             -100.00%

[4]:
benchmark                                   old ns/op     new ns/op     delta
BenchmarkWatchableStoreUnsyncedCancel       208332        1089          -99.48%
BenchmarkWatchableStoreUnsyncedCancel-2     121011        691           -99.43%
BenchmarkWatchableStoreUnsyncedCancel-4     120678        681           -99.44%
BenchmarkWatchableStoreUnsyncedCancel-8     121303        721           -99.41%

benchmark                                   old allocs     new allocs     delta
BenchmarkWatchableStoreUnsyncedCancel       0              0              +0.00%
BenchmarkWatchableStoreUnsyncedCancel-2     0              0              +0.00%
BenchmarkWatchableStoreUnsyncedCancel-4     0              0              +0.00%
BenchmarkWatchableStoreUnsyncedCancel-8     0              0              +0.00%

benchmark                                   old bytes     new bytes     delta
BenchmarkWatchableStoreUnsyncedCancel       194           1             -99.48%
BenchmarkWatchableStoreUnsyncedCancel-2     139           0             -100.00%
BenchmarkWatchableStoreUnsyncedCancel-4     139           0             -100.00%
BenchmarkWatchableStoreUnsyncedCancel-8     139           0             -100.00%

[5]:
benchmark                                   old ns/op     new ns/op     delta
BenchmarkWatchableStoreUnsyncedCancel       211900        1097          -99.48%
BenchmarkWatchableStoreUnsyncedCancel-2     121795        753           -99.38%
BenchmarkWatchableStoreUnsyncedCancel-4     123182        700           -99.43%
BenchmarkWatchableStoreUnsyncedCancel-8     122820        688           -99.44%

benchmark                                   old allocs     new allocs     delta
BenchmarkWatchableStoreUnsyncedCancel       0              0              +0.00%
BenchmarkWatchableStoreUnsyncedCancel-2     0              0              +0.00%
BenchmarkWatchableStoreUnsyncedCancel-4     0              0              +0.00%
BenchmarkWatchableStoreUnsyncedCancel-8     0              0              +0.00%

benchmark                                   old bytes     new bytes     delta
BenchmarkWatchableStoreUnsyncedCancel       198           1             -99.49%
BenchmarkWatchableStoreUnsyncedCancel-2     140           0             -100.00%
BenchmarkWatchableStoreUnsyncedCancel-4     141           0             -100.00%
BenchmarkWatchableStoreUnsyncedCancel-8     141           0             -100.00%
```
2015-10-21 15:30:15 -07:00
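The source of the ~99% improvement is visible in miniature below: cancellation becomes a single map delete instead of a scan-and-splice over a slice. The types are illustrative:

```
package main

type watching struct{ key string }

// cancelSlice is the old shape: an O(n) scan plus a splice that shifts memory.
func cancelSlice(unsynced []*watching, w *watching) []*watching {
	for i, ww := range unsynced {
		if ww == w {
			return append(unsynced[:i], unsynced[i+1:]...)
		}
	}
	return unsynced
}

// cancelMap is the new shape: O(1) and allocation-free, matching the
// ns/op and bytes/op drops in the benchmark output above.
func cancelMap(unsynced map[*watching]struct{}, w *watching) {
	delete(unsynced, w)
}

func main() {
	w := &watching{key: "foo"}
	_ = cancelSlice([]*watching{w}, w)
	cancelMap(map[*watching]struct{}{w: {}}, w)
}
```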
f95d18f766 docs/configuration: fix heading hierarchy
to make it consistent with other sections in the doc.
2015-10-21 15:23:48 -07:00
027c073d55 storage: refine Watch comment in WatchableKV
Explain explicitly how these arguments are used.
2015-10-21 14:47:52 -07:00
56b7584418 Merge pull request #3725 from joshix/hdinghier-mulligan
Documentation: Fix heading hierarchy.
2015-10-21 13:52:57 -07:00
2673e657e6 storage: fix WatchableKV interface
We delete endRev from the watch functionality, so the interface needs
to be fixed.
2015-10-21 11:50:25 -07:00
35eb26ef5d Merge pull request #3726 from yichengq/watch-store
storage: add store field in watchableStore
2015-10-21 11:07:45 -07:00
0f7374ce89 storage: KV field -> store field in watchableStore
We need to access the underlying store to use its RangeEvents function.
It is not good to use unnecessary type conversion.

The underlying store is also needed to build further stores upon
watchableStore.
2015-10-20 19:23:20 -07:00
8d3ed0176c Merge pull request #3727 from yichengq/govet
raft: fix malformed example name
2015-10-20 16:51:47 -07:00
01806c3e80 raft: fix malformed example name
It is reported by latest govet:
```
gopath/src/github.com/coreos/etcd/raft/example_test.go:26: Example_Node
has malformed example suffix: Node
```
2015-10-20 16:40:01 -07:00
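For context, govet's complaint is about Go's example naming convention: an example for identifier Node must be named ExampleNode, and any suffix after an underscore must be lower-case. A minimal illustration:

```
package raft_test

// Wrong: Example_Node has the malformed suffix "Node" (upper-case), so it
// is neither a package example nor an example bound to the Node type.

// Right: bound to the Node identifier.
func ExampleNode() {
	// ...
}
```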
98bdeab53b Documentation: Fix heading hierarchy.
Correct the hierarchy of Markdown symbols in document headings.
2015-10-20 15:26:49 -07:00
704bff0c77 Merge pull request #3724 from coreos/philips-patch-1
README: fix language for release binaries
2015-10-20 14:22:02 -07:00
5b5b0ef060 README: attempt to make it even clearer 2015-10-20 14:17:33 -07:00
b38e21a9e9 README: fix language for release binaries
Address confusion on where to find stuff and make it easier to find with keywords for various operating systems.
2015-10-20 14:07:40 -07:00
9635d8d94c Merge pull request #3720 from yichengq/clean-streamAppV1
rafthttp: deprecate streamTypeMsgApp and remove msgApp stream sent restriction due to streamTypeMsgApp
2015-10-20 10:37:51 -07:00
de669be6d6 Merge pull request #3683 from yichengq/raft-block
etcdserver: fix raft state machine may block
2015-10-20 09:44:34 -07:00
ab5df57ecf etcdserver: fix raft state machine may block
When the snapshot store requests a raft snapshot from the etcdserver apply loop,
it may block on the channel for some time, or wait some time for the KV to
snapshot. This is unexpected, because the raft state machine should never block.

Even worse, this block may lead to deadlock:
1. raft state machine waits on getting snapshot from raft memory storage
2. raft memory storage waits snapshot store to get snapshot
3. snapshot store requests raft snapshot from apply loop
4. apply loop is applying entries, and waits raftNode loop to finish
messages sending
5. raftNode loop waits peer loop in Transport to send out messages
6. peer loop in Transport waits for raft state machine to process message

Fix it by changing getSnap to create the snapshot asynchronously.
2015-10-20 09:19:34 -07:00
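A sketch of the asynchronous shape the fix describes, with assumed names: getSnap returns immediately and the apply loop delivers the snapshot when it is free to create one, so the raft state machine no longer waits on the apply loop.

```
package main

type snapshot struct{ index uint64 }

type snapshotStore struct {
	reqc  chan struct{}
	snapc chan snapshot
}

// getSnap no longer blocks the raft state machine: it posts a request and
// hands back a channel that the apply loop fulfills asynchronously.
func (ss *snapshotStore) getSnap() <-chan snapshot {
	select {
	case ss.reqc <- struct{}{}:
	default: // a request is already pending; piggyback on it
	}
	return ss.snapc
}

func main() {
	ss := &snapshotStore{reqc: make(chan struct{}, 1), snapc: make(chan snapshot, 1)}
	go func() { // stand-in for the apply loop serving requests between entries
		<-ss.reqc
		ss.snapc <- snapshot{index: 42}
	}()
	<-ss.getSnap()
}
```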
b61eaf3335 rafthttp: msgApp{Reader/Writer} -> msgAppV2{Reader/Writer}
To make what it serves clearer.
2015-10-20 08:28:06 -07:00
5060b2f322 rafthttp: send all MsgApp on stream msgAppV2
For stream msgAppV2, as long as the message is of MsgApp type, it should be sent
through stream msgAppV2.
2015-10-20 08:23:36 -07:00
33231fccdd rafthttp: fix wrong stream name returned by pick
msgAppWriter uses streamAppV2 type, and it should return the correct name.
2015-10-20 08:17:06 -07:00
f725f6a552 rafthttp: deprecate streamTypeMsgApp
streamTypeMsgApp is only used by etcd 2.0. etcd 2.3 should not talk to
etcd 2.0, either to send or to receive requests, so I deprecate streamTypeMsgApp
and its related code in the rafthttp package.

Updating term is only used by streamTypeMsgApp, so it is removed too.
2015-10-20 08:15:54 -07:00
eb7bce893e Merge pull request #3721 from mitake/servevars
etcdserver: don't allow methods other than GET in /debug/vars
2015-10-20 08:01:38 -07:00
1b0c65c299 etcdserver: don't allow methods other than GET in /debug/vars
Currently, /debug/vars seems to allow all methods, e.g. PUT,
POST, etc. However, this path is read-only, so it should allow
GET only.
2015-10-20 17:19:42 +09:00
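A minimal sketch of such a guard with net/http and expvar (not the actual etcdserver code):

```
package main

import (
	"expvar"
	"net/http"
)

func debugVars(w http.ResponseWriter, r *http.Request) {
	if r.Method != "GET" {
		// Reject PUT, POST, etc.: the path exposes read-only variables.
		w.Header().Set("Allow", "GET")
		http.Error(w, "Method Not Allowed", http.StatusMethodNotAllowed)
		return
	}
	expvar.Handler().ServeHTTP(w, r)
}

func main() {
	http.HandleFunc("/debug/vars", debugVars)
	http.ListenAndServe("127.0.0.1:8080", nil)
}
```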
7dcb99b60e Merge pull request #3656 from endocode/kayrus/client_doc
Added example on how to get node's value
2015-10-19 20:41:12 -07:00
e5082fce54 Merge pull request #3718 from gyuho/gyuho_README
README: fix typo
2015-10-19 17:14:46 -07:00
7d326087f9 README: fix typo
It looks like a typo, or perhaps an elided sentence.
Thanks,
2015-10-19 17:03:40 -07:00
4e9f137d1b Merge pull request #3716 from yichengq/add-sem-badge
README: add semaphore CI status badge into README
2015-10-19 16:25:21 -07:00
6ebd62a869 README: add semaphore CI status badge into README 2015-10-19 16:08:40 -07:00
32dd4d5de3 Merge pull request #3657 from xiang90/fix_remove
etcdserver: skip updating attr if the member does not exist
2015-10-19 13:35:57 -07:00
776e9fb7be Merge pull request #3703 from xiang90/bolt
storage/backend: avoid creating new bolt.tx during a batchTx
2015-10-19 10:30:57 -07:00
afb35e366d client: added example on how to get node's value 2015-10-19 10:31:05 +02:00
79c263b2ec Merge pull request #3707 from xiang90/CI
pkg/transport: longer timeout for slow CI
2015-10-18 17:22:15 -07:00
7c6e2deb66 Merge pull request #3708 from xiang90/travis
travis: drop go-nyet
2015-10-18 16:43:53 -07:00
559b76f401 travis: drop go-nyet 2015-10-18 16:41:57 -07:00
3c1ecf70cf pkg/transport: longer timeout for slow CI 2015-10-18 16:32:18 -07:00
5372e11727 Merge pull request #3704 from xiang90/rafthttp
clean up rafthttp pkg: round1
2015-10-18 10:00:26 -07:00
427a154aae rafthttp: various clean up 2015-10-18 09:49:18 -07:00
7d3af5e15f rafthttp: rename message.go -> message_codec.go 2015-10-18 09:49:11 -07:00
e87cd0c17b rafthttp: move new funcs to right place 2015-10-18 09:48:59 -07:00
f08d750b0b Merge pull request #3697 from mqliang/cluster-health
etcdctl: fix health check condition
2015-10-18 09:26:05 -07:00
478fab6aca rafthttp: rename NewHandler to newPipelineHandler 2015-10-17 22:33:28 -07:00
080c11d14e rafthttp: make ConnReadLimitByte private and add comment 2015-10-17 22:20:36 -07:00
5efdd7bc6d storage/backend: avoid creating new bolt.tx during a batchTx 2015-10-17 21:31:34 -07:00
b2d92dedae etcdctl: fix health check condition 2015-10-18 08:22:13 +08:00
d07c9b00e5 Merge pull request #3701 from xiang90/rm_end_watcher
storage: remove the endRev of watcher
2015-10-17 16:04:27 -07:00
6556bf1643 storage: remove the endRev of watcher 2015-10-17 15:59:49 -07:00
f78a11d468 Merge pull request #3694 from philips/fix-configuration-headers
Documentation: configuration docs headers
2015-10-16 12:39:10 -07:00
22d8ca4c9a Documentation: configuration docs headers
The configuration docs are indented weirdly compared to our standards.
This makes the sidebar here not work right:
https://coreos.com/etcd/docs/latest/configuration.html
2015-10-16 12:32:57 -07:00
fd07e02604 Merge pull request #3691 from gyuho/documentation_20151015
raft/documentation: clarify progress's subjects.
2015-10-15 20:02:00 -07:00
1716d5858f raft/documentation: clarify progress's subjects.
If I understand correctly, `progress` represents the state of a follower. Some
comments weren't clear to me because they were missing the subjects of
`progress`. This adds clarification on who is doing what. Please let me
know if I misunderstood anything. Thanks,
2015-10-15 19:15:08 -07:00
9ce79dbbc3 Merge pull request #3685 from gyuho/etcdctl_mk_command_2
etcdctl: fix mk command with PrevNoExist
2015-10-15 12:13:54 -07:00
df34d67e98 Merge pull request #3689 from ccding/patch-1
raft/doc: fix misuse of `for' loop in docs
2015-10-15 09:20:05 -07:00
362df8e470 raft/doc: fix misuse of `for' loop in docs 2015-10-15 11:13:30 -05:00
1dab7e8084 etcdctl/command: mk command with PrevNoExist
This attempts to fix #3676. `PrevNoExist` checks whether the key already exists
and, if so, returns an error, which is how the `mk` command is supposed to work.
The previous code ignored the existing key and overwrote it with the new value.

/cc @yichengq
2015-10-15 09:05:17 -07:00
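With the etcd v2 Go client of this era, the fixed behavior corresponds roughly to a Set guarded by PrevNoExist; the endpoint and key below are placeholders:

```
package main

import (
	"fmt"

	"github.com/coreos/etcd/client"
	"golang.org/x/net/context"
)

func mk(kapi client.KeysAPI, key, value string) error {
	// PrevNoExist makes the server reject the Set when the key already
	// exists, instead of silently overwriting it.
	_, err := kapi.Set(context.Background(), key, value, &client.SetOptions{
		PrevExist: client.PrevNoExist,
	})
	return err
}

func main() {
	c, err := client.New(client.Config{Endpoints: []string{"http://127.0.0.1:2379"}})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(mk(client.NewKeysAPI(c), "/foo", "bar"))
}
```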
c2e49b5622 Merge pull request #3687 from ccding/patch-1
raft/doc: fix typos
2015-10-15 07:52:43 -07:00
f1f92f0fa3 raft/doc: fix typos 2015-10-15 02:17:34 -05:00
afd74dfeb7 Merge pull request #3611 from mitake/etcdctl-timeout
etcdctl: use a context with -total-timeout in simple commands
2015-10-14 16:13:34 -07:00
f78ccbbd3f Merge pull request #3681 from yichengq/godep-update
Godeps: update prometheus dependency
2015-10-14 11:44:49 -07:00
d807c895b8 Godeps: update prometheus dependency
prometheus updated its directory layout
(https://github.com/prometheus/client_golang#where-is-model-extraction-and-text),
which makes Godeps restore/save unable to work.

Remove all prometheus dependencies manually and run godep save again to fix
this problem.
2015-10-14 09:58:36 -07:00
aeb6ef5d8a Merge pull request #3680 from gyuho/Documentation_20151014
Documentation: typos in discovery, faq, security
2015-10-14 08:50:02 -07:00
0598de2e25 Documentation: typos in discovery, faq, security
This just fixes some typos found while reading the Documentation.

Thanks,

Documentation: security
2015-10-14 08:48:01 -07:00
d2b40c1c98 Merge pull request #3666 from yichengq/transport-snap
rafthttp: support sending v3 snapshot message
2015-10-13 23:58:02 -07:00
1f21ccf166 rafthttp: support sending v3 snapshot message
Use snapshotSender to send the v3 snapshot message. It puts the raft snapshot
message and the v3 snapshot into the request body, then sends it to the target peer.
When it receives http.StatusNoContent, it knows the message has been
received and processed successfully.

As the receiver, snapHandler saves the v3 snapshot and then processes the raft
snapshot message, then responds with http.StatusNoContent.
2015-10-13 23:11:28 -07:00
6afd8e4fd9 ROADMAP: fix v3 API issues link 2015-10-12 20:56:20 -07:00
70dc205500 Merge pull request #3665 from raoofm/patch-2
hack: bench to use Content Type as value written was always empty
2015-10-12 11:35:34 -07:00
90e8dcd9bf hack: benchmarking to use Content Type
hack: benchmarking to use Content-Type: application/x-www-form-urlencoded

boom sends PUT requests with Content-Type: text/html, which etcd cannot parse; it should use Content-Type: application/x-www-form-urlencoded. After this fix the official etcd benchmarks should be updated, since key size was not a factor when the value being set was always empty.
2015-10-12 10:03:09 -04:00
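In Go terms, the corrected benchmark request looks roughly like this (the v2 keys endpoint expects form-encoded bodies; the endpoint and key are placeholders):

```
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	body := url.Values{"value": {"bar"}}.Encode() // "value=bar"
	req, err := http.NewRequest("PUT", "http://127.0.0.1:2379/v2/keys/foo",
		strings.NewReader(body))
	if err != nil {
		panic(err)
	}
	// boom sent text/html, which etcd cannot parse as a form body;
	// form encoding is what the v2 API expects.
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	resp, err := http.DefaultClient.Do(req)
	fmt.Println(resp, err)
}
```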
df7074911e Merge pull request #3664 from yichengq/transport-more
rafthttp: build transport inside pkg instead of passed-in
2015-10-11 22:00:05 -07:00
207c92b627 rafthttp: build transport inside pkg instead of passed-in
rafthttp has different requirements for the connections created by the
transport for different usages, and this is hard to achieve when given
one http.RoundTripper. Pass the data needed to build transports into the
pkg, and let rafthttp build its own transports.
2015-10-11 21:42:37 -07:00
988a09eb20 Merge pull request #3663 from yichengq/transport-rt
pkg/transport: pass dial timeout to NewTransport
2015-10-11 10:12:09 -07:00
9673eb625a pkg/transport: pass dial timeout to NewTransport
So we can set the dial timeout for a new transport, which makes it
customizable according to the max RTT.
2015-10-11 10:09:25 -07:00
017f5f4670 Merge pull request #3662 from yichengq/transport
rafthttp: expose struct to set configuration
2015-10-11 09:35:15 -07:00
233e717e2f rafthttp: expose struct to set configuration
transport takes too many arguments, and the new function is unreadable.
Change to setting fields in the transport struct directly.
2015-10-11 09:02:16 -07:00
9534b11ad2 Merge pull request #3660 from gyuho/Documentation_typos_20151009
Documentation: fix typos
2015-10-09 16:10:49 -07:00
d05d6f8bbb Documentation: fix typos
I found some typos. Please let me know if you have any feedback.

Thanks,

Documentation: fix metrics.md typo

Documentation: trim blank lines in metrics.md
2015-10-09 15:56:58 -07:00
ce45159767 Merge pull request #3655 from wojtek-t/update_dependency
Update dependency on ugorji/go/codec
2015-10-09 10:23:47 -07:00
4eb598be06 client: regenerate code to unmarshal key response
Regenerate code for unmarshaling key response with a new version of
ugorji/go/codec
2015-10-09 10:59:42 +02:00
8ebbec8e05 Godeps: update ugorji/go/codec dependency
Update ugorji/go/codec dependency to the newer version (a bunch of fixes were made).
2015-10-09 10:59:42 +02:00
eadfd138a4 Merge pull request #3658 from mqliang/patch-2
docs/api.md: fix documentation
2015-10-08 19:51:54 -07:00
8c580ffe2d docs/api.md: fix documentation
Fix documentation
2015-10-09 10:41:12 +08:00
98e30ca7c2 etcdserver: skip updating attr if the member does not exist 2015-10-08 14:07:16 -07:00
f74ff9b867 Merge pull request #3644 from mitake/test-race
etcdserver, test: don't access testing.T in time.AfterFunc()'s own go…
2015-10-07 08:34:58 -07:00
dc394a0a99 Merge pull request #3649 from kkaneda/kkaneda/comment_fix
raft: fix a description of MemoryStorage.Compact
2015-10-06 21:51:20 -07:00
ebd8cb04c1 raft: fix a description of MemoryStorage.Compact
The parameter name is compactIndex, not i.
2015-10-06 21:49:33 -07:00
68dd3ee621 etcdserver, test: don't access testing.T in time.AfterFunc()'s own goroutine
time.AfterFunc() creates its own goroutine and calls the callback
function in that goroutine. This can cause a data race like the problem
fixed in the commit de1a16e0f1 . This
commit also fixes the potential data races in the tests in
etcdserver/server_test.go .
2015-10-06 11:37:08 +09:00
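The safe pattern (used both here and in the de1a16e0f1 fix further down this log) keeps all testing.T access on the test goroutine and lets the timer goroutine signal over a channel; a generic sketch:

```
package example_test

import (
	"testing"
	"time"
)

func TestWithDeadline(t *testing.T) {
	timeout := make(chan struct{})
	timer := time.AfterFunc(time.Second, func() {
		close(timeout) // never call t.Fatal here: it runs on the timer goroutine
	})
	defer timer.Stop()

	done := make(chan struct{})
	go func() {
		// ... work under test ...
		close(done)
	}()

	select {
	case <-done:
	case <-timeout:
		t.Fatal("timed out") // t is only touched on the test goroutine
	}
}
```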
21179d929f Merge pull request #3616 from yichengq/storage-txn
storage: hold batchTx lock during KV txn
2015-10-05 17:12:52 -07:00
4f2ada3f1e Merge pull request #3643 from xiang90/metrics_storage
storage: add metrics for db total size
2015-10-05 17:11:00 -07:00
0aa2f1192a storage: add metrics for db total size 2015-10-05 16:56:30 -07:00
522ee6ab3a Merge pull request #3635 from yichengq/parse-ipv6
pkg/types: fix unwanted unescape in NewURLsMap
2015-10-05 15:30:51 -07:00
699e37562e Merge pull request #3637 from yichengq/run-snapshot
etcdserver: get existing snapshot instead of requesting one
2015-10-05 14:57:03 -07:00
e117f36e48 pkg/types: fix unwanted unescape in NewURLsMap
We use url.ParseQuery to parse the names-to-urls string, but it has the side
effect of unescaping the string. If the initial-cluster string has an IPv6
address containing `%25`, it will be unescaped to `%`, making further url
parsing fail.

Fix it by modifying the parse process.

Go1.4 doesn't support a literal IPv6 address with a zone in a
URI (https://github.com/golang/go/issues/6530), so we only enable the tests
on Go1.5+.
2015-10-05 14:54:17 -07:00
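The side effect is easy to reproduce with the standard library; a small demonstration (the address is made up):

```
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// %25 is the percent-encoded zone delimiter of a literal IPv6 address.
	s := "infra0=http://[fe80::1%25eth0]:2380"

	vals, _ := url.ParseQuery(s)
	u := vals.Get("infra0")
	fmt.Println(u) // http://[fe80::1%eth0]:2380 -- %25 was unescaped to %

	_, err := url.Parse(u)
	fmt.Println(err) // the bare % now makes URL parsing fail
}
```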
8c94ae0ee3 etcdserver: get existing snapshot instead of requesting one
This fixes the problem that proposals cannot be applied.

When the etcdserver.run loop starts, it expects to get the latest
existing snapshot. It should not attempt to request one, because the loop
is the entity that creates snapshots.
2015-10-05 14:32:16 -07:00
ba949ae2be Merge pull request #3640 from xiang90/watch_metrics
storage: add metrics for watchers
2015-10-05 11:46:50 -07:00
09157d4f1a storage: add metrics for watchers 2015-10-05 11:32:56 -07:00
432b1bc230 Merge pull request #3638 from gyuho/documentation_proxy
Documentation: proxy.md typo, line-breaks
2015-10-05 07:57:22 -07:00
f2dae5a0d2 Documentation: proxy.md typo, line-breaks
1. I found a little typo (easily -> easy)
2. If you go to https://coreos.com/etcd/docs/2.0.9/proxy.html,
   the proxy flag command overflows the width of the web page. Can
   we have line-breaks between flags to make the command easier
   to read?

Thanks for the great documentation!
2015-10-04 12:25:25 -07:00
c97dda766e storage: hold batchTx lock during KV txn
One txn is treated as atomic, and might contain multiple Put/Delete/Range
operations. For now, between these operations, we might call forceCommit
to sync the change to disk, or the backend may commit it in the background.
Thus the snapshot state might contain an unfinished multi-object
transaction, which is dangerous if the database is restored from the snapshot.

This PR makes a KV txn hold the batchTx lock during the process, preventing
a commit from happening.
2015-10-03 16:01:05 -07:00
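The gist, with assumed names: the KV txn holds the backend's batchTx lock from begin to end, so neither forceCommit nor the periodic background commit can persist a half-applied txn.

```
package main

import "sync"

type batchTx struct{ sync.Mutex } // guards buffered writes and commit

type store struct{ tx *batchTx }

// TxnBegin takes the lock that the background commit loop also needs,
// so no commit can observe a partially applied txn.
func (s *store) TxnBegin() { s.tx.Lock() }

// TxnEnd releases it; only now can a commit see the txn as a whole.
func (s *store) TxnEnd() { s.tx.Unlock() }

func main() {
	s := &store{tx: &batchTx{}}
	s.TxnBegin()
	// ... multiple Put/Delete/Range operations, atomic w.r.t. commits ...
	s.TxnEnd()
}
```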
581cc5cff4 Merge pull request #3608 from yichengq/storage-snapshot
storage: update KV.Snapshot function
2015-10-03 15:32:14 -07:00
36f4303fc3 storage/etcdserver: update KV.Snapshot function
When using the Snapshot function, it is expected to:
1. know the size of the snapshot before writing data
2. split the snapshot-ready phase and the write-data phase, so we can cut
the snapshot first and write data later.

Update its interface to fit the requirements of etcdserver.
2015-10-03 10:15:23 -07:00
8c0db94fef Merge pull request #3631 from yichengq/create-snapshot
etcdserver: support to create raft snapshot at apply loop
2015-10-03 10:03:27 -07:00
675d4306b0 Merge pull request #3634 from yichengq/fix-cluster-output
etcdserver: print out correct restored cluster info
2015-10-02 16:23:33 -07:00
18c568bc82 etcdserver: print out correct restored cluster info
Before this PR, it always prints nil because the cluster info has not been
recovered at print time:

```
2015-10-02 14:00:24.353631 I | etcdserver: loaded cluster information
from store: <nil>
```
2015-10-02 16:11:32 -07:00
69ca0b8475 Merge pull request #3633 from xiang90/systemd_readiness
etcdmain: print out error and suggestion for fixing notify issue
2015-10-02 13:56:16 -07:00
51043830d4 etcdmain: print out error and suggestion for fixing notify issue 2015-10-02 13:39:41 -07:00
bfe9502f4f etcdserver: support to create raft snapshot at apply loop
so that snapStore can trigger it to create the latest raft snapshot.
2015-10-02 13:17:56 -07:00
f8a4d1f01b Merge pull request #3607 from xiang90/doc_name
doc: emphasize name should be unique
2015-10-02 12:23:06 -07:00
2733e3f543 doc: emphasize name should be unique 2015-10-02 09:58:20 -07:00
f093559b1d Merge pull request #3632 from mickep76/master
docs/libraries-and-tools: add etcd-rest rest api daemon
2015-10-02 09:49:59 -07:00
4835a411c7 docs/libraries-and-tools: add etcd-rest rest api daemon 2015-10-02 14:55:57 +02:00
ccce61bda9 Merge pull request #3614 from yichengq/snapshot-store
etcdserver: add snapshotStore and raftStorage
2015-10-01 19:35:34 -07:00
f47cbf3073 Merge pull request #3627 from jelmer/typofix
Fix typo: boostrapping -> bootstrapping.
2015-10-01 19:02:47 -07:00
2276328720 etcdserver: add snapshotStore and raftStorage
snapshotStore is the store of snapshots; it supports getting the latest snapshot
and saving incoming snapshots.

raftStorage supports getting the latest snapshot when v3demo is enabled.
2015-10-01 19:00:59 -07:00
d70975e54c Documentation/configuration.md: Fix typo.
boostrapping -> bootstrapping
2015-10-01 20:03:16 +00:00
715fdfb669 Merge pull request #3093 from mwitkow-io/feature/httpd_metrics
add `events` metrics in etcdhttp.
2015-10-01 12:10:58 -07:00
6e9943a037 Merge pull request #3629 from ccding/master
raft: fix typo in doc
2015-10-01 10:11:12 -07:00
b2edf1d24a raft: fix typo in doc 2015-10-01 11:21:23 -05:00
1b2dc1c796 metrics: add events metrics in etcdhttp. 2015-10-01 08:11:42 +01:00
46e5444d93 Merge pull request #3625 from yichengq/fix-race
pkg/transport: fix a data race in TestReadWriteTimeoutDialer
2015-09-30 17:50:48 -07:00
de1a16e0f1 pkg/transport: fix a data race in TestReadWriteTimeoutDialer
Accessing test.T async will cause data race.

Change to use select to coordinate the access of test.T.
2015-09-30 17:29:24 -07:00
036ea58a77 Documentation: 04 snapshot: add example with fleet 2015-09-30 16:35:28 -07:00
b043635868 Documentation: fix-up the kubernetes github URL 2015-09-30 10:58:33 -07:00
533e728b64 Merge pull request #3609 from yichengq/raft-snapshot
raft: kill TODO about behavior when snapshot fails
2015-09-29 19:32:31 -07:00
4c82b481a5 raft: improve behavior when snapshot fails
etcd is going to support incremental snapshots, and at the first stage we
design it to send out at most one snapshot at a time. So when one snapshot is in
flight, a snapshot request will return an error.

When raft fails to get a snapshot while sending MsgSnap, it prints a
related log message and aborts sending this message.
2015-09-29 19:15:15 -07:00
a535cf2cad Merge pull request #3610 from yichengq/load-storage
etcdserver: restore v3 storage when restart
2015-09-29 11:58:38 -07:00
49d262185d Merge pull request #3590 from yichengq/discovery-log
etcdmain: improve log when join discovery fails
2015-09-29 08:02:18 -07:00
33a0df3e33 etcdctl: use a context with -total-timeout in simple commands
Like commit 8ebc933111, this commit lets simple etcdctl commands
use a context with the timeout value passed via -total-timeout.

This commit doesn't change complex commands like watch,
cluster-health, and import, because it is not obvious whether using the
context in those commands is good or not.
2015-09-29 17:23:01 +09:00
5d906a0acc etcdserver: restore v3 storage when restart
To load the previous data.
2015-09-29 00:14:27 -07:00
939aa96a34 etcdmain: improve log when join discovery fails
Before this PR, the log is

```
2015/09/1 13:18:31 etcdmain: client: etcd cluster is unavailable or
misconfigured
```

It is quite hard for people to understand what happened.

Now we print out the exact reason for the failure, and explain the way
to handle it.
2015-09-28 23:23:50 -07:00
783884a04e Merge pull request #3606 from kkaneda/kkaneda/tiny_fix
raft: remove an obsolete TODO comment on 4MB maxMsgSize hard coding
2015-09-28 21:44:45 -07:00
f602767e50 raft: remove an obsolete TODO comment on 4MB maxMsgSize hard coding
The TODO comment was added by 7571b2cd, and it was addressed by d9b5b56c.
2015-09-28 21:31:12 -07:00
6c05a01ec6 Merge pull request #3604 from gyuho/replace_netutil_BasicAuth
etcdhttp/auth: BasicAuth method in standard pkg
2015-09-28 15:55:46 -07:00
6264a41e22 etcd/Getting-etcd: update README to require Go1.4+
Notice `For those wanting to try the very latest version,`
2015-09-28 15:35:09 -07:00
e16f81838b etcdhttp/auth: BasicAuth method in standard pkg
I created a new PR from https://github.com/coreos/etcd/pull/3598.
This is for `TODO: use the standard lib BasicAuth method when we move to
Go 1.4.` [1]. The `BasicAuth` method landed in the Go standard library a year ago. [2]

---
1. https://github.com/coreos/etcd/blob/master/pkg/netutil/netutil.go#L126-L138
2. https://codereview.appspot.com/76540043/
2015-09-28 14:02:55 -07:00
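The standard-library method in question, available since Go 1.4, is (*http.Request).BasicAuth; a small usage sketch with a hypothetical checkPassword helper:

```
package main

import "net/http"

func checkPassword(user, pass string) bool { return false } // placeholder

func authUser(r *http.Request) (string, bool) {
	// BasicAuth parses the Authorization header; ok is false when the
	// header is absent or not valid Basic auth.
	user, pass, ok := r.BasicAuth()
	if !ok {
		return "", false
	}
	return user, checkPassword(user, pass)
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if _, ok := authUser(r); !ok {
			w.Header().Set("WWW-Authenticate", `Basic realm="etcd"`)
			http.Error(w, "Unauthorized", http.StatusUnauthorized)
			return
		}
	})
	http.ListenAndServe("127.0.0.1:8080", nil)
}
```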
7410698761 Merge pull request #3530 from mitake/etcdctl-timeout-v2
etcdctl: use user specified timeout value for entire command execution
2015-09-28 09:45:02 -07:00
8ebc933111 etcdctl: use user specified timeout value for entire command execution
etcdctl should be able to use a user-specified timeout value for
total command execution, not only a per-request timeout. This commit
adds a new option, --total-timeout, to the command. The value passed via
this option is used as the timeout for the entire command execution.

Fixes coreos#3517
2015-09-28 10:31:46 +09:00
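The mechanism is a single context deadline shared by every request a command makes; a sketch using the standard context package (the flag name comes from the commit, the rest is assumed):

```
package main

import (
	"context"
	"flag"
	"fmt"
	"time"
)

func main() {
	totalTimeout := flag.Duration("total-timeout", 5*time.Second,
		"timeout for the entire command execution, not per request")
	flag.Parse()

	// One deadline bounds the whole command; every client call receives
	// this ctx, so retries cannot extend past --total-timeout.
	ctx, cancel := context.WithTimeout(context.Background(), *totalTimeout)
	defer cancel()

	// ... issue requests with ctx here ...
	<-ctx.Done()
	fmt.Println(ctx.Err()) // context.DeadlineExceeded once time is up
}
```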
c645ac23c0 docs: fix link 2015-09-26 17:43:33 -07:00
49d52eaf1e Merge pull request #3596 from xiang90/json_header
etcdhttp: add Content-Type: application/json header to version handler
2015-09-25 15:27:29 -07:00
1226838381 etcdhttp: add Content-Type: application/json header to version handler 2015-09-25 15:14:13 -07:00
c9be719d92 Merge pull request #3579 from gyuho/etcdserver/etcdhttp/httptypes/errors.go-WriteTo-returns-error
httptypes: WriteTo to return error
2015-09-25 14:31:48 -07:00
93edabf85f Merge pull request #3594 from yichengq/exit
etcdmain: exit after print out ErrDuplicateID
2015-09-25 14:28:45 -07:00
dc9a75df1c etcdmain: exit after print out ErrDuplicateID
etcd should exit after logging an unhandlable error.
2015-09-25 14:10:50 -07:00
60a641762b Merge pull request #3593 from xiang90/fix_race
pkg/transport: fix a data race in TestWriteReadTimeoutListener
2015-09-25 10:16:17 -07:00
5d033c22af pkg/transport: fix a data race in TestWriteReadTimeoutListener 2015-09-25 10:02:37 -07:00
dff702b2b8 Merge pull request #3564 from gouyang/master
Improve proxy log for retrying an unavailable endpoint
2015-09-25 10:02:15 -07:00
85f4475f62 httptypes/errors: HTTPError.WriteTo returns error
Squashing all commits into this one
(from https://github.com/coreos/etcd/pull/357).

Thanks,
2015-09-25 08:06:26 -07:00
e35eeeae42 proxy: improve log for retrying an unavailable endpoint
Fixes #3541

Signed-off-by: Guohua ouyang <guohuaouyang@gmail.com>
2015-09-25 07:36:49 +08:00
9de7f24301 Merge pull request #3554 from mitake/reconfig-doc
doc: add a description of -strict-reconfig-check
2015-09-24 08:07:32 -07:00
78791f81a6 doc: add a description of -strict-reconfig-check 2015-09-24 11:44:55 +09:00
0813a0f2d1 Merge pull request #3585 from xiang90/fix_hash
storage: fix hash by iterating kv
2015-09-23 11:39:21 -07:00
385e17583f storage: fix hash by iterating kv 2015-09-23 11:28:33 -07:00
370ce37d32 Merge pull request #3584 from mickep76/master
docs/libraries-and-tools: add etcd-export tool
2015-09-23 09:32:35 -07:00
c1db1338c9 docs/libraries-and-tools: add etcd-export tool 2015-09-23 18:29:11 +02:00
d6db4e6d6b Merge pull request #3577 from gyuho/storage/watchable_store.go-defer-fix
storage/watchable_store: defer to Unlock s.mu
2015-09-23 07:37:29 -07:00
4113509828 storage/watchable_store: defer to Unlock s.mu
New PR from https://github.com/coreos/etcd/pull/3575.
This adds `defer` to `s.mu`; the current code does not `Unlock`
in the correct scope, I think.

(Sorry, I accidentally deleted my fork, so the changes
might not look continuous with my previous pull requests.)
2015-09-22 23:25:07 -07:00
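The shape of the fix, sketched generically: tying the Unlock to function exit with defer means every return path releases s.mu, including early returns.

```
package main

import "sync"

type watchableStore struct {
	mu       sync.Mutex
	unsynced map[int64]struct{}
}

// cancel releases s.mu on every path out of the function; an explicit
// Unlock at the end could be skipped by an early return.
func (s *watchableStore) cancel(id int64) {
	s.mu.Lock()
	defer s.mu.Unlock()

	if _, ok := s.unsynced[id]; !ok {
		return // early return: still unlocks thanks to defer
	}
	delete(s.unsynced, id)
}

func main() {
	s := &watchableStore{unsynced: map[int64]struct{}{1: {}}}
	s.cancel(1)
	s.cancel(2)
}
```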
89acdd6245 Merge pull request #3555 from xiang90/proxy_doc
doc: add proxy promotion doc
2015-09-22 12:59:40 -07:00
932bb76cbb Merge pull request #3570 from yichengq/extend-timeout
integration: extend request timeout
2015-09-22 10:17:13 -07:00
eba8a2ed90 Merge pull request #3566 from xiang90/error_msg
etcdserver: mismatch error uses the same format as the corresponding flag
2015-09-22 07:41:46 -07:00
13cfb4284f Merge pull request #3573 from TheHippo/patch-1
docs/security: fixed command typo
2015-09-22 07:41:30 -07:00
2540a3fb7e etcdserver: mismatch error uses the same format as the corresponding flags 2015-09-21 19:32:10 -07:00
94f3297299 docs/security: fixed command typo
`-peer-client-cert-atuh` should be `-peer-client-cert-auth`
2015-09-22 03:39:29 +02:00
305a0d7ab9 integration: extend request timeout
Extend the request timeout to give the etcd cluster enough time to return a
response.
2015-09-21 16:50:22 -07:00
ea3dbfed60 Merge pull request #3408 from MSamman/extend-auth-api
etcdserver: extend auth api
2015-09-21 11:51:19 -07:00
999b2c6ec2 doc: add proxy promotion doc 2015-09-21 11:47:37 -07:00
6188933c81 Merge pull request #3556 from xiang90/better_error_logging
etcdmain: better logging when user forget to set initial flags
2015-09-21 10:52:34 -07:00
3b70bf87c3 etcdmain: better logging when user forget to set initial flags 2015-09-21 10:43:26 -07:00
574d1b0d46 Merge pull request #3563 from dnaeon/fixes
Fix etcd/client API example
2015-09-21 10:06:41 -07:00
d6459b8b84 client: Fix API example 2015-09-21 19:51:29 +03:00
6ae1f6c6e4 etcdserver: extend auth api
allow recursive query on users and roles to get more detail

Fixes #3278
2015-09-21 00:51:18 -07:00
f3d2b5831c Merge pull request #3558 from yichengq/watch
storage: add tests for RangeEvents and its underlying functions
2015-09-20 23:58:41 -07:00
cbddb8670a Merge pull request #3561 from ceh/raft-doc-typo
raft: fix Node doc typo
2015-09-20 21:52:34 -07:00
b9f22cb69b raft: fix Node doc typo 2015-09-21 06:13:33 +02:00
d72914c36f storage: clarify comment for store.RangeEvents and fix related bugs
Changes to the function:
1. specify the meaning of the startRev and endRev parameters
2. specify the meaning of the returned nextRev

Moreover, it adds unit tests for the function.
2015-09-19 23:17:03 -07:00
5709b66dfb storage: add unit test for index.RangeEvents 2015-09-19 23:08:24 -07:00
87b5143b15 storage: fix missing continue in keyIndex.since
It should continue, to skip the following operations.

The test from rev14 to rev0 fails if it doesn't call continue, and instead
appends all revisions of the same main rev to the list.
2015-09-19 23:01:18 -07:00
158d6e0e03 storage: fix calculating generation in keyIndex.since
It should skip the last empty generation when the key has just been tombstoned.

rev15 and rev16 in the test fail if it doesn't skip the last empty generation
to find previous generations.
2015-09-19 22:58:45 -07:00
06180be154 Merge pull request #3533 from xiang90/proxy
proxy: expose proxy configuration
2015-09-18 14:18:06 -07:00
ac29432aab proxy: add a test for configHandler 2015-09-18 13:43:54 -07:00
0f9b2046ef Merge pull request #3547 from bdarnell/multinode-node-ids
raft: Allow per-group nodeIDs in MultiNode.
2015-09-18 13:29:07 -07:00
b7baaa6bc8 raft: Allow per-group nodeIDs in MultiNode.
This feature is motivated by
https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/replica_tombstone.md
which requires a change to the way CockroachDB constructs its node IDs.
2015-09-18 15:36:36 -04:00
be80d11948 storage: enhance test for keyIndex.Get and keyIndex.Compact
It now covers the case where one key is set multiple times in one main
revision.
2015-09-17 18:26:17 -07:00
cedad49dcf Merge pull request #3543 from mitake/reconfig-remove
etcdserver: forbid removing started member if quorum cannot be preserved in strict reconfig mode
2015-09-17 18:22:53 -07:00
f8859a980d etcdserver: forbid removing started member if quorum cannot be preserved in strict reconfig mode
Like commit 6974fc63ed, this commit lets etcdserver forbid
removing a started member if quorum cannot be preserved after
reconfiguration when the option -strict-reconfig-check is passed to
etcd. Such a removal can cause deadlock if unstarted members have wrong
peer URLs.
2015-09-18 10:09:57 +09:00
c4b3ad72d9 Merge pull request #3544 from xiang90/bench
v3benchmark: add put benchmark
2015-09-17 15:10:13 -07:00
f69582e1a2 v3benchmark: add put benchmark 2015-09-17 14:48:07 -07:00
97b67fdbfc Merge pull request #3548 from yichengq/travis
storage/backend: extend wait timeout for commit to finish
2015-09-16 14:22:46 -07:00
f7efbe8b14 storage/backend: extend wait timeout for commit to finish
It needs to take more time on travis. Fix:

```
--- FAIL: TestBackendBatchIntervalCommit (0.01s)
		backend_test.go:113: bucket test does not exit
```
2015-09-16 14:14:51 -07:00
ec4142576e Merge pull request #3534 from xiang90/grpc_err
etcdserver: better v3 api error handling
2015-09-16 12:32:28 -07:00
804d80387d Merge pull request #3546 from gae123/patch-1
doc: update admin_guide.md for the recent Go1.5 default MAXPROCS change
2015-09-16 10:44:38 -07:00
7fb0eb8f56 doc: update admin_guide.md for the recent Go1.5 default MAXPROCS change 2015-09-16 10:43:40 -07:00
ce47161ae0 Merge pull request #3540 from xiang90/bench
Godep: add cheggaaa dependency
2015-09-15 16:27:07 -07:00
4628d08879 Godep: add cheggaaa dependency 2015-09-15 16:24:31 -07:00
3a2700141e Merge pull request #3539 from xiang90/bench
godep: use github.com/cheggaaa/pb
2015-09-15 16:12:34 -07:00
38dd680f2e godep: use github.com/cheggaaa/pb 2015-09-15 16:08:07 -07:00
8bb50635ce Merge pull request #3538 from xiang90/bench
benchmarkv3: refactoring the main logic
2015-09-15 15:58:32 -07:00
4deb12fbbb benchmarkv3: refactoring the main logic 2015-09-15 15:57:38 -07:00
3221b6787e Merge pull request #3537 from jonboulle/master
*: add missing license headers + test
2015-09-15 14:19:17 -07:00
108f97d63e test: add license header check 2015-09-15 14:09:01 -07:00
7848ac3979 *: add missing license headers 2015-09-15 14:09:01 -07:00
867954f3ad Merge pull request #3535 from xiang90/rev
storage: add rev into kv interface
2015-09-15 12:54:29 -07:00
6d1f0ce89f storage: add rev into kv interface 2015-09-15 12:11:00 -07:00
94f4069a25 etcdserver: better v3 api error handling 2015-09-15 11:20:06 -07:00
e079f87410 proxy: expose proxy configuration 2015-09-15 10:27:51 -07:00
c082488e23 Merge pull request #3507 from yichengq/watch
storage: support basic watch
2015-09-15 00:04:36 -07:00
ec43e0a4c3 storage: introduce WatchableKV and watch feature
WatchableKV is an interface built on KV that supports the watch feature.
2015-09-14 23:53:03 -07:00
34bfac99c4 Merge pull request #3529 from yichengq/snapshot
etcdserver: rename db file into a formal directory
2015-09-14 23:43:17 -07:00
352cd768c6 etcdserver: fix shadow declaration 2015-09-14 23:25:16 -07:00
05c74bd890 etcdserver: rename db file into a formal directory
and give it a formal name
2015-09-14 22:41:40 -07:00
51f1ee055e Merge pull request #3526 from yichengq/snapshot
etcdserver: forbid unsetting v3 demo once used
2015-09-14 21:36:39 -07:00
1f0fb3d9aa etcdserver: forbid unsetting v3 demo once used
Enabling v3 demo may change the underlying data organization of the
v3 store. So we forbid unsetting --experimental-v3demo once it has
been used.
2015-09-14 21:27:11 -07:00
b1c2d7e526 Merge pull request #3528 from xiang90/compact
*: support v3 compaction
2015-09-14 20:04:50 -07:00
94f784826a *: support v3 compaction 2015-09-14 19:59:36 -07:00
e0d8923f7b Merge pull request #3524 from xiang90/grpc_error
etcdserver: use gRPC error instead of error message in header
2015-09-14 16:38:44 -07:00
7183387110 etcdserver: use gRPC error instead of error message in header 2015-09-14 16:11:13 -07:00
d04382c30e Merge pull request #3525 from gyuho/master
etcdserver, store: fix grammars in comments (a->an existing)
2015-09-14 13:47:49 -07:00
c2dcf7431e etcdserver, store: fix grammars in comments (a->an existing)
I found some grammatical errors in comments.

This pull request was originally submitted as https://github.com/coreos/etcd/pull/3513.
I am resubmitting it following the correct guidelines.
2015-09-14 13:41:13 -07:00
1fc122741d Merge pull request #3521 from raoofm/patch-3
doc: faq.md change flag --peers to --endpoint
2015-09-14 09:43:41 -07:00
c7b4c67436 Merge pull request #3514 from xiang90/v3_raft
support clustered v3 api
2015-09-14 09:35:02 -07:00
d685135832 doc: faq.md change flag --peers to --endpoint

Changing the flag to --endpoint and mentioning that --peers is deprecated.
2015-09-14 12:22:06 -04:00
451cce4a90 Merge pull request #3516 from xiang90/hash_improved
storage: support hash state
2015-09-13 21:46:12 -07:00
714b5e0b08 storage: support hash state 2015-09-13 21:34:58 -07:00
cdaa263346 Merge pull request #3506 from philips/improve-tocommit-error
raft: improve panic error message
2015-09-13 17:46:53 -07:00
f8fd2c10d6 Merge pull request #3449 from yichengq/cleanup-max-election
Documentation/tuning: cleanup paragraph on max election
2015-09-13 16:55:16 -07:00
95bb6d7584 Merge pull request #3508 from amarshall/patch-3
readme: Use SVG image for build status badge
2015-09-13 15:21:58 -07:00
40e0a33fcd Merge pull request #3511 from xiang90/v3_raft
Procfile: add a v3DemoProcfile
2015-09-13 08:43:14 -07:00
4c81615cef etcdserver: initial support for cluster-wide v3 request 2015-09-13 08:32:01 -07:00
600456f4ba etcdserverpb: update proto file for raftInternalRequest
We need to assign each raftInternalRequest an ID for getting
the response after it goes through raft.

We also need an empty response for the error case.
2015-09-13 08:28:10 -07:00
ac7253f28e Procfile: add a v3DemoProcfile 2015-09-12 23:08:56 -07:00
662b4966d0 Merge pull request #3510 from xiang90/v3_raft
etcdmain: support gRPC addr flag
2015-09-12 22:58:08 -07:00
a0cfcf2dd7 etcdmain: support gRPC addr flag 2015-09-12 22:52:51 -07:00
35f1531576 Merge pull request #3509 from xiang90/v3_raft
etcdctlv3: support endpoint flag
2015-09-12 22:51:36 -07:00
121d2b9e9d etcdctlv3: support endpoint flag 2015-09-12 22:46:43 -07:00
0894294074 readme: Use SVG image for build status badge
More accessible, better scaling.
2015-09-13 01:12:11 -04:00
0ca800fbac Merge pull request #3479 from mitake/membership
etcdserver: avoid deadlock caused by adding members with wrong peer URLs
2015-09-12 22:09:13 -07:00
dad32646eb etcdserver: enhance test cases for isReadyToAddNewMember
- a case of a cluster with an even number of members
- a case of an empty cluster
2015-09-13 12:30:10 +09:00
d9cf752060 etcdserver: add test for isReadyToAddNewMember
Also fixed the check for the special case of a one-member cluster
2015-09-13 11:16:08 +09:00
6974fc63ed etcdserver: avoid deadlock caused by adding members with wrong peer URLs
etcd's current membership-changing functionality has a problem which
can cause deadlock.

How to produce:
 1. construct an N-node cluster
 2. add N new nodes with etcdctl member add, without starting the new members

What happens:
After adding the N nodes, the total size of the cluster becomes 2 * N
and the quorum size becomes N + 1. This means a membership change
requires at least N + 1 running nodes, because Raft treats membership
information in its log like other ordinary log append requests. For
example, with N = 3 the cluster grows to 6 members with a quorum of 4,
while only 3 members are actually running.

Assume the peer URLs of the added nodes are wrong because of operator
error or bugs in the wrapping program which launches etcd. In such a
case, both adding and removing members are impossible because the
quorum isn't preserved. Of course, ordinary requests cannot be served
either. The cluster is effectively deadlocked.

Of course, the best practice for adding new nodes is to add them one
at a time, letting each node start before adding the next. However,
the effect of this problem is serious, so preventing it forcibly is
valuable.

Solution:
This patch lets etcd forbid adding a new node if the operation changes
the quorum and the new quorum is larger than the number of running
nodes. If etcd is launched with the newly added option
-strict-reconfig-check, the checking logic is activated. If the option
isn't passed, the default reconfiguration behavior is kept.

Fixes https://github.com/coreos/etcd/issues/3477
2015-09-13 09:31:53 +09:00
68d4ec3e13 raft: improve panic error message
Give a human being some insight into how we might have gotten to this
state based on feedback from #3504.
2015-09-12 12:17:02 -07:00
d4e19d1afb Merge pull request #3501 from yichengq/update-peers
docs/admin_guide: use ETCDCTL_ENDPOINT
2015-09-12 08:31:47 -07:00
e9512f8c5f docs/admin_guide: use ETCDCTL_ENDPOINT
because ETCDCTL_PEERS is no longer preferred.
2015-09-11 19:38:55 -07:00
28a371471a Merge pull request #3500 from yichengq/fix-ETCD
libraries-and-tools.md: correct project name to etcd
2015-09-11 19:34:15 -07:00
4e71954111 libraries-and-tools.md: correct project name to etcd
etcd is the official name of the project.
2015-09-11 19:31:40 -07:00
a528cb6f5d Merge pull request #3495 from rekby/patch-2
libraries-and-tools.md: add etcddir
2015-09-11 19:30:03 -07:00
f8f702b3f8 Merge pull request #3497 from jonboulle/master
docs: add official client to libraries-and-tools
2015-09-11 15:17:30 -07:00
fd82f0b8d5 docs: add official client to libraries-and-tools 2015-09-11 15:16:02 -07:00
136efd3ba9 libraries-and-tools.md: add etcddir 2015-09-11 21:04:56 +03:00
c5c3ae4790 Merge pull request #3486 from yichengq/readme
README: warn that master branch is unstable
2015-09-10 19:20:35 -07:00
56d61d995a Merge pull request #3487 from onlyjob/master
Minor spelling corrections (codespell).
2015-09-10 17:46:29 -07:00
b2f4a5f587 *: fix spelling issues (codespell).
Signed-off-by: Dmitry Smirnov <onlyjob@member.fsf.org>
2015-09-11 10:22:29 +10:00
bd5f924b0f README: warn that master branch is unstable
This warns users away from building the master branch when they want stable binaries.
2015-09-10 14:27:10 -07:00
1de63deca4 Merge pull request #3483 from xiang90/update_roadmap
roadmap.md: update roadmap for 2.3
2015-09-10 13:32:30 -07:00
f51c2f471b Merge pull request #3482 from yichengq/client
client: add Nodes type to facilitate sorting
2015-09-10 12:29:12 -07:00
07bd9f65d3 roadmap.md: update roadmap for 2.3 2015-09-10 12:24:48 -07:00
2f558e56d2 client: add Nodes to codecgen and regenerate 2015-09-10 11:51:59 -07:00
eb51901830 client: add Nodes type to facilitate sorting
This helps users sort nodes easily.
2015-09-10 11:03:12 -07:00
48a4d2ccba Documentation/tuning: cleanup paragraph on max election
- Use one sentence per line for easier diffing
- Walk through the thought process and clean up the grammar
- Move below the other sections

Original author: @philips
2015-09-06 00:38:03 -07:00
383 changed files with 38570 additions and 12927 deletions

View File

@ -4,8 +4,10 @@ go:
- 1.4
- 1.5
install:
- go get github.com/barakmich/go-nyet
addons:
apt:
packages:
- libpcap-dev
script:
- INTEGRATION=y ./test
- ./test

View File

@ -35,7 +35,7 @@ Thanks for your contributions!
### Code style
The coding style suggested by the Golang community is used in etcd. See the [style doc](https://code.google.com/p/go-wiki/wiki/CodeReviewComments) for details.
The coding style suggested by the Golang community is used in etcd. See the [style doc](https://github.com/golang/go/wiki/CodeReviewComments) for details.
Please follow this style to make etcd easy to review, maintain and develop.

View File

@ -1,4 +1,4 @@
## Snapshot Migration
# Snapshot Migration
You can migrate a snapshot of your data from a v0.4.9+ cluster into a new etcd 2.2 cluster using a snapshot migration. After snapshot migration, the etcd indexes of your data will change. Many etcd applications rely on these indexes to behave correctly. This operation should only be done while all etcd applications are stopped.
@ -15,7 +15,7 @@ etcdctl --endpoint new_cluster.example.com import --snap backup.snap
```
If you have a large amount of data, you can specify more concurrent workers to copy data in parallel by using the `-c` flag.
If you have hidden keys to copy, you can use `--hidden` flag to specify.
If you have hidden keys to copy, you can use `--hidden` flag to specify. For example fleet uses `/_coreos.com/fleet` so to import those keys use `--hidden /_coreos.com`.
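Putting these options together, a full import invocation might look like the following sketch (the worker count of 10 is an arbitrary assumption, and the endpoint is reused from the example above):

```sh
# Import the snapshot, also copying hidden fleet keys, with 10 concurrent workers.
etcdctl --endpoint new_cluster.example.com import --snap backup.snap --hidden /_coreos.com -c 10
```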
And the data will be copied quickly into the new cluster:

View File

@ -1,8 +1,8 @@
## Administration
# Administration
### Data Directory
## Data Directory
#### Lifecycle
### Lifecycle
When first started, etcd stores its configuration into a data directory specified by the data-dir configuration parameter.
Configuration is stored in the write ahead log and includes: the local member ID, cluster ID, and initial cluster configuration.
@ -20,7 +20,7 @@ Once removed the member can be re-added with an empty data directory.
[remove-a-member]: runtime-configuration.md#remove-a-member
#### Contents
### Contents
The data directory has two sub-directories in it:
@ -32,18 +32,18 @@ If `--wal-dir` flag is set, etcd will write the write ahead log files to the spe
[wal-pkg]: http://godoc.org/github.com/coreos/etcd/wal
[snap-pkg]: http://godoc.org/github.com/coreos/etcd/snap
### Cluster Management
## Cluster Management
#### Lifecycle
### Lifecycle
If you are spinning up multiple clusters for testing it is recommended that you specify a unique initial-cluster-token for the different clusters.
This can protect you from cluster corruption in case of mis-configuration because two members started with different cluster tokens will refuse members from each other.
#### Monitoring
### Monitoring
It is important to monitor your production etcd cluster for health information and runtime metrics.
##### Health Monitoring
#### Health Monitoring
At the lowest level, etcd exposes health information via HTTP at `/health` in JSON format. If it returns `{"health": "true"}`, then the cluster is healthy. Please note the `/health` endpoint is still experimental as of etcd 2.2.
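For example, a quick check against a local member (assuming the default client port) might look like this:

```sh
# Query the experimental health endpoint of a local member.
curl http://127.0.0.1:2379/health
# {"health": "true"}
```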
@ -63,16 +63,16 @@ member fd422379fda50e48 is healthy: got healthy result from http://127.0.0.1:323
cluster is healthy
```
##### Runtime Metrics
#### Runtime Metrics
etcd uses [Prometheus](http://prometheus.io/) for metrics reporting in the server. You can read more through the runtime metrics [doc](metrics.md).
#### Debugging
### Debugging
Debugging a distributed system can be difficult. etcd provides several ways to make debugging
easier.
##### Enabling Debug Logging
#### Enabling Debug Logging
When you want to debug etcd without stopping it, you can enable debug logging at runtime.
etcd exposes logging configuration at `/config/local/log`.
@ -85,7 +85,7 @@ $ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"INFO"}'
$ # debug logging disabled
```
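As a sketch of the reverse operation (assuming the default local client URL), the level can be raised back to DEBUG through the same endpoint:

```sh
# Re-enable debug logging at runtime via the logging configuration endpoint.
curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"DEBUG"}'
```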
##### Debugging Variables
#### Debugging Variables
Debug variables are exposed for real-time debugging purposes. Developers who are familiar with etcd can utilize these variables to debug unexpected behavior. etcd exposes debug variables via HTTP at `/debug/vars` in JSON format. The debug variables contain
`cmdline`, `file_descriptor_limit`, `memstats` and `raft.status`.
@ -107,7 +107,7 @@ Debug variables are exposed for real-time debugging purposes. Developers who are
}
```
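To inspect them, a plain GET against the endpoint is enough (a sketch assuming the default client port):

```sh
# Dump all debug variables as JSON.
curl http://127.0.0.1:2379/debug/vars
```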
#### Optimal Cluster Size
### Optimal Cluster Size
The recommended etcd cluster size is 3, 5 or 7, which is decided by the fault tolerance requirement. A 7-member cluster can provide enough fault tolerance in most cases. While a larger cluster provides better fault tolerance, write performance decreases since data needs to be replicated to more machines.
@ -152,7 +152,7 @@ This example will walk you through the process of migrating the infra1 member to
|infra2|10.0.1.12:2380|
```sh
$ export ETCDCTL_PEERS=http://10.0.1.10:2379,http://10.0.1.11:2379,http://10.0.1.12:2379
$ export ETCDCTL_ENDPOINT=http://10.0.1.10:2379,http://10.0.1.11:2379,http://10.0.1.12:2379
```
```sh
@ -293,6 +293,6 @@ If timeout happens several times continuously, administrators should check statu
#### Maximum OS threads
By default, etcd uses the default configuration of the Go 1.4 runtime, which means that at most one operating system thread will be used to execute code simultaneously. (Note that this default behavior [may change in Go 1.5](https://docs.google.com/document/d/1At2Ls5_fhJQ59kDK2DFVhFu3g5mATSXqqV5QrxinasI/edit)).
By default, etcd uses the default configuration of the Go 1.4 runtime, which means that at most one operating system thread will be used to execute code simultaneously. (Note that this default behavior [has changed in Go 1.5](https://golang.org/doc/go1.5#runtime)).
When using etcd in heavy-load scenarios on machines with multiple cores it will usually be desirable to increase the number of threads that etcd can utilize. To do this, simply set the environment variable `GOMAXPROCS` to the desired number when starting etcd. For more information on this variable, see the Go [runtime](https://golang.org/pkg/runtime) documentation.
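For example, a minimal sketch for a dual-core machine (the core count here is an assumption):

```sh
# Let the Go runtime use up to two OS threads for this etcd process.
GOMAXPROCS=2 etcd
```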

View File

@ -359,7 +359,7 @@ curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=2008'
#### Connection being closed prematurely
The server may close a long polling connection before emitting any events.
This can happend due to a timeout or the server being shutdown.
This can happen due to a timeout or the server being shutdown.
Since the HTTP header is sent immediately upon accepting the connection, the response will be seen as empty: `200 OK` and empty body.
The clients should be prepared to deal with this scenario and retry the watch.
@ -506,7 +506,7 @@ The current comparable conditions are:
2. `prevIndex` - checks the previous modifiedIndex of the key.
3. `prevExist` - checks existence of the key: if `prevExist` is true, it is an `update` request; if prevExist is `false`, it is a `create` request.
3. `prevExist` - checks existence of the key: if `prevExist` is true, it is an `update` request; if `prevExist` is `false`, it is a `create` request.
Here is a simple example.
Let's create a key-value pair first: `foo=one`.

View File

@ -19,7 +19,7 @@ Each role has exactly one associated Permission List. A permission list exists fo
The special static ROOT (named `root`) role has full permissions on all key-value resources, as well as the permission to manage user resources and settings resources. Only the ROOT role has the permission to manage user resources and modify settings resources. The ROOT role is built-in and does not need to be created.
There is also a special GUEST role, named 'guest'. These are the permissions given to unauthenticated requests to etcd. This role will be created automatically, and by default allows access to the full keyspace due to backward compatability. (etcd did not previously authenticate any actions.). This role can be modified by a ROOT role holder at any time, to reduce the capabilities of unauthenticated users.
There is also a special GUEST role, named 'guest'. These are the permissions given to unauthenticated requests to etcd. This role will be created automatically, and by default allows access to the full keyspace due to backward compatibility. (etcd did not previously authenticate any actions.). This role can be modified by a ROOT role holder at any time, to reduce the capabilities of unauthenticated users.
#### Permissions
@ -124,7 +124,7 @@ The User JSON object is formed as follows:
Password is only passed when necessary.
**Get a list of users**
**Get a List of Users**
GET/HEAD /v2/auth/users
@ -137,7 +137,36 @@ GET/HEAD /v2/auth/users
Content-type: application/json
200 Body:
{
"users": ["alice", "bob", "eve"]
"users": [
{
"user": "alice",
"roles": [
{
"role": "root",
"permissions": {
"kv": {
"read": ["*"],
"write": ["*"]
}
}
}
]
},
{
"user": "bob",
"roles": [
{
"role": "guest",
"permissions": {
"kv": {
"read": ["*"],
"write": ["*"]
}
}
}
]
}
]
}
**Get User Details**
@ -155,7 +184,26 @@ GET/HEAD /v2/auth/users/alice
200 Body:
{
"user" : "alice",
"roles" : ["fleet", "etcd"]
"roles" : [
{
"role": "fleet",
"permissions" : {
"kv" : {
"read": [ "/fleet/" ],
"write": [ "/fleet/" ]
}
}
},
{
"role": "etcd",
"permissions" : {
"kv" : {
"read": [ "*" ],
"write": [ "*" ]
}
}
}
]
}
**Create Or Update A User**
@ -213,22 +261,6 @@ A full role structure may look like this. A Permission List structure is used fo
}
```
**Get a list of Roles**
GET/HEAD /v2/auth/roles
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
200 Headers:
Content-type: application/json
200 Body:
{
"roles": ["fleet", "etcd", "quay"]
}
**Get Role Details**
GET/HEAD /v2/auth/roles/fleet
@ -252,6 +284,50 @@ GET/HEAD /v2/auth/roles/fleet
}
}
**Get a list of Roles**
GET/HEAD /v2/auth/roles
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
200 Headers:
Content-type: application/json
200 Body:
{
"roles": [
{
"role": "fleet",
"permissions": {
"kv": {
"read": ["/fleet/"],
"write": ["/fleet/"]
}
}
},
{
"role": "etcd",
"permissions": {
"kv": {
"read": ["*"],
"write": ["*"]
}
}
},
{
"role": "quay",
"permissions": {
"kv": {
"read": ["*"],
"write": ["*"]
}
}
}
]
}
**Create Or Update A Role**
PUT /v2/auth/roles/rkt

View File

@ -134,7 +134,7 @@ $ etcdctl role remove myrolename
## Enabling authentication
The minimal steps to enabling auth follow. The administrator can set up users and roles before or after enabling authentication, as a matter of preference.
The minimal steps to enabling auth are as follows. The administrator can set up users and roles before or after enabling authentication, as a matter of preference.
Make sure the root user is created:

View File

@ -0,0 +1,98 @@
# Storage Memory Usage Benchmark
<!---todo: link storage to storage design doc-->
Two components of etcd storage consume physical memory. The etcd process allocates an *in-memory index* to speed key lookup. The process's *page cache*, managed by the operating system, stores recently-accessed data from disk for quick re-use.
The in-memory index holds all the keys in a [B-tree][btree] data structure, along with pointers to the on-disk data (the values). Each key in the B-tree may contain multiple pointers, pointing to different versions of its values. The theoretical memory consumption of the in-memory index can hence be approximated with the formula:
`N * (c1 + avg_key_size) + N * (avg_versions_of_key) * (c2 + size_of_pointer)`
where `c1` is the key metadata overhead and `c2` is the version metadata overhead.
The graph shows the detailed structure of the in-memory index B-tree.
```
In mem index
+------------+
| key || ... |
+--------------+ | || |
| | +------------+
| | | v1 || ... |
| disk <----------------| || | Tree Node
| | +------------+
| | | v2 || ... |
| <----------------+ || |
| | +------------+
+--------------+ +-----+ | | |
| | | | |
| +------------+
|
|
^
------+
| ... |
| |
+-----+
| ... | Tree Node
| |
+-----+
| ... |
| |
------+
```
[Page cache memory][pagecache] is managed by the operating system and is not covered in detail in this document.
## Testing Environment
etcd version
- git head https://github.com/coreos/etcd/commit/776e9fb7be7eee5e6b58ab977c8887b4fe4d48db
GCE n1-standard-2 machine type
- 7.5 GB memory
- 2x CPUs
## In-memory index memory usage
In this test, we only benchmark the memory usage of the in-memory index. The goal is to find `c1` and `c2` mentioned above and to understand the hard limit of memory consumption of the storage.
We measure memory usage via Go's runtime.ReadMemStats, taking the difference in total allocated bytes before and after creating the index. This cannot perfectly reflect the memory usage of the in-memory index itself, but it shows the rough consumption pattern.
| N | versions | key size | memory usage |
|------|----------|----------|--------------|
| 100K | 1 | 64bytes | 22MB |
| 100K | 5 | 64bytes | 39MB |
| 1M | 1 | 64bytes | 218MB |
| 1M | 5 | 64bytes | 432MB |
| 100K | 1 | 256bytes | 41MB |
| 100K | 5 | 256bytes | 65MB |
| 1M | 1 | 256bytes | 409MB |
| 1M | 5 | 256bytes | 506MB |
Based on the result, we can calculate `c1=120bytes`, `c2=30bytes`. We only need two sets of data to calculate `c1` and `c2`, since they are the only unknown variables in the formula. The values `c1=120bytes` and `c2=30bytes` are the averages over the four sets of `c1` and `c2` we calculated. The key metadata overhead is still relatively nontrivial (50%) for small key-value pairs. However, this is a significant improvement over the old store, which had at least 1000% overhead.
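As a rough sanity check, plugging these constants back into the formula for the 1M-key, single-version, 64-byte row, and assuming 8-byte pointers (our assumption, not a measured value), gives:

```
N * (c1 + avg_key_size) + N * avg_versions * (c2 + size_of_pointer)
  = 1M * (120 + 64) + 1M * 1 * (30 + 8)
  = 184MB + 38MB
  ≈ 222MB
```

which is close to the measured 218MB.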
## Overall memory usage
The overall memory usage captures how much RSS etcd consumes with the storage. The value size should have very little impact on the overall memory usage of etcd, since we keep values on disk and only retain hot values in memory, managed by the OS page cache.
| N | versions | key size | value size | memory usage |
|------|----------|----------|------------|--------------|
| 100K | 1 | 64bytes | 256bytes | 40MB |
| 100K | 5 | 64bytes | 256bytes | 89MB |
| 1M | 1 | 64bytes | 256bytes | 470MB |
| 1M | 5 | 64bytes | 256bytes | 880MB |
| 100K | 1 | 64bytes | 1KB | 102MB |
| 100K | 5 | 64bytes | 1KB | 164MB |
| 1M | 1 | 64bytes | 1KB | 587MB |
| 1M | 5 | 64bytes | 1KB | 836MB |
Based on the result, we know the value size does not significantly impact the memory consumption. There is some minor increase due to more data held in the OS page cache.
[btree]: https://en.wikipedia.org/wiki/B-tree
[pagecache]: https://en.wikipedia.org/wiki/Page_cache

View File

@ -1,6 +1,6 @@
## Branch Management
# Branch Management
### Guide
## Guide
- New development occurs on the [master branch](https://github.com/coreos/etcd/tree/master)
- Master branch should always have a green build!

View File

@ -124,7 +124,7 @@ There are two methods that can be used for discovery:
### etcd Discovery
To better understand the design about discovery service protocol, we suggest you read [this](./discovery_protocol.md).
To better understand the design about discovery service protocol, we suggest you read [this](discovery_protocol.md).
#### Lifetime of a Discovery URL
@ -148,7 +148,7 @@ If you bootstrap an etcd cluster using discovery service with more than the expe
The URL you will use in this case will be `https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83` and the etcd members will use the `https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83` directory for registration as they start.
Each member must have a different name flag specified. Or discovery will fail due to duplicated name.
**Each member must have a different name flag specified. `Hostname` or `machine-id` can be a good choice. Or discovery will fail due to duplicated name.**
Now we start etcd with those relevant flags for each member:
@ -200,7 +200,7 @@ ETCD_DISCOVERY=https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573d
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
Each member must have a different name flag specified. Or discovery will fail due to duplicated name.
**Each member must have a different name flag specified. `Hostname` or `machine-id` can be a good choice. Or discovery will fail due to duplicated name.**
Now we start etcd with those relevant flags for each member:
@ -384,7 +384,7 @@ $ etcd --proxy on -discovery-srv example.com
#### Error Cases
You might see the an error like `cannot find local etcd $name from SRV records.`. That means the etcd member fails to find itself from the cluster defined in SRV records. The resolved address in `-initial-advertise-peer-urls` *must match* one of the resolved addresses in the SRV targets.
You might see an error like `cannot find local etcd $name from SRV records.`. That means the etcd member fails to find itself from the cluster defined in SRV records. The resolved address in `-initial-advertise-peer-urls` *must match* one of the resolved addresses in the SRV targets.
# 0.4 to 2.0+ Migration Guide

View File

@ -1,4 +1,4 @@
## Configuration Flags
# Configuration Flags
etcd is configurable through command-line flags and environment variables. Options set on the command line take precedence over those from the environment.
@ -8,251 +8,256 @@ To start etcd automatically using custom settings at startup in Linux, using a [
[systemd-intro]: http://freedesktop.org/wiki/Software/systemd/
### Member Flags
## Member Flags
##### -name
### -name
+ Human-readable name for this member.
+ default: "default"
+ env variable: ETCD_NAME
+ This value is referenced as this node's own entries listed in the `-initial-cluster` flag (Ex: `default=http://localhost:2380` or `default=http://localhost:2380,default=http://localhost:7001`). This needs to match the key used in the flag if you're using [static boostrapping](clustering.md#static).
+ This value is referenced as this node's own entries listed in the `-initial-cluster` flag (Ex: `default=http://localhost:2380` or `default=http://localhost:2380,default=http://localhost:7001`). This needs to match the key used in the flag if you're using [static bootstrapping](clustering.md#static). When using discovery, each member must have a unique name. `Hostname` or `machine-id` can be a good choice.
##### -data-dir
### -data-dir
+ Path to the data directory.
+ default: "${name}.etcd"
+ env variable: ETCD_DATA_DIR
##### -wal-dir
### -wal-dir
+ Path to the dedicated wal directory. If this flag is set, etcd will write the WAL files to the walDir rather than the dataDir. This allows a dedicated disk to be used, and helps avoid io competition between logging and other IO operations.
+ default: ""
+ env variable: ETCD_WAL_DIR
##### -snapshot-count
### -snapshot-count
+ Number of committed transactions to trigger a snapshot to disk.
+ default: "10000"
+ env variable: ETCD_SNAPSHOT_COUNT
##### -heartbeat-interval
### -heartbeat-interval
+ Time (in milliseconds) of a heartbeat interval.
+ default: "100"
+ env variable: ETCD_HEARTBEAT_INTERVAL
##### -election-timeout
### -election-timeout
+ Time (in milliseconds) for an election to timeout. See [Documentation/tuning.md](tuning.md#time-parameters) for details.
+ default: "1000"
+ env variable: ETCD_ELECTION_TIMEOUT
##### -listen-peer-urls
### -listen-peer-urls
+ List of URLs to listen on for peer traffic. This flag tells etcd to accept incoming requests from its peers on the specified scheme://IP:port combinations. Scheme can be either http or https. If 0.0.0.0 is specified as the IP, etcd listens to the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. etcd will respond to requests from any of the listed addresses and ports.
+ default: "http://localhost:2380,http://localhost:7001"
+ env variable: ETCD_LISTEN_PEER_URLS
+ example: "http://10.0.0.1:2380"
+ invalid example: "http://example.com:2380" (domain name is invalid for binding)
##### -listen-client-urls
### -listen-client-urls
+ List of URLs to listen on for client traffic. This flag tells etcd to accept incoming requests from the clients on the specified scheme://IP:port combinations. Scheme can be either http or https. If 0.0.0.0 is specified as the IP, etcd listens to the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. etcd will respond to requests from any of the listed addresses and ports.
+ default: "http://localhost:2379,http://localhost:4001"
+ env variable: ETCD_LISTEN_CLIENT_URLS
+ example: "http://10.0.0.1:2379"
+ invalid example: "http://example.com:2379" (domain name is invalid for binding)
##### -max-snapshots
### -max-snapshots
+ Maximum number of snapshot files to retain (0 is unlimited)
+ default: 5
+ env variable: ETCD_MAX_SNAPSHOTS
+ The default for users on Windows is unlimited, and manual purging down to 5 (or your preference for safety) is recommended.
##### -max-wals
### -max-wals
+ Maximum number of wal files to retain (0 is unlimited)
+ default: 5
+ env variable: ETCD_MAX_WALS
+ The default for users on Windows is unlimited, and manual purging down to 5 (or your preference for safety) is recommended.
##### -cors
### -cors
+ Comma-separated white list of origins for CORS (cross-origin resource sharing).
+ default: none
+ env variable: ETCD_CORS
### Clustering Flags
## Clustering Flags
`-initial` prefix flags are used in bootstrapping ([static bootstrap][build-cluster], [discovery-service bootstrap][discovery] or [runtime reconfiguration][reconfig]) a new member, and ignored when restarting an existing member.
`-discovery` prefix flags need to be set when using [discovery service][discovery].
##### -initial-advertise-peer-urls
### -initial-advertise-peer-urls
+ List of this member's peer URLs to advertise to the rest of the cluster. These addresses are used for communicating etcd data around the cluster. At least one must be routable to all cluster members. These URLs can contain domain names.
+ default: "http://localhost:2380,http://localhost:7001"
+ env variable: ETCD_INITIAL_ADVERTISE_PEER_URLS
+ example: "http://example.com:2380, http://10.0.0.1:2380"
##### -initial-cluster
### -initial-cluster
+ Initial cluster configuration for bootstrapping.
+ default: "default=http://localhost:2380,default=http://localhost:7001"
+ env variable: ETCD_INITIAL_CLUSTER
+ The key is the value of the `-name` flag for each node provided. The default uses `default` for the key because this is the default for the `-name` flag.
##### -initial-cluster-state
### -initial-cluster-state
+ Initial cluster state ("new" or "existing"). Set to `new` for all members present during initial static or DNS bootstrapping. If this option is set to `existing`, etcd will attempt to join the existing cluster. If the wrong value is set, etcd will attempt to start but fail safely.
+ default: "new"
+ env variable: ETCD_INITIAL_CLUSTER_STATE
[static bootstrap]: clustering.md#static
##### -initial-cluster-token
### -initial-cluster-token
+ Initial cluster token for the etcd cluster during bootstrap.
+ default: "etcd-cluster"
+ env variable: ETCD_INITIAL_CLUSTER_TOKEN
##### -advertise-client-urls
### -advertise-client-urls
+ List of this member's client URLs to advertise to the rest of the cluster. These URLs can contain domain names.
+ default: "http://localhost:2379,http://localhost:4001"
+ env variable: ETCD_ADVERTISE_CLIENT_URLS
+ example: "http://example.com:2379, http://10.0.0.1:2379"
+ Be careful if you are advertising URLs such as http://localhost:2379 from a cluster member and are using the proxy feature of etcd. This will cause loops, because the proxy will be forwarding requests to itself until its resources (memory, file descriptors) are eventually depleted.
##### -discovery
### -discovery
+ Discovery URL used to bootstrap the cluster.
+ default: none
+ env variable: ETCD_DISCOVERY
##### -discovery-srv
### -discovery-srv
+ DNS srv domain used to bootstrap the cluster.
+ default: none
+ env variable: ETCD_DISCOVERY_SRV
##### -discovery-fallback
### -discovery-fallback
+ Expected behavior ("exit" or "proxy") when the discovery service fails.
+ default: "proxy"
+ env variable: ETCD_DISCOVERY_FALLBACK
##### -discovery-proxy
### -discovery-proxy
+ HTTP proxy to use for traffic to discovery service.
+ default: none
+ env variable: ETCD_DISCOVERY_PROXY
### Proxy Flags
### -strict-reconfig-check
+ Reject reconfiguration requests that would cause quorum loss.
+ default: false
+ env variable: ETCD_STRICT_RECONFIG_CHECK
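A minimal sketch of enabling the check at startup (all other flags omitted for brevity):

```sh
# Reject member add/remove requests that would break quorum among started members.
etcd -strict-reconfig-check
```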
## Proxy Flags
`-proxy` prefix flags configure etcd to run in [proxy mode][proxy].
##### -proxy
### -proxy
+ Proxy mode setting ("off", "readonly" or "on").
+ default: "off"
+ env variable: ETCD_PROXY
##### -proxy-failure-wait
### -proxy-failure-wait
+ Time (in milliseconds) an endpoint will be held in a failed state before being reconsidered for proxied requests.
+ default: 5000
+ env variable: ETCD_PROXY_FAILURE_WAIT
##### -proxy-refresh-interval
### -proxy-refresh-interval
+ Time (in milliseconds) of the endpoints refresh interval.
+ default: 30000
+ env variable: ETCD_PROXY_REFRESH_INTERVAL
##### -proxy-dial-timeout
### -proxy-dial-timeout
+ Time (in milliseconds) for a dial to timeout or 0 to disable the timeout.
+ default: 1000
+ env variable: ETCD_PROXY_DIAL_TIMEOUT
##### -proxy-write-timeout
### -proxy-write-timeout
+ Time (in milliseconds) for a write to timeout or 0 to disable the timeout.
+ default: 5000
+ env variable: ETCD_PROXY_WRITE_TIMEOUT
##### -proxy-read-timeout
### -proxy-read-timeout
+ Time (in milliseconds) for a read to timeout or 0 to disable the timeout.
+ Don't change this value if you use watches, because they use long polling requests.
+ default: 0
+ env variable: ETCD_PROXY_READ_TIMEOUT
### Security Flags
## Security Flags
The security flags help to [build a secure etcd cluster][security].
##### -ca-file [DEPRECATED]
### -ca-file [DEPRECATED]
+ Path to the client server TLS CA file. `-ca-file ca.crt` could be replaced by `-trusted-ca-file ca.crt -client-cert-auth` and etcd will behave the same.
+ default: none
+ env variable: ETCD_CA_FILE
##### -cert-file
### -cert-file
+ Path to the client server TLS cert file.
+ default: none
+ env variable: ETCD_CERT_FILE
##### -key-file
### -key-file
+ Path to the client server TLS key file.
+ default: none
+ env variable: ETCD_KEY_FILE
##### -client-cert-auth
### -client-cert-auth
+ Enable client cert authentication.
+ default: false
+ env variable: ETCD_CLIENT_CERT_AUTH
##### -trusted-ca-file
### -trusted-ca-file
+ Path to the client server TLS trusted CA key file.
+ default: none
+ env variable: ETCD_TRUSTED_CA_FILE
##### -peer-ca-file [DEPRECATED]
### -peer-ca-file [DEPRECATED]
+ Path to the peer server TLS CA file. `-peer-ca-file ca.crt` could be replaced by `-peer-trusted-ca-file ca.crt -peer-client-cert-auth` and etcd will behave the same.
+ default: none
+ env variable: ETCD_PEER_CA_FILE
##### -peer-cert-file
### -peer-cert-file
+ Path to the peer server TLS cert file.
+ default: none
+ env variable: ETCD_PEER_CERT_FILE
##### -peer-key-file
### -peer-key-file
+ Path to the peer server TLS key file.
+ default: none
+ env variable: ETCD_PEER_KEY_FILE
##### -peer-client-cert-auth
### -peer-client-cert-auth
+ Enable peer client cert authentication.
+ default: false
+ env variable: ETCD_PEER_CLIENT_CERT_AUTH
##### -peer-trusted-ca-file
### -peer-trusted-ca-file
+ Path to the peer server TLS trusted CA file.
+ default: none
+ env variable: ETCD_PEER_TRUSTED_CA_FILE
### Logging Flags
## Logging Flags
##### -debug
### -debug
+ Drop the default log level to DEBUG for all subpackages.
+ default: false (INFO for all packages)
+ env variable: ETCD_DEBUG
##### -log-package-levels
### -log-package-levels
+ Set individual etcd subpackages to specific log levels. An example being `etcdserver=WARNING,security=DEBUG`
+ default: none (INFO for all packages)
+ env variable: ETCD_LOG_PACKAGE_LEVELS
### Unsafe Flags
## Unsafe Flags
Please be CAUTIOUS when using unsafe flags because they will break the guarantees given by the consensus protocol.
For example, etcd may panic if other members in the cluster are still alive.
Follow the instructions carefully when using these flags.
##### -force-new-cluster
+ Force to create a new one-member cluster. It commits configuration changes in force to remove all existing members in the cluster and add itself. It needs to be set to [restore a backup][restore].
### -force-new-cluster
+ Force to create a new one-member cluster. It commits configuration changes forcing to remove all existing members in the cluster and add itself. It needs to be set to [restore a backup][restore].
+ default: false
+ env variable: ETCD_FORCE_NEW_CLUSTER
### Experimental Flags
## Experimental Flags
##### -experimental-v3demo
### -experimental-v3demo
+ Enable experimental [v3 demo API](rfc/v3api.proto).
+ default: false
+ env variable: ETCD_EXPERIMENTAL_V3DEMO
### Miscellaneous Flags
## Miscellaneous Flags
##### -version
### -version
+ Print the version and exit.
+ default: false

View File

@ -4,7 +4,7 @@ Discovery service protocol helps a new etcd member discover all other members i
Discovery service protocol is _only_ used in cluster bootstrap phase, and cannot be used for runtime reconfiguration or cluster monitoring.
The protocol uses a new discovery token to bootstrap one _unique_ etcd cluster. Remember that one discovery token can represent only one etcd cluster. As long as discovery protocol on this token starts, even if fails halfway, it must not be used to bootstrap another etcd cluster.
The protocol uses a new discovery token to bootstrap one _unique_ etcd cluster. Remember that one discovery token can represent only one etcd cluster. As long as discovery protocol on this token starts, even if it fails halfway, it must not be used to bootstrap another etcd cluster.
The rest of this article will walk through the discovery process with examples that correspond to a self-hosted discovery cluster. The public discovery service, discovery.etcd.io, functions the same way, but with a layer of polish to abstract away ugly URLs, generate UUIDs automatically, and provide some protections against excessive requests. At its core, the public discovery service still uses an etcd cluster as the data store as described in this document.

View File

@ -1,4 +1,4 @@
Error Code
# Error Code
======
This document describes the error codes used in key space '/v2/keys'. Feel free to import 'github.com/coreos/etcd/error' to use them.

View File

@ -37,7 +37,7 @@ timeout.
A proxy is a redirection server to the etcd cluster. The proxy handles the
redirection of a client to the current configuration of the etcd cluster. A
typical usecase is to start a proxy on a machine, and on first boot up of the
typical use case is to start a proxy on a machine, and on first boot up of the
proxy specify both the `--proxy` flag and the `--initial-cluster` flag.
From there, any etcdctl client that starts up automatically speaks to the local
@ -57,24 +57,26 @@ and their integration with the reconfiguration API.
Thus, a member that is down, even indefinitely, will never be automatically
removed from the etcd cluster member list.
This makes sense because its usually an application level / administrative
This makes sense because it's usually an application level / administrative
action to determine whether a reconfiguration should happen based on health.
For more information, refer to [Documentation/runtime-reconfiguration.md].
For more information, refer to
[Documentation/runtime-reconf-design.md](https://github.com/coreos/etcd/blob/master/Documentation/runtime-reconf-design.md).
## 6) how does --peers work with etcdctl?
## 6) how does --endpoint work with etcdctl?
The `--peers` flag can specify any number of etcd cluster members in a comma
The `--endpoint` flag can specify any number of etcd cluster members in a comma
separated list. This list might be a subset, equal to, or more than the actual
etcd cluster member list itself.
If only one peer is specified via the `--peers` flag, the etcdctl discovers the
If only one peer is specified via the `--endpoint` flag, the etcdctl discovers the
rest of the cluster via the member list of that one peer, and then it randomly
chooses a member to use. Again, the client can use the `quorum=true` flag on
reads, which will always fail when using a member in the minority.
If peers from multiple clusters are specified via the `--peers` flag, etcdctl
If peers from multiple clusters are specified via the `--endpoint` flag, etcdctl
will randomly choose a peer, and the request will simply get routed to one of
the clusters. This is probably not what you want.
Note: the --peers flag is now deprecated and --endpoint should be used instead,
as the old name might confuse users into giving etcdctl a peer URL.
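As an illustrative sketch (the addresses are placeholders), pointing etcdctl at two members of the same cluster looks like:

```sh
# etcdctl discovers the remaining members from whichever endpoint responds.
etcdctl --endpoint http://10.0.1.10:2379,http://10.0.1.11:2379 member list
```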

View File

@ -1,35 +1,35 @@
## Glossary
# Glossary
This document defines the various terms used in etcd documentation, command line and source code.
### Node
## Node
Node is an instance of raft state machine.
It has a unique identifier, and records other nodes' progress internally when it is the leader.
### Member
## Member
Member is an instance of etcd. It hosts a node, and provides service to clients.
### Cluster
## Cluster
A cluster consists of several members.
The node in each member follows the raft consensus protocol to replicate logs. The cluster receives proposals from members, commits them and applies them to the local store.
### Peer
## Peer
Peer is another member of the same cluster.
### Proposal
## Proposal
A proposal is a request (for example a write request, or a configuration change request) that needs to go through the raft protocol.
### Client
## Client
Client is a caller of the cluster's HTTP API.
### Machine (deprecated)
## Machine (deprecated)
The predecessor of Member in etcd before 2.0

View File

@ -46,7 +46,7 @@ ExecStart=/usr/bin/etcd
There are several error cases:
0) Init has already ran and the data directory is already configured
0) Init has already run and the data directory is already configured
1) Discovery fails because of network timeout, etc
2) Discovery fails because the cluster is already full and etcd needs to fall back to proxy
3) Static cluster configuration fails because of conflict, misconfiguration or timeout

View File

@ -1,19 +1,24 @@
## Libraries and Tools
# Libraries and Tools
**Tools**
- [etcdctl](https://github.com/coreos/etcdctl) - A command line client for etcd
- [etcdctl](https://github.com/coreos/etcd/tree/master/etcdctl) - A command line client for etcd
- [etcd-backup](https://github.com/fanhattan/etcd-backup) - A powerful command line utility for dumping/restoring etcd - Supports v2
- [etcd-dump](https://npmjs.org/package/etcd-dump) - Command line utility for dumping/restoring etcd.
- [etcd-fs](https://github.com/xetorthio/etcd-fs) - FUSE filesystem for etcd
- [etcddir](https://github.com/rekby/etcddir) - Realtime sync between etcd and a local directory. Works with Windows and Linux.
- [etcd-browser](https://github.com/henszey/etcd-browser) - A web-based key/value editor for etcd using AngularJS
- [etcd-lock](https://github.com/datawisesystems/etcd-lock) - Master election & distributed r/w lock implementation using etcd - Supports v2
- [etcd-console](https://github.com/matishsiao/etcd-console) - A web-based key/value editor for etcd using PHP
- [etcd-viewer](https://github.com/nikfoundas/etcd-viewer) - An etcd key-value store editor/viewer written in Java
- [etcd-export](https://github.com/mickep76/etcd-export) - Export/Import etcd directory as JSON/YAML/TOML and Validate directory using JSON schema
- [etcd-rest](https://github.com/mickep76/etcd-rest) - Create generic REST API in Go using etcd as a backend with validation using JSON schema
- [etcdsh](https://github.com/kamilhark/etcdsh) - A command line client with support of command history and tab completion. Supports v2
**Go libraries**
- [go-etcd](https://github.com/coreos/go-etcd) - Supports v2
- [etcd/client](https://github.com/coreos/etcd/blob/master/client) - the officially maintained Go client
- [go-etcd](https://github.com/coreos/go-etcd) - the deprecated official client. May be useful for older (<2.0.0) versions of etcd.
**Java libraries**
@ -49,6 +54,7 @@
**C++ libraries**
- [edwardcapriolo/etcdcpp](https://github.com/edwardcapriolo/etcdcpp) - Supports v2
- [suryanathan/etcdcpp](https://github.com/suryanathan/etcdcpp) - Supports v2 (with waits)
**Clojure libraries**
@ -111,7 +117,7 @@ A detailed recap of client functionalities can be found in the [clients compatib
- [configdb](https://git.autistici.org/ai/configdb/tree/master) - A REST relational abstraction on top of arbitrary database backends, aimed at storing configs and inventories.
- [scrz](https://github.com/scrz/scrz) - Container manager, stores configuration in etcd.
- [fleet](https://github.com/coreos/fleet) - Distributed init system
- [GoogleCloudPlatform/kubernetes](https://github.com/GoogleCloudPlatform/kubernetes) - Container cluster manager.
- [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) - Container cluster manager introduced by Google.
- [mailgun/vulcand](https://github.com/mailgun/vulcand) - HTTP proxy that uses etcd as a configuration backend.
- [duedil-ltd/discodns](https://github.com/duedil-ltd/discodns) - Simple DNS nameserver using etcd as a database for names and records.
- [skynetservices/skydns](https://github.com/skynetservices/skydns) - RFC compliant DNS server

View File

@ -1,19 +1,19 @@
## Metrics
# Metrics
**NOTE: The metrics feature is considered as an experimental. We might add/change/remove metrics without warning in the future releases.**
**NOTE: The metrics feature is considered experimental. We may add/change/remove metrics without warning in future releases.**
etcd uses [Prometheus](http://prometheus.io/) for metrics reporting in the server. The metrics can be used for real-time monitoring and debugging.
The simplest way to see the available metrics is to cURL the metrics endpoint `/metrics` of etcd. The format is described [here](http://prometheus.io/docs/instrumenting/exposition_formats/).
You can also follow the doc [here](http://prometheus.io/docs/introduction/getting_started/) to start a Promethus server and monitor etcd metrics.
You can also follow the doc [here](http://prometheus.io/docs/introduction/getting_started/) to start a Prometheus server and monitor etcd metrics.
The naming of metrics follows the suggested [best practice of Promethus](http://prometheus.io/docs/practices/naming/). A metric name has an `etcd` prefix as its namespace and a subsystem prefix (for example `wal` and `etcdserver`).
The naming of metrics follows the suggested [best practice of Prometheus](http://prometheus.io/docs/practices/naming/). A metric name has an `etcd` prefix as its namespace and a subsystem prefix (for example `wal` and `etcdserver`).
etcd now exposes the following metrics:
### etcdserver
## etcdserver
| Name | Description | Type |
|-----------------------------------------|--------------------------------------------------|---------|
@ -30,46 +30,7 @@ Pending proposal (`pending_proposal_total`) gives you an idea about how many pro
Failed proposals (`proposal_failed_total`) are normally related to two issues: temporary failures related to a leader election or longer duration downtime caused by a loss of quorum in the cluster.
### store
These metrics describe the accesses into the data store of etcd members that exist in the cluster. They
are useful to count what kind of actions are taken by users. It is also useful to see whether all etcd members
"see" the same set of data mutations, and whether reads and watches (which are local) are equally distributed.
All these metrics are prefixed with `etcd_store_`.
| Name | Description | Type |
|---------------------------|------------------------------------------------------------------------------------------|--------------------|
| reads_total | Total number of reads from store, should differ among etcd members (local reads). | Counter(action) |
| writes_total | Total number of writes to store, should be same among all etcd members. | Counter(action) |
| reads_failed_total | Number of failed reads from store (e.g. key missing) on local reads. | Counter(action) |
| writes_failed_total | Number of failed writes to store (e.g. failed compare and swap). | Counter(action) |
| expires_total | Total number of expired keys (due to TTL).   | Counter |
| watch_requests_totals | Total number of incoming watch requests to this etcd member (local watches). | Counter |
| watchers | Current count of active watchers on this etcd member. | Gauge |
Both `reads_total` and `writes_total` count both successful and failed requests. `reads_failed_total` and
`writes_failed_total` count failed requests. A lot of failed writes indicate possible contentions on keys (e.g. when
doing `compareAndSet`), and read failures indicate that some clients try to access keys that don't exist.
Example Prometheus queries that may be useful from these metrics (across all etcd members):
* `sum(rate(etcd_store_reads_total{job="etcd"}[1m])) by (action)`
`max(rate(etcd_store_writes_total{job="etcd"}[1m])) by (action)`
Rate of reads and writes by action, across all servers across a time window of `1m`. The reason why `max` is used
for writes as opposed to `sum` for reads is because all etcd nodes in the cluster apply all writes to their stores.
Shows the rate of successful readonly/write queries across all servers, across a time window of `1m`.
* `sum(rate(etcd_store_watch_requests_total{job="etcd"}[1m]))`
Shows rate of new watch requests per second. Likely driven by how often watched keys change.
* `sum(etcd_store_watchers{job="etcd"})`
Number of active watchers across all etcd servers.
### wal
## wal
| Name | Description | Type |
|------------------------------------|--------------------------------------------------|---------|
@ -78,7 +39,39 @@ Example Prometheus queries that may be useful from these metrics (across all etc
Abnormally high fsync duration (`fsync_durations_microseconds`) indicates disk issues and might cause the cluster to be unstable.
### snapshot
## http requests
These metrics describe the serving of requests (non-watch events) served by etcd members in non-proxy mode: total
incoming requests, request failures and processing latency (including raft rounds for storage). They are useful for tracking
user-generated traffic hitting the etcd cluster.
All these metrics are prefixed with `etcd_http_`.
| Name | Description | Type |
|--------------------------------|-----------------------------------------------------------------------------------------|--------------------|
| received_total | Total number of events after parsing and auth. | Counter(method) |
| failed_total | Total number of failed events.   | Counter(method,error) |
| successful_duration_second | Bucketed handling times of the requests, including raft rounds for writes. | Histogram(method) |
Example Prometheus queries that may be useful from these metrics (across all etcd members):
* `sum(rate(etcd_http_failed_total{job="etcd"}[1m])) by (method) / sum(rate(etcd_http_received_total{job="etcd"}[1m])) by (method)`
Shows the fraction of events that failed by HTTP method across all members, across a time window of `1m`.
* `sum(rate(etcd_http_received_total{job="etcd",method="GET"}[1m])) by (method)`
`sum(rate(etcd_http_received_total{job="etcd",method!="GET"}[1m])) by (method)`
Shows the rate of successful readonly/write queries across all servers, across a time window of `1m`.
* `histogram_quantile(0.9, sum(increase(etcd_http_successful_duration_second_bucket{job="etcd",method="GET"}[5m])) by (le))`
`histogram_quantile(0.9, sum(increase(etcd_http_successful_duration_second_bucket{job="etcd",method!="GET"}[5m])) by (le))`
Shows the 90th-percentile latency (in seconds) of read/write (respectively) event handling across all members, with a window of `5m`.
## snapshot
| Name | Description | Type |
|--------------------------------------------|------------------------------------------------------------|---------|
@ -87,7 +80,7 @@ Abnormally high fsync duration (`fsync_durations_microseconds`) indicates disk i
Abnormally high snapshot duration (`snapshot_save_total_durations_microseconds`) indicates disk issues and might cause the cluster to be unstable.
### rafthttp
## rafthttp
| Name | Description | Type | Labels |
|-----------------------------------|--------------------------------------------|---------|--------------------------------|
@ -106,7 +99,7 @@ Label `msgType` is the type of raft message. `MsgApp` is log replication message
Label `remoteID` is the member ID of the message destination.
### proxy
## proxy
etcd members operating in proxy mode do not perform store operations. They forward all requests
to cluster instances.
@ -134,4 +127,4 @@ Example Prometheus queries that may be useful from these metrics (across all etc
* `sum(rate(etcd_proxy_dropped_total{job="etcd"}[1m])) by (proxying_error)`
Number of failed requests on the proxy. This should be 0; spikes here indicate connectivity issues to the etcd cluster.

View File

@ -1,4 +1,4 @@
## Members API
# Members API
* [List members](#list-members)
* [Add a member](#add-a-member)
@ -103,7 +103,7 @@ Change the peer urls of a given member. The member ID must be a hex-encoded uint
If the request body is malformed an HTTP 400 will be returned. If the member does not exist in the cluster an HTTP 404 will be returned. If any of the given peerURLs exists in the cluster an HTTP 409 will be returned. If the cluster fails to process the request within timeout an HTTP 500 will be returned, though the request may be processed later.
#### Request
### Request
```
PUT /v2/members/<id> HTTP/1.1
@ -111,7 +111,7 @@ PUT /v2/members/<id> HTTP/1.1
{"peerURLs": ["http://10.0.0.10:2380"]}
```
#### Example
### Example
```sh
curl http://10.0.0.10:2379/v2/members/272e204152 -XPUT \

View File

@ -1,4 +1,6 @@
# etcd in Production
etcd is being used successfully by many companies in production. It is,
however, under active development and systems like etcd are difficult to get
correct. If you are comfortable with bleeding-edge software please use etcd and
however, under active development, and systems like etcd are difficult to get
correct. If you are comfortable with bleeding-edge software, please use etcd and
provide us with the feedback and testing young software needs.

View File

@ -1,6 +1,6 @@
## Proxy
# Proxy
etcd can now run as a transparent proxy. Running etcd as a proxy allows for easily discovery of etcd within your infrastructure, since it can run on each machine as a local service. In this mode, etcd acts as a reverse proxy and forwards client requests to an active etcd cluster. The etcd proxy does not participate in the consensus replication of the etcd cluster, thus it neither increases the resilience nor decreases the write performance of the etcd cluster.
etcd can now run as a transparent proxy. Running etcd as a proxy allows for easy discovery of etcd within your infrastructure, since it can run on each machine as a local service. In this mode, etcd acts as a reverse proxy and forwards client requests to an active etcd cluster. The etcd proxy does not participate in the consensus replication of the etcd cluster, thus it neither increases the resilience nor decreases the write performance of the etcd cluster.
etcd currently supports two proxy modes: `readwrite` and `readonly`. The default mode is `readwrite`, which forwards both read and write requests to the etcd cluster. A `readonly` etcd proxy only forwards read requests to the etcd cluster, and returns `HTTP 501` to all write requests.
@ -8,30 +8,114 @@ The proxy will shuffle the list of cluster members periodically to avoid sending
The member list used by the proxy consists of all client URLs advertised within the cluster, as specified in each member's `-advertise-client-urls` flag. If this flag is set incorrectly, requests sent to the proxy are forwarded to wrong addresses and then fail. Including URLs in the `-advertise-client-urls` flag that point to the proxy itself, e.g. http://localhost:2379, is even more problematic, as it will cause loops: the proxy keeps trying to forward requests to itself until its resources (memory, file descriptors) are eventually depleted. The fix for this problem is to restart the etcd member with a correct `-advertise-client-urls` flag. After the proxy's client URL list is recalculated, which happens every 30 seconds, requests will be forwarded correctly.
### Using an etcd proxy
## Using an etcd proxy
To start etcd in proxy mode, you need to provide three flags: `proxy`, `listen-client-urls`, and `initial-cluster` (or `discovery`).
To start a readwrite proxy, set `-proxy on`; to start a readonly proxy, set `-proxy readonly`.
The proxy will listen on `listen-client-urls` and forward requests to the etcd cluster discovered from the `initial-cluster` flag or the `discovery` URL.
#### Start an etcd proxy with a static configuration
### Start an etcd proxy with a static configuration
To start a proxy that will connect to a statically defined etcd cluster, specify the `initial-cluster` flag:
```
etcd -proxy on -listen-client-urls http://127.0.0.1:8080 -initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380
etcd -proxy on \
-listen-client-urls http://127.0.0.1:8080 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380
```
#### Start an etcd proxy with the discovery service
### Start an etcd proxy with the discovery service
If you bootstrap an etcd cluster using the [discovery service][discovery-service], you can also start the proxy with the same `discovery`.
To start a proxy using the discovery service, specify the `discovery` flag. The proxy will wait until the etcd cluster defined at the `discovery` url finishes bootstrapping, and then start to forward the requests.
```
etcd -proxy on -listen-client-urls http://127.0.0.1:8080 -discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
etcd -proxy on \
-listen-client-urls http://127.0.0.1:8080 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
#### Fallback to proxy mode with discovery service
## Fallback to proxy mode with discovery service
If you bootstrap an etcd cluster using the [discovery service][discovery-service] with more than the expected number of etcd members, the extra etcd processes will fall back to being `readwrite` proxies by default. They will forward requests to the cluster as described above. For example, if you create a discovery url with `size=5`, and start ten etcd processes using that same discovery url, the result will be a cluster with five etcd members and five proxies. Note that this behaviour can be disabled with the `proxy-fallback` flag.
## Promote a proxy to a member of etcd cluster
A proxy is a part of the etcd cluster that does not participate in consensus. A proxy will never automatically promote itself to an etcd member that participates in consensus.
If you want to promote a proxy to an etcd member, there are four steps you need to follow:
- use etcdctl to add the proxy node as an etcd member into the existing cluster
- stop the etcd proxy process or service
- remove the existing proxy data directory
- restart the etcd process with new member configuration
## Example
We assume you have a one-member etcd cluster with one proxy. The cluster information is listed below:
|Name|Address|
|------|---------|
|infra0|10.0.1.10|
|proxy0|10.0.1.11|
This example walks you through promoting a proxy to an etcd member. The cluster will become a two-member cluster after finishing the four steps.
### Add a new member into the existing cluster
First, use etcdctl to add the member to the cluster, which will output the environment variables needed to correctly configure the new member:
``` bash
$ etcdctl -endpoint http://10.0.1.10:2379 member add infra1 http://10.0.1.11:2380
added member 9bf1b35fc7761a23 to cluster
ETCD_NAME="infra1"
ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380"
ETCD_INITIAL_CLUSTER_STATE=existing
```
### Stop the proxy process
Stop the existing proxy so we can wipe its state on disk and reload it with the new configuration:
``` bash
ps aux | grep etcd
kill %etcd_proxy_pid%
```
or (if you are running etcd proxy as etcd service under systemd)
``` bash
sudo systemctl stop etcd
```
### Remove the existing proxy data dir
``` bash
rm -rf %data_dir%/proxy
```
### Start etcd as a new member
Finally, start the reconfigured member and make sure it joins the cluster correctly:
``` bash
$ export ETCD_NAME="infra1"
$ export ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380"
$ export ETCD_INITIAL_CLUSTER_STATE=existing
$ etcd -listen-client-urls http://10.0.1.11:2379 \
-advertise-client-urls http://10.0.1.11:2379 \
-listen-peer-urls http://10.0.1.11:2380 \
-initial-advertise-peer-urls http://10.0.1.11:2380 \
-data-dir %data_dir%
```
If you are running etcd under systemd, you should modify the service file with the correct configuration and restart the service:
``` bash
sudo systemctl restart etcd
```
If you see an error, you can read the [add member troubleshooting doc](runtime-configuration.md#error-cases).
[discovery-service]: clustering.md#discovery

View File

@ -1,4 +1,4 @@
## Reporting Bugs
# Reporting Bugs
If you find bugs or documentation mistakes in the etcd project, please let us know by [opening an issue](https://github.com/coreos/etcd/issues/new). We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that one does not already exist.
@ -20,7 +20,7 @@ We might ask you for further information to locate a bug. A duplicated bug repor
## Frequently Asked Questions
### How to get stack trace
### How to get a stack trace
``` bash
$ kill -QUIT $PID

View File

@ -1,4 +1,4 @@
## Design
# Design
1. Flatten binary key-value space
@ -32,9 +32,9 @@
[protobuf](./v3api.proto)
### Examples
## Examples
#### Put a key (foo=bar)
### Put a key (foo=bar)
```
// A put is always successful
Put( PutRequest { key = foo, value = bar } )
@ -47,7 +47,7 @@ PutResponse {
}
```
#### Get a key (assume we have foo=bar)
### Get a key (assume we have foo=bar)
```
Get ( RangeRequest { key = foo } )
@ -68,7 +68,7 @@ RangeResponse {
}
```
#### Range over a key space (assume we have foo0=bar0… foo100=bar100)
### Range over a key space (assume we have foo0=bar0… foo100=bar100)
```
Range ( RangeRequest { key = foo, end_key = foo80, limit = 30 } )
@ -97,7 +97,7 @@ RangeResponse {
}
```
#### Finish a txn (assume we have foo0=bar0, foo1=bar1)
### Finish a txn (assume we have foo0=bar0, foo1=bar1)
```
Txn(TxnRequest {
// mod_revision of foo0 is equal to 1, mod_revision of foo1 is greater than 1
@ -129,7 +129,7 @@ TxnResponse {
}
```
#### Watch on a key/range
### Watch on a key/range
```
Watch( WatchRequest{

View File

@ -1,7 +1,6 @@
syntax = "proto3";
// Interface exported by the server.
service etcd {
service KV {
// Range gets the keys in the range from the store.
rpc Range(RangeRequest) returns (RangeResponse) {}
@ -20,11 +19,6 @@ service etcd {
// and generates events with the same revision in the event history.
rpc Txn(TxnRequest) returns (TxnResponse) {}
// Watch watches the events happening or happened in etcd. Both input and output
// are stream. One watch rpc can watch for multiple ranges and get a stream of
// events. The whole events history can be watched unless compacted.
rpc WatchRange(stream WatchRangeRequest) returns (stream WatchRangeResponse) {}
// Compact compacts the event history in etcd. User should compact the
// event history periodically, or it will grow infinitely.
rpc Compact(CompactionRequest) returns (CompactionResponse) {}
@ -50,6 +44,14 @@ service etcd {
rpc LeaseKeepAlive(stream LeaseKeepAliveRequest) returns (stream LeaseKeepAliveResponse) {}
}
service watch {
// Watch watches the events happening or happened. Both input and output
// are streams. One watch rpc can watch for multiple keys or prefixes and
// get a stream of events. The whole event history can be watched unless
// compacted.
rpc Watch(stream WatchRequest) returns (stream WatchResponse) {}
}
message ResponseHeader {
// an error type message?
string error = 1;
@ -190,21 +192,22 @@ message KeyValue {
bytes value = 5;
}
message WatchRangeRequest {
// if the range_end is not given, the request returns the key.
message WatchRequest {
// the key to be watched
bytes key = 1;
// if the range_end is given, it gets the keys in range [key, range_end).
bytes range_end = 2;
// the prefix to be watched.
bytes prefix = 2;
// start_revision is an optional revision (inclusive) to watch from. No start_revision is "now".
int64 start_revision = 3;
// end_revision is an optional revision (exclusive) to end watch. No end_revision is "forever".
int64 end_revision = 4;
bool progress_notification = 5;
// TODO: support Range watch?
// TODO: support notification every time interval or revision increase?
// TODO: support cancel watch if the server cannot reach with majority?
}
message WatchRangeResponse {
message WatchResponse {
ResponseHeader header = 1;
repeated Event events = 2;
// TODO: support batched events response?
storagepb.Event event = 2;
}
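// For illustration, assuming standard gRPC Go codegen of the watch service
// above into a hypothetical package pb, a client would drive the
// bidirectional stream roughly like this (a sketch, not generated code):
//
//   stream, err := watchClient.Watch(ctx)  // open the stream
//   stream.Send(&pb.WatchRequest{Key: []byte("foo"), StartRevision: 1})
//   for {
//       resp, err := stream.Recv()         // one storagepb.Event per response
//       ...
//   }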
message Event {

View File

@ -1,8 +1,8 @@
## Runtime Reconfiguration
# Runtime Reconfiguration
etcd comes with support for incremental runtime reconfiguration, which allows users to update the membership of the cluster at run time.
Reconfiguration requests can only be processed when the the majority of the cluster members are functioning. It is **highly recommended** to always have a cluster size greater than two in production. It is unsafe to remove a member from a two member cluster. The majority of a two member cluster is also two. If there is a failure during the removal process, the cluster might not able to make progress and need to [restart from majority failure][majority failure].
Reconfiguration requests can only be processed when the majority of the cluster members are functioning. It is **highly recommended** to always have a cluster size greater than two in production. It is unsafe to remove a member from a two member cluster. The majority of a two member cluster is also two. If there is a failure during the removal process, the cluster might not be able to make progress and would need to [restart from majority failure][majority failure].
To better understand the design behind runtime reconfiguration, we suggest you read [this](runtime-reconf-design.md).
@ -131,7 +131,7 @@ The new member will run as a part of the cluster and immediately begin catching
If you are adding multiple members, the best practice is to configure a single member at a time and verify it starts correctly before adding more new members.
If you add a new member to a 1-node cluster, the cluster cannot make progress before the new member starts because it needs two members as majority to agree on the consensus. You will only see this behavior between the time `etcdctl member add` informs the cluster about the new member and the time the new member successfully establishes a connection to the existing one.
#### Error Cases
#### Error Cases When Adding Members
In the following case we have not included our new host in the list of enumerated nodes.
If this is a new cluster, the node must be added to the list of initial cluster members.
@ -154,10 +154,18 @@ etcdserver: assign ids error: unmatched member while checking PeerURLs
exit 1
```
When we start etcd using the data directory of a removed member, etcd will exit automatically if it connects to any alive member in the cluster:
When we start etcd using the data directory of a removed member, etcd will exit automatically if it connects to any active member in the cluster:
```sh
$ etcd
etcd: this member has been permanently removed from the cluster. Exiting.
exit 1
```
### Strict Reconfiguration Check Mode (`-strict-reconfig-check`)
As described above, the best practice for adding new members is to configure a single member at a time and verify it starts correctly before adding more new members. This step-by-step approach is very important because if a newly added member is not configured correctly (for example, the peer URLs are incorrect), the cluster can lose quorum. The quorum loss happens because the newly added member is counted in the quorum even if that member is not reachable from the other existing members. Quorum loss might also happen if there is a connectivity issue or there are operational issues.
To avoid this problem, etcd provides the `-strict-reconfig-check` option. If this option is passed to etcd, etcd rejects reconfiguration requests if the number of started members would be less than a quorum of the reconfigured cluster.
Enabling this option is recommended. However, it is disabled by default to keep backward compatibility.
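A minimal sketch of the quorum arithmetic behind this check (an illustration of the rule above, not the actual etcd implementation):
```go
package main

import "fmt"

// reconfigAllowed reports whether a reconfiguration would be accepted
// under strict checking: the started members must still form a quorum
// of the cluster as it would look after the change.
func reconfigAllowed(startedMembers, reconfiguredClusterSize int) bool {
	quorum := reconfiguredClusterSize/2 + 1
	return startedMembers >= quorum
}

func main() {
	// Adding a member to a one-node cluster: 1 started member,
	// new size 2, quorum 2 -> rejected.
	fmt.Println(reconfigAllowed(1, 2)) // false
	// Adding a member to a healthy three-node cluster: 3 started,
	// new size 4, quorum 3 -> accepted.
	fmt.Println(reconfigAllowed(3, 4)) // true
}
```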

View File

@ -1,10 +1,10 @@
### Design of Runtime Reconfiguration
# Design of Runtime Reconfiguration
Runtime reconfiguration is one of the hardest and most error-prone features in a distributed system, especially in a consensus-based system like etcd.
Read on to learn about the design of etcd's runtime reconfiguration commands and how we tackled these problems.
### Two Phase Config Changes Keep you Safe
## Two Phase Config Changes Keep you Safe
In etcd, every runtime reconfiguration has to go through [two phases](Documentation/runtime-configuration.md#add-a-new-member) for safety reasons. For example, to add a member you need to first inform the cluster of the new configuration and then start the new member.
@ -22,7 +22,7 @@ Without the explicit workflow around cluster membership etcd would be vulnerable
We think runtime reconfiguration should be an infrequent operation. We made the decision to keep it explicit and user-driven to ensure configuration safety and keep your cluster always running smoothly under your control.
### Permanent Loss of Quorum Requires New Cluster
## Permanent Loss of Quorum Requires New Cluster
If a cluster permanently loses a majority of its members, a new cluster will need to be started from an old data directory to recover the previous state.
@ -30,7 +30,7 @@ It is entirely possible to force removing the failed members from the existing c
If you have a correct deployment, the possibility of permanent majority loss is very low. But it is a severe enough problem to be worth special care. We strongly suggest you read the [disaster recovery documentation](admin_guide.md#disaster-recovery) and prepare for permanent majority loss before you put etcd into production.
### Do Not Use Public Discovery Service For Runtime Reconfiguration
## Do Not Use Public Discovery Service For Runtime Reconfiguration
The public discovery service should only be used for bootstrapping a cluster. To join a member to an existing cluster, use the runtime reconfiguration API.
@ -38,10 +38,10 @@ Discovery service is designed for bootstrapping an etcd cluster in the cloud env
It seems that using the public discovery service would be a convenient way to do runtime reconfiguration; after all, the discovery service already has all the cluster configuration information. However, relying on the public discovery service brings trouble:
1. it introduces a external dependencies for the entire life-cycle of your cluster, not just bootstrap time. If there is a network issue between your cluster and public discover service, your cluster will suffer from it.
1. it introduces external dependencies for the entire life-cycle of your cluster, not just bootstrap time. If there is a network issue between your cluster and public discovery service, your cluster will suffer from it.
2. the public discovery service must reflect the correct runtime configuration of your cluster during its life-cycle. It has to provide a security mechanism to prevent bad actions, and that is hard.
3. the public discovery service has to keep tens of thousands of cluster configurations. Our public discovery service backend is not ready for that workload.
If you want to have a discovery service that supports runtime reconfiguration, the best choice is to build your private one.
If you want to have a discovery service that supports runtime reconfiguration, the best choice is to build your private one.

View File

@ -97,7 +97,7 @@ $ curl --cacert /path/to/ca.crt --cert /path/to/client.crt --key /path/to/client
-L https://127.0.0.1:2379/v2/keys/foo -XPUT -d value=bar -v
```
You should able to see:
You should be able to see:
```
...
@ -138,7 +138,7 @@ $ etcd -name infra1 -data-dir infra1 \
# member2
$ etcd -name infra2 -data-dir infra2 \
-peer-client-cert-atuh -peer-trusted-ca-file=/path/to/ca.crt -peer-cert-file=/path/to/member2.crt -peer-key-file=/path/to/member2.key \
-peer-client-cert-auth -peer-trusted-ca-file=/path/to/ca.crt -peer-cert-file=/path/to/member2.crt -peer-key-file=/path/to/member2.key \
-initial-advertise-peer-urls=https://10.0.1.11:2380 -listen-peer-urls=https://10.0.1.11:2380 \
-discovery ${DISCOVERY_URL}
```
@ -152,7 +152,7 @@ The etcd members will form a cluster and all communication between members in th
The internal protocol of etcd v2.0.x uses a lot of short-lived HTTP connections.
So, when enabling TLS you may need to increase the heartbeat interval and election timeouts to reduce internal cluster connection churn.
A reasonable place to start is with these values: `--heartbeat-interval 500 --election-timeout 2500`.
This issues is resolved in the etcd v2.1.x series of releases which uses fewer connections.
These issues are resolved in the etcd v2.1.x series of releases which uses fewer connections.
### I'm seeing an SSLv3 alert handshake failure when using SSL client authentication?

View File

@ -1,16 +1,16 @@
## Tuning
# Tuning
The default settings in etcd should work well for installations on a local network where the average network latency is low.
However, when using etcd across multiple data centers or over networks with high latency you may need to tweak the heartbeat interval and election timeout settings.
The network isn't the only source of latency. Each request and response may be impacted by slow disks on both the leader and follower. Each of these timeouts represents the total time from request to successful response from the other machine.
### Time Parameters
## Time Parameters
The underlying distributed consensus protocol relies on two separate time parameters to ensure that nodes can hand off leadership if one stalls or goes offline.
The first parameter is called the *Heartbeat Interval*.
This is the frequency with which the leader will notify followers that it is still the leader.
For best pratices, the parameter should be set around round-trip time between members.
For best practices, the parameter should be set around round-trip time between members.
By default, etcd uses a `100ms` heartbeat interval.
The second parameter is the *Election Timeout*.
@ -27,11 +27,14 @@ The election timeout should be set based on the heartbeat interval and average r
Election timeouts must be at least 10 times the round-trip time so it can account for variance in your network.
For example, if the round-trip time between your members is 10ms then you should have at least a 100ms election timeout.
The upper limit of election timeout is 50000ms, which should only be used when deploying global etcd cluster. First, 5s is the upper limit of average global round-trip time. A reasonable round-trip time for the continental united states is 130ms, and the time between US and japan is around 350-400ms. Because package gets delayed a lot, and network situation may be terrible, 5s is a safe value for it. Then, because election timeout should be an order of magnitude bigger than broadcast time, 50s becomes its maximum.
You should also set your election timeout to at least 5 to 10 times your heartbeat interval to account for variance in leader replication.
For a heartbeat interval of 50ms you should set your election timeout to at least 250ms - 500ms.
The upper limit of election timeout is 50000ms (50s), which should only be used when deploying a globally-distributed etcd cluster.
A reasonable round-trip time for the continental United States is 130ms, and the time between US and Japan is around 350-400ms.
If your network has uneven performance or regular packet delays/loss then it is possible that a couple of retries may be necessary to successfully send a packet. So 5s is a safe upper limit of global round-trip time.
As the election timeout should be an order of magnitude bigger than the broadcast time, in the case of ~5s for a globally distributed cluster, 50 seconds becomes a reasonable maximum.
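To make these rules of thumb concrete, here is a minimal Go sketch of the arithmetic (an illustration, not etcd code):
```go
package main

import (
	"fmt"
	"time"
)

// electionTimeout applies the rules of thumb above: at least 10x the
// measured round-trip time, at least 5x the heartbeat interval, and
// never more than the documented 50s ceiling.
func electionTimeout(rtt, heartbeat time.Duration) time.Duration {
	t := 10 * rtt
	if m := 5 * heartbeat; m > t {
		t = m
	}
	if max := 50 * time.Second; t > max {
		t = max
	}
	return t
}

func main() {
	// 10ms RTT with the default 100ms heartbeat -> 500ms.
	fmt.Println(electionTimeout(10*time.Millisecond, 100*time.Millisecond))
}
```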
The heartbeat interval and election timeout value should be the same for all members in one cluster. Setting different values for etcd members may disrupt cluster stability.
You can override the default values on the command line:
@ -46,7 +49,7 @@ $ ETCD_HEARTBEAT_INTERVAL=100 ETCD_ELECTION_TIMEOUT=500 etcd
The values are specified in milliseconds.
### Snapshots
## Snapshots
etcd appends all key changes to a log file.
This log grows forever and is a complete linear history of every change made to the keys.

View File

@ -1,4 +1,4 @@
## Upgrade etcd to 2.1
# Upgrade etcd to 2.1
In the general case, upgrading from etcd 2.0 to 2.1 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v2.0 processes and replace them with etcd v2.1 processes
@ -6,15 +6,15 @@ In the general case, upgrading from etcd 2.0 to 2.1 can be a zero-downtime, roll
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Upgrade Checklists
## Upgrade Checklists
#### Upgrade Requirement
### Upgrade Requirements
To upgrade an existing etcd deployment to 2.1, you must be running 2.0. If you're running a version of etcd before 2.0, you must upgrade to [2.0](https://github.com/coreos/etcd/releases/tag/v2.0.13) before upgrading to 2.1.
Also, to ensure a smooth rolling upgrade, your running cluster must be healthy. You can check the health of the cluster by using the `etcdctl cluster-health` command.
#### Preparedness
### Preparedness
Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
@ -22,13 +22,13 @@ You might also want to [backup your data directory](admin_guide.md#backing-up-th
etcd 2.1 introduces a new [authentication](auth_api.md) feature, which is disabled by default. If your deployment will depend on it, you may want to test the auth features before enabling them in production.
#### Mixed Versions
### Mixed Versions
While upgrading, an etcd cluster supports mixed versions of etcd members. The cluster is only considered upgraded once all its members are upgraded to 2.1.
Internally, etcd members negotiate with each other to determine the overall etcd cluster version, which controls the reported cluster version and the supported features. For example, if you are mid-upgrade, any 2.1 features (such as the authentication feature mentioned above) won't be available.
#### Limitations
### Limitations
If you encounter any issues during the upgrade, you can attempt to restart the affected etcd process using a newer v2.1 binary to solve the problem. One known issue is that etcd v2.0.0 and v2.0.2 may panic during rolling upgrades due to an existing bug, which has been fixed since etcd v2.0.3.
@ -36,7 +36,7 @@ It might take up to 2 minutes for the newly upgraded member to catch up with the
If you have even more data, this might take more time. If you have a data size larger than 100MB you should contact us before upgrading, so we can make sure the upgrades work smoothly.
#### Downgrade
### Downgrade
If all members have been upgraded to v2.1, the cluster will be upgraded to v2.1, and downgrade is **not possible**. If any member is still v2.0, the cluster will remain in v2.0, and you can go back to using the v2.0 binary.

View File

@ -1,4 +1,4 @@
## Upgrade etcd from 2.1 to 2.2
# Upgrade etcd from 2.1 to 2.2
In the general case, upgrading from etcd 2.1 to 2.2 can be a zero-downtime, rolling upgrade:
@ -7,33 +7,33 @@ In the general case, upgrading from etcd 2.1 to 2.2 can be a zero-downtime, roll
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Upgrade Checklists
## Upgrade Checklists
#### Upgrade Requirement
### Upgrade Requirement
To upgrade an existing etcd deployment to 2.2, you must be running 2.1. If you're running a version of etcd before 2.1, you must upgrade to [2.1](https://github.com/coreos/etcd/releases/tag/v2.1.2) before upgrading to 2.2.
Also, to ensure a smooth rolling upgrade, your running cluster must be healthy. You can check the health of the cluster by using the `etcdctl cluster-health` command.
#### Preparedness
### Preparedness
Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
You might also want to [backup your data directory](admin_guide.md#backing-up-the-datastore) for a potential [downgrade](#downgrade).
#### Mixed Versions
### Mixed Versions
While upgrading, an etcd cluster supports mixed versions of etcd members. The cluster is only considered upgraded once all its members are upgraded to 2.2.
Internally, etcd members negotiate with each other to determine the overall etcd cluster version, which controls the reported cluster version and the supported features.
#### Limitations
### Limitations
If you have a data size larger than 100MB you should contact us before upgrading, so we can make sure the upgrades work smoothly.
Every etcd 2.2 member will do health checking across the cluster periodically. etcd 2.1 members do not support health checking. During the upgrade, etcd 2.2 members will log warnings about the unhealthy state of etcd 2.1 members. You can ignore these warnings.
#### Downgrade
### Downgrade
If all members have been upgraded to v2.2, the cluster will be upgraded to v2.2, and downgrade is **not possible**. If any member is still v2.1, the cluster will remain in v2.1, and you can go back to using the v2.1 binary.

42
Godeps/Godeps.json generated
View File

@ -1,6 +1,6 @@
{
"ImportPath": "github.com/coreos/etcd",
"GoVersion": "go1.4.2",
"GoVersion": "go1.5",
"Packages": [
"./..."
],
@ -10,6 +10,10 @@
"Comment": "null-5",
"Rev": "75cd24fc2f2c2a2088577d12123ddee5f54e0675"
},
{
"ImportPath": "github.com/akrennmair/gopcap",
"Rev": "00e11033259acb75598ba416495bb708d864a010"
},
{
"ImportPath": "github.com/beorn7/perks/quantile",
"Rev": "b965b613227fddccbfffe13eae360ed3fa822f8d"
@ -27,6 +31,10 @@
"ImportPath": "github.com/bradfitz/http2",
"Rev": "3e36af6d3af0e56fa3da71099f864933dea3d9fb"
},
{
"ImportPath": "github.com/cheggaaa/pb",
"Rev": "da1f27ad1d9509b16f65f52fd9d8138b0f2dc7b2"
},
{
"ImportPath": "github.com/codegangsta/cli",
"Comment": "1.2.0-26-gf7ebb76",
@ -79,20 +87,10 @@
"ImportPath": "github.com/matttproud/golang_protobuf_extensions/pbutil",
"Rev": "fc2b8d3a73c4867e51861bbdd5ae3c1f0869dd6a"
},
{
"ImportPath": "github.com/prometheus/client_golang/model",
"Comment": "0.5.0-10-ga842dc1",
"Rev": "a842dc11e0621c34a71cab634d1d0190a59802a8"
},
{
"ImportPath": "github.com/prometheus/client_golang/prometheus",
"Comment": "0.5.0-10-ga842dc1",
"Rev": "a842dc11e0621c34a71cab634d1d0190a59802a8"
},
{
"ImportPath": "github.com/prometheus/client_golang/text",
"Comment": "0.5.0-10-ga842dc1",
"Rev": "a842dc11e0621c34a71cab634d1d0190a59802a8"
"Comment": "0.7.0-52-ge51041b",
"Rev": "e51041b3fa41cece0dca035740ba6411905be473"
},
{
"ImportPath": "github.com/prometheus/client_model/go",
@ -100,12 +98,20 @@
"Rev": "fa8ad6fec33561be4280a8f0514318c79d7f6cb6"
},
{
"ImportPath": "github.com/prometheus/procfs",
"Rev": "ee2372b58cee877abe07cde670d04d3b3bac5ee6"
"ImportPath": "github.com/prometheus/common/expfmt",
"Rev": "ffe929a3f4c4faeaa10f2b9535c2b1be3ad15650"
},
{
"ImportPath": "github.com/rakyll/pb",
"Rev": "dc507ad06b7462501281bb4691ee43f0b1d1ec37"
"ImportPath": "github.com/prometheus/common/model",
"Rev": "ffe929a3f4c4faeaa10f2b9535c2b1be3ad15650"
},
{
"ImportPath": "github.com/prometheus/procfs",
"Rev": "454a56f35412459b5e684fd5ec0f9211b94f002a"
},
{
"ImportPath": "github.com/spacejam/loghisto",
"Rev": "323309774dec8b7430187e46cd0793974ccca04a"
},
{
"ImportPath": "github.com/stretchr/testify/assert",
@ -113,7 +119,7 @@
},
{
"ImportPath": "github.com/ugorji/go/codec",
"Rev": "5abd4e96a45c386928ed2ca2a7ef63e2533e18ec"
"Rev": "f1f1a805ed361a0e078bb537e4ea78cd37dcf065"
},
{
"ImportPath": "github.com/xiang90/probing",

View File

@ -0,0 +1,5 @@
#*
*~
/tools/pass/pass
/tools/pcaptest/pcaptest
/tools/tcpdump/tcpdump

View File

@ -0,0 +1,27 @@
Copyright (c) 2009-2011 Andreas Krennmair. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Andreas Krennmair nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@ -0,0 +1,11 @@
# PCAP
This is a simple wrapper around libpcap for Go. Originally written by Andreas
Krennmair <ak@synflood.at> and only lightly touched up by Mark Smith <mark@qq.is>.
Please see the included pcaptest.go and tcpdump.go programs for instructions on
how to use this library.
Miek Gieben <miek@miek.nl> has created a more Go-like package and replaced functionality
with standard functions from the standard library. The package has also been renamed to
pcap.

View File

@ -0,0 +1,527 @@
package pcap
import (
"encoding/binary"
"fmt"
"net"
"reflect"
"strings"
)
const (
TYPE_IP = 0x0800
TYPE_ARP = 0x0806
TYPE_IP6 = 0x86DD
TYPE_VLAN = 0x8100
IP_ICMP = 1
IP_INIP = 4
IP_TCP = 6
IP_UDP = 17
)
const (
ERRBUF_SIZE = 256
// According to pcap-linktype(7).
LINKTYPE_NULL = 0
LINKTYPE_ETHERNET = 1
LINKTYPE_TOKEN_RING = 6
LINKTYPE_ARCNET = 7
LINKTYPE_SLIP = 8
LINKTYPE_PPP = 9
LINKTYPE_FDDI = 10
LINKTYPE_ATM_RFC1483 = 100
LINKTYPE_RAW = 101
LINKTYPE_PPP_HDLC = 50
LINKTYPE_PPP_ETHER = 51
LINKTYPE_C_HDLC = 104
LINKTYPE_IEEE802_11 = 105
LINKTYPE_FRELAY = 107
LINKTYPE_LOOP = 108
LINKTYPE_LINUX_SLL = 113
LINKTYPE_LTALK = 114
LINKTYPE_PFLOG = 117
LINKTYPE_PRISM_HEADER = 119
LINKTYPE_IP_OVER_FC = 122
LINKTYPE_SUNATM = 123
LINKTYPE_IEEE802_11_RADIO = 127
LINKTYPE_ARCNET_LINUX = 129
LINKTYPE_LINUX_IRDA = 144
LINKTYPE_LINUX_LAPD = 177
)
type addrHdr interface {
SrcAddr() string
DestAddr() string
Len() int
}
type addrStringer interface {
String(addr addrHdr) string
}
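// decodemac packs the first 6 bytes of pkt (a MAC address) into a uint64.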
func decodemac(pkt []byte) uint64 {
mac := uint64(0)
for i := uint(0); i < 6; i++ {
mac = (mac << 8) + uint64(pkt[i])
}
return mac
}
// Decode decodes the headers of a Packet.
func (p *Packet) Decode() {
if len(p.Data) <= 14 {
return
}
p.Type = int(binary.BigEndian.Uint16(p.Data[12:14]))
p.DestMac = decodemac(p.Data[0:6])
p.SrcMac = decodemac(p.Data[6:12])
if len(p.Data) >= 15 {
p.Payload = p.Data[14:]
}
switch p.Type {
case TYPE_IP:
p.decodeIp()
case TYPE_IP6:
p.decodeIp6()
case TYPE_ARP:
p.decodeArp()
case TYPE_VLAN:
p.decodeVlan()
}
}
func (p *Packet) headerString(headers []interface{}) string {
// If there's just one header, return that.
if len(headers) == 1 {
if hdr, ok := headers[0].(fmt.Stringer); ok {
return hdr.String()
}
}
// If there are two headers (IPv4/IPv6 -> TCP/UDP/IP..)
if len(headers) == 2 {
// Commonly the first header is an address.
if addr, ok := headers[0].(addrHdr); ok {
if hdr, ok := headers[1].(addrStringer); ok {
return fmt.Sprintf("%s %s", p.Time, hdr.String(addr))
}
}
}
// For IP in IP, we do a recursive call.
if len(headers) >= 2 {
if addr, ok := headers[0].(addrHdr); ok {
if _, ok := headers[1].(addrHdr); ok {
return fmt.Sprintf("%s > %s IP in IP: ",
addr.SrcAddr(), addr.DestAddr(), p.headerString(headers[1:]))
}
}
}
var typeNames []string
for _, hdr := range headers {
typeNames = append(typeNames, reflect.TypeOf(hdr).String())
}
return fmt.Sprintf("unknown [%s]", strings.Join(typeNames, ","))
}
// String prints a one-line representation of the packet header.
// The output is suitable for use in a tcpdump program.
func (p *Packet) String() string {
// If there are no headers, print "unsupported protocol".
if len(p.Headers) == 0 {
return fmt.Sprintf("%s unsupported protocol %d", p.Time, int(p.Type))
}
return fmt.Sprintf("%s %s", p.Time, p.headerString(p.Headers))
}
// Arphdr is an ARP packet header.
type Arphdr struct {
Addrtype uint16
Protocol uint16
HwAddressSize uint8
ProtAddressSize uint8
Operation uint16
SourceHwAddress []byte
SourceProtAddress []byte
DestHwAddress []byte
DestProtAddress []byte
}
func (arp *Arphdr) String() (s string) {
switch arp.Operation {
case 1:
s = "ARP request"
case 2:
s = "ARP Reply"
}
if arp.Addrtype == LINKTYPE_ETHERNET && arp.Protocol == TYPE_IP {
s = fmt.Sprintf("%012x (%s) > %012x (%s)",
decodemac(arp.SourceHwAddress), arp.SourceProtAddress,
decodemac(arp.DestHwAddress), arp.DestProtAddress)
} else {
s = fmt.Sprintf("addrtype = %d protocol = %d", arp.Addrtype, arp.Protocol)
}
return
}
func (p *Packet) decodeArp() {
if len(p.Payload) < 8 {
return
}
pkt := p.Payload
arp := new(Arphdr)
arp.Addrtype = binary.BigEndian.Uint16(pkt[0:2])
arp.Protocol = binary.BigEndian.Uint16(pkt[2:4])
arp.HwAddressSize = pkt[4]
arp.ProtAddressSize = pkt[5]
arp.Operation = binary.BigEndian.Uint16(pkt[6:8])
if len(pkt) < int(8+2*arp.HwAddressSize+2*arp.ProtAddressSize) {
return
}
arp.SourceHwAddress = pkt[8 : 8+arp.HwAddressSize]
arp.SourceProtAddress = pkt[8+arp.HwAddressSize : 8+arp.HwAddressSize+arp.ProtAddressSize]
arp.DestHwAddress = pkt[8+arp.HwAddressSize+arp.ProtAddressSize : 8+2*arp.HwAddressSize+arp.ProtAddressSize]
arp.DestProtAddress = pkt[8+2*arp.HwAddressSize+arp.ProtAddressSize : 8+2*arp.HwAddressSize+2*arp.ProtAddressSize]
p.Headers = append(p.Headers, arp)
if len(pkt) >= int(8+2*arp.HwAddressSize+2*arp.ProtAddressSize) {
p.Payload = p.Payload[8+2*arp.HwAddressSize+2*arp.ProtAddressSize:]
}
}
// Iphdr is the header of an IP packet.
type Iphdr struct {
Version uint8
Ihl uint8
Tos uint8
Length uint16
Id uint16
Flags uint8
FragOffset uint16
Ttl uint8
Protocol uint8
Checksum uint16
SrcIp []byte
DestIp []byte
}
func (p *Packet) decodeIp() {
if len(p.Payload) < 20 {
return
}
pkt := p.Payload
ip := new(Iphdr)
ip.Version = uint8(pkt[0]) >> 4
ip.Ihl = uint8(pkt[0]) & 0x0F
ip.Tos = pkt[1]
ip.Length = binary.BigEndian.Uint16(pkt[2:4])
ip.Id = binary.BigEndian.Uint16(pkt[4:6])
flagsfrags := binary.BigEndian.Uint16(pkt[6:8])
ip.Flags = uint8(flagsfrags >> 13)
ip.FragOffset = flagsfrags & 0x1FFF
ip.Ttl = pkt[8]
ip.Protocol = pkt[9]
ip.Checksum = binary.BigEndian.Uint16(pkt[10:12])
ip.SrcIp = pkt[12:16]
ip.DestIp = pkt[16:20]
pEnd := int(ip.Length)
if pEnd > len(pkt) {
pEnd = len(pkt)
}
if len(pkt) >= pEnd && int(ip.Ihl*4) < pEnd {
p.Payload = pkt[ip.Ihl*4 : pEnd]
} else {
p.Payload = []byte{}
}
p.Headers = append(p.Headers, ip)
p.IP = ip
switch ip.Protocol {
case IP_TCP:
p.decodeTcp()
case IP_UDP:
p.decodeUdp()
case IP_ICMP:
p.decodeIcmp()
case IP_INIP:
p.decodeIp()
}
}
func (ip *Iphdr) SrcAddr() string { return net.IP(ip.SrcIp).String() }
func (ip *Iphdr) DestAddr() string { return net.IP(ip.DestIp).String() }
func (ip *Iphdr) Len() int { return int(ip.Length) }
type Vlanhdr struct {
Priority byte
DropEligible bool
VlanIdentifier int
Type int // Not actually part of the vlan header, but the type of the actual packet
}
func (v *Vlanhdr) String() string {
return fmt.Sprintf("VLAN Priority:%d Drop:%v Tag:%d", v.Priority, v.DropEligible, v.VlanIdentifier)
}
func (p *Packet) decodeVlan() {
pkt := p.Payload
vlan := new(Vlanhdr)
if len(pkt) < 4 {
return
}
vlan.Priority = (pkt[0] & 0xE0) >> 5 // PCP: top 3 bits of the TCI
vlan.DropEligible = pkt[0]&0x10 != 0
vlan.VlanIdentifier = int(binary.BigEndian.Uint16(pkt[:2])) & 0x0FFF
vlan.Type = int(binary.BigEndian.Uint16(p.Payload[2:4]))
p.Headers = append(p.Headers, vlan)
if len(pkt) >= 5 {
p.Payload = p.Payload[4:]
}
switch vlan.Type {
case TYPE_IP:
p.decodeIp()
case TYPE_IP6:
p.decodeIp6()
case TYPE_ARP:
p.decodeArp()
}
}
type Tcphdr struct {
SrcPort uint16
DestPort uint16
Seq uint32
Ack uint32
DataOffset uint8
Flags uint16
Window uint16
Checksum uint16
Urgent uint16
Data []byte
}
const (
TCP_FIN = 1 << iota
TCP_SYN
TCP_RST
TCP_PSH
TCP_ACK
TCP_URG
TCP_ECE
TCP_CWR
TCP_NS
)
func (p *Packet) decodeTcp() {
if len(p.Payload) < 20 {
return
}
pkt := p.Payload
tcp := new(Tcphdr)
tcp.SrcPort = binary.BigEndian.Uint16(pkt[0:2])
tcp.DestPort = binary.BigEndian.Uint16(pkt[2:4])
tcp.Seq = binary.BigEndian.Uint32(pkt[4:8])
tcp.Ack = binary.BigEndian.Uint32(pkt[8:12])
tcp.DataOffset = (pkt[12] & 0xF0) >> 4
tcp.Flags = binary.BigEndian.Uint16(pkt[12:14]) & 0x1FF
tcp.Window = binary.BigEndian.Uint16(pkt[14:16])
tcp.Checksum = binary.BigEndian.Uint16(pkt[16:18])
tcp.Urgent = binary.BigEndian.Uint16(pkt[18:20])
if len(pkt) >= int(tcp.DataOffset*4) {
p.Payload = pkt[tcp.DataOffset*4:]
}
p.Headers = append(p.Headers, tcp)
p.TCP = tcp
}
func (tcp *Tcphdr) String(hdr addrHdr) string {
return fmt.Sprintf("TCP %s:%d > %s:%d %s SEQ=%d ACK=%d LEN=%d",
hdr.SrcAddr(), int(tcp.SrcPort), hdr.DestAddr(), int(tcp.DestPort),
tcp.FlagsString(), int64(tcp.Seq), int64(tcp.Ack), hdr.Len())
}
func (tcp *Tcphdr) FlagsString() string {
var sflags []string
if 0 != (tcp.Flags & TCP_SYN) {
sflags = append(sflags, "syn")
}
if 0 != (tcp.Flags & TCP_FIN) {
sflags = append(sflags, "fin")
}
if 0 != (tcp.Flags & TCP_ACK) {
sflags = append(sflags, "ack")
}
if 0 != (tcp.Flags & TCP_PSH) {
sflags = append(sflags, "psh")
}
if 0 != (tcp.Flags & TCP_RST) {
sflags = append(sflags, "rst")
}
if 0 != (tcp.Flags & TCP_URG) {
sflags = append(sflags, "urg")
}
if 0 != (tcp.Flags & TCP_NS) {
sflags = append(sflags, "ns")
}
if 0 != (tcp.Flags & TCP_CWR) {
sflags = append(sflags, "cwr")
}
if 0 != (tcp.Flags & TCP_ECE) {
sflags = append(sflags, "ece")
}
return fmt.Sprintf("[%s]", strings.Join(sflags, " "))
}
type Udphdr struct {
SrcPort uint16
DestPort uint16
Length uint16
Checksum uint16
}
func (p *Packet) decodeUdp() {
if len(p.Payload) < 8 {
return
}
pkt := p.Payload
udp := new(Udphdr)
udp.SrcPort = binary.BigEndian.Uint16(pkt[0:2])
udp.DestPort = binary.BigEndian.Uint16(pkt[2:4])
udp.Length = binary.BigEndian.Uint16(pkt[4:6])
udp.Checksum = binary.BigEndian.Uint16(pkt[6:8])
p.Headers = append(p.Headers, udp)
p.UDP = udp
if len(p.Payload) >= 8 {
p.Payload = pkt[8:]
}
}
func (udp *Udphdr) String(hdr addrHdr) string {
return fmt.Sprintf("UDP %s:%d > %s:%d LEN=%d CHKSUM=%d",
hdr.SrcAddr(), int(udp.SrcPort), hdr.DestAddr(), int(udp.DestPort),
int(udp.Length), int(udp.Checksum))
}
type Icmphdr struct {
Type uint8
Code uint8
Checksum uint16
Id uint16
Seq uint16
Data []byte
}
func (p *Packet) decodeIcmp() *Icmphdr {
if len(p.Payload) < 8 {
return nil
}
pkt := p.Payload
icmp := new(Icmphdr)
icmp.Type = pkt[0]
icmp.Code = pkt[1]
icmp.Checksum = binary.BigEndian.Uint16(pkt[2:4])
icmp.Id = binary.BigEndian.Uint16(pkt[4:6])
icmp.Seq = binary.BigEndian.Uint16(pkt[6:8])
p.Payload = pkt[8:]
p.Headers = append(p.Headers, icmp)
return icmp
}
func (icmp *Icmphdr) String(hdr addrHdr) string {
return fmt.Sprintf("ICMP %s > %s Type = %d Code = %d ",
hdr.SrcAddr(), hdr.DestAddr(), icmp.Type, icmp.Code)
}
func (icmp *Icmphdr) TypeString() (result string) {
switch icmp.Type {
case 0:
result = fmt.Sprintf("Echo reply seq=%d", icmp.Seq)
case 3:
switch icmp.Code {
case 0:
result = "Network unreachable"
case 1:
result = "Host unreachable"
case 2:
result = "Protocol unreachable"
case 3:
result = "Port unreachable"
default:
result = "Destination unreachable"
}
case 8:
result = fmt.Sprintf("Echo request seq=%d", icmp.Seq)
case 30:
result = "Traceroute"
}
return
}
type Ip6hdr struct {
// http://www.networksorcery.com/enp/protocol/ipv6.htm
Version uint8 // 4 bits
TrafficClass uint8 // 8 bits
FlowLabel uint32 // 20 bits
Length uint16 // 16 bits
NextHeader uint8 // 8 bits, same as Protocol in Iphdr
HopLimit uint8 // 8 bits
SrcIp []byte // 16 bytes
DestIp []byte // 16 bytes
}
func (p *Packet) decodeIp6() {
if len(p.Payload) < 40 {
return
}
pkt := p.Payload
ip6 := new(Ip6hdr)
ip6.Version = uint8(pkt[0]) >> 4
ip6.TrafficClass = uint8((binary.BigEndian.Uint16(pkt[0:2]) >> 4) & 0x00FF)
ip6.FlowLabel = binary.BigEndian.Uint32(pkt[0:4]) & 0x000FFFFF
ip6.Length = binary.BigEndian.Uint16(pkt[4:6])
ip6.NextHeader = pkt[6]
ip6.HopLimit = pkt[7]
ip6.SrcIp = pkt[8:24]
ip6.DestIp = pkt[24:40]
if len(p.Payload) >= 40 {
p.Payload = pkt[40:]
}
p.Headers = append(p.Headers, ip6)
switch ip6.NextHeader {
case IP_TCP:
p.decodeTcp()
case IP_UDP:
p.decodeUdp()
case IP_ICMP:
p.decodeIcmp()
case IP_INIP:
p.decodeIp()
}
}
func (ip6 *Ip6hdr) SrcAddr() string { return net.IP(ip6.SrcIp).String() }
func (ip6 *Ip6hdr) DestAddr() string { return net.IP(ip6.DestIp).String() }
func (ip6 *Ip6hdr) Len() int { return int(ip6.Length) }

View File

@ -0,0 +1,247 @@
package pcap
import (
"bytes"
"testing"
"time"
)
var testSimpleTcpPacket *Packet = &Packet{
Data: []byte{
0x00, 0x00, 0x0c, 0x9f, 0xf0, 0x20, 0xbc, 0x30, 0x5b, 0xe8, 0xd3, 0x49,
0x08, 0x00, 0x45, 0x00, 0x01, 0xa4, 0x39, 0xdf, 0x40, 0x00, 0x40, 0x06,
0x55, 0x5a, 0xac, 0x11, 0x51, 0x49, 0xad, 0xde, 0xfe, 0xe1, 0xc5, 0xf7,
0x00, 0x50, 0xc5, 0x7e, 0x0e, 0x48, 0x49, 0x07, 0x42, 0x32, 0x80, 0x18,
0x00, 0x73, 0xab, 0xb1, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x03, 0x77,
0x37, 0x9c, 0x42, 0x77, 0x5e, 0x3a, 0x47, 0x45, 0x54, 0x20, 0x2f, 0x20,
0x48, 0x54, 0x54, 0x50, 0x2f, 0x31, 0x2e, 0x31, 0x0d, 0x0a, 0x48, 0x6f,
0x73, 0x74, 0x3a, 0x20, 0x77, 0x77, 0x77, 0x2e, 0x66, 0x69, 0x73, 0x68,
0x2e, 0x63, 0x6f, 0x6d, 0x0d, 0x0a, 0x43, 0x6f, 0x6e, 0x6e, 0x65, 0x63,
0x74, 0x69, 0x6f, 0x6e, 0x3a, 0x20, 0x6b, 0x65, 0x65, 0x70, 0x2d, 0x61,
0x6c, 0x69, 0x76, 0x65, 0x0d, 0x0a, 0x55, 0x73, 0x65, 0x72, 0x2d, 0x41,
0x67, 0x65, 0x6e, 0x74, 0x3a, 0x20, 0x4d, 0x6f, 0x7a, 0x69, 0x6c, 0x6c,
0x61, 0x2f, 0x35, 0x2e, 0x30, 0x20, 0x28, 0x58, 0x31, 0x31, 0x3b, 0x20,
0x4c, 0x69, 0x6e, 0x75, 0x78, 0x20, 0x78, 0x38, 0x36, 0x5f, 0x36, 0x34,
0x29, 0x20, 0x41, 0x70, 0x70, 0x6c, 0x65, 0x57, 0x65, 0x62, 0x4b, 0x69,
0x74, 0x2f, 0x35, 0x33, 0x35, 0x2e, 0x32, 0x20, 0x28, 0x4b, 0x48, 0x54,
0x4d, 0x4c, 0x2c, 0x20, 0x6c, 0x69, 0x6b, 0x65, 0x20, 0x47, 0x65, 0x63,
0x6b, 0x6f, 0x29, 0x20, 0x43, 0x68, 0x72, 0x6f, 0x6d, 0x65, 0x2f, 0x31,
0x35, 0x2e, 0x30, 0x2e, 0x38, 0x37, 0x34, 0x2e, 0x31, 0x32, 0x31, 0x20,
0x53, 0x61, 0x66, 0x61, 0x72, 0x69, 0x2f, 0x35, 0x33, 0x35, 0x2e, 0x32,
0x0d, 0x0a, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x3a, 0x20, 0x74, 0x65,
0x78, 0x74, 0x2f, 0x68, 0x74, 0x6d, 0x6c, 0x2c, 0x61, 0x70, 0x70, 0x6c,
0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2f, 0x78, 0x68, 0x74, 0x6d,
0x6c, 0x2b, 0x78, 0x6d, 0x6c, 0x2c, 0x61, 0x70, 0x70, 0x6c, 0x69, 0x63,
0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2f, 0x78, 0x6d, 0x6c, 0x3b, 0x71, 0x3d,
0x30, 0x2e, 0x39, 0x2c, 0x2a, 0x2f, 0x2a, 0x3b, 0x71, 0x3d, 0x30, 0x2e,
0x38, 0x0d, 0x0a, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x2d, 0x45, 0x6e,
0x63, 0x6f, 0x64, 0x69, 0x6e, 0x67, 0x3a, 0x20, 0x67, 0x7a, 0x69, 0x70,
0x2c, 0x64, 0x65, 0x66, 0x6c, 0x61, 0x74, 0x65, 0x2c, 0x73, 0x64, 0x63,
0x68, 0x0d, 0x0a, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x2d, 0x4c, 0x61,
0x6e, 0x67, 0x75, 0x61, 0x67, 0x65, 0x3a, 0x20, 0x65, 0x6e, 0x2d, 0x55,
0x53, 0x2c, 0x65, 0x6e, 0x3b, 0x71, 0x3d, 0x30, 0x2e, 0x38, 0x0d, 0x0a,
0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x2d, 0x43, 0x68, 0x61, 0x72, 0x73,
0x65, 0x74, 0x3a, 0x20, 0x49, 0x53, 0x4f, 0x2d, 0x38, 0x38, 0x35, 0x39,
0x2d, 0x31, 0x2c, 0x75, 0x74, 0x66, 0x2d, 0x38, 0x3b, 0x71, 0x3d, 0x30,
0x2e, 0x37, 0x2c, 0x2a, 0x3b, 0x71, 0x3d, 0x30, 0x2e, 0x33, 0x0d, 0x0a,
0x0d, 0x0a,
}}
func BenchmarkDecodeSimpleTcpPacket(b *testing.B) {
for i := 0; i < b.N; i++ {
testSimpleTcpPacket.Decode()
}
}
func TestDecodeSimpleTcpPacket(t *testing.T) {
p := testSimpleTcpPacket
p.Decode()
if p.DestMac != 0x00000c9ff020 {
t.Error("Dest mac", p.DestMac)
}
if p.SrcMac != 0xbc305be8d349 {
t.Error("Src mac", p.SrcMac)
}
if len(p.Headers) != 2 {
t.Error("Incorrect number of headers", len(p.Headers))
return
}
if ip, ipOk := p.Headers[0].(*Iphdr); ipOk {
if ip.Version != 4 {
t.Error("ip Version", ip.Version)
}
if ip.Ihl != 5 {
t.Error("ip header length", ip.Ihl)
}
if ip.Tos != 0 {
t.Error("ip TOS", ip.Tos)
}
if ip.Length != 420 {
t.Error("ip Length", ip.Length)
}
if ip.Id != 14815 {
t.Error("ip ID", ip.Id)
}
if ip.Flags != 0x02 {
t.Error("ip Flags", ip.Flags)
}
if ip.FragOffset != 0 {
t.Error("ip Fragoffset", ip.FragOffset)
}
if ip.Ttl != 64 {
t.Error("ip TTL", ip.Ttl)
}
if ip.Protocol != 6 {
t.Error("ip Protocol", ip.Protocol)
}
if ip.Checksum != 0x555A {
t.Error("ip Checksum", ip.Checksum)
}
if !bytes.Equal(ip.SrcIp, []byte{172, 17, 81, 73}) {
t.Error("ip Src", ip.SrcIp)
}
if !bytes.Equal(ip.DestIp, []byte{173, 222, 254, 225}) {
t.Error("ip Dest", ip.DestIp)
}
if tcp, tcpOk := p.Headers[1].(*Tcphdr); tcpOk {
if tcp.SrcPort != 50679 {
t.Error("tcp srcport", tcp.SrcPort)
}
if tcp.DestPort != 80 {
t.Error("tcp destport", tcp.DestPort)
}
if tcp.Seq != 0xc57e0e48 {
t.Error("tcp seq", tcp.Seq)
}
if tcp.Ack != 0x49074232 {
t.Error("tcp ack", tcp.Ack)
}
if tcp.DataOffset != 8 {
t.Error("tcp dataoffset", tcp.DataOffset)
}
if tcp.Flags != 0x18 {
t.Error("tcp flags", tcp.Flags)
}
if tcp.Window != 0x73 {
t.Error("tcp window", tcp.Window)
}
if tcp.Checksum != 0xabb1 {
t.Error("tcp checksum", tcp.Checksum)
}
if tcp.Urgent != 0 {
t.Error("tcp urgent", tcp.Urgent)
}
} else {
t.Error("Second header is not TCP header")
}
} else {
t.Error("First header is not IP header")
}
if string(p.Payload) != "GET / HTTP/1.1\r\nHost: www.fish.com\r\nConnection: keep-alive\r\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.121 Safari/535.2\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Encoding: gzip,deflate,sdch\r\nAccept-Language: en-US,en;q=0.8\r\nAccept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3\r\n\r\n" {
t.Error("--- PAYLOAD STRING ---\n", string(p.Payload), "\n--- PAYLOAD BYTES ---\n", p.Payload)
}
}
// Makes sure the packet payload doesn't include the 6 trailing null bytes of this packet
// as part of the payload. They're actually the ethernet trailer.
func TestDecodeSmallTcpPacketHasEmptyPayload(t *testing.T) {
p := &Packet{
// This packet is only 54 bytes (an empty TCP RST), thus 6 trailing null
// bytes are added by the ethernet layer to make it the minimum packet size.
Data: []byte{
0xbc, 0x30, 0x5b, 0xe8, 0xd3, 0x49, 0xb8, 0xac, 0x6f, 0x92, 0xd5, 0xbf,
0x08, 0x00, 0x45, 0x00, 0x00, 0x28, 0x00, 0x00, 0x40, 0x00, 0x40, 0x06,
0x3f, 0x9f, 0xac, 0x11, 0x51, 0xc5, 0xac, 0x11, 0x51, 0x49, 0x00, 0x63,
0x9a, 0xef, 0x00, 0x00, 0x00, 0x00, 0x2e, 0xc1, 0x27, 0x83, 0x50, 0x14,
0x00, 0x00, 0xc3, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
}}
p.Decode()
if p.Payload == nil {
t.Error("Nil payload")
}
if len(p.Payload) != 0 {
t.Error("Non-empty payload:", p.Payload)
}
}
func TestDecodeVlanPacket(t *testing.T) {
p := &Packet{
Data: []byte{
0x00, 0x10, 0xdb, 0xff, 0x10, 0x00, 0x00, 0x15, 0x2c, 0x9d, 0xcc, 0x00, 0x81, 0x00, 0x01, 0xf7,
0x08, 0x00, 0x45, 0x00, 0x00, 0x28, 0x29, 0x8d, 0x40, 0x00, 0x7d, 0x06, 0x83, 0xa0, 0xac, 0x1b,
0xca, 0x8e, 0x45, 0x16, 0x94, 0xe2, 0xd4, 0x0a, 0x00, 0x50, 0xdf, 0xab, 0x9c, 0xc6, 0xcd, 0x1e,
0xe5, 0xd1, 0x50, 0x10, 0x01, 0x00, 0x5a, 0x74, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
}}
p.Decode()
if p.Type != TYPE_VLAN {
t.Error("Didn't detect vlan")
}
if len(p.Headers) != 3 {
t.Error("Incorrect number of headers:", len(p.Headers))
for i, h := range p.Headers {
t.Errorf("Header %d: %#v", i, h)
}
t.FailNow()
}
if _, ok := p.Headers[0].(*Vlanhdr); !ok {
t.Errorf("First header isn't vlan: %q", p.Headers[0])
}
if _, ok := p.Headers[1].(*Iphdr); !ok {
t.Errorf("Second header isn't IP: %q", p.Headers[1])
}
if _, ok := p.Headers[2].(*Tcphdr); !ok {
t.Errorf("Third header isn't TCP: %q", p.Headers[2])
}
}
func TestDecodeFuzzFallout(t *testing.T) {
testData := []struct {
Data []byte
}{
{[]byte("000000000000\x81\x000")},
{[]byte("000000000000\x81\x00000")},
{[]byte("000000000000\x86\xdd0")},
{[]byte("000000000000\b\x000")},
{[]byte("000000000000\b\x060")},
{[]byte{}},
{[]byte("000000000000\b\x0600000000")},
{[]byte("000000000000\x86\xdd000000\x01000000000000000000000000000000000")},
{[]byte("000000000000\x81\x0000\b\x0600000000")},
{[]byte("000000000000\b\x00n0000000000000000000")},
{[]byte("000000000000\x86\xdd000000\x0100000000000000000000000000000000000")},
{[]byte("000000000000\x81\x0000\b\x00g0000000000000000000")},
//{[]byte()},
{[]byte("000000000000\b\x00400000000\x110000000000")},
{[]byte("0nMء\xfe\x13\x13\x81\x00gr\b\x00&x\xc9\xe5b'\x1e0\x00\x04\x00\x0020596224")},
{[]byte("000000000000\x81\x0000\b\x00400000000\x110000000000")},
{[]byte("000000000000\b\x00000000000\x0600\xff0000000")},
{[]byte("000000000000\x86\xdd000000\x06000000000000000000000000000000000")},
{[]byte("000000000000\x81\x0000\b\x00000000000\x0600b0000000")},
{[]byte("000000000000\x81\x0000\b\x00400000000\x060000000000")},
{[]byte("000000000000\x86\xdd000000\x11000000000000000000000000000000000")},
{[]byte("000000000000\x86\xdd000000\x0600000000000000000000000000000000000000000000M")},
{[]byte("000000000000\b\x00500000000\x0600000000000")},
{[]byte("0nM\xd80\xfe\x13\x13\x81\x00gr\b\x00&x\xc9\xe5b'\x1e0\x00\x04\x00\x0020596224")},
}
for _, entry := range testData {
pkt := &Packet{
Time: time.Now(),
Caplen: uint32(len(entry.Data)),
Len: uint32(len(entry.Data)),
Data: entry.Data,
}
pkt.Decode()
/*
func() {
defer func() {
if err := recover(); err != nil {
t.Fatalf("%d. %q failed: %v", idx, string(entry.Data), err)
}
}()
pkt.Decode()
}()
*/
}
}

View File

@ -0,0 +1,206 @@
package pcap
import (
"encoding/binary"
"fmt"
"io"
"time"
)
// FileHeader is the parsed header of a pcap file.
// http://wiki.wireshark.org/Development/LibpcapFileFormat
type FileHeader struct {
MagicNumber uint32
VersionMajor uint16
VersionMinor uint16
TimeZone int32
SigFigs uint32
SnapLen uint32
Network uint32
}
type PacketTime struct {
Sec int32
Usec int32
}
// Time converts the PacketTime to a Go time.Time.
func (p *PacketTime) Time() time.Time {
return time.Unix(int64(p.Sec), int64(p.Usec)*1000)
}
// Packet is a single packet parsed from a pcap file.
//
// Convenient access to IP, TCP, and UDP headers is provided after Decode()
// is called if the packet is of the appropriate type.
type Packet struct {
Time time.Time // packet send/receive time
Caplen uint32 // bytes stored in the file (caplen <= len)
Len uint32 // bytes sent/received
Data []byte // packet data
Type int // protocol type, see LINKTYPE_*
DestMac uint64
SrcMac uint64
Headers []interface{} // decoded headers, in order
Payload []byte // remaining non-header bytes
IP *Iphdr // IP header (for IP packets, after decoding)
TCP *Tcphdr // TCP header (for TCP packets, after decoding)
UDP *Udphdr // UDP header (for UDP packets after decoding)
}
// Reader parses pcap files.
type Reader struct {
flip bool
buf io.Reader
err error
fourBytes []byte
twoBytes []byte
sixteenBytes []byte
Header FileHeader
}
// NewReader reads pcap data from an io.Reader.
func NewReader(reader io.Reader) (*Reader, error) {
r := &Reader{
buf: reader,
fourBytes: make([]byte, 4),
twoBytes: make([]byte, 2),
sixteenBytes: make([]byte, 16),
}
switch magic := r.readUint32(); magic {
case 0xa1b2c3d4:
r.flip = false
case 0xd4c3b2a1:
r.flip = true
default:
return nil, fmt.Errorf("pcap: bad magic number: %0x", magic)
}
r.Header = FileHeader{
MagicNumber: 0xa1b2c3d4,
VersionMajor: r.readUint16(),
VersionMinor: r.readUint16(),
TimeZone: r.readInt32(),
SigFigs: r.readUint32(),
SnapLen: r.readUint32(),
Network: r.readUint32(),
}
return r, nil
}
// Next returns the next packet or nil if no more packets can be read.
func (r *Reader) Next() *Packet {
d := r.sixteenBytes
r.err = r.read(d)
if r.err != nil {
return nil
}
timeSec := asUint32(d[0:4], r.flip)
timeUsec := asUint32(d[4:8], r.flip)
capLen := asUint32(d[8:12], r.flip)
origLen := asUint32(d[12:16], r.flip)
data := make([]byte, capLen)
if r.err = r.read(data); r.err != nil {
return nil
}
return &Packet{
Time: time.Unix(int64(timeSec), int64(timeUsec)*1000), // usec -> nsec, matching PacketTime.Time
Caplen: capLen,
Len: origLen,
Data: data,
}
}
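// read fills data completely, looping to recover from short reads on the underlying reader.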
func (r *Reader) read(data []byte) error {
var err error
n, err := r.buf.Read(data)
for err == nil && n != len(data) {
var chunk int
chunk, err = r.buf.Read(data[n:])
n += chunk
}
if len(data) == n {
return nil
}
return err
}
func (r *Reader) readUint32() uint32 {
data := r.fourBytes
if r.err = r.read(data); r.err != nil {
return 0
}
return asUint32(data, r.flip)
}
func (r *Reader) readInt32() int32 {
data := r.fourBytes
if r.err = r.read(data); r.err != nil {
return 0
}
return int32(asUint32(data, r.flip))
}
func (r *Reader) readUint16() uint16 {
data := r.twoBytes
if r.err = r.read(data); r.err != nil {
return 0
}
return asUint16(data, r.flip)
}
// Writer writes a pcap file.
type Writer struct {
writer io.Writer
buf []byte
}
// NewWriter creates a Writer that stores output in an io.Writer.
// The FileHeader is written immediately.
func NewWriter(writer io.Writer, header *FileHeader) (*Writer, error) {
w := &Writer{
writer: writer,
buf: make([]byte, 24),
}
binary.LittleEndian.PutUint32(w.buf, header.MagicNumber)
binary.LittleEndian.PutUint16(w.buf[4:], header.VersionMajor)
binary.LittleEndian.PutUint16(w.buf[6:], header.VersionMinor)
binary.LittleEndian.PutUint32(w.buf[8:], uint32(header.TimeZone))
binary.LittleEndian.PutUint32(w.buf[12:], header.SigFigs)
binary.LittleEndian.PutUint32(w.buf[16:], header.SnapLen)
binary.LittleEndian.PutUint32(w.buf[20:], header.Network)
if _, err := writer.Write(w.buf); err != nil {
return nil, err
}
return w, nil
}
// Write writes a packet to the underlying writer.
func (w *Writer) Write(pkt *Packet) error {
binary.LittleEndian.PutUint32(w.buf, uint32(pkt.Time.Unix()))
binary.LittleEndian.PutUint32(w.buf[4:], uint32(pkt.Time.Nanosecond()/1000)) // ts_usec
binary.LittleEndian.PutUint32(w.buf[8:], pkt.Caplen) // incl_len
binary.LittleEndian.PutUint32(w.buf[12:], pkt.Len)
if _, err := w.writer.Write(w.buf[:16]); err != nil {
return err
}
_, err := w.writer.Write(pkt.Data)
return err
}
func asUint32(data []byte, flip bool) uint32 {
if flip {
return binary.BigEndian.Uint32(data)
}
return binary.LittleEndian.Uint32(data)
}
func asUint16(data []byte, flip bool) uint16 {
if flip {
return binary.BigEndian.Uint16(data)
}
return binary.LittleEndian.Uint16(data)
}
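A hedged usage sketch for the reader above, assuming a capture file named `dump.pcap` and the import path listed in Godeps.json:
```go
package main

import (
	"fmt"
	"log"
	"os"

	pcap "github.com/akrennmair/gopcap" // the package name is pcap
)

func main() {
	f, err := os.Open("dump.pcap") // hypothetical capture file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	r, err := pcap.NewReader(f)
	if err != nil {
		log.Fatal(err)
	}
	for pkt := r.Next(); pkt != nil; pkt = r.Next() {
		pkt.Decode() // fills in IP/TCP/UDP headers where applicable
		fmt.Println(pkt)
	}
}
```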

View File

@ -0,0 +1,266 @@
// Interface to both live and offline pcap parsing.
package pcap
/*
#cgo linux LDFLAGS: -lpcap
#cgo freebsd LDFLAGS: -lpcap
#cgo darwin LDFLAGS: -lpcap
#cgo windows CFLAGS: -I C:/WpdPack/Include
#cgo windows,386 LDFLAGS: -L C:/WpdPack/Lib -lwpcap
#cgo windows,amd64 LDFLAGS: -L C:/WpdPack/Lib/x64 -lwpcap
#include <stdlib.h>
#include <pcap.h>
// Workaround for not knowing how to cast to const u_char**
int hack_pcap_next_ex(pcap_t *p, struct pcap_pkthdr **pkt_header,
u_char **pkt_data) {
return pcap_next_ex(p, pkt_header, (const u_char **)pkt_data);
}
*/
import "C"
import (
"errors"
"net"
"syscall"
"time"
"unsafe"
)
type Pcap struct {
cptr *C.pcap_t
}
type Stat struct {
PacketsReceived uint32
PacketsDropped uint32
PacketsIfDropped uint32
}
type Interface struct {
Name string
Description string
Addresses []IFAddress
// TODO: add more elements
}
type IFAddress struct {
IP net.IP
Netmask net.IPMask
// TODO: add broadcast + PtP dst ?
}
func (p *Pcap) Next() (pkt *Packet) {
rv, _ := p.NextEx()
return rv
}
// Openlive opens a device and returns a *Pcap handler
func Openlive(device string, snaplen int32, promisc bool, timeout_ms int32) (handle *Pcap, err error) {
var buf *C.char
buf = (*C.char)(C.calloc(ERRBUF_SIZE, 1))
h := new(Pcap)
var pro int32
if promisc {
pro = 1
}
dev := C.CString(device)
defer C.free(unsafe.Pointer(dev))
h.cptr = C.pcap_open_live(dev, C.int(snaplen), C.int(pro), C.int(timeout_ms), buf)
if nil == h.cptr {
handle = nil
err = errors.New(C.GoString(buf))
} else {
handle = h
}
C.free(unsafe.Pointer(buf))
return
}
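// A sketch of typical live-capture usage from a consuming package
// (assumes a device named "eth0"; import path per Godeps.json):
//
//   h, err := pcap.Openlive("eth0", 65535, true, 100)
//   if err != nil {
//       log.Fatal(err)
//   }
//   defer h.Close()
//   if err := h.Setfilter("tcp and port 2379"); err != nil {
//       log.Fatal(err)
//   }
//   for pkt, r := h.NextEx(); r >= 0; pkt, r = h.NextEx() {
//       if r == 0 || pkt == nil {
//           continue // read timeout on a live handle
//       }
//       pkt.Decode()
//   }
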
func Openoffline(file string) (handle *Pcap, err error) {
var buf *C.char
buf = (*C.char)(C.calloc(ERRBUF_SIZE, 1))
h := new(Pcap)
cf := C.CString(file)
defer C.free(unsafe.Pointer(cf))
h.cptr = C.pcap_open_offline(cf, buf)
if nil == h.cptr {
handle = nil
err = errors.New(C.GoString(buf))
} else {
handle = h
}
C.free(unsafe.Pointer(buf))
return
}
// NextEx reads the next packet; result follows pcap_next_ex semantics:
// 1 = packet read, 0 = timeout, -1 = error, -2 = EOF (savefiles only).
func (p *Pcap) NextEx() (pkt *Packet, result int32) {
var pkthdr *C.struct_pcap_pkthdr
var buf_ptr *C.u_char
var buf unsafe.Pointer
result = int32(C.hack_pcap_next_ex(p.cptr, &pkthdr, &buf_ptr))
buf = unsafe.Pointer(buf_ptr)
if nil == buf {
return
}
pkt = new(Packet)
pkt.Time = time.Unix(int64(pkthdr.ts.tv_sec), int64(pkthdr.ts.tv_usec)*1000)
pkt.Caplen = uint32(pkthdr.caplen)
pkt.Len = uint32(pkthdr.len)
pkt.Data = C.GoBytes(buf, C.int(pkthdr.caplen))
return
}
func (p *Pcap) Close() {
C.pcap_close(p.cptr)
}
func (p *Pcap) Geterror() error {
return errors.New(C.GoString(C.pcap_geterr(p.cptr)))
}
func (p *Pcap) Getstats() (stat *Stat, err error) {
var cstats _Ctype_struct_pcap_stat
if -1 == C.pcap_stats(p.cptr, &cstats) {
return nil, p.Geterror()
}
stats := new(Stat)
stats.PacketsReceived = uint32(cstats.ps_recv)
stats.PacketsDropped = uint32(cstats.ps_drop)
stats.PacketsIfDropped = uint32(cstats.ps_ifdrop)
return stats, nil
}
func (p *Pcap) Setfilter(expr string) (err error) {
var bpf _Ctype_struct_bpf_program
cexpr := C.CString(expr)
defer C.free(unsafe.Pointer(cexpr))
if -1 == C.pcap_compile(p.cptr, &bpf, cexpr, 1, 0) {
return p.Geterror()
}
if -1 == C.pcap_setfilter(p.cptr, &bpf) {
C.pcap_freecode(&bpf)
return p.Geterror()
}
C.pcap_freecode(&bpf)
return nil
}
func Version() string {
return C.GoString(C.pcap_lib_version())
}
func (p *Pcap) Datalink() int {
return int(C.pcap_datalink(p.cptr))
}
func (p *Pcap) Setdatalink(dlt int) error {
if -1 == C.pcap_set_datalink(p.cptr, C.int(dlt)) {
return p.Geterror()
}
return nil
}
func DatalinkValueToName(dlt int) string {
if name := C.pcap_datalink_val_to_name(C.int(dlt)); name != nil {
return C.GoString(name)
}
return ""
}
func DatalinkValueToDescription(dlt int) string {
if desc := C.pcap_datalink_val_to_description(C.int(dlt)); desc != nil {
return C.GoString(desc)
}
return ""
}
func Findalldevs() (ifs []Interface, err error) {
var buf *C.char
buf = (*C.char)(C.calloc(ERRBUF_SIZE, 1))
defer C.free(unsafe.Pointer(buf))
var alldevsp *C.pcap_if_t
if -1 == C.pcap_findalldevs((**C.pcap_if_t)(&alldevsp), buf) {
return nil, errors.New(C.GoString(buf))
}
defer C.pcap_freealldevs((*C.pcap_if_t)(alldevsp))
dev := alldevsp
var i uint32
for i = 0; dev != nil; dev = (*C.pcap_if_t)(dev.next) {
i++
}
ifs = make([]Interface, i)
dev = alldevsp
for j := uint32(0); dev != nil; dev = (*C.pcap_if_t)(dev.next) {
var iface Interface
iface.Name = C.GoString(dev.name)
iface.Description = C.GoString(dev.description)
iface.Addresses = findalladdresses(dev.addresses)
// TODO: add more elements
ifs[j] = iface
j++
}
return
}
func findalladdresses(addresses *_Ctype_struct_pcap_addr) (retval []IFAddress) {
// TODO - make it support more than IPv4 and IPv6?
retval = make([]IFAddress, 0, 1)
for curaddr := addresses; curaddr != nil; curaddr = (*_Ctype_struct_pcap_addr)(curaddr.next) {
var a IFAddress
var err error
if a.IP, err = sockaddr_to_IP((*syscall.RawSockaddr)(unsafe.Pointer(curaddr.addr))); err != nil {
continue
}
if a.Netmask, err = sockaddr_to_IP((*syscall.RawSockaddr)(unsafe.Pointer(curaddr.netmask))); err != nil {
continue
}
retval = append(retval, a)
}
return
}
func sockaddr_to_IP(rsa *syscall.RawSockaddr) (IP []byte, err error) {
switch rsa.Family {
case syscall.AF_INET:
pp := (*syscall.RawSockaddrInet4)(unsafe.Pointer(rsa))
IP = make([]byte, 4)
for i := 0; i < len(IP); i++ {
IP[i] = pp.Addr[i]
}
return
case syscall.AF_INET6:
pp := (*syscall.RawSockaddrInet6)(unsafe.Pointer(rsa))
IP = make([]byte, 16)
for i := 0; i < len(IP); i++ {
IP[i] = pp.Addr[i]
}
return
}
err = errors.New("Unsupported address type")
return
}
func (p *Pcap) Inject(data []byte) (err error) {
buf := (*C.char)(C.malloc((C.size_t)(len(data))))
for i := 0; i < len(data); i++ {
*(*byte)(unsafe.Pointer(uintptr(unsafe.Pointer(buf)) + uintptr(i))) = data[i]
}
if -1 == C.pcap_sendpacket(p.cptr, (*C.u_char)(unsafe.Pointer(buf)), (C.int)(len(data))) {
err = p.Geterror()
}
C.free(unsafe.Pointer(buf))
return
}
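Putting the pieces above together, a typical live-capture loop looks like the sketch below; the device name, read timeout, and BPF filter expression are placeholders.
```
package main

import (
    "fmt"
    "log"

    "github.com/coreos/etcd/Godeps/_workspace/src/github.com/akrennmair/gopcap"
)

func main() {
    // "eth0" and the 1000ms read timeout are illustrative values.
    h, err := pcap.Openlive("eth0", 65535, true, 1000)
    if err != nil {
        log.Fatalf("pcap.Openlive: %v", err)
    }
    defer h.Close()
    // Setfilter compiles and installs a BPF expression.
    if err := h.Setfilter("tcp and port 80"); err != nil {
        log.Fatalf("Setfilter: %v", err)
    }
    for pkt := h.Next(); pkt != nil; pkt = h.Next() {
        fmt.Printf("captured %d of %d bytes\n", pkt.Caplen, pkt.Len)
    }
}
```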

View File

@ -0,0 +1,49 @@
package main
import (
"flag"
"fmt"
"os"
"runtime/pprof"
"time"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/akrennmair/gopcap"
)
func main() {
var filename *string = flag.String("file", "", "filename")
var decode *bool = flag.Bool("d", false, "If true, decode each packet")
var cpuprofile *string = flag.String("cpuprofile", "", "filename")
flag.Parse()
h, err := pcap.Openoffline(*filename)
if err != nil {
fmt.Printf("Couldn't create pcap reader: %v\n", err)
return
}
if *cpuprofile != "" {
if out, err := os.Create(*cpuprofile); err == nil {
pprof.StartCPUProfile(out)
defer func() {
pprof.StopCPUProfile()
out.Close()
}()
} else {
panic(err)
}
}
i, nilPackets := 0, 0
start := time.Now()
for pkt, code := h.NextEx(); code != -2; pkt, code = h.NextEx() {
if pkt == nil {
nilPackets++
} else if *decode {
pkt.Decode()
}
i++
}
duration := time.Since(start)
fmt.Printf("Took %v to process %v packets, %v per packet, %d nil packets\n", duration, i, duration/time.Duration(i), nilPackets)
}

View File

@ -0,0 +1,96 @@
package main
// Parses a pcap file, writes it back to disk, then verifies the files
// are the same.
import (
"bufio"
"flag"
"fmt"
"io"
"os"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/akrennmair/gopcap"
)
var input *string = flag.String("input", "", "input file")
var output *string = flag.String("output", "", "output file")
var decode *bool = flag.Bool("decode", false, "print decoded packets")
func copyPcap(dest, src string) {
f, err := os.Open(src)
if err != nil {
fmt.Printf("couldn't open %q: %v\n", src, err)
return
}
defer f.Close()
reader, err := pcap.NewReader(bufio.NewReader(f))
if err != nil {
fmt.Printf("couldn't create reader: %v\n", err)
return
}
w, err := os.Create(dest)
if err != nil {
fmt.Printf("couldn't open %q: %v\n", dest, err)
return
}
defer w.Close()
buf := bufio.NewWriter(w)
writer, err := pcap.NewWriter(buf, &reader.Header)
if err != nil {
fmt.Printf("couldn't create writer: %v\n", err)
return
}
for {
pkt := reader.Next()
if pkt == nil {
break
}
if *decode {
pkt.Decode()
fmt.Println(pkt.String())
}
writer.Write(pkt)
}
buf.Flush()
}
func check(dest, src string) {
f, err := os.Open(src)
if err != nil {
fmt.Printf("couldn't open %q: %v\n", src, err)
return
}
defer f.Close()
freader := bufio.NewReader(f)
g, err := os.Open(dest)
if err != nil {
fmt.Printf("couldn't open %q: %v\n", src, err)
return
}
defer g.Close()
greader := bufio.NewReader(g)
for {
fb, ferr := freader.ReadByte()
gb, gerr := greader.ReadByte()
if ferr == io.EOF && gerr == io.EOF {
break
}
if fb == gb {
continue
}
fmt.Println("FAIL")
return
}
fmt.Println("PASS")
}
func main() {
flag.Parse()
copyPcap(*output, *input)
check(*output, *input)
}

View File

@ -0,0 +1,82 @@
package main
import (
"flag"
"fmt"
"time"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/akrennmair/gopcap"
)
func min(x uint32, y uint32) uint32 {
if x < y {
return x
}
return y
}
func main() {
var device *string = flag.String("d", "", "device")
var file *string = flag.String("r", "", "file")
var expr *string = flag.String("e", "", "filter expression")
flag.Parse()
var h *pcap.Pcap
var err error
ifs, err := pcap.Findalldevs()
if len(ifs) == 0 {
fmt.Printf("Warning: no devices found : %s\n", err)
} else {
for i := 0; i < len(ifs); i++ {
fmt.Printf("dev %d: %s (%s)\n", i+1, ifs[i].Name, ifs[i].Description)
}
}
if *device != "" {
h, err = pcap.Openlive(*device, 65535, true, 0)
if h == nil {
fmt.Printf("Openlive(%s) failed: %s\n", *device, err)
return
}
} else if *file != "" {
h, err = pcap.Openoffline(*file)
if h == nil {
fmt.Printf("Openoffline(%s) failed: %s\n", *file, err)
return
}
} else {
fmt.Printf("usage: pcaptest [-d <device> | -r <file>]\n")
return
}
defer h.Close()
fmt.Printf("pcap version: %s\n", pcap.Version())
if *expr != "" {
fmt.Printf("Setting filter: %s\n", *expr)
err := h.Setfilter(*expr)
if err != nil {
fmt.Printf("Warning: setting filter failed: %s\n", err)
}
}
for pkt := h.Next(); pkt != nil; pkt = h.Next() {
fmt.Printf("time: %d.%06d (%s) caplen: %d len: %d\nData:",
int64(pkt.Time.Second()), int64(pkt.Time.Nanosecond()),
time.Unix(int64(pkt.Time.Second()), 0).String(), int64(pkt.Caplen), int64(pkt.Len))
for i := uint32(0); i < pkt.Caplen; i++ {
if i%32 == 0 {
fmt.Printf("\n")
}
if 32 <= pkt.Data[i] && pkt.Data[i] <= 126 {
fmt.Printf("%c", pkt.Data[i])
} else {
fmt.Printf(".")
}
}
fmt.Printf("\n\n")
}
}

View File

@ -0,0 +1,121 @@
package main
import (
"bufio"
"flag"
"fmt"
"os"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/akrennmair/gopcap"
)
const (
TYPE_IP = 0x0800
TYPE_ARP = 0x0806
TYPE_IP6 = 0x86DD
IP_ICMP = 1
IP_INIP = 4
IP_TCP = 6
IP_UDP = 17
)
var out *bufio.Writer
var errout *bufio.Writer
func main() {
var device *string = flag.String("i", "", "interface")
var snaplen *int = flag.Int("s", 65535, "snaplen")
var hexdump *bool = flag.Bool("X", false, "hexdump")
expr := ""
out = bufio.NewWriter(os.Stdout)
errout = bufio.NewWriter(os.Stderr)
flag.Usage = func() {
fmt.Fprintf(errout, "usage: %s [ -i interface ] [ -s snaplen ] [ -X ] [ expression ]\n", os.Args[0])
errout.Flush()
os.Exit(1)
}
flag.Parse()
if len(flag.Args()) > 0 {
expr = flag.Arg(0)
}
if *device == "" {
devs, err := pcap.Findalldevs()
if err != nil {
fmt.Fprintf(errout, "tcpdump: couldn't find any devices: %s\n", err)
}
if 0 == len(devs) {
flag.Usage()
}
*device = devs[0].Name
}
h, err := pcap.Openlive(*device, int32(*snaplen), true, 0)
if h == nil {
fmt.Fprintf(errout, "tcpdump: %s\n", err)
errout.Flush()
return
}
defer h.Close()
if expr != "" {
ferr := h.Setfilter(expr)
if ferr != nil {
fmt.Fprintf(out, "tcpdump: %s\n", ferr)
out.Flush()
}
}
for pkt := h.Next(); pkt != nil; pkt = h.Next() {
pkt.Decode()
fmt.Fprintf(out, "%s\n", pkt.String())
if *hexdump {
Hexdump(pkt)
}
out.Flush()
}
}
func min(a, b int) int {
if a < b {
return a
}
return b
}
func Hexdump(pkt *pcap.Packet) {
for i := 0; i < len(pkt.Data); i += 16 {
Dumpline(uint32(i), pkt.Data[i:min(i+16, len(pkt.Data))])
}
}
func Dumpline(addr uint32, line []byte) {
fmt.Fprintf(out, "\t0x%04x: ", int32(addr))
var i uint16
for i = 0; i < 16 && i < uint16(len(line)); i++ {
if i%2 == 0 {
out.WriteString(" ")
}
fmt.Fprintf(out, "%02x", line[i])
}
for j := i; j <= 16; j++ {
if j%2 == 0 {
out.WriteString(" ")
}
out.WriteString(" ")
}
out.WriteString(" ")
for i = 0; i < 16 && i < uint16(len(line)); i++ {
if line[i] >= 32 && line[i] <= 126 {
fmt.Fprintf(out, "%c", line[i])
} else {
out.WriteString(".")
}
}
out.WriteString("\n")
}

View File

@ -0,0 +1,4 @@
language: go
go:
- 1.4.2
sudo: false

View File

@ -57,6 +57,12 @@ bar.ShowTimeLeft = true
// show average speed
bar.ShowSpeed = true
// sets the width of the progress bar
bar.SetWidth(80)
// sets the width of the progress bar, but if terminal size smaller will be ignored
bar.SetMaxWidth(80)
// convert output to readable format (like KB, MB)
bar.SetUnits(pb.U_BYTES)

View File

@ -1,14 +1,14 @@
package main
import (
"github.com/cheggaaa/pb"
"os"
"fmt"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/cheggaaa/pb"
"io"
"time"
"strings"
"net/http"
"os"
"strconv"
"strings"
"time"
)
func main() {
@ -18,12 +18,12 @@ func main() {
return
}
sourceName, destName := os.Args[1], os.Args[2]
// check source
var source io.Reader
var sourceSize int64
if strings.HasPrefix(sourceName, "http://") {
// open as url
// open as url
resp, err := http.Get(sourceName)
if err != nil {
fmt.Printf("Can't get %s: %v\n", sourceName, err)
@ -54,9 +54,7 @@ func main() {
sourceSize = sourceStat.Size()
source = s
}
// create dest
dest, err := os.Create(destName)
if err != nil {
@ -64,15 +62,15 @@ func main() {
return
}
defer dest.Close()
// create bar
// create bar
bar := pb.New(int(sourceSize)).SetUnits(pb.U_BYTES).SetRefreshRate(time.Millisecond * 10)
bar.ShowSpeed = true
bar.Start()
// create multi writer
writer := io.MultiWriter(dest, bar)
// and copy
io.Copy(writer, source)
bar.Finish()

View File

@ -1,7 +1,7 @@
package main
import (
"github.com/cheggaaa/pb"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/cheggaaa/pb"
"time"
)
@ -13,7 +13,7 @@ func main() {
bar.ShowPercent = true
// show bar (by default already true)
bar.ShowPercent = true
bar.ShowBar = true
// no need counters
bar.ShowCounters = true

45
Godeps/_workspace/src/github.com/cheggaaa/pb/format.go generated vendored Normal file
View File

@ -0,0 +1,45 @@
package pb
import (
"fmt"
"strconv"
"strings"
)
type Units int
const (
// U_NO is the default: no unit conversion is applied
U_NO Units = iota
// U_BYTES formats values as B, KB, MB, etc
U_BYTES
)
// Format formats an integer according to the given units
func Format(i int64, units Units) string {
switch units {
case U_BYTES:
return FormatBytes(i)
default:
// by default just convert to string
return strconv.FormatInt(i, 10)
}
}
// FormatBytes converts a byte count to a human-readable string, e.g. 2.00 MB, 64.20 KB, 52 B
func FormatBytes(i int64) (result string) {
switch {
case i > (1024 * 1024 * 1024 * 1024):
result = fmt.Sprintf("%.02f TB", float64(i)/1024/1024/1024/1024)
case i > (1024 * 1024 * 1024):
result = fmt.Sprintf("%.02f GB", float64(i)/1024/1024/1024)
case i > (1024 * 1024):
result = fmt.Sprintf("%.02f MB", float64(i)/1024/1024)
case i > 1024:
result = fmt.Sprintf("%.02f KB", float64(i)/1024)
default:
result = fmt.Sprintf("%d B", i)
}
result = strings.Trim(result, " ")
return
}
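The outputs below follow directly from the thresholds in FormatBytes; a small sketch:
```
package main

import (
    "fmt"

    "github.com/coreos/etcd/Godeps/_workspace/src/github.com/cheggaaa/pb"
)

func main() {
    fmt.Println(pb.Format(512, pb.U_NO))    // "512"
    fmt.Println(pb.Format(512, pb.U_BYTES)) // "512 B"
    fmt.Println(pb.FormatBytes(2048))       // "2.00 KB"
    fmt.Println(pb.FormatBytes(3 << 20))    // "3.00 MB"
}
```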

View File

@ -0,0 +1,37 @@
package pb
import (
"fmt"
"strconv"
"testing"
)
func Test_DefaultsToInteger(t *testing.T) {
value := int64(1000)
expected := strconv.Itoa(int(value))
actual := Format(value, -1)
if actual != expected {
t.Error(fmt.Sprintf("Expected {%s} was {%s}", expected, actual))
}
}
func Test_CanFormatAsInteger(t *testing.T) {
value := int64(1000)
expected := strconv.Itoa(int(value))
actual := Format(value, U_NO)
if actual != expected {
t.Error(fmt.Sprintf("Expected {%s} was {%s}", expected, actual))
}
}
func Test_CanFormatAsBytes(t *testing.T) {
value := int64(1000)
expected := "1000 B"
actual := Format(value, U_BYTES)
if actual != expected {
t.Error(fmt.Sprintf("Expected {%s} was {%s}", expected, actual))
}
}

367
Godeps/_workspace/src/github.com/cheggaaa/pb/pb.go generated vendored Normal file
View File

@ -0,0 +1,367 @@
package pb
import (
"fmt"
"io"
"math"
"strings"
"sync"
"sync/atomic"
"time"
"unicode/utf8"
)
const (
// Default refresh rate - 200ms
DEFAULT_REFRESH_RATE = time.Millisecond * 200
FORMAT = "[=>-]"
)
// DEPRECATED
// These variables are kept only for backward compatibility and no longer
// have any effect. Use pb.Format and pb.SetRefreshRate instead.
var (
DefaultRefreshRate = DEFAULT_REFRESH_RATE
BarStart, BarEnd, Empty, Current, CurrentN string
)
// Create new progress bar object
func New(total int) *ProgressBar {
return New64(int64(total))
}
// Create new progress bar object using int64 as total
func New64(total int64) *ProgressBar {
pb := &ProgressBar{
Total: total,
RefreshRate: DEFAULT_REFRESH_RATE,
ShowPercent: true,
ShowCounters: true,
ShowBar: true,
ShowTimeLeft: true,
ShowFinalTime: true,
Units: U_NO,
ManualUpdate: false,
isFinish: make(chan struct{}),
currentValue: -1,
}
return pb.Format(FORMAT)
}
// Create new object and start
func StartNew(total int) *ProgressBar {
return New(total).Start()
}
// Callback for custom output
// For example:
// bar.Callback = func(s string) {
// mySuperPrint(s)
// }
//
type Callback func(out string)
type ProgressBar struct {
current int64 // current must be first member of struct (https://code.google.com/p/go/issues/detail?id=5278)
Total int64
RefreshRate time.Duration
ShowPercent, ShowCounters bool
ShowSpeed, ShowTimeLeft, ShowBar bool
ShowFinalTime bool
Output io.Writer
Callback Callback
NotPrint bool
Units Units
Width int
ForceWidth bool
ManualUpdate bool
finishOnce sync.Once //Guards isFinish
isFinish chan struct{}
startTime time.Time
startValue int64
currentValue int64
prefix, postfix string
BarStart string
BarEnd string
Empty string
Current string
CurrentN string
}
// Start print
func (pb *ProgressBar) Start() *ProgressBar {
pb.startTime = time.Now()
pb.startValue = pb.current
if pb.Total == 0 {
pb.ShowTimeLeft = false
pb.ShowPercent = false
}
if !pb.ManualUpdate {
go pb.writer()
}
return pb
}
// Increment current value
func (pb *ProgressBar) Increment() int {
return pb.Add(1)
}
// Set current value
func (pb *ProgressBar) Set(current int) *ProgressBar {
return pb.Set64(int64(current))
}
// Set64 sets the current value as int64
func (pb *ProgressBar) Set64(current int64) *ProgressBar {
atomic.StoreInt64(&pb.current, current)
return pb
}
// Add to current value
func (pb *ProgressBar) Add(add int) int {
return int(pb.Add64(int64(add)))
}
func (pb *ProgressBar) Add64(add int64) int64 {
return atomic.AddInt64(&pb.current, add)
}
// Set prefix string
func (pb *ProgressBar) Prefix(prefix string) *ProgressBar {
pb.prefix = prefix
return pb
}
// Set postfix string
func (pb *ProgressBar) Postfix(postfix string) *ProgressBar {
pb.postfix = postfix
return pb
}
// Set custom format for bar
// Example: bar.Format("[=>_]")
func (pb *ProgressBar) Format(format string) *ProgressBar {
formatEntries := strings.Split(format, "")
if len(formatEntries) == 5 {
pb.BarStart = formatEntries[0]
pb.BarEnd = formatEntries[4]
pb.Empty = formatEntries[3]
pb.Current = formatEntries[1]
pb.CurrentN = formatEntries[2]
}
return pb
}
// Set bar refresh rate
func (pb *ProgressBar) SetRefreshRate(rate time.Duration) *ProgressBar {
pb.RefreshRate = rate
return pb
}
// Set units
// bar.SetUnits(U_NO) - by default
// bar.SetUnits(U_BYTES) - for Mb, Kb, etc
func (pb *ProgressBar) SetUnits(units Units) *ProgressBar {
pb.Units = units
return pb
}
// Set max width; if it exceeds the terminal width, the terminal width is used instead
func (pb *ProgressBar) SetMaxWidth(width int) *ProgressBar {
pb.Width = width
pb.ForceWidth = false
return pb
}
// Set bar width
func (pb *ProgressBar) SetWidth(width int) *ProgressBar {
pb.Width = width
pb.ForceWidth = true
return pb
}
// End print
func (pb *ProgressBar) Finish() {
//Protect multiple calls
pb.finishOnce.Do(func() {
close(pb.isFinish)
pb.write(atomic.LoadInt64(&pb.current))
if !pb.NotPrint {
fmt.Println()
}
})
}
// End print and write string 'str'
func (pb *ProgressBar) FinishPrint(str string) {
pb.Finish()
fmt.Println(str)
}
// implement io.Writer
func (pb *ProgressBar) Write(p []byte) (n int, err error) {
n = len(p)
pb.Add(n)
return
}
// implement io.Reader
func (pb *ProgressBar) Read(p []byte) (n int, err error) {
n = len(p)
pb.Add(n)
return
}
// Create new proxy reader over bar
func (pb *ProgressBar) NewProxyReader(r io.Reader) *Reader {
return &Reader{r, pb}
}
func (pb *ProgressBar) write(current int64) {
width := pb.GetWidth()
var percentBox, countersBox, timeLeftBox, speedBox, barBox, end, out string
// percents
if pb.ShowPercent {
percent := float64(current) / (float64(pb.Total) / float64(100))
percentBox = fmt.Sprintf(" %.02f %% ", percent)
}
// counters
if pb.ShowCounters {
if pb.Total > 0 {
countersBox = fmt.Sprintf("%s / %s ", Format(current, pb.Units), Format(pb.Total, pb.Units))
} else {
countersBox = Format(current, pb.Units) + " / ? "
}
}
// time left
fromStart := time.Since(pb.startTime)
currentFromStart := current - pb.startValue
select {
case <-pb.isFinish:
if pb.ShowFinalTime {
left := (fromStart / time.Second) * time.Second
timeLeftBox = left.String()
}
default:
if pb.ShowTimeLeft && currentFromStart > 0 {
perEntry := fromStart / time.Duration(currentFromStart)
left := time.Duration(pb.Total-currentFromStart) * perEntry
left = (left / time.Second) * time.Second
timeLeftBox = left.String()
}
}
// speed
if pb.ShowSpeed && currentFromStart > 0 {
speed := float64(currentFromStart) / (float64(fromStart) / float64(time.Second))
speedBox = Format(int64(speed), pb.Units) + "/s "
}
barWidth := utf8.RuneCountInString(countersBox + pb.BarStart + pb.BarEnd + percentBox + timeLeftBox + speedBox + pb.prefix + pb.postfix)
// bar
if pb.ShowBar {
size := width - barWidth
if size > 0 {
if pb.Total > 0 {
curCount := int(math.Ceil((float64(current) / float64(pb.Total)) * float64(size)))
emptCount := size - curCount
barBox = pb.BarStart
if emptCount < 0 {
emptCount = 0
}
if curCount > size {
curCount = size
}
if emptCount <= 0 {
barBox += strings.Repeat(pb.Current, curCount)
} else if curCount > 0 {
barBox += strings.Repeat(pb.Current, curCount-1) + pb.CurrentN
}
barBox += strings.Repeat(pb.Empty, emptCount) + pb.BarEnd
} else {
barBox = pb.BarStart
pos := size - int(current)%int(size)
if pos-1 > 0 {
barBox += strings.Repeat(pb.Empty, pos-1)
}
barBox += pb.Current
if size-pos-1 > 0 {
barBox += strings.Repeat(pb.Empty, size-pos-1)
}
barBox += pb.BarEnd
}
}
}
// check len
out = pb.prefix + countersBox + barBox + percentBox + speedBox + timeLeftBox + pb.postfix
if utf8.RuneCountInString(out) < width {
end = strings.Repeat(" ", width-utf8.RuneCountInString(out))
}
// and print!
switch {
case pb.Output != nil:
fmt.Fprint(pb.Output, "\r"+out+end)
case pb.Callback != nil:
pb.Callback(out + end)
case !pb.NotPrint:
fmt.Print("\r" + out + end)
}
}
func (pb *ProgressBar) GetWidth() int {
if pb.ForceWidth {
return pb.Width
}
width := pb.Width
termWidth, _ := terminalWidth()
if width == 0 || termWidth <= width {
width = termWidth
}
return width
}
// Write the current state of the progressbar
func (pb *ProgressBar) Update() {
c := atomic.LoadInt64(&pb.current)
if c != pb.currentValue {
pb.write(c)
pb.currentValue = c
}
}
// Internal loop for writing progressbar
func (pb *ProgressBar) writer() {
pb.Update()
for {
select {
case <-pb.isFinish:
return
case <-time.After(pb.RefreshRate):
pb.Update()
}
}
}
type window struct {
Row uint16
Col uint16
Xpixel uint16
Ypixel uint16
}
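A minimal usage sketch of the API above; the count and the per-step sleep are arbitrary demo values.
```
package main

import (
    "time"

    "github.com/coreos/etcd/Godeps/_workspace/src/github.com/cheggaaa/pb"
)

func main() {
    count := 100
    bar := pb.StartNew(count) // shorthand for pb.New(count).Start()
    for i := 0; i < count; i++ {
        bar.Increment() // atomic; safe from multiple goroutines
        time.Sleep(10 * time.Millisecond)
    }
    bar.FinishPrint("done") // Finish() plus a trailing message
}
```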

View File

@ -0,0 +1,7 @@
// +build linux darwin freebsd netbsd openbsd
package pb
import "syscall"
const sys_ioctl = syscall.SYS_IOCTL

View File

@ -0,0 +1,5 @@
// +build solaris
package pb
const sys_ioctl = 54

View File

@ -0,0 +1,37 @@
package pb
import (
"testing"
)
func Test_IncrementAddsOne(t *testing.T) {
count := 5000
bar := New(count)
expected := 1
actual := bar.Increment()
if actual != expected {
t.Errorf("Expected {%d} was {%d}", expected, actual)
}
}
func Test_Width(t *testing.T) {
count := 5000
bar := New(count)
width := 100
bar.SetWidth(100).Callback = func(out string) {
if len(out) != width {
t.Errorf("Bar width expected {%d} was {%d}", width, len(out))
}
}
}
bar.Start()
bar.Increment()
bar.Finish()
}
func Test_MultipleFinish(t *testing.T) {
bar := New(5000)
bar.Add(2000)
bar.Finish()
bar.Finish()
}

View File

@ -1,16 +1,16 @@
// +build windows
package pb
import (
"github.com/olekukonko/ts"
)
func bold(str string) string {
return str
}
func terminalWidth() (int, error) {
size , err := ts.GetSize()
return size.Col() , err
}
// +build windows
package pb
import (
"github.com/olekukonko/ts"
)
func bold(str string) string {
return str
}
func terminalWidth() (int, error) {
size, err := ts.GetSize()
return size.Col(), err
}

View File

@ -1,8 +1,9 @@
// +build linux darwin freebsd
// +build linux darwin freebsd netbsd openbsd solaris
package pb
import (
"os"
"runtime"
"syscall"
"unsafe"
@ -13,6 +14,16 @@ const (
TIOCGWINSZ_OSX = 1074295912
)
var tty *os.File
func init() {
var err error
tty, err = os.Open("/dev/tty")
if err != nil {
tty = os.Stdin
}
}
func bold(str string) string {
return "\033[1m" + str + "\033[0m"
}
@ -23,8 +34,8 @@ func terminalWidth() (int, error) {
if runtime.GOOS == "darwin" {
tio = TIOCGWINSZ_OSX
}
res, _, err := syscall.Syscall(syscall.SYS_IOCTL,
uintptr(syscall.Stdin),
res, _, err := syscall.Syscall(sys_ioctl,
tty.Fd(),
uintptr(tio),
uintptr(unsafe.Pointer(w)),
)

17
Godeps/_workspace/src/github.com/cheggaaa/pb/reader.go generated vendored Normal file
View File

@ -0,0 +1,17 @@
package pb
import (
"io"
)
// Reader is a proxy reader that implements io.Reader and advances the bar on every read
type Reader struct {
io.Reader
bar *ProgressBar
}
func (r *Reader) Read(p []byte) (n int, err error) {
n, err = r.Reader.Read(p)
r.bar.Add(n)
return
}
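A hedged sketch of wiring the proxy reader into a download; the URL is a placeholder, and it assumes the server sends a Content-Length.
```
package main

import (
    "io"
    "io/ioutil"
    "net/http"

    "github.com/coreos/etcd/Godeps/_workspace/src/github.com/cheggaaa/pb"
)

func main() {
    resp, err := http.Get("http://example.com/file") // placeholder URL
    if err != nil {
        return
    }
    defer resp.Body.Close()
    bar := pb.New(int(resp.ContentLength)).SetUnits(pb.U_BYTES)
    bar.Start()
    // Every Read on the proxy advances the bar by the bytes read.
    io.Copy(ioutil.Discard, bar.NewProxyReader(resp.Body))
    bar.Finish()
}
```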

View File

@ -1,55 +0,0 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package model
import (
"sort"
"testing"
)
func testLabelNames(t testing.TB) {
var scenarios = []struct {
in LabelNames
out LabelNames
}{
{
in: LabelNames{"ZZZ", "zzz"},
out: LabelNames{"ZZZ", "zzz"},
},
{
in: LabelNames{"aaa", "AAA"},
out: LabelNames{"AAA", "aaa"},
},
}
for i, scenario := range scenarios {
sort.Sort(scenario.in)
for j, expected := range scenario.out {
if expected != scenario.in[j] {
t.Errorf("%d.%d expected %s, got %s", i, j, expected, scenario.in[j])
}
}
}
}
func TestLabelNames(t *testing.T) {
testLabelNames(t)
}
func BenchmarkLabelNames(b *testing.B) {
for i := 0; i < b.N; i++ {
testLabelNames(b)
}
}

View File

@ -1,64 +0,0 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package model
import (
"fmt"
"sort"
"strings"
)
// A LabelSet is a collection of LabelName and LabelValue pairs. The LabelSet
// may be fully-qualified down to the point where it may resolve to a single
// Metric in the data store or not. All operations that occur within the realm
// of a LabelSet can emit a vector of Metric entities to which the LabelSet may
// match.
type LabelSet map[LabelName]LabelValue
// Merge is a helper function to non-destructively merge two label sets.
func (l LabelSet) Merge(other LabelSet) LabelSet {
result := make(LabelSet, len(l))
for k, v := range l {
result[k] = v
}
for k, v := range other {
result[k] = v
}
return result
}
func (l LabelSet) String() string {
labelStrings := make([]string, 0, len(l))
for label, value := range l {
labelStrings = append(labelStrings, fmt.Sprintf("%s=%q", label, value))
}
switch len(labelStrings) {
case 0:
return ""
default:
sort.Strings(labelStrings)
return fmt.Sprintf("{%s}", strings.Join(labelStrings, ", "))
}
}
// MergeFromMetric merges Metric into this LabelSet.
func (l LabelSet) MergeFromMetric(m Metric) {
for k, v := range m {
l[k] = v
}
}
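A quick sketch of the non-destructive merge semantics above (label names and values are made up):
```
package main

import (
    "fmt"

    "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_golang/model"
)

func main() {
    base := model.LabelSet{"job": "etcd", "instance": "a"}
    extra := model.LabelSet{"instance": "b", "zone": "us-west"}
    merged := base.Merge(extra) // base keeps instance="a"
    fmt.Println(merged)         // {instance="b", job="etcd", zone="us-west"}
}
```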

View File

@ -1,192 +0,0 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package model
import (
"encoding/json"
"fmt"
"sort"
"strings"
)
var separator = []byte{0}
// A Metric is similar to a LabelSet, but the key difference is that a Metric is
// a singleton and refers to one and only one stream of samples.
type Metric map[LabelName]LabelValue
// Equal compares the metrics.
func (m Metric) Equal(o Metric) bool {
if len(m) != len(o) {
return false
}
for ln, lv := range m {
olv, ok := o[ln]
if !ok {
return false
}
if olv != lv {
return false
}
}
return true
}
// Before compares the metrics, using the following criteria:
//
// If m has fewer labels than o, it is before o. If it has more, it is not.
//
// If the number of labels is the same, the superset of all label names is
// sorted alphanumerically. The first differing label pair found in that order
// determines the outcome: If the label does not exist at all in m, then m is
// before o, and vice versa. Otherwise the label value is compared
// alphanumerically.
//
// If m and o are equal, the method returns false.
func (m Metric) Before(o Metric) bool {
if len(m) < len(o) {
return true
}
if len(m) > len(o) {
return false
}
lns := make(LabelNames, 0, len(m)+len(o))
for ln := range m {
lns = append(lns, ln)
}
for ln := range o {
lns = append(lns, ln)
}
// It's probably not worth it to de-dup lns.
sort.Sort(lns)
for _, ln := range lns {
mlv, ok := m[ln]
if !ok {
return true
}
olv, ok := o[ln]
if !ok {
return false
}
if mlv < olv {
return true
}
if mlv > olv {
return false
}
}
return false
}
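For illustration, a small sketch of the ordering Before defines (metric name and label values are made up):
```
package main

import (
    "fmt"

    "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_golang/model"
)

func main() {
    a := model.Metric{"__name__": "http_requests_total", "code": "200"}
    b := model.Metric{"__name__": "http_requests_total", "code": "404"}
    // Equal label counts; the first differing value is "200" < "404".
    fmt.Println(a.Before(b)) // true
    fmt.Println(b.Before(a)) // false
}
```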
// String implements Stringer.
func (m Metric) String() string {
metricName, hasName := m[MetricNameLabel]
numLabels := len(m) - 1
if !hasName {
numLabels = len(m)
}
labelStrings := make([]string, 0, numLabels)
for label, value := range m {
if label != MetricNameLabel {
labelStrings = append(labelStrings, fmt.Sprintf("%s=%q", label, value))
}
}
switch numLabels {
case 0:
if hasName {
return string(metricName)
}
return "{}"
default:
sort.Strings(labelStrings)
return fmt.Sprintf("%s{%s}", metricName, strings.Join(labelStrings, ", "))
}
}
// Fingerprint returns a Metric's Fingerprint.
func (m Metric) Fingerprint() Fingerprint {
return metricToFingerprint(m)
}
// Fingerprint returns a Metric's Fingerprint calculated by a faster hashing
// algorithm, which is, however, more susceptible to hash collisions.
func (m Metric) FastFingerprint() Fingerprint {
return metricToFastFingerprint(m)
}
// Clone returns a copy of the Metric.
func (m Metric) Clone() Metric {
clone := Metric{}
for k, v := range m {
clone[k] = v
}
return clone
}
// MergeFromLabelSet merges a label set into this Metric, prefixing a collision
// prefix to the label names merged from the label set where required.
func (m Metric) MergeFromLabelSet(labels LabelSet, collisionPrefix LabelName) {
for k, v := range labels {
if collisionPrefix != "" {
for {
if _, exists := m[k]; !exists {
break
}
k = collisionPrefix + k
}
}
m[k] = v
}
}
// COWMetric wraps a Metric to enable copy-on-write access patterns.
type COWMetric struct {
Copied bool
Metric Metric
}
// Set sets a label name in the wrapped Metric to a given value and copies the
// Metric initially, if it is not already a copy.
func (m *COWMetric) Set(ln LabelName, lv LabelValue) {
m.doCOW()
m.Metric[ln] = lv
}
// Delete deletes a given label name from the wrapped Metric and copies the
// Metric initially, if it is not already a copy.
func (m *COWMetric) Delete(ln LabelName) {
m.doCOW()
delete(m.Metric, ln)
}
// doCOW copies the underlying Metric if it is not already a copy.
func (m *COWMetric) doCOW() {
if !m.Copied {
m.Metric = m.Metric.Clone()
m.Copied = true
}
}
// String implements fmt.Stringer.
func (m COWMetric) String() string {
return m.Metric.String()
}
// MarshalJSON implements json.Marshaler.
func (m COWMetric) MarshalJSON() ([]byte, error) {
return json.Marshal(m.Metric)
}
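A short sketch of the copy-on-write behavior (the label value is a made-up example):
```
package main

import (
    "fmt"

    "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_golang/model"
)

func main() {
    orig := model.Metric{"__name__": "up"}
    cow := model.COWMetric{Metric: orig}
    cow.Set("instance", "localhost:2379") // first mutation clones the map
    fmt.Println(orig.String()) // up -- the original is untouched
    fmt.Println(cow.String())  // up{instance="localhost:2379"}
    fmt.Println(cow.Copied)    // true
}
```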

View File

@ -1,79 +0,0 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package model
// Sample is a sample value with a timestamp and a metric.
type Sample struct {
Metric Metric
Value SampleValue
Timestamp Timestamp
}
// Equal compares first the metrics, then the timestamp, then the value.
func (s *Sample) Equal(o *Sample) bool {
if s == o {
return true
}
if !s.Metric.Equal(o.Metric) {
return false
}
if !s.Timestamp.Equal(o.Timestamp) {
return false
}
if !s.Value.Equal(o.Value) {
return false
}
return true
}
// Samples is a sortable Sample slice. It implements sort.Interface.
type Samples []*Sample
func (s Samples) Len() int {
return len(s)
}
// Less compares first the metrics, then the timestamp.
func (s Samples) Less(i, j int) bool {
switch {
case s[i].Metric.Before(s[j].Metric):
return true
case s[j].Metric.Before(s[i].Metric):
return false
case s[i].Timestamp.Before(s[j].Timestamp):
return true
default:
return false
}
}
func (s Samples) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
// Equal compares two sets of samples and returns true if they are equal.
func (s Samples) Equal(o Samples) bool {
if len(s) != len(o) {
return false
}
for i, sample := range s {
if !sample.Equal(o[i]) {
return false
}
}
return true
}

View File

@ -1,114 +0,0 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package model
import (
"sort"
"testing"
)
func TestSamplesSort(t *testing.T) {
input := Samples{
&Sample{
Metric: Metric{
MetricNameLabel: "A",
},
Timestamp: 1,
},
&Sample{
Metric: Metric{
MetricNameLabel: "A",
},
Timestamp: 2,
},
&Sample{
Metric: Metric{
MetricNameLabel: "C",
},
Timestamp: 1,
},
&Sample{
Metric: Metric{
MetricNameLabel: "C",
},
Timestamp: 2,
},
&Sample{
Metric: Metric{
MetricNameLabel: "B",
},
Timestamp: 1,
},
&Sample{
Metric: Metric{
MetricNameLabel: "B",
},
Timestamp: 2,
},
}
expected := Samples{
&Sample{
Metric: Metric{
MetricNameLabel: "A",
},
Timestamp: 1,
},
&Sample{
Metric: Metric{
MetricNameLabel: "A",
},
Timestamp: 2,
},
&Sample{
Metric: Metric{
MetricNameLabel: "B",
},
Timestamp: 1,
},
&Sample{
Metric: Metric{
MetricNameLabel: "B",
},
Timestamp: 2,
},
&Sample{
Metric: Metric{
MetricNameLabel: "C",
},
Timestamp: 1,
},
&Sample{
Metric: Metric{
MetricNameLabel: "C",
},
Timestamp: 2,
},
}
sort.Sort(input)
for i, actual := range input {
actualFp := actual.Metric.Fingerprint()
expectedFp := expected[i].Metric.Fingerprint()
if !actualFp.Equal(expectedFp) {
t.Fatalf("%d. Incorrect fingerprint. Got %s; want %s", i, actualFp.String(), expectedFp.String())
}
if actual.Timestamp != expected[i].Timestamp {
t.Fatalf("%d. Incorrect timestamp. Got %s; want %s", i, actual.Timestamp, expected[i].Timestamp)
}
}
}

View File

@ -1,37 +0,0 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package model
import (
"fmt"
"strconv"
)
// A SampleValue is a representation of a value for a given sample at a given
// time.
type SampleValue float64
// Equal does a straight v==o.
func (v SampleValue) Equal(o SampleValue) bool {
return v == o
}
// MarshalJSON implements json.Marshaler.
func (v SampleValue) MarshalJSON() ([]byte, error) {
return []byte(fmt.Sprintf(`"%s"`, v)), nil
}
func (v SampleValue) String() string {
return strconv.FormatFloat(float64(v), 'f', -1, 64)
}

View File

@ -1,116 +0,0 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package model
import (
"math"
"strconv"
native_time "time"
)
// Timestamp is the number of milliseconds since the epoch
// (1970-01-01 00:00 UTC) excluding leap seconds.
type Timestamp int64
const (
// MinimumTick is the minimum supported time resolution. This has to be
// at least native_time.Second in order for the code below to work.
MinimumTick = native_time.Millisecond
// second is the timestamp duration equivalent to one second.
second = int64(native_time.Second / MinimumTick)
// The number of nanoseconds per minimum tick.
nanosPerTick = int64(MinimumTick / native_time.Nanosecond)
// Earliest is the earliest timestamp representable. Handy for
// initializing a high watermark.
Earliest = Timestamp(math.MinInt64)
// Latest is the latest timestamp representable. Handy for initializing
// a low watermark.
Latest = Timestamp(math.MaxInt64)
)
// Equal reports whether two timestamps represent the same instant.
func (t Timestamp) Equal(o Timestamp) bool {
return t == o
}
// Before reports whether the timestamp t is before o.
func (t Timestamp) Before(o Timestamp) bool {
return t < o
}
// After reports whether the timestamp t is after o.
func (t Timestamp) After(o Timestamp) bool {
return t > o
}
// Add returns the Timestamp t + d.
func (t Timestamp) Add(d native_time.Duration) Timestamp {
return t + Timestamp(d/MinimumTick)
}
// Sub returns the Duration t - o.
func (t Timestamp) Sub(o Timestamp) native_time.Duration {
return native_time.Duration(t-o) * MinimumTick
}
// Time returns the time.Time representation of t.
func (t Timestamp) Time() native_time.Time {
return native_time.Unix(int64(t)/second, (int64(t)%second)*nanosPerTick)
}
// Unix returns t as a Unix time, the number of seconds elapsed
// since January 1, 1970 UTC.
func (t Timestamp) Unix() int64 {
return int64(t) / second
}
// UnixNano returns t as a Unix time, the number of nanoseconds elapsed
// since January 1, 1970 UTC.
func (t Timestamp) UnixNano() int64 {
return int64(t) * nanosPerTick
}
// String returns a string representation of the timestamp.
func (t Timestamp) String() string {
return strconv.FormatFloat(float64(t)/float64(second), 'f', -1, 64)
}
// MarshalJSON implements the json.Marshaler interface.
func (t Timestamp) MarshalJSON() ([]byte, error) {
return []byte(t.String()), nil
}
// Now returns the current time as a Timestamp.
func Now() Timestamp {
return TimestampFromTime(native_time.Now())
}
// TimestampFromTime returns the Timestamp equivalent to the time.Time t.
func TimestampFromTime(t native_time.Time) Timestamp {
return TimestampFromUnixNano(t.UnixNano())
}
// TimestampFromUnix returns the Timestamp equivalent to the Unix timestamp t
// provided in seconds.
func TimestampFromUnix(t int64) Timestamp {
return Timestamp(t * second)
}
// TimestampFromUnixNano returns the Timestamp equivalent to the Unix timestamp
// t provided in nanoseconds.
func TimestampFromUnixNano(t int64) Timestamp {
return Timestamp(t / nanosPerTick)
}
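Since Timestamp counts MinimumTick (millisecond) units, the arithmetic behaves like this minimal sketch (the Unix time is arbitrary):
```
package main

import (
    "fmt"
    "time"

    "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_golang/model"
)

func main() {
    t := model.TimestampFromUnix(1446768000) // arbitrary Unix time, in seconds
    u := t.Add(1500 * time.Millisecond)
    fmt.Println(u.Sub(t))   // 1.5s
    fmt.Println(u.String()) // 1446768001.5
    fmt.Println(u.Unix())   // 1446768001
}
```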

View File

@ -9,18 +9,19 @@ import (
"sort"
"strings"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_golang/model"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
)
var (
metricNameRE = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_:]*$`)
labelNameRE = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_]*$`)
labelNameRE = regexp.MustCompile("^[a-zA-Z_][a-zA-Z0-9_]*$")
)
// reservedLabelPrefix is a prefix which is not legal in user-supplied
// label names.
const reservedLabelPrefix = "__"
// Labels represents a collection of label name -> value mappings. This type is
// commonly used with the With(Labels) and GetMetricWith(Labels) methods of
// metric vector Collectors, e.g.:
@ -134,7 +135,7 @@ func NewDesc(fqName, help string, variableLabels []string, constLabels Labels) *
for _, val := range labelValues {
b.Reset()
b.WriteString(val)
b.WriteByte(model.SeparatorByte)
b.WriteByte(separatorByte)
h.Write(b.Bytes())
}
d.id = h.Sum64()
@ -145,12 +146,12 @@ func NewDesc(fqName, help string, variableLabels []string, constLabels Labels) *
h.Reset()
b.Reset()
b.WriteString(help)
b.WriteByte(model.SeparatorByte)
b.WriteByte(separatorByte)
h.Write(b.Bytes())
for _, labelName := range labelNames {
b.Reset()
b.WriteString(labelName)
b.WriteByte(model.SeparatorByte)
b.WriteByte(separatorByte)
h.Write(b.Bytes())
}
d.dimHash = h.Sum64()
@ -195,5 +196,5 @@ func (d *Desc) String() string {
func checkLabelName(l string) bool {
return labelNameRE.MatchString(l) &&
!strings.HasPrefix(l, model.ReservedLabelPrefix)
!strings.HasPrefix(l, reservedLabelPrefix)
}

View File

@ -61,12 +61,13 @@
// It also exports some stats about the HTTP usage of the /metrics
// endpoint. (See the Handler function for more detail.)
//
// A more advanced metric type is the Summary.
// Two more advanced metric types are the Summary and Histogram.
//
// In addition to the fundamental metric types Gauge, Counter, and Summary, a
// very important part of the Prometheus data model is the partitioning of
// samples along dimensions called labels, which results in metric vectors. The
// fundamental types are GaugeVec, CounterVec, and SummaryVec.
// In addition to the fundamental metric types Gauge, Counter, Summary, and
// Histogram, a very important part of the Prometheus data model is the
// partitioning of samples along dimensions called labels, which results in
// metric vectors. The fundamental types are GaugeVec, CounterVec, SummaryVec,
// and HistogramVec.
//
// Those are all the parts needed for basic usage. Detailed documentation and
// examples are provided below.

View File

@ -17,10 +17,8 @@ import (
"runtime"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_golang/prometheus"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
)
func NewCallbackMetric(desc *prometheus.Desc, callback func() float64) *CallbackMetric {

View File

@ -23,11 +23,9 @@ import (
"sort"
"time"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_golang/prometheus"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
)
func ExampleGauge() {
@ -392,6 +390,9 @@ func ExampleSummaryVec() {
temps.WithLabelValues("lithobates-catesbeianus").Observe(32 + math.Floor(100*math.Cos(float64(i)*0.11))/10)
}
// Create a Summary without any observations.
temps.WithLabelValues("leiopelma-hochstetteri")
// Just for demonstration, let's check the state of the summary vector
// by (ab)using its Collect method and the Write method of its elements
// (which is usually only used by Prometheus internally - code like the
@ -414,6 +415,26 @@ func ExampleSummaryVec() {
// Output:
// [label: <
// name: "species"
// value: "leiopelma-hochstetteri"
// >
// summary: <
// sample_count: 0
// sample_sum: 0
// quantile: <
// quantile: 0.5
// value: nan
// >
// quantile: <
// quantile: 0.9
// value: nan
// >
// quantile: <
// quantile: 0.99
// value: nan
// >
// >
// label: <
// name: "species"
// value: "lithobates-catesbeianus"
// >
// summary: <

View File

@ -19,9 +19,8 @@ import (
"sort"
"strings"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_golang/prometheus"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
)
func ExampleExpvarCollector() {

View File

@ -1,6 +1,7 @@
package prometheus
import (
"fmt"
"runtime"
"runtime/debug"
"time"
@ -9,6 +10,9 @@ import (
type goCollector struct {
goroutines Gauge
gcDesc *Desc
// metrics to describe and collect
metrics memStatsMetrics
}
// NewGoCollector returns a collector which exports metrics about the current
@ -16,20 +20,216 @@ type goCollector struct {
func NewGoCollector() *goCollector {
return &goCollector{
goroutines: NewGauge(GaugeOpts{
Name: "go_goroutines",
Help: "Number of goroutines that currently exist.",
Namespace: "go",
Name: "goroutines",
Help: "Number of goroutines that currently exist.",
}),
gcDesc: NewDesc(
"go_gc_duration_seconds",
"A summary of the GC invocation durations.",
nil, nil),
metrics: memStatsMetrics{
{
desc: NewDesc(
memstatNamespace("alloc_bytes"),
"Number of bytes allocated and still in use.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.Alloc) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("alloc_bytes_total"),
"Total number of bytes allocated, even if freed.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.TotalAlloc) },
valType: CounterValue,
}, {
desc: NewDesc(
memstatNamespace("sys_bytes"),
"Number of bytes obtained by system. Sum of all system allocations.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.Sys) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("lookups_total"),
"Total number of pointer lookups.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.Lookups) },
valType: CounterValue,
}, {
desc: NewDesc(
memstatNamespace("mallocs_total"),
"Total number of mallocs.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.Mallocs) },
valType: CounterValue,
}, {
desc: NewDesc(
memstatNamespace("frees_total"),
"Total number of frees.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.Frees) },
valType: CounterValue,
}, {
desc: NewDesc(
memstatNamespace("heap_alloc_bytes"),
"Number of heap bytes allocated and still in use.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.HeapAlloc) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("heap_sys_bytes"),
"Number of heap bytes obtained from system.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.HeapSys) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("heap_idle_bytes"),
"Number of heap bytes waiting to be used.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.HeapIdle) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("heap_inuse_bytes"),
"Number of heap bytes that are in use.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.HeapInuse) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("heap_released_bytes_total"),
"Total number of heap bytes released to OS.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.HeapReleased) },
valType: CounterValue,
}, {
desc: NewDesc(
memstatNamespace("heap_objects"),
"Number of allocated objects.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.HeapObjects) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("stack_inuse_bytes"),
"Number of bytes in use by the stack allocator.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.StackInuse) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("stack_sys_bytes"),
"Number of bytes obtained from system for stack allocator.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.StackSys) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("mspan_inuse_bytes"),
"Number of bytes in use by mspan structures.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.MSpanInuse) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("mspan_sys_bytes"),
"Number of bytes used for mspan structures obtained from system.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.MSpanSys) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("mcache_inuse_bytes"),
"Number of bytes in use by mcache structures.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.MCacheInuse) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("mcache_sys_bytes"),
"Number of bytes used for mcache structures obtained from system.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.MCacheSys) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("buck_hash_sys_bytes"),
"Number of bytes used by the profiling bucket hash table.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.BuckHashSys) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("gc_sys_bytes"),
"Number of bytes used for garbage collection system metadata.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.GCSys) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("other_sys_bytes"),
"Number of bytes used for other system allocations.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.OtherSys) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("next_gc_bytes"),
"Number of heap bytes when next garbage collection will take place.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.NextGC) },
valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("last_gc_time_seconds"),
"Number of seconds since 1970 of last garbage collection.",
nil, nil,
),
// ms.LastGC is nanoseconds since 1970; note that 10 ^ 9 in Go is XOR, not exponentiation.
eval: func(ms *runtime.MemStats) float64 { return float64(ms.LastGC) / 1e9 },
valType: GaugeValue,
},
},
}
}
func memstatNamespace(s string) string {
return fmt.Sprintf("go_memstats_%s", s)
}
// Describe returns all descriptions of the collector.
func (c *goCollector) Describe(ch chan<- *Desc) {
ch <- c.goroutines.Desc()
ch <- c.gcDesc
for _, i := range c.metrics {
ch <- i.desc
}
}
// Collect returns the current state of all metrics of the collector.
@ -47,4 +247,17 @@ func (c *goCollector) Collect(ch chan<- Metric) {
}
quantiles[0.0] = stats.PauseQuantiles[0].Seconds()
ch <- MustNewConstSummary(c.gcDesc, uint64(stats.NumGC), float64(stats.PauseTotal.Seconds()), quantiles)
ms := &runtime.MemStats{}
runtime.ReadMemStats(ms)
for _, i := range c.metrics {
ch <- MustNewConstMetric(i.desc, i.valType, i.eval(ms))
}
}
// memStatsMetrics provide description, value, and value type for memstat metrics.
type memStatsMetrics []struct {
desc *Desc
eval func(*runtime.MemStats) float64
valType ValueType
}
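A hedged sketch of exposing these metrics over HTTP; it assumes this vintage of the library does not already register the Go collector by default, and the port is arbitrary.
```
package main

import (
    "net/http"

    "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_golang/prometheus"
)

func main() {
    // After this change, the Go collector exports the full go_memstats_*
    // family in addition to the goroutine and GC metrics.
    prometheus.MustRegister(prometheus.NewGoCollector())
    http.Handle("/metrics", prometheus.Handler())
    http.ListenAndServe(":8080", nil)
}
```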

View File

@ -50,6 +50,10 @@ func TestGoCollector(t *testing.T) {
t.Errorf("want 1 new goroutine, got %d", diff)
}
// GoCollector performs two sends per call.
// On line 27 we need to receive the second send
// to shut down cleanly.
<-ch
return
}
case <-time.After(1 * time.Second):

View File

@ -21,8 +21,6 @@ import (
"sync/atomic"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_golang/model"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
)
@ -49,6 +47,10 @@ type Histogram interface {
Observe(float64)
}
// bucketLabel is used for the label that defines the upper bound of a
// bucket of a histogram ("le" -> "less or equal").
const bucketLabel = "le"
var (
// DefBuckets are the default Histogram buckets. The default buckets are
// tailored to broadly measure the response time (in seconds) of a
@ -57,7 +59,7 @@ var (
DefBuckets = []float64{.005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10}
errBucketLabelNotAllowed = fmt.Errorf(
"%q is not allowed as label name in histograms", model.BucketLabel,
"%q is not allowed as label name in histograms", bucketLabel,
)
)
@ -171,12 +173,12 @@ func newHistogram(desc *Desc, opts HistogramOpts, labelValues ...string) Histogr
}
for _, n := range desc.variableLabels {
if n == model.BucketLabel {
if n == bucketLabel {
panic(errBucketLabelNotAllowed)
}
}
for _, lp := range desc.constLabelPairs {
if lp.GetName() == model.BucketLabel {
if lp.GetName() == bucketLabel {
panic(errBucketLabelNotAllowed)
}
}
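A small sketch of the constraint this hunk enforces: histograms reserve the "le" label for bucket bounds, so using it as a variable label panics at construction time. The metric name and observation are made up.
```
package main

import (
    "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_golang/prometheus"
)

func main() {
    hist := prometheus.NewHistogram(prometheus.HistogramOpts{
        Name:    "request_duration_seconds",
        Help:    "Request latency in seconds.",
        Buckets: prometheus.DefBuckets, // .005s up to 10s, as listed above
    })
    prometheus.MustRegister(hist)
    hist.Observe(0.042)

    // This would panic with errBucketLabelNotAllowed:
    // prometheus.NewHistogramVec(
    //     prometheus.HistogramOpts{Name: "bad", Help: "bad"},
    //     []string{"le"})
}
```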

View File

@ -124,6 +124,10 @@ func BenchmarkHistogramWrite8(b *testing.B) {
var testBuckets = []float64{-2, -1, -0.5, 0, 0.5, 1, 2, math.Inf(+1)}
func TestHistogramConcurrency(t *testing.T) {
if testing.Short() {
t.Skip("Skipping test in short mode.")
}
rand.Seed(42)
it := func(n uint32) bool {
@ -198,6 +202,10 @@ func TestHistogramConcurrency(t *testing.T) {
}
func TestHistogramVecConcurrency(t *testing.T) {
if testing.Short() {
t.Skip("Skipping test in short mode.")
}
rand.Seed(42)
objectives := make([]float64, 0, len(DefObjectives))

View File

@ -14,6 +14,9 @@
package prometheus
import (
"bufio"
"io"
"net"
"net/http"
"strconv"
"strings"
@ -141,7 +144,18 @@ func InstrumentHandlerFuncWithOpts(opts SummaryOpts, handlerFunc func(http.Respo
urlLen = len(r.URL.String())
}
go computeApproximateRequestSize(r, out, urlLen)
handlerFunc(delegate, r)
_, cn := w.(http.CloseNotifier)
_, fl := w.(http.Flusher)
_, hj := w.(http.Hijacker)
_, rf := w.(io.ReaderFrom)
var rw http.ResponseWriter
if cn && fl && hj && rf {
rw = &fancyResponseWriterDelegator{delegate}
} else {
rw = delegate
}
handlerFunc(rw, r)
elapsed := float64(time.Since(now)) / float64(time.Microsecond)
@ -178,7 +192,7 @@ type responseWriterDelegator struct {
handler, method string
status int
written int
written int64
wroteHeader bool
}
@ -193,7 +207,32 @@ func (r *responseWriterDelegator) Write(b []byte) (int, error) {
r.WriteHeader(http.StatusOK)
}
n, err := r.ResponseWriter.Write(b)
r.written += n
r.written += int64(n)
return n, err
}
type fancyResponseWriterDelegator struct {
*responseWriterDelegator
}
func (f *fancyResponseWriterDelegator) CloseNotify() <-chan bool {
return f.ResponseWriter.(http.CloseNotifier).CloseNotify()
}
func (f *fancyResponseWriterDelegator) Flush() {
f.ResponseWriter.(http.Flusher).Flush()
}
func (f *fancyResponseWriterDelegator) Hijack() (net.Conn, *bufio.ReadWriter, error) {
return f.ResponseWriter.(http.Hijacker).Hijack()
}
func (f *fancyResponseWriterDelegator) ReadFrom(r io.Reader) (int64, error) {
if !f.wroteHeader {
f.WriteHeader(http.StatusOK)
}
n, err := f.ResponseWriter.(io.ReaderFrom).ReadFrom(r)
f.written += n
return n, err
}
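The type assertions above are the usual pattern for ResponseWriter wrappers: probe every optional interface once, and only return the "fancy" delegator when the underlying writer supports all of them, so callers never see an interface the real writer cannot honor. A stripped-down sketch of the same idea, with hypothetical names:
```
package main

import "net/http"

// countingWriter wraps a ResponseWriter; by itself the wrapper hides any
// optional interfaces (http.Flusher etc.) the original implements.
type countingWriter struct {
    http.ResponseWriter
    written int64
}

func (c *countingWriter) Write(b []byte) (int, error) {
    n, err := c.ResponseWriter.Write(b)
    c.written += int64(n)
    return n, err
}

// flushingWriter re-exposes Flush for underlying writers that support it.
type flushingWriter struct{ *countingWriter }

func (f *flushingWriter) Flush() { f.ResponseWriter.(http.Flusher).Flush() }

func wrap(w http.ResponseWriter) http.ResponseWriter {
    cw := &countingWriter{ResponseWriter: w}
    if _, ok := w.(http.Flusher); ok {
        return &flushingWriter{cw}
    }
    return cw
}

func main() {
    http.ListenAndServe(":8080", http.HandlerFunc(
        func(w http.ResponseWriter, r *http.Request) {
            wrap(w).Write([]byte("ok"))
        }))
}
```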

View File

@ -19,6 +19,8 @@ import (
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
)
const separatorByte byte = 255
// A Metric models a single sample value with its meta data being exported to
// Prometheus. Implementers of Metric in this package include Gauge, Counter,
// Untyped, and Summary. Users can implement their own Metric types, but that

View File

@ -33,13 +33,9 @@ import (
"strings"
"sync"
"github.com/coreos/etcd/Godeps/_workspace/src/bitbucket.org/ww/goautoneg"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_golang/model"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_golang/text"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/common/expfmt"
)
var (
@ -170,6 +166,11 @@ func Unregister(c Collector) bool {
// checks are performed, but no further consistency checks (which would require
// knowledge of a metric descriptor).
//
// Sorting concerns: The caller is responsible for sorting the label pairs in
// each metric. However, the order of metrics will be sorted by the registry as
// it is required anyway after merging with the metric families collected
// conventionally.
//
// The function must be callable at any time and concurrently.
func SetMetricFamilyInjectionHook(hook func() []*dto.MetricFamily) {
defRegistry.metricFamilyInjectionHook = hook
@ -341,7 +342,7 @@ func (r *registry) Push(job, instance, pushURL, method string) error {
}
buf := r.getBuf()
defer r.giveBuf(buf)
if _, err := r.writePB(buf, text.WriteProtoDelimited); err != nil {
if err := r.writePB(expfmt.NewEncoder(buf, expfmt.FmtProtoDelim)); err != nil {
if r.panicOnCollectError {
panic(err)
}
@ -364,11 +365,11 @@ func (r *registry) Push(job, instance, pushURL, method string) error {
}
func (r *registry) ServeHTTP(w http.ResponseWriter, req *http.Request) {
enc, contentType := chooseEncoder(req)
contentType := expfmt.Negotiate(req.Header)
buf := r.getBuf()
defer r.giveBuf(buf)
writer, encoding := decorateWriter(req, buf)
if _, err := r.writePB(writer, enc); err != nil {
if err := r.writePB(expfmt.NewEncoder(writer, contentType)); err != nil {
if r.panicOnCollectError {
panic(err)
}
@ -379,7 +380,7 @@ func (r *registry) ServeHTTP(w http.ResponseWriter, req *http.Request) {
closer.Close()
}
header := w.Header()
header.Set(contentTypeHeader, contentType)
header.Set(contentTypeHeader, string(contentType))
header.Set(contentLengthHeader, fmt.Sprint(buf.Len()))
if encoding != "" {
header.Set(contentEncodingHeader, encoding)
@ -387,7 +388,7 @@ func (r *registry) ServeHTTP(w http.ResponseWriter, req *http.Request) {
w.Write(buf.Bytes())
}
func (r *registry) writePB(w io.Writer, writeEncoded encoder) (int, error) {
func (r *registry) writePB(encoder expfmt.Encoder) error {
var metricHashes map[uint64]struct{}
if r.collectChecksEnabled {
metricHashes = make(map[uint64]struct{})
@ -439,7 +440,7 @@ func (r *registry) writePB(w io.Writer, writeEncoded encoder) (int, error) {
// TODO: Consider different means of error reporting so
// that a single erroneous metric could be skipped
// instead of blowing up the whole collection.
return 0, fmt.Errorf("error collecting metric %v: %s", desc, err)
return fmt.Errorf("error collecting metric %v: %s", desc, err)
}
switch {
case metricFamily.Type != nil:
@ -455,11 +456,11 @@ func (r *registry) writePB(w io.Writer, writeEncoded encoder) (int, error) {
case dtoMetric.Histogram != nil:
metricFamily.Type = dto.MetricType_HISTOGRAM.Enum()
default:
return 0, fmt.Errorf("empty metric collected: %s", dtoMetric)
return fmt.Errorf("empty metric collected: %s", dtoMetric)
}
if r.collectChecksEnabled {
if err := r.checkConsistency(metricFamily, dtoMetric, desc, metricHashes); err != nil {
return 0, err
return err
}
}
metricFamily.Metric = append(metricFamily.Metric, dtoMetric)
@ -473,7 +474,7 @@ func (r *registry) writePB(w io.Writer, writeEncoded encoder) (int, error) {
if r.collectChecksEnabled {
for _, m := range mf.Metric {
if err := r.checkConsistency(mf, m, nil, metricHashes); err != nil {
return 0, err
return err
}
}
}
@ -482,7 +483,7 @@ func (r *registry) writePB(w io.Writer, writeEncoded encoder) (int, error) {
for _, m := range mf.Metric {
if r.collectChecksEnabled {
if err := r.checkConsistency(existingMF, m, nil, metricHashes); err != nil {
return 0, err
return err
}
}
existingMF.Metric = append(existingMF.Metric, m)
@ -503,15 +504,12 @@ func (r *registry) writePB(w io.Writer, writeEncoded encoder) (int, error) {
}
sort.Strings(names)
var written int
for _, name := range names {
w, err := writeEncoded(w, metricFamiliesByName[name])
written += w
if err != nil {
return written, err
if err := encoder.Encode(metricFamiliesByName[name]); err != nil {
return err
}
}
return written, nil
return nil
}
func (r *registry) checkConsistency(metricFamily *dto.MetricFamily, dtoMetric *dto.Metric, desc *Desc, metricHashes map[uint64]struct{}) error {
@ -520,10 +518,11 @@ func (r *registry) checkConsistency(metricFamily *dto.MetricFamily, dtoMetric *d
if metricFamily.GetType() == dto.MetricType_GAUGE && dtoMetric.Gauge == nil ||
metricFamily.GetType() == dto.MetricType_COUNTER && dtoMetric.Counter == nil ||
metricFamily.GetType() == dto.MetricType_SUMMARY && dtoMetric.Summary == nil ||
metricFamily.GetType() == dto.MetricType_HISTOGRAM && dtoMetric.Histogram == nil ||
metricFamily.GetType() == dto.MetricType_UNTYPED && dtoMetric.Untyped == nil {
return fmt.Errorf(
"collected metric %q is not a %s",
dtoMetric, metricFamily.Type,
"collected metric %s %s is not a %s",
metricFamily.GetName(), dtoMetric, metricFamily.GetType(),
)
}
@ -531,19 +530,24 @@ func (r *registry) checkConsistency(metricFamily *dto.MetricFamily, dtoMetric *d
h := fnv.New64a()
var buf bytes.Buffer
buf.WriteString(metricFamily.GetName())
buf.WriteByte(model.SeparatorByte)
buf.WriteByte(separatorByte)
h.Write(buf.Bytes())
// Make sure label pairs are sorted. We depend on it for the consistency
// check. Label pairs must be sorted by contract. But the point of this
// method is to check for contract violations. So we better do the sort
// now.
sort.Sort(LabelPairSorter(dtoMetric.Label))
for _, lp := range dtoMetric.Label {
buf.Reset()
buf.WriteString(lp.GetValue())
buf.WriteByte(model.SeparatorByte)
buf.WriteByte(separatorByte)
h.Write(buf.Bytes())
}
metricHash := h.Sum64()
if _, exists := metricHashes[metricHash]; exists {
return fmt.Errorf(
"collected metric %q was collected before with the same name and label values",
dtoMetric,
"collected metric %s %s was collected before with the same name and label values",
metricFamily.GetName(), dtoMetric,
)
}
metricHashes[metricHash] = struct{}{}
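The 0xFF separator used in the hashing above is not arbitrary: that byte can never occur in valid UTF-8, so joining the name and label values with it keeps distinct tuples from concatenating to the same byte stream. A small sketch of the collision it prevents, assuming hash/fnv is imported:

```
// Without a separator, the concatenations "a"+"bc" and "ab"+"c" are
// identical; with 0xFF in between they hash differently.
h1 := fnv.New64a()
h1.Write([]byte("a\xffbc"))
h2 := fnv.New64a()
h2.Write([]byte("ab\xffc"))
// h1.Sum64() != h2.Sum64()
```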
@ -555,14 +559,14 @@ func (r *registry) checkConsistency(metricFamily *dto.MetricFamily, dtoMetric *d
// Desc consistency with metric family.
if metricFamily.GetName() != desc.fqName {
return fmt.Errorf(
"collected metric %q has name %q but should have %q",
dtoMetric, metricFamily.GetName(), desc.fqName,
"collected metric %s %s has name %q but should have %q",
metricFamily.GetName(), dtoMetric, metricFamily.GetName(), desc.fqName,
)
}
if metricFamily.GetHelp() != desc.help {
return fmt.Errorf(
"collected metric %q has help %q but should have %q",
dtoMetric, metricFamily.GetHelp(), desc.help,
"collected metric %s %s has help %q but should have %q",
metricFamily.GetName(), dtoMetric, metricFamily.GetHelp(), desc.help,
)
}
@ -576,8 +580,8 @@ func (r *registry) checkConsistency(metricFamily *dto.MetricFamily, dtoMetric *d
}
if len(lpsFromDesc) != len(dtoMetric.Label) {
return fmt.Errorf(
"labels in collected metric %q are inconsistent with descriptor %s",
dtoMetric, desc,
"labels in collected metric %s %s are inconsistent with descriptor %s",
metricFamily.GetName(), dtoMetric, desc,
)
}
sort.Sort(LabelPairSorter(lpsFromDesc))
@ -586,8 +590,8 @@ func (r *registry) checkConsistency(metricFamily *dto.MetricFamily, dtoMetric *d
if lpFromDesc.GetName() != lpFromMetric.GetName() ||
lpFromDesc.Value != nil && lpFromDesc.GetValue() != lpFromMetric.GetValue() {
return fmt.Errorf(
"labels in collected metric %q are inconsistent with descriptor %s",
dtoMetric, desc,
"labels in collected metric %s %s are inconsistent with descriptor %s",
metricFamily.GetName(), dtoMetric, desc,
)
}
}
@ -597,7 +601,10 @@ func (r *registry) checkConsistency(metricFamily *dto.MetricFamily, dtoMetric *d
// Is the desc registered?
if _, exist := r.descIDs[desc.id]; !exist {
return fmt.Errorf("collected metric %q with unregistered descriptor %s", dtoMetric, desc)
return fmt.Errorf(
"collected metric %s %s with unregistered descriptor %s",
metricFamily.GetName(), dtoMetric, desc,
)
}
return nil
@ -672,34 +679,6 @@ func newDefaultRegistry() *registry {
return r
}
func chooseEncoder(req *http.Request) (encoder, string) {
accepts := goautoneg.ParseAccept(req.Header.Get(acceptHeader))
for _, accept := range accepts {
switch {
case accept.Type == "application" &&
accept.SubType == "vnd.google.protobuf" &&
accept.Params["proto"] == "io.prometheus.client.MetricFamily":
switch accept.Params["encoding"] {
case "delimited":
return text.WriteProtoDelimited, DelimitedTelemetryContentType
case "text":
return text.WriteProtoText, ProtoTextTelemetryContentType
case "compact-text":
return text.WriteProtoCompactText, ProtoCompactTextTelemetryContentType
default:
continue
}
case accept.Type == "text" &&
accept.SubType == "plain" &&
(accept.Params["version"] == "0.0.4" || accept.Params["version"] == ""):
return text.MetricFamilyToText, TextTelemetryContentType
default:
continue
}
}
return text.MetricFamilyToText, TextTelemetryContentType
}
// decorateWriter wraps a writer to handle gzip compression if requested. It
// returns the decorated writer and the appropriate "Content-Encoding" header
// (which is empty if no compression is enabled).
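The body of decorateWriter is outside this hunk; as a hedged sketch, such a decoration typically amounts to the following (the real implementation may parse Accept-Encoding more carefully):

```
// Hypothetical sketch, assuming compress/gzip, net/http, io, strings.
func decorate(r *http.Request, w io.Writer) (io.Writer, string) {
	if strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
		// The caller must Close() the gzip writer to flush it, which is
		// what the closer.Close() call in ServeHTTP above suggests.
		return gzip.NewWriter(w), "gzip"
	}
	return w, ""
}
```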

View File

@ -68,14 +68,14 @@ func testHandler(t testing.TB) {
Metric: []*dto.Metric{
{
Label: []*dto.LabelPair{
{
Name: proto.String("externallabelname"),
Value: proto.String("externalval1"),
},
{
Name: proto.String("externalconstname"),
Value: proto.String("externalconstvalue"),
},
{
Name: proto.String("externallabelname"),
Value: proto.String("externalval1"),
},
},
Counter: &dto.Counter{
Value: proto.Float64(1),
@ -100,27 +100,27 @@ func testHandler(t testing.TB) {
externalMetricFamilyAsBytes := externalBuf.Bytes()
externalMetricFamilyAsText := []byte(`# HELP externalname externaldocstring
# TYPE externalname counter
externalname{externallabelname="externalval1",externalconstname="externalconstvalue"} 1
externalname{externalconstname="externalconstvalue",externallabelname="externalval1"} 1
`)
externalMetricFamilyAsProtoText := []byte(`name: "externalname"
help: "externaldocstring"
type: COUNTER
metric: <
label: <
name: "externallabelname"
value: "externalval1"
>
label: <
name: "externalconstname"
value: "externalconstvalue"
>
label: <
name: "externallabelname"
value: "externalval1"
>
counter: <
value: 1
>
>
`)
externalMetricFamilyAsProtoCompactText := []byte(`name:"externalname" help:"externaldocstring" type:COUNTER metric:<label:<name:"externallabelname" value:"externalval1" > label:<name:"externalconstname" value:"externalconstvalue" > counter:<value:1 > >
externalMetricFamilyAsProtoCompactText := []byte(`name:"externalname" help:"externaldocstring" type:COUNTER metric:<label:<name:"externalconstname" value:"externalconstvalue" > label:<name:"externallabelname" value:"externalval1" > counter:<value:1 > >
`)
expectedMetricFamily := &dto.MetricFamily{

View File

@ -23,12 +23,13 @@ import (
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/beorn7/perks/quantile"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_golang/model"
)
// quantileLabel is used for the label that defines the quantile in a
// summary.
const quantileLabel = "quantile"
// A Summary captures individual observations from an event or sample stream and
// summarizes them in a manner similar to traditional summary statistics: 1. sum
// of observations, 2. observation count, 3. rank estimations.
@ -57,7 +58,7 @@ var (
DefObjectives = map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001}
errQuantileLabelNotAllowed = fmt.Errorf(
"%q is not allowed as label name in summaries", model.QuantileLabel,
"%q is not allowed as label name in summaries", quantileLabel,
)
)
@ -112,7 +113,9 @@ type SummaryOpts struct {
ConstLabels Labels
// Objectives defines the quantile rank estimates with their respective
// absolute error. The default value is DefObjectives.
// absolute error. If Objectives[q] = e, then the value reported
// for q will be the φ-quantile value for some φ between q-e and q+e.
// The default value is DefObjectives.
Objectives map[float64]float64
// MaxAge defines the duration for which an observation stays relevant
@ -170,12 +173,12 @@ func newSummary(desc *Desc, opts SummaryOpts, labelValues ...string) Summary {
}
for _, n := range desc.variableLabels {
if n == model.QuantileLabel {
if n == quantileLabel {
panic(errQuantileLabelNotAllowed)
}
}
for _, lp := range desc.constLabelPairs {
if lp.GetName() == model.QuantileLabel {
if lp.GetName() == quantileLabel {
panic(errQuantileLabelNotAllowed)
}
}
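Taken together, the two changes above pin down the Summary contract: Objectives[q] = e bounds the rank error of the reported q-quantile, and "quantile" itself is reserved as a label name. A hedged sketch, with a hypothetical metric name:

```
s := prometheus.NewSummary(prometheus.SummaryOpts{
	Name:       "rpc_duration_microseconds",
	Help:       "RPC latency.",
	Objectives: prometheus.DefObjectives, // {0.5: 0.05, 0.9: 0.01, 0.99: 0.001}
})
s.Observe(42)
// With Objectives[0.99] = 0.001, the reported 0.99-quantile is the
// φ-quantile for some φ in [0.989, 0.991]. Declaring a variable or
// const label named "quantile" panics with errQuantileLabelNotAllowed.
```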

View File

@ -21,7 +21,7 @@ import "hash/fnv"
// An Untyped metric works the same as a Gauge. The only difference is that
// no type information is implied.
//
// To create Gauge instances, use NewUntyped.
// To create Untyped instances, use NewUntyped.
type Untyped interface {
Metric
Collector
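A minimal sketch of the corrected constructor doc in use, with a hypothetical metric name:

```
u := prometheus.NewUntyped(prometheus.UntypedOpts{
	Name: "mirrored_value",
	Help: "A value mirrored from a system of unknown semantics.",
})
u.Set(3.14) // behaves exactly like a Gauge, but exports no type info
```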

View File

@ -20,9 +20,8 @@ import (
"sort"
"sync/atomic"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
)
// ValueType is an enumeration of metric types that represent a simple value.

View File

@ -58,6 +58,11 @@ func (m *MetricVec) Collect(ch chan<- Metric) {
// GetMetricWithLabelValues returns the Metric for the given slice of label
// values (same order as the VariableLabels in Desc). If that combination of
// label values is accessed for the first time, a new Metric is created.
//
// It is possible to call this method without using the returned Metric to only
// create the new Metric but leave it at its start value (e.g. a Summary or
// Histogram without any observations). See also the SummaryVec example.
//
// Keeping the Metric for later use is possible (and should be considered if
// performance is critical), but keep in mind that Reset, DeleteLabelValues and
// Delete can be used to delete the Metric from the MetricVec. In that case, the
@ -87,8 +92,9 @@ func (m *MetricVec) GetMetricWithLabelValues(lvs ...string) (Metric, error) {
// GetMetricWith returns the Metric for the given Labels map (the label names
// must match those of the VariableLabels in Desc). If that label map is
// accessed for the first time, a new Metric is created. Implications of keeping
// the Metric are the same as for GetMetricWithLabelValues.
// accessed for the first time, a new Metric is created. Implications of
// creating a Metric without using it and keeping the Metric for later use are
// the same as for GetMetricWithLabelValues.
//
// An error is returned if the number and names of the Labels are inconsistent
// with those of the VariableLabels in Desc.
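A sketch of the "create without using" idiom the new doc comments describe, so a child Summary is exported at its start value before any observation arrives; the label value is hypothetical:

```
lat := prometheus.NewSummaryVec(
	prometheus.SummaryOpts{Name: "http_latency_us", Help: "Handler latency."},
	[]string{"handler"},
)
if _, err := lat.GetMetricWithLabelValues("index"); err != nil {
	panic(err) // e.g. wrong number of label values
}
// Equivalent map form:
if _, err := lat.GetMetricWith(prometheus.Labels{"handler": "index"}); err != nil {
	panic(err)
}
```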

View File

@ -1,43 +0,0 @@
// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package text
import (
"fmt"
"io"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/matttproud/golang_protobuf_extensions/pbutil"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
)
// WriteProtoDelimited writes the MetricFamily to the writer in delimited
// protobuf format and returns the number of bytes written and any error
// encountered.
func WriteProtoDelimited(w io.Writer, p *dto.MetricFamily) (int, error) {
return pbutil.WriteDelimited(w, p)
}
// WriteProtoText writes the MetricFamily to the writer in text format and
// returns the number of bytes written and any error encountered.
func WriteProtoText(w io.Writer, p *dto.MetricFamily) (int, error) {
return fmt.Fprintf(w, "%s\n", proto.MarshalTextString(p))
}
// WriteProtoCompactText writes the MetricFamily to the writer in compact text
// format and returns the number of bytes written and any error encountered.
func WriteProtoCompactText(w io.Writer, p *dto.MetricFamily) (int, error) {
return fmt.Fprintf(w, "%s\n", p)
}

View File

@ -11,19 +11,21 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package text
package expfmt
import (
"bytes"
"compress/gzip"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
"io"
"io/ioutil"
"testing"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/matttproud/golang_protobuf_extensions/pbutil"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
)
var parser TextParser
// Benchmarks to show how much penalty text format parsing actually inflicts.
//
// Example results on Linux 3.13.0, Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz, go1.4.

View File

@ -0,0 +1,410 @@
// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package expfmt
import (
"fmt"
"io"
"math"
"mime"
"net/http"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/matttproud/golang_protobuf_extensions/pbutil"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/common/model"
)
// Decoder types decode an input stream into metric families.
type Decoder interface {
Decode(*dto.MetricFamily) error
}
type DecodeOptions struct {
// Timestamp is added to each value from the stream that has no explicit timestamp set.
Timestamp model.Time
}
// ResponseFormat extracts the correct format from an HTTP response header.
// If no matching format can be found, FmtUnknown is returned.
func ResponseFormat(h http.Header) Format {
ct := h.Get(hdrContentType)
mediatype, params, err := mime.ParseMediaType(ct)
if err != nil {
return FmtUnknown
}
const (
textType = "text/plain"
jsonType = "application/json"
)
switch mediatype {
case ProtoType:
if p, ok := params["proto"]; ok && p != ProtoProtocol {
return FmtUnknown
}
if e, ok := params["encoding"]; ok && e != "delimited" {
return FmtUnknown
}
return FmtProtoDelim
case textType:
if v, ok := params["version"]; ok && v != TextVersion {
return FmtUnknown
}
return FmtText
case jsonType:
var prometheusAPIVersion string
if params["schema"] == "prometheus/telemetry" && params["version"] != "" {
prometheusAPIVersion = params["version"]
} else {
prometheusAPIVersion = h.Get("X-Prometheus-API-Version")
}
switch prometheusAPIVersion {
case "0.0.2", "":
return fmtJSON2
default:
return FmtUnknown
}
}
return FmtUnknown
}
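A quick illustration of the mapping, mirroring the scenarios in the header test further down; the header value is hypothetical:

```
h := http.Header{}
h.Set("Content-Type", "text/plain; version=0.0.4")
if expfmt.ResponseFormat(h) == expfmt.FmtText {
	// proceed with the text parser
}
```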
// NewDecoder returns a new decoder based on the given input format.
// If the input format does not imply otherwise, a text format decoder is returned.
func NewDecoder(r io.Reader, format Format) Decoder {
switch format {
case FmtProtoDelim:
return &protoDecoder{r: r}
case fmtJSON2:
return newJSON2Decoder(r)
}
return &textDecoder{r: r}
}
// protoDecoder implements the Decoder interface for protocol buffers.
type protoDecoder struct {
r io.Reader
}
// Decode implements the Decoder interface.
func (d *protoDecoder) Decode(v *dto.MetricFamily) error {
_, err := pbutil.ReadDelimited(d.r, v)
return err
}
// textDecoder implements the Decoder interface for the text protocol.
type textDecoder struct {
r io.Reader
p TextParser
fams []*dto.MetricFamily
}
// Decode implements the Decoder interface.
func (d *textDecoder) Decode(v *dto.MetricFamily) error {
// TODO(fabxc): Wrap this as a line reader to make streaming safer.
if len(d.fams) == 0 {
// No cached metric families, read everything and parse metrics.
fams, err := d.p.TextToMetricFamilies(d.r)
if err != nil {
return err
}
if len(fams) == 0 {
return io.EOF
}
d.fams = make([]*dto.MetricFamily, 0, len(fams))
for _, f := range fams {
d.fams = append(d.fams, f)
}
}
*v = *d.fams[0]
d.fams = d.fams[1:]
return nil
}
type SampleDecoder struct {
Dec Decoder
Opts *DecodeOptions
f dto.MetricFamily
}
func (sd *SampleDecoder) Decode(s *model.Vector) error {
if err := sd.Dec.Decode(&sd.f); err != nil {
return err
}
*s = extractSamples(&sd.f, sd.Opts)
return nil
}
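A hedged sketch of the intended decoding loop (the same pattern appears in TestTextDecoder below), turning a text-format stream into samples and stamping values that carry no explicit timestamp; assumes io, strings, expfmt, and model are imported:

```
func decodeText(input string) (model.Vector, error) {
	dec := &expfmt.SampleDecoder{
		Dec:  expfmt.NewDecoder(strings.NewReader(input), expfmt.FmtText),
		Opts: &expfmt.DecodeOptions{Timestamp: model.Now()},
	}
	var all model.Vector
	for {
		var batch model.Vector
		err := dec.Decode(&batch)
		if err == io.EOF {
			return all, nil
		}
		if err != nil {
			return nil, err
		}
		all = append(all, batch...)
	}
}
```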
// ExtractSamples builds a slice of samples from the provided metric families.
func ExtractSamples(o *DecodeOptions, fams ...*dto.MetricFamily) model.Vector {
var all model.Vector
for _, f := range fams {
all = append(all, extractSamples(f, o)...)
}
return all
}
func extractSamples(f *dto.MetricFamily, o *DecodeOptions) model.Vector {
switch f.GetType() {
case dto.MetricType_COUNTER:
return extractCounter(o, f)
case dto.MetricType_GAUGE:
return extractGauge(o, f)
case dto.MetricType_SUMMARY:
return extractSummary(o, f)
case dto.MetricType_UNTYPED:
return extractUntyped(o, f)
case dto.MetricType_HISTOGRAM:
return extractHistogram(o, f)
}
panic("expfmt.extractSamples: unknown metric family type")
}
func extractCounter(o *DecodeOptions, f *dto.MetricFamily) model.Vector {
samples := make(model.Vector, 0, len(f.Metric))
for _, m := range f.Metric {
if m.Counter == nil {
continue
}
lset := make(model.LabelSet, len(m.Label)+1)
for _, p := range m.Label {
lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue())
}
lset[model.MetricNameLabel] = model.LabelValue(f.GetName())
smpl := &model.Sample{
Metric: model.Metric(lset),
Value: model.SampleValue(m.Counter.GetValue()),
}
if m.TimestampMs != nil {
smpl.Timestamp = model.TimeFromUnixNano(*m.TimestampMs * 1000000)
} else {
smpl.Timestamp = o.Timestamp
}
samples = append(samples, smpl)
}
return samples
}
func extractGauge(o *DecodeOptions, f *dto.MetricFamily) model.Vector {
samples := make(model.Vector, 0, len(f.Metric))
for _, m := range f.Metric {
if m.Gauge == nil {
continue
}
lset := make(model.LabelSet, len(m.Label)+1)
for _, p := range m.Label {
lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue())
}
lset[model.MetricNameLabel] = model.LabelValue(f.GetName())
smpl := &model.Sample{
Metric: model.Metric(lset),
Value: model.SampleValue(m.Gauge.GetValue()),
}
if m.TimestampMs != nil {
smpl.Timestamp = model.TimeFromUnixNano(*m.TimestampMs * 1000000)
} else {
smpl.Timestamp = o.Timestamp
}
samples = append(samples, smpl)
}
return samples
}
func extractUntyped(o *DecodeOptions, f *dto.MetricFamily) model.Vector {
samples := make(model.Vector, 0, len(f.Metric))
for _, m := range f.Metric {
if m.Untyped == nil {
continue
}
lset := make(model.LabelSet, len(m.Label)+1)
for _, p := range m.Label {
lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue())
}
lset[model.MetricNameLabel] = model.LabelValue(f.GetName())
smpl := &model.Sample{
Metric: model.Metric(lset),
Value: model.SampleValue(m.Untyped.GetValue()),
}
if m.TimestampMs != nil {
smpl.Timestamp = model.TimeFromUnixNano(*m.TimestampMs * 1000000)
} else {
smpl.Timestamp = o.Timestamp
}
samples = append(samples, smpl)
}
return samples
}
func extractSummary(o *DecodeOptions, f *dto.MetricFamily) model.Vector {
samples := make(model.Vector, 0, len(f.Metric))
for _, m := range f.Metric {
if m.Summary == nil {
continue
}
timestamp := o.Timestamp
if m.TimestampMs != nil {
timestamp = model.TimeFromUnixNano(*m.TimestampMs * 1000000)
}
for _, q := range m.Summary.Quantile {
lset := make(model.LabelSet, len(m.Label)+2)
for _, p := range m.Label {
lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue())
}
// BUG(matt): Update other names to "quantile".
lset[model.LabelName(model.QuantileLabel)] = model.LabelValue(fmt.Sprint(q.GetQuantile()))
lset[model.MetricNameLabel] = model.LabelValue(f.GetName())
samples = append(samples, &model.Sample{
Metric: model.Metric(lset),
Value: model.SampleValue(q.GetValue()),
Timestamp: timestamp,
})
}
lset := make(model.LabelSet, len(m.Label)+1)
for _, p := range m.Label {
lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue())
}
lset[model.MetricNameLabel] = model.LabelValue(f.GetName() + "_sum")
samples = append(samples, &model.Sample{
Metric: model.Metric(lset),
Value: model.SampleValue(m.Summary.GetSampleSum()),
Timestamp: timestamp,
})
lset = make(model.LabelSet, len(m.Label)+1)
for _, p := range m.Label {
lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue())
}
lset[model.MetricNameLabel] = model.LabelValue(f.GetName() + "_count")
samples = append(samples, &model.Sample{
Metric: model.Metric(lset),
Value: model.SampleValue(m.Summary.GetSampleCount()),
Timestamp: timestamp,
})
}
return samples
}
func extractHistogram(o *DecodeOptions, f *dto.MetricFamily) model.Vector {
samples := make(model.Vector, 0, len(f.Metric))
for _, m := range f.Metric {
if m.Histogram == nil {
continue
}
timestamp := o.Timestamp
if m.TimestampMs != nil {
timestamp = model.TimeFromUnixNano(*m.TimestampMs * 1000000)
}
infSeen := false
for _, q := range m.Histogram.Bucket {
lset := make(model.LabelSet, len(m.Label)+2)
for _, p := range m.Label {
lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue())
}
lset[model.LabelName(model.BucketLabel)] = model.LabelValue(fmt.Sprint(q.GetUpperBound()))
lset[model.MetricNameLabel] = model.LabelValue(f.GetName() + "_bucket")
if math.IsInf(q.GetUpperBound(), +1) {
infSeen = true
}
samples = append(samples, &model.Sample{
Metric: model.Metric(lset),
Value: model.SampleValue(q.GetCumulativeCount()),
Timestamp: timestamp,
})
}
lset := make(model.LabelSet, len(m.Label)+1)
for _, p := range m.Label {
lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue())
}
lset[model.MetricNameLabel] = model.LabelValue(f.GetName() + "_sum")
samples = append(samples, &model.Sample{
Metric: model.Metric(lset),
Value: model.SampleValue(m.Histogram.GetSampleSum()),
Timestamp: timestamp,
})
lset = make(model.LabelSet, len(m.Label)+1)
for _, p := range m.Label {
lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue())
}
lset[model.MetricNameLabel] = model.LabelValue(f.GetName() + "_count")
count := &model.Sample{
Metric: model.Metric(lset),
Value: model.SampleValue(m.Histogram.GetSampleCount()),
Timestamp: timestamp,
}
samples = append(samples, count)
if !infSeen {
// Append an infinity bucket sample.
lset := make(model.LabelSet, len(m.Label)+2)
for _, p := range m.Label {
lset[model.LabelName(p.GetName())] = model.LabelValue(p.GetValue())
}
lset[model.LabelName(model.BucketLabel)] = model.LabelValue("+Inf")
lset[model.MetricNameLabel] = model.LabelValue(f.GetName() + "_bucket")
samples = append(samples, &model.Sample{
Metric: model.Metric(lset),
Value: count.Value,
Timestamp: timestamp,
})
}
}
return samples
}
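Note the infSeen handling above: the exposition format requires a le="+Inf" bucket whose cumulative count equals the histogram's _count, so when a scraped family omits it the extractor synthesizes one from the sample count. The two lines always agree, as in the histogram fixture later in this diff:

```
request_duration_microseconds_bucket{le="+Inf"} 2693
request_duration_microseconds_count 2693
```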

View File

@ -0,0 +1,356 @@
// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package expfmt
import (
"io"
"net/http"
"reflect"
"sort"
"strings"
"testing"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/common/model"
)
func TestTextDecoder(t *testing.T) {
var (
ts = model.Now()
in = `
# Only a quite simple scenario with two metric families.
# More complicated tests of the parser itself can be found in the text package.
# TYPE mf2 counter
mf2 3
mf1{label="value1"} -3.14 123456
mf1{label="value2"} 42
mf2 4
`
out = model.Vector{
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "mf1",
"label": "value1",
},
Value: -3.14,
Timestamp: 123456,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "mf1",
"label": "value2",
},
Value: 42,
Timestamp: ts,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "mf2",
},
Value: 3,
Timestamp: ts,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "mf2",
},
Value: 4,
Timestamp: ts,
},
}
)
dec := &SampleDecoder{
Dec: &textDecoder{r: strings.NewReader(in)},
Opts: &DecodeOptions{
Timestamp: ts,
},
}
var all model.Vector
for {
var smpls model.Vector
err := dec.Decode(&smpls)
if err == io.EOF {
break
}
if err != nil {
t.Fatal(err)
}
all = append(all, smpls...)
}
sort.Sort(all)
sort.Sort(out)
if !reflect.DeepEqual(all, out) {
t.Fatalf("output does not match")
}
}
func TestProtoDecoder(t *testing.T) {
var testTime = model.Now()
scenarios := []struct {
in string
expected model.Vector
}{
{
in: "",
},
{
in: "\x8f\x01\n\rrequest_count\x12\x12Number of requests\x18\x00\"0\n#\n\x0fsome_label_name\x12\x10some_label_value\x1a\t\t\x00\x00\x00\x00\x00\x00E\xc0\"6\n)\n\x12another_label_name\x12\x13another_label_value\x1a\t\t\x00\x00\x00\x00\x00\x00U@",
expected: model.Vector{
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count",
"some_label_name": "some_label_value",
},
Value: -42,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count",
"another_label_name": "another_label_value",
},
Value: 84,
Timestamp: testTime,
},
},
},
{
in: "\xb9\x01\n\rrequest_count\x12\x12Number of requests\x18\x02\"O\n#\n\x0fsome_label_name\x12\x10some_label_value\"(\x1a\x12\t\xaeG\xe1z\x14\xae\xef?\x11\x00\x00\x00\x00\x00\x00E\xc0\x1a\x12\t+\x87\x16\xd9\xce\xf7\xef?\x11\x00\x00\x00\x00\x00\x00U\xc0\"A\n)\n\x12another_label_name\x12\x13another_label_value\"\x14\x1a\x12\t\x00\x00\x00\x00\x00\x00\xe0?\x11\x00\x00\x00\x00\x00\x00$@",
expected: model.Vector{
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count_count",
"some_label_name": "some_label_value",
},
Value: 0,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count_sum",
"some_label_name": "some_label_value",
},
Value: 0,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count",
"some_label_name": "some_label_value",
"quantile": "0.99",
},
Value: -42,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count",
"some_label_name": "some_label_value",
"quantile": "0.999",
},
Value: -84,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count_count",
"another_label_name": "another_label_value",
},
Value: 0,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count_sum",
"another_label_name": "another_label_value",
},
Value: 0,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count",
"another_label_name": "another_label_value",
"quantile": "0.5",
},
Value: 10,
Timestamp: testTime,
},
},
},
{
in: "\x8d\x01\n\x1drequest_duration_microseconds\x12\x15The response latency.\x18\x04\"S:Q\b\x85\x15\x11\xcd\xcc\xccL\x8f\xcb:A\x1a\v\b{\x11\x00\x00\x00\x00\x00\x00Y@\x1a\f\b\x9c\x03\x11\x00\x00\x00\x00\x00\x00^@\x1a\f\b\xd0\x04\x11\x00\x00\x00\x00\x00\x00b@\x1a\f\b\xf4\v\x11\x9a\x99\x99\x99\x99\x99e@\x1a\f\b\x85\x15\x11\x00\x00\x00\x00\x00\x00\xf0\u007f",
expected: model.Vector{
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_duration_microseconds_bucket",
"le": "100",
},
Value: 123,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_duration_microseconds_bucket",
"le": "120",
},
Value: 412,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_duration_microseconds_bucket",
"le": "144",
},
Value: 592,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_duration_microseconds_bucket",
"le": "172.8",
},
Value: 1524,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_duration_microseconds_bucket",
"le": "+Inf",
},
Value: 2693,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_duration_microseconds_sum",
},
Value: 1756047.3,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_duration_microseconds_count",
},
Value: 2693,
Timestamp: testTime,
},
},
},
{
// The metric type is unset in this protobuf, which needs to be handled
// correctly by the decoder.
in: "\x1c\n\rrequest_count\"\v\x1a\t\t\x00\x00\x00\x00\x00\x00\xf0?",
expected: model.Vector{
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count",
},
Value: 1,
Timestamp: testTime,
},
},
},
}
for i, scenario := range scenarios {
dec := &SampleDecoder{
Dec: &protoDecoder{r: strings.NewReader(scenario.in)},
Opts: &DecodeOptions{
Timestamp: testTime,
},
}
var all model.Vector
for {
var smpls model.Vector
err := dec.Decode(&smpls)
if err == io.EOF {
break
}
if err != nil {
t.Fatal(err)
}
all = append(all, smpls...)
}
sort.Sort(all)
sort.Sort(scenario.expected)
if !reflect.DeepEqual(all, scenario.expected) {
t.Fatalf("%d. output does not match, want: %#v, got %#v", i, scenario.expected, all)
}
}
}
func testDiscriminatorHTTPHeader(t testing.TB) {
var scenarios = []struct {
input map[string]string
output Format
err error
}{
{
input: map[string]string{"Content-Type": `application/vnd.google.protobuf; proto="io.prometheus.client.MetricFamily"; encoding="delimited"`},
output: FmtProtoDelim,
},
{
input: map[string]string{"Content-Type": `application/vnd.google.protobuf; proto="illegal"; encoding="delimited"`},
output: FmtUnknown,
},
{
input: map[string]string{"Content-Type": `application/vnd.google.protobuf; proto="io.prometheus.client.MetricFamily"; encoding="illegal"`},
output: FmtUnknown,
},
{
input: map[string]string{"Content-Type": `text/plain; version=0.0.4`},
output: FmtText,
},
{
input: map[string]string{"Content-Type": `text/plain`},
output: FmtText,
},
{
input: map[string]string{"Content-Type": `text/plain; version=0.0.3`},
output: FmtUnknown,
},
}
for i, scenario := range scenarios {
var header http.Header
if len(scenario.input) > 0 {
header = http.Header{}
}
for key, value := range scenario.input {
header.Add(key, value)
}
actual := ResponseFormat(header)
if scenario.output != actual {
t.Errorf("%d. expected %s, got %s", i, scenario.output, actual)
}
}
}
func TestDiscriminatorHTTPHeader(t *testing.T) {
testDiscriminatorHTTPHeader(t)
}
func BenchmarkDiscriminatorHTTPHeader(b *testing.B) {
for i := 0; i < b.N; i++ {
testDiscriminatorHTTPHeader(b)
}
}

View File

@ -0,0 +1,87 @@
// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package expfmt
import (
"fmt"
"io"
"net/http"
"github.com/coreos/etcd/Godeps/_workspace/src/bitbucket.org/ww/goautoneg"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/matttproud/golang_protobuf_extensions/pbutil"
dto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/prometheus/client_model/go"
)
// Encoder types encode metric families into an underlying wire protocol.
type Encoder interface {
Encode(*dto.MetricFamily) error
}
type encoder func(*dto.MetricFamily) error
func (e encoder) Encode(v *dto.MetricFamily) error {
return e(v)
}
// Negotiate returns the Content-Type based on the given Accept header.
// If no appropriate accepted type is found, FmtText is returned.
func Negotiate(h http.Header) Format {
for _, ac := range goautoneg.ParseAccept(h.Get(hdrAccept)) {
// Check for protocol buffer
if ac.Type+"/"+ac.SubType == ProtoType && ac.Params["proto"] == ProtoProtocol {
switch ac.Params["encoding"] {
case "delimited":
return FmtProtoDelim
case "text":
return FmtProtoText
case "compact-text":
return FmtProtoCompact
}
}
// Check for text format.
ver := ac.Params["version"]
if ac.Type == "text" && ac.SubType == "plain" && (ver == TextVersion || ver == "") {
return FmtText
}
}
return FmtText
}
// NewEncoder returns a new encoder based on content type negotiation.
func NewEncoder(w io.Writer, format Format) Encoder {
switch format {
case FmtProtoDelim:
return encoder(func(v *dto.MetricFamily) error {
_, err := pbutil.WriteDelimited(w, v)
return err
})
case FmtProtoCompact:
return encoder(func(v *dto.MetricFamily) error {
_, err := fmt.Fprintln(w, v.String())
return err
})
case FmtProtoText:
return encoder(func(v *dto.MetricFamily) error {
_, err := fmt.Fprintln(w, proto.MarshalTextString(v))
return err
})
case FmtText:
return encoder(func(v *dto.MetricFamily) error {
_, err := MetricFamilyToText(w, v)
return err
})
}
panic("expfmt.NewEncoder: unknown format")
}
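A hedged sketch of how Negotiate and NewEncoder compose in an HTTP handler; the helper name is hypothetical, and the registry's ServeHTTP above does the same with extra buffering and gzip:

```
func serveFamilies(w http.ResponseWriter, r *http.Request, fams []*dto.MetricFamily) {
	format := expfmt.Negotiate(r.Header)
	w.Header().Set("Content-Type", string(format))
	enc := expfmt.NewEncoder(w, format)
	for _, f := range fams {
		if err := enc.Encode(f); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
	}
}
```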

View File

@ -0,0 +1,40 @@
// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package expfmt contains tools for reading and writing Prometheus metrics.
package expfmt
type Format string
const (
TextVersion = "0.0.4"
ProtoType = `application/vnd.google.protobuf`
ProtoProtocol = `io.prometheus.client.MetricFamily`
ProtoFmt = ProtoType + "; proto=" + ProtoProtocol + ";"
// The Content-Type values for the different wire protocols.
FmtUnknown Format = `<unknown>`
FmtText Format = `text/plain; version=` + TextVersion
FmtProtoDelim Format = ProtoFmt + ` encoding=delimited`
FmtProtoText Format = ProtoFmt + ` encoding=text`
FmtProtoCompact Format = ProtoFmt + ` encoding=compact-text`
// fmtJSON2 is hidden as it is deprecated.
fmtJSON2 Format = `application/json; version=0.0.2`
)
const (
hdrContentType = "Content-Type"
hdrAccept = "Accept"
)
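The Format values double as literal Content-Type header values, which is why ServeHTTP can write string(contentType) directly; for example:

```
fmt.Println(string(expfmt.FmtProtoDelim))
// application/vnd.google.protobuf; proto=io.prometheus.client.MetricFamily; encoding=delimited
```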

View File

@ -1,4 +1,4 @@
// Copyright 2013 The Prometheus Authors
// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
@ -11,26 +11,26 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package model
// Build only when actually fuzzing
// +build gofuzz
import (
"sort"
)
package expfmt
// A LabelValue is an associated value for a LabelName.
type LabelValue string
import "bytes"
// LabelValues is a sortable LabelValue slice. It implements sort.Interface.
type LabelValues []LabelValue
// Fuzz text metric parser with github.com/dvyukov/go-fuzz:
//
// go-fuzz-build github.com/prometheus/client_golang/text
// go-fuzz -bin text-fuzz.zip -workdir fuzz
//
// Further input samples should go in the folder fuzz/corpus.
func Fuzz(in []byte) int {
parser := TextParser{}
_, err := parser.TextToMetricFamilies(bytes.NewReader(in))
func (l LabelValues) Len() int {
return len(l)
}
func (l LabelValues) Less(i, j int) bool {
return sort.StringsAreSorted([]string{string(l[i]), string(l[j])})
}
func (l LabelValues) Swap(i, j int) {
l[i], l[j] = l[j], l[i]
if err != nil {
return 0
}
return 1
}

View File

@ -0,0 +1,2 @@

View File

@ -0,0 +1,6 @@
minimal_metric 1.234
another_metric -3e3 103948
# Even that:
no_labels{} 3
# HELP line for non-existing metric will be ignored.

View File

@ -0,0 +1,12 @@
# A normal comment.
#
# TYPE name counter
name{labelname="val1",basename="basevalue"} NaN
name {labelname="val2",basename="base\"v\\al\nue"} 0.23 1234567890
# HELP name two-line\n doc str\\ing
# HELP name2 doc str"ing 2
# TYPE name2 gauge
name2{labelname="val2" ,basename = "basevalue2" } +Inf 54321
name2{ labelname = "val1" , }-Inf

View File

@ -0,0 +1,22 @@
# TYPE my_summary summary
my_summary{n1="val1",quantile="0.5"} 110
decoy -1 -2
my_summary{n1="val1",quantile="0.9"} 140 1
my_summary_count{n1="val1"} 42
# Latest timestamp wins in case of a summary.
my_summary_sum{n1="val1"} 4711 2
fake_sum{n1="val1"} 2001
# TYPE another_summary summary
another_summary_count{n2="val2",n1="val1"} 20
my_summary_count{n2="val2",n1="val1"} 5 5
another_summary{n1="val1",n2="val2",quantile=".3"} -1.2
my_summary_sum{n1="val2"} 08 15
my_summary{n1="val3", quantile="0.2"} 4711
my_summary{n1="val1",n2="val2",quantile="-12.34",} NaN
# some
# funny comments
# HELP
# HELP
# HELP my_summary
# HELP my_summary

View File

@ -0,0 +1,10 @@
# HELP request_duration_microseconds The response latency.
# TYPE request_duration_microseconds histogram
request_duration_microseconds_bucket{le="100"} 123
request_duration_microseconds_bucket{le="120"} 412
request_duration_microseconds_bucket{le="144"} 592
request_duration_microseconds_bucket{le="172.8"} 1524
request_duration_microseconds_bucket{le="+Inf"} 2693
request_duration_microseconds_sum 1.7560473e+06
request_duration_microseconds_count 2693

View File

@ -0,0 +1 @@
bla 3.14

View File

@ -0,0 +1 @@
metric{label="\t"} 3.14

View File

@ -0,0 +1 @@
metric{label="bla"} 3.14 2 3

View File

@ -0,0 +1 @@
metric{label="bla"} blubb

View File

@ -0,0 +1,3 @@
# HELP metric one
# HELP metric two

Some files were not shown because too many files have changed in this diff.