Telemetry
The Consul agent collects various runtime metrics about the performance of different libraries and subsystems. These metrics are aggregated on a ten second interval and are retained for one minute.
To view this data, you must send a signal to the Consul process: on Unix this is `USR1`, while on Windows it is `BREAK`. Once Consul receives the signal, it will dump the current telemetry information to the agent's stderr.
This telemetry information can be used for debugging or otherwise getting a better view of what Consul is doing.
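For a quick illustration of the signal-based dump on Unix, here is a minimal Go sketch; the PID used is a placeholder you would replace with your local `consul` agent's actual process ID:

```go
package main

import "syscall"

func main() {
	// Placeholder PID of the local Consul agent; substitute the real one
	// (e.g. from `pidof consul`). On receiving SIGUSR1 the agent dumps its
	// current telemetry information to stderr.
	pid := 12345
	if err := syscall.Kill(pid, syscall.SIGUSR1); err != nil {
		panic(err)
	}
}
```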
Additionally, if the telemetry configuration options are provided, the telemetry information will be streamed to a statsite or statsd server where it can be aggregated and flushed to Graphite or any other metrics store. This information can also be viewed with the metrics endpoint, in JSON format or in Prometheus format.
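As a sketch of reading the metrics endpoint over HTTP, assuming a local agent listening on the default HTTP address `127.0.0.1:8500` (append `?format=prometheus` to request the Prometheus text format instead of JSON):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Fetch the aggregated in-memory metrics from the local agent.
	resp, err := http.Get("http://127.0.0.1:8500/v1/agent/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // gauges, counters and samples as JSON
}
```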
Below is sample output of a telemetry dump:
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.num_goroutines': 19.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.alloc_bytes': 755960.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.malloc_count': 7550.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.free_count': 4387.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.heap_objects': 3163.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.total_gc_pause_ns': 1151002.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.total_gc_runs': 4.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.agent.ipc.accept': Count: 5 Sum: 5.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.agent.ipc.command': Count: 10 Sum: 10.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events': Count: 5 Sum: 5.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events.foo': Count: 4 Sum: 4.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events.baz': Count: 1 Sum: 1.000
[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.memberlist.gossip': Count: 50 Min: 0.007 Mean: 0.020 Max: 0.041 Stddev: 0.007 Sum: 0.989
[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.serf.queue.Intent': Count: 10 Sum: 0.000
[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.serf.queue.Event': Count: 10 Min: 0.000 Mean: 2.500 Max: 5.000 Stddev: 2.121 Sum: 25.000
Key Metrics
These are some metrics emitted that can help you understand the health of your cluster at a glance. For a full list of metrics emitted by Consul, see the Metrics Reference below.
Transaction timing
Metric Name | Description |
---|---|
`consul.kvs.apply` | This measures the time it takes to complete an update to the KV store. |
`consul.txn.apply` | This measures the time spent applying a transaction operation. |
`consul.raft.apply` | This counts the number of Raft transactions occurring over the interval. |
`consul.raft.commitTime` | This measures the time it takes to commit a new entry to the Raft log on the leader. |
Why they're important: Taken together, these metrics indicate how long it takes to complete write operations in various parts of the Consul cluster. Generally these should all be fairly consistent and no more than a few milliseconds. Sudden changes in any of the timing values could be due to unexpected load on the Consul servers, or due to problems on the servers themselves.
What to look for: Deviations (in any of these metrics) of more than 50% from baseline over the previous hour.
Leadership changes
Metric Name | Description |
---|---|
`consul.raft.leader.lastContact` | Measures the time since the leader was last able to contact the follower nodes when checking its leader lease. |
`consul.raft.state.candidate` | This increments whenever a Consul server starts an election. |
`consul.raft.state.leader` | This increments whenever a Consul server becomes a leader. |
Why they're important: Normally, your Consul cluster should have a stable leader. If there are frequent elections or leadership changes, it would likely indicate network issues between the Consul servers, or that the Consul servers themselves are unable to keep up with the load.
What to look for: For a healthy cluster, you're looking for a `lastContact` lower than 200ms, `leader` > 0 and `candidate` == 0. Deviations from this might indicate flapping leadership.
Autopilot
Metric Name | Description |
---|---|
`consul.autopilot.healthy` | This tracks the overall health of the local server cluster. If all servers are considered healthy by Autopilot, this will be set to 1. If any are unhealthy, this will be 0. |
Why it's important: Obviously, you want your cluster to be healthy.
What to look for: Alert if `healthy` is 0.
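As one way to wire up such an alert, here is a minimal Go sketch that polls the agent's metrics endpoint and flags an unhealthy server cluster. The `Gauges`/`Name`/`Value` field names are an assumption based on the JSON summary the endpoint emits (go-metrics' format); verify them against your agent's output.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Only the fields needed for this check; field names assume the
// go-metrics JSON summary returned by /v1/agent/metrics.
type metricsSummary struct {
	Gauges []struct {
		Name  string
		Value float64
	}
}

func main() {
	resp, err := http.Get("http://127.0.0.1:8500/v1/agent/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var m metricsSummary
	if err := json.NewDecoder(resp.Body).Decode(&m); err != nil {
		panic(err)
	}
	for _, g := range m.Gauges {
		if g.Name == "consul.autopilot.healthy" && g.Value == 0 {
			fmt.Println("ALERT: Autopilot reports the server cluster as unhealthy")
		}
	}
}
```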
Memory usage
Metric Name | Description |
---|---|
`consul.runtime.alloc_bytes` | This measures the number of bytes allocated by the Consul process. |
`consul.runtime.sys_bytes` | This is the total number of bytes of memory obtained from the OS. |
Why they're important: Consul keeps all of its data in memory. If Consul consumes all available memory, it will crash.
What to look for: If `consul.runtime.sys_bytes` exceeds 90% of total available system memory.
Garbage collection
Metric Name | Description |
---|---|
`consul.runtime.total_gc_pause_ns` | Number of nanoseconds consumed by stop-the-world garbage collection (GC) pauses since Consul started. |
Why it's important: GC pause is a "stop-the-world" event, meaning that all runtime threads are blocked until GC completes. Normally these pauses last only a few nanoseconds. But if memory usage is high, the Go runtime may GC so frequently that it starts to slow down Consul.
What to look for: Warning if `total_gc_pause_ns` exceeds 2 seconds/minute, critical if it exceeds 5 seconds/minute.
NOTE: `total_gc_pause_ns` is a cumulative counter, so in order to calculate rates (such as GC/minute), you will need to apply a function such as InfluxDB's `non_negative_difference()`.
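A minimal sketch of that rate calculation, assuming you sample the cumulative counter once a minute and take the non-negative difference of successive readings (the same idea as InfluxDB's `non_negative_difference()`):

```go
package main

import "fmt"

// gcPausePerMinute returns the GC pause accumulated between two successive
// readings of consul.runtime.total_gc_pause_ns taken one minute apart. A
// negative difference (e.g. the counter reset after an agent restart) is
// clamped to zero.
func gcPausePerMinute(prevNs, currNs float64) float64 {
	d := currNs - prevNs
	if d < 0 {
		return 0
	}
	return d
}

func main() {
	// Hypothetical successive samples, one minute apart.
	prev, curr := 1151002.0, 3151002.0
	fmt.Printf("GC pause: %.0f ns over the last minute\n", gcPausePerMinute(prev, curr))
}
```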
Network activity - RPC Count
Metric Name | Description |
---|---|
`consul.client.rpc` | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server. |
`consul.client.rpc.exceeded` | Increments whenever an RPC request made by a Consul agent in client mode is rate limited by that agent's limits configuration. |
`consul.client.rpc.failed` | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server and fails. |
Why they're important: These measurements indicate the current load created from a Consul agent, including when the load becomes high enough to be rate limited. A high RPC count, especially from `consul.client.rpc.exceeded`, which means requests are being rate-limited, could imply a misconfigured Consul agent.
What to look for:
- Sudden large changes to the `consul.client.rpc` metrics (greater than 50% deviation from baseline).
- `consul.client.rpc.exceeded` or `consul.client.rpc.failed` count > 0, as it implies that an agent is being rate-limited or failing to make RPC requests to a Consul server.
When telemetry is being streamed to an external metrics store, the interval is defined to be that store's flush interval. Otherwise, the interval can be assumed to be 10 seconds when retrieving metrics from the built-in store using the signals described above.
Metrics Reference
This is a full list of metrics emitted by Consul.
Metric | Description | Unit | Type |
---|---|---|---|
`consul.acl.blocked.service.deregistration` | This increments whenever a deregistration fails for a service (blocked by an ACL) | requests | counter |
`consul.acl.blocked.<check|node|service>.registration` | This increments whenever a registration for an entity (check, node or service) is blocked by an ACL | requests | counter |
`consul.client.rpc` | This increments whenever a Consul agent in client mode makes an RPC request to a Consul server. This gives a measure of how much a given agent is loading the Consul servers. Currently, this is only generated by agents in client mode, not Consul servers. | requests | counter |
`consul.client.rpc.exceeded` | This increments whenever an RPC request made by a Consul agent in client mode to a Consul server is rate limited by that agent's [`limits`](/docs/agent/options.html#limits) configuration. This gives an indication that there's an abusive application making too many requests on the agent, or that the rate limit needs to be increased. Currently, this only applies to agents in client mode, not Consul servers. | rejected requests | counter |
`consul.client.rpc.failed` | This increments whenever a Consul agent in client mode makes an RPC request to a Consul server and fails. | requests | counter |
`consul.client.api.catalog_register.<node>` | This increments whenever a Consul agent receives a catalog register request. | requests | counter |
`consul.client.api.success.catalog_register.<node>` | This increments whenever a Consul agent successfully responds to a catalog register request. | requests | counter |
`consul.client.rpc.error.catalog_register.<node>` | This increments whenever a Consul agent receives an RPC error for a catalog register request. | errors | counter |
`consul.client.api.catalog_deregister.<node>` | This increments whenever a Consul agent receives a catalog de-register request. | requests | counter |
`consul.client.api.success.catalog_deregister.<node>` | This increments whenever a Consul agent successfully responds to a catalog de-register request. | requests | counter |
`consul.client.rpc.error.catalog_deregister.<node>` | This increments whenever a Consul agent receives an RPC error for a catalog de-register request. | errors | counter |
`consul.client.api.catalog_datacenters.<node>` | This increments whenever a Consul agent receives a request to list datacenters in the catalog. | requests | counter |
`consul.client.api.success.catalog_datacenters.<node>` | This increments whenever a Consul agent successfully responds to a request to list datacenters. | requests | counter |
`consul.client.rpc.error.catalog_datacenters.<node>` | This increments whenever a Consul agent receives an RPC error for a request to list datacenters. | errors | counter |
`consul.client.api.catalog_nodes.<node>` | This increments whenever a Consul agent receives a request to list nodes from the catalog. | requests | counter |
`consul.client.api.success.catalog_nodes.<node>` | This increments whenever a Consul agent successfully responds to a request to list nodes. | requests | counter |
`consul.client.rpc.error.catalog_nodes.<node>` | This increments whenever a Consul agent receives an RPC error for a request to list nodes. | errors | counter |
`consul.client.api.catalog_services.<node>` | This increments whenever a Consul agent receives a request to list services from the catalog. | requests | counter |
`consul.client.api.success.catalog_services.<node>` | This increments whenever a Consul agent successfully responds to a request to list services. | requests | counter |
`consul.client.rpc.error.catalog_services.<node>` | This increments whenever a Consul agent receives an RPC error for a request to list services. | errors | counter |
`consul.client.api.catalog_service_nodes.<node>` | This increments whenever a Consul agent receives a request to list nodes offering a service. | requests | counter |
`consul.client.api.success.catalog_service_nodes.<node>` | This increments whenever a Consul agent successfully responds to a request to list nodes offering a service. | requests | counter |
`consul.client.rpc.error.catalog_service_nodes.<node>` | This increments whenever a Consul agent receives an RPC error for a request to list nodes offering a service. | errors | counter |
`consul.client.api.catalog_node_services.<node>` | This increments whenever a Consul agent receives a request to list services registered in a node. | requests | counter |
`consul.client.api.success.catalog_node_services.<node>` | This increments whenever a Consul agent successfully responds to a request to list services in a node. | requests | counter |
`consul.client.rpc.error.catalog_node_services.<node>` | This increments whenever a Consul agent receives an RPC error for a request to list services in a node. | errors | counter |
`consul.runtime.num_goroutines` | This tracks the number of running goroutines and is a general load pressure indicator. This may burst from time to time but should return to a steady state value. | number of goroutines | gauge |
`consul.runtime.alloc_bytes` | This measures the number of bytes allocated by the Consul process. This may burst from time to time but should return to a steady state value. | bytes | gauge |
`consul.runtime.heap_objects` | This measures the number of objects allocated on the heap and is a general memory pressure indicator. This may burst from time to time but should return to a steady state value. | number of objects | gauge |
`consul.acl.cache_hit` | The number of ACL cache hits. | hits | counter |
`consul.acl.cache_miss` | The number of ACL cache misses. | misses | counter |
`consul.acl.replication_hit` | The number of ACL replication cache hits (when not running in the ACL datacenter). | hits | counter |
`consul.dns.stale_queries` | This increments when an agent serves a query within the allowed stale threshold. | queries | counter |
`consul.dns.ptr_query.<node>` | This measures the time spent handling a reverse DNS query for the given node. | ms | timer |
`consul.dns.domain_query.<node>` | This measures the time spent handling a domain query for the given node. | ms | timer |
`consul.http.<verb>.<path>` | This tracks how long it takes to service the given HTTP request for the given verb and path. Paths do not include details like service or key names, for these an underscore will be present as a placeholder (eg. `consul.http.GET.v1.kv._`) | ms | timer |
Server Health
These metrics are used to monitor the health of the Consul servers.
Cluster Health
These metrics give insight into the health of the cluster as a whole.
Metric | Description | Unit | Type |
---|---|---|---|
`consul.memberlist.degraded.probe` | This metric counts the number of times the agent has performed failure detection on another agent at a slower probe rate. The agent uses its own health metric as an indicator to perform this action. (If its health score is low, it means that the node is healthy, and vice versa.) | probes / interval | counter |
`consul.memberlist.degraded.timeout` | This metric counts the number of times an agent was marked as a dead node, whilst not getting enough confirmations from a randomly selected list of agent nodes in an agent's membership. | occurrence / interval | counter |
`consul.memberlist.msg.dead` | This metric counts the number of times an agent has marked another agent to be a dead node. | messages / interval | counter |
`consul.memberlist.health.score` | This metric describes a node's perception of its own health based on how well it is meeting the soft real-time requirements of the protocol. This metric ranges from 0 to 8, where 0 indicates "totally healthy". This health score is used to scale the time between outgoing probes, and higher scores translate into longer probing intervals. For more details see section IV of the Lifeguard paper: https://arxiv.org/pdf/1707.00788.pdf | score | gauge |
`consul.memberlist.msg.suspect` | This increments when an agent suspects another as failed when executing random probes as part of the gossip protocol. These can be an indicator of overloaded agents, network problems, or configuration errors where agents can not connect to each other on the [required ports](/docs/agent/options.html#ports). | suspect messages received / interval | counter |
`consul.memberlist.tcp.accept` | This metric counts the number of times an agent has accepted an incoming TCP stream connection. | connections accepted / interval | counter |
`consul.memberlist.udp.sent/received` | This metric measures the total number of bytes sent/received by an agent through the UDP protocol. | bytes sent or bytes received / interval | counter |
`consul.memberlist.tcp.connect` | This metric counts the number of times an agent has initiated a push/pull sync with another agent. | push/pull initiated / interval | counter |
`consul.memberlist.tcp.sent` | This metric measures the total number of bytes sent by an agent through the TCP protocol. | bytes sent / interval | counter |
`consul.memberlist.gossip` | This metric gives the number of gossips (messages) broadcasted to a set of randomly selected nodes. | messages / interval | counter |
`consul.memberlist.msg_alive` | This metric counts the number of alive agents that the agent has mapped out so far, based on the message information given by the network layer. | nodes / interval | counter |
`consul.memberlist.msg_dead` | This metric gives the number of dead agents that the agent has mapped out so far, based on the message information given by the network layer. | nodes / interval | counter |
`consul.memberlist.msg_suspect` | This metric gives the number of suspect nodes that the agent has mapped out so far, based on the message information given by the network layer. | nodes / interval | counter |
`consul.memberlist.probeNode` | This metric measures the time taken to perform a single round of failure detection on a select agent. | nodes / interval | counter |
`consul.memberlist.pushPullNode` | This metric measures the number of agents that have exchanged state with this agent. | nodes / interval | counter |
`consul.serf.member.flap` | Available in Consul 0.7 and later, this increments when an agent is marked dead and then recovers within a short time period. This can be an indicator of overloaded agents, network problems, or configuration errors where agents can not connect to each other on the [required ports](/docs/agent/options.html#ports). | flaps / interval | counter |
`consul.serf.events` | This increments when an agent processes an [event](/docs/commands/event.html). Consul uses events internally so there may be additional events showing in telemetry. There are also per-event counters emitted as `consul.serf.events.<event name>`. | events / interval | counter |
`consul.autopilot.failure_tolerance` | This tracks the number of voting servers that the cluster can lose while continuing to function. | servers | gauge |
`consul.autopilot.healthy` | This tracks the overall health of the local server cluster. If all servers are considered healthy by Autopilot, this will be set to 1. If any are unhealthy, this will be 0. | boolean | gauge |
`consul.session_ttl.active` | This tracks the active number of sessions being tracked. | sessions | gauge |
`consul.catalog.service.query.<service>` | This increments for each catalog query for the given service. | queries | counter |
`consul.catalog.service.query-tag.<service>.<tag>` | This increments for each catalog query for the given service with the given tag. | queries | counter |
`consul.catalog.service.not-found.<service>` | This increments for each catalog query where the given service could not be found. | queries | counter |
`consul.health.service.query.<service>` | This increments for each health query for the given service. | queries | counter |
`consul.health.service.query-tag.<service>.<tag>` | This increments for each health query for the given service with the given tag. | queries | counter |
`consul.health.service.not-found.<service>` | This increments for each health query where the given service could not be found. | queries | counter |
Connect Built-in Proxy Metrics
Consul Connect's built-in proxy is by default configured to log metrics to the same sink as the agent that starts it when running as a managed proxy.
When running in this mode it emits some basic metrics. These will be expanded upon in the future.
All metrics are prefixed with `consul.proxy.<proxied-service-id>` to distinguish between multiple proxies on a given host. The table below uses `web` as an example service name for brevity.
Labels
Most metrics have a `dst` label and some have a `src` label. When using metrics sinks and timeseries stores that support labels or tags, these allow aggregating the connections by service name.
Assuming all services are using a managed built-in proxy, you can get a complete overview of both the number of open connections and the bytes sent and received between all services by aggregating over these metrics.
For example, by aggregating over all upstream (i.e. outbound) connections, which have both `src` and `dst` labels, you can get a sum of all the bandwidth in and out of a given service, or the total number of connections between two services.
Metrics Reference
The standard Go runtime metrics are exported by `go-metrics`, as with the Consul agent. The table below describes the additional metrics exported by the proxy.
Metric | Description | Unit | Type |
---|---|---|---|
`consul.proxy.web.runtime.*` | The same go runtime metrics as documented for the agent above. | mixed | mixed |
`consul.proxy.web.inbound.conns` | Shows the current number of connections open from inbound requests to the proxy. Where supported a `dst` label is added indicating the service name the proxy represents. | connections | gauge |
`consul.proxy.web.inbound.rx_bytes` | This increments by the number of bytes received from an inbound client connection. Where supported a `dst` label is added indicating the service name the proxy represents. | bytes | counter |
`consul.proxy.web.inbound.tx_bytes` | This increments by the number of bytes transferred to an inbound client connection. Where supported a `dst` label is added indicating the service name the proxy represents. | bytes | counter |
`consul.proxy.web.upstream.conns` | Shows the current number of connections open from a proxy instance to an upstream. Where supported a `src` label is added indicating the service name the proxy represents, and a `dst` label is added indicating the service name the upstream is connecting to. | connections | gauge |
`consul.proxy.web.upstream.rx_bytes` | This increments by the number of bytes received from an upstream connection. Where supported a `src` label is added indicating the service name the proxy represents, and a `dst` label is added indicating the service name the upstream is connecting to. | bytes | counter |
`consul.proxy.web.upstream.tx_bytes` | This increments by the number of bytes transferred to an upstream connection. Where supported a `src` label is added indicating the service name the proxy represents, and a `dst` label is added indicating the service name the upstream is connecting to. | bytes | counter |