The Consul agent collects various runtime metrics about the performance of different libraries and subsystems. These metrics are aggregated on a ten second interval and are retained for one minute.
These are some of the emitted metrics that can help you understand the health of your cluster at a glance. For a full list of metrics emitted by Consul, see the [Metrics Reference](#metrics-reference) below.
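Consul also exposes this in-memory telemetry over the agent's HTTP API. As a quick illustration, the minimal sketch below (assuming a local agent listening on the default HTTP port 8500) pulls a snapshot from the `/v1/agent/metrics` endpoint and prints the reported gauges:

```python
import json
import urllib.request

# Pull the agent's in-memory telemetry snapshot (Consul 0.9.1+).
# Assumes a local agent listening on the default HTTP port 8500.
URL = "http://127.0.0.1:8500/v1/agent/metrics"

with urllib.request.urlopen(URL) as resp:
    snapshot = json.load(resp)

# The snapshot groups metrics by type; print the gauges as a quick check.
for gauge in snapshot.get("Gauges", []):
    print(gauge["Name"], gauge["Value"])
```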
### Transaction timing
| Metric Name | Description |
| :----------------------- | :---------- |
| `consul.kvs.apply` | This measures the time it takes to complete an update to the KV store. |
| `consul.txn.apply` | This measures the time spent applying a transaction operation. |
| `consul.raft.apply` | This counts the number of Raft transactions occurring over the interval. |
| `consul.raft.commitTime` | This measures the time it takes to commit a new entry to the Raft log on the leader. |
**Why they're important:** Taken together, these metrics indicate how long it takes to complete write operations in various parts of the Consul cluster. Generally these should all be fairly consistent and no more than a few milliseconds. Sudden changes in any of the timing values could be due to unexpected load on the Consul servers, or due to problems on the servers themselves.
**What to look for:** Deviations (in any of these metrics) of more than 50% from baseline over the previous hour.
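As a concrete example of that check, the sketch below uses hypothetical sample values (not real Consul output) to compare the latest `consul.kvs.apply` timing against the mean of the previous hour's samples and flag a deviation of more than 50%:

```python
# Hypothetical per-interval timings (ms) for consul.kvs.apply over the past hour.
baseline_samples = [2.1, 1.9, 2.3, 2.0, 2.2, 1.8]
latest = 3.4  # most recent observation, in ms

baseline = sum(baseline_samples) / len(baseline_samples)
deviation = abs(latest - baseline) / baseline

# Flag anything more than 50% away from the hourly baseline.
if deviation > 0.5:
    print(f"WARN: consul.kvs.apply is {deviation:.0%} away from its {baseline:.1f} ms baseline")
```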
### Leadership changes
| Metric Name | Description |
| :---------- | :---------- |
| `consul.raft.leader.lastContact` | Measures the time since the leader was last able to contact the follower nodes when checking its leader lease. |
| `consul.raft.state.candidate` | This increments whenever a Consul server starts an election. |
| `consul.raft.state.leader` | This increments whenever a Consul server becomes a leader. |
**Why they're important:** Normally, your Consul cluster should have a stable leader. If there are frequent elections or leadership changes, it would likely indicate network issues between the Consul servers, or that the Consul servers themselves are unable to keep up with the load.
**What to look for:** For a healthy cluster, you're looking for a `lastContact` lower than 200ms, `leader` > 0 and `candidate` == 0. Deviations from this might indicate flapping leadership.
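To make those thresholds concrete, here is a small sketch that applies them to hypothetical values for one reporting interval:

```python
# Hypothetical values for the current reporting interval.
last_contact_ms = 120  # consul.raft.leader.lastContact
leader = 1             # consul.raft.state.leader
candidate = 0          # consul.raft.state.candidate

# Healthy per the thresholds above: lastContact < 200 ms, leader > 0, candidate == 0.
if last_contact_ms >= 200 or leader == 0 or candidate > 0:
    print("WARN: possible leadership flapping or an overloaded/partitioned server")
```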
### Autopilot
| Metric Name | Description |
| :---------- | :---------- |
| `consul.autopilot.healthy` | This tracks the overall health of the local server cluster. If all servers are considered healthy by Autopilot, this will be set to 1. If any are unhealthy, this will be 0. |
**Why it's important:** Obviously, you want your cluster to be healthy.
**What to look for:** Alert if `healthy` is 0.
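One minimal way to wire up that alert, assuming a local agent on the default HTTP port 8500 (and an `operator:read` token if ACLs are enabled), is to read Autopilot's health summary from the `/v1/operator/autopilot/health` endpoint, which reflects the same Autopilot assessment as this gauge:

```python
import json
import urllib.request

# Query Autopilot's server-health summary. Assumes a local agent on the
# default HTTP port; with ACLs enabled this requires operator:read.
URL = "http://127.0.0.1:8500/v1/operator/autopilot/health"

with urllib.request.urlopen(URL) as resp:
    health = json.load(resp)

if not health["Healthy"]:
    print("CRITICAL: Autopilot reports the server cluster as unhealthy")
```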
### Memory usage
| Metric Name | Description |
| :---------- | :---------- |
| `consul.runtime.alloc_bytes` | This measures the number of bytes allocated by the Consul process. |
| `consul.runtime.sys_bytes` | This is the total number of bytes of memory obtained from the OS. |
**Why they're important:** Consul keeps all of its data in memory. If Consul consumes all available memory, it will crash.
**What to look for:** If `consul.runtime.sys_bytes` exceeds 90% of total available system memory.
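As an illustration, the sketch below compares a hypothetical `consul.runtime.sys_bytes` reading against the host's physical memory; it uses `os.sysconf`, so it assumes a Linux host:

```python
import os

# Total physical memory on a Linux host via sysconf.
total_mem = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

# Hypothetical latest reading of the consul.runtime.sys_bytes gauge.
sys_bytes = 6 * 1024 ** 3  # 6 GiB

if sys_bytes > 0.9 * total_mem:
    print(f"WARN: Consul holds {sys_bytes / total_mem:.0%} of system memory")
```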
### Garbage collection
| Metric Name | Description |
| :---------- | :---------- |
| `consul.runtime.total_gc_pause_ns` | Number of nanoseconds consumed by stop-the-world garbage collection (GC) pauses since Consul started. |
**Why it's important:** GC pause is a "stop-the-world" event, meaning that all runtime threads are blocked until GC completes. Normally these pauses last only a few nanoseconds. But if memory usage is high, the Go runtime may GC so frequently that it starts to slow down Consul.
**What to look for:** Warning if `total_gc_pause_ns` exceeds 2 seconds/minute, critical if it exceeds 5 seconds/minute.
**NOTE:** `total_gc_pause_ns` is a cumulative counter, so in order to calculate rates (such as GC/minute),
you will need to apply a function such as InfluxDB's [`non_negative_difference()`](https://docs.influxdata.com/influxdb/v1.5/query_language/functions/#non-negative-difference).
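The same rate can also be computed by hand from two consecutive scrapes of the counter. The sketch below, using hypothetical values, mirrors what `non_negative_difference()` does and applies the thresholds above:

```python
# Two consecutive scrapes of the cumulative consul.runtime.total_gc_pause_ns
# counter, taken one minute apart (hypothetical values).
previous_ns = 1_200_000_000
current_ns = 3_900_000_000

# Equivalent of a non-negative difference: guard against counter resets
# (for example, after an agent restart).
gc_pause_per_min = max(current_ns - previous_ns, 0) / 1e9  # seconds per minute

if gc_pause_per_min > 5:
    print(f"CRITICAL: {gc_pause_per_min:.1f}s of GC pause in the last minute")
elif gc_pause_per_min > 2:
    print(f"WARN: {gc_pause_per_min:.1f}s of GC pause in the last minute")
```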
### Network activity - RPC Count
| Metric Name | Description |
| :---------- | :---------- |
| `consul.client.rpc` | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server. |
| `consul.client.rpc.exceeded` | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server and gets rate limited by that agent's [`limits`](/docs/agent/options.html#limits) configuration. |
| `consul.client.rpc.failed` | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server and fails. |
**Why they're important:** These measurements indicate the current load created from a Consul agent, including when the load becomes high enough to be rate limited. A high RPC count, especially from `consul.client.rpc.exceeded`, means that requests are being rate-limited and could imply a misconfigured Consul agent.
**What to look for:**

- Sudden large changes to the `consul.client.rpc` metrics (greater than 50% deviation from baseline).
- `consul.client.rpc.exceeded` or `consul.client.rpc.failed` count > 0, as it implies that an agent is being rate-limited or fails to make an RPC request to a Consul server.
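To tie those two checks together, here is a minimal sketch using hypothetical per-interval counter values for a single client agent:

```python
# Hypothetical per-interval counter values for one client agent.
rpc_baseline = 40   # typical consul.client.rpc count per interval
rpc_current = 75    # latest consul.client.rpc count
rpc_exceeded = 0    # consul.client.rpc.exceeded
rpc_failed = 1      # consul.client.rpc.failed

if rpc_exceeded > 0 or rpc_failed > 0:
    print("WARN: client RPCs are being rate-limited or failing")

if abs(rpc_current - rpc_baseline) / rpc_baseline > 0.5:
    print("WARN: client RPC load deviates more than 50% from its baseline")
```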
## Metrics Reference

### Agent health

| Metric | Description | Unit | Type |
| :----- | :---------- | :--- | :--- |
| `consul.client.rpc` | This increments whenever a Consul agent in client mode makes an RPC request to a Consul server. This gives a measure of how much a given agent is loading the Consul servers. Currently, this is only generated by agents in client mode, not Consul servers. | requests | counter |
| `consul.client.rpc.exceeded` | This increments whenever a Consul agent in client mode makes an RPC request to a Consul server and gets rate limited by that agent's [`limits`](/docs/agent/options.html#limits) configuration. This gives an indication that there's an abusive application making too many requests on the agent, or that the rate limit needs to be increased. Currently, this only applies to agents in client mode, not Consul servers. | rejected requests | counter |
| `consul.runtime.num_goroutines` | This tracks the number of running goroutines and is a general load pressure indicator. This may burst from time to time but should return to a steady state value. | number of goroutines | gauge |
| `consul.runtime.alloc_bytes` | This measures the number of bytes allocated by the Consul process. This may burst from time to time but should return to a steady state value. | bytes | gauge |
| `consul.runtime.heap_objects` | This measures the number of objects allocated on the heap and is a general memory pressure indicator. This may burst from time to time but should return to a steady state value. | number of objects | gauge |
| `consul.http.<verb>.<path>` | This tracks how long it takes to service the given HTTP request for the given verb and path. Paths do not include details like service or key names, for these an underscore will be present as a placeholder (eg. `consul.http.GET.v1.kv._`). | ms | timer |
### Server health

These metrics are used to monitor the health of the Consul servers.

| Metric | Description | Unit | Type |
| :----- | :---------- | :--- | :--- |
| `consul.raft.state.leader` | This increments whenever a Consul server becomes a leader. If there are frequent leadership changes this may be an indication that the servers are overloaded and aren't meeting the soft real-time requirements for Raft, or that there are networking problems between the servers. | leadership transitions / interval | counter |
| `consul.raft.state.candidate` | This increments whenever a Consul server starts an election. If this increments without a leadership change occurring it could indicate that a single server is overloaded or is experiencing network connectivity issues. | election attempts / interval | counter |
| `consul.raft.apply` | This counts the number of Raft transactions occurring over the interval, which is a general indicator of the write load on the Consul servers. | raft transactions / interval | counter |
| `consul.raft.commitTime` | This measures the time it takes to commit a new entry to the Raft log on the leader. | ms | timer |
| `consul.raft.leader.dispatchLog` | This measures the time it takes for the leader to write log entries to disk. | ms | timer |
| `consul.raft.replication.appendEntries` | This measures the time it takes to replicate log entries to followers. This is a general indicator of the load pressure on the Consul servers, as well as the performance of the communication between the servers. | ms | timer |
| `consul.raft.leader.lastContact` | This will only be emitted by the Raft leader and measures the time since the leader was last able to contact the follower nodes when checking its leader lease. It can be used as a measure for how stable the Raft timing is and how close the leader is to timing out its lease. The lease timeout is 500 ms times the [`raft_multiplier` configuration](/docs/agent/options.html#raft_multiplier), so this telemetry value should not be getting close to that configured value, otherwise the Raft timing is marginal and might need to be tuned, or more powerful servers might be needed. See the [Server Performance](/docs/guides/performance.html) guide for more details. | ms | timer |
### Cluster health

These metrics give insight into the health of the cluster as a whole.

| Metric | Description | Unit | Type |
| :----- | :---------- | :--- | :--- |
| `consul.memberlist.msg.suspect` | This increments when an agent suspects another as failed when executing random probes as part of the gossip protocol. These can be an indicator of overloaded agents, network problems, or configuration errors where agents can not connect to each other on the [required ports](/docs/agent/options.html#ports). | suspect messages received / interval | counter |
| `consul.serf.member.flap` | Available in Consul 0.7 and later, this increments when an agent is marked dead and then recovers within a short time period. This can be an indicator of overloaded agents, network problems, or configuration errors where agents can not connect to each other on the [required ports](/docs/agent/options.html#ports). | flaps / interval | counter |
| `consul.serf.events` | This increments when an agent processes an [event](/docs/commands/event.html). Consul uses events internally so there may be additional events showing in telemetry. There are also per-event counters emitted as `consul.serf.events.<event name>`. | events / interval | counter |
| `consul.autopilot.failure_tolerance` | This tracks the number of voting servers that the cluster can lose while continuing to function. | servers | gauge |
| `consul.autopilot.healthy` | This tracks the overall health of the local server cluster. If all servers are considered healthy by Autopilot, this will be set to 1. If any are unhealthy, this will be 0. | boolean | gauge |