Agent Telemetry
The Consul agent collects various runtime metrics about the performance of different libraries and subsystems. These metrics are aggregated on a ten second (10s) interval and are retained for one minute. An interval is the period of time between instances of data being collected and aggregated.
When telemetry is being streamed to an external metrics store, the interval is defined to be that store's flush interval.
External Store | Interval |
---|---|
dogstatsd | 10s |
Prometheus | 60s |
statsd | 10s |
To view this data, you must send a signal to the Consul process: on Unix this is `USR1`, while on Windows it is `BREAK`. Once Consul receives the signal, it will dump the current telemetry information to the agent's stderr.
This telemetry information can be used for debugging or otherwise getting a better view of what Consul is doing. Review the Monitoring and Metrics tutorial to learn how to collect and interpret Consul data.
By default, all metric names of gauge type are prefixed with the hostname of the Consul agent, e.g., `consul.hostname.server.isLeader`. To disable hostname prefixing, set `telemetry.disable_hostname = true` in the agent configuration.
Additionally, if the telemetry configuration options are provided, the telemetry information will be streamed to a statsite or statsd server, where it can be aggregated and flushed to Graphite or any other metrics store. For a Telegraf configuration example, review the Monitoring with Telegraf tutorial. This information can also be viewed with the metrics endpoint, in JSON format or in Prometheus format.
Sample output of telemetry dump:

```
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.num_goroutines': 19.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.alloc_bytes': 755960.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.malloc_count': 7550.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.free_count': 4387.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.heap_objects': 3163.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.total_gc_pause_ns': 1151002.000
[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.total_gc_runs': 4.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.agent.ipc.accept': Count: 5 Sum: 5.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.agent.ipc.command': Count: 10 Sum: 10.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events': Count: 5 Sum: 5.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events.foo': Count: 4 Sum: 4.000
[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events.baz': Count: 1 Sum: 1.000
[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.memberlist.gossip': Count: 50 Min: 0.007 Mean: 0.020 Max: 0.041 Stddev: 0.007 Sum: 0.989
[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.serf.queue.Intent': Count: 10 Sum: 0.000
[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.serf.queue.Event': Count: 10 Min: 0.000 Mean: 2.500 Max: 5.000 Stddev: 2.121 Sum: 25.000
```
Key Metrics
These are some emitted metrics that can help you understand the health of your cluster at a glance. A Grafana dashboard, maintained by the Consul team, is also available and displays these metrics for easy visualization. For a full list of metrics emitted by Consul, see the Metrics Reference.
Transaction timing
Metric Name | Description | Unit | Type |
---|---|---|---|
consul.kvs.apply | Measures the time it takes to complete an update to the KV store. | ms | timer |
consul.txn.apply | Measures the time spent applying a transaction operation. | ms | timer |
consul.raft.apply | Counts the number of Raft transactions applied during the interval. This metric is only reported on the leader. | raft transactions / interval | counter |
consul.raft.commitTime | Measures the time it takes to commit a new entry to the Raft log on the leader. | ms | timer |
Why they're important: Taken together, these metrics indicate how long it takes to complete write operations in various parts of the Consul cluster. Generally these should all be fairly consistent and no more than a few milliseconds. Sudden changes in any of the timing values could be due to unexpected load on the Consul servers, or due to problems on the servers themselves.
What to look for: Deviations (in any of these metrics) of more than 50% from baseline over the previous hour.
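As a sketch, this deviation check could be automated against samples pulled from your metrics store. The function name and threshold are illustrative, not part of Consul:

```python
# Hypothetical alert helper: flag a transaction-timing sample that deviates
# more than 50% from the baseline over the previous hour.
def deviates_from_baseline(current_ms: float, baseline_ms: float,
                           threshold: float = 0.5) -> bool:
    """True if current_ms differs from baseline_ms by more than threshold."""
    if baseline_ms == 0:
        return current_ms != 0
    return abs(current_ms - baseline_ms) / baseline_ms > threshold

# Example: baseline consul.kvs.apply of 4 ms, new sample of 7 ms (75% deviation).
print(deviates_from_baseline(7.0, 4.0))   # True -> alert
print(deviates_from_baseline(5.0, 4.0))   # False, only 25% off baseline
```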
Leadership changes
Metric Name | Description | Unit | Type |
---|---|---|---|
consul.raft.leader.lastContact | Measures the time since the leader was last able to contact the follower nodes when checking its leader lease. | ms | timer |
consul.raft.state.candidate | Increments whenever a Consul server starts an election. | elections | counter |
consul.raft.state.leader | Increments whenever a Consul server becomes a leader. | leaders | counter |
consul.server.isLeader | Tracks whether a server is a leader (1) or not (0). | 1 or 0 | gauge |
Why they're important: Normally, your Consul cluster should have a stable leader. If there are frequent elections or leadership changes, it would likely indicate network issues between the Consul servers, or that the Consul servers themselves are unable to keep up with the load.
What to look for: For a healthy cluster, you're looking for a `lastContact` lower than 200ms, `leader` > 0, and `candidate` == 0. Deviations from this might indicate flapping leadership.
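These three thresholds can be combined into one health predicate; this is an illustrative sketch, not a Consul API:

```python
# Hypothetical leadership health check using the thresholds above:
# lastContact < 200 ms, leader > 0, candidate == 0.
def leadership_healthy(last_contact_ms: float,
                       leader_count: int,
                       candidate_count: int) -> bool:
    return last_contact_ms < 200 and leader_count > 0 and candidate_count == 0

print(leadership_healthy(35.0, 1, 0))   # True: stable leadership
print(leadership_healthy(450.0, 1, 3))  # False: slow contact plus elections
```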
Certificate Authority Expiration
Metric Name | Description | Unit | Type |
---|---|---|---|
consul.mesh.active-root-ca.expiry | The number of seconds until the root CA expires, updated every hour. | seconds | gauge |
consul.mesh.active-signing-ca.expiry | The number of seconds until the signing CA expires, updated every hour. | seconds | gauge |
consul.agent.tls.cert.expiry | The number of seconds until the server agent's TLS certificate expires, updated every hour. | seconds | gauge |
Why they're important: Consul Mesh requires a CA to sign all certificates used to connect the mesh, and the mesh network ceases to work if those certificates expire and become invalid. The root CA is particularly important to monitor, as Consul does not automatically rotate it. The TLS certificate metric monitors the certificate that the server's agent uses to connect with the other agents in the cluster.
What to look for: Monitor the root CA for approaching expiration, which indicates it is time for you to rotate the root CA either manually or with external automation. Consul should rotate the signing (intermediate) certificate automatically, but we recommend monitoring the rotation. When the certificate does not rotate, check the server agent logs for messages related to the CA system. Handling of the agent TLS certificate's rotation varies based on the configuration.
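Since these gauges report seconds until expiration, a simple threshold alert works; the 28-day warning window below is an example choice, not a Consul default:

```python
# Hypothetical expiry warning for the CA/TLS expiry gauges above.
def ca_expiry_warning(expiry_seconds: float, warn_days: int = 28) -> bool:
    """True if the certificate expires within warn_days."""
    return expiry_seconds < warn_days * 24 * 60 * 60

print(ca_expiry_warning(90 * 24 * 3600))  # False: ~90 days remaining
print(ca_expiry_warning(7 * 24 * 3600))   # True: one week remaining
```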
Autopilot
Metric Name | Description | Unit | Type |
---|---|---|---|
consul.autopilot.healthy | Tracks the overall health of the local server cluster. If all servers are considered healthy by Autopilot, this will be set to 1. If any are unhealthy, this will be 0. | health state | gauge |
Why it's important: Autopilot can expose the overall health of your cluster with a simple boolean.
What to look for: Alert if `healthy` is 0. Some other indicators of an unhealthy cluster would be:
- `consul.raft.commitTime` - This can help reflect the speed of state store changes being performed by the agent. If this number is rising, the server may be experiencing an issue due to degraded resources on the host.
- Leadership change metrics - Check for deviation from the recommended values. This can indicate failed leadership elections or flapping nodes.
Memory usage
Metric Name | Description | Unit | Type |
---|---|---|---|
consul.runtime.alloc_bytes | Measures the number of bytes allocated by the Consul process. | bytes | gauge |
consul.runtime.sys_bytes | Measures the total number of bytes of memory obtained from the OS. | bytes | gauge |
Why they're important: Consul keeps all of its data in memory. If Consul consumes all available memory, it will crash.
What to look for: If `consul.runtime.sys_bytes` exceeds 90% of total available system memory.

NOTE: This metric is calculated using Go's runtime package MemStats. This will have a different output than information gathered from `top`. For more information, see GH-4734.
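A minimal sketch of the 90% check, assuming total system memory is obtained from the host (e.g. /proc/meminfo) rather than from Consul:

```python
# Hypothetical memory-pressure alert for the threshold above.
def memory_pressure(sys_bytes: int, total_system_bytes: int,
                    threshold: float = 0.9) -> bool:
    return sys_bytes > threshold * total_system_bytes

GIB = 1024 ** 3
# 15 GiB used by the Consul process on a 16 GiB host -> above the 90% line.
print(memory_pressure(15 * GIB, 16 * GIB))  # True
print(memory_pressure(4 * GIB, 16 * GIB))   # False
```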
Garbage collection
Metric Name | Description | Unit | Type |
---|---|---|---|
consul.runtime.total_gc_pause_ns | Number of nanoseconds consumed by stop-the-world garbage collection (GC) pauses since Consul started. | ns | gauge |
Why it's important: GC pause is a "stop-the-world" event, meaning that all runtime threads are blocked until GC completes. Normally these pauses last only a few nanoseconds. But if memory usage is high, the Go runtime may GC so frequently that it starts to slow down Consul.
What to look for: Warning if `total_gc_pause_ns` exceeds 2 seconds/minute, critical if it exceeds 5 seconds/minute.

NOTE: `total_gc_pause_ns` is a cumulative counter, so in order to calculate rates (such as GC pause per minute), you will need to apply a function such as InfluxDB's `non_negative_difference()`.
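The rate calculation can be sketched as follows; this mimics the behavior of a non-negative difference function, where negative deltas (from a counter reset after an agent restart) are dropped to zero:

```python
# Convert a cumulative counter into per-interval deltas.
def non_negative_difference(samples):
    return [cur - prev if cur >= prev else 0
            for prev, cur in zip(samples, samples[1:])]

# One sample per minute of total_gc_pause_ns; deltas are GC pause ns/minute.
samples = [1_000_000, 501_000_000, 3_600_000_000, 200_000]  # reset at the end
per_minute = non_negative_difference(samples)

WARN = 2_000_000_000   # 2 s of pause per minute
CRIT = 5_000_000_000   # 5 s of pause per minute
print([p > WARN for p in per_minute])  # [False, True, False]
```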
Network activity - RPC Count
Metric Name | Description | Unit | Type |
---|---|---|---|
consul.client.rpc | Increments whenever a Consul agent makes an RPC request to a Consul server. | requests | counter |
consul.client.rpc.exceeded | Increments whenever a Consul agent makes an RPC request to a Consul server and gets rate limited by that agent's limits configuration. | requests | counter |
consul.client.rpc.failed | Increments whenever a Consul agent makes an RPC request to a Consul server and fails. | requests | counter |
Why they're important: These measurements indicate the current load created from a Consul agent, including when the load becomes high enough to be rate limited. A high RPC count, especially from `consul.client.rpc.exceeded` (meaning that requests are being rate-limited), could imply a misconfigured Consul agent.
What to look for:
- Sudden large changes to the `consul.client.rpc` metrics (greater than 50% deviation from baseline).
- `consul.client.rpc.exceeded` or `consul.client.rpc.failed` count > 0, as it implies that an agent is being rate-limited or failing to make RPC requests to a Consul server.
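Both conditions can be folded into one alert routine; this is an illustrative sketch with hypothetical names, not a Consul API:

```python
# Hypothetical combined alert for the RPC counters above.
def rpc_alerts(rpc_count, baseline_count, exceeded_count, failed_count):
    alerts = []
    if baseline_count and abs(rpc_count - baseline_count) / baseline_count > 0.5:
        alerts.append("rpc volume deviates >50% from baseline")
    if exceeded_count > 0:
        alerts.append("agent is being rate-limited")
    if failed_count > 0:
        alerts.append("RPC requests are failing")
    return alerts

# Doubled RPC volume plus some rate-limited requests -> two alerts.
print(rpc_alerts(rpc_count=900, baseline_count=400,
                 exceeded_count=3, failed_count=0))
```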
Raft Thread Saturation
Metric Name | Description | Unit | Type |
---|---|---|---|
consul.raft.thread.main.saturation | An approximate measurement of the proportion of time the main Raft goroutine is busy and unavailable to accept new work. | percentage | sample |
consul.raft.thread.fsm.saturation | An approximate measurement of the proportion of time the Raft FSM goroutine is busy and unavailable to accept new work. | percentage | sample |
Why they're important: These measurements are a useful proxy for how much capacity a Consul server has to accept additional write load. High saturation of the Raft goroutines can lead to elevated latency in the rest of the system and cause cluster instability.
What to look for: Generally, a server's steady-state saturation should be less than 50%.
NOTE: These metrics are approximate and under extremely heavy load won't give a perfect fine-grained view of how much headroom a server has available. Instead, treat them as an early warning sign.
Requirements:
- Consul 1.13.0+
Raft Replication Capacity Issues
Metric Name | Description | Unit | Type |
---|---|---|---|
consul.raft.fsm.lastRestoreDuration | Measures the time taken to restore the FSM from a snapshot on an agent restart or from the leader calling installSnapshot. This is a gauge that holds its value since most servers only restore during restarts, which are typically infrequent. | ms | gauge |
consul.raft.leader.oldestLogAge | The number of milliseconds since the oldest log in the leader's log store was written. This can be important for replication health where write rate is high and the snapshot is large as followers may be unable to recover from a restart if restoring takes longer than the minimum value for the current leader. Compare this with consul.raft.fsm.lastRestoreDuration and consul.raft.rpc.installSnapshot to monitor. In normal usage this gauge value will grow linearly over time until a snapshot completes on the leader and the log is truncated. | ms | gauge |
consul.raft.rpc.installSnapshot | Measures the time taken to process the installSnapshot RPC call. This metric should only be seen on agents which are currently in the follower state. | ms | timer |
Why they're important: These metrics allow operators to monitor the health and capacity of raft replication on servers. When Consul is handling large amounts of data and high write throughput it is possible for the cluster to get into the following state:
- Write throughput is high (say 500 commits per second or more) and constant
- The leader is writing out a large snapshot every minute or so
- The snapshot is large enough that it takes considerable time to restore from disk on a restart or from the leader if a follower gets behind
- Disk IO available allows the leader to write a snapshot faster than it can be restored from disk on a follower
Under these conditions, a follower that restarts may be unable to catch up on replication and become a voter again, since restoring from disk or from the leader takes longer than the leader takes to write a new snapshot and truncate its logs. Servers retain `raft_trailing_logs` (default `10240`) log entries even if their snapshot was more recent. On a leader processing 500 commits/second, that is only about 20 seconds' worth of logs. Assuming the leader can write out a snapshot and truncate the logs in less than 20 seconds, there will only be 20 seconds' worth of "recent" logs available on the leader right after it has taken a snapshot, and never more than about 80 seconds' worth, assuming it takes a snapshot and truncates logs every 60 seconds.
In this state, followers must be able to restore a snapshot into memory and resume replication in under 80 seconds, otherwise they will never be able to rejoin the cluster until write rates reduce. If they take more than 20 seconds, there is a chance they will be unlucky with timing when they restart and have to download a snapshot from the servers one or more times. If they take 50 seconds or more, they will likely fail to catch up more often than they succeed, and will remain non-voters for some time until they happen to complete the restore just before the leader truncates its logs.
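The arithmetic behind those windows can be made explicit. The commit rate and snapshot interval below are the example figures from the text, not fixed Consul values:

```python
# Worked version of the retained-log arithmetic above.
raft_trailing_logs = 10240   # default number of retained log entries
commit_rate = 500            # commits per second (example load)
snapshot_interval = 60       # seconds between leader snapshots (example)

# Seconds of logs the trailing entries cover at this write rate.
trailing_window = raft_trailing_logs / commit_rate
print(trailing_window)       # about 20 seconds

# Right after truncation only the trailing window remains; just before the
# next truncation roughly snapshot_interval more seconds have accumulated.
max_window = trailing_window + snapshot_interval
print(max_window)            # about 80 seconds
```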
In the worst case, the follower will be left continually downloading snapshots from the leader which are always too old to use by the time they are restored. This can put additional strain on the leader transferring large snapshots repeatedly as well as reduce the fault tolerance and serving capacity of the cluster.
Since Consul 1.5.3, `raft_trailing_logs` has been configurable. Increasing it allows the leader to retain more logs and gives followers more time to restore and catch up. The tradeoff is potentially slower appends, which might eventually affect write throughput and latency negatively, so setting it arbitrarily high is not recommended. Before Consul 1.10.0, changing this configuration required a rolling restart of the leader, and since no followers could restart without losing health, this could mean losing cluster availability and needing to recover the cluster from a loss of quorum.
Since Consul 1.10.0, `raft_trailing_logs` is reloadable with `consul reload` or `SIGHUP`, allowing operators to increase it without the leader restarting or losing leadership, so the cluster can be recovered gracefully.
Monitoring these metrics can help avoid or diagnose this state.
What to look for:
`consul.raft.leader.oldestLogAge` should look like a saw-tooth wave, increasing linearly with time until the leader takes a snapshot and then jumping down as the oldest logs are truncated. The lowest point on that line should remain comfortably higher (i.e., 2x or more) than the time it takes to restore a snapshot.
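That "2x or more" guidance can be expressed as a simple headroom check; names are illustrative:

```python
# Hypothetical headroom check: the minimum of oldestLogAge over a window
# should stay at least `factor` times the observed restore duration.
def replication_headroom_ok(oldest_log_age_min_ms: float,
                            restore_duration_ms: float,
                            factor: float = 2.0) -> bool:
    return oldest_log_age_min_ms >= factor * restore_duration_ms

print(replication_headroom_ok(120_000, 30_000))  # True: 4x headroom
print(replication_headroom_ok(45_000, 30_000))   # False: restores too slow
```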
There are two ways a snapshot can be restored on a follower: from disk on startup, or from the leader during an `installSnapshot` RPC. The leader only sends an `installSnapshot` RPC if the follower is new and has no state, or if its state is too old for it to catch up with the leader's logs.
`consul.raft.fsm.lastRestoreDuration` shows the time it took to restore from either source the last time it happened. Most of the time this is when the server was started. It's a gauge that will always show the last restore duration (in Consul 1.10.0 and later), however long ago that was.
`consul.raft.rpc.installSnapshot` is the timing information from the leader's perspective when it installs a new snapshot on a follower. It includes the time spent transferring the data as well as the follower restoring it. Since these events are typically infrequent, you may need to graph the last value observed, for example using `max_over_time` with a large range in Prometheus. While the restore part will also be reflected in `lastRestoreDuration`, it can be useful to observe this too, since the logs need to cover this entire operation, including the snapshot delivery, to ensure followers can always catch up safely.
Graphing `consul.raft.leader.oldestLogAge` on the same axes as the other two metrics here can help you see at a glance whether restore times are creeping dangerously close to the limit of what the leader is retaining at the current write rate.
Note that if servers don't restart often, then the snapshot could have grown significantly since the last restore happened so last restore times might not reflect what would happen if an agent restarts now.
License Expiration Enterprise
Metric Name | Description | Unit | Type |
---|---|---|---|
consul.system.licenseExpiration | Number of hours until the Consul Enterprise license will expire. | hours | gauge |
Why they're important:
This measurement indicates how many hours are left before the Consul Enterprise license expires. When the license expires some Consul Enterprise features will cease to work. An example of this is that after expiration, it is no longer possible to create or modify resources in non-default namespaces or to manage namespace definitions themselves even though reads of namespaced resources will still work.
What to look for:
This metric should be monitored to ensure that the license doesn't expire to prevent degradation of functionality.
Bolt DB Performance
Metric Name | Description | Unit | Type |
---|---|---|---|
consul.raft.boltdb.freelistBytes | Represents the number of bytes necessary to encode the freelist metadata. When raft_logstore.boltdb.no_freelist_sync is set to false these metadata bytes must also be written to disk for each committed log. | bytes | gauge |
consul.raft.boltdb.logsPerBatch | Measures the number of logs being written per batch to the db. | logs | sample |
consul.raft.boltdb.storeLogs | Measures the amount of time spent writing logs to the db. | ms | timer |
consul.raft.boltdb.writeCapacity | Theoretical write capacity in terms of the number of logs that can be written per second. Each sample outputs what the capacity would be if future batched log write operations were similar to this one. This similarity encompasses 4 things: batch size, byte size, disk performance and boltdb performance. While none of these will be static and it's highly likely individual samples of this metric will vary, aggregating this metric over a larger time window should provide a decent picture into how this BoltDB store can perform. | logs/second | sample |
Requirements:
- Consul 1.11.0+
Why they're important:
The `consul.raft.boltdb.storeLogs` metric is a direct indicator of disk write performance of a Consul server. If there are issues with the disk or performance degradations related to Bolt DB, these metrics will show the issue and potentially the cause as well.
What to look for:
The primary thing to look for is increases in the `consul.raft.boltdb.storeLogs` times. Its value directly governs an upper limit to the throughput of write operations within Consul.
In Consul, each write operation turns into a single Raft log to be committed. Raft processes these logs and stores them within Bolt DB in batches. Each call to store logs within Bolt DB is measured to record how long it took as well as how many logs the batch contained. Writing logs in this fashion is serialized, so a subsequent log storage operation can only be started after the previous one has completed. The maximum number of log storage operations that can be performed each second is represented by the `consul.raft.boltdb.writeCapacity` metric. When log storage operations become slower, you may not see an immediate decrease in write capacity due to increased batch sizes of each operation. However, the maximum batch size allowed is 64 logs. Therefore, if the `logsPerBatch` metric is near 64 and the `storeLogs` metric shows increased time to write each batch to disk, increased write latencies and other errors are likely to occur.
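The relationship between batch size, storeLogs timing, and write capacity, and the saturation condition just described, can be sketched as follows. The function names and the 1.5x slowdown threshold are illustrative assumptions, not Consul definitions:

```python
# Approximate logs/second if future batches resembled this one.
def write_capacity(logs_per_batch: float, store_logs_ms: float) -> float:
    return logs_per_batch * 1000.0 / store_logs_ms

# Batches pinned near the 64-log maximum while writes get slower suggests
# the store is saturated and write latency will rise.
def boltdb_saturated(logs_per_batch: float, store_logs_ms: float,
                     baseline_store_logs_ms: float) -> bool:
    return logs_per_batch >= 60 and store_logs_ms > 1.5 * baseline_store_logs_ms

print(write_capacity(32, 4.0))          # 8000.0 logs/second
print(boltdb_saturated(64, 12.0, 4.0))  # True: full batches, slow writes
```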
There are a number of potential issues that can cause this. Often it is the performance of the underlying disks. Other times it may be caused by Bolt DB behavior. Bolt DB keeps track of free space within the `raft.db` file. When it needs to allocate data, it will use existing free space first before further expanding the file. By default, Bolt DB writes a data structure containing metadata about free pages within the DB to disk for every log storage operation. Therefore, if the free space within the database grows excessively large, such as after a large spike in writes beyond the normal steady state and a subsequent slowdown in the write rate, Bolt DB could end up writing a large amount of extra data to disk for each log storage operation. This has the potential to drastically increase disk write throughput, potentially beyond what the underlying disks can keep up with. To detect this situation, look at the `consul.raft.boltdb.freelistBytes` metric. It counts the extra bytes written for each log storage operation beyond the log data itself. While not a clear indicator of an actual issue on its own, this metric can be used to diagnose why the `consul.raft.boltdb.storeLogs` metric is high.
If Bolt DB log storage performance becomes an issue and is caused by free list management, then setting `raft_logstore.boltdb.no_freelist_sync` to `true` in the server's configuration may help reduce disk IO and log storage operation times. Disabling free list syncing will, however, increase the startup time for a server, as it must scan the `raft.db` file for free space instead of loading the already populated free list structure.
Consul includes an experimental backend configuration that you can use instead of BoltDB. Refer to Experimental WAL LogStore backend for more information.
Metrics Reference
This is a full list of metrics emitted by Consul.
Metric | Description | Unit | Type |
---|---|---|---|
consul.acl.blocked.{check,service}.deregistration | Increments whenever a deregistration request for an entity (check or service) is blocked by an ACL. | requests | counter |
consul.acl.blocked.{check,node,service}.registration | Increments whenever a registration request for an entity (check, node or service) is blocked by an ACL. | requests | counter |
consul.api.http | Samples how long it takes to service the given HTTP request for the given verb and path. Includes labels for path and method. path does not include details like service or key names; for these, an underscore is present as a placeholder (e.g. path=v1.kv._). | ms | timer |
consul.client.rpc | Increments whenever a Consul agent makes an RPC request to a Consul server. This gives a measure of how much a given agent is loading the Consul servers. Currently, this is only generated by agents in client mode, not Consul servers. | requests | counter |
consul.client.rpc.exceeded | Increments whenever a Consul agent makes an RPC request to a Consul server and gets rate limited by that agent's limits configuration. This gives an indication that there's an abusive application making too many requests on the agent, or that the rate limit needs to be increased. Currently, this only applies to agents in client mode, not Consul servers. | rejected requests | counter |
consul.client.rpc.failed | Increments whenever a Consul agent makes an RPC request to a Consul server and fails. | requests | counter |
consul.client.api.catalog_register | Increments whenever a Consul agent receives a catalog register request. | requests | counter |
consul.client.api.success.catalog_register | Increments whenever a Consul agent successfully responds to a catalog register request. | requests | counter |
consul.client.rpc.error.catalog_register | Increments whenever a Consul agent receives an RPC error for a catalog register request. | errors | counter |
consul.client.api.catalog_deregister | Increments whenever a Consul agent receives a catalog deregister request. | requests | counter |
consul.client.api.success.catalog_deregister | Increments whenever a Consul agent successfully responds to a catalog deregister request. | requests | counter |
consul.client.rpc.error.catalog_deregister | Increments whenever a Consul agent receives an RPC error for a catalog deregister request. | errors | counter |
consul.client.api.catalog_datacenters | Increments whenever a Consul agent receives a request to list datacenters in the catalog. | requests | counter |
consul.client.api.success.catalog_datacenters | Increments whenever a Consul agent successfully responds to a request to list datacenters. | requests | counter |
consul.client.rpc.error.catalog_datacenters | Increments whenever a Consul agent receives an RPC error for a request to list datacenters. | errors | counter |
consul.client.api.catalog_nodes | Increments whenever a Consul agent receives a request to list nodes from the catalog. | requests | counter |
consul.client.api.success.catalog_nodes | Increments whenever a Consul agent successfully responds to a request to list nodes. | requests | counter |
consul.client.rpc.error.catalog_nodes | Increments whenever a Consul agent receives an RPC error for a request to list nodes. | errors | counter |
consul.client.api.catalog_services | Increments whenever a Consul agent receives a request to list services from the catalog. | requests | counter |
consul.client.api.success.catalog_services | Increments whenever a Consul agent successfully responds to a request to list services. | requests | counter |
consul.client.rpc.error.catalog_services | Increments whenever a Consul agent receives an RPC error for a request to list services. | errors | counter |
consul.client.api.catalog_service_nodes | Increments whenever a Consul agent receives a request to list nodes offering a service. | requests | counter |
consul.client.api.success.catalog_service_nodes | Increments whenever a Consul agent successfully responds to a request to list nodes offering a service. | requests | counter |
consul.client.api.error.catalog_service_nodes | Increments whenever a Consul agent receives an RPC error for a request to list nodes offering a service. | requests | counter |
consul.client.rpc.error.catalog_service_nodes | Increments whenever a Consul agent receives an RPC error for a request to list nodes offering a service. | errors | counter |
consul.client.api.catalog_node_services | Increments whenever a Consul agent receives a request to list services registered in a node. | requests | counter |
consul.client.api.success.catalog_node_services | Increments whenever a Consul agent successfully responds to a request to list services in a node. | requests | counter |
consul.client.rpc.error.catalog_node_services | Increments whenever a Consul agent receives an RPC error for a request to list services in a node. | errors | counter |
consul.client.api.catalog_node_service_list | Increments whenever a Consul agent receives a request to list a node's registered services. | requests | counter |
consul.client.rpc.error.catalog_node_service_list | Increments whenever a Consul agent receives an RPC error for a request to list a node's registered services. | errors | counter |
consul.client.api.success.catalog_node_service_list | Increments whenever a Consul agent successfully responds to a request to list a node's registered services. | requests | counter |
consul.client.api.catalog_gateway_services | Increments whenever a Consul agent receives a request to list services associated with a gateway. | requests | counter |
consul.client.api.success.catalog_gateway_services | Increments whenever a Consul agent successfully responds to a request to list services associated with a gateway. | requests | counter |
consul.client.rpc.error.catalog_gateway_services | Increments whenever a Consul agent receives an RPC error for a request to list services associated with a gateway. | errors | counter |
consul.runtime.num_goroutines | Tracks the number of running goroutines and is a general load pressure indicator. This may burst from time to time but should return to a steady state value. | number of goroutines | gauge |
consul.runtime.alloc_bytes | Measures the number of bytes allocated by the Consul process. This may burst from time to time but should return to a steady state value. | bytes | gauge |
consul.runtime.heap_objects | Measures the number of objects allocated on the heap and is a general memory pressure indicator. This may burst from time to time but should return to a steady state value. | number of objects | gauge |
consul.state.nodes | Measures the current number of nodes registered with Consul. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge |
consul.state.peerings | Measures the current number of peerings registered with Consul. It is only emitted by Consul servers. Added in v1.13.0. | number of objects | gauge |
consul.state.services | Measures the current number of unique services registered with Consul, based on service name. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge |
consul.state.service_instances | Measures the current number of unique service instances registered with Consul. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge |
consul.state.kv_entries | Measures the current number of entries in the Consul KV store. It is only emitted by Consul servers. Added in v1.10.3. | number of objects | gauge |
consul.state.connect_instances | Measures the current number of unique mesh service instances registered with Consul labeled by Kind (e.g. connect-proxy, connect-native, etc). Added in v1.10.4 | number of objects | gauge |
consul.state.config_entries | Measures the current number of configuration entries registered with Consul labeled by Kind (e.g. service-defaults, proxy-defaults, etc). See Configuration Entries for more information. Added in v1.10.4 | number of objects | gauge |
consul.members.clients | Measures the current number of client agents registered with Consul. It is only emitted by Consul servers. Added in v1.9.6. | number of clients | gauge |
consul.members.servers | Measures the current number of server agents registered with Consul. It is only emitted by Consul servers. Added in v1.9.6. | number of servers | gauge |
consul.dns.stale_queries | Increments when an agent serves a query within the allowed stale threshold. | queries | counter |
consul.dns.ptr_query | Measures the time spent handling a reverse DNS query for the given node. | ms | timer |
consul.dns.domain_query | Measures the time spent handling a domain query for the given node. | ms | timer |
consul.system.licenseExpiration | Enterprise This measures the number of hours remaining on the agents license. | hours | gauge |
consul.version | Represents the Consul version. | agents | gauge |
Server Health
These metrics are used to monitor the health of the Consul servers.
Metric | Description | Unit | Type |
---|---|---|---|
consul.acl.ResolveToken | Measures the time it takes to resolve an ACL token. | ms | timer |
consul.acl.ResolveTokenToIdentity | Measures the time it takes to resolve an ACL token to an Identity. This metric was removed in Consul 1.12. The time will now be reflected in consul.acl.ResolveToken . | ms | timer |
consul.acl.token.cache_hit | Increments if Consul is able to resolve a token's identity from the cache. | cache read op | counter |
consul.acl.token.cache_miss | Increments if Consul cannot resolve a token's identity from the cache. | cache read op | counter |
consul.cache.bypass | Counts how many times a request bypassed the cache because no cache-key was provided. | counter | counter |
consul.cache.fetch_success | Counts the number of successful fetches by the cache. | counter | counter |
consul.cache.fetch_error | Counts the number of failed fetches by the cache. | counter | counter |
consul.cache.evict_expired | Counts the number of expired entries that are evicted. | counter | counter |
consul.raft.applied_index | Represents the raft applied index. | index | gauge |
consul.raft.apply | Counts the number of Raft transactions occurring over the interval, which is a general indicator of the write load on the Consul servers. | raft transactions / interval | counter |
consul.raft.barrier | Counts the number of times the agent has started the barrier, i.e., the number of times it has issued a blocking call to ensure that all pending operations queued for the agent's FSM have been applied. | blocks / interval | counter |
consul.raft.boltdb.freelistBytes | Represents the number of bytes necessary to encode the freelist metadata. When raft_logstore.boltdb.no_freelist_sync is set to false these metadata bytes must also be written to disk for each committed log. | bytes | gauge |
consul.raft.boltdb.freePageBytes | Represents the number of bytes of free space within the raft.db file. | bytes | gauge |
consul.raft.boltdb.getLog | Measures the amount of time spent reading logs from the db. | ms | timer |
consul.raft.boltdb.logBatchSize | Measures the total size in bytes of logs being written to the db in a single batch. | bytes | sample |
consul.raft.boltdb.logsPerBatch | Measures the number of logs being written per batch to the db. | logs | sample |
consul.raft.boltdb.logSize | Measures the size of logs being written to the db. | bytes | sample |
consul.raft.boltdb.numFreePages | Represents the number of free pages within the raft.db file. | pages | gauge |
consul.raft.boltdb.numPendingPages | Represents the number of pending pages within the raft.db that will soon become free. | pages | gauge |
consul.raft.boltdb.openReadTxn | Represents the number of open read transactions against the db. | transactions | gauge |
consul.raft.boltdb.totalReadTxn | Represents the total number of started read transactions against the db. | transactions | gauge |
consul.raft.boltdb.storeLogs | Measures the amount of time spent writing logs to the db. | ms | timer |
consul.raft.boltdb.txstats.cursorCount | Counts the number of cursors created since Consul was started. | cursors | counter |
consul.raft.boltdb.txstats.nodeCount | Counts the number of node allocations within the db since Consul was started. | allocations | counter |
consul.raft.boltdb.txstats.nodeDeref | Counts the number of node dereferences in the db since Consul was started. | dereferences | counter |
consul.raft.boltdb.txstats.pageAlloc | Represents the number of bytes allocated within the db since Consul was started. Note that this does not take into account space having been freed and reused. In that case, the value of this metric will still increase. | bytes | gauge |
consul.raft.boltdb.txstats.pageCount | Represents the number of pages allocated since Consul was started. Note that this does not take into account space having been freed and reused. In that case, the value of this metric will still increase. | pages | gauge |
consul.raft.boltdb.txstats.rebalance | Counts the number of node rebalances performed in the db since Consul was started. | rebalances | counter |
consul.raft.boltdb.txstats.rebalanceTime | Measures the time spent rebalancing nodes in the db. | ms | timer |
consul.raft.boltdb.txstats.spill | Counts the number of nodes spilled in the db since Consul was started. | spills | counter |
consul.raft.boltdb.txstats.spillTime | Measures the time spent spilling nodes in the db. | ms | timer |
consul.raft.boltdb.txstats.split | Counts the number of nodes split in the db since Consul was started. | splits | counter |
consul.raft.boltdb.txstats.write | Counts the number of writes to the db since Consul was started. | writes | counter |
consul.raft.boltdb.txstats.writeTime | Measures the amount of time spent performing writes to the db. | ms | timer |
consul.raft.boltdb.writeCapacity | Theoretical write capacity in terms of the number of logs that can be written per second. Each sample outputs what the capacity would be if future batched log write operations were similar to this one. This similarity encompasses 4 things: batch size, byte size, disk performance and boltdb performance. While none of these will be static and it's highly likely individual samples of this metric will vary, aggregating this metric over a larger time window should provide a decent picture into how this BoltDB store can perform. | logs/second | sample |
consul.raft.commitNumLogs | Measures the count of logs processed for application to the FSM in a single batch. | logs | gauge |
consul.raft.commitTime | Measures the time it takes to commit a new entry to the Raft log on the leader. | ms | timer |
consul.raft.fsm.lastRestoreDuration | Measures the time taken to restore the FSM from a snapshot on an agent restart or from the leader calling installSnapshot. This is a gauge that holds its value since most servers only restore during restarts, which are typically infrequent. | ms | gauge |
consul.raft.fsm.snapshot | Measures the time taken by the FSM to record the current state for the snapshot. | ms | timer |
consul.raft.fsm.apply | Measures the time to apply a log to the FSM. | ms | timer |
consul.raft.fsm.enqueue | Measures the amount of time to enqueue a batch of logs for the FSM to apply. | ms | timer |
consul.raft.fsm.restore | Measures the time taken by the FSM to restore its state from a snapshot. | ms | timer |
consul.raft.last_index | Represents the raft applied index. | index | gauge |
consul.raft.leader.dispatchLog | Measures the time it takes for the leader to write log entries to disk. | ms | timer |
consul.raft.leader.dispatchNumLogs | Measures the number of logs committed to disk in a batch. | logs | gauge |
consul.raft.logstore.verifier.checkpoints_written | Counts the number of checkpoint entries written to the LogStore. | checkpoints | counter |
consul.raft.logstore.verifier.dropped_reports | Counts how many times the verifier routine was still busy when the next checksum came in, and so verification for a range was skipped. If you see this happen, consider increasing the interval between checkpoints with raft_logstore.verification.interval. | reports dropped | counter |
consul.raft.logstore.verifier.ranges_verified | Counts the number of log ranges for which a verification report has been completed. Refer to Monitor Raft metrics and logs for WAL for more information. | log ranges verifications | counter |
consul.raft.logstore.verifier.read_checksum_failures | Counts the number of times a range of logs between two check points contained at least one disk corruption. Refer to Monitor Raft metrics and logs for WAL for more information. | disk corruptions | counter |
consul.raft.logstore.verifier.write_checksum_failures | Counts the number of times a follower has a different checksum to the leader at the point where it writes to the log. This could be caused by either a disk-corruption on the leader (unlikely) or some other corruption of the log entries in-flight. | in-flight corruptions | counter |
consul.raft.leader.lastContact | Measures the time since the leader was last able to contact the follower nodes when checking its leader lease. It can be used as a measure for how stable the Raft timing is and how close the leader is to timing out its lease. The lease timeout is 500 ms times the raft_multiplier configuration, so this telemetry value should not get close to that configured value; otherwise, the Raft timing is marginal and might need to be tuned, or more powerful servers might be needed. See the Server Performance guide for more details. | ms | timer |
consul.raft.leader.oldestLogAge | The number of milliseconds since the oldest log in the leader's log store was written. This can be important for replication health where write rate is high and the snapshot is large as followers may be unable to recover from a restart if restoring takes longer than the minimum value for the current leader. Compare this with consul.raft.fsm.lastRestoreDuration and consul.raft.rpc.installSnapshot to monitor. In normal usage this gauge value will grow linearly over time until a snapshot completes on the leader and the log is truncated. Note: this metric won't be emitted until the leader writes a snapshot. After an upgrade to Consul 1.10.0 it won't be emitted until the oldest log was written after the upgrade. | ms | gauge |
consul.raft.replication.heartbeat | Measures the time taken to invoke appendEntries on a peer, so that it doesn't timeout on a periodic basis. | ms | timer |
consul.raft.replication.appendEntries | Measures the time it takes to replicate log entries to followers. This is a general indicator of the load pressure on the Consul servers, as well as the performance of the communication between the servers. | ms | timer |
consul.raft.replication.appendEntries.rpc | Measures the time taken by the append entries RPC to replicate the log entries of a leader agent onto its follower agent(s). | ms | timer |
consul.raft.replication.appendEntries.logs | Counts the number of logs replicated to an agent to bring it up to speed with the leader's logs. | logs appended/ interval | counter |
consul.raft.restore | Counts the number of times the restore operation has been performed by the agent. Here, restore refers to the action of raft consuming an external snapshot to restore its state. | operation invoked / interval | counter |
consul.raft.restoreUserSnapshot | Measures the time taken by the agent to restore the FSM state from a user's snapshot | ms | timer |
consul.raft.rpc.appendEntries | Measures the time taken to process an append entries RPC call from an agent. | ms | timer |
consul.raft.rpc.appendEntries.storeLogs | Measures the time taken to add any outstanding logs for an agent, since the last appendEntries was invoked. | ms | timer |
consul.raft.rpc.appendEntries.processLogs | Measures the time taken to process the outstanding log entries of an agent. | ms | timer |
consul.raft.rpc.installSnapshot | Measures the time taken to process the installSnapshot RPC call. This metric should only be seen on agents which are currently in the follower state. | ms | timer |
consul.raft.rpc.processHeartBeat | Measures the time taken to process a heartbeat request. | ms | timer |
consul.raft.rpc.requestVote | Measures the time taken to process the request vote RPC call. | ms | timer |
consul.raft.snapshot.create | Measures the time taken to initialize the snapshot process. | ms | timer |
consul.raft.snapshot.persist | Measures the time taken to dump the current snapshot taken by the Consul agent to the disk. | ms | timer |
consul.raft.snapshot.takeSnapshot | Measures the total time involved in taking the current snapshot (creating one and persisting it) by the Consul agent. | ms | timer |
consul.serf.snapshot.appendLine | Measures the time taken by the Consul agent to append an entry into the existing log. | ms | timer |
consul.serf.snapshot.compact | Measures the time taken by the Consul agent to compact a log. This operation occurs only when the snapshot becomes large enough to justify the compaction. | ms | timer |
consul.raft.state.candidate | Increments whenever a Consul server starts an election. If this increments without a leadership change occurring it could indicate that a single server is overloaded or is experiencing network connectivity issues. | election attempts / interval | counter |
consul.raft.state.leader | Increments whenever a Consul server becomes a leader. If there are frequent leadership changes this may be indication that the servers are overloaded and aren't meeting the soft real-time requirements for Raft, or that there are networking problems between the servers. | leadership transitions / interval | counter |
consul.raft.state.follower | Counts the number of times an agent has entered the follower mode. This happens when a new agent joins the cluster or after the end of a leader election. | follower state entered / interval | counter |
consul.raft.transition.heartbeat_timeout | The number of times an agent has transitioned to the candidate state after receiving no heartbeat messages from the last known leader. | timeouts / interval | counter |
consul.raft.verify_leader | This metric does not directly correlate with leader changes. It counts the number of times an agent checks whether it is still the leader; for example, the check is done during every consistent read. Depending on the load in the system, this metric count can be high, as it is incremented each time a consistent read is completed. | checks / interval | counter |
consul.raft.wal.head_truncations | Counts how many log entries have been truncated from the head, i.e., the oldest entries. By graphing the rate of change over time, you can see individual truncate calls as spikes. | log entries truncated | counter |
consul.raft.wal.last_segment_age_seconds | A gauge that is set each time a segment is rotated, describing the number of seconds between when that segment file was first created and when it was sealed. This gives a rough estimate of how quickly writes are filling the disk. | seconds | gauge |
consul.raft.wal.log_appends | Counts the number of calls to StoreLog(s) i.e. number of batches of entries appended. | calls | counter |
consul.raft.wal.log_entries_read | Counts the number of log entries read. | log entries read | counter |
consul.raft.wal.log_entries_written | Counts the number of log entries written. | log entries written | counter |
consul.raft.wal.log_entry_bytes_read | Counts the bytes of log entries read from segments before decoding. Actual bytes read from disk might be higher, as the total includes headers and index entries and possible secondary reads for large entries that don't fit in buffers. | bytes | counter |
consul.raft.wal.log_entry_bytes_written | Counts the bytes of log entry after encoding with Codec. Actual bytes written to disk might be slightly higher as it includes headers and index entries. | bytes | counter |
consul.raft.wal.segment_rotations | Counts how many times we move to a new segment file. | rotations | counter |
consul.raft.wal.stable_gets | Counts the number of calls to StableStore.Get or GetUint64. | calls | counter |
consul.raft.wal.stable_sets | Counts the number of calls to StableStore.Set or SetUint64. | calls | counter |
consul.raft.wal.tail_truncations | Counts how many log entries have been truncated from the tail, i.e., the newest entries. By graphing the rate of change over time, you can see individual truncate calls as spikes. | log entries truncated | counter |
consul.rpc.accept_conn | Increments when a server accepts an RPC connection. | connections | counter |
consul.rpc.rate_limit.exceeded | Increments whenever an RPC is over a configured rate limit. In permissive mode, the RPC is still allowed to proceed. | RPCs | counter |
consul.rpc.rate_limit.log_dropped | Increments whenever a log that is emitted because an RPC exceeded a rate limit gets dropped because the output buffer is full. | log messages dropped | counter |
consul.catalog.register | Measures the time it takes to complete a catalog register operation. | ms | timer |
consul.catalog.deregister | Measures the time it takes to complete a catalog deregister operation. | ms | timer |
consul.server.isLeader | Tracks whether a server is a leader (1) or not (0). | 1 or 0 | gauge |
consul.fsm.register | Measures the time it takes to apply a catalog register operation to the FSM. | ms | timer |
consul.fsm.deregister | Measures the time it takes to apply a catalog deregister operation to the FSM. | ms | timer |
consul.fsm.session | Measures the time it takes to apply the given session operation to the FSM. | ms | timer |
consul.fsm.kvs | Measures the time it takes to apply the given KV operation to the FSM. | ms | timer |
consul.fsm.tombstone | Measures the time it takes to apply the given tombstone operation to the FSM. | ms | timer |
consul.fsm.coordinate.batch-update | Measures the time it takes to apply the given batch coordinate update to the FSM. | ms | timer |
consul.fsm.prepared-query | Measures the time it takes to apply the given prepared query update operation to the FSM. | ms | timer |
consul.fsm.txn | Measures the time it takes to apply the given transaction update to the FSM. | ms | timer |
consul.fsm.autopilot | Measures the time it takes to apply the given autopilot update to the FSM. | ms | timer |
consul.fsm.persist | Measures the time it takes to persist the FSM to a raft snapshot. | ms | timer |
consul.fsm.intention | Measures the time it takes to apply an intention operation to the state store. | ms | timer |
consul.fsm.ca | Measures the time it takes to apply CA configuration operations to the FSM. | ms | timer |
consul.fsm.ca.leaf | Measures the time it takes to apply an operation while signing a leaf certificate. | ms | timer |
consul.fsm.acl.token | Measures the time it takes to apply an ACL token operation to the FSM. | ms | timer |
consul.fsm.acl.policy | Measures the time it takes to apply an ACL policy operation to the FSM. | ms | timer |
consul.fsm.acl.bindingrule | Measures the time it takes to apply an ACL binding rule operation to the FSM. | ms | timer |
consul.fsm.acl.authmethod | Measures the time it takes to apply an ACL authmethod operation to the FSM. | ms | timer |
consul.fsm.system_metadata | Measures the time it takes to apply a system metadata operation to the FSM. | ms | timer |
consul.kvs.apply | Measures the time it takes to complete an update to the KV store. | ms | timer |
consul.leader.barrier | Measures the time spent waiting for the raft barrier upon gaining leadership. | ms | timer |
consul.leader.reconcile | Measures the time spent updating the raft store from the serf member information. | ms | timer |
consul.leader.reconcileMember | Measures the time spent updating the raft store for a single serf member's information. | ms | timer |
consul.leader.reapTombstones | Measures the time spent clearing tombstones. | ms | timer |
consul.leader.replication.acl-policies.status | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of ACL policy replication was successful or 0 if there was an error. | healthy | gauge |
consul.leader.replication.acl-policies.index | This will only be emitted by the leader in a secondary datacenter. Increments to the index of ACL policies in the primary datacenter that have been successfully replicated. | index | gauge |
consul.leader.replication.acl-roles.status | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of ACL role replication was successful or 0 if there was an error. | healthy | gauge |
consul.leader.replication.acl-roles.index | This will only be emitted by the leader in a secondary datacenter. Increments to the index of ACL roles in the primary datacenter that have been successfully replicated. | index | gauge |
consul.leader.replication.acl-tokens.status | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of ACL token replication was successful or 0 if there was an error. | healthy | gauge |
consul.leader.replication.acl-tokens.index | This will only be emitted by the leader in a secondary datacenter. Increments to the index of ACL tokens in the primary datacenter that have been successfully replicated. | index | gauge |
consul.leader.replication.config-entries.status | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of config entry replication was successful or 0 if there was an error. | healthy | gauge |
consul.leader.replication.config-entries.index | This will only be emitted by the leader in a secondary datacenter. Increments to the index of config entries in the primary datacenter that have been successfully replicated. | index | gauge |
consul.leader.replication.federation-state.status | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of federation state replication was successful or 0 if there was an error. | healthy | gauge |
consul.leader.replication.federation-state.index | This will only be emitted by the leader in a secondary datacenter. Increments to the index of federation states in the primary datacenter that have been successfully replicated. | index | gauge |
consul.leader.replication.namespaces.status | Enterprise This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of namespace replication was successful or 0 if there was an error. | healthy | gauge |
consul.leader.replication.namespaces.index | Enterprise This will only be emitted by the leader in a secondary datacenter. Increments to the index of namespaces in the primary datacenter that have been successfully replicated. | index | gauge |
consul.prepared-query.apply | Measures the time it takes to apply a prepared query update. | ms | timer |
consul.prepared-query.execute_remote | Measures the time it takes to process a prepared query execute request that was forwarded to another datacenter. | ms | timer |
consul.prepared-query.execute | Measures the time it takes to process a prepared query execute request. | ms | timer |
consul.prepared-query.explain | Measures the time it takes to process a prepared query explain request. | ms | timer |
consul.rpc.raft_handoff | Increments when a server accepts a Raft-related RPC connection. | connections | counter |
consul.rpc.request | Increments when a server receives a Consul-related RPC request. | requests | counter |
consul.rpc.request_error | Increments when a server returns an error from an RPC request. | errors | counter |
consul.rpc.query | Increments when a server receives a read RPC request, indicating the rate of new read queries. See consul.rpc.queries_blocking for the current number of in-flight blocking RPC calls. This metric changed in 1.7.0 to only increment on the start of a query. The rate of queries will appear lower, but is more accurate. | queries | counter |
consul.rpc.queries_blocking | The current number of in-flight blocking queries the server is handling. | queries | gauge |
consul.rpc.cross-dc | Increments when a server sends a (potentially blocking) cross datacenter RPC query. | queries | counter |
consul.rpc.consistentRead | Measures the time spent confirming that a consistent read can be performed. | ms | timer |
consul.session.apply | Measures the time spent applying a session update. | ms | timer |
consul.session.renew | Measures the time spent renewing a session. | ms | timer |
consul.session_ttl.invalidate | Measures the time spent invalidating an expired session. | ms | timer |
consul.txn.apply | Measures the time spent applying a transaction operation. | ms | timer |
consul.txn.read | Measures the time spent returning a read transaction. | ms | timer |
consul.grpc.client.request.count | Counts the number of gRPC requests made by the client agent to a Consul server. Includes a server_type label indicating either the internal or external gRPC server. | requests | counter |
consul.grpc.client.connection.count | Counts the number of new gRPC connections opened by the client agent to a Consul server. Includes a server_type label indicating either the internal or external gRPC server. | connections | counter |
consul.grpc.client.connections | Measures the number of active gRPC connections open from the client agent to any Consul servers. Includes a server_type label indicating either the internal or external gRPC server. | connections | gauge |
consul.grpc.server.request.count | Counts the number of gRPC requests received by the server. Includes a server_type label indicating either the internal or external gRPC server. | requests | counter |
consul.grpc.server.connection.count | Counts the number of new gRPC connections received by the server. Includes a server_type label indicating either the internal or external gRPC server. | connections | counter |
consul.grpc.server.connections | Measures the number of active gRPC connections open on the server. Includes a server_type label indicating either the internal or external gRPC server. | connections | gauge |
consul.grpc.server.stream.count | Counts the number of new gRPC streams received by the server. Includes a server_type label indicating either the internal or external gRPC server. | streams | counter |
consul.grpc.server.streams | Measures the number of active gRPC streams handled by the server. Includes a server_type label indicating either the internal or external gRPC server. | streams | gauge |
consul.xds.server.streams | Measures the number of active xDS streams handled by the server split by protocol version. | streams | gauge |
consul.xds.server.streamsUnauthenticated | Measures the number of active xDS streams handled by the server that are unauthenticated because ACLs are not enabled or ACL tokens were missing. | streams | gauge |
consul.xds.server.idealStreamsMax | The maximum number of xDS streams per server, chosen to achieve a roughly even spread of load across servers. | streams | gauge |
consul.xds.server.streamDrained | Counts the number of xDS streams that are drained when rebalancing the load between servers. | streams | counter |
consul.xds.server.streamStart | Measures the time taken to first generate xDS resources after an xDS stream is opened. | ms | timer |
Server Workload
Requirements:
- Consul 1.12.0+
The following label-based RPC metrics provide insight about the workload on a Consul server and the source of the workload.
The `prefix_filter` telemetry configuration setting blocks or enables all RPC metric method calls. Specify the RPC metrics you want to allow in the `prefix_filter`:

```hcl
telemetry {
  prefix_filter = ["+consul.rpc.server.call"]
}
```
Metric | Description | Unit | Type |
---|---|---|---|
consul.rpc.server.call | Measures the elapsed time taken to complete an RPC call. | ms | summary |
Labels
The server workload metrics above come with the following labels:
Label Name | Description | Possible values |
---|---|---|
method | The name of the RPC method. | The value of any RPC request in Consul. |
errored | Indicates whether the RPC call errored. | true or false . |
request_type | Whether it is a read or write request. | read , write or unreported . |
rpc_type | The RPC implementation. | net/rpc or internal . |
leader | Whether the server was a leader or not at the time of the request. | true , false or unreported . |
Label Explanations
The `internal` value for the `rpc_type` label in the table above refers to leader and cluster management RPC operations that Consul performs. Historically, `internal` RPC operation metrics were accounted for under the same metric names.
The `unreported` value for the `request_type` label in the table above refers to RPC requests within Consul where it is difficult to ascertain whether a request is of `read` or `write` type.
The `unreported` value for the `leader` label in the table above refers to RPC requests where Consul cannot determine the leadership status for a server.
Read Request Labels
In addition to the labels above, for read requests, the following may be populated:
Label Name | Description | Possible values |
---|---|---|
blocking | Whether the read request passed in a MinQueryIndex . | true if a MinQueryIndex was passed, false otherwise. |
target_datacenter | The target datacenter for the read request. | The string value of the target datacenter for the request. |
locality | Gives an indication of whether the RPC request is local or has been forwarded. | local if the current server datacenter is the same as target_datacenter, otherwise forwarded. |
Here is a Prometheus-style example of an RPC metric and its labels:

```
consul_rpc_server_call{errored="false",method="Catalog.ListNodes",request_type="read",rpc_type="net/rpc",quantile="0.5"} 255
```
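As an illustrative sketch (not an official query), the labels on this metric can be combined in PromQL to break down RPC load. The expression below assumes the telemetry is scraped into Prometheus and that the summary exposes a `consul_rpc_server_call_count` series, which is the usual Prometheus convention for summaries; adjust the name to match your setup:

```promql
# Hypothetical PromQL sketch: fraction of errored RPC calls per method
# over the last 5 minutes, assuming the summary's _count series exists.
sum by (method) (rate(consul_rpc_server_call_count{errored="true"}[5m]))
/
sum by (method) (rate(consul_rpc_server_call_count[5m]))
```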
Cluster Health
These metrics give insight into the health of the cluster as a whole.
The `consul.memberlist.*` and `consul.serf.*` metrics can be appended with certain labels to further distinguish data between different gossip pools. The supported label for CE is `network`, while `segment`, `partition`, and `area` are also allowed for Enterprise.
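For illustration only (the label values here are hypothetical), a gossip metric distinguished by the `network` label might appear in Prometheus format as:

```
consul_memberlist_gossip{network="lan",quantile="0.5"} 0.020
consul_memberlist_gossip{network="wan",quantile="0.5"} 0.041
```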
Metric | Description | Unit | Type |
---|---|---|---|
consul.memberlist.degraded.probe | Counts the number of times the agent has performed failure detection on another agent at a slower probe rate. The agent uses its own health metric as an indicator to perform this action. (A low health score means that the node is healthy, and vice versa.) | probes / interval | counter |
consul.memberlist.degraded.timeout | Counts the number of times an agent was marked as a dead node, whilst not getting enough confirmations from a randomly selected list of agent nodes in an agent's membership. | occurrence / interval | counter |
consul.memberlist.msg.dead | Counts the number of times an agent has marked another agent to be a dead node. | messages / interval | counter |
consul.memberlist.health.score | Describes a node's perception of its own health based on how well it is meeting the soft real-time requirements of the protocol. This metric ranges from 0 to 8, where 0 indicates "totally healthy". This health score is used to scale the time between outgoing probes, and higher scores translate into longer probing intervals. For more details see section IV of the Lifeguard paper: https://arxiv.org/pdf/1707.00788.pdf | score | gauge |
consul.memberlist.msg.suspect | Increments when an agent suspects another as failed when executing random probes as part of the gossip protocol. These can be an indicator of overloaded agents, network problems, or configuration errors where agents can not connect to each other on the required ports. | suspect messages received / interval | counter |
consul.memberlist.tcp.accept | Counts the number of times an agent has accepted an incoming TCP stream connection. | connections accepted / interval | counter |
consul.memberlist.udp.sent/received | Measures the total number of bytes sent/received by an agent through the UDP protocol. | bytes sent or bytes received / interval | counter |
consul.memberlist.tcp.connect | Counts the number of times an agent has initiated a push/pull sync with another agent. | push/pull initiated / interval | counter |
consul.memberlist.tcp.sent | Measures the total number of bytes sent by an agent through the TCP protocol. | bytes sent / interval | counter |
consul.memberlist.gossip | Measures the time taken for gossip messages to be broadcasted to a set of randomly selected nodes. | ms | timer |
consul.memberlist.msg_alive | Counts the number of alive messages that the agent has processed so far, based on the message information given by the network layer. | messages / interval | counter |
consul.memberlist.msg_dead | The number of dead messages that the agent has processed so far, based on the message information given by the network layer. | messages / interval | counter |
consul.memberlist.msg_suspect | The number of suspect messages that the agent has processed so far, based on the message information given by the network layer. | messages / interval | counter |
consul.memberlist.node.instances | Tracks the number of instances in each of the node states: alive, dead, suspect, and left. | nodes | gauge |
consul.memberlist.probeNode | Measures the time taken to perform a single round of failure detection on a select agent. | nodes / interval | counter |
consul.memberlist.pushPullNode | Measures the number of agents that have exchanged state with this agent. | nodes / interval | counter |
consul.memberlist.queue.broadcasts | Measures the number of messages waiting to be broadcast to other gossip participants. | messages | sample |
consul.memberlist.size.local | Measures the size in bytes of the memberlist before it is sent to another gossip recipient. | bytes | gauge |
consul.memberlist.size.remote | Measures the size in bytes of incoming memberlists from other gossip participants. | bytes | gauge |
consul.serf.member.failed | Increments when an agent is marked dead. This can be an indicator of overloaded agents, network problems, or configuration errors where agents cannot connect to each other on the required ports. | failures / interval | counter |
consul.serf.member.flap | Available in Consul 0.7 and later, this increments when an agent is marked dead and then recovers within a short time period. This can be an indicator of overloaded agents, network problems, or configuration errors where agents cannot connect to each other on the required ports. | flaps / interval | counter |
consul.serf.member.join | Increments when an agent joins the cluster. If an agent flapped or failed this counter also increments when it re-joins. | joins / interval | counter |
consul.serf.member.left | Increments when an agent leaves the cluster. | leaves / interval | counter |
consul.serf.events | Increments when an agent processes an event. Consul uses events internally, so there may be additional events showing in telemetry. A per-event counter is also emitted as consul.serf.events.<type>. | events / interval | counter |
consul.serf.events.<type> | Breakdown of consul.serf.events by type of event. | events / interval | counter |
consul.serf.msgs.sent | This metric samples the number of bytes of messages broadcast to the cluster. In a given time interval, the sum of this metric is the total number of bytes sent and the count is the number of messages sent. | message bytes / interval | counter |
consul.autopilot.failure_tolerance | Tracks the number of voting servers that the cluster can lose while continuing to function. | servers | gauge |
consul.autopilot.healthy | Tracks the overall health of the local server cluster. If all servers are considered healthy by Autopilot, this will be set to 1. If any are unhealthy, this will be 0. | boolean | gauge |
consul.session_ttl.active | Tracks the number of active sessions. | sessions | gauge |
consul.catalog.service.query | Increments for each catalog query for the given service. | queries | counter |
consul.catalog.service.query-tag | Increments for each catalog query for the given service with the given tag. | queries | counter |
consul.catalog.service.query-tags | Increments for each catalog query for the given service with the given tags. | queries | counter |
consul.catalog.service.not-found | Increments for each catalog query where the given service could not be found. | queries | counter |
consul.catalog.connect.query | Increments for each mesh-based catalog query for the given service. | queries | counter |
consul.catalog.connect.query-tag | Increments for each mesh-based catalog query for the given service with the given tag. | queries | counter |
consul.catalog.connect.query-tags | Increments for each mesh-based catalog query for the given service with the given tags. | queries | counter |
consul.catalog.connect.not-found | Increments for each mesh-based catalog query where the given service could not be found. | queries | counter |
Service Mesh Built-in Proxy Metrics
By default, Consul service mesh's built-in proxy sends metrics to the same sink as the agent that starts it. When running in this mode, it emits some basic metrics. These will be expanded upon in the future.
All metrics are prefixed with consul.proxy.<proxied-service-id> to distinguish between multiple proxies on a given host. The tables below use web as an example service name for brevity.
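The prefix scheme above can be sketched as a small helper. This function is illustrative only, not part of Consul:

```python
def proxy_metric_name(proxied_service_id: str, metric: str) -> str:
    """Join the consul.proxy.<proxied-service-id> prefix with a metric name.

    The service-id prefix is what distinguishes metrics emitted by
    multiple proxies running on the same host.
    """
    return f"consul.proxy.{proxied_service_id}.{metric}"

# For a service registered as "web":
print(proxy_metric_name("web", "inbound.conns"))  # consul.proxy.web.inbound.conns
```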
Labels
Most metrics have a dst label and some have a src label. When using metrics sinks and timeseries stores that support labels or tags, these allow aggregating the connections by service name.
Assuming all services are using a managed built-in proxy, you can get a complete overview of both number of open connections and bytes sent and received between all services by aggregating over these metrics.
For example, by aggregating over all upstream (i.e. outbound) connections, which have both src and dst labels, you can get the total bandwidth in and out of a given service or the total number of connections between two services.
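As a sketch of that aggregation, the following parses Prometheus-format exposition lines and sums the byte counters per (src, dst) pair. The sample data is hypothetical, and the underscore-mangled metric names assume the Prometheus sink, which replaces dots with underscores:

```python
import re
from collections import defaultdict

# Hypothetical sample in Prometheus exposition format; real names and
# label values depend on your sink configuration.
SAMPLE = """\
consul_proxy_web_upstream_tx_bytes{src="web",dst="db"} 1024
consul_proxy_web_upstream_tx_bytes{src="web",dst="cache"} 256
consul_proxy_web_upstream_rx_bytes{src="web",dst="db"} 4096
"""

LINE_RE = re.compile(r'^(?P<name>[\w:]+)\{(?P<labels>[^}]*)\}\s+(?P<value>\S+)$')

def bandwidth_by_pair(text):
    """Sum *_tx_bytes and *_rx_bytes counters per (src, dst) service pair."""
    totals = defaultdict(float)
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if not m or not m.group('name').endswith(('_tx_bytes', '_rx_bytes')):
            continue
        labels = dict(kv.split('=', 1) for kv in m.group('labels').split(','))
        key = (labels.get('src', '').strip('"'), labels.get('dst', '').strip('"'))
        totals[key] += float(m.group('value'))
    return dict(totals)
```

Calling `bandwidth_by_pair(SAMPLE)` returns the combined bytes exchanged between web and each of its upstreams, keyed by service pair.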
Metrics Reference
The standard Go runtime metrics are exported by go-metrics, as with the Consul agent. The table below describes the additional metrics exported by the proxy.
Metric | Description | Unit | Type |
---|---|---|---|
consul.proxy.web.runtime.* | The same go runtime metrics as documented for the agent above. | mixed | mixed |
consul.proxy.web.inbound.conns | Shows the current number of connections open from inbound requests to the proxy. Where supported, a dst label is added indicating the service name the proxy represents. | connections | gauge |
consul.proxy.web.inbound.rx_bytes | Increments by the number of bytes received from an inbound client connection. Where supported, a dst label is added indicating the service name the proxy represents. | bytes | counter |
consul.proxy.web.inbound.tx_bytes | Increments by the number of bytes transferred to an inbound client connection. Where supported, a dst label is added indicating the service name the proxy represents. | bytes | counter |
consul.proxy.web.upstream.conns | Shows the current number of connections open from a proxy instance to an upstream. Where supported, a src label is added indicating the service name the proxy represents, and a dst label is added indicating the service name the upstream is connecting to. | connections | gauge |
consul.proxy.web.upstream.rx_bytes | Increments by the number of bytes received from an upstream connection. Where supported, a src label is added indicating the service name the proxy represents, and a dst label is added indicating the service name the upstream is connecting to. | bytes | counter |
consul.proxy.web.upstream.tx_bytes | Increments by the number of bytes transferred to an upstream connection. Where supported, a src label is added indicating the service name the proxy represents, and a dst label is added indicating the service name the upstream is connecting to. | bytes | counter |
Peering metrics
Requirements:
- Consul 1.13.0+
Cluster peering refers to Consul clusters that communicate through a peer connection, as opposed to a federated connection. Consul collects metrics that describe the number of services exported to a peered cluster. Peering metrics are only emitted by the leader server. These metrics are emitted every 9 seconds.
Metric | Description | Unit | Type |
---|---|---|---|
consul.peering.exported_services | Counts the number of services exported with exported service configuration entries to a peer cluster. | count | gauge |
consul.peering.healthy | Tracks the health of a peering connection as reported by the server. If Consul detects errors while sending or receiving from a peer which do not recover within a reasonable time, this metric returns 0. Healthy connections return 1. | health | gauge |
Labels
Consul attaches the following labels to metric values.
Label Name | Description | Possible values |
---|---|---|
peer_name | The name of the peering on the reporting cluster or leader. | Any defined peer name in the cluster |
peer_id | The ID of a peer connected to the reporting cluster or leader. | Any UUID |
partition | Enterprise only. The name of the partition that the peering is created in. | Any defined partition name in the cluster |
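When scraping these metrics from a Prometheus sink, the peer_name label makes it straightforward to flag unhealthy peerings. A minimal sketch, assuming underscore-mangled metric names and hypothetical sample data:

```python
import re

# Hypothetical exposition lines; names and label values are examples only.
SAMPLE = """\
consul_peering_healthy{peer_name="east",peer_id="a1b2c3"} 1
consul_peering_healthy{peer_name="west",peer_id="d4e5f6"} 0
"""

def unhealthy_peers(text):
    """Return peer_name values whose peering health gauge reads 0."""
    names = []
    for line in text.splitlines():
        m = re.match(r'consul_peering_healthy\{([^}]*)\}\s+(\S+)', line)
        if m and float(m.group(2)) == 0:
            labels = dict(kv.split('=', 1) for kv in m.group(1).split(','))
            names.append(labels['peer_name'].strip('"'))
    return names
```

Remember that only the leader emits these gauges, so a check like this should target the leader's metrics endpoint.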
Server Host Metrics
Consul servers can report the following metrics about the host's system resources. This feature must be enabled in the agent telemetry configuration. Note that if the Consul server is operating inside a container, these metrics still report host resource usage and do not reflect any resource limits placed on the container.
Requirements:
- Consul 1.15.3+
Metric | Description | Unit | Type |
---|---|---|---|
consul.host.memory.total | The total physical memory in bytes | bytes | gauge |
consul.host.memory.available | The available physical memory in bytes | bytes | gauge |
consul.host.memory.free | The free physical memory in bytes | bytes | gauge |
consul.host.memory.used | The used physical memory in bytes | bytes | gauge |
consul.host.memory.used_percent | The used physical memory as a percentage of total physical memory | percent | gauge |
consul.host.cpu.total | The host's total cpu utilization | percent | gauge |
consul.host.cpu.user | The cpu utilization in user space | percent | gauge |
consul.host.cpu.idle | The cpu utilization in idle state | percent | gauge |
consul.host.cpu.iowait | The cpu utilization in iowait state | percent | gauge |
consul.host.cpu.system | The cpu utilization in system space | percent | gauge |
consul.host.disk.size | The size in bytes of the data_dir disk | bytes | gauge |
consul.host.disk.used | The number of bytes used on the data_dir disk | bytes | gauge |
consul.host.disk.available | The number of bytes available on the data_dir disk | bytes | gauge |
consul.host.disk.used_percent | The percentage of disk space used on the data_dir disk | percent | gauge |
consul.host.disk.inodes_percent | The percentage of inode usage on the data_dir disk | percent | gauge |
consul.host.uptime | The uptime of the host in seconds | seconds | gauge |
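Host metrics are off by default. A sketch of the agent configuration to turn them on; the enable_host_metrics option name is an assumption here, so confirm it against the telemetry configuration reference for your Consul version:

```hcl
telemetry {
  # Assumption: enable_host_metrics turns on the consul.host.* gauges above.
  enable_host_metrics = true
}
```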