---
layout: 'intro'
page_title: 'Consul vs. Nagios, Sensu'
sidebar_current: 'vs-other-nagios-sensu'
description: |-
  Nagios and Sensu are both tools built for monitoring. They are used to
  quickly notify operators when an issue occurs.
---

# Consul vs. Nagios, Sensu

Nagios and Sensu are both tools built for monitoring. They are used
to quickly notify operators when an issue occurs.

Nagios uses a group of central servers that are configured to perform
checks on remote hosts. This design makes it difficult to scale Nagios,
as large fleets quickly reach the limit of vertical scaling, and Nagios
does not easily scale horizontally. Nagios is also notoriously
difficult to use with modern DevOps and configuration management tools,
as local configurations must be updated when remote servers are added
or removed.

Sensu has a much more modern design, relying on local agents to run
checks and push results to an AMQP broker. A number of servers ingest
and handle the result of the health checks from the broker. This model
is more scalable than Nagios, as it allows for much more horizontal
scaling and a weaker coupling between the servers and agents. However,
the central broker has scaling limits and acts as a single point of
failure in the system.

Consul provides the same health checking abilities as both Nagios and Sensu,
is friendly to modern DevOps, and avoids the scaling issues inherent in the
other systems. Consul runs all checks locally, like Sensu, avoiding placing
a burden on central servers. The status of checks is maintained by the Consul
servers, which are fault tolerant and have no single point of failure.
Lastly, Consul can scale to vastly more checks because it relies on edge-triggered
updates. This means that an update is only triggered when a check transitions
from "passing" to "failing" or vice versa.
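
As an illustration of locally run checks, a check definition can be dropped
into the Consul agent's configuration directory. A minimal HTTP check might
look like the following; the endpoint, interval, and timeout values here are
placeholders, not recommendations:

```json
{
  "check": {
    "id": "api-health",
    "name": "HTTP health endpoint",
    "http": "http://localhost:8080/health",
    "interval": "10s",
    "timeout": "1s"
  }
}
```

The agent runs this check itself and only reports state transitions to the
Consul servers.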

In a large fleet, the majority of checks are passing, and even the minority
that are failing are persistent. By capturing changes only, Consul reduces
the amount of networking and compute resources used by the health checks,
allowing the system to be much more scalable.
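
The edge-triggered idea can be sketched in a few lines of Go. This is not
Consul's actual implementation or API; the check names and status strings are
purely illustrative. An update is emitted only when a check's status differs
from the last reported state:

```go
package main

import "fmt"

// reportEdges compares the latest check results against the previously
// reported state and returns only the transitions (the "edges"),
// recording the new state as it goes. In a level-triggered system,
// every result would be reported on every round instead.
func reportEdges(prev, latest map[string]string) []string {
	var updates []string
	for check, status := range latest {
		if prev[check] != status {
			updates = append(updates, fmt.Sprintf("%s: %s -> %s", check, prev[check], status))
			prev[check] = status
		}
	}
	return updates
}

func main() {
	state := map[string]string{"web": "passing", "db": "passing"}

	// Round 1: db transitions to failing; only that edge is reported.
	fmt.Println(reportEdges(state, map[string]string{"web": "passing", "db": "failing"}))
	// Round 2: nothing has changed, so no updates are sent at all.
	fmt.Println(reportEdges(state, map[string]string{"web": "passing", "db": "failing"}))
}
```

Because most checks in a large fleet sit in a steady state, most rounds
produce an empty update set, which is where the resource savings come from.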

An astute reader may notice that if a Consul agent dies, then no edge-triggered
updates will occur. From the perspective of other nodes, all checks will appear
to be in a steady state. However, Consul guards against this as well. The
[gossip protocol](/docs/internals/gossip.html) used between clients and servers
integrates a distributed failure detector. This means that if a Consul agent fails,
the failure will be detected, and thus all checks being run by that node can be
assumed failed. This failure detector distributes the work among the entire cluster
while, most importantly, enabling the edge-triggered architecture to work.
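
A minimal sketch of that fallback, again with illustrative names rather than
Consul's real internals: once the failure detector declares a node dead, every
check that node's agent was running is flipped to failed in one step, with no
edge-triggered updates from the dead agent required.

```go
package main

import "fmt"

// checksByNode maps each node to the checks its agent runs locally.
// The node and check names are hypothetical.
var checksByNode = map[string][]string{
	"node-a": {"web", "db"},
}

// onNodeFailed models the failure-detector callback: when the gossip
// layer reports a node as failed, all of its checks are assumed failed.
func onNodeFailed(status map[string]string, node string) {
	for _, check := range checksByNode[node] {
		status[check] = "critical"
	}
}

func main() {
	status := map[string]string{"web": "passing", "db": "passing"}
	onNodeFailed(status, "node-a")
	fmt.Println(status["web"], status["db"]) // critical critical
}
```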