Merge pull request #13914 from hashicorp/docs/remove-comparisons-from-ref-docs

docs: remove comparative info from ref docs site
Jared Kirschner 2022-07-27 02:42:41 -04:00 committed by GitHub
commit 39480deed0
13 changed files with 2 additions and 592 deletions


@@ -115,7 +115,5 @@ forward the request to the remote datacenter and return the result.
 ## Next Steps
-- See [how Consul compares to other software](/docs/intro/vs) to assess how it fits into your
-  existing infrastructure.
-- Continue onwards with [HashiCorp Learn](https://learn.hashicorp.com/tutorials/consul/get-started-install)
-  to learn more about Consul and how to get Consul up and running.
+Continue onwards with [HashiCorp Learn](https://learn.hashicorp.com/tutorials/consul/get-started-install)
+to learn more about Consul and how to get Consul up and running.


@@ -1,45 +0,0 @@
---
layout: docs
page_title: 'Consul vs. Chef, Puppet, etc.'
description: >-
It is not uncommon to find people using Chef, Puppet, and other configuration
management tools to build service discovery mechanisms. This is usually done
by querying global state to construct configuration files on each node during
a periodic convergence run.
---
# Consul vs. Chef, Puppet, etc.
It is not uncommon to find people using Chef, Puppet, and other configuration
management tools to build service discovery mechanisms. This is usually
done by querying global state to construct configuration files on each
node during a periodic convergence run.
Unfortunately, this approach has
a number of pitfalls. The configuration information is static
and cannot update any more frequently than convergence runs, which generally
occur at intervals of many minutes or hours. Additionally, there is no
mechanism to incorporate the system state in the configuration: nodes which
are unhealthy may receive traffic, exacerbating issues further. Using this
approach also makes supporting multiple datacenters challenging as a central
group of servers must manage all datacenters.
Consul is designed specifically as a service discovery tool. As such,
it is much more dynamic and responsive to the state of the cluster. Nodes
can register and deregister the services they provide, enabling dependent
applications and services to rapidly discover all providers. By using the
integrated health checking, Consul can route traffic away from unhealthy
nodes, allowing systems and services to gracefully recover. Static configuration
that may be provided by configuration management tools can be moved into the
dynamic key/value store. This allows application configuration to be updated
without a slow convergence run. Lastly, because each datacenter runs independently,
supporting multiple datacenters is no different from supporting a single datacenter.
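As a rough sketch of what moving configuration into the key/value store looks like
(the key name and value below are purely illustrative), values can be written and
read on demand rather than baked into files during a convergence run:

```shell
# Write a configuration value into Consul's key/value store
# (key name and value are hypothetical)
consul kv put config/web/database_host db01.internal

# Applications, or tools such as consul-template, read it on demand and
# pick up changes immediately instead of waiting for the next convergence run
consul kv get config/web/database_host
```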
That said, Consul is not a replacement for configuration management tools.
These tools are still critical to set up applications, including Consul itself.
Static provisioning is best managed by existing tools, while dynamic state and
discovery are better managed by Consul. The separation of configuration management
and cluster management also has a number of advantageous side effects: Chef recipes
and Puppet manifests become simpler without global state, periodic runs are no longer
required for service or configuration changes, and the infrastructure can become
immutable since config management runs require no global state.


@@ -1,35 +0,0 @@
---
layout: docs
page_title: Consul vs. Custom Solutions
description: >-
As a codebase grows, a monolithic app often evolves into a Service Oriented
Architecture (SOA). A universal pain point for SOA is service discovery and
configuration. In many cases, this leads to organizations building home-grown
solutions. It is an undisputed fact that distributed systems are hard;
building one is error-prone and time-consuming. Most systems cut corners by
introducing single points of failure such as a single Redis or RDBMS to
maintain cluster state. These solutions may work in the short term, but they
are rarely fault tolerant or scalable. Besides these limitations, they require
time and resources to build and maintain.
---
# Consul vs. Custom Solutions
As a codebase grows, a monolithic app often evolves into a Service Oriented
Architecture (SOA). A universal pain point for SOA is service discovery and
configuration. In many cases, this leads to organizations building home-grown
solutions. It is an undisputed fact that distributed systems are hard; building
one is error-prone and time-consuming. Most systems cut corners by introducing
single points of failure such as a single Redis or RDBMS to maintain cluster
state. These solutions may work in the short term, but they are rarely fault
tolerant or scalable. Besides these limitations, they require time and resources
to build and maintain.
Consul provides the core set of features needed by an SOA out of the box. By
using Consul, organizations can leverage open source work to reduce the time
and effort spent re-inventing the wheel and can focus instead on their business
applications.
Consul is built on well-cited research and is designed with the constraints of
distributed systems in mind. At every step, Consul takes efforts to provide a
robust and scalable solution for organizations of any size.


@@ -1,54 +0,0 @@
---
layout: docs
page_title: Consul vs. Eureka
description: >-
Eureka is a service discovery tool that provides a best effort registry and
discovery service. It uses central servers and clients which are typically
natively integrated with SDKs. Consul provides a superset of features, such
as health checking, key/value storage, ACLs, and multi-datacenter awareness.
---
# Consul vs. Eureka
Eureka is a service discovery tool. The architecture is primarily client/server,
with a set of Eureka servers per datacenter, usually one per availability zone.
Typically clients of Eureka use an embedded SDK to register and discover services.
For clients that are not natively integrated, a sidecar such as Ribbon is used
to transparently discover services via Eureka.
Eureka provides a weakly consistent view of services, using best effort replication.
When a client registers with a server, that server will make an attempt to replicate
to the other servers but provides no guarantee. Service registrations have a short
Time-To-Live (TTL), requiring clients to heartbeat with the servers. Unhealthy services
or nodes will stop heartbeating, causing them to timeout and be removed from the registry.
Discovery requests can route to any service, which can serve stale or missing data due to
the best effort replication. This simplified model allows for easy cluster administration
and high scalability.
Consul provides a superset of features, including richer health checking, a key/value store,
and multi-datacenter awareness. Consul requires a set of servers in each datacenter, along
with an agent on each client, similar to using a sidecar like Ribbon. The Consul agent allows
most applications to be Consul unaware, performing the service registration via configuration
files and discovery via DNS or load balancer sidecars.
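To make the registration-by-configuration model concrete, here is a minimal sketch
(service name, port, and check details are hypothetical): a service definition dropped
into the agent's configuration directory, followed by DNS-based discovery with no SDK
in the application:

```shell
# A service definition picked up by the local Consul agent
# (paths, names, and ports are illustrative)
cat <<'EOF' > /etc/consul.d/web.json
{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
EOF

# Reload the agent so it registers the new service
consul reload

# Any application can then discover healthy instances over DNS
dig @127.0.0.1 -p 8600 web.service.consul SRV
```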
Consul provides a strong consistency guarantee, since servers replicate state using the
[Raft protocol](/docs/architecture/consensus). Consul supports a rich set of health checks
including TCP, HTTP, Nagios/Sensu compatible scripts, or TTL based like Eureka. Client nodes
participate in a [gossip based health check](/docs/architecture/gossip), which distributes
the work of health checking, unlike centralized heartbeating which becomes a scalability challenge.
Discovery requests are routed to the elected Consul leader, which allows them to be strongly consistent
by default. Clients that allow for stale reads enable any server to process their request, allowing
for linear scalability like Eureka.
The strongly consistent nature of Consul means it can be used as a locking service for leader
elections and cluster coordination. Eureka does not provide similar guarantees, and typically
requires running ZooKeeper for services that need to perform coordination or have stronger
consistency needs.
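For example, the `consul lock` helper wraps a session-backed lock around a command,
which is one way to ensure a task runs on only one node at a time (the KV prefix and
command are placeholders):

```shell
# Acquire a lock under the given KV prefix, run the command while the lock
# is held, and release it when the command exits or the node fails
consul lock service/worker/leader ./run-leader-tasks.sh
```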
Consul provides a toolkit of features needed to support a service oriented architecture.
This includes service discovery, but also rich health checking, locking, a key/value store, multi-datacenter
federation, an event system, and ACLs. Both Consul and the ecosystem of tools like consul-template
and envconsul try to minimize the application changes required to integrate, avoiding the need for
native integration via SDKs. Eureka is part of a larger Netflix OSS suite, which expects applications
to be relatively homogeneous and tightly integrated. As a result, Eureka only solves a limited
subset of problems, expecting other tools such as ZooKeeper to be used alongside it.


@@ -1,22 +0,0 @@
---
layout: docs
page_title: Consul vs. Other Software
description: >-
The problems Consul solves are varied, but each individual feature has been
solved by many different systems. Although there is no single system that
provides all the features of Consul, there are other options available to
solve some of these problems.
---
# Consul vs. Other Software
The problems Consul solves are varied, but each individual feature has been
solved by many different systems. Although there is no single system that
provides all the features of Consul, there are other options available to solve
some of these problems.
In this section, we compare Consul to some other options. In most cases, Consul
is not mutually exclusive with any other system.
Use the navigation to the left to read the comparison of Consul to specific
systems.


@@ -1,80 +0,0 @@
---
layout: docs
page_title: Consul vs. Istio
description: >-
Istio is a platform for connecting and securing microservices. This page
describes the similarities and differences between Istio and Consul.
---
# Consul vs. Istio
Istio is an open platform to connect, manage, and secure microservices.
To enable the full functionality of Istio, multiple services must
be deployed: for the control plane, Pilot, Mixer, and Citadel; for the data
plane, an Envoy sidecar. Additionally, Istio requires a third-party service
catalog from Kubernetes, Consul, Eureka, or others, as well as an external
system for storing state, typically etcd. At a minimum, three Istio-dedicated
services, along with at least one separate distributed system, must be
configured to use the full functionality of Istio.
Istio provides layer 7 features for path-based routing, traffic shaping,
load balancing, and telemetry. Access control policies can be configured
targeting both layer 7 and layer 4 properties to control access, routing,
and more based on service identity.
Consul is a single binary providing both server and client capabilities, and
includes all functionality for service catalog, configuration, TLS certificates,
authorization, and more. No additional systems need to be installed to use
Consul, although Consul optionally supports external systems such as Vault
to augment behavior. This architecture enables Consul to be easily installed
on any platform, including directly onto the machine.
Consul uses an agent-based model where each node in the cluster runs a
Consul Client. This client maintains a local cache that is efficiently updated
from servers. As a result, all secure service communication APIs respond in
microseconds and do not require any external communication. This allows
connection enforcement to happen at the edge without communicating with central
servers. Istio flows requests to a central Mixer service and must push
updates out via Pilot. This dramatically reduces the scalability of Istio,
whereas Consul is able to efficiently distribute updates and perform all
work on the edge.
Consul provides layer 7 features for path-based routing, traffic shifting,
load balancing, and telemetry. Consul enforces authorization and identity to
layer 4 only — either the TLS connection can be established or it can't.
We believe service identity should be tied to layer 4, whereas layer 7 should be
used for routing, telemetry, etc. We will be adding more layer 7 features to Consul in the future.
The data plane for Consul is pluggable. It includes a built-in proxy with
a larger performance trade-off for ease of use. But you may also use third-party
proxies such as Envoy to leverage layer 7 features. The ability to use the
right proxy for the job allows flexible heterogeneous deployments where
different proxies may be a better fit for the applications they're proxying. We
encourage users to leverage the pluggable data plane layer and use a proxy which
supports the layer 7 features necessary for the cluster.
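As a brief sketch of the workflow (assuming services named "web" and "db" that are
already registered with Connect sidecars), Envoy is bootstrapped by the local agent
and authorization is expressed against service identity:

```shell
# Start an Envoy sidecar for "web", configured entirely by the local
# Consul agent acting as the control plane (Envoy must be installed)
consul connect envoy -sidecar-for web

# Authorize traffic by service identity: deny connections from web to db
consul intention create -deny web db
```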
In addition to third party proxy support, applications can natively integrate
with the Connect protocol. As a result, the performance overhead of introducing
Connect is negligible. These "Connect-native" applications can interact with
any other Connect-capable services, whether they're using a proxy or are
also Connect-native.
Consul implements automatic TLS certificate management complete with rotation
support. Both leaf and root certificates can be rotated automatically across
a large Consul cluster with zero disruption to connections. The certificate
management system is pluggable through code changes in Consul and will be
exposed as an external plugin system shortly. This enables Consul to work
with any PKI solution.
Because Consul's service connection feature "Connect" is built-in, it
inherits the operational stability of Consul. Consul has been in production
for large companies since 2014 and is known to be deployed on as many as
50,000 nodes in a single cluster.
This comparison is based on our own limited usage of Istio as well as
talking to Istio users. If you feel there are inaccurate statements in this
comparison, please click "Edit This Page" in the footer of this page and
propose edits. We strive for technical accuracy and will review and update
this post for inaccuracies as quickly as possible.


@@ -1,43 +0,0 @@
---
layout: docs
page_title: 'Consul vs. Nagios'
description: >-
Nagios is a tool built for monitoring. It is used to quickly
notify operators when an issue occurs.
---
# Consul vs. Nagios
Nagios is a tool built for monitoring. It is used to quickly notify
operators when an issue occurs.
Nagios uses a group of central servers that are configured to perform
checks on remote hosts. This design makes it difficult to scale Nagios,
as large fleets quickly reach the limit of vertical scaling, and Nagios
does not easily scale horizontally. Nagios is also notoriously
difficult to use with modern DevOps and configuration management tools,
as local configurations must be updated when remote servers are added
or removed.
Consul provides the same health checking abilities as Nagios,
is friendly to modern DevOps, and avoids the inherent scaling issues.
Consul runs all checks locally, avoiding placing a burden on central servers.
The status of checks is maintained by the Consul servers, which are fault
tolerant and have no single point of failure. Lastly, Consul can scale to
vastly more checks because it relies on edge-triggered updates. This means
that an update is only triggered when a check transitions from "passing"
to "failing" or vice versa.
In a large fleet, the majority of checks are passing, and even the minority
that are failing are persistent. By capturing changes only, Consul reduces
the amount of networking and compute resources used by the health checks,
allowing the system to be much more scalable.
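A sketch of a locally run, Nagios-style script check (the script path and timings are
placeholders); the agent executes the check on the node itself and only reports status
transitions to the servers:

```shell
# Requires the agent to allow local script checks, e.g. started with
# -enable-local-script-checks (the exact option may vary by Consul version)
cat <<'EOF' > /etc/consul.d/check-disk.json
{
  "check": {
    "name": "disk-usage",
    "args": ["/usr/local/bin/check_disk.sh"],
    "interval": "30s",
    "timeout": "5s"
  }
}
EOF
consul reload
```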
An astute reader may notice that if a Consul agent dies, then no edge-triggered
updates will occur. From the perspective of other nodes, all checks will appear
to be in a steady state. However, Consul guards against this as well. The
[gossip protocol](/docs/architecture/gossip) used between clients and servers
integrates a distributed failure detector. This means that if a Consul agent fails,
the failure will be detected, and thus all checks being run by that node can be
assumed failed. This failure detector distributes the work across the entire cluster
and, most importantly, enables the edge-triggered architecture to work.


@@ -1,55 +0,0 @@
---
layout: docs
page_title: Consul vs. Envoy and Other Proxies
description: >-
Modern service proxies provide high-level service routing, authentication,
telemetry, and more for microservice and cloud environments. Envoy is a
popular and feature rich proxy. This page describes how Consul relates to
proxies such as Envoy.
---
# Consul vs. Envoy and Other Proxies
Modern service proxies provide high-level service routing, authentication,
telemetry, and more for microservice and cloud environments. Envoy is
a popular and feature-rich proxy that is often
used on its own. Consul [integrates with Envoy](/docs/connect/proxies/envoy) to simplify its configuration.
Proxies require a rich set of configuration to operate since backend
addresses, frontend listeners, routes, filters, telemetry shipping, and
more must all be configured. Further, a modern infrastructure contains
many proxies, often one proxy per service as proxies are deployed in
a "sidecar" model next to a service. Therefore, a primary challenge of
proxies is the configuration sprawl and orchestration.
Proxies form what is referred to as the "data plane": the pathway which
data travels for network connections. Above this is the "control plane"
which provides the rules and configuration for the data plane. Proxies
typically integrate with outside solutions to provide the control plane.
For example, Envoy integrates with Consul to dynamically populate
service backend addresses.
Consul is a control plane solution. The service catalog serves as a registry
for services and their addresses and can be used to route traffic for proxies.
The Connect feature of Consul provides the TLS certificates and service
access graph, but still requires a proxy to exist in the data path. As a
control plane, Consul integrates with many data plane solutions including
Envoy, HAProxy, Nginx, and more.
The [Consul Envoy integration](/docs/connect/proxies/envoy)
is currently the primary way to utilize advanced layer 7 features provided
by Consul. In addition to Envoy, Consul enables
third-party proxies to integrate with Connect and provide the data
plane, with Consul operating as the control plane.
Proxies provide excellent solutions to layer 7 concerns such as path-based
routing, tracing and telemetry, and more. By supporting a pluggable data plane model, the right proxy can be
deployed as needed.
For performance-critical applications or those
that utilize layer 7 functionality, Envoy can be used. For non-performance critical layer 4 applications, you can use Consul's [built-in proxy](/docs/connect/proxies/built-in) for convenience.
For some applications that may require hardware, a hardware load balancer
such as an F5 appliance may be deployed. Consul encourages this use of the right
proxy for the scenario and treats hardware load balancers as swappable components that can be run
alongside other proxies, assuming they integrate with the [necessary APIs](/docs/connect/proxies/integrate)
for Connect.
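As an illustration of that pluggability (assuming a service named "web" already
registered with a sidecar), the same registration can be fronted either by the
built-in proxy or by Envoy:

```shell
# Convenience option: the built-in layer 4 proxy
consul connect proxy -sidecar-for web

# Performance and layer 7 option: an Envoy sidecar bootstrapped by Consul
consul connect envoy -sidecar-for web
```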


@@ -1,54 +0,0 @@
---
layout: docs
page_title: Consul vs. Serf
description: >-
Serf is a node discovery and orchestration tool and is the only tool discussed
so far that is built on an eventually-consistent gossip model with no
centralized servers. It provides a number of features, including group
membership, failure detection, event broadcasts, and a query mechanism.
However, Serf does not provide any high-level features such as service
discovery, health checking or key/value storage. Consul is a complete system
providing all of those features.
---
# Consul vs. Serf
[Serf](https://www.serf.io) is a node discovery and orchestration tool and is the only
tool discussed so far that is built on an eventually-consistent gossip model
with no centralized servers. It provides a number of features, including group
membership, failure detection, event broadcasts, and a query mechanism. However,
Serf does not provide any high-level features such as service discovery, health
checking or key/value storage. Consul is a complete system providing all of those
features.
The internal [gossip protocol](/docs/architecture/gossip) used within Consul is in
fact powered by the Serf library: Consul leverages the membership and failure detection
features and builds upon them to add service discovery. By contrast, the discovery
feature of Serf is at a node level, while Consul provides a service and node level
abstraction.
The health checking provided by Serf is very low level and only indicates if the
agent is alive. Consul extends this to provide a rich health checking system
that handles liveness in addition to arbitrary host and service-level checks.
Health checks are integrated with a central catalog that operators can easily
query to gain insight into the cluster.
The membership provided by Serf is at a node level, while Consul focuses
on the service level abstraction, mapping single nodes to multiple services.
This can be simulated in Serf using tags, but it is much more limited and does
not provide useful query interfaces. Consul also makes use of a strongly-consistent
catalog while Serf is only eventually-consistent.
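For example, service-level queries can filter on tags through either the HTTP API
or DNS (service and tag names here are hypothetical):

```shell
# Healthy instances of "db" tagged "primary", via the HTTP API
curl 'http://localhost:8500/v1/health/service/db?tag=primary&passing'

# The same filter expressed as a DNS lookup: <tag>.<service>.service.consul
dig @127.0.0.1 -p 8600 primary.db.service.consul SRV
```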
In addition to the service level abstraction and improved health checking,
Consul provides a key/value store and support for multiple datacenters.
Serf can run across the WAN but with degraded performance. Consul makes use
of [multiple gossip pools](/docs/architecture) so that
the performance of Serf over a LAN can be retained while still using it over
a WAN for linking together multiple datacenters.
Consul is opinionated in its usage while Serf is a more flexible and
general purpose tool. In [CAP](https://en.wikipedia.org/wiki/CAP_theorem) terms,
Consul uses a CP architecture, favoring consistency over availability. Serf is an
AP system and sacrifices consistency for availability. This means Consul cannot
operate if the central servers cannot form a quorum while Serf will continue to
function under almost all circumstances.


@@ -1,45 +0,0 @@
---
layout: docs
page_title: Consul vs. SkyDNS
description: >-
SkyDNS is a tool designed to provide service discovery. It uses multiple
central servers that are strongly-consistent and fault-tolerant. Nodes
register services using an HTTP API, and queries can be made over HTTP or DNS
to perform discovery.
---
# Consul vs. SkyDNS
SkyDNS is a tool designed to provide service discovery.
It uses multiple central servers that are strongly-consistent and
fault-tolerant. Nodes register services using an HTTP API, and
queries can be made over HTTP or DNS to perform discovery.
Consul is very similar but provides a superset of features. Consul
also relies on multiple central servers to provide strong consistency
and fault tolerance. Nodes can use an HTTP API or use an agent to
register services, and queries are made over HTTP or DNS.
However, the systems differ in many ways. Consul provides a much richer
health checking framework with support for arbitrary checks and
a highly scalable failure detection scheme. SkyDNS relies on naive
heartbeating and TTLs, an approach which has known scalability issues.
Additionally, the heartbeat only provides a limited liveness check
versus the rich health checks that Consul performs.
Multiple datacenters can be supported by using "regions" in SkyDNS;
however, the data is managed and queried from a single cluster. If servers
are split between datacenters, the replication protocol will suffer from
very long commit times. If all the SkyDNS servers are in a central datacenter,
then connectivity issues can cause entire datacenters to lose availability.
Additionally, even without a connectivity issue, query performance will
suffer as requests must always be performed in a remote datacenter.
Consul supports multiple datacenters out of the box, and it purposely
scopes the managed data to be per-datacenter. This means each datacenter
runs an independent cluster of servers. Requests are forwarded to remote
datacenters if necessary; requests for services within a datacenter
never go over the WAN, and connectivity issues between datacenters do not
affect availability within a datacenter. Additionally, the unavailability
of one datacenter does not affect the discovery of services
in any other datacenter.
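A short sketch of that scoping (datacenter and service names are hypothetical):
queries resolve against the local datacenter by default and only cross the WAN when
another datacenter is named explicitly:

```shell
# Ask for "web" in a specific remote datacenter ("dc2"); the local servers
# forward the request over the WAN
dig @127.0.0.1 -p 8600 web.service.dc2.consul SRV

# The equivalent HTTP API query, scoped to the remote datacenter
curl 'http://localhost:8500/v1/health/service/web?dc=dc2'
```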


@@ -1,64 +0,0 @@
---
layout: docs
page_title: Consul vs. SmartStack
description: >-
SmartStack is a tool which tackles the service discovery problem. It has a
rather unique architecture and has 4 major components: ZooKeeper, HAProxy,
Synapse, and Nerve. The ZooKeeper servers are responsible for storing cluster
state in a consistent and fault-tolerant manner. Each node in the SmartStack
cluster then runs both Nerves and Synapses. The Nerve is responsible for
running health checks against a service and registering with the ZooKeeper
servers. Synapse queries ZooKeeper for service providers and dynamically
configures HAProxy. Finally, clients speak to HAProxy, which does health
checking and load balancing across service providers.
---
# Consul vs. SmartStack
SmartStack is a tool which tackles the service discovery problem. It has a rather
unique architecture and has 4 major components: ZooKeeper, HAProxy, Synapse, and Nerve.
The ZooKeeper servers are responsible for storing cluster state in a consistent and
fault-tolerant manner. Each node in the SmartStack cluster then runs both Nerves and
Synapses. The Nerve is responsible for running health checks against a service and
registering with the ZooKeeper servers. Synapse queries ZooKeeper for service providers
and dynamically configures HAProxy. Finally, clients speak to HAProxy, which does
health checking and load balancing across service providers.
Consul is a much simpler and more contained system as it does not rely on any external
components. Consul uses an integrated [gossip protocol](/docs/architecture/gossip)
to track all nodes and perform server discovery. This means that server addresses
do not need to be hardcoded and updated fleet-wide on changes, unlike SmartStack.
Service registration for both Consul and Nerves can be done with a configuration file,
but Consul also supports an API to dynamically change the services and checks that are
in use.
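A rough sketch of that dynamic path (service name and port are placeholders): the
same definition that could live in a configuration file can also be submitted to the
local agent's HTTP API at runtime:

```shell
# Register (or update) a service on the local agent without restarting it
curl -X PUT http://localhost:8500/v1/agent/service/register \
  -d '{
    "Name": "web",
    "Port": 8080,
    "Check": {
      "HTTP": "http://localhost:8080/health",
      "Interval": "10s"
    }
  }'

# Remove it again when the service shuts down
curl -X PUT http://localhost:8500/v1/agent/service/deregister/web
```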
For discovery, SmartStack clients must use HAProxy, requiring that Synapse be
configured with all desired endpoints in advance. Consul clients instead
use the DNS or HTTP APIs without any configuration needed in advance. Consul
also provides a "tag" abstraction, allowing services to provide metadata such
as versions, primary/secondary designations, or opaque labels that can be used for
filtering. Clients can then request only the service providers which have
matching tags.
The systems also differ in how they manage health checking. Nerve performs local health
checks in a manner similar to Consul agents. However, Consul maintains separate catalog
and health systems. This division allows operators to see which nodes are in each service
pool and provides insight into failing checks. Nerve simply deregisters nodes on failed
checks, providing limited operational insight. Synapse also configures HAProxy to perform
additional health checks. This causes all potential service clients to check for
liveness. With large fleets, this N-to-N style health checking may be prohibitively
expensive.
Consul generally provides a much richer health checking system. Consul supports
Nagios-style plugins, enabling a vast catalog of checks to be used. Consul allows for
both service- and host-level checks. There is even a "dead man's switch" check that allows
applications to easily integrate custom health checks. Finally, all of this is integrated
into a Health and Catalog system with APIs enabling operators to gain insight into the
broader system.
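The "dead man's switch" style check mentioned above is a TTL check: the application
must report in before the TTL expires, otherwise the check turns critical. A minimal
sketch (check name and TTL are illustrative):

```shell
# Register a TTL check; it fails automatically if it is not refreshed in time
curl -X PUT http://localhost:8500/v1/agent/check/register \
  -d '{ "Name": "worker-heartbeat", "ID": "worker-heartbeat", "TTL": "30s" }'

# The application periodically reports success to reset the timer
curl -X PUT http://localhost:8500/v1/agent/check/pass/worker-heartbeat
```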
In addition to the service discovery and health checking, Consul also provides
an integrated key/value store for configuration and multi-datacenter support.
While it may be possible to configure SmartStack for multiple datacenters,
the central ZooKeeper cluster would be a serious impediment to a fault-tolerant
deployment.


@@ -19,51 +19,6 @@
"path": "intro/usecases/what-is-a-service-mesh"
}
]
},
{
"title": "Consul vs. Other Software",
"routes": [
{
"title": "Overview",
"path": "intro/vs"
},
{
"title": "Chef, Puppet, etc.",
"path": "intro/vs/chef-puppet"
},
{
"title": "Nagios",
"path": "intro/vs/nagios"
},
{
"title": "SkyDNS",
"path": "intro/vs/skydns"
},
{
"title": "SmartStack",
"path": "intro/vs/smartstack"
},
{
"title": "Serf",
"path": "intro/vs/serf"
},
{
"title": "Eureka",
"path": "intro/vs/eureka"
},
{
"title": "Istio",
"path": "intro/vs/istio"
},
{
"title": "Envoy and Other Proxies",
"path": "intro/vs/proxies"
},
{
"title": "Custom Solutions",
"path": "intro/vs/custom"
}
]
}
]
},


@@ -291,52 +291,6 @@ module.exports = [
permanent: true,
},
{ source: '/intro', destination: '/docs/intro', permanent: true },
{ source: '/intro/vs', destination: '/docs/intro/vs', permanent: true },
{
source: '/intro/vs/chef-puppet',
destination: '/docs/intro/vs/chef-puppet',
permanent: true,
},
{
source: '/intro/vs/nagios',
destination: '/docs/intro/vs/nagios',
permanent: true,
},
{
source: '/intro/vs/skydns',
destination: '/docs/intro/vs/skydns',
permanent: true,
},
{
source: '/intro/vs/smartstack',
destination: '/docs/intro/vs/smartstack',
permanent: true,
},
{
source: '/intro/vs/serf',
destination: '/docs/intro/vs/serf',
permanent: true,
},
{
source: '/intro/vs/eureka',
destination: '/docs/intro/vs/eureka',
permanent: true,
},
{
source: '/intro/vs/istio',
destination: '/docs/intro/vs/istio',
permanent: true,
},
{
source: '/intro/vs/proxies',
destination: '/docs/intro/vs/proxies',
permanent: true,
},
{
source: '/intro/vs/custom',
destination: '/docs/intro/vs/custom',
permanent: true,
},
{
source: '/docs/k8s/ambassador',
destination: