docs: Removal of Consul vs ZooKeeper analysis (#10469)

* docs: Removal of Consul vs ZooKeeper

Although Consul does have a KV store, we are not positioning Consul as a first-class KV store versus alternatives such as etcd or ZooKeeper. Removing this page, since it has not been updated with further analysis since the content was first written.

* Removing ZooKeeper analysis from navbar
* Removing ZooKeeper analysis from redirects
David Yu 2021-06-24 07:23:57 -07:00 committed by GitHub
parent 722e8398ce
commit 3c1fda212a
3 changed files with 1 addition and 78 deletions

@@ -1,68 +0,0 @@
---
layout: docs
page_title: 'Consul vs. ZooKeeper, doozerd, etcd'
description: >-
  ZooKeeper, doozerd, and etcd are all similar in their architecture. All three
  have server nodes that require a quorum of nodes to operate (usually a simple
  majority). They are strongly-consistent and expose various primitives that can
  be used through client libraries within applications to build complex
  distributed systems.
---

# Consul vs. ZooKeeper, doozerd, etcd

ZooKeeper, doozerd, and etcd are all similar in their architecture. All three have
server nodes that require a quorum of nodes to operate (usually a simple majority).
They are strongly-consistent and expose various primitives that can be used through
client libraries within applications to build complex distributed systems.

Consul also uses server nodes within a single datacenter. In each datacenter, Consul
servers require a quorum to operate and provide strong consistency. However, Consul
has native support for multiple datacenters as well as a more feature-rich gossip
system that links server nodes and clients.

All of these systems have roughly the same semantics when providing key/value storage:
reads are strongly consistent and availability is sacrificed for consistency in the
face of a network partition. However, the differences become more apparent when these
systems are used for advanced cases.

The semantics provided by these systems are attractive for building service discovery
systems, but it's important to stress that these features must be built. ZooKeeper et
al. provide only a primitive K/V store and require that application developers build
their own system to provide service discovery. Consul, by contrast, provides an
opinionated framework for service discovery and eliminates the guess-work and
development effort. Clients simply register services and then perform discovery using
a DNS or HTTP interface. Other systems require a home-rolled solution.
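
To make that concrete, here is a minimal sketch of the register-then-discover flow using Consul's official Go client (`github.com/hashicorp/consul/api`), assuming a local agent on its default address; the service name and port are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local Consul agent (default 127.0.0.1:8500).
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register a service with the local agent.
	err = client.Agent().ServiceRegister(&api.AgentServiceRegistration{
		Name: "web",
		Port: 8080,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Discover healthy instances of the service over the HTTP API.
	entries, _, err := client.Health().Service("web", "", true, nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, entry := range entries {
		fmt.Println(entry.Service.Address, entry.Service.Port)
	}
}
```

The same lookup needs no code at all through the DNS interface, e.g. `dig @127.0.0.1 -p 8600 web.service.consul`.
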
A compelling service discovery framework must incorporate health checking and the
possibility of failures as well. It is not useful to know that Node A provides the Foo
service if that node has failed or the service crashed. Naive systems make use of
heartbeating, using periodic updates and TTLs. These schemes require work linear
to the number of nodes and place the demand on a fixed number of servers. Additionally,
the failure detection window is at least as long as the TTL.
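
Consul's own TTL check type makes the client-side cost of this pattern easy to see; a minimal sketch with the Go client (check name and intervals are illustrative):

```go
package main

import (
	"log"
	"time"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register a TTL check: Consul marks it critical unless the
	// client renews it before the TTL expires.
	err = client.Agent().CheckRegister(&api.AgentCheckRegistration{
		Name:              "web-ttl",
		AgentServiceCheck: api.AgentServiceCheck{TTL: "15s"},
	})
	if err != nil {
		log.Fatal(err)
	}

	// The heartbeat loop every instance must run forever; if one
	// window is missed the check goes critical, and the failure
	// detection delay is always at least one full TTL.
	for range time.Tick(10 * time.Second) {
		if err := client.Agent().UpdateTTL("web-ttl", "ok", api.HealthPassing); err != nil {
			log.Print(err)
		}
	}
}
```
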
ZooKeeper provides ephemeral nodes which are K/V entries that are removed when a client
disconnects. These are more sophisticated than a heartbeat system but still have
inherent scalability issues and add client-side complexity. All clients must maintain
active connections to the ZooKeeper servers and perform keep-alives. Additionally, this
requires "thick clients" which are difficult to write and often result in debugging
challenges.
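
For comparison, here is a sketch of the ephemeral-node pattern using a community Go client for ZooKeeper (`github.com/go-zookeeper/zk`); the address and paths are illustrative, and the parent path is assumed to already exist:

```go
package main

import (
	"log"
	"time"

	"github.com/go-zookeeper/zk"
)

func main() {
	// The "thick client" requirement: a live session must be held
	// open, with keep-alives, for the ephemeral node to survive.
	conn, _, err := zk.Connect([]string{"127.0.0.1:2181"}, 5*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The ephemeral node only proves the session is alive, not
	// that the registered service is actually healthy.
	_, err = conn.Create("/services/web/node-a", []byte("10.0.0.1:8080"),
		zk.FlagEphemeral, zk.WorldACL(zk.PermAll))
	if err != nil {
		log.Fatal(err)
	}

	select {} // keep the session, and therefore the node, alive
}
```
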
Consul uses a very different architecture for health checking. Instead of only having
server nodes, Consul clients run on every node in the cluster. These clients are part
of a [gossip pool](/docs/internals/gossip) which serves several functions,
including distributed health checking. The gossip protocol implements an efficient
failure detector that can scale to clusters of any size without concentrating the work
on any select group of servers. The clients also enable a much richer set of health
checks to be run locally, whereas ZooKeeper ephemeral nodes are a very primitive check
of liveness. With Consul, clients can check that a web server is returning 200 status
codes, that memory utilization is not critical, that there is sufficient disk space,
etc. The Consul clients expose a simple HTTP interface and avoid exposing the complexity
of the system to clients in the same way as ZooKeeper.
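
A sketch of such a local check with the Go client: the agent itself polls the service's HTTP endpoint on the given interval, and only a 2xx response counts as passing (endpoint and interval are illustrative):

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register a service together with an HTTP health check that
	// the local agent runs; no keep-alive connection is needed.
	err = client.Agent().ServiceRegister(&api.AgentServiceRegistration{
		Name: "web",
		Port: 8080,
		Check: &api.AgentServiceCheck{
			HTTP:     "http://127.0.0.1:8080/health",
			Interval: "10s",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```
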
Consul provides first-class support for service discovery, health checking,
K/V storage, and multiple datacenters. To support anything more than simple K/V storage,
all these other systems require additional tools and libraries to be built on
top. By using client nodes, Consul provides a simple API that only requires thin clients.
Additionally, the API can be avoided entirely by using configuration files and the
DNS interface to have a complete service discovery solution with no development at all.
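
The K/V store is reachable through the same thin client; a minimal sketch (key and value are placeholders):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	kv := client.KV()

	// Write a key; the agent forwards the request to the servers.
	_, err = kv.Put(&api.KVPair{Key: "config/feature-x", Value: []byte("on")}, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Read it back with a strongly consistent (default) read.
	pair, _, err := kv.Get("config/feature-x", nil)
	if err != nil {
		log.Fatal(err)
	}
	if pair != nil {
		fmt.Printf("%s = %s\n", pair.Key, pair.Value)
	}
}
```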

@@ -13,10 +13,6 @@
"title": "Overview",
"path": "intro/vs"
},
{
"title": "ZooKeeper, doozerd, etcd",
"path": "intro/vs/zookeeper"
},
{
"title": "Chef, Puppet, etc.",
"path": "intro/vs/chef-puppet"
@@ -991,4 +987,4 @@
"path": "guides",
"hidden": true
}
]
]

@@ -255,11 +255,6 @@ module.exports = [
  },
  { source: '/intro', destination: '/docs/intro', permanent: true },
  { source: '/intro/vs', destination: '/docs/intro/vs', permanent: true },
  {
    source: '/intro/vs/zookeeper',
    destination: '/docs/intro/vs/zookeeper',
    permanent: true,
  },
  {
    source: '/intro/vs/chef-puppet',
    destination: '/docs/intro/vs/chef-puppet',