website: Documentation cleanup

This commit is contained in:
Armon Dadgar 2014-04-09 11:06:27 -07:00
parent 4ed6972862
commit 911ce92cf3
12 changed files with 32 additions and 28 deletions

View File

@ -24,7 +24,7 @@ There are two different kinds of checks:
set to the failed state. This mechanism is used to allow an application to
directly report its health. For example, a web app can periodically curl the
endpoint, and if the app fails, then the TTL will expire and the health check
enters a critical state.
enters a critical state. This is conceptually similar to a dead man's switch.
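As a sketch of this heartbeat pattern (assuming a local agent on the default HTTP port and a hypothetical check ID of "web-app"):

```shell
# Heartbeat sketch: the application refreshes its TTL check before it
# expires. "web-app" is a hypothetical check ID; localhost:8500 is the
# agent's default HTTP address.
CHECK_ID="web-app"
PASS_URL="http://localhost:8500/v1/agent/check/pass/${CHECK_ID}"
# A live app would hit this endpoint on a timer, e.g.:
#   curl "$PASS_URL"
# If the heartbeats stop, the TTL expires and the check turns critical.
echo "$PASS_URL"
```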
## Check Definition

View File

@ -72,7 +72,7 @@ the node.
## Service Lookups
A service lookup is the alternate type of query. It is used to query for service
providers. The format of a service lookup is more complex and is like the following:
providers. The format of a service lookup is like the following:
<tag>.<service>.service.<datacenter>.<domain>
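For example, a small helper that composes a lookup name in this format (the tag, service, and datacenter values below are hypothetical):

```shell
# Compose a tag-based service lookup name per the format above.
service_lookup() {
  tag=$1; service=$2; dc=$3; domain=$4
  echo "${tag}.${service}.service.${dc}.${domain}"
}
# Against a live agent this name would be resolved via the DNS
# interface, e.g.:
#   dig @127.0.0.1 -p 8600 master.redis.service.dc1.consul SRV
service_lookup master redis dc1 consul
```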

View File

@ -53,7 +53,11 @@ a single endpoint:
This is the only endpoint that is used with the Key/Value store.
Its use depends on the HTTP method. The `GET`, `PUT` and `DELETE` methods
are all supported.
are all supported. It is important to note that each datacenter has its
own K/V store, and that there is no replication between datacenters.
By default, the datacenter of the agent is queried; however, the dc can
be provided using the "?dc=" query parameter. If a client wants to write
to all datacenters, one request per datacenter must be made.
When using the `GET` method, Consul will return the specified key,
or if the "?recurse" query parameter is provided, it will return
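The per-datacenter write behavior can be sketched as follows (the key "web/key1" and datacenters "dc1"/"dc2" are hypothetical):

```shell
# Sketch: since each datacenter has its own K/V store, a write intended
# for all datacenters is one request per datacenter.
BASE="http://localhost:8500/v1/kv/web/key1"
URLS=""
for dc in dc1 dc2; do
  # Against a live cluster, each write would be issued as:
  #   curl -X PUT -d 'value' "${BASE}?dc=${dc}"
  URLS="${URLS}${BASE}?dc=${dc} "
done
echo "$URLS"
```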

View File

@ -10,10 +10,9 @@ The Consul agent provides a complete RPC mechanism that can
be used to control the agent programmatically. This RPC
mechanism is the same one used by the CLI, but can be
used by other applications to easily leverage the power
of Consul without directly embedding. Additionally, it can
be used as a fast IPC mechanism to allow applications to
receive events immediately instead of using the fork/exec
model of event handlers.
of Consul without directly embedding. It is important to note
that the RPC protocol does not support all the same operations
as the [HTTP API](/docs/agent/http.html).
## Implementation Details

View File

@ -19,7 +19,7 @@ In general, the telemetry information is used for debugging or otherwise
getting a better view into what Consul is doing.
Additionally, if the `-statsite` [option](/docs/agent/options.html) is provided,
then the telemetry information will be streamed to a [statsite](github.com/armon/statsite)
then the telemetry information will be streamed to a [statsite](http://github.com/armon/statsite)
server where it can be aggregated and flushed to Graphite or any other metrics store.
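As a sketch, the agent might be started with the option like so (the statsite address is an assumption, not from the docs):

```shell
# Hypothetical invocation: stream telemetry to a statsite server.
# 127.0.0.1:8125 is an assumed address for a locally running statsite.
CMD="consul agent -statsite=127.0.0.1:8125"
# (shown as a string here; a live run needs a reachable statsite)
echo "$CMD"
```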
Below is an example output:

View File

@ -14,7 +14,7 @@ eventually rejoin the cluster. The true purpose of this method is to force
remove "failed" nodes.
Consul periodically tries to reconnect to "failed" nodes in case the failure is a
network partition. After some configured amount of time (by default 24 hours),
network partition. After some configured amount of time (by default 72 hours),
Consul will reap "failed" nodes and stop trying to reconnect. The `force-leave`
command can be used to transition the "failed" nodes to "left" nodes more
quickly.
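A sketch of the command ("node10" is a hypothetical node name):

```shell
# Force a "failed" node into the "left" state ahead of the reap
# timeout, so it is no longer a reconnection target.
CMD="consul force-leave node10"
# Against a live agent: consul force-leave node10
echo "$CMD"
```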

View File

@ -12,6 +12,14 @@ The info command provides various debugging information that can be
useful to operators. Depending on whether the agent is a client or server,
information about different sub-systems will be returned.
There are currently top-level keys for:
* agent: Provides information about the agent
* consul: Information about the consul library (client or server)
* raft: Provides info about the Raft [consensus library](/docs/internals/consensus.html)
* serf_lan: Provides info about the LAN [gossip pool](/docs/internals/gossip.html)
* serf_wan: Provides info about the WAN [gossip pool](/docs/internals/gossip.html)
Here is an example output:
agent:
@ -56,14 +64,6 @@ Here is an example output:
query_queue = 0
query_time = 1
There are currently the top-level keys for:
* agent: Provides information about the agent
* consul: Information about the consul library (client or server)
* raft: Provides info about the Raft [consensus library](/docs/internals/consensus.html)
* serf_lan: Provides info about the LAN [gossip pool](/docs/internals/gossip.html)
* serf_wan: Provides info about the WAN [gossip pool](/docs/internals/gossip.html)
## Usage
Usage: `consul info`

View File

@ -6,7 +6,7 @@ sidebar_current: "docs-guides"
# Consul Guides
This section provides various guides for common actions. Due to the complex nature
This section provides various guides for common actions. Due to the nature
of Consul, some of these procedures can be complex, so our goal is to provide
guidance on doing them safely.
@ -15,10 +15,10 @@ The following guides are available:
* [Bootstrapping](/docs/guides/bootstrapping.html) - This guide covers bootstrapping a new
datacenter. This covers safely adding the initial Consul servers.
* [External Services](/docs/guides/external.html) - This guide covers registering
an external service. This allows using 3rd party services within the Consul framework.
* TODO: Adding and removing servers
* TODO: Joining datacenters
* [External Services](/docs/guides/external.html) - This guide covers registering
an external service. This allows using 3rd party services within the Consul framework.

View File

@ -99,12 +99,12 @@ The server nodes also operate as part of a WAN gossip. This pool is different fr
as it is optimized for the higher latency of the internet, and is expected to only contain
other Consul server nodes. The purpose of this pool is to allow datacenters to discover each
other in a low touch manner. Bringing a new datacenter online is as easy as joining the existing
WAN gossip. Because the servers are all operating in this pool, it also enables cross-dc requests.
WAN gossip. Because the servers are all operating in this pool, it also enables cross-datacenter requests.
When a server receives a request for a different datacenter, it forwards it to a random server
in the correct datacenter. That server may then forward to the local leader.
This results in a very low coupling between datacenters, but because of failure detection,
connection caching and multiplexing, cross-dc requests are relatively fast and reliable.
connection caching and multiplexing, cross-datacenter requests are relatively fast and reliable.
## Getting in depth

View File

@ -7,7 +7,7 @@ sidebar_current: "docs-internals-consensus"
# Consensus Protocol
Consul uses a [consensus protocol](http://en.wikipedia.org/wiki/Consensus_(computer_science))
to provide [Consistency and Availability](http://en.wikipedia.org/wiki/CAP_theorem) as defined by CAP.
to provide [Consistency](http://en.wikipedia.org/wiki/CAP_theorem) as defined by CAP.
This page documents the details of this internal protocol. The consensus protocol is based on
["Raft: In search of an Understandable Consensus Algorithm"](https://ramcloud.stanford.edu/wiki/download/attachments/11370504/raft.pdf).
@ -114,7 +114,8 @@ to agree on an entry instead of a handful.
When getting started, a single Consul server is put into "bootstrap" mode. This mode
allows it to self-elect as a leader. Once a leader is elected, other servers can be
added to the peer set in a way that preserves consistency and safety. Eventually,
bootstrap mode can be disabled, once the first few servers are added.
bootstrap mode can be disabled, once the first few servers are added. See [this
guide](/docs/guides/bootstrapping.html) for more details.
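A sketch of this sequence (the data directory and join address are assumptions; the bootstrapping guide is the authoritative procedure):

```shell
# Sketch: the first server runs in bootstrap mode and self-elects;
# later servers join it without the bootstrap flag.
FIRST="consul agent -server -bootstrap -data-dir=/tmp/consul"
NEXT="consul agent -server -data-dir=/tmp/consul -join=10.0.0.1"
echo "$FIRST"
echo "$NEXT"
```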
Since all servers participate as part of the peer set, they all know the current
leader. When an RPC request arrives at a non-leader server, the request is

View File

@ -33,7 +33,7 @@ event broadcasts for events like leader election.
The WAN pool is globally unique, as all servers should participate in the WAN pool
regardless of datacenter. Membership information provided by the WAN pool allows
servers to perform cross datacenter requests. THe integrated failure detection
servers to perform cross datacenter requests. The integrated failure detection
allows Consul to gracefully handle an entire datacenter losing connectivity, or just
a single server in a remote datacenter.

View File

@ -8,10 +8,10 @@ sidebar_current: "docs-internals-security"
Consul relies on both a lightweight gossip mechanism and an RPC system
to provide various features. Both of the systems have different security
mechanisms that stem from their independent designs. However, the goals
mechanisms that stem from their designs. However, the goals
of Consul's security are to provide [confidentiality, integrity and authentication](http://en.wikipedia.org/wiki/Information_security).
The [gossip protocol](/docs/internals/gossip.html) is powered by Serf,
The [gossip protocol](/docs/internals/gossip.html) is powered by [Serf](http://www.serfdom.io/),
which uses a symmetric key, or shared secret, cryptosystem. More details on
Serf's security are [available here](http://www.serfdom.io/docs/internals/security.html).
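A sketch of setting up the shared key (the flag usage here is an assumption based on the shared-secret design described above):

```shell
# Sketch: generate a shared secret with "consul keygen" and start an
# agent with gossip encryption enabled. The exact -encrypt usage is an
# assumption, not taken from this page.
GEN="consul keygen"
RUN='consul agent -encrypt=<key> -data-dir=/tmp/consul'
# On a live system:
#   KEY="$(consul keygen)"
#   consul agent -encrypt="$KEY" -data-dir=/tmp/consul
echo "$GEN"
echo "$RUN"
```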