Merge pull request #500 from d3xf/pr-minor-doc-fixes
Minor documentation fixes
commit 7a5b83bf24
@@ -72,7 +72,7 @@ There are several important components that `consul agent` outputs:
 agent. This is also the address other applications can use over [RPC to control Consul](/docs/agent/rpc.html).

 * **Cluster Addr**: This is the address and ports used for communication between
-Consul agents in a cluster. Every Consul agent in a cluster does not have to
+Consul agents in a cluster. Not all Consul agents in a cluster have to
 use the same port, but this address **MUST** be reachable by all other nodes.

 ## Stopping an Agent
@@ -105,7 +105,7 @@ this lifecycle is useful to building a mental model of an agent's interactions
 with a cluster, and how the cluster treats a node.

 When an agent is first started, it does not know about any other node in the cluster.
-To discover it's peers, it must _join_ the cluster. This is done with the `join`
+To discover its peers, it must _join_ the cluster. This is done with the `join`
 command or by providing the proper configuration to auto-join on start. Once a node
 joins, this information is gossiped to the entire cluster, meaning all nodes will
 eventually be aware of each other. If the agent is a server, existing servers will
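The auto-join-on-start configuration mentioned in this hunk can be sketched as follows (the address is hypothetical, and this assumes the `start_join` agent option):

```text
$ cat /etc/consul.d/join.json
{
  "start_join": ["10.0.0.2"]
}
```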
@@ -115,18 +115,18 @@ In the case of a network failure, some nodes may be unreachable by other nodes.
 In this case, unreachable nodes are marked as _failed_. It is impossible to distinguish
 between a network failure and an agent crash, so both cases are handled the same.
 Once a node is marked as failed, this information is updated in the service catalog.
-There is some nuance here relating, since this update is only possible if the
-servers can still [form a quorum](/docs/internals/consensus.html). Once the network
-failure recovers, or a crashed agent restarts, the cluster will repair itself,
-and unmark a node as failed. The health check in the catalog will also be updated
-to reflect this.
+There is some nuance here since this update is only possible if the servers can
+still [form a quorum](/docs/internals/consensus.html). Once the network recovers,
+or a crashed agent restarts, the cluster will repair itself, and unmark
+a node as failed. The health check in the catalog will also be updated to reflect
+this.

-When a node _leaves_, it specifies it's intent to do so, and so the cluster
+When a node _leaves_, it specifies its intent to do so, and so the cluster
 marks that node as having _left_. Unlike the _failed_ case, all of the
 services provided by a node are immediately deregistered. If the agent was
 a server, replication to it will stop. To prevent an accumulation
 of dead nodes, Consul will automatically reap _failed_ nodes out of the
-catalog as well. This is currently done on a non-configurable interval
-which defaults to 72 hours. Reaping is similar to leaving, causing all
-associated services to be deregistered.
+catalog as well. This is currently done on a non-configurable interval of
+72 hours. Reaping is similar to leaving, causing all associated services
+to be deregistered.

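For reference, a node's _alive_/_failed_/_left_ state is visible from any agent via `consul members`; illustrative output, mirroring the format shown later in this diff:

```text
$ consul members
agent-one    172.20.20.10:8301    alive     role=consul,dc=dc1
agent-two    172.20.20.11:8301    failed    role=node,dc=dc1
```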
@@ -16,12 +16,12 @@ or added at runtime over the HTTP interface.
 There are two different kinds of checks:

 * Script + Interval - These checks depend on invoking an external application
-which does the health check and exits with an appropriate exit code, potentially
+that does the health check and exits with an appropriate exit code, potentially
 generating some output. A script is paired with an invocation interval (e.g.
 every 30 seconds). This is similar to the Nagios plugin system.

-* TTL - These checks retain their last known state for a given TTL. The state
-of the check must be updated periodically over the HTTP interface. If an
+* Time to Live (TTL) - These checks retain their last known state for a given TTL.
+The state of the check must be updated periodically over the HTTP interface. If an
 external system fails to update the status within a given TTL, the check is
 set to the failed state. This mechanism is used to allow an application to
 directly report its health. For example, a web app can periodically curl the
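A TTL check of the kind renamed in this hunk might be registered and then kept passing like this (a sketch; the `web-app` check ID is hypothetical, and this assumes the agent's HTTP API on localhost:8500):

```text
$ curl -X PUT -d '{"ID": "web-app", "Name": "Web App Status", "TTL": "30s"}' \
    http://localhost:8500/v1/agent/check/register
$ curl http://localhost:8500/v1/agent/check/pass/web-app
```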
@@ -75,7 +75,7 @@ check can be registered dynamically using the [HTTP API](/docs/agent/http.html).
 ## Check Scripts

 A check script is generally free to do anything to determine the status
-of the check. The only limitations placed are the exit codes must convey
+of the check. The only limitations placed are that the exit codes must convey
 a specific meaning. Specifically:

 * Exit code 0 - Check is passing
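A minimal check script honoring that exit-code contract (the 90% disk threshold is an arbitrary illustration):

```text
#!/bin/sh
# Report disk usage of / and exit 0 (passing) below 90%, non-zero (failing) otherwise.
usage=$(df / | awk 'NR==2 { gsub("%",""); print $5 }')
echo "disk usage ${usage}%"
[ "$usage" -lt 90 ] && exit 0
exit 2
```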
@@ -8,22 +8,23 @@ description: |-

 # DNS Interface

-One of the primary query interfaces for Consul is using DNS.
+One of the primary query interfaces for Consul is DNS.
 The DNS interface allows applications to make use of service
 discovery without any high-touch integration with Consul. For
-example, instead of making any HTTP API requests to Consul,
+example, instead of making HTTP API requests to Consul,
 a host can use the DNS server directly and just do a name lookup
 for "redis.service.east-aws.consul".

 This query automatically translates to a lookup of nodes that
-provide the redis service, located in the "east-aws" datacenter,
-with no failing health checks. It's that simple!
+provide the redis service, are located in the "east-aws" datacenter,
+and have no failing health checks. It's that simple!

 There are a number of [configuration options](/docs/agent/options.html) that
 are important for the DNS interface. They are `client_addr`, `ports.dns`, `recursors`,
 `domain`, and `dns_config`. By default Consul will listen on 127.0.0.1:8600 for DNS queries
-in the "consul." domain, without support for DNS recursion. All queries are case-insensitive, a
-name lookup for `PostgreSQL.node.dc1.consul` will find all nodes named `postgresql`, no matter of case.
+in the "consul." domain, without support for DNS recursion. All queries are case-insensitive: a
+name lookup for `PostgreSQL.node.dc1.consul` will find all nodes named `postgresql`,
+regardless of case.

 There are a few ways to use the DNS interface. One option is to use a custom
 DNS resolver library and point it at Consul. Another option is to set Consul
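Given the default listener above, the example lookup can be exercised with `dig` (a sketch, assuming a redis service registered in an east-aws datacenter):

```text
$ dig @127.0.0.1 -p 8600 redis.service.east-aws.consul
```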
@@ -76,24 +77,24 @@ consul. 0 IN SOA ns.consul. postmaster.consul. 1392836399 3600 600 86400 0
 ## Service Lookups

 A service lookup is the alternate type of query. It is used to query for service
-providers and supports two mode of lookup, a strict RCF style lookup and the
-standard lookup.
+providers and supports two lookup methods: standard lookup, and strict
+[RFC 2782](https://tools.ietf.org/html/rfc2782) lookup.

-### Standard Style Lookup
+### Standard Lookup

-The format of a standard service lookup is like the following:
+The format of a standard service lookup is:

     [tag.]<service>.service[.datacenter][.domain]

 As with node lookups, the `datacenter` is optional, as is the `tag`. If no tag is
 provided, then no filtering is done on tag. So, if we want to find any redis service
-providers in our local datacenter, we could lookup "redis.service.consul.", however
+providers in our local datacenter, we could lookup "redis.service.consul.", while
 if we care about the PostgreSQL master in a particular datacenter, we could lookup
 "master.postgresql.service.dc2.consul."

 The DNS query system makes use of health check information to prevent routing
 to unhealthy nodes. When a service query is made, any services failing their health
-check, or failing a node system check will be omitted from the results. To allow
+check, or failing a node system check, will be omitted from the results. To allow
 for simple load balancing, the set of nodes returned is also randomized each time.
 These simple mechanisms make it easy to use DNS along with application level retries
 as a simple foundation for an auto-healing service oriented architecture.
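The tag-filtered lookup described in this hunk maps directly onto a `dig` SRV query; a sketch:

```text
$ dig @127.0.0.1 -p 8600 master.postgresql.service.dc2.consul SRV
```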
@@ -124,13 +125,13 @@ consul.service.consul. 0 IN SRV 1 1 8300 foobar.node.dc1.consul.
 foobar.node.dc1.consul. 0 IN A 10.1.10.12
 ```

-### RFC-2782 Style Lookup
+### RFC 2782 Lookup

-The format for RFC style lookups uses the following format:
+The format for RFC 2782 SRV lookups is:

     _<service>._<protocol>.service[.datacenter][.domain]

-Per [RFC-2782](https://www.ietf.org/rfc/rfc2782.txt), SRV queries should use
+Per [RFC 2782](https://tools.ietf.org/html/rfc2782), SRV queries should use
 underscores (_) as a prefix to the `service` and `protocol` values in a query to
 prevent DNS collisions. The `protocol` value can be any of the tags for a
 service or if the service has no tags, the value "tcp" should be used. If "tcp"
@@ -139,7 +140,7 @@ is specified as the protocol, the query will not perform any tag filtering.
 Other than the query format and default "tcp" protocol/tag value, the behavior
 of the RFC style lookup is the same as the standard style of lookup.

-Using the RCF style lookup, If you registered the service "rabbitmq" on port
+Using RFC 2782 lookup, if you registered the service "rabbitmq" on port
 5672 and tagged it with "amqp" you would query the SRV record as
 "_rabbitmq._amqp.service.consul" as illustrated in the example below:

@@ -168,7 +169,7 @@ rabbitmq.node1.dc1.consul. 0 IN A 10.1.11.20

 When the DNS query is performed using UDP, Consul will truncate the results
 without setting the truncate bit. This is to prevent a redundant lookup over
-TCP which generate additional load. If the lookup is done over TCP, the results
+TCP that generates additional load. If the lookup is done over TCP, the results
 are not truncated.

 ## Caching
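When the untruncated result set is needed, the same query can be forced over TCP; a sketch using dig's `+tcp` flag:

```text
$ dig @127.0.0.1 -p 8600 +tcp web.service.consul SRV
```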
@@ -15,7 +15,7 @@ executable. As an example, you could watch the status of health checks and
 notify an external system when a check is critical.

 Watches are implemented using blocking queries in the [HTTP API](/docs/agent/http.html).
-Agent's automatically make the proper API calls to watch for changes,
+Agents automatically make the proper API calls to watch for changes,
 and inform a handler when the data view has updated.

 Watches can be configured as part of the [agent's configuration](/docs/agent/options.html),
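A watch declared in the agent's configuration might look like this (a sketch; the handler path is hypothetical):

```text
$ cat /etc/consul.d/watch-checks.json
{
  "watches": [
    {
      "type": "checks",
      "handler": "/usr/local/bin/notify-checks.sh"
    }
  ]
}
```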
@@ -36,7 +36,7 @@ The watch specification specifies the view of data to be monitored.
 Once that view is updated the specified handler is invoked. The handler
 can be any executable.

-A handler should read it's input from stdin, and expect to read
+A handler should read its input from stdin, and expect to read
 JSON formatted data. The format of the data depends on the type of the
 watch. Each watch type documents the format type, and because they
 map directly to an HTTP API, handlers should expect the input to
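A minimal handler consistent with the contract above, reading the JSON payload from stdin (the log path is hypothetical):

```text
#!/bin/sh
# Read the watch's JSON payload from stdin and append it, timestamped, to a log.
payload=$(cat)
echo "$(date) ${payload}" >> /var/log/consul-watch.log
```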
@@ -280,7 +280,7 @@ An example of the output of this command:
 ### Type: checks

 The "checks" watch type is used to monitor the checks of a given
-service or in a specific state. It optionally takes the "service"
+service or those in a specific state. It optionally takes the "service"
 parameter to filter to a specific service, or "state" to filter
 to a specific state. By default, it will watch all checks.

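The same watch type can also be run ad hoc from the CLI; a sketch filtering to checks in the critical state (handler path hypothetical):

```text
$ consul watch -type checks -state critical /usr/local/bin/notify-checks.sh
```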
@@ -10,8 +10,8 @@ description: |-

 We expect Consul to run in large clusters as long-running agents. Because
 upgrading agents in this sort of environment relies heavily on protocol
-compatibility, this page makes it clear on our promise to keeping different
-Consul versions compatible with each other.
+compatibility, this page makes clear our promise to keep different Consul
+versions compatible with each other.

 We promise that every subsequent release of Consul will remain backwards
 compatible with _at least_ one prior version. Concretely: version 0.5 can
@@ -11,7 +11,7 @@ description: |-
 Consul provides an optional Access Control List (ACL) system which can be used to control
 access to data and APIs. The ACL system is a
 [Capability-based system](http://en.wikipedia.org/wiki/Capability-based_security) that relies
-on tokens which can have fine grained rules applied to them. It is very similar to
+on tokens to which fine grained rules can be applied. It is very similar to
 [AWS IAM](http://aws.amazon.com/iam/) in many ways.

 ## ACL Design
@@ -30,10 +30,10 @@ perform all actions.
 The token ID is passed along with each RPC request to the servers. Agents
 [can be configured](/docs/agent/options.html) with `acl_token` to provide a default token,
 but the token can also be specified by a client on a [per-request basis](/docs/agent/http.html).
-ACLs are new as of Consul 0.4, meaning versions prior do not provide a token.
+ACLs are new as of Consul 0.4, meaning prior versions do not provide a token.
 This is handled by the special "anonymous" token. Anytime there is no token provided,
-the rules defined by that token are automatically applied. This lets policy be enforced
-on legacy clients.
+the rules defined by that token are automatically applied. This allows
+policy to be enforced on legacy clients.

 Enforcement is always done by the server nodes. All servers must be [configured
 to provide](/docs/agent/options.html) an `acl_datacenter`, which enables
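Supplying a token on a per-request basis amounts to a query parameter; a sketch (placeholder token):

```text
$ curl "http://localhost:8500/v1/kv/web/key1?token=<your-acl-token>"
```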
@@ -47,7 +47,7 @@ all the tokens.
 When a request is made to any non-authoritative server with a token, it must
 be resolved into the appropriate policy. This is done by reading the token
 from the authoritative server and caching a configurable `acl_ttl`. The implication
-of caching is that the cache TTL is an upper-bound on the staleness of policy
+of caching is that the cache TTL is an upper bound on the staleness of policy
 that is enforced. It is possible to set a zero TTL, but this has adverse
 performance impacts, as every request requires refreshing the policy.

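The server-side settings discussed in these hunks might be sketched as follows (values are illustrative):

```text
$ cat /etc/consul.d/acl.json
{
  "acl_datacenter": "dc1",
  "acl_ttl": "30s"
}
```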
@@ -23,20 +23,20 @@ Before describing the architecture, we provide a glossary of terms to help
 clarify what is being discussed:

 * Agent - An agent is the long running daemon on every member of the Consul cluster.
-It is started by running `consul agent`. The agent is able to run in either *client*,
+It is started by running `consul agent`. The agent is able to run in either *client*
 or *server* mode. Since all nodes must be running an agent, it is simpler to refer to
-the node as either being a client or server, but there are other instances of the agent. All
+the node as being either a client or server, but there are other instances of the agent. All
 agents can run the DNS or HTTP interfaces, and are responsible for running checks and
 keeping services in sync.

 * Client - A client is an agent that forwards all RPCs to a server. The client is relatively
-stateless. The only background activity a client performs is taking part of LAN gossip pool.
-This has a minimal resource overhead and consumes only a small amount of network bandwidth.
+stateless. The only background activity a client performs is taking part in the LAN gossip
+pool. This has a minimal resource overhead and consumes only a small amount of network
+bandwidth.

-* Server - An agent that is server mode. When in server mode, there is an expanded set
-of responsibilities including participating in the Raft quorum, maintaining cluster state,
-responding to RPC queries, WAN gossip to other datacenters, and forwarding queries to leaders
-or remote datacenters.
+* Server - A server is an agent with an expanded set of responsibilities including
+participating in the Raft quorum, maintaining cluster state, responding to RPC queries,
+WAN gossip to other datacenters, and forwarding queries to leaders or remote datacenters.

 * Datacenter - A datacenter seems obvious, but there are subtle details such as multiple
 availability zones in EC2. We define a datacenter to be a networking environment that is
@@ -47,13 +47,13 @@ the public internet.
 the elected leader as well as agreement on the ordering of transactions. Since these
 transactions are applied to a FSM, we implicitly include the consistency of a replicated
 state machine. Consensus is described in more detail on [Wikipedia](http://en.wikipedia.org/wiki/Consensus_(computer_science)),
-as well as our [implementation here](/docs/internals/consensus.html).
+and our implementation is described [here](/docs/internals/consensus.html).

 * Gossip - Consul is built on top of [Serf](http://www.serfdom.io/), which provides a full
 [gossip protocol](http://en.wikipedia.org/wiki/Gossip_protocol) that is used for multiple purposes.
 Serf provides membership, failure detection, and event broadcast mechanisms. Our use of these
 is described more in the [gossip documentation](/docs/internals/gossip.html). It is enough to know
-gossip involves random node-to-node communication, primarily over UDP.
+that gossip involves random node-to-node communication, primarily over UDP.

 * LAN Gossip - Refers to the LAN gossip pool, which contains nodes that are all
 located on the same local area network or datacenter.
@@ -62,8 +62,8 @@ located on the same local area network or datacenter.
 servers are primarily located in different datacenters and typically communicate
 over the internet or wide area network.

-* RPC - RPC is short for a Remote Procedure Call. This is a request / response mechanism
-allowing a client to make a request from a server.
+* RPC - Remote Procedure Call. This is a request / response mechanism allowing a
+client to make a request of a server.

 ## 10,000 foot view

|
@ -73,7 +73,7 @@ From a 10,000 foot altitude the architecture of Consul looks like this:
|
|||
![Consul Architecture](consul-arch.png)
|
||||
</div>
|
||||
|
||||
Lets break down this image and describe each piece. First of all we can see
|
||||
Let's break down this image and describe each piece. First of all we can see
|
||||
that there are two datacenters, one and two respectively. Consul has first
|
||||
class support for multiple datacenters and expects this to be the common case.
|
||||
|
||||
|
@@ -85,9 +85,9 @@ and they can easily scale into the thousands or tens of thousands.

 All the nodes that are in a datacenter participate in a [gossip protocol](/docs/internals/gossip.html).
 This means there is a gossip pool that contains all the nodes for a given datacenter. This serves
-a few purposes: first, there is no need to configure clients with the addresses of servers,
+a few purposes: first, there is no need to configure clients with the addresses of servers;
 discovery is done automatically. Second, the work of detecting node failures
-is not placed on the servers but is distributed. This makes the failure detection much more
+is not placed on the servers but is distributed. This makes failure detection much more
 scalable than naive heartbeating schemes. Thirdly, it is used as a messaging layer to notify
 when important events such as leader election take place.

|
@ -97,8 +97,8 @@ processing all queries and transactions. Transactions must also be replicated to
|
|||
as part of the [consensus protocol](/docs/internals/consensus.html). Because of this requirement,
|
||||
when a non-leader server receives an RPC request it forwards it to the cluster leader.
|
||||
|
||||
The server nodes also operate as part of a WAN gossip. This pool is different from the LAN pool,
|
||||
as it is optimized for the higher latency of the internet, and is expected to only contain
|
||||
The server nodes also operate as part of a WAN gossip pool. This pool is different from the LAN pool,
|
||||
as it is optimized for the higher latency of the internet, and is expected to contain only
|
||||
other Consul server nodes. The purpose of this pool is to allow datacenters to discover each
|
||||
other in a low touch manner. Bringing a new datacenter online is as easy as joining the existing
|
||||
WAN gossip. Because the servers are all operating in this pool, it also enables cross-datacenter requests.
|
||||
|
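Joining the WAN pool is a single command from an existing server; a sketch, assuming the `-wan` flag on `join` and a hypothetical remote server address:

```text
$ consul join -wan 203.0.113.10
```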
@@ -110,8 +110,8 @@ connection caching and multiplexing, cross-datacenter requests are relatively fa

 ## Getting in depth

-At this point we've covered the high level architecture of Consul, but there are much
-more details to each of the sub-systems. The [consensus protocol](/docs/internals/consensus.html) is
+At this point we've covered the high level architecture of Consul, but there are many
+more details for each of the sub-systems. The [consensus protocol](/docs/internals/consensus.html) is
 documented in detail, as is the [gossip protocol](/docs/internals/gossip.html). The [documentation](/docs/internals/security.html)
 for the security model and protocols used are also available.

@@ -10,8 +10,8 @@ description: |-

 Consul relies on both a lightweight gossip mechanism and an RPC system
 to provide various features. Both of the systems have different security
-mechanisms that stem from their designs. However, the goals
-of Consuls security are to provide [confidentiality, integrity and authentication](http://en.wikipedia.org/wiki/Information_security).
+mechanisms that stem from their designs. However, the overall goal
+of Consul's security model is to provide [confidentiality, integrity and authentication](http://en.wikipedia.org/wiki/Information_security).

 The [gossip protocol](/docs/internals/gossip.html) is powered by [Serf](http://www.serfdom.io/),
 which uses a symmetric key, or shared secret, cryptosystem. There are more
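The shared secret for the gossip layer is generated with `consul keygen` and passed to agents via the `encrypt` option; a sketch (the key shown is illustrative):

```text
$ consul keygen
cg8StVXbQJ0gPvMd9o7yrg==
```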
@@ -19,10 +19,11 @@ details on the security of [Serf here](http://www.serfdom.io/docs/internals/secu

 The RPC system supports using end-to-end TLS, with optional client authentication.
 [TLS](http://en.wikipedia.org/wiki/Transport_Layer_Security) is a widely deployed asymmetric
-cryptosystem, and is the foundation of security on the Internet.
+cryptosystem, and is the foundation of security on the Web, as well as
+some other critical parts of the Internet.

 This means Consul communication is protected against eavesdropping, tampering,
-or spoofing. This makes it possible to run Consul over untrusted networks such
+and spoofing. This makes it possible to run Consul over untrusted networks such
 as EC2 and other shared hosting providers.

 ~> **Advanced Topic!** This page covers the technical details of
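The TLS support referenced here might be configured as follows (a sketch; the file paths are hypothetical):

```text
$ cat /etc/consul.d/tls.json
{
  "ca_file": "/etc/consul.d/ssl/ca.pem",
  "cert_file": "/etc/consul.d/ssl/consul.pem",
  "key_file": "/etc/consul.d/ssl/consul-key.pem",
  "verify_incoming": true,
  "verify_outgoing": true
}
```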
@@ -64,9 +64,9 @@ systems to be built that require an operator to intervene in the
 case of a failure, but preclude the possibility of a split-brain.

 The final nuance is that sessions may provide a `lock-delay`. This
-is a time duration, between 0 and 60 second. When a session invalidation
+is a time duration, between 0 and 60 seconds. When a session invalidation
 takes place, Consul prevents any of the previously held locks from
-being re-acquired for the `lock-delay` interval; this is a safe guard
+being re-acquired for the `lock-delay` interval; this is a safeguard
 inspired by Google's Chubby. The purpose of this delay is to allow
 the potentially still live leader to detect the invalidation and stop
 processing requests that may lead to inconsistent state. While not a
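A session with a non-default lock delay can be created over the HTTP API; a sketch (the response ID is illustrative):

```text
$ curl -X PUT -d '{"Name": "db-leader", "LockDelay": "15s"}' \
    http://localhost:8500/v1/session/create
{"ID":"adf4238a-882b-9ddc-4a9d-5b6758e4159e"}
```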
@@ -79,7 +79,7 @@ mechanism by providing a zero delay value.

 Integration between the Key/Value store and sessions are the primary
 place where sessions are used. A session must be created prior to use,
-and is then referred to by it's ID.
+and is then referred to by its ID.

 The Key/Value API is extended to support an `acquire` and `release` operation.
 The `acquire` operation acts like a Check-And-Set operation, except it
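The `acquire` and `release` operations are plain KV PUTs with a query parameter; a sketch (the key and session ID are placeholders; Consul answers true or false):

```text
$ curl -X PUT -d 'leader-data' \
    "http://localhost:8500/v1/kv/service/db/leader?acquire=<session-id>"
true
$ curl -X PUT \
    "http://localhost:8500/v1/kv/service/db/leader?release=<session-id>"
true
```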
@@ -93,7 +93,7 @@ since the request will fail if given an invalid session. A critical note is
 that the lock can be released without being the creator of the session.
 This is by design, as it allows operators to intervene and force terminate
 a session if necessary. As mentioned above, a session invalidation will also
-cause all held locks to be released. When a lock is released, the `LockIndex`,
+cause all held locks to be released. When a lock is released, the `LockIndex`
 does not change, however the `Session` is cleared and the `ModifyIndex` increments.

 These semantics (heavily borrowed from Chubby), allow the tuple of (Key, LockIndex, Session)
@@ -103,7 +103,7 @@ is incremented on each `acquire`, even if the same session re-acquires a lock,
 the `sequencer` will be able to detect a stale request. Similarly, if a session is
 invalided, the Session corresponding to the given `LockIndex` will be blank.

-To make clear, this locking system is purely *advisory*. There is no enforcement
+To be clear, this locking system is purely *advisory*. There is no enforcement
 that clients must acquire a lock to perform any operation. Any client can
 read, write, and delete a key without owning the corresponding lock. It is not
 the goal of Consul to protect against misbehaving clients.
@@ -8,8 +8,8 @@ description: |-

 # Run the Consul Agent

-After Consul is installed, the agent must be run. The agent can either run
-in a server or client mode. Each datacenter must have at least one server,
+After Consul is installed, the agent must be run. The agent can run either
+in server or client mode. Each datacenter must have at least one server,
 although 3 or 5 is recommended. A single server deployment is _**highly**_ discouraged
 as data loss is inevitable in a failure scenario. [This guide](/docs/guides/bootstrapping.html)
 covers bootstrapping a new datacenter. All other agents run in client mode, which
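Starting the very first server of a datacenter might look like this (a sketch; `-bootstrap` is appropriate only for the initial server, and the data directory is arbitrary):

```text
$ consul agent -server -bootstrap -data-dir /tmp/consul
```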
@@ -114,10 +114,11 @@ By gracefully leaving, Consul notifies other cluster members that the
 node _left_. If you had forcibly killed the agent process, other members
 of the cluster would have detected that the node _failed_. When a member leaves,
 its services and checks are removed from the catalog. When a member fails,
-its health is simply marked as critical, but is not removed from the catalog.
+its health is simply marked as critical, but it is not removed from the catalog.
 Consul will automatically try to reconnect to _failed_ nodes, which allows it
 to recover from certain network conditions, while _left_ nodes are no longer contacted.

 Additionally, if an agent is operating as a server, a graceful leave is important
 to avoid causing a potential availability outage affecting the [consensus protocol](/docs/internals/consensus.html).
-See the [guides section](/docs/guides/index.html) to safely add and remove servers.
+See the [guides section](/docs/guides/index.html) for how to safely add
+and remove servers.
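A graceful leave is a single command (or Ctrl-C on a foreground agent):

```text
$ consul leave
Graceful leave complete
```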
@@ -19,9 +19,8 @@ two node cluster running.
 ## Defining Checks

 Similarly to a service, a check can be registered either by providing a
-[check definition](/docs/agent/checks.html)
-, or by making the appropriate calls to the
-[HTTP API](/docs/agent/http.html).
+[check definition](/docs/agent/checks.html), or by making the
+appropriate calls to the [HTTP API](/docs/agent/http.html).

 We will use the check definition, because just like services, definitions
 are the most common way to setup checks.
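A file-based check definition of this kind can be dropped into the agent's configuration directory; a sketch:

```text
$ echo '{"check": {"name": "ping", "script": "ping -c1 google.com >/dev/null", "interval": "30s"}}' \
    > /etc/consul.d/ping.json
```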
@@ -14,12 +14,12 @@ Consul, but didn't show how this could be extended to a scalable production
 service discovery infrastructure. On this page, we'll create our first
 real cluster with multiple members.

-When starting a Consul agent, it begins without knowledge of any other node, and is
-an isolated cluster of one. To learn about other cluster members, the agent must
-_join_ an existing cluster. To join an existing cluster, it only needs to know
-about a _single_ existing member. After it joins, the agent will gossip with this
-member and quickly discover the other members in the cluster. A Consul
-agent can join any other agent, it doesn't have to be an agent in server mode.
+When a Consul agent is started, it begins without knowledge of any other node,
+and is an isolated cluster of one. To learn about other cluster members, the
+agent must _join_ an existing cluster. To join an existing cluster, it only
+needs to know about a _single_ existing member. After it joins, the agent will
+gossip with this member and quickly discover the other members in the cluster.
+A Consul agent can join any other agent, not just agents in server mode.

 ## Starting the Agents

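The join itself, using the second agent's address from this guide (output shown is illustrative):

```text
$ consul join 172.20.20.11
Successfully joined cluster by contacting 1 nodes.
```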
@@ -78,7 +78,7 @@ agent-one 172.20.20.10:8301 alive role=consul,dc=dc1,vsn=2,vsn_min=1,vsn_m
 agent-two 172.20.20.11:8301 alive role=node,dc=dc1,vsn=2,vsn_min=1,vsn_max=2
 ```

--> **Remember:** To join a cluster, a Consul agent needs to only
+-> **Remember:** To join a cluster, a Consul agent only needs to
 learn about <em>one existing member</em>. After joining the cluster, the
 agents gossip with each other to propagate full membership information.

@@ -62,7 +62,7 @@ $ curl http://localhost:8500/v1/kv/?recurse
 Here we have created 3 keys, each with the value of "test". Note that the
 `Value` field returned is base64 encoded to allow non-UTF8
 characters. For the "web/key2" key, we set a `flag` value of 42. All keys
-support setting a 64bit integer flag value. This is opaque to Consul but can
+support setting a 64-bit integer flag value. This is opaque to Consul but can
 be used by clients for any purpose.

 After setting the values, we then issued a GET request to retrieve multiple
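Setting a flag is just a query parameter on the PUT; a sketch matching the `web/key2` example (Consul answers true on success):

```text
$ curl -X PUT -d 'test' http://localhost:8500/v1/kv/web/key2?flags=42
true
```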
@@ -115,5 +115,5 @@ returning the current, unchanged value. This can be used to efficiently wait for
 key modifications. Additionally, this same technique can be used to wait for a list
 of keys, waiting only until any of the keys has a newer modification time.

-This is only a few example of what the API supports. For full documentation, please
-reference the [HTTP API](/docs/agent/http.html).
+These are only a few examples of what the API supports. For full
+documentation, please see the [HTTP API](/docs/agent/http.html).
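A blocking read of the kind described above passes the last `ModifyIndex` plus a wait limit; a sketch (the index value is illustrative):

```text
$ curl "http://localhost:8500/v1/kv/web/key2?index=101&wait=5s"
```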
@@ -67,9 +67,9 @@ service using either the DNS or HTTP API.

 Let's first query it using the DNS API. For the DNS API, the DNS name
 for services is `NAME.service.consul`. All DNS names are always in the
-`consul` namespace. The `service` subdomain on that tells Consul we're
-querying services, and the `NAME` is the name of the service. For the
-web service we registered, that would be `web.service.consul`:
+`consul` namespace. The `service` subdomain tells Consul we're querying
+services, and the `NAME` is the name of the service. For the web service
+we registered, that would be `web.service.consul`:

 ```text
 $ dig @127.0.0.1 -p 8600 web.service.consul