Merge pull request #113 from pooya/master

website: fix a couple of typos.
Mitchell Hashimoto 2014-05-03 16:21:22 -07:00
commit 87fbb32d99
10 changed files with 11 additions and 11 deletions

@@ -22,7 +22,7 @@ There are two different kinds of checks:
of the check must be updated periodically over the HTTP interface. If an
external system fails to update the status within a given TTL, the check is
set to the failed state. This mechanism is used to allow an application to
-directly report it's health. For example, a web app can periodically curl the
+directly report its health. For example, a web app can periodically curl the
endpoint, and if the app fails, then the TTL will expire and the health check
enters a critical state. This is conceptually similar to a dead man's switch.
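The TTL mechanism above can be sketched as a check definition the agent loads from a configuration file; the check id, name, and 30 second window here are illustrative, not taken from the original page:

```shell
# Illustrative TTL check definition (id, name, and TTL value are assumptions);
# an agent would load a file like this from its configuration directory.
cat > /tmp/web-app-check.json <<'EOF'
{
  "check": {
    "id": "web-app",
    "name": "Web App Status",
    "notes": "The app reports its own health over the HTTP interface",
    "ttl": "30s"
  }
}
EOF
cat /tmp/web-app-check.json
```

The application then keeps the check passing by reporting in over the HTTP interface before each window expires; if it stops reporting, the TTL lapses and the check enters the critical state.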

@@ -495,7 +495,7 @@ requires `Node` to be provided, while `Datacenter` will be defaulted
to match that of the agent. If only `Node` is provided, then the node, and
all associated services and checks are deleted. If `CheckID` is provided, only
that check belonging to the node is removed. If `ServiceID` is provided, then the
-service along with it's associated health check (if any) is removed.
+service along with its associated health check (if any) is removed.
If the API call succeeds, a 200 status code is returned.
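As an illustration of these fields (the datacenter, node, and service names are hypothetical), a request body that removes a single service from a node could look like this:

```shell
# Hypothetical deregister payload: because ServiceID is present, only the
# "redis1" service (and its health check, if any) on node "foo" is removed.
cat > /tmp/deregister.json <<'EOF'
{
  "Datacenter": "dc1",
  "Node": "foo",
  "ServiceID": "redis1"
}
EOF
# With a local agent running, it would be submitted along these lines:
#   curl -X PUT --data @/tmp/deregister.json http://localhost:8500/v1/catalog/deregister
cat /tmp/deregister.json
```

Omitting `ServiceID` and `CheckID` from the body would instead deregister the entire node along with all of its services and checks.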

@@ -43,7 +43,7 @@ be discovered.
Lastly, a service can have an associated health check. This is a powerful
feature as it allows a web balancer to gracefully remove failing nodes, or
a database to replace a failed slave, etc. The health check is strongly integrated
-in the DNS interface as well. If a service is failing it's health check or
+in the DNS interface as well. If a service is failing its health check or
a node has any failing system-level check, the DNS interface will omit that
node from any service query.

@@ -76,7 +76,7 @@ that there are two datacenters, one and two respectively. Consul has first
class support for multiple datacenters and expects this to be the common case.
Within each datacenter we have a mixture of clients and servers. It is expected
-that there be between three to five servers. This strikes a balance between
+that there be between three and five servers. This strikes a balance between
availability in the case of failure and performance, as consensus gets progressively
slower as more machines are added. However, there is no limit to the number of clients,
and they can easily scale into the thousands or tens of thousands.

@@ -140,7 +140,7 @@ supports 3 different consistency modes for reads.
The three read modes are:
* default - Raft makes use of leader leasing, providing a time window
-in which the leader assumes it's role is stable. However, if a leader
+in which the leader assumes its role is stable. However, if a leader
is partitioned from the remaining peers, a new leader may be elected
while the old leader is holding the lease. This means there are 2 leader
nodes. There is no risk of a split-brain since the old leader will be

@@ -114,8 +114,8 @@ and shut down.
By gracefully leaving, Consul notifies other cluster members that the
node _left_. If you had forcibly killed the agent process, other members
of the cluster would have detected that the node _failed_. When a member leaves,
-it's services and checks are removed from the catalog. When a member fails,
-it's health is simply marked as critical, but is not removed from the catalog.
+its services and checks are removed from the catalog. When a member fails,
+its health is simply marked as critical, but is not removed from the catalog.
Consul will automatically try to reconnect to _failed_ nodes, which allows it
to recover from certain network conditions, while _left_ nodes are no longer contacted.

@@ -88,7 +88,7 @@ will not return any results, since the service is unhealthy:
This section should have shown that checks can be easily added. Check definitions
can be updated by changing configuration files and sending a `SIGHUP` to the agent.
Alternatively the HTTP API can be used to add, remove and modify checks dynamically.
-The API allows allows for a "dead man's switch" or [TTL based check](/docs/agent/checks.html).
+The API allows for a "dead man's switch" or [TTL based check](/docs/agent/checks.html).
TTL checks can be used to integrate an application more tightly with Consul, enabling
business logic to be evaluated as part of passing a check.
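As a sketch of such a dynamic registration (the check ID, name, and TTL value are assumptions, not from the original page), a TTL based check could be added by submitting a definition like this to the agent's HTTP API:

```shell
# Hypothetical TTL check definition to be registered at runtime.
cat > /tmp/ttl-check.json <<'EOF'
{
  "ID": "web-app-ttl",
  "Name": "Web App TTL",
  "TTL": "15s"
}
EOF
# With a local agent running, it would be registered along these lines:
#   curl -X PUT --data @/tmp/ttl-check.json http://localhost:8500/v1/agent/check/register
cat /tmp/ttl-check.json
```

The application is then responsible for reporting a passing status before each 15 second window lapses, which is where its own business logic gets evaluated.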

@@ -47,7 +47,7 @@ Available commands are:
```
If you get an error that `consul` could not be found, then your PATH
-environmental variable was not setup properly. Please go back and ensure
+environment variable was not setup properly. Please go back and ensure
that your PATH variable contains the directory where Consul was installed.
Otherwise, Consul is installed and ready to go!
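A minimal fix, assuming Consul was installed to `/usr/local/bin` (substitute your actual install directory):

```shell
# Append the Consul install directory to PATH for the current session.
# Add this line to your shell profile (e.g. ~/.bashrc) to make it permanent.
export PATH="$PATH:/usr/local/bin"
echo "$PATH"
```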

@@ -17,7 +17,7 @@ an isolated cluster of one. To learn about other cluster members, the agent mus
_join_ an existing cluster. To join an existing cluster, it only needs to know
about a _single_ existing member. After it joins, the agent will gossip with this
member and quickly discover the other members in the cluster. A Consul
-agent can join any other agent, it doesn't have be an agent in server mode.
+agent can join any other agent, it doesn't have to be an agent in server mode.
## Starting the Agents

@@ -58,7 +58,7 @@ $ curl http://localhost:8500/v1/kv/?recurse
```
Here we have created 3 keys, each with the value of "test". Note that the
-`Value` field returned is base64 encoded to encode allow for non-UTF8
+`Value` field returned is base64 encoded to allow non-UTF8
characters. For the "web/key2" key, we set a `flag` value of 42. All keys
support setting a 64bit integer flag value. This is opaque to Consul but can
be used by clients for any purpose.
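The base64 encoding can be undone with standard tools; `dGVzdA==` is the encoding of the value "test" stored in the example above:

```shell
# Decode the base64-encoded Value field back to the original bytes.
echo "dGVzdA==" | base64 --decode
# -> test
```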