Fix typos, grammar errors, and misspellings

Jacques Fuentes 2014-04-15 23:17:00 -04:00
parent 9493b9c126
commit 7f02ef95bd
5 changed files with 13 additions and 13 deletions

View File

@@ -14,7 +14,7 @@ of a reference for all available features.
 ## What is Consul?
-Consul has multiple components, but as a whole, it is tool for discovering
+Consul has multiple components, but as a whole, it is a tool for discovering
 and configuring services in your infrastructure. It provides several
 key features:
@@ -30,7 +30,7 @@ key features:
 discovery components to route traffic away from unhealthy hosts.
 * **Key/Value Store**: Applications can make use of Consul's hierarchical key/value
-store for any number of purposes including dynamic configuration, feature flagging,
+store for any number of purposes including: dynamic configuration, feature flagging,
 coordination, leader election, etc. The simple HTTP API makes it easy to use.
 * **Multi Datacenter**: Consul supports multiple datacenters out of the box. This
@@ -70,6 +70,6 @@ forward the request to the remote datacenter and return the result.
 ## Next Steps
 See the page on [how Consul compares to other software](/intro/vs/index.html)
-to see just how it fits into your existing infrastructure. Or continue onwards with
+to see how it fits into your existing infrastructure. Or continue onwards with
 the [getting started guide](/intro/getting-started/install.html) to get
 Consul up and running and see how it works.
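
As a rough illustration of the key/value feature described above, here is a minimal sketch of reading and writing through a local agent's HTTP API (assuming the default HTTP port 8500 and a hypothetical `features/new-checkout` key):

```shell
# Write a value under a hierarchical key via the local agent (default HTTP port 8500)
curl -X PUT -d 'true' http://localhost:8500/v1/kv/features/new-checkout

# Read it back; the value is returned base64-encoded inside a JSON envelope
curl http://localhost:8500/v1/kv/features/new-checkout
```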

View File

@@ -24,8 +24,8 @@ Consul is designed specifically as a service discovery tool. As such,
 it is much more dynamic and responsive to the state of the cluster. Nodes
 can register and deregister the services they provide, enabling dependent
 applications and services to rapidly discover all providers. By using the
-integrating health checking, Consul can route traffic away from unhealthy
-nodes, and allowing systems and services to gracefully recover. Static configuration
+integrated health checking, Consul can route traffic away from unhealthy
+nodes, allowing systems and services to gracefully recover. Static configuration
 that may be provided by configuraiton management tools can be moved into the
 dynamic key/value store. This allows application configuration to be updated
 without a slow convergence run. Lastly, because each datacenter runs indepedently,
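
To illustrate the register/deregister flow mentioned above, a minimal sketch against the local agent's HTTP API; the `web` service name, its port, and the local default address are assumptions, not part of the original text:

```shell
# Register a service with the local agent so dependent applications can discover it
curl -X PUT -d '{"Name": "web", "Port": 80}' \
  http://localhost:8500/v1/agent/service/register

# Deregister it when the node should no longer receive traffic
curl -X PUT http://localhost:8500/v1/agent/service/deregister/web
```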

View File

@@ -12,7 +12,7 @@ to quickly notify operators when an issue occurs.
 Nagios uses a group of central servers that are configured to perform
 checks on remote hosts. This design makes it difficult to scale Nagios,
 as large fleets quickly reach the limit of vertical scaling, and Nagios
-does not easily horizontal scale either. Nagios is also notoriously
+does not easily scale horizontally. Nagios is also notoriously
 difficult to use with modern DevOps and configuration management tools,
 as local configurations must be updated when remote servers are added
 or removed.
@@ -31,10 +31,10 @@ a burden on central servers. The status of checks is maintained by the Consul
 servers, which are fault tolerant and have no single point of failure.
 Lastly, Consul can scale to vastly more checks because it relies on edge triggered
 updates. This means only when a check transitions from "passing" to "failing"
-or visa versa an update is triggered.
+or vice versa an update is triggered.
 In a large fleet, the majority of checks are passing, and even the minority
-that are failing are persistent. By capturing only changes, Consul reduces
+that are failing are persistent. By capturing changes only, Consul reduces
 the amount of networking and compute resources used by the health checks,
 allowing the system to be much more scalable.
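
As a sketch of the local, edge-triggered checking described above, a check can be registered with the agent that will run it; the script path and interval below are assumptions for illustration only:

```shell
# Register a local check; the agent runs it on the given interval and, per the
# edge-triggered model above, propagates an update only on a state transition
curl -X PUT -d '{"Name": "memory utilization", "Script": "/usr/local/bin/check_mem.py", "Interval": "30s"}' \
  http://localhost:8500/v1/agent/check/register
```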

View File

@@ -34,7 +34,7 @@ matching tags.
 The systems also differ in how they manage health checking.
 Nerve's performs local health checks in a manner similar to Consul agents.
-However, Consul maintains seperate catalog and health systems, which allow
+However, Consul maintains separate catalog and health systems, which allow
 operators to see which nodes are in each service pool, as well as providing
 insight into failing checks. Nerve simply deregisters nodes on failed checks,
 providing limited operator insight. Synapse also configures HAProxy to perform
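
For the separate catalog and health views mentioned above, a minimal sketch of the queries an operator might run against a local agent (the `web` service name is an assumption):

```shell
# Nodes registered for a service, together with the results of their health checks
curl http://localhost:8500/v1/health/service/web

# Every check currently in the "critical" state across the datacenter
curl http://localhost:8500/v1/health/state/critical
```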

View File

@@ -18,7 +18,7 @@ as well as a more complex gossip system that links server nodes and clients.
 If any of these systems are used for pure key/value storage, then they all
 roughly provide the same semantics. Reads are strongly consistent, and availability
-is sacraficed for consistency in the face of a network partition. However, the differences
+is sacrificed for consistency in the face of a network partition. However, the differences
 become more apparent when these systems are used for advanced cases.
 The semantics provided by these systems are attractive for building
@@ -47,15 +47,15 @@ These clients are part of a [gossip pool](/docs/internals/gossip.html), which
 serves several functions including distributed health checking. The gossip protocol implements
 an efficient failure detector that can scale to clusters of any size without concentrating
 the work on any select group of servers. The clients also enable a much richer set of health checks to be run locally,
-where ZooKeeper ephemeral nodes are a very primitve check of liveness. Clients can check that
+whereas ZooKeeper ephemeral nodes are a very primitve check of liveness. Clients can check that
 a web server is returning 200 status codes, that memory utilization is not critical, there is sufficient
 disk space, etc. The Consul clients expose a simple HTTP interface and avoid exposing the complexity
 of the system is to clients in the same way as ZooKeeper.
 Consul provides first class support for service discovery, health checking,
-K/V storage, and multiple datacenters. To support anything more that simple K/V storage,
+K/V storage, and multiple datacenters. To support anything more than simple K/V storage,
 all these other systems require additional tools and libraries to be built on
-top. By using client nodes, Consul provides a simple API than only requires thin clients.
+top. By using client nodes, Consul provides a simple API that only requires thin clients.
 Additionally, the API can be avoided entirely by using configuration files and the
 DNS interface to have a complete service discovery solution with no development at all.
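
As a sketch of the DNS-based discovery mentioned above, any stock DNS client can resolve the providers of a service with no client library at all (assuming an agent answering DNS on the default port 8600 and a hypothetical `web` service):

```shell
# SRV lookup returns the nodes and ports currently providing the "web" service
dig @127.0.0.1 -p 8600 web.service.consul SRV
```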