website: Istio vs. page and Nomad platform guide

Mitchell Hashimoto 2018-06-19 15:14:02 +09:00 committed by Jack Pearkes
parent 0c43a0f448
commit 596b72e971
5 changed files with 279 additions and 5 deletions

@ -0,0 +1,196 @@
---
layout: "docs"
page_title: "Connect - Nomad"
sidebar_current: "docs-connect-platform-nomad"
description: |-
  Connect can be used with Nomad to provide secure service-to-service
  communication between Nomad jobs. This page describes how to use Connect
  with Nomad.
---
# Connect on Nomad

Connect can be used with [Nomad](https://www.nomadproject.io) to provide
secure service-to-service communication between Nomad jobs. Nomad's
dynamic port feature makes Connect particularly easy to use.

Using Connect with Nomad today requires manually specifying the Connect
sidecar proxy and managing intentions directly via Consul (outside of
Nomad). The Consul and Nomad teams are working together toward a more
automatic and unified solution in an upcoming Nomad release.
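
In the meantime, intentions can be managed out-of-band with the standard
Consul CLI (or the API and UI). A minimal sketch, assuming a reachable
Consul agent and the "web" and "db" services used later on this page:

```shell
# Deny all service-to-service connections by default.
consul intention create -deny '*' '*'

# Explicitly allow "web" to connect to "db".
consul intention create -allow web db

# Verify what would happen for a connection from "web" to "db".
consul intention check web db
```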

~> **Important security note:** As of Nomad 0.8.4, Nomad doesn't yet support
network namespacing for tasks in a task group. As a result, running Connect
with Nomad should follow the same
[security checklist](/docs/connect/security.html#prevent-non-connect-traffic-to-services)
as running directly on a machine without namespacing.

## Requirements

To use Connect with Nomad, the following requirements must first be
satisfied:

* **Nomad 0.8.3 or later.** The servers and clients of the Nomad cluster
  must be running version 0.8.3 or later. This is the earliest version
  verified to work with Connect; earlier versions may work but are
  untested.
* **Consul 1.2.0 or later.** A Consul cluster must be set up and running
  version 1.2.0 or later, and Nomad must be
  [configured to use this Consul cluster](https://www.nomadproject.io/docs/service-discovery/index.html).
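
For reference, Nomad is pointed at a Consul cluster with the `consul`
stanza in the Nomad agent configuration. A minimal sketch, assuming a
Consul agent running on its default local address:

```hcl
# Nomad agent configuration (illustrative path: /etc/nomad.d/consul.hcl)
consul {
  # Address of the Consul agent's HTTP API.
  address = "127.0.0.1:8500"
}
```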

## Accepting Connect for an Existing Service

The job specification below shows a job that is configured with Connect.
The example uses `raw_exec` just to show how Connect can be used locally,
but the Docker driver or any other driver could be used just as easily.
The example will be updated to use the official Consul Docker image
following its release.

The example shows a hypothetical database configured to listen with
Connect only. Explanations of the various sections follow the example.

```hcl
job "db" {
  datacenters = ["dc1"]

  group "db" {
    task "db" {
      driver = "raw_exec"

      config {
        command = "/usr/local/bin/my-database"
        args    = ["-bind", "127.0.0.1:${NOMAD_PORT_tcp}"]
      }

      resources {
        network {
          port "tcp" {}
        }
      }
    }

    task "connect-proxy" {
      driver = "raw_exec"

      config {
        command = "/usr/local/bin/consul"
        args = [
          "connect", "proxy",
          "-service", "db",
          "-service-addr", "${NOMAD_ADDR_db_tcp}",
          "-listen", ":${NOMAD_PORT_tcp}",
          "-register",
        ]
      }

      resources {
        network {
          port "tcp" {}
        }
      }
    }
  }
}
```

The job specification contains a single task group "db" with two tasks.
By placing the two tasks in the same group, the Connect proxy will always
be colocated directly next to the database and will have access to
information such as the dynamic port the database is running on.
For the "db" task, there are a few important configurations:
* The `-bind` address for the database is loopback only and listening on
a dynamic port. This prevents non-Connect connections from outside of
the node that the database is running on.
* The `tcp` port is dynamic. This removes any static constraints on the port,
allowing Nomad to allocate any available port for any allocation. Further,
it slightly increases security by not using a well-known port.
* The database is _not_ registered with Consul using a `service` block.
This isn't strictly necessary, but since we won't be connecting directly
to this service, we also don't need to register it. You mauy want to
register this service still for health checks or if it isn't switching
to exclusively accept Connect connections.
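
As a sketch, such a registration could look like the following `service`
block added to the "db" task; the health check parameters are illustrative:

```hcl
task "db" {
  # ... driver, config, and resources as shown above ...

  service {
    name = "db"
    port = "tcp"

    # Basic TCP health check against the dynamic port.
    check {
      type     = "tcp"
      interval = "10s"
      timeout  = "2s"
    }
  }
}
```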
Next, the "connect-proxy" task is colocated next to the "db" task. This is
using "raw_exec" executing Consul directly. In the future this example will
be updated to use the official Consul Docker image.
The important configuration for this proxy:
* The `-service` and `-service-addr` flag specify the name of the service
the proxy is representing. The address is set to the interpolation
`${NOMAD_ADDR_db_tcp}` which allows the database to listen on any
dynamic address and the proxy can still find it.
* The `-listen` flag sets up a public listener (TLS) to accept connections
on behalf of the "db" service. The port this is listening on is dynamic,
since service discovery can be used to find the service ports.
* The `-register` flag tells the proxy to self-register with Consul. Nomad
doesn't currently know how to register Connect proxies with the `service`
stanza, and this causes the proxy to register itself so it is discoverable.

After running this job specification, the database will be started with a
Connect proxy. The only public listener from the job is the proxy, which
means that only Connect connections can access the database from an
external node.
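
To sanity-check the deployment, the proxy's self-registration can be
verified and a test connection established with the Consul CLI. This is a
sketch: `db-proxy` is the default name chosen by `-register`, and port
9191 is arbitrary.

```shell
# The self-registered proxy appears in the catalog.
$ consul catalog services
consul
db-proxy

# Run a local dev proxy with a private upstream listener to "db" ...
$ consul connect proxy -service test-client -upstream db:9191 &

# ... and connect to the database through it.
$ nc 127.0.0.1 9191
```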

## Connecting to Upstream Dependencies

In addition to accepting Connect-based connections, services often need
to connect to upstream dependencies that are listening via Connect. For
example, a "web" application may need to connect to the "db" exposed
in the example above.

The job specification below shows an example of this scenario:

```hcl
job "web" {
  datacenters = ["dc1"]

  group "web" {
    task "web" {
      # ... typical configuration ...

      env {
        DATABASE_URL = "postgresql://${NOMAD_ADDR_proxy_tcp}/db"
      }
    }

    task "proxy" {
      driver = "raw_exec"

      config {
        command = "/usr/local/bin/consul"
        args = [
          "connect", "proxy",
          "-service", "web",
          "-upstream", "db:${NOMAD_PORT_tcp}",
        ]
      }

      resources {
        network {
          port "tcp" {}
        }
      }
    }
  }
}
```

Starting with the "proxy" task, the primary difference from the accepting
example above is that the service address, `-listen`, and `-register`
flags are not specified. This prevents the proxy from registering itself
as a valid listener for the given service.

The `-upstream` flag configures a private listener that connects to the
service "db" as "web". The port is dynamic.

Finally, the "web" task is configured to use the localhost address to
connect to the database. This establishes a connection to the remote
database using Connect. Interpolation is used to retrieve the address
dynamically since the port is dynamic.

-> **Both `-listen` and `-upstream` can be specified** for services that
both accept Connect connections and have upstream dependencies. This can
be done with a single proxy instance rather than running multiple proxies.
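
As a sketch, such a combined invocation might look like the following; the
addresses and ports are illustrative:

```shell
# Accept Connect connections for "web" on :8443 while also exposing a
# private upstream listener to "db" on 127.0.0.1:9191.
consul connect proxy \
  -service web \
  -service-addr 127.0.0.1:8080 \
  -listen :8443 \
  -register \
  -upstream db:9191
```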

@ -16,12 +16,12 @@ detailed reference of available features.
## What is Consul?
Consul has multiple components, but as a whole, it is a tool for
discovering, connecting, configuring, and securing services in your
infrastructure. It provides several key features:
* **Service Discovery**: Clients of Consul can register a service, such as
`api` or `mysql`, and other clients can use Consul to discover providers
of a given service. Using either DNS or HTTP, applications can easily find
the services they depend upon.
@ -35,6 +35,11 @@ key features:
store for any number of purposes, including dynamic configuration, feature flagging,
coordination, leader election, and more. The simple HTTP API makes it easy to use.
* **Secure Service Communication**: Consul can generate and distribute TLS
certificates for services to establish mutual TLS connections.
[Intentions](/docs/connect/intentions.html)
can be used to define what services are allowed to communicate.
* **Multi Datacenter**: Consul supports multiple datacenters out of the box. This
means users of Consul do not have to worry about building additional layers of
abstraction to grow to multiple regions.

@ -0,0 +1,67 @@
---
layout: "intro"
page_title: "Consul vs. Istio"
sidebar_current: "vs-other-istio"
description: |-
  Istio is a platform for connecting and securing microservices. This page
  describes the similarities and differences between Istio and Consul.
---
# Consul vs. Istio

Istio is an open platform to connect, manage, and secure microservices.
To enable the full functionality of Istio, multiple services must be
deployed: Pilot, Mixer, and Citadel for the control plane, and an Envoy
sidecar for the data plane. Additionally, Istio requires a third-party
service catalog from Kubernetes, Consul, Eureka, or others. At a minimum,
three Istio-dedicated services along with at least one separate
distributed system (in addition to Istio) must be configured for the full
functionality of Istio.

Istio is architected to work on any platform, but the documentation and
resources for installing and configuring Istio on non-Kubernetes systems
are sparse, and the number of moving pieces Istio requires poses a
challenge for installation, configuration, and operation.

Istio provides layer 7 features for path-based routing, traffic shaping,
load balancing, and telemetry. Access control policies can be configured
targeting both layer 7 and layer 4 properties to control access, routing,
and more based on service identity.

Consul is a single binary providing both server and client capabilities
and includes all functionality for service catalog, configuration, TLS
certificates, authorization, and more. No additional systems need to be
installed to use Consul, although Consul optionally supports external
systems such as Vault to augment behavior. This architecture enables
Consul to be easily installed on any platform, including directly onto
the machine.

Consul uses an agent-based model where each node in the cluster runs a
Consul client. This client maintains a local cache that is efficiently
updated from servers. As a result, all secure service communication APIs
respond in microseconds and do not require any external communication.
Further, service-to-service communication continues operating even if the
Consul cluster is degraded.

The data plane for Consul is pluggable. It includes a built-in proxy that
trades some performance for ease of use, but third-party proxies such as
Envoy can also be used. The ability to choose the right proxy for the job
allows flexible, heterogeneous deployments where different proxies may be
better suited to the applications they're proxying.

Consul enforces authorization and identity at layer 4 only. We believe
service identity should be tied to layer 4, whereas layer 7 should be used
for routing, telemetry, and so on. We encourage users to leverage the
pluggable data plane layer to use a proxy that supports the layer 7
features necessary for their cluster. Consul will be adding more layer 7
features in the future.

Consul implements automatic TLS certificate management complete with
rotation support. Both leaf and root certificates can be rotated
automatically across a large Consul cluster with zero disruption to
connections. The certificate management system is currently pluggable
through a code change in Consul and will be exposed as an external plugin
system shortly. This enables Consul to work with any PKI solution.

Because Consul's service connection feature "Connect" is built in, it
inherits the operational stability of Consul. Consul has been in
production for large companies since 2014 and is known to be deployed on
as many as 50,000 nodes in a single cluster.

@ -279,6 +279,9 @@
<li<%= sidebar_current("docs-connect-dev") %>>
<a href="/docs/connect/dev.html">Develop and Debug</a>
</li>
<li<%= sidebar_current("docs-connect-platform-nomad") %>>
<a href="/docs/connect/platform/nomad.html">Nomad</a>
</li>
<li<%= sidebar_current("docs-connect-security") %>>
<a href="/docs/connect/security.html">Security</a>
</li>

@ -29,6 +29,9 @@
<li<%= sidebar_current("vs-other-eureka") %>>
<a href="/intro/vs/eureka.html">Eureka</a>
</li>
<li<%= sidebar_current("vs-other-istio") %>>
<a href="/intro/vs/istio.html">Istio</a>
</li>
<li<%= sidebar_current("vs-other-custom") %>>
<a href="/intro/vs/custom.html">Custom Solutions</a>
</li>