docs: Consul Dataplane updates for v.1.14.0 (#15384)

* Consul Architecture update

* Consul on Kubernetes architecture

* Install Consul on Kubernetes with Helm updates

* Vault as the Secrets Backend Data Integration

* Kubernetes Service Mesh Overview

* Terminating Gateways

* Fully updated

* Join external service to k8s

* Consul on Kubernetes

* Configure metrics for Consul on Kubernetes

* Service Sync for Consul on Kubernetes

* Custom Resource Definitions for Consul on k8s

* Upgrading Consul on Kubernetes Components

* Rolling Updates to TLS

* Dataplanes diagram

* Upgrade instructions

* k8s architecture page updates

* Update website/content/docs/k8s/connect/observability/metrics.mdx

Co-authored-by: Riddhi Shah <riddhi@hashicorp.com>

* Update website/content/docs/architecture/index.mdx

* Update website/content/docs/k8s/connect/terminating-gateways.mdx

* CRDs

* updating version numbers

* Updated example config

* Image clean up

* Apply suggestions from code review

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/k8s/architecture.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Riddhi Shah <riddhi@hashicorp.com>
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
Jeff Boruszak 2022-11-17 17:04:29 -06:00 committed by GitHub
parent 5c9bf9cd4b
commit de543f1aee
16 changed files with 479 additions and 704 deletions

View File

@ -2,7 +2,7 @@
layout: docs
page_title: Consul Architecture
description: >-
Consul datacenters consist of clusters of server agents (control plane) and client agents deployed alongside service instances (dataplane). Learn how these components and their different communication methods make Consul possible.
Consul datacenters consist of clusters of server agents (control plane) and client agents deployed alongside service instances (data plane). Learn how these components and their different communication methods make Consul possible.
---
# Consul Architecture
@ -47,10 +47,12 @@ Consul servers establish consensus using the Raft algorithm on port `8300`. Refe
### Client agents
Consul clients report node and service health status to the Consul cluster. In a typical deployment, you must run client agents on every compute node in your datacenter. Clients use remote procedure calls (RPC) to interact with servers. By default, clients send RPC requests to the servers on port `8300`.
There are no limits to the number of client agents or services you can use with Consul, but production deployments should distribute services across multiple Consul datacenters. Using a multi-datacenter deployment enhances infrastructure resilience and limits control plane issues. We recommend deploying a maximum of 5,000 client agents per datacenter. Some large organizations have deployed tens of thousands of client agents and hundreds of thousands of service instances across a multi-datacenter deployment. Refer to [Cross-datacenter requests](#cross-datacenter-requests) for additional information.
You can also run Consul with an alternate service mesh configuration that deploys Envoy proxies but not client agents. Refer to [Simplified Service Mesh with Consul Dataplanes](/docs/connect/dataplane) for more information.
## LAN gossip pool
Client and server agents participate in a LAN gossip pool so that they can distribute and perform node [health checks](/docs/discovery/checks). Agents in the pool propagate the health check information across the cluster. Agent gossip communication occurs on port `8301` using UDP. Agent gossip falls back to TCP if UDP is not available. Refer to [Gossip Protocol](/docs/architecture/gossip) for additional information.

View File

@ -13,10 +13,12 @@ Consul Dataplane requires servers running Consul v1.14.0+ and Consul K8s v1.0.0+
## What is Consul Dataplane?
In standard deployments, Consul uses a control plane that contains both *server agents* and *client agents*. Server agents maintain the service catalog and service mesh, including its security and consistency, while client agents manage communications between service instances, their sidecar proxies, and the servers. While this model is optimal for applications deployed on virtual machines or bare metal servers, orchestrators such as Kubernetes already include components called *kubelets* that support health checking and service location functions typically provided by the client agent.
In standard deployments, Consul uses a control plane that contains both _server agents_ and _client agents_. Server agents maintain the service catalog and service mesh, including its security and consistency, while client agents manage communications between service instances, their sidecar proxies, and the servers. While this model is optimal for applications deployed on virtual machines or bare metal servers, orchestrators such as Kubernetes already include components called _kubelets_ that support health checking and service location functions typically provided by the client agent.
Consul Dataplane manages Envoy proxies and leaves responsibility for other functions to the orchestrator. As a result, it removes the need to run client agents on every pod. In addition, services no longer need to be reregistered to a local client agent after restarting a service instance, as a client agent's lack of access to persistent data storage in Kubernetes deployments is no longer an issue.
![Diagram of Consul Dataplanes in Kubernetes deployment](/img/dataplanes-diagram.png)
## Benefits
**Fewer networking requirements**: Without client agents, Consul does not require bidirectional network connectivity across multiple protocols to enable gossip communication. Instead, it requires a single gRPC connection to the Consul servers, which significantly simplifies requirements for the operator.
@ -53,6 +55,10 @@ $ export VERSION=1.0.0 && \
curl --location "https://releases.hashicorp.com/consul-k8s/${VERSION}/consul-k8s_${VERSION}_darwin_amd64.zip" --output consul-k8s-cli.zip
```
### Upgrading
Before you upgrade Consul to a version that uses Consul Dataplane, you must edit your Helm chart so that client agents are removed from your deployments. Refer to [upgrading to Consul Dataplane](/docs/k8s/upgrade#upgrading-to-consul-dataplanes) for more information.
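For example, the edit might resemble the following `values.yaml` sketch. It assumes your existing installation still sets the chart's `client.enabled` flag; confirm the exact keys against the linked upgrade instructions before applying the change.

```yaml
global:
  name: consul
client:
  # Previously true; removing the client agent DaemonSet prepares the
  # release for Consul Dataplane, which runs alongside each workload instead.
  enabled: false
connectInject:
  # Sidecar injection stays enabled and is now backed by Consul Dataplane.
  enabled: true
```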
## Feature support
Consul Dataplane supports the following features:

View File

@ -2,15 +2,17 @@
layout: docs
page_title: Consul on Kubernetes Control Plane Architecture
description: >-
When running on Kubernetes, Consul's control plane architecture does not change significantly. Server agents are deployed as a StatefulSet with a persistent volume, while client agents run as a k8s DaemonSet with an exposed API port.
When running on Kubernetes, Consul's control plane architecture does not change significantly. Server agents are deployed as a StatefulSet with a persistent volume, while client agents can run as a k8s DaemonSet with an exposed API port or be omitted with Consul Dataplanes.
---
# Architecture
This topic describes the architecture, components, and resources associated with Consul deployments to Kubernetes. Consul employs the same architectural design on Kubernetes as it does with other platforms (see [Architecture](/docs/architecture)), but Kubernetes provides additional benefits that make operating a Consul cluster easier.
This topic describes the architecture, components, and resources associated with Consul deployments to Kubernetes. Consul employs the same architectural design on Kubernetes as it does with other platforms, but Kubernetes provides additional benefits that make operating a Consul cluster easier. Refer to [Consul Architecture](/docs/architecture) for more general information on Consul's architecture.
Refer to the standard [production deployment guide](https://learn.hashicorp.com/consul/datacenter-deploy/deployment-guide) for important information, regardless of the deployment platform.
> **For more specific guidance:**
> - For guidance on datacenter design, refer to [Consul and Kubernetes Reference Architecture](/consul/tutorials/kubernetes-production/kubernetes-reference-architecture).
> - For step-by-step deployment guidance, refer to [Consul and Kubernetes Deployment Guide](/consul/tutorials/kubernetes-production/kubernetes-deployment-guide).
> - For non-Kubernetes guidance, refer to the standard [production deployment guide](/consul/tutorials/production-deploy/deployment-guide).
## Server Agents
@ -35,32 +37,12 @@ Volume Claims when a
[StatefulSet is deleted](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage),
so this must be done manually when removing servers.
## Client Agents
## Consul Dataplane
The client agents are run as a **DaemonSet**. This places one agent
(within its own pod) on each Kubernetes node.
The clients expose the Consul HTTP API via a static port (8500)
bound to the host port. This enables all other pods on the node to connect
to the node-local agent using the host IP that can be retrieved via the
Kubernetes downward API. See
[accessing the Consul HTTP API](/docs/k8s/installation/install#accessing-the-consul-http-api)
for an example.
By default, Consul on Kubernetes uses an alternate service mesh configuration that injects sidecars without client agents. _Consul Dataplane_ manages Envoy proxies and leaves responsibility for other functions to the orchestrator, which removes the need to run client agents on every pod.
We do not use a **NodePort** Kubernetes service because requests to node ports get randomly routed
to any pod in the service and we need to be able to route directly to the Consul
client running on our node.
![Diagram of Consul Dataplanes in Kubernetes deployment](/img/dataplanes-diagram.png)
-> **Note:** There is no way to bind to a local-only
host port. Therefore, any other node can connect to the agent. This should be
considered for security. For a properly production-secured agent with TLS
and ACLs, this is safe.
Refer to [Simplified Service Mesh with Consul Dataplanes](/docs/connect/dataplane/index) for more information.
We run Consul clients as a **DaemonSet** instead of running a client in each
application pod as a sidecar because this would turn
a pod into a "node" in Consul and also causes an explosion of resource usage
since every pod needs a Consul agent. Service registration should be handled via the
catalog syncing feature with Services rather than pods.
-> **Note:** Due to a limitation of anti-affinity rules with DaemonSets,
a client-mode agent runs alongside server-mode agents in Kubernetes. This
duplication wastes some resources, but otherwise functions perfectly fine.
Consul Dataplane is the default proxy manager in Consul on Kubernetes 1.14 and later. If you are on Consul 1.13 or older, refer to [upgrading to Consul Dataplane](/docs/k8s/upgrade#upgrading-to-consul-dataplanes) for specific upgrade instructions.

View File

@ -10,10 +10,9 @@ description: >-
[Consul Service Mesh](/docs/connect) is a feature built into Consul that enables
automatic service-to-service authorization and connection encryption across
your Consul services. Consul Service Mesh can be used with Kubernetes to secure pod
communication with other pods and external Kubernetes services. "Consul Connect" refers to the service mesh functionality within Consul and is used interchangeably with the name
"Consul Service Mesh."
communication with other pods and external Kubernetes services. _Consul Connect_ refers to the component in Consul that enables service mesh functionality. We sometimes use Connect to mean Consul service mesh throughout the documentation.
The Connect sidecar running Envoy can be automatically injected into pods in
Consul can automatically inject the sidecar running Envoy into pods in
your cluster, making configuration for Kubernetes automatic.
This functionality is provided by the
[consul-k8s project](https://github.com/hashicorp/consul-k8s) and can be
@ -22,32 +21,11 @@ automatically installed and configured using the
## Usage
When the
[Connect injector is installed](/docs/k8s/connect#installation-and-configuration),
the Connect sidecar can be automatically added to all pods. This sidecar can both
accept and establish connections using Connect, enabling the pod to communicate
to clients and dependencies exclusively over authorized and encrypted
connections.
-> **Important:** As of consul-k8s `v0.26.0` and Consul Helm `v0.32.0`, a Kubernetes
service is required to run services on the Consul service mesh.
-> **Note:** The examples in this section are valid and use
publicly available images. If you've installed the Connect injector, feel free
to run the examples in this section to try Connect with Kubernetes.
Please note the documentation below this section on how to properly install
and configure the Connect injector.
Installing Consul on Kubernetes with [`connect-inject` enabled](/docs/k8s/connect#installation-and-configuration) adds a sidecar to all pods. By default, it enables service mesh functionality with Consul Dataplane by injecting an Envoy proxy. You can also configure Consul to inject a client agent sidecar to connect to your service mesh. Refer to [Simplified Service Mesh with Consul Dataplane](/docs/connect/dataplane) for more information.
### Accepting Inbound Connections
An example Deployment is shown below with Connect enabled to accept inbound
connections. Notice that the Deployment would still be fully functional without
Connect. Minimal to zero modifications are required to enable Connect in Kubernetes.
Notice also that even though we're using a Deployment here, the same configuration
would work on a Pod, a StatefulSet, or a DaemonSet.
This Deployment specification starts a server that responds to any
HTTP request with the static text "hello world".
-> **Note:** As of consul-k8s `v0.26.0` and Consul Helm `v0.32.0`, having a Kubernetes
service is **required** to run services on the Consul Service Mesh.
```yaml
apiVersion: v1

View File

@ -7,20 +7,16 @@ description: >-
# Configure Metrics for Consul on Kubernetes
Consul on Kubernetes integrates with Prometheus and Grafana to provide metrics for Consul Service Mesh. The metrics
Consul on Kubernetes integrates with Prometheus and Grafana to provide metrics for Consul service mesh. The metrics
available are:
* Connect Service metrics
* Sidecar proxy metrics
* Consul agent metrics
* Ingress, Terminating and Mesh Gateway metrics
- Connect service metrics
- Sidecar proxy metrics
- Consul agent metrics
- Ingress, terminating, and mesh gateway metrics
Specific sidecar proxy metrics can also be seen in the Consul UI Topology Visualization view. This section documents how to enable each of these.
-> **Note:** Metrics will be supported in Consul-helm >= `0.31.0` and consul-k8s >= `0.25.0`. However, enabling the [metrics merging feature](#connect-service-and-sidecar-metrics-with-metrics-merging) with Helm value (`defaultEnableMerging`) or
annotation (`consul.hashicorp.com/enable-metrics-merging`) can only be used with Consul `1.10.0` and above. The
other metrics configuration can still be used before Consul `1.10.0`.
## Connect Service and Sidecar Metrics with Metrics Merging
Prometheus annotations are used to instruct Prometheus to scrape metrics from Pods. Prometheus annotations only support
@ -28,33 +24,40 @@ scraping from one endpoint on a Pod, so Consul on Kubernetes supports metrics me
sidecar proxy metrics are merged into one endpoint. If there are no service metrics, it also supports just scraping the
sidecar proxy metrics.
<!-- Diagram hidden for v1.14.0 release - needs updates for Dataplanes
The diagram below illustrates how the metrics integration works when merging is enabled:
[![Metrics Merging Architecture](/img/metrics-arch.png)](/img/metrics-arch.png)
-->
Connect service metrics can be configured with the Helm values nested under [`connectInject.metrics`](/docs/k8s/helm#v-connectinject-metrics).
Metrics and metrics merging can be enabled by default for all connect-injected Pods with the following Helm values:
```yaml
connectInject:
metrics:
defaultEnabled: true # by default, this inherits from the value global.metrics.enabled
defaultEnableMerging: true
```
They can also be overridden on a per-Pod basis using the annotations `consul.hashicorp.com/enable-metrics` and
`consul.hashicorp.com/enable-metrics-merging`.
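For example, a per-Pod override might look like the following sketch of a Pod template's metadata. The annotation names come from the Helm reference above; the values are illustrative.

```yaml
metadata:
  annotations:
    # Enable metrics for this Pod but keep metrics merging off,
    # overriding the chart-wide defaults shown above.
    consul.hashicorp.com/enable-metrics: "true"
    consul.hashicorp.com/enable-metrics-merging: "false"
```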
~> In most cases, the default settings will be sufficient. If you are encountering issues with colliding ports or service
~> In most cases, the default settings are sufficient. If you encounter issues with colliding ports or service
metrics not being merged, you may need to change the defaults.
The Prometheus annotations configure the endpoint to scrape the metrics from. As shown in the diagram, the annotations point to a listener on `0.0.0.0:20200` on the Envoy sidecar. This listener and the corresponding Prometheus annotations can be configured with the following Helm values (or overridden on a per-Pod basis with Consul annotations `consul.hashicorp.com/prometheus-scrape-port` and `consul.hashicorp.com/prometheus-scrape-path`):
The Prometheus annotations specify which endpoint to scrape the metrics from. The annotations point to a listener on `0.0.0.0:20200` on the Envoy sidecar. You can configure the listener and the corresponding Prometheus annotations using the following Helm values. Alternatively, you can specify the `consul.hashicorp.com/prometheus-scrape-port` and `consul.hashicorp.com/prometheus-scrape-path` Consul annotations to override them on a per-Pod basis:
```yaml
connectInject:
metrics:
defaultPrometheusScrapePort: 20200
defaultPrometheusScrapePath: "/metrics"
```
Those Helm values will result in the following Prometheus annotations being automatically added to the Pod for scraping:
The Helm values specified in the previous example result in the following Prometheus annotations being automatically added to the Pod for scraping:
```yaml
metadata:
annotations:
@ -63,26 +66,24 @@ metadata:
prometheus.io/port: "20200"
```
When metrics alone are enabled, the listener in the diagram on `0.0.0.0:20200` would point directly at the sidecar
metrics endpoint, rather than the merged metrics endpoint. The Prometheus scraping annotations would stay the same.
When metrics and metrics merging are *both* enabled, metrics are combined from the service and the sidecar proxy, and
exposed via a local server on the Consul sidecar for scraping. This endpoint is called the merged metrics endpoint and
defaults to `127.0.0.1:20100/stats/prometheus`. The listener will target the merged metrics endpoint in the above case.
When metrics and metrics merging are both enabled, metrics are combined from the service and the sidecar proxy, and
exposed through a local server on the Consul Dataplane sidecar for scraping. This endpoint is called the merged metrics endpoint and
defaults to `127.0.0.1:20100/stats/prometheus`. The listener targets the merged metrics endpoint in the above case.
It can be configured with the following Helm values (or overridden on a per-Pod basis with
`consul.hashicorp.com/merged-metrics-port`):
`consul.hashicorp.com/merged-metrics-port`):
```yaml
connectInject:
metrics:
defaultMergedMetricsPort: 20100
```
The endpoint to scrape service metrics from can be configured only on a per-Pod basis via the Pod annotations `consul.hashicorp.com/service-metrics-port` and `consul.hashicorp.com/service-metrics-path`. If these are not configured, the service metrics port will default to the port used to register the service with Consul (`consul.hashicorp.com/connect-service-port`), which in turn defaults to the first port on the first container of the Pod. The service metrics path will default to `/metrics`.
The endpoint to scrape service metrics from can be configured only on a per-Pod basis with the Pod annotations `consul.hashicorp.com/service-metrics-port` and `consul.hashicorp.com/service-metrics-path`. If these are not configured, the service metrics port defaults to the port used to register the service with Consul (`consul.hashicorp.com/connect-service-port`), which in turn defaults to the first port on the first container of the Pod. The service metrics path defaults to `/metrics`.
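For example, a Pod that serves its own metrics on a dedicated port might carry annotations like the following sketch. The port and path values are illustrative and should match whatever your application actually exposes.

```yaml
metadata:
  annotations:
    # Scrape service metrics from port 9102 at /telemetry instead of the defaults.
    consul.hashicorp.com/service-metrics-port: "9102"
    consul.hashicorp.com/service-metrics-path: "/telemetry"
```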
## Consul Agent Metrics
Metrics from the Consul server and client Pods can be scraped via Prometheus by setting the field `global.metrics.enableAgentMetrics` to `true`. Additionally, one can configure the metrics retention time on the agents by configuring
the field `global.metrics.agentMetricsRetentionTime` which expects a duration and defaults to `"1m"`. This value must be greater than `"0m"` for the Consul servers and clients to emit metrics at all. As the Prometheus deployment currently does not support scraping TLS endpoints, agent metrics are currently *unsupported* when TLS is enabled.
Metrics from the Consul server Pods can be scraped with Prometheus by setting the field `global.metrics.enableAgentMetrics` to `true`. Additionally, one can configure the metrics retention time on the agents by configuring
the field `global.metrics.agentMetricsRetentionTime` which expects a duration and defaults to `"1m"`. This value must be greater than `"0m"` for the Consul servers to emit metrics at all. As the Prometheus deployment currently does not support scraping TLS endpoints, agent metrics are currently unsupported when TLS is enabled.
```yaml
global:
@ -94,10 +95,10 @@ global:
## Gateway Metrics
Metrics from the Consul gateways, namely the Ingress Gateways, Terminating Gateways and the Mesh Gateways can be scraped
via Prometheus by setting the field `global.metrics.enableGatewayMetrics` to `true`. The gateways emit standard Envoy proxy
metrics. To ensure that the metrics are not exposed to the public internet (as Mesh and Ingress gateways can have public
IPs), their metrics endpoints are exposed on the Pod IP of the respective gateway instance, rather than on all
Metrics from the Consul ingress, terminating, and mesh gateways can be scraped
with Prometheus by setting the field `global.metrics.enableGatewayMetrics` to `true`. The gateways emit standard Envoy proxy
metrics. To ensure that the metrics are not exposed to the public internet, as mesh and ingress gateways can have public
IPs, their metrics endpoints are exposed on the Pod IP of the respective gateway instance, rather than on all
interfaces on `0.0.0.0`.
```yaml
@ -109,16 +110,16 @@ global:
## Metrics in the UI Topology Visualization
Consul's built-in UI has a topology visualization for services part of the Consul Service Mesh. The topology visualization has the ability to fetch basic metrics from a metrics provider for each service and display those metrics as part of the [topology visualization](/docs/connect/observability/ui-visualization).
Consul's built-in UI has a topology visualization for services that are part of the Consul service mesh. The topology visualization has the ability to fetch basic metrics from a metrics provider for each service and display those metrics as part of the [topology visualization](/docs/connect/observability/ui-visualization).
The diagram below illustrates how the UI displays service metrics for a sample application:
![UI Topology View](/img/ui-service-topology-view-hover.png)
The topology view is configured under `ui.metrics`. This will enable the Consul UI to query the provider specified by
`ui.metrics.provider` at the URL of the Prometheus server `ui.metrics.baseURL` to display sidecar proxy metrics for the
service. The UI will display some specific sidecar proxy Prometheus metrics when `ui.metrics.enabled` is `true` and
`ui.enabled` is true. The value of `ui.metrics.enabled` defaults to `"-"` which means it will inherit from the value of
The topology view is configured under `ui.metrics`. This configuration enables the Consul UI to query the provider specified by
`ui.metrics.provider` at the URL of the Prometheus server `ui.metrics.baseURL`, and then display sidecar proxy metrics for the
service. The UI displays some specific sidecar proxy Prometheus metrics when `ui.metrics.enabled` is `true` and
`ui.enabled` is true. The value of `ui.metrics.enabled` defaults to `"-"` which means it inherits from the value of
`global.metrics.enabled`.
```yaml
@ -132,13 +133,13 @@ ui:
## Deploying Prometheus (_for demo and non-production use-cases only_)
The Helm chart contains demo manifests for deploying Prometheus. It can be installed with Helm via `prometheus.enabled`. This manifest is based on the community manifest for Prometheus.
The Helm chart contains demo manifests for deploying Prometheus. It can be installed with Helm with `prometheus.enabled`. This manifest is based on the community manifest for Prometheus.
The Prometheus deployment is designed to allow quick bootstrapping for trial and demo use cases, and is not recommended for production use cases.
Prometheus will be installed in the same namespace as Consul, and will be installed
Prometheus is installed in the same namespace as Consul, and is installed
and uninstalled along with the Consul installation.
Grafana can optionally be utilized with Prometheus to display metrics. The installation and configuration of Grafana must be managed separately from the Consul Helm chart. The [Layer 7 Observability with Prometheus, Grafana, and Kubernetes](https://learn.hashicorp.com/tutorials/consul/kubernetes-layer7-observability?in=consul/kubernetes?in=consul/kubernetes)) tutorial provides an installation walkthrough using Helm.
Grafana can optionally be utilized with Prometheus to display metrics. The installation and configuration of Grafana must be managed separately from the Consul Helm chart. The [Layer 7 Observability with Prometheus, Grafana, and Kubernetes](/consul/tutorials/kubernetes/kubernetes-layer7-observability) tutorial provides an installation walkthrough using Helm.
```yaml
prometheus:

View File

@ -9,18 +9,18 @@ description: >-
Adding a terminating gateway is a multi-step process:
- Update the Helm chart with terminating gateway config options
- Update the Helm chart with terminating gateway configuration options
- Deploy the Helm chart
- Access the Consul agent
- Register external services with Consul
## Requirements
- [Consul](https://www.consul.io/docs/install#install-consul)
- [Consul](/docs/install#install-consul)
- [Consul on Kubernetes CLI](/docs/k8s/k8s-cli)
- Familiarity with [Terminating Gateways](/docs/connect/gateways/terminating-gateway)
## Update the Helm chart with terminating gateway config options
## Update the Helm chart with terminating gateway configuration options
Minimum required Helm options:
@ -29,10 +29,6 @@ Minimum required Helm options:
```yaml
global:
name: consul
connectInject:
enabled: true
controller:
enabled: true
terminatingGateways:
enabled: true
```
@ -49,9 +45,10 @@ $ consul-k8s install -f values.yaml
## Accessing the Consul agent
You can access the Consul server directly from your host via `kubectl port-forward`. This is helpful for interacting with your Consul UI locally as well as for validating the connectivity of the application.
You can access the Consul server directly from your host by running `kubectl port-forward`. This is helpful for interacting with your Consul UI locally as well as for validating the connectivity of the application.
<Tabs>
<Tab heading="Without TLS">
```shell-session
@ -62,6 +59,7 @@ $ kubectl port-forward consul-server-0 8500 &
$ export CONSUL_HTTP_ADDR=http://localhost:8500
```
</Tab>
<Tab heading="With TLS">
If TLS is enabled use port 8501:
@ -75,6 +73,7 @@ $ export CONSUL_HTTP_ADDR=https://localhost:8501
$ export CONSUL_HTTP_SSL_VERIFY=false
```
</Tab>
</Tabs>
If ACLs are enabled, also set:
@ -116,7 +115,7 @@ The following table describes traffic behaviors when using the `destination` fie
| L7 | Hostname | No | <nobr>Allowed</nobr> | A `Host` or `:authority` header is required. |
| L7 | IP | No | <nobr>Allowed</nobr> | There are no limitations on dialing IPs without TLS. |
You can provide a `caFile` to secure traffic between unencrypted clients that connect to external services through the terminating gateway.
You can provide a `caFile` to secure traffic that connects to external services through the terminating gateway.
Refer to [Create the configuration entry for the terminating gateway](#create-the-configuration-entry-for-the-terminating-gateway) for details.
-> **Note:** Regardless of the `protocol` specified in the `ServiceDefaults`, [L7 intentions](/docs/connect/config-entries/service-intentions#permissions) are not currently supported with `ServiceDefaults` destinations.
@ -149,11 +148,12 @@ $ kubectl apply --filename service-defaults.yaml
All other terminating gateway operations can use the name of the `ServiceDefaults` component, in this case "example-https", as a Consul service name.
</Tab>
<Tab heading="Using Consul catalog">
Normally, Consul services are registered with the Consul client on the node that
they're running on. Since this is an external service, there is no Consul node
to register it onto. Instead, we will make up a node name and register the
Normally, Consul services are registered on the node that
they're running on. Since this service is an external service, there is no Consul node
to register it onto. Instead, we must make up a node name and register the
service to that node.
Create a sample external service and register it with Consul.
@ -194,7 +194,7 @@ $ curl --request PUT --data @external.json --insecure $CONSUL_HTTP_ADDR/v1/catal
true
```
If ACLs and TLS are enabled :
If ACLs and TLS are enabled:
```shell-session
$ curl --request PUT --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" --data @external.json --insecure $CONSUL_HTTP_ADDR/v1/catalog/register
@ -275,7 +275,7 @@ spec:
If TLS is enabled for external services registered through the Consul catalog and you are not using [transparent proxy `destination`](#register-an-external-service-as-a-destination), you must include the [`caFile`](/docs/connect/config-entries/terminating-gateway#cafile) parameter that points to the system trust store of the terminating gateway container.
By default, the trust store is located in the `/etc/ssl/certs/ca-certificates.crt` directory.
Configure the [`caFile`](https://www.consul.io/docs/connect/config-entries/terminating-gateway#cafile) parameter in the `TerminatingGateway` config entry to point to the `/etc/ssl/cert.pem` directory if TLS is enabled and you are using one of the following components:
Configure the [`caFile`](/docs/connect/config-entries/terminating-gateway#cafile) parameter in the `TerminatingGateway` config entry to point to the `/etc/ssl/cert.pem` directory if TLS is enabled and you are using one of the following components:
- Consul Helm chart 0.43 or older
- An Envoy image with an alpine base image
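The following sketch of a `TerminatingGateway` custom resource shows where `caFile` fits. It reuses the `example-https` service registered earlier in this guide and assumes an alpine-based Envoy image, so adjust the path for your environment.

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: TerminatingGateway
metadata:
  name: terminating-gateway
spec:
  services:
    - name: example-https
      # Trust store path inside the terminating gateway container
      # when the Envoy image uses an alpine base.
      caFile: /etc/ssl/cert.pem
```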

View File

@ -8,7 +8,7 @@ description: >-
# Custom Resource Definitions (CRDs) for Consul on Kubernetes
This topic describes how to manage Consul [configuration entries](/docs/agent/config-entries)
via Kubernetes Custom Resources. Configuration entries provide cluster-wide defaults for the service mesh.
with Kubernetes Custom Resources. Configuration entries provide cluster-wide defaults for the service mesh.
## Requirements
@ -50,37 +50,15 @@ Hang tight while we grab the latest from your chart repositories...
Update Complete. ⎈Happy Helming!⎈
```
Next, you must configure consul-helm via your `values.yaml` to install the custom resource definitions
and enable the controller that acts on them:
<CodeBlockConfig filename="values.yaml" highlight="4-5,7-8">
```yaml
global:
name: consul
controller:
enabled: true
connectInject:
enabled: true
```
</CodeBlockConfig>
Note that:
1. `controller.enabled: true` installs the CRDs and enables the controller.
1. Configuration entries are used to configure Consul service mesh so it's also
expected that `connectInject` will be enabled.
See [Install with Helm Chart](/docs/k8s/installation/install) for further installation
Refer to [Install with Helm Chart](/docs/k8s/installation/install) for further installation
instructions.
**Note**: Configuration entries require `connectInject` to be enabled, which is the default behavior in the official Helm chart. If you disabled this setting, you must re-enable it to use CRDs.
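If you need to re-enable it, the relevant `values.yaml` settings look like the following sketch; `global.name` is included only for context.

```yaml
global:
  name: consul
connectInject:
  # Required for configuration entry CRDs to sync to Consul.
  enabled: true
```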
## Upgrading An Existing Cluster to CRDs
If you have an existing Consul cluster running on Kubernetes you may need to perform
extra steps to migrate to CRDs. See [Upgrade An Existing Cluster to CRDs](/docs/k8s/crds/upgrade-to-crds) for full instructions.
extra steps to migrate to CRDs. Refer to [Upgrade An Existing Cluster to CRDs](/docs/k8s/crds/upgrade-to-crds) for full instructions.
## Usage
@ -88,7 +66,7 @@ Once installed, you can use `kubectl` to create and manage Consul's configuratio
### Create
You can create configuration entries via `kubectl apply`.
You can create configuration entries with `kubectl apply`.
```shell-session
$ cat <<EOF | kubectl apply --filename -
@ -103,7 +81,7 @@ EOF
servicedefaults.consul.hashicorp.com/foo created
```
See [Configuration Entries](/docs/agent/config-entries) for detailed schema documentation.
Refer to [Configuration Entries](/docs/agent/config-entries) for detailed schema documentation.
### Get
@ -121,7 +99,7 @@ in Consul.
### Describe
You can use `kubectl describe [kind] [name]` to investigate the status of the
configuration entry. If `SYNCED` is false, the status will contain the reason
configuration entry. If `SYNCED` is false, the status contains the reason
why.
```shell-session
@ -137,7 +115,7 @@ Status:
You can use `kubectl edit [kind] [name]` to edit the configuration entry:
```shell
```shell-session
$ kubectl edit servicedefaults foo
# change protocol: http => protocol: tcp
servicedefaults.consul.hashicorp.com/foo edited
@ -171,11 +149,11 @@ Error from server (NotFound): servicedefaults.consul.hashicorp.com "foo" not fou
If running `kubectl delete` hangs without exiting, there may be
a dependent configuration entry registered with Consul that prevents the target configuration entry from being
deleted. For example, if you set the protocol of your service to `http` via `ServiceDefaults` and then
create a `ServiceSplitter`, you won't be able to delete the `ServiceDefaults`.
deleted. For example, if you set the protocol of your service to `http` in `ServiceDefaults` and then
create a `ServiceSplitter`, you will not be able to delete `ServiceDefaults`.
This is because by deleting the `ServiceDefaults` config, you are setting the
protocol back to the default which is `tcp`. Since `ServiceSplitter` requires
protocol back to the default which is `tcp`. Because `ServiceSplitter` requires
that the service has an `http` protocol, Consul will not allow the `ServiceDefaults`
to be deleted since that would put Consul into a broken state.
@ -188,7 +166,7 @@ the `ServiceSplitter`.
Consul Open Source (Consul OSS) ignores Kubernetes namespaces and registers all services into the same
global Consul registry based on their names. For example, service `web` in Kubernetes namespace
`web-ns` and service `admin` in Kubernetes namespace `admin-ns` will be registered into
`web-ns` and service `admin` in Kubernetes namespace `admin-ns` are registered into
Consul as `web` and `admin` with the Kubernetes source namespace ignored.
When creating custom resources to configure these services, the namespace of the
@ -213,8 +191,7 @@ metadata:
spec: ...
```
~> **NOTE:** If two custom resources of the same kind **and** the same name are attempted to
be created in different Kubernetes namespaces, the last one created will not be synced.
~> **Note:** If you create two custom resources with identical `kind` and `name` values in different Kubernetes namespaces, the last one you create is not able to sync.
#### ServiceIntentions Special Case
@ -271,25 +248,24 @@ spec:
</CodeBlockConfig>
~> **NOTE:** If two `ServiceIntentions` resources set the same `spec.destination.name`, the
last one created will not be synced.
~> **Note:** If two `ServiceIntentions` resources set the same `spec.destination.name`, the
last one created is not synced.
### Consul Enterprise <EnterpriseAlert inline />
Consul Enterprise supports multiple configurations for how Kubernetes namespaces are mapped
to Consul namespaces. The Consul namespace that the custom resource is registered
into depends on the configuration being used but in general, you should create your
custom resources in the same Kubernetes namespace as the service they're configuring and
everything will work as expected.
custom resources in the same Kubernetes namespace as the service they configure.
The details on each configuration are:
1. **Mirroring** - The Kubernetes namespace will be "mirrored" into Consul, i.e.
service `web` in Kubernetes namespace `web-ns` will be registered as service `web`
1. **Mirroring** - The Kubernetes namespace is mirrored into Consul. For example, the
service `web` in Kubernetes namespace `web-ns` is registered as service `web`
in the Consul namespace `web-ns`. In the same vein, a `ServiceDefaults` custom resource with
name `web` in Kubernetes namespace `web-ns` will configure that same service.
name `web` in Kubernetes namespace `web-ns` configures that same service.
This is configured via [`connectInject.consulNamespaces`](/docs/k8s/helm#v-connectinject-consulnamespaces):
This is configured with [`connectInject.consulNamespaces`](/docs/k8s/helm#v-connectinject-consulnamespaces):
<CodeBlockConfig highlight="6-7">
@ -305,13 +281,12 @@ The details on each configuration are:
</CodeBlockConfig>
1. **Mirroring with prefix** - The Kubernetes namespace will be "mirrored" into Consul
with a prefix added to the Consul namespace, i.e.
if the prefix is `k8s-` then service `web` in Kubernetes namespace `web-ns` will be registered as service `web`
1. **Mirroring with prefix** - The Kubernetes namespace is mirrored into Consul
with a prefix added to the Consul namespace. For example, if the prefix is `k8s-` then service `web` in Kubernetes namespace `web-ns` will be registered as service `web`
in the Consul namespace `k8s-web-ns`. In the same vein, a `ServiceDefaults` custom resource with
name `web` in Kubernetes namespace `web-ns` will configure that same service.
name `web` in Kubernetes namespace `web-ns` configures that same service.
This is configured via [`connectInject.consulNamespaces`](/docs/k8s/helm#v-connectinject-consulnamespaces):
This is configured with [`connectInject.consulNamespaces`](/docs/k8s/helm#v-connectinject-consulnamespaces):
<CodeBlockConfig highlight="8">
@ -329,17 +304,16 @@ The details on each configuration are:
</CodeBlockConfig>
1. **Single destination namespace** - The Kubernetes namespace is ignored and all services
will be registered into the same Consul namespace, i.e. if the destination Consul
namespace is `my-ns` then service `web` in Kubernetes namespace `web-ns` will
be registered as service `web` in Consul namespace `my-ns`.
are registered into the same Consul namespace. For example, if the destination Consul
namespace is `my-ns` then service `web` in Kubernetes namespace `web-ns` is registered as service `web` in Consul namespace `my-ns`.
In this configuration, the Kubernetes namespace of the custom resource is ignored.
For example, a `ServiceDefaults` custom resource with the name `web` in Kubernetes
namespace `admin-ns` will configure the service with name `web` even though that
namespace `admin-ns` configures the service with name `web` even though that
service is running in Kubernetes namespace `web-ns` because the `ServiceDefaults`
resource ends up registered into the same Consul namespace `my-ns`.
This is configured via [`connectInject.consulNamespaces`](/docs/k8s/helm#v-connectinject-consulnamespaces):
This is configured with [`connectInject.consulNamespaces`](/docs/k8s/helm#v-connectinject-consulnamespaces):
<CodeBlockConfig highlight="7">
@ -355,13 +329,12 @@ The details on each configuration are:
</CodeBlockConfig>
~> **NOTE:** In this configuration, if two custom resources of the same kind **and** the same name are attempted to
be created in two Kubernetes namespaces, the last one created will not be synced.
~> **Note:** In this configuration, if two custom resources are created in two Kubernetes namespaces with identical `name` and `kind` values, the last one created is not synced.
#### ServiceIntentions Special Case (Enterprise)
`ServiceIntentions` are different from the other custom resources because the
name of the resource doesn't matter. For other resources, the name of the resource
name of the resource does not matter. For other resources, the name of the resource
determines which service it configures. For example, this resource configures
the service `web`:
@ -379,7 +352,7 @@ spec:
</CodeBlockConfig>
For `ServiceIntentions`, because we need to support the ability to create
wildcard intentions (e.g. `foo => * (allow)` meaning that `foo` can talk to **any** service),
wildcard intentions (e.g. `foo => * (allow)` meaning that `foo` can talk to any service),
and because `*` is not a valid Kubernetes resource name, we instead use the field `spec.destination.name`
to configure the destination service for the intention:
@ -415,5 +388,5 @@ spec:
In addition, we support the field `spec.destination.namespace` to configure
the destination service's Consul namespace. If `spec.destination.namespace`
is empty, then the Consul namespace used will be the same as the other
is empty, then the Consul namespace used is the same as the other
config entries as outlined above.
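For example, a `ServiceIntentions` resource that targets the `web` service in a specific Consul namespace might look like the following sketch. The resource name, namespaces, and source service are illustrative.

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: web-intentions
spec:
  destination:
    name: web
    # Consul namespace of the destination service; if omitted, the namespace
    # is derived in the same way as for other config entries.
    namespace: web-ns
  sources:
    - name: frontend
      action: allow
```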

View File

@ -1,80 +1,65 @@
---
layout: docs
page_title: Join External Clients to Consul on Kubernetes
page_title: Join External Services to Consul on Kubernetes
description: >-
Client agents running on VMs can join a Consul datacenter running on Kubernetes. Configure the Kubernetes installation to accept communication from external clients.
Services running on a virtual machine (VM) can join a Consul datacenter running on Kubernetes. Learn how to configure the Kubernetes installation to accept communication from external services.
---
# Join External Clients to Consul on Kubernetes
# Join External Services to Consul on Kubernetes
Consul clients running on non-Kubernetes nodes can join a Consul cluster running within Kubernetes.
## Networking
Within one datacenter, Consul typically requires a fully connected
[network](/docs/architecture). This means the IPs of every client and server
agent should be routable by every other client and server agent in the
datacenter. Clients need to be able to [gossip](/docs/architecture/gossip) with
every other agent and make RPC calls to servers. Servers need to be able to
gossip with every other agent. See [Architecture](/docs/architecture) for more details.
-> **Consul Enterprise customers** may use [network
segments](/docs/enterprise/network-segments) to enable non-fully-connected
topologies. However, out-of-cluster nodes must still be able to communicate
with the server pod or host IP addresses.
Services running on non-Kubernetes nodes can join a Consul cluster running within Kubernetes.
## Auto-join
The recommended way to join a cluster running within Kubernetes is to
use the ["k8s" cloud auto-join provider](/docs/install/cloud-auto-join#kubernetes-k8s).
The auto-join provider dynamically discovers IP addresses to join using
the Kubernetes API. It authenticates with Kubernetes using a standard
`kubeconfig` file. This works with all major hosted Kubernetes offerings
`kubeconfig` file. Auto-join works with all major hosted Kubernetes offerings
as well as self-hosted installations. The token in the `kubeconfig` file
needs to have permissions to list pods in the namespace where Consul servers
are deployed.
The auto-join string below will join a Consul server cluster that is
started using the [official Helm chart](/docs/k8s/helm):
The auto-join string below joins an external agent to a Consul server cluster deployed using the [official Helm chart](/docs/k8s/helm):
```shell-session
$ consul agent -retry-join 'provider=k8s label_selector="app=consul,component=server"'
```
-> **Note:** This auto-join command only connects on the default gossip port
8301, whether you are joining on the pod network or via host ports. Either a
consul server or client that is already a member of the datacenter should be
listening on this port for the external client agent to be able to use
8301, whether you are joining on the pod network or via host ports. A
Consul server that is already a member of the datacenter should be
listening on this port for the external service to connect through
auto-join.
### Auto-join on the Pod network
In the default Consul Helm chart installation, Consul clients and servers are
routable only via their pod IPs for server RPCs and gossip (HTTP
API calls to Consul clients can also be made through host IPs). This means any
external client agents joining the Consul cluster running on Kubernetes would
need to be able to have connectivity to those pod IPs.
In many hosted Kubernetes environments, you will need to explicitly configure
In the default Consul Helm chart installation, Consul servers are
routable through their pod IPs for server RPCs. As a result, any
external agents joining the Consul cluster running on Kubernetes
need to be able to connect to those pod IPs.
In many hosted Kubernetes environments, you need to explicitly configure
your hosting provider to ensure that pod IPs are routable from external VMs.
See [Azure AKS
CNI](https://docs.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking),
[AWS EKS
CNI](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html) and
[GKE VPC-native
clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips).
For more information, refer to [Azure AKS CNI](https://docs.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking),
[AWS EKS CNI](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html) and
[GKE VPC-native clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips).
Given you have the [official Helm chart](/docs/k8s/helm) installed with the default values, do the following to join an external client agent.
To join external agents with Consul on Kubernetes deployments installed with default values through the [official Helm chart](/docs/k8s/helm):
1. Make sure the pod IPs of the clients and servers in Kubernetes are
1. Make sure the pod IPs of the servers in Kubernetes are
routable from the VM and that the VM can access port 8301 (for gossip) and
port 8300 (for server RPC) on those pod IPs.
1. Make sure that the client and server pods running in Kubernetes can route
1. Make sure that the server pods running in Kubernetes can route
to the VM's advertise IP on its gossip port (default 8301).
2. Make sure you have the `kubeconfig` file for the Kubernetes cluster in `$HOME/.kube/config` on the external VM.
1. Make sure you have the `kubeconfig` file for the Kubernetes cluster in `$HOME/.kube/config` on the external VM.
2. On the external VM, run:
```bash
1. On the external VM, run:
```shell-session
consul agent \
-advertise="$ADVERTISE_IP" \
-retry-join='provider=k8s label_selector="app=consul,component=server"' \
@ -86,7 +71,8 @@ Given you have the [official Helm chart](/docs/k8s/helm) installed with the defa
-data-dir=$DATA_DIR \
```
3. Check if the join was successful by running `consul members`. Sample output:
1. Run `consul members` to check if the join was successful.
```shell-session
/ $ consul members
Node Address Status Type Build Protocol DC Segment
@ -97,10 +83,10 @@ Given you have the [official Helm chart](/docs/k8s/helm) installed with the defa
gke-external-agent-default-pool-32d15192-vo7k 10.138.0.42:8301 alive client 1.9.1 2 dc1 <default>
```
### Auto-join via host ports
If your external VMs can't connect to Kubernetes pod IPs, but they can connect
to the internal host IPs of the nodes in the Kubernetes cluster, you have the
option to expose the clients and server ports on the host IP instead.
### Auto-join through host ports
If your external VMs cannot connect to Kubernetes pod IPs but they can connect
to the internal host IPs of the nodes in the Kubernetes cluster, you can join the two by exposing ports on the host IP instead.
1. Install the [official Helm chart](/docs/k8s/helm) with the following values:
```yaml
@ -114,19 +100,20 @@ option to expose the clients and server ports on the host IP instead.
# Note that this needs to be different than 8301, to avoid conflicting with the client gossip hostPort
port: 9301
```
This will expose the client gossip ports, the server gossip ports and the server RPC port at `hostIP:hostPort`. Note that `hostIP` is the **internal** IP of the VM that the client/server pods are deployed on.
This installation exposes the client gossip ports, the server gossip ports and the server RPC port at `hostIP:hostPort`. Note that `hostIP` is the **internal** IP of the VM that the client/server pods are deployed on.
1. Make sure the IPs of the Kubernetes nodes are routable from the VM and
that the VM can access ports 8301 and 9301 (for gossip) and port 8300 (for
server RPC) on those node IPs.
1. Make sure the client and server pods running in Kubernetes can route to
1. Make sure the server pods running in Kubernetes can route to
the VM's advertise IP on its gossip port (default 8301).
3. Make sure you have the `kubeconfig` file for the Kubernetes cluster in `$HOME/.kube/config` on the external VM.
1. Make sure you have the `kubeconfig` file for the Kubernetes cluster in `$HOME/.kube/config` on the external VM.
4. On the external VM, run (note the addition of `host_network=true` in the retry-join argument):
```bash
1. On the external VM, run:
```shell-session
consul agent \
-advertise="$ADVERTISE_IP" \
-retry-join='provider=k8s host_network=true label_selector="app=consul,component=server"'
@ -137,7 +124,11 @@ option to expose the clients and server ports on the host IP instead.
-datacenter=$DATACENTER \
-data-dir=$DATA_DIR \
```
3. Check if the join was successful by running `consul members`. Sample output:
Note the addition of `host_network=true` in the retry-join argument.
1. Run `consul members` to check if the join was successful.
```shell-session
/ $ consul members
Node Address Status Type Build Protocol DC Segment
@ -149,13 +140,12 @@ option to expose the clients and server ports on the host IP instead.
```
## Manual join
If you are unable to use auto-join, you can also follow the instructions in
either of the auto-join sections but instead of using a `provider` key in the
`-retry-join` flag, you would need to pass the address of at least one
consul server, e.g: `-retry-join=$CONSUL_SERVER_IP:$SERVER_SERFLAN_PORT`. A
`kubeconfig` file is not required when using manual join.
However, rather than hardcoding the IP, it's recommended to set up a DNS entry
that would resolve to the consul servers' pod IPs (if using the pod network) or
host IPs that the server pods are running on (if using host ports).
If you are unable to use auto-join, try following the instructions in
either of the auto-join sections, but instead of using a `provider` key in the
`-retry-join` flag, pass the address of at least one Consul server. Example: `-retry-join=$CONSUL_SERVER_IP:$SERVER_SERFLAN_PORT`.
A `kubeconfig` file is not required when using manual join.
Instead of hardcoding an IP address, we recommend you set up a DNS entry
that resolves to the pod IPs or host IPs that the Consul server pods are running on.

View File

@ -7,13 +7,15 @@ description: >-
# Vault as the Secrets Backend - Data Integration
## Overview
This topic describes how to configure Vault and Consul in order to share secrets for use within Consul.
This topic provides an overview of how to configure Vault and Consul to share secrets for use within Consul.
## Prerequisites
### General Integration Steps
Before you set up the data integration between Vault and Consul on Kubernetes, read and complete the steps in the [Systems Integration](/docs/k8s/deployment-configurations/vault/systems-integration) section of [Vault as a Secrets Backend](/docs/k8s/deployment-configurations/vault).
You must complete two general procedures for each secret you wish to store in Vault.
## General integration steps
For each secret you want to store in Vault, you must complete two multi-step procedures.
Complete the following steps once:
1. Store the secret in Vault.
@ -21,40 +23,18 @@ Complete the following steps once:
Repeat the following steps for each datacenter in the cluster:
1. Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access.
1. Update the Consul on Kubernetes helm chart.
1. Update the Consul on Kubernetes Helm chart.
## Prerequisites
Prior to setting up the data integration between Vault and Consul on Kubernetes, you will need to have read and completed the steps in the [Systems Integration](/docs/k8s/deployment-configurations/vault/systems-integration) section of [Vault as a Secrets Backend](/docs/k8s/deployment-configurations/vault).
## Secrets-to-service account mapping
### Example - Gossip Encryption Key Integration
At the most basic level, the goal of this configuration is to authorize a Consul on Kubernetes service account to access a secret in Vault.
Following the general integration steps, a more detailed workflow for integration of the [Gossip encryption key](/docs/k8s/deployment-configurations/vault/data-integration/gossip) with the Vault Secrets backend would like the following:
Complete the following steps once:
1. Store the secret in Vault.
- Save the gossip encryption key in Vault at the path `secret/consul/gossip`.
1. Create a Vault policy that authorizes the desired level of access to the secret.
- Create a Vault policy that you name `gossip-policy` which allows `read` access to the path `secret/consul/gossip`.
Repeat the following steps for each datacenter in the cluster:
1. Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access.
- Both Consul servers and Consul clients need access to the gossip encryption key, so you create two Vault Kubernetes:
- A role called `consul-server` that maps the Kubernetes namespace and service account name for your consul servers to the `gossip-policy` created in [step 2](#one-time-setup-in-vault) of One time setup in Vault.
- A role called `consul-client` that maps the Kubernetes namespace and service account name for your consul clients to the `gossip-policy` created in [step 2](#one-time-setup-in-vault) of One time setup in Vault..
1. Update the Consul on Kubernetes helm chart.
- Configure the Vault Kubernetes auth roles created for the gossip encryption key:
- [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole) is set to the `consul-server` Vault Kubernetes auth role created previously.
- [`global.secretsBackend.vault.consulClientRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole) is set to the `consul-client` Vault Kubernetes auth role created previously.
## Secrets to Service Account Mapping
At the most basic level, the goal of this configuration is to authorize a Consul on Kubernetes service account to access a secret in Vault.
Below is a mapping of Vault secrets and the Consul on Kubernetes service accounts that need to access them.
The following table associates Vault secrets and the Consul on Kubernetes service accounts that require access.
(NOTE: `Consul components` refers to all other services and jobs that are not Consul servers or clients.
It includes things like terminating gateways, ingress gateways, etc.)
### Primary Datacenter
### Primary datacenter
| Secret | Service Account For | Configurable Role in Consul k8s Helm |
| ------ | ------------------- | ------------------------------------ |
|[ACL Bootstrap token](/docs/k8s/deployment-configurations/vault/data-integration/bootstrap-token) | Consul server-acl-init job | [`global.secretsBackend.vault.manageSystemACLsRole`](/docs/k8s/helm#v-global-secretsbackend-vault-managesystemaclsrole)|
@ -67,8 +47,10 @@ It includes things like terminating gateways, ingress gateways, etc.)
|[Service Mesh and Consul client TLS credentials](/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)|
|[Webhook TLS certificates for controller and connect inject](/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) | Consul controllers<br/>Consul connect inject | [`global.secretsBackend.vault.controllerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-controllerrole)<br />[`global.secretsBackend.vault.connectInjectRole`](/docs/k8s/helm#v-global-secretsbackend-vault-controllerrole)|
### Secondary datacenters
The mapping for secondary datacenters is similar, with the following differences:
- There is no bootstrap token because ACLs are bootstrapped in the primary datacenter.
- ACL Partition token is mapped to both the `server-acl-init` job and the `partition-init` job service accounts.
- ACL Replication token is mapped to both the `server-acl-init` job and Consul service accounts.

| Secret | Service Account For | Configurable Role in Consul k8s Helm |
| ------ | ------------------- | ------------------------------------ |
|[Server TLS credentials](/docs/k8s/deployment-configurations/vault/data-integration/server-tls) | Consul servers<br/>Consul clients<br/>Consul components | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)<br/>[`global.secretsBackend.vault.consulClientRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)<br/>[`global.secretsBackend.vault.consulCARole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulcarole)|
|[Service Mesh and Consul client TLS credentials](/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)|
|[Webhook TLS certificates for controller and connect inject](/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) | Consul controllers<br/>Consul connect inject | [`global.secretsBackend.vault.controllerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-controllerrole)<br />[`global.secretsBackend.vault.connectInjectRole`](/docs/k8s/helm#v-global-secretsbackend-vault-controllerrole)|
### Combining policies within roles
As the tables above show, a Consul on Kubernetes service account may need to request more than one secret. In these cases, create one role for the Consul on Kubernetes service account and map it to multiple policies, each of which grants access to a given secret.
For example, if your Consul on Kubernetes servers need access to [Consul Server TLS credentials](/docs/k8s/deployment-configurations/vault/data-integration/server-tls) and an [Enterprise license](/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license):
1. Create a policy for each secret.
1. Consul Server TLS credentials
1. Create a Vault Kubernetes auth role that maps the Consul server service account to both policies:

```shell-session
$ vault write auth/kubernetes/role/consul-server \
bound_service_account_names=<Consul server service account> \
bound_service_account_namespaces=<Consul installation namespace> \
policies=ca-policy,license-policy \
ttl=1h
```
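For illustration, the Helm chart would then reference this role through the Vault secrets backend settings. A minimal sketch, assuming the `consul-server` role name used above:

```yaml
global:
  secretsBackend:
    vault:
      enabled: true
      consulServerRole: consul-server
```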
## Detailed data integration guides
The following secrets can be stored in the Vault KV secrets engine, which is meant to handle arbitrary secrets:
- [ACL Bootstrap token](/docs/k8s/deployment-configurations/vault/data-integration/bootstrap-token)
- [ACL Partition token](/docs/k8s/deployment-configurations/vault/data-integration/partition-token)
- [ACL Replication token](/docs/k8s/deployment-configurations/vault/data-integration/replication-token)
- [Snapshot Agent config](/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config)
The following TLS certificates and keys can be generated and managed by the Vault PKI engine, which is meant to handle things like certificate expiration and rotation:
- [Server TLS credentials](/docs/k8s/deployment-configurations/vault/data-integration/server-tls)
- [Service Mesh and Consul client TLS credentials](/docs/k8s/deployment-configurations/vault/data-integration/connect-ca)
- [Vault as the Webhook Certificate Provider for Consul Controller and Connect Inject on Kubernetes](/docs/k8s/deployment-configurations/vault/data-integration/webhook-certs)
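Before Vault can issue these certificates, the PKI secrets engine must be enabled and tuned. A rough sketch, where the mount path and maximum TTL are illustrative:

```shell-session
$ vault secrets enable pki
$ vault secrets tune -max-lease-ttl=87600h pki
```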
Read through the [detailed data integration guides](#detailed-data-integration-guides) that are pertinent to your environment.
This section documents the official integrations between Consul and Kubernetes.
## Use Cases
**Consul Service Mesh**:
Consul can automatically inject the [Consul Service Mesh](/docs/connect)
sidecar into pods so that they can accept and establish encrypted
and authorized network connections with mutual TLS. And because Consul Service Mesh
can run anywhere, pods and external services can communicate with each other over a fully encrypted connection.
**Service sync to enable Kubernetes and non-Kubernetes services to communicate**:
Consul can sync Kubernetes services with its own service registry. This service sync allows
Kubernetes services to use Kubernetes' native service discovery capabilities to discover
and connect to external services registered in Consul, and for external services
to use Consul service discovery to discover and connect to Kubernetes services.
**Additional integrations**: Consul can run directly on Kubernetes, so in addition to the
native integrations provided by Consul itself, any other tool built for
Kubernetes can leverage Consul.
## Getting Started With Consul and Kubernetes
There are several ways to try Consul with Kubernetes in different environments.
### Tutorials
- The [Service Mesh on Kubernetes collection](/consul/tutorials/kubernetes?utm_source=docs)
provides guidance for installing Consul as a service mesh for Kubernetes using the Helm
chart, deploying services in the service mesh, and using intentions to secure service
communications.
- The [Migrate to Microservices with Consul Service Mesh on Kubernetes](/consul/tutorials/microservices?utm_source=docs)
collection uses an example application written by a fictional company to illustrate why and how organizations can
migrate from monolith to microservices using Consul service mesh on Kubernetes. The case study in this collection
should provide information valuable for understanding how to develop services that leverage Consul during any stage
of your microservices journey.
- The [Consul and Minikube guide](/consul/tutorials/kubernetes/kubernetes-minikube?utm_source=docs) is a quick step-by-step guide for deploying Consul with the official Helm chart on a local instance of Minikube.
- Review production best practices and cloud-specific configurations for deploying Consul on managed Kubernetes runtimes.
- The [Consul on Azure Kubernetes Service (AKS) tutorial](/consul/tutorials/kubernetes/kubernetes-aks-azure?utm_source=docs) is a complete step-by-step guide on how to deploy Consul on AKS. The guide also allows you to practice deploying two microservices.
- The [Consul on Amazon Elastic Kubernetes Service (EKS) tutorial](/consul/tutorials/kubernetes/kubernetes-eks-aws?utm_source=docs) is a complete step-by-step guide on how to deploy Consul on EKS. Additionally, it provides guidance on interacting with your datacenter with the Consul UI, CLI, and API.
- The [Consul on Google Kubernetes Engine (GKE) tutorial](/consul/tutorials/kubernetes/kubernetes-gke-google?utm_source=docs) is a complete step-by-step guide on how to deploy Consul on GKE. Additionally, it provides guidance on interacting with your datacenter with the Consul UI, CLI, and API.
- The [Consul and Kubernetes Reference Architecture](/consul/tutorials/kubernetes/kubernetes-reference-architecture?utm_source=docs) guide provides recommended practices for production.
- The [Consul and Kubernetes Deployment](/consul/tutorials/kubernetes/kubernetes-deployment-guide?utm_source=docs) tutorial covers the necessary steps to install and configure a new Consul cluster on Kubernetes in production.
- The [Secure Consul and Registered Services on Kubernetes](/consul/tutorials/kubernetes/kubernetes-secure-agents?utm_source=docs) tutorial covers
the necessary steps to secure a Consul cluster running on Kubernetes in production.
- The [Layer 7 Observability with Consul Service Mesh](/consul/tutorials/kubernetes/kubernetes-layer7-observability) tutorial covers monitoring a
Consul service mesh running on Kubernetes with Prometheus and Grafana.
### Documentation
- [Installing Consul](/docs/k8s/installation/install) covers how to install Consul using the Helm chart.
- [Helm Chart Reference](/docs/k8s/helm) describes the different options for configuring the Helm chart.
```shell-session
$ consul-k8s version
consul-k8s 1.0
```
</Tab>
```shell-session
$ consul-k8s version
consul-k8s 1.0
```
</Tab>
```shell-session
$ consul-k8s version
consul-k8s 1.0
```
</Tab>
1. Download the appropriate version of Consul K8s CLI using the following `curl` command. Set the `$VERSION` environment variable to the appropriate version for your deployment.
```shell-session
$ export VERSION=1.0 && \
curl --location "https://releases.hashicorp.com/consul-k8s/${VERSION}/consul-k8s_${VERSION}_darwin_amd64.zip" --output consul-k8s-cli.zip
```
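   The archive contains the `consul-k8s` binary. A typical follow-up, assuming `/usr/local/bin` is on your `PATH`, is to unzip the archive and move the binary into place:

   ```shell-session
   $ unzip consul-k8s-cli.zip
   $ sudo mv consul-k8s /usr/local/bin/
   ```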
```shell-session
$ consul-k8s version
consul-k8s 1.0
```
</Tab>
```shell-session
$ consul-k8s version
consul-k8s 1.0
```
</Tab>
1. Install the `consul-k8s` CLI.
```shell-session
$ export VERSION=-1.0 && \
sudo yum -y install consul-k8s-${VERSION}-1
```
2. (Optional) Issue the `consul-k8s version` command to verify the installation.
```shell-session
$ consul-k8s version
consul-k8s 1.0
```
</Tab>
```shell-session
$ consul-k8s status
==> Consul-K8s Status Summary
   NAME  | NAMESPACE |  STATUS  | CHARTVERSION | APPVERSION | REVISION |      LAST UPDATED
---------+-----------+----------+--------------+------------+----------+--------------------------
  consul | consul    | deployed | 0.40.0       | 1.14.0     |        1 | 2022/01/31 16:58:51 PST

✓ Consul servers healthy (3/3)
✓ Consul clients healthy (3/3)
```
This topic describes how to install Consul on Kubernetes using the official Consul Helm chart. For instructions on how to install Consul on Kubernetes using the Consul K8s CLI, refer to [Installing the Consul K8s CLI](/docs/k8s/installation/install-cli).
## Introduction
We recommend using the Consul Helm chart to install Consul on Kubernetes for multi-cluster installations that involve cross-partition or cross datacenter communication. The Helm chart installs and configures all necessary components to run Consul.
Consul can run directly on Kubernetes so that you can leverage Consul functionality if your workloads are fully deployed to Kubernetes. For heterogeneous workloads, Consul agents can join a server running inside or outside of Kubernetes. Refer to the [Consul on Kubernetes architecture](/docs/k8s/architecture) to learn more about its general architecture.
The Helm chart exposes several useful configurations and automatically sets up complex resources, but it does not automatically operate Consul. You must still become familiar with how to monitor, back up, and upgrade the Consul cluster.
The Helm chart has no required configuration, so it installs a Consul cluster with default configurations. We strongly recommend that you [learn about the configuration options](/docs/k8s/helm#configuration-values) before going to production.
-> **Security warning**: By default, Helm installs Consul with security configurations disabled so that the out-of-box experience is optimized for new users. We strongly recommend using a properly-secured Kubernetes cluster or making sure that you understand and enable [Consul's security features](/docs/security) before going into production. Some security features are not supported in the Helm chart and require additional manual configuration.
> For a hands-on experience with Consul as a service mesh for Kubernetes, follow the [Getting Started with Consul service mesh tutorial](/consul/tutorials/kubernetes-features/service-mesh-deploy?utm_source=docs).
## Requirements
Using the Helm Chart requires Helm version 3.2+. Visit the [Helm website](https://helm.sh/docs/intro/install/) to download the latest version.
## Install Consul
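If the HashiCorp Helm repository has not been added yet, it is typically added first; the command below is shown as an assumed prerequisite.

```shell-session
$ helm repo add hashicorp https://helm.releases.hashicorp.com
```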
```shell-session
$ helm search repo hashicorp/consul
NAME CHART VERSION APP VERSION DESCRIPTION
hashicorp/consul 0.45.0 1.14.0 Official HashiCorp Consul Chart
```
1. Before you install Consul on Kubernetes with Helm, ensure that the `consul` Kubernetes namespace does not exist. We recommend installing Consul on a dedicated namespace.
```shell-session
$ kubectl get namespace
kube-system Active 18h
```
1. Install Consul on Kubernetes using Helm. The Helm chart does everything to set up your deployment: after installation, the servers automatically form a cluster, elect a leader, and run the necessary agents.
- Run the following command to install the latest version of Consul on Kubernetes with its default configuration.
```shell-session
$ helm install consul hashicorp/consul --set global.name=consul --create-namespace --namespace consul
```
- To install a specific version of Consul on Kubernetes, issue the following command with `--version` flag:
```shell-session
$ export VERSION=0.43.0
$ helm install consul hashicorp/consul --set global.name=consul --version ${VERSION} --create-namespace --namespace consul
```
## Custom installation
If you want to customize your installation,
create a `values.yaml` file to override the default settings.
To learn what settings are available, run `helm inspect values hashicorp/consul`
or read the [Helm Chart Reference](/docs/k8s/helm).
### Minimal `values.yaml` for Consul service mesh
The following `values.yaml` config file contains the minimum required settings to enable [Consul Service Mesh](/docs/k8s/connect):
<CodeBlockConfig filename="values.yaml">
```yaml
global:
  name: consul
connectInject:
  enabled: true
```
</CodeBlockConfig>
After you create your `values.yaml` file, run `helm install` with the `--values` flag:
```shell-session
$ helm install consul hashicorp/consul --create-namespace --namespace consul --values values.yaml
```

### Enable the Consul CNI plugin
The Consul Helm Chart is responsible for installing the Consul CNI plugin.
To configure the plugin to be installed, add the following configuration to your `values.yaml` file:
<Tabs>

<Tab heading="Reference configuration">
<CodeBlockConfig filename="values.yaml">
```yaml
global:
  name: consul
connectInject:
  enabled: true
  cni:
    enabled: true
    logLevel: info
    cniBinDir: "/opt/cni/bin"
    cniNetDir: "/etc/cni/net.d"
```
</CodeBlockConfig>
</Tab>
<Tab heading="GKE configuration">
<CodeBlockConfig filename="values.yaml">
```yaml
global:
  name: consul
connectInject:
  enabled: true
  cni:
    enabled: true
    logLevel: info
    cniBinDir: "/home/kubernetes/bin"
    cniNetDir: "/etc/cni/net.d"
```
</CodeBlockConfig>
</Tab>
<Tab heading="OpenShift configuration">
<CodeBlockConfig filename="values.yaml">
```yaml
global:
  name: consul
  openshift:
    enabled: true
connectInject:
  enabled: true
  cni:
    enabled: true
    logLevel: info
    multus: true
    cniBinDir: "/var/lib/cni/bin"
    cniNetDir: "/etc/kubernetes/cni/net.d"
```
</CodeBlockConfig>
</Tab>
</Tabs>
The following table describes the available CNI plugin options:
### Enable Consul service mesh on select namespaces
By default, Consul Service Mesh is enabled on almost all namespaces within a Kubernetes cluster, with the exception of `kube-system` and `local-path-storage`. To restrict the service mesh to a subset of namespaces:
1. Specify a `namespaceSelector` that matches a label attached to each namespace where you want to deploy the service mesh. In order to default to enabling service mesh on select namespaces by label, the `connectInject.default` value must be set to `true`.
<CodeBlockConfig filename="values.yaml">
```yaml
global:
  name: consul
connectInject:
  enabled: true
  default: true
  namespaceSelector: |
    matchLabels:
      connect-inject : enabled
```
</CodeBlockConfig>
1. Label the namespaces where you would like to enable Consul Service Mesh.
```shell-session
$ kubectl create ns foo
$ kubectl label namespace foo connect-inject=enabled
```
1. Run `helm install` with the `--values` flag:
```shell-session
$ helm install consul hashicorp/consul --create-namespace --namespace consul --values values.yaml
NAME: consul
```
### Update your Consul on Kubernetes configuration
If you already installed Consul and want to make changes, you need to run
`helm upgrade`. Refer to [Upgrading](/docs/k8s/upgrade) for more details.
## Usage
You can view the Consul UI and access the Consul HTTP API after installation.
### Viewing the Consul UI
The Consul UI is enabled by default when using the Helm chart.
For security reasons, it is not exposed through a `LoadBalancer` service by default. To visit the UI, you must
use `kubectl port-forward`.
#### Port forward with TLS disabled
If running with TLS disabled, the Consul UI is accessible through http on port 8500:
```shell-session
$ kubectl port-forward service/consul-server --namespace consul 8500:8500
...
```
After you set up the port forward, navigate to [http://localhost:8500](http://localhost:8500).
#### Port forward with TLS enabled
If running with TLS enabled, the Consul UI is accessible through https on port 8501:
```shell-session
$ kubectl port-forward service/consul-server --namespace consul 8501:8501
...
```
After you set up the port forward, navigate to [https://localhost:8501](https://localhost:8501).
~> You need to click through an SSL warning from your browser because the
Consul certificate authority is self-signed and not in the browser's trust store.
#### ACLs Enabled
If ACLs are enabled, you need to input an ACL token to display all resources and make modifications in the UI.
To retrieve the bootstrap token that has full permissions, run:
Then paste the token into the UI under the ACLs tab (without the `%`).
In federated deployments, you must connect to the primary datacenter to retrieve the bootstrap token, since secondary datacenters use a separate token with fewer permissions.
#### Exposing the UI through a service
If you want to expose the UI via a Kubernetes Service, configure
the [`ui.service` chart values](/docs/k8s/helm#v-ui-service).
Because this service allows requests to the Consul servers, it should
not be open to the world.
### Accessing the Consul HTTP API
While technically any listening agent can respond to the HTTP API, communicating with the local Consul node has important caching behavior and allows you to use the simpler [`/agent` endpoints for services and checks](/api-docs/agent).
To find information about a node, you can use the [downward API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/).
An example pod specification is shown below. In addition to pods, anything with a pod template can also access the downward API and can therefore use this method.
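A minimal sketch of such a pod specification follows. The pod name, image, and command are illustrative, and the example assumes an agent is listening on the host at port `8500`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: consul-example
spec:
  containers:
    - name: example
      image: 'hashicorp/consul:1.14.0'
      env:
        # Read the host IP from the downward API so this pod can reach the
        # agent running on its own node.
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: CONSUL_HTTP_ADDR
          value: http://$(HOST_IP):8500
      command:
        - '/bin/sh'
        - '-ec'
        - |
          consul members
```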
## Gradual TLS Rollout without Consul Connect
If you do not use a service mesh, follow this process.
1. Run a Helm upgrade with the following config:
   ```yaml
   global:
     tls:
       enabled: true
   server:
     updatePartition: <number_of_server_replicas>
   ```
This upgrade triggers a rolling update of `consul-k8s` components.
1. Perform a rolling upgrade of the servers, as described in
[Upgrade Consul Servers](/docs/k8s/upgrade#upgrading-consul-servers).
1. Add a new identical node pool.
1. Cordon all nodes in the old pool by running `kubectl cordon`.
This command ensures Kubernetes does not schedule any new workloads on those nodes,
and instead schedules onto the new TLS-enabled nodes.
1. Create the following Helm config file for the upgrade:
   ```yaml
   global:
     tls:
       enabled: true
   server:
     updatePartition: <number_of_server_replicas>
   ```
In this configuration, we're setting `server.updatePartition` to the number of
server replicas as described in [Upgrade Consul Servers](/docs/k8s/upgrade#upgrading-consul-servers).
1. Run `helm upgrade` with the above config file.
1. At this point, all components (e.g., Consul Connect webhook and sync catalog) should be running
on the new node pool.
1. Redeploy all your Connect-enabled applications.
One way to trigger a redeploy is to run `kubectl drain` on the nodes in the old pool.
Now that the Connect webhook is TLS-aware, it adds TLS configuration to
the sidecar proxy. Also, Kubernetes should schedule these applications on the new node pool.
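   A sketch of draining one of the old nodes, where the node name is a placeholder:

   ```shell-session
   $ kubectl drain <old-node-name> --ignore-daemonsets --delete-emptydir-data
   ```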
1. Perform a rolling upgrade of the servers, as described in [Upgrade Consul Servers](/docs/k8s/upgrade#upgrading-consul-servers).
Non-Kubernetes nodes can access synced services using the standard [Consul DNS](/docs/discovery/dns) or HTTP API.
**Why sync Consul services to Kubernetes?** Syncing Consul services to
Kubernetes services enables non-Kubernetes services to be accessed using kube-dns and Kubernetes-specific
environment variables. This integration makes it very easy to automate external
service discovery, including hosted services like databases.
## Installation and configuration
~> Enabling both Service Mesh and Service Sync on the same Kubernetes services is not supported, as Service Mesh also registers Kubernetes service instances to Consul. Ensure that Service Sync is only enabled for namespaces and services that are not injected with the Consul sidecar for Service Mesh as described in [Sync Enable/Disable](/docs/k8s/service-sync#sync-enable-disable).
The service sync uses an external long-running process in the
[consul-k8s project](https://github.com/hashicorp/consul-k8s). This process
can run either inside or outside of a Kubernetes cluster. However, running this process within
the Kubernetes cluster is generally easier since it is automated using the
[Helm chart](/docs/k8s/helm).
```yaml
syncCatalog:
  enabled: true
```
This value enables service syncing in both directions. You can also disable a direction so that only Kubernetes services sync to Consul, or only Consul services sync to Kubernetes.
To sync in only one direction, disable the other direction. For example, to only sync Kubernetes services to Consul, use the following config:

```yaml
syncCatalog:
  enabled: true
  toConsul: true
  toK8S: false
```
Refer to the [Helm configuration](/docs/k8s/helm#v-synccatalog)
for more information.
### Authentication
The sync process must authenticate to both Kubernetes and Consul to read
and write services.
If running `consul-k8s` using the Helm chart, then this authentication is handled for you.
If running `consul-k8s` outside of Kubernetes, a valid kubeconfig file must be provided with cluster
and authentication information. The sync process looks into the default locations
for both in-cluster and out-of-cluster authentication. If `kubectl` works,
then the sync program should work.
If ACLs are configured on the Consul cluster, you need to provide a Consul
[ACL token](/consul/tutorials/security/access-control-setup-production). Review the [ACL rules](/docs/security/acl/acl-rules)
when creating this token so that it only allows the necessary privileges. The catalog
sync process accepts this token by using the [`CONSUL_HTTP_TOKEN`](/commands#consul_http_token)
environment variable. This token should be set as a Kubernetes secret and referenced in the Helm chart.
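For example, a sketch of creating such a secret; the secret name and token value are placeholders:

```shell-session
$ kubectl create secret generic consul-catalog-sync-acl-token --from-literal=token=<your-acl-token>
```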
## Kubernetes to Consul

This sync registers Kubernetes services to the Consul catalog automatically.
This sync enables discovery and connection to Kubernetes services using native
Consul service discovery protocols such as DNS or HTTP. This is particularly useful for
non-Kubernetes nodes. This also causes all discoverable services to be part of
a central service catalog in Consul for further syncing into alternate
Kubernetes clusters or other platforms.
Each synced service is registered onto a Consul node called `k8s-sync`. This node
is not a real node. Instead, the catalog sync process is monitoring Kubernetes
and syncing the services to Consul.
### Kubernetes service types
Not all Kubernetes services are externally accessible. The sync program by
default only syncs services of the following types or configurations.
If a service type is not listed below, then the sync program ignores that
service type.
#### NodePort
[NodePort services](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport)
register a static port that every node in the K8S cluster listens on.
For NodePort services, a Consul service instance is created for each
node that has the representative pod running. While Kubernetes configures
a static port on all nodes in the cluster, this limits the number of service
instances to be equal to the nodes running the target pods.
By default, it uses the external IP of the node, but this can be configured through
the [`nodePortSyncType` helm option](/docs/k8s/helm#v-synccatalog-nodeportsynctype).
The service instance's port is set to the first defined node port of the service unless
set specifically in the `consul.hashicorp.com/service-port` annotation. Refer to [Service Ports](/docs/k8s/service-sync#service-ports) for more information.
#### LoadBalancer
For LoadBalancer services, a single service instance is registered with
the external IP of the created load balancer. Because this is already a load
balancer, only one service instance is registered with Consul rather
than registering each individual pod endpoint.
The service instance's port is set to the first defined port of the
service unless set specifically in the `consul.hashicorp.com/service-port` annotation. Refer to [Service Ports](/docs/k8s/service-sync#service-ports) for more information.
#### External IPs
Any service type may specify an
"[external IP](https://kubernetes.io/docs/concepts/services-networking/service/#external-ips)"
configuration. The external IP must be configured by some other system, but
any service discovery resolves to this set of IP addresses rather than a
virtual IP.
If an external IP list is present, a service instance in Consul is created
for each external IP. It is assumed that an external IP, if present, is routable and configured by some other system.
The service instance's port is set to the _first_ defined port of the
service unless set specifically with the `consul.hashicorp.com/service-port` annotation. Refer to [Service Ports](/docs/k8s/service-sync#service-ports) for more information.
#### ClusterIP
ClusterIP services are synced by default as of `consul-k8s` version 0.3.0.
Each pod that is an endpoint for the service is synced as a Consul service
instance with its IP set to the pod IP and its port set to the `targetPort`.
The service instance's port can be overridden with the `consul.hashicorp.com/service-port` annotation. Refer to [Service Ports](/docs/k8s/service-sync#service-ports) for more information.
-> In Kubernetes clusters where pod IPs are not accessible outside the cluster,
the services registered in Consul may not be routable. To
skip syncing ClusterIP services, set [`syncClusterIPServices`](/docs/k8s/helm#v-synccatalog-syncclusteripservices)
to `false` in the Helm chart values file.
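A sketch of the corresponding Helm values:

```yaml
syncCatalog:
  enabled: true
  syncClusterIPServices: false
```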
### Enable and disable sync
By default, all valid service types are synced from every Kubernetes
namespace except for `kube-system` and `kube-public`.
To only sync specific services, first modify the annotation to set the default to `false`:
```yaml
syncCatalog:
  enabled: true
  default: false
```
Then, explicitly enable syncing specific services with the `consul.hashicorp.com/service-sync` annotation:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service
  annotations:
    'consul.hashicorp.com/service-sync': 'true'
```
-> **Note:** If the annotation is set to `false` when the default sync is `true`, the service does not sync.
You can allow or deny syncing from specific Kubernetes namespaces by setting the
`k8sAllowNamespaces` and `k8sDenyNamespaces` keys:
```yaml
syncCatalog:
  enabled: true
  default: true
  k8sAllowNamespaces: ['*']
  k8sDenyNamespaces: ['kube-system', 'kube-public']
```
In the default configuration, services from all namespaces except for
`kube-system` and `kube-public` are synced.
To only sync from specific namespaces, you can list only those
namespaces in the `k8sAllowNamespaces` key:
```yaml
syncCatalog:
  enabled: true
  default: true
  k8sAllowNamespaces: ['my-ns-1', 'my-ns-2']
  k8sDenyNamespaces: []
```
To sync from every namespace except specific namespaces, use `*` in the allow list and then specify the non-syncing namespaces in the deny list:
```yaml
syncCatalog:
  enabled: true
  default: true
  k8sAllowNamespaces: ['*']
  k8sDenyNamespaces: ['no-sync-ns-1', 'no-sync-ns-2']
```
-> **Note:** The deny list takes precedence over the allow list. If a namespace
is listed in both lists, it does not sync.
### Service name
When a Kubernetes service is synced to Consul, the name of the service in Consul
by default is the value of the `name` metadata on that Kubernetes service.
This makes it so that service sync works with zero configuration changes.
This setting can be overridden using an annotation to specify the Consul service name:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service
  annotations:
    'consul.hashicorp.com/service-name': my-consul-service
```
**If a conflicting service name exists in Consul,** the sync program
registers additional instances to that same service. Therefore, services inside
and outside of Kubernetes should have different names unless you want either
side to potentially connect. This default behavior also enables gracefully
transitioning a service between deployments inside and outside of Kubernetes.
### Service ports
When syncing the Kubernetes service to Consul, the Consul service port is the first defined port in the service. Additionally, all ports become registered in the service instance metadata with the key "port-X," where X is
the name of the port and the value is the externally accessible port.
The default service port can be overridden using an annotation:
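A sketch of what this looks like, where the service name and port name are illustrative:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service
  annotations:
    'consul.hashicorp.com/service-port': 'http'
```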
The annotation value may be a name of a port (recommended) or an exact port value.
### Service tags
A service registered in Consul from Kubernetes always has the tag "k8s" added
to it. Additional tags can be specified with a comma-separated annotation value. These custom tags automatically include the "k8s" tag, which can't be disabled. When specifying values, use commas without whitespace.
```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service
  annotations:
    'consul.hashicorp.com/service-tags': 'primary,foo'
```
### Service meta
A service registered in Consul from Kubernetes sets the `external-source` key to
`kubernetes`. This value can be used from the API, CLI, and UI to filter
service instances that are set in k8s. The Consul UI displays a Kubernetes icon next to all externally
registered services from Kubernetes.
Additional metadata can be specified using annotations. The `KEY` below can be set to any key, which allows setting multiple meta values:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service
  annotations:
    'consul.hashicorp.com/service-meta-KEY': 'value'
```
### Consul Enterprise Namespaces
Consul Enterprise supports Consul namespaces. These can be used when syncing
from Kubernetes to Consul. However, namespaces are not supported when syncing from Consul to Kubernetes.
There are three options available:
1. **Single Destination Namespace** - Sync all Kubernetes services, regardless of namespace,
into the same Consul namespace.
This can be configured with:
   ```yaml
   global:
     enableConsulNamespaces: true
   syncCatalog:
     enabled: true
     consulNamespaces:
       consulDestinationNamespace: 'my-consul-ns'
   ```
1. **Mirror Namespaces** - Each Kubernetes service is synced to a Consul namespace with the same
name as its Kubernetes namespace.
For example, service `foo` in Kubernetes namespace `ns-1` is synced to the Consul namespace `ns-1`.
If a mirrored namespace does not exist in Consul, it is created automatically.
   ```yaml
   global:
     enableConsulNamespaces: true
   syncCatalog:
     enabled: true
     consulNamespaces:
       mirroringK8S: true
     addK8SNamespaceSuffix: false
   ```
1. **Mirror Namespaces With Prefix** - Each Kubernetes service is synced to a Consul namespace
with the same name as its Kubernetes namespace with a prefix. For example, given a prefix
`k8s-`, service `foo` in Kubernetes namespace `ns-1` syncs to the Consul namespace
`k8s-ns-1`.
   ```yaml
   global:
     enableConsulNamespaces: true
   syncCatalog:
     enabled: true
     consulNamespaces:
       mirroringK8S: true
       mirroringK8SPrefix: 'k8s-'
     addK8SNamespaceSuffix: false
   ```
-> Note that in both mirroring examples, `addK8SNamespaceSuffix` is set to `false`. If set to its default value, `true`, the Kubernetes namespace is added as a suffix to each
Consul service name. For example, Kubernetes service `foo` in namespace `k8s-ns`
would be registered into Consul with the name `foo-k8s-ns`.
This is useful when syncing from multiple Kubernetes namespaces to
a single Consul namespace. However, you may want to disable this behavior
when mirroring namespaces so that services do not overlap with services from
other namespaces.
## Consul to Kubernetes
This syncs Consul services into first-class Kubernetes services.
The sync creates an [`ExternalName`](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) for each Consul service. The "external name" is the Consul DNS name.
For example, given a Consul service `foo`:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  externalName: foo.service.consul
  type: ExternalName
```
With Consul To Kubernetes syncing enabled, DNS requests of the form `<consul-service-name>`
are serviced by Consul DNS. From a different Kubernetes namespace than where Consul
is deployed, the DNS request would need to be `<consul-service-name>.<consul-namespace>`.
-> **Note:** Consul to Kubernetes syncing is not required if you enabled [Consul DNS on Kubernetes](/docs/k8s/dns).
All you need to do is address services in the form `<consul-service-name>.service.consul`, so you do not need Kubernetes `Service` objects created.
~> **Requires Consul DNS via CoreDNS in Kubernetes:** This feature requires that
[Consul DNS](/docs/k8s/dns) is configured within Kubernetes.
Additionally, [CoreDNS](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#config-coredns)
is required instead of kube-dns to resolve an
issue with resolving `externalName` services pointing to custom domains.
### Enable and disable sync
All Consul services visible to the sync process based on its given ACL token
are synced to Kubernetes.
There is no way to change this behavior per service. For the opposite sync
direction (Kubernetes to Consul), you can use Kubernetes annotations to disable sync for specific services. In the future, we hope to support per-service configuration.
### Service Name
When a Consul service is synced to Kubernetes, the name of the Kubernetes
service matches the name of the Consul service exactly.
To change this default exact match behavior, it is possible to specify a
prefix to be added to service names within Kubernetes by using the
`-k8s-service-prefix` flag. This can also be specified in the Helm
configuration.
**If a conflicting service is found**, the service is not synced. This
does not match the Kubernetes to Consul behavior, but given the current
implementation we must do this because Kubernetes cannot mix both CNAME and
Endpoint-based services.
### Kubernetes service labels and annotations
Any Consul services synced to Kubernetes are labeled and annotated.
An annotation `consul.hashicorp.com/synced` is set to `true` to note
that this is a synced service from Consul.
Additionally, a label `consul=true` is specified so that label selectors
can be used with `kubectl` and other tooling to easily filter all Consul-synced
services.
# Upgrading Consul on Kubernetes Components
## Upgrade types
We recommend updating Consul on Kubernetes when:
- You change your Helm configuration
- A new Helm chart is released
- You want to upgrade your Consul version.
### Helm Configuration Changes
### Helm configuration changes
If you make a change to your Helm values file, you will need to perform a `helm upgrade`
If you make a change to your Helm values file, you need to perform a `helm upgrade`
for those changes to take effect.
For example, if you've installed Consul with the following:
<CodeBlockConfig filename="values.yaml">
```yaml
global:
name: consul
connectInject:
enabled: false
```
</CodeBlockConfig>
And you wish to set `connectInject.enabled` to `true`:
```diff
global:
name: consul
connectInject:
- enabled: false
+ enabled: true
```
To update your deployment configuration using Helm, perform the following steps.
For example, if you installed Consul with `connectInject.enabled: false` and you want to change its value to `true`:
1. Determine your current installed chart version.
@ -58,17 +37,15 @@ To update your deployment configuration using Helm, perform the following steps.
$ helm upgrade consul hashicorp/consul --namespace consul --version 0.40.0 --values /path/to/my/values.yaml
```
**Before performing the upgrade, be sure you've read the other sections on this page,
**Before performing the upgrade, be sure you have read the other sections on this page,
continuing at [Determine scope of changes](#determine-scope-of-changes).**
~> NOTE: It's important to always set the `--version` flag, because otherwise Helm
will use the most up-to-date version in its local cache, which may result in an
unintended upgrade.
~> Note: You should always set the `--version` flag when running `helm upgrade`. Otherwise, Helm uses the most recent version in its local cache, which may result in an unintended upgrade.
### Helm Chart Version Upgrade
### Upgrade Helm chart version
You may wish to upgrade your Helm chart version to take advantage of new features,
bugfixes, or because you want to upgrade your Consul version, and it requires a
You may want to upgrade your Helm chart version to take advantage of new features and
bug fixes, or because the Consul version you want to upgrade to requires a
certain Helm chart version.
1. Update your local Helm repository cache:
@ -98,7 +75,8 @@ certain Helm chart version.
```
In this example, version `0.39.0` (from `consul-k8s:0.39.0`) is being used.
If you want to upgrade to the latest `0.40.0` version, use the following procedure:
If you want to upgrade to the latest `0.40.0` version, use the following procedure:
1. Check the changelog for any breaking changes from that version and any versions in between: [CHANGELOG.md](https://github.com/hashicorp/consul-k8s/blob/main/CHANGELOG.md).
@ -111,14 +89,14 @@ certain Helm chart version.
**Before performing the upgrade, be sure you've read the other sections on this page,
continuing at [Determine scope of changes](#determine-scope-of-changes).**
### Consul Version Upgrade
### Upgrade Consul version
If a new version of Consul is released, you will need to perform a Helm upgrade
to update to the new version.
If a new version of Consul is released, you need to perform a Helm upgrade
to update to the new version. Before you upgrade to a new version:
1. Ensure you've read the [Upgrading Consul](/docs/upgrading) documentation.
1. Ensure you've read any [specific instructions](/docs/upgrading/upgrade-specific) for the version you're upgrading
to and the Consul [changelog](https://github.com/hashicorp/consul/blob/main/CHANGELOG.md) for that version.
1. Read the [Upgrading Consul](/docs/upgrading) documentation.
1. Read any [specific instructions](/docs/upgrading/upgrade-specific) for the version you want to upgrade
to, as well as the Consul [changelog](https://github.com/hashicorp/consul/blob/main/CHANGELOG.md) for that version.
1. Read our [Compatibility Matrix](/docs/k8s/compatibility) to ensure
your current Helm chart version supports this Consul version. If it does not,
you may need to also upgrade your Helm chart version at the same time.
@ -133,7 +111,7 @@ to update to the new version.
</CodeBlockConfig>
1. Determine your current installed chart version. In this example, version `0.39.0` (from `consul-k8s:0.39.0`) is being used.
2. Determine the version of your existing Helm installation. In this example, version `0.39.0` (from `consul-k8s:0.39.0`) is being used.
```shell-session
$ helm list --filter consul --namespace consul
@ -150,18 +128,16 @@ to update to the new version.
**Before performing the upgrade, be sure you have read the other sections on this page,
continuing at [Determine scope of changes](#determine-scope-of-changes).**
~> NOTE: It's important to always set the `--version` flag, because otherwise Helm
will use the most up-to-date version in its local cache, which may result in an
unintended upgrade.
~> Note: You should always set the `--version` flag when running `helm upgrade`. Otherwise, Helm uses the most recent version in its local cache, which may result in an unintended upgrade.
## Determining What Will Change
## Determine scope of changes
Before upgrading, it's important to understand what changes will be made to your
cluster. For example, you will need to take more care if your upgrade will result
Before upgrading, it is important to understand the changes the upgrade makes to your
cluster. For example, you need to take more care if the upgrade results
in the Consul server StatefulSet being redeployed.
There is no built-in functionality in Helm that shows what a helm upgrade will
change. There is, however, a Helm plugin [helm-diff](https://github.com/databus23/helm-diff)
There is no built-in functionality in Helm that shows what a `helm upgrade`
changes. There is, however, a Helm plugin [helm-diff](https://github.com/databus23/helm-diff)
that can be used.
1. Install `helm-diff` with:
@ -177,80 +153,30 @@ that can be used.
$ helm diff upgrade consul hashicorp/consul --namespace consul --version 0.40.0 --values /path/to/your/values.yaml
```
This will print out the manifests that will be updated and their diffs.
This command prints out the manifests that will be updated and their diffs.
1. To see only the objects that will be updated, add `| grep "has changed"`:
1. To see only updated objects, add `| grep "has changed"`:
```shell-session
$ helm diff upgrade consul hashicorp/consul --namespace consul --version 0.40.0 --values /path/to/your/values.yaml |
grep "has changed"
```
1. Take specific note if `consul-client, DaemonSet` or `consul-server, StatefulSet` are listed.
This means that your Consul client daemonset or Consul server statefulset (or both) will be redeployed.
1. Take specific note if `consul-server, StatefulSet` is listed, as it means your Consul server statefulset will be redeployed.
If either is being redeployed, we will follow the same pattern for upgrades as
on other platforms: the servers will be redeployed one-by-one, and then the
clients will be redeployed in batches. Read [Upgrading Consul](/docs/upgrading) and then continue
reading below.
If your Consul server StatefulSet needs to be redeployed, follow the same pattern for upgrades as
on other platforms by redeploying servers one by one. Refer to [Upgrading Consul](/docs/upgrading) for more information.
If neither the client daemonset nor the server statefulset is being redeployed,
then you can continue with the helm upgrade without any specific sequence to follow.
If the server StatefulSet is not being redeployed,
then you can continue with the Helm upgrade without any specific sequence to follow.
## Service Mesh
## Upgrade Consul servers
If you are using Consul's service mesh features, as opposed to the [service sync](/docs/k8s/service-sync)
functionality, you must be aware of the behavior of the service mesh during upgrades.
Initiate the server upgrade:
Consul clients operate as a daemonset across all Kubernetes nodes. During an upgrade,
if the Consul client daemonset has changed, the client pods will need to be restarted
because their spec has changed.
When a Consul client pod is restarted, it will deregister itself from Consul when it stops.
When the pod restarts, it will re-register itself with Consul.
Thus, during the period between the Consul client on a node stopping and restarting,
the following will occur:
1. The node will be deregistered from Consul. It will not show up in the Consul UI
nor in API requests.
1. Because the node is deregistered, all service pods that were on that node will
also be deregistered. This means they will not receive service mesh traffic
until the Consul client pod restarts.
1. Service pods on that node can continue to make requests through the service
mesh because each Envoy proxy maintains a cache of the locations of upstream
services. However, if the upstream services change IPs, Envoy will not be able
to refresh its cluster information until its local Consul client is restarted.
So services can continue to make requests without downtime for a short period
of time, however, it's important for the local Consul client to be restarted
as quickly as possible.
Once the local Consul client pod restarts, each service pod needs to be re-registered
with its local Consul client. This is done automatically by the connect inject controller.
Because service mesh pods are briefly deregistered during a Consul client restart,
it's **important that you do not restart all Consul clients at once**. Otherwise
you may experience downtime because no replicas of a specific service will be in the mesh.
In addition, it's **important that you have multiple replicas** for each service.
If you only have one replica, then during restart of the Consul client on the
node hosting that replica, it will be briefly deregistered from the mesh. Since
it's the only replica, other services will not be able to make calls to that
service. (NOTE: This can also be avoided by stopping that replica so it is rescheduled to
a node whose Consul client has already been updated.)
Given the above, we recommend that after Consul servers are upgraded, the Consul
client daemonset is set to use the `OnDelete` update strategy and Consul clients
are deleted one by one or in batches. See [Upgrading Consul Servers](#upgrading-consul-server)
and [Upgrading Consul Clients](#upgrading-consul-clients) for more details.
## Upgrading Consul Servers
To initiate the upgrade:
1. Change the `global.image` value to the desired Consul version
1. Set the `server.updatePartition` value _equal to the number of server replicas_.
By default there are 3 servers, so you would set this value to `3`
1. Set the `updateStrategy` for clients to `OnDelete`
1. Change the `global.image` value to the desired Consul version.
1. Set the `server.updatePartition` value to the number of server replicas. By default there are 3 servers, so you would set this value to `3`.
1. Set the `updateStrategy` for clients to `OnDelete`.
<CodeBlockConfig filename="values.yaml">
@ -259,9 +185,6 @@ To initiate the upgrade:
image: 'consul:123.456'
server:
updatePartition: 3
client:
updateStrategy: |
type: OnDelete
```
</CodeBlockConfig>
@ -269,13 +192,7 @@ To initiate the upgrade:
The `updatePartition` value controls how many instances of the server
cluster are updated. Only instances with an index _greater than_ the
`updatePartition` value are updated (zero-indexed). Therefore, by setting
it equal to replicas, none should update yet.
The `updateStrategy` controls how Kubernetes rolls out changes to the client daemonset.
By setting it to `OnDelete`, no clients will be restarted until their pods are deleted.
Without this, they would be redeployed alongside the servers because their Docker
image versions have changed. This is not desirable because we want the Consul
servers to be upgraded _before_ the clients.
it equal to replicas, updates should not occur immediately.
1. Next, perform the upgrade:
@ -283,8 +200,8 @@ To initiate the upgrade:
$ helm upgrade consul hashicorp/consul --namespace consul --version <your-version> --values /path/to/your/values.yaml
```
This will not cause the servers to redeploy (although the resource will be updated). If
everything is stable, begin by decreasing the `updatePartition` value by one,
This command does not cause the servers to redeploy, although the resource is updated. If
everything is stable, decrease the `updatePartition` value by one
and perform `helm upgrade` again. This causes the first Consul server
to stop and restart with the new image.
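As a sketch of that loop, assuming the default of three servers and that you pass the override with `--set` instead of editing the values file before each run (`--set` takes precedence over `--values`):

```shell-session
$ helm upgrade consul hashicorp/consul --namespace consul --version <your-version> \
    --values /path/to/your/values.yaml --set server.updatePartition=2
```

Repeat with `1` and then `0`, waiting for the restarted server to rejoin the cluster and report healthy before each subsequent run.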
@ -296,38 +213,33 @@ To initiate the upgrade:
`updatePartition` is `0`. At this point, you may remove the
`updatePartition` configuration. Your server upgrade is complete.
## Upgrading Consul Clients
## Upgrading to Consul Dataplane
With the servers upgraded, it is time to upgrade the clients.
If you are using Consul's service mesh features, you will want to be careful
restarting the clients as outlined in [Service Mesh](#service-mesh).
In earlier versions, Consul on Kubernetes used client agents in its deployments. As of v1.14.0, Consul uses [Consul Dataplane](/docs/connect/dataplane/) in Kubernetes deployments instead of client agents.
You can either:
If you upgrade Consul from a version that uses client agents to a version that uses dataplanes, complete the following steps to upgrade your deployment safely and without downtime.
1. Manually issue `kubectl delete pod <id>` for each consul daemonset pod
1. Set the updateStrategy to rolling update with a small number:
1. Before you upgrade, edit your Helm chart to enable Consul client agents by setting `client.enabled` and `client.updateStrategy`:
```yaml
```yaml filename="values.yaml"
client:
updateStrategy: |
rollingUpdate:
maxUnavailable: 2
type: RollingUpdate
enabled: true
updateStrategy:
type: OnDelete
```
Then, run `helm upgrade`. This will upgrade the clients in batches, waiting
until the clients come up healthy before continuing.
1. Add `consul.hashicorp.com/consul-k8s-version: 1.0.0` to the annotations for each pod you upgrade.
1. Cordon and drain each node to ensure there are no connect pods active on it, and then delete the
   Consul client pod on that node. Example commands for this step appear after this procedure.
1. Follow our [recommended procedures to upgrade servers](#upgrade-consul-servers) on Kubernetes deployments to upgrade Helm values for the new version of Consul.
-> NOTE: If you are using only the Service Sync functionality, you can perform an upgrade without
following a specific sequence since that component is more resilient to brief restarts of
Consul clients.
1. Run `kubectl rollout restart` to restart your service mesh applications. Restarting service mesh applications causes Kubernetes to re-inject them with the webhook for dataplanes.
## Configuring TLS on an Existing Cluster
1. Restart all gateways in your service mesh.
1. Disable client agents in your Helm chart by deleting the `client` stanza or setting `client.enabled` to `false`.
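The following commands sketch the node-by-node client removal and the application restart described in this procedure. The node name, client pod name, and application deployment are placeholders, and your drain flags may differ depending on the workloads running on the node.

```shell-session
$ kubectl cordon <node-name>
$ kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
$ kubectl delete pod <consul-client-pod-on-that-node> --namespace consul
$ kubectl uncordon <node-name>
$ kubectl rollout restart deployment/<your-app> --namespace <your-app-namespace>
```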
## Configuring TLS on an existing cluster
If you already have a Consul cluster deployed on Kubernetes and
would like to turn on TLS for internal Consul communication,
please see
[Configuring TLS on an Existing Cluster](/docs/k8s/operations/tls-on-existing-cluster).
refer to [Configuring TLS on an Existing Cluster](/docs/k8s/operations/tls-on-existing-cluster).
BIN
website/public/img/dataplanes-diagram.png (Stored with Git LFS) Normal file
Binary file not shown.