docs: Update code blocks across website
* Use CodeTabs for examples in multiple formats.
* Ensure correct language on code fences.
* Use CodeBlockConfig for examples with filenames, or which need highlighted content.
parent f02ea91a8b
commit db59597cac
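The conversion this commit applies can be summarized with a small before/after sketch. This is illustrative only: the sample config keys are invented for the example, but `<CodeTabs>` and `<CodeBlockConfig>` are the MDX components used throughout the hunks below. `<CodeTabs>` wraps adjacent fenced blocks and renders one tab per block, replacing the older per-format `<Tabs>`/`<Tab>` wrappers:

````mdx
<!-- Before: one <Tab> per format, each with its own fence -->
<Tabs>
<Tab heading="HCL">

```hcl
datacenter = "dc1"
```

</Tab>
</Tabs>

<!-- After: adjacent fences wrapped in a single <CodeTabs> -->
<CodeTabs>

```hcl
datacenter = "dc1"
```

```json
{
  "datacenter": "dc1"
}
```

</CodeTabs>
````

Similarly, `<CodeBlockConfig filename="…">` replaces filename comments inside fences, and its `highlight` prop marks lines to emphasize.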
@@ -210,7 +210,7 @@ $ curl \

 ### Sample Response

-```json
+```text
 true
 ```

@@ -204,7 +204,7 @@ The table below shows this endpoint's support for

 ### Sample Payload

-```text
+```hcl
 agent "" {
   policy = "read"
 }

@@ -218,7 +218,7 @@ $ curl -X POST -d @rules.hcl http://127.0.0.1:8500/v1/acl/rules/translate

 ### Sample Response

-```text
+```hcl
 agent_prefix "" {
   policy = "read"
 }

@@ -257,7 +257,7 @@ $ curl -X GET http://127.0.0.1:8500/v1/acl/rules/translate/4f48f7e6-9359-4890-8e

 ### Sample Response

-```text
+```hcl
 agent_prefix "" {
   policy = "read"
 }

@@ -601,7 +601,7 @@ $ curl \

 ### Sample Response

-```text
+```log
 YYYY/MM/DD HH:MM:SS [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:127.0.0.1:8300 Address:127.0.0.1:8300}]
 YYYY/MM/DD HH:MM:SS [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
 YYYY/MM/DD HH:MM:SS [INFO] serf: EventMemberJoin: machine-osx 127.0.0.1

@@ -213,7 +213,7 @@ The table below shows this endpoint's support for

 ### Sample Payload

-```text
+```json
 {
   "Node": "agent-one",
   "Segment": "",

@@ -33,7 +33,7 @@ leader and starting saving snapshots.

 As snapshots are saved, they will be reported in the log produced by the agent:

-```text
+```log
 2016/11/16 21:21:13 [INFO] Snapshot agent running
 2016/11/16 21:21:13 [INFO] Waiting to obtain leadership...
 2016/11/16 21:21:13 [INFO] Obtained leadership

@@ -67,8 +67,7 @@ If ACLs are enabled the following privileges are required:
 The following is an example least-privilege policy which allows the snapshot agent
 to run on a node named `server-1234`.

-<Tabs>
-<Tab heading="HCL">
+<CodeTabs>

 ```hcl
 # Required to read and snapshot ACL data

@@ -89,9 +88,6 @@ service "consul-snapshot" {
 }
 ```

-</Tab>
-<Tab heading="JSON">
-
 ```json
 {
   "acl": "write",

@@ -113,8 +109,7 @@ service "consul-snapshot" {
 }
 ```

-</Tab>
-</Tabs>
+</CodeTabs>

 Additional `session` rules should be created, or `session_prefix` used, if the
 snapshot agent is deployed across more than one host.

@@ -63,7 +63,9 @@ create and update configuration entries. This command will load either a JSON or
 HCL file holding the configuration entry definition and then will push this
 configuration to Consul.

-Example HCL Configuration File - `proxy-defaults.hcl`:
+Example HCL Configuration File:

+<CodeBlockConfig filename="proxy-defaults.hcl">
+
 ```hcl
 Kind = "proxy-defaults"

@@ -74,6 +76,8 @@ Config {
 }
 ```

+</CodeBlockConfig>
+
 Then to apply this configuration, run:

 ```shell-session

@@ -538,7 +538,7 @@ definitions support being updated during a reload.

 #### Example Configuration File

-```javascript
+```json
 {
   "datacenter": "east-aws",
   "data_dir": "/opt/consul",

@@ -1099,24 +1099,19 @@ Valid time units are 'ns', 'us' (or 'µs'), 'ms', 's', 'm', 'h'."
 within a double quoted string value must be escaped with a backslash `\`.
 Some example templates:

-<Tabs>
-<Tab heading="HCL">
+<CodeTabs>

 ```hcl
 bind_addr = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}"
 ```

-</Tab>
-<Tab heading="JSON">
-
 ```json
 {
   "bind_addr": "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}"
 }
 ```

-</Tab>
-</Tabs>
+</CodeTabs>

 - `cache` configuration for client agents. The configurable values are the following:

@@ -2312,7 +2307,7 @@ signed by the CA can be used to gain full access to Consul.
 encryption and authentication. Failing to set [`verify_incoming`](#verify_incoming) or [`verify_outgoing`](#verify_outgoing)
 will result in TLS not being enabled at all, even when specifying a [`ca_file`](#ca_file), [`cert_file`](#cert_file), and [`key_file`](#key_file).

-```javascript
+```json
 {
   "datacenter": "east-aws",
   "data_dir": "/opt/consul",

@@ -2336,7 +2331,7 @@ will result in TLS not being enabled at all, even when specifying a [`ca_file`](

 See, especially, the use of the `ports` setting:

-```javascript
+```json
 "ports": {
   "https": 8501
 }

@@ -41,7 +41,7 @@ format or using [Prometheus](https://prometheus.io/) format.

 Below is sample output of a telemetry dump:

-```text
+```log
 [2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.num_goroutines': 19.000
 [2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.alloc_bytes': 755960.000
 [2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.malloc_count': 7550.000

@@ -23,7 +23,7 @@ To get started with the built-in proxy and see a working example you can follow
 Below is a complete example of all the configuration options available
 for the built-in proxy.

-```javascript
+```json
 {
   "service": {
     ...

@@ -385,8 +385,7 @@ To connect to a service via local Unix Domain Socket instead of a
 port, add `local_bind_socket_path` and optionally `local_bind_socket_mode`
 to the upstream config for a service:

-<Tabs>
-<Tab heading="HCL">
+<CodeTabs>

 ```hcl
 upstreams = [

@@ -398,9 +397,6 @@ upstreams = [
 ]
 ```

-</Tab>
-<Tab heading="JSON">
-
 ```json
 "upstreams": [
   {

@@ -411,8 +407,7 @@ upstreams = [
 ]
 ```

-</Tab>
-</Tabs>
+</CodeTabs>

 This will cause Envoy to create a socket with the path and mode
 provided, and connect that to service-1.

@@ -431,8 +426,8 @@ mesh, use either the `socket_path` field in the service definition or the
 `local_service_socket_path` field in the proxy definition. These
 fields are analogous to the `port` and `service_port` fields in their
 respective locations.
-<Tabs>
-<Tab heading="HCL">

+<CodeTabs>
+
 ```hcl
 services {

@@ -441,9 +436,6 @@ services {
 }
 ```

-</Tab>
-<Tab heading="JSON">
-
 ```json
 "services": {
   "name": "service-2",

@@ -451,11 +443,11 @@ services {
 }
 ```

-</Tab>
-</Tabs>
+</CodeTabs>

 Or in the proxy definition:
-<Tabs>
-<Tab heading="HCL">

+<CodeTabs>
+
 ```hcl
 services {

@@ -472,9 +464,6 @@ services {
 }
 ```

-</Tab>
-<Tab heading="JSON">
-
 ```json
 "services": {
   "name": "socket_service_2",

@@ -490,8 +479,7 @@ services {
 }
 ```

-</Tab>
-</Tabs>
+</CodeTabs>

 There is no mode field since the service is expected to create the
 socket it is listening on, not the Envoy proxy.

@@ -141,7 +141,7 @@ There are several different kinds of checks:

 A script check:

-```javascript
+```json
 {
   "check": {
     "id": "mem-util",

@@ -155,7 +155,7 @@ A script check:

 A HTTP check:

-```javascript
+```json
 {
   "check": {
     "id": "api",

@@ -174,7 +174,7 @@ A HTTP check:

 A TCP check:

-```javascript
+```json
 {
   "check": {
     "id": "ssh",

@@ -188,7 +188,7 @@ A TCP check:

 A TTL check:

-```javascript
+```json
 {
   "check": {
     "id": "web-app",

@@ -201,7 +201,7 @@ A TTL check:

 A Docker check:

-```javascript
+```json
 {
   "check": {
     "id": "mem-util",

@@ -216,7 +216,7 @@ A Docker check:

 A gRPC check for the whole application:

-```javascript
+```json
 {
   "check": {
     "id": "mem-util",

@@ -230,7 +230,7 @@ A gRPC check for the whole application:

 A gRPC check for the specific `my_service` service:

-```javascript
+```json
 {
   "check": {
     "id": "mem-util",

@@ -244,7 +244,7 @@ A gRPC check for the specific `my_service` service:

 A h2ping check:

-```javascript
+```json
 {
   "check": {
     "id": "h2ping-check",

@@ -257,7 +257,7 @@ A h2ping check:

 An alias check for a local service:

-```javascript
+```json
 {
   "check": {
     "id": "web-alias",

@@ -340,7 +340,7 @@ to be healthy. In certain cases, it may be desirable to specify the initial
 state of a health check. This can be done by specifying the `status` field in a
 health check definition, like so:

-```javascript
+```json
 {
   "check": {
     "id": "mem",

@@ -361,7 +361,7 @@ that the status of the health check will only affect the health status of the
 given service instead of the entire node. Service-bound health checks may be
 provided by adding a `service_id` field to a check configuration:

-```javascript
+```json
 {
   "check": {
     "id": "web-app",

@@ -387,7 +387,7 @@ to use the agent's credentials when configured for TLS.
 Multiple check definitions can be defined using the `checks` (plural)
 key in your configuration file.

-```javascript
+```json
 {
   "checks": [
     {

@@ -33,7 +33,7 @@ using the [HTTP API](/api).
 A service definition is a configuration that looks like the following. This
 example shows all possible fields, but note that only a few are required.

-```javascript
+```json
 {
   "service": {
     "id": "redis",

@@ -281,7 +281,7 @@ Multiple services definitions can be provided at once when registering services
 via the agent configuration by using the plural `services` key (registering
 multiple services in this manner is not supported using the HTTP API).

-```javascript
+```json
 {
   "services": [
     {

@@ -126,7 +126,7 @@ This maps to the `/v1/kv/` API internally.

 Here is an example configuration:

-```javascript
+```json
 {
   "type": "key",
   "key": "foo/bar/baz",

@@ -45,8 +45,7 @@ The following example configures a destination called "My Sink" which stores aud
 events at the file `/tmp/audit.json`. The log file will be rotated either every
 24 hours, or when the log file size is greater than 25165824 bytes (24 megabytes).

-<Tabs>
-<Tab heading="HCL">
+<CodeTabs>

 ```hcl
 audit {

@@ -62,8 +61,6 @@ audit {
   }
 }
 ```
-</Tab>
-<Tab heading="JSON">

 ```json
 {

@@ -83,8 +80,8 @@ audit {
   }
 }
 ```
-</Tab>
-</Tabs>
+</CodeTabs>

 </Tab>

 <Tab heading="Log to standard out">

@@ -93,8 +90,7 @@ audit {
 The following example configures a destination called "My Sink" which emits audit
 logs to standard out.

-<Tabs>
-<Tab heading="HCL">
+<CodeTabs>

 ```hcl
 audit {

@@ -107,8 +103,6 @@ audit {
   }
 }
 ```
-</Tab>
-<Tab heading="JSON">

 ```json
 {

@@ -126,8 +120,8 @@ audit {
 }
 ```

-</Tab>
-</Tabs>
+</CodeTabs>

 </Tab>
 </Tabs>

@@ -143,6 +137,8 @@ request.
 The value of the `payload.auth.accessor_id` field is the accessor ID of the
 [ACL token](/docs/security/acl/acl-system#acl-tokens) which issued the request.

+<CodeBlockConfig highlight="10">
+
 ```json
 {
   "created_at": "2020-12-08T12:30:29.196365-05:00",

@@ -169,10 +165,14 @@ The value of the `payload.auth.accessor_id` field is the accessor ID of the
 }
 ```

+</CodeBlockConfig>
+
 After the request is processed, a corresponding log entry is written for the HTTP
 response. The `stage` field is set to `OperationComplete` which indicates the agent
 has completed processing the request.

+<CodeBlockConfig highlight="24">
+
 ```json
 {
   "created_at": "2020-12-08T12:30:29.202935-05:00",

@@ -201,3 +201,5 @@ has completed processing the request.
   }
 }
 ```
+
+</CodeBlockConfig>

@@ -29,7 +29,7 @@ The HTTP API accepts only JSON formatted definitions while the CLI will parse ei

 An example namespace definition looks like the following:

-JSON:
+<CodeTabs>

 ```json
 {

@@ -59,8 +59,6 @@ JSON:
 }
 ```

-HCL:
-
 ```hcl
 Name = "team-1"
 Description = "Namespace for Team 1"

@@ -87,6 +85,8 @@ Meta {
 }
 ```

+</CodeTabs>
+
 ### Fields

 - `Name` `(string: <required>)` - The namespaces name must be a valid DNS hostname label.

@@ -45,7 +45,7 @@ or specify no value at all on all the servers. Only servers that specify a value
 Suppose we are starting a three server cluster. We can start `Node A`, `Node B`, and `Node C` with each
 providing the `-bootstrap-expect 3` flag. Once the nodes are started, you should see a warning message in the service output.

-```text
+```log
 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
 ```

@@ -82,7 +82,7 @@ Successfully joined cluster by contacting 3 nodes.

 Since a join operation is symmetric, it does not matter which node initiates it. Once the join is successful, one of the nodes will output something like:

-```text
+```log
 [INFO] consul: adding server foo (Addr: 127.0.0.2:8300) (DC: dc1)
 [INFO] consul: adding server bar (Addr: 127.0.0.1:8300) (DC: dc1)
 [INFO] consul: Attempting bootstrap with nodes: [127.0.0.3:8300 127.0.0.2:8300 127.0.0.1:8300]

@@ -41,8 +41,9 @@ segment you wish to join.

 For example, given the following segment configuration on the server agents:

+<CodeBlockConfig filename="server-config.hcl">
+
 ```hcl
-# server-config.hcl
 segments = [
   {
     name = "alpha"

@@ -59,6 +60,8 @@ segments = [
 ]
 ```

+</CodeBlockConfig>
+
 A Consul client agent wishing to join the "alpha" segment would need to be configured
 to use port `8303` as its Serf LAN port prior to attempting to join the cluster.

@@ -68,13 +71,16 @@ to use port `8303` as its Serf LAN port prior to attempting to join the cluster.
 The following example configuration overrides the default Serf LAN port using the
 [`ports.serf_lan`](/docs/agent/options#serf_lan_port) configuration option.

+<CodeBlockConfig filename="client-config.hcl">
+
 ```hcl
-# client-config.hcl
 ports {
   serf_lan = 8303
 }
 ```

+</CodeBlockConfig>
+
 </Tab>
 <Tab heading="Command-line flag">

@@ -30,7 +30,7 @@ This is necessary because at this point, there are no other servers running in
 the datacenter! Let's call this first server `Node A`. When starting `Node A`
 something like the following will be logged:

-```text
+```log
 2014/02/22 19:23:32 [INFO] consul: cluster leadership acquired
 ```

@@ -43,7 +43,7 @@ in a failure scenario. We start the next servers **without** specifying
 bootstrap mode. Once `Node B` and `Node C` are started, you should see a
 message to the effect of:

-```text
+```log
 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
 ```

@@ -66,7 +66,7 @@ Successfully joined cluster by contacting 2 nodes.

 Once the join is successful, `Node A` should output something like:

-```text
+```log
 [INFO] raft: Added peer 127.0.0.2:8300, starting replication
 ....
 [INFO] raft: Added peer 127.0.0.3:8300, starting replication

@@ -32,7 +32,7 @@ expense of some performance in leader failure detection and leader election time

 The default performance configuration is equivalent to this:

-```javascript
+```json
 {
   "performance": {
     "raft_multiplier": 5

@@ -50,7 +50,7 @@ timeouts by a factor of 5, so it can be quite slow during these events.

 The high performance configuration is simple and looks like this:

-```javascript
+```json
 {
   "performance": {
     "raft_multiplier": 1

@@ -89,8 +89,9 @@ $ kubectl create secret generic vault-config --from-file=config=vault-config.jso
 We will provide this secret and the Vault CA secret, to the Consul server via the
 `server.extraVolumes` Helm value.

+<CodeBlockConfig filename="config.yaml" highlight="4-13">
+
 ```yaml
-# config.yaml
 global:
   name: consul
 server:

@@ -108,6 +109,8 @@ connectInject:
   enabled: true
 ```

+</CodeBlockConfig>
+
 Finally, [install](/docs/k8s/installation/install#installing-consul) the Helm chart using the above config file:

 ```shell-session

@@ -26,6 +26,8 @@ Adding an ingress gateway is a multi-step process that consists of the following

 When deploying the Helm chart you must provide Helm with a custom YAML file that contains your environment configuration.

+<CodeBlockConfig filename="config.yaml">
+
 ```yaml
 global:
   name: consul

@@ -41,6 +43,8 @@ ingressGateways:
     type: LoadBalancer
 ```

+</CodeBlockConfig>
+
 ~> **Note:** this will create a public unauthenticated LoadBalancer in your cluster, please take appropriate security considerations.

 The YAML snippet is the launching point for a valid configuration that must be supplied when installing using the [official consul-helm chart](https://hub.helm.sh/charts/hashicorp/consul).

@@ -66,6 +70,8 @@ you can configure the gateways via the [`IngressGateway`](/docs/connect/config-e

 Here is an example `IngressGateway` resource:

+<CodeBlockConfig filename="ingress-gateway.yaml">
+
 ```yaml
 apiVersion: consul.hashicorp.com/v1alpha1
 kind: IngressGateway

@@ -79,6 +85,8 @@ spec:
       - name: static-server
 ```

+</CodeBlockConfig>
+
 Apply the `IngressGateway` resource with `kubectl apply`:

 ```shell-session

@@ -89,6 +97,8 @@ ingressgateway.consul.hashicorp.com/ingress-gateway created
 Since we're using `protocol: http`, we also need to set the protocol of our service
 `static-server` to http. To do that, we create a [`ServiceDefaults`](/docs/connect/config-entries/service-defaults) custom resource:

+<CodeBlockConfig filename="service-defaults.yaml">
+
 ```yaml
 apiVersion: consul.hashicorp.com/v1alpha1
 kind: ServiceDefaults

@@ -98,6 +108,8 @@ spec:
   protocol: http
 ```

+</CodeBlockConfig>
+
 Apply the `ServiceDefaults` resource with `kubectl apply`:

 ```shell-session

@@ -137,6 +149,8 @@ to allow the ingress gateway to route to the upstream services defined in the `I
 To create an intention that allows the ingress gateway to route to the service `static-server`, create a [`ServiceIntentions`](/docs/connect/config-entries/service-intentions)
 resource:

+<CodeBlockConfig filename="service-intentions.yaml">
+
 ```yaml
 apiVersion: consul.hashicorp.com/v1alpha1
 kind: ServiceIntentions

@@ -150,6 +164,8 @@ spec:
       action: allow
 ```

+</CodeBlockConfig>
+
 Apply the `ServiceIntentions` resource with `kubectl apply`:

 ```shell-session

@@ -163,6 +179,8 @@ For detailed instructions on how to configure zero-trust networking with intenti

 Now you will deploy a sample application which echoes “hello world”

+<CodeBlockConfig filename="static-server.yaml">
+
 ```yaml
 apiVersion: v1
 kind: Service

@@ -210,6 +228,8 @@ spec:
       serviceAccountName: static-server
 ```

+</CodeBlockConfig>
+
 ```shell-session
 $ kubectl apply -f static-server.yaml
 ```

@@ -233,7 +253,9 @@ $ curl -H "Host: static-server.ingress.consul" "http://$EXTERNAL_IP:8080"
 ~> **Security Warning:** Please be sure to delete the application and services created here as they represent a security risk through
 leaving an open and unauthenticated load balancer alive in your cluster.

-To delete the ingress gateway, set enabled to false in your Helm configuration:
+To delete the ingress gateway, set enabled to `false` in your Helm configuration:

+<CodeBlockConfig filename="config.yaml" highlight="8">
+
 ```yaml
 global:

@@ -250,6 +272,8 @@ ingressGateways:
     type: LoadBalancer
 ```

+</CodeBlockConfig>
+
 And run Helm upgrade:

 ```shell-session

@@ -21,6 +21,8 @@ Adding a terminating gateway is a multi-step process:

 Minimum required Helm options:

+<CodeBlockConfig filename="config.yaml">
+
 ```yaml
 global:
   name: consul

@@ -32,6 +34,8 @@ terminatingGateways:
   enabled: true
 ```

+</CodeBlockConfig>
+
 ## Deploying the Helm chart

 Ensure you have the latest consul-helm chart and install Consul via helm using the following

@@ -91,6 +95,8 @@ service to that node.

 Create a sample external service and register it with Consul.

+<CodeBlockConfig filename="external.json">
+
 ```json
 {
   "Node": "example_com",

@@ -108,6 +114,8 @@ Create a sample external service and register it with Consul.
 }
 ```

+</CodeBlockConfig>
+
 - `"Node": "example_com"` is our made up node name.
 - `"Address": "example.com"` is the address of our node. Services registered to that node will use this address if
   their own address isn't specified. If you're registering multiple external services, ensure you

@@ -141,12 +149,16 @@ being represented by the gateway:
 ~> The CLI command should be run with the `-merge-policies`, `-merge-roles` and `-merge-service-identities` so
 nothing is removed from the terminating gateway token

+<CodeBlockConfig filename="write-policy.hcl">
+
 ```hcl
 service "example-https" {
   policy = "write"
 }
 ```

+</CodeBlockConfig>
+
 ```shell-session
 $ consul acl policy create -name "example-https-write-policy" -rules @write-policy.hcl
 ID: xxxxxxxxxxxxxxx

@@ -186,7 +198,9 @@ Policies:
 Once the tokens have been updated, create the [TerminatingGateway](/docs/connect/config-entries/terminating-gateway)
 resource to configure the terminating gateway:

-```hcl
+<CodeBlockConfig filename="terminating-gateway.yaml">
+
+```yaml
 apiVersion: consul.hashicorp.com/v1alpha1
 kind: TerminatingGateway
 metadata:

@@ -197,6 +211,8 @@ spec:
       caFile: /etc/ssl/cert.pem
 ```

+</CodeBlockConfig>
+
 ~> If TLS is enabled a `caFile` must be provided, it must point to the system trust store of the terminating gateway
 container (`/etc/ssl/cert.pem`).

@@ -208,6 +224,8 @@ $ kubectl apply -f terminating-gateway.yaml

 If using ACLs and TLS, create a [`ServiceIntentions`](/docs/connect/config-entries/service-intentions) resource to allow access from services in the mesh to the external service

+<CodeBlockConfig filename="service-intentions.yaml">
+
 ```yaml
 apiVersion: consul.hashicorp.com/v1alpha1
 kind: ServiceIntentions

@@ -221,6 +239,8 @@ spec:
       action: allow
 ```

+</CodeBlockConfig>
+
 Apply the `ServiceIntentions` resource with `kubectl apply`:

 ```shell-session

@@ -232,6 +252,8 @@ $ kubectl apply -f service-intentions.yaml
 Finally define and deploy the external services as upstreams for the internal mesh services that wish to talk to them.
 An example deployment is provided which will serve as a static client for the terminating gateway service.

+<CodeBlockConfig filename="static-client.yaml">
+
 ```yaml
 apiVersion: v1
 kind: Service

@@ -274,6 +296,8 @@ spec:
       serviceAccountName: static-client
 ```

+</CodeBlockConfig>
+
 Run the service via `kubectl apply`:

 ```shell-session

@@ -49,6 +49,8 @@ Update Complete. ⎈Happy Helming!⎈
 Next, you must configure consul-helm via your `values.yaml` to install the custom resource definitions
 and enable the controller that acts on them:

+<CodeBlockConfig filename="values.yaml" highlight="4-5,7-8">
+
 ```yaml
 global:
   name: consul

@@ -60,6 +62,8 @@ connectInject:
   enabled: true
 ```

+</CodeBlockConfig>
+
 Note that:

 1. `controller.enabled: true` installs the CRDs and enables the controller.

@@ -215,6 +219,8 @@ name of the resource doesn't matter. For other resources, the name of the resour
 determines which service it configures. For example, this resource configures
 the service `web`:

+<CodeBlockConfig highlight="4">
+
 ```yaml
 apiVersion: consul.hashicorp.com/v1alpha1
 kind: ServiceDefaults

@@ -224,11 +230,15 @@ spec:
   protocol: http
 ```

+</CodeBlockConfig>
+
 For `ServiceIntentions`, because we need to support the ability to create
 wildcard intentions (e.g. `foo => * (allow)` meaning that `foo` can talk to **any** service),
 and because `*` is not a valid Kubernetes resource name, we instead use the field `spec.destination.name`
 to configure the destination service for the intention:

+<CodeBlockConfig highlight="6-8,18-20">
+
 ```yaml
 # foo => * (allow)
 apiVersion: consul.hashicorp.com/v1alpha1

@@ -255,6 +265,8 @@ spec:
       action: allow
 ```

+</CodeBlockConfig>
+
 ~> **NOTE:** If two `ServiceIntentions` resources set the same `spec.destination.name`, the
 last one created will not be synced.

@@ -275,6 +287,8 @@ The details on each configuration are:

 This is configured via [`connectInject.consulNamespaces`](/docs/k8s/helm#v-connectinject-consulnamespaces):

+<CodeBlockConfig highlight="6-7">
+
 ```yaml
 global:
   name: consul

@@ -285,6 +299,8 @@ The details on each configuration are:
     mirroringK8S: true
 ```

+</CodeBlockConfig>
+
 1. **Mirroring with prefix** - The Kubernetes namespace will be "mirrored" into Consul
    with a prefix added to the Consul namespace, i.e.
    if the prefix is `k8s-` then service `web` in Kubernetes namespace `web-ns` will be registered as service `web`

@@ -293,6 +309,8 @@ The details on each configuration are:

 This is configured via [`connectInject.consulNamespaces`](/docs/k8s/helm#v-connectinject-consulnamespaces):

+<CodeBlockConfig highlight="8">
+
 ```yaml
 global:
   name: consul

@@ -304,6 +322,8 @@ The details on each configuration are:
     mirroringK8SPrefix: k8s-
 ```

+</CodeBlockConfig>
+
 1. **Single destination namespace** - The Kubernetes namespace is ignored and all services
    will be registered into the same Consul namespace, i.e. if the destination Consul
    namespace is `my-ns` then service `web` in Kubernetes namespace `web-ns` will

@@ -317,6 +337,8 @@ The details on each configuration are:

 This is configured via [`connectInject.consulNamespaces`](/docs/k8s/helm#v-connectinject-consulnamespaces):

+<CodeBlockConfig highlight="7">
+
 ```yaml
 global:
   name: consul

@@ -327,6 +349,8 @@ The details on each configuration are:
     consulDestinationNamespace: 'my-ns'
 ```

+</CodeBlockConfig>
+
 ~> **NOTE:** In this configuration, if two custom resources of the same kind **and** the same name are attempted to
 be created in two Kubernetes namespaces, the last one created will not be synced.

@@ -337,6 +361,8 @@ name of the resource doesn't matter. For other resources, the name of the resour
 determines which service it configures. For example, this resource configures
 the service `web`:

+<CodeBlockConfig highlight="4">
+
 ```yaml
 apiVersion: consul.hashicorp.com/v1alpha1
 kind: ServiceDefaults

@@ -346,11 +372,15 @@ spec:
   protocol: http
 ```

+</CodeBlockConfig>
+
 For `ServiceIntentions`, because we need to support the ability to create
 wildcard intentions (e.g. `foo => * (allow)` meaning that `foo` can talk to **any** service),
 and because `*` is not a valid Kubernetes resource name, we instead use the field `spec.destination.name`
 to configure the destination service for the intention:

+<CodeBlockConfig highlight="6-8,18-20">
+
 ```yaml
 # foo => * (allow)
 apiVersion: consul.hashicorp.com/v1alpha1

@@ -377,6 +407,8 @@ spec:
       action: allow
 ```

+</CodeBlockConfig>
+
 In addition, we support the field `spec.destination.namespace` to configure
 the destination service's Consul namespace. If `spec.destination.namespace`
 is empty, then the Consul namespace used will be the same as the other

@@ -141,6 +141,8 @@ in full cluster rebuilds.
 To verify DNS works, run a simple job to query DNS. Save the following
 job to the file `job.yaml` and run it:

+<CodeBlockConfig filename="job.yaml">
+
 ```yaml
 apiVersion: batch/v1
 kind: Job

@ -157,6 +159,8 @@ spec:
|
|||
backoffLimit: 4
|
||||
```
|
||||
|
||||
</CodeBlockConfig>
|
||||
|
||||
```shell-session
|
||||
$ kubectl apply -f job.yaml
|
||||
```
|
||||
|
|
|
@@ -21,16 +21,20 @@ kubectl create secret generic consul-ent-license --from-literal="key=${secret}"

In your `config.yaml`, change the value of `global.image` to one of the enterprise [release tags](https://hub.docker.com/r/hashicorp/consul-enterprise/tags).

<CodeBlockConfig filename="config.yaml" highlight="2">

```yaml
# config.yaml
global:
  image: 'hashicorp/consul-enterprise:1.10.0-ent'
```

</CodeBlockConfig>

Add the name and key of the secret you just created to `server.enterpriseLicense`, if using Consul version 1.10+.

<CodeBlockConfig filename="config.yaml" highlight="4-6">

```yaml
# config.yaml
global:
  image: 'hashicorp/consul-enterprise:1.10.0-ent'
server:
@@ -39,12 +43,15 @@ server:
    secretKey: 'key'
```

</CodeBlockConfig>

If the version of Consul is < 1.10, use the following config with the name and key of the secret you just created.

-> **Note:** The value of `server.enterpriseLicense.enableLicenseAutoload` must be set to `false`.

<CodeBlockConfig filename="config.yaml" highlight="7">

```yaml
# config.yaml
global:
  image: 'hashicorp/consul-enterprise:1.8.3-ent'
server:
@@ -54,6 +61,8 @@ server:
    enableLicenseAutoload: false
```

</CodeBlockConfig>

Now run `helm install`:

```shell-session
@@ -27,8 +27,9 @@ example above, a fake [cloud auto-join](/docs/agent/cloud-auto-join)
value is specified. This should be set to resolve to the proper addresses of
your existing Consul cluster.

<CodeBlockConfig filename="config.yaml">

```yaml
# config.yaml
global:
  enabled: false
@@ -41,6 +42,8 @@ client:
    - 'provider=my-cloud config=val ...'
```

</CodeBlockConfig>

-> **Networking:** Note that for the Kubernetes nodes to join an existing
cluster, the nodes (and specifically the agent pods) must be able to connect
to all other server and client agents inside and _outside_ of Kubernetes over [LAN](/docs/glossary#lan-gossip).
@@ -63,6 +66,8 @@ If you would like to use this feature with external Consul servers, you need to
so that it can retrieve the clients' CA to use for securing the rest of the cluster.
To do that, you must add the following values, in addition to the values mentioned above:

<CodeBlockConfig filename="config.yaml" highlight="2-8">

```yaml
global:
  tls:
@@ -74,6 +79,8 @@ externalServers:
    - 'provider=my-cloud config=val ...'
```

</CodeBlockConfig>

In most cases, `externalServers.hosts` will be the same as `client.join`; however, both keys must be set because
they are used for different purposes: one for Serf LAN and the other for HTTPS connections.
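A minimal sketch of such a configuration, assuming two placeholder server addresses (the hostnames below are illustrative, not from the original docs):

```yaml
# Hypothetical example: the same external servers referenced twice.
client:
  join:
    # Used by the agents for Serf LAN gossip.
    - 'consul-server-1.example.com'
    - 'consul-server-2.example.com'
externalServers:
  enabled: true
  # Used by consul-k8s components for HTTPS API connections.
  hosts:
    - 'consul-server-1.example.com'
    - 'consul-server-2.example.com'
```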
Please see the [reference documentation](/docs/k8s/helm#v-externalservers-hosts)
@@ -99,6 +106,8 @@ kubectl create secret generic bootstrap-token --from-literal='token=<your bootst

Then provide that secret to the Helm chart:

<CodeBlockConfig filename="config.yaml" highlight="4-6">

```yaml
global:
  acls:
@@ -108,6 +117,8 @@ global:
      secretKey: token
```

</CodeBlockConfig>

The bootstrap token requires the following minimal permissions:

- `acl:write`
@@ -120,6 +131,8 @@ to create policies, tokens, and an auth method. If you are [enabling Consul Conn
so that the Consul servers can validate a Kubernetes service account token when using the [Kubernetes auth method](/docs/acl/auth-methods/kubernetes)
with `consul login`.

<CodeBlockConfig filename="config.yaml">

```yaml
externalServers:
  enabled: true
@@ -128,8 +141,12 @@ externalServers:
  k8sAuthMethodHost: 'https://kubernetes.example.com:443'
```

</CodeBlockConfig>

Your resulting Helm configuration will end up looking similar to this:

<CodeBlockConfig filename="config.yaml">

```yaml
global:
  enabled: false
@@ -152,11 +169,15 @@ externalServers:
  k8sAuthMethodHost: 'https://kubernetes.example.com:443'
```

</CodeBlockConfig>

### Bootstrapping ACLs via the Helm chart

If you would like the Helm chart to call the bootstrapping API and set the server tokens for you, then the steps are similar.
The only difference is that you don't need to set the bootstrap token. The Helm chart will save the bootstrap token as a Kubernetes secret.

<CodeBlockConfig filename="config.yaml">

```yaml
global:
  enabled: false
@@ -175,3 +196,5 @@ externalServers:
    - 'provider=my-cloud config=val ...'
  k8sAuthMethodHost: 'https://kubernetes.example.com:443'
```

</CodeBlockConfig>
@@ -21,8 +21,9 @@ to pods or nodes in another.
First, we will deploy the Consul servers with Consul clients in the first cluster.
For that, we will use the following Helm configuration:

<CodeBlockConfig filename="cluster1-config.yaml">

```yaml
# cluster1-config.yaml
global:
  datacenter: dc1
  tls:
@@ -42,6 +43,8 @@ ui:
    type: NodePort
```

</CodeBlockConfig>

Note that we are deploying in a secure configuration, with gossip encryption,
TLS for all components, and ACLs. We are enabling the Consul Service Mesh and the controller for CRDs
so that we can use them to later verify that our services can connect with each other across clusters.
@@ -88,8 +91,9 @@ $ kubectl apply -f cluster1-credentials.yaml
```

To deploy in the second cluster, we will use the following Helm configuration:

<CodeBlockConfig filename="cluster2-config.yaml" highlight="6-11,15-17">

```yaml
# cluster2-config.yaml
global:
  enabled: false
  datacenter: dc1
@@ -127,6 +131,8 @@ connectInject:
  enabled: true
```

</CodeBlockConfig>

Note that we're referencing secrets from the first cluster in ACL, gossip, and TLS configuration.
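As a rough sketch, those secret references in the second cluster's Helm values might look like the following (the secret names here are placeholders, not the actual names created by the first cluster's installation):

```yaml
# Hypothetical secret references; secret names are placeholders.
global:
  gossipEncryption:
    secretName: consul-gossip-encryption-key
    secretKey: key
  tls:
    caCert:
      secretName: consul-ca-cert
      secretKey: tls.crt
  acls:
    bootstrapToken:
      secretName: cluster1-consul-bootstrap-acl-token
      secretKey: token
```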
Next, we need to set up the `externalServers` configuration.
@@ -182,13 +188,15 @@ helm install cluster2 -f cluster2-config.yaml hashicorp/consul

## Verifying the Consul Service Mesh works

~> When Transparent proxy is enabled, services in one Kubernetes cluster that need to communicate with a service in another Kubernetes cluster must have an explicit upstream configured through the ["consul.hashicorp.com/connect-service-upstreams"](https://www.consul.io/docs/k8s/connect#consul-hashicorp-com-connect-service-upstreams) annotation.

Now that we have our Consul cluster in multiple k8s clusters up and running, we will
deploy two services and verify that they can connect to each other.

First, we'll deploy the `static-server` service in the first cluster:

<CodeBlockConfig filename="static-server.yaml">

```yaml
---
apiVersion: consul.hashicorp.com/v1alpha1
@@ -249,10 +257,14 @@ spec:
      serviceAccountName: static-server
```

</CodeBlockConfig>

Note that we're defining a Service intention so that our services are allowed to talk to each other.

Then we'll deploy `static-client` in the second cluster with the following configuration:

<CodeBlockConfig filename="static-client.yaml">

```yaml
apiVersion: v1
kind: Service
@@ -295,6 +307,8 @@ spec:
      serviceAccountName: static-client
```

</CodeBlockConfig>

Once both services are up and running, we can connect to the `static-server` from `static-client`:

```shell
@@ -98,8 +98,9 @@ or by reading the [Helm Chart Reference](/docs/k8s/helm).
For example, if you want to enable the [Consul Connect](/docs/k8s/connect) feature,
use the following config file:

<CodeBlockConfig filename="config.yaml">

```yaml
# config.yaml
global:
  name: consul
connectInject:
@@ -108,6 +109,8 @@ controller:
  enabled: true
```

</CodeBlockConfig>

Once you've created your `config.yaml` file, run `helm install` with the `-f` flag:

```shell-session
@@ -218,6 +221,8 @@ spec:
An example `Deployment` is also shown below to show how the host IP can
be accessed from nested pod specifications:

<CodeBlockConfig highlight="18-28">

```yaml
apiVersion: apps/v1
kind: Deployment
@@ -249,6 +254,8 @@ spec:
              consul kv put hello world
```

</CodeBlockConfig>

## Architecture

Consul runs on Kubernetes with the same
@@ -34,6 +34,8 @@ support federation, see [Upgrading An Existing Cluster](#upgrading-an-existing-c
You will need to use the following `config.yaml` file for your primary cluster,
with the possible modifications listed below.

<CodeBlockConfig filename="config.yaml">

```yaml
global:
  name: consul
@@ -77,11 +79,15 @@ meshGateway:
  enabled: true
```

</CodeBlockConfig>

Modifications:

1. The Consul datacenter name is `dc1`. The datacenter name in each federated
   cluster **must be unique**.
1. ACLs are enabled in the above config file. They can be disabled by setting:

   ```yaml
   global:
     acls:
@@ -125,6 +131,8 @@ to install Consul on your primary cluster.
If you have an existing cluster, you will need to upgrade it to ensure it has
the following config:

<CodeBlockConfig filename="config.yaml">

```yaml
global:
  tls:
@@ -139,6 +147,8 @@ meshGateway:
  enabled: true
```

</CodeBlockConfig>

1. `global.tls.enabled` must be `true`. See [Configuring TLS on an Existing Cluster](/docs/k8s/operations/tls-on-existing-cluster)
   for more information on safely upgrading a cluster to use TLS.

@@ -200,12 +210,16 @@ The federation secret is a Kubernetes secret containing information needed
for secondary datacenters/clusters to federate with the primary. This secret is created
automatically by setting:

<CodeBlockConfig highlight="2-3">

```yaml
global:
  federation:
    createFederationSecret: true
```

</CodeBlockConfig>

After the installation into your primary cluster you will need to export
this secret:

@@ -291,6 +305,8 @@ with the possible modifications listed below.
-> **NOTE:** You must use a separate Helm config file for each cluster (primary and secondaries) since their
settings are different.

<CodeBlockConfig filename="config-cluster2.yaml">

```yaml
global:
  name: consul
@@ -340,6 +356,8 @@ server:
    load: true
```

</CodeBlockConfig>

Modifications:

1. The Consul datacenter name is `dc2`. The primary datacenter's name was `dc1`.
@@ -74,13 +74,16 @@ The following sections detail how to export this data.

1. These certificates can be used in your server config file:

   <CodeBlockConfig filename="server.hcl">

   ```hcl
   # server.hcl
   cert_file = "vm-dc-server-consul-0.pem"
   key_file = "vm-dc-server-consul-0-key.pem"
   ca_file = "consul-agent-ca.pem"
   ```

   </CodeBlockConfig>

1. For clients, you can generate TLS certs with:

   ```shell-session
@@ -117,11 +117,16 @@ to update to the new version.
1. Read our [Compatibility Matrix](/docs/k8s/upgrade/compatibility) to ensure
   your current Helm chart version supports this Consul version. If it does not,
   you may need to also upgrade your Helm chart version at the same time.
1. Set `global.consul` in your `values.yaml` to the desired version:
1. Set `global.image` in your `values.yaml` to the desired version:

   <CodeBlockConfig filename="values.yaml" highlight="2">

   ```yaml
   global:
     image: consul:1.8.3
   ```

   </CodeBlockConfig>

1. Determine your current installed chart version:

   ```bash
@@ -36,6 +36,8 @@ First, create a directory to organize Terraform configuration files that make up

The `main.tf` is the entry point of the module and this is where you can begin authoring your module. It can contain multiple Terraform resources related to an automation task that uses Consul service discovery information, particularly the required [`services` input variable](#services-variable). The code example below shows a resource using the `services` variable. When this example is used in automation with Consul-Terraform-Sync, the content of the local file would dynamically update as Consul service discovery information changes.

<CodeBlockConfig filename="main.tf">

```hcl
# Create a file with service names and their node addresses
resource "local_file" "consul_services" {
@@ -46,12 +48,16 @@ resource "local_file" "consul_services" {
}
```

</CodeBlockConfig>

Something important to consider before authoring your module is deciding the [condition under which it will execute](/docs/nia/tasks#task-execution). This will allow you to potentially include other types of Consul-Terraform-Sync provided input variables in your module. It will also help inform your documentation and how users should configure their task for your module.
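As an illustration, a Consul-Terraform-Sync task that pairs such a module with a services condition might be configured along these lines (a sketch; the task name, module source, and service names are placeholders, not from the original docs):

```hcl
# Hypothetical Consul-Terraform-Sync task configuration.
task {
  name     = "example-task"
  source   = "./example-module"
  # Run the module whenever these Consul services change.
  services = ["web", "api"]
}
```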
### Services Variable

To satisfy the specification requirements for a compatible module, copy the `services` variable declaration to the `variables.tf` file. Your module can optionally have other [variable declarations](#module-input-variables) and [Consul-Terraform-Sync provided input variables](/docs/nia/terraform-modules#optional-input-variables) in addition to `var.services`.

<CodeBlockConfig filename="variables.tf">

```hcl
variable "services" {
  description = "Consul services monitored by Consul-Terraform-Sync"
@@ -80,6 +86,8 @@ variable "services" {
}
```

</CodeBlockConfig>

Keys of the `services` map are unique identifiers of the service across Consul agents and data centers. Keys follow the format `service-id.node.datacenter` (or `service-id.node.namespace.datacenter` for Consul Enterprise). A complete list of attributes available for the `services` variable is included in the [documentation for Consul-Terraform-Sync tasks](/docs/nia/tasks#services-condition).
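For illustration, a module could key its outputs off these identifiers, along these lines (a sketch assuming the `node_address` and `port` attributes of the services variable):

```hcl
# Hypothetical output mapping each "service-id.node.datacenter" key
# to that service instance's address and port.
output "service_endpoints" {
  value = {
    for id, s in var.services : id => "${s.node_address}:${s.port}"
  }
}
```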

Terraform variables when passed as module arguments can be [lossy for object types](https://www.terraform.io/docs/configuration/types.html#conversion-of-complex-types). This allows Consul-Terraform-Sync to declare the full variable with every object attribute in the generated root module, and pass the variable to a child module that contains a subset of these attributes for its variable declaration. Modules compatible with Consul-Terraform-Sync may simplify the `var.services` declaration within the module by omitting unused attributes. For example, the following services variable has 4 attributes with the rest omitted.
@@ -325,7 +325,7 @@ Once the ACL system is bootstrapped, ACL tokens can be managed through the
After the servers are restarted above, you will see new errors in the logs of the Consul
servers related to permission denied errors:

```text
```log
2017/07/08 23:38:24 [WARN] agent: Node info update blocked by ACLs
2017/07/08 23:38:44 [WARN] agent: Coordinate update blocked by ACLs
```

@@ -379,7 +379,7 @@ $ curl \
With that ACL agent token set, the servers will be able to sync themselves with the
catalog:

```text
```log
2017/07/08 23:42:59 [INFO] agent: Synced node info
```

@@ -640,7 +640,7 @@ operator = "read"

This is equivalent to the following JSON input:

```javascript
```json
{
  "key": {
    "": {
@@ -784,15 +784,15 @@ recursive reads via [the KV API](/api/kv#recurse) with an invalid token result i

```hcl
key "" {
policy = "deny"
  policy = "deny"
}

key "bar" {
policy = "list"
  policy = "list"
}

key "baz" {
policy = "read"
  policy = "read"
}
```

@@ -828,7 +828,7 @@ The `keyring` policy controls access to keyring operations in the

Keyring rules look like this:

```text
```hcl
keyring = "write"
```

@@ -900,7 +900,7 @@ The `operator` policy controls access to cluster-level operations in the

Operator rules look like this:

```text
```hcl
operator = "read"
```
@@ -125,6 +125,8 @@ Take note of the `ReplicatedIndex` value.
Create a new file containing the payload for creating a new token named `test-ui-token.json`
with the following contents:

<CodeBlockConfig filename="test-ui-token.json">

```json
{
  "Name": "UI Token",
@@ -133,6 +135,8 @@ with the following contents:
}
```

</CodeBlockConfig>

From a Consul server in DC1, create a new token using that file:

```shell
@@ -21,7 +21,7 @@ upgrade flow.
Consul Enterprise 1.10 has removed temporary licensing capabilities from the binaries
found on https://releases.hashicorp.com. Servers will no longer load a license previously
set through the CLI or API. Instead the license must be present in the server's configuration
or environment prior to starting. See the [licensing documentation](/docs/enterprise/license/overview)
for more information about how to configure the license. Client agents previously retrieved their
license from the servers in the cluster within 30 minutes of starting and the snapshot agent
would similarly retrieve its license from the server or client agent it was configured to use. As
@@ -820,7 +820,7 @@ defaulting [`allow_stale`](/docs/agent/options#allow_stale) to true for
better utilization of available servers. If you want to retain the previous
behavior, set the following configuration:

```javascript
```json
{
  "dns_config": {
    "allow_stale": false
@@ -837,7 +837,7 @@ To continue to use the high-performance settings that were the default prior to
Consul 0.7 (recommended for production servers), add the following
configuration to all Consul servers when upgrading:

```javascript
```json
{
  "performance": {
    "raft_multiplier": 1