docs: Use long form of CLI flags (#12030)

Use long form of CLI flags in all example commands.

Co-authored-by: mrspanishviking <kcardenas@hashicorp.com>
Co-authored-by: David Yu <dyu@hashicorp.com>
Blake Covarrubias 2022-01-12 15:05:01 -08:00 committed by GitHub
parent 3a19174b1c
commit 5a12f2cf20
50 changed files with 311 additions and 299 deletions

View File

@ -33,7 +33,7 @@ build:
# local Docker image with the dependency changes included.
build-image:
@echo "==> Building Docker image..."
@docker build -t hashicorp-consul-website-local .
@docker build --tag hashicorp-consul-website-local .
# Use this if you have run `build-image` to use the locally built image
# rather than our CI-generated image to test dependency changes.
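
A usage sketch of the target above; the first line of output is the `@echo` from the recipe, followed by Docker's normal build output:

```shell-session
$ make build-image
==> Building Docker image...
```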

View File

@ -118,7 +118,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X PUT \
$ curl --request PUT \
--data @payload.json \
http://127.0.0.1:8500/v1/acl/auth-method
```
@ -174,7 +174,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X GET http://127.0.0.1:8500/v1/acl/auth-method/minikube
$ curl --request GET http://127.0.0.1:8500/v1/acl/auth-method/minikube
```
### Sample Response
@ -296,7 +296,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X PUT \
$ curl --request PUT \
--data @payload.json \
http://127.0.0.1:8500/v1/acl/auth-method/minikube
```
@ -357,7 +357,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X DELETE \
$ curl --request DELETE \
http://127.0.0.1:8500/v1/acl/auth-method/minikube
```
@ -397,7 +397,7 @@ The table below shows this endpoint's support for
## Sample Request
```shell-session
$ curl -X GET http://127.0.0.1:8500/v1/acl/auth-methods
$ curl --request GET http://127.0.0.1:8500/v1/acl/auth-methods
```
### Sample Response

View File

@ -118,7 +118,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X PUT \
$ curl --request PUT \
--data @payload.json \
http://127.0.0.1:8500/v1/acl/binding-rule
```
@ -172,7 +172,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X GET http://127.0.0.1:8500/v1/acl/binding-rule/000ed53c-e2d3-e7e6-31a5-c19bc3518a3d
$ curl --request GET http://127.0.0.1:8500/v1/acl/binding-rule/000ed53c-e2d3-e7e6-31a5-c19bc3518a3d
```
### Sample Response
@ -297,7 +297,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X PUT \
$ curl --request PUT \
--data @payload.json \
http://127.0.0.1:8500/v1/acl/binding-rule/000ed53c-e2d3-e7e6-31a5-c19bc3518a3d
```
@ -352,7 +352,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X DELETE \
$ curl --request DELETE \
http://127.0.0.1:8500/v1/acl/binding-rule/000ed53c-e2d3-e7e6-31a5-c19bc3518a3d
```
@ -395,7 +395,7 @@ The table below shows this endpoint's support for
## Sample Request
```shell-session
$ curl -X GET http://127.0.0.1:8500/v1/acl/binding-rules
$ curl --request GET http://127.0.0.1:8500/v1/acl/binding-rules
```
### Sample Response

View File

@ -217,7 +217,7 @@ agent "" {
### Sample Request
```shell-session
$ curl -X POST -d @rules.hcl http://127.0.0.1:8500/v1/acl/rules/translate
$ curl --request POST --data @rules.hcl http://127.0.0.1:8500/v1/acl/rules/translate
```
### Sample Response
@ -256,7 +256,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X GET http://127.0.0.1:8500/v1/acl/rules/translate/4f48f7e6-9359-4890-8e67-6144a962b0a5
$ curl --request GET http://127.0.0.1:8500/v1/acl/rules/translate/4f48f7e6-9359-4890-8e67-6144a962b0a5
```
### Sample Response
@ -384,7 +384,7 @@ deleting a token for which you must already possess the secret.
```shell-session
$ curl \
-H "X-Consul-Token: b78d37c7-0ca7-5f4d-99ee-6d9975ce4586" \
--header "X-Consul-Token: b78d37c7-0ca7-5f4d-99ee-6d9975ce4586" \
--request POST \
http://127.0.0.1:8500/v1/acl/logout
```

View File

@ -68,7 +68,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X PUT \
$ curl --request PUT \
--data @payload.json \
http://127.0.0.1:8500/v1/acl/policy
```
@ -120,7 +120,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X GET http://127.0.0.1:8500/v1/acl/policy/e359bd81-baca-903e-7e64-1ccd9fdc78f5
$ curl --request GET http://127.0.0.1:8500/v1/acl/policy/e359bd81-baca-903e-7e64-1ccd9fdc78f5
```
### Sample Response
@ -170,7 +170,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X GET http://127.0.0.1:8500/v1/acl/policy/name/node-read
$ curl --request GET http://127.0.0.1:8500/v1/acl/policy/name/node-read
```
### Sample Response
@ -245,7 +245,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X PUT \
$ curl --request PUT \
--data @payload.json \
http://127.0.0.1:8500/v1/acl/policy/c01a1f82-44be-41b0-a686-685fb6e0f485
```
@ -299,7 +299,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X DELETE \
$ curl --request DELETE \
http://127.0.0.1:8500/v1/acl/policy/8f246b77-f3e1-ff88-5b48-8ec93abf3e05
```
@ -339,7 +339,7 @@ The table below shows this endpoint's support for
## Sample Request
```shell-session
$ curl -X GET http://127.0.0.1:8500/v1/acl/policies
$ curl --request GET http://127.0.0.1:8500/v1/acl/policies
```
### Sample Response

View File

@ -114,7 +114,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X PUT \
$ curl --request PUT \
--data @payload.json \
http://127.0.0.1:8500/v1/acl/role
```
@ -186,7 +186,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X GET http://127.0.0.1:8500/v1/acl/role/aa770e5b-8b0b-7fcf-e5a1-8535fcc388b4
$ curl --request GET http://127.0.0.1:8500/v1/acl/role/aa770e5b-8b0b-7fcf-e5a1-8535fcc388b4
```
### Sample Response
@ -256,7 +256,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X GET http://127.0.0.1:8500/v1/acl/role/name/example-role
$ curl --request GET http://127.0.0.1:8500/v1/acl/role/name/example-role
```
### Sample Response
@ -372,7 +372,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X PUT \
$ curl --request PUT \
--data @payload.json \
http://127.0.0.1:8500/v1/acl/role/8bec74a4-5ced-45ed-9c9d-bca6153490bb
```
@ -441,7 +441,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X DELETE \
$ curl --request DELETE \
http://127.0.0.1:8500/v1/acl/role/8f246b77-f3e1-ff88-5b48-8ec93abf3e05
```
@ -486,7 +486,7 @@ The table below shows this endpoint's support for
## Sample Request
```shell-session
$ curl -X GET http://127.0.0.1:8500/v1/acl/roles
$ curl --request GET http://127.0.0.1:8500/v1/acl/roles
```
### Sample Response

View File

@ -125,7 +125,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X PUT \
$ curl --request PUT \
--data @payload.json \
http://127.0.0.1:8500/v1/acl/token
```
@ -187,7 +187,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X GET http://127.0.0.1:8500/v1/acl/token/6a1253d2-1785-24fd-91c2-f8e78c745511
$ curl --request GET http://127.0.0.1:8500/v1/acl/token/6a1253d2-1785-24fd-91c2-f8e78c745511
```
### Sample Response
@ -246,7 +246,7 @@ retrieving the data for a token whose secret you must already possess.
### Sample Request
```shell-session
$ curl -H "X-Consul-Token: 6a1253d2-1785-24fd-91c2-f8e78c745511" \
$ curl --header "X-Consul-Token: 6a1253d2-1785-24fd-91c2-f8e78c745511" \
http://127.0.0.1:8500/v1/acl/token/self
```
@ -389,7 +389,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X PUT \
$ curl --request PUT \
--data @payload.json \
http://127.0.0.1:8500/v1/acl/token/6a1253d2-1785-24fd-91c2-f8e78c745511
```
@ -465,7 +465,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X PUT \
$ curl --request PUT \
--data @payload.json \
http://127.0.0.1:8500/v1/acl/token/6a1253d2-1785-24fd-91c2-f8e78c745511/clone
```
@ -534,7 +534,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X DELETE \
$ curl --request DELETE \
http://127.0.0.1:8500/v1/acl/token/8f246b77-f3e1-ff88-5b48-8ec93abf3e05
```
@ -589,7 +589,7 @@ The table below shows this endpoint's support for
## Sample Request
```shell-session
$ curl -X GET http://127.0.0.1:8500/v1/acl/tokens
$ curl --request GET http://127.0.0.1:8500/v1/acl/tokens
```
### Sample Response

View File

@ -41,15 +41,15 @@ The table below shows this endpoint's support for
```json
{
"Name": "na-west",
"Description": "Partition for North America West",
"Description": "Partition for North America West"
}
```
### Sample Request
```shell-session
$ curl -X PUT \
-H "X-Consul-Token: 5cdcae6c-0cce-4210-86fe-5dff3b984a6e" \
$ curl --request PUT \
--header "X-Consul-Token: 5cdcae6c-0cce-4210-86fe-5dff3b984a6e" \
--data @payload.json \
http://127.0.0.1:8500/v1/partition
```
@ -93,7 +93,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -H "X-Consul-Token: b23b3cad-5ea1-4413-919e-c76884b9ad60" \
$ curl --header "X-Consul-Token: b23b3cad-5ea1-4413-919e-c76884b9ad60" \
http://127.0.0.1:8500/v1/partition/na-west
```
@ -145,8 +145,8 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X PUT \
-H "X-Consul-Token: 5cdcae6c-0cce-4210-86fe-5dff3b984a6e" \
$ curl --request PUT \
--header "X-Consul-Token: 5cdcae6c-0cce-4210-86fe-5dff3b984a6e" \
--data @payload.json \
http://127.0.0.1:8500/v1/partition/na-west
```
@ -196,8 +196,8 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X DELETE \
-H "X-Consul-Token: b23b3cad-5ea1-4413-919e-c76884b9ad60" \
$ curl --request DELETE \
--header "X-Consul-Token: b23b3cad-5ea1-4413-919e-c76884b9ad60" \
http://127.0.0.1:8500/v1/partition/na-west
```
@ -234,7 +234,7 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -H "X-Consul-Token: 0137db51-5895-4c25-b6cd-d9ed992f4a52" \
$ curl --header "X-Consul-Token: 0137db51-5895-4c25-b6cd-d9ed992f4a52" \
http://127.0.0.1:8500/v1/partitions
```

View File

@ -274,11 +274,11 @@ The table below shows this endpoint's support for
Those endpoints return the aggregated values of all health checks for the
service instance(s) and will return the corresponding HTTP codes:
| Result | Meaning |
| ------ | --------------------------------------------------------------- |
| Result | Meaning |
| ------ | ---------------------------------------------------------------- |
| `200` | All health checks of every matching service instance are passing |
| `400` | Bad parameter (missing service name or id) |
| `404` | No such service id or name |
| `400` | Bad parameter (missing service name or id) |
| `404` | No such service id or name |
| `429` | Some health checks are passing, at least one is warning |
| `503` | At least one of the health checks is critical |
@ -305,8 +305,8 @@ Given 2 services with name `web`, with web2 critical and web1 passing:
##### By Name, Text
```shell
$ curl http://localhost:8500/v1/agent/health/service/name/web?format=text
```shell-session
$ curl "http://localhost:8500/v1/agent/health/service/name/web?format=text"
critical
```
@ -438,8 +438,8 @@ Query the health status of the service with ID `web2`.
##### Failure By ID, Text
```shell
$ curl http://localhost:8500/v1/agent/health/service/id/web2?format=text
```shell-session
$ curl "http://localhost:8500/v1/agent/health/service/id/web2?format=text"
critical
```
@ -522,7 +522,7 @@ curl localhost:8500/v1/agent/health/service/id/web2
##### Success By ID, Text
```shell
$ curl localhost:8500/v1/agent/health/service/id/web1?format=text
$ curl "localhost:8500/v1/agent/health/service/id/web1?format=text"
passing
```

View File

@ -159,7 +159,7 @@ and vice versa. A catalog entry can have either, neither, or both.
$ curl \
--request PUT \
--data @payload.json \
-H "X-Consul-Namespace: team-1" \
--header "X-Consul-Namespace: team-1" \
http://127.0.0.1:8500/v1/catalog/register
```

View File

@ -117,7 +117,7 @@ The table below shows this endpoint's support for
$ curl \
--request PUT \
--data @payload.json \
http://127.0.0.1:8500/v1/connect/intentions/exact?source=web&destination=db
"http://127.0.0.1:8500/v1/connect/intentions/exact?source=web&destination=db"
```
### Sample Response
@ -674,7 +674,7 @@ The table below shows this endpoint's support for
```shell-session
$ curl \
http://127.0.0.1:8500/v1/connect/intentions/check?source=web&destination=db
"http://127.0.0.1:8500/v1/connect/intentions/check?source=web&destination=db"
```
### Sample Response
@ -736,7 +736,7 @@ The table below shows this endpoint's support for
```shell-session
$ curl \
http://127.0.0.1:8500/v1/connect/intentions/match?by=source&name=web
"http://127.0.0.1:8500/v1/connect/intentions/match?by=source&name=web"
```
### Sample Response

View File

@ -209,8 +209,8 @@ redirect {
Request:
```shell-session
$ curl -X POST \
-d'
$ curl --request POST \
--data '
{
"OverrideConnectTimeout": "7s",
"OverrideProtocol": "grpc",

View File

@ -137,8 +137,8 @@ is executed on the leader.
**Command - Unfiltered**
```shell
curl -X GET localhost:8500/v1/agent/services
```shell-session
$ curl --request GET localhost:8500/v1/agent/services
```
**Response - Unfiltered**
@ -227,8 +227,8 @@ curl --get localhost:8500/v1/agent/services --data-urlencode 'filter=Meta.env ==
**Command - Unfiltered**
```shell
curl -X GET localhost:8500/v1/catalog/service/api-internal
```shell-session
$ curl --request GET localhost:8500/v1/catalog/service/api-internal
```
**Response - Unfiltered**
@ -372,8 +372,8 @@ curl --get localhost:8500/v1/catalog/service/api-internal --data-urlencode 'filt
**Command - Unfiltered**
```shell
curl -X GET localhost:8500/v1/health/node/node-1
```shell-session
$ curl --request GET localhost:8500/v1/health/node/node-1
```
**Response - Unfiltered**
@ -413,8 +413,8 @@ curl -X GET localhost:8500/v1/health/node/node-1
**Command - Filtered**
```shell
curl -G localhost:8500/v1/health/node/node-1 --data-urlencode 'filter=ServiceName != ""'
```shell-session
$ curl --get localhost:8500/v1/health/node/node-1 --data-urlencode 'filter=ServiceName != ""'
```
**Response - Filtered**

View File

@ -54,7 +54,7 @@ The table below shows this endpoint's support for
```shell-session
$ curl \
-H "X-Consul-Namespace: *" \
--header "X-Consul-Namespace: *" \
http://127.0.0.1:8500/v1/health/node/my-node
```

View File

@ -98,8 +98,8 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X PUT \
-H "X-Consul-Token: 5cdcae6c-0cce-4210-86fe-5dff3b984a6e" \
$ curl --request PUT \
--header "X-Consul-Token: 5cdcae6c-0cce-4210-86fe-5dff3b984a6e" \
--data @payload.json \
http://127.0.0.1:8500/v1/namespace
```
@ -169,7 +169,7 @@ the request has been granted any access in the namespace (read, list or write).
### Sample Request
```shell-session
$ curl -H "X-Consul-Token: b23b3cad-5ea1-4413-919e-c76884b9ad60" \
$ curl --header "X-Consul-Token: b23b3cad-5ea1-4413-919e-c76884b9ad60" \
http://127.0.0.1:8500/v1/namespace/team-1
```
@ -296,8 +296,8 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X PUT \
-H "X-Consul-Token: 5cdcae6c-0cce-4210-86fe-5dff3b984a6e" \
$ curl --request PUT \
--header "X-Consul-Token: 5cdcae6c-0cce-4210-86fe-5dff3b984a6e" \
--data @payload.json \
http://127.0.0.1:8500/v1/namespace/team-1
```
@ -372,8 +372,8 @@ The table below shows this endpoint's support for
### Sample Request
```shell-session
$ curl -X DELETE \
-H "X-Consul-Token: b23b3cad-5ea1-4413-919e-c76884b9ad60" \
$ curl --request DELETE \
--header "X-Consul-Token: b23b3cad-5ea1-4413-919e-c76884b9ad60" \
http://127.0.0.1:8500/v1/namespace/team-1
```
@ -439,7 +439,7 @@ the request has been granted any access in the namespace (read, list or write).
### Sample Request
```shell-session
$ curl -H "X-Consul-Token: 0137db51-5895-4c25-b6cd-d9ed992f4a52" \
$ curl --header "X-Consul-Token: 0137db51-5895-4c25-b6cd-d9ed992f4a52" \
http://127.0.0.1:8500/v1/namespaces
```

View File

@ -143,5 +143,5 @@ The table below shows this endpoint's support for
```shell-session
$ curl \
--request DELETE \
http://127.0.0.1:8500/v1/operator/raft/peer?address=1.2.3.4:5678
"http://127.0.0.1:8500/v1/operator/raft/peer?address=1.2.3.4:5678"
```

View File

@ -60,7 +60,7 @@ The table below shows this endpoint's support for
With a custom datacenter:
```shell-session
$ curl http://127.0.0.1:8500/v1/snapshot?dc=my-datacenter -o snapshot.tgz
$ curl http://127.0.0.1:8500/v1/snapshot?dc=my-datacenter --output snapshot.tgz
```
The above example results in a tarball named `snapshot.tgz` in the current working directory.
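
A snapshot saved this way can later be restored by sending the tarball back to the same endpoint with a PUT (a sketch, assuming the default agent address and no ACL token):

```shell-session
$ curl --request PUT --data-binary @snapshot.tgz http://127.0.0.1:8500/v1/snapshot
```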

View File

@ -22,7 +22,7 @@ build for other platforms currently.
If the `-bootstrap` option is specified, the bootstrap config is generated in
the same way and then printed to stdout. This allows it to be redirected to a
file and used with `envoy -c bootstrap.json`. This works on all operating
file and used with `envoy --config-path bootstrap.json`. This works on all operating
systems, allowing configuration to be generated on a host that Envoy doesn't
build on but then used in a virtualized environment that can run Envoy.
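
As a sketch of that workflow, assuming a service named `web` with a registered sidecar proxy:

```shell-session
$ consul connect envoy -sidecar-for web -bootstrap > bootstrap.json
$ envoy --config-path bootstrap.json
```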
@ -66,7 +66,7 @@ proxy configuration needed.
- `-bootstrap` - If present, the command will simply output the generated
bootstrap config to stdout in JSON protobuf form. This can be directed to a
file and used to start Envoy with `envoy -c bootstrap.json`.
file and used to start Envoy with `envoy --config-path bootstrap.json`.
~> **Security Note:** If ACLs are enabled the bootstrap JSON will contain the
ACL token from `-token` or the environment and so should be handled as a secret.
@ -209,7 +209,7 @@ To pass additional arguments directly to Envoy, for example output logging
level, you can use:
```shell-session
$ consul connect envoy -sidecar-for web -- -l debug
$ consul connect envoy -sidecar-for web -- --log-level debug
```
### Multiple Proxy Instances

View File

@ -64,7 +64,7 @@ To get help for any specific command, pass the `-h` flag to the relevant
subcommand. For example, to see help about the `join` subcommand:
```shell-session
$ consul join -h
$ consul join --help
Usage: consul join [options] address ...
Tells a running Consul agent (with "consul agent") to join the cluster

View File

@ -73,17 +73,17 @@ consul partition create <OPTIONS>
The admin partition is created according to the values specified in the options. You can specify the following options:
| Option | Description | Default | Required |
| --- | --- | --- | --- |
| `-name` | String value that specifies the name for the new partition. | none | Required |
| `-description` &nbsp; &nbsp; &nbsp; &nbsp; | String value that specifies a description of the new partition. | none | Optional |
| `-format` | Specifies how to format the output of the operation in the console. | none | Optional |
| `-show-meta` | Prints the description and raft indices to the console in the response. <br/> This option does not take a value. Include the option when issuing the command to enable. | Disabled | Optional |
| Option | Description | Default | Required |
| ------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | -------- |
| `-name` | String value that specifies the name for the new partition. | none | Required |
| `-description` &nbsp; &nbsp; &nbsp; &nbsp; | String value that specifies a description of the new partition. | none | Optional |
| `-format` | Specifies how to format the output of the operation in the console. | none | Optional |
| `-show-meta` | Prints the description and raft indices to the console in the response. <br/> This option does not take a value. Include the option when issuing the command to enable. | Disabled | Optional |
In the following example, a partition named `webdev` is created:
```shell-session
consul partition create -name "webdev" -description "Partition for admin of webdev services" -format json -show-meta
$ consul partition create -name "webdev" -description "Partition for admin of webdev services" -format json -show-meta
{
"Name": "webdev",
@ -109,19 +109,19 @@ Use the following syntax to write from `stdin`:
consul partition write <OPTIONS> -
```
The definition file or `stdin` values can be provided in JSON or HCL format. Refer to the [Admin Partition Definition](#partition-definition) section for details about the supported parameters.
The definition file or `stdin` values can be provided in JSON or HCL format. Refer to the [Admin Partition Definition](#partition-definition) section for details about the supported parameters.
You can specify the following options:
| Option | Description | Default | Required |
| --- | --- | --- | --- |
| `-format` | Specifies how to format the output of the operation in the console. | none | Optional |
| Option | Description | Default | Required |
| -------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | -------- |
| `-format` | Specifies how to format the output of the operation in the console. | none | Optional |
| `-show-meta` &nbsp; &nbsp; | Prints the description and raft indices to the console in the response. <br/> This option does not take a value. Include the option when issuing the command to enable. | Disabled | Optional |
In the following example, the `webdev-bu` partition is written using `stdin` values:
```shell-session
consul partition write -format json -show-meta - <<< 'name = "webdev-bu" description = "backup webdev partition"'
$ consul partition write -format json -show-meta - <<< 'name = "webdev-bu" description = "backup webdev partition"'
{
"Name": "webdev-bu",
@ -141,10 +141,10 @@ consul partition read <OPTIONS> <PARTITION_NAME>
You can specify the following options when reading a partition:
| Option | Description | Default | Required |
| --- | --- | --- | --- |
| `-format` &nbsp; &nbsp; | Specifies how to format the output of the operation in the console. | none | Optional |
| `-meta` | Prints the description and raft indices to the console in the response. <br/> This option does not take a value. Include the option when issuing the command to enable. | Disabled | Optional |
| Option | Description | Default | Required |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | -------- |
| `-format` &nbsp; &nbsp; | Specifies how to format the output of the operation in the console. | none | Optional |
| `-meta` | Prints the description and raft indices to the console in the response. <br/> This option does not take a value. Include the option when issuing the command to enable. | Disabled | Optional |
In the following example, the configuration for the `webdev` partition is read:
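
A minimal sketch of that invocation, based on the syntax and options listed above:

```shell-session
$ consul partition read -format json -meta webdev
```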
@ -168,15 +168,15 @@ consul partition list <OPTIONS>
You can specify the following options when listing partitions:
| Option | Description | Default | Required |
| --- | --- | --- | --- |
| `-format` | Specifies how to format the output of the operation in the console. | none | Optional |
| Option | Description | Default | Required |
| ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | -------- |
| `-format` | Specifies how to format the output of the operation in the console. | none | Optional |
| `-show-meta` | Prints the description and raft indices to the console in the response. <br/> This option does not take a value. Include the option when issuing the command to enable. | Disabled | Optional |
The following example lists the admin partitions and their meta data in JSON format:
```shell-session
consul partition list -format json -show-meta
$ consul partition list -format json -show-meta
[
{
@ -204,8 +204,9 @@ consul partition list -format json -show-meta
The `delete` subcommand sends a request to the server to remove the specified partition.
```shell-session
consul partition delete <PARTITION_NAME>
$ consul partition delete <PARTITION_NAME>
```
In the following example, the `webdev-bu` partition is deleted:
```shell-session
@ -218,10 +219,10 @@ Admin partitions are managed exclusively through the HTTP API and the Consul CLI
The following parameters are supported in admin partition definition files:
| Option | Description | Default | Required |
| --- | --- | --- | --- |
| `Name` | String value that specifies the name of the partition you are creating or writing. <br/> The value must be a valid DNS hostname. | none | Required |
| `Description` | String value that specifies a description for the partition you are creating or writing. <br/> The value should provide human-readable information to help other users understand the purpose of the partition. | none | Optional |
| Option | Description | Default | Required |
| ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | -------- |
| `Name` | String value that specifies the name of the partition you are creating or writing. <br/> The value must be a valid DNS hostname. | none | Required |
| `Description` | String value that specifies a description for the partition you are creating or writing. <br/> The value should provide human-readable information to help other users understand the purpose of the partition. | none | Optional |
### Example Definition File
@ -236,12 +237,12 @@ Description = "Partition for dev team"
You can include the following options to interact with the HTTP API when using the `partition` command.
| Option | Description | Default | Required |
| --- | --- | --- | --- |
| `-ca-file` | Specifies the path to a certificate authority (CA) file when TLS is enabled.<br/> You can also specify `CONSUL_CACERT` as the value if the environment variable is configured. | none | Required if TLS is enabled |
| `-ca-path` | Specifies the path to a directory of CA certificate files when TLS is enabled. <br/> You can also specify `CONSUL_CAPATH` as the value if the environment variable is configured. | none | Required if TLS is enabled |
| `-client-cert` | Specifies the path to a client certificate file when TLS and the `verify_incoming` option are enabled. <br/> You can also specify `CONSUL_CLIENT_CERT` as the value if the environment variable is configured. | none | Required if TLS and `verify_incoming` are enabled |
| `-client-key` | Specifies the path to a client key file when TLS and the `verify_incoming` option are enabled. <br/> You can also specify `CONSUL_CLIENT_KEY` as the value if the environment variable is configured. | none | Required if TLS and `verify_incoming` are enabled |
| `-datacenter` | Specifies the name of the datacenter to query. <br/> Non-default admin partitions are only supported in the primary datacenter. | Datacenter of the queried agent | Required if the agent is in a non-primary datacenter. |
| `-http-addr` | Specifies the address and port number of the Consul HTTP agent. <br/>IP and DNS addresses are supported. The address must also include the port. <br/>You can also specify `CONSUL_HTTP_ADDR` if the environment variable is configured. <br/>To use an HTTPS address, set the `CONSUL_HTTP_SSL` environment variable to `true`. | `http://127.0.0.1:8500` | Optional |
| `-stale` | Boolean value that enables any Consul server (non-leader) to respond to the request. <br/>This switch can lower latency and increase throughput, but may result in stale data. This option has no effect on non-read operations. | `false` | Optional |
| Option | Description | Default | Required |
| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------- | ----------------------------------------------------- |
| `-ca-file` | Specifies the path to a certificate authority (CA) file when TLS is enabled.<br/> You can also specify `CONSUL_CACERT` as the value if the environment variable is configured. | none | Required if TLS is enabled |
| `-ca-path` | Specifies the path to a directory of CA certificate files when TLS is enabled. <br/> You can also specify `CONSUL_CAPATH` as the value if the environment variable is configured. | none | Required if TLS is enabled |
| `-client-cert` | Specifies the path to a client certificate file when TLS and the `verify_incoming` option are enabled. <br/> You can also specify `CONSUL_CLIENT_CERT` as the value if the environment variable is configured. | none | Required if TLS and `verify_incoming` are enabled |
| `-client-key` | Specifies the path to a client key file when TLS and the `verify_incoming` option are enabled. <br/> You can also specify `CONSUL_CLIENT_KEY` as the value if the environment variable is configured. | none | Required if TLS and `verify_incoming` are enabled |
| `-datacenter` | Specifies the name of the datacenter to query. <br/> Non-default admin partitions are only supported in the primary datacenter. | Datacenter of the queried agent | Required if the agent is in a non-primary datacenter. |
| `-http-addr` | Specifies the address and port number of the Consul HTTP agent. <br/>IP and DNS addresses are supported. The address must also include the port. <br/>You can also specify `CONSUL_HTTP_ADDR` if the environment variable is configured. <br/>To use an HTTPS address, set the `CONSUL_HTTP_SSL` environment variable to `true`. | `http://127.0.0.1:8500` | Optional |
| `-stale` | Boolean value that enables any Consul server (non-leader) to respond to the request. <br/>This switch can lower latency and increase throughput, but may result in stale data. This option has no effect on non-read operations. | `false` | Optional |

View File

@ -45,8 +45,8 @@ connect {
The configuration options are listed below.
-> **Note**: The first key is the value used in API calls, and the second key
(after the `/`) is used if you are adding the configuration to the agent's
configuration file.
(after the `/`) is used if you are adding the configuration to the agent's
configuration file.
- `PrivateKey` / `private_key` (`string: ""`) - A PEM-encoded private key
for signing operations. This must match the private key used for the root
@ -103,7 +103,7 @@ In order to use the Update CA Configuration HTTP endpoint, the private key and c
must be passed via JSON:
```shell-session
$ jq -n --arg key "$(cat root.key)" --arg cert "$(cat root.crt)" '
$ jq --null-input --arg key "$(cat root.key)" --arg cert "$(cat root.crt)" '
{
"Provider": "consul",
"Config": {

View File

@ -44,7 +44,7 @@ the example above, the proxy is identifying as `operator-mitchellh`.
With the proxy running, we can now use `psql` like normal:
```shell-session
$ psql -h 127.0.0.1 -p 8181 -U mitchellh mydb
$ psql --host=127.0.0.1 --port=8181 --username=mitchellh mydb
>
```

View File

@ -314,8 +314,9 @@ In the following example, the `alt_domain` parameter is set to `test-domain`:
```
```shell-session
$ dig @127.0.0.1 -p 8600 consul.service.test-domain SRV
$ dig @127.0.0.1 -p 8600 consul.service.test-domain SRV
```
The following responses are returned:
```
@ -330,7 +331,7 @@ machine.node.dc1.test-domain. 0 IN A 127.0.0.1
machine.node.dc1.test-domain. 0 IN TXT "consul-network-segment="
```
-> **PTR queries:** Responses to PTR queries (`<ip>.in-addr.arpa.`) will always use the
-> **PTR queries:** Responses to PTR queries (`<ip>.in-addr.arpa.`) will always use the
[primary domain](/docs/agent/options#domain) (not the alternative domain),
as there is no way for the query to specify a domain.
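
For instance, a reverse lookup against the same agent answers under the primary domain even though the records above were served for `test-domain` (a sketch; the output assumes the `machine` node registration from the example above):

```shell-session
$ dig @127.0.0.1 -p 8600 -x 127.0.0.1 +short
machine.node.dc1.consul.
```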

View File

@ -22,7 +22,7 @@ Admin partitions exist a level above namespaces in the identity hierarchy. They
### Default Admin Partition
Each Consul cluster will have a default admin partition named `default`. The `default` partition must contain the Consul servers. The `default` admin partition is different from other partitions that may be created because the namespaces and resources in this partition are replicated between datacenters when they are federated.
Each Consul cluster will have a default admin partition named `default`. The `default` partition must contain the Consul servers. The `default` admin partition is different from other partitions that may be created because the namespaces and resources in this partition are replicated between datacenters when they are federated.
Any resource created without specifying an admin partition will inherit the partition of the ACL token used to create the resource.
@ -39,7 +39,7 @@ When an admin partition is created, it will include the `default` namespace. You
### Cross-datacenter Replication
Only resources in the `default` admin partition will be replicated to secondary datacenters (also see [Known Limitations](#known-limitations)).
Only resources in the `default` admin partition will be replicated to secondary datacenters (also see [Known Limitations](#known-limitations)).
### DNS Queries
@ -82,11 +82,11 @@ Your Consul configuration must meet the following requirements to use admin part
One of the primary use cases for admin partitions is for enabling a service mesh across multiple Kubernetes clusters. The following requirements must be met to create admin partitions on Kubernetes:
* Two or more Kubernetes clusters. Consul servers must be deployed to a single cluster. The other clusters should run Consul clients.
* Two or more Kubernetes clusters. Consul servers must be deployed to a single cluster. The other clusters should run Consul clients.
* A Consul Enterprise license must be installed on each Kubernetes cluster.
* The helm chart for consul-k8s v0.39.0 or greater.
* Consul 1.11.1-ent or greater.
* All Consul clients must be able to communicate with the Consul servers in the `default` partition, and all servers must be able to communicate with the clients.
* All Consul clients must be able to communicate with the Consul servers in the `default` partition, and all servers must be able to communicate with the clients.
## Usage
@ -96,16 +96,17 @@ This section describes how to deploy Consul admin partitions to Kubernetes clust
The expected use case is to create admin partitions on Kubernetes clusters. This is because many organizations prefer to use cloud-managed Kubernetes offerings to provision separate Kubernetes clusters for individual teams, business units, or environments. This is opposed to deploying a single, large Kubernetes cluster. Organizations encounter problems, however, when they attempt to use a service mesh to enable multi-cluster use cases, such as administration tasks and communication between nodes.
The following procedure will result in an admin partition in each Kubernetes cluster. The Consul clients running in the cluster with servers will be in the `default` partition. Another partition called `clients` will also be created.
The following procedure will result in an admin partition in each Kubernetes cluster. The Consul clients running in the cluster with servers will be in the `default` partition. Another partition called `clients` will also be created.
Verify that your Consul deployment meets the [Kubernetes Requirements](#kubernetes-requirements) before proceeding.
1. Verify that your VPC is configured to enable connectivity between the pods running Consul clients and servers. Refer to your virtual cloud provider's documentation for instructions on configuring network connectivity.
1. Verify that your VPC is configured to enable connectivity between the pods running Consul clients and servers. Refer to your virtual cloud provider's documentation for instructions on configuring network connectivity.
1. Create the license secret in each cluster, e.g.:
```shell-session
kubectl create secret generic license --from-file=key=[license file path i.e. ./license.hclic]
$ kubectl create secret generic license --from-file=key=[license file path i.e. ./license.hclic]
```
This step must also be completed for every cluster.
1. Create a server configuration values file to override the default Consul Helm chart settings:
@ -142,49 +143,49 @@ Verify that your Consul deployment meets the [Kubernetes Requirements](#kubernet
enableRedirection: true
```
</CodeBlockConfig>
</CodeTabs>
</CodeTabs>
Refer to the [Helm Chart Configuration reference](/docs/k8s/helm) for details about the parameters you can specify in the file.
Refer to the [Helm Chart Configuration reference](/docs/k8s/helm) for details about the parameters you can specify in the file.
1. Install the Consul server(s) using the values file created in the previous step:
```shell-session
helm install server hashicorp/consul -f server.yaml
```
1. After the server starts, get the external IP address for the partition service so that it can be added to the client configuration. The partition service is a `LoadBalancer` type. The IP address is used to bootstrap connectivity between servers and clients. <a name="get-external-ip-address"/>
```shell-session
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.8.0.1 <none> 443/TCP 77m
server-consul-connect-injector-svc ClusterIP 10.8.13.188 <none> 443/TCP 76s
server-consul-controller-webhook ClusterIP 10.8.14.178 <none> 443/TCP 77s
server-consul-dns ClusterIP 10.8.6.6 <none> 53/TCP,53/UDP 77s
server-consul-partition-service LoadBalancer 10.8.1.186 34.135.103.67 8501:31130/TCP,8301:31587/TCP,8300:30378/TCP 76s
server-consul-server ClusterIP None <none> 8501/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 76s
server-consul-ui ClusterIP 10.8.0.218 <none> 443/TCP 77s
$ helm install server hashicorp/consul --values server.yaml
```
1. After the server starts, get the external IP address for the partition service so that it can be added to the client configuration. The IP address is used to bootstrap connectivity between servers and clients. <a name="get-external-ip-address"/>
```shell-session
$ kubectl get services --selector="app=consul,component=server" --output jsonpath="{range .items[*]}{@.status.loadBalancer.ingress[*].ip}{end}"
34.135.103.67
```
1. Get the Kubernetes authentication method URL for the workload cluster:
```shell-session
kubectl config view -o "jsonpath={.clusters[?(@.name=='<workload-cluster-name>')].cluster.server}"
$ kubectl config view --output "jsonpath={.clusters[?(@.name=='<workload-cluster-name>')].cluster.server}"
```
Use the IP address printed to the console to configure the `k8sAuthMethodHost` parameter in the workload configuration file for your client nodes.
Use the IP address printed to the console to configure the `k8sAuthMethodHost` parameter in the workload configuration file for your client nodes.
1. Copy the server certificate to the workload cluster.
```shell-session
kubectl get secret server-consul-ca-cert --context <server-context> -o yaml | kubectl apply --context <client-context> -f -
$ kubectl get secret server-consul-ca-cert --context <server-context> --output yaml | kubectl apply --context <client-context> --filename -
```
1. Copy the server key to the workload cluster.
```shell-session
kubectl get secret server-consul-ca-key --context <server-context> -o yaml | kubectl apply --context <client-context> -f -
$ kubectl get secret server-consul-ca-key --context <server-context> --output yaml | kubectl apply --context <client-context> --filename -
```
1. If ACLs were enabled in the server configuration values file, copy the token to the workload cluster.
```shell-session
kubectl get secret server-consul-partitions-acl-token --context <server-context> -o yaml | kubectl apply --context <client-context> -f -
$ kubectl get secret server-consul-partitions-acl-token --context <server-context> --output yaml | kubectl apply --context <client-context> --filename -
```
1. Create the workload configuration for client nodes in your cluster. Create a configuration for each admin partition. In the following example, the external IP address and the Kubernetes authentication method IP address from the previous steps have been applied:
<CodeTabs heading="client.yaml">
@ -236,13 +237,14 @@ Verify that your Consul deployment meets the [Kubernetes Requirements](#kubernet
enabled: true
enableRedirection: true
```
</CodeBlockConfig>
</CodeTabs>
1. Install the workload client clusters:
```shell-session
helm install clients hashicorp/consul -f client.yaml
helm install clients hashicorp/consul --values client.yaml
```
### Verifying the Deployment
@ -257,13 +259,13 @@ You can log into the Consul UI to verify that the partitions appear as expected.
The example command gets the token using the secret name configured in the values file (`bootstrap.secretName`), decodes the secret, and prints the usable token to the console in JSON format.
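
One plausible shape of that command, assuming a bootstrap secret named `consul-bootstrap-acl-token` (the real name comes from `bootstrap.secretName` in your values file):

```shell-session
$ kubectl get secret consul-bootstrap-acl-token --output jsonpath="{.data.token}" | base64 --decode
```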
1. Open the Consul UI in a browser using the external IP address and port number described in a previous step (see [step 5](#get-external-ip-address)).
1. Open the Consul UI in a browser using the external IP address and port number described in a previous step (see [step 5](#get-external-ip-address)).
1. Click **Log in** and enter the decoded token when prompted.
You will see the `default` and `clients` partitions available in the **Admin Partition** drop-down menu.
![Partitions will appear in the Admin Partitions drop-down menu within the Consul UI.](/img/admin-partitions/consul-admin-partitions-verify-in-ui.png)
## Known Limitations
* Only the `default` admin partition is supported when federating multiple Consul datacenters in a WAN.
* Only the `default` admin partition is supported when federating multiple Consul datacenters in a WAN.

View File

@ -67,12 +67,12 @@ a copy of [`git`](https://www.git-scm.com/) in your `PATH`.
## Verifying the Installation
To verify Consul is properly installed, run `consul -v` on your system. You
To verify Consul is properly installed, run `consul version` on your system. You
should see version output. If you are executing it from the command line, make sure
it is on your PATH or you may get an error about Consul not being found.
```shell-session
$ consul -v
$ consul version
```
## Browser Compatibility Considerations

View File

@ -123,13 +123,13 @@ We will provide this secret and the Vault CA secret to the Consul server via the
Finally, [install](/docs/k8s/installation/install#installing-consul) the Helm chart using the above config file:
```shell-session
$ helm install consul -f config.yaml hashicorp/consul
$ helm install consul --values config.yaml hashicorp/consul
```
Verify that the CA provider is set correctly:
```shell-session
$ kubectl exec consul-server-0 -- curl -s http://localhost:8500/v1/connect/ca/configuration\?pretty
$ kubectl exec consul-server-0 -- curl --silent http://localhost:8500/v1/connect/ca/configuration\?pretty
{
"Provider": "vault",
"Config": {

View File

@ -204,7 +204,7 @@ Because transparent proxy is enabled by default,
we use Kubernetes DNS to connect to our desired upstream.
```shell-session
$ kubectl exec deploy/static-client -- curl -s http://static-server/
$ kubectl exec deploy/static-client -- curl --silent http://static-server/
"hello world"
```
@ -216,7 +216,7 @@ without updating either of the running pods. You can then remove this
intention to allow connections again.
```shell-session
$ kubectl exec deploy/static-client -- curl -s http://static-server/
$ kubectl exec deploy/static-client -- curl --silent http://static-server/
command terminated with exit code 52
```
@ -551,7 +551,7 @@ application.
To verify the installation, run the
["Accepting Inbound Connections"](/docs/k8s/connect#accepting-inbound-connections)
example from the "Usage" section above. After running this example, run
`kubectl get pod static-server -o yaml`. In the raw YAML output, you should
`kubectl get pod static-server --output yaml`. In the raw YAML output, you should
see injected Connect containers and an annotation
`consul.hashicorp.com/connect-inject-status` set to `injected`. This
confirms that injection is working properly.
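
A quick way to check just that annotation is a plain grep over the YAML output (a sketch; the second line shows the expected match):

```shell-session
$ kubectl get pod static-server --output yaml | grep connect-inject-status
    consul.hashicorp.com/connect-inject-status: injected
```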

View File

@ -38,7 +38,7 @@ these annotations to the *pods* of your *ingress controller*.
consul.hashicorp.com/connect-inject: "true"
# Add the container ports used by your ingress controller
consul.hashicorp.com/transparent-proxy-exclude-inbound-ports: "80,8000,9000,8443"
# And the CIDR of your Kubernetes API: `kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'
# And the CIDR of your Kubernetes API: `kubectl get svc kubernetes --output jsonpath='{.spec.clusterIP}'
consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs: "10.108.0.1/32"
```

View File

@ -95,7 +95,7 @@ used by the Helm chart when enabling ingress gateways.
Apply the `IngressGateway` resource with `kubectl apply`:
```shell-session
$ kubectl apply -f ingress-gateway.yaml
$ kubectl apply --filename ingress-gateway.yaml
ingressgateway.consul.hashicorp.com/ingress-gateway created
```
@ -118,7 +118,7 @@ spec:
Apply the `ServiceDefaults` resource with `kubectl apply`:
```shell-session
$ kubectl apply -f service-defaults.yaml
$ kubectl apply --filename service-defaults.yaml
servicedefaults.consul.hashicorp.com/static-server created
```
@ -174,7 +174,7 @@ spec:
Apply the `ServiceIntentions` resource with `kubectl apply`:
```shell-session
$ kubectl apply -f service-intentions.yaml
$ kubectl apply --filename service-intentions.yaml
serviceintentions.consul.hashicorp.com/ingress-gateway created
```
@ -236,7 +236,7 @@ spec:
</CodeBlockConfig>
```shell-session
$ kubectl apply -f static-server.yaml
$ kubectl apply --filename static-server.yaml
```
## Connecting to your application
@ -249,9 +249,9 @@ If TLS is enabled, use: [https://localhost:8501/ui/dc1/services/static-server/in
You can also validate the connectivity of the application from the ingress gateway using `curl`:
```shell-session
$ EXTERNAL_IP=$(kubectl get services | grep ingress-gateway | awk {print $4})
$ EXTERNAL_IP=$(kubectl get services --selector component=ingress-gateway --output jsonpath="{range .items[*]}{@.status.loadBalancer.ingress[*].ip}{end}")
$ echo "Connecting to \"$EXTERNAL_IP\""
$ curl -H "Host: static-server.ingress.consul" "http://$EXTERNAL_IP:8080"
$ curl --header "Host: static-server.ingress.consul" "http://$EXTERNAL_IP:8080"
"hello world"
```
@ -282,5 +282,5 @@ ingressGateways:
And run Helm upgrade:
```shell-session
$ helm upgrade consul hashicorp/consul -f config.yaml
$ helm upgrade consul hashicorp/consul --values config.yaml
```

View File

@ -127,14 +127,14 @@ Create a sample external service and register it with Consul.
Register the external service with Consul:
```shell-session
$ curl --request PUT --data @external.json -k $CONSUL_HTTP_ADDR/v1/catalog/register
$ curl --request PUT --data @external.json --insecure $CONSUL_HTTP_ADDR/v1/catalog/register
true
```
If ACLs and TLS are enabled:
```shell-session
$ curl --request PUT --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" --data @external.json -k $CONSUL_HTTP_ADDR/v1/catalog/register
$ curl --request PUT --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" --data @external.json --insecure $CONSUL_HTTP_ADDR/v1/catalog/register
true
```
@ -219,7 +219,7 @@ container (`/etc/ssl/cert.pem`).
Apply the `TerminatingGateway` resource with `kubectl apply`:
```shell-session
$ kubectl apply -f terminating-gateway.yaml
$ kubectl apply --filename terminating-gateway.yaml
```
If using ACLs and TLS, create a [`ServiceIntentions`](/docs/connect/config-entries/service-intentions) resource to allow access from services in the mesh to the external service
@ -244,7 +244,7 @@ spec:
Apply the `ServiceIntentions` resource with `kubectl apply`:
```shell-session
$ kubectl apply -f service-intentions.yaml
$ kubectl apply --filename service-intentions.yaml
```
### Define the external services as upstreams for services in the mesh
@ -301,7 +301,7 @@ spec:
Run the service via `kubectl apply`:
```shell-session
$ kubectl apply -f static-client.yaml
$ kubectl apply --filename static-client.yaml
```
Wait for the service to be ready:
@ -314,5 +314,5 @@ deployment "static-client" successfully rolled out
You can verify connectivity of the static-client and terminating gateway via a curl command:
```shell-session
$ kubectl exec deploy/static-client -- curl -vvvs -H "Host: example-https.com" http://localhost:1234/
$ kubectl exec deploy/static-client -- curl -vvvs --header "Host: example-https.com" http://localhost:1234/
```

View File

@ -93,7 +93,7 @@ Once installed, you can use `kubectl` to create and manage Consul's configuratio
You can create configuration entries via `kubectl apply`.
```shell-session
$ cat <<EOF | kubectl apply -f -
$ cat <<EOF | kubectl apply --filename -
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:

View File

@ -30,7 +30,7 @@ The default name of the Consul DNS service will be `consul-consul-dns`. Use
that name to get the `ClusterIP`:
```shell-session
$ kubectl get svc consul-consul-dns -o jsonpath='{.spec.clusterIP}'
$ kubectl get svc consul-consul-dns --output jsonpath='{.spec.clusterIP}'
10.35.240.78%
```
@ -53,7 +53,7 @@ export CONSUL_DNS_IP=10.35.240.78
And create the `ConfigMap`:
```shell-session
$ cat <<EOF | kubectl apply -f -
$ cat <<EOF | kubectl apply --filename -
apiVersion: v1
kind: ConfigMap
metadata:
@ -72,7 +72,7 @@ configmap/kube-dns configured
Ensure that the `ConfigMap` was created successfully:
```shell-session
$ kubectl get configmap kube-dns -n kube-system -o yaml
$ kubectl get configmap kube-dns --namespace kube-system --output yaml
apiVersion: v1
data:
stubDomains: |
@ -102,7 +102,7 @@ Consul DNS service.
Edit the `ConfigMap`:
```shell-session
$ kubectl edit configmap coredns -n kube-system
$ kubectl edit configmap coredns --namespace kube-system
```
And add the `consul` block below the default `.:53` block and replace
@ -162,7 +162,7 @@ spec:
</CodeBlockConfig>
```shell-session
$ kubectl apply -f job.yaml
$ kubectl apply --filename job.yaml
```
Then query the pod name for the job and check the logs. You should see

View File

@ -762,9 +762,9 @@ Use these links to navigate to a particular top-level stanza.
You could retrieve this value from your `kubeconfig` by running:
```shell
kubectl config view \
-o jsonpath="{.clusters[?(@.name=='<your cluster name>')].cluster.server}"
```shell-session
$ kubectl config view \
--output jsonpath="{.clusters[?(@.name=='<your cluster name>')].cluster.server}"
```
### client

View File

@ -66,7 +66,7 @@ server:
Now run `helm install`:
```shell-session
$ helm install --wait hashicorp hashicorp/consul -f config.yaml
$ helm install --wait hashicorp hashicorp/consul --values config.yaml
```
Once the cluster is up, you can verify the nodes are running Consul Enterprise by

View File

@ -61,7 +61,7 @@ $ kubectl create secret generic consul-gossip-encryption-key --from-literal=key=
Now we can install our Consul cluster with Helm:
```shell
$ helm install cluster1 -f cluster1-config.yaml hashicorp/consul
$ helm install cluster1 --values cluster1-config.yaml hashicorp/consul
```
~> **Note:** The Helm release name must be unique for each Kubernetes cluster.
@ -74,8 +74,8 @@ we need to extract the gossip encryption key we've created, the CA certificate
and the ACL bootstrap token generated during installation,
so that we can apply them to our second Kubernetes cluster.
```shell
kubectl get secret consul-gossip-encryption-key cluster1-consul-ca-cert cluster1-consul-bootstrap-acl-token -o yaml > cluster1-credentials.yaml
```shell-session
$ kubectl get secret consul-gossip-encryption-key cluster1-consul-ca-cert cluster1-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml
```
## Deploying Consul clients in the second cluster
@ -86,8 +86,8 @@ that will join the first Consul cluster.
First, we need to apply credentials we've extracted from the first cluster to the second cluster:
```shell
$ kubectl apply -f cluster1-credentials.yaml
```shell-session
$ kubectl apply --filename cluster1-credentials.yaml
```
To deploy in the second cluster, we will use the following Helm configuration:
@ -140,7 +140,7 @@ Next, we need to set up the `externalServers` configuration.
The `externalServers.hosts` and `externalServers.httpsPort`
refer to the IP and port of the UI's NodePort service deployed in the first cluster.
Set the `externalServers.hosts` to any Node IP of the first cluster,
which you can see by running `kubectl get nodes -o wide`.
which you can see by running `kubectl get nodes --output wide`.
Set `externalServers.httpsPort` to the `nodePort` of the `cluster1-consul-ui` service.
In our example, the port is `31557`.
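
A sketch of pulling both values with `kubectl`, assuming the service name shown above (the port index `0` is an assumption; pick the HTTPS port if the service exposes several):

```shell-session
$ kubectl get nodes --output wide
$ kubectl get service cluster1-consul-ui --output jsonpath="{.spec.ports[0].nodePort}"
```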
@ -182,8 +182,8 @@ for more details.
Now we're ready to install!
```shell
helm install cluster2 -f cluster2-config.yaml hashicorp/consul
```shell-session
$ helm install cluster2 --values cluster2-config.yaml hashicorp/consul
```
## Verifying the Consul Service Mesh works
@ -311,7 +311,7 @@ spec:
Once both services are up and running, we can connect to the `static-server` from `static-client`:
```shell
$ kubectl exec deploy/static-client -- curl -s localhost:1234
```shell-session
$ kubectl exec deploy/static-client -- curl --silent localhost:1234
"hello world"
```

View File

@ -1,6 +1,6 @@
---
layout: docs
page_title: Installing Consul on Kubernetes
page_title: Installing Consul on Kubernetes
description: >-
Consul can run directly on Kubernetes, both in server or client mode. For
pure-Kubernetes workloads, this enables Consul to also exist purely within
@ -27,22 +27,22 @@ mesh](https://learn.hashicorp.com/tutorials/consul/service-mesh-deploy?utm_sourc
## Consul K8s CLI Installation
We recommend using the [Consul K8S CLI](/docs/k8s/k8s-cli) to install Consul on Kubernetes for single-cluster deployments. You can install Consul on Kubernetes using the Consul K8s CLI tool after installing the CLI.
We recommend using the [Consul K8S CLI](/docs/k8s/k8s-cli) to install Consul on Kubernetes for single-cluster deployments. You can install Consul on Kubernetes using the Consul K8s CLI tool after installing the CLI.
Before beginning the installation process, verify that `kubectl` is already configured to authenticate to the Kubernetes cluster using a valid `kubeconfig` file.
Before beginning the installation process, verify that `kubectl` is already configured to authenticate to the Kubernetes cluster using a valid `kubeconfig` file.
The [Homebrew](https://brew.sh) package manager is required to complete the following installation instructions.
1. Install the HashiCorp `tap`, which is a repository of all Homebrew packages for HashiCorp:
```shell-session
brew tap hashicorp/tap
$ brew tap hashicorp/tap
```
1. Install the Consul K8s CLI with the `hashicorp/tap/consul` formula.
```shell-session
brew install hashicorp/tap/consul-k8s
$ brew install hashicorp/tap/consul-k8s
```
1. Issue the `install` subcommand to install Consul on Kubernetes:
```shell-session
@ -123,7 +123,7 @@ The Consul Helm chart only supports Helm 3.2+. Install the latest version of the Helm
```
1. Prior to installing via Helm, ensure that the `consul` Kubernetes namespace does not exist, as installing on a dedicated namespace
is recommended.
is recommended.
```shell-session
$ kubectl get namespace
@ -134,11 +134,11 @@ The Consul Helm only supports Helm 3.2+. Install the latest version of the Helm
kube-system Active 18h
```
1. Issue the following command to install Consul with the default configuration using Helm. You could also install Consul on a dedicated
1. Issue the following command to install Consul with the default configuration using Helm. You could also install Consul on a dedicated
namespace of your choosing by modifying the value of the `--namespace` flag for the Helm install.
```shell-session
$ helm install consul hashicorp/consul --set global.name=consul --create-namespace -n consul
$ helm install consul hashicorp/consul --set global.name=consul --create-namespace --namespace consul
NAME: consul
...
```
@ -169,10 +169,10 @@ controller:
</CodeBlockConfig>
Once you've created your `config.yaml` file, run `helm install` with the `-f` flag:
Once you've created your `config.yaml` file, run `helm install` with the `--values` flag:
```shell-session
$ helm install consul hashicorp/consul --create-namespace -n consul -f config.yaml
$ helm install consul hashicorp/consul --create-namespace --namespace consul --values config.yaml
NAME: consul
...
```
@ -191,7 +191,7 @@ use `kubectl port-forward` to visit the UI.
If running with TLS disabled, the Consul UI will be accessible via http on port 8500:
```shell-session
$ kubectl port-forward service/consul-server -n consul 8500:8500
$ kubectl port-forward service/consul-server --namespace consul 8500:8500
...
```
@ -202,7 +202,7 @@ Once the port is forwarded navigate to [http://localhost:8500](http://localhost:
If running with TLS enabled, the Consul UI will be accessible via https on port 8501:
```shell-session
$ kubectl port-forward service/consul-server -n consul 8501:8501
$ kubectl port-forward service/consul-server --namespace consul 8501:8501
...
```

View File

@ -224,7 +224,7 @@ After the installation into your primary cluster you will need to export
this secret:
```shell-session
$ kubectl get secret consul-federation -o yaml > consul-federation-secret.yaml
$ kubectl get secret consul-federation --output yaml > consul-federation-secret.yaml
```
!> **Security note:** The federation secret makes it possible to gain
@ -249,7 +249,7 @@ Switched to context "dc2".
And import the secret:
```shell-session
$ kubectl apply -f consul-federation-secret.yaml
$ kubectl apply --filename consul-federation-secret.yaml
secret/consul-federation configured
```
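With the secret in place, the secondary cluster's Helm values can reference it. A minimal sketch, assuming the standard keys stored in the `consul-federation` secret:

```yaml
global:
  tls:
    enabled: true
    caCert:
      secretName: consul-federation
      secretKey: caCert
    caKey:
      secretName: consul-federation
      secretKey: caKey
  federation:
    enabled: true
  gossipEncryption:
    secretName: consul-federation
    secretKey: gossipEncryptionKey
```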

View File

@ -99,7 +99,7 @@ Retrieve the WAN addresses of the mesh gateways:
```shell-session
$ kubectl exec statefulset/consul-server -- sh -c \
'curl -sk https://localhost:8501/v1/catalog/service/mesh-gateway | jq ".[].ServiceTaggedAddresses.wan"'
'curl --silent --insecure https://localhost:8501/v1/catalog/service/mesh-gateway | jq ".[].ServiceTaggedAddresses.wan"'
{
"Address": "1.2.3.4",
"Port": 443
@ -125,7 +125,7 @@ primary_gateways = ["1.2.3.4:443"]
If ACLs are enabled, you'll also need the replication ACL token:
```shell-session
$ kubectl get secrets/consul-acl-replication-acl-token --template='{{.data.token}}'
$ kubectl get secrets/consul-acl-replication-acl-token --template='{{.data.token | base64decode}}'
e7924dd1-dc3f-f644-da54-81a73ba0a178
```

View File

@ -29,10 +29,10 @@ vault write auth/kubernetes/role/consul-server \
```
To find out the service account name of the Consul server,
you can run:
```shell-session
helm template --release-name <your release name> -s templates/server-serviceaccount.yaml hashicorp/consul
$ helm template --release-name <your release name> --show-only templates/server-serviceaccount.yaml hashicorp/consul
```
Now we can configure the Consul Helm chart to use Vault as the Connect CA provider:
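A minimal sketch of those Helm values; the Vault address and PKI paths shown here are placeholders for your own:

```yaml
global:
  secretsBackend:
    vault:
      enabled: true
      consulServerRole: consul-server
      connectCA:
        # Placeholder address; use your Vault cluster's address.
        address: https://vault.vault.svc.cluster.local:8200
        rootPKIPath: connect_root
        intermediatePKIPath: dc1/connect_inter
```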
@ -58,7 +58,7 @@ address if the Vault cluster is running the same Kubernetes cluster.
The `rootPKIPath` and `intermediatePKIPath` should be the same as the ones
defined in your Connect CA policy. Behind the scenes, Consul will authenticate to Vault with a Kubernetes
service account via the [Kubernetes auth method](https://www.vaultproject.io/docs/auth/kubernetes) and will use the Vault token for any API calls to Vault. If the Vault token cannot be renewed, Consul will re-authenticate to
generate a new Vault token.
The `vaultCASecret` is the Kubernetes secret that stores the CA Certificate that is used for Vault communication. To provide a CA, you first need to create a Kubernetes secret containing the CA. For example, you may create a secret with the Vault CA like so:
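A sketch of that step, where the secret name `vault-ca` and the certificate path are placeholders:

```shell-session
$ kubectl create secret generic vault-ca --from-file=vault.ca=/path/to/vault-ca.pem
```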

View File

@ -14,26 +14,32 @@ To use Vault to issue Server TLS certificates the following will be needed:
1. Create Vault policies that allow the Consul components (e.g. ingress gateways, the controller) to access the CA URL.
1. Create Kubernetes auth roles that link these policies to the Kubernetes service accounts of the Consul components.
### Bootstrapping the PKI Engine
First, we need to bootstrap the Vault cluster by enabling and configuring the PKI Secrets Engine to serve
TLS certificates to Consul. The process can be as simple as the following, or more involved, such as in this [example](https://learn.hashicorp.com/tutorials/consul/vault-pki-consul-secure-tls),
which also uses an intermediate signing authority.
* Enable the PKI Secrets Engine:
```shell-session
vault secrets enable pki
$ vault secrets enable pki
```
* Tune the engine to enable longer TTL:
```shell-session
vault secrets tune -max-lease-ttl=87600h pki
$ vault secrets tune -max-lease-ttl=87600h pki
```
* Generate the root CA
```shell-session
vault write -field=certificate pki/root/generate/internal \
$ vault write -field=certificate pki/root/generate/internal \
common_name="dc1.consul" \
ttl=87600h
```
-> **Note:** `common_name` is formed by joining `global.datacenter` and `global.domain` with a dot.
### Create Vault Policies for the Server TLS Certificates
@ -50,8 +56,9 @@ path "pki/issue/consul-server" {
```
```shell-session
vault policy write consul-server consul-server-policy.hcl
$ vault policy write consul-server consul-server-policy.hcl
```
-> **Note:** The PKI secret path referenced by the above Policy will be your `server.serverCert.secretName` Helm value.
### Create Vault Policies for the CA URL
@ -67,8 +74,9 @@ path "pki/cert/ca" {
```
```shell-session
vault policy write ca-policy ca-policy.hcl
$ vault policy write ca-policy ca-policy.hcl
```
-> **Note:** The PKI secret path referenced by the above Policy will be your `global.tls.caCert.secretName` Helm value.
### Create Vault Roles for the PKI engine, Consul servers and components
@ -115,14 +123,16 @@ $ vault write auth/kubernetes/role/consul-server \
To find out the service account name of the Consul server,
you can run:
```shell-session
helm template --release-name <your release name> -s templates/server-serviceaccount.yaml hashicorp/consul
$ helm template --release-name <your release name> --show-only templates/server-serviceaccount.yaml hashicorp/consul
```
-> **Note:** If you enable other supported features, such as gossip encryption, be sure to append the additional policies to
the Kubernetes auth role as comma-separated values, e.g. `policies=consul-server,consul-gossip`
Role for Consul clients:
```shell-session
$ vault write auth/kubernetes/role/consul-client \
bound_service_account_names=<Consul client service account> \
@ -133,7 +143,7 @@ $ vault write auth/kubernetes/role/consul-client \
To find out the service account name of the Consul client, use the command below.
```shell-session
$ helm template --release-name <your release name> -s templates/client-serviceaccount.yaml hashicorp/consul
$ helm template --release-name <your release name> --show-only templates/client-serviceaccount.yaml hashicorp/consul
```
-> **Note:** Should you enable other supported features such as gossip-encryption, ensure you append additional policies to
@ -151,7 +161,6 @@ $ vault write auth/kubernetes/role/consul-ca \
The above Vault Roles will now be your Helm values for `global.secretsBackend.vault.consulServerRole` and
`global.secretsBackend.vault.consulCARole` respectively.
## Deploying the Consul Helm chart
Now that we've configured Vault, you can configure the Consul Helm chart to
@ -174,9 +183,9 @@ server:
serverCert:
secretName: "pki/issue/consul-server"
extraVolumes:
- type: "secret"
name: <vaultCASecret>
load: "false"
- type: "secret"
name: <vaultCASecret>
load: "false"
```
The `vaultCASecret` is the Kubernetes secret that stores the CA Certificate that is used for Vault communication. To provide a CA, you first need to create a Kubernetes secret containing the CA. For example, you may create a secret with the Vault CA like so:

View File

@ -19,7 +19,7 @@ To explicitly perform server certificate rotation, follow these steps:
1. Perform a `helm upgrade`:
```shell-session
helm upgrade consul hashicorp/consul -f /path/to/my/values.yaml
$ helm upgrade consul hashicorp/consul --values /path/to/my/values.yaml
```
This should run the `tls-init` job that will generate new Server certificates.

View File

@ -8,21 +8,21 @@ description: Rotate the Gossip Encryption Key on Kubernetes Cluster safely
The following instructions provide a step-by-step manual process for rotating [gossip encryption](/docs/security/encryption#gossip-encryption) keys on Consul clusters that are deployed onto a Kubernetes cluster with Consul on Kubernetes.
The following steps should only be performed in the *primary datacenter* if your Consul clusters are [federated](https://www.consul.io/docs/k8s/installation/multi-cluster/kubernetes). Rotating the gossip encryption in the primary datacenter will automatically rotate the gossip encryption in the secondary datacenters.
-> **Note:** Careful precaution should be taken to prohibit new clients from joining during the gossip encryption rotation process, otherwise the new clients will join the gossip pool without knowledge of the new primary gossip encryption key. In addition, deletion of a gossip encryption key from the keyring should occur only after clients have safely migrated to utilizing the new gossip encryption key for communication.
1. (Optional) If Consul is installed in a dedicated namespace, set the kubeConfig context to the `consul` namespace. Otherwise, subsequent commands will need to include `--namespace consul`.
```shell-session
$ kubectl config set-context --current --namespace=consul
```
1. Generate a new key and store it in a safe place for retrieval in the future ([Vault KV Secrets Engine](https://www.vaultproject.io/docs/secrets/kv/kv-v2#usage) is a recommended option).
```shell-session
consul keygen
$ consul keygen
```
This should generate a new key which can be used as the gossip encryption key. In this example, we will be using
`Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w=` as the replacement gossip encryption key.
@ -37,9 +37,9 @@ The following steps should only be performed in the *primary datacenter* if your
1. **Note:** If ACLs are enabled, export the bootstrap token as the `CONSUL_HTTP_TOKEN` environment variable to perform all `consul keyring` operations. The bootstrap token can be found in the Kubernetes secret `consul-bootstrap-acl-token` of the primary datacenter.
```shell-session
export CONSUL_HTTP_TOKEN=<INSERT BOOTSTRAP ACL TOKEN>
$ export CONSUL_HTTP_TOKEN=<INSERT BOOTSTRAP ACL TOKEN>
```
1. Install the new Gossip encryption key with the `consul keyring` command:
```shell-session
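# A sketch using the replacement key generated earlier in these steps:
$ consul keyring -install="Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w="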
@ -107,7 +107,7 @@ The following steps should only be performed in the *primary datacenter* if your
```
```shell-session
kubectl patch secret consul-gossip-encryption-key -p '{"stringData":{"key": "Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w="}}'
$ kubectl patch secret consul-gossip-encryption-key --patch='{"stringData":{"key": "Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w="}}'
```
**Note:** In the case of federated Consul clusters, update the federation-secret value for the gossip encryption key. The name of the secret and key can be found in the values file of the secondary datacenter.
@ -120,7 +120,7 @@ The following steps should only be performed in the *primary datacenter* if your
```
```shell-session
kubectl patch secret consul-federation -p '{"stringData":{"gossipEncryptionKey": "Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w="}}'
$ kubectl patch secret consul-federation --patch='{"stringData":{"gossipEncryptionKey": "Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w="}}'
```
1. Remove the old key once the new one has been installed successfully.

View File

@ -16,7 +16,6 @@ Issue the `consul-k8s uninstall` command to remove Consul on Kubernetes. You can
$ consul-k8s uninstall <OPTIONS>
```
In the following example, Consul will be uninstalled and the data removed without prompting you to verify the operations:
```shell-session
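# A sketch, assuming the consul-k8s flags -auto-approve (skip the
# confirmation prompt) and -wipe-data (also remove persisted data):
$ consul-k8s uninstall -auto-approve -wipe-data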
@ -29,18 +28,18 @@ Refer to the [Consul K8s CLI reference](/docs/k8s/k8s-cli#uninstall) topic for d
Run the `helm uninstall` command **and** manually remove resources that Helm does not delete.
1. Although the Helm chart automates the deletion of CRDs upon uninstallation, sometimes the finalizers tied to those CRDs may not complete because the deletion of the CRDs relies on the Consul K8s controller running. Ensure that previously created CRDs for Consul on Kubernetes are deleted, so subsequent installs of Consul on Kubernetes on the same Kubernetes cluster do not get blocked.
```shell-session
$ kubectl delete crd --selector app=consul
```
1. (Optional) If Consul is installed in a dedicated namespace, set the kubeConfig context to the `consul` namespace. Otherwise, subsequent commands will need to include `-n consul`.
1. (Optional) If Consul is installed in a dedicated namespace, set the kubeConfig context to the `consul` namespace. Otherwise, subsequent commands will need to include `--namespace consul`.
```shell-session
$ kubectl config set-context --current --namespace=consul
```
1. Run the `helm uninstall <release-name>` command and specify the release name you've installed Consul with, e.g.,:
```shell-session
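# Assuming the release is named "consul":
$ helm uninstall consul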
@ -52,13 +51,13 @@ Run the `helm uninstall` **and** manually remove resources that Helm does not de
for the persistent volumes that store Consul's data. A [bug](https://github.com/helm/helm/issues/5156) in Helm prevents PVCs from being deleted. Issue the following commands:
```shell-session
$ kubectl get pvc -l chart=consul-helm
$ kubectl get pvc --selector="chart=consul-helm"
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-default-hashicorp-consul-server-0 Bound pvc-32cb296b-1213-11ea-b6f0-42010a8001db 10Gi RWO standard 17m
data-default-hashicorp-consul-server-1 Bound pvc-32d79919-1213-11ea-b6f0-42010a8001db 10Gi RWO standard 17m
data-default-hashicorp-consul-server-2 Bound pvc-331581ea-1213-11ea-b6f0-42010a8001db 10Gi RWO standard 17m
$ kubectl delete pvc -l chart=consul-helm
$ kubectl delete pvc --selector="chart=consul-helm"
persistentvolumeclaim "data-default-hashicorp-consul-server-0" deleted
persistentvolumeclaim "data-default-hashicorp-consul-server-1" deleted
persistentvolumeclaim "data-default-hashicorp-consul-server-2" deleted
@ -70,7 +69,7 @@ Run the `helm uninstall` **and** manually remove resources that Helm does not de
1. If you installed with ACLs enabled, you will then need to delete the ACL secrets:
```shell-session
$ kubectl get secret | grep consul | grep Opaque
$ kubectl get secrets --field-selector="type=Opaque" | grep consul
consul-acl-replication-acl-token Opaque 1 41m
consul-bootstrap-acl-token Opaque 1 41m
consul-client-acl-token Opaque 1 41m
@ -84,7 +83,7 @@ Run the `helm uninstall` **and** manually remove resources that Helm does not de
created by another user with the word `consul`.
```shell-session
$ kubectl get secret | grep consul | grep Opaque | awk '{print $1}' | xargs kubectl delete secret
$ kubectl get secrets --field-selector="type=Opaque" | grep consul | awk '{print $1}' | xargs kubectl delete secret
secret "consul-acl-replication-acl-token" deleted
secret "consul-bootstrap-acl-token" deleted
secret "consul-client-acl-token" deleted
@ -107,8 +106,8 @@ Run the `helm uninstall` **and** manually remove resources that Helm does not de
$ kubectl delete serviceaccount consul-tls-init
serviceaccount "consul-tls-init" deleted
```
1. (Optional) Delete the namespace (i.e. `consul` in the following example) that you have dedicated for installing Consul on Kubernetes.
```shell-session
$ kubectl delete ns consul

View File

@ -40,7 +40,7 @@ Perform the following steps:
1. Determine your current installed chart version.
```bash
helm list -f consul
helm list --filter consul
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
consul default 2 2020-09-30 ... deployed consul-0.24.0 1.8.2
```
@ -49,8 +49,8 @@ In this example, version `0.24.0` (from `consul-0.24.0`) is being used.
1. Perform a `helm upgrade`:
```bash
helm upgrade consul hashicorp/consul --version 0.24.0 -f /path/to/my/values.yaml
```shell-session
$ helm upgrade consul hashicorp/consul --version 0.24.0 --values /path/to/my/values.yaml
```
**Before performing the upgrade, be sure you've read the other sections on this page,
@ -74,8 +74,8 @@ helm repo update
1. List all available versions:
```bash
helm search repo hashicorp/consul -l
```shell-session hideClipboard
$ helm search repo hashicorp/consul --versions
NAME CHART VERSION APP VERSION DESCRIPTION
hashicorp/consul 0.24.1 1.8.2 Official HashiCorp Consul Chart
hashicorp/consul 0.24.0 1.8.1 Official HashiCorp Consul Chart
@ -86,8 +86,8 @@ Here we can see that the latest version is `0.24.1`.
1. To determine which version you have installed, issue the following command:
```bash
helm list -f consul
```shell-session
$ helm list --filter consul
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
consul default 2 2020-09-30 ... deployed consul-0.24.0 1.8.2
```
@ -99,8 +99,8 @@ If you want to upgrade to the latest `0.24.1` version, use the following procedu
1. Upgrade by performing a `helm upgrade` with the `--version` flag:
```bash
helm upgrade consul hashicorp/consul --version 0.24.1 -f /path/to/my/values.yaml
```shell-session
$ helm upgrade consul hashicorp/consul --version 0.24.1 --values /path/to/my/values.yaml
```
**Before performing the upgrade, be sure you've read the other sections on this page,
@ -129,8 +129,8 @@ to update to the new version.
1. Determine your current installed chart version:
```bash
helm list -f consul
```shell-session
$ helm list --filter consul
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
consul default 2 2020-09-30 ... deployed consul-0.24.0 1.8.2
```
@ -139,8 +139,8 @@ In this example, version `0.24.0` (from `consul-0.24.0`) is being used.
1. Perform a `helm upgrade`:
```bash
helm upgrade consul hashicorp/consul --version 0.24.0 -f /path/to/my/values.yaml`.
```shell-session
$ helm upgrade consul hashicorp/consul --version 0.24.0 --values /path/to/my/values.yaml
```
**Before performing the upgrade, be sure you've read the other sections on this page,
@ -154,7 +154,7 @@ unintended upgrade.
Before upgrading, it's important to understand what changes will be made to your
cluster. For example, you will need to take more care if your upgrade will result
in the Consul server statefulset being redeployed.
in the Consul server StatefulSet being redeployed.
There is no built-in functionality in Helm that shows what a helm upgrade will
change. There is, however, a Helm plugin [helm-diff](https://github.com/databus23/helm-diff)
@ -169,16 +169,16 @@ helm plugin install https://github.com/databus23/helm-diff
1. If you are updating your `values.yaml` file, do so now.
1. Take the same `helm upgrade` command you were planning to issue but perform `helm diff upgrade` instead of `helm upgrade`:
```bash
helm diff upgrade consul hashicorp/consul --version 0.24.1 -f /path/to/your/values.yaml
```shell-session
$ helm diff upgrade consul hashicorp/consul --version 0.24.1 --values /path/to/your/values.yaml
```
This will print out the manifests that will be updated and their diffs.
1. To see only the objects that will be updated, add `| grep "has changed"`:
```bash
helm diff upgrade consul hashicorp/consul --version 0.24.1 -f /path/to/your/values.yaml |
```shell-session
$ helm diff upgrade consul hashicorp/consul --version 0.24.1 --values /path/to/your/values.yaml |
grep "has changed"
```
@ -272,8 +272,8 @@ servers to be upgraded _before_ the clients.
1. Next, perform the upgrade:
```bash
helm upgrade consul hashicorp/consul --version <your-version> -f /path/to/your/values.yaml
```shell-session
$ helm upgrade consul hashicorp/consul --version <your-version> --values /path/to/your/values.yaml
```
This will not cause the servers to redeploy (although the resource will be updated). If

View File

@ -167,8 +167,8 @@ You can get the AccessorID of every legacy token from the API. For example,
using `curl` and `jq` in bash:
```shell-session
$ LEGACY_IDS=$(curl -sH "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
'localhost:8500/v1/acl/tokens' | jq -r '.[] | select (.Legacy) | .AccessorID')
$ LEGACY_IDS=$(curl --silent --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
'localhost:8500/v1/acl/tokens' | jq --raw-output '.[] | select (.Legacy) | .AccessorID')
$ echo "$LEGACY_IDS"
621cbd12-dde7-de06-9be0-e28d067b5b7f
65cecc86-eb5b-ced5-92dc-f861cf7636fe
@ -230,8 +230,8 @@ You can get the AccessorID of every legacy token from the API. For example,
using `curl` and `jq` in bash:
```shell-session
$ LEGACY_IDS=$(curl -sH "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
'localhost:8500/v1/acl/tokens' | jq -r '.[] | select (.Legacy) | .AccessorID')
$ LEGACY_IDS=$(curl --silent --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
'localhost:8500/v1/acl/tokens' | jq --raw-output '.[] | select (.Legacy) | .AccessorID')
$ echo "$LEGACY_IDS"
8b65fdf9-303e-0894-9f87-e71b3273600c
d9deb39b-1b30-e100-b9c5-04aba3f593a1
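
One way to act on these IDs is to upgrade each legacy token in a loop. A sketch, assuming the `-upgrade-legacy` flag of `consul acl token update` and `migrated-rules` as a placeholder policy name:

```shell-session
$ for id in $LEGACY_IDS; do
    consul acl token update -id "$id" -policy-name migrated-rules -upgrade-legacy
  done
```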

View File

@ -203,7 +203,7 @@ be tricky to debug why things aren't working. Some tips for setting up OIDC:
request to obtain a JWT that you can inspect. An example of how to decode the
JWT (in this case located in the `access_token` field of a JSON response):
cat jwt.json | jq -r .access_token | cut -d. -f2 | base64 --decode
cat jwt.json | jq --raw-output .access_token | cut -d. -f2 | base64 --decode
- The [`VerboseOIDCLogging`](#verboseoidclogging) option is available which
will log the received OIDC token if debug level logging is enabled. This can

View File

@ -156,7 +156,7 @@ If the pods are unable to connect to a Consul client running on the same host,
first check if the Consul clients are up and running with `kubectl get pods`.
```shell-session
$ kubectl get pods -l "component=client"
$ kubectl get pods --selector="component=client"
NAME READY STATUS RESTARTS AGE
consul-kzws6 1/1 Running 0 58s
```

View File

@ -48,8 +48,8 @@ Looking through these changes prior to upgrading is highly recommended.
**1.** Check replication status in DC1 by issuing the following curl command from a
consul server in that DC:
```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```shell-session
$ curl --silent --header "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```
You should receive output similar to this:
@ -71,8 +71,8 @@ disabled, so this is normal even if replication is happening.
**2.** Check replication status in DC2 by issuing the following curl command from a
consul server in that DC:
```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```shell-session
$ curl --silent --header "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```
You should receive output similar to this:
@ -94,8 +94,8 @@ This should be done one DC at a time, leaving the primary DC for last.
**4.** Confirm that replication is still working in DC2 by issuing the following curl command from a
consul server in that DC:
```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```shell-session
$ curl --silent --header "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```
You should receive output similar to this:

View File

@ -58,8 +58,8 @@ Two very notable items are:
**1.** Check the replication status of the primary datacenter (DC1) by issuing the following curl command from a
consul server in that DC:
```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```shell-session
$ curl --silent --header "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```
You should receive output similar to this:
@ -81,8 +81,8 @@ disabled, so this is normal even if replication is happening.
**2.** Check replication status in DC2 by issuing the following curl command from a
consul server in that DC:
```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```shell-session
$ curl --silent --header "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```
You should receive output similar to this:
@ -115,9 +115,9 @@ continue working.
From a Consul server in DC2:
```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/list?pretty
```shell-session
$ curl --silent --header "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
$ curl --silent --header "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/list?pretty
```
Take note of the `ReplicatedIndex` value.
@ -139,15 +139,15 @@ with the following contents:
From a Consul server in DC1, create a new token using that file:
```shell
curl -X PUT -H "X-Consul-Token: $MASTER_TOKEN" -d @test-ui-token.json localhost:8500/v1/acl/create
```shell-session
$ curl --request PUT --header "X-Consul-Token: $MASTER_TOKEN" --data @test-ui-token.json localhost:8500/v1/acl/create
```
From a Consul server in DC2:
```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/list?pretty
```shell-session
$ curl --silent --header "X-Consul-Token: $MASTER_TOKEN" "localhost:8500/v1/acl/replication?pretty"
$ curl --silent --header "X-Consul-Token: $MASTER_TOKEN" "localhost:8500/v1/acl/list?pretty"
```
`ReplicatedIndex` should have incremented and you should find the new token listed. If you try using CLI ACL commands you will receive this error:
@ -169,8 +169,8 @@ Once this is complete, you should observe a log entry like this from your server
**6.** Confirm that replication is still working in DC2 by issuing the following curl command from a
consul server in that DC:
```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```shell-session
$ curl --silent --header "X-Consul-Token: $MASTER_TOKEN" "localhost:8500/v1/acl/replication?pretty"
```
You should receive output similar to this:

View File

@ -41,8 +41,8 @@ Looking through these changes prior to upgrading is highly recommended.
**1.** Check replication status in DC1 by issuing the following curl command from a
consul server in that DC:
```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```shell-session
$ curl --silent --header "X-Consul-Token: $MASTER_TOKEN" "localhost:8500/v1/acl/replication?pretty"
```
You should receive output similar to this:
@ -67,8 +67,8 @@ disabled, so this is normal even if replication is happening.
**2.** Check replication status in DC2 by issuing the following curl command from a
consul server in that DC:
```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```shell-session
$ curl --silent --header "X-Consul-Token: $MASTER_TOKEN" "localhost:8500/v1/acl/replication?pretty"
```
You should receive output similar to this:
@ -92,8 +92,8 @@ You should receive output similar to this:
**4.** Confirm that replication is still working in DC2 by issuing the following curl command from a
consul server in that DC:
```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```shell-session
$ curl --silent --header "X-Consul-Token: $MASTER_TOKEN" "localhost:8500/v1/acl/replication?pretty"
```
You should receive output similar to this: