Merge branch 'main' into fix-broken-dockerfile

Michele Degges 2022-02-04 12:30:20 -08:00 committed by GitHub
commit e513430187
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
114 changed files with 8533 additions and 4285 deletions

.changelog/11956.txt Normal file

@ -0,0 +1,3 @@
```release-note:improvement
ci: Enable security scanning for CRT
```

.changelog/12166.txt Normal file

@ -0,0 +1,3 @@
```release-note:deprecation
acl: The `consul.acl.ResolveTokenToIdentity` metric is no longer reported. The values that were previously reported as part of this metric will now be part of the `consul.acl.ResolveToken` metric.
```

.changelog/12209.txt Normal file

@ -0,0 +1,3 @@
```release-note:enhancement
ui: Use @hashicorp/flight icons for all our icons.
```

.changelog/12267.txt Normal file

@ -0,0 +1,3 @@
```release-note:bug
ca: adjust validation of PrivateKeyType/Bits with the Vault provider, to remove the error when the cert is created manually in Vault.
```


@ -359,7 +359,7 @@ jobs:
path: /tmp/jsonfile
- run: *notify-slack-failure
# build all distros
# build is a templated job for build-x
build-distros: &build-distros
docker:
- image: *GOLANG_IMAGE
@ -367,7 +367,13 @@ jobs:
<<: *ENVIRONMENT
steps:
- checkout
- run: ./build-support/scripts/build-local.sh
- run:
name: Build
command: |
for os in $XC_OS; do
target="./pkg/bin/${GOOS}_${GOARCH}/"
GOOS="$os" CGO_ENABLED=0 go build -o "$target" -ldflags "$(GOLDFLAGS)" -tags "$(GOTAGS)"
done
# save dev build to CircleCI
- store_artifacts:
@ -380,7 +386,7 @@ jobs:
environment:
<<: *build-env
XC_OS: "freebsd linux windows"
XC_ARCH: "386"
GOARCH: "386"
# build all amd64 architecture supported OS binaries
build-amd64:
@ -388,7 +394,7 @@ jobs:
environment:
<<: *build-env
XC_OS: "darwin freebsd linux solaris windows"
XC_ARCH: "amd64"
GOARCH: "amd64"
# build all arm/arm64 architecture supported OS binaries
build-arm:
@ -433,7 +439,11 @@ jobs:
- attach_workspace: # this normally runs as the first job and has nothing to attach; only used in main branch after rebuilding UI
at: .
- run:
command: make dev
name: Build
command: |
make dev
mkdir -p /home/circleci/go/bin
cp ./bin/consul /home/circleci/go/bin/consul
# save dev build to pass to downstream jobs
- persist_to_workspace:


@ -1,4 +1,5 @@
# Contributing to Consul
>**Note:** We take Consul's security and our users' trust very seriously.
>If you believe you have found a security issue in Consul, please responsibly
>disclose by contacting us at security@hashicorp.com.
@ -14,7 +15,9 @@ talk to us! A great way to do this is in issues themselves. When you want to
work on an issue, comment on it first and tell us the approach you want to take.
## Getting Started
### Some Ways to Contribute
* Report potential bugs.
* Suggest product enhancements.
* Increase our test coverage.
@ -24,7 +27,8 @@ work on an issue, comment on it first and tell us the approach you want to take.
are deployed from this repo.
* Respond to questions about usage on the issue tracker or the Consul section of the [HashiCorp forum](https://discuss.hashicorp.com/c/consul).
### Reporting an Issue:
### Reporting an Issue
>Note: Issues on GitHub for Consul are intended to be related to bugs or feature requests.
>Questions should be directed to other community resources such as the: [Discuss Forum](https://discuss.hashicorp.com/c/consul/29), [FAQ](https://www.consul.io/docs/faq.html), or [Guides](https://www.consul.io/docs/guides/index.html).
@ -53,42 +57,47 @@ issue. Stale issues will be closed.
4. The issue is addressed in a pull request or commit. The issue will be
referenced in the commit message so that the code that fixes it is clearly
linked.
linked. Any change a Consul user might need to know about will include a
changelog entry in the PR.
5. The issue is closed.
## Building Consul
If you wish to work on Consul itself, you'll first need [Go](https://golang.org)
installed (the Go version should match that of our [CI config's](https://github.com/hashicorp/consul/blob/main/.circleci/config.yml) Go image).
Next, clone this repository and then run `make dev`. In a few moments, you'll have a working
`consul` executable in `consul/bin` and `$GOPATH/bin`:
>Note: `make dev` will build for your local machine's os/architecture. If you wish to build for all os/architecture combinations, use `make`.
## Making Changes to Consul
The first step to making changes is to fork Consul. Afterwards, the easiest way
to work on the fork is to set it as a remote of the Consul project:
### Prerequisites
1. Navigate to `$GOPATH/src/github.com/hashicorp/consul`
2. Rename the existing remote's name: `git remote rename origin upstream`.
3. Add your fork as a remote by running
`git remote add origin <github url of fork>`. For example:
`git remote add origin https://github.com/myusername/consul`.
4. Checkout a feature branch: `git checkout -t -b new-feature`
5. Make changes
6. Push changes to the fork when ready to submit PR:
`git push -u origin new-feature`
If you wish to work on Consul itself, you'll first need to:
- install [Go](https://golang.org) (the version should match that of our
[CI config's](https://github.com/hashicorp/consul/blob/main/.circleci/config.yml) Go image).
- [fork the Consul repo](../docs/contributing/fork-the-project.md)
By following these steps you can push to your fork to create a PR, but the code on disk still
lives in the spot where the Go CLI tools expect to find it.
### Building Consul
>Note: If you make any changes to the code, run `gofmt -s -w` to automatically format the code according to Go standards.
To build Consul, run `make dev`. In a few moments, you'll have a working
`consul` executable in `consul/bin` and `$GOPATH/bin`:
## Testing
>Note: `make dev` will build for your local machine's os/architecture. If you wish to build for all os/architecture combinations, use `make`.
### Modifying the Code
#### Code Formatting
Go provides [tooling to apply consistent code formatting](https://golang.org/doc/effective_go#formatting).
If you make any changes to the code, run `gofmt -s -w` to automatically format the code according to Go standards.
#### Updating Go Module Dependencies
If a dependency is added or changed, run `go mod tidy` to update `go.mod` and `go.sum`.
#### Developer Documentation
Developer-focused documentation about the Consul code base is under [./docs],
and godoc package documentation can be read at [pkg.go.dev/github.com/hashicorp/consul].
[./docs]: ../docs/README.md
[pkg.go.dev/github.com/hashicorp/consul]: https://pkg.go.dev/github.com/hashicorp/consul
### Testing
During development, it may be more convenient to check your work-in-progress by running only the tests which you expect to be affected by your changes, as the full test suite can take several minutes to execute. [Go's built-in test tool](https://golang.org/pkg/cmd/go/internal/test/) allows specifying a list of packages to test and the `-run` option to only include test names matching a regular expression.
The `go test -short` flag can also be used to skip slower tests.
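Inside a test, this flag surfaces as `testing.Short()`, which a slow test checks in order to call `t.Skip`. The gating logic can be sketched as a plain function (the `-short` plumbing below is illustrative, not the real `testing` package):

```go
package main

import (
	"fmt"
	"os"
)

// short mimics the effect of `go test -short`; in a real test file you
// would check testing.Short() inside the test body instead.
var short = len(os.Args) > 1 && os.Args[1] == "-short"

// maybeRun skips tests marked slow when short mode is enabled.
func maybeRun(name string, slow bool) string {
	if slow && short {
		return name + ": SKIP"
	}
	return name + ": RUN"
}

func main() {
	fmt.Println(maybeRun("TestConfigParse", false))
	fmt.Println(maybeRun("TestSlowIntegration", true))
}
```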
@ -99,22 +108,44 @@ Examples (run from the repository root):
When a pull request is opened CI will run all tests and lint to verify the change.
## Go Module Dependencies
### Submitting a Pull Request
If a dependency is added or changed, run `go mod tidy` to update `go.mod` and `go.sum`.
Before writing any code, we recommend:
- Create a GitHub issue if none already exists for the code change you'd like to make.
- Write a comment on the GitHub issue indicating you're interested in contributing so
maintainers can provide their perspective if needed.
## Developer Documentation
Keep your pull requests (PRs) small and open them early so you can get feedback on
approach from maintainers before investing your time in larger changes. For example,
see how [applying URL-decoding of resource names across the whole HTTP API](https://github.com/hashicorp/consul/issues/11258)
started with [iterating on the right approach for a few endpoints](https://github.com/hashicorp/consul/pull/11335)
before applying more broadly.
Documentation about the Consul code base is under [./docs],
and godoc package documentation can be read at [pkg.go.dev/github.com/hashicorp/consul].
When you're ready to submit a pull request:
1. Review the [list of checklists](#checklists) for common changes and follow any
that apply to your work.
2. Include evidence that your changes work as intended (e.g., add/modify unit tests;
describe manual tests you ran, in what environment,
and the results including screenshots or terminal output).
3. Open the PR from your fork against base repository `hashicorp/consul` and branch `main`.
- [Link the PR to its associated issue](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue).
4. Include any specific questions that you have for the reviewer in the PR description
or as a PR comment in GitHub.
- If there's anything you find the need to explain or clarify in the PR, consider
whether that explanation should be added in the source code as comments.
- You can submit a [draft PR](https://github.blog/2019-02-14-introducing-draft-pull-requests/)
if your changes aren't finalized but would benefit from in-process feedback.
5. If there's any reason Consul users might need to know about this change,
[add a changelog entry](../docs/contributing/add-a-changelog-entry.md).
6. After you submit, the Consul maintainers team needs time to carefully review your
contribution and ensure it is production-ready, considering factors such as: security,
backwards-compatibility, potential regressions, etc.
7. After you address Consul maintainer feedback and the PR is approved, a Consul maintainer
will merge it. Your contribution will be available from the next major release (e.g., 1.x)
unless explicitly backported to an existing or previous major release by the maintainer.
[./docs]: ../docs/README.md
[pkg.go.dev/github.com/hashicorp/consul]: https://pkg.go.dev/github.com/hashicorp/consul
#### Checklists
### Checklists
Some common changes that many PRs require, such as adding config fields, are
documented through checklists.
Please check in [docs/](../docs/) for any `checklist-*.md` files that might help
with your change.
Some common changes that many PRs require are documented through checklists as
`checklist-*.md` files in [docs/](../docs/), including:
- [Adding config fields](../docs/config/checklist-adding-config-fields.md)


@ -3,9 +3,9 @@ name: build
on:
push:
# Sequence of patterns matched against refs/heads
branches: [
"main"
]
branches:
# Push events on the main branch
- main
env:
PKG_NAME: consul
@ -145,6 +145,7 @@ jobs:
config_dir: ".release/linux/package"
preinstall: ".release/linux/preinstall"
postinstall: ".release/linux/postinstall"
preremove: ".release/linux/preremove"
postremove: ".release/linux/postremove"
- name: Set Package Names


@ -42,8 +42,36 @@ event "upload-dev" {
}
}
event "notarize-darwin-amd64" {
event "security-scan-binaries" {
depends = ["upload-dev"]
action "security-scan-binaries" {
organization = "hashicorp"
repository = "crt-workflows-common"
workflow = "security-scan-binaries"
config = "security-scan.hcl"
}
notification {
on = "fail"
}
}
event "security-scan-containers" {
depends = ["security-scan-binaries"]
action "security-scan-containers" {
organization = "hashicorp"
repository = "crt-workflows-common"
workflow = "security-scan-containers"
config = "security-scan.hcl"
}
notification {
on = "fail"
}
}
event "notarize-darwin-amd64" {
depends = ["security-scan-containers"]
action "notarize-darwin-amd64" {
organization = "hashicorp"
repository = "crt-workflows-common"


@ -1,14 +1,19 @@
#!/bin/bash
if [ "$1" = "purge" ]
then
userdel consul
if [ -d "/run/systemd/system" ]; then
systemctl --system daemon-reload >/dev/null || :
fi
if [ "$1" == "upgrade" ] && [ -d /run/systemd/system ]; then
systemctl --system daemon-reload >/dev/null || true
systemctl restart consul >/dev/null || true
fi
case "$1" in
purge | 0)
userdel consul
;;
upgrade | [1-9]*)
if [ -d "/run/systemd/system" ]; then
systemctl try-restart consul.service >/dev/null || :
fi
;;
esac
exit 0

.release/linux/preremove Normal file

@ -0,0 +1,11 @@
#!/bin/bash
case "$1" in
remove | 0)
if [ -d "/run/systemd/system" ]; then
systemctl --no-reload disable consul.service > /dev/null || :
systemctl stop consul.service > /dev/null || :
fi
;;
esac
exit 0
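Both maintainer scripts dispatch on `$1` because the two packaging systems pass different arguments: Debian tools pass action names (`purge`, `upgrade`, `remove`), while RPM passes the count of remaining installs (`0` on final removal, `1` or more on upgrade). A hedged Go sketch of that dispatch (function and labels are illustrative, not the scripts themselves):

```go
package main

import "fmt"

// action classifies the package-manager argument shared by the
// preremove/postremove scripts: words come from dpkg, counts from rpm.
func action(arg string) string {
	switch {
	case arg == "purge" || arg == "remove" || arg == "0":
		return "cleanup" // final removal: delete user, stop/disable unit
	case arg == "upgrade" || (len(arg) > 0 && arg[0] >= '1' && arg[0] <= '9'):
		return "restart" // upgrade in place: try-restart the service
	}
	return "ignore"
}

func main() {
	for _, a := range []string{"purge", "0", "upgrade", "2"} {
		fmt.Printf("%s -> %s\n", a, action(a))
	}
}
```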


@ -0,0 +1,13 @@
container {
dependencies = true
alpine_secdb = true
secrets = true
}
binary {
secrets = true
go_modules = false
osv = true
oss_index = true
nvd = true
}


@ -1,6 +1,6 @@
# This Dockerfile contains multiple targets.
# Use 'docker build --target=<name> .' to build one.
# e.g. `docker build --target=dev .`
# e.g. `docker build --target=official .`
#
# All non-dev targets have a VERSION argument that must be provided
# via --build-arg=VERSION=<version> when building.
@ -191,4 +191,4 @@ ENTRYPOINT ["docker-entrypoint.sh"]
# By default you'll get an insecure single-node development server that stores
# everything in RAM, exposes a web UI and HTTP endpoints, and bootstraps itself.
# Don't use this configuration for production.
CMD ["agent", "-dev", "-client", "0.0.0.0"]
CMD ["agent", "-dev", "-client", "0.0.0.0"]


@ -1,3 +1,6 @@
# For documentation on building consul from source, refer to:
# https://www.consul.io/docs/install#compiling-from-source
SHELL = bash
GOGOVERSION?=$(shell grep github.com/gogo/protobuf go.mod | awk '{print $$2}')
GOTOOLS = \
@ -12,8 +15,6 @@ GOTOOLS = \
github.com/hashicorp/lint-consul-retry@master
GOTAGS ?=
GOOS?=$(shell go env GOOS)
GOARCH?=$(shell go env GOARCH)
GOPATH=$(shell go env GOPATH)
MAIN_GOPATH=$(shell go env GOPATH | cut -d: -f1)
@ -134,20 +135,17 @@ ifdef SKIP_DOCKER_BUILD
ENVOY_INTEG_DEPS=noop
endif
# all builds binaries for all targets
all: bin
all: dev-build
# used to make integration dependencies conditional
noop: ;
bin: tools
@$(SHELL) $(CURDIR)/build-support/scripts/build-local.sh
# dev creates binaries for testing locally - these are put into ./bin and $GOPATH
# dev creates binaries for testing locally - these are put into ./bin
dev: dev-build
dev-build:
@$(SHELL) $(CURDIR)/build-support/scripts/build-local.sh -o $(GOOS) -a $(GOARCH)
mkdir -p bin
CGO_ENABLED=0 go build -o ./bin -ldflags "$(GOLDFLAGS)" -tags "$(GOTAGS)"
dev-docker: linux
@echo "Pulling consul container image - $(CONSUL_IMAGE_VERSION)"
@ -175,9 +173,10 @@ ifeq ($(CIRCLE_BRANCH), main)
@docker push $(CI_DEV_DOCKER_NAMESPACE)/$(CI_DEV_DOCKER_IMAGE_NAME):latest
endif
# linux builds a linux package independent of the source platform
# linux builds a linux binary independent of the source platform
linux:
@$(SHELL) $(CURDIR)/build-support/scripts/build-local.sh -o linux -a amd64
mkdir -p bin
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o ./bin -ldflags "$(GOLDFLAGS)" -tags "$(GOTAGS)"
# dist builds binaries for all platforms and packages them for distribution
dist:


@ -1,11 +1,20 @@
# Consul [![CircleCI](https://circleci.com/gh/hashicorp/consul/tree/main.svg?style=svg)](https://circleci.com/gh/hashicorp/consul/tree/main) [![Discuss](https://img.shields.io/badge/discuss-consul-ca2171.svg?style=flat)](https://discuss.hashicorp.com/c/consul)
# Consul
<p>
<a href="https://consul.io" title="Consul website">
<img src="./website/public/img/logo-hashicorp.svg" alt="HashiCorp Consul logo" width="200px">
</a>
</p>
[![Docker Pulls](https://img.shields.io/docker/pulls/_/consul.svg)](https://hub.docker.com/_/consul)
[![Go Report Card](https://goreportcard.com/badge/github.com/hashicorp/consul)](https://goreportcard.com/report/github.com/hashicorp/consul)
Consul is a distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure.
* Website: https://www.consul.io
* Tutorials: [HashiCorp Learn](https://learn.hashicorp.com/consul)
* Forum: [Discuss](https://discuss.hashicorp.com/c/consul)
Consul is a distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure.
Consul provides several key features:
* **Multi-Datacenter** - Consul is built to be datacenter aware, and can


@ -15,7 +15,7 @@ import (
// critical purposes, such as logging. Therefore we interpret all errors as empty-string
// so we can safely log it without handling non-critical errors at the usage site.
func (a *Agent) aclAccessorID(secretID string) string {
ident, err := a.delegate.ResolveTokenToIdentity(secretID)
ident, err := a.delegate.ResolveTokenAndDefaultMeta(secretID, nil, nil)
if acl.IsErrNotFound(err) {
return ""
}
@ -23,10 +23,7 @@ func (a *Agent) aclAccessorID(secretID string) string {
a.logger.Debug("non-critical error resolving acl token accessor for logging", "error", err)
return ""
}
if ident == nil {
return ""
}
return ident.ID()
return ident.AccessorID()
}
// vetServiceRegister makes sure the service registration action is allowed by
@ -174,7 +171,7 @@ func (a *Agent) filterMembers(token string, members *[]serf.Member) error {
if authz.NodeRead(node, &authzContext) == acl.Allow {
continue
}
accessorID := a.aclAccessorID(token)
accessorID := authz.AccessorID()
a.logger.Debug("dropping node from result due to ACLs", "node", node, "accessorID", accessorID)
m = append(m[:i], m[i+1:]...)
i--


@ -39,6 +39,12 @@ type TestACLAgent struct {
func NewTestACLAgent(t *testing.T, name string, hcl string, resolveAuthz authzResolver, resolveIdent identResolver) *TestACLAgent {
t.Helper()
if resolveIdent == nil {
resolveIdent = func(s string) (structs.ACLIdentity, error) {
return nil, nil
}
}
a := &TestACLAgent{resolveAuthzFn: resolveAuthz, resolveIdentFn: resolveIdent}
dataDir := testutil.TempDir(t, "acl-agent")
@ -86,26 +92,15 @@ func (a *TestACLAgent) ResolveToken(secretID string) (acl.Authorizer, error) {
return authz, err
}
func (a *TestACLAgent) ResolveTokenToIdentityAndAuthorizer(secretID string) (structs.ACLIdentity, acl.Authorizer, error) {
if a.resolveAuthzFn == nil {
return nil, nil, fmt.Errorf("ResolveTokenToIdentityAndAuthorizer call is unexpected - no authz resolver callback set")
}
return a.resolveAuthzFn(secretID)
}
func (a *TestACLAgent) ResolveTokenToIdentity(secretID string) (structs.ACLIdentity, error) {
if a.resolveIdentFn == nil {
return nil, fmt.Errorf("ResolveTokenToIdentity call is unexpected - no ident resolver callback set")
}
return a.resolveIdentFn(secretID)
}
func (a *TestACLAgent) ResolveTokenAndDefaultMeta(secretID string, entMeta *structs.EnterpriseMeta, authzContext *acl.AuthorizerContext) (acl.Authorizer, error) {
identity, authz, err := a.ResolveTokenToIdentityAndAuthorizer(secretID)
func (a *TestACLAgent) ResolveTokenAndDefaultMeta(secretID string, entMeta *structs.EnterpriseMeta, authzContext *acl.AuthorizerContext) (consul.ACLResolveResult, error) {
authz, err := a.ResolveToken(secretID)
if err != nil {
return nil, err
return consul.ACLResolveResult{}, err
}
identity, err := a.resolveIdentFn(secretID)
if err != nil {
return consul.ACLResolveResult{}, err
}
// Default the EnterpriseMeta based on the Tokens meta or actual defaults
@ -119,7 +114,7 @@ func (a *TestACLAgent) ResolveTokenAndDefaultMeta(secretID string, entMeta *stru
// Use the meta to fill in the ACL authorization context
entMeta.FillAuthzContext(authzContext)
return authz, err
return consul.ACLResolveResult{Authorizer: authz, ACLIdentity: identity}, err
}
// All of these are stubs to satisfy the interface
@ -523,22 +518,3 @@ func TestACL_filterChecksWithAuthorizer(t *testing.T) {
_, ok = checks["my-other"]
require.False(t, ok)
}
// TODO: remove?
func TestACL_ResolveIdentity(t *testing.T) {
t.Parallel()
a := NewTestACLAgent(t, t.Name(), TestACLConfig(), nil, catalogIdent)
// this test is meant to ensure we are calling the correct function
// which is ResolveTokenToIdentity on the Agent delegate. Our
// nil authz resolver will cause it to emit an error if used
ident, err := a.delegate.ResolveTokenToIdentity(nodeROSecret)
require.NoError(t, err)
require.NotNil(t, ident)
// just double checking to ensure if we had used the wrong function
// that an error would be produced
_, err = a.delegate.ResolveTokenAndDefaultMeta(nodeROSecret, nil, nil)
require.Error(t, err)
}


@ -167,14 +167,11 @@ type delegate interface {
// RemoveFailedNode is used to remove a failed node from the cluster.
RemoveFailedNode(node string, prune bool, entMeta *structs.EnterpriseMeta) error
// TODO: replace this method with consul.ACLResolver
ResolveTokenToIdentity(token string) (structs.ACLIdentity, error)
// ResolveTokenAndDefaultMeta returns an acl.Authorizer which authorizes
// actions based on the permissions granted to the token.
// If either entMeta or authzContext are non-nil they will be populated with the
// default partition and namespace from the token.
ResolveTokenAndDefaultMeta(token string, entMeta *structs.EnterpriseMeta, authzContext *acl.AuthorizerContext) (acl.Authorizer, error)
ResolveTokenAndDefaultMeta(token string, entMeta *structs.EnterpriseMeta, authzContext *acl.AuthorizerContext) (consul.ACLResolveResult, error)
RPC(method string, args interface{}, reply interface{}) error
SnapshotRPC(args *structs.SnapshotRequest, in io.Reader, out io.Writer, replyFn structs.SnapshotReplyFn) error


@ -1640,8 +1640,8 @@ type fakeResolveTokenDelegate struct {
authorizer acl.Authorizer
}
func (f fakeResolveTokenDelegate) ResolveTokenAndDefaultMeta(_ string, _ *structs.EnterpriseMeta, _ *acl.AuthorizerContext) (acl.Authorizer, error) {
return f.authorizer, nil
func (f fakeResolveTokenDelegate) ResolveTokenAndDefaultMeta(_ string, _ *structs.EnterpriseMeta, _ *acl.AuthorizerContext) (consul.ACLResolveResult, error) {
return consul.ACLResolveResult{Authorizer: f.authorizer}, nil
}
func TestAgent_Reload(t *testing.T) {


@ -136,9 +136,7 @@ func (s *HTTPHandlers) CatalogRegister(resp http.ResponseWriter, req *http.Reque
}
if err := s.rewordUnknownEnterpriseFieldError(decodeBody(req.Body, &args)); err != nil {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(resp, "Request decode failed: %v", err)
return nil, nil
return nil, BadRequestError{Reason: fmt.Sprintf("Request decode failed: %v", err)}
}
// Setup the default DC if not provided
@ -168,9 +166,7 @@ func (s *HTTPHandlers) CatalogDeregister(resp http.ResponseWriter, req *http.Req
return nil, err
}
if err := s.rewordUnknownEnterpriseFieldError(decodeBody(req.Body, &args)); err != nil {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(resp, "Request decode failed: %v", err)
return nil, nil
return nil, BadRequestError{Reason: fmt.Sprintf("Request decode failed: %v", err)}
}
// Setup the default DC if not provided
@ -367,9 +363,7 @@ func (s *HTTPHandlers) catalogServiceNodes(resp http.ResponseWriter, req *http.R
return nil, err
}
if args.ServiceName == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing service name")
return nil, nil
return nil, BadRequestError{Reason: "Missing service name"}
}
// Make the RPC request
@ -444,9 +438,7 @@ func (s *HTTPHandlers) CatalogNodeServices(resp http.ResponseWriter, req *http.R
return nil, err
}
if args.Node == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing node name")
return nil, nil
return nil, BadRequestError{Reason: "Missing node name"}
}
// Make the RPC request
@ -511,9 +503,7 @@ func (s *HTTPHandlers) CatalogNodeServiceList(resp http.ResponseWriter, req *htt
return nil, err
}
if args.Node == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing node name")
return nil, nil
return nil, BadRequestError{Reason: "Missing node name"}
}
// Make the RPC request
@ -564,9 +554,7 @@ func (s *HTTPHandlers) CatalogGatewayServices(resp http.ResponseWriter, req *htt
return nil, err
}
if args.ServiceName == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing gateway name")
return nil, nil
return nil, BadRequestError{Reason: "Missing gateway name"}
}
// Make the RPC request


@ -90,16 +90,12 @@ func (s *HTTPHandlers) configDelete(resp http.ResponseWriter, req *http.Request)
pathArgs := strings.SplitN(kindAndName, "/", 2)
if len(pathArgs) != 2 {
resp.WriteHeader(http.StatusNotFound)
fmt.Fprintf(resp, "Must provide both a kind and name to delete")
return nil, nil
return nil, NotFoundError{Reason: "Must provide both a kind and name to delete"}
}
entry, err := structs.MakeConfigEntry(pathArgs[0], pathArgs[1])
if err != nil {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(resp, "%v", err)
return nil, nil
return nil, BadRequestError{Reason: err.Error()}
}
args.Entry = entry
// Parse enterprise meta.


@ -16,10 +16,9 @@ import (
vaultapi "github.com/hashicorp/vault/api"
"github.com/mitchellh/mapstructure"
"github.com/hashicorp/consul/lib/decode"
"github.com/hashicorp/consul/agent/connect"
"github.com/hashicorp/consul/agent/structs"
"github.com/hashicorp/consul/lib/decode"
)
const (
@ -161,6 +160,34 @@ func (v *VaultProvider) Configure(cfg ProviderConfig) error {
return nil
}
func (v *VaultProvider) ValidateConfigUpdate(prevRaw, nextRaw map[string]interface{}) error {
prev, err := ParseVaultCAConfig(prevRaw)
if err != nil {
return fmt.Errorf("failed to parse existing CA config: %w", err)
}
next, err := ParseVaultCAConfig(nextRaw)
if err != nil {
return fmt.Errorf("failed to parse new CA config: %w", err)
}
if prev.RootPKIPath != next.RootPKIPath {
return nil
}
if prev.PrivateKeyType != "" && prev.PrivateKeyType != connect.DefaultPrivateKeyType {
if prev.PrivateKeyType != next.PrivateKeyType {
return fmt.Errorf("cannot update the PrivateKeyType field without changing RootPKIPath")
}
}
if prev.PrivateKeyBits != 0 && prev.PrivateKeyBits != connect.DefaultPrivateKeyBits {
if prev.PrivateKeyBits != next.PrivateKeyBits {
return fmt.Errorf("cannot update the PrivateKeyBits field without changing RootPKIPath")
}
}
return nil
}
// renewToken uses a vaultapi.LifetimeWatcher to repeatedly renew our token's lease.
// If the token can no longer be renewed and auth method is set,
// it will re-authenticate to Vault using the auth method and restart the renewer with the new token.
@ -273,31 +300,6 @@ func (v *VaultProvider) GenerateRoot() (RootResult, error) {
if err != nil {
return RootResult{}, err
}
if rootPEM != "" {
rootCert, err := connect.ParseCert(rootPEM)
if err != nil {
return RootResult{}, err
}
// Vault PKI doesn't allow in-place cert/key regeneration. That
// means if you need to change either the key type or key bits then
// you also need to provide new mount points.
// https://www.vaultproject.io/api-docs/secret/pki#generate-root
//
// A separate bug in vault likely also requires that you use the
// ForceWithoutCrossSigning option when changing key types.
foundKeyType, foundKeyBits, err := connect.KeyInfoFromCert(rootCert)
if err != nil {
return RootResult{}, err
}
if v.config.PrivateKeyType != foundKeyType {
return RootResult{}, fmt.Errorf("cannot update the PrivateKeyType field without choosing a new PKI mount for the root CA")
}
if v.config.PrivateKeyBits != foundKeyBits {
return RootResult{}, fmt.Errorf("cannot update the PrivateKeyBits field without choosing a new PKI mount for the root CA")
}
}
}
return RootResult{PEM: rootPEM}, nil
@ -400,17 +402,14 @@ func (v *VaultProvider) SetIntermediate(intermediatePEM, rootPEM string) error {
return fmt.Errorf("cannot set an intermediate using another root in the primary datacenter")
}
err := validateSetIntermediate(
intermediatePEM, rootPEM,
"", // we don't have access to the private key directly
v.spiffeID,
)
// the private key is in vault, so we can't use it in this validation
err := validateSetIntermediate(intermediatePEM, rootPEM, "", v.spiffeID)
if err != nil {
return err
}
_, err = v.client.Logical().Write(v.config.IntermediatePKIPath+"intermediate/set-signed", map[string]interface{}{
"certificate": fmt.Sprintf("%s\n%s", intermediatePEM, rootPEM),
"certificate": intermediatePEM,
})
if err != nil {
return err
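The new `ValidateConfigUpdate` above enforces one rule: `PrivateKeyType` and `PrivateKeyBits` may only change together with `RootPKIPath`, because a Vault PKI mount cannot regenerate its root key in place. Reduced to a pure function (names are illustrative, not Consul's actual API):

```go
package main

import "fmt"

// keyChangeAllowed reports whether a CA key change is valid: changing
// the key settings requires pointing at a new root PKI mount.
func keyChangeAllowed(prevPath, nextPath, prevKey, nextKey string) error {
	if prevPath != nextPath {
		return nil // new mount: any key settings are allowed
	}
	if prevKey != "" && prevKey != nextKey {
		return fmt.Errorf("cannot update the private key settings without changing RootPKIPath")
	}
	return nil
}

func main() {
	fmt.Println(keyChangeAllowed("pki/", "pki/", "ec", "rsa"))  // error
	fmt.Println(keyChangeAllowed("pki/", "pki2/", "ec", "rsa")) // nil
}
```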


@ -34,10 +34,6 @@ var ACLSummaries = []prometheus.SummaryDefinition{
Name: []string{"acl", "ResolveToken"},
Help: "This measures the time it takes to resolve an ACL token.",
},
{
Name: []string{"acl", "ResolveTokenToIdentity"},
Help: "This measures the time it takes to resolve an ACL token to an Identity.",
},
}
// These must be kept in sync with the constants in command/agent/acl.go.
@ -1057,13 +1053,16 @@ func (r *ACLResolver) resolveLocallyManagedToken(token string) (structs.ACLIdent
return r.resolveLocallyManagedEnterpriseToken(token)
}
func (r *ACLResolver) ResolveTokenToIdentityAndAuthorizer(token string) (structs.ACLIdentity, acl.Authorizer, error) {
// ResolveToken to an acl.Authorizer and structs.ACLIdentity. The acl.Authorizer
// can be used to check permissions granted to the token, and the ACLIdentity
// describes the token and any defaults applied to it.
func (r *ACLResolver) ResolveToken(token string) (ACLResolveResult, error) {
if !r.ACLsEnabled() {
return nil, acl.ManageAll(), nil
return ACLResolveResult{Authorizer: acl.ManageAll()}, nil
}
if acl.RootAuthorizer(token) != nil {
return nil, nil, acl.ErrRootDenied
return ACLResolveResult{}, acl.ErrRootDenied
}
// handle the anonymous token
@ -1072,7 +1071,7 @@ func (r *ACLResolver) ResolveTokenToIdentityAndAuthorizer(token string) (structs
}
if ident, authz, ok := r.resolveLocallyManagedToken(token); ok {
return ident, authz, nil
return ACLResolveResult{Authorizer: authz, ACLIdentity: ident}, nil
}
defer metrics.MeasureSince([]string{"acl", "ResolveToken"}, time.Now())
@ -1082,10 +1081,11 @@ func (r *ACLResolver) ResolveTokenToIdentityAndAuthorizer(token string) (structs
r.handleACLDisabledError(err)
if IsACLRemoteError(err) {
r.logger.Error("Error resolving token", "error", err)
return &missingIdentity{reason: "primary-dc-down", token: token}, r.down, nil
ident := &missingIdentity{reason: "primary-dc-down", token: token}
return ACLResolveResult{Authorizer: r.down, ACLIdentity: ident}, nil
}
return nil, nil, err
return ACLResolveResult{}, err
}
// Build the Authorizer
@ -1098,7 +1098,7 @@ func (r *ACLResolver) ResolveTokenToIdentityAndAuthorizer(token string) (structs
authz, err := policies.Compile(r.cache, &conf)
if err != nil {
return nil, nil, err
return ACLResolveResult{}, err
}
chain = append(chain, authz)
@ -1106,42 +1106,32 @@ func (r *ACLResolver) ResolveTokenToIdentityAndAuthorizer(token string) (structs
if err != nil {
if IsACLRemoteError(err) {
r.logger.Error("Error resolving identity defaults", "error", err)
return identity, r.down, nil
return ACLResolveResult{Authorizer: r.down, ACLIdentity: identity}, nil
}
return nil, nil, err
return ACLResolveResult{}, err
} else if authz != nil {
chain = append(chain, authz)
}
chain = append(chain, acl.RootAuthorizer(r.config.ACLDefaultPolicy))
return identity, acl.NewChainedAuthorizer(chain), nil
return ACLResolveResult{Authorizer: acl.NewChainedAuthorizer(chain), ACLIdentity: identity}, nil
}
// TODO: rename to AccessorIDFromToken. This method is only used to retrieve the
// ACLIdentity.ID, so we don't need to return a full ACLIdentity. We could
// return a much smaller type (instead of just a string) to allow for changes
// in the future.
func (r *ACLResolver) ResolveTokenToIdentity(token string) (structs.ACLIdentity, error) {
if !r.ACLsEnabled() {
return nil, nil
type ACLResolveResult struct {
acl.Authorizer
// TODO: likely we can reduce this interface
ACLIdentity structs.ACLIdentity
}
func (a ACLResolveResult) AccessorID() string {
if a.ACLIdentity == nil {
return ""
}
return a.ACLIdentity.ID()
}
if acl.RootAuthorizer(token) != nil {
return nil, acl.ErrRootDenied
}
// handle the anonymous token
if token == "" {
token = anonymousToken
}
if ident, _, ok := r.resolveLocallyManagedToken(token); ok {
return ident, nil
}
defer metrics.MeasureSince([]string{"acl", "ResolveTokenToIdentity"}, time.Now())
return r.resolveIdentityFromToken(token)
func (a ACLResolveResult) Identity() structs.ACLIdentity {
return a.ACLIdentity
}
func (r *ACLResolver) ACLsEnabled() bool {
@ -1160,6 +1150,30 @@ func (r *ACLResolver) ACLsEnabled() bool {
return true
}
func (r *ACLResolver) ResolveTokenAndDefaultMeta(token string, entMeta *structs.EnterpriseMeta, authzContext *acl.AuthorizerContext) (ACLResolveResult, error) {
result, err := r.ResolveToken(token)
if err != nil {
return ACLResolveResult{}, err
}
if entMeta == nil {
entMeta = &structs.EnterpriseMeta{}
}
	// Default the EnterpriseMeta based on the token's meta, or on actual
	// defaults in the case of an unknown identity
if result.ACLIdentity != nil {
entMeta.Merge(result.ACLIdentity.EnterpriseMetadata())
} else {
entMeta.Merge(structs.DefaultEnterpriseMetaInDefaultPartition())
}
// Use the meta to fill in the ACL authorization context
entMeta.FillAuthzContext(authzContext)
return result, err
}
// aclFilter is used to filter results from our state store based on ACL rules
// configured for the provided token.
type aclFilter struct {
@ -1967,7 +1981,7 @@ func filterACLWithAuthorizer(logger hclog.Logger, authorizer acl.Authorizer, sub
// not authorized for read access will be removed from subj.
func filterACL(r *ACLResolver, token string, subj interface{}) error {
// Get the ACL from the token
_, authorizer, err := r.ResolveTokenToIdentityAndAuthorizer(token)
authorizer, err := r.ResolveToken(token)
if err != nil {
return err
}

View File

@ -1,7 +1,6 @@
package consul
import (
"github.com/hashicorp/consul/acl"
"github.com/hashicorp/consul/agent/structs"
)
@ -48,35 +47,3 @@ func (c *clientACLResolverBackend) ResolveRoleFromID(roleID string) (bool, *stru
// clients do no local role resolution at the moment
return false, nil, nil
}
func (c *Client) ResolveTokenToIdentity(token string) (structs.ACLIdentity, error) {
// not using ResolveTokenToIdentityAndAuthorizer because in this case we don't
// need to resolve the roles, policies and namespace but just want the identity
// information such as accessor id.
return c.acls.ResolveTokenToIdentity(token)
}
// TODO: Server has an identical implementation, remove duplication
func (c *Client) ResolveTokenAndDefaultMeta(token string, entMeta *structs.EnterpriseMeta, authzContext *acl.AuthorizerContext) (acl.Authorizer, error) {
identity, authz, err := c.acls.ResolveTokenToIdentityAndAuthorizer(token)
if err != nil {
return nil, err
}
if entMeta == nil {
entMeta = &structs.EnterpriseMeta{}
}
// Default the EnterpriseMeta based on the Tokens meta or actual defaults
// in the case of unknown identity
if identity != nil {
entMeta.Merge(identity.EnterpriseMetadata())
} else {
entMeta.Merge(structs.DefaultEnterpriseMetaInDefaultPartition())
}
// Use the meta to fill in the ACL authorization context
entMeta.FillAuthzContext(authzContext)
return authz, err
}

View File

@ -724,7 +724,7 @@ func (a *ACL) tokenSetInternal(args *structs.ACLTokenSetRequest, reply *structs.
}
// Purge the identity from the cache to prevent using the previous definition of the identity
a.srv.acls.cache.RemoveIdentity(tokenSecretCacheID(token.SecretID))
a.srv.ACLResolver.cache.RemoveIdentity(tokenSecretCacheID(token.SecretID))
// Don't check expiration times here as it doesn't really matter.
if _, updatedToken, err := a.srv.fsm.State().ACLTokenGetByAccessor(nil, token.AccessorID, nil); err == nil && updatedToken != nil {
@ -876,7 +876,7 @@ func (a *ACL) TokenDelete(args *structs.ACLTokenDeleteRequest, reply *string) er
}
// Purge the identity from the cache to prevent using the previous definition of the identity
a.srv.acls.cache.RemoveIdentity(tokenSecretCacheID(token.SecretID))
a.srv.ACLResolver.cache.RemoveIdentity(tokenSecretCacheID(token.SecretID))
if reply != nil {
*reply = token.AccessorID
@ -1198,7 +1198,7 @@ func (a *ACL) PolicySet(args *structs.ACLPolicySetRequest, reply *structs.ACLPol
}
// Remove from the cache to prevent stale cache usage
a.srv.acls.cache.RemovePolicy(policy.ID)
a.srv.ACLResolver.cache.RemovePolicy(policy.ID)
if _, policy, err := a.srv.fsm.State().ACLPolicyGetByID(nil, policy.ID, &policy.EnterpriseMeta); err == nil && policy != nil {
*reply = *policy
@ -1257,7 +1257,7 @@ func (a *ACL) PolicyDelete(args *structs.ACLPolicyDeleteRequest, reply *string)
return fmt.Errorf("Failed to apply policy delete request: %v", err)
}
a.srv.acls.cache.RemovePolicy(policy.ID)
a.srv.ACLResolver.cache.RemovePolicy(policy.ID)
*reply = policy.Name
@ -1318,12 +1318,12 @@ func (a *ACL) PolicyResolve(args *structs.ACLPolicyBatchGetRequest, reply *struc
}
// get full list of policies for this token
identity, policies, err := a.srv.acls.resolveTokenToIdentityAndPolicies(args.Token)
identity, policies, err := a.srv.ACLResolver.resolveTokenToIdentityAndPolicies(args.Token)
if err != nil {
return err
}
entIdentity, entPolicies, err := a.srv.acls.resolveEnterpriseIdentityAndPolicies(identity)
entIdentity, entPolicies, err := a.srv.ACLResolver.resolveEnterpriseIdentityAndPolicies(identity)
if err != nil {
return err
}
@ -1609,7 +1609,7 @@ func (a *ACL) RoleSet(args *structs.ACLRoleSetRequest, reply *structs.ACLRole) e
}
// Remove from the cache to prevent stale cache usage
a.srv.acls.cache.RemoveRole(role.ID)
a.srv.ACLResolver.cache.RemoveRole(role.ID)
if _, role, err := a.srv.fsm.State().ACLRoleGetByID(nil, role.ID, &role.EnterpriseMeta); err == nil && role != nil {
*reply = *role
@ -1664,7 +1664,7 @@ func (a *ACL) RoleDelete(args *structs.ACLRoleDeleteRequest, reply *string) erro
return fmt.Errorf("Failed to apply role delete request: %v", err)
}
a.srv.acls.cache.RemoveRole(role.ID)
a.srv.ACLResolver.cache.RemoveRole(role.ID)
*reply = role.Name
@ -1719,12 +1719,12 @@ func (a *ACL) RoleResolve(args *structs.ACLRoleBatchGetRequest, reply *structs.A
}
// get full list of roles for this token
identity, roles, err := a.srv.acls.resolveTokenToIdentityAndRoles(args.Token)
identity, roles, err := a.srv.ACLResolver.resolveTokenToIdentityAndRoles(args.Token)
if err != nil {
return err
}
entIdentity, entRoles, err := a.srv.acls.resolveEnterpriseIdentityAndRoles(identity)
entIdentity, entRoles, err := a.srv.ACLResolver.resolveEnterpriseIdentityAndRoles(identity)
if err != nil {
return err
}
@ -2481,7 +2481,7 @@ func (a *ACL) Logout(args *structs.ACLLogoutRequest, reply *bool) error {
}
// Purge the identity from the cache to prevent using the previous definition of the identity
a.srv.acls.cache.RemoveIdentity(tokenSecretCacheID(token.SecretID))
a.srv.ACLResolver.cache.RemoveIdentity(tokenSecretCacheID(token.SecretID))
*reply = true

View File

@ -165,47 +165,10 @@ func (s *serverACLResolverBackend) ResolveRoleFromID(roleID string) (bool, *stru
return s.InPrimaryDatacenter() || index > 0, role, acl.ErrNotFound
}
func (s *Server) ResolveToken(token string) (acl.Authorizer, error) {
_, authz, err := s.acls.ResolveTokenToIdentityAndAuthorizer(token)
return authz, err
}
func (s *Server) ResolveTokenToIdentity(token string) (structs.ACLIdentity, error) {
// not using ResolveTokenToIdentityAndAuthorizer because in this case we don't
// need to resolve the roles, policies and namespace but just want the identity
// information such as accessor id.
return s.acls.ResolveTokenToIdentity(token)
}
// TODO: Client has an identical implementation, remove duplication
func (s *Server) ResolveTokenAndDefaultMeta(token string, entMeta *structs.EnterpriseMeta, authzContext *acl.AuthorizerContext) (acl.Authorizer, error) {
identity, authz, err := s.acls.ResolveTokenToIdentityAndAuthorizer(token)
if err != nil {
return nil, err
}
if entMeta == nil {
entMeta = &structs.EnterpriseMeta{}
}
// Default the EnterpriseMeta based on the Tokens meta or actual defaults
// in the case of unknown identity
if identity != nil {
entMeta.Merge(identity.EnterpriseMetadata())
} else {
entMeta.Merge(structs.DefaultEnterpriseMetaInDefaultPartition())
}
// Use the meta to fill in the ACL authorization context
entMeta.FillAuthzContext(authzContext)
return authz, err
}
func (s *Server) filterACL(token string, subj interface{}) error {
return filterACL(s.acls, token, subj)
return filterACL(s.ACLResolver, token, subj)
}
func (s *Server) filterACLWithAuthorizer(authorizer acl.Authorizer, subj interface{}) {
filterACLWithAuthorizer(s.acls.logger, authorizer, subj)
filterACLWithAuthorizer(s.ACLResolver.logger, authorizer, subj)
}

View File

@ -46,10 +46,11 @@ type asyncResolutionResult struct {
err error
}
func verifyAuthorizerChain(t *testing.T, expected acl.Authorizer, actual acl.Authorizer) {
expectedChainAuthz, ok := expected.(*acl.ChainedAuthorizer)
func verifyAuthorizerChain(t *testing.T, expected ACLResolveResult, actual ACLResolveResult) {
t.Helper()
expectedChainAuthz, ok := expected.Authorizer.(*acl.ChainedAuthorizer)
require.True(t, ok, "expected Authorizer is not a ChainedAuthorizer")
actualChainAuthz, ok := actual.(*acl.ChainedAuthorizer)
actualChainAuthz, ok := actual.Authorizer.(*acl.ChainedAuthorizer)
require.True(t, ok, "actual Authorizer is not a ChainedAuthorizer")
expectedChain := expectedChainAuthz.AuthorizerChain()
@ -65,19 +66,13 @@ func verifyAuthorizerChain(t *testing.T, expected acl.Authorizer, actual acl.Aut
}
func resolveTokenAsync(r *ACLResolver, token string, ch chan *asyncResolutionResult) {
_, authz, err := r.ResolveTokenToIdentityAndAuthorizer(token)
authz, err := r.ResolveToken(token)
ch <- &asyncResolutionResult{authz: authz, err: err}
}
// Deprecated: use resolveToken or ACLResolver.ResolveTokenToIdentityAndAuthorizer instead
func (r *ACLResolver) ResolveToken(token string) (acl.Authorizer, error) {
_, authz, err := r.ResolveTokenToIdentityAndAuthorizer(token)
return authz, err
}
func resolveToken(t *testing.T, r *ACLResolver, token string) acl.Authorizer {
t.Helper()
_, authz, err := r.ResolveTokenToIdentityAndAuthorizer(token)
authz, err := r.ResolveToken(token)
require.NoError(t, err)
return authz
}
@ -739,7 +734,7 @@ func TestACLResolver_Disabled(t *testing.T) {
r := newTestACLResolver(t, delegate, nil)
authz, err := r.ResolveToken("does not exist")
require.Equal(t, acl.ManageAll(), authz)
require.Equal(t, ACLResolveResult{Authorizer: acl.ManageAll()}, authz)
require.Nil(t, err)
}
@ -753,22 +748,19 @@ func TestACLResolver_ResolveRootACL(t *testing.T) {
r := newTestACLResolver(t, delegate, nil)
t.Run("Allow", func(t *testing.T) {
authz, err := r.ResolveToken("allow")
require.Nil(t, authz)
_, err := r.ResolveToken("allow")
require.Error(t, err)
require.True(t, acl.IsErrRootDenied(err))
})
t.Run("Deny", func(t *testing.T) {
authz, err := r.ResolveToken("deny")
require.Nil(t, authz)
_, err := r.ResolveToken("deny")
require.Error(t, err)
require.True(t, acl.IsErrRootDenied(err))
})
t.Run("Manage", func(t *testing.T) {
authz, err := r.ResolveToken("manage")
require.Nil(t, authz)
_, err := r.ResolveToken("manage")
require.Error(t, err)
require.True(t, acl.IsErrRootDenied(err))
})
@ -817,7 +809,11 @@ func TestACLResolver_DownPolicy(t *testing.T) {
authz, err := r.ResolveToken("foo")
require.NoError(t, err)
require.NotNil(t, authz)
require.Equal(t, authz, acl.DenyAll())
expected := ACLResolveResult{
Authorizer: acl.DenyAll(),
ACLIdentity: &missingIdentity{reason: "primary-dc-down", token: "foo"},
}
require.Equal(t, expected, authz)
requireIdentityCached(t, r, tokenSecretCacheID("foo"), false, "not present")
})
@ -841,7 +837,11 @@ func TestACLResolver_DownPolicy(t *testing.T) {
authz, err := r.ResolveToken("foo")
require.NoError(t, err)
require.NotNil(t, authz)
require.Equal(t, authz, acl.AllowAll())
expected := ACLResolveResult{
Authorizer: acl.AllowAll(),
ACLIdentity: &missingIdentity{reason: "primary-dc-down", token: "foo"},
}
require.Equal(t, expected, authz)
requireIdentityCached(t, r, tokenSecretCacheID("foo"), false, "not present")
})
@ -958,7 +958,7 @@ func TestACLResolver_DownPolicy(t *testing.T) {
config.Config.ACLDownPolicy = "extend-cache"
})
_, authz, err := r.ResolveTokenToIdentityAndAuthorizer("not-found")
authz, err := r.ResolveToken("not-found")
require.NoError(t, err)
require.NotNil(t, authz)
require.Equal(t, acl.Deny, authz.NodeWrite("foo", nil))
@ -1255,10 +1255,9 @@ func TestACLResolver_DownPolicy(t *testing.T) {
	// the goroutine spawned will eventually return, and this will be a not-found error
retry.Run(t, func(t *retry.R) {
authz3, err := r.ResolveToken("found")
_, err := r.ResolveToken("found")
assert.Error(t, err)
assert.True(t, acl.IsErrNotFound(err))
assert.Nil(t, authz3)
})
requireIdentityCached(t, r, tokenSecretCacheID("found"), false, "no longer cached")
@ -1526,7 +1525,6 @@ func TestACLResolver_Client(t *testing.T) {
// policies within the cache)
authz, err = r.ResolveToken("a1a54629-5050-4d17-8a4e-560d2423f835")
require.EqualError(t, err, acl.ErrNotFound.Error())
require.Nil(t, authz)
require.True(t, modified)
require.True(t, deleted)
@ -1534,36 +1532,6 @@ func TestACLResolver_Client(t *testing.T) {
require.Equal(t, policyResolves, int32(3))
})
t.Run("Resolve-Identity", func(t *testing.T) {
t.Parallel()
delegate := &ACLResolverTestDelegate{
enabled: true,
datacenter: "dc1",
legacy: false,
localTokens: false,
localPolicies: false,
}
delegate.tokenReadFn = delegate.plainTokenReadFn
delegate.policyResolveFn = delegate.plainPolicyResolveFn
delegate.roleResolveFn = delegate.plainRoleResolveFn
r := newTestACLResolver(t, delegate, nil)
ident, err := r.ResolveTokenToIdentity("found-policy-and-role")
require.NoError(t, err)
require.NotNil(t, ident)
require.Equal(t, "5f57c1f6-6a89-4186-9445-531b316e01df", ident.ID())
require.EqualValues(t, 0, delegate.localTokenResolutions)
require.EqualValues(t, 1, delegate.remoteTokenResolutions)
require.EqualValues(t, 0, delegate.localPolicyResolutions)
require.EqualValues(t, 0, delegate.remotePolicyResolutions)
require.EqualValues(t, 0, delegate.localRoleResolutions)
require.EqualValues(t, 0, delegate.remoteRoleResolutions)
require.EqualValues(t, 0, delegate.remoteLegacyResolutions)
})
t.Run("Concurrent-Token-Resolve", func(t *testing.T) {
t.Parallel()
@ -1705,8 +1673,7 @@ func testACLResolver_variousTokens(t *testing.T, delegate *ACLResolverTestDelega
runTwiceAndReset("Missing Identity", func(t *testing.T) {
delegate.UseTestLocalData(nil)
authz, err := r.ResolveToken("doesn't exist")
require.Nil(t, authz)
_, err := r.ResolveToken("doesn't exist")
require.Error(t, err)
require.True(t, acl.IsErrNotFound(err))
})
@ -3959,12 +3926,12 @@ func TestACLResolver_AgentRecovery(t *testing.T) {
tokens.UpdateAgentRecoveryToken("9a184a11-5599-459e-b71a-550e5f9a5a23", token.TokenSourceConfig)
ident, authz, err := r.ResolveTokenToIdentityAndAuthorizer("9a184a11-5599-459e-b71a-550e5f9a5a23")
authz, err := r.ResolveToken("9a184a11-5599-459e-b71a-550e5f9a5a23")
require.NoError(t, err)
require.NotNil(t, ident)
require.Equal(t, "agent-recovery:foo", ident.ID())
require.NotNil(t, authz)
require.Equal(t, r.agentRecoveryAuthz, authz)
require.NotNil(t, authz.ACLIdentity)
require.Equal(t, "agent-recovery:foo", authz.ACLIdentity.ID())
require.NotNil(t, authz.Authorizer)
require.Equal(t, r.agentRecoveryAuthz, authz.Authorizer)
require.Equal(t, acl.Allow, authz.AgentWrite("foo", nil))
require.Equal(t, acl.Allow, authz.NodeRead("bar", nil))
require.Equal(t, acl.Deny, authz.NodeWrite("bar", nil))
@ -4028,7 +3995,7 @@ func TestACLResolver_ACLsEnabled(t *testing.T) {
}
func TestACLResolver_ResolveTokenToIdentityAndAuthorizer_UpdatesPurgeTheCache(t *testing.T) {
func TestACLResolver_ResolveToken_UpdatesPurgeTheCache(t *testing.T) {
if testing.Short() {
t.Skip("too slow for testing.Short")
}
@ -4065,7 +4032,7 @@ func TestACLResolver_ResolveTokenToIdentityAndAuthorizer_UpdatesPurgeTheCache(t
require.NoError(t, err)
runStep(t, "first resolve", func(t *testing.T) {
_, authz, err := srv.acls.ResolveTokenToIdentityAndAuthorizer(token)
authz, err := srv.ACLResolver.ResolveToken(token)
require.NoError(t, err)
require.NotNil(t, authz)
require.Equal(t, acl.Allow, authz.KeyRead("foo", nil))
@ -4084,7 +4051,7 @@ func TestACLResolver_ResolveTokenToIdentityAndAuthorizer_UpdatesPurgeTheCache(t
err := msgpackrpc.CallWithCodec(codec, "ACL.PolicySet", &reqPolicy, &structs.ACLPolicy{})
require.NoError(t, err)
_, authz, err := srv.acls.ResolveTokenToIdentityAndAuthorizer(token)
authz, err := srv.ACLResolver.ResolveToken(token)
require.NoError(t, err)
require.NotNil(t, authz)
require.Equal(t, acl.Deny, authz.KeyRead("foo", nil))
@ -4100,7 +4067,7 @@ func TestACLResolver_ResolveTokenToIdentityAndAuthorizer_UpdatesPurgeTheCache(t
err := msgpackrpc.CallWithCodec(codec, "ACL.TokenDelete", &req, &resp)
require.NoError(t, err)
_, _, err = srv.acls.ResolveTokenToIdentityAndAuthorizer(token)
_, err = srv.ACLResolver.ResolveToken(token)
require.True(t, acl.IsErrNotFound(err), "Error %v is not acl.ErrNotFound", err)
})
}

View File

@ -107,7 +107,7 @@ func (s *Server) reapExpiredACLTokens(local, global bool) (int, error) {
// Purge the identities from the cache
for _, secretID := range secretIDs {
s.acls.cache.RemoveIdentity(tokenSecretCacheID(secretID))
s.ACLResolver.cache.RemoveIdentity(tokenSecretCacheID(secretID))
}
return len(req.TokenIDs), nil

View File

@ -56,7 +56,7 @@ type Client struct {
config *Config
// acls is used to resolve tokens to effective policies
acls *ACLResolver
*ACLResolver
// Connection pool to consul servers
connPool *pool.ConnPool
@ -127,7 +127,7 @@ func NewClient(config *Config, deps Deps) (*Client, error) {
Tokens: deps.Tokens,
}
var err error
if c.acls, err = NewACLResolver(&aclConfig); err != nil {
if c.ACLResolver, err = NewACLResolver(&aclConfig); err != nil {
c.Shutdown()
return nil, fmt.Errorf("Failed to create ACL resolver: %v", err)
}
@ -172,7 +172,7 @@ func (c *Client) Shutdown() error {
// Close the connection pool
c.connPool.Shutdown()
c.acls.Close()
c.ACLResolver.Close()
return nil
}

View File

@ -558,92 +558,88 @@ func TestConnectCAConfig_Vault_TriggerRotation_Fails(t *testing.T) {
t.Parallel()
testVault := ca.NewTestVaultServer(t)
defer testVault.Stop()
_, s1 := testServerWithConfig(t, func(c *Config) {
c.Build = "1.6.0"
c.PrimaryDatacenter = "dc1"
c.CAConfig = &structs.CAConfiguration{
Provider: "vault",
Config: map[string]interface{}{
"Address": testVault.Addr,
"Token": testVault.RootToken,
"RootPKIPath": "pki-root/",
"IntermediatePKIPath": "pki-intermediate/",
},
newConfig := func(keyType string, keyBits int) map[string]interface{} {
return map[string]interface{}{
"Address": testVault.Addr,
"Token": testVault.RootToken,
"RootPKIPath": "pki-root/",
"IntermediatePKIPath": "pki-intermediate/",
"PrivateKeyType": keyType,
"PrivateKeyBits": keyBits,
}
})
defer s1.Shutdown()
codec := rpcClient(t, s1)
defer codec.Close()
testrpc.WaitForTestAgent(t, s1.RPC, "dc1")
// Capture the current root.
{
rootList, _, err := getTestRoots(s1, "dc1")
require.NoError(t, err)
require.Len(t, rootList.Roots, 1)
}
cases := []struct {
_, s1 := testServerWithConfig(t, func(c *Config) {
c.CAConfig = &structs.CAConfiguration{
Provider: "vault",
Config: newConfig(connect.DefaultPrivateKeyType, connect.DefaultPrivateKeyBits),
}
})
testrpc.WaitForTestAgent(t, s1.RPC, "dc1")
// note: unlike many table tests, the ordering of these cases does matter
// because any non-errored case will modify the CA config, and any subsequent
// tests will use the same agent with that new CA config.
testSteps := []struct {
name string
configFn func() (*structs.CAConfiguration, error)
configFn func() *structs.CAConfiguration
expectErr string
}{
{
name: "cannot edit key bits",
configFn: func() (*structs.CAConfiguration, error) {
name: "allow modifying key type and bits from default",
configFn: func() *structs.CAConfiguration {
return &structs.CAConfiguration{
Provider: "vault",
Config: map[string]interface{}{
"Address": testVault.Addr,
"Token": testVault.RootToken,
"RootPKIPath": "pki-root/",
"IntermediatePKIPath": "pki-intermediate/",
//
"PrivateKeyType": "ec",
"PrivateKeyBits": 384,
},
Provider: "vault",
Config: newConfig("rsa", 4096),
ForceWithoutCrossSigning: true,
}, nil
}
},
expectErr: `error generating CA root certificate: cannot update the PrivateKeyBits field without choosing a new PKI mount for the root CA`,
},
{
name: "cannot edit key type",
configFn: func() (*structs.CAConfiguration, error) {
name: "error when trying to modify key bits",
configFn: func() *structs.CAConfiguration {
return &structs.CAConfiguration{
Provider: "vault",
Config: map[string]interface{}{
"Address": testVault.Addr,
"Token": testVault.RootToken,
"RootPKIPath": "pki-root/",
"IntermediatePKIPath": "pki-intermediate/",
//
"PrivateKeyType": "rsa",
"PrivateKeyBits": 4096,
},
Provider: "vault",
Config: newConfig("rsa", 2048),
ForceWithoutCrossSigning: true,
}, nil
}
},
expectErr: `cannot update the PrivateKeyBits field without changing RootPKIPath`,
},
{
name: "error when trying to modify key type",
configFn: func() *structs.CAConfiguration {
return &structs.CAConfiguration{
Provider: "vault",
Config: newConfig("ec", 256),
ForceWithoutCrossSigning: true,
}
},
expectErr: `cannot update the PrivateKeyType field without changing RootPKIPath`,
},
{
name: "allow update that does not change key type or bits",
configFn: func() *structs.CAConfiguration {
return &structs.CAConfiguration{
Provider: "vault",
Config: newConfig("rsa", 4096),
ForceWithoutCrossSigning: true,
}
},
expectErr: `error generating CA root certificate: cannot update the PrivateKeyType field without choosing a new PKI mount for the root CA`,
},
}
for _, tc := range cases {
for _, tc := range testSteps {
t.Run(tc.name, func(t *testing.T) {
newConfig, err := tc.configFn()
require.NoError(t, err)
args := &structs.CARequest{
Datacenter: "dc1",
Config: newConfig,
Config: tc.configFn(),
}
var reply interface{}
err = msgpackrpc.CallWithCodec(codec, "ConnectCA.ConfigurationSet", args, &reply)
codec := rpcClient(t, s1)
err := msgpackrpc.CallWithCodec(codec, "ConnectCA.ConfigurationSet", args, &reply)
if tc.expectErr == "" {
require.NoError(t, err)
} else {

View File

@ -100,20 +100,13 @@ func (s *Intention) Apply(args *structs.IntentionRequest, reply *string) error {
}
// Get the ACL token for the request for the checks below.
identity, authz, err := s.srv.acls.ResolveTokenToIdentityAndAuthorizer(args.Token)
var entMeta structs.EnterpriseMeta
authz, err := s.srv.ACLResolver.ResolveTokenAndDefaultMeta(args.Token, &entMeta, nil)
if err != nil {
return err
}
var accessorID string
var entMeta structs.EnterpriseMeta
if identity != nil {
entMeta.Merge(identity.EnterpriseMetadata())
accessorID = identity.ID()
} else {
entMeta.Merge(structs.DefaultEnterpriseMetaInDefaultPartition())
}
accessorID := authz.AccessorID()
var (
mut *structs.IntentionMutation
legacyWrite bool
@ -432,7 +425,8 @@ func (s *Intention) Get(args *structs.IntentionQueryRequest, reply *structs.Inde
// Get the ACL token for the request for the checks below.
var entMeta structs.EnterpriseMeta
if _, err := s.srv.ResolveTokenAndDefaultMeta(args.Token, &entMeta, nil); err != nil {
authz, err := s.srv.ResolveTokenAndDefaultMeta(args.Token, &entMeta, nil)
if err != nil {
return err
}
@ -479,13 +473,11 @@ func (s *Intention) Get(args *structs.IntentionQueryRequest, reply *structs.Inde
reply.Intentions = structs.Intentions{ixn}
// Filter
if err := s.srv.filterACL(args.Token, reply); err != nil {
return err
}
s.srv.filterACLWithAuthorizer(authz, reply)
// If ACLs prevented any responses, error
if len(reply.Intentions) == 0 {
accessorID := s.aclAccessorID(args.Token)
accessorID := authz.AccessorID()
// todo(kit) Migrate intention access denial logging over to audit logging when we implement it
s.logger.Warn("Request to get intention denied due to ACLs", "intention", args.IntentionID, "accessorID", accessorID)
return acl.ErrPermissionDenied
@ -618,7 +610,7 @@ func (s *Intention) Match(args *structs.IntentionQueryRequest, reply *structs.In
for _, entry := range args.Match.Entries {
entry.FillAuthzContext(&authzContext)
if prefix := entry.Name; prefix != "" && authz.IntentionRead(prefix, &authzContext) != acl.Allow {
accessorID := s.aclAccessorID(args.Token)
accessorID := authz.AccessorID()
// todo(kit) Migrate intention access denial logging over to audit logging when we implement it
s.logger.Warn("Operation on intention prefix denied due to ACLs", "prefix", prefix, "accessorID", accessorID)
return acl.ErrPermissionDenied
@ -708,7 +700,7 @@ func (s *Intention) Check(args *structs.IntentionQueryRequest, reply *structs.In
var authzContext acl.AuthorizerContext
query.FillAuthzContext(&authzContext)
if authz.ServiceRead(prefix, &authzContext) != acl.Allow {
accessorID := s.aclAccessorID(args.Token)
accessorID := authz.AccessorID()
// todo(kit) Migrate intention access denial logging over to audit logging when we implement it
s.logger.Warn("test on intention denied due to ACLs", "prefix", prefix, "accessorID", accessorID)
return acl.ErrPermissionDenied
@ -760,24 +752,6 @@ func (s *Intention) Check(args *structs.IntentionQueryRequest, reply *structs.In
return nil
}
// aclAccessorID is used to convert an ACLToken's secretID to its accessorID for non-
// critical purposes, such as logging. Therefore we interpret all errors as empty-string
// so we can safely log it without handling non-critical errors at the usage site.
func (s *Intention) aclAccessorID(secretID string) string {
_, ident, err := s.srv.ResolveIdentityFromToken(secretID)
if acl.IsErrNotFound(err) {
return ""
}
if err != nil {
s.logger.Debug("non-critical error resolving acl token accessor for logging", "error", err)
return ""
}
if ident == nil {
return ""
}
return ident.ID()
}
func (s *Intention) validateEnterpriseIntention(ixn *structs.Intention) error {
if err := s.srv.validateEnterpriseIntentionPartition(ixn.SourcePartition); err != nil {
return fmt.Errorf("Invalid source partition %q: %v", ixn.SourcePartition, err)

View File

@ -401,13 +401,13 @@ func (m *Internal) EventFire(args *structs.EventFireRequest,
}
// Check ACLs
authz, err := m.srv.ResolveToken(args.Token)
authz, err := m.srv.ResolveTokenAndDefaultMeta(args.Token, nil, nil)
if err != nil {
return err
}
if authz.EventWrite(args.Name, nil) != acl.Allow {
accessorID := m.aclAccessorID(args.Token)
accessorID := authz.AccessorID()
m.logger.Warn("user event blocked by ACLs", "event", args.Name, "accessorID", accessorID)
return acl.ErrPermissionDenied
}
@ -433,11 +433,11 @@ func (m *Internal) KeyringOperation(
}
// Check ACLs
identity, authz, err := m.srv.acls.ResolveTokenToIdentityAndAuthorizer(args.Token)
authz, err := m.srv.ACLResolver.ResolveToken(args.Token)
if err != nil {
return err
}
if err := m.srv.validateEnterpriseToken(identity); err != nil {
if err := m.srv.validateEnterpriseToken(authz.Identity()); err != nil {
return err
}
switch args.Operation {
@ -545,21 +545,3 @@ func (m *Internal) executeKeyringOpMgr(
return serfResp, err
}
// aclAccessorID is used to convert an ACLToken's secretID to its accessorID for non-
// critical purposes, such as logging. Therefore we interpret all errors as empty-string
// so we can safely log it without handling non-critical errors at the usage site.
func (m *Internal) aclAccessorID(secretID string) string {
_, ident, err := m.srv.ResolveIdentityFromToken(secretID)
if acl.IsErrNotFound(err) {
return ""
}
if err != nil {
m.logger.Debug("non-critical error resolving acl token accessor for logging", "error", err)
return ""
}
if ident == nil {
return ""
}
return ident.ID()
}

View File

@ -363,7 +363,7 @@ func (s *Server) initializeACLs(ctx context.Context) error {
// Purge the cache, since it could've changed while we were not the
// leader.
s.acls.cache.Purge()
s.ACLResolver.cache.Purge()
// Purge the auth method validators since they could've changed while we
// were not leader.

View File

@ -654,7 +654,7 @@ func (c *CAManager) secondaryInitializeIntermediateCA(provider ca.Provider, conf
}
if needsNewIntermediate {
if err := c.secondaryRenewIntermediate(provider, newActiveRoot); err != nil {
if err := c.secondaryRequestNewSigningCert(provider, newActiveRoot); err != nil {
return err
}
} else {
@ -781,7 +781,7 @@ func (c *CAManager) UpdateConfiguration(args *structs.CARequest) (reterr error)
}()
// Attempt to initialize the config if we failed to do so in Initialize for some reason
_, err = c.initializeCAConfig()
prevConfig, err := c.initializeCAConfig()
if err != nil {
return err
}
@ -832,6 +832,15 @@ func (c *CAManager) UpdateConfiguration(args *structs.CARequest) (reterr error)
RawConfig: args.Config.Config,
State: args.Config.State,
}
if args.Config.Provider == config.Provider {
if validator, ok := newProvider.(ValidateConfigUpdater); ok {
if err := validator.ValidateConfigUpdate(prevConfig.Config, args.Config.Config); err != nil {
return fmt.Errorf("new configuration is incompatible with previous configuration: %w", err)
}
}
}
if err := newProvider.Configure(pCfg); err != nil {
return fmt.Errorf("error configuring provider: %v", err)
}
@ -858,6 +867,19 @@ func (c *CAManager) UpdateConfiguration(args *structs.CARequest) (reterr error)
return nil
}
// ValidateConfigUpdater is an optional interface that may be implemented
// by a ca.Provider. If the provider implements this interface, its
// ValidateConfigUpdate method will be called when a user attempts to change the
// CA configuration, and the provider type has not changed from the previous
// configuration.
type ValidateConfigUpdater interface {
// ValidateConfigUpdate should return an error if the next configuration is
// incompatible with the previous configuration.
//
// TODO: use better types after https://github.com/hashicorp/consul/issues/12238
ValidateConfigUpdate(previous, next map[string]interface{}) error
}
func (c *CAManager) primaryUpdateRootCA(newProvider ca.Provider, args *structs.CARequest, config *structs.CAConfiguration) error {
providerRoot, err := newProvider.GenerateRoot()
if err != nil {
@ -1028,9 +1050,11 @@ func (c *CAManager) primaryRenewIntermediate(provider ca.Provider, newActiveRoot
return nil
}
// secondaryRenewIntermediate should only be called while the state lock is held by
// setting the state to non-ready.
func (c *CAManager) secondaryRenewIntermediate(provider ca.Provider, newActiveRoot *structs.CARoot) error {
// secondaryRequestNewSigningCert creates a Certificate Signing Request, sends
// the request to the primary, and stores the received certificate in the
// provider.
// Should only be called while the state lock is held by setting the state to non-ready.
func (c *CAManager) secondaryRequestNewSigningCert(provider ca.Provider, newActiveRoot *structs.CARoot) error {
csr, err := provider.GenerateIntermediateCSR()
if err != nil {
return err
@ -1145,7 +1169,7 @@ func (c *CAManager) RenewIntermediate(ctx context.Context, isPrimary bool) error
// Enough time has passed, go ahead with getting a new intermediate.
renewalFunc := c.primaryRenewIntermediate
if !isPrimary {
renewalFunc = c.secondaryRenewIntermediate
renewalFunc = c.secondaryRequestNewSigningCert
}
errCh := make(chan error, 1)
go func() {

View File

@ -17,6 +17,7 @@ import (
"time"
msgpackrpc "github.com/hashicorp/net-rpc-msgpackrpc"
vaultapi "github.com/hashicorp/vault/api"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@ -606,6 +607,88 @@ func TestCAManager_UpdateConfiguration_Vault_Primary(t *testing.T) {
require.Equal(t, connect.HexString(cert.SubjectKeyId), newRoot.SigningKeyID)
}
func TestCAManager_Initialize_Vault_WithIntermediateAsPrimaryCA(t *testing.T) {
if testing.Short() {
t.Skip("too slow for testing.Short")
}
ca.SkipIfVaultNotPresent(t)
vault := ca.NewTestVaultServer(t)
vclient := vault.Client()
generateExternalRootCA(t, vclient)
meshRootPath := "pki-root"
primaryCert := setupPrimaryCA(t, vclient, meshRootPath)
_, s1 := testServerWithConfig(t, func(c *Config) {
c.CAConfig = &structs.CAConfiguration{
Provider: "vault",
Config: map[string]interface{}{
"Address": vault.Addr,
"Token": vault.RootToken,
"RootPKIPath": meshRootPath,
"IntermediatePKIPath": "pki-intermediate/",
// TODO: there are failures to init the CA system if these are not set
// to the values of the already initialized CA.
"PrivateKeyType": "ec",
"PrivateKeyBits": 256,
},
}
})
defer s1.Shutdown()
runStep(t, "check primary DC", func(t *testing.T) {
testrpc.WaitForTestAgent(t, s1.RPC, "dc1")
codec := rpcClient(t, s1)
roots := structs.IndexedCARoots{}
err := msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", &structs.DCSpecificRequest{}, &roots)
require.NoError(t, err)
require.Len(t, roots.Roots, 1)
require.Equal(t, primaryCert, roots.Roots[0].RootCert)
leafCertPEM := getLeafCert(t, codec, roots.TrustDomain, "dc1")
verifyLeafCert(t, roots.Roots[0], leafCertPEM)
})
// TODO: renew primary leaf signing cert
// TODO: rotate root
runStep(t, "run secondary DC", func(t *testing.T) {
_, sDC2 := testServerWithConfig(t, func(c *Config) {
c.Datacenter = "dc2"
c.PrimaryDatacenter = "dc1"
c.CAConfig = &structs.CAConfiguration{
Provider: "vault",
Config: map[string]interface{}{
"Address": vault.Addr,
"Token": vault.RootToken,
"RootPKIPath": meshRootPath,
"IntermediatePKIPath": "pki-secondary/",
// TODO: there are failures to init the CA system if these are not set
// to the values of the already initialized CA.
"PrivateKeyType": "ec",
"PrivateKeyBits": 256,
},
}
})
defer sDC2.Shutdown()
joinWAN(t, sDC2, s1)
testrpc.WaitForActiveCARoot(t, sDC2.RPC, "dc2", nil)
codec := rpcClient(t, sDC2)
roots := structs.IndexedCARoots{}
err := msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", &structs.DCSpecificRequest{}, &roots)
require.NoError(t, err)
require.Len(t, roots.Roots, 1)
leafCertPEM := getLeafCert(t, codec, roots.TrustDomain, "dc2")
verifyLeafCert(t, roots.Roots[0], leafCertPEM)
// TODO: renew secondary leaf signing cert
})
}
func getLeafCert(t *testing.T, codec rpc.ClientCodec, trustDomain string, dc string) string {
pk, _, err := connect.GeneratePrivateKey()
require.NoError(t, err)
@ -624,3 +707,58 @@ func getLeafCert(t *testing.T, codec rpc.ClientCodec, trustDomain string, dc str
return cert.CertPEM
}
func generateExternalRootCA(t *testing.T, client *vaultapi.Client) string {
t.Helper()
err := client.Sys().Mount("corp", &vaultapi.MountInput{
Type: "pki",
Description: "External root, probably corporate CA",
Config: vaultapi.MountConfigInput{
MaxLeaseTTL: "2400h",
DefaultLeaseTTL: "1h",
},
})
require.NoError(t, err, "failed to mount")
resp, err := client.Logical().Write("corp/root/generate/internal", map[string]interface{}{
"common_name": "corporate CA",
"ttl": "2400h",
})
require.NoError(t, err, "failed to generate root")
return resp.Data["certificate"].(string)
}
func setupPrimaryCA(t *testing.T, client *vaultapi.Client, path string) string {
t.Helper()
err := client.Sys().Mount(path, &vaultapi.MountInput{
Type: "pki",
Description: "primary CA for Consul CA",
Config: vaultapi.MountConfigInput{
MaxLeaseTTL: "2200h",
DefaultLeaseTTL: "1h",
},
})
require.NoError(t, err, "failed to mount")
out, err := client.Logical().Write(path+"/intermediate/generate/internal", map[string]interface{}{
"common_name": "primary CA",
"ttl": "2200h",
"key_type": "ec",
"key_bits": 256,
})
require.NoError(t, err, "failed to generate root")
intermediate, err := client.Logical().Write("corp/root/sign-intermediate", map[string]interface{}{
"csr": out.Data["csr"],
"use_csr_values": true,
"format": "pem_bundle",
"ttl": "2200h",
})
require.NoError(t, err, "failed to sign intermediate")
_, err = client.Logical().Write(path+"/intermediate/set-signed", map[string]interface{}{
"certificate": intermediate.Data["certificate"],
})
require.NoError(t, err, "failed to set signed intermediate")
return ca.EnsureTrailingNewline(intermediate.Data["certificate"].(string))
}

View File

@ -17,11 +17,11 @@ func (op *Operator) AutopilotGetConfiguration(args *structs.DCSpecificRequest, r
}
// This action requires operator read access.
identity, authz, err := op.srv.acls.ResolveTokenToIdentityAndAuthorizer(args.Token)
authz, err := op.srv.ACLResolver.ResolveToken(args.Token)
if err != nil {
return err
}
if err := op.srv.validateEnterpriseToken(identity); err != nil {
if err := op.srv.validateEnterpriseToken(authz.Identity()); err != nil {
return err
}
if authz.OperatorRead(nil) != acl.Allow {
@ -49,11 +49,11 @@ func (op *Operator) AutopilotSetConfiguration(args *structs.AutopilotSetConfigRe
}
// This action requires operator write access.
identity, authz, err := op.srv.acls.ResolveTokenToIdentityAndAuthorizer(args.Token)
authz, err := op.srv.ACLResolver.ResolveToken(args.Token)
if err != nil {
return err
}
if err := op.srv.validateEnterpriseToken(identity); err != nil {
if err := op.srv.validateEnterpriseToken(authz.Identity()); err != nil {
return err
}
if authz.OperatorWrite(nil) != acl.Allow {
@ -84,11 +84,11 @@ func (op *Operator) ServerHealth(args *structs.DCSpecificRequest, reply *structs
}
// This action requires operator read access.
identity, authz, err := op.srv.acls.ResolveTokenToIdentityAndAuthorizer(args.Token)
authz, err := op.srv.ACLResolver.ResolveToken(args.Token)
if err != nil {
return err
}
if err := op.srv.validateEnterpriseToken(identity); err != nil {
if err := op.srv.validateEnterpriseToken(authz.Identity()); err != nil {
return err
}
if authz.OperatorRead(nil) != acl.Allow {
@ -151,11 +151,11 @@ func (op *Operator) AutopilotState(args *structs.DCSpecificRequest, reply *autop
}
// This action requires operator read access.
identity, authz, err := op.srv.acls.ResolveTokenToIdentityAndAuthorizer(args.Token)
authz, err := op.srv.ACLResolver.ResolveToken(args.Token)
if err != nil {
return err
}
if err := op.srv.validateEnterpriseToken(identity); err != nil {
if err := op.srv.validateEnterpriseToken(authz.Identity()); err != nil {
return err
}
if authz.OperatorRead(nil) != acl.Allow {

View File

@ -81,11 +81,11 @@ func (op *Operator) RaftRemovePeerByAddress(args *structs.RaftRemovePeerRequest,
// This is a super dangerous operation that requires operator write
// access.
identity, authz, err := op.srv.acls.ResolveTokenToIdentityAndAuthorizer(args.Token)
authz, err := op.srv.ACLResolver.ResolveToken(args.Token)
if err != nil {
return err
}
if err := op.srv.validateEnterpriseToken(identity); err != nil {
if err := op.srv.validateEnterpriseToken(authz.Identity()); err != nil {
return err
}
if authz.OperatorWrite(nil) != acl.Allow {
@ -134,11 +134,11 @@ func (op *Operator) RaftRemovePeerByID(args *structs.RaftRemovePeerRequest, repl
// This is a super dangerous operation that requires operator write
// access.
identity, authz, err := op.srv.acls.ResolveTokenToIdentityAndAuthorizer(args.Token)
authz, err := op.srv.ACLResolver.ResolveToken(args.Token)
if err != nil {
return err
}
if err := op.srv.validateEnterpriseToken(identity); err != nil {
if err := op.srv.validateEnterpriseToken(authz.Identity()); err != nil {
return err
}
if authz.OperatorWrite(nil) != acl.Allow {

View File

@ -141,7 +141,7 @@ type Server struct {
aclConfig *acl.Config
// acls is used to resolve tokens to effective policies
acls *ACLResolver
*ACLResolver
aclAuthMethodValidators authmethod.Cache
@ -457,7 +457,7 @@ func NewServer(config *Config, flat Deps) (*Server, error) {
Tokens: flat.Tokens,
}
// Initialize the ACL resolver.
if s.acls, err = NewACLResolver(&aclConfig); err != nil {
if s.ACLResolver, err = NewACLResolver(&aclConfig); err != nil {
s.Shutdown()
return nil, fmt.Errorf("Failed to create ACL resolver: %v", err)
}
@ -994,8 +994,8 @@ func (s *Server) Shutdown() error {
s.connPool.Shutdown()
}
if s.acls != nil {
s.acls.Close()
if s.ACLResolver != nil {
s.ACLResolver.Close()
}
if s.fsm != nil {
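Replacing the named `acls` field with an embedded `*ACLResolver` leans on Go's method promotion: methods of an embedded field become callable directly on the outer struct. A minimal, self-contained sketch of the mechanism — the types here are illustrative stand-ins, not the real Consul structs:

```go
package main

import "fmt"

// ACLResolver is an illustrative stand-in for the real resolver type.
type ACLResolver struct{ enabled bool }

func (r *ACLResolver) ACLsEnabled() bool { return r.enabled }

// Server embeds *ACLResolver, so the resolver's methods are promoted
// onto Server itself.
type Server struct {
	*ACLResolver
}

func main() {
	s := &Server{ACLResolver: &ACLResolver{enabled: true}}
	// Promoted call: equivalent to s.ACLResolver.ACLsEnabled().
	fmt.Println(s.ACLsEnabled()) // true
}
```

This is why call sites can write `s.ACLResolver.ACLsEnabled()` (or even `s.ACLsEnabled()`) after the change, while code such as `Shutdown` still addresses the field by its type name for the nil check.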

View File

@ -121,7 +121,7 @@ func (s *Server) setupSerfConfig(opts setupSerfOptions) (*serf.Config, error) {
// TODO(ACL-Legacy-Compat): remove in phase 2. These are kept for now to
// allow for upgrades.
if s.acls.ACLsEnabled() {
if s.ACLResolver.ACLsEnabled() {
conf.Tags[metadata.TagACLs] = string(structs.ACLModeEnabled)
} else {
conf.Tags[metadata.TagACLs] = string(structs.ACLModeDisabled)

View File

@ -8,16 +8,13 @@ import (
"github.com/hashicorp/consul/agent/structs"
)
// checkCoordinateDisabled will return a standard response if coordinates are
// disabled. This returns true if they are disabled and we should not continue.
func (s *HTTPHandlers) checkCoordinateDisabled(resp http.ResponseWriter, req *http.Request) bool {
// checkCoordinateDisabled will return an unauthorized error if coordinates are
// disabled. Otherwise, a nil error will be returned.
func (s *HTTPHandlers) checkCoordinateDisabled() error {
if !s.agent.config.DisableCoordinates {
return false
return nil
}
resp.WriteHeader(http.StatusUnauthorized)
fmt.Fprint(resp, "Coordinate support disabled")
return true
return UnauthorizedError{Reason: "Coordinate support disabled"}
}
// sorter wraps a coordinate list and implements the sort.Interface to sort by
@ -44,8 +41,8 @@ func (s *sorter) Less(i, j int) bool {
// CoordinateDatacenters returns the WAN nodes in each datacenter, along with
// raw network coordinates.
func (s *HTTPHandlers) CoordinateDatacenters(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
if s.checkCoordinateDisabled(resp, req) {
return nil, nil
if err := s.checkCoordinateDisabled(); err != nil {
return nil, err
}
var out []structs.DatacenterMap
@ -73,8 +70,8 @@ func (s *HTTPHandlers) CoordinateDatacenters(resp http.ResponseWriter, req *http
// CoordinateNodes returns the LAN nodes in the given datacenter, along with
// raw network coordinates.
func (s *HTTPHandlers) CoordinateNodes(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
if s.checkCoordinateDisabled(resp, req) {
return nil, nil
if err := s.checkCoordinateDisabled(); err != nil {
return nil, err
}
args := structs.DCSpecificRequest{}
@ -98,8 +95,8 @@ func (s *HTTPHandlers) CoordinateNodes(resp http.ResponseWriter, req *http.Reque
// CoordinateNode returns the LAN node in the given datacenter, along with
// raw network coordinates.
func (s *HTTPHandlers) CoordinateNode(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
if s.checkCoordinateDisabled(resp, req) {
return nil, nil
if err := s.checkCoordinateDisabled(); err != nil {
return nil, err
}
node, err := getPathSuffixUnescaped(req.URL.Path, "/v1/coordinate/node/")
@ -153,15 +150,13 @@ func filterCoordinates(req *http.Request, in structs.Coordinates) structs.Coordi
// CoordinateUpdate inserts or updates the LAN coordinate of a node.
func (s *HTTPHandlers) CoordinateUpdate(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
if s.checkCoordinateDisabled(resp, req) {
return nil, nil
if err := s.checkCoordinateDisabled(); err != nil {
return nil, err
}
args := structs.CoordinateUpdateRequest{}
if err := decodeBody(req.Body, &args); err != nil {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(resp, "Request decode failed: %v", err)
return nil, nil
return nil, BadRequestError{Reason: fmt.Sprintf("Request decode failed: %v", err)}
}
s.parseDC(req, &args.Datacenter)
s.parseToken(req, &args.Token)

View File

@ -39,16 +39,14 @@ func TestCoordinate_Disabled_Response(t *testing.T) {
req, _ := http.NewRequest("PUT", "/should/not/care", nil)
resp := httptest.NewRecorder()
obj, err := tt(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
err, ok := err.(UnauthorizedError)
if !ok {
t.Fatalf("expected unauthorized error but got %v", err)
}
if obj != nil {
t.Fatalf("bad: %#v", obj)
}
if got, want := resp.Code, http.StatusUnauthorized; got != want {
t.Fatalf("got %d want %d", got, want)
}
if !strings.Contains(resp.Body.String(), "Coordinate support disabled") {
if !strings.Contains(err.Error(), "Coordinate support disabled") {
t.Fatalf("bad: %#v", resp)
}
})

View File

@ -47,11 +47,6 @@ func (m *delegateMock) RemoveFailedNode(node string, prune bool, entMeta *struct
return m.Called(node, prune, entMeta).Error(0)
}
func (m *delegateMock) ResolveTokenToIdentity(token string) (structs.ACLIdentity, error) {
ret := m.Called(token)
return ret.Get(0).(structs.ACLIdentity), ret.Error(1)
}
func (m *delegateMock) ResolveTokenAndDefaultMeta(token string, entMeta *structs.EnterpriseMeta, authzContext *acl.AuthorizerContext) (acl.Authorizer, error) {
ret := m.Called(token, entMeta, authzContext)
return ret.Get(0).(acl.Authorizer), ret.Error(1)

View File

@ -51,9 +51,7 @@ func (s *HTTPHandlers) DiscoveryChainRead(resp http.ResponseWriter, req *http.Re
if apiReq.OverrideMeshGateway.Mode != "" {
_, err := structs.ValidateMeshGatewayMode(string(apiReq.OverrideMeshGateway.Mode))
if err != nil {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Invalid OverrideMeshGateway.Mode parameter")
return nil, nil
return nil, BadRequestError{Reason: "Invalid OverrideMeshGateway.Mode parameter"}
}
args.OverrideMeshGateway = apiReq.OverrideMeshGateway
}

View File

@ -2,7 +2,6 @@ package agent
import (
"bytes"
"fmt"
"io"
"net/http"
"strconv"
@ -26,9 +25,7 @@ func (s *HTTPHandlers) EventFire(resp http.ResponseWriter, req *http.Request) (i
return nil, err
}
if event.Name == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing name")
return nil, nil
return nil, BadRequestError{Reason: "Missing name"}
}
// Get the ACL token
@ -58,9 +55,7 @@ func (s *HTTPHandlers) EventFire(resp http.ResponseWriter, req *http.Request) (i
// Try to fire the event
if err := s.agent.UserEvent(dc, token, event); err != nil {
if acl.IsErrPermissionDenied(err) {
resp.WriteHeader(http.StatusForbidden)
fmt.Fprint(resp, acl.ErrPermissionDenied.Error())
return nil, nil
return nil, ForbiddenError{Reason: acl.ErrPermissionDenied.Error()}
}
resp.WriteHeader(http.StatusInternalServerError)
return nil, err

View File

@ -88,13 +88,11 @@ func TestEventFire_token(t *testing.T) {
url := fmt.Sprintf("/v1/event/fire/%s?token=%s", c.event, token)
req, _ := http.NewRequest("PUT", url, nil)
resp := httptest.NewRecorder()
if _, err := a.srv.EventFire(resp, req); err != nil {
t.Fatalf("err: %s", err)
}
_, err := a.srv.EventFire(resp, req)
// Check the result
body := resp.Body.String()
if c.allowed {
body := resp.Body.String()
if acl.IsErrPermissionDenied(errors.New(body)) {
t.Fatalf("bad: %s", body)
}
@ -102,11 +100,11 @@ func TestEventFire_token(t *testing.T) {
t.Fatalf("bad: %d", resp.Code)
}
} else {
if !acl.IsErrPermissionDenied(errors.New(body)) {
t.Fatalf("bad: %s", body)
if !acl.IsErrPermissionDenied(err) {
t.Fatalf("bad: %s", err.Error())
}
if resp.Code != 403 {
t.Fatalf("bad: %d", resp.Code)
if err, ok := err.(ForbiddenError); !ok {
t.Fatalf("Expected forbidden but got %v", err)
}
}
}

View File

@ -1,7 +1,6 @@
package agent
import (
"fmt"
"net/http"
"net/url"
"strconv"
@ -36,9 +35,7 @@ func (s *HTTPHandlers) HealthChecksInState(resp http.ResponseWriter, req *http.R
return nil, err
}
if args.State == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing check state")
return nil, nil
return nil, BadRequestError{Reason: "Missing check state"}
}
// Make the RPC request
@ -82,9 +79,7 @@ func (s *HTTPHandlers) HealthNodeChecks(resp http.ResponseWriter, req *http.Requ
// Pull out the service name
args.Node = strings.TrimPrefix(req.URL.Path, "/v1/health/node/")
if args.Node == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing node name")
return nil, nil
return nil, BadRequestError{Reason: "Missing node name"}
}
// Make the RPC request
@ -130,9 +125,7 @@ func (s *HTTPHandlers) HealthServiceChecks(resp http.ResponseWriter, req *http.R
// Pull out the service name
args.ServiceName = strings.TrimPrefix(req.URL.Path, "/v1/health/checks/")
if args.ServiceName == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing service name")
return nil, nil
return nil, BadRequestError{Reason: "Missing service name"}
}
// Make the RPC request
@ -218,9 +211,7 @@ func (s *HTTPHandlers) healthServiceNodes(resp http.ResponseWriter, req *http.Re
// Pull out the service name
args.ServiceName = strings.TrimPrefix(req.URL.Path, prefix)
if args.ServiceName == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing service name")
return nil, nil
return nil, BadRequestError{Reason: "Missing service name"}
}
out, md, err := s.agent.rpcClientHealth.ServiceNodes(req.Context(), args)
@ -238,9 +229,7 @@ func (s *HTTPHandlers) healthServiceNodes(resp http.ResponseWriter, req *http.Re
// Filter to only passing if specified
filter, err := getBoolQueryParam(params, api.HealthPassing)
if err != nil {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Invalid value for ?passing")
return nil, nil
return nil, BadRequestError{Reason: "Invalid value for ?passing"}
}
// FIXME: remove filterNonPassing, replace with nodes.Filter, which is used by DNSServer

View File

@ -1,14 +1,13 @@
package agent
import (
"bytes"
"fmt"
"io/ioutil"
"net/http"
"net/http/httptest"
"net/url"
"reflect"
"strconv"
"strings"
"testing"
"time"
@ -1241,18 +1240,12 @@ func TestHealthServiceNodes_PassingFilter(t *testing.T) {
t.Run("passing_bad", func(t *testing.T) {
req, _ := http.NewRequest("GET", "/v1/health/service/consul?passing=nope-nope-nope", nil)
resp := httptest.NewRecorder()
a.srv.HealthServiceNodes(resp, req)
if code := resp.Code; code != 400 {
t.Errorf("bad response code %d, expected %d", code, 400)
_, err := a.srv.HealthServiceNodes(resp, req)
if _, ok := err.(BadRequestError); !ok {
t.Fatalf("Expected bad request error but got %v", err)
}
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
t.Fatal(err)
}
if !bytes.Contains(body, []byte("Invalid value for ?passing")) {
t.Errorf("bad %s", body)
if !strings.Contains(err.Error(), "Invalid value for ?passing") {
t.Errorf("bad %s", err.Error())
}
})
}
@ -1654,12 +1647,12 @@ func TestHealthConnectServiceNodes_PassingFilter(t *testing.T) {
req, _ := http.NewRequest("GET", fmt.Sprintf(
"/v1/health/connect/%s?passing=nope-nope", args.Service.Proxy.DestinationServiceName), nil)
resp := httptest.NewRecorder()
a.srv.HealthConnectServiceNodes(resp, req)
assert.Equal(t, 400, resp.Code)
_, err := a.srv.HealthConnectServiceNodes(resp, req)
assert.NotNil(t, err)
_, ok := err.(BadRequestError)
assert.True(t, ok)
body, err := ioutil.ReadAll(resp.Body)
assert.Nil(t, err)
assert.True(t, bytes.Contains(body, []byte("Invalid value for ?passing")))
assert.True(t, strings.Contains(err.Error(), "Invalid value for ?passing"))
})
}

View File

@ -78,6 +78,14 @@ func (e UnauthorizedError) Error() string {
return e.Reason
}
type EntityTooLargeError struct {
Reason string
}
func (e EntityTooLargeError) Error() string {
return e.Reason
}
// CodeWithPayloadError allow returning non HTTP 200
// Error codes while not returning PlainText payload
type CodeWithPayloadError struct {
@ -91,10 +99,11 @@ func (e CodeWithPayloadError) Error() string {
}
type ForbiddenError struct {
Reason string
}
func (e ForbiddenError) Error() string {
return "Access is restricted"
return e.Reason
}
// HTTPHandlers provides an HTTP api for an agent.
@ -443,6 +452,11 @@ func (s *HTTPHandlers) wrap(handler endpoint, methods []string) http.HandlerFunc
return err.Error() == consul.ErrRateLimited.Error()
}
isEntityToLarge := func(err error) bool {
_, ok := err.(EntityTooLargeError)
return ok
}
addAllowHeader := func(methods []string) {
resp.Header().Add("Allow", strings.Join(methods, ","))
}
@ -488,6 +502,9 @@ func (s *HTTPHandlers) wrap(handler endpoint, methods []string) http.HandlerFunc
case isTooManyRequests(err):
resp.WriteHeader(http.StatusTooManyRequests)
fmt.Fprint(resp, err.Error())
case isEntityToLarge(err):
resp.WriteHeader(http.StatusRequestEntityTooLarge)
fmt.Fprint(resp, err.Error())
default:
resp.WriteHeader(http.StatusInternalServerError)
fmt.Fprint(resp, err.Error())
@ -1136,7 +1153,7 @@ func (s *HTTPHandlers) checkWriteAccess(req *http.Request) error {
}
}
return ForbiddenError{}
return ForbiddenError{Reason: "Access is restricted"}
}
func (s *HTTPHandlers) parseFilter(req *http.Request, filter *string) {

View File

@ -323,9 +323,7 @@ func (s *HTTPHandlers) IntentionGetExact(resp http.ResponseWriter, req *http.Req
if err := s.agent.RPC("Intention.Get", &args, &reply); err != nil {
// We have to check the string since the RPC sheds the error type
if err.Error() == consul.ErrIntentionNotFound.Error() {
resp.WriteHeader(http.StatusNotFound)
fmt.Fprint(resp, err.Error())
return nil, nil
return nil, NotFoundError{Reason: err.Error()}
}
// Not ideal, but there are a number of error scenarios that are not
@ -521,9 +519,7 @@ func (s *HTTPHandlers) IntentionSpecificGet(id string, resp http.ResponseWriter,
if err := s.agent.RPC("Intention.Get", &args, &reply); err != nil {
// We have to check the string since the RPC sheds the error type
if err.Error() == consul.ErrIntentionNotFound.Error() {
resp.WriteHeader(http.StatusNotFound)
fmt.Fprint(resp, err.Error())
return nil, nil
return nil, NotFoundError{Reason: err.Error()}
}
// Not ideal, but there are a number of error scenarios that are not

View File

@ -55,8 +55,8 @@ func (s *HTTPHandlers) KVSGet(resp http.ResponseWriter, req *http.Request, args
params := req.URL.Query()
if _, ok := params["recurse"]; ok {
method = "KVS.List"
} else if missingKey(resp, args) {
return nil, nil
} else if args.Key == "" {
return nil, BadRequestError{Reason: "Missing key name"}
}
// Do not allow wildcard NS on GET reqs
@ -156,8 +156,8 @@ func (s *HTTPHandlers) KVSPut(resp http.ResponseWriter, req *http.Request, args
if err := s.parseEntMetaNoWildcard(req, &args.EnterpriseMeta); err != nil {
return nil, err
}
if missingKey(resp, args) {
return nil, nil
if args.Key == "" {
return nil, BadRequestError{Reason: "Missing key name"}
}
if conflictingFlags(resp, req, "cas", "acquire", "release") {
return nil, nil
@ -208,13 +208,10 @@ func (s *HTTPHandlers) KVSPut(resp http.ResponseWriter, req *http.Request, args
// Check the content-length
if req.ContentLength > int64(s.agent.config.KVMaxValueSize) {
resp.WriteHeader(http.StatusRequestEntityTooLarge)
fmt.Fprintf(resp,
"Request body(%d bytes) too large, max size: %d bytes. See %s.",
req.ContentLength, s.agent.config.KVMaxValueSize,
"https://www.consul.io/docs/agent/options.html#kv_max_value_size",
)
return nil, nil
return nil, EntityTooLargeError{
Reason: fmt.Sprintf("Request body(%d bytes) too large, max size: %d bytes. See %s.",
req.ContentLength, s.agent.config.KVMaxValueSize, "https://www.consul.io/docs/agent/options.html#kv_max_value_size"),
}
}
// Copy the value
@ -259,8 +256,8 @@ func (s *HTTPHandlers) KVSDelete(resp http.ResponseWriter, req *http.Request, ar
params := req.URL.Query()
if _, ok := params["recurse"]; ok {
applyReq.Op = api.KVDeleteTree
} else if missingKey(resp, args) {
return nil, nil
} else if args.Key == "" {
return nil, BadRequestError{Reason: "Missing key name"}
}
// Check for cas value
@ -286,16 +283,6 @@ func (s *HTTPHandlers) KVSDelete(resp http.ResponseWriter, req *http.Request, ar
return true, nil
}
// missingKey checks if the key is missing
func missingKey(resp http.ResponseWriter, args *structs.KeyRequest) bool {
if args.Key == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing key name")
return true
}
return false
}
// conflictingFlags determines if non-composable flags were passed in a request.
func conflictingFlags(resp http.ResponseWriter, req *http.Request, flags ...string) bool {
params := req.URL.Query()
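With `missingKey` inlined and `EntityTooLargeError` available, the size guard in `KVSPut` reduces to a pure function of the declared Content-Length and the configured limit. A hedged sketch of that guard — the function name and the simplified message are illustrative, not the exact agent code:

```go
package main

import "fmt"

// EntityTooLargeError mirrors the new handler error type
// (illustrative re-declaration, not the real agent package).
type EntityTooLargeError struct{ Reason string }

func (e EntityTooLargeError) Error() string { return e.Reason }

// checkRequestSize sketches the KVSPut guard: compare the declared
// Content-Length against the configured maximum before reading the body,
// and return a typed error instead of writing the status directly.
func checkRequestSize(contentLength, maxValueSize int64) error {
	if contentLength > maxValueSize {
		return EntityTooLargeError{
			Reason: fmt.Sprintf("Request body (%d bytes) too large, max size: %d bytes.",
				contentLength, maxValueSize),
		}
	}
	return nil
}

func main() {
	fmt.Println(checkRequestSize(100, 512*1024))        // <nil>
	fmt.Println(checkRequestSize(1<<20, 512*1024) != nil) // true
}
```

Checking the header before reading the body keeps oversized payloads from being buffered at all; the decoder-side check in `convertOps` below covers clients that omit Content-Length.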

View File

@ -14,6 +14,7 @@ import (
"github.com/hashicorp/go-hclog"
"github.com/hashicorp/consul/acl"
"github.com/hashicorp/consul/agent/consul"
"github.com/hashicorp/consul/agent/structs"
"github.com/hashicorp/consul/agent/token"
"github.com/hashicorp/consul/api"
@ -150,7 +151,7 @@ func (c *CheckState) CriticalFor() time.Duration {
type rpc interface {
RPC(method string, args interface{}, reply interface{}) error
ResolveTokenToIdentity(secretID string) (structs.ACLIdentity, error)
ResolveTokenAndDefaultMeta(token string, entMeta *structs.EnterpriseMeta, authzContext *acl.AuthorizerContext) (consul.ACLResolveResult, error)
}
// State is used to represent the node's services,
@ -1538,7 +1539,7 @@ func (l *State) notifyIfAliased(serviceID structs.ServiceID) {
// critical purposes, such as logging. Therefore we interpret all errors as empty-string
// so we can safely log it without handling non-critical errors at the usage site.
func (l *State) aclAccessorID(secretID string) string {
ident, err := l.Delegate.ResolveTokenToIdentity(secretID)
ident, err := l.Delegate.ResolveTokenAndDefaultMeta(secretID, nil, nil)
if acl.IsErrNotFound(err) {
return ""
}
@ -1546,8 +1547,5 @@ func (l *State) aclAccessorID(secretID string) string {
l.logger.Debug("non-critical error resolving acl token accessor for logging", "error", err)
return ""
}
if ident == nil {
return ""
}
return ident.ID()
return ident.AccessorID()
}
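The reworked `aclAccessorID` collapses every resolution failure to an empty accessor ID so call sites can log unconditionally. A sketch of that shape, with `resolve` standing in for `Delegate.ResolveTokenAndDefaultMeta` and a plain sentinel standing in for `acl.IsErrNotFound` (both stand-ins are assumptions for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("ACL not found")

// resolve is an illustrative stand-in for Delegate.ResolveTokenAndDefaultMeta.
func resolve(secret string) (string, error) {
	if secret == "anonymous" {
		return "", errNotFound
	}
	return "accessor-" + secret, nil
}

// accessorIDForLogging mirrors aclAccessorID: all errors collapse to ""
// so the caller can log the accessor without extra branching.
func accessorIDForLogging(secret string) string {
	id, err := resolve(secret)
	if errors.Is(err, errNotFound) {
		return ""
	}
	if err != nil {
		// The real code emits a debug log for non-critical errors here.
		return ""
	}
	return id
}

func main() {
	fmt.Println(accessorIDForLogging("s1")) // accessor-s1
	fmt.Println(accessorIDForLogging("anonymous") == "") // true
}
```

Because `ACLResolveResult` now exposes `AccessorID()` directly, the old `if ident == nil` guard in the original becomes unnecessary.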

View File

@ -12,8 +12,10 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/hashicorp/consul/acl"
"github.com/hashicorp/consul/agent"
"github.com/hashicorp/consul/agent/config"
"github.com/hashicorp/consul/agent/consul"
"github.com/hashicorp/consul/agent/local"
"github.com/hashicorp/consul/agent/structs"
"github.com/hashicorp/consul/agent/token"
@ -2372,6 +2374,6 @@ func (f *fakeRPC) RPC(method string, args interface{}, reply interface{}) error
return nil
}
func (f *fakeRPC) ResolveTokenToIdentity(_ string) (structs.ACLIdentity, error) {
return nil, nil
func (f *fakeRPC) ResolveTokenAndDefaultMeta(string, *structs.EnterpriseMeta, *acl.AuthorizerContext) (consul.ACLResolveResult, error) {
return consul.ACLResolveResult{}, nil
}

View File

@ -23,9 +23,7 @@ func (s *HTTPHandlers) preparedQueryCreate(resp http.ResponseWriter, req *http.R
s.parseDC(req, &args.Datacenter)
s.parseToken(req, &args.Token)
if err := decodeBody(req.Body, &args.Query); err != nil {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(resp, "Request decode failed: %v", err)
return nil, nil
return nil, BadRequestError{Reason: fmt.Sprintf("Request decode failed: %v", err)}
}
var reply string
@ -145,9 +143,7 @@ func (s *HTTPHandlers) preparedQueryExecute(id string, resp http.ResponseWriter,
// We have to check the string since the RPC sheds
// the specific error type.
if structs.IsErrQueryNotFound(err) {
resp.WriteHeader(http.StatusNotFound)
fmt.Fprint(resp, err.Error())
return nil, nil
return nil, NotFoundError{Reason: err.Error()}
}
return nil, err
}
@ -200,9 +196,7 @@ RETRY_ONCE:
// We have to check the string since the RPC sheds
// the specific error type.
if structs.IsErrQueryNotFound(err) {
resp.WriteHeader(http.StatusNotFound)
fmt.Fprint(resp, err.Error())
return nil, nil
return nil, NotFoundError{Reason: err.Error()}
}
return nil, err
}
@ -231,9 +225,7 @@ RETRY_ONCE:
// We have to check the string since the RPC sheds
// the specific error type.
if structs.IsErrQueryNotFound(err) {
resp.WriteHeader(http.StatusNotFound)
fmt.Fprint(resp, err.Error())
return nil, nil
return nil, NotFoundError{Reason: err.Error()}
}
return nil, err
}
@ -255,9 +247,7 @@ func (s *HTTPHandlers) preparedQueryUpdate(id string, resp http.ResponseWriter,
s.parseToken(req, &args.Token)
if req.ContentLength > 0 {
if err := decodeBody(req.Body, &args.Query); err != nil {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(resp, "Request decode failed: %v", err)
return nil, nil
return nil, BadRequestError{Reason: fmt.Sprintf("Request decode failed: %v", err)}
}
}

View File

@ -620,11 +620,9 @@ func TestPreparedQuery_Execute(t *testing.T) {
body := bytes.NewBuffer(nil)
req, _ := http.NewRequest("GET", "/v1/query/not-there/execute", body)
resp := httptest.NewRecorder()
if _, err := a.srv.PreparedQuerySpecific(resp, req); err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 404 {
t.Fatalf("bad code: %d", resp.Code)
_, err := a.srv.PreparedQuerySpecific(resp, req)
if err, ok := err.(NotFoundError); !ok {
t.Fatalf("Expected not found error but got %v", err)
}
})
}
@ -757,11 +755,9 @@ func TestPreparedQuery_Explain(t *testing.T) {
body := bytes.NewBuffer(nil)
req, _ := http.NewRequest("GET", "/v1/query/not-there/explain", body)
resp := httptest.NewRecorder()
if _, err := a.srv.PreparedQuerySpecific(resp, req); err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 404 {
t.Fatalf("bad code: %d", resp.Code)
_, err := a.srv.PreparedQuerySpecific(resp, req)
if err, ok := err.(NotFoundError); !ok {
t.Fatalf("Expected not found error but got %v", err)
}
})
@ -848,11 +844,9 @@ func TestPreparedQuery_Get(t *testing.T) {
body := bytes.NewBuffer(nil)
req, _ := http.NewRequest("GET", "/v1/query/f004177f-2c28-83b7-4229-eacc25fe55d1", body)
resp := httptest.NewRecorder()
if _, err := a.srv.PreparedQuerySpecific(resp, req); err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 404 {
t.Fatalf("bad code: %d", resp.Code)
_, err := a.srv.PreparedQuerySpecific(resp, req)
if err, ok := err.(NotFoundError); !ok {
t.Fatalf("Expected not found error but got %v", err)
}
})
}

View File

@ -40,9 +40,7 @@ func (s *HTTPHandlers) SessionCreate(resp http.ResponseWriter, req *http.Request
// Handle optional request body
if req.ContentLength > 0 {
if err := s.rewordUnknownEnterpriseFieldError(lib.DecodeJSON(req.Body, &args.Session)); err != nil {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(resp, "Request decode failed: %v", err)
return nil, nil
return nil, BadRequestError{Reason: fmt.Sprintf("Request decode failed: %v", err)}
}
}
@ -77,9 +75,7 @@ func (s *HTTPHandlers) SessionDestroy(resp http.ResponseWriter, req *http.Reques
return nil, err
}
if args.Session.ID == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing session")
return nil, nil
return nil, BadRequestError{Reason: "Missing session"}
}
var out string
@ -107,18 +103,14 @@ func (s *HTTPHandlers) SessionRenew(resp http.ResponseWriter, req *http.Request)
}
args.Session = args.SessionID
if args.SessionID == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing session")
return nil, nil
return nil, BadRequestError{Reason: "Missing session"}
}
var out structs.IndexedSessions
if err := s.agent.RPC("Session.Renew", &args, &out); err != nil {
return nil, err
} else if out.Sessions == nil {
resp.WriteHeader(http.StatusNotFound)
fmt.Fprintf(resp, "Session id '%s' not found", args.SessionID)
return nil, nil
return nil, NotFoundError{Reason: fmt.Sprintf("Session id '%s' not found", args.SessionID)}
}
return out.Sessions, nil
@ -142,9 +134,7 @@ func (s *HTTPHandlers) SessionGet(resp http.ResponseWriter, req *http.Request) (
}
args.Session = args.SessionID
if args.SessionID == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing session")
return nil, nil
return nil, BadRequestError{Reason: "Missing session"}
}
var out structs.IndexedSessions
@ -200,9 +190,7 @@ func (s *HTTPHandlers) SessionsForNode(resp http.ResponseWriter, req *http.Reque
return nil, err
}
if args.Node == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing node name")
return nil, nil
return nil, BadRequestError{Reason: "Missing node name"}
}
var out structs.IndexedSessions

View File

@ -89,8 +89,8 @@ var ACLBootstrapNotAllowedErr = errors.New("ACL bootstrap no longer allowed")
var ACLBootstrapInvalidResetIndexErr = errors.New("Invalid ACL bootstrap reset index")
type ACLIdentity interface {
// ID returns a string that can be used for logging and telemetry. This should not
// contain any secret data used for authentication
// ID returns the accessor ID, a string that can be used for logging and
// telemetry. It is not the secret ID used for authentication.
ID() string
SecretToken() string
PolicyIDs() []string

View File

@ -63,7 +63,7 @@ func isWrite(op api.KVOp) bool {
// internal RPC format. This returns a count of the number of write ops, and
// a boolean, that if false means an error response has been generated and
// processing should stop.
func (s *HTTPHandlers) convertOps(resp http.ResponseWriter, req *http.Request) (structs.TxnOps, int, bool) {
func (s *HTTPHandlers) convertOps(resp http.ResponseWriter, req *http.Request) (structs.TxnOps, int, error) {
// The TxnMaxReqLen limit and KVMaxValueSize limit both default to the
// suggested raft data size and can be configured independently. The
// TxnMaxReqLen is enforced on the cumulative size of the transaction,
@ -87,13 +87,10 @@ func (s *HTTPHandlers) convertOps(resp http.ResponseWriter, req *http.Request) (
// Check Content-Length first before decoding to return early
if req.ContentLength > maxTxnLen {
resp.WriteHeader(http.StatusRequestEntityTooLarge)
fmt.Fprintf(resp,
"Request body(%d bytes) too large, max size: %d bytes. See %s.",
req.ContentLength, maxTxnLen,
"https://www.consul.io/docs/agent/options.html#txn_max_req_len",
)
return nil, 0, false
return nil, 0, EntityTooLargeError{
Reason: fmt.Sprintf("Request body(%d bytes) too large, max size: %d bytes. See %s.",
req.ContentLength, maxTxnLen, "https://www.consul.io/docs/agent/options.html#txn_max_req_len"),
}
}
var ops api.TxnOps
@ -102,30 +99,24 @@ func (s *HTTPHandlers) convertOps(resp http.ResponseWriter, req *http.Request) (
if err.Error() == "http: request body too large" {
// The request size is also verified during decoding to double check
// if the Content-Length header was not set by the client.
resp.WriteHeader(http.StatusRequestEntityTooLarge)
fmt.Fprintf(resp,
"Request body too large, max size: %d bytes. See %s.",
maxTxnLen,
"https://www.consul.io/docs/agent/options.html#txn_max_req_len",
)
return nil, 0, EntityTooLargeError{
Reason: fmt.Sprintf("Request body too large, max size: %d bytes. See %s.",
maxTxnLen, "https://www.consul.io/docs/agent/options.html#txn_max_req_len"),
}
} else {
// Note the body is in API format, and not the RPC format. If we can't
// decode it, we will return a 400 since we don't have enough context to
// associate the error with a given operation.
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(resp, "Failed to parse body: %v", err)
return nil, 0, BadRequestError{Reason: fmt.Sprintf("Failed to parse body: %v", err)}
}
return nil, 0, false
}
// Enforce a reasonable upper limit on the number of operations in a
// transaction in order to curb abuse.
if size := len(ops); size > maxTxnOps {
resp.WriteHeader(http.StatusRequestEntityTooLarge)
fmt.Fprintf(resp, "Transaction contains too many operations (%d > %d)",
size, maxTxnOps)
return nil, 0, false
return nil, 0, EntityTooLargeError{
Reason: fmt.Sprintf("Transaction contains too many operations (%d > %d)", size, maxTxnOps),
}
}
// Convert the KV API format into the RPC format. Note that fixupKVOps
@ -138,9 +129,9 @@ func (s *HTTPHandlers) convertOps(resp http.ResponseWriter, req *http.Request) (
case in.KV != nil:
size := len(in.KV.Value)
if int64(size) > kvMaxValueSize {
resp.WriteHeader(http.StatusRequestEntityTooLarge)
fmt.Fprintf(resp, "Value for key %q is too large (%d > %d bytes)", in.KV.Key, size, s.agent.config.KVMaxValueSize)
return nil, 0, false
return nil, 0, EntityTooLargeError{
Reason: fmt.Sprintf("Value for key %q is too large (%d > %d bytes)", in.KV.Key, size, s.agent.config.KVMaxValueSize),
}
}
verb := in.KV.Verb
@ -297,7 +288,7 @@ func (s *HTTPHandlers) convertOps(resp http.ResponseWriter, req *http.Request) (
}
}
return opsRPC, writes, true
return opsRPC, writes, nil
}
// Txn handles requests to apply multiple operations in a single, atomic
@ -306,9 +297,9 @@ func (s *HTTPHandlers) convertOps(resp http.ResponseWriter, req *http.Request) (
// and everything else will be routed through Raft like a normal write.
func (s *HTTPHandlers) Txn(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
// Convert the ops from the API format to the internal format.
ops, writes, ok := s.convertOps(resp, req)
if !ok {
return nil, nil
ops, writes, err := s.convertOps(resp, req)
if err != nil {
return nil, err
}
// Fast-path a transaction with only writes to the read-only endpoint,

View File

@ -30,13 +30,12 @@ func TestTxnEndpoint_Bad_JSON(t *testing.T) {
buf := bytes.NewBuffer([]byte("{"))
req, _ := http.NewRequest("PUT", "/v1/txn", buf)
resp := httptest.NewRecorder()
if _, err := a.srv.Txn(resp, req); err != nil {
t.Fatalf("err: %v", err)
_, err := a.srv.Txn(resp, req)
err, ok := err.(BadRequestError)
if !ok {
t.Fatalf("expected bad request error but got %v", err)
}
if resp.Code != 400 {
t.Fatalf("expected 400, got %d", resp.Code)
}
if !bytes.Contains(resp.Body.Bytes(), []byte("Failed to parse")) {
if !strings.Contains(err.Error(), "Failed to parse") {
t.Fatalf("expected failed to parse error")
}
}
@ -63,15 +62,13 @@ func TestTxnEndpoint_Bad_Size_Item(t *testing.T) {
`, value)))
req, _ := http.NewRequest("PUT", "/v1/txn", buf)
resp := httptest.NewRecorder()
if _, err := agent.srv.Txn(resp, req); err != nil {
_, err := agent.srv.Txn(resp, req)
if err, ok := err.(EntityTooLargeError); !ok && !wantPass {
t.Fatalf("expected too large error but got %v", err)
}
if err != nil && wantPass {
t.Fatalf("err: %v", err)
}
if resp.Code != 413 && !wantPass {
t.Fatalf("expected 413, got %d", resp.Code)
}
if resp.Code != 200 && wantPass {
t.Fatalf("expected 200, got %d", resp.Code)
}
}
t.Run("exceeds default limits", func(t *testing.T) {
@ -140,15 +137,13 @@ func TestTxnEndpoint_Bad_Size_Net(t *testing.T) {
`, value, value, value)))
req, _ := http.NewRequest("PUT", "/v1/txn", buf)
resp := httptest.NewRecorder()
if _, err := agent.srv.Txn(resp, req); err != nil {
_, err := agent.srv.Txn(resp, req)
if err, ok := err.(EntityTooLargeError); !ok && !wantPass {
t.Fatalf("expected too large error but got %v", err)
}
if err != nil && wantPass {
t.Fatalf("err: %v", err)
}
if resp.Code != 413 && !wantPass {
t.Fatalf("expected 413, got %d", resp.Code)
}
if resp.Code != 200 && wantPass {
t.Fatalf("expected 200, got %d", resp.Code)
}
}
t.Run("exceeds default limits", func(t *testing.T) {
@ -209,11 +204,9 @@ func TestTxnEndpoint_Bad_Size_Ops(t *testing.T) {
`, strings.Repeat(`{ "KV": { "Verb": "get", "Key": "key" } },`, 2*maxTxnOps))))
req, _ := http.NewRequest("PUT", "/v1/txn", buf)
resp := httptest.NewRecorder()
if _, err := a.srv.Txn(resp, req); err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 413 {
t.Fatalf("expected 413, got %d", resp.Code)
_, err := a.srv.Txn(resp, req)
if err, ok := err.(EntityTooLargeError); !ok {
t.Fatalf("expected too large error but got %v", err)
}
}

View File

@ -139,9 +139,7 @@ func (s *HTTPHandlers) UINodeInfo(resp http.ResponseWriter, req *http.Request) (
return nil, err
}
if args.Node == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing node name")
return nil, nil
return nil, BadRequestError{Reason: "Missing node name"}
}
// Make the RPC request
@ -255,9 +253,7 @@ func (s *HTTPHandlers) UIGatewayServicesNodes(resp http.ResponseWriter, req *htt
return nil, err
}
if args.ServiceName == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing gateway name")
return nil, nil
return nil, BadRequestError{Reason: "Missing gateway name"}
}
// Make the RPC request
@ -301,16 +297,12 @@ func (s *HTTPHandlers) UIServiceTopology(resp http.ResponseWriter, req *http.Req
return nil, err
}
if args.ServiceName == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing service name")
return nil, nil
return nil, BadRequestError{Reason: "Missing service name"}
}
kind, ok := req.URL.Query()["kind"]
if !ok {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing service kind")
return nil, nil
return nil, BadRequestError{Reason: "Missing service kind"}
}
args.ServiceKind = structs.ServiceKind(kind[0])
@ -318,9 +310,7 @@ func (s *HTTPHandlers) UIServiceTopology(resp http.ResponseWriter, req *http.Req
case structs.ServiceKindTypical, structs.ServiceKindIngressGateway:
// allowed
default:
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(resp, "Unsupported service kind %q", args.ServiceKind)
return nil, nil
return nil, BadRequestError{Reason: fmt.Sprintf("Unsupported service kind %q", args.ServiceKind)}
}
// Make the RPC request
@ -584,9 +574,7 @@ func (s *HTTPHandlers) UIGatewayIntentions(resp http.ResponseWriter, req *http.R
return nil, err
}
if name == "" {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, "Missing gateway name")
return nil, nil
return nil, BadRequestError{Reason: "Missing gateway name"}
}
args.Match = &structs.IntentionQueryMatch{
Type: structs.IntentionMatchDestination,

View File

@ -1409,14 +1409,15 @@ func TestUIServiceTopology(t *testing.T) {
retry.Run(t, func(r *retry.R) {
resp := httptest.NewRecorder()
obj, err := a.srv.UIServiceTopology(resp, tc.httpReq)
assert.Nil(r, err)
if tc.wantErr != "" {
assert.NotNil(r, err)
assert.Nil(r, tc.want) // should not define a non-nil want
require.Equal(r, tc.wantErr, resp.Body.String())
require.Contains(r, err.Error(), tc.wantErr)
require.Nil(r, obj)
return
}
assert.Nil(r, err)
require.NoError(r, checkIndex(resp))
require.NotNil(r, obj)

View File

@ -326,8 +326,7 @@ function build_consul {
-e CGO_ENABLED=0 \
-e GOLDFLAGS="${GOLDFLAGS}" \
-e GOTAGS="${GOTAGS}" \
${image_name} \
./build-support/scripts/build-local.sh -o "${XC_OS}" -a "${XC_ARCH}")
${image_name} make linux
ret=$?
if test $ret -eq 0
@ -354,110 +353,3 @@ function build_consul {
popd > /dev/null
return $ret
}
function build_consul_local {
# Arguments:
# $1 - Path to the top level Consul source
# $2 - Space separated string of OSes to build. If empty will use env vars for determination.
# $3 - Space separated string of architectures to build. If empty will use env vars for determination.
# $4 - Subdirectory to put binaries in under pkg/bin (optional)
#
# Returns:
# 0 - success
# * - error
#
# Note:
# The GOLDFLAGS and GOTAGS environment variables will be used if set
# If the CONSUL_DEV environment var is truthy only the local platform/architecture is built.
# If the XC_OS or the XC_ARCH environment vars are present then only those platforms/architectures
# will be built. Otherwise all supported platform/architectures are built
# The GOXPARALLEL environment variable is used if set
if ! test -d "$1"
then
err "ERROR: '$1' is not a directory. build_consul must be called with the path to the top level source as the first argument'"
return 1
fi
local sdir="$1"
local build_os="$2"
local build_arch="$3"
local extra_dir_name="$4"
local extra_dir=""
if test -n "${extra_dir_name}"
then
extra_dir="${extra_dir_name}/"
fi
pushd ${sdir} > /dev/null
if is_set "${CONSUL_DEV}"
then
if test -z "${XC_OS}"
then
XC_OS=$(go env GOOS)
fi
if test -z "${XC_ARCH}"
then
XC_ARCH=$(go env GOARCH)
fi
fi
XC_OS=${XC_OS:-"solaris darwin freebsd linux windows"}
XC_ARCH=${XC_ARCH:-"386 amd64 arm arm64"}
if test -z "${build_os}"
then
build_os="${XC_OS}"
fi
if test -z "${build_arch}"
then
build_arch="${XC_ARCH}"
fi
status_stage "==> Building Consul - OSes: ${build_os}, Architectures: ${build_arch}"
mkdir pkg.bin.new 2> /dev/null
status "Building sequentially with go install"
for os in ${build_os}
do
for arch in ${build_arch}
do
outdir="pkg.bin.new/${extra_dir}${os}_${arch}"
osarch="${os}/${arch}"
if ! supported_osarch "${osarch}"
then
continue
fi
echo "---> ${osarch}"
mkdir -p "${outdir}"
GOBIN_EXTRA=""
if test "${os}" != "$(go env GOHOSTOS)" -o "${arch}" != "$(go env GOHOSTARCH)"
then
GOBIN_EXTRA="${os}_${arch}/"
fi
binname="consul"
if [ $os == "windows" ];then
binname="consul.exe"
fi
debug_run env CGO_ENABLED=0 GOOS=${os} GOARCH=${arch} go install -ldflags "${GOLDFLAGS}" -tags "${GOTAGS}" && cp "${MAIN_GOPATH}/bin/${GOBIN_EXTRA}${binname}" "${outdir}/${binname}"
if test $? -ne 0
then
err "ERROR: Failed to build Consul for ${osarch}"
rm -r pkg.bin.new
return 1
fi
done
done
build_consul_post "${sdir}" "${extra_dir_name}"
if test $? -ne 0
then
err "ERROR: Failed postprocessing Consul binaries"
return 1
fi
return 0
}

View File

@ -1,107 +0,0 @@
#!/bin/bash
SCRIPT_NAME="$(basename ${BASH_SOURCE[0]})"
pushd $(dirname ${BASH_SOURCE[0]}) > /dev/null
SCRIPT_DIR=$(pwd)
pushd ../.. > /dev/null
SOURCE_DIR=$(pwd)
popd > /dev/null
pushd ../functions > /dev/null
FN_DIR=$(pwd)
popd > /dev/null
popd > /dev/null
source "${SCRIPT_DIR}/functions.sh"
function usage {
cat <<-EOF
Usage: ${SCRIPT_NAME} [<options ...>]
Description:
This script will build the Consul binary on the local system.
All the requisite tooling must be installed for this to be
successful.
Options:
-s | --source DIR Path to source to build.
Defaults to "${SOURCE_DIR}"
-o | --os OSES Space separated string of OS
platforms to build.
-a | --arch ARCH Space separated string of
architectures to build.
-h | --help Print this help text.
EOF
}
function err_usage {
err "$1"
err ""
err "$(usage)"
}
function main {
declare sdir="${SOURCE_DIR}"
declare build_os=""
declare build_arch=""
while test $# -gt 0
do
case "$1" in
-h | --help )
usage
return 0
;;
-s | --source )
if test -z "$2"
then
err_usage "ERROR: option -s/--source requires an argument"
return 1
fi
if ! test -d "$2"
then
err_usage "ERROR: '$2' is not a directory and not suitable for the value of -s/--source"
return 1
fi
sdir="$2"
shift 2
;;
-o | --os )
if test -z "$2"
then
err_usage "ERROR: option -o/--os requires an argument"
return 1
fi
build_os="$2"
shift 2
;;
-a | --arch )
if test -z "$2"
then
err_usage "ERROR: option -a/--arch requires an argument"
return 1
fi
build_arch="$2"
shift 2
;;
* )
err_usage "ERROR: Unknown argument: '$1'"
return 1
;;
esac
done
build_consul_local "${sdir}" "${build_os}" "${build_arch}" || return 1
return 0
}
main "$@"
exit $?

View File

@ -0,0 +1,42 @@
# Adding a Changelog Entry
Any change that a Consul user might need to know about should have a changelog entry.
What doesn't need a changelog entry?
- Docs changes
- Typo fixes, unless they are in a public-facing API
- Code changes we are certain no Consul users will need to know about
To include a [changelog entry](../.changelog) in a PR, commit a text file
named `.changelog/<PR#>.txt`, where `<PR#>` is the number associated with the open
PR on GitHub. The text file should describe the changes in the following format:
````
```release-note:<change type>
<code area>: <brief description of the improvement you made here>
```
````
Valid values for `<change type>` include:
- `feature`: for the addition of a new feature
- `improvement`: for an improvement (not a bug fix) to an existing feature
- `bug`: for a bug fix
- `security`: for any Common Vulnerabilities and Exposures (CVE) resolutions
- `breaking-change`: for any change that is not fully backwards-compatible
- `deprecation`: for functionality which is now marked for removal in a future release
`<code area>` is meant to categorize the functionality affected by the change.
Some common values are:
- `checks`: related to node or service health checks
- `cli`: related to the command-line interface and its commands
- `config`: related to configuration changes (e.g., adding a new config option)
- `connect`: catch-all for the Connect subsystem that provides service mesh functionality
if no more specific `<code area>` applies
- `http`: related to the HTTP API interface and its endpoints
- `dns`: related to DNS functionality
- `ui`: any change related to the built-in Consul UI (`website/` folder)
Look in the [`.changelog/`](../.changelog) folder for examples of existing changelog entries.
If a PR deserves multiple changelog entries, just add multiple entries separated by a newline
in the format described above to the `.changelog/<PR#>.txt` file.

View File

@ -0,0 +1,19 @@
# Forking the Consul Repo
Community members wishing to contribute code to Consul must fork the Consul project
(`your-github-username/consul`). Branches pushed to that fork can then be submitted
as pull requests to the upstream project (`hashicorp/consul`).
To locally clone the repo so that you can pull the latest from the upstream project
(`hashicorp/consul`) and push changes to your own fork (`your-github-username/consul`):
1. [Create the forked repository](https://docs.github.com/en/get-started/quickstart/fork-a-repo#forking-a-repository) (`your-github-username/consul`)
2. Clone the `hashicorp/consul` repository and `cd` into the folder
3. Make `hashicorp/consul` the `upstream` remote rather than `origin`:
`git remote rename origin upstream`.
4. Add your fork as the `origin` remote. For example:
`git remote add origin https://github.com/myusername/consul`
5. Check out a feature branch: `git checkout -t -b new-feature`
6. [Make changes](../../.github/CONTRIBUTING.md#modifying-the-code)
7. Push changes to the fork when ready to [submit a PR](../../.github/CONTRIBUTING.md#submitting-a-pull-request):
`git push -u origin new-feature`
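The remote setup in steps 2–4 can be sketched end to end; this version clones a throwaway local repository in place of GitHub so it runs offline, and the fork URL is illustrative:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for hashicorp/consul on GitHub.
git init -q upstream-consul

# Step 2: clone, then steps 3-4: rename origin -> upstream, add the fork.
git clone -q upstream-consul work
cd work
git remote rename origin upstream
git remote add origin https://github.com/your-github-username/consul

# Both remotes are now in place.
git remote -v
```

After this, fetches track the upstream project while `git push -u origin <branch>` publishes to your fork.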

View File

@ -5,8 +5,8 @@ all: build
deps: ../../node_modules clean
# in case we ever need to clean anything again
clean:
rm -rf ./tmp
build-staging: deps
yarn run build:staging
@ -17,6 +17,9 @@ build-ci: deps
build: deps
yarn run build
build-debug:
BROCCOLI_DEBUG=* $(MAKE) build
start: deps
yarn run start

View File

@ -100,7 +100,6 @@
%sort-button::before {
@extend %with-sort-mask, %as-pseudo;
position: relative;
top: 4px;
width: 16px;
height: 16px;
}

View File

@ -1,3 +1,31 @@
.consul-external-source {
@extend %pill-200, %frame-gray-600, %p1;
}
.consul-external-source.kubernetes::before {
@extend %with-logo-kubernetes-color-icon, %as-pseudo;
}
.consul-external-source.terraform::before {
@extend %with-logo-terraform-color-icon, %as-pseudo;
}
.consul-external-source.nomad::before {
@extend %with-logo-nomad-color-icon, %as-pseudo;
}
.consul-external-source.consul::before,
.consul-external-source.consul-api-gateway::before {
@extend %with-logo-consul-color-icon, %as-pseudo;
}
.consul-external-source.vault::before {
@extend %with-vault-100;
}
.consul-external-source.aws::before {
@extend %with-aws-100;
}
.consul-external-source.leader::before {
@extend %with-star-outline-mask, %as-pseudo;
}
.consul-external-source.jwt::before {
@extend %with-logo-jwt-color-icon, %as-pseudo;
}
.consul-external-source.oidc::before {
@extend %with-logo-oidc-color-icon, %as-pseudo;
}

View File

@ -1,9 +1,7 @@
%pill-allow::before,
%pill-deny::before,
%pill-l7::before {
@extend %as-pseudo;
margin-right: 5px;
font-size: 0.9em;
}
%pill-allow,
%pill-deny,
@ -24,11 +22,11 @@
@extend %frame-gray-900;
}
%pill-allow::before {
@extend %with-arrow-right-mask;
@extend %with-allow-300;
}
%pill-deny::before {
@extend %with-deny-color-mask;
@extend %with-deny-300;
}
%pill-l7::before {
@extend %with-layers-mask;
@extend %with-l7-300;
}

View File

@ -1,12 +1,11 @@
.consul-intention-fieldsets {
.value-allow > :last-child::before {
@extend %with-arrow-right-mask, %as-pseudo;
color: rgb(var(--tone-green-500));
@extend %with-allow-500;
}
.value-deny > :last-child::before {
@extend %with-deny-color-icon, %as-pseudo;
@extend %with-deny-500;
}
.value- > :last-child::before {
@extend %with-layers-mask, %as-pseudo;
@extend %with-l7-500;
}
}

View File

@ -20,31 +20,3 @@ span.policy-node-identity::before {
span.policy-service-identity::before {
content: 'Service Identity: ';
}
%pill.kubernetes::before {
@extend %with-logo-kubernetes-color-icon, %as-pseudo;
}
%pill.terraform::before {
@extend %with-logo-terraform-color-icon, %as-pseudo;
}
%pill.nomad::before {
@extend %with-logo-nomad-color-icon, %as-pseudo;
}
%pill.consul::before,
%pill.consul-api-gateway::before {
@extend %with-logo-consul-color-icon, %as-pseudo;
}
%pill.vault::before {
@extend %with-vault-300;
}
%pill.aws::before {
@extend %with-logo-aws-color-icon, %as-pseudo;
}
%pill.leader::before {
@extend %with-star-outline-mask, %as-pseudo;
}
%pill.jwt::before {
@extend %with-logo-jwt-color-icon, %as-pseudo;
}
%pill.oidc::before {
@extend %with-logo-oidc-color-icon, %as-pseudo;
}

View File

@ -6,7 +6,8 @@
}
%pill::before {
margin-right: 4px;
font-size: 0.8em;
width: 0.75rem !important; /* 12px */
height: 0.75rem !important; /* 12px */
}
%pill-200 {
@extend %pill;

View File

@ -50,7 +50,7 @@
color: rgb(var(--tone-gray-500));
}
%popover-select .aws button::before {
@extend %with-logo-aws-color-icon, %as-pseudo;
@extend %with-aws-300;
}
%popover-select .kubernetes button::before {
@extend %with-logo-kubernetes-color-icon, %as-pseudo;

View File

@ -0,0 +1,18 @@
import Helper from '@ember/component/helper';
import { assert } from '@ember/debug';
import { adoptStyles } from '@lit/reactive-element';
export default class AdoptStylesHelper extends Helper {
/**
* Adopt/apply given styles to a `ShadowRoot` using constructable styleSheets if supported
*
* @param {[ShadowRoot, CSSResultGroup]} params
*/
compute([$shadow, styles], hash) {
assert(
'adopt-styles can only be used to apply styles to ShadowDOM elements',
$shadow instanceof ShadowRoot
);
adoptStyles($shadow, [styles]);
}
}

View File

@ -0,0 +1,28 @@
# adopt-styles
Adopt/apply given styles to a `ShadowRoot` using constructable styleSheets if supported
```hbs preview-template
<div
{{attach-shadow (set this 'shadow')}}
>
{{#if this.shadow}}
{{#in-element this.shadow}}
{{adopt-styles this.shadow (css '
:host {
background-color: red;
width: 100px;
height: 100px;
}
')}}
{{/in-element}}
{{/if}}
</div>
```
## Positional Arguments
| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| `params` | `[ShadowRoot, CSSResultGroup]` | | |

View File

@ -0,0 +1,8 @@
import Helper from '@ember/component/helper';
import { css } from '@lit/reactive-element';
export default class CssHelper extends Helper {
compute([str], hash) {
return css([str]);
}
}

View File

@ -0,0 +1,22 @@
import { helper } from '@ember/component/helper';
/**
* Conditionally maps styles to a string ready for typical DOM
* usage (i.e. semi-colon delimited)
*
* @typedef {([string, (string | undefined), string] | [string, (string | undefined)])} styleInfo
* @param {styleInfo[]} entries - An array of `styleInfo`s to map
* @param {boolean} transform=true - whether to perform the build-time 'helper
 * to modifier' transpilation. Note that a transpiler must be installed separately.
*/
const styleMap = (entries, transform = true) => {
const str = entries.reduce((prev, [prop, value, unit = '']) => {
if (value == null) {
return prev;
}
return `${prev}${prop}:${value.toString()}${unit};`;
}, '');
return str.length > 0 ? str : undefined;
};
export default helper(styleMap);
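Outside of Ember, the reducer at the heart of `style-map` can be exercised directly; this is a standalone copy of the function above, for illustration only:

```javascript
// Standalone copy of the style-map reducer shown above: entries are
// [property, value, optionalUnit]; null/undefined values are dropped.
const styleMap = (entries) => {
  const str = entries.reduce((prev, [prop, value, unit = '']) => {
    if (value == null) {
      return prev;
    }
    return `${prev}${prop}:${value.toString()}${unit};`;
  }, '');
  return str.length > 0 ? str : undefined;
};

console.log(styleMap([
  ['width', '600px'],
  ['height', 100, 'px'], // numeric value plus a unit
  ['background', null],  // omitted entirely
])); // → width:600px;height:100px;
```

Note the fallthrough: when every value is nullish the helper returns `undefined`, so the `style` attribute is omitted rather than rendered empty.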

View File

@ -0,0 +1,58 @@
# style-map
`{{style-map}}` is used to easily add a list of styles, conditionally, and
have them all formatted for printing in a DOM `style` attribute.
As well as an entry-like array, you can also pass an additional `unit` property
as the 3rd item in the array. This makes it easier to do mathematical
calculations for units without having to use `(concat)`.
If any property has a value of `null` or `undefined`, that style property will
not be included in the resulting string.
```hbs preview-template
<figure>
<figcaption>
This div has the correct style added/omitted.
</figcaption>
<div
style={{style-map
(array 'outline' '1px solid red')
(array 'width' '600px')
(array 'height' 100 'px')
(array 'padding' 1 'rem')
(array 'background' null)
}}
>
<code>
style={{style-map
(array 'outline' '1px solid red')
(array 'width' '600px')
(array 'height' 100 'px')
(array 'padding' 1 'rem')
(array 'background' null)
}}
</code>
</div>
</figure>
```
## Positional Arguments
| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| `entries` | `styleInfo[]` | | An array of `styleInfo`s to map |
## Named Arguments
| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| `transform` | `boolean` | true | whether to perform the build-time 'helper to modifier' transpilation |
## Types
| Type | Default | Description |
| --- | --- | --- |
| `styleInfo` | `([string, (string \| undefined), string] \| [string, (string \| undefined)])` | |

View File

@ -0,0 +1,23 @@
import { setModifierManager, capabilities } from '@ember/modifier';
export default setModifierManager(
() => ({
capabilities: capabilities('3.13', { disableAutoTracking: true }),
createModifier() {},
installModifier(_state, element, { positional: [fn, ...args], named }) {
let shadow;
try {
shadow = element.attachShadow({ mode: 'open' });
} catch (e) {
// shadow = false;
console.error(e);
}
fn(shadow);
},
updateModifier() {},
destroyModifier() {},
}),
class CustomElementModifier {}
);

View File

@ -0,0 +1,28 @@
# attach-shadow
`{{attach-shadow (set this 'shadow')}}` attaches a `ShadowRoot` to the modified DOM element
and passes a reference to that `ShadowRoot` to the setter function.
Please note: This should be used as a utility modifier for when having access
to the shadow DOM is handy, not really for building full-blown shadow-DOM-based
Web Components.
```hbs preview-template
<div
{{attach-shadow (set this 'shadow')}}
>
{{#if this.shadow}}
{{#in-element this.shadow}}
<slot name="name"></slot>
{{/in-element}}
{{/if}}
<p slot="name">Hello from the shadows!</p>
</div>
```
## Positional Arguments
| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| `setter` | `function` | | Usually `set` or `mut` or similar |

View File

@ -0,0 +1,45 @@
import Modifier from 'ember-modifier';
import { action } from '@ember/object';
import { inject as service } from '@ember/service';
export default class OnOutsideModifier extends Modifier {
@service('dom') dom;
constructor() {
super(...arguments);
this.doc = this.dom.document();
}
async connect(params, options) {
await new Promise(resolve => setTimeout(resolve, 0));
try {
this.doc.addEventListener(params[0], this.listen);
} catch (e) {
// continue
}
}
@action
listen(e) {
if (this.dom.isOutside(this.element, e.target)) {
const dispatch = typeof this.params[1] === 'function' ? this.params[1] : _ => {};
dispatch.apply(this.element, [e]);
}
}
disconnect() {
this.doc.removeEventListener(this.params[0], this.listen);
}
didReceiveArguments() {
this.params = this.args.positional;
this.options = this.args.named;
}
didInstall() {
this.connect(this.args.positional, this.args.named);
}
willRemove() {
this.disconnect();
}
}

View File

@ -0,0 +1,22 @@
# on-outside
`{{on-outside 'click' (fn @callback)}}` works similarly to `{{on}}` but lets
you attach a handler that fires only for events originating outside the
currently modified element.
```hbs preview-template
<button
{{on-outside 'click' (set this 'clicked' 'outside clicked')}}
{{on 'click' (set this 'clicked' 'inside clicked')}}
style="background: red;width: 100px;height: 100px;"
>
{{or this.clicked "click me or outside of me"}}
</button>
```
## Positional Arguments
| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| `event` | `string` | | Name of the event to listen for |
| `handler` | `function` | | Function to handle the event |

View File

@ -15,3 +15,6 @@
@import 'icons';
/* global control of themeable components */
@import 'themes';
/* debug only, this is empty during a production build */
/* but uses the contents of ./debug.scss during dev */
@import '_debug';

View File

@ -1,3 +1,11 @@
%theme-light {
--theme-dark-none: ;
--theme-light-none: initial;
}
%theme-dark {
--theme-dark-none: initial;
--theme-light-none: ;
}
%with-icon {
background-repeat: no-repeat;
background-position: center;
@ -20,8 +28,8 @@
content: '';
visibility: visible;
background-size: contain;
width: 1.2em;
height: 1.2em;
width: 1rem; /* 16px */
height: 1rem; /* 16px */
vertical-align: text-top;
}
%led-icon {

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -1,4 +1,3 @@
@import './app';
@import './base/icons/debug';
@import 'prism-coldark-dark';

View File

@ -1,4 +1,71 @@
%with-vault-100 {
@extend %with-vault-16-mask, %as-pseudo;
color: rgb(var(--tone-vault-500));
}
%with-vault-300 {
@extend %with-vault-16-mask, %as-pseudo;
color: rgb(var(--tone-vault-500));
}
%with-aws-100,
%with-aws-300 {
@extend %aws-color-16-svg-prop;
@extend %with-icon, %as-pseudo;
background-image: var(--theme-dark-none) var(--aws-color-16-svg);
-webkit-mask-image: var(--theme-light-none) var(--aws-color-16-svg);
-webkit-mask-repeat: var(--theme-light-none) no-repeat;
-webkit-mask-position: var(--theme-light-none) center;
mask-image: var(--theme-light-none) var(--aws-color-16-svg);
mask-repeat: var(--theme-light-none) no-repeat;
mask-position: var(--theme-light-none) center;
background-color: var(--theme-light-none) rgb(var(--white));
}
%with-allow-100,
%with-aws-100,
%with-deny-100,
%with-l7-100,
%with-vault-100 {
width: 0.75rem; /* 12px */
height: 0.75rem; /* 12px */
}
%with-allow-500,
%with-deny-500,
%with-l7-500 {
width: 1.25rem; /* 20px */
height: 1.25rem; /* 20px */
}
%with-allow-300,
%with-allow-500 {
color: rgb(var(--tone-green-500));
}
%with-deny-300,
%with-deny-500 {
color: rgb(var(--tone-red-500));
}
%with-allow-300,
%with-allow-500,
%with-deny-300,
%with-deny-500,
%with-l7-300,
%with-l7-500 {
@extend %as-pseudo;
}
%with-allow-300 {
@extend %with-arrow-right-16-mask;
}
%with-allow-500 {
@extend %with-arrow-right-24-mask;
}
%with-deny-300 {
@extend %with-skip-16-mask;
}
%with-deny-500 {
@extend %with-skip-24-mask;
}
%with-l7-300 {
@extend %with-layers-16-mask;
}
%with-l7-500 {
@extend %with-layers-24-mask;
}

View File

@ -97,14 +97,8 @@ module.exports = function(defaults, $ = process.env) {
// exclude docfy
'@docfy/ember'
];
} else {
// add debug css is we are not in test or production environments
outputPaths.app = {
css: {
'debug': '/assets/debug.css'
}
}
}
if(['production'].includes(env)) {
// everything apart from production is 'debug', including test
// which means this and everything it affects is never tested

View File

@ -3,6 +3,8 @@
const Funnel = require('broccoli-funnel');
const mergeTrees = require('broccoli-merge-trees');
const writeFile = require('broccoli-file-creator');
const read = require('fs').readFileSync;
module.exports = {
name: require('./package').name,
@ -12,15 +14,18 @@ module.exports = {
* @import 'app-name/components/component-name/index.scss'
*/
treeForStyles: function(tree) {
let debug = read(`${this.project.root}/app/styles/debug.scss`);
if (['production', 'test'].includes(process.env.EMBER_ENV)) {
debug = '';
}
return this._super.treeForStyles.apply(this, [
mergeTrees(
(tree || []).concat([
new Funnel(`${this.project.root}/app/components`, {
destDir: `app/components`,
include: ['**/*.scss'],
}),
])
),
mergeTrees([
writeFile(`_debug.scss`, debug),
new Funnel(`${this.project.root}/app/components`, {
destDir: `app/components`,
include: ['**/*.scss'],
}),
]),
]);
},
};

View File

@ -8,9 +8,7 @@ module.exports = ({ appName, environment, rootURL, config }) => `
<link rel="icon" href="${rootURL}assets/favicon.svg" type="image/svg+xml">
<link rel="apple-touch-icon" href="${rootURL}assets/apple-touch-icon.png">
<link integrity="" rel="stylesheet" href="${rootURL}assets/vendor.css">
<link integrity="" rel="stylesheet" href="${rootURL}assets/${
!['test', 'production'].includes(environment) ? 'debug' : appName
}.css">
<link integrity="" rel="stylesheet" href="${rootURL}assets/${appName}.css">
${
environment === 'test' ? `<link rel="stylesheet" href="${rootURL}assets/test-support.css">` : ``
}

View File

@ -63,6 +63,7 @@
"@glimmer/component": "^1.0.0",
"@glimmer/tracking": "^1.0.0",
"@hashicorp/ember-cli-api-double": "^3.1.0",
"@lit/reactive-element": "^1.2.1",
"@mapbox/rehype-prism": "^0.6.0",
"@xstate/fsm": "^1.4.0",
"a11y-dialog": "^6.0.1",
@ -79,8 +80,8 @@
"chalk": "^4.1.0",
"clipboard": "^2.0.4",
"consul-acls": "*",
"consul-partitions": "*",
"consul-nspaces": "*",
"consul-partitions": "*",
"css.escape": "^1.5.1",
"d3-array": "^2.8.0",
"d3-scale": "^3.2.3",

View File

@ -1560,6 +1560,11 @@
resolved "https://registry.yarnpkg.com/@istanbuljs/schema/-/schema-0.1.3.tgz#e45e384e4b8ec16bce2fd903af78450f6bf7ec98"
integrity sha512-ZXRY4jNvVgSVQ8DL3LTcakaAtXwTVUxE81hslsyD2AtoXW/wVob10HkOJ1X/pAlcI7D+2YoZKg5do8G/w6RYgA==
"@lit/reactive-element@^1.2.1":
version "1.2.1"
resolved "https://registry.yarnpkg.com/@lit/reactive-element/-/reactive-element-1.2.1.tgz#8620d7f0baf63e12821fa93c34d21e23477736f7"
integrity sha512-03FYfMguIWo9E1y1qcTpXzoO8Ukpn0j5o4GjNFq/iHqJEPY6pYopsU44e7NSFIgCTorr8wdUU5PfVy8VeD6Rwg==
"@mapbox/rehype-prism@^0.6.0":
version "0.6.0"
resolved "https://registry.yarnpkg.com/@mapbox/rehype-prism/-/rehype-prism-0.6.0.tgz#3d8a860870951d4354257d0ba908d11545bd5ed5"

View File

@ -21,8 +21,8 @@ export default function Footer({ openConsentManager }) {
<Link href="/security">
<a>Security</a>
</Link>
<Link href="/files/press-kit.zip">
<a>Press Kit</a>
<Link href="https://www.hashicorp.com/brand">
<a>Brand</a>
</Link>
<a onClick={openConsentManager}>Consent Manager</a>
</div>


@ -36,6 +36,8 @@ Usage: `consul connect redirect-traffic [options]`
#### Options for Traffic Redirection Rules
- `-consul-dns-ip` - The IP address of the Consul DNS resolver. If provided, DNS queries will be redirected to the provided IP address for name resolution.
- `-proxy-id` - The [proxy service](/docs/connect/registration/service-registration) ID.
This service ID must already be registered with the local agent.
@ -47,13 +49,13 @@ Usage: `consul connect redirect-traffic [options]`
- `-exclude-inbound-port` - Inbound port to exclude from traffic redirection. May be provided multiple times.
- `-exclude-outbound-cidr` - Outbound CIDR to exclude from traffic redirection. May be provided multiple times.
- `-exclude-outbound-port` - Outbound port to exclude from traffic redirection. May be provided multiple times.
- `-exclude-uid` - Additional user ID to exclude from traffic redirection. May be provided multiple times.
- `-netns` - The Linux network namespace where traffic redirection rules should apply.
This must be a path to the network namespace, e.g. /var/run/netns/foo.
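Taken together, the exclusion flags can be combined in a single invocation. A hypothetical example (the proxy ID, port numbers, UID, and namespace path are placeholders, not values from this document):

```shell
# Illustrative only: redirect traffic for the proxy registered as
# "web-sidecar-proxy", excluding a metrics port, an outbound port,
# and a scraping user ID, inside a named network namespace.
consul connect redirect-traffic \
  -proxy-id="web-sidecar-proxy" \
  -exclude-inbound-port=9090 \
  -exclude-outbound-port=8500 \
  -exclude-uid=1234 \
  -netns=/var/run/netns/foo
```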
#### Enterprise Options


@ -388,7 +388,7 @@ These metrics are used to monitor the health of the Consul servers.
| Metric | Description | Unit | Type |
| --------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- | ------- |
| `consul.acl.ResolveToken` | Measures the time it takes to resolve an ACL token. | ms | timer |
| `consul.acl.ResolveTokenToIdentity` | Measures the time it takes to resolve an ACL token to an Identity. This metric was removed in Consul 1.12. The time will now be reflected in `consul.acl.ResolveToken`. | ms | timer |
| `consul.acl.token.cache_hit` | Increments if Consul is able to resolve a token's identity, or a legacy token, from the cache. | cache read op | counter |
| `consul.acl.token.cache_miss` | Increments if Consul cannot resolve a token's identity, or a legacy token, from the cache. | cache read op | counter |
| `consul.cache.bypass` | Counts how many times a request bypassed the cache because no cache-key was provided. | counter | counter |


@ -21,10 +21,10 @@ Consul API Gateway implements the Kubernetes [Gateway API Specification](https:/
Your datacenter must meet the following requirements prior to configuring the Consul API Gateway:
- A Kubernetes cluster must be running
- Kubernetes 1.21+
- `kubectl` 1.21+
- Consul 1.11.2+
- HashiCorp Consul Helm chart 0.40.0+
## Installation
@ -38,7 +38,7 @@ Your datacenter must meet the following requirements prior to configuring the Co
</CodeBlockConfig>
1. Create a `values.yaml` file for your Consul API Gateway deployment and copy the content below into it. The Consul Helm chart uses this file during installation. See [Helm Chart Configuration - apigateway](https://www.consul.io/docs/k8s/helm#apigateway) for additional options for configuring your Consul API Gateway deployment via the Helm chart.
<CodeBlockConfig hideClipboard filename="values.yaml">


@ -600,7 +600,12 @@ spec:
protocol: http
services:
- name: api
requestHeaders:
add:
x-gateway: us-east-ingress
responseHeaders:
remove:
- x-debug
```
```json
@ -676,7 +681,12 @@ spec:
services:
- name: api
namespace: frontend
requestHeaders:
add:
x-gateway: us-east-ingress
responseHeaders:
remove:
- x-debug
```
```json
@ -981,21 +991,25 @@ You can specify the following parameters to configure ingress gateway configurat
},
{
name: 'TLSMinVersion',
type: 'string: ""',
description:
"Set the default minimum TLS version supported for the gateway's listeners. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`. If unspecified, Envoy v1.22.0 and newer [will default to TLS 1.2 as a min version](https://github.com/envoyproxy/envoy/pull/19330), while older releases of Envoy default to TLS 1.0.",
},
{
name: 'TLSMaxVersion',
type: 'string: ""',
description: {
hcl:
"Set the default maximum TLS version supported for the gateway's listeners. Must be greater than or equal to `TLSMinVersion`. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`.",
yaml:
"Set the default maximum TLS version supported for the gateway's listeners. Must be greater than or equal to `tls_min_version`. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`.",
},
},
{
name: 'CipherSuites',
type: 'array<string>: <optional>',
description: `Set the default list of TLS cipher suites for the gateway's
listeners to support when negotiating connections using
@ -1007,11 +1021,10 @@ You can specify the following parameters to configure ingress gateway configurat
releases of Envoy may remove currently-supported but
insecure cipher suites, and future releases of Consul
may add new supported cipher suites if any are added to
Envoy.`,
},
{
name: 'SDS',
type: 'SDSConfig: <optional>',
description:
'Defines a set of parameters that configures the gateway to load TLS certificates from an external SDS service. See [SDS](/docs/connect/gateways/ingress-gateway#sds) for more details on usage.<br><br>SDS properties defined in this field are used as defaults for all listeners on the gateway.',
@ -1105,7 +1118,6 @@ You can specify the following parameters to configure ingress gateway configurat
\`*-suffix.example.com\` are not.`,
},
{
name: 'RequestHeaders',
type: 'HTTPHeaderModifiers: <optional>',
description: `A set of [HTTP-specific header modification rules](/docs/connect/config-entries/service-router#httpheadermodifiers)
@ -1113,7 +1125,6 @@ You can specify the following parameters to configure ingress gateway configurat
This cannot be used with a \`tcp\` listener.`,
},
{
name: 'ResponseHeaders',
type: 'HTTPHeaderModifiers: <optional>',
description: `A set of [HTTP-specific header modification rules](/docs/connect/config-entries/service-router#httpheadermodifiers)
@ -1122,7 +1133,6 @@ You can specify the following parameters to configure ingress gateway configurat
},
{
name: 'TLS',
type: 'ServiceTLSConfig: <optional>',
description: 'TLS configuration for this service.',
children: [
@ -1154,7 +1164,6 @@ You can specify the following parameters to configure ingress gateway configurat
},
{
name: 'TLS',
type: 'TLSConfig: <optional>',
description: 'TLS configuration for this listener.',
children: [
@ -1165,26 +1174,26 @@ You can specify the following parameters to configure ingress gateway configurat
hcl:
"Set this configuration to `true` to enable built-in TLS for this listener.<br><br>If TLS is enabled, then each host defined in each service's `Hosts` field will be added as a DNSSAN to the gateway's x509 certificate. Note that even hosts from other listeners with TLS disabled will be added. TLS can not be disabled for individual listeners if it is enabled on the gateway.",
yaml:
"Set this configuration to `true` to enable built-in TLS for this listener.<br><br>If TLS is enabled, then each host defined in each service's `hosts` field will be added as a DNSSAN to the gateway's x509 certificate. Note that even hosts from other listeners with TLS disabled will be added. TLS can not be disabled for individual listeners if it is enabled on the gateway.",
},
},
{
name: 'TLSMinVersion',
type: 'string: ""',
description:
'Set the minimum TLS version supported for this listener. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`. If unspecified, Envoy v1.22.0 and newer [will default to TLS 1.2 as a min version](https://github.com/envoyproxy/envoy/pull/19330), while older releases of Envoy default to TLS 1.0.',
},
{
name: 'TLSMaxVersion',
type: 'string: ""',
description:
'Set the maximum TLS version supported for this listener. Must be greater than or equal to `TLSMinVersion`. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`.',
},
{
name: 'CipherSuites',
type: 'array<string>: <optional>',
description: `Set the list of TLS cipher suites to support when negotiating
connections using TLS 1.2 or earlier. If unspecified,
@ -1195,7 +1204,7 @@ You can specify the following parameters to configure ingress gateway configurat
and is dependent on underlying support in Envoy. Future
releases of Envoy may remove currently-supported but
insecure cipher suites, and future releases of Consul
may add new supported cipher suites if any are added to Envoy.`,
},
{
name: 'SDS',


@ -36,10 +36,9 @@ service of the same name.
to any configured
[`service-resolver`](/docs/connect/config-entries/service-resolver).
## UI
Once a `service-router` is successfully entered, you can view it in the UI. Service routers, service splitters, and service resolvers can all be viewed by clicking on your service then switching to the _routing_ tab.
![screenshot of service router in the UI](/img/l7-routing/Router.png)
@ -309,14 +308,16 @@ spec:
name: 'Namespace',
type: `string: "default"`,
enterprise: true,
description:
'Specifies the namespace to which the configuration entry will apply.',
yaml: false,
},
{
name: 'Partition',
type: `string: "default"`,
enterprise: true,
description:
'Specifies the admin partition to which the configuration will apply.',
yaml: false,
},
{
@ -596,7 +597,6 @@ spec:
'A list of HTTP response status codes that are eligible for retry.',
},
{
name: 'RequestHeaders',
type: 'HTTPHeaderModifiers: <optional>',
description: `A set of [HTTP-specific header modification rules](/docs/connect/config-entries/service-router#httpheadermodifiers)
@ -604,7 +604,6 @@ spec:
This cannot be used with a \`tcp\` listener.`,
},
{
name: 'ResponseHeaders',
type: 'HTTPHeaderModifiers: <optional>',
description: `A set of [HTTP-specific header modification rules](/docs/connect/config-entries/service-router#httpheadermodifiers)
@ -614,21 +613,12 @@ spec:
]}
/>
### `HTTPHeaderModifiers`
<ConfigEntryReference
topLevel={false}
keys={[
{
name: 'Add',
type: 'map<string|string>: optional',
description: `The set of key/value pairs that specify header values to add.
@ -641,7 +631,6 @@ spec:
metadata into the value added.`,
},
{
name: 'Set',
type: 'map<string|string>: optional',
description: `The set of key/value pairs that specify header values to add.
@ -654,7 +643,6 @@ spec:
metadata into the value added.`,
},
{
name: 'Remove',
type: 'array<string>: optional',
description: `The set of header names to remove. Only headers


@ -39,9 +39,9 @@ resolution stage.
to any configured
[`service-resolver`](/docs/connect/config-entries/service-resolver).
## UI
Once a `service-splitter` is successfully entered, you can view it in the UI. Service routers, service splitters, and service resolvers can all be viewed by clicking on your service then switching to the _routing_ tab.
![screenshot of service splitter in the UI](/img/l7-routing/Splitter.png)
@ -152,13 +152,12 @@ spec:
</CodeTabs>
### Set HTTP Headers
Split traffic between two subsets with extra headers added so clients can tell
which version:
<CodeTabs tabs={[ "HCL", "Kubernetes YAML", "JSON" ]}>
```hcl
Kind = "service-splitter"
@ -185,6 +184,25 @@ Splits = [
]
```
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
name: web
spec:
splits:
- weight: 90
serviceSubset: v1
responseHeaders:
set:
x-web-version: v1
- weight: 10
serviceSubset: v2
responseHeaders:
set:
x-web-version: v2
```
```json
{
"Kind": "service-splitter",
@ -240,14 +258,16 @@ Splits = [
name: 'Namespace',
type: `string: "default"`,
enterprise: true,
description:
'Specifies the namespace to which the configuration entry will apply.',
yaml: false,
},
{
name: 'Partition',
type: `string: "default"`,
enterprise: true,
description:
'Specifies the admin partition to which the configuration entry will apply.',
yaml: false,
},
{
@ -314,7 +334,6 @@ Splits = [
'The admin partition to resolve the service from instead of the current partition. If empty, the current partition is used.',
},
{
name: 'RequestHeaders',
type: 'HTTPHeaderModifiers: <optional>',
description: `A set of [HTTP-specific header modification rules](/docs/connect/config-entries/service-router#httpheadermodifiers)
@ -322,7 +341,6 @@ Splits = [
This cannot be used with a \`tcp\` listener.`,
},
{
name: 'ResponseHeaders',
type: 'HTTPHeaderModifiers: <optional>',
description: `A set of [HTTP-specific header modification rules](/docs/connect/config-entries/service-router#httpheadermodifiers)


@ -13,7 +13,6 @@ The [Envoy proxy](/docs/connect/proxies/envoy) should be used for production dep
Consul comes with a built-in L4 proxy for testing and development with Consul
Connect service mesh.
## Getting Started
To get started with the built-in proxy and see a working example, you can follow the [Getting Started](https://learn.hashicorp.com/tutorials/consul/get-started-service-networking) tutorial.
@ -57,10 +56,9 @@ All fields are optional with a reasonable default.
_public_ mTLS listener to. It defaults to the same address the agent binds to.
- `bind_port` - The port the proxy will bind its _public_
mTLS listener to. If not provided, the agent will assign a random port from its
configured proxy port range specified by [`sidecar_min_port`](/docs/agent/options#sidecar_min_port)
and [`sidecar_max_port`](/docs/agent/options#sidecar_max_port).
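A sketch of the corresponding agent configuration that constrains this automatic port assignment (the port values below are illustrative, not defaults from this document):

```hcl
# Illustrative agent configuration: restrict the range from which the
# agent assigns sidecar proxy ports when bind_port is not set.
ports {
  sidecar_min_port = 21000
  sidecar_max_port = 21255
}
```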
- `local_service_address` - The `[address]:port`
that the proxy should use to connect to the local application instance. By default

Some files were not shown because too many files have changed in this diff.