open-consul/GNUmakefile

SHELL = bash
GOGOVERSION?=$(shell grep github.com/gogo/protobuf go.mod | awk '{print $$2}')
GOTOOLS = \
github.com/elazarl/go-bindata-assetfs/go-bindata-assetfs@master \
github.com/hashicorp/go-bindata/go-bindata@master \
golang.org/x/tools/cmd/cover \
golang.org/x/tools/cmd/stringer \
github.com/gogo/protobuf/protoc-gen-gofast@$(GOGOVERSION) \
github.com/hashicorp/protoc-gen-go-binary \
github.com/vektra/mockery/cmd/mockery \
github.com/golangci/golangci-lint/cmd/golangci-lint@v1.23.6
GOTAGS ?=
GOOS?=$(shell go env GOOS)
GOARCH?=$(shell go env GOARCH)
GOPATH=$(shell go env GOPATH)
MAIN_GOPATH=$(shell go env GOPATH | cut -d: -f1)
ASSETFS_PATH?=agent/bindata_assetfs.go
# Get the git commit
GIT_COMMIT?=$(shell git rev-parse --short HEAD)
GIT_COMMIT_YEAR?=$(shell git show -s --format=%cd --date=format:%Y HEAD)
GIT_DIRTY?=$(shell test -n "`git status --porcelain`" && echo "+CHANGES" || true)
GIT_DESCRIBE?=$(shell git describe --tags --always --match "v*")
GIT_IMPORT=github.com/hashicorp/consul/version
GOLDFLAGS=-X $(GIT_IMPORT).GitCommit=$(GIT_COMMIT)$(GIT_DIRTY) -X $(GIT_IMPORT).GitDescribe=$(GIT_DESCRIBE)
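# These -X flags stamp the git metadata above into the
# github.com/hashicorp/consul/version package at link time; the build
# scripts are expected to pass them to go build via -ldflags.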
PROTOFILES?=$(shell find . -name '*.proto' | grep -v 'vendor/')
PROTOGOFILES=$(PROTOFILES:.proto=.pb.go)
PROTOGOBINFILES=$(PROTOFILES:.proto=.pb.binary.go)
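# Anything outside vendor/ with a .proto extension is run through protoc when
# its corresponding .pb.go file is out of date. To rebuild only specific files,
# override PROTOFILES (the path below is illustrative):
#   make proto-rebuild PROTOFILES="agent/agentpb/example.proto"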
ifeq ($(FORCE_REBUILD),1)
NOCACHE=--no-cache
else
NOCACHE=
endif
DOCKER_BUILD_QUIET?=1
ifeq (${DOCKER_BUILD_QUIET},1)
QUIET=-q
else
QUIET=
endif
CONSUL_DEV_IMAGE?=consul-dev
GO_BUILD_TAG?=consul-build-go
UI_BUILD_TAG?=consul-build-ui
BUILD_CONTAINER_NAME?=consul-builder
CONSUL_IMAGE_VERSION?=latest
################
# CI Variables #
################
CI_DEV_DOCKER_NAMESPACE?=hashicorpdev
CI_DEV_DOCKER_IMAGE_NAME?=consul
CI_DEV_DOCKER_WORKDIR?=bin/
################
TEST_MODCACHE?=1
TEST_BUILDCACHE?=1
# You can only use as many CPUs as you have allocated to Docker
ifdef TEST_DOCKER_CPUS
TEST_DOCKER_RESOURCE_CONSTRAINTS=--cpus $(TEST_DOCKER_CPUS)
TEST_PARALLELIZATION=-e GOMAXPROCS=$(TEST_DOCKER_CPUS)
else
TEST_DOCKER_RESOURCE_CONSTRAINTS=
TEST_PARALLELIZATION=
endif
ifeq ($(TEST_MODCACHE), 1)
TEST_MODCACHE_VOL=-v $(MAIN_GOPATH)/pkg/mod:/go/pkg/mod
else
TEST_MODCACHE_VOL=
endif
ifeq ($(TEST_BUILDCACHE), 1)
TEST_BUILDCACHE_VOL=-v $(shell go env GOCACHE):/root/.caches/go-build
else
TEST_BUILDCACHE_VOL=
endif
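# Example: run containerized tests without sharing the host's Go module and
# build caches:
#   make test-docker TEST_MODCACHE=0 TEST_BUILDCACHE=0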
DIST_TAG?=1
DIST_BUILD?=1
DIST_SIGN?=1
ifdef DIST_VERSION
DIST_VERSION_ARG=-v "$(DIST_VERSION)"
else
DIST_VERSION_ARG=
endif
ifdef DIST_RELEASE_DATE
DIST_DATE_ARG=-d "$(DIST_RELEASE_DATE)"
else
DIST_DATE_ARG=
endif
ifdef DIST_PRERELEASE
DIST_REL_ARG=-r "$(DIST_PRERELEASE)"
else
DIST_REL_ARG=
endif
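# Example dist invocation (the version, date and prerelease values are
# illustrative):
#   make dist DIST_VERSION=1.6.2 DIST_RELEASE_DATE="2019-08-26" DIST_PRERELEASE=beta1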
PUB_GIT?=1
PUB_WEBSITE?=1
ifeq ($(PUB_GIT),1)
PUB_GIT_ARG=-g
else
PUB_GIT_ARG=
endif
ifeq ($(PUB_WEBSITE),1)
PUB_WEBSITE_ARG=-w
else
PUB_WEBSITE_ARG=
endif
export GO_BUILD_TAG
export UI_BUILD_TAG
export BUILD_CONTAINER_NAME
export GIT_COMMIT
export GIT_COMMIT_YEAR
export GIT_DIRTY
export GIT_DESCRIBE
export GOTAGS
export GOLDFLAGS
# Allow skipping docker build during integration tests in CI since we already
# have a built binary
ENVOY_INTEG_DEPS?=dev-docker
ifdef SKIP_DOCKER_BUILD
ENVOY_INTEG_DEPS=noop
endif
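# Example: reuse a binary/image built in an earlier CI step instead of
# rebuilding:
#   make test-envoy-integ SKIP_DOCKER_BUILD=1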
DEV_PUSH?=0
ifeq ($(DEV_PUSH),1)
DEV_PUSH_ARG=
else
DEV_PUSH_ARG=--no-push
endif
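# Example: allow the dev tree build to push its result (defaults to --no-push):
#   make dev-tree DEV_PUSH=1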
# all builds binaries for all targets
all: bin
# used to make integration dependencies conditional
noop: ;
bin: tools
@$(SHELL) $(CURDIR)/build-support/scripts/build-local.sh
# dev creates binaries for testing locally - these are put into ./bin and $GOPATH
dev: changelogfmt dev-build
dev-build:
@$(SHELL) $(CURDIR)/build-support/scripts/build-local.sh -o $(GOOS) -a $(GOARCH)
dev-docker: linux
@echo "Pulling consul container image - $(CONSUL_IMAGE_VERSION)"
@docker pull consul:$(CONSUL_IMAGE_VERSION) >/dev/null
@echo "Building Consul Development container - $(CONSUL_DEV_IMAGE)"
@docker build $(NOCACHE) $(QUIET) -t '$(CONSUL_DEV_IMAGE)' --build-arg CONSUL_IMAGE_VERSION=$(CONSUL_IMAGE_VERSION) $(CURDIR)/pkg/bin/linux_amd64 -f $(CURDIR)/build-support/docker/Consul-Dev.dockerfile
# In CircleCI, the linux binary will be attached from a previous step at bin/. This make target
# should only run in CI and not locally.
ci.dev-docker:
@echo "Pulling consul container image - $(CONSUL_IMAGE_VERSION)"
@docker pull consul:$(CONSUL_IMAGE_VERSION) >/dev/null
@echo "Building Consul Development container - $(CI_DEV_DOCKER_IMAGE_NAME)"
@docker build $(NOCACHE) $(QUIET) -t '$(CI_DEV_DOCKER_NAMESPACE)/$(CI_DEV_DOCKER_IMAGE_NAME):$(GIT_COMMIT)' \
--build-arg CONSUL_IMAGE_VERSION=$(CONSUL_IMAGE_VERSION) \
--label COMMIT_SHA=$(CIRCLE_SHA1) \
--label PULL_REQUEST=$(CIRCLE_PULL_REQUEST) \
--label CIRCLE_BUILD_URL=$(CIRCLE_BUILD_URL) \
$(CI_DEV_DOCKER_WORKDIR) -f $(CURDIR)/build-support/docker/Consul-Dev.dockerfile
@echo $(DOCKER_PASS) | docker login -u="$(DOCKER_USER)" --password-stdin
@echo "Pushing dev image to: https://cloud.docker.com/u/hashicorpdev/repository/docker/hashicorpdev/consul"
@docker push $(CI_DEV_DOCKER_NAMESPACE)/$(CI_DEV_DOCKER_IMAGE_NAME):$(GIT_COMMIT)
ifeq ($(CIRCLE_BRANCH), master)
@docker tag $(CI_DEV_DOCKER_NAMESPACE)/$(CI_DEV_DOCKER_IMAGE_NAME):$(GIT_COMMIT) $(CI_DEV_DOCKER_NAMESPACE)/$(CI_DEV_DOCKER_IMAGE_NAME):latest
@docker push $(CI_DEV_DOCKER_NAMESPACE)/$(CI_DEV_DOCKER_IMAGE_NAME):latest
endif
changelogfmt:
@echo "--> Making [GH-xxxx] references clickable..."
@sed -E 's|([^\[])\[GH-([0-9]+)\]|\1[[GH-\2](https://github.com/hashicorp/consul/issues/\2)]|g' CHANGELOG.md > changelog.tmp && mv changelog.tmp CHANGELOG.md
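# e.g. an entry like "bugfix [GH-1234]" becomes
# "bugfix [[GH-1234](https://github.com/hashicorp/consul/issues/1234)]"
# (issue number illustrative).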
# linux builds a linux package independent of the source platform
linux:
@$(SHELL) $(CURDIR)/build-support/scripts/build-local.sh -o linux -a amd64
# dist builds binaries for all platforms and packages them for distribution
dist:
@$(SHELL) $(CURDIR)/build-support/scripts/release.sh -t '$(DIST_TAG)' -b '$(DIST_BUILD)' -S '$(DIST_SIGN)' $(DIST_VERSION_ARG) $(DIST_DATE_ARG) $(DIST_REL_ARG)
verify:
@$(SHELL) $(CURDIR)/build-support/scripts/verify.sh
publish:
@$(SHELL) $(CURDIR)/build-support/scripts/publish.sh $(PUB_GIT_ARG) $(PUB_WEBSITE_ARG)
dev-tree:
@$(SHELL) $(CURDIR)/build-support/scripts/dev.sh $(DEV_PUSH_ARG)
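# cover/cov runs the root, sdk and api test suites with coverage enabled,
# merges the submodule profiles into coverage.out and opens the HTML report.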
cover: cov
cov: other-consul dev-build
go test -tags '$(GOTAGS)' ./... -coverprofile=coverage.out
cd sdk && go test -tags '$(GOTAGS)' ./... -coverprofile=../coverage.sdk.part
cd api && go test -tags '$(GOTAGS)' ./... -coverprofile=../coverage.api.part
grep -h -v "mode: set" coverage.{sdk,api}.part >> coverage.out
rm -f coverage.{sdk,api}.part
go tool cover -html=coverage.out
test: other-consul dev-build vet test-internal
go-mod-tidy:
@echo "--> Running go mod tidy"
@cd sdk && go mod tidy
@cd api && go mod tidy
@go mod tidy
update-vendor: go-mod-tidy
@echo "--> Running go mod vendor"
@go mod vendor
@echo "--> Removing vendoring of our own nested modules"
@rm -rf vendor/github.com/hashicorp/consul
@grep -v "hashicorp/consul/" < vendor/modules.txt > vendor/modules.txt.new
@mv vendor/modules.txt.new vendor/modules.txt
test-internal:
@echo "--> Running go test"
@rm -f test.log exit-code
@# Dump verbose output to test.log so we can surface test names on failure but
@# hide it from Travis, as it exceeds their log limits and causes the job to be
@# terminated (over 4MB and over 10k lines in the UI). We need to output
@# _something_ to stop them terminating us due to inactivity...
@echo "===================== submodule: sdk =====================" | tee -a test.log
cd sdk && { go test -v $(GOTEST_FLAGS) -tags '$(GOTAGS)' ./... 2>&1 ; echo $$? >> ../exit-code ; } | tee -a ../test.log | egrep '^(ok|FAIL|panic:|--- FAIL|--- PASS)'
@echo "===================== submodule: api =====================" | tee -a test.log
cd api && { go test -v $(GOTEST_FLAGS) -tags '$(GOTAGS)' ./... 2>&1 ; echo $$? >> ../exit-code ; } | tee -a ../test.log | egrep '^(ok|FAIL|panic:|--- FAIL|--- PASS)'
@echo "===================== submodule: root =====================" | tee -a test.log
{ go test -v $(GOTEST_FLAGS) -tags '$(GOTAGS)' ./... 2>&1 ; echo $$? >> exit-code ; } | tee -a test.log | egrep '^(ok|FAIL|panic:|--- FAIL|--- PASS)'
@# if everything worked fine then all 3 zeroes will be collapsed to a single zero here
@exit_codes="$$(sort -u exit-code)" ; echo "$$exit_codes" > exit-code
@echo "Exit code: $$(cat exit-code)"
@# This prints any race reports between ====== lines
@awk '/^WARNING: DATA RACE/ {do_print=1; print "=================="} do_print==1 {print} /^={10,}/ {do_print=0}' test.log || true
@grep -A10 'panic: ' test.log || true
@# Print all the failure output until the next non-indented line - testify
@# helpers often output multiple lines for readability, which are useless if we
@# can't see them. The unintuitive order of matches is necessary. No || true is
@# needed because awk exits successfully even when there is no match, and
@# adding it would break non-bash shells locally.
@awk '/^[^[:space:]]/ {do_print=0} /--- SKIP/ {do_print=1} do_print==1 {print}' test.log
@awk '/^[^[:space:]]/ {do_print=0} /--- FAIL/ {do_print=1} do_print==1 {print}' test.log
@grep '^FAIL' test.log || true
@if [ "$$(cat exit-code)" == "0" ] ; then echo "PASS" ; exit 0 ; else exit 1 ; fi
test-race:
$(MAKE) GOTEST_FLAGS=-race
# Run tests with config for CI so `make test` can still be local-dev friendly.
test-ci: other-consul dev-build vet
@ if ! GOTEST_FLAGS="-short -timeout 8m -p 3 -parallel 4" make test-internal; then \
echo " ============"; \
echo " Retrying 1/2"; \
echo " ============"; \
if ! GOTEST_FLAGS="-timeout 9m -p 1 -parallel 1" make test-internal; then \
echo " ============"; \
echo " Retrying 2/2"; \
echo " ============"; \
GOTEST_FLAGS="-timeout 9m -p 1 -parallel 1" make test-internal; \
fi \
fi
test-flake: other-consul vet
@$(SHELL) $(CURDIR)/build-support/scripts/test-flake.sh --pkg "$(FLAKE_PKG)" --test "$(FLAKE_TEST)" --cpus "$(FLAKE_CPUS)" --n "$(FLAKE_N)"
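# Example (the package and test names are illustrative):
#   make test-flake FLAKE_PKG=./agent FLAKE_TEST=TestExample FLAKE_CPUS=0.15 FLAKE_N=30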
test-docker: linux go-build-image
@# -ti runs in the foreground showing stdout
@# --rm removes the container once it's finished running
@# TEST_MODCACHE_VOL - args for mapping in the go module cache
@# TEST_BUILDCACHE_VOL - args for mapping in the go build cache
@# The env vars pass through all the relevant bits of information
@# needed for running the tests
@# We map in our local linux_amd64 bin directory as that's where the linux
@# dependency target dropped the binary. We could build the binary in the
@# container too but that might take longer as caching gets weird
@# Lastly we map the source dir here to the /consul workdir
@echo "Running tests within a docker container"
@docker run -ti --rm \
-e 'GOTEST_FLAGS=$(GOTEST_FLAGS)' \
-e 'GOTAGS=$(GOTAGS)' \
-e 'GIT_COMMIT=$(GIT_COMMIT)' \
-e 'GIT_COMMIT_YEAR=$(GIT_COMMIT_YEAR)' \
-e 'GIT_DIRTY=$(GIT_DIRTY)' \
-e 'GIT_DESCRIBE=$(GIT_DESCRIBE)' \
$(TEST_PARALLELIZATION) \
$(TEST_DOCKER_RESOURCE_CONSTRAINTS) \
$(TEST_MODCACHE_VOL) \
$(TEST_BUILDCACHE_VOL) \
-v $(MAIN_GOPATH)/bin/linux_amd64/:/go/bin \
-v $(shell pwd):/consul \
$(GO_BUILD_TAG) \
make test-internal
other-consul:
@echo "--> Checking for other consul instances"
@if ps -ef | grep 'consul agent' | grep -v grep ; then \
echo "Found other running consul agents. This may affect your tests." ; \
exit 1 ; \
fi
format:
@echo "--> Running go fmt"
@go fmt ./...
@cd api && go fmt ./... | sed 's@^@api/@'
@cd sdk && go fmt ./... | sed 's@^@sdk/@'
vet:
@echo "--> Running go vet"
@go vet -tags '$(GOTAGS)' ./... && \
(cd api && go vet -tags '$(GOTAGS)' ./...) && \
(cd sdk && go vet -tags '$(GOTAGS)' ./...); if [ $$? -ne 0 ]; then \
echo ""; \
echo "Vet found suspicious constructs. Please check the reported constructs"; \
echo "and fix them if necessary before submitting the code for review."; \
exit 1; \
fi
# If you've run "make ui" manually then this will get called for you. This is
# also run as part of the release build script when it verifies that there are no
# changes to the UI assets that aren't checked in.
static-assets:
@go-bindata-assetfs -pkg agent -prefix pkg -o $(ASSETFS_PATH) ./pkg/web_ui/...
@go fmt $(ASSETFS_PATH)
# Build the static web ui and build static assets inside a Docker container
ui: ui-docker static-assets-docker
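# tools installs the GOTOOLS listed above into a throwaway module under
# .gotools so that `go get` doesn't modify the repository's own go.mod/go.sum.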
tools:
@mkdir -p .gotools
@cd .gotools && if [[ ! -f go.mod ]]; then \
go mod init consul-tools ; \
fi
cd .gotools && go get -v $(GOTOOLS)
version:
@echo -n "Version: "
@$(SHELL) $(CURDIR)/build-support/scripts/version.sh
@echo -n "Version + release: "
@$(SHELL) $(CURDIR)/build-support/scripts/version.sh -r
@echo -n "Version + git: "
@$(SHELL) $(CURDIR)/build-support/scripts/version.sh -g
@echo -n "Version + release + git: "
@$(SHELL) $(CURDIR)/build-support/scripts/version.sh -r -g
docker-images: go-build-image ui-build-image
go-build-image:
@echo "Building Golang build container"
@docker build $(NOCACHE) $(QUIET) --build-arg 'GOTOOLS=$(GOTOOLS)' -t $(GO_BUILD_TAG) - < build-support/docker/Build-Go.dockerfile
ui-build-image:
@echo "Building UI build container"
@docker build $(NOCACHE) $(QUIET) -t $(UI_BUILD_TAG) - < build-support/docker/Build-UI.dockerfile
static-assets-docker: go-build-image
@$(SHELL) $(CURDIR)/build-support/scripts/build-docker.sh static-assets
consul-docker: go-build-image
@$(SHELL) $(CURDIR)/build-support/scripts/build-docker.sh consul
ui-docker: ui-build-image
@$(SHELL) $(CURDIR)/build-support/scripts/build-docker.sh ui
test-envoy-integ: $(ENVOY_INTEG_DEPS)
@$(SHELL) $(CURDIR)/test/integration/connect/envoy/run-tests.sh
test-vault-ca-provider:
ifeq ("$(CIRCLECI)","true")
# Run in CI
gotestsum --format=short-verbose --junitfile "$(TEST_RESULTS_DIR)/gotestsum-report.xml" -- $(CURDIR)/agent/connect/ca/* -run 'TestVault(CA)?Provider'
else
# Run locally
@echo "Running /agent/connect/ca TestVault(CA)?Provider tests in verbose mode"
@go test $(CURDIR)/agent/connect/ca/* -run 'TestVault(CA)?Provider' -v
endif
proto-delete:
@echo "Removing $(PROTOGOFILES)"
-@rm $(PROTOGOFILES)
@echo "Removing $(PROTOGOBINFILES)"
-@rm $(PROTOGOBINFILES)
proto-rebuild: proto-delete proto
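
# Generate a .pb.go and a .pb.binary.go file for every .proto file found
# outside of vendor/ (see PROTOFILES above); each output is only regenerated
# when it is out of date relative to its source .proto.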
proto: $(PROTOGOFILES) $(PROTOGOBINFILES)
@echo "Generated all protobuf Go files"
%.pb.go %.pb.binary.go: %.proto
	@$(SHELL) $(CURDIR)/build-support/scripts/proto-gen.sh --grpc --import-replace "$<"
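
# Targets listed as .PHONY do not produce files of the same name, so make
# always runs their recipes instead of treating an existing file or directory
# as up to date.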
.PHONY: all ci bin dev dist cov test test-ci test-internal cover format vet ui static-assets tools
.PHONY: docker-images go-build-image ui-build-image static-assets-docker consul-docker ui-docker
.PHONY: version proto proto-rebuild proto-delete test-envoy-integ