This machinery was not used and does not appear to be maintained. In practice we don't
need separate tooling to detect flaky tests: our CI system already identifies them at
https://app.circleci.com/insights/github/hashicorp/consul/workflows/go-tests/tests?branch=main
What we mostly need is a way to reproduce flakes, which can be done directly with the Go
CLI using the -race, -count, and (new in Go 1.17) -shuffle flags.
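For example, to reproduce a flake locally (the test pattern and package path here are illustrative):

```
go test ./agent/consul/ -run TestACLEndpoint -race -count=20 -shuffle=on
```

-count reruns the tests, -shuffle=on randomizes their execution order, and -race enables the race detector.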
Signed-off-by: Jakub Sokołowski <jakub@status.im>
* agent: add failures_before_warning setting
The new setting allows users to specify the number of check failures
that have to happen before a service's status is updated to `warning`.
This gives detected issues more visibility without creating alerts and
paging administrators. Previously the service status did not update at
all until it reached the configured `failures_before_critical` threshold;
now Consul updates the Web UI with the `warning` state and the output of
the service check as soon as `failures_before_warning` is breached.
The default value of `FailuresBeforeWarning` is the same as the value of
`FailuresBeforeCritical`, which allows for retaining the previous default
behavior of not triggering a warning.
When `FailuresBeforeWarning` is set to a value higher than
`FailuresBeforeCritical`, it has no effect, as `FailuresBeforeCritical`
takes precedence.
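For example, a check definition like the following (values illustrative) marks the service `warning` after one failure and `critical` after three:

```json
{
  "check": {
    "name": "api-health",
    "http": "http://localhost:8080/health",
    "interval": "10s",
    "failures_before_warning": 1,
    "failures_before_critical": 3
  }
}
```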
Resolves: https://github.com/hashicorp/consul/issues/10680
Signed-off-by: Jakub Sokołowski <jakub@status.im>
Co-authored-by: Jakub Sokołowski <jakub@status.im>
This ensures that if someone does include an extension Consul does not currently make use of, that extension is actually usable: without linking these Envoy protobufs into the main binary, Consul can't round-trip the escape hatches to send them down to Envoy.
Whenever the go-control-plane library is next upgraded, we just have to re-run 'make envoy-library'.
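A minimal sketch of the pattern 'make envoy-library' generates (the package name and the single import shown are illustrative; the real generated file lists many extensions):

```go
package xdscommon

import (
	// A blank import links the extension's protobuf package into the
	// binary, registering its message types so escape-hatch
	// google.protobuf.Any payloads can be decoded and re-encoded.
	_ "github.com/envoyproxy/go-control-plane/envoy/extensions/filters/http/lua/v3"
)
```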
Right now this is only hooked into the insecure RPC server and requires JWT authorization. If no JWT authorizer is set up in the configuration then we inject a disabled “authorizer” that always reports that JWT authorization is disabled.
This only works so long as we use simple protobuf types. Constructs such as oneof or Any types that require type annotations to decode properly will fail hard, but that is by design. If/when we want to use any of that we will probably need to consider a v2 API.
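A hedged sketch of the disabled-authorizer fallback (the interface and type names are illustrative, not Consul's actual ones):

```go
package autoconf

import "errors"

// jwtAuthorizer is the narrow interface the insecure RPC server calls.
type jwtAuthorizer interface {
	Authorize(jwt string) error
}

// disabledAuthorizer is injected when no JWT authorizer is configured;
// it rejects every request with an explanatory error.
type disabledAuthorizer struct{}

func (disabledAuthorizer) Authorize(string) error {
	return errors.New("JWT authorization is disabled")
}
```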
* Add JSON and Binary Marshaler Generators for Protobuf Types
* Generate files with the correct version of gogo/protobuf
I have pinned the version in the makefile so that when you run make tools you get the right version. This pulls the version out of go.mod so it should remain up to date.
The version we are using at the time of this commit is v1.2.1
* Fixup some shell output
* Update how we determine the version of gogo
This just greps the go.mod file instead of expecting the go mod cache to already be present
* Fixup vendoring and remove no longer needed json encoder functions
* Add build system support for protobuf generation
This is done generically so that we don’t have to keep updating the makefile for each new proto file.
Note: anything not in the vendor directory and with a .proto extension will be run through protoc if the corresponding namespace.pb.go file is not up to date.
If you want to rebuild just a single proto file you can do so with: make proto-rebuild PROTOFILES=<list of proto files to rebuild>
Providing the PROTOFILES var will override the default behavior of finding all the .proto files.
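Roughly, the default discovery rule described above amounts to the following (a sketch, not the Makefile's exact logic):

```
find . -name '*.proto' -not -path './vendor/*'
```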
* Start adding types to the agent/proto package
These will be needed for some other work and are by no means comprehensive.
* Add ability to resolve/fixup the agentpb.ACLLinks structure in the state store.
* Use protobuf marshalling of raft requests instead of msgpack for protoc generated types.
This does not change any encoding of existing types.
* Removed structs package automatically encoding with protobuf marshalling
Instead the caller of raftApply that wants to opt-in to protobuf encoding will have to call `raftApplyProtobuf`
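A hedged sketch of the encoding step behind the opt-in (the helper and type names here are illustrative, not Consul's actual ones):

```go
package consul

import "github.com/gogo/protobuf/proto"

// MessageType mirrors structs.MessageType, the leading byte the FSM
// dispatches on.
type MessageType uint8

// encodeProtobufRequest proto-marshals a protoc-generated request and
// prepends the message type byte, mirroring what the msgpack path does.
func encodeProtobufRequest(t MessageType, msg proto.Message) ([]byte, error) {
	body, err := proto.Marshal(msg)
	if err != nil {
		return nil, err
	}
	return append([]byte{uint8(t)}, body...), nil
}
```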
* Run update-vendor to fixup modules.txt
Nothing changed as far as dependencies go, but the ordering of modules in that file depends on when they are first seen, and it's not alphabetical.
* Rename some things and implement the structs.RPCInfo interface bits
agentpb.QueryOptions and agentpb.WriteRequest implement 3 of the 4 RPCInfo funcs and the new TargetDatacenter message type implements the fourth.
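For reference, a hedged sketch of structs.RPCInfo (the exact method set may differ slightly):

```go
type RPCInfo interface {
	RequestDatacenter() string
	IsRead() bool
	AllowStaleRead() bool
	TokenSecret() string
}
```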
* Use the right encoding function.
* Renamed agent/proto package to agent/agentpb to prevent package name conflicts
* Update modules.txt to fix ordering
* Change blockingQuery to take in interfaces for the query options and meta
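A hedged sketch of the new shape (interface and method names are illustrative): the interfaces expose only what blockingQuery needs, so both the msgpack structs types and the protobuf-generated ones can satisfy them.

```go
package consul

import "time"

type blockingQueryOptions interface {
	GetMinQueryIndex() uint64
	GetMaxQueryTime() time.Duration
}

type blockingQueryMeta interface {
	SetIndex(uint64)
	SetLastContact(time.Duration)
	SetKnownLeader(bool)
}

func blockingQuery(opts blockingQueryOptions, meta blockingQueryMeta, run func() error) error {
	// Real implementation: block until the state store index exceeds
	// GetMinQueryIndex or GetMaxQueryTime elapses; elided here.
	return run()
}
```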
* Add %T to error output.
* Add/Update some comments
Adds 'enterprise' text underneath the 'startup' logo if the
UI is built with a CONSUL_BINARY_TYPE environment variable that doesn't
equal `oss`.
1. Prints the $version that you are passing through to the docker
container
2. Prints the CONSUL_VERSION that is used in the UI v2 footer
3. Additionally added a `mkdir -p` so that `make ui-docker` runs with a
clean exit if run in isolation
Improvements:
- More modular
- Building within docker doesn’t use volumes, so it can be run on a remote docker host
- Build containers include only minimal context so they only rarely need to be rebuilt and most of the time can be used from the cache.
- 3 build containers instead of 1: one based on the upstream golang containers, with all our required GOTOOLS installed, for building Go code; one like the old container, based on Ubuntu Bionic, for building the old UI (we didn’t bother creating a much better container, as this shouldn’t be needed once we completely remove the legacy UI); and one, Alpine-based with node, ember, and yarn installed, for building the new UI.
- Top level makefile has the ability to do a container based build without running make dist
- Can build for arbitrary platforms at the top level using: make consul-docker XC_OS=… XC_ARCH=…
- Overridable functionality to allow for customizations to the enterprise build (like generating multiple binaries)
- Unified how we compile our Go code: always use gox, even for dev builds; or rather, always use the tooling around our scripts, which makes sure things get copied to the correct places throughout the filesystem.