Introducing a new approach to testing Vault artifacts before merge
and after merge/notarization/signing. Rather than run a few static
scenarios across the artifacts, we now have the ability to run a
pseudo-random sample of scenarios across many different build artifacts.
We've added 20 possible scenarios for the AMD64 and ARM64 binary
bundles, which we've broken into five test groups. On any given push to
a pull request branch, we will now choose a random test group and
execute its corresponding scenarios against the resulting build
artifacts. This gives us greater test coverage while letting us split
the verification across many different pull requests.
The post-merge release testing pipeline behaves in a similar fashion;
however, the artifacts that we use for testing have been notarized and
signed prior to testing. We've also reduced the number of groups so that
we run more scenarios after a merge to a release branch.
We intend to take what we've learned building this in GitHub Actions and
roll it into an easier-to-use feature that is native to Enos. Until then,
we'll have to manually add scenarios to each matrix file and manually
number the test groups. It's important to note that GitHub requires every
matrix to include at least one vector, so every artifact that is being
tested must include at least one scenario in order for all workflows to pass
and thus satisfy branch merge requirements.
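The group selection itself happens in the GitHub Actions workflows, but the
idea is simple enough to sketch in Go. This is a hypothetical illustration,
assuming the group is derived from the commit SHA so it varies across pushes
but stays reproducible for a given commit:
```
package main

import (
	"fmt"
	"hash/fnv"
)

// chooseTestGroup maps a commit SHA onto one of numGroups test
// groups. Each push lands on a pseudo-random group, but re-running
// the workflow for the same commit picks the same group.
func chooseTestGroup(commitSHA string, numGroups int) int {
	h := fnv.New32a()
	h.Write([]byte(commitSHA))
	return int(h.Sum32()%uint32(numGroups)) + 1 // groups are numbered 1..numGroups
}

func main() {
	fmt.Println(chooseTestGroup("deadbeef", 5)) // prints a group between 1 and 5
}
```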
* Add support for different artifact types to enos-run
* Add support for different runner types to enos-run
* Add arm64 scenarios to build matrix
* Expand build matrices to include different variants
* Update Consul versions in Enos scenarios and matrices
* Refactor enos-run environment
* Add minimum version filtering support to enos-run. This allows us to
automatically exclude scenarios that require a more recent version of
Vault
* Add maximum version filtering support to enos-run. This allows us to
automatically exclude scenarios that require an older version of
Vault (see the filtering sketch after this list)
* Fix Node 12 deprecation warnings
* Rename enos-verify-stable to enos-release-testing-oss
* Convert artifactory matrix into enos-release-testing-oss matrices
* Add all Vault editions to Enos scenario matrices
* Fix verify version with complex Vault edition metadata
* Rename the crt-builder to ci-helper
* Add more version helpers to ci-helper and Makefile
* Update CODEOWNERS for quality team
* Add support for filtering matrices by group and version constraints
* Add support for pseudo random test scenario execution
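For the version filtering items above, here's a minimal sketch of the idea
using the hashicorp/go-version library; the scenario shape and names are
illustrative rather than the actual enos-run implementation:
```
package main

import (
	"fmt"

	version "github.com/hashicorp/go-version"
)

// scenario pairs an Enos scenario with optional Vault version
// bounds; empty strings mean unconstrained. Illustrative only.
type scenario struct {
	name     string
	minVault string // oldest Vault the scenario supports
	maxVault string // newest Vault the scenario supports
}

// include reports whether the scenario applies to the candidate
// artifact version: the minimum filter drops scenarios that need
// a newer Vault, the maximum filter drops scenarios that only
// support an older one.
func include(s scenario, candidate *version.Version) (bool, error) {
	if s.minVault != "" {
		min, err := version.NewVersion(s.minVault)
		if err != nil {
			return false, err
		}
		if candidate.LessThan(min) {
			return false, nil
		}
	}
	if s.maxVault != "" {
		max, err := version.NewVersion(s.maxVault)
		if err != nil {
			return false, err
		}
		if candidate.GreaterThan(max) {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	v := version.Must(version.NewVersion("1.12.0"))
	ok, _ := include(scenario{name: "replication", minVault: "1.13.0"}, v)
	fmt.Println(ok) // false: the scenario needs a newer Vault
}
```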
Signed-off-by: Ryan Cragun <me@ryan.ec>
* add Link config, init, and capabilities
* add node status proto
* bump protoc version to 3.21.9
* make proto
* add link tests
* remove wrapped link
* add changelog entry
* update changelog entry
Here we make the following major changes:
* Centralize CRT builder logic into a script utility so that we can share the
logic for building artifacts in CI or locally.
* Simplify the build workflow by calling a reusable workflow many times
instead of repeating the contents.
* Create a workflow that validates whether or not the build workflow and all
child workflows have succeeded to allow for merge protection.
Motivation
* We need branch requirements for the build workflow and all subsequent
integration tests (QT-353)
* We need to ensure that the Enos local builder works (QT-558)
* Debugging build failures can be difficult because one has to hand craft the
steps to recreate the build
* Merge conflicts between Vault OSS and Vault ENT build workflows are quite
painful. As the build workflow must keep the same file name in both, we'll
reduce the edition-specific content contained in each. The implementations
of building will be unique per edition, so we won't have to worry about
conflict resolution.
* Since we're going to be touching the build workflow to do the first two
items, we might as well try to improve those other issues at the same time
to reduce the overhead of backports and conflicts.
Considerations
* Build logic for Vault OSS and Vault ENT differs
* The Enos local builder was duplicating a lot of what we did in the CRT build
workflow
* Version and other artifact metadata has been an issue before. Debugging it
has been tedious and error-prone.
* The build workflow is full of brittle copy and paste that is hard to
understand, especially for all of the release editions in Vault Enterprise
* Branch check requirements for workflows are incredibly painful to use for
workflows that are dynamic or change often. The required workflows have to be
configured in GitHub settings by administrators. They would also prevent us
from having simple docs PRs, since required integration workflows always have
to run to satisfy branch requirements.
* The upcoming Doormat credentials requirements will require us to modify
which event types trigger workflows. This changes those ahead of time since
we're already doing so much to the build workflow. The only noticeable impact
will be that the build workflow no longer runs on pushes to non-main or
release branches. Testing other branches now requires a workflow_dispatch
from the Actions tab or a pull request.
Solutions
* Centralize the logic that determines build metadata and creates releasable
Vault artifacts. Instead of cargo-culting logic multiple times in the build
workflow and the Enos local modules, we now have a crt-builder script which
determines build metadata and also handles building the UI, Vault, and the
package bundle. There are make targets for all of the available sub-commands.
Now what we use in the pipeline is the same thing as the local builder, and
it can be executed locally by developers. The crt-builder script works in OSS
and Enterprise, so we will never have to deal with them diverging or with
special-casing things in the build workflow.
* Refactor the bulk of the Vault building into a reusable workflow that we can
call multiple times. This allows us to define Vault builds in a much simpler
manner and makes resolving merge conflicts much easier.
* Rather than trying to maintain a list and manually configure the branch check
requirements for build, we'll trigger a single workflow that uses the GitHub
event system to determine whether the build workflow (all of the sub-workflows
included) has passed. We'll then create branch restrictions on that single
workflow down the line.
Signed-off-by: Ryan Cragun <me@ryan.ec>
By adding the linker flags `-s -w` we can reduce the Vault binary size
from 204 MB to 167 MB (about an 18% reduction in size).
This removes the DWARF sections of the binary.
For example, before:
```
$ objdump --section-headers vault-debug
vault-debug: file format mach-o arm64
Sections:
Idx Name Size VMA Type
0 __text 03a00340 0000000100001000 TEXT
1 __symbol_stub1 00000618 0000000103a01340 TEXT
2 __rodata 00c18088 0000000103a01960 DATA
3 __rodata 015aee18 000000010461c000 DATA
4 __typelink 0004616c 0000000105bcae20 DATA
5 __itablink 0000eb68 0000000105c10fa0 DATA
6 __gosymtab 00000000 0000000105c1fb08 DATA
7 __gopclntab 02a5b8e0 0000000105c1fb20 DATA
8 __go_buildinfo 00008c10 000000010867c000 DATA
9 __nl_symbol_ptr 00000410 0000000108684c10 DATA
10 __noptrdata 000fed00 0000000108685020 DATA
11 __data 0004e1f0 0000000108783d20 DATA
12 __bss 00052520 00000001087d1f20 BSS
13 __noptrbss 000151b0 0000000108824440 BSS
14 __zdebug_abbrev 00000129 000000010883c000 DATA, DEBUG
15 __zdebug_line 00651374 000000010883c129 DATA, DEBUG
16 __zdebug_frame 001e1de9 0000000108e8d49d DATA, DEBUG
17 __debug_gdb_scri 00000043 000000010906f286 DATA, DEBUG
18 __zdebug_info 00de2c09 000000010906f2c9 DATA, DEBUG
19 __zdebug_loc 00a619ea 0000000109e51ed2 DATA, DEBUG
20 __zdebug_ranges 001e94a6 000000010a8b38bc DATA, DEBUG
```
And after:
```
$ objdump --section-headers vault-no-debug
vault-no-debug: file format mach-o arm64
Sections:
Idx Name Size VMA Type
0 __text 03a00340 0000000100001000 TEXT
1 __symbol_stub1 00000618 0000000103a01340 TEXT
2 __rodata 00c18088 0000000103a01960 DATA
3 __rodata 015aee18 000000010461c000 DATA
4 __typelink 0004616c 0000000105bcae20 DATA
5 __itablink 0000eb68 0000000105c10fa0 DATA
6 __gosymtab 00000000 0000000105c1fb08 DATA
7 __gopclntab 02a5b8e0 0000000105c1fb20 DATA
8 __go_buildinfo 00008c20 000000010867c000 DATA
9 __nl_symbol_ptr 00000410 0000000108684c20 DATA
10 __noptrdata 000fed00 0000000108685040 DATA
11 __data 0004e1f0 0000000108783d40 DATA
12 __bss 00052520 00000001087d1f40 BSS
13 __noptrbss 000151b0 0000000108824460 BSS
```
The only side effect I have been able to find is that it is no longer
possible to use [delve](https://github.com/go-delve/delve) to run the
Vault binary.
Note, however, that running delve and other debuggers requires access
to the full source code, which isn't provided for the Enterprise, HSM,
etc. binaries, so those can't be debugged anyway except by people who
have the full source.
The following still work as before:
* panic traces
* `vault debug`
* error messages
* Despite what the documentation says, these flags do *not* delete the
function symbol table (so it is not the same as having a `strip`ped
binary).
The DWARF data contains mappings between the compiled binary and functions,
parameters, and variables in the source code.
Using `llvm-dwarfdump`, it looks like:
```
0x011a6d85: DW_TAG_subprogram
DW_AT_name ("github.com/hashicorp/vault/api.(*replicationStateStore).recordState")
DW_AT_low_pc (0x0000000000a99300)
DW_AT_high_pc (0x0000000000a99419)
DW_AT_frame_base (DW_OP_call_frame_cfa)
DW_AT_decl_file ("/home/swenson/vault/api/client.go")
DW_AT_external (0x01)
0x011a6de1: DW_TAG_formal_parameter
DW_AT_name ("w")
DW_AT_variable_parameter (0x00)
DW_AT_decl_line (1735)
DW_AT_type (0x00000000001e834a "github.com/hashicorp/vault/api.replicationStateStore *")
DW_AT_location (0x009e832a:
[0x0000000000a99300, 0x0000000000a9933a): DW_OP_reg0 RAX
[0x0000000000a9933a, 0x0000000000a99419): DW_OP_call_frame_cfa)
0x011a6def: DW_TAG_formal_parameter
DW_AT_name ("resp")
DW_AT_variable_parameter (0x00)
DW_AT_decl_line (1735)
DW_AT_type (0x00000000001e82a2 "github.com/hashicorp/vault/api.Response *")
DW_AT_location (0x009e8370:
[0x0000000000a99300, 0x0000000000a9933a): DW_OP_reg3 RBX
[0x0000000000a9933a, 0x0000000000a99419): DW_OP_fbreg +8)
0x011a6e00: DW_TAG_variable
DW_AT_name ("newState")
DW_AT_decl_line (1738)
DW_AT_type (0x0000000000119f32 "string")
DW_AT_location (0x009e83b7:
[0x0000000000a99385, 0x0000000000a99385): DW_OP_reg0 RAX, DW_OP_piece 0x8, DW_OP_piece 0x8
[0x0000000000a99385, 0x0000000000a993a4): DW_OP_reg0 RAX, DW_OP_piece 0x8, DW_OP_reg3 RBX, DW_OP_piece 0x8
[0x0000000000a993a4, 0x0000000000a993a7): DW_OP_piece 0x8, DW_OP_reg3 RBX, DW_OP_piece 0x8)
```
This says that the particular binary section is the function
`github.com/hashicorp/vault/api.(*replicationStateStore).recordState`,
from the file `/home/swenson/vault/api/client.go`, containing
the `w` parameter on line 1735 mapped to certain registers and memory,
the `resp` parameter on line 1735 mapped to certain registers and memory,
and the `newState` variable on line 1738 mapped to certain registers
and memory.
It's really only useful for a debugger.
Anyone running the code in a debugger will need full access to the source
code anyway, so presumably they will be able to run `make dev` to build
the version with the DWARF sections intact, and then run their debugger.
* Update go version to 1.19.2
This commit updates the default version of Go to 1.19.2. This update
includes minor security fixes for the archive/tar, net/http/httputil, and
regexp packages.
For more information on the release, see: https://go.dev/doc/devel/release#go1.19.2
* Update Docker versions in CI to 20.10.17
After updating Vault to go version 1.19.2, there were several SIGABRTs
in the vault tests. These were related to a missing `pthread_create`
syscall in Docker. Since CI was using a much older version of Docker,
the fix was to bump it to latest-1 (20.10.17).
While we're at it, add a note in the developer docs encouraging the use
of the latest Docker version.
Add plugin version to GRPC interface
Added a version interface in the sdk/logical package so that it can be shared between all plugin types, and then wired it up to RunningVersion in the mounts, auth list, and database systems.
I've tested that this works with auth, database, and secrets plugin types, with the following logic to populate RunningVersion:
* If a plugin has a PluginVersion() method implemented, then that is used
* If not, and the plugin is built into the Vault binary, then the go.mod version is used
* Otherwise, it will be the empty string
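The shared interface is small; roughly the following (a sketch, not the
verbatim sdk/logical source):
```
package logical

// PluginVersion carries the version a plugin reports about itself.
type PluginVersion struct {
	Version string
}

// PluginVersioner is the optional interface shared by all plugin
// types; a plugin that implements it has its reported version
// surfaced as RunningVersion.
type PluginVersioner interface {
	PluginVersion() PluginVersion
}
```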
My apologies for the length of this PR.
* Placeholder backend should be external
We use a placeholder backend (previously a framework.Backend) before a
GRPC plugin is lazy-loaded. This makes us later think the plugin is a
builtin plugin.
So we added a `placeholderBackend` type that overrides the
`IsExternal()` method so that later we know that the plugin is external,
and don't give it a default builtin version.
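A minimal sketch of that type, assuming it wraps framework.Backend:
```
package vault

import "github.com/hashicorp/vault/sdk/framework"

// placeholderBackend stands in for a GRPC plugin before it is
// lazy-loaded. Shadowing IsExternal lets later code see that the
// plugin is external and skip assigning a builtin version.
type placeholderBackend struct {
	framework.Backend
}

func (p *placeholderBackend) IsExternal() bool {
	return true
}
```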
Update Go to 1.18
From 1.17.12
1.18.5 was just released, but not all packages have been updated, so I
went with 1.18.4
Co-authored-by: Steven Clark <steven.clark@hashicorp.com>
Remove gox in favor of go build.
`gox` hasn't had a release in many years, so it is missing
support for many modern systems, like `darwin/arm64`.
In any case, we only use it for dev builds, where we don't even use
the ability of it to build for multiple platforms. Release builds use
`go build` now.
So, this switches to `go build` everywhere.
I pulled this down and tested it in Windows as well. (Side note: I
couldn't get `gox` to work in Windows, so couldn't build before this
change.)
* add BuildDate to version base
* populate BuildDate with ldflags (see the sketch after this list)
* include BuildDate in FullVersionNumber
* add BuildDate to seal-status and associated status cmd
* extend core/versions entries to include BuildDate
* include BuildDate in version-history API and CLI
* fix version history tests
* fix sys status tests
* fix TestStatusFormat
* remove extraneous LD_FLAGS from build.sh
* add BuildDate to build.bat
* fix TestSysUnseal_Reset
* attempt to add build-date to release builds
* add branch to github build workflow
* add get-build-date to build-* job needs
* fix release build command vars
* add missing quote in release build command
* Revert "add branch to github build workflow"
This reverts commit b835699ecb7c2c632757fa5fe64b3d5f60d2a886.
* add changelog entry
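The "populate BuildDate with ldflags" item uses the standard Go -X linker
injection. A sketch, with a placeholder module path:
```
package version

// BuildDate is stamped in at link time; it stays empty for builds
// that skip the flag. The module path below is a placeholder:
//
//	go build -ldflags "-X <module>/version.BuildDate=$(date -u +%Y-%m-%dT%H:%M:%SZ)"
var BuildDate string
```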
* port SSCT (server-side consistent tokens) OSS
* port header hmac key to ent and generate token proto without make command
* remove extra nil check in request handling
* add changelog
* add comment to router.go
* change test var to use length constants
* remove local index is 0 check and extra defer which can be removed after use of ExternalID
* feat: DB plugin multiplexing (#13734)
* WIP: start from main and get a plugin runner from core
* move MultiplexedClient map to plugin catalog
- call sys.NewPluginClient from PluginFactory
- updates to getPluginClient
- thread through isMetadataMode
* use go-plugin ClientProtocol interface
- call sys.NewPluginClient from dbplugin.NewPluginClient
* move PluginSets to dbplugin package
- export dbplugin HandshakeConfig
- small refactor of PluginCatalog.getPluginClient
* add removeMultiplexedClient; clean up on Close()
- call client.Kill from plugin catalog
- set rpcClient when muxed client exists
* add ID to dbplugin.DatabasePluginClient struct
* only create one plugin process per plugin type
* update NewPluginClient to return connection ID to sdk
- wrap grpc.ClientConn so we can inject the ID into context (see the
sketch after this list)
- get ID from context on grpc server
* add v6 multiplexing protocol version
* WIP: backwards compat for db plugins
* Ensure locking on plugin catalog access
- Create public GetPluginClient method for plugin catalog
- rename postgres db plugin
* use the New constructor for db plugins
* grpc server: use write lock for Close and rlock for CRUD
* cleanup MultiplexedClients on Close
* remove TODO
* fix multiplexing regression with grpc server connection
* cleanup grpc server instances on close
* embed ClientProtocol in Multiplexer interface
* use PluginClientConfig arg to make NewPluginClient plugin type agnostic
* create a new plugin process for non-muxed plugins
* feat: plugin multiplexing: handle plugin client cleanup (#13896)
* use closure for plugin client cleanup
* log and return errors; add comments
* move rpcClient wrapping to core for ID injection
* refactor core plugin client and sdk
* remove unused ID method
* refactor and only wrap clientConn on multiplexed plugins
* rename structs and do not export types
* Slight refactor of system view interface
* Revert "Slight refactor of system view interface"
This reverts commit 73d420e5cd2f0415e000c5a9284ea72a58016dd6.
* Revert "Revert "Slight refactor of system view interface""
This reverts commit f75527008a1db06d04a23e04c3059674be8adb5f.
* only provide pluginRunner arg to the internal newPluginClient method
* embed ClientProtocol in pluginClient and name logger
* Add back MLock support
* remove enableMlock arg from setupPluginCatalog
* rename plugin util interface to PluginClient
Co-authored-by: Brian Kassouf <bkassouf@hashicorp.com>
* feature: multiplexing: fix unit tests (#14007)
* fix grpc_server tests and add coverage
* update run_config tests
* add happy path test case for grpc_server ID from context
* update test helpers
* feat: multiplexing: handle v5 plugin compiled with new sdk
* add mux supported flag and increase test coverage
* set multiplexingSupport field in plugin server
* remove multiplexingSupport field in sdk
* revert postgres to non-multiplexed
* add comments on grpc server fields
* use pointer receiver on grpc server methods
* add changelog
* use pointer for grpcserver instance
* Use a gRPC server to determine if a plugin should be multiplexed
* Apply suggestions from code review
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* add lock to removePluginClient
* add multiplexingSupport field to externalPlugin struct
* do not send nil to grpc MultiplexingSupport
* check err before logging
* handle locking scenario for cleanupFunc
* allow ServeConfigMultiplex to dispense v5 plugin
* reposition structs, add err check and comments
* add comment on locking for cleanupExternalPlugin
Co-authored-by: Brian Kassouf <bkassouf@hashicorp.com>
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
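To summarize the mechanism in the list above: the client side wraps the
shared grpc.ClientConn so every call carries a connection ID, and the single
gRPC server instance reads that ID back to route to the right plugin
instance. A hypothetical sketch; the metadata key and type names are
assumptions, not Vault's actual identifiers:
```
package pluginutil

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"
)

// mdKeyID is an illustrative metadata key; the real key name in
// the SDK may differ.
const mdKeyID = "multiplex-id"

// multiplexedConn wraps a shared *grpc.ClientConn and injects this
// client's ID into every outgoing call so the single plugin process
// can route the request to the right backend instance.
type multiplexedConn struct {
	*grpc.ClientConn
	id string
}

func (m *multiplexedConn) Invoke(ctx context.Context, method string, args, reply interface{}, opts ...grpc.CallOption) error {
	ctx = metadata.AppendToOutgoingContext(ctx, mdKeyID, m.id)
	return m.ClientConn.Invoke(ctx, method, args, reply, opts...)
}

func (m *multiplexedConn) NewStream(ctx context.Context, desc *grpc.StreamDesc, method string, opts ...grpc.CallOption) (grpc.ClientStream, error) {
	ctx = metadata.AppendToOutgoingContext(ctx, mdKeyID, m.id)
	return m.ClientConn.NewStream(ctx, desc, method, opts...)
}

// idFromContext is the server-side half: read the ID back out of
// the incoming metadata to look up the plugin instance.
func idFromContext(ctx context.Context) string {
	md, ok := metadata.FromIncomingContext(ctx)
	if !ok || len(md[mdKeyID]) == 0 {
		return ""
	}
	return md[mdKeyID][0]
}
```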
* adding CRT to main branch
* cleanup
* I don't know how that got removed, but here's the fix
* add vault.service
Co-authored-by: Kyle Penfound <kpenfound11@gmail.com>
* copy over the webui
move web_ui to http
remove web ui files, add .gitkeep
updates, messing with gitkeep and ignoring web_ui
update ui scripts
gitkeep
ignore http/web_ui
Remove debugging
remove the jwt reference, that was from something else
restore old jwt plugin
move things around
Revert "move things around"
This reverts commit 2a35121850f5b6b82064ecf78ebee5246601c04f.
Update ui path handling to not need the web_ui name part
add desc
move the http.FS conversion internal to assetFS (see the sketch after this list)
update gitignore
remove bindata dep
clean up some comments
remove asset check script that's no longer needed
Update readme
remove more bindata things
restore asset check
update packagespec
update stub
stub the assetFS method and set uiBuiltIn to false for non-ui builds
update packagespec to build ui
* fail if assets aren't found
* tidy up vendor
* go mod tidy
* updating .circleci
* restore tools.go
* re-re-re-run make packages
* re-enable arm64
* Adding change log
* Removing a file
Co-authored-by: hamid ghaf <hamid@hashicorp.com>
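The assetFS work above boils down to embedding http/web_ui and converting it
to an http.FileSystem. A sketch of the ui-tagged build path, assuming
go:embed (the non-ui build stubs assetFS out and sets uiBuiltIn to false):
```
//go:build ui

package http

import (
	"embed"
	"io/fs"
	"net/http"
)

// content holds the compiled web UI; the ui build tag keeps non-ui
// builds from needing the assets at all.
//
//go:embed web_ui
var content embed.FS

// assetFS strips the web_ui prefix so handlers don't need the
// directory name in request paths, then adapts the embed.FS to
// net/http via http.FS.
func assetFS() http.FileSystem {
	sub, err := fs.Sub(content, "web_ui")
	if err != nil {
		panic(err) // assets are compiled in; failure here is a build bug
	}
	return http.FS(sub)
}
```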
Adds BUILD_TAGS to the docker build commands for docker-dev and
docker-dev-ui. Also changes the respective Dockerfiles to use double
quotes with ${BUILD_TAGS} so that the variable is interpolated.
* Update go version to 1.15.3
* Fix OU ordering for go1.15.x testing
* Fix CI version
* Update docker image
* Fix test
* packagespec upgrade -version 0.1.8
Co-authored-by: Sam Salisbury <samsalisbury@gmail.com>
This is part 1 of 4 for renaming the `newdbplugin` package. This copies the existing package to the new location but keeps the current one in place so we can migrate the existing references over more easily.
* Add new Database v5 interface with gRPC client & server
This is primarily for making password policies available to the DB engine; however, since there are a number of other problems with the current interface, it is getting an overhaul to a more gRPC request/response approach for easier future compatibility.
This is the first in a series of PRs to add support for password policies in the combined database engine
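The request/response shape looks roughly like the following sketch; the
actual v5 interface and struct fields in the SDK may differ:
```
package dbplugin

import "context"

// Database is the v5 contract: every operation takes a request
// struct and returns a response struct, so new fields (for
// example, password-policy-generated passwords) can be added
// without breaking the interface.
type Database interface {
	Initialize(ctx context.Context, req InitializeRequest) (InitializeResponse, error)
	NewUser(ctx context.Context, req NewUserRequest) (NewUserResponse, error)
	UpdateUser(ctx context.Context, req UpdateUserRequest) (UpdateUserResponse, error)
	DeleteUser(ctx context.Context, req DeleteUserRequest) (DeleteUserResponse, error)
	Type() (string, error)
	Close() error
}

// Request/response pairs; the fields shown are illustrative.
type InitializeRequest struct {
	Config           map[string]interface{}
	VerifyConnection bool
}
type InitializeResponse struct {
	Config map[string]interface{}
}
type NewUserRequest struct{ Password string }
type NewUserResponse struct{ Username string }
type UpdateUserRequest struct{ Username string }
type UpdateUserResponse struct{}
type DeleteUserRequest struct{ Username string }
type DeleteUserResponse struct{}
```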