Also remove one duplicate error masked by return.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Correctly preserve other issuer config params
When setting a new default issuer, our helper function would overwrite
other parameters in the issuer configuration entry. However, up until
now, there were none.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add new parameter to allow default to follow new
This parameter allows operators to have the default issuer update
automatically when a new root is generated or when a single issuer
with a key (potentially alongside others lacking keys) is imported.
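As a rough illustration of how an operator might turn this on through the API client, a minimal Go sketch; the `default_follows_latest_issuer` parameter name and the `my-root` issuer reference are assumptions for illustration, not taken from this change:
```
// Sketch only: the parameter name and issuer reference are assumptions.
package main

import (
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	_, err = client.Logical().Write("pki/config/issuers", map[string]interface{}{
		"default":                        "my-root", // current default issuer (placeholder)
		"default_follows_latest_issuer":  true,      // assumed parameter name
	})
	if err != nil {
		log.Fatal(err)
	}
}
```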
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Storage migration tests fail on new members
These internal members shouldn't be tested by the storage migration
code, and so should be elided from the test results.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Follow new issuer on root generation, import
This updates the two places where issuers can be created (outside of
legacy CA bundle migration which already sets the default) to follow
newly created issuers when the config is set.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add changelog entry
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add documentation
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add test for new default-following behavior
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
Specifying only `args` just appends them to the container image's
entrypoint instead of replacing it. Setting `command` overrides the
entrypoint, and `args` is then appended to the command.
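As a rough sketch of the distinction (assuming a Kubernetes container spec, where this command/args behavior applies; image and values are placeholders):
```
// Sketch assuming a Kubernetes container spec; values are placeholders.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Only Args set: the image's own entrypoint still runs, with these appended.
	argsOnly := corev1.Container{
		Image: "hashicorp/vault:latest",
		Args:  []string{"server", "-dev"},
	}

	// Command set: it replaces the image entrypoint, and Args follow it.
	withCommand := corev1.Container{
		Image:   "hashicorp/vault:latest",
		Command: []string{"/bin/vault"},
		Args:    []string{"server", "-dev"},
	}

	fmt.Println(argsOnly.Args, withCommand.Command, withCommand.Args)
}
```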
* Add new API to PKI to list revoked certificates
- A new API that will return the list of serial numbers of
revoked certificates on the local cluster.
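For illustration, a short Go sketch of how a caller might consume such a list endpoint; the `pki/certs/revoked` path is an assumption here, not confirmed by this change:
```
// Sketch only: the list path is an assumption; adjust to your PKI mount.
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	secret, err := client.Logical().List("pki/certs/revoked") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	if secret == nil {
		fmt.Println("no revoked certificates")
		return
	}
	fmt.Println("revoked serials:", secret.Data["keys"])
}
```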
* Add cl
* PR feedback
Give the default SetupLoginMFATOTP helper a more robust period/skew. 403 failures on test-go-race are likely due to TOTP code timeouts being too aggressive.
* Ensure correct write ordering in rebuildIssuersChains
When troubleshooting a recent migration failure from 1.10->1.11, it was
noted that some PKI mounts had bad chain construction despite having
valid, chaining issuers. Because the cluster's leadership was thrashing
between nodes, the migration logic was re-executed several times,
partially succeeding each time. While the legacy CA bundle migration
logic was written with this in mind, one shortcoming in the chain
building code led us to truncate the ca_chain: by sorting the list of
issuers after including not-yet-written issuers (with random IDs), these
issuers would occasionally be persisted to storage _prior_ to the
existing CAs with modified chains.
The migration code carefully imported the active issuer prior to its
parents. However, due to this bug, there was a chance that, if the write
to the pending parent succeeded but the update to the active issuer didn't, the
active issuer's ca_chain field would only contain the self-reference and
not the parent's reference as well. Ultimately, a workaround of setting
and subsequently unsetting a manual chain would force a chain
regeneration.
In this patch, we simply fix the write ordering: because we need to
ensure a stable chain sorting, we leave the sort location in the same
place, but delay writing the provided referenceCert to the last
position. This is because the reference is meant to be the user-facing
action: without transactional write capabilities, other chains may
succeed, but if the last user-facing action fails, the user will
hopefully retry the action. This will also correct migration, by
ensuring the subsequent issuer import will be attempted again,
triggering another chain build and only persisting this issuer when
all other issuers have also been updated.
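To make the ordering concrete, a simplified Go sketch; the type and helper names below are illustrative stand-ins, not the actual rebuildIssuersChains code:
```
// Simplified sketch of the write ordering; types and names are illustrative.
package pki

import (
	"context"
	"fmt"
)

type issuerID string

type issuerEntry struct {
	ID      issuerID
	CAChain []string
}

type storage interface {
	Put(ctx context.Context, id issuerID, entry *issuerEntry) error
}

// persistChains writes every issuer whose chain changed, but saves the
// user-facing reference issuer last: if that final write fails, the caller
// retries the operation, which rebuilds and persists all chains again.
func persistChains(ctx context.Context, s storage, issuers []*issuerEntry, reference *issuerEntry) error {
	for _, issuer := range issuers {
		if issuer.ID == reference.ID {
			continue // defer the reference issuer to the very end
		}
		if err := s.Put(ctx, issuer.ID, issuer); err != nil {
			return fmt.Errorf("failed to persist issuer %v: %w", issuer.ID, err)
		}
	}
	return s.Put(ctx, reference.ID, reference)
}
```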
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Remigrate ca_chains to fix any missing issuers
In the previous commit, we identified an issue that would occur on
legacy issuer migration to the new storage format. This is easy enough
to detect for any given mount (by an operator), but automating scanning
and remediating all PKI mounts in large deployments might be difficult.
Write a new storage migration version to regenerate all chains on
upgrade, once.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add changelog entry
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add issue to PKI considerations documentation
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Correct %v -> %w in chain building errs
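For reference, a small self-contained Go example of the difference: `%w` wraps the underlying error so callers can match it with errors.Is/errors.As, whereas `%v` only embeds its text:
```
package main

import (
	"errors"
	"fmt"
)

var errIssuerMissing = errors.New("issuer not found")

func buildChain() error {
	// %w preserves the wrapped error for errors.Is / errors.As;
	// with %v, only the message text would survive.
	return fmt.Errorf("unable to build chain: %w", errIssuerMissing)
}

func main() {
	fmt.Println(errors.Is(buildChain(), errIssuerMissing)) // true, thanks to %w
}
```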
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
This article seems to use the terms "shares" and "shards" interchangeably to describe the parts in which the secret is split under SSS.
While both seem to be correct, sticking to one term would save a newbie reader (like myself) the confusion.
Since the Wikipedia article that's linked in this article only mentions "shares" and the CLI flags (for recovery keys) also use `-shares`, I opted for that.
* moves service worker message event listener from addon to raft-storage-overview component
* adds changelog entry
* adds raft-storage-overview test for downloading snapshot via service worker
By adding the linker flags `-s -w` (passed via `go build -ldflags="-s -w"`)
we can reduce the Vault binary size from 204 MB to 167 MB (about an 18%
reduction in size).
This removes the DWARF section of the binary.
i.e., before:
```
$ objdump --section-headers vault-debug
vault-debug: file format mach-o arm64
Sections:
Idx Name Size VMA Type
0 __text 03a00340 0000000100001000 TEXT
1 __symbol_stub1 00000618 0000000103a01340 TEXT
2 __rodata 00c18088 0000000103a01960 DATA
3 __rodata 015aee18 000000010461c000 DATA
4 __typelink 0004616c 0000000105bcae20 DATA
5 __itablink 0000eb68 0000000105c10fa0 DATA
6 __gosymtab 00000000 0000000105c1fb08 DATA
7 __gopclntab 02a5b8e0 0000000105c1fb20 DATA
8 __go_buildinfo 00008c10 000000010867c000 DATA
9 __nl_symbol_ptr 00000410 0000000108684c10 DATA
10 __noptrdata 000fed00 0000000108685020 DATA
11 __data 0004e1f0 0000000108783d20 DATA
12 __bss 00052520 00000001087d1f20 BSS
13 __noptrbss 000151b0 0000000108824440 BSS
14 __zdebug_abbrev 00000129 000000010883c000 DATA, DEBUG
15 __zdebug_line 00651374 000000010883c129 DATA, DEBUG
16 __zdebug_frame 001e1de9 0000000108e8d49d DATA, DEBUG
17 __debug_gdb_scri 00000043 000000010906f286 DATA, DEBUG
18 __zdebug_info 00de2c09 000000010906f2c9 DATA, DEBUG
19 __zdebug_loc 00a619ea 0000000109e51ed2 DATA, DEBUG
20 __zdebug_ranges 001e94a6 000000010a8b38bc DATA, DEBUG
```
And after:
```
$ objdump --section-headers vault-no-debug
vault-no-debug: file format mach-o arm64
Sections:
Idx Name Size VMA Type
0 __text 03a00340 0000000100001000 TEXT
1 __symbol_stub1 00000618 0000000103a01340 TEXT
2 __rodata 00c18088 0000000103a01960 DATA
3 __rodata 015aee18 000000010461c000 DATA
4 __typelink 0004616c 0000000105bcae20 DATA
5 __itablink 0000eb68 0000000105c10fa0 DATA
6 __gosymtab 00000000 0000000105c1fb08 DATA
7 __gopclntab 02a5b8e0 0000000105c1fb20 DATA
8 __go_buildinfo 00008c20 000000010867c000 DATA
9 __nl_symbol_ptr 00000410 0000000108684c20 DATA
10 __noptrdata 000fed00 0000000108685040 DATA
11 __data 0004e1f0 0000000108783d40 DATA
12 __bss 00052520 00000001087d1f40 BSS
13 __noptrbss 000151b0 0000000108824460 BSS
```
The only side effect I have been able to find is that it is no longer
possible to use [delve](https://github.com/go-delve/delve) to run the
Vault binary.
Note, however, that running delve and other debuggers requires access
to the full source code, which isn't provided for the Enterprise, HSM,
etc. binaries, so those can't be debugged anyway except by people who
have the full source.
* panic traces
* `vault debug`
* error messages
* Despite what the documentation says, these flags do *not* delete the
function symbol table (so it is not the same as having a `strip`ped
binary).
The DWARF data contains mappings between the compiled binary and the
functions, parameters, and variables in the source code.
Using `llvm-dwarfdump`, it looks like:
```
0x011a6d85: DW_TAG_subprogram
DW_AT_name ("github.com/hashicorp/vault/api.(*replicationStateStore).recordState")
DW_AT_low_pc (0x0000000000a99300)
DW_AT_high_pc (0x0000000000a99419)
DW_AT_frame_base (DW_OP_call_frame_cfa)
DW_AT_decl_file ("/home/swenson/vault/api/client.go")
DW_AT_external (0x01)
0x011a6de1: DW_TAG_formal_parameter
DW_AT_name ("w")
DW_AT_variable_parameter (0x00)
DW_AT_decl_line (1735)
DW_AT_type (0x00000000001e834a "github.com/hashicorp/vault/api.replicationStateStore *")
DW_AT_location (0x009e832a:
[0x0000000000a99300, 0x0000000000a9933a): DW_OP_reg0 RAX
[0x0000000000a9933a, 0x0000000000a99419): DW_OP_call_frame_cfa)
0x011a6def: DW_TAG_formal_parameter
DW_AT_name ("resp")
DW_AT_variable_parameter (0x00)
DW_AT_decl_line (1735)
DW_AT_type (0x00000000001e82a2 "github.com/hashicorp/vault/api.Response *")
DW_AT_location (0x009e8370:
[0x0000000000a99300, 0x0000000000a9933a): DW_OP_reg3 RBX
[0x0000000000a9933a, 0x0000000000a99419): DW_OP_fbreg +8)
0x011a6e00: DW_TAG_variable
DW_AT_name ("newState")
DW_AT_decl_line (1738)
DW_AT_type (0x0000000000119f32 "string")
DW_AT_location (0x009e83b7:
[0x0000000000a99385, 0x0000000000a99385): DW_OP_reg0 RAX, DW_OP_piece 0x8, DW_OP_piece 0x8
[0x0000000000a99385, 0x0000000000a993a4): DW_OP_reg0 RAX, DW_OP_piece 0x8, DW_OP_reg3 RBX, DW_OP_piece 0x8
[0x0000000000a993a4, 0x0000000000a993a7): DW_OP_piece 0x8, DW_OP_reg3 RBX, DW_OP_piece 0x8)
```
This says that the particular binary section is the function
`github.com/hashicorp/vault/api.(*replicationStateStore).recordState`,
from the file `/home/swenson/vault/api/client.go`, containing
the `w` parameter on line 1735 mapped to certain registers and memory,
the `resp` parameter on line 1735 mapped to certain registers and memory,
and the `newState` variable on line 1738 mapped to certain registers and
memory.
It's really only useful for a debugger.
Anyone running the code in a debugger will need full access to the source
code anyway, so presumably they will be able to run `make dev` and build
the version with the DWARF sections intact, and then run their debugger.
When running the test suite in CI (where requests are centralized from
relatively few IPs), we'd occasionally hit Docker Hub's rate limits.
Luckily, HashiCorp runs a (limited) public mirror of the containers we
need, so we can switch to them here in the tests.
For consistency between developers and CI, we've opted to have the tests
always pull from the HashiCorp mirror, rather than updating the CI
runner to prefer the mirror.
We exclude nomad and influxdb as we don't presently mirror these repos.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Return revocation info within existing certs/<serial> api
- The API already returned the certificate along with a populated
revocation_time field. Update the API to also return revocation_time_rfc3339,
as we do elsewhere, along with the issuer ID if the cert was revoked.
- This will allow callers to associate a revoked cert with an issuer
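A rough Go sketch of reading those fields back; the serial number is a placeholder and `issuer_id` is an assumed field name:
```
// Sketch only: placeholder serial; issuer_id is an assumed field name.
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	secret, err := client.Logical().Read("pki/cert/17-0d-5a") // placeholder serial
	if err != nil {
		log.Fatal(err)
	}
	if secret == nil {
		log.Fatal("certificate not found")
	}
	if t, ok := secret.Data["revocation_time_rfc3339"]; ok {
		fmt.Println("revoked at:", t, "by issuer:", secret.Data["issuer_id"])
	}
}
```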
* Add cl
* PR feedback (docs update)
* Update signed-ssh-certificates.mdx
Add a pointer to the doc regarding reading back the pub key with the CLI
* Update website/content/docs/secrets/ssh/signed-ssh-certificates.mdx
Co-authored-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Don't use a duplicate sync object for stepwise tests precheck
* Change STS test check to no longer look for a secret, add SetSourceIdentity policy to role
When copying data into the container, due to the ID changes pointed
out in the previous attempt, the container couldn't read this data.
By creating a new user in the container, matching the host's UID/GID, we
can successfully copy data in/out of the container without worrying
about differing UID/GIDs.
See also: https://github.com/hashicorp/vault/pull/17658
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Bump validity period check to satisfy CircleCI
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Update builtin/logical/pki/backend_test.go
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Clarify when -format=raw fails
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Document Vault read's new -format=raw mode
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add raw format to usage, completion
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add missing support for raw format field printing
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Prohibit command execution with wrong formatter
This allows us to restrict the raw formatter to only commands that
understand it; otherwise, when running `vault write -format=raw`, we'd
actually hit the Vault server, but hide the output from the user. By
switching this to a flag-parse time check, we avoid running the rest of
the command if a bad formatter was specified.
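A minimal sketch of the idea (names here are hypothetical, not the actual CLI code): validate the requested format against the command's allowed set during flag parsing, before any request is issued:
```
// Hypothetical sketch; not the actual Vault CLI implementation.
package main

import "fmt"

var allowedFormats = map[string][]string{
	"read":  {"table", "json", "yaml", "raw"},
	"write": {"table", "json", "yaml"}, // raw intentionally absent
}

func validateFormat(command, format string) error {
	for _, f := range allowedFormats[command] {
		if f == format {
			return nil
		}
	}
	return fmt.Errorf("invalid output format for %q: %q", command, format)
}

func main() {
	// Fails at parse time, before the command ever contacts the server.
	fmt.Println(validateFormat("write", "raw"))
}
```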
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Expose raw request from client.Logical()
Not all Vault API endpoints return well-formatted JSON objects.
Sometimes, in the case of the PKI secrets engine, they're not even
printable (/pki/ca returns a binary (DER-encoded) certificate). While
this endpoint isn't authenticated, in general the API caller would
either need to use Client.RawRequestWithContext(...) directly (which
the docs advise against), or set up their own net/http client and
re-create much of Client and/or Client.Logical.
Instead, exposing the raw Request (via the new ReadRawWithData(...))
allows callers to directly consume these non-JSON endpoints like they
would nearly any other endpoint.
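A sketch of what a caller might look like (error handling kept minimal; the exact helper signature should be confirmed against the api package):
```
// Sketch: consuming a non-JSON endpoint via the raw request helper.
package main

import (
	"fmt"
	"io"
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// pki/ca returns a DER-encoded certificate rather than a JSON secret.
	resp, err := client.Logical().ReadRawWithData("pki/ca", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	der, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("fetched %d bytes of DER-encoded CA certificate\n", len(der))
}
```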
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add raw formatter for direct []byte data
As mentioned in the previous commit, some API endpoints return non-JSON
data. We get as far as fetching this data (via client.Logical().Read),
but parsing it as an api.Secret fails (as in this case, it is non-JSON).
Given that we intend to update `vault read` to support such endpoints,
we'll need a "raw" formatter that accepts []byte-encoded data and simply
writes it to the UI.
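A minimal sketch of such a formatter (not the actual implementation): it accepts only pre-encoded []byte data and writes it through unchanged:
```
// Minimal sketch of a "raw" formatter; not the actual Vault implementation.
package main

import (
	"fmt"
	"os"
)

type rawFormatter struct{}

// Format only understands pre-encoded []byte payloads and returns them as-is.
func (rawFormatter) Format(data interface{}) ([]byte, error) {
	b, ok := data.([]byte)
	if !ok {
		return nil, fmt.Errorf("raw formatter requires []byte data, got %T", data)
	}
	return b, nil
}

func main() {
	out, err := rawFormatter{}.Format([]byte("-----BEGIN CERTIFICATE-----\n..."))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	os.Stdout.Write(out)
}
```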
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add support for reading raw API endpoints
Some endpoints, such as `pki/ca` and `pki/ca/pem`, return non-JSON
objects. When calling `vault read` on these endpoints, an error
is returned because they cannot be parsed as api.Secret instances:
> Error reading pki/ca/pem: invalid character '-' in numeric literal
Indeed, we go to all the trouble of (successfully) fetching this value,
only to be unable to unmarshal it into a Secret value. Instead, add
support for a new -format=raw option, allowing these endpoints to be
consumed by callers of `vault read` directly.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add changelog entry
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Remove panic
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Docs: API secret/ssh clarity on Create & Update
Added clarity notes on the required permissions (`update` & `create`) that are otherwise not obvious without experience of other mounts that have similar ACL requirements to manage. Resolves #9888.
* Update website/content/api-docs/secret/ssh.mdx
Co-authored-by: Loann Le <84412881+taoism4504@users.noreply.github.com>
* Update website/content/api-docs/secret/ssh.mdx
Co-authored-by: Loann Le <84412881+taoism4504@users.noreply.github.com>
* Docs: API secret/ssh clarity on Create & Update...
Reduced text (-1 line) further per feedback from @benashz; retaining details on the `create` vs `update` difference, as per the [API transit method that calls this out too](https://www.vaultproject.io/api-docs/secret/transit#encrypt-data).
* trigger ci
Co-authored-by: Loann Le <84412881+taoism4504@users.noreply.github.com>
Co-authored-by: Yoko Hyakuna <yoko@hashicorp.com>