// open-vault/http/logical.go

package http

import (
	"bufio"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io"
	"mime"
	"net"
	"net/http"
	"strconv"
	"strings"
	"time"

	uuid "github.com/hashicorp/go-uuid"
	"github.com/hashicorp/vault/helper/namespace"
	"github.com/hashicorp/vault/sdk/helper/consts"
	"github.com/hashicorp/vault/sdk/logical"
	"github.com/hashicorp/vault/vault"
	"go.uber.org/atomic"
)
// bufferedReader can be used to replace a request body with a buffered
// version. The Close method invokes the original Closer.
type bufferedReader struct {
	*bufio.Reader
	rOrig io.ReadCloser
}

func newBufferedReader(r io.ReadCloser) *bufferedReader {
	return &bufferedReader{
		Reader: bufio.NewReader(r),
		rOrig:  r,
	}
}

func (b *bufferedReader) Close() error {
	return b.rOrig.Close()
}
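// A minimal sketch of the intended peek-without-consuming pattern
// (illustrative only, not part of the original file): wrap the body, look
// ahead at its head, and downstream readers still see the request from the
// first byte.
//
//	br := newBufferedReader(r.Body)
//	r.Body = br
//	head, _ := br.Peek(512) // look ahead without consuming
//	_ = head                // e.g. decide between form and JSON parsing
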
const MergePatchContentTypeHeader = "application/merge-patch+json"
func buildLogicalRequestNoAuth(perfStandby bool, w http.ResponseWriter, r *http.Request) (*logical.Request, io.ReadCloser, int, error) {
	ns, err := namespace.FromContext(r.Context())
	if err != nil {
		return nil, nil, http.StatusBadRequest, nil
	}
	path := ns.TrimmedPath(r.URL.Path[len("/v1/"):])

	var data map[string]interface{}
	var origBody io.ReadCloser
	var passHTTPReq bool
	var responseWriter http.ResponseWriter

	// Determine the operation
	var op logical.Operation
	switch r.Method {
	case "DELETE":
		op = logical.DeleteOperation
		data = parseQuery(r.URL.Query())

	case "GET":
		op = logical.ReadOperation
		queryVals := r.URL.Query()

		var list bool
		var err error
		listStr := queryVals.Get("list")
		if listStr != "" {
			list, err = strconv.ParseBool(listStr)
			if err != nil {
				return nil, nil, http.StatusBadRequest, nil
			}
			if list {
				op = logical.ListOperation
				if !strings.HasSuffix(path, "/") {
					path += "/"
				}
			}
		}

		if !list {
			data = parseQuery(queryVals)
		}

		switch {
		case strings.HasPrefix(path, "sys/pprof/"):
			passHTTPReq = true
			responseWriter = w
		case path == "sys/storage/raft/snapshot":
			responseWriter = w
		case path == "sys/internal/counters/activity/export":
			responseWriter = w
		case path == "sys/monitor":
			passHTTPReq = true
			responseWriter = w
		}
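	// Note on the GET case above, as a hedged illustration (paths are made
	// up): a GET with ?list=true is promoted to a ListOperation and the path
	// gains a trailing slash, so
	//
	//	GET  /v1/secret/metadata?list=true
	//
	// is handled like
	//
	//	LIST /v1/secret/metadata/
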
case "POST", "PUT":
op = logical.UpdateOperation
// Buffer the request body in order to allow us to peek at the beginning
// without consuming it. This approach involves no copying.
bufferedBody := newBufferedReader(r.Body)
r.Body = bufferedBody
// If we are uploading a snapshot we don't want to parse it. Instead
// we will simply add the HTTP request to the logical request object
// for later consumption.
if path == "sys/storage/raft/snapshot" || path == "sys/storage/raft/snapshot-force" {
passHTTPReq = true
origBody = r.Body
} else {
// Sample the first bytes to determine whether this should be parsed as
// a form or as JSON. The amount to look ahead (512 bytes) is arbitrary
// but extremely tolerant (i.e. allowing 511 bytes of leading whitespace
// and an incorrect content-type).
head, err := bufferedBody.Peek(512)
if err != nil && err != bufio.ErrBufferFull && err != io.EOF {
status := http.StatusBadRequest
logical.AdjustErrorStatusCode(&status, err)
return nil, nil, status, fmt.Errorf("error reading data")
}
if isForm(head, r.Header.Get("Content-Type")) {
formData, err := parseFormRequest(r)
if err != nil {
status := http.StatusBadRequest
logical.AdjustErrorStatusCode(&status, err)
return nil, nil, status, fmt.Errorf("error parsing form data")
}
data = formData
Raft Storage Backend (#6888) * Work on raft backend * Add logstore locally * Add encryptor and unsealable interfaces * Add clustering support to raft * Remove client and handler * Bootstrap raft on init * Cleanup raft logic a bit * More raft work * Work on TLS config * More work on bootstrapping * Fix build * More work on bootstrapping * More bootstrapping work * fix build * Remove consul dep * Fix build * merged oss/master into raft-storage * Work on bootstrapping * Get bootstrapping to work * Clean up FMS and node-id * Update local node ID logic * Cleanup node-id change * Work on snapshotting * Raft: Add remove peer API (#906) * Add remove peer API * Add some comments * Fix existing snapshotting (#909) * Raft get peers API (#912) * Read raft configuration * address review feedback * Use the Leadership Transfer API to step-down the active node (#918) * Raft join and unseal using Shamir keys (#917) * Raft join using shamir * Store AEAD instead of master key * Split the raft join process to answer the challenge after a successful unseal * get the follower to standby state * Make unseal work * minor changes * Some input checks * reuse the shamir seal access instead of new default seal access * refactor joinRaftSendAnswer function * Synchronously send answer in auto-unseal case * Address review feedback * Raft snapshots (#910) * Fix existing snapshotting * implement the noop snapshotting * Add comments and switch log libraries * add some snapshot tests * add snapshot test file * add TODO * More work on raft snapshotting * progress on the ConfigStore strategy * Don't use two buckets * Update the snapshot store logic to hide the file logic * Add more backend tests * Cleanup code a bit * [WIP] Raft recovery (#938) * Add recovery functionality * remove fmt.Printfs * Fix a few fsm bugs * Add max size value for raft backend (#942) * Add max size value for raft backend * Include physical.ErrValueTooLarge in the message * Raft snapshot Take/Restore API (#926) * Inital work on raft snapshot APIs * Always redirect snapshot install/download requests * More work on the snapshot APIs * Cleanup code a bit * On restore handle special cases * Use the seal to encrypt the sha sum file * Add sealer mechanism and fix some bugs * Call restore while state lock is held * Send restore cb trigger through raft log * Make error messages nicer * Add test helpers * Add snapshot test * Add shamir unseal test * Add more raft snapshot API tests * Fix locking * Change working to initalize * Add underlying raw object to test cluster core * Move leaderUUID to core * Add raft TLS rotation logic (#950) * Add TLS rotation logic * Cleanup logic a bit * Add/Remove from follower state on add/remove peer * add comments * Update more comments * Update request_forwarding_service.proto * Make sure we populate all nodes in the followerstate obj * Update times * Apply review feedback * Add more raft config setting (#947) * Add performance config setting * Add more config options and fix tests * Test Raft Recovery (#944) * Test raft recovery * Leave out a node during recovery * remove unused struct * Update physical/raft/snapshot_test.go * Update physical/raft/snapshot_test.go * fix vendoring * Switch to new raft interface * Remove unused files * Switch a gogo -> proto instance * Remove unneeded vault dep in go.sum * Update helper/testhelpers/testhelpers.go Co-Authored-By: Calvin Leung Huang <cleung2010@gmail.com> * Update vault/cluster/cluster.go * track active key within the keyring itself (#6915) * track active key within the keyring 
itself * lookup and store using the active key ID * update docstring * minor refactor * Small text fixes (#6912) * Update physical/raft/raft.go Co-Authored-By: Calvin Leung Huang <cleung2010@gmail.com> * review feedback * Move raft logical system into separate file * Update help text a bit * Enforce cluster addr is set and use it for raft bootstrapping * Fix tests * fix http test panic * Pull in latest raft-snapshot library * Add comment
2019-06-20 19:14:58 +00:00
} else {
origBody, err = parseJSONRequest(perfStandby, r, w, &data)
Raft Storage Backend (#6888) * Work on raft backend * Add logstore locally * Add encryptor and unsealable interfaces * Add clustering support to raft * Remove client and handler * Bootstrap raft on init * Cleanup raft logic a bit * More raft work * Work on TLS config * More work on bootstrapping * Fix build * More work on bootstrapping * More bootstrapping work * fix build * Remove consul dep * Fix build * merged oss/master into raft-storage * Work on bootstrapping * Get bootstrapping to work * Clean up FMS and node-id * Update local node ID logic * Cleanup node-id change * Work on snapshotting * Raft: Add remove peer API (#906) * Add remove peer API * Add some comments * Fix existing snapshotting (#909) * Raft get peers API (#912) * Read raft configuration * address review feedback * Use the Leadership Transfer API to step-down the active node (#918) * Raft join and unseal using Shamir keys (#917) * Raft join using shamir * Store AEAD instead of master key * Split the raft join process to answer the challenge after a successful unseal * get the follower to standby state * Make unseal work * minor changes * Some input checks * reuse the shamir seal access instead of new default seal access * refactor joinRaftSendAnswer function * Synchronously send answer in auto-unseal case * Address review feedback * Raft snapshots (#910) * Fix existing snapshotting * implement the noop snapshotting * Add comments and switch log libraries * add some snapshot tests * add snapshot test file * add TODO * More work on raft snapshotting * progress on the ConfigStore strategy * Don't use two buckets * Update the snapshot store logic to hide the file logic * Add more backend tests * Cleanup code a bit * [WIP] Raft recovery (#938) * Add recovery functionality * remove fmt.Printfs * Fix a few fsm bugs * Add max size value for raft backend (#942) * Add max size value for raft backend * Include physical.ErrValueTooLarge in the message * Raft snapshot Take/Restore API (#926) * Inital work on raft snapshot APIs * Always redirect snapshot install/download requests * More work on the snapshot APIs * Cleanup code a bit * On restore handle special cases * Use the seal to encrypt the sha sum file * Add sealer mechanism and fix some bugs * Call restore while state lock is held * Send restore cb trigger through raft log * Make error messages nicer * Add test helpers * Add snapshot test * Add shamir unseal test * Add more raft snapshot API tests * Fix locking * Change working to initalize * Add underlying raw object to test cluster core * Move leaderUUID to core * Add raft TLS rotation logic (#950) * Add TLS rotation logic * Cleanup logic a bit * Add/Remove from follower state on add/remove peer * add comments * Update more comments * Update request_forwarding_service.proto * Make sure we populate all nodes in the followerstate obj * Update times * Apply review feedback * Add more raft config setting (#947) * Add performance config setting * Add more config options and fix tests * Test Raft Recovery (#944) * Test raft recovery * Leave out a node during recovery * remove unused struct * Update physical/raft/snapshot_test.go * Update physical/raft/snapshot_test.go * fix vendoring * Switch to new raft interface * Remove unused files * Switch a gogo -> proto instance * Remove unneeded vault dep in go.sum * Update helper/testhelpers/testhelpers.go Co-Authored-By: Calvin Leung Huang <cleung2010@gmail.com> * Update vault/cluster/cluster.go * track active key within the keyring itself (#6915) * track active key within the keyring 
itself * lookup and store using the active key ID * update docstring * minor refactor * Small text fixes (#6912) * Update physical/raft/raft.go Co-Authored-By: Calvin Leung Huang <cleung2010@gmail.com> * review feedback * Move raft logical system into separate file * Update help text a bit * Enforce cluster addr is set and use it for raft bootstrapping * Fix tests * fix http test panic * Pull in latest raft-snapshot library * Add comment
2019-06-20 19:14:58 +00:00
if err == io.EOF {
data = nil
err = nil
}
if err != nil {
status := http.StatusBadRequest
logical.AdjustErrorStatusCode(&status, err)
return nil, nil, status, fmt.Errorf("error parsing JSON")
Raft Storage Backend (#6888) * Work on raft backend * Add logstore locally * Add encryptor and unsealable interfaces * Add clustering support to raft * Remove client and handler * Bootstrap raft on init * Cleanup raft logic a bit * More raft work * Work on TLS config * More work on bootstrapping * Fix build * More work on bootstrapping * More bootstrapping work * fix build * Remove consul dep * Fix build * merged oss/master into raft-storage * Work on bootstrapping * Get bootstrapping to work * Clean up FMS and node-id * Update local node ID logic * Cleanup node-id change * Work on snapshotting * Raft: Add remove peer API (#906) * Add remove peer API * Add some comments * Fix existing snapshotting (#909) * Raft get peers API (#912) * Read raft configuration * address review feedback * Use the Leadership Transfer API to step-down the active node (#918) * Raft join and unseal using Shamir keys (#917) * Raft join using shamir * Store AEAD instead of master key * Split the raft join process to answer the challenge after a successful unseal * get the follower to standby state * Make unseal work * minor changes * Some input checks * reuse the shamir seal access instead of new default seal access * refactor joinRaftSendAnswer function * Synchronously send answer in auto-unseal case * Address review feedback * Raft snapshots (#910) * Fix existing snapshotting * implement the noop snapshotting * Add comments and switch log libraries * add some snapshot tests * add snapshot test file * add TODO * More work on raft snapshotting * progress on the ConfigStore strategy * Don't use two buckets * Update the snapshot store logic to hide the file logic * Add more backend tests * Cleanup code a bit * [WIP] Raft recovery (#938) * Add recovery functionality * remove fmt.Printfs * Fix a few fsm bugs * Add max size value for raft backend (#942) * Add max size value for raft backend * Include physical.ErrValueTooLarge in the message * Raft snapshot Take/Restore API (#926) * Inital work on raft snapshot APIs * Always redirect snapshot install/download requests * More work on the snapshot APIs * Cleanup code a bit * On restore handle special cases * Use the seal to encrypt the sha sum file * Add sealer mechanism and fix some bugs * Call restore while state lock is held * Send restore cb trigger through raft log * Make error messages nicer * Add test helpers * Add snapshot test * Add shamir unseal test * Add more raft snapshot API tests * Fix locking * Change working to initalize * Add underlying raw object to test cluster core * Move leaderUUID to core * Add raft TLS rotation logic (#950) * Add TLS rotation logic * Cleanup logic a bit * Add/Remove from follower state on add/remove peer * add comments * Update more comments * Update request_forwarding_service.proto * Make sure we populate all nodes in the followerstate obj * Update times * Apply review feedback * Add more raft config setting (#947) * Add performance config setting * Add more config options and fix tests * Test Raft Recovery (#944) * Test raft recovery * Leave out a node during recovery * remove unused struct * Update physical/raft/snapshot_test.go * Update physical/raft/snapshot_test.go * fix vendoring * Switch to new raft interface * Remove unused files * Switch a gogo -> proto instance * Remove unneeded vault dep in go.sum * Update helper/testhelpers/testhelpers.go Co-Authored-By: Calvin Leung Huang <cleung2010@gmail.com> * Update vault/cluster/cluster.go * track active key within the keyring itself (#6915) * track active key within the keyring 
itself * lookup and store using the active key ID * update docstring * minor refactor * Small text fixes (#6912) * Update physical/raft/raft.go Co-Authored-By: Calvin Leung Huang <cleung2010@gmail.com> * review feedback * Move raft logical system into separate file * Update help text a bit * Enforce cluster addr is set and use it for raft bootstrapping * Fix tests * fix http test panic * Pull in latest raft-snapshot library * Add comment
2019-06-20 19:14:58 +00:00
}
}
}
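	// Re the sampling above, a rough illustration (the curl invocations are
	// hypothetical): thanks to the head-based detection, both of these reach
	// the same logical update even if the Content-Type header is imprecise.
	//
	//	curl -X PUT --data 'foo=bar' \
	//	    -H 'Content-Type: application/x-www-form-urlencoded' "$VAULT_ADDR/v1/secret/app"
	//	curl -X PUT --data '{"foo":"bar"}' "$VAULT_ADDR/v1/secret/app"
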
case "PATCH":
op = logical.PatchOperation
contentTypeHeader := r.Header.Get("Content-Type")
contentType, _, err := mime.ParseMediaType(contentTypeHeader)
if err != nil {
status := http.StatusBadRequest
logical.AdjustErrorStatusCode(&status, err)
return nil, nil, status, err
}
if contentType != MergePatchContentTypeHeader {
return nil, nil, http.StatusUnsupportedMediaType, fmt.Errorf("PATCH requires Content-Type of %s, provided %s", MergePatchContentTypeHeader, contentType)
}
origBody, err = parseJSONRequest(perfStandby, r, w, &data)
if err == io.EOF {
data = nil
err = nil
}
if err != nil {
status := http.StatusBadRequest
logical.AdjustErrorStatusCode(&status, err)
return nil, nil, status, fmt.Errorf("error parsing JSON")
}
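	// A hedged sketch of a conforming PATCH call for the case above (path and
	// payload are hypothetical). Under RFC 7396 merge-patch semantics a null
	// value removes a key rather than setting it.
	//
	//	curl -X PATCH \
	//	    -H 'Content-Type: application/merge-patch+json' \
	//	    -H "X-Vault-Token: $VAULT_TOKEN" \
	//	    --data '{"data":{"password":"new-value","stale-key":null}}' \
	//	    "$VAULT_ADDR/v1/secret/data/app"
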
case "LIST":
op = logical.ListOperation
if !strings.HasSuffix(path, "/") {
path += "/"
}
data = parseQuery(r.URL.Query())
case "OPTIONS", "HEAD":
default:
return nil, nil, http.StatusMethodNotAllowed, nil
}
	requestId, err := uuid.GenerateUUID()
	if err != nil {
		return nil, nil, http.StatusInternalServerError, fmt.Errorf("failed to generate identifier for the request: %w", err)
	}

	req := &logical.Request{
		ID:         requestId,
		Operation:  op,
		Path:       path,
		Data:       data,
		Connection: getConnection(r),
		Headers:    r.Header,
	}

	if passHTTPReq {
		req.HTTPRequest = r
	}
	if responseWriter != nil {
		req.ResponseWriter = logical.NewHTTPResponseWriter(responseWriter)
	}

	return req, origBody, 0, nil
}
func buildLogicalPath(r *http.Request) (string, int, error) {
	ns, err := namespace.FromContext(r.Context())
	if err != nil {
		return "", http.StatusBadRequest, nil
	}

	path := ns.TrimmedPath(strings.TrimPrefix(r.URL.Path, "/v1/"))

	switch r.Method {
	case "GET":
		var (
			list bool
			err  error
		)

		queryVals := r.URL.Query()

		listStr := queryVals.Get("list")
		if listStr != "" {
			list, err = strconv.ParseBool(listStr)
			if err != nil {
				return "", http.StatusBadRequest, nil
			}
			if list {
				if !strings.HasSuffix(path, "/") {
					path += "/"
				}
			}
		}

	case "LIST":
		if !strings.HasSuffix(path, "/") {
			path += "/"
		}
	}

	return path, 0, nil
}
func buildLogicalRequest(core *vault.Core, w http.ResponseWriter, r *http.Request) (*logical.Request, io.ReadCloser, int, error) {
	req, origBody, status, err := buildLogicalRequestNoAuth(core.PerfStandby(), w, r)
	if err != nil || status != 0 {
		return nil, nil, status, err
	}
	req.SetRequiredState(r.Header.Values(VaultIndexHeaderName))
	requestAuth(r, req)

	req, err = requestWrapInfo(r, req)
	if err != nil {
		return nil, nil, http.StatusBadRequest, fmt.Errorf("error parsing X-Vault-Wrap-TTL header: %w", err)
	}

	err = parseMFAHeader(req)
	if err != nil {
		return nil, nil, http.StatusBadRequest, fmt.Errorf("failed to parse X-Vault-MFA header: %w", err)
	}

	err = requestPolicyOverride(r, req)
	if err != nil {
		return nil, nil, http.StatusBadRequest, fmt.Errorf("failed to parse %s header: %w", PolicyOverrideHeaderName, err)
	}

	return req, origBody, 0, nil
}
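// Sketch of the request headers consumed above (header values are
// illustrative, not from the source):
//
//	X-Vault-Token: s.xxxxxxxx            // client token, via requestAuth
//	X-Vault-Index: <base64 state>        // required index state, via SetRequiredState
//	X-Vault-Wrap-TTL: 5m                 // response wrapping, via requestWrapInfo
//	X-Vault-MFA: totp_method:123456      // MFA credentials, via parseMFAHeader
//	X-Vault-Policy-Override: true        // soft-mandatory override, via requestPolicyOverride
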
// handleLogical returns a handler for processing logical requests. These
// requests may or may not end up getting forwarded under certain scenarios
// if the node is a performance standby. Some of these cases include:
//   - Perf standby and token with limited use count.
//   - Perf standby and token re-validation needed (e.g. due to invalid token).
//   - Perf standby and control group error.
func handleLogical(core *vault.Core) http.Handler {
	return handleLogicalInternal(core, false, false)
}

// handleLogicalWithInjector returns a handler for processing logical requests
// that also have their logical response data injected at the top-level payload.
// All forwarding behavior remains the same as `handleLogical`.
func handleLogicalWithInjector(core *vault.Core) http.Handler {
	return handleLogicalInternal(core, true, false)
}

// handleLogicalNoForward returns a handler for processing logical local-only
// requests. These requests are never forwarded, and return a
// `vault.ErrCannotForwardLocalOnly` error if forwarding is attempted.
func handleLogicalNoForward(core *vault.Core) http.Handler {
	return handleLogicalInternal(core, false, true)
}
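// A minimal sketch of how these handlers might be wired into a mux
// (illustrative; the real wiring lives in the server's handler setup, and
// the route shown is an assumption):
//
//	mux := http.NewServeMux()
//	mux.Handle("/v1/", handleLogical(core))
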
func handleLogicalRecovery(raw *vault.RawBackend, token *atomic.String) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		req, _, statusCode, err := buildLogicalRequestNoAuth(false, w, r)
		if err != nil || statusCode != 0 {
			respondError(w, statusCode, err)
			return
		}
		reqToken := r.Header.Get(consts.AuthHeaderName)
		if reqToken == "" || token.Load() == "" || reqToken != token.Load() {
			respondError(w, http.StatusForbidden, nil)
			return
		}

		resp, err := raw.HandleRequest(r.Context(), req)
		if respondErrorCommon(w, req, resp, err) {
			return
		}

		var httpResp *logical.HTTPResponse
		if resp != nil {
			httpResp = logical.LogicalResponseToHTTPResponse(resp)
			httpResp.RequestID = req.ID
		}
		respondOk(w, httpResp)
	})
}
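// Hedged example of exercising the recovery endpoint (token value and subpath
// are illustrative): in recovery mode, sys/raw requests are authenticated
// against the one-time recovery token rather than a normal client token.
//
//	curl -H "X-Vault-Token: r.xxxxxxxx" "$VAULT_ADDR/v1/sys/raw/sys/counters"
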
// handleLogicalInternal is a common helper that returns a handler for
// processing logical requests. The behavior depends on the various boolean
// toggles. Refer to usage on functions for possible behaviors.
func handleLogicalInternal(core *vault.Core, injectDataIntoTopLevel bool, noForward bool) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		req, origBody, statusCode, err := buildLogicalRequest(core, w, r)
		if err != nil || statusCode != 0 {
			respondError(w, statusCode, err)
			return
		}

		// Make the internal request. We attach the connection info
		// as well in case this is an authentication request that requires
		// it. Vault core handles stripping this if we need to. This also
		// handles all error cases; if we hit respondLogical, the request is a
		// success.
		resp, ok, needsForward := request(core, w, r, req)
		switch {
		case needsForward && noForward:
			respondError(w, http.StatusBadRequest, vault.ErrCannotForwardLocalOnly)
			return
		case needsForward && !noForward:
			if origBody != nil {
				r.Body = origBody
			}
			forwardRequest(core, w, r)
			return
		case !ok:
			// If not ok, we simply return. The call on request should have
			// taken care of setting the appropriate response code and payload
			// in this case.
			return
		default:
			// Build and return the proper response if everything is fine.
			respondLogical(core, w, r, req, resp, injectDataIntoTopLevel)
			return
		}
	})
}
func respondLogical(core *vault.Core, w http.ResponseWriter, r *http.Request, req *logical.Request, resp *logical.Response, injectDataIntoTopLevel bool) {
	var httpResp *logical.HTTPResponse
	var ret interface{}

	// If vault's core has already written to the response writer do not add any
	// additional output. Headers have already been sent.
	if req != nil && req.ResponseWriter != nil && req.ResponseWriter.Written() {
		return
	}

	if resp != nil {
		if resp.Redirect != "" {
			// If we have a redirect, redirect! We use a 307 code
			// because we don't actually know if it's permanent.
			http.Redirect(w, r, resp.Redirect, 307)
			return
		}

		// Check if this is a raw response
		if _, ok := resp.Data[logical.HTTPStatusCode]; ok {
			respondRaw(w, r, resp)
			return
		}

		if resp.WrapInfo != nil && resp.WrapInfo.Token != "" {
			httpResp = &logical.HTTPResponse{
				WrapInfo: &logical.HTTPWrapInfo{
					Token:           resp.WrapInfo.Token,
					Accessor:        resp.WrapInfo.Accessor,
					TTL:             int(resp.WrapInfo.TTL.Seconds()),
					CreationTime:    resp.WrapInfo.CreationTime.Format(time.RFC3339Nano),
					CreationPath:    resp.WrapInfo.CreationPath,
					WrappedAccessor: resp.WrapInfo.WrappedAccessor,
				},
			}
		} else {
			httpResp = logical.LogicalResponseToHTTPResponse(resp)
			httpResp.RequestID = req.ID
		}

		ret = httpResp

		if injectDataIntoTopLevel {
			injector := logical.HTTPSysInjector{
				Response: httpResp,
			}
			ret = injector
		}
	}

	adjustResponse(core, w, req)

	// Respond
	respondOk(w, ret)
	return
}
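// Shape of the wrapped response built above, as a hedged illustration (field
// values are made up):
//
//	{
//	  "wrap_info": {
//	    "token": "s.xxxxxxxx",
//	    "accessor": "yyyyyyyy",
//	    "ttl": 300,
//	    "creation_time": "2022-01-26T23:47:15.000000Z",
//	    "creation_path": "sys/wrapping/wrap"
//	  }
//	}
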
// respondRaw is used when the response is using HTTPContentType and HTTPRawBody
// to change the default response handling. This is only used for specific things like
// returning the CRL information on the PKI backends.
func respondRaw(w http.ResponseWriter, r *http.Request, resp *logical.Response) {
	retErr := func(w http.ResponseWriter, err string) {
		w.Header().Set("X-Vault-Raw-Error", err)
		w.WriteHeader(http.StatusInternalServerError)
		w.Write(nil)
	}

	// Ensure this is never a secret or auth response
	if resp.Secret != nil || resp.Auth != nil {
		retErr(w, "raw responses cannot contain secrets or auth")
		return
	}

	// Get the status code
	statusRaw, ok := resp.Data[logical.HTTPStatusCode]
	if !ok {
		retErr(w, "no status code given")
		return
	}

	var status int
	switch statusRaw := statusRaw.(type) {
	case int:
		status = statusRaw
	case float64:
		status = int(statusRaw)
	case json.Number:
		s64, err := statusRaw.Float64()
		if err != nil {
			retErr(w, "cannot decode status code")
			return
		}
		status = int(s64)
	default:
		retErr(w, "cannot decode status code")
		return
	}
	nonEmpty := status != http.StatusNoContent

	var contentType string
	var body []byte

	// Get the content type header; don't require it if the body is empty
	contentTypeRaw, ok := resp.Data[logical.HTTPContentType]
	if !ok && nonEmpty {
		retErr(w, "no content type given")
		return
	}
	if ok {
		contentType, ok = contentTypeRaw.(string)
		if !ok {
			retErr(w, "cannot decode content type")
			return
		}
	}

	if nonEmpty {
		// Get the body
		bodyRaw, ok := resp.Data[logical.HTTPRawBody]
		if !ok {
			goto WRITE_RESPONSE
		}

		switch bodyRaw := bodyRaw.(type) {
		case string:
			// This is best effort. The value may already be base64-decoded so
			// if it doesn't work we just use as-is
			bodyDec, err := base64.StdEncoding.DecodeString(bodyRaw)
			if err == nil {
				body = bodyDec
			} else {
				body = []byte(bodyRaw)
			}
		case []byte:
			body = bodyRaw
		default:
			retErr(w, "cannot decode body")
			return
		}
	}

WRITE_RESPONSE:
	// Write the response
	if contentType != "" {
		w.Header().Set("Content-Type", contentType)
	}
	if cacheControl, ok := resp.Data[logical.HTTPCacheControlHeader].(string); ok {
		w.Header().Set("Cache-Control", cacheControl)
	}
	if pragma, ok := resp.Data[logical.HTTPPragmaHeader].(string); ok {
		w.Header().Set("Pragma", pragma)
	}
	if wwwAuthn, ok := resp.Data[logical.HTTPWWWAuthenticateHeader].(string); ok {
		w.Header().Set("WWW-Authenticate", wwwAuthn)
	}
	w.WriteHeader(status)
	w.Write(body)
}
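// Hedged sketch of how a backend opts into this raw path (the logical.HTTP*
// keys are the ones read above; the payload itself is illustrative):
//
//	resp := &logical.Response{
//		Data: map[string]interface{}{
//			logical.HTTPStatusCode:  200,
//			logical.HTTPContentType: "application/pkix-crl",
//			logical.HTTPRawBody:     crlBytes, // []byte, or a base64-encoded string
//		},
//	}
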
// getConnection is used to format the connection information for
// attaching to a logical request
func getConnection(r *http.Request) (connection *logical.Connection) {
	var remoteAddr string
	var remotePort int

	remoteAddr, port, err := net.SplitHostPort(r.RemoteAddr)
	if err != nil {
		remoteAddr = ""
	} else {
		remotePort, err = strconv.Atoi(port)
		if err != nil {
			remotePort = 0
		}
	}

	connection = &logical.Connection{
		RemoteAddr: remoteAddr,
		RemotePort: remotePort,
		ConnState:  r.TLS,
	}
	return
}
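// Worked example for getConnection (illustrative): with r.RemoteAddr equal to
// "127.0.0.1:54798", the resulting connection carries RemoteAddr "127.0.0.1"
// and RemotePort 54798; if RemoteAddr has no parseable host:port, both fields
// fall back to their zero values.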