open-vault/http/logical.go

package http

import (
"encoding/base64"
"encoding/json"
"fmt"
"io"
"net"
"net/http"
"strconv"
"strings"
"time"
"github.com/hashicorp/errwrap"
"github.com/hashicorp/go-uuid"
"github.com/hashicorp/vault/helper/namespace"
"github.com/hashicorp/vault/sdk/logical"
"github.com/hashicorp/vault/vault"
)
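
// buildLogicalRequest turns an incoming HTTP request into a *logical.Request:
// it resolves the namespace-relative path from the URL (everything after
// "/v1/"), maps the HTTP method onto a logical.Operation, collects query
// parameters or the parsed body into the request data, and decorates the
// request with auth, wrapping, MFA, and policy-override information taken
// from the headers. It also hands back the original request body so callers
// can restore it on the *http.Request before forwarding, plus a non-zero
// status code (and optional error) when the request should be rejected
// without ever reaching core.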
func buildLogicalRequest(core *vault.Core, w http.ResponseWriter, r *http.Request) (*logical.Request, io.ReadCloser, int, error) {
ns, err := namespace.FromContext(r.Context())
if err != nil {
return nil, nil, http.StatusBadRequest, nil
}
path := ns.TrimmedPath(r.URL.Path[len("/v1/"):])
var data map[string]interface{}
var origBody io.ReadCloser
var requestReader io.ReadCloser
var responseWriter io.Writer
// Determine the operation
var op logical.Operation
switch r.Method {
case "DELETE":
op = logical.DeleteOperation
case "GET":
op = logical.ReadOperation
queryVals := r.URL.Query()
var list bool
var err error
listStr := queryVals.Get("list")
if listStr != "" {
list, err = strconv.ParseBool(listStr)
if err != nil {
return nil, nil, http.StatusBadRequest, nil
}
if list {
op = logical.ListOperation
if !strings.HasSuffix(path, "/") {
path += "/"
}
}
}
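// For a plain (non-list) GET, fold any remaining query parameters into the
// request data so backends can read them like body fields, e.g.
// GET /v1/secret/foo?version=2 yields data equivalent to {"version": "2"}.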
if !list {
getData := map[string]interface{}{}
for k, v := range r.URL.Query() {
// Skip the help key as this is a reserved parameter
if k == "help" {
continue
}
switch {
case len(v) == 0:
case len(v) == 1:
getData[k] = v[0]
default:
getData[k] = v
}
}
if len(getData) > 0 {
data = getData
}
}
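// Snapshot downloads are streamed straight back to the client rather than
// buffered into a logical response, so give the request direct access to
// the http.ResponseWriter.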
if path == "sys/storage/raft/snapshot" {
responseWriter = w
}
case "POST", "PUT":
op = logical.UpdateOperation
// Parse the request if we can
if op == logical.UpdateOperation {
// If we are uploading a snapshot we don't want to parse it. Instead
// we will simply add the request body to the logical request object
// for later consumption.
if path == "sys/storage/raft/snapshot" || path == "sys/storage/raft/snapshot-force" {
requestReader = r.Body
origBody = r.Body
} else {
origBody, err = parseRequest(core, r, w, &data)
if err == io.EOF {
data = nil
err = nil
}
if err != nil {
return nil, nil, http.StatusBadRequest, err
}
}
}
case "LIST":
op = logical.ListOperation
if !strings.HasSuffix(path, "/") {
path += "/"
}
case "OPTIONS":
default:
return nil, nil, http.StatusMethodNotAllowed, nil
}
request_id, err := uuid.GenerateUUID()
if err != nil {
return nil, nil, http.StatusBadRequest, errwrap.Wrapf("failed to generate identifier for the request: {{err}}", err)
}
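// Build the logical request itself, attaching auth information and the
// caller's connection details, then honor the wrapping (X-Vault-Wrap-TTL),
// MFA, and policy-override headers. Failures here are reported back to the
// client as a 400, or a 403 for permission errors.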
req, err := requestAuth(core, r, &logical.Request{
ID: request_id,
Operation: op,
Path: path,
Data: data,
Connection: getConnection(r),
Headers: r.Header,
})
if err != nil {
if errwrap.Contains(err, logical.ErrPermissionDenied.Error()) {
return nil, nil, http.StatusForbidden, nil
}
return nil, nil, http.StatusBadRequest, errwrap.Wrapf("error performing token check: {{err}}", err)
}
req, err = requestWrapInfo(r, req)
if err != nil {
return nil, nil, http.StatusBadRequest, errwrap.Wrapf("error parsing X-Vault-Wrap-TTL header: {{err}}", err)
}
err = parseMFAHeader(req)
if err != nil {
return nil, nil, http.StatusBadRequest, errwrap.Wrapf("failed to parse X-Vault-MFA header: {{err}}", err)
}
err = requestPolicyOverride(r, req)
if err != nil {
return nil, nil, http.StatusBadRequest, errwrap.Wrapf(fmt.Sprintf(`failed to parse %s header: {{err}}`, PolicyOverrideHeaderName), err)
}
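// Only the raft snapshot paths above set these: uploads expose the raw body
// through RequestReader, downloads stream through ResponseWriter.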
if requestReader != nil {
req.RequestReader = requestReader
}
if responseWriter != nil {
req.ResponseWriter = logical.NewHTTPResponseWriter(responseWriter)
}
return req, origBody, 0, nil
}
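
// handleLogical returns the http.Handler used to serve standard /v1/
// logical requests.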
func handleLogical(core *vault.Core) http.Handler {
return handleLogicalInternal(core, false)
}
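
// handleLogicalWithInjector is identical to handleLogical except that the
// response data is also injected into the top level of the JSON payload via
// logical.HTTPSysInjector.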
func handleLogicalWithInjector(core *vault.Core) http.Handler {
return handleLogicalInternal(core, true)
}
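
// handleLogicalInternal is the shared implementation behind handleLogical
// and handleLogicalWithInjector. It builds the logical request, forwards to
// the active node when a performance standby sees a limited-use-count token
// or when core requests forwarding, re-roots the request context into the
// token's or lease's namespace for the relative sys/wrapping, auth/token,
// and sys/leases paths handled below, and finally executes the request and
// renders the response.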
func handleLogicalInternal(core *vault.Core, injectDataIntoTopLevel bool) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
req, origBody, statusCode, err := buildLogicalRequest(core, w, r)
if err != nil || statusCode != 0 {
respondError(w, statusCode, err)
return
}
// Always forward requests that are using a limited use count token.
if core.PerfStandby() && req.ClientTokenRemainingUses > 0 {
if origBody != nil {
r.Body = origBody
}
forwardRequest(core, w, r)
return
}
// req.Path will be relative by this point. The prefix check comes first
// so we can fail fast when the path doesn't match, since this is a hot path.
switch {
case strings.HasPrefix(req.Path, "sys/wrapping/"), strings.HasPrefix(req.Path, "auth/token/"):
// Get the token ns info; if we match the paths below we want to
// swap in the token context (but keep the relative path)
if err != nil {
core.Logger().Warn("error looking up just-set context", "error", err)
respondError(w, http.StatusInternalServerError, err)
return
}
te := req.TokenEntry()
newCtx := r.Context()
if te != nil {
ns, err := vault.NamespaceByID(newCtx, te.NamespaceID, core)
if err != nil {
core.Logger().Warn("error looking up namespace from the token's namespace ID", "error", err)
respondError(w, http.StatusInternalServerError, err)
return
}
if ns != nil {
newCtx = namespace.ContextWithNamespace(newCtx, ns)
}
}
switch req.Path {
// Route the token wrapping request to its respective sys NS
case "sys/wrapping/lookup", "sys/wrapping/rewrap", "sys/wrapping/unwrap":
r = r.WithContext(newCtx)
if err := wrappingVerificationFunc(r.Context(), core, req); err != nil {
if errwrap.Contains(err, logical.ErrPermissionDenied.Error()) {
respondError(w, http.StatusForbidden, err)
} else {
respondError(w, http.StatusBadRequest, err)
}
return
}
// The -self paths have no meaning outside of the token NS, so
// requests for these paths always go to the token NS
case "auth/token/lookup-self", "auth/token/renew-self", "auth/token/revoke-self":
r = r.WithContext(newCtx)
// For the following operations, we can set the proper namespace context
// using the token's embedded nsID if a relative path was provided. Since
// this is done at the HTTP layer, the operation will still be gated by
// ACLs.
case "auth/token/lookup", "auth/token/renew", "auth/token/revoke", "auth/token/revoke-orphan":
token, ok := req.Data["token"]
// If the token is not present (e.g. a bad request), break out and let the backend
// handle the error
if !ok {
// If this is a token lookup request and the token is not explicitly
// provided, the lookup will use the client token, so we simply set the
// context to the client token's context.
if req.Path == "auth/token/lookup" {
r = r.WithContext(newCtx)
}
break
}
_, nsID := namespace.SplitIDFromString(token.(string))
if nsID != "" {
ns, err := vault.NamespaceByID(newCtx, nsID, core)
if err != nil {
core.Logger().Warn("error looking up namespace from the token's namespace ID", "error", err)
respondError(w, http.StatusInternalServerError, err)
return
}
if ns != nil {
newCtx = namespace.ContextWithNamespace(newCtx, ns)
r = r.WithContext(newCtx)
}
}
}
// The following relative sys/leases/ paths handle re-routing requests
// to the proper namespace using the lease ID on applicable paths.
case strings.HasPrefix(req.Path, "sys/leases/"):
switch req.Path {
// For the following operations, we can set the proper namespace context
// using the lease's embedded nsID if a relative path was provided. Since
// this is done at the HTTP layer, the operation will still be gated by
// ACLs.
case "sys/leases/lookup", "sys/leases/renew", "sys/leases/revoke", "sys/leases/revoke-force":
leaseID, ok := req.Data["lease_id"]
// If lease ID is not present, break out and let the backend handle the error
if !ok {
break
}
_, nsID := namespace.SplitIDFromString(leaseID.(string))
if nsID != "" {
newCtx := r.Context()
ns, err := vault.NamespaceByID(newCtx, nsID, core)
if err != nil {
core.Logger().Warn("error looking up namespace from the lease's namespace ID", "error", err)
respondError(w, http.StatusInternalServerError, err)
return
}
if ns != nil {
newCtx = namespace.ContextWithNamespace(newCtx, ns)
r = r.WithContext(newCtx)
}
}
}
}
// Make the internal request. We attach the connection info
// as well in case this is an authentication request that requires
// it. Vault core handles stripping this if we need to. This also
// handles all error cases; if we hit respondLogical, the request is a
// success.
resp, ok, needsForward := request(core, w, r, req)
if needsForward {
if origBody != nil {
r.Body = origBody
}
forwardRequest(core, w, r)
return
}
if !ok {
return
}
// Build the proper response
respondLogical(w, r, req, resp, injectDataIntoTopLevel)
})
}
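
// respondLogical writes a *logical.Response back to the client. Responses
// that already streamed their output are left alone, redirects become HTTP
// 307s, raw responses (logical.HTTPStatusCode set in Data) are delegated to
// respondRaw, wrapped responses expose only their wrap info, and everything
// else is converted via logical.LogicalResponseToHTTPResponse (optionally
// run through logical.HTTPSysInjector when injectDataIntoTopLevel is set).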
func respondLogical(w http.ResponseWriter, r *http.Request, req *logical.Request, resp *logical.Response, injectDataIntoTopLevel bool) {
var httpResp *logical.HTTPResponse
var ret interface{}
// If Vault's core has already written to the response writer, do not add
// any additional output. Headers have already been sent.
if req != nil && req.ResponseWriter != nil && req.ResponseWriter.Written() {
return
}
if resp != nil {
if resp.Redirect != "" {
// If we have a redirect, redirect! We use a 307 code
// because we don't actually know if it's permanent.
http.Redirect(w, r, resp.Redirect, 307)
return
}
// Check if this is a raw response
if _, ok := resp.Data[logical.HTTPStatusCode]; ok {
respondRaw(w, r, resp)
return
}
if resp.WrapInfo != nil && resp.WrapInfo.Token != "" {
httpResp = &logical.HTTPResponse{
WrapInfo: &logical.HTTPWrapInfo{
Token: resp.WrapInfo.Token,
Accessor: resp.WrapInfo.Accessor,
TTL: int(resp.WrapInfo.TTL.Seconds()),
CreationTime: resp.WrapInfo.CreationTime.Format(time.RFC3339Nano),
CreationPath: resp.WrapInfo.CreationPath,
WrappedAccessor: resp.WrapInfo.WrappedAccessor,
},
}
} else {
httpResp = logical.LogicalResponseToHTTPResponse(resp)
httpResp.RequestID = req.ID
}
ret = httpResp
if injectDataIntoTopLevel {
injector := logical.HTTPSysInjector{
Response: httpResp,
}
ret = injector
}
}
// Respond
respondOk(w, ret)
return
}
// respondRaw is used when the response is using HTTPContentType and HTTPRawBody
// to change the default response handling. This is only used for specific cases,
// such as returning CRL information from the PKI backend.
func respondRaw(w http.ResponseWriter, r *http.Request, resp *logical.Response) {
retErr := func(w http.ResponseWriter, err string) {
w.Header().Set("X-Vault-Raw-Error", err)
w.WriteHeader(http.StatusInternalServerError)
w.Write(nil)
}
// Ensure this is never a secret or auth response
if resp.Secret != nil || resp.Auth != nil {
retErr(w, "raw responses cannot contain secrets or auth")
return
}
// Get the status code
statusRaw, ok := resp.Data[logical.HTTPStatusCode]
if !ok {
retErr(w, "no status code given")
return
}
var status int
switch statusRaw.(type) {
case int:
status = statusRaw.(int)
case float64:
status = int(statusRaw.(float64))
case json.Number:
s64, err := statusRaw.(json.Number).Float64()
if err != nil {
retErr(w, "cannot decode status code")
return
}
status = int(s64)
default:
retErr(w, "cannot decode status code")
return
}
nonEmpty := status != http.StatusNoContent
var contentType string
var body []byte
// Get the content type header; don't require it if the body is empty
contentTypeRaw, ok := resp.Data[logical.HTTPContentType]
if !ok && nonEmpty {
retErr(w, "no content type given")
return
}
if ok {
contentType, ok = contentTypeRaw.(string)
if !ok {
retErr(w, "cannot decode content type")
return
}
}
if nonEmpty {
// Get the body
bodyRaw, ok := resp.Data[logical.HTTPRawBody]
if !ok {
goto WRITE_RESPONSE
}
switch bodyRaw.(type) {
case string:
// This is best effort. The value may already be base64-decoded, so if
// decoding fails we just use it as-is.
bodyDec, err := base64.StdEncoding.DecodeString(bodyRaw.(string))
if err == nil {
body = bodyDec
} else {
body = []byte(bodyRaw.(string))
}
case []byte:
body = bodyRaw.([]byte)
default:
retErr(w, "cannot decode body")
return
}
}
WRITE_RESPONSE:
// Write the response
if contentType != "" {
w.Header().Set("Content-Type", contentType)
}
w.WriteHeader(status)
w.Write(body)
}
// getConnection is used to format the connection information for
// attaching to a logical request
func getConnection(r *http.Request) (connection *logical.Connection) {
var remoteAddr string
remoteAddr, _, err := net.SplitHostPort(r.RemoteAddr)
if err != nil {
remoteAddr = ""
}
connection = &logical.Connection{
RemoteAddr: remoteAddr,
ConnState: r.TLS,
}
return
}
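
// Usage sketch, assuming the surrounding server mounts this handler under
// the "/v1/" prefix (as implied by the "/v1/" trim in buildLogicalRequest):
//
//	mux := http.NewServeMux()
//	mux.Handle("/v1/", handleLogical(core))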