---
layout: "docs"
page_title: "PostgreSQL - Secrets Engines"
sidebar_title: "PostgreSQL <sup>DEPRECATED</sup>"
sidebar_current: "docs-secrets-postgresql"
description: |-
  The PostgreSQL secrets engine for Vault generates database credentials to access PostgreSQL.
---
# PostgreSQL Secrets Engine

Name: `postgresql`
~> **Deprecation Note:** This secrets engine is deprecated in favor of the combined databases secrets engine added in v0.7.1. See the documentation for the new implementation of this secrets engine at PostgreSQL database plugin.
The PostgreSQL secrets engine for Vault generates database credentials dynamically based on configured roles. This means that services that need to access a database no longer need to hardcode credentials: they can request them from Vault, and use Vault's leasing mechanism to more easily roll keys.
Additionally, it introduces a new ability: because every service accesses the database with unique credentials, auditing is much easier when questionable data access is discovered, since you can track it down to the specific instance of a service based on the SQL username.
Vault makes use of both its own internal revocation system and the `VALID UNTIL` setting when creating PostgreSQL users to ensure that users become invalid within a reasonable time of the lease expiring.
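
To see this setting from the database side, the expiration is recorded in the `pg_roles` catalog. The query below is a minimal sketch for verification only: the connection string is the one used later in this guide, and the `root-` username prefix simply matches the example output further down the page.

```
# Inspect dynamically created users and their VALID UNTIL timestamps (illustrative query).
$ psql "postgresql://root:vaulttest@vaulttest.ciuvljjni7uo.us-west-1.rds.amazonaws.com:5432/postgres" \
    -c "SELECT rolname, rolvaliduntil FROM pg_roles WHERE rolname LIKE 'root-%';"
```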
This page will show a quick start for this secrets engine. For detailed documentation on every path, use `vault path-help` after mounting the secrets engine.
## Quick Start
The first step to using the PostgreSQL secrets engine is to mount it. Unlike the `kv` secrets engine, the `postgresql` secrets engine is not mounted by default.
```
$ vault secrets enable postgresql
Success! Enabled the postgresql secrets engine at: postgresql/
```
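
If you want to confirm the mount, you can list the enabled secrets engines. This is an optional verification step; the output columns vary by Vault version, so only the command is shown here.

```
# Optional: confirm that postgresql/ now appears among the enabled secrets engines.
$ vault secrets list
```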
Next, Vault must be configured to connect to PostgreSQL. This is done by writing either a PostgreSQL URL or PG connection string:
```
$ vault write postgresql/config/connection \
    connection_url="postgresql://root:vaulttest@vaulttest.ciuvljjni7uo.us-west-1.rds.amazonaws.com:5432/postgres"
```
In this case, we've configured Vault with the user "root" and password "vaulttest", connecting to a PostgreSQL instance in AWS RDS. The "postgres" database name is being used. It is important that the Vault user have the `GRANT OPTION` privilege to manage users.
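
The same configuration can also be expressed as a PG-style connection string. The key/value form below is an illustrative sketch using the standard libpq parameters rather than text from the original guide; add options such as `sslmode` as appropriate for your environment.

```
$ vault write postgresql/config/connection \
    connection_url="user=root password=vaulttest host=vaulttest.ciuvljjni7uo.us-west-1.rds.amazonaws.com port=5432 dbname=postgres"
```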
Optionally, we can configure the lease settings for credentials generated by Vault. This is done by writing to the `config/lease` key:
```
$ vault write postgresql/config/lease lease=1h lease_max=24h
Success! Data written to: postgresql/config/lease
```
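
To double-check what was stored, the same path can be read back. This is an optional step not covered by the original quick start; the exact fields returned may differ between Vault versions.

```
# Optional: verify the stored lease configuration.
$ vault read postgresql/config/lease
```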
This restricts each credential to being valid or leased for 1 hour at a time, with a maximum use period of 24 hours. This forces an application to renew its credentials at least hourly and to recycle them once per day.
The next step is to configure a role. A role is a logical name that maps to a policy used to generate those credentials. For example, let's create a "readonly" role:
```
$ vault write postgresql/roles/readonly \
    sql="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";"
Success! Data written to: postgresql/roles/readonly
```
By writing to the `roles/readonly` path we are defining the `readonly` role. This role will be created by evaluating the given `sql` statements. By default, the `{{name}}`, `{{password}}` and `{{expiration}}` fields will be populated by Vault with dynamically generated values. This SQL statement is creating the named user, and then granting it `SELECT` or read-only privileges to tables in the database. More complex `GRANT` queries can be used to customize the privileges of the role. See the PostgreSQL manual for more information.
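
As an illustration of such customization, a hypothetical `readwrite` role might also grant `INSERT` and `UPDATE` on the `public` schema. The role name and `GRANT` statement below are examples only, not part of the official guide; tailor them to your own schema and privilege model.

```
$ vault write postgresql/roles/readwrite \
    sql="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
    GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";"
```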
Vault is now configured to create and manage credentials for Postgres! To generate a new set of credentials, we simply read from that role:
```
$ vault read postgresql/creds/readonly
Key               Value
---               -----
lease_id          postgresql/creds/readonly/c888a097-b0e2-26a8-b306-fc7c84b98f07
lease_duration    3600
password          34205e88-0de1-68b7-6267-72d8e32c5d3d
username          root-1430162075-7887
```
By reading from the `creds/readonly` path, Vault has generated a new set of credentials using the `readonly` role configuration. Here we see the dynamically generated username and password, along with a one hour lease.
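
If the credentials need to live longer, or should be cut off early, the lease can be managed directly using the `lease_id` from the output above. The commands below are a sketch using the current `vault lease` subcommands; older CLI versions used `vault renew` and `vault revoke` instead.

```
# Extend the lease on the generated credentials.
$ vault lease renew postgresql/creds/readonly/c888a097-b0e2-26a8-b306-fc7c84b98f07

# Revoke the lease immediately; Vault removes the associated database user.
$ vault lease revoke postgresql/creds/readonly/c888a097-b0e2-26a8-b306-fc7c84b98f07
```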
Using ACLs, it is possible to restrict use of the `postgresql` secrets engine such that trusted operators can manage the role definitions, and both users and applications are restricted in the credentials they are allowed to read.
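
As a minimal sketch of such an ACL, the policy below (its name and scope are chosen purely for illustration) would allow an application to generate credentials from the `readonly` role while leaving role management to operators with broader policies:

```
$ vault policy write postgresql-readonly-app - <<EOF
# Applications may only read (generate) credentials from the readonly role.
path "postgresql/creds/readonly" {
  capabilities = ["read"]
}
EOF
```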
If you get stuck at any time, simply run `vault path-help postgresql` or with a subpath for interactive help output.
## API
The PostgreSQL secrets engine has a full HTTP API. Please see the PostgreSQL secrets engine API for more details.
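
As a rough sketch of what that looks like, reading credentials over HTTP mirrors the CLI path used earlier; `$VAULT_ADDR` and `$VAULT_TOKEN` are placeholders, and the exact endpoint and response fields should be confirmed against the API documentation.

```
# HTTP equivalent of `vault read postgresql/creds/readonly` (illustrative).
$ curl \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    $VAULT_ADDR/v1/postgresql/creds/readonly
```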