# open-vault/enos/enos-modules.hcl

# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: MPL-2.0
module "autopilot_upgrade_storageconfig" {
source = "./modules/autopilot_upgrade_storageconfig"
}
module "backend_consul" {
source = "./modules/backend_consul"
license = var.backend_license_path == null ? null : file(abspath(var.backend_license_path))
log_level = var.backend_log_level
}
module "backend_raft" {
source = "./modules/backend_raft"
}
module "build_crt" {
source = "./modules/build_crt"
}
module "build_local" {
source = "./modules/build_local"
}
module "build_artifactory" {
source = "./modules/vault_artifactory_artifact"
}
module "create_vpc" {
source = "./modules/create_vpc"
environment = "ci"
common_tags = var.tags
}
module "ec2_info" {
source = "./modules/ec2_info"
}
module "get_local_metadata" {
source = "./modules/get_local_metadata"
}
module "generate_secondary_token" {
source = "./modules/generate_secondary_token"
vault_install_dir = var.vault_install_dir
}
module "read_license" {
source = "./modules/read_license"
}
module "replication_data" {
source = "./modules/replication_data"
}
module "seal_key_awskms" {
source = "./modules/seal_key_awskms"
common_tags = var.tags
}
module "seal_key_shamir" {
source = "./modules/seal_key_shamir"
common_tags = var.tags
}
module "shutdown_node" {
source = "./modules/shutdown_node"
}
module "shutdown_multiple_nodes" {
source = "./modules/shutdown_multiple_nodes"
}
module "start_vault" {
source = "./modules/start_vault"
install_dir = var.vault_install_dir
log_level = var.vault_log_level
}
module "stop_vault" {
source = "./modules/stop_vault"
}

# create target instances using ec2:CreateFleet
module "target_ec2_fleet" {
  source = "./modules/target_ec2_fleet"
  common_tags = var.tags
  project_name = var.project_name
  ssh_keypair = var.aws_ssh_keypair_name
}

# create target instances using ec2:RunInstances
module "target_ec2_instances" {
  source = "./modules/target_ec2_instances"
  common_tags = var.tags
  project_name = var.project_name
  ssh_keypair = var.aws_ssh_keypair_name
}

# don't create instances but satisfy the module interface
module "target_ec2_shim" {
  source = "./modules/target_ec2_shim"
  common_tags = var.tags
  project_name = var.project_name
  ssh_keypair = var.aws_ssh_keypair_name
}

# create target instances using ec2:RequestSpotFleet
module "target_ec2_spot_fleet" {
  source = "./modules/target_ec2_spot_fleet"
  common_tags = var.tags
  project_name = var.project_name
  ssh_keypair = var.aws_ssh_keypair_name
}
module "vault_agent" {
source = "./modules/vault_agent"
vault_install_dir = var.vault_install_dir
vault_instance_count = var.vault_instance_count
}
module "vault_proxy" {
source = "./modules/vault_proxy"
vault_install_dir = var.vault_install_dir
vault_instance_count = var.vault_instance_count
}
module "vault_verify_agent_output" {
source = "./modules/vault_verify_agent_output"
vault_instance_count = var.vault_instance_count
}
module "vault_cluster" {
source = "./modules/vault_cluster"
install_dir = var.vault_install_dir
consul_license = var.backend_license_path == null ? null : file(abspath(var.backend_license_path))
log_level = var.vault_log_level
}
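
# Illustrative sketch (not part of this file): the modules declared here are
# consumed by scenario steps defined elsewhere in this directory, roughly:
#
#   scenario "smoke" {
#     step "create_vpc" {
#       module = module.create_vpc
#     }
#
#     step "create_vault_cluster" {
#       module     = module.vault_cluster
#       depends_on = [step.create_vpc]
#     }
#   }
#
# The scenario and step names above are hypothetical; steps pass outputs to
# later steps via step.<name>.<output> references.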
module "vault_get_cluster_ips" {
source = "./modules/vault_get_cluster_ips"
  vault_install_dir = var.vault_install_dir
  vault_instance_count = var.vault_instance_count
}
module "vault_raft_remove_peer" {
source = "./modules/vault_raft_remove_peer"
vault_install_dir = var.vault_install_dir
}
module "vault_setup_perf_secondary" {
  source = "./modules/vault_setup_perf_secondary"

  vault_install_dir = var.vault_install_dir
}

module "vault_test_ui" {
  source = "./modules/vault_test_ui"

  ui_run_tests = var.ui_run_tests
}
module "vault_unseal_nodes" {
  source = "./modules/vault_unseal_nodes"

  vault_install_dir    = var.vault_install_dir
  vault_instance_count = var.vault_instance_count
}

module "vault_upgrade" {
  source = "./modules/vault_upgrade"

  vault_install_dir    = var.vault_install_dir
  vault_instance_count = var.vault_instance_count
}
module "vault_verify_autopilot" {
  source = "./modules/vault_verify_autopilot"

  vault_autopilot_upgrade_status = "await-server-removal"
  vault_install_dir              = var.vault_install_dir
  vault_instance_count           = var.vault_instance_count
}

module "vault_verify_raft_auto_join_voter" {
  source = "./modules/vault_verify_raft_auto_join_voter"

  vault_install_dir    = var.vault_install_dir
  vault_instance_count = var.vault_instance_count
}

module "vault_verify_undo_logs" {
  source = "./modules/vault_verify_undo_logs"

  vault_install_dir    = var.vault_install_dir
  vault_instance_count = var.vault_instance_count
}
module "vault_verify_replication" {
  source = "./modules/vault_verify_replication"

  vault_install_dir    = var.vault_install_dir
  vault_instance_count = var.vault_instance_count
}
module "vault_verify_ui" {
  source = "./modules/vault_verify_ui"

  vault_instance_count = var.vault_instance_count
}
module "vault_verify_unsealed" {
  source = "./modules/vault_verify_unsealed"

  vault_install_dir    = var.vault_install_dir
  vault_instance_count = var.vault_instance_count
}
module "vault_setup_perf_primary" {
  source = "./modules/vault_setup_perf_primary"

  vault_install_dir = var.vault_install_dir
}

module "vault_verify_read_data" {
  source = "./modules/vault_verify_read_data"

  vault_install_dir    = var.vault_install_dir
  vault_instance_count = var.vault_instance_count
}
module "vault_verify_performance_replication" {
  source = "./modules/vault_verify_performance_replication"

  vault_install_dir = var.vault_install_dir
}

module "vault_verify_version" {
  source = "./modules/vault_verify_version"

  vault_install_dir    = var.vault_install_dir
  vault_instance_count = var.vault_instance_count
}
module "vault_verify_write_data" {
  source = "./modules/vault_verify_write_data"

  vault_install_dir    = var.vault_install_dir
  vault_instance_count = var.vault_instance_count
}
# Waits until the given cluster of hosts has elected a leader.
module "vault_wait_for_leader" {
  source = "./modules/vault_wait_for_leader"

  vault_install_dir    = var.vault_install_dir
  vault_instance_count = var.vault_instance_count
}
module "vault_wait_for_seal_rewrap" {
  source = "./modules/vault_wait_for_seal_rewrap"

  vault_install_dir    = var.vault_install_dir
  vault_instance_count = var.vault_instance_count
}

module "verify_seal_type" {
  source = "./modules/verify_seal_type"

  vault_install_dir    = var.vault_install_dir
  vault_instance_count = var.vault_instance_count
}
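
# The module blocks above only register each module and its default inputs;
# the Enos scenario files in this directory consume them as steps. A minimal
# sketch of that wiring is shown below. The step names, dependency, and
# variables are illustrative assumptions, not a scenario from this repository,
# and the real modules' input names may differ.
#
# scenario "example" {
#   step "create_vault_cluster" {
#     module = module.vault_cluster
#   }
#
#   step "wait_for_leader" {
#     module     = module.vault_wait_for_leader
#     depends_on = [step.create_vault_cluster]
#
#     variables {
#       # Hypothetical inputs for illustration only.
#       vault_hosts      = step.create_vault_cluster.hosts
#       vault_root_token = step.create_vault_cluster.root_token
#     }
#   }
# }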