Consul-Terraform-Sync (CTS) is a service-oriented tool for managing network infrastructure in near real-time. CTS runs as a daemon and integrates the network topology maintained by your Consul cluster with your network infrastructure to dynamically secure and connect services.
CTS uses Consul’s [blocking queries](/api-docs/features/blocking) to monitor Consul for updates. When an endpoint does not support blocking queries, CTS uses polling to watch for changes. These mechanisms are referred to in CTS as *watchers*.
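As an illustration of the underlying mechanism (not CTS-specific code), a blocking query passes the last-seen `X-Consul-Index` value back to Consul as the `index` query parameter, and Consul holds the connection open until the data changes or the `wait` timeout elapses. The service name `web` and index value below are placeholders:

```shell-session
$ curl 'http://localhost:8500/v1/health/service/web?index=112&wait=5m'
```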
The watcher maintains a separate thread for each monitored value and runs any tasks that depend on that value whenever it is updated. These threads are referred to as *views*. For example, a thread may run a task to update a proxy when the watcher detects that an instance has become unhealthy.
By default, CTS stores [Terraform state data](https://www.terraform.io/docs/state/index.html) in the Consul KV, but you can specify where this information is stored by configuring the `backend` setting in the [Terraform driver configuration](/docs/nia/configuration#backend). The data persists when CTS stops as long as the backend is configured to a remote location.
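For example, the following sketch stores state under an explicit KV path, assuming the `backend "consul"` block accepts the same options as Terraform's built-in `consul` backend; the address and path values are placeholders for your environment:

```hcl
driver "terraform" {
  backend "consul" {
    address = "consul.example.com:8500"         # placeholder Consul address
    path    = "consul-terraform-sync/terraform" # placeholder KV path for state
    gzip    = true                              # compress state data stored in KV
  }
}
```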
By default, CTS stores task and event data in memory. This data is transient and does not persist. If you configure [CTS to run with high availability enabled](/docs/nia/usage/run-ha), CTS stores the data in the Consul KV instead. High availability is an enterprise feature that promotes CTS resiliency: when it is enabled, CTS stores and persists task changes and events that occur when an instance stops.
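The following is a hypothetical sketch of what enabling high availability might look like; the cluster name and storage path shown are assumptions, and the full set of options is described in the [run CTS with high availability enabled](/docs/nia/usage/run-ha) documentation:

```hcl
high_availability {
  cluster {
    name = "cts-cluster" # hypothetical cluster name

    storage "consul" {
      parent_path = "cts" # hypothetical KV path prefix for persisted task and event data
    }
  }
}
```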
The data stored when operating in high availability mode includes task changes made using the task API or CLI. Examples of task changes include creating a new task, deleting a task, and enabling or disabling a task. You can empty the leader’s stored state information by starting CTS with the [`-reset-storage` flag](/docs/nia/cli/start#options).
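For example, a task change that CTS persists in this mode can be made with the task CLI, and the leader's stored state can be cleared on the next start; the task name and configuration file below are placeholders:

```shell-session
$ consul-terraform-sync task disable example-task

$ consul-terraform-sync start -reset-storage -config-file config.hcl
```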
If you [run CTS with high availability enabled](/docs/nia/usage/run-ha), CTS performs instance compatibility checks to ensure that all instances in the cluster behave consistently. Consistent behavior across instances enables CTS to reliably perform the automations configured in the state storage.
The CTS instance compatibility check reports an error if the task [module](/docs/nia/configuration#module) is configured with a local module that does not exist on the CTS instance. Refer to the [Terraform documentation](https://www.terraform.io/language/modules/sources#module-sources) for additional information about module sources. The following example shows the resulting log:
```
[ERROR] ha.compat: error="compatibility check failure: stat ./example-module: no such file or directory"
```
Refer to [Error Messages](/docs/nia/usage/errors-ref) for additional information.
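For instance, a task that references a local module such as the following fails the compatibility check on any instance where the `./example-module` directory does not exist; the task name and services condition are illustrative:

```hcl
task {
  name   = "example-task"
  module = "./example-module" # local module path; must be present on every CTS instance

  condition "services" {
    names = ["web"]
  }
}
```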
CTS instances perform a compatibility check on start-up, based on the stored state, and every five minutes thereafter. If the check detects an incompatible CTS instance, it generates a log so that an operator can address the incompatibility.
When CTS finds an incompatibility, it logs the error message and continues to run. CTS can still elect an incompatible instance to be the leader, but tasks affected by the incompatibility do not run successfully. This can happen, for example, when all active CTS instances enter [`once-mode`](/docs/nia/cli/start#modes) and run their tasks once after initially being elected.
We recommend following the network security guidelines described in the [Secure Consul-Terraform-Sync for Production](https://learn.hashicorp.com/tutorials/consul/consul-terraform-sync-secure) tutorial. The tutorial contains a checklist of best practices for securing your CTS installation in a production environment.