---
layout: "docs"
page_title: "Consul Architecture"
sidebar_current: "docs-internals-architecture"
description: |-
  Consul is a complex system that has many different moving parts. To help users and developers of Consul form a mental model of how it works, this page documents the system architecture.
---

# Consul Architecture

Consul is a complex system that has many different moving parts. To help users and developers of Consul form a mental model of how it works, this page documents the system architecture.

~> **Advanced Topic!** This page covers technical details of the internals of Consul. You don't need to know these details to effectively operate and use Consul. These details are documented here for those who wish to learn about them without having to go spelunking through the source code.

## Glossary

Before describing the architecture, we provide a glossary of terms to help clarify what is being discussed:

* Agent - An agent is the long running daemon on every member of the Consul cluster. It is started by running `consul agent`. The agent is able to run in either *client* or *server* mode. Since all nodes must be running an agent, it is simpler to refer to the node as being either a client or server, but there are other instances of the agent. All agents can run the DNS or HTTP interfaces, and are responsible for running checks and keeping services in sync.

* Client - A client is an agent that forwards all RPCs to a server. The client is relatively stateless. The only background activity a client performs is taking part in the LAN gossip pool. This has a minimal resource overhead and consumes only a small amount of network bandwidth.

* Server - A server is an agent running in server mode. When in server mode, there is an expanded set of responsibilities including participating in the Raft quorum, maintaining cluster state, responding to RPC queries, exchanging WAN gossip with other datacenters, and forwarding queries to leaders or remote datacenters.
* Datacenter - While the definition of a datacenter may seem obvious, there are subtle details that must be considered, such as multiple availability zones in EC2. We define a datacenter to be a networking environment that is private, low latency, and high bandwidth. This excludes communication that would traverse the public internet.

* Consensus - When used in our documentation, we use consensus to mean agreement upon the elected leader as well as agreement on the ordering of transactions. Since these transactions are applied to an FSM, we implicitly include the consistency of a replicated state machine. Consensus is described in more detail on [Wikipedia](http://en.wikipedia.org/wiki/Consensus_(computer_science)), as well as in our [implementation here](/docs/internals/consensus.html).

* Gossip - Consul is built on top of [Serf](http://www.serfdom.io/), which provides a full [gossip protocol](http://en.wikipedia.org/wiki/Gossip_protocol) that is used for multiple purposes. Serf provides membership, failure detection, and event broadcast mechanisms. Our use of these is described more in the [gossip documentation](/docs/internals/gossip.html). It is enough to know that gossip involves random node-to-node communication, primarily over UDP.

* LAN Gossip - Refers to the LAN gossip pool, which contains nodes that are all located on the same local area network or datacenter.

* WAN Gossip - Refers to the WAN gossip pool, which contains only servers. These servers are primarily located in different datacenters and typically communicate over the internet or a wide area network.

* RPC - Short for Remote Procedure Call. This is a request / response mechanism allowing a client to make a request of a server.

## 10,000 foot view

From a 10,000 foot altitude the architecture of Consul looks like this: