This topic provides an overview of admin partitions, which are entities that define one or more administrative boundaries for single Consul deployments.
Admin partitions exist a level above namespaces in the identity hierarchy. They contain one or more namespaces and allow multiple independent tenants to share a Consul server cluster. As a result, admin partitions enable you to define administrative and communication boundaries between services managed by separate teams or belonging to separate stakeholders. You can also use them to segment production and non-production services within the Consul deployment.
-> **Preexisting resource nodes and namespaces**: Admin partitions were introduced in Consul 1.11. Resource nodes were not namespaced prior to 1.11. After upgrading to Consul 1.11 or later, all resource nodes will be namespaced.
Each Consul cluster will have at least one default admin partition (named `default`). Any resource created without specifying an admin partition will inherit the partition of the ACL token.
-> **Preexisting resources and the `default` partition**: Admin partitions were introduced in Consul 1.11. After upgrading to Consul 1.11 or later, the `default` partition will contain all resources created in previous versions.
Within a single datacenter, a namespace in one admin partition is logically separate from any other namespace with the same name in other admin partitions.
Only resources in the default admin partition will be replicated to secondary datacenters. Non-default partitions are currently not supported in secondary datacenters.
Client agents will be configured to operate within a specific admin partition. The DNS interface will only return results for the admin partition within the scope of the client.
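For example, a minimal agent configuration sketch that places a client in a hypothetical `clients` partition (the `partition` setting is the relevant option; the join address is illustrative):

```hcl
# Client agent configuration (sketch). The `partition` setting assigns this
# agent, and everything it registers, to the named admin partition.
partition  = "clients"
retry_join = ["10.0.0.10"] # illustrative server address
```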
Values specified for [`proxy-defaults`](/docs/connect/config-entries/proxy-defaults) configurations are scoped to a specific partition. Services registered in the partition will use the partition's `proxy-defaults` values.
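As a sketch, a `proxy-defaults` entry scoped to a hypothetical `team-1` partition might look like the following:

```hcl
Kind      = "proxy-defaults"
Name      = "global"
Partition = "team-1" # applies only to services registered in this partition

Config {
  protocol = "http"
}
```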
You can configure services to be discoverable and accessible by downstream services in any partition within the datacenter. Specify the upstream services that you want to be available for discovery by configuring the `partition-exports` configuration entry in the partition where the services are registered. Refer to the [`partition-exports` documentation](/docs/connect/config-entries/partition-exports) for details.
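For example, the following `partition-exports` sketch, created in a hypothetical `team-1` partition, makes its `web` service discoverable by downstreams in a `team-2` partition (all partition and service names here are illustrative):

```hcl
Kind = "partition-exports"
Name = "team-1" # the partition in which the services are registered

Services = [
  {
    Name      = "web"
    Namespace = "default"
    Consumers = [
      {
        Partition = "team-2" # partition allowed to discover the service
      }
    ]
  }
]
```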
Additionally, the `upstreams` configuration for proxies in the source partition must specify the name of the destination partition so that listeners can be created. Refer to the [Upstream Configuration Reference](/docs/connect/registration/service-registration#upstream-configuration-reference) for additional information.
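A sketch of the corresponding upstream definition in a source service's registration, assuming the illustrative names from the example above:

```hcl
upstreams = [
  {
    destination_partition = "team-1"
    destination_namespace = "default"
    destination_name      = "web"
    local_bind_port       = 8080 # illustrative local listener port
  }
]
```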
* The `write` permission for `proxy-defaults` requires `mesh:write`. See [Admin Partition Rules](/docs/security/acl/acl-rules#admin-partition-rules) for additional information.
* With the exception of the `default` admin partition, ACL rules configured for admin partitions are isolated, so policies defined in partitions outside of the `default` partition can only reference their local partition (see the example below).
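For example, a policy defined in the `default` partition can grant access into another partition with a `partition` rule block; a sketch, with illustrative names:

```hcl
# Policy defined in the `default` partition (sketch). Policies in
# non-default partitions cannot contain cross-partition rules like this.
partition "team-1" {
  namespace "default" {
    service_prefix "" {
      policy = "read"
    }
  }
}
```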
One of the primary use cases for admin partitions is enabling a service mesh on Kubernetes clusters. The following requirements must be met to create admin partitions on Kubernetes:
* Pods must be able to communicate with each other (flat pod and node network). See the [firewall rules step](#firewall-rules) in the Deploying Consul with Admin Partitions on Kubernetes section for additional information
This section describes how to deploy Consul admin partitions to Kubernetes clusters and directs you to the CLI reference for interacting with the admin partitions API on the command line.
### Deploying Consul with Admin Partitions on Kubernetes
The expected use case is to create admin partitions on Kubernetes clusters. This is because many organizations prefer to use cloud-managed Kubernetes offerings to provision separate Kubernetes clusters for individual teams, business units, or environments, as opposed to deploying a single, large Kubernetes cluster. When these organizations attempt to use a service mesh to enable cross-cluster activities, such as administration tasks and communication between nodes, they encounter problems.
The following procedure will result in different admin partitions in each Kubernetes cluster. The Consul clients running in the cluster with servers will be in the `default` partition. Another partition called `clients` will also be created.
Verify that your Consul deployment meets the [Kubernetes Requirements](#kubernetes-requirements) before proceeding.
1. <a id="firewall-rules"/> Update the firewall rules to ensure the pod network is flat. The following example for Google Kubernetes Engine (GKE) describes how to create a firewall rule that allows all pod and node network traffic to talk to the server and workload nodegroups:
1. Enter the `gke-cluster-1-7b43116f-node` and `gke-cluster-2-48d3bee6-node` node names in the target fields of the firewall rule.
1. Enter the `10.128.0.0/9`, `10.44.0.0/14`, and `10.4.0.0/14` IP ranges into the source fields. This ensures that traffic from the nodes and pods in each cluster can reach the nodes and pods in the other.
![Configured GKE cluster firewall rule for Consul admin partitions](/img/admin-partitions/consul-admin-partitions-gke-firewall-rule.png)
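The same rule can also be created with the `gcloud` CLI; a sketch, using the example node tags and source ranges above (the rule name `consul-cross-cluster` and the `default` network are assumptions for illustration):

```shell-session
$ gcloud compute firewall-rules create consul-cross-cluster \
    --network default \
    --allow tcp,udp,icmp \
    --source-ranges "10.128.0.0/9,10.44.0.0/14,10.4.0.0/14" \
    --target-tags "gke-cluster-1-7b43116f-node,gke-cluster-2-48d3bee6-node"
```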
1. Create the license secret in each cluster. For example:
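A sketch using `kubectl`, assuming your Consul Enterprise license file is saved locally as `consul.hclic` (the secret and key names are illustrative and must match what your Helm values reference):

```shell-session
$ kubectl create secret generic license --from-file=key=./consul.hclic
```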
1. After the server starts, get the external IP address of the partition service so that it can be added to the client configuration. The partition service is a `LoadBalancer` type. The IP address is where clients across your partitions will communicate with the servers in this cluster.
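For example, list the services and note the `EXTERNAL-IP` of the `LoadBalancer`-type partition service (the exact service name depends on your Helm release):

```shell-session
$ kubectl get services
```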
1. Create the workload configuration for client nodes in your cluster. Create a configuration for each admin partition. In the following example, the external IP address from the previous step has been applied:
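A minimal sketch of the Helm values for a client-only cluster that joins a `clients` partition; key names follow the consul-k8s Helm chart, TLS and ACL settings are omitted for brevity, and the image tag and IP placeholder are illustrative:

```yaml
global:
  enabled: false
  image: hashicorp/consul-enterprise:1.11.0-ent # illustrative image tag
  adminPartitions:
    enabled: true
    name: "clients" # the admin partition these clients join
server:
  enabled: false # servers run in the other cluster
client:
  enabled: true
externalServers:
  enabled: true
  hosts: ["<EXTERNAL_IP>"] # LoadBalancer IP from the previous step
```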
You can create and manage admin partitions through the CLI. Refer to the [admin partition CLI documentation](/commands/admin-partition) for details.
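For example, creating a partition from the command line looks roughly like the following sketch (verify the exact subcommand and flags against the linked reference for your Consul version):

```shell-session
$ consul partition create -name "clients"
```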
* Gossip between nodes in different admin partitions must be constrained. You can accomplish this through the use of [network segments](network-segments).