---
layout: "docs"
page_title: "Semaphore"
sidebar_current: "docs-guides-semaphore"
description: |-
  This guide demonstrates how to implement a distributed semaphore using the Consul KV store.
---
# Semaphore
A distributed semaphore can be useful when you want to coordinate many services while
restricting access to certain resources. In this guide we will focus on using Consul's support for
sessions and Consul KV to build a distributed semaphore. Note that there are a number of ways
a semaphore can be built; we will not cover all of the possible methods in this guide.

To complete this guide successfully, you should be familiar with
[Consul KV](/docs/agent/kv.html) and Consul [sessions](/docs/internals/sessions.html).

~> If you only need mutual exclusion or leader election,
[this guide](/docs/guides/leader-election.html)
provides a simpler algorithm that can be used instead.

## Contending Nodes in the Semaphore

Let's imagine we have a set of nodes that are attempting to acquire a slot in the
semaphore. All participating nodes should agree on three decisions:

- the prefix in the KV store used to coordinate.
- a single key to use as a lock.
- a limit on the number of slot holders.
2015-01-20 02:43:24 +00:00
### Session
2015-01-20 02:43:24 +00:00
The first step is for each contending node to create a session. Sessions allow us to build a system that
can gracefully handle failures.
2015-01-20 02:43:24 +00:00
This is done using the
[Session HTTP API](/api/session.html#session_create).

```sh
curl -X PUT -d '{"Name": "db-semaphore"}' \
    http://localhost:8500/v1/session/create
```

This will return a JSON object containing the session ID.
2015-01-20 02:43:24 +00:00
```json
{
"ID": "4ca8e74b-6350-7587-addf-a18084928f3c"
}
```
-> **Note:** Sessions by default only make use of the gossip failure detector. That is, the session is considered held by a node as long as the default Serf health check has not declared the node unhealthy. Additional checks can be specified at session creation if desired.
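
As a hedged sketch, a session could instead be created with an explicit check list and a TTL that must be periodically renewed. Here `serfHealth` is Consul's built-in gossip health check, while the 15-second TTL is purely illustrative:

```sh
# Create a session that is invalidated if the node's gossip health check
# fails OR if the session is not renewed within the TTL.
curl -X PUT -d '{
  "Name": "db-semaphore",
  "Checks": ["serfHealth"],
  "TTL": "15s"
}' http://localhost:8500/v1/session/create
```

A session created this way must be renewed via `PUT /v1/session/renew/<session>` before its TTL expires.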

### KV Entry for Node Locks

Next, we create a lock contender entry. Each contender creates a KV entry that is tied
to a session. This is done so that if a contender is holding a slot and fails, its session
is detached from the key, which can then be detected by the other contenders.
2015-01-20 02:43:24 +00:00
Create the contender key by doing an `acquire` on `<prefix>/<session>` via `PUT`.
2015-01-20 02:43:24 +00:00

```sh
curl -X PUT -d <body> \
    http://localhost:8500/v1/kv/<prefix>/<session>?acquire=<session>
```

`body` can be used to associate a meaningful value with the contender, such as its node's name.
This body is opaque to Consul but can be useful for human operators.

The `<session>` value is the ID returned by the call to
[`/v1/session/create`](/api/session.html#session_create).

The call will either return `true` or `false`. If `true`, the contender entry has been
created. If `false`, the contender entry was not created; this likely indicates
a session invalidation.
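
As a concrete sketch, using the session ID returned earlier (the node name and prefix here are illustrative):

```sh
# Create our contender entry, tied to our session. The value ("node-1")
# is opaque to Consul and only identifies this contender to operators.
session="4ca8e74b-6350-7587-addf-a18084928f3c"
prefix="service/db-semaphore"

curl -X PUT -d 'node-1' \
    "http://localhost:8500/v1/kv/${prefix}/${session}?acquire=${session}"
```

A response of `true` means the contender entry was created and is attached to the session.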

### Single Key for Coordination

The next step is to create a single key to coordinate which holders are currently
reserving a slot. A good choice for this lock key is simply `<prefix>/.lock`. We will
refer to this special coordinating key as `<lock>`.

```sh
curl -X PUT -d <body> http://localhost:8500/v1/kv/<lock>?cas=0
```
2015-01-20 02:43:24 +00:00
Since the lock is being created, a `cas` index of 0 is used so that the key is only put if it does not exist.
The `body` of the request should contain both the intended slot limit for the semaphore and the session IDs
of the current holders (initially only that of the creator). A simple JSON body like the following works.
2015-01-20 02:43:24 +00:00
```json
{
  "Limit": 2,
  "Holders": [
    "<session>"
  ]
}
```

## Semaphore Management

The current state of the semaphore is read by doing a `GET` on the entire `<prefix>`.

```sh
curl http://localhost:8500/v1/kv/<prefix>?recurse
```

Within the list of the entries, we should find two keys: the `<lock>` and the
contender key `<prefix>/<session>`.

```json
[
  {
    "LockIndex": 0,
    "Key": "<lock>",
    "Flags": 0,
    "Value": "eyJMaW1pdCI6IDIsIkhvbGRlcnMiOlsiPHNlc3Npb24+Il19",
    "Session": "",
    "CreateIndex": 898,
    "ModifyIndex": 901
  },
  {
    "LockIndex": 1,
    "Key": "<prefix>/<session>",
    "Flags": 0,
    "Value": null,
    "Session": "<session>",
    "CreateIndex": 897,
    "ModifyIndex": 897
  }
]
```

Note that the `Value` we embedded into `<lock>` is Base64 encoded when returned by the API.
When the `<lock>` is read and its `Value` is decoded, we can verify that the `Limit` agrees with the `Holders` count.
This is used to detect a potential conflict. The next step is to determine which of the current
slot holders are still alive. As part of the results of the `GET`, we also have all the contender
entries. By scanning those entries, we create a set of all the `Session` values. Any of the
`Holders` that are not in that set are pruned. In effect, we are creating a set of live contenders
based on the list results and doing a set difference with the `Holders` to detect and prune
any potentially failed holders. In this example `<session>` is present in `Holders` and
is attached to the key `<prefix>/<session>`, so no pruning is required.
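
For example, the `Value` of the `<lock>` entry in the listing above can be decoded locally to recover the lock body:

```sh
# Decode the Base64-encoded Value field of the <lock> entry.
echo "eyJMaW1pdCI6IDIsIkhvbGRlcnMiOlsiPHNlc3Npb24+Il19" | base64 --decode
# Prints: {"Limit": 2,"Holders":["<session>"]}
```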

If the number of holders after pruning is less than the limit, a contender attempts acquisition
by adding its own session to the `Holders` list and doing a Check-And-Set update of the `<lock>`.
This performs an optimistic update.

This is done with:

```sh
curl -X PUT -d <Updated Lock Body> \
    http://localhost:8500/v1/kv/<lock>?cas=<lock-modify-index>
```

`lock-modify-index` is the latest `ModifyIndex` value known for `<lock>`, 901 in this example.
If this request succeeds with `true`, the contender now holds a slot in the semaphore.
If it fails with `false`, there was likely a race with another contender to acquire the slot.

To re-attempt the acquisition, we watch for changes on `<prefix>`. This is because a slot
may be released, a node may fail, etc. Watching for changes is done via a blocking query
against `/kv/<prefix>?recurse`.
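
A blocking query passes the `X-Consul-Index` from the previous response as the `index` query parameter, so the request returns only when something under the prefix changes (or the wait time elapses). A sketch, with an illustrative index value:

```sh
# Long-poll for changes under the prefix. 901 is the X-Consul-Index
# observed on the previous read; the request blocks until the data
# changes or 30 seconds pass.
last_index=901
curl "http://localhost:8500/v1/kv/<prefix>?recurse&index=${last_index}&wait=30s"
```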

Slot holders **must** continuously watch for changes to `<prefix>` since their slot can be
released by an operator or automatically released due to a false positive in the failure detector.
On changes to `<prefix>`, the lock's `Holders` list must be re-checked to ensure the slot
is still held. Additionally, if the watch fails to connect, the slot should be considered lost.

This semaphore system is purely *advisory*. Therefore, it is up to the client to verify
that a slot is held before (and during) the execution of some critical operation.

Lastly, if a slot holder ever wishes to release its slot voluntarily, it should do so by performing a
Check-And-Set operation against `<lock>` to remove its session from the `Holders` list.
Once that is done, both its contender key `<prefix>/<session>` and its session should be deleted.
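
A hedged sketch of the full release sequence (the session ID, prefix, `cas` index, and file name are all illustrative):

```sh
session="4ca8e74b-6350-7587-addf-a18084928f3c"
prefix="service/db-semaphore"

# 1. Check-And-Set the lock with our session removed from Holders;
#    updated-lock.json holds the lock body minus our session.
curl -X PUT -d @updated-lock.json \
    "http://localhost:8500/v1/kv/${prefix}/.lock?cas=901"

# 2. Delete our contender key.
curl -X DELETE "http://localhost:8500/v1/kv/${prefix}/${session}"

# 3. Destroy the session.
curl -X PUT "http://localhost:8500/v1/session/destroy/${session}"
```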

## Summary

In this guide we created a distributed semaphore using Consul KV and Consul sessions. We also learned how to manage the newly created semaphore.