open-consul/proto/private/pbpeerstream/peerstream.proto

syntax = "proto3";
package hashicorp.consul.internal.peerstream;
import "annotations/ratelimit/ratelimit.proto";
import "google/protobuf/any.proto";
import "private/pbpeering/peering.proto";
import "private/pbservice/node.proto";
// TODO(peering): Handle this some other way
import "private/pbstatus/status.proto";
// TODO(peering): comments
// TODO(peering): also duplicate the pbservice, some pbpeering, and ca stuff.

service PeerStreamService {
  // StreamResources opens an event stream for resources to share between peers, such as services.
  // Events are streamed as they happen.
  // buf:lint:ignore RPC_REQUEST_STANDARD_NAME
  // buf:lint:ignore RPC_RESPONSE_STANDARD_NAME
  // buf:lint:ignore RPC_REQUEST_RESPONSE_UNIQUE
  rpc StreamResources(stream ReplicationMessage) returns (stream ReplicationMessage) {
    option (hashicorp.consul.internal.ratelimit.spec) = {
      operation_type: OPERATION_TYPE_READ,
      operation_category: OPERATION_CATEGORY_PEER_STREAM
    };
  }

  // ExchangeSecret is a unary RPC for exchanging the one-time establishment secret
  // for a long-lived stream secret.
  rpc ExchangeSecret(ExchangeSecretRequest) returns (ExchangeSecretResponse) {
    option (hashicorp.consul.internal.ratelimit.spec) = {
      operation_type: OPERATION_TYPE_WRITE,
      operation_category: OPERATION_CATEGORY_PEER_STREAM
    };
  }
}
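
// Illustrative sketch, not part of the schema: the expected call order, inferred from
// the RPC and field comments in this file. The dialing peer first trades its one-time
// establishment secret for a long-lived stream secret, then opens the replication
// stream and identifies itself with an Open payload carrying that stream secret.
// All values below are placeholders.
//
//   ExchangeSecret  request:  { PeerID: "<peering-id>" EstablishmentSecret: "<one-time-secret>" }
//                   response: { StreamSecret: "<stream-secret>" }
//   StreamResources first client message:
//                   { open: { PeerID: "<peering-id>" StreamSecretID: "<stream-secret>" } }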

message ReplicationMessage {
  oneof Payload {
    Open open = 1;
    Request request = 2;
    Response response = 3;
    Terminated terminated = 4;
    Heartbeat heartbeat = 5;
  }

  // Open is the initial message sent by a dialing peer to establish the peering stream.
  message Open {
    // An identifier for the peer making the request.
    // This identifier is provisioned by the serving peer prior to the request from the dialing peer.
    string PeerID = 1;

    // StreamSecretID contains the long-lived secret from stream authn/authz.
    string StreamSecretID = 2;

    // Remote contains metadata about the remote peer.
    hashicorp.consul.internal.peering.RemoteInfo Remote = 3;
  }

  // A Request requests to subscribe to a resource of a given type.
  message Request {
    // An identifier for the peer making the request.
    // This identifier is provisioned by the serving peer prior to the request from the dialing peer.
    string PeerID = 1;

    // ResponseNonce corresponding to that of the response being ACKed or NACKed.
    // Initial subscription requests will have an empty nonce.
    // The nonce is generated and incremented by the exporting peer.
    // TODO
    string ResponseNonce = 2;

    // The type URL for the resource being requested or ACK/NACKed.
    string ResourceURL = 3;

    // The error if the previous response was not applied successfully.
    // This field is empty in the first subscription request.
    status.Status Error = 5;
  }
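
  // Illustrative sketch, not part of the schema: how the subscription and ACK/NACK
  // requests are expected to look, based on the field comments above (the nonce
  // handshake resembles the xDS pattern). Values are placeholders, and the Error
  // fields assume the status message mirrors google.rpc.Status.
  //
  //   initial subscription: { PeerID: "<peering-id>" ResourceURL: "<type-url>" }
  //   ACK of a response:    { PeerID: "<peering-id>" ResourceURL: "<type-url>" ResponseNonce: "1" }
  //   NACK of a response:   { PeerID: "<peering-id>" ResourceURL: "<type-url>" ResponseNonce: "2"
  //                           Error: { code: 3 message: "<why the response was not applied>" } }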

  // A Response contains resources corresponding to a subscription request.
  message Response {
    // Nonce identifying a response in a stream.
    string Nonce = 1;

    // The type URL of the resource being returned.
    string ResourceURL = 2;

    // An identifier for the resource being returned.
    // This could be the SPIFFE ID of the service.
    string ResourceID = 3;

    // The resource being returned.
    google.protobuf.Any Resource = 4;

    // REQUIRED. The operation to be performed in relation to the resource.
    Operation operation = 5;
  }
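
  // Illustrative sketch, not part of the schema: a Response carrying an exported
  // service wrapped in google.protobuf.Any. The type URL and identifiers here are
  // placeholders, not the exact strings used by the implementation.
  //
  //   response: {
  //     Nonce: "3"
  //     ResourceURL: "<exported-service type URL>"
  //     ResourceID: "<service identifier, e.g. a SPIFFE ID>"
  //     Resource: { [<exported-service type URL>]: { Nodes: [ ... ] } }
  //     operation: OPERATION_UPSERT
  //   }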

  // Terminated is sent when a peering is deleted locally.
  // This message signals to the peer that they should clean up their local state about the peering.
  message Terminated {}

  // Heartbeat is sent to verify that the connection is still active.
  message Heartbeat {}
}

// Operation enumerates supported operations for replicated resources.
enum Operation {
  OPERATION_UNSPECIFIED = 0;

  // UPSERT represents a create or update event.
  OPERATION_UPSERT = 1;
}

// LeaderAddress is sent when the peering service runs on a Consul node
// that is not a leader. The node either lost leadership or was never a leader.
message LeaderAddress {
  // address is a best-effort ip:port hint at the cluster leader's address.
  string address = 1;
}

// ExportedService is one of the types of data returned via peer stream replication.
message ExportedService {
  repeated hashicorp.consul.internal.service.CheckServiceNode Nodes = 1;
}

// ExportedServiceList is one of the types of data returned via peer stream replication.
message ExportedServiceList {
  // The identifiers for the services being exported.
  repeated string Services = 1;
}
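
// Illustrative sketch, not part of the schema: the two shapes of exported-service data
// as they might appear once unpacked from Response.Resource above. Field values are
// placeholders, and the identifier format used in Services is not specified here.
//
//   ExportedService     { Nodes: [ { Node: { ... } Service: { ... } Checks: [ ... ] } ] }
//   ExportedServiceList { Services: ["<service identifier>", "<service identifier>"] }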

message ExchangeSecretRequest {
  // PeerID is the ID of the peering, as determined by the cluster that generated the
  // peering token.
  string PeerID = 1;

  // EstablishmentSecret is the one-time-use secret encoded in the received peering token.
  string EstablishmentSecret = 2;
}

message ExchangeSecretResponse {
  // StreamSecret is the long-lived secret to be used for authentication with the
  // peering stream handler.
  string StreamSecret = 1;
}