[
https://issues.apache.org/jira/browse/PHOENIX-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Geoffrey Jacoby updated PHOENIX-5521:
-------------------------------------
Description:
An HBase coprocessor Endpoint hook that takes in a request from a remote
cluster. The request contains both the WALEdit's data and the WALKey's
annotated metadata, which tells the receiving cluster which tenant_id, logical
table name, and timestamp the data is associated with.
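As a rough illustration, here is a minimal sketch of such an Endpoint, assuming the HBase 2.x coprocessor API and a hypothetical generated protobuf service (the ReplicationSinkProtos names below are illustrative, not an existing schema):
{code:java}
import java.util.Collection;
import java.util.Collections;

import com.google.protobuf.RpcCallback;
import com.google.protobuf.RpcController;
import com.google.protobuf.Service;

import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

// Hypothetical generated service: ReplicationSinkProtos.ReplicationSinkService
public class PhoenixReplicationSinkEndpoint
    extends ReplicationSinkProtos.ReplicationSinkService
    implements RegionCoprocessor {

  private RegionCoprocessorEnvironment env;

  @Override
  public void start(CoprocessorEnvironment e) {
    // Keeps a handle on the region/cluster connection, needed later for the
    // Phoenix write path.
    this.env = (RegionCoprocessorEnvironment) e;
  }

  @Override
  public Collection<Service> getServices() {
    return Collections.singleton(this);
  }

  @Override
  public void replicate(RpcController controller,
      ReplicationSinkProtos.ReplicateRequest request,
      RpcCallback<ReplicationSinkProtos.ReplicateResponse> done) {
    for (ReplicationSinkProtos.WALEntry entry : request.getEntryList()) {
      // WALKey annotations carried alongside the WALEdit's cells
      String tenantId = entry.getTenantId();
      String logicalTableName = entry.getLogicalTableName();
      long timestamp = entry.getTimestamp();
      // TODO: regenerate Phoenix data + index mutations from entry.getCellList()
      // and write them through the normal Phoenix write path (sketched below).
    }
    done.run(ReplicationSinkProtos.ReplicateResponse.getDefaultInstance());
  }
}
{code}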
Ideally the API's message format should be configurable / pluggable, and could
be either a protobuf or an Avro schema similar to the WALEdit-like one
described by PHOENIX-5443. Endpoints in HBase are structured to work with
protobufs, so some conversion may be necessary in an Avro-compatible version.
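One way the pluggability could look, as a sketch only (all names here are hypothetical): a small decoder interface selected by configuration, where an Avro-backed implementation performs the conversion noted above before entries reach the replay logic.
{code:java}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.Cell;

// Hypothetical SPI; an implementation class could be named in a config key
// and instantiated reflectively by the endpoint.
public interface ReplicationMessageDecoder {

  /** Format-neutral view of one replicated WAL entry. */
  final class DecodedEntry {
    public final String tenantId;
    public final String logicalTableName;
    public final long timestamp;
    public final List<Cell> cells;

    public DecodedEntry(String tenantId, String logicalTableName,
        long timestamp, List<Cell> cells) {
      this.tenantId = tenantId;
      this.logicalTableName = logicalTableName;
      this.timestamp = timestamp;
      this.cells = cells;
    }
  }

  /**
   * Decodes one serialized payload (protobuf, Avro, ...). An Avro
   * implementation would do its conversion here, since the Endpoint RPC
   * surface itself remains protobuf.
   */
  List<DecodedEntry> decode(byte[] payload) throws IOException;
}
{code}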
Future work may also extend this to any conforming schema given by a schema
service such as the one in PHOENIX-5443, which would be useful in allowing
PHOENIX-5442's CDC service to be used as a backup / migration tool.
The endpoint hook would take the metadata + data and regenerate a complete set
of Phoenix mutations, both data and index, just as the Phoenix client did for
the original SQL statement that generated the source-side edits. These
mutations would be written to the remote cluster by the normal Phoenix write
path.
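For illustration, a sketch of that replay step, assuming the decoded metadata and values can be mapped back to an UPSERT (the table shape and value mapping below are placeholders). Opening the connection with the TenantId and CurrentSCN properties asks the Phoenix client to generate the data and index mutations at the original tenant and timestamp, subject to the usual CurrentSCN restrictions:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Properties;

import org.apache.phoenix.util.PhoenixRuntime;

public final class PhoenixReplaySketch {

  public static void replay(String jdbcUrl, String tenantId,
      String logicalTableName, long timestamp,
      Object[] rowValues /* hypothetical: values recovered from the cells */)
      throws Exception {
    Properties props = new Properties();
    props.setProperty(PhoenixRuntime.TENANT_ID_ATTRIB, tenantId);
    props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB,
        Long.toString(timestamp));

    try (Connection conn = DriverManager.getConnection(jdbcUrl, props)) {
      // Placeholder two-column UPSERT; a real sink would derive the statement
      // from the table's Phoenix metadata.
      try (PreparedStatement stmt = conn.prepareStatement(
          "UPSERT INTO " + logicalTableName + " VALUES (?, ?)")) {
        stmt.setObject(1, rowValues[0]);
        stmt.setObject(2, rowValues[1]);
        stmt.executeUpdate();
      }
      // commit() flushes the batch: the client builds both data and index
      // mutations, exactly as it did for the original SQL statement.
      conn.commit();
    }
  }
}
{code}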
> Phoenix-level HBase Replication sink (Endpoint coproc)
> ------------------------------------------------------
>
> Key: PHOENIX-5521
> URL: https://issues.apache.org/jira/browse/PHOENIX-5521
> Project: Phoenix
> Issue Type: Sub-task
> Reporter: Geoffrey Jacoby
> Priority: Major
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)