> 1. restore snapshot in dst cluster.
> 2. run `-upgradeNamespace` in dst cluster. (no need if you don't want
> namespace mapping feature)
> 3. CREATE TABLE ***, COLUMN_ENCODED_BYTES=NONE;
> 4. Enjoy your queries.
>
>
> Cheers!
>
>
> ------
>
> Best regards,
> R.C
>
>
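In concrete terms, the quoted steps might look like the following (ZooKeeper quorum, snapshot name, table name, and schema are placeholders I made up; the exact `-upgradeNamespace` invocation depends on your Phoenix distribution, so it is only sketched as a comment):

    # step 1: on the dst cluster, materialize the copied snapshot
    hbase shell
    > clone_snapshot 'demo_snap', 'DEMO_TABLE'
    > exit

    # step 2 (optional): run the -upgradeNamespace step with your
    # Phoenix tooling if you want namespace mapping; invocation omitted

    # step 3: recreate the table definition without column encoding,
    # so Phoenix reads the raw column qualifiers from the copied HFiles
    sqlline.py dst-zk:2181
    > CREATE TABLE DEMO_TABLE (id VARCHAR PRIMARY KEY, val VARCHAR)
    >   COLUMN_ENCODED_BYTES=NONE;

COLUMN_ENCODED_BYTES=NONE matters because a table created with the default column encoding (the default since Phoenix 4.10) would not match qualifiers written by an older cluster.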
To: dev@phoenix.apache.org
Subject: Re: About mapping a phoenix table
Hello Reid,
I don't know if this fits your use case, but there is a way of copying data
from a Phoenix table to another Phoenix table in another cluster, if the data is
not yet present in either table.
We can use the fact that Phoenix stores its metadata using HBase tables.
Therefore, by enabling replication on those metadata tables along with the data
table, both the table definition and its rows get shipped to the other cluster.
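If I read that suggestion right, a minimal sketch would be to set up an HBase replication peer and enable replication on both the Phoenix metadata table and the data table (peer id, quorum, and table name below are placeholders):

    # on the source cluster's hbase shell
    add_peer '1', CLUSTER_KEY => 'dst-zk1,dst-zk2,dst-zk3:2181:/hbase'
    enable_table_replication 'SYSTEM.CATALOG'   # Phoenix metadata
    enable_table_replication 'DEMO_TABLE'       # the data itself

Keep in mind that HBase replication only ships edits written after it is enabled, which is why this only works when the data is not yet present in either table.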
Hi Reid,
I'll throw my +1 onto Anil's Approach #1. I followed this path recently to
migrate all of our production data. Migrating Phoenix metadata by creating
tables manually on the destination is a little clunky, but HBase Snapshots
are quite easy to work with.
Good luck,
Nick
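For reference, the snapshot path Nick and Anil describe is plain HBase tooling (table and snapshot names below are placeholders):

    # on the source cluster: take a snapshot
    hbase shell
    > snapshot 'DEMO_TABLE', 'demo_snap'
    > exit

    # ship the snapshot to the destination cluster's HBase root dir
    hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
      -snapshot demo_snap \
      -copy-to hdfs://dst-namenode:8020/hbase \
      -mappers 8

    # on the destination cluster: materialize it as a table
    hbase shell
    > clone_snapshot 'demo_snap', 'DEMO_TABLE'

The "clunky" part is what follows: re-running the matching CREATE TABLE statement in Phoenix on the destination so the copied HBase table becomes visible there.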
Hey Reid,
AFAIK, there is no official Phoenix tool to copy tables between clusters. IMO,
it would be great to have one.
In our case, the source and destination clusters are both running Phoenix 4.7.
IMO, a copy between 4.7 and 4.14 might run into version incompatibilities, so
you might want to verify compatibility before relying on it; one workaround is
sketched below.
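One way to sidestep version incompatibilities entirely (my own suggestion, not something from this thread) is to copy through a plain-text format: dump the table with sqlline, then re-import it with Phoenix's CsvBulkLoadTool. Table name, paths, and quorums below are placeholders, and the recorded file needs light cleanup before loading (sqlline also records the statement and column headers):

    # source cluster: dump to CSV via sqlline
    sqlline.py src-zk:2181
    > !outputformat csv
    > !record /tmp/demo_table.csv
    > SELECT * FROM DEMO_TABLE;
    > !record
    > !quit

    # destination cluster: bulk-load the (cleaned) CSV from HDFS
    hadoop jar phoenix-client.jar \
      org.apache.phoenix.mapreduce.CsvBulkLoadTool \
      --table DEMO_TABLE \
      --input /demo_table.csv \
      --zookeeper dst-zk:2181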
Hi team,
I'm trying to transport a Phoenix table between two clusters by copying all
related HBase files on HDFS from cluster A to cluster B.
But after I executed the CreateTableStatement in Phoenix, Phoenix failed to map
those files into the table, and `select *` got nothing.
The questions are:
Is there an official way to copy a Phoenix table between clusters, and why does
the raw file copy (sketched below) not work?
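For context, the attempt described above presumably looked roughly like this (paths, names, and the schema are my guesses):

    # copy the raw HBase files between clusters
    hadoop distcp \
      hdfs://cluster-a:8020/hbase/data/default/DEMO_TABLE \
      hdfs://cluster-b:8020/hbase/data/default/DEMO_TABLE

    # then, in sqlline on cluster B
    > CREATE TABLE DEMO_TABLE (id VARCHAR PRIMARY KEY, val VARCHAR);

Copying HFiles this way bypasses hbase:meta and region assignment, so neither HBase nor Phoenix knows about the copied data, which matches the empty `select *` and is why the replies above steer toward snapshots or replication.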