You can use a Spark job that has access to both clusters. DistCp would also work.
On Fri, Jul 15, 2016 at 9:37 AM, Otmane K.
wrote:
> Hello,
>
> What is the best way to copy Phoenix table from one cluster to another one
> ?
>
> Thank you,
> Otmane
>
>
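For the non-Spark route, the usual approach works at the HBase layer underneath Phoenix. A rough command-line sketch of two options; the table name MY_TABLE, snapshot name, and destination addresses are hypothetical placeholders, and the destination cluster also needs the Phoenix CREATE TABLE / index DDL re-run so the copied data is visible to Phoenix:

```shell
# Option 1: live copy with HBase's built-in CopyTable MapReduce job
# (dst-zk is a placeholder for the destination ZooKeeper quorum)
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --peer.adr=dst-zk:2181:/hbase MY_TABLE

# Option 2: snapshot the table and bulk-copy its HFiles
# (ExportSnapshot does a distcp-style MapReduce copy of the files)
echo "snapshot 'MY_TABLE', 'my_table_snap'" | hbase shell
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot my_table_snap -copy-to hdfs://dst-nn:8020/hbase -mappers 8
```

Option 2 avoids read load on the live table; either way, verify row counts on the destination afterwards.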
>> [1] https://apurtell.s3.amazonaws.com/phoenix/Drillix+Combined+Operational+%26+Analytical+SQL+at+Scale.pdf
>> [2] https://phoenix.apache.org/paged.html
>>
>> On Mon, Apr 18, 2016 at 2:18 PM, Li Gao <g...@marinsoftware.com> wrote:
>>
>>> Hi James,
>>>
>>> Thanks for the quick reply. It is helpful, but I am not sure it can solve
>>> the issue we have. Let me state the use case in another way to make it
>>> more
>
> https://phoenix.apache.org/update_statistics.html
>
> Thanks,
> James
>
> On Mon, Apr 18, 2016 at 2:08 PM, Li Gao <g...@marinsoftware.com> wrote:
>
>> Hi,
>>
>> In Phoenix is it possible to query the data by region splits? i.e. if
>> Table A has 10 regions on the cluster, how can I issue 10 concurrent
>> queries to Table A so that each query covers exactly 1 region of the
>> table?
Hi,
In Phoenix is it possible to query the data by region splits? i.e. if Table
A has 10 regions on the cluster, how can I issue 10 concurrent queries to
Table A so that each query covers exactly 1 region of the table? This is
helpful for us to split the queries across multiple processors.
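As far as I know there is no built-in "query by region" option, but it can be approximated: fetch the region boundary keys for the backing HBase table (e.g. via the HBase client API) and turn each region's [start, end) key range into a row-key predicate, then run the resulting queries concurrently over JDBC. A minimal sketch of the predicate-building step, assuming a VARCHAR leading primary-key column; the column name ID and the helper itself are hypothetical:

```python
def region_where_clauses(boundaries, pk_col="ID"):
    """Build one WHERE clause per region from sorted region start keys.

    `boundaries` holds the region start keys in sorted order, as an HBase
    client would report them; the first region's start key is empty, and the
    last region has an open-ended upper bound.
    """
    clauses = []
    for i, start in enumerate(boundaries):
        end = boundaries[i + 1] if i + 1 < len(boundaries) else None
        parts = []
        if start:  # the first region's start key is empty: no lower bound
            parts.append(f"{pk_col} >= '{start}'")
        if end:    # the last region has no upper bound
            parts.append(f"{pk_col} < '{end}'")
        clauses.append(" AND ".join(parts) if parts else "TRUE")
    return clauses

# Example: 3 regions split at 'g' and 'p'
print(region_where_clauses(["", "g", "p"]))  # one WHERE clause per region
```

Each clause would then be appended to `SELECT ... FROM TABLE_A WHERE ...` and executed on its own connection/thread. Note the ranges can drift if regions split while the queries run.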
Hi Community,
I want to understand and confirm whether it is expected behavior that a
long-running index creation will capture all in-flight new rows written to
the data table while the index creation is still in progress.
i.e. when I issue CREATE INDEX there are only 1 million rows
after I issued
Hi community,
Does Phoenix Spark support arbitrary SELECT statements for generating DF or
RDD?
From this reading: https://phoenix.apache.org/phoenix_spark.html I am not
sure how to do that.
Thanks,
Li
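From my reading of that page, phoenix-spark loads a whole table (with column pruning and predicate pushdown when you use .select()/.filter() on the resulting DataFrame) rather than accepting arbitrary SQL. One common workaround is Spark's generic JDBC data source, whose dbtable option accepts a parenthesized subquery, so an arbitrary SELECT can be pushed down to Phoenix. A small sketch of building those options; the helper name and the ZooKeeper quorum default are assumptions:

```python
def phoenix_jdbc_options(select_sql, zk_quorum="localhost:2181"):
    """Options for spark.read.format("jdbc") that wrap an arbitrary SELECT.

    Spark's JDBC source treats a "(...) AS alias" value in `dbtable` as a
    subquery; zk_quorum is an assumed ZooKeeper quorum for the Phoenix
    JDBC URL, which takes the form jdbc:phoenix:<quorum>.
    """
    return {
        "url": f"jdbc:phoenix:{zk_quorum}",
        "driver": "org.apache.phoenix.jdbc.PhoenixDriver",
        "dbtable": f"({select_sql}) AS sub",
    }

opts = phoenix_jdbc_options("SELECT id, total FROM sales WHERE total > 100")
print(opts["dbtable"])  # (SELECT id, total FROM sales WHERE total > 100) AS sub
```

You would then call `spark.read.format("jdbc").options(**opts).load()` with the Phoenix client jar on the classpath. The trade-off versus phoenix-spark is that the JDBC source gives no HBase-aware parallelism unless you also set its partitioning options.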
> There's a reasonably thorough set of examples on both
> DataFrames and RDDs with Phoenix. [1]
>
> Good luck,
>
> Josh
>
> [1]
> https://github.com/apache/phoenix/blob/master/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
>
> On Tue, Dec 15, 2015 at 5:57 PM, Li Gao
Hi Phoenix community,
We are encountering general performance degradation when doing table stats
query over the leading primary key column in a Phoenix table.
Attached are the table schema, the query used, the JMeter results observed
over a 1-hour window, and the Python script that slowly generates data into
the table.
The latest chart of the 2nd query's response time is attached, showing the
slowly growing table over a 6-hour period.
Thanks a lot to James for his suggestion. Just want to share it with the
community in case you encounter a similar situation to ours.
Thanks,
Li
On Fri, Dec 11, 2015 at 7:46 PM, Li Gao
alcite.rel.rules.
>
> For examples, you can look at CalciteIT.java, which contains some basic
> test cases as well as some interesting stuff.
>
>
> Thanks,
> Maryann
>
>
>
> On Thu, Oct 8, 2015 at 2:37 PM, Li Gao <g...@marinsoftware.com> wrote:
>
> run mvn commands.
>
> On Mon, Oct 5, 2015 at 6:43 PM, Li Gao <g...@marinsoftware.com> wrote:
>
>> Hi Maryann,
>>
>> This looks great. Thanks for pointing me to the right branch! For some
>> reason I am getting the following errors when I do mvn package
>
> optimization examples in
> our test file CalciteIT.java. You can also go
> http://www.slideshare.net/HBaseCon/ecosystem-session-2-49044349 for more
> information.
>
>
> Thanks,
> Maryann
>
>
>
>
> On Mon, Oct 5, 2015 at 2:08 PM, Li Gao <g...@marinsoftware.com> wrote: