I have updated my JIRA with new instructions:
https://issues.apache.org/jira/browse/PHOENIX-2834.
Please let me know if you are able to build and use it with CDH 5.7.
Thanks,
Ankur Jain
From: Andrew Purtell
Yes a stock client should work with a server modified for CDH assuming both
client and server versions are within the bounds specified by the backwards
compatibility policy (https://phoenix.apache.org/upgrading.html)
"Phoenix maintains backward compatibility across at least two minor releases
Pick the tree for your CDH 5.x version and it should all work. We are missing
trees for X=6 and X=7 and I will aim to get to that soon.
I did not test beyond ensuring all Phoenix unit and integration tests passed.
> On Jun 9, 2016, at 7:55 PM, Benjamin Kim wrote:
> Is Cloudera's HBase 1.2.0-cdh5.7.0 that different from Apache HBase 1.2.0?
Yes
The Cloudera HBase in 5.6, 5.5, 5.4, ... is likewise quite different from
Apache HBase in its coprocessor and RPC internal extension APIs.
We have made some ports of Apache Phoenix releases to CDH here:
This interests me too. I asked Cloudera in their community forums a while back
but got no answer on this. I hope they don't leave us out in the cold. I also
tried building it before with the instructions at
https://issues.apache.org/jira/browse/PHOENIX-2834. I could get it to build,
but I
FWIW, I've also reproduced this with Groovy 2.4.3, Oracle Java 1.7.0_79
and Apache Phoenix 4.8.0-SNAPSHOT locally.
Will dig some more.
Brian Jeltema wrote:
Groovy 2.4.3
JDK 1.8
On Jun 8, 2016, at 11:26 AM, Josh Elser wrote:
Thanks for
Koert,
Apache Phoenix goes through a lot of work to provide multiple versions
of Phoenix for various versions of Apache HBase (0.98, 1.1, and 1.2
presently). The builds for each of these branches are tested against
those specific versions of HBase, so I doubt that there are issues
between
Thanks to some great work over at Amazon, there's now support for Phoenix
4.7 on top of HBase 1.2 in Amazon EMR. Check it out and give it a spin.
Detailed step-by-step instructions available here:
http://docs.aws.amazon.com/ElasticMapReduce/latest/ReleaseGuide/emr-phoenix.html
Thanks,
James
Hi, Josh:
Thanks for the answer. Do you know the underlying difference between the
following two ways of loading a DataFrame (using the Data Source API, versus
loading a DataFrame directly using a Configuration object)?
Is there a Java interface to use the functionality of
Hi JM,
Are you looking toward replication to support DR? If so, you can rely on
HBase-level replication with a few gotchas and some operational hurdles:
- When upgrading Phoenix versions, upgrade the server-side first for both
the primary and secondary cluster. You can do a rolling upgrade and
Hi Jean,
Phoenix does not support replication at present. (It would be super awesome
if it did.) So, if you want to replicate Phoenix tables, you will need to set
up replication of all the underlying HBase tables for the corresponding
Phoenix tables.
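To make that concrete, here is a minimal sketch of what the HBase-level setup might look like from the HBase shell. The peer ID, cluster key, and table name are placeholders, and this assumes the Phoenix default column family '0'; any Phoenix index tables would need the same treatment:

```
# On the primary cluster's HBase shell: register the DR cluster as a peer
add_peer '1', CLUSTER_KEY => "dr-zk1,dr-zk2,dr-zk3:2181:/hbase"

# Enable replication on the data table (repeat for each index table)
disable 'MY_TABLE'
alter 'MY_TABLE', {NAME => '0', REPLICATION_SCOPE => 1}
enable 'MY_TABLE'
```

Note this replicates raw HBase cells only; Phoenix metadata and index maintenance on the destination are among the gotchas mentioned above.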
I think you will need to replicate all the
Hi,
When Phoenix is used, what is the recommended way to do replication?
Replication acts as a client on the second cluster, so should we simply
configure Phoenix on both clusters, and on the destination it will take care
of updating the index tables, etc.? Or should all the tables on the
destination
Hi Xindian,
The phoenix-spark integration is based on the Phoenix MapReduce layer,
which doesn't support aggregate functions. However, as you mentioned, both
filtering and pruning predicates are pushed down to Phoenix. With an RDD or
DataFrame loaded, all of Spark's various aggregation methods
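For reference, the two load paths being compared might look like this in Scala against Phoenix 4.x and Spark 1.x (table name, columns, and ZooKeeper quorum are placeholders); both produce an ordinary DataFrame, with filtering and pruning pushed to Phoenix but aggregation done in Spark:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.phoenix.spark._  // implicits adding phoenixTableAsDataFrame

val sc = new SparkContext(new SparkConf().setAppName("phoenix-demo"))
val sqlContext = new SQLContext(sc)

// 1) Data Source API
val df1 = sqlContext.read
  .format("org.apache.phoenix.spark")
  .options(Map("table" -> "MY_TABLE", "zkUrl" -> "zkhost:2181"))
  .load()

// 2) Load directly using a Hadoop Configuration object
val conf = new Configuration()
conf.set("hbase.zookeeper.quorum", "zkhost:2181")
val df2 = sqlContext.phoenixTableAsDataFrame(
  "MY_TABLE", Seq("ID", "COL1"), conf = conf)

// The filter is pushed down to Phoenix; the groupBy/count runs in Spark
df1.filter(df1("COL1") > 100).groupBy("ID").count().show()
```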