Due to the incompatibility, you will need to wait for a new release in
the 5.x branch to get 5.x working with HBase 2.0.x.
Phoenix 5.0.0 is also only compatible with HBase 2.0.0 (I found this out
a few months ago), as PHOENIX-4826[1] adds support for HBase 2.0.1 but
is currently unreleased.
This is a heads up regarding a breaking change that is currently in
avatica-go master and will be released as the next major version, 4.0.0.
In Apache Phoenix, string columns set to null or an empty string ("")
are considered to be equivalent. For more details on why this is the
case see [1].
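To illustrate the behaviour (a sketch I have not run against a live cluster; the table and column names here are made up):

```sql
-- In Phoenix, upserting '' into a string column stores NULL:
CREATE TABLE example (id BIGINT NOT NULL PRIMARY KEY, name VARCHAR);
UPSERT INTO example (id, name) VALUES (1, NULL);
UPSERT INTO example (id, name) VALUES (2, '');

-- Both rows should match, since '' and NULL are treated as equivalent:
SELECT id FROM example WHERE name IS NULL;
```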
Can you try enclosing the string in single quotes? (I haven't tried this
myself, as I currently don't have access to my Phoenix test cluster.)
CREATE TABLE TEST (
    a BIGINT NOT NULL DEFAULT 0,
    b CHAR(10) DEFAULT 'abc',
    cf.c INTEGER DEFAULT 1
    CONSTRAINT pk PRIMARY KEY (a ASC, b ASC)
);
on all my machines after this update, however,
I am still baffled as to why the one using Hadoop 2.7.4 jars worked
correctly on some machines but failed on others.
On 28/09/2018 8:24 AM, Francis Chuang wrote:
I tried updating my hbase-phoenix-all-in-one image to use HBase built
with Hadoop 3
HBase should have Hadoop 3 jars.
Re-build HBase using the -Dhadoop.profile=3.0 (I think it is) CLI option.
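For reference, a rebuild along those lines might look like this (a sketch; double-check the profile name against the HBase build documentation for your version):

```shell
# From an HBase source checkout, build against Hadoop 3
# instead of the default Hadoop 2 profile:
mvn clean install -DskipTests -Dhadoop.profile=3.0
```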
On 9/26/18 7:21 AM, Francis Chuang wrote:
Upon further investigation, it appears that this is because
org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab
is on
2.8.5 versions; however, I am not familiar with Java and the Hadoop
project, so I am not sure whether this will introduce issues.
On 26/09/2018 4:44 PM, Francis Chuang wrote:
I wonder if this is because:
- HBase's binary distribution ships with Hadoop 2.7.4 jars.
- Phoenix 5.0.0 has Hadoop
://github.com/apache/incubator-tephra/blob/master/pom.xml#L211
On 26/09/2018 4:03 PM, Francis Chuang wrote:
Hi all,
I am using Phoenix 5.0.0 with HBase 2.0.0. I am seeing errors while
trying to create transactional tables using Phoenix.
I am using my Phoenix + HBase all in one docker image available here:
https://github.com/Boostport/hbase-phoenix-all-in-one
This is the error:
Thanks for taking a look, Jaanai!
Is my method of installing HBase an
hat exists in your classpath?
Is this a compatibility issue with Guava?
It isn't an exception caused by a Guava incompatibility.
Jaanai Zhang
Best regards!
Francis Chuang <mailto:francischu...@apache.org> wrote on Tuesday, 25 September 2018 at 8:25 PM:
It looks like HBase's jars are incompatible.
Jaanai Zhang
Best regards!
Francis Chuang <mailto:francischu...@apache.org> wrote on Tuesday, 25 September 2018 at 8:06 PM:
Hi All,
I recently updated one of my Go apps to use Phoenix 5.0 with HBase
2.0.2. I am using my Phoenix + HBase all in one docker image available
here: https://github.com/Boostport/hbase-phoenix-all-in-one
This is the log/output from the exception:
RuntimeException:
Hi all,
I currently maintain an HBase + Phoenix all-in-one docker image[1]. The
image is currently used to test Phoenix support for the Avatica Go SQL
driver[2]. Judging by the number of pulls on docker hub (10k+), there
are probably other people using it.
The image spins up HBase server
Namespace mapping is something you need to enable on the server (it's
off by default).
See documentation for enabling it here:
http://phoenix.apache.org/namspace_mapping.html
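In practice, enabling it involves setting the namespace-mapping properties in hbase-site.xml on both the server and the client (a sketch based on that documentation; please verify the property names there):

```xml
<!-- hbase-site.xml: enable Phoenix namespace mapping -->
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
<property>
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>
```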
Francis
On 24/05/2018 5:23 AM, Stepan Migunov wrote:
Thank you for the response, Josh!
I got something like
Hey Stepan,
There is a driver called phoenix-sharp
(https://github.com/Azure/hdinsight-phoenix-sharp) from MS Azure. The
project has not been updated for a while though.
Francis
On 22/05/2018 6:16 PM, Stepan Migunov wrote:
Hi,
Is the ODBC driver from Hortonworks the only way to access
I am not familiar with the JDBC driver, but Phoenix uses Avatica[1]
under the hood. The protobuf documentation does state that it's possible
to control the number of rows returned in each response. See
frame_max_size under the FetchRequest[2] message. This may be something
that you can set in
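For context, the relevant part of Avatica's FetchRequest message looks roughly like this (a paraphrased sketch, not a verbatim copy of requests.proto; the field numbers and comments are illustrative, so check the Avatica repo for the authoritative definition):

```protobuf
// Sketch of Avatica's FetchRequest (see requests.proto in the Avatica repo)
message FetchRequest {
  string connection_id = 1;  // identifies the connection
  uint32 statement_id = 2;   // identifies the statement being fetched
  uint64 offset = 3;         // row offset to fetch from
  int32 frame_max_size = 5;  // max rows per returned frame
}
```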
information on how to
report problems, and to get involved, visit the project website at
https://calcite.apache.org/avatica
Francis Chuang, on behalf of the Apache Calcite Team