Re: COALESCE Function Not Working With NULL Values

2019-05-14 Thread Francis Chuang
Due to the incompatibility, you will need to wait for a new release in 
the 5.x branch to get 5.x working with HBase 2.0.x.


Phoenix 5.0.0 is also only compatible with HBase 2.0.0 (I found this out 
a few months ago). PHOENIX-4826 [1] adds support for HBase 2.0.1, but it 
is currently unreleased.


My suggestion would be to wait for the 5.1.0 release if you're able to. 
If you really need to deploy something right now, I'd suggest using the 
latest version of the 4.x.x branch (4.14.1) and HBase 1.4.x.


Francis

[1] https://issues.apache.org/jira/browse/PHOENIX-4826

On 15/05/2019 2:50 pm, Jestan Nirojan wrote:

Hi Jaanai,

Sorry, I could not understand much from 
https://issues.apache.org/jira/browse/PHOENIX-5268
Because of this recent change, will there be no Phoenix release for 
HBase 2.0.x in the future, or is there an existing compatibility issue?

Which Phoenix version is recommended for a new deployment? :)

thanks and regards,
-Jestan Nirojan

On Wed, May 15, 2019 at 7:04 AM Jaanai Zhang wrote:


Hi, Jestan

Phoenix 5.0.0 is currently not compatible with HBase 2.0.5:
https://issues.apache.org/jira/browse/PHOENIX-5268


    Jaanai Zhang
    Best regards!



Jestan Nirojan <jestanniro...@gmail.com> wrote on Wed, May 15, 2019 at 5:04 AM:

Hi William,

Thanks. It is working with
coalesce(functionThatMightReturnNull(), now()) without an
explicit null.
The Phoenix version is 5.0.0, which uses HBase 2.0.5.
I have not opened an issue for this, as I am not sure how it is
supposed to work.

I am developing a Phoenix driver for Metabase
(which is a BI/data visualization tool).
It seems that for optional query parameters, null values are directly
set by the base Metabase driver, which I am trying to extend.

I wish Phoenix could support explicit null values.

thanks and regards,
-Jestan


On Tue, May 14, 2019 at 11:52 PM William Shen
<wills...@marinsoftware.com> wrote:

Just took a look at the implementation. It seems Phoenix
relies on the first expression not being just an explicit
"null", because it needs to evaluate it for data type
coercion. What's the use case for specifying an explicit
null?

On the other hand, the following should work:
select coalesce(functionThatMightReturnNull(), now()) as date;

On Tue, May 14, 2019 at 11:14 AM William Shen
<wills...@marinsoftware.com> wrote:

Jestan,
It seems like a bug to me. What version of Phoenix are
you using, and did you create a ticket already?

On Tue, May 14, 2019 at 10:26 AM Jestan Nirojan
<jestanniro...@gmail.com> wrote:

Hi,

I am trying to use the COALESCE function to handle a
default value in a WHERE condition, like below.

select  * from table1 where created_date >=
coalesce(null, trunc(now(), 'day'));

But it throws NullPointerException

Caused by: java.lang.NullPointerException
at

org.apache.phoenix.schema.types.PDataType.equalsAny(PDataType.java:326)
at

org.apache.phoenix.schema.types.PDate.isCoercibleTo(PDate.java:111)
at

org.apache.phoenix.expression.function.CoalesceFunction.<init>(CoalesceFunction.java:68)
... 47 more

I was able to reproduce the same error with the
following query:

select coalesce(null, now()) as date;

Here are some other variants of the same issue:

1. select coalesce(now(), now()) as date; //
returns 2019-05-14
2. select coalesce(now(), null) as date; // returns
empty
3. select coalesce(null, now()) as date; // throws
exception

I have tried the same for INT and VARCHAR, with the same outcome.
Am I doing something wrong here, or is coalesce
supposed to return a non-null value?

thanks and regards,
-Jestan Nirojan



Breaking change in Avatica-Go when handling null and empty strings for Apache Phoenix

2019-01-06 Thread Francis Chuang
This is a heads up regarding a breaking change that is currently in 
avatica-go master and will be released as the next major version, 4.0.0.


In Apache Phoenix, string columns set to null or an empty string ("") 
are considered to be equivalent. For more details on why this is the 
case see [1].


While fixing a bug to correctly work with null values in avatica-go [2], 
I had to break existing behavior.


Previous behavior: A string column set to null or an empty string will 
be returned as an empty string.


New behavior: A string column set to null or an empty string will be 
returned as a null.


The reason for this change is to take advantage of Go's database/sql 
package's builtin NullString type [3]. This type allows userland code to 
scan nullable columns into a variable without any errors.
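
As a rough sketch (not from the original announcement; the table and 
column names below are hypothetical), scanning a nullable VARCHAR under 
4.0.0 would look something like this:

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/apache/calcite-avatica-go/v4" // registers the "avatica" driver
)

func main() {
    // Assumes a Phoenix Query Server reachable at localhost:8765.
    db, err := sql.Open("avatica", "http://localhost:8765")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // "users" and "name" are hypothetical; any nullable VARCHAR works.
    var name sql.NullString
    if err := db.QueryRow("SELECT name FROM users WHERE id = 1").Scan(&name); err != nil {
        log.Fatal(err)
    }

    if name.Valid {
        fmt.Println("name:", name.String)
    } else {
        // Under the new behavior, a Phoenix NULL (or '') lands here.
        fmt.Println("name is NULL")
    }
}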


Note: This breaking change will be part of 4.0.0 and will not affect 
users using 3.x.x. However, to take advantage of database/sql's null 
types, you will need to upgrade to 4.0.0 (when it is released) and 
upgrade your import paths to github.com/apache/calcite-avatica-go/v4.
This change is only applicable to Apache Phoenix and will not affect 
HSQLDB.


[1] https://issues.apache.org/jira/browse/PHOENIX-947
[2] https://issues.apache.org/jira/browse/CALCITE-2763
[3] https://golang.org/pkg/database/sql/#NullString


Re: Probably issue of DEFAULT column value when creating table

2018-11-14 Thread Francis Chuang
Can you try enclosing the string in single quotes? (I haven't tried this 
myself, as I currently don't have access to my Phoenix test cluster.)


CREATE TABLE TEST (
a BIGINT NOT NULL DEFAULT 0,
b CHAR(10) DEFAULT 'abc',
cf.c INTEGER DEFAULT 1
CONSTRAINT pk PRIMARY KEY (a ASC, b ASC)
);

On 15/11/2018 4:38 pm, xin geng wrote:

Hi, all

I'm learning Phoenix. When trying to create a table with the SQL below, 
I got a ColumnNotFoundException, which is probably an issue in Phoenix. 
Please correct me if I'm wrong. :)


SQL:
CREATE TABLE TEST (
a BIGINT NOT NULL DEFAULT 0,
b CHAR(10) DEFAULT "abc",
cf.c INTEGER DEFAULT 1
CONSTRAINT pk PRIMARY KEY (a ASC, b ASC)
);

Exception:
Error: ERROR 504 (42703): Undefined column. columnName=abc 
(state=42703,code=504)
org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
Undefined column. columnName=abc
at 
org.apache.phoenix.compile.FromCompiler$1.resolveColumn(FromCompiler.java:129)
at 
org.apache.phoenix.compile.ExpressionCompiler.resolveColumn(ExpressionCompiler.java:372)
at 
org.apache.phoenix.compile.ExpressionCompiler.visit(ExpressionCompiler.java:408)
at 
org.apache.phoenix.compile.ExpressionCompiler.visit(ExpressionCompiler.java:146)
at 
org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)

at org.apache.phoenix.parse.ColumnDef.validateDefault(ColumnDef.java:246)
at 
org.apache.phoenix.compile.CreateTableCompiler.compile(CreateTableCompiler.java:108)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateTableStatement.compilePlan(PhoenixStatement.java:788)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateTableStatement.compilePlan(PhoenixStatement.java:777)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:387)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)

at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:375)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:364)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1738)

at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)

I'm using Phoenix 4.13.1 and HBase 1.2.5.





Re: org.apache.phoenix.shaded.org.apache.thrift.TException: Unable to discover transaction service. -> TException: Unable to discover transaction service.

2018-09-27 Thread Francis Chuang
I updated the Docker image file to remove the Hadoop jars bundled with 
HBase and replace them with the ones from Hadoop 2.8.5 from Maven. Maven 
does not host any of the test jars which were present in the HBase 
distribution, but I did not notice any adverse effects.


The image works correctly on all my machines after this update; however, 
I am still baffled as to why the image using the Hadoop 2.7.4 jars worked 
correctly on some machines but failed on others.


On 28/09/2018 8:24 AM, Francis Chuang wrote:
I tried updating my hbase-phoenix-all-in-one image to use HBase built 
with Hadoop 3. Unfortunately, it didn't work. I think this might be 
because Hadoop 3.0.0 is too new for Tephra (which uses Hadoop 2.2.0):


starting Query Server, logging to 
/tmp/phoenix/phoenix-root-queryserver.log
Thu Sep 27 02:50:56 UTC 2018 Starting tephra service on 
m401b01-phoenix.m401b01

Running class org.apache.tephra.TransactionServiceMain
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/hbase/lib/phoenix-5.0.0-HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/opt/hbase/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.
Exception in thread "main" java.lang.NoSuchMethodError: 
com.ctc.wstx.stax.WstxInputFactory.createSR(Lcom/ctc/wstx/api/ReaderConfig;Lcom/ctc/wstx/io/SystemId;Lcom/ctc/wstx/io/InputBootstrapper;ZZ)Lorg/codehaus/stax2/XMLStreamReader2;
    at 
org.apache.hadoop.conf.Configuration.parse(Configuration.java:2803)
    at 
org.apache.hadoop.conf.Configuration.parse(Configuration.java:2787)
    at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2838)
    at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2812)
    at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2689)

    at org.apache.hadoop.conf.Configuration.get(Configuration.java:1160)
    at 
org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1214)
    at 
org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1620)
    at 
org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:66)
    at 
org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:80)
    at 
org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:94)
    at 
org.apache.hadoop.hbase.util.HBaseConfTool.main(HBaseConfTool.java:39)

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/hbase/lib/phoenix-5.0.0-HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/opt/hbase/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.
Exception in thread "main" java.lang.NoSuchMethodError: 
com.ctc.wstx.stax.WstxInputFactory.createSR(Lcom/ctc/wstx/api/ReaderConfig;Lcom/ctc/wstx/io/SystemId;Lcom/ctc/wstx/io/InputBootstrapper;ZZ)Lorg/codehaus/stax2/XMLStreamReader2;
    at 
org.apache.hadoop.conf.Configuration.parse(Configuration.java:2803)
    at 
org.apache.hadoop.conf.Configuration.parse(Configuration.java:2787)
    at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2838)
    at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2812)
    at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2689)

    at org.apache.hadoop.conf.Configuration.get(Configuration.java:1160)
    at 
org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1214)
    at 
org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1620)
    at 
org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:66)
    at 
org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:80)
    at 
org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:94)
    at 
org.apache.hadoop.hbase.zookeeper.ZKServerTool.main(ZKServerTool.java:63)
running master, logging to 
/opt/hbase/logs/hbase--master-m401b01-phoenix.m401b01.out

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/hbase/lib/phoenix-5.0.0-HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/opt/hbase/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.
Exception in thread "main" java.lang.NoSuchMethodError: 
com.ctc.wstx.stax.WstxInputFactory.createSR(Lcom/ctc/wstx/api/ReaderConfig;Lcom/ctc/wstx/io/SystemId;Lcom/ctc/wstx/io/InputBootstrapper;ZZ)Lorg/codehaus/stax2/XMLStreamReader2;
    at 
org.apache.hadoop.conf.Configuration.parse(Configuration.java:2803)
    at 
org.apache.hadoop.conf.Con

Re: org.apache.phoenix.shaded.org.apache.thrift.TException: Unable to discover transaction service. -> TException: Unable to discover transaction service.

2018-09-27 Thread Francis Chuang
Base-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
: SLF4J: Found binding in 
[jar:file:/opt/hbase/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.
: Exception in thread "main" java.lang.NoSuchMethodError: 
com.ctc.wstx.stax.WstxInputFactory.createSR(Lcom/ctc/wstx/api/ReaderConfig;Lcom/ctc/wstx/io/SystemId;Lcom/ctc/wstx/io/InputBootstrapper;ZZ)Lorg/codehaus/stax2/XMLStreamReader2;

:     at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2803)
:     at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2787)
:     at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2838)
:     at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2812)
:     at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2689)


I built HBase with Hadoop 2.8.5 and was able to resolve the 
hasKerberosKeyTab NoSuchMethodError. The only problem is that it took 4 
hours to run an automated build on Docker Cloud to build HBase, and the 
build eventually failed. I think I am going to download the Hadoop jars 
from Maven, rather than build HBase.

On 27/09/2018 12:56 AM, Josh Elser wrote:

If you're using HBase with Hadoop3, HBase should have Hadoop3 jars.

Re-build HBase using the -Dhadoop.profile=3.0 (I think it is) CLI option.

On 9/26/18 7:21 AM, Francis Chuang wrote:
Upon further investigation, it appears that this is because 
org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab 
is only available in Hadoop 2.8+. HBase ships with Hadoop 2.7.4 jars.


I noticed that Hadoop was bumped from 2.7.4 to 3.0.0 a few months ago 
to fix PQS/Avatica issues: 
https://github.com/apache/phoenix/blame/master/pom.xml#L70


I think this causes Phoenix to expect some things that are available 
in Hadoop 3.0.0, but are not present in HBase's Hadoop 2.7.4 jars.


I think I can try and replace the hadoop-*.jar files in hbase/lib 
with the equivalent 2.8.5 versions, however I am not familiar with 
Java and the hadoop project, so I am not sure if this is going to 
introduce issues.


On 26/09/2018 4:44 PM, Francis Chuang wrote:

I wonder if this is because:
- HBase's binary distribution ships with Hadoop 2.7.4 jars.
- Phoenix 5.0.0 has Hadoop 3.0.0 declared in its pom.xml: 
https://github.com/apache/phoenix/blob/8a819c6c3b4befce190c6ac759f744df511de61d/pom.xml#L70 

- Tephra has Hadoop 2.2.0 declared in its pom.xml: 
https://github.com/apache/incubator-tephra/blob/master/pom.xml#L211


On 26/09/2018 4:03 PM, Francis Chuang wrote:

Hi all,

I am using Phoenix 5.0.0 with HBase 2.0.0. I am seeing errors while 
trying to create transactional tables using Phoenix.


I am using my Phoenix + HBase all in one docker image available 
here: https://github.com/Boostport/hbase-phoenix-all-in-one


This is the error: 
org.apache.phoenix.shaded.org.apache.thrift.TException: Unable to 
discover transaction service. -> TException: Unable to discover 
transaction service.


I checked the tephra logs and got the following:

Exception in thread "HDFSTransactionStateStorage STARTING" 
Exception in thread "ThriftRPCServer" 
com.google.common.util.concurrent.ExecutionError: 
java.lang.NoSuchMethodError: 
org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab(Ljavax/security/auth/Subject;)Z 

    at 
com.google.common.util.concurrent.Futures.wrapAndThrowUnchecked(Futures.java:1008) 

    at 
com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1001) 

    at 
com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220) 

    at 
com.google.common.util.concurrent.AbstractIdleService.startAndWait(AbstractIdleService.java:106) 

    at 
org.apache.tephra.TransactionManager.doStart(TransactionManager.java:245) 

    at 
com.google.common.util.concurrent.AbstractService.start(AbstractService.java:170) 

    at 
com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220) 

    at 
org.apache.tephra.distributed.TransactionServiceThriftHandler.init(TransactionServiceThriftHandler.java:249) 

    at 
org.apache.tephra.rpc.ThriftRPCServer.startUp(ThriftRPCServer.java:177) 

    at 
com.google.common.util.concurrent.AbstractExecutionThreadService$1$1.run(AbstractExecutionThreadService.java:47) 


    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab(Ljavax/security/auth/Subject;)Z 

    at 
org.apache.hadoop.security.UserGroupInformation.<init>(UserGroupInformation.java:715) 

    at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:925) 

    at 
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:873) 

    at 
org.apache.hadoop.security.UserGroupInformation.getC

Re: org.apache.phoenix.shaded.org.apache.thrift.TException: Unable to discover transaction service. -> TException: Unable to discover transaction service.

2018-09-26 Thread Francis Chuang
Upon further investigation, it appears that this is because 
org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab 
is only available in Hadoop 2.8+. HBase ships with Hadoop 2.7.4 jars.


I noticed that Hadoop was bumped from 2.7.4 to 3.0.0 a few months ago to 
fix PQS/Avatica issues: 
https://github.com/apache/phoenix/blame/master/pom.xml#L70


I think this causes Phoenix to expect some things that are available in 
Hadoop 3.0.0, but are not present in HBase's Hadoop 2.7.4 jars.


I think I can try and replace the hadoop-*.jar files in hbase/lib with 
the equivalent 2.8.5 versions, however I am not familiar with Java and 
the hadoop project, so I am not sure if this is going to introduce issues.


On 26/09/2018 4:44 PM, Francis Chuang wrote:

I wonder if this is because:
- HBase's binary distribution ships with Hadoop 2.7.4 jars.
- Phoenix 5.0.0 has Hadoop 3.0.0 declared in its pom.xml: 
https://github.com/apache/phoenix/blob/8a819c6c3b4befce190c6ac759f744df511de61d/pom.xml#L70
- Tephra has Hadoop 2.2.0 declared in its pom.xml: 
https://github.com/apache/incubator-tephra/blob/master/pom.xml#L211


On 26/09/2018 4:03 PM, Francis Chuang wrote:

Hi all,

I am using Phoenix 5.0.0 with HBase 2.0.0. I am seeing errors while 
trying to create transactional tables using Phoenix.


I am using my Phoenix + HBase all in one docker image available here: 
https://github.com/Boostport/hbase-phoenix-all-in-one


This is the error: 
org.apache.phoenix.shaded.org.apache.thrift.TException: Unable to 
discover transaction service. -> TException: Unable to discover 
transaction service.


I checked the tephra logs and got the following:

Exception in thread "HDFSTransactionStateStorage STARTING" Exception 
in thread "ThriftRPCServer" 
com.google.common.util.concurrent.ExecutionError: 
java.lang.NoSuchMethodError: 
org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab(Ljavax/security/auth/Subject;)Z
    at 
com.google.common.util.concurrent.Futures.wrapAndThrowUnchecked(Futures.java:1008)
    at 
com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1001)
    at 
com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220)
    at 
com.google.common.util.concurrent.AbstractIdleService.startAndWait(AbstractIdleService.java:106)
    at 
org.apache.tephra.TransactionManager.doStart(TransactionManager.java:245)
    at 
com.google.common.util.concurrent.AbstractService.start(AbstractService.java:170)
    at 
com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220)
    at 
org.apache.tephra.distributed.TransactionServiceThriftHandler.init(TransactionServiceThriftHandler.java:249)
    at 
org.apache.tephra.rpc.ThriftRPCServer.startUp(ThriftRPCServer.java:177)
    at 
com.google.common.util.concurrent.AbstractExecutionThreadService$1$1.run(AbstractExecutionThreadService.java:47)

    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab(Ljavax/security/auth/Subject;)Z
    at 
org.apache.hadoop.security.UserGroupInformation.<init>(UserGroupInformation.java:715)
    at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:925)
    at 
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:873)
    at 
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:740)
    at 
org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:3472)
    at 
org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3310)

    at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:529)
    at 
org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:104)
    at 
com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)

    ... 1 more
2018-09-26 04:31:11,290 INFO [leader-election-tx.service-leader] 
distributed.TransactionService (TransactionService.java:leader(115)) 
- Transaction Thrift Service didn't start on /0.0.0.0:15165
java.lang.NoSuchMethodError: 
org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab(Ljavax/security/auth/Subject;)Z
    at 
org.apache.hadoop.security.UserGroupInformation.<init>(UserGroupInformation.java:715)
    at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:925)
    at 
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:873)
    at 
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:740)
    at 
org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:3472)
    at 
org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3310)

    at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:529)
    at 
org.apache.tephra.persi

Re: org.apache.phoenix.shaded.org.apache.thrift.TException: Unable to discover transaction service. -> TException: Unable to discover transaction service.

2018-09-25 Thread Francis Chuang

I wonder if this is because:
- HBase's binary distribution ships with Hadoop 2.7.4 jars.
- Phoenix 5.0.0 has Hadoop 3.0.0 declared in its pom.xml: 
https://github.com/apache/phoenix/blob/8a819c6c3b4befce190c6ac759f744df511de61d/pom.xml#L70
- Tephra has Hadoop 2.2.0 declared in its pom.xml: 
https://github.com/apache/incubator-tephra/blob/master/pom.xml#L211


On 26/09/2018 4:03 PM, Francis Chuang wrote:

Hi all,

I am using Phoenix 5.0.0 with HBase 2.0.0. I am seeing errors while 
trying to create transactional tables using Phoenix.


I am using my Phoenix + HBase all in one docker image available here: 
https://github.com/Boostport/hbase-phoenix-all-in-one


This is the error: 
org.apache.phoenix.shaded.org.apache.thrift.TException: Unable to 
discover transaction service. -> TException: Unable to discover 
transaction service.


I checked the tephra logs and got the following:

Exception in thread "HDFSTransactionStateStorage STARTING" Exception 
in thread "ThriftRPCServer" 
com.google.common.util.concurrent.ExecutionError: 
java.lang.NoSuchMethodError: 
org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab(Ljavax/security/auth/Subject;)Z
    at 
com.google.common.util.concurrent.Futures.wrapAndThrowUnchecked(Futures.java:1008)
    at 
com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1001)
    at 
com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220)
    at 
com.google.common.util.concurrent.AbstractIdleService.startAndWait(AbstractIdleService.java:106)
    at 
org.apache.tephra.TransactionManager.doStart(TransactionManager.java:245)
    at 
com.google.common.util.concurrent.AbstractService.start(AbstractService.java:170)
    at 
com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220)
    at 
org.apache.tephra.distributed.TransactionServiceThriftHandler.init(TransactionServiceThriftHandler.java:249)
    at 
org.apache.tephra.rpc.ThriftRPCServer.startUp(ThriftRPCServer.java:177)
    at 
com.google.common.util.concurrent.AbstractExecutionThreadService$1$1.run(AbstractExecutionThreadService.java:47)

    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab(Ljavax/security/auth/Subject;)Z
    at 
org.apache.hadoop.security.UserGroupInformation.<init>(UserGroupInformation.java:715)
    at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:925)
    at 
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:873)
    at 
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:740)
    at 
org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:3472)
    at 
org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3310)

    at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:529)
    at 
org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:104)
    at 
com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)

    ... 1 more
2018-09-26 04:31:11,290 INFO  [leader-election-tx.service-leader] 
distributed.TransactionService (TransactionService.java:leader(115)) - 
Transaction Thrift Service didn't start on /0.0.0.0:15165
java.lang.NoSuchMethodError: 
org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab(Ljavax/security/auth/Subject;)Z
    at 
org.apache.hadoop.security.UserGroupInformation.<init>(UserGroupInformation.java:715)
    at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:925)
    at 
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:873)
    at 
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:740)
    at 
org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:3472)
    at 
org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3310)

    at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:529)
    at 
org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:104)
    at 
com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)

    at java.lang.Thread.run(Thread.java:748)

I know that HBase ships with the Hadoop 2.7.4 jars and I was not able 
to find "hasKerberosKeyTab" grepping through the source code for 
Hadoop 2.7.4. However, I checked the Hadoop 2.7.4 source files from 
the stack trace above and the line numbers do not match up.


Interestingly, I only see this issue on my older machine (Core i7 920 
with 12GB of RAM) and Gitlab's CI environment (a Google Cloud 
n1-standard-1 instance with 1vCPU and 3.75GB of RAM). I know Michael 
also encountered this problem while running the Phoenix tests for 
calcite-a

org.apache.phoenix.shaded.org.apache.thrift.TException: Unable to discover transaction service. -> TException: Unable to discover transaction service.

2018-09-25 Thread Francis Chuang

Hi all,

I am using Phoenix 5.0.0 with HBase 2.0.0. I am seeing errors while 
trying to create transactional tables using Phoenix.


I am using my Phoenix + HBase all in one docker image available here: 
https://github.com/Boostport/hbase-phoenix-all-in-one


This is the error: 
org.apache.phoenix.shaded.org.apache.thrift.TException: Unable to 
discover transaction service. -> TException: Unable to discover 
transaction service.


I checked the tephra logs and got the following:

Exception in thread "HDFSTransactionStateStorage STARTING" Exception in 
thread "ThriftRPCServer" 
com.google.common.util.concurrent.ExecutionError: 
java.lang.NoSuchMethodError: 
org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab(Ljavax/security/auth/Subject;)Z
    at 
com.google.common.util.concurrent.Futures.wrapAndThrowUnchecked(Futures.java:1008)
    at 
com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1001)
    at 
com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220)
    at 
com.google.common.util.concurrent.AbstractIdleService.startAndWait(AbstractIdleService.java:106)
    at 
org.apache.tephra.TransactionManager.doStart(TransactionManager.java:245)
    at 
com.google.common.util.concurrent.AbstractService.start(AbstractService.java:170)
    at 
com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220)
    at 
org.apache.tephra.distributed.TransactionServiceThriftHandler.init(TransactionServiceThriftHandler.java:249)
    at 
org.apache.tephra.rpc.ThriftRPCServer.startUp(ThriftRPCServer.java:177)
    at 
com.google.common.util.concurrent.AbstractExecutionThreadService$1$1.run(AbstractExecutionThreadService.java:47)

    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab(Ljavax/security/auth/Subject;)Z
    at 
org.apache.hadoop.security.UserGroupInformation.<init>(UserGroupInformation.java:715)
    at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:925)
    at 
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:873)
    at 
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:740)
    at 
org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:3472)
    at 
org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3310)

    at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:529)
    at 
org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:104)
    at 
com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)

    ... 1 more
2018-09-26 04:31:11,290 INFO  [leader-election-tx.service-leader] 
distributed.TransactionService (TransactionService.java:leader(115)) - 
Transaction Thrift Service didn't start on /0.0.0.0:15165
java.lang.NoSuchMethodError: 
org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab(Ljavax/security/auth/Subject;)Z
    at 
org.apache.hadoop.security.UserGroupInformation.<init>(UserGroupInformation.java:715)
    at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:925)
    at 
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:873)
    at 
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:740)
    at 
org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:3472)
    at 
org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3310)

    at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:529)
    at 
org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:104)
    at 
com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)

    at java.lang.Thread.run(Thread.java:748)

I know that HBase ships with the Hadoop 2.7.4 jars and I was not able to 
find "hasKerberosKeyTab" grepping through the source code for Hadoop 
2.7.4. However, I checked the Hadoop 2.7.4 source files from the stack 
trace above and the line numbers do not match up.


Interestingly, I only see this issue on my older machine (Core i7 920 
with 12GB of RAM) and Gitlab's CI environment (a Google Cloud 
n1-standard-1 instance with 1vCPU and 3.75GB of RAM). I know Michael 
also encountered this problem while running the Phoenix tests for 
calcite-avatica-go on an older i5 machine from 2011.


It does seem to be pretty weird that we are only seeing this on machines 
where the CPU is not very powerful.


I also printed the classpath for tephra by doing:

$ # export HBASE_CONF_DIR=/opt/hbase/conf
$ # export HBASE_CP=/opt/hbase/lib
$ # export HBASE_HOME=/opt/hbase
$ # /opt/hbase/bin/tephra classpath
/opt/hbase/bin/../lib/*:/opt/hbase/bin/../conf/:/opt/hbase/phoenix-client/target/*:/opt/hbase/conf:/usr

Re: Phoenix 5.0 could not commit transaction: org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: org.apache.phoenix.hbase

2018-09-25 Thread Francis Chuang
After some investigation, I found that Phoenix 5.0.0 is only compatible 
with HBase 2.0.0.


In 2.0.1 and onward, compare(final Cell a, final Cell b) in 
CellComparatorImpl was changed to final: 
https://github.com/apache/hbase/blame/master/hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparatorImpl.java#L67


This change affected HBase 2.0.1 and 2.0.2, and it breaks Phoenix 5.0.0, 
which relies on overriding this method: 
https://github.com/apache/phoenix/blob/8a819c6c3b4befce190c6ac759f744df511de61d/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/data/IndexMemStore.java#L84


Fortunately, this is fixed in Phoenix master: 
https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/data/IndexMemStore.java#L82


The issue should be resolved in the next release of Phoenix.

The problem is that I wrongly assumed HBase's version numbers follow 
semver and that a patch release would not introduce breaking 
changes.


On 26/09/2018 1:04 AM, Jaanai Zhang wrote:



Is my method of installing HBase and Phoenix correct?

Did you check which versions of the HBase jars exist in your classpath?

Is this a compatibility issue with Guava?

It isn't an exception caused by a Guava incompatibility.


   Jaanai Zhang
   Best regards!



Francis Chuang <francischu...@apache.org> wrote on Tue, Sep 25, 2018 at 8:25 PM:


Thanks for taking a look, Jaanai!

Is my method of installing HBase and Phoenix correct? See

https://github.com/Boostport/hbase-phoenix-all-in-one/blob/master/Dockerfile#L12

Is this a compatibility issue with Guava?

Francis

On 25/09/2018 10:21 PM, Jaanai Zhang wrote:


org.apache.phoenix.hbase.index.covered.data.IndexMemStore$1
overrides
final method
compare.(Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/Cell;)I
     at java.lang.ClassLoader.defineClass1(Native Method)
     at
java.lang.ClassLoader.defineClass(ClassLoader.java:763)
     at

It looks like the HBase jars are incompatible.


   Jaanai Zhang
   Best regards!



Francis Chuang <francischu...@apache.org> wrote on Tue, Sep 25, 2018 at 8:06 PM:

Hi All,

I recently updated one of my Go apps to use Phoenix 5.0 with
HBase
2.0.2. I am using my Phoenix + HBase all in one docker image
available
here: https://github.com/Boostport/hbase-phoenix-all-in-one

This is the log/output from the exception:

RuntimeException: org.apache.phoenix.execute.CommitException:
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException:

Failed 1 action:
org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException:

Failed to build index for unexpected reason!
     at

org.apache.phoenix.hbase.index.util.IndexManagementUtil.rethrowIndexingException(IndexManagementUtil.java:206)
     at
org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:351)
     at

org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$28.call(RegionCoprocessorHost.java:1010)
     at

org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$28.call(RegionCoprocessorHost.java:1007)
     at

org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
     at

org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
     at

org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1007)
     at

org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.prepareMiniBatchOperations(HRegion.java:3487)
     at

org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:3896)
     at

org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3854)
     at

org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3785)
     at

org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1027)
     at

org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:959)
     at

org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:922)
     at

org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2666)
     at

org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlocking

Re: Phoenix 5.0 could not commit transaction: org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: org.apache.phoenix.hbase

2018-09-25 Thread Francis Chuang
-asl-1.9.13.jar:/opt/hbase/lib/jackson-databind-2.9.2.jar:/opt/hbase/lib/jackson-jaxrs-1.8.3.jar:/opt/hbase/lib/jackson-jaxrs-base-2.9.2.jar:/opt/hbase/lib/jackson-jaxrs-json-provider-2.9.2.jar:/opt/hbase/lib/jackson-mapper-asl-1.9.13.jar:/opt/hbase/lib/jackson-module-jaxb-annotations-2.9.2.jar:/opt/hbase/lib/jackson-xc-1.8.3.jar:/opt/hbase/lib/jamon-runtime-2.4.1.jar:/opt/hbase/lib/java-xmlbuilder-0.4.jar:/opt/hbase/lib/javassist-3.20.0-GA.jar:/opt/hbase/lib/javax.annotation-api-1.2.jar:/opt/hbase/lib/javax.el-3.0.1-b08.jar:/opt/hbase/lib/javax.inject-2.5.0-b32.jar:/opt/hbase/lib/javax.servlet-api-3.1.0.jar:/opt/hbase/lib/javax.servlet.jsp-2.3.2.jar:/opt/hbase/lib/javax.servlet.jsp-api-2.3.1.jar:/opt/hbase/lib/javax.servlet.jsp.jstl-1.2.0.v201105211821.jar:/opt/hbase/lib/javax.servlet.jsp.jstl-1.2.2.jar:/opt/hbase/lib/javax.ws.rs-api-2.0.1.jar:/opt/hbase/lib/jaxb-api-2.2.12.jar:/opt/hbase/lib/jaxb-impl-2.2.3-1.jar:/opt/hbase/lib/jcodings-1.0.18.jar:/opt/hbase/lib/jersey-client-2.25.1.jar:/opt/hbase/lib/jersey-common-2.25.1.jar:/opt/hbase/lib/jersey-container-servlet-core-2.25.1.jar:/opt/hbase/lib/jersey-guava-2.25.1.jar:/opt/hbase/lib/jersey-media-jaxb-2.25.1.jar:/opt/hbase/lib/jersey-server-2.25.1.jar:/opt/hbase/lib/jets3t-0.9.0.jar:/opt/hbase/lib/jettison-1.3.8.jar:/opt/hbase/lib/jetty-6.1.26.jar:/opt/hbase/lib/jetty-http-9.3.19.v20170502.jar:/opt/hbase/lib/jetty-io-9.3.19.v20170502.jar:/opt/hbase/lib/jetty-jmx-9.3.19.v20170502.jar:/opt/hbase/lib/jetty-jsp-9.2.19.v20160908.jar:/opt/hbase/lib/jetty-schemas-3.1.M0.jar:/opt/hbase/lib/jetty-security-9.3.19.v20170502.jar:/opt/hbase/lib/jetty-server-9.3.19.v20170502.jar:/opt/hbase/lib/jetty-servlet-9.3.19.v20170502.jar:/opt/hbase/lib/jetty-sslengine-6.1.26.jar:/opt/hbase/lib/jetty-util-6.1.26.jar:/opt/hbase/lib/jetty-util-9.3.19.v20170502.jar:/opt/hbase/lib/jetty-util-ajax-9.3.19.v20170502.jar:/opt/hbase/lib/jetty-webapp-9.3.19.v20170502.jar:/opt/hbase/lib/jetty-xml-9.3.19.v20170502.jar:/opt/hbase/lib/joni-2.1.11.jar:/opt/hbase/lib/jsch-0.1.54.jar:/opt/hbase/lib/junit-4.12.jar:/opt/hbase/lib/leveldbjni-all-1.8.jar:/opt/hbase/lib/libthrift-0.9.3.jar:/opt/hbase/lib/log4j-1.2.17.jar:/opt/hbase/lib/metrics-core-3.2.1.jar:/opt/hbase/lib/netty-all-4.0.23.Final.jar:/opt/hbase/lib/org.eclipse.jdt.core-3.8.2.v20130121.jar:/opt/hbase/lib/osgi-resource-locator-1.0.1.jar:/opt/hbase/lib/paranamer-2.3.jar:/opt/hbase/lib/phoenix-5.0.0-HBase-2.0-client.jar:/opt/hbase/lib/phoenix-5.0.0-HBase-2.0-server.jar:/opt/hbase/lib/protobuf-java-2.5.0.jar:/opt/hbase/lib/remotecontent?filepath=com%2Fgoogle%2Fguava%2Fguava%2F13.0.1%2Fguava-13.0.1.jar:/opt/hbase/lib/slf4j-api-1.7.25.jar:/opt/hbase/lib/slf4j-log4j12-1.7.25.jar:/opt/hbase/lib/snappy-java-1.0.5.jar:/opt/hbase/lib/spymemcached-2.12.2.jar:/opt/hbase/lib/validation-api-1.1.0.Final.jar:/opt/hbase/lib/xmlenc-0.52.jar:/opt/hbase/lib/xz-1.0.jar:/opt/hbase/lib/zookeeper-3.4.10.jar::

From what I can see, the HBase jars are 2.0.2.

Francis

On 26/09/2018 1:04 AM, Jaanai Zhang wrote:



Is my method of installing HBase and Phoenix correct?

Did you check which versions of the HBase jars exist in your classpath?

Is this a compatibility issue with Guava?

It isn't an exception caused by a Guava incompatibility.


   Jaanai Zhang
   Best regards!



Francis Chuang <francischu...@apache.org> wrote on Tue, Sep 25, 2018 at 8:25 PM:


Thanks for taking a look, Jaanai!

Is my method of installing HBase and Phoenix correct? See

https://github.com/Boostport/hbase-phoenix-all-in-one/blob/master/Dockerfile#L12

Is this a compatibility issue with Guava?

Francis

On 25/09/2018 10:21 PM, Jaanai Zhang wrote:


org.apache.phoenix.hbase.index.covered.data.IndexMemStore$1
overrides
final method
compare.(Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/Cell;)I
     at java.lang.ClassLoader.defineClass1(Native Method)
     at
java.lang.ClassLoader.defineClass(ClassLoader.java:763)
     at

It looks like the HBase jars are incompatible.


   Jaanai Zhang
       Best regards!



Francis Chuang <francischu...@apache.org> wrote on Tue, Sep 25, 2018 at 8:06 PM:

Hi All,

I recently updated one of my Go apps to use Phoenix 5.0 with
HBase
2.0.2. I am using my Phoenix + HBase all in one docker image
available
here: https://github.com/Boostport/hbase-phoenix-all-in-one

This is the log/output from the exception:

RuntimeException: org.apache.phoenix.execute.CommitException:
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException:

Failed 1 action:
org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException:

Failed to build index for unexpected reason!
     at

org.apache.phoenix.hbase.index.util.Inde

Re: Phoenix 5.0 could not commit transaction: org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: org.apache.phoenix.hbase

2018-09-25 Thread Francis Chuang

Thanks for taking a look, Jaanai!

Is my method of installing HBase and Phoenix correct? See 
https://github.com/Boostport/hbase-phoenix-all-in-one/blob/master/Dockerfile#L12


Is this a compatibility issue with Guava?

Francis

On 25/09/2018 10:21 PM, Jaanai Zhang wrote:


org.apache.phoenix.hbase.index.covered.data.IndexMemStore$1 overrides
final method
compare.(Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/Cell;)I
     at java.lang.ClassLoader.defineClass1(Native Method)
     at
java.lang.ClassLoader.defineClass(ClassLoader.java:763)
     at

It looks like the HBase jars are incompatible.


   Jaanai Zhang
   Best regards!



Francis Chuang <francischu...@apache.org> wrote on Tue, Sep 25, 2018 at 8:06 PM:


Hi All,

I recently updated one of my Go apps to use Phoenix 5.0 with HBase
2.0.2. I am using my Phoenix + HBase all in one docker image
available
here: https://github.com/Boostport/hbase-phoenix-all-in-one

This is the log/output from the exception:

RuntimeException: org.apache.phoenix.execute.CommitException:
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException:
Failed 1 action:
org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException:
Failed to build index for unexpected reason!
     at

org.apache.phoenix.hbase.index.util.IndexManagementUtil.rethrowIndexingException(IndexManagementUtil.java:206)
     at
org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:351)
     at

org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$28.call(RegionCoprocessorHost.java:1010)
     at

org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$28.call(RegionCoprocessorHost.java:1007)
     at

org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
     at

org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
     at

org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1007)
     at

org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.prepareMiniBatchOperations(HRegion.java:3487)
     at

org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:3896)
     at
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3854)
     at
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3785)
     at

org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1027)
     at

org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:959)
     at

org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:922)
     at

org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2666)
     at

org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42014)
     at
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
     at
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
     at
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
     at
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
 Caused by: java.lang.VerifyError: class
org.apache.phoenix.hbase.index.covered.data.IndexMemStore$1 overrides
final method
compare.(Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/Cell;)I
     at java.lang.ClassLoader.defineClass1(Native Method)
     at
java.lang.ClassLoader.defineClass(ClassLoader.java:763)
     at
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
     at
java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
     at
java.net.URLClassLoader.access$100(URLClassLoader.java:73)
     at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
     at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
     at java.security.AccessController.doPrivileged(Native
Method)
     at
java.net.URLClassLoader.findClass(URLClassLoader.java:361)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
     at
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
     at

org.apache.phoenix.hbase.index.covered.data.IndexMemStore.<init>(IndexMemStore.java:82)
 

Phoenix 5.0 could not commit transaction: org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: org.apache.phoenix.hbase.ind

2018-09-25 Thread Francis Chuang

Hi All,

I recently updated one of my Go apps to use Phoenix 5.0 with HBase 
2.0.2. I am using my Phoenix + HBase all in one docker image available 
here: https://github.com/Boostport/hbase-phoenix-all-in-one


This is the log/output from the exception:

RuntimeException: org.apache.phoenix.execute.CommitException: 
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: 
Failed 1 action: 
org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException: 
Failed to build index for unexpected reason!
        at 
org.apache.phoenix.hbase.index.util.IndexManagementUtil.rethrowIndexingException(IndexManagementUtil.java:206)
        at 
org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:351)
        at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$28.call(RegionCoprocessorHost.java:1010)
        at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$28.call(RegionCoprocessorHost.java:1007)
        at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
        at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
        at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1007)
        at 
org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.prepareMiniBatchOperations(HRegion.java:3487)
        at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:3896)
        at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3854)
        at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3785)
        at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1027)
        at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:959)
        at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:922)
        at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2666)
        at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42014)
        at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
        at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
        at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
        at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    Caused by: java.lang.VerifyError: class 
org.apache.phoenix.hbase.index.covered.data.IndexMemStore$1 overrides 
final method 
compare.(Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/Cell;)I

        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
        at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)

        at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at 
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)

        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at 
org.apache.phoenix.hbase.index.covered.data.IndexMemStore.<init>(IndexMemStore.java:82)
        at 
org.apache.phoenix.hbase.index.covered.LocalTableState.<init>(LocalTableState.java:57)
        at 
org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.getIndexUpdate(NonTxIndexBuilder.java:52)
        at 
org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:90)
        at 
org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:503)
        at 
org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:348)

        ... 18 more
    : 1 time, servers with issues: 9ac923bd5c9f,16020,1537875547341 
-> CommitException: 
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: 
Failed 1 action: 
org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException: 
Failed to build index for unexpected reason!
        at 
org.apache.phoenix.hbase.index.util.IndexManagementUtil.rethrowIndexingException(IndexManagementUtil.java:206)
        at 
org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:351)
        at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$28.call(RegionCoprocessorHos

[DISCUSS] Docker images for Phoenix

2018-06-21 Thread Francis Chuang

Hi all,

I currently maintain an HBase + Phoenix all-in-one Docker image [1]. The 
image is currently used to test Phoenix support for the Avatica Go SQL 
driver [2]. Judging by the number of pulls on Docker Hub (10k+), there 
are probably other people using it.


The image spins up an HBase server with local storage, using the bundled 
ZooKeeper, with Phoenix support. The Phoenix Query Server is also started 
on port 8765.
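
For example (assuming the image is published on Docker Hub under the same 
name as the repository below), a disposable test instance can be started 
with:

docker run -d -p 8765:8765 boostport/hbase-phoenix-all-in-one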


While the image is definitely not suitable for production use, I think 
the test image still has valid use-cases and offers a lot of 
convenience. It's also possible to update the image in the future so 
that it can be used to spin up production clusters as well as testing 
instances (similar to what Ceph has done[3]).


Would the Phoenix community be interested in accepting the Dockerfile + 
related files and making them part of Phoenix? The added benefit of this 
is that it would be possible to configure some automation and have the 
Docker images published directly to Docker Hub as an automated build for 
each release.


Francis

[1] https://github.com/Boostport/hbase-phoenix-all-in-one

[2] https://github.com/apache/calcite-avatica-go

[3] https://github.com/ceph/ceph-container



Re: Phoenix ODBC driver limitations

2018-05-23 Thread Francis Chuang
Namespace mapping is something you need to enable on the server (it's 
off by default).


See the documentation for enabling it here: 
http://phoenix.apache.org/namspace_mapping.html
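
As a sketch, enabling it means setting something like the following in 
hbase-site.xml (the property name is the one discussed in the thread 
below). Note that it must be set consistently on both the server and the 
client; for ODBC users, the effective "client" is the Phoenix Query 
Server:

<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>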


Francis

On 24/05/2018 5:23 AM, Stepan Migunov wrote:

Thank you for the response, Josh!

I got something like "Inconsistent namespace mapping properties" and thought
it was because it's impossible to set "isNamespaceMappingEnabled" for the
ODBC driver (client side). After your explanation, I understood that the
"client" in this case is the query server, not the ODBC driver. Now I need to
check why the query server doesn't apply this property.

-Original Message-
From: Josh Elser [mailto:els...@apache.org]
Sent: Wednesday, May 23, 2018 6:52 PM
To: user@phoenix.apache.org
Subject: Re: Phoenix ODBC driver limitations

I'd be surprised to hear that the ODBC driver would need to know anything
about namespace-mapping.

Do you have an error? Steps to reproduce the issue you're seeing?

The reason I am surprised is that namespace mapping is an implementation
detail of the JDBC driver which lives inside of PQS -- *not* the ODBC
driver. The trivial thing you can check would be to validate that the
hbase-site.xml which PQS references is up to date and that PQS was restarted
to pick up the newest version of hbase-site.xml.

On 5/22/18 4:16 AM, Stepan Migunov wrote:

Hi,

Is the ODBC driver from Hortonworks the only way to access Phoenix from
.NET code now?
The problem is that the driver has some critical limitations - it seems
the driver doesn't support namespace mapping (it isn't able to connect
to Phoenix if phoenix.schema.isNamespaceMappingEnabled=true) and doesn't
support query hints.

Regards,
Stepan.





Re: Phoenix ODBC driver limitations

2018-05-22 Thread Francis Chuang

Hey Stepan,

There is a driver called phoenix-sharp 
(https://github.com/Azure/hdinsight-phoenix-sharp) from MS Azure. The 
project has not been updated for a while though.


Francis

On 22/05/2018 6:16 PM, Stepan Migunov wrote:

Hi,

Is the ODBC driver from Hortonworks the only way to access Phoenix from .NET 
code now?
The problem is that the driver has some critical limitations - it seems the 
driver doesn't support namespace mapping (it isn't able to connect to Phoenix if 
phoenix.schema.isNamespaceMappingEnabled=true) and doesn't support query hints.

Regards,
Stepan.





Re: Decrease HTTP chattiness?

2018-05-21 Thread Francis Chuang
I am not familiar with the JDBC driver, but Phoenix uses Avatica[1] 
under the hood. The protobuf documentation does state that it's possible 
to control the number of rows returned in each response. See 
frame_max_size under the FetchRequest[2] message. This may be something 
that you can set in the JDBC driver.
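
For comparison, the Go Avatica driver exposes this knob as a DSN 
parameter. A minimal sketch, assuming the frameMaxSize parameter in 
current avatica-go releases (the JDBC driver's equivalent, if any, may 
be named differently):

// Request up to 10,000 rows per frame instead of the default.
db, err := sql.Open("avatica", "http://phoenix-server:8765?frameMaxSize=10000")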


Francis

[1] https://calcite.apache.org/avatica
[2] 
https://calcite.apache.org/avatica/docs/protobuf_reference.html#fetchrequest


On 22/05/2018 8:39 AM, Kevin Minder wrote:
Is it possible to decrease the chattiness of the Phoenix JDBC driver 
operating in HTTP mode?

We've tried using stmt.setFetchSize() but this appears to be ignored.
As it stands now, we appear to be getting about 100 rows per POST, which 
presents a number of throughput issues when the results can be 100,000 
rows.





Go Avatica/Phoenix database/sql driver 3.0.0 released

2018-04-28 Thread Francis Chuang

We recently released Apache Calcite Avatica Go 3.0.0.

avatica-go is a Go database/sql driver for Avatica that also contains an 
adapter for Phoenix. The 3.0.0 release is the first release under the 
Apache Software Foundation since the code was donated.


If you are using the Boostport/avatica driver, we highly encourage you 
to switch to the apache/calcite-avatica-go driver. In most cases, 
upgrading is just updating the import path in your application.
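
For example, a sketch of the change (the old path below is the 
pre-donation Boostport repository):

// Before:
import _ "github.com/Boostport/avatica"

// After:
import _ "github.com/apache/calcite-avatica-go"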


Please find the announcement below:

The Apache Calcite team is pleased to announce the release of
Apache Calcite Avatica Go 3.0.0.

Avatica is a framework for building database drivers. Avatica
defines a wire API and serialization mechanism for clients to
communicate with a server as a proxy to a database. The reference
Avatica client and server are implemented in Java and communicate
over HTTP. Avatica is a sub-project of Apache Calcite.

The Avatica Go client is a Go database/sql driver that enables Go
programs to communicate with the Avatica server.

Avatica Go 3.0.0 is the first release by the Apache Calcite
project since the code was donated. This release includes support
for the Avatica HSQLDB backend, updated dependencies as well as
numerous bug fixes as described in the release notes:

  https://calcite.apache.org/avatica/docs/go_history.html#v3-0-0

The release is available here:

   
https://www.apache.org/dyn/closer.cgi/calcite/apache-calcite-avatica-go-3.0.0/

We welcome your help and feedback. For more information on how to
report problems, and to get involved, visit the project website at

   https://calcite.apache.org/avatica

Francis Chuang, on behalf of the Apache Calcite Team