Re: [ANNOUNCE] New VP Apache Phoenix

2020-04-16 Thread Reid Chan
Congratulations, Ankit!

--

Best regards,
R.C




From: Josh Elser 
Sent: 16 April 2020 23:14
To: d...@phoenix.apache.org; user@phoenix.apache.org
Subject: [ANNOUNCE] New VP Apache Phoenix

I'm pleased to announce that the ASF board has just approved the
transition of VP Phoenix from myself to Ankit. As with all things, this
comes with the approval of the Phoenix PMC.

The ASF defines the responsibilities of the VP to be largely oversight
and secretarial. That is, a VP should be watching to make sure that the
project is following all foundation-level obligations and writing the
quarterly project reports about Phoenix to summarize the happenings. Of
course, a VP can choose to use this title to help drive movement and
innovation in the community, as well.

With this VP rotation, the PMC has also implicitly agreed to focus on a
more regular rotation schedule of the VP role. The current plan is to
revisit the VP role in another year.

Please join me in congratulating Ankit on this new role and thank him
for volunteering.

Thank you all for the opportunity to act as VP these last years.

- Josh


Re: Select * gets 0 rows from index table

2020-03-30 Thread Reid Chan
Hi Swaroopa,


>> What is the empty column value in the index table (actual hbase table) for 
>> corresponding row?

B is a string (an IP), C is an int (0 or 1). They are empty in the index, but 
not in the data table. (Not sure whether I understood the question correctly.)


>> Did you use IndexTool to rebuild the index?

No.
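
(For reference, a rebuild with IndexTool is typically invoked along these lines; this is a sketch based on the Phoenix secondary-index docs, and the table names and output path are placeholders:)

```
hbase org.apache.phoenix.mapreduce.index.IndexTool \
  --data-table DATA_TABLE \
  --index-table MY_INDEX \
  --output-path /tmp/MY_INDEX_HFILES
```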




--

Best regards,
R.C




From: swaroopa kadam 
Sent: 31 March 2020 00:38
To: user@phoenix.apache.org
Subject: Re: Select * gets 0 rows from index table

Hey Reid,

Some questions:
What is the empty column value in the index table (actual hbase table) for 
corresponding row?

Did you use IndexTool to rebuild the index?

Thanks


On Mon, Mar 30, 2020 at 9:22 AM Reid Chan <reidddc...@outlook.com> wrote:
Hey Josh! I'm glad you showed up!

Version: 4.15-HBase-1.4

>> Did you `select * from index_table` verbatim?

Yes. I found that as long as the query goes to the index (checked via 
EXPLAIN), the result is always empty. So I checked what's inside the index 
with `select * from index`.


>> Caveat about covered columns in a query

The data table is (A primary key, B, C, D, E, ...), with an index on B that 
includes C. Columns B and C are nullable.
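
In Phoenix DDL, that schema would look roughly like the following (a hypothetical reconstruction; all names and types are placeholders inferred from this thread):

```sql
-- Hypothetical sketch of the data table and covered index described above
CREATE TABLE data_table (
    A VARCHAR NOT NULL PRIMARY KEY,
    B VARCHAR,   -- nullable string holding an IP
    C INTEGER,   -- nullable int, 0 or 1
    D VARCHAR,
    E VARCHAR
);

CREATE INDEX idx_b ON data_table (B) INCLUDE (C);
```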


>> What's the state of the index?

INDEX_STATE shows "a". What does that mean?


>> * Did you use Phoenix to create the data+index tables and to populate the 
>> data in those tables?

Yes.



--

Best regards,
R.C




From: Josh Elser <els...@apache.org>
Sent: 30 March 2020 23:42
To: user@phoenix.apache.org
Subject: Re: Select * gets 0 rows from index table

Hey Reid!

Can you clarify a couple of things?

* What version of Phoenix?
* Did you `select * from index_table` verbatim? Most of the time, when
you have an index table, you'd be interacting with the data table which
(behind the scenes) goes to the index table.
* * Caveat about covered columns in a query
* What's the state of the index? Look at the INDEX_STATE column in
system.catalog for your index table.
* Did you use Phoenix to create the data+index tables and to populate
the data in those tables?
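
The INDEX_STATE check above can be run with a query along these lines (a sketch; 'MY_INDEX' is a placeholder for your index table name):

```sql
SELECT TABLE_SCHEM, TABLE_NAME, INDEX_STATE
FROM SYSTEM.CATALOG
WHERE TABLE_NAME = 'MY_INDEX' AND INDEX_STATE IS NOT NULL;
```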

On 3/30/20 4:35 AM, Reid Chan wrote:
> Hi team,
>
> I encountered a problem where `select * from index_table limit x` returns 0 
> rows, but the underlying HBase table has data (observed via scan in the 
> hbase shell), and any query that goes to the index table returns 0 rows as 
> well.
>
> Meanwhile, the server logged the following error message: 
> "index.GlobalIndexChecker: Could not find the newly rebuilt index row with 
> row key xxx for table yyy."
>
> Looking forward to getting some hints from experienced users and devs.
>
> Thanks!
>
> --
>
> Best regards,
> R.C
>
>
--

Swaroopa Kadam
about.me/swaroopa_kadam



Re: Select * gets 0 rows from index table

2020-03-30 Thread Reid Chan
Hey Josh! I'm glad you showed up!

Version: 4.15-HBase-1.4

>> Did you `select * from index_table` verbatim?

Yes. I found that as long as the query goes to the index (checked via 
EXPLAIN), the result is always empty. So I checked what's inside the index 
with `select * from index`.


>> Caveat about covered columns in a query

The data table is (A primary key, B, C, D, E, ...), with an index on B that 
includes C. Columns B and C are nullable.


>> What's the state of the index?

INDEX_STATE shows "a". What does that mean?


>> * Did you use Phoenix to create the data+index tables and to populate the 
>> data in those tables?

Yes.



--

Best regards,
R.C




From: Josh Elser 
Sent: 30 March 2020 23:42
To: user@phoenix.apache.org
Subject: Re: Select * gets 0 rows from index table

Hey Reid!

Can you clarify a couple of things?

* What version of Phoenix?
* Did you `select * from index_table` verbatim? Most of the time, when
you have an index table, you'd be interacting with the data table which
(behind the scenes) goes to the index table.
* * Caveat about covered columns in a query
* What's the state of the index? Look at the INDEX_STATE column in
system.catalog for your index table.
* Did you use Phoenix to create the data+index tables and to populate
the data in those tables?

On 3/30/20 4:35 AM, Reid Chan wrote:
> Hi team,
>
> I encountered a problem where `select * from index_table limit x` returns 0 
> rows, but the underlying HBase table has data (observed via scan in the 
> hbase shell), and any query that goes to the index table returns 0 rows as 
> well.
>
> Meanwhile, the server logged the following error message: 
> "index.GlobalIndexChecker: Could not find the newly rebuilt index row with 
> row key xxx for table yyy."
>
> Looking forward to getting some hints from experienced users and devs.
>
> Thanks!
>
> --
>
> Best regards,
> R.C
>
>


Select * gets 0 rows from index table

2020-03-30 Thread Reid Chan
Hi team,

I encountered a problem where `select * from index_table limit x` returns 0 
rows, but the underlying HBase table has data (observed via scan in the hbase 
shell), and any query that goes to the index table returns 0 rows as well.

Meanwhile, the server logged the following error message: 
"index.GlobalIndexChecker: Could not find the newly rebuilt index row with row 
key xxx for table yyy."

Looking forward to getting some hints from experienced users and devs.

Thanks!

--

Best regards,
R.C




Re: Curl kerberized QueryServer using protobuf type

2019-07-09 Thread Reid Chan
Thanks Josh!

Finally, I switched to JSON serialization as a simple workaround.
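
For anyone landing on this thread: the JSON workaround amounts to posting Avatica's JSON request objects instead of protobuf. A minimal sketch of building the open-connection request body (host/port and the connection id are placeholders; the field names follow Avatica's JSON reference):

```python
import json

# JSON analogue of the OpenConnectionRequest protobuf message from the
# script quoted below in this thread.
open_conn = json.dumps({
    "request": "openConnection",
    "connectionId": "conn-example-1",
})
print(open_conn)

# Then POST it (sketch):
#   curl --negotiate -u : -H 'Content-Type: application/json' \
#        --data "$open_conn" http://hostname:8765
```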




--

Best regards,
R.C




From: Josh Elser 
Sent: 02 July 2019 22:55
To: user@phoenix.apache.org
Subject: Re: Curl kerberized QueryServer using protobuf type

Hey Reid,

Protobuf is a binary format -- this is erroring out because you're
sending it plain text.

You're going to have quite a hard time constructing messages in bash
alone. There are lots of language bindings[1]. You should be able to
pick any of these to help encode/decode messages (if you want to use
cURL as your "transport").

IMO, Avatica's protocol is too complex (by necessity of implementing all
of the JDBC API) to just throw some hand-constructed JSON at. I think
the better solution would be to think about some simpler API that
exposes just the bare bones if you want something developer focused.

[1] https://developers.google.com/protocol-buffers/docs/reference/overview

On 7/1/19 5:53 AM, Reid Chan wrote:
> Hi team and other users,
>
> Following is the script used for connecting to QS,
> {code}
> #!/usr/bin/env bash
>
> set -u
>
> AVATICA="hostname:8765"
> echo $AVATICA
> CONNECTION_ID="conn-$(whoami)-$(date +%s)"
>
> echo "Open connection"
> openConnectionReq="message OpenConnectionRequest {string connection_id = 
> $CONNECTION_ID;}"
> curl -i --negotiate -u : -w "\n" "$AVATICA" -H "Content-Type: 
> application/protobuf" --data "$openConnectionReq"
> {code}
>
> But it ended with:
> org.apache.calcite.avatica.proto.Responses$ErrorResponse�
> �rg.apache.calcite.avatica.com.google.protobuf.InvalidProtocolBufferException$InvalidWireTypeException:
>  Protocol message tag had invalid wire type.
>   at 
> org.apache.calcite.avatica.com.google.protobuf.InvalidProtocolBufferException.invalidWireType(InvalidProtocolBufferException.java:111)
>   at 
> org.apache.calcite.avatica.com.google.protobuf.CodedInputStream$ArrayDecoder.skipField(CodedInputStream.java:591)
>   at 
> org.apache.calcite.avatica.proto.Common$WireMessage.(Common.java:12544)
>   at 
> org.apache.calcite.avatica.proto.Common$WireMessage.(Common.java:12511)
>   at 
> org.apache.calcite.avatica.proto.Common$WireMessage$1.parsePartialFrom(Common.java:13054)
>   at 
> org.apache.calcite.avatica.proto.Common$WireMessage$1.parsePartialFrom(Common.java:13049)
>   at 
> org.apache.calcite.avatica.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:91)
>   at 
> org.apache.calcite.avatica.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:96)
>   at 
> org.apache.calcite.avatica.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
>   at 
> org.apache.calcite.avatica.com.google.protobuf.GeneratedMessageV3.parseWithIOException(GeneratedMessageV3.java:311)
>   at 
> org.apache.calcite.avatica.proto.Common$WireMessage.parseFrom(Common.java:12757)
>   at 
> org.apache.calcite.avatica.remote.ProtobufTranslationImpl.parseRequest(ProtobufTranslationImpl.java:410)
>   at 
> org.apache.calcite.avatica.remote.ProtobufHandler.decode(ProtobufHandler.java:51)
>   at 
> org.apache.calcite.avatica.remote.ProtobufHandler.decode(ProtobufHandler.java:31)
>   at 
> org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:93)
>   at 
> org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:123)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:121)
>   at 
> org.apache.phoenix.queryserver.server.QueryServer$PhoenixDoAsCallback$1.run(QueryServer.java:500)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>   at 
> org.apache.phoenix.queryserver.server.QueryServer$PhoenixDoAsCallback.doAsRemoteUser(QueryServer.java:497)
>   at 
> org.apache.calcite.avatica.server.HttpServer$Builder$1.doAsRemoteUser(HttpServer.java:884)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:120)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:542)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.ha

Questions about ZK Load Balancer

2019-07-09 Thread Reid Chan
Hi community,

Recently, I've been trying to apply the ZK-based Load Balancer in a production 
environment.

But it looks like a half-done feature: I couldn't find how a query server 
client gets a registered QS from the LB anywhere in the client-side codebase.

There is one method, LoadBalancer#getSingleServiceLocation, that is supposed 
to be called from the client side, but it is dead code and never invoked.

Any code pointer, suggestion, or advice would be highly appreciated.


--

Best regards,
R.C




Curl kerberized QueryServer using protobuf type

2019-07-01 Thread Reid Chan
Hi team and other users,

Following is the script used for connecting to QS,
{code}
#!/usr/bin/env bash

set -u

AVATICA="hostname:8765"
echo $AVATICA
CONNECTION_ID="conn-$(whoami)-$(date +%s)"

echo "Open connection"
openConnectionReq="message OpenConnectionRequest {string connection_id = 
$CONNECTION_ID;}"
curl -i --negotiate -u : -w "\n" "$AVATICA" -H "Content-Type: 
application/protobuf" --data "$openConnectionReq"
{code}

But it ended with:
org.apache.calcite.avatica.proto.Responses$ErrorResponse�
�rg.apache.calcite.avatica.com.google.protobuf.InvalidProtocolBufferException$InvalidWireTypeException:
 Protocol message tag had invalid wire type.
at 
org.apache.calcite.avatica.com.google.protobuf.InvalidProtocolBufferException.invalidWireType(InvalidProtocolBufferException.java:111)
at 
org.apache.calcite.avatica.com.google.protobuf.CodedInputStream$ArrayDecoder.skipField(CodedInputStream.java:591)
at 
org.apache.calcite.avatica.proto.Common$WireMessage.(Common.java:12544)
at 
org.apache.calcite.avatica.proto.Common$WireMessage.(Common.java:12511)
at 
org.apache.calcite.avatica.proto.Common$WireMessage$1.parsePartialFrom(Common.java:13054)
at 
org.apache.calcite.avatica.proto.Common$WireMessage$1.parsePartialFrom(Common.java:13049)
at 
org.apache.calcite.avatica.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:91)
at 
org.apache.calcite.avatica.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:96)
at 
org.apache.calcite.avatica.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
at 
org.apache.calcite.avatica.com.google.protobuf.GeneratedMessageV3.parseWithIOException(GeneratedMessageV3.java:311)
at 
org.apache.calcite.avatica.proto.Common$WireMessage.parseFrom(Common.java:12757)
at 
org.apache.calcite.avatica.remote.ProtobufTranslationImpl.parseRequest(ProtobufTranslationImpl.java:410)
at 
org.apache.calcite.avatica.remote.ProtobufHandler.decode(ProtobufHandler.java:51)
at 
org.apache.calcite.avatica.remote.ProtobufHandler.decode(ProtobufHandler.java:31)
at 
org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:93)
at 
org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
at 
org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:123)
at 
org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:121)
at 
org.apache.phoenix.queryserver.server.QueryServer$PhoenixDoAsCallback$1.run(QueryServer.java:500)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at 
org.apache.phoenix.queryserver.server.QueryServer$PhoenixDoAsCallback.doAsRemoteUser(QueryServer.java:497)
at 
org.apache.calcite.avatica.server.HttpServer$Builder$1.doAsRemoteUser(HttpServer.java:884)
at 
org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:120)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:542)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
EInvalidWireTypeException: Protocol message tag had invalid wire type. 
*02


It looks like something goes wrong when decoding the message.

I can only confirm that Kerberos authentication passed, because the klist 
command showed the HTTP service principal after the script executed.

I can't find any reference on how to curl QS using protobuf instead of JSON; 
hopefully I can get some help from the community!


--

Best regards,
R.C




Phoenix admin?

2018-02-22 Thread Reid Chan
Hi team,

I created a table through the HBase API, and then created a view for it in 
Phoenix. For some reasons, I dropped the view, but the coprocessors are still 
attached to this table.

From the hbase webui:
'recommend:vulgar_feed', {TABLE_ATTRIBUTES => {coprocessor$1 => 
'|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', coprocessor$2 
=> 
'|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|', 
coprocessor$3 => 
'|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
coprocessor$4 => 
'|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|'}, {NAME 
=> 'b'}

From the regionserver log:
2018-02-16 17:42:50,022 WARN 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver: Unable to 
collect stats for recommend:vulgar_feed
java.io.IOException: Unable to initialize the guide post depth
at 
org.apache.phoenix.schema.stats.DefaultStatisticsCollector.init(DefaultStatisticsCollector.java:369)
at 
org.apache.phoenix.schema.stats.DefaultStatisticsCollector.createCompactionScanner(DefaultStatisticsCollector.java:359)
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver$2.run(UngroupedAggregateRegionObserver.java:923)
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver$2.run(UngroupedAggregateRegionObserver.java:912)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at 
org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:445)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:426)
at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:210)
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.preCompact(UngroupedAggregateRegionObserver.java:912)
at 
org.apache.hadoop.hbase.coprocessor.BaseRegionObserver.preCompact(BaseRegionObserver.java:195)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$9.call(RegionCoprocessorHost.java:595)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1722)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preCompact(RegionCoprocessorHost.java:590)
at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.postCreateCoprocScanner(Compactor.java:253)
at 
org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:94)
at 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:119)
at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1223)
at 
org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1856)
at 
org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:526)
at 
org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:562)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
(42M03): Table undefined. tableName=recommend.vulgar_feed
at 
org.apache.phoenix.schema.PMetaDataImpl.getTableRef(PMetaDataImpl.java:71)
at 
org.apache.phoenix.jdbc.PhoenixConnection.getTable(PhoenixConnection.java:575)
at 
org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:444)
at 
org.apache.phoenix.schema.stats.DefaultStatisticsCollector.initGuidepostDepth(DefaultStatisticsCollector.java:160)
at 
org.apache.phoenix.schema.stats.DefaultStatisticsCollector.init(DefaultStatisticsCollector.java:367)


My question is: is it possible to drop those coprocessors through something 
like a Phoenix admin tool, or some other tool I missed? Although no harm is 
done, I just think this drop isn't clean enough...
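
One manual way to detach them, assuming no Phoenix view still needs them, is the hbase shell's table_att_unset method (a sketch using the table and attribute names from the listing above; repeat the alter for each coprocessor$N attribute):

```
hbase> disable 'recommend:vulgar_feed'
hbase> alter 'recommend:vulgar_feed', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
hbase> enable 'recommend:vulgar_feed'
```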


Re: Phoenix connection to kerberized hbase fails

2017-04-19 Thread Reid Chan
Hi rafa,

I followed the guide on the site:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/configuring-phoenix-to-run-in-a-secure-cluster.html
, and linked those configuration files under the phoenix bin directory.

But the problem remains.

Best regards,
---R



--
View this message in context: 
http://apache-phoenix-user-list.1124778.n5.nabble.com/Phoenix-connection-to-kerberized-hbase-fails-tp3419p3422.html
Sent from the Apache Phoenix User List mailing list archive at Nabble.com.


Re: Phoenix connection to kerberized hbase fails

2017-04-19 Thread Reid Chan
Version information: phoenix-4.10.0-HBase-1.2, hbase-1.2.4



--
View this message in context: 
http://apache-phoenix-user-list.1124778.n5.nabble.com/Phoenix-connection-to-kerberized-hbase-fails-tp3419p3420.html
Sent from the Apache Phoenix User List mailing list archive at Nabble.com.