Re: Almost ready for HBaseCon+PhoenixCon 2018 SanJose CFP

2018-03-20 Thread Yu Li
Great to know and thanks for the efforts sir.

Minor: in the CfP section, first line, "The event's call for proposals is
available *on on* EasyChair", the double "on" should be merged (smile)

Best Regards,
Yu

On 21 March 2018 at 10:51, Josh Elser  wrote:

> Hi all,
>
> I've published a new website for the upcoming event in June in California
> at [1][2] for the HBase and Phoenix websites, respectively. 1 & 2 are
> identical.
>
> I've not yet updated any links on either website to link to the new page.
> I'd appreciate it if folks could give their feedback on anything outwardly
> wrong, incorrect, etc. If folks are happy, then I'll work on linking from
> the main websites, and coordinating an official announcement via mail
> lists, social media, etc.
>
> The website is generated from [3]. If you really want to be my
> best friend, let me know about the above things which are wrong via
> pull-request ;)
>
> - Josh
>
> [1] https://hbase.apache.org/hbasecon-phoenixcon-2018/
> [2] https://phoenix.apache.org/hbasecon-phoenixcon-2018/
> [3] https://github.com/joshelser/hbasecon-jekyll
>


Almost ready for HBaseCon+PhoenixCon 2018 SanJose CFP

2018-03-20 Thread Josh Elser

Hi all,

I've published a new website for the upcoming event in June in 
California at [1][2] for the HBase and Phoenix websites, respectively. 1 
& 2 are identical.


I've not yet updated any links on either website to link to the new 
page. I'd appreciate it if folks could give their feedback on anything 
outwardly wrong, incorrect, etc. If folks are happy, then I'll work on 
linking from the main websites, and coordinating an official 
announcement via mail lists, social media, etc.


The website is generated from [3]. If you really want to be my 
best friend, let me know about the above things which are wrong via 
pull-request ;)


- Josh

[1] https://hbase.apache.org/hbasecon-phoenixcon-2018/
[2] https://phoenix.apache.org/hbasecon-phoenixcon-2018/
[3] https://github.com/joshelser/hbasecon-jekyll


[jira] [Commented] (PHOENIX-4576) Fix LocalIndexSplitMergeIT tests failing in master branch

2018-03-20 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407331#comment-16407331
 ] 

Josh Elser commented on PHOENIX-4576:
-

{quote}A week ago, I shared my feedback on the patch with [~rajeshbabu] and he 
has been working on fixing the same.
{quote}
Also, sorry, guys, for getting in the way of this work. My apologies.

> Fix LocalIndexSplitMergeIT tests failing in master branch
> -
>
> Key: PHOENIX-4576
> URL: https://issues.apache.org/jira/browse/PHOENIX-4576
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4576.patch, PHOENIX-4576_addendum.patch, 
> PHOENIX-4576_v2.patch
>
>
> Currently LocalIndexSplitMergeIT#testLocalIndexScanAfterRegionsMerge is 
> failing in master branch. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407289#comment-16407289
 ] 

Hudson commented on PHOENIX-4661:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #67 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/67/])
PHOENIX-4661 Handled deleted PTables in the MetadataCache (elserj: rev 
a67869eba1c934197c240ed977678275fa734f00)
* (add) phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixAccessController.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java


> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch, 
> PHOENIX-4661_v2.patch
>
>
> Noticed this when trying to run the Python tests against a 5.0 install
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
>     at org.apache.hadoop.hbase.TableName.(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at 
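
For context, the idempotent behavior the report above expects from DROP TABLE IF EXISTS can be sketched with a toy catalog. The class and method names here are hypothetical illustrations, not Phoenix's actual metadata APIs:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of idempotent DROP TABLE IF EXISTS: dropping a table that no
// longer exists is a silent no-op, never an exception.
public class ToyCatalog {
    private final Set<String> tables = new HashSet<>();

    public void createTable(String name) {
        tables.add(name);
    }

    // Returns true if the table was present and removed, false if it was
    // already gone; in neither case does it throw.
    public boolean dropTableIfExists(String name) {
        return tables.remove(name);
    }
}
```

The bug above violates exactly this contract: the second IF EXISTS drop surfaces a server-side IllegalArgumentException instead of returning quietly.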

[jira] [Commented] (PHOENIX-4023) Handle drop of shared index when UPDATE_CACHE_FREQUENCY is set

2018-03-20 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407233#comment-16407233
 ] 

Thomas D'Silva commented on PHOENIX-4023:
-

[~karanmehta93] this would be a good JIRA for you to work on.
FYI [~sukuna...@gmail.com]

> Handle drop of shared index when UPDATE_CACHE_FREQUENCY is set
> --
>
> Key: PHOENIX-4023
> URL: https://issues.apache.org/jira/browse/PHOENIX-4023
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>
> When UPDATE_CACHE_FREQUENCY is set, a client will continue to use a shared 
> index (i.e. local index or view index). This is not good because once it's 
> dropped, it will stop being maintained which will lead to incorrect query 
> results.
> Some potential options:
> # Ignore UPDATE_CACHE_FREQUENCY if a table has a shared index. Not great 
> because it defeats the purpose of the feature.
> # Delay the index actually being dropped (or at least the stopping of the 
> maintenance) until the cache frequency passes. Makes some assumptions about 
> the UPDATE_CACHE_FREQUENCY as it could vary on a connection by connection 
> basis.
> # Do nothing and let the index continue to be used (figuring its usage will 
> stop when the UPDATE_CACHE_FREQUENCY occurs and that's the cost of using this 
> feature).
> Option #2 seems the most viable. Perhaps a separate config for how long to 
> continue to do index maintenance after an index is dropped.
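
Option #2 above (keep maintaining a dropped shared index until the cache window has passed) can be sketched as follows. This is a hedged illustration with made-up class and method names, not Phoenix's implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of option #2: after a shared index is dropped, keep doing index
// maintenance until UPDATE_CACHE_FREQUENCY ms have elapsed, so clients
// still holding the index in cached metadata get correct query results.
public class DelayedIndexDrop {
    private final long updateCacheFrequencyMs;
    private final Map<String, Long> dropTimeMs = new HashMap<>();

    public DelayedIndexDrop(long updateCacheFrequencyMs) {
        this.updateCacheFrequencyMs = updateCacheFrequencyMs;
    }

    public void markDropped(String indexName, long nowMs) {
        dropTimeMs.put(indexName, nowMs);
    }

    // Maintenance continues while a client with a stale cache could still
    // route queries through the index.
    public boolean shouldMaintain(String indexName, long nowMs) {
        Long dropped = dropTimeMs.get(indexName);
        return dropped == null || (nowMs - dropped) < updateCacheFrequencyMs;
    }
}
```

As the description notes, the frequency can vary per connection, so a real fix would likely need the separate server-side config suggested at the end.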





[jira] [Assigned] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-20 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned PHOENIX-4661:
---

Resolution: Fixed
  Assignee: Ankit Singhal  (was: Josh Elser)

Thanks for the assist, Ankit and Sergey.

> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch, 
> PHOENIX-4661_v2.patch
>
>
> Noticed this when trying to run the Python tests against a 5.0 install
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
>     at org.apache.hadoop.hbase.TableName.(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> 

[jira] [Updated] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-20 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4661:

Fix Version/s: 4.14.0

> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch, 
> PHOENIX-4661_v2.patch
>
>
> Noticed this when trying to run the Python tests against a 5.0 install
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
>     at org.apache.hadoop.hbase.TableName.(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> 

[jira] [Commented] (PHOENIX-4576) Fix LocalIndexSplitMergeIT tests failing in master branch

2018-03-20 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407166#comment-16407166
 ] 

Josh Elser commented on PHOENIX-4576:
-

[~rajeshbabu], still needs to go to 5.x?

> Fix LocalIndexSplitMergeIT tests failing in master branch
> -
>
> Key: PHOENIX-4576
> URL: https://issues.apache.org/jira/browse/PHOENIX-4576
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4576.patch, PHOENIX-4576_addendum.patch, 
> PHOENIX-4576_v2.patch
>
>
> Currently LocalIndexSplitMergeIT#testLocalIndexScanAfterRegionsMerge is 
> failing in master branch. 





[jira] [Updated] (PHOENIX-4023) Handle drop of shared index when UPDATE_CACHE_FREQUENCY is set

2018-03-20 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4023:
--
Description: 
When UPDATE_CACHE_FREQUENCY is set, a client will continue to use a shared 
index (i.e. local index or view index). This is not good because once it's 
dropped, it will stop being maintained which will lead to incorrect query 
results.

Some potential options:
# Ignore UPDATE_CACHE_FREQUENCY if a table has a shared index. Not great 
because it defeats the purpose of the feature.
# Delay the index actually being dropped (or at least the stopping of the 
maintenance) until the cache frequency passes. Makes some assumptions about the 
UPDATE_CACHE_FREQUENCY as it could vary on a connection by connection basis.
# Do nothing and let the index continue to be used (figuring its usage will 
stop when the UPDATE_CACHE_FREQUENCY occurs and that's the cost of using this 
feature).

Option #2 seems the most viable. Perhaps a separate config for how long to 
continue to do index maintenance after an index is dropped.

  was:
When UPDATE_CACHE_FREQUENCY is set, a client will continue to use a shared 
index (i.e. local index or view index). This is not good because once it's 
dropped, it will stop being maintained which will lead to incorrect query 
results.

Some potential options:
# Ignore UPDATE_CACHE_FREQUENCY if a table has a shared index. Not great 
because it defeats the purpose of the feature.
# Delay the index actually being dropped (or at least the stopping of the 
maintenance) until the cache frequency passes. Makes some assumptions about the 
UPDATE_CACHE_FREQUENCY as it could vary on a connection by connection basis.
#. Do nothing and let the index continue to be used (figuring it's usage will 
stop when the UPDATE_CACHE_FREQUENCY occurs and that's the cost of using this 
feature).

Option #2 seems the most viable. Perhaps a separate config for how long to 
continue to do index maintenance after an index is dropped.


> Handle drop of shared index when UPDATE_CACHE_FREQUENCY is set
> --
>
> Key: PHOENIX-4023
> URL: https://issues.apache.org/jira/browse/PHOENIX-4023
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>
> When UPDATE_CACHE_FREQUENCY is set, a client will continue to use a shared 
> index (i.e. local index or view index). This is not good because once it's 
> dropped, it will stop being maintained which will lead to incorrect query 
> results.
> Some potential options:
> # Ignore UPDATE_CACHE_FREQUENCY if a table has a shared index. Not great 
> because it defeats the purpose of the feature.
> # Delay the index actually being dropped (or at least the stopping of the 
> maintenance) until the cache frequency passes. Makes some assumptions about 
> the UPDATE_CACHE_FREQUENCY as it could vary on a connection by connection 
> basis.
> # Do nothing and let the index continue to be used (figuring its usage will 
> stop when the UPDATE_CACHE_FREQUENCY occurs and that's the cost of using this 
> feature).
> Option #2 seems the most viable. Perhaps a separate config for how long to 
> continue to do index maintenance after an index is dropped.





[jira] [Assigned] (PHOENIX-4605) Add TRANSACTION_PROVIDER and DEFAULT_TRANSACTION_PROVIDER instead of using boolean

2018-03-20 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4605:
-

Assignee: James Taylor

> Add TRANSACTION_PROVIDER and DEFAULT_TRANSACTION_PROVIDER instead of using 
> boolean
> --
>
> Key: PHOENIX-4605
> URL: https://issues.apache.org/jira/browse/PHOENIX-4605
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
>
> We should deprecate QueryServices.DEFAULT_TABLE_ISTRANSACTIONAL_ATTRIB and 
> instead have a QueryServices.DEFAULT_TRANSACTION_PROVIDER now that we'll have 
> two transaction providers: Tephra and Omid. Along the same lines, we should 
> add a TRANSACTION_PROVIDER column to SYSTEM.CATALOG  and stop using the 
> IS_TRANSACTIONAL table property. For backwards compatibility, we can assume 
> the provider is Tephra if the existing properties are set to true.
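
The backwards-compatibility rule described above (an explicit provider wins; otherwise a legacy transactional=true implies Tephra) might look roughly like this. The enum and method here are a hypothetical sketch, not the committed API:

```java
// Sketch of resolving a transaction provider with backwards compatibility:
// an explicit TRANSACTION_PROVIDER wins; otherwise the legacy boolean
// IS_TRANSACTIONAL=true implies Tephra, and false means non-transactional.
public class ProviderResolution {
    public enum TransactionProvider { NONE, TEPHRA, OMID }

    public static TransactionProvider resolve(String explicitProvider,
                                              boolean legacyIsTransactional) {
        if (explicitProvider != null) {
            return TransactionProvider.valueOf(explicitProvider.toUpperCase());
        }
        return legacyIsTransactional ? TransactionProvider.TEPHRA
                                     : TransactionProvider.NONE;
    }
}
```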





[jira] [Commented] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-20 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407096#comment-16407096
 ] 

Josh Elser commented on PHOENIX-4661:
-

Cool, ITs passed locally (sans the LocalIndex merge IT whose patch I don't have 
applied locally). Going to commit.

> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch, 
> PHOENIX-4661_v2.patch
>
>
> Noticed this when trying to run the Python tests against a 5.0 install
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
>     at org.apache.hadoop.hbase.TableName.(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> 

[jira] [Commented] (PHOENIX-4636) Include python-phoenixdb into Phoenix

2018-03-20 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407005#comment-16407005
 ] 

Josh Elser commented on PHOENIX-4636:
-

{noformat}
test_time (phoenixdb.tests.test_types.TypesTest) ... FAIL
test_timestamp (phoenixdb.tests.test_types.TypesTest) ... FAIL{noformat}
These two tests are failing for me. Will try to figure out why quickly, but I'm 
thinking I might just push it and deal with them later.

> Include python-phoenixdb into Phoenix
> -
>
> Key: PHOENIX-4636
> URL: https://issues.apache.org/jira/browse/PHOENIX-4636
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Josh Elser
>Priority: Major
>
> Include [https://github.com/lalinsky/python-phoenixdb] in Phoenix.
> Details about the library can be found at:-
>  [http://python-phoenixdb.readthedocs.io/en/latest/]
> Discussion thread:-
> [https://www.mail-archive.com/dev@phoenix.apache.org/msg45424.html]
> commit:-
> [https://github.com/lalinsky/python-phoenixdb/commit/1bb34488dd530ca65f91b29ef16aa7b71f26b806]
>  





[jira] [Comment Edited] (PHOENIX-4594) Perform binary search on guideposts during query compilation

2018-03-20 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16406950#comment-16406950
 ] 

James Taylor edited comment on PHOENIX-4594 at 3/20/18 8:20 PM:


A little more info on this one:
- Changes will be isolated to the {{BaseResultIterators.getParallelScans(byte[] 
startKey, byte[] stopKey)}} method
- We currently walk through the guideposts by decoding them using 
PrefixByteCodec.decode(decoder, input) which prefix encodes the byte[] of the 
guideposts (since there will be a lot of overlap of the bytes from gp to gp).
- I'm pretty sure the issue with the slowness is due to our linear search 
(since the time is increasing as the guidepost width decreases), but you might 
want to confirm that through profiling first.
- Assuming this is the case, we'll need to make a pass through all guideposts 
and put them into a List in which we can perform a binary search. That'll use 
more memory (unless you know of a way to binary search while keeping the data 
encoded), but only during the execution of this method, so perhaps it's ok.
- Take note of the conditions we're searching for as we linearly traverse the gps, 
since those will become binary searches. You should be able to prune the search 
space as you go, since we traverse from smallest to biggest gp.
- We need to determine if there's a guidepost in each region as we use that 
information to drive the timestamp value we return (essentially the "last 
updated" information).
- The ExplainPlanWithStatsEnabledIT has a pretty good set of tests that need to 
keep passing after this change.




was (Author: jamestaylor):
A little more info on this one:
- Changes will be isolated to the {{BaseResultIterators.getParallelScans(byte[] 
startKey, byte[] stopKey)}} method
- I'm pretty sure the issue with the slowness is due to our linear search 
(since the time is increasing as the guidepost width decreases), but you might 
want to confirm that through profiling first.
- We currently walk through the guideposts by decoding them using 
PrefixByteCodec.decode(decoder, input) which prefix encodes the byte[] of the 
guideposts (since there will be a lot of overlap of the bytes from gp to gp).
- Instead, we'll need to make a pass through all guideposts and put them into a 
List in which we can perform a binary search. That'll use more memory (unless 
you know of a way to binary search while keeping the data encoded), but only 
during the execution of this method, so perhaps it's ok.
- Take note of the conditions we're searching for as we linearly traverse the gps, 
since those will become binary searches. You should be able to prune the search 
space as you go, since we traverse from smallest to biggest gp.
- We need to determine if there's a guidepost in each region as we use that 
information to drive the timestamp value we return (essentially the "last 
updated" information).
- The ExplainPlanWithStatsEnabledIT has a pretty good set of tests that need to 
keep passing after this change.



> Perform binary search on guideposts during query compilation
> 
>
> Key: PHOENIX-4594
> URL: https://issues.apache.org/jira/browse/PHOENIX-4594
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Abhishek Singh Chouhan
>Priority: Major
>
> If there are many guideposts, performance will suffer during query 
> compilation because we do a linear search of the guideposts to find the 
> intersection with the scan ranges. Instead, in 
> BaseResultIterators.getParallelScans() we should populate an array of 
> guideposts and perform a binary search. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4594) Perform binary search on guideposts during query compilation

2018-03-20 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16406950#comment-16406950
 ] 

James Taylor commented on PHOENIX-4594:
---

A little more info on this one:
- Changes will be isolated to the {{BaseResultIterators.getParallelScans(byte[] 
startKey, byte[] stopKey)}} method
- I'm pretty sure the issue with the slowness is due to our linear search 
(since the time is increasing as the guidepost width decreases), but you might 
want to confirm that through profiling first.
- We currently walk through the guideposts by decoding them using 
PrefixByteCodec.decode(decoder, input) which prefix encodes the byte[] of the 
guideposts (since there will be a lot of overlap of the bytes from gp to gp).
- Instead, we'll need to make a pass through all guideposts and put them into a 
List in which we can perform a binary search. That'll use more memory (unless 
you know of a way to binary search while keeping the data encoded), but only 
during the execution of this method, so perhaps it's ok.
- Take note of the conditions we're searching for as we linearly traverse the gps, 
since those will become binary searches. You should be able to prune the search 
space as you go, since we traverse from smallest to biggest gp.
- We need to determine if there's a guidepost in each region as we use that 
information to drive the timestamp value we return (essentially the "last 
updated" information).
- The ExplainPlanWithStatsEnabledIT has a pretty good set of tests that need to 
keep passing after this change.
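
The decode-then-binary-search idea above can be sketched in plain Java. This is an illustrative sketch only, not the Phoenix implementation: the name {{firstAtOrAfter}} and the class are hypothetical, and the comparator stands in for HBase's Bytes.BYTES_COMPARATOR. The guideposts are decoded into a sorted List once, then each scan boundary is located with a binary search instead of a linear scan.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class GuidepostSearchSketch {

    // Lexicographic unsigned byte[] comparator, matching the ordering
    // HBase's Bytes.BYTES_COMPARATOR provides for row keys.
    static final Comparator<byte[]> BYTES = (a, b) -> {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int cmp = (a[i] & 0xff) - (b[i] & 0xff);
            if (cmp != 0) return cmp;
        }
        return a.length - b.length;
    };

    /** Index of the first guidepost >= startKey, or guideposts.size() if none. */
    static int firstAtOrAfter(List<byte[]> guideposts, byte[] startKey) {
        int idx = Collections.binarySearch(guideposts, startKey, BYTES);
        // binarySearch returns -(insertionPoint) - 1 when the key is absent.
        return idx >= 0 ? idx : -(idx + 1);
    }

    public static void main(String[] args) {
        List<byte[]> gps = Arrays.asList(
                new byte[]{10}, new byte[]{20}, new byte[]{30});
        System.out.println(firstAtOrAfter(gps, new byte[]{15})); // 1
        System.out.println(firstAtOrAfter(gps, new byte[]{20})); // 1
        System.out.println(firstAtOrAfter(gps, new byte[]{35})); // 3
    }
}
```

As noted above, materializing the decoded List costs memory proportional to the number of guideposts, but only for the duration of getParallelScans(), which is the trade-off being proposed.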



> Perform binary search on guideposts during query compilation
> 
>
> Key: PHOENIX-4594
> URL: https://issues.apache.org/jira/browse/PHOENIX-4594
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Abhishek Singh Chouhan
>Priority: Major
>
> If there are many guideposts, performance will suffer during query 
> compilation because we do a linear search of the guideposts to find the 
> intersection with the scan ranges. Instead, in 
> BaseResultIterators.getParallelScans() we should populate an array of 
> guideposts and perform a binary search. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-20 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4661:

Attachment: PHOENIX-4661_v2.patch

> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch, 
> PHOENIX-4661_v2.patch
>
>
> Noticed this when trying to run the python tests against a 5.0 install
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
>     at org.apache.hadoop.hbase.TableName.<init>(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> 

[jira] [Commented] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-20 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16406861#comment-16406861
 ] 

Josh Elser commented on PHOENIX-4661:
-

v2 (which has the above change) resolved the issue I was seeing. Let me run 
through the ITs.

> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch, 
> PHOENIX-4661_v2.patch
>
>
> Noticed this when trying to run the python tests against a 5.0 install
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
>     at org.apache.hadoop.hbase.TableName.<init>(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> 

[jira] [Commented] (PHOENIX-4576) Fix LocalIndexSplitMergeIT tests failing in master branch

2018-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16406732#comment-16406732
 ] 

Hudson commented on PHOENIX-4576:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1838 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1838/])
PHOENIX-4576 Fix LocalIndexSplitMergeIT tests failing in master branch - 
(rajeshbabu: rev 37b1e11626e5399093447144c53edbbd860df1fc)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/LocalIndexSplitMergeIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java


> Fix LocalIndexSplitMergeIT tests failing in master branch
> -
>
> Key: PHOENIX-4576
> URL: https://issues.apache.org/jira/browse/PHOENIX-4576
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4576.patch, PHOENIX-4576_addendum.patch, 
> PHOENIX-4576_v2.patch
>
>
> Currently LocalIndexSplitMergeIT#testLocalIndexScanAfterRegionsMerge is 
> failing in the master branch. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-20 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16406629#comment-16406629
 ] 

Josh Elser commented on PHOENIX-4661:
-

{code:java}
    PTable table = loadTable(env, key, cacheKey, clientTimeStamp, 
asOfTimeStamp, clientVersion);
    if (table == null || isTableDeleted(table)) { return null; }{code}
Needed to add an extra null check here on the result of {{loadTable}}.

> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch
>
>
> Noticed this when trying to run the python tests against a 5.0 install
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
>     at org.apache.hadoop.hbase.TableName.<init>(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> 

[jira] [Commented] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-20 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16406593#comment-16406593
 ] 

Sergey Soldatov commented on PHOENIX-4661:
--

[~an...@apache.org] Yep, I was thinking about checking the other calls of loadTable 
as well. +1 for your patch.

> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch
>
>
> Noticed this when trying to run the python tests against a 5.0 install
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
>     at org.apache.hadoop.hbase.TableName.<init>(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> 

[jira] [Commented] (PHOENIX-4576) Fix LocalIndexSplitMergeIT tests failing in master branch

2018-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16406590#comment-16406590
 ] 

Hudson commented on PHOENIX-4576:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1814 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1814/])
PHOENIX-4576 Fix LocalIndexSplitMergeIT tests failing in master branch - 
(rajeshbabu: rev b6e33f30e2084879feeb3240444806b1e7069135)
* (edit) 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/LocalIndexSplitMergeIT.java


> Fix LocalIndexSplitMergeIT tests failing in master branch
> -
>
> Key: PHOENIX-4576
> URL: https://issues.apache.org/jira/browse/PHOENIX-4576
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4576.patch, PHOENIX-4576_addendum.patch, 
> PHOENIX-4576_v2.patch
>
>
> Currently LocalIndexSplitMergeIT#testLocalIndexScanAfterRegionsMerge is 
> failing in the master branch. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4576) Fix LocalIndexSplitMergeIT tests failing in master branch

2018-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16406588#comment-16406588
 ] 

Hudson commented on PHOENIX-4576:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #66 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/66/])
PHOENIX-4576 Fix LocalIndexSplitMergeIT tests failing in master branch - 
(rajeshbabu: rev eaa6cdd2dc6f1102f6e7e6943b8a4420c2efc6d8)
* (edit) 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/LocalIndexSplitMergeIT.java


> Fix LocalIndexSplitMergeIT tests failing in master branch
> -
>
> Key: PHOENIX-4576
> URL: https://issues.apache.org/jira/browse/PHOENIX-4576
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4576.patch, PHOENIX-4576_addendum.patch, 
> PHOENIX-4576_v2.patch
>
>
> Currently LocalIndexSplitMergeIT#testLocalIndexScanAfterRegionsMerge is 
> failing in the master branch. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-20 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16406578#comment-16406578
 ] 

Josh Elser commented on PHOENIX-4661:
-

Thanks for the patches, guys. I didn't get far enough into the code to realize 
the fix was that easy :)

Trying this out locally.

> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch
>
>
> Noticed this when trying to run the python tests against a 5.0 install
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
>     at org.apache.hadoop.hbase.TableName.<init>(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> 

[jira] [Assigned] (PHOENIX-4662) NullPointerException in TableResultIterator.java on cache resend

2018-03-20 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak reassigned PHOENIX-4662:
--

Assignee: Csaba Skrabak

> NullPointerException in TableResultIterator.java on cache resend
> 
>
> Key: PHOENIX-4662
> URL: https://issues.apache.org/jira/browse/PHOENIX-4662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Major
> Attachments: PHOENIX-4662.patch
>
>
> In the fix for PHOENIX-4010, there is a potential null dereference. This 
> turned up when we ran a previous version of HashJoinIT with PHOENIX-4010 backported.
> The caches field is initialized to null and may be dereferenced after 
> "Retrying when Hash Join cache is not found on the server ,by sending the 
> cache again".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4576) Fix LocalIndexSplitMergeIT tests failing in master branch

2018-03-20 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4576:
-
Attachment: PHOENIX-4576_addendum.patch

> Fix LocalIndexSplitMergeIT tests failing in master branch
> -
>
> Key: PHOENIX-4576
> URL: https://issues.apache.org/jira/browse/PHOENIX-4576
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4576.patch, PHOENIX-4576_addendum.patch, 
> PHOENIX-4576_v2.patch
>
>
> Currently LocalIndexSplitMergeIT#testLocalIndexScanAfterRegionsMerge is 
> failing in the master branch. 





[jira] [Updated] (PHOENIX-4576) Fix LocalIndexSplitMergeIT tests failing in master branch

2018-03-20 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4576:
-
Attachment: (was: PHOENIX-4576_addendum.patch)

> Fix LocalIndexSplitMergeIT tests failing in master branch
> -
>
> Key: PHOENIX-4576
> URL: https://issues.apache.org/jira/browse/PHOENIX-4576
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4576.patch, PHOENIX-4576_v2.patch
>
>
> Currently LocalIndexSplitMergeIT#testLocalIndexScanAfterRegionsMerge is 
> failing in the master branch. 





[jira] [Updated] (PHOENIX-4576) Fix LocalIndexSplitMergeIT tests failing in master branch

2018-03-20 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4576:
-
Attachment: PHOENIX-4576_addendum.patch

> Fix LocalIndexSplitMergeIT tests failing in master branch
> -
>
> Key: PHOENIX-4576
> URL: https://issues.apache.org/jira/browse/PHOENIX-4576
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4576.patch, PHOENIX-4576_addendum.patch, 
> PHOENIX-4576_v2.patch
>
>
> Currently LocalIndexSplitMergeIT#testLocalIndexScanAfterRegionsMerge is 
> failing in the master branch. 





[jira] [Commented] (PHOENIX-4576) Fix LocalIndexSplitMergeIT tests failing in master branch

2018-03-20 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16406295#comment-16406295
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-4576:
--

Uploaded an addendum to fix the case mentioned by Ankit, along with the test 
case. Going to commit it.

> Fix LocalIndexSplitMergeIT tests failing in master branch
> -
>
> Key: PHOENIX-4576
> URL: https://issues.apache.org/jira/browse/PHOENIX-4576
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4576.patch, PHOENIX-4576_addendum.patch, 
> PHOENIX-4576_v2.patch
>
>
> Currently LocalIndexSplitMergeIT#testLocalIndexScanAfterRegionsMerge is 
> failing in the master branch. 





[jira] [Updated] (PHOENIX-4662) NullPointerException in TableResultIterator.java on cache resend

2018-03-20 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4662:
---
Priority: Major  (was: Blocker)

> NullPointerException in TableResultIterator.java on cache resend
> 
>
> Key: PHOENIX-4662
> URL: https://issues.apache.org/jira/browse/PHOENIX-4662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Csaba Skrabak
>Priority: Major
> Attachments: PHOENIX-4662.patch
>
>
> In the fix for PHOENIX-4010, there is a potential null dereference. This 
> turned up when we ran a previous version of HashJoinIT with PHOENIX-4010 backported.
> The caches field is initialized to null and may be dereferenced after 
> "Retrying when Hash Join cache is not found on the server ,by sending the 
> cache again".





[jira] [Commented] (PHOENIX-4658) IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap

2018-03-20 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16406105#comment-16406105
 ] 

Toshihiro Suzuki commented on PHOENIX-4658:
---

{quote}
Any idea what HBase version introduced the issue?
{quote}
It seems this issue was introduced when HBase added support for reverse scans 
in HBASE-4811. 

> IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap
> ---
>
> Key: PHOENIX-4658
> URL: https://issues.apache.org/jira/browse/PHOENIX-4658
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4658.patch, PHOENIX-4658.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with multiple column families (default column family and 
> "FAM")
> {code}
> CREATE TABLE TBL (
>   COL1 VARCHAR NOT NULL,
>   COL2 VARCHAR NOT NULL,
>   COL3 VARCHAR,
>   FAM.COL4 VARCHAR,
>   CONSTRAINT TRADE_EVENT_PK PRIMARY KEY (COL1, COL2)
> )
> {code}
> 2. Upsert a row
> {code}
> UPSERT INTO TBL (COL1, COL2) values ('AAA', 'BBB')
> {code}
> 3. Query with DESC for the table
> {code}
> SELECT * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC
> {code}
> By following the above steps, we face the following exception.
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TBL,,1521251842845.153781990c0fb4bc34e3f2c721a6f415.: requestSeek cannot be 
> called on ReversedKeyValueHeap
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> Caused by: java.lang.IllegalStateException: requestSeek cannot be called on 
> ReversedKeyValueHeap
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:65)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6485)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6412)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6126)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6112)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 10 more
> {code}





[jira] [Commented] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-20 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16406094#comment-16406094
 ] 

Ankit Singhal commented on PHOENIX-4661:


[~sergey.soldatov], your patch was also right, but we may need to do it in all 
calls of loadTable(). Can you review the v1 patch? I also altered the access 
permissions of GetTable so that it checks for Read/Exec access on the table 
instead of Admin permission.
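
A minimal sketch of the guard such a fix implies. Every name below is hypothetical and only illustrates the pattern, since the real change lives in Phoenix's MetaDataEndpointImpl#dropTable: treat an empty table qualifier as "already dropped" instead of passing it to TableName.valueOf, which otherwise throws the IllegalArgumentException seen in the stack trace.

```java
// Hypothetical illustration only — not the actual Phoenix patch.
// TableName.valueOf rejects an empty qualifier, so a repeated
// DROP TABLE IF EXISTS must treat "no metadata left" as a no-op
// instead of building a TableName from an empty byte[].
public class DropTableGuard {

    // Returns true when the drop should silently succeed because
    // the table is already gone (qualifier missing or empty).
    public static boolean alreadyDropped(byte[] tableQualifier) {
        return tableQualifier == null || tableQualifier.length == 0;
    }

    public static void main(String[] args) {
        System.out.println(alreadyDropped(new byte[0]));       // true
        System.out.println(alreadyDropped("JOSH".getBytes())); // false
    }
}
```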

> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch
>
>
> Noticed this when trying to run the Python tests against a 5.0 install.
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
     at org.apache.hadoop.hbase.TableName.<init>(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must 

[jira] [Updated] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-20 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4661:
---
Attachment: PHOENIX-4661_v1.patch

> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch
>
>
> Noticed this when trying to run the Python tests against a 5.0 install.
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
     at org.apache.hadoop.hbase.TableName.<init>(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> 

[jira] [Updated] (PHOENIX-4662) NullPointerException in TableResultIterator.java on cache resend

2018-03-20 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4662:
---
Priority: Blocker  (was: Major)

> NullPointerException in TableResultIterator.java on cache resend
> 
>
> Key: PHOENIX-4662
> URL: https://issues.apache.org/jira/browse/PHOENIX-4662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Blocker
> Attachments: PHOENIX-4662.patch
>
>
> In the fix for PHOENIX-4010, there is a potential null dereference. This 
> turned up when we ran a previous version of HashJoinIT with PHOENIX-4010 backported.
> The caches field is initialized to null and may be dereferenced after 
> "Retrying when Hash Join cache is not found on the server ,by sending the 
> cache again".





[jira] [Assigned] (PHOENIX-4662) NullPointerException in TableResultIterator.java on cache resend

2018-03-20 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak reassigned PHOENIX-4662:
--

Assignee: (was: Csaba Skrabak)

> NullPointerException in TableResultIterator.java on cache resend
> 
>
> Key: PHOENIX-4662
> URL: https://issues.apache.org/jira/browse/PHOENIX-4662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Csaba Skrabak
>Priority: Blocker
> Attachments: PHOENIX-4662.patch
>
>
> In the fix for PHOENIX-4010, there is a potential null dereference. This 
> turned up when we ran a previous version of HashJoinIT with PHOENIX-4010 backported.
> The caches field is initialized to null and may be dereferenced after 
> "Retrying when Hash Join cache is not found on the server ,by sending the 
> cache again".





[jira] [Updated] (PHOENIX-4662) NullPointerException in TableResultIterator.java on cache resend

2018-03-20 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4662:
---
Attachment: PHOENIX-4662.patch

> NullPointerException in TableResultIterator.java on cache resend
> 
>
> Key: PHOENIX-4662
> URL: https://issues.apache.org/jira/browse/PHOENIX-4662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Major
> Attachments: PHOENIX-4662.patch
>
>
> In the fix for PHOENIX-4010, there is a potential null dereference. This 
> turned up when we ran a previous version of HashJoinIT with PHOENIX-4010 backported.
> The caches field is initialized to null and may be dereferenced after 
> "Retrying when Hash Join cache is not found on the server ,by sending the 
> cache again".





[jira] [Commented] (PHOENIX-4010) Hash Join cache may not be send to all regionservers when we have stale HBase meta cache

2018-03-20 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16406005#comment-16406005
 ] 

Csaba Skrabak commented on PHOENIX-4010:


Oh, it's released. :( Got it, [~an...@apache.org]. Added a link to the new 
PHOENIX-4662.

> Hash Join cache may not be send to all regionservers when we have stale HBase 
> meta cache
> 
>
> Key: PHOENIX-4010
> URL: https://issues.apache.org/jira/browse/PHOENIX-4010
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4010.addendum.patch, PHOENIX-4010.patch, 
> PHOENIX-4010_v1.patch, PHOENIX-4010_v2.patch, PHOENIX-4010_v2_rebased.patch, 
> PHOENIX-4010_v2_rebased_1.patch
>
>
> If the region locations have changed and our HBase meta cache is not updated, 
> then we might not send the hash join cache to all region servers hosting the 
> regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload = false;
> while (true) {
> try {
> // We could surface the package projected 
> HConnectionImplementation.getNumberOfCachedRegionLocations
> // to get the sizing info we need, but this would require a 
> new class in the same package and a cast
> // to this implementation class, so it's probably not worth 
> it.
> List<HRegionLocation> locations = Lists.newArrayList();
> byte[] currentKey = HConstants.EMPTY_START_ROW;
> do {
> HRegionLocation regionLocation = 
> connection.getRegionLocation(
> TableName.valueOf(tableName), currentKey, reload);
> locations.add(regionLocation);
> currentKey = regionLocation.getRegionInfo().getEndKey();
> } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
> return locations;
> {code}
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations = 
> services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> 
> .
>  if ( ! servers.contains(entry) && 
> keyRanges.intersectRegion(regionStartKey, 
> regionEndKey,
> cacheUsingTable.getIndexType() == 
> IndexType.LOCAL)) {  
> // Call RPC once per server
> servers.add(entry);
> {code}
> For example: table 'T' has two regions, R1 and R2, originally hosted on 
> regionserver RS1. While the Phoenix/HBase connection is still active, R2 is 
> moved to RS2, but the stale meta cache still returns the old region locations, 
> i.e. R1 and R2 on RS1. When we start copying the hash table, we copy it for R1 
> and skip R2 because the two regions appear to be hosted on the same 
> regionserver. A query on the table will then fail because it cannot find the 
> hash table cache on RS2 when processing region R2.
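The failure mode can be modeled in a few lines. This is a toy sketch under assumed names — RegionLocatorModel is not a Phoenix class — showing why a stale location cache makes the "once per server" de-duplication skip RS2, and why refreshing the locations (reload) fixes it.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Toy model, not Phoenix code: a locator that serves stale cached
// region locations unless asked to reload from hbase:meta.
public class RegionLocatorModel {
    private final List<String> actual;   // servers really hosting R1, R2
    private List<String> cached;         // client-side (possibly stale) view

    public RegionLocatorModel(List<String> cached, List<String> actual) {
        this.cached = new ArrayList<>(cached);
        this.actual = actual;
    }

    public List<String> locations(boolean reload) {
        if (reload) {
            cached = new ArrayList<>(actual); // refresh the meta cache
        }
        return cached;
    }

    // Mirrors the "Call RPC once per server" de-duplication in
    // ServerCacheClient#addServerCache: one cache send per distinct server.
    public static Set<String> serversToSend(List<String> locations) {
        return new LinkedHashSet<>(locations);
    }

    public static void main(String[] args) {
        // R1 and R2 are believed to be on RS1, but R2 has moved to RS2.
        RegionLocatorModel m = new RegionLocatorModel(
                List.of("RS1", "RS1"), List.of("RS1", "RS2"));
        System.out.println(serversToSend(m.locations(false))); // stale: RS2 skipped
        System.out.println(serversToSend(m.locations(true)));  // fresh: RS1 and RS2
    }
}
```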





[jira] [Created] (PHOENIX-4662) NullPointerException in TableResultIterator.java on cache resend

2018-03-20 Thread Csaba Skrabak (JIRA)
Csaba Skrabak created PHOENIX-4662:
--

 Summary: NullPointerException in TableResultIterator.java on cache 
resend
 Key: PHOENIX-4662
 URL: https://issues.apache.org/jira/browse/PHOENIX-4662
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.12.0
Reporter: Csaba Skrabak
Assignee: Csaba Skrabak


In the fix for PHOENIX-4010, there is a potential null dereference. This turned 
up when we ran a previous version of HashJoinIT with PHOENIX-4010 backported.

The caches field is initialized to null and may be dereferenced after "Retrying 
when Hash Join cache is not found on the server ,by sending the cache again".
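As a sketch of the defensive pattern such a fix would take — the class, field, and method names below are invented for illustration, not the actual TableResultIterator code — the retry path needs to lazily initialize the caches list before dereferencing it:

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only: a simplified stand-in for the resend path in
// TableResultIterator; all names here are hypothetical.
public class CacheResendExample {
    // Like the 'caches' field in the report: starts out null.
    private List<String> caches;

    // Without the null check, the resend path would dereference a
    // null field — the NullPointerException this issue reports.
    public void recordCacheForResend(String cacheId) {
        if (caches == null) {
            caches = new ArrayList<>();
        }
        caches.add(cacheId);
    }

    public int pendingResends() {
        return caches == null ? 0 : caches.size();
    }

    public static void main(String[] args) {
        CacheResendExample it = new CacheResendExample();
        it.recordCacheForResend("hash-join-cache-1");
        System.out.println(it.pendingResends()); // 1
    }
}
```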





[jira] [Updated] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-20 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4661:
-
Attachment: PHOENIX-4661.patch

> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4661.patch
>
>
> Noticed this when trying to run the Python tests against a 5.0 install.
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
     at org.apache.hadoop.hbase.TableName.<init>(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> 

[jira] [Commented] (PHOENIX-4658) IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap

2018-03-20 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405885#comment-16405885
 ] 

James Taylor commented on PHOENIX-4658:
---

Thanks for the work and the patch on this, [~brfrn169].

bq. As a workaround, we can fix this issue on client side. I just attached the 
v1 patch. In this patch, when setting reversed to true for a scan, 
loadColumnFamiliesOnDemand is set to false.
Another alternative would be to disable the reverse scan when a skip scan is 
being done; yet another would be a hint that forces a forward scan. Not sure 
what the best option is (it probably depends on the use case/workload). Any 
idea which HBase version introduced the issue?
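The client-side behavior described in the quoted workaround can be sketched as a
small decision function. The class and method names below are illustrative, not
Phoenix's actual code; only the rule itself (reversed scans force
loadColumnFamiliesOnDemand off) comes from the discussion above:

```java
// Sketch of the v1 patch's client-side workaround, as described above:
// when a scan is reversed, loadColumnFamiliesOnDemand is forced off so the
// server never has to call requestSeek on a ReversedKeyValueHeap.
// Names here are illustrative, not taken from the Phoenix source.
final class ReverseScanWorkaroundSketch {

    /** Effective value of loadColumnFamiliesOnDemand for a scan. */
    static boolean effectiveLoadOnDemand(boolean reversed, boolean requested) {
        // Reversed scans must load all column families eagerly.
        return !reversed && requested;
    }

    public static void main(String[] args) {
        // Forward scan: the caller's preference is honored.
        System.out.println(effectiveLoadOnDemand(false, true)); // true
        // Reversed scan: on-demand loading is disabled regardless.
        System.out.println(effectiveLoadOnDemand(true, true));  // false
    }
}
```

On the raw HBase client this corresponds to calling
Scan#setLoadColumnFamiliesOnDemand(false) on any Scan where
Scan#setReversed(true) is set.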

> IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap
> ---
>
> Key: PHOENIX-4658
> URL: https://issues.apache.org/jira/browse/PHOENIX-4658
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4658.patch, PHOENIX-4658.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with multiple column families (default column family and 
> "FAM")
> {code}
> CREATE TABLE TBL (
>   COL1 VARCHAR NOT NULL,
>   COL2 VARCHAR NOT NULL,
>   COL3 VARCHAR,
>   FAM.COL4 VARCHAR,
>   CONSTRAINT TRADE_EVENT_PK PRIMARY KEY (COL1, COL2)
> )
> {code}
> 2. Upsert a row
> {code}
> UPSERT INTO TBL (COL1, COL2) values ('AAA', 'BBB')
> {code}
> 3. Query with DESC for the table
> {code}
> SELECT * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC
> {code}
> By following the above steps, we face the following exception.
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TBL,,1521251842845.153781990c0fb4bc34e3f2c721a6f415.: requestSeek cannot be 
> called on ReversedKeyValueHeap
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> Caused by: java.lang.IllegalStateException: requestSeek cannot be called on 
> ReversedKeyValueHeap
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:65)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6485)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6412)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6126)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6112)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 10 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4658) IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap

2018-03-20 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4658:
--
Fix Version/s: 4.14.0
