[jira] [Updated] (PHOENIX-5247) DROP TABLE and DROP VIEW commands fail to drop second or higher level child views

2019-04-17 Thread Kadir OZDEMIR (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir OZDEMIR updated PHOENIX-5247:
---
Attachment: PHOENIX-5247.4.14-HBase-1.2.001.patch

> DROP TABLE and DROP VIEW commands fail to drop second or higher level child 
> views
> -
>
> Key: PHOENIX-5247
> URL: https://issues.apache.org/jira/browse/PHOENIX-5247
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.2
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.14.2
>
> Attachments: PHOENIX-5247.4.14-HBase-1.2.001.patch, 
> PHOENIX-5247.4.14.1-HBase-1.2.001.patch
>
>
> We have seen a large number of orphan views in our production environments. The 
> method used to drop tables and views, doDropTable, drops only the first-level 
> child views of a table. This appears to be the root cause of the orphan views. 
> doDropTable() recurses only when the table type is TABLE or SYSTEM, but the 
> table type for views is VIEW, and the findChildViews method returns only the 
> first-level child views. As a result, doDropTable never drops views of views 
> (i.e., second- or higher-level views).
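
As an illustration of the shape of the fix (hypothetical names, not the actual MetaDataEndpointImpl code), a drop that recurses through the view hierarchy instead of stopping at the first level might look like:

```java
import java.util.*;

// Hypothetical sketch: a drop must recurse through child views of views,
// not just the first-level children that findChildViews returns; otherwise
// second-level views are orphaned.
public class RecursiveViewDrop {
    // parent table/view name -> directly derived child views
    static final Map<String, List<String>> CHILDREN = new HashMap<>();
    static {
        CHILDREN.put("T", List.of("V1"));   // first-level view on table T
        CHILDREN.put("V1", List.of("V2"));  // second-level view, orphaned by the bug
    }

    // Collects the given table/view and ALL of its descendant views, depth-first.
    static List<String> collectDropTargets(String name) {
        List<String> targets = new ArrayList<>();
        targets.add(name);
        for (String child : CHILDREN.getOrDefault(name, Collections.emptyList())) {
            targets.addAll(collectDropTargets(child)); // recurse into views of views
        }
        return targets;
    }

    public static void main(String[] args) {
        System.out.println(collectDropTargets("T")); // prints [T, V1, V2]
    }
}
```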



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5237) Support UPPER/LOWER functions in SQL statement

2019-04-17 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam resolved PHOENIX-5237.
-
Resolution: Duplicate

This feature is already available. :) 

> Support UPPER/LOWER functions in SQL statement
> --
>
> Key: PHOENIX-5237
> URL: https://issues.apache.org/jira/browse/PHOENIX-5237
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Minor
>






[jira] [Assigned] (PHOENIX-5241) Write to table with global index failed if meta of index changed (split, move, etc)

2019-04-17 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5241:
---

Assignee: Swaroopa Kadam

> Write to table with global index failed if meta of index changed (split, 
> move, etc)
> ---
>
> Key: PHOENIX-5241
> URL: https://issues.apache.org/jira/browse/PHOENIX-5241
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
> Environment: phoenix-4.14.1-HBase-1.2
>Reporter: cuizhaohua
>Assignee: Swaroopa Kadam
>Priority: Major
>
> HBase version: 1.2.6
> phoenix version: phoenix-4.14.1-HBase-1.2 (downloaded from 
> [http://phoenix.apache.org/download.html])
> phoenix client version: phoenix-4.14.1-HBase-1.2 (downloaded from 
> [http://phoenix.apache.org/download.html])
> step 1:
> 0: jdbc:phoenix::/hbase> UPSERT INTO test_meta_change VALUES ('1', 'foo');
>  1 row affected (0.298 seconds)
>  
> step 2: move a region of the index of table test_meta_change
> hbase(main):008:0> move '0b158edd48c60560c358a3208fee8e24'
>  0 row(s) in 0.0500 seconds
>  
> step 3: get the error
> 0: jdbc:phoenix::/hbase> UPSERT INTO test_meta_change VALUES ('2', 'foo');
>  19/04/15 15:12:29 WARN client.AsyncProcess: #1, table=TEST_META_CHANGE, 
> attempt=1/35 failed=1ops, last exception: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1121 (XCL21): Write to 
> the index failed. disableIndexOnFailure=true, Failed to write to multiple 
> index tables: [TEST_META_CHANGE_IDX] ,serverTimestamp=1555312349291,
>  at 
> org.apache.phoenix.util.ServerUtil.wrapInDoNotRetryIOException(ServerUtil.java:265)
>  at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:172)
>  at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:161)
>  at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
>  at 
> org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:623)
>  at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:583)
>  at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:566)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1034)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1030)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3324)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2823)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:758)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:720)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2168)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>  at java.lang.Thread.run(Thread.java:745)
>  Caused by: java.sql.SQLException: ERROR 1121 (XCL21): Write to the index 
> failed. disableIndexOnFailure=true, Failed to write to multiple index tables: 
> [TEST_META_CHANGE_IDX]
>  at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
>  at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>  at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:171)
>  ... 22 more
>  Caused by: 
> org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException: 
> disableIndexOnFailure=true, Failed to write to multiple index tables: 
> [TEST_META_CHANGE_IDX]
>  at 
> org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCommitter.write(TrackingParallelWriterIndexCommitter.java:236)
>  at 
> org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.

[jira] [Updated] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4993:

Fix Version/s: 4.14.2

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-4.x-HBase-1.3.02.patch, PHOENIX-4993-4.x-HBase-1.3.03.patch, 
> PHOENIX-4993-4.x-HBase-1.3.04.patch, PHOENIX-4993-4.x-HBase-1.3.05.patch, 
> PHOENIX-4993-master.01.patch, PHOENIX-4993-master.02.patch, 
> PHOENIX-4993-master.addendum-1.patch
>
>
> This issue is about a Region Server being killed when one region is closing 
> while another region is trying to write index updates.
> When a data table region closes, it closes the region-server-level 
> cached/shared connections, which can interrupt another region's 
> index/index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallelWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections by calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> --Region2: Writing index updates
> Index updates fail because the connections are closed, which leads to a 
> RejectedExecutionException or a null connection. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get the 
> SYSTEM.CATALOG table using the cached connections. Since it cannot reach 
> SYSTEM.CATALOG, KillServerOnFailurePolicy is triggered.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  
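
A minimal sketch of the direction a fix could take (illustrative names, not the actual ServerUtil code): reference-count the region-server-level shared connection so that one region closing cannot tear it down while other regions still use it.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: a region-server-level shared resource that is only
// really closed when the last region using it releases it, so one region
// closing cannot break index writes from other regions.
public class SharedConnectionHolder {
    private final AtomicInteger refCount = new AtomicInteger(0);
    private volatile boolean closed = false;

    public void acquire() {
        if (closed) throw new IllegalStateException("already closed");
        refCount.incrementAndGet();
    }

    // Returns true only when the final reference is released and the
    // underlying connection would actually be closed.
    public boolean release() {
        if (refCount.decrementAndGet() == 0) {
            closed = true; // real code would close the HBase connection here
            return true;
        }
        return false;
    }

    public boolean isClosed() { return closed; }

    public static void main(String[] args) {
        SharedConnectionHolder h = new SharedConnectionHolder();
        h.acquire(); h.acquire();         // two regions share the connection
        h.release();                      // region 1 closes: connection survives
        System.out.println(h.isClosed()); // prints false
        h.release();                      // last region closes: now really closed
        System.out.println(h.isClosed()); // prints true
    }
}
```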





[jira] [Updated] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5094:

Fix Version/s: 4.14.2

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Mihir Monani
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5094-4.14-HBase-1.3.01.patch, 
> PHOENIX-5094-4.14-HBase-1.3.02.patch, PHOENIX-5094-4.14-HBase-1.3.03.patch, 
> PHOENIX-5094-4.14-HBase-1.3.04.patch, PHOENIX-5094-4.14-HBase-1.3.05.patch, 
> PHOENIX-5094-master.01.patch, PHOENIX-5094-master.02.patch, 
> PHOENIX-5094-master.03.patch
>
>
> Suppose an index is in the INACTIVE state and client load runs continuously. 
> While the index is INACTIVE, clients keep maintaining it.
> Before the rebuilder can run and bring the index back in sync with the data 
> table, if some index mutation fails on the client side, the client will 
> transition the index state from INACTIVE to PENDING_DISABLE.
> If the client then succeeds in writing the mutation on a subsequent retry, it 
> will transition the index state again, from PENDING_DISABLE to ACTIVE.
> This scenario leaves part of the index out of sync with the data table.
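
The problematic transition can be sketched as a guard on the client-side state change (hypothetical names, not Phoenix's actual state machine code): a write success after PENDING_DISABLE should fall back to INACTIVE so the rebuilder can catch the index up, rather than jumping straight to ACTIVE.

```java
// Hypothetical sketch: validating client-side index state transitions so a
// retry success cannot move an out-of-sync index straight back to ACTIVE.
public class IndexStateGuard {
    enum PIndexState { ACTIVE, INACTIVE, PENDING_DISABLE, DISABLE }

    // A client write success should only keep the index ACTIVE if it was
    // never taken out of sync; otherwise the rebuilder must finish first.
    static PIndexState onClientWriteSuccess(PIndexState current) {
        if (current == PIndexState.PENDING_DISABLE) {
            // The index may have missed earlier mutations: go back to
            // INACTIVE and let the partial rebuilder bring it in sync.
            return PIndexState.INACTIVE;
        }
        return current;
    }

    public static void main(String[] args) {
        System.out.println(onClientWriteSuccess(PIndexState.PENDING_DISABLE)); // prints INACTIVE
    }
}
```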





[jira] [Updated] (PHOENIX-5111) IndexTool gives NPE when trying to do a direct build without an output-path set

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5111:

Fix Version/s: 4.14.2

> IndexTool gives NPE when trying to do a direct build without an output-path 
> set
> ---
>
> Key: PHOENIX-5111
> URL: https://issues.apache.org/jira/browse/PHOENIX-5111
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Geoffrey Jacoby
>Assignee: Gokcen Iskender
>Priority: Minor
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5111.patch, PHOENIX-5111.patch
>
>
> The IndexTool has several modes. If the -direct or -partial-rebuild flags are 
> not set, the tool assumes the user wants to rebuild the index by creating 
> HFiles and then bulk-loading them back into HBase, and requires an extra 
> -output-path flag to determine where the temporary HFiles should live. 
> In practice, we've found that -direct mode (which loads using HBase Puts) is 
> quicker. However, even though there's logic to not require the -output-path 
> flag when -direct mode is chosen, the IndexTool will throw an NPE if it's not 
> present. 
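
The intended argument handling can be sketched like this (illustrative names, not the actual IndexTool code): the output path should only be required, and only dereferenced, when HFiles are bulk-loaded, never in direct mode.

```java
// Hypothetical sketch of the IndexTool argument check: -output-path is only
// needed for the HFile bulk-load path, so direct mode must neither require
// nor dereference it (the source of the NPE).
public class IndexToolArgs {
    static String resolveOutputPath(boolean directMode, String outputPath) {
        if (directMode) {
            return null; // direct mode writes HBase Puts; no temporary HFiles needed
        }
        if (outputPath == null) {
            throw new IllegalArgumentException("-output-path is required unless -direct is set");
        }
        return outputPath;
    }

    public static void main(String[] args) {
        System.out.println(resolveOutputPath(false, "/tmp/hfiles")); // prints /tmp/hfiles
    }
}
```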





[jira] [Updated] (PHOENIX-5080) Index becomes Active during Partial Index Rebuilder if Index Failure happens

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5080:

Fix Version/s: 4.14.2

> Index becomes Active during Partial Index Rebuilder if Index Failure happens
> 
>
> Key: PHOENIX-5080
> URL: https://issues.apache.org/jira/browse/PHOENIX-5080
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Mihir Monani
>Assignee: Mihir Monani
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5080-4.x-HBase-1.3.01.patch, 
> PHOENIX-5080-4.x-HBase-1.3.02.patch, PHOENIX-5080-4.x-HBase-1.3.02.patch, 
> PHOENIX-5080-4.x-HBase-1.3.03.patch, PHOENIX-5080-4.x-HBase-1.3.04.patch, 
> PHOENIX-5080-4.x-HBase-1.3.05.patch, PHOENIX-5080-4.x-HBase-1.3.06.patch, 
> PHOENIX-5080-4.x-HBase-1.3.06.patch, PHOENIX-5080.01.patch, 
> PHOENIX-5080.01.patch
>
>
> After PHOENIX-4130 and PHOENIX-4600, if there is an index failure during a 
> partial index rebuild, the rebuilder will retry writing the index updates. If 
> the retry succeeds, it transitions the index from INACTIVE to ACTIVE, even 
> before the rebuilder finishes.
> Here is where I think it goes wrong:
> {code:java}
> PhoenixIndexFailurePolicy.java :- 
> public static void doBatchWithRetries(MutateCommand mutateCommand,
>             IndexWriteException iwe, PhoenixConnection connection, 
> ReadOnlyProps config) throws IOException {
> 
> while (canRetryMore(numRetry++, maxTries, canRetryUntil)) {
> ...
> handleIndexWriteSuccessFromClient(iwe, connection);
> ...
> }
> }
> 
> private static void handleIndexWriteSuccessFromClient(IndexWriteException 
> indexWriteException, PhoenixConnection conn) {
>         handleExceptionFromClient(indexWriteException, conn, 
> PIndexState.ACTIVE);
> }
> {code}
>  
>  
>  
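
The gist of the problem can be captured in a small decision sketch (hypothetical names, not the actual PhoenixIndexFailurePolicy code): a retried write succeeding while the rebuild is still in progress should leave the index INACTIVE, and only the rebuilder itself should promote it to ACTIVE on completion.

```java
// Hypothetical sketch: when a retried index write succeeds while the partial
// rebuilder is still running, the state should stay INACTIVE instead of
// jumping to ACTIVE before the rebuild has finished.
public class RebuildAwarePolicy {
    enum PIndexState { ACTIVE, INACTIVE }

    static PIndexState stateAfterRetrySuccess(boolean rebuildInProgress) {
        // Only the rebuilder itself, on completion, should promote to ACTIVE.
        return rebuildInProgress ? PIndexState.INACTIVE : PIndexState.ACTIVE;
    }

    public static void main(String[] args) {
        System.out.println(stateAfterRetrySuccess(true)); // prints INACTIVE
    }
}
```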





[jira] [Created] (PHOENIX-5247) DROP TABLE and DROP VIEW commands fail to drop second or higher level child views

2019-04-17 Thread Kadir OZDEMIR (JIRA)
Kadir OZDEMIR created PHOENIX-5247:
--

 Summary: DROP TABLE and DROP VIEW commands fail to drop second or 
higher level child views
 Key: PHOENIX-5247
 URL: https://issues.apache.org/jira/browse/PHOENIX-5247
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.2
Reporter: Kadir OZDEMIR
Assignee: Kadir OZDEMIR
 Fix For: 4.14.2


We have seen a large number of orphan views in our production environments. The 
method used to drop tables and views, doDropTable, drops only the first-level 
child views of a table. This appears to be the root cause of the orphan views. 
doDropTable() recurses only when the table type is TABLE or SYSTEM, but the 
table type for views is VIEW, and the findChildViews method returns only the 
first-level child views. As a result, doDropTable never drops views of views 
(i.e., second- or higher-level views).





[jira] [Updated] (PHOENIX-5213) Phoenix-client improvements: add more relocations, exclude log binding, add source jar

2019-04-17 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5213:
--
Attachment: (was: PHOENIX-5213.4.x-HBase-1.4.v4.patch)

> Phoenix-client improvements:  add more relocations, exclude log binding, add 
> source jar
> ---
>
> Key: PHOENIX-5213
> URL: https://issues.apache.org/jira/browse/PHOENIX-5213
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5213.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v2.patch, PHOENIX-5213.4.x-HBase-1.4.v3.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v4.patch
>
>
> To improve the existing phoenix-client, I'm proposing the following changes:
> 1)  Add additional relocations of some packages
> Add a new "embedded" classifier to phoenix-client that does the following: 
> 2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
> directly from phoenix-core itself, but transitively from other projects.  
> It's generally considered best practice to not impose a log binding on 
> downstream projects.  The slf4j-log4j12 jar will still be in the phoenix 
> tarball's /lib folder.
> 3)  Create a source jar for phoenix-client embedded.
> 4)  Create a dependency-reduced pom, so that the client can be used directly 
> in downstream projects without having to exclude transitive artifacts.
> 5)  Rename the jar to match the final name in the repository: 
> phoenix-client-{version}.jar. A symlink phoenix-{version}-client.jar is kept 
> to maintain backwards compatibility.
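
For readers unfamiliar with shading, a relocation as in change 1) and a log-binding exclusion as in change 2) look roughly like the following maven-shade-plugin fragment (the package names here are illustrative examples, not the actual phoenix-client pom):

```xml
<!-- Illustrative maven-shade-plugin fragment (example names, not the actual
     phoenix-client pom): relocate bundled dependencies so they cannot clash
     with downstream versions, and drop the slf4j-log4j12 binding. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>com.google.common</pattern>
        <shadedPattern>org.apache.phoenix.shaded.com.google.common</shadedPattern>
      </relocation>
    </relocations>
    <artifactSet>
      <excludes>
        <!-- do not impose a log binding on downstream projects -->
        <exclude>org.slf4j:slf4j-log4j12</exclude>
      </excludes>
    </artifactSet>
  </configuration>
</plugin>
```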





[jira] [Updated] (PHOENIX-5213) Phoenix-client improvements: add more relocations, exclude log binding, add source jar

2019-04-17 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5213:
--
Attachment: PHOENIX-5213.4.x-HBase-1.4.v4.patch

> Phoenix-client improvements:  add more relocations, exclude log binding, add 
> source jar
> ---
>
> Key: PHOENIX-5213
> URL: https://issues.apache.org/jira/browse/PHOENIX-5213
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5213.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v2.patch, PHOENIX-5213.4.x-HBase-1.4.v3.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v4.patch
>
>
> To improve the existing phoenix-client, I'm proposing the following changes:
> 1)  Add additional relocations of some packages
> Add a new "embedded" classifier to phoenix-client that does the following: 
> 2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
> directly from phoenix-core itself, but transitively from other projects.  
> It's generally considered best practice to not impose a log binding on 
> downstream projects.  The slf4j-log4j12 jar will still be in the phoenix 
> tarball's /lib folder.
> 3)  Create a source jar for phoenix-client embedded.
> 4)  Create a dependency-reduced pom, so that the client can be used directly 
> in downstream projects without having to exclude transitive artifacts.
> 5)  Rename the jar to match the final name in the repository: 
> phoenix-client-{version}.jar. A symlink phoenix-{version}-client.jar is kept 
> to maintain backwards compatibility.





[jira] [Assigned] (PHOENIX-5246) PhoenixAccessControllers.getAccessControllers() method is not correctly implementing the double-checked locking

2019-04-17 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5246:
---

Assignee: Swaroopa Kadam

> PhoenixAccessControllers.getAccessControllers() method is not correctly 
> implementing the double-checked locking
> ---
>
> Key: PHOENIX-5246
> URL: https://issues.apache.org/jira/browse/PHOENIX-5246
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Thomas D'Silva
>Assignee: Swaroopa Kadam
>Priority: Major
>  Labels: SFDC
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
>
> By [~elserj] on PHOENIX-5070: 
> It looks to me like the getAccessControllers() method is not correctly 
> implementing the double-checked locking "approach" as per 
> https://en.wikipedia.org/wiki/Double-checked_locking#Usage_in_Java (the 
> accessControllers variable must be volatile).
> If we want to avoid taking an explicit lock, what about using AtomicReference 
> instead? Can we spin out another Jira issue to fix that?
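
For reference, the correct form of the pattern being discussed is the standard Java idiom below (the class name is illustrative, not Phoenix's actual code): without `volatile`, a second thread can observe a partially constructed object.

```java
// Minimal sketch of correct double-checked locking in Java: the field MUST be
// volatile, otherwise the pattern is broken under the Java Memory Model.
// (Illustrative name, not Phoenix's actual PhoenixAccessController code.)
public class ControllerHolder {
    private static volatile ControllerHolder instance; // volatile is the fix

    public static ControllerHolder get() {
        ControllerHolder local = instance;      // single volatile read
        if (local == null) {
            synchronized (ControllerHolder.class) {
                local = instance;
                if (local == null) {            // second check, under the lock
                    instance = local = new ControllerHolder();
                }
            }
        }
        return local;
    }
}
```

An `AtomicReference` with `updateAndGet`, or an initialization-on-demand holder class, avoids the explicit lock entirely, as the comment suggests.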





[jira] [Updated] (PHOENIX-5246) PhoenixAccessControllers.getAccessControllers() method is not correctly implementing the double-checked locking

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5246:

Description: 
By [~elserj] on PHOENIX-5070: 

It looks to me like the getAccessControllers() method is not correctly 
implementing the double-checked locking "approach" as per 
https://en.wikipedia.org/wiki/Double-checked_locking#Usage_in_Java (the 
accessControllers variable must be volatile).

If we want to avoid taking an explicit lock, what about using AtomicReference 
instead? Can we spin out another Jira issue to fix that?

  was: as per 
https://en.wikipedia.org/wiki/Double-checked_locking#Usage_in_Java (the 
accessControllers variable must be volatile).


> PhoenixAccessControllers.getAccessControllers() method is not correctly 
> implementing the double-checked locking
> ---
>
> Key: PHOENIX-5246
> URL: https://issues.apache.org/jira/browse/PHOENIX-5246
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Thomas D'Silva
>Priority: Major
>  Labels: SFDC
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
>
> By [~elserj] on PHOENIX-5070: 
> It looks to me like the getAccessControllers() method is not correctly 
> implementing the double-checked locking "approach" as per 
> https://en.wikipedia.org/wiki/Double-checked_locking#Usage_in_Java (the 
> accessControllers variable must be volatile).
> If we want to avoid taking an explicit lock, what about using AtomicReference 
> instead? Can we spin out another Jira issue to fix that?





[jira] [Updated] (PHOENIX-5246) PhoenixAccessControllers.getAccessControllers() method is not correctly implementing the double-checked locking

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5246:

Labels: SFDC  (was: )

> PhoenixAccessControllers.getAccessControllers() method is not correctly 
> implementing the double-checked locking
> ---
>
> Key: PHOENIX-5246
> URL: https://issues.apache.org/jira/browse/PHOENIX-5246
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Thomas D'Silva
>Priority: Major
>  Labels: SFDC
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
>
>  as per https://en.wikipedia.org/wiki/Double-checked_locking#Usage_in_Java 
> (the accessControllers variable must be volatile).





[jira] [Updated] (PHOENIX-5070) NPE when upgrading Phoenix 4.13.0 to Phoenix 4.14.1 with hbase-1.x branch in secure setup

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5070:

Fix Version/s: 5.1.0

> NPE when upgrading Phoenix 4.13.0 to Phoenix 4.14.1 with hbase-1.x branch in 
> secure setup
> -
>
> Key: PHOENIX-5070
> URL: https://issues.apache.org/jira/browse/PHOENIX-5070
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.14.1
>Reporter: Mihir Monani
>Assignee: Mihir Monani
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5070-4.x-HBase-1.3.01.patch, 
> PHOENIX-5070-4.x-HBase-1.3.02.patch, PHOENIX-5070-4.x-HBase-1.3.03.patch, 
> PHOENIX-5070.patch
>
>
> PhoenixAccessController populates accessControllers during calls like 
> loadTable, before it checks whether the current user has all the required 
> permissions for the given HBase table and schema.
> With [Phoenix-4661|https://issues.apache.org/jira/browse/PHOENIX-4661], this 
> was removed for the preGetTable call only. Because of this, when we upgrade 
> Phoenix from 4.13.0 to 4.14.1, we get an NPE for accessControllers in 
> PhoenixAccessController#getUserPermissions.
>  Here is the exception stack trace:
>  
> {code:java}
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
>  org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.NullPointerException
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:598)
> at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16357)
> at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8354)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2208)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2190)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35076)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:409)
> at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:403)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1760)
> at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:453)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:434)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:210)
> at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.getUserPermissions(PhoenixAccessController.java:403)
> at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.requireAccess(PhoenixAccessController.java:482)
> at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.preGetTable(PhoenixAccessController.java:104)
> at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$1.call(PhoenixMetaDataCoprocessorHost.java:161)
> at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:81)
> at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preGetTable(PhoenixMetaDataCoprocessorHost.java:157)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:563)
> ... 9 more
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1291)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:231)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:340)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:35542)
> at 
> org.apache.hadoo

[jira] [Updated] (PHOENIX-5070) NPE when upgrading Phoenix 4.13.0 to Phoenix 4.14.1 with hbase-1.x branch in secure setup

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5070:

Fix Version/s: 4.14.2

> NPE when upgrading Phoenix 4.13.0 to Phoenix 4.14.1 with hbase-1.x branch in 
> secure setup
> -
>
> Key: PHOENIX-5070
> URL: https://issues.apache.org/jira/browse/PHOENIX-5070
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.14.1
>Reporter: Mihir Monani
>Assignee: Mihir Monani
>Priority: Blocker
> Fix For: 4.15.0, 4.14.2
>
> Attachments: PHOENIX-5070-4.x-HBase-1.3.01.patch, 
> PHOENIX-5070-4.x-HBase-1.3.02.patch, PHOENIX-5070-4.x-HBase-1.3.03.patch, 
> PHOENIX-5070.patch
>
>
> PhoenixAccessController populates accessControllers during calls like 
> loadTable, before it checks whether the current user has all the required 
> permissions for the given HBase table and schema.
> With [Phoenix-4661|https://issues.apache.org/jira/browse/PHOENIX-4661], this 
> was removed for the preGetTable call only. Because of this, when we upgrade 
> Phoenix from 4.13.0 to 4.14.1, we get an NPE for accessControllers in 
> PhoenixAccessController#getUserPermissions.
>  Here is the exception stack trace:
>  
> {code:java}
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
>  org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.NullPointerException
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:598)
> at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16357)
> at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8354)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2208)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2190)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35076)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> Caused by: java.lang.NullPointerException
> at org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:409)
> at org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:403)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1760)
> at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:453)
> at org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:434)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:210)
> at org.apache.phoenix.coprocessor.PhoenixAccessController.getUserPermissions(PhoenixAccessController.java:403)
> at org.apache.phoenix.coprocessor.PhoenixAccessController.requireAccess(PhoenixAccessController.java:482)
> at org.apache.phoenix.coprocessor.PhoenixAccessController.preGetTable(PhoenixAccessController.java:104)
> at org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$1.call(PhoenixMetaDataCoprocessorHost.java:161)
> at org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:81)
> at org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preGetTable(PhoenixMetaDataCoprocessorHost.java:157)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:563)
> ... 9 more
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1291)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:231)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:340)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:35542)
> at org.apache.hadoop.hbas

[jira] [Created] (PHOENIX-5246) PhoenixAccessControllers.getAccessControllers() method is not correctly implementing the double-checked locking

2019-04-17 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created PHOENIX-5246:
---

 Summary: PhoenixAccessControllers.getAccessControllers() method is 
not correctly implementing the double-checked locking
 Key: PHOENIX-5246
 URL: https://issues.apache.org/jira/browse/PHOENIX-5246
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Thomas D'Silva
 Fix For: 4.15.0, 5.1.0, 4.14.2


As per https://en.wikipedia.org/wiki/Double-checked_locking#Usage_in_Java, the 
accessControllers variable must be volatile.
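For reference, a minimal sketch of double-checked locking done correctly in Java. The class and field names below are illustrative only, not Phoenix's actual PhoenixAccessController code:

```java
import java.util.ArrayList;
import java.util.List;

public class LazyHolder {
    // Without 'volatile', a racing thread may observe a reference to a
    // partially constructed object (unsafe publication).
    private static volatile List<String> controllers;

    public static List<String> getControllers() {
        List<String> result = controllers; // single volatile read on the fast path
        if (result == null) {
            synchronized (LazyHolder.class) {
                result = controllers;      // re-check under the lock
                if (result == null) {
                    result = new ArrayList<>();
                    controllers = result;  // volatile write publishes safely
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Repeated calls return the same lazily created instance.
        System.out.println(getControllers() == getControllers()); // prints true
    }
}
```

The extra local variable `result` is a common refinement: it keeps the fast path to one volatile read.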



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5048) Index Rebuilder does not handle INDEX_STATE timestamp check for all index

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5048:

Fix Version/s: 4.14.2

> Index Rebuilder does not handle INDEX_STATE timestamp check for all index
> -
>
> Key: PHOENIX-5048
> URL: https://issues.apache.org/jira/browse/PHOENIX-5048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 5.0.0, 4.14.1
>Reporter: Mihir Monani
>Assignee: Mihir Monani
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5048.patch, PHOENIX-5048.v2.patch, 
> PHOENIX-5048.v3.patch, PHOENIX-5048.v4.patch, PHOENIX-5048.v5.patch
>
>
> After the rebuilder finishes a partial index rebuild, it checks whether the index 
> state has been updated after the upper bound of the scan used for the partial 
> rebuild. If so, it fails the rebuild, because an index write failure occurred 
> while the index was being rebuilt.
> {code:java}
> MetaDataEndpointImpl.java#updateIndexState()
> public void updateIndexState(RpcController controller, 
> UpdateIndexStateRequest request,
> RpcCallback done) {
> ...
> // If the index status has been updated after the upper bound of the scan we 
> use
> // to partially rebuild the index, then we need to fail the rebuild because an
> // index write failed before the rebuild was complete.
> if (actualTimestamp > expectedTimestamp) {
> builder.setReturnCode(MetaDataProtos.MutationCode.UNALLOWED_TABLE_MUTATION);
> builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
> done.run(builder.build());
> return;
> }
> ...
> }{code}
> After the introduction of TrackingParallelWriterIndexCommitter 
> [PHOENIX-3815|https://issues.apache.org/jira/browse/PHOENIX-3815], we only 
> disable the index that hit the failure. Before that, in 
> ParallelWriterIndexCommitter, we disabled all indexes even if the failure 
> happened for only one index.
> Suppose a data table has 3 indexes and the above condition becomes true for the 
> first index; then we won't even check the remaining two indexes.
> {code:java}
> MetaDataRegionObserver.java#BuildIndexScheduleTask.java#run()
> for (PTable indexPTable : indexesToPartiallyRebuild) {
> String indexTableFullName = SchemaUtil.getTableName(
> indexPTable.getSchemaName().getString(),
> indexPTable.getTableName().getString());
> if (scanEndTime == latestUpperBoundTimestamp) {
> IndexUtil.updateIndexState(conn, indexTableFullName, PIndexState.ACTIVE, 0L, 
> latestUpperBoundTimestamp);
> batchExecutedPerTableMap.remove(dataPTable.getName());
> LOG.info("Making Index:" + indexPTable.getTableName() + " active after 
> rebuilding");
> } else {
> // Increment timestamp so that client sees updated disable timestamp
> IndexUtil.updateIndexState(conn, indexTableFullName, 
> indexPTable.getIndexState(), scanEndTime * signOfDisableTimeStamp, 
> latestUpperBoundTimestamp);
> Long noOfBatches = batchExecutedPerTableMap.get(dataPTable.getName());
> if (noOfBatches == null) {
> noOfBatches = 0l;
> }
> batchExecutedPerTableMap.put(dataPTable.getName(), ++noOfBatches);
> LOG.info("During Round-robin build: Successfully updated index disabled 
> timestamp for "
> + indexTableFullName + " to " + scanEndTime);
> }
> }
> {code}
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4781) Phoenix client project's jar naming convention causes maven-deploy-plugin to fail

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4781:

Fix Version/s: 4.14.2

> Phoenix client project's jar naming convention causes maven-deploy-plugin to 
> fail
> -
>
> Key: PHOENIX-4781
> URL: https://issues.apache.org/jira/browse/PHOENIX-4781
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.15.0, 4.14.2, 5.1
>
> Attachments: PHOENIX-4781.001.patch, PHOENIX-4781.002.patch, 
> PHOENIX-4781.4.x-HBase-1.4.v3.patch, PHOENIX-4781.4.x-HBase-1.4.v4.patch, 
> PHOENIX-4781.4.x-HBase-1.4.v5.patch, PHOENIX-4781.addendum.patch
>
>
> `maven-deploy-plugin` is used for deploying built artifacts to the repository 
> provided by the `distributionManagement` tag. The names of the files to be 
> uploaded are either derived from the project's pom file or the plugin 
> generates a temporary one on its own.
> For the `phoenix-client` project, we essentially create a shaded uber jar that 
> contains all dependencies and provide the project pom file for the plugin to 
> work. `maven-jar-plugin` is disabled for the project, hence the shade plugin 
> essentially packages the jar. The final name of the shaded jar is defined as 
> `phoenix-${project.version}\-client`, which differs from the standard maven 
> convention based on the pom file (artifact and group id), 
> `phoenix-client-${project.version}`.
> This causes `maven-deploy-plugin` to fail, since it is unable to find any 
> artifacts to be published.
> `maven-install-plugin` works correctly and hence installs the correct jar in 
> the local repo.
> The same applies to the `phoenix-pig` project as well; however, we require the 
> regular jar for that project in the repo. I am not even sure why we 
> create a shaded jar for that project.
> I will put up a 3-liner patch for the same.
> Any thoughts? [~sergey.soldatov] [~elserj]
> Files before change (first col is size):
> {code:java}
> 103487701 Jun 13 22:47 
> phoenix-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT-client.jar{code}
> Files after change (first col is size):
> {code:java}
> 3640 Jun 13 21:23 
> original-phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar
> 103487702 Jun 13 21:24 
> phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar{code}
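The rename described above amounts to a small shade-plugin change; a hypothetical sketch (plugin version omitted; not the actual Phoenix pom):

```xml
<!-- Hypothetical sketch: align the shaded artifact's finalName with the
     standard ${artifactId}-${version} convention so maven-deploy-plugin
     can locate the artifact to publish. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <!-- was: phoenix-${project.version}-client -->
    <finalName>phoenix-client-${project.version}</finalName>
  </configuration>
</plugin>
```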



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5025) Tool to clean up orphan views

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5025:

Fix Version/s: 4.14.2
   4.15.0

> Tool to clean up orphan views
> -
>
> Key: PHOENIX-5025
> URL: https://issues.apache.org/jira/browse/PHOENIX-5025
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 5.0.0, 4.15.0, 4.14.2
>
> Attachments: PHOENIX-5025.master.0001.patch, 
> PHOENIX-5025.master.0002.patch, PHOENIX-5025.master.patch
>
>
> A view without its base table is an orphan view. Since views are virtual 
> tables and their data is stored in their base tables, they are useless once 
> they become orphans. A base table can have child views, grandchild views, and 
> so on. Due to various bugs, when a base table was dropped in the past, its 
> views were not properly cleaned up. For example, the drop table code did not 
> support cleaning up grandchild views. This has been recently fixed by 
> PHOENIX-4764. Although PHOENIX-4764 prevents new orphan views due to table 
> drop operations, it does not clean up existing orphan views. It is also 
> believed that splitting of the system catalog table, due to a bug in the past, 
> contributed to creating orphan views, as Phoenix did not support a splittable 
> system catalog. Therefore, Phoenix needs a tool to clean up orphan 
> views.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4989) Include disruptor jar in shaded dependency

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4989:

Fix Version/s: 4.14.2

> Include disruptor jar in shaded dependency
> --
>
> Key: PHOENIX-4989
> URL: https://issues.apache.org/jira/browse/PHOENIX-4989
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Fix For: 4.15.0, 4.14.2
>
> Attachments: PHOENIX-4989-4.x-HBase-1.3.patch
>
>
> Include the disruptor jar in the shaded dependency, as hbase ships a different 
> version of the same.
> As a result, we are not able to run any MR job, such as IndexScrutiny or 
> IndexTool, using phoenix on hbase 1.3 onwards clusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4870) LoggingPhoenixConnection should log metrics when AutoCommit is set to True.

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4870:

Fix Version/s: 4.14.2

> LoggingPhoenixConnection should log metrics when AutoCommit is set to True.
> ---
>
> Key: PHOENIX-4870
> URL: https://issues.apache.org/jira/browse/PHOENIX-4870
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4870-4.x-HBase-1.4.patch, PHOENIX-4870.patch
>
>
> When LoggingPhoenixConnection calls commit or close, metrics logs are written 
> properly; however, when LoggingPhoenixConnection is explicitly set with 
> AutoCommit as true, metrics don't get logged at all. This bug can also be 
> tested by adding the following test scenario to the PhoenixLoggingMetricsIT.java 
> class.
> {code:java}
> @Test
> public void testPhoenixMetricsLoggedOnAutoCommit() throws Exception {
> // Autocommit is turned on explicitly
> loggedConn.setAutoCommit(true);
> //with executeUpdate() method
> // run SELECT to verify read metrics are logged
> String query = "SELECT * FROM " + tableName1;
> verifyQueryLevelMetricsLogging(query);
> // run UPSERT SELECT to verify mutation metrics are logged
> String upsertSelect = "UPSERT INTO " + tableName2 + " SELECT * FROM " + 
> tableName1;
> loggedConn.createStatement().executeUpdate(upsertSelect);
> // Autocommit is turned on explicitly
> // Hence mutation metrics are expected during implicit commit
> assertTrue("Mutation write metrics are not logged for " + tableName2,
> mutationWriteMetricsMap.size()  > 0);
> assertTrue("Mutation read metrics not found for " + tableName1,
> mutationReadMetricsMap.get(tableName1).size() > 0);
> //with execute() method
> loggedConn.createStatement().execute(upsertSelect);
> // Autocommit is turned on explicitly
> // Hence mutation metrics are expected during implicit commit
> assertTrue("Mutation write metrics are not logged for " + tableName2,
> mutationWriteMetricsMap.size()  > 0);
> assertTrue("Mutation read metrics not found for " + tableName1,
> mutationReadMetricsMap.get(tableName1).size() > 0);
> clearAllTestMetricMaps();
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4864) Fix NullPointerException while Logging some DDL Statements

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4864:

Fix Version/s: 4.14.2
   5.1.0
   4.15.0

> Fix NullPointerException while Logging some DDL Statements
> --
>
> Key: PHOENIX-4864
> URL: https://issues.apache.org/jira/browse/PHOENIX-4864
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ashutosh Parekh
>Assignee: Ashutosh Parekh
>Priority: Minor
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4864.patch
>
>
> We encounter a NullPointerException when the ResultSet is null, as happens when 
> some types of DDL queries are executed. The following error is encountered:
> java.lang.NullPointerException: null
>  at 
> org.apache.phoenix.jdbc.LoggingPhoenixResultSet.close(LoggingPhoenixResultSet.java:40)
>  at 
> org.apache.calcite.avatica.jdbc.JdbcMeta$StatementExpiryHandler.onRemoval(JdbcMeta.java:1105)
>  at 
> com.google.common.cache.LocalCache.processPendingNotifications(LocalCache.java:1963)
> ...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4853) Add sql statement to PhoenixMetricsLog interface for query level metrics logging

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4853:

Fix Version/s: 4.14.2

> Add sql statement to PhoenixMetricsLog interface for query level metrics 
> logging
> 
>
> Key: PHOENIX-4853
> URL: https://issues.apache.org/jira/browse/PHOENIX-4853
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
>
> We get query level metrics when we try to close the 
> {{LoggingPhoenixResultSet}} object. It is better to add the SQL statement to 
> the PhoenixMetricsLog interface so that we can attach the metrics to the 
> exact SQL statement. This helps in debugging whenever we determine that a 
> particular query is taking a long time to run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4854) Make LoggingPhoenixResultSet idempotent when logging metrics

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4854:

Fix Version/s: 4.14.2

> Make LoggingPhoenixResultSet idempotent when logging metrics
> 
>
> Key: PHOENIX-4854
> URL: https://issues.apache.org/jira/browse/PHOENIX-4854
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
>
> The ResultSet close method can be called multiple times, and the 
> LoggingResultSet object tries to call the PhoenixMetricsLog methods every 
> single time. These per-query metrics don't get cleared up; rather, they are 
> all at the "0" value once they have been consumed and reset. This Jira is an 
> enhancement to bring idempotency to the class.
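One minimal sketch of such idempotency, assuming an AtomicBoolean guard around the metrics logging. The names are illustrative only, not Phoenix's actual LoggingPhoenixResultSet:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class IdempotentLoggingResultSet {
    private final AtomicBoolean metricsLogged = new AtomicBoolean(false);
    int timesLogged = 0; // exposed so the demo below can inspect it

    public void close() {
        // compareAndSet flips the flag exactly once, so repeated close()
        // calls do not re-log the already-consumed (all-zero) metrics.
        if (metricsLogged.compareAndSet(false, true)) {
            logMetrics();
        }
    }

    private void logMetrics() {
        timesLogged++;
    }

    public static void main(String[] args) {
        IdempotentLoggingResultSet rs = new IdempotentLoggingResultSet();
        rs.close();
        rs.close(); // second close is a no-op for metrics
        System.out.println(rs.timesLogged); // prints 1
    }
}
```

AtomicBoolean also makes the guard safe if close() is ever called from more than one thread.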



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4835) LoggingPhoenixConnection should log metrics upon connection close

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4835:

Fix Version/s: 4.14.2

> LoggingPhoenixConnection should log metrics upon connection close
> -
>
> Key: PHOENIX-4835
> URL: https://issues.apache.org/jira/browse/PHOENIX-4835
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4835.4.x-HBase-1.4.001.patch, 
> PHOENIX-4835.4.x-HBase-1.4.002.patch
>
>
> {{LoggingPhoenixConnection}} currently logs metrics upon {{commit()}}, which 
> may sometimes miss logging metrics if commit is never called. We 
> should move it to the {{close()}} method instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4834) PhoenixMetricsLog interface methods should not depend on specific logger

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4834:

Fix Version/s: 4.14.2

> PhoenixMetricsLog interface methods should not depend on specific logger
> 
>
> Key: PHOENIX-4834
> URL: https://issues.apache.org/jira/browse/PHOENIX-4834
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4834.4.x-HBase-1.4.001.patch, 
> PHOENIX-4834.4.x-HBase-1.4.002.patch
>
>
> {{PhoenixMetricsLog}} is an interface that provides a wrapper around various 
> JDBC objects with logging functionality upon close/commit. The methods take 
> in a {{Logger}} as an input, which is an {{org.slf4j.Logger}}. A better 
> approach is for the interface to just pass the metrics and allow the user 
> to configure and use whatever logging library they want.
> This Jira will deprecate the older methods by providing a default 
> implementation for them and will add the new methods.
> Ideally we would have provided default interface methods, but since we are on 
> Java 7, we are unable to do that.
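A hedged sketch of the logger-agnostic shape described above: the interface hands implementors the raw metric values instead of a pre-chosen org.slf4j.Logger. All names here are illustrative, not the exact Phoenix API:

```java
public class SinkDemo {
    public static void main(String[] args) {
        StdoutSink sink = new StdoutSink();
        sink.logReadMetrics("SELECT 1",
                java.util.Collections.singletonMap("T1",
                        java.util.Collections.singletonMap("SCAN_BYTES", 128L)));
    }
}

interface MetricsLogSink {
    // Implementors receive table -> (metric name -> value) maps and may
    // route them to any logging library they like.
    void logReadMetrics(String sql, java.util.Map<String, java.util.Map<String, Long>> readMetrics);
}

// One possible implementor: writes to stdout; log4j or slf4j would work
// equally well, because the interface no longer depends on a Logger type.
class StdoutSink implements MetricsLogSink {
    String lastSql; // recorded so callers can verify what was logged

    @Override
    public void logReadMetrics(String sql, java.util.Map<String, java.util.Map<String, Long>> m) {
        lastSql = sql;
        System.out.println("metrics for [" + sql + "]: " + m);
    }
}
```

Because the caller only supplies data, swapping the logging backend requires no change to the wrapper classes.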



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3991) ROW_TIMESTAMP on TIMESTAMP column type throws ArrayOutOfBound when upserting without providing a value.

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3991:

Fix Version/s: 4.14.2
   5.1.0
   4.15.0

> ROW_TIMESTAMP on TIMESTAMP column type throws ArrayOutOfBound when upserting 
> without providing a value.
> ---
>
> Key: PHOENIX-3991
> URL: https://issues.apache.org/jira/browse/PHOENIX-3991
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Eric Belanger
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-3991-1.patch
>
>
> {code:sql}
> CREATE TABLE TEST (
>   CREATED TIMESTAMP NOT NULL,
>   ID CHAR(36) NOT NULL,
>   DEFINITION VARCHAR,
>   CONSTRAINT TEST_PK PRIMARY KEY (CREATED ROW_TIMESTAMP, ID)
> )
> -- WORKS
> UPSERT INTO TEST (CREATED, ID, DEFINITION) VALUES (NOW(), 'A', 'DEFINITION 
> A');
> -- ArrayOutOfBoundException
> UPSERT INTO TEST (ID, DEFINITION) VALUES ('A', 'DEFINITION A');
> {code}
> Stack Trace:
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 8
>   at 
> org.apache.phoenix.execute.MutationState.getNewRowKeyWithRowTimestamp(MutationState.java:554)
>   at 
> org.apache.phoenix.execute.MutationState.generateMutations(MutationState.java:640)
>   at 
> org.apache.phoenix.execute.MutationState.addRowMutations(MutationState.java:572)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1003)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1469)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1301)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:539)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:536)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:536)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4755) Provide an option to plugin custom avatica server config in PQS

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4755:

Fix Version/s: 4.14.2

> Provide an option to plugin custom avatica server config in PQS
> ---
>
> Key: PHOENIX-4755
> URL: https://issues.apache.org/jira/browse/PHOENIX-4755
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
>  Labels: queryserver
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4755.001.diff, PHOENIX-4755.002.diff, 
> PHOENIX-4755.003.diff, PHOENIX-4755.4.x-HBase-1.4.patch
>
>
> CALCITE-2294 allows customization of {{AvaticaServerConfiguration}} for 
> plugging in new authentication mechanisms.
> Add a new Phoenix-level property and resolve the class using 
> {{InstanceResolver}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4750) Resolve server customizers and provide them to Avatica

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4750:

Fix Version/s: 4.14.2

> Resolve server customizers and provide them to Avatica
> --
>
> Key: PHOENIX-4750
> URL: https://issues.apache.org/jira/browse/PHOENIX-4750
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
>  Labels: queryserver
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4750.patch, PHOENIX-4750.v2.patch, 
> PHOENIX-4750.v3.patch, PHOENIX-4750.v4.patch, PHOENIX-4750.v5.patch
>
>
> CALCITE-2284 allows finer grained customization of the underlying Avatica 
> HttpServer.
> Resolve server customizers on the PQS classpath and provide them to the 
> HttpServer builder.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5005) Server-side delete / upsert-select potentially blocked after a split

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5005:

Fix Version/s: 4.14.2

> Server-side delete / upsert-select potentially blocked after a split
> 
>
> Key: PHOENIX-5005
> URL: https://issues.apache.org/jira/browse/PHOENIX-5005
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.15.0, 4.14.2
>
> Attachments: PHOENIX-5005.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5005.4.x-HBase-1.4.v2.patch, PHOENIX-5005.4.x-HBase-1.4.v3.patch
>
>
> After PHOENIX-4214, we stop inbound writes after a split is requested, to 
> avoid split starvation.
> However, it seems there can be edge cases, depending on the split policy, 
> where a split is not retried.  For example, IncreasingToUpperBoundSplitPolicy 
> relies on the count of regions, and balancer movement of regions at t1 could 
> make it such that the SplitPolicy triggers at t0 but not t2.
> However, after the first split request, the flag in 
> UngroupedAggregateRegionObserver that blocks inbound writes stays flipped indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5008) CQSI.init should not bubble up RetriableUpgradeException to client in case of an UpgradeRequiredException

2019-04-17 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5008:

Fix Version/s: 4.14.2

> CQSI.init should not bubble up RetriableUpgradeException to client in case of 
> an UpgradeRequiredException
> -
>
> Key: PHOENIX-5008
> URL: https://issues.apache.org/jira/browse/PHOENIX-5008
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5008-4.x-HBase-1.3_addendum.patch, 
> PHOENIX-5008.patch
>
>
> Inside _ConnectionQueryServicesImpl_._init_, if we catch a 
> _RetriableUpgradeException_, we re-throw this exception. In its caller 
> methods for example, in _PhoenixDriver.getConnectionQueryServices_, this is 
> caught as a _SQLException_, and this fails the initialization of the 
> ConnectionQueryServices and removes the new CQS object from the cache. 
> In the case that the _RetriableUpgradeException_ is an instance of an 
> _UpgradeNotRequiredException_ or an _UpgradeInProgressException_, this can 
> only occur when we attempt to upgrade system tables, either wrongly or 
> concurrently when there is an ongoing attempt for the same. In this case, it 
> is fine to bubble the exception up to the end client and the client will 
> subsequently have to re-attempt to create a connection (calling CQS.init 
> again).
> However, if the _RetriableUpgradeException_ is an instance of an 
> _UpgradeRequiredException_,  the end-client will never be able to get a 
> connection and thus will never be able to manually run "EXECUTE UPGRADE". In 
> this case, instead of re-throwing the exception, we should log that the 
> client must manually run "EXECUTE UPGRADE" before being able to run any other 
> commands and let the CQS.init succeed. Thus, the client will get a connection 
> which has "upgradeRequired" set and this connection will fail for any query 
> except "EXECUTE UPGRADE".
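The proposed control flow can be sketched as follows. The class and method names are illustrative, not the real ConnectionQueryServicesImpl; only the exception names mirror the description:

```java
public class QueryServicesSketch {
    private volatile boolean upgradeRequired = false;

    void init() {
        try {
            upgradeSystemTablesIfNeeded();
        } catch (UpgradeRequiredException e) {
            // Let init succeed, but remember that the client must run
            // "EXECUTE UPGRADE" before any other command.
            upgradeRequired = true;
            System.out.println("Please run EXECUTE UPGRADE before other commands");
        }
        // UpgradeInProgressException and UpgradeNotRequiredException are
        // deliberately not caught here: they keep bubbling up, and the
        // client simply retries the connection.
    }

    boolean isUpgradeRequired() { return upgradeRequired; }

    // Simulated check; the real code compares system table timestamps.
    private void upgradeSystemTablesIfNeeded() {
        throw new UpgradeRequiredException();
    }

    public static void main(String[] args) {
        QueryServicesSketch services = new QueryServicesSketch();
        services.init(); // succeeds despite the pending upgrade
        System.out.println("upgradeRequired=" + services.isUpgradeRequired());
    }
}

class UpgradeRequiredException extends RuntimeException {}
class UpgradeInProgressException extends RuntimeException {}
```

With this shape, the CQS object stays cached and every subsequent statement can check the flag before executing.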



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)