[jira] [Updated] (PHOENIX-5676) Inline-verification from IndexTool does not handle TTL/row-expiry

2020-01-15 Thread Abhishek Singh Chouhan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated PHOENIX-5676:

Fix Version/s: (was: 4.15.1)
   4.14.4

> Inline-verification from IndexTool does not handle TTL/row-expiry
> -
>
> Key: PHOENIX-5676
> URL: https://issues.apache.org/jira/browse/PHOENIX-5676
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 5.1.0, 4.14.4, 4.16.0
>
> Attachments: PHOENIX-5676-4.x-HBase-1.5.002.patch, 
> PHOENIX-5676-4.x-HBase-1.5.patch, PHOENIX-5676-4.x-HBase-1.5.patch, 
> PHOENIX-5676-master.002.patch, PHOENIX-5676-master.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> If a data-table has TTL on it, its indexes inherit the TTL too. Hence, when 
> we run IndexTool with verification on such tables and their indexes, rows 
> that are near expiry will successfully get rebuilt, but may not be returned 
> by the verification read due to expiry. This will result in an index 
> verification failure and may also fail the rebuild job.
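
To make the timing window concrete, here is a minimal, self-contained sketch of the race described above; the TTL value and timings are invented for illustration, and this is not IndexTool code:

{code:java}
import java.util.concurrent.TimeUnit;

public class TtlExpiryWindow {
    public static void main(String[] args) throws InterruptedException {
        long ttlMs = TimeUnit.SECONDS.toMillis(3600); // TTL the index inherits (illustrative)
        // A row whose cell timestamp is ~50 ms away from expiry.
        long cellTs = System.currentTimeMillis() - ttlMs + 50;

        // The rebuild happens "now": the row is still live, so the index row is written.
        boolean rebuilt = System.currentTimeMillis() - cellTs < ttlMs;

        Thread.sleep(100); // the verification read happens a moment later...
        boolean visible = System.currentTimeMillis() - cellTs < ttlMs;

        // rebuilt=true, visible=false: verification misses the row it just rebuilt.
        System.out.println("rebuilt=" + rebuilt + ", visible=" + visible);
    }
}
{code}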



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5675) IndexUpgradeTool should allow verify options for IndexTool run

2020-01-15 Thread Swaroopa Kadam (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5675:

Attachment: PHOENIX-5675.4.x-HBase-1.3.add.patch

> IndexUpgradeTool should allow verify options for IndexTool run
> --
>
> Key: PHOENIX-5675
> URL: https://issues.apache.org/jira/browse/PHOENIX-5675
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Assignee: Swaroopa Kadam
>Priority: Major
> Fix For: 4.15.1, 4.14.4, 4.16.0
>
> Attachments: PHOENIX-5675.4.x-HBase-1.3.add.patch, 
> PHOENIX-5675.4.x-HBase-1.3.v1.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> PHOENIX-5658 & PHOENIX-5674 add IndexTool options for before/after 
> verification.
> IndexUpgradeTool must allow pass-through of these IndexTool options when 
> submitting rebuild jobs.
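
As a sketch of the intended pass-through, a hypothetical rebuild submission that forwards a verify option to IndexTool; the {{--verify}}/{{AFTER}} option names are assumptions based on PHOENIX-5658/PHOENIX-5674, and the table names are illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ToolRunner;
import org.apache.phoenix.mapreduce.index.IndexTool;

public class RebuildWithVerify {
    public static void main(String[] args) throws Exception {
        // IndexUpgradeTool would forward a user-supplied verify option like
        // this when it submits the rebuild job (option names assumed).
        String[] rebuildArgs = {
            "--data-table", "MY_TABLE",   // illustrative
            "--index-table", "MY_INDEX",  // illustrative
            "--verify", "AFTER"           // assumed flag from PHOENIX-5658/5674
        };
        System.exit(ToolRunner.run(new Configuration(), new IndexTool(), rebuildArgs));
    }
}
{code}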



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5676) Inline-verification from IndexTool does not handle TTL/row-expiry

2020-01-15 Thread Abhishek Singh Chouhan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated PHOENIX-5676:

Attachment: PHOENIX-5676-master.002.patch

> Inline-verification from IndexTool does not handle TTL/row-expiry
> -
>
> Key: PHOENIX-5676
> URL: https://issues.apache.org/jira/browse/PHOENIX-5676
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 5.1.0, 4.15.1, 4.16.0
>
> Attachments: PHOENIX-5676-4.x-HBase-1.5.002.patch, 
> PHOENIX-5676-4.x-HBase-1.5.patch, PHOENIX-5676-4.x-HBase-1.5.patch, 
> PHOENIX-5676-master.002.patch, PHOENIX-5676-master.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> If a data-table has TTL on it, its indexes inherit the TTL too. Hence, when 
> we run IndexTool with verification on such tables and their indexes, rows 
> that are near expiry will successfully get rebuilt, but may not be returned 
> by the verification read due to expiry. This will result in an index 
> verification failure and may also fail the rebuild job.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5676) Inline-verification from IndexTool does not handle TTL/row-expiry

2020-01-15 Thread Abhishek Singh Chouhan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated PHOENIX-5676:

Attachment: PHOENIX-5676-4.x-HBase-1.5.002.patch

> Inline-verification from IndexTool does not handle TTL/row-expiry
> -
>
> Key: PHOENIX-5676
> URL: https://issues.apache.org/jira/browse/PHOENIX-5676
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 5.1.0, 4.15.1, 4.16.0
>
> Attachments: PHOENIX-5676-4.x-HBase-1.5.002.patch, 
> PHOENIX-5676-4.x-HBase-1.5.patch, PHOENIX-5676-4.x-HBase-1.5.patch, 
> PHOENIX-5676-master.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> If a data-table has TTL on it, its indexes inherit the TTL too. Hence, when 
> we run IndexTool with verification on such tables and their indexes, rows 
> that are near expiry will successfully get rebuilt, but may not be returned 
> by the verification read due to expiry. This will result in an index 
> verification failure and may also fail the rebuild job.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5645) BaseScannerRegionObserver should prevent compaction from purging very recently deleted cells

2020-01-15 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5645:
-
Attachment: PHOENIX-5645-4.14-HBase-1.3.v2.patch

> BaseScannerRegionObserver should prevent compaction from purging very 
> recently deleted cells
> 
>
> Key: PHOENIX-5645
> URL: https://issues.apache.org/jira/browse/PHOENIX-5645
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: PHOENIX-5645-4.14-HBase-1.3.v2.patch, 
> PHOENIX-5645-4.14-HBase-1.4.patch, PHOENIX-5645-4.x-HBase-1.5-v2.patch, 
> PHOENIX-5645-4.x-HBase-1.5.patch, PHOENIX-5645-4.x-HBase-1.5.v3.patch, 
> PHOENIX-5645-addendum-4.x-HBase-1.5.patch
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> Phoenix's SCN feature has some problems, because HBase major compaction can 
> remove Cells that have been deleted, or whose TTL or max-versions limit has 
> caused them to expire. 
> For example, IndexTool rebuilds and index scrutiny can both give strange, 
> incorrect results if a major compaction occurs in the middle of their run. In 
> the rebuild case, it's because we're rewriting "history" on the index at the 
> same time that compaction is rewriting "history" by purging deleted and 
> expired cells. 
> Create a new configuration property called "max lookback age", which declares 
> that no data written more recently than the max lookback age will be 
> compacted away. The max lookback age must be smaller than the TTL, and it 
> should not be legal for a user to look back further in the past than the 
> table's TTL. 
> Max lookback age by default will not be set, and the current behavior will be 
> preserved. But if max lookback age is set, it will be enforced by the 
> BaseScannerRegionObserver for all tables. 
> In the future, this should be contributed as a general feature to HBase for 
> arbitrary tables. See HBASE-23602.
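
A minimal sketch of what enabling the property could look like; the key {{phoenix.max.lookback.age.seconds}} is an assumption inferred from the description above, not a confirmed name:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MaxLookbackConfig {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Assumed property key. Left unset, the current compaction behavior is
        // preserved. When set, it must stay smaller than the table TTL per the
        // constraint described above.
        conf.setLong("phoenix.max.lookback.age.seconds", 600L);
        System.out.println(conf.getLong("phoenix.max.lookback.age.seconds", 0L));
    }
}
{code}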



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5645) BaseScannerRegionObserver should prevent compaction from purging very recently deleted cells

2020-01-15 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5645:
-
Attachment: (was: PHOENIX-5645-4.x-HBase-1.5.v3.patch)

> BaseScannerRegionObserver should prevent compaction from purging very 
> recently deleted cells
> 
>
> Key: PHOENIX-5645
> URL: https://issues.apache.org/jira/browse/PHOENIX-5645
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: PHOENIX-5645-4.14-HBase-1.3.v2.patch, 
> PHOENIX-5645-4.14-HBase-1.4.patch, PHOENIX-5645-4.x-HBase-1.5-v2.patch, 
> PHOENIX-5645-4.x-HBase-1.5.patch, PHOENIX-5645-4.x-HBase-1.5.v3.patch, 
> PHOENIX-5645-addendum-4.x-HBase-1.5.patch
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> Phoenix's SCN feature has some problems, because HBase major compaction can 
> remove Cells that have been deleted, or whose TTL or max-versions limit has 
> caused them to expire. 
> For example, IndexTool rebuilds and index scrutiny can both give strange, 
> incorrect results if a major compaction occurs in the middle of their run. In 
> the rebuild case, it's because we're rewriting "history" on the index at the 
> same time that compaction is rewriting "history" by purging deleted and 
> expired cells. 
> Create a new configuration property called "max lookback age", which declares 
> that no data written more recently than the max lookback age will be 
> compacted away. The max lookback age must be smaller than the TTL, and it 
> should not be legal for a user to look back further in the past than the 
> table's TTL. 
> Max lookback age by default will not be set, and the current behavior will be 
> preserved. But if max lookback age is set, it will be enforced by the 
> BaseScannerRegionObserver for all tables. 
> In the future, this should be contributed as a general feature to HBase for 
> arbitrary tables. See HBASE-23602.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5645) BaseScannerRegionObserver should prevent compaction from purging very recently deleted cells

2020-01-15 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5645:
-
Attachment: PHOENIX-5645-4.x-HBase-1.5.v3.patch

> BaseScannerRegionObserver should prevent compaction from purging very 
> recently deleted cells
> 
>
> Key: PHOENIX-5645
> URL: https://issues.apache.org/jira/browse/PHOENIX-5645
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: PHOENIX-5645-4.14-HBase-1.3.v2.patch, 
> PHOENIX-5645-4.14-HBase-1.4.patch, PHOENIX-5645-4.x-HBase-1.5-v2.patch, 
> PHOENIX-5645-4.x-HBase-1.5.patch, PHOENIX-5645-4.x-HBase-1.5.v3.patch, 
> PHOENIX-5645-addendum-4.x-HBase-1.5.patch
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> Phoenix's SCN feature has some problems, because HBase major compaction can 
> remove Cells that have been deleted, or whose TTL or max-versions limit has 
> caused them to expire. 
> For example, IndexTool rebuilds and index scrutiny can both give strange, 
> incorrect results if a major compaction occurs in the middle of their run. In 
> the rebuild case, it's because we're rewriting "history" on the index at the 
> same time that compaction is rewriting "history" by purging deleted and 
> expired cells. 
> Create a new configuration property called "max lookback age", which declares 
> that no data written more recently than the max lookback age will be 
> compacted away. The max lookback age must be smaller than the TTL, and it 
> should not be legal for a user to look back further in the past than the 
> table's TTL. 
> Max lookback age by default will not be set, and the current behavior will be 
> preserved. But if max lookback age is set, it will be enforced by the 
> BaseScannerRegionObserver for all tables. 
> In the future, this should be contributed as a general feature to HBase for 
> arbitrary tables. See HBASE-23602.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5044) Remove server side mutation code from Phoenix

2020-01-15 Thread Siddhi Mehta (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddhi Mehta updated PHOENIX-5044:
--
Labels: phoenix-hardening  (was: )

> Remove server side mutation code from Phoenix
> -
>
> Key: PHOENIX-5044
> URL: https://issues.apache.org/jira/browse/PHOENIX-5044
> Project: Phoenix
>  Issue Type: Task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
>  Labels: phoenix-hardening
> Attachments: 5044-looksee-v2.txt, 5044-looksee-v3.txt, 
> 5044-looksee.txt
>
>
> This is for *discussion*. Perhaps controversial.
> It generally seems to be a bad - if well-intentioned - idea to trigger 
> mutations directly from the server. The main causes are UPSERT SELECT for the 
> same table and DELETE FROM.
> IMHO, it's generally better to allow the client to handle this. There might 
> be larger network overhead, but we get better chunking, better pacing, and 
> behavior more in line with how HBase was intended to work.
> In PHOENIX-5026 I introduced a flag to disable server triggered mutations in 
> the two cases mentioned above. I now think it's better to just remove the 
> server code and also perform these from the client.
> (Note that server-side reads - aggregation, filters, etc. - are still insanely 
> valuable and not affected by this.)
> Let's discuss.
> [~tdsilva], [~an...@apache.org], [~jamestaylor], [~vincentpoon], [~gjacoby]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5672) Unable to find cached index metadata with large UPSERT/SELECT and local index.

2020-01-15 Thread Siddhi Mehta (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddhi Mehta updated PHOENIX-5672:
--
Labels: phoenix-hardening  (was: )

> Unable to find cached index metadata with large UPSERT/SELECT and local index.
> --
>
> Key: PHOENIX-5672
> URL: https://issues.apache.org/jira/browse/PHOENIX-5672
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Lars Hofhansl
>Priority: Major
>  Labels: phoenix-hardening
>
> Doing a very large UPSERT/SELECT back into the same table. After a while I 
> get the exception shown in the log below. This happens with server-side 
> mutations turned off or on, and regardless of the batch size (which I have 
> increased to 1 in this last example).
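
For orientation, a hedged sketch of the statement shape involved, assuming a table TEST with a local index; the schema and row-key arithmetic are invented, since the report does not include the DDL:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;

public class UpsertSelectRepro {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS TEST (ID BIGINT PRIMARY KEY, V BIGINT)");
            conn.createStatement().execute(
                "CREATE LOCAL INDEX IF NOT EXISTS TEST_IDX ON TEST (V)");
            // A very large UPSERT/SELECT back into the same table, as described above.
            conn.createStatement().executeUpdate(
                "UPSERT INTO TEST (ID, V) SELECT ID + 1000000, V FROM TEST");
            conn.commit();
        }
    }
}
{code}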
> {code:java}
> 20/01/10 16:41:54 WARN client.AsyncProcess: #1, table=TEST, attempt=1/35 failed=1ops, last exception:
> org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException:
> ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.  key=-1180967500149768360
> region=TEST,\x80\x965g\x80\x0F@\xAA\x80Y$\xEF,1578504217187.42467236e0b49fda05fdaaf69de98832.host=lhofhansl-wsl2,16201,157870268
> Index update failed
> 20/01/10 16:41:54 WARN client.AsyncProcess: #1, table=TEST, attempt=1/35 failed=1ops, last exception:
> org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException:
> ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.  key=-1180967500149768360
> region=TEST,\x80\x965g\x80\x0F@\xAA\x80Y$\xEF,1578504217187.42467236e0b49fda05fdaaf69de98832.host=lhofhansl-wsl2,16201,157870268
> Index update failed
>   at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:113)
>   at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:87)
>   at org.apache.phoenix.index.PhoenixIndexMetaDataBuilder.getIndexMetaDataCache(PhoenixIndexMetaDataBuilder.java:101)
>   at org.apache.phoenix.index.PhoenixIndexMetaDataBuilder.getIndexMetaData(PhoenixIndexMetaDataBuilder.java:51)
>   at org.apache.phoenix.index.PhoenixIndexBuilder.getIndexMetaData(PhoenixIndexBuilder.java:100)
>   at org.apache.phoenix.index.PhoenixIndexBuilder.getIndexMetaData(PhoenixIndexBuilder.java:73)
>   at org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexMetaData(IndexBuildManager.java:84)
>   at org.apache.phoenix.hbase.index.IndexRegionObserver.getPhoenixIndexMetaData(IndexRegionObserver.java:594)
>   at org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutateWithExceptions(IndexRegionObserver.java:646)
>   at org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutate(IndexRegionObserver.java:334)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1024)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1742)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1827)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1783)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1020)
>   at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3425)
>   at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3163)
>   at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3105)
>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:944)
>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:872)
>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2472)
>   at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36812)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
> Caused by: java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index metadata.  key=-1180967500149768360
> region=TEST,\x80\x965g\x80\x0F@\xAA\x80Y$\xEF,1578504217187.42467236e0b49fda05fdaaf69de98832.host=lhofhansl-wsl2,16201,157870268
>   at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:542)
>   at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 

[jira] [Updated] (PHOENIX-5636) Improve the error message when client connects to server with higher major version

2020-01-15 Thread Christine Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Feng updated PHOENIX-5636:

Attachment: PHOENIX-5636.master.v5.patch

> Improve the error message when client connects to server with higher major 
> version
> --
>
> Key: PHOENIX-5636
> URL: https://issues.apache.org/jira/browse/PHOENIX-5636
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Sandeep Guggilam
>Assignee: Christine Feng
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 4.15.1
>
> Attachments: PHOENIX-5636.master.v1.patch, 
> PHOENIX-5636.master.v2.patch, PHOENIX-5636.master.v3.patch, 
> PHOENIX-5636.master.v4.patch, PHOENIX-5636.master.v5.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When a 4.14 client connects to a 5.0 server, it errors out saying "Outdated 
> jars. Newer Phoenix clients can't communicate with older Phoenix servers".
> It should probably error out with "Major version of client is less than that 
> of the server" instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5601) Add a new Coprocessor - ViewTTLAware Coprocessor

2020-01-15 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Attachment: PHOENIX-5601.4.x-HBase-1.3.002.patch

> Add a new Coprocessor - ViewTTLAware Coprocessor
> 
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.16.0
>
> Attachments: PHOENIX-5601.4.x-HBase-1.3.002.patch, 
> PHOENIX-5601.master.002.patch
>
>
>  * Add a new coprocessor - a ViewTTLAware coprocessor that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will -
>   * Use the row timestamp of the empty column to determine whether the row's 
> TTL has expired, and mask such rows from the underlying query results.
>   * Use the row timestamp to delete expired rows when the 
> DELETE_VIEW_TTL_EXPIRED flag is present.
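
A hedged sketch of the masking decision the scanner would make from the empty column's row timestamp; the method name and the TTL plumbing are invented for illustration:

{code:java}
public class ViewTtlMaskSketch {
    /** Illustrative: mask the row when its empty-column timestamp is older than the view TTL. */
    static boolean shouldMaskRow(long emptyColumnCellTs, long viewTtlMs, long nowMs) {
        return nowMs - emptyColumnCellTs > viewTtlMs;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        System.out.println(shouldMaskRow(now - 10_000, 5_000, now)); // true: expired, masked
        System.out.println(shouldMaskRow(now - 1_000, 5_000, now));  // false: still live
    }
}
{code}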



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5601) Add a new Coprocessor - ViewTTLAware Coprocessor

2020-01-15 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Attachment: (was: PHOENIX-5601.4.x-HBase-1.3.001.patch)

> Add a new Coprocessor - ViewTTLAware Coprocessor
> 
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.16.0
>
> Attachments: PHOENIX-5601.master.002.patch
>
>
>  * Add a new coprocessor - a ViewTTLAware coprocessor that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will -
>   * Use the row timestamp of the empty column to determine whether the row's 
> TTL has expired, and mask such rows from the underlying query results.
>   * Use the row timestamp to delete expired rows when the 
> DELETE_VIEW_TTL_EXPIRED flag is present.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5601) Add a new Coprocessor - ViewTTLAware Coprocessor

2020-01-15 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Attachment: PHOENIX-5601.master.002.patch

> Add a new Coprocessor - ViewTTLAware Coprocessor
> 
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.16.0
>
> Attachments: PHOENIX-5601.4.x-HBase-1.3.001.patch, 
> PHOENIX-5601.master.002.patch
>
>
>  * Add a new coprocessor - a ViewTTLAware coprocessor that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will -
>   * Use the row timestamp of the empty column to determine whether the row's 
> TTL has expired, and mask such rows from the underlying query results.
>   * Use the row timestamp to delete expired rows when the 
> DELETE_VIEW_TTL_EXPIRED flag is present.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5601) Add a new Coprocessor - ViewTTLAware Coprocessor

2020-01-15 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Attachment: (was: PHOENIX-5601.master.001.patch)

> Add a new Coprocessor - ViewTTLAware Coprocessor
> 
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.16.0
>
> Attachments: PHOENIX-5601.4.x-HBase-1.3.001.patch
>
>
>  * Add a new coprocessor - a ViewTTLAware coprocessor that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will -
>   * Use the row timestamp of the empty column to determine whether the row's 
> TTL has expired, and mask such rows from the underlying query results.
>   * Use the row timestamp to delete expired rows when the 
> DELETE_VIEW_TTL_EXPIRED flag is present.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5634) Use 'phoenix.default.update.cache.frequency' from connection properties at query time

2020-01-15 Thread Nitesh Maheshwari (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitesh Maheshwari updated PHOENIX-5634:
---
Attachment: (was: PHOENIX-5634.master.v2.patch)

> Use 'phoenix.default.update.cache.frequency' from connection properties at 
> query time
> -
>
> Key: PHOENIX-5634
> URL: https://issues.apache.org/jira/browse/PHOENIX-5634
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Nitesh Maheshwari
>Assignee: Nitesh Maheshwari
>Priority: Minor
> Fix For: 5.1.0, 4.15.1
>
> Attachments: PHOENIX-5634.4.x-HBase-1.3.v3.patch, 
> PHOENIX-5634.master.v3.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> We have the config 'phoenix.default.update.cache.frequency' which specifies 
> the time a client should wait before it refreshes its metadata cache entry 
> for a table by fetching the latest metadata from system catalog. This value 
> could be set for a table in the following ways (in the following preference 
> order):
>  # Specifying UPDATE_CACHE_FREQUENCY in table creation DDL
>  # Specifying the connection property 'phoenix.default.update.cache.frequency'
>  # Using the default 'phoenix.default.update.cache.frequency'
> At query time, we look at whether UPDATE_CACHE_FREQUENCY was specified for 
> the table and decide based on that value if the latest metadata for a table 
> should be fetched from system catalog to update the cache. However, when the 
> table doesn't have UPDATE_CACHE_FREQUENCY specified, we should look at the 
> connection property 'phoenix.default.update.cache.frequency' (or the default 
> 'phoenix.default.update.cache.frequency' when the connection-level property 
> is not set) to make that decision. The support for the latter is missing; 
> this Jira is intended to add it.
> This will aid existing installations where the tables were created without a 
> specified UPDATE_CACHE_FREQUENCY, and thus always hit the system catalog to 
> get the latest metadata when referenced. With this support, we will be able 
> to reduce the load on system catalog by specifying a connection-level 
> property for all tables referenced from the connection (as against UPSERTing 
> each table entry in system catalog to set an UPDATE_CACHE_FREQUENCY value).
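
For example, a hedged sketch of setting the connection-level default; the property name comes from the description above, while the JDBC URL and value are illustrative:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class CacheFrequencyConnection {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Applies to every table referenced from this connection that was
        // created without an explicit UPDATE_CACHE_FREQUENCY.
        props.setProperty("phoenix.default.update.cache.frequency", "60000"); // ms
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            // Metadata for such tables is then cached for up to 60 s before
            // SYSTEM.CATALOG is consulted again.
        }
    }
}
{code}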



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5683) Invalid pom for phoenix-connectors

2020-01-15 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-5683:

Description: 
Multiple warnings/errors from Maven about the pom structure of the project
 * Duplicate maven-compiler-plugin definitions in phoenix-spark
 * Invalid parent element in presto-phoenix-shaded
 * Incorrect phoenix version set
 * Tephra version not defined

  was:
Multiple warnings/errors from Maven about the pom structure of the project
 * Duplicate maven-compiler-plugin definitions in phoenix-spark
 * Invalid parent element in presto-phoenix-shaded


> Invalid pom for phoenix-connectors
> --
>
> Key: PHOENIX-5683
> URL: https://issues.apache.org/jira/browse/PHOENIX-5683
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: connectors-1.0.0
>
>
> Multiple warnings/errors from Maven about the pom structure of the project
>  * Duplicate maven-compiler-plugin definitions in phoenix-spark
>  * Invalid parent element in presto-phoenix-shaded
>  * Incorrect phoenix version set
>  * Tephra version not defined



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-5619) CREATE TABLE AS SELECT for Phoenix table doesn't work correctly in Hive

2020-01-15 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-5619.
-
Fix Version/s: connectors-1.0.0
   Resolution: Fixed

Thanks for the fix, Toshi!

> CREATE TABLE AS SELECT for Phoenix table doesn't work correctly in Hive
> ---
>
> Key: PHOENIX-5619
> URL: https://issues.apache.org/jira/browse/PHOENIX-5619
> Project: Phoenix
>  Issue Type: Bug
> Environment: HDP-3.1.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: connectors-1.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The steps to reproduce are as follows:
> 1. Create a table in Phoenix:
> {code:java}
> CREATE TABLE TEST (ID VARCHAR PRIMARY KEY, COL VARCHAR);
> {code}
> 2. Create a table in Hive that's based on the Phoenix table created in 
> step 1:
> {code:java}
> CREATE EXTERNAL TABLE test (id STRING, col STRING)
> STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
> TBLPROPERTIES (
>   "phoenix.table.name" = "TEST",
>   "phoenix.zookeeper.quorum" = "",
>   "phoenix.zookeeper.znode.parent" = "/hbase-unsecure",
>   "phoenix.zookeeper.client.port" = "2181",
>   "phoenix.rowkeys" = "ID",
>   "phoenix.column.mapping" = "id:ID, col:COL"
> );
> {code}
> 3. Insert data into the Hive table in Hive:
> {code:java}
> INSERT INTO TABLE test VALUES ('id', 'col');
> {code}
> 4. Run CREATE TABLE AS SELECT in Hive
> {code:java}
> CREATE TABLE test2 AS SELECT * from test;
> {code}
>  
> After step 4, I face the following error:
> {code:java}
> 2019-12-13 08:22:20,963 [DEBUG] [TezChild] |client.RpcRetryingCallerImpl|: Call exception, tries=7, retries=16, started=8159 ms ago, cancelled=false, msg=org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/meta-region-server, details=row 'SYSTEM:CATALOG' on table 'hbase:meta' at null, exception=java.io.IOException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/meta-region-server
>   at org.apache.hadoop.hbase.client.ConnectionImplementation.get(ConnectionImplementation.java:2009)
>   at org.apache.hadoop.hbase.client.ConnectionImplementation.locateMeta(ConnectionImplementation.java:785)
>   at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:752)
>   at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:741)
>   at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:712)
>   at org.apache.hadoop.hbase.client.ConnectionImplementation.getRegionLocation(ConnectionImplementation.java:594)
>   at org.apache.hadoop.hbase.client.HRegionLocator.getRegionLocation(HRegionLocator.java:72)
>   at org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:223)
>   at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
>   at org.apache.hadoop.hbase.client.HTable.get(HTable.java:386)
>   at org.apache.hadoop.hbase.client.HTable.get(HTable.java:360)
>   at org.apache.hadoop.hbase.MetaTableAccessor.getTableState(MetaTableAccessor.java:1066)
>   at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:389)
>   at org.apache.hadoop.hbase.client.HBaseAdmin$6.rpcCall(HBaseAdmin.java:441)
>   at org.apache.hadoop.hbase.client.HBaseAdmin$6.rpcCall(HBaseAdmin.java:438)
>   at org.apache.hadoop.hbase.client.RpcRetryingCallable.call(RpcRetryingCallable.java:58)
>   at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
>   at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3080)
>   at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3072)
>   at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:438)
>   at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1106)
>   at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1502)
>   at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2740)
>   at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1114)
>   at org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
>   at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
>   at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixSt

[jira] [Created] (PHOENIX-5684) Set batch-mode for phoenix-connectors

2020-01-15 Thread Josh Elser (Jira)
Josh Elser created PHOENIX-5684:
---

 Summary: Set batch-mode for phoenix-connectors
 Key: PHOENIX-5684
 URL: https://issues.apache.org/jira/browse/PHOENIX-5684
 Project: Phoenix
  Issue Type: Improvement
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: connectors-1.0.0


The phoenix-connectors precommit is inundated with crap output from Maven. The 
{{-B}} option will squash it all.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5683) Invalid pom for phoenix-connectors

2020-01-15 Thread Josh Elser (Jira)
Josh Elser created PHOENIX-5683:
---

 Summary: Invalid pom for phoenix-connectors
 Key: PHOENIX-5683
 URL: https://issues.apache.org/jira/browse/PHOENIX-5683
 Project: Phoenix
  Issue Type: Bug
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: connectors-1.0.0


Multiple warnings/errors from Maven about the pom structure of the project
 * Duplicate maven-compiler-plugin definitions in phoenix-spark
 * Invalid parent element in presto-phoenix-shaded



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5671) Add tests for ViewUtil

2020-01-15 Thread Xinyi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-5671:
---
Attachment: PHOENIX-5671.master.001.patch

> Add tests for ViewUtil
> --
>
> Key: PHOENIX-5671
> URL: https://issues.apache.org/jira/browse/PHOENIX-5671
> Project: Phoenix
>  Issue Type: Test
>Affects Versions: 4.16.0
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Attachments: PHOENIX-5671.master.001.patch, PHOENIX-5671.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Adding tests for ViewUtil to verify that hasChildViews, 
> getSystemTableForChildLinks, and other APIs are working as expected.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5634) Use 'phoenix.default.update.cache.frequency' from connection properties at query time

2020-01-15 Thread Nitesh Maheshwari (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitesh Maheshwari updated PHOENIX-5634:
---
Attachment: (was: PHOENIX-5634.master.v1.patch)

> Use 'phoenix.default.update.cache.frequency' from connection properties at 
> query time
> -
>
> Key: PHOENIX-5634
> URL: https://issues.apache.org/jira/browse/PHOENIX-5634
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Nitesh Maheshwari
>Assignee: Nitesh Maheshwari
>Priority: Minor
> Fix For: 5.1.0, 4.15.1
>
> Attachments: PHOENIX-5634.4.x-HBase-1.3.v3.patch, 
> PHOENIX-5634.master.v2.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> We have the config 'phoenix.default.update.cache.frequency' which specifies 
> the time a client should wait before it refreshes its metadata cache entry 
> for a table by fetching the latest metadata from system catalog. This value 
> could be set for a table in the following ways (in the following preference 
> order):
>  # Specifying UPDATE_CACHE_FREQUENCY in table creation DDL
>  # Specifying the connection property 'phoenix.default.update.cache.frequency'
>  # Using the default 'phoenix.default.update.cache.frequency'
> At query time, we look at whether UPDATE_CACHE_FREQUENCY was specified for 
> the table and decide based on that value if the latest metadata for a table 
> should be fetched from system catalog to update the cache. However, when the 
> table doesn't have UPDATE_CACHE_FREQUENCY specified, we should look at the 
> connection property 'phoenix.default.update.cache.frequency' (or the default 
> 'phoenix.default.update.cache.frequency' when the connection-level property 
> is not set) to make that decision. The support for the latter is missing; 
> this Jira is intended to add it.
> This will aid existing installations where the tables were created without a 
> specified UPDATE_CACHE_FREQUENCY, and thus always hit the system catalog to 
> get the latest metadata when referenced. With this support, we will be able 
> to reduce the load on system catalog by specifying a connection-level 
> property for all tables referenced from the connection (as against UPSERTing 
> each table entry in system catalog to set an UPDATE_CACHE_FREQUENCY value).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5634) Use 'phoenix.default.update.cache.frequency' from connection properties at query time

2020-01-15 Thread Nitesh Maheshwari (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitesh Maheshwari updated PHOENIX-5634:
---
Attachment: (was: PHOENIX-5634.4.x-HBase-1.3.v2.patch)

> Use 'phoenix.default.update.cache.frequency' from connection properties at 
> query time
> -
>
> Key: PHOENIX-5634
> URL: https://issues.apache.org/jira/browse/PHOENIX-5634
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Nitesh Maheshwari
>Assignee: Nitesh Maheshwari
>Priority: Minor
> Fix For: 5.1.0, 4.15.1
>
> Attachments: PHOENIX-5634.4.x-HBase-1.3.v3.patch, 
> PHOENIX-5634.master.v2.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> We have the config 'phoenix.default.update.cache.frequency' which specifies 
> the time a client should wait before it refreshes its metadata cache entry 
> for a table by fetching the latest metadata from system catalog. This value 
> could be set for a table in the following ways (in the following preference 
> order):
>  # Specifying UPDATE_CACHE_FREQUENCY in table creation DDL
>  # Specifying the connection property 'phoenix.default.update.cache.frequency'
>  # Using the default 'phoenix.default.update.cache.frequency'
> At query time, we look at whether UPDATE_CACHE_FREQUENCY was specified for 
> the table and decide based on that value if the latest metadata for a table 
> should be fetched from system catalog to update the cache. However, when the 
> table doesn't have UPDATE_CACHE_FREQUENCY specified, we should look at the 
> connection property 'phoenix.default.update.cache.frequency' (or the default 
> 'phoenix.default.update.cache.frequency' when the connection-level property 
> is not set) to make that decision. The support for the latter is missing; 
> this Jira is intended to add it.
> This will aid existing installations where the tables were created without a 
> specified UPDATE_CACHE_FREQUENCY, and thus always hit the system catalog to 
> get the latest metadata when referenced. With this support, we will be able 
> to reduce the load on system catalog by specifying a connection-level 
> property for all tables referenced from the connection (as against UPSERTing 
> each table entry in system catalog to set an UPDATE_CACHE_FREQUENCY value).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5634) Use 'phoenix.default.update.cache.frequency' from connection properties at query time

2020-01-15 Thread Nitesh Maheshwari (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitesh Maheshwari updated PHOENIX-5634:
---
Attachment: (was: PHOENIX-5634.4.x-HBase-1.3.v1.patch)

> Use 'phoenix.default.update.cache.frequency' from connection properties at 
> query time
> -
>
> Key: PHOENIX-5634
> URL: https://issues.apache.org/jira/browse/PHOENIX-5634
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Nitesh Maheshwari
>Assignee: Nitesh Maheshwari
>Priority: Minor
> Fix For: 5.1.0, 4.15.1
>
> Attachments: PHOENIX-5634.4.x-HBase-1.3.v3.patch, 
> PHOENIX-5634.master.v2.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> We have the config 'phoenix.default.update.cache.frequency' which specifies 
> the time a client should wait before it refreshes its metadata cache entry 
> for a table by fetching the latest metadata from system catalog. This value 
> could be set for a table in the following ways (in the following preference 
> order):
>  # Specifying UPDATE_CACHE_FREQUENCY in table creation DDL
>  # Specifying the connection property 'phoenix.default.update.cache.frequency'
>  # Using the default 'phoenix.default.update.cache.frequency'
> At query time, we look at whether UPDATE_CACHE_FREQUENCY was specified for 
> the table and decide based on that value if the latest metadata for a table 
> should be fetched from system catalog to update the cache. However, when the 
> table doesn't have UPDATE_CACHE_FREQUENCY specified, we should look at the 
> connection property 'phoenix.default.update.cache.frequency' (or the default 
> 'phoenix.default.update.cache.frequency' when the connection-level property 
> is not set) to make that decision. The support for the latter is missing; 
> this Jira is intended to add it.
> This will aid existing installations where the tables were created without a 
> specified UPDATE_CACHE_FREQUENCY, and thus always hit the system catalog to 
> get the latest metadata when referenced. With this support, we will be able 
> to reduce the load on system catalog by specifying a connection-level 
> property for all tables referenced from the connection (as against UPSERTing 
> each table entry in system catalog to set an UPDATE_CACHE_FREQUENCY value).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5673) The mutation state is silently getting cleared on the execution of any DDL

2020-01-15 Thread Abhishek Singh Chouhan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated PHOENIX-5673:

Priority: Critical  (was: Major)

> The mutation state is silently getting cleared on the execution of any DDL
> --
>
> Key: PHOENIX-5673
> URL: https://issues.apache.org/jira/browse/PHOENIX-5673
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Sandeep Guggilam
>Priority: Critical
>  Labels: beginner, newbie
> Fix For: 4.16.0
>
>
> When we execute any DDL statement, the mutation state is silently rolled 
> back without informing the user. It should probably throw an exception 
> saying that the mutation state is not empty when executing any DDL. See the 
> example below:
>  
> Steps to reproduce:
> create table t1 (pk varchar not null primary key, mycol varchar);
> upsert into t1 (pk, mycol) values ('x','x');
> create table t2 (pk varchar not null primary key, mycol varchar);
> When we try to execute the above statements and do a conn.commit() at the 
> end, it silently rolls back the upsert statement when we execute the 
> second create statement, and you wouldn't see the ('x', 'x') values in the 
> first table. Instead, it should probably throw an exception saying that the 
> mutation state is not empty.
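
The same sequence as a hedged JDBC sketch; the connection URL is illustrative:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DdlClearsMutationState {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE t1 (pk VARCHAR NOT NULL PRIMARY KEY, mycol VARCHAR)");
            stmt.executeUpdate("UPSERT INTO t1 (pk, mycol) VALUES ('x','x')"); // pending mutation
            // The DDL below silently clears the pending upsert...
            stmt.execute("CREATE TABLE t2 (pk VARCHAR NOT NULL PRIMARY KEY, mycol VARCHAR)");
            conn.commit(); // ...so ('x','x') never reaches t1.
        }
    }
}
{code}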



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Moving Phoenix master to Hbase 2.2

2020-01-15 Thread Andrew Purtell
I suppose so, but release building is scripted. The build script can iterate 
over a set of desired HBase version targets and drive the build by setting 
parameters on the maven command line. 


> On Jan 15, 2020, at 2:01 AM, Guanghao Zhang  wrote:
> 
> 
>> 
>> 
>> Anyway let’s assume for now you want to unify all the branches for HBase
>> 1.x. Start with the lowest HBase version you want to support. Then iterate
>> up to the highest HBase version you want to support. Whenever you run into
>> compile problems, make a new version specific maven module, add logic to
>> the parent POM that chooses the right one. Then for each implicated file,
>> move it into the version specific maven modules, duplicating as needed, and
>> finally fixing up where needed.
>> 
> +1. So we want to use one branch to handle all HBase branches? But we still
> need to release multiple src/bin tarballs for the multiple HBase versions?
> 
> Andrew Purtell wrote on Wednesday, January 15, 2020 at 10:55 AM:
> 
>> Take PhoenixAccessController as an example. Over time the HBase interfaces
>> change in minor ways. You’ll need different compilation units for this
>> class to be able to compile it across a wide range of 1.x. However the
>> essential Phoenix functionality does not change. The logic that makes up
>> the method bodies can be factored into a class that groups together static
>> helper methods which come to contain this common logic. The common class
>> can remain in the core module. Then all you have in the version specific
>> modules is scaffolding. In that scaffolding, calls to the static methods in
>> core. It’s not a clever refactor but is DRY. Over time this can be made
>> cleaner case by case where the naive transformation has a distasteful
>> result.
>> 
>> 
>>> On Jan 14, 2020, at 6:40 PM, Andrew Purtell 
>> wrote:
>>> 
>> 
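
A hedged sketch of the shape described above: common logic as static helpers in the core module, with thin version-specific scaffolding delegating to it. Class and method names are invented for illustration:

{code:java}
// Core module: version-independent logic grouped as static helper methods.
final class AccessControllerCommon {
    private AccessControllerCommon() {}

    static void checkPermission(String user, String table) {
        // ... essential Phoenix logic that does not change across HBase 1.x ...
        System.out.println("checking " + user + " on " + table);
    }
}

// Version-specific module: scaffolding only, matching that HBase version's
// coprocessor interfaces and calling into the shared helpers.
class PhoenixAccessControllerCompat /* implements the 1.x observer interface */ {
    public void preGetOp(String user, String table) {
        AccessControllerCommon.checkPermission(user, table);
    }
}
{code}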


[jira] [Updated] (PHOENIX-5674) IndexTool to not write already correct index rows

2020-01-15 Thread Kadir OZDEMIR (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir OZDEMIR updated PHOENIX-5674:
---
Attachment: PHOENIX-5674.master.003.patch

> IndexTool to not write already correct index rows
> -
>
> Key: PHOENIX-5674
> URL: https://issues.apache.org/jira/browse/PHOENIX-5674
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.15.1, 4.14.4
>
> Attachments: PHOENIX-5674.4.x-HBase-1.5.001.patch, 
> PHOENIX-5674.4.x-HBase-1.5.002.patch, PHOENIX-5674.master.001.patch, 
> PHOENIX-5674.master.002.patch, PHOENIX-5674.master.003.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> IndexTool can avoid writing index rows if they are already consistent with 
> the data table. This will be especially useful when rebuilding an index on a 
> DR site where indexes are already replicated, but a rebuild might be needed 
> for catch-up.
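
A hedged sketch of the skip condition; the real tool would compare the full expected index row state, so the byte-array comparison here is a simplification:

{code:java}
import java.util.Arrays;

public class SkipConsistentIndexRow {
    /** Illustrative: only rewrite the index row when it disagrees with the
     *  row the data table says it should contain. */
    static boolean needsRewrite(byte[] existingIndexRow, byte[] expectedIndexRow) {
        return !Arrays.equals(existingIndexRow, expectedIndexRow);
    }

    public static void main(String[] args) {
        byte[] expected = "v1".getBytes();
        System.out.println(needsRewrite("v1".getBytes(), expected)); // false: skip the write
        System.out.println(needsRewrite("v0".getBytes(), expected)); // true: rewrite
    }
}
{code}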



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5674) IndexTool to not write already correct index rows

2020-01-15 Thread Kadir OZDEMIR (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir OZDEMIR updated PHOENIX-5674:
---
Attachment: PHOENIX-5674.4.x-HBase-1.5.002.patch

> IndexTool to not write already correct index rows
> -
>
> Key: PHOENIX-5674
> URL: https://issues.apache.org/jira/browse/PHOENIX-5674
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.15.1, 4.14.4
>
> Attachments: PHOENIX-5674.4.x-HBase-1.5.001.patch, 
> PHOENIX-5674.4.x-HBase-1.5.002.patch, PHOENIX-5674.master.001.patch, 
> PHOENIX-5674.master.002.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> IndexTool can avoid writing index rows if they are already consistent with 
> the data table. This will be especially useful when rebuilding an index on a 
> DR site where indexes are already replicated, but a rebuild might be needed 
> for catch-up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5674) IndexTool to not write already correct index rows

2020-01-15 Thread Kadir OZDEMIR (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir OZDEMIR updated PHOENIX-5674:
---
Attachment: PHOENIX-5674.4.x-HBase-1.5.001.patch

> IndexTool to not write already correct index rows
> -
>
> Key: PHOENIX-5674
> URL: https://issues.apache.org/jira/browse/PHOENIX-5674
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.15.1, 4.14.4
>
> Attachments: PHOENIX-5674.4.x-HBase-1.5.001.patch, 
> PHOENIX-5674.master.001.patch, PHOENIX-5674.master.002.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> IndexTool can avoid writing index rows if they are already consistent with 
> the data table. This will be especially useful when rebuilding an index on a 
> DR site where indexes are already replicated, but a rebuild might be needed 
> for catch-up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5674) IndexTool to not write already correct index rows

2020-01-15 Thread Kadir OZDEMIR (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir OZDEMIR updated PHOENIX-5674:
---
Attachment: PHOENIX-5674.master.002.patch

> IndexTool to not write already correct index rows
> -
>
> Key: PHOENIX-5674
> URL: https://issues.apache.org/jira/browse/PHOENIX-5674
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.15.1, 4.14.4
>
> Attachments: PHOENIX-5674.master.001.patch, 
> PHOENIX-5674.master.002.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> IndexTool can avoid writing index rows if they are already consistent with 
> the data table. This will be especially useful when rebuilding an index on a 
> DR site where indexes are already replicated, but a rebuild might be needed 
> for catch-up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Moving Phoenix master to Hbase 2.2

2020-01-15 Thread Guanghao Zhang
>
> Anyway let’s assume for now you want to unify all the branches for HBase
> 1.x. Start with the lowest HBase version you want to support. Then iterate
> up to the highest HBase version you want to support. Whenever you run into
> compile problems, make a new version specific maven module, add logic to
> the parent POM that chooses the right one. Then for each implicated file,
> move it into the version specific maven modules, duplicating as needed, and
> finally fixing up where needed.
>
+1. So we want to use one branch to handle all HBase branches? But we still
need to release multiple src/bin tarballs for the multiple HBase versions?

Andrew Purtell wrote on Wednesday, January 15, 2020 at 10:55 AM:

> Take PhoenixAccessController as an example. Over time the HBase interfaces
> change in minor ways. You’ll need different compilation units for this
> class to be able to compile it across a wide range of 1.x. However the
> essential Phoenix functionality does not change. The logic that makes up
> the method bodies can be factored into a class that groups together static
> helper methods which come to contain this common logic. The common class
> can remain in the core module. Then all you have in the version specific
> modules is scaffolding. In that scaffolding, calls to the static methods in
> core. It’s not a clever refactor but is DRY. Over time this can be made
> cleaner case by case where the naive transformation has a distasteful
> result.
>
>
> > On Jan 14, 2020, at 6:40 PM, Andrew Purtell 
> wrote:
> >
>


[jira] [Updated] (PHOENIX-5676) Inline-verification from IndexTool does not handle TTL/row-expiry

2020-01-15 Thread Abhishek Singh Chouhan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated PHOENIX-5676:

Attachment: PHOENIX-5676-4.x-HBase-1.5.patch

> Inline-verification from IndexTool does not handle TTL/row-expiry
> -
>
> Key: PHOENIX-5676
> URL: https://issues.apache.org/jira/browse/PHOENIX-5676
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 5.1.0, 4.15.1, 4.16.0
>
> Attachments: PHOENIX-5676-4.x-HBase-1.5.patch, 
> PHOENIX-5676-4.x-HBase-1.5.patch, PHOENIX-5676-master.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> If a data-table has TTL on it, its indexes inherit the TTL too. Hence, when 
> we run IndexTool with verification on such tables and their indexes, rows 
> that are near expiry will successfully get rebuilt, but may not be returned 
> by the verification read due to expiry. This will result in an index 
> verification failure and may also fail the rebuild job.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5676) Inline-verification from IndexTool does not handle TTL/row-expiry

2020-01-15 Thread Abhishek Singh Chouhan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated PHOENIX-5676:

Fix Version/s: 5.1.0

> Inline-verification from IndexTool does not handle TTL/row-expiry
> -
>
> Key: PHOENIX-5676
> URL: https://issues.apache.org/jira/browse/PHOENIX-5676
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 5.1.0, 4.15.1, 4.16.0
>
> Attachments: PHOENIX-5676-4.x-HBase-1.5.patch, 
> PHOENIX-5676-master.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> If a data-table has TTL on it, its indexes inherit the TTL too. Hence, when 
> we run IndexTool with verification on such tables and their indexes, rows 
> that are near expiry will successfully get rebuilt, but may not be returned 
> by the verification read due to expiry. This will result in an index 
> verification failure and may also fail the rebuild job.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5676) Inline-verification from IndexTool does not handle TTL/row-expiry

2020-01-15 Thread Abhishek Singh Chouhan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated PHOENIX-5676:

Attachment: PHOENIX-5676-master.patch

> Inline-verification from IndexTool does not handle TTL/row-expiry
> -
>
> Key: PHOENIX-5676
> URL: https://issues.apache.org/jira/browse/PHOENIX-5676
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 4.15.1, 4.16.0
>
> Attachments: PHOENIX-5676-4.x-HBase-1.5.patch, 
> PHOENIX-5676-master.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> If a data-table has TTL on it, its indexes inherit the TTL too. Hence, when 
> we run IndexTool with verification on such tables and their indexes, rows 
> that are near expiry will successfully get rebuilt, but may not be returned 
> by the verification read due to expiry. This will result in an index 
> verification failure and may also fail the rebuild job.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)