[jira] [Updated] (PHOENIX-5969) Read repair reduces the number of rows returned for LIMIT queries

2020-06-22 Thread Kadir OZDEMIR (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir OZDEMIR updated PHOENIX-5969:
---
Attachment: PHOENIX-5969.4.x.002.patch

> Read repair reduces the number of rows returned for LIMIT queries
> -
>
> Key: PHOENIX-5969
> URL: https://issues.apache.org/jira/browse/PHOENIX-5969
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.3
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Major
> Attachments: PHOENIX-5969.4.x.001.patch, PHOENIX-5969.4.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Phoenix uses the HBase PageFilter to limit the number of rows returned by 
> scans. If a scanned index row is unverified, GlobalIndexChecker repairs the 
> row. This repair operation results in either skipping the unverified row or 
> scanning its repaired version. Every scanned row, including unverified rows, 
> is counted by the page filter. Since unverified rows are counted but not 
> returned for the query, the actual number of rows returned for a LIMIT query 
> can be less than the set limit (i.e., the page size) for the query.
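The interaction above can be illustrated with a minimal sketch. This is not Phoenix code; the function and row representation are illustrative assumptions that only model how counting skipped (unverified) rows against the page limit shrinks the result below the requested LIMIT:

```python
def scan_with_page_limit(rows, limit):
    """Simulate a scan with a page filter.

    rows: list of (row_key, verified) pairs in scan order.
    Returns the row keys emitted to the client.
    """
    returned = []
    counted = 0
    for key, verified in rows:
        counted += 1          # the page filter counts every scanned row
        if verified:
            returned.append(key)
        # unverified rows are skipped but still consumed part of the page
        if counted >= limit:  # the filter terminates the scan at the page size
            break
    return returned

rows = [("r1", True), ("r2", False), ("r3", True), ("r4", True)]
print(scan_with_page_limit(rows, 3))  # ['r1', 'r3'] -- only 2 rows for LIMIT 3
```

Because "r2" is unverified, it consumes one slot of the page without being returned, so a LIMIT 3 query yields only two rows even though a fourth verified row exists.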



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5861) Delete index data failed,due to pool closed

2020-06-22 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5861:
---
Priority: Critical  (was: Trivial)

> Delete index data failed,due to pool closed
> ---
>
> Key: PHOENIX-5861
> URL: https://issues.apache.org/jira/browse/PHOENIX-5861
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.15.0, 4.14.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Critical
> Attachments: PHOENIX-5861.4.13.x-HBASE.1.3.x.002.patch
>
>
> When deleting index data, a "pool closed" exception is thrown in the 
> TrackingParallelWriterIndexCommitter class. The client issues a SQL statement 
> such as delete from ...; when an index table is enabled, the Indexer 
> coprocessor processes the index data on the server side, where the server 
> uses the HTable of the index table to batch the mutations.
> When a region splits, the region closes first, and closing the region also 
> closes the Phoenix coprocessor (Indexer) by calling its stop method. This 
> method stops the IndexWriter, the IndexBuildManager, and the recoveryWriter. 
> If the region split then fails and starts to roll back, the rollback does not 
> re-initialize the IndexWriter, the IndexBuildManager, or the recoveryWriter, 
> so subsequent index operations fail with a "pool closed" exception.
> A simple test where the region split fails and the rollback succeeds, but 
> deleting the index data fails:
> 1. Create a data table and an index table.
> 2. Bulkload data into the data table.
> 3. Alter the hbase-server code so that the region split throws an exception 
> after the region close happens.
> 4. Use the hbase shell to split the region.
> 5. Check the regionserver log: the region split fails, and then the rollback 
> succeeds.
> 6. Use Phoenix sqlline.py to delete data, which throws the exception.
>  
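The lifecycle bug described above can be sketched in a few lines. This is a simulation, not the actual Indexer code; the class and method names are illustrative assumptions modeling a writer pool that is stopped on region close but never reopened by the split rollback:

```python
class IndexWriterPool:
    """Toy stand-in for the index writer's thread pool."""

    def __init__(self):
        self.closed = False

    def submit(self, mutation):
        if self.closed:
            raise RuntimeError("pool closed")
        return f"wrote {mutation}"

    def stop(self):
        self.closed = True


pool = IndexWriterPool()
pool.stop()        # region close during the split stops the writer pool
# The split fails and rolls back, but nothing re-initializes the pool,
# so the next index mutation fails:
try:
    pool.submit("DELETE index row")
except RuntimeError as e:
    print(e)       # pool closed
```

A fix along these lines would need the rollback path to re-create (or reopen) the pool before the region starts serving index mutations again.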





[jira] [Updated] (PHOENIX-5861) Delete index data failed,due to pool closed

2020-06-22 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5861:
---
Priority: Trivial  (was: Critical)

> Delete index data failed,due to pool closed
> ---
>
> Key: PHOENIX-5861
> URL: https://issues.apache.org/jira/browse/PHOENIX-5861
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.15.0, 4.14.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Trivial
> Attachments: PHOENIX-5861.4.13.x-HBASE.1.3.x.002.patch
>
>
> When deleting index data, a "pool closed" exception is thrown in the 
> TrackingParallelWriterIndexCommitter class. The client issues a SQL statement 
> such as delete from ...; when an index table is enabled, the Indexer 
> coprocessor processes the index data on the server side, where the server 
> uses the HTable of the index table to batch the mutations.
> When a region splits, the region closes first, and closing the region also 
> closes the Phoenix coprocessor (Indexer) by calling its stop method. This 
> method stops the IndexWriter, the IndexBuildManager, and the recoveryWriter. 
> If the region split then fails and starts to roll back, the rollback does not 
> re-initialize the IndexWriter, the IndexBuildManager, or the recoveryWriter, 
> so subsequent index operations fail with a "pool closed" exception.
> A simple test where the region split fails and the rollback succeeds, but 
> deleting the index data fails:
> 1. Create a data table and an index table.
> 2. Bulkload data into the data table.
> 3. Alter the hbase-server code so that the region split throws an exception 
> after the region close happens.
> 4. Use the hbase shell to split the region.
> 5. Check the regionserver log: the region split fails, and then the rollback 
> succeeds.
> 6. Use Phoenix sqlline.py to delete data, which throws the exception.
>  





[jira] [Created] (PHOENIX-5973) IndexTool tests are really slow

2020-06-22 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-5973:


 Summary: IndexTool tests are really slow
 Key: PHOENIX-5973
 URL: https://issues.apache.org/jira/browse/PHOENIX-5973
 Project: Phoenix
  Issue Type: Test
Reporter: Geoffrey Jacoby
Assignee: Geoffrey Jacoby


The many IndexTool test suites have clearly had some bad performance 
regressions recently, which is part of why the overall Phoenix precommit 
builds are timing out. We should investigate why and fix this. 





[jira] [Created] (PHOENIX-5972) IndexTool should fast-fail if HBase table or index is disabled

2020-06-22 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-5972:


 Summary: IndexTool should fast-fail if HBase table or index is 
disabled
 Key: PHOENIX-5972
 URL: https://issues.apache.org/jira/browse/PHOENIX-5972
 Project: Phoenix
  Issue Type: Improvement
Reporter: Geoffrey Jacoby
Assignee: Geoffrey Jacoby


I noticed while checking performance on some slow-running IndexTool tests that 
if the index table is disabled at the HBase level, the MapReduce job will 
still keep retrying for quite a while before giving up. We should check at the 
start of a mapper whether either the data table or the index table is 
disabled, and if so, fail the mapper. 
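The fast-fail check might look like the following sketch. This is not IndexTool code; the function and the table-state map are illustrative assumptions (in the real tool, the state lookup would presumably go through the HBase Admin API, e.g. a call like isTableDisabled):

```python
def setup_mapper(table_states, data_table, index_table):
    """Fail fast before any map work if either table is not enabled.

    table_states: dict mapping table name -> "ENABLED" or "DISABLED"
    (a stand-in for querying the HBase cluster's table state).
    """
    for name in (data_table, index_table):
        if table_states.get(name) != "ENABLED":
            raise RuntimeError(f"table {name} is not enabled; failing fast")
    return "ok"


states = {"DATA_TABLE": "ENABLED", "INDEX_TABLE": "DISABLED"}
try:
    setup_mapper(states, "DATA_TABLE", "INDEX_TABLE")
except RuntimeError as e:
    print(e)  # table INDEX_TABLE is not enabled; failing fast
```

Raising in mapper setup surfaces the misconfiguration immediately instead of letting the job retry against a disabled table until the retry budget is exhausted.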


