[ 
https://issues.apache.org/jira/browse/PHOENIX-3609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15829325#comment-15829325
 ] 

Hadoop QA commented on PHOENIX-3609:
------------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12848231/PHOENIX-3609.patch
  against master branch at commit a675211909415ca376e432d25f8a8822aadf5712.
  ATTACHMENT ID: 12848231

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
                        Please justify why no new tests are needed for this 
patch.
                        Also please list what manual steps were performed to 
verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
43 warning messages.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
    +        PhoenixConnection conn = DriverManager.getConnection(getUrl()).unwrap(PhoenixConnection.class);
+        try (HTableInterface metaTable = conn.getQueryServices().getTable(TableName.META_TABLE_NAME.getName());
+                statement.execute("upsert into " + tableName + "  values(" + i + ",'fn" + i + "','ln" + i + "')");
+    private void copyLocalIndexHFiles(Configuration conf, HRegionInfo fromRegion, HRegionInfo toRegion, boolean move)
+        Path seondRegion = new Path(HTableDescriptor.getTableDir(root, fromRegion.getTableName()) + Path.SEPARATOR
+        Path hfilePath = FSUtils.getCurrentFileSystem(conf).listFiles(seondRegion, true).next().getPath();
+        Path firstRegionPath = new Path(HTableDescriptor.getTableDir(root, toRegion.getTableName()) + Path.SEPARATOR
+        assertTrue(FileUtil.copy(currentFileSystem, hfilePath, currentFileSystem, firstRegionPath, move, conf));
+            List<StoreFileScanner> scanners, ScanType scanType, long smallestReadPoint, long earliestPutTs,
+        super(store, store.getScanInfo(), scan, scanners, scanType, smallestReadPoint, earliestPutTs);

     {color:red}-1 core tests{color}.  The patch failed these unit tests:
     

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/735//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/735//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/735//console

This message is automatically generated.

> Detect and fix corrupted local index region during compaction
> -------------------------------------------------------------
>
>                 Key: PHOENIX-3609
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3609
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.8.0
>            Reporter: Ankit Singhal
>            Assignee: Ankit Singhal
>             Fix For: 4.10.0
>
>         Attachments: PHOENIX-3609.patch
>
>
> Local index regions can become corrupted when hbck is run to fix overlapping 
> regions and their directories are simply merged to create a single region.
> We can detect this during compaction by looking at the start key of each 
> store file and comparing its prefix with the region start key. If the local 
> index for the region is found to be inconsistent, we will read the store 
> files of the corresponding data region and rebuild the local index data.
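
A minimal sketch of the consistency check described above, assuming the region start key and the first row keys of the local index store files are already available as byte arrays. The class and method names here are hypothetical illustrations based on the issue description, not the actual code in PHOENIX-3609.patch:

{code:java}
import java.util.List;
import org.apache.hadoop.hbase.util.Bytes;

public class LocalIndexConsistencyCheck {

    /**
     * Returns true if every store file's first row key starts with the
     * region's start key. A mismatch indicates the local index store file
     * was merged in from another region (e.g. by hbck) and the local index
     * data for this region is corrupted.
     */
    static boolean isLocalIndexConsistent(byte[] regionStartKey,
            List<byte[]> storeFileFirstKeys) {
        for (byte[] firstKey : storeFileFirstKeys) {
            if (firstKey == null) continue; // empty store file, nothing to check
            if (!Bytes.startsWith(firstKey, regionStartKey)) {
                return false; // row key prefix does not match this region
            }
        }
        return true;
    }
}
{code}

If this check fails during compaction, the approach described in the issue is to discard the inconsistent local index store files and regenerate the index rows by scanning the store files of the corresponding data region.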



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
