[jira] [Commented] (HBASE-18740) Upgrade Zookeeper version to 3.4.10

2017-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152174#comment-16152174
 ] 

Hudson commented on HBASE-18740:


FAILURE: Integrated in Jenkins build HBase-1.5 #43 (See 
[https://builds.apache.org/job/HBase-1.5/43/])
HBASE-18740 Upgrade Zookeeper version to 3.4.10 (jerryjch: rev 
6a5bb3b48c12e3f441fb221d2f954e8a67706334)
* (edit) pom.xml
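The commit touches only pom.xml. As a hedged sketch of what such a Maven version bump typically looks like (the `zookeeper.version` property name is an assumption here, not taken from the actual patch):

```xml
<!-- Hypothetical sketch of the dependency bump; the property name is an
     assumption and may differ from the actual HBase pom.xml. -->
<properties>
  <zookeeper.version>3.4.10</zookeeper.version>
</properties>
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <version>${zookeeper.version}</version>
</dependency>
```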


> Upgrade Zookeeper version to 3.4.10
> ---
>
> Key: HBASE-18740
> URL: https://issues.apache.org/jira/browse/HBASE-18740
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 3.0.0, 1.4.0, 1.5.0, 2.0.0-alpha-3
>
> Attachments: HBASE-18740-branch-1.patch, HBASE-18740-branch-1.patch, 
> HBASE-18740-master.patch
>
>
> Branch-1.4 and branch-1 are still on ZooKeeper 3.4.6.
> Branch-2 and the master branch have upgraded to 3.4.9.
> There are some important fixes we'd like to have. See the linked JIRAs.
> Another critical fix is ZOOKEEPER-2146, which can be exploited maliciously.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18746) Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot fails

2017-09-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152156#comment-16152156
 ] 

Chia-Ping Tsai commented on HBASE-18746:


+1

> Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot 
> fails
> ---
>
> Key: HBASE-18746
> URL: https://issues.apache.org/jira/browse/HBASE-18746
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce, snapshots
>Reporter: Chia-Ping Tsai
>Assignee: ChunHao
>Priority: Minor
>  Labels: beginner
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE-18746.branch-2.v0.patch, 
> HBASE-18746.branch-2.v1.patch
>
>
> {code}
> // Run the MR Job
> if (!job.waitForCompletion(true)) {
>   // TODO: Replace the fixed string with job.getStatus().getFailureInfo()
>   // when it will be available on all the supported versions.
>   throw new ExportSnapshotException("Copy Files Map-Reduce Job failed");
> }
> {code}
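The TODO above can be sketched as a small message-building helper. This is a hedged, dependency-free illustration, not the committed patch: `failureMessage` is a hypothetical name, and the fallback mirrors the fixed string in the snippet (JobStatus#getFailureInfo can return nothing when no diagnostic is recorded).

```java
public class FailureMessageSketch {
    // Hypothetical helper (not the committed patch): builds the exception
    // message from the job's failure info, falling back to the fixed string
    // when no diagnostic is available.
    static String failureMessage(String failureInfo) {
        String base = "Copy Files Map-Reduce Job failed";
        if (failureInfo == null || failureInfo.isEmpty()) {
            return base;
        }
        return base + ": " + failureInfo;
    }

    public static void main(String[] args) {
        // With no failure info, the original fixed string is preserved.
        System.out.println(failureMessage(null));
        // With failure info, the diagnostic is appended.
        System.out.println(failureMessage("task exceeded memory limits"));
    }
}
```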





[jira] [Commented] (HBASE-18490) Modifying a table descriptor to enable replicas does not create replica regions

2017-09-03 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152155#comment-16152155
 ] 

ramkrishna.s.vasudevan commented on HBASE-18490:


Working on a patch. Observed a few more issues. Will raise JIRAs for them.

> Modifying a table descriptor to enable replicas does not create replica 
> regions
> ---
>
> Key: HBASE-18490
> URL: https://issues.apache.org/jira/browse/HBASE-18490
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.0.0-alpha-1
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: TestRegionReplicasWithRestartScenarios.java
>
>
> After creating a table, if we try to modify the table to enable region 
> replication, the new HTableDescriptor is not taken into account and the 
> table is re-enabled with the default single region.
> Ping [~enis], [~tedyu], [~devaraj].





[jira] [Commented] (HBASE-16390) Fix documentation around setAutoFlush

2017-09-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152152#comment-16152152
 ] 

Chia-Ping Tsai commented on HBASE-16390:


TestThriftHttpServer passes locally. Will commit it later.

> Fix documentation around setAutoFlush
> -
>
> Key: HBASE-16390
> URL: https://issues.apache.org/jira/browse/HBASE-16390
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: stack
>Assignee: Sahil Aggarwal
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-16390.master.001.patch
>
>
> Our documentation is a little confused around setAutoFlush. Talks of Table 
> but setAutoFlush is not in the Table interface. It was on HTable but was 
> deprecated and since removed. Clean up the doc:
> {code}
> 100.4. HBase Client: AutoFlush
> When performing a lot of Puts, make sure that setAutoFlush is set to false
> on your Table instance.
> Otherwise, the Puts will be sent one at a time to the RegionServer. Puts
> added via table.add(Put) and table.add(List<Put>) wind up in the same
> write buffer. If autoFlush = false, these messages are not sent until the
> write-buffer is filled. To explicitly flush the messages, call flushCommits.
> Calling close on the Table instance will invoke flushCommits.
> {code}
> Spotted by Jeff Shmain.
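The write-buffer behavior the quoted section describes can be modeled with a toy, self-contained sketch. All names and the buffer size here are illustrative, not HBase API: with auto-flush off, puts accumulate locally and go out in batches instead of one RPC per Put.

```java
import java.util.ArrayList;
import java.util.List;

public class WriteBufferSketch {
    // Toy model (not HBase API) of the client-side write buffer described
    // above: with autoFlush disabled, Puts accumulate locally and are sent
    // as one batch once the buffer fills, or when flush/close is called.
    static final int BUFFER_SIZE = 3;
    static final List<String> buffer = new ArrayList<>();
    static int batchesSent = 0;

    static void put(String row) {
        buffer.add(row);
        if (buffer.size() >= BUFFER_SIZE) {
            flush();                  // implicit flush: buffer is full
        }
    }

    static void flush() {
        if (!buffer.isEmpty()) {
            batchesSent++;            // one RPC for the whole batch
            buffer.clear();
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 7; i++) {
            put("row-" + i);
        }
        flush();                      // explicit flush, like flushCommits/close
        System.out.println(batchesSent); // 7 puts -> 3 batches (3 + 3 + 1)
    }
}
```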





[jira] [Updated] (HBASE-18743) HFiles in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table are deleted

2017-09-03 Thread wenbang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wenbang updated HBASE-18743:

Attachment: (was: HBASE-18743_branch-1.patch)

> HFiles in use by a table which has the same name and namespace with a default 
> table cloned from snapshot may be deleted when that snapshot and default 
> table are deleted
> 
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE-18743-branch-1.patch, HBASE_18743.patch, 
> HBASE_18743_v1.patch, HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still
> in use by a table were deleted.
> This appears to happen when a table cloned from a snapshot lives in a
> namespace whose name matches the name of a table in the default namespace:
> when that snapshot and the default-namespace table are deleted, HFiles that
> are still in use by the cloned table may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table is cloned from a snapshot into a namespace named after the
> default table, yielding "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles
> still in use by the new table are deleted too.
> This happens because the back-reference file is created with an incorrectly
> derived table name, so the reference file cannot be found and the HFileCleaner
> deletes in-use HFiles whenever the table has not yet been major compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>       final Path dstFamilyPath, final TableName linkedTable, final String linkedRegion,
>       final String hfileName, final boolean createBackRef) throws IOException {
>     String familyName = dstFamilyPath.getName();
>     String regionName = dstFamilyPath.getParent().getName();
>     String tableName = FSUtils.getTableName(dstFamilyPath.getParent().getParent())
>         .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
>     return TableName.valueOf(tablePath.getParent().getName(), tablePath.getName());
>   }
> {code}
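To illustrate the path-to-name derivation in the snippets above, here is a dependency-free sketch that mimics FSUtils.getTableName. `tableNameFromPath` is a hypothetical stand-in, not HBase code: it shows how the default-namespace table "t1" and the clone "t1:t1" are both derived purely from the directory layout, which is why the two can be confused.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class TableNameFromPathSketch {
    // Hypothetical, dependency-free stand-in for FSUtils.getTableName: the
    // table directory's parent is taken as the namespace and the directory
    // itself as the qualifier, joined as "namespace:qualifier".
    static String tableNameFromPath(Path tableDir) {
        return tableDir.getParent().getFileName() + ":" + tableDir.getFileName();
    }

    public static void main(String[] args) {
        // Table "t1" in the default namespace lives under .../data/default/t1.
        System.out.println(tableNameFromPath(Paths.get("/hbase/data/default/t1")));
        // The clone in namespace "t1" with qualifier "t1" lives under .../data/t1/t1.
        System.out.println(tableNameFromPath(Paths.get("/hbase/data/t1/t1")));
    }
}
```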





[jira] [Updated] (HBASE-18743) HFiles in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table are deleted

2017-09-03 Thread wenbang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wenbang updated HBASE-18743:

Attachment: (was: HBASE-18743_branch-1_v1.patch)

> HFiles in use by a table which has the same name and namespace with a default 
> table cloned from snapshot may be deleted when that snapshot and default 
> table are deleted
> 
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE-18743_branch-1.patch, HBASE-18743-branch-1.patch, 
> HBASE_18743.patch, HBASE_18743_v1.patch, HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still
> in use by a table were deleted.
> This appears to happen when a table cloned from a snapshot lives in a
> namespace whose name matches the name of a table in the default namespace:
> when that snapshot and the default-namespace table are deleted, HFiles that
> are still in use by the cloned table may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table is cloned from a snapshot into a namespace named after the
> default table, yielding "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles
> still in use by the new table are deleted too.
> This happens because the back-reference file is created with an incorrectly
> derived table name, so the reference file cannot be found and the HFileCleaner
> deletes in-use HFiles whenever the table has not yet been major compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>       final Path dstFamilyPath, final TableName linkedTable, final String linkedRegion,
>       final String hfileName, final boolean createBackRef) throws IOException {
>     String familyName = dstFamilyPath.getName();
>     String regionName = dstFamilyPath.getParent().getName();
>     String tableName = FSUtils.getTableName(dstFamilyPath.getParent().getParent())
>         .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
>     return TableName.valueOf(tablePath.getParent().getName(), tablePath.getName());
>   }
> {code}





[jira] [Updated] (HBASE-18743) HFiles in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table are deleted

2017-09-03 Thread wenbang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wenbang updated HBASE-18743:

Attachment: HBASE-18743-branch-1.patch

Renamed the patch for branch-1.

> HFiles in use by a table which has the same name and namespace with a default 
> table cloned from snapshot may be deleted when that snapshot and default 
> table are deleted
> 
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE-18743_branch-1.patch, HBASE-18743-branch-1.patch, 
> HBASE-18743_branch-1_v1.patch, HBASE_18743.patch, HBASE_18743_v1.patch, 
> HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still
> in use by a table were deleted.
> This appears to happen when a table cloned from a snapshot lives in a
> namespace whose name matches the name of a table in the default namespace:
> when that snapshot and the default-namespace table are deleted, HFiles that
> are still in use by the cloned table may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table is cloned from a snapshot into a namespace named after the
> default table, yielding "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles
> still in use by the new table are deleted too.
> This happens because the back-reference file is created with an incorrectly
> derived table name, so the reference file cannot be found and the HFileCleaner
> deletes in-use HFiles whenever the table has not yet been major compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>       final Path dstFamilyPath, final TableName linkedTable, final String linkedRegion,
>       final String hfileName, final boolean createBackRef) throws IOException {
>     String familyName = dstFamilyPath.getName();
>     String regionName = dstFamilyPath.getParent().getName();
>     String tableName = FSUtils.getTableName(dstFamilyPath.getParent().getParent())
>         .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
>     return TableName.valueOf(tablePath.getParent().getName(), tablePath.getName());
>   }
> {code}





[jira] [Commented] (HBASE-18740) Upgrade Zookeeper version to 3.4.10

2017-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152137#comment-16152137
 ] 

Hudson commented on HBASE-18740:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3654 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3654/])
HBASE-18740 Upgrade Zookeeper version to 3.4.10 (jerryjch: rev 
2305510b7a81451d0a2c9bea0007bd36b7758118)
* (edit) pom.xml


> Upgrade Zookeeper version to 3.4.10
> ---
>
> Key: HBASE-18740
> URL: https://issues.apache.org/jira/browse/HBASE-18740
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 3.0.0, 1.4.0, 1.5.0, 2.0.0-alpha-3
>
> Attachments: HBASE-18740-branch-1.patch, HBASE-18740-branch-1.patch, 
> HBASE-18740-master.patch
>
>
> Branch-1.4 and branch-1 are still on ZooKeeper 3.4.6.
> Branch-2 and the master branch have upgraded to 3.4.9.
> There are some important fixes we'd like to have. See the linked JIRAs.
> Another critical fix is ZOOKEEPER-2146, which can be exploited maliciously.





[jira] [Commented] (HBASE-18743) HFiles in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table are deleted

2017-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152135#comment-16152135
 ] 

Hadoop QA commented on HBASE-18743:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HBASE-18743 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.4.0/precommit-patchnames for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-18743 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12885188/HBASE-18743_branch-1_v1.patch |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/8458/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> HFiles in use by a table which has the same name and namespace with a default 
> table cloned from snapshot may be deleted when that snapshot and default 
> table are deleted
> 
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE-18743_branch-1.patch, 
> HBASE-18743_branch-1_v1.patch, HBASE_18743.patch, HBASE_18743_v1.patch, 
> HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still
> in use by a table were deleted.
> This appears to happen when a table cloned from a snapshot lives in a
> namespace whose name matches the name of a table in the default namespace:
> when that snapshot and the default-namespace table are deleted, HFiles that
> are still in use by the cloned table may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table is cloned from a snapshot into a namespace named after the
> default table, yielding "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles
> still in use by the new table are deleted too.
> This happens because the back-reference file is created with an incorrectly
> derived table name, so the reference file cannot be found and the HFileCleaner
> deletes in-use HFiles whenever the table has not yet been major compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>       final Path dstFamilyPath, final TableName linkedTable, final String linkedRegion,
>       final String hfileName, final boolean createBackRef) throws IOException {
>     String familyName = dstFamilyPath.getName();
>     String regionName = dstFamilyPath.getParent().getName();
>     String tableName = FSUtils.getTableName(dstFamilyPath.getParent().getParent())
>         .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
>     return TableName.valueOf(tablePath.getParent().getName(), tablePath.getName());
>   }
> {code}





[jira] [Updated] (HBASE-18743) HFiles in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table are deleted

2017-09-03 Thread wenbang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wenbang updated HBASE-18743:

Attachment: HBASE-18743_branch-1_v1.patch

Moved the test to TestNamespace.

> HFiles in use by a table which has the same name and namespace with a default 
> table cloned from snapshot may be deleted when that snapshot and default 
> table are deleted
> 
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE-18743_branch-1.patch, 
> HBASE-18743_branch-1_v1.patch, HBASE_18743.patch, HBASE_18743_v1.patch, 
> HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still
> in use by a table were deleted.
> This appears to happen when a table cloned from a snapshot lives in a
> namespace whose name matches the name of a table in the default namespace:
> when that snapshot and the default-namespace table are deleted, HFiles that
> are still in use by the cloned table may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table is cloned from a snapshot into a namespace named after the
> default table, yielding "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles
> still in use by the new table are deleted too.
> This happens because the back-reference file is created with an incorrectly
> derived table name, so the reference file cannot be found and the HFileCleaner
> deletes in-use HFiles whenever the table has not yet been major compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>       final Path dstFamilyPath, final TableName linkedTable, final String linkedRegion,
>       final String hfileName, final boolean createBackRef) throws IOException {
>     String familyName = dstFamilyPath.getName();
>     String regionName = dstFamilyPath.getParent().getName();
>     String tableName = FSUtils.getTableName(dstFamilyPath.getParent().getParent())
>         .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
>     return TableName.valueOf(tablePath.getParent().getName(), tablePath.getName());
>   }
> {code}





[jira] [Updated] (HBASE-18740) Upgrade Zookeeper version to 3.4.10

2017-09-03 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-18740:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0-alpha-3
   1.5.0
   1.4.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the review.  Pushed.

> Upgrade Zookeeper version to 3.4.10
> ---
>
> Key: HBASE-18740
> URL: https://issues.apache.org/jira/browse/HBASE-18740
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 3.0.0, 1.4.0, 1.5.0, 2.0.0-alpha-3
>
> Attachments: HBASE-18740-branch-1.patch, HBASE-18740-branch-1.patch, 
> HBASE-18740-master.patch
>
>
> Branch-1.4 and branch-1 are still on ZooKeeper 3.4.6.
> Branch-2 and the master branch have upgraded to 3.4.9.
> There are some important fixes we'd like to have. See the linked JIRAs.
> Another critical fix is ZOOKEEPER-2146, which can be exploited maliciously.





[jira] [Updated] (HBASE-18740) Upgrade Zookeeper version to 3.4.10

2017-09-03 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-18740:
-
Summary: Upgrade Zookeeper version to 3.4.10  (was: Upgrade Zookeeper 
version in branch 1.4 and branch-1)

> Upgrade Zookeeper version to 3.4.10
> ---
>
> Key: HBASE-18740
> URL: https://issues.apache.org/jira/browse/HBASE-18740
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Jerry He
>Assignee: Jerry He
> Attachments: HBASE-18740-branch-1.patch, HBASE-18740-branch-1.patch, 
> HBASE-18740-master.patch
>
>
> Branch-1.4 and branch-1 are still on ZooKeeper 3.4.6.
> Branch-2 and the master branch have upgraded to 3.4.9.
> There are some important fixes we'd like to have. See the linked JIRAs.
> Another critical fix is ZOOKEEPER-2146, which can be exploited maliciously.





[jira] [Commented] (HBASE-15607) Remove PB references from Admin for 2.0

2017-09-03 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152109#comment-16152109
 ] 

ramkrishna.s.vasudevan commented on HBASE-15607:


[~saint@gmail.com]
Yes, will add a release note. But regarding the deprecation patch: except for 
the missing one for SnapshotInfo, the rest is already taken care of, right?

> Remove PB references from Admin for 2.0
> ---
>
> Key: HBASE-15607
> URL: https://issues.apache.org/jira/browse/HBASE-15607
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 2.0.0-alpha-3
>
> Attachments: HBASE-15607_1.patch, HBASE-15607_2.patch, 
> HBASE-15607_3.patch, HBASE-15607_3.patch, HBASE-15607_4.patch, 
> HBASE-15607_4.patch, HBASE-15607_branch-1.patch, HBASE-15607.patch
>
>
> This is a sub-task for HBASE-15174.





[jira] [Commented] (HBASE-18743) HFiles in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table are deleted

2017-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152102#comment-16152102
 ] 

Hadoop QA commented on HBASE-18743:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s{color} | {color:red} HBASE-18743 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.4.0/precommit-patchnames for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-18743 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12885185/HBASE-18743_branch-1.patch |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/8457/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> HFiles in use by a table which has the same name and namespace with a default 
> table cloned from snapshot may be deleted when that snapshot and default 
> table are deleted
> 
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE-18743_branch-1.patch, HBASE_18743.patch, 
> HBASE_18743_v1.patch, HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still
> in use by a table were deleted.
> This appears to happen when a table cloned from a snapshot lives in a
> namespace whose name matches the name of a table in the default namespace:
> when that snapshot and the default-namespace table are deleted, HFiles that
> are still in use by the cloned table may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table is cloned from a snapshot into a namespace named after the
> default table, yielding "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles
> still in use by the new table are deleted too.
> This happens because the back-reference file is created with an incorrectly
> derived table name, so the reference file cannot be found and the HFileCleaner
> deletes in-use HFiles whenever the table has not yet been major compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>       final Path dstFamilyPath, final TableName linkedTable, final String linkedRegion,
>       final String hfileName, final boolean createBackRef) throws IOException {
>     String familyName = dstFamilyPath.getName();
>     String regionName = dstFamilyPath.getParent().getName();
>     String tableName = FSUtils.getTableName(dstFamilyPath.getParent().getParent())
>         .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
>     return TableName.valueOf(tablePath.getParent().getName(), tablePath.getName());
>   }
> {code}





[jira] [Commented] (HBASE-18743) HFiles in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table are deleted

2017-09-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152099#comment-16152099
 ] 

Ted Yu commented on HBASE-18743:


There is TestNamespace.

Can you move the test there?

> HFiles in use by a table which has the same name and namespace with a default 
> table cloned from snapshot may be deleted when that snapshot and default 
> table are deleted
> 
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE-18743_branch-1.patch, HBASE_18743.patch, 
> HBASE_18743_v1.patch, HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still
> in use by a table were deleted.
> This appears to happen when a table cloned from a snapshot lives in a
> namespace whose name matches the name of a table in the default namespace:
> when that snapshot and the default-namespace table are deleted, HFiles that
> are still in use by the cloned table may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table is cloned from a snapshot into a namespace named after the
> default table, yielding "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles
> still in use by the new table are deleted too.
> This happens because the back-reference file is created with an incorrectly
> derived table name, so the reference file cannot be found and the HFileCleaner
> deletes in-use HFiles whenever the table has not yet been major compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>       final Path dstFamilyPath, final TableName linkedTable, final String linkedRegion,
>       final String hfileName, final boolean createBackRef) throws IOException {
>     String familyName = dstFamilyPath.getName();
>     String regionName = dstFamilyPath.getParent().getName();
>     String tableName = FSUtils.getTableName(dstFamilyPath.getParent().getParent())
>         .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
>     return TableName.valueOf(tablePath.getParent().getName(), tablePath.getName());
>   }
> {code}





[jira] [Updated] (HBASE-18743) HFiles in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table are deleted

2017-09-03 Thread wenbang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wenbang updated HBASE-18743:

Attachment: HBASE-18743_branch-1.patch

Added patch for branch-1.
Thank you Chia-Ping Tsai, I have added the test.

> HFiles in use by a table which has the same name and namespace with a default 
> table cloned from snapshot may be deleted when that snapshot and default 
> table are deleted
> 
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE-18743_branch-1.patch, HBASE_18743.patch, 
> HBASE_18743_v1.patch, HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still
> in use by a table were deleted.
> This appears to happen when a table cloned from a snapshot lives in a
> namespace whose name matches the name of a table in the default namespace:
> when that snapshot and the default-namespace table are deleted, HFiles that
> are still in use by the cloned table may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table is cloned from a snapshot into a namespace named after the
> default table, yielding "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles
> still in use by the new table are deleted too.
> This happens because the back-reference file is created with an incorrectly
> derived table name, so the reference file cannot be found and the HFileCleaner
> deletes in-use HFiles whenever the table has not yet been major compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>       final Path dstFamilyPath, final TableName linkedTable, final String linkedRegion,
>       final String hfileName, final boolean createBackRef) throws IOException {
>     String familyName = dstFamilyPath.getName();
>     String regionName = dstFamilyPath.getParent().getName();
>     String tableName = FSUtils.getTableName(dstFamilyPath.getParent().getParent())
>         .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
>     return TableName.valueOf(tablePath.getParent().getName(), tablePath.getName());
>   }
> {code}
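The mis-parse described above can be illustrated with a small, hypothetical simulation of the quoted getTableName logic, using plain java.nio paths. The class, method names, and directory layout below are illustrative assumptions, not the actual HBase implementation:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

/**
 * Hypothetical simulation of the quoted FSUtils.getTableName parsing:
 * the parent directory is always taken as the namespace and the leaf as
 * the qualifier. Illustrative only -- not the HBase implementation.
 */
public class TableNameParseSketch {

  /** Mirrors TableName.valueOf(namespace, qualifier) as "ns:qualifier". */
  static String tableNameFromPath(Path tablePath) {
    String namespace = tablePath.getParent().getFileName().toString();
    String qualifier = tablePath.getFileName().toString();
    return namespace + ":" + qualifier;
  }

  public static void main(String[] args) {
    // Standard layout: /hbase/data/<namespace>/<table>
    Path defaultT1 = Paths.get("/hbase/data/default/t1");
    Path nsT1 = Paths.get("/hbase/data/t1/t1"); // the clone "t1:t1"

    System.out.println(tableNameFromPath(defaultT1)); // default:t1
    System.out.println(tableNameFromPath(nsT1));      // t1:t1

    // If the path handed in is NOT a table directory (e.g. a region dir,
    // or a layout missing the namespace level), the parse silently
    // produces a wrong TableName; a back-reference file created under
    // that wrong name can never be found by the cleaner.
    Path regionDir = Paths.get("/hbase/data/default/t1/abcdef0123");
    System.out.println(tableNameFromPath(regionDir)); // t1:abcdef0123 (wrong)
  }
}
```

The parse has no way to detect that it was given the wrong level of the directory tree, which is the silent-failure mode the report describes.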



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18674) upgrade hbase to commons-lang3

2017-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152095#comment-16152095
 ] 

Hadoop QA commented on HBASE-18674:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/8456/console in case of 
problems.


> upgrade hbase to commons-lang3
> --
>
> Key: HBASE-18674
> URL: https://issues.apache.org/jira/browse/HBASE-18674
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha-2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: hbase-18674.master.001.patch, 
> HBASE-18674.master.001.patch, hbase-18674.master.002.patch, 
> hbase-18674.master.002.patch, HBASE-18674.master.002.patch, 
> hbase-18674.master.003.patch, hbase-18674.master.004.patch, 
> hbase-18674.master.004.patch, hbase-18674.master.005.patch, 
> hbase-18674.master.006.patch, hbase-18674.master.007.patch, 
> hbase-18674.master.008.patch, hbase-18674.master.008.patch
>
>
> upgrade hbase to use commons-lang 3.6





[jira] [Commented] (HBASE-18674) upgrade hbase to commons-lang3

2017-09-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152089#comment-16152089
 ] 

stack commented on HBASE-18674:
---

.002 (upper-case HBASE-18674 in patch name). Rebase after purging all pom edits 
and addressing a few little import changes needed since other patches went in.

> upgrade hbase to commons-lang3
> --
>
> Key: HBASE-18674
> URL: https://issues.apache.org/jira/browse/HBASE-18674
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha-2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: hbase-18674.master.001.patch, 
> HBASE-18674.master.001.patch, hbase-18674.master.002.patch, 
> hbase-18674.master.002.patch, HBASE-18674.master.002.patch, 
> hbase-18674.master.003.patch, hbase-18674.master.004.patch, 
> hbase-18674.master.004.patch, hbase-18674.master.005.patch, 
> hbase-18674.master.006.patch, hbase-18674.master.007.patch, 
> hbase-18674.master.008.patch, hbase-18674.master.008.patch
>
>
> upgrade hbase to use commons-lang 3.6





[jira] [Updated] (HBASE-18674) upgrade hbase to commons-lang3

2017-09-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18674:
--
Attachment: HBASE-18674.master.002.patch

> upgrade hbase to commons-lang3
> --
>
> Key: HBASE-18674
> URL: https://issues.apache.org/jira/browse/HBASE-18674
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha-2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: hbase-18674.master.001.patch, 
> HBASE-18674.master.001.patch, hbase-18674.master.002.patch, 
> hbase-18674.master.002.patch, HBASE-18674.master.002.patch, 
> hbase-18674.master.003.patch, hbase-18674.master.004.patch, 
> hbase-18674.master.004.patch, hbase-18674.master.005.patch, 
> hbase-18674.master.006.patch, hbase-18674.master.007.patch, 
> hbase-18674.master.008.patch, hbase-18674.master.008.patch
>
>
> upgrade hbase to use commons-lang 3.6





[jira] [Commented] (HBASE-18723) [pom cleanup] Do a pass with dependency:analyze; remove unused and explicitly list the dependencies we exploit

2017-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152081#comment-16152081
 ] 

Hudson commented on HBASE-18723:


FAILURE: Integrated in Jenkins build HBase-2.0 #451 (See 
[https://builds.apache.org/job/HBase-2.0/451/])
HBASE-18723 [pom cleanup] Do a pass with dependency:analyze; remove (stack: rev 
91ab25b469a441904f883dd8e130af2653a2609d)
* (edit) hbase-mapreduce/pom.xml
* (edit) hbase-backup/pom.xml
* (edit) hbase-rest/pom.xml
* (edit) pom.xml


> [pom cleanup] Do a pass with dependency:analyze; remove unused and explicitly 
> list the dependencies we exploit
> -
>
> Key: HBASE-18723
> URL: https://issues.apache.org/jira/browse/HBASE-18723
> Project: HBase
>  Issue Type: Bug
>  Components: pom
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-alpha-3
>
> Attachments: HBASE-18723.master.001.patch, 
> HBASE-18723.master.002.patch, HBASE-18723.master.003.patch, 
> HBASE-18723.master.004.patch, 
> HBASE-18723-pom-cleanup-Do-a-pass-with-dependency.addendum.patch
>
>
> Do a pass over our poms. They are sloppy, including unused jars and not 
> listing dependencies that are actually used. Undo 'required' dependencies 
> like junit and mockito; not all modules need these anymore.
> This cleanup is motivated by failures up on Jenkins where a build step is 
> not finding transitive includes; explicit mention is needed (see failures 
> in HBASE-18674).





[jira] [Commented] (HBASE-18674) upgrade hbase to commons-lang3

2017-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152067#comment-16152067
 ] 

Hadoop QA commented on HBASE-18674:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HBASE-18674 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.4.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-18674 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885180/hbase-18674.master.008.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8455/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> upgrade hbase to commons-lang3
> --
>
> Key: HBASE-18674
> URL: https://issues.apache.org/jira/browse/HBASE-18674
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha-2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: hbase-18674.master.001.patch, 
> HBASE-18674.master.001.patch, hbase-18674.master.002.patch, 
> hbase-18674.master.002.patch, hbase-18674.master.003.patch, 
> hbase-18674.master.004.patch, hbase-18674.master.004.patch, 
> hbase-18674.master.005.patch, hbase-18674.master.006.patch, 
> hbase-18674.master.007.patch, hbase-18674.master.008.patch, 
> hbase-18674.master.008.patch
>
>
> upgrade hbase to use commons-lang 3.6





[jira] [Updated] (HBASE-18674) upgrade hbase to commons-lang3

2017-09-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18674:
--
Attachment: hbase-18674.master.008.patch

Retry after the commit below went in, which hopefully cleans up the 
outstanding complaints about dependencies not referenced in module poms:


commit 0e95a8a0ae24b0d19b391d49794d6716a8e86bcd
Author: Michael Stack 
Date:   Sat Sep 2 13:14:09 2017 -0700

HBASE-18723 [pom cleanup] Do a pass with dependency:analyze; remove unused 
and explicity list the dependencies we exploit; ADDENDUM

Addendum addresses holes found running HBASE-18674 against hadoopqa.


> upgrade hbase to commons-lang3
> --
>
> Key: HBASE-18674
> URL: https://issues.apache.org/jira/browse/HBASE-18674
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha-2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: hbase-18674.master.001.patch, 
> HBASE-18674.master.001.patch, hbase-18674.master.002.patch, 
> hbase-18674.master.002.patch, hbase-18674.master.003.patch, 
> hbase-18674.master.004.patch, hbase-18674.master.004.patch, 
> hbase-18674.master.005.patch, hbase-18674.master.006.patch, 
> hbase-18674.master.007.patch, hbase-18674.master.008.patch, 
> hbase-18674.master.008.patch
>
>
> upgrade hbase to use commons-lang 3.6





[jira] [Commented] (HBASE-18723) [pom cleanup] Do a pass with dependency:analyze; remove unused and explicitly list the dependencies we exploit

2017-09-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152063#comment-16152063
 ] 

stack commented on HBASE-18723:
---

Pushed to master and branch-2. Resolving, though there might be another cycle 
of cleanup to do yet.

> [pom cleanup] Do a pass with dependency:analyze; remove unused and explicitly 
> list the dependencies we exploit
> -
>
> Key: HBASE-18723
> URL: https://issues.apache.org/jira/browse/HBASE-18723
> Project: HBase
>  Issue Type: Bug
>  Components: pom
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-alpha-3
>
> Attachments: HBASE-18723.master.001.patch, 
> HBASE-18723.master.002.patch, HBASE-18723.master.003.patch, 
> HBASE-18723.master.004.patch, 
> HBASE-18723-pom-cleanup-Do-a-pass-with-dependency.addendum.patch
>
>
> Do a pass over our poms. They are sloppy, including unused jars and not 
> listing dependencies that are actually used. Undo 'required' dependencies 
> like junit and mockito; not all modules need these anymore.
> This cleanup is motivated by failures up on Jenkins where a build step is 
> not finding transitive includes; explicit mention is needed (see failures 
> in HBASE-18674).





[jira] [Updated] (HBASE-18723) [pom cleanup] Do a pass with dependency:analyze; remove unused and explicitly list the dependencies we exploit

2017-09-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18723:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed addendum (.004) to master and branch-2.

> [pom cleanup] Do a pass with dependency:analyze; remove unused and explicitly 
> list the dependencies we exploit
> -
>
> Key: HBASE-18723
> URL: https://issues.apache.org/jira/browse/HBASE-18723
> Project: HBase
>  Issue Type: Bug
>  Components: pom
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-alpha-3
>
> Attachments: HBASE-18723.master.001.patch, 
> HBASE-18723.master.002.patch, HBASE-18723.master.003.patch, 
> HBASE-18723.master.004.patch, 
> HBASE-18723-pom-cleanup-Do-a-pass-with-dependency.addendum.patch
>
>
> Do a pass over our poms. They are sloppy, including unused jars and not 
> listing dependencies that are actually used. Undo 'required' dependencies 
> like junit and mockito; not all modules need these anymore.
> This cleanup is motivated by failures up on Jenkins where a build step is 
> not finding transitive includes; explicit mention is needed (see failures 
> in HBASE-18674).





[jira] [Commented] (HBASE-18740) Upgrade Zookeeper version in branch 1.4 and branch-1

2017-09-03 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152058#comment-16152058
 ] 

Yu Li commented on HBASE-18740:
---

+1, checking [zookeeper 3.4.10 release 
note|http://zookeeper.apache.org/doc/r3.4.10/releasenotes.html] I think it's 
good to upgrade.

Maybe we could update the title of the JIRA to something like "Upgrade 
Zookeeper version to 3.4.10"? Thanks.

> Upgrade Zookeeper version in branch 1.4 and branch-1
> 
>
> Key: HBASE-18740
> URL: https://issues.apache.org/jira/browse/HBASE-18740
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Jerry He
>Assignee: Jerry He
> Attachments: HBASE-18740-branch-1.patch, HBASE-18740-branch-1.patch, 
> HBASE-18740-master.patch
>
>
> Branch 1.4 and branch 1 are still on Zookeeper 3.4.6.
> Branch 2 and master branch have upgraded to 3.4.9.
> There are some important fixes we'd like to have. See the linked JIRAs.
> Another critical fix is ZOOKEEPER-2146, which can be exploited maliciously.
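For context, the version bump itself is a one-line pom.xml edit. A sketch of the change follows; the property name is an assumption based on common HBase pom conventions, so consult the attached patch for the exact edit:

```xml
<properties>
  <!-- Assumed property name; the patch edits the ZooKeeper version
       declared in the top-level pom.xml. -->
  <zookeeper.version>3.4.10</zookeeper.version>
</properties>
```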





[jira] [Assigned] (HBASE-18131) Add an hbase shell command to clear deadserver list in ServerManager

2017-09-03 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li reassigned HBASE-18131:
-

Assignee: Guangxu Cheng  (was: Yu Li)

Sorry for the late response [~andrewcheng], just back from a short vacation. 
Just assigned to you, please go ahead, thanks.

> Add an hbase shell command to clear deadserver list in ServerManager
> 
>
> Key: HBASE-18131
> URL: https://issues.apache.org/jira/browse/HBASE-18131
> Project: HBase
>  Issue Type: New Feature
>  Components: Operability
>Reporter: Yu Li
>Assignee: Guangxu Cheng
> Fix For: 2.0.0, 1.4.0
>
>
> Currently, if a regionserver is aborted due to a fatal error or stopped by an 
> operator on purpose, it is added to the {{ServerManager#deadservers}} list 
> and shown as "Dead Servers" in the master UI. This is a valid warning that 
> lets operators notice self-aborted servers and run a sanity check to avoid 
> further issues. However, after the necessary checks, even if the operator is 
> sure the node is decommissioned (such as for repair), there is no way to 
> clear the dead server list short of restarting the master. See more details 
> in [this 
> discussion|http://mail-archives.apache.org/mod_mbox/hbase-user/201705.mbox/%3CCAM7-19%2BD4MLu2b1R94%2BtWQDspjfny2sCy4Qit8JtCgjvTOZzzg%40mail.gmail.com%3E]
>  on the user mailing list.
> Here we propose adding an HBase shell command that allows advanced users to 
> clear the dead server list in {{ServerManager}}; the command should be 
> executed with caution.





[jira] [Commented] (HBASE-18723) [pom cleanup] Do a pass with dependency:analyze; remove unused and explicitly list the dependencies we exploit

2017-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152033#comment-16152033
 ] 

Hudson commented on HBASE-18723:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3653 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3653/])
HBASE-18723 [pom cleanup] Do a pass with dependency:analyze; remove (stack: rev 
0e95a8a0ae24b0d19b391d49794d6716a8e86bcd)
* (edit) hbase-mapreduce/pom.xml
* (edit) hbase-backup/pom.xml
* (edit) hbase-rest/pom.xml
* (edit) pom.xml


> [pom cleanup] Do a pass with dependency:analyze; remove unused and explicitly 
> list the dependencies we exploit
> -
>
> Key: HBASE-18723
> URL: https://issues.apache.org/jira/browse/HBASE-18723
> Project: HBase
>  Issue Type: Bug
>  Components: pom
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-alpha-3
>
> Attachments: HBASE-18723.master.001.patch, 
> HBASE-18723.master.002.patch, HBASE-18723.master.003.patch, 
> HBASE-18723.master.004.patch, 
> HBASE-18723-pom-cleanup-Do-a-pass-with-dependency.addendum.patch
>
>
> Do a pass over our poms. They are sloppy, including unused jars and not 
> listing dependencies that are actually used. Undo 'required' dependencies 
> like junit and mockito; not all modules need these anymore.
> This cleanup is motivated by failures up on Jenkins where a build step is 
> not finding transitive includes; explicit mention is needed (see failures 
> in HBASE-18674).





[jira] [Commented] (HBASE-15497) Incorrect javadoc for atomicity guarantee of Increment and Append

2017-09-03 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152008#comment-16152008
 ] 

Jerry He commented on HBASE-15497:
--

+1

> Incorrect javadoc for atomicity guarantee of Increment and Append
> -
>
> Key: HBASE-15497
> URL: https://issues.apache.org/jira/browse/HBASE-15497
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
>Priority: Minor
> Attachments: HBASE-15497-v1.patch
>
>
> At the front of the {{Increment.java}} file, there is a comment about read 
> atomicity:
> {code}
>  * This operation does not appear atomic to readers.  Increments are done
>  * under a single row lock, so write operations to a row are synchronized, but
>  * readers do not take row locks so get and scan operations can see this
>  * operation partially completed.
> {code}
> It seems this comment has not been true since MVCC was integrated in 
> [HBASE-4583|https://issues.apache.org/jira/browse/HBASE-4583]. Currently, 
> readers are guaranteed to see the whole result of an Increment, if I am not 
> wrong. Similar comments also exist in {{Append.java}}, {{Table#append(...)}} 
> and {{Table#increment(...)}}.





[jira] [Commented] (HBASE-16390) Fix documentation around setAutoFlush

2017-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151990#comment-16151990
 ] 

Hadoop QA commented on HBASE-16390:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 37s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}172m 53s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}224m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.thrift.TestThriftHttpServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:47a5614 |
| JIRA Issue | HBASE-16390 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848756/HBASE-16390.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux 31a4cd6d55eb 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 83175fd |
| Default Java | 1.8.0_144 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8453/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8453/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8453/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> Fix documentation around setAutoFlush
> -
>
> Key: HBASE-16390
> URL: https://issues.apache.org/jira/browse/HBASE-16390
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: stack
>Assignee: Sahil Aggarwal
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-16390.master.001.patch
>
>
> Our documentation is a little confused around setAutoFlush. It talks of 
> Table, but setAutoFlush is not in the Table interface. It was on HTable but 
> was deprecated and has since been removed. Clean up the doc:
> {code}
> 100.4. HBase Client: AutoFlush
> When performing a lot of Puts, make sure that setAutoFlush is set to false
> on your Table instance.
> Otherwise, the Puts will be sent one at a time to the RegionServer. Puts
> added via table.add(Put) and table.add(List<Put>) wind up in the same
> write buffer. If autoFlush = false, these messages are not sent until the
> write-buffer is filled. To explicitly flush the messages, call flushCommits.
> Calling close on the Table instance will invoke flushCommits.
> {code}
> Spotted by Jeff Shmain.
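The buffering semantics the quoted doc passage describes can be sketched with a toy client-side write buffer. This is a simulation only: the real client API has since moved from HTable#setAutoFlush to BufferedMutator, and every name below is illustrative, not an HBase class:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Toy simulation of the autoFlush semantics described above: with
 * autoFlush=false, puts accumulate in a write buffer and are only sent
 * when the buffer fills, on an explicit flush, or on close.
 * Illustrative only -- not the HBase client.
 */
public class WriteBufferSketch {
  private final List<String> buffer = new ArrayList<>();
  private final List<String> sent = new ArrayList<>();
  private final int capacity;
  private final boolean autoFlush;

  WriteBufferSketch(int capacity, boolean autoFlush) {
    this.capacity = capacity;
    this.autoFlush = autoFlush;
  }

  void put(String row) {
    buffer.add(row);
    // autoFlush=true means one round-trip per put -- the slow path the
    // documentation warns about when performing a lot of Puts.
    if (autoFlush || buffer.size() >= capacity) {
      flush();
    }
  }

  void flush() {          // explicit flush, cf. the old flushCommits
    sent.addAll(buffer);
    buffer.clear();
  }

  void close() {          // close implies a final flush
    flush();
  }

  List<String> sent() {
    return sent;
  }

  public static void main(String[] args) {
    WriteBufferSketch t = new WriteBufferSketch(3, false);
    t.put("r1");
    t.put("r2");
    System.out.println(t.sent().size()); // 0 -- still buffered
    t.put("r3");                         // buffer full: sent as one batch
    System.out.println(t.sent().size()); // 3
    t.put("r4");
    t.close();                           // close flushes the remainder
    System.out.println(t.sent().size()); // 4
  }
}
```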




[jira] [Commented] (HBASE-18749) Apply the TimeRange from ColumnFamily to filter the segment scanner

2017-09-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151937#comment-16151937
 ] 

Ted Yu commented on HBASE-18749:


lgtm

> Apply the TimeRange from ColumnFamily to filter the segment scanner
> ---
>
> Key: HBASE-18749
> URL: https://issues.apache.org/jira/browse/HBASE-18749
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0-alpha-3
>
> Attachments: HBASE-18749.v0.patch
>
>
> We can evict the unused segment scanner early.
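The early eviction amounts to a simple interval-overlap check: a segment only needs a scanner if its [minTs, maxTs] range intersects the scan's TimeRange for the column family. A minimal illustrative sketch (not the actual HBase code; names are assumptions):

```java
/**
 * Illustrative sketch of time-range based scanner filtering: a segment
 * can be skipped entirely when its [minTs, maxTs] interval does not
 * overlap the scan's TimeRange for the column family.
 * Not the actual HBase implementation.
 */
public class TimeRangeFilterSketch {

  /** Does [segMin, segMax] intersect the half-open range [trMin, trMax)? */
  static boolean segmentMayContain(long segMin, long segMax,
                                   long trMin, long trMax) {
    return segMax >= trMin && segMin < trMax;
  }

  public static void main(String[] args) {
    // Scan asks for timestamps in [100, 200)
    System.out.println(segmentMayContain(50, 90, 100, 200));   // false: evict early
    System.out.println(segmentMayContain(150, 300, 100, 200)); // true: keep scanner
    System.out.println(segmentMayContain(199, 250, 100, 200)); // true: boundary overlap
  }
}
```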





[jira] [Commented] (HBASE-18718) Document the coprocessor.Export

2017-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151935#comment-16151935
 ] 

Hadoop QA commented on HBASE-18718:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
36m 18s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 35s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestHTableMultiplexer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:47a5614 |
| JIRA Issue | HBASE-18718 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885149/HBASE-18718.v1.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux acb1f4f669bb 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 83175fd |
| Default Java | 1.8.0_144 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8449/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8449/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8449/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> Document the coprocessor.Export
> ---
>
> Key: HBASE-18718
> URL: https://issues.apache.org/jira/browse/HBASE-18718
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, documentation, tooling
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-18718.v0.patch, HBASE-18718.v0.png, 
> HBASE-18718.v1.patch, HBASE-18718.v1.patch, HBASE-18718.v1.png
>
>






[jira] [Commented] (HBASE-18749) Apply the TimeRange from ColumnFamily to filter the segment scanner

2017-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151933#comment-16151933
 ] 

Hadoop QA commented on HBASE-18749:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
35m 53s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}108m 
25s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:47a5614 |
| JIRA Issue | HBASE-18749 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885150/HBASE-18749.v0.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux d51d2bcdb47b 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 83175fd |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8450/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8450/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> Apply the TimeRange from ColumnFamily to filter the segment scanner
> ---
>
> Key: HBASE-18749
> URL: https://issues.apache.org/jira/browse/HBASE-18749
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-

[jira] [Commented] (HBASE-18746) Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot fails

2017-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151932#comment-16151932
 ] 

Hadoop QA commented on HBASE-18746:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
54s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 25s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
43s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:3c8b364 |
| JIRA Issue | HBASE-18746 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885155/HBASE-18746.branch-2.v1.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux dbf074c925b0 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2 / c762753 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8454/testReport/ |
| modules | C: hbase-mapreduce U: hbase-mapreduce |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8454/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HBASE-15497) Incorrect javadoc for atomicity guarantee of Increment and Append

2017-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151917#comment-16151917
 ] 

Hadoop QA commented on HBASE-15497:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
36m 10s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
21s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 6s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:47a5614 |
| JIRA Issue | HBASE-15497 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12797898/HBASE-15497-v1.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 008b229ddbe1 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 83175fd |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8452/testReport/ |
| modules | C: hbase-client U: hbase-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8452/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> Incorrect javadoc for atomicity guarantee of Increment and Append
> -
>
> Key: HBASE-15497
> URL: https://issues.apache.org/jira/browse/HBASE-15497
> Project: HBase
>  Issue Type: Bug
>  Components: document

[jira] [Commented] (HBASE-18746) Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot fails

2017-09-03 Thread ChunHao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151915#comment-16151915
 ] 

ChunHao commented on HBASE-18746:
-

[~chia7712], I have removed the comment in the v1 patch, thank you.

> Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot 
> fails
> ---
>
> Key: HBASE-18746
> URL: https://issues.apache.org/jira/browse/HBASE-18746
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce, snapshots
>Reporter: Chia-Ping Tsai
>Assignee: ChunHao
>Priority: Minor
>  Labels: beginner
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE-18746.branch-2.v0.patch, 
> HBASE-18746.branch-2.v1.patch
>
>
> {code}
> // Run the MR Job
> if (!job.waitForCompletion(true)) {
>   // TODO: Replace the fixed string with job.getStatus().getFailureInfo()
>   // when it will be available on all the supported versions.
>   throw new ExportSnapshotException("Copy Files Map-Reduce Job failed");
> }
> {code}
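The direction of the patch above can be sketched in plain Java. This is a hypothetical, self-contained model, not the actual HBase patch: the {{Job}}, {{JobStatus}}, and {{ExportSnapshotException}} classes below are stubbed stand-ins for the Hadoop/HBase types, written only to show the failure info being surfaced in the exception message instead of the fixed string.

```java
// Hypothetical sketch of the proposed change: include the MapReduce job's
// failure info in the exception instead of a fixed string. All classes here
// are minimal stand-ins, not the real Hadoop/HBase API.
public class ExportSnapshotSketch {
    // Stand-in for org.apache.hadoop.mapreduce.JobStatus
    static class JobStatus {
        private final String failureInfo;
        JobStatus(String failureInfo) { this.failureInfo = failureInfo; }
        String getFailureInfo() { return failureInfo; }
    }

    // Stand-in for org.apache.hadoop.mapreduce.Job
    static class Job {
        private final boolean succeeded;
        private final JobStatus status;
        Job(boolean succeeded, String failureInfo) {
            this.succeeded = succeeded;
            this.status = new JobStatus(failureInfo);
        }
        boolean waitForCompletion(boolean verbose) { return succeeded; }
        JobStatus getStatus() { return status; }
    }

    static class ExportSnapshotException extends RuntimeException {
        ExportSnapshotException(String msg) { super(msg); }
    }

    // Run the MR job; on failure, surface the job's own failure info.
    static void runExport(Job job) {
        if (!job.waitForCompletion(true)) {
            throw new ExportSnapshotException(
                "Copy Files Map-Reduce Job failed: "
                + job.getStatus().getFailureInfo());
        }
    }

    public static void main(String[] args) {
        try {
            runExport(new Job(false, "Task attempt_0 failed: disk full"));
        } catch (ExportSnapshotException e) {
            System.out.println(e.getMessage());
        }
    }
}
```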



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18746) Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot fails

2017-09-03 Thread ChunHao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChunHao updated HBASE-18746:

Status: Patch Available  (was: Open)



[jira] [Updated] (HBASE-18746) Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot fails

2017-09-03 Thread ChunHao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChunHao updated HBASE-18746:

Status: Open  (was: Patch Available)



[jira] [Updated] (HBASE-18746) Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot fails

2017-09-03 Thread ChunHao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChunHao updated HBASE-18746:

Attachment: HBASE-18746.branch-2.v1.patch



[jira] [Updated] (HBASE-18746) Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot fails

2017-09-03 Thread ChunHao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChunHao updated HBASE-18746:

Attachment: (was: HBASE-18746.branch-2.v1.patch)



[jira] [Updated] (HBASE-18746) Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot fails

2017-09-03 Thread ChunHao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChunHao updated HBASE-18746:

Status: Open  (was: Patch Available)



[jira] [Updated] (HBASE-18746) Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot fails

2017-09-03 Thread ChunHao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChunHao updated HBASE-18746:

Status: Patch Available  (was: Open)



[jira] [Updated] (HBASE-18746) Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot fails

2017-09-03 Thread ChunHao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChunHao updated HBASE-18746:

Attachment: HBASE-18746.branch-2.v1.patch



[jira] [Commented] (HBASE-18746) Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot fails

2017-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151913#comment-16151913
 ] 

Hadoop QA commented on HBASE-18746:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
56s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
34m 54s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
6s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:3c8b364 |
| JIRA Issue | HBASE-18746 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885151/HBASE-18746.branch-2.v0.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 06fe14f33213 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2 / c762753 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8451/testReport/ |
| modules | C: hbase-mapreduce U: hbase-mapreduce |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8451/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HBASE-18746) Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot fails

2017-09-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151909#comment-16151909
 ] 

Chia-Ping Tsai commented on HBASE-18746:


{code}
   // TODO: Replace the fixed string with job.getStatus().getFailureInfo()
   // when it will be available on all the supported versions.
{code}
The comment will be stale after we commit your patch, so you can remove it as well.

> Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot 
> fails
> ---
>
> Key: HBASE-18746
> URL: https://issues.apache.org/jira/browse/HBASE-18746
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce, snapshots
>Reporter: Chia-Ping Tsai
>Assignee: ChunHao
>Priority: Minor
>  Labels: beginner
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE-18746.branch-2.v0.patch
>
>
> {code}
> // Run the MR Job
> if (!job.waitForCompletion(true)) {
>   // TODO: Replace the fixed string with job.getStatus().getFailureInfo()
>   // when it will be available on all the supported versions.
>   throw new ExportSnapshotException("Copy Files Map-Reduce Job failed");
> }
> {code}





[jira] [Created] (HBASE-18750) Document the non-writebuffer for HTable

2017-09-03 Thread Chia-Ping Tsai (JIRA)
Chia-Ping Tsai created HBASE-18750:
--

 Summary: Document the non-writebuffer for HTable
 Key: HBASE-18750
 URL: https://issues.apache.org/jira/browse/HBASE-18750
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Chia-Ping Tsai
Priority: Minor
 Fix For: 2.0.0-alpha-3


Clean up the docs that say "HTable uses a write buffer":

{code}
Default size of the HTable client write buffer in bytes. A bigger buffer takes 
more memory — on both the client and server side since server instantiates the 
passed write buffer to process it — but a larger buffer size reduces the number 
of RPCs made. For an estimate of server-side memory-used, evaluate 
hbase.client.write.buffer * hbase.regionserver.handler.count
{code}

{code}
Put either adds new rows to a table (if the key is new) or can update existing 
rows (if the key already exists). Puts are executed via Table.put (writeBuffer) 
or Table.batch (non-writeBuffer).
{code}
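The server-side estimate in the first block is a simple multiplication of two configuration values. A minimal sketch of that arithmetic, using assumed example settings (a 2 MB write buffer and 30 handlers; check your own site configuration for real numbers):

```java
// Illustrative arithmetic only: the doc's server-side memory estimate is
// hbase.client.write.buffer * hbase.regionserver.handler.count.
// The concrete values below are assumed examples, not authoritative defaults.
public class WriteBufferEstimate {
    static long serverSideEstimate(long writeBufferBytes, int handlerCount) {
        return writeBufferBytes * handlerCount;
    }

    public static void main(String[] args) {
        long writeBuffer = 2 * 1024 * 1024; // hbase.client.write.buffer, e.g. 2 MB
        int handlers = 30;                  // hbase.regionserver.handler.count
        // 2097152 * 30 = 62914560 bytes (~60 MB) of potential server-side memory
        System.out.println(serverSideEstimate(writeBuffer, handlers) + " bytes");
    }
}
```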






[jira] [Commented] (HBASE-16390) Fix documentation around setAutoFlush

2017-09-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151900#comment-16151900
 ] 

Chia-Ping Tsai commented on HBASE-16390:


-Will commit it later.- Let us trigger the QA first :)

> Fix documentation around setAutoFlush
> -
>
> Key: HBASE-16390
> URL: https://issues.apache.org/jira/browse/HBASE-16390
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: stack
>Assignee: Sahil Aggarwal
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-16390.master.001.patch
>
>
> Our documentation is a little confused around setAutoFlush. Talks of Table 
> but setAutoFlush is not in the Table interface. It was on HTable but was 
> deprecated and since removed. Clean up the doc:
> {code}
> 100.4. HBase Client: AutoFlush
> When performing a lot of Puts, make sure that setAutoFlush is set to false
> on your Table instance.
> Otherwise, the Puts will be sent one at a time to the RegionServer. Puts
> added via table.add(Put) and table.add(List<Put>) wind up in the same
> write buffer. If autoFlush = false, these messages are not sent until the
> write-buffer is filled. To explicitly flush the messages, call flushCommits.
> Calling close on the Table instance will invoke flushCommits
> {code}
> Spotted by Jeff Shmain.
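The buffering behavior the quoted doc describes can be modelled in a few lines of plain Java. This is a toy model only, not the HBase client API (setAutoFlush and flushCommits were deprecated and removed from HTable, as the issue notes; BufferedMutator is the rough modern equivalent): with autoFlush true every put costs one RPC, while with autoFlush false puts accumulate until the buffer fills or flush()/close() is called.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the autoFlush behavior described above -- not the HBase API.
public class AutoFlushModel {
    private final boolean autoFlush;
    private final int bufferLimit;
    private final List<String> buffer = new ArrayList<>();
    private int rpcCount = 0; // number of "sends" to the server

    AutoFlushModel(boolean autoFlush, int bufferLimit) {
        this.autoFlush = autoFlush;
        this.bufferLimit = bufferLimit;
    }

    void put(String row) {
        buffer.add(row);
        // autoFlush=true sends immediately; otherwise wait for a full buffer
        if (autoFlush || buffer.size() >= bufferLimit) {
            flush();
        }
    }

    void flush() {
        if (!buffer.isEmpty()) {
            rpcCount++;    // one batched RPC per flush
            buffer.clear();
        }
    }

    void close() { flush(); } // closing flushes any remaining puts

    int rpcCount() { return rpcCount; }

    public static void main(String[] args) {
        AutoFlushModel eager = new AutoFlushModel(true, 100);
        AutoFlushModel buffered = new AutoFlushModel(false, 100);
        for (int i = 0; i < 10; i++) {
            eager.put("row" + i);
            buffered.put("row" + i);
        }
        buffered.close();
        System.out.println(eager.rpcCount() + " vs " + buffered.rpcCount()); // 10 vs 1
    }
}
```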





[jira] [Updated] (HBASE-16390) Fix documentation around setAutoFlush

2017-09-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-16390:
---
Status: Patch Available  (was: Open)



[jira] [Commented] (HBASE-16390) Fix documentation around setAutoFlush

2017-09-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151898#comment-16151898
 ] 

Chia-Ping Tsai commented on HBASE-16390:


HBASE-13395 had removed the HTableInterface from docs. Will commit it later.

> Fix documentation around setAutoFlush
> -
>
> Key: HBASE-16390
> URL: https://issues.apache.org/jira/browse/HBASE-16390
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: stack
>Assignee: Sahil Aggarwal
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-16390.master.001.patch
>
>
> Our documentation is a little confused around setAutoFlush. Talks of Table 
> but setAutoFlush is not in the Table interface. It was on HTable but was 
> deprecated and since removed. Clean up the doc:
> {code}
> 100.4. HBase Client: AutoFlush
> When performing a lot of Puts, make sure that setAutoFlush is set to false
> on your Table
> 
> instance.
> Otherwise, the Puts will be sent one at a time to the RegionServer. Puts
> added via table.add(Put) and table.add( <List> Put) wind up in the same
> write buffer. If autoFlush = false, these messages are not sent until the
> write-buffer is filled. To explicitly flush the messages, call flushCommits.
> Calling close on the Table instance will invoke flushCommits
> {code}
> Spotted by Jeff Shmain.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-15497) Incorrect javadoc for atomicity guarantee of Increment and Append

2017-09-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-15497:
---
Status: Patch Available  (was: Open)

> Incorrect javadoc for atomicity guarantee of Increment and Append
> -
>
> Key: HBASE-15497
> URL: https://issues.apache.org/jira/browse/HBASE-15497
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Priority: Minor
> Attachments: HBASE-15497-v1.patch
>
>
> At the front of the {{Increment.java}} file, there is a comment about read 
> atomicity:
> {code}
>  * This operation does not appear atomic to readers.  Increments are done
>  * under a single row lock, so write operations to a row are synchronized, but
>  * readers do not take row locks so get and scan operations can see this
>  * operation partially completed.
> {code}
> It seems this comment is no longer true since MVCC was integrated in 
> [HBASE-4583|https://issues.apache.org/jira/browse/HBASE-4583]. Currently, 
> readers are guaranteed to see the whole result of an Increment, if I am not 
> mistaken. Similar comments also exist in {{Append.java}}, {{Table#append(...)}} 
> and {{Table#increment(...)}}
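The MVCC behavior referenced above can be sketched generically. This is a simplified illustration, not HBase's actual MVCC classes: all cells of one write share a sequence id, readers only see cells at or below the current read point, and the read point advances only after the whole write completes, so a multi-cell Increment becomes visible atomically.

```java
import java.util.ArrayList;
import java.util.List;

public class MvccSketch {
    static class Cell {
        final String value;
        final long seqId;
        Cell(String value, long seqId) { this.value = value; this.seqId = seqId; }
    }

    private final List<Cell> cells = new ArrayList<>();
    private long readPoint = 0;   // highest fully-completed write
    private long nextSeqId = 0;

    /** Write several cells under one sequence id; advance the read point last. */
    void atomicMultiCellWrite(String... values) {
        long seqId = ++nextSeqId;
        for (String v : values) {
            cells.add(new Cell(v, seqId));   // staged, but not yet readable
        }
        readPoint = seqId;                   // publish the whole write at once
    }

    /** Readers only observe cells at or below the read point. */
    List<String> read() {
        List<String> visible = new ArrayList<>();
        for (Cell c : cells) {
            if (c.seqId <= readPoint) {
                visible.add(c.value);
            }
        }
        return visible;
    }

    public static void main(String[] args) {
        MvccSketch store = new MvccSketch();
        store.atomicMultiCellWrite("cf1:a=1", "cf2:b=2");
        System.out.println(store.read());  // prints [cf1:a=1, cf2:b=2]
    }
}
```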



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-15497) Incorrect javadoc for atomicity guarantee of Increment and Append

2017-09-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned HBASE-15497:
--

Assignee: Jianwei Cui

> Incorrect javadoc for atomicity guarantee of Increment and Append
> -
>
> Key: HBASE-15497
> URL: https://issues.apache.org/jira/browse/HBASE-15497
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
>Priority: Minor
> Attachments: HBASE-15497-v1.patch
>
>
> At the front of the {{Increment.java}} file, there is a comment about read 
> atomicity:
> {code}
>  * This operation does not appear atomic to readers.  Increments are done
>  * under a single row lock, so write operations to a row are synchronized, but
>  * readers do not take row locks so get and scan operations can see this
>  * operation partially completed.
> {code}
> It seems this comment is no longer true since MVCC was integrated in 
> [HBASE-4583|https://issues.apache.org/jira/browse/HBASE-4583]. Currently, 
> readers are guaranteed to see the whole result of an Increment, if I am not 
> mistaken. Similar comments also exist in {{Append.java}}, {{Table#append(...)}} 
> and {{Table#increment(...)}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15497) Incorrect javadoc for atomicity guarantee of Increment and Append

2017-09-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151893#comment-16151893
 ] 

Chia-Ping Tsai commented on HBASE-15497:


LGTM

> Incorrect javadoc for atomicity guarantee of Increment and Append
> -
>
> Key: HBASE-15497
> URL: https://issues.apache.org/jira/browse/HBASE-15497
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Priority: Minor
> Attachments: HBASE-15497-v1.patch
>
>
> At the front of the {{Increment.java}} file, there is a comment about read 
> atomicity:
> {code}
>  * This operation does not appear atomic to readers.  Increments are done
>  * under a single row lock, so write operations to a row are synchronized, but
>  * readers do not take row locks so get and scan operations can see this
>  * operation partially completed.
> {code}
> It seems this comment is no longer true since MVCC was integrated in 
> [HBASE-4583|https://issues.apache.org/jira/browse/HBASE-4583]. Currently, 
> readers are guaranteed to see the whole result of an Increment, if I am not 
> mistaken. Similar comments also exist in {{Append.java}}, {{Table#append(...)}} 
> and {{Table#increment(...)}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-14043) Syntax error in Section 26.2 of Reference Guide

2017-09-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved HBASE-14043.

Resolution: Duplicate

see HBASE-11533

> Syntax error in Section 26.2 of Reference Guide
> ---
>
> Key: HBASE-14043
> URL: https://issues.apache.org/jira/browse/HBASE-14043
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Joe McCarthy
>Priority: Trivial
>
> The following string does not appear rendered as the preceding string 
> describing Table.put:
> "link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#batch(java.util.List,
>  java.lang.Object[])[Table.batch] (non-writeBuffer)"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18743) HFiles in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table are deleted

2017-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151891#comment-16151891
 ] 

Hudson commented on HBASE-18743:


FAILURE: Integrated in Jenkins build HBase-2.0 #449 (See 
[https://builds.apache.org/job/HBase-2.0/449/])
HBASE-18743 HFiles in use by a table which has the same name and (tedyu: rev 
c762753b4ba98c4e17abb020d63d6f78abc61bc2)
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/TableName.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHFileLink.java


> HFiles in use by a table which has the same name and namespace with a default 
> table cloned from snapshot may be deleted when that snapshot and default 
> table are deleted
> 
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE_18743.patch, HBASE_18743_v1.patch, 
> HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still 
> in use by a table were deleted.
> This appears to happen when a table cloned from a snapshot has a namespace 
> whose name matches that of a table in the default namespace. When that 
> snapshot and the default-namespace table are deleted, HFiles still in use 
> by the cloned table may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table is cloned from a snapshot into a namespace named after the 
> default table: "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles 
> still in use by the new table are deleted too.
> This happens because back-reference file creation derives an abnormal table 
> name, so the reference file cannot be found and the HFileCleaner deletes 
> HFiles that are still in use whenever the table has not been major compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>   final Path dstFamilyPath, final TableName linkedTable, final String 
> linkedRegion,
>   final String hfileName, final boolean createBackRef) throws IOException 
> {
> String familyName = dstFamilyPath.getName();
> String regionName = dstFamilyPath.getParent().getName();
> String tableName = 
> FSUtils.getTableName(dstFamilyPath.getParent().getParent())
> .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
> return TableName.valueOf(tablePath.getParent().getName(), 
> tablePath.getName());
>   }
> {code}
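The quoted derivation can be sketched with plain java.nio.file paths (the layout is illustrative; HBase's real code uses Hadoop's Path and FSUtils). It shows how the namespace is taken from a parent directory name, so the derived name depends entirely on the directory layout of the path passed in; a path with a different depth shifts every component and yields the wrong table name.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class TableNameFromPath {
    /** Mirrors the quoted getTableName(Path): namespace from the parent dir name. */
    static String getTableName(Path tablePath) {
        return tablePath.getParent().getFileName() + ":" + tablePath.getFileName();
    }

    public static void main(String[] args) {
        // dstFamilyPath = <root>/<namespace>/<table>/<region>/<family>
        Path dstFamilyPath = Paths.get("/hbase/data/t1/t1/region1/cf");
        Path tablePath = dstFamilyPath.getParent().getParent();
        System.out.println(getTableName(tablePath));  // prints t1:t1

        // A default-namespace table whose name matches the clone's namespace:
        Path defaultFamilyPath = Paths.get("/hbase/data/default/t1/region1/cf");
        System.out.println(
            getTableName(defaultFamilyPath.getParent().getParent()));  // prints default:t1
    }
}
```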



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18746) Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot fails

2017-09-03 Thread ChunHao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChunHao updated HBASE-18746:

Status: Patch Available  (was: Open)

> Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot 
> fails
> ---
>
> Key: HBASE-18746
> URL: https://issues.apache.org/jira/browse/HBASE-18746
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce, snapshots
>Reporter: Chia-Ping Tsai
>Assignee: ChunHao
>Priority: Minor
>  Labels: beginner
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE-18746.branch-2.v0.patch
>
>
> {code}
> // Run the MR Job
> if (!job.waitForCompletion(true)) {
>   // TODO: Replace the fixed string with job.getStatus().getFailureInfo()
>   // when it will be available on all the supported versions.
>   throw new ExportSnapshotException("Copy Files Map-Reduce Job failed");
> }
> {code}
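The proposed change from the quoted TODO can be sketched as follows. The Job, JobStatus, and ExportSnapshotException classes below are minimal mocks so the example is self-contained, not Hadoop's real API; the point is simply to surface job.getStatus().getFailureInfo() in the exception message instead of a fixed string.

```java
public class ExportFailureSketch {
    static class JobStatus {
        String getFailureInfo() { return "Task attempt_0001 failed: disk full"; }
    }
    static class Job {
        boolean waitForCompletion(boolean verbose) { return false; } // simulate failure
        JobStatus getStatus() { return new JobStatus(); }
    }
    static class ExportSnapshotException extends RuntimeException {
        ExportSnapshotException(String msg) { super(msg); }
    }

    static void runExport(Job job) {
        if (!job.waitForCompletion(true)) {
            // Include the concrete failure reason rather than a fixed string.
            throw new ExportSnapshotException(
                "Copy Files Map-Reduce Job failed: " + job.getStatus().getFailureInfo());
        }
    }

    public static void main(String[] args) {
        try {
            runExport(new Job());
        } catch (ExportSnapshotException e) {
            System.out.println(e.getMessage());
        }
    }
}
```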



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18746) Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot fails

2017-09-03 Thread ChunHao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChunHao updated HBASE-18746:

Attachment: HBASE-18746.branch-2.v0.patch

> Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot 
> fails
> ---
>
> Key: HBASE-18746
> URL: https://issues.apache.org/jira/browse/HBASE-18746
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce, snapshots
>Reporter: Chia-Ping Tsai
>Assignee: ChunHao
>Priority: Minor
>  Labels: beginner
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE-18746.branch-2.v0.patch
>
>
> {code}
> // Run the MR Job
> if (!job.waitForCompletion(true)) {
>   // TODO: Replace the fixed string with job.getStatus().getFailureInfo()
>   // when it will be available on all the supported versions.
>   throw new ExportSnapshotException("Copy Files Map-Reduce Job failed");
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-13859) Add guidelines for beginner label to the ref guide

2017-09-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151880#comment-16151880
 ] 

Chia-Ping Tsai commented on HBASE-13859:


HBASE-12794 already put the beginner link in the "[Working on an 
issue|http://hbase.apache.org/book.html#_working_on_an_issue]" section. Does 
that resolve this issue?

> Add guidelines for beginner label to the ref guide
> --
>
> Key: HBASE-13859
> URL: https://issues.apache.org/jira/browse/HBASE-13859
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Sean Busbey
>Priority: Minor
>
> Right now we talk about the beginner label only in the intro for the "Getting 
> involved" section. We should also add a mention of using it to the section 
> that explains how newcomers can file an issue, and to the section on what 
> committers should do.
> Just something that points out that "beginner" is the correct label, since we 
> share labels across all of JIRA and it can be confusing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18718) Document the coprocessor.Export

2017-09-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18718:
---
Component/s: documentation

> Document the coprocessor.Export
> ---
>
> Key: HBASE-18718
> URL: https://issues.apache.org/jira/browse/HBASE-18718
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, documentation, tooling
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-18718.v0.patch, HBASE-18718.v0.png, 
> HBASE-18718.v1.patch, HBASE-18718.v1.patch, HBASE-18718.v1.png
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18749) Apply the TimeRange from ColumnFamily to filter the segment scanner

2017-09-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18749:
---
Attachment: HBASE-18749.v0.patch

> Apply the TimeRange from ColumnFamily to filter the segment scanner
> ---
>
> Key: HBASE-18749
> URL: https://issues.apache.org/jira/browse/HBASE-18749
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0-alpha-3
>
> Attachments: HBASE-18749.v0.patch
>
>
> We can evict the unused segment scanner early.
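The pruning idea above can be sketched generically. This is a self-contained illustration, not HBase's actual Segment or TimeRange classes: each segment tracks the min and max timestamps of its cells, and a segment whose range does not overlap the query's TimeRange never needs a scanner at all.

```java
import java.util.ArrayList;
import java.util.List;

public class TimeRangePruning {
    /** Query time range: [min, max). */
    static class TimeRange {
        final long min, max;
        TimeRange(long min, long max) { this.min = min; this.max = max; }
    }

    /** A segment summarizing the timestamps of the cells it holds. */
    static class Segment {
        final String name;
        final long minTs, maxTs;   // inclusive bounds over contained cells
        Segment(String name, long minTs, long maxTs) {
            this.name = name; this.minTs = minTs; this.maxTs = maxTs;
        }
        boolean mayContain(TimeRange tr) {
            // Overlap test: skip the segment when every cell is out of range.
            return minTs < tr.max && maxTs >= tr.min;
        }
    }

    static List<String> segmentsToScan(List<Segment> segments, TimeRange tr) {
        List<String> toScan = new ArrayList<>();
        for (Segment s : segments) {
            if (s.mayContain(tr)) {
                toScan.add(s.name);   // only these need a scanner
            }
        }
        return toScan;
    }

    public static void main(String[] args) {
        List<Segment> segs = List.of(
            new Segment("old", 0, 99),
            new Segment("mid", 100, 199),
            new Segment("new", 200, 299));
        // Query only cares about timestamps in [150, 250).
        System.out.println(segmentsToScan(segs, new TimeRange(150, 250)));  // prints [mid, new]
    }
}
```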



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18749) Apply the TimeRange from ColumnFamily to filter the segment scanner

2017-09-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18749:
---
Status: Patch Available  (was: Open)

> Apply the TimeRange from ColumnFamily to filter the segment scanner
> ---
>
> Key: HBASE-18749
> URL: https://issues.apache.org/jira/browse/HBASE-18749
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0-alpha-3
>
> Attachments: HBASE-18749.v0.patch
>
>
> We can evict the unused segment scanner early.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18718) Document the coprocessor.Export

2017-09-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18718:
---
Status: Patch Available  (was: Open)

> Document the coprocessor.Export
> ---
>
> Key: HBASE-18718
> URL: https://issues.apache.org/jira/browse/HBASE-18718
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, tooling
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-18718.v0.patch, HBASE-18718.v0.png, 
> HBASE-18718.v1.patch, HBASE-18718.v1.patch, HBASE-18718.v1.png
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18718) Document the coprocessor.Export

2017-09-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18718:
---
Attachment: HBASE-18718.v1.patch

> Document the coprocessor.Export
> ---
>
> Key: HBASE-18718
> URL: https://issues.apache.org/jira/browse/HBASE-18718
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, tooling
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-18718.v0.patch, HBASE-18718.v0.png, 
> HBASE-18718.v1.patch, HBASE-18718.v1.patch, HBASE-18718.v1.png
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18718) Document the coprocessor.Export

2017-09-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18718:
---
Status: Open  (was: Patch Available)

> Document the coprocessor.Export
> ---
>
> Key: HBASE-18718
> URL: https://issues.apache.org/jira/browse/HBASE-18718
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, tooling
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-18718.v0.patch, HBASE-18718.v0.png, 
> HBASE-18718.v1.patch, HBASE-18718.v1.patch, HBASE-18718.v1.png
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18749) Apply the TimeRange from ColumnFamily to filter the segment scanner

2017-09-03 Thread Chia-Ping Tsai (JIRA)
Chia-Ping Tsai created HBASE-18749:
--

 Summary: Apply the TimeRange from ColumnFamily to filter the 
segment scanner
 Key: HBASE-18749
 URL: https://issues.apache.org/jira/browse/HBASE-18749
 Project: HBase
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai
Priority: Minor
 Fix For: 2.0.0-alpha-3


We can evict the unused segment scanner early.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18743) HFiles in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table are deleted

2017-09-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151852#comment-16151852
 ] 

Ted Yu commented on HBASE-18743:


Patch doesn't apply to branch-1

Please attach patch for branch-1

> HFiles in use by a table which has the same name and namespace with a default 
> table cloned from snapshot may be deleted when that snapshot and default 
> table are deleted
> 
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE_18743.patch, HBASE_18743_v1.patch, 
> HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still 
> in use by a table were deleted.
> This appears to happen when a table cloned from a snapshot has a namespace 
> whose name matches that of a table in the default namespace. When that 
> snapshot and the default-namespace table are deleted, HFiles still in use 
> by the cloned table may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table is cloned from a snapshot into a namespace named after the 
> default table: "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles 
> still in use by the new table are deleted too.
> This happens because back-reference file creation derives an abnormal table 
> name, so the reference file cannot be found and the HFileCleaner deletes 
> HFiles that are still in use whenever the table has not been major compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>   final Path dstFamilyPath, final TableName linkedTable, final String 
> linkedRegion,
>   final String hfileName, final boolean createBackRef) throws IOException 
> {
> String familyName = dstFamilyPath.getName();
> String regionName = dstFamilyPath.getParent().getName();
> String tableName = 
> FSUtils.getTableName(dstFamilyPath.getParent().getParent())
> .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
> return TableName.valueOf(tablePath.getParent().getName(), 
> tablePath.getName());
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18375) The pool chunks from ChunkCreator are deallocated while in pool because there is no reference to them

2017-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151847#comment-16151847
 ] 

Hadoop QA commented on HBASE-18375:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
12s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
34m 19s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 94m 
30s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:3c8b364 |
| JIRA Issue | HBASE-18375 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885135/HBASE-18375-branch-2.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux c0938e09d239 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2 / a37417c |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8448/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8448/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> The pool chunks from ChunkCreator are deallocated while in pool because there 
> is no reference to them
> -
>
> Key: HBASE-18375
> URL: https://issues.apache.org/jira/browse/HBASE-18375
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha-1
>Reporter: Anastas

[jira] [Commented] (HBASE-18699) Copy LoadIncrementalHFiles to another package and mark the old one as deprecated

2017-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151844#comment-16151844
 ] 

Hudson commented on HBASE-18699:


FAILURE: Integrated in Jenkins build HBase-2.0 #448 (See 
[https://builds.apache.org/job/HBase-2.0/448/])
HBASE-18699 Copy LoadIncrementalHFiles to another package and mark the 
(zhangduo: rev a37417c25414e37cb719c69867fc8be11b0b94f4)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* (edit) 
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/mapreduce/MapReduceRestoreJob.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/HadoopSecurityEnabledUserProviderForTesting.java
* (edit) 
hbase-spark/src/test/java/org/apache/hadoop/hbase/spark/TestJavaHBaseContext.java
* (edit) 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/tool/TestLoadIncrementalHFiles.java
* (edit) 
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/util/BackupUtils.java
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestBulkLoad.java
* (delete) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSecureLoadIncrementalHFiles.java
* (edit) 
hbase-endpoint/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSyncUpToolWithBulkLoadedData.java
* (edit) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestSecureExportSnapshot.java
* (edit) 
hbase-spark/src/main/java/org/apache/hadoop/hbase/spark/example/hbasecontext/JavaHBaseBulkLoadExample.java
* (delete) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesSplitRecovery.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/tool/TestLoadIncrementalHFilesSplitRecovery.java
* (delete) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
* (edit) 
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/RestoreTablesClient.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/tool/TestSecureLoadIncrementalHFiles.java
* (edit) 
hbase-backup/src/test/java/org/apache/hadoop/hbase/backup/TestBackupBase.java
* (edit) 
hbase-spark-it/src/test/java/org/apache/hadoop/hbase/spark/IntegrationTestSparkBulkLoad.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/compactions/PartitionedMobCompactor.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverInterface.java
* (delete) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSecureLoadIncrementalHFilesSplitRecovery.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* (edit) 
hbase-endpoint/src/test/java/org/apache/hadoop/hbase/coprocessor/TestSecureExport.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/tool/TestSecureLoadIncrementalHFilesSplitRecovery.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/tool/LoadIncrementalHFiles.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/tool/MapreduceTestingShim.java
* (edit) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestMobSecureExportSnapshot.java
* (edit) 
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/util/RestoreTool.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java
* (delete) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/HadoopSecurityEnabledUserProviderForTesting.java
* (edit) 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/Driver.java
* (edit) 
hbase-backup/src/test/java/org/apache/hadoop/hbase/backup/TestIncrementalBackupWithBulkLoad.java
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestImportTsv.java
* (edit) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
* (edit) 
hbase-spark/src/test/scala/org/apache/hadoop/hbase/spark/BulkLoadSuite.scala
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScannerWithBulkload.java
* (edit) src/main/asciidoc/_chapters/ops_mgt.adoc
* (edit) 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java


> Copy LoadIncrementalHFiles to another package and mark the old one as 
> deprecated
> 
>
> Key: HBASE-18699
> URL: https://issues.apache.org/jira/browse/HBASE-18699

[jira] [Commented] (HBASE-18743) HFiles in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table are deleted

2017-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151843#comment-16151843
 ] 

Hudson commented on HBASE-18743:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3650 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3650/])
HBASE-18743 HFiles in use by a table which has the same name and (tedyu: rev 
83175fdf8375527fb893debfa441e3862d5093b9)
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/TableName.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHFileLink.java


> HFiles in use by a table which has the same name and namespace with a default 
> table cloned from snapshot may be deleted when that snapshot and default 
> table are deleted
> 
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE_18743.patch, HBASE_18743_v1.patch, 
> HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still 
> in use by a table were deleted.
> This appears to happen when a table cloned from a snapshot has a namespace 
> whose name matches that of a table in the default namespace. When that 
> snapshot and the default-namespace table are deleted, HFiles still in use 
> by the cloned table may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table is cloned from a snapshot into a namespace named after the 
> default table: "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles 
> still in use by the new table are deleted too.
> This happens because back-reference file creation derives an abnormal table 
> name, so the reference file cannot be found and the HFileCleaner deletes 
> HFiles that are still in use whenever the table has not been major compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>   final Path dstFamilyPath, final TableName linkedTable, final String 
> linkedRegion,
>   final String hfileName, final boolean createBackRef) throws IOException 
> {
> String familyName = dstFamilyPath.getName();
> String regionName = dstFamilyPath.getParent().getName();
> String tableName = 
> FSUtils.getTableName(dstFamilyPath.getParent().getParent())
> .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
> return TableName.valueOf(tablePath.getParent().getName(), 
> tablePath.getName());
>   }
> {code}
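A minimal standalone sketch (hypothetical paths and helper names, not HBase source code) of why the family path of the default-namespace table "t1" and of the cloned table "t1:t1" must resolve to distinct table names, or the back-reference lookup collides:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class TableNameFromPathDemo {

  // Mirrors the quoted FSUtils.getTableName logic: the directory above the
  // table dir is the namespace, the table dir itself is the qualifier.
  static String tableNameFromFamilyPath(String familyPath) {
    Path tableDir = Paths.get(familyPath).getParent().getParent();
    String namespace = tableDir.getParent().getFileName().toString();
    String qualifier = tableDir.getFileName().toString();
    // The default namespace is elided in the printable form, as in HBase.
    return namespace.equals("default") ? qualifier : namespace + ":" + qualifier;
  }

  public static void main(String[] args) {
    String plain = tableNameFromFamilyPath("/hbase/data/default/t1/r1/cf");
    String cloned = tableNameFromFamilyPath("/hbase/data/t1/t1/r1/cf");
    System.out.println(plain);   // t1
    System.out.println(cloned);  // t1:t1
  }
}
```

If the namespace component is dropped, both paths print "t1", which is exactly the collision that let the cleaner chain delete in-use HFiles.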



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18743) HFiles in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table are deleted

2017-09-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18743:
---
Summary: HFiles in use by a table which has the same name and namespace 
with a default table cloned from snapshot may be deleted when that snapshot and 
default table are deleted  (was: HFiles that are in use by a table whitch have 
the same name and namespace with a default table cloned from snapshot may be 
deleted when that snapshot and default table is deleted)

> HFiles in use by a table which has the same name and namespace with a default 
> table cloned from snapshot may be deleted when that snapshot and default 
> table are deleted
> 
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE_18743.patch, HBASE_18743_v1.patch, 
> HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still 
> in use by a table were deleted.
> This appears to have been caused by a table having the same namespace and 
> qualifier as a default-namespace table cloned from a snapshot: when that 
> snapshot and the default-namespace table are deleted, HFiles that are still 
> in use may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table, cloned from a snapshot, has a namespace equal to the name of 
> the default-namespace table: "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles 
> still in use by the new table are deleted too.
> This happens because back-reference file creation derives an incorrect table 
> name, so the reference file cannot be found and the HFileCleaner deletes 
> HFiles that are still in use, as long as the table has not been major 
> compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>   final Path dstFamilyPath, final TableName linkedTable, final String 
> linkedRegion,
>   final String hfileName, final boolean createBackRef) throws IOException 
> {
> String familyName = dstFamilyPath.getName();
> String regionName = dstFamilyPath.getParent().getName();
> String tableName = 
> FSUtils.getTableName(dstFamilyPath.getParent().getParent())
> .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
> return TableName.valueOf(tablePath.getParent().getName(), 
> tablePath.getName());
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18375) The pool chunks from ChunkCreator are deallocated while in pool because there is no reference to them

2017-09-03 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-18375:

Attachment: (was: HBASE-18375-branch-2.patch)

> The pool chunks from ChunkCreator are deallocated while in pool because there 
> is no reference to them
> -
>
> Key: HBASE-18375
> URL: https://issues.apache.org/jira/browse/HBASE-18375
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha-1
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-18375-branch-2.patch, HBASE-18375-V01.patch, 
> HBASE-18375-V02-branch-2.patch, HBASE-18375-V02.patch, HBASE-18375-V03.patch, 
> HBASE-18375-V04.patch, HBASE-18375-V05.patch, HBASE-18375-V06.patch, 
> HBASE-18375-V07.patch, HBASE-18375-V08.patch, HBASE-18375-V09.patch, 
> HBASE-18375-V10.patch, HBASE-18375-V11.patch
>
>
> Because the MSLAB list of chunks was changed to a list of chunk IDs, chunks 
> returned to the pool can be deallocated by the JVM, since nothing holds a 
> strong reference to them. The solution is to protect pool chunks from GC via 
> the strong map in ChunkCreator introduced by HBASE-18010. Will prepare the 
> patch today.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18375) The pool chunks from ChunkCreator are deallocated while in pool because there is no reference to them

2017-09-03 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151815#comment-16151815
 ] 

Anastasia Braginsky commented on HBASE-18375:
-

Attaching a freshly rebased patch for branch-2...

> The pool chunks from ChunkCreator are deallocated while in pool because there 
> is no reference to them
> -
>
> Key: HBASE-18375
> URL: https://issues.apache.org/jira/browse/HBASE-18375
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha-1
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-18375-branch-2.patch, HBASE-18375-V01.patch, 
> HBASE-18375-V02-branch-2.patch, HBASE-18375-V02.patch, HBASE-18375-V03.patch, 
> HBASE-18375-V04.patch, HBASE-18375-V05.patch, HBASE-18375-V06.patch, 
> HBASE-18375-V07.patch, HBASE-18375-V08.patch, HBASE-18375-V09.patch, 
> HBASE-18375-V10.patch, HBASE-18375-V11.patch
>
>
> Because the MSLAB list of chunks was changed to a list of chunk IDs, chunks 
> returned to the pool can be deallocated by the JVM, since nothing holds a 
> strong reference to them. The solution is to protect pool chunks from GC via 
> the strong map in ChunkCreator introduced by HBASE-18010. Will prepare the 
> patch today.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18375) The pool chunks from ChunkCreator are deallocated while in pool because there is no reference to them

2017-09-03 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-18375:

Attachment: HBASE-18375-branch-2.patch

> The pool chunks from ChunkCreator are deallocated while in pool because there 
> is no reference to them
> -
>
> Key: HBASE-18375
> URL: https://issues.apache.org/jira/browse/HBASE-18375
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha-1
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-18375-branch-2.patch, HBASE-18375-V01.patch, 
> HBASE-18375-V02-branch-2.patch, HBASE-18375-V02.patch, HBASE-18375-V03.patch, 
> HBASE-18375-V04.patch, HBASE-18375-V05.patch, HBASE-18375-V06.patch, 
> HBASE-18375-V07.patch, HBASE-18375-V08.patch, HBASE-18375-V09.patch, 
> HBASE-18375-V10.patch, HBASE-18375-V11.patch
>
>
> Because the MSLAB list of chunks was changed to a list of chunk IDs, chunks 
> returned to the pool can be deallocated by the JVM, since nothing holds a 
> strong reference to them. The solution is to protect pool chunks from GC via 
> the strong map in ChunkCreator introduced by HBASE-18010. Will prepare the 
> patch today.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18699) Copy LoadIncrementalHFiles to another package and mark the old one as deprecated

2017-09-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151813#comment-16151813
 ] 

Hudson commented on HBASE-18699:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3649 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3649/])
HBASE-18699 Copy LoadIncrementalHFiles to another package and mark the 
(zhangduo: rev 9e53f2927b3154eb703560933ddad489c2e232b5)
* (edit) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestSecureExportSnapshot.java
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestBulkLoad.java
* (edit) 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java
* (delete) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/HadoopSecurityEnabledUserProviderForTesting.java
* (edit) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestMobSecureExportSnapshot.java
* (edit) 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
* (edit) 
hbase-spark-it/src/test/java/org/apache/hadoop/hbase/spark/IntegrationTestSparkBulkLoad.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverInterface.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/HadoopSecurityEnabledUserProviderForTesting.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/tool/TestSecureLoadIncrementalHFiles.java
* (edit) 
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/util/BackupUtils.java
* (edit) 
hbase-backup/src/test/java/org/apache/hadoop/hbase/backup/TestBackupBase.java
* (edit) 
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/mapreduce/MapReduceRestoreJob.java
* (edit) 
hbase-endpoint/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSyncUpToolWithBulkLoadedData.java
* (delete) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
* (delete) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSecureLoadIncrementalHFiles.java
* (edit) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/tool/TestLoadIncrementalHFiles.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScannerWithBulkload.java
* (edit) 
hbase-spark/src/test/scala/org/apache/hadoop/hbase/spark/BulkLoadSuite.scala
* (delete) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSecureLoadIncrementalHFilesSplitRecovery.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/tool/TestLoadIncrementalHFilesSplitRecovery.java
* (delete) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesSplitRecovery.java
* (edit) src/main/asciidoc/_chapters/ops_mgt.adoc
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/tool/TestSecureLoadIncrementalHFilesSplitRecovery.java
* (edit) 
hbase-backup/src/test/java/org/apache/hadoop/hbase/backup/TestIncrementalBackupWithBulkLoad.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/compactions/PartitionedMobCompactor.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/tool/LoadIncrementalHFiles.java
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestImportTsv.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/tool/MapreduceTestingShim.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* (edit) 
hbase-spark/src/test/java/org/apache/hadoop/hbase/spark/TestJavaHBaseContext.java
* (edit) 
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/RestoreTablesClient.java
* (edit) 
hbase-endpoint/src/test/java/org/apache/hadoop/hbase/coprocessor/TestSecureExport.java
* (edit) 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/Driver.java
* (edit) 
hbase-spark/src/main/java/org/apache/hadoop/hbase/spark/example/hbasecontext/JavaHBaseBulkLoadExample.java
* (edit) 
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/util/RestoreTool.java


> Copy LoadIncrementalHFiles to another package and mark the old one as 
> deprecated
> 
>
> Key: HBASE-18699
> URL: https://issues.apache.org/jira

[jira] [Commented] (HBASE-18748) Cache pre-warming upon replication

2017-09-03 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151792#comment-16151792
 ] 

Anastasia Braginsky commented on HBASE-18748:
-

As explained in the description, we would like to add a feature to the HBase 
replication methodology. Failover from the primary cluster to the secondary 
should have zero effect on read latency. Currently there is a spike in read 
latency upon failover because the cache on the secondary is cold. Simple 
redirection (duplication by the user application) of reads to the secondary 
prior to failover resolves this issue. However, having the secondary process 
all the reads wastes resources. Therefore, the suggestion is to redirect only 
the "relevant" reads. In other words, the suggested solution is to selectively 
replay read requests at the backup, namely those reads that caused cache-ins 
at the primary. 

We intend to use WAL replication as the transport protocol (hopefully as a 
black box) and, of course, add custom replay callbacks. That is, we would add 
a new "read type" of WAL entry, emitted rarely, only upon a cache-in. These 
read WAL entries would be replicated to the secondary cluster. Of course, the 
cached blocks on the primary and secondary may diverge, but this is a good 
heuristic.

What do you think about this suggestion? [~stack] and everybody, we would like 
to hear from you! Maybe this is already implemented somehow and we are not 
aware?
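The proposal can be sketched as a toy model (all names and structures here are hypothetical, not the HBase replication API): only a cache-in on the primary emits a "read" WAL entry, and a replay callback on the secondary uses it to pre-warm the cache without serving the read.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CacheWarmupSketch {
  static final List<String> walQueue = new ArrayList<>();   // replicated entries
  static final Set<String> primaryCache = new HashSet<>();
  static final Set<String> secondaryCache = new HashSet<>();

  // Primary-side read: a cache hit emits nothing, so the extra replicated
  // traffic stays rare, as the proposal suggests.
  static void primaryRead(String blockKey) {
    if (primaryCache.add(blockKey)) {   // true only on a cache-in
      walQueue.add(blockKey);
    }
  }

  // Secondary-side replay callback: warm the cache, do not serve the read.
  static void replayCacheIns() {
    secondaryCache.addAll(walQueue);
    walQueue.clear();
  }
}
```

Reading "block-1" twice and "block-2" once on the primary would replicate only two entries, leaving the secondary's cache warm for both blocks on failover.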

> Cache pre-warming upon replication
> --
>
> Key: HBASE-18748
> URL: https://issues.apache.org/jira/browse/HBASE-18748
> Project: HBase
>  Issue Type: New Feature
>Reporter: Anastasia Braginsky
>
> HBase's cluster replication is a very important and widely used feature. 
> Assume the primary cluster is replicated to a secondary (backup) cluster, 
> using the WAL of the primary cluster to propagate the changes. Assume also 
> that the secondary cluster is the failover target and should become primary 
> when needed.
> We suggest improving the way HBase cluster failover works today. Upon 
> failover, the backup RS's cache is cold, and warming it up to the right 
> working set takes many minutes. The suggested solution is to selectively 
> replay read requests at the backup, namely those reads that caused 
> cache-ins at the primary. We intend to use WAL replication as the transport 
> protocol (hopefully as a black box) and, of course, add custom replay 
> callbacks. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18736) Cleanup the HTD/HCD for Admin

2017-09-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151789#comment-16151789
 ] 

Chia-Ping Tsai commented on HBASE-18736:


Will commit it tomorrow if no objections.

> Cleanup the HTD/HCD for Admin
> -
>
> Key: HBASE-18736
> URL: https://issues.apache.org/jira/browse/HBASE-18736
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-alpha-3
>
> Attachments: HBASE-18736.v0.patch
>
>
> see the 
> [discussion|https://issues.apache.org/jira/browse/HBASE-18729?focusedCommentId=16150675&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16150675]
>  in HBASE-18729.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18748) Cache pre-warming upon replication

2017-09-03 Thread Anastasia Braginsky (JIRA)
Anastasia Braginsky created HBASE-18748:
---

 Summary: Cache pre-warming upon replication
 Key: HBASE-18748
 URL: https://issues.apache.org/jira/browse/HBASE-18748
 Project: HBase
  Issue Type: New Feature
Reporter: Anastasia Braginsky


HBase's cluster replication is a very important and widely used feature. 
Assume the primary cluster is replicated to a secondary (backup) cluster, 
using the WAL of the primary cluster to propagate the changes. Assume also 
that the secondary cluster is the failover target and should become primary 
when needed.

We suggest improving the way HBase cluster failover works today. Upon 
failover, the backup RS's cache is cold, and warming it up to the right 
working set takes many minutes. The suggested solution is to selectively 
replay read requests at the backup, namely those reads that caused cache-ins 
at the primary. We intend to use WAL replication as the transport protocol 
(hopefully as a black box) and, of course, add custom replay callbacks. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18743) HFiles that are in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table

2017-09-03 Thread wenbang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151787#comment-16151787
 ] 

wenbang commented on HBASE-18743:
-

Hi Chia-Ping Tsai, I have closed the pull request.

> HFiles that are in use by a table which has the same name and namespace 
> with a default table cloned from snapshot may be deleted when that snapshot 
> and default table is deleted
> --
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE_18743.patch, HBASE_18743_v1.patch, 
> HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still 
> in use by a table were deleted.
> This appears to have been caused by a table having the same namespace and 
> qualifier as a default-namespace table cloned from a snapshot: when that 
> snapshot and the default-namespace table are deleted, HFiles that are still 
> in use may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table, cloned from a snapshot, has a namespace equal to the name of 
> the default-namespace table: "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles 
> still in use by the new table are deleted too.
> This happens because back-reference file creation derives an incorrect table 
> name, so the reference file cannot be found and the HFileCleaner deletes 
> HFiles that are still in use, as long as the table has not been major 
> compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>   final Path dstFamilyPath, final TableName linkedTable, final String 
> linkedRegion,
>   final String hfileName, final boolean createBackRef) throws IOException 
> {
> String familyName = dstFamilyPath.getName();
> String regionName = dstFamilyPath.getParent().getName();
> String tableName = 
> FSUtils.getTableName(dstFamilyPath.getParent().getParent())
> .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
> return TableName.valueOf(tablePath.getParent().getName(), 
> tablePath.getName());
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18699) Copy LoadIncrementalHFiles to another package and mark the old one as deprecated

2017-09-03 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18699:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master and branch-2. Thanks all for reviewing.

> Copy LoadIncrementalHFiles to another package and mark the old one as 
> deprecated
> 
>
> Key: HBASE-18699
> URL: https://issues.apache.org/jira/browse/HBASE-18699
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 3.0.0, 2.0.0-alpha-2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-3
>
> Attachments: HBASE-18699.patch, HBASE-18699-v1.patch, 
> HBASE-18699-v2.patch, HBASE-18699-v3.patch, HBASE-18699-v3.patch
>
>
> LoadIncrementalHFiles does not depend on map reduce.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18747) Introduce new example and helper classes to tell CP users how to do filtering on scanners

2017-09-03 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-18747:
-

 Summary: Introduce new example and helper classes to tell CP users 
how to do filtering on scanners
 Key: HBASE-18747
 URL: https://issues.apache.org/jira/browse/HBASE-18747
 Project: HBase
  Issue Type: Sub-task
  Components: Coprocessors
Reporter: Duo Zhang


Finally we decided that CP users should not have the ability to create 
{{StoreScanner}} or {{StoreFileScanner}}, so it is impossible for them to 
filter out cells during flush or compaction by simply providing a filter when 
constructing a {{StoreScanner}}.

But I think filtering out cells is a very important use case for CP users, so 
we need to provide the ability in another way. Theoretically it can be done by 
wrapping an {{InternalScanner}}, but I think we need to give an example, or 
even some helper classes, to help CP users.
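The wrapping idea can be sketched with a simplified stand-in interface (cells modeled as Strings here; this is not the real HBase coprocessor API): a delegating scanner drops the cells a predicate rejects, which is exactly what a CP hook around flush or compaction would do.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class FilteringScannerSketch {

  interface InternalScanner {
    // Fills `out` with the next batch of cells; returns true if more remain.
    boolean next(List<String> out);
  }

  // Delegating scanner that silently drops cells the predicate rejects.
  static class FilteringScanner implements InternalScanner {
    private final InternalScanner delegate;
    private final Predicate<String> keep;

    FilteringScanner(InternalScanner delegate, Predicate<String> keep) {
      this.delegate = delegate;
      this.keep = keep;
    }

    @Override
    public boolean next(List<String> out) {
      List<String> batch = new ArrayList<>();
      boolean more = delegate.next(batch);
      for (String cell : batch) {
        if (keep.test(cell)) {
          out.add(cell);
        }
      }
      return more;
    }
  }

  // A trivial single-batch source scanner, just for demonstration.
  static InternalScanner source(List<String> cells) {
    return out -> { out.addAll(cells); return false; };
  }
}
```

Wrapping `source(List.of("a", "drop-b", "c"))` with a predicate that rejects keys starting with "drop-" yields only "a" and "c", without the delegate ever knowing it was filtered.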



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18743) HFiles that are in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table

2017-09-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151781#comment-16151781
 ] 

Chia-Ping Tsai commented on HBASE-18743:


The following test fails without the patch.
{code}
  @Test
  public void testValueOfNamespaceAndQualifier() {
TableName name0 = TableName.valueOf("table");
TableName name1 = TableName.valueOf("table", "table");
assertEquals(NamespaceDescriptor.DEFAULT_NAMESPACE_NAME_STR, 
name0.getNamespaceAsString());
assertEquals("table", name0.getQualifierAsString());
assertEquals("table", name0.getNameAsString());
assertEquals("table", name1.getNamespaceAsString());
assertEquals("table", name1.getQualifierAsString());
assertEquals("table:table", name1.getNameAsString());
  }
{code}
+1 


> HFiles that are in use by a table which has the same name and namespace 
> with a default table cloned from snapshot may be deleted when that snapshot 
> and default table is deleted
> --
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE_18743.patch, HBASE_18743_v1.patch, 
> HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still 
> in use by a table were deleted.
> This appears to have been caused by a table having the same namespace and 
> qualifier as a default-namespace table cloned from a snapshot: when that 
> snapshot and the default-namespace table are deleted, HFiles that are still 
> in use may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table, cloned from a snapshot, has a namespace equal to the name of 
> the default-namespace table: "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles 
> still in use by the new table are deleted too.
> This happens because back-reference file creation derives an incorrect table 
> name, so the reference file cannot be found and the HFileCleaner deletes 
> HFiles that are still in use, as long as the table has not been major 
> compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>   final Path dstFamilyPath, final TableName linkedTable, final String 
> linkedRegion,
>   final String hfileName, final boolean createBackRef) throws IOException 
> {
> String familyName = dstFamilyPath.getName();
> String regionName = dstFamilyPath.getParent().getName();
> String tableName = 
> FSUtils.getTableName(dstFamilyPath.getParent().getParent())
> .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
> return TableName.valueOf(tablePath.getParent().getName(), 
> tablePath.getName());
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18699) Copy LoadIncrementalHFiles to another package and mark the old one as deprecated

2017-09-03 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151778#comment-16151778
 ] 

Duo Zhang commented on HBASE-18699:
---

Will commit shortly.

> Copy LoadIncrementalHFiles to another package and mark the old one as 
> deprecated
> 
>
> Key: HBASE-18699
> URL: https://issues.apache.org/jira/browse/HBASE-18699
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 3.0.0, 2.0.0-alpha-2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-3
>
> Attachments: HBASE-18699.patch, HBASE-18699-v1.patch, 
> HBASE-18699-v2.patch, HBASE-18699-v3.patch, HBASE-18699-v3.patch
>
>
> LoadIncrementalHFiles does not depend on map reduce.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18743) HFiles that are in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table i

2017-09-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18743:
---
Fix Version/s: 2.0.0-alpha-3
   1.2.7
   1.5.0
   1.3.2
   1.4.0

> HFiles that are in use by a table which has the same name and namespace 
> with a default table cloned from snapshot may be deleted when that snapshot 
> and default table is deleted
> --
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
> Attachments: HBASE_18743.patch, HBASE_18743_v1.patch, 
> HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still 
> in use by a table were deleted.
> This appears to have been caused by a table having the same namespace and 
> qualifier as a default-namespace table cloned from a snapshot: when that 
> snapshot and the default-namespace table are deleted, HFiles that are still 
> in use may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table, cloned from a snapshot, has a namespace equal to the name of 
> the default-namespace table: "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles 
> still in use by the new table are deleted too.
> This happens because back-reference file creation derives an incorrect table 
> name, so the reference file cannot be found and the HFileCleaner deletes 
> HFiles that are still in use, as long as the table has not been major 
> compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>   final Path dstFamilyPath, final TableName linkedTable, final String 
> linkedRegion,
>   final String hfileName, final boolean createBackRef) throws IOException 
> {
> String familyName = dstFamilyPath.getName();
> String regionName = dstFamilyPath.getParent().getName();
> String tableName = 
> FSUtils.getTableName(dstFamilyPath.getParent().getParent())
> .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
> return TableName.valueOf(tablePath.getParent().getName(), 
> tablePath.getName());
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18743) HFiles that are in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table

2017-09-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151777#comment-16151777
 ] 

Chia-Ping Tsai commented on HBASE-18743:


Nice find. Would you please close the github pull request?

> HFiles that are in use by a table which has the same name and namespace 
> with a default table cloned from snapshot may be deleted when that snapshot 
> and default table is deleted
> --
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Attachments: HBASE_18743.patch, HBASE_18743_v1.patch, 
> HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still 
> in use by a table were deleted.
> This appears to have been caused by a table having the same namespace and 
> qualifier as a default-namespace table cloned from a snapshot: when that 
> snapshot and the default-namespace table are deleted, HFiles that are still 
> in use may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table, cloned from a snapshot, has a namespace equal to the name of 
> the default-namespace table: "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles 
> still in use by the new table are deleted too.
> This happens because back-reference file creation derives an incorrect table 
> name, so the reference file cannot be found and the HFileCleaner deletes 
> HFiles that are still in use, as long as the table has not been major 
> compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>       final Path dstFamilyPath, final TableName linkedTable, final String linkedRegion,
>       final String hfileName, final boolean createBackRef) throws IOException {
>     String familyName = dstFamilyPath.getName();
>     String regionName = dstFamilyPath.getParent().getName();
>     String tableName = FSUtils.getTableName(dstFamilyPath.getParent().getParent())
>         .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
>     return TableName.valueOf(tablePath.getParent().getName(), tablePath.getName());
>   }
> {code}
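As an aside on the two helpers quoted above: here is a minimal, self-contained sketch of how that path-derivation logic resolves a namespace and table name from directory components. The paths, class, and method names below are hypothetical stand-ins (using java.nio.file rather than Hadoop's Path), not HBase's actual implementation:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class TableNameFromPathDemo {
    // Mirrors the quoted FSUtils.getTableName(): the namespace is taken from
    // the parent directory's name, the table from the directory's own name.
    static String tableNameFromTableDir(Path tableDir) {
        return tableDir.getParent().getFileName() + ":" + tableDir.getFileName();
    }

    public static void main(String[] args) {
        // Hypothetical family directories, assuming the usual
        // .../<namespace>/<table>/<region>/<family> nesting.
        Path defaultFamily = Paths.get("/hbase/data/default/t1/region1/cf");
        Path clonedFamily  = Paths.get("/hbase/data/t1/t1/region1/cf");

        // dstFamilyPath.getParent().getParent() from the quoted create()
        // method is expected to land on the table directory:
        System.out.println(tableNameFromTableDir(defaultFamily.getParent().getParent())); // default:t1
        System.out.println(tableNameFromTableDir(clonedFamily.getParent().getParent()));  // t1:t1
    }
}
```

Note that both tables end in a directory literally named "t1", so the name is ambiguous unless the namespace component is carried along correctly at every step; the report's claim is that the back-reference creation loses that distinction.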



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-18743) HFiles that are in use by a table which has the same name and namespace as a default table cloned from snapshot may be deleted when that snapshot and default table

2017-09-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned HBASE-18743:
--

Assignee: wenbang

> HFiles that are in use by a table which has the same name and namespace 
> as a default-namespace table cloned from a snapshot may be deleted when 
> that snapshot and the default table are deleted
> --
>
> Key: HBASE-18743
> URL: https://issues.apache.org/jira/browse/HBASE-18743
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.12
>Reporter: wenbang
>Assignee: wenbang
>Priority: Critical
> Attachments: HBASE_18743.patch, HBASE_18743_v1.patch, 
> HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still 
> in use by a table were deleted.
> This appears to have been caused by a table having the same namespace and 
> name as a default-namespace table cloned from a snapshot. When that snapshot 
> and the default-namespace table are deleted, HFiles that are still in use 
> may be deleted as well.
> For example:
> The table in the default namespace is "t1".
> The new table is cloned from a snapshot, and its namespace is the same as 
> the default table's name: "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles 
> still in use by the new table are deleted too.
> This happens because back-reference file creation derives an incorrect table 
> name, so the reference file cannot be found and the HFileCleaner deletes 
> HFiles that are still in use whenever the table has not yet been major 
> compacted.
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>       final Path dstFamilyPath, final TableName linkedTable, final String linkedRegion,
>       final String hfileName, final boolean createBackRef) throws IOException {
>     String familyName = dstFamilyPath.getName();
>     String regionName = dstFamilyPath.getParent().getName();
>     String tableName = FSUtils.getTableName(dstFamilyPath.getParent().getParent())
>         .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
>     return TableName.valueOf(tablePath.getParent().getName(), tablePath.getName());
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18683) Upgrade hbase to commons-math 3

2017-09-03 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151774#comment-16151774
 ] 

Peter Somogyi commented on HBASE-18683:
---

I reviewed the LICENSE.vm file; it has commons-math3 sections, but they do not 
match what is present in the 3.6.1 version. I also noticed that the vm file has 
sections for both commons-math2 and commons-math3. Shall we get rid of v2? 
Maven dependency:tree does not show any reference to commons-math v2.

> Upgrade hbase to commons-math 3
> ---
>
> Key: HBASE-18683
> URL: https://issues.apache.org/jira/browse/HBASE-18683
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha-2
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
> Fix For: 2.0.0
>
> Attachments: HBASE-18683.master.001.patch, 
> HBASE-18683.master.001.patch, HBASE-18683.master.002.patch, LICENSE.txt, 
> NOTICE.txt
>
>
> Upgrade hbase to use commons-math 3.6.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18112) Write RequestTooBigException back to client for NettyRpcServer

2017-09-03 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-18112:
-
Summary: Write RequestTooBigException back to client for NettyRpcServer  
(was: Write RequestTooLargeException back to client for NettyRpcServer)

> Write RequestTooBigException back to client for NettyRpcServer
> --
>
> Key: HBASE-18112
> URL: https://issues.apache.org/jira/browse/HBASE-18112
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Reporter: Duo Zhang
>Assignee: Toshihiro Suzuki
>
> For now we just close the connection so NettyRpcServer can not pass TestIPC.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18743) HFiles that are in use by a table which has the same name and namespace as a default table cloned from snapshot may be deleted when that snapshot and default table

2017-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151753#comment-16151753
 ] 

Hadoop QA commented on HBASE-18743:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
35m 37s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
18s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 99m  
9s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:47a5614 |
| JIRA Issue | HBASE-18743 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885124/HBASE_18743_v2.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 78e150d1cae1 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 7c51d3f |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8447/testReport/ |
| modules | C: hbase-common hbase-server U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8447/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> HFiles that are in use by a t

[jira] [Assigned] (HBASE-18746) Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot fails

2017-09-03 Thread ChunHao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChunHao reassigned HBASE-18746:
---

Assignee: ChunHao

> Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot 
> fails
> ---
>
> Key: HBASE-18746
> URL: https://issues.apache.org/jira/browse/HBASE-18746
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce, snapshots
>Reporter: Chia-Ping Tsai
>Assignee: ChunHao
>Priority: Minor
>  Labels: beginner
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-3
>
>
> {code}
> // Run the MR Job
> if (!job.waitForCompletion(true)) {
>   // TODO: Replace the fixed string with job.getStatus().getFailureInfo()
>   // when it will be available on all the supported versions.
>   throw new ExportSnapshotException("Copy Files Map-Reduce Job failed");
> }
> {code}
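A minimal, self-contained sketch of what the proposed replacement could look like, with stub classes standing in for the Hadoop Job/JobStatus API (the stub names and the sample failure string are assumptions for illustration, not the real signatures):

```java
// Stub for the job status object; in Hadoop this would come from
// job.getStatus() (name and constructor here are hypothetical).
class JobStatusStub {
    private final String failureInfo;
    JobStatusStub(String failureInfo) { this.failureInfo = failureInfo; }
    String getFailureInfo() { return failureInfo; }
}

class ExportSnapshotException extends RuntimeException {
    ExportSnapshotException(String msg) { super(msg); }
}

public class ExportSnapshotFailureDemo {
    // Mirrors the quoted check, but surfaces the job's failure info in the
    // exception message instead of the fixed string.
    static void checkCompletion(boolean completed, JobStatusStub status) {
        if (!completed) {
            throw new ExportSnapshotException(
                "Copy Files Map-Reduce Job failed: " + status.getFailureInfo());
        }
    }

    public static void main(String[] args) {
        try {
            checkCompletion(false, new JobStatusStub("Task attempt_001 failed: OOM"));
        } catch (ExportSnapshotException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With this shape, a failed export reports the underlying MapReduce failure reason rather than only the generic "Copy Files Map-Reduce Job failed" message.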



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18723) [pom cleanup] Do a pass with dependency:analyze; remove unused and explicitly list the dependencies we exploit

2017-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151725#comment-16151725
 ] 

Hadoop QA commented on HBASE-18723:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
10s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
36m 10s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
1s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
44s{color} | {color:green} hbase-backup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
3s{color} | {color:green} hbase-rest in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}196m 
42s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 5s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}287m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:47a5614 |
| JIRA Issue | HBASE-18723 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885116/HBASE-18723.master.004.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  xml  compile  |
| uname | Linux 8e549bd38f83 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 7c51d3f |
| Default Java | 1.8.0_144 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8445/testReport/ |
| modules | C: hbase-mapreduce hbase-backup hbase-rest . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/8445/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> [pom cleanup] Do a pass with dependency:analyze; remove unused and explicitly 
> list the dependencies we exploit
> --

[jira] [Commented] (HBASE-18743) HFiles that are in use by a table which has the same name and namespace as a default table cloned from snapshot may be deleted when that snapshot and default table

2017-09-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151718#comment-16151718
 ] 

Hadoop QA commented on HBASE-18743:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
39m 54s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
42s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 46s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 4s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.regionserver.TestSplitLogWorker |
|   | org.apache.hadoop.hbase.coprocessor.TestHTableWrapper |
|   | org.apache.hadoop.hbase.regionserver.wal.TestFSHLog |
|   | org.apache.hadoop.hbase.snapshot.TestSnapshotClientRetries |
|   | org.apache.hadoop.hbase.TestHBaseTestingUtility |
|   | org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook |
|   | org.apache.hadoop.hbase.wal.TestWALFiltering |
|   | org.apache.hadoop.hbase.coprocessor.TestCoprocessorMetrics |
|   | org.apache.hadoop.hbase.regionserver.wal.TestWALReplay |
|   | org.apache.hadoop.hbase.security.access.TestAccessController2 |
|   | org.apache.hadoop.hbase.replication.TestReplicationStateHBaseImpl |
|   | org.apache.hadoop.hbase.quotas.TestMasterSpaceQuotaObserver |
|   | org.apache.hadoop.hbase.wal.TestWALSplitCompressed |
|   | org.apache.hadoop.hbase.replication.regionserver.TestWALEntryStream |
|   | org.apache.hadoop.hbase.constraint.TestConstraint |
|   | org.apache.hadoop.hbase.mob.compactions.TestPartitioned