[jira] [Updated] (HBASE-7403) Online Merge

2013-03-09 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-7403:


Attachment: hbase-7403-trunkv20.patch

Patch v20 addresses the comments above (preferring to move the lower-load 
region) and fixes the javadoc warnings.

> Online Merge
> 
>
> Key: HBASE-7403
> URL: https://issues.apache.org/jira/browse/HBASE-7403
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.95.0, 0.94.6
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.95.0
>
> Attachments: 7403-trunkv5.patch, 7403-trunkv6.patch, 7403v5.diff, 
> 7403-v5.txt, 7403v5.txt, hbase-7403-94v1.patch, hbase-7403-trunkv10.patch, 
> hbase-7403-trunkv11.patch, hbase-7403-trunkv12.patch, 
> hbase-7403-trunkv13.patch, hbase-7403-trunkv14.patch, 
> hbase-7403-trunkv15.patch, hbase-7403-trunkv16.patch, 
> hbase-7403-trunkv19.patch, hbase-7403-trunkv1.patch, 
> hbase-7403-trunkv20.patch, hbase-7403-trunkv5.patch, 
> hbase-7403-trunkv6.patch, hbase-7403-trunkv7.patch, hbase-7403-trunkv8.patch, 
> hbase-7403-trunkv9.patch, merge region.pdf
>
>
> The features of this online merge:
> 1. Online; no need to disable the table
> 2. Minimal change to current code; can be applied to trunk, 0.94, 0.92, or 0.90
> 3. Easy to issue a merge request; no need to input a long region name, the 
> encoded name is enough
> 4. No restrictions during operation; you don't need to worry about events like 
> server death, balance, split, or disabling/enabling a table, nor about 
> whether you sent a wrong merge request; it is all handled for you
> 5. Only a short offline time for the two merging regions
> Usage:
> 1. Tool:
> bin/hbase org.apache.hadoop.hbase.util.OnlineMerge [-force] [-async] [-show]
> 2. API: static void MergeManager#createMergeRequest
> We need merge in the following cases:
> 1. Region holes or overlaps that can't be fixed by hbck
> 2. Regions that become empty because of TTL or an unreasonable rowkey design
> 3. Regions that are always empty or very small because of pre-splitting at 
> table creation
> 4. Too many empty or small regions, which reduce system performance (e.g. 
> MSLAB)
> The current merge tool only works offline and cannot redo if an exception is 
> thrown during the merge, leaving dirty data.
> For an online system, we need an online merge.
> The implementation logic of this patch for online merge is as follows.
> For example, merging regionA and regionB into regionC:
> 1. Offline the two regions A and B
> 2. Merge the two regions in HDFS (create regionC's directory, move 
> regionA's and regionB's files to regionC's directory, delete regionA's and 
> regionB's directories)
> 3. Add the merged regionC to .META.
> 4. Assign the merged regionC
> By design, once the merge work in HDFS has started, it can be redone until 
> successful if an exception is thrown, the process aborts, or the server 
> restarts, but it cannot be rolled back.
> It depends on:
> Using ZooKeeper to record the transaction journal state, making redo easier
> Using ZooKeeper to send/receive merge requests
> The merge transaction being executed on the master
> Support for calling merge requests through the API or shell tool
> For the merge process, please see the attachment and patch
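The journaled, redo-until-success (but no-rollback) design described in the quoted steps can be sketched as a small state machine. This is an illustrative, self-contained sketch, not the patch's actual API: the class and state names are hypothetical, and a plain map stands in for the ZooKeeper journal.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a redo-able merge transaction. The journal map stands in for
// the ZooKeeper transaction journal; all names here are illustrative.
public class MergeTransactionSketch {
    enum State { STARTED, REGIONS_OFFLINED, FILES_MOVED, META_UPDATED, REGION_ASSIGNED, DONE }

    // Survives simulated "crashes", like a ZooKeeper journal node would.
    static final Map<String, State> journal = new HashMap<>();

    // Run (or resume) the transaction; stop early after failAfterSteps steps
    // to simulate an exception, an abort, or a server restart.
    static State execute(String txnId, int failAfterSteps) {
        State s = journal.getOrDefault(txnId, State.STARTED);
        int steps = 0;
        while (s != State.DONE) {
            if (steps++ == failAfterSteps) return s;      // simulated crash
            switch (s) {
                case STARTED:          s = State.REGIONS_OFFLINED; break; // 1. offline A and B
                case REGIONS_OFFLINED: s = State.FILES_MOVED;      break; // 2. move files to C's dir in HDFS
                case FILES_MOVED:      s = State.META_UPDATED;     break; // 3. add C to .META.
                case META_UPDATED:     s = State.REGION_ASSIGNED;  break; // 4. assign C
                default:               s = State.DONE;             break;
            }
            journal.put(txnId, s);                        // journal each step
        }
        return s;
    }

    public static void main(String[] args) {
        // Crash after two steps, then redo: the transaction resumes from the
        // journaled state rather than restarting, and never rolls back.
        System.out.println("after crash: " + execute("merge-A-B", 2));
        System.out.println("after redo:  " + execute("merge-A-B", Integer.MAX_VALUE));
    }
}
```

A redo picks up at the last journaled state, which is why the HDFS file moves must be safe to re-attempt but need no undo path.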

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8060) "Num compacting KVs" diverges from "num compacted KVs" over time

2013-03-09 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598172#comment-13598172
 ] 

Lars Hofhansl commented on HBASE-8060:
--

Wow... That's pretty far off. Does this happen only in trunk/0.95?

> "Num compacting KVs" diverges from "num compacted KVs" over time
> 
>
> Key: HBASE-8060
> URL: https://issues.apache.org/jira/browse/HBASE-8060
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, UI
>Affects Versions: 0.95.0, 0.96.0
>Reporter: Andrew Purtell
> Attachments: screenshot.png
>
>
> I have been running what amounts to an ingestion test for a day or so. This 
> is an all-in-one cluster launched with './bin/hbase master start' from 
> sources. In the RS stats on the master UI, "num compacting KVs" has 
> diverged from "num compacted KVs" even though compaction has completed 
> from the perspective of selection and no compaction tasks are running on the 
> RS. I think this could be confusing: is compaction happening or not?
> Or maybe I'm misunderstanding what this is supposed to show?



[jira] [Commented] (HBASE-7403) Online Merge

2013-03-09 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598173#comment-13598173
 ] 

chunhui shen commented on HBASE-7403:
-

bq. Can we consider metrics so that the region with less load is moved onto 
the region server where the region with more load resides?
That seems better.

"-1 site. The patch appears to cause mvn site goal to fail."
I couldn't find the reason for this warning...



[jira] [Commented] (HBASE-7403) Online Merge

2013-03-09 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598168#comment-13598168
 ] 

Ted Yu commented on HBASE-7403:
---

{code}
+if (!onSameRS) {
+  // Move region_b to region a's location
+  RegionPlan regionPlan = new RegionPlan(region_b, region_b_location,
+  region_a_location);
{code}
Can we consider metrics, so that the region with less load is moved onto the 
region server where the region with more load resides?
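One possible shape for that heuristic: compare a per-region load metric and move the lighter region onto the heavier region's server. This is purely illustrative; the classes below are simplified stand-ins, not HBase's actual RegionPlan or load-metrics API.

```java
// Illustrative sketch of picking which region to move before a merge:
// move the lower-load region onto the server hosting the higher-load one.
// These classes are simplified stand-ins, not HBase's actual API.
public class MergeMoveChoice {
    static class Region {
        final String name, server;
        final long requestCount;   // stand-in load metric
        Region(String name, String server, long requestCount) {
            this.name = name; this.server = server; this.requestCount = requestCount;
        }
    }

    static class Plan {
        final Region toMove;
        final String destination;
        Plan(Region toMove, String destination) {
            this.toMove = toMove; this.destination = destination;
        }
    }

    static Plan choosePlan(Region a, Region b) {
        // Moving the lighter region disturbs fewer in-flight requests.
        return (a.requestCount <= b.requestCount)
                ? new Plan(a, b.server)
                : new Plan(b, a.server);
    }

    public static void main(String[] args) {
        Region a = new Region("region_a", "rs1", 100);
        Region b = new Region("region_b", "rs2", 5000);
        Plan p = choosePlan(a, b);
        System.out.println("move " + p.toMove.name + " to " + p.destination);
    }
}
```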



[jira] [Commented] (HBASE-7904) Upgrade hadoop 2.0 dependency to 2.0.4-alpha

2013-03-09 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598167#comment-13598167
 ] 

Ted Yu commented on HBASE-7904:
---

I couldn't reproduce TestRowCounter failure on Mac.

Will investigate more.

> Upgrade hadoop 2.0 dependency to 2.0.4-alpha
> 
>
> Key: HBASE-7904
> URL: https://issues.apache.org/jira/browse/HBASE-7904
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.95.0
>
> Attachments: 7904.txt, 7904-v2.txt, 7904-v4-hadoop-2.0.txt, 
> 7904-v4.txt, 7904-v4.txt, 7904-v5.txt, hbase-7904-v3.txt
>
>
> 2.0.3-alpha has been released.
> We should upgrade the dependency.



[jira] [Commented] (HBASE-7403) Online Merge

2013-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598165#comment-13598165
 ] 

Hadoop QA commented on HBASE-7403:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12572947/hbase-7403-trunkv19.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 3 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces lines longer than 
100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.backup.TestHFileArchiving
  org.apache.hadoop.hbase.client.TestHTableMultiplexer

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.client.TestMultiParallel.testBatchWithDelete(TestMultiParallel.java:343)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4744//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4744//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4744//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4744//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4744//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4744//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4744//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4744//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4744//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4744//console

This message is automatically generated.


[jira] [Commented] (HBASE-7403) Online Merge

2013-03-09 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598164#comment-13598164
 ] 

chunhui shen commented on HBASE-7403:
-

bq. If one (or both) of the regions were receiving non-trivial load prior to 
this action, would client(s) be affected?
Yes, the regions would be out of service for a short time; it is equivalent to 
moving a region, e.g. when the balancer moves a region.

Compared with split, the region merge operation additionally incurs the 
overhead of moving one region.



[jira] [Commented] (HBASE-7403) Online Merge

2013-03-09 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598161#comment-13598161
 ] 

Ted Yu commented on HBASE-7403:
---

bq. master move the regions together(on the regionserver)
What's the implication of the above action?
If one (or both) of the regions were receiving non-trivial load prior to this 
action, would client(s) be affected?



[jira] [Commented] (HBASE-8059) Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default profile

2013-03-09 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598156#comment-13598156
 ] 

ramkrishna.s.vasudevan commented on HBASE-8059:
---

I think I am late to this. Still +1, Ted.

> Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default 
> profile
> -
>
> Key: HBASE-8059
> URL: https://issues.apache.org/jira/browse/HBASE-8059
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 7904-v4-hadoop-2.0.txt, 8059-v1.txt, 8059-v2.txt, 
> hadoop-2.0-template-pom.xml
>
>
> Over in HBASE-7904, I produced a patch which uses hadoop-2.0 as the default 
> profile.
> However, when QA tries to validate compilation against hadoop-2.0:
> {code}
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project org.apache.hbase:hbase:0.97-SNAPSHOT 
> (/Users/tyu/trunk/pom.xml) has 2 errors
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:jar with value '${compat.module}' does not 
> match a valid id pattern. @ line 979, column 21
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does 
> not match a valid id pattern. @ line 984, column 21
> {code}
> We should enhance test-patch.sh so that a patch with hadoop-2.0 as the 
> default profile doesn't go through the validation step against hadoop-2.0.
> Ideally, the changes in the various pom.xml files should be saved as a 
> template. The user can specify the hadoop profile to test against in the 
> header of the patch file, e.g.:
> {code}
> This patch uses hadoop-2.0 as default profile
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
> {code}



[jira] [Updated] (HBASE-7403) Online Merge

2013-03-09 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-7403:


Attachment: hbase-7403-trunkv19.patch

Patch v19 re-constructs the implementation of online merge.

Since we have atomic mutations in META, we can implement 
RegionMergeTransaction just like SplitTransaction.

It is much clearer.


The current process of merging regions:

a. The client sends an RPC (dispatch merging regions) to the master
b. The master moves the regions together (onto the same regionserver)
c. The master sends an RPC (merge regions) to the regionserver
d. DispatchMergingRegionHandler on the master processes the request 
asynchronously
e. The regionserver executes the region merge transaction in a thread pool
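The flow in steps a through e can be sketched with two executor pools standing in for the master's handler pool and the regionserver's merge pool. All names below are hypothetical; the real HBase RPCs and handlers are elided.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the dispatch flow: client RPC -> master handler (async) ->
// regionserver merge transaction in a thread pool. Names are illustrative.
public class DispatchMergeSketch {
    static final ExecutorService masterHandlerPool = Executors.newSingleThreadExecutor();
    static final ExecutorService regionServerPool  = Executors.newSingleThreadExecutor();

    // Steps b-e: the master handler moves the regions together, then asks
    // that regionserver to execute the merge transaction.
    static Future<String> dispatchMergingRegions(String regionA, String regionB) {
        return masterHandlerPool.submit(() -> {
            String server = "rs1";  // step b: co-locate both regions here
            // steps c + e: the regionserver runs the merge in its own pool
            return regionServerPool
                    .submit(() -> "merged " + regionA + "+" + regionB + " on " + server)
                    .get();
        });
    }

    public static void main(String[] args) throws Exception {
        // Step a: the client's RPC; the Future models the async handling (step d).
        System.out.println(dispatchMergingRegions("regionA", "regionB").get());
        masterHandlerPool.shutdown();
        regionServerPool.shutdown();
    }
}
```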



[jira] [Commented] (HBASE-8059) Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default profile

2013-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598150#comment-13598150
 ] 

Hudson commented on HBASE-8059:
---

Integrated in HBase-TRUNK #3940 (See 
[https://builds.apache.org/job/HBase-TRUNK/3940/])
HBASE-8059 Enhance test-patch.sh so that patch can specify hadoop-2.0 as 
the default profile, part 1 (Ted Yu) (Revision 1454777)

 Result = SUCCESS
tedyu : 
Files : 
* /hbase/trunk/dev-support/test-patch.sh


> Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default 
> profile
> -
>
> Key: HBASE-8059
> URL: https://issues.apache.org/jira/browse/HBASE-8059
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 7904-v4-hadoop-2.0.txt, 8059-v1.txt, 8059-v2.txt, 
> hadoop-2.0-template-pom.xml
>
>
> Over in HBASE-7904, I produced a patch which uses hadoop-2.0 as the default 
> profile.
> However, when QA tries to validate compilation against hadoop-2.0:
> {code}
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project org.apache.hbase:hbase:0.97-SNAPSHOT 
> (/Users/tyu/trunk/pom.xml) has 2 errors
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:jar with value '${compat.module}' does not 
> match a valid id pattern. @ line 979, column 21
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does 
> not match a valid id pattern. @ line 984, column 21
> {code}
> We should enhance test-patch.sh so that a patch with hadoop-2.0 as the 
> default profile doesn't go through the validation step against hadoop-2.0.
> Ideally, the changes in the various pom.xml files should be saved as a 
> template. The user can specify the hadoop profile to test against in the 
> header of the patch file.
> e.g.
> {code}
> This patch uses hadoop-2.0 as default profile
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
> {code}
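The proposed check can be illustrated with a small sketch: scan the header of the patch (the lines before the first "Index:" line) for the marker sentence declaring the default profile. This is a hypothetical Java mirror of what the shell logic in test-patch.sh would do; the `PatchHeaderCheck` class and the exact marker string are assumptions taken from the example above.

```java
import java.util.List;

// Illustrative header check: does the patch declare hadoop-2.0 as its
// default profile? If so, the separate hadoop-2.0 compile validation
// step can be skipped.
public class PatchHeaderCheck {
    static final String MARKER = "hadoop-2.0 as default profile";

    static boolean declaresHadoop2(List<String> patchLines) {
        // Only the header (lines before the first "Index:" line) is examined.
        for (String line : patchLines) {
            if (line.startsWith("Index:")) break;
            if (line.contains(MARKER)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> patch = List.of(
            "This patch uses hadoop-2.0 as default profile",
            "Index: hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java");
        System.out.println(declaresHadoop2(patch)); // true
    }
}
```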



[jira] [Updated] (HBASE-8060) "Num compacting KVs" diverges from "num compacted KVs" over time

2013-03-09 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-8060:
--

Affects Version/s: 0.96.0
   0.95.0

> "Num compacting KVs" diverges from "num compacted KVs" over time
> 
>
> Key: HBASE-8060
> URL: https://issues.apache.org/jira/browse/HBASE-8060
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, UI
>Affects Versions: 0.95.0, 0.96.0
>Reporter: Andrew Purtell
> Attachments: screenshot.png
>
>
> I have been running what amounts to an ingestion test for a day or so. This 
> is an all-in-one cluster launched with './bin/hbase master start' from 
> sources. In the RS stats on the master UI, the "num compacting KVs" has 
> diverged from "num compacted KVs" even though compaction has completed 
> from the perspective of selection and no compaction tasks are running on the 
> RS. I think this could be confusing -- is compaction happening or not?
> Or maybe I'm misunderstanding what this is supposed to show?



[jira] [Updated] (HBASE-8060) "Num compacting KVs" diverges from "num compacted KVs" over time

2013-03-09 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-8060:
--

Attachment: screenshot.png

> "Num compacting KVs" diverges from "num compacted KVs" over time
> 
>
> Key: HBASE-8060
> URL: https://issues.apache.org/jira/browse/HBASE-8060
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, UI
>Reporter: Andrew Purtell
> Attachments: screenshot.png
>
>
> I have been running what amounts to an ingestion test for a day or so. This 
> is an all-in-one cluster launched with './bin/hbase master start' from 
> sources. In the RS stats on the master UI, the "num compacting KVs" has 
> diverged from "num compacted KVs" even though compaction has completed 
> from the perspective of selection and no compaction tasks are running on the 
> RS. I think this could be confusing -- is compaction happening or not?
> Or maybe I'm misunderstanding what this is supposed to show?



[jira] [Created] (HBASE-8060) "Num compacting KVs" diverges from "num compacted KVs" over time

2013-03-09 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-8060:
-

 Summary: "Num compacting KVs" diverges from "num compacted KVs" 
over time
 Key: HBASE-8060
 URL: https://issues.apache.org/jira/browse/HBASE-8060
 Project: HBase
  Issue Type: Bug
  Components: Compaction, UI
Reporter: Andrew Purtell


I have been running what amounts to an ingestion test for a day or so. This is 
an all-in-one cluster launched with './bin/hbase master start' from sources. In 
the RS stats on the master UI, the "num compacting KVs" has diverged from "num 
compacted KVs" even though compaction has completed from the perspective of 
selection, and no compaction tasks are running on the RS. I think this could be 
confusing -- is compaction happening or not?

Or maybe I'm misunderstanding what this is supposed to show?



[jira] [Commented] (HBASE-7904) Upgrade hadoop 2.0 dependency to 2.0.4-alpha

2013-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598146#comment-13598146
 ] 

Hadoop QA commented on HBASE-7904:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12572944/7904-v4-hadoop-2.0.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
18 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestRowCounter
  org.apache.hadoop.hbase.master.TestDistributedLogSplitting

 {color:red}-1 core zombie tests{color}.  There are 5 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.testSimpleLoad(TestLoadIncrementalHFiles.java:86)
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesSplitRecovery.testSplitWhileBulkLoadPhase(TestLoadIncrementalHFilesSplitRecovery.java:298)
at 
org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testLogRollOnDatanodeDeath(TestLogRolling.java:389)
at 
org.apache.hadoop.hbase.regionserver.TestJoinedScanners.testJoinedScanners(TestJoinedScanners.java:96)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4743//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4743//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4743//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4743//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4743//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4743//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4743//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4743//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4743//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4743//console

This message is automatically generated.

> Upgrade hadoop 2.0 dependency to 2.0.4-alpha
> 
>
> Key: HBASE-7904
> URL: https://issues.apache.org/jira/browse/HBASE-7904
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.95.0
>
> Attachments: 7904.txt, 7904-v2.txt, 7904-v4-hadoop-2.0.txt, 
> 7904-v4.txt, 7904-v4.txt, 7904-v5.txt, hbase-7904-v3.txt
>
>
> 2.0.3-alpha has been released.
> We should upgrade the dependency.



[jira] [Updated] (HBASE-8059) Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default profile

2013-03-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-8059:
--

Attachment: hadoop-2.0-template-pom.xml

Here is a template for switching the default profile to hadoop 2.0 in pom.xml files.

I want to get opinions on whether this should be embedded in 
dev-support/test-patch.sh or checked in under dev-support as a template.

> Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default 
> profile
> -
>
> Key: HBASE-8059
> URL: https://issues.apache.org/jira/browse/HBASE-8059
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 7904-v4-hadoop-2.0.txt, 8059-v1.txt, 8059-v2.txt, 
> hadoop-2.0-template-pom.xml
>
>
> Over in HBASE-7904, I produced a patch which uses hadoop-2.0 as the default 
> profile.
> However, when QA tries to validate compilation against hadoop-2.0:
> {code}
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project org.apache.hbase:hbase:0.97-SNAPSHOT 
> (/Users/tyu/trunk/pom.xml) has 2 errors
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:jar with value '${compat.module}' does not 
> match a valid id pattern. @ line 979, column 21
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does 
> not match a valid id pattern. @ line 984, column 21
> {code}
> We should enhance test-patch.sh so that a patch with hadoop-2.0 as the 
> default profile doesn't go through the validation step against hadoop-2.0.
> Ideally, the changes in the various pom.xml files should be saved as a 
> template. The user can specify the hadoop profile to test against in the 
> header of the patch file.
> e.g.
> {code}
> This patch uses hadoop-2.0 as default profile
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
> {code}



[jira] [Updated] (HBASE-7904) Upgrade hadoop 2.0 dependency to 2.0.4-alpha

2013-03-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7904:
--

Attachment: 7904-v4-hadoop-2.0.txt

> Upgrade hadoop 2.0 dependency to 2.0.4-alpha
> 
>
> Key: HBASE-7904
> URL: https://issues.apache.org/jira/browse/HBASE-7904
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.95.0
>
> Attachments: 7904.txt, 7904-v2.txt, 7904-v4-hadoop-2.0.txt, 
> 7904-v4.txt, 7904-v4.txt, 7904-v5.txt, hbase-7904-v3.txt
>
>
> 2.0.3-alpha has been released.
> We should upgrade the dependency.



[jira] [Updated] (HBASE-7904) Upgrade hadoop 2.0 dependency to 2.0.4-alpha

2013-03-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7904:
--

Attachment: (was: 7904-v4-hadoop-2.0.txt)

> Upgrade hadoop 2.0 dependency to 2.0.4-alpha
> 
>
> Key: HBASE-7904
> URL: https://issues.apache.org/jira/browse/HBASE-7904
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.95.0
>
> Attachments: 7904.txt, 7904-v2.txt, 7904-v4-hadoop-2.0.txt, 
> 7904-v4.txt, 7904-v4.txt, 7904-v5.txt, hbase-7904-v3.txt
>
>
> 2.0.3-alpha has been released.
> We should upgrade the dependency.



[jira] [Updated] (HBASE-8059) Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default profile

2013-03-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-8059:
--

Attachment: 8059-v2.txt

> Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default 
> profile
> -
>
> Key: HBASE-8059
> URL: https://issues.apache.org/jira/browse/HBASE-8059
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 7904-v4-hadoop-2.0.txt, 8059-v1.txt, 8059-v2.txt
>
>
> Over in HBASE-7904, I produced a patch which uses hadoop-2.0 as the default 
> profile.
> However, when QA tries to validate compilation against hadoop-2.0:
> {code}
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project org.apache.hbase:hbase:0.97-SNAPSHOT 
> (/Users/tyu/trunk/pom.xml) has 2 errors
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:jar with value '${compat.module}' does not 
> match a valid id pattern. @ line 979, column 21
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does 
> not match a valid id pattern. @ line 984, column 21
> {code}
> We should enhance test-patch.sh so that a patch with hadoop-2.0 as the 
> default profile doesn't go through the validation step against hadoop-2.0.
> Ideally, the changes in the various pom.xml files should be saved as a 
> template. The user can specify the hadoop profile to test against in the 
> header of the patch file.
> e.g.
> {code}
> This patch uses hadoop-2.0 as default profile
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
> {code}



[jira] [Commented] (HBASE-8056) allow StoreScanner to drop deletes from some part of the compaction range

2013-03-09 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598129#comment-13598129
 ] 

chunhui shen commented on HBASE-8056:
-

Could you make a new function in StoreScanner to hold the added code and mark 
that it is used in stripe compactions?

I think that would be clearer for the reader.

The change looks fine to me.

> allow StoreScanner to drop deletes from some part of the compaction range
> -
>
> Key: HBASE-8056
> URL: https://issues.apache.org/jira/browse/HBASE-8056
> Project: HBase
>  Issue Type: Task
>  Components: Compaction, Scanners
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HBASE-8056-v0.patch
>
>
> Allow StoreScanner to drop deletes from some part of the compaction range. 
> Needed for stripe compactor, and maybe level compactor (although at present I 
> am not sure how level compactor will drop deletes at all).



[jira] [Commented] (HBASE-8059) Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default profile

2013-03-09 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598128#comment-13598128
 ] 

chunhui shen commented on HBASE-8059:
-

Looks reasonable to me.

Would it be better to make the header shorter? e.g. hadoop.profile:2.0

+1

> Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default 
> profile
> -
>
> Key: HBASE-8059
> URL: https://issues.apache.org/jira/browse/HBASE-8059
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 7904-v4-hadoop-2.0.txt, 8059-v1.txt
>
>
> Over in HBASE-7904, I produced a patch which uses hadoop-2.0 as the default 
> profile.
> However, when QA tries to validate compilation against hadoop-2.0:
> {code}
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project org.apache.hbase:hbase:0.97-SNAPSHOT 
> (/Users/tyu/trunk/pom.xml) has 2 errors
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:jar with value '${compat.module}' does not 
> match a valid id pattern. @ line 979, column 21
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does 
> not match a valid id pattern. @ line 984, column 21
> {code}
> We should enhance test-patch.sh so that a patch with hadoop-2.0 as the 
> default profile doesn't go through the validation step against hadoop-2.0.
> Ideally, the changes in the various pom.xml files should be saved as a 
> template. The user can specify the hadoop profile to test against in the 
> header of the patch file.
> e.g.
> {code}
> This patch uses hadoop-2.0 as default profile
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
> {code}



[jira] [Comment Edited] (HBASE-8059) Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default profile

2013-03-09 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598128#comment-13598128
 ] 

chunhui shen edited comment on HBASE-8059 at 3/10/13 1:43 AM:
--

Looks reasonable to me.

Would it be better to make the header shorter? e.g. hadoop.profile=2.0

+1

  was (Author: zjushch):
Reasonable to me

Is it better that make the header shortter?  e.g. hadoop.profile:2.0

+1
  
> Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default 
> profile
> -
>
> Key: HBASE-8059
> URL: https://issues.apache.org/jira/browse/HBASE-8059
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 7904-v4-hadoop-2.0.txt, 8059-v1.txt
>
>
> Over in HBASE-7904, I produced a patch which uses hadoop-2.0 as the default 
> profile.
> However, when QA tries to validate compilation against hadoop-2.0:
> {code}
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project org.apache.hbase:hbase:0.97-SNAPSHOT 
> (/Users/tyu/trunk/pom.xml) has 2 errors
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:jar with value '${compat.module}' does not 
> match a valid id pattern. @ line 979, column 21
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does 
> not match a valid id pattern. @ line 984, column 21
> {code}
> We should enhance test-patch.sh so that a patch with hadoop-2.0 as the 
> default profile doesn't go through the validation step against hadoop-2.0.
> Ideally, the changes in the various pom.xml files should be saved as a 
> template. The user can specify the hadoop profile to test against in the 
> header of the patch file.
> e.g.
> {code}
> This patch uses hadoop-2.0 as default profile
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
> {code}



[jira] [Commented] (HBASE-8055) Potentially missing null check in StoreFile.Reader.getMaxTimestamp()

2013-03-09 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598121#comment-13598121
 ] 

Lars Hofhansl commented on HBASE-8055:
--

Oh, and one more comment - just so it's all here: if a table disable/re-enable 
fixed the problem in HBASE-7581, that would point to a problem during bulk load.


> Potentially missing null check in StoreFile.Reader.getMaxTimestamp()
> 
>
> Key: HBASE-8055
> URL: https://issues.apache.org/jira/browse/HBASE-8055
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 0.95.0, 0.98.0, 0.94.7
>
>
> We just ran into a scenario where we got the following NPE:
> {code}
> 13/03/08 11:52:13 INFO regionserver.Store: Successfully loaded store file 
> file:/tmp/hfile-import-00Dxx001lmJ-09CxxJm/COLFAM/file09CxxJm
>  into store COLFAM (new location: 
> file:/tmp/localhbase/data/SFDC.ENTITY_HISTORY_ARCHIVE/aeacee43aaf1748c6e60b9cc12bcac3d/COLFAM/120d683414e44478984b50ddd79b6826)
> 13/03/08 11:52:13 ERROR regionserver.HRegionServer: Failed openScanner
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.getMaxTimestamp(StoreFile.java:1702)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.requestSeek(StoreFileScanner.java:301)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:127)
> at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2070)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3383)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1628)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1620)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1596)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2342)
> at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)
> 13/03/08 11:52:14 ERROR regionserver.HRegionServer: Failed openScanner
> {code}
> It's not clear yet how we got into this situation (we are generating HFiles 
> via HFileOutputFormat and bulk loading those). It seems this can only happen 
> when the HFile itself is corrupted.
> Looking at the code, though, I see this is the only place where we access 
> StoreFile.reader.timeRangeTracker without a null check. So it appears we are 
> expecting scenarios in which it can be null.
> A simple fix would be:
> {code}
> public long getMaxTimestamp() {
>   return timeRangeTracker == null ? Long.MAX_VALUE : 
> timeRangeTracker.maximumTimestamp;
> }
> {code}
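The proposed guard can be shown working in a self-contained snippet. `TimeRangeTracker` here is a simplified stand-in for the HBase class, and `MaxTimestampGuard` is an illustrative wrapper, not the real `StoreFile.Reader`: the point is only that a null tracker falls back to `Long.MAX_VALUE`, so the file is never wrongly excluded by timestamp-based filtering.

```java
// Self-contained illustration of the proposed null guard: if the
// time-range metadata was never loaded (tracker is null), fall back to
// Long.MAX_VALUE rather than throwing an NPE.
public class MaxTimestampGuard {
    // Simplified stand-in for org.apache.hadoop.hbase.regionserver.TimeRangeTracker.
    static class TimeRangeTracker {
        long maximumTimestamp;
        TimeRangeTracker(long max) { this.maximumTimestamp = max; }
    }

    static long getMaxTimestamp(TimeRangeTracker timeRangeTracker) {
        return timeRangeTracker == null ? Long.MAX_VALUE
                                        : timeRangeTracker.maximumTimestamp;
    }

    public static void main(String[] args) {
        System.out.println(getMaxTimestamp(new TimeRangeTracker(1234L))); // 1234
        System.out.println(getMaxTimestamp(null)); // 9223372036854775807
    }
}
```

Returning `Long.MAX_VALUE` is the conservative choice for a *maximum* timestamp: a scanner that filters by time range will still consider the file rather than silently skip it.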



[jira] [Commented] (HBASE-8055) Potentially missing null check in StoreFile.Reader.getMaxTimestamp()

2013-03-09 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598120#comment-13598120
 ] 

Lars Hofhansl commented on HBASE-8055:
--

I spent some time looking through the code. I can't see where this goes wrong.
Checked the following:
* bulk load will open the reader in all code paths (if the open was missing the 
metadata would not have been loaded)
* in all circumstances the StoreFile's metadata is written. I had initially 
suspected the ad-hoc splitting in bulk load, but that code copies the metadata 
from the original file.
* record writer in HFileOutputFormat writes the metadata

Not sure where else to look. In any case, we should either remove all the other 
null-checks for timeRangeTracker or add the same null-check to the only method 
where this is not done.
Even then, I'd be worried about how this came about.


> Potentially missing null check in StoreFile.Reader.getMaxTimestamp()
> 
>
> Key: HBASE-8055
> URL: https://issues.apache.org/jira/browse/HBASE-8055
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 0.95.0, 0.98.0, 0.94.7
>
>
> We just ran into a scenario where we got the following NPE:
> {code}
> 13/03/08 11:52:13 INFO regionserver.Store: Successfully loaded store file 
> file:/tmp/hfile-import-00Dxx001lmJ-09CxxJm/COLFAM/file09CxxJm
>  into store COLFAM (new location: 
> file:/tmp/localhbase/data/SFDC.ENTITY_HISTORY_ARCHIVE/aeacee43aaf1748c6e60b9cc12bcac3d/COLFAM/120d683414e44478984b50ddd79b6826)
> 13/03/08 11:52:13 ERROR regionserver.HRegionServer: Failed openScanner
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.getMaxTimestamp(StoreFile.java:1702)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.requestSeek(StoreFileScanner.java:301)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:127)
> at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2070)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3383)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1628)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1620)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1596)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2342)
> at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)
> 13/03/08 11:52:14 ERROR regionserver.HRegionServer: Failed openScanner
> {code}
> It's not clear yet how we got into this situation (we are generating HFiles 
> via HFileOutputFormat and bulk loading those). It seems this can only happen 
> when the HFile itself is corrupted.
> Looking at the code, though, I see this is the only place where we access 
> StoreFile.reader.timeRangeTracker without a null check. So it appears we are 
> expecting scenarios in which it can be null.
> A simple fix would be:
> {code}
> public long getMaxTimestamp() {
>   return timeRangeTracker == null ? Long.MAX_VALUE : 
> timeRangeTracker.maximumTimestamp;
> }
> {code}



[jira] [Commented] (HBASE-7624) Backport HBASE-5359 and HBASE-7596 to 0.94

2013-03-09 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598119#comment-13598119
 ] 

Lars Hofhansl commented on HBASE-7624:
--

Looks good to me.

[~jeffreyz], any chance to run this through the full 0.94 test suite, just to 
be safe? (mvn clean -PrunAllTests -Dmaven.test.redirectTestOutputToFile=true 
install assembly:single -DskipITs -Prelease)

If not, no big deal; I'll run it on one of our machines at work.


> Backport HBASE-5359 and HBASE-7596 to 0.94
> --
>
> Key: HBASE-7624
> URL: https://issues.apache.org/jira/browse/HBASE-7624
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Jeffrey Zhong
> Fix For: 0.94.7
>
> Attachments: hbase-7624_0.patch
>
>
> Both HBASE-5359 and HBASE-7596 are useful and should be added to 0.94.



[jira] [Commented] (HBASE-7581) TestAccessController depends on the execution order

2013-03-09 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598117#comment-13598117
 ] 

Lars Hofhansl commented on HBASE-7581:
--

Specifically, I think we papered over an actual bug here.


> TestAccessController depends on the execution order
> ---
>
> Key: HBASE-7581
> URL: https://issues.apache.org/jira/browse/HBASE-7581
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
> Fix For: 0.95.0, 0.94.5
>
> Attachments: 7581.v1.patch
>
>




[jira] [Updated] (HBASE-7987) Snapshot Manifest file instead of multiple empty files

2013-03-09 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-7987:
---

Attachment: HBASE-7987.sketch

Attached a sketch to show the code impact of this patch.
*Please DO NOT even try to apply this patch!*

Other than these "main code" changes, there are all the unit tests to fix, 
which rely strongly on the fs layout...

> Snapshot Manifest file instead of multiple empty files
> --
>
> Key: HBASE-7987
> URL: https://issues.apache.org/jira/browse/HBASE-7987
> Project: HBase
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Matteo Bertozzi
> Attachments: HBASE-7987.sketch
>
>
> Currently taking a snapshot means creating one empty file for each file in 
> the source table directory, plus copying the .regioninfo file for each 
> region, the table descriptor file and a snapshotInfo file.
> During the restore or snapshot verification we traverse the filesystem 
> (fs.listStatus()) to find the snapshot files, and we open the .regioninfo 
> files to get the information.
> To avoid hammering the NameNode and having lots of empty files, we can use a 
> manifest file that contains the list of files and the information that we 
> need.
> To keep the RS parallelism that we have, each RS can write its own manifest.
> {code}
> message SnapshotDescriptor {
>   required string name = 1;
>   optional string table = 2;
>   optional int64 creationTime = 3;
>   optional Type type = 4;
>   optional int32 version = 5;
> }
> message SnapshotRegionManifest {
>   optional int32 version = 1;
>   required RegionInfo regionInfo = 2;
>   repeated FamilyFiles familyFiles = 3;
>   message StoreFile {
>     required string name = 1;
>     optional Reference reference = 2;
>   }
>   message FamilyFiles {
>     required bytes familyName = 1;
>     repeated StoreFile storeFiles = 2;
>   }
> }
> {code}
> {code}
> /hbase/.snapshot/<snapshotName>/
> /hbase/.snapshot/<snapshotName>/snapshotInfo
> /hbase/.snapshot/<snapshotName>/<tableName>/
> /hbase/.snapshot/<snapshotName>/<tableName>/tableInfo
> /hbase/.snapshot/<snapshotName>/<tableName>/regionManifest(.n)
> {code}
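The per-RS manifest idea above can be sketched minimally. This is an illustrative Python model, with JSON dicts standing in for the SnapshotRegionManifest protobuf; the function name and field values are assumptions, not the eventual HBase API:

```python
def build_region_manifest(region_info, family_files):
    """Build one manifest entry per region, written by the hosting RS.

    JSON stands in for the protobuf sketch; field names mirror that
    message. One manifest per RS replaces one empty marker file per
    store file, so the NameNode sees far fewer filesystem objects.
    """
    return {
        "version": 1,
        "regionInfo": region_info,
        "familyFiles": [
            {"familyName": fam, "storeFiles": [{"name": f} for f in files]}
            for fam, files in sorted(family_files.items())
        ],
    }

# One region, two column families, three store files -> one manifest entry
# instead of three empty marker files (region name is made up):
manifest = build_region_manifest(
    {"encodedName": "1588230740"},
    {"cf1": ["hfile-a", "hfile-b"], "cf2": ["hfile-c"]},
)
```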



[jira] [Updated] (HBASE-8059) Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default profile

2013-03-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-8059:
--

Attachment: 8059-v1.txt

Patch v1 searches for a fixed sentence in the patch file.
If the patch is for hadoop 2.0, checkHadoop20Compile would return 0 immediately.

This would unblock patch testing for HBASE-7904.

> Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default 
> profile
> -
>
> Key: HBASE-8059
> URL: https://issues.apache.org/jira/browse/HBASE-8059
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 7904-v4-hadoop-2.0.txt, 8059-v1.txt
>
>
> Over in HBASE-7904, I produced a patch which uses hadoop-2.0 as the default 
> profile.
> However, when QA tries to validate compilation against hadoop-2.0:
> {code}
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project org.apache.hbase:hbase:0.97-SNAPSHOT 
> (/Users/tyu/trunk/pom.xml) has 2 errors
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:jar with value '${compat.module}' does not 
> match a valid id pattern. @ line 979, column 21
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does 
> not match a valid id pattern. @ line 984, column 21
> {code}
> We should enhance test-patch.sh so that a patch with hadoop-2.0 as the 
> default profile doesn't go through the validation step against hadoop-2.0.
> Ideally, the changes in the various pom.xml files should be saved as a 
> template. The user can specify the hadoop profile to test against in the 
> header of the patch file.
> e.g.
> {code}
> This patch uses hadoop-2.0 as default profile
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
> {code}
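The fixed-sentence check described above can be sketched as follows. A hedged Python sketch: the sentinel string is taken from the example in the description, while the function name and the exact skip behavior are assumptions about what test-patch.sh would do:

```python
SENTINEL = "This patch uses hadoop-2.0 as default profile"

def patch_declares_hadoop20(patch_text):
    """Return True if the patch header contains the fixed sentinel sentence.

    The "header" is taken to be everything before the first 'Index:' line,
    matching the example patch layout in the issue. When this returns True,
    the hadoop-2.0 compile validation step would be skipped (i.e.
    checkHadoop20Compile returns 0 immediately).
    """
    header = []
    for line in patch_text.splitlines():
        if line.startswith("Index:"):
            break
        header.append(line)
    return any(SENTINEL in line for line in header)
```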



[jira] [Commented] (HBASE-8040) Race condition in AM after HBASE-7521 (only 0.94)

2013-03-09 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598046#comment-13598046
 ] 

ramkrishna.s.vasudevan commented on HBASE-8040:
---

Don't want to, Lars.  But if the patch is ok, let's commit this.

> Race condition in AM after HBASE-7521 (only 0.94)
> -
>
> Key: HBASE-8040
> URL: https://issues.apache.org/jira/browse/HBASE-8040
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.6
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.94.6
>
> Attachments: HBASE-8040_1.patch, HBASE-8040.patch
>
>
> This is a problem that was introduced when we tried to solve HBASE-7521.
> https://issues.apache.org/jira/browse/HBASE-7521?focusedCommentId=13576083&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13576083
> See the above comment; exactly the same has happened.  Will come up with a 
> solution for the same.



[jira] [Updated] (HBASE-7961) truncate on disabled table should throw TableNotEnabledException.

2013-03-09 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-7961:
--

Attachment: HBASE-7961_94.patch

Patch for 0.94

> truncate on disabled table should throw TableNotEnabledException.
> -
>
> Key: HBASE-7961
> URL: https://issues.apache.org/jira/browse/HBASE-7961
> Project: HBase
>  Issue Type: Bug
>  Components: Admin
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.98.0, 0.94.7
>
> Attachments: HBASE-7961_94.patch, HBASE-7961.patch
>
>
> Presently, truncate on a disabled table deletes the existing table and 
> recreates it (ENABLED).
> The disable(table_name) call in truncate returns if the table is disabled, 
> without notifying the user.
> {code}
> def disable(table_name)
>   tableExists(table_name)
>   return if disabled?(table_name)
>   @admin.disableTable(table_name)
> end
> {code}
> One more thing: we call tableExists in disable(table_name) as well as 
> drop(table_name), which is unnecessary.
> Anyway, the HTable object creation below will check whether the table exists.
> {code}
> h_table = org.apache.hadoop.hbase.client.HTable.new(conf, table_name)
> {code}
> We can change it to 
> {code}
>   h_table = org.apache.hadoop.hbase.client.HTable.new(conf, table_name)
>   table_description = h_table.getTableDescriptor()
>   yield 'Disabling table...' if block_given?
>   @admin.disableTable(table_name)
>   yield 'Dropping table...' if block_given?
>   @admin.deleteTable(table_name)
>   yield 'Creating table...' if block_given?
>   @admin.createTable(table_description)
> {code}
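The requested fail-fast behavior can be sketched in Python. This is an illustrative model with a toy stand-in `admin` object, not the real HBaseAdmin or shell API:

```python
class TableNotEnabledException(Exception):
    pass

class FakeAdmin:
    """Toy stand-in for the admin object, only to exercise the sketch."""
    def __init__(self, disabled):
        self.disabled = disabled
        self.ops = []
    def is_disabled(self, table_name):
        return self.disabled
    def get_descriptor(self, table_name):
        return {"name": table_name}
    def disable_table(self, table_name):
        self.ops.append("disable")
    def delete_table(self, table_name):
        self.ops.append("delete")
    def create_table(self, descriptor):
        self.ops.append("create")

def truncate(admin, table_name):
    # Fail fast instead of silently dropping a disabled table and
    # recreating it in the ENABLED state.
    if admin.is_disabled(table_name):
        raise TableNotEnabledException(table_name)
    descriptor = admin.get_descriptor(table_name)  # also implies existence
    admin.disable_table(table_name)
    admin.delete_table(table_name)
    admin.create_table(descriptor)
```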



[jira] [Updated] (HBASE-8030) znode path of online region servers is hard coded in rolling_restart.sh

2013-03-09 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-8030:
--

Attachment: HBASE-8030_94.patch

Patch for 0.94

> znode path of online region servers is hard coded in rolling_restart.sh
> 
>
> Key: HBASE-8030
> URL: https://issues.apache.org/jira/browse/HBASE-8030
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.98.0, 0.94.7
>
> Attachments: HBASE-8030_94.patch, HBASE-8030.patch
>
>
> The znode path of online region servers ($zparent/rs) is hard coded. We need 
> to use the configured value of zookeeper.znode.rs as the child path.
> {code}
> # gracefully restart all online regionservers
> online_regionservers=`$bin/hbase zkcli ls $zparent/rs 2>&1 | tail -1 | 
> sed "s/\[//" | sed "s/\]//"`
> {code}
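A sketch of deriving the znode path from configuration rather than hard-coding it. Python for illustration; the property names zookeeper.znode.parent and zookeeper.znode.rs are the standard HBase keys, while the function name and defaults are assumptions:

```python
import xml.etree.ElementTree as ET

def online_rs_znode(hbase_site_xml):
    """Derive the online-regionservers znode instead of hard-coding
    '$zparent/rs'.

    Reads zookeeper.znode.parent and zookeeper.znode.rs from an
    hbase-site.xml string; the fallback defaults are assumptions.
    """
    props = {}
    for prop in ET.fromstring(hbase_site_xml).iter("property"):
        name = prop.findtext("name")
        if name is not None:
            props[name] = prop.findtext("value") or ""
    parent = props.get("zookeeper.znode.parent", "/hbase")
    rs = props.get("zookeeper.znode.rs", "rs")
    return parent.rstrip("/") + "/" + rs
```

rolling_restart.sh could then pass this derived path to `hbase zkcli ls` instead of the literal `$zparent/rs`.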



[jira] [Commented] (HBASE-8040) Race condition in AM after HBASE-7521 (only 0.94)

2013-03-09 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598001#comment-13598001
 ] 

Lars Hofhansl commented on HBASE-8040:
--

Hmm... Now I feel good that I gave -0 for HBASE-7521 :-) 

Seriously, do you think this is severe enough to sink 0.94.6rc1?

> Race condition in AM after HBASE-7521 (only 0.94)
> -
>
> Key: HBASE-8040
> URL: https://issues.apache.org/jira/browse/HBASE-8040
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.6
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.94.6
>
> Attachments: HBASE-8040_1.patch, HBASE-8040.patch
>
>
> This is a problem that was introduced when we tried to solve HBASE-7521.
> https://issues.apache.org/jira/browse/HBASE-7521?focusedCommentId=13576083&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13576083
> See the above comment; exactly the same has happened.  Will come up with a 
> solution for the same.



[jira] [Updated] (HBASE-8059) Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default profile

2013-03-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-8059:
--

Attachment: 7904-v4-hadoop-2.0.txt

Patch from HBASE-7904 where hadoop-2.0 is the default profile

> Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default 
> profile
> -
>
> Key: HBASE-8059
> URL: https://issues.apache.org/jira/browse/HBASE-8059
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 7904-v4-hadoop-2.0.txt
>
>
> Over in HBASE-7904, I produced a patch which uses hadoop-2.0 as the default 
> profile.
> However, when QA tries to validate compilation against hadoop-2.0:
> {code}
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project org.apache.hbase:hbase:0.97-SNAPSHOT 
> (/Users/tyu/trunk/pom.xml) has 2 errors
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:jar with value '${compat.module}' does not 
> match a valid id pattern. @ line 979, column 21
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does 
> not match a valid id pattern. @ line 984, column 21
> {code}
> We should enhance test-patch.sh so that a patch with hadoop-2.0 as the 
> default profile doesn't go through the validation step against hadoop-2.0.
> Ideally, the changes in the various pom.xml files should be saved as a 
> template. The user can specify the hadoop profile to test against in the 
> header of the patch file.
> e.g.
> {code}
> This patch uses hadoop-2.0 as default profile
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
> {code}



[jira] [Updated] (HBASE-8059) Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default profile

2013-03-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-8059:
--

Fix Version/s: 0.98.0
   Issue Type: Improvement  (was: Bug)

> Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default 
> profile
> -
>
> Key: HBASE-8059
> URL: https://issues.apache.org/jira/browse/HBASE-8059
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Fix For: 0.98.0
>
>
> Over in HBASE-7904, I produced a patch which uses hadoop-2.0 as the default 
> profile.
> However, when QA tries to validate compilation against hadoop-2.0:
> {code}
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project org.apache.hbase:hbase:0.97-SNAPSHOT 
> (/Users/tyu/trunk/pom.xml) has 2 errors
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:jar with value '${compat.module}' does not 
> match a valid id pattern. @ line 979, column 21
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does 
> not match a valid id pattern. @ line 984, column 21
> {code}
> We should enhance test-patch.sh so that a patch with hadoop-2.0 as the 
> default profile doesn't go through the validation step against hadoop-2.0.
> Ideally, the changes in the various pom.xml files should be saved as a 
> template. The user can specify the hadoop profile to test against in the 
> header of the patch file.
> e.g.
> {code}
> This patch uses hadoop-2.0 as default profile
> Index: 
> hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
> {code}



[jira] [Created] (HBASE-8059) Enhance test-patch.sh so that patch can specify hadoop-2.0 as the default profile

2013-03-09 Thread Ted Yu (JIRA)
Ted Yu created HBASE-8059:
-

 Summary: Enhance test-patch.sh so that patch can specify 
hadoop-2.0 as the default profile
 Key: HBASE-8059
 URL: https://issues.apache.org/jira/browse/HBASE-8059
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu


Over in HBASE-7904, I produced a patch which uses hadoop-2.0 as the default 
profile.
However, when QA tries to validate compilation against hadoop-2.0:
{code}
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR]   The project org.apache.hbase:hbase:0.97-SNAPSHOT 
(/Users/tyu/trunk/pom.xml) has 2 errors
[ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
org.apache.hbase:${compat.module}:jar with value '${compat.module}' does not 
match a valid id pattern. @ line 979, column 21
[ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does 
not match a valid id pattern. @ line 984, column 21
{code}
We should enhance test-patch.sh so that a patch with hadoop-2.0 as the default 
profile doesn't go through the validation step against hadoop-2.0.

Ideally, the changes in the various pom.xml files should be saved as a 
template. The user can specify the hadoop profile to test against in the 
header of the patch file.
e.g.
{code}
This patch uses hadoop-2.0 as default profile

Index: 
hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
{code}



[jira] [Commented] (HBASE-7904) Upgrade hadoop 2.0 dependency to 2.0.4-alpha

2013-03-09 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597993#comment-13597993
 ] 

Ted Yu commented on HBASE-7904:
---

Making hadoop-2.0 the default profile results in the following error when QA 
tries to validate compilation against hadoop-2.0:
{code}
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR]   The project org.apache.hbase:hbase:0.97-SNAPSHOT 
(/Users/tyu/trunk/pom.xml) has 2 errors
[ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
org.apache.hbase:${compat.module}:jar with value '${compat.module}' does not 
match a valid id pattern. @ line 979, column 21
[ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does 
not match a valid id pattern. @ line 984, column 21
{code}
Looks like test-patch.sh needs to be enhanced so that the patch can be tested 
against hadoop-2.0.

> Upgrade hadoop 2.0 dependency to 2.0.4-alpha
> 
>
> Key: HBASE-7904
> URL: https://issues.apache.org/jira/browse/HBASE-7904
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.95.0
>
> Attachments: 7904.txt, 7904-v2.txt, 7904-v4-hadoop-2.0.txt, 
> 7904-v4.txt, 7904-v4.txt, 7904-v5.txt, hbase-7904-v3.txt
>
>
> 2.0.3-alpha has been released.
> We should upgrade the dependency.



[jira] [Updated] (HBASE-7904) Upgrade hadoop 2.0 dependency to 2.0.4-alpha

2013-03-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7904:
--

Attachment: 7904-v4-hadoop-2.0.txt

Patch v4 that uses hadoop 2.0 as default profile

> Upgrade hadoop 2.0 dependency to 2.0.4-alpha
> 
>
> Key: HBASE-7904
> URL: https://issues.apache.org/jira/browse/HBASE-7904
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.95.0
>
> Attachments: 7904.txt, 7904-v2.txt, 7904-v4-hadoop-2.0.txt, 
> 7904-v4.txt, 7904-v4.txt, 7904-v5.txt, hbase-7904-v3.txt
>
>
> 2.0.3-alpha has been released.
> We should upgrade the dependency.



[jira] [Commented] (HBASE-7961) truncate on disabled table should throw TableNotEnabledException.

2013-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597908#comment-13597908
 ] 

Hudson commented on HBASE-7961:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #438 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/438/])
HBASE-7961 truncate on disabled table should throw TableNotEnabledException.
(Rajesh) (Revision 1454677)

 Result = FAILURE
ramkrishna : 
Files : 
* /hbase/trunk/hbase-server/src/main/ruby/hbase/admin.rb


> truncate on disabled table should throw TableNotEnabledException.
> -
>
> Key: HBASE-7961
> URL: https://issues.apache.org/jira/browse/HBASE-7961
> Project: HBase
>  Issue Type: Bug
>  Components: Admin
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.98.0, 0.94.7
>
> Attachments: HBASE-7961.patch
>
>
> Presently, truncate on a disabled table deletes the existing table and 
> recreates it (ENABLED).
> The disable(table_name) call in truncate returns if the table is disabled, 
> without notifying the user.
> {code}
> def disable(table_name)
>   tableExists(table_name)
>   return if disabled?(table_name)
>   @admin.disableTable(table_name)
> end
> {code}
> One more thing: we call tableExists in disable(table_name) as well as 
> drop(table_name), which is unnecessary.
> Anyway, the HTable object creation below will check whether the table exists.
> {code}
> h_table = org.apache.hadoop.hbase.client.HTable.new(conf, table_name)
> {code}
> We can change it to 
> {code}
>   h_table = org.apache.hadoop.hbase.client.HTable.new(conf, table_name)
>   table_description = h_table.getTableDescriptor()
>   yield 'Disabling table...' if block_given?
>   @admin.disableTable(table_name)
>   yield 'Dropping table...' if block_given?
>   @admin.deleteTable(table_name)
>   yield 'Creating table...' if block_given?
>   @admin.createTable(table_description)
> {code}



[jira] [Commented] (HBASE-7961) truncate on disabled table should throw TableNotEnabledException.

2013-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597902#comment-13597902
 ] 

Hudson commented on HBASE-7961:
---

Integrated in HBase-TRUNK #3934 (See 
[https://builds.apache.org/job/HBase-TRUNK/3934/])
HBASE-7961 truncate on disabled table should throw TableNotEnabledException.
(Rajesh) (Revision 1454677)

 Result = FAILURE
ramkrishna : 
Files : 
* /hbase/trunk/hbase-server/src/main/ruby/hbase/admin.rb


> truncate on disabled table should throw TableNotEnabledException.
> -
>
> Key: HBASE-7961
> URL: https://issues.apache.org/jira/browse/HBASE-7961
> Project: HBase
>  Issue Type: Bug
>  Components: Admin
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.98.0, 0.94.7
>
> Attachments: HBASE-7961.patch
>
>
> Presently, truncate on a disabled table deletes the existing table and 
> recreates it (ENABLED).
> The disable(table_name) call in truncate returns if the table is disabled, 
> without notifying the user.
> {code}
> def disable(table_name)
>   tableExists(table_name)
>   return if disabled?(table_name)
>   @admin.disableTable(table_name)
> end
> {code}
> One more thing: we call tableExists in disable(table_name) as well as 
> drop(table_name), which is unnecessary.
> Anyway, the HTable object creation below will check whether the table exists.
> {code}
> h_table = org.apache.hadoop.hbase.client.HTable.new(conf, table_name)
> {code}
> We can change it to 
> {code}
>   h_table = org.apache.hadoop.hbase.client.HTable.new(conf, table_name)
>   table_description = h_table.getTableDescriptor()
>   yield 'Disabling table...' if block_given?
>   @admin.disableTable(table_name)
>   yield 'Dropping table...' if block_given?
>   @admin.deleteTable(table_name)
>   yield 'Creating table...' if block_given?
>   @admin.createTable(table_description)
> {code}



[jira] [Commented] (HBASE-7961) truncate on disabled table should throw TableNotEnabledException.

2013-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597901#comment-13597901
 ] 

Hudson commented on HBASE-7961:
---

Integrated in hbase-0.95 #48 (See 
[https://builds.apache.org/job/hbase-0.95/48/])
HBASE-7961 truncate on disabled table should throw TableNotEnabledException.
(Rajesh) (Revision 1454678)

 Result = SUCCESS
ramkrishna : 
Files : 
* /hbase/branches/0.95/hbase-server/src/main/ruby/hbase/admin.rb


> truncate on disabled table should throw TableNotEnabledException.
> -
>
> Key: HBASE-7961
> URL: https://issues.apache.org/jira/browse/HBASE-7961
> Project: HBase
>  Issue Type: Bug
>  Components: Admin
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.98.0, 0.94.7
>
> Attachments: HBASE-7961.patch
>
>
> Presently, truncate on a disabled table deletes the existing table and 
> recreates it (ENABLED).
> The disable(table_name) call in truncate returns if the table is disabled, 
> without notifying the user.
> {code}
> def disable(table_name)
>   tableExists(table_name)
>   return if disabled?(table_name)
>   @admin.disableTable(table_name)
> end
> {code}
> One more thing: we call tableExists in disable(table_name) as well as 
> drop(table_name), which is unnecessary.
> Anyway, the HTable object creation below will check whether the table exists.
> {code}
> h_table = org.apache.hadoop.hbase.client.HTable.new(conf, table_name)
> {code}
> We can change it to 
> {code}
>   h_table = org.apache.hadoop.hbase.client.HTable.new(conf, table_name)
>   table_description = h_table.getTableDescriptor()
>   yield 'Disabling table...' if block_given?
>   @admin.disableTable(table_name)
>   yield 'Dropping table...' if block_given?
>   @admin.deleteTable(table_name)
>   yield 'Creating table...' if block_given?
>   @admin.createTable(table_description)
> {code}



[jira] [Commented] (HBASE-7992) provide pre/post region offline hooks for HMaster.offlineRegion().

2013-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597892#comment-13597892
 ] 

Hadoop QA commented on HBASE-7992:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12572907/HBASE-7992_trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestSplitLogWorker

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4741//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4741//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4741//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4741//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4741//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4741//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4741//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4741//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4741//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4741//console

This message is automatically generated.

> provide pre/post region offline hooks for HMaster.offlineRegion(). 
> ---
>
> Key: HBASE-7992
> URL: https://issues.apache.org/jira/browse/HBASE-7992
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.95.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.98.0
>
> Attachments: HBASE-7992_trunk.patch
>
>
> Presently there are no hooks to provide access control for offlining a 
> region in the master.
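The pre/post hook pattern being requested can be sketched generically. This is an illustrative Python model, not the Java MasterObserver coprocessor API; all names are assumptions:

```python
class AccessDeniedException(Exception):
    pass

class MasterWithHooks:
    """Sketch of the pre/post hook pattern: each observer gets a veto point
    before the region is offlined and a notification afterwards. An access
    controller implements pre_region_offline and raises to deny the call.
    """
    def __init__(self, observers):
        self.observers = observers
        self.offlined = []

    def offline_region(self, region):
        for obs in self.observers:
            obs.pre_region_offline(region)   # may raise to deny access
        self.offlined.append(region)         # the actual offline work
        for obs in self.observers:
            obs.post_region_offline(region)  # post-hoc notification
```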



[jira] [Commented] (HBASE-8035) Add site target check to precommit tests

2013-03-09 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597891#comment-13597891
 ] 

Andrew Purtell commented on HBASE-8035:
---

Take Maven out back and shoot.

> Add site target check to precommit tests
> 
>
> Key: HBASE-8035
> URL: https://issues.apache.org/jira/browse/HBASE-8035
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>Assignee: Nick Dimiduk
> Fix For: 0.98.0
>
> Attachments: 
> 0001-HBASE-8035-Add-site-generation-to-patch-validation.patch, 
> 8035-addendum.txt
>
>
> We should check that the Maven 'site' target passes as part of precommit 
> testing. See HBASE-8022.



[jira] [Commented] (HBASE-7581) TestAccessController depends on the execution order

2013-03-09 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597890#comment-13597890
 ] 

Andrew Purtell commented on HBASE-7581:
---

Ping [~toffer]

> TestAccessController depends on the execution order
> ---
>
> Key: HBASE-7581
> URL: https://issues.apache.org/jira/browse/HBASE-7581
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
> Fix For: 0.95.0, 0.94.5
>
> Attachments: 7581.v1.patch
>
>




[jira] [Commented] (HBASE-7961) truncate on disabled table should throw TableNotEnabledException.

2013-03-09 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597889#comment-13597889
 ] 

ramkrishna.s.vasudevan commented on HBASE-7961:
---

Committed to trunk and 0.95.  Lars, do you need it in 0.94?

> truncate on disabled table should throw TableNotEnabledException.
> -
>
> Key: HBASE-7961
> URL: https://issues.apache.org/jira/browse/HBASE-7961
> Project: HBase
>  Issue Type: Bug
>  Components: Admin
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.98.0, 0.94.7
>
> Attachments: HBASE-7961.patch
>
>
> Presently, truncate on a disabled table deletes the existing table and 
> recreates it (ENABLED).
> The disable(table_name) call in truncate returns if the table is disabled, 
> without notifying the user.
> {code}
> def disable(table_name)
>   tableExists(table_name)
>   return if disabled?(table_name)
>   @admin.disableTable(table_name)
> end
> {code}
> One more thing: we call tableExists in disable(table_name) as well as 
> drop(table_name), which is unnecessary.
> Anyway, the HTable object creation below will check whether the table exists.
> {code}
> h_table = org.apache.hadoop.hbase.client.HTable.new(conf, table_name)
> {code}
> We can change it to 
> {code}
>   h_table = org.apache.hadoop.hbase.client.HTable.new(conf, table_name)
>   table_description = h_table.getTableDescriptor()
>   yield 'Disabling table...' if block_given?
>   @admin.disableTable(table_name)
>   yield 'Dropping table...' if block_given?
>   @admin.deleteTable(table_name)
>   yield 'Creating table...' if block_given?
>   @admin.createTable(table_description)
> {code}
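The intended behavior change can be sketched in plain Ruby. This is a hypothetical stand-in (AdminSketch and its internals are illustrative, not the real shell or HBaseAdmin code): instead of silently returning when the table is already disabled, truncate raises a TableNotEnabledException so the caller sees the error.

```ruby
# Hypothetical error type mirroring HBase's TableNotEnabledException.
class TableNotEnabledException < StandardError; end

# Stand-in for the shell admin; the real code talks to HBaseAdmin.
class AdminSketch
  def initialize(disabled_tables)
    @disabled = disabled_tables
  end

  def disabled?(table_name)
    @disabled.include?(table_name)
  end

  def truncate(table_name)
    # Proposed behavior: fail fast rather than delete-and-recreate.
    raise TableNotEnabledException, table_name if disabled?(table_name)
    :truncated
  end
end
```

With this shape, `truncate` on a disabled table surfaces an error instead of quietly recreating the table as ENABLED.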



[jira] [Updated] (HBASE-7992) provide pre/post region offline hooks for HMaster.offlineRegion().

2013-03-09 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-7992:
--

Attachment: HBASE-7992_trunk.patch

Patch for trunk, including both HBASE-7992 and HBASE-7993.

> provide pre/post region offline hooks for HMaster.offlineRegion(). 
> ---
>
> Key: HBASE-7992
> URL: https://issues.apache.org/jira/browse/HBASE-7992
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.95.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.98.0
>
> Attachments: HBASE-7992_trunk.patch
>
>
> presently no hooks to provide access control to offline region in master.



[jira] [Updated] (HBASE-7993) add access control to HMaster offline region.

2013-03-09 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-7993:
--

Summary: add access control to HMaster offline region.  (was: add access 
control to HMaster.offline region.)

> add access control to HMaster offline region.
> -
>
> Key: HBASE-7993
> URL: https://issues.apache.org/jira/browse/HBASE-7993
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.95.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.98.0
>
>




[jira] [Commented] (HBASE-8049) If a RS cannot use a compression codec, it should have a retry limit on checking results of CompressionTest

2013-03-09 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597888#comment-13597888
 ] 

ramkrishna.s.vasudevan commented on HBASE-8049:
---

Whenever the RS fails to open the region from OpenRegionHandler, 
set the zk state to FATAL or UNRECOVERABLE.

On the master side, add the regions under this znode to a special data 
structure along with the RS on which they failed.
Have a timer thread that acts on these regions with a different region plan, 
so they can be tried on another RS.

-> If the master finds an RS with the compression codec available, the region 
gets opened there.
This may cause all the regions to move to that RS, as it is the one with the 
expected compression. Once the other RSs are rebooted with compression, the 
regions will be assigned and balanced automatically.

-> What if none of the RSs has the compression codec?
Then we should continuously retry the process and keep logging that the RSs 
are not enabled with the expected compression.

Create Table:
If create table does not succeed within the configured time, the client gets 
an error. Once the RSs are rebooted (after fixing the compression), we can 
carry on with opening the regions.

Enable Table:
When the problem happens while we try to ENABLE a table, we should ensure that 
the table is forcefully marked ENABLED once all the regions are assigned.

During this time the table is not usable.
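The retry limit itself is the core of the proposal. A minimal Ruby sketch (names are illustrative, not HBase APIs): run the codec check a bounded number of times, then report a permanent failure instead of looping forever, which is what currently keeps the Master and RS busy.

```ruby
# Hypothetical retry wrapper: yield is the per-attempt check (e.g. the
# CompressionTest result); give up after max_attempts instead of
# retrying indefinitely.
def open_region_with_retry(max_attempts)
  max_attempts.times do
    return :opened if yield
    # The real proposal would log here and let the master's timer thread
    # retry the region with a different region plan on another RS.
  end
  :failed_open # bounded failure, surfaced instead of endless retries
end
```

Bounding the loop is what lets create/enable table fail cleanly back to the client when no RS has the codec.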

> If a RS cannot use a compression codec, it should have a retry limit on 
> checking results of CompressionTest
> ---
>
> Key: HBASE-8049
> URL: https://issues.apache.org/jira/browse/HBASE-8049
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.90.6, 0.92.3, 0.95.0, 0.94.7
> Environment: Including, but not limited to, Centos6_64
>Reporter: Aleksandr Shulman
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.95.0, 0.94.7
>
>
> Observed Behavior:
> When a user attempts to create a table but there is an issue with the codec, 
> the attempt continues repeatedly. The shell command times out but the RS and 
> Master are both occupied, leading to HBase being down. Further, HBase creates 
> the folders for the table in HDFS.
> The only way to restore the service is by disabling and dropping the table.
> Here are the log lines when a table, t8, is created with this definition:
> create 't8', {NAME=>'f1',COMPRESSION=>'lzo'}
> Error from shell:
> hbase(main):003:0> create 't8', {NAME=>'f1',BLOOMFILTER=>'row', 
> COMPRESSION=>'lzo'}
> ERROR: org.apache.hadoop.hbase.client.RegionOfflineException: Only 0 of 1 
> regions are online; retries exhausted.
> Log lines on Master (repeats a few times/second):
> 2013-03-07 22:55:31,389 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Using pre-existing plan for 
> region t8,,1362725678436.311edabcc1fe52001cb00e7c3e7f75d4.; 
> plan=hri=t8,,1362725678436.311edabcc1fe52001cb00e7c3e7f75d4., src=, 
> dest=upgrade-vm-1.ent.cloudera.com,60020,1362709586485
> 2013-03-07 22:55:31,389 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Assigning region 
> t8,,1362725678436.311edabcc1fe52001cb00e7c3e7f75d4. to 
> upgrade-vm-1.ent.cloudera.com,60020,1362709586485
> 2013-03-07 22:55:31,398 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Handling 
> transition=RS_ZK_REGION_OPENING, 
> server=upgrade-vm-1.ent.cloudera.com,60020,1362709586485, 
> region=311edabcc1fe52001cb00e7c3e7f75d4
> 2013-03-07 22:55:31,406 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Handling 
> transition=RS_ZK_REGION_FAILED_OPEN, 
> server=upgrade-vm-1.ent.cloudera.com,60020,1362709586485, 
> region=311edabcc1fe52001cb00e7c3e7f75d4
> 2013-03-07 22:55:31,406 DEBUG 
> org.apache.hadoop.hbase.master.handler.ClosedRegionHandler: Handling CLOSED 
> event for 311edabcc1fe52001cb00e7c3e7f75d4
> 2013-03-07 22:55:31,406 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Forcing OFFLINE; 
> was=t8,,1362725678436.311edabcc1fe52001cb00e7c3e7f75d4. state=CLOSED, 
> ts=1362725731398, server=upgrade-vm-1.ent.cloudera.com,60020,1362709586485
> 2013-03-07 22:55:31,406 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
> master:6-0x13d47d21483 Creating (or updating) unassigned node for 
> 311edabcc1fe52001cb00e7c3e7f75d4 with OFFLINE state
> 2013-03-07 22:55:31,414 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Handling 
> transition=M_ZK_REGION_OFFLINE, server=upgrade-vm-1.ent.cloudera.com:6, 
> region=311edabcc1fe52001cb00e7c3e7f75d4
> Log lines on RS (repeats a few times/second):
> 2013-03-07 22:58:23,323 ERROR 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open 
> of regi

[jira] [Updated] (HBASE-7992) provide pre/post region offline hooks for HMaster.offlineRegion().

2013-03-09 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-7992:
--

Status: Patch Available  (was: Open)

> provide pre/post region offline hooks for HMaster.offlineRegion(). 
> ---
>
> Key: HBASE-7992
> URL: https://issues.apache.org/jira/browse/HBASE-7992
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.95.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.98.0
>
> Attachments: HBASE-7992_trunk.patch
>
>
> presently no hooks to provide access control to offline region in master.
