[jira] [Commented] (HBASE-7103) Need to fail split if SPLIT znode is deleted even before the split is completed.

2012-11-12 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496025#comment-13496025
 ] 

ramkrishna.s.vasudevan commented on HBASE-7103:
---

Thanks for the info on the ZK stuff. :)

> Need to fail split if SPLIT znode is deleted even before the split is 
> completed.
> 
>
> Key: HBASE-7103
> URL: https://issues.apache.org/jira/browse/HBASE-7103
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 7103-6088-revert.txt, HBASE-7103_0.94.patch, 
> HBASE-7103_0.94.patch, HBASE-7103_testcase.patch, HBASE-7103_trunk.patch
>
>
> This came up after the following mail on the dev list:
> 'infinite loop of RS_ZK_REGION_SPLIT on .94.2'.
> The problem happens through the following steps:
> -> Initially the parent region P1 starts splitting.
> -> The split proceeds normally.
> -> Another split starts at the same time for the same region P1. (Not sure 
> why this started.)
> -> Rollback happens on seeing an already existing node.
> -> The node gets deleted during rollback, and a nodeDeleted event fires.
> -> In the nodeDeleted event, the RIT entry for region P1 gets deleted.
> -> Because of this there is no region in RIT.
> -> Now the first split completes.  Here the problem is that we try to 
> transition the node from SPLITTING to SPLIT, but the node does not even exist.
> We take no action on this failure and assume the transition succeeded.
> -> Because of this SplitRegionHandler never gets invoked.
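The failure mode in the step list above can be sketched as follows. This is a hypothetical, self-contained stand-in (class and method names like `SplitTransitionSketch`, `transition`, and `completeSplit` are illustrative, not HBase's real ZKAssign API): the caller should fail the split when the SPLITTING znode it tries to transition is gone, instead of treating the missing node as success.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only; HBase's real code goes through ZooKeeper.
class SplitTransitionSketch {
    // Stand-in for the znode store: path -> current state.
    static final Map<String, String> znodes = new ConcurrentHashMap<>();

    // Returns -1 on failure (node missing or in an unexpected state),
    // mimicking a versioned compare-and-set transition.
    static int transition(String path, String from, String to) {
        String state = znodes.get(path);
        if (state == null || !state.equals(from)) {
            return -1;  // node was deleted by the concurrent rollback
        }
        znodes.put(path, to);
        return 1;
    }

    static boolean completeSplit(String path) {
        int version = transition(path, "SPLITTING", "SPLIT");
        // The point of the issue: a failed transition must fail the
        // split rather than be silently treated as success.
        return version != -1;
    }
}
```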

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7152) testShouldCheckMasterFailOverWhenMETAIsInOpenedState times out occasionally

2012-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496016#comment-13496016
 ] 

Hadoop QA commented on HBASE-7152:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12553211/trunk-7152.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 18 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestShell
  org.apache.hadoop.hbase.TestDrainingServer

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3323//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3323//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3323//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3323//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3323//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3323//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3323//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3323//console

This message is automatically generated.

> testShouldCheckMasterFailOverWhenMETAIsInOpenedState times out occasionally
> ---
>
> Key: HBASE-7152
> URL: https://issues.apache.org/jira/browse/HBASE-7152
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Attachments: trunk-7152.patch
>
>
> {noformat}
> java.lang.Exception: test timed out after 18 milliseconds
>   at java.lang.Throwable.fillInStackTrace(Native Method)
>   at java.lang.Throwable.(Throwable.java:181)
>   at 
> org.apache.log4j.spi.LoggingEvent.getLocationInformation(LoggingEvent.java:253)
>   at 
> org.apache.log4j.helpers.PatternParser$ClassNamePatternConverter.getFullyQualifiedName(PatternParser.java:555)
>   at 
> org.apache.log4j.helpers.PatternParser$NamedPatternConverter.convert(PatternParser.java:528)
>   at 
> org.apache.log4j.helpers.PatternConverter.format(PatternConverter.java:65)
>   at org.apache.log4j.PatternLayout.format(PatternLayout.java:506)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at 
> org.apache.commons.logging.impl.Log4JLogger.debug(Log4JLogger.java:188)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.join(LocalHBaseCluster.java:407)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.join(MiniHBaseCluster.java:408)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:599)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:573)
>   at 
> org.apache.hadoop.hbase.master.TestMasterFailover.testShouldCheckMasterFailOverWhenMETAIsInOpenedState(TestMasterFailover.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAcc

[jira] [Updated] (HBASE-7069) HTable.batch does not have to be synchronized

2012-11-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-7069:
-

Summary: HTable.batch does not have to be synchronized  (was: HTable.batch 
does not have to synchronized)

> HTable.batch does not have to be synchronized
> -
>
> Key: HBASE-7069
> URL: https://issues.apache.org/jira/browse/HBASE-7069
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.94.3
>
> Attachments: 7069.txt
>
>
> This was raised on the mailing list by Yousuf.
> HTable.batch(...) is synchronized, and there appears to be no reason for it.
> (flushCommits makes the same call to connection.processBatch, and it is also 
> not synchronized.)
> This is pretty bad, actually; marking critical.
> 0.96 is fine, BTW.
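The change the issue implies can be sketched as follows. This is an illustrative stand-in, not HTable's actual code (`TableSketch` and its fields are hypothetical): if the underlying connection call is itself safe for concurrent use, a `synchronized` modifier on `batch` only serializes callers needlessly, which is what the unsynchronized `flushCommits` path already demonstrates.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch only; not HTable's real implementation.
class TableSketch {
    // Stand-in for connection.processBatch, assumed thread-safe.
    private final List<String> connection =
            Collections.synchronizedList(new ArrayList<>());

    // Before the patch: method-level lock serializes all callers.
    synchronized void batchBefore(String op) {
        connection.add(op);
    }

    // After the patch: same underlying call, no method-level lock,
    // matching flushCommits, which was never synchronized.
    void batch(String op) {
        connection.add(op);
    }

    int size() {
        return connection.size();
    }
}
```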



[jira] [Resolved] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-5898.
--

Resolution: Fixed

Committed to 0.94 and 0.96.
(Removing those unneeded AtomicLongs will give some additional cycles back.)

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-0.94.txt, 5898-TestBlocksRead.txt, 5898-v2.txt, 
> 5898-v3.txt, 5898-v4.txt, 5898-v4.txt, HBASE-5898-0.patch, 
> HBASE-5898-1.patch, HBASE-5898-1.patch, hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.
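The double-checked pattern described above can be sketched generically. This is a minimal stand-in, not HBase's actual HFileReaderV2/IdLock code (all names here are illustrative): consult the cache lock-free first, and only take a per-block lock on a miss; a second check under the lock prevents duplicate loads.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of double-checked locking for a block cache.
class DoubleCheckedBlockCache {
    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();
    private final Map<String, Object> locks = new ConcurrentHashMap<>();
    final AtomicInteger diskLoads = new AtomicInteger();  // for observation

    byte[] readBlock(String key) {
        byte[] block = cache.get(key);
        if (block != null) {
            return block;  // fast path: cache hit, no lock taken at all
        }
        Object lock = locks.computeIfAbsent(key, k -> new Object());
        synchronized (lock) {          // stand-in for IdLock.getLockEntry
            block = cache.get(key);    // second check, under the lock
            if (block == null) {
                block = loadFromDisk(key);
                cache.put(key, block);
            }
            return block;
        }
    }

    private byte[] loadFromDisk(String key) {
        diskLoads.incrementAndGet();   // counts real "disk" reads
        return key.getBytes();
    }
}
```

On a cache-hit-heavy workload the fast path never touches a lock, which is exactly where the CPU savings reported above come from.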



[jira] [Updated] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5898:
-

Attachment: 5898-0.94.txt

0.94 version of the patch

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-0.94.txt, 5898-TestBlocksRead.txt, 5898-v2.txt, 
> 5898-v3.txt, 5898-v4.txt, 5898-v4.txt, HBASE-5898-0.patch, 
> HBASE-5898-1.patch, HBASE-5898-1.patch, hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.



[jira] [Updated] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5898:
-

Status: Open  (was: Patch Available)

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-0.94.txt, 5898-TestBlocksRead.txt, 5898-v2.txt, 
> 5898-v3.txt, 5898-v4.txt, 5898-v4.txt, HBASE-5898-0.patch, 
> HBASE-5898-1.patch, HBASE-5898-1.patch, hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.



[jira] [Commented] (HBASE-7128) Reduce annoying catch clauses of UnsupportedEncodingException that is never thrown because of UTF-8

2012-11-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496004#comment-13496004
 ] 

stack commented on HBASE-7128:
--

+1 on patch.  It's good that the methods added to HConstants are private.

> Reduce annoying catch clauses of UnsupportedEncodingException that is never 
> thrown because of UTF-8
> ---
>
> Key: HBASE-7128
> URL: https://issues.apache.org/jira/browse/HBASE-7128
> Project: HBase
>  Issue Type: Improvement
>Reporter: Hiroshi Ikeda
>Priority: Trivial
> Fix For: 0.96.0
>
> Attachments: HBASE-7128.patch, HBASE-7128-V2.patch
>
>
> There is some code that catches UnsupportedEncodingException and logs or 
> ignores it, because Java always supports UTF-8 (see the javadoc of Charset).
> These catch clauses are annoying, and they should be replaced by methods in 
> Bytes.
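The cleanup the issue proposes can be sketched like this. The class and method names (`Utf8Bytes`, `toBytes`) are illustrative, not the actual `Bytes` API: the `String.getBytes(Charset)` overload throws no checked exception, so the boilerplate catch clause disappears.

```java
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of replacing the annoying catch clause.
class Utf8Bytes {
    // Before: every caller of getBytes("UTF-8") needs this catch block,
    // even though UTF-8 is guaranteed to be present.
    static byte[] toBytesOld(String s) {
        try {
            return s.getBytes("UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException("UTF-8 is always supported", e);
        }
    }

    // After: the Charset overload cannot throw the checked exception.
    static byte[] toBytes(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }
}
```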



[jira] [Commented] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496003#comment-13496003
 ] 

Lars Hofhansl commented on HBASE-5898:
--

TestDrainingServer passes locally. TestShell fails with or without this patch.

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 
> 5898-v4.txt, 5898-v4.txt, HBASE-5898-0.patch, HBASE-5898-1.patch, 
> HBASE-5898-1.patch, hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.



[jira] [Commented] (HBASE-7148) Some files in hbase-examples module miss license header

2012-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495998#comment-13495998
 ] 

Hadoop QA commented on HBASE-7148:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12553264/hbase-7148.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 18 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestShell

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3322//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3322//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3322//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3322//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3322//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3322//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3322//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3322//console


> Some files in hbase-examples module miss license header
> ---
>
> Key: HBASE-7148
> URL: https://issues.apache.org/jira/browse/HBASE-7148
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Enis Soztutar
> Attachments: hbase-7148.patch
>
>
> Trunk build 3530 got to building hbase-examples module but failed:
> {code}
> [INFO] HBase - Examples .. FAILURE [3.222s]
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 29:21.569s
> [INFO] Finished at: Sun Nov 11 15:17:35 UTC 2012
> [INFO] Final Memory: 68M/642M
> [INFO] 
> 
> [ERROR] Failed to execute goal org.apache.rat:apache-rat-plugin:0.8:check 
> (default) on project hbase-examples: Too many unapproved licenses: 20 -> 
> [Help 1]
> {code}
> Looks like license headers are missing in some of the files in the 
> hbase-examples module.



[jira] [Commented] (HBASE-7133) svn:ignore on module directories

2012-11-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495997#comment-13495997
 ] 

stack commented on HBASE-7133:
--

+1

> svn:ignore on module directories
> 
>
> Key: HBASE-7133
> URL: https://issues.apache.org/jira/browse/HBASE-7133
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Trivial
> Attachments: hbase-7133.patch
>
>
> This has been bothering me whenever I go back to svn to commit something. We 
> have to set svn:ignore on the module directories (hbase-common, hbase-server, etc.).



[jira] [Commented] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495982#comment-13495982
 ] 

Hadoop QA commented on HBASE-5898:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12553266/5898-v4.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
93 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 17 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestShell
  org.apache.hadoop.hbase.TestDrainingServer

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3321//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3321//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3321//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3321//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3321//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3321//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3321//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3321//console


> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 
> 5898-v4.txt, 5898-v4.txt, HBASE-5898-0.patch, HBASE-5898-1.patch, 
> HBASE-5898-1.patch, hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.



[jira] [Commented] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495979#comment-13495979
 ] 

Hadoop QA commented on HBASE-5898:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12553266/5898-v4.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
93 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 17 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestShell

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3320//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3320//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3320//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3320//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3320//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3320//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3320//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3320//console


> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 
> 5898-v4.txt, 5898-v4.txt, HBASE-5898-0.patch, HBASE-5898-1.patch, 
> HBASE-5898-1.patch, hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.



[jira] [Updated] (HBASE-7145) ReusableStreamGzipCodec NPE upon reset with IBM JDK

2012-11-12 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-7145:
-

Assignee: Yu Li

> ReusableStreamGzipCodec NPE upon reset with IBM JDK
> ---
>
> Key: HBASE-7145
> URL: https://issues.apache.org/jira/browse/HBASE-7145
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.94.0
>Reporter: Yu Li
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
>
> This is the same issue as described in HADOOP-8419; repeating the issue 
> description here:
> ReusableStreamGzipCodec will NPE upon reset after finish when the native 
> zlib codec is not loaded. When native zlib is loaded, the codec creates a 
> CompressorOutputStream that doesn't have the problem; otherwise, 
> ReusableStreamGzipCodec uses GZIPOutputStream, which is extended to provide 
> the resetState method. Since IBM JDK 6 SR9 FP2, including the current JDK 6 
> SR10, GZIPOutputStream#finish releases the underlying deflater (calls the 
> deflater's end method), which causes an NPE upon reset. This seems to be an 
> IBM JDK quirk, as Sun JDK and OpenJDK don't have this issue.
> Since in HBASE-5387 the HBase source was refactored not to use Hadoop's 
> GzipCodec for the actual compress/decompress path, a separate patch for 
> HBase is needed for the same issue.
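The reuse pattern the description refers to can be sketched as a GZIPOutputStream subclass. This is a minimal illustration, not the real ReusableStreamGzipCodec (the `resetState` name is borrowed from the description; the `def` and `crc` fields are the protected fields of the JDK's DeflaterOutputStream/GZIPOutputStream). On the affected IBM JDKs, `finish()` ends the underlying Deflater, so a later `def.reset()` here would NPE; Sun JDK and OpenJDK keep the Deflater alive until `close()`.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

// Illustrative sketch of a resettable gzip stream for block reuse.
class ReusableGzipOutputStream extends GZIPOutputStream {
    ReusableGzipOutputStream(ByteArrayOutputStream out) throws IOException {
        super(out);
    }

    // Reset internal state so the stream can be reused for another block.
    void resetState() {
        def.reset();   // NPEs on affected IBM JDKs after finish()
        crc.reset();
    }
}
```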



[jira] [Commented] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495962#comment-13495962
 ] 

Lars Hofhansl commented on HBASE-5898:
--

Triggered HadoopQA directly through jenkins.

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 
> 5898-v4.txt, 5898-v4.txt, HBASE-5898-0.patch, HBASE-5898-1.patch, 
> HBASE-5898-1.patch, hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.



[jira] [Commented] (HBASE-5984) TestLogRolling.testLogRollOnPipelineRestart failed with HADOOP 2.0.0

2012-11-12 Thread Andrey Klochkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495961#comment-13495961
 ] 

Andrey Klochkov commented on HBASE-5984:


Thank you, [~stack]!

> TestLogRolling.testLogRollOnPipelineRestart failed with HADOOP 2.0.0
> 
>
> Key: HBASE-5984
> URL: https://issues.apache.org/jira/browse/HBASE-5984
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.96.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.94.3, 0.96.0
>
> Attachments: hbase_5984.patch
>
>
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1455809779-127.0.0.1-1336670196362:blk_-6960847342982670493_1028;
>  getBlockSize()=1474; corrupt=false; offset=0; locs=[127.0.0.1:58343, 
> 127.0.0.1:48427]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:232)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:177)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:119)
>   at org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:112)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:928)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.openFile(SequenceFile.java:1768)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.openFile(SequenceFileLogReader.java:66)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1688)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1709)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:58)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:166)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:659)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testLogRollOnPipelineRestart(TestLogRolling.java:498)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (HBASE-5778) Turn on WAL compression by default

2012-11-12 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-5778:
--

Attachment: HBASE-5778-0.94-v3.patch

Attaching a patch that includes the new files, doh.

> Turn on WAL compression by default
> --
>
> Key: HBASE-5778
> URL: https://issues.apache.org/jira/browse/HBASE-5778
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jean-Daniel Cryans
>Assignee: Jean-Daniel Cryans
>Priority: Blocker
> Fix For: 0.96.0
>
> Attachments: 5778.addendum, 5778-addendum.txt, HBASE-5778-0.94.patch, 
> HBASE-5778-0.94-v2.patch, HBASE-5778-0.94-v3.patch, HBASE-5778.patch
>
>
> I ran some tests to verify if WAL compression should be turned on by default.
> For a use case where it's not very useful (values two orders of magnitude 
> bigger than the keys), the insert time wasn't different and CPU usage was 15% 
> higher (150% CPU usage vs. 130% when not compressing the WAL).
> When values are smaller than the keys, I saw a 38% improvement in the insert 
> run time, and CPU usage was 33% higher (600% CPU usage vs. 450%). I'm not sure 
> WAL compression accounts for all the additional CPU usage; it might just be 
> that we're able to insert faster and we spend more time in the MemStore per 
> second (because our MemStores are bad when they contain tens of thousands of 
> values).
> Those are two extremes, but it shows that for the price of some CPU we can 
> save a lot. My machines have 2 quads with HT, so I still had a lot of idle 
> CPUs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5778) Turn on WAL compression by default

2012-11-12 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-5778:
--

Attachment: (was: HBASE-5778-0.94-v3.patch)

> Turn on WAL compression by default
> --
>
> Key: HBASE-5778
> URL: https://issues.apache.org/jira/browse/HBASE-5778
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jean-Daniel Cryans
>Assignee: Jean-Daniel Cryans
>Priority: Blocker
> Fix For: 0.96.0
>
> Attachments: 5778.addendum, 5778-addendum.txt, HBASE-5778-0.94.patch, 
> HBASE-5778-0.94-v2.patch, HBASE-5778-0.94-v3.patch, HBASE-5778.patch
>
>
> I ran some tests to verify if WAL compression should be turned on by default.
> For a use case where it's not very useful (values two orders of magnitude 
> bigger than the keys), the insert time wasn't different and CPU usage was 15% 
> higher (150% CPU usage vs. 130% when not compressing the WAL).
> When values are smaller than the keys, I saw a 38% improvement in the insert 
> run time, and CPU usage was 33% higher (600% CPU usage vs. 450%). I'm not sure 
> WAL compression accounts for all the additional CPU usage; it might just be 
> that we're able to insert faster and we spend more time in the MemStore per 
> second (because our MemStores are bad when they contain tens of thousands of 
> values).
> Those are two extremes, but it shows that for the price of some CPU we can 
> save a lot. My machines have 2 quads with HT, so I still had a lot of idle 
> CPUs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5778) Turn on WAL compression by default

2012-11-12 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-5778:
--

Attachment: HBASE-5778-0.94-v3.patch

Second pass, probably a stepping stone.

Adds a {{ReplicationHLogReader}} that hides all the dirtiness from HLog and its 
compression functionality. Right now it's only a dumb extraction of code from 
{{ReplicationSource}} and it doesn't take care of any exception handling. It 
also has weird semantics like finishCurrentFile not closing the reader.

Still passes the tests.

> Turn on WAL compression by default
> --
>
> Key: HBASE-5778
> URL: https://issues.apache.org/jira/browse/HBASE-5778
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jean-Daniel Cryans
>Assignee: Jean-Daniel Cryans
>Priority: Blocker
> Fix For: 0.96.0
>
> Attachments: 5778.addendum, 5778-addendum.txt, HBASE-5778-0.94.patch, 
> HBASE-5778-0.94-v2.patch, HBASE-5778-0.94-v3.patch, HBASE-5778.patch
>
>
> I ran some tests to verify if WAL compression should be turned on by default.
> For a use case where it's not very useful (values two orders of magnitude 
> bigger than the keys), the insert time wasn't different and CPU usage was 15% 
> higher (150% CPU usage vs. 130% when not compressing the WAL).
> When values are smaller than the keys, I saw a 38% improvement in the insert 
> run time, and CPU usage was 33% higher (600% CPU usage vs. 450%). I'm not sure 
> WAL compression accounts for all the additional CPU usage; it might just be 
> that we're able to insert faster and we spend more time in the MemStore per 
> second (because our MemStores are bad when they contain tens of thousands of 
> values).
> Those are two extremes, but it shows that for the price of some CPU we can 
> save a lot. My machines have 2 quads with HT, so I still had a lot of idle 
> CPUs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7137) Improve Bytes to accept byte buffers which don't allow us to directly access their backed arrays

2012-11-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495955#comment-13495955
 ] 

stack commented on HBASE-7137:
--

Nice patch.  Nice tests.

In toBytes, we do dup.position(0) but we don't do this when we do getBytes.  
Should we?   Should getBytes make use of toBytes?

Thanks.

> Improve Bytes to accept byte buffers which don't allow us to directly access 
> their backed arrays
> 
>
> Key: HBASE-7137
> URL: https://issues.apache.org/jira/browse/HBASE-7137
> Project: HBase
>  Issue Type: Improvement
>Reporter: Hiroshi Ikeda
>Priority: Minor
> Attachments: HBASE-7137.patch
>
>
> Inside HBase, it seems that there is an implicit assumption that byte 
> buffers have backing arrays and are not read-only, and that we can freely call 
> ByteBuffer.array() and arrayOffset() without runtime exceptions.
> But some classes, including Bytes, are supposed to be used by users from 
> outside of HBase, and we should consider the possibility that methods receive 
> byte buffers for which the assumption doesn't hold.
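The concern above can be sketched with a copy helper that never touches the backing array, so it also works for direct and read-only buffers. This is an illustrative stand-in (the class name `BufferBytes` is hypothetical), not HBase's actual `Bytes` implementation:

```java
import java.nio.ByteBuffer;

public class BufferBytes {
    /** Copies the remaining bytes without ever calling array()/arrayOffset(). */
    public static byte[] toBytes(ByteBuffer buf) {
        ByteBuffer dup = buf.duplicate(); // leave the caller's position untouched
        byte[] out = new byte[dup.remaining()];
        dup.get(out);                     // bulk relative get works on any buffer kind
        return out;
    }

    public static void main(String[] args) {
        // A direct buffer: array() would throw UnsupportedOperationException here.
        ByteBuffer direct = ByteBuffer.allocateDirect(4);
        direct.put(new byte[]{1, 2, 3, 4}).flip();
        byte[] copied = toBytes(direct);
        System.out.println(copied.length); // prints 4

        // A read-only buffer: array() would throw ReadOnlyBufferException here.
        byte[] ro = toBytes(ByteBuffer.wrap(new byte[]{9}).asReadOnlyBuffer());
        System.out.println(ro[0]);         // prints 9
    }
}
```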

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5898:
-

Status: Patch Available  (was: Open)

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 
> 5898-v4.txt, 5898-v4.txt, HBASE-5898-0.patch, HBASE-5898-1.patch, 
> HBASE-5898-1.patch, hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.
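The approach described can be sketched as follows. This is a simplified illustration only: a plain map and a single monitor stand in for HBase's block cache and per-block IdLock, and the class and method names are hypothetical.

```java
import java.util.concurrent.ConcurrentHashMap;

public class BlockCacheRead {
    private final ConcurrentHashMap<String, byte[]> cache = new ConcurrentHashMap<>();
    private final Object loadLock = new Object(); // stand-in for the per-block IdLock

    /** Double-checked read: probe without locking first, lock only on a miss. */
    public byte[] readBlock(String key) {
        byte[] block = cache.get(key);  // cheap unlocked check: the common cache-hit path
        if (block != null) {
            return block;               // hit: no lock management at all
        }
        synchronized (loadLock) {       // slow path: serialize concurrent loaders
            block = cache.get(key);     // re-check: another thread may have loaded it
            if (block == null) {
                block = loadFromDisk(key);
                cache.put(key, block);
            }
            return block;
        }
    }

    private byte[] loadFromDisk(String key) {
        return key.getBytes();          // placeholder for the real HFile block read
    }
}
```

The first unlocked `get` is what removes the `IdLock.getLockEntry` cost from the hit path; the re-check under the lock is what keeps a miss from being loaded twice.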

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5898:
-

Status: Open  (was: Patch Available)

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 
> 5898-v4.txt, 5898-v4.txt, HBASE-5898-0.patch, HBASE-5898-1.patch, 
> HBASE-5898-1.patch, hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5898:
-

Attachment: 5898-v4.txt

Attaching same patch again for HadoopQA

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 
> 5898-v4.txt, 5898-v4.txt, HBASE-5898-0.patch, HBASE-5898-1.patch, 
> HBASE-5898-1.patch, hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7104) HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2

2012-11-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495949#comment-13495949
 ] 

Lars Hofhansl commented on HBASE-7104:
--

Done.

> HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2
> --
>
> Key: HBASE-7104
> URL: https://issues.apache.org/jira/browse/HBASE-7104
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 7104.v1.patch
>
>
> We've got 3 of them on trunk.
> [INFO] org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT
> [INFO] +- io.netty:netty:jar:3.5.0.Final:compile
> [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.3:compile
> [INFO] |  \- org.jboss.netty:netty:jar:3.2.2.Final:compile
> [INFO] org.apache.hbase:hbase-hadoop2-compat:jar:0.95-SNAPSHOT
> [INFO] +- org.apache.hadoop:hadoop-client:jar:2.0.2-alpha:compile
> [INFO] |  +- 
> org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.0.2-alpha:compile
> [INFO] |  |  \- org.jboss.netty:netty:jar:3.2.4.Final:compile
> The patch attached:
> - fixes this for the hadoop 1 profile
> - bumps the netty version to 3.5.9
> - does not fix it for hadoop 2. I don't know why, but I haven't investigated: 
> as it's still alpha, maybe they will change the version on the hadoop side anyway.
> Tests are ok.
> I haven't really investigated the differences between netty 3.2 and 3.5. A 
> quick search seems to say it's ok, but don't hesitate to raise a warning...
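For reference, the kind of pom change involved looks like the sketch below, which excludes the transitive org.jboss.netty that zookeeper pulls in so that only io.netty remains. The coordinates are taken from the dependency tree above; the attached patch may differ in detail.

```xml
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <version>3.4.3</version>
  <exclusions>
    <!-- drop the old transitive netty so only io.netty:netty remains -->
    <exclusion>
      <groupId>org.jboss.netty</groupId>
      <artifactId>netty</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```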

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7104) HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2

2012-11-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495948#comment-13495948
 ] 

Lars Hofhansl commented on HBASE-7104:
--

Yeah, lemme do that. Then we can regroup.


> HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2
> --
>
> Key: HBASE-7104
> URL: https://issues.apache.org/jira/browse/HBASE-7104
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 7104.v1.patch
>
>
> We've got 3 of them on trunk.
> [INFO] org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT
> [INFO] +- io.netty:netty:jar:3.5.0.Final:compile
> [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.3:compile
> [INFO] |  \- org.jboss.netty:netty:jar:3.2.2.Final:compile
> [INFO] org.apache.hbase:hbase-hadoop2-compat:jar:0.95-SNAPSHOT
> [INFO] +- org.apache.hadoop:hadoop-client:jar:2.0.2-alpha:compile
> [INFO] |  +- 
> org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.0.2-alpha:compile
> [INFO] |  |  \- org.jboss.netty:netty:jar:3.2.4.Final:compile
> The patch attached:
> - fixes this for the hadoop 1 profile
> - bumps the netty version to 3.5.9
> - does not fix it for hadoop 2. I don't know why, but I haven't investigated: 
> as it's still alpha, maybe they will change the version on the hadoop side anyway.
> Tests are ok.
> I haven't really investigated the differences between netty 3.2 and 3.5. A 
> quick search seems to say it's ok, but don't hesitate to raise a warning...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495947#comment-13495947
 ] 

Lars Hofhansl commented on HBASE-5898:
--

Will update the comment. I would like to get a HadoopQA run through, but it's 
currently broken. (possibly related to HBASE-7104)

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 
> 5898-v4.txt, HBASE-5898-0.patch, HBASE-5898-1.patch, HBASE-5898-1.patch, 
> hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6356) printStackTrace in FSUtils

2012-11-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495946#comment-13495946
 ] 

stack commented on HBASE-6356:
--

+1 on commit

> printStackTrace in FSUtils
> --
>
> Key: HBASE-6356
> URL: https://issues.apache.org/jira/browse/HBASE-6356
> Project: HBase
>  Issue Type: Bug
>  Components: Client, master, regionserver
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Priority: Trivial
>  Labels: noob
> Attachments: HBASE-6356.patch
>
>
> This is bad...
> {noformat}
> public boolean accept(Path p) {
>   boolean isValid = false;
>   try {
>     if (HConstants.HBASE_NON_USER_TABLE_DIRS.contains(p.toString())) {
>       isValid = false;
>     } else {
>       isValid = this.fs.getFileStatus(p).isDir();
>     }
>   } catch (IOException e) {
>     e.printStackTrace();  < 
>   }
>   return isValid;
> }
> {noformat}
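The likely shape of the fix — routing the failure through the logging framework instead of printing to stderr — can be sketched like this. It is a self-contained stand-in: `java.util.logging` and a minimal `Fs` interface replace commons-logging and Hadoop's `FileSystem`, and the `HBASE_NON_USER_TABLE_DIRS` check is omitted for brevity.

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class DirFilter {
    private static final Logger LOG = Logger.getLogger(DirFilter.class.getName());

    /** Minimal stand-in for the FileSystem lookup used by the real filter. */
    interface Fs {
        boolean isDir(String path) throws IOException;
    }

    private final Fs fs;

    DirFilter(Fs fs) {
        this.fs = fs;
    }

    public boolean accept(String p) {
        try {
            return fs.isDir(p);
        } catch (IOException e) {
            // Log the failure (with its stack trace) through the framework,
            // instead of e.printStackTrace() to stderr.
            LOG.log(Level.WARNING, "Skipping unreadable path " + p, e);
            return false;
        }
    }
}
```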

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7104) HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2

2012-11-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495944#comment-13495944
 ] 

stack commented on HBASE-7104:
--

Back it out for now I'd say [~lhofhansl]

> HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2
> --
>
> Key: HBASE-7104
> URL: https://issues.apache.org/jira/browse/HBASE-7104
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 7104.v1.patch
>
>
> We've got 3 of them on trunk.
> [INFO] org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT
> [INFO] +- io.netty:netty:jar:3.5.0.Final:compile
> [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.3:compile
> [INFO] |  \- org.jboss.netty:netty:jar:3.2.2.Final:compile
> [INFO] org.apache.hbase:hbase-hadoop2-compat:jar:0.95-SNAPSHOT
> [INFO] +- org.apache.hadoop:hadoop-client:jar:2.0.2-alpha:compile
> [INFO] |  +- 
> org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.0.2-alpha:compile
> [INFO] |  |  \- org.jboss.netty:netty:jar:3.2.4.Final:compile
> The patch attached:
> - fixes this for the hadoop 1 profile
> - bumps the netty version to 3.5.9
> - does not fix it for hadoop 2. I don't know why, but I haven't investigated: 
> as it's still alpha, maybe they will change the version on the hadoop side anyway.
> Tests are ok.
> I haven't really investigated the differences between netty 3.2 and 3.5. A 
> quick search seems to say it's ok, but don't hesitate to raise a warning...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6470) SingleColumnValueFilter with private fields and methods

2012-11-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495943#comment-13495943
 ] 

stack commented on HBASE-6470:
--

[~benkimkimben] Did you attach the patch?

> SingleColumnValueFilter with private fields and methods
> ---
>
> Key: HBASE-6470
> URL: https://issues.apache.org/jira/browse/HBASE-6470
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Affects Versions: 0.94.0
>Reporter: Benjamin Kim
>Assignee: Benjamin Kim
>  Labels: patch
> Fix For: 0.96.0
>
>
> Why are most fields and methods declared private in SingleColumnValueFilter?
> I'm trying to extend the functions of the SingleColumnValueFilter to support 
> complex column types such as JSON, Array, CSV, etc.
> But inheriting from SingleColumnValueFilter doesn't give any benefit, since I 
> have to rewrite the code. 
> I think all private fields and methods could be made protected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6352) Add copy method in Bytes

2012-11-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495942#comment-13495942
 ] 

stack commented on HBASE-6352:
--

[~jmspaggi] Ignore the FAILURE state.  The unit test failures are unrelated to your 
patch.  The FAILURE is more frequent than we would like.  Regarding other JIRAs, 
I was just wondering if you had fixed others and if so I need to assign them 
retroactively to you so you get credit for contribs done (I wasn't poking 
you to do more work -- smile).

> Add copy method in Bytes
> 
>
> Key: HBASE-6352
> URL: https://issues.apache.org/jira/browse/HBASE-6352
> Project: HBase
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 0.94.0
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
>  Labels: Bytes, Util
> Fix For: 0.96.0
>
> Attachments: HBASE_JIRA_6352.patch, HBASE_JIRA_6352_v2.patch, 
> HBASE_JIRA_6352_v3.patch, HBASE_JIRA_6352_v4.patch, HBASE_JIRA_6352_v5.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Having a "copy" method into Bytes might be nice to reduce client code size 
> and improve readability.
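One plausible shape for such a helper, built on the standard library (illustrative only; the committed method may differ):

```java
import java.util.Arrays;

public class BytesCopy {
    /** Returns a new array holding bytes[offset .. offset+length). */
    public static byte[] copy(byte[] bytes, int offset, int length) {
        return Arrays.copyOfRange(bytes, offset, offset + length);
    }

    /** Convenience overload: copy the whole array. */
    public static byte[] copy(byte[] bytes) {
        return copy(bytes, 0, bytes.length);
    }
}
```

On the client side this replaces the recurring `new byte[n]` + `System.arraycopy(...)` pair with a single readable call.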

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HBASE-7104) HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2

2012-11-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495941#comment-13495941
 ] 

Lars Hofhansl edited comment on HBASE-7104 at 11/13/12 4:36 AM:


If I remove part of the patch, like so:
{code}
-  <dependency>
-    <groupId>org.apache.hadoop</groupId>
-    <artifactId>hadoop-mapreduce-client-app</artifactId>
-    <version>${hadoop.version}</version>
-    <exclusions>
-      <exclusion>
-        <groupId>org.jboss.netty</groupId>
-        <artifactId>netty</artifactId>
-      </exclusion>
-    </exclusions>
-  </dependency>
{code}

The compile is happy again.

  was (Author: lhofhansl):
If I remove part of the patch:
{code}
-    <groupId>org.apache.hadoop</groupId>
-    <artifactId>hadoop-mapreduce-client-app</artifactId>
-    <version>${hadoop.version}</version>
-    <exclusions>
-      <exclusion>
-        <groupId>org.jboss.netty</groupId>
-        <artifactId>netty</artifactId>
-      </exclusion>
-    </exclusions>
-  </dependency>
-  
{code}

The compile is happy again.
  
> HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2
> --
>
> Key: HBASE-7104
> URL: https://issues.apache.org/jira/browse/HBASE-7104
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 7104.v1.patch
>
>
> We've got 3 of them on trunk.
> [INFO] org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT
> [INFO] +- io.netty:netty:jar:3.5.0.Final:compile
> [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.3:compile
> [INFO] |  \- org.jboss.netty:netty:jar:3.2.2.Final:compile
> [INFO] org.apache.hbase:hbase-hadoop2-compat:jar:0.95-SNAPSHOT
> [INFO] +- org.apache.hadoop:hadoop-client:jar:2.0.2-alpha:compile
> [INFO] |  +- 
> org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.0.2-alpha:compile
> [INFO] |  |  \- org.jboss.netty:netty:jar:3.2.4.Final:compile
> The patch attached:
> - fixes this for the hadoop 1 profile
> - bumps the netty version to 3.5.9
> - does not fix it for hadoop 2. I don't know why, but I haven't investigated: 
> as it's still alpha, maybe they will change the version on the hadoop side anyway.
> Tests are ok.
> I haven't really investigated the differences between netty 3.2 and 3.5. A 
> quick search seems to say it's ok, but don't hesitate to raise a warning...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7104) HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2

2012-11-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495941#comment-13495941
 ] 

Lars Hofhansl commented on HBASE-7104:
--

If I remove part of the patch:
{code}
-    <groupId>org.apache.hadoop</groupId>
-    <artifactId>hadoop-mapreduce-client-app</artifactId>
-    <version>${hadoop.version}</version>
-    <exclusions>
-      <exclusion>
-        <groupId>org.jboss.netty</groupId>
-        <artifactId>netty</artifactId>
-      </exclusion>
-    </exclusions>
-  </dependency>
-  
{code}

The compile is happy again.

> HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2
> --
>
> Key: HBASE-7104
> URL: https://issues.apache.org/jira/browse/HBASE-7104
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 7104.v1.patch
>
>
> We've got 3 of them on trunk.
> [INFO] org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT
> [INFO] +- io.netty:netty:jar:3.5.0.Final:compile
> [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.3:compile
> [INFO] |  \- org.jboss.netty:netty:jar:3.2.2.Final:compile
> [INFO] org.apache.hbase:hbase-hadoop2-compat:jar:0.95-SNAPSHOT
> [INFO] +- org.apache.hadoop:hadoop-client:jar:2.0.2-alpha:compile
> [INFO] |  +- 
> org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.0.2-alpha:compile
> [INFO] |  |  \- org.jboss.netty:netty:jar:3.2.4.Final:compile
> The patch attached:
> - fixes this for the hadoop 1 profile
> - bumps the netty version to 3.5.9
> - does not fix it for hadoop 2. I don't know why, but I haven't investigated: 
> as it's still alpha, maybe they will change the version on the hadoop side anyway.
> Tests are ok.
> I haven't really investigated the differences between netty 3.2 and 3.5. A 
> quick search seems to say it's ok, but don't hesitate to raise a warning...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7149) IBM JDK specific issues

2012-11-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495940#comment-13495940
 ] 

stack commented on HBASE-7149:
--

[~kumarr] You seen this issue that Andrew filed?

> IBM JDK specific issues
> ---
>
> Key: HBASE-7149
> URL: https://issues.apache.org/jira/browse/HBASE-7149
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>
> Since there's an uptick in IBM JDK related bug reports, let's have an 
> umbrella to track them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7154) Move the call decode from Reader to Handler

2012-11-12 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495938#comment-13495938
 ] 

binlijin commented on HBASE-7154:
-

It is difficult to implement because we have multiple callQueues and different 
handlers.

> Move the call decode from Reader to Handler
> ---
>
> Key: HBASE-7154
> URL: https://issues.apache.org/jira/browse/HBASE-7154
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
> Environment: 0.89-fb has already done this.
>Reporter: binlijin
>
> HBaseServer has a few kinds of thread:
> {code}
>   Listener accept connction, pick a Reader to read call in this connection.
>   Reader (default 10 numbers), read call and decode call , put in callQueue.
>   Handler take call from callQueue and process call, write response.
>   Responder sends responses of RPC back to clients.
> {code}
> We can move the call decode from Reader to Handler, so the reader thread just 
> reads data for the connection. 
> The number of readers can also be availableProcessors+1
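The proposed split can be sketched as: readers only move raw bytes into the queue, and handlers pay the decode cost themselves. A minimal illustration with hypothetical names, not the actual HBaseServer code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DecodeInHandler {
    // The shared queue now carries raw, undecoded call bytes.
    static final BlockingQueue<byte[]> callQueue = new ArrayBlockingQueue<>(100);

    /** Reader side: pure I/O, no deserialization, so readers never stall on decode. */
    static boolean reader(byte[] rawCall) {
        return callQueue.offer(rawCall);
    }

    /** Handler side: takes raw bytes off the queue and decodes them itself. */
    static String handler() {
        byte[] raw = callQueue.poll();
        return raw == null ? null : new String(raw); // stand-in for real call decoding
    }
}
```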

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6945) Compilation errors when using non-Sun JDKs to build HBase-0.94

2012-11-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495935#comment-13495935
 ] 

stack commented on HBASE-6945:
--

[~kumarr] Would suggest you include your new JVM class in this patch (and 
remove the old MXBean class in this patch too).  Just close HBASE-7150 as won't 
fix and do your fix up all in here?

> Compilation errors when using non-Sun JDKs to build HBase-0.94
> --
>
> Key: HBASE-6945
> URL: https://issues.apache.org/jira/browse/HBASE-6945
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.94.1
> Environment: RHEL 6.3, IBM Java 7 
>Reporter: Kumar Ravi
>Assignee: Kumar Ravi
>  Labels: patch
> Fix For: 0.94.4
>
> Attachments: HBASE-6945_ResourceCheckerJUnitListener.patch
>
>
> When using IBM Java 7 to build HBase-0.94.1, the following compilation error 
> is seen. 
> [INFO] -
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[23,25]
>  error: package com.sun.management does not exist
> [ERROR] 
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[46,25]
>  error: cannot find symbol
> [ERROR]   symbol:   class UnixOperatingSystemMXBean
>   location: class ResourceAnalyzer
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[75,29]
>  error: cannot find symbol
> [ERROR]   symbol:   class UnixOperatingSystemMXBean
>   location: class ResourceAnalyzer
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[76,23]
>  error: cannot find symbol
> [INFO] 4 errors 
> [INFO] -
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
>  I have a patch available which should work for all JDKs including Sun.
>  I am in the process of testing this patch. Preliminary tests indicate the 
> build is working fine with this patch. I will post this patch when I am done 
> testing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-12 Thread Varun Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495936#comment-13495936
 ] 

Varun Sharma commented on HBASE-4583:
-

I see - I thought since 0.94 is the current stable version, the one we 
currently use and will use heavily for counters, we wanted to see if we could 
use it. Eventually, the other JIRA I am looking into (allowing puts + 
increments + deletes in a single mutation) is also important for the use case: 
basically to have counts + puts go in as a single mutation and get a 
consistent view of the table. It also helps cut down write latency for us.

But judging from the reaction, it looks like we might not want that other JIRA 
to go into 0.94 either (even if we flag-protected the change), in which case I 
am okay with this not going through (we can probably use a manually patched 
version for our usage). Btw, when is 0.96 expected to ship? It has lots of 
good features/fixes in it.

Thanks!

> Integrate RWCC with Append and Increment operations
> ---
>
> Key: HBASE-4583
> URL: https://issues.apache.org/jira/browse/HBASE-4583
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.96.0
>
> Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v4.txt, 
> 4583-trunk-less-radical.txt, 4583-trunk-less-radical-v2.txt, 
> 4583-trunk-less-radical-v3.txt, 4583-trunk-less-radical-v4.txt, 
> 4583-trunk-less-radical-v5.txt, 4583-trunk-less-radical-v6.txt, 
> 4583-trunk-radical.txt, 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 
> 4583.txt, 4583-v2.txt, 4583-v3.txt, 4583-v4.txt, 4584-0.94-v1.txt
>
>
> Currently Increment and Append operations do not work with RWCC and hence a 
> client could see the results of multiple such operation mixed in the same 
> Get/Scan.
> The semantics might be a bit more interesting here as upsert adds and removes 
> to and from the memstore.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7148) Some files in hbase-examples module miss license header

2012-11-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495934#comment-13495934
 ] 

Ted Yu commented on HBASE-7148:
---

Thanks for the patch, Enis.

> Some files in hbase-examples module miss license header
> ---
>
> Key: HBASE-7148
> URL: https://issues.apache.org/jira/browse/HBASE-7148
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Enis Soztutar
> Attachments: hbase-7148.patch
>
>
> Trunk build 3530 got to building hbase-examples module but failed:
> {code}
> [INFO] HBase - Examples .. FAILURE [3.222s]
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 29:21.569s
> [INFO] Finished at: Sun Nov 11 15:17:35 UTC 2012
> [INFO] Final Memory: 68M/642M
> [INFO] 
> 
> [ERROR] Failed to execute goal org.apache.rat:apache-rat-plugin:0.8:check 
> (default) on project hbase-examples: Too many unapproved licenses: 20 -> 
> [Help 1]
> {code}
> Looks like license headers are missing in some of the files in hbase-examples 
> module

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7152) testShouldCheckMasterFailOverWhenMETAIsInOpenedState times out occasionally

2012-11-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495933#comment-13495933
 ] 

stack commented on HBASE-7152:
--

+1

> testShouldCheckMasterFailOverWhenMETAIsInOpenedState times out occasionally
> ---
>
> Key: HBASE-7152
> URL: https://issues.apache.org/jira/browse/HBASE-7152
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Attachments: trunk-7152.patch
>
>
> {noformat}
> java.lang.Exception: test timed out after 18 milliseconds
>   at java.lang.Throwable.fillInStackTrace(Native Method)
>   at java.lang.Throwable.<init>(Throwable.java:181)
>   at 
> org.apache.log4j.spi.LoggingEvent.getLocationInformation(LoggingEvent.java:253)
>   at 
> org.apache.log4j.helpers.PatternParser$ClassNamePatternConverter.getFullyQualifiedName(PatternParser.java:555)
>   at 
> org.apache.log4j.helpers.PatternParser$NamedPatternConverter.convert(PatternParser.java:528)
>   at 
> org.apache.log4j.helpers.PatternConverter.format(PatternConverter.java:65)
>   at org.apache.log4j.PatternLayout.format(PatternLayout.java:506)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at 
> org.apache.commons.logging.impl.Log4JLogger.debug(Log4JLogger.java:188)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.join(LocalHBaseCluster.java:407)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.join(MiniHBaseCluster.java:408)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:599)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:573)
>   at 
> org.apache.hadoop.hbase.master.TestMasterFailover.testShouldCheckMasterFailOverWhenMETAIsInOpenedState(TestMasterFailover.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7150) Utility class to determine File Descriptor counts depending on the JVM Vendor

2012-11-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495930#comment-13495930
 ] 

stack commented on HBASE-7150:
--

[~kumarr] Thanks for opening this issue.  Thanks for looking at using hadoop 
shellcommandexecutor.  It looks like it won't work for you.  Would suggest that 
you include in your patch a note to this effect so that the next time someone 
is reading this code, you'll answer their question should they have the same 
thought I had.  I notice the patch includes a new class named JVM.java.  It 
does not include removal of the old class nor patch to hook up the code to use 
this new class rather than the one you'd have us remove (do 'svn rm' of the old 
class before making the patch).  Thanks.
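The usual way to build such a vendor-neutral utility is to avoid any compile-time reference to `com.sun.management` and look the Sun-specific method up reflectively, falling back gracefully on JVMs that lack it. A hedged sketch of that idea (the `JvmInfo` name is made up; this is not the JVM class from the attached patch):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.reflect.Method;

// Sketch: query the open-FD count without importing com.sun.management,
// so the code compiles on IBM and other non-Sun JDKs.
public class JvmInfo {
  public static long getOpenFileDescriptorCount() {
    OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    try {
      // Present on Sun/Oracle JDKs via UnixOperatingSystemMXBean; looked up
      // reflectively so there is no compile-time dependency on that class.
      Method m = os.getClass().getMethod("getOpenFileDescriptorCount");
      m.setAccessible(true); // the implementing class is package-private
      return (Long) m.invoke(os);
    } catch (Exception e) {
      return -1; // vendor JVM without this bean; callers must handle -1
    }
  }
}
```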

> Utility class to determine File Descriptor counts depending on the JVM Vendor
> -
>
> Key: HBASE-7150
> URL: https://issues.apache.org/jira/browse/HBASE-7150
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, util
>Affects Versions: 0.94.1, 0.94.2
> Environment: Non Sun JDK environments
>Reporter: Kumar Ravi
>Assignee: Kumar Ravi
> Fix For: 0.96.0, 0.94.4
>
> Attachments: HBASE-7150.patch
>
>
> This issue is being opened to replace the OSMXBean class that was submitted 
> in HBASE-6965. A new utility class called JVM is being added for use by the 
> ResourceChecker (0.94 branch) and ResourceCheckerJUnitListener test classes.
> The patch for the ResourceChecker classes is being addressed by HBASE-6945.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5984) TestLogRolling.testLogRollOnPipelineRestart failed with HADOOP 2.0.0

2012-11-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5984:
-

Fix Version/s: 0.94.3

Applied to 0.94 too at [~aklochkov] 's request.

> TestLogRolling.testLogRollOnPipelineRestart failed with HADOOP 2.0.0
> 
>
> Key: HBASE-5984
> URL: https://issues.apache.org/jira/browse/HBASE-5984
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.96.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.94.3, 0.96.0
>
> Attachments: hbase_5984.patch
>
>
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1455809779-127.0.0.1-1336670196362:blk_-6960847342982670493_1028;
>  getBlockSize()=1474; corrupt=false; offset=0; locs=[127.0.0.1:58343, 
> 127.0.0.1:48427]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:232)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:177)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:119)
>   at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:112)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:928)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.openFile(SequenceFile.java:1768)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.openFile(SequenceFileLogReader.java:66)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1688)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1709)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:58)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:166)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:659)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testLogRollOnPipelineRestart(TestLogRolling.java:498)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7104) HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2

2012-11-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495923#comment-13495923
 ] 

Lars Hofhansl commented on HBASE-7104:
--

Precommit fails with the same error, but the trunk build does not. Strange.

> HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2
> --
>
> Key: HBASE-7104
> URL: https://issues.apache.org/jira/browse/HBASE-7104
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 7104.v1.patch
>
>
> We've got 3 of them on trunk.
> [INFO] org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT
> [INFO] +- io.netty:netty:jar:3.5.0.Final:compile
> [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.3:compile
> [INFO] |  \- org.jboss.netty:netty:jar:3.2.2.Final:compile
> [INFO] org.apache.hbase:hbase-hadoop2-compat:jar:0.95-SNAPSHOT
> [INFO] +- org.apache.hadoop:hadoop-client:jar:2.0.2-alpha:compile
> [INFO] |  +- 
> org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.0.2-alpha:compile
> [INFO] |  |  \- org.jboss.netty:netty:jar:3.2.4.Final:compile
> The patch attached:
> - fixes this for the hadoop 1 profile
> - bumps the netty version to 3.5.9
> - does not fix it for hadoop 2. I don't know why, but I haven't investigated: 
> as it's still alpha, maybe they will change the version on the hadoop side anyway.
> Tests are ok.
> I haven't really investigated the differences between netty 3.2 and 3.5. A 
> quick search seems to say it's ok, but don't hesitate to raise a warning...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7148) Some files in hbase-examples module miss license header

2012-11-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495925#comment-13495925
 ] 

stack commented on HBASE-7148:
--

+1 on patch.

> Some files in hbase-examples module miss license header
> ---
>
> Key: HBASE-7148
> URL: https://issues.apache.org/jira/browse/HBASE-7148
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Enis Soztutar
> Attachments: hbase-7148.patch
>
>
> Trunk build 3530 got to building hbase-examples module but failed:
> {code}
> [INFO] HBase - Examples .. FAILURE [3.222s]
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 29:21.569s
> [INFO] Finished at: Sun Nov 11 15:17:35 UTC 2012
> [INFO] Final Memory: 68M/642M
> [INFO] 
> 
> [ERROR] Failed to execute goal org.apache.rat:apache-rat-plugin:0.8:check 
> (default) on project hbase-examples: Too many unapproved licenses: 20 -> 
> [Help 1]
> {code}
> Looks like license headers are missing in some of the files in hbase-examples 
> module

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7104) HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2

2012-11-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495922#comment-13495922
 ] 

stack commented on HBASE-7104:
--

It fails for Lars but works for you [~nkeywal]?

> HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2
> --
>
> Key: HBASE-7104
> URL: https://issues.apache.org/jira/browse/HBASE-7104
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 7104.v1.patch
>
>
> We've got 3 of them on trunk.
> [INFO] org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT
> [INFO] +- io.netty:netty:jar:3.5.0.Final:compile
> [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.3:compile
> [INFO] |  \- org.jboss.netty:netty:jar:3.2.2.Final:compile
> [INFO] org.apache.hbase:hbase-hadoop2-compat:jar:0.95-SNAPSHOT
> [INFO] +- org.apache.hadoop:hadoop-client:jar:2.0.2-alpha:compile
> [INFO] |  +- 
> org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.0.2-alpha:compile
> [INFO] |  |  \- org.jboss.netty:netty:jar:3.2.4.Final:compile
> The patch attached:
> - fixes this for the hadoop 1 profile
> - bumps the netty version to 3.5.9
> - does not fix it for hadoop 2. I don't know why, but I haven't investigated: 
> as it's still alpha, maybe they will change the version on the hadoop side anyway.
> Tests are ok.
> I haven't really investigated the differences between netty 3.2 and 3.5. A 
> quick search seems to say it's ok, but don't hesitate to raise a warning...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495920#comment-13495920
 ] 

Lars Hofhansl commented on HBASE-4583:
--

I'm actually not opposed to 0.94, but other folks voiced (valid) concerns.
We can fix the race condition between Puts followed by Increment or Append, but 
I don't think that would be that useful without the rest of this patch.

I see this more as a strategic correctness fix. This has been "incorrect" since 
the beginning, so not fixing this in 0.94 is OK, I think.

Anyway. Thanks for working on a 0.94 patch, Varun.
So you have a strong usecase for this in 0.94?
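The inconsistency under discussion (a Get/Scan seeing a half-applied Increment or Append) can be pictured with a toy read-point model. This is only an illustration of the MVCC idea, not HBase's actual RWCC implementation:

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy model: each mutation gets a write number; readers only see cells whose
// write number is at or below the published read point, so a mutation is
// either fully visible or fully invisible.
public class ReadPointSketch {
  private final AtomicLong nextWriteNumber = new AtomicLong(1);
  private volatile long readPoint = 0;

  public long beginMutation() {
    return nextWriteNumber.getAndIncrement();
  }

  // Cells written with a number above the read point are invisible to scans.
  public boolean isVisible(long cellWriteNumber) {
    return cellWriteNumber <= readPoint;
  }

  public void completeMutation(long writeNumber) {
    // Simplistic: ignores out-of-order completion, which the real
    // implementation must track with a queue of pending write entries.
    readPoint = Math.max(readPoint, writeNumber);
  }
}
```

Integrating Increment/Append with this scheme is what the patch does; the upsert path complicates it because it both adds to and removes from the memstore.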


> Integrate RWCC with Append and Increment operations
> ---
>
> Key: HBASE-4583
> URL: https://issues.apache.org/jira/browse/HBASE-4583
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.96.0
>
> Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v4.txt, 
> 4583-trunk-less-radical.txt, 4583-trunk-less-radical-v2.txt, 
> 4583-trunk-less-radical-v3.txt, 4583-trunk-less-radical-v4.txt, 
> 4583-trunk-less-radical-v5.txt, 4583-trunk-less-radical-v6.txt, 
> 4583-trunk-radical.txt, 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 
> 4583.txt, 4583-v2.txt, 4583-v3.txt, 4583-v4.txt, 4584-0.94-v1.txt
>
>
> Currently Increment and Append operations do not work with RWCC and hence a 
> client could see the results of multiple such operation mixed in the same 
> Get/Scan.
> The semantics might be a bit more interesting here as upsert adds and removes 
> to and from the memstore.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495919#comment-13495919
 ] 

stack commented on HBASE-5898:
--

The repeat is a bit strange but it is at least easy to follow what is going on 
in this block getting code so +1 on commit for 0.94.  No harm in beefing up 
this comment on commit:

{code}
+   * @param repeat Whether this is a repeat lookup for the same block
+   *{@see HFileReaderV2#readBlock(long, long, boolean, boolean, 
boolean, BlockType)}
{code}

You refer to the code responsible for this 'repeat' param but maybe make 
mention of not wanting to double-count metrics when doing double-check locking 
in the cited code?

Good on you Lars.
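The double-checked pattern under review can be sketched as below (illustrative names, not HFileReaderV2's actual code). The second lookup under the lock is the 'repeat' case that should not be double-counted in cache metrics:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of double-checked locking for a block cache: probe without a lock
// first, and only take the lock (and re-check) on a miss.
public class BlockCacheSketch {
  private final ConcurrentMap<Long, String> cache = new ConcurrentHashMap<>();
  private final Object lock = new Object(); // stand-in for a per-offset IdLock
  public int loads = 0; // counts real block reads, not repeat lookups

  public String getBlock(long offset) {
    String block = cache.get(offset); // first check: lock-free hit fast path
    if (block != null) {
      return block;
    }
    synchronized (lock) {
      block = cache.get(offset);      // second, "repeat" check under the lock:
      if (block != null) {            // another thread may have loaded it;
        return block;                 // metrics must not count this hit twice
      }
      loads++;
      block = "block@" + offset;      // stand-in for the actual disk read
      cache.put(offset, block);
      return block;
    }
  }
}
```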

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 
> 5898-v4.txt, HBASE-5898-0.patch, HBASE-5898-1.patch, HBASE-5898-1.patch, 
> hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7148) Some files in hbase-examples module miss license header

2012-11-12 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-7148:
-

Status: Patch Available  (was: Open)

> Some files in hbase-examples module miss license header
> ---
>
> Key: HBASE-7148
> URL: https://issues.apache.org/jira/browse/HBASE-7148
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Enis Soztutar
> Attachments: hbase-7148.patch
>
>
> Trunk build 3530 got to building hbase-examples module but failed:
> {code}
> [INFO] HBase - Examples .. FAILURE [3.222s]
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 29:21.569s
> [INFO] Finished at: Sun Nov 11 15:17:35 UTC 2012
> [INFO] Final Memory: 68M/642M
> [INFO] 
> 
> [ERROR] Failed to execute goal org.apache.rat:apache-rat-plugin:0.8:check 
> (default) on project hbase-examples: Too many unapproved licenses: 20 -> 
> [Help 1]
> {code}
> Looks like license headers are missing in some of the files in hbase-examples 
> module

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7148) Some files in hbase-examples module miss license header

2012-11-12 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-7148:
-

Attachment: hbase-7148.patch

One-liner patch. 

Checked with jenkins configuration:
{code}
mvn -PrunAllTests -DskipTests package assembly:assembly site -Prelease
{code}

> Some files in hbase-examples module miss license header
> ---
>
> Key: HBASE-7148
> URL: https://issues.apache.org/jira/browse/HBASE-7148
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Enis Soztutar
> Attachments: hbase-7148.patch
>
>
> Trunk build 3530 got to building hbase-examples module but failed:
> {code}
> [INFO] HBase - Examples .. FAILURE [3.222s]
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 29:21.569s
> [INFO] Finished at: Sun Nov 11 15:17:35 UTC 2012
> [INFO] Final Memory: 68M/642M
> [INFO] 
> 
> [ERROR] Failed to execute goal org.apache.rat:apache-rat-plugin:0.8:check 
> (default) on project hbase-examples: Too many unapproved licenses: 20 -> 
> [Help 1]
> {code}
> Looks like license headers are missing in some of the files in hbase-examples 
> module

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7148) Some files in hbase-examples module miss license header

2012-11-12 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-7148:
-

Assignee: Enis Soztutar

> Some files in hbase-examples module miss license header
> ---
>
> Key: HBASE-7148
> URL: https://issues.apache.org/jira/browse/HBASE-7148
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Enis Soztutar
>
> Trunk build 3530 got to building hbase-examples module but failed:
> {code}
> [INFO] HBase - Examples .. FAILURE [3.222s]
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 29:21.569s
> [INFO] Finished at: Sun Nov 11 15:17:35 UTC 2012
> [INFO] Final Memory: 68M/642M
> [INFO] 
> 
> [ERROR] Failed to execute goal org.apache.rat:apache-rat-plugin:0.8:check 
> (default) on project hbase-examples: Too many unapproved licenses: 20 -> 
> [Help 1]
> {code}
> Looks like license headers are missing in some of the files in hbase-examples 
> module

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7151) Better log message for Per-CF compactions

2012-11-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495917#comment-13495917
 ] 

Hudson commented on HBASE-7151:
---

Integrated in HBase-0.94 #581 (See 
[https://builds.apache.org/job/HBase-0.94/581/])
HBASE-7151 Better log message for Per-CF compactions (Revision 1408501)

 Result = FAILURE
gchanan : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java


> Better log message for Per-CF compactions
> -
>
> Key: HBASE-7151
> URL: https://issues.apache.org/jira/browse/HBASE-7151
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Trivial
> Fix For: 0.94.3, 0.96.0
>
> Attachments: HBASE-7151-94.patch, HBASE-7151-trunk.patch
>
>
> A coworker pointed out that in HBASE-4913 it would be nice to include the 
> column family in the log message for a per-CF compaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7103) Need to fail split if SPLIT znode is deleted even before the split is completed.

2012-11-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495909#comment-13495909
 ] 

stack commented on HBASE-7103:
--

https://issues.apache.org/jira/browse/ZOOKEEPER-1297 adds a Stat to the create 
call.  It is not yet committed.  Patrick says that what we are doing is the 
best that can be done given current state of the API.

TRUNK patch looks good to me.

bq. Why so? Because now if the znode exists we will not start the split anyway, 
so there is only one split going on, right? 

That sounds right Ram.  So the executor that created the SPLITTING znode should 
be the legitimate one to remove it in rollback.  Good stuff.
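The intent of the fix (fail the split if the SPLITTING znode has vanished, rather than silently treating the transition as successful) can be sketched with an atomic compare-and-set. An in-memory map stands in for ZooKeeper here, and all names are hypothetical:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch: transitioning SPLITTING -> SPLIT succeeds only if the node still
// exists in the SPLITTING state; a deleted node makes the split fail loudly.
public class SplitTransitionSketch {
  private final ConcurrentMap<String, String> znodes = new ConcurrentHashMap<>();

  public void createSplitting(String region) {
    znodes.put(region, "SPLITTING");
  }

  public void deleteNode(String region) {
    znodes.remove(region); // e.g. a concurrent rollback deleting the znode
  }

  // Atomic check-and-transition; false means the split must be failed,
  // never reported as successful.
  public boolean transitionToSplit(String region) {
    return znodes.replace(region, "SPLITTING", "SPLIT");
  }
}
```

With real ZooKeeper, the same effect comes from a versioned setData, which throws NoNodeException when the znode is gone instead of returning success.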

> Need to fail split if SPLIT znode is deleted even before the split is 
> completed.
> 
>
> Key: HBASE-7103
> URL: https://issues.apache.org/jira/browse/HBASE-7103
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 7103-6088-revert.txt, HBASE-7103_0.94.patch, 
> HBASE-7103_0.94.patch, HBASE-7103_testcase.patch, HBASE-7103_trunk.patch
>
>
> This came up after the following mail on the dev list:
> 'infinite loop of RS_ZK_REGION_SPLIT on .94.2'.
> The reason for the problem is the following sequence of steps:
> -> Initially the parent region P1 starts splitting.
> -> The split is going on normally.
> -> Another split starts at the same time for the same region P1. (Not sure 
> why this started).
> -> Rollback happens seeing an already existing node.
> -> This node gets deleted in rollback and nodeDeleted Event starts.
> -> In nodeDeleted event the RIT for the region P1 gets deleted.
> -> Because of this there is no region in RIT.
> -> Now the first split finishes.  Here the problem is that we try to 
> transition the node from SPLITTING to SPLIT, but the node does not even exist.
> We don't take any action on this, though; we think it is successful.
> -> Because of this SplitRegionHandler never gets invoked.



[jira] [Updated] (HBASE-7154) Move the call decode from Reader to Handler

2012-11-12 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-7154:


Description: 
HBaseServer has a few kinds of thread:
{code}
  Listener: accepts connections and picks a Reader to read calls on each connection.
  Reader (10 by default): reads and decodes each call and puts it in the callQueue.
  Handler: takes calls from the callQueue, processes them, and writes responses.
  Responder: sends RPC responses back to clients.
{code}
We can move the call decode from Reader to Handler, so reader threads just read 
data from connections.

The number of Readers could also be availableProcessors+1.

  was:
HBaseServer has a few kinds of thread:
  Listener accept connction, pick a Reader to read call in this connection.
  Reader (default 10 numbers), read call and decode call , put in callQueue.
  Handler take call from callQueue and process call write response.
  Responder sends responses of RPC back to clients.
We can move the call decode from Reader to Handler, so reader thread just read 
data for connection. 



> Move the call decode from Reader to Handler
> ---
>
> Key: HBASE-7154
> URL: https://issues.apache.org/jira/browse/HBASE-7154
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
> Environment: 0.89-fb has already done this.
>Reporter: binlijin
>
> HBaseServer has a few kinds of thread:
> {code}
>   Listener: accepts connections and picks a Reader to read calls on each connection.
>   Reader (10 by default): reads and decodes each call and puts it in the callQueue.
>   Handler: takes calls from the callQueue, processes them, and writes responses.
>   Responder: sends RPC responses back to clients.
{code}
> We can move the call decode from Reader to Handler, so reader threads just 
> read data from connections.
> The number of Readers could also be availableProcessors+1.
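The Reader-to-Handler move described above can be sketched as a tiny producer/consumer model. This is a hypothetical illustration, not HBase's actual RPC code: the class and method names (DecodeInHandler, readerStep, handlerStep) are invented, and a real server would loop over many connections and calls.

```java
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: the Reader enqueues raw bytes only, and the
// decode step runs in the Handler after the call is dequeued.
public class DecodeInHandler {
    static final BlockingQueue<byte[]> callQueue = new ArrayBlockingQueue<>(16);

    // Reader thread body: read bytes off the wire and enqueue them undecoded.
    static void readerStep(byte[] wireData) {
        callQueue.offer(wireData); // no decoding here
    }

    // Handler thread body: dequeue, decode, process, and build the response.
    static String handlerStep() {
        byte[] raw = callQueue.poll();
        String call = new String(raw, StandardCharsets.UTF_8); // decode moved here
        return "response:" + call;
    }

    public static void main(String[] args) {
        readerStep("get row1".getBytes(StandardCharsets.UTF_8));
        System.out.println(handlerStep()); // prints "response:get row1"
    }
}
```

With the decode off the Reader threads, a small fixed Reader pool (e.g. availableProcessors+1) only shuffles bytes, and CPU-heavy decoding scales with the Handler pool instead.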



[jira] [Updated] (HBASE-7154) Move the call decode from Reader to Handler

2012-11-12 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-7154:


Description: 
HBaseServer has a few kinds of thread:
  Listener: accepts connections and picks a Reader to read calls on each connection.
  Reader (10 by default): reads and decodes each call and puts it in the callQueue.
  Handler: takes calls from the callQueue, processes them, and writes responses.
  Responder: sends RPC responses back to clients.
We can move the call decode from Reader to Handler, so reader threads just read 
data from connections.


> Move the call decode from Reader to Handler
> ---
>
> Key: HBASE-7154
> URL: https://issues.apache.org/jira/browse/HBASE-7154
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
> Environment: 0.89-fb has already done this.
>Reporter: binlijin
>
> HBaseServer has a few kinds of thread:
>   Listener: accepts connections and picks a Reader to read calls on each connection.
>   Reader (10 by default): reads and decodes each call and puts it in the callQueue.
>   Handler: takes calls from the callQueue, processes them, and writes responses.
>   Responder: sends RPC responses back to clients.
> We can move the call decode from Reader to Handler, so reader threads just 
> read data from connections.



[jira] [Commented] (HBASE-7151) Better log message for Per-CF compactions

2012-11-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495894#comment-13495894
 ] 

Hudson commented on HBASE-7151:
---

Integrated in HBase-TRUNK #3533 (See 
[https://builds.apache.org/job/HBase-TRUNK/3533/])
HBASE-7151 Better log message for Per-CF compactions (Revision 1408502)

 Result = FAILURE
gchanan : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java


> Better log message for Per-CF compactions
> -
>
> Key: HBASE-7151
> URL: https://issues.apache.org/jira/browse/HBASE-7151
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Trivial
> Fix For: 0.94.3, 0.96.0
>
> Attachments: HBASE-7151-94.patch, HBASE-7151-trunk.patch
>
>
> A coworker pointed out that in HBASE-4913 it would be nice to include the 
> column family in the log message for a per-CF compaction.



[jira] [Commented] (HBASE-7154) Move the call decode from Reader to Handler

2012-11-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495890#comment-13495890
 ] 

Ted Yu commented on HBASE-7154:
---

Can you put more detail in the description?
If you can attach what was done in the 0.89-fb branch, that would be nice.

> Move the call decode from Reader to Handler
> ---
>
> Key: HBASE-7154
> URL: https://issues.apache.org/jira/browse/HBASE-7154
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
> Environment: 0.89-fb has already done this.
>Reporter: binlijin
>




[jira] [Created] (HBASE-7154) Move the call decode from Reader to Handler

2012-11-12 Thread binlijin (JIRA)
binlijin created HBASE-7154:
---

 Summary: Move the call decode from Reader to Handler
 Key: HBASE-7154
 URL: https://issues.apache.org/jira/browse/HBASE-7154
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC
 Environment: 0.89-fb has already done this.
Reporter: binlijin






[jira] [Commented] (HBASE-7144) Client should not retry the same server on NotServingRegionException

2012-11-12 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495876#comment-13495876
 ] 

ramkrishna.s.vasudevan commented on HBASE-7144:
---

Oops !! Thanks Jimmy.

> Client should not retry the same server on NotServingRegionException
> 
>
> Key: HBASE-7144
> URL: https://issues.apache.org/jira/browse/HBASE-7144
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>
> In working on HBASE-7131, we noticed that the client still retries the same 
> server in case of a NotServingRegionException.  It should relocate the region 
> instead of using the same region server.
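The intended behavior can be sketched as below. This is a hypothetical stand-alone illustration, not the HBase client code: the class, the stub exception, and the methods (NsreRetry, locateRegion, rpcGet) are all invented names, and the stub RPC simply pretends the first server no longer hosts the region.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: on an NSRE, evict the cached region location and
// re-locate, instead of retrying the same server.
public class NsreRetry {
    static class NotServingRegionException extends RuntimeException {}

    private final Map<String, String> locationCache = new HashMap<>();
    private int lookupCount = 0;

    // Stand-in for a META lookup: each fresh lookup may return a new server.
    String locateRegion(String row) {
        lookupCount++;
        return locationCache.computeIfAbsent(row, r -> "server-" + lookupCount);
    }

    // The retry path: invalidate the stale location before retrying.
    String get(String row) {
        String server = locateRegion(row);
        try {
            return rpcGet(server, row);
        } catch (NotServingRegionException e) {
            locationCache.remove(row);            // drop the stale entry
            return rpcGet(locateRegion(row), row); // retry against the new host
        }
    }

    // Stub RPC: pretend server-1 no longer serves the region.
    private String rpcGet(String server, String row) {
        if (server.equals("server-1")) throw new NotServingRegionException();
        return server + ":" + row;
    }

    public static void main(String[] args) {
        System.out.println(new NsreRetry().get("row1")); // prints "server-2:row1"
    }
}
```

Without the `locationCache.remove(row)` step, the retry would hit server-1 again and fail the same way, which is the behavior the issue reports.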



[jira] [Commented] (HBASE-7103) Need to fail split if SPLIT znode is deleted even before the split is completed.

2012-11-12 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495871#comment-13495871
 ] 

ramkrishna.s.vasudevan commented on HBASE-7103:
---

I'm very sorry, Lars.  I forgot to remove the state from the enum.  I thought 
it was to be removed from the journal alone.

> Need to fail split if SPLIT znode is deleted even before the split is 
> completed.
> 
>
> Key: HBASE-7103
> URL: https://issues.apache.org/jira/browse/HBASE-7103
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 7103-6088-revert.txt, HBASE-7103_0.94.patch, 
> HBASE-7103_0.94.patch, HBASE-7103_testcase.patch, HBASE-7103_trunk.patch
>
>
> This came up after the following mail on the dev list:
> 'infinite loop of RS_ZK_REGION_SPLIT on .94.2'.
> The reason for the problem is the following sequence of steps:
> -> Initially the parent region P1 starts splitting.
> -> The split is going on normally.
> -> Another split starts at the same time for the same region P1. (Not sure 
> why this started).
> -> Rollback happens seeing an already existing node.
> -> This node gets deleted in rollback and nodeDeleted Event starts.
> -> In nodeDeleted event the RIT for the region P1 gets deleted.
> -> Because of this there is no region in RIT.
> -> Now the first split finishes.  Here the problem is that we try to 
> transition the node from SPLITTING to SPLIT, but the node does not even exist.
> We don't take any action on this, though; we think it is successful.
> -> Because of this SplitRegionHandler never gets invoked.



[jira] [Created] (HBASE-7153) print gc option in hbase-env.sh affects hbase zkcli

2012-11-12 Thread wonderyl (JIRA)
wonderyl created HBASE-7153:
---

 Summary: print gc option in hbase-env.sh affects hbase zkcli
 Key: HBASE-7153
 URL: https://issues.apache.org/jira/browse/HBASE-7153
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.94.0
Reporter: wonderyl


I un-commented the -verbose:gc option in hbase-env.sh, which prints out the GC 
info.
But when I use hbase zkcli to check ZK, it cannot connect to the server.
The problem is that zkcli uses "hbase 
org.apache.hadoop.hbase.zookeeper.ZooKeeperMainServerArg" to get the server_arg 
in the hbase script. When the GC verbose option is on, the output of 
ZooKeeperMainServerArg includes the GC info, which messes up server_arg; this 
is what stops zkcli from working.
I think the easiest way to fix this is to trim the GC info out of server_arg in 
the hbase script.



[jira] [Commented] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-12 Thread Varun Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495856#comment-13495856
 ] 

Varun Sharma commented on HBASE-4583:
-

Oh okay - I thought from your previous comment that you agreed on a solution 
for 0.94. Maybe you only meant that we fix the race condition between put and 
append/increment like we do for checkAndPut, and not really do the MVCC part? 
Or did I misunderstand?

Varun

> Integrate RWCC with Append and Increment operations
> ---
>
> Key: HBASE-4583
> URL: https://issues.apache.org/jira/browse/HBASE-4583
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.96.0
>
> Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v4.txt, 
> 4583-trunk-less-radical.txt, 4583-trunk-less-radical-v2.txt, 
> 4583-trunk-less-radical-v3.txt, 4583-trunk-less-radical-v4.txt, 
> 4583-trunk-less-radical-v5.txt, 4583-trunk-less-radical-v6.txt, 
> 4583-trunk-radical.txt, 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 
> 4583.txt, 4583-v2.txt, 4583-v3.txt, 4583-v4.txt, 4584-0.94-v1.txt
>
>
> Currently Increment and Append operations do not work with RWCC and hence a 
> client could see the results of multiple such operations mixed in the same 
> Get/Scan.
> The semantics might be a bit more interesting here as upsert adds and removes 
> to and from the memstore.



[jira] [Updated] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-4583:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Integrate RWCC with Append and Increment operations
> ---
>
> Key: HBASE-4583
> URL: https://issues.apache.org/jira/browse/HBASE-4583
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.96.0
>
> Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v4.txt, 
> 4583-trunk-less-radical.txt, 4583-trunk-less-radical-v2.txt, 
> 4583-trunk-less-radical-v3.txt, 4583-trunk-less-radical-v4.txt, 
> 4583-trunk-less-radical-v5.txt, 4583-trunk-less-radical-v6.txt, 
> 4583-trunk-radical.txt, 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 
> 4583.txt, 4583-v2.txt, 4583-v3.txt, 4583-v4.txt, 4584-0.94-v1.txt
>
>
> Currently Increment and Append operations do not work with RWCC and hence a 
> client could see the results of multiple such operations mixed in the same 
> Get/Scan.
> The semantics might be a bit more interesting here as upsert adds and removes 
> to and from the memstore.



[jira] [Updated] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-4583:
-

Fix Version/s: (was: 0.94.3)

Sorry, not for 0.94. (As much as I like this patch as an improvement, it is too 
radical for 0.94.)

> Integrate RWCC with Append and Increment operations
> ---
>
> Key: HBASE-4583
> URL: https://issues.apache.org/jira/browse/HBASE-4583
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.96.0
>
> Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v4.txt, 
> 4583-trunk-less-radical.txt, 4583-trunk-less-radical-v2.txt, 
> 4583-trunk-less-radical-v3.txt, 4583-trunk-less-radical-v4.txt, 
> 4583-trunk-less-radical-v5.txt, 4583-trunk-less-radical-v6.txt, 
> 4583-trunk-radical.txt, 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 
> 4583.txt, 4583-v2.txt, 4583-v3.txt, 4583-v4.txt, 4584-0.94-v1.txt
>
>
> Currently Increment and Append operations do not work with RWCC and hence a 
> client could see the results of multiple such operations mixed in the same 
> Get/Scan.
> The semantics might be a bit more interesting here as upsert adds and removes 
> to and from the memstore.



[jira] [Updated] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-12 Thread Varun Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Sharma updated HBASE-4583:


Attachment: 4584-0.94-v1.txt

> Integrate RWCC with Append and Increment operations
> ---
>
> Key: HBASE-4583
> URL: https://issues.apache.org/jira/browse/HBASE-4583
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v4.txt, 
> 4583-trunk-less-radical.txt, 4583-trunk-less-radical-v2.txt, 
> 4583-trunk-less-radical-v3.txt, 4583-trunk-less-radical-v4.txt, 
> 4583-trunk-less-radical-v5.txt, 4583-trunk-less-radical-v6.txt, 
> 4583-trunk-radical.txt, 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 
> 4583.txt, 4583-v2.txt, 4583-v3.txt, 4583-v4.txt, 4584-0.94-v1.txt
>
>
> Currently Increment and Append operations do not work with RWCC and hence a 
> client could see the results of multiple such operations mixed in the same 
> Get/Scan.
> The semantics might be a bit more interesting here as upsert adds and removes 
> to and from the memstore.



[jira] [Updated] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-12 Thread Varun Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Sharma updated HBASE-4583:


Status: Patch Available  (was: Reopened)

Attaching patch.

> Integrate RWCC with Append and Increment operations
> ---
>
> Key: HBASE-4583
> URL: https://issues.apache.org/jira/browse/HBASE-4583
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v4.txt, 
> 4583-trunk-less-radical.txt, 4583-trunk-less-radical-v2.txt, 
> 4583-trunk-less-radical-v3.txt, 4583-trunk-less-radical-v4.txt, 
> 4583-trunk-less-radical-v5.txt, 4583-trunk-less-radical-v6.txt, 
> 4583-trunk-radical.txt, 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 
> 4583.txt, 4583-v2.txt, 4583-v3.txt, 4583-v4.txt, 4584-0.94-v1.txt
>
>
> Currently Increment and Append operations do not work with RWCC and hence a 
> client could see the results of multiple such operation mixed in the same 
> Get/Scan.
> The semantics might be a bit more interesting here as upsert adds and removes 
> to and from the memstore.



[jira] [Reopened] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-12 Thread Varun Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Sharma reopened HBASE-4583:
-


Adding patch for 0.94 - I had to rework incrementColumnValue to use Increment() 
instead so that it is MVCC'ised as well.

> Integrate RWCC with Append and Increment operations
> ---
>
> Key: HBASE-4583
> URL: https://issues.apache.org/jira/browse/HBASE-4583
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v4.txt, 
> 4583-trunk-less-radical.txt, 4583-trunk-less-radical-v2.txt, 
> 4583-trunk-less-radical-v3.txt, 4583-trunk-less-radical-v4.txt, 
> 4583-trunk-less-radical-v5.txt, 4583-trunk-less-radical-v6.txt, 
> 4583-trunk-radical.txt, 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 
> 4583.txt, 4583-v2.txt, 4583-v3.txt, 4583-v4.txt, 4584-0.94-v1.txt
>
>
> Currently Increment and Append operations do not work with RWCC and hence a 
> client could see the results of multiple such operations mixed in the same 
> Get/Scan.
> The semantics might be a bit more interesting here as upsert adds and removes 
> to and from the memstore.



[jira] [Updated] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5898:
-

Status: Patch Available  (was: Open)

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 
> 5898-v4.txt, HBASE-5898-0.patch, HBASE-5898-1.patch, HBASE-5898-1.patch, 
> hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.
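The double-checked pattern Todd describes can be sketched as below. This is a hypothetical illustration, not the actual HFileReaderV2/IdLock code; the class and method names (DccBlockCache, readBlock, loadFromDisk) are invented, and using a ConcurrentHashMap keeps the lock-free first check safe without hand-rolled volatile publication.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of double-checked locking for a block cache:
// probe the cache without a lock first; only take the lock on a miss.
public class DccBlockCache {
    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    private final Object loadLock = new Object();

    String readBlock(String key) {
        String block = cache.get(key);      // first check: lock-free fast path
        if (block == null) {
            synchronized (loadLock) {       // slow path: take the lock
                block = cache.get(key);     // second check: another thread
                if (block == null) {        // may have loaded it meanwhile
                    block = loadFromDisk(key);
                    cache.put(key, block);
                }
            }
        }
        return block;
    }

    // Stand-in for the expensive read that the lock is protecting.
    private String loadFromDisk(String key) { return "block-for-" + key; }

    public static void main(String[] args) {
        DccBlockCache c = new DccBlockCache();
        System.out.println(c.readBlock("b1")); // miss: loads under the lock
        System.out.println(c.readBlock("b1")); // hit: no lock taken at all
    }
}
```

The win on a cache-hit-heavy workload comes from the hit path never touching the lock, which matches the CPU savings Todd reports for IdLock.getLockEntry.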



[jira] [Updated] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5898:
-

Status: Open  (was: Patch Available)

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 
> 5898-v4.txt, HBASE-5898-0.patch, HBASE-5898-1.patch, HBASE-5898-1.patch, 
> hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.



[jira] [Updated] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-12 Thread Varun Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Sharma updated HBASE-4583:


Fix Version/s: 0.94.3

> Integrate RWCC with Append and Increment operations
> ---
>
> Key: HBASE-4583
> URL: https://issues.apache.org/jira/browse/HBASE-4583
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v4.txt, 
> 4583-trunk-less-radical.txt, 4583-trunk-less-radical-v2.txt, 
> 4583-trunk-less-radical-v3.txt, 4583-trunk-less-radical-v4.txt, 
> 4583-trunk-less-radical-v5.txt, 4583-trunk-less-radical-v6.txt, 
> 4583-trunk-radical.txt, 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 
> 4583.txt, 4583-v2.txt, 4583-v3.txt, 4583-v4.txt
>
>
> Currently Increment and Append operations do not work with RWCC and hence a 
> client could see the results of multiple such operations mixed in the same 
> Get/Scan.
> The semantics might be a bit more interesting here as upsert adds and removes 
> to and from the memstore.



[jira] [Resolved] (HBASE-1306) [performance] Threading the update process when writing to HBase.

2012-11-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1306.
--

Resolution: Won't Fix

Nonspecific -- is this server-side (which we need, but we have hbase-74) or 
client-side? -- and it's old.  Closing.  Let's open more specific issues.

> [performance] Threading the update process when writing to HBase.
> -
>
> Key: HBASE-1306
> URL: https://issues.apache.org/jira/browse/HBASE-1306
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.90.0
>Reporter: Erik Holstad
>
> There are multiple places where we can make an update run in parallel when 
> inserting into HBase. Two of these points are making every row get its own 
> thread, and then every family (store) get its own thread once you are in the 
> right region.



[jira] [Resolved] (HBASE-1307) Threading writes and reads into HBase.

2012-11-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1307.
--

Resolution: Won't Fix

Resolving (at Otis's prompting).  Issue is nonspecific, 3.5 years old, and 
likely superseded by HTableMultiplexer.

> Threading writes and reads into HBase.
> --
>
> Key: HBASE-1307
> URL: https://issues.apache.org/jira/browse/HBASE-1307
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.90.0
>Reporter: Erik Holstad
>
> I created this issue to be the overall issue for threading to increase read 
> and write performance in HBase, and to keep it as a place to discuss 
> threading of these elements in general. Today we are doing batching of 
> writes, and from 0.20 you will be able to do that for reads too. The thing is 
> that the batching procedure doesn't use the ability to run these different 
> queries at the same time, but runs them more like a series of queries. I 
> think that after getting a good stable 0.20 system down we should try to add 
> threading to increase throughput for both reading and writing. At the top 
> level of these calls I don't think it is going to be too hard to do this in 
> parallel; where it gets a little more complicated is when you get down to 
> running a get query on memcache and all the storefiles at the same time, but 
> above that I don't see it being too hard. I do think that this should not be 
> a part of 0.20 but rather an optimization in 0.21 or so.



[jira] [Commented] (HBASE-7124) typo in pom.xml with "exlude", no definition of "test.exclude.pattern"

2012-11-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495840#comment-13495840
 ] 

stack commented on HBASE-7124:
--

[~michelle] Attach it here?

> typo in pom.xml with "exlude", no definition of "test.exclude.pattern"
> --
>
> Key: HBASE-7124
> URL: https://issues.apache.org/jira/browse/HBASE-7124
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0
>Reporter: Li Ping Zhang
>Priority: Minor
>  Labels: patch
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> There is a typo in pom.xml with "exlude", and there is no definition of 
> "test.exclude.pattern".



[jira] [Commented] (HBASE-7124) typo in pom.xml with "exlude", no definition of "test.exclude.pattern"

2012-11-12 Thread Li Ping Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495821#comment-13495821
 ] 

Li Ping Zhang commented on HBASE-7124:
--

Yes, Jesse, I have a patch. Should I attach it now?

> typo in pom.xml with "exlude", no definition of "test.exclude.pattern"
> --
>
> Key: HBASE-7124
> URL: https://issues.apache.org/jira/browse/HBASE-7124
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0
>Reporter: Li Ping Zhang
>Priority: Minor
>  Labels: patch
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> There is a typo in pom.xml with "exlude", and there is no definition of 
> "test.exclude.pattern".



[jira] [Commented] (HBASE-1307) Threading writes and reads into HBase.

2012-11-12 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495807#comment-13495807
 ] 

Otis Gospodnetic commented on HBASE-1307:
-

And maybe HBASE-1306 is then obsolete, too?

> Threading writes and reads into HBase.
> --
>
> Key: HBASE-1307
> URL: https://issues.apache.org/jira/browse/HBASE-1307
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.90.0
>Reporter: Erik Holstad
>
> I created this issue to be the overall issue for threading to increase read 
> and write performance in HBase, and to keep it as a place to discuss 
> threading of these elements in general. Today we are doing batching of 
> writes, and from 0.20 you will be able to do that for reads too. The thing is 
> that the batching procedure doesn't use the ability to run these different 
> queries at the same time, but runs them more like a series of queries. I 
> think that after getting a good stable 0.20 system down we should try to add 
> threading to increase throughput for both reading and writing. At the top 
> level of these calls I don't think it is going to be too hard to do this in 
> parallel; where it gets a little more complicated is when you get down to 
> running a get query on memcache and all the storefiles at the same time, but 
> above that I don't see it being too hard. I do think that this should not be 
> a part of 0.20 but rather an optimization in 0.21 or so.



[jira] [Commented] (HBASE-1307) Threading writes and reads into HBase.

2012-11-12 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495806#comment-13495806
 ] 

Otis Gospodnetic commented on HBASE-1307:
-

Does HBASE-5776 (HTableMultiplexer) make this 3.5 years old issue obsolete?

> Threading writes and reads into HBase.
> --
>
> Key: HBASE-1307
> URL: https://issues.apache.org/jira/browse/HBASE-1307
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.90.0
>Reporter: Erik Holstad
>
> I created this issue to be the overall issue for threading to increase read 
> and write performance in HBase and to keep it as a discussion place about 
> threading of these elements in general. Today we are doing batching of 
> writes and from 0.20 you will be able to do that for reads too. The thing is 
> that the batching procedure doesn't use the ability to run these different 
> queries at the same time, but more like a series of queries. I think that 
> after getting a good stable 0.20 system down we should try to add threading 
> to increase throughput for both reading and writing. At the top level of 
> these calls I don't think that it is going to be too hard to do this in 
> parallel; where it gets a little bit more complicated is when you get down to 
> running a get query on memcache and all the storefiles at the same time, but 
> above that I don't see it being too hard. I do think that this should not be 
> a part of 0.20 but rather an optimization in 0.21 or so.



[jira] [Commented] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495786#comment-13495786
 ] 

Lars Hofhansl commented on HBASE-5898:
--

Any comments? Would like to commit and do a 0.94.3rc.

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 
> 5898-v4.txt, HBASE-5898-0.patch, HBASE-5898-1.patch, HBASE-5898-1.patch, 
> hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to 
> double-checked locking and it improved throughput substantially for this 
> workload.
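The pattern Todd describes can be sketched as below. This is an illustrative stand-in, not HBase's actual BlockCache/IdLock code; the class and method names (BlockCacheSketch, loadBlock) are invented for the example. A cache hit returns without taking any lock; only a miss acquires the lock and re-checks the cache before loading, so concurrent readers of a cached block do no lock management at all.

```java
import java.util.concurrent.ConcurrentHashMap;

public class BlockCacheSketch {
    private final ConcurrentHashMap<String, byte[]> cache = new ConcurrentHashMap<>();
    private final Object loadLock = new Object();

    public byte[] readBlock(String key) {
        byte[] block = cache.get(key);   // first check: lock-free on the hot (hit) path
        if (block != null) {
            return block;
        }
        synchronized (loadLock) {        // miss: serialize loaders
            block = cache.get(key);      // second check: another thread may have loaded it
            if (block == null) {
                block = loadBlock(key);
                cache.put(key, block);
            }
            return block;
        }
    }

    // Stand-in for the actual HFile read performed on a cache miss.
    private byte[] loadBlock(String key) {
        return key.getBytes();
    }
}
```

The actual patch presumably keeps the per-block IdLock for misses rather than one global monitor; the key change is checking the cache before doing any lock work.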



TestHFileBlock failed with testGzipCompression test cases if install zlib and exporting LD_LIBRARY_PATH to hadoop native lib(*.so)

2012-11-12 Thread 张莉苹 (Li Ping Zhang)
Dear HBase users and Devs,

Two TestHFileBlock.testGzipCompression unit tests fail when running HBase
0.94.0 with zlib installed and LD_LIBRARY_PATH exported to hadoop
lib/native/Linux-amd64-64. Have you met this UT failure before?

Failed tests:
  testGzipCompression[0](org.apache.hadoop.hbase.io.hfile.TestHFileBlock):
expected:<...\s\xA0\x0F\x00\x00\x[AB\x85g\x91]> but
was:<...\s\xA0\x0F\x00\x00\x[E1\x1C\x10\xE5]>
  testGzipCompression[1](org.apache.hadoop.hbase.io.hfile.TestHFileBlock):
expected:<...\s\xA0\x0F\x00\x00\x[AB\x85g\x91]> but
was:<...\s\xA0\x0F\x00\x00\x[E1\x1C\x10\xE5]>

Ted Yu also reported this when he ran patches in HBASE-3857 (his comment is
"Ted Yu added a comment - 04/Aug/11 01:57" in
https://issues.apache.org/jira/browse/HBASE-3857), and Mikhail Bautin
responded as follows.

Mikhail Bautin added a comment - 04/Aug/11 03:50
Addressing the issue with TestHFileBlock reported by Ted. It turns out
there is an "OS" field inside the gzip header which might take different
values depending on the OS and configuration. I have changed the unit test
to always set that field to the same value before comparing.
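Mikhail's explanation can be illustrated with a small sketch (hypothetical code, not the actual test change): per RFC 1952, byte offset 9 of a gzip stream is the OS field, whose value varies with the producing platform and library. Forcing it to one fixed value before comparing makes compressed output comparable across systems.

```java
public class GzipHeaderNormalizer {
    // gzip header layout: magic(2) CM(1) FLG(1) MTIME(4) XFL(1) OS(1)
    static final int GZIP_OS_OFFSET = 9;

    // Return a copy of the gzip bytes with the OS field forced to 0 ("FAT").
    public static byte[] normalizeOsField(byte[] gz) {
        byte[] copy = gz.clone();
        if (copy.length > GZIP_OS_OFFSET) {
            copy[GZIP_OS_OFFSET] = 0;
        }
        return copy;
    }
}
```

Applying this to both the expected and the actual bytes before asserting equality would make the comparison insensitive to the platform that produced the stream.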

HBASE-3857 is marked as fixed in 0.92.0, and it is a huge patch against
0.92.0. Since 0.94.0 is newer than 0.92.0, I checked the patch code; the
patch has basically been included in 0.94.0.

However, in our tests based on 0.94.0, the testGzipCompression unit tests
still fail.


*How to reproduce:*
-
1. install zlib and run `make test` to make sure it is installed correctly
on the server *(x86_64, 64-bit)* where you run the HBase unit tests
#tar zvxf zlib-1.2.7.tar.gz
#cd zlib-1.2.7
#./configure --prefix=/usr --shared
#make
#make test

2. copy the hadoop lib/native/Linux-amd64-64 dir (assuming the Jenkins server
is x86-64) from hadoop-*.tar.gz into a dir the user running the HBase UTs can
access, e.g. /opt/jenkins/

3. export the JNI path as the user running the HBase UTs

#export LD_LIBRARY_PATH=/opt/jenkins/Linux-amd64-64

Steps 1-3 are used to resolve the "Deflater has been closed" issue when
running HBase 0.94.0 with an open JDK (like IBM JDK6, sr11).

4. after steps 1-3, run HBase 0.94.0 (with either the Sun JDK or an open JDK)
via `mvn test` or JUnit execution in Eclipse. The
TestHFileBlock.testGzipCompression unit test failures then occur.

*The question then becomes: if we merely export LD_LIBRARY_PATH to the hadoop
native lib (*.so), the HBase TestHFileBlock.testGzipCompression test cases
fail. Is this an HBase defect when running with zlib and hadoop native?
If yes, shall we open a JIRA to fix and track this in the next release?*

*Here is the log of running org.apache.hadoop.hbase.io.hfile.TestHFileBlock:
*

/root/zhangliping/hbase/target/surefire-reports/org.apache.hadoop.hbase.io.hfile.TestHFileBlock.txt
file:
---
Test set: org.apache.hadoop.hbase.io.hfile.TestHFileBlock
---
Tests run: 16, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 55.455 sec
<<< FAILURE!
testGzipCompression[0](org.apache.hadoop.hbase.io.hfile.TestHFileBlock)
 Time elapsed: 0.001 sec  <<< FAILURE!
org.junit.ComparisonFailure:
expected:<...\s\xA0\x0F\x00\x00\x[AB\x85g\x91]> but
was:<...\s\xA0\x0F\x00\x00\x[E1\x1C\x10\xE5]>
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at
org.apache.hadoop.hbase.io.hfile.TestHFileBlock.testGzipCompression(TestHFileBlock.java:252)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)


testGzipCompression[1](org.apache.hadoop.hbase.io.hfile.TestHFileBlock)
 Time elapsed: 0.027 sec  <<< FAILURE!
org.junit.ComparisonFailure:
expected:<...\s\xA0\x0F\x00\x00\x[AB\x85g\x91]> but
was:<...\s\xA0\x0F\x00\x00\x[E1\x1C\x10\xE5]>
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at
org.apache.hadoop.hbase.io.hfile.TestHFileBlock.testGzipCompression(TestHFileBlock.java:252)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)


/root/zhangliping/hbase/target/surefire-reports/org.apache.hadoop.hbase.io.hfile.TestHFileB

[jira] [Commented] (HBASE-4913) Per-CF compaction Via the Shell

2012-11-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495748#comment-13495748
 ] 

Hudson commented on HBASE-4913:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #257 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/257/])
HBASE-4913 Addendum: better shell parsing (Revision 1408424)

 Result = FAILURE
gchanan : 
Files : 
* /hbase/trunk/hbase-server/src/main/ruby/hbase/admin.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/compact.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/major_compact.rb


> Per-CF compaction Via the Shell
> ---
>
> Key: HBASE-4913
> URL: https://issues.apache.org/jira/browse/HBASE-4913
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, regionserver
>Reporter: Nicolas Spiegelberg
>Assignee: Mubarak Seyed
> Fix For: 0.94.3, 0.96.0
>
> Attachments: HBASE-4913-94.patch, HBASE-4913-addendum.patch, 
> HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, 
> HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch
>
>




[jira] [Commented] (HBASE-7104) HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2

2012-11-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495749#comment-13495749
 ] 

Hudson commented on HBASE-7104:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #257 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/257/])
HBASE-7104  HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2 
(Revision 1408274)

 Result = FAILURE
nkeywal : 
Files : 
* /hbase/trunk/pom.xml


> HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2
> --
>
> Key: HBASE-7104
> URL: https://issues.apache.org/jira/browse/HBASE-7104
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 7104.v1.patch
>
>
> We've got 3 of them on trunk.
> [INFO] org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT
> [INFO] +- io.netty:netty:jar:3.5.0.Final:compile
> [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.3:compile
> [INFO] |  \- org.jboss.netty:netty:jar:3.2.2.Final:compile
> [INFO] org.apache.hbase:hbase-hadoop2-compat:jar:0.95-SNAPSHOT
> [INFO] +- org.apache.hadoop:hadoop-client:jar:2.0.2-alpha:compile
> [INFO] |  +- 
> org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.0.2-alpha:compile
> [INFO] |  |  \- org.jboss.netty:netty:jar:3.2.4.Final:compile
> The patch attached:
> - fixes this for the hadoop 1 profile
> - bumps the netty version to 3.5.9
> - does not fix it for hadoop 2. I don't know why, but I haven't investigated: 
> as it's still alpha, maybe they will change the version on the hadoop side anyway.
> Tests are ok.
> I haven't really investigated the differences between netty 3.2 and 3.5. A 
> quick search seems to say it's ok, but don't hesitate to raise a warning...



[jira] [Commented] (HBASE-7103) Need to fail split if SPLIT znode is deleted even before the split is completed.

2012-11-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495747#comment-13495747
 ] 

Hudson commented on HBASE-7103:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #257 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/257/])
HBASE-7103 Need to fail split if SPLIT znode is deleted even before the 
split is completed. (Ram) (Revision 1408418)

 Result = FAILURE
larsh : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java


> Need to fail split if SPLIT znode is deleted even before the split is 
> completed.
> 
>
> Key: HBASE-7103
> URL: https://issues.apache.org/jira/browse/HBASE-7103
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 7103-6088-revert.txt, HBASE-7103_0.94.patch, 
> HBASE-7103_0.94.patch, HBASE-7103_testcase.patch, HBASE-7103_trunk.patch
>
>
> This came up after the following mail in dev list
> 'infinite loop of RS_ZK_REGION_SPLIT on .94.2'.
> The following is the reason for the problem
> The following steps happen
> -> Initially the parent region P1 starts splitting.
> -> The split is going on normally.
> -> Another split starts at the same time for the same region P1. (Not sure 
> why this started).
> -> Rollback happens seeing an already existing node.
> -> This node gets deleted in rollback and nodeDeleted Event starts.
> -> In nodeDeleted event the RIT for the region P1 gets deleted.
> -> Because of this there is no region in RIT.
> -> Now the first split gets over.  Here the problem is we try to transition 
> the node from SPLITTING to SPLIT. But the node does not even exist.
> But we don't take any action on this.  We think it is successful.
> -> Because of this SplitRegionHandler never gets invoked.



[jira] [Updated] (HBASE-7151) Better log message for Per-CF compactions

2012-11-12 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HBASE-7151:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the reviews, committed to trunk and 0.94.

> Better log message for Per-CF compactions
> -
>
> Key: HBASE-7151
> URL: https://issues.apache.org/jira/browse/HBASE-7151
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Trivial
> Fix For: 0.94.3, 0.96.0
>
> Attachments: HBASE-7151-94.patch, HBASE-7151-trunk.patch
>
>
> A coworker pointed out that in HBASE-4913 it would be nice to include the 
> column family in the log message for a per-CF compaction.



[jira] [Commented] (HBASE-5984) TestLogRolling.testLogRollOnPipelineRestart failed with HADOOP 2.0.0

2012-11-12 Thread Andrey Klochkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495739#comment-13495739
 ] 

Andrey Klochkov commented on HBASE-5984:


@stack Can you please apply it to 0.94 as well? 

> TestLogRolling.testLogRollOnPipelineRestart failed with HADOOP 2.0.0
> 
>
> Key: HBASE-5984
> URL: https://issues.apache.org/jira/browse/HBASE-5984
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.96.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.96.0
>
> Attachments: hbase_5984.patch
>
>
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1455809779-127.0.0.1-1336670196362:blk_-6960847342982670493_1028;
>  getBlockSize()=1474; corrupt=false; offset=0; locs=[127.0.0.1:58343, 
> 127.0.0.1:48427]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:232)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:177)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:119)
>   at org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:112)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:928)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.openFile(SequenceFile.java:1768)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.openFile(SequenceFileLogReader.java:66)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1688)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1709)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:58)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:166)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:659)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testLogRollOnPipelineRestart(TestLogRolling.java:498)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)


[jira] [Updated] (HBASE-7150) Utility class to determine File Descriptor counts depending on the JVM Vendor

2012-11-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-7150:
-

Fix Version/s: (was: 0.94.3)
   0.94.4

I'd like to do this in 0.94.4. If you need this earlier in 0.94.3, please let 
me know.

> Utility class to determine File Descriptor counts depending on the JVM Vendor
> -
>
> Key: HBASE-7150
> URL: https://issues.apache.org/jira/browse/HBASE-7150
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, util
>Affects Versions: 0.94.1, 0.94.2
> Environment: Non Sun JDK environments
>Reporter: Kumar Ravi
>Assignee: Kumar Ravi
> Fix For: 0.96.0, 0.94.4
>
> Attachments: HBASE-7150.patch
>
>
> This issue is being opened to replace the OSMXBean class that was submitted 
> in HBASE-6965. A new utility class called JVM is being opened to be used by 
> ResourceChecker (0.94 branch) and ResourceCheckerJUnitListener test classes.
> The patch for the ResourceChecker classes is being addressed by HBASE-6945.



[jira] [Commented] (HBASE-7151) Better log message for Per-CF compactions

2012-11-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495724#comment-13495724
 ] 

Lars Hofhansl commented on HBASE-7151:
--

+1

> Better log message for Per-CF compactions
> -
>
> Key: HBASE-7151
> URL: https://issues.apache.org/jira/browse/HBASE-7151
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Trivial
> Fix For: 0.94.3, 0.96.0
>
> Attachments: HBASE-7151-94.patch, HBASE-7151-trunk.patch
>
>
> A coworker pointed out that in HBASE-4913 it would be nice to include the 
> column family in the log message for a per-CF compaction.



[jira] [Updated] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5898:
-

Attachment: 5898-v4.txt

Patch that handles the cache misses correctly. Also removes the unnecessary 
AtomicLong updates.

Not pretty... BlockCache.getBlock has another boolean parameter to indicate 
whether cache misses should be counted... All callers and implementors needed 
to be changed.

Please let me know what you think.

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 
> 5898-v4.txt, HBASE-5898-0.patch, HBASE-5898-1.patch, HBASE-5898-1.patch, 
> hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to 
> double-checked locking and it improved throughput substantially for this 
> workload.



[jira] [Commented] (HBASE-7104) HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2

2012-11-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495712#comment-13495712
 ] 

Lars Hofhansl commented on HBASE-7104:
--

I am getting this failure locally after this change:
[ERROR] Failed to execute goal on project hbase-hadoop2-compat: Could not 
resolve dependencies for project 
org.apache.hbase:hbase-hadoop2-compat:jar:0.95-SNAPSHOT: Could not find 
artifact org.apache.hadoop:hadoop-mapreduce-client-app:jar:1.1.0 in cloudbees 
netty (http://repository-netty.forge.cloudbees.com/snapshot/)

I tried to pass -U to maven, but that did not resolve the problem.

> HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2
> --
>
> Key: HBASE-7104
> URL: https://issues.apache.org/jira/browse/HBASE-7104
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 7104.v1.patch
>
>
> We've got 3 of them on trunk.
> [INFO] org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT
> [INFO] +- io.netty:netty:jar:3.5.0.Final:compile
> [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.3:compile
> [INFO] |  \- org.jboss.netty:netty:jar:3.2.2.Final:compile
> [INFO] org.apache.hbase:hbase-hadoop2-compat:jar:0.95-SNAPSHOT
> [INFO] +- org.apache.hadoop:hadoop-client:jar:2.0.2-alpha:compile
> [INFO] |  +- 
> org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.0.2-alpha:compile
> [INFO] |  |  \- org.jboss.netty:netty:jar:3.2.4.Final:compile
> The patch attached:
> - fixes this for the hadoop 1 profile
> - bumps the netty version to 3.5.9
> - does not fix it for hadoop 2. I don't know why, but I haven't investigated: 
> as it's still alpha, maybe they will change the version on the hadoop side anyway.
> Tests are ok.
> I haven't really investigated the differences between netty 3.2 and 3.5. A 
> quick search seems to say it's ok, but don't hesitate to raise a warning...



[jira] [Updated] (HBASE-7152) testShouldCheckMasterFailOverWhenMETAIsInOpenedState times out occasionally

2012-11-12 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-7152:
---

Status: Patch Available  (was: Open)

> testShouldCheckMasterFailOverWhenMETAIsInOpenedState times out occasionally
> ---
>
> Key: HBASE-7152
> URL: https://issues.apache.org/jira/browse/HBASE-7152
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Attachments: trunk-7152.patch
>
>
> {noformat}
> java.lang.Exception: test timed out after 18 milliseconds
>   at java.lang.Throwable.fillInStackTrace(Native Method)
>   at java.lang.Throwable.(Throwable.java:181)
>   at 
> org.apache.log4j.spi.LoggingEvent.getLocationInformation(LoggingEvent.java:253)
>   at 
> org.apache.log4j.helpers.PatternParser$ClassNamePatternConverter.getFullyQualifiedName(PatternParser.java:555)
>   at 
> org.apache.log4j.helpers.PatternParser$NamedPatternConverter.convert(PatternParser.java:528)
>   at 
> org.apache.log4j.helpers.PatternConverter.format(PatternConverter.java:65)
>   at org.apache.log4j.PatternLayout.format(PatternLayout.java:506)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at 
> org.apache.commons.logging.impl.Log4JLogger.debug(Log4JLogger.java:188)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.join(LocalHBaseCluster.java:407)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.join(MiniHBaseCluster.java:408)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:599)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:573)
>   at 
> org.apache.hadoop.hbase.master.TestMasterFailover.testShouldCheckMasterFailOverWhenMETAIsInOpenedState(TestMasterFailover.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62)
> {noformat}



[jira] [Commented] (HBASE-7152) testShouldCheckMasterFailOverWhenMETAIsInOpenedState times out occasionally

2012-11-12 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495707#comment-13495707
 ] 

Jimmy Xiang commented on HBASE-7152:


After startMiniHBaseCluster, the test assumes the failover is completed.  
Actually, it may not be. We just know the meta table is assigned after 
startMiniHBaseCluster.  User regions may not have started to be assigned yet, 
depending on the timing. In that case, there is no region in transition yet.
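The polling shape a fix for such a race typically takes can be sketched as below. This is a hypothetical helper, not HBase's actual test utility, and names (FailoverWait, waitForNoRegionsInTransition) are invented. Note the caveat from the comment above: an empty regions-in-transition set can also be observed before assignment has even started, so a real fix would additionally wait for assignment to begin; this sketch only shows the deadline-bounded polling.

```java
import java.util.function.IntSupplier;

public class FailoverWait {
    // Poll the regions-in-transition count until it reaches zero or the deadline passes.
    public static boolean waitForNoRegionsInTransition(IntSupplier ritCount, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (ritCount.getAsInt() == 0) {
                return true;       // assignment finished
            }
            try {
                Thread.sleep(50);  // back off before re-checking
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;              // timed out; let the caller fail the test explicitly
    }
}
```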

> testShouldCheckMasterFailOverWhenMETAIsInOpenedState times out occasionally
> ---
>
> Key: HBASE-7152
> URL: https://issues.apache.org/jira/browse/HBASE-7152
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Attachments: trunk-7152.patch
>
>
> {noformat}
> java.lang.Exception: test timed out after 18 milliseconds
>   at java.lang.Throwable.fillInStackTrace(Native Method)
>   at java.lang.Throwable.(Throwable.java:181)
>   at 
> org.apache.log4j.spi.LoggingEvent.getLocationInformation(LoggingEvent.java:253)
>   at 
> org.apache.log4j.helpers.PatternParser$ClassNamePatternConverter.getFullyQualifiedName(PatternParser.java:555)
>   at 
> org.apache.log4j.helpers.PatternParser$NamedPatternConverter.convert(PatternParser.java:528)
>   at 
> org.apache.log4j.helpers.PatternConverter.format(PatternConverter.java:65)
>   at org.apache.log4j.PatternLayout.format(PatternLayout.java:506)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at 
> org.apache.commons.logging.impl.Log4JLogger.debug(Log4JLogger.java:188)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.join(LocalHBaseCluster.java:407)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.join(MiniHBaseCluster.java:408)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:599)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:573)
>   at 
> org.apache.hadoop.hbase.master.TestMasterFailover.testShouldCheckMasterFailOverWhenMETAIsInOpenedState(TestMasterFailover.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6945) Compilation errors when using non-Sun JDKs to build HBase-0.94

2012-11-12 Thread Kumar Ravi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495702#comment-13495702
 ] 

Kumar Ravi commented on HBASE-6945:
---

Attached a patch that uses the JVM class instead of OSMXBean.

> Compilation errors when using non-Sun JDKs to build HBase-0.94
> --
>
> Key: HBASE-6945
> URL: https://issues.apache.org/jira/browse/HBASE-6945
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.94.1
> Environment: RHEL 6.3, IBM Java 7 
>Reporter: Kumar Ravi
>Assignee: Kumar Ravi
>  Labels: patch
> Fix For: 0.94.4
>
> Attachments: HBASE-6945_ResourceCheckerJUnitListener.patch
>
>
> When using IBM Java 7 to build HBase-0.94.1, the following compilation error 
> is seen. 
> [INFO] -
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[23,25]
>  error: package com.sun.management does not exist
> [ERROR] 
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[46,25]
>  error: cannot find symbol
> [ERROR]   symbol:   class UnixOperatingSystemMXBean
>   location: class ResourceAnalyzer
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[75,29]
>  error: cannot find symbol
> [ERROR]   symbol:   class UnixOperatingSystemMXBean
>   location: class ResourceAnalyzer
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[76,23]
>  error: cannot find symbol
> [INFO] 4 errors 
> [INFO] -
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
>  I have a patch available which should work for all JDKs including Sun.
>  I am in the process of testing this patch. Preliminary tests indicate the 
> build is working fine with this patch. I will post this patch when I am done 
> testing.
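The errors above come from a compile-time dependency on the Sun-only com.sun.management.UnixOperatingSystemMXBean. One common way to make such code portable across JDKs (a sketch of the general technique, not the attached patch; the class name FdCount is hypothetical) is to resolve the method reflectively at runtime, so nothing references the Sun package at compile time:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.reflect.Method;

public class FdCount {
    // Returns the open file descriptor count, or -1 when the running JVM
    // (e.g. IBM Java, or a non-Unix platform) does not expose it.
    public static long openFileDescriptorCount() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        try {
            // Looked up by name only, so there is no compile-time
            // dependency on com.sun.management.
            Method m = os.getClass().getMethod("getOpenFileDescriptorCount");
            m.setAccessible(true);
            return (Long) m.invoke(os);
        } catch (Exception e) {
            return -1L;  // method absent or inaccessible on this JVM
        }
    }
}
```

Callers treat -1 as "metric unavailable" instead of failing the build or the test run.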

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6945) Compilation errors when using non-Sun JDKs to build HBase-0.94

2012-11-12 Thread Kumar Ravi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kumar Ravi updated HBASE-6945:
--

Attachment: HBASE-6945_ResourceCheckerJUnitListener.patch

> Compilation errors when using non-Sun JDKs to build HBase-0.94
> --
>
> Key: HBASE-6945
> URL: https://issues.apache.org/jira/browse/HBASE-6945
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.94.1
> Environment: RHEL 6.3, IBM Java 7 
>Reporter: Kumar Ravi
>Assignee: Kumar Ravi
>  Labels: patch
> Fix For: 0.94.4
>
> Attachments: HBASE-6945_ResourceCheckerJUnitListener.patch
>
>
> When using IBM Java 7 to build HBase-0.94.1, the following compilation error 
> is seen. 
> [INFO] -
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[23,25]
>  error: package com.sun.management does not exist
> [ERROR] 
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[46,25]
>  error: cannot find symbol
> [ERROR]   symbol:   class UnixOperatingSystemMXBean
>   location: class ResourceAnalyzer
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[75,29]
>  error: cannot find symbol
> [ERROR]   symbol:   class UnixOperatingSystemMXBean
>   location: class ResourceAnalyzer
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[76,23]
>  error: cannot find symbol
> [INFO] 4 errors 
> [INFO] -
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
>  I have a patch available which should work for all JDKs including Sun.
>  I am in the process of testing this patch. Preliminary tests indicate the 
> build is working fine with this patch. I will post this patch when I am done 
> testing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7152) testShouldCheckMasterFailOverWhenMETAIsInOpenedState times out occasionally

2012-11-12 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-7152:
---

Attachment: trunk-7152.patch

> testShouldCheckMasterFailOverWhenMETAIsInOpenedState times out occasionally
> ---
>
> Key: HBASE-7152
> URL: https://issues.apache.org/jira/browse/HBASE-7152
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Attachments: trunk-7152.patch
>
>
> {noformat}
> java.lang.Exception: test timed out after 18 milliseconds
>   at java.lang.Throwable.fillInStackTrace(Native Method)
>   at java.lang.Throwable.(Throwable.java:181)
>   at 
> org.apache.log4j.spi.LoggingEvent.getLocationInformation(LoggingEvent.java:253)
>   at 
> org.apache.log4j.helpers.PatternParser$ClassNamePatternConverter.getFullyQualifiedName(PatternParser.java:555)
>   at 
> org.apache.log4j.helpers.PatternParser$NamedPatternConverter.convert(PatternParser.java:528)
>   at 
> org.apache.log4j.helpers.PatternConverter.format(PatternConverter.java:65)
>   at org.apache.log4j.PatternLayout.format(PatternLayout.java:506)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at 
> org.apache.commons.logging.impl.Log4JLogger.debug(Log4JLogger.java:188)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.join(LocalHBaseCluster.java:407)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.join(MiniHBaseCluster.java:408)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:599)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:573)
>   at 
> org.apache.hadoop.hbase.master.TestMasterFailover.testShouldCheckMasterFailOverWhenMETAIsInOpenedState(TestMasterFailover.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495699#comment-13495699
 ] 

Lars Hofhansl commented on HBASE-5898:
--

Same for metaLoads and blockLoads. Both are atomic longs updated for no reason.

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 
> HBASE-5898-0.patch, HBASE-5898-1.patch, HBASE-5898-1.patch, hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.
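The pattern described above can be sketched in isolation (illustrative only; the actual patch applies it to HFileReaderV2.readBlock and IdLock): probe the cache without a lock first, and only take a per-key lock on a miss, re-checking under the lock so the expensive load runs at most once.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class DoubleCheckedCache<K, V> {
    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<K, Object> locks = new ConcurrentHashMap<>();

    public V get(K key, Function<K, V> loader) {
        V v = cache.get(key);          // first check: cache hits take no lock
        if (v != null) {
            return v;
        }
        Object lock = locks.computeIfAbsent(key, k -> new Object());
        synchronized (lock) {
            v = cache.get(key);        // second check, now under the per-key lock
            if (v == null) {
                v = loader.apply(key); // expensive load runs at most once per key
                cache.put(key, v);
            }
        }
        return v;
    }
}
```

On the hot path (all cache hits, as in the workload described), every get returns after one lock-free ConcurrentHashMap lookup, which is where the throughput win comes from.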

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6945) Compilation errors when using non-Sun JDKs to build HBase-0.94

2012-11-12 Thread Kumar Ravi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kumar Ravi updated HBASE-6945:
--

Status: Open  (was: Patch Available)

> Compilation errors when using non-Sun JDKs to build HBase-0.94
> --
>
> Key: HBASE-6945
> URL: https://issues.apache.org/jira/browse/HBASE-6945
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.94.1
> Environment: RHEL 6.3, IBM Java 7 
>Reporter: Kumar Ravi
>Assignee: Kumar Ravi
>  Labels: patch
> Fix For: 0.94.4
>
>
> When using IBM Java 7 to build HBase-0.94.1, the following compilation error 
> is seen. 
> [INFO] -
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[23,25]
>  error: package com.sun.management does not exist
> [ERROR] 
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[46,25]
>  error: cannot find symbol
> [ERROR]   symbol:   class UnixOperatingSystemMXBean
>   location: class ResourceAnalyzer
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[75,29]
>  error: cannot find symbol
> [ERROR]   symbol:   class UnixOperatingSystemMXBean
>   location: class ResourceAnalyzer
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[76,23]
>  error: cannot find symbol
> [INFO] 4 errors 
> [INFO] -
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
>  I have a patch available which should work for all JDKs including Sun.
>  I am in the process of testing this patch. Preliminary tests indicate the 
> build is working fine with this patch. I will post this patch when I am done 
> testing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7152) testShouldCheckMasterFailOverWhenMETAIsInOpenedState times out occasionally

2012-11-12 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-7152:
--

 Summary: testShouldCheckMasterFailOverWhenMETAIsInOpenedState 
times out occasionally
 Key: HBASE-7152
 URL: https://issues.apache.org/jira/browse/HBASE-7152
 Project: HBase
  Issue Type: Test
  Components: test
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor


{noformat}
java.lang.Exception: test timed out after 18 milliseconds
at java.lang.Throwable.fillInStackTrace(Native Method)
at java.lang.Throwable.(Throwable.java:181)
at 
org.apache.log4j.spi.LoggingEvent.getLocationInformation(LoggingEvent.java:253)
at 
org.apache.log4j.helpers.PatternParser$ClassNamePatternConverter.getFullyQualifiedName(PatternParser.java:555)
at 
org.apache.log4j.helpers.PatternParser$NamedPatternConverter.convert(PatternParser.java:528)
at 
org.apache.log4j.helpers.PatternConverter.format(PatternConverter.java:65)
at org.apache.log4j.PatternLayout.format(PatternLayout.java:506)
at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
at 
org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
at org.apache.log4j.Category.callAppenders(Category.java:206)
at org.apache.log4j.Category.forcedLog(Category.java:391)
at org.apache.log4j.Category.log(Category.java:856)
at 
org.apache.commons.logging.impl.Log4JLogger.debug(Log4JLogger.java:188)
at 
org.apache.hadoop.hbase.LocalHBaseCluster.join(LocalHBaseCluster.java:407)
at 
org.apache.hadoop.hbase.MiniHBaseCluster.join(MiniHBaseCluster.java:408)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:599)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:573)
at 
org.apache.hadoop.hbase.master.TestMasterFailover.testShouldCheckMasterFailOverWhenMETAIsInOpenedState(TestMasterFailover.java:113)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62)
{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6945) Compilation errors when using non-Sun JDKs to build HBase-0.94

2012-11-12 Thread Kumar Ravi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kumar Ravi updated HBASE-6945:
--

Attachment: (was: ResourceCheckerJUnitListener_HBASE_6945-trunk.patch)

> Compilation errors when using non-Sun JDKs to build HBase-0.94
> --
>
> Key: HBASE-6945
> URL: https://issues.apache.org/jira/browse/HBASE-6945
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.94.1
> Environment: RHEL 6.3, IBM Java 7 
>Reporter: Kumar Ravi
>Assignee: Kumar Ravi
>  Labels: patch
> Fix For: 0.94.4
>
>
> When using IBM Java 7 to build HBase-0.94.1, the following compilation error 
> is seen. 
> [INFO] -
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[23,25]
>  error: package com.sun.management does not exist
> [ERROR] 
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[46,25]
>  error: cannot find symbol
> [ERROR]   symbol:   class UnixOperatingSystemMXBean
>   location: class ResourceAnalyzer
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[75,29]
>  error: cannot find symbol
> [ERROR]   symbol:   class UnixOperatingSystemMXBean
>   location: class ResourceAnalyzer
> /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[76,23]
>  error: cannot find symbol
> [INFO] 4 errors 
> [INFO] -
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
>  I have a patch available which should work for all JDKs including Sun.
>  I am in the process of testing this patch. Preliminary tests indicate the 
> build is working fine with this patch. I will post this patch when I am done 
> testing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495694#comment-13495694
 ] 

Lars Hofhansl commented on HBASE-5898:
--

BTW. It also turns out that HFileReaderVx updates an AtomicLong (cacheHits) that 
is not read anywhere...?! I'll remove that.

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 
> HBASE-5898-0.patch, HBASE-5898-1.patch, HBASE-5898-1.patch, hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7141) Cleanup Increment and Append issues

2012-11-12 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495685#comment-13495685
 ] 

Andrew Purtell commented on HBASE-7141:
---

bq.  I contemplated have Increment also maintain a map from CF -> Set of KV, 
but then there's unnecessary encoding at the client and decoding at the server. 
But maybe that is the right way to go.

+1, it's a reasonable tradeoff. Although, it would be enough for my purposes on 
HBASE-6222 for Increment to extend OperationWithAttributes.

> Cleanup Increment and Append issues
> ---
>
> Key: HBASE-7141
> URL: https://issues.apache.org/jira/browse/HBASE-7141
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 0.96.0
>
>
> * Append and Increment should take a TS for their update phase
> * Append should access a timerange for the read phase
> * Increment should no longer implement Writable (in trunk)
> * Increment and Append make changes visible through the memstore before the 
> WAL is sync'ed
> This depends on HBASE-4583

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HBASE-7141) Cleanup Increment and Append issues

2012-11-12 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495685#comment-13495685
 ] 

Andrew Purtell edited comment on HBASE-7141 at 11/12/12 10:20 PM:
--

bq.  I contemplated have Increment also maintain a map from CF -> Set of KV, 
but then there's unnecessary encoding at the client and decoding at the server. 
But maybe that is the right way to go.

+1, it's a reasonable tradeoff. Although, it would be enough for my purposes on 
HBASE-6222 for Increment to extend OperationWithAttributes.

  was (Author: apurtell):
bq.  I contemplated have Increment also maintain a map from CF -> Set of 
KV, but then there's unnecessary encoding at the client and decoding at the 
server. But maybe that is the right way to go.

+1, it's a reasonable tradeoff. Although, it would be enough for my purposed on 
HBASE-6222 for Increment to extend OperationWithAttributes.
  
> Cleanup Increment and Append issues
> ---
>
> Key: HBASE-7141
> URL: https://issues.apache.org/jira/browse/HBASE-7141
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 0.96.0
>
>
> * Append and Increment should take a TS for their update phase
> * Append should access a timerange for the read phase
> * Increment should no longer implement Writable (in trunk)
> * Increment and Append make changes visible through the memstore before the 
> WAL is sync'ed
> This depends on HBASE-4583

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7151) Better log message for Per-CF compactions

2012-11-12 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495682#comment-13495682
 ] 

Jimmy Xiang commented on HBASE-7151:


+1

> Better log message for Per-CF compactions
> -
>
> Key: HBASE-7151
> URL: https://issues.apache.org/jira/browse/HBASE-7151
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Trivial
> Fix For: 0.94.3, 0.96.0
>
> Attachments: HBASE-7151-94.patch, HBASE-7151-trunk.patch
>
>
> A coworker pointed out that in HBASE-4913 it would be nice to include the 
> column family in the log message for a per-CF compaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7151) Better log message for Per-CF compactions

2012-11-12 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HBASE-7151:
--

Status: Patch Available  (was: Open)

> Better log message for Per-CF compactions
> -
>
> Key: HBASE-7151
> URL: https://issues.apache.org/jira/browse/HBASE-7151
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Trivial
> Fix For: 0.94.3, 0.96.0
>
> Attachments: HBASE-7151-94.patch, HBASE-7151-trunk.patch
>
>
> A coworker pointed out that in HBASE-4913 it would be nice to include the 
> column family in the log message for a per-CF compaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7151) Better log message for Per-CF compactions

2012-11-12 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HBASE-7151:
--

Attachment: HBASE-7151-trunk.patch

> Better log message for Per-CF compactions
> -
>
> Key: HBASE-7151
> URL: https://issues.apache.org/jira/browse/HBASE-7151
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Trivial
> Fix For: 0.94.3, 0.96.0
>
> Attachments: HBASE-7151-94.patch, HBASE-7151-trunk.patch
>
>
> A coworker pointed out that in HBASE-4913 it would be nice to include the 
> column family in the log message for a per-CF compaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7151) Better log message for Per-CF compactions

2012-11-12 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HBASE-7151:
--

Attachment: HBASE-7151-94.patch

> Better log message for Per-CF compactions
> -
>
> Key: HBASE-7151
> URL: https://issues.apache.org/jira/browse/HBASE-7151
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Trivial
> Fix For: 0.94.3, 0.96.0
>
> Attachments: HBASE-7151-94.patch, HBASE-7151-trunk.patch
>
>
> A coworker pointed out that in HBASE-4913 it would be nice to include the 
> column family in the log message for a per-CF compaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5898) Consider double-checked locking for block cache lock

2012-11-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495676#comment-13495676
 ] 

Lars Hofhansl commented on HBASE-5898:
--

Yeah, just verified that RegionServerMetrics.blockCacheMissCount is driven by 
CacheStats.missCount, which will be double counted with this patch.

> Consider double-checked locking for block cache lock
> 
>
> Key: HBASE-5898
> URL: https://issues.apache.org/jira/browse/HBASE-5898
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 
> HBASE-5898-0.patch, HBASE-5898-1.patch, HBASE-5898-1.patch, hbase-5898.txt
>
>
> Running a workload with a high query rate against a dataset that fits in 
> cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by 
> HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a 
> lot of CPU doing lock management here. I wrote a quick patch to switch to a 
> double-checked locking and it improved throughput substantially for this 
> workload.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

