[jira] [Commented] (HBASE-12072) We are doing 35 x 35 retries for master operations

2014-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191446#comment-14191446
 ] 

Hadoop QA commented on HBASE-12072:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12678413/hbase-12072_v2.patch
  against trunk revision .
  ATTACHMENT ID: 12678413

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 18 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3775 checkstyle errors (more than the trunk's current 3774 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11535//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11535//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11535//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11535//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11535//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11535//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11535//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11535//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11535//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11535//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11535//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11535//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11535//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11535//console

This message is automatically generated.

 We are doing 35 x 35 retries for master operations
 --

 Key: HBASE-12072
 URL: https://issues.apache.org/jira/browse/HBASE-12072
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 0.99.2

 Attachments: 12072-v1.txt, 12072-v2.txt, hbase-12072_v1.patch, 
 hbase-12072_v2.patch


 For master requests, there are two retry mechanisms in effect. The first one 
 is from HBaseAdmin.executeCallable() 
 {code}
   private <V> V executeCallable(MasterCallable<V> callable) throws 
 IOException {
 RpcRetryingCaller<V> caller = rpcCallerFactory.newCaller();
 try {
   return caller.callWithRetries(callable);
 } finally {
   callable.close();
 }
   }
 {code}
 And inside, the other one is from StubMaker.makeStub():
 {code}
 /**
* Create a stub against the master.  Retry if necessary.
* @return A stub to do <code>intf</code> against the master
* @throws MasterNotRunningException
*/
   @edu.umd.cs.findbugs.annotations.SuppressWarnings 
 (value="SWL_SLEEP_WITH_LOCK_HELD")
   Object makeStub() throws MasterNotRunningException {
 {code}
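 In effect the two layers multiply: each of the outer caller's 35 attempts can itself run up to 35 stub-creation attempts, i.e. 35 x 35 = 1225 tries. A minimal runnable sketch of how such nested retry loops compound (hypothetical code, not the HBase classes):
 {code}
 // Hypothetical sketch: an outer retrying caller whose callable itself
 // retries internally, so failed attempts multiply.
 public class NestedRetries {
   interface Callable<V> { V call() throws Exception; }

   static <V> V callWithRetries(Callable<V> c, int retries) throws Exception {
     Exception last = null;
     for (int i = 0; i < retries; i++) {
       try { return c.call(); } catch (Exception e) { last = e; } // backoff omitted
     }
     throw last;
   }

   static int attempts = 0;

   public static void main(String[] args) {
     try {
       callWithRetries(() ->                // outer caller: 35 tries
           callWithRetries(() -> {          // inner makeStub()-style loop: 35 tries
             attempts++;
             throw new Exception("master not running");
           }, 35), 35);
     } catch (Exception e) {
       System.out.println(attempts);        // prints 1225 = 35 * 35
     }
   }
 }
 {code}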
 The tests will just hang for 10 min * 35 ~= 6 hours. 
 {code}
 2014-09-23 16:19:05,151 INFO  [main] 
 

[jira] [Updated] (HBASE-12331) Shorten the mob snapshot unit tests

2014-10-31 Thread Li Jiajia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Jiajia updated HBASE-12331:
--
Attachment: HBASE-12331-V1.diff

change some UTs to be integration tests and shorten some UTs.

 Shorten the mob snapshot unit tests
 ---

 Key: HBASE-12331
 URL: https://issues.apache.org/jira/browse/HBASE-12331
 Project: HBase
  Issue Type: Sub-task
  Components: mob
Affects Versions: hbase-11339
Reporter: Jonathan Hsieh
 Fix For: hbase-11339

 Attachments: HBASE-12331-V1.diff


 The mob snapshot patch introduced a whole lot of tests that take a long time 
 to run and would be better as integration tests.
 {code}
 ---
  T E S T S
 ---
 Running 
 org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClientWithRegionReplicas
 Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 394.803 sec - 
 in 
 org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClientWithRegionReplicas
 Running org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClient
 Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 212.377 sec - 
 in org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClient
 Running 
 org.apache.hadoop.hbase.client.TestMobSnapshotFromClientWithRegionReplicas
 Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.463 sec - 
 in org.apache.hadoop.hbase.client.TestMobSnapshotFromClientWithRegionReplicas
 Running org.apache.hadoop.hbase.client.TestMobSnapshotFromClient
 Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.724 sec - 
 in org.apache.hadoop.hbase.client.TestMobSnapshotFromClient
 Running org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient
 Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 204.03 sec - 
 in org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient
 Running 
 org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClientWithRegionReplicas
 Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 214.052 sec - 
 in 
 org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClientWithRegionReplicas
 Running org.apache.hadoop.hbase.client.TestMobSnapshotCloneIndependence
 Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 105.139 sec - 
 in org.apache.hadoop.hbase.client.TestMobSnapshotCloneIndependence
 Running org.apache.hadoop.hbase.regionserver.TestMobStoreScanner
 Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.42 sec - 
 in org.apache.hadoop.hbase.regionserver.TestMobStoreScanner
 Running org.apache.hadoop.hbase.regionserver.TestDeleteMobTable
 Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.136 sec - 
 in org.apache.hadoop.hbase.regionserver.TestDeleteMobTable
 Running org.apache.hadoop.hbase.regionserver.TestHMobStore
 Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.09 sec - in 
 org.apache.hadoop.hbase.regionserver.TestHMobStore
 Running org.apache.hadoop.hbase.regionserver.TestMobCompaction
 Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.629 sec - 
 in org.apache.hadoop.hbase.regionserver.TestMobCompaction
 Running org.apache.hadoop.hbase.mob.TestCachedMobFile
 Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.301 sec - 
 in org.apache.hadoop.hbase.mob.TestCachedMobFile
 Running org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepJob
 Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.752 sec - 
 in org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepJob
 Running org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepReducer
 Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.276 sec - 
 in org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepReducer
 Running org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepMapper
 Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.46 sec - 
 in org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepMapper
 Running org.apache.hadoop.hbase.mob.mapreduce.TestMobSweeper
 Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 173.05 sec - 
 in org.apache.hadoop.hbase.mob.mapreduce.TestMobSweeper
 Running org.apache.hadoop.hbase.mob.TestMobDataBlockEncoding
 Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.86 sec - 
 in org.apache.hadoop.hbase.mob.TestMobDataBlockEncoding
 Running org.apache.hadoop.hbase.mob.TestExpiredMobFileCleaner
 Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.029 sec - 
 in org.apache.hadoop.hbase.mob.TestExpiredMobFileCleaner
 Running org.apache.hadoop.hbase.mob.TestMobFile
 Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.562 sec - 
 in org.apache.hadoop.hbase.mob.TestMobFile
 Running 

[jira] [Created] (HBASE-12391) Correct a typo in the mob metrics

2014-10-31 Thread Jingcheng Du (JIRA)
Jingcheng Du created HBASE-12391:


 Summary: Correct a typo in the mob metrics
 Key: HBASE-12391
 URL: https://issues.apache.org/jira/browse/HBASE-12391
 Project: HBase
  Issue Type: Sub-task
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Minor


There's a typo in the temp variable in the region server metrics for mob. It's 
now testMobCompactedFromMobCellsSize, and should be changed to 
tempMobCompactedFromMobCellsSize
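The fix is a one-word rename; sketched as a hypothetical diff (the real declaration context is not shown in this thread):
{code}
- private long testMobCompactedFromMobCellsSize;
+ private long tempMobCompactedFromMobCellsSize;
{code}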



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12391) Correct a typo in the mob metrics

2014-10-31 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-12391:
-
Attachment: HBASE-12391.diff

Update the patch to fix the typo.

 Correct a typo in the mob metrics
 -

 Key: HBASE-12391
 URL: https://issues.apache.org/jira/browse/HBASE-12391
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Minor
 Fix For: hbase-11339

 Attachments: HBASE-12391.diff


 There's a typo in the temp variable in the region server metrics for mob. 
 It's now testMobCompactedFromMobCellsSize, and should be changed to 
 tempMobCompactedFromMobCellsSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12391) Correct a typo in the mob metrics

2014-10-31 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-12391:
-
Status: Patch Available  (was: Open)

 Correct a typo in the mob metrics
 -

 Key: HBASE-12391
 URL: https://issues.apache.org/jira/browse/HBASE-12391
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Minor
 Fix For: hbase-11339

 Attachments: HBASE-12391.diff


 There's a typo in the temp variable in the region server metrics for mob. 
 It's now testMobCompactedFromMobCellsSize, and should be changed to 
 tempMobCompactedFromMobCellsSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12391) Correct a typo in the mob metrics

2014-10-31 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191509#comment-14191509
 ] 

Jingcheng Du commented on HBASE-12391:
--

Hi [~jmhsieh], [~anoopsamjohn] and [~ram_krish], would you please look at it 
and commit it? Thanks a lot!

 Correct a typo in the mob metrics
 -

 Key: HBASE-12391
 URL: https://issues.apache.org/jira/browse/HBASE-12391
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Minor
 Fix For: hbase-11339

 Attachments: HBASE-12391.diff


 There's a typo in the temp variable in the region server metrics for mob. 
 It's now testMobCompactedFromMobCellsSize, and should be changed to 
 tempMobCompactedFromMobCellsSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12391) Correct a typo in the mob metrics

2014-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191513#comment-14191513
 ] 

Hadoop QA commented on HBASE-12391:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12678430/HBASE-12391.diff
  against trunk revision .
  ATTACHMENT ID: 12678430

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11537//console

This message is automatically generated.

 Correct a typo in the mob metrics
 -

 Key: HBASE-12391
 URL: https://issues.apache.org/jira/browse/HBASE-12391
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Minor
 Fix For: hbase-11339

 Attachments: HBASE-12391.diff


 There's a typo in the temp variable in the region server metrics for mob. 
 It's now testMobCompactedFromMobCellsSize, and should be changed to 
 tempMobCompactedFromMobCellsSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12392) Incorrect implementation of CompactionRequest.isRetainDeleteMarkers

2014-10-31 Thread Jingcheng Du (JIRA)
Jingcheng Du created HBASE-12392:


 Summary: Incorrect implementation of 
CompactionRequest.isRetainDeleteMarkers
 Key: HBASE-12392
 URL: https://issues.apache.org/jira/browse/HBASE-12392
 Project: HBase
  Issue Type: Sub-task
Reporter: Jingcheng Du
Assignee: Jingcheng Du


Now in the implementation of the isRetainDeleteMarkers method, the code looks 
like:
{code}
return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
    : isAllFiles();
{code}
This means that for a major compaction in a normal store, this method returns true. 
Consequently the delete markers cannot be removed in the major compaction, 
which causes the unit test TestKeepDeletes to fail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12392) Incorrect implementation of CompactionRequest.isRetainDeleteMarkers

2014-10-31 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-12392:
-
Affects Version/s: hbase-11339

 Incorrect implementation of CompactionRequest.isRetainDeleteMarkers
 ---

 Key: HBASE-12392
 URL: https://issues.apache.org/jira/browse/HBASE-12392
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: hbase-11339
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: hbase-11339


 Now in the implementation of the isRetainDeleteMarkers method, the code looks 
 like:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : isAllFiles();
 {code}
 This means that for a major compaction in a normal store, this method returns true. 
 Consequently the delete markers cannot be removed in the major compaction, 
 which causes the unit test TestKeepDeletes to fail.
 The correct implementation should be:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : !isAllFiles();
 {code}
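 A minimal standalone illustration (hypothetical code, not the HBase source) of why the negation matters: during a major compaction over all store files isAllFiles() is true and retainDeleteMarkers was never explicitly set, so the unfixed fallback reports that delete markers must be retained:
 {code}
 public class RetainMarkersDemo {
   public static void main(String[] args) {
     Boolean retainDeleteMarkers = null;  // not explicitly requested
     boolean isAllFiles = true;           // major compaction over all files

     boolean buggy = (retainDeleteMarkers != null)
         ? retainDeleteMarkers.booleanValue() : isAllFiles;    // true: markers kept
     boolean fixed = (retainDeleteMarkers != null)
         ? retainDeleteMarkers.booleanValue() : !isAllFiles;   // false: markers purged
     System.out.println(buggy + " " + fixed);                  // prints: true false
   }
 }
 {code}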



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12392) Incorrect implementation of CompactionRequest.isRetainDeleteMarkers

2014-10-31 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-12392:
-
Description: 
Now in the implementation of the isRetainDeleteMarkers method, the code looks 
like:
{code}
return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
    : isAllFiles();
{code}
This means that for a major compaction in a normal store, this method returns true. 
Consequently the delete markers cannot be removed in the major compaction, 
which causes the unit test TestKeepDeletes to fail.
The correct implementation should be:
{code}
return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
    : !isAllFiles();
{code}

  was:
Now in the implementation of the isRetainDeleteMarkers method, the code looks 
like:
{code}
return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
    : isAllFiles();
{code}
This means that for a major compaction in a normal store, this method returns true. 
Consequently the delete markers cannot be removed in the major compaction, 
which causes the unit test TestKeepDeletes to fail.


 Incorrect implementation of CompactionRequest.isRetainDeleteMarkers
 ---

 Key: HBASE-12392
 URL: https://issues.apache.org/jira/browse/HBASE-12392
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: hbase-11339
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: hbase-11339


 Now in the implementation of the isRetainDeleteMarkers method, the code looks 
 like:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : isAllFiles();
 {code}
 This means that for a major compaction in a normal store, this method returns true. 
 Consequently the delete markers cannot be removed in the major compaction, 
 which causes the unit test TestKeepDeletes to fail.
 The correct implementation should be:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : !isAllFiles();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12274) Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() may produce null pointer exception

2014-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191527#comment-14191527
 ] 

Hudson commented on HBASE-12274:


SUCCESS: Integrated in HBase-0.98 #643 (See 
[https://builds.apache.org/job/HBase-0.98/643/])
HBASE-12274 addendum removes synchronized keyword for nextRaw() (tedyu: rev 
f1cabd378512f1b18003afed9782d88d022d6850)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


 Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() 
 may produce null pointer exception
 --

 Key: HBASE-12274
 URL: https://issues.apache.org/jira/browse/HBASE-12274
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6.1
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 12274-0.98.addendum, 12274-region-server.log, 
 12274-v2.txt, 12274-v2.txt, 12274-v3.txt


 I saw the following in region server log:
 {code}
 2014-10-15 03:28:36,976 ERROR 
 [B.DefaultRpcServer.handler=0,queue=0,port=60020] ipc.RpcServer: Unexpected 
 throwable object
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5023)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4932)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4923)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3245)
   at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
   at java.lang.Thread.run(Thread.java:745)
 {code}
 This is where the NPE happened:
 {code}
 // Let's see what we have in the storeHeap.
 KeyValue current = this.storeHeap.peek();
 {code}
 The cause was race between nextInternal(called through nextRaw) and close 
 methods.
 nextRaw() is not synchronized.
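 For illustration only (hypothetical names and structure; the project's actual fix differs, and per the commit above the addendum later removed the synchronized keyword from nextRaw()), a textbook way to close this kind of check-then-act race is to guard both paths with a common lock and re-check state:
 {code}
 // Sketch: close() and the next/peek path take the same lock, and the
 // reader re-checks the heap before dereferencing it, avoiding the NPE.
 class SketchScanner {
   interface Heap { Object peek(); void close(); }

   private final Object lock = new Object();
   private Heap storeHeap;           // set at construction, nulled by close()
   private boolean closed = false;

   SketchScanner(Heap heap) { this.storeHeap = heap; }

   Object peekNext() {
     synchronized (lock) {
       if (closed || storeHeap == null) {
         return null;                // scanner closed concurrently; no NPE
       }
       return storeHeap.peek();
     }
   }

   void close() {
     synchronized (lock) {
       if (storeHeap != null) {
         storeHeap.close();
         storeHeap = null;
       }
       closed = true;
     }
   }
 }
 {code}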



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12392) Incorrect implementation of CompactionRequest.isRetainDeleteMarkers

2014-10-31 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-12392:
-
Attachment: HBASE-12392.diff

Uploaded the patch to fix this issue.
Hi [~anoopsamjohn], [~jmhsieh], could you please take a look at it and commit? 
Thanks.

 Incorrect implementation of CompactionRequest.isRetainDeleteMarkers
 ---

 Key: HBASE-12392
 URL: https://issues.apache.org/jira/browse/HBASE-12392
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: hbase-11339
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: hbase-11339

 Attachments: HBASE-12392.diff


 Now in the implementation of the isRetainDeleteMarkers method, the code looks 
 like:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : isAllFiles();
 {code}
 This means that for a major compaction in a normal store, this method returns true. 
 Consequently the delete markers cannot be removed in the major compaction, 
 which causes the unit test TestKeepDeletes to fail.
 The correct implementation should be:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : !isAllFiles();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12392) Incorrect implementation of CompactionRequest.isRetainDeleteMarkers

2014-10-31 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-12392:
-
Status: Patch Available  (was: Open)

 Incorrect implementation of CompactionRequest.isRetainDeleteMarkers
 ---

 Key: HBASE-12392
 URL: https://issues.apache.org/jira/browse/HBASE-12392
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: hbase-11339
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: hbase-11339

 Attachments: HBASE-12392.diff


 Now in the implementation of the isRetainDeleteMarkers method, the code looks 
 like:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : isAllFiles();
 {code}
 This means that for a major compaction in a normal store, this method returns true. 
 Consequently the delete markers cannot be removed in the major compaction, 
 which causes the unit test TestKeepDeletes to fail.
 The correct implementation should be:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : !isAllFiles();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12392) Incorrect implementation of CompactionRequest.isRetainDeleteMarkers

2014-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191533#comment-14191533
 ] 

Hadoop QA commented on HBASE-12392:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12678433/HBASE-12392.diff
  against trunk revision .
  ATTACHMENT ID: 12678433

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11538//console

This message is automatically generated.

 Incorrect implementation of CompactionRequest.isRetainDeleteMarkers
 ---

 Key: HBASE-12392
 URL: https://issues.apache.org/jira/browse/HBASE-12392
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: hbase-11339
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: hbase-11339

 Attachments: HBASE-12392.diff


 Now in the implementation of the isRetainDeleteMarkers method, the code looks 
 like:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : isAllFiles();
 {code}
 This means that for a major compaction in a normal store, this method returns true. 
 Consequently the delete markers cannot be removed in the major compaction, 
 which causes the unit test TestKeepDeletes to fail.
 The correct implementation should be:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : !isAllFiles();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12377) HBaseAdmin#deleteTable fails when META region is moved around the same timeframe

2014-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191534#comment-14191534
 ] 

Hadoop QA commented on HBASE-12377:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12678417/HBASE-12377.v3-2.0.patch
  against trunk revision .
  ATTACHMENT ID: 12678417

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.util.TestProcessBasedCluster
  org.apache.hadoop.hbase.mapreduce.TestImportExport

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11536//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11536//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11536//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11536//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11536//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11536//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11536//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11536//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11536//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11536//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11536//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11536//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11536//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11536//console

This message is automatically generated.

 HBaseAdmin#deleteTable fails when META region is moved around the same 
 timeframe
 

 Key: HBASE-12377
 URL: https://issues.apache.org/jira/browse/HBASE-12377
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.4
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12377.v1-2.0.patch, HBASE-12377.v2-2.0.patch, 
 HBASE-12377.v3-2.0.patch


 This is the same issue that HBASE-10809 tried to address.  The fix of 
 HBASE-10809 refetches the latest meta location in the retry loop.  However, 
 there are 2 problems: (1) inside the retry loop, there is another try-catch 
 block that would throw the exception before the retry can kick in; (2) it 
 looks like HBaseAdmin::getFirstMetaServerForTable() always tries to get meta 
 data from the meta cache, which means that if the meta cache is stale and out 
 of date, retries would not solve the problem by fetching from the 

[jira] [Commented] (HBASE-12274) Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() may produce null pointer exception

2014-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191539#comment-14191539
 ] 

Hudson commented on HBASE-12274:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #612 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/612/])
HBASE-12274 addendum removes synchronized keyword for nextRaw() (tedyu: rev 
f1cabd378512f1b18003afed9782d88d022d6850)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


 Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() 
 may produce null pointer exception
 --

 Key: HBASE-12274
 URL: https://issues.apache.org/jira/browse/HBASE-12274
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6.1
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 12274-0.98.addendum, 12274-region-server.log, 
 12274-v2.txt, 12274-v2.txt, 12274-v3.txt


 I saw the following in region server log:
 {code}
 2014-10-15 03:28:36,976 ERROR 
 [B.DefaultRpcServer.handler=0,queue=0,port=60020] ipc.RpcServer: Unexpected 
 throwable object
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5023)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4932)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4923)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3245)
   at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
   at java.lang.Thread.run(Thread.java:745)
 {code}
 This is where the NPE happened:
 {code}
 // Let's see what we have in the storeHeap.
 KeyValue current = this.storeHeap.peek();
 {code}
 The cause was race between nextInternal(called through nextRaw) and close 
 methods.
 nextRaw() is not synchronized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12392) Incorrect implementation of CompactionRequest.isRetainDeleteMarkers

2014-10-31 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12392:
---
Priority: Critical  (was: Major)

 Incorrect implementation of CompactionRequest.isRetainDeleteMarkers
 ---

 Key: HBASE-12392
 URL: https://issues.apache.org/jira/browse/HBASE-12392
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: hbase-11339
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Critical
 Fix For: hbase-11339

 Attachments: HBASE-12392.diff


 Now in the implementation of the isRetainDeleteMarkers method, the code looks 
 like:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : isAllFiles();
 {code}
 This means that for a major compaction in a normal store, this method returns true. 
 Consequently the delete markers cannot be removed in the major compaction, 
 which causes the unit test TestKeepDeletes to fail.
 The correct implementation should be:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : !isAllFiles();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12392) Incorrect implementation of CompactionRequest.isRetainDeleteMarkers

2014-10-31 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191553#comment-14191553
 ] 

Anoop Sam John commented on HBASE-12392:


Oh  that is bad :(
+1. Thanks for the find Jingcheng


 Incorrect implementation of CompactionRequest.isRetainDeleteMarkers
 ---

 Key: HBASE-12392
 URL: https://issues.apache.org/jira/browse/HBASE-12392
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: hbase-11339
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: hbase-11339

 Attachments: HBASE-12392.diff


 Now in the implementation of the isRetainDeleteMarkers method, the code looks 
 like:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : isAllFiles();
 {code}
 This means that for a major compaction in a normal store, this method returns true. 
 Consequently the delete markers cannot be removed in the major compaction, 
 which causes the unit test TestKeepDeletes to fail.
 The correct implementation should be:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : !isAllFiles();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12387) committer guidelines should include patch signoff

2014-10-31 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191567#comment-14191567
 ] 

Anoop Sam John commented on HBASE-12387:


So we should always set the author to the contributor and add the sign-off?  Even 
now many patch commits follow the old way (adding the contributor name in the commit 
message).  What do you say, Stack?

 committer guidelines should include patch signoff
 -

 Key: HBASE-12387
 URL: https://issues.apache.org/jira/browse/HBASE-12387
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Sean Busbey

 Right now our guide for committers applying patches has them use {{git am}} 
 without a signoff flag. This works okay, but it misses adding the 
 signed-off-by blurb in the commit message.
 Those messages make it easier to see at a glance with e.g. {{git log}} which 
 committer applied the patch.
 this section:
 {quote}
 The directive to use git format-patch rather than git diff, and not to use 
 --no-prefix, is a new one. See the second example for how to apply a patch 
 created with git diff, and educate the person who created the patch.
 {code}
 $ git checkout -b HBASE-
 $ git am ~/Downloads/HBASE--v2.patch
 $ git checkout master
 $ git pull --rebase
 $ git cherry-pick sha-from-commit
 # Resolve conflicts if necessary or ask the submitter to do it
 $ git pull --rebase  # Better safe than sorry
 $ git push origin master
 $ git checkout branch-1
 $ git pull --rebase
 $ git cherry-pick sha-from-commit
 # Resolve conflicts if necessary
 $ git pull --rebase  # Better safe than sorry
 $ git push origin branch-1
 $ git branch -D HBASE-
 {code}
 {quote}
 Should be
 {quote}
 The directive to use git format-patch rather than git diff, and not to use 
 --no-prefix, is a new one. See the second example for how to apply a patch 
 created with git diff, and educate the person who created the patch.
 Note that the {{--signoff}} flag to {{git am}} will insert a line in the 
 commit message noting that the patch was checked by your author string. This, 
 in addition to your inclusion as the commit's committer, makes your 
 participation more prominent to users browsing {{git log}}.
 {code}
 $ git checkout -b HBASE-
 $ git am --signoff ~/Downloads/HBASE--v2.patch
 $ git checkout master
 $ git pull --rebase
 $ git cherry-pick sha-from-commit
 # Resolve conflicts if necessary or ask the submitter to do it
 $ git pull --rebase  # Better safe than sorry
 $ git push origin master
 $ git checkout branch-1
 $ git pull --rebase
 $ git cherry-pick sha-from-commit
 # Resolve conflicts if necessary
 $ git pull --rebase  # Better safe than sorry
 $ git push origin branch-1
 $ git branch -D HBASE-
 {code}
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12219) Cache more efficiently getAll() and get() in FSTableDescriptors

2014-10-31 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-12219:
--
Attachment: HBASE-12219-0.99.patch

Attaching a patch for 0.99. Is that good for you, [~enis]?

 Cache more efficiently getAll() and get() in FSTableDescriptors
 ---

 Key: HBASE-12219
 URL: https://issues.apache.org/jira/browse/HBASE-12219
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.24, 0.99.1, 0.98.6.1
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
  Labels: scalability
 Attachments: HBASE-12219-0.99.patch, HBASE-12219-v1.patch, 
 HBASE-12219-v1.patch, HBASE-12219.v0.txt, HBASE-12219.v2.patch, 
 HBASE-12219.v3.patch, list.png


 Currently table descriptors and tables are cached once they are accessed for 
 the first time. Subsequent calls to the master only require a trip to HDFS to 
 look up the modified time in order to reload the table descriptors if they 
 changed. However, in clusters with a large number of tables or concurrent 
 clients this can be too aggressive on HDFS and the master, causing contention 
 with other requests. A simple solution is a TTL-based cache for 
 FSTableDescriptors#getAll() and FSTableDescriptors#TableDescriptorAndModtime() 
 that allows the master to process those calls faster, without having to 
 perform a trip to HDFS for every call to listTables() or getTableDescriptor().
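 A minimal sketch of such a TTL-based cache (illustrative only; the class and names are assumptions, not the attached patch):
 {code}
 import java.util.concurrent.Callable;

 // Serve the cached value until it is older than the TTL, then reload it
 // (e.g. rerun the HDFS scan behind getAll()).
 class TtlCache<V> {
   private final long ttlMs;
   private V value;
   private long loadedAt;

   TtlCache(long ttlMs) { this.ttlMs = ttlMs; }

   synchronized V get(Callable<V> loader) throws Exception {
     long now = System.currentTimeMillis();
     if (value == null || now - loadedAt > ttlMs) {
       value = loader.call();
       loadedAt = now;
     }
     return value;
   }
 }
 {code}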



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12219) Cache more efficiently getAll() and get() in FSTableDescriptors

2014-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191614#comment-14191614
 ] 

Hadoop QA commented on HBASE-12219:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12678443/HBASE-12219-0.99.patch
  against trunk revision .
  ATTACHMENT ID: 12678443

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11539//console

This message is automatically generated.

 Cache more efficiently getAll() and get() in FSTableDescriptors
 ---

 Key: HBASE-12219
 URL: https://issues.apache.org/jira/browse/HBASE-12219
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.24, 0.99.1, 0.98.6.1
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
  Labels: scalability
 Attachments: HBASE-12219-0.99.patch, HBASE-12219-v1.patch, 
 HBASE-12219-v1.patch, HBASE-12219.v0.txt, HBASE-12219.v2.patch, 
 HBASE-12219.v3.patch, list.png


 Currently table descriptors and tables are cached once they are accessed for 
 the first time. Subsequent calls to the master only require a trip to HDFS to 
 look up the modified time in order to reload the table descriptors if they 
 changed. However, in clusters with a large number of tables or concurrent 
 clients this can be too aggressive on HDFS and the master, causing contention 
 with other requests. A simple solution is a TTL-based cache for 
 FSTableDescriptors#getAll() and FSTableDescriptors#TableDescriptorAndModtime() 
 that allows the master to process those calls faster, without having to 
 perform a trip to HDFS for every call to listTables() or getTableDescriptor().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12391) Correct a typo in the mob metrics

2014-10-31 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191626#comment-14191626
 ] 

ramkrishna.s.vasudevan commented on HBASE-12391:


+1

 Correct a typo in the mob metrics
 -

 Key: HBASE-12391
 URL: https://issues.apache.org/jira/browse/HBASE-12391
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Minor
 Fix For: hbase-11339

 Attachments: HBASE-12391.diff


 There's a typo in the temp variable in the region server metrics for mob. 
 It's now testMobCompactedFromMobCellsSize, and should be changed to 
 tempMobCompactedFromMobCellsSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12392) Incorrect implementation of CompactionRequest.isRetainDeleteMarkers

2014-10-31 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191627#comment-14191627
 ] 

Jingcheng Du commented on HBASE-12392:
--

Hi Ram [~ram_krish], do you want to look at this patch? Thanks.

 Incorrect implementation of CompactionRequest.isRetainDeleteMarkers
 ---

 Key: HBASE-12392
 URL: https://issues.apache.org/jira/browse/HBASE-12392
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: hbase-11339
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Critical
 Fix For: hbase-11339

 Attachments: HBASE-12392.diff


 Now in the implementation of the isRetainDeleteMarkers method, the code looks 
 like:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : isAllFiles();
 {code}
 This means that for a major compaction in a normal store, this method returns true. 
 Consequently the delete markers cannot be removed in the major compaction, 
 which causes the unit test TestKeepDeletes to fail.
 The correct implementation should be:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : !isAllFiles();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12392) Incorrect implementation of CompactionRequest.isRetainDeleteMarkers

2014-10-31 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191632#comment-14191632
 ] 

ramkrishna.s.vasudevan commented on HBASE-12392:


+1 on patch.  Good catch.

 Incorrect implementation of CompactionRequest.isRetainDeleteMarkers
 ---

 Key: HBASE-12392
 URL: https://issues.apache.org/jira/browse/HBASE-12392
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: hbase-11339
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Critical
 Fix For: hbase-11339

 Attachments: HBASE-12392.diff


 Now in the implementation of the isRetainDeleteMarkers method, the code looks 
 like:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : isAllFiles();
 {code}
 This means that for a major compaction in a normal store, this method returns true. 
 Consequently the delete markers cannot be removed in the major compaction, 
 which causes the unit test TestKeepDeletes to fail.
 The correct implementation should be:
 {code}
 return (this.retainDeleteMarkers != null) ? this.retainDeleteMarkers.booleanValue()
     : !isAllFiles();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12391) Correct a typo in the mob metrics

2014-10-31 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191640#comment-14191640
 ] 

ramkrishna.s.vasudevan commented on HBASE-12391:


Pushed to HBASE-11339 branch. Thanks for the patch Jingcheng.

 Correct a typo in the mob metrics
 -

 Key: HBASE-12391
 URL: https://issues.apache.org/jira/browse/HBASE-12391
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Minor
 Fix For: hbase-11339

 Attachments: HBASE-12391.diff


 There's a typo in the temp variable in the region server metrics for mob. 
 It's now testMobCompactedFromMobCellsSize, and should be changed to 
 tempMobCompactedFromMobCellsSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12393) The regionserver web UI will throw exception when we set block cache to zero

2014-10-31 Thread ChiaPing Tsai (JIRA)
ChiaPing Tsai created HBASE-12393:
-

 Summary: The regionserver web UI will throw exception when we set 
block cache to zero
 Key: HBASE-12393
 URL: https://issues.apache.org/jira/browse/HBASE-12393
 Project: HBase
  Issue Type: Bug
  Components: BlockCache
 Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
Reporter: ChiaPing Tsai
Priority: Minor


CacheConfig.getBlockCache() will return null when we set 
hfile.block.cache.size to zero.
This causes BlockCacheTmplImpl.java:123 to throw a NullPointerException:

{code}
org.jamon.escaping.Escaping.HTML.write(
    org.jamon.emit.StandardEmitter.valueOf(
        StringUtils.humanReadableInt(cacheConfig.getBlockCache().size())),
    jamonWriter);
{code}
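A null guard along the following lines would avoid the NPE (a sketch only, not the actual template fix; the "N/A" placeholder is an assumption):
{code}
// Render a placeholder when no block cache is configured
// (hfile.block.cache.size = 0 makes getBlockCache() return null).
BlockCache bc = cacheConfig.getBlockCache();
String size = (bc == null) ? "N/A" : StringUtils.humanReadableInt(bc.size());
org.jamon.escaping.Escaping.HTML.write(
    org.jamon.emit.StandardEmitter.valueOf(size), jamonWriter);
{code}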





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12393) The regionserver web UI will throw exception when we set block cache to zero

2014-10-31 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-12393:
--
Component/s: (was: BlockCache)
 regionserver

 The regionserver web UI will throw exception when we set block cache to zero
 

 Key: HBASE-12393
 URL: https://issues.apache.org/jira/browse/HBASE-12393
 Project: HBase
  Issue Type: Bug
  Components: regionserver
 Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
Reporter: ChiaPing Tsai
Priority: Minor

 CacheConfig.getBlockCache() will return null when we set 
 hfile.block.cache.size to zero.
 This causes BlockCacheTmplImpl.java:123 to throw a NullPointerException:
 {code}
 org.jamon.escaping.Escaping.HTML.write(
     org.jamon.emit.StandardEmitter.valueOf(
         StringUtils.humanReadableInt(cacheConfig.getBlockCache().size())),
     jamonWriter);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12391) Correct a typo in the mob metrics

2014-10-31 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191641#comment-14191641
 ] 

ramkrishna.s.vasudevan commented on HBASE-12391:


Should we wait for [~jmhsieh]'s +1, or can we commit this? The change is fine anyway.

 Correct a typo in the mob metrics
 -

 Key: HBASE-12391
 URL: https://issues.apache.org/jira/browse/HBASE-12391
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Minor
 Fix For: hbase-11339

 Attachments: HBASE-12391.diff


 There's a typo in the temp variable in the region server metrics for mob. 
 It's now testMobCompactedFromMobCellsSize, and should be changed to 
 tempMobCompactedFromMobCellsSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12393) The regionserver web UI will throw exception when we set block cache to zero

2014-10-31 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-12393:
--
Affects Version/s: 0.98.7

 The regionserver web UI will throw exception when we set block cache to zero
 

 Key: HBASE-12393
 URL: https://issues.apache.org/jira/browse/HBASE-12393
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.7
 Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
Reporter: ChiaPing Tsai
Priority: Minor

 CacheConfig.getBlockCache() will return null when we set 
 hfile.block.cache.size to zero.
 This causes BlockCacheTmplImpl.java:123 to throw a NullPointerException:
 {code}
 org.jamon.escaping.Escaping.HTML.write(
     org.jamon.emit.StandardEmitter.valueOf(
         StringUtils.humanReadableInt(cacheConfig.getBlockCache().size())),
     jamonWriter);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-10-31 Thread Weichen Ye (JIRA)
Weichen Ye created HBASE-12394:
--

 Summary: Support multiple regions as input to each mapper in 
map/reduce jobs
 Key: HBASE-12394
 URL: https://issues.apache.org/jira/browse/HBASE-12394
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.6.1
Reporter: Weichen Ye






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12290) Column trackers and delete trackers should deal with BBs

2014-10-31 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12290:
---
Attachment: HBASE-12290.patch

Adding some methods that deal with BB.  Currently they will be unused. Modified 
some existing APIs to work with BB by wrapping the existing byte[] with BB.  
The new code here will be unused for now, but it will be useful once all the 
subtasks are completed.  This should make review easier.
Also, DeleteTracker cannot be changed now because it already deals with Cell, 
so all the delete tracker implementations should change based on it.
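The wrapping mentioned above is essentially the following one-liner (variable names are assumptions):
{code}
import java.nio.ByteBuffer;

// Expose an existing byte[] region through the ByteBuffer-based APIs
// without copying the bytes.
ByteBuffer bb = ByteBuffer.wrap(existingArray, offset, length);
{code}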


 Column trackers and delete trackers should deal with BBs
 

 Key: HBASE-12290
 URL: https://issues.apache.org/jira/browse/HBASE-12290
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 0.99.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12290.patch


 All the trackers should deal with BBs if we need E2E BB usage in the read 
 path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12290) Column trackers and delete trackers should deal with BBs

2014-10-31 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12290:
---
Status: Patch Available  (was: Open)

 Column trackers and delete trackers should deal with BBs
 

 Key: HBASE-12290
 URL: https://issues.apache.org/jira/browse/HBASE-12290
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 0.99.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12290.patch


 All the trackers should deal with BBs if we need E2E BB usage in the read 
 path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12219) Cache more efficiently getAll() and get() in FSTableDescriptors

2014-10-31 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-12219:
--
Attachment: (was: HBASE-12219-0.99.patch)

 Cache more efficiently getAll() and get() in FSTableDescriptors
 ---

 Key: HBASE-12219
 URL: https://issues.apache.org/jira/browse/HBASE-12219
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.24, 0.99.1, 0.98.6.1
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
  Labels: scalability
 Attachments: HBASE-12219-v1.patch, HBASE-12219-v1.patch, 
 HBASE-12219.v0.txt, HBASE-12219.v2.patch, HBASE-12219.v3.patch, list.png


 Currently table descriptors and tables are cached once they are accessed for 
 the first time. Next calls to the master only require a trip to HDFS to 
 lookup the modified time in order to reload the table descriptors if 
 modified. However in clusters with a large number of tables or concurrent 
 clients and this can be too aggressive to HDFS and the master causing 
 contention to process other requests. A simple solution is to have a TTL 
 based cached for FSTableDescriptors#getAll() and  
 FSTableDescriptors#TableDescriptorAndModtime() that can allow the master to 
 process those calls faster without causing contention without having to 
 perform a trip to HDFS for every call. to listtables() or getTableDescriptor()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12392) Incorrect implementation of CompactionRequest.isRetainDeleteMarkers

2014-10-31 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191647#comment-14191647
 ] 

ramkrishna.s.vasudevan commented on HBASE-12392:


Should we wait for [~jmhsieh]'s +1, or can we commit this? The change is fine anyway.

 Incorrect implementation of CompactionRequest.isRetainDeleteMarkers
 ---

 Key: HBASE-12392
 URL: https://issues.apache.org/jira/browse/HBASE-12392
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: hbase-11339
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Critical
 Fix For: hbase-11339

 Attachments: HBASE-12392.diff


 Now in the implementation of the isRetainDeleteMarkers method, the code looks 
 like:
 {code}
 return (this.retainDeleteMarkers != null)
     ? this.retainDeleteMarkers.booleanValue()
     : isAllFiles();
 {code}
 It means that for a major compaction in a normal store, this method returns 
 true. Consequently the delete markers could not be removed in the major 
 compaction, which causes the unit test TestKeepDeletes to fail.
 The correct implementation should be:
 {code}
 return (this.retainDeleteMarkers != null)
     ? this.retainDeleteMarkers.booleanValue()
     : !isAllFiles();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12391) Correct a typo in the mob metrics

2014-10-31 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191646#comment-14191646
 ] 

ramkrishna.s.vasudevan commented on HBASE-12391:


Sorry, the previous comment was meant for HBASE-12392.

 Correct a typo in the mob metrics
 -

 Key: HBASE-12391
 URL: https://issues.apache.org/jira/browse/HBASE-12391
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Minor
 Fix For: hbase-11339

 Attachments: HBASE-12391.diff


 There's a typo in the temp variable in the region server metrics for mob. 
 It's now testMobCompactedFromMobCellsSize, and should be changed to 
 tempMobCompactedFromMobCellsSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12391) Correct a typo in the mob metrics

2014-10-31 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12391:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 Correct a typo in the mob metrics
 -

 Key: HBASE-12391
 URL: https://issues.apache.org/jira/browse/HBASE-12391
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Minor
 Fix For: hbase-11339

 Attachments: HBASE-12391.diff


 There's a typo in the temp variable in the region server metrics for mob. 
 It's now testMobCompactedFromMobCellsSize, and should be changed to 
 tempMobCompactedFromMobCellsSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12219) Cache more efficiently getAll() and get() in FSTableDescriptors

2014-10-31 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191648#comment-14191648
 ] 

Esteban Gutierrez commented on HBASE-12219:
---

Cancelled the 0.99 patch for now; it was consistent but had some formatting issues.

 Cache more efficiently getAll() and get() in FSTableDescriptors
 ---

 Key: HBASE-12219
 URL: https://issues.apache.org/jira/browse/HBASE-12219
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.24, 0.99.1, 0.98.6.1
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
  Labels: scalability
 Attachments: HBASE-12219-v1.patch, HBASE-12219-v1.patch, 
 HBASE-12219.v0.txt, HBASE-12219.v2.patch, HBASE-12219.v3.patch, list.png


 Currently, table descriptors and tables are cached once they are accessed for 
 the first time. Subsequent calls to the master only require a trip to HDFS to 
 look up the modified time in order to reload the table descriptors if they 
 have been modified. However, in clusters with a large number of tables or many 
 concurrent clients, this can be too aggressive towards HDFS and the master, 
 causing contention that slows the processing of other requests. A simple 
 solution is to have a TTL-based cache for FSTableDescriptors#getAll() and 
 FSTableDescriptors#TableDescriptorAndModtime() that allows the master to 
 answer those calls faster, without contention and without a trip to HDFS for 
 every call to listtables() or getTableDescriptor().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12393) The regionserver web UI will throw exception when we disable block cache

2014-10-31 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-12393:
--
Summary: The regionserver web UI will throw exception when we disable block 
cache  (was: The regionserver web UI will throw exception when we set block 
cache to zero)

 The regionserver web UI will throw exception when we disable block cache
 

 Key: HBASE-12393
 URL: https://issues.apache.org/jira/browse/HBASE-12393
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.7
 Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
Reporter: ChiaPing Tsai
Priority: Minor

 CacheConfig.getBlockCache() returns null when we set hfile.block.cache.size 
 to zero.
 This causes BlockCacheTmplImpl.java:123 to throw a NullPointerException:
 {code}
 org.jamon.escaping.Escaping.HTML.write(org.jamon.emit.StandardEmitter.valueOf(StringUtils.humanReadableInt(cacheConfig.getBlockCache().size())),
  jamonWriter);
 {code}
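 A minimal defensive fix, sketched here as an assumption rather than the 
 committed change, would be to null-check the block cache before rendering the 
 size (BlockCache here is org.apache.hadoop.hbase.io.hfile.BlockCache):
 {code}
 // Hypothetical guard for the template logic quoted above.
 BlockCache blockCache = cacheConfig.getBlockCache();
 String cacheSize = (blockCache == null)
     ? "N/A (block cache disabled)"
     : StringUtils.humanReadableInt(blockCache.size());
 org.jamon.escaping.Escaping.HTML.write(
     org.jamon.emit.StandardEmitter.valueOf(cacheSize), jamonWriter);
 {code}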



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12393) The regionserver web will throw exception when we disable block cache

2014-10-31 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-12393:
--
Summary: The regionserver web will throw exception when we disable block 
cache  (was: The regionserver web UI will throw exception when we disable block 
cache)

 The regionserver web will throw exception when we disable block cache
 -

 Key: HBASE-12393
 URL: https://issues.apache.org/jira/browse/HBASE-12393
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.7
 Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
Reporter: ChiaPing Tsai
Priority: Minor

 CacheConfig.getBlockCache() returns null when we set hfile.block.cache.size 
 to zero.
 This causes BlockCacheTmplImpl.java:123 to throw a NullPointerException:
 {code}
 org.jamon.escaping.Escaping.HTML.write(org.jamon.emit.StandardEmitter.valueOf(StringUtils.humanReadableInt(cacheConfig.getBlockCache().size())),
  jamonWriter);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-10-31 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Description: 
For a Hadoop cluster, a job with a large HBase table as input always consumes a 
large amount of computing resources. For example, we need to create a job with 
1000 mappers to scan a table with 1000 regions. This patch is to support one 
mapper using multiple regions as input.
 
The following new files are including in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

In order to support multiple regions for one mapper, users can add a property 
in the configuration: hbase.mapreduce.scan.regionspermapper

This is an example. This means each mapper can have 3 regions as input.
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
Text.class, Text.class, job);
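As a rough sketch of the grouping arithmetic this implies (a hedged 
illustration; conf and numRegions are assumed variables, not names from the 
patch): with regionspermapper set to 3 and 1000 regions, the job gets 
ceil(1000 / 3) = 334 input splits instead of 1000.
{code}
int regionsPerMapper = Integer.parseInt(
    conf.get("hbase.mapreduce.scan.regionspermapper", "1"));
int numSplits = (numRegions + regionsPerMapper - 1) / regionsPerMapper; // ceiling division
{code}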



 
  

 Support multiple regions as input to each mapper in map/reduce jobs
 ---

 Key: HBASE-12394
 URL: https://issues.apache.org/jira/browse/HBASE-12394
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.6.1
Reporter: Weichen Ye

 For a Hadoop cluster, a job with a large HBase table as input always consumes 
 a large amount of computing resources. For example, we need to create a job 
 with 1000 mappers to scan a table with 1000 regions. This patch is to support 
 one mapper using multiple regions as input.
  
 The following new files are including in this patch:
 TableMultiRegionInputFormat.java
 TableMultiRegionInputFormatBase.java
 TableMultiRegionMapReduceUtil.java
 *TestTableMultiRegionInputFormatScan1.java
 *TestTableMultiRegionInputFormatScan2.java
 *TestTableMultiRegionInputFormatScanBase.java
 *TestTableMultiRegionMapReduceUtil.java
  
 The files starting with * are tests.
 In order to support multiple regions for one mapper, users can add a property 
 in the configuration: hbase.mapreduce.scan.regionspermapper
 This is an example. This means each mapper can have 3 regions as input.
 <property>
   <name>hbase.mapreduce.scan.regionspermapper</name>
   <value>3</value>
 </property>
 This is an example of the Java code:
 TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
 Text.class, Text.class, job);
  
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-10-31 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: HBASE-12394.patch

 Support multiple regions as input to each mapper in map/reduce jobs
 ---

 Key: HBASE-12394
 URL: https://issues.apache.org/jira/browse/HBASE-12394
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.6.1
Reporter: Weichen Ye
 Attachments: HBASE-12394.patch


 For a Hadoop cluster, a job with a large HBase table as input always consumes 
 a large amount of computing resources. For example, we need to create a job 
 with 1000 mappers to scan a table with 1000 regions. This patch is to support 
 one mapper using multiple regions as input.
  
 The following new files are including in this patch:
 TableMultiRegionInputFormat.java
 TableMultiRegionInputFormatBase.java
 TableMultiRegionMapReduceUtil.java
 *TestTableMultiRegionInputFormatScan1.java
 *TestTableMultiRegionInputFormatScan2.java
 *TestTableMultiRegionInputFormatScanBase.java
 *TestTableMultiRegionMapReduceUtil.java
  
 The files starting with * are tests.
 In order to support multiple regions for one mapper, users can add a property 
 in the configuration: hbase.mapreduce.scan.regionspermapper
 This is an example. This means each mapper can have 3 regions as input.
 <property>
   <name>hbase.mapreduce.scan.regionspermapper</name>
   <value>3</value>
 </property>
 This is an example of the Java code:
 TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
 Text.class, Text.class, job);
  
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-10-31 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Description: 
For a Hadoop cluster, a job with a large HBase table as input always consumes a 
large amount of computing resources. For example, we need to create a job with 
1000 mappers to scan a table with 1000 regions. This patch is to support one 
mapper using multiple regions as input.
 
The following new files are including in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

In order to support multiple regions for one mapper, we need a new property 
in the configuration: hbase.mapreduce.scan.regionspermapper

This is an example, which means each mapper can have 3 regions as input.
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
Text.class, Text.class, job);



 
  

  was:
For a Hadoop cluster, a job with a large HBase table as input always consumes a 
large amount of computing resources. For example, we need to create a job with 
1000 mappers to scan a table with 1000 regions. This patch is to support one 
mapper using multiple regions as input.
 
The following new files are including in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

In order to support multiple regions for one mapper, users can add a property 
in the configuration: hbase.mapreduce.scan.regionspermapper

This is an example. This means each mapper can have 3 regions as input.
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
Text.class, Text.class, job);



 
  


 Support multiple regions as input to each mapper in map/reduce jobs
 ---

 Key: HBASE-12394
 URL: https://issues.apache.org/jira/browse/HBASE-12394
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.6.1
Reporter: Weichen Ye
 Attachments: HBASE-12394.patch


 For a Hadoop cluster, a job with a large HBase table as input always consumes 
 a large amount of computing resources. For example, we need to create a job 
 with 1000 mappers to scan a table with 1000 regions. This patch is to support 
 one mapper using multiple regions as input.
  
 The following new files are including in this patch:
 TableMultiRegionInputFormat.java
 TableMultiRegionInputFormatBase.java
 TableMultiRegionMapReduceUtil.java
 *TestTableMultiRegionInputFormatScan1.java
 *TestTableMultiRegionInputFormatScan2.java
 *TestTableMultiRegionInputFormatScanBase.java
 *TestTableMultiRegionMapReduceUtil.java
  
 The files starting with * are tests.
 In order to support multiple regions for one mapper, we need a new property 
 in the configuration: hbase.mapreduce.scan.regionspermapper
 This is an example, which means each mapper can have 3 regions as input.
 <property>
   <name>hbase.mapreduce.scan.regionspermapper</name>
   <value>3</value>
 </property>
 This is an example of the Java code:
 TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
 Text.class, Text.class, job);
  
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-10-31 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Description: 
For a Hadoop cluster, a job with a large HBase table as input always consumes a 
large amount of computing resources. For example, we need to create a job with 
1000 mappers to scan a table with 1000 regions. This patch is to support one 
mapper using multiple regions as input.
 
The following new files are including in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

In order to support multiple regions for one mapper, we need a new property 
in the configuration: hbase.mapreduce.scan.regionspermapper

This is an example, which means each mapper has 3 regions as input.
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
Text.class, Text.class, job);



 
  

  was:
For a Hadoop cluster, a job with a large HBase table as input always consumes a 
large amount of computing resources. For example, we need to create a job with 
1000 mappers to scan a table with 1000 regions. This patch is to support one 
mapper using multiple regions as input.
 
The following new files are including in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

In order to support multiple regions for one mapper, we need a new property 
in the configuration: hbase.mapreduce.scan.regionspermapper

This is an example, which means each mapper can have 3 regions as input.
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
Text.class, Text.class, job);



 
  


 Support multiple regions as input to each mapper in map/reduce jobs
 ---

 Key: HBASE-12394
 URL: https://issues.apache.org/jira/browse/HBASE-12394
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.6.1
Reporter: Weichen Ye
 Attachments: HBASE-12394.patch


 For a Hadoop cluster, a job with a large HBase table as input always consumes 
 a large amount of computing resources. For example, we need to create a job 
 with 1000 mappers to scan a table with 1000 regions. This patch is to support 
 one mapper using multiple regions as input.
  
 The following new files are including in this patch:
 TableMultiRegionInputFormat.java
 TableMultiRegionInputFormatBase.java
 TableMultiRegionMapReduceUtil.java
 *TestTableMultiRegionInputFormatScan1.java
 *TestTableMultiRegionInputFormatScan2.java
 *TestTableMultiRegionInputFormatScanBase.java
 *TestTableMultiRegionMapReduceUtil.java
  
 The files starting with * are tests.
 In order to support multiple regions for one mapper, we need a new property 
 in the configuration: hbase.mapreduce.scan.regionspermapper
 This is an example, which means each mapper has 3 regions as input.
 <property>
   <name>hbase.mapreduce.scan.regionspermapper</name>
   <value>3</value>
 </property>
 This is an example of the Java code:
 TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
 Text.class, Text.class, job);
  
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12392) Incorrect implementation of CompactionRequest.isRetainDeleteMarkers

2014-10-31 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12392:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to HBASE-11339. Thanks for the patch Jingcheng.

 Incorrect implementation of CompactionRequest.isRetainDeleteMarkers
 ---

 Key: HBASE-12392
 URL: https://issues.apache.org/jira/browse/HBASE-12392
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: hbase-11339
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Critical
 Fix For: hbase-11339

 Attachments: HBASE-12392.diff


 Now in the implementation of the isRetainDeleteMarkers method, the code looks 
 like:
 {code}
 return (this.retainDeleteMarkers != null)
     ? this.retainDeleteMarkers.booleanValue()
     : isAllFiles();
 {code}
 It means that for a major compaction in a normal store, this method returns 
 true. Consequently the delete markers could not be removed in the major 
 compaction, which causes the unit test TestKeepDeletes to fail.
 The correct implementation should be:
 {code}
 return (this.retainDeleteMarkers != null)
     ? this.retainDeleteMarkers.booleanValue()
     : !isAllFiles();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-10-31 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Description: 
For a Hadoop cluster, a job with a large HBase table as input always consumes a 
large amount of computing resources. For example, we need to create a job with 
1000 mappers to scan a table with 1000 regions. This patch is to support one 
mapper using multiple regions as input.
 
The following new files are included in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

In order to support multiple regions for one mapper, we need a new property 
in the configuration: hbase.mapreduce.scan.regionspermapper

This is an example, which means each mapper has 3 regions as input.
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
Text.class, Text.class, job);



 
  

  was:
For a Hadoop cluster, a job with a large HBase table as input always consumes a 
large amount of computing resources. For example, we need to create a job with 
1000 mappers to scan a table with 1000 regions. This patch is to support one 
mapper using multiple regions as input.
 
The following new files are including in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

In order to support multiple regions for one mapper, we need a new property 
in the configuration: hbase.mapreduce.scan.regionspermapper

This is an example, which means each mapper has 3 regions as input.
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
Text.class, Text.class, job);



 
  


 Support multiple regions as input to each mapper in map/reduce jobs
 ---

 Key: HBASE-12394
 URL: https://issues.apache.org/jira/browse/HBASE-12394
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.6.1
Reporter: Weichen Ye
 Attachments: HBASE-12394.patch


 For a Hadoop cluster, a job with a large HBase table as input always consumes 
 a large amount of computing resources. For example, we need to create a job 
 with 1000 mappers to scan a table with 1000 regions. This patch is to support 
 one mapper using multiple regions as input.
  
 The following new files are included in this patch:
 TableMultiRegionInputFormat.java
 TableMultiRegionInputFormatBase.java
 TableMultiRegionMapReduceUtil.java
 *TestTableMultiRegionInputFormatScan1.java
 *TestTableMultiRegionInputFormatScan2.java
 *TestTableMultiRegionInputFormatScanBase.java
 *TestTableMultiRegionMapReduceUtil.java
  
 The files starting with * are tests.
 In order to support multiple regions for one mapper, we need a new property 
 in the configuration: hbase.mapreduce.scan.regionspermapper
 This is an example, which means each mapper has 3 regions as input.
 <property>
   <name>hbase.mapreduce.scan.regionspermapper</name>
   <value>3</value>
 </property>
 This is an example of the Java code:
 TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
 Text.class, Text.class, job);
  
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-10-31 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Affects Version/s: 2.0.0
   Status: Patch Available  (was: Open)

 Support multiple regions as input to each mapper in map/reduce jobs
 ---

 Key: HBASE-12394
 URL: https://issues.apache.org/jira/browse/HBASE-12394
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.6.1, 2.0.0
Reporter: Weichen Ye
 Attachments: HBASE-12394.patch


 For a Hadoop cluster, a job with a large HBase table as input always consumes 
 a large amount of computing resources. For example, we need to create a job 
 with 1000 mappers to scan a table with 1000 regions. This patch is to support 
 one mapper using multiple regions as input.
  
 The following new files are included in this patch:
 TableMultiRegionInputFormat.java
 TableMultiRegionInputFormatBase.java
 TableMultiRegionMapReduceUtil.java
 *TestTableMultiRegionInputFormatScan1.java
 *TestTableMultiRegionInputFormatScan2.java
 *TestTableMultiRegionInputFormatScanBase.java
 *TestTableMultiRegionMapReduceUtil.java
  
 The files starting with * are tests.
 In order to support multiple regions for one mapper, we need a new property 
 in the configuration: hbase.mapreduce.scan.regionspermapper
 This is an example, which means each mapper has 3 regions as input.
 <property>
   <name>hbase.mapreduce.scan.regionspermapper</name>
   <value>3</value>
 </property>
 This is an example of the Java code:
 TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
 Text.class, Text.class, job);
  
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12358) Create ByteBuffer backed Cell

2014-10-31 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191673#comment-14191673
 ] 

ramkrishna.s.vasudevan commented on HBASE-12358:


Some APIs need some small changes. Will update the patch once that is done, 
adding test cases.

 Create ByteBuffer backed Cell
 -

 Key: HBASE-12358
 URL: https://issues.apache.org/jira/browse/HBASE-12358
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12358.patch, HBASE-12358_1.patch


 As part of HBASE-12224 and HBASE-12282 we wanted a Cell that is backed by a 
 BB. Changing the core Cell impl would not be needed, as it is used on the 
 server side only. So we will create a BB-backed Cell and use it in the 
 server-side read path. This JIRA just creates an interface that extends Cell 
 and adds the needed API.
 The getTimestamp() and getTypeByte() can still refer to the original Cell API 
 only. The getXXXOffset() and getXXXLength() can also refer to the original 
 Cell only.
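 One plausible shape for such an interface (method names here are guesses for 
 illustration, not the API in the attached patches):
 {code}
 import java.nio.ByteBuffer;
 import org.apache.hadoop.hbase.Cell;

 // Hypothetical sketch: BB-returning accessors plus positions within each BB.
 public interface ByteBufferedCell extends Cell {
   ByteBuffer getRowByteBuffer();
   int getRowPosition();
   ByteBuffer getFamilyByteBuffer();
   int getFamilyPosition();
   ByteBuffer getQualifierByteBuffer();
   int getQualifierPosition();
   ByteBuffer getValueByteBuffer();
   int getValuePosition();
   // getTimestamp()/getTypeByte() and the getXXXOffset()/getXXXLength()
   // accessors are inherited unchanged from Cell, as the description notes.
 }
 {code}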



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12393) The regionserver web will throw exception if we disable block cache

2014-10-31 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-12393:
--
Summary: The regionserver web will throw exception if we disable block 
cache  (was: The regionserver web will throw exception when we disable block 
cache)

 The regionserver web will throw exception if we disable block cache
 ---

 Key: HBASE-12393
 URL: https://issues.apache.org/jira/browse/HBASE-12393
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.7
 Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
Reporter: ChiaPing Tsai
Priority: Minor

 CacheConfig.getBlockCache() returns null when we set hfile.block.cache.size 
 to zero.
 This causes BlockCacheTmplImpl.java:123 to throw a NullPointerException:
 {code}
 org.jamon.escaping.Escaping.HTML.write(org.jamon.emit.StandardEmitter.valueOf(StringUtils.humanReadableInt(cacheConfig.getBlockCache().size())),
  jamonWriter);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12363) KEEP_DELETED_CELLS considered harmful?

2014-10-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191690#comment-14191690
 ] 

Ted Yu commented on HBASE-12363:


+1 on getting the test in. 

Nit: fix typo on commit: compactin

 KEEP_DELETED_CELLS considered harmful?
 --

 Key: HBASE-12363
 URL: https://issues.apache.org/jira/browse/HBASE-12363
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver
Reporter: Lars Hofhansl
  Labels: Phoenix
 Attachments: 12363-test.txt


 Brainstorming...
 This morning in the train (of all places) I realized a fundamental issue in 
 how KEEP_DELETED_CELLS is implemented.
 The problem is around knowing when it is safe to remove a delete marker (we 
 cannot remove it unless all cells affected by it are removed as well).
 This was particularly hard for family markers, since they sort before all 
 cells of a row, and hence scanning forward through an HFile you cannot know 
 whether the family markers are still needed until at least the entire row is 
 scanned.
 My solution was to keep the TS of the oldest put in any given HFile, and only 
 remove delete markers older than that TS.
 That sounds good on the face of it... But now imagine you wrote a version of 
 ROW 1 and then never update it again. Then later you write a billion other 
 rows and delete them all. Since the TS of the cells in ROW 1 is older than 
 all the delete markers for the other billion rows, these will never be 
 collected... At least not in the region that hosts ROW 1, even after a major 
 compaction.
 Note, in a sense that is what HBase is supposed to do when keeping deleted 
 cells: keep them until they would be removed by some other means (for example 
 TTL, or MAX_VERSIONS when new versions are inserted).
 The specific problem here is that even when all KVs affected by a delete 
 marker have expired this way, the marker would not be removed if there is 
 just one older KV in the HStore.
 I don't see a good way out of this. In the parent I outlined these four 
 solutions:
 # Only allow the new flag set on CFs with TTL set. MIN_VERSIONS would not 
 apply to deleted rows or delete marker rows (wouldn't know how long to keep 
 family deletes in that case). (MAX)VERSIONS would still be enforced on all 
 row types except for family delete markers.
 # Translate family delete markers to column delete markers at (major) 
 compaction time.
 # Change HFileWriterV* to keep track of the earliest put TS in a store and 
 write it to the file metadata. Use that to expire delete markers that are 
 older and hence can't affect any puts in the file.
 # Have Store.java keep track of the earliest put in internalFlushCache and 
 compactStore and then append it to the file metadata. That way HFileWriterV* 
 would not need to know about KVs.
 And I implemented #4.
 I'd love to get input on ideas.
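 For concreteness, the rule behind options #3 and #4 reduces to a comparison 
 like the following (only a sketch; the names are illustrative):
 {code}
 // A delete marker can be dropped only if it is older than the earliest put
 // in the file: then no put in that file can still be masked by the marker.
 static boolean canDropDeleteMarker(long deleteMarkerTs, long earliestPutTsInFile) {
   return deleteMarkerTs < earliestPutTsInFile;
 }
 {code}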



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12290) Column trackers and delete trackers should deal with BBs

2014-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191706#comment-14191706
 ] 

Hadoop QA commented on HBASE-12290:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12678449/HBASE-12290.patch
  against trunk revision .
  ATTACHMENT ID: 12678449

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11540//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11540//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11540//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11540//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11540//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11540//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11540//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11540//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11540//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11540//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11540//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11540//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11540//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11540//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11540//console

This message is automatically generated.

 Column trackers and delete trackers should deal with BBs
 

 Key: HBASE-12290
 URL: https://issues.apache.org/jira/browse/HBASE-12290
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 0.99.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12290.patch


 All the trackers should deal with BBs if we need E2E BB usage in the read 
 path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191741#comment-14191741
 ] 

Hadoop QA commented on HBASE-12394:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12678452/HBASE-12394.patch
  against trunk revision .
  ATTACHMENT ID: 12678452

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+   * See {@link 
org.apache.hadoop.hbase.mapreduce.TableMultiRegionMapReduceUtil#convertScanToString(org.apache.hadoop.hbase.client.Scan)}
 for more details.
+   * Overrides previous calls to {@link 
org.apache.hadoop.hbase.client.Scan#addColumn(byte[], byte[])} for any families 
in the
+ String regionPerMapper 
= context.getConfiguration().get("hbase.mapreduce.scan.regionspermapper", "1");
+ LOG.error("ERROR when parseInt: hbase.mapreduce.scan.regionspermapper 
must be an integer");
+int 
stopRegion = (i*regionPerMapperInt+regionPerMapperInt-1 < keys.getFirst().length) ? (i*regionPerMapperInt+regionPerMapperInt-1) : (keys.getFirst().length-1);
+InetSocketAddress isa = new 
InetSocketAddress(location.getHostname(), location.getPort());
+   * This optimization is effective when there is a specific reason to 
exclude an entire region from the M-R job,
+   * Useful when we need to remember the last-processed top record and revisit 
the [last, current) interval for M-R processing,
+   * continuously. In addition to reducing InputSplits, this reduces the load on 
the region server as well, due to the ordering of the keys.
+   * Override this method if you want to bulk-exclude regions altogether from 
M-R. By default, no region is excluded (i.e. all regions are included).

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11541//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11541//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11541//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11541//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11541//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11541//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11541//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11541//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11541//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11541//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11541//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11541//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11541//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11541//console

This message is automatically generated.

 Support multiple regions as input to each mapper in map/reduce jobs
 ---

 Key: 

[jira] [Commented] (HBASE-12391) Correct a typo in the mob metrics

2014-10-31 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191748#comment-14191748
 ] 

Jonathan Hsieh commented on HBASE-12391:


+1, lgtm. 

 Correct a typo in the mob metrics
 -

 Key: HBASE-12391
 URL: https://issues.apache.org/jira/browse/HBASE-12391
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Minor
 Fix For: hbase-11339

 Attachments: HBASE-12391.diff


 There's a typo in the temp variable in the region server metrics for mob. 
 It's now testMobCompactedFromMobCellsSize, and should be changed to 
 tempMobCompactedFromMobCellsSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12392) Incorrect implementation of CompactionRequest.isRetainDeleteMarkers

2014-10-31 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191749#comment-14191749
 ] 

Jonathan Hsieh commented on HBASE-12392:


+1, lgtm.

 Incorrect implementation of CompactionRequest.isRetainDeleteMarkers
 ---

 Key: HBASE-12392
 URL: https://issues.apache.org/jira/browse/HBASE-12392
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: hbase-11339
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Critical
 Fix For: hbase-11339

 Attachments: HBASE-12392.diff


 Now in the implementation of the isRetainDeleteMarkers method, the code looks 
 like:
 {code}
 return (this.retainDeleteMarkers != null)
     ? this.retainDeleteMarkers.booleanValue()
     : isAllFiles();
 {code}
 It means that for a major compaction in a normal store, this method returns 
 true. Consequently the delete markers could not be removed in the major 
 compaction, which causes the unit test TestKeepDeletes to fail.
 The correct implementation should be:
 {code}
 return (this.retainDeleteMarkers != null)
     ? this.retainDeleteMarkers.booleanValue()
     : !isAllFiles();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12358) Create ByteBuffer backed Cell

2014-10-31 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12358:
---
Status: Patch Available  (was: Open)

 Create ByteBuffer backed Cell
 -

 Key: HBASE-12358
 URL: https://issues.apache.org/jira/browse/HBASE-12358
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12358.patch, HBASE-12358_1.patch, 
 HBASE-12358_2.patch


 As part of HBASE-12224 and HBASE-12282 we wanted a Cell that is backed by a 
 BB. Changing the core Cell impl would not be needed, as it is used on the 
 server side only. So we will create a BB-backed Cell and use it in the 
 server-side read path. This JIRA just creates an interface that extends Cell 
 and adds the needed API.
 The getTimestamp() and getTypeByte() can still refer to the original Cell API 
 only. The getXXXOffset() and getXXXLength() can also refer to the original 
 Cell only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12358) Create ByteBuffer backed Cell

2014-10-31 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12358:
---
Attachment: HBASE-12358_2.patch

Updated patch with test cases.  Also adds ByteBufferBackedKeyOnlyKeyValue.

 Create ByteBuffer backed Cell
 -

 Key: HBASE-12358
 URL: https://issues.apache.org/jira/browse/HBASE-12358
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12358.patch, HBASE-12358_1.patch, 
 HBASE-12358_2.patch


 As part of HBASE-12224 and HBASE-12282 we wanted a Cell that is backed by a 
 BB. Changing the core Cell impl would not be needed, as it is used on the 
 server side only. So we will create a BB-backed Cell and use it in the 
 server-side read path. This JIRA just creates an interface that extends Cell 
 and adds the needed API.
 The getTimestamp() and getTypeByte() can still refer to the original Cell API 
 only. The getXXXOffset() and getXXXLength() can also refer to the original 
 Cell only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12329) Table create with duplicate column family names quietly succeeds

2014-10-31 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-12329:
-
Attachment: HBASE-12329-V2.diff

The latest patch is uploaded.

bq. either update the API example in the ref guide (currently Example 10.2) to 
show update an existing CF or file a follow on ticket that such docs are 
needed
I think the ref guide part is better done by Misty?
Hi, [~misty], do you want to update the ref guide (currently Example 10.2) to 
show updating an existing CF with the new API modifyFamily in master after 
this JIRA is committed? Thanks a lot!

Hi all, please note that this patch targets master. If it's backported to 
lower versions, it should have a last-one-wins warning in the addFamily method 
instead of throwing a runtime exception.

Thanks.
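For reference, the master-branch behavior discussed here (throwing instead of 
a silent last-one-wins replace) amounts to something like the following 
sketch; 'families' stands in for HTableDescriptor's internal map, and this is 
not the committed code:
{code}
public HTableDescriptor addFamily(final HColumnDescriptor family) {
  if (hasFamily(family.getName())) {
    throw new IllegalArgumentException(
        "Family '" + family.getNameAsString() + "' already exists");
  }
  this.families.put(family.getName(), family);
  return this;
}
{code}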

 Table create with duplicate column family names quietly succeeds
 

 Key: HBASE-12329
 URL: https://issues.apache.org/jira/browse/HBASE-12329
 Project: HBase
  Issue Type: Bug
  Components: Client, shell
Reporter: Sean Busbey
Assignee: Jingcheng Du
Priority: Minor
 Attachments: HBASE-12329-V2.diff, HBASE-12329.diff


 From the mailing list
 {quote}
 I was expecting that it is forbidden, **but** this call does not throw any
 exception
 {code}
 String[] families = {"cf", "cf"};
 HTableDescriptor desc = new HTableDescriptor(name);
 for (String cf : families) {
   HColumnDescriptor coldef = new HColumnDescriptor(cf);
   desc.addFamily(coldef);
 }
 try {
   admin.createTable(desc);
 } catch (TableExistsException e) {
   throw new IOException("table '" + name + "' already exists");
 }
 {code}
 {quote}
 And Ted's follow up replicates in the shell
 {code}
 hbase(main):001:0> create 't2', {NAME => 'f1'}, {NAME => 'f1'}
 The table got created - with 1 column family:
 hbase(main):002:0> describe 't2'
 DESCRIPTION                                                          ENABLED
  't2', {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', 
 REPLICATION_SCOPE => '0 true
  ', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 
 '2147483647', KEEP_DELETED
  _CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE 
 => 'true'}
 1 row(s) in 0.1000 seconds
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12332) [mob] use filelink instead of retry when resolving an hfilelink.

2014-10-31 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191806#comment-14191806
 ] 

Jingcheng Du commented on HBASE-12332:
--

Hi [~jmhsieh], any ideas on the comments? Thanks!

 [mob] use filelink instead of retry when resolving an hfilelink.
 ---

 Key: HBASE-12332
 URL: https://issues.apache.org/jira/browse/HBASE-12332
 Project: HBase
  Issue Type: Sub-task
  Components: mob
Affects Versions: hbase-11339
Reporter: Jonathan Hsieh
 Fix For: hbase-11339


 In the snapshot code, HMobStore was modified to traverse an hfile link to a 
 mob. Ideally this should use the transparent filelink code to read the data.
 Also there will likely be some issues with the mob file cache with these 
 links.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12393) The regionserver web will throw exception if we disable block cache

2014-10-31 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191818#comment-14191818
 ] 

Jean-Marc Spaggiari commented on HBASE-12393:
-

Should we not simply throw an exception when someone sets the block cache to 0, 
and not start the RS? 

 The regionserver web will throw exception if we disable block cache
 ---

 Key: HBASE-12393
 URL: https://issues.apache.org/jira/browse/HBASE-12393
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.7
 Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
Reporter: ChiaPing Tsai
Priority: Minor

 The CacheConfig.getBlockCache() will return the null point when we set 
 hfile.block.cache.size to zero.
 It caused the BlockCacheTmplImpl.java:123 to throw null exception.
 {code}
 org.jamon.escaping.Escaping.HTML.write(org.jamon.emit.StandardEmitter.valueOf(StringUtils.humanReadableInt(cacheConfig.getBlockCache().size())),
  jamonWriter);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-12387) committer guidelines should include patch signoff

2014-10-31 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-12387:
---

Assignee: Sean Busbey

 committer guidelines should include patch signoff
 -

 Key: HBASE-12387
 URL: https://issues.apache.org/jira/browse/HBASE-12387
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Sean Busbey
Assignee: Sean Busbey

 Right now our guide for how committers apply patches has them use {{git am}} 
 without a signoff flag. This works okay, but it misses adding the 
 signed-off-by blurb in the commit message.
 Those messages make it easier to see at a glance, with e.g. {{git log}}, which 
 committer applied the patch.
 This section:
 {quote}
 The directive to use git format-patch rather than git diff, and not to use 
 --no-prefix, is a new one. See the second example for how to apply a patch 
 created with git diff, and educate the person who created the patch.
 {code}
 $ git checkout -b HBASE-
 $ git am ~/Downloads/HBASE--v2.patch
 $ git checkout master
 $ git pull --rebase
 $ git cherry-pick sha-from-commit
 # Resolve conflicts if necessary or ask the submitter to do it
 $ git pull --rebase  # Better safe than sorry
 $ git push origin master
 $ git checkout branch-1
 $ git pull --rebase
 $ git cherry-pick sha-from-commit
 # Resolve conflicts if necessary
 $ git pull --rebase  # Better safe than sorry
 $ git push origin branch-1
 $ git branch -D HBASE-
 {code}
 {quote}
 Should be
 {quote}
 The directive to use git format-patch rather than git diff, and not to use 
 --no-prefix, is a new one. See the second example for how to apply a patch 
 created with git diff, and educate the person who created the patch.
 Note that the {{--signoff}} flag to {{git am}} will insert a line in the 
 commit message noting that the patch was checked by your author string. This, 
 in addition to your inclusion as the commit's committer, makes your 
 participation more prominent to users browsing {{git log}}.
 {code}
 $ git checkout -b HBASE-
 $ git am --signoff ~/Downloads/HBASE--v2.patch
 $ git checkout master
 $ git pull --rebase
 $ git cherry-pick sha-from-commit
 # Resolve conflicts if necessary or ask the submitter to do it
 $ git pull --rebase  # Better safe than sorry
 $ git push origin master
 $ git checkout branch-1
 $ git pull --rebase
 $ git cherry-pick sha-from-commit
 # Resolve conflicts if necessary
 $ git pull --rebase  # Better safe than sorry
 $ git push origin branch-1
 $ git branch -D HBASE-
 {code}
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12387) committer guidelines should include patch signoff

2014-10-31 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191827#comment-14191827
 ] 

Sean Busbey commented on HBASE-12387:
-

If we want to do that we'll need to expand the committer guidelines to cover 
setting things up.

Something like

{quote}
This example shows how to commit a patch that was created using git diff 
without --no-prefix. If the patch was created with --no-prefix, add -p0 to the 
git apply command. *Before following these steps, you must ensure your working 
directory is clean.* Otherwise, the autocommit step (commit -a) will coalesce 
your local changes with those of the contributor.

Note that unlike the case of a patch made with git format-patch, the patch 
itself doesn't include information on the contributor. Normally, you should be 
able to use the name and email address from the user's ASF Jira account to fill 
in the author details; below we use the example Prathia Hall 
<prathia.h...@example.com>.

{code}
$ git apply ~/Downloads/HBASE--v2.patch 
$ git commit -m "HBASE- Really Good Code" --author="Prathia Hall <prathia.h...@example.com>" \
    --signoff -a # This extra step is needed for patches created with 'git diff'
$ git checkout master
$ git pull --rebase
$ git cherry-pick sha-from-commit
# Resolve conflicts if necessary or ask the submitter to do it
$ git pull --rebase  # Better safe than sorry
$ git push origin master
$ git checkout branch-1
$ git pull --rebase
$ git cherry-pick sha-from-commit
# Resolve conflicts if necessary or ask the submitter to do it
$ git pull --rebase   # Better safe than sorry
$ git push origin branch-1
$ git branch -D HBASE-
{code}
{quote}

Actually, that's not that much more work. But it sounds like enough of a change 
to warrant a DISCUSS thread?

 committer guidelines should include patch signoff
 -

 Key: HBASE-12387
 URL: https://issues.apache.org/jira/browse/HBASE-12387
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Sean Busbey

 Right now our guide for how committers apply patches has them use {{git am}} 
 without a signoff flag. This works okay, but it misses adding the 
 signed-off-by blurb in the commit message.
 Those messages make it easier to see at a glance, with e.g. {{git log}}, which 
 committer applied the patch.
 This section:
 {quote}
 The directive to use git format-patch rather than git diff, and not to use 
 --no-prefix, is a new one. See the second example for how to apply a patch 
 created with git diff, and educate the person who created the patch.
 {code}
 $ git checkout -b HBASE-
 $ git am ~/Downloads/HBASE--v2.patch
 $ git checkout master
 $ git pull --rebase
 $ git cherry-pick sha-from-commit
 # Resolve conflicts if necessary or ask the submitter to do it
 $ git pull --rebase  # Better safe than sorry
 $ git push origin master
 $ git checkout branch-1
 $ git pull --rebase
 $ git cherry-pick sha-from-commit
 # Resolve conflicts if necessary
 $ git pull --rebase  # Better safe than sorry
 $ git push origin branch-1
 $ git branch -D HBASE-
 {code}
 {quote}
 Should be
 {quote}
 The directive to use git format-patch rather than git diff, and not to use 
 --no-prefix, is a new one. See the second example for how to apply a patch 
 created with git diff, and educate the person who created the patch.
 Note that the {{--signoff}} flag to {{git am}} will insert a line in the 
 commit message noting that the patch was checked by your author string. This, 
 in addition to your inclusion as the commit's committer, makes your 
 participation more prominent to users browsing {{git log}}.
 {code}
 $ git checkout -b HBASE-
 $ git am --signoff ~/Downloads/HBASE--v2.patch
 $ git checkout master
 $ git pull --rebase
 $ git cherry-pick sha-from-commit
 # Resolve conflicts if necessary or ask the submitter to do it
 $ git pull --rebase  # Better safe than sorry
 $ git push origin master
 $ git checkout branch-1
 $ git pull --rebase
 $ git cherry-pick sha-from-commit
 # Resolve conflicts if necessary
 $ git pull --rebase  # Better safe than sorry
 $ git push origin branch-1
 $ git branch -D HBASE-
 {code}
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12329) Table create with duplicate column family names quietly succeeds

2014-10-31 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191840#comment-14191840
 ] 

Sean Busbey commented on HBASE-12329:
-

{quote}
I think it's better to be done by Misty for the ref guide? 
Hi, Misty Stanley-Jones, do you want to update the ref guide (currently Example 
10.2) to show updating an existing CF with the new API modifyFamily in master, 
after this JIRA is committed? Thanks a lot!
{quote}

Please file as a follow on ticket. If Misty wants to take it up, then she can. 
Otherwise someone else can do it.

{code}
+  @Test
+  public void testModifyFamily() {
+    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("table"));
+    byte[] familyName = Bytes.toBytes("cf");
+    HColumnDescriptor hcd = new HColumnDescriptor(familyName);
+    hcd.setBlocksize(1000);
+    htd.addFamily(hcd);
+    assertEquals(1000, htd.getFamily(familyName).getBlocksize());
+    hcd.setBlocksize(2000);
+    htd.modifyFamily(hcd);
+    assertEquals(2000, htd.getFamily(familyName).getBlocksize());
+  }
{code}

Make a second HColumnDescriptor for the update to make sure we're not just 
mutating state on an old version in the htd.
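
A minimal sketch of that suggestion, for illustration (the second descriptor, 
{{hcd2}}, is my own naming, not from the patch):

{code}
@Test
public void testModifyFamily() {
  HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("table"));
  byte[] familyName = Bytes.toBytes("cf");
  HColumnDescriptor hcd = new HColumnDescriptor(familyName);
  hcd.setBlocksize(1000);
  htd.addFamily(hcd);
  assertEquals(1000, htd.getFamily(familyName).getBlocksize());
  // A fresh descriptor, so the assertion exercises modifyFamily itself
  // rather than mutation of the instance already stored in the htd.
  HColumnDescriptor hcd2 = new HColumnDescriptor(familyName);
  hcd2.setBlocksize(2000);
  htd.modifyFamily(hcd2);
  assertEquals(2000, htd.getFamily(familyName).getBlocksize());
}
{code}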

{code}
+  @Test
+  public void testModifyInexistentFamily() {
+    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("table"));
+    byte[] familyName = Bytes.toBytes("cf");
+    HColumnDescriptor hcd = new HColumnDescriptor(familyName);
+    boolean hasException = false;
+    try {
+      htd.modifyFamily(hcd);
+    } catch (Exception e) {
+      hasException = true;
+    }
+    assertTrue(hasException);
+  }
+
+  @Test
+  public void testAddDuplicateFamilies() {
+    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("table"));
+    byte[] familyName = Bytes.toBytes("cf");
+    HColumnDescriptor hcd = new HColumnDescriptor(familyName);
+    hcd.setBlocksize(1000);
+    htd.addFamily(hcd);
+    assertEquals(1000, htd.getFamily(familyName).getBlocksize());
+    hcd.setBlocksize(2000);
+    boolean hasException = false;
+    try {
+      htd.addFamily(hcd);
+    } catch (Exception e) {
+      hasException = true;
+    }
+    assertTrue(hasException);
+  }
{code}

Use the {{@Test(expected=IllegalArgumentException.class)}} form for both of 
these instead.
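
That form would look roughly like this (a sketch, assuming the patch makes 
modifyFamily throw IllegalArgumentException for a missing family):

{code}
@Test(expected = IllegalArgumentException.class)
public void testModifyInexistentFamily() {
  HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("table"));
  // No addFamily first, so modifyFamily should throw and satisfy the
  // expected exception declared on the annotation.
  htd.modifyFamily(new HColumnDescriptor(Bytes.toBytes("cf")));
}
{code}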

{code}
+    if (htd.hasFamily(hcd.getName())) {
+      htd.modifyFamily(hcd);
+    } else {
+      htd.addFamily(hcd);
+    }
{code}

I'm just thinking out loud here, but do we think this is going to be a common 
idiom? Should we add a third method setFamily that behaves like the old add?
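
If we did, a hypothetical setFamily would just wrap the idiom above; a sketch:

{code}
// Hypothetical add-or-update method, not part of the patch under review.
public void setFamily(HColumnDescriptor family) {
  if (hasFamily(family.getName())) {
    modifyFamily(family);
  } else {
    addFamily(family);
  }
}
{code}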

 Table create with duplicate column family names quietly succeeds
 

 Key: HBASE-12329
 URL: https://issues.apache.org/jira/browse/HBASE-12329
 Project: HBase
  Issue Type: Bug
  Components: Client, shell
Reporter: Sean Busbey
Assignee: Jingcheng Du
Priority: Minor
 Attachments: HBASE-12329-V2.diff, HBASE-12329.diff


 From the mailing list
 {quote}
 I was expecting that it is forbidden, **but** this call does not throw any
 exception
 {code}
 String[] families = {"cf", "cf"};
 HTableDescriptor desc = new HTableDescriptor(name);
 for (String cf : families) {
   HColumnDescriptor coldef = new HColumnDescriptor(cf);
   desc.addFamily(coldef);
 }
 try {
   admin.createTable(desc);
 } catch (TableExistsException e) {
   throw new IOException("table '" + name + "' already exists");
 }
 {code}
 {quote}
 And Ted's follow-up replicates in the shell
 {code}
 hbase(main):001:0> create 't2', {NAME => 'f1'}, {NAME => 'f1'}
 The table got created - with 1 column family:
 hbase(main):002:0> describe 't2'
 DESCRIPTION
ENABLED
  't2', {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW',
 REPLICATION_SCOPE => '0 true
  ', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL =>
 '2147483647', KEEP_DELETED
  _CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE
 => 'true'}
 1 row(s) in 0.1000 seconds
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12358) Create ByteBuffer backed Cell

2014-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191878#comment-14191878
 ] 

Hadoop QA commented on HBASE-12358:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12678464/HBASE-12358_2.patch
  against trunk revision .
  ATTACHMENT ID: 12678464

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 11 new 
or modified tests.

{color:red}-1 javac{color}.  The applied patch generated 115 javac compiler 
warnings (more than the trunk's current 102 warnings).

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 3 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3792 checkstyle errors (more than the trunk's current 3774 errors).

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  public static final int ROW_OFFSET = Bytes.SIZEOF_INT /* keylength */+ 
Bytes.SIZEOF_INT /* valuelength */;
+  public static long getKeyValueDataStructureSize(int rlength, int flength, 
int qlength, int vlength) {
+  public static ByteBufferBackedCell createFirstOnRow(final ByteBuffer row, 
int roffset, short rlength) {
+  public static ByteBufferBackedCell createLastOnRow(final ByteBuffer row, 
final int roffset, final int rlength,
+return new ByteBufferBackedKeyValue(row, roffset, rlength, family, 
foffset, flength, qualifier, qoffset,
+  public static boolean equals(final ByteBuffer left, int leftOffset, int 
leftLen, final ByteBuffer right,
+  public static boolean equals(final ByteBuffer left, int leftOffset, int 
leftLen, final byte[] right,
+  public static boolean equals(final byte[] left, int leftOffset, int leftLen, 
final ByteBuffer right,

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11542//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11542//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11542//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11542//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11542//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11542//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11542//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11542//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11542//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11542//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11542//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11542//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11542//artifact/patchprocess/checkstyle-aggregate.html

Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11542//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11542//console

This message is automatically generated.

 Create ByteBuffer backed Cell
 -

 Key: HBASE-12358
 URL: https://issues.apache.org/jira/browse/HBASE-12358
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: 

[jira] [Commented] (HBASE-12393) The regionserver web will throw exception if we disable block cache

2014-10-31 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191900#comment-14191900
 ] 

ChiaPing Tsai commented on HBASE-12393:
---


It is only a little bug which we don't handle null point(cache block).
Maybe we can take null point as zero.
For example:
{code}
org.jamon.escaping.Escaping.HTML.write(org.jamon.emit.StandardEmitter.valueOf(StringUtils.humanReadableInt(checkSize(cacheConfig.getBlockCache()))),
 jamonWriter);

static long checkSize(BlockCache blockCache) {
  return blockCache == null ? 0 : blockCache.size();
}
{code}

 The regionserver web will throw exception if we disable block cache
 ---

 Key: HBASE-12393
 URL: https://issues.apache.org/jira/browse/HBASE-12393
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.7
 Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
Reporter: ChiaPing Tsai
Priority: Minor

 The CacheConfig.getBlockCache() will return a null pointer when we set 
 hfile.block.cache.size to zero.
 This causes BlockCacheTmplImpl.java:123 to throw a null pointer exception.
 {code}
 org.jamon.escaping.Escaping.HTML.write(org.jamon.emit.StandardEmitter.valueOf(StringUtils.humanReadableInt(cacheConfig.getBlockCache().size())),
  jamonWriter);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-10870) Deprecate and replace HCD methods that have a 'should' prefix with a 'get' instead

2014-10-31 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi reassigned HBASE-10870:
-

Assignee: Ashish Singhi

 Deprecate and replace HCD methods that have a 'should' prefix with a 'get' 
 instead
 --

 Key: HBASE-10870
 URL: https://issues.apache.org/jira/browse/HBASE-10870
 Project: HBase
  Issue Type: Task
Reporter: stack
Assignee: Ashish Singhi

 HColumnDescriptor has a bunch of methods that have 'should' for a prefix.  
 Deprecate and give them a javabean 'get' or 'is' instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10870) Deprecate and replace HCD methods that have a 'should' prefix with a 'get' instead

2014-10-31 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-10870:
--
Attachment: HBASE-10870.patch

Patch for master branch.
Deprecated all APIs in HCD having the prefix 'should' and replaced them with 'is' versions.
Someone please review.

 Deprecate and replace HCD methods that have a 'should' prefix with a 'get' 
 instead
 --

 Key: HBASE-10870
 URL: https://issues.apache.org/jira/browse/HBASE-10870
 Project: HBase
  Issue Type: Task
Reporter: stack
Assignee: Ashish Singhi
 Attachments: HBASE-10870.patch


 HColumnDescriptor has a bunch of methods that have 'should' for a prefix.  
 Deprecate and give them a javabean 'get' or 'is' instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10870) Deprecate and replace HCD methods that have a 'should' prefix with a 'get' instead

2014-10-31 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-10870:
--
Affects Version/s: 2.0.0
   Status: Patch Available  (was: Open)

 Deprecate and replace HCD methods that have a 'should' prefix with a 'get' 
 instead
 --

 Key: HBASE-10870
 URL: https://issues.apache.org/jira/browse/HBASE-10870
 Project: HBase
  Issue Type: Task
Affects Versions: 2.0.0
Reporter: stack
Assignee: Ashish Singhi
 Attachments: HBASE-10870.patch


 HColumnDescriptor has a bunch of methods that have 'should' for a prefix.  
 Deprecate and give them a javabean 'get' or 'is' instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10870) Deprecate and replace HCD methods that have a 'should' prefix with a 'get' instead

2014-10-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191941#comment-14191941
 ] 

stack commented on HBASE-10870:
---

Nice. +1.

 Deprecate and replace HCD methods that have a 'should' prefix with a 'get' 
 instead
 --

 Key: HBASE-10870
 URL: https://issues.apache.org/jira/browse/HBASE-10870
 Project: HBase
  Issue Type: Task
Affects Versions: 2.0.0
Reporter: stack
Assignee: Ashish Singhi
 Attachments: HBASE-10870.patch


 HColumnDescriptor has a bunch of methods that have 'should' for a prefix.  
 Deprecate and give them a javabean 'get' or 'is' instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10870) Deprecate and replace HCD methods that have a 'should' prefix with a 'get' instead

2014-10-31 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-10870:
--
   Resolution: Fixed
Fix Version/s: 0.99.2
   2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Pushed to branch-1+.  Thanks for patch [~ashish singhi]

 Deprecate and replace HCD methods that have a 'should' prefix with a 'get' 
 instead
 --

 Key: HBASE-10870
 URL: https://issues.apache.org/jira/browse/HBASE-10870
 Project: HBase
  Issue Type: Task
Affects Versions: 2.0.0
Reporter: stack
Assignee: Ashish Singhi
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-10870.patch


 HColumnDescriptor has a bunch of methods that have 'should' for a prefix.  
 Deprecate and give them a javabean 'get' or 'is' instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12393) The regionserver web will throw exception if we disable block cache

2014-10-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191953#comment-14191953
 ] 

stack commented on HBASE-12393:
---

Why set it to zero? When would it make sense? i.e. loading index blocks every 
time rather than keeping them around in mem.  But yeah, we shouldn't NPE.  If 
you have a patch that fixes the NPE and it works for you, attach it and we'll 
commit (but we will add WARNINGs that a 0 block cache is a bad idea, as per 
[~jmspaggi])

 The regionserver web will throw exception if we disable block cache
 ---

 Key: HBASE-12393
 URL: https://issues.apache.org/jira/browse/HBASE-12393
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.7
 Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
Reporter: ChiaPing Tsai
Priority: Minor

 The CacheConfig.getBlockCache() will return a null pointer when we set 
 hfile.block.cache.size to zero.
 This causes BlockCacheTmplImpl.java:123 to throw a null pointer exception.
 {code}
 org.jamon.escaping.Escaping.HTML.write(org.jamon.emit.StandardEmitter.valueOf(StringUtils.humanReadableInt(cacheConfig.getBlockCache().size())),
  jamonWriter);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12387) committer guidelines should include patch signoff

2014-10-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191956#comment-14191956
 ] 

stack commented on HBASE-12387:
---

bq. So we should always go with making author to contributor and signed off?

I like [~busbey]'s prescription, [~anoopsamjohn]. He has the tooling doing the 
work for us.  Encouraging folks to produce patches that we can just run 'git am 
--signoff' on makes for less work for committers; any savings are appreciated 
when we are running multiple branches as we are doing currently.  What do you 
think [~anoopsamjohn]?

 committer guidelines should include patch signoff
 -

 Key: HBASE-12387
 URL: https://issues.apache.org/jira/browse/HBASE-12387
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Sean Busbey
Assignee: Sean Busbey

 Right now our guide for committers applying patches has them use {{git am}} 
 without a signoff flag. This works okay, but it misses adding the 
 signed-off-by blurb in the commit message.
 Those messages make it easier to see at a glance with e.g. {{git log}} which 
 committer applied the patch.
 This section:
 {quote}
 The directive to use git format-patch rather than git diff, and not to use 
 --no-prefix, is a new one. See the second example for how to apply a patch 
 created with git diff, and educate the person who created the patch.
 {code}
 $ git checkout -b HBASE-
 $ git am ~/Downloads/HBASE--v2.patch
 $ git checkout master
 $ git pull --rebase
 $ git cherry-pick sha-from-commit
 # Resolve conflicts if necessary or ask the submitter to do it
 $ git pull --rebase  # Better safe than sorry
 $ git push origin master
 $ git checkout branch-1
 $ git pull --rebase
 $ git cherry-pick sha-from-commit
 # Resolve conflicts if necessary
 $ git pull --rebase  # Better safe than sorry
 $ git push origin branch-1
 $ git branch -D HBASE-
 {code}
 {quote}
 Should be
 {quote}
 The directive to use git format-patch rather than git diff, and not to use 
 --no-prefix, is a new one. See the second example for how to apply a patch 
 created with git diff, and educate the person who created the patch.
 Note that the {{--signoff}} flag to {{git am}} will insert a line in the 
 commit message recording that the patch was checked by your author string. 
 This, in addition to your inclusion as the commit's committer, makes your 
 participation more prominent to users browsing {{git log}}.
 {code}
 $ git checkout -b HBASE-
 $ git am --signoff ~/Downloads/HBASE--v2.patch
 $ git checkout master
 $ git pull --rebase
 $ git cherry-pick sha-from-commit
 # Resolve conflicts if necessary or ask the submitter to do it
 $ git pull --rebase  # Better safe than sorry
 $ git push origin master
 $ git checkout branch-1
 $ git pull --rebase
 $ git cherry-pick sha-from-commit
 # Resolve conflicts if necessary
 $ git pull --rebase  # Better safe than sorry
 $ git push origin branch-1
 $ git branch -D HBASE-
 {code}
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12072) We are doing 35 x 35 retries for master operations

2014-10-31 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12072:
--
Attachment: hbase-12072_v2.patch

Retry

 We are doing 35 x 35 retries for master operations
 --

 Key: HBASE-12072
 URL: https://issues.apache.org/jira/browse/HBASE-12072
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 0.99.2

 Attachments: 12072-v1.txt, 12072-v2.txt, hbase-12072_v1.patch, 
 hbase-12072_v2.patch, hbase-12072_v2.patch


 For master requests, there are two retry mechanisms in effect. The first one 
 is from HBaseAdmin.executeCallable() 
 {code}
   private <V> V executeCallable(MasterCallable<V> callable) throws 
 IOException {
     RpcRetryingCaller<V> caller = rpcCallerFactory.newCaller();
     try {
       return caller.callWithRetries(callable);
     } finally {
       callable.close();
     }
   }
 {code}
 And inside, the other one is from StubMaker.makeStub():
 {code}
 /**
  * Create a stub against the master.  Retry if necessary.
  * @return A stub to do <code>intf</code> against the master
  * @throws MasterNotRunningException
  */
   @edu.umd.cs.findbugs.annotations.SuppressWarnings 
 (value="SWL_SLEEP_WITH_LOCK_HELD")
   Object makeStub() throws MasterNotRunningException {
 {code}
 The tests will just hang for 10 min * 35 ~= 6 hours. 
 {code}
 2014-09-23 16:19:05,151 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 1 of 35 
 failed; retrying after sleep of 100, exception=java.io.IOException: Can't get 
 master address from ZooKeeper; znode data == null
 2014-09-23 16:19:05,253 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 2 of 35 
 failed; retrying after sleep of 200, exception=java.io.IOException: Can't get 
 master address from ZooKeeper; znode data == null
 2014-09-23 16:19:05,456 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 3 of 35 
 failed; retrying after sleep of 300, exception=java.io.IOException: Can't get 
 master address from ZooKeeper; znode data == null
 2014-09-23 16:19:05,759 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 4 of 35 
 failed; retrying after sleep of 500, exception=java.io.IOException: Can't get 
 master address from ZooKeeper; znode data == null
 2014-09-23 16:19:06,262 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 5 of 35 
 failed; retrying after sleep of 1008, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:07,273 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 6 of 35 
 failed; retrying after sleep of 2011, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:09,286 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 7 of 35 
 failed; retrying after sleep of 4012, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:13,303 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 8 of 35 
 failed; retrying after sleep of 10033, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:23,343 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 9 of 35 
 failed; retrying after sleep of 10089, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:33,439 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 10 of 
 35 failed; retrying after sleep of 10027, exception=java.io.IOException: 
 Can't get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:43,473 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 11 of 
 35 failed; retrying after sleep of 10004, exception=java.io.IOException: 
 Can't get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:53,485 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 12 of 
 35 failed; retrying after sleep of 20160, exception=java.io.IOException: 
 Can't get master address from ZooKeeper; znode data == null
 2014-09-23 16:20:13,656 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 13 of 
 35 failed; retrying after sleep of 20006, exception=java.io.IOException: 
 Can't get master address from ZooKeeper; znode data == null
 2014-09-23 16:20:33,675 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 14 

[jira] [Created] (HBASE-12395) Some internal classes are logging to DEBUG what should be logged to TRACE

2014-10-31 Thread Dima Spivak (JIRA)
Dima Spivak created HBASE-12395:
---

 Summary: Some internal classes are logging to DEBUG what should be 
logged to TRACE
 Key: HBASE-12395
 URL: https://issues.apache.org/jira/browse/HBASE-12395
 Project: HBase
  Issue Type: Improvement
Reporter: Dima Spivak
Assignee: Dima Spivak


e.g. RpcExecutor is doing this a lot. This leads to 1) huge log files that 
waste disk space and IO and 2) difficulty debugging tests themselves since you 
need to wade through thousands of lines to get to what your test is doing (see 
TestDistributedLogSplitting-output for 25 MB of thread information).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12393) The regionserver web will throw exception if we disable block cache

2014-10-31 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191969#comment-14191969
 ] 

ChiaPing Tsai commented on HBASE-12393:
---

hi stack

Thanks for your helpful suggestion. We were testing lower-bound performance by 
disabling the block cache (among other things), and that is how we found this 
bug. We just thought a 0 block cache shouldn't affect the regionserver web UI.




 The regionserver web will throw exception if we disable block cache
 ---

 Key: HBASE-12393
 URL: https://issues.apache.org/jira/browse/HBASE-12393
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.7
 Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
Reporter: ChiaPing Tsai
Priority: Minor

 The CacheConfig.getBlockCache() will return a null pointer when we set 
 hfile.block.cache.size to zero.
 This causes BlockCacheTmplImpl.java:123 to throw a null pointer exception.
 {code}
 org.jamon.escaping.Escaping.HTML.write(org.jamon.emit.StandardEmitter.valueOf(StringUtils.humanReadableInt(cacheConfig.getBlockCache().size())),
  jamonWriter);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12393) The regionserver web will throw exception if we disable block cache

2014-10-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14191995#comment-14191995
 ] 

stack commented on HBASE-12393:
---

[~chia7712] You are right. UI should keep going regardless. Attach a patch w/ 
your fix above and we'll get it in.

 The regionserver web will throw exception if we disable block cache
 ---

 Key: HBASE-12393
 URL: https://issues.apache.org/jira/browse/HBASE-12393
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.7
 Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
Reporter: ChiaPing Tsai
Priority: Minor

 The CacheConfig.getBlockCache() will return a null pointer when we set 
 hfile.block.cache.size to zero.
 This causes BlockCacheTmplImpl.java:123 to throw a null pointer exception.
 {code}
 org.jamon.escaping.Escaping.HTML.write(org.jamon.emit.StandardEmitter.valueOf(StringUtils.humanReadableInt(cacheConfig.getBlockCache().size())),
  jamonWriter);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12396) Document suggested use of log levels in dev guide

2014-10-31 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-12396:
---

 Summary: Document suggested use of log levels in dev guide
 Key: HBASE-12396
 URL: https://issues.apache.org/jira/browse/HBASE-12396
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Reporter: Sean Busbey
Priority: Minor


Right now we don't provide any guidance on appropriate use of log levels, which 
leads to inconsistent use and in some cases exacerbates problems for tests and 
troubleshooting (see HBASE-12395). We should add a section on suggested use of 
log levels to the guide.

Some related reading

* [a good ops-focused blog post on 
levels|http://watchitlater.com/blog/2009/12/logging-guidelines/]
* [another, focused on 
devs|http://www.codeproject.com/Articles/42354/The-Art-of-Logging]
* [guidelines from a user on stackoverflow (that I 
like)|http://stackoverflow.com/a/2031209] also has some good discussion.
* [extended logging discussion with some level use 
guidelines|http://www.javacodegeeks.com/2011/01/10-tips-proper-application-logging.html]
* [guidelines for Atlassian 
devs|https://developer.atlassian.com/display/CONFDEV/Logging+Guidelines]
* [guidelines for OpenStack 
devs|https://wiki.openstack.org/wiki/LoggingStandards]
* [the Kafka dev guide|http://kafka.apache.org/coding-guide.html] has a good 
section on their use titled "Logging"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12396) Document suggested use of log levels in dev guide

2014-10-31 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192008#comment-14192008
 ] 

Sean Busbey commented on HBASE-12396:
-

Personally, I like the focus that INFO and more severe levels are for operator 
use.

* ERROR should, if at all possible, include a plain description of the problem 
and an action the operator can take to either correct or troubleshoot it.
* Stack traces should be rare above DEBUG. We allow changing the level of a 
particular logger at run time; if an operator needs that level of detail they 
can alter the level (we should improve the docs on doing this).
* I think a good heuristic for DEBUG v TRACE is: if you need multiple messages 
that show the flow of control within part of the codebase, you ought to be 
logging at TRACE (see the sketch below).
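
As a rough illustration of that DEBUG/TRACE heuristic (a sketch using 
commons-logging; the class name and the messages are invented):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.hbase.HRegionInfo;

public class AssignmentHandler {
  private static final Log LOG = LogFactory.getLog(AssignmentHandler.class);

  void assign(HRegionInfo region) {
    // One summary line an operator might care about: DEBUG.
    if (LOG.isDebugEnabled()) {
      LOG.debug("Assigning region " + region.getEncodedName());
    }
    // Multiple messages tracing flow of control through internals: TRACE.
    if (LOG.isTraceEnabled()) {
      LOG.trace("Acquired assignment lock for " + region.getEncodedName());
      LOG.trace("Updating meta for " + region.getEncodedName());
    }
  }
}
{code}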

 Document suggested use of log levels in dev guide
 -

 Key: HBASE-12396
 URL: https://issues.apache.org/jira/browse/HBASE-12396
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Reporter: Sean Busbey
Priority: Minor

 Right now we don't provide any guidance on appropriate use of log levels, 
 which leads to inconsistent use and in some cases exacerbates problems for 
 tests and troubleshooting (see HBASE-12395). We should add a section on 
 suggested use of log levels to the guide.
 Some related reading
 * [a good ops-focused blog post on 
 levels|http://watchitlater.com/blog/2009/12/logging-guidelines/]
 * [another, focused on 
 devs|http://www.codeproject.com/Articles/42354/The-Art-of-Logging]
 * [guidelines from a user on stackoverflow (that I 
 like)|http://stackoverflow.com/a/2031209] also has some good discussion.
 * [extended logging discussion with some level use 
 guidelines|http://www.javacodegeeks.com/2011/01/10-tips-proper-application-logging.html]
 * [guidelines for Atlassian 
 devs|https://developer.atlassian.com/display/CONFDEV/Logging+Guidelines]
 * [guidelines for OpenStack 
 devs|https://wiki.openstack.org/wiki/LoggingStandards]
 * [the Kafka dev guide|http://kafka.apache.org/coding-guide.html] has a good 
 section on their use titled "Logging"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10870) Deprecate and replace HCD methods that have a 'should' prefix with a 'get' instead

2014-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192066#comment-14192066
 ] 

Hudson commented on HBASE-10870:


SUCCESS: Integrated in HBase-1.0 #399 (See 
[https://builds.apache.org/job/HBase-1.0/399/])
HBASE-10870 Deprecate and replace HCD methods that have a 'should' prefix with 
a 'is' instead (stack: rev ae8462b3a2de710ba2b0c49fbddf46b2316a61e3)
* hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java


 Deprecate and replace HCD methods that have a 'should' prefix with a 'get' 
 instead
 --

 Key: HBASE-10870
 URL: https://issues.apache.org/jira/browse/HBASE-10870
 Project: HBase
  Issue Type: Task
Affects Versions: 2.0.0
Reporter: stack
Assignee: Ashish Singhi
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-10870.patch


 HColumnDescriptor has a bunch of methods that have 'should' for a prefix.  
 Deprecate and give them a javabean 'get' or 'is' instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12219) Cache more efficiently getAll() and get() in FSTableDescriptors

2014-10-31 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-12219:
--
Attachment: HBASE-12219-0.99.patch

 Cache more efficiently getAll() and get() in FSTableDescriptors
 ---

 Key: HBASE-12219
 URL: https://issues.apache.org/jira/browse/HBASE-12219
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.24, 0.99.1, 0.98.6.1
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
  Labels: scalability
 Attachments: HBASE-12219-0.99.patch, HBASE-12219-v1.patch, 
 HBASE-12219-v1.patch, HBASE-12219.v0.txt, HBASE-12219.v2.patch, 
 HBASE-12219.v3.patch, list.png


 Currently table descriptors and tables are cached once they are accessed for 
 the first time. Subsequent calls to the master only require a trip to HDFS to 
 look up the modified time in order to reload the table descriptors if 
 modified. However, in clusters with a large number of tables or many 
 concurrent clients this can be too aggressive toward HDFS and the master, 
 causing contention while processing other requests. A simple solution is a 
 TTL-based cache for FSTableDescriptors#getAll() and 
 FSTableDescriptors#TableDescriptorAndModtime() that allows the master to 
 process those calls faster, without contention and without a trip to HDFS for 
 every call to listtables() or getTableDescriptor()
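
 A minimal sketch of that TTL idea (names here are illustrative; 
 fetchAllFromFs stands in for the existing HDFS scan, and the TTL value is 
 invented, not from the patch):
 {code}
 // Illustrative only: serve cached descriptors while they are younger than
 // the TTL instead of checking HDFS modtimes on every call.
 private volatile Map<String, HTableDescriptor> cachedAll;
 private volatile long cachedAllTs;
 private long cacheTtlMs = 60 * 1000;

 Map<String, HTableDescriptor> getAll() throws IOException {
   long now = EnvironmentEdgeManager.currentTimeMillis();
   Map<String, HTableDescriptor> cached = this.cachedAll;
   if (cached != null && now - this.cachedAllTs < cacheTtlMs) {
     return cached; // no HDFS round trip
   }
   Map<String, HTableDescriptor> fresh = fetchAllFromFs(); // hypothetical helper
   this.cachedAll = fresh;
   this.cachedAllTs = now;
   return fresh;
 }
 {code}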



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10870) Deprecate and replace HCD methods that have a 'should' prefix with a 'get' instead

2014-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192081#comment-14192081
 ] 

Hadoop QA commented on HBASE-10870:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12678484/HBASE-10870.patch
  against trunk revision .
  ATTACHMENT ID: 12678484

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFilesSplitRecovery

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileScanning(TestHRegion.java:3615)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11543//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11543//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11543//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11543//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11543//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11543//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11543//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11543//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11543//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11543//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11543//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11543//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11543//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11543//console

This message is automatically generated.

 Deprecate and replace HCD methods that have a 'should' prefix with a 'get' 
 instead
 --

 Key: HBASE-10870
 URL: https://issues.apache.org/jira/browse/HBASE-10870
 Project: HBase
  Issue Type: Task
Affects Versions: 2.0.0
Reporter: stack
Assignee: Ashish Singhi
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-10870.patch


 HColumnDescriptor has a bunch of methods that have 'should' for a prefix.  
 Deprecate and give them a javabean 'get' or 'is' instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12329) Table create with duplicate column family names quietly succeeds

2014-10-31 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192082#comment-14192082
 ] 

Jingcheng Du commented on HBASE-12329:
--

Thanks [~busbey] for the review.
bq. Please file as a follow on ticket. If Misty wants to take it up, then she 
can. Otherwise someone else can do it.
Sure, will create a follow on jira.

bq. Should we add a third method setFamily that behaves like the old add?
If it's added, does it imply such an operation (add or update in one method) is 
allowed, and that users should be aware of "last one wins" at that time? If so, 
why do we re-implement addFamily and add modifyFamily? The existing addFamily 
is enough. Is it?


 Table create with duplicate column family names quietly succeeds
 

 Key: HBASE-12329
 URL: https://issues.apache.org/jira/browse/HBASE-12329
 Project: HBase
  Issue Type: Bug
  Components: Client, shell
Reporter: Sean Busbey
Assignee: Jingcheng Du
Priority: Minor
 Attachments: HBASE-12329-V2.diff, HBASE-12329.diff


 From the mailing list
 {quote}
 I was expecting that it is forbidden, **but** this call does not throw any
 exception
 {code}
 String[] families = {"cf", "cf"};
 HTableDescriptor desc = new HTableDescriptor(name);
 for (String cf : families) {
   HColumnDescriptor coldef = new HColumnDescriptor(cf);
   desc.addFamily(coldef);
 }
 try {
   admin.createTable(desc);
 } catch (TableExistsException e) {
   throw new IOException("table '" + name + "' already exists");
 }
 {code}
 {quote}
 And Ted's follow-up replicates in the shell
 {code}
 hbase(main):001:0> create 't2', {NAME => 'f1'}, {NAME => 'f1'}
 The table got created - with 1 column family:
 hbase(main):002:0> describe 't2'
 DESCRIPTION
ENABLED
  't2', {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW',
 REPLICATION_SCOPE => '0 true
  ', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL =>
 '2147483647', KEEP_DELETED
  _CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE
 => 'true'}
 1 row(s) in 0.1000 seconds
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12219) Cache more efficiently getAll() and get() in FSTableDescriptors

2014-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192091#comment-14192091
 ] 

Hadoop QA commented on HBASE-12219:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12678498/HBASE-12219-0.99.patch
  against trunk revision .
  ATTACHMENT ID: 12678498

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11545//console

This message is automatically generated.

 Cache more efficiently getAll() and get() in FSTableDescriptors
 ---

 Key: HBASE-12219
 URL: https://issues.apache.org/jira/browse/HBASE-12219
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.24, 0.99.1, 0.98.6.1
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
  Labels: scalability
 Attachments: HBASE-12219-0.99.patch, HBASE-12219-v1.patch, 
 HBASE-12219-v1.patch, HBASE-12219.v0.txt, HBASE-12219.v2.patch, 
 HBASE-12219.v3.patch, list.png


 Currently table descriptors and tables are cached once they are accessed for 
 the first time. Subsequent calls to the master only require a trip to HDFS to 
 look up the modified time in order to reload the table descriptors if 
 modified. However, in clusters with a large number of tables or many 
 concurrent clients this can be too aggressive toward HDFS and the master, 
 causing contention while processing other requests. A simple solution is a 
 TTL-based cache for FSTableDescriptors#getAll() and 
 FSTableDescriptors#TableDescriptorAndModtime() that allows the master to 
 process those calls faster, without contention and without a trip to HDFS for 
 every call to listtables() or getTableDescriptor()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12329) Table create with duplicate column family names quietly succeeds

2014-10-31 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-12329:
-
Attachment: HBASE-12329-V3.diff

 Table create with duplicate column family names quietly succeeds
 

 Key: HBASE-12329
 URL: https://issues.apache.org/jira/browse/HBASE-12329
 Project: HBase
  Issue Type: Bug
  Components: Client, shell
Reporter: Sean Busbey
Assignee: Jingcheng Du
Priority: Minor
 Attachments: HBASE-12329-V2.diff, HBASE-12329-V3.diff, 
 HBASE-12329.diff


 From the mailing list
 {quote}
 I was expecting that it is forbidden, **but** this call does not throw any
 exception
 {code}
 String[] families = {"cf", "cf"};
 HTableDescriptor desc = new HTableDescriptor(name);
 for (String cf : families) {
   HColumnDescriptor coldef = new HColumnDescriptor(cf);
   desc.addFamily(coldef);
 }
 try {
   admin.createTable(desc);
 } catch (TableExistsException e) {
   throw new IOException("table '" + name + "' already exists");
 }
 {code}
 {quote}
 And Ted's follow-up replicates in the shell
 {code}
 hbase(main):001:0> create 't2', {NAME => 'f1'}, {NAME => 'f1'}
 The table got created - with 1 column family:
 hbase(main):002:0> describe 't2'
 DESCRIPTION
ENABLED
  't2', {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW',
 REPLICATION_SCOPE => '0 true
  ', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL =>
 '2147483647', KEEP_DELETED
  _CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE
 => 'true'}
 1 row(s) in 0.1000 seconds
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12329) Table create with duplicate column family names quietly succeeds

2014-10-31 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192097#comment-14192097
 ] 

Sean Busbey commented on HBASE-12329:
-

{quote}
bq. Should we add a third method setFamily that behaves like the old add?

If it's added, does it imply such an operation (add or update in one method) is 
allowed, and that users should be aware of "last one wins" at that time? If so, 
why do we re-implement addFamily and add modifyFamily? The existing addFamily 
is enough. Is it?
{quote}

It's API semantics. addXXX means to put something new in; modifyXXX means 
to update something already present. setXXX is used in other places to mean 
"add or update"; knowledge of "last write wins" is already baked into the 
naming.

 Table create with duplicate column family names quietly succeeds
 

 Key: HBASE-12329
 URL: https://issues.apache.org/jira/browse/HBASE-12329
 Project: HBase
  Issue Type: Bug
  Components: Client, shell
Reporter: Sean Busbey
Assignee: Jingcheng Du
Priority: Minor
 Attachments: HBASE-12329-V2.diff, HBASE-12329-V3.diff, 
 HBASE-12329.diff


 From the mailing list
 {quote}
 I was expecting that it is forbidden, **but** this call does not throw any
 exception
 {code}
 String[] families = {"cf", "cf"};
 HTableDescriptor desc = new HTableDescriptor(name);
 for (String cf : families) {
   HColumnDescriptor coldef = new HColumnDescriptor(cf);
   desc.addFamily(coldef);
 }
 try {
   admin.createTable(desc);
 } catch (TableExistsException e) {
   throw new IOException("table '" + name + "' already exists");
 }
 {code}
 {quote}
 And Ted's follow-up replicates in the shell
 {code}
 hbase(main):001:0> create 't2', {NAME => 'f1'}, {NAME => 'f1'}
 The table got created - with 1 column family:
 hbase(main):002:0> describe 't2'
 DESCRIPTION
ENABLED
  't2', {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW',
 REPLICATION_SCOPE => '0 true
  ', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL =>
 '2147483647', KEEP_DELETED
  _CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE
 => 'true'}
 1 row(s) in 0.1000 seconds
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12329) Table create with duplicate column family names quietly succeeds

2014-10-31 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192100#comment-14192100
 ] 

Sean Busbey commented on HBASE-12329:
-

Thinking through things more, let's skip the third method. We can always add it 
later if the idiom shows up often downstream. Once it's there, removing it is 
hard.

 Table create with duplicate column family names quietly succeeds
 

 Key: HBASE-12329
 URL: https://issues.apache.org/jira/browse/HBASE-12329
 Project: HBase
  Issue Type: Bug
  Components: Client, shell
Reporter: Sean Busbey
Assignee: Jingcheng Du
Priority: Minor
 Attachments: HBASE-12329-V2.diff, HBASE-12329-V3.diff, 
 HBASE-12329.diff


 From the mailing list
 {quote}
 I was expecting that it is forbidden, **but** this call does not throw any
 exception
 {code}
 String[] families = {"cf", "cf"};
 HTableDescriptor desc = new HTableDescriptor(name);
 for (String cf : families) {
   HColumnDescriptor coldef = new HColumnDescriptor(cf);
   desc.addFamily(coldef);
 }
 try {
   admin.createTable(desc);
 } catch (TableExistsException e) {
   throw new IOException("table '" + name + "' already exists");
 }
 {code}
 {quote}
 And Ted's follow-up replicates in the shell
 {code}
 hbase(main):001:0> create 't2', {NAME => 'f1'}, {NAME => 'f1'}
 The table got created - with 1 column family:
 hbase(main):002:0> describe 't2'
 DESCRIPTION
ENABLED
  't2', {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW',
 REPLICATION_SCOPE => '0 true
  ', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL =>
 '2147483647', KEEP_DELETED
  _CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE
 => 'true'}
 1 row(s) in 0.1000 seconds
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10870) Deprecate and replace HCD methods that have a 'should' prefix with a 'get' instead

2014-10-31 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192114#comment-14192114
 ] 

Ashish Singhi commented on HBASE-10870:
---

Thanks for your time, Stack. 
The test failures should not be related to the patch. 

 Deprecate and replace HCD methods that have a 'should' prefix with a 'get' 
 instead
 --

 Key: HBASE-10870
 URL: https://issues.apache.org/jira/browse/HBASE-10870
 Project: HBase
  Issue Type: Task
Affects Versions: 2.0.0
Reporter: stack
Assignee: Ashish Singhi
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-10870.patch


 HColumnDescriptor has a bunch of methods that have 'should' for a prefix.  
 Deprecate and give them a javabean 'get' or 'is' instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10870) Deprecate and replace HCD methods that have a 'should' prefix with a 'get' instead

2014-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192115#comment-14192115
 ] 

Hudson commented on HBASE-10870:


SUCCESS: Integrated in HBase-TRUNK #5730 (See 
[https://builds.apache.org/job/HBase-TRUNK/5730/])
HBASE-10870 Deprecate and replace HCD methods that have a 'should' prefix with 
a 'is' instead (stack: rev cacdb89e0345d4d507ae0ae04628d871d636fbca)
* hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


 Deprecate and replace HCD methods that have a 'should' prefix with a 'get' 
 instead
 --

 Key: HBASE-10870
 URL: https://issues.apache.org/jira/browse/HBASE-10870
 Project: HBase
  Issue Type: Task
Affects Versions: 2.0.0
Reporter: stack
Assignee: Ashish Singhi
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-10870.patch


 HColumnDescriptor has a bunch of methods that have 'should' for a prefix.  
 Deprecate and give them a javabean 'get' or 'is' instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12397) CopyTable fails to compile with the Hadoop 1 profile

2014-10-31 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-12397:
--

 Summary: CopyTable fails to compile with the Hadoop 1 profile
 Key: HBASE-12397
 URL: https://issues.apache.org/jira/browse/HBASE-12397
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
 Fix For: 0.98.8


[ERROR] 
/usr/src/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java:[88,17]
 error: cannot find symbol

This was introduced in f31edd80
{noformat}
commit f31edd8004226c795ee46fbe9e93d10671ab895a
Author: Ted Yu <te...@apache.org>
Date:   Thu Oct 9 15:52:18 2014 +

HBASE-11997 CopyTable with bulkload (Yi Deng)
{noformat}

[~tedyu], [~daviddengcn], please have a look at this or I will revert it on 
0.98.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12072) We are doing 35 x 35 retries for master operations

2014-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192129#comment-14192129
 ] 

Hadoop QA commented on HBASE-12072:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12678488/hbase-12072_v2.patch
  against trunk revision .
  ATTACHMENT ID: 12678488

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 18 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3775 checkstyle errors (more than the trunk's current 3774 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11544//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11544//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11544//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11544//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11544//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11544//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11544//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11544//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11544//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11544//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11544//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11544//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11544//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11544//console

This message is automatically generated.

 We are doing 35 x 35 retries for master operations
 --

 Key: HBASE-12072
 URL: https://issues.apache.org/jira/browse/HBASE-12072
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 0.99.2

 Attachments: 12072-v1.txt, 12072-v2.txt, hbase-12072_v1.patch, 
 hbase-12072_v2.patch, hbase-12072_v2.patch


 For master requests, there are two retry mechanisms in effect. The first one 
 is from HBaseAdmin.executeCallable() 
 {code}
   private <V> V executeCallable(MasterCallable<V> callable) throws IOException {
     RpcRetryingCaller<V> caller = rpcCallerFactory.newCaller();
     try {
       return caller.callWithRetries(callable);
     } finally {
       callable.close();
     }
   }
 {code}
 And inside, the other one is from StubMaker.makeStub():
 {code}
   /**
    * Create a stub against the master.  Retry if necessary.
    * @return A stub to do <code>intf</code> against the master
    * @throws MasterNotRunningException
    */
   @edu.umd.cs.findbugs.annotations.SuppressWarnings(value="SWL_SLEEP_WITH_LOCK_HELD")
   Object makeStub() throws MasterNotRunningException {
 {code}
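 The two retry layers compound: every attempt of the outer caller runs the inner stub-creation retry loop in full. A minimal sketch of that shape (hypothetical names, not the actual client code, and using Java 8 lambdas for brevity):
 {code}
 import java.util.concurrent.Callable;

 public class NestedRetries {
   static <V> V withRetries(int maxAttempts, Callable<V> task) throws Exception {
     Exception last = null;
     for (int attempt = 1; attempt <= maxAttempts; attempt++) {
       try {
         return task.call();
       } catch (Exception e) {
         last = e;
         Thread.sleep(100L * attempt);  // stand-in for the real backoff schedule
       }
     }
     throw last;
   }

   static Object connect() throws Exception {
     // Outer layer: the master RPC retries; inner layer: stub creation retries.
     // Worst case with a dead master: 35 * 35 = 1225 attempts plus all sleeps.
     return withRetries(35, () -> withRetries(35, () -> makeStubOnce()));
   }

   static Object makeStubOnce() throws Exception {
     throw new java.io.IOException("Can't get master address from ZooKeeper");
   }
 }
 {code}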
 The tests will just hang for 10 min * 35 ~= 6 hours. 
 {code}
 2014-09-23 16:19:05,151 INFO  

[jira] [Commented] (HBASE-12396) Document suggested use of log levels in dev guide

2014-10-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192133#comment-14192133
 ] 

stack commented on HBASE-12396:
---

Nice one Sean. Kept meaning to do this.  Here's another few items to throw on 
the list.

+ Logs should be info-dense (don't just write 'starting', but rather 'starting 
...' plus a listing of the config the service was started with).
+ Don't have two log lines where one would do.
+ Logs should be free of repetition and of verbiage that is obvious from 
context.
++ For example, don't write 'opening region xyz' when the context is the 
RegionOpenHandler.
+ Logs should use the same vocabulary (to be published) everywhere and the same 
format when referring to entities throughout; it makes the logs greppable.
++ We should not log an edits sequenceid with the label sequenceid in one 
location, seqid in another, and id somewhere else again.
++ We should not log sequenceid=XYZ in one log message and sequenceid: XYZ in 
another.
++ For example, tracing the history of a region, always refer to it the same 
way when making mention in the logs: if we use its encoded name everywhere, 
then a grep on it will turn up all mentions (see the sketch after this list).
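
A hypothetical illustration of the greppable-vocabulary point, using 
commons-logging as HBase does; the label and key=value format here are made 
up, the point is only that they never vary:
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class GreppableLogs {
  private static final Log LOG = LogFactory.getLog(GreppableLogs.class);

  // One label ("sequenceid") and one format ("key=value") everywhere means
  // grep "region=<encoded-name>" recovers a region's entire history.
  void onFlush(String encodedName, long seqId) {
    LOG.info("Finished flush, region=" + encodedName + ", sequenceid=" + seqId);
  }

  void onCompaction(String encodedName, long seqId) {
    LOG.info("Starting compaction, region=" + encodedName + ", sequenceid=" + seqId);
  }
}
{code}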

Logs should be digestible and actionable.  They should be so regularized, 
'standardized' (as it is termed in one of the articles noted above by Sean), 
that monitoring of logs can be done by tools, and building a tool like 
http://www.ymc.ch/en/hbase-split-visualisation-introducing-hannibal is 
easy-to-do and continues to work across versions (you shouldn't need to install 
hannibal to get a historic, cluster-wide view on compactions/flushes -- but 
that is another issue).

I like the idea of standard context to dump on DEBUG (one of the articles talks 
of log4j MDC).  We had a hack of this where we'd dump context in JSON when 
stuff was slow.
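
For reference, a minimal sketch of the MDC idea with log4j 1.x (the keys and 
names here are illustrative, not a proposal):
{code}
import org.apache.log4j.Logger;
import org.apache.log4j.MDC;

public class MdcContextExample {
  private static final Logger LOG = Logger.getLogger(MdcContextExample.class);

  void handleRequest(String regionName, String client) {
    // Per-request context goes in the MDC; a PatternLayout containing
    // %X{region} and %X{client} stamps it on every line logged from this
    // thread until the keys are removed.
    MDC.put("region", regionName);
    MDC.put("client", client);
    try {
      LOG.debug("processing request");
    } finally {
      MDC.remove("region");
      MDC.remove("client");
    }
  }
}
{code}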

 Document suggested use of log levels in dev guide
 -

 Key: HBASE-12396
 URL: https://issues.apache.org/jira/browse/HBASE-12396
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Reporter: Sean Busbey
Priority: Minor

 Right now we don't provide any guidance on appropriate use of log levels, 
 which leads to inconsistent use and in some cases exacerbates problems for 
 tests and troubleshooting (see HBASE-12395). We should add a section on 
 suggested use of log levels to the guide.
 Some related reading:
 * [a good ops-focused blog post on 
 levels|http://watchitlater.com/blog/2009/12/logging-guidelines/]
 * [another, focused on 
 devs|http://www.codeproject.com/Articles/42354/The-Art-of-Logging]
 * [guidelines from a user on stackoverflow (that I 
 like)|http://stackoverflow.com/a/2031209] also has some good discussion.
 * [extended logging discussion with some level use 
 guidelines|http://www.javacodegeeks.com/2011/01/10-tips-proper-application-logging.html]
 * [guidelines for Atlassian 
 devs|https://developer.atlassian.com/display/CONFDEV/Logging+Guidelines]
 * [guidelines for OpenStack 
 devs|https://wiki.openstack.org/wiki/LoggingStandards]
 * [the Kafka dev guide|http://kafka.apache.org/coding-guide.html] has a good 
 section on their use titled Logging



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12397) CopyTable fails to compile with the Hadoop 1 profile

2014-10-31 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192135#comment-14192135
 ] 

Andrew Purtell commented on HBASE-12397:


This will fail with '-Dhadoop.profile=1.0' but not '-Dhadoop.profile=1.1', so 
one option is dropping support for compiling against 1.0 in the POMs. That 
would be one less source of surprise for devs working with 0.98. Thoughts? 

 CopyTable fails to compile with the Hadoop 1 profile
 

 Key: HBASE-12397
 URL: https://issues.apache.org/jira/browse/HBASE-12397
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
 Fix For: 0.98.8


 [ERROR] 
 /usr/src/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java:[88,17]
  error: cannot find symbol
 This was introduced in f31edd80
 {noformat}
 commit f31edd8004226c795ee46fbe9e93d10671ab895a
 Author: Ted Yu te...@apache.org
 Date:   Thu Oct 9 15:52:18 2014 +
 HBASE-11997 CopyTable with bulkload (Yi Deng)
 {noformat}
 [~tedyu], [~daviddengcn], please have a look at this or I will revert it on 
 0.98.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12397) CopyTable fails to compile with the Hadoop 1 profile

2014-10-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192140#comment-14192140
 ] 

Ted Yu commented on HBASE-12397:


That would be nice.

 CopyTable fails to compile with the Hadoop 1 profile
 

 Key: HBASE-12397
 URL: https://issues.apache.org/jira/browse/HBASE-12397
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
 Fix For: 0.98.8


 [ERROR] 
 /usr/src/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java:[88,17]
  error: cannot find symbol
 This was introduced in f31edd80
 {noformat}
 commit f31edd8004226c795ee46fbe9e93d10671ab895a
 Author: Ted Yu te...@apache.org
 Date:   Thu Oct 9 15:52:18 2014 +
 HBASE-11997 CopyTable with bulkload (Yi Deng)
 {noformat}
 [~tedyu], [~daviddengcn], please have a look at this or I will revert it on 
 0.98.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12396) Document suggested use of log levels in dev guide

2014-10-31 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192142#comment-14192142
 ] 

Sean Busbey commented on HBASE-12396:
-

We should also document what user-provided data might be included in log 
messages (like keys in region ids) so downstream people can reason about the 
protection level needed for them.

 Document suggested use of log levels in dev guide
 -

 Key: HBASE-12396
 URL: https://issues.apache.org/jira/browse/HBASE-12396
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Reporter: Sean Busbey
Priority: Minor

 Right now we don't provide any guidance on appropriate use of log levels, 
 which leads to inconsistent use and in some cases exacerbates problems for 
 tests and troubleshooting (see HBASE-12395). We should add a section on 
 suggested use of log levels to the guide.
 Some related reading:
 * [a good ops-focused blog post on 
 levels|http://watchitlater.com/blog/2009/12/logging-guidelines/]
 * [another, focused on 
 devs|http://www.codeproject.com/Articles/42354/The-Art-of-Logging]
 * [guidelines from a user on stackoverflow (that I 
 like)|http://stackoverflow.com/a/2031209] also has some good discussion.
 * [extended logging discussion with some level use 
 guidelines|http://www.javacodegeeks.com/2011/01/10-tips-proper-application-logging.html]
 * [guidelines for Atlassian 
 devs|https://developer.atlassian.com/display/CONFDEV/Logging+Guidelines]
 * [guidelines for OpenStack 
 devs|https://wiki.openstack.org/wiki/LoggingStandards]
 * [the Kafka dev guide|http://kafka.apache.org/coding-guide.html] has a good 
 section on their use titled Logging



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12329) Table create with duplicate column family names quietly succeeds

2014-10-31 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12329:

Status: Patch Available  (was: Open)

+1 pending QA.

Applies cleanly; ran through the tests altered by the patch.

Switching to Patch Available to get a QA run.

 Table create with duplicate column family names quietly succeeds
 

 Key: HBASE-12329
 URL: https://issues.apache.org/jira/browse/HBASE-12329
 Project: HBase
  Issue Type: Bug
  Components: Client, shell
Reporter: Sean Busbey
Assignee: Jingcheng Du
Priority: Minor
 Attachments: HBASE-12329-V2.diff, HBASE-12329-V3.diff, 
 HBASE-12329.diff


 From the mailing list
 {quote}
 I was expecting that it is forbidden, **but** this call does not throw any
 exception
 {code}
 String[] families = {"cf", "cf"};
 HTableDescriptor desc = new HTableDescriptor(name);
 for (String cf : families) {
   HColumnDescriptor coldef = new HColumnDescriptor(cf);
   desc.addFamily(coldef);
 }
 try {
   admin.createTable(desc);
 } catch (TableExistsException e) {
   throw new IOException("table '" + name + "' already exists");
 }
 {code}
 {quote}
 And Ted's follow up replicates in the shell
 {code}
 hbase(main):001:0> create 't2', {NAME => 'f1'}, {NAME => 'f1'}
 The table got created - with 1 column family:
 hbase(main):002:0> describe 't2'
 DESCRIPTION                                                          ENABLED
  't2', {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER =>  true
  'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION =>
  'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS
  => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE
  => 'true'}
 1 row(s) in 0.1000 seconds
 {code}
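
 Since addFamily() keys families by name and quietly replaces an existing 
 entry, one fix direction is to reject the duplicate up front. A hedged sketch 
 using HTableDescriptor#hasFamily from the 0.98-era client API (the attached 
 patch's actual approach may differ):
 {code}
 import java.io.IOException;
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.util.Bytes;

 public class DuplicateFamilyGuard {
   // Sketch: fail fast on a repeated family name instead of letting
   // addFamily() silently overwrite the earlier descriptor.
   static HTableDescriptor build(String table, String... families) throws IOException {
     HTableDescriptor desc = new HTableDescriptor(TableName.valueOf(table));
     for (String cf : families) {
       if (desc.hasFamily(Bytes.toBytes(cf))) {
         throw new IOException("duplicate column family '" + cf + "'");
       }
       desc.addFamily(new HColumnDescriptor(cf));
     }
     return desc;
   }
 }
 {code}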



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12219) Cache more efficiently getAll() and get() in FSTableDescriptors

2014-10-31 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12219:
--
   Resolution: Fixed
Fix Version/s: 0.99.2
   2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Pushed the branch-1 patch (pushed the master patch yesterday).  Thanks 
[~esteban].

 Cache more efficiently getAll() and get() in FSTableDescriptors
 ---

 Key: HBASE-12219
 URL: https://issues.apache.org/jira/browse/HBASE-12219
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.24, 0.99.1, 0.98.6.1
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
  Labels: scalability
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12219-0.99.patch, HBASE-12219-v1.patch, 
 HBASE-12219-v1.patch, HBASE-12219.v0.txt, HBASE-12219.v2.patch, 
 HBASE-12219.v3.patch, list.png


 Currently table descriptors and tables are cached once they are accessed for 
 the first time. Subsequent calls to the master only require a trip to HDFS to 
 look up the modified time in order to reload the table descriptors if 
 modified. However, in clusters with a large number of tables or concurrent 
 clients, this can be too aggressive on HDFS and the master, causing 
 contention when processing other requests. A simple solution is a TTL-based 
 cache for FSTableDescriptors#getAll() and 
 FSTableDescriptors#TableDescriptorAndModtime() that allows the master to 
 process those calls faster, without causing contention and without having to 
 perform a trip to HDFS for every call to listTables() or getTableDescriptor().
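
 A hedged sketch of the TTL idea (class and method names invented here; the 
 committed patch may differ): keep each cached value with its load time and 
 only go back to HDFS once the entry has aged past the TTL:
 {code}
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;

 public class TtlCache<K, V> {
   private static final class Entry<V> {
     final V value;
     final long loadedAt;
     Entry(V value, long loadedAt) { this.value = value; this.loadedAt = loadedAt; }
   }

   public interface Loader<K, V> { V load(K key) throws Exception; }

   private final ConcurrentMap<K, Entry<V>> cache = new ConcurrentHashMap<K, Entry<V>>();
   private final long ttlMs;

   public TtlCache(long ttlMs) { this.ttlMs = ttlMs; }

   public V get(K key, Loader<K, V> loader) throws Exception {
     long now = System.currentTimeMillis();
     Entry<V> e = cache.get(key);
     if (e != null && now - e.loadedAt < ttlMs) {
       return e.value;              // fresh enough: skip the HDFS round trip
     }
     V v = loader.load(key);        // stale or missing: one trip to HDFS
     cache.put(key, new Entry<V>(v, now));
     return v;
   }
 }
 {code}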



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12397) CopyTable fails to compile with the Hadoop 1 profile

2014-10-31 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192163#comment-14192163
 ] 

Andrew Purtell commented on HBASE-12397:


I sent a mail to dev@. 

 CopyTable fails to compile with the Hadoop 1 profile
 

 Key: HBASE-12397
 URL: https://issues.apache.org/jira/browse/HBASE-12397
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
 Fix For: 0.98.8


 [ERROR] 
 /usr/src/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java:[88,17]
  error: cannot find symbol
 This was introduced in f31edd80
 {noformat}
 commit f31edd8004226c795ee46fbe9e93d10671ab895a
 Author: Ted Yu te...@apache.org
 Date:   Thu Oct 9 15:52:18 2014 +
 HBASE-11997 CopyTable with bulkload (Yi Deng)
 {noformat}
 [~tedyu], [~daviddengcn], please have a look at this or I will revert it on 
 0.98.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12358) Create ByteBuffer backed Cell

2014-10-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192171#comment-14192171
 ] 

stack commented on HBASE-12358:
---

As a POC, v2 looks cleaner.

What is ByteBufferBackedKeyValue then?  An old-school KeyValue only backed by 
BB?  That's a good POC exercise I'd say.

You don't need HeapSize and Settable... because ByteBufferBackedCell implements 
them already:

+public class ByteBufferBackedKeyValue implements ByteBufferBackedCell, 
HeapSize, Cloneable,
+SettableSequenceId {

Don't repeat these defines in the new class I'd say... just refer to them from 
KeyValue?

+  /** Size of the key length field in bytes */
+  public static final int KEY_LENGTH_SIZE = Bytes.SIZEOF_INT;

Or if you do bring them over, make them private so this madness (smile) doesn't 
leak about.

On below, you can actually instantiate an empty one?

+  public ByteBufferBackedKeyValue() {
+
+  }


We can't have a util per type as in ByteBufferBackedKeyValueUtil.

Could we have a single util that uses a factory to get type-particular 
methods?  Can do later.

What do you fellas think of adding this?

+  public boolean hasArray() {

Does it belong in Cell?  I mean, if it returns false, what are you to do?  Go 
find the BB version?  Where do you find that?  A factory can be used to figure 
the configured type inside server/client, but as to whether or not we should 
use BBs rather than arrays... Adding it to Cell is one way, but if DBBs, what 
do you do?

On the other hand, we are not going to have that many implementations of 
Cell... so if the Cell answers false, then you presume it has implemented 
BBBCell... and use the BB methods.  Something like that; see the sketch below.
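
Roughly, the dispatch would look like this (interface and accessor names here 
are hypothetical, not the proposed API):
{code}
import java.nio.ByteBuffer;

// Hypothetical shape of the hasArray() probe under discussion: callers
// branch once to array accessors or to ByteBuffer accessors.
interface BufferBackedCellSketch {
  boolean hasArray();
  byte[] getRowArray();      // meaningful only when hasArray() is true
  int getRowOffset();
  int getRowLength();
  ByteBuffer getRowBuffer(); // used when hasArray() is false (e.g. a DBB)
}

final class RowReader {
  static byte[] copyRow(BufferBackedCellSketch cell) {
    if (cell.hasArray()) {
      byte[] out = new byte[cell.getRowLength()];
      System.arraycopy(cell.getRowArray(), cell.getRowOffset(), out, 0, out.length);
      return out;
    }
    ByteBuffer dup = cell.getRowBuffer().duplicate();  // don't disturb position
    byte[] out = new byte[dup.remaining()];
    dup.get(out);
    return out;
  }
}
{code}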


 Create ByteBuffer backed Cell
 -

 Key: HBASE-12358
 URL: https://issues.apache.org/jira/browse/HBASE-12358
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12358.patch, HBASE-12358_1.patch, 
 HBASE-12358_2.patch


 As part of HBASE-12224 and HBASE-12282 we wanted a Cell that is backed by BB. 
  Changing the core Cell impl would not be needed as it is used in server 
 only.  So we will create a BB backed Cell and use it in the Server side read 
 path. This JIRA just creates an interface that extends Cell and adds the 
 needed API.
 The getTimestamp() and getTypeByte() methods can still refer to the original 
 Cell API only.  The getXXXOffset() and getXXXLength() methods can also refer 
 to the original Cell only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12390) Change revision style from svn to git

2014-10-31 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-12390:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed this to branch-1+. Thanks Stack for review. 

 Change revision style from svn to git
 -

 Key: HBASE-12390
 URL: https://issues.apache.org/jira/browse/HBASE-12390
 Project: HBase
  Issue Type: Improvement
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Minor
 Fix For: 2.0.0, 0.99.2

 Attachments: hbase-12390_v1.patch


 This was bothering me. We should change the {{-r revision_id}} style, which 
 is an svn thing. 
 We can do: 
 {code}
 2.0.0-SNAPSHOT, revision=64b6109ce917a47e4fa4b88cdb800bcc7a228484
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12397) CopyTable fails to compile with the Hadoop 1 profile

2014-10-31 Thread Yi Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192185#comment-14192185
 ] 

Yi Deng commented on HBASE-12397:
-

I'm not familiar with -Dhadoop.profile=1.0, but what could be changed between 
different profiles?

 CopyTable fails to compile with the Hadoop 1 profile
 

 Key: HBASE-12397
 URL: https://issues.apache.org/jira/browse/HBASE-12397
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
 Fix For: 0.98.8


 [ERROR] 
 /usr/src/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java:[88,17]
  error: cannot find symbol
 This was introduced in f31edd80
 {noformat}
 commit f31edd8004226c795ee46fbe9e93d10671ab895a
 Author: Ted Yu te...@apache.org
 Date:   Thu Oct 9 15:52:18 2014 +
 HBASE-11997 CopyTable with bulkload (Yi Deng)
 {noformat}
 [~tedyu], [~daviddengcn], please have a look at this or I will revert it on 
 0.98.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12072) We are doing 35 x 35 retries for master operations

2014-10-31 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192187#comment-14192187
 ] 

Enis Soztutar commented on HBASE-12072:
---

There seems to be a timeout in one of these tests. Let me look into it. 

 We are doing 35 x 35 retries for master operations
 --

 Key: HBASE-12072
 URL: https://issues.apache.org/jira/browse/HBASE-12072
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 0.99.2

 Attachments: 12072-v1.txt, 12072-v2.txt, hbase-12072_v1.patch, 
 hbase-12072_v2.patch, hbase-12072_v2.patch


 For master requests, there are two retry mechanisms in effect. The first one 
 is from HBaseAdmin.executeCallable() 
 {code}
   private <V> V executeCallable(MasterCallable<V> callable) throws IOException {
     RpcRetryingCaller<V> caller = rpcCallerFactory.newCaller();
     try {
       return caller.callWithRetries(callable);
     } finally {
       callable.close();
     }
   }
 {code}
 And inside, the other one is from StubMaker.makeStub():
 {code}
   /**
    * Create a stub against the master.  Retry if necessary.
    * @return A stub to do <code>intf</code> against the master
    * @throws MasterNotRunningException
    */
   @edu.umd.cs.findbugs.annotations.SuppressWarnings(value="SWL_SLEEP_WITH_LOCK_HELD")
   Object makeStub() throws MasterNotRunningException {
 {code}
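 The sleeps visible in the log below follow the client's capped-multiplier 
 backoff. A sketch of the computation, assuming the 0.98-era RETRY_BACKOFF 
 table and a 100 ms base pause (the small deviations in the log are jitter); 
 summed over 35 tries the sleeps alone come to roughly 9 minutes per full 
 inner loop, which is where the 10 min * 35 figure below comes from:
 {code}
 public class BackoffSketch {
   // Multiplier table: 100, 200, 300, 500, 1000, 2000, 4000 ms, then
   // 10000 ms for a few tries, then 20000 ms for the remainder.
   static final int[] RETRY_BACKOFF = {1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200};

   static long pauseTime(long basePauseMs, int tries) {
     int index = Math.min(tries, RETRY_BACKOFF.length - 1);
     return basePauseMs * RETRY_BACKOFF[index];
   }

   public static void main(String[] args) {
     long total = 0;
     for (int t = 0; t < 35; t++) {
       total += pauseTime(100, t);
     }
     System.out.println("sum of sleeps for 35 tries: " + total + " ms"); // ~9 minutes
   }
 }
 {code}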
 The tests will just hang for 10 min * 35 ~= 6 hours. 
 {code}
 2014-09-23 16:19:05,151 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 1 of 35 
 failed; retrying after sleep of 100, exception=java.io.IOException: Can't get 
 master address from ZooKeeper; znode data == null
 2014-09-23 16:19:05,253 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 2 of 35 
 failed; retrying after sleep of 200, exception=java.io.IOException: Can't get 
 master address from ZooKeeper; znode data == null
 2014-09-23 16:19:05,456 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 3 of 35 
 failed; retrying after sleep of 300, exception=java.io.IOException: Can't get 
 master address from ZooKeeper; znode data == null
 2014-09-23 16:19:05,759 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 4 of 35 
 failed; retrying after sleep of 500, exception=java.io.IOException: Can't get 
 master address from ZooKeeper; znode data == null
 2014-09-23 16:19:06,262 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 5 of 35 
 failed; retrying after sleep of 1008, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:07,273 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 6 of 35 
 failed; retrying after sleep of 2011, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:09,286 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 7 of 35 
 failed; retrying after sleep of 4012, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:13,303 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 8 of 35 
 failed; retrying after sleep of 10033, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:23,343 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 9 of 35 
 failed; retrying after sleep of 10089, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:33,439 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 10 of 
 35 failed; retrying after sleep of 10027, exception=java.io.IOException: 
 Can't get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:43,473 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 11 of 
 35 failed; retrying after sleep of 10004, exception=java.io.IOException: 
 Can't get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:53,485 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 12 of 
 35 failed; retrying after sleep of 20160, exception=java.io.IOException: 
 Can't get master address from ZooKeeper; znode data == null
 2014-09-23 16:20:13,656 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 13 of 
 35 failed; retrying after sleep of 20006, exception=java.io.IOException: 
 Can't get master address from ZooKeeper; znode data == null
 2014-09-23 

[jira] [Updated] (HBASE-12219) Cache more efficiently getAll() and get() in FSTableDescriptors

2014-10-31 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-12219:
--
Attachment: HBASE-12219-0.99.addendum.patch

Missing change.

 Cache more efficiently getAll() and get() in FSTableDescriptors
 ---

 Key: HBASE-12219
 URL: https://issues.apache.org/jira/browse/HBASE-12219
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.24, 0.99.1, 0.98.6.1
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
  Labels: scalability
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12219-0.99.addendum.patch, HBASE-12219-0.99.patch, 
 HBASE-12219-v1.patch, HBASE-12219-v1.patch, HBASE-12219.v0.txt, 
 HBASE-12219.v2.patch, HBASE-12219.v3.patch, list.png


 Currently table descriptors and tables are cached once they are accessed for 
 the first time. Subsequent calls to the master only require a trip to HDFS to 
 look up the modified time in order to reload the table descriptors if 
 modified. However, in clusters with a large number of tables or concurrent 
 clients, this can be too aggressive on HDFS and the master, causing 
 contention when processing other requests. A simple solution is a TTL-based 
 cache for FSTableDescriptors#getAll() and 
 FSTableDescriptors#TableDescriptorAndModtime() that allows the master to 
 process those calls faster, without causing contention and without having to 
 perform a trip to HDFS for every call to listTables() or getTableDescriptor().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

