[jira] [Commented] (HBASE-12959) Compact never end when table's dataBlockEncoding using PREFIX_TREE

2015-02-03 Thread wuchengzhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304589#comment-14304589
 ] 

wuchengzhi commented on HBASE-12959:


I tried that patch just now, but it does not seem to fix my issue.

  Compact never end when table's dataBlockEncoding using  PREFIX_TREE
 

 Key: HBASE-12959
 URL: https://issues.apache.org/jira/browse/HBASE-12959
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 0.98.7
 Environment: hbase 0.98.7
 hadoop 2.5.1
Reporter: wuchengzhi
Priority: Critical
 Attachments: PrefixTreeCompact.java, txtfile-part1.txt.gz, 
 txtfile-part2.txt.gz, txtfile-part4.txt.gz, txtfile-part5.txt.gz, 
 txtfile-part6.txt.gz, txtfile-part7.txt.gz


 I upgraded HBase from 0.96.1.1 to 0.98.7 and Hadoop from 2.2.0 to 2.5.1. Some 
 tables whose data block encoding uses prefix-tree are abnormal during 
 compaction: the GUI shows the table's compaction status as 
 MAJOR_AND_MINOR (MAJOR) all the time.
 In the regionserver dump there are some logs as below:
 Tasks:
 ===
 Task: Compacting info in 
 PREFIX_NOT_COMPACT,,1421954285670.41ef60e2c221772626e141d5080296c5.
 Status: RUNNING:Compacting store info
 Running for 1097s  (on the  site running more than 3 days)
 
 Thread 197 (regionserver60020-smallCompactions-1421954341530):
   State: RUNNABLE
   Blocked count: 7
   Waited count: 3
   Stack:
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.followFan(PrefixTreeArrayScanner.java:329)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.positionAtOrAfter(PrefixTreeArraySearcher.java:149)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.seekForwardToOrAfter(PrefixTreeArraySearcher.java:183)
 
 org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeSeeker.seekToOrBeforeUsingPositionAtOrAfter(PrefixTreeSeeker.java:199)
 
 org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeSeeker.seekToKeyInBlock(PrefixTreeSeeker.java:162)
 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.loadBlockAndSeekToKey(HFileReaderV2.java:1172)
 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:573)
 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:257)
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:697)
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 
 org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:222)
 
 org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:77)
 
 org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:110)
 org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1099)
 org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1482)
 Thread 177 (regionserver60020-smallCompactions-1421954314809):
   State: RUNNABLE
   Blocked count: 40
   Waited count: 60
   Stack:
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.column.ColumnReader.populateBuffer(ColumnReader.java:81)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.populateQualifier(PrefixTreeArrayScanner.java:471)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.populateNonRowFields(PrefixTreeArrayScanner.java:452)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.nextRow(PrefixTreeArrayScanner.java:226)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.advance(PrefixTreeArrayScanner.java:208)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.positionAtQualifierTimestamp(PrefixTreeArraySearcher.java:244)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.positionAtOrAfter(PrefixTreeArraySearcher.java:123)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.seekForwardToOrAfter(PrefixTreeArraySearcher.java:183)
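
Not part of the original report: a minimal sketch, assuming the 0.98 client API and the table/family names visible in the dump above, of how such a PREFIX_TREE-encoded table is created, flushed and major-compacted. This is roughly the scenario the attached PrefixTreeCompact.java is presumed to exercise; the row and value literals are made up.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixTreeCompactSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    // Table and family names are taken from the region shown in the dump above.
    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("PREFIX_NOT_COMPACT"));
    HColumnDescriptor info = new HColumnDescriptor("info");
    info.setDataBlockEncoding(DataBlockEncoding.PREFIX_TREE);
    desc.addFamily(info);
    admin.createTable(desc);

    // Write a little data and flush it so there is an encoded HFile to compact.
    HTable table = new HTable(conf, "PREFIX_NOT_COMPACT");
    Put put = new Put(Bytes.toBytes("row-0001"));
    put.add(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("value"));
    table.put(put);
    table.flushCommits();
    admin.flush("PREFIX_NOT_COMPACT");

    // Request the compaction that, per this report, never finishes.
    admin.majorCompact("PREFIX_NOT_COMPACT");

    table.close();
    admin.close();
  }
}
{code}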
 
 

[jira] [Commented] (HBASE-12964) Add the ability for hbase-daemon.sh to start in the foreground

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304635#comment-14304635
 ] 

Hudson commented on HBASE-12964:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #790 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/790/])
HBASE-12964 Add the ability for hbase-daemon.sh to start in the foreground 
(eclark: rev c1a776293c81e10ff834f80d466728346b8b0fc9)
* bin/hbase-daemon.sh


 Add the ability for hbase-daemon.sh to start in the foreground
 --

 Key: HBASE-12964
 URL: https://issues.apache.org/jira/browse/HBASE-12964
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 0.98.10
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12964-v1.patch, HBASE-12964-v2.patch, 
 HBASE-12964.patch


 The znode cleaner is awesome and gives great benefits.
 As more and more deployments start using containers some of them will want to 
 run things in the foreground. hbase-daemon.sh should allow that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12957) region_mover#isSuccessfulScan may be extremely slow on region with lots of expired data

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304652#comment-14304652
 ] 

Hudson commented on HBASE-12957:


FAILURE: Integrated in HBase-TRUNK #6084 (See 
[https://builds.apache.org/job/HBase-TRUNK/6084/])
HBASE-12957 region_mover#isSuccessfulScan may be extremely slow on region with 
lots of expired data (Hongyu Bi) (tedyu: rev 
4388fed83028325cfe75fc0a8787183db2a58855)
* bin/region_mover.rb
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerHostname.java


 region_mover#isSuccessfulScan may be extremely slow on region with lots of 
 expired data
 ---

 Key: HBASE-12957
 URL: https://issues.apache.org/jira/browse/HBASE-12957
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Reporter: hongyu bi
Assignee: hongyu bi
Priority: Minor
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12957-v0.patch


 region_mover calls isSuccessfulScan after a region has moved, to make sure 
 the region is healthy. However, if the moved region has lots of expired data, 
 region_mover#isSuccessfulScan can take a long time to finish and may even 
 exceed the lease timeout. So I made isSuccessfulScan a get-like scan to achieve 
 a faster response in that case. 
 Workaround: before graceful_stop/rolling_restart, call major_compact on the 
 table with a small TTL.
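
For illustration only: a minimal Java sketch of the "get-like scan" idea described above. The actual change lives in bin/region_mover.rb (Ruby); the class, method and parameter names here are placeholders.

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.filter.InclusiveStopFilter;

public final class GetLikeScanSketch {
  // Probe only the first row of the region instead of scanning through all of
  // its (possibly expired) data, so the health check returns quickly.
  static void isSuccessfulScan(HTable table, byte[] regionStartKey) throws IOException {
    Scan scan = new Scan(regionStartKey);
    scan.setCaching(1);
    scan.setBatch(1);
    scan.setFilter(new FilterList(
        new FirstKeyOnlyFilter(),                   // at most one cell per row
        new InclusiveStopFilter(regionStartKey)));  // stop right after the start row
    ResultScanner scanner = table.getScanner(scan);
    try {
      scanner.next(); // null for an empty region is fine; the scan itself succeeded
    } finally {
      scanner.close();
    }
  }
}
{code}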



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-12957) region_mover#isSuccessfulScan may be extremely slow on region with lots of expired data

2015-02-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-12957:
--

Assignee: hongyu bi

 region_mover#isSuccessfulScan may be extremely slow on region with lots of 
 expired data
 ---

 Key: HBASE-12957
 URL: https://issues.apache.org/jira/browse/HBASE-12957
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Reporter: hongyu bi
Assignee: hongyu bi
Priority: Minor
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12957-v0.patch


 region_mover calls isSuccessfulScan after a region has moved, to make sure 
 the region is healthy. However, if the moved region has lots of expired data, 
 region_mover#isSuccessfulScan can take a long time to finish and may even 
 exceed the lease timeout. So I made isSuccessfulScan a get-like scan to achieve 
 a faster response in that case. 
 Workaround: before graceful_stop/rolling_restart, call major_compact on the 
 table with a small TTL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12964) Add the ability for hbase-daemon.sh to start in the foreground

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304569#comment-14304569
 ] 

Hudson commented on HBASE-12964:


SUCCESS: Integrated in HBase-0.98 #833 (See 
[https://builds.apache.org/job/HBase-0.98/833/])
HBASE-12964 Add the ability for hbase-daemon.sh to start in the foreground 
(eclark: rev c1a776293c81e10ff834f80d466728346b8b0fc9)
* bin/hbase-daemon.sh


 Add the ability for hbase-daemon.sh to start in the foreground
 --

 Key: HBASE-12964
 URL: https://issues.apache.org/jira/browse/HBASE-12964
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 0.98.10
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12964-v1.patch, HBASE-12964-v2.patch, 
 HBASE-12964.patch


 The znode cleaner is awesome and gives great benefits.
 As more and more deployments start using containers some of them will want to 
 run things in the foreground. hbase-daemon.sh should allow that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12959) Compact never end when table's dataBlockEncoding using PREFIX_TREE

2015-02-03 Thread wuchengzhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304659#comment-14304659
 ] 

wuchengzhi commented on HBASE-12959:


Not yet, please help to fix it, thanks.

  Compact never end when table's dataBlockEncoding using  PREFIX_TREE
 

 Key: HBASE-12959
 URL: https://issues.apache.org/jira/browse/HBASE-12959
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 0.98.7
 Environment: hbase 0.98.7
 hadoop 2.5.1
Reporter: wuchengzhi
Priority: Critical
 Attachments: PrefixTreeCompact.java, txtfile-part1.txt.gz, 
 txtfile-part2.txt.gz, txtfile-part4.txt.gz, txtfile-part5.txt.gz, 
 txtfile-part6.txt.gz, txtfile-part7.txt.gz


 I upgraded HBase from 0.96.1.1 to 0.98.7 and Hadoop from 2.2.0 to 2.5.1. Some 
 tables whose data block encoding uses prefix-tree are abnormal during 
 compaction: the GUI shows the table's compaction status as 
 MAJOR_AND_MINOR (MAJOR) all the time.
 In the regionserver dump there are some logs as below:
 Tasks:
 ===
 Task: Compacting info in 
 PREFIX_NOT_COMPACT,,1421954285670.41ef60e2c221772626e141d5080296c5.
 Status: RUNNING:Compacting store info
 Running for 1097s  (on the  site running more than 3 days)
 
 Thread 197 (regionserver60020-smallCompactions-1421954341530):
   State: RUNNABLE
   Blocked count: 7
   Waited count: 3
   Stack:
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.followFan(PrefixTreeArrayScanner.java:329)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.positionAtOrAfter(PrefixTreeArraySearcher.java:149)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.seekForwardToOrAfter(PrefixTreeArraySearcher.java:183)
 
 org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeSeeker.seekToOrBeforeUsingPositionAtOrAfter(PrefixTreeSeeker.java:199)
 
 org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeSeeker.seekToKeyInBlock(PrefixTreeSeeker.java:162)
 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.loadBlockAndSeekToKey(HFileReaderV2.java:1172)
 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:573)
 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:257)
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:697)
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 
 org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:222)
 
 org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:77)
 
 org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:110)
 org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1099)
 org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1482)
 Thread 177 (regionserver60020-smallCompactions-1421954314809):
   State: RUNNABLE
   Blocked count: 40
   Waited count: 60
   Stack:
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.column.ColumnReader.populateBuffer(ColumnReader.java:81)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.populateQualifier(PrefixTreeArrayScanner.java:471)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.populateNonRowFields(PrefixTreeArrayScanner.java:452)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.nextRow(PrefixTreeArrayScanner.java:226)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.advance(PrefixTreeArrayScanner.java:208)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.positionAtQualifierTimestamp(PrefixTreeArraySearcher.java:244)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.positionAtOrAfter(PrefixTreeArraySearcher.java:123)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.seekForwardToOrAfter(PrefixTreeArraySearcher.java:183)
 
 

[jira] [Commented] (HBASE-12957) region_mover#isSuccessfulScan may be extremely slow on region with lots of expired data

2015-02-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304614#comment-14304614
 ] 

Hadoop QA commented on HBASE-12957:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12696108/HBASE-12957-v0.patch
  against master branch at commit 5c1b08c5ca18a499c0a336e5ebd7c6bcc45a9fad.
  ATTACHMENT ID: 12696108

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12685//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12685//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12685//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12685//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12685//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12685//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12685//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12685//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12685//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12685//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12685//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12685//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12685//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12685//console

This message is automatically generated.

 region_mover#isSuccessfulScan may be extremely slow on region with lots of 
 expired data
 ---

 Key: HBASE-12957
 URL: https://issues.apache.org/jira/browse/HBASE-12957
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Reporter: hongyu bi
Assignee: hongyu bi
Priority: Minor
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12957-v0.patch


 region_mover calls isSuccessfulScan after a region has moved, to make sure 
 the region is healthy. However, if the moved region has lots of expired data, 
 region_mover#isSuccessfulScan can take a long time to finish and may even 
 exceed the lease timeout. So I made isSuccessfulScan a get-like scan to achieve 
 a faster response in that case. 
 Workaround: before graceful_stop/rolling_restart, call major_compact on the 
 table with a small TTL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12959) Compact never end when table's dataBlockEncoding using PREFIX_TREE

2015-02-03 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304644#comment-14304644
 ] 

ramkrishna.s.vasudevan commented on HBASE-12959:


Okie, are you working on a patch for this? If not, I can take it up.

  Compact never end when table's dataBlockEncoding using  PREFIX_TREE
 

 Key: HBASE-12959
 URL: https://issues.apache.org/jira/browse/HBASE-12959
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 0.98.7
 Environment: hbase 0.98.7
 hadoop 2.5.1
Reporter: wuchengzhi
Priority: Critical
 Attachments: PrefixTreeCompact.java, txtfile-part1.txt.gz, 
 txtfile-part2.txt.gz, txtfile-part4.txt.gz, txtfile-part5.txt.gz, 
 txtfile-part6.txt.gz, txtfile-part7.txt.gz


 I upgraded HBase from 0.96.1.1 to 0.98.7 and Hadoop from 2.2.0 to 2.5.1. Some 
 tables whose data block encoding uses prefix-tree are abnormal during 
 compaction: the GUI shows the table's compaction status as 
 MAJOR_AND_MINOR (MAJOR) all the time.
 In the regionserver dump there are some logs as below:
 Tasks:
 ===
 Task: Compacting info in 
 PREFIX_NOT_COMPACT,,1421954285670.41ef60e2c221772626e141d5080296c5.
 Status: RUNNING:Compacting store info
 Running for 1097s  (on the  site running more than 3 days)
 
 Thread 197 (regionserver60020-smallCompactions-1421954341530):
   State: RUNNABLE
   Blocked count: 7
   Waited count: 3
   Stack:
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.followFan(PrefixTreeArrayScanner.java:329)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.positionAtOrAfter(PrefixTreeArraySearcher.java:149)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.seekForwardToOrAfter(PrefixTreeArraySearcher.java:183)
 
 org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeSeeker.seekToOrBeforeUsingPositionAtOrAfter(PrefixTreeSeeker.java:199)
 
 org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeSeeker.seekToKeyInBlock(PrefixTreeSeeker.java:162)
 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.loadBlockAndSeekToKey(HFileReaderV2.java:1172)
 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:573)
 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:257)
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:697)
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 
 org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:222)
 
 org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:77)
 
 org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:110)
 org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1099)
 org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1482)
 Thread 177 (regionserver60020-smallCompactions-1421954314809):
   State: RUNNABLE
   Blocked count: 40
   Waited count: 60
   Stack:
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.column.ColumnReader.populateBuffer(ColumnReader.java:81)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.populateQualifier(PrefixTreeArrayScanner.java:471)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.populateNonRowFields(PrefixTreeArrayScanner.java:452)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.nextRow(PrefixTreeArrayScanner.java:226)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.advance(PrefixTreeArrayScanner.java:208)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.positionAtQualifierTimestamp(PrefixTreeArraySearcher.java:244)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.positionAtOrAfter(PrefixTreeArraySearcher.java:123)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.seekForwardToOrAfter(PrefixTreeArraySearcher.java:183)
 
 

[jira] [Commented] (HBASE-12914) Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade section

2015-02-03 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304646#comment-14304646
 ] 

ramkrishna.s.vasudevan commented on HBASE-12914:


[~ndimiduk]
Thanks for looking into the RB. As per the discussions on the other JIRA, since 
0.98 treats Tags as experimental, it is better to mark the APIs and related 
features as Unstable. Marking them Unstable is fine with me.
What do others think?

 Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade 
 section
 ---

 Key: HBASE-12914
 URL: https://issues.apache.org/jira/browse/HBASE-12914
 Project: HBase
  Issue Type: Bug
  Components: API, documentation
Affects Versions: 0.98.6, 0.98.7, 0.98.8, 0.98.9
Reporter: Sean Busbey
Assignee: ramkrishna.s.vasudevan
Priority: Critical
 Fix For: 0.98.11

 Attachments: HBASE-12914-0.98.patch, HBASE-12914-branch-1.patch, 
 HBASE-12914.patch


 There are several features in 0.98 that require enabling HFilev3 support. 
 Some of those features include new extendable components that are marked 
 IA.Public.
 Current practice has been to treat these features as experimental. This has 
 included pushing non-compatible changes to branch-1 as the API got worked out 
 through use in 0.98.
 * Update all of the IA.Public classes involved to make sure they are 
 IS.Unstable in 0.98 (see the sketch below).
 * Update the ref guide section on upgrading from 0.98 to 1.0 to make folks 
 aware of these changes.
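
For illustration only, a minimal sketch of what such a marking looks like. The class name is invented, and the exact annotation package is an assumption (it differs between the 0.98 and 1.0+ branches):

{code}
import org.apache.hadoop.hbase.classification.InterfaceAudience;
import org.apache.hadoop.hbase.classification.InterfaceStability;

// A public extension point that depends on HFile v3: the audience stays
// Public, but stability is marked Unstable on the 0.98 line.
@InterfaceAudience.Public
@InterfaceStability.Unstable
public abstract class SomeHFileV3Extension {
  // extension-point methods would go here
}
{code}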



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12962) TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531

2015-02-03 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304645#comment-14304645
 ] 

ramkrishna.s.vasudevan commented on HBASE-12962:


The failed tests 
{code}
org.apache.jena.hadoop.rdf.io.input.AbstractNodeTupleInputFormatTests.testMultipleInputs(AbstractNodeTupleInputFormatTests.java:477)
at 
org.apache.hadoop.hbase.coprocessor.TestMasterObserver.testRegionTransitionOperations(TestMasterObserver.java:1604)
{code}
seem unrelated. Thanks for the reviews. Will commit it now.

 TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531
 -

 Key: HBASE-12962
 URL: https://issues.apache.org/jira/browse/HBASE-12962
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12962.patch


 Accidentally during HBASE-10531 the test case testBlockIndex() in 
 TestHFileBlockIndex was commented out.  Apologies for that. Not sure how that 
 happened. This patch uncomments the commented out test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12957) region_mover#isSuccessfulScan may be extremely slow on region with lots of expired data

2015-02-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12957:
---
Summary: region_mover#isSuccessfulScan may be extremely slow on region with 
lots of expired data  (was: region_mover#isSuccessfulScan may extremely slow on 
region with lots of expired data)

 region_mover#isSuccessfulScan may be extremely slow on region with lots of 
 expired data
 ---

 Key: HBASE-12957
 URL: https://issues.apache.org/jira/browse/HBASE-12957
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Reporter: hongyu bi
Priority: Minor
 Attachments: HBASE-12957-v0.patch


 region_mover calls isSuccessfulScan after a region has moved, to make sure 
 the region is healthy. However, if the moved region has lots of expired data, 
 region_mover#isSuccessfulScan can take a long time to finish and may even 
 exceed the lease timeout. So I made isSuccessfulScan a get-like scan to achieve 
 a faster response in that case. 
 Workaround: before graceful_stop/rolling_restart, call major_compact on the 
 table with a small TTL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12962) TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531

2015-02-03 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12962:
---
   Resolution: Fixed
Fix Version/s: 1.0.1
   2.0.0
   1.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Pushed to master, branch-1 and branch-1.0.  0.98 does not have this problem.  
Thanks for all the reviews.

 TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531
 -

 Key: HBASE-12962
 URL: https://issues.apache.org/jira/browse/HBASE-12962
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 1.0.0, 2.0.0, 1.0.1

 Attachments: HBASE-12962.patch


 Accidentally during HBASE-10531 the test case testBlockIndex() in 
 TestHFileBlockIndex was commented out.  Apologies for that. Not sure how that 
 happened. This patch uncomments the commented out test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12962) TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531

2015-02-03 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12962:
---
Component/s: test

 TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531
 -

 Key: HBASE-12962
 URL: https://issues.apache.org/jira/browse/HBASE-12962
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 1.0.0, 2.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 1.0.0, 2.0.0, 1.0.1

 Attachments: HBASE-12962.patch


 Accidentally during HBASE-10531 the test case testBlockIndex() in 
 TestHFileBlockIndex was commented out.  Apologies for that. Not sure how that 
 happened. This patch uncomments the commented out test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12914) Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade section

2015-02-03 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304695#comment-14304695
 ] 

Sean Busbey commented on HBASE-12914:
-

I don't think we need a special InterfaceAudience setting. AFAICT, most of 
these APIs are settled enough in 1.0+ for how they're labeled. The goal here 
was just to make sure early adopters on the 0.98 line had a heads-up 
appropriate for how we've been handling changes.

 Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade 
 section
 ---

 Key: HBASE-12914
 URL: https://issues.apache.org/jira/browse/HBASE-12914
 Project: HBase
  Issue Type: Bug
  Components: API, documentation
Affects Versions: 0.98.6, 0.98.7, 0.98.8, 0.98.9
Reporter: Sean Busbey
Assignee: ramkrishna.s.vasudevan
Priority: Critical
 Fix For: 0.98.11

 Attachments: HBASE-12914-0.98.patch, HBASE-12914-branch-1.patch, 
 HBASE-12914.patch


 There are several features in 0.98 that require enabling HFilev3 support. 
 Some of those features include new extendable components that are marked 
 IA.Public.
 Current practice has been to treat these features as experimental. This has 
 included pushing non-compatible changes to branch-1 as the API got worked out 
 through use in 0.98.
 * Update all of the IA.Public classes involved to make sure they are 
 IS.Unstable in 0.98.
 * Update the ref guide section on upgrading from 0.98 - 1.0 to make folks 
 aware of these changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-12948) Increment#addColumn on the same column multi times produce wrong result

2015-02-03 Thread hongyu bi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hongyu bi reassigned HBASE-12948:
-

Assignee: hongyu bi

 Increment#addColumn on the same column multi times produce wrong result 
 

 Key: HBASE-12948
 URL: https://issues.apache.org/jira/browse/HBASE-12948
 Project: HBase
  Issue Type: Bug
  Components: Client, regionserver
Reporter: hongyu bi
Assignee: hongyu bi
Priority: Critical
 Attachments: 12948-v2.patch, 12948-v3.patch, 
 HBASE-12948-0.99.2-v1.patch, HBASE-12948-v0.patch, HBASE-12948.patch


 Case:
 Initially get('row1'):
 rowkey=row1 value=1
 run:
 Increment increment = new Increment(Bytes.toBytes(row1));
 for (int i = 0; i < N; i++) {
   increment.addColumn(Bytes.toBytes(cf), Bytes.toBytes(c), 1);
 }
 hobi.increment(increment);
 get('row1'):
 If N=1 the result is 2, but if N>1 the result will always be 1.
 Cause:
 https://issues.apache.org/jira/browse/HBASE-7114 let Increment extend 
 Mutation, which changed familyMap from a NavigableMap to a List, so from the 
 client side we can buffer many edits on the same column.
 However, HRegion#increment uses idx to iterate the get's results; here 
 results.size < family.value().size if N>1, so the later edits on the same 
 column won't match the condition {idx < results.size() && 
 CellUtil.matchingQualifier(results.get(idx), kv)}. Meanwhile the edits share 
 the same mvccVersion, so this case happens.
 Fix:
 Following the put/delete#add behaviour on the same column, fix it from the 
 server side: apply "last edit wins" on the same column inside 
 HRegion#increment to preserve HBASE-7114's extension and keep the same 
 result as in 0.94.
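
To make the reported behaviour and the proposed "last edit wins" fix concrete, here is a small hedged sketch (placeholder row/family/qualifier literals, standard 0.98 client API). It illustrates the description above, not the actual patch:

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.util.Bytes;

public final class DuplicateColumnIncrementSketch {
  static void example(HTable table) throws IOException {
    Increment increment = new Increment(Bytes.toBytes("row1"));
    // The same column is buffered twice in one Increment, which is possible
    // since HBASE-7114 switched the family map to a List.
    increment.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("c"), 1L);
    increment.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("c"), 5L);
    // Reported behaviour on 0.98.x: the counter does not move at all.
    // With the fix described above (last edit wins), only the final amount, 5,
    // would be applied, matching Put/Delete behaviour on a repeated column.
    table.increment(increment);
  }
}
{code}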



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12957) region_mover#isSuccessfulScan may be extremely slow on region with lots of expired data

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304677#comment-14304677
 ] 

Hudson commented on HBASE-12957:


SUCCESS: Integrated in HBase-1.1 #138 (See 
[https://builds.apache.org/job/HBase-1.1/138/])
HBASE-12957 region_mover#isSuccessfulScan may be extremely slow on region with 
lots of expired data (Hongyu Bi) (tedyu: rev 
118f738d7ccb3f5f0c3e724bb67183e0440c201d)
* bin/region_mover.rb


 region_mover#isSuccessfulScan may be extremely slow on region with lots of 
 expired data
 ---

 Key: HBASE-12957
 URL: https://issues.apache.org/jira/browse/HBASE-12957
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Reporter: hongyu bi
Assignee: hongyu bi
Priority: Minor
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12957-v0.patch


 region_mover calls isSuccessfulScan after a region has moved, to make sure 
 the region is healthy. However, if the moved region has lots of expired data, 
 region_mover#isSuccessfulScan can take a long time to finish and may even 
 exceed the lease timeout. So I made isSuccessfulScan a get-like scan to achieve 
 a faster response in that case. 
 Workaround: before graceful_stop/rolling_restart, call major_compact on the 
 table with a small TTL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12957) region_mover#isSuccessfulScan may be extremely slow on region with lots of expired data

2015-02-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12957:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 region_mover#isSuccessfulScan may be extremely slow on region with lots of 
 expired data
 ---

 Key: HBASE-12957
 URL: https://issues.apache.org/jira/browse/HBASE-12957
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Reporter: hongyu bi
Assignee: hongyu bi
Priority: Minor
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12957-v0.patch


 region_mover calls isSuccessfulScan after a region has moved, to make sure 
 the region is healthy. However, if the moved region has lots of expired data, 
 region_mover#isSuccessfulScan can take a long time to finish and may even 
 exceed the lease timeout. So I made isSuccessfulScan a get-like scan to achieve 
 a faster response in that case. 
 Workaround: before graceful_stop/rolling_restart, call major_compact on the 
 table with a small TTL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12957) region_mover#isSuccessfulScan may extremely slow on region with lots of expired data

2015-02-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12957:
---
Status: Patch Available  (was: Open)

 region_mover#isSuccessfulScan may extremely slow on region with lots of 
 expired data
 

 Key: HBASE-12957
 URL: https://issues.apache.org/jira/browse/HBASE-12957
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Reporter: hongyu bi
Priority: Minor
 Attachments: HBASE-12957-v0.patch


 region_mover calls isSuccessfulScan after a region has moved, to make sure 
 the region is healthy. However, if the moved region has lots of expired data, 
 region_mover#isSuccessfulScan can take a long time to finish and may even 
 exceed the lease timeout. So I made isSuccessfulScan a get-like scan to achieve 
 a faster response in that case. 
 Workaround: before graceful_stop/rolling_restart, call major_compact on the 
 table with a small TTL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12957) region_mover#isSuccessfulScan may be extremely slow on region with lots of expired data

2015-02-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12957:
---
Fix Version/s: 1.1.0
   2.0.0

 region_mover#isSuccessfulScan may be extremely slow on region with lots of 
 expired data
 ---

 Key: HBASE-12957
 URL: https://issues.apache.org/jira/browse/HBASE-12957
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Reporter: hongyu bi
Priority: Minor
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12957-v0.patch


 region_mover calls isSuccessfulScan after a region has moved, to make sure 
 the region is healthy. However, if the moved region has lots of expired data, 
 region_mover#isSuccessfulScan can take a long time to finish and may even 
 exceed the lease timeout. So I made isSuccessfulScan a get-like scan to achieve 
 a faster response in that case. 
 Workaround: before graceful_stop/rolling_restart, call major_compact on the 
 table with a small TTL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12948) Increment#addColumn on the same column multi times produce wrong result

2015-02-03 Thread hongyu bi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hongyu bi updated HBASE-12948:
--
Attachment: (was: 12948-v3.patch)

 Increment#addColumn on the same column multi times produce wrong result 
 

 Key: HBASE-12948
 URL: https://issues.apache.org/jira/browse/HBASE-12948
 Project: HBase
  Issue Type: Bug
  Components: Client, regionserver
Reporter: hongyu bi
Assignee: hongyu bi
Priority: Critical
 Attachments: 12948-v2.patch, HBASE-12948-0.99.2-v1.patch, 
 HBASE-12948-v0.patch, HBASE-12948.patch


 Case:
 Initially get('row1'):
 rowkey=row1 value=1
 run:
 Increment increment = new Increment(Bytes.toBytes(row1));
 for (int i = 0; i < N; i++) {
   increment.addColumn(Bytes.toBytes(cf), Bytes.toBytes(c), 1);
 }
 hobi.increment(increment);
 get('row1'):
 If N=1 the result is 2, but if N>1 the result will always be 1.
 Cause:
 https://issues.apache.org/jira/browse/HBASE-7114 let Increment extend 
 Mutation, which changed familyMap from a NavigableMap to a List, so from the 
 client side we can buffer many edits on the same column.
 However, HRegion#increment uses idx to iterate the get's results; here 
 results.size < family.value().size if N>1, so the later edits on the same 
 column won't match the condition {idx < results.size() && 
 CellUtil.matchingQualifier(results.get(idx), kv)}. Meanwhile the edits share 
 the same mvccVersion, so this case happens.
 Fix:
 Following the put/delete#add behaviour on the same column, fix it from the 
 server side: apply "last edit wins" on the same column inside 
 HRegion#increment to preserve HBASE-7114's extension and keep the same 
 result as in 0.94.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12957) region_mover#isSuccessfulScan may be extremely slow on region with lots of expired data

2015-02-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304623#comment-14304623
 ] 

Ted Yu commented on HBASE-12957:


Ran test in hbase-shell module - pass.

Thanks for the patch, hongyu.

Thanks for the review, Stack

 region_mover#isSuccessfulScan may be extremely slow on region with lots of 
 expired data
 ---

 Key: HBASE-12957
 URL: https://issues.apache.org/jira/browse/HBASE-12957
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Reporter: hongyu bi
Assignee: hongyu bi
Priority: Minor
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12957-v0.patch


 region_mover calls isSuccessfulScan after a region has moved, to make sure 
 the region is healthy. However, if the moved region has lots of expired data, 
 region_mover#isSuccessfulScan can take a long time to finish and may even 
 exceed the lease timeout. So I made isSuccessfulScan a get-like scan to achieve 
 a faster response in that case. 
 Workaround: before graceful_stop/rolling_restart, call major_compact on the 
 table with a small TTL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12962) TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304756#comment-14304756
 ] 

Hudson commented on HBASE-12962:


SUCCESS: Integrated in HBase-1.1 #139 (See 
[https://builds.apache.org/job/HBase-1.1/139/])
HBASE-12962 - TestHFileBlockIndex.testBlockIndex() commented out during 
(ramkrishna: rev d33bc0c8c6e1b06a18837903488e3652b1c10217)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java


 TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531
 -

 Key: HBASE-12962
 URL: https://issues.apache.org/jira/browse/HBASE-12962
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 1.0.0, 2.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 1.0.0, 2.0.0, 1.0.1

 Attachments: HBASE-12962.patch


 Accidentally during HBASE-10531 the test case testBlockIndex() in 
 TestHFileBlockIndex was commented out.  Apologies for that. Not sure how that 
 happened. This patch uncomments the commented out test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12914) Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade section

2015-02-03 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12914:
---
Status: Patch Available  (was: Open)

 Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade 
 section
 ---

 Key: HBASE-12914
 URL: https://issues.apache.org/jira/browse/HBASE-12914
 Project: HBase
  Issue Type: Bug
  Components: API, documentation
Affects Versions: 0.98.9, 0.98.8, 0.98.7, 0.98.6
Reporter: Sean Busbey
Assignee: ramkrishna.s.vasudevan
Priority: Critical
 Fix For: 0.98.11

 Attachments: HBASE-12914-0.98.patch, HBASE-12914-branch-1.patch, 
 HBASE-12914.patch


 There are several features in 0.98 that require enabling HFilev3 support. 
 Some of those features include new extendable components that are marked 
 IA.Public.
 Current practice has been to treat these features as experimental. This has 
 included pushing non-compatible changes to branch-1 as the API got worked out 
 through use in 0.98.
 * Update all of the IA.Public classes involved to make sure they are 
 IS.Unstable in 0.98.
 * Update the ref guide section on upgrading from 0.98 - 1.0 to make folks 
 aware of these changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12959) Compact never end when table's dataBlockEncoding using PREFIX_TREE

2015-02-03 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304717#comment-14304717
 ] 

zhangduo commented on HBASE-12959:
--

[~bdifn] Have you tried 0.98.10RC2? You can download it here.
http://people.apache.org/~apurtell/0.98.10RC2/

HBASE-12817 has fixed a prefix-tree decoding issue and the fix will be released 
with 0.98.10.

Thanks~

  Compact never end when table's dataBlockEncoding using  PREFIX_TREE
 

 Key: HBASE-12959
 URL: https://issues.apache.org/jira/browse/HBASE-12959
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 0.98.7
 Environment: hbase 0.98.7
 hadoop 2.5.1
Reporter: wuchengzhi
Priority: Critical
 Attachments: PrefixTreeCompact.java, txtfile-part1.txt.gz, 
 txtfile-part2.txt.gz, txtfile-part4.txt.gz, txtfile-part5.txt.gz, 
 txtfile-part6.txt.gz, txtfile-part7.txt.gz


 I upgraded HBase from 0.96.1.1 to 0.98.7 and Hadoop from 2.2.0 to 2.5.1. Some 
 tables whose data block encoding uses prefix-tree are abnormal during 
 compaction: the GUI shows the table's compaction status as 
 MAJOR_AND_MINOR (MAJOR) all the time.
 In the regionserver dump there are some logs as below:
 Tasks:
 ===
 Task: Compacting info in 
 PREFIX_NOT_COMPACT,,1421954285670.41ef60e2c221772626e141d5080296c5.
 Status: RUNNING:Compacting store info
 Running for 1097s  (on the  site running more than 3 days)
 
 Thread 197 (regionserver60020-smallCompactions-1421954341530):
   State: RUNNABLE
   Blocked count: 7
   Waited count: 3
   Stack:
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.followFan(PrefixTreeArrayScanner.java:329)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.positionAtOrAfter(PrefixTreeArraySearcher.java:149)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.seekForwardToOrAfter(PrefixTreeArraySearcher.java:183)
 
 org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeSeeker.seekToOrBeforeUsingPositionAtOrAfter(PrefixTreeSeeker.java:199)
 
 org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeSeeker.seekToKeyInBlock(PrefixTreeSeeker.java:162)
 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.loadBlockAndSeekToKey(HFileReaderV2.java:1172)
 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:573)
 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:257)
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:697)
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 
 org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:222)
 
 org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:77)
 
 org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:110)
 org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1099)
 org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1482)
 Thread 177 (regionserver60020-smallCompactions-1421954314809):
   State: RUNNABLE
   Blocked count: 40
   Waited count: 60
   Stack:
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.column.ColumnReader.populateBuffer(ColumnReader.java:81)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.populateQualifier(PrefixTreeArrayScanner.java:471)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.populateNonRowFields(PrefixTreeArrayScanner.java:452)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.nextRow(PrefixTreeArrayScanner.java:226)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.advance(PrefixTreeArrayScanner.java:208)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.positionAtQualifierTimestamp(PrefixTreeArraySearcher.java:244)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.positionAtOrAfter(PrefixTreeArraySearcher.java:123)
 
 

[jira] [Commented] (HBASE-12965) Enhance the delete option to pass Filters

2015-02-03 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304708#comment-14304708
 ] 

Anoop Sam John commented on HBASE-12965:


BulkDeleteEndPoint was added for exactly this use case. With it there is no need 
to fetch rows back to the client, extract the row keys from them, and issue 
deletes again: the endpoint runs on the server side, avoiding too many RPCs and 
data flow between server and client. I agree that it might be a bit challenging 
for users to write code to invoke this EP (any EP). So what we can do, IMO, is 
add something like a client service class which takes the headache out of 
calling this EP, just like we have AggregationEP and AggregationClient.

What do you think?

 Enhance the delete option to pass Filters
 -

 Key: HBASE-12965
 URL: https://issues.apache.org/jira/browse/HBASE-12965
 Project: HBase
  Issue Type: Improvement
  Components: API, Deletes, Filters
Affects Versions: 0.98.10
Reporter: IMRANKHAN SHAJAHAN

 Scan has the option to pass Filters and filter the rows based on the filter.
 But for deleting rows, there is no easy way to pass filters to a Delete 
 object.
 Today we can do that using one of the three approaches below:
 1) Scan the records, build a list of Delete objects and issue the delete. 
 This needs iteration on the client side (a sketch of this approach follows 
 below).
 2) Writing a MapReduce job.
 3) BulkDeleteEndPoint.
 If we implement the option of passing filters to the Delete object, then we 
 don't need to worry about options 2 and 3.
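
As a reference for approach 1) above, a minimal client-side sketch using the standard 0.98 API; the table name and the prefix filter are placeholders:

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public final class ScanThenDeleteSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");                // placeholder table
    Scan scan = new Scan();
    scan.setFilter(new PrefixFilter(Bytes.toBytes("stale-"))); // placeholder filter

    // 1) scan the matching rows, 2) turn every row key into a Delete,
    // 3) ship the deletes back to the server: the row keys make a full
    // round trip through the client.
    List<Delete> deletes = new ArrayList<Delete>();
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result result : scanner) {
        deletes.add(new Delete(result.getRow()));
      }
    } finally {
      scanner.close();
    }
    table.delete(deletes);
    table.close();
  }
}
{code}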



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12965) Enhance the delete option to pass Filters

2015-02-03 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304725#comment-14304725
 ] 

Nick Dimiduk commented on HBASE-12965:
--

Agreed, BulkDelete seems like what you want. 
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/coprocessor/example/BulkDeleteEndpoint.html

Maybe it's worth promoting this to the Table API? How about Table#delete(Scan) 
instead of a coproc invocation?

 Enhance the delete option to pass Filters
 -

 Key: HBASE-12965
 URL: https://issues.apache.org/jira/browse/HBASE-12965
 Project: HBase
  Issue Type: Improvement
  Components: API, Deletes, Filters
Affects Versions: 0.98.10
Reporter: IMRANKHAN SHAJAHAN

 Scan has the option to pass Filters and filter the rows based on the filter.
 But for deleting rows, there is no easy way to pass filters to a Delete 
 object.
 Today we can do that using one of the three approaches below:
 1) Scan the records, build a list of Delete objects and issue the delete. 
 This needs iteration on the client side.
 2) Writing a MapReduce job.
 3) BulkDeleteEndPoint.
 If we implement the option of passing filters to the Delete object, then we 
 don't need to worry about options 2 and 3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12960) Cannot run the hbase shell command on Windows

2015-02-03 Thread Lukas Eder (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304738#comment-14304738
 ] 

Lukas Eder commented on HBASE-12960:


I'm sorry, I've already deleted HBase, as it was the wrong version for me anyway. 
I'll remember to create patches next time.

 Cannot run the hbase shell command on Windows
 ---

 Key: HBASE-12960
 URL: https://issues.apache.org/jira/browse/HBASE-12960
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.99.2
 Environment: Windows 8.1
Reporter: Lukas Eder
Priority: Minor

 I've just downloaded and unzipped hbase 0.99.2 and tried to run this command:
 {code}
 C:\hbase-0.99.2\bin>hbase shell
 Invalid maximum heap size: -Xmx1000m 
 Error: Could not create the Java Virtual Machine.
 Error: A fatal exception has occurred. Program will exit.
 {code}
 The command is documented here:
 http://hbase.apache.org/book.html#_get_started_with_hbase
 The problem is in hbase.cmd on line 296
 {code}
 set HEAP_SETTINGS="%JAVA_HEAP_MAX% %JAVA_OFFHEAP_MAX%"
 {code}
 The quotes should be stripped:
 {code}
 set HEAP_SETTINGS=%JAVA_HEAP_MAX% %JAVA_OFFHEAP_MAX%
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12962) TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304760#comment-14304760
 ] 

Hudson commented on HBASE-12962:


SUCCESS: Integrated in HBase-1.0 #707 (See 
[https://builds.apache.org/job/HBase-1.0/707/])
HBASE-12962 - TestHFileBlockIndex.testBlockIndex() commented out during 
(ramkrishna: rev 9539373ca62a12b1d86e641ee8bd00526d30eeae)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java


 TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531
 -

 Key: HBASE-12962
 URL: https://issues.apache.org/jira/browse/HBASE-12962
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 1.0.0, 2.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 1.0.0, 2.0.0, 1.0.1

 Attachments: HBASE-12962.patch


 Accidentally during HBASE-10531 the test case testBlockIndex() in 
 TestHFileBlockIndex was commented out.  Apologies for that. Not sure how that 
 happened. This patch uncomments the commented out test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12957) region_mover#isSuccessfulScan may be extremely slow on region with lots of expired data

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304735#comment-14304735
 ] 

Hudson commented on HBASE-12957:


FAILURE: Integrated in HBase-TRUNK #6085 (See 
[https://builds.apache.org/job/HBase-TRUNK/6085/])
HBASE-12957  Revert accidental checkin of unrelated test (tedyu: rev 
fd0bb89fdf67d996e8cc678d81c6acb799c2cc49)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerHostname.java


 region_mover#isSuccessfulScan may be extremely slow on region with lots of 
 expired data
 ---

 Key: HBASE-12957
 URL: https://issues.apache.org/jira/browse/HBASE-12957
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Reporter: hongyu bi
Assignee: hongyu bi
Priority: Minor
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12957-v0.patch


 region_mover calls isSuccessfulScan after a region has moved, to make sure 
 the region is healthy. However, if the moved region has lots of expired data, 
 region_mover#isSuccessfulScan can take a long time to finish and may even 
 exceed the lease timeout. So I made isSuccessfulScan a get-like scan to achieve 
 a faster response in that case. 
 Workaround: before graceful_stop/rolling_restart, call major_compact on the 
 table with a small TTL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12960) Cannot run the hbase shell command on Windows

2015-02-03 Thread Lukas Eder (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303444#comment-14303444
 ] 

Lukas Eder commented on HBASE-12960:


In fact, line 91 also needs to be adapted:

Wrong:

{code}
set JAVA_OFFHEAP_MAX=""
{code}

Correct:

{code}
set JAVA_OFFHEAP_MAX=
{code}

 Cannot run the hbase shell command on Windows
 ---

 Key: HBASE-12960
 URL: https://issues.apache.org/jira/browse/HBASE-12960
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.99.2
 Environment: Windows 8.1
Reporter: Lukas Eder
Priority: Minor

 I've just downloaded and unzipped hbase 0.99.2 and tried to run this command:
 {code}
 C:\hbase-0.99.2\bin>hbase shell
 Invalid maximum heap size: -Xmx1000m 
 Error: Could not create the Java Virtual Machine.
 Error: A fatal exception has occurred. Program will exit.
 {code}
 The command is documented here:
 http://hbase.apache.org/book.html#_get_started_with_hbase
 The problem is in hbase.cmd on line 296
 {code}
 set HEAP_SETTINGS="%JAVA_HEAP_MAX% %JAVA_OFFHEAP_MAX%"
 {code}
 The quotes should be stripped:
 {code}
 set HEAP_SETTINGS=%JAVA_HEAP_MAX% %JAVA_OFFHEAP_MAX%
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12108) HBaseConfiguration

2015-02-03 Thread Aniket Bhatnagar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303421#comment-14303421
 ] 

Aniket Bhatnagar commented on HBASE-12108:
--

I don't think adding a test case for this is plausible. The test would have to 
construct a class loader containing the HBase jar, with its parent class loader 
having the Hadoop jars. To construct such classpaths we would need to pull in jars, 
which seems like a bad idea for a test case.

 HBaseConfiguration
 --

 Key: HBASE-12108
 URL: https://issues.apache.org/jira/browse/HBASE-12108
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.6
Reporter: Aniket Bhatnagar
Priority: Minor
  Labels: class_loader, configuration, patch
 Attachments: HBaseConfiguration_HBASE_HBASE-12108.patch


 In a setup wherein the HBase jars are loaded in a child classloader whose parent 
 has loaded the hadoop-common jar, HBaseConfiguration.create() throws an 
 "hbase-default.xml file seems to be for and old version of HBase (null)..." 
 exception. The ClassLoader should be set in the Hadoop conf object before calling 
 the addHbaseResources method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12960) Cannot run the hbase shell command on Windows

2015-02-03 Thread Lukas Eder (JIRA)
Lukas Eder created HBASE-12960:
--

 Summary: Cannot run the hbase shell command on Windows
 Key: HBASE-12960
 URL: https://issues.apache.org/jira/browse/HBASE-12960
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.99.2
 Environment: Windows 8.1
Reporter: Lukas Eder
Priority: Minor


I've just downloaded and unzipped hbase 0.99.2 and tried to run this command:

{code}
C:\hbase-0.99.2\bin>hbase shell
Invalid maximum heap size: -Xmx1000m 
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
{code}

The command is documented here:
http://hbase.apache.org/book.html#_get_started_with_hbase

The problem is in hbase.cmd on line 296

{code}
set HEAP_SETTINGS="%JAVA_HEAP_MAX% %JAVA_OFFHEAP_MAX%"
{code}

The quotes should be stripped:

{code}
set HEAP_SETTINGS=%JAVA_HEAP_MAX% %JAVA_OFFHEAP_MAX%
{code}
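
To see why the stray quote matters: when the quote characters from the SET line reach the java launcher as part of the -Xmx value, the JVM rejects the option with exactly the error shown above. A small, hedged reproduction (illustrative only, not part of hbase.cmd):

{code}
import java.io.IOException;

// Passes a literal quote through to the java launcher, mimicking what the
// quoted SET HEAP_SETTINGS line does on Windows. Expected output:
//   Invalid maximum heap size: -Xmx1000m"
//   Error: Could not create the Java Virtual Machine.
public class QuotedHeapArgDemo {
  public static void main(String[] args) throws IOException, InterruptedException {
    Process p = new ProcessBuilder("java", "-Xmx1000m\"", "-version")
        .inheritIO()
        .start();
    System.out.println("exit code: " + p.waitFor());
  }
}
{code}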



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-8329) Limit compaction speed

2015-02-03 Thread zhangduo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangduo updated HBASE-8329:

Attachment: HBASE-8329-0.98.patch

Patch for 0.98.

 Limit compaction speed
 --

 Key: HBASE-8329
 URL: https://issues.apache.org/jira/browse/HBASE-8329
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Reporter: binlijin
Assignee: zhangduo
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-8329-0.98.patch, HBASE-8329-10.patch, 
 HBASE-8329-11.patch, HBASE-8329-12.patch, HBASE-8329-2-trunk.patch, 
 HBASE-8329-3-trunk.patch, HBASE-8329-4-trunk.patch, HBASE-8329-5-trunk.patch, 
 HBASE-8329-6-trunk.patch, HBASE-8329-7-trunk.patch, HBASE-8329-8-trunk.patch, 
 HBASE-8329-9-trunk.patch, HBASE-8329-branch-1.patch, HBASE-8329-trunk.patch, 
 HBASE-8329_13.patch, HBASE-8329_14.patch, HBASE-8329_15.patch, 
 HBASE-8329_16.patch, HBASE-8329_17.patch


 There is no speed or resource limit for compaction. I think we should add this 
 feature, especially when requests burst.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-8329) Limit compaction speed

2015-02-03 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14302918#comment-14302918
 ] 

zhangduo commented on HBASE-8329:
-

[~apurtell] Just post it here or open a backport issue?
The issue is resolved and the discussion here is too long I think.

 Limit compaction speed
 --

 Key: HBASE-8329
 URL: https://issues.apache.org/jira/browse/HBASE-8329
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Reporter: binlijin
Assignee: zhangduo
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-8329-10.patch, HBASE-8329-11.patch, 
 HBASE-8329-12.patch, HBASE-8329-2-trunk.patch, HBASE-8329-3-trunk.patch, 
 HBASE-8329-4-trunk.patch, HBASE-8329-5-trunk.patch, HBASE-8329-6-trunk.patch, 
 HBASE-8329-7-trunk.patch, HBASE-8329-8-trunk.patch, HBASE-8329-9-trunk.patch, 
 HBASE-8329-branch-1.patch, HBASE-8329-trunk.patch, HBASE-8329_13.patch, 
 HBASE-8329_14.patch, HBASE-8329_15.patch, HBASE-8329_16.patch, 
 HBASE-8329_17.patch


 There is no speed or resource limit for compaction. I think we should add this 
 feature, especially when requests burst.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12954) Ability impaired using HBase on multihomed hosts

2015-02-03 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14302936#comment-14302936
 ] 

Andrew Purtell commented on HBASE-12954:


I see your concerns about the mapping file notion, because it is a separate 
file from hbase-site.xml and therefore another opportunity to introduce 
configuration errors. It also doesn't seem to hit the mark for you. We don't 
need to consider it further. 

If we do introduce an option for assigning regionservers a canonical name in 
their site file and having them send it off to the master with a forcing flag 
or similar, then I think we would need to ensure it is not default behavior and 
document it as wizard-level configuration that is not normally recommended. 

 Ability impaired using HBase on multihomed hosts
 

 Key: HBASE-12954
 URL: https://issues.apache.org/jira/browse/HBASE-12954
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Clay B.
Assignee: Ted Yu
Priority: Minor
 Attachments: 12954-v1.txt, Hadoop Three Interfaces.png


 For HBase clusters running on unusual networks (such as NAT'd cloud 
 environments or physical machines with multiple IPs per network interface) 
 it would be ideal to have a way to both specify:
 # which IP interface to which HBase master or region-server will bind
 # what hostname HBase will advertise in Zookeeper both for a master or 
 region-server process
 While efforts such as HBASE-8640 go a long way toward normalizing these two 
 sources of information, the properties currently available to an administrator 
 do not allow these to be specified unambiguously.
 One has been able to request {{hbase.master.ipc.address}} or 
 {{hbase.regionserver.ipc.address}}, but one cannot specify the desired HBase 
 {{hbase.master.hostname}}. (It was removed in HBASE-1357; further, I am 
 unaware of a region-server equivalent.)
 I use a configuration management system to generate all of my configuration 
 files on a per-machine basis. As such, an option to generate a file 
 specifying exactly which hostname to use would be helpful.
 Today, specifying the bind address for HBase works, and one can use an 
 HBase-only DNS for faking what to put in Zookeeper, but this is far from 
 ideal. Network interfaces have no intrinsic IP address, nor hostname. 
 Specifying a DNS server is awkward, as the DNS server may differ from the 
 system's resolver and is a single IP address. Similarly, on hosts which use a 
 transient VIP (e.g. through keepalived) for other services, it means there is 
 a seemingly non-deterministic hostname choice made by HBase depending on the 
 state of the VIP at daemon start-up time.
 I will attach two networking examples I use which become very difficult to 
 manage under the current properties.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10942) support parallel request cancellation for multi-get

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14302940#comment-14302940
 ] 

Hudson commented on HBASE-10942:


FAILURE: Integrated in HBase-TRUNK #6080 (See 
[https://builds.apache.org/job/HBase-TRUNK/6080/])
HBASE-10942. support parallel request cancellation for multi-get (Nicolas 
Liochon & Devaraj Das) (ddas: rev cf5ad96fcc2ac02889e8a96a5d99cac071e1f25c)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiServerCallable.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcChannel.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicasClient.java


 support parallel request cancellation for multi-get
 ---

 Key: HBASE-10942
 URL: https://issues.apache.org/jira/browse/HBASE-10942
 Project: HBase
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Devaraj Das
 Fix For: 1.1.0

 Attachments: 10942-1.1.txt, 10942-branch-1.txt, 10942-for-98.zip, 
 10942.patch, HBASE-10942.01.patch, HBASE-10942.02.patch, HBASE-10942.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-8329) Limit compaction speed

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14302939#comment-14302939
 ] 

Hudson commented on HBASE-8329:
---

FAILURE: Integrated in HBase-TRUNK #6080 (See 
[https://builds.apache.org/job/HBase-TRUNK/6080/])
HBASE-8329 Limit compaction speed (stack: rev 
eb351b9ff8276228e725bcf58675ab75b640fbbf)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServices.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactionContext.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeCompactor.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverScannerOpenHook.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestStripeCompactionPolicy.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactionThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreFileManager.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreEngine.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeStoreEngine.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/StripeCompactor.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestCompactionWithThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/PressureAwareCompactionThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionTool.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreEngine.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreFileManager.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactionThroughputControllerFactory.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DefaultCompactor.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/NoLimitCompactionThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileManager.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/StripeCompactionPolicy.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestIOFencing.java
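
As background on the CompactionThroughputController classes listed above, here is a minimal sketch of the general shape of a write-rate limiter: compactions report how many bytes they wrote and sleep when they get ahead of the allowed rate. This is only an illustration, not the PressureAwareCompactionThroughputController added by the patch.

{code}
// Illustrative sketch only; assumes maxBytesPerSecond > 0.
public class SimpleThroughputLimiter {
  private final long maxBytesPerSecond;
  private long windowStartNanos = System.nanoTime();
  private long bytesInWindow;

  public SimpleThroughputLimiter(long maxBytesPerSecond) {
    this.maxBytesPerSecond = maxBytesPerSecond;
  }

  /** Called by a compaction after writing a chunk; blocks if we are over budget. */
  public synchronized void control(long bytesWritten) throws InterruptedException {
    bytesInWindow += bytesWritten;
    long elapsedNanos = System.nanoTime() - windowStartNanos;
    long allowedBytes = maxBytesPerSecond * elapsedNanos / 1_000_000_000L;
    if (bytesInWindow > allowedBytes) {
      long excessBytes = bytesInWindow - allowedBytes;
      Thread.sleep(excessBytes * 1000L / maxBytesPerSecond); // wait until back under the limit
    }
    if (elapsedNanos > 10_000_000_000L) {   // reset the accounting window every ~10s
      windowStartNanos = System.nanoTime();
      bytesInWindow = 0;
    }
  }
}
{code}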


 Limit compaction speed
 --

 Key: HBASE-8329
 URL: https://issues.apache.org/jira/browse/HBASE-8329
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Reporter: binlijin
Assignee: zhangduo
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-8329-0.98.patch, HBASE-8329-10.patch, 
 HBASE-8329-11.patch, HBASE-8329-12.patch, HBASE-8329-2-trunk.patch, 
 HBASE-8329-3-trunk.patch, HBASE-8329-4-trunk.patch, HBASE-8329-5-trunk.patch, 
 HBASE-8329-6-trunk.patch, HBASE-8329-7-trunk.patch, HBASE-8329-8-trunk.patch, 
 HBASE-8329-9-trunk.patch, HBASE-8329-branch-1.patch, HBASE-8329-trunk.patch, 
 HBASE-8329_13.patch, HBASE-8329_14.patch, HBASE-8329_15.patch, 
 HBASE-8329_16.patch, HBASE-8329_17.patch


 There is no speed or resource limit for compaction. I think we should add this 
 feature, especially when requests burst.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10942) support parallel request cancellation for multi-get

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14302975#comment-14302975
 ] 

Hudson commented on HBASE-10942:


FAILURE: Integrated in HBase-1.1 #134 (See 
[https://builds.apache.org/job/HBase-1.1/134/])
HBASE-10942. support parallel request cancellation for multi-get (Nicolas 
Liochon & Devaraj Das) (ddas: rev 44596148c7b433f9db5288a0e776365d9bab1fad)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicasClient.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcChannel.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiServerCallable.java


 support parallel request cancellation for multi-get
 ---

 Key: HBASE-10942
 URL: https://issues.apache.org/jira/browse/HBASE-10942
 Project: HBase
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Devaraj Das
 Fix For: 1.1.0

 Attachments: 10942-1.1.txt, 10942-branch-1.txt, 10942-for-98.zip, 
 10942.patch, HBASE-10942.01.patch, HBASE-10942.02.patch, HBASE-10942.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12108) HBaseConfiguration

2015-02-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14302957#comment-14302957
 ] 

Hadoop QA commented on HBASE-12108:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12696107/HBaseConfiguration_HBASE_HBASE-12108.patch
  against master branch at commit eb351b9ff8276228e725bcf58675ab75b640fbbf.
  ATTACHMENT ID: 12696107

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12677//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12677//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12677//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12677//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12677//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12677//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12677//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12677//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12677//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12677//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12677//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12677//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12677//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12677//console

This message is automatically generated.

 HBaseConfiguration
 --

 Key: HBASE-12108
 URL: https://issues.apache.org/jira/browse/HBASE-12108
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.6
Reporter: Aniket Bhatnagar
Priority: Minor
  Labels: class_loader, configuration, patch
 Attachments: HBaseConfiguration_HBASE_HBASE-12108.patch


 In a setup wherein the HBase jars are loaded in a child classloader whose parent 
 has loaded the hadoop-common jar, HBaseConfiguration.create() throws an 
 "hbase-default.xml file seems to be for and old version of HBase (null)..." 
 exception. The ClassLoader should be set in the Hadoop conf object before calling 
 the addHbaseResources method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12108) HBaseConfiguration: set classloader before loading xml files

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14304116#comment-14304116
 ] 

Hudson commented on HBASE-12108:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #789 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/789/])
HBASE-12108 | Setting classloader so that HBase resources resolve even when 
HBaseConfiguration is loaded from a different class loader (stack: rev 
b39e158c3ffe237b415a68682e79c8262bcc48e8)
* hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java
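
For context, a minimal sketch of the kind of fix this commit message and the issue description point at (setting the classloader on the Configuration before the HBase resources are added); the actual patch may differ in detail:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch only: make hbase-default.xml / hbase-site.xml resolve against the
// classloader that actually holds the HBase jars, not a parent classloader
// that only has hadoop-common.
public final class ClassLoaderAwareConfig {
  private ClassLoaderAwareConfig() {}

  public static Configuration create() {
    Configuration conf = new Configuration();
    conf.setClassLoader(HBaseConfiguration.class.getClassLoader());
    HBaseConfiguration.addHbaseResources(conf);
    return conf;
  }
}
{code}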


 HBaseConfiguration: set classloader before loading xml files
 

 Key: HBASE-12108
 URL: https://issues.apache.org/jira/browse/HBASE-12108
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.6
Reporter: Aniket Bhatnagar
Priority: Minor
  Labels: class_loader, configuration, patch
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBaseConfiguration_HBASE_HBASE-12108.patch


 In a setup wherein the HBase jars are loaded in a child classloader whose parent 
 has loaded the hadoop-common jar, HBaseConfiguration.create() throws an 
 "hbase-default.xml file seems to be for and old version of HBase (null)..." 
 exception. The ClassLoader should be set in the Hadoop conf object before calling 
 the addHbaseResources method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12961) Negative values in read and write region server metrics

2015-02-03 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12961:
--
Attachment: HBASE-12961-v1.patch

 Negative values in read and write region server metrics 
 

 Key: HBASE-12961
 URL: https://issues.apache.org/jira/browse/HBASE-12961
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Victoria
Assignee: Victoria
Priority: Minor
 Attachments: HBASE-12961-2.0.0-v1.patch, HBASE-12961-v1.patch


 The HMaster web UI shows the read/write requests per region server. They are 
 currently displayed using 32-bit integers; hence, if the servers are up 
 for a long time, the values can be shown as negative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12108) HBaseConfiguration: set classloader before loading xml files

2015-02-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12108:
--
   Resolution: Fixed
Fix Version/s: 0.98.11
   1.1.0
   1.0.1
   2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Pushed to 0.98+. Thanks for the patch, [~aniket].

 HBaseConfiguration: set classloader before loading xml files
 

 Key: HBASE-12108
 URL: https://issues.apache.org/jira/browse/HBASE-12108
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.6
Reporter: Aniket Bhatnagar
Priority: Minor
  Labels: class_loader, configuration, patch
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBaseConfiguration_HBASE_HBASE-12108.patch


 In a setup wherein the HBase jars are loaded in a child classloader whose parent 
 has loaded the hadoop-common jar, HBaseConfiguration.create() throws an 
 "hbase-default.xml file seems to be for and old version of HBase (null)..." 
 exception. The ClassLoader should be set in the Hadoop conf object before calling 
 the addHbaseResources method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12035) Client does an RPC to master everytime a region is relocated

2015-02-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303543#comment-14303543
 ] 

stack commented on HBASE-12035:
---

I did this to find the hanging tests:

kalashnikov-20:hbase.git.commit stack$ python ./dev-support/findHangingTests.py 
https://builds.apache.org/job/PreCommit-HBASE-Build/12672/consoleFull
Fetching the console output from the URL
Printing hanging tests
Hanging test : org.apache.hadoop.hbase.TestAcidGuarantees
Printing Failing tests
Failing test : org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster
Failing test : org.apache.hadoop.hbase.client.TestMetaWithReplicas


They passing for you [~octo47]?

 Client does an RPC to master everytime a region is relocated
 

 Key: HBASE-12035
 URL: https://issues.apache.org/jira/browse/HBASE-12035
 Project: HBase
  Issue Type: Improvement
  Components: Client, master
Affects Versions: 2.0.0
Reporter: Enis Soztutar
Assignee: Andrey Stepachev
Priority: Critical
 Fix For: 2.0.0

 Attachments: HBASE-12035 (1).patch, HBASE-12035.patch, 
 HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, 
 HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, 
 HBASE-12035.patch


 HBASE-7767 moved table enabled|disabled state to be kept in hdfs instead of 
 zookeeper. isTableDisabled(), which is used in 
 HConnectionImplementation.relocateRegion(), has now become a master RPC call 
 rather than a zookeeper client call. Since we do relocateRegion() calls 
 every time we want to relocate a region (region moved, RS down, etc.), this 
 implies that when the master is down, some of the clients for uncached 
 regions will be affected. 
 See HBASE-7767 and HBASE-11974 for some more background. 
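
Purely as a generic illustration of why the per-relocation master RPC hurts, and not as a description of the attached patches: caching the enabled/disabled state on the client side, with some freshness window, removes the master round trip from every relocateRegion() call. All names and the TTL below are hypothetical.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Generic illustration only -- NOT the approach taken by the HBASE-12035 patches.
public class TableStateCache {
  private static final long TTL_MS = 60_000L; // hypothetical freshness window

  private static final class Entry {
    final boolean disabled;
    final long fetchedAt;
    Entry(boolean disabled, long fetchedAt) { this.disabled = disabled; this.fetchedAt = fetchedAt; }
  }

  /** Stand-in for the real master RPC. */
  public interface MasterStateFetcher {
    boolean isTableDisabled(String tableName) throws Exception;
  }

  private final ConcurrentMap<String, Entry> cache = new ConcurrentHashMap<>();
  private final MasterStateFetcher fetcher;

  public TableStateCache(MasterStateFetcher fetcher) { this.fetcher = fetcher; }

  public boolean isTableDisabled(String tableName) throws Exception {
    long now = System.currentTimeMillis();
    Entry e = cache.get(tableName);
    if (e != null && now - e.fetchedAt < TTL_MS) {
      return e.disabled;                                      // served from cache, no master RPC
    }
    boolean disabled = fetcher.isTableDisabled(tableName);    // at most one RPC per TTL window
    cache.put(tableName, new Entry(disabled, now));
    return disabled;
  }
}
{code}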



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12962) TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531

2015-02-03 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12962:
---
Status: Patch Available  (was: Open)

 TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531
 -

 Key: HBASE-12962
 URL: https://issues.apache.org/jira/browse/HBASE-12962
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12962.patch


 Accidentally during HBASE-10531 the test case testBlockIndex() in 
 TestHFileBlockIndex was commented out.  Apologies for that. Not sure how that 
 happened. This patch uncomments the commented out test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-12953) RegionServer is not functionally working with AysncRpcClient in secure mode

2015-02-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reassigned HBASE-12953:
-

Assignee: stack

 RegionServer is not functionally working with AysncRpcClient in secure mode
 ---

 Key: HBASE-12953
 URL: https://issues.apache.org/jira/browse/HBASE-12953
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0, 1.1.0
Reporter: Ashish Singhi
Assignee: stack
Priority: Critical

 HBase version 2.0.0
 Default value for {{hbase.rpc.client.impl}} is set to AsyncRpcClient.
 When trying to install HBase with Kerberos, the RegionServer does not work 
 correctly.
 The following is logged in its log file:
 {noformat}
 2015-02-02 14:59:05,407 WARN  [AsyncRpcChannel-pool1-t1] 
 channel.DefaultChannelPipeline: An exceptionCaught() event was fired, and it 
 reached at the tail of the pipeline. It usually means the last handler in the 
 pipeline did not handle the exception.
 io.netty.channel.ChannelPipelineException: 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded() has thrown 
 an exception; removed.
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:499)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded(DefaultChannelPipeline.java:481)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst0(DefaultChannelPipeline.java:114)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:97)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:235)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:214)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:194)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:157)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
   at 
 io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
   at 
 io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:253)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:288)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by 
 GSSException: No valid credentials provided (Mechanism level: Failed to find 
 any Kerberos tgt)]
   at 
 com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
   at 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded(SaslClientHandler.java:154)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:486)
   ... 20 more
 Caused by: GSSException: No valid credentials provided (Mechanism level: 
 Failed to find any Kerberos tgt)
   at 
 sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
   at 
 sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
   at 
 sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
   at 
 sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
   at 
 sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
   at 
 sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
   at 
 com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
 {noformat}
 When hbase.rpc.client.impl is set to RpcClientImpl, there seems to be no issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12035) Client does an RPC to master everytime a region is relocated

2015-02-03 Thread Andrey Stepachev (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303556#comment-14303556
 ] 

Andrey Stepachev commented on HBASE-12035:
--

TestMetaWithReplicas and TestAcidGuarantees pass on my hosts (tried a Mac 
and a Linux VM).
But something is broken in TestAssignmentManagerOnCluster.

mvn clean test 
-Dtest=TestAcidGuarantees,TestAssignmentManagerOnCluster,TestMetaWithReplicas
{code}
---
 T E S T S
---
Running org.apache.hadoop.hbase.client.TestMetaWithReplicas
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.535 sec - 
in org.apache.hadoop.hbase.client.TestMetaWithReplicas
Running org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster
Tests run: 20, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 93.322 sec <<< 
FAILURE! - in org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster
testSSHWhenDisablingTableRegionsInOpeningOrPendingOpenState(org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster)
  Time elapsed: 60.051 sec <<< ERROR!
java.lang.Exception: test timed out after 60000 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.deleteTable(HBaseAdmin.java:732)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1790)
at 
org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster.testSSHWhenDisablingTableRegionsInOpeningOrPendingOpenState(TestAssignmentManagerOnCluster.java:647)

Running org.apache.hadoop.hbase.TestAcidGuarantees
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.618 sec - in 
org.apache.hadoop.hbase.TestAcidGuarantees

Results :
Tests in error:
  
TestAssignmentManagerOnCluster.testSSHWhenDisablingTableRegionsInOpeningOrPendingOpenState:647
 »
{code}

 Client does an RPC to master everytime a region is relocated
 

 Key: HBASE-12035
 URL: https://issues.apache.org/jira/browse/HBASE-12035
 Project: HBase
  Issue Type: Improvement
  Components: Client, master
Affects Versions: 2.0.0
Reporter: Enis Soztutar
Assignee: Andrey Stepachev
Priority: Critical
 Fix For: 2.0.0

 Attachments: HBASE-12035 (1).patch, HBASE-12035.patch, 
 HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, 
 HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, 
 HBASE-12035.patch


 HBASE-7767 moved table enabled|disabled state to be kept in hdfs instead of 
 zookeeper. isTableDisabled(), which is used in 
 HConnectionImplementation.relocateRegion(), has now become a master RPC call 
 rather than a zookeeper client call. Since we do relocateRegion() calls 
 every time we want to relocate a region (region moved, RS down, etc.), this 
 implies that when the master is down, some of the clients for uncached 
 regions will be affected. 
 See HBASE-7767 and HBASE-11974 for some more background. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12962) TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531

2015-02-03 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-12962:
--

 Summary: TestHFileBlockIndex.testBlockIndex() commented out during 
HBASE-10531
 Key: HBASE-12962
 URL: https://issues.apache.org/jira/browse/HBASE-12962
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan


Accidentally during HBASE-10531 the test case testBlockIndex() in 
TestHFileBlockIndex was commented out.  Apologies for that. Not sure how that 
happened. This patch uncomments the commented out test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12961) Negative values in read and write region server metrics

2015-02-03 Thread Victoria (JIRA)
Victoria created HBASE-12961:


 Summary: Negative values in read and write region server metrics 
 Key: HBASE-12961
 URL: https://issues.apache.org/jira/browse/HBASE-12961
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Victoria
Priority: Minor


The HMaster web UI shows the read/write requests per region server. They are 
currently displayed using 32-bit integers; hence, if the servers are up for 
a long time, the values can be shown as negative.
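
A small self-contained illustration (hypothetical numbers) of why 32-bit counters wrap to negative values once the cumulative request count exceeds Integer.MAX_VALUE, and why widening the display to long avoids it:

{code}
public class CounterOverflowDemo {
  public static void main(String[] args) {
    long requests = 3_000_000_000L;     // ~3 billion requests over a long uptime
    int asInt = (int) requests;         // what a 32-bit counter would report
    long asLong = requests;             // a 64-bit counter keeps the real value
    System.out.println("int  counter: " + asInt);   // prints a negative number
    System.out.println("long counter: " + asLong);  // prints 3000000000
  }
}
{code}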



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12963) Add note about jdk8 compilation to the guide

2015-02-03 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-12963:
---

 Summary: Add note about jdk8 compilation to the guide
 Key: HBASE-12963
 URL: https://issues.apache.org/jira/browse/HBASE-12963
 Project: HBase
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor
 Fix For: 2.0.0


HBASE-12695 fixed building 2.0.0-SNAP with JDK8, but right now it's only 
documented in a release note. We should add a note to the building hbase 
section of the ref guide.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12108) HBaseConfiguration: set classloader before loading xml files

2015-02-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12108:
--
Summary: HBaseConfiguration: set classloader before loading xml files  
(was: HBaseConfiguration)

 HBaseConfiguration: set classloader before loading xml files
 

 Key: HBASE-12108
 URL: https://issues.apache.org/jira/browse/HBASE-12108
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.6
Reporter: Aniket Bhatnagar
Priority: Minor
  Labels: class_loader, configuration, patch
 Attachments: HBaseConfiguration_HBASE_HBASE-12108.patch


 In a setup wherein the HBase jars are loaded in a child classloader whose parent 
 has loaded the hadoop-common jar, HBaseConfiguration.create() throws an 
 "hbase-default.xml file seems to be for and old version of HBase (null)..." 
 exception. The ClassLoader should be set in the Hadoop conf object before calling 
 the addHbaseResources method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-7332) [webui] HMaster webui should display the number of regions a table has.

2015-02-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-7332:
-
Attachment: HBASE-7332.patch

 [webui] HMaster webui should display the number of regions a table has.
 ---

 Key: HBASE-7332
 URL: https://issues.apache.org/jira/browse/HBASE-7332
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 2.0.0, 1.1.0
Reporter: Jonathan Hsieh
Assignee: Andrey Stepachev
Priority: Minor
  Labels: beginner
 Attachments: HBASE-7332.patch, HBASE-7332.patch, Screen Shot 
 2014-07-28 at 4.10.01 PM.png


 Pre-0.96/trunk hbase displayed the number of regions per table in the table 
 listing.  Would be good to have this back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12962) TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531

2015-02-03 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12962:
---
Attachment: HBASE-12962.patch

 TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531
 -

 Key: HBASE-12962
 URL: https://issues.apache.org/jira/browse/HBASE-12962
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12962.patch


 Accidentally during HBASE-10531 the test case testBlockIndex() in 
 TestHFileBlockIndex was commented out.  Apologies for that. Not sure how that 
 happened. This patch uncomments the commented out test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12953) RegionServer is not functionally working with AysncRpcClient in secure mode

2015-02-03 Thread Jurriaan Mous (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303226#comment-14303226
 ] 

Jurriaan Mous commented on HBASE-12953:
---

What I can see is that the SaslClientHandler removes itself because the 
credentials don't come through. 

Digest authentication works because unit tests cover that functionality, but I 
don't have the know-how and time to set up a correct Kerberos environment. Maybe it 
is much simpler than I think, but currently I can't properly debug it to find 
the problem.

All the steps should be there in the basics, since I translated it from the sync 
client to a Netty setup. But somehow the Kerberos credentials are not passed on: 
"GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)". I think it should be trivial to find the issue with a debug 
point. A debug point in AsyncRpcChannel around line 174 could be a nice start, 
or check out the setupAuthorization method just before that point. Also check 
what happens in SaslClientHandler after it is added, if the setup is otherwise 
correct, to see which vars are still unset.

If it is not clear what goes wrong, it should become clear by comparing a debug 
walkthrough with the connect handling in the sync RpcClientImpl, which 
contains almost the same code (see the Connection class within RpcClientImpl, its 
constructor, and the setupIOstreams method, which match the before-mentioned line 174 
and setupAuthorization). This way I was also able to make the digest 
authentication work.

Any questions are welcome! 
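
As general background on the "Failed to find any Kerberos tgt" error, and not as a description of the eventual fix: GSSAPI can only succeed when the code that opens the SASL connection runs inside a logged-in Kerberos context. A hedged sketch using Hadoop's UserGroupInformation (the principal and keytab path are placeholders):

{code}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

// Illustrative only: the GSSException in the stack trace means there was no
// Kerberos TGT in the context that opened the SASL connection. Placeholder
// principal/keytab values; not taken from any HBASE-12953 patch.
public final class KerberosLoginExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);

    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        "hbase/host@EXAMPLE.COM", "/etc/security/keytabs/hbase.keytab");

    // Any RPC that needs GSSAPI credentials has to run inside doAs().
    ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
      // ... open the HBase connection / RPC channel here ...
      return null;
    });
  }
}
{code}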

 RegionServer is not functionally working with AysncRpcClient in secure mode
 ---

 Key: HBASE-12953
 URL: https://issues.apache.org/jira/browse/HBASE-12953
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0, 1.1.0
Reporter: Ashish Singhi
Priority: Critical

 HBase version 2.0.0
 Default value for {{hbase.rpc.client.impl}} is set to AsyncRpcClient.
 When trying to install HBase with Kerberos, the RegionServer does not work 
 correctly.
 The following is logged in its log file:
 {noformat}
 2015-02-02 14:59:05,407 WARN  [AsyncRpcChannel-pool1-t1] 
 channel.DefaultChannelPipeline: An exceptionCaught() event was fired, and it 
 reached at the tail of the pipeline. It usually means the last handler in the 
 pipeline did not handle the exception.
 io.netty.channel.ChannelPipelineException: 
 org.apache.hadoop.hbase.security.SaslClientHandler.handlerAdded() has thrown 
 an exception; removed.
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:499)
   at 
 io.netty.channel.DefaultChannelPipeline.callHandlerAdded(DefaultChannelPipeline.java:481)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst0(DefaultChannelPipeline.java:114)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:97)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:235)
   at 
 io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:214)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:194)
   at 
 org.apache.hadoop.hbase.ipc.AsyncRpcChannel$2.operationComplete(AsyncRpcChannel.java:157)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
   at 
 io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
   at 
 io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
   at 
 io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:253)
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:288)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by 
 GSSException: No valid credentials provided (Mechanism level: Failed to find 
 any Kerberos tgt)]
   at 
 com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
   at 
 

[jira] [Updated] (HBASE-7332) [webui] HMaster webui should display the number of regions a table has.

2015-02-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-7332:
-
Labels: beginner operability  (was: beginner)

 [webui] HMaster webui should display the number of regions a table has.
 ---

 Key: HBASE-7332
 URL: https://issues.apache.org/jira/browse/HBASE-7332
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 2.0.0, 1.1.0
Reporter: Jonathan Hsieh
Assignee: Andrey Stepachev
Priority: Minor
  Labels: beginner, operability
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-7332.patch, HBASE-7332.patch, Screen Shot 
 2014-07-28 at 4.10.01 PM.png


 Pre-0.96/trunk hbase displayed the number of regions per table in the table 
 listing.  Would be good to have this back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12035) Client does an RPC to master everytime a region is relocated

2015-02-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303653#comment-14303653
 ] 

stack commented on HBASE-12035:
---

Ok [~octo47], let me take a look at that one.

 Client does an RPC to master everytime a region is relocated
 

 Key: HBASE-12035
 URL: https://issues.apache.org/jira/browse/HBASE-12035
 Project: HBase
  Issue Type: Improvement
  Components: Client, master
Affects Versions: 2.0.0
Reporter: Enis Soztutar
Assignee: Andrey Stepachev
Priority: Critical
 Fix For: 2.0.0

 Attachments: HBASE-12035 (1).patch, HBASE-12035.patch, 
 HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, 
 HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, 
 HBASE-12035.patch


 HBASE-7767 moved table enabled|disabled state to be kept in hdfs instead of 
 zookeeper. isTableDisabled(), which is used in 
 HConnectionImplementation.relocateRegion(), has now become a master RPC call 
 rather than a zookeeper client call. Since we do relocateRegion() calls 
 every time we want to relocate a region (region moved, RS down, etc.), this 
 implies that when the master is down, some of the clients for uncached 
 regions will be affected. 
 See HBASE-7767 and HBASE-11974 for some more background. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-7332) [webui] HMaster webui should display the number of regions a table has.

2015-02-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-7332:
-
Attachment: Screen Shot 2015-02-03 at 9.23.57 AM.png

Here is what it adds to our table display. I messed with it by enabling/disabling 
tables, and the numbers updated in tune. We need more stuff like this (smile).

 [webui] HMaster webui should display the number of regions a table has.
 ---

 Key: HBASE-7332
 URL: https://issues.apache.org/jira/browse/HBASE-7332
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 2.0.0, 1.1.0
Reporter: Jonathan Hsieh
Assignee: Andrey Stepachev
Priority: Minor
  Labels: beginner, operability
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-7332.patch, HBASE-7332.patch, Screen Shot 
 2014-07-28 at 4.10.01 PM.png, Screen Shot 2015-02-03 at 9.23.57 AM.png


 Pre-0.96/trunk hbase displayed the number of regions per table in the table 
 listing.  Would be good to have this back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12108) HBaseConfiguration: set classloader before loading xml files

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303680#comment-14303680
 ] 

Hudson commented on HBASE-12108:


SUCCESS: Integrated in HBase-1.0 #706 (See 
[https://builds.apache.org/job/HBase-1.0/706/])
HBASE-12108 | Setting classloader so that HBase resources resolve even when 
HBaseConfiguration is loaded from a different class loader (stack: rev 
3ba7339d43873db97e4211635aa75678a5a17e71)
* hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java


 HBaseConfiguration: set classloader before loading xml files
 

 Key: HBASE-12108
 URL: https://issues.apache.org/jira/browse/HBASE-12108
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.6
Reporter: Aniket Bhatnagar
Priority: Minor
  Labels: class_loader, configuration, patch
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBaseConfiguration_HBASE_HBASE-12108.patch


 In a setup wherein the HBase jars are loaded in a child classloader whose parent 
 has loaded the hadoop-common jar, HBaseConfiguration.create() throws an 
 "hbase-default.xml file seems to be for and old version of HBase (null)..." 
 exception. The ClassLoader should be set in the Hadoop conf object before calling 
 the addHbaseResources method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12962) TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531

2015-02-03 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303623#comment-14303623
 ] 

Nick Dimiduk commented on HBASE-12962:
--

So it did. +1

 TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531
 -

 Key: HBASE-12962
 URL: https://issues.apache.org/jira/browse/HBASE-12962
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12962.patch


 Accidentally during HBASE-10531 the test case testBlockIndex() in 
 TestHFileBlockIndex was commented out.  Apologies for that. Not sure how that 
 happened. This patch uncomments the commented out test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12964) Add the ability for hbase-daemon.sh to start in the foreground

2015-02-03 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-12964:
-

 Summary: Add the ability for hbase-daemon.sh to start in the 
foreground
 Key: HBASE-12964
 URL: https://issues.apache.org/jira/browse/HBASE-12964
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.10, 1.0.0, 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0, 0.98.11


The znode cleaner is awesome and gives great benefits.
As more and more deployments start using containers, some of them will want to 
run things in the foreground; hbase-daemon.sh should allow that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12035) Client does an RPC to master everytime a region is relocated

2015-02-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12035:
--
Attachment: HBASE-12035 (1) (1).patch

 Client does an RPC to master everytime a region is relocated
 

 Key: HBASE-12035
 URL: https://issues.apache.org/jira/browse/HBASE-12035
 Project: HBase
  Issue Type: Improvement
  Components: Client, master
Affects Versions: 2.0.0
Reporter: Enis Soztutar
Assignee: Andrey Stepachev
Priority: Critical
 Fix For: 2.0.0

 Attachments: HBASE-12035 (1) (1).patch, HBASE-12035 (1).patch, 
 HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, 
 HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, 
 HBASE-12035.patch, HBASE-12035.patch


 HBASE-7767 moved table enabled|disabled state to be kept in hdfs instead of 
 zookeeper. isTableDisabled(), which is used in 
 HConnectionImplementation.relocateRegion(), has now become a master RPC call 
 rather than a zookeeper client call. Since we do relocateRegion() calls 
 every time we want to relocate a region (region moved, RS down, etc.), this 
 implies that when the master is down, some of the clients for uncached 
 regions will be affected. 
 See HBASE-7767 and HBASE-11974 for some more background. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12961) Negative values in read and write region server metrics

2015-02-03 Thread Victoria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria updated HBASE-12961:
-
Attachment: HBASE-12961-2.0.0-v1.patch

Proposed fix for review.

 Negative values in read and write region server metrics 
 

 Key: HBASE-12961
 URL: https://issues.apache.org/jira/browse/HBASE-12961
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Victoria
Priority: Minor
 Attachments: HBASE-12961-2.0.0-v1.patch


 The HMaster web UI shows the read/write requests per region server. They are 
 currently displayed using 32-bit integers; hence, if the servers are up 
 for a long time, the values can be shown as negative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12962) TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531

2015-02-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303616#comment-14303616
 ] 

stack commented on HBASE-12962:
---

+1 if it passes (smile)

 TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531
 -

 Key: HBASE-12962
 URL: https://issues.apache.org/jira/browse/HBASE-12962
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12962.patch


 Accidentally during HBASE-10531 the test case testBlockIndex() in 
 TestHFileBlockIndex was commented out.  Apologies for that. Not sure how that 
 happened. This patch uncomments the commented out test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12108) HBaseConfiguration: set classloader before loading xml files

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303655#comment-14303655
 ] 

Hudson commented on HBASE-12108:


FAILURE: Integrated in HBase-1.1 #135 (See 
[https://builds.apache.org/job/HBase-1.1/135/])
HBASE-12108 | Setting classloader so that HBase resources resolve even when 
HBaseConfiguration is loaded from a different class loader (stack: rev 
0fa6eedcdb3e446567c7581584c060852cedcbad)
* hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java


 HBaseConfiguration: set classloader before loading xml files
 

 Key: HBASE-12108
 URL: https://issues.apache.org/jira/browse/HBASE-12108
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.6
Reporter: Aniket Bhatnagar
Priority: Minor
  Labels: class_loader, configuration, patch
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBaseConfiguration_HBASE_HBASE-12108.patch


 IN the setup wherein HBase jars are loaded in child classloader whose parent 
 had loaded hadoop-common jar, HBaseConfiguration.create() throws 
 hbase-default.xml file seems to be for and old version of HBase (null)...  
 exception. ClassLoader should be set in Hadoop conf object before calling 
 addHbaseResources method



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10900) FULL table backup and restore

2015-02-03 Thread Demai Ni (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303736#comment-14303736
 ] 

Demai Ni commented on HBASE-10900:
--

[~apurtell], sounds like the right way to go.

[~jerryhe], any objections? If not, I will go ahead and resolve the jiras under 
my name as won't fix. 

 FULL table backup and restore
 -

 Key: HBASE-10900
 URL: https://issues.apache.org/jira/browse/HBASE-10900
 Project: HBase
  Issue Type: Task
Reporter: Demai Ni
Assignee: Demai Ni
 Fix For: 1.1.0

 Attachments: HBASE-10900-fullbackup-trunk-v1.patch, 
 HBASE-10900-trunk-v2.patch, HBASE-10900-trunk-v3.patch, 
 HBASE-10900-trunk-v4.patch


 h2. Feature Description
 This is a subtask of 
 [HBase-7912|https://issues.apache.org/jira/browse/HBASE-7912] to support FULL 
 backup/restore, and will complete the following function:
 {code:title=Backup Restore example|borderStyle=solid}
 /* backup from sourcecluster to targetcluster */
 /* if no table name is specified, all tables from the source cluster will be
 backed up */
 [sourcecluster]$ hbase backup create full 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir t1_dn,t2_dn,t3_dn
 /* restore on targetcluster; this is a local restore */
 /* backup_1396650096738 - backup image name */
 /* t1_dn, etc. are the original table names. All tables will be restored if not
 specified */
 /* t1_dn_restore, etc. are the restored tables. If not specified, the original
 table names will be used */
 [targetcluster]$ hbase restore /userid/backupdir backup_1396650096738 
 t1_dn,t2_dn,t3_dn t1_dn_restore,t2_dn_restore,t3_dn_restore
 /* restore from targetcluster back to the source cluster; this is a remote restore */
 [sourcecluster]$ hbase restore 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir backup_1396650096738 
 t1_dn,t2_dn,t3_dn t1_dn_restore,t2_dn_restore,t3_dn_restore
 {code}
 h2. Detail layout and frame work for the next jiras
 The patch is a wrapper of the existing snapshot and exportSnapshot, and will 
 be used as the base framework for the overall solution of 
 [HBase-7912|https://issues.apache.org/jira/browse/HBASE-7912] as described 
 below:
 * *bin/hbase*  : end-user command line interface to invoke 
 BackupClient and RestoreClient
 * *BackupClient.java*  : 'main' entry for backup operations. This patch will 
 only support 'full' backup. In future jiras, will support:
 ** *create* incremental backup
 ** *cancel* an ongoing backup
 ** *delete* an existing backup image
 ** *describe* the detailed information of a backup image
 ** show *history* of all successful backups 
 ** show the *status* of the latest backup request
 ** *convert* incremental backup WAL files into HFiles, either on-the-fly 
 during create or after create
 ** *merge* backup image
 ** *stop* backing up a table of an existing backup image
 ** *show* tables of a backup image 
 * *BackupCommands.java* : a place to keep all the command usages and options
 * *BackupManager.java*  : handle backup requests on server-side, create 
 BACKUP ZOOKEEPER nodes to keep track of backups. The timestamps kept in zookeeper 
 will be used for future incremental backup (not included in this jira). 
 Create BackupContext and DispatchRequest. 
 * *BackupHandler.java*  : in this patch, it is a wrapper of snapshot and 
 exportsnapshot. In future jiras, 
 ** *timestamps* info will be recorded in ZK
 ** carry on *incremental* backup.  
 ** update backup *progress*
 ** set flags of *status*
 ** build up the *backupManifest* file (in this jira only limited info for 
 full backup; later on, timestamps and dependencies of multiple backup images are 
 also recorded here)
 ** clean up after *failed* backup 
 ** clean up after *cancelled* backup
 ** allow on-the-fly *convert* during incremental backup 
 * *BackupContext.java* : encapsulate backup information like backup ID, table 
 names, directory info, phase, TimeStamps of backup progress, size of data, 
 ancestor info, etc. 
 * *BackupCopier.java*  : the copying operation.  Later on, to support 
 progress reporting and mapper estimation; and extends DistCp for progress 
 updating to ZK during backup. 
 * *BackupException.java* : to handle exceptions from backup/restore
 * *BackupManifest.java* : encapsulate all the backup image information. The 
 manifest info will be bundled as manifest file together with data. So that 
 each backup image will contain all the info needed for restore. 
 * *BackupStatus.java*   : encapsulate backup status at table level during 
 backup progress
 * *BackupUtil.java* : utility methods during backup process
 * *RestoreClient.java*  : 'main' entry for restore operations. This patch 
 will only support 'full' backup. 
 * 

[jira] [Resolved] (HBASE-7332) [webui] HMaster webui should display the number of regions a table has.

2015-02-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-7332.
--
   Resolution: Fixed
Fix Version/s: 1.1.0
   2.0.0
 Release Note: Adds counts for various region states to the table listing 
on the main page. See attached screenshot.
 Hadoop Flags: Reviewed

Very nice addition. Pushed to branch-1+. Tried to put it in 0.98 but hit a 
conflict. Thanks [~octo47]

 [webui] HMaster webui should display the number of regions a table has.
 ---

 Key: HBASE-7332
 URL: https://issues.apache.org/jira/browse/HBASE-7332
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 2.0.0, 1.1.0
Reporter: Jonathan Hsieh
Assignee: Andrey Stepachev
Priority: Minor
  Labels: beginner, operability
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-7332.patch, HBASE-7332.patch, Screen Shot 
 2014-07-28 at 4.10.01 PM.png


 Pre-0.96/trunk hbase displayed the number of regions per table in the table 
 listing.  Would be good to have this back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-12809) Remove unnecessary calls to Table.setAutoFlush()

2015-02-03 Thread Solomon Duskis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Solomon Duskis resolved HBASE-12809.

Resolution: Duplicate

This was fixed in a different issue.

 Remove unnecessary calls to Table.setAutoFlush()
 

 Key: HBASE-12809
 URL: https://issues.apache.org/jira/browse/HBASE-12809
 Project: HBase
  Issue Type: Sub-task
  Components: hbase
Affects Versions: 1.0.0, 2.0.0
Reporter: Solomon Duskis
Assignee: Solomon Duskis

 It looks like there are a lot of places where setAutoFlushTo() is called in 
 places where that's not necessary.  HBASE-12728 will likely result in 
 removing the flushCommits() method from Table. The patch for this issue 
 should remove all unnecessary calls to setAutoFlushTo() to prepare for the 
 full fix.
 setAutoFlushTo(true) is unnecessary on newly constructed HTables, since 
 autoFlush is true by default.  Calls like the following
 {code}
   table.setAutoFlushTo(false);
   for(...) {
     Put put = new Put(...);
     ...
     table.put(put);
   }
   table.flushCommits();
 {code}
 are equivalent in functionality to:
 {code}
   List<Put> puts = new ArrayList<Put>();
   for(...) {
     Put put = new Put(...);
     ...
     puts.add(put);
   }
   table.put(puts);
 {code}
 The put(List<Put>) semantics ought to be the preferred approach.
 Note: here's the code for put(Put) and put(List<Put>):
 {code:title=HTable.java|borderStyle=solid}
   @Override
   public void put(final Put put)
   throws InterruptedIOException, RetriesExhaustedWithDetailsException {
     doPut(put);
     if (autoFlush) {
       flushCommits();
     }
   }

   @Override
   public void put(final List<Put> puts)
   throws InterruptedIOException, RetriesExhaustedWithDetailsException {
     for (Put put : puts) {
       doPut(put);
     }
     if (autoFlush) {
       flushCommits();
     }
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12108) HBaseConfiguration: set classloader before loading xml files

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303703#comment-14303703
 ] 

Hudson commented on HBASE-12108:


FAILURE: Integrated in HBase-TRUNK #6081 (See 
[https://builds.apache.org/job/HBase-TRUNK/6081/])
HBASE-12108 | Setting classloader so that HBase resources resolve even when 
HBaseConfiguration is loaded from a different class loader (stack: rev 
c812d13a471d4f8ee346fb3fc61f3d7763484b94)
* hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java


 HBaseConfiguration: set classloader before loading xml files
 

 Key: HBASE-12108
 URL: https://issues.apache.org/jira/browse/HBASE-12108
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.6
Reporter: Aniket Bhatnagar
Priority: Minor
  Labels: class_loader, configuration, patch
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBaseConfiguration_HBASE_HBASE-12108.patch


 In a setup where the HBase jars are loaded in a child classloader whose parent 
 had loaded the hadoop-common jar, HBaseConfiguration.create() throws an 
 "hbase-default.xml file seems to be for and old version of HBase (null)..." 
 exception. The ClassLoader should be set on the Hadoop conf object before 
 calling the addHbaseResources method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12961) Negative values in read and write region server metrics

2015-02-03 Thread Victoria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria updated HBASE-12961:
-
Release Note: Change read and write request count in ServerLoad from int to 
long
  Status: Patch Available  (was: Open)

The HMaster web UI shows the read/write requests per region server. They are 
currently displayed using 32-bit integers, so sometimes they show as negative 
values. I changed the types of readRequestsCount and writeRequestCount in 
ServerLoad.java from int to long, and added a unit test for this scenario.
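
A minimal sketch (illustration only, not part of the patch) of why a 32-bit
counter eventually shows as negative and why widening to long fixes it for any
realistic uptime:
{code}
// A 32-bit counter wraps to a negative value once it exceeds Integer.MAX_VALUE.
int requests = Integer.MAX_VALUE;
requests++;                                  // wraps to -2147483648
System.out.println(requests);                // negative, as seen in the UI

// A long counter has headroom of roughly 9.2 * 10^18 requests.
long requestsAsLong = (long) Integer.MAX_VALUE + 1;
System.out.println(requestsAsLong);          // 2147483648
{code}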

 Negative values in read and write region server metrics 
 

 Key: HBASE-12961
 URL: https://issues.apache.org/jira/browse/HBASE-12961
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Victoria
Priority: Minor

 The HMaster web UI shows the read/write requests per region server. They are 
 currently displayed using 32-bit integers, so if the servers are up for a long 
 time the values can overflow and be shown as negative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12961) Negative values in read and write region server metrics

2015-02-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303826#comment-14303826
 ] 

stack commented on HBASE-12961:
---

Patch LGTM. Only issue is where to apply it. ServerLoad is public and we are 
changing the return type so we'd 'break' compatibility. ServerLoad makes it 
over to the client in ClusterStatus. This is a bug and ClusterStatus is more 
admin/ops API. Where should we commit it? master branch for sure. branch-1 for 
1.1? Are we going to backport to 0.98?

 Negative values in read and write region server metrics 
 

 Key: HBASE-12961
 URL: https://issues.apache.org/jira/browse/HBASE-12961
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Victoria
Priority: Minor
 Attachments: HBASE-12961-2.0.0-v1.patch


 The HMaster web UI shows the read/write requests per region server. They are 
 currently displayed using 32-bit integers, so if the servers are up for a long 
 time the values can overflow and be shown as negative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-7332) [webui] HMaster webui should display the number of regions a table has.

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303842#comment-14303842
 ] 

Hudson commented on HBASE-7332:
---

FAILURE: Integrated in HBase-TRUNK #6082 (See 
[https://builds.apache.org/job/HBase-TRUNK/6082/])
HBASE-7332 [webui] HMaster webui should display the number of regions a table 
has. (Andrey Stepachev) (stack: rev 7861e518efb2dc5d393b07079f4309a91b31dea3)
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java
* 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java


 [webui] HMaster webui should display the number of regions a table has.
 ---

 Key: HBASE-7332
 URL: https://issues.apache.org/jira/browse/HBASE-7332
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 2.0.0, 1.1.0
Reporter: Jonathan Hsieh
Assignee: Andrey Stepachev
Priority: Minor
  Labels: beginner, operability
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-7332.patch, HBASE-7332.patch, Screen Shot 
 2014-07-28 at 4.10.01 PM.png, Screen Shot 2015-02-03 at 9.23.57 AM.png


 Pre-0.96/trunk hbase displayed the number of regions per table in the table 
 listing.  Would be good to have this back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12964) Add the ability for hbase-daemon.sh to start in the foreground

2015-02-03 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12964:
--
Attachment: HBASE-12964-v1.patch

Add the command to the usage.

 Add the ability for hbase-daemon.sh to start in the foreground
 --

 Key: HBASE-12964
 URL: https://issues.apache.org/jira/browse/HBASE-12964
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 0.98.10
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12964-v1.patch, HBASE-12964.patch


 The znode cleaner is awesome and gives great benefits.
 As more and more deployments start using containers, some of them will want to 
 run things in the foreground. hbase-daemon.sh should allow that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-8329) Limit compaction speed

2015-02-03 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303804#comment-14303804
 ] 

Andrew Purtell commented on HBASE-8329:
---

Patch here is fine. I will look at it later today. If there's a compat issue I 
can move it to a backport issue, no problem. Otherwise only the fix versions 
need updating.

 Limit compaction speed
 --

 Key: HBASE-8329
 URL: https://issues.apache.org/jira/browse/HBASE-8329
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Reporter: binlijin
Assignee: zhangduo
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-8329-0.98.patch, HBASE-8329-10.patch, 
 HBASE-8329-11.patch, HBASE-8329-12.patch, HBASE-8329-2-trunk.patch, 
 HBASE-8329-3-trunk.patch, HBASE-8329-4-trunk.patch, HBASE-8329-5-trunk.patch, 
 HBASE-8329-6-trunk.patch, HBASE-8329-7-trunk.patch, HBASE-8329-8-trunk.patch, 
 HBASE-8329-9-trunk.patch, HBASE-8329-branch-1.patch, HBASE-8329-trunk.patch, 
 HBASE-8329_13.patch, HBASE-8329_14.patch, HBASE-8329_15.patch, 
 HBASE-8329_16.patch, HBASE-8329_17.patch


 There is no speed or resource limit for compaction,I think we should add this 
 feature especially when request burst.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME

2015-02-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303810#comment-14303810
 ] 

stack commented on HBASE-11544:
---

Excellent [~jonathan.lawlor]

Speak up [~lhofhansl], [~enis], and [~ndimiduk] or anyone else interested in 
this -- would be great to have your input lads.

 [Ergonomics] hbase.client.scanner.caching is dogged and will try to return 
 batch even if it means OOME
 --

 Key: HBASE-11544
 URL: https://issues.apache.org/jira/browse/HBASE-11544
 Project: HBase
  Issue Type: Bug
Reporter: stack
Priority: Critical
  Labels: beginner

 Running some tests, I set hbase.client.scanner.caching=1000.  Dataset has 
 large cells.  I kept OOME'ing.
 Serverside, we should measure how much we've accumulated and return to the 
 client whatever we've gathered once we pass a certain size threshold 
 rather than keep accumulating till we OOME.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-7332) [webui] HMaster webui should display the number of regions a table has.

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303748#comment-14303748
 ] 

Hudson commented on HBASE-7332:
---

FAILURE: Integrated in HBase-1.1 #136 (See 
[https://builds.apache.org/job/HBase-1.1/136/])
HBASE-7332 [webui] HMaster webui should display the number of regions a table 
has. (Andrey Stepachev) (stack: rev adcb840e1bb56bd3a525f672ef17deb19639d3f6)
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java
* 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java


 [webui] HMaster webui should display the number of regions a table has.
 ---

 Key: HBASE-7332
 URL: https://issues.apache.org/jira/browse/HBASE-7332
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 2.0.0, 1.1.0
Reporter: Jonathan Hsieh
Assignee: Andrey Stepachev
Priority: Minor
  Labels: beginner, operability
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-7332.patch, HBASE-7332.patch, Screen Shot 
 2014-07-28 at 4.10.01 PM.png, Screen Shot 2015-02-03 at 9.23.57 AM.png


 Pre-0.96/trunk hbase displayed the number of regions per table in the table 
 listing.  Would be good to have this back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12964) Add the ability for hbase-daemon.sh to start in the foreground

2015-02-03 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12964:
--
Attachment: HBASE-12964.patch

 Add the ability for hbase-daemon.sh to start in the foreground
 --

 Key: HBASE-12964
 URL: https://issues.apache.org/jira/browse/HBASE-12964
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 0.98.10
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12964.patch


 The znode cleaner is awesome and gives great benefits.
 As more and more deployments start using containers, some of them will want to 
 run things in the foreground. hbase-daemon.sh should allow that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12964) Add the ability for hbase-daemon.sh to start in the foreground

2015-02-03 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12964:
--
Status: Patch Available  (was: Open)

 Add the ability for hbase-daemon.sh to start in the foreground
 --

 Key: HBASE-12964
 URL: https://issues.apache.org/jira/browse/HBASE-12964
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.10, 1.0.0, 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12964.patch


 The znode cleaner is awesome and gives great benefits.
 As more and more deployments start using containers, some of them will want to 
 run things in the foreground. hbase-daemon.sh should allow that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12960) Cannot run the hbase shell command on Windows

2015-02-03 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303769#comment-14303769
 ] 

Enis Soztutar commented on HBASE-12960:
---

Thanks Lukas. Mind creating a patch? 

 Cannot run the hbase shell command on Windows
 ---

 Key: HBASE-12960
 URL: https://issues.apache.org/jira/browse/HBASE-12960
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.99.2
 Environment: Windows 8.1
Reporter: Lukas Eder
Priority: Minor

 I've just downloaded and unzipped hbase 0.99.2 and tried to run this command:
 {code}
 C:\hbase-0.99.2\bin>hbase shell
 Invalid maximum heap size: -Xmx1000m 
 Error: Could not create the Java Virtual Machine.
 Error: A fatal exception has occurred. Program will exit.
 {code}
 The command is documented here:
 http://hbase.apache.org/book.html#_get_started_with_hbase
 The problem is in hbase.cmd on line 296
 {code}
 set HEAP_SETTINGS="%JAVA_HEAP_MAX% %JAVA_OFFHEAP_MAX%"
 {code}
 The quotes should be stripped:
 {code}
 set HEAP_SETTINGS=%JAVA_HEAP_MAX% %JAVA_OFFHEAP_MAX%
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12962) TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531

2015-02-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303782#comment-14303782
 ] 

Hadoop QA commented on HBASE-12962:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12696211/HBASE-12962.patch
  against master branch at commit c812d13a471d4f8ee346fb3fc61f3d7763484b94.
  ATTACHMENT ID: 12696211

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at 
org.apache.jena.hadoop.rdf.io.input.AbstractNodeTupleInputFormatTests.testMultipleInputs(AbstractNodeTupleInputFormatTests.java:477)
at 
org.apache.hadoop.hbase.coprocessor.TestMasterObserver.testRegionTransitionOperations(TestMasterObserver.java:1604)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12678//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12678//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12678//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12678//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12678//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12678//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12678//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12678//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12678//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12678//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12678//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12678//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12678//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12678//console

This message is automatically generated.

 TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531
 -

 Key: HBASE-12962
 URL: https://issues.apache.org/jira/browse/HBASE-12962
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12962.patch


 Accidentally during HBASE-10531 the test case testBlockIndex() in 
 TestHFileBlockIndex was commented out.  Apologies for that. Not sure how that 
 happened. This patch uncomments the commented out test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME

2015-02-03 Thread Jonathan Lawlor (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303786#comment-14303786
 ] 

Jonathan Lawlor commented on HBASE-11544:
-

I have started to look into this issue this past week. I have begun by 
investigating how [~lhofhansl]'s solution #1 could be implemented (solution #2 
would be the natural next step afterwards). As discussed above, the current 
implementations of setBatch and setMaxResultSize seem to reveal how we could 
develop a solution for #1:

Currently, if a user uses the setBatch method on their scan, they will receive 
partial rows (assuming the batch size is less than the number of columns in the 
row) on each call to next(). As [~lhofhansl] has called out above, this does 
not break edit atomicity because the scanner maintains the readpoint state on 
the server. This is an important workflow that we could mimic in the 
implementation of solution #1: In the event that the entire row does not fit 
into a chunk, we would be returning partial rows in a manner similar to how 
batching returns partial rows. 

The implementation of setMaxResultSize is a good starting point for the logic 
behind rpcChunkSize but it is currently at too high of a level. The current 
implementation evaluates the limit on the result size after each row's worth of 
cells is retrieved. Specifically, in the event that the limit has been set, the 
server will run through a loop and on each iteration it will retrieve all the 
cells for one row. The loop will continue until the requested number of rows 
has been retrieved OR the limit on the result size has been reached. 

The reason why we would need to modify this in the case of rpcChunkSize is 
because we want the limit to be at the cell level rather than at the row level. 
If the row has many large cells, the result size limit won't matter because it 
will OOME when retrieving the cells for a single row. 

In the case that we return a partial row due to the limits of the chunk size, 
we would want to indicate that the result is indeed a partial with some flag in 
the returned results. The flag would be necessary so that the client could 
recognize whether or not it would need to make another RPC request to finish 
the API call before delivering the results to the caller.
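
A minimal sketch of that client-side stitching (hypothetical names such as
isPartial() and mergePartials() stand in for whatever the final API ends up
being; this is not from any existing patch):
{code}
// Keep fetching while the server flags the last Result as cut off mid-row by
// the chunk-size limit, then merge the pieces into one logical Result before
// handing it to the caller.
List<Result> pieces = new ArrayList<Result>();
Result piece = fetchNextChunk();              // hypothetical low-level RPC call
while (piece != null && piece.isPartial()) {  // hypothetical partial-row flag
  pieces.add(piece);
  piece = fetchNextChunk();                   // remainder of the same row
}
if (piece != null) {
  pieces.add(piece);                          // last, complete piece of the row
}
Result whole = mergePartials(pieces);         // hypothetical combine step
{code}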

A couple issues that come to mind with the move to this new rpcChunkSize 
architecture are highlighted below:
- Currently, filters are not always compatible with partial rows (as in the 
case of setBatch) because sometimes all of the cells within a row are needed to 
make a decision as to whether or not the row will be filtered. With the 
introduction of rpcChunkSize, the logic behind evaluating filters may need to 
be revised. Does anyone have any comments with respect to how this could be 
handled?
- Solution #1 would not be able to prevent OOMEs that result from a single 
cell being too large (in the same way that the current implementation of 
setMaxResultSize cannot prevent OOMEs that result from a single row being too 
large). The issue of Cells that are too large would need to be addressed with 
the move to the full streaming protocol of solution #2.

In summary, the approach that I am thinking of taking for solution #1 is:
- Remove setMaxResultSize and replace it with a limit that we will call 
rpcChunkSize
- Move the logic for rpcChunkSize down into the Cell level so that we can 
prevent OOMEs that result from trying to fetch an entire row's worth of cells 
- Add a flag to Results that allows the client to determine if the Result is a 
partial (and they need to make more RPC requests to finish off the API call)
- Add logic on the client side to recognize when they need to make more RPC 
requests to finish the API call
- Add a method to combine partial results into a single result before 
delivering to caller.
- Still brainstorming how to handle the application of filters server side (any 
advice here would be much appreciated).

Any feedback on my thought process, the issues I raised, and proposed approach 
would be greatly appreciated!

Thanks


 [Ergonomics] hbase.client.scanner.caching is dogged and will try to return 
 batch even if it means OOME
 --

 Key: HBASE-11544
 URL: https://issues.apache.org/jira/browse/HBASE-11544
 Project: HBase
  Issue Type: Bug
Reporter: stack
Priority: Critical
  Labels: beginner

 Running some tests, I set hbase.client.scanner.caching=1000.  Dataset has 
 large cells.  I kept OOME'ing.
 Serverside, we should measure how much we've accumulated and return to the 
 client whatever we've gathered once we pass a certain size threshold 
 rather than keep accumulating till we OOME.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12035) Client does an RPC to master everytime a region is relocated

2015-02-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303843#comment-14303843
 ] 

Hadoop QA commented on HBASE-12035:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12696225/HBASE-12035%20%281%29%20%281%29.patch
  against master branch at commit 7861e518efb2dc5d393b07079f4309a91b31dea3.
  ATTACHMENT ID: 12696225

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 42 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.master.TestDistributedLogSplitting

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12679//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12679//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12679//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12679//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12679//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12679//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12679//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12679//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12679//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12679//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12679//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12679//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12679//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12679//console

This message is automatically generated.

 Client does an RPC to master everytime a region is relocated
 

 Key: HBASE-12035
 URL: https://issues.apache.org/jira/browse/HBASE-12035
 Project: HBase
  Issue Type: Improvement
  Components: Client, master
Affects Versions: 2.0.0
Reporter: Enis Soztutar
Assignee: Andrey Stepachev
Priority: Critical
 Fix For: 2.0.0

 Attachments: HBASE-12035 (1) (1).patch, HBASE-12035 (1).patch, 
 HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, 
 HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, 
 HBASE-12035.patch, HBASE-12035.patch


 HBASE-7767 moved table enabled|disabled state to be kept in hdfs instead of 
 zookeeper. isTableDisabled() which is used in 
 HConnectionImplementation.relocateRegion() now became a master RPC call 
 rather than a zookeeper client call. Since we do relocateRegion() calls 
 everytime we want to relocate a region (region moved, RS down, etc) this 
 implies that when the master is down, the some of the clients for uncached 
 regions will be affected. 
 See HBASE-7767 and 

[jira] [Commented] (HBASE-12961) Negative values in read and write region server metrics

2015-02-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303841#comment-14303841
 ] 

Hadoop QA commented on HBASE-12961:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12696241/HBASE-12961-2.0.0-v1.patch
  against master branch at commit 7861e518efb2dc5d393b07079f4309a91b31dea3.
  ATTACHMENT ID: 12696241

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+
.setStorefileIndexSizeMB(42).setRootIndexSizeKB(201).setReadRequestsCount(Integer.MAX_VALUE).setWriteRequestsCount(Integer.MAX_VALUE).build();
+
.setStorefileIndexSizeMB(40).setRootIndexSizeKB(303).setReadRequestsCount(Integer.MAX_VALUE).setWriteRequestsCount(Integer.MAX_VALUE).build();

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestServerLoad

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12680//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12680//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12680//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12680//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12680//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12680//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12680//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12680//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12680//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12680//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12680//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12680//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12680//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12680//console

This message is automatically generated.

 Negative values in read and write region server metrics 
 

 Key: HBASE-12961
 URL: https://issues.apache.org/jira/browse/HBASE-12961
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Victoria
Priority: Minor
 Attachments: HBASE-12961-2.0.0-v1.patch


 The HMaster web UI shows the read/write requests per region server. They are 
 currently displayed using 32-bit integers, so if the servers are up for a long 
 time the values can overflow and be shown as negative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12961) Negative values in read and write region server metrics

2015-02-03 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303848#comment-14303848
 ] 

Elliott Clark commented on HBASE-12961:
---

I'd be +1 for everywhere. I actually think that we should tag this with public 
evolving.

 Negative values in read and write region server metrics 
 

 Key: HBASE-12961
 URL: https://issues.apache.org/jira/browse/HBASE-12961
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Victoria
Priority: Minor
 Attachments: HBASE-12961-2.0.0-v1.patch


 The HMaster web UI shows the read/write requests per region server. They are 
 currently displayed using 32-bit integers, so if the servers are up for a long 
 time the values can overflow and be shown as negative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12961) Negative values in read and write region server metrics

2015-02-03 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303880#comment-14303880
 ] 

Andrew Purtell commented on HBASE-12961:


+1 to what Elliott said. 

 Negative values in read and write region server metrics 
 

 Key: HBASE-12961
 URL: https://issues.apache.org/jira/browse/HBASE-12961
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Victoria
Priority: Minor
 Attachments: HBASE-12961-2.0.0-v1.patch


 The HMaster web UI shows the read/write requests per region server. They are 
 currently displayed using 32-bit integers, so if the servers are up for a long 
 time the values can overflow and be shown as negative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12956) Binding to 0.0.0.0 is broken after HBASE-10569

2015-02-03 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14304312#comment-14304312
 ] 

Esteban Gutierrez commented on HBASE-12956:
---

That's correct [~enis]; also, RS znodes report as 0.0.0.0. I'm testing a patch 
as I type this.

 Binding to 0.0.0.0 is broken after HBASE-10569
 --

 Key: HBASE-12956
 URL: https://issues.apache.org/jira/browse/HBASE-12956
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Esteban Gutierrez
 Fix For: 2.0.0, 1.0.1, 1.1.0


 After the Region Server and Master code was merged, we lost the functionality 
 to bind to 0.0.0.0 via hbase.regionserver.ipc.address, and znodes now get 
 created with the wildcard address, which means that RSs and the master end up 
 advertising an address that clients cannot use. Thanks to [~dimaspivak] for 
 reporting the issue.
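
 For reference, a sketch of how the property mentioned above would typically be
 set from configuration code (illustration only; the property name is the one
 from this report):
 {code}
 Configuration conf = HBaseConfiguration.create();
 // Ask the region server IPC layer to bind the wildcard address; per this
 // report, the wildcard then also leaks into the znodes.
 conf.set("hbase.regionserver.ipc.address", "0.0.0.0");
 {code}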



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12956) Binding to 0.0.0.0 is broken after HBASE-10569

2015-02-03 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-12956:
--
Fix Version/s: 1.1.0
   1.0.1
   2.0.0

 Binding to 0.0.0.0 is broken after HBASE-10569
 --

 Key: HBASE-12956
 URL: https://issues.apache.org/jira/browse/HBASE-12956
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Esteban Gutierrez
 Fix For: 2.0.0, 1.0.1, 1.1.0


 After the Region Server and Master code was merged, we lost the functionality 
 to bind to 0.0.0.0 via hbase.regionserver.ipc.address, and znodes now get 
 created with the wildcard address, which means that RSs and the master end up 
 advertising an address that clients cannot use. Thanks to [~dimaspivak] for 
 reporting the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12964) Add the ability for hbase-daemon.sh to start in the foreground

2015-02-03 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14304278#comment-14304278
 ] 

Elliott Clark commented on HBASE-12964:
---

Yep tested start, stop, foreground_start, and autostart.
Everything worked well for me.

Thanks for the review

 Add the ability for hbase-daemon.sh to start in the foreground
 --

 Key: HBASE-12964
 URL: https://issues.apache.org/jira/browse/HBASE-12964
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 0.98.10
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12964-v1.patch, HBASE-12964-v2.patch, 
 HBASE-12964.patch


 The znode cleaner is awesome and gives great benefits.
 As more and more deployments start using containers, some of them will want to 
 run things in the foreground. hbase-daemon.sh should allow that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12964) Add the ability for hbase-daemon.sh to start in the foreground

2015-02-03 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12964:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 Add the ability for hbase-daemon.sh to start in the foreground
 --

 Key: HBASE-12964
 URL: https://issues.apache.org/jira/browse/HBASE-12964
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 0.98.10
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12964-v1.patch, HBASE-12964-v2.patch, 
 HBASE-12964.patch


 The znode cleaner is awesome and gives great benefits.
 As more and more deployments start using containers, some of them will want to 
 run things in the foreground. hbase-daemon.sh should allow that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12956) Binding to 0.0.0.0 is broken after HBASE-10569

2015-02-03 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14304308#comment-14304308
 ] 

Enis Soztutar commented on HBASE-12956:
---

This looks important. What happens when {{hbase.regionserver.ipc.address}} is 
set to 0.0.0.0? Do we put 0.0.0.0 in zk as the master address? 

 Binding to 0.0.0.0 is broken after HBASE-10569
 --

 Key: HBASE-12956
 URL: https://issues.apache.org/jira/browse/HBASE-12956
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Esteban Gutierrez
 Fix For: 2.0.0, 1.0.1, 1.1.0


 After the Region Server and Master code was merged, we lost the functionality 
 to bind to 0.0.0.0 via hbase.regionserver.ipc.address, and znodes now get 
 created with the wildcard address, which means that RSs and the master end up 
 advertising an address that clients cannot use. Thanks to [~dimaspivak] for 
 reporting the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12956) Binding to 0.0.0.0 is broken after HBASE-10569

2015-02-03 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14304314#comment-14304314
 ] 

Dima Spivak commented on HBASE-12956:
-

Yep, you end up with 0.0.0.0 znodes.

 Binding to 0.0.0.0 is broken after HBASE-10569
 --

 Key: HBASE-12956
 URL: https://issues.apache.org/jira/browse/HBASE-12956
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Esteban Gutierrez
 Fix For: 2.0.0, 1.0.1, 1.1.0


 After the Region Server and Master code was merged, we lost the functionality 
 to bind to 0.0.0.0 via hbase.regionserver.ipc.address, and znodes now get 
 created with the wildcard address, which means that RSs and the master end up 
 advertising an address that clients cannot use. Thanks to [~dimaspivak] for 
 reporting the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12958) SSH doing hbase:meta get but hbase:meta not assigned

2015-02-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303902#comment-14303902
 ] 

stack commented on HBASE-12958:
---

Odd. We go from acknowledging that the host with meta is down (c2022) to all of 
a sudden forgetting about it:

{code}
2015-02-02 22:32:15,574 INFO  [main-EventThread] zookeeper.RegionServerTracker: 
RegionServer ephemeral node deleted, processing expiration 
[c2022.halxg.cloudera.com,16020,1422944918568]
2015-02-02 22:32:15,575 DEBUG [main-EventThread] master.AssignmentManager: 
based on AM, current region=hbase:meta,,1.1588230740 is on 
server=c2022.halxg.cloudera.com,16020,1422944918568 server being checked: 
c2022.halxg.cloudera.com,16020,1422944918568
2015-02-02 22:32:15,575 DEBUG [main-EventThread] master.ServerManager: 
Added=c2022.halxg.cloudera.com,16020,1422944918568 to dead servers, submitted 
shutdown handler to be executed meta=true
2015-02-02 22:32:15,576 INFO  [MASTER_META_SERVER_OPERATIONS-c2020:16020-1] 
handler.MetaServerShutdownHandler: Splitting hbase:meta logs for 
c2022.halxg.cloudera.com,16020,1422944918568
2015-02-02 22:32:15,577 DEBUG [main-EventThread] zookeeper.RegionServerTracker: 
Added tracking of RS /hbase/rs/c2023.halxg.cloudera.com,16020,1422945128068
2015-02-02 22:32:15,578 DEBUG [main-EventThread] zookeeper.RegionServerTracker: 
Added tracking of RS /hbase/rs/c2025.halxg.cloudera.com,16020,1422935795768
2015-02-02 22:32:15,578 DEBUG [main-EventThread] zookeeper.RegionServerTracker: 
Added tracking of RS /hbase/rs/c2024.halxg.cloudera.com,16020,1422944894206
2015-02-02 22:32:15,585 DEBUG [MASTER_META_SERVER_OPERATIONS-c2020:16020-1] 
master.MasterFileSystem: Renamed region directory: 
hdfs://c2020.halxg.cloudera.com:8020/hbase/WALs/c2022.halxg.cloudera.com,16020,1422944918568-splitting
2015-02-02 22:32:15,585 INFO  [MASTER_META_SERVER_OPERATIONS-c2020:16020-1] 
master.SplitLogManager: dead splitlog workers 
[c2022.halxg.cloudera.com,16020,1422944918568]
2015-02-02 22:32:15,587 DEBUG [MASTER_META_SERVER_OPERATIONS-c2020:16020-1] 
master.SplitLogManager: Scheduling batch of logs to split
2015-02-02 22:32:15,587 INFO  [MASTER_META_SERVER_OPERATIONS-c2020:16020-1] 
master.SplitLogManager: started splitting 1 logs in 
[hdfs://c2020.halxg.cloudera.com:8020/hbase/WALs/c2022.halxg.cloudera.com,16020,1422944918568-splitting]
 for [c2022.halxg.cloudera.com,16020,1422944918568]
2015-02-02 22:32:15,591 DEBUG [main-EventThread] 
coordination.SplitLogManagerCoordination: put up splitlog task at znode 
/hbase/splitWAL/WALs%2Fc2022.halxg.cloudera.com%2C16020%2C1422944918568-splitting%2Fc2022.halxg.cloudera.com%252C16020%252C1422944918568..meta.1422945128892.meta
2015-02-02 22:32:15,591 DEBUG [main-EventThread] 
coordination.SplitLogManagerCoordination: task not yet acquired 
/hbase/splitWAL/WALs%2Fc2022.halxg.cloudera.com%2C16020%2C1422944918568-splitting%2Fc2022.halxg.cloudera.com%252C16020%252C1422944918568..meta.1422945128892.meta
 ver = 0
2015-02-02 22:32:15,607 INFO  [main-EventThread] 
coordination.SplitLogManagerCoordination: task 
/hbase/splitWAL/WALs%2Fc2022.halxg.cloudera.com%2C16020%2C1422944918568-splitting%2Fc2022.halxg.cloudera.com%252C16020%252C1422944918568..meta.1422945128892.meta
 acquired by c2025.halxg.cloudera.com,16020,1422935795768
2015-02-02 22:32:15,929 INFO  
[c2020.halxg.cloudera.com,16020,1422944946802.splitLogManagerTimeoutMonitor] 
coordination.SplitLogManagerCoordination: resubmitting task 
/hbase/splitWAL/WALs%2Fc2021.halxg.cloudera.com%2C16020%2C1422944889403-splitting%2Fc2021.halxg.cloudera.com%252C16020%252C1422944889403.default.1422945068674
2015-02-02 22:32:15,941 INFO  
[c2020.halxg.cloudera.com,16020,1422944946802.splitLogManagerTimeoutMonitor] 
master.SplitLogManager: resubmitted 1 out of 3 tasks
2015-02-02 22:32:15,941 DEBUG [main-EventThread] 
coordination.SplitLogManagerCoordination: task not yet acquired 
/hbase/splitWAL/WALs%2Fc2021.halxg.cloudera.com%2C16020%2C1422944889403-splitting%2Fc2021.halxg.cloudera.com%252C16020%252C1422944889403.default.1422945068674
 ver = 3
2015-02-02 22:32:15,949 INFO  [main-EventThread] 
coordination.SplitLogManagerCoordination: task /hbase/splitWAL/RESCAN004442 
entered state: DONE c2020.halxg.cloudera.com,16020,1422944946802
2015-02-02 22:32:15,957 DEBUG [main-EventThread] 
coordination.ZKSplitLogManagerCoordination$DeleteAsyncCallback: deleted 
/hbase/splitWAL/RESCAN004442
2015-02-02 22:32:15,957 DEBUG [main-EventThread] 
coordination.SplitLogManagerCoordination: deleted task without in memory state 
/hbase/splitWAL/RESCAN004442
2015-02-02 22:32:16,007 INFO  [main-EventThread] 
coordination.SplitLogManagerCoordination: task 
/hbase/splitWAL/WALs%2Fc2021.halxg.cloudera.com%2C16020%2C1422944889403-splitting%2Fc2021.halxg.cloudera.com%252C16020%252C1422944889403.default.1422945068674
 acquired by c2024.halxg.cloudera.com,16020,1422944894206
2015-02-02 22:32:16,208 INFO  

[jira] [Updated] (HBASE-12695) JDK 1.8 compilation broken

2015-02-03 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12695:

Issue Type: Sub-task  (was: Bug)
Parent: HBASE-7608

 JDK 1.8 compilation broken
 --

 Key: HBASE-12695
 URL: https://issues.apache.org/jira/browse/HBASE-12695
 Project: HBase
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Esteban Gutierrez
Priority: Critical
 Fix For: 2.0.0

 Attachments: 0001-HBASE-12695-JDK-1.8-compilation-broken.patch, 
 0002-HBASE-12695-JDK-1.8-compilation-broken.patch


 Looks like trunk only.
 https://code.google.com/p/error-prone/issues/detail?id=240
 https://code.google.com/p/error-prone/issues/detail?id=246



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10952) [REST] Let the user turn off block caching if desired

2015-02-03 Thread Lars Francke (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Francke updated HBASE-10952:
-
Component/s: REST

 [REST] Let the user turn off block caching if desired
 -

 Key: HBASE-10952
 URL: https://issues.apache.org/jira/browse/HBASE-10952
 Project: HBase
  Issue Type: Improvement
  Components: REST
Affects Versions: 0.98.1, 0.99.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.99.0, 0.98.2

 Attachments: HBASE-10952.patch


 After HBASE-10884 the REST gateway will use scanner defaults with respect to 
 block caching. Add support for a query parameter hinting that blocks for the 
 query should not be cached. Enable block caching by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-9866) Support the mode where REST server authorizes proxy users

2015-02-03 Thread Lars Francke (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Francke updated HBASE-9866:

Component/s: REST

 Support the mode where REST server authorizes proxy users
 -

 Key: HBASE-9866
 URL: https://issues.apache.org/jira/browse/HBASE-9866
 Project: HBase
  Issue Type: Improvement
  Components: REST
Reporter: Devaraj Das
Assignee: Devaraj Das
 Fix For: 0.98.0, 0.99.0

 Attachments: 9866-1.txt, 9866-2.txt, 9866-3.txt, 9866-4.txt, 
 9866-4.txt


 In one use case, someone was trying to authorize with the REST server as a 
 proxy user. That mode is not supported today. 
 The curl request would be something like (assuming SPNEGO auth) - 
 {noformat}
 curl -i --negotiate -u : http://HOST:PORT/version/cluster?doas=USER
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10884) [REST] Do not disable block caching when scanning

2015-02-03 Thread Lars Francke (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Francke updated HBASE-10884:
-
Component/s: REST

 [REST] Do not disable block caching when scanning
 -

 Key: HBASE-10884
 URL: https://issues.apache.org/jira/browse/HBASE-10884
 Project: HBase
  Issue Type: Improvement
  Components: REST
Affects Versions: 0.98.1, 0.99.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.99.0, 0.98.2

 Attachments: HBASE-10884.patch


 The REST gateway pessimistically disables block caching when issuing Scans to 
 the cluster, using Scan#setCacheBlocks(false) in ScannerResultGenerator. It 
 does not do this when issuing Gets on behalf of HTTP clients in 
 RowResultGenerator. This is an old idea now; the reasons for doing so were lost 
 sometime back in the era when HBase walked the earth with dinosaurs (< 0.20). 
 We probably should not be penalizing REST scans in this way. 
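 For context, a rough sketch of the Scan flag in question, assuming the standard 
 HBase client API; the column family name is a placeholder:

{code:java}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class CacheBlocksSketch {
  public static void main(String[] args) {
    // Default Scan behaviour: block caching is enabled.
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("f"));  // placeholder column family

    // What ScannerResultGenerator does today, per this issue: pessimistically
    // bypass the block cache for REST-driven scans.
    scan.setCacheBlocks(false);

    // The proposal amounts to leaving the default alone, i.e. not making the
    // setCacheBlocks(false) call above.
    System.out.println("cacheBlocks=" + scan.getCacheBlocks());
  }
}
{code}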



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12963) Add note about jdk8 compilation to the guide

2015-02-03 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303886#comment-14303886
 ] 

Sean Busbey commented on HBASE-12963:
-

Worth noting that the underlying error-prone library now supports Java 8. The 
compiler plugin hasn't had a release that includes it yet; it should be 2.4 when 
it comes out.

 Add note about jdk8 compilation to the guide
 

 Key: HBASE-12963
 URL: https://issues.apache.org/jira/browse/HBASE-12963
 Project: HBase
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor
 Fix For: 2.0.0


 HBASE-12695 fixed building 2.0.0-SNAP with JDK8, but right now it's only 
 documented in a release note. We should add a note to the Building HBase 
 section of the ref guide.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12197) Move REST

2015-02-03 Thread Lars Francke (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Francke updated HBASE-12197:
-
Component/s: REST

 Move REST
 -

 Key: HBASE-12197
 URL: https://issues.apache.org/jira/browse/HBASE-12197
 Project: HBase
  Issue Type: Bug
  Components: REST
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 0.98.8, 0.99.1

 Attachments: 0001-HBASE-12197-Move-rest-to-it-s-on-module.patch, 
 0001-HBASE-12197-Move-rest-to-it-s-on-module.patch, 
 0001-Move-rest-to-it-s-on-module.patch, HBASE-12197-0.98.patch, 
 HBASE-12197-branch-1.patch


 Let's move REST to its own module, like Thrift. That should allow us to remove 
 some dependencies from the classpath when running MR tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12108) HBaseConfiguration: set classloader before loading xml files

2015-02-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303929#comment-14303929
 ] 

Hudson commented on HBASE-12108:


FAILURE: Integrated in HBase-0.98 #832 (See 
[https://builds.apache.org/job/HBase-0.98/832/])
HBASE-12108 | Setting classloader so that HBase resources resolve even when 
HBaseConfiguration is loaded from a different class loader (stack: rev 
b39e158c3ffe237b415a68682e79c8262bcc48e8)
* hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java


 HBaseConfiguration: set classloader before loading xml files
 

 Key: HBASE-12108
 URL: https://issues.apache.org/jira/browse/HBASE-12108
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.6
Reporter: Aniket Bhatnagar
Priority: Minor
  Labels: class_loader, configuration, patch
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBaseConfiguration_HBASE_HBASE-12108.patch


 In a setup where the HBase jars are loaded in a child classloader whose parent 
 had loaded the hadoop-common jar, HBaseConfiguration.create() throws an 
 "hbase-default.xml file seems to be for and old version of HBase (null)..." 
 exception. The ClassLoader should be set on the Hadoop conf object before 
 calling the addHbaseResources method.
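 A minimal sketch of the kind of fix suggested, assuming the public 
 Configuration#setClassLoader and HBaseConfiguration#addHbaseResources APIs; this 
 is illustrative, not the committed patch:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ClassLoaderAwareConfig {
  public static Configuration create() {
    Configuration conf = new Configuration();
    // Point the conf at the classloader that actually holds the HBase jars so
    // hbase-default.xml / hbase-site.xml resolve even when hadoop-common was
    // loaded by a parent classloader.
    conf.setClassLoader(HBaseConfiguration.class.getClassLoader());
    return HBaseConfiguration.addHbaseResources(conf);
  }
}
{code}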



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12964) Add the ability for hbase-daemon.sh to start in the foreground

2015-02-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303954#comment-14303954
 ] 

Hadoop QA commented on HBASE-12964:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12696248/HBASE-12964.patch
  against master branch at commit 7861e518efb2dc5d393b07079f4309a91b31dea3.
  ATTACHMENT ID: 12696248

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+nohup $thiscmd --config ${HBASE_CONF_DIR} foreground_start $command $args < /dev/null > ${HBASE_LOGOUT} 2>&1 &
+nohup $thiscmd --config ${HBASE_CONF_DIR} internal_autorestart $command $args < /dev/null > ${HBASE_LOGOUT} 2>&1 &

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12681//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12681//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12681//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12681//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12681//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12681//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12681//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12681//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12681//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12681//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12681//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12681//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12681//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12681//console

This message is automatically generated.

 Add the ability for hbase-daemon.sh to start in the foreground
 --

 Key: HBASE-12964
 URL: https://issues.apache.org/jira/browse/HBASE-12964
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 0.98.10
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12964-v1.patch, HBASE-12964.patch


 The znode cleaner is awesome and gives great benefits.
 As more and more deployments start using containers, some of them will want to 
 run things in the foreground. hbase-daemon.sh should allow that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12959) Compact never end when table's dataBlockEncoding using PREFIX_TREE

2015-02-03 Thread wuchengzhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wuchengzhi updated HBASE-12959:
---
Attachment: PrefixTreeCompact.java
txtfile-part7.txt.gz
txtfile-part6.txt.gz
txtfile-part5.txt.gz
txtfile-part4.txt.gz
txtfile-part2.txt.gz
txtfile-part1.txt.gz

The storefiles are too large, so I can't upload them.

  Compact never end when table's dataBlockEncoding using  PREFIX_TREE
 

 Key: HBASE-12959
 URL: https://issues.apache.org/jira/browse/HBASE-12959
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 0.98.7
 Environment: hbase 0.98.7
 hadoop 2.5.1
Reporter: wuchengzhi
Priority: Critical
 Attachments: PrefixTreeCompact.java, txtfile-part1.txt.gz, 
 txtfile-part2.txt.gz, txtfile-part4.txt.gz, txtfile-part5.txt.gz, 
 txtfile-part6.txt.gz, txtfile-part7.txt.gz


 I upgraded the hbase from 0.96.1.1 to 0.98.7 and hadoop from 2.2.0 to 
 2.5.1,some table encoding using prefix-tree was abnormal for compacting,  the 
 gui shows the table's Compaction status is MAJOR_AND_MINOR(MAJOR) all the 
 time.
 in the regionserver dump , there are some logs as below:
 Tasks:
 ===
 Task: Compacting info in 
 PREFIX_NOT_COMPACT,,1421954285670.41ef60e2c221772626e141d5080296c5.
 Status: RUNNING:Compacting store info
 Running for 1097s  (on the  site running more than 3 days)
 
 Thread 197 (regionserver60020-smallCompactions-1421954341530):
   State: RUNNABLE
   Blocked count: 7
   Waited count: 3
   Stack:
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.followFan(PrefixTreeArrayScanner.java:329)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.positionAtOrAfter(PrefixTreeArraySearcher.java:149)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.seekForwardToOrAfter(PrefixTreeArraySearcher.java:183)
 
 org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeSeeker.seekToOrBeforeUsingPositionAtOrAfter(PrefixTreeSeeker.java:199)
 
 org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeSeeker.seekToKeyInBlock(PrefixTreeSeeker.java:162)
 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.loadBlockAndSeekToKey(HFileReaderV2.java:1172)
 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:573)
 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:257)
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:697)
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
 
 org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:222)
 
 org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:77)
 
 org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:110)
 org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1099)
 org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1482)
 Thread 177 (regionserver60020-smallCompactions-1421954314809):
   State: RUNNABLE
   Blocked count: 40
   Waited count: 60
   Stack:
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.column.ColumnReader.populateBuffer(ColumnReader.java:81)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.populateQualifier(PrefixTreeArrayScanner.java:471)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.populateNonRowFields(PrefixTreeArrayScanner.java:452)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.nextRow(PrefixTreeArrayScanner.java:226)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArrayScanner.advance(PrefixTreeArrayScanner.java:208)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.positionAtQualifierTimestamp(PrefixTreeArraySearcher.java:244)
 
 org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher.positionAtOrAfter(PrefixTreeArraySearcher.java:123)
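 (A minimal repro-style sketch of the setup described above, assuming the 0.98 
 client API. This is not the attached PrefixTreeCompact.java; it omits the data 
 load, and the table and family names are taken from the region dump above.)

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;

public class PrefixTreeCompactSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    // Table whose 'info' family uses PREFIX_TREE data block encoding.
    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("PREFIX_NOT_COMPACT"));
    desc.addFamily(new HColumnDescriptor("info")
        .setDataBlockEncoding(DataBlockEncoding.PREFIX_TREE));
    admin.createTable(desc);

    // ... load data and flush a few store files, then request the major
    // compaction that is reported to never finish.
    admin.majorCompact("PREFIX_NOT_COMPACT");
    admin.close();
  }
}
{code}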
 
 

[jira] [Commented] (HBASE-12439) Procedure V2

2015-02-03 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14303017#comment-14303017
 ] 

Matteo Bertozzi commented on HBASE-12439:
-

{quote}FATE calls the above idempotent, since the step can be partially done or 
failed. So the step should work over the result of a partial execution from a 
previous attempt. For example, a step for creating a dir for the table in hdfs 
should not fail if the directory is already there.{quote}
Here the logic is the same: once you execute a step, if there is a non-retryable 
code failure a rollback step will be called.
The logic to revert a partial step is the responsibility of the 
execute()/rollback() implementation, not of the framework; the framework only 
knows whether a step is supposed to be executed or rolled back, it has no 
knowledge of what you are doing.

{quote}I think we should address fencing as a first level goal, and mention it 
in the state store implementation. If we make it explicit in the store, 
alternative implementations, if any, have to take that into account.{quote}
Agreed, I'm not at this point yet. I'm still making sure the execution/rollback 
works as expected.

{quote}This is easy to work around. We can have two state store implementations. 
One is a smaller-scale zk-based one, for doing bootstrap. The other is for 
usual operations. However, I think we still do not need a table yet; a state 
store can be implemented as a region opened in the master. This way, we do not 
have to re-implement yet another wal and custom in-memory data structures. Let 
me experiment with this approach on top of this patch.{quote}
The reason I chose the wal was to support assignment; all the logged events 
will probably trigger too many flushes and compactions, and we don't really need 
this data to be compacted. But maybe simple tuning on the region to avoid 
compaction, relying on TTL, may be just fine and avoid the problem. I didn't 
look into it too much; if you have time to experiment with it, feel free to post 
a patch or just suggestions on how to change it.
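
To make the execute()/rollback() split concrete, here is a hypothetical step 
whose execute() tolerates a partial earlier run while the framework only decides 
whether to execute or roll back. This is a sketch under an assumed Step 
interface, not the actual Procedure V2 API:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Hypothetical step contract; the real framework API may differ. */
interface Step {
  void execute() throws IOException;
  void rollback() throws IOException;
}

/** Creating the table dir: execute() must cope with a partial earlier attempt. */
class CreateTableDirStep implements Step {
  private final FileSystem fs;
  private final Path tableDir;

  CreateTableDirStep(FileSystem fs, Path tableDir) {
    this.fs = fs;
    this.tableDir = tableDir;
  }

  @Override
  public void execute() throws IOException {
    // Idempotent: do not fail if a previous partial execution already created it.
    if (!fs.exists(tableDir)) {
      fs.mkdirs(tableDir);
    }
  }

  @Override
  public void rollback() throws IOException {
    // Undo the step; the framework calls this after a non-retryable failure.
    fs.delete(tableDir, true);
  }
}
{code}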

 Procedure V2
 

 Key: HBASE-12439
 URL: https://issues.apache.org/jira/browse/HBASE-12439
 Project: HBase
  Issue Type: New Feature
  Components: master
Affects Versions: 2.0.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Attachments: ProcedureV2.pdf, Procedurev2Notification-Bus.pdf


 Procedure v2 (aka Notification Bus) aims to provide a unified way to build:
 * multi-steps procedure with a rollback/rollforward ability in case of 
 failure (e.g. create/delete table)
 ** HBASE-12070
 * notifications across multiple machines (e.g. ACLs/Labels/Quotas cache 
 updates)
 ** Make sure that every machine has the grant/revoke/label
 ** Enforce space limit quota across the namespace
 ** HBASE-10295 eliminate permanent replication zk node
 * procedures across multiple machines (e.g. Snapshots)
 * coordinated long-running procedures (e.g. compactions, splits, ...)
 * Synchronous calls, with the ability to see the state/result in case of 
 failure.
 ** HBASE-11608 sync split
 still work in progress/initial prototype: https://reviews.apache.org/r/27703/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

