[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653184#comment-14653184
 ] 

Hadoop QA commented on HBASE-14178:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12748601/HBASE-14178-0.98.patch
  against 0.98 branch at commit 931e77d4507e1650c452cefadda450e0bf3f0528.
  ATTACHMENT ID: 12748601

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
21 warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14965//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14965//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14965//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14965//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14965//console

This message is automatically generated.

> regionserver blocks because of waiting for offsetLock
> -
>
> Key: HBASE-14178
> URL: https://issues.apache.org/jira/browse/HBASE-14178
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.6
>Reporter: Heng Chen
>Priority: Critical
> Fix For: 0.98.6
>
> Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
> HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
> HBASE-14178_v4.patch, jstack
>
>
> My regionserver blocks, and all client RPCs time out.
> I printed the regionserver's jstack; it seems a lot of threads were blocked
> waiting for the offsetLock. Detailed information is below:
> PS: my table's block cache is off
> {code}
> "B.DefaultRpcServer.handler=2,queue=2,port=60020" #82 daemon prio=5 os_prio=0 
> tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
> - locked <0x000773af7c18> (a 
> org.apache.hadoop.hbase.util.IdLock$Entry)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek

[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-04 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653201#comment-14653201
 ] 

Duo Zhang commented on HBASE-14178:
---

[~anoopsamjohn]

{{CacheConfig}} is a bit confusing, I think. {{family.isBlockCacheEnabled}} only maps to {{cacheDataOnRead}}, yet we still have a chance of putting data into the {{BlockCache}} if we set {{cacheDataOnWrite}} or {{prefetchOnOpen}} to {{true}}, even when {{cacheDataOnRead}} is {{false}}.

So I suggest we add a new method called {{shouldReadBlockFromCache}} that checks every case in which a block may have been put into the {{BlockCache}}.
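
For illustration, a minimal sketch of such a helper, written as a standalone static method so it stands on its own; the flag names follow the 0.98-era {{CacheConfig}} fields, and the real method would read them from the instance instead of taking parameters (a sketch for discussion, not the committed implementation):

{code}
public final class ShouldReadBlockFromCacheSketch {
  // Return true if there is any chance the requested block is in the BlockCache,
  // so the read path knows whether a cache lookup is worthwhile at all.
  static boolean shouldReadBlockFromCache(boolean blockCacheEnabled,
      boolean cacheDataOnRead, boolean cacheDataOnWrite,
      boolean prefetchOnOpen, boolean isDataBlock) {
    if (!blockCacheEnabled) {
      return false;  // no BlockCache configured, nothing to look up
    }
    if (!isDataBlock) {
      return true;   // index/bloom blocks are cached independently of the CF data setting
    }
    // Data blocks: a lookup is worthwhile if any path could have cached them.
    return cacheDataOnRead || cacheDataOnWrite || prefetchOnOpen;
  }
}
{code}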

Thanks.

> regionserver blocks because of waiting for offsetLock
> -
>
> Key: HBASE-14178
> URL: https://issues.apache.org/jira/browse/HBASE-14178
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.6
>Reporter: Heng Chen
>Priority: Critical
> Fix For: 0.98.6
>
> Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
> HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
> HBASE-14178_v4.patch, jstack
>
>
> My regionserver blocks, and all client RPCs time out.
> I printed the regionserver's jstack; it seems a lot of threads were blocked
> waiting for the offsetLock. Detailed information is below:
> PS: my table's block cache is off
> {code}
> "B.DefaultRpcServer.handler=2,queue=2,port=60020" #82 daemon prio=5 os_prio=0 
> tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
> - locked <0x000773af7c18> (a 
> org.apache.hadoop.hbase.util.IdLock$Entry)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
> - locked <0x0005e5c55ad0> (a 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
> - <0x0005e5c55c08> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-04 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653334#comment-14653334
 ] 

Anoop Sam John commented on HBASE-14178:


I see.. I didn't check much of how this variable gets initialized in CacheConfig. Yes, we had better do some cleanup there; it is quite confusing.
bq. we still have a chance of putting data into BlockCache if we set cacheDataOnWrite or prefetchOnOpen to true, even when cacheDataOnRead is false
I did not test it; it would be nice to cover this with some UTs. If at the CF level we say never cache data from this CF into the BC, we should NOT cache it at all, whatever the values of cacheDataOnWrite or prefetchOnOpen. If we are not doing so, those are bugs to be addressed.
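
A rough sketch of the kind of UT suggested here, assuming the 0.98 {{CacheConfig(Configuration, HColumnDescriptor)}} constructor and its {{shouldCacheDataOnRead()}}/{{shouldCacheDataOnWrite()}} accessors. The second assertion encodes the behaviour argued for above (CF-level "block cache off" wins over cacheDataOnWrite), so it documents the expectation rather than the current code:

{code}
import static org.junit.Assert.assertFalse;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.junit.Test;

public class TestCfLevelCacheSettings {
  @Test
  public void cfLevelBlockCacheOffDisablesDataCaching() {
    Configuration conf = HBaseConfiguration.create();
    HColumnDescriptor family = new HColumnDescriptor("cf");
    family.setBlockCacheEnabled(false);  // CF-level: never cache data from this CF
    family.setCacheDataOnWrite(true);    // write-side caching requested anyway

    CacheConfig cacheConf = new CacheConfig(conf, family);

    // The read path must not expect data blocks of this CF in the cache.
    assertFalse(cacheConf.shouldCacheDataOnRead());
    // Per the discussion above, the write path should not cache them either;
    // if this fails, that is exactly the bug said to need addressing.
    assertFalse(cacheConf.shouldCacheDataOnWrite());
  }
}
{code}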

> regionserver blocks because of waiting for offsetLock
> -
>
> Key: HBASE-14178
> URL: https://issues.apache.org/jira/browse/HBASE-14178
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.6
>Reporter: Heng Chen
>Priority: Critical
> Fix For: 0.98.6
>
> Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
> HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
> HBASE-14178_v4.patch, jstack
>
>
> My regionserver blocks, and all client RPCs time out.
> I printed the regionserver's jstack; it seems a lot of threads were blocked
> waiting for the offsetLock. Detailed information is below:
> PS: my table's block cache is off
> {code}
> "B.DefaultRpcServer.handler=2,queue=2,port=60020" #82 daemon prio=5 os_prio=0 
> tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
> - locked <0x000773af7c18> (a 
> org.apache.hadoop.hbase.util.IdLock$Entry)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
> - locked <0x0005e5c55ad0> (a 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers

[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-04 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653341#comment-14653341
 ] 

Anoop Sam John commented on HBASE-14178:


bq. So I suggest we add a new method called shouldReadBlockFromCache that checks every case in which a block may have been put into the BlockCache
Ideally, when the BC is enabled and there is no CF-level setting saying NOT to cache data from that CF, we should try to read from the BC. And even when that CF-level setting is present and we are not reading back data blocks, we still have to consult the BC. Still, your suggestion of adding the new method to CacheConfig will be much cleaner.

> regionserver blocks because of waiting for offsetLock
> -
>
> Key: HBASE-14178
> URL: https://issues.apache.org/jira/browse/HBASE-14178
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.6
>Reporter: Heng Chen
>Priority: Critical
> Fix For: 0.98.6
>
> Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
> HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
> HBASE-14178_v4.patch, jstack
>
>
> My regionserver blocks, and all client RPCs time out.
> I printed the regionserver's jstack; it seems a lot of threads were blocked
> waiting for the offsetLock. Detailed information is below:
> PS: my table's block cache is off
> {code}
> "B.DefaultRpcServer.handler=2,queue=2,port=60020" #82 daemon prio=5 os_prio=0 
> tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
> - locked <0x000773af7c18> (a 
> org.apache.hadoop.hbase.util.IdLock$Entry)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
> - locked <0x0005e5c55ad0> (a 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
> - <0x0005e5c55c08> (a 
> java.util.concurrent.lo

[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-04 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653391#comment-14653391
 ] 

Heng Chen commented on HBASE-14178:
---

{quote}
Ideally, when the BC is enabled and there is no CF-level setting saying NOT to cache data from that CF, we should try to read from the BC. And even when that CF-level setting is present and we are not reading back data blocks, we still have to consult the BC. Still, your suggestion of adding the new method to CacheConfig will be much cleaner.
{quote}

I agree with both of you. I will write a function named shouldReadBlockFromCache in CacheConfig to check all the situations in which we should read from the BC.

But there is one problem: we acquire the lock to ensure the next request can read the block from the BC.
If cacheDataOnRead is false but cacheDataOnWrite is true then, as we discussed, we still read from the BC and acquire the lock.
But after reading the block from HDFS we use a different condition to decide whether to cache it, and that condition will not cache the block when cacheDataOnRead is false and cacheDataOnWrite is true.
In this situation the lock is useless.

So I think we need another 'if' to check whether we should acquire the lock. Do you think so?
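
A rough sketch of the idea (helper names like {{readBlockFromFs}}, {{getCachedBlock}} and {{cacheKeyFor}} are placeholders rather than the actual {{HFileReaderV2}} members, and {{shouldReadBlockFromCache}} is the helper proposed above, not an existing API): consult the cache whenever a hit is possible, but take the per-offset {{IdLock}} only when the block would also be cached after the read, since the lock only exists so that followers can find the block cached by the first reader.

{code}
HFileBlock readBlock(long offset, BlockType expectedType) throws IOException {
  boolean mayBeCached = cacheConf.shouldReadBlockFromCache(expectedType);  // proposed helper
  boolean willCacheOnRead =
      cacheConf.shouldCacheBlockOnRead(expectedType.getCategory());

  if (!willCacheOnRead) {
    // We will not add the block to the cache, so serializing readers on the
    // offset lock buys nothing; just probe the cache (if useful) and read HDFS.
    if (mayBeCached) {
      HFileBlock cached = getCachedBlock(cacheKeyFor(offset));
      if (cached != null) {
        return cached;
      }
    }
    return readBlockFromFs(offset);
  }

  IdLock.Entry lockEntry = offsetLock.getLockEntry(offset);
  try {
    HFileBlock cached = getCachedBlock(cacheKeyFor(offset));
    if (cached != null) {
      return cached;
    }
    HFileBlock block = readBlockFromFs(offset);
    cacheConf.getBlockCache().cacheBlock(cacheKeyFor(offset), block, cacheConf.isInMemory());
    return block;
  } finally {
    offsetLock.releaseLockEntry(lockEntry);
  }
}
{code}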





> regionserver blocks because of waiting for offsetLock
> -
>
> Key: HBASE-14178
> URL: https://issues.apache.org/jira/browse/HBASE-14178
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.6
>Reporter: Heng Chen
>Priority: Critical
> Fix For: 0.98.6
>
> Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
> HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
> HBASE-14178_v4.patch, jstack
>
>
> My regionserver blocks, and all client RPCs time out.
> I printed the regionserver's jstack; it seems a lot of threads were blocked
> waiting for the offsetLock. Detailed information is below:
> PS: my table's block cache is off
> {code}
> "B.DefaultRpcServer.handler=2,queue=2,port=60020" #82 daemon prio=5 os_prio=0 
> tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
> - locked <0x000773af7c18> (a 
> org.apache.hadoop.hbase.util.IdLock$Entry)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
> - locked <0x0005e5c55ad0> (a 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
> at 
> org.apache.hadoop.hbas

[jira] [Commented] (HBASE-12865) WALs may be deleted before they are replicated to peers

2015-08-04 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653394#comment-14653394
 ] 

Lars Hofhansl commented on HBASE-12865:
---

Yeah. Apologies from me as well... This went under the radar for some reason.

> WALs may be deleted before they are replicated to peers
> ---
>
> Key: HBASE-12865
> URL: https://issues.apache.org/jira/browse/HBASE-12865
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Liu Shaohui
>Assignee: He Liangliang
>Priority: Critical
> Attachments: HBASE-12865-V1.diff, HBASE-12865-V2.diff
>
>
> By design, ReplicationLogCleaner guarantee that the WALs  being in 
> replication queue can't been deleted by the HMaster. The 
> ReplicationLogCleaner gets the WAL set from zookeeper by scanning the 
> replication zk node. But it may get uncompleted WAL set during replication 
> failover for the scan operation is not atomic.
> For example: There are three region servers: rs1, rs2, rs3, and peer id 10.  
> The layout of replication zookeeper nodes is:
> {code}
> /hbase/replication/rs/rs1/10/wals
>  /rs2/10/wals
>  /rs3/10/wals
> {code}
> - t1: the ReplicationLogCleaner finished scanning the replication queue of 
> rs1, and start to scan the queue of rs2.
> - t2: region server rs3 is down, and rs1 take over rs3's replication queue. 
> The new layout is
> {code}
> /hbase/replication/rs/rs1/10/wals
>  /rs1/10-rs3/wals
>  /rs2/10/wals
>  /rs3
> {code}
> - t3, the ReplicationLogCleaner finished scanning the queue of rs2, and start 
> to scan the node of rs3. But the the queue has been moved to  
> "replication/rs1/10-rs3/WALS"
> So the  ReplicationLogCleaner will miss the WALs of rs3 in peer 10 and the 
> hmaster may delete these WALs before they are replicated to peer clusters.
> We encountered this problem in our cluster and I think it's a serious bug for 
> replication.
> Suggestions are welcomed to fix this bug. thx~



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12865) WALs may be deleted before they are replicated to peers

2015-08-04 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653406#comment-14653406
 ] 

Lars Hofhansl commented on HBASE-12865:
---

Patch looks good. I find it hard to convince myself that the cversion would 
change in all cases that we care about... I'll trust you on this.

Minor nit:
{{int retry = 0; do \{...; retry++;\} while (true)}}
can perhaps be expressed more nicely as
{{for (int retry = 0; ; retry++) \{...\}}}
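
A toy illustration of the two forms; {{attempt()}} stands in for the cleaner's retried ZooKeeper read and is not real HBase code:

{code}
// do/while form: declaration, increment and exit are spread out.
int retry = 0;
do {
  if (attempt(retry)) {
    break;
  }
  retry++;
} while (true);

// Equivalent for-loop (an alternative, not run after the above): the counter's
// declaration, scope and increment live together in the loop header.
for (int retry = 0; ; retry++) {
  if (attempt(retry)) {
    break;
  }
}
{code}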

> WALs may be deleted before they are replicated to peers
> ---
>
> Key: HBASE-12865
> URL: https://issues.apache.org/jira/browse/HBASE-12865
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Liu Shaohui
>Assignee: He Liangliang
>Priority: Critical
> Attachments: HBASE-12865-V1.diff, HBASE-12865-V2.diff
>
>
> By design, ReplicationLogCleaner guarantee that the WALs  being in 
> replication queue can't been deleted by the HMaster. The 
> ReplicationLogCleaner gets the WAL set from zookeeper by scanning the 
> replication zk node. But it may get uncompleted WAL set during replication 
> failover for the scan operation is not atomic.
> For example: There are three region servers: rs1, rs2, rs3, and peer id 10.  
> The layout of replication zookeeper nodes is:
> {code}
> /hbase/replication/rs/rs1/10/wals
>  /rs2/10/wals
>  /rs3/10/wals
> {code}
> - t1: the ReplicationLogCleaner finished scanning the replication queue of 
> rs1, and start to scan the queue of rs2.
> - t2: region server rs3 is down, and rs1 take over rs3's replication queue. 
> The new layout is
> {code}
> /hbase/replication/rs/rs1/10/wals
>  /rs1/10-rs3/wals
>  /rs2/10/wals
>  /rs3
> {code}
> - t3, the ReplicationLogCleaner finished scanning the queue of rs2, and start 
> to scan the node of rs3. But the the queue has been moved to  
> "replication/rs1/10-rs3/WALS"
> So the  ReplicationLogCleaner will miss the WALs of rs3 in peer 10 and the 
> hmaster may delete these WALs before they are replicated to peer clusters.
> We encountered this problem in our cluster and I think it's a serious bug for 
> replication.
> Suggestions are welcomed to fix this bug. thx~



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-04 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653415#comment-14653415
 ] 

Duo Zhang commented on HBASE-14178:
---

Yes, the problem here is the lock, not when to read from the cache... So if we can 
make sure the block will not be put into the cache after we fetch it from HDFS, 
then we can bypass the locking step.

> regionserver blocks because of waiting for offsetLock
> -
>
> Key: HBASE-14178
> URL: https://issues.apache.org/jira/browse/HBASE-14178
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.6
>Reporter: Heng Chen
>Priority: Critical
> Fix For: 0.98.6
>
> Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
> HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
> HBASE-14178_v4.patch, jstack
>
>
> My regionserver blocks, and all client RPCs time out.
> I printed the regionserver's jstack; it seems a lot of threads were blocked
> waiting for the offsetLock. Detailed information is below:
> PS: my table's block cache is off
> {code}
> "B.DefaultRpcServer.handler=2,queue=2,port=60020" #82 daemon prio=5 os_prio=0 
> tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
> - locked <0x000773af7c18> (a 
> org.apache.hadoop.hbase.util.IdLock$Entry)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
> - locked <0x0005e5c55ad0> (a 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
> - <0x0005e5c55c08> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-04 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14178:
--
Attachment: HBASE-14178_v5.patch

Uploaded patch. Changes below:
1. Add a function that checks all the situations in which we should read from the BC.
2. Add a function that checks whether we should acquire the lock.

> regionserver blocks because of waiting for offsetLock
> -
>
> Key: HBASE-14178
> URL: https://issues.apache.org/jira/browse/HBASE-14178
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.6
>Reporter: Heng Chen
>Priority: Critical
> Fix For: 0.98.6
>
> Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
> HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
> HBASE-14178_v4.patch, HBASE-14178_v5.patch, jstack
>
>
> My regionserver blocks, and all client RPCs time out.
> I printed the regionserver's jstack; it seems a lot of threads were blocked
> waiting for the offsetLock. Detailed information is below:
> PS: my table's block cache is off
> {code}
> "B.DefaultRpcServer.handler=2,queue=2,port=60020" #82 daemon prio=5 os_prio=0 
> tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
> - locked <0x000773af7c18> (a 
> org.apache.hadoop.hbase.util.IdLock$Entry)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
> - locked <0x0005e5c55ad0> (a 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
> - <0x0005e5c55c08> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14182) My regionserver change ip. But hmaster still connect to old ip after the rs restart

2015-08-04 Thread Heng Chen (JIRA)
Heng Chen created HBASE-14182:
-

 Summary: My regionserver change ip. But hmaster still connect to 
old ip after the rs restart
 Key: HBASE-14182
 URL: https://issues.apache.org/jira/browse/HBASE-14182
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.6
Reporter: Heng Chen


I use Docker to deploy my HBase cluster, and the RS IP changed. When I restart 
this RS, the HMaster web UI shows it connected to the HMaster, but the region 
count is still zero after a long time. I checked the HMaster log and found that 
the master still uses the old IP to connect to this RS.

This is the HMaster's log below:
PS: 10.11.21.140 is the old IP of RS dx-ape-regionserver1-online
{code}
2015-08-04 17:24:00,081 INFO  [AM.ZK.Worker-pool2-t14141] 
master.AssignmentManager: Assigning 
solar_image,\x01Y\x8E\xA3y,1434968237206.4a1bdeec85b9f55b962596f9fb2cd07f. to 
dx-ape-regionserver1-online,60020,1438679950072
2015-08-04 17:24:06,800 WARN  [AM.ZK.Worker-pool2-t14133] 
master.AssignmentManager: Failed assignment of 
solar_image,\x00\x94\x09\x8D\x95,1430991781025.b0f5b755f443d41cf306026a60675020.
 to dx-ape-regionserver1-online,60020,1438679950072, trying to assign elsewhere 
instead; try=3 of 10
java.net.ConnectException: Connection timed out
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:578)
at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:868)
at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1543)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1442)
at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.openRegion(AdminProtos.java:20964)
at 
org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:671)
at 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2097)
at 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1577)
at 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1550)
at 
org.apache.hadoop.hbase.master.handler.ClosedRegionHandler.process(ClosedRegionHandler.java:104)
at 
org.apache.hadoop.hbase.master.AssignmentManager.handleRegion(AssignmentManager.java:999)
at 
org.apache.hadoop.hbase.master.AssignmentManager$6.run(AssignmentManager.java:1447)
at 
org.apache.hadoop.hbase.master.AssignmentManager$3.run(AssignmentManager.java:1260)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2015-08-04 17:24:06,801 WARN  [AM.ZK.Worker-pool2-t14140] 
master.AssignmentManager: Failed assignment of 
solar_image,\x00(.\xE7\xB1L,1430024620929.534025fcf4cae5516513b9c9a4cf73dc. to 
dx-ape-regionserver1-online,60020,1438679950072, trying to assign elsewhere 
instead; try=2 of 10
java.net.ConnectException: Call to 
dx-ape-regionserver1-online/10.11.21.140:60020 failed on connection exception: 
java.net.ConnectException: Connection timed out
at 
org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1483)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1461)
at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.openRegion(AdminProtos.java:20964)
at 
org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:671)
at 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2097)
at 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1577)
at 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1550)
at 
org.apache.hadoop.hbase.master.handler.ClosedRegionHand

[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653551#comment-14653551
 ] 

Hadoop QA commented on HBASE-14178:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12748640/HBASE-14178_v5.patch
  against master branch at commit 931e77d4507e1650c452cefadda450e0bf3f0528.
  ATTACHMENT ID: 12748640

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestMultiParallel
  org.apache.hadoop.hbase.trace.TestHTraceHooks
  org.apache.hadoop.hbase.client.TestScannersFromClientSide
  org.apache.hadoop.hbase.TestLocalHBaseCluster
  org.apache.hadoop.hbase.TestMetaTableAccessor
  
org.apache.hadoop.hbase.snapshot.TestRestoreFlushSnapshotFromClient
  org.apache.hadoop.hbase.client.TestScannerTimeout
  
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas
  org.apache.hadoop.hbase.client.TestMetaWithReplicas
  org.apache.hadoop.hbase.namespace.TestNamespaceAuditor
  org.apache.hadoop.hbase.client.TestHCM
  
org.apache.hadoop.hbase.snapshot.TestMobRestoreFlushSnapshotFromClient
  org.apache.hadoop.hbase.backup.TestHFileArchiving
  
org.apache.hadoop.hbase.client.TestSnapshotFromClientWithRegionReplicas
  org.apache.hadoop.hbase.client.TestClientPushback
  org.apache.hadoop.hbase.TestIOFencing
  org.apache.hadoop.hbase.client.TestClientTimeouts
  org.apache.hadoop.hbase.client.TestMobSnapshotFromClient
  org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient
  org.apache.hadoop.hbase.client.TestCloneSnapshotFromClient
  org.apache.hadoop.hbase.TestMultiVersions

 {color:red}-1 core zombie tests{color}.  There are 7 zombie test(s):   
at 
org.apache.hadoop.hbase.namespace.TestNamespaceAuditor.testRegionMerge(TestNamespaceAuditor.java:316)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14966//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14966//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14966//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14966//console

This message is automatically generated.

> regionserver blocks because of waiting for offsetLock
> -
>
> Key: HBASE-14178
> URL: https://issues.apache.org/jira/browse/HBASE-14178
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.6
>Reporter: Heng Chen
>Priority: Critical
> Fix For: 0.98.6
>
> Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
> HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
> HBASE-14178_v4.patch, HBASE-14178_v5.patch, jstack
>
>
> My regionserver blocks, and all client RPCs time out.
> I printed the regionserver's jstack; it seems a lot of threads were blocked
> waiting for the offsetLock. Detailed information is below:

[jira] [Updated] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-04 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14178:
--
Attachment: HBASE-14178_v6.patch

Changes:

1. Modified some comments.

> regionserver blocks because of waiting for offsetLock
> -
>
> Key: HBASE-14178
> URL: https://issues.apache.org/jira/browse/HBASE-14178
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.6
>Reporter: Heng Chen
>Priority: Critical
> Fix For: 0.98.6
>
> Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
> HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
> HBASE-14178_v4.patch, HBASE-14178_v5.patch, HBASE-14178_v6.patch, jstack
>
>
> My regionserver blocks, and all client RPCs time out.
> I printed the regionserver's jstack; it seems a lot of threads were blocked
> waiting for the offsetLock. Detailed information is below:
> PS: my table's block cache is off
> {code}
> "B.DefaultRpcServer.handler=2,queue=2,port=60020" #82 daemon prio=5 os_prio=0 
> tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
> - locked <0x000773af7c18> (a 
> org.apache.hadoop.hbase.util.IdLock$Entry)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
> - locked <0x0005e5c55ad0> (a 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
> - <0x0005e5c55c08> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14183) Scanning hbase meta table is failing in master branch

2015-08-04 Thread Ashish Singhi (JIRA)
Ashish Singhi created HBASE-14183:
-

 Summary: Scanning hbase meta table is failing in master branch
 Key: HBASE-14183
 URL: https://issues.apache.org/jira/browse/HBASE-14183
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0


As part of HBASE-14047 cleanup this issue has been introduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14183) Scanning hbase meta table is failing in master branch

2015-08-04 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14183:
--
Attachment: HBASE-14183.patch

> Scanning hbase meta table is failing in master branch
> -
>
> Key: HBASE-14183
> URL: https://issues.apache.org/jira/browse/HBASE-14183
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0
>
> Attachments: HBASE-14183.patch
>
>
> As part of HBASE-14047 cleanup this issue has been introduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14183) Scanning hbase meta table is failing in master branch

2015-08-04 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14183:
--
Status: Patch Available  (was: Open)

> Scanning hbase meta table is failing in master branch
> -
>
> Key: HBASE-14183
> URL: https://issues.apache.org/jira/browse/HBASE-14183
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0
>
> Attachments: HBASE-14183.patch
>
>
> As part of HBASE-14047 cleanup this issue has been introduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14183) Scanning hbase meta table is failing in master branch

2015-08-04 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653645#comment-14653645
 ] 

Ashish Singhi commented on HBASE-14183:
---

Checked that no other place was missed.
Please review.

> Scanning hbase meta table is failing in master branch
> -
>
> Key: HBASE-14183
> URL: https://issues.apache.org/jira/browse/HBASE-14183
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0
>
> Attachments: HBASE-14183.patch
>
>
> As part of HBASE-14047 cleanup this issue has been introduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14184) Fix indention and type-o in JavaHBaseContext

2015-08-04 Thread Ted Malaska (JIRA)
Ted Malaska created HBASE-14184:
---

 Summary: Fix indention and type-o in JavaHBaseContext
 Key: HBASE-14184
 URL: https://issues.apache.org/jira/browse/HBASE-14184
 Project: HBase
  Issue Type: Wish
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
Priority: Minor


Looks like there is a Ddd that should be Rdd.

Also, it looks like everything is indented one space too far.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14150) Add BulkLoad functionality to HBase-Spark Module

2015-08-04 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated HBASE-14150:

Attachment: HBASE-14150.2.patch

Did the following:
1. Added test for rdd implicit function
2. Applied some of Ted Y's comments



> Add BulkLoad functionality to HBase-Spark Module
> 
>
> Key: HBASE-14150
> URL: https://issues.apache.org/jira/browse/HBASE-14150
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Reporter: Ted Malaska
>Assignee: Ted Malaska
> Attachments: HBASE-14150.1.patch, HBASE-14150.2.patch
>
>
> Add on to the work done in HBASE-13992 to add functionality to do a bulk load 
> from a given RDD.
> This will do the following:
> 1. Figure out the number of regions, then sort and partition the data correctly 
> to be written out as HFiles.
> 2. Unlike the MR bulk load, I would like the columns to be sorted in the 
> shuffle stage and not in the memory of the reducer. This will allow this 
> design to support super-wide records without going out of memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14184) Fix indention and type-o in JavaHBaseContext

2015-08-04 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated HBASE-14184:

Attachment: HBASE-14184.3.patch

> Fix indention and type-o in JavaHBaseContext
> 
>
> Key: HBASE-14184
> URL: https://issues.apache.org/jira/browse/HBASE-14184
> Project: HBase
>  Issue Type: Wish
>  Components: spark
>Reporter: Ted Malaska
>Assignee: Ted Malaska
>Priority: Minor
> Attachments: HBASE-14184.3.patch
>
>
> Looks like there is a Ddd that should be Rdd.
> Also, it looks like everything is indented one space too far.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14184) Fix indention and type-o in JavaHBaseContext

2015-08-04 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653679#comment-14653679
 ] 

Ted Malaska commented on HBASE-14184:
-

Also fixed some JavaDoc stuff. Nothing in the code should have changed in this 
patch; it is a simple cleanup effort.

Should be a simple review and commit.

> Fix indention and type-o in JavaHBaseContext
> 
>
> Key: HBASE-14184
> URL: https://issues.apache.org/jira/browse/HBASE-14184
> Project: HBase
>  Issue Type: Wish
>  Components: spark
>Reporter: Ted Malaska
>Assignee: Ted Malaska
>Priority: Minor
> Attachments: HBASE-14184.3.patch
>
>
> Looks like there is a Ddd that should be Rdd.
> Also, it looks like everything is indented one space too far.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14183) Scanning hbase meta table is failing in master branch

2015-08-04 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653722#comment-14653722
 ] 

Anoop Sam John commented on HBASE-14183:


Why not use kv.getValueLength?

> Scanning hbase meta table is failing in master branch
> -
>
> Key: HBASE-14183
> URL: https://issues.apache.org/jira/browse/HBASE-14183
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0
>
> Attachments: HBASE-14183.patch
>
>
> As part of HBASE-14047 cleanup this issue has been introduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14182) My regionserver change ip. But hmaster still connect to old ip after the rs restart

2015-08-04 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653730#comment-14653730
 ] 

Heng Chen commented on HBASE-14182:
---

I think I found the answer!

RpcClient uses the InetAddress class in Java, and InetAddress has a cache that stores hostname-to-address pairs.
getAllByName0 is called when the IP for a host is requested; the source code in JDK 1.8 is below:

{code}
private static InetAddress[] getAllByName0(String host, InetAddress reqAddr, boolean check)
    throws UnknownHostException {

    /* If it gets here it is presumed to be a hostname */
    /* Cache.get can return: null, unknownAddress, or InetAddress[] */

    /* make sure the connection to the host is allowed, before we
     * give out a hostname
     */
    if (check) {
        SecurityManager security = System.getSecurityManager();
        if (security != null) {
            security.checkConnect(host, -1);
        }
    }

    InetAddress[] addresses = getCachedAddresses(host);

    /* If no entry in cache, then do the host lookup */
    if (addresses == null) {
        addresses = getAddressesFromNameService(host, reqAddr);
    }

    if (addresses == unknown_array)
        throw new UnknownHostException(host);

    return addresses.clone();
}
{code}

It checks the cache first.

So we can't change the RS IP without an HMaster restart.

One solution is to store the IP information in ZK and pass it into the InetAddress constructor when creating a new instance. That would solve the problem.
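
A minimal sketch of that idea, assuming the raw address bytes have already been read from ZK (the znode path and the readAddressFromZk helper are hypothetical):

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

InetAddress resolveRegionServer(String hostname) throws UnknownHostException {
  // Hypothetical helper: returns e.g. {10, 11, 21, 141} as stored by the RS in ZK.
  byte[] addrBytes = readAddressFromZk("/hbase/rs/" + hostname);
  // getByAddress(host, addr) builds an InetAddress from the given hostname and
  // raw IP bytes without consulting the resolver (and thus its cache) at all.
  return InetAddress.getByAddress(hostname, addrBytes);
}
{code}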



> My regionserver change ip. But hmaster still connect to old ip after the rs 
> restart
> ---
>
> Key: HBASE-14182
> URL: https://issues.apache.org/jira/browse/HBASE-14182
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.98.6
>Reporter: Heng Chen
>
> I use docker to deploy my hbase cluster, and the RS ip changed. When restart 
> this RS,  hmaster webUI shows it connect to hmaster, but regions num. is zero 
> after a long time. I check the hmaster log and found that master still use 
> old ip to connect this rs.
> This is hmaster's log below:
> PS: 10.11.21.140 is old ip of  rs dx-ape-regionserver1-online
> {code}
> 2015-08-04 17:24:00,081 INFO  [AM.ZK.Worker-pool2-t14141] 
> master.AssignmentManager: Assigning 
> solar_image,\x01Y\x8E\xA3y,1434968237206.4a1bdeec85b9f55b962596f9fb2cd07f. to 
> dx-ape-regionserver1-online,60020,1438679950072
> 2015-08-04 17:24:06,800 WARN  [AM.ZK.Worker-pool2-t14133] 
> master.AssignmentManager: Failed assignment of 
> solar_image,\x00\x94\x09\x8D\x95,1430991781025.b0f5b755f443d41cf306026a60675020.
>  to dx-ape-regionserver1-online,60020,1438679950072, trying to assign 
> elsewhere instead; try=3 of 10
> java.net.ConnectException: Connection timed out
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:578)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:868)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1543)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1442)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.openRegion(AdminProtos.java:20964)
> at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:671)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2097)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1577)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1550)
> at 
> org.apache.hadoop.hbase.master.handler.ClosedRegionHandler.process(ClosedRegionHandler.java:104)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.handleRegion(AssignmentManager.java:999)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager$6.run(AssignmentManager.java:1447)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager$3.run(AssignmentManager.java:12

[jira] [Commented] (HBASE-14182) My regionserver change ip. But hmaster still connect to old ip after the rs restart

2015-08-04 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653748#comment-14653748
 ] 

Heng Chen commented on HBASE-14182:
---

It seems there is a better solution. As the JDK docs say:
{quote}
InetAddress Caching
The InetAddress class has a cache to store successful as well as unsuccessful 
host name resolutions.
By default, when a security manager is installed, in order to protect against 
DNS spoofing attacks, the result of positive host name resolutions are cached 
forever. When a security manager is not installed, the default behavior is to 
cache entries for a finite (implementation dependent) period of time. The 
result of unsuccessful host name resolution is cached for a very short period 
of time (10 seconds) to improve performance.

If the default behavior is not desired, then a Java security property can be 
set to a different Time-to-live (TTL) value for positive caching. Likewise, a 
system admin can configure a different negative caching TTL value when needed.

Two Java security properties control the TTL values used for positive and 
negative host name resolution caching:

networkaddress.cache.ttl
Indicates the caching policy for successful name lookups from the name service. 
The value is specified as an integer to indicate the number of seconds to cache 
the successful lookup. The default setting is to cache for an implementation 
specific period of time.
A value of -1 indicates "cache forever".

networkaddress.cache.negative.ttl (default: 10)
Indicates the caching policy for un-successful name lookups from the name 
service. The value is specified as an integer to indicate the number of seconds 
to cache the failure for un-successful lookups.
A value of 0 indicates "never cache". A value of -1 indicates "cache forever".
{quote}

We can set networkaddress.cache.ttl to a finite value.
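
A minimal sketch, assuming a 60-second TTL is acceptable (the value 60 is only an example, not something prescribed here); the property must be set before the JVM performs its first lookup:

{code}
import java.security.Security;

public final class DnsCacheConfig {
  public static void main(String[] args) {
    // Cache successful host name lookups for at most 60 seconds instead of forever.
    Security.setProperty("networkaddress.cache.ttl", "60");
    // Keep failed lookups at the default 10 seconds.
    Security.setProperty("networkaddress.cache.negative.ttl", "10");
  }
}
{code}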

> My regionserver change ip. But hmaster still connect to old ip after the rs 
> restart
> ---
>
> Key: HBASE-14182
> URL: https://issues.apache.org/jira/browse/HBASE-14182
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.98.6
>Reporter: Heng Chen
>
> I use docker to deploy my hbase cluster, and the RS ip changed. When restart 
> this RS,  hmaster webUI shows it connect to hmaster, but regions num. is zero 
> after a long time. I check the hmaster log and found that master still use 
> old ip to connect this rs.
> This is hmaster's log below:
> PS: 10.11.21.140 is old ip of  rs dx-ape-regionserver1-online
> {code}
> 2015-08-04 17:24:00,081 INFO  [AM.ZK.Worker-pool2-t14141] 
> master.AssignmentManager: Assigning 
> solar_image,\x01Y\x8E\xA3y,1434968237206.4a1bdeec85b9f55b962596f9fb2cd07f. to 
> dx-ape-regionserver1-online,60020,1438679950072
> 2015-08-04 17:24:06,800 WARN  [AM.ZK.Worker-pool2-t14133] 
> master.AssignmentManager: Failed assignment of 
> solar_image,\x00\x94\x09\x8D\x95,1430991781025.b0f5b755f443d41cf306026a60675020.
>  to dx-ape-regionserver1-online,60020,1438679950072, trying to assign 
> elsewhere instead; try=3 of 10
> java.net.ConnectException: Connection timed out
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:578)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:868)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1543)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1442)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.openRegion(AdminProtos.java:20964)
> at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:671)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2097)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1577)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1550)
> at 
> org.apache.hadoop.hbase.master.handler.ClosedRegionHandler.process(ClosedRegionHandler.java:104)
> at 
> org.apache.had

[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653814#comment-14653814
 ] 

Hadoop QA commented on HBASE-14178:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12748653/HBASE-14178_v6.patch
  against master branch at commit 931e77d4507e1650c452cefadda450e0bf3f0528.
  ATTACHMENT ID: 12748653

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 3 zombie test(s):   
at 
org.apache.hadoop.hbase.client.TestReplicasClient.testSmallScanWithReplicas(TestReplicasClient.java:606)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14967//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14967//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14967//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14967//console

This message is automatically generated.

> regionserver blocks because of waiting for offsetLock
> -
>
> Key: HBASE-14178
> URL: https://issues.apache.org/jira/browse/HBASE-14178
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.6
>Reporter: Heng Chen
>Priority: Critical
> Fix For: 0.98.6
>
> Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
> HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
> HBASE-14178_v4.patch, HBASE-14178_v5.patch, HBASE-14178_v6.patch, jstack
>
>
> My regionserver blocks, and all client rpc timeout. 
> I print the regionserver's jstack,  it seems a lot of threads were blocked 
> for waiting offsetLock, detail infomation belows:
> PS:  my table's block cache is off
> {code}
> "B.DefaultRpcServer.handler=2,queue=2,port=60020" #82 daemon prio=5 os_prio=0 
> tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
> - locked <0x000773af7c18> (a 
> org.apache.hadoop.hbase.util.IdLock$Entry)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHe

[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2015-08-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653873#comment-14653873
 ] 

Ted Yu commented on HBASE-14178:


{code}
450 if (blockType == null) {
451   return true;
452 }
{code}
Should false be returned in the above condition?

> regionserver blocks because of waiting for offsetLock
> -
>
> Key: HBASE-14178
> URL: https://issues.apache.org/jira/browse/HBASE-14178
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.6
>Reporter: Heng Chen
>Priority: Critical
> Fix For: 0.98.6
>
> Attachments: HBASE-14178-0.98.patch, HBASE-14178.patch, 
> HBASE-14178_v1.patch, HBASE-14178_v2.patch, HBASE-14178_v3.patch, 
> HBASE-14178_v4.patch, HBASE-14178_v5.patch, HBASE-14178_v6.patch, jstack
>
>
> My regionserver blocks, and all client rpc timeout. 
> I print the regionserver's jstack,  it seems a lot of threads were blocked 
> for waiting offsetLock, detail infomation belows:
> PS:  my table's block cache is off
> {code}
> "B.DefaultRpcServer.handler=2,queue=2,port=60020" #82 daemon prio=5 os_prio=0 
> tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
> - locked <0x000773af7c18> (a 
> org.apache.hadoop.hbase.util.IdLock$Entry)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
> - locked <0x0005e5c55ad0> (a 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
> - <0x0005e5c55c08> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14150) Add BulkLoad functionality to HBase-Spark Module

2015-08-04 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14150:

Status: Patch Available  (was: Open)

Moving to Patch Available status so the QA bot will run.

> Add BulkLoad functionality to HBase-Spark Module
> 
>
> Key: HBASE-14150
> URL: https://issues.apache.org/jira/browse/HBASE-14150
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Reporter: Ted Malaska
>Assignee: Ted Malaska
> Attachments: HBASE-14150.1.patch, HBASE-14150.2.patch
>
>
> Add on to the work done in HBASE-13992 to add functionality to do a bulk load 
> from a given RDD.
> This will do the following:
> 1. figure out the number of regions and sort and partition the data correctly 
> to be written out to HFiles
> 2. Also unlike the MR bulkload I would like that the columns to be sorted in 
> the shuffle stage and not in the memory of the reducer.  This will allow this 
> design to support super wide records with out going out of memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14122) Client API for determining if server side supports cell level security

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653886#comment-14653886
 ] 

Andrew Purtell commented on HBASE-14122:


bq. You seem to be using 'UnsupportedOperationException' for backward 
compatibility, depending on it being thrown by the RPC facility if the method 
can not be located on the server side

Strictly speaking, I don't depend on it. If it's not supported on the server, 
the invocation will get back an IOE because the server couldn't process the 
call. The new API and the AccessControlClient and VisibilityClient utility 
methods will let that IOE out to the caller. However, I do try to do something 
nice in the shell so we can print a clean message instead of a stack trace. 
This relies on string matching, given how remote exceptions work. I think that's 
fine for the shell but too brittle to do in the API. What do you think? 
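
A minimal Java sketch of that shell-side idea (the shell itself is JRuby; the method name and printed message below are illustrative assumptions, not the actual patch):

{code}
import java.io.IOException;

boolean supportsCellSecurity(org.apache.hadoop.hbase.client.Admin admin) throws IOException {
  try {
    return callCellSecurityCapabilityRpc(admin); // hypothetical stand-in for the new API
  } catch (IOException ioe) {
    // Remote exceptions come back wrapped, so the shell falls back to string matching.
    if (String.valueOf(ioe.getMessage()).contains("UnsupportedOperationException")) {
      System.out.println("This master does not support the cell security capability RPC.");
      return false;
    }
    throw ioe;
  }
}
{code}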

> Client API for determining if server side supports cell level security
> --
>
> Key: HBASE-14122
> URL: https://issues.apache.org/jira/browse/HBASE-14122
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.2.0, 1.3.0
>
> Attachments: HBASE-14122-0.98.patch, HBASE-14122-branch-1.patch, 
> HBASE-14122.patch, HBASE-14122.patch
>
>
> Add a client API for determining if the server side supports cell level 
> security. 
> Ask the master, assuming as we do in many other instances that the master and 
> regionservers all have a consistent view of site configuration.
> Return {{true}} if all features required for cell level security are present, 
> {{false}} otherwise, or throw {{UnsupportedOperationException}} if the master 
> does not have support for the RPC call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-08-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13825:
---
Status: Patch Available  (was: Open)

Dang, forgot to set Patch Available; let's do that now...

> Get operations on large objects fail with protocol errors
> -
>
> Key: HBASE-13825
> URL: https://issues.apache.org/jira/browse/HBASE-13825
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1, 1.0.0
>Reporter: Dev Lakhani
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-13825-0.98.patch, HBASE-13825-branch-1.patch, 
> HBASE-13825.patch
>
>
> When performing a get operation on a column family with more than 64MB of 
> data, the operation fails with:
> Caused by: Portable(java.io.IOException): Call to host:port failed on local 
> exception: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
> at 
> org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
> This may be related to https://issues.apache.org/jira/browse/HBASE-11747 but 
> that issue is related to cluster status. 
> Scan and put operations on the same data work fine
> Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653898#comment-14653898
 ] 

Andrew Purtell commented on HBASE-13825:


If you have a sec [~esteban]: the branch-1 and 0.98 patches here incorporate 
your work on HBASE-14076. What do you think?

> Get operations on large objects fail with protocol errors
> -
>
> Key: HBASE-13825
> URL: https://issues.apache.org/jira/browse/HBASE-13825
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Dev Lakhani
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-13825-0.98.patch, HBASE-13825-branch-1.patch, 
> HBASE-13825.patch
>
>
> When performing a get operation on a column family with more than 64MB of 
> data, the operation fails with:
> Caused by: Portable(java.io.IOException): Call to host:port failed on local 
> exception: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
> at 
> org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
> This may be related to https://issues.apache.org/jira/browse/HBASE-11747 but 
> that issue is related to cluster status. 
> Scan and put operations on the same data work fine
> Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14183) Scanning hbase meta table is failing in master branch

2015-08-04 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14183:
--
Attachment: HBASE-14183-v1.patch

Don't know! Maybe I was in a hurry to catch the bus and did not check all the 
solutions; I will be more careful next time.

> Scanning hbase meta table is failing in master branch
> -
>
> Key: HBASE-14183
> URL: https://issues.apache.org/jira/browse/HBASE-14183
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0
>
> Attachments: HBASE-14183-v1.patch, HBASE-14183.patch
>
>
> As part of HBASE-14047 cleanup this issue has been introduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14185) Incorrect region names logged by MemStoreFlusher.java

2015-08-04 Thread Biju Nair (JIRA)
Biju Nair created HBASE-14185:
-

 Summary: Incorrect region names logged by MemStoreFlusher.java
 Key: HBASE-14185
 URL: https://issues.apache.org/jira/browse/HBASE-14185
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Biju Nair
Assignee: Biju Nair
Priority: Minor


In MemstoreFlusher the method 
[flushOneForGlobalPressure|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java#L142]
 logs incorrect region names which makes debugging issues a bit difficult. 
Instead of logging the secondary replica region names in 
[these|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java#L200]
 
[locations|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java#L205],
 the code logs the primary replica region names.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14186) Read mvcc vlong optimization

2015-08-04 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-14186:
--

 Summary: Read mvcc vlong optimization
 Key: HBASE-14186
 URL: https://issues.apache.org/jira/browse/HBASE-14186
 Project: HBase
  Issue Type: Sub-task
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0


{code}
for (int idx = 0; idx < remaining; idx++) {
  byte b = blockBuffer.getByteAfterPosition(offsetFromPos + idx);
  i = i << 8;
  i = i | (b & 0xFF);
}
{code}
The read is done as in the BIG_ENDIAN case.
After HBASE-12600 we tend to keep the mvcc, so the byte-by-byte read appears to 
eat up a lot of CPU time (in my test, HFileReaderImpl#_readMvccVersion comes 
out on top in terms of hot methods). We can optimize here by reading 4 or 2 bytes 
in one shot when the length of the vlong is more than 4 bytes. We will in turn 
use the UnsafeAccess methods, which handle endianness.
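
A minimal sketch of the chunked read, using a plain big-endian ByteBuffer purely for illustration (the patch itself intends to use the UnsafeAccess methods; the buffer type and method name here are assumptions, not the actual code):

{code}
import java.nio.ByteBuffer;

// Accumulate the remaining vlong bytes in 4- and 2-byte chunks instead of one
// byte at a time. ByteBuffer is big-endian by default, so getInt/getShort give
// the same result as the original shift-left-by-8 loop.
static long readVLongTail(ByteBuffer buf, int offset, int remaining) {
  long i = 0;
  while (remaining >= Integer.BYTES) {
    i = (i << 32) | (buf.getInt(offset) & 0xFFFFFFFFL);
    offset += Integer.BYTES;
    remaining -= Integer.BYTES;
  }
  if (remaining >= Short.BYTES) {
    i = (i << 16) | (buf.getShort(offset) & 0xFFFF);
    offset += Short.BYTES;
    remaining -= Short.BYTES;
  }
  if (remaining == 1) {
    i = (i << 8) | (buf.get(offset) & 0xFF);
  }
  return i;
}
{code}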




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14184) Fix indention and type-o in JavaHBaseContext

2015-08-04 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654019#comment-14654019
 ] 

Ted Malaska commented on HBASE-14184:
-

Thanks Andrew for the ship.

What should my next steps be?

> Fix indention and type-o in JavaHBaseContext
> 
>
> Key: HBASE-14184
> URL: https://issues.apache.org/jira/browse/HBASE-14184
> Project: HBase
>  Issue Type: Wish
>  Components: spark
>Reporter: Ted Malaska
>Assignee: Ted Malaska
>Priority: Minor
> Attachments: HBASE-14184.3.patch
>
>
> Looks like there is a Ddd that should be Rdd.
> Also looks like everything is one space over too much



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14150) Add BulkLoad functionality to HBase-Spark Module

2015-08-04 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654027#comment-14654027
 ] 

Ted Malaska commented on HBASE-14150:
-

Thanks [~busbey]

Trying to get a lot done this week:
14150, 14184, and now I need to get 14181 started and done



> Add BulkLoad functionality to HBase-Spark Module
> 
>
> Key: HBASE-14150
> URL: https://issues.apache.org/jira/browse/HBASE-14150
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Reporter: Ted Malaska
>Assignee: Ted Malaska
> Attachments: HBASE-14150.1.patch, HBASE-14150.2.patch
>
>
> Add on to the work done in HBASE-13992 to add functionality to do a bulk load 
> from a given RDD.
> This will do the following:
> 1. figure out the number of regions and sort and partition the data correctly 
> to be written out to HFiles
> 2. Also unlike the MR bulkload I would like that the columns to be sorted in 
> the shuffle stage and not in the memory of the reducer.  This will allow this 
> design to support super wide records with out going out of memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14183) Scanning hbase meta table is failing in master branch

2015-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654032#comment-14654032
 ] 

Hadoop QA commented on HBASE-14183:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12748660/HBASE-14183.patch
  against master branch at commit 931e77d4507e1650c452cefadda450e0bf3f0528.
  ATTACHMENT ID: 12748660

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14968//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14968//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14968//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14968//console

This message is automatically generated.

> Scanning hbase meta table is failing in master branch
> -
>
> Key: HBASE-14183
> URL: https://issues.apache.org/jira/browse/HBASE-14183
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0
>
> Attachments: HBASE-14183-v1.patch, HBASE-14183.patch
>
>
> As part of HBASE-14047 cleanup this issue has been introduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14186) Read mvcc vlong optimization

2015-08-04 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14186:
---
Attachment: HBASE-14186.patch

> Read mvcc vlong optimization
> 
>
> Key: HBASE-14186
> URL: https://issues.apache.org/jira/browse/HBASE-14186
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-14186.patch
>
>
> {code}
> for (int idx = 0; idx < remaining; idx++) {
>   byte b = blockBuffer.getByteAfterPosition(offsetFromPos + idx);
>   i = i << 8;
>   i = i | (b & 0xFF);
> }
> {code}
> Doing the read as in case of BIG_ENDIAN.
> After HBASE-12600, we tend to keep the mvcc and so byte by byte read looks 
> eating up lot of CPU time. (In my test HFileReaderImpl#_readMvccVersion comes 
> on top in terms of hot methods). We can optimize here by reading 4 or 2 bytes 
> in one shot when the length of the vlong is more than 4 bytes. We will in 
> turn use UnsafeAccess methods which handles ENDIAN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14186) Read mvcc vlong optimization

2015-08-04 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14186:
---
Status: Patch Available  (was: Open)

> Read mvcc vlong optimization
> 
>
> Key: HBASE-14186
> URL: https://issues.apache.org/jira/browse/HBASE-14186
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-14186.patch
>
>
> {code}
> for (int idx = 0; idx < remaining; idx++) {
>   byte b = blockBuffer.getByteAfterPosition(offsetFromPos + idx);
>   i = i << 8;
>   i = i | (b & 0xFF);
> }
> {code}
> Doing the read as in case of BIG_ENDIAN.
> After HBASE-12600, we tend to keep the mvcc and so byte by byte read looks 
> eating up lot of CPU time. (In my test HFileReaderImpl#_readMvccVersion comes 
> on top in terms of hot methods). We can optimize here by reading 4 or 2 bytes 
> in one shot when the length of the vlong is more than 4 bytes. We will in 
> turn use UnsafeAccess methods which handles ENDIAN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14186) Read mvcc vlong optimization

2015-08-04 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654042#comment-14654042
 ] 

Anoop Sam John commented on HBASE-14186:


On JMH benchmark, the difference is 
{code}
Benchmark               Mode   Cnt          Score         Error  Units
MBBTest.readMvccNew     thrpt    6  122467888.294 ± 2143187.504  ops/s
MBBTest.readMvccOldway  thrpt    6   75684230.226 ± 9943572.564  ops/s
{code}

I also ran a PE test with all data in the offheap cache. After noticing that 
_readMvccVersion() was the hot method, I made this optimization. It improves 
the average latency of the thread run by ~15%.

> Read mvcc vlong optimization
> 
>
> Key: HBASE-14186
> URL: https://issues.apache.org/jira/browse/HBASE-14186
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-14186.patch
>
>
> {code}
> for (int idx = 0; idx < remaining; idx++) {
>   byte b = blockBuffer.getByteAfterPosition(offsetFromPos + idx);
>   i = i << 8;
>   i = i | (b & 0xFF);
> }
> {code}
> Doing the read as in case of BIG_ENDIAN.
> After HBASE-12600, we tend to keep the mvcc and so byte by byte read looks 
> eating up lot of CPU time. (In my test HFileReaderImpl#_readMvccVersion comes 
> on top in terms of hot methods). We can optimize here by reading 4 or 2 bytes 
> in one shot when the length of the vlong is more than 4 bytes. We will in 
> turn use UnsafeAccess methods which handles ENDIAN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14186) Read mvcc vlong optimization

2015-08-04 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654045#comment-14654045
 ] 

Anoop Sam John commented on HBASE-14186:


When the read has to happen from a MultiByteBuff, another optimization is 
possible. I will test that more and handle it in another JIRA.

> Read mvcc vlong optimization
> 
>
> Key: HBASE-14186
> URL: https://issues.apache.org/jira/browse/HBASE-14186
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-14186.patch
>
>
> {code}
> for (int idx = 0; idx < remaining; idx++) {
>   byte b = blockBuffer.getByteAfterPosition(offsetFromPos + idx);
>   i = i << 8;
>   i = i | (b & 0xFF);
> }
> {code}
> Doing the read as in case of BIG_ENDIAN.
> After HBASE-12600, we tend to keep the mvcc and so byte by byte read looks 
> eating up lot of CPU time. (In my test HFileReaderImpl#_readMvccVersion comes 
> on top in terms of hot methods). We can optimize here by reading 4 or 2 bytes 
> in one shot when the length of the vlong is more than 4 bytes. We will in 
> turn use UnsafeAccess methods which handles ENDIAN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14122) Client API for determining if server side supports cell level security

2015-08-04 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654059#comment-14654059
 ] 

Jerry He commented on HBASE-14122:
--

Sounds good.
Not updating the API is reasonable for now.  
The remote exception propagation has been a little opaque to me sometimes.

> Client API for determining if server side supports cell level security
> --
>
> Key: HBASE-14122
> URL: https://issues.apache.org/jira/browse/HBASE-14122
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.2.0, 1.3.0
>
> Attachments: HBASE-14122-0.98.patch, HBASE-14122-branch-1.patch, 
> HBASE-14122.patch, HBASE-14122.patch
>
>
> Add a client API for determining if the server side supports cell level 
> security. 
> Ask the master, assuming as we do in many other instances that the master and 
> regionservers all have a consistent view of site configuration.
> Return {{true}} if all features required for cell level security are present, 
> {{false}} otherwise, or throw {{UnsupportedOperationException}} if the master 
> does not have support for the RPC call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14185) Incorrect region names logged by MemStoreFlusher.java

2015-08-04 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-14185:
--
Attachment: HBASE-14185.patch

Patch with changes to log the secondary region replica names.

> Incorrect region names logged by MemStoreFlusher.java
> -
>
> Key: HBASE-14185
> URL: https://issues.apache.org/jira/browse/HBASE-14185
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
> Attachments: HBASE-14185.patch
>
>
> In MemstoreFlusher the method 
> [flushOneForGlobalPressure|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java#L142]
>  logs incorrect region names which makes debugging issues a bit difficult. 
> Instead of logging the secondary replica region names in 
> [these|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java#L200]
>  
> [locations|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java#L205],
>  the code logs the primary replica region names.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654166#comment-14654166
 ] 

Hadoop QA commented on HBASE-13825:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12748597/HBASE-13825-branch-1.patch
  against branch-1 branch at commit 931e77d4507e1650c452cefadda450e0bf3f0528.
  ATTACHMENT ID: 12748597

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3828 checkstyle errors (more than the master's current 3825 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14970//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14970//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14970//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14970//console

This message is automatically generated.

> Get operations on large objects fail with protocol errors
> -
>
> Key: HBASE-13825
> URL: https://issues.apache.org/jira/browse/HBASE-13825
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Dev Lakhani
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-13825-0.98.patch, HBASE-13825-branch-1.patch, 
> HBASE-13825.patch
>
>
> When performing a get operation on a column family with more than 64MB of 
> data, the operation fails with:
> Caused by: Portable(java.io.IOException): Call to host:port failed on local 
> exception: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
> at 
> org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
> This may be related to https://issues.apache.org/jira/browse/HBASE-11747 but 
> that issue is related to cluster status. 
> Scan and put operations on the same data work fine
> Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14150) Add BulkLoad functionality to HBase-Spark Module

2015-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654167#comment-14654167
 ] 

Hadoop QA commented on HBASE-14150:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12748663/HBASE-14150.2.patch
  against master branch at commit 931e77d4507e1650c452cefadda450e0bf3f0528.
  ATTACHMENT ID: 12748663

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14969//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14969//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14969//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14969//console

This message is automatically generated.

> Add BulkLoad functionality to HBase-Spark Module
> 
>
> Key: HBASE-14150
> URL: https://issues.apache.org/jira/browse/HBASE-14150
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Reporter: Ted Malaska
>Assignee: Ted Malaska
> Attachments: HBASE-14150.1.patch, HBASE-14150.2.patch
>
>
> Add on to the work done in HBASE-13992 to add functionality to do a bulk load 
> from a given RDD.
> This will do the following:
> 1. figure out the number of regions and sort and partition the data correctly 
> to be written out to HFiles
> 2. Also unlike the MR bulkload I would like that the columns to be sorted in 
> the shuffle stage and not in the memory of the reducer.  This will allow this 
> design to support super wide records with out going out of memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654180#comment-14654180
 ] 

Andrew Purtell commented on HBASE-13825:


The precommit test was bad because someone killed our test JVM externally:
{noformat}
xecutionException: java.lang.RuntimeException: The forked VM terminated without 
properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server && 
/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51/jre/bin/java 
-enableassertions -XX:MaxDirectMemorySize=1G -Xmx2800m -XX:MaxPermSize=256m 
-Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true 
-Djava.awt.headless=true -jar 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/surefire/surefirebooter8543005017696418773.jar
 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/surefire/surefire2508603723119457542tmp
 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/surefire/surefire_9171447714369068258025tmp
{noformat}

The "zombie" was AmbariManagementControllerTest, that's not us. 

Tests pass for me locally. 

Let me check on that checkstyle thing

> Get operations on large objects fail with protocol errors
> -
>
> Key: HBASE-13825
> URL: https://issues.apache.org/jira/browse/HBASE-13825
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Dev Lakhani
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-13825-0.98.patch, HBASE-13825-branch-1.patch, 
> HBASE-13825.patch
>
>
> When performing a get operation on a column family with more than 64MB of 
> data, the operation fails with:
> Caused by: Portable(java.io.IOException): Call to host:port failed on local 
> exception: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
> at 
> org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
> This may be related to https://issues.apache.org/jira/browse/HBASE-11747 but 
> that issue is related to cluster status. 
> Scan and put operations on the same data work fine
> Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14186) Read mvcc vlong optimization

2015-08-04 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654201#comment-14654201
 ] 

stack commented on HBASE-14186:
---

Excellent.

Here, at {{if (remaining >= Bytes.SIZEOF_INT)}}: is it possible that we could 
come in here with only a short amount to read, so we'd skip the SIZEOF_INT 
block? If so, the shift by 16 bits in the second block would not be needed 
(which might not be a problem if we're left-shifting 0)?

Otherwise, +1. Nice.



> Read mvcc vlong optimization
> 
>
> Key: HBASE-14186
> URL: https://issues.apache.org/jira/browse/HBASE-14186
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-14186.patch
>
>
> {code}
> for (int idx = 0; idx < remaining; idx++) {
>   byte b = blockBuffer.getByteAfterPosition(offsetFromPos + idx);
>   i = i << 8;
>   i = i | (b & 0xFF);
> }
> {code}
> Doing the read as in case of BIG_ENDIAN.
> After HBASE-12600, we tend to keep the mvcc and so byte by byte read looks 
> eating up lot of CPU time. (In my test HFileReaderImpl#_readMvccVersion comes 
> on top in terms of hot methods). We can optimize here by reading 4 or 2 bytes 
> in one shot when the length of the vlong is more than 4 bytes. We will in 
> turn use UnsafeAccess methods which handles ENDIAN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654218#comment-14654218
 ] 

Andrew Purtell commented on HBASE-13825:


Valid checkstyle issues:
- ClusterID: Unused import - com.google.protobuf.InvalidProtocolBufferException. Missed that one.
- HColumnDescriptor: Unused import - com.google.protobuf.InvalidProtocolBufferException. Also missed this one.

Nothing else jumps out as relevant or related. New patches coming up.


> Get operations on large objects fail with protocol errors
> -
>
> Key: HBASE-13825
> URL: https://issues.apache.org/jira/browse/HBASE-13825
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Dev Lakhani
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-13825-0.98.patch, HBASE-13825-branch-1.patch, 
> HBASE-13825.patch
>
>
> When performing a get operation on a column family with more than 64MB of 
> data, the operation fails with:
> Caused by: Portable(java.io.IOException): Call to host:port failed on local 
> exception: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
> at 
> org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
> This may be related to https://issues.apache.org/jira/browse/HBASE-11747 but 
> that issue is related to cluster status. 
> Scan and put operations on the same data work fine
> Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.
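
For context, a minimal sketch of the workaround the exception message itself points at: route the merge through a CodedInputStream whose size limit has been raised above protobuf's 64MB default. This is illustrative only, assuming the protobuf 2.x API and a hypothetical helper name; it is not necessarily the exact change attached here.

{code}
import java.io.IOException;

import com.google.protobuf.CodedInputStream;
import com.google.protobuf.Message;

// Hypothetical helper for illustration; the class and method names are made up for this sketch.
public final class LargeMessageMergeSketch {
  private LargeMessageMergeSketch() {}

  // Merges `bytes` into `builder` without tripping the default 64MB size limit.
  public static void mergeFrom(Message.Builder builder, byte[] bytes) throws IOException {
    CodedInputStream in = CodedInputStream.newInstance(bytes);
    in.setSizeLimit(bytes.length);   // raise the limit to the actual payload size
    builder.mergeFrom(in);
    in.checkLastTagWas(0);           // verify the whole message was consumed
  }
}
{code}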



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14183) Scanning hbase meta table is failing in master branch

2015-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654220#comment-14654220
 ] 

Hadoop QA commented on HBASE-14183:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12748691/HBASE-14183-v1.patch
  against master branch at commit 931e77d4507e1650c452cefadda450e0bf3f0528.
  ATTACHMENT ID: 12748691

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 4 zombie test(s):   
at 
org.apache.hadoop.hbase.client.TestReplicasClient.testSmallScanWithReplicas(TestReplicasClient.java:606)
at 
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClient.testRestoreSchemaChange(TestRestoreSnapshotFromClient.java:210)
at 
org.apache.hadoop.hbase.client.TestAdmin2.testWALRollWriting(TestAdmin2.java:543)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14971//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14971//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14971//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14971//console

This message is automatically generated.

> Scanning hbase meta table is failing in master branch
> -
>
> Key: HBASE-14183
> URL: https://issues.apache.org/jira/browse/HBASE-14183
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0
>
> Attachments: HBASE-14183-v1.patch, HBASE-14183.patch
>
>
> As part of HBASE-14047 cleanup this issue has been introduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654225#comment-14654225
 ] 

Andrew Purtell commented on HBASE-13825:


New patches for branch-1 and 0.98 fix unused imports.

> Get operations on large objects fail with protocol errors
> -
>
> Key: HBASE-13825
> URL: https://issues.apache.org/jira/browse/HBASE-13825
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Dev Lakhani
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-13825-0.98.patch, HBASE-13825-0.98.patch, 
> HBASE-13825-branch-1.patch, HBASE-13825-branch-1.patch, HBASE-13825.patch
>
>
> When performing a get operation on a column family with more than 64MB of 
> data, the operation fails with:
> Caused by: Portable(java.io.IOException): Call to host:port failed on local 
> exception: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
> at 
> org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
> This may be related to https://issues.apache.org/jira/browse/HBASE-11747 but 
> that issue is related to cluster status. 
> Scan and put operations on the same data work fine
> Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-08-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13825:
---
Attachment: HBASE-13825-branch-1.patch
HBASE-13825-0.98.patch

> Get operations on large objects fail with protocol errors
> -
>
> Key: HBASE-13825
> URL: https://issues.apache.org/jira/browse/HBASE-13825
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Dev Lakhani
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-13825-0.98.patch, HBASE-13825-0.98.patch, 
> HBASE-13825-branch-1.patch, HBASE-13825-branch-1.patch, HBASE-13825.patch
>
>
> When performing a get operation on a column family with more than 64MB of 
> data, the operation fails with:
> Caused by: Portable(java.io.IOException): Call to host:port failed on local 
> exception: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
> at 
> org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
> This may be related to https://issues.apache.org/jira/browse/HBASE-11747 but 
> that issue is related to cluster status. 
> Scan and put operations on the same data work fine
> Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14122) Client API for determining if server side supports cell level security

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654241#comment-14654241
 ] 

Andrew Purtell commented on HBASE-14122:


Thanks [~jerryhe]. I'll get around to updating the patch with your suggestion 
for making the shell security commands more friendly when the new API is 
available shortly. 

> Client API for determining if server side supports cell level security
> --
>
> Key: HBASE-14122
> URL: https://issues.apache.org/jira/browse/HBASE-14122
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.2.0, 1.3.0
>
> Attachments: HBASE-14122-0.98.patch, HBASE-14122-branch-1.patch, 
> HBASE-14122.patch, HBASE-14122.patch
>
>
> Add a client API for determining if the server side supports cell level 
> security. 
> Ask the master, assuming as we do in many other instances that the master and 
> regionservers all have a consistent view of site configuration.
> Return {{true}} if all features required for cell level security are present, 
> {{false}} otherwise, or throw {{UnsupportedOperationException}} if the master 
> does not have support for the RPC call.
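
Purely for illustration, here is a sketch of the client-side handling that contract implies; all names below are hypothetical and stand in for whatever API this issue ultimately adds. The point is simply to treat UnsupportedOperationException from an older master the same as "feature unavailable".

{code}
// Hypothetical types and method names; only the true/false/throws contract
// comes from the issue description.
interface SecurityCapabilityClient {
  boolean isCellLevelSecuritySupported();
}

final class CellSecurityProbe {
  // Returns true only when the master reports full cell-level security support.
  static boolean probe(SecurityCapabilityClient client) {
    try {
      return client.isCellLevelSecuritySupported();
    } catch (UnsupportedOperationException e) {
      // Master predates the RPC call; assume the feature is unavailable.
      return false;
    }
  }
}
{code}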



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654257#comment-14654257
 ] 

Andrew Purtell commented on HBASE-14085:


What else can I do to help, [~busbey]? 

We can ignore long lines in the interest of unblocking releases. No need for 
further revs on that account, IMHO. 

We can document maintenance of the NOTICE and LICENSE files (and related files) 
as a new, non-blocker issue.

You indicate we are waiting on legal regarding the license for a subset of the files 
incorporated in the fat JRuby jar. We have made many previous releases that 
include JRuby, so resuming this practice until we hear otherwise doesn't 
materially change anything. That shouldn't be a blocker. 

Up on reviewboard, you mention:
bq. I think I figured out why the centralized supplemental models and resource 
bundle weren't working.

That's great, but if the current patch as-is works, then we can improve it with 
a follow on issue.

What else?

> Correct LICENSE and NOTICE files in artifacts
> -
>
> Key: HBASE-14085
> URL: https://issues.apache.org/jira/browse/HBASE-14085
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2
>
> Attachments: HBASE-14085.1.patch, HBASE-14085.2.patch
>
>
> +Problems:
> * checked LICENSE/NOTICE on binary
> ** binary artifact LICENSE file has not been updated to include the 
> additional license terms for contained third party dependencies
> ** binary artifact NOTICE file does not include a copyright line
> ** binary artifact NOTICE file does not appear to propagate appropriate info 
> from the NOTICE files from bundled dependencies
> * checked NOTICE on source
> ** source artifact NOTICE file does not include a copyright line
> ** source NOTICE file includes notices for third party dependencies not 
> included in the artifact
> * checked NOTICE files shipped in maven jars
> ** copyright line only says 2015 when it's very likely the contents are under 
> copyright prior to this year
> * nit: NOTICE file on jars in maven say "HBase - ${module}" rather than 
> "Apache HBase - ${module}" as required 
> refs:
> http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
> http://www.apache.org/dev/licensing-howto.html#binary
> http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14146) Once replication sees an error it slows down forever

2015-08-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14146:
---
Fix Version/s: 1.3.0
   1.1.2
   1.0.2
   0.98.14

Picked this bug fix back to the other active release branches as well. It applies 
equally there, with a trivial fixup for 0.98. To be thorough, I ran the 
replication unit tests after the change was applied to the affected branches. 
No problems, of course.

> Once replication sees an error it slows down forever
> 
>
> Key: HBASE-14146
> URL: https://issues.apache.org/jira/browse/HBASE-14146
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-14146.patch
>
>
> sleepMultiplier inside of HBaseInterClusterReplicationEndpoint and 
> ReplicationSource never gets reset to zero.
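
In generic terms (not the actual HBase classes named above), the bug pattern looks like the sketch below: an exponential-backoff retry loop has to reset its multiplier to the initial value once an operation succeeds, otherwise a single transient error leaves every later iteration sleeping at or near the maximum backoff.

{code}
// Generic backoff sketch; the constants and the BooleanSupplier stand-in are illustrative.
final class BackoffLoopSketch {
  private static final long BASE_SLEEP_MS = 100;
  private static final int MAX_MULTIPLIER = 10;

  static void run(java.util.function.BooleanSupplier shipEdits) throws InterruptedException {
    int sleepMultiplier = 1;
    while (!Thread.currentThread().isInterrupted()) {
      if (shipEdits.getAsBoolean()) {
        sleepMultiplier = 1;   // the missing step: forget past failures after a success
      } else {
        Thread.sleep(BASE_SLEEP_MS * sleepMultiplier);
        if (sleepMultiplier < MAX_MULTIPLIER) {
          sleepMultiplier++;   // back off further after each failure
        }
      }
    }
  }
}
{code}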



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14186) Read mvcc vlong optimization

2015-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654307#comment-14654307
 ] 

Hadoop QA commented on HBASE-14186:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12748694/HBASE-14186.patch
  against master branch at commit 931e77d4507e1650c452cefadda450e0bf3f0528.
  ATTACHMENT ID: 12748694

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14972//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14972//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14972//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14972//console

This message is automatically generated.

> Read mvcc vlong optimization
> 
>
> Key: HBASE-14186
> URL: https://issues.apache.org/jira/browse/HBASE-14186
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-14186.patch
>
>
> {code}
> for (int idx = 0; idx < remaining; idx++) {
>   byte b = blockBuffer.getByteAfterPosition(offsetFromPos + idx);
>   i = i << 8;
>   i = i | (b & 0xFF);
> }
> {code}
> Doing the read as in the BIG_ENDIAN case.
> After HBASE-12600, we tend to keep the mvcc, and so the byte-by-byte read looks 
> to be eating up a lot of CPU time. (In my test, HFileReaderImpl#_readMvccVersion comes 
> out on top in terms of hot methods.) We can optimize here by reading 4 or 2 bytes 
> in one shot when the length of the vlong is more than 4 bytes. We will in 
> turn use the UnsafeAccess methods, which handle endianness.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-04 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654308#comment-14654308
 ] 

Sean Busbey commented on HBASE-14085:
-

{quote}
Up on reviewboard, you mention:

bq. I think I figured out why the centralized supplemental models and resource 
bundle weren't working.

That's great, but if the current patch as-is works, then we can improve it with 
a follow on issue.
What else?
{quote}

I have this working now and am currently tweaking it to avoid having per-module 
appended-resources. I'll stop fiddling and put it up now.

{quote}
You indicate we are waiting on legal on the license for a subset of the files 
incorporated in the fat JRuby jar. We have made many previous releases that 
include JRuby so resuming this practice until we hear otherwise doesn't 
materially change anything. That shouldn't be a blocker.
{quote}

Release votes are majority, so the PMC can decide this as it will. For me 
personally, the way I read my responsibility to the foundation as a PMC member 
means I'll be voting -1 until legal rules on the license or we replace it.

> Correct LICENSE and NOTICE files in artifacts
> -
>
> Key: HBASE-14085
> URL: https://issues.apache.org/jira/browse/HBASE-14085
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2
>
> Attachments: HBASE-14085.1.patch, HBASE-14085.2.patch
>
>
> +Problems:
> * checked LICENSE/NOTICE on binary
> ** binary artifact LICENSE file has not been updated to include the 
> additional license terms for contained third party dependencies
> ** binary artifact NOTICE file does not include a copyright line
> ** binary artifact NOTICE file does not appear to propagate appropriate info 
> from the NOTICE files from bundled dependencies
> * checked NOTICE on source
> ** source artifact NOTICE file does not include a copyright line
> ** source NOTICE file includes notices for third party dependencies not 
> included in the artifact
> * checked NOTICE files shipped in maven jars
> ** copyright line only says 2015 when it's very likely the contents are under 
> copyright prior to this year
> * nit: NOTICE file on jars in maven say "HBase - ${module}" rather than 
> "Apache HBase - ${module}" as required 
> refs:
> http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
> http://www.apache.org/dev/licensing-howto.html#binary
> http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14185) Incorrect region names logged by MemStoreFlusher.java

2015-08-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14185:
---
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

lgtm

> Incorrect region names logged by MemStoreFlusher.java
> -
>
> Key: HBASE-14185
> URL: https://issues.apache.org/jira/browse/HBASE-14185
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
> Attachments: HBASE-14185.patch
>
>
> In MemstoreFlusher the method 
> [flushOneForGlobalPressure|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java#L142]
>  logs incorrect region names, which makes debugging issues a bit difficult. 
> Instead of logging the secondary replica region names in 
> [these|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java#L200]
>  
> [locations|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java#L205],
>  the code logs the primary replica region names.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-04 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14085:

Status: Open  (was: Patch Available)

> Correct LICENSE and NOTICE files in artifacts
> -
>
> Key: HBASE-14085
> URL: https://issues.apache.org/jira/browse/HBASE-14085
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2
>
> Attachments: HBASE-14085.1.patch, HBASE-14085.2.patch
>
>
> +Problems:
> * checked LICENSE/NOTICE on binary
> ** binary artifact LICENSE file has not been updated to include the 
> additional license terms for contained third party dependencies
> ** binary artifact NOTICE file does not include a copyright line
> ** binary artifact NOTICE file does not appear to propagate appropriate info 
> from the NOTICE files from bundled dependencies
> * checked NOTICE on source
> ** source artifact NOTICE file does not include a copyright line
> ** source NOTICE file includes notices for third party dependencies not 
> included in the artifact
> * checked NOTICE files shipped in maven jars
> ** copyright line only says 2015 when it's very likely the contents are under 
> copyright prior to this year
> * nit: NOTICE file on jars in maven say "HBase - ${module}" rather than 
> "Apache HBase - ${module}" as required 
> refs:
> http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
> http://www.apache.org/dev/licensing-howto.html#binary
> http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-04 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14085:

Attachment: HBASE-14085.3.patch

-03

* pulls the supplemental models and most of the appended-resources into a 
common module to deduplicate maintenance
* wraps long lines in licenses
* updates LEGAL to remove the reference to JRuby dev
* removes the reference to cygwin

> Correct LICENSE and NOTICE files in artifacts
> -
>
> Key: HBASE-14085
> URL: https://issues.apache.org/jira/browse/HBASE-14085
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2
>
> Attachments: HBASE-14085.1.patch, HBASE-14085.2.patch, 
> HBASE-14085.3.patch
>
>
> +Problems:
> * checked LICENSE/NOTICE on binary
> ** binary artifact LICENSE file has not been updated to include the 
> additional license terms for contained third party dependencies
> ** binary artifact NOTICE file does not include a copyright line
> ** binary artifact NOTICE file does not appear to propagate appropriate info 
> from the NOTICE files from bundled dependencies
> * checked NOTICE on source
> ** source artifact NOTICE file does not include a copyright line
> ** source NOTICE file includes notices for third party dependencies not 
> included in the artifact
> * checked NOTICE files shipped in maven jars
> ** copyright line only says 2015 when it's very likely the contents are under 
> copyright prior to this year
> * nit: NOTICE file on jars in maven say "HBase - ${module}" rather than 
> "Apache HBase - ${module}" as required 
> refs:
> http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
> http://www.apache.org/dev/licensing-howto.html#binary
> http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-04 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654336#comment-14654336
 ] 

Sean Busbey commented on HBASE-14085:
-

[~apurtell] if you're fine with v03 I'll go ahead and push it and then file a 
follow on for the nice-to-have clean up I'd like to do to remove the remaining 
appended-resources directories.

> Correct LICENSE and NOTICE files in artifacts
> -
>
> Key: HBASE-14085
> URL: https://issues.apache.org/jira/browse/HBASE-14085
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2
>
> Attachments: HBASE-14085.1.patch, HBASE-14085.2.patch, 
> HBASE-14085.3.patch
>
>
> +Problems:
> * checked LICENSE/NOTICE on binary
> ** binary artifact LICENSE file has not been updated to include the 
> additional license terms for contained third party dependencies
> ** binary artifact NOTICE file does not include a copyright line
> ** binary artifact NOTICE file does not appear to propagate appropriate info 
> from the NOTICE files from bundled dependencies
> * checked NOTICE on source
> ** source artifact NOTICE file does not include a copyright line
> ** source NOTICE file includes notices for third party dependencies not 
> included in the artifact
> * checked NOTICE files shipped in maven jars
> ** copyright line only says 2015 when it's very likely the contents are under 
> copyright prior to this year
> * nit: NOTICE file on jars in maven say "HBase - ${module}" rather than 
> "Apache HBase - ${module}" as required 
> refs:
> http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
> http://www.apache.org/dev/licensing-howto.html#binary
> http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14146) Once replication sees an error it slows down forever

2015-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654343#comment-14654343
 ] 

Hudson commented on HBASE-14146:


FAILURE: Integrated in HBase-1.0 #1000 (See 
[https://builds.apache.org/job/HBase-1.0/1000/])
HBASE-14146 Fix Once replication sees an error it slows down forever (apurtell: 
rev e0fa13685ac78132dfb1a46b18d089b2f55b9b22)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java


> Once replication sees an error it slows down forever
> 
>
> Key: HBASE-14146
> URL: https://issues.apache.org/jira/browse/HBASE-14146
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-14146.patch
>
>
> sleepMultiplier inside of HBaseInterClusterReplicationEndpoint and 
> ReplicationSource never gets reset to zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-04 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14085:

Status: Patch Available  (was: Open)

> Correct LICENSE and NOTICE files in artifacts
> -
>
> Key: HBASE-14085
> URL: https://issues.apache.org/jira/browse/HBASE-14085
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2
>
> Attachments: HBASE-14085.1.patch, HBASE-14085.2.patch, 
> HBASE-14085.3.patch
>
>
> +Problems:
> * checked LICENSE/NOTICE on binary
> ** binary artifact LICENSE file has not been updated to include the 
> additional license terms for contained third party dependencies
> ** binary artifact NOTICE file does not include a copyright line
> ** binary artifact NOTICE file does not appear to propagate appropriate info 
> from the NOTICE files from bundled dependencies
> * checked NOTICE on source
> ** source artifact NOTICE file does not include a copyright line
> ** source NOTICE file includes notices for third party dependencies not 
> included in the artifact
> * checked NOTICE files shipped in maven jars
> ** copyright line only says 2015 when it's very likely the contents are under 
> copyright prior to this year
> * nit: NOTICE file on jars in maven say "HBase - ${module}" rather than 
> "Apache HBase - ${module}" as required 
> refs:
> http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
> http://www.apache.org/dev/licensing-howto.html#binary
> http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14183) Scanning hbase meta table is failing in master branch

2015-08-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654349#comment-14654349
 ] 

Ted Yu commented on HBASE-14183:


+1 on patch v2.

> Scanning hbase meta table is failing in master branch
> -
>
> Key: HBASE-14183
> URL: https://issues.apache.org/jira/browse/HBASE-14183
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0
>
> Attachments: HBASE-14183-v1.patch, HBASE-14183.patch
>
>
> As part of HBASE-14047 cleanup this issue has been introduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14146) Once replication sees an error it slows down forever

2015-08-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654365#comment-14654365
 ] 

Hudson commented on HBASE-14146:


FAILURE: Integrated in HBase-1.1 #596 (See 
[https://builds.apache.org/job/HBase-1.1/596/])
HBASE-14146 Fix Once replication sees an error it slows down forever (apurtell: 
rev 7dcd3c0bdfa8246cb7ca04bd1e21bae37776130b)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java


> Once replication sees an error it slows down forever
> 
>
> Key: HBASE-14146
> URL: https://issues.apache.org/jira/browse/HBASE-14146
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-14146.patch
>
>
> sleepMultiplier inside of HBaseInterClusterReplicationEndpoint and 
> ReplicationSource never gets reset to zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-08-04 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654372#comment-14654372
 ] 

Esteban Gutierrez commented on HBASE-13825:
---

+1 [~apurtell]. Also, I think you addressed some of the comments from 
[~anoopsamjohn] on HBASE-14076. I'm going to open a JIRA to port the changes 
to master as well. Thanks!

> Get operations on large objects fail with protocol errors
> -
>
> Key: HBASE-13825
> URL: https://issues.apache.org/jira/browse/HBASE-13825
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Dev Lakhani
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-13825-0.98.patch, HBASE-13825-0.98.patch, 
> HBASE-13825-branch-1.patch, HBASE-13825-branch-1.patch, HBASE-13825.patch
>
>
> When performing a get operation on a column family with more than 64MB of 
> data, the operation fails with:
> Caused by: Portable(java.io.IOException): Call to host:port failed on local 
> exception: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
> at 
> org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
> This may be related to https://issues.apache.org/jira/browse/HBASE-11747 but 
> that issue is related to cluster status. 
> Scan and put operations on the same data work fine
> Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654414#comment-14654414
 ] 

Andrew Purtell commented on HBASE-14085:


bq. For me personally, the way I read my responsibility to the foundation as a 
PMC member means I'll be voting -1 until legal rules on the license or we 
replace it.

The way I read LEGAL-222, it is not as clear-cut as I think is implied here. 

The question concerns a small number of files bundled in the JRuby jar, which is 
only included in the binary convenience release artifacts.

If there is a problem, we have larger issues than just the slate of release 
candidates on deck: every available release in the archive and on the mirrors 
is affected. That said, I buy the argument that we can't release something that 
has a question mark over it.

I believe it is immediately possible to resume releases, as long as we stick to 
source-only releases until the matter is cleared up.

It would also be possible to resume releases that include binary artifacts, as long 
as we exclude the binary artifacts for the hbase-shell module. 


> Correct LICENSE and NOTICE files in artifacts
> -
>
> Key: HBASE-14085
> URL: https://issues.apache.org/jira/browse/HBASE-14085
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2
>
> Attachments: HBASE-14085.1.patch, HBASE-14085.2.patch, 
> HBASE-14085.3.patch
>
>
> +Problems:
> * checked LICENSE/NOTICE on binary
> ** binary artifact LICENSE file has not been updated to include the 
> additional license terms for contained third party dependencies
> ** binary artifact NOTICE file does not include a copyright line
> ** binary artifact NOTICE file does not appear to propagate appropriate info 
> from the NOTICE files from bundled dependencies
> * checked NOTICE on source
> ** source artifact NOTICE file does not include a copyright line
> ** source NOTICE file includes notices for third party dependencies not 
> included in the artifact
> * checked NOTICE files shipped in maven jars
> ** copyright line only says 2015 when it's very likely the contents are under 
> copyright prior to this year
> * nit: NOTICE file on jars in maven say "HBase - ${module}" rather than 
> "Apache HBase - ${module}" as required 
> refs:
> http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
> http://www.apache.org/dev/licensing-howto.html#binary
> http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-04 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654422#comment-14654422
 ] 

Sean Busbey commented on HBASE-14085:
-

I agree with that analysis. My prior statement presumed we continued our 
pattern of doing source + binary artifacts for votes.

> Correct LICENSE and NOTICE files in artifacts
> -
>
> Key: HBASE-14085
> URL: https://issues.apache.org/jira/browse/HBASE-14085
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2
>
> Attachments: HBASE-14085.1.patch, HBASE-14085.2.patch, 
> HBASE-14085.3.patch
>
>
> +Problems:
> * checked LICENSE/NOTICE on binary
> ** binary artifact LICENSE file has not been updated to include the 
> additional license terms for contained third party dependencies
> ** binary artifact NOTICE file does not include a copyright line
> ** binary artifact NOTICE file does not appear to propagate appropriate info 
> from the NOTICE files from bundled dependencies
> * checked NOTICE on source
> ** source artifact NOTICE file does not include a copyright line
> ** source NOTICE file includes notices for third party dependencies not 
> included in the artifact
> * checked NOTICE files shipped in maven jars
> ** copyright line only says 2015 when it's very likely the contents are under 
> copyright prior to this year
> * nit: NOTICE file on jars in maven say "HBase - ${module}" rather than 
> "Apache HBase - ${module}" as required 
> refs:
> http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
> http://www.apache.org/dev/licensing-howto.html#binary
> http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654424#comment-14654424
 ] 

Andrew Purtell commented on HBASE-14085:


bq. I agree with that analysis. My prior statement presumed we continued our 
pattern of doing source + binary artifacts for votes.

Which was correct, since I gave no indication otherwise. My bad. 

> Correct LICENSE and NOTICE files in artifacts
> -
>
> Key: HBASE-14085
> URL: https://issues.apache.org/jira/browse/HBASE-14085
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2
>
> Attachments: HBASE-14085.1.patch, HBASE-14085.2.patch, 
> HBASE-14085.3.patch
>
>
> +Problems:
> * checked LICENSE/NOTICE on binary
> ** binary artifact LICENSE file has not been updated to include the 
> additional license terms for contained third party dependencies
> ** binary artifact NOTICE file does not include a copyright line
> ** binary artifact NOTICE file does not appear to propagate appropriate info 
> from the NOTICE files from bundled dependencies
> * checked NOTICE on source
> ** source artifact NOTICE file does not include a copyright line
> ** source NOTICE file includes notices for third party dependencies not 
> included in the artifact
> * checked NOTICE files shipped in maven jars
> ** copyright line only says 2015 when it's very likely the contents are under 
> copyright prior to this year
> * nit: NOTICE file on jars in maven say "HBase - ${module}" rather than 
> "Apache HBase - ${module}" as required 
> refs:
> http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
> http://www.apache.org/dev/licensing-howto.html#binary
> http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13825) Use ProtobufUtil#mergeFrom and ProtobufUtil#mergeDelimitedFrom in place of builder methods of same name

2015-08-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13825:
---
Summary: Use ProtobufUtil#mergeFrom and ProtobufUtil#mergeDelimitedFrom in 
place of builder methods of same name  (was: Get operations on large objects 
fail with protocol errors)

> Use ProtobufUtil#mergeFrom and ProtobufUtil#mergeDelimitedFrom in place of 
> builder methods of same name
> ---
>
> Key: HBASE-13825
> URL: https://issues.apache.org/jira/browse/HBASE-13825
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Dev Lakhani
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-13825-0.98.patch, HBASE-13825-0.98.patch, 
> HBASE-13825-branch-1.patch, HBASE-13825-branch-1.patch, HBASE-13825.patch
>
>
> When performing a get operation on a column family with more than 64MB of 
> data, the operation fails with:
> Caused by: Portable(java.io.IOException): Call to host:port failed on local 
> exception: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
> at 
> org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
> This may be related to https://issues.apache.org/jira/browse/HBASE-11747 but 
> that issue is related to cluster status. 
> Scan and put operations on the same data work fine
> Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-13825) Use ProtobufUtil#mergeFrom and ProtobufUtil#mergeDelimitedFrom in place of builder methods of same name

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654430#comment-14654430
 ] 

Andrew Purtell edited comment on HBASE-13825 at 8/4/15 10:03 PM:
-

Thanks [~esteban]. Or, if you like, I can add what you've identified as missing 
to the patch for master here. If so, what would that be?


was (Author: apurtell):
Thanks [~esteban]. Or, if you like, I can add what you've identified as missing 
to the patch for master here.

> Use ProtobufUtil#mergeFrom and ProtobufUtil#mergeDelimitedFrom in place of 
> builder methods of same name
> ---
>
> Key: HBASE-13825
> URL: https://issues.apache.org/jira/browse/HBASE-13825
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Dev Lakhani
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-13825-0.98.patch, HBASE-13825-0.98.patch, 
> HBASE-13825-branch-1.patch, HBASE-13825-branch-1.patch, HBASE-13825.patch
>
>
> When performing a get operation on a column family with more than 64MB of 
> data, the operation fails with:
> Caused by: Portable(java.io.IOException): Call to host:port failed on local 
> exception: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
> at 
> org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
> This may be related to https://issues.apache.org/jira/browse/HBASE-11747 but 
> that issue is related to cluster status. 
> Scan and put operations on the same data work fine
> Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13825) Use ProtobufUtil#mergeFrom and ProtobufUtil#mergeDelimitedFrom in place of builder methods of same name

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654430#comment-14654430
 ] 

Andrew Purtell commented on HBASE-13825:


Thanks [~esteban]. Or, if you like, I can add what you've identified as missing 
to the patch for master here.

> Use ProtobufUtil#mergeFrom and ProtobufUtil#mergeDelimitedFrom in place of 
> builder methods of same name
> ---
>
> Key: HBASE-13825
> URL: https://issues.apache.org/jira/browse/HBASE-13825
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Dev Lakhani
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-13825-0.98.patch, HBASE-13825-0.98.patch, 
> HBASE-13825-branch-1.patch, HBASE-13825-branch-1.patch, HBASE-13825.patch
>
>
> When performing a get operation on a column family with more than 64MB of 
> data, the operation fails with:
> Caused by: Portable(java.io.IOException): Call to host:port failed on local 
> exception: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
> at 
> org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
> This may be related to https://issues.apache.org/jira/browse/HBASE-11747 but 
> that issue is related to cluster status. 
> Scan and put operations on the same data work fine
> Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13825) Use ProtobufUtil#mergeFrom and ProtobufUtil#mergeDelimitedFrom in place of builder methods of same name

2015-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654455#comment-14654455
 ] 

Hadoop QA commented on HBASE-13825:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12748724/HBASE-13825-branch-1.patch
  against branch-1 branch at commit 931e77d4507e1650c452cefadda450e0bf3f0528.
  ATTACHMENT ID: 12748724

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3826 checkstyle errors (more than the master's current 3825 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14973//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14973//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14973//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14973//console

This message is automatically generated.

> Use ProtobufUtil#mergeFrom and ProtobufUtil#mergeDelimitedFrom in place of 
> builder methods of same name
> ---
>
> Key: HBASE-13825
> URL: https://issues.apache.org/jira/browse/HBASE-13825
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Dev Lakhani
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-13825-0.98.patch, HBASE-13825-0.98.patch, 
> HBASE-13825-branch-1.patch, HBASE-13825-branch-1.patch, HBASE-13825.patch
>
>
> When performing a get operation on a column family with more than 64MB of 
> data, the operation fails with:
> Caused by: Portable(java.io.IOException): Call to host:port failed on local 
> exception: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
> at 
> org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
> This may be related to https://issues.apache.org/jira/browse/HBASE-11747 but 
> that issue is related to cluster status. 
> Scan and put operations on the same data work fine
> Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-14182) My regionserver change ip. But hmaster still connect to old ip after the rs restart

2015-08-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-14182.

Resolution: Invalid

Please write in to u...@hbase.apache.org for help troubleshooting issues. This 
is the project dev tracker. Thanks!


> My regionserver change ip. But hmaster still connect to old ip after the rs 
> restart
> ---
>
> Key: HBASE-14182
> URL: https://issues.apache.org/jira/browse/HBASE-14182
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.98.6
>Reporter: Heng Chen
>
> I use docker to deploy my hbase cluster, and the RS ip changed. When I restart 
> this RS, the hmaster webUI shows it connected to hmaster, but the region count 
> is still zero after a long time. I checked the hmaster log and found that the 
> master still uses the old ip to connect to this rs.
> The hmaster's log is below:
> PS: 10.11.21.140 is the old ip of rs dx-ape-regionserver1-online
> {code}
> 2015-08-04 17:24:00,081 INFO  [AM.ZK.Worker-pool2-t14141] 
> master.AssignmentManager: Assigning 
> solar_image,\x01Y\x8E\xA3y,1434968237206.4a1bdeec85b9f55b962596f9fb2cd07f. to 
> dx-ape-regionserver1-online,60020,1438679950072
> 2015-08-04 17:24:06,800 WARN  [AM.ZK.Worker-pool2-t14133] 
> master.AssignmentManager: Failed assignment of 
> solar_image,\x00\x94\x09\x8D\x95,1430991781025.b0f5b755f443d41cf306026a60675020.
>  to dx-ape-regionserver1-online,60020,1438679950072, trying to assign 
> elsewhere instead; try=3 of 10
> java.net.ConnectException: Connection timed out
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:578)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:868)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1543)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1442)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.openRegion(AdminProtos.java:20964)
> at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:671)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2097)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1577)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1550)
> at 
> org.apache.hadoop.hbase.master.handler.ClosedRegionHandler.process(ClosedRegionHandler.java:104)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.handleRegion(AssignmentManager.java:999)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager$6.run(AssignmentManager.java:1447)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager$3.run(AssignmentManager.java:1260)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 2015-08-04 17:24:06,801 WARN  [AM.ZK.Worker-pool2-t14140] 
> master.AssignmentManager: Failed assignment of 
> solar_image,\x00(.\xE7\xB1L,1430024620929.534025fcf4cae5516513b9c9a4cf73dc. 
> to dx-ape-regionserver1-online,60020,1438679950072, trying to assign 
> elsewhere instead; try=2 of 10
> java.net.ConnectException: Call to 
> dx-ape-regionserver1-online/10.11.21.140:60020 failed on connection 
> exception: java.net.ConnectException: Connection timed out
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1483)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1461)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$Adm

[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-04 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654482#comment-14654482
 ] 

Sean Busbey commented on HBASE-14085:
-

v3 is in master (reviewed on reviewboard).

If anyone wants to handle the back port to a particular branch, just drop a 
note here. Later tonight I'll start pulling back.

Steps:

* cherry pick the change back
* verify no other third party works are in the source (I did this by grepping 
for "copyright", "mit", "bsd", "guava", and "copied from")
* verify hbase-server, hbase-common, and hbase-thrift all still incorporate the 
third party works they did in master (the logo files, jquery, and bootstrap)
* iff there are shaded jars, check dependency:list against the one in master 
and check any differences for things that LICENSE.vm or NOTICE.vm in 
hbase-resource-bundle does not cover
* in the assembly artifact, check dependency:list against the one in master, 
similar to the shaded jars
* build release bin and source artifacts, spot check the LICENSE and NOTICE 
files in a few jars (common, server, server-test) and in the bin and source 
artifacts.

> Correct LICENSE and NOTICE files in artifacts
> -
>
> Key: HBASE-14085
> URL: https://issues.apache.org/jira/browse/HBASE-14085
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2
>
> Attachments: HBASE-14085.1.patch, HBASE-14085.2.patch, 
> HBASE-14085.3.patch
>
>
> +Problems:
> * checked LICENSE/NOTICE on binary
> ** binary artifact LICENSE file has not been updated to include the 
> additional license terms for contained third party dependencies
> ** binary artifact NOTICE file does not include a copyright line
> ** binary artifact NOTICE file does not appear to propagate appropriate info 
> from the NOTICE files from bundled dependencies
> * checked NOTICE on source
> ** source artifact NOTICE file does not include a copyright line
> ** source NOTICE file includes notices for third party dependencies not 
> included in the artifact
> * checked NOTICE files shipped in maven jars
> ** copyright line only says 2015 when it's very likely the contents are under 
> copyright prior to this year
> * nit: NOTICE file on jars in maven say "HBase - ${module}" rather than 
> "Apache HBase - ${module}" as required 
> refs:
> http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
> http://www.apache.org/dev/licensing-howto.html#binary
> http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654510#comment-14654510
 ] 

Hadoop QA commented on HBASE-14085:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12748736/HBASE-14085.3.patch
  against master branch at commit 931e77d4507e1650c452cefadda450e0bf3f0528.
  ATTACHMENT ID: 12748736

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 30 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+
${license.debug.print.included}
+
${project.groupId}:hbase-resource-bundle:${project.version}
+
${project.groupId}:hbase-resource-bundle:${project.version}
+   Build an aggregation of our templated NOTICE file and the NOTICE 
files in our dependencies.
+http://maven.apache.org/POM/4.0.0"; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"; 
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd";>
+1.1. “Contributor” means each individual or entity that creates or 
contributes to the creation of
+1.2. “Contributor Version” means the combination of the Original Software, 
prior Modifications used
+1.5. “Initial Developer” means the individual or entity that first makes 
Original Software available
+1.6. “Larger Work” means a work which combines Covered Software or 
portions thereof with code not
+1.8. “Licensable” means having the right to grant, to the maximum extent 
possible, whether at the

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestMultiParallel
  org.apache.hadoop.hbase.client.TestScannersFromClientSide
  org.apache.hadoop.hbase.client.TestFromClientSideNoCodec
  org.apache.hadoop.hbase.client.TestScannerTimeout
  
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas
  org.apache.hadoop.hbase.client.TestTableSnapshotScanner
  org.apache.hadoop.hbase.client.TestMetaWithReplicas
  org.apache.hadoop.hbase.namespace.TestNamespaceAuditor
  org.apache.hadoop.hbase.client.TestHCM
  
org.apache.hadoop.hbase.client.TestSnapshotFromClientWithRegionReplicas
  org.apache.hadoop.hbase.TestIOFencing
  
org.apache.hadoop.hbase.client.replication.TestReplicationAdminWithClusters
  org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClient
  org.apache.hadoop.hbase.client.TestClientTimeouts
  org.apache.hadoop.hbase.client.TestMobSnapshotFromClient
  org.apache.hadoop.hbase.client.TestCloneSnapshotFromClient

 {color:red}-1 core zombie tests{color}.  There are 4 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14974//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14974//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14974//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14974//console

This message is automatically generated.

> Correct LICENSE and NOTICE files in artifacts
> -
>
> Key: HBASE-14085
> URL: https://issues.apache.org/jira/browse/HBASE-14085
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>Reporter: Sea

[jira] [Commented] (HBASE-13889) hbase-shaded-client artifact is missing dependency (therefore, does not work)

2015-08-04 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654515#comment-14654515
 ] 

Nick Dimiduk commented on HBASE-13889:
--

[~busbey] I haven't held up 1.1.x releases for a fix here, though it's 
disappointing we didn't get it tested properly the first time.

[~dminkovsky] have you had any luck with this one?

> hbase-shaded-client artifact is missing dependency (therefore, does not work)
> -
>
> Key: HBASE-13889
> URL: https://issues.apache.org/jira/browse/HBASE-13889
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0, 1.1.0.1
> Environment: N/A?
>Reporter: Dmitry Minkovsky
>Priority: Blocker
> Fix For: 2.0.0, 1.1.2, 1.3.0, 1.2.1
>
> Attachments: 13889.wip.patch, Screen Shot 2015-06-11 at 10.59.55 
> AM.png
>
>
> The {{hbase-shaded-client}} artifact was introduced in 
> [HBASE-13517|https://issues.apache.org/jira/browse/HBASE-13517]. Thank you 
> very much for this, as I am new to Java building and was having a very 
> slow-moving time resolving conflicts. However, the shaded client artifact 
> seems to be missing {{javax.xml.transform.TransformerException}}.  I examined 
> the JAR, which does not have this package/class.
> Steps to reproduce:
> Java: 
> {code}
> package com.mycompany.app;
> 
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
> 
> public class App {
>   public static void main( String[] args ) throws java.io.IOException {
>     Configuration config = HBaseConfiguration.create();
>     Connection connection = ConnectionFactory.createConnection(config);
>   }
> }
> {code}
> POM:
> {code}
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
> http://maven.apache.org/xsd/maven-4.0.0.xsd">
> 
>   <modelVersion>4.0.0</modelVersion>
> 
>   <groupId>com.mycompany.app</groupId>
>   <artifactId>my-app</artifactId>
>   <version>1.0-SNAPSHOT</version>

[jira] [Updated] (HBASE-13143) TestCacheOnWrite is flaky and needs a diet

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13143:
-
Fix Version/s: (was: 1.1.2)
   1.1.3

> TestCacheOnWrite is flaky and needs a diet
> --
>
> Key: HBASE-13143
> URL: https://issues.apache.org/jira/browse/HBASE-13143
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.11
>Reporter: Andrew Purtell
>Assignee: Esteban Gutierrez
>Priority: Critical
> Fix For: 2.0.0, 0.98.14, 1.3.0, 1.2.1, 1.0.3, 1.1.3
>
>
> TestCacheOnWrite passes locally but has been flaking in 0.98 builds on 
> Jenkins, most recently https://builds.apache.org/job/HBase-0.98/878/
> The test takes a long time to execute (338.492 sec) and is resource intensive 
> (216 tests). Neither of these characteristics endear it to Jenkins.
> When I ran this unit test on a macbook after a minute the fan was running so 
> fast I thought it would take flight. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13221) HDFS Transparent Encryption breaks WAL writing

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13221:
-
Fix Version/s: (was: 1.1.2)
   1.1.3

> HDFS Transparent Encryption breaks WAL writing
> --
>
> Key: HBASE-13221
> URL: https://issues.apache.org/jira/browse/HBASE-13221
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 0.98.0, 1.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 0.98.14, 1.0.3, 1.1.3
>
>
> We need to detect when HDFS Transparent Encryption (Hadoop 2.6.0+) is enabled 
> and fall back to more synchronization in the WAL to prevent catastrophic 
> failure under load.
> See HADOOP-11708 for more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13271) Table#puts(List) operation is indeterminate; needs fixing

2015-08-04 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654523#comment-14654523
 ] 

Nick Dimiduk commented on HBASE-13271:
--

Any movement here? Bumping out of 1.1.2 but bring it on back if you can -- I'm 
waiting on a resolution to HBASE-14085.

> Table#puts(List) operation is indeterminate; needs fixing
> --
>
> Key: HBASE-13271
> URL: https://issues.apache.org/jira/browse/HBASE-13271
> Project: HBase
>  Issue Type: Improvement
>  Components: API
>Affects Versions: 1.0.0
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.0, 1.1.2, 1.3.0, 1.2.1, 1.0.3
>
>
> Another API issue found by [~larsgeorge]:
> "Table.put(List {code}
> [Mar-17 9:21 AM] Lars George: Table.put(List) is weird since you cannot 
> flush partial lists
> [Mar-17 9:21 AM] Lars George: Say out of 5 the third is broken, then the 
> put() call returns with a local exception (say empty Put) and then you have 2 
> that are in the buffer
> [Mar-17 9:21 AM] Lars George: but how do you force commit them?
> [Mar-17 9:22 AM] Lars George: In the past you would call flushCache(), but 
> that is "gone" now
> [Mar-17 9:22 AM] Lars George: and flush() is not available on a Table
> [Mar-17 9:22 AM] Lars George: And you cannot access the underlying 
> BufferedMutator either
> [Mar-17 9:23 AM] Lars George: You can *only* add more Puts if you can, or 
> call close()
> [Mar-17 9:23 AM] Lars George: that is just weird to explain
> {code}
> So, Table needs to get flush back or we deprecate this method or it flushes 
> immediately and does not return until complete in the implementation.
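
As a point of comparison, a minimal sketch (with illustrative table and column names, not a proposed API change) of the explicit-flush path that exists today: a caller who needs control over when buffered writes go out can use BufferedMutator directly instead of Table#put(List), since BufferedMutator exposes flush():

{code}
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedPuts {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         BufferedMutator mutator =
             connection.getBufferedMutator(TableName.valueOf("t1"))) {
      byte[] cf = Bytes.toBytes("cf");
      byte[] q = Bytes.toBytes("q");
      List<Put> puts = Arrays.asList(
          new Put(Bytes.toBytes("r1")).addColumn(cf, q, Bytes.toBytes("v1")),
          new Put(Bytes.toBytes("r2")).addColumn(cf, q, Bytes.toBytes("v2")));
      mutator.mutate(puts);
      // Unlike Table#put(List), the caller decides when buffered mutations are sent.
      mutator.flush();
    }
  }
}
{code}

This does not settle the API question in the description; it only shows the workaround available without access to the Table's internal buffer.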



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13452) HRegion warning about memstore size miscalculation is not actionable

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13452:
-
Fix Version/s: (was: 1.1.2)
   1.1.3

Bumping from 1.1.2.

> HRegion warning about memstore size miscalculation is not actionable
> 
>
> Key: HBASE-13452
> URL: https://issues.apache.org/jira/browse/HBASE-13452
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Dev Lakhani
>Assignee: Mikhail Antonov
>Priority: Critical
> Fix For: 2.0.0, 1.2.1, 1.0.3, 1.1.3
>
>
> During normal operation the HRegion class reports a message related to 
> memstore flushing in HRegion.class :
>   if (!canFlush) {
> addAndGetGlobalMemstoreSize(-memstoreSize.get());
>   } else if (memstoreSize.get() != 0) {
> LOG.error("Memstore size is " + memstoreSize.get());
>   }
> The log file is filled with lots of 
> Memstore size is 558744
> Memstore size is 4390632
> Memstore size is 558744 
> ...
> These message are uninformative, clog up the logs and offers no root cause 
> nor solution. Maybe the message needs to be more informative, changed to WARN 
> or some further information provided.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13271) Table#puts(List) operation is indeterminate; needs fixing

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13271:
-
Fix Version/s: (was: 1.1.2)
   1.1.3

> Table#puts(List) operation is indeterminate; needs fixing
> --
>
> Key: HBASE-13271
> URL: https://issues.apache.org/jira/browse/HBASE-13271
> Project: HBase
>  Issue Type: Improvement
>  Components: API
>Affects Versions: 1.0.0
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.0.3, 1.1.3
>
>
> Another API issue found by [~larsgeorge]:
> "Table.put(List {code}
> [Mar-17 9:21 AM] Lars George: Table.put(List) is weird since you cannot 
> flush partial lists
> [Mar-17 9:21 AM] Lars George: Say out of 5 the third is broken, then the 
> put() call returns with a local exception (say empty Put) and then you have 2 
> that are in the buffer
> [Mar-17 9:21 AM] Lars George: but how do you force commit them?
> [Mar-17 9:22 AM] Lars George: In the past you would call flushCache(), but 
> that is "gone" now
> [Mar-17 9:22 AM] Lars George: and flush() is not available on a Table
> [Mar-17 9:22 AM] Lars George: And you cannot access the underlying 
> BufferedMutator either
> [Mar-17 9:23 AM] Lars George: You can *only* add more Puts if you can, or 
> call close()
> [Mar-17 9:23 AM] Lars George: that is just weird to explain
> {code}
> So, Table needs to get flush back or we deprecate this method or it flushes 
> immediately and does not return until complete in the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654526#comment-14654526
 ] 

Andrew Purtell commented on HBASE-14085:


I pulled down the master patch. Let me make sure it tests out ok locally. If so 
I'll start porting it back, starting with branch-1. 

> Correct LICENSE and NOTICE files in artifacts
> -
>
> Key: HBASE-14085
> URL: https://issues.apache.org/jira/browse/HBASE-14085
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2
>
> Attachments: HBASE-14085.1.patch, HBASE-14085.2.patch, 
> HBASE-14085.3.patch
>
>
> +Problems:
> * checked LICENSE/NOTICE on binary
> ** binary artifact LICENSE file has not been updated to include the 
> additional license terms for contained third party dependencies
> ** binary artifact NOTICE file does not include a copyright line
> ** binary artifact NOTICE file does not appear to propagate appropriate info 
> from the NOTICE files from bundled dependencies
> * checked NOTICE on source
> ** source artifact NOTICE file does not include a copyright line
> ** source NOTICE file includes notices for third party dependencies not 
> included in the artifact
> * checked NOTICE files shipped in maven jars
> ** copyright line only says 2015 when it's very likely the contents are under 
> copyright prior to this year
> * nit: NOTICE file on jars in maven say "HBase - ${module}" rather than 
> "Apache HBase - ${module}" as required 
> refs:
> http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
> http://www.apache.org/dev/licensing-howto.html#binary
> http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13996) Add write sniffing in canary

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13996:
-
Fix Version/s: (was: 1.1.2)

This looks like a feature enhancement, not a bug fix. Hence dropping from 1.1.x 
line.

> Add write sniffing in canary
> 
>
> Key: HBASE-13996
> URL: https://issues.apache.org/jira/browse/HBASE-13996
> Project: HBase
>  Issue Type: New Feature
>  Components: canary
>Affects Versions: 0.98.13, 1.1.0.1
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
> Fix For: 2.0.0, 0.98.14
>
> Attachments: HBASE-13996-v001.diff
>
>
> Currently the canary tool only sniffs read operations, so it's hard to find 
> problems in the write path. 
> To support write sniffing, the tool creates a system table named '_canary_' 
> and makes sure that the number of regions is larger than the number of 
> regionservers and that the regions are distributed onto all regionservers.
> Periodically, the tool puts data into these regions to calculate the write 
> availability of HBase and sends alerts if needed.
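
A rough sketch of what one such write probe could look like. The '_canary_' table name comes from the description; the column family, qualifier, and latency reporting are illustrative assumptions, not the attached patch:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class WriteProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table canary = connection.getTable(TableName.valueOf("_canary_"))) {
      long start = System.currentTimeMillis();
      // One probe row per run; a real tool would target a row in every region
      // so that each regionserver hosting a '_canary_' region gets exercised.
      Put put = new Put(Bytes.toBytes("probe-" + start));
      put.addColumn(Bytes.toBytes("T"), Bytes.toBytes("ts"), Bytes.toBytes(start));
      canary.put(put);
      System.out.println("write probe latency ms: " + (System.currentTimeMillis() - start));
    }
  }
}
{code}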



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13605) RegionStates should not keep its list of dead servers

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13605:
-
Fix Version/s: (was: 1.1.2)
   1.1.3

Deferring from 1.1.2.

> RegionStates should not keep its list of dead servers
> -
>
> Key: HBASE-13605
> URL: https://issues.apache.org/jira/browse/HBASE-13605
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Critical
> Fix For: 2.0.0, 1.1.3
>
> Attachments: hbase-13605_v1.patch, hbase-13605_v3-branch-1.1.patch, 
> hbase-13605_v4-branch-1.1.patch, hbase-13605_v4-master.patch
>
>
> As mentioned in 
> https://issues.apache.org/jira/browse/HBASE-9514?focusedCommentId=13769761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13769761
>  and HBASE-12844 we should have only 1 source of cluster membership. 
> The list of dead servers and RegionStates doing its own liveliness check 
> (ServerManager.isServerReachable()) has caused an assignment problem again in 
> a test cluster where RegionStates "thinks" that the server is dead and 
> SSH will handle the region assignment. However the RS is not dead at all, 
> living happily, and never gets zk expiry or YouAreDeadException or anything. 
> This leaves the list of regions unassigned in OFFLINE state. 
> master assigning the region:
> {code}
> 15-04-20 09:02:25,780 DEBUG [AM.ZK.Worker-pool3-t330] master.RegionStates: 
> Onlined 77dddcd50c22e56bfff133c0e1f9165b on 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268 {ENCODED => 
> 77dddcd50c
> {code}
> Master then disabled the table, and unassigned the region:
> {code}
> 2015-04-20 09:02:27,158 WARN  [ProcedureExecutorThread-1] 
> zookeeper.ZKTableStateManager: Moving table loadtest_d1 state from DISABLING 
> to DISABLING
>  Starting unassign of 
> loadtest_d1,,1429520544378.77dddcd50c22e56bfff133c0e1f9165b. (offlining), 
> current state: {77dddcd50c22e56bfff133c0e1f9165b state=OPEN, 
> ts=1429520545780,   
> server=os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268}
> bleProcedure$BulkDisabler-0] master.AssignmentManager: Sent CLOSE to 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268 for region 
> loadtest_d1,,1429520544378.77dddcd50c22e56bfff133c0e1f9165b.
> 2015-04-20 09:02:27,414 INFO  [AM.ZK.Worker-pool3-t316] master.RegionStates: 
> Offlined 77dddcd50c22e56bfff133c0e1f9165b from 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268
> {code}
> On table re-enable, AM does not assign the region: 
> {code}
> 2015-04-20 09:02:30,415 INFO  [ProcedureExecutorThread-3] 
> balancer.BaseLoadBalancer: Reassigned 25 regions. 25 retained the pre-restart 
> assignment.·
> 2015-04-20 09:02:30,415 INFO  [ProcedureExecutorThread-3] 
> procedure.EnableTableProcedure: Bulk assigning 25 region(s) across 5 
> server(s), retainAssignment=true
> l,16000,1429515659726-GeneralBulkAssigner-4] master.RegionStates: Couldn't 
> reach online server 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268
> l,16000,1429515659726-GeneralBulkAssigner-4] master.AssignmentManager: 
> Updating the state to OFFLINE to allow to be reassigned by SSH
> nmentManager: Skip assigning 
> loadtest_d1,,1429520544378.77dddcd50c22e56bfff133c0e1f9165b., it is on a dead 
> but not processed yet server: 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13267) Deprecate or remove isFileDeletable from SnapshotHFileCleaner

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13267:
-
Fix Version/s: (was: 1.1.2)
   1.1.3

Deferring from 1.1.2.

> Deprecate or remove isFileDeletable from SnapshotHFileCleaner
> -
>
> Key: HBASE-13267
> URL: https://issues.apache.org/jira/browse/HBASE-13267
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.3.0, 1.2.1, 1.0.3, 1.1.3
>
>
> The isFileDeletable method in SnapshotHFileCleaner became vestigial after 
> HBASE-12627, lets remove it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13504) Alias current AES cipher as AES-CTR

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13504:
-
Fix Version/s: (was: 1.1.2)
   1.1.3

> Alias current AES cipher as AES-CTR
> ---
>
> Key: HBASE-13504
> URL: https://issues.apache.org/jira/browse/HBASE-13504
> Project: HBase
>  Issue Type: Sub-task
>  Components: encryption, security
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.3.0, 1.2.1, 1.0.3, 1.1.3
>
>
> Alias the current cipher with the name "AES" to the name "AES-CTR".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13511) Derive data keys with HKDF

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13511:
-
Fix Version/s: (was: 1.1.2)
   1.1.3

> Derive data keys with HKDF
> --
>
> Key: HBASE-13511
> URL: https://issues.apache.org/jira/browse/HBASE-13511
> Project: HBase
>  Issue Type: Sub-task
>  Components: encryption, security
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.3.0, 1.2.1, 1.0.3, 1.1.3
>
>
> When we are locally managing master key material, when users have supplied 
> their own data key material, derive the actual data keys using HKDF 
> (https://tools.ietf.org/html/rfc5869)
> DK' = HKDF(S, DK, MK)
> where
> S = salt
> DK = user supplied data key
> MK = master key
> DK' = derived data key for the HFile
> User supplied key material may be weak or an attacker may have some partial 
> knowledge of it.
> Where we generate random data keys we can still use HKDF as a way to mix more 
> entropy into the secure random generator. 
> DK' = HKDF(R, MK)
> where
> R = random key material drawn from the system's secure random generator
> MK = master key
> (Salting isn't useful here because salt S and R would be drawn from the same 
> pool, so will not have statistical independence.)
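
For concreteness, a minimal HKDF (RFC 5869) sketch with HMAC-SHA256 using only the JDK. The mapping of S, DK, and MK onto the salt, input keying material, and info parameters below is one plausible reading of the formulas above, not the design committed for this issue:

{code}
import java.security.GeneralSecurityException;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public final class Hkdf {
  private static final String HMAC = "HmacSHA256";

  /** RFC 5869 extract step: PRK = HMAC(salt, ikm). */
  static byte[] extract(byte[] salt, byte[] ikm) throws GeneralSecurityException {
    Mac mac = Mac.getInstance(HMAC);
    mac.init(new SecretKeySpec(salt, HMAC));
    return mac.doFinal(ikm);
  }

  /** RFC 5869 expand step, single block only: valid for output lengths up to 32 bytes. */
  static byte[] expand(byte[] prk, byte[] info, int length) throws GeneralSecurityException {
    Mac mac = Mac.getInstance(HMAC);
    mac.init(new SecretKeySpec(prk, HMAC));
    mac.update(info);
    mac.update((byte) 0x01);
    byte[] block = mac.doFinal();
    if (length > block.length) {
      throw new IllegalArgumentException("single-block expand supports at most 32 bytes");
    }
    byte[] okm = new byte[length];
    System.arraycopy(block, 0, okm, 0, length);
    return okm;
  }

  public static void main(String[] args) throws GeneralSecurityException {
    byte[] s = "salt-S".getBytes();          // S  = salt
    byte[] dk = "user-data-key".getBytes();  // DK = user supplied data key
    byte[] mk = "master-key".getBytes();     // MK = master key
    // DK' = HKDF(S, DK, MK), read here as extract(S, DK) followed by expand with MK as info.
    byte[] derived = expand(extract(s, dk), mk, 16);
    System.out.println("derived key bytes: " + derived.length);
  }
}
{code}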



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13505) Deprecate the "AES" cipher type

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13505:
-
Fix Version/s: (was: 1.1.2)
   1.1.3

> Deprecate the "AES" cipher type
> ---
>
> Key: HBASE-13505
> URL: https://issues.apache.org/jira/browse/HBASE-13505
> Project: HBase
>  Issue Type: Sub-task
>  Components: encryption, security
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.3.0, 1.2.1, 1.0.3, 1.1.3
>
>
> Deprecate the "AES" cipher type. Remove internal references to it and use the 
> "AES-CTR" name instead



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11819) Unit test for CoprocessorHConnection

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-11819:
-
Fix Version/s: (was: 1.1.2)
   1.1.3

What happened with the commit here folks? Has this been released on any branch?

> Unit test for CoprocessorHConnection 
> -
>
> Key: HBASE-11819
> URL: https://issues.apache.org/jira/browse/HBASE-11819
> Project: HBase
>  Issue Type: Test
>Reporter: Andrew Purtell
>Assignee: Talat UYARER
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0, 0.98.14, 1.3.0, 1.2.1, 1.1.3
>
> Attachments: HBASE-11819v4-master.patch, HBASE-11819v5-0.98 
> (1).patch, HBASE-11819v5-0.98.patch, HBASE-11819v5-master (1).patch, 
> HBASE-11819v5-master.patch, HBASE-11819v5-master.patch, 
> HBASE-11819v5-v0.98.patch, HBASE-11819v5-v1.0.patch
>
>
> Add a unit test to hbase-server that exercises CoprocessorHConnection . 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13603) Write test asserting desired priority of RS->Master RPCs

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13603:
-
Fix Version/s: (was: 1.1.2)
   1.1.3

> Write test asserting desired priority of RS->Master RPCs
> 
>
> Key: HBASE-13603
> URL: https://issues.apache.org/jira/browse/HBASE-13603
> Project: HBase
>  Issue Type: Test
>  Components: rpc, test
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3
>
>
> From HBASE-13351:
> {quote}
> Any way we can write a FT test to assert that the RS->Master APIs are treated 
> with higher priority. I see your UT for asserting the annotation.
> {quote}
> Write a test that verifies expected RPCs are run on the correct pools in as 
> real of an environment possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13627) Terminating RS results in redundant CLOSE RPC

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13627:
-
Labels: beginner beginners  (was: )

> Terminating RS results in redundant CLOSE RPC
> -
>
> Key: HBASE-13627
> URL: https://issues.apache.org/jira/browse/HBASE-13627
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 1.1.0
>Reporter: Nick Dimiduk
>Priority: Minor
>  Labels: beginner, beginners
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3
>
>
> Noticed while testing the 1.1.0RC0 bits. It seems we're issuing a redundant 
> close RPC during shutdown. This results in a logging warning for each region.
> {noformat}
> 2015-05-06 00:07:19,214 INFO  [RS:0;ndimiduk-apache-1-1-dist-6:56371] 
> regionserver.HRegionServer: Received CLOSE for the region: 
> 19cbe4fe2fe5335e7aace05e10e36ede, which we are already trying to CLOSE, but 
> not completed yet
> 2015-05-06 00:07:19,214 WARN  [RS:0;ndimiduk-apache-1-1-dist-6:56371] 
> regionserver.HRegionServer: Failed to close 
> cluster_test,,1430869443384.19cbe4fe2fe5335e7aace05e10e36ede. - 
> ignoring and continuing
> org.apache.hadoop.hbase.regionserver.RegionAlreadyInTransitionException: The 
> region 19cbe4fe2fe5335e7aace05e10e36ede was already closing. New CLOSE 
> request is ignored.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegion(HRegionServer.java:2769)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegionIgnoreErrors(HRegionServer.java:2695)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.closeUserRegions(HRegionServer.java:2327)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:937)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> 1. launch a standalone cluster from tgz (./bin/start-hbase.sh)
> 2. load some data (ie, run bin/hbase ltt)
> 3. terminate cluster (./bin/stop-hbase.sh)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13627) Terminating RS results in redundant CLOSE RPC

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13627:
-
Fix Version/s: (was: 1.1.2)
   1.1.3

> Terminating RS results in redundant CLOSE RPC
> -
>
> Key: HBASE-13627
> URL: https://issues.apache.org/jira/browse/HBASE-13627
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 1.1.0
>Reporter: Nick Dimiduk
>Priority: Minor
>  Labels: beginner, beginners
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3
>
>
> Noticed while testing the 1.1.0RC0 bits. It seems we're issuing a redundant 
> close RPC during shutdown. This results in a logging warning for each region.
> {noformat}
> 2015-05-06 00:07:19,214 INFO  [RS:0;ndimiduk-apache-1-1-dist-6:56371] 
> regionserver.HRegionServer: Received CLOSE for the region: 
> 19cbe4fe2fe5335e7aace05e10e36ede, which we are already trying to CLOSE, but 
> not completed yet
> 2015-05-06 00:07:19,214 WARN  [RS:0;ndimiduk-apache-1-1-dist-6:56371] 
> regionserver.HRegionServer: Failed to close 
> cluster_test,,1430869443384.19cbe4fe2fe5335e7aace05e10e36ede. - 
> ignoring and continuing
> org.apache.hadoop.hbase.regionserver.RegionAlreadyInTransitionException: The 
> region 19cbe4fe2fe5335e7aace05e10e36ede was already closing. New CLOSE 
> request is ignored.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegion(HRegionServer.java:2769)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegionIgnoreErrors(HRegionServer.java:2695)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.closeUserRegions(HRegionServer.java:2327)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:937)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> 1. launch a standalone cluster from tgz (./bin/start-hbase.sh)
> 2. load some data (ie, run bin/hbase ltt)
> 3. terminate cluster (./bin/stop-hbase.sh)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12816) GC logs are lost upon Region Server restart if GCLogFileRotation is enabled

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-12816:
-
Fix Version/s: (was: 1.1.2)
   1.1.3

> GC logs are lost upon Region Server restart if GCLogFileRotation is enabled
> ---
>
> Key: HBASE-12816
> URL: https://issues.apache.org/jira/browse/HBASE-12816
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.2.1, 1.1.3
>
> Attachments: HBASE-12816.patch
>
>
> When -XX:+UseGCLogFileRotation is used, gc log files end with .gc.0 instead of 
> .gc. hbase_rotate_log() in hbase-daemon.sh does not handle this correctly, 
> and hence when a RS is restarted the old gc logs are lost (overwritten).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13706:
-
Fix Version/s: (was: 1.1.2)
   1.1.3

Any update here?

> CoprocessorClassLoader should not exempt Hive classes
> -
>
> Key: HBASE-13706
> URL: https://issues.apache.org/jira/browse/HBASE-13706
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.12
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.1.3
>
> Attachments: HBASE-13706.patch
>
>
> CoprocessorClassLoader is used to load classes from the coprocessor jar.
> Certain classes are exempt from being loaded by this ClassLoader, which means 
> they will be ignored in the coprocessor jar, but loaded from parent classpath 
> instead.
> One problem is that we categorically exempt "org.apache.hadoop".
> But it happens that Hive packages start with "org.apache.hadoop".
> There is no reason to exclude Hive classes from the CoprocessorClassLoader.
> HBase does not even include Hive jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14021) Quota table has a wrong description on the UI

2015-08-04 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-14021:
-
Attachment: HBASE-14021.patch

LGTM too. Patch still applies on master. Reattaching.

> Quota table has a wrong description on the UI
> -
>
> Key: HBASE-14021
> URL: https://issues.apache.org/jira/browse/HBASE-14021
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 1.1.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.0, 1.1.2, 1.3.0, 1.2.1
>
> Attachments: HBASE-14021.patch, HBASE-14021.patch, error.png, fix.png
>
>
> !error.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654548#comment-14654548
 ] 

Andrew Purtell commented on HBASE-13706:


bq. no good reason [to exempt the Hadoop classes]
bq. The logic will be that all HBase classes and all their dependencies will be 
loaded by native/parent loader.
All co-processor implementation classes and their dependencies will be loaded 
by the CoprocessorClassLoader, unless they spill over.

We could try only org.apache.hadoop.hbase.*. 

The subset of the Hadoop APIs relevant and useful for coprocessors is pretty 
big, there could be unexpected/unintended consequences. If not going through a 
facade in org.apache.hadoop.hbase.* whatever objects the coprocessor 
instantiates and interacts with will not have access to static shared state 
like UGI, the metrics subsystem registry, the FileSystem instance cache, etc. 
Working with HDFS, metrics, and security APIs would be "interesting". We could 
try it only in master for a while. We could claim such things out of scope for 
coprocessors, but because we haven't up to now, it's a hairy backwards 
compatibility problem.
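
To make the trade-off concrete, a tiny sketch (not the actual CoprocessorClassLoader code) of the prefix-exemption idea: classes whose names match an exempt prefix get delegated to the parent/native loader, everything else is loaded from the coprocessor jar. Narrowing the exempt prefix from "org.apache.hadoop." to "org.apache.hadoop.hbase." is the option floated above; with it, Hive's org.apache.hadoop.hive.* classes would no longer be exempt, but neither would plain Hadoop classes, which is the concern raised in this comment.

{code}
public final class ExemptionCheck {
  // Hypothetical narrowed prefix list; the real class loader keeps its own list.
  private static final String[] EXEMPT_PREFIXES = { "org.apache.hadoop.hbase." };

  /** True if the class should be delegated to the parent/native class loader. */
  static boolean loadFromParent(String className) {
    for (String prefix : EXEMPT_PREFIXES) {
      if (className.startsWith(prefix)) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    System.out.println(loadFromParent("org.apache.hadoop.hbase.client.Put")); // true
    System.out.println(loadFromParent("org.apache.hadoop.hive.ql.exec.UDF")); // false
    System.out.println(loadFromParent("org.apache.hadoop.fs.FileSystem"));    // false: the Hadoop-API concern above
  }
}
{code}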


> CoprocessorClassLoader should not exempt Hive classes
> -
>
> Key: HBASE-13706
> URL: https://issues.apache.org/jira/browse/HBASE-13706
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.12
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.1.3
>
> Attachments: HBASE-13706.patch
>
>
> CoprocessorClassLoader is used to load classes from the coprocessor jar.
> Certain classes are exempt from being loaded by this ClassLoader, which means 
> they will be ignored in the coprocessor jar, but loaded from parent classpath 
> instead.
> One problem is that we categorically exempt "org.apache.hadoop".
> But it happens that Hive packages start with "org.apache.hadoop".
> There is no reason to exclude Hive classes from the CoprocessorClassLoader.
> HBase does not even include Hive jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654548#comment-14654548
 ] 

Andrew Purtell edited comment on HBASE-13706 at 8/4/15 11:55 PM:
-

bq. no good reason [to exempt the Hadoop classes]
bq. The logic will be that all HBase classes and all their dependencies will be 
loaded by native/parent loader. All co-processor implementation classes and 
their dependencies will be loaded by the CoprocessorClassLoader, unless they 
spill over.

We could try only org.apache.hadoop.hbase.*. 

The subset of the Hadoop APIs relevant and useful for coprocessors is pretty 
big, there could be unexpected/unintended consequences. If not going through a 
facade in org.apache.hadoop.hbase.* whatever objects the coprocessor 
instantiates and interacts with will not have access to static shared state 
like UGI, the metrics subsystem registry, the FileSystem instance cache, etc. 
Working with HDFS, metrics, and security APIs would be "interesting". We could 
try it only in master for a while. We could claim such things out of scope for 
coprocessors, but because we haven't up to now, it's a hairy backwards 
compatibility problem.



was (Author: apurtell):
bq. no good reason [to exempt the Hadoop classes]
bq. The logic will be that all HBase classes and all their dependencies will be 
loaded by native/parent loader.
All co-processor implementation classes and their dependencies will be loaded 
by the CoprocessorClassLoader, unless they spill over.

We could try only org.apache.hadoop.hbase.*. 

The subset of the Hadoop APIs relevant and useful for coprocessors is pretty 
big, there could be unexpected/unintended consequences. If not going through a 
facade in org.apache.hadoop.hbase.* whatever objects the coprocessor 
instantiates and interacts with will not have access to static shared state 
like UGI, the metrics subsystem registry, the FileSystem instance cache, etc. 
Working with HDFS, metrics, and security APIs would be "interesting". We could 
try it only in master for a while. We could claim such things out of scope for 
coprocessors, but because we haven't up to now, it's a hairy backwards 
compatibility problem.


> CoprocessorClassLoader should not exempt Hive classes
> -
>
> Key: HBASE-13706
> URL: https://issues.apache.org/jira/browse/HBASE-13706
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.12
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.1.3
>
> Attachments: HBASE-13706.patch
>
>
> CoprocessorClassLoader is used to load classes from the coprocessor jar.
> Certain classes are exempt from being loaded by this ClassLoader, which means 
> they will be ignored in the coprocessor jar, but loaded from parent classpath 
> instead.
> One problem is that we categorically exempt "org.apache.hadoop".
> But it happens that Hive packages start with "org.apache.hadoop".
> There is no reason to exclude Hive classes from the CoprocessorClassLoader.
> HBase does not even include Hive jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14184) Fix indention and type-o in JavaHBaseContext

2015-08-04 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14184:

Status: Patch Available  (was: Open)

marking as patch available so QABot will run

> Fix indention and type-o in JavaHBaseContext
> 
>
> Key: HBASE-14184
> URL: https://issues.apache.org/jira/browse/HBASE-14184
> Project: HBase
>  Issue Type: Wish
>  Components: spark
>Reporter: Ted Malaska
>Assignee: Ted Malaska
>Priority: Minor
> Attachments: HBASE-14184.3.patch
>
>
> Looks like there is a Ddd that should be Rdd.
> Also, it looks like everything is indented one space too much.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14167) hbase-spark integration tests do not respect -DskipITs

2015-08-04 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14167:

Component/s: spark

> hbase-spark integration tests do not respect -DskipITs
> --
>
> Key: HBASE-14167
> URL: https://issues.apache.org/jira/browse/HBASE-14167
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Priority: Minor
>
> When running a build with {{mvn ... -DskipITs}}, the hbase-spark module's 
> integration tests do not respect the flag and run anyway. Fix. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14181) Add Spark DataFrame DataSource to HBase-Spark Module

2015-08-04 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14181:

Component/s: spark

> Add Spark DataFrame DataSource to HBase-Spark Module
> 
>
> Key: HBASE-14181
> URL: https://issues.apache.org/jira/browse/HBASE-14181
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Reporter: Ted Malaska
>Assignee: Ted Malaska
>Priority: Minor
>
> Build a RelationProvider for HBase-Spark Module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14185) Incorrect region names logged by MemStoreFlusher.java

2015-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654566#comment-14654566
 ] 

Hadoop QA commented on HBASE-14185:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12748709/HBASE-14185.patch
  against master branch at commit 931e77d4507e1650c452cefadda450e0bf3f0528.
  ATTACHMENT ID: 12748709

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at 
org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testLogRollOnDatanodeDeath(TestLogRolling.java:393)
at 
org.apache.hadoop.hbase.regionserver.TestCorruptedRegionStoreFile.testLosingFileAfterScannerInit(TestCorruptedRegionStoreFile.java:167)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14975//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14975//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14975//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14975//console

This message is automatically generated.

> Incorrect region names logged by MemStoreFlusher.java
> -
>
> Key: HBASE-14185
> URL: https://issues.apache.org/jira/browse/HBASE-14185
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
> Attachments: HBASE-14185.patch
>
>
> In MemstoreFlusher the method 
> [flushOneForGlobalPressure|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java#L142]
>  logs incorrect region names which makes debugging issues a bit difficult. 
> Instead of logging the secondary replica region names in 
> [these|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java#L200]
>  
> [locations|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java#L205],
>  the code logs the primary replica region names.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14185) Incorrect region names logged by MemStoreFlusher.java

2015-08-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14185:
---
Fix Version/s: 1.3.0
   1.1.2
   1.2.0
   2.0.0

> Incorrect region names logged by MemStoreFlusher.java
> -
>
> Key: HBASE-14185
> URL: https://issues.apache.org/jira/browse/HBASE-14185
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-14185.patch
>
>
> In MemstoreFlusher the method 
> [flushOneForGlobalPressure|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java#L142]
>  logs incorrect region names which makes debugging issues a bit difficult. 
> Instead of logging the secondary replica region names in 
> [these|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java#L200]
>  
> [locations|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java#L205],
>  the code logs the primary replica region names.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

